This repository contains the official code for the paper *The Saddle-Point Method in Differential Privacy* (ICML 2022, [link]).
Our work introduces a novel mathematical technique for composing privacy curves of differentially private (DP) algorithms in the large-composition regime, i.e., when a privacy mechanism is applied sequentially a large number of times. Our method, dubbed the saddle-point accountant (SPA), uses exponential tilting of the privacy loss random variable to approximate the privacy curve, which we express as a contour integral in Theorem 3.1. The SPA is based on the saddle-point method (also known as the method of steepest descent), a well-known technique in the mathematical physics and statistics literature for approximating contour integrals.
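For context, the classical saddle-point (steepest-descent) approximation, stated here in generic textbook notation rather than the paper's, approximates integrals with a large parameter $n$ by expanding the exponent around its stationary point:

$$\int e^{n f(t)}\,\mathrm{d}t \;\approx\; \sqrt{\frac{2\pi}{n\,\lvert f''(t_0)\rvert}}\; e^{n f(t_0)}, \qquad \text{where } f'(t_0) = 0,$$

with the contour deformed to pass through the saddle point $t_0$ along the direction of steepest descent.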
The notebook `Main_Body_Figures.ipynb` replicates all figures in the paper.
The directory `saddlepoint_accountant/` contains all code for the saddle-point accountant (SPA) implementation. This includes:
- The method-of-steepest-descent based version of the SPA: $\delta^{(k)}_{L, \text{ SP-MSD}}(\varepsilon)$ as defined in Definition 3.6, and the inverted curve $\varepsilon^{(k)}_{L, \text{ SP-MSD}}(\delta)$.
- The central-limit-theorem based version of the SPA: $\delta_{L, \text{ SP-CLT}}(\varepsilon)$ as defined in Definition 5.1, and the inverted curve $\varepsilon_{L,\text{ SP-CLT}}(\delta)$.
- The error in the central-limit-theorem based SPA: $\text{err}_{\text{SP}}(\varepsilon; t_0)$ as defined in Theorem 5.7, with the choice of $t_0$ being the saddle point as defined in Definition 3.2.
The subdirectory `saddlepoint_accountant/other_accountants/` contains code for the GDP and ground-truth accountants.
- Of independent interest is the ground-truth accountant, which numerically computes the exact formula for the privacy curve as the contour integral derived in equation (23) of Theorem 3.1. See Appendix M for how this integral is computed, and the README under `saddlepoint_accountant/other_accountants/` for further implementation details.
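Independent of any accountant implementation, one can sanity-check a computed privacy curve via Monte Carlo, using the standard identity $\delta(\varepsilon) = \mathbb{E}[(1 - e^{\varepsilon - L})_+]$, where $L$ is the total privacy loss under $P$. Below is a minimal sketch for the $k$-fold composed, non-subsampled Gaussian mechanism (illustrative only; the function name and setup are ours, not the repository's API), exploiting the fact that the summed Gaussian privacy loss is itself Gaussian:

```python
import numpy as np

def mc_delta_gaussian(eps, sigma, comps, n_samples=200_000, seed=0):
    """Monte Carlo estimate of delta(eps) for the comps-fold composition of
    the non-subsampled Gaussian mechanism (sensitivity 1, noise multiplier
    sigma), via delta(eps) = E[(1 - exp(eps - L))_+].

    The total privacy loss is L = (2*S - comps) / (2*sigma^2), where S, the
    sum of comps i.i.d. N(1, sigma^2) outputs under P, is itself Gaussian.
    """
    rng = np.random.default_rng(seed)
    s = rng.normal(loc=comps, scale=sigma * np.sqrt(comps), size=n_samples)
    loss = (2.0 * s - comps) / (2.0 * sigma**2)
    # Clip the exponent for numerical safety; clipped terms are zeroed by
    # the outer maximum anyway.
    return float(np.maximum(0.0, 1.0 - np.exp(np.minimum(eps - loss, 50.0))).mean())
```

Estimates from this sketch can be compared directly against the ground-truth accountant's output for the same parameters.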
Currently, the following mechanisms are supported: the (subsampled) Gaussian mechanism and the Laplace mechanism. For example, to run the SPA on the subsampled Gaussian mechanism:

```python
from saddlepoint_accountant import GaussianMechanism
from saddlepoint_accountant import SaddlePointAccountant

sigma = 2             # noise multiplier
sampling_prob = 0.01  # subsampling probability
gaussian_mechanism = GaussianMechanism(sampling_probability = sampling_prob, noise_multiplier = sigma)
saddle_point_accountant = SaddlePointAccountant(gaussian_mechanism)

comps = 3000  # number of compositions
eps = 1
delta_approx, error_bound = saddle_point_accountant.compute_delta_clt(eps, comps)
```
In the code above, `delta_approx` contains our CLT-based approximation of $\delta(\varepsilon)$, and `error_bound` contains the associated error bound as in equation (45). The setup is for the subsampled Gaussian mechanism with noise multiplier `sigma = 2` and sampling probability `0.01`. Varying the value of `eps` produces our results in Figure 1 in the paper (indicated by the dashed lines therein).
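As a quick cross-check in the non-subsampled case (sampling probability 1), the $k$-fold composed Gaussian mechanism is again a Gaussian mechanism with noise multiplier $\sigma/\sqrt{k}$, and its privacy curve has a well-known closed form (the Gaussian DP expression). The sketch below is independent of the repository's code; the function names are ours:

```python
from math import erf, exp, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def exact_delta_gaussian(eps: float, sigma: float, comps: int) -> float:
    """Exact delta(eps) for the comps-fold composition of the non-subsampled
    Gaussian mechanism with sensitivity 1 and noise multiplier sigma: the
    composition is mu-GDP with mu = sqrt(comps) / sigma."""
    mu = sqrt(comps) / sigma
    return normal_cdf(mu / 2.0 - eps / mu) - exp(eps) * normal_cdf(-mu / 2.0 - eps / mu)
```

This gives an exact reference point for validating accountant output in the special case without subsampling.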
Additionally, the other versions of the SPA can be computed through the following `saddle_point_accountant` class methods:

```python
saddle_point_accountant.compute_delta_msd(eps, comps, k)
saddle_point_accountant.compute_epsilon_msd(delta, comps, k)
saddle_point_accountant.compute_epsilon_clt(delta, comps)
```
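The `compute_epsilon_*` methods invert the corresponding $\delta(\varepsilon)$ curve. Since privacy curves are non-increasing in $\varepsilon$, such an inversion can be done generically by bisection; the sketch below illustrates the idea with a helper of our own (not the repository's implementation):

```python
def invert_privacy_curve(delta_fn, target_delta, eps_lo=0.0, eps_hi=64.0, tol=1e-9):
    """Return eps with delta_fn(eps) ~= target_delta by bisection, assuming
    delta_fn is non-increasing and the target lies between delta_fn(eps_lo)
    and delta_fn(eps_hi)."""
    assert delta_fn(eps_lo) >= target_delta >= delta_fn(eps_hi)
    while eps_hi - eps_lo > tol:
        mid = 0.5 * (eps_lo + eps_hi)
        if delta_fn(mid) > target_delta:
            eps_lo = mid  # delta still too large; need more epsilon
        else:
            eps_hi = mid
    return 0.5 * (eps_lo + eps_hi)
```

For example, inverting the toy curve `lambda e: 1.0 / (1.0 + e)` at `target_delta = 0.5` returns an epsilon close to 1.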
Similarly, for the Laplace mechanism:

```python
from saddlepoint_accountant import LaplaceMechanism
from saddlepoint_accountant import SaddlePointAccountant

b = 1               # Laplace scale parameter
sensitivity = 0.01
effective_sensitivity = sensitivity / b
laplace_mechanism = LaplaceMechanism(sens = effective_sensitivity)
saddle_point_accountant = SaddlePointAccountant(laplace_mechanism)

comps = 1000  # number of compositions
eps = 1
delta_approx, error_bound = saddle_point_accountant.compute_delta_clt(eps, comps)
```
In the code above, `delta_approx` contains our CLT-based approximation of $\delta(\varepsilon)$, and `error_bound` contains the associated error bound as in equation (45). The setup this time is for the Laplace mechanism with scale parameter `b = 1` and sensitivity `0.01`. Varying the value of `eps` produces our results in Figure 6 in the paper (indicated by the dashed lines therein).
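As with the Gaussian example, a Monte Carlo estimate provides an independent sanity check. For the Laplace mechanism with scale $b$ and sensitivity $\Delta$, the per-composition privacy loss is $(|x| - |x - \Delta|)/b$ with $x$ drawn from the output distribution $P$. A sketch using the same identity $\delta(\varepsilon) = \mathbb{E}[(1 - e^{\varepsilon - L})_+]$ (our own illustration, not the repository's API):

```python
import numpy as np

def mc_delta_laplace(eps, scale, sens, comps, n_samples=20_000, seed=0):
    """Monte Carlo estimate of delta(eps) for the comps-fold composition of
    the Laplace mechanism with scale `scale` and sensitivity `sens`, via
    delta(eps) = E[(1 - exp(eps - L))_+], with the total loss L sampled
    under P = Laplace(sens, scale)."""
    rng = np.random.default_rng(seed)
    loss = np.zeros(n_samples)
    # Accumulate losses composition by composition to keep memory flat.
    for _ in range(comps):
        x = rng.laplace(loc=sens, scale=scale, size=n_samples)
        loss += (np.abs(x) - np.abs(x - sens)) / scale
    # Clip the exponent for numerical safety; clipped terms are zeroed by
    # the outer maximum anyway.
    return float(np.maximum(0.0, 1.0 - np.exp(np.minimum(eps - loss, 50.0))).mean())
```

With the parameters above (`scale = 1`, `sens = 0.01`, `comps = 1000`), the estimate at `eps = 1` is small, consistent with the weak per-composition loss.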