Lower Bounds for Reconstruction Attacks on Differentially Private Learning

Speaker: Satya Lokam


Abstract

Differential Privacy (DP) has become the de facto standard for privacy in machine learning. However, the mathematical guarantees of DP are often difficult to interpret in terms of the protection they offer against specific attacks on learning algorithms. We continue the line of work that seeks to understand the semantics of DP by quantifying the level of protection a DP learner offers against specific classes of attacks. In this talk, we consider training data reconstruction attacks by informed adversaries. For such attacks, we derive concrete lower bounds on the adversary’s reconstruction error in terms of the DP parameters, the data dimension, and the adversary’s query budget. Our results improve and generalize previous asymptotic bounds due to Guo et al. (Bounding Training Data Reconstruction in Private (Deep) Learning, ICML 2022). [Joint work with Prateeti Mukherjee]
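
For reference, the "DP parameters" above are the standard (ε, δ) of differential privacy: a randomized mechanism M satisfies (ε, δ)-DP if, for every pair of datasets D and D' differing in a single record and every measurable set of outputs S,

    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D') ∈ S] + δ.

The lower bounds discussed in the talk are stated in terms of these parameters, together with the data dimension and the adversary’s query budget; this definition is included only as background and is not part of the new results.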

Bio

Satya Lokam is a Principal Researcher at the Microsoft Research lab in Bangalore, India. His research interests include Cryptography, Privacy, Complexity Theory, and Theoretical Computer Science in general. Before moving to Microsoft Research, Satya was a faculty member at the University of Michigan, Ann Arbor. He received his Ph.D. from the University of Chicago and held postdoctoral positions at the University of Toronto and at the Institute for Advanced Study (IAS) in Princeton, NJ.