Privacy, and the related issue of security, in cyber-physical systems (CPS) is of critical importance in preventing catastrophic failures. As demonstrated by several high-profile cyber attacks, privacy and security in CPS constitute a complex problem that requires a multi-layered solution. One such layer must be at the level of control and communication itself: rather than securing the operation of CPS separately, after control and communication policies have been designed, an integrated approach must be taken. Within this theme, this work makes two contributions.
The first contribution is a differentially private algorithm for multi-agent average consensus. In this problem, the agents seek to converge to the average of their initial states while keeping those states differentially private from an adversary that has access to all inter-agent messages. Our algorithm converges almost surely to an unbiased estimate of the average of the agents' initial states, and we formally characterize its differential privacy properties. We show that the optimal choice of our design parameters (with respect to the variance of the convergence point around the exact average) corresponds to a one-shot perturbation of the initial states.
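The one-shot structure can be illustrated with a minimal simulation: each agent adds Laplace noise to its initial state exactly once, then runs ordinary linear average consensus on the perturbed values. The network size, privacy parameter, sensitivity bound, communication graph (a cycle), and weights below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                 # number of agents (illustrative)
eps = 1.0             # privacy parameter epsilon (assumed value)
sensitivity = 1.0     # assumed bound on one agent's contribution

x0 = rng.uniform(0.0, 1.0, size=n)                    # private initial states
noise = rng.laplace(scale=sensitivity / eps, size=n)  # one-shot Laplace perturbation
x = x0 + noise

# Standard linear consensus on a cycle graph with a doubly stochastic
# weight matrix: each agent averages with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(200):
    x = W @ x

# All agents agree on the average of the *perturbed* states; since the
# noise has zero mean, this is an unbiased estimate of the true average.
consensus_value = x[0]
print(abs(consensus_value - (x0 + noise).mean()))
```

Because the noise enters only once, subsequent consensus iterations are noise-free post-processing, which is what allows almost sure convergence while the adversary observes only perturbed messages.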
The second contribution is a differentially private algorithm for distributed convex optimization, wherein a group of agents aims to minimize the sum of individual objective functions while each agent desires that any information about its objective function be kept private. We prove the impossibility of achieving differential privacy using strategies based on perturbing the inter-agent messages with noise when the underlying noise-free dynamics are asymptotically stable. This justifies our algorithmic solution based on the perturbation of the individual functions with Laplace noise. To this end, we establish a general framework for differentially private handling of functional data. We further design post-processing steps that ensure the perturbed functions regain the smoothness and convexity properties of the original functions while preserving the differential privacy guarantees of the functional perturbation step. This methodology allows us to use any distributed coordination algorithm to solve the optimization problem on the noisy functions. Finally, we explicitly bound the expected distance between the perturbed and true optimizers, which leads to an upper bound on the privacy-accuracy trade-off curve.
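The perturb-then-optimize pipeline can be sketched in a deliberately simplified finite-dimensional setting: each agent's objective is a quadratic, its coefficients stand in for the paper's functional (basis-coefficient) representation, Laplace noise is added to those coefficients, and a projection step restores convexity before the noisy sum is minimized. The noise scale and the convexity floor below are assumptions for illustration; the paper's actual calibration depends on the functional sensitivity.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 4                # number of agents (illustrative)
eps = 0.5            # privacy parameter (assumed)
scale = 1.0 / eps    # Laplace scale (stand-in for functional-sensitivity calibration)

# Each agent holds a private quadratic f_i(x) = 0.5 * q_i * x^2 + b_i * x.
q = rng.uniform(1.0, 2.0, size=n)
b = rng.uniform(-1.0, 1.0, size=n)

# Functional perturbation, reduced to coefficient space: add Laplace noise.
q_noisy = q + rng.laplace(scale=scale, size=n)
b_noisy = b + rng.laplace(scale=scale, size=n)

# Post-processing: project the quadratic coefficients back onto [q_min, inf)
# to restore strong convexity; post-processing cannot degrade privacy.
q_min = 0.1
q_post = np.maximum(q_noisy, q_min)

# Any coordination algorithm may now minimize the noisy sum; for quadratics
# the minimizer of sum_i f_i is available in closed form: x* = -sum(b)/sum(q).
x_true = -b.sum() / q.sum()
x_priv = -b_noisy.sum() / q_post.sum()
print(abs(x_priv - x_true))   # accuracy sacrificed for privacy
```

The gap between `x_priv` and `x_true` is the quantity the paper's final bound controls in expectation, yielding the privacy-accuracy trade-off curve.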