Accuracy-Robustness Trade-offs in Data-driven Control

Despite the widespread use of ML/data-driven algorithms, provable guarantees on their performance are critically lacking, especially when the data originates from unreliable or adversarial sources. We show that, in the quest to optimize their accuracy, data-driven control and classification algorithms inevitably become less robust to adversarial and environmental data perturbations. This trade-off is fundamental to these problems and cannot be overcome by using more data or by increasing the complexity of the algorithms. This research offers insight into why these algorithms perform differently under nominal and adversarial conditions, and provides mechanisms to tune their level of accuracy and robustness.
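The trade-off can be seen even in a toy one-dimensional setting. The sketch below is our own illustrative construction (not a model from the cited papers): two Gaussian classes separated by a threshold classifier, with an adversary allowed to shift each sample by up to a budget ε toward the decision boundary. When the class variances differ, the threshold that maximizes nominal accuracy is not the one that maximizes worst-case (robust) accuracy, so improving one necessarily sacrifices the other.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical setup (illustrative only): class 0 ~ N(0, 0.5^2),
# class 1 ~ N(2, 1.5^2), equal priors; classifier predicts class 1 iff x > t.
MU0, S0 = 0.0, 0.5
MU1, S1 = 2.0, 1.5
EPS = 0.5  # adversary may shift each sample by up to EPS toward the boundary

def nominal_accuracy(t):
    # P(correct) = 0.5 * P(x0 <= t) + 0.5 * P(x1 > t)
    return 0.5 * Phi((t - MU0) / S0) + 0.5 * (1.0 - Phi((t - MU1) / S1))

def robust_accuracy(t, eps=EPS):
    # Worst case: class-0 samples are pushed up by eps, class-1 samples down.
    return 0.5 * Phi((t - eps - MU0) / S0) + 0.5 * (1.0 - Phi((t + eps - MU1) / S1))

# Sweep thresholds and compare the two optima.
grid = [i / 100.0 for i in range(-100, 301)]
t_acc = max(grid, key=nominal_accuracy)  # accuracy-optimal threshold
t_rob = max(grid, key=robust_accuracy)   # robustness-optimal threshold

print(f"accuracy-optimal t = {t_acc:.2f}, robustness-optimal t = {t_rob:.2f}")
print(f"nominal acc at t_acc = {nominal_accuracy(t_acc):.3f}, "
      f"at t_rob = {nominal_accuracy(t_rob):.3f}")
print(f"robust  acc at t_acc = {robust_accuracy(t_acc):.3f}, "
      f"at t_rob = {robust_accuracy(t_rob):.3f}")
```

In this construction the two optimal thresholds are visibly different (roughly 0.84 vs. 1.25), so the accuracy-maximizing classifier is strictly suboptimal against the adversary and vice versa, mirroring the fundamental limitation established in the papers below.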


A. A. Al Makdah, V. Katewa and F. Pasqualetti. A Fundamental Performance Limitation for Adversarial Classification. IEEE Control Systems Letters, 4(1):169-174, 2020.

A. A. Al Makdah, V. Katewa and F. Pasqualetti. Accuracy Prevents Robustness in Perception-based Control. American Control Conference (ACC), Denver, USA, pp. 3940-3946, 2020.



Faculty: Vaibhav Katewa, ECE
