Adversarial robustness has become a fundamental requirement in modern machine learning applications. Yet its statistical foundations remain surprisingly underexplored. In this work, we establish the first optimal minimax guarantees on the excess risk for adversarially robust classification, under a Gaussian mixture model studied by Schmidt et al. (2018). The results are stated in terms of the Adversarial Signal-to-Noise Ratio (AdvSNR), which generalizes a similar notion for standard linear classification to the adversarial setting. We establish an excess risk lower bound and design a computationally efficient estimator that achieves this optimal rate. Our results build upon a minimal set of assumptions while covering a wide spectrum of adversarial perturbations, including L_p balls for any p>1. Joint work with Yuting Wei and Pradeep Ravikumar.
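As a rough illustration of the setting (a sketch, not material from the talk): in the Gaussian mixture model of Schmidt et al. (2018), the label is y ∈ {−1, +1} and the features are drawn from N(y·θ, σ²I). For a linear classifier sign(w·x) under an L_∞ perturbation of radius ε, the worst-case attack shifts the margin by ε times the dual norm ‖w‖₁. The dimensions, noise level, and perturbation radius below are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the talk)
d, n, sigma, eps = 20, 5000, 1.0, 0.1
theta = np.ones(d) / np.sqrt(d)  # unit-norm class-mean direction

# Sample from the Gaussian mixture: y uniform on {-1,+1}, x ~ N(y*theta, sigma^2 I)
y = rng.choice([-1, 1], size=n)
x = y[:, None] * theta + sigma * rng.standard_normal((n, d))

# Linear classifier along the (here, known) mean direction
w = theta
margin = y * (x @ w)

# Worst-case L_inf perturbation of radius eps reduces the margin
# by eps * ||w||_1 (the dual norm of L_inf)
standard_err = np.mean(margin <= 0)
robust_err = np.mean(margin - eps * np.abs(w).sum() <= 0)
print(f"standard error ~ {standard_err:.3f}, robust error ~ {robust_err:.3f}")
```

The gap between the two error rates grows with ε and with ‖w‖₁, which is one way to see why robust classification is statistically harder than its standard counterpart.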
Chen Dan is a 6th-year Ph.D. student in the Computer Science Department at Carnegie Mellon University, advised by Pradeep Ravikumar. His research interests lie in the broad area of robust statistical learning, with an emphasis on theoretical understanding and practical algorithms for learning under various types of adversarial distribution shift. Prior to joining CMU, Chen received his bachelor's degree from the School of EECS, Peking University, in 2016.
The AI Seminar is sponsored by Morgan Stanley.
Zoom Participation. See announcement.