When NAS Meets Robustness:
In Search of Robust Architectures against Adversarial Attacks

Minghao Guo*     Yuzhe Yang*      Rui Xu     Ziwei Liu     Dahua Lin
(* indicates equal contribution, alphabetical order)


Recent advances in adversarial attacks uncover the intrinsic vulnerability of modern deep neural networks. Since then, extensive efforts have been devoted to enhancing the robustness of deep networks via specialized learning algorithms and loss functions. In this work, we take an architectural perspective and investigate the patterns of network architectures that are resilient to adversarial attacks. To obtain the large number of networks needed for this study, we adopt one-shot neural architecture search, training a large network once and then finetuning the sub-networks sampled from it. The sampled architectures, together with the accuracies they achieve, provide a rich basis for our study. Our "robust architecture Odyssey" reveals several valuable observations: 1) densely connected patterns result in improved robustness; 2) under a computational budget, adding convolution operations to direct connection edges is effective; 3) the flow of solution procedure (FSP) matrix is a good indicator of network robustness. Based on these observations, we discover a family of robust architectures (RobNets). On various datasets, including CIFAR, SVHN, Tiny-ImageNet, and ImageNet, RobNets exhibit superior robustness to other widely used architectures. Notably, RobNets substantially improve the robust accuracy (~5% absolute gains) under both white-box and black-box attacks, even with fewer parameters.


When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks
Minghao Guo*, Yuzhe Yang*, Rui Xu, Ziwei Liu, Dahua Lin (* indicates equal contribution)
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2020)
[Paper]  •  [arXiv]  •  [Code]  •  [Slides]  •  [Poster]  •  [BibTeX]  



Representative Results

Figure 1. Correlation between architecture density and adversarial accuracy: Densely connected patterns benefit network robustness.
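As an illustration of the density statistic behind Figure 1, here is a minimal sketch. It assumes density is measured as the fraction of active edges in a cell's directed acyclic connection graph out of all possible edges among its ordered nodes; the paper's exact cell definition is not reproduced here.

```python
import numpy as np

def cell_density(adj):
    """Density of a cell's connection graph (assumed definition).

    `adj` is an upper-triangular 0/1 adjacency matrix over N ordered
    nodes; density = (number of active edges) / (N * (N - 1) / 2),
    the number of possible forward edges.
    """
    n = adj.shape[0]
    total_possible = n * (n - 1) // 2
    active = int(np.triu(adj, 1).sum())
    return active / total_possible

# A fully connected 3-node cell has density 1.0:
full = np.array([[0, 1, 1],
                 [0, 0, 1],
                 [0, 0, 0]])
print(cell_density(full))
```

Under this definition, a chain-like cell (each node connected only to the next) has low density, while cells with many skip connections approach 1.0, matching the trend shown in Figure 1.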

Figure 2. Architecture studies under computational budget: Under a small computational budget, adding convolution operations to direct edges is more effective at improving model robustness.

Figure 3. FSP matrix distance as robustness indicator: A robust network has a lower FSP matrix loss in the deeper cells of the network.
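For reference, the FSP (flow of solution procedure) matrix used in Figure 3 can be computed as a channel-wise Gram matrix between two feature maps of the same spatial size; the sketch below assumes NumPy arrays in (height, width, channels) layout, and the FSP loss would then be a distance (e.g. squared Frobenius norm) between the FSP matrices of clean and adversarial features.

```python
import numpy as np

def fsp_matrix(f1, f2):
    """FSP matrix between two feature maps f1: (h, w, m), f2: (h, w, n).

    Entry (s, t) is the inner product of channel s of f1 with
    channel t of f2, averaged over the h * w spatial positions.
    Returns an (m, n) matrix.
    """
    h, w, m = f1.shape
    _, _, n = f2.shape
    return f1.reshape(h * w, m).T @ f2.reshape(h * w, n) / (h * w)

def fsp_loss(g_clean, g_adv):
    """Squared Frobenius distance between two FSP matrices
    (one plausible choice of distance; an assumption here)."""
    return float(np.sum((g_clean - g_adv) ** 2))
```

A lower `fsp_loss` in deeper cells indicates that the network's feature flow changes little under adversarial perturbation, which is the behavior Figure 3 associates with robust architectures.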

Also Check Out

ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation
Yuzhe Yang, Guo Zhang, Dina Katabi, and Zhi Xu
ICML 2019  •  [Paper]  •  [Slides]  •  [Poster]  •  [Talk]