Overview of adversarial attacks and defenses for object detectors

Ekaterina Chekhonina, Vasily Kostyumov

Abstract


Object detection is one of the most popular applications of deep neural networks, which are now deployed in critical areas such as natural language processing, big data processing, DNA analysis, and autonomous vehicles. However, object detection systems are sensitive to small perturbations of their input: changes imperceptible to the human eye can completely mislead a DNN. Because object detectors are vulnerable to such adversarial attacks, they can hardly be safely deployed in real-life applications. Existing adversarial attacks can be divided into digital and physical ones. Digital attacks achieve strong performance in laboratory conditions but, unlike physical attacks, are much less effective in the real world. Defenses, in turn, can be divided into empirical and certified: certified methods provide provable robustness guarantees, while empirical defenses may remain vulnerable to sophisticated adversarial attacks. Although adversarial robustness is a very active research area, most work has focused on image classification, which is structurally simpler than object detection. We review the prominent attacks and defense mechanisms for object detection and propose a classification of them.
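To make the threat concrete, the following is a minimal sketch of the digital-attack idea described above: a one-step gradient-sign perturbation in the spirit of FGSM, applied to a toy differentiable model. The model, loss, and epsilon are illustrative placeholders, not the specific attacks surveyed in the paper.

import torch
import torch.nn as nn

# Toy stand-in for a detector's differentiable score head; any
# nn.Module mapping images to class logits would work the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, y, eps=8 / 255):
    # One-step gradient-sign attack: nudge every pixel by eps in the
    # direction that increases the loss, then clip back to [0, 1].
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)    # a "clean" 32x32 RGB input
y = torch.tensor([0])           # its true label
x_adv = fgsm_perturb(x, y)      # visually identical, but shifts the loss
print((x_adv - x).abs().max())  # perturbation magnitude stays within eps

Real attacks on detectors replace this classification loss with detector-specific objectives (e.g., objectness or region-proposal scores), and physical attacks additionally optimize the perturbation to survive printing, viewpoint, and lighting changes.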


References


Sharma Akanksha Rai, Kaushik Pranav. Literature survey of statistical, deep and reinforcement learning in natural language processing // International Conference on Computing, Communication and Automation. — 2017. — P. 350–354.

Intelligent fault diagnosis of the high-speed train with big data based on deep neural networks / Hexuan Hu, Bo Tang, Xuejiao Gong et al. // IEEE Transactions on Industrial Informatics. — 2017. — Vol. 13, no. 4. — P. 2106–2116.

Deng Lei, Wu Hui, Liu Hui. D2vcb: a hybrid deep neural network for the prediction of in-vivo protein-dna binding from combined dna sequence // IEEE International Conference on Bioinformatics and Biomedicine. — 2019. — P. 74–77.

Ackerman Evan. How drive.ai is mastering autonomous driving with deep learning // IEEE Spectrum Magazine. — 2017. — URL: https://spectrum.ieee.org/how-driveai-is-mastering-autonomous-driving-with-deep-learning.

Novel arithmetics in deep neural networks signal processing for autonomous driving: challenges and opportunities / Marco Cococcioni, Federico Rossi, Emanuele Ruffaldi et al. // IEEE Signal Processing Magazine. — 2020. — Vol. 38, no. 1. — P. 97–110.

Cococcioni Marco, Ruffaldi Emanuele, Saponara Sergio. Exploiting posit arithmetic for deep neural networks in autonomous driving applications // International Conference of Electrical and Electronic Technologies for Automotive. — 2018. — P. 1–6.

Okuyama Takafumi, Gonsalves Tad, Upadhay Jaychand. Autonomous driving system based on deep q learning // International Conference on Intelligent Autonomous Systems. — 2018. — P. 201–205.

Ben-Tal Aharon, El Ghaoui Laurent, Nemirovski Arkadi. Robust optimization. — Princeton University Press, 2009.

Papernot Nicolas, McDaniel Patrick, Goodfellow Ian. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples // arXiv preprint arXiv:1605.07277. — 2016.

Intriguing properties of neural networks / Christian Szegedy, Wojciech Zaremba, Ilya Sutskever et al. // arXiv preprint arXiv:1312.6199. — 2013.

Lu Jiajun, Issaranon Theerasit, Forsyth David. Safetynet: detecting and rejecting adversarial examples robustly // IEEE International Conference on Computer Vision. — 2017.

A survey on physical adversarial attack in computer vision / Donghua Wang, Wen Yao, Tingsong Jiang et al. // arXiv preprint arXiv:2209.14262. — 2022.

Practical black-box attacks against machine learning / Nicolas Papernot, Patrick McDaniel, Ian Goodfellow et al. // Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. — 2017. — P. 506–519.

Adversarial examples: attacks and defenses for deep learning / Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li // IEEE Transactions on Neural Networks and Learning Systems. — 2019. — Vol. 30, no. 9. — P. 2805–2824.

Adversarial objectness gradient attacks in real-time object detection systems / Ka-Ho Chow, Ling Liu, Margaret Loper et al. // Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications. — 2020.

Adversarial examples for semantic segmentation and object detection / Cihang Xie, Jianyu Wang, Zhishuai Zhang et al. // IEEE International Conference on Computer Vision. — 2017. — P. 1369–1378.

An adversarial attack on dnn-based black-box object detectors / Yajie Wang, Yu-an Tan, Wenjiao Zhang et al. // Journal of Network and Computer Applications. — 2020. — Vol. 161.

Robust adversarial perturbation on deep proposal-based models / Yuezun Li, Daniel Tian, Ming-Ching Chang et al. // arXiv preprint arXiv:1809.05962. — 2018.

Faster r-cnn: Towards real-time object detection with region proposal networks / Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun // Advances in neural information processing systems. — 2015. — Vol. 28.

Physical adversarial examples for object detectors / Dawn Song, Kevin Eykholt, Ivan Evtimov et al. // 12th USENIX workshop on offensive technologies (WOOT 18). — 2018.

Synthesizing robust adversarial examples / Anish Athalye, Logan Engstrom, Andrew Ilyas, Kevin Kwok // International Conference on Machine Learning / PMLR. — 2018. — P. 284–293.

Adversarial patch / Tom B Brown, Dandelion Mané, Aurko Roy et al. // arXiv preprint arXiv:1712.09665. — 2017.

Thys Simen, Van Ranst Wiebe, Goedemé Toon. Fooling automated surveillance cameras: adversarial patches to attack person detection // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. — 2019.

Adversarial t-shirt! evading person detectors in a physical world / Kaidi Xu, Gaoyuan Zhang, Sijia Liu et al. // Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V / Springer. — 2020. — P. 665–681.

Universal physical camouflage attacks on object detectors / Lifeng Huang, Chengying Gao, Yuyin Zhou et al. // IEEE/CVF Conference on Computer Vision and Pattern Recognition. — 2020. — P. 720–729.

Slap: Improving physical adversarial examples with short-lived adversarial perturbations / Giulio Lovisotto, Henry Turner, Ivo Sluganovic et al. // 30th USENIX Security Symposium. — 2021. — P. 1865–1882.

Cohen Jeremy, Rosenfeld Elan, Kolter Zico. Certified adversarial robustness via randomized smoothing // International Conference on Machine Learning / PMLR. — 2019. — P. 1310–1320.

Detection as regression: Certified object detection with median smoothing / Ping-yeh Chiang, Michael Curry, Ahmed Abdelkader et al. // Advances in Neural Information Processing Systems. — 2020. — Vol. 33. — P. 1275–1286.

Xiang Chong, Mittal Prateek. Detectorguard: Provably securing object detectors against localized patch hiding attacks // Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. — 2021. — P. 3177–3196.

Objectseeker: Certifiably robust object detection against patch hiding attacks via patch-agnostic masking / Chong Xiang, Alexander Valtchanov, Saeed Mahloujifar, Prateek Mittal // arXiv preprint arXiv:2202.01811. — 2022.

Zhang Haichao, Wang Jianyu. Towards adversarially robust object detection // IEEE/CVF International Conference on Computer Vision. — 2019. — P. 421–430.

Role of spatial context in adversarial robustness for object detection / Aniruddha Saha, Akshayvarun Subramanya, Koninika Patil, Hamed Pirsiavash // IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. — 2020.

Grad-cam: Visual explanations from deep networks via gradient-based localization / Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das et al. // IEEE International Conference on Computer Vision. — 2017.

Chiang Ping-Han, Chan Chi-Shen, Wu Shan-Hung. Adversarial pixel masking: A defense against physical attacks for pre-trained object detectors // 29th ACM International Conference on Multimedia. — 2021.

Amirkhani Abdollah, Karimi Mohammad Parsa. Adversarial defenses for object detectors based on gabor convolutional layers // The Visual Computer. — 2022. — Vol. 38, no. 6. — P. 1929–1944.



