Methods for Formal Verification of Artificial Neural Networks: A Review of Existing Approaches
Abstract
Recent surveys and systematizations of algorithms for formal verification of neural networks propose a classification based on the following three properties: reachability, optimization, and search. Reachability-based methods take a set of inputs defined by predefined constraints on the input values and approximate this set using symbolic mathematical constructs. The main problem of this approach is overapproximation, i.e., the computed output set is wider than the true reachable set. Another major problem is preserving the linearity of these symbolic constructs after the ReLU activation function is applied, which is necessary for applying the backpropagation algorithm. In the present article we analyze, compare, and systematize the mathematical constructions and the attempts to approximate point data by flat continuous sets with the smallest possible approximation error. We also discuss the most effective methods for solving the problem of applying the ReLU activation function to symbolic elements.
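To make the reachability idea and its overapproximation problem concrete, the sketch below propagates an axis-aligned input box through a tiny two-layer ReLU network using interval arithmetic, the simplest of the symbolic constructs discussed here. This is a minimal illustration in Python/NumPy under assumed data: the weights, layer sizes, input point, and epsilon are arbitrary examples, not taken from any of the cited tools.

import numpy as np

def affine_bounds(lower, upper, W, b):
    # Tightest interval image of the box [lower, upper] under x -> W x + b:
    # split W into its positive and negative parts and pair each with the
    # matching bound.
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    return (W_pos @ lower + W_neg @ upper + b,
            W_pos @ upper + W_neg @ lower + b)

def relu_bounds(lower, upper):
    # ReLU is monotone, so it maps an interval to an interval exactly.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Illustrative network parameters (assumed for the example).
W1 = np.array([[1.0, -1.0],
               [2.0,  1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

# Input set: an infinity-norm ball of radius eps around a point.
x, eps = np.array([0.5, 0.5]), 0.1
l, u = x - eps, x + eps

l, u = affine_bounds(l, u, W1, b1)
l, u = relu_bounds(l, u)
l, u = affine_bounds(l, u, W2, b2)
print("certified output range:", l, u)

Even though each step computes the tightest interval for that step in isolation, the box representation discards dependencies between neurons, so the final range is generally wider than the true reachable set. The zonotope and star-set constructions surveyed in the article recover some of these dependencies, which is also why they require a linear relaxation of ReLU rather than the exact elementwise rule used above.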
References
NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems / Hoang-Dung Tran, Xiaodong Yang, Diego Manzanas Lopez et al. — 2020. — URL: https://arxiv.org/abs/2004.05519.
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead / Muhammad Shafique, Mahum Naseer, Theocharis Theocharides et al. — 2020. — URL: https://ieeexplore.ieee.org/document/8979377.
Safety verification of deep neural networks / Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu. — 2016. — URL: https://arxiv.org/abs/1610.06940.
Ehlers Ruediger. Formal verification of piece-wise linear feed-forward neural networks. — 2017. — URL: https://arxiv.org/abs/1705.01320.
Shriver David, Elbaum Sebastian, Dwyer Matthew B. DNNV: A framework for deep neural network verification. — 2021. — URL: https://arxiv.org/abs/2105.12841.
Verification of deep convolutional neural networks using ImageStars / Hoang-Dung Tran, Stanley Bak, Weiming Xiang, Taylor T. Johnson. — 2020. — URL: https://arxiv.org/abs/2004.05511.
Tjeng Vincent, Xiao Kai, Tedrake Russ. Evaluating robustness of neural networks with mixed integer programming. — 2017. — URL: https://arxiv.org/abs/1711.07356.
A dual approach to scalable verification of deep networks / Krishnamurthy Dvijotham, Robert Stanforth et al. — 2018. — URL: https://arxiv.org/abs/1803.06567.
Boosting robustness certification of neural networks / Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev // International Conference on Learning Representations. — 2019. — URL: https://openreview.net/forum?id=HJgeEh09KQ.
Xiang Weiming, Tran Hoang-Dung, Johnson Taylor T. Specification-guided safety verification for feedforward neural networks. — 2018. — URL: https://arxiv.org/abs/1812.06161.
Formal security analysis of neural networks using symbolic intervals / Shiqi Wang, Kexin Pei, Justin Whitehouse et al. — 2018. — URL: https://arxiv.org/abs/1804.10829.
Efficient formal safety analysis of neural networks / Shiqi Wang, Kexin Pei, Justin Whitehouse et al. — 2018. — URL: https://arxiv.org/abs/1809.08098.
AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation / Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen et al. — 2018. — URL: https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf.
Ghorbal Khalil, Goubault Eric, Putot Sylvie. The zonotope abstract domain Taylor1+. — 2009. — URL: https://link.springer.com/chapter/10.1007/978-3-642-02658-4_47#chapter-info.
Fast and effective robustness certification / Gagandeep Singh, Timon Gehr, Matthew Mirman et al. // Advances in Neural Information Processing Systems / Ed. by S. Bengio, H. Wallach, H. Larochelle et al. — Vol. 31. — Curran Associates, Inc., 2018. — URL: https://proceedings.neurips.cc/paper/2018/file/f2f446980d8e971ef3da97af089481c3-Paper.pdf.
Duggirala Parasara Sridhar, Viswanathan Mahesh. Parsimonious, simulation based verification of linear systems. — 2016. — URL: https://www.cs.unc.edu/~psd/files/research/2016/psd-mv-cav-16.pdf.
Optimization and abstraction: a synergistic approach for analyzing neural network robustness / Greg Anderson, Shankara Pailoor, Isil Dillig, Swarat Chaudhuri. — ACM, 2019. — URL: https://doi.org/10.1145/3314221.3314614.
Improved geometric path enumeration for verifying ReLU neural networks / Stanley Bak, Hoang-Dung Tran, Kerianne Hobbs, Taylor T. Johnson. — 2020. — URL: https://link.springer.com/chapter/10.1007/978-3-030-53288-8_4.
Namiot Dmitry, Ilyushin Eugene, Pilipenko Oleg. On trusted AI platforms. — 2022. — URL: http://injoit.org/index.php/j1/article/view/1398.
Namiot Dmitry, Ilyushin Eugene, Chizhov Ivan. Attacks on machine learning systems - common problems and methods. — 2022. — URL: http://injoit.org/index.php/j1/article/view/1398.
Namiot Dmitry, Ilyushin Eugene. Generative models in machine learning. — 2022. — URL: http://injoit.org/index.php/j1/article/view/1398.
Artificial intelligence in cybersecurity [Iskusstvennyj intellekt v kiberbezopasnosti]. — Accessed: 08.08.2022. — URL: https://cs.msu.ru/node/3732.
ISSN: 2307-8162