Research and development of a formal verification algorithm and quality assessment metrics based on ANN dimensionality reduction methods

Roman Selevenko, Ekaterina Stroeva

Abstract


Checking the quality of artificial neural network (ANN) operation with formal verification algorithms provides the strongest guarantees among existing approaches, but it suffers from high computational complexity and very long running times. A priority research direction is therefore to reduce the running time of such algorithms, either by modifying the formal verification algorithm itself or by transforming the architecture of the ANN being verified. This paper studies methods for formal verification of artificial neural networks based on ANN reduction methods, describes experiments conducted to test the properties of selected neural network architectures, and analyzes these methods. Planet was taken as the formal verification algorithm; since it operates on networks with the ReLU activation function, the LeNet architecture was chosen. The dimensionality reduction methods applied were dropout, pruning, and quantization; the reachability property was checked, and quality was assessed with the RNE metric. The experiments showed that pruning yields a gain in verification time, dropout proved the most effective of the methods, and quantization turned out to be incompatible with the chosen Planet algorithm.
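To make the pipeline described above concrete, the sketch below applies the same three reduction methods (dropout, pruning, quantization) to a LeNet-style ReLU network in PyTorch. This is a minimal illustration under assumed settings (layer sizes, a 30% pruning ratio, dynamic int8 quantization); it is not the authors' code and does not include the Planet verifier itself.

```python
# Illustrative sketch (not the authors' code): applying the three reduction
# methods from the abstract -- dropout, pruning, quantization -- to a small
# LeNet-style network with ReLU activations before handing it to a verifier.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class LeNet(nn.Module):
    """LeNet-5-style network with ReLU activations (piecewise-linear,
    which is the class of networks a Planet-style verifier can encode)."""

    def __init__(self, p_dropout: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 120), nn.ReLU(),
            nn.Dropout(p_dropout),  # active only in training mode
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, 10),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


model = LeNet().eval()  # eval() disables dropout for verification/export

# Pruning: zero out 30% of the smallest-magnitude weights in each linear
# layer, then make the masks permanent so the exported network is a plain
# linear/ReLU graph again.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")

# Quantization: dynamic int8 quantization of the linear layers. The
# resulting ops are no longer exact piecewise-linear float arithmetic.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

After prune.remove the pruned model is again an ordinary linear/ReLU graph that a piecewise-linear verifier can encode, whereas the dynamically quantized model no longer consists of exact float linear operations, which is consistent with the abstract's observation that quantization could not be combined with Planet.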



References


Ehlers R. Formal verification of piece-wise linear feed-forward neural networks // 15th International Symposium on Automated Technology for Verification and Analysis. — 2017.

Ekaterina Stroeva, Aleksey Tonkikh. Methods for formal verification of artificial neural networks: A review of existing approaches // International Journal of Open Information Technologies. — 2022. — Vol. 10. — P. 21–28.

Dantzig G. Linear programming and extensions. — Princeton University Press, 1963.

Dantzig G. Programming in a linear structure. — Technical report, U.S. Air Force Comptroller, USAF, Washington, D.C., 1948.

Dantzig G. Maximization of a linear function of variables subject to linear inequalities // Activity Analysis of Production and Allocation. — 1951. — No. 13. — P. 339–347.

G. Dantzig, A. Orden, P. Wolfe. The generalized simplex method for minimizing a linear form under linear inequality restraints // Pacific Journal of Mathematics. — 1955. — Vol. 5.

Brandon Paulsen, Jingbo Wang, Chao Wang. ReluDiff: Differential verification of deep neural networks // ICSE '20: Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering. — 2020. — P. 714–726.

Huang Yanping, Cheng Youlong et al. GPipe: Efficient training of giant neural networks using pipeline parallelism. — 2018. — arXiv:1811.06965.

Zhang Dongqing, Yang Jiaolong, Ye Dongqiangzi, Hua Gang. LQ-Nets: Learned quantization for highly accurate and compact deep neural networks. — 2018. — arXiv:1807.10029.

Niklas Eén, Niklas Sörensson. An extensible SAT-solver // 6th International Conference on Theory and Applications of Satisfiability Testing (SAT), Selected Revised Papers. — 2003. — P. 502–518.

Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner. Gradient-based learning applied to document recognition // Proceedings of the IEEE. — 1998. — Vol. 86, No. 11. — P. 2278–2324.





ISSN: 2307-8162