Attacks on machine learning systems - common problems and methods

Eugene Ilyushin, Dmitry Namiot, Ivan Chizhov

Abstract


The paper deals with the problem of adversarial attacks on machine learning systems. Such attacks are understood as deliberate manipulations of the elements of the machine learning pipeline (the training data, the model itself, the test data) intended either to force a behavior of the system desired by the attacker or to prevent the system from working correctly. In general, this problem is a consequence of an issue fundamental to all machine learning systems: the data seen at the testing (operation) stage differ from the data on which the system was trained. Accordingly, a machine learning system can fail even without any targeted interference, simply because the operational stage presents data for which the generalization achieved during training does not hold. An attack on a machine learning system is, in essence, the deliberate steering of the system into a region of the data space on which it was not trained. Today this problem, usually discussed under the heading of the robustness of machine learning systems, is the main obstacle to the use of machine learning in critical applications.
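
As an illustration of this definition, the sketch below implements one widely studied instance of this attack class: an evasion attack using the Fast Gradient Sign Method (FGSM). This is a generic textbook technique, not a method proposed in the paper itself; the PyTorch model, input, and label are random placeholders standing in for a trained classifier and real operational data.

    # Minimal FGSM evasion-attack sketch. The model and data are random
    # placeholders; in practice the attack targets a trained model and a
    # real input drawn from the operational data stream.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Placeholder classifier: 28x28 grayscale input -> 10 classes.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 1, 28, 28)   # clean input (placeholder)
    y = torch.tensor([3])          # its true label (placeholder)
    epsilon = 0.1                  # L-infinity perturbation budget

    # Gradient of the loss with respect to the input, not the weights.
    x.requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()

    # One-step attack: shift every pixel by epsilon in the direction that
    # increases the loss, then clamp back to the valid pixel range.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    with torch.no_grad():
        print("clean prediction:      ", model(x).argmax(dim=1).item())
        print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

The perturbed input stays within a small L-infinity distance of the clean one, so it looks essentially unchanged to a human observer, yet it may cross the model's decision boundary. This is precisely the "steering into an untrained data region" described above.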

Full Text: PDF (in Russian)






ISSN: 2307-8162