Trusted Artificial Intelligence Platforms: Certification and Audit

Dmitry Namiot, Eugene Ilyushin

Abstract


In this work, artificial intelligence systems refer to machine learning systems. Machine learning (deep learning) systems are today the main examples of the use of Artificial Intelligence in a wide variety of areas; from a practical point of view, machine learning can be considered synonymous with Artificial Intelligence. At the same time, machine learning systems, by their nature, depend on the data on which they are trained and, in principle, produce non-deterministic results. Trusted platforms, as their name suggests, are sets of tools designed to increase trust (user confidence) in the output of machine learning models. The use of machine learning systems in so-called critical areas (avionics, automated driving, etc.) requires guarantees of software functionality, which are confirmed by a certification procedure. By audit, we mean the identification of possible problems with the performance and security of machine learning systems.
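
To make the non-determinism point concrete, the following minimal sketch (ours, not part of the paper; the dataset, model class, and seeds are illustrative assumptions) trains the same model twice on the same data with different random initializations and measures how often the two resulting models disagree:

# A minimal sketch illustrating the abstract's point that machine learning
# training is not deterministic: the same data and the same model class can
# yield different models depending on random initialization.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Fixed synthetic dataset, so any difference below comes from training alone.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Two training runs that differ only in the random seed.
models = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                  random_state=seed).fit(X, y)
    for seed in (1, 2)
]

# The two runs generally disagree on some inputs.
disagreement = (models[0].predict(X) != models[1].predict(X)).mean()
print(f"Share of inputs where the two trained models disagree: {disagreement:.3f}")

It is exactly this kind of run-to-run variation that certification and audit procedures for trusted AI platforms must bound or account for.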

Full Text:

PDF (Russian)

References


Namiot, Dmitry, Eugene Ilyushin, and Oleg Pilipenko. "On Trusted AI Platforms." International Journal of Open Information Technologies 10.7 (2022): 119-127.

Namiot, Dmitry. "Introduction to Data Poison Attacks on Machine Learning Models." International Journal of Open Information Technologies 11.3 (2023): 58-68.

Namiot, D. E., and E. A. Ilyushin. "Data drift monitoring in machine learning models." International Journal of Open Information Technologies 10.12 (2022). (in Russian)

Namiot, Dmitry. "Schemes of attacks on machine learning models." International Journal of Open Information Technologies 11.5 (2023): 68-86.

Namiot, Dmitry, and Eugene Ilyushin. "On the robustness and security of Artificial Intelligence systems." International Journal of Open Information Technologies 10.9 (2022): 126-134.

MITRE ATLAS mitigations https://atlas.mitre.org/mitigations/ Retrieved: Dec, 2023

GRID 2023 https://indico.jinr.ru/event/3505/ Retrieved: Dec, 2023

Namiot, Dmitry, and Manfred Sneps-Sneppe. "On Audit and Certification of Machine Learning Systems." 2023 34th Conference of Open Innovations Association (FRUCT). IEEE, 2023.

Robust and Verified Deep Learning group https://deepmindsafetyresearch.medium.com/towards-robust-and-verified-ai-specification-testing-robust-training-and-formal-verification-69bd1bc48bda Retrieved: Dec, 2023

Madry Lab https://people.csail.mit.edu/madry/6.S979/files/lecture_4.pdf Retrieved: Dec, 2023

Kostyumov, Vasily. "A survey and systematization of evasion attacks in computer vision." International Journal of Open Information Technologies 10.10 (2022): 11-20. (in Russian)

Song, Junzhe, and Dmitry Namiot. "A Survey of the Implementations of Model Inversion Attacks." Distributed Computer and Communication Networks: 25th International Conference, DCCN 2022, Moscow, Russia, September 26–29, 2022, Revised Selected Papers. Cham: Springer Nature Switzerland, 2023.

Bidzhiev, Temirlan, and Dmitry Namiot. "Research of existing approaches to embedding malicious software in artificial neural networks." International Journal of Open Information Technologies 10.9 (2022): 21-31. (in Russian)

Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Ongoing academic and industrial projects dedicated to robust machine learning." International Journal of Open Information Technologies 9.10 (2021): 35-46. (in Russian)

Borg, Markus, et al. "Safely entering the deep: A review of verification and validation for machine learning and a challenge elicitation in the automotive industry." arXiv preprint arXiv:1812.05389 (2018).

Why Robustness is not Enough for Safety and Security in Machine Learning https://towardsdatascience.com/why-robustness-is-not-enough-for-safety-and-security-in-machine-learning-1a35f6706601 Retrieved: Jun, 2023

Gu, Kang, et al. "Towards Sentence Level Inference Attack Against Pre-trained Language Models." Proceedings on Privacy Enhancing Technologies 3 (2023): 62-78.

OWASP Top 10 List for Large Language Models version 0.1 https://owasp.org/www-project-top-10-for-large-language-model-applications/descriptions/

Derner, Erik, and Kristina Batistič. "Beyond the Safeguards: Exploring the Security Risks of ChatGPT." arXiv preprint arXiv:2305.08005 (2023).

Democratic inputs to AI https://openai.com/blog/democratic-inputs-to-ai Retrieved: Dec, 2023

The AI Act https://artificialintelligenceact.eu/ Retrieved: Dec, 2023

AI regulation https://www.technologyreview.com/2023/05/23/1073526/suddenly-everyone-wants-to-talk-about-how-to-regulate-ai/ Retrieved: Dec, 2023

Schuett, Jonas, et al. "Towards best practices in AGI safety and governance: A survey of expert opinion." arXiv preprint arXiv:2305.07153 (2023).

Game Changers https://www.cbinsights.com/research/report/game-changing-technologies-2022/ Retrieved: Dec, 2023

An In-Depth Guide To Help You Start Auditing Your AI Models https://censius.ai/blogs/ai-audit-guide Retrieved: Dec, 2023

Liu, Jie. "The enterprise risk management and the risk oriented internal audit." iBusiness 4.3 (2012): 287.

IEEE Standard for Software Reviews and Audits. IEEE Std 1028-2008 (2008): 1-53. https://doi.org/10.1109/IEEESTD.2008.4601584

van Wyk, Jana, and Riaan Rudman. "COBIT 5 compliance: best practices cognitive computing risk assessment and control checklist." Meditari Accountancy Research (2019).

The IIA's Artificial Intelligence Auditing Framework https://www.theiia.org/en/content/articles/global-perspectives-and-insights/2017/the-iias-artificial-intelligence-auditing-framework-practical-applications-part-ii/ Retrieved: Dec, 2023

Realize the Full Potential of Artificial Intelligence https://www.coso.org/Shared%20Documents/Realize-the-Full-Potential-of-Artificial-Intelligence.pdf Retrieved: Dec, 2023

Do Foundation Model Providers Comply with the Draft EU AI Act? https://crfm.stanford.edu/2023/06/15/eu-ai-act.html Retrieved: Dec, 2023

AI Risk Management Framework https://www.nist.gov/itl/ai-risk-management-framework Retrieved: Dec, 2023

ISO/IEC 23894 – A new standard for risk management of AI https://aistandardshub.org/a-new-standard-for-ai-risk-management Retrieved: Dec, 2023

Raji, Inioluwa Deborah, et al. "Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing." Proceedings of the 2020 conference on fairness, accountability, and transparency. 2020.

New research proposes a framework for evaluating general-purpose models against novel threats https://www.deepmind.com/blog/an-early-warning-system-for-novel-ai-risks Retrieved: Jun, 2023

Shevlane, Toby, et al. "Model evaluation for extreme risks." arXiv preprint arXiv:2305.15324 (2023).

Markert, Thora, Fabian Langer, and Vasilios Danos. "GAFAI: Proposal of a Generalized Audit Framework for AI." INFORMATIK 2022 (2022).

Auditing and Certification of AI Systems https://www.hhi.fraunhofer.de/en/departments/ai/technologies-and-solutions/auditing-and-certification-of-ai-systems.html Retrieved: Dec, 2023

Towards Auditable AI Systems https://www.hhi.fraunhofer.de/fileadmin/Departments/AI/TechnologiesAndSolutions/AuditingAndCertificationOfAiSystems/2022-05-23-whitepaper-tuev-bsi-hhi-towards-auditable-ai-systems.pdf Retrieved: Dec, 2023

AI TRiSM https://www.gartner.com/en/information-technology/glossary/ai-trism Retrieved: Dec, 2023

Datarobot https://www.datarobot.com/platform/trusted-ai/ Retrieved: Dec, 2023

IBM Trustworthy https://research.ibm.com/topics/trustworthy-ai Retrieved: Dec, 2023

Ruparelia, Nayan B. "Software development lifecycle models." ACM SIGSOFT Software Engineering Notes 35.3 (2010): 8-13.

Explaining W-shaped Learning Assurance https://daedalean.ai/tpost/pxl6ih0yc1-explaining-w-shaped-learning-assurance Retrieved: Dec, 2023

EASA AI Task Force and Daedalean AG. "Concepts of Design Assurance for Neural Networks (CoDANN)." EASA, Daedalean (2020).

EASA roadmap https://www.easa.europa.eu/en/domains/research-innovation/ai Retrieved: Dec, 2023

G-34 Artificial Intelligence in Aviation https://standardsworks.sae.org/standards-committees/g-34-artificial-intelligence-aviation Retrieved: Dec, 2023

DO-178 continues to adapt to emerging digital technologies https://militaryembedded.com/avionics/safety-certification/do-178-continues-to-adapt-to-emerging-digital-technologies Retrieved: Dec, 2023

Vidot, Guillaume, et al. "Certification of embedded systems based on Machine Learning: A survey." arXiv preprint arXiv:2106.07221 (2021).

EASA Artificial Intelligence Roadmap 1.0. https://www.easa.europa.eu/sites/default/files/dfu/EASA-AIRoadmap-v1.0.pdf Retrieved: Dec, 2023

Dmitriev, Konstantin, Johann Schumann, and Florian Holzapfel. "Towards Design Assurance Level C for Machine-Learning Airborne Applications." 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC). IEEE, 2022.

"Concepts of Design Assurance for Neural Networks (CoDANN)." European Aviation Safety Agency, Tech. Rep., 2020.

"Concepts of Design Assurance for Neural Networks (CoDANN) II." European Aviation Safety Agency, Tech. Rep., 2021.

"EASA Concept Paper: First Usable Guidance for Level 1 Machine Learning Applications." European Aviation Safety Agency, Tech. Rep., 2021.

Li, Linyi, Tao Xie, and Bo Li. "SoK: Certified robustness for deep neural networks." 2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023.

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "On a formal verification of machine learning systems." International Journal of Open Information Technologies 10.5 (2022): 30-34.

Stroeva, Ekaterina, and Aleksey Tonkikh. "Methods for Formal Verification of Artificial Neural Networks: A Review of Existing Approaches." International Journal of Open Information Technologies 10.10 (2022): 21-29.

Brix, Christopher, et al. "The Fourth International Verification of Neural Networks Competition (VNN-COMP 2023): Summary and Results." arXiv preprint arXiv:2312.16760 (2023).

Zhang, Bohang, et al. "Towards certifying ℓ∞ robustness using neural networks with ℓ∞-dist neurons." arXiv preprint arXiv:2102.05363 (2021).

Kuprijanovskij, V. P., D. E. Namiot, and S. A. Sinjagov. "Demystification of the digital economy." International Journal of Open Information Technologies 4.11 (2016): 59-63. (in Russian)

Kuprijanovskij, V. P., et al. "Retail trade in the digital economy." International Journal of Open Information Technologies 4.7 (2016): 1-12. (in Russian)



ISSN: 2307-8162