Ongoing academic and industrial projects dedicated to robust machine learning

Dmitry Namiot, Eugene Ilyushin, Ivan Chizhov

Abstract


With the growing use of machine learning systems, which today are in practice regarded as artificial intelligence systems, attention to the reliability (robustness) of such systems and solutions is growing as well. Robustness is especially important for critical applications, such as systems that make decisions in real time: there, it is the robustness assessment that determines whether machine learning can be used at all. This is naturally reflected in a large number of works devoted to assessing the robustness of machine learning systems, to the architecture of such systems, and to protecting machine learning systems from malicious actions that can affect their operation. At the same time, robustness problems can arise both naturally, due to different data distributions at the training and deployment stages (during training, the model sees only a part of the data from the general population), and as a result of targeted actions (attacks on machine learning systems). Such attacks can target both the data and the models themselves.
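To make the notion of an attack on a model concrete, the sketch below illustrates the classic fast-gradient-sign perturbation (in the spirit of the adversarial examples of Szegedy et al. and Goodfellow et al. cited in the references) on a toy logistic-regression classifier. All names, weights, and the epsilon value are hypothetical and chosen only for illustration; this is a minimal sketch, not a reproduction of any particular system discussed in the surveyed projects.

```python
import numpy as np

# Toy logistic-regression "model": fixed random weights, no training,
# purely illustrative of the attack mechanics.
rng = np.random.default_rng(0)
w = rng.normal(size=20)          # hypothetical weight vector
b = 0.0
x = rng.normal(size=20)          # a clean input

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # P(label = 1) under the toy model
    return sigmoid(w @ x + b)

y = 1.0                          # assume the true label is 1

# Gradient of the cross-entropy loss with respect to the input x:
# d/dx [-y*log(p) - (1-y)*log(1-p)] = (p - y) * w
grad_x = (predict(x) - y) * w

# FGSM step: shift every input coordinate by eps in the direction
# that increases the loss (the sign of the gradient).
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))
```

Even this tiny example shows the core robustness problem: a small, bounded change to the input (here at most `eps` per coordinate) systematically pushes the model's prediction away from the true label, which is exactly the kind of behavior the evaluation and defense projects surveyed below aim to measure and prevent.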


Full Text:

PDF (Russian)

References


Qayyum, Adnan, et al. "Secure and robust machine learning for healthcare: A survey." IEEE Reviews in Biomedical Engineering 14 (2020): 156-180.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.

A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein, “Poison frogs! targeted clean-label poisoning attacks on neural networks,” in Advances in Neural Information Processing Systems, 2018, pp. 6103–6113.

X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial examples: Attacks and defenses for deep learning,” IEEE Transactions on Neural Networks and Learning Systems, 2019.

A Complete List of All (arXiv) Adversarial Example Papers https://nicholas.carlini.com/writing/2019/all-adversarial-example-papers.html. Retrieved: Aug, 2021

Artificial Intelligence in Cybersecurity. http://master.cmc.msu.ru/?q=ru/node/3496 (in Russian) Retrieved: Aug, 2021

Koh, Pang Wei, et al. "WILDS: A benchmark of in-the-wild distribution shifts." International Conference on Machine Learning. PMLR, 2021.

Nair, Nimisha G., Pallavi Satpathy, and Jabez Christopher. "Covariate shift: A review and analysis on classifiers." 2019 Global Conference for Advancement in Technology (GCAT). IEEE, 2019.

Major ML datasets have tens of thousands of errors https://www.csail.mit.edu/news/major-ml-datasets-have-tens-thousands-errors Retrieved: Aug, 2021

NeurIPS papers aim to improve understanding and robustness of machine learning algorithms https://www.llnl.gov/news/neurips-papers-aim-improve-understanding-and-robustness-machine-learning-algorithms Retrieved: Aug, 2021

Pei, Kexin, et al. "Towards practical verification of machine learning: The case of computer vision systems." arXiv preprint arXiv:1712.01785 (2017).

Katz, Guy, et al. "The marabou framework for verification and analysis of deep neural networks." International Conference on Computer Aided Verification. Springer, Cham, 2019.

Shafique, Muhammad, et al. "Robust machine learning systems: Challenges, current trends, perspectives, and the road ahead." IEEE Design & Test 37.2 (2020): 30-57.

The Pentagon Is Bolstering Its AI Systems—by Hacking Itself https://www.wired.com/story/pentagon-bolstering-ai-systems-hacking-itself Retrieved: Aug, 2021

Poison in the Well: Securing the Shared Resources of Machine Learning https://cset.georgetown.edu/publication/poison-in-the-well/ Retrieved: Aug, 2021

Hacking AI: A Primer for Policymakers on Machine Learning Cybersecurity https://cset.georgetown.edu/wp-content/uploads/CSET-Hacking-AI.pdf Retrieved: Aug, 2021

Guaranteeing AI Robustness Against Deception https://www.darpa.mil/program/guaranteeing-ai-robustness-against-deception Retrieved: Aug, 2021

DARPA is pouring millions into a new AI defense program. Here are the companies leading the charge https://www.protocol.com/intel-darpa-adversarial-ai-project Retrieved: Aug, 2021

UT Austin Selected as Home of National AI Institute Focused on Machine Learning https://news.utexas.edu/2020/08/26/ut-austin-selected-as-home-of-national-ai-institute-focused-on-machine-learning/ Retrieved: Aug, 2021

UT Austin Launches Institute to Harness the Data Revolution https://ml.utexas.edu/news/611 Retrieved: Aug, 2021

National Security Education Center https://www.lanl.gov/projects/national-security-education-center/ Retrieved: Aug, 2021

2021 Project Descriptions Creates next-generation leaders in Machine Learning for Scientific Applications https://www.lanl.gov/projects/national-security-education-center/information-science-technology/summer-schools/applied-machine-learning/project-descriptions-2019.php Retrieved: Aug, 2021

Assured Machine Learning: Robustness, Fairness, and Privacy https://computing.llnl.gov/casc/ml/robust Retrieved: Aug, 2021

Explainable Artificial Intelligence https://computing.llnl.gov/casc/ml/ai Retrieved: Aug, 2021

Advancing Machine Learning for Mission-Critical Applications https://computing.llnl.gov/sites/default/files/COMP_ROADSHOW_ML_CASC-final.pdf Retrieved: Aug, 2021

Abdar, Moloud, et al. "A review of uncertainty quantification in deep learning: Techniques, applications and challenges." Information Fusion (2021).

Intelligence Advanced Research Projects Activity (IARPA) https://www.iarpa.gov/ Retrieved: Aug, 2021

Trojans in Artificial Intelligence https://www.iarpa.gov/index.php/research-programs/trojai Retrieved: Aug, 2021

Trojans in Artificial Intelligence bibliography https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=W911NF20C0034+OR+W911NF20C0038+OR+W911NF20C0045+OR+W911NF20C0035+OR+IARPA-20001-D2020-2007180011 Retrieved: Aug, 2021

ELLIS Programs launched https://ellis.eu/news/ellis-programs-launched Retrieved: Aug, 2021

ELLIS programs https://ellis.eu/programs Retrieved: Aug, 2021

Robust Machine Learning https://ellis.eu/programs/robust-machine-learning Retrieved: Aug, 2021

Semantic, Symbolic and Interpretable Machine Learning https://ellis.eu/programs/semantic-symbolic-and-interpretable-machine-learning Retrieved: Aug, 2021

Oomen, Thomas L. "Why the EU lacks behind China in AI development–Analysis and solutions to enhance EU’s AI strategy." rue 33.1: 7543.

MIT Reliable and Robust Machine Learning https://www.csail.mit.edu/research/reliable-and-robust-machine-learning Retrieved: Aug, 2021

Alexander Madry http://people.csail.mit.edu/madry/ Retrieved: Aug, 2021

Center for Deployable Machine Learning (CDML) https://www.csail.mit.edu/research/center-deployable-machine-learning-cdml Retrieved: Aug, 2021

Madry Lab http://madry-lab.ml/ Retrieved: Aug, 2021

Robustness package https://github.com/MadryLab/robustness Retrieved: Aug, 2021

Adversarial ML tutorial https://adversarial-ml-tutorial.org/ Retrieved: Aug, 2021

RobustML https://www.robust-ml.org/ Retrieved: Aug, 2021

Andriushchenko, Maksym, and Matthias Hein. "Provably robust boosted decision stumps and trees against adversarial attacks." arXiv preprint arXiv:1906.03526 (2019).

Identifying and eliminating bugs in learned predictive models https://deepmind.com/blog/article/robust-and-verified-ai Retrieved: Aug, 2021

Nandy, Abhishek, and Manisha Biswas. "Google’s DeepMind and the Future of Reinforcement Learning." Reinforcement Learning. Apress, Berkeley, CA, 2018. 155-163.

Fairness & Robustness in Machine Learning https://perso.math.univ-toulouse.fr/loubes/fairness-robustness-in-machine-learning/ Retrieved: Aug, 2021

Safe Artificial Intelligence http://safeai.ethz.ch/ Retrieved: Aug, 2021

Latticeflow https://latticeflow.ai/ Retrieved: Aug, 2021

Reliability Assessment of Traffic Sign Classifiers https://latticeflow.ai/wp-content/uploads/2021/01/Reliability_assessment_of_traffic_sign_classifiers_short.pdf Retrieved: Aug, 2021

The Alan Turing Institute Robust machine learning https://www.turing.ac.uk/research/interest-groups/robust-machine-learning Retrieved: Aug, 2021

AI roadmap https://www.gov.uk/government/publications/ai-roadmap Retrieved: Aug, 2021

The Alan Turing Institute Adversarial machine learning https://www.turing.ac.uk/research/research-projects/adversarial-machine-learning Retrieved: Aug, 2021

Malinin, Andrey, et al. "Shifts: A Dataset of Real Distributional Shift Across Multiple Large-Scale Tasks." arXiv preprint arXiv:2107.07455 (2021).

Oxford Applied and Theoretical Machine Learning Group https://oatml.cs.ox.ac.uk/ Retrieved: Aug, 2021

Adversarial and Interpretable ML — Publications https://oatml.cs.ox.ac.uk/tags/adversarial_interpretability.html#title Retrieved: Aug, 2021

Allen Institute for AI https://allenai.org/ Retrieved: Aug, 2021

AI2 Machine Learning Seminars https://www.cs.washington.edu/research/ml/seminars Retrieved: Aug, 2021

Verified AI https://berkeleylearnverify.github.io/VerifiedAIWebsite/ Retrieved: Aug, 2021

Sanjit A. Seshia research group https://people.eecs.berkeley.edu/~sseshia/ Retrieved: Aug, 2021

Bosch AI https://www.bosch-ai.com/ Retrieved: Aug, 2021

Rich and Explainable Deep Learning https://www.bosch-ai.com/research/research-fields/rich_and_explainable_deep_learning_perception/ Retrieved: Aug, 2021

Research Engineer – Robust and Explainable AI Methods https://www.mendeley.com/careers/job/research-engineer-robust-and-explainable-ai-methods-690764 Retrieved: Aug, 2021

Yandex Shift Challenge https://research.yandex.com/shifts Retrieved: Aug, 2021

Adversa https://adversa.ai/ Retrieved: Aug, 2021

De Jimenez, Rina Elizabeth Lopez. "Pentesting on web applications using ethical-hacking." 2016 IEEE 36th Central American and Panama Convention (CONCAPAN XXXVI). IEEE, 2016.

The Road to Secure and Trusted AI https://adversa.ai/report-secure-and-trusted-ai/ Retrieved: Aug, 2021

Robust AI https://www.robust.ai/ Retrieved: Aug, 2021

Bengio, Yoshua, Yann LeCun, and Geoffrey Hinton. "Deep learning for AI." Communications of the ACM 64.7 (2021): 58-65.

Center for Long-Term Cybersecurity University of California, Berkeley. https://cltc.berkeley.edu/ Retrieved: Sep, 2021

Center for Long-Term Cybersecurity University of California, Berkeley, Robust ML. https://cltc.berkeley.edu/?s=robust Retrieved: Sep, 2021

Kuprijanovskij, V. P., et al. "Optimization of resource use in the digital economy" (in Russian). International Journal of Open Information Technologies 4.12 (2016).

Kuprijanovskij, V. P., et al. "The digital economy and the Internet of Things: overcoming data silos" (in Russian). International Journal of Open Information Technologies 4.8 (2016): 36-42.

Incident Database https://incidentdatabase.ai/ Retrieved: Sep, 2021

Robust and Secure AI https://resources.sei.cmu.edu/asset_files/WhitePaper/2021_019_001_735346.pdf Retrieved: Sep, 2021

Artificial Intelligence Engineering https://sei.cmu.edu/our-work/artificial-intelligence-engineering/ Retrieved: Sep, 2021


ISSN: 2307-8162