Criteria analysis of radiation nondestructive testing data processing models

V.D. Korchagin, V.S. Kuvshinnikov, E.E. Kovshov

Abstract


This paper presents a study of neural network models for processing radiation nondestructive testing data in the context of production defect detection. The analysis builds on the results of the authors' previous survey of current state-of-the-art (SOTA) architectures for image classification and object detection. The study evaluates the performance of the following neural network models: ResNet, EfficientNet, VGGNet, MobileNet, and ViT. The evaluation relies on repeated measurements of inference time, both for individual images and for a full pass over the dataset, as well as on training speed and accuracy as functions of training-sample size and base-model complexity. All models were trained from scratch, without the use of pre-trained weights. A dataset containing both labeled and unlabeled images of defects in metals of various types was compiled from several public sources.
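The timing methodology described above can be sketched as a small benchmarking harness. This is an illustrative stand-in, not the evaluation code used in the study: here `model` is any callable mapping one input sample to a prediction, and `dataset` is any sequence of input samples.

```python
import time
import statistics

def benchmark(model, dataset, repeats=5):
    """Measure per-image latency and full-dataset wall time for a model.

    Returns (median per-image latency in seconds,
             duration of one full pass over the dataset in seconds).
    """
    # Per-image latency: time the same sample several times and take
    # the median to suppress scheduler and cache noise.
    sample = dataset[0]
    latencies = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        model(sample)
        latencies.append(time.perf_counter() - t0)

    # Full-dataset throughput: one sequential sweep over all samples.
    t0 = time.perf_counter()
    for x in dataset:
        model(x)
    full_pass = time.perf_counter() - t0

    return statistics.median(latencies), full_pass
```

In the study, `model` would be one of the compared networks (ResNet, EfficientNet, VGGNet, MobileNet, or ViT) and `dataset` the compiled defect-image set; taking the median over repeated single-image runs is one common way to obtain the "multiple measurements of time characteristics" the abstract refers to.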

The results indicate that using images alone as the input tensor is not sufficient to reach the accuracy required for the task at hand; further investigation of models capable of incorporating additional meta-information is therefore needed. The obtained results are of practical importance for designing a neural network architecture that complements image-retrieval algorithms operating on radiation testing results in industry.

Full Text:

PDF (Russian)

References


Non-destructive radiation testing. Terms and definitions. Russian State Standard GOST R 55776-2013, Moscow, 2019. [in Russian]

D.A. Vishnevsky, “Analysis of the influence of the human factor on the reliability of metallurgical equipment,” Sbornik nauchnyh trudov GOU VPO LNR «DonGTU», no. 12(55), 2018. [in Russian]

S.M. Goldobin, “Influence of the human factor on the appearance of product defects,” Metody menedzhmenta kachestva, no. 8, pp. 54–57, 2017. [in Russian]

D.A. Vishnevsky, “Influence of human factor on reliability of metallurgical and machine-building equipment,” Sbornik nauchnyh trudov GOU VPO LNR «DonGTU», no. 15(58), 2019. [in Russian]

E.E. Kovshov, V.S. Kuvshinnikov, D.F. Kazakov, “Virtual reality usage in the radiography simulator development for non-destructive testing personnel training,” Kontrol'. Diagnostika, vol. 24, no. 7, pp. 34–40, 2021. [in Russian]. DOI: 10.14489/td.2021.07.pp.034-040.

E.E. Kovshov, V.S. Kuvshinnikov, D.F. Kazakov, “Radiographic image of a non-destructive testing object generation in a virtual reality environment,” Kontrol'. Diagnostika, vol. 24, no. 8, pp. 14–22, 2021. [in Russian]. DOI: 10.14489/td.2021.08.pp.014-022.

E.E. Kovshov, V.S. Kuvshinnikov, “Testing object’s material physical properties simulation in the industrial radiography VR environment,” Kontrol'. Diagnostika, vol. 26, no. 2, pp. 4–12, 2023. [in Russian]. DOI: 10.14489/td.2023.02.pp.004-012.

E.E. Kovshov, V.S. Kuvshinnikov, D.F. Kazakov, “The use of digital twins models while a radiographic image formation in a virtual reality environment,” Kontrol'. Diagnostika, vol. 26, no. 9, pp. 4–15, 2023. [in Russian]. DOI: 10.14489/td.2023.09.pp.004-015.

M. Tan et al., “MnasNet: Platform-aware neural architecture search for mobile,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2820–2828.

M. Tan, Q. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in International Conference on Machine Learning, 2019, pp. 6105–6114.

V.D. Korchagin, “Analysis of modern SOTA-architectures of artificial neural networks for solving problems of image classification and object detection,” Software Systems and Computational Methods, no. 4, pp. 73–87, 2023. [in Russian]. DOI: 10.7256/2454-0714.2023.4.

T. Ahmed, N.H.N. Sabab, “Classification and understanding of cloud structures via satellite images with EfficientUNet,” SN Computer Science, vol. 3, no. 1, p. 99, 2022.

K. He et al., “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.

S. Molčan et al., “Classification of Red Blood Cells Using Time-Distributed Convolutional Neural Networks from Simulated Videos,” Applied Sciences, vol. 13, no. 13, p. 7967, 2023.

A. Howard et al., “Searching for MobileNetV3,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1314–1324.

A visual deep-dive into the building blocks of MobileNetV3. URL: https://francescopochetti.com/a-visual-deep-dive-into-the-building-blocks-of-mobilenetv3/

VGG16 From Scratch | Computer Vision With Keras. URL: https://pysource.com/2022/10/04/vgg16-from-scratch-computer-vision-with-keras-p-7/

K. Simonyan, A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proceedings of the 3rd International Conference on Learning Representations, 2014.

J. Hu, L. Shen, G. Sun, “Squeeze-and-excitation networks,” In Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132–7141.

A. Dosovitskiy et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” in International Conference on Learning Representations, 2021.

A. Vaswani et al., “Attention is all you need,” Advances in Neural Information Processing Systems, vol. 30, 2017.

Z. Dai et al., “CoAtNet: Marrying convolution and attention for all data sizes,” Advances in Neural Information Processing Systems, vol. 34, pp. 3965–3977, 2021.

J. Yu et al., “CoCa: Contrastive captioners are image-text foundation models,” arXiv preprint arXiv:2205.01917, 2022.





ISSN: 2307-8162