On the effectiveness of adversarial attacks on the robustness of medical image recognition by deep neural networks

Eugene Yu. Shchetinin, Anastasia Glushkova, Yury Blinkov
The robustness of deep neural networks to adversarial attacks is investigated. The aim of the work is to study how the effectiveness of adversarial attacks depends on the type of biomedical images being recognized and on the values of the control parameters of the algorithms that generate the attacking versions of those images. Experiments were carried out on typical medical diagnostic tasks using the deep neural networks VGG16, EfficientNetB2, DenseNet121, and Xception, with data containing chest X-ray images and brain tumor MRI scans. Neural networks trained for classification with generative deep learning proved more robust to adversarial attacks than networks trained for classification with transfer learning methods.
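The abstract refers to algorithms that generate attacking versions of images and to their control parameters without reproducing them here. As a minimal illustrative sketch (not the authors' implementation), the TensorFlow/Keras fragment below implements the fast gradient sign method (FGSM), one common attack of this kind, in which the perturbation strength epsilon plays the role of such a control parameter; the function name and signature are assumptions for illustration only.

```python
import tensorflow as tf

def fgsm_attack(model, image, label, epsilon):
    """Generate an FGSM adversarial version of a single image.

    model   -- a trained Keras classifier (e.g. VGG16 fine-tuned on X-rays)
    image   -- input image scaled to [0, 1], shape (H, W, C)
    label   -- one-hot ground-truth label
    epsilon -- attack strength, the control parameter varied in such studies
    """
    image = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    label = tf.convert_to_tensor(label[None, ...], dtype=tf.float32)

    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image, training=False)
        loss = tf.keras.losses.categorical_crossentropy(label, prediction)

    # Perturb the input in the direction of the sign of the loss gradient.
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)

    # Keep the attacked image in the valid intensity range.
    return tf.clip_by_value(adversarial, 0.0, 1.0)[0]
```

Sweeping epsilon over a small grid (e.g. 0.01-0.1) and measuring the drop in classification accuracy is one way to quantify the dependence of attack effectiveness on the control parameter described in the abstract.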