EXPLORING FAULT INJECTION ATTACKS ON SEGMENTATION DNN MODELS

Researcher(s)

  • Saravanakrishnan Balamurugan, Electrical Engineering, National Institute of Technology, Tiruchirappalli, India

Faculty Mentor(s)

  • Chengmo Yang, Electrical and Computer Engineering, University of Delaware

Abstract

Deep Neural Networks (DNNs) have become a cornerstone in image segmentation tasks across various domains, including autonomous driving, medical imaging, and remote sensing. Meanwhile, concerns about the robustness and security of these models have emerged. This study explores fault injection attacks on segmentation DNN models to understand their vulnerabilities and potential impacts.

In this research, the effects of clock-glitch fault injection attacks on Feature Pyramid Network (FPN) segmentation models were identified and analyzed. Specifically, clock glitches were applied at various time offsets during inference to determine which classes in an image are most affected. For deeper analysis, the number of glitches and their widths were varied, and the resulting changes in the output were observed. Evaluation metrics such as Intersection over Union (IoU) and pixel difference were used to compare the fault-free and faulty segmentation outputs.
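The two comparison metrics mentioned above can be sketched as follows. This is a minimal illustration, not the study's actual evaluation code; it assumes segmentation outputs are 2-D arrays of per-pixel class labels, with per-class IoU computed on binary masks and pixel difference taken as the fraction of pixels whose predicted class changed.

```python
import numpy as np

def class_iou(clean, faulty, cls):
    """IoU of one class between clean and faulty segmentation maps.

    clean, faulty: 2-D integer arrays of per-pixel class labels.
    """
    a = clean == cls          # binary mask of the class in the clean output
    b = faulty == cls         # binary mask of the class in the faulty output
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0            # class absent in both maps: treat as identical
    return np.logical_and(a, b).sum() / union

def pixel_difference(clean, faulty):
    """Fraction of pixels whose predicted class changed after the glitch."""
    return float(np.mean(clean != faulty))
```

A low per-class IoU after a glitch indicates that class was heavily corrupted, while pixel difference gives a class-agnostic measure of overall damage.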

It was observed that glitches injected within roughly the first 70% of inference time can corrupt different classes across the image, while glitches injected at offsets near the end of inference affect specific columns of pixels. The findings demonstrate a linear relationship between the glitch offset and the affected pixel-column region, enabling controlled targeting of those columns through a derived equation. A higher number of glitches corrupts more columns of pixels, which poses significant risks in safety-critical applications such as autonomous driving.
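The linear offset-to-column relationship could be derived empirically along these lines. The numbers below are entirely hypothetical (the abstract does not give the actual coefficients or offset units); the point is only that a least-squares fit of observed (offset, affected-column) pairs yields a linear model that can be inverted to pick the offset targeting a desired column.

```python
import numpy as np

# Hypothetical measurements: glitch offset vs. first affected pixel column.
# These values are illustrative only, chosen to mimic a linear trend.
offsets = np.array([8000.0, 8500.0, 9000.0, 9500.0, 10000.0])
columns = np.array([40.0, 90.0, 140.0, 190.0, 240.0])

# Least-squares linear fit: column ~ slope * offset + intercept
slope, intercept = np.polyfit(offsets, columns, 1)

def target_offset(column):
    """Invert the fitted line to choose the glitch offset for a given column."""
    return (column - intercept) / slope
```

In an actual attack, an adversary would calibrate such a fit on a profiling device and then use `target_offset` to steer glitches toward columns covering a chosen image region.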