Intrinsic Examples: Robust Fingerprinting of Deep Neural Networks


This paper proposes intrinsic examples as a DNN fingerprinting technique for verifying the functionality of DNN models deployed on edge devices. Intrinsic examples do not interfere with normal DNN training and enable black-box testing of DNN models packaged into edge-device applications. We provide three algorithms for deriving intrinsic examples from the pre-trained model (the model before the DNN system design and implementation procedure); these examples retrieve the knowledge learnt from the training dataset and detect adversarial third-party attacks, such as transfer learning and fault injection attacks, that may occur during system implementation. Moreover, intrinsic examples accommodate the model transformations introduced by the various DNN model compression methods used by the system designer.
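The black-box verification workflow described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function `verify_fingerprint`, the match threshold, and the toy model are all hypothetical stand-ins, assuming a fingerprint is a set of intrinsic examples paired with the pre-trained model's labels.

```python
# Hypothetical sketch of black-box fingerprint verification.
# Assumption: a fingerprint is a list of (example, expected_label) pairs
# derived from the pre-trained model; the deployed model is queried only
# through its prediction interface (black-box access).

def verify_fingerprint(model_fn, fingerprint, threshold=0.9):
    """Query the deployed model on the stored intrinsic examples and
    check that enough predictions still match the pre-trained model's
    labels, tolerating small shifts from model compression."""
    matches = sum(1 for x, label in fingerprint if model_fn(x) == label)
    return matches / len(fingerprint) >= threshold

# Toy stand-in for a deployed classifier: predicts by the sign of the input sum.
model = lambda x: int(sum(x) > 0)

# Toy fingerprint: intrinsic examples with the pre-trained model's labels.
fingerprint = [([0.5, 0.2], 1), ([-1.0, -0.3], 0), ([2.0, 1.0], 1)]

print(verify_fingerprint(model, fingerprint))   # matches -> verified
```

A tampered model (e.g. after a fault injection attack) would flip predictions on some intrinsic examples, drop the match rate below the threshold, and fail verification.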

32nd British Machine Vision Conference 2021, BMVC 2021, Online, November 22-25, 2021
Xiao (Kieran) Wang
Ph.D. Student