TY - JOUR
T1 - Hybrid two-level protection system for preserving pre-trained DNN models ownership
AU - Fkirin, Alaa
AU - Moursi, Ahmed Samy
AU - Attiya, Gamal
AU - El-Sayed, Ayman
AU - Shouman, Marwa A.
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/12
Y1 - 2024/12
N2 - Recent advancements in deep neural networks (DNNs) have made them indispensable for numerous commercial applications, including healthcare systems and self-driving cars. Training DNN models typically demands substantial time, vast datasets and high computational cost. However, these valuable models face significant risks: attackers can steal and sell pre-trained DNN models for profit, and unauthorised sharing poses a serious threat, since once sold the models can be easily copied and redistributed. A well-built pre-trained DNN model is therefore a valuable asset that requires protection. This paper introduces a robust hybrid two-level protection system for safeguarding the ownership of pre-trained DNN models. The first level employs zero-bit watermarking; the second level incorporates an adversarial attack as a watermark, using a perturbation technique to embed it. The robustness of the proposed system is evaluated against seven types of attack: Fast Gradient Method, Auto Projected Gradient Descent, Auto Conjugate Gradient, Basic Iterative Method, Momentum Iterative Method, Square Attack and Auto Attack. The proposed two-level protection system withstands all seven attack types while maintaining accuracy, and it surpasses current state-of-the-art methods.
AB - Recent advancements in deep neural networks (DNNs) have made them indispensable for numerous commercial applications, including healthcare systems and self-driving cars. Training DNN models typically demands substantial time, vast datasets and high computational cost. However, these valuable models face significant risks: attackers can steal and sell pre-trained DNN models for profit, and unauthorised sharing poses a serious threat, since once sold the models can be easily copied and redistributed. A well-built pre-trained DNN model is therefore a valuable asset that requires protection. This paper introduces a robust hybrid two-level protection system for safeguarding the ownership of pre-trained DNN models. The first level employs zero-bit watermarking; the second level incorporates an adversarial attack as a watermark, using a perturbation technique to embed it. The robustness of the proposed system is evaluated against seven types of attack: Fast Gradient Method, Auto Projected Gradient Descent, Auto Conjugate Gradient, Basic Iterative Method, Momentum Iterative Method, Square Attack and Auto Attack. The proposed two-level protection system withstands all seven attack types while maintaining accuracy, and it surpasses current state-of-the-art methods.
KW - Adversarial attack
KW - Copyright protection
KW - DNN
KW - Ownership
KW - Watermarking
UR - http://www.scopus.com/inward/record.url?scp=85202613497&partnerID=8YFLogxK
U2 - 10.1007/s00521-024-10304-0
DO - 10.1007/s00521-024-10304-0
M3 - Article
AN - SCOPUS:85202613497
SN - 0941-0643
VL - 36
SP - 21415
EP - 21449
JO - Neural Computing and Applications
JF - Neural Computing and Applications
IS - 34
ER -