*Image Credit: Scientific Frontline*
Security researchers have developed the first functional defense mechanism against “cryptanalytic” attacks, which are used to “steal” the model parameters that define how an AI system works.
“AI systems are valuable intellectual property, and cryptanalytic parameter extraction attacks are the most efficient, effective, and accurate way to ‘steal’ that intellectual property,” says Ashley Kurian, first author of a paper on the work and a Ph.D. student at North Carolina State University. “Until now, there has been no way to defend against those attacks. Our technique effectively protects against these attacks.”
“Cryptanalytic attacks are already happening, and they’re becoming more frequent and more efficient,” says Aydin Aysu, corresponding author of the paper and an associate professor of electrical and computer engineering at NC State. “We need to implement defense mechanisms now, because implementing them after an AI model’s parameters have been extracted is too late.”