From our new paper:
Monday, March 25, 2024
On Real-Time Model Inversion Attacks Detection
The article addresses the detection of adversarial attacks on machine learning models. In the most general case, an adversarial attack is a deliberate manipulation of data at some stage of the machine learning pipeline, designed either to disrupt the operation of the machine learning system or, conversely, to achieve a result desired by the attacker. Besides the well-known poisoning and evasion attacks, there are also attacks aimed at extracting sensitive information from machine learning models; model inversion attacks belong to this class. Such attacks pose a threat to machine learning as a service (MLaaS): models accumulate a great deal of redundant information during training, and the possibility of recovering this data through queries to the deployed model can become a serious problem.
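As a rough illustration of what a model inversion attack looks like (this is not the detection method from the paper), the sketch below follows the classic gradient-based recipe of Fredrikson et al.: starting from a blank input, an attacker with access to a classifier's confidences performs gradient ascent on the target-class score to reconstruct a representative input for that class. The toy MLP with random weights is only a placeholder victim; in a real attack the model would be one trained on sensitive data, e.g. face images.

```python
import torch
import torch.nn as nn

# Placeholder victim model: a small MLP classifier over 28x28 inputs.
# In practice this would be a model served via an MLaaS endpoint.
victim = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
victim.eval()

def invert_class(model, target_class, steps=500, lr=0.1, lam=1e-3):
    """Reconstruct an input the model assigns to `target_class` with high
    confidence, via gradient ascent on the class log-probability."""
    x = torch.zeros(1, 1, 28, 28, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Maximize the target-class log-probability; the small L2 penalty
        # keeps the reconstruction in a plausible range.
        loss = -torch.log_softmax(logits, dim=1)[0, target_class] \
               + lam * x.pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep pixel values in a valid image range
    return x.detach()

reconstruction = invert_class(victim, target_class=3)
print(reconstruction.shape)  # torch.Size([1, 1, 28, 28])
```

A detection system of the kind discussed in the paper would observe the long sequence of closely related queries such an attack generates against the served model; the sketch here only shows the attacker's side.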