On Model Inversion Attacks

Dmitry Namiot
Attacks on machine learning systems are defined as special actions on the elements of the machine learning pipeline that are designed either to prevent the normal operation of the system or to force it to behave in a way the attacker needs. Model inversion attacks aim to expose the private data used to train the model. Attacks that expose such private information are a major threat to Machine Learning as a Service (MLaaS) projects. In this article, we provide an overview of off-the-shelf software tools for performing model inversion attacks.
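
To illustrate the idea, a minimal sketch of a gradient-based model inversion attack is given below: an input is optimized to maximize the model's confidence for a chosen class, so that the result approximates a representative training example. The model, input shape, and hyperparameters here are illustrative assumptions, not part of any of the surveyed tools.

```python
# Minimal sketch of a gradient-based model inversion attack.
# Assumes a trained PyTorch classifier `model`; all names and
# hyperparameters below are illustrative.
import torch
import torch.nn as nn

def invert_class(model: nn.Module, target_class: int,
                 input_shape=(1, 1, 28, 28),
                 steps: int = 500, lr: float = 0.1) -> torch.Tensor:
    """Reconstruct a representative input for `target_class`
    by gradient-based optimization of the model's confidence."""
    model.eval()
    # Start from an all-zero guess and optimize the input directly.
    x = torch.zeros(input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Minimizing cross-entropy for the target class is equivalent
        # to maximizing the model's confidence in that class.
        loss = nn.functional.cross_entropy(logits, target)
        loss.backward()
        optimizer.step()
        # Keep the reconstruction in a valid pixel range.
        with torch.no_grad():
            x.clamp_(0.0, 1.0)
    return x.detach()
```

The quality of such a reconstruction depends strongly on the model and the data: it works best when each class corresponds to a narrow region of the input space (for example, face recognition, where a class is one person).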