KITE blog: Are Machine Learning Models Vulnerable?

Thanks to the availability of large-scale datasets and affordable computing resources, the field of machine learning has witnessed rapid progress over the past decade. With the widespread adoption of techniques such as facial recognition and person re-identification, security concerns cannot be overstated.

In parallel with improving the overall performance of machine learning models, significant effort has gone into understanding their vulnerabilities. In this blog, I outline four types of attacks that are predominant in the literature: adversarial example attacks, membership inference attacks, model extraction attacks, and model inversion attacks.
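To make the first of these concrete, below is a minimal NumPy sketch of one classic adversarial example attack, the Fast Gradient Sign Method (FGSM): the input is nudged in the direction of the sign of the loss gradient so that the model's confidence drops. The logistic-regression weights and the input here are made up for illustration; they are not from any model discussed in the blog post.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon=0.25):
    """Craft an adversarial version of input x for binary label y.

    Perturbs x by epsilon times the sign of the gradient of the
    cross-entropy loss with respect to the input (FGSM).
    """
    p = sigmoid(w @ x + b)                # model's predicted probability
    grad_x = (p - y) * w                  # d(cross-entropy)/dx for logistic regression
    return x + epsilon * np.sign(grad_x)  # step that increases the loss

# Toy weights and a correctly classified input (made up for this sketch).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.5, -0.5, 1.0])
y = 1  # true label

x_adv = fgsm(x, y, w, b)
print(sigmoid(w @ x + b))      # confidence on the clean input (~0.88)
print(sigmoid(w @ x_adv + b))  # lower confidence on the perturbed input (~0.75)
```

Even this tiny perturbation, bounded by epsilon per coordinate, measurably degrades the model's confidence; on deep image classifiers the same idea can flip predictions while the change stays imperceptible to humans.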

Read the blog post by Xingyang Ni on the KITE project website.