Nicolas Papernot

2025

My research focuses on machine learning, a form of artificial intelligence (AI) that has become a general-purpose technology. Machine learning models are vulnerable to attacks: they misbehave in the presence of adversaries. For example, we showed that an adversary can find inputs that force a model to make systematically incorrect predictions, even without any information about the model's internals. This made it clear that attacks against machine learning were no longer a hypothetical threat. My group also demonstrated that models can leak the data they were trained on, particularly when the intermediate updates applied to a model are revealed because training is distributed across multiple parties. We address these challenges by designing machine learning algorithms with security and privacy in mind. The work of my group and our collaborators has shown that properties of the input domain can be exploited to strengthen a model's robustness. We also designed learning algorithms with provable privacy guarantees, either by noising and encrypting the data contributed by individuals or by enabling individuals to remove their data after training completes. Ongoing work continues to improve the trustworthiness of machine learning algorithms and supports efforts to regulate AI by making machine learning algorithms easier to audit.
