Notes on LLMs and privacy leakage
A paper that demonstrates the importance of privacy-preserving machine learning
TL;DR
These notes cover attacks against large language models (LLMs) that can reveal personal data contained in their training data. They draw on a 2020 paper authored by researchers and engineers from Google, OpenAI, Apple and several universities.
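The core signal behind such attacks is that a model assigns unusually high likelihood (low perplexity) to sequences it has memorised from training. Below is a minimal illustrative sketch of that idea using a toy character-level bigram model rather than a real LLM; the planted "secret" string and all names are hypothetical, not from the paper.

```python
import math
from collections import defaultdict

VOCAB_SIZE = 95  # printable ASCII, for add-one smoothing

def train_bigram(corpus):
    # Count character bigram occurrences in the training corpus
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def log_likelihood(counts, text):
    # Add-one smoothed log-likelihood of text under the bigram model
    ll = 0.0
    for a, b in zip(text, text[1:]):
        total = sum(counts[a].values())
        ll += math.log((counts[a][b] + 1) / (total + VOCAB_SIZE))
    return ll

def perplexity(counts, text):
    # Per-bigram perplexity: lower means the model finds the text more familiar
    n = max(len(text) - 1, 1)
    return math.exp(-log_likelihood(counts, text) / n)

# Hypothetical training corpus with a planted "secret" record
secret = "SSN: 078-05-1120"
corpus = ("the quick brown fox jumps over the lazy dog. " * 20) + secret

model = train_bigram(corpus)

# The memorised secret scores markedly lower perplexity than an unseen
# string of similar shape, which is the signal an attacker ranks on
print(perplexity(model, secret))
print(perplexity(model, "ZQX: 913-47-8821"))
```

In the actual paper the attackers sample many outputs from GPT-2 and rank them by likelihood-based metrics to surface memorised content; this sketch only shows why a memorised string stands out.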
The experiment conducted for this paper was performed on GPT-2, an older iteration of OpenAI's LLMs; its latest such…