How we can save anonymization
In a talk for PEPR ‘24, Daniel Simmons-Marengo explains why anonymization is at risk, and what we can do to safeguard user trust going forward.
When organizations claim that they anonymize data, they offer a simple promise to users: "This data can't harm you, so you don't need to worry about it." Sadly, this promise has been repeatedly broken as "anonymous" datasets have been reidentified. These failures are driving growing skepticism that anonymization is possible at all, and even a belief that anyone claiming to anonymize data is a fraud.
How can privacy professionals fix this situation and regain the trust of users? In a talk for PEPR '24, Daniel Simmons-Marengo outlines a principled yet practical solution: five operational principles that anonymization techniques must satisfy to live up to their promises.
Watch the presentation below.
At Tumult Labs, our mission is to help organizations safely share or publish data using differential privacy, a proven approach to quantifying privacy risk, and a possible implementation of the principles outlined by Daniel in his talk. If you’d like to hear more, don’t hesitate to reach out!
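To give a flavor of what differential privacy looks like in practice, here is a minimal sketch of its most classic building block, the Laplace mechanism applied to a counting query. This is an illustrative toy, not Tumult Labs' implementation: the function names and parameters are our own, and production systems handle sensitivity analysis, privacy accounting, and floating-point subtleties that this sketch omits.

```python
import math
import random

def laplace_noise(scale, rng=random):
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon, rng=random):
    # A counting query changes by at most 1 when one person's data is
    # added or removed (sensitivity 1), so adding Laplace(0, 1/epsilon)
    # noise makes the released count epsilon-differentially private.
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Example: release a noisy count with a privacy budget of epsilon = 1.
noisy = dp_count(true_count=100, epsilon=1.0)
```

Smaller values of `epsilon` mean more noise and a stronger, quantifiable privacy guarantee; that quantification is what distinguishes differential privacy from ad hoc anonymization techniques.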