Results
- 2023
- Working Paper
In-Context Unlearning: Language Models as Few Shot Unlearners
By: Martin Pawelczyk, Seth Neel and Himabindu Lakkaraju
Machine unlearning, the study of efficiently removing the impact of specific training points on the trained model, has garnered increased attention of late, driven by the need to comply with privacy regulations like the Right to be Forgotten. Although unlearning is... (A sketch of the general idea appears after this entry.)
Pawelczyk, Martin, Seth Neel, and Himabindu Lakkaraju. "In-Context Unlearning: Language Models as Few Shot Unlearners." Working Paper, October 2023.
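The listing only hints at the approach, so as a loose illustration (not the paper's actual procedure), here is a minimal Python sketch of unlearning at inference time by manipulating a few-shot prompt: the point to be forgotten is shown with a deliberately incorrect label while other examples keep their true labels. `build_unlearning_prompt`, `query_llm`, and the sentiment format are hypothetical names introduced for this sketch.

```python
# Illustrative sketch only: unlearning a training point at inference time by
# manipulating the few-shot context instead of updating model weights. The prompt
# format, labels, and `query_llm` are hypothetical, not the paper's procedure.
import random


def build_unlearning_prompt(forget_examples, retain_examples, query_text, labels):
    """Show each 'forget' point with a deliberately different label, keep true
    labels for the retained examples, then append the query."""
    blocks = []
    for text, true_label in forget_examples:
        flipped = random.choice([l for l in labels if l != true_label])
        blocks.append(f"Review: {text}\nSentiment: {flipped}")
    for text, true_label in retain_examples:
        blocks.append(f"Review: {text}\nSentiment: {true_label}")
    blocks.append(f"Review: {query_text}\nSentiment:")
    return "\n\n".join(blocks)


def query_llm(prompt: str) -> str:
    """Placeholder for any completion API call."""
    raise NotImplementedError("plug in an LLM completion call here")


# Usage (illustrative):
# prompt = build_unlearning_prompt(
#     forget_examples=[("The plot dragged badly.", "negative")],
#     retain_examples=[("A delightful surprise.", "positive"),
#                      ("Flat and forgettable.", "negative")],
#     query_text="The plot dragged badly.",
#     labels=["positive", "negative"],
# )
# prediction = query_llm(prompt)
```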
- Article
Adaptive Machine Unlearning
By: Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi and Chris Waites
Data deletion algorithms aim to remove the influence of deleted data points from trained models at a cheaper computational cost than fully retraining those models. However, for sequences of deletions, most prior work in the non-convex setting gives valid guarantees... (A generic sketch of the deletion-sequence setting appears after this entry.)
Gupta, Varun, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Chris Waites. "Adaptive Machine Unlearning." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
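The snippet describes the setting (a stream of deletion requests that must be served more cheaply than full retraining) rather than the algorithm itself. The scaffold below, with hypothetical `train` and `partial_update` callables, only illustrates that interface; it is not the paper's adaptive-unlearning method.

```python
# Generic scaffolding for the setting the snippet describes: a stream of deletion
# requests honored more cheaply than retraining from scratch each time. This is
# NOT the paper's adaptive-unlearning algorithm; `train` and `partial_update` are
# hypothetical callables supplied by whatever concrete method is in use.
from typing import Callable, List, Sequence, Set


def serve_deletions(
    data: List,
    train: Callable[[List], object],                    # full (re)training routine
    partial_update: Callable[[object, List], object],   # cheap unlearning update (hypothetical)
    deletion_stream: Sequence[int],                     # requested indices, possibly chosen adaptively
    retrain_every: int = 50,                            # occasional full retrain as a fallback (assumption)
):
    """Process deletion requests, mixing cheap updates with periodic full retrains."""
    deleted: Set[int] = set()
    model = train(data)
    for t, idx in enumerate(deletion_stream, start=1):
        deleted.add(idx)
        remaining = [x for i, x in enumerate(data) if i not in deleted]
        if t % retrain_every == 0:
            model = train(remaining)                    # exact but expensive
        else:
            model = partial_update(model, remaining)    # approximate but cheap
    return model
```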
- Mar 2021
- Conference Presentation
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning
By: Seth Neel, Aaron Leon Roth and Saeed Sharifi-Malvajerdi
We study the data deletion problem for convex models. By leveraging techniques from convex optimization and reservoir sampling, we give the first data deletion algorithms that are able to handle an arbitrarily long sequence of adversarial updates while promising both... (A sketch of the descent-then-perturb idea appears after this entry.)
Neel, Seth, Aaron Leon Roth, and Saeed Sharifi-Malvajerdi. "Descent-to-Delete: Gradient-Based Methods for Machine Unlearning." Paper presented at the 32nd Algorithmic Learning Theory Conference, March 2021.
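Based only on the title and snippet (gradient-based unlearning for convex models), here is a hedged sketch of one natural instantiation: after a deletion, continue gradient descent from the current parameters on the remaining data, then perturb the published model with Gaussian noise. The ridge loss, step count, and noise scale below are illustrative assumptions, not the paper's calibrated constants or guarantees.

```python
# Hedged sketch of gradient-based unlearning for a convex model: after a deletion,
# take a few gradient steps from the current parameters on the remaining data and
# add Gaussian noise before publishing. All hyperparameters are assumptions.
import numpy as np


def grad_ridge(theta, X, y, lam=0.1):
    """Gradient of regularized least squares (a strongly convex surrogate loss)."""
    n = len(y)
    return X.T @ (X @ theta - y) / n + lam * theta


def unlearn_by_descent(theta, X, y, delete_idx, steps=10, lr=0.1, noise_scale=1e-3, rng=None):
    """Remove row `delete_idx`, continue descent from the current parameters on the
    remaining data, and perturb the published model."""
    rng = rng or np.random.default_rng(0)
    keep = np.ones(len(y), dtype=bool)
    keep[delete_idx] = False
    X_rem, y_rem = X[keep], y[keep]
    for _ in range(steps):
        theta = theta - lr * grad_ridge(theta, X_rem, y_rem)
    published = theta + rng.normal(scale=noise_scale, size=theta.shape)
    return theta, published  # internal state vs. noisy published model


# Usage (illustrative):
# rng = np.random.default_rng(1)
# X, y = rng.normal(size=(200, 5)), rng.normal(size=200)
# theta = np.zeros(5)
# for _ in range(200):                      # initial training
#     theta -= 0.1 * grad_ridge(theta, X, y)
# theta, published = unlearn_by_descent(theta, X, y, delete_idx=17)
```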