Faculty Publications (24)
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence whether a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
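The perturbation idea admits a compact sketch. Below is a minimal, hypothetical PyTorch illustration of a perturbation-based membership signal, assuming a trained `model` and a pointwise `loss_fn`; the noise scale and perturbation count are placeholders, and this is not the paper's implementation.

```python
# Hypothetical sketch: average loss increase under random weight
# perturbations as a membership signal. A sharper loss increase
# (higher curvature around the trained weights) is treated as
# evidence that (x, y) appeared in the training data.
import copy
import torch

def perturbation_score(model, loss_fn, x, y, sigma=0.005, n_perturb=8):
    model.eval()
    with torch.no_grad():
        base_loss = loss_fn(model(x), y).item()
    increases = []
    for _ in range(n_perturb):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))  # Gaussian weight noise
            increases.append(loss_fn(noisy(x), y).item() - base_loss)
    return sum(increases) / len(increases)  # larger => more likely a member
```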
- 2023
- Working Paper
Black-box Training Data Identification in GANs via Detector Networks
By: Lukman Olagoke, Salil Vadhan and Seth Neel
Since their inception, Generative Adversarial Networks (GANs) have been popular generative models across images, audio, video, and tabular data. In this paper we study, given access to a trained GAN as well as fresh samples from the underlying distribution, whether...
Olagoke, Lukman, Salil Vadhan, and Seth Neel. "Black-box Training Data Identification in GANs via Detector Networks." Working Paper, October 2023.
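A rough sketch of the detector idea, under stated assumptions: train a binary classifier to separate GAN outputs from fresh real samples, then read its score on a candidate point as a membership signal. The scikit-learn model and the direction of the score are illustrative choices, not the paper's construction.

```python
# Illustrative detector-network attack skeleton (all data placeholders).
import numpy as np
from sklearn.neural_network import MLPClassifier

def fit_detector(gan_samples: np.ndarray, fresh_samples: np.ndarray):
    # Label GAN outputs 0 and fresh draws from the true distribution 1.
    X = np.vstack([gan_samples, fresh_samples])
    y = np.r_[np.zeros(len(gan_samples)), np.ones(len(fresh_samples))]
    return MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(X, y)

def membership_score(detector, candidate: np.ndarray) -> float:
    # A candidate that looks "GAN-like" to the detector is suspected of
    # being memorized training data; threshold on reference points.
    return 1.0 - detector.predict_proba(candidate.reshape(1, -1))[0, 1]
```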
- 2023
- Working Paper
In-Context Unlearning: Language Models as Few Shot Unlearners
By: Martin Pawelczyk, Seth Neel and Himabindu Lakkaraju
Machine unlearning, the study of efficiently removing the impact of specific training points on the trained model, has garnered increased attention of late, driven by the need to comply with privacy regulations like the Right to be Forgotten. Although unlearning is...
Pawelczyk, Martin, Seth Neel, and Himabindu Lakkaraju. "In-Context Unlearning: Language Models as Few Shot Unlearners." Working Paper, October 2023.
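One construction consistent with the abstract is to build the few-shot prompt so that the point to be unlearned appears with a flipped label among otherwise correctly labeled examples. The sketch below is schematic: the sentiment task, field names, and prompt template are assumptions, not the paper's exact protocol.

```python
# Schematic in-context unlearning prompt builder (hypothetical task).
def build_unlearning_prompt(forget_example, context_examples, query_text):
    # Present the forgotten point with its label flipped, then several
    # correctly labeled examples, then the actual query.
    flipped = "negative" if forget_example["label"] == "positive" else "positive"
    blocks = [f'Review: {forget_example["text"]}\nSentiment: {flipped}']
    for ex in context_examples:
        blocks.append(f'Review: {ex["text"]}\nSentiment: {ex["label"]}')
    blocks.append(f"Review: {query_text}\nSentiment:")
    return "\n\n".join(blocks)
```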
- 2023
- Working Paper
Feature Importance Disparities for Data Bias Investigations
By: Peter W. Chang, Leor Fishman and Seth Neel
It is widely held that one cause of downstream bias in classifiers is bias present in the training data. Rectifying such biases may involve context-dependent interventions such as training separate models on subgroups, removing features with bias in the collection...
Chang, Peter W., Leor Fishman, and Seth Neel. "Feature Importance Disparities for Data Bias Investigations." Working Paper, March 2023.
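As a toy illustration of the quantity at stake, one can compare a feature's importance on a given subgroup against the full dataset; the paper's contribution is searching over rich subgroup classes, which this sketch does not attempt. The estimator and the choice of permutation importance are assumptions.

```python
# Toy feature importance disparity for ONE pre-specified subgroup.
import numpy as np
from sklearn.inspection import permutation_importance

def importance_disparity(model, X, y, subgroup_mask, feature_idx, seed=0):
    full = permutation_importance(model, X, y, random_state=seed)
    sub = permutation_importance(model, X[subgroup_mask], y[subgroup_mask],
                                 random_state=seed)
    # Positive values mean the feature matters more on the subgroup.
    return sub.importances_mean[feature_idx] - full.importances_mean[feature_idx]
```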
- April 2023
- Article
On the Privacy Risks of Algorithmic Recourse
By: Martin Pawelczyk, Himabindu Lakkaraju and Seth Neel
As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected...
Pawelczyk, Martin, Himabindu Lakkaraju, and Seth Neel. "On the Privacy Risks of Algorithmic Recourse." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 206 (April 2023).
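A minimal sketch of one attack surface the abstract hints at, under assumptions: if a model's recourse for a point lies unusually close to the point itself, an adversary may read that as evidence of training set membership. The `recourse_fn` and the use of plain Euclidean distance are hypothetical.

```python
# Hypothetical distance-to-recourse membership signal.
import numpy as np

def recourse_distance_score(x: np.ndarray, recourse_fn) -> float:
    counterfactual = recourse_fn(x)  # recourse the system offers for x
    # Nearer recourse => higher score => more suspicion of membership.
    return -float(np.linalg.norm(counterfactual - x))
```

An attacker would calibrate a threshold for this score on reference points drawn from the same distribution.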
- 2023
- Working Paper
PRIMO: Private Regression in Multiple Outcomes
By: Seth Neel
We introduce a new differentially private regression setting we call Private Regression in Multiple Outcomes (PRIMO), inspired by the common situation where a data analyst wants to perform a set of l regressions while preserving privacy, where the covariates...
Neel, Seth. "PRIMO: Private Regression in Multiple Outcomes." Working Paper, March 2023.
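A schematic of the sufficient-statistics flavor of this setting, with loud caveats: the noised Gram matrix X^T X is computed once and shared across all l regressions, while each outcome pays only for its own X^T y_j. Noise scales are placeholders, not calibrated to any privacy budget, and this is not the paper's algorithm verbatim.

```python
# Sketch of a PRIMO-flavored multi-outcome private regression.
import numpy as np

def multi_outcome_private_regression(X, Y, noise_scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    noise = rng.normal(0.0, noise_scale, (d, d))
    gram = X.T @ X + (noise + noise.T) / 2   # symmetric noised X^T X, reused
    betas = []
    for j in range(Y.shape[1]):              # one regression per outcome
        xty = X.T @ Y[:, j] + rng.normal(0.0, noise_scale, d)
        betas.append(np.linalg.solve(gram + 1e-3 * np.eye(d), xty))
    return np.column_stack(betas)            # d x l coefficient matrix
```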
- September 2022 (Revised July 2023)
- Case
Data Privacy in Practice at LinkedIn
Bojinov, Iavor, Marco Iansiti, and Seth Neel. "Data Privacy in Practice at LinkedIn." Harvard Business School Case 623-024, September 2022. (Revised July 2023.)
- 2021
- Article
Adaptive Machine Unlearning
By: Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi and Chris Waites
Data deletion algorithms aim to remove the influence of deleted data points from trained models at a cheaper computational cost than fully retraining those models. However, for sequences of deletions, most prior work in the non-convex setting gives valid guarantees...
Gupta, Varun, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Chris Waites. "Adaptive Machine Unlearning." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- March 2021
- Conference Presentation
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning
By: Seth Neel, Aaron Leon Roth and Saeed Sharifi-Malvajerdi
We study the data deletion problem for convex models. By leveraging techniques from convex optimization and reservoir sampling, we give the first data deletion algorithms that are able to handle an arbitrarily long sequence of adversarial updates while promising both...
Neel, Seth, Aaron Leon Roth, and Saeed Sharifi-Malvajerdi. "Descent-to-Delete: Gradient-Based Methods for Machine Unlearning." Paper presented at the 32nd Algorithmic Learning Theory Conference, March 2021.
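The abstract's recipe admits a compact sketch for a convex loss: after a deletion request, take a few gradient steps on the remaining data starting from the current parameters, then release the result with Gaussian noise so it is statistically close to full retraining. The step count, learning rate, and noise scale below are illustrative, not the paper's calibration, and squared loss is an assumed example.

```python
# Sketch of descent-to-delete for least squares (placeholder scales).
import numpy as np

def unlearn(theta, X_remain, y_remain, lr=0.1, steps=5, sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        grad = X_remain.T @ (X_remain @ theta - y_remain) / len(y_remain)
        theta = theta - lr * grad                    # fine-tune on remaining data
    return theta + rng.normal(0.0, sigma, size=theta.shape)  # noisy release
```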
- 2021
- Article
Fair Algorithms for Infinite and Contextual Bandits
By: Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
We study fairness in linear bandit problems. Starting from the notion of meritocratic fairness introduced in Joseph et al. [2016], we carry out a more refined analysis of a more general problem, achieving better performance guarantees with fewer modelling assumptions...
Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Fair Algorithms for Infinite and Contextual Bandits." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society 4th (2021).
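The meritocratic fairness constraint in this line of work is often operationalized as never favoring one arm over another unless their confidence intervals separate. A toy version chains overlapping intervals together and plays uniformly over the chained set; the intervals themselves are assumed given.

```python
# Toy "chained confidence intervals" arm selection (intervals given).
import numpy as np

def fair_arm_choice(means, widths, rng):
    lower, upper = means - widths, means + widths
    active = {int(np.argmax(upper))}        # start from the most optimistic arm
    changed = True
    while changed:                          # chain in any arm whose interval
        changed = False                     # overlaps an already-active arm's
        for i in range(len(means)):
            if i not in active and any(upper[i] >= lower[j] for j in active):
                active.add(i)
                changed = True
    return int(rng.choice(sorted(active)))  # uniform over plausibly-best arms
```

For example, with means [0.5, 0.4, 0.1] and widths [0.1, 0.15, 0.05], arms 0 and 1 chain together (their intervals overlap) while arm 2 is excluded, so the choice is uniform over arms 0 and 1.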
- October 2020
- Conference Presentation
Optimal, Truthful, and Private Securities Lending
By: Emily Diana, Michael J. Kearns, Seth Neel and Aaron Leon Roth
We consider a fundamental dynamic allocation problem motivated by the problem of securities lending in financial markets, the mechanism underlying the short selling of stocks. A lender would like to distribute a finite number of identical copies of some scarce resource...
Diana, Emily, Michael J. Kearns, Seth Neel, and Aaron Leon Roth. "Optimal, Truthful, and Private Securities Lending." Paper presented at the 1st Association for Computing Machinery (ACM) International Conference on AI in Finance (ICAIF), October 2020.
- 2020
- Article
Oracle Efficient Private Non-Convex Optimization
By: Seth Neel, Aaron Leon Roth, Giuseppe Vietri and Zhiwei Steven Wu
One of the most effective algorithms for differentially private learning and optimization is objective perturbation. This technique augments a given optimization problem (e.g., one derived from an ERM problem) with a random linear term, and then exactly solves it....
Neel, Seth, Aaron Leon Roth, Giuseppe Vietri, and Zhiwei Steven Wu. "Oracle Efficient Private Non-Convex Optimization." Proceedings of the International Conference on Machine Learning (ICML) 37th (2020).
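The abstract describes objective perturbation concretely enough to sketch: add a random linear term to the objective and solve exactly. Below is an assumed ridge-regularized least squares instance; the noise scale is schematic, not a calibrated privacy parameter.

```python
# Objective perturbation on an assumed ridge least squares instance:
# minimize ||X theta - y||^2 + lam ||theta||^2 + b^T theta with random b,
# then solve exactly. Setting the gradient to zero gives
# (X^T X + lam I) theta = X^T y - b/2.
import numpy as np

def objective_perturbation_ls(X, y, lam=1.0, noise_scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    b = rng.normal(0.0, noise_scale, d)  # random linear term (scale schematic)
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y - b / 2.0)
```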
- 2021
- Conference Presentation
An Algorithmic Framework for Fairness Elicitation
By: Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton and Zhiwei Steven Wu
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders....
Jung, Christopher, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton, and Zhiwei Steven Wu. "An Algorithmic Framework for Fairness Elicitation." Paper presented at the 2nd Symposium on Foundations of Responsible Computing (FORC), 2021.
- Mar 2020
- Conference Presentation
A New Analysis of Differential Privacy's Generalization Guarantees
By: Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi and Moshe Shenfeld
We give a new proof of the "transfer theorem" underlying adaptive data analysis: that any mechanism for answering adaptively chosen statistical queries that is differentially private and sample-accurate is also accurate out-of-sample. Our new proof is elementary and...
Jung, Christopher, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Moshe Shenfeld. "A New Analysis of Differential Privacy's Generalization Guarantees." Paper presented at the 11th Innovations in Theoretical Computer Science Conference, Seattle, March 2020.
- 2019
- Article
How to Use Heuristics for Differential Privacy
By: Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
We develop theory for using heuristics to solve computationally hard problems in differential privacy. Heuristic approaches have enjoyed tremendous success in machine learning, for which performance can be empirically evaluated. However, privacy guarantees cannot be...
Neel, Seth, Aaron Leon Roth, and Zhiwei Steven Wu. "How to Use Heuristics for Differential Privacy." Proceedings of the IEEE Annual Symposium on Foundations of Computer Science (FOCS) 60th (2019).
- 2019
- Article
The Role of Interactivity in Local Differential Privacy
By: Matthew Joseph, Jieming Mao, Seth Neel and Aaron Leon Roth
We study the power of interactivity in local differential privacy. First, we focus on the difference between fully interactive and sequentially interactive protocols. Sequentially interactive protocols may query users adaptively in sequence, but they cannot return to...
Joseph, Matthew, Jieming Mao, Seth Neel, and Aaron Leon Roth. "The Role of Interactivity in Local Differential Privacy." Proceedings of the IEEE Annual Symposium on Foundations of Computer Science (FOCS) 60th (2019).
- 2019
- Article
Fair Algorithms for Learning in Allocation Problems
By: Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zachary Schutzman
Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans given that are repaid, or criminals that are apprehended)....
Elzayn, Hadi, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, and Zachary Schutzman. "Fair Algorithms for Learning in Allocation Problems." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 170–179.
- 2019
- Article
Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM
By: Katrina Ligett, Seth Neel, Aaron Leon Roth, Bo Waggoner and Steven Wu
Traditional approaches to differential privacy assume a fixed privacy requirement ϵ for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint. As differential privacy is increasingly deployed in practical settings, it...
Ligett, Katrina, Seth Neel, Aaron Leon Roth, Bo Waggoner, and Steven Wu. "Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM." Journal of Privacy and Confidentiality 9, no. 2 (2019).
- 2019
- Article
An Empirical Study of Rich Subgroup Fairness for Machine Learning
By: Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
Kearns et al. [2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across...
Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "An Empirical Study of Rich Subgroup Fairness for Machine Learning." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 100–109.
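A crude sketch of the auditing step in this style of work, under assumptions: train a simple "subgroup finder" to predict where the classifier's false positives concentrate, and measure how far that learned subgroup's false positive rate deviates from the base rate. The logistic-regression auditor and this exact loop are illustrative, not the paper's algorithm.

```python
# Illustrative rich-subgroup false-positive-rate audit.
import numpy as np
from sklearn.linear_model import LogisticRegression

def audit_fpr_disparity(X, y_true, y_pred):
    neg = y_true == 0                    # FP rate is defined on true negatives
    fp = ((y_pred == 1) & neg)[neg]      # false-positive indicator on negatives
    if fp.all() or not fp.any():         # degenerate: nothing to separate
        return 0.0, np.zeros(int(neg.sum()), dtype=bool)
    finder = LogisticRegression(max_iter=1000).fit(X[neg], fp)
    group = finder.predict(X[neg]).astype(bool)
    if not group.any():
        return 0.0, group
    return float(fp[group].mean() - fp.mean()), group
```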
- 2018
- Article
Mitigating Bias in Adaptive Data Gathering via Differential Privacy
By: Seth Neel and Aaron Leon Roth
Data that is gathered adaptively—via bandit algorithms, for example—exhibits bias. This is true both when gathering simple numeric valued data—the empirical means kept track of by stochastic bandit algorithms are biased downwards—and when gathering more complicated...
Neel, Seth, and Aaron Leon Roth. "Mitigating Bias in Adaptive Data Gathering via Differential Privacy." Proceedings of the International Conference on Machine Learning (ICML) 35th (2018).
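The mechanism-level takeaway fits in a few lines: compute the running means a bandit relies on through a differentially private mechanism, here Laplace noise on the running sum, so that adaptive data collection can only bias the estimate by a bounded amount. The noise scale below is schematic, not calibrated.

```python
# Schematic private mean for a bandit arm (scale not calibrated).
import numpy as np

def private_mean(rewards, epsilon=1.0, reward_range=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    noisy_sum = float(np.sum(rewards)) + rng.laplace(0.0, reward_range / epsilon)
    return noisy_sum / max(len(rewards), 1)
```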