Filter Results: (432)
Show Results For
- All HBS Web (853)
- Faculty Publications (432)
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 2023
- Working Paper
Causal Interpretation of Structural IV Estimands
By: Isaiah Andrews, Nano Barahona, Matthew Gentzkow, Ashesh Rambachan and Jesse M. Shapiro
We study the causal interpretation of instrumental variables (IV) estimands of nonlinear, multivariate structural models with respect to rich forms of model misspecification. We focus on guaranteeing that the researcher's estimator is sharp zero consistent, meaning...
Keywords: Mathematical Methods
Andrews, Isaiah, Nano Barahona, Matthew Gentzkow, Ashesh Rambachan, and Jesse M. Shapiro. "Causal Interpretation of Structural IV Estimands." NBER Working Paper Series, No. 31799, October 2023.
- September–October 2023
- Article
Interpretable Matrix Completion: A Discrete Optimization Approach
By: Dimitris Bertsimas and Michael Lingzhi Li
We consider the problem of matrix completion on an n × m matrix. We introduce the problem of interpretable matrix completion that aims to provide meaningful insights for the low-rank matrix using side information. We show that the problem can be...
Keywords: Mathematical Methods
Bertsimas, Dimitris, and Michael Lingzhi Li. "Interpretable Matrix Completion: A Discrete Optimization Approach." INFORMS Journal on Computing 35, no. 5 (September–October 2023): 952–965.
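The paper above develops a discrete-optimization formulation that exploits side information; as a purely illustrative sketch of generic low-rank matrix completion (not the authors' method; the `rank1_complete` helper and toy data are hypothetical), a rank-1 alternating-least-squares loop can recover a hidden entry:

```python
def rank1_complete(observed, n, m, iters=50):
    """Tiny alternating-least-squares sketch for rank-1 matrix completion:
    `observed` maps (i, j) -> value; we fit M ≈ u v^T using only the
    observed entries, then fill in every entry of the n x m matrix."""
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        # hold v fixed, solve the per-row least-squares problem for u
        for i in range(n):
            num = sum(val * v[j] for (r, j), val in observed.items() if r == i)
            den = sum(v[j] ** 2 for (r, j), val in observed.items() if r == i)
            if den > 0:
                u[i] = num / den
        # hold u fixed, solve the per-column least-squares problem for v
        for j in range(m):
            num = sum(val * u[i] for (i, c), val in observed.items() if c == j)
            den = sum(u[i] ** 2 for (i, c), val in observed.items() if c == j)
            if den > 0:
                v[j] = num / den
    return [[u[i] * v[j] for j in range(m)] for i in range(n)]

# Rank-1 truth: M[i][j] = (i+1)*(j+1); hide entry (1, 1) and recover it
obs = {(i, j): (i + 1) * (j + 1) for i in range(3) for j in range(3) if (i, j) != (1, 1)}
M = rank1_complete(obs, 3, 3)
print(round(M[1][1], 6))  # prints 4.0, the true missing entry
```

Because the toy data are exactly rank 1, the alternating updates converge to the exact factors; real problems add regularization and rank selection.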
- 2023
- Article
On Minimizing the Impact of Dataset Shifts on Actionable Explanations
By: Anna P. Meyer, Dan Ley, Suraj Srinivas and Himabindu Lakkaraju
The Right to Explanation is an important regulatory principle that allows individuals to request actionable explanations for algorithmic decisions. However, several technical challenges arise when providing such actionable explanations in practice. For instance, models...
Meyer, Anna P., Dan Ley, Suraj Srinivas, and Himabindu Lakkaraju. "On Minimizing the Impact of Dataset Shifts on Actionable Explanations." Proceedings of the 39th Conference on Uncertainty in Artificial Intelligence (UAI) (2023): 1434–1444.
- July–August 2023
- Article
Demand Learning and Pricing for Varying Assortments
By: Kris Ferreira and Emily Mower
Problem Definition: We consider the problem of demand learning and pricing for retailers who offer assortments of substitutable products that change frequently, e.g., due to limited inventory, perishable or time-sensitive products, or the retailer’s desire to...
Keywords: Experiments; Pricing And Revenue Management; Retailing; Demand Estimation; Pricing Algorithm; Marketing; Price; Demand and Consumers; Mathematical Methods
Ferreira, Kris, and Emily Mower. "Demand Learning and Pricing for Varying Assortments." Manufacturing & Service Operations Management 25, no. 4 (July–August 2023): 1227–1244. (Finalist, Practice-Based Research Competition, MSOM (2021) and Finalist, Revenue Management & Pricing Section Practice Award, INFORMS (2019).)
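The authors study demand learning when assortments change frequently; the sketch below is only a generic explore-and-commit price experiment under an assumed noisy linear demand curve, not the paper's algorithm (`best_price` and `demand` are hypothetical names):

```python
import random

def best_price(prices, demand_fn, rounds_per_price, seed=0):
    """Run a simple price experiment: charge each candidate price for
    `rounds_per_price` periods, total the observed revenue, and return
    the price with the highest revenue. A generic sketch only."""
    rng = random.Random(seed)
    revenue = {p: 0.0 for p in prices}
    for p in prices:
        for _ in range(rounds_per_price):
            revenue[p] += p * demand_fn(p, rng)
    return max(prices, key=lambda p: revenue[p])

# Hypothetical noisy linear demand: quantity sold falls as price rises
def demand(price, rng):
    return max(0.0, 10.0 - price + rng.gauss(0.0, 0.1))

print(best_price([2.0, 5.0, 8.0], demand, rounds_per_price=50))
```

With expected demand 10 − p, expected revenue p(10 − p) peaks at p = 5, so the experiment selects 5.0; handling assortments that change between periods is exactly what this naive design cannot do.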
- 2023
- Working Paper
How People Use Statistics
By: Pedro Bordalo, John J. Conlon, Nicola Gennaioli, Spencer Yongwook Kwon and Andrei Shleifer
We document two new facts about the distributions of answers in famous statistical problems: they are i) multi-modal and ii) unstable with respect to irrelevant changes in the problem. We offer a model in which, when solving a problem, people represent each hypothesis...
Bordalo, Pedro, John J. Conlon, Nicola Gennaioli, Spencer Yongwook Kwon, and Andrei Shleifer. "How People Use Statistics." NBER Working Paper Series, No. 31631, August 2023.
- July 2023 (Revised July 2023)
- Background Note
Generative AI Value Chain
By: Andy Wu and Matt Higgins
Generative AI refers to a type of artificial intelligence (AI) that can create new content (e.g., text, image, or audio) in response to a prompt from a user. ChatGPT, Bard, and Claude are examples of text generating AIs, and DALL-E, Midjourney, and Stable Diffusion are...
Keywords: AI; Artificial Intelligence; Model; Hardware; Data Centers; AI and Machine Learning; Applications and Software; Analytics and Data Science; Value
Wu, Andy, and Matt Higgins. "Generative AI Value Chain." Harvard Business School Background Note 724-355, July 2023. (Revised July 2023.)
- July 2023
- Article
Design and Analysis of Switchback Experiments
By: Iavor I. Bojinov, David Simchi-Levi and Jinglong Zhao
In switchback experiments, a firm sequentially exposes an experimental unit to a random treatment, measures its response, and repeats the procedure for several periods to determine which treatment leads to the best outcome. Although practitioners have widely adopted...
Bojinov, Iavor I., David Simchi-Levi, and Jinglong Zhao. "Design and Analysis of Switchback Experiments." Management Science 69, no. 7 (July 2023): 3759–3777.
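The switchback design described above can be illustrated with a minimal single-unit simulation, assuming an additive effect, no noise, and no carryover between periods (the paper's contribution concerns exactly such temporal complications, which this sketch ignores; `run_switchback` is a hypothetical name):

```python
import random

def run_switchback(num_periods, treat_effect, base=10.0, seed=0):
    """Simulate a single-unit switchback: each period the unit is randomly
    assigned treatment (1) or control (0), and we record the outcome under
    a simple additive-effect model with no noise and no carryover."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(num_periods):
        w = rng.randint(0, 1)        # coin-flip treatment assignment
        y = base + treat_effect * w  # outcome this period
        (treated if w else control).append(y)
    # naive difference-in-means estimate (assumes both arms were observed)
    return sum(treated) / len(treated) - sum(control) / len(control)

print(run_switchback(num_periods=1000, treat_effect=2.5))  # prints 2.5
```

With no noise or carryover, the difference in means recovers the effect exactly; the paper's design and analysis questions arise once treatments in one period contaminate outcomes in the next.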
- 2023
- Working Paper
Setting Gendered Expectations? Recruiter Outreach Bias in Online Tech Training Programs
By: Jacqueline N. Lane, Karim R. Lakhani and Roberto Fernandez
Competence development in digital technologies, analytics, and artificial intelligence is increasingly important to all types of organizations and their workforce. Universities and corporations are investing heavily in developing training programs, at all tenure...
Keywords: STEM; Selection and Staffing; Gender; Prejudice and Bias; Training; Equality and Inequality; Competency and Skills
Lane, Jacqueline N., Karim R. Lakhani, and Roberto Fernandez. "Setting Gendered Expectations? Recruiter Outreach Bias in Online Tech Training Programs." Harvard Business School Working Paper, No. 23-066, April 2023. (Accepted by Organization Science.)
- 2023
- Article
Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse
By: Martin Pawelczyk, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci and Himabindu Lakkaraju
As machine learning models are increasingly being employed to make consequential decisions in real-world settings, it becomes critical to ensure that individuals who are adversely impacted (e.g., loan denied) by the predictions of these models are provided with a means...
Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. "Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse." Proceedings of the International Conference on Learning Representations (ICLR) (2023).
- 2023
- Article
Estimating Causal Peer Influence in Homophilous Social Networks by Inferring Latent Locations
By: Edward McFowland III and Cosma Rohilla Shalizi
Social influence cannot be identified from purely observational data on social networks, because such influence is generically confounded with latent homophily, that is, with a node’s network partners being informative about the node’s attributes and therefore its...
Keywords: Causal Inference; Homophily; Social Networks; Peer Influence; Social and Collaborative Networks; Power and Influence; Mathematical Methods
McFowland III, Edward, and Cosma Rohilla Shalizi. "Estimating Causal Peer Influence in Homophilous Social Networks by Inferring Latent Locations." Journal of the American Statistical Association 118, no. 541 (2023): 707–718.
- 2023
- Working Paper
PRIMO: Private Regression in Multiple Outcomes
By: Seth Neel
We introduce a new differentially private regression setting we call Private Regression in Multiple Outcomes (PRIMO), inspired by the common situation where a data analyst wants to perform a set of l regressions while preserving privacy, where the covariates...
Neel, Seth. "PRIMO: Private Regression in Multiple Outcomes." Working Paper, March 2023.
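PRIMO's actual algorithms are not reproduced here; as a hedged illustration of the general idea of differentially private regression, the sketch below fits a one-dimensional OLS slope and adds Laplace output-perturbation noise (the `private_slope` helper and the supplied `sensitivity` bound are assumptions for illustration, and bounding sensitivity is the hard part in practice):

```python
import math
import random

def private_slope(xs, ys, epsilon, sensitivity, seed=0):
    """Output-perturbation sketch of differentially private 1-D regression:
    fit the OLS slope, then add Laplace(sensitivity/epsilon) noise.
    `sensitivity` must bound how much the slope can change when one
    record is altered (assumed to be supplied by the analyst here)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    rng = random.Random(seed)
    u = rng.random() - 0.5             # uniform on [-0.5, 0.5)
    b = sensitivity / epsilon          # Laplace scale parameter
    # inverse-CDF sampling of Laplace(0, b) noise
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return slope + noise

print(private_slope([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0],
                    epsilon=1.0, sensitivity=0.1))
```

As epsilon grows the noise shrinks and the output approaches the non-private slope (2.0 on this toy data); PRIMO's focus, sharing privacy budget across many regressions on common covariates, is beyond this sketch.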
- 2023
- Working Paper
Distributionally Robust Causal Inference with Observational Data
By: Dimitris Bertsimas, Kosuke Imai and Michael Lingzhi Li
We consider the estimation of average treatment effects in observational studies and propose a new framework of robust causal inference with unobserved confounders. Our approach is based on distributionally robust optimization and proceeds in two steps. We first...
Bertsimas, Dimitris, Kosuke Imai, and Michael Lingzhi Li. "Distributionally Robust Causal Inference with Observational Data." Working Paper, February 2023.
- 2023
- Working Paper
Group Fairness in Dynamic Refugee Assignment
By: Daniel Freund, Thodoris Lykouris, Elisabeth Paulson, Bradley Sturt and Wentao Weng
Ensuring that refugees and asylum seekers thrive (e.g., find employment) in their host countries is a profound humanitarian goal, and a primary driver of employment is the geographic location within a host country to which the refugee or asylum seeker is...
Freund, Daniel, Thodoris Lykouris, Elisabeth Paulson, Bradley Sturt, and Wentao Weng. "Group Fairness in Dynamic Refugee Assignment." Harvard Business School Working Paper, No. 23-047, February 2023.
- 2022
- Article
Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
By: Tessa Han, Suraj Srinivas and Himabindu Lakkaraju
A critical problem in the field of post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This...
Han, Tessa, Suraj Srinivas, and Himabindu Lakkaraju. "Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations." Advances in Neural Information Processing Systems (NeurIPS) (2022). (Best Paper Award, International Conference on Machine Learning (ICML) Workshop on Interpretable ML in Healthcare.)
- 2022
- Article
Data Poisoning Attacks on Off-Policy Evaluation Methods
By: Elita Lobo, Harvineet Singh, Marek Petrik, Cynthia Rudin and Himabindu Lakkaraju
Off-policy Evaluation (OPE) methods are a crucial tool for evaluating policies in high-stakes domains such as healthcare, where exploration is often infeasible, unethical, or expensive. However, the extent to which such methods can be trusted under adversarial threats...
Lobo, Elita, Harvineet Singh, Marek Petrik, Cynthia Rudin, and Himabindu Lakkaraju. "Data Poisoning Attacks on Off-Policy Evaluation Methods." Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence (UAI) (2022): 1264–1274.
- 2022
- Article
Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
By: Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen Bach and Himabindu Lakkaraju
As post hoc explanation methods are increasingly being leveraged to explain complex models in high-stakes settings, it becomes critical to ensure that the quality of the resulting explanations is consistently high across all subgroups of a population. For instance, it...
Dai, Jessica, Sohini Upadhyay, Ulrich Aivodji, Stephen Bach, and Himabindu Lakkaraju. "Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2022): 203–214.
- 2022
- Article
Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis
By: Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay and Himabindu Lakkaraju
As machine learning (ML) models become more widely deployed in high-stakes applications, counterfactual explanations have emerged as key tools for providing actionable model explanations in practice. Despite the growing popularity of counterfactual explanations, a...
Keywords: Machine Learning Models; Counterfactual Explanations; Adversarial Examples; Mathematical Methods
Pawelczyk, Martin, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, and Himabindu Lakkaraju. "Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis." Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS) (2022).
- May 2022
- Article
How Much Should We Trust Staggered Difference-In-Differences Estimates?
By: Andrew C. Baker, David F. Larcker and Charles C.Y. Wang
We explain when and how staggered difference-in-differences regression estimators, commonly applied to assess the impact of policy changes, are biased. These biases are likely to be relevant for a large portion of research settings in finance, accounting, and law that...
Keywords: Difference In Differences; Staggered Difference-in-differences Designs; Generalized Difference-in-differences; Dynamic Treatment Effects; Mathematical Methods
Baker, Andrew C., David F. Larcker, and Charles C.Y. Wang. "How Much Should We Trust Staggered Difference-In-Differences Estimates?" Journal of Financial Economics 144, no. 2 (May 2022): 370–395. (Editor's Choice, May 2022; Jensen Prize, First Place, June 2023.)
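The biases the authors analyze arise because a staggered two-way fixed-effects regression implicitly averages many canonical 2x2 comparisons, some of which use already-treated units as controls. As a grounding sketch of the canonical building block only (the `did_2x2` helper is hypothetical and does not reproduce the paper's analysis):

```python
def did_2x2(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Canonical 2x2 difference-in-differences: the treated group's
    pre-to-post change minus the control group's pre-to-post change."""
    return (y_treat_post - y_treat_pre) - (y_ctrl_post - y_ctrl_pre)

# Treated mean rises 10 -> 15 while control rises 10 -> 12
print(did_2x2(10.0, 15.0, 10.0, 12.0))  # prints 3.0
```

The estimate is clean here because the control group is never treated; the staggered-adoption problems the paper documents appear when a group treated earlier, with dynamic effects still unfolding, serves as the "control" change.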
- 2022
- Article
Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods
By: Chirag Agarwal, Marinka Zitnik and Himabindu Lakkaraju
As Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes critical to ensure that the stakeholders understand the rationale behind their predictions. While several GNN explanation methods have been proposed recently, there has...
Keywords: Graph Neural Networks; Explanation Methods; Mathematical Methods; Framework; Theory; Analysis
Agarwal, Chirag, Marinka Zitnik, and Himabindu Lakkaraju. "Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods." Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS) (2022).