Faculty Publications (82 results)
- 2025
- Book
The Experimentation Machine: Finding Product–Market Fit in the Age of AI
Leverage AI to be a 10x Founder
Today’s most successful founders know that the startups that learn the fastest will win. In The Experimentation Machine, I reveal how AI is transforming the way startups find product-market fit and scale....
Keywords: AI; Founder; Startup; AI and Machine Learning; Technology Adoption; Business Startups; Entrepreneurship; Market Entry and Exit
Bussgang, Jeffrey J. The Experimentation Machine: Finding Product–Market Fit in the Age of AI. Damn Gravity Media, 2025.
- March 2025
- Article
Novice Risk Work: How Juniors Coaching Seniors on Emerging Technologies Such as Generative AI Can Lead to Learning Failures
By: Katherine C. Kellogg, Hila Lifshitz-Assaf, Steven Randazzo, Ethan Mollick, Fabrizio Dell'Acqua, Edward McFowland III, François Candelon and Karim R. Lakhani
The literature on communities of practice demonstrates that a proven way for senior professionals to upskill themselves in the use of new technologies that undermine existing expertise is to learn from junior professionals. It notes that juniors may be better able...
Keywords: Rank and Position; Competency and Skills; Technology Adoption; Experience and Expertise; AI and Machine Learning
Kellogg, Katherine C., Hila Lifshitz-Assaf, Steven Randazzo, Ethan Mollick, Fabrizio Dell'Acqua, Edward McFowland III, François Candelon, and Karim R. Lakhani. "Novice Risk Work: How Juniors Coaching Seniors on Emerging Technologies Such as Generative AI Can Lead to Learning Failures." Art. 100559. Information and Organization 35, no. 1 (March 2025).
- 2025
- Article
Statistical Inference for Heterogeneous Treatment Effects Discovered by Generic Machine Learning in Randomized Experiments
By: Kosuke Imai and Michael Lingzhi Li
Researchers are increasingly turning to machine learning (ML) algorithms to investigate causal heterogeneity in randomized experiments. Despite their promise, ML algorithms may fail to accurately ascertain heterogeneous treatment effects under practical settings with...
Imai, Kosuke, and Michael Lingzhi Li. "Statistical Inference for Heterogeneous Treatment Effects Discovered by Generic Machine Learning in Randomized Experiments." Journal of Business & Economic Statistics 43, no. 1 (2025): 256–268.
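For readers new to this literature, the heterogeneous treatment effects at issue are conventionally summarized by the conditional average treatment effect, written here in standard potential-outcomes notation rather than quoted from the article:

\tau(x) = \mathbb{E}\left[\, Y_i(1) - Y_i(0) \mid X_i = x \,\right]

Here Y_i(1) and Y_i(0) are unit i's potential outcomes with and without treatment, and X_i are pre-treatment covariates; the article's concern is valid inference for ML-based estimates of quantities built from \tau(x).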
- November–December 2024
- Article
Outcome-Driven Dynamic Refugee Assignment with Allocation Balancing
By: Kirk Bansak and Elisabeth Paulson
This study proposes two new dynamic assignment algorithms to match refugees and asylum seekers to geographic localities within a host country. The first, currently implemented in a multi-year pilot in Switzerland, seeks to maximize the average predicted employment...
Bansak, Kirk, and Elisabeth Paulson. "Outcome-Driven Dynamic Refugee Assignment with Allocation Balancing." Operations Research 72, no. 6 (November–December 2024): 2375–2390.
- 2024
- Working Paper
Empirical Guidance: Data Processing and Analysis with Applications in Stata, R, and Python
By: Melissa Ouellet and Michael W. Toffel
This paper describes a range of best practices to compile and analyze datasets, and includes some examples in Stata, R, and Python. It is meant to serve as a reference for those getting started in econometrics, and especially those seeking to conduct data analyses in...
Keywords: Empirical Methods; Empirical Operations; Statistical Methods and Machine Learning; Statistical Inference; Research Analysts; Analytics and Data Science; Mathematical Methods
Ouellet, Melissa, and Michael W. Toffel. "Empirical Guidance: Data Processing and Analysis with Applications in Stata, R, and Python." Harvard Business School Working Paper, No. 25-010, August 2024.
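As a purely illustrative sketch of the kind of load, clean, and analyze pipeline the paper documents (the file name, variables, and regression below are hypothetical, not taken from the paper), the Python leg of such a workflow might look like:

import pandas as pd
import statsmodels.formula.api as smf

# Load the raw data and standardize column names (hypothetical file and columns).
df = pd.read_csv("plant_inspections.csv")
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

# Basic cleaning: drop exact duplicates and rows missing the outcome.
df = df.drop_duplicates().dropna(subset=["injury_rate"])

# Sanity check before analysis.
assert df["injury_rate"].between(0, 1).all(), "injury_rate should be a proportion"

# A simple OLS regression with year and industry fixed effects and clustered errors.
model = smf.ols("injury_rate ~ inspected + C(year) + C(industry)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["plant_id"]}
)
print(model.summary())

The same steps translate directly to Stata (import delimited, regress) or R (read.csv, lm).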
- July–August 2024
- Article
Doing More with Less: Overcoming Ineffective Long-Term Targeting Using Short-Term Signals
By: Ta-Wei Huang and Eva Ascarza
Firms are increasingly interested in developing targeted interventions for customers with the best response, which requires identifying differences in customer sensitivity, typically through the conditional average treatment effect (CATE) estimation. In theory, to...
Keywords: Long-run Targeting; Heterogeneous Treatment Effect; Statistical Surrogacy; Customer Churn; Field Experiments; Consumer Behavior; Customer Focus and Relationships; AI and Machine Learning; Marketing Strategy
Huang, Ta-Wei, and Eva Ascarza. "Doing More with Less: Overcoming Ineffective Long-Term Targeting Using Short-Term Signals." Marketing Science 43, no. 4 (July–August 2024): 863–884.
- 2024
- Working Paper
The Cram Method for Efficient Simultaneous Learning and Evaluation
By: Zeyang Jia, Kosuke Imai and Michael Lingzhi Li
We introduce the "cram" method, a general and efficient approach to simultaneous learning and evaluation using a generic machine learning (ML) algorithm. In a single pass of batched data, the proposed method repeatedly trains an ML algorithm and tests its empirical...
Keywords: AI and Machine Learning
Jia, Zeyang, Kosuke Imai, and Michael Lingzhi Li. "The Cram Method for Efficient Simultaneous Learning and Evaluation." Working Paper, March 2024.
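A schematic sketch of the single-pass, train-then-test pattern described in the abstract (the data and model are synthetic stand-ins, and the actual cram estimator and its uncertainty quantification are not reproduced here):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)

# Split the data into sequential batches and make a single pass over them.
batch_size = 200
batches = [(X[i:i + batch_size], y[i:i + batch_size])
           for i in range(0, len(X), batch_size)]

model, scores = None, []
seen_X, seen_y = [], []
for Xb, yb in batches:
    if model is not None:
        # Evaluate the current model on the incoming, not-yet-used batch...
        scores.append(model.score(Xb, yb))
    # ...then fold that batch into the training data and refit.
    seen_X.append(Xb)
    seen_y.append(yb)
    model = LogisticRegression().fit(np.vstack(seen_X), np.concatenate(seen_y))

print("held-out accuracy per batch:", np.round(scores, 3))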
- 2023
- Working Paper
An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits
By: Biyonka Liang and Iavor I. Bojinov
Typically, multi-armed bandit (MAB) experiments are analyzed at the end of the study and thus require the analyst to specify a fixed sample size in advance. However, in many online learning applications, it is advantageous to continuously produce inference on the...
Liang, Biyonka, and Iavor I. Bojinov. "An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits." Harvard Business School Working Paper, No. 24-057, March 2024.
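A minimal sketch of the setting only, using a two-armed Bernoulli bandit with Thompson-style allocation and a running effect estimate; the anytime-valid confidence sequences constructed in the paper are not implemented here:

import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.45, 0.55])   # hypothetical arm reward probabilities
counts = np.zeros(2)                  # pulls per arm
sums = np.zeros(2)                    # total reward per arm

for t in range(1, 2001):
    # Sample each arm's mean from a Beta posterior and pull the best-looking arm.
    draws = rng.beta(1 + sums, 1 + counts - sums)
    arm = int(np.argmax(draws))
    reward = rng.binomial(1, true_means[arm])
    counts[arm] += 1
    sums[arm] += reward

    # Continuously produced point estimate of the arm-1 vs. arm-0 effect.
    if t % 500 == 0 and counts.min() > 0:
        est = sums[1] / counts[1] - sums[0] / counts[0]
        print(f"t={t}: estimated effect = {est:.3f}")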
- 2023
- Article
M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models
By: Himabindu Lakkaraju, Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai and Haoyi Xiong
While Explainable Artificial Intelligence (XAI) techniques have been widely studied to explain predictions made by deep neural networks, the way to evaluate the faithfulness of explanation results remains challenging, due to the heterogeneity of explanations for...
Keywords: AI and Machine Learning
Lakkaraju, Himabindu, Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai, and Haoyi Xiong. "M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
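A toy sketch of the parameter-perturbation mechanic described in the abstract; GPT-2 is used here as a stand-in model, the noise scale is arbitrary, and the calibrated membership statistic and decision threshold developed in the paper are not reproduced:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def nll(text):
    # Average per-token negative log-likelihood of the text under the model.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def perturbed_nll(text, sigma=1e-3, n_draws=5):
    # Add Gaussian noise to every parameter, re-score the text, restore the weights.
    original = {k: v.clone() for k, v in model.state_dict().items()}
    vals = []
    for _ in range(n_draws):
        with torch.no_grad():
            for p in model.parameters():
                p.add_(sigma * torch.randn_like(p))
        vals.append(nll(text))
        model.load_state_dict(original)
    return sum(vals) / len(vals)

candidate = "An example candidate passage."
print("loss change under perturbation:", perturbed_nll(candidate) - nll(candidate))

How that loss change is converted into a membership decision is the substance of the paper.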
- 2023
- Article
Post Hoc Explanations of Language Models Can Improve Language Models
By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing complex tasks. Moreover, recent research has shown that incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) during in-context learning can significantly enhance...
Krishna, Satyapriya, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. "Post Hoc Explanations of Language Models Can Improve Language Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 2023
- Other Article
The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications
By: Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers and Stuart Shieber
Innovation is a major driver of economic and social development, and information about many kinds of innovation is embedded in semi-structured data from patents and patent applications. Though the impact and novelty of innovations expressed in patent data are difficult...
Keywords: USPTO; Natural Language Processing; Classification; Summarization; Patent Novelty; Patent Trolls; Patent Enforceability; Patents; Innovation and Invention; Intellectual Property; AI and Machine Learning; Analytics and Data Science
Suzgun, Mirac, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, and Stuart Shieber. "The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications." Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track 36 (2023).
- 2023
- Article
Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability
By: Usha Bhalla, Suraj Srinivas and Himabindu Lakkaraju
With the increased deployment of machine learning models in various real-world applications, researchers and practitioners alike have emphasized the need for explanations of model behaviour. To this end, two broad strategies have been outlined in prior literature to...
Bhalla, Usha, Suraj Srinivas, and Himabindu Lakkaraju. "Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- October 2023
- Article
Improving Regulatory Effectiveness Through Better Targeting: Evidence from OSHA
By: Matthew S. Johnson, David I. Levine and Michael W. Toffel
We study how a regulator can best target inspections. Our case study is a U.S. Occupational Safety and Health Administration (OSHA) program that randomly allocated some inspections. On average, each inspection averted 2.4 serious injuries (9%) over the next five years....
Keywords: Safety Regulations; Regulations; Regulatory Enforcement; Machine Learning Models; Safety; Operations; Service Operations; Production; Forecasting and Prediction; Decisions; United States
Johnson, Matthew S., David I. Levine, and Michael W. Toffel. "Improving Regulatory Effectiveness Through Better Targeting: Evidence from OSHA." American Economic Journal: Applied Economics 15, no. 4 (October 2023): 30–67. (Profiled in the Regulatory Review.)
- 2023
- Working Paper
In-Context Unlearning: Language Models as Few Shot Unlearners
By: Martin Pawelczyk, Seth Neel and Himabindu Lakkaraju
Machine unlearning, the study of efficiently removing the impact of specific training points on the trained model, has garnered increased attention of late, driven by the need to comply with privacy regulations like the Right to be Forgotten. Although unlearning is...
Pawelczyk, Martin, Seth Neel, and Himabindu Lakkaraju. "In-Context Unlearning: Language Models as Few Shot Unlearners." Working Paper, October 2023.
- August 2023
- Article
Explaining Machine Learning Models with Interactive Natural Language Conversations Using TalkToModel
By: Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju and Sameer Singh
Practitioners increasingly use machine learning (ML) models, yet models have become more complex and harder to understand. To understand complex models, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use...
Slack, Dylan, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. "Explaining Machine Learning Models with Interactive Natural Language Conversations Using TalkToModel." Nature Machine Intelligence 5, no. 8 (August 2023): 873–883.
- 2023
- Article
Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten
By: Himabindu Lakkaraju, Satyapriya Krishna and Jiaqi Ma
The Right to Explanation and the Right to be Forgotten are two important principles outlined to regulate algorithmic decision making and data usage in real-world applications. While the right to explanation allows individuals to request an actionable explanation for an...
Keywords: Analytics and Data Science; AI and Machine Learning; Decision Making; Governing Rules, Regulations, and Reforms
Lakkaraju, Himabindu, Satyapriya Krishna, and Jiaqi Ma. "Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten." Proceedings of the 40th International Conference on Machine Learning (ICML) (2023): 17808–17826.
- 2023
- Working Paper
Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness
By: Neil Menghani, Edward McFowland III and Daniel B. Neill
In this paper, we develop a new criterion, "insufficiently justified disparate impact" (IJDI), for assessing whether recommendations (binarized predictions) made by an algorithmic decision support tool are fair. Our novel, utility-based IJDI criterion evaluates false...
Menghani, Neil, Edward McFowland III, and Daniel B. Neill. "Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness." Working Paper, June 2023.
- 2023
- Working Paper
Auditing Predictive Models for Intersectional Biases
By: Kate S. Boxer, Edward McFowland III and Daniel B. Neill
Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we...
Boxer, Kate S., Edward McFowland III, and Daniel B. Neill. "Auditing Predictive Models for Intersectional Biases." Working Paper, June 2023.