Faculty Publications (23)
- 2025
- Article
Humor as a Window into Generative AI Bias
By: Roger Saumure, Julian De Freitas and Stefano Puntoni
A preregistered audit of 600 images produced by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier”, the prevalence of stereotyped groups changes. While...
Saumure, Roger, Julian De Freitas, and Stefano Puntoni. "Humor as a Window into Generative AI Bias." Art. 1326. Scientific Reports 15 (2025).
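A minimal sketch of the kind of prevalence tally such an audit implies, assuming hypothetical annotations of which group is depicted in each original image and in its “funnier” counterpart; the data below are invented and this is not the paper's pipeline.

```python
# Hypothetical illustration: compare how often each group is depicted in the
# original images vs. the "funnier" versions. All annotations are made up.
import pandas as pd

annotations = pd.DataFrame({
    "prompt_id": [1, 1, 2, 2, 3, 3],
    "version": ["original", "funnier"] * 3,
    "depicted_group": ["group_a", "group_b", "group_a", "group_a", "group_b", "group_b"],
})

# Share of images depicting each group, within each version of the image set.
prevalence = pd.crosstab(annotations["version"], annotations["depicted_group"], normalize="index")
print(prevalence)
```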
- January 2025
- Article
Reducing Prejudice with Counter-stereotypical AI
By: Erik Hermann, Julian De Freitas and Stefano Puntoni
Based on a review of relevant literature, we propose that the proliferation of AI with human-like and social features presents an unprecedented opportunity to address the underlying cognitive and affective drivers of prejudice. An approach informed by the psychology of...
Keywords: Prejudice and Bias; AI and Machine Learning; Interpersonal Communication; Social and Collaborative Networks
Hermann, Erik, Julian De Freitas, and Stefano Puntoni. "Reducing Prejudice with Counter-stereotypical AI." Consumer Psychology Review 8, no. 1 (January 2025): 75–86.
- 2024
- Working Paper
Warnings and Endorsements: Improving Human-AI Collaboration Under Covariate Shift
By: Matthew DosSantos DiSorbo and Kris Ferreira
Problem definition: While artificial intelligence (AI) algorithms may perform well on data that are representative of the training set (inliers), they may err when extrapolating on non-representative data (outliers). These outliers often originate from covariate shift,...
DosSantos DiSorbo, Matthew, and Kris Ferreira. "Warnings and Endorsements: Improving Human-AI Collaboration Under Covariate Shift." Working Paper, February 2024.
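The entry above contrasts inliers with covariate-shifted outliers. Below is a minimal sketch, not the paper's method, of one common way to separate the two: fit an outlier detector on the training data (here scikit-learn's IsolationForest, an assumed choice) and warn on test points it flags while endorsing the rest.

```python
# Illustrative only: flag test points that fall outside the training
# distribution (outliers under covariate shift) so a human can be warned,
# while in-distribution points are endorsed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(500, 3))             # representative (inlier) data
X_test = np.vstack([rng.normal(0, 1, size=(20, 3)),   # inliers
                    rng.normal(4, 1, size=(5, 3))])   # covariate-shifted outliers

detector = IsolationForest(random_state=0).fit(X_train)
flags = detector.predict(X_test)                       # +1 = inlier, -1 = outlier

for i, flag in enumerate(flags):
    label = "endorse model prediction" if flag == 1 else "warn: out-of-distribution input"
    print(f"test point {i}: {label}")
```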
- October 2023
- Teaching Note
Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models
By: Tsedal Neeley and Tim Englehart
Teaching Note for HBS Case No. 422-085. Dr. Timnit Gebru—a leading artificial intelligence (AI) computer scientist and co-lead of Google’s Ethical AI team—was messaging with one of her colleagues when she saw the words: “Did you resign?? Megan sent an email saying that...
- September 29, 2023
- Article
Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI
By: Simon Friis and James Riley
When it comes to artificial intelligence and inequality, algorithmic bias rightly receives a lot of attention. But it’s just one way that AI can lead to inequitable outcomes. To truly create equitable AI, we need to consider three forces through which it might make...
Friis, Simon, and James Riley. "Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI." Harvard Business Review (website) (September 29, 2023).
- 2023
- Working Paper
Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness
By: Neil Menghani, Edward McFowland III and Daniel B. Neill
In this paper, we develop a new criterion, "insufficiently justified disparate impact" (IJDI), for assessing whether recommendations (binarized predictions) made by an algorithmic decision support tool are fair. Our novel, utility-based IJDI criterion evaluates false...
Menghani, Neil, Edward McFowland III, and Daniel B. Neill. "Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness." Working Paper, June 2023.
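The abstract is truncated here, so the sketch below is not the IJDI criterion itself; it only illustrates the simpler precursor check of comparing a subgroup's false-positive rate with the overall rate, on synthetic data.

```python
# Not the paper's IJDI criterion -- just a simpler disparate-impact style check
# on false-positive rates of binarized recommendations, per subgroup.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000),
    "label": rng.integers(0, 2, size=1000),
})
# Hypothetical recommendations that over-recommend among group B negatives.
df["recommended"] = np.where(
    (df["group"] == "B") & (df["label"] == 0),
    rng.random(1000) < 0.4,
    rng.random(1000) < 0.2,
).astype(int)

def false_positive_rate(frame):
    negatives = frame[frame["label"] == 0]
    return negatives["recommended"].mean()

overall_fpr = false_positive_rate(df)
for group, frame in df.groupby("group"):
    print(group, round(false_positive_rate(frame), 3), "vs overall", round(overall_fpr, 3))
```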
- 2023
- Working Paper
Auditing Predictive Models for Intersectional Biases
By: Kate S. Boxer, Edward McFowland III and Daniel B. Neill
Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we...
Boxer, Kate S., Edward McFowland III, and Daniel B. Neill. "Auditing Predictive Models for Intersectional Biases." Working Paper, June 2023.
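A small, hypothetical illustration of the risk the paper targets: predictions can look balanced in aggregate for each protected attribute yet be skewed at their intersection. The numbers are invented and this is not the paper's auditing method.

```python
# Aggregate group fairness can hide intersectional disparity.
import pandas as pd

cells = pd.DataFrame({
    "gender": ["F", "F", "M", "M"],
    "race":   ["X", "Y", "X", "Y"],
    "positive_rate": [0.2, 0.6, 0.6, 0.2],   # hypothetical, equal-sized cells
})

print(cells.groupby("gender")["positive_rate"].mean())   # 0.4 vs 0.4 -- looks fair
print(cells.groupby("race")["positive_rate"].mean())     # 0.4 vs 0.4 -- looks fair
print(cells.set_index(["gender", "race"]))               # intersections reveal 0.2 vs 0.6
```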
- 2023
- Article
Provable Detection of Propagating Sampling Bias in Prediction Models
By: Pavan Ravishankar, Qingyu Mo, Edward McFowland III and Daniel B. Neill
With an increased focus on incorporating fairness in machine learning models, it becomes imperative not only to assess and mitigate bias at each stage of the machine learning pipeline but also to understand the downstream impacts of bias across stages. Here we consider...
Ravishankar, Pavan, Qingyu Mo, Edward McFowland III, and Daniel B. Neill. "Provable Detection of Propagating Sampling Bias in Prediction Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (2023): 9562–9569. (Presented at the 37th AAAI Conference on Artificial Intelligence (2/7/23-2/14/23) in Washington, DC.)
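A toy simulation, not the paper's formal framework, of one way bias propagates across pipeline stages: under-representing a group at the data-collection stage yields lower accuracy for that group at the prediction stage.

```python
# Toy simulation: sampling bias in training data propagates into a
# disparate error rate downstream.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, shift):
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Group B has a different feature distribution but is rare in the training sample.
Xa_tr, ya_tr = make_group(1000, shift=0.0)
Xb_tr, yb_tr = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa_tr, Xb_tr]), np.concatenate([ya_tr, yb_tr]))

Xa_te, ya_te = make_group(1000, shift=0.0)
Xb_te, yb_te = make_group(1000, shift=1.5)
print("group A accuracy:", model.score(Xa_te, ya_te))
print("group B accuracy:", model.score(Xb_te, yb_te))
```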
- May 9, 2023
- Article
8 Questions About Using AI Responsibly, Answered
By: Tsedal Neeley
Generative AI tools are poised to change the way every business operates. As your own organization begins strategizing which to use, and how, operational and ethical considerations are inevitable. This article delves into eight of them, including how your organization...
Neeley, Tsedal. "8 Questions About Using AI Responsibly, Answered." Harvard Business Review (website) (May 9, 2023).
- 2023
- Working Paper
Feature Importance Disparities for Data Bias Investigations
By: Peter W. Chang, Leor Fishman and Seth Neel
It is widely held that one cause of downstream bias in classifiers is bias present in the training data. Rectifying such biases may involve context-dependent interventions such as training separate models on subgroups, removing features with bias in the collection...
Chang, Peter W., Leor Fishman, and Seth Neel. "Feature Importance Disparities for Data Bias Investigations." Working Paper, March 2023.
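A rough sketch, not the paper's algorithm, of the underlying idea: compute a feature-importance measure (here permutation importance, an assumed choice) on a subgroup and on the full data, and flag features whose importance diverges for that subgroup.

```python
# Compare permutation feature importances on a subgroup vs. the full data;
# a large gap flags a feature whose influence is concentrated in the subgroup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 4000
group = rng.integers(0, 2, n)                      # 1 marks the subgroup of interest
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = ((x1 + group * x2 + rng.normal(0, 0.3, n)) > 0).astype(int)   # x2 matters only in the subgroup
X = np.column_stack([x1, x2, group])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def importances(mask):
    res = permutation_importance(model, X[mask], y[mask], n_repeats=10, random_state=0)
    return res.importances_mean

for name, full, sub in zip(["x1", "x2", "group"], importances(np.ones(n, bool)), importances(group == 1)):
    print(f"{name}: full-data importance {full:.3f}, subgroup importance {sub:.3f}")
```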
- 2023
- Working Paper
The Limits of Algorithmic Measures of Race in Studies of Outcome Disparities
By: David S. Scharfstein and Sergey Chernenko
We show that the use of algorithms to predict race has significant limitations in measuring and understanding the sources of racial disparities in finance, economics, and other contexts. First, we derive theoretically the direction and magnitude of measurement bias in...
Keywords: Racial Disparity; Paycheck Protection Program; Measurement Error; AI and Machine Learning; Race; Measurement and Metrics; Equality and Inequality; Prejudice and Bias; Forecasting and Prediction; Outcome or Result
Scharfstein, David S., and Sergey Chernenko. "The Limits of Algorithmic Measures of Race in Studies of Outcome Disparities." Working Paper, April 2023.
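A toy simulation, not the paper's derivation or data, of one direction such measurement bias can take: symmetric misclassification by an algorithmic race proxy attenuates the measured outcome disparity relative to the disparity computed with true race.

```python
# Non-differential error in a race proxy shrinks the measured disparity.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
true_race = rng.choice(["group_1", "group_2"], size=n, p=[0.8, 0.2])
outcome = np.where(true_race == "group_1",
                   rng.normal(1.0, 1.0, n),
                   rng.normal(0.5, 1.0, n))          # true disparity = 0.5

# Algorithmic proxy misclassifies 20% of individuals in each group.
flip = rng.random(n) < 0.2
proxy_race = np.where(flip,
                      np.where(true_race == "group_1", "group_2", "group_1"),
                      true_race)

def disparity(label):
    return outcome[label == "group_1"].mean() - outcome[label == "group_2"].mean()

print("disparity with true race: ", round(disparity(true_race), 3))
print("disparity with proxy race:", round(disparity(proxy_race), 3))
```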
- 2023
- Chapter
Marketing Through the Machine’s Eyes: Image Analytics and Interpretability
By: Shunyuan Zhang, Flora Feng and Kannan Srinivasan
The growth of social media and the sharing economy is generating abundant unstructured image and video data. Computer vision techniques can derive rich insights from unstructured data and can inform recommendations for increasing profits and consumer utility—if only the...
Zhang, Shunyuan, Flora Feng, and Kannan Srinivasan. "Marketing Through the Machine’s Eyes: Image Analytics and Interpretability." Chap. 8 in Artificial Intelligence in Marketing, edited by Naresh K. Malhotra, K. Sudhir, and Olivier Toubia, 217–238. Vol. 20 of Review of Marketing Research. Emerald Publishing Limited, 2023.
- October–December 2022
- Article
Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem
By: Mochen Yang, Edward McFowland III, Gordon Burtch and Gediminas Adomavicius
Combining machine learning with econometric analysis is becoming increasingly prevalent in both research and practice. A common empirical strategy involves the application of predictive modeling techniques to "mine" variables of interest from available data, followed...
Keywords: Machine Learning; Econometric Analysis; Instrumental Variable; Random Forest; Causal Inference; AI and Machine Learning; Forecasting and Prediction
Yang, Mochen, Edward McFowland III, Gordon Burtch, and Gediminas Adomavicius. "Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem." INFORMS Journal on Data Science 1, no. 2 (October–December 2022): 138–155.
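A minimal sketch of the two-stage strategy the abstract describes, not the paper's correction procedure: an ML model "mines" a variable from raw features, the prediction is used as a regressor, and its prediction error acts as measurement error (here attenuating the estimate). All data are synthetic.

```python
# Two-stage pipeline: mine a variable with a random forest, then regress on it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)

def simulate(n):
    truth = rng.integers(0, 2, n)                        # latent construct (e.g., sentiment)
    feats = truth[:, None] + rng.normal(0, 1.5, (n, 3))  # noisy raw features (e.g., text)
    y = 2.0 * truth + rng.normal(0, 1.0, n)              # outcome; true coefficient = 2.0
    return feats, truth, y

F_lab, t_lab, _ = simulate(2000)                         # labeled sample to train the miner
F_ana, t_ana, y = simulate(5000)                         # analysis sample

miner = RandomForestClassifier(n_estimators=200, random_state=0).fit(F_lab, t_lab)
mined = miner.predict(F_ana)

def ols_slope(x, y):
    x_c, y_c = x - x.mean(), y - y.mean()
    return float(x_c @ y_c) / float(x_c @ x_c)

print("coefficient using the true variable: ", round(ols_slope(t_ana, y), 2))
print("coefficient using the mined variable:", round(ols_slope(mined, y), 2))
```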
- May 2022 (Revised June 2024)
- Case
LOOP: Driving Change in Auto Insurance Pricing
By: Elie Ofek and Alicia Dadlani
John Henry and Carey Anne Nadeau, co-founders and co-CEOs of LOOP, an insurtech startup based in Austin, Texas, were on a mission to modernize the archaic $250 billion automobile insurance market. They sought to create equitably priced insurance by eliminating pricing...
Keywords: AI and Machine Learning; Technological Innovation; Equality and Inequality; Prejudice and Bias; Growth and Development Strategy; Customer Relationship Management; Price; Insurance Industry; Financial Services Industry
Ofek, Elie, and Alicia Dadlani. "LOOP: Driving Change in Auto Insurance Pricing." Harvard Business School Case 522-073, May 2022. (Revised June 2024.)
- March 8, 2022
- Article
Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)
By: Eva Ascarza and Ayelet Israeli
An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected”...
Keywords: Algorithm Bias; Personalization; Targeting; Generalized Random Forests (GRF); Discrimination; Customization and Personalization; Decision Making; Fairness; Mathematical Methods
Ascarza, Eva, and Ayelet Israeli. "Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)." Art. e2115126119. Proceedings of the National Academy of Sciences 119, no. 11 (March 8, 2022).
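Not the BEAT method itself, only the diagnostic it is meant to address: compare each protected group's share among targeted individuals with its share of the population. The numbers are hypothetical.

```python
# Check whether a personalization policy targets a protected group
# disproportionately to its population share.
import pandas as pd

customers = pd.DataFrame({
    "group":    ["A"] * 600 + ["B"] * 400,
    "targeted": [1] * 300 + [0] * 300 + [1] * 100 + [0] * 300,   # hypothetical policy
})

population_share = customers["group"].value_counts(normalize=True)
targeted_share = customers.loc[customers["targeted"] == 1, "group"].value_counts(normalize=True)

print(pd.DataFrame({"population share": population_share, "targeted share": targeted_share}))
```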
- September–October 2021
- Article
Frontiers: Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb
By: Shunyuan Zhang, Nitin Mehta, Param Singh and Kannan Srinivasan
We study the effect of Airbnb’s smart-pricing algorithm on the racial disparity in the daily revenue earned by Airbnb hosts. Our empirical strategy exploits Airbnb’s introduction of the algorithm and its voluntary adoption by hosts as a quasi-natural experiment. Among...
Keywords: Smart Pricing; Pricing Algorithm; Machine Bias; Discrimination; Racial Disparity; Social Inequality; Airbnb Revenue; Revenue; Race; Equality and Inequality; Prejudice and Bias; Price; Mathematical Methods; Accommodations Industry
Zhang, Shunyuan, Nitin Mehta, Param Singh, and Kannan Srinivasan. "Frontiers: Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb." Marketing Science 40, no. 5 (September–October 2021): 813–820.
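An illustration of the quasi-experimental comparison described above, with made-up revenue figures: a difference-in-differences contrast of adopting versus non-adopting hosts around the algorithm's introduction. The paper further compares such estimates across host race; that dimension is omitted here.

```python
# Difference-in-differences with hypothetical mean daily revenues.
import pandas as pd

revenue = pd.DataFrame({
    "host_type": ["adopter", "adopter", "non-adopter", "non-adopter"],
    "period":    ["pre", "post", "pre", "post"],
    "mean_daily_revenue": [100.0, 115.0, 100.0, 105.0],   # hypothetical numbers
})

pivot = revenue.pivot(index="host_type", columns="period", values="mean_daily_revenue")
change = pivot["post"] - pivot["pre"]                     # pre-to-post change per host type
print(change)
print("difference-in-differences estimate:", change["adopter"] - change["non-adopter"])
```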
- September 17, 2021
- Article
AI Can Help Address Inequity—If Companies Earn Users' Trust
By: Shunyuan Zhang, Kannan Srinivasan, Param Singh and Nitin Mehta
While companies may spend a lot of time testing models before launch, many spend too little time considering how they will work in the wild. In particular, they fail to fully consider how rates of adoption can warp developers’ intent. For instance, Airbnb launched a...
Keywords: Artificial Intelligence; Algorithmic Bias; Technological Innovation; Perception; Diversity; Equality and Inequality; Trust; AI and Machine Learning
Zhang, Shunyuan, Kannan Srinivasan, Param Singh, and Nitin Mehta. "AI Can Help Address Inequity—If Companies Earn Users' Trust." Harvard Business Review Digital Articles (September 17, 2021).
- 2021
- Chapter
Towards a Unified Framework for Fair and Stable Graph Representation Learning
By: Chirag Agarwal, Himabindu Lakkaraju and Marinka Zitnik
As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. In this work, we establish a key connection between counterfactual...
Agarwal, Chirag, Himabindu Lakkaraju, and Marinka Zitnik. "Towards a Unified Framework for Fair and Stable Graph Representation Learning." In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence, edited by Cassio de Campos and Marloes H. Maathuis, 2114–2124. AUAI Press, 2021.
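A generic check, not the paper's framework, of the two properties named above: counterfactual fairness (the embedding barely moves when the sensitive attribute is flipped) and stability (it barely moves under a small feature perturbation). The encoder here is a stand-in linear map rather than a trained GNN.

```python
# Measure how much an embedding shifts under a sensitive-attribute flip
# (unfairness) and under a small random perturbation (instability).
import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(size=(8, 4))                      # stand-in encoder weights

def encode(features):
    return features @ W                          # placeholder for a GNN encoder

features = rng.normal(size=8)
features[0] = 1.0                                # position 0: sensitive attribute

counterfactual = features.copy()
counterfactual[0] = 0.0                          # flip the sensitive attribute
perturbed = features + rng.normal(0, 0.01, size=8)

def distance(a, b):
    return float(np.linalg.norm(encode(a) - encode(b)))

print("unfairness (counterfactual distance):", round(distance(features, counterfactual), 4))
print("instability (perturbation distance): ", round(distance(features, perturbed), 4))
```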
- 2020
- Working Paper
(When) Does Appearance Matter? Evidence from a Randomized Controlled Trial
By: Prithwiraj Choudhury, Tarun Khanna, Christos A. Makridis and Subhradip Sarker
While there is evidence about labor market discrimination based on race, religion, and gender, we know little about whether physical appearance leads to discrimination in labor market outcomes. We deploy a randomized experiment on 1,000 respondents in India between...
Keywords: Behavioral Economics; Coronavirus; Discrimination; Homophily; Labor Market Mobility; Limited Attention; Resumes; Personal Characteristics; Prejudice and Bias
Choudhury, Prithwiraj, Tarun Khanna, Christos A. Makridis, and Subhradip Sarker. "(When) Does Appearance Matter? Evidence from a Randomized Controlled Trial." Harvard Business School Working Paper, No. 21-038, September 2020.
- August 2020
- Article
Machine Learning and Human Capital Complementarities: Experimental Evidence on Bias Mitigation
By: Prithwiraj Choudhury, Evan Starr and Rajshree Agarwal
The use of machine learning (ML) for productivity in the knowledge economy requires considerations of important biases that may arise from ML predictions. We define a new source of bias related to incompleteness in real-time inputs, which may result from strategic...
Choudhury, Prithwiraj, Evan Starr, and Rajshree Agarwal. "Machine Learning and Human Capital Complementarities: Experimental Evidence on Bias Mitigation." Strategic Management Journal 41, no. 8 (August 2020): 1381–1411.