
Publications

Filter Results: (23)

Show Results For

  • All HBS Web (74)
    • Faculty Publications (23)

      Active filter: Machine Bias

      Page 1 of 23 Results

      • April 2025
      • Article

      Serving with a Smile on Airbnb: Analyzing the Economic Returns and Behavioral Underpinnings of the Host’s Smile

      By: Shunyuan Zhang, Elizabeth Friedman, Kannan Srinivasan, Ravi Dhar and Xupin Zhang
      Non-informational cues, such as facial expressions, can significantly influence judgments and interpersonal impressions. While past research has explored how smiling affects business outcomes in offline or in-store contexts, relatively less is known about how smiling...
      Keywords: Sharing Economy; Airbnb; Image Feature Extraction; Machine Learning; Facial Expressions; Prejudice and Bias; Nonverbal Communication; E-commerce; Consumer Behavior; Perception
      Zhang, Shunyuan, Elizabeth Friedman, Kannan Srinivasan, Ravi Dhar, and Xupin Zhang. "Serving with a Smile on Airbnb: Analyzing the Economic Returns and Behavioral Underpinnings of the Host’s Smile." Journal of Consumer Research 51, no. 6 (April 2025): 1073–1097.
      • 2025
      • Article

      Humor as a Window into Generative AI Bias

      By: Roger Saumure, Julian De Freitas and Stefano Puntoni
      A preregistered audit of 600 images by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier”, the prevalence of stereotyped groups changes. While...
      Keywords: AI and Machine Learning; Demographics; Prejudice and Bias
      Saumure, Roger, Julian De Freitas, and Stefano Puntoni. "Humor as a Window into Generative AI Bias." Art. 1326. Scientific Reports 15 (2025).
      • January 2025
      • Article

      Reducing Prejudice with Counter-stereotypical AI

      By: Erik Hermann, Julian De Freitas and Stefano Puntoni
      Based on a review of relevant literature, we propose that the proliferation of AI with human-like and social features presents an unprecedented opportunity to address the underlying cognitive and affective drivers of prejudice. An approach informed by the psychology of...
      Keywords: Prejudice and Bias; AI and Machine Learning; Interpersonal Communication; Social and Collaborative Networks
      Hermann, Erik, Julian De Freitas, and Stefano Puntoni. "Reducing Prejudice with Counter-stereotypical AI." Consumer Psychology Review 8, no. 1 (January 2025): 75–86.
      • 2025
      • Working Paper

      Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers

      By: Matthew DosSantos DiSorbo, Kris Ferreira, Maya Balakrishnan and Jordan Tong
      Problem definition: While artificial intelligence (AI) algorithms may perform well on data that are representative of the training set (inliers), they may err when extrapolating on non-representative data (outliers). How can humans and algorithms work together to make...
      Keywords: AI and Machine Learning; Decision Choices and Conditions
      DosSantos DiSorbo, Matthew, Kris Ferreira, Maya Balakrishnan, and Jordan Tong. "Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers." Working Paper, May 2025.
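
      The inlier/outlier problem described in the abstract above lends itself to a small illustration. The sketch below is not the paper's mechanism; it assumes a simple z-score rule for deciding whether an input lies far from the training distribution, and attaches a "warning" (defer to a human) or an "endorsement" (trust the model) to each prediction. All data, names, and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the model only ever sees x in roughly [-3, 3].
X_train = rng.normal(0.0, 1.0, size=500)
y_train = 2.0 * X_train + rng.normal(0.0, 0.1, size=500)

# A deliberately simple model: the least-squares slope.
slope = np.polyfit(X_train, y_train, deg=1)[0]

mu, sigma = X_train.mean(), X_train.std()

def predict_with_flag(x, z_cut=3.0):
    """Return (prediction, flag): 'endorsement' for inliers, 'warning' for outliers."""
    z = abs(x - mu) / sigma
    flag = "endorsement" if z <= z_cut else "warning"
    return slope * x, flag

for x in [0.5, 1.8, 10.0]:  # 10.0 is far outside the training range
    pred, flag = predict_with_flag(x)
    print(f"x={x:5.1f}  prediction={pred:7.2f}  {flag}")
```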
      • October 2023
      • Teaching Note

      Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models

      By: Tsedal Neeley and Tim Englehart
      Teaching Note for HBS Case No. 422-085. Dr. Timnit Gebru—a leading artificial intelligence (AI) computer scientist and co-lead of Google’s Ethical AI team—was messaging with one of her colleagues when she saw the words: “Did you resign?? Megan sent an email saying that...
      Keywords: Ethics; Employment; Corporate Social Responsibility and Impact; Technological Innovation; AI and Machine Learning; Diversity; Prejudice and Bias; Technology Industry
      Neeley, Tsedal, and Tim Englehart. "Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models." Harvard Business School Teaching Note 424-028, October 2023.
      • September 29, 2023
      • Article

      Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI

      By: Simon Friis and James Riley
      When it comes to artificial intelligence and inequality, algorithmic bias rightly receives a lot of attention. But it’s just one way that AI can lead to inequitable outcomes. To truly create equitable AI, we need to consider three forces through which it might make...
      Keywords: AI and Machine Learning; Prejudice and Bias; Equality and Inequality
      Friis, Simon, and James Riley. "Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI." Harvard Business Review (website) (September 29, 2023).
      • 2023
      • Working Paper

      Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness

      By: Neil Menghani, Edward McFowland III and Daniel B. Neill
      In this paper, we develop a new criterion, "insufficiently justified disparate impact" (IJDI), for assessing whether recommendations (binarized predictions) made by an algorithmic decision support tool are fair. Our novel, utility-based IJDI criterion evaluates false...
      Keywords: AI and Machine Learning; Forecasting and Prediction; Prejudice and Bias
      Menghani, Neil, Edward McFowland III, and Daniel B. Neill. "Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness." Working Paper, June 2023.
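
      The abstract is truncated before it defines the IJDI criterion, so the sketch below shows only the simpler quantity such subgroup-fairness criteria build on: gaps in false-positive rates between a subgroup and the overall population for binarized predictions. The synthetic data and error rates are assumptions for illustration, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Synthetic audit data: subgroup membership, true outcomes, binarized predictions.
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
# Predictions made more error-prone for group 1, by construction.
flip = rng.random(n) < np.where(group == 1, 0.20, 0.10)
y_pred = np.where(flip, 1 - y_true, y_true)

def false_positive_rate(y_t, y_p):
    negatives = y_t == 0
    return (y_p[negatives] == 1).mean()

fpr_all = false_positive_rate(y_true, y_pred)
for g in (0, 1):
    m = group == g
    fpr_g = false_positive_rate(y_true[m], y_pred[m])
    print(f"group {g}: FPR={fpr_g:.3f}  gap vs. overall={fpr_g - fpr_all:+.3f}")
```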
      • 2023
      • Working Paper

      Auditing Predictive Models for Intersectional Biases

      By: Kate S. Boxer, Edward McFowland III and Daniel B. Neill
      Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we...
      Keywords: Predictive Models; Bias; AI and Machine Learning
      Boxer, Kate S., Edward McFowland III, and Daniel B. Neill. "Auditing Predictive Models for Intersectional Biases." Working Paper, June 2023.
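
      A toy demonstration of the risk this paper targets (not its auditing method): predictions can look balanced for each protected attribute in aggregate while being badly skewed at an intersection. The data below bakes the effect in by construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40_000

# Two independent, balanced binary protected attributes.
a = rng.integers(0, 2, size=n)
b = rng.integers(0, 2, size=n)

# Positive-prediction rates chosen so each attribute is fair in aggregate
# (about 50% everywhere marginally) while intersections are skewed 70/30.
rate = np.where(a == b, 0.7, 0.3)
y_pred = rng.random(n) < rate

for name, mask in [("a=0", a == 0), ("a=1", a == 1),
                   ("b=0", b == 0), ("b=1", b == 1)]:
    print(f"marginal {name}:  positive rate = {y_pred[mask].mean():.3f}")

for av in (0, 1):
    for bv in (0, 1):
        m = (a == av) & (b == bv)
        print(f"intersection a={av}, b={bv}: positive rate = {y_pred[m].mean():.3f}")
```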
      • 2023
      • Article

      Provable Detection of Propagating Sampling Bias in Prediction Models

      By: Pavan Ravishankar, Qingyu Mo, Edward McFowland III and Daniel B. Neill
      With an increased focus on incorporating fairness in machine learning models, it becomes imperative not only to assess and mitigate bias at each stage of the machine learning pipeline but also to understand the downstream impacts of bias across stages. Here we consider...
      Ravishankar, Pavan, Qingyu Mo, Edward McFowland III, and Daniel B. Neill. "Provable Detection of Propagating Sampling Bias in Prediction Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (2023): 9562–9569. (Presented at the 37th AAAI Conference on Artificial Intelligence (2/7/23-2/14/23) in Washington, DC.)
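
      As a rough illustration of bias propagating across pipeline stages (not the paper's provable detection method): when one group is under-sampled at the data-collection stage, the downstream model's error concentrates on that group. The groups, sample sizes, and model below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_group(n, shift):
    # Each group has its own decision boundary at x = shift.
    x = rng.normal(loc=shift, size=(n, 1))
    y = (x[:, 0] + rng.normal(scale=0.5, size=n)) > shift
    return x, y

# Sampling bias: group 1 is heavily under-represented in training.
x0, y0 = make_group(5_000, 0.0)
x1, y1 = make_group(250, 2.0)
model = LogisticRegression().fit(np.vstack([x0, x1]), np.concatenate([y0, y1]))

# The bias propagates downstream: accuracy is far worse for group 1.
for g, (xt, yt) in enumerate([make_group(2_000, 0.0), make_group(2_000, 2.0)]):
    print(f"group {g} test accuracy: {model.score(xt, yt):.3f}")
```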
      • May 9, 2023
      • Article

      8 Questions About Using AI Responsibly, Answered

      By: Tsedal Neeley
      Generative AI tools are poised to change the way every business operates. As your own organization begins strategizing which to use, and how, operational and ethical considerations are inevitable. This article delves into eight of them, including how your organization...
      Keywords: AI and Machine Learning; Organizational Change and Adaptation; Prejudice and Bias; Ethics
      Neeley, Tsedal. "8 Questions About Using AI Responsibly, Answered." Harvard Business Review (website) (May 9, 2023).
      • 2023
      • Working Paper

      Feature Importance Disparities for Data Bias Investigations

      By: Peter W. Chang, Leor Fishman and Seth Neel
      It is widely held that one cause of downstream bias in classifiers is bias present in the training data. Rectifying such biases may involve context-dependent interventions such as training separate models on subgroups, removing features with bias in the collection...
      Keywords: AI and Machine Learning; Analytics and Data Science; Prejudice and Bias
      Chang, Peter W., Leor Fishman, and Seth Neel. "Feature Importance Disparities for Data Bias Investigations." Working Paper, March 2023.
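
      The sketch below is not the authors' feature-importance-disparity statistic; it only illustrates the underlying idea with off-the-shelf permutation importance: a feature can matter far more within a subgroup than on the dataset as a whole. The synthetic data is constructed so the disparity exists by design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
n = 5_000

# Feature x0 drives the label only inside subgroup g=1; x1 drives it everywhere.
g = rng.integers(0, 2, size=n)
x0, x1 = rng.normal(size=n), rng.normal(size=n)
y = (2.0 * x1 + 3.0 * x0 * g + rng.normal(scale=0.5, size=n)) > 0
X = np.column_stack([x0, x1])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def importances(mask):
    result = permutation_importance(model, X[mask], y[mask],
                                    n_repeats=10, random_state=0)
    return result.importances_mean

full, sub = importances(np.ones(n, dtype=bool)), importances(g == 1)
for j, name in enumerate(["x0", "x1"]):
    print(f"{name}: importance full={full[j]:.3f}  subgroup={sub[j]:.3f}  "
          f"disparity={sub[j] - full[j]:+.3f}")
```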
      • 2023
      • Working Paper

      The Limits of Algorithmic Measures of Race in Studies of Outcome Disparities

      By: David S. Scharfstein and Sergey Chernenko
      We show that the use of algorithms to predict race has significant limitations in measuring and understanding the sources of racial disparities in finance, economics, and other contexts. First, we derive theoretically the direction and magnitude of measurement bias in...
      Keywords: Racial Disparity; Paycheck Protection Program; Measurement Error; AI and Machine Learning; Race; Measurement and Metrics; Equality and Inequality; Prejudice and Bias; Forecasting and Prediction; Outcome or Result
      Scharfstein, David S., and Sergey Chernenko. "The Limits of Algorithmic Measures of Race in Studies of Outcome Disparities." Working Paper, April 2023.
      • 2023
      • Chapter

      Marketing Through the Machine’s Eyes: Image Analytics and Interpretability

      By: Shunyuan Zhang, Flora Feng and Kannan Srinivasan
      The growth of social media and the sharing economy is generating abundant unstructured image and video data. Computer vision techniques can derive rich insights from unstructured data and can inform recommendations for increasing profits and consumer utility—if only the...
      Keywords: Transparency; Marketing Research; Algorithmic Bias; AI and Machine Learning; Marketing
      Zhang, Shunyuan, Flora Feng, and Kannan Srinivasan. "Marketing Through the Machine’s Eyes: Image Analytics and Interpretability." Chap. 8 in Artificial Intelligence in Marketing, edited by Naresh K. Malhotra, K. Sudhir, and Olivier Toubia, 217–238. Review of Marketing Research, vol. 20. Emerald Publishing Limited, 2023.
      • October–December 2022
      • Article

      Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem

      By: Mochen Yang, Edward McFowland III, Gordon Burtch and Gediminas Adomavicius
      Combining machine learning with econometric analysis is becoming increasingly prevalent in both research and practice. A common empirical strategy involves the application of predictive modeling techniques to "mine" variables of interest from available data, followed...
      Keywords: Machine Learning; Econometric Analysis; Instrumental Variable; Random Forest; Causal Inference; AI and Machine Learning; Forecasting and Prediction
      Yang, Mochen, Edward McFowland III, Gordon Burtch, and Gediminas Adomavicius. "Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem." INFORMS Journal on Data Science 1, no. 2 (October–December 2022): 138–155.
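
      A minimal simulation of the measurement-error problem the article studies, under classical assumptions (this is not the authors' random-forest correction): regressing on an imperfect, "data-mined" proxy instead of the true variable attenuates the OLS slope by the factor var(x) / (var(x) + var(noise)).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# True data-generating process: y depends on x with coefficient 1.5.
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)

# A "data-mined" proxy: an imperfect prediction of x (classical measurement error).
x_hat = x + rng.normal(scale=0.8, size=n)

def ols_slope(regressor, outcome):
    return np.cov(regressor, outcome)[0, 1] / np.var(regressor)

print(f"slope using true x      : {ols_slope(x, y):.3f}")      # ~1.500
print(f"slope using mined proxy : {ols_slope(x_hat, y):.3f}")  # attenuated
print(f"theoretical attenuation : {1.5 * 1.0 / (1.0 + 0.8**2):.3f}")
```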
      • May 2022 (Revised June 2024)
      • Case

      LOOP: Driving Change in Auto Insurance Pricing

      By: Elie Ofek and Alicia Dadlani
      John Henry and Carey Anne Nadeau, co-founders and co-CEOs of LOOP, an insurtech startup based in Austin, Texas, were on a mission to modernize the archaic $250 billion automobile insurance market. They sought to create equitably priced insurance by eliminating pricing...
      Keywords: AI and Machine Learning; Technological Innovation; Equality and Inequality; Prejudice and Bias; Growth and Development Strategy; Customer Relationship Management; Price; Insurance Industry; Financial Services Industry
      Ofek, Elie, and Alicia Dadlani. "LOOP: Driving Change in Auto Insurance Pricing." Harvard Business School Case 522-073, May 2022. (Revised June 2024.)
      • March 2022
      • Article

      Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)

      By: Eva Ascarza and Ayelet Israeli
      An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected”...
      Keywords: Algorithm Bias; Personalization; Targeting; Generalized Random Forests (GRF); Discrimination; Customization and Personalization; Decision Making; Fairness; Mathematical Methods
      Ascarza, Eva, and Ayelet Israeli. "Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)." e2115126119. Proceedings of the National Academy of Sciences 119, no. 11 (March 8, 2022).
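
      BEAT itself builds on generalized random forests; the sketch below shows only the problem it is designed to remove, on assumed data: a personalization policy that never looks at the protected attribute can still target groups at sharply different rates when it keys on a correlated feature.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50_000

# A protected attribute and a behavioral feature correlated with it.
protected = rng.integers(0, 2, size=n)
spend = rng.normal(loc=np.where(protected == 1, 40.0, 60.0), scale=15.0)

# The policy targets purely on spend -- the protected attribute is never used.
targeted = spend > 55.0

for g in (0, 1):
    print(f"group {g}: targeted at rate {targeted[protected == g].mean():.3f}")
print(f"targeting disparity: "
      f"{targeted[protected == 1].mean() - targeted[protected == 0].mean():+.3f}")
```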
      • September–October 2021
      • Article

      Frontiers: Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb

      By: Shunyuan Zhang, Nitin Mehta, Param Singh and Kannan Srinivasan
      We study the effect of Airbnb’s smart-pricing algorithm on the racial disparity in the daily revenue earned by Airbnb hosts. Our empirical strategy exploits Airbnb’s introduction of the algorithm and its voluntary adoption by hosts as a quasi-natural experiment. Among...
      Keywords: Smart Pricing; Pricing Algorithm; Machine Bias; Discrimination; Racial Disparity; Social Inequality; Airbnb Revenue; Revenue; Race; Equality and Inequality; Prejudice and Bias; Price; Mathematical Methods; Accommodations Industry
      Zhang, Shunyuan, Nitin Mehta, Param Singh, and Kannan Srinivasan. "Frontiers: Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb." Marketing Science 40, no. 5 (September–October 2021): 813–820.
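
      A stylized difference-in-differences calculation of the kind this quasi-natural experiment supports (not the authors' specification; all numbers are simulated): the pre/post revenue change of adopting hosts is compared against that of non-adopters to isolate the adoption effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000

# Simulated host-level daily revenue, before and after the algorithm launch.
adopter = rng.integers(0, 2, size=n).astype(bool)
rev_pre = rng.normal(100.0, 20.0, size=n)
# Common time trend (+5) plus a true adoption effect (+8) for adopters only.
rev_post = rev_pre + 5.0 + np.where(adopter, 8.0, 0.0) + rng.normal(0.0, 5.0, size=n)

did = ((rev_post[adopter].mean() - rev_pre[adopter].mean())
       - (rev_post[~adopter].mean() - rev_pre[~adopter].mean()))
print(f"difference-in-differences estimate: {did:.2f}  (true effect: 8.0)")
```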
      • September 17, 2021
      • Article

      AI Can Help Address Inequity—If Companies Earn Users' Trust

      By: Shunyuan Zhang, Kannan Srinivasan, Param Singh and Nitin Mehta
      While companies may spend a lot of time testing models before launch, many spend too little time considering how they will work in the wild. In particular, they fail to fully consider how rates of adoption can warp developers’ intent. For instance, Airbnb launched a...
      Keywords: Artificial Intelligence; Algorithmic Bias; Technological Innovation; Perception; Diversity; Equality and Inequality; Trust; AI and Machine Learning
      Zhang, Shunyuan, Kannan Srinivasan, Param Singh, and Nitin Mehta. "AI Can Help Address Inequity—If Companies Earn Users' Trust." Harvard Business Review Digital Articles (September 17, 2021).
      • 2021
      • Chapter

      Towards a Unified Framework for Fair and Stable Graph Representation Learning

      By: Chirag Agarwal, Himabindu Lakkaraju and Marinka Zitnik
      As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. In this work, we establish a key connection between counterfactual...
      Keywords: Graph Neural Networks; AI and Machine Learning; Prejudice and Bias
      Agarwal, Chirag, Himabindu Lakkaraju, and Marinka Zitnik. "Towards a Unified Framework for Fair and Stable Graph Representation Learning." In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence, edited by Cassio de Campos and Marloes H. Maathuis, 2114–2124. AUAI Press, 2021.
      • 2020
      • Working Paper

      (When) Does Appearance Matter? Evidence from a Randomized Controlled Trial

      By: Prithwiraj Choudhury, Tarun Khanna, Christos A. Makridis and Subhradip Sarker
      While there is evidence about labor market discrimination based on race, religion, and gender, we know little about whether physical appearance leads to discrimination in labor market outcomes. We deploy a randomized experiment on 1,000 respondents in India between...
      Keywords: Behavioral Economics; Coronavirus; Discrimination; Homophily; Labor Market Mobility; Limited Attention; Resumes; Personal Characteristics; Prejudice and Bias
      Choudhury, Prithwiraj, Tarun Khanna, Christos A. Makridis, and Subhradip Sarker. "(When) Does Appearance Matter? Evidence from a Randomized Controlled Trial." Harvard Business School Working Paper, No. 21-038, September 2020.
