
Publications

Filter Results: (69)

Show Results For

  • All HBS Web  (69)
    • News  (10)
    • Research  (49)
    • Events  (1)
  • Faculty Publications  (31)

Page 1 of 69 Results
  • August 2020
  • Article

Machine Learning and Human Capital Complementarities: Experimental Evidence on Bias Mitigation

By: Prithwiraj Choudhury, Evan Starr and Rajshree Agarwal
The use of machine learning (ML) for productivity in the knowledge economy requires considerations of important biases that may arise from ML predictions. We define a new source of bias related to incompleteness in real time inputs, which may result from strategic...
Keywords: Machine Learning; Bias; Human Capital; Management; Strategy
Choudhury, Prithwiraj, Evan Starr, and Rajshree Agarwal. "Machine Learning and Human Capital Complementarities: Experimental Evidence on Bias Mitigation." Strategic Management Journal 41, no. 8 (August 2020): 1381–1411.
  • Article

Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)

By: Eva Ascarza and Ayelet Israeli

An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected”... (see the sketch after the citation below)

Keywords: Algorithm Bias; Personalization; Targeting; Generalized Random Forests (GRF); Discrimination; Customization and Personalization; Decision Making; Fairness; Mathematical Methods
Ascarza, Eva, and Ayelet Israeli. "Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)." Proceedings of the National Academy of Sciences 119, no. 11 (March 8, 2022): e2115126119.
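The failure mode BEAT is designed to prevent is easy to reproduce: a targeting score trained without any protected attribute can still target groups at sharply different rates through correlated features. Below is a minimal Python sketch of that problem on synthetic data, with hypothetical variable names; it illustrates the issue BEAT addresses, not the BEAT algorithm itself.

    # Sketch only: a score trained WITHOUT the protected attribute still
    # produces disparate targeting via a correlated feature.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 10_000
    protected = rng.integers(0, 2, n)               # hypothetical protected attribute
    income = rng.normal(50 + 10 * protected, 5, n)  # feature correlated with the group
    response = 0.5 * income + rng.normal(0, 5, n)   # heterogeneous response to targeting

    model = GradientBoostingRegressor().fit(income.reshape(-1, 1), response)
    score = model.predict(income.reshape(-1, 1))
    target = score > np.quantile(score, 0.8)        # target the top 20% of scores

    for g in (0, 1):                                # targeting rates differ by group
        print(f"group {g}: targeted {target[protected == g].mean():.1%}")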
  • 2023
  • Working Paper

Feature Importance Disparities for Data Bias Investigations

By: Peter W. Chang, Leor Fishman and Seth Neel
It is widely held that one cause of downstream bias in classifiers is bias present in the training data. Rectifying such biases may involve context-dependent interventions such as training separate models on subgroups, removing features with bias in the collection... (a sketch of the importance-disparity idea follows the citation below)
Keywords: AI and Machine Learning; Analytics and Data Science; Prejudice and Bias
Chang, Peter W., Leor Fishman, and Seth Neel. "Feature Importance Disparities for Data Bias Investigations." Working Paper, March 2023.
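A brute-force sketch of the quantity in the title: compare a feature's permutation importance on a subgroup against its importance on the full sample. The data and the subgroup are synthetic and fixed in advance; the paper's contribution is efficiently finding (feature, subgroup) pairs with large disparities, which this sketch does not attempt.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    n = 5_000
    X = rng.normal(size=(n, 3))                  # three synthetic features
    group = X[:, 2] > 0                          # hypothetical subgroup
    y = ((X[:, 0] * group + X[:, 1]) > 0).astype(int)  # feature 0 matters only in the subgroup

    clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
    full = permutation_importance(clf, X, y, n_repeats=5, random_state=1)
    sub = permutation_importance(clf, X[group], y[group], n_repeats=5, random_state=1)

    for j in range(3):                           # feature 0 shows a large disparity
        print(f"feature {j}: full={full.importances_mean[j]:.3f} "
              f"subgroup={sub.importances_mean[j]:.3f}")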
  • 2023
  • Article

Provable Detection of Propagating Sampling Bias in Prediction Models

By: Pavan Ravishankar, Qingyu Mo, Edward McFowland III and Daniel B. Neill
With an increased focus on incorporating fairness in machine learning models, it becomes imperative not only to assess and mitigate bias at each stage of the machine learning pipeline but also to understand the downstream impacts of bias across stages. Here we consider... (a toy illustration follows the citation below)
Ravishankar, Pavan, Qingyu Mo, Edward McFowland III, and Daniel B. Neill. "Provable Detection of Propagating Sampling Bias in Prediction Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (2023): 9562–9569. (Presented at the 37th AAAI Conference on Artificial Intelligence (2/7/23-2/14/23) in Washington, DC.)
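A toy illustration of bias propagating through a pipeline, under strong simplifying assumptions: a model trained on data that passed through a biased sampling stage (only observations with positive outcomes are retained) is systematically off on the full population. This is not the authors' detection procedure.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    x = rng.normal(size=(20_000, 1))
    y = x[:, 0] + rng.normal(size=20_000)

    observed = y > 0                             # biased sampling stage
    model = LinearRegression().fit(x[observed], y[observed])

    resid = y - model.predict(x)                 # evaluate on the full population
    print(f"mean prediction error on everyone: {resid.mean():+.2f}")  # far from 0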


    • 18 Oct 2022
    • Research & Ideas

    When Bias Creeps into AI, Managers Can Stop It by Asking the Right Questions

algorithm perpetuates this. Another source of bias is incomplete or unrepresentative information. A famous example is facial recognition. If I use mostly photos of white men to train the machine to learn...
By Rachel Layne
    • 19 Nov 2019
    • Op-Ed

    Gender Bias Complaints against Apple Card Signal a Dark Side to Fintech

bias in Goldman Sachs’s underwriting model. (Goldman developed and issued the card.) Adding fuel to the fire, Apple co-founder Steve Wozniak shared that the same thing had happened to him and his wife. Officials from the New York...
By Karen G. Mills. Keywords: Financial Services
    • 18 Feb 2022
    • News

    Behind the Research: Bias in AI with Himabindu Lakkaraju, Edward McFowland III, and Seth Neel

    • 2025
    • Article

    Humor as a Window into Generative AI Bias

    By: Roger Samure, Julian De Freitas and Stefano Puntoni
A preregistered audit of 600 images produced by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier,” the prevalence of stereotyped groups changes. While... (a sketch of the before/after comparison follows the citation below)
    Keywords: AI and Machine Learning; Demographics; Prejudice and Bias
Samure, Roger, Julian De Freitas, and Stefano Puntoni. "Humor as a Window into Generative AI Bias." Scientific Reports 15, Art. 1326 (2025).
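The audit's core comparison can be sketched as a two-proportion test: how often a stereotyped group appears in images generated before versus after a "make it funnier" instruction. The counts below are invented for illustration, not the paper's data.

    from statsmodels.stats.proportion import proportions_ztest

    depicted = [30, 75]   # images depicting the group, before vs. after (hypothetical)
    totals = [300, 300]   # total images audited in each condition (hypothetical)
    stat, pval = proportions_ztest(depicted, totals)
    print(f"z = {stat:.2f}, p = {pval:.4f}")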
    • September 29, 2023
    • Article

    Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI

    By: Simon Friis and James Riley
When it comes to artificial intelligence and inequality, algorithmic bias rightly receives a lot of attention. But it’s just one way that AI can lead to inequitable outcomes. To truly create equitable AI, we need to consider three forces through which it might make...
    Keywords: AI and Machine Learning; Prejudice and Bias; Equality and Inequality
    Friis, Simon, and James Riley. "Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI." Harvard Business Review (website) (September 29, 2023).
    • 01 Apr 2019
    • What Do You Think?

    Does Our Bias Against Federal Deficits Need Rethinking?

huge deficits stretching back and forward for a decade,” according to Kautz. “I believe that an emerging disinflationary era, driven by more intelligent machines and continuing globalization, is why they are right. Most importantly for...
By James Heskett
    • 2021
    • Working Paper

    Invisible Primes: Fintech Lending with Alternative Data

    By: Marco Di Maggio, Dimuthu Ratnadiwakara and Don Carmichael
We exploit anonymized administrative data provided by a major fintech platform to investigate whether using alternative data to assess borrowers’ creditworthiness results in broader credit access. Comparing actual outcomes of the fintech platform’s model to... (a stylized version of this comparison follows the citation below)
    Keywords: Fintech Lending; Alternative Data; Machine Learning; Algorithm Bias; Finance; Information Technology; Financing and Loans; Analytics and Data Science; Credit
    Di Maggio, Marco, Dimuthu Ratnadiwakara, and Don Carmichael. "Invisible Primes: Fintech Lending with Alternative Data." Harvard Business School Working Paper, No. 22-024, October 2021.
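A stylized version of the comparison described in the abstract: score borrowers with a traditional-features model and with one that adds an alternative-data feature, then compare approval rates for "thin-file" borrowers. Every variable, threshold, and data point here is invented for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)
    n = 20_000
    history = rng.normal(size=n)                 # traditional credit feature
    cash_flow = rng.normal(size=n)               # alternative-data feature
    thin_file = rng.random(n) < 0.3              # borrowers with little history
    history[thin_file] = 0.0                     # history is uninformative for them
    p = 1 / (1 + np.exp(2 * (history + cash_flow)))
    default = (rng.random(n) < p).astype(int)    # riskier borrowers default more

    X_trad = history.reshape(-1, 1)
    X_alt = np.column_stack([history, cash_flow])
    for name, X in (("traditional", X_trad), ("with alt data", X_alt)):
        p_hat = LogisticRegression().fit(X, default).predict_proba(X)[:, 1]
        approve = p_hat < 0.3                    # approve low predicted risk
        print(f"{name}: thin-file approval rate = {approve[thin_file].mean():.1%}")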
    • 2023
    • Working Paper

    Auditing Predictive Models for Intersectional Biases

    By: Kate S. Boxer, Edward McFowland III and Daniel B. Neill
Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we... (a minimal audit sketch follows the citation below)
    Keywords: Predictive Models; Bias; AI and Machine Learning
    Boxer, Kate S., Edward McFowland III, and Daniel B. Neill. "Auditing Predictive Models for Intersectional Biases." Working Paper, June 2023.
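A minimal sketch of the risk the abstract names: a classifier can look acceptable for each protected attribute in aggregate while one intersection is treated very differently. The attribute names, the planted bias, and the data are all hypothetical; the authors' detection method is more sophisticated than this direct tabulation.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    n = 20_000
    df = pd.DataFrame({"race": rng.integers(0, 2, n),
                       "gender": rng.integers(0, 2, n),
                       "label": rng.integers(0, 2, n)})
    biased = (df.race == 1) & (df.gender == 1)   # bias planted at one intersection
    df["pred"] = np.where(biased, rng.random(n) < 0.6, rng.random(n) < 0.3).astype(int)

    fpr = df[df.label == 0].groupby(["race", "gender"])["pred"].mean()
    print(fpr)                                   # the (1, 1) cell stands out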
    • October 2023
    • Teaching Note

    Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models

    By: Tsedal Neeley and Tim Englehart
Teaching Note for HBS Case No. 422-085. Dr. Timnit Gebru—a leading artificial intelligence (AI) computer scientist and co-lead of Google’s Ethical AI team—was messaging with one of her colleagues when she saw the words: “Did you resign?? Megan sent an email saying that...
    Keywords: Ethics; Employment; Corporate Social Responsibility and Impact; Technological Innovation; AI and Machine Learning; Diversity; Prejudice and Bias; Technology Industry
    Neeley, Tsedal, and Tim Englehart. "Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models." Harvard Business School Teaching Note 424-028, October 2023.
    • September–October 2021
    • Article

    Frontiers: Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb

    By: Shunyuan Zhang, Nitin Mehta, Param Singh and Kannan Srinivasan
We study the effect of Airbnb’s smart-pricing algorithm on the racial disparity in the daily revenue earned by Airbnb hosts. Our empirical strategy exploits Airbnb’s introduction of the algorithm and its voluntary adoption by hosts as a quasi-natural experiment. Among... (a difference-in-differences sketch follows the citation below)
    Keywords: Smart Pricing; Pricing Algorithm; Machine Bias; Discrimination; Racial Disparity; Social Inequality; Airbnb Revenue; Revenue; Race; Equality and Inequality; Prejudice and Bias; Price; Mathematical Methods; Accommodations Industry
    Zhang, Shunyuan, Nitin Mehta, Param Singh, and Kannan Srinivasan. "Frontiers: Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb." Marketing Science 40, no. 5 (September–October 2021): 813–820.
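The quasi-experimental strategy in the abstract, reduced to a difference-in-differences sketch on simulated data: the coefficient on the adopter-by-post interaction is the estimate of interest. Variable names and effect sizes are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 8_000
    df = pd.DataFrame({"adopter": rng.integers(0, 2, n),   # host adopted smart pricing
                       "post": rng.integers(0, 2, n)})     # observed after rollout
    df["revenue"] = (100 + 5 * df.adopter + 3 * df.post
                     + 8 * df.adopter * df.post + rng.normal(0, 10, n))

    did = smf.ols("revenue ~ adopter * post", data=df).fit()
    print(did.summary().tables[1])               # adopter:post recovers ~8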
    • 2023
    • Chapter

    Marketing Through the Machine’s Eyes: Image Analytics and Interpretability

    By: Shunyuan Zhang, Flora Feng and Kannan Srinivasan
The growth of social media and the sharing economy is generating abundant unstructured image and video data. Computer vision techniques can derive rich insights from unstructured data and can inform recommendations for increasing profits and consumer utility—if only the...
    Keywords: Transparency; Marketing Research; Algorithmic Bias; AI and Machine Learning; Marketing
Zhang, Shunyuan, Flora Feng, and Kannan Srinivasan. "Marketing Through the Machine’s Eyes: Image Analytics and Interpretability." Chap. 8 in Artificial Intelligence in Marketing, edited by Naresh K. Malhotra, K. Sudhir, and Olivier Toubia, 217–238. Review of Marketing Research, vol. 20. Emerald Publishing Limited, 2023.
    • September 17, 2021
    • Article

    AI Can Help Address Inequity—If Companies Earn Users' Trust

    By: Shunyuan Zhang, Kannan Srinivasan, Param Singh and Nitin Mehta
While companies may spend a lot of time testing models before launch, many spend too little time considering how they will work in the wild. In particular, they fail to fully consider how rates of adoption can warp developers’ intent. For instance, Airbnb launched a...
    Keywords: Artificial Intelligence; Algorithmic Bias; Technological Innovation; Perception; Diversity; Equality and Inequality; Trust; AI and Machine Learning
    Zhang, Shunyuan, Kannan Srinivasan, Param Singh, and Nitin Mehta. "AI Can Help Address Inequity—If Companies Earn Users' Trust." Harvard Business Review Digital Articles (September 17, 2021).
    • October–December 2022
    • Article

    Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem

    By: Mochen Yang, Edward McFowland III, Gordon Burtch and Gediminas Adomavicius
Combining machine learning with econometric analysis is becoming increasingly prevalent in both research and practice. A common empirical strategy involves the application of predictive modeling techniques to "mine" variables of interest from available data, followed... (a minimal illustration of the resulting measurement-error problem follows the citation below)
    Keywords: Machine Learning; Econometric Analysis; Instrumental Variable; Random Forest; Causal Inference; AI and Machine Learning; Forecasting and Prediction
    Yang, Mochen, Edward McFowland III, Gordon Burtch, and Gediminas Adomavicius. "Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem." INFORMS Journal on Data Science 1, no. 2 (October–December 2022): 138–155.
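A minimal illustration of the measurement-error problem the abstract raises, simplified here to the classical case: regressing the outcome on a noisily "mined" proxy attenuates the coefficient relative to the true variable. The paper's setting, and its random-forest-based remedy, are more involved than this.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 50_000
    x_true = rng.normal(size=n)
    y = 2.0 * x_true + rng.normal(size=n)        # true coefficient is 2
    x_mined = x_true + rng.normal(size=n)        # proxy with unit measurement error

    for name, x in (("true x", x_true), ("mined proxy", x_mined)):
        beta = sm.OLS(y, sm.add_constant(x)).fit().params[1]
        print(f"{name}: beta = {beta:.2f}")      # proxy attenuates toward ~1.0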
    • March 2019
    • Case

    Wattpad

    By: John Deighton and Leora Kornfeld
How to run a platform to match four million writers of stories to 75 million readers? Use data science. Make money by doing deals with television and filmmakers and book publishers. The case describes the challenges of matching readers to stories and of helping writers... (a toy recommender sketch follows the citation below)
    Keywords: Platform Businesses; Creative Industries; Publishing; Data Science; Machine Learning; Collaborative Filtering; Women And Leadership; Managing Data Scientists; Big Data; Recommender Systems; Digital Platforms; Information Technology; Intellectual Property; Analytics and Data Science; Publishing Industry; Entertainment and Recreation Industry; Canada; United States; Philippines; Viet Nam; Turkey; Indonesia; Brazil
    Deighton, John, and Leora Kornfeld. "Wattpad." Harvard Business School Case 919-413, March 2019.
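The reader-story matching challenge the case describes, as a toy item-based collaborative-filtering sketch. The read matrix, its scale, and the method are hypothetical; the case listing does not disclose Wattpad's actual recommender.

    import numpy as np

    rng = np.random.default_rng(5)
    reads = (rng.random((1000, 50)) < 0.05).astype(float)   # reader x story matrix

    norms = np.linalg.norm(reads, axis=0, keepdims=True) + 1e-9
    sim = (reads / norms).T @ (reads / norms)    # story-story cosine similarity
    np.fill_diagonal(sim, 0.0)

    reader = reads[0]                            # one reader's history
    scores = sim @ reader                        # similarity-weighted scores
    scores[reader > 0] = -np.inf                 # don't re-recommend read stories
    print("top stories:", np.argsort(scores)[-5:][::-1])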
    • April 2025
    • Article

    Serving with a Smile on Airbnb: Analyzing the Economic Returns and Behavioral Underpinnings of the Host’s Smile

    By: Shunyuan Zhang, Elizabeth Friedman, Kannan Srinivasan, Ravi Dhar and Xupin Zhang
Non-informational cues, such as facial expressions, can significantly influence judgments and interpersonal impressions. While past research has explored how smiling affects business outcomes in offline or in-store contexts, relatively less is known about how smiling... (a regression sketch follows the citation below)
    Keywords: Sharing Economy; Airbnb; Image Feature Extraction; Machine Learning; Facial Expressions; Prejudice and Bias; Nonverbal Communication; E-commerce; Consumer Behavior; Perception
    Zhang, Shunyuan, Elizabeth Friedman, Kannan Srinivasan, Ravi Dhar, and Xupin Zhang. "Serving with a Smile on Airbnb: Analyzing the Economic Returns and Behavioral Underpinnings of the Host’s Smile." Journal of Consumer Research 51, no. 6 (April 2025): 1073–1097.
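The style of analysis the abstract suggests, reduced to a regression sketch: a listing outcome on an ML-extracted "smile score" plus controls. The smile score, the controls, and the data are all hypothetical, and the extraction step (a face-attribute model) is assumed rather than implemented.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    n = 5_000
    df = pd.DataFrame({"smile_score": rng.random(n),     # e.g., from a face-attribute model
                       "price": rng.normal(100, 20, n),
                       "num_photos": rng.integers(1, 30, n)})
    df["bookings"] = (2 + 1.5 * df.smile_score - 0.005 * df.price
                      + 0.01 * df.num_photos + rng.normal(0, 1, n))

    fit = smf.ols("bookings ~ smile_score + price + num_photos", data=df).fit()
    print(fit.summary().tables[1])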