Harvard Business School | Faculty & Research

Publications

Filter Results: (16)

Show Results For

  • All HBS Web (59)
    • Faculty Publications (16)

      Active filter: Algorithmic Fairness

      Page 1 of 16 Results

      • December 2024
      • Article

      Public Attitudes on Performance for Algorithmic and Human Decision-Makers

      By: Kirk Bansak and Elisabeth Paulson
      This study explores public preferences for algorithmic and human decision-makers (DMs) in high-stakes contexts, how these preferences are shaped by performance metrics, and whether public evaluations of performance differ depending on the type of DM. Leveraging a...
      Keywords: Public Opinion; Prejudice and Bias; Decision Making
      Bansak, Kirk, and Elisabeth Paulson. "Public Attitudes on Performance for Algorithmic and Human Decision-Makers." PNAS Nexus 3, no. 12 (December 2024).
      • March 2024
      • Case

      Unintended Consequences of Algorithmic Personalization

      By: Eva Ascarza and Ayelet Israeli
      “Unintended Consequences of Algorithmic Personalization” (HBS No. 524-052) investigates algorithmic bias in marketing through four case studies featuring Apple, Uber, Facebook, and Amazon. Each study presents scenarios where these companies faced public criticism for...
      Keywords: Race; Gender; Marketing; Diversity; Customer Relationship Management; Prejudice and Bias; Customization and Personalization; Technology Industry; Retail Industry; United States
      Ascarza, Eva, and Ayelet Israeli. "Unintended Consequences of Algorithmic Personalization." Harvard Business School Case 524-052, March 2024.
      • 2023
      • Working Paper

      Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness

      By: Neil Menghani, Edward McFowland III and Daniel B. Neill
      In this paper, we develop a new criterion, "insufficiently justified disparate impact" (IJDI), for assessing whether recommendations (binarized predictions) made by an algorithmic decision support tool are fair. Our novel, utility-based IJDI criterion evaluates false...
      Keywords: AI and Machine Learning; Forecasting and Prediction; Prejudice and Bias
      Menghani, Neil, Edward McFowland III, and Daniel B. Neill. "Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness." Working Paper, June 2023.
      • Working Paper

      Group Fairness in Dynamic Refugee Assignment

      By: Daniel Freund, Thodoris Lykouris, Elisabeth Paulson, Bradley Sturt and Wentao Weng
      Ensuring that refugees and asylum seekers thrive (e.g., find employment) in their host countries is a profound humanitarian goal, and a primary driver of employment is the geographic location within a host country to which the refugee or asylum seeker is...
      Keywords: Refugees; Geographic Location; Mathematical Methods; Employment; Fairness
      Freund, Daniel, Thodoris Lykouris, Elisabeth Paulson, Bradley Sturt, and Wentao Weng. "Group Fairness in Dynamic Refugee Assignment." Harvard Business School Working Paper, No. 23-047, February 2023.
      • Article

      Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)

      By: Eva Ascarza and Ayelet Israeli

      An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected”...
      Keywords: Algorithm Bias; Personalization; Targeting; Generalized Random Forests (GRF); Discrimination; Customization and Personalization; Decision Making; Fairness; Mathematical Methods
      Ascarza, Eva, and Ayelet Israeli. "Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)." Proceedings of the National Academy of Sciences 119, no. 11 (March 8, 2022): e2115126119.
      • March 2022
      • Article

      Where to Locate COVID-19 Mass Vaccination Facilities?

      By: Dimitris Bertsimas, Vassilis Digalakis Jr, Alexander Jacquillat, Michael Lingzhi Li and Alessandro Previero
      The outbreak of COVID-19 led to a record-breaking race to develop a vaccine. However, the limited vaccine capacity creates another massive challenge: how to distribute vaccines to mitigate the near-end impact of the pandemic? In the United States in particular, the new...
      Keywords: Vaccines; COVID-19; Health Care and Treatment; Health Pandemics; Performance Effectiveness; Analytics and Data Science; Mathematical Methods
      Bertsimas, Dimitris, Vassilis Digalakis Jr, Alexander Jacquillat, Michael Lingzhi Li, and Alessandro Previero. "Where to Locate COVID-19 Mass Vaccination Facilities?" Naval Research Logistics 69, no. 2 (March 2022): 179–200.
      • Article

      Counterfactual Explanations Can Be Manipulated

      By: Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju and Sameer Singh
      Counterfactual explanations are useful for both generating recourse and auditing fairness between groups. We seek to understand whether adversaries can manipulate counterfactual explanations in an algorithmic recourse setting: if counterfactual explanations indicate...
      Keywords: Machine Learning Models; Counterfactual Explanations
      Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
      • 2021
      • Article

      Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring

      By: Tom Sühr, Sophie Hilgard and Himabindu Lakkaraju
      Ranking algorithms are being widely employed in various online hiring platforms including LinkedIn, TaskRabbit, and Fiverr. Prior research has demonstrated that ranking algorithms employed by these platforms are prone to a variety of undesirable biases, leading to the...
      Sühr, Tom, Sophie Hilgard, and Himabindu Lakkaraju. "Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring." Proceedings of the 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2021).
      • 2021
      • Article

      Fair Influence Maximization: A Welfare Optimization Approach

      By: Aida Rahmattalabi, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice and Milind Tambe
      Several behavioral, social, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, leverage social network information to maximize outreach. Algorithmic influence maximization techniques have been proposed...
      Rahmattalabi, Aida, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice, and Milind Tambe. "Fair Influence Maximization: A Welfare Optimization Approach." Proceedings of the 35th AAAI Conference on Artificial Intelligence (2021).
      • 2021
      • Article

      Fair Algorithms for Infinite and Contextual Bandits

      By: Matthew Joseph, Michael J Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
      We study fairness in linear bandit problems. Starting from the notion of meritocratic fairness introduced in Joseph et al. [2016], we carry out a more refined analysis of a more general problem, achieving better performance guarantees with fewer modelling assumptions...
      Keywords: Algorithms; Bandit Problems; Fairness; Mathematical Methods
      Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Fair Algorithms for Infinite and Contextual Bandits." Proceedings of the 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2021).
      • 2021
      • Conference Presentation

      An Algorithmic Framework for Fairness Elicitation

      By: Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton and Zhiwei Steven Wu
      We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders....
      Keywords: Algorithmic Fairness; Machine Learning; Fairness; Framework; Mathematical Methods
      Jung, Christopher, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton, and Zhiwei Steven Wu. "An Algorithmic Framework for Fairness Elicitation." Paper presented at the 2nd Symposium on Foundations of Responsible Computing (FORC), 2021.
      • 2019
      • Article

      Fair Algorithms for Learning in Allocation Problems

      By: Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael J Kearns, Seth Neel, Aaron Leon Roth and Zachary Schutzman
      Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans given that are repaid, or criminals that are apprehended)....
      Keywords: Allocation Problems; Algorithms; Fairness; Learning
      Elzayn, Hadi, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, and Zachary Schutzman. "Fair Algorithms for Learning in Allocation Problems." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 170–179.
      • 2019
      • Article

      An Empirical Study of Rich Subgroup Fairness for Machine Learning

      By: Michael J Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
      Kearns et al. [2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across...
      Keywords: Machine Learning; Fairness; AI and Machine Learning
      Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "An Empirical Study of Rich Subgroup Fairness for Machine Learning." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 100–109.
      • Article

      Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

      By: Michael J Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
      The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier (like classification rate or false positive rate) across these groups....
      Keywords: Machine Learning; Algorithms; Fairness; Mathematical Methods
      Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness." Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
      • 18 Nov 2016
      • Conference Presentation

      Rawlsian Fairness for Machine Learning

      By: Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
      Motivated by concerns that automated decision-making procedures can unintentionally lead to discriminatory behavior, we study a technical definition of fairness modeled after John Rawls' notion of "fair equality of opportunity". In the context of a simple model of...
      Keywords: Machine Learning; Algorithms; Fairness; Decision Making; Mathematical Methods
      Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Rawlsian Fairness for Machine Learning." Paper presented at the 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning, Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), November 18, 2016.
      • Research Summary

      Overview

      By: Himabindu Lakkaraju
      I develop machine learning tools and techniques which enable human decision makers to make better decisions. More specifically, my research addresses the following fundamental questions pertaining to human and algorithmic decision-making:

      1. How to build...
      Keywords: Artificial Intelligence; Machine Learning; Decision Analysis; Decision Support

      Copyright © President & Fellows of Harvard College.