
Publications

Page 1 of 36 Results

  • 2021
  • Conference Presentation

An Algorithmic Framework for Fairness Elicitation

By: Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton and Zhiwei Steven Wu
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders....
Keywords: Algorithmic Fairness; Machine Learning; Fairness; Framework; Mathematical Methods
Jung, Christopher, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton, and Zhiwei Steven Wu. "An Algorithmic Framework for Fairness Elicitation." Paper presented at the 2nd Symposium on Foundations of Responsible Computing (FORC), 2021.
  • 2019
  • Article

Fair Algorithms for Learning in Allocation Problems

By: Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael J Kearns, Seth Neel, Aaron Leon Roth and Zachary Schutzman
Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans given that are repaid, or criminals that are apprehended)....
Keywords: Allocation Problems; Algorithms; Fairness; Learning
Elzayn, Hadi, Shahin Jabbari, Christopher Jung, Michael J Kearns, Seth Neel, Aaron Leon Roth, and Zachary Schutzman. "Fair Algorithms for Learning in Allocation Problems." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 170–179.
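The setting this abstract describes, a central planner dividing a fixed budget of a scarce resource among groups to maximize an objective, can be made concrete with a small hypothetical example. The Python sketch below (invented group names and success rates) is not the paper's fair learning algorithm; it only illustrates the unconstrained version of the allocation problem, and why a purely objective-maximizing allocation can concentrate the entire resource on one group.

```python
# Hypothetical sketch of the allocation setting described above: a central
# agent splits a fixed budget across groups to maximize expected successes
# (e.g., repaid loans). Per-unit success rates are assumed known and constant
# here; the paper's contribution (learning them fairly under censored
# feedback, subject to a fairness constraint) is not implemented.

def greedy_allocation(budget: int, success_rates: dict) -> dict:
    """Hand out `budget` indivisible units one at a time, each to the group
    with the highest marginal expected return."""
    allocation = {group: 0 for group in success_rates}
    for _ in range(budget):
        best = max(success_rates, key=success_rates.get)
        allocation[best] += 1
    return allocation

# With constant rates the optimum starves the lower-rate group entirely,
# illustrating the disparity that a fairness constraint would bound.
print(greedy_allocation(10, {"group_a": 0.6, "group_b": 0.4}))
# {'group_a': 10, 'group_b': 0}
```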
  • 2021
  • Article

Fair Algorithms for Infinite and Contextual Bandits

By: Matthew Joseph, Michael J Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
We study fairness in linear bandit problems. Starting from the notion of meritocratic fairness introduced in Joseph et al. [2016], we carry out a more refined analysis of a more general problem, achieving better performance guarantees with fewer modelling assumptions...
Keywords: Algorithms; Bandit Problems; Fairness; Mathematical Methods
Joseph, Matthew, Michael J Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Fair Algorithms for Infinite and Contextual Bandits." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society 4th (2021).
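For readers unfamiliar with the meritocratic fairness notion of Joseph et al. [2016] that this paper refines, the rough sketch below illustrates the idea in a plain stochastic bandit rather than the linear and contextual settings analyzed here: an arm is favored over another only when it is confidently better, and otherwise the algorithm randomizes among every arm that remains statistically plausible as the best. The confidence-bound constants, function names, and toy environment are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def fair_bandit(pull, n_arms: int, horizon: int, delta: float = 0.05):
    """Play for `horizon` rounds; `pull(arm)` returns a reward in [0, 1].
    Each round, choose uniformly among arms whose upper confidence bound
    reaches the highest lower confidence bound, so no arm is preferred to
    another unless it is confidently better."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(horizon):
        if t < n_arms:
            candidates = [t]  # pull each arm once to initialize estimates
        else:
            means = [sums[i] / counts[i] for i in range(n_arms)]
            widths = [math.sqrt(math.log(2 * n_arms * horizon / delta) / (2 * counts[i]))
                      for i in range(n_arms)]
            best_lower = max(means[i] - widths[i] for i in range(n_arms))
            candidates = [i for i in range(n_arms)
                          if means[i] + widths[i] >= best_lower]
        arm = random.choice(candidates)
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Toy example: two Bernoulli arms; pulls drift toward the better arm only
# once the confidence intervals separate.
print(fair_bandit(lambda a: float(random.random() < [0.3, 0.7][a]), 2, 500))
```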
  • Article

How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness

By: Nripsuta Saxena, Karen Huang, Evan DeFilippis, Goran Radanovic, David C. Parkes and Yang Liu
What is the best way to define algorithmic fairness? While many definitions of fairness have been proposed in the computer science literature, there is no clear agreement over a particular definition. In this work, we investigate ordinary people’s perceptions of three...
Keywords: Fairness; Decision Making; Perception; Attitudes; Public Opinion
Saxena, Nripsuta, Karen Huang, Evan DeFilippis, Goran Radanovic, David C. Parkes, and Yang Liu. "How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2019).
  • Article

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

By: Michael J Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier (like classification rate or false positive rate) across these groups....
Keywords: Machine Learning; Algorithms; Fairness; Mathematical Methods
Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness." Proceedings of the International Conference on Machine Learning (ICML) 35th (2018).
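The statistical definitions this abstract refers to are simple to state in code: fix a handful of pre-defined groups and compare a statistic of the classifier across them. The snippet below (with made-up labels, predictions, and group tags) checks false-positive-rate parity over two coarse groups; the paper's concern is that a classifier can pass such coarse checks while treating structured subgroups of those groups very differently.

```python
# Illustrative check of one "statistical" fairness definition: compare the
# classifier's false positive rate across pre-defined groups. Labels,
# predictions, and group tags below are invented for the example.

def false_positive_rate(y_true, y_pred):
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

def fpr_by_group(y_true, y_pred, groups):
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return rates, max(rates.values()) - min(rates.values())

y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(fpr_by_group(y_true, y_pred, groups))  # per-group FPRs and the parity gap
```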
  • 18 Nov 2016
  • Conference Presentation

Rawlsian Fairness for Machine Learning

By: Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
Motivated by concerns that automated decision-making procedures can unintentionally lead to discriminatory behavior, we study a technical definition of fairness modeled after John Rawls' notion of "fair equality of opportunity". In the context of a simple model of...
Keywords: Machine Learning; Algorithms; Fairness; Decision Making; Mathematical Methods
Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Rawlsian Fairness for Machine Learning." Paper presented at the 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning, Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), November 18, 2016.
  • 2021
  • Article

Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring

By: Tom Sühr, Sophie Hilgard and Himabindu Lakkaraju
Ranking algorithms are being widely employed in various online hiring platforms including LinkedIn, TaskRabbit, and Fiverr. Prior research has demonstrated that ranking algorithms employed by these platforms are prone to a variety of undesirable biases, leading to the...
Sühr, Tom, Sophie Hilgard, and Himabindu Lakkaraju. "Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society 4th (2021).
  • March 2024
  • Case

Unintended Consequences of Algorithmic Personalization

By: Eva Ascarza and Ayelet Israeli
“Unintended Consequences of Algorithmic Personalization” (HBS No. 524-052) investigates algorithmic bias in marketing through four case studies featuring Apple, Uber, Facebook, and Amazon. Each study presents scenarios where these companies faced public criticism for...
Keywords: Race; Gender; Marketing; Diversity; Customer Relationship Management; Prejudice and Bias; Customization and Personalization; Technology Industry; Retail Industry; United States
Ascarza, Eva, and Ayelet Israeli. "Unintended Consequences of Algorithmic Personalization." Harvard Business School Case 524-052, March 2024.
  • Working Paper

Group Fairness in Dynamic Refugee Assignment

By: Daniel Freund, Thodoris Lykouris, Elisabeth Paulson, Bradley Sturt and Wentao Weng
Ensuring that refugees and asylum seekers thrive (e.g., find employment) in their host countries is a profound humanitarian goal, and a primary driver of employment is the geographic location within a host country to which the refugee or asylum seeker is...
Keywords: Refugees; Geographic Location; Mathematical Methods; Employment; Fairness
Freund, Daniel, Thodoris Lykouris, Elisabeth Paulson, Bradley Sturt, and Wentao Weng. "Group Fairness in Dynamic Refugee Assignment." Harvard Business School Working Paper, No. 23-047, February 2023.
  • 2019
  • Article

An Empirical Study of Rich Subgroup Fairness for Machine Learning

By: Michael J Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
Kearns et al. [2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across...
Keywords: Machine Learning; Fairness; AI and Machine Learning
Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "An Empirical Study of Rich Subgroup Fairness for Machine Learning." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 100–109.
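To give a sense of what auditing "rich" subgroups involves, the brute-force sketch below enumerates every conjunction of two binary protected attributes and reports the subgroup whose false positive rate deviates most from the overall rate. This is a hypothetical illustration of the auditing problem only; the approach studied by Kearns et al. replaces exhaustive enumeration with a learning oracle, which is what lets it scale to large subgroup classes.

```python
# Brute-force illustration of subgroup auditing: among all conjunctions of
# two binary protected attributes, find the subgroup whose false positive
# rate deviates most from the overall FPR. Data layout and the minimum
# subgroup size are assumptions made for the example.

from itertools import combinations

def fpr(y_true, y_pred):
    neg = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    return sum(p for _, p in neg) / len(neg) if neg else 0.0

def worst_subgroup(y_true, y_pred, attrs, min_size=2):
    """attrs maps attribute name -> list of 0/1 values, one per example."""
    overall = fpr(y_true, y_pred)
    worst, worst_dev = None, 0.0
    for a, b in combinations(attrs, 2):
        for va in (0, 1):
            for vb in (0, 1):
                idx = [i for i in range(len(y_true))
                       if attrs[a][i] == va and attrs[b][i] == vb]
                if len(idx) < min_size:
                    continue  # skip subgroups too small to audit
                dev = abs(fpr([y_true[i] for i in idx],
                              [y_pred[i] for i in idx]) - overall)
                if dev > worst_dev:
                    worst, worst_dev = {a: va, b: vb}, dev
    return worst, worst_dev

print(worst_subgroup([0, 0, 0, 0, 1, 1], [1, 0, 1, 0, 0, 1],
                     {"sex": [0, 0, 1, 1, 0, 1], "age": [0, 1, 0, 1, 1, 0]}))
```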
  • 2021
  • Article

Fair Influence Maximization: A Welfare Optimization Approach

By: Aida Rahmattalabi, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice and Milind Tambe
Several behavioral, social, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, leverage social network information to maximize outreach. Algorithmic influence maximization techniques have been proposed...
Rahmattalabi, Aida, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice, and Milind Tambe. "Fair Influence Maximization: A Welfare Optimization Approach." Proceedings of the AAAI Conference on Artificial Intelligence 35th (2021).
  • December 2024
  • Article

Public Attitudes on Performance for Algorithmic and Human Decision-Makers

By: Kirk Bansak and Elisabeth Paulson
This study explores public preferences for algorithmic and human decision-makers (DMs) in high-stakes contexts, how these preferences are shaped by performance metrics, and whether public evaluations of performance differ depending on the type of DM. Leveraging a...
Keywords: Public Opinion; Prejudice and Bias; Decision Making
Bansak, Kirk, and Elisabeth Paulson. "Public Attitudes on Performance for Algorithmic and Human Decision-Makers." PNAS Nexus 3, no. 12 (December 2024).
  • 2023
  • Working Paper

Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness

By: Neil Menghani, Edward McFowland III and Daniel B. Neill
In this paper, we develop a new criterion, "insufficiently justified disparate impact" (IJDI), for assessing whether recommendations (binarized predictions) made by an algorithmic decision support tool are fair. Our novel, utility-based IJDI criterion evaluates false...
Keywords: AI and Machine Learning; Forecasting and Prediction; Prejudice and Bias
Menghani, Neil, Edward McFowland III, and Daniel B. Neill. "Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness." Working Paper, June 2023.
  • Research Summary

Ethics & Politics of Emerging Technologies

In this stream of research, my collaborators and I investigate the ethical, political, and social implications of computational technologies. 

In this work, I often collaborate with academic colleagues in computer science by helping to...
Keywords: Artificial Intelligence; Algorithms; Computational Social Science
  • Article

Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)

By: Eva Ascarza and Ayelet Israeli

An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected”...
Keywords: Algorithm Bias; Personalization; Targeting; Generalized Random Forests (GRF); Discrimination; Customization and Personalization; Decision Making; Fairness; Mathematical Methods
Ascarza, Eva, and Ayelet Israeli. "Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)." Proceedings of the National Academy of Sciences 119, no. 11 (March 8, 2022): e2115126119.
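The disparity this abstract warns about can be made concrete by auditing a trained targeting policy after the fact: compare how often the policy targets each protected group, even though the protected attribute was never an input to the policy. The pandas snippet below is an illustrative audit only, with invented column names and data; it is not the BEAT method itself, which the authors describe as detecting and eliminating bias during training of the targeting model.

```python
# Illustrative post-hoc audit of a personalization policy: share of each
# protected group that the policy targets. Column names and data are
# hypothetical; `targeted` is the policy's 0/1 decision and `gender` is a
# protected attribute that was withheld from the policy.

import pandas as pd

def targeting_rates(df: pd.DataFrame, decision_col: str, group_col: str) -> pd.Series:
    """Average targeting decision within each protected group."""
    return df.groupby(group_col)[decision_col].mean()

df = pd.DataFrame({
    "targeted": [1, 0, 1, 1, 0, 0, 1, 0],
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
})
rates = targeting_rates(df, "targeted", "gender")
print(rates)
print("targeting-rate disparity:", rates.max() - rates.min())  # 0.75 vs 0.25 here
```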
  • Teaching Interest

Overview

Paul is primarily interested in teaching data science to management students through the case method. This includes technical topics (programming and statistics) as well as higher-level management issues (digital transformation, data governance, etc.). As a research...
Keywords: A/B Testing; AI; AI Algorithms; AI Creativity; Algorithm; Algorithm Bias; Algorithmic Bias; Algorithmic Fairness; Algorithms; Analytics; Application Program Interface; Artificial Intelligence; Causality; Causal Inference; Computing; Computers; Data Analysis; Data Analytics; Data Architecture; Data As A Service; Data Centers; Data Governance; Data Labeling; Data Management; Data Manipulation; Data Mining; Data Ownership; Data Privacy; Data Protection; Data Science; Data Science And Analytics Management; Data Scientists; Data Security; Data Sharing; Data Strategy; Data Visualization; Database; Data-driven Decision-making; Data-driven Management; Data-driven Operations; Datathon; Economics Of AI; Economics Of Innovation; Economics Of Information System; Economics Of Science; Forecast; Forecast Accuracy; Forecasting; Forecasting And Prediction; Information Technology; Machine Learning; Machine Learning Models; Prediction; Prediction Error; Predictive Analytics; Predictive Models; Analysis; AI and Machine Learning; Analytics and Data Science; Applications and Software; Digital Transformation; Information Management; Digital Strategy; Technology Adoption
  • Research Summary

Overview

By: Himabindu Lakkaraju
I develop machine learning tools and techniques which enable human decision makers to make better decisions. More specifically, my research addresses the following fundamental questions pertaining to human and algorithmic decision-making:

1. How to build...
Keywords: Artificial Intelligence; Machine Learning; Decision Analysis; Decision Support
  • 18 Oct 2022
  • Research & Ideas

When Bias Creeps into AI, Managers Can Stop It by Asking the Right Questions

algorithm generates fair outcomes. As the algorithm sorts through information to optimize its objective, BEAT detects and eliminates bias at key points in the training process....
By: Rachel Layne
  • March–April 2022
  • Article

School Choice in Chile

By: Jose Correa, Natalie Epstein, Rafael Epstein, Juan Escobar, Ignacio Rios, Nicolas Aramayo, Bastian Bahamondes, Carlos Bonet, Martin Castillo, Andres Cristi, Boris Epstein and Felipe Subiabre
Centralized school admission mechanisms are an attractive way of improving social welfare and fairness in large educational systems. In this paper, we report the design and implementation of the newly established school choice system in Chile, where over 274,000...
Keywords: Early Childhood Education; Secondary Education; Middle School Education; Family and Family Relationships; Welfare; Chile
Correa, Jose, Natalie Epstein, Rafael Epstein, Juan Escobar, Ignacio Rios, Nicolas Aramayo, Bastian Bahamondes, Carlos Bonet, Martin Castillo, Andres Cristi, Boris Epstein, and Felipe Subiabre. "School Choice in Chile." Operations Research 70, no. 2 (March–April 2022): 1066–1087.
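As background on the "centralized school admission mechanisms" the abstract refers to, the sketch below implements textbook student-proposing deferred acceptance with strict priorities and fixed capacities, using toy names and preferences. It is a generic, simplified member of this family of mechanisms, not the specific system deployed in Chile, which the paper describes in full.

```python
# Generic student-proposing deferred acceptance with strict school priorities
# and fixed capacities. Names, preferences, and capacities are toy inputs;
# real systems (including Chile's) add features such as ties, quotas, and
# sibling priorities that are not modeled here.

def deferred_acceptance(student_prefs, school_priorities, capacities):
    """student_prefs: {student: [schools, most preferred first]}
       school_priorities: {school: [students, highest priority first]}
       capacities: {school: number of seats}"""
    rank = {s: {stu: i for i, stu in enumerate(order)}
            for s, order in school_priorities.items()}
    next_choice = {stu: 0 for stu in student_prefs}
    held = {s: [] for s in school_priorities}
    unmatched = list(student_prefs)
    while unmatched:
        stu = unmatched.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue  # student has exhausted their list and stays unassigned
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        held[school].sort(key=lambda x: rank[school][x])
        if len(held[school]) > capacities[school]:
            unmatched.append(held[school].pop())  # lowest-priority applicant is bumped
    return held

print(deferred_acceptance(
    {"ana": ["s1", "s2"], "ben": ["s1", "s2"], "carla": ["s1"]},
    {"s1": ["carla", "ana", "ben"], "s2": ["ben", "ana"]},
    {"s1": 1, "s2": 1},
))
# {'s1': ['carla'], 's2': ['ben']}; ana is unassigned in this toy instance
```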
  • Article

Counterfactual Explanations Can Be Manipulated

By: Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju and Sameer Singh
Counterfactual explanations are useful for both generating recourse and auditing fairness between groups. We seek to understand whether adversaries can manipulate counterfactual explanations in an algorithmic recourse setting: if counterfactual explanations indicate...
Keywords: Machine Learning Models; Counterfactual Explanations
Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
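For context on the object being studied, the toy snippet below generates a counterfactual explanation in the most naive way possible: it grid-searches small axis-aligned feature changes until a hypothetical credit model's decision flips. It is only meant to show what recourse from a counterfactual explanation looks like; the model, step size, and search strategy are assumptions for illustration, and the paper's question of whether such explanations can be adversarially manipulated is not addressed by this sketch.

```python
# Naive counterfactual-explanation search: perturb one feature at a time,
# in growing steps, until the model's decision flips. The "credit model"
# and feature values are hypothetical.

def counterfactual(model_predict, x, step=0.1, max_steps=20):
    base = model_predict(x)
    for radius in range(1, max_steps + 1):
        for d in range(len(x)):
            for sign in (+1, -1):
                x_cf = list(map(float, x))
                x_cf[d] += sign * radius * step
                if model_predict(x_cf) != base:
                    return x_cf  # smallest axis-aligned change that flips the decision
    return None

# Toy linear model: approve when income + 0.5 * savings exceeds 1.0.
predict = lambda v: int(v[0] + 0.5 * v[1] > 1.0)
print(counterfactual(predict, [0.7, 0.4]))  # [0.9, 0.4]: raise income by 0.2
```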
