Results
- March 2024
- Case
Unintended Consequences of Algorithmic Personalization
By: Eva Ascarza and Ayelet Israeli
“Unintended Consequences of Algorithmic Personalization” (HBS No. 524-052) investigates algorithmic bias in marketing through four case studies featuring Apple, Uber, Facebook, and Amazon. Each study presents scenarios where these companies faced public criticism for...
Keywords: Race; Gender; Marketing; Diversity; Customer Relationship Management; Prejudice and Bias; Customization and Personalization; Technology Industry; Retail Industry; United States
Ascarza, Eva, and Ayelet Israeli. "Unintended Consequences of Algorithmic Personalization." Harvard Business School Case 524-052, March 2024.
- 2023
- Working Paper
Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness
By: Neil Menghani, Edward McFowland III and Daniel B. Neill
In this paper, we develop a new criterion, "insufficiently justified disparate impact" (IJDI), for assessing whether recommendations (binarized predictions) made by an algorithmic decision support tool are fair. Our novel, utility-based IJDI criterion evaluates false...
Menghani, Neil, Edward McFowland III, and Daniel B. Neill. "Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness." Working Paper, June 2023.
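The IJDI definition is only partially quoted above. As a rough illustration of the kind of subgroup audit it builds on, the Python sketch below compares false-positive rates of binarized predictions across subgroups and flags gaps above a threshold. The synthetic data, the 0.05 threshold, and the flagging rule are illustrative assumptions, not the authors' criterion, which additionally weighs utility.

```python
import numpy as np

def subgroup_fp_rates(y_true, y_pred, groups):
    """False-positive rate per subgroup (illustrative audit, not the paper's IJDI)."""
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 0)   # true negatives in group g
        rates[g] = y_pred[mask].mean() if mask.any() else 0.0
    return rates

# Toy data: labels, binarized recommendations, and a protected attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
groups = rng.integers(0, 2, 1000)

rates = subgroup_fp_rates(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(f"FP rates by group: {rates}, gap: {gap:.3f}")
# A utility-based criterion would ask whether this gap is offset by a
# commensurate utility gain; here we only flag gaps above a fixed threshold.
if gap > 0.05:
    print("Disparity flagged for justification review.")
```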
- Working Paper
Group Fairness in Dynamic Refugee Assignment
By: Daniel Freund, Thodoris Lykouris, Elisabeth Paulson, Bradley Sturt and Wentao Weng
Ensuring that refugees and asylum seekers thrive (e.g., find employment) in their host countries is a profound humanitarian goal, and a primary driver of employment is the geographic location within a host country to which the refugee or asylum seeker is...
Freund, Daniel, Thodoris Lykouris, Elisabeth Paulson, Bradley Sturt, and Wentao Weng. "Group Fairness in Dynamic Refugee Assignment." Harvard Business School Working Paper, No. 23-047, February 2023.
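The paper's dynamic assignment algorithms are not detailed in the excerpt. The sketch below is a minimal myopic baseline, assuming hypothetical employment probabilities per group and location, that exposes the quantity such algorithms trade off: group-level expected employment under capacity limits.

```python
import numpy as np

# Hypothetical inputs: employment probabilities per (group, location),
# location capacities, and an arrival stream of group labels.
p = np.array([[0.30, 0.55, 0.40],    # group 0's employment prob. by location
              [0.50, 0.35, 0.45]])   # group 1's
capacity = np.array([4, 4, 4])
arrivals = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1]

group_expected = np.zeros(2)  # accumulated expected employment per group
group_count = np.zeros(2)

for g in arrivals:
    open_locs = np.flatnonzero(capacity > 0)
    # Myopic baseline: the best open location for this arrival's group.
    # The paper's algorithms add group-fairness guarantees on top of this.
    best = open_locs[np.argmax(p[g, open_locs])]
    capacity[best] -= 1
    group_expected[g] += p[g, best]
    group_count[g] += 1

print("Avg expected employment by group:", group_expected / group_count)
```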
- Article
Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)
By: Eva Ascarza and Ayelet Israeli
An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected”...
Keywords: Algorithm Bias; Personalization; Targeting; Generalized Random Forests (GRF); Discrimination; Customization and Personalization; Decision Making; Fairness; Mathematical Methods
Ascarza, Eva, and Ayelet Israeli. "Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)." Proceedings of the National Academy of Sciences 119, no. 11 (March 8, 2022): e2115126119.
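BEAT itself builds on generalized random forests, which the keywords above only name. As a crude stand-in for the idea of removing protected-attribute influence from a targeting score, this sketch residualizes a synthetic score on a binary protected attribute before thresholding. The data and the mean-adjustment step are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, n)             # a binary protected attribute
# Hypothetical uplift score that accidentally correlates with the attribute.
score = 0.4 * protected + rng.normal(0, 1, n)

target_raw = score > np.quantile(score, 0.7)  # target the top 30%

# Crude stand-in for bias elimination (NOT the paper's GRF-based BEAT):
# subtract each group's mean score before thresholding.
mean_by_group = np.array([score[protected == g].mean() for g in (0, 1)])
adj = score - mean_by_group[protected]
target_adj = adj > np.quantile(adj, 0.7)

for name, t in [("raw", target_raw), ("adjusted", target_adj)]:
    r0, r1 = t[protected == 0].mean(), t[protected == 1].mean()
    print(f"{name}: targeting rate group0={r0:.2f}, group1={r1:.2f}")
```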
- March 2022
- Article
Where to Locate COVID-19 Mass Vaccination Facilities?
By: Dimitris Bertsimas, Vassilis Digalakis Jr, Alexander Jacquillat, Michael Lingzhi Li and Alessandro Previero
The outbreak of COVID-19 led to a record-breaking race to develop a vaccine. However, the limited vaccine capacity creates another massive challenge: how to distribute vaccines to mitigate the near-end impact of the pandemic? In the United States in particular, the new...
Keywords: Vaccines; COVID-19; Health Care and Treatment; Health Pandemics; Performance Effectiveness; Analytics and Data Science; Mathematical Methods
Bertsimas, Dimitris, Vassilis Digalakis Jr, Alexander Jacquillat, Michael Lingzhi Li, and Alessandro Previero. "Where to Locate COVID-19 Mass Vaccination Facilities?" Naval Research Logistics 69, no. 2 (March 2022): 179–200.
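The paper formulates vaccine facility location as an optimization model. The sketch below conveys the general shape of the problem with a simple greedy heuristic over a synthetic distance matrix; the greedy rule and the random geometry are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def greedy_facility_location(dist, k):
    """Greedily open k facilities to minimize total assignment distance.
    dist[i, j] = distance from individual i to candidate site j.
    A simple heuristic, not the paper's optimization model."""
    n_people, n_sites = dist.shape
    open_sites, best = [], np.full(n_people, np.inf)
    for _ in range(k):
        # Pick the site that most reduces total distance given what is open.
        gains = [(np.minimum(best, dist[:, j]).sum(), j)
                 for j in range(n_sites) if j not in open_sites]
        total, j_star = min(gains)
        open_sites.append(j_star)
        best = np.minimum(best, dist[:, j_star])
    return open_sites, best.sum()

rng = np.random.default_rng(2)
people = rng.uniform(0, 100, (200, 2))
sites = rng.uniform(0, 100, (20, 2))
dist = np.linalg.norm(people[:, None, :] - sites[None, :, :], axis=2)
chosen, cost = greedy_facility_location(dist, k=4)
print("Open sites:", chosen, "total distance:", round(cost, 1))
```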
- Article
Counterfactual Explanations Can Be Manipulated
By: Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju and Sameer Singh
Counterfactual explanations are useful for both generating recourse and auditing fairness between groups. We seek to understand whether adversaries can manipulate counterfactual explanations in an algorithmic recourse setting: if counterfactual explanations indicate...
Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
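For readers unfamiliar with counterfactual explanations, the sketch below generates a minimal counterfactual for a linear scorer in closed form; this is the kind of recourse the paper shows can be manipulated. The weights and the epsilon margin are illustrative assumptions, and the paper's adversarial manipulation itself is not reproduced here.

```python
import numpy as np

# For a linear scorer f(x) = w @ x + b, the closest point on the decision
# boundary has a closed form; stepping just past it gives a minimal
# counterfactual. Illustrates recourse generation only, not the attack.
def linear_counterfactual(x, w, b, eps=1e-3):
    margin = w @ x + b
    # Move just across the boundary so that f(x_cf) = -sign(margin) * eps.
    step = (margin + np.sign(margin) * eps) / (w @ w)
    return x - step * w

w, b = np.array([2.0, -1.0]), -0.5
x = np.array([0.0, 1.0])          # rejected applicant: f(x) = -1.5 < 0
x_cf = linear_counterfactual(x, w, b)
print("f(x) =", w @ x + b, " f(x_cf) =", round(w @ x_cf + b, 4))
```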
- 2021
- Article
Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring
By: Tom Sühr, Sophie Hilgard and Himabindu Lakkaraju
Ranking algorithms are being widely employed in various online hiring platforms including LinkedIn, TaskRabbit, and Fiverr. Prior research has demonstrated that ranking algorithms employed by these platforms are prone to a variety of undesirable biases, leading to the...
Sühr, Tom, Sophie Hilgard, and Himabindu Lakkaraju. "Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring." Proceedings of the 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2021).
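As background for what a "fair ranking" is, the sketch below implements one simple re-ranking rule on toy candidates: every prefix of the ranking must contain a minimum share of protected-group candidates. The 0.3 share and the greedy tie-breaking are assumptions, not the paper's experimental conditions.

```python
def fair_rerank(candidates, min_share=0.3):
    """Greedy re-ranking: at every prefix length k, require at least
    floor(min_share * k) protected-group candidates. A simple stand-in
    for fair-ranking methods, not the paper's setup.
    `candidates` is a list of (score, is_protected), highest score first."""
    prot = [c for c in candidates if c[1]]
    rest = [c for c in candidates if not c[1]]
    ranked = []
    while prot or rest:
        k = len(ranked) + 1
        need = int(min_share * k)
        have = sum(1 for c in ranked if c[1])
        if prot and (have < need or not rest or prot[0][0] >= rest[0][0]):
            ranked.append(prot.pop(0))
        else:
            ranked.append(rest.pop(0))
    return ranked

cands = sorted([(0.9, False), (0.85, False), (0.8, True), (0.7, False),
                (0.6, True), (0.5, False)], reverse=True)
for i, (s, p) in enumerate(fair_rerank(cands), 1):
    print(i, s, "protected" if p else "")
```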
- 2021
- Article
Fair Influence Maximization: A Welfare Optimization Approach
By: Aida Rahmattalabi, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice and Milind Tambe
Several behavioral, social, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, leverage social network information to maximize outreach. Algorithmic influence maximization techniques have been proposed...
Rahmattalabi, Aida, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice, and Milind Tambe. "Fair Influence Maximization: A Welfare Optimization Approach." Proceedings of the 35th AAAI Conference on Artificial Intelligence (2021).
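As a loose illustration of welfare-based seed selection, the sketch below runs a greedy influence-maximization loop that scores each candidate seed by the minimum group coverage it achieves, i.e. a maximin welfare objective. The toy network, group labels, and the choice of maximin (the paper studies a broader family of welfare functions) are all assumptions.

```python
import numpy as np

# Toy network: the set each seed would reach, plus a group label per node.
reach = {0: {0, 1, 2}, 1: {2, 3}, 2: {4, 5}, 3: {5, 6, 7}, 4: {1, 7}}
group = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B', 6: 'B', 7: 'A'}

def coverage_by_group(covered):
    out = {}
    for g in set(group.values()):
        members = [v for v in group if group[v] == g]
        out[g] = len([v for v in members if v in covered]) / len(members)
    return out

# Welfare-flavored greedy (illustrative, not the paper's algorithm):
# pick the seed maximizing the *minimum* group coverage (maximin welfare).
budget, seeds, covered = 2, [], set()
for _ in range(budget):
    best = max((s for s in reach if s not in seeds),
               key=lambda s: min(coverage_by_group(covered | reach[s]).values()))
    seeds.append(best)
    covered |= reach[best]

print("Seeds:", seeds, "group coverage:", coverage_by_group(covered))
```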
- 2021
- Article
Fair Algorithms for Infinite and Contextual Bandits
By: Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
We study fairness in linear bandit problems. Starting from the notion of meritocratic fairness introduced in Joseph et al. [2016], we carry out a more refined analysis of a more general problem, achieving better performance guarantees with fewer modelling assumptions...
Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Fair Algorithms for Infinite and Contextual Bandits." Proceedings of the 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2021).
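The meritocratic-fairness idea, never favoring an arm that is plausibly worse, can be shown in a few lines: play uniformly among all arms whose upper confidence bound reaches the best lower confidence bound. This is a simplified version of the interval-chaining construction in this line of work, with an assumed confidence-width formula and Bernoulli arms.

```python
import numpy as np

rng = np.random.default_rng(3)
true_means = np.array([0.3, 0.5, 0.55])
counts = np.ones(3)
sums = rng.binomial(1, true_means).astype(float)  # one initial pull per arm

for t in range(1, 2001):
    means = sums / counts
    width = np.sqrt(2 * np.log(1 + t) / counts)
    lcb, ucb = means - width, means + width
    # Fairness-flavored rule (after Joseph et al.): never prefer a
    # plausibly-worse arm. Play uniformly among arms whose upper bound
    # reaches the best arm's lower bound, so ties of merit stay tied.
    eligible = np.flatnonzero(ucb >= lcb.max())
    arm = rng.choice(eligible)
    counts[arm] += 1
    sums[arm] += rng.binomial(1, true_means[arm])

print("Pulls per arm:", counts.astype(int))
```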
- 2021
- Conference Presentation
An Algorithmic Framework for Fairness Elicitation
By: Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton and Zhiwei Steven Wu
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders....
Jung, Christopher, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton, and Zhiwei Steven Wu. "An Algorithmic Framework for Fairness Elicitation." Paper presented at the 2nd Symposium on Foundations of Responsible Computing (FORC), 2021.
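In the spirit of elicited rather than predefined fairness, the toy sketch below checks a model's scores against stakeholder pairwise judgments of the form "i should fare at least as well as j". The names, scores, and judgment format are hypothetical; the paper's framework handles noisy elicitation and learning subject to such constraints.

```python
# Stakeholders provide pairwise judgments (i, j) meaning "i should score
# at least as well as j". Counting a model's violations is a toy version
# of auditing against elicited, rather than fixed statistical, fairness.
scores = {"ann": 0.7, "bob": 0.6, "carla": 0.8, "dev": 0.4}
elicited = [("bob", "ann"), ("dev", "carla"), ("ann", "dev")]
violations = [(i, j) for i, j in elicited if scores[i] < scores[j]]
print(violations)  # pairs where the model contradicts the judgments
```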
- 2019
- Article
Fair Algorithms for Learning in Allocation Problems
By: Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zachary Schutzman
Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans given that are repaid, or criminals that are apprehended)....
Elzayn, Hadi, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, and Zachary Schutzman. "Fair Algorithms for Learning in Allocation Problems." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 170–179.
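One simple fairness notion in allocation problems is to give every candidate, whichever group they belong to, a similar chance of being reached. The sketch below allocates a fixed budget in proportion to assumed group prevalences; the paper's harder problem is learning those prevalences from censored feedback, which this sketch does not attempt.

```python
import numpy as np

def proportional_allocation(prevalence, budget):
    """Allocate a scarce resource across groups in proportion to estimated
    candidates, so each candidate faces a similar chance of being reached.
    One simple fairness notion from this literature, not the paper's
    learning algorithm."""
    share = prevalence / prevalence.sum()
    alloc = np.floor(share * budget).astype(int)
    # Hand any remainder to the largest fractional shares.
    rem = budget - alloc.sum()
    order = np.argsort(-(share * budget - alloc))
    alloc[order[:rem]] += 1
    return alloc

prevalence = np.array([120.0, 60.0, 20.0])  # estimated candidates per group
print(proportional_allocation(prevalence, budget=50))  # -> [30 15  5]
```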
- 2019
- Article
An Empirical Study of Rich Subgroup Fairness for Machine Learning
By: Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
Kearns et al. [2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across...
Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "An Empirical Study of Rich Subgroup Fairness for Machine Learning." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 100–109.
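Rich subgroup fairness audits over a combinatorially large class of groups rather than a fixed list. The brute-force sketch below audits false-positive-rate gaps over all axis-aligned threshold subgroups of synthetic data, weighting each gap by subgroup size; the paper instead uses learning oracles to make this search tractable.

```python
import numpy as np

def audit_fp_disparity(X, y_true, y_pred):
    """Audit over a rich subgroup class: here, all threshold groups
    {x_j >= t}. Returns the subgroup with the largest size-weighted
    false-positive-rate gap vs. the population — a brute-force stand-in
    for the paper's oracle-based auditor."""
    neg = y_true == 0
    base_fp = y_pred[neg].mean()
    worst = (0.0, None)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            g = X[:, j] >= t
            gn = g & neg
            if gn.sum() == 0:
                continue
            gap = abs(y_pred[gn].mean() - base_fp) * g.mean()  # size-weighted
            if gap > worst[0]:
                worst = (gap, (j, t))
    return worst

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
y_true = rng.integers(0, 2, 500)
y_pred = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)  # leaks x_0
print(audit_fp_disparity(X, y_true, y_pred))
```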
- Article
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
By: Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier (like classification rate or false positive rate) across these groups....
Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness." Proceedings of the 35th International Conference on Machine Learning (ICML) (2018).
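The "gerrymandering" failure mode, fair on each marginal group yet unfair on an intersection, is easy to reproduce synthetically, as the sketch below does with a rule that equalizes positive rates across gender and across race separately while giving one gender-race intersection a positive rate of 1.

```python
import numpy as np

# Fairness gerrymandering in miniature: a rule can satisfy statistical
# parity on each marginal attribute while singling out an intersection.
# Synthetic illustration of the paper's motivating problem.
rng = np.random.default_rng(5)
gender = rng.integers(0, 2, 100000)
race = rng.integers(0, 2, 100000)
pred = (gender == race).astype(int)  # positive iff (0,0) or (1,1)

for name, attr in [("gender", gender), ("race", race)]:
    print(name, [pred[attr == v].mean().round(3) for v in (0, 1)])  # ~0.5 each
print("intersection (g=0,r=0):", pred[(gender == 0) & (race == 0)].mean())  # 1.0
```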
- 18 Nov 2016
- Conference Presentation
Rawlsian Fairness for Machine Learning
By: Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
Motivated by concerns that automated decision-making procedures can unintentionally lead to discriminatory behavior, we study a technical definition of fairness modeled after John Rawls' notion of "fair equality of opportunity". In the context of a simple model of...
Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Rawlsian Fairness for Machine Learning." Paper presented at the 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning, Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), November 18, 2016.
- Research Summary
Overview
I develop machine learning tools and techniques that enable human decision makers to make better decisions. More specifically, my research addresses the following fundamental questions pertaining to human and algorithmic decision-making:
1. How to build...