Show Results For
- All HBS Web
(117,452)
- Faculty Publications (80)
- 2022
- Article
Data Poisoning Attacks on Off-Policy Evaluation Methods
By: Elita Lobo, Harvineet Singh, Marek Petrik, Cynthia Rudin and Himabindu Lakkaraju
Off-policy Evaluation (OPE) methods are a crucial tool for evaluating policies in high-stakes domains such as healthcare, where exploration is often infeasible, unethical, or expensive. However, the extent to which such methods can be trusted under adversarial threats...
Lobo, Elita, Harvineet Singh, Marek Petrik, Cynthia Rudin, and Himabindu Lakkaraju. "Data Poisoning Attacks on Off-Policy Evaluation Methods." Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 38th (2022): 1264–1274.
- 2022
- Article
Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
By: Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen Bach and Himabindu Lakkaraju
As post hoc explanation methods are increasingly being leveraged to explain complex models in high-stakes settings, it becomes critical to ensure that the quality of the resulting explanations is consistently high across all subgroups of a population. For instance, it...
Dai, Jessica, Sohini Upadhyay, Ulrich Aivodji, Stephen Bach, and Himabindu Lakkaraju. "Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2022): 203–214.
- 2022
- Article
Towards Robust Off-Policy Evaluation via Human Inputs
By: Harvineet Singh, Shalmali Joshi, Finale Doshi-Velez and Himabindu Lakkaraju
Off-policy Evaluation (OPE) methods are crucial tools for evaluating policies in high-stakes domains such as healthcare, where direct deployment is often infeasible, unethical, or expensive. When deployment environments are expected to undergo changes (that is, dataset...
Singh, Harvineet, Shalmali Joshi, Finale Doshi-Velez, and Himabindu Lakkaraju. "Towards Robust Off-Policy Evaluation via Human Inputs." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2022): 686–699.
- 2022
- Conference Presentation
Towards the Unification and Robustness of Post hoc Explanation Methods
By: Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu and Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two...
Keywords: AI and Machine Learning
Agarwal, Sushant, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, and Himabindu Lakkaraju. "Towards the Unification and Robustness of Post hoc Explanation Methods." Paper presented at the 3rd Symposium on Foundations of Responsible Computing (FORC), 2022.
- May 2022 (Revised July 2023)
- Case
Altibbi: Revolutionizing Telehealth Using AI
Lakkaraju, Himabindu. "Altibbi: Revolutionizing Telehealth Using AI." Harvard Business School Case 622-088, May 2022. (Revised July 2023.)
- 2022
- Article
Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis
By: Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay and Himabindu Lakkaraju
As machine learning (ML) models become more widely deployed in high-stakes applications, counterfactual explanations have emerged as key tools for providing actionable model explanations in practice. Despite the growing popularity of counterfactual explanations, a...
Keywords: Machine Learning Models; Counterfactual Explanations; Adversarial Examples; Mathematical Methods
Pawelczyk, Martin, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, and Himabindu Lakkaraju. "Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 25th (2022).
- 2022
- Article
Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods
By: Chirag Agarwal, Marinka Zitnik and Himabindu Lakkaraju
As Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes critical to ensure that the stakeholders understand the rationale behind their predictions. While several GNN explanation methods have been proposed recently, there has...
Keywords: Graph Neural Networks; Explanation Methods; Mathematical Methods; Framework; Theory; Analysis
Agarwal, Chirag, Marinka Zitnik, and Himabindu Lakkaraju. "Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 25th (2022).
- 2022
- Working Paper
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
By: Satyapriya Krishna, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu and Himabindu Lakkaraju
As various post hoc explanation methods are increasingly being leveraged to explain complex models in high-stakes settings, it becomes critical to develop a deeper understanding of if and when the explanations output by these methods disagree with each other, and how...
Krishna, Satyapriya, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu, and Himabindu Lakkaraju. "The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective." Working Paper, 2022.
- 2022
- Working Paper
Rethinking Explainability as a Dialogue: A Practitioner's Perspective
By: Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan and Sameer Singh
As practitioners increasingly deploy machine learning models in critical domains such as healthcare, finance, and policy, it becomes vital to ensure that domain experts function effectively alongside these models. Explainability is one way to bridge the gap between...
Keywords: Natural Language Conversations; AI and Machine Learning; Experience and Expertise; Interactive Communication; Business and Stakeholder Relations
Lakkaraju, Himabindu, Dylan Slack, Yuxin Chen, Chenhao Tan, and Sameer Singh. "Rethinking Explainability as a Dialogue: A Practitioner's Perspective." Working Paper, 2022.
- 2022
- Working Paper
TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
By: Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju and Sameer Singh
Practitioners increasingly use machine learning (ML) models, yet they have become more complex and harder to understand. To address this issue, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use explainability...
Slack, Dylan, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. "TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations." Working Paper, 2022.
- 2021
- Article
Counterfactual Explanations Can Be Manipulated
By: Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju and Sameer Singh
Counterfactual explanations are useful for both generating recourse and auditing fairness between groups. We seek to understand whether adversaries can manipulate counterfactual explanations in an algorithmic recourse setting: if counterfactual explanations indicate...
Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- 2021
- Article
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
By: Dylan Slack, Sophie Hilgard, Sameer Singh and Himabindu Lakkaraju
As black box explanations are increasingly being employed to establish model credibility in high stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by...
Keywords: Black Box Explanations; Bayesian Modeling; Decision Making; Risk and Uncertainty; Information Technology
Slack, Dylan, Sophie Hilgard, Sameer Singh, and Himabindu Lakkaraju. "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- 2021
- Article
Learning Models for Actionable Recourse
By: Alexis Ross, Himabindu Lakkaraju and Osbert Bastani
As machine learning models are increasingly deployed in high-stakes domains such as legal and financial decision-making, there has been growing interest in post-hoc methods for generating counterfactual explanations. Such explanations provide individuals adversely...
Ross, Alexis, Himabindu Lakkaraju, and Osbert Bastani. "Learning Models for Actionable Recourse." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- 2021
- Chapter
Towards a Unified Framework for Fair and Stable Graph Representation Learning
By: Chirag Agarwal, Himabindu Lakkaraju and Marinka Zitnik
As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. In this work, we establish a key connection between counterfactual...
Agarwal, Chirag, Himabindu Lakkaraju, and Marinka Zitnik. "Towards a Unified Framework for Fair and Stable Graph Representation Learning." In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence, edited by Cassio de Campos and Marloes H. Maathuis, 2114–2124. AUAI Press, 2021.
- 2021
- Article
Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
By: Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu and Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two...
Keywords: Machine Learning; Black Box Explanations; Decision Making; Forecasting and Prediction; Information Technology
Agarwal, Sushant, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, and Himabindu Lakkaraju. "Towards the Unification and Robustness of Perturbation and Gradient Based Explanations." Proceedings of the International Conference on Machine Learning (ICML) 38th (2021).
- 2021
- Article
Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring
By: Tom Sühr, Sophie Hilgard and Himabindu Lakkaraju
Ranking algorithms are being widely employed in various online hiring platforms including LinkedIn, TaskRabbit, and Fiverr. Prior research has demonstrated that ranking algorithms employed by these platforms are prone to a variety of undesirable biases, leading to the...
Sühr, Tom, Sophie Hilgard, and Himabindu Lakkaraju. "Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society 4th (2021).
- 2021
- Article
Fair Influence Maximization: A Welfare Optimization Approach
By: Aida Rahmattalabi, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice and Milind Tambe
Several behavioral, social, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, leverage social network information to maximize outreach. Algorithmic influence maximization techniques have been proposed...
Rahmattalabi, Aida, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice, and Milind Tambe. "Fair Influence Maximization: A Welfare Optimization Approach." Proceedings of the AAAI Conference on Artificial Intelligence 35th (2021).
- 2021
- Article
Towards Robust and Reliable Algorithmic Recourse
By: Sohini Upadhyay, Shalmali Joshi and Himabindu Lakkaraju
As predictive models are increasingly being deployed in high-stakes decision making (e.g., loan approvals), there has been growing interest in post-hoc techniques which provide recourse to affected individuals. These techniques generate recourses under the assumption...
Keywords: Machine Learning Models; Algorithmic Recourse; Decision Making; Forecasting and Prediction
Upadhyay, Sohini, Shalmali Joshi, and Himabindu Lakkaraju. "Towards Robust and Reliable Algorithmic Recourse." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- 2021
- Working Paper
When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making
By: Sean McGrath, Parth Mehta, Alexandra Zytek, Isaac Lage and Himabindu Lakkaraju
McGrath, Sean, Parth Mehta, Alexandra Zytek, Isaac Lage, and Himabindu Lakkaraju. "When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making." Working Paper, January 2021.
- 2020
- Conference Presentation
An Empirical Study of the Trade-Offs Between Interpretability and Fairness
By: Shahin Jabbari, Han-Ching Ou, Himabindu Lakkaraju and Milind Tambe
Jabbari, Shahin, Han-Ching Ou, Himabindu Lakkaraju, and Milind Tambe. "An Empirical Study of the Trade-Offs Between Interpretability and Fairness." Paper presented at the ICML Workshop on Human Interpretability in Machine Learning, International Conference on Machine Learning (ICML), 2020.