Results (114)
- 2022
- Article
Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods.
By: Chirag Agarwal, Marinka Zitnik and Himabindu Lakkaraju
As Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes critical to ensure that the stakeholders understand the rationale behind their predictions. While several GNN explanation methods have been proposed recently, there has...
Keywords: Graph Neural Networks; Explanation Methods; Mathematical Methods; Framework; Theory; Analysis
Agarwal, Chirag, Marinka Zitnik, and Himabindu Lakkaraju. "Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 25th (2022).
- 2021
- Article
Counterfactual Explanations Can Be Manipulated
By: Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju and Sameer Singh
Counterfactual explanations are useful for both generating recourse and auditing fairness between groups. We seek to understand whether adversaries can manipulate counterfactual explanations in an algorithmic recourse setting: if counterfactual explanations indicate...
Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
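For readers unfamiliar with the object under attack, a counterfactual explanation proposes a small change to an input that flips the model's decision. Below is a generic sketch of gradient-based counterfactual search on a toy logistic model; it illustrates the standard recourse-style construction, not the paper's manipulation attack, and the model weights and inputs are illustrative assumptions.

```python
# A generic sketch of gradient-based counterfactual search on a toy
# logistic model. This is the standard recourse-style construction, not
# the paper's manipulation attack; the weights and inputs are made up.
import numpy as np

w, b = np.array([1.5, -2.0, 0.5]), -0.25  # fixed "black box" logistic model

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x0, target=0.6, lam=0.01, lr=0.05, steps=1000):
    """Nudge x toward the target score while staying close to x0."""
    x = x0.copy()
    for _ in range(steps):
        p = predict(x)
        if p >= 0.5:                      # decision has flipped
            break
        # Gradient of (p - target)^2 + lam * ||x - x0||^2 w.r.t. x.
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x - x0)
        x -= lr * grad
    return x

x0 = np.array([-1.0, 1.0, 0.0])           # input currently denied (p < 0.5)
x_cf = counterfactual(x0)
print("original score:      ", round(predict(x0), 3))
print("counterfactual score:", round(predict(x_cf), 3))
print("suggested change:    ", np.round(x_cf - x0, 3))
```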
- 2021
- Article
Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
By: Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu and Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two...
Keywords: Machine Learning; Black Box Explanations; Decision Making; Forecasting and Prediction; Information Technology
Agarwal, Sushant, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, and Himabindu Lakkaraju. "Towards the Unification and Robustness of Perturbation and Gradient Based Explanations." Proceedings of the International Conference on Machine Learning (ICML) 38th (2021).
- 2022
- Conference Presentation
Towards the Unification and Robustness of Post hoc Explanation Methods
By: Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu and Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two...
Keywords: AI and Machine Learning
Agarwal, Sushant, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, and Himabindu Lakkaraju. "Towards the Unification and Robustness of Post hoc Explanation Methods." Paper presented at the 3rd Symposium on Foundations of Responsible Computing (FORC), 2022.
- 2022
- Article
Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis.
By: Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay and Himabindu Lakkaraju
As machine learning (ML) models become more widely deployed in high-stakes applications, counterfactual explanations have emerged as key tools for providing actionable model explanations in practice. Despite the growing popularity of counterfactual explanations, a...
Keywords: Machine Learning Models; Counterfactual Explanations; Adversarial Examples; Mathematical Methods
Pawelczyk, Martin, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, and Himabindu Lakkaraju. "Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 25th (2022).
- 2022
- Article
Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
By: Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen Bach and Himabindu Lakkaraju
As post hoc explanation methods are increasingly being leveraged to explain complex models in high-stakes settings, it becomes critical to ensure that the quality of the resulting explanations is consistently high across all subgroups of a population. For instance, it...
Dai, Jessica, Sohini Upadhyay, Ulrich Aivodji, Stephen Bach, and Himabindu Lakkaraju. "Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2022): 203–214.
- 2022
- Article
Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
By: Tessa Han, Suraj Srinivas and Himabindu Lakkaraju
A critical problem in the field of post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This...
Han, Tessa, Suraj Srinivas, and Himabindu Lakkaraju. "Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations." Advances in Neural Information Processing Systems (NeurIPS) (2022). (Best Paper Award, International Conference on Machine Learning (ICML) Workshop on Interpretable ML in Healthcare.)
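To make the function-approximation framing concrete, the sketch below fits a linear surrogate to a black box in a small neighborhood of one input and reads the surrogate's weights as the explanation. This is the generic construction underlying perturbation methods such as LIME, not the paper's framework itself; the black-box function and sampling scale are illustrative assumptions.

```python
# A minimal sketch of explanation as local function approximation: fit a
# linear surrogate to a black box near one input and read its weights as
# feature importances. The black box and sampling scale are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: a smooth nonlinear function.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])                  # the input to explain
X = x0 + 0.1 * rng.normal(size=(500, 2))   # perturbations around x0
y = black_box(X)

# Least-squares fit of y ≈ w·(x - x0) + c over the neighborhood.
A = np.column_stack([X - x0, np.ones(len(X))])
w1, w2, c = np.linalg.lstsq(A, y, rcond=None)[0]

# The weights recover the local gradient: cos(0.5) ≈ 0.88 and 2*x2 = 2.0.
print(f"local importances: {w1:.2f}, {w2:.2f}")
```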
- 2022
- Article
OpenXAI: Towards a Transparent Evaluation of Model Explanations
By: Chirag Agarwal, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik and Himabindu Lakkaraju
While several types of post hoc explanation methods have been proposed in recent literature, there is very little work on systematically benchmarking these methods. Here, we introduce OpenXAI, a comprehensive and extensible open-source framework for evaluating and...
Agarwal, Chirag, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, and Himabindu Lakkaraju. "OpenXAI: Towards a Transparent Evaluation of Model Explanations." Advances in Neural Information Processing Systems (NeurIPS) (2022).
- 2023
- Article
On Minimizing the Impact of Dataset Shifts on Actionable Explanations
By: Anna P. Meyer, Dan Ley, Suraj Srinivas and Himabindu Lakkaraju
The Right to Explanation is an important regulatory principle that allows individuals to request actionable explanations for algorithmic decisions. However, several technical challenges arise when providing such actionable explanations in practice. For instance, models...
Meyer, Anna P., Dan Ley, Suraj Srinivas, and Himabindu Lakkaraju. "On Minimizing the Impact of Dataset Shifts on Actionable Explanations." Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 39th (2023): 1434–1444.
- 2023
- Article
Post Hoc Explanations of Language Models Can Improve Language Models
By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing complex tasks. Moreover, recent research has shown that incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) during in-context learning can significantly enhance...
Krishna, Satyapriya, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. "Post Hoc Explanations of Language Models Can Improve Language Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- November 1993 (Revised July 1994)
- Background Note
The Adjusted Present Value Method for Capital Assets
By: Steven R. Fenster and Stuart C. Gilson
This case provides an explanation of the adjusted present value (APV) method for valuing capital assets. The authors believe this approach is generally simpler and better suited to the complicated and changing capital structures found in restructurings.
Fenster, Steven R., and Stuart C. Gilson. "The Adjusted Present Value Method for Capital Assets." Harvard Business School Background Note 294-047, November 1993. (Revised July 1994.)
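For readers who want the mechanics, APV values a project as if it were all-equity financed and then adds the present value of financing side effects such as debt tax shields. A minimal sketch of that two-step calculation follows; the cash flows, rates, and debt schedule are illustrative assumptions, not figures from the note.

```python
# A minimal sketch of an adjusted present value (APV) calculation.
# All cash flows, rates, and the debt schedule are illustrative
# assumptions, not figures from the note.

def npv(rate, cash_flows):
    """Discount a list of year-end cash flows (years 1..n) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Step 1: value the firm as if it were all-equity financed.
unlevered_cash_flows = [100, 110, 120, 130, 140]  # free cash flows, years 1-5
cost_of_assets = 0.12                             # unlevered cost of capital
base_value = npv(cost_of_assets, unlevered_cash_flows)

# Step 2: add the side effects of financing, here the debt tax shields.
# Discounting shields at the cost of debt suits a fixed debt schedule.
interest_payments = [40, 35, 25, 15, 5]           # falls as debt is repaid
tax_rate = 0.35
cost_of_debt = 0.08
tax_shields = [tax_rate * i for i in interest_payments]
shield_value = npv(cost_of_debt, tax_shields)

apv = base_value + shield_value
print(f"Base (unlevered) value: {base_value:,.1f}")
print(f"PV of tax shields:      {shield_value:,.1f}")
print(f"Adjusted present value: {apv:,.1f}")
```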
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
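The input-gradients in question are simply gradients of a class score with respect to the input pixels. Below is a minimal sketch of computing one; the tiny untrained model is an illustrative stand-in, since perceptual alignment only emerges for suitably trained (e.g., robust) classifiers.

```python
# A minimal sketch of computing an input-gradient: the gradient of a
# class score with respect to the input pixels. The tiny untrained model
# is a stand-in; perceptual alignment only emerges for suitably trained
# (e.g., robust) classifiers.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # dummy input image
scores = model(x)
target_class = scores.argmax(dim=1).item()

# Backpropagate the chosen class score down to the input.
scores[0, target_class].backward()
input_gradient = x.grad                           # same shape as the image

print(input_gradient.shape)  # torch.Size([1, 3, 32, 32])
```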
- 2021
- Article
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
By: Dylan Slack, Sophie Hilgard, Sameer Singh and Himabindu Lakkaraju
As black box explanations are increasingly being employed to establish model credibility in high stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by...
Keywords: Black Box Explanations; Bayesian Modeling; Decision Making; Risk and Uncertainty; Information Technology
Slack, Dylan, Sophie Hilgard, Sameer Singh, and Himabindu Lakkaraju. "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- July 1987 (Revised October 2009)
- Background Note
A Method For Valuing High-Risk, Long-Term Investments: The "Venture Capital Method"
By: William A. Sahlman and Daniel R. Scherlis
Describes a method for valuing high-risk, long-term investments such as those confronting venture capitalists. The method entails forecasting a future value (e.g., five years from the present) and discounting that terminal value back to the present by applying a high...
Keywords: Forecasting and Prediction; Entrepreneurship; Venture Capital; Investment; Risk Management; Valuation
Sahlman, William A., and Daniel R. Scherlis. A Method For Valuing High-Risk, Long-Term Investments: The "Venture Capital Method". Harvard Business School Background Note 288-006, July 1987. (Revised October 2009.)
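Since the abstract spells out the arithmetic, a minimal worked example may help: forecast a terminal value, discount it back at a high target rate, and back out the ownership stake a given investment requires. All numbers below are illustrative assumptions, not figures from the note.

```python
# A minimal sketch of the "venture capital method" as described above:
# forecast a terminal value, discount it at a high target rate, and back
# out the ownership stake an investment requires. All numbers are
# illustrative assumptions, not figures from the note.

terminal_value = 50_000_000   # forecast company value in year 5
target_rate = 0.50            # high discount rate reflecting venture risk
years = 5
investment = 2_000_000        # capital the venture needs today

# Discount the forecast terminal value back to the present.
present_value = terminal_value / (1 + target_rate) ** years

# The investor's required ownership is the investment's share of that value.
required_ownership = investment / present_value

print(f"Present value of terminal value: ${present_value:,.0f}")
print(f"Required ownership stake: {required_ownership:.1%}")
```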
- 2023
- Article
Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten
By: Himabindu Lakkaraju, Satyapriya Krishna and Jiaqi Ma
The Right to Explanation and the Right to be Forgotten are two important principles outlined to regulate algorithmic decision making and data usage in real-world applications. While the right to explanation allows individuals to request an actionable explanation for an...
Keywords: Analytics and Data Science; AI and Machine Learning; Decision Making; Governing Rules, Regulations, and Reforms
Lakkaraju, Himabindu, Satyapriya Krishna, and Jiaqi Ma. "Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten." Proceedings of the International Conference on Machine Learning (ICML) 40th (2023): 17808–17826.
- 2023
- Article
M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models
By: Himabindu Lakkaraju, Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai and Haoyi Xiong
While Explainable Artificial Intelligence (XAI) techniques have been widely studied to explain predictions made by deep neural networks, the way to evaluate the faithfulness of explanation results remains challenging, due to the heterogeneity of explanations for...
Keywords: AI and Machine Learning
Lakkaraju, Himabindu, Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai, and Haoyi Xiong. "M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- March 2020
- Article
History-informed Strategy Research: The Promise of History and Historical Research Methods in Advancing Strategy Scholarship
By: Nicholas Argyres, Alfredo De Massis, Nicolai J. Foss, Federico Frattini, Geoffrey Jones and Brian Silverman
Recent years have seen an increasing interest in the use of history and historical research methods in strategy research. This article discusses how and why history and historical research methods can enrich theoretical explanations of strategy phenomena. The article...
Argyres, Nicholas, Alfredo De Massis, Nicolai J. Foss, Federico Frattini, Geoffrey Jones, and Brian Silverman. "History-informed Strategy Research: The Promise of History and Historical Research Methods in Advancing Strategy Scholarship." Strategic Management Journal 41, no. 3 (March 2020): 343–368.
- June 2020
- Article
Parallel Play: Startups, Nascent Markets, and the Effective Design of a Business Model
By: Rory McDonald and Kathleen Eisenhardt
Prior research advances several explanations for entrepreneurial success in nascent markets but leaves a key imperative unexplored: the business model. By studying five ventures in the same nascent market, we develop a novel theoretical framework for understanding how...
Keywords: Search; Legitimacy; Organizational Innovation; Organizational Learning; Mechanisms And Processes; Institutional Entrepreneurship; Qualitative Methods; Business Model Design; Business Model; Business Startups; Entrepreneurship; Emerging Markets; Adaptation; Competition; Strategy
McDonald, Rory, and Kathleen Eisenhardt. "Parallel Play: Startups, Nascent Markets, and the Effective Design of a Business Model." Administrative Science Quarterly 65, no. 2 (June 2020): 483–523.
- 2022
- Working Paper
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
By: Satyapriya Krishna, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu and Himabindu Lakkaraju
As various post hoc explanation methods are increasingly being leveraged to explain complex models in high-stakes settings, it becomes critical to develop a deeper understanding of if and when the explanations output by these methods disagree with each other, and how...
Krishna, Satyapriya, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu, and Himabindu Lakkaraju. "The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective." Working Paper, 2022.
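As one concrete illustration of measuring such disagreement, the sketch below compares two attribution vectors for the same prediction by rank correlation and top-k feature overlap. The vectors are illustrative, and the paper develops a broader set of metrics along with a practitioner study.

```python
# One simple way to quantify explanation disagreement: compare the
# feature rankings induced by two attribution vectors for the same
# prediction. The vectors are illustrative, and these two measures are
# only a slice of the metrics the paper considers.
import numpy as np
from scipy.stats import spearmanr

# Attributions for one prediction from two hypothetical explainers.
expl_a = np.array([0.42, -0.10, 0.05, 0.31, -0.02])
expl_b = np.array([0.05, -0.35, 0.40, 0.12, -0.01])

# Rank correlation of absolute importances: 1.0 means full agreement.
rho, _ = spearmanr(np.abs(expl_a), np.abs(expl_b))

# Overlap of the top-2 most important features.
k = 2
top_a = set(np.argsort(-np.abs(expl_a))[:k])
top_b = set(np.argsort(-np.abs(expl_b))[:k])

print(f"rank correlation: {rho:.2f}")
print(f"top-{k} overlap: {len(top_a & top_b) / k:.0%}")
```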
- 2023
- Article
Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability
By: Usha Bhalla, Suraj Srinivas and Himabindu Lakkaraju
With the increased deployment of machine learning models in various real-world applications, researchers and practitioners alike have emphasized the need for explanations of model behaviour. To this end, two broad strategies have been outlined in prior literature to...
Bhalla, Usha, Suraj Srinivas, and Himabindu Lakkaraju. "Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability." Advances in Neural Information Processing Systems (NeurIPS) (2023).