Himabindu Lakkaraju
Assistant Professor of Business Administration
Himabindu "Hima" Lakkaraju is an Assistant Professor of Business Administration at Harvard Business School. She is also a faculty affiliate in the Department of Computer Science at Harvard University, the Harvard Data Science Initiative, Center for Research on Computation and Society, and the Laboratory of Innovation Science at Harvard. She teaches the first year course on Technology and Operations Management, and has previously offered multiple courses and guest lectures on a diverse set of topics pertaining to Artificial Intelligence (AI) and Machine Learning (ML), and their real world implications.
Professor Lakkaraju's research focuses on the algorithmic, practical, and ethical implications of deploying AI models in domains involving high-stakes decisions such as healthcare, business, and policy. More specifically, she studies the design and deployment of AI models that are explainable (readily understandable to decision makers), fair (do not discriminate against minority groups), and, more broadly, reliable when deployed in the real world. She leads the AI4LIFE research group at Harvard University, where she supervises multiple postdoctoral, graduate, and undergraduate students. Professor Lakkaraju's research has been published at top AI and ML conferences, including the International Conference on Machine Learning (ICML), Advances in Neural Information Processing Systems (NeurIPS), the AAAI Conference on Artificial Intelligence, and the International Conference on Artificial Intelligence and Statistics (AISTATS), as well as in prestigious interdisciplinary journals such as the Quarterly Journal of Economics (QJE).
Her research has been recognized with multiple best paper awards, including the INFORMS Best Data Mining Paper Prize and the SIAM International Conference on Data Mining (SDM) Best Research Paper Award. She has also received prestigious grants from the National Science Foundation (NSF) and research awards from Google, Amazon, and Bayer. Her work has been covered by popular media outlets including the New York Times, MIT Technology Review, Harvard Business Review, TIME magazine, Forbes, Wired, and VentureBeat. Professor Lakkaraju was named one of the world's top Innovators Under 35 by both Vanity Fair and the MIT Technology Review.
In addition to her research, she is passionate about advising startups in the AI/ML space and about making the field more accessible to the general public. She advises the research and product strategy of Fiddler AI, one of the most sought-after startups in the field of responsible AI. Professor Lakkaraju also co-founded the Trustworthy ML Initiative (TrustML) to lower entry barriers to the field, promote discussion and debate on the latest developments, and build a community of researchers, practitioners, and policy makers working on AI and its real-world implications. As part of this initiative, she has developed various freely available tutorials on several aspects of AI and ML, and has started a virtual talk series in which renowned experts, both in AI/ML and in other domains such as healthcare, business, and policy, discuss their research; anyone in the world can attend these talks and interact with the speakers.
Professor Lakkaraju received her PhD and MS degrees in computer science from Stanford University. Prior to joining Harvard Business School, she held various positions at Microsoft Research, IBM Research, and Adobe.
For more information, please visit:
https://himalakkaraju.github.io/
https://twitter.com/hima_lakkaraju
https://en.wikipedia.org/wiki/Himabindu_Lakkaraju
Published Papers
- Lakkaraju, Himabindu, Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai, and Haoyi Xiong. "M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- Bhalla, Usha, Suraj Srinivas, and Himabindu Lakkaraju. "Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- Krishna, Satyapriya, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. "Post Hoc Explanations of Language Models Can Improve Language Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- Slack, Dylan, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. "Explaining Machine Learning Models with Interactive Natural Language Conversations Using TalkToModel." Nature Machine Intelligence 5, no. 8 (August 2023): 873–883.
- McGrath, Sean, Parth Mehta, Alexandra Zytek, Isaac Lage, and Himabindu Lakkaraju. "When Does Uncertainty Matter? Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making." Transactions on Machine Learning Research (TMLR) (June 2023).
- Lakkaraju, Himabindu, Satyapriya Krishna, and Jiaqi Ma. "Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten." Proceedings of the International Conference on Machine Learning (ICML) 40th (2023): 17808–17826.
- Gao, Ruijiang, and Himabindu Lakkaraju. "On the Impact of Actionable Explanations on Social Segregation." Proceedings of the International Conference on Machine Learning (ICML) 40th (2023): 10727–10743.
- Meyer, Anna P., Dan Ley, Suraj Srinivas, and Himabindu Lakkaraju. "On Minimizing the Impact of Dataset Shifts on Actionable Explanations." Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 39th (2023): 1434–1444.
- Pawelczyk, Martin, Teresa Datta, Johannes van den Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. "Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse." Proceedings of the International Conference on Learning Representations (ICLR) (2023).
- Pawelczyk, Martin, Himabindu Lakkaraju, and Seth Neel. "On the Privacy Risks of Algorithmic Recourse." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 206 (April 2023).
- Agarwal, Chirag, Owen Queen, Himabindu Lakkaraju, and Marinka Zitnik. "Evaluating Explainability for Graph Neural Networks." Art. 114. Scientific Data 10 (2023).
- Han, Tessa, Suraj Srinivas, and Himabindu Lakkaraju. "Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations." Advances in Neural Information Processing Systems (NeurIPS) (2022). (Best Paper Award, International Conference on Machine Learning (ICML) Workshop on Interpretable ML in Healthcare.)
- Srinivas, Suraj, Kyle Matoba, Himabindu Lakkaraju, and Francois Fleuret. "Efficiently Training Low-Curvature Neural Networks." Advances in Neural Information Processing Systems (NeurIPS) (2022).
- Agarwal, Chirag, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, and Himabindu Lakkaraju. "OpenXAI: Towards a Transparent Evaluation of Model Explanations." Advances in Neural Information Processing Systems (NeurIPS) (2022).
- Lobo, Elita, Harvineet Singh, Marek Petrik, Cynthia Rudin, and Himabindu Lakkaraju. "Data Poisoning Attacks on Off-Policy Evaluation Methods." Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 38th (2022): 1264–1274.
- Pawelczyk, Martin, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, and Himabindu Lakkaraju. "Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 25th (2022).
- Agarwal, Chirag, Marinka Zitnik, and Himabindu Lakkaraju. "Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 25th (2022).
- Dai, Jessica, Sohini Upadhyay, Ulrich Aivodji, Stephen Bach, and Himabindu Lakkaraju. "Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2022): 203–214.
- Singh, Harvineet, Shalmali Joshi, Finale Doshi-Velez, and Himabindu Lakkaraju. "Towards Robust Off-Policy Evaluation via Human Inputs." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2022): 686–699.
- Shergadwala, Murtuza, Himabindu Lakkaraju, and Krishnaram Kenthapadi. "A Human-Centric Take on Model Monitoring." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (HCOMP) 10 (2022): 173–183.
- Agarwal, Sushant, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, and Himabindu Lakkaraju. "Towards the Unification and Robustness of Post hoc Explanation Methods." Paper presented at the 3rd Symposium on Foundations of Responsible Computing (FORC), 2022.
- Slack, Dylan, Sophie Hilgard, Sameer Singh, and Himabindu Lakkaraju. "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- Upadhyay, Sohini, Shalmali Joshi, and Himabindu Lakkaraju. "Towards Robust and Reliable Algorithmic Recourse." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- Ross, Alexis, Himabindu Lakkaraju, and Osbert Bastani. "Learning Models for Actionable Recourse." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- Agarwal, Sushant, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, and Himabindu Lakkaraju. "Towards the Unification and Robustness of Perturbation and Gradient Based Explanations." Proceedings of the International Conference on Machine Learning (ICML) 38th (2021).
- Agarwal, Chirag, Himabindu Lakkaraju, and Marinka Zitnik. "Towards a Unified Framework for Fair and Stable Graph Representation Learning." In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence, edited by Cassio de Campos and Marloes H. Maathuis, 2114–2124. AUAI Press, 2021.
- Rahmattalabi, Aida, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice, and Milind Tambe. "Fair Influence Maximization: A Welfare Optimization Approach." Proceedings of the AAAI Conference on Artificial Intelligence 35th (2021).
- Sühr, Tom, Sophie Hilgard, and Himabindu Lakkaraju. "Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society 4th (2021).
- Rawal, Kaivalya, and Himabindu Lakkaraju. "Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses." Advances in Neural Information Processing Systems (NeurIPS) 33 (2020).
- Yang, Wanqian, Lars Lorch, Moritz Graule, Himabindu Lakkaraju, and Finale Doshi-Velez. "Incorporating Interpretable Output Constraints in Bayesian Neural Networks." Advances in Neural Information Processing Systems (NeurIPS) 33 (2020).
- Lakkaraju, Himabindu, Nino Arsov, and Osbert Bastani. "Robust and Stable Black Box Explanations." Proceedings of the International Conference on Machine Learning (ICML) 37th (2020): 5628–5638. (Published in PMLR, Vol. 119.)
- Slack, Dylan, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. "Fooling LIME and SHAP: Adversarial Attacks on Post Hoc Explanation Methods." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2020): 180–186.
- Lakkaraju, Himabindu, and Osbert Bastani. "'How Do I Fool You?': Manipulating User Trust via Misleading Black Box Explanations." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2020): 79–85.
- Jabbari, Shahin, Han-Ching Ou, Himabindu Lakkaraju, and Milind Tambe. "An Empirical Study of the Trade-Offs Between Interpretability and Fairness." Paper presented at the ICML Workshop on Human Interpretability in Machine Learning, International Conference on Machine Learning (ICML), 2020.
- Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Jure Leskovec. "Faithful and Customizable Explanations of Black Box Models." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2019).
- Kleinberg, Jon, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. "Human Decisions and Machine Predictions." Quarterly Journal of Economics 133, no. 1 (February 2018): 237–293.
- Lakkaraju, Himabindu, Jon Kleinberg, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. "The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables." Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining 23rd (2017).
- Lakkaraju, Himabindu, and Cynthia Rudin. "Learning Cost-Effective and Interpretable Treatment Regimes." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 20th (2017).
- Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Eric Horvitz. "Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration." Proceedings of the AAAI Conference on Artificial Intelligence 31st (2017).
- Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Jure Leskovec. "Interpretable and Explorable Approximations of Black Box Models." Paper presented at the 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning, Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), Halifax, NS, Canada, August 14, 2017.
- Lakkaraju, Himabindu, and Jure Leskovec. "Confusions over Time: An Interpretable Bayesian Model to Characterize Trends in Decision Making." Advances in Neural Information Processing Systems (NeurIPS) 29 (2016).
- Lakkaraju, Himabindu, Stephen H. Bach, and Jure Leskovec. "Interpretable Decision Sets: A Joint Framework for Description and Prediction." Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining 22nd (2016).
- Kosinski, Michal, Yilun Wang, Himabindu Lakkaraju, and Jure Leskovec. "Mining Big Data to Extract Patterns and Predict Real-Life Outcomes." Psychological Methods 21, no. 4 (December 2016): 493–506.
- Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Eric Horvitz. "Discovering Unknown Unknowns of Predictive Models." Paper presented at the 30th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Reliable Machine Learning in the Wild, Barcelona, Spain, December 9, 2016.
- Lakkaraju, Himabindu, and Cynthia Rudin. "Learning Cost-Effective and Interpretable Regimes for Treatment Recommendation." Paper presented at the 30th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Interpretable Machine Learning in Complex Systems, Barcelona, Spain, December 9, 2016.
- Lakkaraju, Himabindu, and Cynthia Rudin. "Learning Cost-Effective and Interpretable Treatment Regimes for Judicial Bail Decisions." Paper presented at the 30th Annual Conference on Neural Information Processing Systems (NIPS), Symposium on Machine Learning and the Law, Barcelona, Spain, December 8, 2016.
- Lakkaraju, Himabindu, Everaldo Aguiar, Carl Shan, David Miller, Nasir Bhanpuri, Rayid Ghani, and Kecia Addison. "A Machine Learning Framework to Identify Students at Risk of Adverse Academic Outcomes." Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining 21st (2015).
- Lakkaraju, Himabindu, Jure Leskovec, Jon Kleinberg, and Sendhil Mullainathan. "A Bayesian Framework for Modeling Human Evaluations." Proceedings of the SIAM International Conference on Data Mining (2015): 181–189.
- Aguiar, Everaldo, Himabindu Lakkaraju, Nasir Bhanpuri, David Miller, Ben Yuhas, Kecia Addison, and Rayid Ghani. "Who, When, and Why: A Machine Learning Approach to Prioritizing Students at Risk of Not Graduating High School on Time." Proceedings of the International Learning Analytics and Knowledge Conference 5th (2015).
- Lakkaraju, Himabindu, Jon Kleinberg, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. "Using Big Data to Improve Social Policy." NBER Economics of Crime Working Group, 2014.
- Lakkaraju, Himabindu, Richard Socher, and Chris Manning. "Aspect Specific Sentiment Analysis Using Hierarchical Deep Learning." Paper presented at the 28th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Deep Learning and Representation Learning, Montreal, Canada, December 12, 2014.
- Lakkaraju, Himabindu, Julian McAuley, and Jure Leskovec. "What's in a Name? Understanding the Interplay Between Titles, Content, and Communities in Social Media." Proceedings of the International AAAI Conference on Weblogs and Social Media 7th (2013).
- Lakkaraju, Himabindu, Indrajit Bhattacharya, and Chiranjib Bhattacharyya. "Dynamic Multi-Relational Chinese Restaurant Process for Analyzing Influences on Users in Social Media." Proceedings of the IEEE International Conference on Data Mining 12th (2012).
- Lakkaraju, Himabindu, and Hyung-Il Ahn. "TEM: A Novel Perspective to Modeling Content on Microblogs." Proceedings of the International World Wide Web Conference 21st (2012).
- Lakkaraju, Himabindu, Chiranjib Bhattacharyya, Indrajit Bhattacharya, and Srujana Merugu. "Exploiting Coherence for the Simultaneous Discovery of Latent Facets and Associated Sentiments." Proceedings of the SIAM International Conference on Data Mining (2011): 498–509.
- Lakkaraju, Himabindu, and Jitendra Ajmera. "Attention Prediction on Social Media Brand Pages." Proceedings of the ACM Conference on Information and Knowledge Management 20th (2011).
- Lakkaraju, Himabindu, Angshu Rai, and Srujana Merugu. "Smart News Feeds for Social Networks Using Scalable Joint Latent Factor Models." Proceedings of the International World Wide Web Conference 20th (2011).
- Lakkaraju, Himabindu, and Hyung-Il Ahn. "A Non Parametric Theme Event Topic Model for Characterizing Microblogs." Paper presented at the 25th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Computational Science and the Wisdom of Crowds, Granada, Spain, December 17, 2011.
- Lakkaraju, Himabindu, and Angshu Rai. "Unified Modeling of User Activities on Social Networking Sites." Paper presented at the 25th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Computational Science and the Wisdom of Crowds, Granada, Spain, December 17, 2011.
Book Chapters
- Kleinberg, Jon, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. "Analyzing Human Decisions and Machine Predictions in Bail Decision Making." In The Inequality Reader: Contemporary and Foundational Readings in Race, Class, and Gender. 3rd edition, edited by David B. Grusky and Szonja Szelényi. Routledge, forthcoming.
Working Papers
- Slack, Dylan, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. "TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations." Working Paper, 2022.
- Lakkaraju, Himabindu, Dylan Slack, Yuxin Chen, Chenhao Tan, and Sameer Singh. "Rethinking Explainability as a Dialogue: A Practitioner's Perspective." Working Paper, 2022.
- Krishna, Satyapriya, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu, and Himabindu Lakkaraju. "The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective." Working Paper, 2022.
- McGrath, Sean, Parth Mehta, Alexandra Zytek, Isaac Lage, and Himabindu Lakkaraju. "When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making." Working Paper, January 2021.
- Lakkaraju, Himabindu, and Chiara Farronato. "When Algorithms Explain Themselves: AI Adoption and Accuracy of Experts' Decisions." Working Paper, 2023.
Cases and Teaching Materials
- Lakkaraju, Himabindu. "Altibbi: Revolutionizing Telehealth Using AI." Harvard Business School Case 622-088, May 2022. (Revised July 2023.) View Details
Other Publications and Materials
- Lakkaraju, Himabindu, Flavio Calmon, Jiaqi Ma, and Alex Oesterling. "Fair Machine Unlearning: Data Removal while Mitigating Disparities." 2024.
- Lakkaraju, Himabindu, Sree Harsha Tanneru, and Chirag Agarwal. "Quantifying Uncertainty in Natural Language Explanations of Large Language Models." Paper presented at the Society for Artificial Intelligence and Statistics, 2024.
Research Summary
I develop machine learning tools and techniques which enable human decision makers to make better decisions. More specifically, my research addresses the following fundamental questions pertaining to human and algorithmic decision-making:
1. How to build fair and interpretable models that can aid human decision-making?
2. How to ensure that models and their explanations are robust to adversarial attacks?
3. How to train and evaluate models in the presence of missing counterfactuals?
4. How to detect and correct underlying biases in human decisions and algorithmic predictions?
These questions have far-reaching implications in domains involving high-stakes decisions such as criminal justice, health care, public policy, business, and education.

One strand of this research assesses the impact of deploying machine learning models in real-world decision making in domains such as health care. I work on developing tools and methodologies that help decision makers (e.g., doctors, managers) better understand the predictions of machine learning models.

A second strand examines how adversaries can exploit the algorithms used to explain complex machine learning models with the intent of misleading end users. For instance, can adversaries trick these algorithms into masking a model's racial and gender biases?

Finally, this research aims to ensure that the machine learning models we build and deploy do not discriminate against individuals from minority groups and are not easily susceptible to attacks by adversarial or malicious entities.
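To make the adversarial concern above concrete, here is a toy sketch in the spirit of "Fooling LIME and SHAP: Adversarial Attacks on Post Hoc Explanation Methods" (listed under Published Papers): a scaffolded model answers in-distribution queries with a biased rule but routes queries that look like explainer-generated perturbations to an innocuous rule. The rules, the detector, and the threshold below are simplified assumptions for illustration, not the paper's actual construction.

```python
# Toy sketch of an adversarial "scaffolding" that hides bias from
# perturbation-based explainers. All functions and thresholds here are
# illustrative assumptions, not the construction from the paper.
import numpy as np

rng = np.random.default_rng(0)

def biased_model(X):
    # Hypothetical biased rule: the prediction depends entirely on feature 0.
    return (X[:, 0] > 0).astype(int)

def innocuous_model(X):
    # Hypothetical innocuous rule: the prediction depends only on feature 1.
    return (X[:, 1] > 0).astype(int)

def looks_like_perturbation(X, data_mean, threshold=3.0):
    # Crude out-of-distribution check standing in for a learned detector:
    # flag queries that lie unusually far from the training-data mean.
    return np.linalg.norm(X - data_mean, axis=1) > threshold

def scaffolded_model(X, data_mean):
    # Serve the innocuous rule only on suspected explainer queries.
    return np.where(looks_like_perturbation(X, data_mean),
                    innocuous_model(X), biased_model(X))

X_real = rng.normal(size=(5, 2))  # in-distribution inputs see the biased rule
X_pert = X_real + 10.0            # far-off queries, like explainer perturbations
mean = X_real.mean(axis=0)
print("real inputs:  ", scaffolded_model(X_real, mean))
print("perturbations:", scaffolded_model(X_pert, mean))
```

Because a perturbation-based explainer only ever observes the innocuous behavior, the explanation it produces can look fair while the deployed behavior is not.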
Teaching
As machine learning models are increasingly employed to aid decision makers in high-stakes settings such as healthcare and criminal justice, it is important to ensure that decision makers correctly understand, and consequently trust, the functionality of these models. This graduate-level course aims to familiarize students with recent advances in the emerging field of explainable ML. In this course, we review seminal position papers of the field, examine the notion of model interpretability from the perspective of various kinds of end users (e.g., doctors, ML researchers/engineers), discuss in detail different classes of interpretable models and model explanations (e.g., case/prototype-based approaches, sparse linear models, rule-based techniques, saliency maps, generalized additive models, and counterfactual explanations), and explore the connections between model interpretability and fairness, robustness, causality, and debugging. The course also emphasizes applications that can benefit immensely from model interpretability, including medical and judicial decision making.
The course comprises a mix of lectures by the instructor, paper presentations by students, and guest lectures by researchers who have authored seminal papers on the topic. It has a significant research component in the form of a course project that students are expected to work on throughout the semester. All in all, the course is geared toward students who are interested in diving deep and conducting research in the field of interpretable and explainable ML.
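To give a flavor of the perturbation-based post hoc explanation methods covered in the course, below is a minimal LIME-style sketch: perturb an instance, weight the perturbations by their proximity to it, and read feature attributions off a locally fitted linear model. The synthetic data, the stand-in black-box classifier, the Gaussian kernel width, and the ridge surrogate are all illustrative assumptions, not course material.

```python
# Minimal LIME-style local surrogate (an illustrative sketch, not course
# material): explain one prediction of a black-box model by fitting a
# proximity-weighted linear model to its behavior around a single instance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in "black box": a random forest on synthetic tabular data (assumed).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def lime_style_attributions(model, x, feature_scales,
                            n_samples=1000, kernel_width=1.0):
    """Return per-feature attributions for x from a local linear surrogate."""
    # 1. Perturb the instance with Gaussian noise scaled per feature.
    Z = x + rng.normal(scale=feature_scales, size=(n_samples, x.shape[0]))
    # 2. Query the black box for the probability of the positive class.
    preds = model.predict_proba(Z)[:, 1]
    # 3. Weight each perturbation by its proximity to x (Gaussian kernel).
    dists = np.linalg.norm((Z - x) / feature_scales, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients are attributions.
    return Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights).coef_

print(np.round(lime_style_attributions(model, X[0], X.std(axis=0)), 3))
```

In practice one would reach for a maintained explanation library, but the sketch lays the local-surrogate logic bare.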
This course enables students to develop the skills and concepts needed to ensure the ongoing contribution of a firm's operations to its competitive position. Topics include digital marketplaces, technology, and data science.
I taught a set of lectures on "Introduction to Machine Learning for Social Scientists" as part of this required course for first year PhD students. This module familiarizes students with all the basic concepts in machine learning, their implementations, as well as the real world challenges encountered when building and deploying machine learning models.
Topics encompass:
Supervised learning
Unsupervised learning
Reinforcement learning
Practical challenges: overfitting, data imbalance, missing counterfactuals (see the sketch after this list)
Making machine learning models fair, interpretable, and robust
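As a toy illustration of the overfitting challenge listed above (the dataset and polynomial degrees are arbitrary choices for this sketch, not actual course material), higher-capacity models drive training error down while test error climbs:

```python
# Toy overfitting demonstration (illustrative sketch, not course material):
# as polynomial degree grows, training error falls but test error rises.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)  # noisy sine data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    print(f"degree={degree:2d}  "
          f"train MSE={mean_squared_error(y_tr, model.predict(X_tr)):.3f}  "
          f"test MSE={mean_squared_error(y_te, model.predict(X_te)):.3f}")
```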
Awards & Honors
- Recipient of the Distinguished Young Alumnus Award from the Indian Institute of Science in 2024.
- Selected as one of the 35 Innovators Under 35 by MIT Technology Review in 2019 for my research on using machine learning to support decision making in law.
- Named as one of the Top Innovators to Watch by Vanity Fair in 2019.
- Named a 2023 Kavli Fellow by the National Academy of Sciences.
- Recipient of the 2022 J.P. Morgan Artificial Intelligence Faculty Research Award in the "AI to Empower Employees" category for "Understanding and Mitigating Privacy Risks of Algorithmic Recourse."
- Honorable Mention for the Workshop on Trustworthy and Socially Responsible Machine Learning (TSRML) Outstanding Paper Award at the 2022 Conference on Neural Information Processing Systems (NeurIPS).
- Winner of the 2022 Best Paper Award from the Workshop on Interpretable Machine Learning in Healthcare (IMLH) at the International Conference on Machine Learning (ICML) for "Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations" (Advances in Neural Information Processing Systems, 2022) with Tessa Han and Suraj Srinivas.
- Winner of the 2021 Google AI for Social Good Award in the Public Health category.
- Recipient of a 2021 Amazon Research Award.
- Recipient of a 2021 Fairness in AI Grant from the National Science Foundation and Amazon.
- Recipient of a 2020 Google Research Award.
- Recipient of a 2017 Microsoft Research Dissertation Grant.
- Winner of the 2017 INFORMS Data Mining Best Paper Award for "Learning Cost-Effective and Interpretable Treatment Regimes" with Cynthia Rudin.
- Named as one of the Rising Stars in Computer Science.
- Recipient of the Outstanding Reviewer Award at the 2016 World Wide Web Conference.
- Recipient of a Google Anita Borg Scholarship in 2015.
- Recipient of a Stanford Graduate Fellowship in 2013.
- Recipient of a 2012 IBM Research Eminence and Excellence Award.