- April 2023
- Article
On the Privacy Risks of Algorithmic Recourse
By: Martin Pawelczyk, Himabindu Lakkaraju and Seth Neel
As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected...
Pawelczyk, Martin, Himabindu Lakkaraju, and Seth Neel. "On the Privacy Risks of Algorithmic Recourse." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 206 (April 2023).
- March 2024
- Exercise
'Storrowed': A Generative AI Exercise
By: Mitchell Weiss
"Storrowed" is an exercise to help participants raise their capacity and curiosity for generative AI. It focuses on generative AI for problem understanding and ideation, but can be adapted for use more broadly. Participants use generative AI tools to understand a... View Details
Weiss, Mitchell. "'Storrowed': A Generative AI Exercise." Harvard Business School Exercise 824-188, March 2024.
- Winter 2021
- Editorial
Introduction
This issue of Negotiation Journal is dedicated to the theme of artificial intelligence, technology, and negotiation. It arose from a Program on Negotiation (PON) working conference on that important topic held virtually on May 17–18. The conference was not the...
Wheeler, Michael A. "Introduction." Special Issue on Artificial Intelligence, Technology, and Negotiation. Negotiation Journal 37, no. 1 (Winter 2021): 5–12.
- September 23, 2024
- Article
AI Wants to Make You Less Lonely. Does It Work?
De Freitas, Julian. "AI Wants to Make You Less Lonely. Does It Work?" Wall Street Journal (September 23, 2024), R.11.
- 2023
- Working Paper
Distributionally Robust Causal Inference with Observational Data
By: Dimitris Bertsimas, Kosuke Imai and Michael Lingzhi Li
We consider the estimation of average treatment effects in observational studies and propose a new framework of robust causal inference with unobserved confounders. Our approach is based on distributionally robust optimization and proceeds in two steps. We first...
Bertsimas, Dimitris, Kosuke Imai, and Michael Lingzhi Li. "Distributionally Robust Causal Inference with Observational Data." Working Paper, February 2023.
- February 2024
- Technical Note
AI Product Development Lifecycle
By: Michael Parzen, Jessie Li and Marily Nika
In this article, we will discuss the concept of AI Products, how they are changing our daily lives, how the field of AI & Product Management is evolving, and the AI Product Development Lifecycle.
Keywords: Artificial Intelligence; Product Management; Product Life Cycle; Technology; AI and Machine Learning; Product Development
Parzen, Michael, Jessie Li, and Marily Nika. "AI Product Development Lifecycle." Harvard Business School Technical Note 624-070, February 2024.
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
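The abstract above only gestures at how MoPe works, but the underlying signal is easy to illustrate: perturbing a model's weights with random noise tends to degrade log-likelihood more sharply on sequences the model was trained on than on unseen text. Below is a minimal, hypothetical sketch of that perturb-and-compare loop; the helper names, the noise scale sigma, and the HuggingFace-style model interface are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a perturbation-based membership signal,
# inspired by the MoPe idea described above (NOT the authors' code).
import copy
import torch

def log_likelihood(model, input_ids):
    """Average token log-likelihood of a sequence under a causal LM.
    Assumes a HuggingFace-style model that returns .loss when labels are given."""
    with torch.no_grad():
        out = model(input_ids=input_ids, labels=input_ids)
    return -out.loss.item()  # .loss is the mean negative log-likelihood

def mope_score(model, input_ids, sigma=0.005, n_perturbations=8):
    """Mean drop in log-likelihood after adding Gaussian noise to the weights.
    Training-set texts tend to show a larger drop than unseen texts."""
    base = log_likelihood(model, input_ids)
    drops = []
    for _ in range(n_perturbations):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))  # perturb every weight
        drops.append(base - log_likelihood(noisy, input_ids))
    return sum(drops) / len(drops)
```

A sequence whose average drop is well above that observed on reference held-out text would be flagged as likely training data; the decision threshold has to be calibrated, and sigma and n_perturbations trade off signal strength against compute.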
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
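For context, an input-gradient is simply the gradient of a class score with respect to the input pixels; "perceptually aligned" means that this map visually resembles the object being classified. A generic way to compute one is sketched below (plain PyTorch; illustrative, not the authors' code).

```python
import torch

def input_gradient(model, x, target_class):
    """Gradient of the target-class logit with respect to the input pixels.
    For robust classifiers this saliency map tends to look perceptually aligned."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                      # shape: (batch, num_classes)
    score = logits[:, target_class].sum()  # scalar so backward() is well-defined
    score.backward()
    return x.grad.detach()
```

Comparing such maps from a standardly trained model and an adversarially robust one is the usual way to visualize the alignment phenomenon the paper analyzes.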
- September 8, 2023
- Conference Presentation
Chatbots and Mental Health: Insights into the Safety of Generative AI
By: Julian De Freitas, K. Uguralp, Z. Uguralp and Stefano Puntoni
De Freitas, Julian, K. Uguralp, Z. Uguralp, and Stefano Puntoni. "Chatbots and Mental Health: Insights into the Safety of Generative AI." Paper presented at the Business & Generative AI Workshop, Wharton School, AI at Wharton, San Francisco, CA, United States, September 8, 2023.
- February 22, 2017
- Article
Why Boards Aren't Dealing with Cyberthreats
By: J. Yo-Jud Cheng and Boris Groysberg
Cheng, J. Yo-Jud, and Boris Groysberg. "Why Boards Aren't Dealing with Cyberthreats." Harvard Business Review (website) (February 22, 2017). (Excerpt featured in the Harvard Business Review. May–June 2017 "Idea Watch" section.)
- 2024
- Conference Paper
Quantifying Uncertainty in Natural Language Explanations of Large Language Models
By: Himabindu Lakkaraju, Sree Harsha Tanneru and Chirag Agarwal
Large Language Models (LLMs) are increasingly used as powerful tools for several high-stakes natural language processing (NLP) applications. Recent prompting works claim to elicit intermediate reasoning steps and key tokens that serve as proxy explanations for LLM...
Lakkaraju, Himabindu, Sree Harsha Tanneru, and Chirag Agarwal. "Quantifying Uncertainty in Natural Language Explanations of Large Language Models." Paper presented at the Society for Artificial Intelligence and Statistics, 2024.
- 2022
- Article
Efficiently Training Low-Curvature Neural Networks
By: Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju and Francois Fleuret
Standard deep neural networks often have excess non-linearity, making them susceptible to issues such as low adversarial robustness and gradient instability. Common methods to address these downstream issues, such as adversarial training, are expensive and often...
Keywords: AI and Machine Learning
Srinivas, Suraj, Kyle Matoba, Himabindu Lakkaraju, and Francois Fleuret. "Efficiently Training Low-Curvature Neural Networks." Advances in Neural Information Processing Systems (NeurIPS) (2022).
- July–August 2021
- Article
Why You Aren't Getting More from Your Marketing AI
By: Eva Ascarza, Michael Ross and Bruce G.S. Hardie
Fewer than 40% of companies that invest in AI see gains from it, usually because of one or more of these errors: (1) They don’t ask the right question, and end up directing AI to solve the wrong problem. (2) They don’t recognize the differences between the value of...
Keywords: Artificial Intelligence; Marketing; Decision Making; Communication; Framework; AI and Machine Learning
Ascarza, Eva, Michael Ross, and Bruce G.S. Hardie. "Why You Aren't Getting More from Your Marketing AI." Harvard Business Review 99, no. 4 (July–August 2021): 48–54.
- November 2, 2021
- Article
The Cultural Benefits of Artificial Intelligence in the Enterprise
By: Sam Ransbotham, François Candelon, David Kiron, Burt LaFountain and Shervin Khodabandeh
The 2021 MIT SMR-BCG report identifies a wide range of AI-related cultural benefits at both the team and organizational levels. Whether it’s reconsidering business assumptions or empowering teams, managing the dynamics across culture, AI use, and organizational...
Ransbotham, Sam, François Candelon, David Kiron, Burt LaFountain, and Shervin Khodabandeh. "The Cultural Benefits of Artificial Intelligence in the Enterprise." MIT Sloan Management Review, Big Ideas Artificial Intelligence and Business Strategy Initiative (website) (November 2, 2021). (Findings from the 2021 Artificial Intelligence and Business Strategy Global Executive Study and Research Project.)
- October 20–22, 2022
- Talk
Stigma Against AI Companion Applications
By: Julian De Freitas, A. Ragnhildstveit and A.K. Uğuralp
- 2023
- Article
Benchmarking Large Language Models on CMExam—A Comprehensive Chinese Medical Exam Dataset
By: Junling Liu, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu and Michael Lingzhi Li
Recent advancements in large language models (LLMs) have transformed the field of question answering (QA). However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam,...
Keywords: Large Language Model; AI and Machine Learning; Analytics and Data Science; Health Industry
Liu, Junling, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu, and Michael Lingzhi Li. "Benchmarking Large Language Models on CMExam—A Comprehensive Chinese Medical Exam Dataset." Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track 36 (2023).
- 2023
- Working Paper
The Impact of Input Inaccuracy on Leveraging AI Tools: Evidence from Algorithmic Labor Scheduling
By: Caleb Kwon, Antonio Moreno and Ananth Raman
Are the inputs used by your AI tool correct and up to date? In this paper, we show that the answer to this question (i) is frequently “no” in real business contexts, and (ii) has significant implications for the performance of AI tools. In the context of algorithmic...
Kwon, Caleb, Antonio Moreno, and Ananth Raman. "The Impact of Input Inaccuracy on Leveraging AI Tools: Evidence from Algorithmic Labor Scheduling." Working Paper, October 2023.
- June 20, 2023
- Article
Cautious Adoption of AI Can Create Positive Company Culture
By: Joseph Pacelli and Jonas Heese
Pacelli, Joseph, and Jonas Heese. "Cautious Adoption of AI Can Create Positive Company Culture." CMR Insights (June 20, 2023).
- October 31, 2022
- Article
Achieving Individual—and Organizational—Value with AI
By: Sam Ransbotham, David Kiron, François Candelon, Shervin Khodabandeh and Michael Chu
New research shows that employees derive individual value from AI when using the technology improves their sense of competency, autonomy, and relatedness. Likewise, organizations are far more likely to obtain value from AI when their workers do. This report offers key...
Ransbotham, Sam, David Kiron, François Candelon, Shervin Khodabandeh, and Michael Chu. "Achieving Individual—and Organizational—Value with AI." MIT Sloan Management Review, Big Ideas Artificial Intelligence and Business Strategy Initiative (website) (October 31, 2022). (Findings from the 2022 Artificial Intelligence and Business Strategy Global Executive Study and Research Project.)
- October 14, 2023
- Article
Will Consumers Buy Selfish Self-Driving Cars?
De Freitas, Julian. "Will Consumers Buy Selfish Self-Driving Cars?" Wall Street Journal (October 14, 2023), C5.