Filter Results: (664)
Show Results For
- All HBS Web (1,039)
- People (1)
- News (155)
- Research (664)
- Events (13)
- Multimedia (3)
- Faculty Publications (575)
- September 23, 2024
- Article
AI Wants to Make You Less Lonely. Does It Work?
De Freitas, Julian. "AI Wants to Make You Less Lonely. Does It Work?" Wall Street Journal (September 23, 2024), R.11.
- 2023
- Working Paper
Distributionally Robust Causal Inference with Observational Data
By: Dimitris Bertsimas, Kosuke Imai and Michael Lingzhi Li
We consider the estimation of average treatment effects in observational studies and propose a new framework of robust causal inference with unobserved confounders. Our approach is based on distributionally robust optimization and proceeds in two steps. We first...
Bertsimas, Dimitris, Kosuke Imai, and Michael Lingzhi Li. "Distributionally Robust Causal Inference with Observational Data." Working Paper, February 2023.
- December 18, 2024
- Article
Is AI the Right Tool to Solve That Problem?
By: Paolo Cervini, Chiara Farronato, Pushmeet Kohli and Marshall W. Van Alstyne
While AI has the potential to solve major problems, organizations embarking on such journeys often encounter obstacles. They include a dearth of high-quality data; too many possible solutions; the lack of a clear, measurable objective; and difficulty in identifying...
Cervini, Paolo, Chiara Farronato, Pushmeet Kohli, and Marshall W. Van Alstyne. "Is AI the Right Tool to Solve That Problem?" Harvard Business Review (website) (December 18, 2024).
- March 2024
- Exercise
'Storrowed': A Generative AI Exercise
By: Mitchell Weiss
"Storrowed" is an exercise to help participants raise their capacity and curiosity for generative AI. It focuses on generative AI for problem understanding and ideation, but can be adapted for use more broadly. Participants use generative AI tools to understand a...
Weiss, Mitchell. "'Storrowed': A Generative AI Exercise." Harvard Business School Exercise 824-188, March 2024.
- Winter 2021
- Editorial
Introduction
This issue of Negotiation Journal is dedicated to the theme of artificial intelligence, technology, and negotiation. It arose from a Program on Negotiation (PON) working conference on that important topic held virtually on May 17–18. The conference was not the...
Wheeler, Michael A. "Introduction." Special Issue on Artificial Intelligence, Technology, and Negotiation. Negotiation Journal 37, no. 1 (Winter 2021): 5–12.
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- September 8, 2023
- Conference Presentation
Chatbots and Mental Health: Insights into the Safety of Generative AI
By: Julian De Freitas, K. Uguralp, Z. Uguralp and Stefano Puntoni
De Freitas, Julian, K. Uguralp, Z. Uguralp, and Stefano Puntoni. "Chatbots and Mental Health: Insights into the Safety of Generative AI." Paper presented at the Business & Generative AI Workshop, Wharton School, AI at Wharton, San Francisco, CA, United States, September 8, 2023.
- February 22, 2017
- Article
Why Boards Aren't Dealing with Cyberthreats
By: J. Yo-Jud Cheng and Boris Groysberg
Cheng, J. Yo-Jud, and Boris Groysberg. "Why Boards Aren't Dealing with Cyberthreats." Harvard Business Review (website) (February 22, 2017). (Excerpt featured in the Harvard Business Review. May–June 2017 "Idea Watch" section.)
- July–August 2021
- Article
Why You Aren't Getting More from Your Marketing AI
By: Eva Ascarza, Michael Ross and Bruce G.S. Hardie
Fewer than 40% of companies that invest in AI see gains from it, usually because of one or more of these errors: (1) They don’t ask the right question, and end up directing AI to solve the wrong problem. (2) They don’t recognize the differences between the value of...
Keywords: Artificial Intelligence; Marketing; Decision Making; Communication; Framework; AI and Machine Learning
Ascarza, Eva, Michael Ross, and Bruce G.S. Hardie. "Why You Aren't Getting More from Your Marketing AI." Harvard Business Review 99, no. 4 (July–August 2021): 48–54.
- February 2024
- Technical Note
AI Product Development Lifecycle
By: Michael Parzen, Jessie Li and Marily Nika
In this article, we will discuss the concept of AI Products, how they are changing our daily lives, how the field of AI & Product Management is evolving, and the AI Product Development Lifecycle.
Keywords: Artificial Intelligence; Product Management; Product Life Cycle; Technology; AI and Machine Learning; Product Development
Parzen, Michael, Jessie Li, and Marily Nika. "AI Product Development Lifecycle." Harvard Business School Technical Note 624-070, February 2024.
- October 31, 2022
- Article
Achieving Individual—and Organizational—Value with AI
By: Sam Ransbotham, David Kiron, François Candelon, Shervin Khodabandeh and Michael Chu
New research shows that employees derive individual value from AI when using the technology improves their sense of competency, autonomy, and relatedness. Likewise, organizations are far more likely to obtain value from AI when their workers do. This report offers key...
Ransbotham, Sam, David Kiron, François Candelon, Shervin Khodabandeh, and Michael Chu. "Achieving Individual—and Organizational—Value with AI." MIT Sloan Management Review, Big Ideas Artificial Intelligence and Business Strategy Initiative (website) (October 31, 2022). (Findings from the 2022 Artificial Intelligence and Business Strategy Global Executive Study and Research Project.)
- July 2024
- Article
AI, ROI, and Sales Productivity
By: Frank V. Cespedes
Artificial intelligence (AI) is now a loose term for many different things and is at the peak of its hype curve, so managers hitch their pitch to the term when arguing for resources. But like any technology, its business value depends upon actionable use cases embraced by...
Cespedes, Frank V. "AI, ROI, and Sales Productivity." Top Sales Magazine (July 2024), 12–13.
- March 16, 2021
- Article
From Driverless Dilemmas to More Practical Commonsense Tests for Automated Vehicles
By: Julian De Freitas, Andrea Censi, Bryant Walker Smith, Luigi Di Lillo, Sam E. Anthony and Emilio Frazzoli
For the first time in history, automated vehicles (AVs) are being deployed in populated environments. This unprecedented transformation of our everyday lives demands a significant undertaking: endowing complex autonomous systems with ethically acceptable behavior. We...
Keywords: Automated Driving; Public Health; Artificial Intelligence; Transportation; Health; Ethics; Policy; AI and Machine Learning
De Freitas, Julian, Andrea Censi, Bryant Walker Smith, Luigi Di Lillo, Sam E. Anthony, and Emilio Frazzoli. "From Driverless Dilemmas to More Practical Commonsense Tests for Automated Vehicles." Proceedings of the National Academy of Sciences 118, no. 11 (March 16, 2021).
- 2025
- Article
Humor as a Window into Generative AI Bias
By: Roger Samure, Julian De Freitas and Stefano Puntoni
A preregistered audit of 600 generative-AI images across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier,” the prevalence of stereotyped groups changes. While...
Samure, Roger, Julian De Freitas, and Stefano Puntoni. "Humor as a Window into Generative AI Bias." Art. 1326. Scientific Reports 15 (2025).
- 2025
- Working Paper
The Impact of Input Inaccuracy on Leveraging AI Tools: Evidence from Algorithmic Labor Scheduling
By: Caleb Kwon, Antonio Moreno and Ananth Raman
Problem Definition: Considerable academic and practitioner attention is placed on the value of ex-post interactions (i.e., overrides) in the human-AI interface. In contrast, relatively little attention has been paid to ex-ante human-AI interactions (e.g., the...
Kwon, Caleb, Antonio Moreno, and Ananth Raman. "The Impact of Input Inaccuracy on Leveraging AI Tools: Evidence from Algorithmic Labor Scheduling." Working Paper, January 2025.
- June 20, 2023
- Article
Cautious Adoption of AI Can Create Positive Company Culture
By: Joseph Pacelli and Jonas Heese
Pacelli, Joseph, and Jonas Heese. "Cautious Adoption of AI Can Create Positive Company Culture." CMR Insights (June 20, 2023).
- 2025
- Working Paper
Narrative AI and the Human-AI Oversight Paradox in Evaluating Early-Stage Innovations
By: Jacqueline N. Lane, Léonard Boussioux, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh and Pei-Hsin Wang
Do AI-generated narrative explanations enhance human oversight or diminish it? We investigate this question through a field experiment with 228 evaluators screening 48 early-stage innovations under three conditions: human-only, black-box AI recommendations without...
Lane, Jacqueline N., Léonard Boussioux, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh, and Pei-Hsin Wang. "Narrative AI and the Human-AI Oversight Paradox in Evaluating Early-Stage Innovations." Harvard Business School Working Paper, No. 25-001, August 2024. (Revised May 2025.)
- 2025
- Working Paper
Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers
By: Matthew DosSantos DiSorbo, Kris Ferreira, Maya Balakrishnan and Jordan Tong
Problem definition: While artificial intelligence (AI) algorithms may perform well on data that are representative of the training set (inliers), they may err when extrapolating on non-representative data (outliers). How can humans and algorithms work together to make...
DosSantos DiSorbo, Matthew, Kris Ferreira, Maya Balakrishnan, and Jordan Tong. "Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers." Working Paper, May 2025.