Filter Results: (1,204)
Show Results For
- All HBS Web (1,204)
- People (2)
- News (188)
- Research (805)
- Events (14)
- Multimedia (3)
- Faculty Publications (574)
- March 16, 2021
- Article
From Driverless Dilemmas to More Practical Commonsense Tests for Automated Vehicles
By: Julian De Freitas, Andrea Censi, Bryant Walker Smith, Luigi Di Lillo, Sam E. Anthony and Emilio Frazzoli
For the first time in history, automated vehicles (AVs) are being deployed in populated environments. This unprecedented transformation of our everyday lives demands a significant undertaking: endowing complex autonomous systems with ethically acceptable behavior. We...
Keywords: Automated Driving; Public Health; Artificial Intelligence; Transportation; Health; Ethics; Policy; AI and Machine Learning
De Freitas, Julian, Andrea Censi, Bryant Walker Smith, Luigi Di Lillo, Sam E. Anthony, and Emilio Frazzoli. "From Driverless Dilemmas to More Practical Commonsense Tests for Automated Vehicles." Proceedings of the National Academy of Sciences 118, no. 11 (March 16, 2021).
- 2024
- Working Paper
Displacement or Complementarity? The Labor Market Impact of Generative AI
By: Wilbur Xinyuan Chen, Suraj Srinivasan and Saleh Zakerinia
Generative AI is poised to reshape the labor market, affecting cognitive and white-collar occupations in ways distinct from past technological revolutions. This study examines whether generative AI displaces workers or augments their jobs by analyzing labor demand and...
Keywords: Generative AI; Labor Market; Automation and Augmentation; Labor; AI and Machine Learning; Competency and Skills
Chen, Wilbur Xinyuan, Suraj Srinivasan, and Saleh Zakerinia. "Displacement or Complementarity? The Labor Market Impact of Generative AI." Harvard Business School Working Paper, No. 25-039, December 2024.
- December 18, 2024
- Article
Is AI the Right Tool to Solve That Problem?
By: Paolo Cervini, Chiara Farronato, Pushmeet Kohli and Marshall W. Van Alstyne
While AI has the potential to solve major problems, organizations embarking on such journeys often encounter obstacles. They include a dearth of high-quality data; too many possible solutions; the lack of a clear, measurable objective; and difficulty in identifying...
Cervini, Paolo, Chiara Farronato, Pushmeet Kohli, and Marshall W. Van Alstyne. "Is AI the Right Tool to Solve That Problem?" Harvard Business Review (website) (December 18, 2024).
- March 2024
- Exercise
'Storrowed': A Generative AI Exercise
By: Mitchell Weiss
"Storrowed" is an exercise to help participants raise their capacity and curiosity for generative AI. It focuses on generative AI for problem understanding and ideation, but can be adapted for use more broadly. Participants use generative AI tools to understand a... View Details
Weiss, Mitchell. "'Storrowed': A Generative AI Exercise." Harvard Business School Exercise 824-188, March 2024.
- Winter 2021
- Editorial
Introduction
This issue of Negotiation Journal is dedicated to the theme of artificial intelligence, technology, and negotiation. It arose from a Program on Negotiation (PON) working conference on that important topic held virtually on May 17–18. The conference was not the...
Wheeler, Michael A. "Introduction." Special Issue on Artificial Intelligence, Technology, and Negotiation. Negotiation Journal 37, no. 1 (Winter 2021): 5–12.
- 2025
- Working Paper
Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers
By: Matthew DosSantos DiSorbo, Kris Ferreira, Maya Balakrishnan and Jordan Tong
Problem definition: While artificial intelligence (AI) algorithms may perform well on data that are representative of the training set (inliers), they may err when extrapolating on non-representative data (outliers). How can humans and algorithms work together to make...
DosSantos DiSorbo, Matthew, Kris Ferreira, Maya Balakrishnan, and Jordan Tong. "Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers." Working Paper, May 2025.
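The inlier/outlier distinction in the abstract lends itself to a small illustration. The following is a minimal, hypothetical sketch (not the authors' method): the model's prediction carries an endorsement when a test input falls inside the training range and a warning when it requires extrapolation; the data, model, and range-based rule are all illustrative assumptions.

```python
# Hypothetical sketch: endorse AI predictions on inliers, warn on
# outliers so a human can review the extrapolation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(200, 1))            # illustrative data
y_train = 3.0 * X_train[:, 0] + rng.normal(0, 1, 200)

model = LinearRegression().fit(X_train, y_train)
lo, hi = X_train.min(), X_train.max()

def predict_with_flag(x):
    """Return (prediction, 'endorsement' | 'warning') for scalar x."""
    pred = model.predict(np.array([[x]]))[0]
    flag = "endorsement" if lo <= x <= hi else "warning"
    return pred, flag

print(predict_with_flag(5.0))    # inlier  -> endorsement
print(predict_with_flag(25.0))   # outlier -> warning, route to a human
```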
- 2023
- Article
Exploiting Discovered Regression Discontinuities to Debias Conditioned-on-observable Estimators
By: Benjamin Jakubowski, Sriram Somanchi, Edward McFowland III and Daniel B. Neill
Regression discontinuity (RD) designs are widely used to estimate causal effects in the absence of a randomized experiment. However, standard approaches to RD analysis face two significant limitations. First, they require a priori knowledge of discontinuities in...
Jakubowski, Benjamin, Sriram Somanchi, Edward McFowland III, and Daniel B. Neill. "Exploiting Discovered Regression Discontinuities to Debias Conditioned-on-observable Estimators." Journal of Machine Learning Research 24, no. 133 (2023): 1–57.
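As background for the abstract above, here is a minimal sketch of the standard sharp-RD estimate with a known cutoff and simulated data; the paper's contribution is discovering such discontinuities rather than assuming the cutoff, so this illustrates the baseline setup, not the authors' algorithm.

```python
# Standard sharp-RD sketch: local linear fits on each side of a known
# cutoff; the difference in intercepts estimates the jump at the cutoff.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 2000)                 # running variable
tau = 2.0                                    # true effect at the cutoff
y = 1.5 * x + tau * (x >= 0) + rng.normal(0, 0.5, 2000)

h = 0.2                                      # bandwidth around cutoff 0
left = (x > -h) & (x < 0)
right = (x >= 0) & (x < h)

# np.polyfit returns [slope, intercept]; the intercepts estimate the
# limits of y as x approaches the cutoff from each side.
b_left = np.polyfit(x[left], y[left], 1)[1]
b_right = np.polyfit(x[right], y[right], 1)[1]
print("estimated effect:", b_right - b_left)   # close to tau = 2.0
```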
- May 2022
- Supplement
Borusan CAT: Monetizing Prediction in the Age of AI (B)
By: Navid Mojir and Gamze Yucaoglu
Borusan Cat is an international distributor of Caterpillar heavy machines. In 2021, it had been three years since Ozgur Gunaydin (CEO) and Esra Durgun (Director of Strategy, Digitization, and Innovation) started working on Muneccim, the company’s predictive AI tool....
Keywords: AI and Machine Learning; Commercialization; Technology Adoption; Industrial Products Industry; Turkey; Middle East
Mojir, Navid, and Gamze Yucaoglu. "Borusan CAT: Monetizing Prediction in the Age of AI (B)." Harvard Business School Supplement 522-045, May 2022.
- July–August 2021
- Article
Why You Aren't Getting More from Your Marketing AI
By: Eva Ascarza, Michael Ross and Bruce G.S. Hardie
Fewer than 40% of companies that invest in AI see gains from it, usually because of one or more of these errors: (1) They don’t ask the right question, and end up directing AI to solve the wrong problem. (2) They don’t recognize the differences between the value of...
Keywords: Artificial Intelligence; Marketing; Decision Making; Communication; Framework; AI and Machine Learning
Ascarza, Eva, Michael Ross, and Bruce G.S. Hardie. "Why You Aren't Getting More from Your Marketing AI." Harvard Business Review 99, no. 4 (July–August 2021): 48–54.
- 16 Nov 2020
- Blog Post
Flatiron School: Reflections from Summer 2020
Birchbox, Young Invincibles, Color Camp, Women 2.0, and Casper. What were your goals for the summer? Rocio Wu (MBA 2020): Learning Python and machine learning had always been on...
Keywords: All Industries
- February 2024
- Technical Note
AI Product Development Lifecycle
By: Michael Parzen, Jessie Li and Marily Nika
In this article, we will discuss the concept of AI Products, how they are changing our daily lives, how the field of AI & Product Management is evolving, and the AI Product Development Lifecycle.
Keywords: Artificial Intelligence; Product Management; Product Life Cycle; Technology; AI and Machine Learning; Product Development
Parzen, Michael, Jessie Li, and Marily Nika. "AI Product Development Lifecycle." Harvard Business School Technical Note 624-070, February 2024.
- January–February 2025
- Article
Why People Resist Embracing AI
The success of AI depends not only on its capabilities, which are becoming more advanced each day, but on people’s willingness to harness them. Unfortunately, many people view AI negatively, fearing it will cause job losses, increase the likelihood that their personal...
De Freitas, Julian. "Why People Resist Embracing AI." Harvard Business Review 103, no. 1 (January–February 2025): 52–56.
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
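Going only by the abstract's description, the core idea can be sketched as follows: perturb the model's parameters with Gaussian noise and measure how the loss on a candidate text changes, with larger increases indicating the sharper curvature associated with training-set membership. The model (gpt2), noise scale, and single-perturbation setup below are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a parameter-perturbation membership signal.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # small stand-in model, chosen for illustration
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def loss_on(m, text):
    """Average next-token loss of model m on the given text."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return m(ids, labels=ids).loss.item()

def perturbed_loss(text, sigma=0.005):
    """Loss after adding Gaussian noise to every parameter."""
    m = copy.deepcopy(model)
    with torch.no_grad():
        for p in m.parameters():
            p.add_(sigma * torch.randn_like(p))
    return loss_on(m, text)

text = "The quick brown fox jumps over the lazy dog."
delta = perturbed_loss(text) - loss_on(model, text)
# A larger loss increase under perturbation suggests sharper curvature
# around the trained parameters for this text.
print(f"loss change under perturbation: {delta:+.3f}")
```

In practice one would average over many noise draws and calibrate a decision threshold; this single draw only shows the mechanics.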
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
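For context, the input-gradients the abstract refers to are gradients of a class logit with respect to the input pixels; for robust models they tend to resemble what a human would point to in the image. The sketch below computes such a gradient for a toy, untrained CNN, which is an illustrative stand-in for the models studied in the paper.

```python
# Minimal input-gradient (saliency) computation for a toy classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
).eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # toy "image"
logits = model(x)
logits[0, logits.argmax()].backward()  # top-class logit w.r.t. pixels
saliency = x.grad.abs()                # the "input-gradient" under study
print(saliency.shape)                  # torch.Size([1, 3, 32, 32])
```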
- 8 Sep 2023
- Conference Presentation
Chatbots and Mental Health: Insights into the Safety of Generative AI
By: Julian De Freitas, K. Uguralp, Z. Uguralp and Stefano Puntoni
De Freitas, Julian, K. Uguralp, Z. Uguralp, and Stefano Puntoni. "Chatbots and Mental Health: Insights into the Safety of Generative AI." Paper presented at the Business & Generative AI Workshop, Wharton School, AI at Wharton, San Francisco, CA, United States, September 8, 2023.
- February 22, 2017
- Article
Why Boards Aren't Dealing with Cyberthreats
By: J. Yo-Jud Cheng and Boris Groysberg
Cheng, J. Yo-Jud, and Boris Groysberg. "Why Boards Aren't Dealing with Cyberthreats." Harvard Business Review (website) (February 22, 2017). (Excerpt featured in the Harvard Business Review, May–June 2017, "Idea Watch" section.)
- 2025
- Article
Humor as a Window into Generative AI Bias
By: Roger Saumure, Julian De Freitas and Stefano Puntoni
A preregistered audit of 600 AI-generated images across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier”, the prevalence of stereotyped groups changes. While...
Saumure, Roger, Julian De Freitas, and Stefano Puntoni. "Humor as a Window into Generative AI Bias." Art. 1326. Scientific Reports 15 (2025).
- 2025
- Working Paper
The Impact of Input Inaccuracy on Leveraging AI Tools: Evidence from Algorithmic Labor Scheduling
By: Caleb Kwon, Antonio Moreno and Ananth Raman
Problem Definition: Considerable academic and practitioner attention is placed on the value of ex-post interactions (i.e., overrides) in the human-AI interface. In contrast, relatively little attention has been paid to ex-ante human-AI interactions (e.g., the...
Kwon, Caleb, Antonio Moreno, and Ananth Raman. "The Impact of Input Inaccuracy on Leveraging AI Tools: Evidence from Algorithmic Labor Scheduling." Working Paper, January 2025.
- June 20, 2023
- Article
Cautious Adoption of AI Can Create Positive Company Culture
By: Joseph Pacelli and Jonas Heese
Pacelli, Joseph, and Jonas Heese. "Cautious Adoption of AI Can Create Positive Company Culture." CMR Insights (June 20, 2023).
- Working Paper
Shifting Work Patterns with Generative AI
By: Eleanor W. Dillon, Sonia Jaffe, Nicole Immorlica and Christopher T. Stanton
We present evidence on how generative AI changes the work patterns of knowledge workers using data from a 6-month-long, cross-industry, randomized field experiment. Half of the 7,137 workers in the study received access to a generative AI tool integrated into the...
Dillon, Eleanor W., Sonia Jaffe, Nicole Immorlica, and Christopher T. Stanton. "Shifting Work Patterns with Generative AI." NBER Working Paper Series, No. 33795, May 2025.