Filter Results:
(1,046)
Show Results For
- All HBS Web
(1,046)
- People (1)
- News (187)
- Research (679)
- Events (13)
- Multimedia (3)
- Faculty Publications (561)
- December 18, 2024
- Article
Is AI the Right Tool to Solve That Problem?
By: Paolo Cervini, Chiara Farronato, Pushmeet Kohli and Marshall W. Van Alstyne
While AI has the potential to solve major problems, organizations embarking on such journeys often encounter obstacles. They include a dearth of high-quality data; too many possible solutions; the lack of a clear, measurable objective; and difficulty in identifying...
Cervini, Paolo, Chiara Farronato, Pushmeet Kohli, and Marshall W. Van Alstyne. "Is AI the Right Tool to Solve That Problem?" Harvard Business Review (website) (December 18, 2024).
- March 2024
- Exercise
'Storrowed': A Generative AI Exercise
By: Mitchell Weiss
"Storrowed" is an exercise to help participants raise their capacity and curiosity for generative AI. It focuses on generative AI for problem understanding and ideation, but can be adapted for use more broadly. Participants use generative AI tools to understand a...
Weiss, Mitchell. "'Storrowed': A Generative AI Exercise." Harvard Business School Exercise 824-188, March 2024.
- Winter 2021
- Editorial
Introduction
This issue of Negotiation Journal is dedicated to the theme of artificial intelligence, technology, and negotiation. It arose from a Program on Negotiation (PON) working conference on that important topic held virtually on May 17–18. The conference was not the...
Wheeler, Michael A. "Introduction." Special Issue on Artificial Intelligence, Technology, and Negotiation. Negotiation Journal 37, no. 1 (Winter 2021): 5–12.
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
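The abstract above does not spell out MoPe's exact construction, but the general idea behind perturbation-based membership inference can be sketched: examples seen during training tend to sit in sharper regions of the loss surface, so randomly perturbing the model's parameters raises their loss more than it raises the loss of unseen text. The toy quadratic losses and the `mope_score` helper below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def mope_score(loss_fn, params, sigma=0.05, n_perturb=50, seed=0):
    """Mean loss increase under random parameter perturbations.

    A larger mean increase indicates a sharper local minimum, which
    perturbation-based attacks read as evidence of training-set membership.
    """
    rng = np.random.default_rng(seed)
    base = loss_fn(params)
    increases = []
    for _ in range(n_perturb):
        noise = sigma * rng.standard_normal(params.shape)
        increases.append(loss_fn(params + noise) - base)
    return float(np.mean(increases))

# Toy stand-ins: a "member" sits in a sharp minimum, a "non-member" in a flat one.
params = np.zeros(5)
sharp_score = mope_score(lambda p: 10.0 * float(p @ p), params)
flat_score = mope_score(lambda p: 0.1 * float(p @ p), params)
```

With both losses minimized at the same point, the sharper loss yields the larger score, so thresholding the score separates the two cases.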
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 8 Sep 2023
- Conference Presentation
Chatbots and Mental Health: Insights into the Safety of Generative AI
By: Julian De Freitas, K. Uguralp, Z. Uguralp and Stefano Puntoni
De Freitas, Julian, K. Uguralp, Z. Uguralp, and Stefano Puntoni. "Chatbots and Mental Health: Insights into the Safety of Generative AI." Paper presented at the Business & Generative AI Workshop, Wharton School, AI at Wharton, San Francisco, CA, United States, September 8, 2023.
- Article
Why Boards Aren't Dealing with Cyberthreats
By: J. Yo-Jud Cheng and Boris Groysberg
Cheng, J. Yo-Jud, and Boris Groysberg. "Why Boards Aren't Dealing with Cyberthreats." Harvard Business Review (website) (February 22, 2017). (Excerpt featured in the Harvard Business Review. May–June 2017 "Idea Watch" section.)
- 2024
- Working Paper
Displacement or Complementarity? The Labor Market Impact of Generative AI
By: Wilbur Xinyuan Chen, Suraj Srinivasan and Saleh Zakerinia
Generative AI is poised to reshape the labor market, affecting cognitive and white-collar occupations in ways distinct from past technological revolutions. This study examines whether generative AI displaces workers or augments their jobs by analyzing labor demand and...
Keywords: Generative AI; Labor Market; Automation and Augmentation; Labor; AI and Machine Learning; Competency and Skills
Chen, Wilbur Xinyuan, Suraj Srinivasan, and Saleh Zakerinia. "Displacement or Complementarity? The Labor Market Impact of Generative AI." Harvard Business School Working Paper, No. 25-039, December 2024.
- February 2024
- Technical Note
AI Product Development Lifecycle
By: Michael Parzen, Jessie Li and Marily Nika
In this article, we will discuss the concept of AI Products, how they are changing our daily lives, how the field of AI & Product Management is evolving, and the AI Product Development Lifecycle.
Keywords: Artificial Intelligence; Product Management; Product Life Cycle; Technology; AI and Machine Learning; Product Development
Parzen, Michael, Jessie Li, and Marily Nika. "AI Product Development Lifecycle." Harvard Business School Technical Note 624-070, February 2024.
- October 31, 2022
- Article
Achieving Individual—and Organizational—Value with AI
By: Sam Ransbotham, David Kiron, François Candelon, Shervin Khodabandeh and Michael Chu
New research shows that employees derive individual value from AI when using the technology improves their sense of competency, autonomy, and relatedness. Likewise, organizations are far more likely to obtain value from AI when their workers do. This report offers key...
Ransbotham, Sam, David Kiron, François Candelon, Shervin Khodabandeh, and Michael Chu. "Achieving Individual—and Organizational—Value with AI." MIT Sloan Management Review, Big Ideas Artificial Intelligence and Business Strategy Initiative (website) (October 31, 2022). (Findings from the 2022 Artificial Intelligence and Business Strategy Global Executive Study and Research Project.)
- July 2024
- Article
AI, ROI, and Sales Productivity
By: Frank V. Cespedes
Artificial intelligence (AI) is now a loose term for many different things and is at the peak of its hype curve. So managers hitch their pitch to the term when arguing for resources. But like any technology, its business value depends upon actionable use cases embraced by...
Cespedes, Frank V. "AI, ROI, and Sales Productivity." Top Sales Magazine (July 2024), 12–13.
- March 16, 2021
- Article
From Driverless Dilemmas to More Practical Commonsense Tests for Automated Vehicles
By: Julian De Freitas, Andrea Censi, Bryant Walker Smith, Luigi Di Lillo, Sam E. Anthony and Emilio Frazzoli
For the first time in history, automated vehicles (AVs) are being deployed in populated environments. This unprecedented transformation of our everyday lives demands a significant undertaking: endowing complex autonomous systems with ethically acceptable behavior. We...
Keywords: Automated Driving; Public Health; Artificial Intelligence; Transportation; Health; Ethics; Policy; AI and Machine Learning
De Freitas, Julian, Andrea Censi, Bryant Walker Smith, Luigi Di Lillo, Sam E. Anthony, and Emilio Frazzoli. "From Driverless Dilemmas to More Practical Commonsense Tests for Automated Vehicles." Proceedings of the National Academy of Sciences 118, no. 11 (March 16, 2021).
- April 2023
- Article
On the Privacy Risks of Algorithmic Recourse
By: Martin Pawelczyk, Himabindu Lakkaraju and Seth Neel
As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected...
Pawelczyk, Martin, Himabindu Lakkaraju, and Seth Neel. "On the Privacy Risks of Algorithmic Recourse." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 206 (April 2023).
- 2025
- Working Paper
Narrative AI and the Human-AI Oversight Paradox in Evaluating Early-Stage Innovations
By: Jacqueline N. Lane, Léonard Boussioux, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh and Pei-Hsin Wang
Do AI-generated narrative explanations enhance human oversight or diminish it? We investigate this question through a field experiment with 228 evaluators screening 48 early-stage innovations under three conditions: human-only, black-box AI recommendations without...
Lane, Jacqueline N., Léonard Boussioux, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh, and Pei-Hsin Wang. "Narrative AI and the Human-AI Oversight Paradox in Evaluating Early-Stage Innovations." Harvard Business School Working Paper, No. 25-001, August 2024. (Revised May 2025.)
- 2025
- Working Paper
Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers
By: Matthew DosSantos DiSorbo, Kris Ferreira, Maya Balakrishnan and Jordan Tong
Problem definition: While artificial intelligence (AI) algorithms may perform well on data that are representative of the training set (inliers), they may err when extrapolating on non-representative data (outliers). How can humans and algorithms work together to make...
DosSantos DiSorbo, Matthew, Kris Ferreira, Maya Balakrishnan, and Jordan Tong. "Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers." Working Paper, May 2025.
- 2023
- Article
Exploiting Discovered Regression Discontinuities to Debias Conditioned-on-observable Estimators
By: Benjamin Jakubowski, Sriram Somanchi, Edward McFowland III and Daniel B. Neill
Regression discontinuity (RD) designs are widely used to estimate causal effects in the absence of a randomized experiment. However, standard approaches to RD analysis face two significant limitations. First, they require a priori knowledge of discontinuities in...
Jakubowski, Benjamin, Sriram Somanchi, Edward McFowland III, and Daniel B. Neill. "Exploiting Discovered Regression Discontinuities to Debias Conditioned-on-observable Estimators." Journal of Machine Learning Research 24, no. 133 (2023): 1–57.
- May 2022
- Supplement
Borusan CAT: Monetizing Prediction in the Age of AI (B)
By: Navid Mojir and Gamze Yucaoglu
Borusan Cat is an international distributor of Caterpillar heavy machines. In 2021, it had been three years since Ozgur Gunaydin (CEO) and Esra Durgun (Director of Strategy, Digitization, and Innovation) started working on Muneccim, the company’s predictive AI tool....
Keywords: AI and Machine Learning; Commercialization; Technology Adoption; Industrial Products Industry; Turkey; Middle East
Mojir, Navid, and Gamze Yucaoglu. "Borusan CAT: Monetizing Prediction in the Age of AI (B)." Harvard Business School Supplement 522-045, May 2022.
- Working Paper
Shifting Work Patterns with Generative AI
By: Eleanor W. Dillon, Sonia Jaffe, Nicole Immorlica and Christopher T. Stanton
We present evidence on how generative AI changes the work patterns of knowledge workers using data from a 6-month-long, cross-industry, randomized field experiment. Half of the 7,137 workers in the study received access to a generative AI tool integrated into the...
Dillon, Eleanor W., Sonia Jaffe, Nicole Immorlica, and Christopher T. Stanton. "Shifting Work Patterns with Generative AI." NBER Working Paper Series, No. 33795, May 2025. (Conditionally accepted at American Economic Review: Insights.)
- January 2025
- Technical Note
AI vs Human: Analyzing Acceptable Error Rates Using the Confusion Matrix
By: Tsedal Neeley and Tim Englehart
This technical note introduces the confusion matrix as a foundational tool in artificial intelligence (AI) and large language models (LLMs) for assessing the performance of classification models, focusing on their reliability for decision-making. A confusion matrix...
Keywords: Reliability; Confusion Matrix; AI and Machine Learning; Decision Making; Measurement and Metrics; Performance
Neeley, Tsedal, and Tim Englehart. "AI vs Human: Analyzing Acceptable Error Rates Using the Confusion Matrix." Harvard Business School Technical Note 425-049, January 2025.
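The note itself is not excerpted here, but the tool it introduces is standard: a confusion matrix tallies actual versus predicted labels, and acceptable error rates fall out of its four cells. A minimal sketch (the labels and counts are invented for illustration, not drawn from the note):

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels=("pos", "neg")):
    """Count (actual, predicted) label pairs into a 2x2 table."""
    counts = Counter(zip(actual, predicted))
    return {(a, p): counts.get((a, p), 0) for a in labels for p in labels}

actual = ["pos", "pos", "neg", "neg", "neg", "pos"]
predicted = ["pos", "neg", "neg", "pos", "neg", "pos"]
cm = confusion_matrix(actual, predicted)

tp = cm[("pos", "pos")]  # true positives
fn = cm[("pos", "neg")]  # false negatives (missed positives)
fp = cm[("neg", "pos")]  # false positives (false alarms)
tn = cm[("neg", "neg")]  # true negatives

accuracy = (tp + tn) / len(actual)
false_positive_rate = fp / (fp + tn)
false_negative_rate = fn / (fn + tp)
```

Whether a given false-positive or false-negative rate is acceptable depends on the relative cost of each error, which is the decision-making question such a note addresses.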