Filter Results (954)
Show Results For
- All HBS Web (954)
- People (1)
- News (156)
- Research (636)
- Events (13)
- Multimedia (3)
- Faculty Publications (542)
- July 2024
- Case
Replika AI: Alleviating Loneliness (A)
By: Shikhar Ghosh and Shweta Bagai
Eugenia Kuyda launched Replika AI in 2017 as an empathetic digital companion to combat loneliness and provide emotional support. The platform surged in popularity during the COVID-19 pandemic, offering non-judgmental support to isolated users. By 2023, Replika boasted...
Keywords: Entrepreneurship; Ethics; Health Pandemics; AI and Machine Learning; Well-being; Technology Industry
Ghosh, Shikhar, and Shweta Bagai. "Replika AI: Alleviating Loneliness (A)." Harvard Business School Case 824-088, July 2024.
- Working Paper
Shifting Work Patterns with Generative AI
By: Eleanor W. Dillon, Sonia Jaffe, Nicole Immorlica and Christopher T. Stanton
We present evidence on how generative AI changes the work patterns of knowledge workers using data from a 6-month-long, cross-industry, randomized field experiment. Half of the 7,137 workers in the study received access to a generative AI tool integrated into the...
Dillon, Eleanor W., Sonia Jaffe, Nicole Immorlica, and Christopher T. Stanton. "Shifting Work Patterns with Generative AI." NBER Working Paper Series, No. 33795, May 2025.
- January 2025
- Technical Note
AI vs Human: Analyzing Acceptable Error Rates Using the Confusion Matrix
By: Tsedal Neeley and Tim Englehart
This technical note introduces the confusion matrix as a foundational tool in artificial intelligence (AI) and large language models (LLMs) for assessing the performance of classification models, focusing on their reliability for decision-making. A confusion matrix...
Keywords: Reliability; Confusion Matrix; AI and Machine Learning; Decision Making; Measurement and Metrics; Performance
Neeley, Tsedal, and Tim Englehart. "AI vs Human: Analyzing Acceptable Error Rates Using the Confusion Matrix." Harvard Business School Technical Note 425-049, January 2025.
- September 2024
- Exercise
Finding Your 'Jagged Frontier': A Generative AI Exercise
By: Mitchell Weiss
In 2023 a set of scholars set out to study the effect of artificial intelligence (AI) on the quality and productivity of knowledge workers—in this specific instance, management consultants. They wanted to know, across a range of tasks in a workflow, which, if any, would...
Keywords: AI and Machine Learning; Performance Productivity; Performance Evaluation; Consulting Industry
Weiss, Mitchell. "Finding Your 'Jagged Frontier': A Generative AI Exercise." Harvard Business School Exercise 825-070, September 2024.
- November 2, 2021
- Article
The Cultural Benefits of Artificial Intelligence in the Enterprise
By: Sam Ransbotham, François Candelon, David Kiron, Burt LaFountain and Shervin Khodabandeh
The 2021 MIT SMR-BCG report identifies a wide range of AI-related cultural benefits at both the team and organizational levels. Whether it’s reconsidering business assumptions or empowering teams, managing the dynamics across culture, AI use, and organizational...
Ransbotham, Sam, François Candelon, David Kiron, Burt LaFountain, and Shervin Khodabandeh. "The Cultural Benefits of Artificial Intelligence in the Enterprise." MIT Sloan Management Review, Big Ideas Artificial Intelligence and Business Strategy Initiative (website) (November 2, 2021). (Findings from the 2021 Artificial Intelligence and Business Strategy Global Executive Study and Research Project.)
- 20 Oct 2022 - 22 Oct 2022
- Talk
Stigma Against AI Companion Applications
By: Julian De Freitas, A. Ragnhildstveit and A.K. Uğuralp
- 2025
- Working Paper
Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers
By: Matthew DosSantos DiSorbo, Kris Ferreira, Maya Balakrishnan and Jordan Tong
Problem definition: While artificial intelligence (AI) algorithms may perform well on data that are representative of the training set (inliers), they may err when extrapolating on non-representative data (outliers). How can humans and algorithms work together to make...
DosSantos DiSorbo, Matthew, Kris Ferreira, Maya Balakrishnan, and Jordan Tong. "Warnings and Endorsements: Improving Human-AI Collaboration in the Presence of Outliers." Working Paper, May 2025.
- 2023
- Article
Exploiting Discovered Regression Discontinuities to Debias Conditioned-on-observable Estimators
By: Benjamin Jakubowski, Sriram Somanchi, Edward McFowland III and Daniel B. Neill
Regression discontinuity (RD) designs are widely used to estimate causal effects in the absence of a randomized experiment. However, standard approaches to RD analysis face two significant limitations. First, they require a priori knowledge of discontinuities in...
Jakubowski, Benjamin, Sriram Somanchi, Edward McFowland III, and Daniel B. Neill. "Exploiting Discovered Regression Discontinuities to Debias Conditioned-on-observable Estimators." Journal of Machine Learning Research 24, no. 133 (2023): 1–57.
- May 2022
- Supplement
Borusan CAT: Monetizing Prediction in the Age of AI (B)
By: Navid Mojir and Gamze Yucaoglu
Borusan Cat is an international distributor of Caterpillar heavy machines. In 2021, it had been three years since Ozgur Gunaydin (CEO) and Esra Durgun (Director of Strategy, Digitization, and Innovation) started working on Muneccim, the company’s predictive AI tool....
Keywords: AI and Machine Learning; Commercialization; Technology Adoption; Industrial Products Industry; Turkey; Middle East
Mojir, Navid, and Gamze Yucaoglu. "Borusan CAT: Monetizing Prediction in the Age of AI (B)." Harvard Business School Supplement 522-045, May 2022.
- 11 Oct 2024
- Research & Ideas
How AI Could Ease the Refugee Crisis and Bring New Talent to Businesses
...says. “What we’re asking is, can we build algorithms that will help find better matches that will allow people to integrate more easily?” The paper presents data from Switzerland and the United States that showed promise in using machine...
- September–October 2024
- Article
The Crowdless Future? Generative AI and Creative Problem-Solving
The rapid advances in generative artificial intelligence (AI) open up attractive opportunities for creative problem-solving through human-guided AI partnerships. To explore this potential, we initiated a crowdsourcing challenge focused on sustainable, circular economy...
Keywords: Large Language Models; Generative AI; Crowdsourcing; AI and Machine Learning; Creativity; Technological Innovation
Boussioux, Léonard, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, and Karim R. Lakhani. "The Crowdless Future? Generative AI and Creative Problem-Solving." Organization Science 35, no. 5 (September–October 2024): 1589–1607.
- December 2024 (Revised January 2025)
- Technical Note
A Guide to the Vocabulary, Evolution, and Impact of Artificial Intelligence (AI)
By: Shane Greenstein, Nathaniel Lovin, Scott Wallsten, Kerry Herman and Susan Pinckney
A note on the vocabulary, evolution, and impact of AI.
Keywords: Artificial Intelligence; Software; AI and Machine Learning; Technology Adoption; Technological Innovation; Technology Industry
Greenstein, Shane, Nathaniel Lovin, Scott Wallsten, Kerry Herman, and Susan Pinckney. "A Guide to the Vocabulary, Evolution, and Impact of Artificial Intelligence (AI)." Harvard Business School Technical Note 625-039, December 2024. (Revised January 2025.)
- March 2024
- Teaching Note
'Storrowed': A Generative AI Exercise
By: Mitchell Weiss
Teaching Note for HBS Exercise No. 824-188. “Storrowed” is an exercise to help participants raise their proficiency with generative AI. It begins by highlighting a problem: trucks getting wedged underneath bridges in Boston, Massachusetts on the city’s Storrow Drive....
- 27 Jun 2024
- Research & Ideas
Gen AI Marketing: How Some 'Gibberish' Code Can Give Products an Edge
...their products listed on top, is that a good thing or a bad thing? It just depends on which side you’re looking from,” says Lakkaraju. The coffee machine experiment: The study involves a hypothetical search for an “affordable” new coffee...
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 8 Sep 2023
- Conference Presentation
Chatbots and Mental Health: Insights into the Safety of Generative AI
By: Julian De Freitas, K. Uguralp, Z. Uguralp and Stefano Puntoni
De Freitas, Julian, K. Uguralp, Z. Uguralp, and Stefano Puntoni. "Chatbots and Mental Health: Insights into the Safety of Generative AI." Paper presented at the Business & Generative AI Workshop, Wharton School, AI at Wharton, San Francisco, CA, United States, September 8, 2023.
- Article
Why Boards Aren't Dealing with Cyberthreats
By: J. Yo-Jud Cheng and Boris Groysberg
Cheng, J. Yo-Jud, and Boris Groysberg. "Why Boards Aren't Dealing with Cyberthreats." Harvard Business Review (website) (February 22, 2017). (Excerpt featured in the Harvard Business Review, May–June 2017 "Idea Watch" section.)
- October 14, 2023
- Article
Will Consumers Buy Selfish Self-Driving Cars?
De Freitas, Julian. "Will Consumers Buy Selfish Self-Driving Cars?" Wall Street Journal (October 14, 2023), C5.
- January–February 2025
- Article
Why People Resist Embracing AI
The success of AI depends not only on its capabilities, which are becoming more advanced each day, but on people’s willingness to harness them. Unfortunately, many people view AI negatively, fearing it will cause job losses, increase the likelihood that their personal...
De Freitas, Julian. "Why People Resist Embracing AI." Harvard Business Review 103, no. 1 (January–February 2025): 52–56.