- March 2025
- Case
Mobvoi’s Path Through Market Challenges and Business Reinvention
By: Paul A. Gompers and Shu Lin
Founded in 2012, Mobvoi evolved through multiple transformations—from AI-driven voice technology to smart wearables and later AI-generated content. Backed by major investors, the company navigated shifts in strategy while facing two failed IPO attempts. As market...
- July–August 2021
- Article
Why You Aren't Getting More from Your Marketing AI
By: Eva Ascarza, Michael Ross and Bruce G.S. Hardie
Fewer than 40% of companies that invest in AI see gains from it, usually because of one or more of these errors: (1) They don’t ask the right question, and end up directing AI to solve the wrong problem. (2) They don’t recognize the differences between the value of...
Keywords: Artificial Intelligence; Marketing; Decision Making; Communication; Framework; AI and Machine Learning
Ascarza, Eva, Michael Ross, and Bruce G.S. Hardie. "Why You Aren't Getting More from Your Marketing AI." Harvard Business Review 99, no. 4 (July–August 2021): 48–54.
- 2023
- Article
Benchmarking Large Language Models on CMExam—A Comprehensive Chinese Medical Exam Dataset
By: Junling Liu, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu and Michael Lingzhi Li
Recent advancements in large language models (LLMs) have transformed the field of question answering (QA). However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam,...
Keywords: Large Language Model; AI and Machine Learning; Analytics and Data Science; Health Industry
Liu, Junling, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu, and Michael Lingzhi Li. "Benchmarking Large Language Models on CMExam—A Comprehensive Chinese Medical Exam Dataset." Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track 36 (2023).
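The entry above only names the benchmark; as a rough illustration of the multiple-choice accuracy scoring such a dataset implies (field names and the answer function here are hypothetical, not CMExam's actual schema or protocol), a minimal sketch might look like:

```python
# Minimal sketch of multiple-choice exam scoring for an LLM.
# The dataset fields below are hypothetical, not CMExam's actual schema.
from typing import Callable, Dict, List

def score_exam(questions: List[Dict], answer_fn: Callable[[str, Dict[str, str]], str]) -> float:
    """Return accuracy of `answer_fn` over multiple-choice questions.

    Each question dict is assumed to hold a stem, an options map
    (letter -> text), and the gold answer letter.
    """
    correct = 0
    for q in questions:
        predicted = answer_fn(q["stem"], q["options"])  # e.g., an LLM call returning "A".."E"
        correct += predicted.strip().upper() == q["answer"]
    return correct / len(questions)

# Toy usage with a trivial baseline that always answers "A".
sample = [
    {"stem": "Which vitamin deficiency causes scurvy?",
     "options": {"A": "Vitamin C", "B": "Vitamin D"}, "answer": "A"},
]
print(score_exam(sample, lambda stem, options: "A"))  # 1.0 on this toy item
```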
- October 14, 2023
- Article
Will Consumers Buy Selfish Self-Driving Cars?
De Freitas, Julian. "Will Consumers Buy Selfish Self-Driving Cars?" Wall Street Journal (October 14, 2023), C5.
- May 9, 2023
- Article
8 Questions About Using AI Responsibly, Answered
By: Tsedal Neeley
Generative AI tools are poised to change the way every business operates. As your own organization begins strategizing which to use, and how, operational and ethical considerations are inevitable. This article delves into eight of them, including how your organization...
Neeley, Tsedal. "8 Questions About Using AI Responsibly, Answered." Harvard Business Review (website) (May 9, 2023).
- 2025
- Article
Humor as a Window into Generative AI Bias
By: Roger Saumure, Julian De Freitas and Stefano Puntoni
A preregistered audit of 600 generative-AI images across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier”, the prevalence of stereotyped groups changes. While...
Saumure, Roger, Julian De Freitas, and Stefano Puntoni. "Humor as a Window into Generative AI Bias." Art. 1326. Scientific Reports 15 (2025).
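As a toy illustration of the before/after prevalence comparison the abstract describes (the group labels and annotations below are invented, not the study's data):

```python
# Toy prevalence comparison for an image audit, in the spirit of the abstract.
# Group labels and annotation lists are invented for illustration only.
from collections import Counter

baseline_annotations = ["group_a", "group_b", "group_b", "group_c"]
funnier_annotations  = ["group_b", "group_b", "group_b", "group_c"]

def prevalence(labels):
    """Map each group label to its share of the annotated images."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {g: counts[g] / total for g in counts}

before, after = prevalence(baseline_annotations), prevalence(funnier_annotations)
for group in sorted(set(before) | set(after)):
    print(group, before.get(group, 0.0), "->", after.get(group, 0.0))
```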
- 2025
- Working Paper
The Impact of Input Inaccuracy on Leveraging AI Tools: Evidence from Algorithmic Labor Scheduling
By: Caleb Kwon, Antonio Moreno and Ananth Raman
Problem Definition: Considerable academic and practitioner attention is placed on the value of ex-post interactions (i.e., overrides) in the human-AI interface. In contrast, relatively little attention has been paid to ex-ante human-AI interactions (e.g., the...
Kwon, Caleb, Antonio Moreno, and Ananth Raman. "The Impact of Input Inaccuracy on Leveraging AI Tools: Evidence from Algorithmic Labor Scheduling." Working Paper, January 2025.
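A toy simulation, with invented demand and cost parameters rather than anything from the paper, can show why ex-ante input error matters for an algorithmic schedule:

```python
# Toy simulation: how inaccurate demand inputs degrade an algorithmic schedule.
# The demand model, cost weights, and noise levels are invented for illustration.
import random

UNDERSTAFF_COST, OVERSTAFF_COST = 3.0, 1.0  # assumed asymmetric penalties

def staffing_cost(scheduled: int, actual: int) -> float:
    gap = actual - scheduled
    return UNDERSTAFF_COST * max(gap, 0) + OVERSTAFF_COST * max(-gap, 0)

def simulate(noise: float, trials: int = 10_000) -> float:
    random.seed(0)
    total = 0.0
    for _ in range(trials):
        actual = random.randint(5, 15)                     # true hourly demand
        forecast = actual + round(random.gauss(0, noise))  # ex-ante input error
        total += staffing_cost(max(forecast, 0), actual)   # schedule = forecast
    return total / trials

for noise in (0.0, 1.0, 3.0):
    print(f"input noise sd={noise}: avg cost {simulate(noise):.2f}")
```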
- June 20, 2023
- Article
Cautious Adoption of AI Can Create Positive Company Culture
By: Joseph Pacelli and Jonas Heese
Pacelli, Joseph, and Jonas Heese. "Cautious Adoption of AI Can Create Positive Company Culture." CMR Insights (June 20, 2023).
- September 2024
- Background Note
Copyright and Fair Use
By: David B. Yoffie
The U.S. Copyright Office defines a copyright as “a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression.” Two core principles of copyright are originality and fixation. A work is...
Yoffie, David B. "Copyright and Fair Use." Harvard Business School Background Note 725-394, September 2024.
- August 25, 2022
- Article
Find the Right Pace for Your AI Rollout
By: Rebecca Karp and Aticus Peterson
Implementing AI can introduce disruptive change and disenfranchise staff and employees. When employees are reluctant to adopt a new technology, they might hesitate to use it, push back against its deployment, or use it in a limited capacity — which affects the benefits an...
Karp, Rebecca, and Aticus Peterson. "Find the Right Pace for Your AI Rollout." Harvard Business Review Digital Articles (August 25, 2022).
- Article
Fake AI People Won't Fix Online Dating
Computer-generated images may inspire even more distrust and surely won’t lead to the love of a lifetime.
Keywords: Artificial Intelligence; Dating Services; Internet and the Web; Ethics; AI and Machine Learning
Kominers, Scott Duke. "Fake AI People Won't Fix Online Dating." Bloomberg Opinion (January 16, 2020).
- 11 Oct 2024
- Research & Ideas
How AI Could Ease the Refugee Crisis and Bring New Talent to Businesses
“What we’re asking is, can we build algorithms that will help find better matches that will allow people to integrate more easily?” The paper presents data from Switzerland and the United States that showed promise in using machine...
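One standard way to formalize the matching question in that quote is as an assignment problem over predicted integration outcomes. A minimal sketch with an invented score matrix, using SciPy's Hungarian-algorithm solver (not the authors' actual method):

```python
# Assignment-problem sketch for matching people to placement locations.
# The score matrix is invented; rows are people, columns are locations,
# and entries are predicted integration success (higher is better).
import numpy as np
from scipy.optimize import linear_sum_assignment

scores = np.array([
    [0.8, 0.4, 0.1],
    [0.3, 0.9, 0.5],
    [0.6, 0.2, 0.7],
])

# linear_sum_assignment minimizes cost, so negate to maximize total score.
rows, cols = linear_sum_assignment(-scores)
for person, location in zip(rows, cols):
    print(f"person {person} -> location {location} (score {scores[person, location]:.1f})")
print("total predicted success:", scores[rows, cols].sum())
```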
- 2023
- Working Paper
Black-box Training Data Identification in GANs via Detector Networks
By: Lukman Olagoke, Salil Vadhan and Seth Neel
Since their inception Generative Adversarial Networks (GANs) have been popular generative models across images, audio, video, and tabular data. In this paper we study, given access to a trained GAN as well as fresh samples from the underlying distribution, if...
Olagoke, Lukman, Salil Vadhan, and Seth Neel. "Black-box Training Data Identification in GANs via Detector Networks." Working Paper, October 2023.
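A compressed PyTorch sketch of the detector-network idea named in the title, on toy 2-D data rather than the paper's models, datasets, or decision rule:

```python
# Sketch: train a detector to separate GAN samples from fresh real samples,
# then use its confidence to probe a candidate point's membership status.
# Data, architecture, and training setup are toys, not the paper's configuration.
import torch
import torch.nn as nn

torch.manual_seed(0)
real = torch.randn(512, 2) + 1.0          # stand-in for fresh samples
fake = torch.randn(512, 2) - 1.0          # stand-in for GAN outputs
x = torch.cat([real, fake])
y = torch.cat([torch.ones(512), torch.zeros(512)])

detector = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(detector.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(detector(x).squeeze(1), y)
    loss.backward()
    opt.step()

candidate = torch.tensor([[1.2, 0.8]])    # point whose membership we probe
p_real = torch.sigmoid(detector(candidate)).item()
print(f"detector's P(real) for candidate: {p_real:.2f}")
```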
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
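A minimal sketch of the perturbation idea the abstract describes: add Gaussian noise to the model's weights and measure how much the candidate text's loss degrades. The model name, noise scale, and number of draws below are placeholders; the paper's actual procedure and hyperparameters may differ.

```python
# Sketch of a model-perturbation membership probe: perturb weights with
# Gaussian noise and observe the loss increase on a candidate text.
# Model choice, noise scale, and decision rule are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def loss_of(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

text = "Some candidate passage whose membership we want to test."
base = loss_of(text)

deltas = []
for _ in range(5):                                  # a few noise draws
    noise = [torch.randn_like(p) * 5e-3 for p in model.parameters()]
    with torch.no_grad():
        for p, n in zip(model.parameters(), noise):
            p.add_(n)
    deltas.append(loss_of(text) - base)
    with torch.no_grad():                           # undo the perturbation
        for p, n in zip(model.parameters(), noise):
            p.sub_(n)

print("mean loss increase under perturbation:", sum(deltas) / len(deltas))
# Larger increases would be read as evidence of training-set membership.
```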
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
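Since PAGs concern a classifier's input-gradients, a minimal sketch of computing one such gradient (a random image and untrained weights as stand-ins, not the paper's robust models):

```python
# Sketch: compute a model's input-gradient, the object PAG research studies.
# A random image stands in for real data; any torchvision classifier works.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()        # untrained stand-in model
image = torch.randn(1, 3, 224, 224, requires_grad=True)

logits = model(image)
target_logit = logits[0, logits.argmax()]    # gradient of the top class's logit
target_logit.backward()

grad = image.grad                            # shape (1, 3, 224, 224)
print("input-gradient norm:", grad.norm().item())
# Per the abstract, robust models tend to produce gradients that look
# perceptually meaningful; standard models tend to produce noise-like ones.
```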
- 8 Sep 2023
- Conference Presentation
Chatbots and Mental Health: Insights into the Safety of Generative AI
By: Julian De Freitas, K. Uguralp, Z. Uguralp and Stefano Puntoni
De Freitas, Julian, K. Uguralp, Z. Uguralp, and Stefano Puntoni. "Chatbots and Mental Health: Insights into the Safety of Generative AI." Paper presented at the Business & Generative AI Workshop, Wharton School, AI at Wharton, San Francisco, CA, United States, September 8, 2023.
- Article
Why Boards Aren't Dealing with Cyberthreats
By: J. Yo-Jud Cheng and Boris Groysberg
Cheng, J. Yo-Jud, and Boris Groysberg. "Why Boards Aren't Dealing with Cyberthreats." Harvard Business Review (website) (February 22, 2017). (Excerpt featured in the Harvard Business Review. May–June 2017 "Idea Watch" section.)
- January 2024
- Case
The Financial Times (FT) and Generative AI
By: Andrew Rashbass, Ramon Casadesus-Masanell and Jordan Mitchell
In September 2023, John Ridding, CEO of the Financial Times, was considering the possible impact of Generative AI on the industry and his business. Having successfully navigated the seismic shift from print to digital, and reporting record results, the company...
Keywords: AI and Machine Learning; Technology Adoption; Change Management; Journalism and News Industry
Rashbass, Andrew, Ramon Casadesus-Masanell, and Jordan Mitchell. "The Financial Times (FT) and Generative AI." Harvard Business School Case 724-410, January 2024.
- January 2024 (Revised February 2024)
- Case
OpenAI: Idealism Meets Capitalism
By: Shikhar Ghosh and Shweta Bagai
In November 2023, the board of OpenAI, one of the most successful companies in the history of technology, decided to fire Sam Altman, its charismatic and influential CEO. Their decision shocked the corporate world and had people wondering why OpenAI had designed a...
Keywords: AI; AI and Machine Learning; Governing and Advisory Boards; Ethics; Strategy; Technological Innovation; Leadership
Ghosh, Shikhar, and Shweta Bagai. "OpenAI: Idealism Meets Capitalism." Harvard Business School Case 824-134, January 2024. (Revised February 2024.)
- 2025
- Article
Ideation with Generative AI—In Consumer Research and Beyond
By: Julian De Freitas, G. Nave and Stefano Puntoni
The use of large language models (LLMs) in consumer research is rapidly evolving, with applications including synthetic data generation, data analysis, and more. However, their role in creative ideation—a cornerstone of consumer research—remains underexplored. Drawing...
De Freitas, Julian, G. Nave, and Stefano Puntoni. "Ideation with Generative AI—In Consumer Research and Beyond." Journal of Consumer Research (2025).