- 2024
- Conference Paper
Quantifying Uncertainty in Natural Language Explanations of Large Language Models
By: Himabindu Lakkaraju, Sree Harsha Tanneru and Chirag Agarwal
Large Language Models (LLMs) are increasingly used as powerful tools for several high-stakes natural language processing (NLP) applications. Recent prompting works claim to elicit intermediate reasoning steps and key tokens that serve as proxy explanations for LLM...
Lakkaraju, Himabindu, Sree Harsha Tanneru, and Chirag Agarwal. "Quantifying Uncertainty in Natural Language Explanations of Large Language Models." Paper presented at the Society for Artificial Intelligence and Statistics, 2024.
- 2023
- Article
Benchmarking Large Language Models on CMExam—A Comprehensive Chinese Medical Exam Dataset
By: Junling Liu, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu and Michael Lingzhi Li
Recent advancements in large language models (LLMs) have transformed the field of question answering (QA). However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam,...
Keywords: Large Language Model; AI and Machine Learning; Analytics and Data Science; Health Industry
Liu, Junling, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu, and Michael Lingzhi Li. "Benchmarking Large Language Models on CMExam—A Comprehensive Chinese Medical Exam Dataset." Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track 36 (2023).
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
- 2022
- Working Paper
TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
By: Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju and Sameer Singh
Practitioners increasingly use machine learning (ML) models, yet these models have become more complex and harder to understand. To address this issue, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use explainability...
Slack, Dylan, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. "TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations." Working Paper, 2022.
- 2023
- Article
Post Hoc Explanations of Language Models Can Improve Language Models
By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing complex tasks. Moreover, recent research has shown that incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) during in-context learning can significantly enhance...
Krishna, Satyapriya, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. "Post Hoc Explanations of Language Models Can Improve Language Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- May 2022
- Case
Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models
By: Tsedal Neeley and Stefani Ruper
Dr. Timnit Gebru—a leading artificial intelligence (AI) computer scientist and co-lead of Google’s Ethical AI team—was messaging with one of her colleagues when she saw the words: “Did you resign?? Megan sent an email saying that she accepted your resignation.” Heart...
Neeley, Tsedal, and Stefani Ruper. "Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models." Harvard Business School Case 422-085, May 2022.
- 2024
- Working Paper
Using LLMs for Market Research
By: James Brand, Ayelet Israeli and Donald Ngwe
Large language models (LLMs) have rapidly gained popularity as labor-augmenting tools for programming, writing, and many other processes that benefit from quick text generation. In this paper we explore the uses and benefits of LLMs for researchers and practitioners...
Keywords: Large Language Model; Research; AI and Machine Learning; Analysis; Customers; Consumer Behavior; Technology Industry; Information Technology Industry
Brand, James, Ayelet Israeli, and Donald Ngwe. "Using LLMs for Market Research." Harvard Business School Working Paper, No. 23-062, April 2023. (Revised July 2024.)
- October 2023
- Case
Fixie and Conversational AI Sidekicks
By: Jeffrey J. Bussgang and Carin-Isabel Knoop
In March 2023, Fixie Co-Founder and Chief Architect Matt Welsh and his co-founders had the kind of meeting no founders want to have. The president of leading artificial intelligence (AI) research and deployment firm OpenAI, which had catapulted to fame with its ChatGPT...
Keywords: Large Language Model; Entrepreneurship; Decision Choices and Conditions; AI and Machine Learning; Technological Innovation; Competitive Strategy; Technology Industry; United States
Bussgang, Jeffrey J., and Carin-Isabel Knoop. "Fixie and Conversational AI Sidekicks." Harvard Business School Case 824-037, October 2023.
- October 2023
- Teaching Note
Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models
By: Tsedal Neeley and Tim Englehart
Teaching Note for HBS Case No. 422-085. Dr. Timnit Gebru—a leading artificial intelligence (AI) computer scientist and co-lead of Google’s Ethical AI team—was messaging with one of her colleagues when she saw the words: “Did you resign?? Megan sent an email saying that...
- 2023
- Working Paper
Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality
By: Fabrizio Dell'Acqua, Edward McFowland III, Ethan Mollick, Hila Lifshitz-Assaf, Katherine C. Kellogg, Saran Rajendran, Lisa Krayer, François Candelon and Karim R. Lakhani
The public release of Large Language Models (LLMs) has sparked tremendous interest in how humans will use Artificial Intelligence (AI) to accomplish a variety of tasks. In our study conducted with Boston Consulting Group, a global management consulting firm, we examine...
Keywords: Large Language Model; AI and Machine Learning; Performance Efficiency; Performance Improvement
Dell'Acqua, Fabrizio, Edward McFowland III, Ethan Mollick, Hila Lifshitz-Assaf, Katherine C. Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, and Karim R. Lakhani. "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality." Harvard Business School Working Paper, No. 24-013, September 2023.
- 2024
- Working Paper
The Financial Anatomy of Climate Solutions: A Large Language Model Approach to Company Classification and Analysis
By: Shirley Lu and George Serafeim
Leveraging advancements in large language models (LLMs), we study the financial characteristics of firms offering climate solutions: products and services aimed at fostering a transition to a low-carbon economy. We use a new measure that applies LLMs to 10-K Item 1...
Keywords: Climate; Climate Change; Climate Finance; Innovation; Technology; Financial Statement Analysis; Sustainability; Environment
Lu, Shirley, and George Serafeim. "The Financial Anatomy of Climate Solutions: A Large Language Model Approach to Company Classification and Analysis." Harvard Business School Working Paper, No. 25-026, August 2024.
- January 2020
- Case
Ureed.com: The Marketplace for Language
By: Ashley V. Whillans, Esel Çekin and Alpana Thapar
Jordanian entrepreneur Nour Al Hassan founded Tarjama in 2008, tapping into an underserved, high-demand need: Arabic translation services. Its lean model consisted of hiring full-time employees, mainly women, who worked from home. It steadily grew over the...
Keywords: Language Translation; Freelancers; Entrepreneurship; Human Resources; Management; Expansion; Quality; Growth and Development Strategy
Whillans, Ashley V., Esel Çekin, and Alpana Thapar. "Ureed.com: The Marketplace for Language." Harvard Business School Case 920-038, January 2020.
- 2024
- Working Paper
The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations
By: Jacqueline N. Lane, Léonard Boussioux, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh and Pei-Hsin Wang
The rise of generative artificial intelligence (AI) is transforming creative problem-solving, necessitating new approaches for evaluating innovative solutions. This study explores how human-AI collaboration can enhance early-stage evaluations, focusing on the interplay...
Lane, Jacqueline N., Léonard Boussioux, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh, and Pei-Hsin Wang. "The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations." Harvard Business School Working Paper, No. 25-001, August 2024. (Revised August 2024.)
- November 1998
- Article
Modeling Large Data Sets in Marketing
By: Sridhar Balasubramanian, Sunil Gupta, Wagner Kamakura and Michel Wedel
Balasubramanian, Sridhar, Sunil Gupta, Wagner Kamakura, and Michel Wedel. "Modeling Large Data Sets in Marketing." Special Issue on Large Data Sets in Business Economics. Statistica Neerlandica 52, no. 3 (November 1998).
- July 2024
- Article
How Artificial Intelligence Constrains Human Experience
By: A. Valenzuela, S. Puntoni, D. Hoffman, N. Castelo, J. De Freitas, B. Dietvorst, C. Hildebrand, Y.E. Huh, R. Meyer, M. Sweeney, S. Talaifar, G. Tomaino and K. Wertenbroch
Many consumption decisions and experiences are digitally mediated. As a consequence, consumer behavior is increasingly the joint product of human psychology and ubiquitous algorithms (Braun et al. 2024; cf. Melumad et al. 2020). The coming of age of Large Language...
Keywords: Large Language Model; User Experience; AI and Machine Learning; Consumer Behavior; Technology Adoption; Risk and Uncertainty; Cost vs Benefits
Valenzuela, A., S. Puntoni, D. Hoffman, N. Castelo, J. De Freitas, B. Dietvorst, C. Hildebrand, Y.E. Huh, R. Meyer, M. Sweeney, S. Talaifar, G. Tomaino, and K. Wertenbroch. "How Artificial Intelligence Constrains Human Experience." Journal of the Association for Consumer Research 9, no. 3 (July 2024): 241–256.
- August 2023
- Article
Explaining Machine Learning Models with Interactive Natural Language Conversations Using TalkToModel
By: Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju and Sameer Singh
Practitioners increasingly use machine learning (ML) models, yet these models have become more complex and harder to understand. To understand complex models, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use...
Slack, Dylan, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. "Explaining Machine Learning Models with Interactive Natural Language Conversations Using TalkToModel." Nature Machine Intelligence 5, no. 8 (August 2023): 873–883.
- 2023
- Working Paper
In-Context Unlearning: Language Models as Few Shot Unlearners
By: Martin Pawelczyk, Seth Neel and Himabindu Lakkaraju
Machine unlearning, the study of efficiently removing the impact of specific training points on the trained model, has garnered increased attention of late, driven by the need to comply with privacy regulations like the Right to be Forgotten. Although unlearning is...
Pawelczyk, Martin, Seth Neel, and Himabindu Lakkaraju. "In-Context Unlearning: Language Models as Few Shot Unlearners." Working Paper, October 2023.
- December 2019
- Article
The Ethical Perils of Personal, Communal Relations: A Language Perspective
By: Maryam Kouchaki, Francesca Gino and Yuval Feldman
The current paper focuses on how the type of relationship that exists between a group and its members influences misconduct by fostering certain perceptions of the group. Using multiple methods, lab- and field-based experiments (N = 1,679), and a large dataset of S&P...
Kouchaki, Maryam, Francesca Gino, and Yuval Feldman. "The Ethical Perils of Personal, Communal Relations: A Language Perspective." Psychological Science 30, no. 12 (December 2019): 1745–1766.
- November–December 2019
- Article
Head, Heart or Hands: How Do Employees Respond to a Radical Global Language Change Over Time?
By: Sebastian Reiche and Tsedal Neeley
To understand how recipients respond to radical change over time across cognitive, affective, and behavioral dimensions, we conducted a longitudinal study of a mandated language change at a Chilean subsidiary of a large U.S. multinational organization. The...
Keywords: Language; Communication; Change; Employees; Attitudes; Emotions; Globalized Firms and Management
Reiche, Sebastian, and Tsedal Neeley. "Head, Heart or Hands: How Do Employees Respond to a Radical Global Language Change Over Time?" Organization Science 30, no. 6 (November–December 2019): 1252–1269.
- 2006
- Article
Cyclical Wages in a Search-and-Bargaining Model with Large Firms
By: Julio J. Rotemberg
Rotemberg, Julio J. "Cyclical Wages in a Search-and-Bargaining Model with Large Firms." NBER International Seminar on Macroeconomics (2006): 65–114.