Edward McFowland III
Assistant Professor of Business Administration
Edward McFowland III is an Assistant Professor in the Technology and Operations Management Unit at Harvard Business School. He teaches the first-year TOM course in the required curriculum.
Professor McFowland’s research interests, which lie at the intersection of Machine Learning, Information Systems, and Management, include the development of computationally efficient algorithms for large-scale statistical machine learning and “big data” analytics. As a data and computational social scientist, Professor McFowland aims to bridge the gap between machine learning and the social sciences (e.g., economics, public policy, and management). His work has been published in leading management, machine learning, and statistics journals, and has been supported by Adobe, Facebook, PNC Bank, AT&T Research Labs, and the National Science Foundation.
Professor McFowland earned his Ph.D. in Information Systems and Management from Carnegie Mellon University. He also holds master's degrees in Machine Learning, in Public Policy, and in Information Systems from Carnegie Mellon University. Prior to joining HBS, Professor McFowland taught at the University of Minnesota Carlson School of Management.
- Featured Work
The public release of Large Language Models (LLMs) has sparked tremendous interest in how humans will use Artificial Intelligence (AI) to accomplish a variety of tasks. In our study conducted with Boston Consulting Group, a global management consulting firm, we examine the performance implications of AI on realistic, complex, and knowledge-intensive tasks. The pre-registered experiment involved 758 consultants comprising about 7% of the individual contributor-level consultants at the company. After establishing a performance baseline on a similar task, subjects were randomly assigned to one of three conditions: no AI access, GPT-4 AI access, or GPT-4 AI access with a prompt engineering overview. We suggest that the capabilities of AI create a “jagged technological frontier” where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI. For each one of a set of 18 realistic consulting tasks within the frontier of AI capabilities, consultants using AI were significantly more productive (they completed 12.2% more tasks on average, and completed tasks 25.1% more quickly), and produced significantly higher quality results (more than 40% higher quality compared to a control group). Consultants across the skills distribution benefited significantly from having AI augmentation, with those below the average performance threshold increasing by 43% and those above increasing by 17% compared to their own scores. For a task selected to be outside the frontier, however, consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI. Further, our analysis shows the emergence of two distinctive patterns of successful AI use by humans along a spectrum of human-AI integration. One set of consultants acted as “Centaurs,” like the mythical half-horse/half-human creature, dividing and delegating their solution-creation activities to the AI or to themselves. Another set of consultants acted more like “Cyborgs,” completely integrating their task flow with the AI and continually interacting with the technology.

Social influence cannot be identified from purely observational data on social networks, because such influence is generically confounded with latent homophily, that is, with a node’s network partners being informative about the node’s attributes and therefore its behavior. If the network grows according to either a latent community (stochastic block) model or a continuous latent space model, then latent homophilous attributes can be consistently estimated from the global pattern of social ties. We show that, for common versions of those two network models, these estimates are so informative that controlling for estimated attributes allows for asymptotically unbiased and consistent estimation of social-influence effects in linear models. In particular, the bias shrinks at a rate that directly reflects how much information the network provides about the latent attributes. These are the first results on the consistent nonexperimental estimation of social-influence effects in the presence of latent homophily, and we discuss the prospects for generalizing them.

Combining machine learning with econometric analysis is becoming increasingly prevalent in both research and practice. A common empirical strategy involves the application of predictive modeling techniques to "mine" variables of interest from available data, followed by the inclusion of those variables into an econometric framework, with the objective of estimating causal effects. Recent work highlights that, because the predictions from machine learning models are inevitably imperfect, econometric analyses based on the predicted variables are likely to suffer from bias due to measurement error. We propose a novel approach to mitigate these biases, leveraging the ensemble learning technique known as the random forest. We propose employing the random forest not just for prediction, but also for generating instrumental variables to address the measurement error embedded in the prediction. The random forest algorithm performs best when comprised of a set of trees that are individually accurate in their predictions, yet which also make "different" mistakes, i.e., have weakly correlated prediction errors. A key observation is that these properties are closely related to the relevance and exclusion requirements of valid instrumental variables. We design a data-driven procedure to select tuples of individual trees from a random forest, in which one tree serves as the endogenous covariate and the other trees serve as its instruments. Simulation experiments demonstrate the efficacy of the proposed approach in mitigating estimation biases, and its superior performance over an alternative method (simulation-extrapolation), which has been suggested by prior work as a reasonable method of addressing the measurement error problem.

We define a prescriptive analytics framework that addresses the needs of a constrained decision-maker facing, ex ante, unknown costs and benefits of multiple policy levers. The framework is general in nature and can be deployed in any utility-maximizing context, public or private. It relies on randomized field experiments for causal inference, machine learning for estimating heterogeneous treatment effects, and the optimization of an integer linear program for converting predictions into decisions. The net result is the discovery of individual-level targeting of policy interventions to maximize overall utility under a budget constraint. The framework is set in the context of the four pillars of analytics and is especially valuable for companies that already have an existing practice of running A/B tests. The key contribution in this work is to develop and operationalize a framework to exploit both within- and between-treatment-arm heterogeneity in the utility response function, in order to derive benefits from future (optimized) prescriptions. We demonstrate the value of this framework as compared to benchmark practices (i.e., the use of the average treatment effect, uplift modeling, as well as an extension to contextual bandits) in two different settings. Unlike these standard approaches, our framework is able to recognize, adapt to, and exploit the (potential) presence of different subpopulations that experience varying costs and benefits within a treatment arm, while also exhibiting differential costs and benefits across treatment arms. As a result, we find a targeting strategy that produces an order-of-magnitude improvement in expected total utility, for the case where significant within- and between-treatment-arm heterogeneity exists.
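As a rough illustration of the last framework's final step, the sketch below converts per-individual, per-arm utility and cost estimates into a budget-constrained targeting decision with an integer linear program. It is a minimal sketch, not the paper's implementation: the random inputs stand in for heterogeneous-treatment-effect estimates learned from A/B-test data, and the problem size and PuLP solver are illustrative assumptions.

```python
import numpy as np
import pulp

rng = np.random.default_rng(0)
n_people, n_arms = 200, 3                              # hypothetical problem size
utility = rng.normal(size=(n_people, n_arms))          # stand-in for estimated benefit of arm k for person i
cost = rng.uniform(0.5, 2.0, size=(n_people, n_arms))  # stand-in for estimated cost of arm k for person i
cost[:, 0] = 0.0                                       # arm 0 = control: no intervention, no cost
budget = 50.0                                          # total spend allowed across all assignments

prob = pulp.LpProblem("policy_targeting", pulp.LpMaximize)
x = {(i, k): pulp.LpVariable(f"x_{i}_{k}", cat="Binary")
     for i in range(n_people) for k in range(n_arms)}

# Objective: maximize total expected utility of the chosen assignments.
prob += pulp.lpSum(float(utility[i, k]) * x[i, k]
                   for i in range(n_people) for k in range(n_arms))

# Each individual is assigned to exactly one arm.
for i in range(n_people):
    prob += pulp.lpSum(x[i, k] for k in range(n_arms)) == 1

# Total cost of the chosen assignments cannot exceed the budget.
prob += pulp.lpSum(float(cost[i, k]) * x[i, k]
                   for i in range(n_people) for k in range(n_arms)) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
assignment = [next(k for k in range(n_arms) if x[i, k].value() > 0.5)
              for i in range(n_people)]
print("people assigned to each arm:", np.bincount(assignment, minlength=n_arms))
```

The binary variable x[i, k] encodes "assign individual i to arm k"; the one-arm-per-person constraints together with the single budget constraint make the targeting a global optimization rather than a set of independent per-person choices.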
- Journal Articles
- Cao, Rui, Evan Olawsky, Edward McFowland III, Erin Marcotte, Logan Spector, and Tianzhong Yang. "Subset Scanning for Multi-Trait Analysis Using GWAS Summary Statistics." Bioinformatics 40, no. 1 (January 2024).
- Jakubowski, Benjamin, Sriram Somanchi, Edward McFowland III, and Daniel B. Neill. "Exploiting Discovered Regression Discontinuities to Debias Conditioned-on-observable Estimators." Journal of Machine Learning Research 24, no. 133 (2023): 1–57.
- McFowland III, Edward, and Cosma Rohilla Shalizi. "Estimating Causal Peer Influence in Homophilous Social Networks by Inferring Latent Locations." Journal of the American Statistical Association 118, no. 541 (2023): 707–718.
- Bapna, Ravi, Edward McFowland III, Probal Mojumder, Jui Ramaprasad, and Akhmed Umyarov. "So, Who Likes You? Evidence from a Randomized Field Experiment." Management Science 69, no. 7 (July 2023): 3939–3957.
- Yang, Mochen, Edward McFowland III, Gordon Burtch, and Gediminas Adomavicius. "Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem." INFORMS Journal on Data Science 1, no. 2 (October–December 2022): 138–155.
- Ravishankar, Pavan, Qingyu Mo, Edward McFowland III, and Daniel B. Neill. "Provable Detection of Propagating Sampling Bias in Prediction Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (2023): 9562–9569. (Presented at the 37th AAAI Conference on Artificial Intelligence (2/7/23-2/14/23) in Washington, DC.)
- McFowland III, Edward. "Commentary on 'Causal Decision Making and Causal Effect Estimation Are Not the Same... and Why It Matters'." INFORMS Journal on Data Science 1, no. 1 (April–June 2022): 21–22.
- Doss, Charles R., and Edward McFowland III. "Nonparametric Subset Scanning for Detection of Heteroscedasticity." Journal of Computational and Graphical Statistics 31, no. 3 (2022): 813–823.
- Cintas, Celia, Skyler Speakman, Girmaw Abebe Tadesse, Victor Akinwande, Edward McFowland III, and Komminist Weldemariam. "Pattern Detection in the Activation Space for Identifying Synthesized Content." Pattern Recognition Letters 153 (January 2022): 207–213.
- McFowland III, Edward, Sandeep Gangarapu, Ravi Bapna, and Tianshu Sun. "A Prescriptive Analytics Framework for Optimal Policy Deployment Using Heterogeneous Treatment Effects." MIS Quarterly 45, no. 4 (December 2021): 1807–1832.
- Cintas, Celia, Skyler Speakman, Victor Akinwande, William Ogallo, Komminist Weldemariam, Srihari Sridharan, and Edward McFowland III. "Detecting Adversarial Attacks via Subset Scanning of Autoencoder Activations and Reconstruction Error." Proceedings of the 29th International Joint Conference on Artificial Intelligence (2020).
- Herlands, William, Edward McFowland III, Andrew Gordon Wilson, and Daniel B. Neill. "Gaussian Process Subset Scanning for Anomalous Pattern Detection in Non-iid Data." Proceedings of Machine Learning Research (PMLR) 84 (2018): 425–434. (Also presented at the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 2018.)
- Speakman, Skyler, Sriram Somanchi, Edward McFowland III, and Daniel B. Neill. "Penalized Fast Subset Scanning." Journal of Computational and Graphical Statistics 25, no. 2 (2016): 382–404. (Selected for “Best of JCGS” invited session by the journal’s editor in chief.)
- Speakman, Skyler, Edward McFowland III, and Daniel B. Neill. "Scalable Detection of Anomalous Patterns With Connectivity Constraints." Journal of Computational and Graphical Statistics 24, no. 4 (2015): 1014–1033.
- McFowland III, Edward, Skyler Speakman, and Daniel B. Neill. "Fast Generalized Subset Scan for Anomalous Pattern Detection." Art. 12. Journal of Machine Learning Research 14 (2013): 1533–1561.
- Neill, Daniel B., Edward McFowland III, and Huanian Zheng. "Fast Subset Scan for Multivariate Spatial Biosurveillance." Statistics in Medicine 32, no. 13 (June 15, 2013): 2185–2208.
- Journal Abstracts
- Neill, Daniel B., Edward McFowland III, and Huanian Zheng. "Fast Subset Scan for Multivariate Spatial Biosurveillance." Emerging Health Threats Journal 4, Suppl. 1, no. s42 (2011).
- Speakman, Skyler, Edward McFowland III, and Daniel B. Neill. "Scalable Detection of Anomalous Patterns With Connectivity Constraints." Emerging Health Threats Journal 4 (2011): 11121.
- Book Chapters
- Speakman, Skyler, Sriram Somanchi, Edward McFowland III, and Daniel B. Neill. "Disease Surveillance, Case Study." In Encyclopedia of Social Network Analysis and Mining, edited by Reda Alhajj and Jon Rokne, 380–385. New York: Springer, 2014.
- Working Papers
- Kellogg, Katherine C., Hila Lifshitz-Assaf, Steven Randazzo, Ethan Mollick, Fabrizio Dell'Acqua, Edward McFowland III, François Candelon, and Karim R. Lakhani. "Don’t Expect Juniors to Teach Senior Professionals to Use Generative AI: Emerging Technology Risks and Novice AI Risk Mitigation Tactics." Harvard Business School Working Paper, No. 24-074, June 2024.
- Dell'Acqua, Fabrizio, Edward McFowland III, Ethan Mollick, Hila Lifshitz-Assaf, Katherine C. Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, and Karim R. Lakhani. "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality." Harvard Business School Working Paper, No. 24-013, September 2023.
- Menghani, Neil, Edward McFowland III, and Daniel B. Neill. "Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness." Working Paper, June 2023.
- Boxer, Kate S., Edward McFowland III, and Daniel B. Neill. "Auditing Predictive Models for Intersectional Biases." Working Paper, June 2023.
- McFowland III, Edward, Sriram Somanchi, and Daniel B. Neill. "Efficient Discovery of Heterogeneous Quantile Treatment Effects in Randomized Experiments via Anomalous Pattern Detection." Working Paper, 2023.
- Somanchi, Sriram, Edward McFowland III, and Daniel B. Neill. "Detecting Anomalous Patterns of Care Using Health Insurance Claims." Working Paper, 2021. (In Preparation.)
- McFowland III, Edward, and Daniel B. Neill. "Toward Automated Discovery of Novel Anomalous Patterns." Working Paper, 2021.
- Cases and Teaching Materials
- Bojinov, Iavor, Edward McFowland III, François Candelon, Nikolina Jonsson, and Emer Moloney. "Pernod Ricard: Uncorking Digital Transformation." Harvard Business School Case 624-095, May 2024.
- Teaching
This course enables students to develop the skills and concepts needed to ensure the ongoing contribution of a firm's operations to its competitive position. It helps them to understand the complex processes underlying the development and manufacture of products as well as the creation and delivery of services.
Keywords: Product Development; Information Technology
- Awards & Honors
- Winner of the Best Complete Paper Award at the 2022 INFORMS Workshop on Data Science for "Ensemble IV: Creating Instrumental Variables from Ensemble Learners for Robust Statistical Inference" with Gordon Burtch, Mochen Yang, and Gediminas Adomavicius.
- Recipient of a 2021 Fairness in AI Grant from the National Science Foundation and Amazon.
- Recipient of the 2021 Mary and Jim Lawrence Fellowship for contributions in enhancing the intellectual environment of the Carlson School of Management at the University of Minnesota.
- Winner of the Best Reviewer Award at the 2019 Conference on Information Systems and Technology.
- Winner of a 2018 Facebook Computational Social Science Methodology Research Award.
- Runner-up for the 2018 Best Paper Award at the INFORMS Workshop on Data Science for "Using Data-Mined Variables in Causal Inference Tasks: A Random Forest Approach to the Measurement Error Problem" with Mochen Yang, Gordon Burtch, and Gediminas Adomavicius.
- Recipient of a 2018 Adobe Faculty Research Award, a grant to fund data science research.
- Recipient of a 2017 Adobe Faculty Research Award, a grant to fund data science research.
- Winner of the 2016 Journal of Computational and Graphical Statistics Best Paper Award for "Penalized Fast Subset Scanning" (2016) with Skyler Speakman, Sriram Somanchi, and Daniel B. Neill.
- Winner of the 2015 William W. Cooper Doctoral Dissertation Award from Carnegie Mellon University.
- Additional Information
- Areas of Interest
- analytics
- decision-making
- information technology
- machine learning
- networks
- econometrics