
Publications

Page 1 of 11 Results
  • 2021
  • Article

Fair Algorithms for Infinite and Contextual Bandits

By: Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
We study fairness in linear bandit problems. Starting from the notion of meritocratic fairness introduced in Joseph et al. [2016], we carry out a more refined analysis of a more general problem, achieving better performance guarantees with fewer modelling assumptions...
Keywords: Algorithms; Bandit Problems; Fairness; Mathematical Methods
Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Fair Algorithms for Infinite and Contextual Bandits." Proceedings of the 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2021).
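The meritocratic-fairness constraint in this line of work requires that the algorithm never prefer an arm over one that is plausibly better. Below is a minimal sketch of that idea, assuming the simpler finite-armed setting rather than the paper's linear and contextual ones: build a confidence interval for each arm and play uniformly at random over the arms whose intervals chain to the highest upper bound. The function names and the Hoeffding-style interval widths are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def chained_set(lower, upper):
    """Arms linked to the top arm through overlapping confidence intervals.

    No arm in this set can be certified as worse than another, so playing
    uniformly over it never favors a worse arm (the meritocratic-fairness idea).
    """
    order = np.argsort(upper)[::-1]          # arms sorted by upper confidence bound
    chain, low = [order[0]], lower[order[0]]
    for i in order[1:]:
        if upper[i] >= low:                  # interval overlaps the current chain
            chain.append(i)
            low = min(low, lower[i])
        else:
            break
    return chain

def fair_choice(counts, sums, t, rng):
    """One round of a fair UCB-style bandit, rewards assumed to lie in [0, 1]."""
    n = np.maximum(counts, 1)
    means = sums / n
    width = np.sqrt(np.log(max(t, 2)) / n)   # Hoeffding-style width (illustrative)
    return int(rng.choice(chained_set(means - width, means + width)))
```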
  • 2018
  • Article

Mitigating Bias in Adaptive Data Gathering via Differential Privacy

By: Seth Neel and Aaron Leon Roth
Data that is gathered adaptively—via bandit algorithms, for example—exhibits bias. This is true both when gathering simple numeric-valued data—the empirical means kept track of by stochastic bandit algorithms are biased downwards—and when gathering more complicated...
Keywords: Bandit Algorithms; Bias; Analytics and Data Science; Mathematical Methods; Theory
Neel, Seth, and Aaron Leon Roth. "Mitigating Bias in Adaptive Data Gathering via Differential Privacy." Proceedings of the 35th International Conference on Machine Learning (ICML) (2018).
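The bias the abstract describes is easy to reproduce. A small simulation, assuming nothing from the paper beyond the phenomenon itself: two identical Bernoulli arms are sampled by a greedy bandit, and the final empirical means fall below the true value on average, because an arm that starts out unlucky stops being sampled before its estimate can recover. The paper's remedy, running the data-gathering step under differential privacy, is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_MEAN = 0.5                    # two identical Bernoulli arms
ROUNDS, RUNS = 500, 2000

finals = []
for _ in range(RUNS):
    counts, sums = np.zeros(2), np.zeros(2)
    for t in range(ROUNDS):
        # pull each arm once, then act greedily on the empirical means
        arm = t if t < 2 else int(np.argmax(sums / counts))
        sums[arm] += rng.binomial(1, TRUE_MEAN)
        counts[arm] += 1
    finals.append(sums / counts)

# Both entries come out noticeably below 0.5: adaptive data gathering
# biases the empirical means downward, as the abstract notes.
print(np.mean(finals, axis=0))
```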
  • 2023
  • Working Paper

An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits

By: Biyonka Liang and Iavor I. Bojinov
Typically, multi-armed bandit (MAB) experiments are analyzed at the end of the study and thus require the analyst to specify a fixed sample size in advance. However, in many online learning applications, it is advantageous to continuously produce inference on the...
Keywords: Analytics and Data Science; AI and Machine Learning; Mathematical Methods
Liang, Biyonka, and Iavor I. Bojinov. "An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits." Harvard Business School Working Paper, No. 24-057, March 2024.
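"Anytime-valid" means the confidence statements hold simultaneously over all sample sizes, so the analyst can monitor a bandit experiment continuously and stop whenever the interval is conclusive. The paper develops a specific experimental design for this; the sketch below is only a conservative stand-in built from a Hoeffding bound with a per-sample-size union bound, to show the shape of the guarantee.

```python
import numpy as np

def anytime_ci(samples, delta=0.05):
    """Conservative anytime-valid CI for the mean of [0, 1]-valued rewards.

    A Hoeffding interval is formed at every sample size n with error budget
    delta_n = 6*delta / (pi^2 * n^2). The budgets sum to delta over all n,
    so the intervals hold simultaneously for every n and peeking or early
    stopping does not inflate the error rate. (Illustrative only; not the
    paper's tighter design.)
    """
    n = len(samples)
    mean = float(np.mean(samples))
    delta_n = 6 * delta / (np.pi ** 2 * n ** 2)
    width = np.sqrt(np.log(2 / delta_n) / (2 * n))
    return mean - width, mean + width
```

Calling such a function on each arm's accumulated rewards after every round yields interval estimates that remain valid no matter when the experiment is stopped.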
  • 18 Nov 2016
  • Conference Presentation

Rawlsian Fairness for Machine Learning

By: Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
Motivated by concerns that automated decision-making procedures can unintentionally lead to discriminatory behavior, we study a technical definition of fairness modeled after John Rawls' notion of "fair equality of opportunity". In the context of a simple model of...
Keywords: Machine Learning; Algorithms; Fairness; Decision Making; Mathematical Methods
Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Rawlsian Fairness for Machine Learning." Paper presented at the 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning, Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), November 18, 2016.
  • November–December 2018
  • Article

Online Network Revenue Management Using Thompson Sampling

By: Kris J. Ferreira, David Simchi-Levi and He Wang
We consider a network revenue management problem where an online retailer aims to maximize revenue from multiple products with limited inventory constraints. As is common in practice, the retailer does not know the consumer's purchase probability at each price and must...
Keywords: Online Marketing; Revenue Management; Revenue; Management; Marketing; Internet and the Web; Price; Mathematical Methods
Ferreira, Kris J., David Simchi-Levi, and He Wang. "Online Network Revenue Management Using Thompson Sampling." Operations Research 66, no. 6 (November–December 2018): 1586–1602.
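The core of the approach is Thompson sampling on an unknown demand model; the paper's contribution is folding limited inventory into the pricing decision (via a linear program over the sampled demand rates). The toy below keeps only the Thompson-sampling core for a single product, with made-up prices, purchase probabilities, horizon, and inventory, and simply stops selling when stock runs out.

```python
import numpy as np

rng = np.random.default_rng(1)
PRICES = np.array([29.0, 34.0, 39.0])      # candidate price points (illustrative)
TRUE_P = np.array([0.30, 0.22, 0.15])      # unknown purchase probability at each price
INVENTORY, HORIZON = 60, 400

alpha = np.ones(len(PRICES))               # Beta(1, 1) priors on purchase probability
beta = np.ones(len(PRICES))
revenue, stock = 0.0, INVENTORY

for t in range(HORIZON):
    if stock == 0:
        break
    sampled_p = rng.beta(alpha, beta)      # Thompson step: sample a demand model
    k = int(np.argmax(PRICES * sampled_p))  # price maximizing sampled expected revenue
    sale = rng.binomial(1, TRUE_P[k])      # customer buys or not at the posted price
    alpha[k] += sale
    beta[k] += 1 - sale
    if sale:
        revenue += PRICES[k]
        stock -= 1

print(revenue, stock)
```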
  • 2023
  • Article

Balancing Risk and Reward: An Automated Phased Release Strategy

By: Yufan Li, Jialiang Mao and Iavor Bojinov
Phased releases are a common strategy in the technology industry for gradually releasing new products or updates through a sequence of A/B tests in which the number of treated units gradually grows until full deployment or deprecation. Performing phased releases in a...
Keywords: Product Launch; Mathematical Methods; Product Development
Li, Yufan, Jialiang Mao, and Iavor Bojinov. "Balancing Risk and Reward: An Automated Phased Release Strategy." Advances in Neural Information Processing Systems (NeurIPS) (2023).
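A phased release is a sequence of A/B tests in which the treated fraction grows toward full deployment or shrinks toward deprecation. The rule below is only a schematic ramp-up/roll-back policy, assuming a made-up rollout schedule and guardrail threshold; the paper instead derives an automated policy that balances the risk of a bad release against the speed of learning.

```python
import numpy as np

SCHEDULE = (0.01, 0.05, 0.2, 0.5, 1.0)       # illustrative rollout fractions

def next_rollout_fraction(control, treated, current_frac, z=1.96):
    """Advance a phased release one step if the guardrail metric looks safe.

    The release ramps through SCHEDULE; it moves to the next fraction only if
    the lower confidence bound on (treated mean - control mean) clears a small
    negative tolerance, and rolls back one step otherwise. Schematic rule,
    not the paper's automated policy.
    """
    diff = np.mean(treated) - np.mean(control)
    se = np.sqrt(np.var(treated) / len(treated) + np.var(control) / len(control))
    lower = diff - z * se
    idx = SCHEDULE.index(current_frac)
    if lower > -0.005:                        # guardrail: no meaningful regression
        return SCHEDULE[min(idx + 1, len(SCHEDULE) - 1)]
    return SCHEDULE[max(idx - 1, 0)]          # roll back on evidence of harm
```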
  • 2022
  • Article

Towards Robust Off-Policy Evaluation via Human Inputs

By: Harvineet Singh, Shalmali Joshi, Finale Doshi-Velez and Himabindu Lakkaraju
Off-policy Evaluation (OPE) methods are crucial tools for evaluating policies in high-stakes domains such as healthcare, where direct deployment is often infeasible, unethical, or expensive. When deployment environments are expected to undergo changes (that is, dataset...
Keywords: Analytics and Data Science; Research
Singh, Harvineet, Shalmali Joshi, Finale Doshi-Velez, and Himabindu Lakkaraju. "Towards Robust Off-Policy Evaluation via Human Inputs." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2022): 686–699.
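Off-policy evaluation estimates the value a new decision policy would have achieved using only data logged under a different policy. The snippet below is the standard inverse-propensity-score estimator, shown only to fix ideas; the paper's contribution is making estimates like this robust to deployment-time shifts by incorporating human input, which is not reflected here. The data layout and function names are illustrative assumptions.

```python
import numpy as np

def ips_value(logged, target_policy):
    """Standard inverse-propensity-score off-policy value estimate.

    `logged` is a list of (context, action, reward, logging_prob) tuples and
    `target_policy(context, action)` returns the target policy's probability
    of taking the logged action. Reweighting logged rewards by the probability
    ratio gives an unbiased estimate of the target policy's value when the
    logging probabilities are correct and the environment does not shift.
    """
    weights = np.array([target_policy(x, a) / p for x, a, r, p in logged])
    rewards = np.array([r for _, _, r, _ in logged])
    return float(np.mean(weights * rewards))
```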
  • 06 Oct 2015
  • First Look

October 6, 2015

...upon the Thompson sampling algorithm used for multi-armed bandit problems by incorporating inventory constraints into the pricing decisions. Our algorithm proves to have both...
Keywords: Sean Silverthorne
  • 03 Jan 2017
  • First Look

January 3, 2017

...performance results when compared to other algorithms developed for similar settings. Moreover, we show how our algorithms can be extended for use in general multi-armed bandit...
Keywords: Carmen Nobel
  • 20 Mar 2018
  • First Look

First Look at New Research and Ideas, March 20, 2018

...which builds upon the Thompson sampling algorithm used for multi-armed bandit problems by incorporating inventory constraints into the model and algorithm. Our algorithm proves...
Keywords: Sean Silverthorne
  • 21 Nov 2017
  • First Look

First Look at New Research and Ideas, November 21, 2017

...strong theoretical performance guarantees as well as promising numerical performance results when compared to other algorithms developed for similar settings. Moreover, we show how our algorithms can be...
Keywords: Sean Silverthorne
