Kevin J. Boudreau
London Business School
Publications
Featured research published by Kevin J. Boudreau.
Management Science | 2011
Kevin J. Boudreau; Nicola Lacetera; Karim R. Lakhani
Contests are a historically important and increasingly popular mechanism for encouraging innovation. A central concern in designing innovation contests is how many competitors to admit. Using a unique data set of 9,661 software contests, we provide evidence of two coexisting and opposing forces that operate when the number of competitors increases. Greater rivalry reduces the incentives of all competitors in a contest to exert effort and make investments. At the same time, adding competitors increases the likelihood that at least one competitor will find an extreme-value solution. We show that the effort-reducing effect of greater rivalry dominates for less uncertain problems, whereas the effect on the extreme value prevails for more uncertain problems. Adding competitors thus systematically increases overall contest performance for high-uncertainty problems. We also find that higher uncertainty reduces the negative effect of added competitors on incentives. Thus, uncertainty and the nature of the problem should be explicitly considered in the design of innovation tournaments. We explore the implications of our findings for the theory and practice of innovation contests. This paper was accepted by Christian Terwiesch, operations management.
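The two opposing forces described above follow an order-statistic logic: each added competitor supplies another independent draw, so the expected best outcome rises with the field size, and it rises fastest when outcome uncertainty is high, while per-competitor effort falls as winning odds shrink. A minimal Monte Carlo sketch (an illustrative toy model of my own, not the paper's empirical estimation) makes both forces visible:

```python
import random

def expected_best(n_competitors, uncertainty, trials=20_000):
    """Monte Carlo estimate of the best (max) solution quality in a contest.

    Each competitor's outcome = effort + noise. Effort falls as the field
    grows (a stylized effort-reducing effect of rivalry), while the noise
    term's spread stands in for problem uncertainty.
    """
    total = 0.0
    for _ in range(trials):
        effort = 1.0 / n_competitors          # stylized incentive effect
        best = max(effort + random.gauss(0.0, uncertainty)
                   for _ in range(n_competitors))
        total += best
    return total / trials

random.seed(0)
for sigma in (0.1, 2.0):                       # low vs. high uncertainty
    small, large = expected_best(2, sigma), expected_best(20, sigma)
    print(f"sigma={sigma}: 2 competitors -> {small:.2f}, 20 -> {large:.2f}")
```

Under these assumptions, adding competitors lowers expected best performance when uncertainty is low (the effort effect dominates) and raises it when uncertainty is high (the extreme-value effect dominates), mirroring the paper's central comparative static.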
Management Science | 2010
Kevin J. Boudreau
This paper studies two fundamentally distinct approaches to opening a technology platform and their different impacts on innovation. One approach is to grant access to a platform and thereby open up markets for complementary components around the platform. Another approach is to give up control over the platform itself. Using data on 21 handheld computing systems (1990–2004), I find that granting greater levels of access to independent hardware developer firms produces up to a fivefold acceleration in the rate of new handheld device development, depending on the precise degree of access and how this policy was implemented. Where operating system platform owners went further and gave up control (beyond just granting access to their platforms), the incremental effect on new device development was still positive but an order of magnitude smaller. The evidence from the industry and theoretical arguments both suggest that distinct economic mechanisms were set in motion by these two approaches to opening.
Organization Science | 2012
Kevin J. Boudreau
In this paper, I study the effect of adding large numbers of producers of application software programs (“apps”) to leading handheld computer platforms, from 1999 to 2004. To isolate causal effects, I exploit changes in the software labor market. Consistent with past theory, I find a tight link between the number of producers on a platform and the number of software varieties generated. The patterns indicate the link is closely related to the diversity and distinct specializations of producers. Also highlighting the role of heterogeneity and nonrandom entry and sorting, later cohorts generated less compelling software than earlier cohorts. Adding producers to a platform also shaped investment incentives in ways consistent with a tension between network effects and competitive crowding, alternately increasing or decreasing innovation incentives depending on whether apps were differentiated or close substitutes. The crowding of similar apps dominated in this case; the average effect of adding producers on innovation incentives was negative. Overall, adding large numbers of producers led innovation to become more dependent on population-level diversity, variation, and experimentation, while drawing less on the heroic efforts of any one individual innovator.
Nature Biotechnology | 2013
Karim R. Lakhani; Kevin J. Boudreau; Po-Ru Loh; Lars Backstrom; Carliss Y. Baldwin; Eric Lonstein; Mike Lydon; Alan MacCormack; Ramy Arnaout; Eva C. Guinan
Advances in biotechnology have fuelled the generation of unprecedented quantities of data across the life sciences. However, finding individuals who can address such “big data” problems effectively has become a significant research bottleneck. Historically, prize-based contests have had striking success in attracting unconventional individuals who can solve difficult challenges. To determine whether this approach could solve a real “big data” biologic algorithm problem, we used a complex immunogenomics problem as the basis for a two-week online contest broadcast to participants outside academia and biomedical disciplines. Participants in our contest generated over 600 submissions containing 89 novel computational approaches to the problem. Thirty submissions exceeded the benchmark performance of NIH’s MegaBLAST. The best achieved both greater accuracy and a 1,000-fold increase in speed. Here we show the potential of using online prize-based contests to access individuals without domain-specific backgrounds to address big data challenges in the life sciences.
Strategic Management Journal | 2014
Kevin J. Boudreau; Lars Bo Jeppesen
Platforms have evolved beyond just being organized as multi-sided markets with complementors selling to users. Complementors are often unpaid, working outside of a price system and driven by heterogeneous sources of motivation, which should affect how they respond to platform growth. Does reliance on network effects and strategies to attract large numbers of complementors remain advisable in such contexts? We test hypotheses related to these issues using data from 85 online multi-player game platforms with unpaid complementors. We find that complementor development responds to platform growth even without sales incentives, but that attracting complementors has a net zero effect on ongoing development and fails to stimulate network effects. We discuss conditions under which a strategy of using unpaid crowd complementors remains advantageous.
Management Science | 2016
Kevin J. Boudreau; Eva C. Guinan; Karim R. Lakhani; Christoph Riedl
Selecting among alternative projects is a core management task in all innovating organizations. In this paper, we focus on the evaluation of frontier scientific research projects. We argue that the “intellectual distance” between the knowledge embodied in research proposals and an evaluator’s own expertise systematically relates to the evaluations given. To estimate relationships, we designed and executed a grant proposal process at a leading research university in which we randomized the assignment of evaluators and proposals to generate 2,130 evaluator–proposal pairs. We find that evaluators systematically give lower scores to research proposals that are closer to their own areas of expertise and to those that are highly novel. The patterns are consistent with biases associated with boundedly rational evaluation of new ideas. The patterns are inconsistent with intellectual distance simply contributing “noise” or being associated with private interests of evaluators. We discuss implications for policy, managerial intervention, and allocation of resources in the ongoing accumulation of scientific knowledge.
Research Policy | 2015
Kevin J. Boudreau; Karim R. Lakhani
Recent calls for greater openness in our private and public innovation systems have particularly urged more open disclosure and granting of access to intermediate works (early results, algorithms, materials, data, and techniques), with the goals of enhancing overall research and development productivity and enhancing cumulative innovation. To make progress towards understanding the implications of such policy changes, we devised a large-scale field experiment in which 733 subjects were divided into matched independent subgroups to address a bioinformatics problem under either a regime of open disclosure of intermediate results or, alternatively, one of closed secrecy around intermediate solutions. We observe the cumulative innovation process in each regime with fine-grained measures and derive inferences from a series of cross-sectional comparisons. Open disclosure led to lower participation and lower effort but nonetheless led to higher average problem-solving performance by concentrating these lesser efforts on the most performant technical approaches. Closed secrecy produced higher participation and higher effort, while producing less correlated choices of technical approaches among participants, resulting in greater individual and collective experimentation and greater dispersion of performance. We discuss the implications of such changes for ongoing theory, evidence, and policy considerations with regard to cumulative innovation.
Archive | 2008
Kevin J. Boudreau
Some open strategies involve giving up control of a core platform technology. Others involve encouraging outsiders to build complementary innovations on top of the core technology. While these approaches often go hand in hand, I argue they should relate to different objectives, instruments, and economic mechanisms. This paper presents evidence from a panel of handheld computing systems (1990–2004) to reveal different relationships between rates of new handheld device introductions and different open strategies. The core of the analysis deals with addressing endogeneity concerns and ensuring that econometric comparisons between policy switchers and non-switchers are meaningful. I find that opening the complement (in terms of licensing and IP policies) was related to up to a fivefold acceleration in the development of new devices. The degree of openness mattered, too; intermediate levels of opening the complement were associated with the fastest rates. Opening the platform (in terms of ownership, vertical scope, and outside contributions) was associated with a smaller effect, roughly a 20% acceleration in development rates. I interpret these results in light of emerging theories of openness and innovation.
The RAND Journal of Economics | 2016
Kevin J. Boudreau; Karim R. Lakhani; Michael E. Menietti
Economic analysis of rank-order tournaments has shown that intensified competition leads to declining performance. Empirical research demonstrates that individuals in tournament-type contests perform less well on average in the presence of a larger total number of competitors and of superstars. Particularly in field settings, studies often lack direct evidence about the underlying mechanisms, such as the amount of effort, that might account for these results. Here we exploit a novel dataset on algorithmic programming contests that contains data on individual effort, risk taking, and cognitive errors that may underlie tournament performance outcomes. We find that competitors on average react negatively to an increase in the total number of competitors, and react more negatively to an increase in the number of superstars than of non-superstars. We also find that the most negative reactions come from a particular subgroup of competitors: those who are highly skilled but whose abilities put them only near the top of the ability distribution. For these competitors, we find no evidence that the decline in performance outcomes stems from reduced effort or increased risk taking. Instead, errors in logic lead to a decline in performance, which suggests a cognitive explanation for the negative response to increased competition. We also find that a small group of competitors at the very top of the ability distribution (non-superstars) react positively to increased competition from superstars. For them, we find some evidence of increased effort and no increase in errors of logic, consistent with both economic and psychological explanations.
Tournaments are widely used in the economy to organize production and innovation. We study individual data on 2,775 contestants in 755 software algorithm development contests with random assignment. The performance response to added contestants varies nonmonotonically across contestants of different abilities, precisely conforming to theoretical predictions. Most participants respond negatively, whereas the highest-skilled contestants respond positively. In counterfactual simulations, we examine a number of tournament design policies (number of competitors, prize allocation and structure, number of divisions, open entry) and assess their effectiveness in shaping optimal tournament outcomes for a designer.
The Review of Economics and Statistics | 2017
Kevin J. Boudreau; Thomas J. Brady; Ina Ganguli; Patrick Gaulé; Eva C. Guinan; Anthony N. Hollenberg; Karim R. Lakhani
Scientists typically self-organize into teams, matching with others to collaborate in the production of new knowledge. We present the results of a field experiment conducted at Harvard Medical School to understand the extent to which search costs affect matching among scientific collaborators. We generated exogenous variation in search costs for pairs of potential collaborators by randomly assigning individuals to 90-minute structured information-sharing sessions as part of a grant funding opportunity for biomedical researchers. We estimate that the treatment increases the baseline probability of grant co-application for a given pair of researchers by 75% (from 0.16% to 0.28%), with effects higher among those in the same specialization. The findings indicate that matching between scientists is subject to considerable frictions, even for geographically proximate scientists working in the same institutional context with ample access to common information and funding opportunities.
Collaboration
Libera Università Internazionale degli Studi Sociali Guido Carli