Publication


Featured research published by Daniel Sarewitz.


Technology in Society | 2002

Real-time Technology Assessment

David H. Guston; Daniel Sarewitz

Social science scholarship has identified complex linkages between society and science, but it has been less successful at actually enhancing those linkages in ways that can add to the value and capability of each sector. We propose a research program to integrate natural science and engineering investigations with social science and policy research from the outset — what we call “real-time technology assessment” (real-time TA). Comprising investigations into analogical case studies, research program mapping, communication and early warning, and technology assessment and choice, real-time TA can inform and support natural science and engineering research, and it can provide an explicit mechanism for observing, critiquing, and influencing social values as they become embedded in innovations. After placing real-time TA in the context of scholarship on technology assessment, the paper elaborates on this coordinated set of research tasks, using the example of nano-scale science and engineering (nanotechnology) research. The paper then discusses issues in the implementation of real-time TA and concludes that the adoption of real-time TA can significantly enhance the societal value of research-based innovation.


Nature | 2007

Climate change 2007: Lifting the taboo on adaptation

Roger A. Pielke; Gwyn Prins; Steve Rayner; Daniel Sarewitz

Renewed attention to policies for adapting to climate change cannot come too soon for Roger Pielke, Jr, Gwyn Prins, Steve Rayner and Daniel Sarewitz. The first volume of Climate Change 2007, the Fourth Assessment Report of the IPCC (Intergovernmental Panel on Climate Change), was published on 2 February. In a Special Report, Nature's news team sums up the document's main conclusions and assesses initial reactions to it. Two related Commentaries look at some practical steps being taken in response to climate change.


Risk Analysis | 2003

Vulnerability and Risk: Some Thoughts from a Political and Policy Perspective

Daniel Sarewitz; Roger A. Pielke; Mojdeh Keykhah

Public policies to mitigate the impacts of extreme events such as hurricanes or terrorist attacks will differ depending on whether they focus on reducing risk or reducing vulnerability. Here we present and defend six assertions aimed at exploring the benefits of vulnerability-based policies. (1) Risk-based approaches to covering the costs of extreme events do not depend for their success on reduction of vulnerability. (2) Risk-based approaches to preparing for extreme events are focused on acquiring accurate probabilistic information about the events themselves. (3) Understanding and reducing vulnerability does not demand accurate predictions of the incidence of extreme events. (4) Extreme events are created by context. (5) It is politically difficult to justify vulnerability reduction on economic grounds. (6) Vulnerability reduction is a human rights issue; risk reduction is not.


Sustainability Science | 2014

The future of sustainability science: a solutions-oriented research agenda

Thaddeus R. Miller; Arnim Wiek; Daniel Sarewitz; John P. Robinson; Lennart Olsson; David Kriebel; Derk Loorbach

Over the last decade, sustainability science has been at the leading edge of widespread efforts from the social and natural sciences to produce use-inspired research. Yet, how knowledge generated by sustainability science and allied fields will contribute to transitions toward sustainability remains a critical theoretical and empirical question for basic and applied research. This article explores the limitations of sustainability science research to move the field beyond the analysis of problems in coupled systems to interrogate the social, political and technological dimensions of linking knowledge and action. Over the next decade, sustainability science can strengthen its empirical, theoretical and practical contributions by developing along four research pathways: the role of values in science and decision-making for sustainability; how communities at various scales envision and pursue sustainable futures; how socio-technical change can be fostered at multiple scales; and the promotion of social and institutional learning for sustainable development.


Technology in Society | 1999

Prediction in science and policy

Daniel Sarewitz; Roger A. Pielke

Prediction in traditional, reductionist natural science serves the role of validating hypotheses about invariant natural phenomena. In recent years, a new type of prediction has arisen in science, motivated in part by the needs of policy makers and the availability of new technologies. This new predictive science seeks to foretell the behavior of complex environmental phenomena such as climate change, earthquakes, and extreme weather events. Significant intellectual and financial resources are now devoted to such efforts, in the expectation that predictions will guide policy making. These expectations, however, derive in part from confusion about the different roles of prediction in science and society. Policy makers lack a framework for assessing when and if prediction can help achieve policy goals. This article is a first step towards developing such a framework.


Science & Public Policy | 2005

Public values and public failure in US science policy

Barry Bozeman; Daniel Sarewitz

Domestic science policy in the United States is linked inextricably to economic thinking. We seek to develop a practical analytical framework that confronts the manifest problems of economic valuing for science and technology activities. We argue that pervasive use of market valuation, market-failure assumptions and economic metaphors shapes the structure of science policy in undesirable ways. In particular, reliance on economic reasoning tends to shift the discourse about science policy away from political questions of “why?” and “to what end?” to economic questions of “how much?” Borrowing from the “public values failure framework”, we examine public values criteria for science policy, illustrated with case vignettes on such topics as genetically modified crops and the market for human organs.


Science as Culture | 2006

Too Little, Too Late? Research Policies on the Societal Implications of Nanotechnology in the United States

Ira Bennett; Daniel Sarewitz

‘Nanotechnology’ is still in its infancy. Nevertheless, and despite ongoing disagreements about how ‘nanotechnology’ ought to be defined, narratives emerging from a diversity of sources share the notion that the societal impacts of nanotechnology could be transformational, perhaps radically so, in social realms as diverse as privacy, workforce, security, health, and human cognition. One consequence of this shared belief is a nascent effort to understand, anticipate, and perhaps manage the implications and dynamics of the societal impacts of nanotechnology. A rapidly expanding menu of conferences and reports, sponsored by governmental and non-governmental bodies in the US and Western Europe, attests to a growing concern about the societal effects of nanotechnology (e.g. Roco and Bainbridge, 2001; ETC, 2003; Meridian Institute, 2005; Wilsdon and Willis, 2004; Royal Society/Royal Academy of Engineering, 2004). In the US, a federal initiative to fund nanoscale science and engineering (NSE) research was accompanied at its inception in 2000 by a commitment to support a parallel, if substantially smaller, research effort on societal implications. Three years later, the US Congress actually passed legislation to mandate the expansion of this effort. In this paper we ask: what roles are the social sciences playing in the emerging co-evolution of nanotechnology and society, and, crucially, how do those roles come to be defined? To probe this question, we look to the US experience in constructing three brief narratives of our own to illustrate the evolution of: (1) NSE research; (2) speculations and concerns about the implications of nanotechnology; and (3) government commitment to supporting research on the societal implications of nanotechnology. Conspicuously absent from these stories is the influence of several decades of scholarship on the interactions of science, technology, and society.
The community of science and technology studies (‘science studies’ hereafter) and science and technology policy scholars seem to have engaged with the challenges of nanotechnology only when stimulated to do so.


Science, Technology, & Human Values | 2000

Ex Post Evaluation: A More Effective Role for Scientific Assessments in Environmental Policy

Charles Herrick; Daniel Sarewitz

Unreasonable expectations about the nature and character of scientific knowledge support the widespread political assumption that predictive scientific assessments are a necessary precursor to environmental decision making. All too often, the practical outcome of this assumption is that scientific uncertainty becomes a ready-made dodge for what is in reality just a difficult political decision. Interdisciplinary assessments necessary to address complex environmental policy issues invariably result in findings that are inherently contestable, especially when applied in the unrestrained realm of partisan politics. In this article, the authors argue that predictive scientific assessments are inherently limited in the extent to which they can guide policy development and that rigorous scientific assessments can be much more valuable in the role of ex post policy evaluation than they can in the context of ex ante policy formulation.


Nature | 2012

Beware the creeping cracks of bias

Daniel Sarewitz

Alarming cracks are starting to penetrate deep into the scientific edifice. They threaten the status of science and its value to society. And they cannot be blamed on the usual suspects — inadequate funding, misconduct, political interference, an illiterate public. Their cause is bias, and the threat they pose goes to the heart of research.

Bias is an inescapable element of research, especially in fields such as biomedicine that strive to isolate cause–effect relations in complex systems in which relevant variables and phenomena can never be fully identified or characterized. Yet if biases were random, then multiple studies ought to converge on truth. Evidence is mounting that biases are not random. A Comment in Nature in March reported that researchers at Amgen were able to confirm the results of only six of 53 ‘landmark studies’ in preclinical cancer research (C. G. Begley & L. M. Ellis Nature 483, 531–533; 2012). For more than a decade, and with increasing frequency, scientists and journalists have pointed out similar problems.

Early signs of trouble were appearing by the mid-1990s, when researchers began to document systematic positive bias in clinical trials funded by the pharmaceutical industry. Initially these biases seemed easy to address, and in some ways they offered psychological comfort. The problem, after all, was not with science, but with the poison of the profit motive. It could be countered with strict requirements to disclose conflicts of interest and to report all clinical trials. Yet closer examination showed that the trouble ran deeper. Science’s internal controls on bias were failing, and bias and error were trending in the same direction — towards the pervasive over-selection and over-reporting of false positive results. The problem was most provocatively asserted in a now-famous 2005 paper by John Ioannidis, currently at Stanford University in California: ‘Why Most Published Research Findings Are False’ (J. P. A. Ioannidis PLoS Med. 2, e124; 2005). Evidence of systematic positive bias was turning up in research ranging from basic to clinical, and on subjects ranging from genetic disease markers to testing of traditional Chinese medical practices.

How can we explain such pervasive bias? Like a magnetic field that pulls iron filings into alignment, a powerful cultural belief is aligning multiple sources of scientific bias in the same direction. The belief is that progress in science means the continual production of positive findings. All involved benefit from positive results, and from the appearance of progress. Scientists are rewarded both intellectually and professionally, science administrators are empowered and the public desire for a better world is answered. The lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated — but the necessary cultural change is incredibly difficult to achieve.

Researchers seek to reduce bias through tightly controlled experimental investigations. In doing so, however, they are also moving farther away from the real-world complexity in which scientific results must be applied to solve problems. The consequences of this strategy have become acutely apparent in mouse-model research. The technology to produce unlimited numbers of identical transgenic mice attracts legions of researchers and abundant funding because it allows for controlled, replicable experiments and rigorous hypothesis-testing — the canonical tenets of ‘scientific excellence’. But the findings of such research often turn out to be invalid when applied to humans.

A biased scientific result is no different from a useless one. Neither can be turned into a real-world application. So it is not surprising that the cracks in the edifice are showing up first in the biomedical realm, because research results are constantly put to the practical test of improving human health. Nor is it surprising, even if it is painfully ironic, that some of the most troubling research to document these problems has come from industry, precisely because industry’s profits depend on the results of basic biomedical science to help guide drug-development choices.

Scientists rightly extol the capacity of research to self-correct. But the lesson coming from biomedicine is that this self-correction depends not just on competition between researchers, but also on the close ties between science and its application that allow society to push back against biased and useless results. It would therefore be naive to believe that systematic error is a problem for biomedicine alone. It is likely to be prevalent in any field that seeks to predict the behaviour of complex systems — economics, ecology, environmental science, epidemiology and so on. The cracks will be there, they are just harder to spot because it is harder to test research results through direct technological applications (such as drugs) and straightforward indicators of desired outcomes (such as reduced morbidity and mortality).

Nothing will corrode public trust more than a creeping awareness that scientists are unable to live up to the standards that they have set for themselves. Useful steps to deal with this threat may range from reducing the hype from universities and journals about specific projects, to strengthening collaborations between those involved in fundamental research and those who will put the results to use in the real world. There are no easy solutions. The first step is to face up to the problem — before the cracks undermine the very foundations of science.


Science & Public Policy | 2007

Science policies for reducing societal inequities

Edward J. Woodhouse; Daniel Sarewitz

In an effort to move social justice issues higher on R&D policy-making agendas, we ask whether new technoscientific capacities introduced into a non-egalitarian society tend disproportionately to benefit the affluent and powerful. To demonstrate plausibility of the hypothesis, we first review examples of grossly non-egalitarian outcomes from military, medical, and other R&D arenas. We then attempt to debunk the science-inequity link by looking for substantial categories where R&D is conducive to reducing unjustified inequalities. For example, R&D sometimes enables less affluent persons to purchase more or better goods and services. Although the case for price-based equity proves weaker than normally believed, R&D targeted towards public goods turns out to offer a reasonable chance of equity enhancement, as do several other potentially viable approaches to science policy. However, major changes in science-policy institutions and participants probably would be required for R&D to serve humanity equitably.

Collaboration


Daniel Sarewitz's top co-authors.

Roger A. Pielke
University of Colorado Boulder

David Kriebel
University of Massachusetts Lowell

Edward J. Woodhouse
Rensselaer Polytechnic Institute

Arie Rip
University of Twente

Arnim Wiek
Arizona State University