Edward J. Woodhouse
Rensselaer Polytechnic Institute
Publications
Featured research published by Edward J. Woodhouse.
Science & Public Policy | 2007
Edward J. Woodhouse; Daniel Sarewitz
In an effort to move social justice issues higher on R&D policy-making agendas, we ask whether new technoscientific capacities introduced into a non-egalitarian society tend disproportionately to benefit the affluent and powerful. To demonstrate the plausibility of the hypothesis, we first review examples of grossly non-egalitarian outcomes from military, medical, and other R&D arenas. We then attempt to debunk the science-inequity link by looking for substantial categories where R&D is conducive to reducing unjustified inequalities. For example, R&D sometimes enables less affluent persons to purchase more or better goods and services. Although the case for price-based equity proves weaker than normally believed, R&D targeted towards public goods turns out to offer a reasonable chance of equity enhancement, as do several other potentially viable approaches to science policy. However, major changes in science-policy institutions and participants probably would be required for R&D to serve humanity equitably. Copyright Beech Tree Publishing.
IEEE Technology and Society Magazine | 1997
Edward J. Woodhouse; Dean Nieusma
Both experts and users of expertise fail to grasp the role of expertise in decision making. Of course, expert advice is essential in complex human activities, but those who follow that advice also sometimes regret it. Under what conditions do expert advice and action tend to produce satisfactory outcomes? Recent studies of technological controversies and governmental regulatory procedures reveal some of the conditions under which expert advice may work well, and conditions under which it does not. Before considering these situations, the authors review two mental models (simple and cynical) that obstruct clear thinking about what users of expertise should expect from professional knowledge and what professionals should seek to contribute to technological controversies and other situations requiring expertise.
IEEE Technology and Society Magazine | 2001
Edward J. Woodhouse
One of the toughest and most important challenges facing technological civilization is excessive consumption in affluent societies. Scholars in the fields of design, industrial ecology and environmental ethics have begun dealing with aspects of over-consumption. With selected exceptions, however, engineering ethicists have not yet paid close attention to the matter. In this article, I attempt to catalyse such inquiry.
Science | 2015
Christelle Didier; Weiwen Duan; Jean-Pierre Dupuy; David H. Guston; Yongmou Liu; José Antonio López Cerezo; Diane P. Michelfelder; Carl Mitcham; Daniel Sarewitz; Jack Stilgoe; Andrew Stirling; Shannon Vallor; Guoyu Wang; James Wilsdon; Edward J. Woodhouse
The 17 July special section on Artificial Intelligence (AI) (p. [248][1]), although replete with solid information and ethical concern, was biased toward optimism about the technology. The articles concentrated on the roles that the military and government play in “advancing” AI, but did not include the opinions of any political scientists or technology policy scholars trained to think about the unintended (and negative) consequences of governmental steering of technology. The interview with Stuart Russell touches on these concerns (“Fears of an AI pioneer,” J. Bohannon, News, p. [252][2]), but as a computer scientist, his solutions focus on improved training. Yet even the best training will not protect against market or military incentives to stay ahead of competitors. Likewise double-edged was M. I. Jordan and T. M. Mitchell's desire “that society begin now to consider how to maximize” the benefits of AI as a transformative technology (“Machine learning: Trends, perspectives, and prospects,” Reviews, p. [255][3]). Given the grievous shortcomings of national governance and the even weaker capacities of the international system, it is dangerous to invest heavily in AI without political processes in place that allow those who support and oppose the technology to engage in a fair debate. The section implied that we are all engaged in a common endeavor, when in fact AI is dominated by a relative handful of mostly male, mostly white and East Asian, mostly young, mostly affluent, highly educated technoscientists and entrepreneurs and their affluent customers. A majority of humanity is on the outside looking in, and it is past time for those working on AI to be frank about it. The rhetoric was also loaded with positive terms. AI presents a risk of real harm, and any serious analysis of its potential future would do well to unflinchingly acknowledge that fact.
The question posed in the collection's introduction—“How will we ensure that the rise of the machines is entirely under human control?” (“Rise of the machines,” J. Stajic et al., p. [248][1])—is the wrong question to ask. There are no institutions adequate to “ensure” it. There are no procedures by which all humans can take part in the decision process. The more important question is this: Should we slow the pace of AI research and applications until a majority of people, representing the world's diversity, can play a meaningful role in the deliberations? Until that question is part of the debate, there is no debate worth having. [1]: /lookup/doi/10.1126/science.349.6245.248 [2]: /lookup/doi/10.1126/science.349.6245.252 [3]: /lookup/doi/10.1126/science.aaa8415
International Symposium on Technology and Society | 1997
Edward J. Woodhouse
Because the practice of engineering authoritatively reshapes the world, it deserves to be seen as a public and political activity. If technology is a form of legislation, are engineers the legislators, or do they occupy some other political role? Because engineering in the 20th century has served some social interests much better than others, might those who have been disadvantaged reasonably construe engineers as their political opponents? What constraints face engineers individually and collectively in attempting to reconsider and retarget beliefs and actions bearing on their work as technological decision makers?
International Symposium on Technology and Society | 1996
Edward J. Woodhouse
Places the political use of expertise into a larger conceptual framework. Argues that there are sharp limits on knowledge of any kind displacing politics as a method of settling value-laden disputes, and that purveyors of expertise ought to see it as an aid or supplement to partisan negotiations among affected interests, never a substitute for them. Argues that the main path for improving expertise is in helping decision makers devise initial precautions, craft flexible policy trials, speed up learning from experience, and otherwise cope with the uncertainty inherent in complex social choices. Suggests that the intelligence of social outcomes ordinarily is enhanced when experts utilize their abilities on behalf of social interests that otherwise would be disadvantaged in political negotiations.
Annals of the New York Academy of Sciences | 1989
Edward J. Woodhouse
From observing several decades of intense controversy over technological risks, political scientists are beginning to understand the ingredients of successful policy-making about risks, as well as some of the obstacles to it. This paper reviews part of what we have learned, with special attention to nuclear power plants, biotechnology, and toxic chemicals. It suggests how the key insights might be applied to the regulation of earthquake hazards, and argues that researchers and other relevant professionals have a responsibility to take a more political approach if they want their analyses to lead to improvements in society’s protections against earthquake damage.
Policy Sciences | 1992
Andrew Weiss; Edward J. Woodhouse
IEEE Technology and Society Magazine | 2004
Edward J. Woodhouse
IEEE Technology and Society Magazine | 2001
Jack C. Swearengen; Edward J. Woodhouse