William N. Dunn
University of Pittsburgh
Publications
Featured research published by William N. Dunn.
Journal of Policy Analysis and Management | 1982
William N. Dunn
This article presents to public health professionals concepts and perspectives from political science relevant for creating a healthier public policy. Currently, there is no uniform vision of what constitutes the public interest, and the decisions of public administrations tend to be based on compromise. In public debate, what is paramount is the capacity to persuade. From the perspective of public policy analysis, the crucial issue is definition: the final decision depends on the definition of the problem that has emerged triumphant in the public debate among competing actors with different definitions of the problem. From a policy analysis perspective, the problems entering the agenda of public administration do not necessarily correspond to their severity, as competing actors try to impose their point of view. Because of its historical evolution, the Spanish political system has specific traits. The relatively weak democratic tradition tends to make the decision process less visible, with strong technocratic elements and weaker social articulation. Both the juridical tradition and liberal rhetoric portray lobbying as contrary to the public interest, when in fact it is constantly performed by powerful vested interest groups, through both personal contacts and economic connections. Regulatory policies, with concentrated costs and diffuse benefits, seem to be moving from Spain to the European Union. To promote healthier public policies, the development of civil society initiatives and the building of coalitions will play an increasingly greater role in the future.
Science Communication | 1983
William N. Dunn
In this issue, “Research in Progress” addresses the problem of measuring knowledge use. Knowledge use refers to different processes of knowing, some of which result in behavior which may be directly observed. Measurement refers to the assignment of numerals to such processes, or to their observed behavioral consequences, according to some rule (Stevens, 1974). Rules may span the classical hierarchy of ratio to nominal levels of measurement. Efforts to measure knowledge (or information) use are based on prior conceptual and procedural decisions. Procedural decisions involve the adoption of a particular method of observing knowledge use (e.g., questionnaires versus participant observation), while conceptual decisions involve the definition and classification of knowledge use itself (e.g., conceptual versus instrumental use). All measures of knowledge
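The Stevens-style definition above — assigning numerals to observed processes according to a rule — can be illustrated with a minimal sketch. The use categories, extent rankings, and codes below are assumptions for demonstration only, not a coding scheme from the article.

```python
# Sketch of rule-based measurement in the sense described above: numerals are
# assigned to observed instances of knowledge use according to explicit rules.
# The categories and their codes are hypothetical.

NOMINAL = {"conceptual": 1, "instrumental": 2, "symbolic": 3}   # labels only, no order
ORDINAL = {"none": 0, "read": 1, "cited": 2, "acted_on": 3}     # ranked extent of use

def measure(use_type, extent):
    """Return (nominal code, ordinal code) for one observed instance of use."""
    return NOMINAL[use_type], ORDINAL[extent]

print(measure("instrumental", "acted_on"))  # -> (2, 3)
```

The nominal codes carry no order (conceptual use is not "less than" symbolic use), while the ordinal codes do — the distinction that places a measure on the classical nominal-to-ratio hierarchy.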
Human Relations | 1986
William N. Dunn; Ari Ginsberg
Following the lead of other contributors to cognitive organizational theory, this paper offers a sociocognitive network methodology that represents a sharp departure from traditional approaches to organizational analysis. After outlining the contours of a sociocognitive perspective of organizational dynamics, we present a highly flexible, but reproducible methodology that allows us to uncover and quantify differences in the content of organizational reference frames. We then demonstrate how the resulting indices of cognitive content may be merged with standard sociometric data, creating a network matrix that measures the sociocognitive connectedness of an organization's participants and enables us to identify and monitor barriers to organizational innovation and change.
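The merging step described above — combining indices of cognitive content with sociometric data into one network matrix — can be sketched as follows. This is a minimal illustration, not the authors' actual methodology: the function name, the use of cosine similarity as the cognitive-content index, and the elementwise weighting are all assumptions.

```python
import numpy as np

# Hypothetical sketch: weight each sociometric tie by the cognitive similarity
# of the two participants' organizational reference frames, yielding a
# "sociocognitive connectedness" matrix in the spirit of the abstract above.

def sociocognitive_matrix(frames, ties):
    """frames: (n, k) array of frame-content scores per participant.
    ties: (n, n) 0/1 sociometric adjacency (who interacts with whom).
    Returns an (n, n) matrix of ties weighted by cognitive similarity."""
    # Cosine similarity between participants' reference-frame profiles
    norms = np.linalg.norm(frames, axis=1, keepdims=True)
    unit = frames / np.clip(norms, 1e-12, None)
    similarity = unit @ unit.T
    # A tie counts only where interaction exists; its weight is shared content
    return ties * similarity

frames = np.array([[1.0, 0.0],   # participant 0: frame A
                   [0.9, 0.1],   # participant 1: mostly frame A
                   [0.0, 1.0]])  # participant 2: frame B
ties = np.array([[0, 1, 1],
                 [1, 0, 0],
                 [1, 0, 0]])
print(sociocognitive_matrix(frames, ties))
```

In this toy network, participants 0 and 1 are both connected and cognitively aligned (weight near 1), while the 0-2 tie crosses a frame boundary (weight 0) — the kind of pattern the abstract suggests marks a barrier to innovation and change.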
Knowledge, Technology & Policy | 1988
William N. Dunn; Burkart Holzner
An emergent social science of knowledge applications, drawing on a substantial multidisciplinary literature published over the past twenty-five years, signals an inversion of typical scholarly reasoning about the knowledge-society nexus. Whereas most scholarly research thus far has concentrated on conditions believed to affect the production of scientific and professional knowledge, we pose a new problematic: What must we examine in order to comprehend and consciously shape applications of scientific and professional knowledge to the manifold problems facing contemporary societies? To date, approaches to this problematic have proceeded on the basis of four broadly accepted if abstract theses about the nature of contemporary knowledge systems: subjectivity, corrigibility, sociality, and complexity. Within the boundaries supplied by these commonly accepted theses are unresolved controversies expressed in competing visions of complexity, alternative perspectives of causation, rival images of progress, and conflicting criteria of application.
Evaluation and Program Planning | 1981
William N. Dunn; Ian I. Mitroff; Stuart Jay Deutsch
Abstract As a discipline within the applied social sciences, evaluation research survives despite mounting evidence that it is peripheral or even damaging to efforts aimed at the improvement of public policies and programs. The obsolescence of evaluation research is a consequence of its failure to recognize that the enterprise of evaluation is an ill-structured problem, and of its uncritical acceptance of implicit decision rules and underlying philosophical assumptions which maximize its own irrelevance. An alternative approach, systemic-dialectical evaluation, is proposed as a concrete means for enhancing the relevance of the discipline.
Evaluation and Program Planning | 1990
William N. Dunn
Abstract The structural model of argument (when augmented by practical standards of truth-estimation) yields optimally plausible answers to questions that demand a leap beyond the data at the disposal of policy analysts. The reconstruction of policy inquiry according to a new approach to truth-estimation promises conclusions which are the best that may be obtained in the complex circumstances of public discourse. Those circumstances demand the radical reinterpretation of knowledge according to requirements of the social context of application. The reconstruction of policy inquiry along lines of practical reasoning makes it possible to distinguish plausibly true beliefs (knowledge) from beliefs in general (opinion). Thus we avoid the confusion of truth and persuasion. Practical reasoning does not replace standard deductive and inductive logics. It accommodates these standard modes of reasoning by incorporating them in a multilevel process that captures the many logics of public discourse.
Science Communication | 1987
Burkart Holzner; William N. Dunn; Muhammad Shahidullah
The social system of knowledge—or knowledge system, for short—is an accounting scheme that helps organize the search for social impact of science (SIS) indicators. The accounting scheme specifies six related knowledge functions (production, structuring, storage, distribution, utilization, and mandating) that are performed in different domains (industry, agriculture, education, and so forth) by many institutions and organizations that vary in size, autonomy, specialization, and complexity. By mediating relations between science and society, these institutions and organizations facilitate and retard the impact of science on the larger society. The knowledge systems accounting scheme also helps identify aspects of science impacts on society (e.g., scientific evidence), aspects of society on which science impacts (e.g., the economy, polity, and culture), and structures by which social impacts of science are mediated (e.g., technical communities). The knowledge system provides a conceptual base for the future development of what has been called “knowledge systems accounting” (see Dunn and Holzner, this volume).
Knowledge, Technology & Policy | 1991
William N. Dunn
Conclusion Policy analysis is a specialized intellectual activity that affects and is affected by the exercise of power, rule, and authority in knowledge systems. The organized complexity of this system, formed by the interpenetration of tangled knowledge functions which appear as a river delta, raises doubts about the appropriateness of policy impact as a standard of accountability for policy analysts. The typical practice of “random mediation” may be replaced with forms of “systematic mediation” which, directed towards the investigation of rival hypotheses about the policy impact of policy analysis, enlarge the scope of usable ignorance. The strategy of usable ignorance, supplemented by recommendations in areas of agenda setting, managing pragmatic validity, and methodology development, is a way to begin dealing with the organized complexity of the knowledge system in which policy analysts work today.
Simulation Modelling Practice and Theory | 2002
William N. Dunn
Abstract This paper presents a new method for structuring decision problems as an essential aspect of solving them. This new method, the method of context validation, is a form of what Campbell (From Evolutionary Epistemology Via Selection Theory to a Sociology of Scientific Validity, 1996) and Dunn (Testing Rival Hypotheses with Pragmatic Eliminative Induction: The Case of National Maximum Speed Limits, unpublished; Am. Behav. Sci. 40 (3) (1997) 277; Knowledge, Power, and Participation in Environmental Policy Analysis, 2001) call pragmatic eliminative induction. By adding the term “pragmatic” to “eliminative induction,” the method is distinguished from Mill’s (J. Washington Acad. Sci. 16 (2) (1926) 317) analytical (or logical) variant of eliminative induction, which has been influential in designing social experiments that seek to investigate rival hypotheses, or so-called “threats to validity,” that must be tested and eliminated to know whether a technological intervention is responsible for changes in a target social system. The paper shows how context validation can be used to estimate an approximately complete set of rival hypotheses, a sine qua non of research on complex sociotechnical systems. The method is exemplified by applying it to a major sociotechnical experiment, the US National Maximum Speed Limit of 1974. The method is shown to mitigate the commission of Type III errors (solving the wrong problem), which in the present case stemmed from the failure to define the relation between speed and traffic safety as a social and political, as well as a technical, problem. Because of this failure, policy makers concluded that maximum speed limits were effective in saving lives when they were not.
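The core move of eliminative induction — enumerating rival hypotheses and discarding those inconsistent with the evidence — can be sketched in a few lines. The hypotheses, evidence record, and consistency tests below are illustrative assumptions loosely modeled on the speed-limit example, not Dunn's actual data or analysis.

```python
# Hypothetical sketch of eliminative induction: retain only those rival
# hypotheses that survive every piece of evidence.

def eliminate(rivals, evidence):
    """rivals: dict mapping hypothesis name -> predicate over an evidence record.
    evidence: list of records (dicts).
    Returns the names of hypotheses consistent with all evidence."""
    survivors = dict(rivals)
    for record in evidence:
        survivors = {name: test for name, test in survivors.items()
                     if test(record)}
    return list(survivors)

# Rival explanations for a post-1974 drop in traffic fatalities (illustrative)
rivals = {
    "speed_limit_effective": lambda e: e["fatalities_fell"] and e["speeds_fell"],
    "reduced_travel": lambda e: e["fatalities_fell"] and e["miles_driven_fell"],
    "safer_vehicles": lambda e: e["fatalities_fell"],
}

evidence = [
    {"fatalities_fell": True, "speeds_fell": False, "miles_driven_fell": True},
]

print(eliminate(rivals, evidence))  # the speed-limit hypothesis is eliminated
```

The "pragmatic" variant described in the abstract differs from this purely logical filter in how the set of rivals is generated — context validation aims at an approximately complete set before elimination begins — but the elimination step itself has this structure.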
Science Communication | 1987
William N. Dunn; Burkart Holzner; Muhammad Shahidullah; Andrea M. Hegedus
The task of designing social impact of science (SIS) indicators is an ill-structured or systemic problem involving competing design goals, indeterminate design states, unspecified design rules, and an unbounded design space. These features of the problem are not a result of imperfections of measurement alone; they are due primarily to properties of the knowledge system that make it resemble a tangled river delta (anastomotic reticulum) in which different functional patterns (serial, parallel, assembly, arborescent, segmented, cyclic) coexist. The stunning complexity of knowledge systems makes it difficult but nevertheless possible to develop SIS indicators that are policy relevant by virtue of their being at once relational, causal, and normative (see also Peters, this volume). Any attempt to improve the policy relevance of impact indicators will recognize that systemic problems require nonconventional solutions based on principles of externalization, formalization, and simplification. An initial attempt to externalize the design process yields typologies of science output indicators and social impact indicators that may be conjoined to form social impact of science (SIS) indicators. By formalizing rules for making and challenging causal inferences, we can formulate rival hypotheses about the role of knowledge functions and structures in mediating the impacts of science on the achievement of social goals. By simplifying the design process we can maximize the likelihood that SIS indicators and the basis for their construction are widely comprehended by groups that have a stake in the social performance of science.