
Publication


Featured research published by Daniel L. Stufflebeam.


Archive | 1983

The CIPP Model for Program Evaluation

Daniel L. Stufflebeam

This chapter is a review and update of the so-called CIPP Model for evaluation. That model (Stufflebeam, 1966) was developed in the late 1960s as one alternative to the views about evaluation that were most prevalent at the time, namely those oriented to objectives, testing, and experimental design. It emerged alongside other new conceptualizations, especially those developed by Scriven (1966) and Stake (1967). (For a discussion of these historical developments, see Chapter 1 of this book.) The CIPP approach was applied in many institutions; for example, the Southwest Regional Educational Laboratory in Austin, Texas; the National Center for Vocational and Technical Education; the U.S. Office of Education; and the school districts in Columbus, Toledo, and Cincinnati, Ohio; Dallas, Fort Worth, Houston, and Austin, Texas; and Saginaw, Detroit, and Lansing, Michigan. It was the subject of research and development by Adams (1971), Findlay (1979), Nevo (1974), Reinhard (1972), Root (1971), Webster (1975), and others. It was the central topic of the International Conference on the Evaluation of Physical Education held in Jyvaskyla, Finland, in 1976 and served as the advance organizer used to group the evaluations presented and discussed during that week-long conference. It was also the central topic of the Eleventh National Phi Delta Kappa Symposium on Educational Research, and throughout the 1970s it was referenced in many conferences and publications. It was most fully explicated in the Phi Delta Kappa book Educational Evaluation and Decision Making (Stufflebeam et al., 1971) and most fully implemented in the Dallas Independent School District. Its conceptual and operational forms have evolved in response to critiques, applications, research, and parallel developments, and it continues to be referenced and applied in education and other fields.


Archive | 1983

The Standards for Evaluation of Educational Programs, Projects, and Materials

Daniel L. Stufflebeam; George F. Madaus

In 1980, a Joint Committee appointed by 12 organizations concerned with educational evaluation issued one of the most significant documents to date in the field of educational evaluation. It consisted of a set of 30 standards to be used both to guide the conduct of evaluations of educational programs, projects, and materials and to judge the soundness of such evaluations. The document, entitled Standards for Evaluations of Educational Programs, Projects, and Materials, was published in 1981 by the McGraw-Hill Company. The standards were the result of an extensive development process that involved the work of about 200 people and required more than four years to complete. The Committee’s work did not stop with publication of the standards; it continues to promote their sound use and to conduct ongoing review and development. This chapter briefly describes the development and nature of the standards and summarizes them for the reader.


American Journal of Evaluation | 2001

The Metaevaluation Imperative

Daniel L. Stufflebeam

The evaluation field has advanced sufficiently in its methodology and public service that evaluators can and should subject their evaluations to systematic metaevaluation. Metaevaluation is the process of delineating, obtaining, and applying descriptive and judgmental information about an evaluation’s utility, feasibility, propriety, and accuracy, and about its systematic nature, competence, integrity/honesty, respectfulness, and social responsibility, in order to guide the evaluation and publicly report its strengths and weaknesses. Formative metaevaluations, employed while an evaluation is being planned and conducted, assist evaluators to plan, conduct, improve, interpret, and report their evaluation studies. Summative metaevaluations, conducted following an evaluation, help audiences see an evaluation’s strengths and weaknesses and judge its merit and worth. Metaevaluations serve public, professional, and institutional interests by helping to assure that evaluations provide sound findings and conclusions; that evaluation practices continue to improve; and that institutions administer efficient, effective evaluation systems. Professional evaluators are increasingly taking their metaevaluation responsibilities seriously but need additional tools and procedures to apply their standards and principles of good evaluation practice.


Archive | 1985

Introduction to Evaluation

Daniel L. Stufflebeam; Anthony J. Shinkfield

Evaluation is one of the most fundamental components of sound professional services. The clients of professionals deserve assistance that is directed to their needs, of high quality, up-to-date, and efficient. In order to hold professionals accountable for satisfying such standards, society must regularly subject professional services to evaluations. Some of the evaluation work that is directed at regulation and protection of the public interest obviously must be conducted by independent bodies, such as government agencies and accrediting boards. But fundamentally, the most important evaluations of professional services are those conducted (or commissioned) by the professionals themselves.


Archive | 2000

Program Evaluation: A Historical Overview

George F. Madaus; Daniel L. Stufflebeam

Evaluators need to be aware of both contemporary and historical aspects of their emerging profession, including its philosophical underpinnings and conceptual orientations. Without this background, evaluators are doomed to repeat past mistakes and, equally debilitating, will fail to sustain and build on past successes.


American Journal of Evaluation | 2001

Evaluation checklists: practical tools for guiding and judging evaluations

Daniel L. Stufflebeam

This article describes a project designed to provide evaluators, their clients, and other stakeholders with checklists for guiding and assessing formative and summative evaluations. The checklists pertain to program, personnel, and product evaluations, and reflect different conceptualizations of evaluation. They are constructed for use in planning, contracting, conducting, reporting, and judging evaluations. The checklists, along with papers on the logic and methodology of checklists and guidelines for developing checklists, are conveniently available on the Western Michigan University Evaluation Center’s Web site www.wmich.edu/evalctr/checklists/.


Peabody Journal of Education | 1991

Principal Evaluation: New Directions for Improvement.

Daniel L. Stufflebeam; David Nevo

Together with parents and teachers, school principals play crucial roles in the effective education of America’s children and youth. In recent years, researchers and policymakers have supported what parents and teachers have long known experientially: that the quality of leadership provided by school principals significantly influences the quality of schools (Andrews & Soder, 1987; Bossert, Dwyer, Rowan, & Lee, 1982; Clinton, 1991; Duke, 1987, 1992; Greenfield, 1987; Hallinger & Murphy, 1987; Leithwood, 1988; Schmitt & Schechtman, 1990; Sergiovanni, 1987). Consequently, systematic and careful evaluation of principal qualifications, competence, and performance is critically important to the success of America’s elementary and secondary schools. The public interest is no less at risk from incompetent school principals than from incompetent doctors, lawyers, and accountants, and all such public servants should be carefully evaluated throughout their professional careers. Sound evaluations of the aptitudes, proficiencies, performance, and special achievements of principals not only protect the public from poor …


American Journal of Evaluation | 2000

Lessons in contracting for evaluations

Daniel L. Stufflebeam

A 1995 evaluation of the U.S. Marine Corps’ personnel evaluation system and a 1991 evaluation of the National Assessment Governing Board procedure for setting cut scores on the mathematics section of the National Assessment of Educational Progress provided valuable lessons in how to minimize the risks of misunderstanding what an evaluation will involve, subversion of the evaluation, controversy, and animosity. Both evaluations were nationally significant, had to be conducted quickly, were politically volatile, were keyed to professional standards, and had substantial impacts. However, the latter evaluation went sour, whereas the former received an official commendation. Evaluators should do all they can to demonstrate to clients that sound evaluations are not to be feared, but should be valued and used. An investigation of what went right and wrong in these evaluations identified contracting as a key variable. This article advises evaluators and clients to regularly negotiate clear, sound contracts before proceeding with an evaluation, and presents a checklist to assist in the contracting process.


Evaluation & the Health Professions | 1978

Meta Evaluation: An Overview

Daniel L. Stufflebeam

This article provides an overview of meta evaluation, i.e., the practice of evaluating evaluation. The state of the art in this area is reviewed in the first part of the article. The second part discusses the arguments for and against the continued development and use of meta evaluation. The third part introduces an overall conceptualization of meta evaluation, and the fourth and final part suggests standards and guidelines for use in assessing evaluation work.


Educational and Psychological Measurement | 1967

Estimating Test Norms from Variable Size Item and Examinee Samples

Desmond L. Cook; Daniel L. Stufflebeam

This study examined whether estimates of the norms distribution for a 70-item multiple-choice test could be made by administering a different sample of seven items to each of 10 examinee samples of 100 subjects each, as opposed to administering all 70 items to a single sample of 1,000 subjects. Estimates of norm data were also obtained from each of the 10 examinee samples of 100 subjects on the total 70-item test. Comparisons were made between the norm statistics (mean, standard deviation, and frequency distribution) and estimates of these same statistics derived from both the item samples and the examinee samples.
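
As a rough illustration of the item-sampling design described in this abstract, the following Python sketch simulates hypothetical examinees and items, then compares a full-test norm mean (1,000 examinees on all 70 items) with a mean estimated from ten disjoint seven-item subsamples. The response model, the rescaling of the seven-item mean to the 70-item metric, and all parameter values are assumptions made for this illustration; they are not the data or estimators used in the original study.

# Minimal simulation sketch of item sampling for norm estimation (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
n_items, n_groups, group_size = 70, 10, 100

# Hypothetical item difficulties on a logistic scale (assumed, for illustration).
difficulty = rng.normal(0.0, 1.0, n_items)

def number_correct(abilities, item_idx):
    """Simulate each examinee's number-correct score on the selected items."""
    p = 1.0 / (1.0 + np.exp(-(abilities[:, None] - difficulty[item_idx][None, :])))
    return (rng.random(p.shape) < p).sum(axis=1)

# Benchmark norm: 1,000 examinees take all 70 items.
full_scores = number_correct(rng.normal(0.0, 1.0, n_groups * group_size),
                             np.arange(n_items))

# Item-sampling design: each group of 100 examinees takes a disjoint 7-item subset;
# the observed group mean is rescaled to the 70-item metric (multiplied by 70 / 7).
item_order = rng.permutation(n_items)
group_estimates = []
for g in range(n_groups):
    items = item_order[g * 7:(g + 1) * 7]
    scores = number_correct(rng.normal(0.0, 1.0, group_size), items)
    group_estimates.append(scores.mean() * n_items / len(items))

print("Full-test norm mean:        ", round(float(full_scores.mean()), 2))
print("Item-sampling estimate mean:", round(float(np.mean(group_estimates)), 2))

Under this simple rescaling the item-sampling estimate of the mean tracks the full-test mean closely; estimating the standard deviation and full frequency distribution from item samples requires more elaborate estimators than shown here.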

Collaboration


Top co-authors of Daniel L. Stufflebeam and their affiliations.

I. Carl Candoli, Western Michigan University

Lori A. Wingate, Western Michigan University

Ralph W. Tyler, Center for Advanced Study in the Behavioral Sciences

Chris L. S. Coryn, Western Michigan University

Michael Scriven, Claremont Graduate University