
Publication


Featured research published by Peter A. Cohen.


American Educational Research Journal | 1982

Educational Outcomes of Tutoring: A Meta-analysis of Findings

Peter A. Cohen; James A. Kulik; Chen-Lin C. Kulik

A meta-analysis of findings from 65 independent evaluations of school tutoring programs showed that these programs have positive effects on the academic performance and attitudes of those who receive tutoring. Tutored students outperformed control students on examinations, and they also developed positive attitudes toward the subject matter covered in the tutorial programs. The meta-analysis also showed that tutoring programs have positive effects on children who serve as tutors. Like the children they helped, the tutors gained a better understanding of and developed more positive attitudes toward the subject matter covered in the tutorial program. Participation in tutoring programs had little or no effect, however, on the self-esteem of tutors and tutees.


Review of Educational Research | 1981

Student Ratings of Instruction and Student Achievement: A Meta-analysis of Multisection Validity Studies

Peter A. Cohen

The present study used meta-analytic methodology to synthesize research on the relationship between student ratings of instruction and student achievement. The data for the meta-analysis came from 41 independent validity studies reporting on 68 separate multisection courses relating student ratings to student achievement. The average correlation between an overall instructor rating and student achievement was .43; the average correlation between an overall course rating and student achievement was .47. While large effect sizes were also found for more specific rating dimensions such as Skill and Structure, other dimensions showed more modest relationships with student achievement. A hierarchical multiple regression analysis showed that rating/achievement correlations were larger for full-time faculty, when students knew their final grades before rating instructors, and when an external evaluator graded students’ achievement tests. The results of the meta-analysis provide strong support for the validity of student ratings as measures of teaching effectiveness.
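The pooled correlations reported above (.43 for instructor ratings, .47 for course ratings) come from averaging study-level correlations across the 68 multisection courses. A minimal sketch of one standard pooling approach, Fisher's z transformation weighted by sample size; the specific weighting scheme used in the 1981 synthesis, and the study values below, are illustrative assumptions:

```python
import math

def pool_correlations(studies):
    """Pool study-level correlations via Fisher's z transform.

    studies: list of (r, n) pairs -- a correlation and the number of
    course sections in each multisection validity study.
    """
    # Transform each r to z = atanh(r) and weight by n - 3,
    # since the sampling variance of z is 1 / (n - 3).
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for r, n in studies)
    # Back-transform the weighted mean z to the correlation metric.
    return math.tanh(num / den)

# Hypothetical study-level values, for illustration only.
studies = [(0.40, 10), (0.50, 15), (0.43, 8)]
print(round(pool_correlations(studies), 2))
```

Pooling in the z metric rather than averaging raw correlations avoids the downward bias that the bounded r scale introduces when correlations are large.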


Review of Educational Research | 1980

Effectiveness of Computer-based College Teaching: A Meta-analysis of Findings

James A. Kulik; Chen-Lin C. Kulik; Peter A. Cohen

This review used Glass’ (1976) meta-analytic techniques to integrate findings from 59 independent evaluations of computer-based college teaching. The meta-analysis showed that computer-based instruction made small but significant contributions to the course achievement of college students and also produced positive, but again small, effects on the attitudes of students toward instruction and toward the subject matter they were studying. Computer-assisted instruction also reduced substantially the amount of time needed for instruction. In general, the meta-analysis found little relationship between study findings and design features of the experiments, settings for the studies, or manner and date of publication of the findings.


Research in Higher Education | 1980

EFFECTIVENESS OF STUDENT-RATING FEEDBACK FOR IMPROVING COLLEGE INSTRUCTION: A Meta-Analysis of Findings

Peter A. Cohen

This article applied meta-analytic methodology to integrate findings from 22 comparisons of the effectiveness of student-rating feedback at the college level. On the average, feedback had a modest but significant effect on improving instruction. Instructors receiving mid-semester feedback averaged .16 of a rating point higher on end-of-semester overall ratings than did instructors receiving no mid-semester feedback. This corresponded to a gain of over one-third of a standard-deviation unit, or a percentile gain of 15 points. The effects of student-rating feedback were accentuated when augmentation or consultation accompanied the ratings. Other study features, such as the length of time available to implement changes and the use of normative data, did not produce different effect sizes.
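The conversion in this abstract from a standard-deviation gain to a percentile gain follows the usual practice of mapping an effect size onto the normal curve. A minimal sketch, assuming normality; the abstract does not state the exact effect size, so d = 0.38 below is an illustrative assumption consistent with "over one-third of a standard-deviation unit":

```python
import math

def percentile_gain(d):
    """Percentile gain implied by effect size d, assuming normality.

    The average treated subject scores at the Phi(d) percentile of the
    control distribution, so the gain over the 50th percentile is
    100 * (Phi(d) - 0.5) points.
    """
    phi = 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))  # standard normal CDF
    return 100.0 * (phi - 0.5)

# d a bit over one-third of a standard deviation (assumed value)
print(round(percentile_gain(0.38)))  # -> 15
```

Under this reading, an effect of roughly 0.38 standard deviations moves the average instructor from the 50th to about the 65th percentile of the no-feedback distribution.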


Review of Educational Research | 1988

Implementation Problems in Meta-Analysis

Philip C. Abrami; Peter A. Cohen; Sylvia d’Apollonia

The paper probes implementation problems in meta-analysis by comparing six quantitative reviews of the research on the validity of student ratings of instructional effectiveness. The disparate conclusions of the reviews are explained by comparing the syntheses at each step in a meta-analysis: specifying inclusion criteria, locating studies, coding study features, calculating individual study outcomes, and data analysis. In addition, the resolution of implementation problems is explored, and several suggestions are offered for improving the practice of quantitative synthesis, including the following: techniques for the specification of inclusion criteria, guidelines for the systematic summary of study features, a call for the analysis of statistical power and the reduction of Type II errors, and a discussion of the analytical problems posed by the presence of between- and within-study findings.


Educational Technology Research and Development | 1981

A Meta-Analysis of Outcome Studies of Visual-Based Instruction

Peter A. Cohen; Barbara J. Ebeling; James A. Kulik

This article describes a statistical integration of findings from 74 studies of visual-based college teaching. In the typical study, students learned slightly more from visual-based instruction than from conventional teaching. In the typical study, visual-based instruction had no special effect on course completion, student attitudes, or the correlation between aptitude and achievement. Students were equally likely to complete visual-based and conventional classes; their attitudes toward the two kinds of classes were very similar; and aptitude played a strong role in determining student achievement in each kind of class.


Improving College and University Teaching | 1980

The Role of Colleagues in the Evaluation of College Teaching

Peter A. Cohen; Wilbert J. McKeachie

The role of faculty colleagues in the evaluation of college teaching has yet to be defined adequately: national studies of evaluative information on teaching (3, 23, 37) have shown trends of diminishing faculty involvement. The percentage of colleges and universities using colleague opinions, classroom visits, examination of course syllabi, and student achievement measures has decreased over the years. On the other hand, there has been a dramatic increase in the use of student ratings. Although in the


Teaching of Psychology | 1982

Using an Interactive Feedback Procedure to Improve College Teaching.

Peter A. Cohen; Gregory Herr

Recent research has shown that the usefulness of feedback from student ratings of instruction is greatly enhanced when such feedback is supplemented by some type of consultation. A study by McKeachie et al. (1980) demonstrated that instructors who received personal feedback of mid-term student ratings, along with consultation from an experienced teacher, were perceived to be more effective at the end of the term than either those instructors who received printed feedback or those who received no feedback at all. Erickson and Erickson (1979) evaluated a more formalized teaching consultation procedure. Instructors who participated in this program spent about ten hours during the term meeting with teaching consultants and discussing feedback and change strategies. The Ericksons reported strong effects for this program; students of instructors who worked with the consultant reported more positive change in teaching performance over the term than did students of control group instructors. In addition, those instructors who were involved in the consultation procedure were more positive in their self-ratings of improvement than were instructors who did not receive consultation. Finally, Cohen (1980) synthesized findings from experimental and quasi-experimental studies investigating the effects of mid-term student-rating feedback and found that effects were especially strong for studies in which expert consultation accompanied student feedback. Thus there is a growing body of research that supports the use of consultants to improve college teaching. In spite of this evidence, however, most faculty members at institutions of higher learning do not make use of teaching consultants. At many institutions comprehensive services and resources are limited, and, at best, there are few teaching experts who could provide assistance. Indeed, in many cases potential teaching consultants remain unknown to the general faculty.
Rarer by far is the availability of more formalized teaching consultation procedures such as those described by Erickson and Erickson. Even when such consultation services are recognized and available, faculty members rarely seek out this sort of resource. Such an involvement may be perceived by instructors as direct teaching remediation, but, more likely, it is easy for faculty members to resist activities that demand high personal involvement. What seems to be needed, then, is a feedback system that incorporates positive features of the consultation process without the high cost of face-to-face communication. Such a system would be self-directed and would guide instructors in delineating specific areas for teaching improvement. With the present study, we evaluated the effectiveness of just such a model, the Formative Assessment of College Teaching (FACT) model (Cohen & Herr, 1979). We hypothesized that instructors who were given a programmed feedback booklet designed to help them review, interpret, and use mid-term student-rating data would show greater teaching improvement than would instructors who received descriptive rating feedback alone or no feedback at all.


American Psychologist | 1979

A Meta-Analysis of Outcome Studies of Keller's Personalized System of Instruction.

James A. Kulik; Chen-Lin C. Kulik; Peter A. Cohen


Teaching of Psychology | 1982

Validity of Student Ratings in Psychology Courses: A Research Synthesis.

Peter A. Cohen
