
Publication


Featured research published by George S. Howard.


American Psychologist | 1991

Culture tales: A narrative approach to thinking, cross-cultural psychology, and psychotherapy.

George S. Howard

Narrative (or storytelling) approaches to understanding human action have recently become more popular in several areas of psychology. Treating human thinking as instances of story elaboration offers numerous implications for many domains of psychological theory, research, and practice. For example, several instances of cultural diversity take on a different hue when viewed from a narrative perspective. Finally, several authors (e.g., Bruner, 1986; Howard, 1989; Mair, 1989; McAdams, 1985; Polkinghorne, 1988; Sarbin, 1986) see the development of identity as an issue of life-story construction; psychopathology as instances of life stories gone awry; and psychotherapy as exercises in story repair.


Applied Psychological Measurement | 1979

Internal invalidity in pretest-posttest self-report evaluations and a re-evaluation of retrospective pretests.

George S. Howard; Kenneth M. Ralph; Nancy A. Gulanick; Scott E. Maxwell; Don W. Nance; Sterling K. Gerber

True experimental designs (Designs 4, 5, and 6 of Campbell & Stanley, 1963) are thought to provide internally valid results. This paper describes five studies involving the evaluation of various treatment interventions and identifies a source of internal invalidity when self-report measures are used in a Pretest-Posttest manner. An alternative approach (Retrospective Pretest-Posttest design) to measuring change is suggested, and data comparing its accuracy with the traditional Pretest-Posttest design in measuring treatment effects are presented. Finally, the implications of these findings for evaluation research using self-report instruments and the strengths and limitations of retrospective measures are discussed.


Evaluation Review | 1980

Response-Shift Bias

George S. Howard

Evaluations of experimental interventions which employ self-report measures are subject to an instrumentation-related source of contamination known as response-shift bias. The difficulty arises when the experimental intervention changes the subjects' evaluation standard with regard to the dimension measured with the self-report instrument. In such cases even the true experimental designs (Designs 4, 5, and 6; Campbell and Stanley, 1963) can provide internally invalid results. Retrospective pretest ratings are recommended as one way in which response-shift bias might be attenuated. Research demonstrating response-shift effects and the superiority of retrospective ratings over traditional self-report pretest ratings in providing a measure of change is reviewed. Finally, the current status of retrospection in psychological research is reviewed, and issues are considered for future research needed to identify the unique strengths and limitations of retrospective approaches.


American Psychologist | 2015

Is psychology suffering from a replication crisis? What does “failure to replicate” really mean?

Scott E. Maxwell; Michael Y. Lau; George S. Howard

Psychology has recently been viewed as facing a replication crisis because efforts to replicate past study findings frequently do not show the same result. Often, the first study showed a statistically significant result but the replication does not. Questions then arise about whether the first study results were false positives, and whether the replication study correctly indicates that there is truly no effect after all. This article suggests these so-called failures to replicate may not be failures at all, but rather are the result of low statistical power in single replication studies, and the result of failure to appreciate the need for multiple replications in order to have enough power to identify true effects. We provide examples of these power problems and suggest some solutions using Bayesian statistics and meta-analysis. Although the need for multiple replication studies may frustrate those who would prefer quick answers to psychology's alleged crisis, the large sample sizes typically needed to provide firm evidence will almost always require concerted efforts from multiple investigators. As a result, it remains to be seen how many of the recently claimed failures to replicate will be supported or instead may turn out to be artifacts of inadequate sample sizes and single-study replications.
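The power problem described in this abstract can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the effect size and sample sizes are hypothetical, not taken from the article, and it uses a normal approximation to the power of a two-sided two-sample test.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(d, n_per_group, alpha_z=1.959963984540054):
    """Approximate power of a two-sided two-sample z-test for a true
    standardized effect size d with n_per_group subjects per group.
    Ignores the negligible chance of significance in the wrong direction."""
    noncentrality = d * sqrt(n_per_group / 2.0)
    return 1.0 - normal_cdf(alpha_z - noncentrality)

# A modest (hypothetical) true effect, d = 0.4, with n = 30 per group:
print(round(two_sample_power(0.4, 30), 2))   # ≈ 0.34: most replications "fail"
# The same effect with n = 200 per group:
print(round(two_sample_power(0.4, 200), 2))  # ≈ 0.98: replication is informative
```

Under these assumed numbers, a single small replication of a real effect is more likely than not to come out nonsignificant, which is the sense in which a "failure to replicate" may reflect inadequate power rather than a false positive.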


Applied Psychological Measurement | 1979

The Feasibility of Informed Pretests in Attenuating Response-Shift Bias

George S. Howard; Patrick R. Dailey; Nancy A. Gulanick

Response-shift bias has been shown to contaminate self-reported pretest/posttest evaluations of various interventions. To eliminate the detrimental effects of response shifts, retrospective measures have been employed as substitutes for the traditional self-reported pretest. Informed pretests, wherein subjects are provided information about the construct being measured prior to completing the pretest self-report, are considered in the present studies as an alternative method to retrospective pretests in reducing response-shift effects. In Study 1 subjects were given a 20-minute presentation on assertiveness, which failed to significantly improve the accuracy of self-reported assertiveness. Other procedural influences hypothesized to improve self-report accuracy—previous experience with the objective measure of assertiveness and previous completion of the self-report measure—also were not related to increased self-report accuracy. In a second study, information about interviewing skills was provided at pretest using behaviorally anchored rating scales to participants in a workshop on interviewing skills. Response-shift bias was not attenuated by providing subjects with information about interviewing prior to the intervention. Change measures which employed retrospective pretest measures demonstrated somewhat higher (although nonsignificant) validity coefficients than measures of change utilizing informed pretest data.


Psychological Methods | 2000

The proof of the pudding: an illustration of the relative strengths of null hypothesis, meta-analysis, and Bayesian analysis.

George S. Howard; Scott E. Maxwell; Kevin J. Fleming

Some methodologists have recently suggested that scientific psychology's over-reliance on null hypothesis significance testing (NHST) impedes the progress of the discipline. In response, a number of defenders have maintained that NHST continues to play a vital role in psychological research. Both sides of the argument to date have been presented abstractly. The authors take a different approach to this issue by illustrating the use of NHST along with 2 possible alternatives (meta-analysis as a primary data analysis strategy and Bayesian approaches) in a series of 3 studies. Comparing and contrasting the approaches on actual data brings out the strengths and weaknesses of each approach. The exercise demonstrates that the approaches are not mutually exclusive but instead can be used to complement one another.
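One way to see how the approaches complement one another is to pool several individually nonsignificant studies. The numbers below are made up for illustration (they are not the article's data); the pooling step is a standard fixed-effect, inverse-variance meta-analysis.

```python
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_value(effect, se):
    """Two-sided p for a z-test of effect/se against zero."""
    z = abs(effect) / se
    return 2.0 * (1.0 - normal_cdf(z))

# Three hypothetical small studies of the same effect: (estimate, SE).
studies = [(0.30, 0.18), (0.25, 0.20), (0.30, 0.17)]

# NHST study by study: each study is nonsignificant on its own.
individual_ps = [p_value(d, se) for d, se in studies]

# Fixed-effect meta-analysis: inverse-variance weighted pooling.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = 1.0 / sqrt(sum(weights))
pooled_p = p_value(pooled, pooled_se)
```

With these assumed numbers, every study alone fails the conventional .05 test, yet the pooled estimate is clearly nonzero—the kind of contrast the article draws between NHST applied to single studies and meta-analysis used as a primary analysis strategy.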


Research in Higher Education | 1982

Do grades contaminate student evaluations of instruction?

George S. Howard; Scott E. Maxwell

The correlation between grades and student satisfaction has been interpreted as providing support for belief in a grading leniency bias hypothesis. That is, easy graders are assumed to receive better evaluations than hard graders because they are easy graders. Howard and Maxwell have demonstrated that the relationship between grades and satisfaction might be viewed as an expected result of important causal relationships of other variables (student motivation and progress in the course) with satisfaction and grades, rather than simply evidence of contamination due to grading leniency. Eighty-three students in a research methodology course provided data at two points in a semester. Cross-lagged panel correlation analysis was employed to ascertain the direction of causality in the relationship between student satisfaction and grades. The findings replicate the Howard and Maxwell path analytic results in finding no evidence that a grades-influencing-satisfaction interpretation is more likely than its opposite, namely, a satisfaction-causing-grades one. The weak relationship between grades and student satisfaction, and the study's inability to find evidence to impugn a satisfaction-causing-grades interpretation of that relationship, renders the grade contamination objection to student evaluations even less poignant.


Applied Psychological Measurement | 1981

Influence of Subject Response Style Effects on Retrospective Measures

George S. Howard; Jim Millham; Stephen Slaten; Louise O'Donnell

Recent attempts to reduce internal invalidity in studies employing pretest/posttest self-report indices of improvement have included the refinement of methodologies employing retrospective reports of pre-treatment states. The present study investigated the operation of social desirability and impression management response bias on such retrospective measures. The results do not support the hypothesis of greater bias on retrospective measurement and, in fact, are in a direction that might suggest an interpretation of reduced bias on such measures. The results also continue to support superior validity of retrospective over traditional pretest/posttest indices of improvement following treatment.


Educational and Psychological Measurement | 1984

Methods of Analysis with Response-Shift Bias

James H. Bray; Scott E. Maxwell; George S. Howard

Howard and his colleagues have discovered an instrumentation-related contamination which confounds the results of studies which employ self-report measures in a pre/post or posttest-only design. This confounding influence is referred to as response-shift bias. Research has demonstrated that the traditional methods of analysis (i.e., analysis of posttests only, analysis of pre/post difference scores, and analysis of covariance using prescores (ANCOVA)) do not consider response-shift bias and produce biased estimates of the treatment effect. A retrospective pre/post design is recommended by Howard and his colleagues to control for response-shift bias. The only method of analysis which yields an unbiased estimate of the treatment effect is posttest minus retrospective pretest difference scores. The purpose of the present study is to determine the relative loss in statistical power of the traditional methods of analysis when response-shift bias is present. Analytic and Monte Carlo techniques were employed to compare the powers of five methods of analysis under various conditions. The results indicate that when there is a response shift the most powerful method of analysis, overall, is the retrospective pre/post method, and the loss in statistical power of the traditional methods can be substantial under many conditions. Recommendations and applications to applied research are discussed.
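A toy Monte Carlo makes the bias mechanism visible. This is a sketch under assumed numbers (a true effect of 0.5 and a response shift of 0.3), not a reproduction of the study's simulation design: subjects over-rate themselves at pretest because they do not yet understand the construct, while the posttest and the retrospective pretest are both rated on the post-treatment standard.

```python
import random

random.seed(1)

def simulate(n=10_000, effect=0.5, shift=0.3):
    """Compare the traditional pre/post difference with the
    retrospective pre/post difference under a response shift."""
    trad, retro = [], []
    for _ in range(n):
        true_pre = random.gauss(0.0, 1.0)
        reported_pre = true_pre + shift                 # response-shift bias
        posttest = true_pre + effect + random.gauss(0.0, 0.2)
        retro_pre = true_pre + random.gauss(0.0, 0.2)   # re-rated on new standard
        trad.append(posttest - reported_pre)
        retro.append(posttest - retro_pre)
    return sum(trad) / n, sum(retro) / n

trad_est, retro_est = simulate()
# The traditional difference underestimates the 0.5 effect by roughly
# the 0.3 shift; the retrospective difference recovers it.
```

Under these assumptions the traditional estimate lands near 0.2 while the retrospective estimate lands near the true 0.5, which is the unbiasedness property the abstract attributes to posttest-minus-retrospective-pretest difference scores.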


Research in Higher Education | 1978

The evaluation of faculty development programs

Donald P. Hoyt; George S. Howard

This article reviews literature pertinent to the evaluation of faculty development programs and presents data from several studies conducted at two institutions. These data were consistent with those previously reported in that faculty participants consistently expressed satisfaction with development services. In addition, one study found that most faculty members voluntarily took some action to improve their instructional effectiveness, though only a minority pursued these efforts in depth. Volunteers who worked intensively with a faculty development consultant improved more on objective measures of effectiveness than did those who were only superficially involved in improvement efforts; those who received no consultative assistance failed to improve significantly. Evidence from a final study provided a control for faculty motivation and led to the conclusion that improvement was contingent both on faculty desire to improve and on the availability of professional assistance.

Collaboration


Dive into George S. Howard's collaboration.

Top Co-Authors

James H. Bray, Baylor College of Medicine
Don W. Nance, Wichita State University
Paul R. Myers, University of Notre Dame
Janet K. Swim, Pennsylvania State University
Pennie Myers, Wichita State University