
Publications


Featured research published by Charles S. Reichardt.


Educational Researcher | 1981

Qualitative and quantitative methods in evaluation research

David Brinberg; Thomas D. Cook; Charles S. Reichardt

Evaluation researchers, traditionally considered to be users of quantitative methods, are now actively exploring the qualitative aspects of the performance of the programmes they are evaluating. Rather than argue the validity of either the quantitative or the qualitative approach, most of the noted contributors to this volume conclude that both are required for comprehensive evaluation.


Psychology of Women Quarterly | 1995

Women, Homelessness, And Substance Abuse: Moving Beyond the Stereotypes

Lisa J. Geissler; Carol A. Bormann; Carol F. Kwiatkowski; G. Nicholas Braucht; Charles S. Reichardt

This study examined the characteristics of homeless women with substance abuse problems. Data were collected on a sample of 323 homeless substance abusers. First, 49 women and 274 men were compared to demonstrate distinct problems and treatment needs of the women. Results showed that the women were more likely than the men to abuse drugs, but less likely to receive substance abuse treatment. In addition, women spent more time in doubled-up living arrangements, and were more likely to receive outpatient psychiatric treatment. Second, two subgroups of women were compared: those who had been homeless for 6 months or less, and those who had been homeless longer than 6 months during their lifetime. The women who had been homeless longer were less educated, younger when they first became homeless, and were more likely to abuse alcohol, to have been assaulted, and to have attempted suicide. Implications for research and treatment are discussed.


Multivariate Behavioral Research | 1995

The Criteria for Convergent and Discriminant Validity in a Multitrait-Multimethod Matrix.

Charles S. Reichardt; S.C. Coleman

Two basic structures have been proposed for the data in a multitrait-multimethod matrix: additive and multiplicative. The well-known criteria for assessing convergent and discriminant validity proposed by Campbell and Fiske (1959) are shown, in general, to be inadequate for either structure. Model-specific criteria for assessing convergent and discriminant validity hold greater promise than the Campbell and Fiske criteria.
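The inadequacy of the Campbell and Fiske criteria under an additive structure can be illustrated with a small simulation. This is a minimal sketch with assumed parameter values (unit-variance trait, method, and noise components), not the article's own analysis: each measured score is built as trait factor + method factor + noise, so every trait is measured with genuine convergent validity, yet the validity-diagonal correlations are no larger than the heterotrait-monomethod correlations, so the Campbell–Fiske comparison appears to fail.

```python
import numpy as np

# Additive MTMM structure: score(t, m) = trait_t + method_m + noise.
# Assumed values: 3 traits, 2 methods, all components unit variance.
rng = np.random.default_rng(42)
n, n_traits, n_methods = 50_000, 3, 2

traits = rng.normal(size=(n, n_traits))    # trait factors
methods = rng.normal(size=(n, n_methods))  # method factors

# scores[:, t, m] measures trait t with method m.
scores = (traits[:, :, None] + methods[:, None, :]
          + rng.normal(size=(n, n_traits, n_methods)))
flat = scores.reshape(n, n_traits * n_methods)  # columns ordered (t, m)
R = np.corrcoef(flat, rowvar=False)

def idx(t, m):
    return t * n_methods + m

# Validity diagonal: same trait, different methods (expected corr = 1/3).
validity = R[idx(0, 0), idx(0, 1)]
# Heterotrait-monomethod: different traits, same method (also 1/3 here).
mono_het = R[idx(0, 0), idx(1, 0)]
# Heterotrait-heteromethod: different traits and methods (expected 0).
both_het = R[idx(0, 0), idx(1, 1)]

print(f"validity diagonal:        {validity:.3f}")
print(f"heterotrait-monomethod:   {mono_het:.3f}")
print(f"heterotrait-heteromethod: {both_het:.3f}")
```

Because the validity-diagonal and heterotrait-monomethod correlations are equal in expectation here, the Campbell–Fiske requirement that the former exceed the latter fails even though convergent validity is perfect by construction.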


Evaluation Review | 1991

Random Measurement Error Does Not Bias the Treatment Effect Estimate in the Regression-Discontinuity Design. II. When an Interaction Effect Is Present

William M. K. Trochim; Joseph C. Cappelleri; Charles S. Reichardt

This article examines the regression-discontinuity (RD) design when there is random measurement error and a treatment interaction effect. Two simulation issues, the specification of the pretest-posttest functional form and the choice of the point of estimation of the treatment effect, are examined. Traditionally, an interaction effect in the general linear model has been constructed after centering the true scores by subtracting their mean. However, because the RD design has traditionally estimated the treatment effect at the cutoff, one is liable to obtain an apparently biased treatment effect that is actually attributable to misspecification with regard to the point of estimation. Formulas are provided that allow one to control exactly, in simulations, the magnitude of a treatment effect at any point of estimation. These formulas can also be used for simulating the randomized experimental (RE) case where estimation is not at the pretest mean.


Psychological Methods | 2006

The Principle of Parallelism in the Design of Studies to Estimate Treatment Effects

Charles S. Reichardt

An effect is a function of a cause as well as of 4 other factors: recipient, setting, time, and outcome variable. The principle of parallelism states that if a design option exists for any 1 of these 4 factors, a parallel option exists for each of the others. For example, effects are often estimated by drawing a comparison across recipients who receive different treatments. The principle of parallelism implies that an effect can also be estimated by drawing a comparison across settings, times, or outcome variables. Typologies of methodological options are derived from the principle of parallelism. The typologies can help researchers recognize a broader set of options than they would otherwise and thereby improve the quality of research designs.


Evaluation Review | 1991

Random Measurement Error Does Not Bias the Treatment Effect Estimate in the Regression-Discontinuity Design: I. The Case of No Interaction

Joseph C. Cappelleri; William M. K. Trochim; T.D. Stanley; Charles S. Reichardt

A recently published Evaluation Review article (April 1990) claimed that, because of random measurement error in the pretest (and the regression toward the mean that results), the estimate of the treatment effect in the regression-discontinuity (RD) design is biased. A conceptual approach and a set of computer simulations are presented to arrive at the opposite conclusion: random measurement error in the pretest does not bias the estimate of the treatment effect in the RD design. This article, the first of two dealing with measurement error in the RD design, concentrates specifically on the case of no interaction between pretest and treatment on the posttest. The claim that the RD effect estimate is not biased by measurement error is in full agreement with the conclusion reached by several authors who have examined the design over the last two decades.
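The no-interaction result can be checked with a short simulation. This is a minimal sketch with assumed parameter values (reliability 0.5, cutoff at 0, true effect 2.0), not the article's own simulations: assignment is based on the error-laden observed pretest, the analysis conditions on that same observed pretest, and the estimated discontinuity recovers the true treatment effect despite the measurement error.

```python
import numpy as np

# Sharp RD design with random measurement error in the pretest.
# Assumed values: latent pretest ~ N(0,1), error ~ N(0,1), cutoff 0.
rng = np.random.default_rng(0)
n, true_effect = 200_000, 2.0

true_score = rng.normal(size=n)             # latent pretest ability
observed = true_score + rng.normal(size=n)  # pretest with random error
treated = (observed >= 0.0).astype(float)   # assignment by observed score

# Posttest depends on the TRUE score, the treatment, and outcome noise.
posttest = 1.0 * true_score + true_effect * treated + rng.normal(size=n)

# Standard RD analysis: regress posttest on the observed pretest and
# the treatment indicator; the indicator's coefficient is the estimate.
X = np.column_stack([np.ones(n), observed, treated])
coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)
estimate = coef[2]
print(f"true effect {true_effect:.2f}, RD estimate {estimate:.3f}")
```

The intuition matching the abstract: because treatment assignment is a deterministic function of the observed pretest, conditioning the regression on that same observed pretest leaves no confounding for the treatment indicator, so the attenuated pretest slope appears on both sides of the cutoff and the discontinuity itself is estimated without bias.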


American Journal of Evaluation | 2011

Evaluating Methods for Estimating Program Effects

Charles S. Reichardt

I define a treatment effect in terms of a comparison of outcomes and provide a typology of all possible comparisons that can be used to estimate treatment effects, including comparisons that are relatively unknown in both the literature and practice. I then assess the relative merit, worth, and value of all possible comparisons based on the criteria of bias, precision, generalizability, ease of implementation, and cost. Which comparison is best depends on how these criteria are weighted and on the constraints of the specific research setting. I hope readers come to recognize a wider range of comparisons than they had previously, appreciate the value of considering all possible comparisons, and see how my typology of comparisons provides the basis for making fair appraisals of the relative strengths and weaknesses of different types of comparisons in the presence of the contingencies that are most likely to arise in practice.


Evaluation Review | 1995

Reports of the death of regression-discontinuity analysis are greatly exaggerated.

Charles S. Reichardt; William M. K. Trochim; Joseph C. Cappelleri

Stanley (1991) argues that both random measurement error in the pretest and treatment-effect interactions bias the estimate of the treatment effect when multiple regression is used to analyze the data from a regression-discontinuity design (RDD). Stanley also argues that these biases are so severe that they should cause researchers to consider using statistical procedures other than regression analysis. The authors of the present article disagree. Curvilinearity in the regression of the posttest on pretest scores can be difficult to model, can bias the regression analysis of data from the RDD if not modeled correctly, and therefore should cause researchers to consider alternatives to regression analysis. If the regression surfaces are linear, however, unbiased estimates can be obtained easily via regression analysis, whether or not either random measurement error in the pretest or treatment-effect interactions are present. Improving upon regression analysis is a worthy goal but requires understanding just what are and are not the weaknesses of the method. In addressing these issues, this article elucidates some of the general principles that underlie the use of multiple regression to analyze data from the RDD quasi-experiment.


Multivariate Behavioral Research | 2011

Commentary: Are Three Waves of Data Sufficient for Assessing Mediation?

Charles S. Reichardt

Maxwell, Cole, and Mitchell (2011) demonstrated that simple structural equation models, when used with cross-sectional data, generally produce biased estimates of mediated effects. I extend those results by showing how simple structural equation models can produce biased estimates of mediated effects even when used with longitudinal data. Even with longitudinal data, simple autoregressive structural equation models can imply the existence of indirect effects when only direct effects exist, and the existence of direct effects when only indirect effects exist.


Memory & Cognition | 1973

On the independence of judged frequencies for items presented in successive lists

Charles S. Reichardt; John J. Shaughnessy; Joel Zimmerman

In an experiment examining retroactive interference effects in a frequency-judging task, all Ss were presented with a list of words occurring varying numbers of times according to either a massed- or distributed-practice (MP or DP) schedule. They were then asked to judge how often each word had occurred. Following this, Ss were given one of four types of second task: a second list with different items, followed by a frequency-judging task for that list (Condition New); a second list with items repeated from the first list but with different frequencies for each item, either maintaining items as MP or DP items (Condition Same) or switching MP items to DP and vice versa (Condition Reverse), followed by a frequency-judging task for the second-list frequencies only; or a puzzle task for the amount of time required for second-list presentation and judgment in the other conditions (Condition None). Finally, all Ss were asked to recall List 1 frequencies. List 2 frequencies were less discriminable in Conditions Same and Reverse than in Condition New. Recall of List 1 frequencies, however, was not different for these three groups; List 2 frequency judgments were not independent of List 1 frequencies.

Collaboration


Dive into Charles S. Reichardt's collaborations.

Top Co-Authors

Melvin M. Mark

Pennsylvania State University
