
Publication


Featured research published by Barbara S. Plake.


Educational and Psychological Measurement | 1994

Validation of a Measure of Learning and Performance Goal Orientations

Teresa Debacker Roedel; Gregory Schraw; Barbara S. Plake

This study investigated the psychometric properties of an instrument (the Goals Inventory) that measured learning and performance goal orientations. Test-retest reliability estimates for the learning and performance goal scales were r = .73 and r = .76, respectively. Internal consistency was assessed using Cronbach's alpha; these values were .80 and .75, respectively. Convergent and divergent validity were evaluated by comparing the Goals Inventory to measures of test anxiety, hope, and attributions for success and failure. All of the theoretically explicit predictions of Dweck and Leggett's model were supported. Suggestions are made for the use and interpretation of the Goals Inventory's two subscales and for future research.
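As a minimal sketch of the internal-consistency index used here (in Python, with made-up scores rather than the study's data), Cronbach's alpha for a k-item scale is k/(k-1) multiplied by one minus the ratio of summed item variances to the variance of total scores:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 respondents answering a 4-item Likert scale.
scores = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 2))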


Computers in Human Behavior | 1985

Comparing Computerized versus Traditional Psychological Assessment.

Mark Lukin; E. Thomas Dowd; Barbara S. Plake; Robert G. Kraft

This study used a Latin Squares design to assess the equivalence of computerized testing methods and traditional pencil-and-paper formats in a clinical setting. Subjects were given an intake interview and three personality assessment instruments in one of the two administration formats at one-week intervals. Subjects also completed a post-assessment evaluation instrument (a semantic differential) to gauge their reactions to the testing experience. Data analysis indicated no significant differences between scores on measures of anxiety, depression, and psychological reactance across either group or administration format. Importantly, while producing results comparable to the pencil-and-paper assessment, the computerized administration was preferred by 85% of the subjects. The discussion emphasizes the implications of this study as support for the use of computerized assessment in applied psychology.
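For readers unfamiliar with the design, a Latin square counterbalances the order in which conditions are administered so that each condition appears once in each position. A minimal sketch, with hypothetical labels for the two formats:

def latin_square(conditions):
    """Cyclic Latin square: each condition appears once per row and column."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

# Hypothetical counterbalancing of the two administration formats across sessions.
orders = latin_square(["computer", "pencil-and-paper"])
for row, order in enumerate(orders):
    print(f"group {row + 1}: session order = {order}")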


Educational and Psychological Measurement | 1991

Psychometric Properties of the American-International Relations Scale

Gargi Roysircar Sodowsky; Barbara S. Plake

A multidimensional instrument, the American-International Relations Scale, purports to measure the acculturation of international students, scholars, and academicians to the white-dominant society. Data from 606 respondents, a return rate of 67%, were collected. Factor analysis of completed 34-item questionnaires (N = 481), using varimax rotation, yielded three interpretable factors, tentatively labeled (a) Perceived Prejudice, (b) Acculturation, and (c) Language Usage. Factor loadings of items ranged between .33 and .89 on one of the respective factors, with no item loading saliently on two or more factors. Full-scale and subscale internal-consistency reliabilities (alphas) were .89, .88, .79, and .82, respectively.
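As an illustration of this kind of analysis (not the authors' code), a three-factor varimax-rotated solution can be fit with scikit-learn's FactorAnalysis; the responses below are random placeholders, so the loadings will not reproduce the study's factors:

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical (N, 34) matrix of completed questionnaires on a 1-5 scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(481, 34)).astype(float)

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(responses)
loadings = fa.components_.T  # shape (34 items, 3 factors)

# Flag salient loadings, using .33 as the lower bound reported in the study.
for item, row in enumerate(loadings, start=1):
    salient = np.flatnonzero(np.abs(row) >= 0.33)
    if salient.size:
        print(f"item {item}: loads on factor(s) {salient + 1}")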


Applied Psychological Measurement | 2000

Setting Performance Standards on Complex Educational Assessments

Ronald K. Hambleton; Richard M. Jaeger; Barbara S. Plake; Craig N. Mills

Performance assessments have become popular in education and credentialing, and performance standards are common for interpreting and reporting scores. However, because of the unique characteristics of these assessments compared with multiple-choice tests (such as polytomous scoring), well-known standard-setting methods are no longer applicable, and new, valid methods are needed. A number of promising methods for setting performance standards are described and their strengths and weaknesses discussed. Suggestions for additional research are offered.


Educational and Psychological Measurement | 1980

A Comparison of a Statistical and Subjective Procedure to Ascertain Item Validity: One Step in the Test Validation Process

Barbara S. Plake

In an attempt to make commercial tests valid for users of both sexes and all race groups, many test publishers employ review procedures to remove potentially invalid (or biased) items from their tests. Statistical procedures to identify biased test items are often sophisticated and costly to test developers. Alternative approaches employ reviewers who examine test items and flag those that appear biased. To evaluate this methodology, a statistical and a subjective procedure were both used to identify biased items in an elementary achievement test. The results show little agreement on the items selected as biased. An implication for test developers is to seek statistical support for a reviewer's selection of biased items.
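The abstract does not say which statistical procedure was used, so the sketch below shows one widely used index for flagging potentially biased items, the Mantel-Haenszel common odds ratio computed over ability-matched strata; all inputs are hypothetical:

import numpy as np

def mantel_haenszel_alpha(correct, group, strata):
    """Mantel-Haenszel common odds ratio for one item.

    correct: 0/1 item responses; group: 0 = reference, 1 = focal;
    strata: matched ability level (e.g., total test score) per examinee.
    Values near 1.0 suggest no differential functioning; large departures
    flag the item for review.
    """
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (correct[m] == 1))  # reference, right
        b = np.sum((group[m] == 0) & (correct[m] == 0))  # reference, wrong
        c = np.sum((group[m] == 1) & (correct[m] == 1))  # focal, right
        d = np.sum((group[m] == 1) & (correct[m] == 0))  # focal, wrong
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den else float("nan")

# Hypothetical data for a single item and 400 examinees.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 400)
correct = rng.integers(0, 2, 400)
strata = rng.integers(0, 5, 400)
print(mantel_haenszel_alpha(correct, group, strata))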


Archive | 1996

Teacher Assessment Literacy: What Do Teachers Know about Assessment?

Barbara S. Plake; James C. Impara

It is estimated that teachers spend up to 50% of their instructional time on assessment-related activities. The chapter discusses teacher assessment literacy: what teachers actually know about assessment. It has been found that teachers receive little or no formal assessment training in their preparatory programs and are often ill-prepared to undertake assessment-related activities. With the introduction of "authentic" assessment strategies, it is even more important for teachers to be skilled in assessment, because they are often directly involved in the administration and scoring of these assessments. Some studies have attempted to quantify the level of teacher preparation in educational assessment of students. The chapter highlights a national survey of teacher assessment literacy and presents a more detailed analysis of teacher performance on the instrument. The results indicate low levels of assessment competency among teachers and suggest that it is time for the education community to recognize that teachers are ill-equipped to successfully undertake one of the most prevalent activities of their instructional program: student assessment. This is especially salient given the current trend in student assessment toward an increased use of performance, portfolio, and other types of "authentic" assessment.


Educational and Psychological Measurement | 1997

A New Standard-Setting Method for Performance Assessments: The Dominant Profile Judgment Method and Some Field-Test Results

Barbara S. Plake; Ronald K. Hambleton; Richard M. Jaeger

Traditional standard-setting methods are not well suited for use with polytomously scored performance assessments. The present article presents a standard-setting method, the dominant profile judgment (DPJ) method, designed for use with profiles of polytomous scores on exercises in a performance-based assessment. The method is direct, in that it guides standard-setting panelists in the articulation of their standard-setting policies. Further, it allows complex policy statements that can incorporate compensatory and/or conjunctive components. A detailed description of the method is provided, and results of an application of this standard-setting method are presented. Recommendations for improvements in the method are discussed.
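To make the compensatory/conjunctive distinction concrete, here is a hypothetical policy of the kind the DPJ method elicits; the total cut and per-exercise minimums are invented, not taken from the field test:

def meets_standard(profile, total_cut=18, minimums=(2, 3, 2, 2, 3)):
    """Hypothetical panel policy over a profile of polytomous exercise scores.

    Compensatory part: the exercise scores must sum to at least total_cut,
    so strength on one exercise can offset weakness on another.
    Conjunctive part: no exercise may fall below its minimum score.
    """
    compensatory = sum(profile) >= total_cut
    conjunctive = all(s >= m for s, m in zip(profile, minimums))
    return compensatory and conjunctive

print(meets_standard([4, 4, 3, 3, 4]))  # True: total 18, all minimums met
print(meets_standard([5, 5, 5, 1, 5]))  # False: exercise 4 below its minimum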


Educational Assessment | 2001

Ability of Panelists to Estimate Item Performance for a Target Group of Candidates: An Issue in Judgmental Standard Setting

Barbara S. Plake; James C. Impara

Recent researchers (Impara & Plake, 1998; National Research Council, 1999; Shepard, 1995) have called into question the ability of judges to make accurate item performance estimates for target subgroups of candidates, such as minimally competent candidates. The purpose of this study was to examine both the reliability and accuracy of item performance estimates from an Angoff (1971) standard-setting application. Results provide evidence that the item performance estimates were both reasonable and reliable. Factors that might have influenced these results are discussed.
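As a sketch of the Angoff procedure referenced here (with invented ratings, not the study's data): each panelist estimates, item by item, the probability that a minimally competent candidate would answer correctly; a panelist's implied cut score is the sum of those estimates, and the recommended cut is typically the mean across panelists:

import numpy as np

# Hypothetical Angoff ratings, shape (n_panelists, n_items): each entry is a
# panelist's estimate of the probability that a minimally competent candidate
# answers the item correctly.
ratings = np.array([
    [0.6, 0.7, 0.4, 0.8, 0.5],
    [0.5, 0.8, 0.5, 0.7, 0.6],
    [0.7, 0.7, 0.3, 0.9, 0.5],
])

panelist_cuts = ratings.sum(axis=1)   # each panelist's implied raw cut score
cut_score = panelist_cuts.mean()      # recommended cut: mean across panelists
print(panelist_cuts, round(cut_score, 2))

# Accuracy can be checked by comparing mean item estimates with empirical
# proportions correct for the target group, when such data are available.
item_estimates = ratings.mean(axis=0)
print(item_estimates)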


Educational and Psychological Measurement | 1989

Providing Item Feedback in Computer-Based Tests: Effects of Initial Success and Failure

Steven L. Wise; Barbara S. Plake; Laura Boettcher Barnes; Leslie E. Lukin

This study investigated the effects of providing item feedback on student achievement test performance and anxiety, and how these effects may be moderated by the amount of success students experience on the initial items of the test. Introductory statistics students were randomly assigned to six forms of a computer-based algebra test that differed in (a) the difficulty of the first five items and (b) the type of item feedback provided. Although test performance was not significantly affected by differences among the test forms, student anxiety levels increased significantly on the form that used difficult initial items and provided item feedback along with a running score total. Implications for the use of item feedback in computer-based testing are discussed.


Computers in Human Behavior | 1986

The effects of item feedback and examinee control on test performance and anxiety in a computer-administered test

Steven L. Wise; Barbara S. Plake; Leslie A. Eastman; Laura L. Boettcher; Mark Lukin

This study investigated the usefulness of computer-administered tests in the reduction of test anxiety. Two testing methods were used: (a) providing immediate item feedback and (b) allowing examinees to control the order of item administration. The results did not support the use of item feedback in reducing test anxiety; in some cases, item feedback was found to significantly increase test anxiety and decrease test performance. Similar results were found for examinee control, where test anxiety was as high as, and in some cases significantly higher than, that found with a fixed order of item administration.
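As a toy sketch (not the study's software) of the two manipulations, immediate item feedback with a running score and examinee control over item order; the items and simulated responses are invented:

import random

# Hypothetical item bank: (prompt, correct answer, simulated examinee answer).
ITEMS = [
    ("2 + 2 * 3", "8", "8"),
    ("Solve 2x = 10 for x", "5", "5"),
    ("3^2 - 4", "5", "6"),
]

def administer(feedback: bool, examinee_control: bool, seed: int = 0) -> int:
    rng = random.Random(seed)
    remaining = list(range(len(ITEMS)))
    score = 0
    while remaining:
        # Examinee control: the examinee chooses the next item (simulated
        # here as a random pick); otherwise items come in a fixed order.
        idx = rng.choice(remaining) if examinee_control else remaining[0]
        remaining.remove(idx)
        prompt, key, response = ITEMS[idx]
        score += response == key
        if feedback:
            print(f"{prompt}: {'correct' if response == key else 'incorrect'}"
                  f" (running score: {score})")
    return score

print("final score:", administer(feedback=True, examinee_control=True))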

Collaboration


Dive into Barbara S. Plake's collaborations.

Top Co-Authors

James C. Impara (University of Nebraska–Lincoln)
Steven L. Wise (University of Nebraska–Lincoln)
Chad W. Buckendahl (University of Nebraska–Lincoln)
Gerald J. Melican (Educational Testing Service)
Abdullah A. Ferdous (American Institutes for Research)
Jane Close Conoley (University of Nebraska–Lincoln)
Ronald K. Hambleton (University of Massachusetts Amherst)
John A. Glover (Tennessee State University)