
Publication


Featured research published by Brent Bridgeman.


Educational Researcher | 2012

Measuring Learning Outcomes in Higher Education: Motivation Matters

Ou Lydia Liu; Brent Bridgeman; Rachel M. Adler

With the pressing need for accountability in higher education, standardized outcomes assessments have been widely used to evaluate learning and inform policy. However, the critical question of how scores are influenced by students’ motivation has been insufficiently addressed. Using random assignment, we administered a multiple-choice test and an essay across three motivational conditions. Students’ self-reported motivation was also collected. Motivation significantly predicted test scores. A substantial performance gap emerged between students in different motivational conditions (effect size as large as .68). Depending on the test format and condition, conclusions about college learning gain (i.e., value added) varied dramatically from substantial gain (d = 0.72) to negative gain (d = −0.23). The findings have significant implications for higher education stakeholders at many levels.


Written Communication | 1984

Survey of Academic Writing Tasks

Brent Bridgeman; Sybil B. Carlson

Questionnaire responses from faculty members in 190 academic departments at 34 universities were analyzed to determine the writing tasks faced by beginning undergraduate and graduate students. In addition to undergraduate English departments, six fields were surveyed: electrical engineering, civil engineering, computer science, chemistry, psychology, and master of business administration programs. Results indicated considerable variability across fields in the kinds of writing required and in preferred assessment topics.


Journal of Educational Psychology | 1991

Gender Differences in Predictors of College Mathematics Performance and in College Mathematics Course Grades

Brent Bridgeman; Cathy Wendler

Grades of men and women in 1st-year mathematics courses were obtained from a sample of 9 universities. In addition, placement test scores were available from 4 of the institutions. This information was combined with Scholastic Aptitude Test (SAT) scores and self-reported information on mathematics courses taken in high school, grades in those courses, and overall high school grade point average. Within a given college mathematics course, the average grades of women were about equal to or slightly higher than men's average grades, but men's average scores on the mathematical scale of the SAT were above women's average scores by a third of a standard deviation or more.


Applied Measurement in Education | 2003

Effects of Screen Size, Screen Resolution, and Display Rate on Computer-Based Test Performance.

Brent Bridgeman; Mary Lou Lennon; Altamese Jackenthal

Computer-based tests administered in established commercial testing centers typically have used monitors of uniform size running at a set resolution. Web-based delivery of tests promises to expand access, but at the price of less standardization in equipment. This study evaluated the effects of variations in screen size, resolution, and presentation delay on verbal and mathematics scores in a sample of 357 college-bound high school juniors. There were 3 screen display conditions crossed with 2 presentation rate conditions: a 17-in. monitor set to a resolution of 1024 × 768, a 17-in. monitor set to a resolution of 640 × 480, and a simulated 15-in. monitor set to a resolution of 640 × 480, with items presented either with no delay or with a 5-sec delay between questions (to emulate a slow Internet connection). No significant effects on math scores were found. Verbal scores were higher, by about a quarter of a standard deviation, with the larger high-resolution display.


Applied Measurement in Education | 2012

Comparison of Human and Machine Scoring of Essays: Differences by Gender, Ethnicity, and Country

Brent Bridgeman; Catherine Trapani; Yigal Attali

Essay scores generated by machine and by human raters are generally comparable; that is, they can produce scores with similar means and standard deviations, and machine scores generally correlate as highly with human scores as scores from one human correlate with scores from another human. Although human and machine essay scores are highly related on average, this does not eliminate the possibility that machine and human scores may differ significantly for certain gender, ethnic, or country groups. Such differences were explored with essay data from two large-scale high-stakes testing programs: the Test of English as a Foreign Language and the Graduate Record Examination. Human and machine scores were very similar across most subgroups, but there were some notable exceptions. Policies were developed so that any differences between humans and machines would have a minimal impact on final reported scores.


Language Testing | 2012

Relationship of TOEFL iBT® Scores to Academic Performance: Some Evidence from American Universities

Yeonsuk Cho; Brent Bridgeman

This study examined the relationship between scores on the TOEFL Internet-Based Test (TOEFL iBT®) and academic performance in higher education, defined here in terms of grade point average (GPA). The academic records for 2594 undergraduate and graduate students were collected from 10 universities in the United States. The data consisted of students’ GPA, detailed course information, and admissions-related test scores including TOEFL iBT, GRE, GMAT, and SAT scores. Correlation-based analyses were conducted for subgroups by academic status and disciplines. Expectancy graphs were also used to complement the correlation-based analyses by presenting the predictive validity in terms of individuals in one of the TOEFL iBT score subgroups belonging to one of the GPA subgroups. The predictive validity expressed in terms of correlation did not appear to be strong. Nevertheless, the general pattern shown in the expectancy graphs indicated that students with higher TOEFL iBT scores tended to earn higher GPAs and that the TOEFL iBT provided information about the future academic performance of non-native English speaking students beyond that provided by other admissions tests. These observations led us to conclude that even a small correlation might indicate a meaningful relationship between TOEFL iBT scores and GPA. Limitations and implications are discussed.


Research in Higher Education | 1991

Essays and multiple-choice tests as predictors of college freshman GPA

Brent Bridgeman

The incremental validity of a short holistically scored expository essay for predicting freshman grade point average was explored in two samples. In one of the samples the essay was administered to incoming freshmen at state colleges as part of a basic skills assessment battery. In the second sample the essay was part of an achievement test that is one of the admissions tests used by highly selective colleges. In both samples, the essay added essentially nothing to what could be predicted from high school grade point average, Scholastic Aptitude Test scores, and a multiple-choice test of writing-related skills.


Journal of Educational Psychology | 1996

Success in college for students with discrepancies between performance on multiple-choice and essay tests.

Brent Bridgeman; Rick Morgan

Students with high scores (top third) on the essay portion of an Advanced Placement examination and low scores (bottom third) on the multiple-choice portion of the examination were compared with students with the opposite pattern (top third on the multiple-choice questions and bottom third on the essay questions). Across examinations in different subject areas (history, English, and biology), students who were relatively strong in the essay format and weak in the multiple-choice format were as successful in their college courses as students with the opposite pattern, especially in those courses where grades are typically not determined by multiple-choice tests. Although differential essay and multiple-choice test performance was not related to course grades, it was related to performance on other tests. Students in the high multiple-choice/low essay group performed much better than the other group on other multiple-choice tests, especially the verbal section of the SAT. In relation to their performance on multiple-choice tests, students in the high essay/low multiple-choice group performed well on other essay tests.


Journal of Educational Measurement | 2004

Impact of Fewer Questions per Section on SAT I Scores.

Brent Bridgeman; Catherine Trapani; Edward Curley

The impact of allowing more time per question on SAT I: Reasoning Test scores was estimated by embedding sections with a reduced number of questions into the standard 30-minute equating section of two national test administrations. Thus, for example, questions were deleted from a verbal section that contained 35 questions to produce forms that contained 27 or 23 questions. Scores on the 23-question section could then be compared to scores on the same 23 questions when they were embedded in a section that contained 27 or 35 questions. Similarly, questions were deleted from a 25-question math section to form sections of 20 and 17 questions. Allowing more time per question had a minimal impact on verbal scores, producing gains of less than 10 points on the 200–800 SAT scale. Gains for the math score were less than 30 points. High-scoring students tended to benefit more than lower-scoring students, with extra time creating no increase in scores for students with SAT scores of 400 or lower. Ethnic/racial and gender differences were neither increased nor reduced with extra time.


Language Testing | 2012

TOEFL iBT Speaking Test Scores as Indicators of Oral Communicative Language Proficiency

Brent Bridgeman; Donald E. Powers; Elizabeth Stone; Pamela Mollaun

Scores assigned by trained raters and by an automated scoring system (SpeechRater™) on the speaking section of the TOEFL iBT™ were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language Internet-based test (TOEFL iBT). Oral communicative effectiveness was evaluated both by rating scales and by the ability of the undergraduate raters to answer multiple-choice questions that could be answered only if the spoken response was understood. Correlations of these communicative competence indicators from the undergraduate raters with speech scores were substantially higher for the scores provided by the professional TOEFL iBT raters than for the scores provided by SpeechRater. Results suggested that both expert raters and SpeechRater are evaluating aspects of communicative competence, but that SpeechRater fails to measure aspects of the construct that human raters can evaluate.
