Jennifer L. Kobrin
College Board
Publications
Featured research published by Jennifer L. Kobrin.
Educational Assessment | 2011
Jennifer L. Kobrin; Brian F. Patterson
Prior research has shown that there is substantial variability in the degree to which the SAT and high school grade point average (HSGPA) predict 1st-year college performance at different institutions. This article demonstrates the usefulness of multilevel modeling as a tool to uncover institutional characteristics that are associated with this variability. The results revealed that the predictive validity of HSGPA decreased as mean total SAT (i.e., sum of the three SAT sections) score at an institution increased and as the proportion of White freshmen increased. The predictive validity of the three SAT sections (critical reading, mathematics, and writing) varied differently as a function of different institution-level variables. These results suggest that the estimates of validity obtained and aggregated from multiple institutions may not accurately reflect the unique contextual factors that influence the predictive validity of HSGPA and SAT scores at a particular institution.
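The pattern this abstract describes, HSGPA's predictive slope shrinking as an institution's mean SAT rises, can be illustrated with a minimal simulation. This is not the study's model or data: the effect sizes, sample sizes, and the simple two-stage approach (per-institution slopes, then a correlation with the institution-level variable) are illustrative stand-ins for the paper's multilevel modeling.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_institution(mean_sat, n=200):
    """Simulate one institution: the true HSGPA slope shrinks
    as the institution's mean SAT rises (hypothetical effect sizes)."""
    true_slope = 0.9 - 0.0004 * (mean_sat - 1200)
    hsgpa = rng.uniform(2.0, 4.0, n)
    fygpa = 1.0 + true_slope * hsgpa + rng.normal(0, 0.3, n)
    return hsgpa, fygpa

mean_sats = rng.uniform(1200, 2100, 40)   # institution-level predictor
slopes = []
for m in mean_sats:
    x, y = simulate_institution(m)
    slope, _ = np.polyfit(x, y, 1)        # per-institution HSGPA "validity"
    slopes.append(slope)

r = np.corrcoef(mean_sats, slopes)[0, 1]
print(f"correlation of HSGPA slope with institution mean SAT: {r:.2f}")
```

A proper multilevel model would estimate the institution-specific slopes and the cross-level effect jointly rather than in two stages, but the direction of the relationship is the same.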
Journal of Advanced Academics | 2009
Emily J. Shaw; Jennifer L. Kobrin; Sheryl Packman; Amy Elizabeth Schmidt
The media often portray two distinct types of college applicants: the frenzied, overachieving, anxious student who applies to many institutions and the underprepared, less advantaged student who is not at all familiar with the application process. Although these two groups likely exist, they are far from the norm; college applicants are better described as several groups of students who can be classified based on relevant characteristics. We identified five unique clusters of students: Privileged High Achievers/Athletes, Disadvantaged Students, Average Students Needing More Guidance, Mostly Female Academics, and Privileged Low Achievers. These clusters differed from each other on variables including academic performance, demographic characteristics, home and school characteristics, participation in school activities, and the number and types of higher education institutions to which they applied. An understanding of these descriptive clusters, composed of students with similar backgrounds and goals for higher education, is a necessary first step in developing more thoughtful and inclusive enrollment management and college preparation practices.
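A cluster analysis of the kind described can be sketched as follows. The features, their distributions, and the use of k-means are assumptions for illustration only; the study's actual clustering method and variables are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000

# Hypothetical applicant features standing in for the study's variables:
# academic performance, SES, and application behavior.
features = np.column_stack([
    rng.normal(3.2, 0.5, n),   # HSGPA
    rng.normal(1500, 250, n),  # SAT total
    rng.normal(0, 1, n),       # socioeconomic index
    rng.poisson(5, n),         # number of colleges applied to
])

X = StandardScaler().fit_transform(features)  # put variables on one scale
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Each applicant is assigned to exactly one of five profiles.
print(np.bincount(labels))
```

Standardizing first matters because k-means is distance-based: without it, the SAT column (range in the hundreds) would dominate the GPA column (range of about two points).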
Educational and Psychological Measurement | 2011
Krista D. Mattern; Emily J. Shaw; Jennifer L. Kobrin
This study examined discrepant high school grade point average (HSGPA) and SAT performance, as measured by the difference between a student's standardized SAT composite score and standardized HSGPA. The SAT–HSGPA discrepancy measure was used to examine whether certain students are more likely to exhibit discrepant performance and in what direction. Additionally, the relationship between the SAT–HSGPA discrepancy measure and other academic indicators was examined, as was its relationship to the error term of three admission models (HSGPA only, SAT score only, and HSGPA and SAT scores combined). Results indicated that female, minority, low-socioeconomic-status, and nonnative-English-speaking students were more likely to have higher HSGPAs relative to their SAT scores. Furthermore, using only HSGPA for admission overpredicted college performance for students with higher HSGPAs than SAT scores and underpredicted college performance for students with higher SAT scores than HSGPAs. The results underscore the utility of using both HSGPA and test scores in admission decisions.
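The discrepancy measure and the over-/underprediction finding can be illustrated numerically. The score correlation, effect sizes, and single-predictor model below are hypothetical; the sketch only shows why an HSGPA-only model systematically overpredicts for students whose HSGPA outpaces their SAT score.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Correlated standardized scores (rho is an assumed SAT-HSGPA correlation).
rho = 0.5
cov = [[1.0, rho], [rho, 1.0]]
z_sat, z_hsgpa = rng.multivariate_normal([0, 0], cov, n).T

discrepancy = z_sat - z_hsgpa  # the study's SAT-HSGPA discrepancy measure

# Simulated college GPA that truly depends on both predictors.
fygpa = 2.5 + 0.3 * z_hsgpa + 0.3 * z_sat + rng.normal(0, 0.3, n)

# HSGPA-only admission model: its residuals are negative (overprediction)
# for students whose HSGPA outpaces their SAT, i.e. negative discrepancy.
slope, intercept = np.polyfit(z_hsgpa, fygpa, 1)
residual = fygpa - (intercept + slope * z_hsgpa)

r = np.corrcoef(discrepancy, residual)[0, 1]
print(f"correlation of discrepancy with HSGPA-only residual: {r:.2f}")
```

The positive correlation means the single-predictor model's errors are not random: they track the omitted score, which is the statistical case for using both predictors together.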
Journal of Advanced Academics | 2010
Krista D. Mattern; Emily J. Shaw; Jennifer L. Kobrin
The purpose of this study was to examine the academic consequences of attending an institution that is not considered an academic fit for a student. The results show that more able students perform better in college, in terms of first-year GPA and retention to the second year, regardless of the institution they attend. Additionally, after controlling for ability, students attending more selective institutions perform better in college. However, the results do not support an academic fit effect above and beyond individual and school effects. The results have implications for higher education admission policies. Specifically, institutions that want to maximize the percentage of admitted students who succeed and return for their second year should not downplay applicants' academic qualifications. Nor should they worry about selecting "overqualified" applicants in the belief that such students may be bored or insufficiently challenged: these students earn higher first-year GPAs and are more likely to return for their second year. Students who are not academically qualified, by contrast, are more likely to earn lower grades and leave the institution.
Educational and Psychological Measurement | 2012
Jennifer L. Kobrin; Young Koung Kim; Paul R. Sackett
There is much debate on the merits and pitfalls of standardized tests for college admission, with questions regarding the format (multiple-choice vs. constructed response), cognitive complexity, and content of these assessments (achievement vs. aptitude) at the forefront of the discussion. This study addressed these questions by investigating the relationship between SAT Mathematics (SAT-M) item characteristics and the item’s ability to predict college outcomes. Using multiple regression, SAT-M item characteristics (content area, format, cognitive complexity, and abstract/concrete classification) were used to predict three outcome measures: the correlation of item score with first-year college grade point average, the correlation of item score with mathematics course grades, and the percentage of students who answered the item correctly and chose to major in a mathematics or science field. Separate models were run including and excluding item difficulty and discrimination as covariates. The results revealed that many of the item characteristics were related to the outcome measures and that item difficulty and discrimination had a mediating effect on several of the predictor variables, particularly on the effects of nonroutine/insightful items and multiple-choice items.
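The covariate effect this abstract reports, with item difficulty and discrimination mediating the apparent effects of item characteristics, can be illustrated with a toy regression. All variables and effect sizes below are invented; the sketch only shows how a predictor's coefficient shrinks once a correlated covariate enters the model, which is the comparison the study makes by running models with and without the difficulty and discrimination covariates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical item data: `mc` flags multiple-choice items, `difficulty`
# is correlated with format, and `validity` is the item-outcome correlation.
mc = rng.integers(0, 2, n).astype(float)
difficulty = 0.8 * mc + rng.normal(0, 0.2, n)
validity = 0.5 * difficulty + 0.05 * mc + rng.normal(0, 0.1, n)

def ols_coef(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

b_without = ols_coef(mc[:, None], validity)[0]
b_with = ols_coef(np.column_stack([mc, difficulty]), validity)[0]
print(f"format effect without difficulty covariate: {b_without:.2f}")
print(f"format effect controlling for difficulty:   {b_with:.2f}")
```

The attenuation of the format coefficient once difficulty is controlled mirrors the paper's finding that several item-characteristic effects operate partly through difficulty and discrimination.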
Measurement: Interdisciplinary Research & Perspective | 2012
Krista D. Mattern; Jennifer L. Kobrin; Wayne J. Camara
As researchers at a testing organization concerned with the appropriate uses of, and validity evidence for, our assessments, we provide an applied perspective related to the issues raised in the focus article (Newton, this issue). Newton's proposal for elaborating the consensus definition of validity is offered with the intention of reducing the risks of inadequate validation practice. His article is taken in the spirit of helping our profession improve, with the ultimate goal of protecting the public in the interpretation and use of test scores, and we wholeheartedly support his intent. However, we wonder whether more can be done, in particular from a practitioner's viewpoint. Specifically, we address (a) the question of whether the proposed changes to the definition will improve validation practice, (b) theoretical versus practical issues and implications for promoting rigorous validity research, and (c) possible next steps for our profession.
IMPLICATIONS OF THE REVISED DEFINITION ON VALIDATION PRACTICE
The focus article is motivated by a realization that the current definition of validity, as stated in the …
Archive | 2008
Sandra M. Barbuti; Brian F. Patterson; Jennifer L. Kobrin; Krista D. Mattern
Archive | 2008
Krista D. Mattern; Brian F. Patterson; Emily J. Shaw; Jennifer L. Kobrin; Sandra M. Barbuti
Archive | 2006
Jennifer L. Kobrin; Viji Sathy
Archive | 2002
Jennifer L. Kobrin; Glenn B. Milewski