
Publication


Featured research published by Brian H. Kim.


Journal of Applied Psychology | 2004

Developing a biodata measure and situational judgment inventory as predictors of college student performance.

Frederick L. Oswald; Neal Schmitt; Brian H. Kim; Lauren J. Ramsay; Michael A. Gillespie

This article describes the development and validation of a biographical data (biodata) measure and situational judgment inventory (SJI) as useful predictors of broadly defined college student performance outcomes. These measures provided incremental validity when considered in combination with standardized college-entrance tests (i.e., SAT/ACT) and a measure of Big Five personality constructs. Racial subgroup mean differences were much smaller on the biodata and SJI measures than on the standardized tests and college grade point average. Female students tended to outperform male students on most predictors and outcomes with the exception of the SAT/ACT. The biodata and SJI measures show promise for student development contexts and for selecting students on a wide range of outcomes with reduced adverse impact.
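
The incremental-validity claim amounts to a hierarchical regression comparison: regress the performance outcome on SAT/ACT and Big Five scores, then add the biodata and SJI composites and examine the change in R². A minimal sketch of that comparison is below; the data are simulated and all variable names are hypothetical, not the study's.

```python
# Hedged sketch of an incremental-validity (hierarchical regression) check.
# The simulated data and variable names are illustrative, not the study's data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
sat = rng.normal(size=n)                    # standardized SAT/ACT composite
big5 = rng.normal(size=(n, 5))              # Big Five scores
biodata = 0.3 * sat + rng.normal(size=n)    # biodata composite
sji = rng.normal(size=n)                    # situational judgment inventory score
gpa = 0.4 * sat + 0.2 * biodata + 0.2 * sji + rng.normal(size=n)  # outcome

base = np.column_stack([sat, big5])
full = np.column_stack([sat, big5, biodata, sji])

r2_base = LinearRegression().fit(base, gpa).score(base, gpa)
r2_full = LinearRegression().fit(full, gpa).score(full, gpa)
print(f"R^2 base = {r2_base:.3f}, R^2 full = {r2_full:.3f}, "
      f"incremental R^2 = {r2_full - r2_base:.3f}")
```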


Psychological Bulletin | 2010

Evidence for Response Bias as a Source of Error Variance in Applied Assessment.

Robert E. McGrath; Matthew Mitchell; Brian H. Kim; Leaetta Hough

After 100 years of discussion, response bias remains a controversial topic in psychological measurement. The use of bias indicators in applied assessment is predicated on the assumptions that (a) response bias suppresses or moderates the criterion-related validity of substantive psychological indicators and (b) bias indicators are capable of detecting the presence of response bias. To test these assumptions, we reviewed literature comprising investigations in which bias indicators were evaluated as suppressors or moderators of the validity of other indicators. This review yielded only 41 studies across the contexts of personality assessment, workplace variables, emotional disorders, eligibility for disability, and forensic populations. In the first two contexts, there were enough studies to conclude that support for the use of bias indicators was weak. Evidence suggesting that random or careless responding may represent a biasing influence was noted, but this conclusion was based on a small set of studies. Several possible causes for failure to support the overall hypothesis were suggested, including poor validity of bias indicators, the extreme base rate of bias, and the adequacy of the criteria. In the other settings, the yield was too small to afford viable conclusions. Although the absence of a consensus could be used to justify continued use of bias indicators in such settings, false positives have their costs, including wasted effort and adverse impact. Despite many years of research, a sufficient justification for the use of bias indicators in applied settings remains elusive.
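
The two assumptions under test map onto familiar regression models: suppression is examined by adding the bias indicator to the prediction equation, and moderation by adding a substantive-score × bias-indicator interaction. The sketch below illustrates the moderation version on simulated data; the variable names are hypothetical, and this is not a reanalysis of any study in the review.

```python
# Illustrative moderation test for a response-bias indicator (simulated data;
# not a reanalysis of the studies reviewed in the article).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
trait = rng.normal(size=n)                 # substantive scale score
bias = rng.normal(size=n)                  # bias indicator (e.g., an impression-management scale)
crit = 0.4 * trait + rng.normal(size=n)    # criterion measure

X = sm.add_constant(np.column_stack([trait, bias, trait * bias]))  # interaction term carries the moderation test
fit = sm.OLS(crit, X).fit()
print("interaction b =", round(fit.params[-1], 3), " p =", round(fit.pvalues[-1], 3))
```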


Journal of Applied Psychology | 2007

The use of background and ability profiles to predict college student outcomes

Neal Schmitt; Frederick L. Oswald; Brian H. Kim; Anna Imus; Stephanie M. Merritt; Alyssa Friede; Smriti Shivpuri

To determine whether profiles of predictor variables provide incremental prediction of college student outcomes, the authors 1st applied an empirical clustering method to profiles based on the scores of 2,771 entering college students on a battery of biographical data and situational judgment measures, along with SAT and American College Test scores and high school grade point average, which resulted in 5 student groups. Performance of the students in these clusters was meaningfully different on a set of external variables, including college grade point average, self-rated performance, class absenteeism, organizational citizenship behavior, intent to quit their university, and satisfaction with college. The 14 variables in the profile were all significantly correlated with 1 or more of the outcome measures; however, nonlinear prediction of these outcomes on the basis of cluster membership did not add incrementally to a linear-regression-based combination of these 14 variables as predictors.
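
The comparison in this study can be approximated by clustering the 14 predictor scores, then asking whether dummy-coded cluster membership raises R² beyond a plain linear combination of the same predictors. The sketch below uses simulated data, with k-means standing in for the clustering method; all names are hypothetical.

```python
# Hedged sketch: does cluster membership add prediction beyond a linear combination
# of the same predictors? Simulated data; k-means stands in for the study's clustering method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n, p = 1000, 14
X = rng.normal(size=(n, p))                                   # 14 predictor scores per student
gpa = X @ rng.uniform(0.1, 0.3, size=p) + rng.normal(size=n)  # simulated outcome

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
dummies = np.column_stack([(clusters == k).astype(float) for k in range(1, 5)])  # drop one level

r2_linear = LinearRegression().fit(X, gpa).score(X, gpa)
X_aug = np.column_stack([X, dummies])
r2_aug = LinearRegression().fit(X_aug, gpa).score(X_aug, gpa)
print(f"linear R^2 = {r2_linear:.3f}, with cluster dummies R^2 = {r2_aug:.3f}")
```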


Journal of Applied Psychology | 2003

Impact of Elaboration on Socially Desirable Responding and the Validity of Biodata Measures

Neal Schmitt; Fred L. Oswald; Brian H. Kim; Michael A. Gillespie; Lauren J. Ramsay; Tae Yong Yoo

The current study investigated the impact of requiring respondents to elaborate on their answers to a biodata measure on mean scores, the validity of the biodata item composites, subgroup mean differences, and correlations with social desirability. Results of this study indicate that elaborated responses result in scores that are much lower than nonelaborated responses to the same items by an independent sample. Despite the lower mean score on elaborated items, it does not appear that elaboration affects the size of the correlation between social desirability and responses to biodata items or that it affects criterion-related validity or subgroup mean differences in a practically significant way.
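
A back-of-the-envelope version of the key comparisons — mean scores under elaborated versus nonelaborated responding, and the correlation between elaborated scores and social desirability — looks like this on simulated data; all numbers are invented for illustration only.

```python
# Illustrative comparison of elaborated vs. nonelaborated biodata scores (simulated data).
import numpy as np

rng = np.random.default_rng(3)
nonelab = rng.normal(loc=3.8, scale=0.6, size=300)   # hypothetical nonelaborated composite
elab = rng.normal(loc=3.3, scale=0.6, size=300)      # hypothetical elaborated composite
sd_scale = rng.normal(size=300)                      # social desirability scores, elaborated sample

d = (nonelab.mean() - elab.mean()) / np.sqrt((nonelab.var(ddof=1) + elab.var(ddof=1)) / 2)
r = np.corrcoef(elab, sd_scale)[0, 1]
print(f"mean difference d = {d:.2f}; r(elaborated score, social desirability) = {r:.2f}")
```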


Journal of College Student Development | 2006

Individual Differences in Academic Growth: Do They Exist, and Can We Predict Them?

Smriti Shivpuri; Neal Schmitt; Frederick L. Oswald; Brian H. Kim

College admissions tests predict college performance well, particularly first-year grade point average (GPA; Kuncel, Hezlett, & Ones, 2001, 2004). However, noncognitive measures may provide incremental validity beyond cognitive measures because they assess a broader range of college performance dimensions and reduce racial subgroup differences in performance. Beyond predicting first-year GPA, no studies, to our knowledge, have addressed patterns of academic growth across time. This paper reports data that demonstrate individual differences in academic growth patterns and variables that predict them. Results indicate that noncognitive predictors add to the prediction of GPA beyond traditional college admissions tests for our sample of freshman students. Implications for student affairs professionals are discussed.
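
One simple way to operationalize individual differences in academic growth is to fit each student's slope of GPA over semesters and then ask whether a predictor explains variation in those slopes. The sketch below does exactly that on simulated data; it is a rough stand-in, not the growth model used in the paper.

```python
# Rough sketch of individual growth slopes in GPA (simulated; not the paper's growth model).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n, t = 300, 6                                   # students x semesters
noncog = rng.normal(size=n)                     # hypothetical noncognitive predictor
slopes_true = 0.05 + 0.03 * noncog + 0.02 * rng.normal(size=n)
semesters = np.arange(t)
gpa = 3.0 + slopes_true[:, None] * semesters + 0.2 * rng.normal(size=(n, t))

# per-student OLS slope of GPA on semester (np.polyfit fits each column of gpa.T at once)
slopes_hat = np.polyfit(semesters, gpa.T, deg=1)[0]
model = LinearRegression().fit(noncog.reshape(-1, 1), slopes_hat)
print(f"predicted growth per unit of noncognitive score: {model.coef_[0]:.3f}")
```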


Human Performance | 2009

Estimating Trait and Situational Variance in a Situational Judgment Test

Alyssa J. Friede Westring; Frederick L. Oswald; Neal Schmitt; Stephanie Drzakowski; Anna Imus; Brian H. Kim; Smriti Shivpuri

In organizational research, situational judgment tests (SJTs) consistently demonstrate incremental validity, yet our theoretical understanding of SJTs is limited. Our knowledge could be advanced by decomposing the variance of SJT items into trait variance and situation variance; we do that by applying statistical methods used to analyze multitrait–multimethod matrices. A college-student sample (N = 2,747) was administered an SJT of goal orientation traits (i.e., mastery, performance-approach, and performance-avoid). Structural equation modeling was used to estimate the proportions of item variance attributable to situational differences (across students) and to trait-based differences in students (across situations). Situation factors accounted for more than three times as much variance as individual-difference factors. We conclude with general implications for the design of SJTs in organizational research.
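
The decomposition the authors describe can be illustrated, very roughly, on a simulated student-by-item matrix in which each item carries a situation effect plus the respondent's standing on the keyed trait; the share of variance tied to item (situation) means versus student means then gives a crude version of the partition. This is only an illustration — the paper's analysis uses structural equation modeling on multitrait–multimethod-style matrices, not the shortcut below.

```python
# Very rough illustration of partitioning SJT item variance into situation vs. person
# components (simulated data; the paper uses SEM, not this shortcut).
import numpy as np

rng = np.random.default_rng(5)
n_students, n_items, n_traits = 500, 12, 3
situation_eff = rng.normal(scale=1.0, size=n_items)            # one situation per item
trait_scores = rng.normal(scale=0.7, size=(n_students, n_traits))
trait_of_item = np.arange(n_items) % n_traits                   # each item keyed to one trait

# students x items response matrix: situation effect + standing on the keyed trait + noise
Y = (situation_eff[None, :] + trait_scores[:, trait_of_item]
     + rng.normal(scale=1.0, size=(n_students, n_items)))

total_var = Y.var()
situation_share = Y.mean(axis=0).var() / total_var    # variance of item (situation) means
person_share = Y.mean(axis=1).var() / total_var       # variance of student means
print(f"situation share ~ {situation_share:.2f}, person share ~ {person_share:.2f}")
```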


Organizational Research Methods | 2005

Extending a Practical Method for Developing Alternate Test Forms Using Independent Sets of Items

Frederick L. Oswald; Alyssa Friede; Neal Schmitt; Brian H. Kim; Lauren J. Ramsay

This study describes alternate test form development for a Situational Judgment Inventory (SJI) predicting college performance. Principal axis factor analysis of responses to the SJI lent support for a general factor, yet each SJI form sampled items across 12 distinct rationally derived content areas. The first step of developing alternate forms involved random and representative sampling of SJI items across each content area, creating a large number of preliminary 36-item SJI test forms. Gibson and Weiner (1998) provided criteria for selecting alternate forms; however, the authors of the present study extended this approach in the next step of selecting alternate forms based on their estimated criterion-related validity with grade point average. Results provide initial support for the 144 alternate forms generated. This general approach reflects a practical and methodologically sound means of developing alternate forms of types of measures that are rationally heterogeneous yet empirically homogeneous.
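
The two-step procedure — stratified random sampling of items across the 12 content areas to build 36-item forms, then screening candidate forms on their estimated criterion-related validity against GPA — can be sketched as follows. The data, item counts per area, and the retention rule are invented for illustration.

```python
# Hedged sketch of the alternate-forms procedure: stratified item sampling by content area,
# then screening candidate forms by estimated criterion-related validity (simulated data).
import numpy as np

rng = np.random.default_rng(6)
n_examinees, n_items, n_areas = 1000, 120, 12
area_of_item = np.repeat(np.arange(n_areas), n_items // n_areas)     # 10 items per content area
general = rng.normal(size=n_examinees)                               # general SJT factor
item_scores = 0.5 * general[:, None] + rng.normal(size=(n_examinees, n_items))
gpa = 0.4 * general + rng.normal(size=n_examinees)                   # criterion

def sample_form(rng):
    """Pick 3 items from each of the 12 content areas -> one 36-item form."""
    return np.concatenate([rng.choice(np.flatnonzero(area_of_item == a), size=3, replace=False)
                           for a in range(n_areas)])

forms = [sample_form(rng) for _ in range(200)]
validities = [np.corrcoef(item_scores[:, f].sum(axis=1), gpa)[0, 1] for f in forms]
keep = [f for f, v in zip(forms, validities) if v >= np.median(validities)]   # retain higher-validity forms
print(f"kept {len(keep)} of {len(forms)} candidate forms; median r = {np.median(validities):.2f}")
```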


Applied Measurement in Education | 2010

Differential Item Functioning in Biodata: Opportunity Access as an Explanation of Gender- and Race-Related DIF

Anna Imus; Neal Schmitt; Brian H. Kim; Frederick L. Oswald; Stephanie M. Merritt; Alyssa Friede Westring

Investigations of differential item functioning (DIF) have been conducted mostly on ability tests and have found little evidence of easily interpretable differences across various demographic subgroups. In this study, we examined the degree to which DIF in biographical data items referencing academically relevant background, experiences, and interests was related to differences in judgments about access to these experiences by members of different gender and race subgroups. DIF in the location parameter was significantly related (r = –.51, p < .01) to gender differences in perceived accessibility to experience. No significant relationships with accessibility were observed for DIF in the slope parameter across gender groups or for the slope and location parameters associated with DIF across Black and White groups. Practical implications for use of biodata and theoretical implications for DIF research are discussed.
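
A rough stand-in for the central analysis is to estimate each item's difficulty separately by gender group (here via per-group logistic intercepts, holding a proxy for ability constant) and correlate the resulting DIF estimates with the gender gap in rated access to the experience each item references. Everything below is simulated and simplified; the study itself estimated IRT location and slope parameters, not per-group logistic intercepts.

```python
# Rough stand-in for the DIF/accessibility analysis (simulated data; not the study's IRT models).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, n_items = 2000, 30
group = rng.integers(0, 2, size=n)                      # 0 / 1 gender groups (illustrative)
theta = rng.normal(size=n)                              # latent standing
access_gap = rng.normal(scale=0.5, size=n_items)        # per-item gender gap in rated access
difficulty = rng.normal(size=n_items)

dif_est = np.empty(n_items)
for j in range(n_items):
    # item endorsement depends on theta, item difficulty, and a group shift tied to access
    p = 1 / (1 + np.exp(-(theta - difficulty[j] + 0.8 * access_gap[j] * group)))
    y = rng.binomial(1, p)
    # difficulty (intercept) estimated separately per group, with theta as the predictor
    b0 = LogisticRegression().fit(theta[group == 0].reshape(-1, 1), y[group == 0]).intercept_[0]
    b1 = LogisticRegression().fit(theta[group == 1].reshape(-1, 1), y[group == 1]).intercept_[0]
    dif_est[j] = b1 - b0

print(f"r(DIF estimate, accessibility gap) = {np.corrcoef(dif_est, access_gap)[0, 1]:.2f}")
```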


Archive | 2009

Developing adaptive teams: A theory of dynamic team leadership

Steve W. J. Kozlowski; Daniel J. Watola; Jaclyn M. Jensen; Brian H. Kim; Isabel C. Botero


International Journal of Selection and Assessment | 2004

The Impact of Justice and Self-Serving Bias Explanations of the Perceived Fairness of Different Types of Selection Tests

Neal Schmitt; Frederick L. Oswald; Brian H. Kim; Michael A. Gillespie; Lauren J. Ramsay

Collaboration


Dive into Brian H. Kim's collaborations.

Top Co-Authors

Neal Schmitt, Michigan State University
Anna Imus, Michigan State University
Smriti Shivpuri, Michigan State University
Alyssa Friede, Michigan State University
Robert E. McGrath, Fairleigh Dickinson University
Stephanie M. Merritt, University of Missouri–St. Louis