Publication


Featured research published by David M. LaHuis.


Organizational Research Methods | 2009

The Accuracy of Significance Tests for Slope Variance Components in Multilevel Random Coefficient Models

David M. LaHuis; Matthew W. Ferguson

This study examined the behavior of three tests for significant slope variance in multilevel random coefficient (MRC) models: the Hierarchical Linear Modeling chi-square test, the likelihood ratio test (LRT), and the corrected LRT. Monte Carlo simulations were conducted varying the number of groups, group size, and effect size. Results suggest that neither the number of groups nor group size influenced Type I error rates, and that group size had a stronger effect on power than the number of groups did. The one-tailed LRT demonstrated the best balance between power and Type I errors. Recommendations for conducting MRC analyses are presented.
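
A minimal sketch of the model comparison the abstract describes, assuming a pandas DataFrame `df` with placeholder columns `y`, `x`, and `group`; the 50:50 chi-square mixture is a common shortcut for the boundary problem, not necessarily the exact correction the paper evaluates:

```python
import statsmodels.formula.api as smf
from scipy import stats

def slope_variance_lrt(df):
    # Null model: random intercepts only.
    m0 = smf.mixedlm("y ~ x", df, groups=df["group"]).fit(reml=False)
    # Alternative: random intercepts and random slopes for x.
    m1 = smf.mixedlm("y ~ x", df, groups=df["group"],
                     re_formula="~x").fit(reml=False)
    lrt = 2 * (m1.llf - m0.llf)
    # Naive p-value treats the added slope variance and covariance
    # as two free parameters.
    p_naive = stats.chi2.sf(lrt, df=2)
    # The slope variance is tested on its boundary (zero), so the
    # one-tailed/corrected LRT uses a chi-square mixture; a common
    # approximation is a 50:50 mix of df = 1 and df = 2.
    p_mix = 0.5 * stats.chi2.sf(lrt, df=1) + 0.5 * stats.chi2.sf(lrt, df=2)
    return lrt, p_naive, p_mix
```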


Organizational Research Methods | 2014

Explained Variance Measures for Multilevel Models

David M. LaHuis; Michael J. Hartman; Shotaro Hakoyama; Patrick C. Clark

One challenge in using multilevel models is determining how to report the amount of explained variance. In multilevel models, explained variance can be reported for each level or for the total model. Existing measures have been based primarily on the reduction of variance components across models. However, these measures have not been reported consistently because they have some undesirable properties. The present study is one of the first to evaluate the accuracy of these measures using Monte Carlo simulations. In addition, a measure based on the full partitioning of variance in multilevel models was examined. With the exception of the Level 2 explained variance measure, all other measures performed well across our simulated conditions.
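A minimal sketch of the variance-reduction logic behind the existing measures the abstract discusses (not the paper's own measure), again assuming placeholder columns `y`, `x`, and `group`:

```python
import statsmodels.formula.api as smf

def pseudo_r2(df):
    null = smf.mixedlm("y ~ 1", df, groups=df["group"]).fit(reml=False)
    full = smf.mixedlm("y ~ x", df, groups=df["group"]).fit(reml=False)
    # Level 1 residual variance (sigma^2) and Level 2 intercept variance (tau).
    sigma2_0, tau_0 = null.scale, float(null.cov_re.iloc[0, 0])
    sigma2_1, tau_1 = full.scale, float(full.cov_re.iloc[0, 0])
    r2_level1 = (sigma2_0 - sigma2_1) / sigma2_0
    # The Level 2 ratio can come out negative, one of the
    # "undesirable properties" of reduction-based measures.
    r2_level2 = (tau_0 - tau_1) / tau_0
    return r2_level1, r2_level2
```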


Human Performance | 2005

Investigating Nonlinear Conscientiousness-Job Performance Relations for Clerical Employees

David M. LaHuis; Nicholas R. Martin; John M. Avis

In this study we tested for a nonlinear relation between Conscientiousness and job performance using two independent samples of clerical employees (N = 192 and N = 203, respectively). Based on several characteristics of clerical positions, we expected that Conscientiousness would be asymptotically related to job performance. In Study 1, we found evidence of a nonlinear relation using biodata and situational judgment items to measure Conscientiousness. In Study 2, we found similar results using a traditional Conscientiousness measure and controlling for cognitive ability. We discuss the implications for using Conscientiousness to select clerical employees.
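
A minimal sketch of one standard way to test such a relation: a hierarchical regression that adds a quadratic Conscientiousness term while controlling for cognitive ability, with illustrative column names:

```python
import statsmodels.formula.api as smf

def test_nonlinearity(df):
    linear = smf.ols("performance ~ ability + conscientiousness", df).fit()
    quad = smf.ols("performance ~ ability + conscientiousness "
                   "+ I(conscientiousness ** 2)", df).fit()
    # A significant negative quadratic coefficient is consistent with
    # performance leveling off (an asymptote) at high Conscientiousness.
    return (quad.params["I(conscientiousness ** 2)"],
            quad.compare_f_test(linear))  # (F, p, df_diff)
```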


Organizational Research Methods | 2007

Using Multilevel Random Coefficient Modeling to Investigate Rater Effects in Performance Ratings

David M. LaHuis; John M. Avis

There has been recent interest in how rater attributes lead to systematic variance in ratings of job performance. Although numerous rater characteristics have been proposed to affect performance ratings, there has been little empirical research on them. We suggest this is due to methodological problems involving levels of analysis and propose multilevel random coefficient (MRC) modeling as a solution. We present a multilevel model of rater effects in which ratees are nested within raters. We also present two examples of applying MRC modeling to criterion-related validity data to study how rater-level variables influence performance ratings and the relationships between selection assessments and those ratings.
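
A minimal sketch of the ratee-within-rater setup, using a hypothetical rater-level moderator `rater_experience` and illustrative column names:

```python
import statsmodels.formula.api as smf

def rater_effects_model(df):
    model = smf.mixedlm(
        # Cross-level interaction: a rater-level attribute may moderate
        # the assessment-rating (criterion-related validity) slope.
        "rating ~ assessment_score * rater_experience",
        df,
        groups=df["rater"],              # ratees nested within raters
        re_formula="~assessment_score",  # validity slope varies by rater
    )
    return model.fit(reml=False)
```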


Organizational Research Methods | 2011

An Examination of Item Response Theory Item Fit Indices for the Graded Response Model

David M. LaHuis; Patrick C. Clark; Erin L. O'Brien

The current study examined the Type I error rates and power of several item response theory (IRT) item fit indices used in conjunction with the graded response model (GRM). Specifically, S−χ², χ²*, and adjusted χ² degrees-of-freedom ratios (χ²/df) were examined. Model misfit was introduced by manipulating item parameters and by using a different IRT model to generate item data. Results indicated lower than expected Type I error rates for S−χ² and χ²*. Adjusted χ²/df ratios resulted in large Type I error rates when used with cross-validation and very low Type I error rates when used without cross-validation. χ²* and adjusted χ²/df ratios without cross-validation were the most powerful overall.
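
A minimal sketch of the χ²/df idea, comparing observed response frequencies in trait-score groups to model-expected frequencies; the expected table is assumed to come from a fitted GRM, and this generic version is illustrative rather than the paper's implementation:

```python
import numpy as np

def chi2_df_ratio(observed, expected, n_params):
    # observed, expected: (n_groups, n_categories) frequency tables
    observed = np.asarray(observed, float)
    expected = np.asarray(expected, float)
    chi2 = ((observed - expected) ** 2 / expected).sum()
    df = observed.size - n_params - 1
    return chi2 / df  # ratios near 1 suggest adequate item fit
```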


Organizational Research Methods | 2009

Investigating Faking Using a Multilevel Logistic Regression Approach to Measuring Person Fit

David M. LaHuis; Derek Copeland

This article describes how a multilevel logistic regression (MLR) approach to assessing person fit can be used to test hypotheses concerning faking on personality assessments. Item difficulty and person trait estimates obtained from a two-parameter logistic item response theory model are used to predict the probability of endorsing an item in an MLR equation. The regression slope for item difficulty reflects the extent to which the probability of endorsement decreases as item difficulty increases. Less negative slopes may indicate faking, and slope variance may be modeled with person-level variables using MLR. Two examples are presented. Example 1 models faking on a personality assessment with dichotomous items. Example 2 extends the approach to scales using polytomous items.
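
A minimal sketch of the person-level building block: one respondent's logistic regression of item endorsement on 2PL item difficulty. The full approach models all persons jointly (with trait estimates) in a single MLR model; this per-person version only illustrates the slope interpretation:

```python
import numpy as np
import statsmodels.api as sm

def person_slope(endorsed, difficulty):
    # endorsed: 0/1 responses; difficulty: 2PL difficulty estimates
    X = sm.add_constant(np.asarray(difficulty, float))
    fit = sm.Logit(np.asarray(endorsed, int), X).fit(disp=False)
    # A strongly negative slope is expected: endorsement drops as items
    # get harder. A slope near zero is a candidate flag for faking.
    return fit.params[1]
```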


Human Performance | 2007

A Multiple-Task Measurement Framework for Assessing Maximum-Typical Performance

Phillip M. Mangos; Debra Steele-Johnson; David M. LaHuis; Edward D. White

This study presents a novel measurement framework for assessing and predicting maximum and typical performance. The proposed measurement approach addresses the need for organizations to assess maximum and typical performance changes over time in complex job settings requiring coordination of multiple tasks with changing priorities. We present results of an experiment in which participants engaged in a complex task with multiple task elements and instructions to either maximize a different task element in each of four performance blocks (variable-priority condition) or treat all task elements with equal priority (stable-priority condition). We estimated growth curves corresponding to each task element and calculated the area under each growth curve as a summary performance index. Growth curves corresponding to the maximized, high-priority task element in the variable-priority condition reflected maximum performance, whereas those corresponding to the deemphasized, lower priority elements reflected typical performance. We compared the shape of the maximum and typical growth curves in the variable-priority condition to their corresponding performance trajectories in the stable-priority condition. In addition, we tested the moderating influence of individual differences in action-state orientation on the obtained maximum and typical performance estimates. Results indicated support for the proposed measurement framework in terms of its usefulness for inducing sustained levels of maximum performance and for identifying and correcting sources of the maximum-typical performance discrepancy.
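
A minimal sketch of the summary index the abstract describes: fit a growth curve (here quadratic) to one task element's block scores, then take the area under the fitted curve. Names and the polynomial degree are illustrative:

```python
import numpy as np

def growth_curve_auc(block_scores, degree=2):
    blocks = np.arange(1, len(block_scores) + 1)
    coefs = np.polyfit(blocks, block_scores, degree)  # growth curve
    # Integrate the fitted polynomial over the observed blocks.
    antideriv = np.polyint(coefs)
    auc = np.polyval(antideriv, blocks[-1]) - np.polyval(antideriv, blocks[0])
    return auc  # larger area = higher sustained performance
```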


Organizational Research Methods | 2012

An Examination of Power and Type I Errors for Two Differential Item Functioning Indices Using the Graded Response Model

Patrick C. Clark; David M. LaHuis

This study examined two methods for detecting differential item functioning (DIF): Raju, van der Linden, and Fleer's (1995) differential functioning of items and tests (DFIT) procedure and Thissen, Steinberg, and Wainer's (1988) likelihood ratio test (LRT). The major research questions concerned which test provides the best balance of Type I errors and power and whether the tests differ in detecting different types of DIF. Monte Carlo simulations were conducted to address these questions. Equal and unequal sample size conditions were fully crossed with test lengths of 10 and 20 items. In addition, α and β parameters were manipulated in order to simulate DIF. Findings indicate that DFIT and the LRT both had acceptable Type I error rates when sample sizes were equal but that DFIT produced too many Type I errors when sample sizes were unequal. Overall, the LRT exhibited greater power to detect both α and β parameter DIF than did DFIT. However, DFIT was more powerful than the LRT when the last two β parameters had DIF as opposed to when the extreme β parameters had DIF.
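
A minimal sketch of the LRT's comparison step; the two log-likelihoods are assumed to come from whatever routine fits the GRM with the studied item's parameters constrained equal across groups versus allowed to differ:

```python
from scipy import stats

def dif_lrt(llf_constrained, llf_free, n_freed_params):
    # G^2 compares the constrained (no-DIF) and free (DIF) fits.
    g2 = 2 * (llf_free - llf_constrained)
    p = stats.chi2.sf(g2, df=n_freed_params)
    return g2, p  # small p: the item functions differently across groups
```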


Organizational Research Methods | 2018

Applying Item Response Trees to Personality Data in the Selection Context

David M. LaHuis; Caitlin E. Blackmore; Kinsey Blue Bryant-Lees; Kristin Delgado

Self-report personality scales are used frequently in personnel selection. Traditionally, researchers have assumed that individuals respond to items within these scales using a single-decision process. More recently, a flexible set of item response (IR) tree models have been developed that allow researchers to investigate multiple-decision processes. In the present research, we found that IR tree models fit the data better than a single-decision IR model when fitted to seven self-report personality scales used in a concurrent criterion-related validity study. In addition, we found evidence that the latent variable underlying the direction-of-response (agree vs. disagree) decision process predicted job performance better than the latent variables reflecting the other decision processes in the best-fitting IR tree model.
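
A minimal sketch of the recoding step behind one common IR tree for a 5-point Likert item (midpoint, then direction, then extremity); this is a standard tree from the IR tree literature, not necessarily the exact model the paper selected:

```python
def tree_pseudo_items(response):
    # response in 1..5, with 3 as the neutral midpoint.
    # None means the node is not reached for that response.
    midpoint = 1 if response == 3 else 0
    direction = None if midpoint else (1 if response > 3 else 0)  # agree?
    extreme = None if midpoint else (1 if response in (1, 5) else 0)
    return midpoint, direction, extreme
```

Each pseudo-item is then modeled with its own latent variable, which is how the tree separates the direction-of-response process from midpoint and extremity tendencies.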


International Journal of Selection and Assessment | 2011

Do Applicants and Incumbents Respond to Personality Items Similarly? A Comparison Using an Ideal Point Response Model

Erin L. O'Brien; David M. LaHuis

Collaboration


Dive into David M. LaHuis's collaborations.

Top Co-Authors

Tyler Barnes, Wright State University
Kevin J. Eschleman, San Francisco State University
Andrea Wiemann, California State University