Publication


Featured research published by Dev K. Dalal.


Organizational Research Methods | 2012

Some Common Myths About Centering Predictor Variables in Moderated Multiple Regression and Polynomial Regression

Dev K. Dalal; Michael J. Zickar

Additive transformations are often offered as a remedy for the common problem of collinearity in moderated regression and polynomial regression analysis. As the authors demonstrate in this article, mean-centering reduces nonessential collinearity but not essential collinearity. Therefore, in most cases, mean-centering of predictors does not accomplish its intended goal. In this article, the authors discuss and explain, through derivation of equations and empirical examples, that mean-centering changes lower-order regression coefficients but not the highest-order coefficients, does not change the fit of regression models, does not impact the power to detect moderating effects, and does not alter the reliability of product terms. The authors outline the positive effects of mean-centering, namely, the increased interpretability of the results and its importance for moderator analysis in structural equations and multilevel analysis. The authors recommend that researchers center their predictor variables, to aid interpretation, when those variables lack meaningful zero points within their observed range.
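
The coefficient invariance described above is easy to verify numerically. Below is a minimal sketch (simulated data and NumPy only, not the authors' materials) that fits the moderated regression y = b0 + b1*x + b2*z + b3*x*z before and after mean-centering: the lower-order coefficients shift, while the interaction coefficient and the residual fit do not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(5.0, 2.0, n)   # predictor without a meaningful zero point
z = rng.normal(3.0, 1.0, n)   # moderator
y = 1.0 + 0.5 * x + 0.3 * z + 0.2 * x * z + rng.normal(0.0, 1.0, n)

def moderated_fit(x, z, y):
    # y = b0 + b1*x + b2*z + b3*(x*z), fit by ordinary least squares
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    b, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, rss[0]

b_raw, rss_raw = moderated_fit(x, z, y)
b_ctr, rss_ctr = moderated_fit(x - x.mean(), z - z.mean(), y)

print("raw:     ", np.round(b_raw, 3))  # b1 and b2 differ between fits
print("centered:", np.round(b_ctr, 3))
assert np.isclose(b_raw[3], b_ctr[3])   # highest-order term is unchanged
assert np.isclose(rss_raw, rss_ctr)     # and so is the model fit
```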


Journal of Applied Psychology | 2014

Uncovering curvilinear relationships between conscientiousness and job performance: how theoretically appropriate measurement makes an empirical difference.

Nathan T. Carter; Dev K. Dalal; Anthony S. Boyce; Matthew S. O'Connell; Mei-Chuan Kung; Kristin M. Delgado

The personality trait of conscientiousness has seen considerable attention from applied psychologists due to its efficacy for predicting job performance across performance dimensions and occupations. However, recent theoretical and empirical developments have questioned the assumption that more conscientiousness always results in better job performance, suggesting a curvilinear link between the two. Despite these developments, the results of studies directly testing the idea have been mixed. Here, we propose this link has been obscured by another pervasive assumption known as the dominance model of measurement: that higher scores on traditional personality measures always indicate higher levels of conscientiousness. Recent research suggests dominance models show inferior fit to personality test scores as compared to ideal point models that allow for curvilinear relationships between traits and scores. Using data from two different samples of job incumbents, we show the rank-order changes that result from using an ideal point model expose a curvilinear link between conscientiousness and job performance 100% of the time, whereas dominance models yield mixed results, similar to the current state of the literature. Finally, with an independent cross-validation sample, we show that selection based on predicted performance using ideal point scores results in more favorable objective hiring outcomes. Implications for practice and future research are discussed.
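
The curvilinearity test at issue is conventionally run by adding a squared term to a linear regression and checking the gain in fit. A minimal sketch with simulated scores (an assumed inverted-U relation, not the authors' incumbent data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
theta = rng.normal(0.0, 1.0, n)  # trait score, e.g., conscientiousness
# simulate an inverted-U: performance peaks at moderate trait levels
perf = 0.6 * theta - 0.25 * theta**2 + rng.normal(0.0, 1.0, n)

def r_squared(cols, y):
    X = np.column_stack([np.ones(len(y))] + list(cols))
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ b) / np.var(y)

r2_linear = r_squared([theta], perf)
r2_quad = r_squared([theta, theta**2], perf)
# a nontrivial R^2 gain from the squared term signals curvilinearity
print(f"linear R^2 = {r2_linear:.3f}, with quadratic = {r2_quad:.3f}")
```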


Organizational Research Methods | 2011

Using Mixed-Model Item Response Theory to Analyze Organizational Survey Responses: An Illustration Using the Job Descriptive Index

Nathan T. Carter; Dev K. Dalal; Christopher J. Lake; Bing C. Lin; Michael J. Zickar

In this article, the authors illustrate the use of mixed-model item response theory (MM-IRT) and explain its usefulness for analyzing organizational surveys. The authors begin by giving an overview of MM-IRT, focusing on both technical aspects and previous organizational applications. Guidance is provided on how researchers can use MM-IRT to check scoring assumptions, identify the influence of systematic responding that is unrelated to item content (i.e., response sets), and evaluate individual and group difference variables as predictors of class membership. After summarizing the current body of research using MM-IRT to address problems relevant to organizational researchers, the authors present an illustration of the use of MM-IRT with the Job Descriptive Index (JDI), focusing on the use of the "?" response option. Three classes emerged: one most likely to respond in the positive direction, one most likely to respond in the negative direction, and one most likely to use the "?" response. Trust in management, job tenure, age, race, and sex were considered as correlates of class membership. Results are discussed in terms of the applicability of MM-IRT and future research endeavors.
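
Fitting MM-IRT proper requires specialized psychometric software, but the core idea, latent classes with different response tendencies, can be illustrated loosely. The sketch below (simulated data; scikit-learn assumed available; a Gaussian mixture is a stand-in, not MM-IRT itself) recovers a class of substantive responders and a class clustered on a neutral, "?"-like option:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# two simulated respondent classes on a five-item satisfaction scale:
# substantive responders vs. respondents parked on a "?"-like midpoint
substantive = rng.normal(loc=4.0, scale=0.8, size=(150, 5))
midpoint = rng.normal(loc=2.5, scale=0.2, size=(50, 5))
X = np.vstack([substantive, midpoint])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gm.predict(X)
print(np.bincount(labels))  # should roughly recover the 150/50 split
```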


Journal of Applied Psychology | 2014

Are common language effect sizes easier to understand than traditional effect sizes?

Margaret E. Brooks; Dev K. Dalal; Kevin P. Nolan

Communicating the results of research to nonscientists presents many challenges. Among these challenges is communicating the effectiveness of an intervention in a way that people untrained in statistics can understand. Use of traditional effect size metrics (e.g., r, r²) has been criticized as being confusing to general audiences. In response, researchers have developed nontraditional effect size indicators (e.g., binomial effect size display, common language effect size indicator) with the goal of presenting information in a more understandable manner. The studies described here present the first empirical test of these claims of understandability. Results show that nontraditional effect size indicators are perceived as more understandable and useful than traditional indicators for communicating the effectiveness of an intervention. People also rated training programs as more effective and were willing to pay more for programs whose effectiveness was described using the nontraditional effect size metrics.
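
For concreteness, here is one plausible way to compute the two nontraditional indicators the abstract names (function names are mine): the common language effect size as the probability that a randomly drawn treated score exceeds a randomly drawn control score, and the binomial effect size display, which restates a correlation r as a pair of "success rates".

```python
import numpy as np

def common_language_es(treatment, control):
    # P(random treated score > random control score), ties counted half
    t = np.asarray(treatment, dtype=float)[:, None]
    c = np.asarray(control, dtype=float)[None, :]
    return float((t > c).mean() + 0.5 * (t == c).mean())

def binomial_effect_size_display(r):
    # restate a correlation r as treatment vs. control "success rates"
    return 0.5 + r / 2.0, 0.5 - r / 2.0

cles = common_language_es([5, 6, 7, 8], [4, 5, 6, 7])
print(cles)  # 0.71875: a treated score beats a control ~72% of the time
print(binomial_effect_size_display(0.30))  # (0.65, 0.35)
```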


Psychological Methods | 2017

Item response theory scoring and the detection of curvilinear relationships.

Nathan T. Carter; Dev K. Dalal; Li Guan; Alexander C. LoPilato; Scott Withrow

Psychologists are increasingly positing theories of behavior that suggest psychological constructs are curvilinearly related to outcomes. However, results from empirical tests for such curvilinear relations have been mixed. We propose that correctly identifying the response process underlying responses to measures is important for the accuracy of these tests. Indeed, past research has indicated that item responses to many self-report measures follow an ideal point response process, wherein respondents agree only with items that reflect their own standing on the measured variable, as opposed to a dominance process, wherein stronger agreement, regardless of item content, is always indicative of higher standing on the construct. We test whether item response theory (IRT) scoring appropriate for the underlying response process to self-report measures results in more accurate tests for curvilinearity. In two simulation studies, we show that, regardless of the underlying response process used to generate the data, using the traditional sum score generally results in high Type I error rates or low power for detecting curvilinearity, depending on the distribution of item locations. With few exceptions, appropriate power and Type I error rates are achieved when dominance-based and ideal point-based IRT scoring are correctly used to score dominance and ideal point response data, respectively. We conclude that (a) researchers should be theory-guided when hypothesizing and testing for curvilinear relations; (b) correctly identifying whether responses follow an ideal point versus a dominance process, particularly when items are not extreme, is critical; and (c) IRT model-based scoring is crucial for accurate tests of curvilinearity.
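
The two response processes the abstract contrasts have distinct item response function shapes, which a few lines of code can make visible. A minimal sketch (the Gaussian-shaped ideal point curve is a simplified stand-in for formal ideal point models such as the GGUM, not the authors' exact specification):

```python
import numpy as np

def dominance_2pl(theta, a, b):
    # dominance process: endorsement probability rises monotonically
    # with the trait (two-parameter logistic item response function)
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ideal_point(theta, delta, width=1.0):
    # ideal point process: endorsement peaks where the respondent's
    # trait matches the item location and falls off in both directions
    return np.exp(-((theta - delta) ** 2) / (2.0 * width**2))

theta = np.linspace(-3, 3, 7)
print(np.round(dominance_2pl(theta, a=1.5, b=0.0), 2))  # monotone
print(np.round(ideal_point(theta, delta=0.0), 2))       # single-peaked
```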


Personality and Individual Differences | 2010

An ideal point account of the JDI Work satisfaction scale

Nathan T. Carter; Dev K. Dalal


Industrial and Organizational Psychology | 2010

Six Questions That Practitioners (Might) Have About Ideal Point Response Process Items

Dev K. Dalal; Scott Withrow; Robert E. Gibby; Michael J. Zickar


Industrial and Organizational Psychology | 2015

Stop Apologizing for Your Samples, Start Embracing Them

Xiaoyuan Zhu; Janet L. Barnes-Farrell; Dev K. Dalal


Industrial and Organizational Psychology | 2009

Using Dark Side Personality Traits to Identify Potential Failure

Dev K. Dalal; Kevin P. Nolan


Industrial and Organizational Psychology | 2010

The Lens Model: An Application of JDM Methodologies to IOOB Practice

Dev K. Dalal; Dalia L. Diab; William K. Balzer; Michael E. Doherty

Collaboration


Dev K. Dalal's top co-authors and their affiliations:

Kevin P. Nolan (Bowling Green State University)
Michael J. Zickar (Bowling Green State University)
Scott Withrow (Bowling Green State University)
Bing C. Lin (Portland State University)
Christopher J. Lake (Bowling Green State University)
Dalia L. Diab (Bowling Green State University)
Li Guan (University of Georgia)