Publication


Featured research published by Herman Aguinis.


Journal of Applied Psychology | 1997

Methodological Artifacts in Moderated Multiple Regression and Their Effects on Statistical Power

Herman Aguinis; Eugene F. Stone-Romero

Monte Carlo simulations were conducted to examine the degree to which the statistical power of moderated multiple regression (MMR) to detect the effects of a dichotomous moderator variable was affected by the main and interactive effects of (a) predictor variable range restriction, (b) total sample size, (c) sample sizes for 2 moderator variable-based subgroups, (d) predictor variable intercorrelation, and (e) magnitude of the moderating effect. Results showed that the main and interactive influences of these variables may have profound effects on power. Thus, future attempts to detect moderating effects with MMR should consider the power implications of both the main and interactive effects of the variables assessed in the present study. Otherwise, even moderating effects of substantial magnitude may go undetected.
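The kind of simulation the paper reports can be illustrated compactly. Below is a minimal Python sketch (not the authors' code) that estimates MMR power to detect a dichotomous moderator by repeatedly generating two subgroups with different slopes and testing the interaction term; the subgroup sizes, slopes, and error variance are illustrative assumptions.

```python
# Minimal sketch of a Monte Carlo power estimate for moderated
# multiple regression (MMR) with a dichotomous moderator.
# All parameter values below are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def mmr_power(n1=60, n2=40, slope1=0.3, slope2=0.5, reps=2000, alpha=0.05):
    """Estimate power of the MMR interaction test when the two
    moderator-based subgroups have different x->y slopes."""
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n1 + n2)
        z = np.repeat([0.0, 1.0], [n1, n2])           # dichotomous moderator
        slope = np.where(z == 0, slope1, slope2)
        y = slope * x + rng.standard_normal(n1 + n2)  # unit error variance
        X = sm.add_constant(np.column_stack([x, z, x * z]))
        fit = sm.OLS(y, X).fit()
        hits += fit.pvalues[3] < alpha                # test of the x*z term
    return hits / reps

print(mmr_power())  # power under these illustrative conditions
```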


Journal of Management | 2013

The Too-Much-of-a-Good-Thing Effect in Management

Jason R. Pierce; Herman Aguinis

A growing body of empirical evidence in the management literature suggests that antecedent variables widely accepted as leading to desirable consequences actually lead to negative outcomes. These increasingly pervasive and often countertheoretical findings permeate levels of analysis (i.e., from micro to macro) and management subfields (e.g., organizational behavior, strategic management). Although seemingly unrelated, the authors contend that this body of empirical research can be accounted for by a meta-theoretical principle they call the too-much-of-a-good-thing effect (TMGT effect). The authors posit that, due to the TMGT effect, all seemingly monotonic positive relations reach context-specific inflection points after which the relations turn asymptotic and often negative, resulting in an overall pattern of curvilinearity. They illustrate how the TMGT effect provides a meta-theoretical explanation for a host of seemingly puzzling results in key areas of organizational behavior (e.g., leadership, personality), human resource management (e.g., job design, personnel selection), entrepreneurship (e.g., new venture planning, firm growth rate), and strategic management (e.g., diversification, organizational slack). Finally, the authors discuss implications of the TMGT effect for theory development, theory testing, and management practice.
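In practice, TMGT-style curvilinearity is often probed by adding a quadratic term to a regression and checking its sign. Here is a minimal sketch with simulated data in which the true relation peaks and then declines; all parameter values are assumptions for illustration.

```python
# Minimal sketch: detect an inverted-U (TMGT-consistent) pattern by
# testing a quadratic term. The data-generating values are assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)                            # the "good thing"
y = 2.0 * x - 0.15 * x**2 + rng.standard_normal(500)   # benefit peaks, then declines

X = sm.add_constant(np.column_stack([x, x**2]))
fit = sm.OLS(y, X).fit()
b1, b2 = fit.params[1], fit.params[2]
print(f"quadratic term: {b2:.3f} (negative => curvilinear, TMGT-consistent)")
print(f"implied inflection point: x = {-b1 / (2 * b2):.2f}")  # vertex of the parabola
```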


Organizational Research Methods | 2014

Best Practice Recommendations for Designing and Implementing Experimental Vignette Methodology Studies

Herman Aguinis; Kyle J. Bradley

We describe experimental vignette methodology (EVM) as a way to address the dilemma of conducting experimental research that results in high levels of confidence regarding internal validity but is challenged by threats to external validity versus conducting nonexperimental research that usually maximizes external validity but whose conclusions are ambiguous regarding causal relationships. EVM studies consist of presenting participants with carefully constructed and realistic scenarios to assess dependent variables including intentions, attitudes, and behaviors, thereby enhancing experimental realism and also allowing researchers to manipulate and control independent variables. We describe two major types of EVM aimed at assessing explicit (i.e., paper people studies) and implicit (i.e., policy capturing and conjoint analysis) processes and outcomes. We offer best practice recommendations regarding the design and implementation of EVM studies based on a multidisciplinary literature review, discuss substantive domains and topics that can benefit from implementing EVM, address knowledge gaps regarding EVM such as the need to increase realism and the number and diversity of participants, and address ways to overcome some of the negative perceptions about EVM by pointing to exemplary articles that have used EVM successfully.


Journal of Applied Psychology | 2012

Understanding and Estimating the Power to Detect Cross-Level Interaction Effects in Multilevel Modeling

John E. Mathieu; Herman Aguinis; Steven Andrew Culpepper; Gilad Chen

Cross-level interaction effects lie at the heart of multilevel contingency and interactionism theories. Researchers have often lamented the difficulty of finding hypothesized cross-level interactions, and to date there has been no means by which the statistical power of such tests can be evaluated. We develop such a method and report results of a large-scale simulation study, verify its accuracy, and provide evidence regarding the relative importance of factors that affect the power to detect cross-level interactions. Our results indicate that the statistical power to detect cross-level interactions is determined primarily by the magnitude of the cross-level interaction, the standard deviation of lower level slopes, and the lower and upper level sample sizes. We provide a Monte Carlo tool that enables researchers to a priori design more efficient multilevel studies and provides a means by which they can better interpret potential explanations for nonsignificant results. We conclude with recommendations for how scholars might design future multilevel studies that will lead to more accurate inferences regarding the presence of cross-level interactions.
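A stripped-down version of such a power simulation can be written with a standard mixed-model routine. The sketch below is not the authors' published Monte Carlo tool: it simulates random-slope data with a cross-level interaction of assumed magnitude and counts how often the interaction term is significant. The group count, group size, effect size, and slope variability are illustrative assumptions.

```python
# Minimal sketch of a Monte Carlo power estimate for a cross-level
# interaction in a two-level random-slope model. reps is kept small
# because each mixed-model fit is relatively slow.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

def cross_level_power(J=30, n=10, gamma11=0.3, slope_sd=0.3,
                      reps=100, alpha=0.05):
    hits = 0
    for _ in range(reps):
        w = rng.standard_normal(J)              # level-2 predictor
        u0 = rng.normal(0, 0.5, J)              # random intercepts
        u1 = rng.normal(0, slope_sd, J)         # random slope deviations
        rows = []
        for j in range(J):
            x = rng.standard_normal(n)          # level-1 predictor
            y = (u0[j] + (0.5 + gamma11 * w[j] + u1[j]) * x
                 + rng.standard_normal(n))
            rows.append(pd.DataFrame({"y": y, "x": x, "w": w[j], "g": j}))
        d = pd.concat(rows, ignore_index=True)
        m = smf.mixedlm("y ~ x * w", d, groups="g",
                        re_formula="~x").fit(reml=False)
        hits += m.pvalues["x:w"] < alpha        # cross-level interaction test
    return hits / reps

print(cross_level_power())  # power under these illustrative conditions
```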


Journal of Applied Psychology | 2015

Correlational Effect Size Benchmarks

Frank A. Bosco; Herman Aguinis; Kulraj Singh; James G. Field; Charles A. Pierce

Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relation to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provides information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions.
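The benchmark logic is simply the empirical tertile partition of observed correlation magnitudes. A minimal sketch using a synthetic distribution follows; the paper's actual benchmarks come from the 147,328 extracted correlations, not from simulated values.

```python
# Minimal sketch of empirical effect size benchmarks: given a
# collection of observed |r| values, the tertile cutoffs serve as
# field-specific "small/medium/large" labels. The distribution
# below is a toy assumption for illustration.
import numpy as np

rng = np.random.default_rng(3)
abs_r = np.abs(rng.normal(0.16, 0.12, 10_000).clip(-0.99, 0.99))

small_max, medium_max = np.quantile(abs_r, [1 / 3, 2 / 3])
print(f"small:  |r| <= {small_max:.2f}")
print(f"medium: |r| <= {medium_max:.2f}")
print(f"large:  |r| >  {medium_max:.2f}")
```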


Journal of Management | 2011

Meta-Analytic Choices and Judgment Calls: Implications for Theory Building and Testing, Obtained Effect Sizes, and Scholarly Impact

Herman Aguinis; Dan R. Dalton; Frank A. Bosco; Charles A. Pierce; Catherine M. Dalton

The authors content analyzed 196 meta-analyses including 5,581 effect-size estimates published in Academy of Management Journal, Journal of Applied Psychology, Journal of Management, Personnel Psychology, and Strategic Management Journal from January 1982 through August 2009 to assess the presumed effects of each of 21 methodological choices and judgment calls on substantive conclusions. Results indicate that, overall, the various meta-analytic methodological choices available and judgment calls involved in the conduct of a meta-analysis have little impact on the resulting magnitude of the meta-analytically derived effect sizes. Thus, the present study, based on actual meta-analyses, casts doubt on previous warnings, primarily based on selective case studies, that judgment calls have an important impact on substantive conclusions. The authors also tested the fit of a multivariate model that includes relationships among theory-building and theory-testing goals, obtained effect sizes, year of publication of the meta-analysis, and scholarly impact (i.e., citations per year). Results indicate that the more a meta-analysis attempts to test an existing theory, the larger the number of citations, whereas the more a meta-analysis attempts to build new theory, the lower the number of citations. Also, in support of scientific particularism, as opposed to scientific universalism, the magnitude of the derived effects is not related to the extent to which a meta-analysis is cited. Taken together, the results provide a comprehensive data-based understanding of how meta-analytic reviews are conducted and the implications of these practices for theory building and testing, obtained effect sizes, and scholarly impact.
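One such judgment call is the choice of pooling model. The sketch below contrasts fixed-effect and DerSimonian-Laird random-effects pooling of a handful of hypothetical correlations via Fisher's z; the study-level inputs are made up for illustration.

```python
# Minimal sketch of two meta-analytic pooling choices applied to the
# same (hypothetical) study results.
import numpy as np

r = np.array([0.12, 0.25, 0.18, 0.30, 0.22])   # hypothetical study correlations
n = np.array([120, 85, 200, 60, 150])          # hypothetical sample sizes

z = np.arctanh(r)                 # Fisher z transform
v = 1.0 / (n - 3)                 # sampling variance of z
w = 1.0 / v

fixed = np.sum(w * z) / np.sum(w)

# DerSimonian-Laird between-study variance estimate
Q = np.sum(w * (z - fixed) ** 2)
tau2 = max(0.0, (Q - (len(z) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (v + tau2)
random_eff = np.sum(w_re * z) / np.sum(w_re)

print(f"fixed-effect pooled r:   {np.tanh(fixed):.3f}")
print(f"random-effects pooled r: {np.tanh(random_eff):.3f}")
```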


Organizational Research Methods | 2011

Debunking Myths and Urban Legends About Meta-Analysis

Herman Aguinis; Charles A. Pierce; Frank A. Bosco; Dan R. Dalton; Catherine M. Dalton

Meta-analysis is the dominant approach to research synthesis in the organizational sciences. We discuss seven meta-analytic practices, misconceptions, claims, and assumptions that have reached the status of myths and urban legends (MULs). These seven MULs include issues related to data collection (e.g., consequences of choices made in the process of gathering primary-level studies to be included in a meta-analysis), data analysis (e.g., effects of meta-analytic choices and technical refinements on substantive conclusions and recommendations for practice), and the interpretation of results (e.g., meta-analytic inferences about causal relationships). We provide a critical analysis of each of these seven MULs, including a discussion of why each merits being classified as an MUL, the kernel of truth in each, and the part of each MUL that represents a misunderstanding. As a consequence of discussing each of these seven MULs, we offer best-practice recommendations regarding how to conduct meta-analytic reviews.


Organizational Research Methods | 2010

Using Experience Sampling Methodology to Advance Entrepreneurship Theory and Research

Marilyn A. Uy; Maw Der Foo; Herman Aguinis

The authors propose the use of experience sampling methodology (ESM) as an innovative methodological approach to address critical questions in entrepreneurship research. ESM requires participants to provide reports of their thoughts, feelings, and behaviors at multiple times across situations as they happen in the natural environment. Thus, ESM allows researchers to capture dynamic person-by-situation interactions as well as between- and within-person processes, improve the ecological validity of results, and minimize retrospective biases. The authors provide a step-by-step description of how to design and implement ESM studies beginning with research design and ending with data analysis, and including issues of implementation such as time and resources needed, participant recruitment and orientation, signaling procedures, and the use of computerized devices and wireless technologies. The authors also describe a cell phone ESM protocol that enables researchers to monitor and interact with participants in real time, reduces costs, expedites data entry, and increases convenience. Finally, the authors discuss implications of ESM-based research for entrepreneurs, business incubators, and entrepreneurship educators.


Organizational Research Methods | 2001

A Generalized Solution for Approximating the Power to Detect Effects of Categorical Moderator Variables Using Multiple Regression

Herman Aguinis; Robert J. Boik; Charles A. Pierce

Investigators in numerous organization studies disciplines are concerned about the low statistical power of moderated multiple regression (MMR) to detect effects of categorical moderator variables. The authors provide a theoretical approximation to the power of MMR. The theoretical result confirms, synthesizes, and extends previous Monte Carlo research on factors that affect the power of MMR tests of categorical moderator variables and the low power of MMR in typical research situations. The authors develop and describe a computer program, which is available on the Internet, that allows researchers to approximate the power of MMR to detect the effects of categorical moderator variables given user-input information (e.g., sample size, reliability of measurement). The approximation also allows investigators to determine the effects of violating certain assumptions required for MMR. Given the typically low power of MMR, researchers are encouraged to use the computer program to approximate power while planning their research design and methodology.
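An analytic power approximation of this general kind can be computed from the noncentral F distribution. The sketch below is not the authors' published program; it shows the standard single-degree-of-freedom calculation, with the f-squared value and the noncentrality convention λ = f² · n as illustrative assumptions.

```python
# Minimal sketch of an analytic power approximation for a 1-df
# moderator (interaction) test in multiple regression, based on the
# noncentral F distribution. Parameter values are assumptions.
import scipy.stats as st

def mmr_power_approx(f2=0.02, n=120, n_predictors=3, alpha=0.05):
    df1 = 1                          # one interaction term tested
    df2 = n - n_predictors - 1       # residual degrees of freedom
    lam = f2 * n                     # noncentrality parameter (one common convention)
    f_crit = st.f.ppf(1 - alpha, df1, df2)
    return 1 - st.ncf.cdf(f_crit, df1, df2, lam)

print(f"approximate power: {mmr_power_approx():.2f}")
```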


Journal of Applied Psychology | 2010

Revival of Test Bias Research in Preemployment Testing

Herman Aguinis; Steven Andrew Culpepper; Charles A. Pierce

We developed a new analytic proof and conducted Monte Carlo simulations to assess the effects of methodological and statistical artifacts on the relative accuracy of intercept- and slope-based test bias assessment. The main simulation design included 3,185,000 unique combinations of a wide range of values for true intercept- and slope-based test bias, total sample size, proportion of minority group sample size to total sample size, predictor (i.e., preemployment test scores) and criterion (i.e., job performance) reliability, predictor range restriction, correlation between predictor scores and the dummy-coded grouping variable (e.g., ethnicity), and mean difference between predictor scores across groups. Results based on 15,925,000,000 individual samples of scores and more than 8,000,662,000,000 individual scores raise questions about the established conclusion that test bias in preemployment testing is nonexistent and, if it exists, it only occurs regarding intercept-based differences that favor minority group members. Because of the prominence of test fairness in the popular media, legislation, and litigation, our results point to the need to revive test bias research in preemployment testing.
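Intercept- and slope-based test bias are conventionally assessed with a Cleary-type moderated regression of the criterion on the test score, a dummy-coded group variable, and their product. A minimal sketch on simulated data that contains a mean score difference but no true bias (all parameter values are assumptions):

```python
# Minimal sketch of a Cleary-type test bias regression. The simulated
# data have a group mean difference on the predictor but a common
# regression line, i.e., no true intercept- or slope-based bias.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_min, n_maj = 80, 320
group = np.repeat([1.0, 0.0], [n_min, n_maj])             # 1 = minority group
test = rng.standard_normal(n_min + n_maj) - 0.3 * group   # mean score gap, no bias
perf = 0.5 * test + rng.standard_normal(n_min + n_maj)    # common regression line

X = sm.add_constant(np.column_stack([test, group, test * group]))
fit = sm.OLS(perf, X).fit()
print(f"intercept-based bias test p = {fit.pvalues[2]:.3f}")  # group main effect
print(f"slope-based bias test p     = {fit.pvalues[3]:.3f}")  # test x group term
```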

Collaboration


Dive into Herman Aguinis's collaborations.

Top Co-Authors

Ravi S. Ramani (George Washington University)
Frank A. Bosco (Virginia Commonwealth University)
Catherine M. Dalton (Indiana University Bloomington)
Nawaf Alabduljader (George Washington University)
Wayne F. Cascio (University of Colorado Denver)
Isabel Villamor (George Washington University)