Publications

Featured research published by Adam W. Meade.


Psychological Methods | 2012

Identifying Careless Responses in Survey Data.

Adam W. Meade; S. Bartholomew Craig

When data are collected via anonymous Internet surveys, particularly under conditions of obligatory participation (such as with student samples), data quality can be a concern. However, little guidance exists in the published literature regarding techniques for detecting careless responses. Several potential approaches have previously been suggested for identifying careless respondents via indices computed from the data, yet almost no prior work has examined the relationships among these indicators or the types of data patterns identified by each. In 2 studies, we examined several methods for identifying careless responses, including (a) special items designed to detect careless response, (b) response consistency indices formed from responses to typical survey items, (c) multivariate outlier analysis, (d) response time, and (e) self-reported diligence. Results indicated that there are two distinct patterns of careless response (random and nonrandom) and that different indices are needed to identify these different response patterns. We also found that approximately 10%-12% of undergraduates completing a lengthy survey for course credit were identified as careless responders. In Study 2, we simulated data with known random response patterns to determine the efficacy of several indicators of careless response. We found that the nature of the data strongly influenced the efficacy of the indices to identify careless responses. Recommendations include using identified rather than anonymous responses, incorporating instructed response items before data collection, and computing consistency indices and multivariate outlier analyses to ensure high-quality data.
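
Two of the indices discussed here are straightforward to compute. Below is a minimal sketch of an even-odd consistency score and Mahalanobis-distance outlier flagging, assuming a generic Likert-type response matrix; the simulated data, scale structure, and cutoffs are illustrative only, not the article's exact procedure.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_persons, n_scales, items_per_scale = 200, 6, 5
# Hypothetical 5-point Likert responses: 200 respondents, 6 subscales of 5 items.
X = rng.integers(1, 6, size=(n_persons, n_scales * items_per_scale)).astype(float)

# --- Even-odd consistency index ---
# Score each subscale twice (odd- and even-numbered items), then correlate the
# two half-score vectors within each respondent; low values suggest random responding.
halves = X.reshape(n_persons, n_scales, items_per_scale)
odd_half = halves[:, :, 0::2].mean(axis=2)    # (persons, scales)
even_half = halves[:, :, 1::2].mean(axis=2)

def rowwise_corr(a, b):
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    return (a * b).sum(axis=1) / np.sqrt((a**2).sum(axis=1) * (b**2).sum(axis=1))

even_odd = rowwise_corr(odd_half, even_half)

# --- Multivariate outlier analysis ---
# Squared Mahalanobis distance from the sample centroid; under multivariate
# normality this is roughly chi-square distributed with p degrees of freedom.
diff = X - X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
outlier_flag = d2 > chi2.ppf(0.999, df=X.shape[1])

print(f"{(even_odd < 0.3).sum()} low-consistency flags, {outlier_flag.sum()} outlier flags")
```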


Journal of Applied Psychology | 2008

Power and sensitivity of alternative fit indices in tests of measurement invariance.

Adam W. Meade; Emily C. Johnson; Phillip W. Braddy

Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known to not be invariant. The results indicate that AFIs are much less sensitive to sample size and are more sensitive to a lack of invariance than chi-square-based tests of MI. The authors suggest reporting differences in the comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a general change-in-CFI cutoff (.002) seemed to perform well in the analyses, condition-specific change-in-NCI values exhibited better performance than a single change-in-NCI value. Tables of these values are provided, as are recommendations for best practices in MI testing.
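
As a rough illustration of the change-in-CFI decision rule, the sketch below computes CFI from model and null-model chi-squares and applies the .002 cutoff from this article; the fit statistics are hypothetical placeholders, not values from the study.

```python
# Minimal sketch: delta-CFI comparison of nested multigroup invariance models.
def cfi(chisq_model, df_model, chisq_baseline, df_baseline):
    """Comparative fit index from model and baseline (null-model) chi-squares."""
    num = max(chisq_model - df_model, 0.0)
    den = max(chisq_model - df_model, chisq_baseline - df_baseline, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

# Hypothetical fit statistics for illustration only.
chisq_null, df_null = 2450.0, 110
cfi_configural = cfi(162.3, 84, chisq_null, df_null)  # loadings free
cfi_metric = cfi(175.9, 92, chisq_null, df_null)      # loadings constrained equal

delta_cfi = cfi_configural - cfi_metric
# Meade, Johnson, and Braddy (2008) suggest ~.002 as a general cutoff.
verdict = "rejected" if delta_cfi > 0.002 else "supported"
print(f"delta CFI = {delta_cfi:.4f}; invariance {verdict}")
```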


Behavior Research Methods | 2011

The viability of crowdsourcing for survey research

Tara S. Behrend; David Sharek; Adam W. Meade; Eric N. Wiebe

Online contract labor portals (i.e., crowdsourcing) have recently emerged as attractive alternatives to university participant pools for the purposes of collecting survey data for behavioral research. However, prior research has not provided a thorough examination of crowdsourced data for organizational psychology research. We found that, compared with a traditional university participant pool, crowdsourcing respondents were older, were more ethnically diverse, and had more work experience. Additionally, the reliability of the data from the crowdsourcing sample was as good as or better than that of the corresponding university sample. Moreover, measurement invariance generally held across these groups. We conclude that the use of these labor portals is an efficient and appropriate alternative to a university participant pool, despite small differences in personality and socially desirable responding across the samples. The risks and advantages of crowdsourcing are outlined, and an overview of practical and ethical guidelines is provided.
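
The reliability comparison reported here reduces to computing coefficient alpha in each sample. A minimal sketch, with simulated placeholder scores standing in for the crowdsourced and university samples:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (respondents, k) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(300, 1))
# Hypothetical 8-item scale; the "student" sample gets noisier responses.
crowd = true_score + rng.normal(scale=0.8, size=(300, 8))
student = true_score + rng.normal(scale=1.0, size=(300, 8))

print(f"alpha: crowd = {cronbach_alpha(crowd):.2f}, student = {cronbach_alpha(student):.2f}")
```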


Organizational Research Methods | 2004

A Comparison of Item Response Theory and Confirmatory Factor Analytic Methodologies for Establishing Measurement Equivalence/Invariance

Adam W. Meade; Gary J. Lautenschlager

Recently, there has been increased interest in tests of measurement equivalence/invariance (ME/I). This study uses simulated data with known properties to assess the appropriateness, similarities, and differences between confirmatory factor analysis and item response theory methods of assessing ME/I. Results indicate that although neither approach is without flaw, the item response theory–based approach seems to be better suited for some types of ME/I analyses.
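
The two frameworks being compared are also linked analytically: under the normal-ogive model with a standardized factor and unit-variance latent item responses, a CFA loading and threshold map onto IRT discrimination and difficulty (Takane & de Leeuw, 1987). A small sketch of that conversion, with illustrative parameter values:

```python
import numpy as np

def cfa_to_irt(loading: float, threshold: float) -> tuple[float, float]:
    """Normal-ogive correspondence: lambda/tau -> discrimination a, difficulty b."""
    a = loading / np.sqrt(1.0 - loading**2)
    b = threshold / loading
    return a, b

for lam, tau in [(0.4, 0.0), (0.7, 0.5), (0.9, -0.3)]:
    a, b = cfa_to_irt(lam, tau)
    print(f"lambda={lam:.1f}, tau={tau:+.1f}  ->  a={a:.2f}, b={b:+.2f}")
```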


Structural Equation Modeling | 2004

A Monte-Carlo Study of Confirmatory Factor Analytic Tests of Measurement Equivalence/Invariance.

Adam W. Meade; Gary J. Lautenschlager

In recent years, confirmatory factor analytic (CFA) techniques have become the most common method of testing for measurement equivalence/invariance (ME/I). However, no study has simulated data with known differences to determine how well these CFA techniques perform. This study utilizes data with a variety of known simulated differences in factor loadings to determine how well traditional tests of ME/I can detect these specific simulated differences. Results show that traditional CFA tests of ME/I perform well under ideal situations but that large sample sizes, a sufficient number of manifest indicators, and at least moderate communalities are crucial for assurance that ME/I conditions exist.
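
The simulation design rests on generating data from a factor model in which one parameter is deliberately altered in a focal group. A minimal sketch of that data-generation step, with illustrative values:

```python
import numpy as np

def simulate_group(n, loadings, rng):
    """One-factor model, x = loading * f + e, with unit item variances."""
    f = rng.normal(size=(n, 1))
    uniqueness = np.sqrt(1.0 - loadings**2)  # keeps each item's variance at 1
    e = rng.normal(size=(n, loadings.size)) * uniqueness
    return f * loadings + e

rng = np.random.default_rng(42)
loadings_ref = np.array([0.7, 0.7, 0.7, 0.7, 0.7, 0.7])
loadings_focal = loadings_ref.copy()
loadings_focal[0] = 0.4  # the known, simulated lack of invariance

X_ref = simulate_group(500, loadings_ref, rng)
X_focal = simulate_group(500, loadings_focal, rng)
# Both samples would then be submitted to multigroup CFA tests of ME/I.
```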


Structural Equation Modeling | 2007

Power and Precision in Confirmatory Factor Analytic Tests of Measurement Invariance

Adam W. Meade; Daniel J. Bauer

This study investigates the effects of sample size, factor overdetermination, and communality on the precision of factor loading estimates and the power of the likelihood ratio test of factorial invariance in multigroup confirmatory factor analysis. Although sample sizes are typically thought to be the primary determinant of precision and power, the degree of factor overdetermination and the level of indicator communalities also play important roles. Based on these findings, no single rule of thumb regarding the ratio of sample size to number of indicators can ensure adequate power to detect a lack of measurement invariance.
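
The likelihood ratio test studied here compares a loadings-constrained multigroup model against an unconstrained one. Given the two model chi-squares (hypothetical placeholders below), the test itself is a one-line chi-square difference:

```python
from scipy.stats import chi2

chisq_free, df_free = 143.2, 68      # loadings free across groups
chisq_equal, df_equal = 159.8, 74    # loadings constrained equal

lr_stat = chisq_equal - chisq_free   # ~chi-square under the invariance hypothesis
df_diff = df_equal - df_free
p = chi2.sf(lr_stat, df_diff)
print(f"LRT = {lr_stat:.1f} on {df_diff} df, p = {p:.4f}")
```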


Journal of Occupational and Organizational Psychology | 2004

Psychometric problems and issues involved with creating and using ipsative measures for selection

Adam W. Meade

Data are described as ipsative if a given set of responses always sums to the same total. However, there are many properties of data collection that can give rise to different types of ipsative data. In this study, the most common type of ipsative data used in employee selection (forced-choice ipsative data; FCID) is discussed as a special case of other types of ipsative data. Although all ipsative data contain constraints on covariance matrices (covariance-level interdependence), FCID contains additional item-level interdependencies as well. The psychological processes that give rise to FCID and the resultant psychometric properties are discussed. In addition, data in which job applicants provided both normative and ipsative responses illustrate very different patterns of correlations, as well as very different selection decisions, among normative, FCID, and ipsatized measures.
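
The covariance-level interdependence mentioned above is easy to demonstrate: once scores are ipsatized, every row of the covariance matrix must sum to zero. A quick sketch with placeholder data:

```python
import numpy as np

rng = np.random.default_rng(7)
normative = rng.normal(size=(100, 5))  # ordinary (normative) scale scores
# Ipsatize by centering each respondent's row; rows now sum to a constant (zero).
ipsatized = normative - normative.mean(axis=1, keepdims=True)

cov = np.cov(ipsatized, rowvar=False)
# Each item's covariances with all items sum to Cov(x_i, row total) = 0.
print(np.allclose(cov.sum(axis=1), 0.0))  # True
```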


Organizational Research Methods | 2006

Problems With Item Parceling for Confirmatory Factor Analytic Tests of Measurement Invariance

Adam W. Meade; Christina M. Kroustalis

Combining items into parcels in confirmatory factor analysis (CFA) can improve model estimation and fit. Because adequate model fit is imperative for CFA tests of measurement invariance, parcels have frequently been used. However, the use of parcels as indicators in a CFA model can have serious detrimental effects on tests of measurement invariance. Using simulated data with a known lack of invariance, the authors illustrate how models using parcels as indicator variables erroneously indicate that measurement invariance exists much more often than do models using items as indicators. Moreover, item-by-item tests of measurement invariance were often more informative than were tests of the entire parameter matrices.
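
For concreteness, the parceling step at issue looks like the sketch below: item subsets are averaged into parcels that then serve as the CFA indicators. The data and grouping shown are arbitrary illustrations, not a recommended scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
items = rng.normal(size=(400, 12))  # placeholder scores: 12 items, one factor
parcel_assignment = [(0, 3, 6, 9), (1, 4, 7, 10), (2, 5, 8, 11)]

# Average each item subset into a parcel; parcels become the CFA indicators.
parcels = np.column_stack([items[:, idx].mean(axis=1) for idx in parcel_assignment])
# A multigroup CFA on the 3 `parcels` can mask item-level loading differences
# that a model on all 12 `items` would detect.
```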


Organizational Research Methods | 2007

Are Internet and Paper-and-Pencil Personality Tests Truly Comparable? An Experimental Design Measurement Invariance Study

Adam W. Meade; Lawrence C. Michels; Gary J. Lautenschlager

Recently, the use of technology in assessment for personnel selection has increased dramatically. An important consideration is whether test scores obtained via Internet administration are psychometrically equivalent to those obtained via the more traditional paper-and-pencil format. Our results suggest that scores are comparable for many personality constructs, including conscientiousness. However, invariance was not found for some scales between persons allowed to choose their format and those not given a choice. As testing-format preference may be related to membership in federally protected demographic groups, this latter finding was somewhat troubling. Additionally, we illustrate the use of an experimental laboratory design to investigate possible causes of a lack of measurement invariance in Internet and paper-and-pencil comparisons.


Journal of Applied Psychology | 2010

A taxonomy of effect size measures for the differential functioning of items and scales.

Adam W. Meade

Much progress has been made in the past 2 decades with respect to methods of identifying measurement invariance or a lack thereof. To date, the focus of these efforts has been to establish criteria for statistical significance in items and scales that function differently across samples. The power associated with tests of differential functioning, as with all significance tests, is affected by sample size and other considerations. Additionally, statistical significance need not imply practical importance. As such, there is a strong need for meaningful effect size indicators to describe the extent to which items and scales function differently. Recently developed effect size measures show promise for providing a metric to describe the amount of differential functioning present between groups. Expanding upon recent developments, this article presents a taxonomy of potential differential functioning effect sizes; several new indices of item and scale differential functioning effect size are proposed and illustrated with 2 data samples. Software created for computing these indices and graphing item- and scale-level differential functioning is described.
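
In the spirit of the expected-score effect sizes proposed here (e.g., a signed index akin to SIDS), the sketch below averages the difference between focal- and reference-calibrated 2PL item response probabilities over draws from the focal group's trait distribution; all parameter values are hypothetical.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(5)
theta_focal = rng.normal(size=10_000)  # focal-group trait draws

a_ref, b_ref = 1.2, 0.0   # reference-group calibration
a_foc, b_foc = 1.2, 0.4   # focal-group calibration (DIF in difficulty)

diff = p_2pl(theta_focal, a_foc, b_foc) - p_2pl(theta_focal, a_ref, b_ref)
print(f"signed = {diff.mean():+.3f}, unsigned = {np.abs(diff).mean():.3f}")
```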

Collaboration


Dive into Adam W. Meade's collaborations.

Top Co-Authors

Phillip W. Braddy, North Carolina State University
Lori Foster Thompson, North Carolina State University
Emily C. Johnson, North Carolina State University
Christina M. Kroustalis, North Carolina State University
Eric A. Surface, North Carolina State University
Amy M. DuVernet, North Carolina State University
Gabriel Pappalardo, North Carolina State University
J. William Stoughton, North Carolina State University