Publications


Featured research published by Lynn A. McFarland.


Journal of Applied Psychology | 2000

Variance in faking across noncognitive measures.

Lynn A. McFarland; Ann Marie Ryan

There are discrepant findings in the literature regarding the effects of applicant faking on the validity of noncognitive measures. One explanation for these mixed results may be the failure of some studies to consider individual differences in faking. This study demonstrates that there is considerable variance across individuals in the extent of faking on 3 types of noncognitive measures (i.e., a personality test, a biodata inventory, and an integrity test). Participants completed the measures honestly and under instructions to fake. Results indicated that some measures were more difficult to fake than others. The authors found that integrity, conscientiousness, and neuroticism were related to faking. In addition, individuals faked fairly consistently across the measures. Implications of these results are discussed, and a model of faking that includes factors that may influence faking behavior is provided.


Human Performance | 2003

Understanding Racial Differences on Cognitive Ability Tests in Selection Contexts: An Integration of Stereotype Threat and Applicant Reactions Research

Jonathan C. Ziegert; Lynn A. McFarland

This study integrates research on stereotype threat with research on applicant perceptions to examine how these two paradigms jointly enhance the understanding of racial subgroup differences on cognitive ability tests in selection contexts. A simulated selection context was used so that both stereotype threat and face validity could be manipulated. Participants were 250 White and 144 Black students. Using a 3 (stereotype threat: diagnostic, non-diagnostic, control) × 2 (face validity: face valid, generic) × 2 (race: Black, White) between-subjects design, we found that stereotype threat interacted with face validity and race, but only for individuals highly identified with their racial group. Results suggested that Blacks performed best when taking the generic test in the control condition, whereas when taking the face valid test, they performed best in the non-diagnostic condition. Across all threat and face validity conditions, Black performance was worst in the diagnostic condition. In addition, correlational analyses revealed important individual differences in perceptions of stereotype threat, such that these perceptions contributed to lower perceived face validity, lower test-taking motivation, and higher anxiety. Further, motivation positively and anxiety negatively influenced actual test performance. Thus, this study finds that research on stereotype threat and research on applicant perceptions are complementary and together contribute to a better understanding of subgroup differences on cognitive ability tests.


Journal of Management | 2003

Impression Management Use and Effectiveness Across Assessment Methods

Lynn A. McFarland; Ann Marie Ryan; S. David Kriska

Considerable research has focused on candidate impression management (IM) use in unstructured interviews. However, little research has explored candidate IM use in other, frequently used assessment methods. This study examines the extent to which candidates under consideration for a promotion use verbal IM tactics in two types of structured individual assessment methods: a situational interview and a role-play. Based on a cybernetic model of IM, we predicted that IM use and effectiveness would vary across the two methods. Thus, this study examines the consistency of IM use across assessment methods, an issue that has not previously been explored. As expected, the situational interview resulted in greater use of candidate IM tactics. Additionally, other-focused tactics were used significantly more frequently than self-focused tactics in both assessment methods. IM use in the situational interview predicted assessor ratings and final promotion scores, while IM use in the role-play did not. Overall, these ...


Human Performance | 2003

An Examination of Stereotype Threat in a Motivational Context

Lynn A. McFarland; Dalit M. Lev-Arey; Jonathan C. Ziegert

This study was conducted to explore 2 potential boundary conditions of the stereotype threat effect. First, we sought to determine whether threat would occur for a test administered in a motivational context where consequences were linked to test performance. Second, we examined whether the threat elicited by 1 test would generalize to a different measure administered in the same testing session. Using a 2 (control vs. threat) × 2 (order of administration of a personality and intelligence test) × 2 (Black vs. White) between-subjects design, we found that threat can influence test scores, but the relationship between threat and test scores depends on both domain identity and racial identity. Interestingly, we found that changes in racial identity (assessed before and after the test) had a significant, positive relationship with cognitive ability test performance for Black test-takers, but not for Whites. It seems that Black individuals who disidentified from their race during the course of the testing were able to perform better on the cognitive ability test. Finally, we found that those in the threat condition performed significantly better on the personality test than those in the control condition, suggesting that threat can generalize and influence performance on tests for which no stereotype exists. Implications of these results for research and practice are discussed.


Journal of Personality Assessment | 2002

Item placement on a personality measure: Effects on faking behavior and test measurement properties

Lynn A. McFarland; Ann Marie Ryan; Aleksander Ellis

Although personality tests are widely used to select applicants for a variety of jobs, there is concern that such measures are fakable. One procedure used to minimize faking has been to disguise the true intent of personality tests by randomizing items such that items measuring similar constructs are dispersed throughout the test. In this study, we examined whether item placement influences the fakability and psychometric properties of a personality measure. Study participants responded to 1 of 2 formats (random vs. grouped items) of a personality test honestly and also under instructions to fake or to behave like an applicant. Results indicated that the grouped item placement format was more fakable for the Neuroticism and Conscientiousness scales. The test with items randomly placed fit the data better within the honest and applicant conditions. These findings demonstrate that item placement should be seriously considered before administering personality measures, because different item presentations may affect the incidence of faking and the psychometric properties of the measure.


Applied Psychological Measurement | 1999

Correlates of Person Fit and Effect of Person Fit on Test Validity

Neal Schmitt; David Chan; Joshua M. Sacco; Lynn A. McFarland; Danielle Jennings

Person-fit indices (lz and the multitest lzm) derived from item response theory and used to identify misfitting examinees were computed based on responses to cognitive ability and personality tests. lz indices from different ability domains within the cognitive tests were uncorrelated with each other; lz indices from different tests within the personality domain were moderately intercorrelated. Cross-domain correlations were near 0. Test-taking motivation and conscientiousness were correlated moderately with the multitest lzm for personality tests and to a lesser extent for cognitive tests. Test reactions were uncorrelated with any of the lz measures. Males had higher mean lz values than females. This difference could be partly attributed to differences in conscientiousness. African-Americans had higher mean lz values than Whites. This effect could not be accounted for by test-taking motivation or conscientiousness. High values of lz affected the criterion-related validity of the set of cognitive tests such that the validity estimate decreased as lz increased.
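
For readers unfamiliar with the index, a brief sketch of the standardized person-fit statistic lz may help. This is the commonly used Drasgow, Levine, and Williams formulation; it is an assumption that the article used this exact variant. For a test taker with scored item responses u_i and model-implied probabilities P_i(θ):

\[
l_0 = \sum_{i=1}^{n}\bigl[u_i \ln P_i(\theta) + (1-u_i)\ln\bigl(1-P_i(\theta)\bigr)\bigr],
\qquad
l_z = \frac{l_0 - \mathrm{E}(l_0)}{\sqrt{\operatorname{Var}(l_0)}},
\]

\[
\mathrm{E}(l_0) = \sum_{i=1}^{n}\bigl[P_i \ln P_i + (1-P_i)\ln(1-P_i)\bigr],
\qquad
\operatorname{Var}(l_0) = \sum_{i=1}^{n} P_i(1-P_i)\Bigl[\ln\frac{P_i}{1-P_i}\Bigr]^{2}.
\]

In words, lz standardizes the log-likelihood of an examinee's observed response pattern, and extreme values are used to flag examinees whose responses fit the IRT model poorly.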


Journal of Management | 2007

Antecedents of Impression Management Use and Effectiveness in a Structured Interview

Chad H. Van Iddekinge; Lynn A. McFarland; Patrick H. Raymark

The authors examine personality variables and interview format as potential antecedents of impression management (IM) behaviors in simulated selection interviews. The means by which these variables affect ratings of interview performance are also investigated. The altruism facet of agreeableness predicted defensive IM behaviors, the vulnerability facet of emotional stability predicted self- and other-focused behaviors, and interview format (behavior description vs. situational questions) predicted self-focused and defensive behaviors. Consistent with theory and research on situational strength, antecedent-IM relations were consistently weaker in a strong situation in which interviewees had an incentive to manage their impressions. There was also evidence that IM partially mediated the effects of personality and interview format on interview performance in the weak situation.


Journal of Applied Psychology | 2015

Social Media: A Contextual Framework to Guide Research and Practice

Lynn A. McFarland

Social media are a broad collection of digital platforms that have radically changed the way people interact and communicate. However, we argue that social media are not simply a technology but actually represent a context that differs in important ways from traditional (e.g., face-to-face) and other digital (e.g., email) ways of interacting and communicating. As a result, social media are a relatively unexamined type of context that may affect the cognition, affect, and behavior of individuals within organizations. We propose a contextual framework that identifies the discrete and ambient stimuli that distinguish social media contexts from digital communication media (e.g., email) and physical (e.g., face-to-face) contexts. We then use this contextual framework to demonstrate how the social media context changes more person-centered theories of organizational behavior (e.g., social exchange, social contagion, and social network theories). These theoretical insights are also used to identify a number of practical implications for individuals and organizations. This study's major contribution is creating a theoretical understanding of social media features so that future research may proceed in a theory-based, rather than platform-based, manner. Overall, we intend for this article to stimulate and broadly shape the direction of research on this ubiquitous, but poorly understood, phenomenon.


Human Performance | 2011

Understanding Faking Behavior Through the Lens of Motivation: An Application of VIE Theory

Jill E. Ellingson; Lynn A. McFarland

This article proposes a conceptual framework to explain faking behavior on self-report personality inventories. Unlike prior conceptualizations, this framework is both parsimonious and inclusive. The theory posits that all determinants of faking behavior operate through valence, instrumentality, expectancy, or the ability to fake. We review the faking literature to show how the multitude of factors found to influence faking can be concisely modeled within our framework. We intend for this theory to serve as a guide for future research on faking behavior, and we encourage researchers to explore and adopt the framework in the interest of enabling a more theoretically satisfying approach to the study of faking.
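
As background, the VIE (expectancy theory) framework the authors draw on is usually summarized by Vroom's multiplicative expression below. This is only the classic textbook statement, not the article's own model, which additionally incorporates the ability to fake:

\[
F = E \times \sum_{j} \bigl(I_j \times V_j\bigr),
\]

where F is motivational force (here, the motivation to fake), E is the expectancy that effort leads to successful faking, I_j is the instrumentality of faking for obtaining outcome j, and V_j is the valence of that outcome.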


Journal of Personality Assessment | 2005

Racial differences in socially desirable responding in selection contexts: Magnitude and consequences

Nicole M. Dudley; Lynn A. McFarland; Scott A. Goodman; Steven T. Hunt; Eric J. Sydell

Two studies were conducted to examine the magnitude and consequences of racial differences on social desirability (SD) scales. Study 1 included 1,063 job applicants, and Study 2 included 3 sets of incumbents (total N = 534). In both studies, participants were administered several personality measures and an SD scale. Across all samples, Whites scored lower on the SD scale than Blacks (average d = .37), Hispanics (average d = .47), and Asians (average d = 1.04), but these differences were not observed on the personality scales. The consequence of differences in socially desirable responding (SDR) is that fewer minority group members would be selected if SD scales were used to derive cut scores to eliminate individuals from the applicant pool or if the scales were used to correct personality test scores for SDR. However, applying the SD correction did not affect the validity of the personality test for any of the racial groups. Overall, our findings suggest that researchers and practitioners should consider the use of SD scales very carefully, as their use may have unintended consequences. These studies also demonstrate a need to closely examine the construct validity of SD measures across diverse groups.
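
For reference, the reported subgroup differences are standardized mean differences (Cohen's d). The pooled-standard-deviation convention shown below is the usual one and is assumed here rather than taken from the article:

\[
d = \frac{\bar{X}_1 - \bar{X}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1-1)s_1^{2} + (n_2-1)s_2^{2}}{n_1 + n_2 - 2}},
\]

so, for example, d = .37 indicates that the two group means differ by roughly a third of a pooled standard deviation.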

Collaboration


Dive into Lynn A. McFarland's collaborations.

Top Co-Authors

Ann Marie Ryan (Michigan State University)

Joshua M. Sacco (Michigan State University)