
Publication


Featured research published by Kathleen L. Slaney.


Psychological Methods | 2008

A proposed framework for conducting data-based test analysis.

Kathleen L. Slaney; Michael D. Maraun

The authors argue that the current state of applied data-based test analytic practice is unstructured and unmethodical due in large part to the fact that there is no clearly specified, widely accepted test analytic framework for judging the performances of particular tests in particular contexts. Drawing from the extant test theory literature, they propose a rationale that may be used in data-based test analysis. The components of the proposed test analytic framework are outlined in detail, as are examples of the framework as applied to commonly encountered test evaluative scenarios. A number of potential extensions of the framework are discussed.
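The components of the framework are only named here, but the general shape of a data-based test analysis can be sketched in a few lines. The example below is a generic illustration on hypothetical data with placeholder checks, not the authors' framework: it simply orders the evidence from the internal structure of item responses, to score precision, to relations with an external variable.

```python
# A generic, illustrative data-based test analysis on hypothetical item data.
# The checks and their thresholds are placeholders, not the authors' framework.
import numpy as np

rng = np.random.default_rng(42)
true_score = rng.standard_normal(500)
items = true_score[:, None] + 0.7 * rng.standard_normal((500, 10))   # 10 hypothetical items
criterion = 0.6 * true_score + 0.8 * rng.standard_normal(500)        # hypothetical external variable

# 1. Internal structure: how much variance does the first principal axis of the items carry?
R = np.corrcoef(items, rowvar=False)
print("first eigenvalue share:", round(np.linalg.eigvalsh(R)[-1] / R.shape[0], 2))

# 2. Score precision: split-half correlation of odd vs. even item totals, stepped up
#    with the Spearman-Brown formula 2r / (1 + r).
odd, even = items[:, ::2].sum(axis=1), items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
print("split-half reliability:", round(2 * r_half / (1 + r_half), 2))

# 3. External relations: correlation of total scores with the external variable.
print("criterion correlation:", round(np.corrcoef(items.sum(axis=1), criterion)[0, 1], 2))
```

The ordering matters: evidence about relations with external variables is only informative once the internal structure and precision of the scores have been examined.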


Multivariate Behavioral Research | 2003

An Analysis of Meehl's MAXCOV-HITMAX Procedure for the Case of Dichotomous Indicators

Michael D. Maraun; Kathleen L. Slaney; Louis Goddyn

MAXCOV-HITMAX was invented by Paul Meehl for the detection of latent taxonic structures (i.e., structures in which the latent variable is not continuously, but rather Bernoulli, distributed). It involves the examination of the shape of a certain conditional covariance function and is based on Meehl's claims that: (R1) given a latent taxonic structure, this conditional covariance function is single peaked; and (R2) continuous latent structures produce a flat, rather than single-peaked, curve. While Meehl has recommended that continuous indicators be used as input into MAXCOV-HITMAX, the use of dichotomous indicators has become popular. The current work investigates whether (R1) and (R2) are true for the case of dichotomous indicators. The conclusions are that, for dichotomous indicators: (a) (R1) is not true; (b) (R1) becomes true when there are a large number of indicators; and (c) (R2) is not true, with certain unexceptional Rasch structures, for example, producing single-peaked curves. Implications of these results for MAXCOV-HITMAX with continuous indicators are briefly discussed.
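The conditional covariance function at the heart of MAXCOV-HITMAX is easy to illustrate by simulation. The sketch below is a simplified rendering of the general idea, not Meehl's exact procedure or the authors' analysis; the data-generating setup, class proportions, and function names are all hypothetical.

```python
# Illustrative MAXCOV-style curve: slice cases along an "input" score and track
# the conditional covariance of two "output" indicators within each slice.
import numpy as np

def maxcov_curve(data, y_col, z_col):
    """Conditional cov(y, z) at each level of the summed remaining indicators."""
    data = np.asarray(data, dtype=float)
    rest = [j for j in range(data.shape[1]) if j not in (y_col, z_col)]
    input_score = data[:, rest].sum(axis=1)      # "input" variable: sum of the other items
    curve = []
    for level in np.unique(input_score):
        grp = data[input_score == level]
        if len(grp) > 1:
            curve.append((level, np.cov(grp[:, y_col], grp[:, z_col])[0, 1]))
    return curve

# Hypothetical taxonic data: a Bernoulli latent class drives 5 dichotomous items.
rng = np.random.default_rng(0)
taxon = rng.random(5000) < 0.5                   # latent class membership (the taxon)
p = np.where(taxon[:, None], 0.8, 0.2)           # item endorsement probability per class
items = (rng.random((5000, 5)) < p).astype(int)

for level, cov in maxcov_curve(items, y_col=0, z_col=1):
    print(f"input level {level:.0f}: cov(y, z) = {cov:.3f}")
```

With taxonic data of this kind, the covariance between the two output indicators is largest at intermediate input levels, where the two latent classes are mixed, and close to zero at the extremes, where the groups are nearly pure; this is the single-peaked shape referred to in (R1).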


Journal of Psychoeducational Assessment | 2009

Psychometric Assessment and Reporting Practices: Incongruence between Theory and Practice.

Kathleen L. Slaney; Masha Tkatchouk; Stephanie M. Gabriel; Michael D. Maraun

The aim of the current study is twofold: (a) to investigate the rates at which researchers assess and report on the psychometric properties of the measures they use in their research and (b) to examine whether researchers appear to be employing sound rationales in how they conduct test evaluations. Based on a sample of 368 articles published in four journals in the year 2004, the findings suggest that, although evidence bearing on score precision/reliability and the internal structure of item responses remains under-reported, researchers appear to be assessing the relationships between test scores and external variables more frequently than in the past. However, the findings also indicate that, all told, very few researchers are assessing and reporting on internal score validity, score precision/reliability, and external score validity, in that sequence, suggesting that applied researchers may not always be adopting sound test-evaluative rationales in their psychometric assessments.


Canadian Journal of School Psychology | 2010

A Review of Psychometric Assessment and Reporting Practices: An Examination of Measurement-Oriented Versus Non-Measurement-Oriented Domains

Kathleen L. Slaney; Masha Tkatchouk; Stephanie M. Gabriel; Leona P. Ferguson; Jared R. S. Knudsen; Julien C. Legere

The primary aim of the present study is to determine whether the psychometric evaluation practices and test-analytic rationales of researchers publishing in journals with a measurement focus differ from those of researchers publishing in journals with varying substantive research foci. Several components of two different samples of articles were examined and compared; one contained articles from a set of measurement-oriented journals (n = 402) and the other contained articles published in journals representing a cross-section of research domains (n = 289). Findings indicate that, contrary to expectations, articles published in measurement-oriented journals, as compared with general journals, may not generally reflect better psychometric analysis and reporting practices or sounder test-analytic rationales on the part of the researchers. It was also found that although researchers are generally evaluating either score precision/reliability or validity, they seldom evaluate both, indicating that there may be a general lack of appreciation for the importance of conducting a full and coherent data-based test analysis whenever a measure is employed. A number of limitations of the study and recommendations for future research are also addressed.


International Journal of Forensic Mental Health | 2011

Is My Test Valid? Guidelines for the Practicing Psychologist for Evaluating the Psychometric Properties of Measures

Kathleen L. Slaney; Jennifer E. Storey; Jordan I. Barnes

A general logic for data-based test evaluation based on Slaney and Maraun's (2008) framework is described. On the basis of this framework and other well-known test theoretic results, a set of guidelines is proposed to aid researchers in the assessment of the psychometric properties of the measures they use in their research. The guidelines are organized into eight areas and range from general recommendations, pertaining to understanding different psychometric properties of quantitative measures and at what point in a test evaluation their respective assessments should occur, to clarifications of core psychometric concepts such as factor structure, reliability, coefficient alpha, and dimensionality. Finally, an illustrative example is provided with a data-based test evaluation of the Hare Psychopathy Checklist-Revised (Hare, 1991) as a measure of psychopathic personality disorder in a sample of 384 male offenders serving sentences in a Canadian correctional facility.
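As a concrete illustration of why coefficient alpha and dimensionality must be kept distinct (a standard test-theoretic point, not a reproduction of the article's guidelines or its PCL-R example), the sketch below computes alpha on hypothetical two-dimensional item data and shows that a respectable alpha does not, by itself, indicate a unidimensional measure.

```python
# A minimal sketch on hypothetical data: coefficient alpha alongside a quick
# eigenvalue check of dimensionality. Names and data are made up for illustration.
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

# Hypothetical two-dimensional test: 4 items load on one factor, 4 on another.
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((2, 2000))
items = np.column_stack([f1] * 4 + [f2] * 4) + 0.5 * rng.standard_normal((2000, 8))

print("coefficient alpha:", round(cronbach_alpha(items), 2))            # roughly .80
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(items, rowvar=False)))[::-1]
print("leading eigenvalues:", np.round(eigvals[:3], 2))                 # two large roots, not one
```

Alpha comes out near .80 here even though the eigenvalues of the inter-item correlation matrix point clearly to two dimensions, so a high alpha cannot be read as evidence of unidimensionality.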


Theory & Psychology | 2012

Laying the cornerstone of construct validity theory: Herbert Feigl’s influence on early specifications

Kathleen L. Slaney

Although the theoretical foundations of construct validity theory have been fairly well described, there remains equivocation over what should properly be taken to be its philosophical underpinnings, with some characterizing it as an essentially positivist enterprise, others identifying a realist philosophy underlying the theory, and others still characterizing its foundations as containing elements of both positivist and realist thinking. This paper summarizes recent work representing each of these three different stances on the philosophical foundations of construct validity theory. Explicit connections are drawn between the work of Herbert Feigl—who pioneered a philosophy of science whose roots lay in logical positivism, but which contained notably realist overtones—and early specifications of construct validity theory. Finally, an appeal is made for a realist interpretation of construct validity theory based both on the connections between early articulations of the theory and key Feiglian ideas and also on Cronbach and Meehl’s later reflections on the origins of their influential work.


Archive | 2017

Validating Psychological Constructs

Kathleen L. Slaney



Assessment in Education: Principles, Policy & Practice | 2016

The multiplicity of validity: a game within a game?

Kathleen L. Slaney

To begin, I would like to commend the authors of the focal papers of this special issue. Each has addressed important topics bearing on the quite complex and thorny question of how the term ‘validity’ ought to be used in testing. Together, these papers cover a good deal of ground and provide many useful insights as to where to begin to look for answers. I would like to share all my reactions to the points raised; however, space constraints clearly prohibit this. I will, instead, restrict my comments to three issues, each of which I consider central to a fruitful discussion of how ‘validity’ should be used in the context of testing. The first issue concerns Newton and Shaw’s (2015) suggestion that ‘validity’ potentially be framed as a family resemblance concept. This requires, first, that the ordinary concept of ‘validity’ be distinguished from the constrained usage of the concept in testing. This is important because the former certainly is already a family resemblance concept, and the question then becomes: how parasitic on the ordinary concept is the constrained concept? If there is little or no overlap, then one may argue that the uses of the concept in testing are more or less technical. However, to the extent that there is overlap (and I think it would be difficult to argue against this), then there must be some examination of where (and where not) the uses in testing correspond to ordinary usage. This will be important so that the testing community can make decisions regarding how the term should be used, whether that means embracing a ‘loose’ family resemblance concept, imposing a technical definition, or some combination thereof. However, given the multiplicity of ways ‘validity’ is currently employed in testing discourse, settling on a consensus definition seems extremely unlikely and, if testing validity is a family resemblance concept, perhaps also not ultimately desirable. Although I echo Newton and Shaw’s (2013) concerns regarding the ‘conceptual ambiguity’ and ‘terminological redundancy’ in validity discourse, and agree fully that some reasonable degree of ‘intersubjective agreement’ regarding the application of the concept of ‘validity’ is needed, the fact is that a single consensus definition of ‘validity’ for testing – even if it were possible to settle on one – would likely either be too vague to be useful or would fail to capture the different ways in which the concept of ‘validity’ is meaningfully, and usefully, put to work in testing discourse and practice.


International Journal of Forensic Mental Health | 2011

When “Good Enough” Is Just Not Good Enough: Response to Holden and Marjanovic

Kathleen L. Slaney; Jennifer E. Storey; Jordan I. Barnes

In this article, we respond to a commentary by Holden and Marjanovic (this issue) on Slaney, Storey, and Barnes’ article “‘Is My Test Valid?’: Guidelines for the Practicing Psychologist for Evaluating the Psychometric Properties of Measures” (this issue). Specifically, we reply to Holden and Marjanovic’s claims that our guidelines: endorse a “construct approach” to test evaluation and development; rely too heavily on modern test theoretic methods and, as such, are too mathematically and technically intractable to be practically useful; and may present too unrealistic a challenge to be used in test development and the evaluation of well-established measures. Finally, we attempt to clarify the major themes that the guidelines described in Slaney, Storey, and Barnes were intended to convey.


Review of General Psychology | 2018

Random or Fixed? An Empirical Examination of Meta-Analysis Model Choices.

Kathleen L. Slaney; Donna Tafreshi; Richard Hohn

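As a quick illustration of the model choice named in the title, the sketch below contrasts fixed-effect and random-effects pooling using standard textbook formulas, with the between-study variance estimated by the DerSimonian-Laird method; the effect sizes are made up and this is not the article's data or analysis.

```python
# Hedged sketch of the two standard meta-analytic pooling models on made-up data.
import numpy as np

effects = np.array([0.30, 0.45, 0.12, 0.60, 0.25])     # hypothetical study effect sizes
variances = np.array([0.02, 0.05, 0.03, 0.04, 0.01])   # hypothetical sampling variances

# Fixed-effect model: every study estimates one common effect; weights are 1/v_i.
w_fe = 1 / variances
fixed = np.sum(w_fe * effects) / np.sum(w_fe)

# Random-effects model: true effects vary across studies; add between-study variance
# tau^2 (DerSimonian-Laird estimate), then reweight by 1/(v_i + tau^2).
Q = np.sum(w_fe * (effects - fixed) ** 2)
df = len(effects) - 1
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (Q - df) / c)
w_re = 1 / (variances + tau2)
random = np.sum(w_re * effects) / np.sum(w_re)

print(f"fixed-effect estimate:   {fixed:.3f}")
print(f"random-effects estimate: {random:.3f} (tau^2 = {tau2:.3f})")
```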

Collaboration


Dive into Kathleen L. Slaney's collaborations.

Top Co-Authors

Jack Martin

Simon Fraser University
