
Publication


Featured research published by Joni M. Lakin.


Gifted Child Quarterly | 2008

Identifying Academically Gifted English-Language Learners Using Nonverbal Tests: A Comparison of the Raven, NNAT, and CogAT

David F. Lohman; Katrina A. Korb; Joni M. Lakin

In this study, the authors compare the validity of three nonverbal tests for the purpose of identifying academically gifted English-language learners (ELLs). Participants were 1,198 elementary children (approximately 40% ELLs). All were administered the Raven Standard Progressive Matrices (Raven), the Naglieri Nonverbal Ability Test (NNAT), and Form 6 of the Cognitive Abilities Test (CogAT). Results show that the U.S. national norms for the Raven substantially overestimate the number of high-scoring children; that because of errors in norming, the NNAT overestimates the number of both high-scoring and low-scoring children; that primary-level ELL children score especially poorly on the NNAT; that the standard error of measurement was twice as large for the NNAT as for the Raven or the CogAT; that ELL children scored 0.5 to 0.67 standard deviations lower than non-ELL children on the three nonverbal tests; and that none of the nonverbal tests predict achievement for ELL students very well.

Putting Research to Use: Do nonverbal reasoning tests level the field for ELL children? Many practitioners have assumed that they do. However, ELL children in this study scored 8 to 10 points lower than non-ELL children on the three nonverbal tests. The study also shows that practitioners cannot assume that national norms on the tests are of comparable quality. When put on the same scale as the CogAT, Raven scores averaged 10 points higher than CogAT and NNAT scores. For the NNAT, the mean was correct, but the variability was up to 40% too large. Thus, when using national norms, both the Raven and the NNAT will substantially overestimate the number of high-scoring children.


British Journal of Educational Psychology | 2009

Consistencies in sex differences on the Cognitive Abilities Test across countries, grades, test forms, and cohorts.

David F. Lohman; Joni M. Lakin

Background: Strand, Deary, and Smith (2006) reported an analysis of sex differences on the Cognitive Abilities Test (CAT) for over 320,000 UK students 11-12 years old. Although mean differences were small, males were overrepresented at the upper and lower extremes of the score distributions on the quantitative and non-verbal batteries and at the lower extreme of the verbal battery.

Aims: We investigate whether these results were unique to the UK or whether they would also be seen in other countries, at other grades, and in other cohorts or forms of the test.

Sample: The sample consisted of three nationally representative cohorts of US students in grades 3 through 11 (total N = 318,599) for the 1984, 1992, and 2000 standardizations of the US version of the CAT.

Methods: We replicated and extended the Strand et al. (2006) results by comparing the proportions of males and females at each score level across countries (UK vs. US), grades (3-11), and cohorts/test forms (Forms 4, 5, and 6, standardized in 1984, 1992, and 2000, respectively).

Results: The results showed an astonishing consistency in sex differences across countries, grades, cohorts, and test forms.

Conclusions: Implications for the current debate about sex differences in quantitative reasoning abilities are discussed.


Educational and Psychological Measurement | 2012

Investigating ESL Students' Performance on Outcomes Assessments in Higher Education.

Joni M. Lakin; Diane Elliott; Ou Lydia Liu

Outcomes assessments are gaining great attention in higher education because of increased demand for accountability. These assessments are widely used by U.S. higher education institutions to measure students’ college-level knowledge and skills, including students who speak English as a second language (ESL). For the past decade, the increasing number of ESL students has changed the landscape of U.S. higher education. However, little research exists documenting how ESL students perform on outcomes assessments. In this study, the authors investigated ESL students’ performance on the Educational Testing Service Proficiency Profile in terms of factor structure, criterion validity, and differential item functioning. The test showed partial measurement invariance between ESL and non-ESL students, consistent criterion validity, and few examples of differential item functioning. The results suggest the critical need for consideration of language background in outcomes assessment research in higher education.


Journal for the Education of the Gifted | 2011

The Predictive Accuracy of Verbal, Quantitative, and Nonverbal Reasoning Tests: Consequences for Talent Identification and Program Diversity

Joni M. Lakin; David F. Lohman

Effective talent-identification procedures minimize the proportion of students whose subsequent performance indicates that they were mistakenly included in or excluded from the program. Classification errors occur when students who were predicted to excel subsequently do not or when students who were not predicted to excel do. Using a longitudinal sample, we assessed the accuracy of measures of verbal reasoning, quantitative reasoning, nonverbal reasoning, and current achievement for predicting later achievement. We found that seemingly small differences in predictive validity substantially changed the number of students erroneously included in or excluded from the program. Surprisingly, nonverbal tests not only led to more classification errors but also failed to identify more English language learners and minority students. To increase equity and maintain fairness, practitioners should carefully evaluate claims that scores from alternative assessments are as valid as scores from conventional ability tests and verify that the use of these tests results in greater diversity.


Journal of Science Teacher Education | 2015

Assessing Dimensions of Inquiry Practice by Middle School Science Teachers Engaged in a Professional Development Program

Joni M. Lakin; Carolyn S. Wallace

Inquiry-based teaching promotes students’ engagement in problem-solving and investigation as they learn science concepts. Current practice in science teacher education promotes the use of inquiry in the teaching of science. However, the literature suggests that many science teachers hold incomplete or incorrect conceptions of inquiry. Teachers, therefore, may believe they are providing more inquiry experiences than they actually are, reducing the positive impact of inquiry on science interest and skills. Given the prominence of inquiry in professional development experiences, educational evaluators need strong tools to detect intended use in the classroom. The current study focuses on the validity of assessments developed for evaluating teachers’ use of inquiry strategies and classroom orientations. We explored the relationships between self-reported inquiry strategy use, preferences for inquiry, knowledge of inquiry practices, and related pedagogical content knowledge (PCK). Finally, we contrasted students’ and teachers’ reports of the levels of inquiry-based teaching in the classroom. Self-reports of inquiry use, especially one specific to the 5E instructional model, were useful but should be interpreted with caution. Teachers tended to self-report higher levels of inquiry strategy use than their students perceived. Further, there were no significant correlations between either knowledge of inquiry practices or PCK and self-reported inquiry strategy use.


Educational and Psychological Measurement | 2012

Multigroup Generalizability Analysis of Verbal, Quantitative, and Nonverbal Ability Tests for Culturally and Linguistically Diverse Students.

Joni M. Lakin; Emily Lai

For educators seeking to differentiate instruction, cognitive ability tests sampling multiple content domains, including verbal, quantitative, and nonverbal reasoning, provide superior information about student strengths and weaknesses compared with unidimensional reasoning measures. However, these ability tests have not been fully evaluated with respect to fairness and validity for English-language learners (ELL). In particular, reliability is an important aspect of validity that has not been sufficiently evaluated. In this study, multivariate generalizability methodologies were used to explore the differential reliability of the Cognitive Abilities Test across ELL and non-ELL students in two schools with large Hispanic populations. Results suggest that verbal and quantitative reasoning skills are measured less precisely for ELL students than for non-ELL students. However, the composite score of the three batteries showed strong reliability in both groups. We conclude that multidimensional tests provide reliable information about the academic strengths of ELL and non-ELL students, though further research is needed.


Journal of Advanced Academics | 2016

Universal Screening and the Representation of Historically Underrepresented Minority Students in Gifted Education: Minding the Gaps in Card and Giuliano’s Research

Joni M. Lakin

A research paper by Card and Giuliano took advantage of a natural experiment in a large school district to explore the impact that universal screening policies had on the identification of historically underrepresented minorities in gifted and talented programs. The authors concluded that the universal screening system was more effective than the previous teacher and parent referral system in addressing the underidentification of African American, Hispanic, female, low socioeconomic status, and English learner students. However, the present article identified gaps in the system that allowed new inequities to emerge. This review of their study concludes that districts must be advocates for gifted and talented students who come from culturally and linguistically diverse backgrounds. Implementing universal screening procedures can be an important tool in ensuring fair access to gifted and talented services, but districts must manage the increased resource demands of such programs.


Educational Assessment | 2014

Test Directions as a Critical Component of Test Design: Best Practices and the Impact of Examinee Characteristics.

Joni M. Lakin

The purpose of test directions is to familiarize examinees with a test so that they respond to items in the manner intended. However, changes in educational measurement, as well as in the U.S. student population, present new challenges for test directions and increase the impact that differential familiarity could have on the validity of test score interpretations. This article reviews the literature on best practices for the development of test directions and documents differences in test familiarity for culturally and linguistically diverse students that could be addressed with test directions and practice. The literature indicates that the choice of practice items and feedback is critical in the design of test directions and that more extensive practice opportunities may be required to reduce group differences in test familiarity. As increasingly complex and rich item formats are introduced in next-generation assessments, test directions become a critical part of test design and validity.


Journal of Language Identity and Education | 2018

Mainstream Teachers’ Implicit Beliefs about English Language Learners: An Implicit Association Test Study of Teacher Beliefs

Jamie Harrison; Joni M. Lakin

Research on teacher attitudes toward the inclusion of English Learners (ELs) in the mainstream classroom has primarily focused on explicit beliefs accessed through observation, case studies, and self-report surveys. The authors explore implicit mainstream teacher beliefs about ELs using the newly created Implicit Association Test–EL (IAT–EL), with correlations to explicit beliefs measured by the English-as-a-Second-Language (ESL) Students in Mainstream Classrooms: A Survey of Teachers’ Explicit Beliefs survey. Findings from the IAT–EL indicate a slightly negative implicit belief about ELs among 197 respondents. Implicit and explicit beliefs about ELs were not significantly correlated, which is in keeping with the current Implicit Association Test (IAT) literature.


Archive | 2018

Expertise and Individual Differences

Phillip L. Ackerman; Joni M. Lakin

There are several types of expert knowledge, including factual (declarative) knowledge, skills (procedural knowledge), and tacit knowledge. Expertise represents a high level of achievement in one or more of these domains. These types of knowledge are described, along with how they are acquired and how they are maintained. The structure of cognitive individual-differences constructs of achievement, aptitude, and intelligence is also examined. Together with interests and motivation, the investment of cognitive resources is an important determinant of both the level and domains of expertise that may be acquired. Strategies for identifying student potential for acquiring expertise are reviewed, along with the use of various types of assessments and expectations for talent identification in medium-term and long-term predictions.
