Publication


Featured research published by Jason C. Immekus.


Journal of Psychiatric Research | 2009

On the psychometric validity of the domains of the PDSQ: An illustration of the bi-factor item response theory model

Robert D. Gibbons; A. John Rush; Jason C. Immekus

Competing item response theory (IRT) models were used to test the factor structure of the Psychiatric Diagnostic Screening Questionnaire (PDSQ; Zimmerman M, Mattia JI. A self-report scale to help make psychiatric diagnoses: the Psychiatric Diagnostic Screening Questionnaire. Archives of General Psychiatry 2001;58:787-94), a self-report psychiatric measure comprising 139 items sampled from 15 symptom domains (e.g., Psychosis, Mania). The tested IRT models included: (a) a unidimensional model, (b) a simple structure model, (c) a bi-factor model, and (d) models based on alternative 6-, 10-, and 15-subdomain conceptualizations of the scale. Based on the responses of 3791 individuals with major depressive disorder, the bi-factor model was found to provide a theoretically and statistically plausible description of the PDSQ factor structure. Primary dimension loadings were low to moderate; group factor loadings were moderate to high. Results support the validity of the PDSQ in identifying distinct categories of illness as defined by Diagnostic and Statistical Manual (DSM) diagnostic groups, since preserving the 15 symptom categories (domains) provided a more accurate characterization of the observed data. The bi-factor model is useful in evaluating the multidimensional structure of mental health data, and the specification of alternative IRT models is a noteworthy benefit over classical test theory for psychiatric measurement.
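For orientation, the bi-factor model tested above constrains each item to load on a general (primary) dimension plus exactly one group factor. A minimal sketch of the two-parameter logistic form, with notation assumed rather than taken from the paper:

```latex
% Bi-factor 2PL item response function: item j loads on the general
% dimension \theta_0 and on exactly one group factor \theta_{s(j)}
% (e.g., a PDSQ symptom domain such as Psychosis or Mania).
P\left(x_j = 1 \mid \theta_0, \theta_{s(j)}\right)
  = \frac{1}{1 + \exp\!\left[-\left(a_{j0}\,\theta_0
      + a_{js}\,\theta_{s(j)} + c_j\right)\right]}
```

Here the a_{j0} correspond to the primary dimension loadings and the a_{js} to the group factor loadings reported in the abstract.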


Educational and Psychological Measurement | 2008

Dimensionality Assessment Using the Full-Information Item Bifactor Analysis for Graded Response Data: An Illustration With the State Metacognitive Inventory

Jason C. Immekus; P.K. Imbrie

An approach to dimensionality assessment using the full-information item bifactor model for graded response data is presented. The model applies to data in which each item relates to a general factor and one group factor. Specifically, alternative model specification within item response theory (IRT) is shown to test a scale's factor structure. For illustrative purposes, the bifactor model and competing IRT models were fit to the data of separate cohorts of incoming college students (Cohort 1, n = 1,490; Cohort 2, n = 1,533) to test the dimensionality of an adapted version of the State Metacognitive Inventory. Overall, the bifactor analysis did not strongly support distinct group factors after accounting for the general factor. Instead, results suggested conceptualizing the scale as unidimensional, indicating that scores should be based on the total scale rather than subscales. Considerations related to the use of the bifactor IRT model are discussed.
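For graded (polytomous) responses, the full-information bifactor model extends the dichotomous case with per-category thresholds; a sketch in standard graded-response notation (assumed, not quoted from the paper):

```latex
% Bifactor graded response model: for item j with ordered categories
% k = 1, \dots, K_j, the cumulative probability of responding in
% category k or higher is
P\left(x_j \ge k \mid \theta_0, \theta_{s(j)}\right)
  = \frac{1}{1 + \exp\!\left[-\left(a_{j0}\,\theta_0
      + a_{js}\,\theta_{s(j)} + c_{jk}\right)\right]},
% and the probability of category k itself is the difference
P(x_j = k \mid \cdot) = P(x_j \ge k \mid \cdot) - P(x_j \ge k + 1 \mid \cdot).
```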


Educational and Psychological Measurement | 2010

A Test and Cross-Validation of the Revised Two-Factor Study Process Questionnaire Factor Structure Among Western University Students

Jason C. Immekus; P.K. Imbrie

The Revised Two-Factor Study Process Questionnaire (R-SPQ-2F) is a measure of university students’ approach to learning. Original evaluation of the scale’s psychometric properties was based on a sample of Hong Kong university students’ scores. The purpose of this study was to test and cross-validate the R-SPQ-2F factor structure, based on separate cohort data (Cohort 1: n = 1,490; Cohort 2: n = 1,533), among students attending a university in the United States. Factor analytic results did not support the scale’s original factor structure, instead suggesting an alternative four-factor model of the scale data. In the cross-validation study, multisample confirmatory factor analysis results indicated that the scale’s measurement model parameters (e.g., factor loadings) were invariant across independent samples. Despite support for the scale’s respecified factor structure for Western university students, continued research is recommended to improve the scale’s psychometric properties. Implications for test score use and interpretation are discussed.


Educational and Psychological Measurement | 2010

Factor Structure Invariance of the Kaufman Adolescent and Adult Intelligence Test Across Male and Female Samples

Jason C. Immekus; Susan J. Maller

Multisample confirmatory factor analysis (MCFA) and latent mean structures analysis (LMS) were used to test measurement invariance and latent mean differences on the Kaufman Adolescent and Adult Intelligence Test (KAIT) across males and females in the standardization sample. MCFA found that the parameters of the KAIT two-factor model were invariant across groups. A follow-up LMS found intercept differences on the Memory for Block Designs, Famous Faces, Auditory Comprehension, and Logical Steps subtests, indicating low to moderate score advantages for males on these subtests; consequently, latent means were not tested for invariance. Although the KAIT two-factor model met partial measurement invariance, it did not demonstrate strong factorial invariance. Implications for test score interpretation are discussed.
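The invariance logic here can be sketched in standard multigroup CFA notation (the notation is assumed, not reproduced from the paper):

```latex
% Measurement model for group g (males, females):
x^{(g)} = \tau^{(g)} + \Lambda^{(g)} \xi^{(g)} + \delta^{(g)}
% Metric (weak) invariance:   \Lambda^{(\text{m})} = \Lambda^{(\text{f})}
% Scalar (strong) invariance: additionally \tau^{(\text{m})} = \tau^{(\text{f})}
```

Because the follow-up LMS found group differences in the intercepts (the tau terms) for four subtests, scalar (strong) invariance fails, which is why the latent means were not compared.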


Educational and Psychological Measurement | 2009

Item Parameter Invariance of the Kaufman Adolescent and Adult Intelligence Test Across Male and Female Samples

Jason C. Immekus; Susan J. Maller

The Kaufman Adolescent and Adult Intelligence Test (KAIT) is an individually administered test of intelligence for individuals ranging in age from 11 to 85+ years. The item response theory likelihood ratio (IRT-LR) procedure, based on the two-parameter logistic model, was used to detect differential item functioning (DIF) in the KAIT across males and females in the standardization sample. Root mean squared differences and item parameter differences were used to indicate the magnitude of DIF and to identify which group each item parameter favored. A z test of proportion differences was conducted to determine whether the number of parameters exhibiting gender DIF exceeded the number expected by chance, estimated by randomly dividing the sample in half and repeating the analyses. Of the 176 item parameters examined, 42 (24%) lacked invariance, with most flagged items exhibiting uniform DIF. Implications for test score interpretation and future research are discussed.
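To make the final inferential step concrete, below is a minimal Python sketch of a two-sample z test of proportion differences. The observed count (42 of 176 flagged parameters) comes from the abstract; the chance count is a hypothetical placeholder, since the abstract does not report the rate obtained from the random half-sample analysis:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sample z test of a difference in proportions (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled proportion under H0: p1 == p2
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))           # two-sided p value
    return z, p_value

# 42 of 176 parameters flagged for gender DIF (from the abstract);
# 9 of 176 is a made-up chance count standing in for the half-sample result.
z, p = two_proportion_z(42, 176, 9, 176)
print(f"z = {z:.2f}, p = {p:.4f}")
```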


Diaspora, Indigenous, and Minority Education | 2013

Experiences of Central California Latino Male Youth: Recollecting Despair and Success in Barrios and Schools

Juan Carlos González; Jason C. Immekus

This study examined the community and schooling experiences of Latino male youth (ages 14–24) in the California Central Valley. Seven semi-structured focus groups (n = 35) were conducted with Latino youth regarding how factors related to health, safety, and education affected their lives. Latino Critical Theory was used as a framework to explain how the disparities described by the youth are part of larger social and academic inequalities. Findings showed that these Latino youth have unique race-, gender-, and class-specific experiences related to structural inequalities. The study concludes with recommendations for policy and practice that can help improve Latino youths' lives, primarily by addressing structural inequalities in their communities and schools, and provides strategies by which community and school leaders can address these disparities.


Frontiers in Education Conference | 2005

Work in progress - a model to evaluate team effectiveness

P.K. Imbrie; Jason C. Immekus; Susan J. Maller

This work-in-progress presents instruments developed to assess team effectiveness for students in engineering classrooms. The instruments include: (a) a 24-item self-report instrument (Team Effectiveness Scale) requiring students to indicate the degree to which their team worked together across the following domains: interdependency, goal-setting, potency, and learning; and (b) a 6-item measure (Peer Assessment Scale) that asks students to rate each team member's contribution toward the functionality of their team. Evidence of the scales' psychometric properties is provided, along with the relationship between peer assessment and team effectiveness scores and the degree to which scores on the Team Effectiveness Scale discriminated between functional and dysfunctional teams, consistent with instructor judgments.


Psychiatric Services | 2008

Using Computerized Adaptive Testing to Reduce the Burden of Mental Health Assessment

Robert D. Gibbons; David J. Weiss; David J. Kupfer; Ellen Frank; Andrea Fagiolini; Victoria J. Grochocinski; Dulal K. Bhaumik; Angela Stover; R. Darrell Bock; Jason C. Immekus


2005 Annual Conference | 2005

Assessing Team Effectiveness

Jason C. Immekus; Susan J. Maller; P.K. Imbrie


2005 Annual Conference | 2005

Evaluating the Outcomes of a Service Learning Based Course in an Engineering Education Program: Preliminary Results of the Assessment of the Engineering Projects in Community Service (EPICS)

Sara Tracy; Jason C. Immekus; Susan J. Maller; William C. Oakes

Collaboration


Dive into Jason C. Immekus's collaborations.

Top Co-Authors

Juan Carlos González
University of Missouri–Kansas City

A. John Rush
University of Texas Southwestern Medical Center

Angela Stover
University of Pittsburgh

Brian F. French
Washington State University