Jennifer Koran
Southern Illinois University Carbondale
Publication
Featured research published by Jennifer Koran.
Behaviour Research and Therapy | 2015
Sarah J. Kertz; Jennifer Koran; Kimberly T. Stevens; Thröstur Björgvinsson
Repetitive negative thinking (RNT) is a common symptom across depression and anxiety disorders, and preliminary evidence suggests that decreases in rumination and worry are related to improvement in depression and anxiety symptoms. However, despite its prevalence, relatively little is known about transdiagnostic RNT and its temporal associations with symptom improvement during treatment. The current study was designed to examine the influence of RNT on subsequent depression and anxiety symptoms during treatment. Participants (n = 131; 52% female; 93% White; M = 34.76 years) were patients presenting for treatment in a brief, cognitive-behavior-therapy-based partial hospitalization program. Participants completed multiple assessments of depression (the 10-item Center for Epidemiologic Studies Depression scale), anxiety (the 7-item Generalized Anxiety Disorder scale), and repetitive negative thinking (Perseverative Thinking Questionnaire) over the course of treatment. Results indicated statistically significant between- and within-person effects of RNT on depression and anxiety, even after controlling for the effects of time, previous symptom levels, referral source, and treatment length. RNT accounted for 22% of the variability in depression scores and 15% of the variability in anxiety scores left unexplained by the control variables. RNT may be an important transdiagnostic treatment target for anxiety and depression.
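The between- versus within-person decomposition described above is commonly implemented with person-mean centering. The sketch below illustrates the idea on simulated data with made-up effect sizes (0.6 between-person, 0.4 within-person); it is not the study's analysis or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: 131 patients, 4 repeated assessments each.
n_persons, n_times = 131, 4
trait_rnt = rng.normal(15, 4, n_persons)                  # stable person-level RNT
rnt = trait_rnt[:, None] + rng.normal(0, 2, (n_persons, n_times))

# Depression depends on the person's average RNT (between-person effect)
# and on occasion-level deviations from that average (within-person effect).
dev = rnt - rnt.mean(axis=1, keepdims=True)
dep = 5 + 0.6 * trait_rnt[:, None] + 0.4 * dev + rng.normal(0, 1, (n_persons, n_times))

# Person-mean centering separates the two effects into distinct predictors.
between = np.repeat(rnt.mean(axis=1), n_times)            # person means
within = dev.ravel()                                      # occasion deviations
X = np.column_stack([np.ones(between.size), between, within])
coef, *_ = np.linalg.lstsq(X, dep.ravel(), rcond=None)
print(coef)  # intercept, between-person slope (~0.6), within-person slope (~0.4)
```

A full analysis of this design would add random intercepts (e.g., a multilevel model) and time-varying covariates; the centering step shown here is what isolates "between" from "within" variation.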
Educational and Psychological Measurement | 2015
Nidhi Kohli; Jennifer Koran; Lisa Henn
There are well-defined theoretical differences between the classical test theory (CTT) and item response theory (IRT) frameworks. In the CTT framework, person and item statistics are understood to be test- and sample-dependent; IRT parameters, by contrast, are regarded as invariant. For this reason, the IRT framework is considered theoretically superior to the CTT framework for the purpose of estimating person and item parameters. In previous simulation studies, however, IRT models were used both as generating and as fitting models, so results favoring the IRT framework could be attributed to IRT being the data-generation framework. Moreover, previous studies considered only the traditional CTT framework for the comparison, yet there is considerable literature suggesting that it may be more appropriate to use CTT statistics based on an underlying normal variable (UNV) assumption. The current study relates the class of CTT-based models with the UNV assumption to that of IRT, using confirmatory factor analysis to delineate the connections. A small Monte Carlo study assessed the comparability of the item and person statistics obtained from the IRT framework and from CTT with the UNV assumption. Results show the two frameworks to be quite comparable, with neither showing an advantage over the other.
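A rough illustration of this kind of comparison: generate responses from a 2PL IRT model and check how a simple CTT statistic (the corrected item-total correlation) tracks the generating discriminations. This sketch is not the article's Monte Carlo design; the sample size and parameter ranges are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate dichotomous responses from a 2PL IRT model.
n_persons, n_items = 2000, 10
theta = rng.normal(size=n_persons)                 # person abilities
a = np.linspace(0.5, 2.0, n_items)                 # item discriminations
b = rng.normal(scale=0.5, size=n_items)            # item difficulties
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
x = (rng.random((n_persons, n_items)) < p).astype(float)

# Corrected item-total correlation: a classical CTT discrimination index
# (each item is correlated with the total score excluding that item).
total = x.sum(axis=1)
r_it = np.array([np.corrcoef(x[:, j], total - x[:, j])[0, 1]
                 for j in range(n_items)])

# Items with larger generating IRT discriminations should also show
# larger CTT item-total correlations.
order_agreement = np.corrcoef(a, r_it)[0, 1]
print(round(order_agreement, 2))
```

The high correlation between the two sets of statistics is the flavor of comparability the abstract describes, though the article's formal linkage runs through confirmatory factor analysis under the UNV assumption rather than raw item-total correlations.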
Multivariate Behavioral Research | 2015
Jennifer Koran; Todd C. Headrick; Tzu Chun Kuo
This article derives a standard normal-based power method polynomial transformation for Monte Carlo simulation studies, approximating distributions, and fitting distributions to data based on the method of percentiles. The proposed method is used primarily when (1) conventional (or L) moment-based estimators such as skew (or L-skew) and kurtosis (or L-kurtosis) are unknown or (2) data are unavailable but percentiles are known (e.g., standardized test score reports). The proposed transformation also has the advantage that solutions to the polynomial coefficients are available in simple closed form, which obviates numerical equation solving. A procedure is also described for simulating power method distributions with specified medians, inter-decile ranges, left-right tail-weight ratios (skew function), tail-weight factors (kurtosis function), and Spearman correlations. The Monte Carlo results presented in this study indicate that the estimators based on the method of percentiles are substantially superior to their corresponding conventional product-moment estimators in terms of relative bias. It is also shown that the percentile power method can be modified for generating nonnormal distributions with specified Pearson correlations. An illustration shows the applicability of the percentile power method technique to publicly available statistics from the Idaho state educational assessment.
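A minimal sketch of the underlying idea: apply a third-order power method polynomial to a standard normal variable and compute percentile-based descriptors of the kind named above. The coefficients here are arbitrary illustrative values, and the descriptor definitions follow common percentile-based conventions rather than the article's exact formulas:

```python
import numpy as np

rng = np.random.default_rng(2)

# Power method transformation: y = c0 + c1*z + c2*z**2 + c3*z**3,
# where z is standard normal. These coefficients are illustrative only;
# with c1 dominant the cubic is monotone, so the median of y stays at c0.
c0, c1, c2, c3 = 0.0, 0.90, 0.15, 0.03
z = rng.normal(size=200_000)
y = c0 + c1 * z + c2 * z**2 + c3 * z**3

# Percentile-based descriptors of the kind the method matches.
q10, q25, q50, q75, q90 = np.percentile(y, [10, 25, 50, 75, 90])
median = q50
inter_decile_range = q90 - q10
left_right_tail_weight_ratio = (q50 - q10) / (q90 - q50)  # skew function
tail_weight_factor = (q75 - q25) / (q90 - q10)            # kurtosis function
print(median, inter_decile_range, left_right_tail_weight_ratio, tail_weight_factor)
```

With a positive quadratic coefficient the distribution is right-skewed, so the left-right tail-weight ratio falls below 1; the article's contribution is solving the inverse problem, recovering the coefficients in closed form from target values of these descriptors.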
Structural Equation Modeling | 2010
Jennifer Koran; Gregory R. Hancock
Valuable methods have been developed for incorporating ordinal variables into structural equation models using a latent response variable formulation. However, some model parameters, such as the means and variances of latent factors, can be quite difficult to interpret because the latent response variables have an arbitrary metric. This limitation is particularly problematic in growth models, where the means and variances of the latent growth parameters typically carry important substantive meaning when continuous measures are used. Yet these methods are often applied to grouped data, where the ordered categories actually represent an interval-level variable that has been measured on an ordinal scale for convenience. The method illustrated in this article shows how category threshold values can be incorporated into the model so that interpretation is more meaningful, with particular emphasis on applying this technique to latent growth models.
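To illustrate the role of thresholds: under the usual identification with a standard normal latent response variable, the thresholds implied by observed category proportions are inverse-normal transforms of the cumulative proportions. The sketch below uses hypothetical proportions; the article's point is that with grouped data the thresholds can instead be fixed from the known interval boundaries, restoring an interpretable metric:

```python
from statistics import NormalDist

# Hypothetical observed proportions in four ordered categories
# (e.g., grouped responses on an underlying interval variable).
props = [0.15, 0.35, 0.30, 0.20]

# Cumulative proportions up to each of the K-1 category boundaries.
cum = [sum(props[:k + 1]) for k in range(len(props) - 1)]

# Thresholds implied under a standard normal latent response variable:
# tau_k = Phi^{-1}(cumulative proportion below boundary k).
thresholds = [NormalDist().inv_cdf(p) for p in cum]
print([round(t, 3) for t in thresholds])
```

Freely estimated thresholds of this kind leave the latent metric arbitrary; anchoring them to the known category cut points (when the grouping boundaries are on a real interval scale) is what makes the growth parameters substantively interpretable.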
Assessment | 2017
Makoto Miyoshi; Kimberly K. Asner-Self; Sheng Yanyan; Jennifer Koran
The current study examined the psychometric properties of the Japanese versions of the Abbreviated Multidimensional Acculturation Scale (AMAS-ZABB-JP) and the 20-item Multigroup Ethnic Identity Measure (MEIM-JP) with 273 Japanese sojourners and immigrants to the United States. The theoretical six-factor structure of the AMAS-ZABB-JP and two-factor structure of the MEIM-JP were consistent with the literature. The subscales of the AMAS and MEIM showed expected patterns of correlation with each other and with additional variables (i.e., number of years in the United States), providing evidence for construct validity. Cronbach's alpha reflected high levels of reliability for both scales. Despite these strong psychometric findings, translation- and culture-related findings suggest the need for further research.
Applied Measurement in Education | 2017
Jennifer Koran; Rebecca J. Kopriva
Providing appropriate test accommodations to most English language learners (ELLs) is important for supporting meaningful inferences about learning. This study compared teachers' large-scale test accommodation recommendations to those from a literature- and practitioner-grounded accommodation selection taxonomy. The taxonomy links student-specific needs, strengths, and schooling experiences to large-scale test accommodation recommendations that differentially minimize barriers to access for students with different profiles. A blind panel of experts rated four sets of recommendations for each of 114 ELLs. The taxonomy's recommendations were a significantly better fit to student needs than the teachers' recommendations. Further, the fit of teacher recommendations did not differ between teachers who used a structured data collection procedure to gather profile information about each of their ELLs and those who did not, and teachers' recommendations did not differ significantly from a random set of accommodations. These findings are consistent with previous literature suggesting that matching specific accommodations to individual needs, rather than identifying individual needs, is where teachers struggle in recommending appropriate test accommodations.
Measurement and Evaluation in Counseling and Development | 2016
Jennifer Koran
Proactive preliminary minimum sample size determination can be useful for the early planning stages of a latent variable modeling study to set a realistic scope, long before the model and population are finalized. This study examined existing methods and proposed a new method for proactive preliminary minimum sample size determination.
Journal of Educational Measurement | 2009
Jennifer Koran; Nidhi Kohli; André A. Rupp
Society for Information Technology & Teacher Education International Conference | 2016
Rodney Greer; Jennifer Koran; Lyle J. White
Journal of Modern Applied Statistical Methods | 2016
Jennifer Koran; Todd C. Headrick