April Ginther
Purdue University
Publications
Featured research published by April Ginther.
Language Testing | 2002
April Ginther
The listening comprehension section of the TOEFL has traditionally involved audio presentations of language without accompanying visual stimuli. Now that the TOEFL is computer-based, listening comprehension items are being created that include both audio and visual information. A nested cross-over design (participants nested in proficiency level and form) was used to examine the effects of visual condition (present or absent), type of stimuli (dialogues/short conversations, academic discussions, and mini-talks), and language proficiency (high or low) on performance on CBT (Computer-based Test) listening comprehension items. Three two-way interactions were significant: proficiency by type of stimuli, type of stimuli by visual condition, and type of stimuli by time. The interaction between type of stimuli and visual condition, although weak, was perhaps the most interesting: it indicated that visuals facilitate performance when they carry information that complements the audio portion of the stimulus.
Language Testing | 2010
April Ginther; Slobodanka Dimova; Rui Yang
Information provided by examination of the skills that underlie holistic scores can be used not only as supporting evidence for the validity of inferences associated with performance tests but also as a way to improve the scoring rubrics, descriptors, and benchmarks associated with scoring scales. As fluency is considered a critical, perhaps foundational, component of speaking proficiency, temporal measures of fluency are expected to be strongly related to holistic ratings of speech quality. This study examines the relationships among selected temporal measures of fluency and holistic scores on a semi-direct measure of oral English proficiency. The spoken responses of 150 respondents to one item on the Oral English Proficiency Test (OEPT) were analyzed for selected temporal measures of fluency. The examinees represented three first-language backgrounds (Chinese, Hindi, and English) and the full range of scores on the OEPT scale. While strong and moderate correlations were found between OEPT scores and speech rate, speech time ratio, mean length of run, and the number and length of silent pauses, fluency variables alone did not distinguish adjacent levels of the OEPT scale. Temporal measures of fluency may reasonably be selected for the development of automated scoring systems for speech; however, identification of an examinee's level remains dependent on aspects of performance only partially represented by fluency measures.
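The temporal measures named in the abstract (speech rate, speech time ratio, mean length of run, pause counts) are simple ratios over a pause-annotated response. A minimal sketch, assuming a hypothetical input format of timed speech runs and inter-run pauses with an assumed 0.25 s silent-pause threshold (the study's actual segmentation criteria are not given here):

```python
def fluency_measures(runs, pauses, pause_threshold=0.25):
    """Compute common temporal fluency measures.

    runs   -- list of (duration_seconds, syllable_count), one per speech run
    pauses -- list of pause durations in seconds between runs
    Pauses shorter than pause_threshold are treated as articulation, not silence.
    """
    silent = [p for p in pauses if p >= pause_threshold]
    speech_time = sum(d for d, _ in runs)
    total_time = speech_time + sum(pauses)
    syllables = sum(s for _, s in runs)

    return {
        # syllables per second over the whole response, pauses included
        "speech_rate": syllables / total_time,
        # proportion of the response spent actually speaking
        "speech_time_ratio": speech_time / total_time,
        # mean number of syllables produced per uninterrupted run
        "mean_length_of_run": syllables / max(len(runs), 1),
        "n_silent_pauses": len(silent),
        "mean_pause_length": sum(silent) / len(silent) if silent else 0.0,
    }

# Toy example: two runs (8 and 12 syllables) separated by a 0.5 s silent
# pause, plus one 0.1 s micro-pause below the silence threshold.
m = fluency_measures(runs=[(2.0, 8), (3.0, 12)], pauses=[0.5, 0.1])
```

The threshold separating "silent pauses" from articulatory micro-pauses is the key design choice in such measures; published fluency studies commonly use values between 0.25 s and 0.4 s.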
Language Testing | 2016
Xun Yan; Yukiko Maeda; Jing Lv; April Ginther
Elicited imitation (EI) has been widely used to examine second language (L2) proficiency and development and was an especially popular method in the 1970s and early 1980s. However, as the field embraced more communicative approaches to both instruction and assessment, the use of EI diminished, and the construct-related validity of EI scores as a representation of language proficiency was called into question. Current uses of EI, while not discounting the importance of communicative activities and assessments, tend to focus on the importance of processing and automaticity. This study presents a systematic review of EI in an effort to clarify the construct and usefulness of EI tasks in L2 research. The review comprised two phases: a narrative review and a meta-analysis. We surveyed 76 theoretical and empirical studies from 1970 to 2014 to investigate the use of EI, in particular with respect to research/assessment context and task features. The results of the narrative review provided a theoretical basis for the meta-analysis. The meta-analysis utilized 24 independent effect sizes based on 1089 participants obtained from 21 studies. To investigate evidence of construct-related validity for EI, we examined the following: (1) the ability of EI scores to distinguish speakers across proficiency levels; (2) correlations between scores on EI and other measures of language proficiency; and (3) key task features that moderate the sensitivity of EI. Results of the review demonstrate that EI tasks vary greatly in terms of task features; however, EI tasks in general have a strong ability to discriminate between speakers across proficiency levels (Hedges' g = 1.34). Additionally, construct, sentence length, and scoring method were identified as moderators for the sensitivity of EI.
Findings of this study provide supportive construct-related validity evidence for EI as a measure of L2 proficiency and inform appropriate EI task development and administration in L2 research and assessment.
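The Hedges' g = 1.34 reported above is a standardized mean difference: Cohen's d corrected for small-sample bias. A minimal sketch of the standard computation (illustrative group statistics only, not data from the meta-analysis):

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp  # Cohen's d
    # Hedges' correction factor J (common approximation)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

# Two hypothetical groups of 30 whose means differ by exactly one pooled SD,
# so Cohen's d = 1.0 and g is slightly smaller after the correction.
g = hedges_g(mean1=75, sd1=10, n1=30, mean2=65, sd2=10, n2=30)
```

With this convention, a g near 1.3, as in the review, corresponds to group distributions that overlap relatively little, which is why EI discriminates well across proficiency levels.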
Language Testing | 2018
April Ginther; Xun Yan
This study examines the predictive validity of the TOEFL iBT with respect to academic achievement as measured by the first-year grade point average (GPA) of Chinese students at Purdue University, a large, public, Research I institution in Indiana, USA. Correlations between GPA and TOEFL iBT total and subsection scores were examined for 1990 mainland Chinese students enrolled across three academic years (N2011 = 740, N2012 = 554, N2013 = 696). Subsequently, cluster analyses of the three cohorts' TOEFL subsection scores were conducted to determine whether different score profiles might help explain the correlational patterns found between TOEFL subscale scores and GPA across the three cohorts. For the 2011 and 2012 cohorts, speaking and writing subscale scores were positively correlated with GPA; however, negative correlations were observed for listening and reading. In contrast, for the 2013 cohort, the writing and reading subscale scores and the total score were positively correlated with GPA, and the negative correlations disappeared. Results of the cluster analyses suggest that the negative correlations in the 2011 and 2012 cohorts were associated with a distinctive Reading/Listening versus Speaking/Writing discrepant score profile of a single Chinese subgroup. In 2013, this subgroup disappeared from the incoming class because of changes made to the University's international undergraduate admissions policy. The uneven score profile has important implications for admissions policy, the provision of English language support, and broader effects on academic achievement.
Archive | 2016
Xun Yan; Suthathip Thirakunkovit; Nancy Kauper; April Ginther
The Oral English Proficiency Test (OEPT) is a computer-administered, semi-direct test of oral English proficiency used to screen prospective international teaching assistants (ITAs) at Purdue University. This paper reports on information gathered from the post-test questionnaire (PTQ), which is completed by all examinees who take the OEPT. PTQ data are used to monitor access to the OEPT orientation video and practice test, to evaluate examinee perceptions of OEPT characteristics and administration, and to identify any problems examinees may encounter during test administration. Responses to the PTQ are examined after each test administration (1) to ensure that examinees encounter no undue or unexpected difficulties and (2) to provide a basis for modifications to our administrative procedures when necessary. In this study, we analyzed 1440 responses to both closed-ended and open-ended questions of the PTQ from 1342 test-takers who took the OEPT between August 2009 and July 2012. Responses to the open-ended questions yielded an unexpectedly wide variety of response categories. The analysis of the three-year data set of open-ended items allowed us to better identify and evaluate the effectiveness of changes we had introduced to the test administration process during that same period. Carefully considering these responses has contributed substantially to our quality control processes.
Language Testing | 2018
Xun Yan; Lixia Cheng; April Ginther
This study investigated the construct validity of a local speaking test for international teaching assistants (ITAs) from a fairness perspective, by employing a multi-group confirmatory factor analysis (CFA) to examine the impact of task type and examinee first language (L1) background on the internal structure of the test. The test consists of three types of integrated speaking tasks (i.e., text-speaking, graph-speaking, and listening-speaking) and the three L1s that are most represented among the examinees are Mandarin, Hindi, and Korean. Using scores of 1804 examinees across three years, the CFA indicated a two-factor model with a general speaking factor and a listening task factor as the best-fitting internal structure for the test. The factor structure was invariant for examinees across academic disciplines and L1 backgrounds, although the three examinee L1 groups demonstrated different factor variances and factor means. Specifically, while Korean examinees showed a larger variance in oral English proficiency, Hindi examinees demonstrated a higher level of oral proficiency than did Mandarin and Korean examinees. Overall, the lack of significance for multiple task factors and the invariance of factor structure suggest that the test measures the same set of oral English skills for all examinees. Although the factor variances and factor means for oral proficiency differed across examinee L1 subgroups, they reflect the general oral proficiency profiles of English speakers from these selected L1 backgrounds in the university and therefore do not pose serious threats to the fairness of the test. Findings of this study have useful implications for fairness investigations on ITA speaking tests.
Studies in Second Language Acquisition | 2002
April Ginther
In the introduction to The power of tests: A critical perspective on the uses of language tests, Elana Shohamy raises the following questions: What is the meaning of a test for test takers, parents, teachers, and school administrators? What are the short- and long-term consequences of tests on the lives of individuals? What are the motivating factors behind the administration of language tests? What are the politics of the tests? These kinds of questions logically arise when the examination of testing includes a concern with the use of tests by educational institutions, policy makers, and society at large. Focusing primarily on the misuse of tests, this volume chronicles both intended and unintended test consequences.
Journal of Second Language Writing | 2000
Leslie Grant; April Ginther
ETS Research Report Series | 1998
Lawrence T. Frase; Joseph Faletti; April Ginther; Leslie Grant
ETS Research Report Series | 2001
April Ginther