
Publications

Featured research published by Yo In’nami.


System | 2006

The effects of test anxiety on listening test performance

Yo In’nami

Although decisions or inferences we make based on test scores depend both on characteristics of test-takers and of testing situations, little research has been undertaken on the effects of these characteristics on test performance (e.g., Alderson and Banerjee, 2002). This study focuses on one of the personal characteristics of test-takers, namely test anxiety, and investigates the effects of test anxiety on listening test performance. Previous research in second language studies has suffered from the following five limitations, all of which were addressed in the current study: (a) no control of measurement errors, (b) insufficient validation of questionnaires, (c) little attention to the effects of test anxiety on test performance, (d) too small a number of questionnaire items, and (e) lack of attention to the effects of test anxiety in listening. Participants took a listening performance test and answered two questionnaires designed to measure test anxiety. Results based on structural equation modeling show that test anxiety does not affect listening test performance. The results support [Aida, Y., 1994. Examination of Horwitz, Horwitz, and Cope’s construct of foreign language anxiety: the case of students of Japanese. The Modern Language Journal 78, 155–168.] and [MacIntyre, P.D., Gardner, R.C., 1989. Anxiety and second-language learning: toward a theoretical clarification. Language Learning 39, 251–275.], and also suggest that in foreign language anxiety [Horwitz, E.K., Horwitz, M.B., Cope, J., 1986. Foreign language classroom anxiety. The Modern Language Journal 70, 125–132.], test anxiety seems to work differently compared with communication apprehension and fear of negative evaluation. © 2006 Elsevier Ltd. All rights reserved.


Language Testing | 2016

Task and Rater Effects in L2 Speaking and Writing: A Synthesis of Generalizability Studies.

Yo In’nami; Rie Koizumi

We addressed Deville and Chalhoub-Deville’s (2006), Schoonen’s (2012), and Xi and Mollaun’s (2006) call for research into the contextual features considered related to person-by-task interactions in the framework of generalizability theory in two ways. First, we quantitatively synthesized generalizability studies to determine the percentage of variation in L2 speaking and L2 writing performance that was accounted for by tasks, raters, and their interaction. Second, we examined the relationships between person-by-task interactions and moderator variables. We used 28 datasets from 21 studies for L2 speaking, and 22 datasets from 17 studies for L2 writing. Across modalities, most of the score variation was explained by examinees’ performance; the interaction effects of tasks or raters were greater than their independent effects. Task and task-related interaction effects explained a greater percentage of the score variance than did the rater and rater-related interaction effects. The variances associated with person-by-task interactions were larger for assessments based on both general and academic contexts than for those based only on academic contexts. Further, large person-by-task interactions were related to analytic scoring and to scoring criteria with task-specific language features. These findings from L2 speaking studies indicate that contexts, scoring methods, and scoring criteria might lead to varied performance across tasks, underscoring the need to define constructs carefully.


Archive | 2013

Structural Equation Modeling in Educational Research

Yo In’nami; Rie Koizumi

Structural equation modeling (SEM) is a collection of statistical methods for modeling the multivariate relationship between variables. It is also called covariance structure analysis or simultaneous equation modeling and is often considered an integration of regression and factor analysis.
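The covariance-structure idea behind SEM can be illustrated with a small simulation (a minimal sketch in plain NumPy; the loadings and error variances below are invented for illustration, not estimates from any study listed here): a one-factor model implies a specific covariance matrix, Sigma = lambda lambda' + Theta, which the sample covariance of simulated indicators should approximate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-factor model: three observed indicators load on one
# latent variable (eta) with variance fixed to 1.
n = 10_000
loadings = np.array([1.0, 0.8, 0.6])   # lambda
error_sd = np.array([0.5, 0.6, 0.7])   # sqrt of Theta diagonal

eta = rng.normal(0.0, 1.0, size=n)                  # latent factor scores
errors = rng.normal(0.0, 1.0, size=(n, 3)) * error_sd
y = eta[:, None] * loadings + errors                # observed indicators

# Model-implied covariance: Sigma = lambda lambda' + Theta
implied = np.outer(loadings, loadings) + np.diag(error_sd ** 2)
sample = np.cov(y, rowvar=False)

print(np.round(implied, 2))
print(np.round(sample, 2))
```

Fitting an SEM model amounts to choosing the free parameters (here, the loadings and error variances) so that the implied matrix reproduces the sample matrix as closely as possible.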


International Journal of Testing | 2013

Review of Sample Size for Structural Equation Models in Second Language Testing and Learning Research: A Monte Carlo Approach

Yo In’nami; Rie Koizumi

The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of parameters in a model using Monte Carlo analysis? (b) How are the results from Monte Carlo sample size analysis comparable with those from the N ≥ 100 rule and from the N: q ≥ 10 (sample size–free parameter ratio) rule? Regarding (a), parameter bias, standard error bias, coverage, and power were overall satisfactory, suggesting that sample size for SEM models in second language testing and learning studies is generally appropriate. Regarding (b), both rules were often inconsistent with the Monte Carlo analysis, suggesting that they do not serve as guidelines for sample size. We encourage applied SEM researchers to perform Monte Carlo analyses to estimate the requisite sample size of a model.
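A Monte Carlo sample size analysis of the kind recommended above can be sketched in a few lines (an illustrative toy that uses a single standardized regression slope rather than a full SEM model; the effect size and sample sizes are hypothetical): simulate many datasets at a candidate N, fit the model to each, and count how often the parameter of interest is significant, giving an empirical power estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo_power(n, beta=0.3, reps=2000, alpha_z=1.959963984540054):
    """Estimate the power to detect a standardized slope `beta` at sample
    size `n` via repeated simulation (two-sided test at the 5% level)."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)
        b_hat = np.sum(x * y) / np.sum(x * x)       # OLS slope, no intercept
        resid = y - b_hat * x
        se = np.sqrt(np.sum(resid ** 2) / (n - 1) / np.sum(x * x))
        if abs(b_hat / se) > alpha_z:
            hits += 1
    return hits / reps

for n in (50, 100, 200):
    print(n, monte_carlo_power(n))
```

The same loop generalizes to SEM: replace the slope with a fitted model and also track parameter bias, standard error bias, and coverage, as in the analyses described above.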


Language Assessment Quarterly | 2017

Issues of Language Assessment in Japan: Past, Present and Future: Editorial by Guest Editors of the Special Issue

Yo In’nami; Rie Koizumi; Yasuyo Sawaki; Yoshinori Watanabe

The last two decades have witnessed the growth of interest in the field of language assessment worldwide, perhaps even more than in any decade in the past. Several important issues, both practical and theoretical, have been brought to the awareness of researchers and practitioners, who in turn have made efforts to address those issues to enhance good assessment practices for the sake of the common good. Japan is no exception, as illustrated by the development of the Japan Language Testing Association (JLTA) over the last few decades. JLTA was established in 1996, modestly with five members, to prepare for the 1999 Language Testing Research Colloquium (LTRC), which was then to be held in an Asian country for the first time. The association has since been growing, currently holding over 200 members, including researchers, teachers, and students. Its development is evidenced not only in the sheer size of its membership, but also in the quality of the paper presentations, workshops, and other concerted efforts to promote language assessment literacy (http://jlta.ac/) and the academic articles that are published in the JLTA Journal (https://www.jstage.jst.go.jp/browse/jltajournal) each year. This special issue comprises state-of-the-art articles and empirical papers that showcase current language assessment research and practice in the country. In so doing, an attempt was made to cover as diverse a range of subjects as possible, from classroom practices and the use of external assessments to the specific curriculum and frameworks in Japanese and English language education. This issue covers a wide range of topics by dealing with diverse assessment purposes, contexts, and target populations in the country. We hope that many of the topics addressed in this issue will also appeal to the experience of readers elsewhere in the world and that the approach taken in the articles will help readers to look at their own issues of concern from a new angle.
The first article in the special issue is an overview paper by Hatasa and T. Watanabe, who present a much-needed review of how Japanese as a Second Language (JSL) has been assessed since the 1980s, a topic on which the currently available documentation is rather limited in the language assessment literature. With a focus on the societal impact of political factors, along with the consequences of assessment practices, their article discusses various assessments used in Japan, including large-scale tests, ranging from the modified version of the American Council on the Teaching of Foreign Languages Oral Proficiency Interview (ACTFL OPI), placement tests (i.e., the Simple Performance Oriented Test or SPOT), and the Japanese Computerized Adaptive Test (J-CAT) to the assessments of Japanese for Specific Purposes Tests (e.g., The National Examination for Registered Nurses and the National Examination for Certified Care Workers). The authors also examine how the Common European Framework of Reference for Languages (CEFR; Council of Europe, 2001) has triggered the development of a Japanese version of the CEFR (i.e., JF Standard). By referring to several studies that have since been published, they discuss the impact of JSL curriculum and instructional methodology. The article concludes by identifying some of the pressing issues surrounding JSL testing and suggests directions for future research to address these issues.


Language Assessment Quarterly | 2017

Using EIKEN, TOEFL, and TOEIC to Award EFL Course Credits in Japanese Universities.

Yo In’nami; Rie Koizumi

Despite the wide use of language tests as a basis for awarding English language course credits at Japanese universities, little has been published about how universities set policies on awarding credits according to external test scores. To narrow this gap, the characteristics of such policies were investigated in relation to the EIKEN Test in Practical English Proficiency (EIKEN), the Test of English as a Foreign Language™ (TOEFL®), and the Test of English for International Communication™ (TOEIC®). Analyses of 18 national and 28 private universities showed that each university had a median of 58.50 EFL courses for which credits were offered on the basis of external test scores. Moreover, approximately one-third of cases of credit awarding showed a discrepancy between skills targeted in courses and those measured on the tests used in credit-awarding policies, suggesting that credit awarding based on these proficiency measures seems overall adequate. However, credit-awarding policies were problematic for four-skill (62.44% and 63.37% for national and private universities, respectively) and listening-speaking courses (61.26% and 65.29%). Academic staff responses to the questionnaire revealed some possible reasons EFL course credits could be offered despite gaps between skills targeted in courses and those measured on tests. Implications are provided for the improvement of credit-awarding policies.


Archive | 2016

Multifaceted Rasch Analysis of Paired Oral Tasks for Japanese Learners of English

Rie Koizumi; Yo In’nami; Makoto Fukazawa

Background and aims: Despite the importance of enhancing and assessing oral interactive ability, few studies have investigated paired oral assessment for Japanese learners of English. This study refines Koizumi et al. (in press), expands the number of paired oral tasks that are calibrated on a logit scale, and examines aspects of validity mainly related to paired oral tasks and raters. Methods: A total of 190 Japanese students from three universities participated in 11 paired oral tasks. Their responses were recorded and evaluated by three raters using a holistic scale. A multifaceted Rasch measurement program, Facets (Linacre 2014), was used. The rating scale model was used to examine the test takers’ abilities, task difficulty, rater severity, and rating scale functions. Structural equation modeling and generalizability theory were also employed. Results and discussions: Results showed a unitary factor structure in the test, with some error correlations between tasks and between raters. We also found that all tasks and raters fit the Rasch model, the rating scale functioned properly, and there was a relatively wide range of tasks in terms of difficulty levels but also a lack of tasks at the higher and lower ends and in between. Results also suggested that large percentages of score variance were explained by persons (test takers), interactions between persons and tasks and between persons and raters, and residuals, and that four tasks with two raters or three tasks with three raters are needed to gain a sufficient reliability of φ = 0.70. As a result, we could provide pieces of validity evidence for the interpretation of the paired oral tasks developed.
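The decision-study logic behind the reliability target of φ = 0.70 can be sketched as follows (the variance components below are invented for illustration and are not the study's estimates): in a person × task × rater design, averaging over more tasks and raters shrinks the absolute error variance, so the dependability coefficient φ rises as n_t and n_r increase.

```python
# Hypothetical variance components for a person x task x rater G-study;
# the numbers are made up for illustration only.
vc = {"p": 0.50, "t": 0.03, "r": 0.02, "pt": 0.20,
      "pr": 0.10, "tr": 0.01, "ptr_e": 0.30}

def phi(n_t, n_r, vc=vc):
    """Dependability (phi) coefficient from a decision study: the absolute
    error variance is divided by the numbers of tasks and raters averaged."""
    abs_error = (
        (vc["t"] + vc["pt"]) / n_t
        + (vc["r"] + vc["pr"]) / n_r
        + (vc["tr"] + vc["ptr_e"]) / (n_t * n_r)
    )
    return vc["p"] / (vc["p"] + abs_error)

for n_t in (1, 2, 3, 4):
    for n_r in (1, 2, 3):
        print(n_t, n_r, round(phi(n_t, n_r), 2))
```

With these illustrative components, a single task rated once falls well below 0.70, while combinations such as four tasks with two raters or three tasks with three raters clear the threshold, mirroring the kind of trade-off the study reports.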


Journal of Language Teaching and Research | 2013

Vocabulary Knowledge and Speaking Proficiency among Second Language Learners from Novice to Intermediate Levels

Rie Koizumi; Yo In’nami


Language Testing in Asia | 2016

Factor structure of the Test of English for Academic Purposes (TEAP®) test in relation to the TOEFL iBT® test

Yo In’nami; Rie Koizumi; Keita Nakamura


Language Testing in Asia | 2016

Validity evidence of Criterion® for assessing L2 writing proficiency in a Japanese university context

Rie Koizumi; Yo In’nami; Keiko Asano; Toshie Agawa

Collaboration

Top co-authors of Yo In’nami:

Yo In'nami (Toyohashi University of Technology)
Maki Shimizu (Takasaki University of Health and Welfare)
Chikako Nakagawa (Japan Society for the Promotion of Science)
Makoto Fukazawa (University of the Ryukyus)