Vahid Aryadoust
National University of Singapore
Publications
Featured research published by Vahid Aryadoust.
Educational Assessment | 2015
Vahid Aryadoust
Forty science students received 12 weeks of training on delivering effective presentations and on using a tertiary-level English oral presentation scale comprising three subscales (Verbal Communication, Nonverbal Communication, and Content and Organization) measured by 18 items. For the final project, each student was given 10 to 12 minutes to present on 1 of the 5 compulsory science books for the module and was rated by the tutor, by peers, and through self-assessment. Many-facet Rasch measurement (MFRM), correlation, and analysis of variance were used to analyse the data. The results show that the student raters, tutor, items, and rating scales achieved high psychometric quality, though a small number of assessments exhibited bias. Whereas all of the biased self-assessments were underestimations of presentation skills, the peer and tutor assessment biases showed a mixed pattern. In addition, self-, peer, and tutor assessments had low to medium correlations on the subscales, and a significant difference was found between the three assessment types. Implications are discussed.
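For readers unfamiliar with the method, the three-facet rating-scale form of many-facet Rasch measurement (Linacre's standard formulation; the notation here is the conventional one from the Rasch literature, not taken from the paper) is

\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k,

where B_n is the ability of presenter n, D_i the difficulty of item i, C_j the severity of rater j, and F_k the threshold of rating category k relative to category k-1. The bias analyses reported above test whether systematic rater-by-presenter interactions remain after these main effects have been estimated.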
International Journal of Testing | 2015
Vahid Aryadoust
The present study uses a mixture Rasch model (MRM) to examine latent differential item functioning in English as a foreign language listening tests. Participants (n = 250) took a listening test and a lexico-grammatical test and completed the Metacognitive Awareness Listening Questionnaire, which comprises problem solving (PS), planning and evaluation (PE), mental translation (MT), person knowledge (PK), and directed attention (DA) subscales. The listening test was subjected to MRM analysis, in which a two-class model showed adequate fit. Next, an artificial neural network and a chi-square test were used to examine the nature of the latent classes. Class 1 comprised high-ability listeners capable of multitasking, who obtained high PS, PE, and lexico-grammatical test scores but low DA, PK, and MT scores. Class 2 comprised low-ability listeners with limited multitasking skills, who obtained high DA, PK, and MT scores but low scores on PS, PE, and the lexico-grammatical test. Finally, a model of listening comprehension is postulated and discussed.
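As a point of reference, the mixture Rasch model (Rost's standard formulation; generic notation, not reproduced from the study) expresses the probability of a correct response as a mixture over latent classes:

P(X_{ni} = 1) = \sum_{g=1}^{G} \pi_g \, \frac{\exp(\theta_{ng} - \beta_{ig})}{1 + \exp(\theta_{ng} - \beta_{ig})},

where \pi_g is the proportion of examinees in class g, \theta_{ng} the ability of person n in class g, and \beta_{ig} the class-specific difficulty of item i. Because each class has its own item-difficulty ordering, a well-fitting two-class solution (G = 2) can be interpreted as two qualitatively different listener profiles, as in the study above.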
Archive | 2016
Lawrence Jun Zhang; Vahid Aryadoust; Donglan Zhang
Strategies-based instruction (SBI) is widely accepted and has been successfully implemented in language and literacy programmes in North America, but little has been reported on how the approach works in a bilingual/biliteracy learning context. This chapter reports on the efficacy of such an intervention conducted in two Singapore primary schools. The Singapore government implements a unique bilingual/biliteracy policy in education, under which English is offered as the first language and one of three mother tongue languages (Chinese, Malay and Tamil) as a second language subject in the national curriculum. Although Singapore's quadrilingual education policy has been internationally acclaimed as successful, some students face challenges in biliteracy learning, resulting in underachievement. To help these students catch up with their better-performing peers, we designed an intervention programme to answer the following research questions: (1) When integrated into the regular curriculum, does SBI have an impact on bilingual students' understanding of the writing processes in their two languages? (2) Specifically, does SBI lead to writing improvement in both languages? The study had an experimental group and a control group, a design intended to compare the pedagogical efficacy of SBI for student improvement in English and Chinese writing over one semester (10 weeks of teaching) in the regular school curriculum. Results suggest that SBI not only raised students' awareness of writing strategies but also improved their English and Chinese writing scores. We therefore conclude that SBI was a useful addition to the writing curriculum in the two schools involved in this study.
Educational Psychology | 2016
Vahid Aryadoust
This study examined the development of the paragraph writing skills of 116 English as a second language university students over the course of 12 weeks, and the relationship between the linguistic features of students' written texts as measured by Coh-Metrix – a computational system for estimating textual features such as cohesion and coherence – and the scores assigned by human raters. The raters' reliability was investigated using many-facet Rasch measurement (MFRM); the growth of students' paragraph writing skills was explored using a factor-of-curves latent growth model (LGM); and the relationships between changes in linguistic features and writing scores across time were examined by path modelling. The MFRM analysis indicates that, despite several misfits, the students' and raters' performances and the scale's functionality conformed to the expectations of the model, thus providing evidence of the psychometric validity of the assessments. The LGM shows that students' paragraph writing skills developed steadily during the course. The Coh-Metrix indices have more predictive power before and after the course than during it, suggesting that Coh-Metrix may struggle to discriminate between some ability levels. Whether a Coh-Metrix index gains or loses predictive power over time is argued to be partly a function of whether raters maintain or lose sensitivity to the linguistic feature measured by that index in their own assessments as the course progresses.
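As background, a basic linear latent growth model (given in its textbook form; the factor-of-curves variant used in the study adds a higher-order factor over several such growth processes) represents repeated writing scores as

y_{ti} = \eta_{0i} + \lambda_t \eta_{1i} + \varepsilon_{ti},

where y_{ti} is student i's score at occasion t, \eta_{0i} and \eta_{1i} are the student's latent intercept and slope, \lambda_t fixes the time metric (e.g. 0, 1, 2 for three occasions), and \varepsilon_{ti} is occasion-specific error. Steady development, as reported above, corresponds to a positive mean slope across students.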
Educational Psychology | 2017
Vahid Aryadoust; Mehdi Riazi
Research into second language writing has developed in depth and scope over the past few decades. Researchers have shown growing interest in new approaches to the teaching and assessment of writing. The provision of diagnostic and/or (automated) corrective feedback (Lee & Coniam, 2013; Liu & Kunnan, 2016), the prediction of writers' ability from psycholinguistic features of their essays (Riazi, 2016) and rater performance (Schaefer, 2008) are but a few major research streams in second language writing. Such new approaches have been discussed heatedly in the scholarly literature, but there remains a need to investigate new issues emerging from these fields in different environments. Specifically, the role of assessment in writing, the validity of the uses and interpretations of qualitative feedback and scores, and the effectiveness of genre-based approaches to writing continue to be major causes of concern for practitioners and researchers alike.

Assessment is a burgeoning field in second language writing research and can be characterised as comprising three fundamental subfields. The first line of research focuses on the development of assessment instruments for various purposes and stakes, including proficiency and/or in-class assessment. Specifically, developing and validating rubrics for diagnostic assessment (Kim, 2011) and investigating the quality of feedback provided by teachers (Bruton, 2009; Diab, 2011) are areas that have drawn intense interest. For example, Kim (2011) investigated the validity of a checklist for providing diagnostic feedback on writing, with considerable success. Kim showed that the assessment tool – which taps into five dimensions of writing (e.g. 'content fulfilment' and 'organizational effectiveness') – is psychometrically reliable and can be used effectively to diagnose English learners' errors. Kim's research is one of the first studies to examine the underlying structure of a diagnostic writing tool, and more research of this kind is required to explore the benefits that language learners can reap from such pedagogical instruments. Specifically, the development and validation of reliable in-class assessments, including diagnostic and formative tasks, is an area in second language writing that requires further research attention and investment.

The second line of research in second language writing concerns the validity of the uses and interpretations of scores and qualitative feedback provided by teachers/raters as well as by automated writing evaluation (AWE) systems such as Criterion, My Access and WriteToLearn (Attali & Burstein, 2005; Dikli, 2010; Dikli & Bleyle, 2014; Liu & Kunnan, 2016). A recent meta-analysis suggests that the effectiveness and role of teacher feedback in second language writing remain an open question (Liu & Brown, 2015), a conclusion in line with an extensive survey of the literature conducted by Van Beuningen (2010). Van Beuningen divided feedback research into the learning-to-write approach (Leki, Cumming, & Silva, 2008) and the writing-to-learn approach (Ortega, 2009). The former aims to foster students' writing ability through continuous corrective feedback and to help them become independent and effective writers.

On the other hand, the writing-to-learn agenda, which emerges from, for example, Manchón (2009) and Ortega (2009), takes a more quantitative approach, applying more controlled research methods and looking into the psycholinguistic and (meta)cognitive aspects of learning (Van Beuningen, 2010). This stream of research provides convincing evidence that, despite all the articulated objections and controversies, providing corrective feedback to students can facilitate their learning.
Language Testing | 2016
Vahid Aryadoust; Limei Zhang
The present study used the mixed Rasch model (MRM) to identify subgroups of readers within a sample of students taking an EFL reading comprehension test. Six hundred and two Chinese college students took a reading test and a lexico-grammatical knowledge test and completed the Metacognitive and Cognitive Strategy Use Questionnaire (MCSUQ; Zhang, Goh, & Kunnan, 2014). MRM analysis revealed two latent classes. Class 1 was more likely to score highly on reading in-depth (RID) items. Students in this class had significantly higher general English proficiency and better lexico-grammatical knowledge, and reported using reading strategies more frequently, especially planning, monitoring, and integrating strategies. In contrast, Class 2 was more likely to score highly on skimming and scanning (SKSN) items but had relatively lower mean scores for lexico-grammatical knowledge and general English proficiency; these students also reported using strategies less frequently than did Class 1. The implications of these findings and directions for further research are discussed.
Language Assessment Quarterly | 2016
Vahid Aryadoust
The fairness and precision of peer assessment have been questioned by educators and academics. Of particular interest, yet poorly understood, are the factors underlying the biases that make peer assessments unfair and imprecise. To shed light on this issue, I investigated gender and academic-major biases in peer assessments of oral presentations. The study sample comprised 66 science students enrolled in a formative assessment-based communication module at an Asian university. Each student delivered an oral presentation in English and evaluated 10–14 of their classmates' presentations. The students' evaluations were anchored by the instructor's evaluation of each presentation. I performed many-facet Rasch measurement (MFRM) for two purposes: (a) to examine the effect of multiple facets on the student and teacher ratings of oral presentations and (b) to adjust the ratings for gender and academic-major biases. The scores assigned by student raters fit the MFRM model well; however, when students evaluated presentations by peers of the opposite sex, they overestimated the scores. An academic-major bias was also observed, whereby students consistently underestimated the scores of same-major peers. After adjusting for these biases, it was concluded that peer assessment can be a reliable and useful form of formative assessment.
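One conventional way to formalise such a bias analysis (this is the generic interaction extension of the three-facet model sketched earlier, not an equation quoted from the paper) is to add an interaction term to the MFRM:

\log \frac{P_{njik}}{P_{nji(k-1)}} = B_n - C_j - D_i - F_k - B_{jm},

where B_{jm} is the estimated bias of rater j toward examinees in group m (e.g. opposite-sex or same-major peers). A B_{jm} estimate that differs significantly from zero flags systematic over- or underestimation, and bias-adjusted ratings are obtained by removing this term.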
2013 IEEE Workshop on Hybrid Intelligent Models and Applications (HIMA) | 2013
Vahid Aryadoust
This study reports a novel application of the Adaptive Neuro-Fuzzy Inference System (ANFIS) to a second language listening test and compares it with path modeling of observed variables. Seven explanatory variables were defined and hypothesized to influence the primary dependent variable, test item difficulty. Next, a matrix of these eight variables (the seven predictors plus item difficulty) was developed and subjected to ANFIS and path modeling. The ANFIS analysis detected stronger effects for several of the seven explanatory variables than path modeling did. Path modeling captured some of the same effects through a mediating variable, test section, which captures aggregate differences across the subsections of the test. In general, neuro-fuzzy models (NFMs) appear to be a promising tool in language and educational assessment.
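To make the technique concrete, the following is a minimal NumPy sketch of the forward pass of a first-order Sugeno-type fuzzy inference system, the architecture that ANFIS adaptively tunes; the rule count, membership parameters, and variable names are illustrative assumptions, not values from the study.

import numpy as np

# Forward pass of a first-order Sugeno fuzzy inference system, the
# architecture that ANFIS trains. All parameters below are illustrative
# placeholders, not estimates from the study.

def gaussian_mf(x, c, s):
    """Gaussian membership: degree to which each input belongs to a fuzzy set."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_forward(x, centers, sigmas, consequents):
    """
    x           : (n_inputs,) input vector (e.g. hypothetical item-level predictors)
    centers     : (n_rules, n_inputs) membership-function centers
    sigmas      : (n_rules, n_inputs) membership-function widths
    consequents : (n_rules, n_inputs + 1) linear coefficients plus intercept
    returns     : scalar prediction (e.g. item difficulty)
    """
    # Layers 1-2: membership degrees, then rule firing strengths (product T-norm)
    memberships = gaussian_mf(x, centers, sigmas)        # (n_rules, n_inputs)
    firing = memberships.prod(axis=1)                    # (n_rules,)

    # Layer 3: normalise firing strengths across rules
    weights = firing / firing.sum()

    # Layer 4: each rule's linear consequent f_r = a_r . x + b_r
    rule_outputs = consequents[:, :-1] @ x + consequents[:, -1]

    # Layer 5: defuzzify as the weighted sum of rule outputs
    return float(weights @ rule_outputs)

# Toy usage: 2 rules over 3 hypothetical predictors of item difficulty.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
centers = rng.normal(size=(2, 3))
sigmas = np.ones((2, 3))
consequents = rng.normal(size=(2, 4))
print(sugeno_forward(x, centers, sigmas, consequents))

In a full ANFIS, the centers, widths, and consequent coefficients are learned from data, typically by a hybrid of least squares and gradient descent; the sketch covers inference only.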
International Journal of Listening | 2016
Christine Chuen Meng Goh; Vahid Aryadoust
This is the final draft, after peer review, of a manuscript published in the International Journal of Listening. The published version is available online at http://www.tandfonline.com/doi/full/10.1080/10904018.2016.1138689
International Journal of Testing | 2015
Purya Baghaei; Vahid Aryadoust
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that a common test method can exert the same effect as a common stimulus and violate the conditional independence assumption of item response theory models because, in general, subsets of items that share a feature are a source of response dependence (Marais & Andrich, 2008). In this study, we use the Rasch testlet model (Wang & Wilson, 2005a) to examine the extent to which test method violates the unidimensionality assumption of the Rasch model. Results show that test formats can introduce small to large amounts of construct-irrelevant variance, contaminate test scores, and lead to violations of the conditional independence assumption. Our findings further suggest that the degree of construct-irrelevant variance introduced by test method may be a function of test-format familiarity.
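For reference, the Rasch testlet model of Wang and Wilson (2005) extends the dichotomous Rasch model with a random testlet effect (standard notation from the general literature, not reproduced from the paper):

P(X_{ni} = 1) = \frac{\exp(\theta_n + \gamma_{n d(i)} - b_i)}{1 + \exp(\theta_n + \gamma_{n d(i)} - b_i)},

where \theta_n is the ability of person n, b_i the difficulty of item i, and \gamma_{n d(i)} a person-specific effect for the testlet d(i) that contains item i. Treating items that share a test method as a testlet, a non-negligible variance of \gamma across persons signals method-induced local dependence, that is, construct-irrelevant variance.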