John M. Norris
Georgetown University
Publications
Featured research published by John M. Norris.
Language Learning | 2000
John M. Norris; Lourdes Ortega
This study employed (and reports in detail) systematic procedures for research synthesis and meta-analysis to summarize findings from experimental and quasi-experimental investigations into the effectiveness of L2 instruction published between 1980 and 1998. Comparisons of average effect sizes from 49 unique sample studies reporting sufficient data indicated that focused L2 instruction results in large target-oriented gains, that explicit types of instruction are more effective than implicit types, and that Focus on Form and Focus on Forms interventions result in equivalent and large effects. Further findings suggest that the effectiveness of L2 instruction is durable and that the type of outcome measures used in individual studies likely affects the magnitude of observed instructional effectiveness. Generalizability of findings is limited because the L2 type-of-instruction domain has yet to engage in rigorous empirical operationalization and replication of its central research constructs. Changes in research practices are recommended to enhance the future accumulation of knowledge about the effectiveness of L2 instruction.
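The abstract's comparisons rest on averaging standardized effect sizes across studies. As a minimal illustration of that arithmetic (a sketch in Python, not the authors' actual procedure; the study values below are invented), Cohen's d for a treatment/control contrast can be computed from group means and a pooled standard deviation, then averaged across studies:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Invented study summaries, for illustration only:
# (mean_t, mean_c, sd_t, sd_c, n_t, n_c)
studies = [
    (78.0, 62.0, 12.0, 14.0, 25, 24),
    (70.0, 61.0, 15.0, 15.0, 30, 31),
    (82.0, 60.0, 10.0, 13.0, 20, 22),
]

effects = [cohens_d(*s) for s in studies]
print("per-study d:", [round(d, 2) for d in effects])
print(f"average effect size: {sum(effects) / len(effects):.2f}")
# By Cohen's benchmarks, d around 0.8 or above is conventionally called 'large'.
```

In practice, meta-analysts often weight each d by its inverse variance and apply a small-sample correction (Hedges' g); the unweighted mean above is only the simplest variant.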
Language Teaching Research | 2009
John M. Norris
In this era of acute demands for accountability testing, institutional accreditation, outcomes assessment, and quality control, language educators are developing a heightened awareness of program evaluation and some of the roles that it may play in determining how language teaching and learning occurs (or does not). Unfortunately, it is precisely these associations that language teachers tend to draw – evaluation is something that is done to us by external experts for mandated purposes, and the uses for and control over evaluation reside well beyond the purview of the language teacher, classroom, or school. Though understandable, particularly in light of the ways in which evaluative processes (like standardized testing) have come to be wielded by some as political weapons rather than educational tools, these perceptions belie the potential value that evaluation can contribute to understanding and improving language teaching practices and programs. Indeed, since the late 1980s, a small cadre of language program evaluators has provided cogent and critical insights into the ways in which evaluation can serve internal as well as external interests, can inform formative as well as summative purposes, can empower language teachers and learners as well as ensure adherence to standards or outcomes, can draw upon multiple methodologies (‘qualitative’ as well as ‘quantitative’), and can transform the value as well as the effectiveness of language education in society (see Alderson & Beretta, 1992; Brown, 1995; Kiely & Rea-Dickins, 2005; Lynch, 1996; Weir & Roberts, 1994). Still, despite the availability of such useful resources, the historical relationship between language education and program evaluation might be described at best as ambivalent. At the same time, it is possibly only now – only with increased demands for evaluation in the contemporary educational landscape – that language educators in a variety of settings are becoming sufficiently tuned in to the necessity of evaluation as a path towards program improvement, educational effectiveness, and perhaps survival of the language teaching profession (see Norris, 2006). A timely and crucial contribution to the sustained development of program evaluation practice in support of language education must be an increase in public discourse about evaluation and the sharing of meaningful examples of evaluation practice.
Annual Review of Applied Linguistics | 2016
John M. Norris
ABSTRACT Task-based language assessment (TBLA) has generated interest since the early 1990s, primarily in conjunction with the ongoing development of task-based language teaching (TBLT) and the pursuit of developing appropriate testing models for this approach to pedagogy (Norris, 2002, 2009). However, tasks also offer considerable advantages for language assessment, beyond their obvious relevance within TBLT classes and programs. In fact, major innovations in the general domain of language assessment over the past two decades have occurred in conjunction with the introduction of tasks into assessment design, largely in response to the need for tests that better represent examinees’ abilities to use the language (Mislevy, Steinberg, & Almond, 2002), but also because tasks offer a meaningful space for language teachers, testers, learners, and others to examine, understand, and improve language learning endeavors (Van Gorp & Deygers, 2013). This article reviews the considerable range of current uses for TBLA, illustrating different types of assessment with concrete examples and highlighting distinct roles for tasks as a basic unit of analysis in test design, interpretation, and intended consequence. Ultimately, it argues that tasks offer a fundamental, though not exclusive, foundation for useful language assessment, and that task-based assessment, though challenging, is probably worth the effort.
Language Teaching Research | 2017
Aleksandra Malicka; Roger Gilabert Guerrero; John M. Norris
Needs analysis (NA) has long been argued to be the prerequisite for the design of language curricula or syllabi and the selection of tasks. According to Long (2005), a one-size-fits-all approach should be substituted by a careful examination of learners’ needs in a particular domain or learner community. Despite the increasing practice of carrying out a NA as a first step in curriculum design, it is still unclear how exactly the insights obtained from NA can be used in meaningful ways to take informed decisions about task and syllabus design. This study attempts to fill this gap by applying the findings obtained in a NA in the domain of a hotel receptionist’s job to the design of pedagogic tasks. The goals of this study were to obtain insights into what tasks are done in this domain (task selection), what kind of language use is associated with these tasks (task discourse analysis), how the information about perceived difficulty of tasks can be translated into instructionally manipulable variables (task difficulty), and in what order the resulting tasks should be presented to learners (task sequencing). The study design employed in-depth qualitative data collection, including 10 semi-structured interviews and three observations, and the sources were domain experts and domain novices. By linking the information obtained in the NA with a theoretical task complexity model, the study provides a detailed account of how real-life tasks can be translated into an articulated set of genuine and instructionally relevant pedagogic tasks.
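To make the step from perceived task difficulty to task sequencing concrete, here is a minimal, hypothetical sketch in Python (the variable names and scoring scheme are invented for illustration and only loosely echo task-complexity models such as Robinson's; the study itself does not prescribe this code):

```python
from dataclasses import dataclass

@dataclass
class PedagogicTask:
    name: str
    n_elements: int        # hypothetical: pieces of information the task juggles
    reasoning_demand: int  # hypothetical: 0 = none, 1 = some, 2 = heavy
    here_and_now: bool     # hypothetical: True if grounded in the immediate context

    def complexity_score(self) -> int:
        # Crude composite: higher score = cognitively more complex task
        return self.n_elements + 2 * self.reasoning_demand + (0 if self.here_and_now else 2)

# Invented hotel-receptionist tasks, for illustration only
tasks = [
    PedagogicTask("check in a guest with a reservation", 2, 0, True),
    PedagogicTask("handle an overbooking complaint", 5, 2, True),
    PedagogicTask("explain the cancellation policy by email", 4, 1, False),
]

# Sequence tasks from simple to complex, as task-based syllabi typically recommend
for task in sorted(tasks, key=PedagogicTask.complexity_score):
    print(f"{task.complexity_score():>2}  {task.name}")
```

The design point this illustrates is that once NA findings are coded as manipulable complexity variables, sequencing decisions become explicit and inspectable rather than intuitive.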
Studies in Second Language Acquisition | 2006
John M. Norris
LANGUAGE ASSESSMENT AND PROGRAMME EVALUATION. Brian K. Lynch. Edinburgh: Edinburgh University Press, 2003. Pp. ix + 182. 34.00 paper.
In this slim volume, Lynch offers a unique contribution of interest for language educators and of particular utility for students who are new to the domain. By integrating the treatment of assessment and evaluation and by exploring issues in design, development, analysis, and ethics, the volume effectively introduces readers to key concerns in these oft-confounded processes. Additionally, although SLA is not addressed directly, SLA researchers might take interest in Lynch's treatment of paradigms and their role in applied work.
Applied Linguistics | 2009
John M. Norris; Lourdes Ortega
Archive | 2006
John M. Norris; Lourdes Ortega
Language Learning | 2001
John M. Norris; Lourdes Ortega
The Handbook of Second Language Acquisition | 2008
John M. Norris; Lourdes Ortega
Amsterdam: Benjamins | 2009
Kris Van den Branden; Martin Bygate; John M. Norris