Brian K. Lynch
University of California, Los Angeles
Publications
Featured research published by Brian K. Lynch.
Language Testing | 1998
Brian K. Lynch; Tim McNamara
Second language performance tests, through the richness of the assessment context, introduce a range of facets which may influence the chances of success of a candidate on the test. This study investigates the potential roles of Generalizability theory (G-theory) (Brennan, 1983; Shavelson and Webb, 1991) and Many-facet Rasch measurement (Linacre, 1989; Linacre and Wright, 1993; McNamara, 1996) in the development of such a performance-based assessment procedure. This represents an extension of preliminary investigations into the relative contributions of these procedures (e.g., Bachman et al., 1995) to another assessment setting. Data for this study come from a trial of materials from the access: test, a test of communicative skills in English as a Second Language for intending immigrants to Australia. The performances of 83 candidates on the speaking skills module were multiply rated and analysed using GENOVA (Crick and Brennan, 1984) and FACETS (Linacre and Wright, 1993). The advantages and specific roles of these contrasting analytical techniques are considered in detail in the light of this assessment context.
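For readers unfamiliar with the two analytical frameworks named above, a minimal sketch (illustrative only, not drawn from the article itself): many-facet Rasch measurement models the log-odds that candidate n, rated by rater j on task i, receives rating category k rather than k-1 as an additive function of the facets,

\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k

where B_n is candidate ability, D_i task difficulty, C_j rater severity and F_k the difficulty of the rating-scale step. G-theory instead partitions observed score variance in a fully crossed persons \times raters \times tasks design into components,

\sigma^2(X_{prt}) = \sigma^2_p + \sigma^2_r + \sigma^2_t + \sigma^2_{pr} + \sigma^2_{pt} + \sigma^2_{rt} + \sigma^2_{prt,e}

from which a generalizability coefficient, \mathrm{E}\rho^2 = \sigma^2_p / (\sigma^2_p + \sigma^2_\delta) with \sigma^2_\delta the relative error variance, is estimated for a given measurement design. FACETS fits the former model; GENOVA estimates the latter variance components.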
Language Testing | 2001
Brian K. Lynch
This article examines language assessment from a critical perspective, defining critical in a manner similar to Pennycook (1999; 2001). I argue that alternative assessment, as distinct from testing, offers a partial response to the challenges presented by a critical perspective on language assessment. Shohamy’s (1997; 1999; 2001) critical language testing (CLT) is discussed as an adequate response to the critical challenge. Ultimately, I argue that important ethical questions, along with other issues of validity, will be articulated differently from a critical perspective than they are in the more traditional approach to language assessment.
TESOL Quarterly | 2001
Brian K. Lynch
Preface
1. Introduction: paradigms and purposes
2. Designing assessment and evaluation
3. Developing measures of language ability and program effectiveness
4. Analysing measurement data for assessment and evaluation
5. Developing interpretivist procedures for language assessment and program evaluation
6. Analysing interpretivist assessment and evaluation information
7. Validity and ethics
References
TESOL Quarterly | 2005
Brian K. Lynch; Peter Shaw
Portfolios have been used in a variety of ways for assessing student work. In education generally, and more specifically in second language education, portfolios have been associated with alternative assessment (Darling-Hammond, 1994; Hamayan, 1995; Shohamy, 1996; Wolf, Bixby, Glenn, & Gardner, 1991). This article defines alternative assessment as representing a paradigm and culture different from those of traditional testing, requiring a different approach to the issues of validity and ethics. We present a framework that integrates a consideration of how power relations determine the ethics and validity of assessment inferences. We then apply that framework to the assessment of student portfolios in a master of arts in TESOL (MA TESOL) program.
TESOL Quarterly | 1994
Brian K. Lynch; Fred Davidson
In discussing the use of both criterion-referenced and norm-referenced measurement techniques for item analysis, Brown (1989) has called for strengthening the relationship between testing and the curriculum. Alderson and Wall (1993) have pointed out the need for actual studies on the existence of washback, or the influence of tests on teaching. This article answers those calls by presenting criterion-referenced language test development (CRLTD) as a means for linking ESL curricula, teacher experience, and language tests. CRLTD focuses on the generation of test specifications, as adapted from Popham (1978), and their refinement following the production of items or tasks from those specifications (a skeletal example follows below). Sample specifications are presented from university ESL/EFL programs at the University of Illinois, Urbana-Champaign, and the University of California, Los Angeles. CRLTD is elaborated further in the form of a workshop designed to translate curricular goals into test instruments with the active participation of teachers. The article concludes by examining data from teachers who have used CRLTD and by discussing its benefits as a proactive process for teaching and assessment.
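As a schematic illustration of the kind of Popham-style specification CRLTD works from (a sketch, not a specification reproduced from the article), such a specification typically comprises a small set of labelled components:

General Description (GD): the skill or objective the items are intended to measure
Prompt Attributes (PA): what the candidate will be presented with
Response Attributes (RA): what the candidate will do, and how the response will be judged
Sample Item (SI): one concrete item or task fitting the specification
Specification Supplement (SS): any further detail, such as acceptable topic or text types

Items or tasks are generated from these components, and the specification is revised when the items it produces drift from the curricular goal it encodes.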
Language Testing | 1988
Lyle F. Bachman; Antony Kunnan; Swathi Vanniarajan; Brian K. Lynch
This paper describes the content analysis undertaken as part of the Cambridge-TOEFL comparability study, the purpose of which is to examine the comparability of two EFL test batteries.
TESOL Quarterly | 1990
Brian K. Lynch
The literature on the evaluation of language teaching programs has focused almost entirely on specific issues of methodology and measurement. This article presents a generalized model for ESL program evaluation. The context-adaptive model consists of a series of seven steps designed to guide the program evaluator through consideration of the issues, information, and design elements necessary for a thorough evaluation. These steps are illustrated with examples from the evaluation of the Reading English for Science and Technology (REST) Project at the University of Guadalajara, Mexico. The model is intended to be flexible, lending itself to effective adaptation and refinement as it is implemented in a variety of ESL/EFL contexts.
Language Testing | 1997
Brian K. Lynch
The central question to be addressed here is whether any test can be defended as ethical or moral. Ethicality is defined in terms of issues such as harm, consent, confidentiality of data and fairness. Frameworks for determining equity of educational opportunity are presented and discussed. A statewide assessment project in Victoria, Australia (the Learning Assessment Project) is then examined in relation to these concerns, and the possibility of more ethical approaches to testing is considered.
Language Testing | 1984
Thom Hudson; Brian K. Lynch
The distinction between norm-referenced measurement (NRM) and criterion-referenced measurement (CRM) has become recognized as an issue in second-language testing. Traditional methods used to determine the reliability and validity of a test, as well as to analyse items for test improvement, have been based on NRM principles. These traditional methods are not entirely appropriate for criterion-referenced tests designed to measure course achievement. This study presents approaches to test development, item analysis, reliability, and validity based on CRM principles. These CRM approaches are discussed and compared with NRM approaches in terms of the types of decisions which result from either approach. The study was conducted using data from an ESL achievement testing project currently in progress at UCLA. The results indicate that CRM approaches provide information not available through NRM methods.
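As a minimal illustration of the contrast (not drawn from the study itself): where NRM item analysis favours items that spread examinees apart, a common CRM item statistic is the difference index, the gain in the proportion of examinees answering an item correctly from pretest to posttest,

DI = p_{post} - p_{pre}

so that an item answered correctly by 30% of students before instruction and 85% after yields DI = 0.55. CRM reliability is likewise often framed as decision consistency, for example the proportion of examinees classified the same way (master or non-master) across two administrations, rather than as a correlation-based coefficient.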
Computer Assisted Language Learning | 2000
Brian K. Lynch
This article describes the approach to program evaluation used in the Project-Oriented Computer Assisted Language Learning (PrOCALL) innovation. The design of the evaluation drew upon previous evaluative work done in network-based classrooms (Bruce et al., 1993) and the context-adaptive model for language program evaluation (Lynch, 1996). Rather than a fixed, a priori approach, the evaluation evolved to meet the changing understandings and expectations of the evaluation’s primary audience—in this case, the project director and participating teachers. A detailed presentation of the data gathering and analysis procedures is given, along with preliminary interpretations. The issue of validity and lessons learned in this evaluation are discussed, and recommendations for future evaluations of innovations similar to PrOCALL are offered.