Publication


Featured research published by Lyle F. Bachman.


Language Testing | 2000

Modern language testing at the turn of the century: assuring that what we count counts

Lyle F. Bachman

In the past twenty years, language testing research and practice have witnessed the refinement of a rich variety of approaches and tools for research and development, along with a broadening of philosophical perspectives and the kinds of research questions that are being investigated. While this research has deepened our understanding of the factors and processes that affect performance on language tests, as well as of the consequences and ethics of test use, it has also revealed lacunae in our knowledge, and pointed to new areas for research. This article reviews developments in language testing research and practice over the past twenty years, and suggests some future directions in the areas of professionalizing the field and validation research. It is argued that concerns for ethical conduct must be grounded in valid test use, so that professionalization and validation research are inseparable. Thus, the way forward lies in a strong programme of validation that includes considerations of ethical test use, both as a paradigm for research and as a practical procedure for quality control in the design, development and use of language tests.


TESOL Quarterly | 1982

The Construct Validation of Some Components of Communicative Proficiency

Lyle F. Bachman; Adrian S. Palmer

The notion of communicative competence has received wide attention in the past few years, and numerous attempts have been made to define it. Canale and Swain (1980) have reviewed these attempts and have developed a framework which defines several hypothesized components of communicative competence and makes the implicit claim that tests of components of communicative competence measure different abilities. In this study we examine the construct validity of some tests of components of communicative competence and of a hypothesized model. Three distinct traits—linguistic competence, pragmatic competence and sociolinguistic competence—were posited as components of communicative competence. A multitrait-multimethod design was used, in which each of the three hypothesized traits was tested using four methods: an oral interview, a writing sample, a multiple-choice test and a self-rating. The subjects were 116 adult non-native speakers of English from various language and language-learning backgrounds. Confirmatory factor analysis was used to examine the plausibility of several causal models, involving from one to three trait factors. The results indicate that the model which best fits the data includes a general and two specific trait factors—grammatical/pragmatic competence and sociolinguistic competence. The relative importance of the trait and method factors in the various tests used is also indicated.
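
The logic of a multitrait-multimethod design can be made concrete with a small simulation: convergent validity shows up as high correlations between different methods measuring the same trait, and discriminant validity as lower correlations between different traits. The Python sketch below is purely illustrative (simulated scores and invented column names, not the study's data or analysis):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 116  # subjects, matching the study's sample size

traits = ["linguistic", "pragmatic", "sociolinguistic"]
methods = ["interview", "writing", "multiple_choice", "self_rating"]

# Simulate one latent score per trait, then one observed score per
# trait x method cell with method-specific noise (illustrative only).
latent = {t: rng.standard_normal(n) for t in traits}
scores = pd.DataFrame({
    f"{t}_{m}": latent[t] + 0.8 * rng.standard_normal(n)
    for t in traits for m in methods
})

mtmm = scores.corr()  # the multitrait-multimethod correlation matrix

# Convergent validity: same trait measured by different methods.
same_trait = [mtmm.loc[f"{t}_{m1}", f"{t}_{m2}"]
              for t in traits
              for i, m1 in enumerate(methods)
              for m2 in methods[i + 1:]]

# Discriminant validity: different traits measured by the same method.
same_method = [mtmm.loc[f"{t1}_{m}", f"{t2}_{m}"]
               for m in methods
               for i, t1 in enumerate(traits)
               for t2 in traits[i + 1:]]

print(f"mean same-trait r:  {np.mean(same_trait):.2f}")   # should be higher
print(f"mean same-method r: {np.mean(same_method):.2f}")  # should be lower
```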


Language Testing | 2002

Some reflections on task-based language performance assessment

Lyle F. Bachman

The complexities of task-based language performance assessment (TBLPA) are leading language testers to reconsider many of the fundamental issues about what we want to assess, how we go about it and what sorts of evidence we need to provide in order to justify the ways in which we use our assessments. One claim of TBLPA is that such assessments can be used to make predictions about performance on future language use tasks outside the test itself. I argue that there are several problems with supporting such predictions. These problems are related to task selection, generalizability and extrapolation. Because of the complexity and diversity of tasks in most ‘real-life’ domains, the evidence of content relevance and representativeness that is required to support the use of test scores for prediction is extremely difficult to provide. A more general problem is the way in which difficulty is conceptualized, both in the way tasks are described and in current measurement models. The conceptualization of ‘difficulty features’ confounds task characteristics with test-takers’ language ability and introduces a hypothetical ‘difficulty’ factor as a determinant of test performance. In current measurement models, ‘difficulty’ is essentially an artifact of test performance, and not a characteristic of assessment tasks themselves. Because of these problems, current approaches to using task characteristics alone to predict difficulty are unlikely to yield consistent or meaningful results. As a way forward, a number of suggestions are provided for both language testing research and practice.


Archive | 1999

Interfaces between second language acquisition and language testing research

Lyle F. Bachman; Andrew D. Cohen

1. Language testing - SLA interfaces: An update (Lyle F. Bachman and Andrew D. Cohen)
2. Construct definition and validity inquiry in SLA research (Carol A. Chapelle)
3. Research in interlanguage variation: Implications for language testing (Elaine Tarone)
4. Strategies and processes in test-taking and SLA (Andrew D. Cohen)
5. Describing language development? Rating scales and SLA (Geoff Brindley)
6. Testing methods in context-based second language research (Dan Douglas)
7. How can language testing and SLA benefit from each other? The case of discourse (Elana Shohamy)
Appendix. Index.


Annual Review of Applied Linguistics | 1989

Assessment and Evaluation

Lyle F. Bachman

Research and development in the assessment of language abilities in the past decade have been concerned both with achieving a better understanding of the nature of language abilities and other factors that affect performance on language tests and with developing methods of assessment that are consistent with the way applied linguists view language use. The way language testers conceptualize language abilities has been strongly influenced by the broadened view of language proficiency as communicative competence that has emerged in applied linguistics. And while this view of language proficiency provides a much richer conceptual basis for characterizing the language abilities to be measured, it has presented language testers with a major challenge in defining these abilities and the interactions among them with sufficient precision to permit their measurement. Language testing researchers have also been influenced by developments in second language acquisition, investigating the effects on test performance of other factors such as background knowledge, cognitive style, native language, ethnicity, and sex. Finally, language testing research and practice have been influenced by advances in psychometrics, in that more sophisticated analytic tools are being used both to unravel the tangled web of language abilities and to assure that the measures of these abilities are reliable, valid, efficient, and appropriate for the uses for which they are intended.


Language Testing | 1991

An exploratory study into the construct validity of a reading comprehension test: triangulation of data sources

Neil J. Anderson; Lyle F. Bachman; Kyle Perkins; Andrew D. Cohen

Recent research in reading comprehension has focused on the processes of reading, while recent thinking in language testing has recognized the importance of gathering information on test-taking processes as part of construct validation. And while there is a growing body of research on test-taking strategies in language testing, as well as research into the relationship between item content and item performance, no research to date has attempted to examine the relationships among all three: test-taking strategies, item content and item performance. This study thus serves as a methodological exploration in the use of information from both think-aloud protocols and more commonly used types of information on test content and test performance in the investigation of construct validity.


TESOL Quarterly | 1982

The Trait Structure of Cloze Test Scores

Lyle F. Bachman

Although there is considerable evidence supporting the predictive validity of cloze tests, recent research into the construct validity of cloze tests has produced differing results. Chihara et al. (1977) concluded that cloze tests are sensitive to discourse constraints across sentences, while Alderson (1979) concluded that cloze tests measure only lower-order skills. Anderson (1980) has concluded that cloze tests measure sensitivity to both cohesive relationships and sentence-level syntax. Factor analytic studies (Weaver and Kingston 1963; Ohnmacht et al. 1970) have identified several factors in cloze and other language tests and suggest that cloze deletions should be based on the linguistic and coherence structures of language. In the present study, the trait structure of a cloze test was examined using confirmatory factor analysis. A cloze passage with rationally selected deletions of syntactic and cohesive items was constructed and given to two groups of non-native English-speaking students entering the University of Illinois. A trait structure with three specific traits and one general trait provided the best explanation of the data. The results suggest that a modified cloze passage, using rational deletions, is capable of measuring both syntactic and discourse-level relationships in a text, and that this advantage may outweigh considerations of reduced redundancy which underlie random deletion procedures.
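
Confirmatory factor analysis of this kind fits a hypothesized covariance structure (common-factor loadings plus unique variances) to the sample covariance matrix by maximum likelihood. As a much-simplified sketch of that general approach, with a single common factor rather than the study's general-plus-three-specific structure, and with simulated rather than real cloze data, an ML fit can be written directly in Python:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate n test takers on p item parcels from a one-factor model
# (dimensions and loadings are invented for illustration).
n, p = 200, 4
lam_true = np.array([0.8, 0.7, 0.6, 0.5])
factor = rng.standard_normal(n)
errors = rng.standard_normal((n, p)) * np.sqrt(1.0 - lam_true**2)
X = np.outer(factor, lam_true) + errors
S = np.cov(X, rowvar=False)

def ml_discrepancy(params):
    """ML fit function log|Sigma| + tr(S Sigma^-1) for Sigma = lam lam' + Psi."""
    lam = params[:p]
    psi = np.exp(params[p:])  # log-parameterized to keep unique variances positive
    sigma = np.outer(lam, lam) + np.diag(psi)
    _, logdet = np.linalg.slogdet(sigma)
    return logdet + np.trace(S @ np.linalg.inv(sigma))

start = np.concatenate([np.full(p, 0.5), np.zeros(p)])
result = minimize(ml_discrepancy, start, method="L-BFGS-B")

print("estimated loadings:", np.round(result.x[:p], 2))
print("true loadings:     ", lam_true)
```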


Language Testing | 1992

Differential Item Functioning on Two Tests of EFL Proficiency

Katherine E. Ryan; Lyle F. Bachman

While investigations of differential item functioning (DIF) in educational measurement are based on bias issues, these investigations have also suggested that DIF may not be ‘bias’. Other factors that impact item performance include, for example, prior experience and background (Holland and Thayer, 1988) and instructional and curricular differences between groups (Linn and Harnisch, 1984). In the context of L2 proficiency testing, differential test performance has been attributed to a number of factors other than language proficiency: cultural background (e.g., Briere, 1968), background knowledge (e.g., Erickson and Molloy, 1983; Alderson and Urquhart, 1985; Hale, 1988) and native language (e.g., Swinton and Powers, 1980; Alderman and Holland, 1981; Farhady, 1982; Politzer and McGroarty, 1985; Spurling and Ilyin, 1985; Oltman, Stricker and Barrows, 1988). But while differences in both native language and background knowledge have been shown to be related to total test scores, the extent to which such differences lead to DIF, or what the implications of DIF are in the L2 proficiency testing context, has not been widely investigated. In an early study examining DIF in the TOEFL, Alderman and Holland (1981) found that there were significant differences in
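
One widely used procedure for flagging DIF of this kind is the Mantel-Haenszel statistic: test takers are stratified by total score, and the odds of a correct response are compared between the reference and focal groups within each stratum. The sketch below uses simulated responses and illustrates the general technique only; it is not a reproduction of this study's analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 0/1 responses to one studied item plus a matching total score.
n = 1000
group = rng.integers(0, 2, n)   # 0 = reference, 1 = focal
total = rng.integers(0, 6, n)   # matching variable: total-score stratum
# The focal group gets lower odds of success on the item (DIF by construction).
p_correct = 0.2 + 0.12 * total - 0.15 * group
item = (rng.random(n) < np.clip(p_correct, 0.05, 0.95)).astype(int)

# Mantel-Haenszel common odds ratio, accumulated across score strata.
num, den = 0.0, 0.0
for k in np.unique(total):
    s = total == k
    a = np.sum((group[s] == 0) & (item[s] == 1))  # reference correct
    b = np.sum((group[s] == 0) & (item[s] == 0))  # reference incorrect
    c = np.sum((group[s] == 1) & (item[s] == 1))  # focal correct
    d = np.sum((group[s] == 1) & (item[s] == 0))  # focal incorrect
    t = a + b + c + d
    if t:
        num += a * d / t
        den += b * c / t

alpha_mh = num / den
# ETS delta scale; negative values indicate an item harder for the focal group.
delta_mh = -2.35 * np.log(alpha_mh)
print(f"MH odds ratio: {alpha_mh:.2f}, MH D-DIF: {delta_mh:.2f}")
```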


Language Testing | 1998

A Latent Variable Approach to Listening and Reading: Testing Factorial Invariance Across Two Groups of Children in the Korean/English Two-Way Immersion Program

Jungok Bae; Lyle F. Bachman

This study investigated the factorial distinctness of two receptive language skills, reading and listening, and the equivalence of factor structure across two groups using simultaneous multigroup covariance structure analyses. The subjects were two groups of students from grades two, three and four, enrolled in the Korean/English Two-Way Immersion Program in the Los Angeles Unified School District: Korean-American students and non-Korean-American students, all learning Korean as a primary/foreign language. The analyses were based on data from tests of listening and reading in Korean. The results indicate the following: 1) the two receptive skills are factorially separable, 2) a two-factor model with listening and reading factors applies across the two groups of learners, 3) the correlation between the listening and reading factors was high and the same across the two groups, 4) the variation in levels of listening and reading proficiency differed across the groups, 5) with the exception of one listening test task, the degree to which the listening and reading test tasks measured listening and reading ability was the same across the two groups, and 6) the test task type that had the highest factor loadings for both groups was one which presented test takers with a set of passages (listening or reading), each of which was followed by comprehension questions. The study also makes a methodological contribution in that it investigated the nature of the two receptive language skills in a latent variable framework, using simultaneous analyses of two groups of young children, and also demonstrated a way to detect measurement invariance as a critical prerequisite to achieve validity of inferences based on measures.
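
Measurement (factorial) invariance of the kind tested here is typically assessed by comparing a model whose loadings may differ between groups against one whose loadings are constrained equal, using a chi-square difference test. The following is a minimal sketch under strong simplifying assumptions: one factor rather than the study's two, and simulated data that are invariant by construction:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(7)
p = 4  # indicators (hypothetical test tasks) per group

def simulate(n, lam):
    """Simulate n examinees from a one-factor model with loadings lam."""
    f = rng.standard_normal(n)
    e = rng.standard_normal((n, p)) * np.sqrt(1.0 - lam**2)
    return np.outer(f, lam) + e

lam_true = np.array([0.8, 0.7, 0.6, 0.5])
n1 = n2 = 150
S1 = np.cov(simulate(n1, lam_true), rowvar=False)  # group 1
S2 = np.cov(simulate(n2, lam_true), rowvar=False)  # group 2 (invariant here)

def F(lam, log_psi, S):
    """Single-group ML discrepancy for Sigma = lam lam' + diag(psi)."""
    sigma = np.outer(lam, lam) + np.diag(np.exp(log_psi))
    _, logdet = np.linalg.slogdet(sigma)
    return logdet + np.trace(S @ np.linalg.inv(sigma)) - np.linalg.slogdet(S)[1] - p

def free_obj(th):   # loadings and uniquenesses free in each group
    return n1 * F(th[:p], th[p:2*p], S1) + n2 * F(th[2*p:3*p], th[3*p:], S2)

def equal_obj(th):  # loadings shared across groups, uniquenesses free
    return n1 * F(th[:p], th[p:2*p], S1) + n2 * F(th[:p], th[2*p:], S2)

start = np.r_[np.full(p, 0.5), np.zeros(p)]
free = minimize(free_obj, np.tile(start, 2), method="L-BFGS-B")
equal = minimize(equal_obj, np.r_[np.full(p, 0.5), np.zeros(2 * p)], method="L-BFGS-B")

# The equal-loadings model imposes p extra constraints; a non-significant
# chi-square difference is consistent with factorial invariance.
lr = equal.fun - free.fun
print(f"chi2({p}) = {lr:.2f}, p = {chi2.sf(lr, p):.3f}")
```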


Annual Review of Applied Linguistics | 1988

Language Testing–SLA Research Interfaces

Lyle F. Bachman

Language testing [LT] research and second language acquisition [SLA] research are often seen as distinct areas of inquiry in applied linguistics. To oversimplify slightly, SLA research takes a longitudinal view, concerning itself primarily with the description and explanation of how second language proficiency develops, while LT research typically observes a “slice of life”, and attempts to arrive at a more or less static description of language proficiency at a given stage of development.

Collaboration


Dive into Lyle F. Bachman's collaborations.

Top Co-Authors

Antony John Kunnan, California State University
Brian K. Lynch, University of California
Mikyung Kim, University of California
Jungok Bae, Kyungpook National University
Chris Salvador, University of California
Dorry M. Kenyon, Center for Applied Linguistics