
Publication


Featured research published by James Dean Brown.


TESOL Quarterly | 1997

The Elements of Language Curriculum: A Systematic Approach to Program Development

James Dean Brown

1. Introduction 2. Needs Analysis 3. Goals and Objectives 4. Testing 5. Materials 6. Teaching 7. Program Evaluation


TESOL Quarterly | 1998

The Alternatives in Language Assessment

James Dean Brown; Thom Hudson

Language testing differs from testing in other content areas because language teachers have more choices to make. The purpose of this article is to help language teachers decide what types of language tests to use in their particular institutions and classrooms for their specific purposes. The various kinds of language assessments are classified into three broad categories: (a) selected-response assessments (including true-false, matching, and multiple-choice assessments); (b) constructed-response assessments (including fill-in, short-answer, and performance assessments); and (c) personal-response assessments (including conference, portfolio, and self- or peer assessments). For each assessment type, we provide a clear definition and explore its advantages and disadvantages. We end the article with a discussion of how teachers can make rational choices among the various assessment options by thinking about (a) the consequences of the washback effect of assessment procedures on language teaching and learning, (b) the significance of feedback based on the assessment results, and (c) the importance of using multiple sources of information in making decisions based on assessment information.


TESOL Quarterly | 1991

Do English and ESL Faculties Rate Writing Samples Differently?

James Dean Brown

This study investigates the degree to which differences exist in the writing scores of native speakers and international students at the end of their respective first-year composition courses (ESL 100 and ENG 100, in this case). Eight members each from the ESL and English faculties at the University of Hawaii at Manoa rated 112 randomly assigned compositions without knowing which type of students had written each. A holistic 6-point (0-5) rating scale initially devised by the English faculty was used by all raters. Raters were also asked to choose the best and worst features (from among cohesion, content, mechanics, organization, syntax, or vocabulary) of each composition as they rated it. The results indicated that there were no statistically significant mean differences between native-speaker and ESL compositions or between the ratings given by the English and ESL faculties. However, the features analysis showed that the ESL and English faculties may have arrived at their scores from somewhat different perspectives.


Language Testing | 2002

Examinee abilities and task difficulty in task-based second language performance assessment

John M. Norris; James Dean Brown; Thom Hudson; William Bonk

This article summarizes findings from investigations into the development and use of a prototype English language task-based performance test. Data included performances by 90 examinees on 13 complex and skills-integrative tasks, a priori estimations of examinee proficiency differences, a priori estimations of task difficulty based on cognitive processing demands, performance ratings according to task-specific as well as holistic scales and criteria, and examinee self-ratings. Findings indicated that the task-based test could inform intended inferences about examinees’ abilities to accomplish specific tasks as well as inferences about examinees’ likely abilities with a domain of tasks. Although a relationship between task difficulty estimates and examinee performances was observed, these estimates were not found to provide a trustworthy basis for inferring examinees’ likely abilities with other tasks. These findings, as well as study limitations, are further discussed in light of the intended uses for performance assessment within language education, and recommendations are made for needed research into the interaction between task features, cognitive processing and language performance.


Language Testing | 1999

The relative importance of persons, items, subtests and languages to TOEFL test variance

James Dean Brown

The purpose of this project was to explore the relative contributions to TOEFL score dependability (which is analogous to classical theory reliability) of various numbers of persons, items, subtests, languages and their various interactions. To these ends, three research questions were formulated: (1) What are the characteristics of the distributions, and how high are the classical theory reliability estimates for the whole test and its subtests? (2) For each of the 15 languages, what are the relative contributions to test variance of persons, items, subtests and their interactions? (3) Across all 15 languages, what are the relative contributions to test variance of persons, items, subtests and languages, as well as their various interactions? The study sampled 15 000 test takers, 1000 each from 15 different language backgrounds, from the total of 24 500 participants in the TOEFL generic data set which itself was a sample from the May 1991 worldwide administration of the TOEFL. The test was administered under normal operational conditions and included all three subtests: (1) Listening Comprehension, (2) Structure and Written Expression, and (3) Vocabulary and Reading Comprehension. The analyses included descriptive statistics, classical theory reliability estimates, and a series of generalizability studies conducted to isolate the variance components due to persons, items, subtests and languages, and their effects on the dependability of the test. Unlike previous research, the results here indicate that, when considered in concert with other important sources of variance (persons, items and subtests), language differences alone account for only a very small proportion of TOEFL test variance. These results should prove useful to test developers and researchers interested in the relative effects of such factors on test design.
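The variance-component logic the study describes (a generalizability study over persons, items, subtests and languages) can be sketched for the simplest crossed persons-by-items design. The score matrix, sample sizes, and the two-facet design below are illustrative assumptions, not the TOEFL data:

```python
import numpy as np

# Hypothetical 5 persons x 4 items matrix of dichotomous item scores.
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
], dtype=float)

n_p, n_i = scores.shape
grand = scores.mean()
person_means = scores.mean(axis=1)
item_means = scores.mean(axis=0)

# ANOVA sums of squares for the crossed p x i design.
ss_p = n_i * np.sum((person_means - grand) ** 2)
ss_i = n_p * np.sum((item_means - grand) ** 2)
ss_total = np.sum((scores - grand) ** 2)
ss_res = ss_total - ss_p - ss_i  # person-by-item interaction confounded with error

ms_p = ss_p / (n_p - 1)
ms_i = ss_i / (n_i - 1)
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

# Expected-mean-square equations yield the variance components
# (negative estimates are conventionally set to zero).
var_res = ms_res
var_p = max((ms_p - ms_res) / n_i, 0.0)
var_i = max((ms_i - ms_res) / n_p, 0.0)

# Generalizability coefficient for relative decisions: the analogue of
# classical reliability that the abstract calls score dependability.
g_coef = var_p / (var_p + var_res / n_i)
print(var_p, var_i, var_res, g_coef)
```

The actual study partitions variance across more facets (subtests and languages and their interactions), but each added facet extends this same expected-mean-square bookkeeping.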


Language Testing | 1993

What are the characteristics of natural cloze tests?

James Dean Brown

This study investigates the characteristics of natural cloze tests. Natural cloze tests are defined here as cloze procedures developed without intercession based on the test developer's knowledge and intuitions about passage difficulty, suitable topics, etc. (i.e., the criteria which are often used to select a cloze passage appropriate for a particular group of students). Fifty reading passages were randomly selected from an American public library. Each passage was made into a 30-item cloze test (every twelfth word deletion). The subjects were 2298 EFL students from 18 colleges and universities in Japan. Each student completed one of the 30-item cloze tests. The 50 cloze tests were randomly administered across all of the subjects so that any variations in statistical characteristics could be assumed to be due to other than sampling differences. The students also took a 10-item cloze test that was common to all students. The 50 cloze tests were compared in terms of descriptive, reliability and validity testing characteristics. The results indicate that natural cloze tests are not necessarily well-centred, reliable and valid. A typical natural cloze is described, but considerable variations were also found in the characteristics of these cloze tests (with many of them having skewed distributions and/or poor reliability). The implications for cloze test construction and use are discussed.
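The mechanical deletion step described above (every twelfth word removed to yield a fixed number of cloze items) can be sketched as follows; the function name, blank format, and item cap are illustrative assumptions, not the study's materials:

```python
def make_cloze(text, every_nth=12, max_items=30):
    """Blank out every nth word, returning the gapped text and the answer key.

    This sketches only the deletion mechanics; real cloze construction also
    involves choices such as leaving an intact lead-in sentence.
    """
    words = text.split()
    answers = []
    for idx in range(every_nth - 1, len(words), every_nth):
        if len(answers) >= max_items:
            break
        answers.append(words[idx])
        words[idx] = "______"
    return " ".join(words), answers


# Usage with a toy passage:
passage = "The quick brown fox jumps over the lazy dog near the river bank today"
gapped, key = make_cloze(passage, every_nth=5, max_items=30)
print(gapped)
print(key)
```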


TESOL Quarterly | 1991

Statistics as a Foreign Language--Part 1: What to Look for in Reading Statistical Language Studies

James Dean Brown

This article is addressed to those practicing EFL/ESL teachers who currently avoid statistical studies. In particular, it is designed to provide teachers with strategies that can help them gain access to statistical studies on language learning and teaching so that they can use the information found in such articles to better serve their students. To that end, five attack strategies are advocated and discussed: (a) use the abstract to decide if the study has value for you; (b) let the conventional organization of the paper help you; (c) examine the statistical reasoning involved in the study; (d) evaluate what you have read in relation to your professional experience; and (e) learn more about statistics and research design. Each of these strategies is discussed, and examples are drawn from the article following this one in this issue of the TESOL Quarterly.


TESOL Quarterly | 1992

Statistics as a Foreign Language--Part 2: More Things to Consider in Reading Statistical Language Studies

James Dean Brown

As was Part 1 of this article, Part 2 is addressed to those practicing EFL/ESL teachers who currently avoid reading statistical studies. Assuming that the first article has been read, the discussion here continues by exploring more advanced strategies that will help in understanding statistical studies on language learning and teaching. To that end, five new strategies are proposed: (a) think about the variables of focus, (b) examine whether the correct statistical tests have been chosen, (c) check the assumptions underlying the statistical tests, (d) consider why the statistical tests have been applied, and (e) practice reading statistical tables. Along the way, necessary terminology is explained so that each of these strategies can be clearly understood. In addition, each of the strategies is discussed with appropriate tables, figures, and examples drawn from recent issues of the TESOL Quarterly, all of which are explained in turn. This article attempts to cover a very complex subject area, statistical language research, in a manner that will give readers the necessary means for gaining access to such studies. Hopefully, these introductory articles will also whet the appetite of a number of readers so that they will be inspired to continue expanding their knowledge in this vital area of language research.


Archive | 2009

Open-Response Items in Questionnaires

James Dean Brown

1. Have you filled out any questionnaires recently? What were they about?
2. Think about the questions — what types of questions were asked?
3. What kinds of information were the different types of questions looking for?
4. How about you — what experience have you had writing or using a questionnaire?
5. What do you think the steps are for using questionnaires in research?


TESOL Quarterly | 1990

Research Issues: The Use of Multiple t Tests in Language Research

James Dean Brown; Graham Crookes

Flynn, S. (1987). Contrast and constructions in a parameter-setting model of L2 acquisition. Language Learning, 37(1), 19-62.
Hermon, G. (1987, March). Government and binding theory: Implications for L1 and L2 acquisition. Paper presented at the Second Language Acquisition and Teacher Education Seminar, Champaign, IL.
McLaughlin, B. (1987). Theories of second-language learning. London: Edward Arnold.
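The methodological point behind the article's title — that running many uncorrected t tests inflates the chance of at least one false positive — can be sketched numerically. The calculation below is illustrative and not drawn from the article itself:

```python
def familywise_error(k, alpha=0.05):
    # P(at least one Type I error) across k independent tests run at alpha each.
    return 1 - (1 - alpha) ** k


def bonferroni_alpha(k, alpha=0.05):
    # Per-comparison level that keeps the familywise rate at or below alpha.
    return alpha / k


# With 10 comparisons at the nominal .05 level, the familywise error
# rate climbs well past .05 unless the per-test level is adjusted.
for k in (1, 5, 10):
    print(k, round(familywise_error(k), 3), bonferroni_alpha(k))
```

The Bonferroni division is the simplest adjustment; ANOVA followed by planned contrasts, which the literature on this problem generally recommends, addresses the same inflation.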

Collaboration


Dive into James Dean Brown's collaboration.

Top Co-Authors

Thom Hudson
University of California

William Bonk
University of Colorado Boulder