
Publication


Featured research published by Tim McNamara.


Language Learning | 2001

Can We Predict Task Difficulty in an Oral Proficiency Test? Exploring the Potential of an Information-Processing Approach to Task Design.

Noriko Iwashita; Tim McNamara; Catherine Elder

This study addresses the following question: Are different task characteristics and performance conditions (involving assumed different levels of cognitive demand) associated with different levels of fluency, complexity, or accuracy in test candidate responses? The materials for the study were a series of narrative tasks involving a picture stimulus; the participants were 193 pre-university students taking English courses. We varied the conditions for tasks in each dimension and measured the impact of these factors on task performance with both familiar detailed discourse measures and specially constructed rating scales, analyzed using Rasch methods. We found that task performance conditions in each dimension failed to influence task difficulty and task performance as expected. We discuss implications for the design of speaking assessments and broader research.


Language Testing | 2001

Language assessment as social practice: challenges for research

Tim McNamara

In this article I argue that a growing awareness of the fundamentally social character of language assessment challenges us to rethink our priorities and responsibilities in language testing research. This awareness has been brought about by the treatment of the social character of educational assessment in Samuel Messick’s influential work on validity, and by the intellectual changes triggered by postmodernism, where models of individual consciousness have been reinterpreted in the light of socially motivated critiques. The article concludes by arguing that the institutional character of assessment often means that the needs of learners are not well served by much language assessment theory and practice, and calls for a reexamination of our research priorities.


Language Testing | 1998

Using G-theory and Many-facet Rasch measurement in the development of performance assessments of the ESL speaking skills of immigrants

Brian K. Lynch; Tim McNamara

Second language performance tests, through the richness of the assessment context, introduce a range of facets which may influence the chances of success of a candidate on the test. This study investigates the potential roles of Generalizability theory (G-theory) (Brennan, 1983; Shavelson and Webb, 1991) and Many-facet Rasch measurement (Linacre, 1989; Linacre and Wright, 1993; McNamara, 1996) in the development of such a performance-based assessment procedure. This represents an extension of preliminary investigations into the relative contributions of these procedures (e.g., Bachman et al., 1995) to another assessment setting. Data for this study come from a trial of materials from the access: test, a test of communicative skills in English as a Second Language for intending immigrants to Australia. The performances of 83 candidates on the speaking skills module were multiply rated and analysed using GENOVA (Crick and Brennan, 1984) and FACETS (Linacre and Wright, 1993). The advantages and specific roles of these contrasting analytical techniques are considered in detail in the light of this assessment context.
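The contrast between the two analytic approaches can be sketched numerically. Below is a minimal G-study for a fully crossed persons-by-raters design in the spirit of Brennan (1983): variance components for persons, raters, and residual are estimated from mean squares, and a relative generalizability coefficient is computed. The score matrix and the `g_study` helper are invented for illustration and do not come from the access: test data.

```python
import numpy as np

def g_study(scores: np.ndarray):
    """G-study for a crossed persons-by-raters design.

    scores[p, r] is the rating given to person p by rater r. Returns
    variance components (person, rater, residual) and the relative
    generalizability coefficient for the observed number of raters.
    """
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # Mean squares from the two-way ANOVA decomposition
    ms_p = n_r * np.sum((person_means - grand) ** 2) / (n_p - 1)
    ms_r = n_p * np.sum((rater_means - grand) ** 2) / (n_r - 1)
    resid = scores - person_means[:, None] - rater_means[None, :] + grand
    ms_e = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))

    var_p = max((ms_p - ms_e) / n_r, 0.0)  # true-score (person) variance
    var_r = max((ms_r - ms_e) / n_p, 0.0)  # rater severity variance
    var_e = ms_e                           # interaction + error
    g_coef = var_p / (var_p + var_e / n_r)  # relative G for n_r raters
    return var_p, var_r, var_e, g_coef

# Toy data: five candidates each scored by the same three raters
scores = np.array([[4, 5, 4],
                   [2, 3, 2],
                   [5, 6, 5],
                   [3, 3, 4],
                   [1, 2, 2]], dtype=float)
var_p, var_r, var_e, g = g_study(scores)
```

Where G-theory summarizes how much score variance is attributable to each facet in aggregate, many-facet Rasch measurement instead estimates a severity parameter for each individual rater, which is why the two procedures play complementary roles in test development.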


Language Testing | 1990

Item Response Theory and the validation of an ESP test for health professionals

Tim McNamara

This paper uses a discussion of the role of Rasch model IRT in the validation of two sub-tests of the Occupational English Test to argue for the usefulness of IRT as a tool in the exploration of test constructs and also to consider the implications of the empirical analysis presented for the validity of communicative language tests involving the skills of speaking and writing.
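The dichotomous Rasch model underlying this kind of analysis can be stated compactly: the probability of success depends only on the difference between candidate ability and item difficulty, both expressed in logits. The sketch below is a generic illustration with invented parameter values, not the Occupational English Test calibration.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)).

    theta: candidate ability in logits; b: item difficulty in logits.
    """
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the success probability is exactly 0.5
p_matched = rasch_prob(1.2, 1.2)
```

Because the model places candidates and items on the same logit scale, misfitting items or persons stand out against the model's expectations, which is what makes it useful for exploring what a test is actually measuring.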


Language Testing | 2002

Estimating the Difficulty of Oral Proficiency Tasks: What Does the Test-Taker Have To Offer?.

Catherine Elder; Noriko Iwashita; Tim McNamara

This study investigates the impact of performance conditions on perceptions of task difficulty in a test of spoken language, in light of the cognitive complexity framework proposed by Skehan (1998). Candidates performed a series of narrative tasks whose characteristics, and the conditions under which they were performed, were manipulated, and the impact of these on task performance was analysed. Test-takers recorded their perceptions of the relative difficulty of each task and their attitudes to them. Results offered little support for Skehan’s framework in the context of oral proficiency assessment, and also raise doubts about post hoc estimates of task difficulty by test-takers.


TESOL Quarterly | 1997

Theorizing Social Identity; What Do We Mean by Social Identity? Competing Frameworks, Competing Discourses

Tim McNamara

The TESOL Quarterly invites commentary on current trends or practices in the TESOL profession. It also welcomes responses or rebuttals to any articles or remarks published here in The Forum or elsewhere in the Quarterly.


Language Testing | 1997

The effect of interlocutor and assessment mode variables in overseas assessments of speaking skills in occupational settings

Tim McNamara; Tom Lumley

The increasing demand for performance assessment of speaking skills in second languages has led to logistic complications, for example, the delivery of tests in overseas locations. One solution to the problem has been to train native speaker interlocutors to carry out a series of oral interactions with the candidate, with assessment from audiorecordings of the test session postponed and conducted centrally by a small team of trained raters. This technique is currently used in two large-scale occupationally related ESP tests administered internationally on behalf of the Australian government. But these procedures raise questions about the effect of such facets of the assessment situation as interlocutor variables and the quality of the audiotape recording. Recent developments in multifaceted Rasch measurement have significantly broadened the possibilities for investigation of these issues. The research presented in this article investigates potential problems associated with the above approach to the offshore testing of speaking skills. Data from audiotape-based assessments of 70 offshore candidates from two administrations of the Occupational English Test, an advanced-level ESP test for health professionals, are considered. In addition to multiple ratings of candidate performance, each recording is rated for perceptions of the competence of the interlocutor, the rapport established between the candidate and the interlocutor, and the audibility of the interaction. These aspects of the assessment situation are treated as facets in a series of multifaceted Rasch analyses of the data. The results of the analysis reveal the effects of interlocutor variability and audiotape quality on ratings. The article concludes with an evaluation of the overall feasibility of the procedure, and implications for test administration are considered. The study is also a further demonstration of the application of multifaceted Rasch measurement in performance assessment settings.
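The multifaceted extension of the Rasch model treats each facet of the assessment situation as an additive term on the logit scale: the log-odds of success are modelled as ability minus task difficulty minus rater severity, with further facets (such as interlocutor competence or tape audibility) entering as additional subtracted terms. The sketch below illustrates the decomposition in general terms; the function names and parameter values are invented, and a real FACETS analysis estimates these parameters from rating data rather than fixing them.

```python
import math

def rating_logit(ability: float, task_difficulty: float,
                 rater_severity: float, *other_facets: float) -> float:
    """Additive logit decomposition of a many-facet Rasch model.

    Extra facets (e.g. interlocutor or audibility effects) are passed
    as further subtracted terms, all on the logit scale.
    """
    return ability - task_difficulty - rater_severity - sum(other_facets)

def prob_success(logit: float) -> float:
    """Convert a logit back to a probability."""
    return 1.0 / (1.0 + math.exp(-logit))

# Same candidate and task: a more severe rater (higher severity value)
# lowers the modelled chance of a favourable rating.
lenient = prob_success(rating_logit(1.0, 0.5, -0.3))
severe = prob_success(rating_logit(1.0, 0.5, 0.8))
```

Because each facet is estimated separately, the analysis can quantify how much of the variation in ratings is due to the interlocutor or the recording quality rather than the candidate, which is exactly the question the study addresses.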


Language Assessment Quarterly | 2006

Validity in Language Testing: The Challenge of Sam Messick's Legacy

Tim McNamara

The thought of Samuel Messick has influenced language testing in two main ways: in proposing a new understanding of how inferences made based on tests must be challenged, and in drawing attention to the consequences of test use. The former has had a powerful impact on language-testing research, most notably in Bachman's work on validity and the design of language tests. Messick's writing on test consequences has informed debate on ethics, impact, accountability, and washback in language testing in the work of several researchers. But the character of Messick's work challenges us in many additional ways. Messick located validity theory in the area of values. This article explores the implications of this position, highlighting the social construction of language test constructs. Language test constructs are increasingly the target of policy, a development that threatens to render traditionally conceived validation work of only marginal relevance. The less obvious, covert social construction of language test constructs is explored in the light of Butler's theory of performativity. The article concludes with a consideration of recent adaptations of Messick's work in the influential validation models of Mislevy and Kane, and considers their failure to address questions of values and the social context of assessment properly. Tackling these questions is the ongoing challenge of Messick's legacy.


Language Policy | 2003

LINGUISTIC IDENTIFICATION IN THE DETERMINATION OF NATIONALITY: A PRELIMINARY REPORT

Diana Eades; Helen Fraser; Jeff Siegel; Tim McNamara; Brett Baker

The authors of this report are five Australian experts in the fields of sociolinguistics, phonetics (analysis of accent or pronunciation) and language testing. Their report raises concerns about the “language analysis” that is being done by overseas agencies and that is being used by the Australian government in determining the nationality of refugee claimants, and concludes that “language analysis”, as it is currently used, is not valid or reliable. It appears to be based on “folk views” about the relationship between language and nationality and ethnicity, rather than sound linguistic principles. The report found that: i) a person's nationality cannot always be determined by the language he or she speaks, ii) a few key words and their pronunciation normally cannot reveal a person's nationality or ethnicity, iii) common perceptions about pronunciation differences among groups of people cannot be relied upon, iv) any analysis of pronunciation must be based on thorough knowledge of the language and region in question and must involve detailed phonetic analysis. Furthermore, in a study of 58 Refugee Review Tribunal (RRT) decisions in which this “language analysis” was at issue, it was found that there were doubts over its validity. The authors have grave concerns that the use of “language analysis” in the determination of nationality may be preventing Australia from properly discharging its responsibilities under the Refugees Convention and therefore call on the Australian Government to stop using this type of analysis.


Language Testing | 2012

Developing a comprehensive, empirically based research framework for classroom-based assessment

Kathryn Hill; Tim McNamara

This paper presents a comprehensive framework for researching classroom-based assessment (CBA) processes, and is based on a detailed empirical study of two Australian school classrooms where students aged 11 to 13 were studying Indonesian as a foreign language. The framework can be considered innovative in several respects. It goes beyond the scope of earlier models in addressing a number of gaps in previous research, including consideration of the epistemological bases for observed assessment practices and a specific learner and learning focus. Moreover, by adopting the broadest possible definition of CBA, the framework allows for the inclusion of a diverse range of data, including the more intuitive forms of teacher decision-making found in CBA (Torrance & Pryor, 1998). Finally, in contrast to previous studies the research motivating the development of the framework took place in a school-based foreign language setting. We anticipate that the framework will be of interest to both researchers and classroom practitioners.

Collaboration


Top co-authors of Tim McNamara, all at the University of Melbourne:

Kathryn Hill
John Pill
Gillian Webb
Annie Brown
Geoff McColl