Publication


Featured research published by Kathryn Hill.


Language Testing | 2004

Size and Strength: Do We Need Both to Measure Vocabulary Knowledge?

Batia Laufer; Cathie Elder; Kathryn Hill; Peter Congdon

This article describes the development and validation of a test of vocabulary size and strength. The first part of the article sets out the theoretical rationale for the test and describes how the size and strength constructs have been conceptualized and operationalized. The second part of the article focuses on the process of test validation, which involved testing the hypotheses implicit in the test design using both unidimensional and multifaceted Rasch analyses. Possible applications for the test include determining the status of a learner’s vocabulary development as well as screening and placement. A model for administering the test in computer-adaptive mode is also proposed. The study has implications both for the design and delivery of this test and for theories of vocabulary acquisition.
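For readers unfamiliar with the analyses named in this abstract: the unidimensional Rasch model expresses the probability of a correct response as a logistic function of the difference between person ability and item difficulty, and the multifaceted (many-facet) extension adds further facets such as rater severity. The formulation below is a standard textbook sketch given purely as an illustration; it is not drawn from the article itself.

$$P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}$$

$$\log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - b_i - C_j - F_k$$

where $\theta_n$ is the ability of person $n$, $b_i$ the difficulty of item $i$, $C_j$ the severity of rater $j$, and $F_k$ the difficulty of rating-scale step $k$.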


Language Testing | 2012

Developing a comprehensive, empirically based research framework for classroom-based assessment

Kathryn Hill; Tim McNamara

This paper presents a comprehensive framework for researching classroom-based assessment (CBA) processes, and is based on a detailed empirical study of two Australian school classrooms where students aged 11 to 13 were studying Indonesian as a foreign language. The framework can be considered innovative in several respects. It goes beyond the scope of earlier models in addressing a number of gaps in previous research, including consideration of the epistemological bases for observed assessment practices and a specific focus on learners and learning. Moreover, by adopting the broadest possible definition of CBA, the framework allows for the inclusion of a diverse range of data, including the more intuitive forms of teacher decision-making found in CBA (Torrance & Pryor, 1998). Finally, in contrast to previous studies, the research motivating the development of the framework took place in a school-based foreign language setting. We anticipate that the framework will be of interest to both researchers and classroom practitioners.


Annual Review of Applied Linguistics | 2002

Discourse and Assessment

Tim McNamara; Kathryn Hill; Lynette A. May

Many contemporary currents in applied linguistics have favored discourse studies within assessment; there have been calls for cross-fertilization with other areas within applied linguistics, critiques of the positivist tradition within language testing research, and the growing impact of Conversation Analysis (CA) and sociocultural theory. This chapter focuses on the resulting increase in discourse-based studies of oral proficiency assessment techniques. These studies initially focused on the traditional oral proficiency interview but have since been extended to new test formats, including paired and group interaction. We discuss the research carried out on a number of factors in the assessment setting, including the roles of interlocutor, candidate, and rater, and the impact of tasks, task performance conditions, and rating criteria. Recent research has also concentrated more specifically on the assessment of pragmatic competence and on the applications of technology within the assessment of spoken language, including the comparability of semidirect and direct methods for such assessment and the use of computer corpora.


Australasian Psychiatry | 2015

All the world’s a stage: evaluating psychiatry role-play based learning for medical students

Joel King; Kathryn Hill; Andrew Gleason

Objective: This paper describes an evaluation of an innovative approach, role-play based learning (RBL), as a vehicle for teaching psychiatry. The aim of this intervention, where medical students perform both doctor and patient roles, was to provide an interactive learning format that engaged students while developing clinical knowledge and communication skills in a structured, reflective environment. Method: Questionnaires were completed by 107 students from three clinical schools of the University of Melbourne. Data were analysed using descriptive and inferential statistics and thematic content analysis. Results: Student evaluations of the RBL sessions were overwhelmingly positive. Respondents reported improvements in engagement, confidence and empathy, as well as in their learning, and that the sessions provided good preparation for internship as well as for exams. Conclusion: The RBL tutorial programme is unique and flexible and could readily be adapted for use in other specialty rotations. It is also timely, given the increased interest in simulation prompted by increasing pressure on training places across the health sciences in Australia.


Measurement: Interdisciplinary Research & Perspective | 2015

Validity Inferences under High-Stakes Conditions: A Response from Language Testing.

Kathryn Hill; Tim McNamara

Those of us working in second- and foreign-language testing will find Koretz’s concern for validity inferences under high-stakes (VIHS) conditions both welcome and familiar. While the focus of this article is more narrowly on the potential for two instructional responses to test-based accountability, reallocation and coaching, to undermine the validity of score-based inferences about achievement by inflating scores and exaggerating mastery of the domain of interest, rather than on the broader set of problems around test-based accountability (Haertel, 2013), the issues raised have also been extensively discussed for several years in the field of language testing, albeit using somewhat different terminology. Washback (or backwash) is the specific term used to refer to the effects of high-stakes assessment on teaching and learning and is considered a subset of test impact more generally (Cheng, 2008). The inflationary or deflationary effect of nonsubstantive (construct-irrelevant) performance elements on scores (associated with coaching in the article) is known as test method effect (Bachman, 1990). The context of the article, where outcomes-based control of educational effort is found to have a distorting effect on teaching and learning efforts, has many parallels in language testing, not only in the United States, where the negative impact on bilingual education of test-based accountability under No Child Left Behind and more recently the Common Core Standards has been widely discussed (Menken, 2008), but internationally, particularly in studies of the problematic impact of the Common European Framework of Reference (Council of Europe, 2001) and PISA testing on language education (McNamara, 2011). But issues around validity inferences under high-stakes conditions arise in language testing in more contexts than the kind of test-based accountability scenarios that are the focus of this article. Since the appearance of a special volume of Language Testing (the leading journal in the field) featuring an article on washback by Messick (1996), there has been a significant amount of research undertaken on washback and the broader impacts of language assessment, including a suite of studies sponsored by the major language testing agencies themselves. However, the high-stakes language assessments forming the focus of these studies are typically used for selection and/or accreditation purposes rather than for school and teacher accountability: language tests act as gatekeeping devices to control access to employment, education, migration, and citizenship. These include commercial language tests used for all these purposes, such as the International English Language Testing System (IELTS)


Measurement: Interdisciplinary Research & Perspective | 2012

A Response from Languages.

Tim McNamara; Kathryn Hill

The suggested role for assessment in developing “Roadmaps for Learning” has potentially important implications for the learning of second or foreign languages in school, a major concern of applied linguistics. In this response, we will consider how the findings of a detailed ethnographic study of classroom-based assessment in two foreign language classrooms in Australia (Hill, 2009) can illuminate a number of the issues and suggestions in the article. We will also evaluate the success of attempts within language testing so far to map growth along the lines suggested in the article, and the implications this has for the applicability of the suggested approach to skill-oriented areas of the curriculum (such as proficiency in the practical use of languages, the current target of school languages curricula), as distinct from the knowledge-oriented areas of the curriculum (such as mathematics or science).


Archive | 2003

Assessment Research in Second Language Curriculum Initiatives

Kathryn Hill; Tim McNamara

This paper considers the development of second and foreign language education in schools in the Asia-Pacific region from the point of view of assessment. It will be argued that assessment research can play a diverse range of roles in language policy formulation and implementation within education systems.


Archive | 1999

Dictionary of language testing

Alan Davies; Annie Brown; Cathie Elder; Kathryn Hill; Tom Lumley; Tim McNamara


International English Language Testing System (IELTS) Research Reports 1999, Volume 2 | 1999

A comparison of IELTS and TOEFL as predictors of academic success

Kathryn Hill; Neomy Storch; Brian K. Lynch

Collaboration


Dive into Kathryn Hill's collaborations.

Top Co-Authors

Annie Brown (University of Melbourne)

Tim McNamara (University of Melbourne)

Neomy Storch (University of Melbourne)

Tom Lumley (University of Melbourne)

Alan Davies (University of Edinburgh)

Andrew Scrimgeour (University of South Australia)

Angela Scarino (University of South Australia)