Publication


Featured research published by John Pill.


Language Testing | 2013

Defining the language assessment literacy gap: evidence from a parliamentary inquiry

John Pill; Luke Harding

This study identifies a unique context for exploring lay understandings of language testing and, by extension, for characterizing the nature of language assessment literacy among non-practitioners, stemming from data in an inquiry into the registration processes and support for overseas trained doctors by the Australian House of Representatives Standing Committee on Health and Ageing. The data come from Hansard transcripts of public hearings of the inquiry. Sections of the data related to language and language testing (as part of the current registration process for doctors seeking employment in Australia) were identified and coded using a thematic analysis. Findings reveal misconceptions about who is responsible for tests and for decisions based on scores in this context, as well as misconceptions about language testing procedures. Issues also emerge concerning the location of expertise in language and language testing. Discussion of these findings contributes to current debate within the language testing community (e.g., Taylor, 2009) about where responsibility lies for increasing language assessment literacy among non-practitioner stakeholders and how this might best be achieved.


Medical Teacher | 2015

How we developed Doctors Speak Up: an evidence-based language and communication skills open access resource for International Medical Graduates

Robyn Woodward-Kron; Catriona Fraser; John Pill; Eleanor Flynn

Background: Some International Medical Graduates (IMGs) need to develop language and communication skills for patient-centred care but have limited opportunities to do so. Aim: To develop an evidence-based, language and communication skills web resource for IMG doctors and supervisors, focussing on culturally challenging patient interviews. Methods: Forty-eight IMGs participated in four practice OSCEs. We video-recorded the interactions and applied discourse analytic methods to investigate salient language and communication features. Results: The findings from the OSCE workshops showed that many participants demonstrated aspects of patient-centred interviewing but were hindered by limited interactional competence to elicit information and negotiate behaviours as well as a limited repertoire of English grammar, vocabulary, and phonological phrasing for effective interaction. These findings guided the choice of content and pedagogy for the development of the web-based resource Doctors Speak Up. Conclusion: Evaluation and uptake of the Doctors Speak Up website confirm the demand for a resource combining targeted communication skills and language instruction. Over 19 500 users visited the website between March 2012 and November 2013.


Language Assessment Quarterly | 2012

How professionally relevant can language tests be? A response to Wette (2011)

John Pill; Robyn Woodward-Kron

This is a response to the commentary “English Proficiency Tests and Communication Skills Training for Overseas-Qualified Health Professionals in Australia and New Zealand” by Rosemary Wette, published in Language Assessment Quarterly, Volume 8, Issue 2, 2011.


Language Testing | 2016

Drawing on indigenous criteria for more authentic assessment in a specific-purpose language test: Health professionals interacting with patients

John Pill

The indigenous assessment practices (Jacoby & McNamara, 1999) in selected health professions were investigated to inform a review of the scope of assessment in the speaking sub-test of a specific-purpose English language test for health professionals, the Occupational English Test (OET). The assessment criteria in current use on the test represent a generalized view of language and are concerned with Overall Communicative Effectiveness, Fluency, Intelligibility, Appropriateness of Language, and Resources of Grammar and Expression. The research study focused on healthcare consultations between trainee health professionals and patients. Educators and supervisors observed these interactions and subsequently provided feedback on trainees’ performances. The assumption was that, in their comments, educators would give information pertinent to trainees’ acculturation to the expectations and behaviours of the profession, that is, to “what matters” to practitioners. Thematic analysis was undertaken to establish the aspects of performance that matter to health professionals in these contexts. Data for each profession were coded independently. Clear similarities across the professions became apparent as themes emerged. An exploratory conceptual model of what health professionals value in the consultation was developed, comprising three focal areas: foundation, performance and goals of the consultation. Findings from the analysis provided an empirical basis for the generation and definition of two additional, professionally relevant criteria for use in the OET speaking sub-test – Clinician Engagement and Management of Interaction – and of a checklist of performance indicators to be used to train assessors in applying the new criteria. 
This process of developing, through close analysis of domain experts’ commentary, test criteria that are potentially more authentic to the target language use situation is novel and may be replicated effectively in other specific-purpose language testing contexts.


Language Assessment Quarterly | 2011

Assessor decision-making while marking a note-taking listening test: The case of the OET

Luke Harding; John Pill; Kerry Ryan

This article investigates assessor decision making when using and applying a marking guide for a note-taking task in a specific purpose English language listening test. In contexts where note-taking items are used, a marking guide is intended to stipulate what kind of response should be accepted as evidence of the ability under test. However, there remains some scope for assessors to apply their own interpretations of the construct in judging responses that fall outside the information provided in a marking guide. From a content analysis of data collected in a stimulated recall group discussion, a taxonomy of the types of decisions made by assessors is derived and the bases on which assessors make such decisions are discussed. The present study is therefore a departure point for further investigations into how assessor decision-making processes while marking open-ended items might be improved.


Language Testing | 2016

How much is enough? Involving occupational experts in setting standards on a specific-purpose language test for health professionals

John Pill; Tim McNamara

This paper considers how to establish the minimum required level of professionally relevant oral communication ability in the medium of English for health practitioners with English as an additional language (EAL) to gain admission to practice in jurisdictions where English is the dominant language. A theoretical concern is the construct of clinical communicative competence and its separability (or not) from other aspects of professional competence, while a methodological question examines the technical difficulty of determining a defensible minimum standard. The paper reports on a standard-setting study to set a minimum standard of professionally relevant oral competence for three health professions – medicine, nursing, and physiotherapy – as measured by the speaking sub-test of the Occupational English Test, a profession-specific test of clinically related communicative competence. While clinical educators determined the standard, it is to be implemented by raters trained as teachers of EAL; therefore, the commensurability of the views of each group is a central issue. This also relates to where the limits of authenticity lie in the context of testing language for specific purposes: to represent the views of domain experts, a sufficient alignment of their views with scores given by the raters of test performances is vital. The paper considers the construct of clinical communicative competence and describes the standard-setting study, which used the analytical judgement method. The method proved successful in capturing sufficiently consistent judgements to define defensible standards. Findings also indicate that raters can act as proxies for occupational experts, although it remains unclear whether the views of performances held by these two groups are directly comparable. The new minimum standards represented by the cut scores were found to be somewhat harsher than those in current use, particularly in medicine.


Language Testing | 2016

Language test as boundary object: Perspectives from test users in the healthcare domain

Susy Macqueen; John Pill; Ute Knoch

Objects that sit between intersecting social worlds, such as Language for Specific Purposes (LSP) tests, are boundary objects – dynamic, historically derived mechanisms which maintain coherence between worlds (Star & Griesemer, 1989). They emerge initially from sociopolitical mandates, such as the need to ensure a safe and efficient workforce or to control immigration, and they develop into standards (i.e. stabilized classifying mechanisms). In this article, we explore the concept of LSP test as boundary object through a qualitative case study of the Occupational English Test (OET), a test which assesses the English proficiency of healthcare professionals who wish to practise in English-speaking healthcare contexts. Stakeholders with different types of vested interest in the test were interviewed (practising doctors and nurses who have taken the test, management staff, professional board representatives) to capture multiple perspectives of both the test-taking experience and the relevance of the test to the workplace. The themes arising from the accumulated stakeholder perceptions depict a ‘boundary object’ that encompasses a work-readiness level of language proficiency on the one hand and aspects of communication skills for patient-centred care on the other. We argue that the boundary object metaphor is useful in that it represents a negotiation over the adequacy and effects of a test standard for all vested social worlds. Moreover, the test should benefit the worlds it interconnects, not just in terms of the impact on the learning opportunities it offers candidates, but also the impact such learning carries into key social sites, such as healthcare workplaces.


Language Testing | 2016

Extending the Scope of Speaking Assessment Criteria in a Specific-Purpose Language Test: Operationalizing a Health Professional Perspective.

Sally O’Hagan; John Pill; Ying Zhang

Criticism of specific-purpose language (LSP) tests is often directed at their limited ability to represent fully the demands of the target language use situation. Such criticisms extend to the criteria used to assess test performance, which may fail to capture what matters to participants in the domain of interest. This paper reports on the outcomes of an attempt to expand the construct of a specific-purpose test through the inclusion of two new professionally relevant criteria designed to reflect the values of domain experts. The test in question was the speaking component of the Occupational English Test (OET), designed to assess the language proficiency of overseas-trained health professionals applying to practise their profession in Australia. The criteria were developed from analysis of health professionals’ feedback to trainees, a source that reflected what the professionals value, that is, their indigenous assessment criteria. The criteria considered amenable to inclusion in the OET were as follows: (1) Clinician Engagement with the patient and (2) Management of Interaction in the consultation. Seven OET assessors were trained to apply these professionally relevant criteria at a workshop that introduced a checklist derived from the original data analysis as a tool to aid understanding of the new criteria. Following the workshop, assessors rated a total of 300 pre-recorded OET speaking test performances using both new and existing criteria. Statistical analyses of the ratings indicate the extent to which a) the judgements of the language-trained assessors using the new criteria were consistent and b) the new and existing criteria aligned in terms of the construct(s) they represent. Furthermore, feedback from the assessors in the process shows how comfortable and confident they are to represent a health professional perspective.


Journal of Advanced Nursing | 2014

What counts as effective communication in nursing? Evidence from nurse educators' and clinicians' feedback on nurse interactions with simulated patients.

Sally Roisin O'Hagan; Elizabeth Manias; Catherine Elder; John Pill; Robyn Woodward-Kron; Tim McNamara; Gillian Webb; Geoff McColl


TESOL Quarterly | 2012

Health Professionals' Views of Communication: Implications for Assessing Performance on a Health-Specific English Language Test.

Catherine Elder; John Pill; Robyn Woodward-Kron; Tim McNamara; Elizabeth Manias; Gillian Webb; Geoff McColl

Collaboration


Dive into John Pill's collaboration.

Top Co-Authors

Tim McNamara, University of Melbourne

Gillian Webb, University of Melbourne

Geoff McColl, University of Melbourne

Ute Knoch, University of Melbourne