Publication


Featured research published by Alexandra John.


The Cleft Palate-Craniofacial Journal | 2006

The cleft audit protocol for speech-augmented: A validated and reliable measure for auditing cleft speech

Alexandra John; Debbie Sell; Triona Sweeney; Anne Harding-Bell; Alison Williams

Objectives To develop an assessment tool for use in intercenter audit studies of cleft speech and to test its acceptability, validity, and reliability. The tool is to be used systematically to record and report speech outcomes, providing an indication of treatment needs and continuing burden of care. Setting Regional Cleft Center, U.K. Methods The Cleft Audit Protocol for Speech-Augmented (CAPS-A) was developed by three cleft speech experts who identified the key features required from existing assessment measures. Criterion validity was assessed by comparing the Cleft Audit Protocol for Speech-Augmented outcomes reported for 20 cases with clinical assessment results and other investigations. Intra- and interrater reliability were tested following the training of specialist speech and language therapists who used the Cleft Audit Protocol for Speech-Augmented on two occasions to assess 10 cases. The raters evaluated acceptability and ease of use by questionnaire. Results The mean percentage agreement for criterion validity in each section was 87% (range 70% to 100%). Both intra- and interexaminer reliability were rated as good/very good (Kappa 0.61 to 1.00) for seven sections and moderate (Kappa 0.41 to 0.60) for three sections. Raters reported that the Cleft Audit Protocol for Speech-Augmented was acceptable and easy to use with appropriate training. Conclusion An acceptable, valid, and reliable cleft speech audit tool has been developed based on a small sample. The Cleft Audit Protocol for Speech-Augmented is recommended for use in intercenter audit studies in the U.K. and Ireland and could be used in other English-speaking countries. In addition, it has wider applicability for use in reporting speech outcomes of surgical procedures.
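
The agreement figures quoted above are Cohen's kappa values and percentage agreement. As a hedged illustration only (not the study's own analysis, and with entirely hypothetical ratings), the sketch below shows how two raters' scores on one CAPS-A-style section could be compared and mapped onto the quoted interpretation bands.

```python
# Minimal sketch with hypothetical data; the CAPS-A study's own analysis
# is not reproduced here.
from sklearn.metrics import cohen_kappa_score

# Two raters scoring the same 10 recordings on one section, using a small
# set of ordinal category codes (values are invented for illustration).
rater_a = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]
rater_b = [0, 1, 2, 2, 0, 1, 2, 1, 0, 1]

kappa = cohen_kappa_score(rater_a, rater_b)

# Interpretation bands matching the ranges quoted in the abstract:
# 0.41-0.60 moderate, 0.61-1.00 good/very good.
if kappa >= 0.61:
    band = "good/very good"
elif kappa >= 0.41:
    band = "moderate"
else:
    band = "below moderate"

print(f"kappa = {kappa:.2f} ({band})")
```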


International Journal of Language & Communication Disorders | 1999

Therapy outcome measures in speech and language therapy: comparing performance between different providers

Pam Enderby; Alexandra John

BACKGROUND The use of outcome measures to monitor improved quality of care has been advocated for 20 years but has only achieved prominence with the increasing resource pressures and related changes in health service provision in the past 6 years. OBJECTIVE This paper describes the development of an approach to outcome measurement suitable for all patients receiving speech and language therapy. The measure, which is based on rating the dimensions of impairment, disability, handicap and well-being, is tested to assess whether it can usefully be used to compare the services of different providers. METHOD Five trusts volunteered for the study. Service descriptions suggest that these services are typical for the purposes of providing speech and language therapy. Twenty-five therapists were trained to use the Therapy Outcome Measure (TOM); their reliability was assessed and they provided prospective data on clients with speech and language impairments related to dysphasia, stammering and dysphonia. RESULTS The study provides evidence of differences in the types of patients referred to different providers of speech and language therapy. Different services had different impacts on the number and type of domains affected, and services discharged patients at different points in their recovery. DISCUSSION Different outcomes by different providers may be associated with different referral policies, base populations, and the skills and work policies of therapists. Differences in outcomes associated with particular therapy services can initiate the task of analysing attributions and advance endeavours to provide equitable quality of care, which is the philosophy underpinning the move towards benchmarking in health service delivery.


International Journal of Language & Communication Disorders | 2000

Reliability of speech and language therapists using therapy outcome measures

Alexandra John; Pam Enderby

The Therapy Outcome Measure (TOM) aims to provide speech and language therapists (SLTs) with a practical tool to measure outcomes of care: a quick and simple measure which can be used over time with patients and clients in a routine clinical setting. The TOM allows therapists to reflect their clinical judgement on the dimensions of impairment, disability/activity, handicap/participation and well-being on an 11-point ordinal scale. The purpose of this paper is to examine the reliability, and the influences on reliability, of SLTs using this measure. Three studies are presented, giving information on 73 SLTs using the measure with different client groups. Study one assesses the degree of reliability of six SLTs using the TOM following training and practice; reliability was studied on three occasions and the results demonstrate the influence of training and practice. Study two included 56 SLTs to examine reliability over a broader range of client groups and to investigate the effect of SLTs rating patients from within or outside their own specialism. Study three included eleven therapists and examined the influence of a specific training approach. The participating SLTs achieved a substantial to moderate level of reliability in all domains. The degree of reliability achieved on the TOM was affected by some, but not extensive, training and experience.
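
Because the TOM is an 11-point ordinal scale, one natural way to quantify the agreement levels described above is a weighted kappa, in which near-misses count as partial agreement. The sketch below is a minimal illustration with hypothetical ratings; it is not the analysis reported in the paper.

```python
# Minimal sketch, assuming hypothetical TOM ratings in half-point steps.
from sklearn.metrics import cohen_kappa_score

slt_1 = [3.0, 2.5, 4.0, 1.5, 3.5, 2.0, 4.5, 3.0]  # rater 1 (hypothetical)
slt_2 = [3.0, 3.0, 4.0, 1.5, 3.0, 2.0, 4.5, 2.5]  # rater 2 (hypothetical)

to_cat = lambda xs: [int(round(x * 2)) for x in xs]  # map 0.0-5.0 onto 0-10

kappa_w = cohen_kappa_score(
    to_cat(slt_1), to_cat(slt_2),
    labels=list(range(11)),   # the full 11-point scale
    weights="quadratic",      # larger disagreements are penalised more
)
print(f"weighted kappa = {kappa_w:.2f}")
```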


Age and Ageing | 2011

Rehabilitation of older patients: day hospital compared with rehabilitation at home. Clinical outcomes

Stuart G Parker; Phillip Oliver; Mark Pennington; John Bond; Carol Jagger; Pam Enderby; Richard Curless; Alessandra Vanoli; Kate Fryer; Steven A. Julious; Alexandra John; Timothy Chater; Cindy Cooper; Christopher Dyer

OBJECTIVES to test the hypothesis that older people and their informal carers are not disadvantaged by home-based rehabilitation (HBR) relative to day hospital rehabilitation (DHR). DESIGN pragmatic randomised controlled trial. SETTING four geriatric day hospitals and four home rehabilitation teams in England. PARTICIPANTS eighty-nine patients referred for multidisciplinary rehabilitation. The target sample size was 460. INTERVENTION multidisciplinary rehabilitation either in the home or in the day hospital. MEASUREMENTS the primary outcome measure was the Nottingham extended activities of daily living scale (NEADL). Secondary outcome measures included EQ-5D, the hospital anxiety and depression scale, therapy outcome measures, hospital admissions and the General Health Questionnaire for carers. RESULTS at the primary end point of 6 months, NEADL scores did not differ significantly between HBR and DHR: mean difference -2.139 (95% confidence interval -6.87 to 2.59, P = 0.37). A post hoc analysis suggested non-inferiority of HBR on the NEADL, but there was considerable statistical uncertainty. CONCLUSION taken together, the statistical analyses and the trial's lack of power do not provide sufficient evidence to conclude that patients in receipt of HBR are disadvantaged compared with those receiving DHR.
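
The post hoc non-inferiority reading of this result turns on where the confidence interval lies relative to a pre-specified margin. The sketch below restates that logic using only the summary statistics quoted in the abstract; the margin itself is hypothetical and not taken from the trial protocol.

```python
# Hedged sketch: non-inferiority check from reported summary statistics.
mean_diff = -2.139                  # NEADL mean difference (from the abstract)
ci_lower, ci_upper = -6.87, 2.59    # 95% confidence interval (from the abstract)

non_inferiority_margin = -4.0       # hypothetical margin, for illustration only

# Non-inferiority is supported only if the entire CI lies above the margin.
if ci_lower > non_inferiority_margin:
    print("Lower CI bound above the margin: non-inferiority supported.")
else:
    print("Lower CI bound crosses the margin: non-inferiority not demonstrated.")
```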


Advances in Speech-Language Pathology | 2002

Establishing clinician reliability using the therapy outcome measure for the purpose of benchmarking services

Alexandra John; Anthony Hughes; Pam Enderby

Benchmarking is one approach to quality improvement, comparing practice against the best, and it requires appropriate process and outcome indicators. A benchmarking study to identify best practice was designed to investigate the use of the Therapy Outcome Measure (TOM) as an outcome indicator. To ensure comparison of like with like, one study objective was to establish the reliability of the speech and language therapists (SLTs) using the TOM; this article describes that aspect. One hundred and twenty-five SLTs from eight services, spanning seven National Health Service Trusts, participated in both the TOM training and interrater reliability assessments, where they independently rated 10 cases using composite case histories and videotape recordings. The acceptable level of reliability for participating in this benchmarking study was set as substantial (≥ 0.61) using an intraclass correlation and was met on 52/53 TOM dimensions. On well-being, one adult team's reliability was below the required level, but this was resolved by an additional training session and reliability check. Following this, of the 53 domain results, 36 (68%) were almost perfect and 17 (32%) were satisfactory. The reliability study found that SLTs could be trained to be consistent in their use of the TOM for benchmarking purposes. Training was important for consistency and for eradicating bias; the method of assessing reliability and the adequacy of the available information also affected reliability.
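
The 0.61 threshold above refers to an intraclass correlation computed across raters. As a minimal sketch under hypothetical ratings (not the study's dataset or code), a two-way random-effects, absolute-agreement ICC for a cases-by-raters matrix can be computed and checked against that threshold as follows.

```python
# Minimal sketch: Shrout & Fleiss ICC(2,1) for an n-cases x k-raters matrix,
# using invented ratings purely for illustration.
import numpy as np

ratings = np.array([
    [3.0, 3.0, 2.5],
    [4.0, 4.0, 4.0],
    [1.5, 2.0, 1.5],
    [3.5, 3.0, 3.5],
    [2.0, 2.0, 2.5],
    [4.5, 4.5, 4.0],
])
n, k = ratings.shape
grand = ratings.mean()

ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # cases
ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
ss_err = ((ratings - ratings.mean(axis=1, keepdims=True)
           - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
ms_err = ss_err / ((n - 1) * (k - 1))

icc_2_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) = {icc_2_1:.2f}, meets 0.61 threshold: {icc_2_1 >= 0.61}")
```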


International Journal of Speech-Language Pathology | 2011

Therapy outcome measures: where are we now?

Alexandra John

Outcomes information contributes to the provision of quality services: sharing that information requires speech-language pathologists (SLPs) to use terminology readily understood by professions ranging from health and education to social and voluntary services. The Therapy Outcome Measure (TOM) provides a way of presenting outcome data in a digestible form, forming part of a range of measures used to collect information on the structures, processes, and outcomes of care. The TOM was developed to provide a practical method of measuring outcomes in routine clinical practice, and it has also been used in a number of research studies as an outcome indicator. As an example of its utility in research, the article cites a benchmarking study, together with examples of internal and external benchmarking of outcomes, to illustrate how benchmarking TOM data can inform practice. The TOM can therefore inform SLPs about their own outcomes and the outcomes for specific client groups and, by benchmarking TOM data, can contribute to the delivery of better, more efficient services.


Clinical Governance: An International Journal | 2003

Using benchmarking data for assessing performance in occupational therapy

Pam Enderby; Alexandra John; Anthony Hughes; Brian Petheram

Comparing outcome data derived from patients receiving treatment at different sites can identify differences in practice worthy of further examination. This paper illustrates an approach to benchmarking with data collected on 1,711 patients who received occupational therapy in nine healthcare trusts. Detailed results for 288 patients indicate that there were differences between the services in the patients referred for occupational therapy, that patients were discharged at different points in their recovery, and that different amounts of gain were achieved during the treatment period. In order to interpret the reasons for the variation, meaning needs to be added to the data. While casemix is an important consideration and may account for many of these differences, it would also appear that the different processes of care in different trusts may warrant further study.
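
The benchmarking comparison described here amounts to tabulating entry scores, discharge scores and gain per service. The sketch below is a hypothetical illustration of that tabulation; the column names and values are invented, not drawn from the study's dataset.

```python
# Hedged sketch: per-trust benchmarking of hypothetical TOM-style scores.
import pandas as pd

records = pd.DataFrame({
    "trust":                ["A", "A", "B", "B", "C", "C"],
    "impairment_entry":     [2.0, 2.5, 3.0, 2.0, 1.5, 2.5],
    "impairment_discharge": [3.0, 3.5, 3.5, 3.0, 2.0, 3.5],
})
records["gain"] = records["impairment_discharge"] - records["impairment_entry"]

# Benchmark table: mean entry level, discharge level and gain for each trust.
benchmark = records.groupby("trust").agg(
    entry=("impairment_entry", "mean"),
    discharge=("impairment_discharge", "mean"),
    gain=("gain", "mean"),
)
print(benchmark)
```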


International Journal of Language & Communication Disorders | 2001

Benchmarking Can Facilitate the Sharing of Information on Outcomes of Care

Alexandra John; Pam Enderby; Anthony Hughes; Brian Petheram

Recent restructuring in the National Health Service (NHS) aimed to effect cultural and organisational changes that would ensure fair and equal access for service users to effective and efficient services. Clinical governance has been introduced as a means of delivering quality improvement. One element of this is the use of benchmarking to assess current process and outcome and to use comparative information to inform understanding of current and best practice. The use of the Therapy Outcome Measure (TOM) (Enderby and John 1997) was investigated as an indicator to benchmark the outcomes of treatment for different client groups and to compare patterns of outcomes from different speech and language therapy (SLT) services. The study recruited eight SLT trust sites and ran for eighteen months. The TOM data were analysed to note similarities and differences in cases on entering treatment, in the direction of change resulting from treatment, and on completing treatment. Variation was found on these points between cases with different disorders and across the trusts. TOM data could be used to provide a benchmark for a disorder against which services could make comparisons. However, for benchmarking to succeed, there is a need for support and commitment from every level of an organisation.


Aphasiology | 2005

Benchmarking outcomes in dysphasia using the Therapy Outcome Measure

Alexandra John; Pam Enderby; Anthony Hughes

Background: Quality improvement in health care seeks to drive equitable, effective, and appropriate services. Benchmarking can be used as a tool for acquiring clinical information to inform and monitor change and to identify commonalities and significant differences. Aims: This study reports on a benchmarking study of eight speech and language therapy services providing interventions for persons with dysphasia, addressing three questions relating to equity of access to treatment, changes associated with treatment, and profiles on discharge. Methods & Procedures: The Therapy Outcome Measure (TOM) (Enderby & John, 1997) was used as the indicator. The SLTs were trained and inter-rater reliability was checked. Data were collected on consecutively referred cases on entry to treatment and on discharge. Outcomes & Results: No difference was found in the profiles of patients referred to the different services on entry to treatment. While a significant difference was found between the services in the number of cases changing in impairment and disability/activity, the final outcomes on discharge were similar for impairment, disability/activity, and participation, but showed a statistically significant difference for well-being. The number of contacts and the duration of treatment varied across the services, even for cases with mild impairment. Conclusions: The results indicated equity of access, a difference in the effects of treatment on the different dimensions, and a significant difference on the dimension of well-being at discharge. Each service was able to compare its outcomes with those of the other services, observe the variations, and exchange information with those services in order to identify reasons for the results.


Advances in Speech-Language Pathology | 2004

Assessing the construct validity of the therapy outcome measure for pre-school children with delayed speech and language

Sue Roulstone; Alexandra John; Anthony Hughes; Pam Enderby

This paper explores the construct validity of a measure of clinical outcomes, the Therapy Outcome Measure (TOM). The work took place within a randomized controlled trial of pre-school children with speech and language difficulties in community clinics. Assessments of 159 children pre-randomization covered aspects of the children's expressive and receptive language, phonology, attention, play and socialization. The analysis investigated the relationship between the TOM and the various assessments. The sample, which included a range of primary speech and language difficulties, was stratified according to the children's baseline scores on receptive and expressive language and their phonology. This made it possible to assess the TOM against the pattern of the children's difficulties rather than against a single criterion. The pattern of correlation found between the TOM impairment ratings and the assessments reflected the child's predominant difficulty, suggesting that the TOM ratings do have construct validity for this client group.
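
Construct validity of this kind is typically examined by correlating the TOM impairment ratings with the relevant baseline assessments. The sketch below illustrates one such check with a rank correlation; the scores are hypothetical and the paper's own statistical approach is not reproduced here.

```python
# Minimal sketch with invented scores: rank correlation between TOM impairment
# ratings (higher = less impaired) and an expressive language standard score.
from scipy.stats import spearmanr

tom_impairment   = [2.0, 3.5, 1.5, 4.0, 2.5, 3.0, 1.0, 4.5]   # hypothetical
expressive_score = [55, 78, 42, 85, 60, 70, 35, 90]            # hypothetical

rho, p_value = spearmanr(tom_impairment, expressive_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```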

Collaboration


Dive into Alexandra John's collaborations.

Top Co-Authors

Pam Enderby
University of Sheffield

Cindy Cooper
University of Sheffield

Sarah Creer
University of Sheffield