David R. Goldhill
Royal National Orthopaedic Hospital
Publications
Featured research published by David R. Goldhill.
Anaesthesia | 2005
David R. Goldhill; A. F. McNarry; G. Mandersloot; A. McGinley
We analysed the physiological values and early warning score obtained from 1047 ward patients assessed by an intensive care outreach service. Patients were either referred directly from the wards (n = 245, 23.4%) or were routine critical care follow‐ups. Decisions were made to admit 135 patients (12.9%) to a critical care area and limit treatment in another 78 (7.4%). An increasing number of physiological abnormalities was associated with higher hospital mortality (p < 0.0001) ranging from 4.0% with no abnormalities to 51.9% with five or more. An increasing early warning score was associated with more intervention (p < 0.0001) and higher hospital mortality (p < 0.0001). For patients with scores above one (n = 660), decisions to admit to a critical care area or limit treatment were taken in 200 (30.3%). Scores of all physiological variables except temperature contributed to the need for intervention and all variables except temperature and heart rate were associated with hospital mortality.
British Journal of Surgery | 2005
V G Hadjianastassiou; Paris P. Tekkis; David R. Goldhill; Linda Hands
The study was designed to evaluate the Acute Physiology And Chronic Health Evaluation (APACHE) II risk scoring system in abdominal aortic aneurysm (AAA) surgery. The aim was to create an APACHE‐based risk stratification model for postoperative death.
Anaesthesia | 2008
David R. Goldhill; A. Badacsonyi; A. A. Goldhill; C. S. Waldmann
Positioning and turning critically ill patients may be beneficial but there are few data on current practice. We prospectively recorded patient position every hour over two separate days in 40 British intensive care units and analysed 393 sets of observations. Five patients were prone at any time, and 3.8% (day 1) and 5.0% (day 2) were on rotating beds. Patients were on their back for 46.1% of observations, turned left for 28.4%, turned right for 25.5%, and head up for 97.4%. A turn was defined as a change between on back, turned left or turned right. The mean (SD) time between turns was 4.85 (3.3) h. There was no significant association between the average time between turns and age, weight, height, gender, respiratory diagnosis, intubation and ventilation, sedation score, day of week or nurse:patient ratio. There was a significant difference between hospitals in the frequency with which patients were turned.
World Journal of Surgery | 2004
V G Hadjianastassiou; Paris P. Tekkis; Jan Poloniecki; Manolis Gavalas; David R. Goldhill
Existing methods of risk adjustment in surgical audit are complex and costly. The present study aimed to develop a simple risk stratification score for mortality and a robust audit tool using the existing resources of the hospital Patient Administration System (PAS) database. This was an observational study of all patients undergoing surgical procedures over a two-year period at a London university hospital. Logistic regression analysis was used to determine predictive factors of in-hospital mortality, the study outcome. Odds ratios were used as weights in the derivation of a simple risk-stratification model, the Surgical Mortality Score (SMS). Observed-to-expected mortality ratios were calculated for application of the SMS model in surgical audit. There were 11,089 eligible cases across five surgical specialties (maxillofacial, orthopedic, renal transplant/dialysis, general, and neurosurgery). Incomplete data accounted for 3.7% of the total, with no evidence of systematic under-reporting. The SMS model was well calibrated (Hosmer-Lemeshow C-statistic: development set 3.432, p = 0.33; validation set 6.359, p = 0.10) with high discriminant ability (area under the ROC curve: development set 0.837, SE = 0.013; validation set 0.816, SE = 0.016). Subgroup analyses confirmed that the model can be used by the individual specialties for both elective and emergency cases. The SMS is an accurate risk-stratification model derived from existing database resources. It is simple to apply as a risk-management screening tool to detect aberrations from expected surgical outcomes and to assist in surgical audit.
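The scoring approach described in the abstract above can be sketched briefly. This is a minimal illustration under stated assumptions, not the published SMS: the risk factors, odds-ratio weights and score-to-risk mapping below are all hypothetical.

```python
# Minimal sketch of an odds-ratio-weighted risk score and an
# observed-to-expected (O/E) mortality ratio, in the spirit of the
# Surgical Mortality Score. All factors, weights and the risk mapping
# are HYPOTHETICAL illustrations, not the published SMS model.

# Hypothetical odds ratios from a logistic regression, used as weights.
WEIGHTS = {
    "emergency_admission": 3.0,
    "age_over_70": 2.5,
    "neurosurgery": 1.8,
}

def risk_score(patient):
    """Sum the odds-ratio weights for each risk factor the patient has."""
    return sum(w for factor, w in WEIGHTS.items() if patient.get(factor))

def expected_risk(score):
    """Hypothetical mapping from a score to a predicted mortality probability."""
    return min(0.9, 0.02 + 0.05 * score)

def observed_to_expected(patients, observed_deaths):
    """O/E ratio: observed deaths divided by the sum of each patient's
    model-predicted probability of death."""
    expected = sum(expected_risk(risk_score(p)) for p in patients)
    return observed_deaths / expected

cohort = [
    {"emergency_admission": True, "age_over_70": True},  # score 5.5
    {"neurosurgery": True},                              # score 1.8
    {},                                                  # score 0.0
]
# An O/E ratio above 1 flags more deaths than the model predicts.
print(observed_to_expected(cohort, observed_deaths=1))
```

In audit use, an O/E ratio persistently above 1 for a unit or specialty would prompt closer review rather than a definitive judgement.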
Anaesthesia | 2009
David R. Goldhill; T. Cook; C. S. Waldmann
Anaesthesia | 2006
David R. Goldhill
The ballistic missile early warning system (BMEWS) became operational in 1959. BMEWS provided long-range warning of a missile attack over the polar region of the northern hemisphere. There were three installations: the United States Air Force facilities of Thule Air Base, Greenland; Clear Air Force Station, Alaska; and the Royal Air Force facility of Fylingdales in the United Kingdom. Medical early warning systems came a little later. Formal criteria for alerting a medical emergency team were introduced in 1989 in Liverpool, New South Wales, Australia. As well as cardiorespiratory arrest, these criteria included grossly abnormal physiological values or concern regarding the patient [1]. The first published description of a physiologically based early warning score (EWS) for identifying critically ill ward patients appeared in 1997 [2]. Several physiological parameters were measured. Values within a 'normal' range were awarded zero points, with increasingly abnormal values assigned up to a maximum of three points. The points for each physiological parameter were summed to obtain a total score. The EWS is used to follow the patient's physiological well-being and to generate a response to predetermined abnormal values. 'Track and trigger' is an apt description of this system. Early warning scores have been constructed using subjective criteria, and many different variations are in use throughout the United Kingdom [3]. To date there is little evidence to help us decide which physiological parameters and values are best. This issue contains a report of the use of an EWS during an outbreak of Legionnaires' disease in the summer of 2002 [4]. The authors sought Ethics Committee approval for reviewing the EWS observations from nursing records and observation charts. The Committee's Chairman required that patients, or next of kin, be contacted to ascertain whether they objected to their data being used (negative consent).
Permission was refused by 44 of 498 patients, and their data were excluded from further analysis. If there was no reply, it was assumed that there was no objection to the use of information extracted from the hospital records. Research based upon examining old records or databases, where data are grouped and anonymised, carries little, if any, risk of harm to patients. Asking each patient for consent is laborious, expensive and time-consuming, and will make many similar studies very difficult to conduct. Ethics Committee review is an essential safeguard for all research. My experience suggests that many Committees would not require individual consent for this research. However, it is worth noting that 8.8% of those asked actively refused consent. For most research the assumption is made that patients do not wish to participate unless they provide informed consent. Many would argue that assuming that no reply implies consent is an illogical and unacceptable practice. Lack of consent cannot be taken to mean that a patient was willing to participate. Ethical concerns aside, there were some features of note in this study. Recordings of one or more of respiratory rate, pulse rate, systolic blood pressure and temperature at one time were called an observation set. The associated EWS, which also included patient responsiveness and urine output, was noted. Pulse rate, systolic blood pressure and temperature were included in over 90% of the sets, but respiratory rate was less commonly recorded. Responsiveness and urine output were infrequently and poorly recorded. An EWS was assigned to nearly 70% of the sets of observations. Errors in calculation were common and overall only 54.4% of observation sets were associated with a correct EWS. In general, the higher the score, the more likely it was to be mis-scored, and also to be underscored. Patients proven to have Legionnaires' disease were more likely to have incorrectly calculated scores.
This study provides an insight into the way EWSs are used in clinical practice. The authors do not make it clear how many observations were from the nursing records and how many from observation charts. Experience suggests that sets of observations are more likely to be recorded on charts, whereas individual results may be found in nursing and medical records. Many charts now incorporate an EWS with instructions on frequency and intensity of monitoring as well as action to be taken based on the EWS. In the discussion, the authors give the opinion that errors were unlikely to have arisen from the rapid introduction of a new system while ward staff were working under pressure. However, EWSs had only just been implemented throughout the hospital at the time of this study. Education and support are essential to ensure that physiological values are completely and accurately recorded and the associated scores calculated. The percentage of miscalculated EWSs fell to close to zero over the study period, suggesting that familiarity may increase the accuracy of the scoring. A glance at the calendar for the period also suggests that fluctuations in the percentage of miscalculations over the two-week study period may have been related to increased errors at weekends. Patients in whom Legionnaires' disease was not confirmed were, in general, admitted later in the study period and were less likely to have incorrectly calculated EWSs. I would question the authors' view that it was unlikely that scores were more accurate in this group because staff had become used to the system. Their alternative explanation, that experienced staff may 'manipulate' the scoring system to support their clinical impression, may merit further exploration. Anaesthesia, 2006, 61: 209–214.
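The track-and-trigger mechanism described in the editorial above (zero points within a 'normal' range, up to three points for increasingly abnormal values, summed to a total score that triggers a response) can be sketched as a minimal example. The scoring bands and trigger threshold below are illustrative assumptions, not those of any published or validated EWS.

```python
# Minimal sketch of a "track and trigger" early warning score (EWS).
# The scoring bands and the trigger threshold are ILLUSTRATIVE
# assumptions, not those of any published or validated EWS.

def band_points(value, bands):
    """Return 0-3 points: 0 within the 'normal' band, more points as
    values become increasingly abnormal."""
    for points, (low, high) in bands:
        if low <= value <= high:
            return points
    return 3  # values outside all listed bands score the maximum

# Hypothetical bands: (points, (low, high)) for each parameter.
RESP_RATE_BANDS = [(0, (9, 14)), (1, (15, 20)), (2, (21, 29)), (1, (5, 8))]
HEART_RATE_BANDS = [(0, (51, 100)), (1, (101, 110)), (2, (111, 129)), (1, (40, 50))]
SYS_BP_BANDS = [(0, (101, 199)), (1, (81, 100)), (2, (71, 80))]

def early_warning_score(resp_rate, heart_rate, systolic_bp):
    """Sum the points for each physiological parameter to obtain the total EWS."""
    return (band_points(resp_rate, RESP_RATE_BANDS)
            + band_points(heart_rate, HEART_RATE_BANDS)
            + band_points(systolic_bp, SYS_BP_BANDS))

TRIGGER_THRESHOLD = 3  # illustrative: a score of 3 or more triggers a response

score = early_warning_score(resp_rate=24, heart_rate=115, systolic_bp=95)
print(score, score >= TRIGGER_THRESHOLD)  # → 5 True
```

The miscalculation rates reported in the study suggest why this arithmetic, simple as it is, benefits from being automated on charts rather than computed by hand.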
The journal of the Intensive Care Society | 2008
Bruce Taylor; Verity Kemp; David R. Goldhill; Carl Waldmann
As most of our readers will be aware from previous publications and from the special articles contained in this edition, a lot of work has gone into highlighting the implications of an influenza pandemic for critical care services and trying to work out how to make the best use of the resources that may be available. The latest Department of Health document, 'Pandemic influenza: surge capacity and prioritisation in health services – provisional UK guidance' (available on the DH website), has made an encouraging start in providing official recognition of the problems likely to be encountered as a result of limited bed capacity, and also supports the concept that triaging decisions cannot be left to secondary care (and particularly critical care specialists) alone. Regrettably, however, even if its recommendations for patient selection are fully followed and the number of inappropriate referrals to critical care is reduced significantly, there is still a strong probability that during the peak of a pandemic the number of patients who are likely to benefit from critical care will significantly exceed bed capacity, even if this is maximally expanded. In the original work of the Critical Care Contingency Planning Group, a draft document on Phased Responses and Triaging was produced as a starting point for addressing these difficulties. Further work on this was then put on hold pending the production of official ethical guidance and other documentation to address these problems. However, now that these have been finalised and we still face potential dilemmas about how ICUs will be able to cope, feedback from critical care network discussions has persuaded us that it may be useful to circulate a revised version of this document, updated to include more recent recommendations, in the hope that it may assist local planning.
In particular, the document addresses two concepts that were initially felt to be inappropriate or unacceptable, but which now may be considered reasonable/realistic. These are the possibility of using some method of lottery selection if there are several appropriate referrals but insufficient bed numbers, and the fact that at some point there may be a requirement to accept temporary closure of intensive care to further referrals if no beds are available. It is hoped that consensus support for the principles of this document may help to produce reassurance for staff (with the support of local PCTs and Trust Management) that if potentially preventable deaths occur in such circumstances they will not be vulnerable to litigation or professional criticism when no other treatment options were available.
Anaesthesia | 2011
David R. Goldhill; C. S. Waldmann
Current Opinion in Anesthesiology | 2006
David R. Goldhill; Carl Waldmann
Purpose of review: Studies over many years have demonstrated that preoptimization and attention to appropriate perioperative care are associated with a substantial decrease in surgical mortality. This review discusses ways in which patient preparation and perioperative support can minimize surgical mortality and morbidity.
Recent findings: Scoring systems continue to be developed in order to classify categories of surgical risk. Objective, physiologically based assessments can also identify high-risk groups of patients. Debate continues over the indications for specific interventions such as β-blockade or statin therapy. There is continuing interest in perioperative optimization of oxygen delivery. A multimodality approach, paying attention to a range of possible interventions, appears to be beneficial. Audit, training, experience and a sufficient volume of procedures are all factors associated with surgical mortality.
Summary: The provision of a high-quality service throughout the perioperative period is vital for a successful outcome. Patients need to be assessed well before major elective surgery to determine if they fall into a high-risk category. Some patients may benefit from a change in management. Postoperatively, critical-care support should be available, backed by level 1 (enhanced ward) care, with input from outreach or medical emergency teams 24 hours per day, seven days a week.
The journal of the Intensive Care Society | 2012
David R. Goldhill
Evidence-based medicine in critical care. Volume 13, Number 1, January 2012.
Evidence-based medicine rejects treatments based on intuition and unsystematic clinical experience. Proponents support a hierarchy in which the best evidence is derived from systematic reviews and meta-analyses based on randomised controlled trials (RCTs) (Table 1).1 Not only is the evidence supporting this hierarchy elusive, but it has been said that this approach 'glorifies the results of imperfect experimental designs on unrepresentative populations in controlled research environments above all other sources of evidence.' RCTs are not new. Between 1740 and 1744, Commodore George Anson led a squadron of eight ships around the world.2 Of the 1,854 men who left, only 188 survived. Many succumbed to scurvy. In 1747, James Lind, a navy doctor, conducted a randomised study at sea in sailors with scurvy, who were divided into six treatment groups.3 Those given two oranges and a lemon each day recovered quickly. This story illustrates the power of an RCT to identify both beneficial and ineffective treatments. Despite these impressive results in a previously untreatable condition with major logistical, and hence financial and strategic, implications, it was not until 1795 that the Admiralty accepted recommendations that lemon juice should be issued routinely to the fleet. The delay between a finding and its implementation persists today: it can still take time both to recognise effective treatments and for them to be adopted into clinical practice. Equally, it is important to reflect on which present-day interventions will be shown to be nonsense in years to come.
In 1988, Shoemaker showed that high-risk surgical patients driven to achieve supra-normal physiological values had much better outcomes than control patients.4 These were extraordinary findings with dramatic differences in outcome, yet many years later we are still debating the role of goal-directed therapies and targets for oxygen delivery.5-8 The intervention, once embraced across many aspects of practice, now seems less certain as a global panacea, although attention has shifted to its application in targeted populations, and the debate continues. In 1991, Ziegler told us in the NEJM that the monoclonal antibody HA-1A was 'safe and effective for treating patients with sepsis and gram-negative bacteremia.'9 Only a few years later this conclusion was comprehensively refuted in a repeat randomised clinical trial.10 Since then, many other well-conducted critical care RCTs with positive results have been conducted. The conclusions of some of these studies have later been rejected. Examples include tight glycaemic control11,12 and the use of activated protein C for sepsis.13 There have also been many studies where initial results have later undergone major re-evaluation, or where continuing uncertainty remains about the role and effectiveness of the intervention. Examples include the use of steroids,14 selective digestive tract decontamination,15 prone positioning,16 the constituents of sepsis bundles,17-19 perioperative β-blockade,20 ECMO,21 nitric oxide for ARDS,22 medical emergency teams23 and albumin for resuscitation.24 Why should this be so? It is exceedingly hard to conduct an immaculate clinical trial.25,26 Furthermore, if standard assumptions of statistical significance are made, we would expect 1 in 20 studies of an ineffective treatment to wrongly conclude that a finding is positive. A single study, no matter how good, is inadequate.
Two or more studies seem necessary to confirm that a treatment is effective.27,28 Confounding the issue further is the knowledge that, undoubtedly, some research is fraudulent.29 Completely made-up studies have been identified, although they are probably rare. Recently publicised concerns over the veracity of trials include the investigations into Scott Reuben's work with COX-2 inhibitors and Joachim Boldt's with colloids. It seems likely that a more common phenomenon is for investigators to ignore protocol violations or to massage findings and results, but the existence and extent of such activity are unknown. Given all of these considerations, it is clearly rare for an RCT to fulfil all the requirements necessary for an unimpeachable study. Added to this is the obvious observation that some trials are clearly underpowered to give a definitive answer. The CONSORT statement for reporting RCTs addresses this in part. It has multiple categories where important details about methods should be given.30 These include the selection of subjects, clearly defined outcomes, details of how sample size was determined, the way randomisation and blinding were performed, and the statistical methods used for comparing groups. This should and does help quality control, but even when criteria are fulfilled there may be chance elements degrading the reliability of the study. For example, it is not uncommon for there to be potentially clinically important differences between study groups despite randomisation. Recruitment into some RCTs is stopped early, sometimes because of insufficient funding or poor recruitment. Table 2 illustrates this effect using details of recruitment for the TRICC study.31 The original plan was to recruit 2,300 patients.
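The '1 in 20' point, and the statistical case for requiring replication, can be made concrete with a little arithmetic (illustrative numbers only):

```python
# If each study tests a truly ineffective treatment at alpha = 0.05,
# a single "positive" study is wrong 1 time in 20 -- and across many
# such studies a spurious positive becomes very likely. Two independent
# positive studies are far stronger evidence, which is one statistical
# reason replication matters. Illustrative numbers only.

ALPHA = 0.05

def p_any_false_positive(n_studies, alpha=ALPHA):
    """Probability that at least one of n independent true-null studies
    is (falsely) significant at level alpha."""
    return 1 - (1 - alpha) ** n_studies

# Chance that at least one of 20 true-null studies is falsely positive:
print(round(p_any_false_positive(20), 2))  # → 0.64

# Chance that TWO independent true-null studies are BOTH falsely positive:
print(round(ALPHA ** 2, 4))                # → 0.0025 (1 in 400)
```

So even under ideal conditions a lone significant result carries a meaningful chance of being noise, whereas independent replication shrinks that chance dramatically.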