Publication


Featured research published by Susan Mallett.


Annals of Internal Medicine | 2011

QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies.

Penny F Whiting; Anne Wilhelmina Saskia Rutjes; Marie Westwood; Susan Mallett; Jonathan J Deeks; Johannes B. Reitsma; Mariska M.G. Leeflang; Jonathan A C Sterne; Patrick M. Bossuyt

In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.
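
To make the tool's structure concrete, here is a minimal sketch of how one QUADAS-2 assessment could be recorded as data. The four domains and the risk-of-bias/applicability split follow the abstract, and the low/high/unclear judgements come from the published tool; all class and field names, and the example study, are hypothetical and not part of any official QUADAS-2 software.

```python
# Minimal sketch of a QUADAS-2 assessment record (hypothetical identifiers).
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Judgement(Enum):
    LOW = "low"
    HIGH = "high"
    UNCLEAR = "unclear"

@dataclass
class Domain:
    name: str
    risk_of_bias: Judgement
    applicability: Optional[Judgement]  # flow and timing is rated for bias only

@dataclass
class Quadas2Assessment:
    study_id: str
    domains: List[Domain] = field(default_factory=list)

assessment = Quadas2Assessment(
    study_id="example-study",  # hypothetical study identifier
    domains=[
        Domain("patient selection", Judgement.LOW, Judgement.LOW),
        Domain("index test", Judgement.UNCLEAR, Judgement.LOW),
        Domain("reference standard", Judgement.LOW, Judgement.HIGH),
        Domain("flow and timing", Judgement.HIGH, None),
    ],
)

# Count domains judged at high risk of bias.
print(sum(d.risk_of_bias is Judgement.HIGH for d in assessment.domains))  # 1
```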


European Journal of Anaesthesiology | 2017

Management of severe perioperative bleeding: Guidelines from the European Society of Anaesthesiology

Sibylle Kozek-Langenecker; Arash Afshari; Pierre Albaladejo; Cesar Aldecoa Alvarez Santullano; Edoardo De Robertis; Daniela Filipescu; Dietmar Fries; Thorsten Haas; Georgina Imberger; Matthias Jacob; Marcus D. Lancé; Juan V. Llau; Susan Mallett; Jens Meier; Niels Rahe-Meyer; Charles Marc Samama; Andrew F Smith; Cristina Solomon; Philippe Van der Linden; Anne Wikkelsø; Patrick Wouters; Piet Wyffels

The aims of severe perioperative bleeding management are three-fold. First, preoperative identification, by anamnesis and laboratory testing, of those patients for whom the perioperative bleeding risk may be increased. Second, implementation of strategies for correcting preoperative anaemia and stabilisation of the macro- and microcirculations in order to optimise the patient's tolerance to bleeding. Third, targeted procoagulant interventions to reduce the amount of bleeding, morbidity, mortality and costs. The purpose of these guidelines is to provide an overview of current knowledge on the subject, with an assessment of the quality of the evidence, in order to allow anaesthetists throughout Europe to integrate this knowledge into daily patient care wherever possible. The Guidelines Committee of the European Society of Anaesthesiology (ESA) formed a task force with members of scientific subcommittees and individual expert members of the ESA. Electronic databases were searched without language restrictions from the year 2000 until 2012. These searches produced 20 664 abstracts. Relevant systematic reviews with meta-analyses, randomised controlled trials, cohort studies, case-control studies and cross-sectional surveys were selected. At the suggestion of the ESA Guidelines Committee, the Scottish Intercollegiate Guidelines Network (SIGN) grading system was initially used to assess the level of evidence and to grade recommendations. During the process of guideline development, the official position of the ESA changed to favour the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system. This report includes general recommendations as well as specific recommendations for various fields of surgical intervention. The final draft guideline was posted on the ESA website for four weeks and the link was sent to all ESA members. Comments were collated and the guidelines amended as appropriate. When the final draft was complete, the Guidelines Committee and ESA Board ratified the guidelines.


Anesthesia & Analgesia | 2001

The Effects of Balanced Versus Saline-Based Hetastarch and Crystalloid Solutions on Acid-Base and Electrolyte Status and Gastric Mucosal Perfusion in Elderly Surgical Patients

Nicholas J. Wilkes; Rex Woolf; Marjorie Mutch; Susan Mallett; Tim Peachey; Robert Stephens; Michael G. Mythen

The IV administration of sodium chloride solutions may produce a metabolic acidosis and gastrointestinal dysfunction. We designed this trial to determine whether, in elderly surgical patients, crystalloid and colloid solutions with a more physiologically balanced electrolyte formulation, such as Hartmann’s solution and Hextend®, can provide a superior metabolic environment and improved indices of organ perfusion when compared with saline-based fluids. Forty-seven elderly patients undergoing major surgery were randomly allocated to one of two study groups. Patients in the Balanced Fluid group received an intraoperative fluid regimen that consisted of Hartmann’s solution and 6% hetastarch in balanced electrolyte and glucose injection (Hextend). Patients in the Saline group were given 0.9% sodium chloride solution and 6% hetastarch in 0.9% sodium chloride solution (Hespan®). Biochemical indices and acid-base balance were determined. Gastric tonometry was used as a reflection of splanchnic perfusion. Postoperative chloride levels demonstrated a larger increase in the Saline group than the Balanced Fluid group (9.8 vs 3.3 mmol/L, P = 0.0001). Postoperative standard base excess showed a larger decline in the Saline group than the Balanced Fluid group (−5.5 vs −0.9 mmol/L, P = 0.0001). Two-thirds of patients in the Saline group, but none in the Balanced Fluid group, developed postoperative hyperchloremic metabolic acidosis (P = 0.0001). Gastric tonometry indicated a larger increase in the CO2 gap during surgery in the Saline group compared with the Balanced Fluid group (1.7 vs 0.9 kPa, P = 0.0394). In this study, the use of balanced crystalloid and colloid solutions in elderly surgical patients prevented the development of hyperchloremic metabolic acidosis and resulted in improved gastric mucosal perfusion when compared with saline-based solutions.
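
For readers unfamiliar with gastric tonometry, the CO2 gap reported above is the difference between gastric mucosal PCO2 and arterial PCO2; a widening gap suggests reduced mucosal perfusion. A trivial sketch, with invented values not taken from the study:

```python
# CO2 gap = gastric mucosal PCO2 minus arterial PCO2 (both in kPa).
def co2_gap(gastric_pco2_kpa: float, arterial_pco2_kpa: float) -> float:
    return gastric_pco2_kpa - arterial_pco2_kpa

print(co2_gap(gastric_pco2_kpa=6.5, arterial_pco2_kpa=5.2))  # 1.3 kPa
```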


BMC Medicine | 2011

Developing risk prediction models for type 2 diabetes: a systematic review of methodology and reporting.

Gary S. Collins; Susan Mallett; Omar Omar; Ly-Mee Yu

Background: The World Health Organisation estimates that by 2030 there will be approximately 350 million people with type 2 diabetes. Because the disease is associated with renal complications, heart disease, stroke and peripheral vascular disease, early identification of patients with undiagnosed type 2 diabetes, or of those at increased risk of developing it, is an important challenge. We sought to systematically review and critically assess the conduct and reporting of methods used to develop risk prediction models for predicting the risk of having undiagnosed (prevalent) or future risk of developing (incident) type 2 diabetes in adults.

Methods: We conducted a systematic search of the PubMed and EMBASE databases to identify studies published before May 2011 that describe the development of models combining two or more variables to predict the risk of prevalent or incident type 2 diabetes. We extracted key information describing aspects of developing a prediction model, including study design, sample size and number of events, outcome definition, risk predictor selection and coding, missing data, model-building strategies and aspects of performance.

Results: Thirty-nine studies comprising 43 risk prediction models were included. Seventeen studies (44%) reported the development of models to predict incident type 2 diabetes, whilst 15 studies (38%) described the derivation of models to predict prevalent type 2 diabetes. In nine studies (23%), the number of events per variable was less than ten, whilst in fourteen studies there was insufficient information reported for this measure to be calculated. The number of candidate risk predictors ranged from four to sixty-four, and in seven studies it was unclear how many risk predictors were considered. A method not recommended for selecting risk predictors for inclusion in the multivariate model, namely using statistical significance from univariate screening, was carried out in eight studies (21%), whilst the selection procedure was unclear in ten studies (26%). Twenty-one risk prediction models (49%) were developed by categorising all continuous risk predictors. The treatment and handling of missing data were not reported in 16 studies (41%).

Conclusions: We found widespread use of poor methods that could jeopardise model development, including univariate pre-screening of variables, categorisation of continuous risk predictors and poor handling of missing data. The use of poor methods affects the reliability of the prediction model and ultimately compromises the accuracy of the probability estimates of having undiagnosed type 2 diabetes or the predicted risk of developing it. In addition, many studies were characterised by a generally poor level of reporting, with many of the key details needed to objectively judge the usefulness of the models omitted.
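
The events-per-variable (EPV) measure used in this review is the number of outcome events divided by the number of candidate predictors; values below ten flagged a risk of overfitting. A minimal sketch with invented numbers:

```python
# Events per variable: outcome events divided by candidate predictors.
def events_per_variable(n_events: int, n_candidate_predictors: int) -> float:
    return n_events / n_candidate_predictors

epv = events_per_variable(n_events=120, n_candidate_predictors=20)
print(f"EPV = {epv:.1f}")  # EPV = 6.0, below the conventional threshold of 10
```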


Immunology Today | 1991

A new superfamily of cell surface proteins related to the nerve growth factor receptor

Susan Mallett; A. Neil Barclay

In this article Susan Mallett and Neil Barclay discuss the molecular and functional features of a new superfamily of membrane proteins defined by the presence of cysteine-rich motifs originally identified in the low-affinity nerve growth factor receptor. This superfamily includes two lymphocyte proteins of unknown function and two receptors for tumor necrosis factor.


BMC Medicine | 2010

Reporting methods in studies developing prognostic models in cancer: a review

Susan Mallett; Patrick Royston; Susan Dutton; Rachel Waters; Douglas G. Altman

Background: Development of prognostic models enables identification of variables that are influential in predicting patient outcome and the use of these multiple risk factors in a systematic, reproducible way according to evidence-based methods. The reliability of models depends on informed use of statistical methods, in combination with prior knowledge of disease. We reviewed published articles to assess the reporting and methods used to develop new prognostic models in cancer.

Methods: We developed a systematic search string and identified articles from PubMed. Forty-seven articles were included that satisfied the following inclusion criteria: published in 2005; aiming to predict patient outcome; presenting new prognostic models in cancer with a time-to-event outcome and including a combination of at least two separate variables; and analysing data using multivariable analysis suitable for time-to-event data.

Results: In the 47 studies, prospective cohort or randomised controlled trial data were used for model development in only 33% (15) of studies. In 30% (14) of the studies insufficient data were available, with fewer than 10 events per variable (EPV) used in model development. EPV could not be calculated in a further 40% (19) of the studies. The coding of candidate variables was reported in only 68% (32) of the studies. Although use of continuous variables was reported in all studies, only one article reported using the recommended approach of retaining all these variables as continuous, without categorisation. Statistical methods for selection of variables in the multivariable modelling were often flawed: a method that is not recommended, namely using statistical significance in univariate analysis as a pre-screening test to select variables for inclusion in the multivariate model, was applied in 48% (21) of the studies.

Conclusions: We found that published prognostic models are often characterised by both the use of inappropriate methods for development of multivariable models and poor reporting. In addition, models are limited by the lack of studies based on prospective data of sufficient sample size to avoid overfitting. The use of poor methods compromises the reliability of prognostic models developed to provide objective probability estimates that complement the clinical intuition of the physician and guidelines.
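
As an illustration of the practice the review recommends (retaining continuous predictors as continuous rather than categorising them), here is a sketch using the third-party lifelines package; the data and column names are invented, and this is not the review's own analysis code.

```python
import pandas as pd
from lifelines import CoxPHFitter  # third-party survival analysis package

# Invented time-to-event data with one continuous candidate predictor.
df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 15, 7, 11],     # follow-up time
    "event": [1, 0, 1, 1, 0, 1, 0, 1],        # 1 = event observed
    "age":   [62, 55, 71, 48, 66, 59, 74, 51],
})

# Recommended: keep age continuous in the model.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "p"]])

# Criticised practice: dichotomising the predictor discards information.
df["age_over_60"] = (df["age"] > 60).astype(int)
```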


BMC Medicine | 2010

Reporting performance of prognostic models in cancer: a review.

Susan Mallett; Patrick Royston; Rachel Waters; Susan Dutton; Douglas G. Altman

Background: Appropriate choice and use of prognostic models in clinical practice require good methods both for model development and for deriving prognostic indices and risk groups from the models. In order to assess reliability and generalizability for use, models need to have been validated and measures of model performance reported. We reviewed published articles to assess the methods and reporting used to develop and evaluate the performance of prognostic indices and risk groups derived from prognostic models.

Methods: We developed a systematic search string and identified articles from PubMed. Forty-seven articles were included that satisfied the following inclusion criteria: published in 2005; aiming to predict patient outcome; presenting new prognostic models in cancer with a time-to-event outcome and including a combination of at least two separate variables; and analysing data using multivariable analysis suitable for time-to-event data.

Results: Cox models were used in 94% (44) of the 47 studies, but the coefficients or hazard ratios for the variables in the final model were reported in only 72% (34). The reproducibility of the derived model was assessed in only 11% (5) of the articles. A prognostic index was developed from the model in 81% (38) of the articles, but researchers derived the prognostic index from the final prognostic model in only 34% (13) of the studies; different coefficients or variables from those in the final model were used in 50% (19) of models, and the methods used were unclear in 16% (6) of the articles. Methods used to derive prognostic groups were also poor: the methods were not reported in 39% (14 of 36) of the studies, and data-derived methods likely to bias estimates of differences between risk groups were used in 28% (10) of the studies. Validation of the models was reported in only 34% (16) of the studies; in 15 studies validation used data from the same population, and in five studies from a different population. Including reports of validation with external data from publications up to four years following model development, external validation was attempted for only 21% (10) of models. Insufficient information was provided on the performance of models in terms of discrimination and calibration.

Conclusions: Many published prognostic models have been developed using poor methods and many with poor reporting, both of which compromise the reliability and clinical relevance of the models and of the prognostic indices and risk groups derived from them.
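
To show what "deriving the prognostic index from the final model" means in practice: the prognostic index is the model's linear predictor (the sum of coefficient times covariate value), and risk-group cut points should be pre-specified rather than tuned on the same data. A minimal sketch; all coefficients, cut points and patient values below are invented.

```python
# Hypothetical final-model Cox coefficients (log hazard ratios).
COEFS = {"age": 0.03, "stage": 0.80, "grade": 0.45}

def prognostic_index(patient: dict) -> float:
    # The linear predictor: sum of coefficient * covariate value.
    return sum(COEFS[name] * patient[name] for name in COEFS)

def risk_group(pi: float, cuts=(1.5, 3.0)) -> str:
    # Cut points should be pre-specified, not optimised on the same data
    # (the data-derived practice the review criticises).
    if pi < cuts[0]:
        return "low"
    return "intermediate" if pi < cuts[1] else "high"

patient = {"age": 65, "stage": 2, "grade": 1}
pi = prognostic_index(patient)  # 0.03*65 + 0.80*2 + 0.45*1 = 4.00
print(f"PI = {pi:.2f}, risk group = {risk_group(pi)}")  # PI = 4.00, high
```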


British Journal of Haematology | 2010

Point-of-care testing in haemostasis.

David J. Perry; David Fitzmaurice; Steve Kitchen; Ian Mackie; Susan Mallett

Point-of-care testing (POCT) in haematology has seen a significant increase in both the spectrum of tests available and the number of tests performed annually. POCT is frequently undertaken in the belief that it will reduce the turnaround time for results and so improve patient care. The most obvious example of POCT in haemostasis is the out-of-hospital monitoring of the International Normalized Ratio in patients receiving a vitamin K antagonist, such as warfarin. Other areas include the use of the Activated Clotting Time to monitor anticoagulation in patients on cardio-pulmonary bypass, platelet function testing to identify patients with apparent aspirin or clopidogrel resistance, and thrombelastography to guide blood product replacement during cardiac and hepatic surgery. In contrast to laboratory testing, POCT is frequently undertaken by untrained or semi-trained individuals and in many cases is not subject to the same strict quality control programmes that exist in the central laboratory. Although external quality assessment programmes do exist for some POCT assays, these are still relatively few. The use of POCT in haematology, particularly in the field of haemostasis, is likely to expand, and it is important that systems are in place to ensure that the generated results are accurate and precise.
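
As background to the INR monitoring mentioned above: the INR is derived from the prothrombin time (PT) as (patient PT / mean normal PT) raised to the power of the reagent's International Sensitivity Index (ISI). A one-line sketch with illustrative values:

```python
# INR = (patient PT / mean normal PT) ** ISI; values below are illustrative.
def inr(pt_patient_s: float, pt_mean_normal_s: float, isi: float) -> float:
    return (pt_patient_s / pt_mean_normal_s) ** isi

print(f"{inr(pt_patient_s=24.0, pt_mean_normal_s=12.0, isi=1.1):.2f}")  # 2.14
```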


Journal of Clinical Epidemiology | 2013

A systematic review classifies sources of bias and variation in diagnostic test accuracy studies

Penny F Whiting; Anne WS Rutjes; Marie Westwood; Susan Mallett

Objective: To classify the sources of bias and variation and to provide an updated summary of the evidence of the effects of each source of bias and variation.

Study Design and Setting: We conducted a systematic review of studies of any design with the main objective of addressing bias or variation in the results of diagnostic accuracy studies. We searched MEDLINE, EMBASE, BIOSIS, the Cochrane Methodology Register, and the Database of Abstracts of Reviews of Effects (DARE) from 2001 to October 2011. Citation searches based on three key papers were conducted, and studies from our previous review (search to 2001) were eligible. One reviewer extracted data on the study design, objective, sources of bias and/or variation, and results; a second reviewer checked the extraction.

Results: We summarized the number of studies providing evidence of an effect arising from each source of bias and variation on the estimates of sensitivity, specificity, and overall accuracy.

Conclusions: We found consistent evidence for the effects of case-control design, observer variability, availability of clinical information, reference standard, partial and differential verification bias, demographic features, and disease prevalence and severity. Effects were generally stronger for sensitivity than for specificity. Evidence for other sources of bias and variation was limited.
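
The estimates this review is concerned with, sensitivity and specificity, come from the 2x2 cross-classification of index-test results against the reference standard. A minimal sketch with invented counts:

```python
# Sensitivity and specificity from a 2x2 diagnostic accuracy table.
def sens_spec(tp: int, fp: int, fn: int, tn: int) -> tuple:
    sensitivity = tp / (tp + fn)  # diseased patients correctly testing positive
    specificity = tn / (tn + fp)  # non-diseased correctly testing negative
    return sensitivity, specificity

sens, spec = sens_spec(tp=90, fp=20, fn=10, tn=80)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.90, 0.80
```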


Journal of Hepatology | 2012

Evaluation of coagulation abnormalities in acute liver failure

Banwari Agarwal; Gavin Wright; Alex Gatt; Anne Riddell; Vishwaraj Vemala; Susan Mallett; Pratima Chowdary; Andrew Davenport; Rajiv Jalan; Andrew K. Burroughs

Background & Aims: In acute liver failure (ALF), prothrombin time (PT) and its derivative, the prothrombin time ratio (PTR), are elevated and are considered predictors of increased bleeding risk. We aimed to determine whether an increased PT/PTR reflects the haemostatic potential and bleeding risk in ALF patients.

Methods: Twenty consecutive ALF patients were recruited. Samples were analysed on admission for standard laboratory clotting tests (e.g. PT), thromboelastography (TEG), individual pro- and anticoagulant factors, thrombin generation (TG) kinetics with and without Protac, a snake venom protein C activator, and a microparticle assay. TG was also measured in 20 age- and sex-matched healthy volunteers.

Results: PT was significantly raised (50.7 ± 7.2 s, p = 0.0001) but did not correlate with TEG parameters. TEG tracings were consistent with a hypocoagulable state in 20% of patients, normal in 45%, and hypercoagulable in 35%. There was a concomitant and proportional reduction in plasma levels of both procoagulant and natural anticoagulant proteins, in conjunction with a significant elevation in plasma levels of factor VIII (FVIII), von Willebrand factor and microparticles, culminating in an overall efficient, albeit reduced, thrombin generation capacity in comparison with healthy individuals. A heparin-like effect (HLE) was also noted in most patients. No significant clinical bleeding complications occurred and no blood transfusions were required.

Conclusions: In ALF, despite grossly deranged PT in all patients, the coagulation disturbance is complex and heterogeneous, and estimation of bleeding risk requires an individualised approach.
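
For reference, the PTR discussed above is the patient's PT divided by a mean normal PT (the INR additionally raises this ratio to the reagent's ISI). A trivial sketch; the mean normal PT of 12 s is an assumption for illustration, not a value from the study:

```python
# PTR = patient PT / mean normal PT (no ISI exponent, unlike the INR).
def ptr(pt_patient_s: float, pt_mean_normal_s: float = 12.0) -> float:
    return pt_patient_s / pt_mean_normal_s

print(f"PTR = {ptr(50.7):.1f}")  # study's mean admission PT of 50.7 s -> ~4.2
```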

Collaboration


Dive into Susan Mallett's collaborations.

Top Co-Authors

Steve Halligan

University College London

Andrew Plumb

University College London

Keith Rolles

University of Cambridge
