
Publications


Featured research published by Paolo Fraccaro.


JMIR medical informatics | 2015

Adoption of clinical decision support in multimorbidity: a systematic review.

Paolo Fraccaro; Mercedes Arguello Casteleiro; John Ainsworth; Iain Buchan

Background Patients with multiple conditions have complex needs and are increasing in number as populations age. This multimorbidity is one of the greatest challenges facing health care. Having more than 1 condition generates (1) interactions between pathologies, (2) duplication of tests, (3) difficulties in adhering to often conflicting clinical practice guidelines, (4) obstacles in the continuity of care, (5) confusing self-management information, and (6) medication errors. In this context, clinical decision support (CDS) systems need to be able to handle realistic complexity and minimize iatrogenic risks. Objective The aim of this review was to identify to what extent CDS is adopted in multimorbidity. Methods This review followed PRISMA guidance and adopted a multidisciplinary approach. Scopus and PubMed searches were performed by combining terms from 3 thesauri containing synonyms for (1) multimorbidity and comorbidity, (2) polypharmacy, and (3) CDS. Relevant articles were identified by examining titles and abstracts, and the full text of selected articles was analyzed in depth. For articles appropriate for this review, data were collected on clinical tasks, diseases, decision maker, methods, data input context, user interface considerations, and evaluation of effectiveness. Results A total of 50 articles were selected for full in-depth analysis and 20 studies were included in the final review. Medication (n=10) and clinical guidance (n=8) were the predominant clinical tasks. Four studies focused on merging concurrent clinical practice guidelines. A total of 17 articles reported that their CDS systems were knowledge-based. Most articles considered patients’ clinical records (n=19), clinical practice guidelines (n=12), and clinicians’ knowledge (n=10) as contextual input data. The most frequently mentioned diseases were cardiovascular disease (n=9) and diabetes mellitus (n=5). In all, 12 articles mentioned generalist doctor(s) as the decision maker(s). No study referred to active involvement of the patient in the decision-making process or to patient self-management, and none of the reviewed articles adopted mobile technologies. No rigorous evaluations of the usability or effectiveness of the reported CDS systems were found. Conclusions This review shows that multimorbidity is underinvestigated in the informatics of supporting clinical decisions. CDS interventions that systematize clinical practice guidelines without considering the interactions of different conditions and care processes may lead to unhelpful or harmful clinical actions. To improve patient safety in multimorbidity, there is a need for more evidence about how both conditions and care processes interact. The data needed to build this evidence base exist in many electronic health record systems and are underused.


Medicine | 2016

Predicting mortality from change-over-time in the Charlson Comorbidity Index: A retrospective cohort study in a data-intensive UK health system.

Paolo Fraccaro; Evangelos Kontopantelis; Matthew Sperrin; Niels Peek; Christian D. Mallen; Philip Urban; Iain Buchan; Mamas A. Mamas

Multimorbidity is common among older people and presents a major challenge to health systems worldwide. Metrics of multimorbidity are, however, crude: they measure comorbid conditions at single time points rather than reflecting the longitudinal and additive nature of chronic conditions. In this paper, we explore longitudinal comorbidity metrics and their value in predicting mortality. Using linked primary and secondary care data, we conducted a retrospective cohort study of adults in Salford, UK from 2005 to 2014 (n = 287,459). We measured multimorbidity with the Charlson Comorbidity Index (CCI) and quantified its changes in various time windows. We used survival models to assess the relationship between CCI changes and mortality, controlling for gender, age, baseline CCI, and time-dependent CCI. Goodness of fit was assessed with the Akaike Information Criterion and discrimination with the c-statistic. Overall, 15.9% of patients experienced a change in CCI after 10 years, with a mortality rate of 19.8%. The model that included gender and time-dependent age, CCI, and CCI change across consecutive time windows had the best fit to the data but equivalent discrimination to the other time-dependent models. The absolute CCI score gave a constant hazard ratio (HR) of around 1.3 per unit increase, while CCI change afforded greater prognostic impact, particularly when it occurred in shorter time windows (maximum HR 1.63, 95% confidence interval 1.59–1.66, for the 3-month window). Change over time in comorbidity is an important but overlooked predictor of mortality, which should be considered in research and care quality management.
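
As a rough illustration of the kind of time-dependent survival model this abstract describes, the sketch below fits a Cox model with time-varying covariates using Python's lifelines library. The long-format data layout and every column name (id, start, stop, died, age, sex, cci, cci_change) are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch, assuming a hypothetical long-format file with one row per
# patient per follow-up window, carrying the CCI observed in that window and
# its change since the previous window. Not the study's actual code.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

df = pd.read_csv("cci_windows.csv")  # hypothetical columns: id, start, stop,
                                     # died, age, sex (0/1), cci, cci_change

ctv = CoxTimeVaryingFitter()
ctv.fit(
    df[["id", "start", "stop", "died", "age", "sex", "cci", "cci_change"]],
    id_col="id",
    event_col="died",
    start_col="start",
    stop_col="stop",
)
ctv.print_summary()  # exp(coef) is the hazard ratio per unit of each covariate
```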


BMC Ophthalmology | 2015

Combining macula clinical signs and patient characteristics for age-related macular degeneration diagnosis: a machine learning approach

Paolo Fraccaro; Massimo Nicolò; Monica Bonetto; Mauro Giacomini; Peter Weller; Carlo Enrico Traverso; Mattia Prosperi; Dympna O'Sullivan

Background To investigate machine learning methods, ranging from simpler interpretable techniques to complex (non-linear) “black-box” approaches, for automated diagnosis of Age-related Macular Degeneration (AMD). Methods Data from healthy subjects and patients diagnosed with AMD or other retinal diseases were collected during routine visits via an Electronic Health Record (EHR) system. Patients’ attributes included demographics and, for each eye, presence/absence of major AMD-related clinical signs (soft drusen, retinal pigment epithelium defects/pigment mottling, depigmentation area, subretinal haemorrhage, subretinal fluid, macula thickness, macular scar, subretinal fibrosis). Interpretable “white-box” methods, including logistic regression and decision trees, as well as less interpretable “black-box” methods, such as support vector machines (SVM), random forests, and AdaBoost, were used to develop models (trained and validated on unseen data) to diagnose AMD. The gold standard was confirmed diagnosis of AMD by physicians. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were used to assess performance. Results The study population included 487 patients (912 eyes). In terms of AUC, random forests, logistic regression, and AdaBoost showed a mean performance of 0.92, followed by SVM and decision trees (0.90). All machine learning models identified soft drusen and age as the most discriminating variables in clinicians’ decision pathways to diagnose AMD. Conclusions Both black-box and white-box methods performed well in identifying diagnoses of AMD and their decision pathways. Machine learning models developed through the proposed approach, relying on clinical signs identified by retinal specialists, could be embedded into EHRs to provide physicians with real-time (interpretable) support.
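
The white-box/black-box comparison can be sketched with scikit-learn using the model families named in the abstract. The input file and column names below are hypothetical; the study's real features came from an ophthalmology EHR, and its train/validate protocol is only summarised above.

```python
# Minimal sketch: compare white-box and black-box classifiers by
# cross-validated AUC. Data loading is hypothetical (one row per eye,
# 0/1-coded clinical signs plus demographics).
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("amd_eyes.csv")            # hypothetical file
X = df.drop(columns=["amd_diagnosis"])      # demographics + clinical signs
y = df["amd_diagnosis"]                     # physician-confirmed gold standard

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=4),
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
    "svm": make_pipeline(StandardScaler(), SVC(probability=True)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.2f}")
```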


BMC Medicine | 2016

An external validation of models to predict the onset of chronic kidney disease using population-based electronic health records from Salford, UK

Paolo Fraccaro; Sabine N. van der Veer; Benjamin Brown; Mattia Prosperi; Donal O’Donoghue; Gary S. Collins; Iain Buchan; Niels Peek

Background Chronic kidney disease (CKD) is a major and increasing constituent of disease burdens worldwide. Early identification of patients at increased risk of developing CKD can guide interventions to slow disease progression, initiate timely referral to appropriate kidney care services, and support targeting of care resources. Risk prediction models can extend laboratory-based CKD screening to earlier stages of disease; however, to date, only a few of them have been externally validated or directly compared outside their development populations. Our objective was to validate published CKD prediction models applicable in primary care. Methods We synthesised two recent systematic reviews of CKD risk prediction models and externally validated selected models for a 5-year horizon of disease onset. We used linked, anonymised, structured (coded) primary and secondary care data from patients resident in Salford (population ~234,000), UK. All adult patients with at least one record in 2009 were followed up until the end of 2014, death, or CKD onset (n = 178,399). CKD onset was defined as repeated impaired eGFR measures over a period of at least 3 months, or physician diagnosis of CKD stage 3–5. For each model, we assessed discrimination and calibration, and performed decision curve analysis. Results Seven relevant CKD risk prediction models were identified. Five models also had an associated simplified scoring system. All models discriminated well between patients developing CKD or not, with c-statistics around 0.90. Most of the models were poorly calibrated to our population, substantially over-predicting risk. The two models that did not require recalibration were also the ones that performed best in the decision curve analysis. Conclusions The included CKD prediction models showed good discriminative ability but over-predicted the actual 5-year CKD risk in English primary care patients. QKidney, the only UK-developed model, outperformed the others. Clinical prediction models should be (re)calibrated for their intended uses.
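
The three validation checks named in the Methods (discrimination, calibration, decision curve analysis) can be sketched as follows. The data here are synthetic, and the net-benefit helper is the generic textbook formulation of decision curve analysis, not the study's own code.

```python
# Minimal sketch of external-validation metrics on illustrative data:
# y_true = observed 5-year CKD onset, y_prob = a model's predicted risk.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2000)                              # synthetic
y_prob = np.clip(0.1 + 0.5 * y_true + 0.3 * rng.random(2000), 0, 1)  # synthetic

# 1) Discrimination: the c-statistic equals the ROC AUC for a binary outcome.
c_stat = roc_auc_score(y_true, y_prob)

# 2) Calibration: observed event rate vs mean predicted risk, per risk decile.
obs_rate, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)

# 3) Decision curve analysis: net benefit across threshold probabilities pt,
#    NB(pt) = TP/n - (FP/n) * pt / (1 - pt).
def net_benefit(y_true, y_prob, pt):
    n = len(y_true)
    treated = y_prob >= pt
    tp = np.sum(treated & (y_true == 1))
    fp = np.sum(treated & (y_true == 0))
    return tp / n - (fp / n) * pt / (1 - pt)

curve = [(pt, net_benefit(y_true, y_prob, pt)) for pt in np.linspace(0.01, 0.3, 30)]
print(f"c-statistic: {c_stat:.2f}")
```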


Journal of the International AIDS Society | 2014

A comparison of inpatient admissions in 2012 from two European countries.

Victoria Tittle; Giovanni Cenderello; Ambra Pasa; Preya Patel; Stefania Artioli; Chiara Dentone; Paolo Fraccaro; Mauro Giacomini; Maurizio Setti; Antonio Di Biagio; Mark Nelson

This study compares trends in HIV inpatient admissions between a London tertiary HIV centre (United Kingdom) and four infectious disease wards in Italy, to identify common patterns across Europe.


Journal of the International AIDS Society | 2014

Relationship between innate immunity, soluble markers and metabolic-clinical parameters in HIV+ patients ART treated with HIV-RNA<50 cp/mL.

Chiara Dentone; Daniela Fenoglio; Alessio Signori; Giovanni Cenderello; Alessia Parodi; Federica Bozzano; Michele Guerra; Pasqualina De Leo; Valentina Bartolacci; Eugenio Mantia; G. Orofino; Francesca Kalli; Francesco Marras; Paolo Fraccaro; Mauro Giacomini; Giovanni Cassola; Bianca Bruzzone; Giuseppe Ferrea; Claudio Viscoli; Gilberto Filaci; Andrea De Maria; Antonio Di Biagio

The persistence of immune activation and inflammation in HIV patients with undetectable HIV-RNA viral load (VL) causes many co-morbidities [1–3]. The aim of this study was to correlate monocyte (m) and NK cell activation levels, soluble markers, and oxidative stress with clinical, biochemical, and metabolic data in HIV-1 infected patients with VL ≤50 copies (cp)/mL on antiretroviral therapy.


BMC Medical Informatics and Decision Making | 2018

Presentation of laboratory test results in patient portals: Influence of interface design on risk interpretation and visual search behaviour

Paolo Fraccaro; Markel Vigo; Panagiotis Balatsoukas; Sabine N. van der Veer; Lamiece Hassan; Richard Williams; Grahame Wood; Smeeta Sinha; Iain Buchan; Niels Peek

Background Patient portals are considered valuable instruments for self-management of long-term conditions; however, there are concerns over how patients might interpret and act on the clinical information they access. We hypothesized that visual cues improve patients’ ability to correctly interpret laboratory test results presented through patient portals. We also assessed, using eye-tracking methods, the relationship between risk interpretation and visual search behaviour. Methods We conducted a controlled study with 20 kidney transplant patients. Participants viewed three different graphical presentations in each of low-, medium-, and high-risk clinical scenarios composed of results for 28 laboratory tests. After viewing each clinical scenario, patients were asked how they would have acted in real life if the results were their own, as a proxy for their risk interpretation. They could choose between: 1) calling their doctor immediately (high interpreted risk); 2) trying to arrange an appointment within the next 4 weeks (medium interpreted risk); 3) waiting for the next appointment in 3 months (low interpreted risk). For each presentation, we assessed the accuracy of patients’ risk interpretation, and employed eye tracking to assess and compare visual search behaviour. Results Misinterpretation of risk was common, with 65% of participants underestimating the need for action at least once across all presentations. Participants found it particularly difficult to interpret medium-risk clinical scenarios. Participants who consistently understood when action was needed showed higher visual search efficiency, suggesting a better strategy for coping with information overload that helped them focus on the laboratory tests most relevant to their condition. Conclusions This study confirms patients’ difficulties in interpreting laboratory test results, with many patients underestimating the need for action even when abnormal values were highlighted or grouped together. Our findings raise patient safety concerns and may limit the potential of patient portals to actively involve patients in their own healthcare.


Journal of innovation in health informatics | 2017

Informatics for Health 2017: Advancing both science and practice

Philip Scott; Ronald Cornet; Colin McCowan; Niels Peek; Paolo Fraccaro; Nophar Geifman; Wouter T. Gude; William Hulme; Glen P. Martin; Richard Williams

Introduction The Informatics for Health congress, 24-26 April 2017, in Manchester, UK, brought together the Medical Informatics Europe (MIE) conference and the Farr Institute International Conference. This special issue of the Journal of Innovation in Health Informatics contains 113 presentation abstracts and 149 poster abstracts from the congress. Discussion The twin programmes of “Big Data” and “Digital Health” are not always joined up by coherent policy and investment priorities. Substantial global investment in health IT and data science has led to sound progress but highly variable outcomes. Society needs an approach that brings together the science and the practice of health informatics. The goal is multi-level Learning Health Systems that consume and intelligently act upon both patient data and organizational intervention outcomes. Conclusions Informatics for Health 2017 demonstrated the art of the possible, seen in the breadth and depth of our contributions. We call upon policy makers, research funders and programme leaders to learn from this joined-up approach.


IEEE Journal of Translational Engineering in Health and Medicine | 2016

I-Maculaweb: A Tool to Support Data Reuse in Ophthalmology

Monica Bonetto; Massimo Nicolò; Roberta Gazzarata; Paolo Fraccaro; Raffaella Rosa; Donatella Musetti; Maria Musolino; Carlo Enrico Traverso; Mauro Giacomini

This paper presents a Web-based application to collect and manage clinical data and clinical trials together in a single tool. I-maculaweb is a user-friendly Web application designed to manage, share, and analyze clinical data from patients affected by degenerative and vascular diseases of the macula. The innovative scientific and technological element of this project is the integration of individual and population data relevant to degenerative and vascular diseases of the macula. Clinical records can also be extracted for statistical purposes and used for clinical decision support systems. I-maculaweb is based on an existing multilevel and multiscale data management model, which includes general principles suitable for several different clinical domains. The database structure has been specifically built to respect laterality, a key aspect in ophthalmology. Users can add and manage patient records, follow-up visits, treatments, diagnoses, and clinical history. There are two different modes of extracting records: one for the patient’s own center, in which personal details are shown, and one for statistical purposes, in which anonymized data from all centers are visible. The Web platform allows effective management, sharing, and reuse of information within primary care and clinical research. Clear and precise clinical data will improve understanding of the real-life management of degenerative and vascular diseases of the macula and yield more precise epidemiologic and statistical data. Furthermore, this Web-based application can easily be employed as an electronic clinical research file in clinical studies.
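
Since the paper highlights laterality as a key aspect of the database design, here is a speculative sketch of how per-eye records might be modelled; all names are invented, as the paper does not publish its schema.

```python
# Speculative sketch: encoding laterality so that findings for the right (OD)
# and left (OS) eye are stored as separate records. Field names are invented.
from dataclasses import dataclass
from enum import Enum

class Eye(Enum):
    OD = "right"
    OS = "left"

@dataclass
class MacularFinding:
    patient_id: str
    visit_date: str
    eye: Eye                      # laterality is a first-class attribute
    soft_drusen: bool = False
    subretinal_fluid: bool = False

# One visit yields up to two rows, one per eye, so per-eye follow-up,
# treatment history, and statistics remain possible.
visit = [
    MacularFinding("P001", "2015-03-02", Eye.OD, soft_drusen=True),
    MacularFinding("P001", "2015-03-02", Eye.OS),
]
```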


BMJ Open | 2018

Acute kidney injury in the UK: a replication cohort study of the variation across three regional populations

Simon Sawhney; Heather Robinson; Sabine N. van der Veer; Hilda Osafo Hounkpatin; Timothy Scale; James Chess; Niels Peek; Angharad Marks; G.I. Davies; Paolo Fraccaro; Matthew Johnson; Ronan Lyons; Dorothea Nitsch; Paul Roderick; Nynke Halbesma; Eve Miller-Hodges; Corrinda Black

Objectives A rapid growth in the reported rates of acute kidney injury (AKI) has led to calls for greater attention and greater resources for improving care. However, the reported incidence of AKI also varies more than tenfold between previous studies, and some of this variation is likely to stem from methodological heterogeneity. This study explores the extent of cross-population variation in AKI incidence after minimising that heterogeneity. Design Population-based cohort study analysing data from electronic health records from three regions in the UK through shared analysis code and harmonised methodology. Setting Three populations from Scotland, Wales and England covering three time periods: Grampian 2003, 2007 and 2012; Swansea 2007; and Salford 2012. Participants All residents in each region aged 15 years or older. Main outcome measures Population incidence of AKI and AKI phenotype (severity, recovery, recurrence), determined using a shared biochemistry-based AKI episode code and standardised by age and sex. Results Crude AKI rates (per 10 000/year) were, respectively, 131, 138, 139, 151 and 124 (p=0.095), and after standardisation for age and sex: 147, 151, 146, 146 and 142 (p=0.257) for Grampian 2003, 2007 and 2012; Swansea 2007; and Salford 2012. The pattern of variation in crude rates was robust to any modifications of the AKI definition. Across all populations and time periods, AKI rates increased substantially with age, from ~20 per 10 000/year among those aged under 40 years to ~550 per 10 000/year among those aged 70 years and over. Conclusion When harmonised methods are used and age and sex differences are accounted for, a similar high burden of AKI is consistently observed across different populations and time periods (~150 per 10 000/year), with particularly high rates among older people. Policy-makers should be careful not to draw simplistic conclusions about variation in AKI rates from comparisons that are not methodologically rigorous.
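
Direct standardisation by age and sex, as applied to the crude rates above, can be sketched as follows. All stratum counts and reference-population weights are invented for illustration; the study used its own harmonised reference weights.

```python
# Minimal sketch of direct age-sex standardisation of AKI rates.
# Every number below is illustrative, not from the study.
import pandas as pd

strata = pd.DataFrame({
    "age_band": ["15-39", "40-69", "70+"] * 2,
    "sex":      ["F"] * 3 + ["M"] * 3,
    "events":   [40, 300, 1100, 55, 380, 900],          # AKI episodes
    "person_years": [20000, 25000, 20000, 21000, 24000, 15000],
    "std_pop":  [0.18, 0.20, 0.12, 0.19, 0.19, 0.12],   # reference weights, sum to 1
})

# Stratum-specific rates per 10 000 person-years, then a weighted average
# using the reference population's age-sex structure.
strata["rate"] = strata["events"] / strata["person_years"] * 10_000
crude = strata["events"].sum() / strata["person_years"].sum() * 10_000
standardised = (strata["rate"] * strata["std_pop"]).sum()
print(f"crude: {crude:.0f}, age-sex standardised: {standardised:.0f} per 10 000/year")
```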

Collaboration


Paolo Fraccaro's top co-authors:

Niels Peek, Manchester Academic Health Science Centre
Iain Buchan, University of Manchester
John Ainsworth, University of Manchester
Markel Vigo, University of Manchester