Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Filippo Galgani is active.

Publication


Featured research published by Filippo Galgani.


Systematic Reviews | 2014

Systematic review automation technologies.

Guy Tsafnat; Paul Glasziou; Miew Keen Choong; Adam G. Dunn; Filippo Galgani; Enrico Coiera

Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed to realize automation of the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time.
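The envisaged workflow, in which a systematic review is "described as a computer program", can be pictured as a simple pipeline. The sketch below is purely illustrative: every function is a hypothetical stub standing in for a class of tools surveyed in the paper, not an API the paper defines.

```python
# Purely illustrative sketch of the envisaged automated review workflow.
# Every function is a hypothetical stub; the survey catalogues real tools
# for each step rather than defining this interface.

def retrieve_relevant_trials(question):
    return ["trial-1", "trial-2"]                  # search + citation snowballing

def appraise(trial_id):
    return {"id": trial_id, "eligible": True}      # screening / critical appraisal

def extract_outcomes(trial):
    return {"id": trial["id"], "effect": 0.10}     # data extraction and synthesis

def assess_risk_of_bias(trial):
    return {"id": trial["id"], "risk": "low"}      # risk-of-bias assessment

def meta_analysis(outcomes):
    return sum(o["effect"] for o in outcomes) / len(outcomes)

def run_systematic_review(question):
    trials = [appraise(t) for t in retrieve_relevant_trials(question)]
    eligible = [t for t in trials if t["eligible"]]
    pooled = meta_analysis([extract_outcomes(t) for t in eligible])
    risks = {t["id"]: assess_risk_of_bias(t)["risk"] for t in eligible}
    return f"{question}: pooled effect {pooled:.2f}, risk of bias {risks}"

print(run_systematic_review("Does intervention X reduce mortality?"))
```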


Journal of Medical Internet Research | 2014

Automatic evidence retrieval for systematic reviews

Miew Keen Choong; Filippo Galgani; Adam G. Dunn; Guy Tsafnat

Background: Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discovering additional evidence that was not retrieved through conventional search. Snowballing's effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Objective: Our goal was to evaluate an automatic citation-snowballing method's capacity to identify and retrieve the full text and/or abstracts of cited articles. Methods: Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles in the database. We compared the performance of the automatic citation-snowballing method against the results of this manual search, measuring precision, recall, and F1 score. Results: The automatic method correctly identified 633 citations with high precision (97.7%); as a proportion of all included citations this corresponds to recall of 66.7% (F1 score 79.3%), and as a proportion of citations present in MAS to recall of 85.5% (F1 score 91.2%). The full text or abstract was retrieved for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly identified citations. Conclusions: The proposed method for automatic citation snowballing is accurate and can obtain the full texts or abstracts of a substantial proportion of the scholarly citations in review articles. By automating citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews.
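For readers who want to see how the reported figures fit together, the following minimal sketch (not code from the paper) reproduces the precision/recall/F1 arithmetic from the counts given in the abstract.

```python
# Minimal sketch: reproduce the reported evaluation figures from the raw counts
# given in the abstract. Not code from the paper.

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

cited_total = 949    # citations in the 20 review articles
in_mas      = 740    # of those, found in Microsoft Academic Search (78.0%)
identified  = 633    # correctly identified by automatic snowballing
precision   = 0.977  # as reported in the abstract

recall_all = identified / cited_total   # 0.667 -> recall vs. all citations
recall_mas = identified / in_mas        # 0.855 -> recall vs. citations in MAS

print(f"F1 vs. all citations: {f1(precision, recall_all):.3f}")  # ~0.793
print(f"F1 vs. MAS subset:    {f1(precision, recall_mas):.3f}")  # ~0.912
```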


Pacific Rim International Conference on Artificial Intelligence | 2012

Citation based summarisation of legal texts

Filippo Galgani; Paul Compton; Achim G. Hoffmann

This paper presents an approach to using both incoming and outgoing citation information for document summarisation. Our work aims at automatically generating catchphrases for legal case reports, using, besides the full text, the text of cited cases and of cases that cite the current case. We propose methods that use catchphrases and sentences of cited/citing cases to extract catchphrases from the text of the target case. We created a corpus of cases, catchphrases and citations, and performed a ROUGE-based evaluation, which shows the superiority of our citation-based methods over full-text-only methods.
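As a concrete illustration of this kind of evaluation: ROUGE-N recall is the fraction of reference n-grams that also occur in the system output. The snippet below is a generic ROUGE-1 illustration with made-up catchphrases, not the authors' evaluation code.

```python
# Generic ROUGE-1 recall illustration (not the authors' evaluation code).
from collections import Counter

def rouge_n_recall(reference: str, candidate: str, n: int = 1) -> float:
    def ngrams(text):
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ref, cand = ngrams(reference), ngrams(candidate)
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())

# Hypothetical example: human catchphrase vs. automatically extracted one.
human     = "appeal against sentence manifestly excessive"
extracted = "whether the sentence imposed was manifestly excessive"
print(rouge_n_recall(human, extracted))  # 0.6 (3 of 5 reference unigrams matched)
```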


International Conference on Computational Linguistics | 2012

Towards automatic generation of catchphrases for legal case reports

Filippo Galgani; Paul Compton; Achim G. Hoffmann

This paper presents the challenges and possibilities of a novel summarisation task: automatic generation of catchphrases for legal documents. Catchphrases are meant to present the important legal points of a document with respect to identifying precedents. Automatically generating catchphrases for legal case reports could greatly assist in searching for legal precedents, as many legal texts do not have catchphrases attached. We developed a corpus of legal (human-generated) catchphrases (provided with the submission), which lets us compute statistics useful for automatic catchphrase extraction. We propose a set of methods to generate legal catchphrases and evaluate them on our corpus. The evaluation shows a recall comparable to humans while still showing a competitive level of precision, which is very encouraging. Finally, we introduce a novel evaluation method for catchphrases for legal texts based on the well-known ROUGE measure for evaluating summaries of general texts.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2014

HAUSS: Incrementally building a summarizer combining multiple techniques

Filippo Galgani; Paul Compton; Achim G. Hoffmann

The idea of automatic summarization dates back to 1958, when Luhn invented the "auto abstract" (Luhn, 1958). Since then, many diverse automatic summarization approaches have been proposed, but no single technique has solved the increasingly urgent need for automatic summarization. Rather than proposing one more such technique, we suggest that the best solution is likely a system able to combine multiple summarization techniques, as required by the type of documents being summarized. Thus, this paper presents HAUSS: a framework to quickly build specialized summarizers, integrating several base techniques into a single approach. To recognize relevant text fragments, rules are created that combine frequency, centrality, citation and linguistic information in a context-dependent way. An incremental knowledge acquisition framework strongly supports the creation of these rules, using a training corpus to guide rule acquisition and produce a powerful knowledge base specific to the domain. Using HAUSS, we created a knowledge base for catchphrase extraction in legal text. The system outperforms existing state-of-the-art general-purpose summarizers and machine learning approaches. Legal experts rated the extracted summaries similar to the original catchphrases given by the court. Our investigation of knowledge acquisition methods for summarization therefore demonstrates that it is possible to quickly create effective special-purpose summarizers that combine multiple techniques into a single context-aware approach.
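To make the flavour of such rules concrete, the sketch below shows one way frequency, citation and linguistic cues could be combined in a context-dependent test. It is purely illustrative: the attribute names and thresholds are invented here, and the actual HAUSS rule language is defined in the paper.

```python
# Illustrative only: a toy context-dependent rule combining frequency,
# citation and linguistic cues. Not HAUSS's actual rule language.
from dataclasses import dataclass

@dataclass
class Sentence:
    text: str
    freq_score: float             # share of corpus-frequent legal terms in the sentence
    echoed_by_citing_case: bool   # content also appears near citances of citing cases
    starts_with_whether: bool     # simple linguistic cue common in catchphrases

def looks_like_catchphrase(s: Sentence) -> bool:
    # Context-dependent rule: a weaker frequency signal suffices when the
    # sentence is echoed by citing cases.
    if s.echoed_by_citing_case and s.freq_score > 0.2:
        return True
    return s.starts_with_whether and s.freq_score > 0.5

example = Sentence("whether the contract was validly terminated", 0.6, False, True)
print(looks_like_catchphrase(example))  # True
```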


Australasian Joint Conference on Artificial Intelligence | 2010

LEXA: Towards Automatic Legal Citation Classification

Filippo Galgani; Achim G. Hoffmann

In this paper we present our approach to legal citation classification using incremental knowledge acquisition. This forms part of our more ambitious goal of automatic legal text summarization. We created a large training and test corpus from court decision reports in Australia. We showed that, within less than a week, it is possible to develop a good-quality knowledge base that considerably outperforms a baseline machine learning approach. We note that the problem of legal citation classification allows the use of machine learning because classified training data is available; for other subproblems of legal text summarization this is unlikely to be the case.


Information Processing and Management | 2015

Summarization based on bi-directional citation analysis

Filippo Galgani; Paul Compton; Achim G. Hoffmann

Automatic document summarization using citations is based on summarizing what others explicitly say about the document, by extracting a summary from the text around the citations (citances). While this technique works quite well for summarizing the impact of scientific articles, other genres of documents, as well as other types of summaries, require different approaches. In this paper, we introduce a new family of methods that we developed for legal document summarization to generate catchphrases for legal cases (where catchphrases are a form of legal summary). Our methods use both incoming and outgoing citations, and we show how citances can be combined with other elements of cited and citing documents, including the full text of the target document and the catchphrases of cited and citing cases. On a legal summarization corpus, our methods outperform competitive baselines. The combination of full-text sentences and catchphrases from cited and citing cases is particularly successful. We also apply and evaluate the methods on scientific paper summarization, where they perform at the level of state-of-the-art techniques. Our family of citation-based summarization methods is powerful and flexible enough to successfully target a range of different domains and summarization tasks.
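A rough sketch of the bi-directional idea: candidate text is gathered from the target case itself (full text), from the cases it cites (outgoing), and from the cases that cite it (incoming citances). The word-overlap scoring below is a deliberate simplification for illustration, not the ranking used in the paper; all inputs are hypothetical.

```python
# Illustrative sketch of bi-directional candidate gathering and a simplified ranking.
def word_overlap(candidate: str, target_text: str) -> int:
    """Number of distinct words shared between a candidate and the target's full text."""
    return len(set(candidate.lower().split()) & set(target_text.lower().split()))

def candidate_catchphrases(target_sentences, cited_catchphrases, citing_citances,
                           target_full_text, top_k=3):
    # Pool candidates from the target itself, the cases it cites (outgoing),
    # and the cases that cite it (incoming), then rank by overlap with the target.
    pool = list(target_sentences) + list(cited_catchphrases) + list(citing_citances)
    return sorted(pool, key=lambda c: word_overlap(c, target_full_text), reverse=True)[:top_k]

# Hypothetical inputs:
print(candidate_catchphrases(
    ["The appeal against sentence is dismissed."],
    ["criminal law - appeal against sentence - whether manifestly excessive"],
    ["In Smith the court dismissed a similar appeal against sentence."],
    "appeal against sentence manifestly excessive dismissed"))
```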


Pacific Rim Knowledge Acquisition Workshop | 2012

Knowledge acquisition for categorization of legal case reports

Filippo Galgani; Paul Compton; Achim G. Hoffmann

Natural language processing in complex domains, such as law, requires elaborate features whose interaction is often difficult to model, so traditional machine learning approaches may fail to perform satisfactorily. This paper describes our approach to assigning categories and generating catchphrases for legal case reports. We describe our knowledge acquisition framework, which lets us quickly build classification rules, using a small number of features, to assign general labels to cases. We show how the resulting knowledge base outperforms machine learning models that use either the designed features or a traditional bag-of-words representation. We also describe how to extend this approach to extract from the full text a list of more specific catchphrases that describe the content of the case.


Expert Systems with Applications | 2015

LEXA: Building knowledge bases for automatic legal citation classification

Filippo Galgani; Paul Compton; Achim G. Hoffmann

This paper presents a new approach to building legal citation classification systems. Our approach is based on Ripple-Down Rules (RDR), an efficient knowledge acquisition methodology. The main contributions of the paper (over existing expert-systems approaches) are extensions to the traditional RDR approach that introduce new automatic methods to assist in the creation of rules: using the available dataset to provide performance estimates and relevant examples, automatically suggesting and validating synonyms, and re-using exceptions in different portions of the knowledge base. We compare our system, LEXA, with baseline machine learning techniques. LEXA obtains better results in both clean and noisy subsets of our corpus. Compared to machine learning approaches, LEXA also has other advantages, such as supporting continuous extension of the rule base, the opportunity to proceed without an annotated data set, and the ability to validate class labels while building rules.
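For readers unfamiliar with Ripple-Down Rules, the sketch below shows the basic structure the paper builds on: rules with conclusions and exception rules that are tried only when the parent rule fires. The conditions and class labels here are toy examples, not LEXA's actual knowledge base.

```python
# Minimal sketch of a Ripple-Down Rules structure; toy conditions and labels only.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    condition: Callable[[str], bool]                 # toy keyword test on the citing sentence
    conclusion: str                                  # citation class if the rule fires
    exceptions: List["Rule"] = field(default_factory=list)

def classify(text: str, rules: List[Rule], default: str = "cited") -> str:
    """Return the conclusion of the last rule that fires along the exception chain."""
    for rule in rules:
        if rule.condition(text):
            # A firing exception overrides its parent's conclusion.
            return classify(text, rule.exceptions, default=rule.conclusion)
    return default

# Toy knowledge base; the class labels are illustrative only.
kb = [
    Rule(lambda t: "followed" in t, "followed",
         exceptions=[Rule(lambda t: "not followed" in t, "distinguished")]),
    Rule(lambda t: "overruled" in t, "overruled"),
]

print(classify("The principle in Smith v Jones was followed.", kb))          # followed
print(classify("Smith v Jones was not followed in the present case.", kb))   # distinguished
```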


Pharmacoepidemiology and Drug Safety | 2013

Feasibility of using Australian GP prescribing data for pharmacovigilance

Malcolm B. Gillies; Filippo Galgani; Guy Tsafnat; Adam G. Dunn; Nancy Huang; Margaret Williamson


Collaboration


Dive into Filippo Galgani's collaboration.

Top Co-Authors


Achim G. Hoffmann

University of New South Wales


Paul Compton

University of New South Wales
