Ivan Porro
University of Genoa
Publications
Featured research published by Ivan Porro.
BMC Health Services Research | 2009
R. Valente; Angela Testi; Elena Tànfani; Marco Fato; Ivan Porro; Maurizio Santo; Gregorio Santori; Giancarlo Torre; Gianluca Ansaldo
Background: Prioritization of waiting lists for elective surgery represents a major issue in public systems, since patients often suffer the consequences of long waiting times. In addition, administrative and standardized data on waiting lists are generally lacking in Italy, where no detailed national reports are available. This is true even though since 2002 the National Government has defined implicit Urgency-Related Groups (URGs) associated with a Maximum Time Before Treatment (MTBT), similar to the Australian classification. The aim of this paper is to propose a model to manage waiting lists and prioritize admissions to elective surgery.
Methods: In 2001, the Italian Ministry of Health funded the Surgical Waiting List Info System (SWALIS) project, with the aim of experimenting with solutions for managing elective surgery waiting lists. The project was split into two phases. In the first phase, ten surgical units in the largest hospital of the Liguria Region were involved in the design of a pre-admission process model. The model was embedded in Web-based software, adopting the Italian URGs with minor modifications. The SWALIS pre-admission process was based on the following steps: 1) urgency assessment into URGs; 2) assignment of the corresponding pre-set MTBT; 3) real-time prioritization of every referral on the list, according to urgency and waiting time. In the second phase a prospective descriptive study was performed, in which a single general surgery unit was selected as the deployment and test bed, managing all registrations from March 2004 to March 2007 (1809 ordinary and 597 day cases). From August 2005, once the SWALIS model had been refined, waiting lists were monitored and analyzed, measuring the impact of the model through a set of performance indexes (average waiting time, length of the waiting list) and an Appropriate Performance Index (API).
Results: The SWALIS pre-admission model was used for all registrations in the test period, fully covering the case mix of the patients referred to surgery. The software produced real-time data and advanced parameters, providing patients and users with useful tools to manage waiting lists and to schedule hospital admissions with ease and efficiency. The model protected patients from horizontal and vertical inequities, while positive changes in the API were observed in the latest period, meaning that more patients were treated within their MTBT.
Conclusion: The SWALIS model achieves the purpose of providing useful data to monitor waiting lists appropriately. It allows homogeneous and standardized prioritization, enhancing transparency, efficiency and equity. Due to its applicability, it might represent a pragmatic approach to surgical waiting lists, useful in both clinical practice and strategic resource management.
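The prioritization step described above (urgency class, pre-set MTBT, real-time priority from urgency and waiting time) can be sketched as follows. The scoring formula and the URG-to-MTBT mapping below are illustrative assumptions, not the published SWALIS algorithm or the official Italian MTBT values.

```python
from datetime import date

# Hypothetical MTBT (maximum time before treatment) per urgency class, in
# days. The actual Italian URG/MTBT mapping differs; these are placeholders.
MTBT_DAYS = {"A": 30, "B": 60, "C": 180, "D": 365}

def priority(urgency: str, registration: date, today: date) -> float:
    """Relative priority: fraction of the allowed waiting time already
    elapsed. A value >= 1.0 means the patient has exceeded the MTBT."""
    waited = (today - registration).days
    return waited / MTBT_DAYS[urgency]

def sort_waiting_list(entries, today):
    """Order referrals so the most overdue (highest priority) come first."""
    return sorted(entries,
                  key=lambda e: priority(e["urg"], e["reg"], today),
                  reverse=True)
```

With this kind of index, a patient registered 31 days ago in class A (one day past a 30-day MTBT) outranks a class C patient registered on the same day, which is the behaviour the abstract describes: priority grows with waiting time relative to the urgency-dependent limit.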
BMC Bioinformatics | 2007
Ivan Porro; Livia Torterolo; Luca Corradi; Marco Fato; Adam Papadimitropoulos; Silvia Scaglione; Andrea Schenone; Federica Viti
Several systems have been presented in recent years to manage the complexity of large microarray experiments. Although good results have been achieved, most systems fall short in one or more areas. A Grid-based approach may provide a shared, standardized and reliable solution for the storage and analysis of biological data, in order to maximize the results of experimental efforts. A Grid framework has therefore been adopted, given the need to remotely access large amounts of distributed data and to scale computational performance for terabyte datasets. Two different biological studies have been planned in order to highlight the benefits that can emerge from our Grid-based platform. The described environment relies on storage and computational services provided by the gLite Grid middleware. The Grid environment also exploits the added value of metadata to let users better classify and search experiments. A state-of-the-art Grid portal has been implemented to hide the complexity of the framework from end users and to let them easily access the available services and data. The functional architecture of the portal is described. As a first test of system performance, a gene expression analysis was performed on a dataset of Affymetrix GeneChip® Rat Expression Array RAE230A chips from the ArrayExpress database. The analysis comprises three steps: (i) group opening and image set uploading, (ii) normalization, and (iii) model-based gene expression (based on the PM/MM difference model). Two different Linux versions (sequential and parallel) of the dChip software have been developed to implement the analysis and have been tested on a cluster. The results show that parallelizing the analysis process and executing parallel jobs on distributed computational resources actually improves performance. Moreover, the Grid environment has been tested both for the possibility of uploading and accessing distributed datasets through the Grid middleware and for its ability to manage the execution of jobs on distributed computational resources. Results from the Grid test will be discussed in a further paper.
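The PM/MM difference idea behind dChip's model-based expression step can be sketched as below. The real dChip model fits probe affinities jointly across arrays; this trimmed-mean summary of perfect-match (PM) minus mismatch (MM) intensities is a deliberately simplified stand-in for illustration only.

```python
# Toy stand-in for dChip's PM/MM difference model: summarise one probe set on
# one array from paired PM/MM probe intensities, discarding extreme probes.

def pm_mm_expression(pm, mm, trim=1):
    """Trimmed mean of PM - MM differences for one probe set.

    pm, mm: equal-length lists of probe intensities.
    trim: number of extreme differences dropped from each end.
    """
    diffs = sorted(p - m for p, m in zip(pm, mm))
    # Drop the `trim` lowest and highest differences to damp outlier probes.
    core = diffs[trim:len(diffs) - trim] if len(diffs) > 2 * trim else diffs
    return sum(core) / len(core)
```

For example, a probe set where one PM probe cross-hybridizes (here the 300 intensity) still gets a sensible summary because the outlying difference is trimmed away before averaging.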
BMC Bioinformatics | 2009
Luca Corradi; Valentina Mirisola; Ivan Porro; Livia Torterolo; Marco Fato; Paolo Romano; Ulrich Pfeffer
Background: Complex microarray gene expression datasets can be used for many independent analyses and are particularly interesting for the validation of potential biomarkers and multi-gene classifiers. This article presents a novel method to correlate microarray gene expression data with clinico-pathological data through a combination of available and newly developed processing tools.
Results: We developed Survival Online (available at http://ada.dist.unige.it:8080/enginframe/bioinf/bioinf.xml), a Web-based system that allows for the analysis of Affymetrix GeneChip microarrays by using a parallel version of dChip. The user first selects pre-loaded datasets or single samples thereof, as well as single genes or lists of genes. Expression values of the selected genes are then correlated with sample annotation data by uni- or multivariate Cox regression and survival analyses. The system was tested using publicly available breast cancer datasets and GO (Gene Ontology) derived gene lists or single genes for survival analyses.
Conclusion: The system can be used by biomedical researchers without specific computational skills to validate potential biomarkers or multi-gene classifiers. The design of the service, the parallelization of pre-processing tasks and the implementation on an HPC (High Performance Computing) environment make this system a useful tool for validation on several independent datasets.
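The data flow such a service performs, from gene expression values to a survival comparison, can be illustrated with a minimal sketch: split samples by median expression of a candidate gene, then estimate group survival with a Kaplan-Meier product-limit estimator. Note this substitutes a simpler estimator for the Cox regression the paper actually uses; it only illustrates the pipeline shape.

```python
# Minimal survival-analysis sketch: median split on gene expression plus a
# pure-Python Kaplan-Meier estimator (a simpler substitute for Cox regression).

def kaplan_meier(times, events):
    """Return (time, survival) points; events[i] is 1 for death, 0 censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        n0, deaths = at_risk, 0
        # Group all subjects sharing this time point.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            at_risk -= 1
            i += 1
        if deaths:
            surv *= 1 - deaths / n0   # product-limit update
            curve.append((t, surv))
    return curve

def median_split(expression):
    """Indices of high-expression and low-expression samples for one gene."""
    med = sorted(expression)[len(expression) // 2]
    high = [i for i, e in enumerate(expression) if e >= med]
    low = [i for i, e in enumerate(expression) if e < med]
    return high, low
```

In the full service, each group's curve would then be compared (e.g. by a log-rank test or the Cox model) to judge whether the gene is a candidate biomarker.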
Future Generation Computer Systems | 2007
Francesco Beltrame; Adam Papadimitropoulos; Ivan Porro; Silvia Scaglione; Andrea Schenone; Livia Torterolo; Federica Viti
Microarray techniques are successfully used to investigate thousands of gene expression profiles in a variety of genomic analyses such as gene identification, drug discovery and clinical diagnosis, providing a large amount of genomic data for the overall research community. A Grid-based Environment for distributed Microarray data Management and Analysis (GEMMA) is being built. The platform is designed to provide shared, standardized and reliable tools for managing and analyzing biological data related to bone marrow stem cell cultures, in order to maximize the results of distributed experiments. Different microarray analysis algorithms may be offered to the end user through a web interface. A set of modular and independent applications may be published on the portal, and either single algorithms or a combination of them may be invoked by the user through a workflow strategy. Services may be implemented within an existing Grid computing infrastructure to solve problems concerning both large dataset storage (a data-intensive problem) and long computational times (a computing-intensive problem). Moreover, experimental data annotation may be collected according to the same rules and stored through the Grid portal, using a metadata schema that allows comprehensive and replicable sharing of microarray experiments among different researchers. The environment has been tested, so far, with respect to the performance of Grid parallelization of a microarray-based gene expression analysis. First results show a very promising speedup ratio.
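The workflow strategy described here, in which modular, independent analysis steps are published and the user invokes single algorithms or a chain of them, can be sketched as a simple function pipeline. The step names below are illustrative placeholders, not GEMMA's actual algorithms.

```python
# Sketch of a workflow strategy: each published analysis step is an
# independent function; a workflow is just an ordered list of steps.

def scale_to_unit(values):
    """Example step: scale intensities into the [0, 1] range."""
    top = max(values)
    return [v / top for v in values]

def threshold(cutoff):
    """Example step factory: keep only values at or above the cutoff."""
    return lambda values: [v for v in values if v >= cutoff]

def run_workflow(steps, data):
    """Apply each step in order, feeding each one the previous output."""
    for step in steps:
        data = step(data)
    return data
```

A user could then invoke a single step (`run_workflow([scale_to_unit], data)`) or compose several (`run_workflow([scale_to_unit, threshold(0.5)], data)`), which is the flexibility the workflow strategy is meant to provide.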
BMC Bioinformatics | 2008
Luca Corradi; Marco Fato; Ivan Porro; Silvia Scaglione; Livia Torterolo
Background: Microarray techniques are one of the main methods used to investigate thousands of gene expression profiles, shedding light on the complex biological processes responsible for serious diseases, with great scientific impact and a wide application area. Several standalone applications have been developed to analyze microarray data. Two of the best-known free analysis software packages are the R-based Bioconductor and dChip. The part of the dChip software concerning the calculation and analysis of gene expression has been modified to permit its execution on both cluster environments (supercomputers) and Grid infrastructures (distributed computing). This work is not aimed at replacing existing tools; rather, it provides researchers with a method to analyze large datasets without hardware or software constraints.
Results: An application able to perform the computation and analysis of gene expression on large datasets has been developed using algorithms provided by dChip. Different tests have been carried out to validate the results and to compare the performance obtained on different infrastructures. Validation tests were performed using a small dataset related to the comparison of HUVEC (Human Umbilical Vein Endothelial Cells) and fibroblasts, derived from the same donors, treated with IFN-α. Performance tests were then executed, using a large dataset of about 1000 samples from breast cancer patients, purely to compare performance across environments.
Conclusion: A Grid-enabled software application for the analysis of large microarray datasets has been proposed. The dChip software has been ported to the Linux platform and modified, using appropriate parallelization strategies, to permit its execution on both cluster environments and Grid infrastructures. The added value provided by the use of Grid technologies is the possibility to exploit both computational and data Grid infrastructures to analyze large datasets of distributed data. The software has been validated, and performance on cluster and Grid environments has been compared, obtaining good scalability results.
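The parallelization strategy such a port relies on (independent per-gene work split into chunks, processed concurrently, then merged) can be sketched as below. A thread pool stands in for cluster nodes or Grid jobs; the chunking and merging logic is the same either way, and the per-chunk summary used here is an illustrative placeholder for dChip's actual computation.

```python
# Sketch of chunk-level parallelism over a gene expression matrix.
from concurrent.futures import ThreadPoolExecutor

def summarise_chunk(chunk):
    """Per-chunk work: here, mean expression per gene across samples."""
    return {gene: sum(vals) / len(vals) for gene, vals in chunk}

def parallel_summarise(gene_rows, n_workers=4, chunk_size=2):
    """Split (gene, values) rows into chunks and process them concurrently."""
    chunks = [gene_rows[i:i + chunk_size]
              for i in range(0, len(gene_rows), chunk_size)]
    merged = {}
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # Each chunk is independent, so results can be merged in any order.
        for part in pool.map(summarise_chunk, chunks):
            merged.update(part)
    return merged
```

Because genes are processed independently, the same split-compute-merge pattern scales from one machine to a cluster or Grid simply by dispatching chunks as separate jobs.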
Journal of Medical Systems | 2005
Gianluca De Leo; Santosh Krishna; Sue Boren; Marco Fato; Ivan Porro; E. Andrew Balas
Diabetes is a chronic disease that causes a great deal of morbidity and mortality and poor quality of life for millions of people. Continuing care and patient education help maintain good control of the disease and prevent complications. Since currently available resources limit such education to clinic or physician visits, alternative ways to educate people about diabetes need to be identified. In this article we discuss the implementation of an automated diabetes education call center, define the evaluation procedures we adopted, summarize general guidelines for the implementation of the entire system based on our experience, and present preliminary results on the use of the call center. We believe our system provides “active health”, since we deliver educational messages to patients at regular intervals and at the time of their choice, without waiting for their actions.
Concurrency and Computation: Practice and Experience | 2011
Jano van Hemert; Jos Koetsier; Livia Torterolo; Ivan Porro; Maurizio Melato; R. Barbera
Scientific gateways in the form of web portals are becoming the popular approach to sharing knowledge and resources around a topic within a community of researchers. Unfortunately, the development of web portals is expensive and requires specialist skills. Commercial and more generic web portals have a much larger user base and can afford this kind of development. Here we present two solutions that address this problem in the area of portals for scientific computing; both take the same approach: the whole process of designing, delivering and maintaining a portal can be made more cost-effective by generating the portal from a description rather than programming it in the traditional sense. We present four successful use cases that show how this process works and the results it can deliver.
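The "generate a portal from a description" idea can be illustrated with a tiny sketch: a declarative description of a page is rendered into an interface, so changing the portal means editing data rather than code. The description format and field names below are invented for illustration and are not the schema either of the two solutions uses.

```python
# Minimal portal generator: render an HTML form from a declarative
# description instead of hand-coding each page.

def render_form(description):
    """Turn a {title, fields: [{name, label, type?}]} dict into HTML."""
    rows = [f'<h1>{description["title"]}</h1>', '<form>']
    for f in description["fields"]:
        rows.append(f'<label>{f["label"]}</label>')
        # Unspecified field types default to plain text inputs.
        rows.append(f'<input name="{f["name"]}" type="{f.get("type", "text")}">')
    rows.append('</form>')
    return "\n".join(rows)
```

Maintaining the portal then reduces to maintaining descriptions, which is what makes the approach cheaper than traditional portal programming: a new job-submission page is a new dict, not a new page of code.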
BMC Medical Informatics and Decision Making | 2012
Luca Corradi; Ivan Porro; Andrea Schenone; Parastoo Momeni; Raffaele Ferrari; Flavio Nobili; M. Ferrara; Gabriele Arnulfo; Marco Fato
Background: Robust, extensible and distributed databases integrating clinical, imaging and molecular data represent a substantial challenge for modern neuroscience. It is even more difficult to provide extensible software environments able to effectively target the rapidly changing data requirements and structures of research experiments. There is increasing demand from the neuroscience community for software tools that address the following technical challenges: (i) supporting researchers in the medical field in carrying out data analysis using integrated bioinformatics services and tools; (ii) handling multimodal/multiscale data and metadata, enabling the injection of several different data types according to structured schemas; (iii) providing high extensibility, in order to address the different requirements of a large variety of applications simply through a user runtime configuration.
Methods: A dynamically extensible data structure supporting collaborative multidisciplinary research projects in neuroscience has been defined and implemented. We considered extensibility from two points of view. First, data flexibility was improved through a methodology for the dynamic creation and use of data types and related metadata, based on the definition of a “meta” data model. This way, users are not constrained to a set of predefined data types, and the model is easily extensible and applicable to different contexts. Second, users can easily customize and extend the experimental procedures in order to track each step of acquisition or analysis. This is achieved through a process-event data structure, a multipurpose taxonomic schema composed of two generic main objects: events and processes. A repository was then built on this data model and structure, and deployed on distributed resources using a Grid-based approach. Finally, data integration was addressed by providing the repository application with an efficient dynamic interface, designed to let the user both query the data by defined data types and view all the data of every patient in an integrated and simple way.
Results: The results of our work have been twofold. First, a dynamically extensible data model was implemented and tested, based on a “meta” data model that enables users to define their own data types independently of the application context. This data model allows users to dynamically include additional data types without rebuilding the underlying database. A complex process-event data structure was then built on this data model, describing patient-centered diagnostic processes and merging information from data and metadata. Second, a repository implementing this data structure was deployed on a distributed Data Grid, providing scalability in terms of both data input and data storage, and exploiting distributed data and computational approaches to share resources more efficiently.
Conclusions: Based on this repository, data management has been made possible through a friendly web interface. The driving principle of not forcing users into preconfigured data types has been satisfied: it is up to users to dynamically configure the data model for a given experiment or data acquisition program, making the system potentially suitable for customized applications.
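The core of the "meta" data-model idea, data types defined at runtime rather than hard-coded in the schema, with acquisitions tracked as generic process and event records, can be sketched as below. The class and field names are illustrative, not the repository's actual schema.

```python
# Sketch of a dynamically extensible data model: DataType instances are
# created at runtime, and Process/Event records are validated against them.
from dataclasses import dataclass, field

@dataclass
class DataType:
    name: str
    fields: dict  # field name -> expected Python type

    def validate(self, record: dict) -> bool:
        """A record matches iff it has exactly these fields with these types."""
        return (set(record) == set(self.fields) and
                all(isinstance(record[k], t) for k, t in self.fields.items()))

@dataclass
class Event:
    datatype: DataType
    payload: dict

@dataclass
class Process:
    name: str
    events: list = field(default_factory=list)

    def add_event(self, datatype: DataType, payload: dict) -> None:
        """Attach a typed event (e.g. one acquisition or analysis step)."""
        if not datatype.validate(payload):
            raise ValueError(f"payload does not match type {datatype.name}")
        self.events.append(Event(datatype, payload))
```

A user could define a hypothetical MRI acquisition type at runtime (`DataType("MRI", {"subject": str, "tesla": float})`) and attach it to a diagnostic process, without any change to the underlying storage schema, which is the extensibility property the paper emphasizes.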
Informatics for Health & Social Care | 2011
Lorenzo Guerriero; Ezio Maria Ferdeghini; Silvia Rita Viola; Ivan Porro; Angela Testi; Remo Bedini
Patients’ clinical and healthcare data should be available virtually everywhere, both to provide a more efficient and effective medical approach to their pathologies and to enable public healthcare decision makers to verify the efficacy and efficiency of the adopted healthcare processes. Unfortunately, the customised solutions adopted by many local Health Information Systems in Italy make it difficult to share the stored data outside their own environment. In recent years, worldwide initiatives have aimed to overcome this sharing limitation. An important issue in the transition towards standardised, integrated information systems is the possible loss of previously collected data. The project presented here realises an architecture able to guarantee reliable, automatic, user-transparent storage and retrieval of information from both modern and legacy systems. The technical and management solutions provided by the project avoid data loss and overlapping, and allow data integration and organisation suitable for data-mining and data-warehousing analysis.
2010 IEEE Workshop on Health Care Management (WHCM) | 2010
R. Valente; Angela Testi; Elena Tànfani; Marco Fato; Ivan Porro; Maurizio Santo; Gregorio Santori; Giancarlo Torre; Gianluca Ansaldo
Operating room (OR) session planning is often critical, due to the potential clinical consequences of errors, while access to surgery often follows non-transparent prioritization. In previous projects we designed the Surgical Waiting List InfoSystem (SWALIS) model to prioritize elective waiting lists. In the present study we present further modeling work, aiming at building instruments to plan OR sessions. Two new indexes were identified to measure the efficiency of planning both admissions and OR sessions, on the basis of the SWALIS prioritization. The original SWALIS data underwent simulation on a cohort of 1612 patients, including retrospective and cross-sectional calculation and trend analysis. The availability of objective information allowed measurement of both the demand and the service provided. Furthermore, the new intelligible data on the efficiency and appropriateness of pre-admission management allowed the design of tools to plan OR sessions according to actual and forecasted demand.
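The planning step this line of work builds toward, filling an OR session of fixed capacity from a prioritized waiting list, can be sketched as a greedy scheduler. The scoring and durations below are illustrative; the paper's actual planning indexes are not reproduced here.

```python
# Sketch of prioritization-driven OR session planning: admit patients in
# priority order while their expected durations fit the session capacity.

def plan_session(waiting_list, session_minutes):
    """waiting_list: (patient_id, priority, expected_duration_min) tuples.
    Returns the scheduled patient ids, highest priority first."""
    scheduled, used = [], 0
    for pid, _prio, dur in sorted(waiting_list, key=lambda w: -w[1]):
        if used + dur <= session_minutes:
            scheduled.append(pid)
            used += dur
    return scheduled
```

A greedy fill is only one possible policy (it can leave capacity unused that a different combination would fill), but it makes the link between list prioritization and session planning concrete: the same priority index that orders the waiting list also drives which admissions are booked into each session.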