
Publication


Featured research published by Andrea Tangherloni.


Briefings in Bioinformatics | 2016

Graphics processing units in bioinformatics, computational biology and systems biology.

Marco S. Nobile; Paolo Cazzaniga; Andrea Tangherloni; Daniela Besozzi

Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and drawbacks of these parallel architectures. The complete list of GPU-powered tools reviewed here is available at http://bit.ly/gputools.


BMC Bioinformatics | 2017

LASSIE: simulating large-scale models of biochemical systems on GPUs

Andrea Tangherloni; Marco S. Nobile; Daniela Besozzi; Giancarlo Mauri; Paolo Cazzaniga

Background: Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods, used to achieve a thorough understanding of the emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly exceed the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs.

Results: LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and requiring no expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the first-order Backward Differentiation Formulae in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm.

Conclusions: LASSIE adopts a novel fine-grained parallelization strategy to distribute across the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species. Notably, thanks to its smaller memory footprint, LASSIE is able to perform fast simulations of even larger models, for which the tested CPU implementation of LSODA failed to reach termination. LASSIE is therefore expected to make an important breakthrough in Systems Biology applications, enabling faster and more in-depth computational analyses of large-scale models of complex biological systems.
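
As a rough illustration of the model-to-ODE translation described in the abstract, the sketch below builds a mass-action right-hand side from a reaction list and integrates a toy two-species model. This is not LASSIE's code: LASSIE runs on the GPU and switches between Runge-Kutta-Fehlberg and first-order BDF, whereas this CPU sketch uses a fixed-step classical Runge-Kutta scheme; all function names and the toy model are hypothetical.

```python
def mass_action_rhs(reactions):
    """reactions: list of (reactant_indices, product_indices, k) triples."""
    def rhs(y):
        dy = [0.0] * len(y)
        for reactants, products, k in reactions:
            rate = k
            for s in reactants:
                rate *= y[s]          # mass-action: product of reactant amounts
            for s in reactants:
                dy[s] -= rate
            for s in products:
                dy[s] += rate
        return dy
    return rhs

def rk4_step(f, y, h):
    """One classical fixed-step Runge-Kutta step (stand-in for RKF/BDF)."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Toy reversible isomerization: A -> B (k=1.0), B -> A (k=0.5).
# At equilibrium the ratio B/A equals 1.0/0.5 = 2.
rhs = mass_action_rhs([([0], [1], 1.0), ([1], [0], 0.5)])
y = [1.0, 0.0]
for _ in range(2000):                 # integrate to t = 20 with h = 0.01
    y = rk4_step(rhs, y, 0.01)
```

LASSIE parallelizes exactly these per-reaction and per-species calculations across GPU cores, which is why the fine-grained strategy pays off for thousands of reactions.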


ieee symposium series on computational intelligence | 2016

Multimodal medical image registration using Particle Swarm Optimization: A review

Leonardo Rundo; Andrea Tangherloni; Carmelo Militello; Maria Carla Gilardi; Giancarlo Mauri

Intensity-based registration techniques have been increasingly used in multimodal image co-registration, a fundamental task in medical imaging, because they enable the integration of different images into a single representation, such that complementary information can be easily accessed and fused. These schemes usually require the optimization of a similarity metric (e.g., Mutual Information) calculated on the input images. Local optimization methods often do not obtain good results, possibly leading to premature convergence to local optima, especially with non-smooth fitness functions. In these cases, global optimization methods can be adopted, and Swarm Intelligence techniques represent a very effective and efficient solution. This paper focuses on biomedical image registration using Particle Swarm Optimization (PSO). Several literature approaches are critically reviewed by investigating modifications and hybridizations with Evolutionary Strategies. Since biomedical image registration represents a challenging clinical task, the experimental findings encourage further research in the near future.
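
To make the similarity metric concrete, the sketch below computes Mutual Information from the joint histogram of two discrete 1-D "images" and recovers a known circular shift by exhaustive search over the single transformation parameter. In the schemes reviewed in the paper, an optimizer such as PSO would search a multi-dimensional transformation space instead; the data and names here are purely illustrative.

```python
from math import log2

def mutual_information(a, b):
    """MI between two equal-length sequences of discrete intensities,
    estimated from their joint histogram."""
    n = len(a)
    joint, pa, pb = {}, {}, {}
    for x, y in zip(a, b):
        joint[(x, y)] = joint.get((x, y), 0) + 1
        pa[x] = pa.get(x, 0) + 1
        pb[y] = pb.get(y, 0) + 1
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * log2(pxy / ((pa[x] / n) * (pb[y] / n)))
    return mi

# Toy 1-D "images": the moving image is the fixed one circularly shifted by 3.
fixed = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0, 1, 3, 1, 0, 0, 0]
moving = fixed[3:] + fixed[:3]

def fitness(shift):
    """Similarity as a function of the candidate registration parameter."""
    idx = -shift % len(moving)
    candidate = moving[idx:] + moving[:idx]
    return mutual_information(fixed, candidate)

best_shift = max(range(len(fixed)), key=fitness)   # exhaustive stand-in for PSO
```

At perfect alignment the joint histogram collapses onto its diagonal and MI equals the entropy of the fixed image, which is why MI peaks at the correct shift.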


computational intelligence in bioinformatics and computational biology | 2016

GPU-powered Bat Algorithm for the parameter estimation of biochemical kinetic values

Andrea Tangherloni; Marco S. Nobile; Paolo Cazzaniga

The emergent behavior of biochemical systems can be investigated by means of mathematical modeling and computational analyses, which usually require the automatic inference of the unknown values of the model parameters. This problem, known as Parameter Estimation (PE), is usually tackled with bio-inspired meta-heuristics for global optimization, most notably Particle Swarm Optimization (PSO). In this work we assess the performance of PSO and of the Bat Algorithm with differential operator and Lévy flights trajectories (DLBA). In particular, we compared these meta-heuristics for PE using two biochemical models: the expression of genes in prokaryotes and the heat shock response in eukaryotes. In our tests, we also evaluated the impact on PE of different strategies for the initial positioning of individuals within the search space. Our results show that DLBA achieves results comparable to PSO, but converges to better solutions when a uniform initialization is employed. Since every iteration of DLBA requires three fitness evaluations for each bat, the whole methodology is built around a GPU-powered biochemical simulator (cupSODA), which parallelizes this process. We show that the acceleration achieved with cupSODA strongly reduces the running time, with an empirical 61× speedup obtained by comparing an Nvidia GeForce Titan GTX with an Intel Core i7-4790K CPU. Moreover, we show that DLBA always outperforms PSO with respect to the computational time required to execute the optimization process.
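
A heavily simplified, CPU-only sketch of a Bat Algorithm with Lévy-flight local search is given below. This is not the exact DLBA variant (it omits the differential operator and, of course, the cupSODA-based fitness evaluation); all parameter values are illustrative, and the sphere function stands in for the PE fitness.

```python
import random
from math import gamma, pi, sin

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

def bat_algorithm(fitness, dim, n_bats=20, iters=200, lo=-5.0, hi=5.0, seed=42):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_bats):
            freq = rng.random()                       # random pulse frequency
            vel[i] = [v + (x - b) * freq
                      for v, x, b in zip(vel[i], pos[i], best)]
            cand = [x + v for x, v in zip(pos[i], vel[i])]
            if rng.random() < 0.5:                    # local Levy walk around best
                cand = [b + 0.01 * levy_step(rng) for b in best]
            cand = [min(max(c, lo), hi) for c in cand]
            if fitness(cand) < fitness(pos[i]):       # greedy acceptance
                pos[i] = cand
                if fitness(cand) < fitness(best):
                    best = cand[:]
    return best

def sphere(p):
    return sum(x * x for x in p)

best = bat_algorithm(sphere, dim=2)
```

In the paper's setting each candidate position encodes a kinetic parameterization, and the three fitness evaluations per bat per iteration are what cupSODA batches on the GPU.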


congress on evolutionary computation | 2017

Proactive Particles in Swarm Optimization: A settings-free algorithm for real-parameter single objective optimization problems

Andrea Tangherloni; Leonardo Rundo; Marco S. Nobile

Particle Swarm Optimization (PSO) is an effective Swarm Intelligence technique for the optimization of non-linear and complex high-dimensional problems. Since PSO's performance is strongly dependent on the choice of its functioning settings, in this work we consider a self-tuning version of PSO, called Proactive Particles in Swarm Optimization (PPSO). PPSO leverages Fuzzy Logic to dynamically determine the best settings for the inertia weight, cognitive factor and social factor. The PPSO algorithm significantly differs from other versions of PSO relying on Fuzzy Logic, because specific settings are assigned to each particle according to its history, instead of being globally assigned to the whole swarm. In this way, PPSO's particles gain limited autonomous and proactive intelligence with respect to the reactive agents of standard PSO. Our results show that PPSO achieves overall good optimization performance on the benchmark functions of the CEC 2017 test suite, with the exception of those based on the Schwefel function, whose fitness landscape seems to mislead the fuzzy reasoning. Moreover, on many benchmark functions, PPSO exhibits a higher speed of convergence than PSO in the case of high-dimensional problems.
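
The per-particle self-tuning idea can be caricatured in a few lines. In the sketch below, a crude improvement-based rule stands in for PPSO's fuzzy rule base, assigning each particle its own inertia weight based on its recent history; this is an illustrative sketch under that assumption, not the published algorithm, and all names and constants are hypothetical.

```python
import random

def sphere(p):
    return sum(x * x for x in p)

def ppso_sketch(fitness, dim, n=20, iters=100, lo=-5.0, hi=5.0, seed=7):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=fitness)[:]
    inertia = [0.9] * n                   # one inertia weight per particle
    prev = [fitness(p) for p in pos]
    for _ in range(iters):
        for i in range(n):
            # Crude stand-in for PPSO's fuzzy rule base: a particle that just
            # improved keeps exploring (high inertia); a stagnating one is
            # nudged toward exploitation (lower inertia).
            f = fitness(pos[i])
            inertia[i] = 0.9 if f < prev[i] else max(0.4, inertia[i] - 0.05)
            prev[i] = f
            for d in range(dim):
                vel[i][d] = (inertia[i] * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pos[i]) < fitness(gbest):
                    gbest = pos[i][:]
    return gbest

best = ppso_sketch(sphere, dim=2)
```

The key structural point matches the abstract: the setting is a per-particle state updated from that particle's own history, not a single swarm-wide schedule.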


computational intelligence in bioinformatics and computational biology | 2017

Reboot strategies in particle swarm optimization and their impact on parameter estimation of biochemical systems

Simone Spolaor; Andrea Tangherloni; Leonardo Rundo; Marco S. Nobile; Paolo Cazzaniga

Computational methods adopted in the field of Systems Biology require the complete knowledge of reaction kinetic constants to perform simulations of the dynamics and understand the emergent behavior of biochemical systems. However, kinetic parameters of biochemical reactions are often difficult or impossible to measure, so they are generally inferred from experimental data, in a process known as Parameter Estimation (PE). We consider here a PE methodology that exploits Particle Swarm Optimization (PSO) to estimate an appropriate kinetic parameterization, by comparing experimental time-series target data with in silico dynamics, simulated by using the parameterization encoded by each particle. In this work we present three different reboot strategies for PSO, whose aim is to reinitialize particle positions to prevent particles from getting trapped in local optima, and we compare the performance of PSO coupled with the reboot strategies against standard PSO on the PE of two biochemical systems. Since PE requires a huge number of simulations at each iteration, in this work we exploit a GPU-powered deterministic simulator, cupSODA, which performs all simulations and fitness evaluations in a parallel fashion. Finally, we show that the performance of our implementation scales sublinearly with respect to the swarm size, even on outdated GPUs.
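
One conceivable reboot criterion, sketched below, reinitializes every particle position when the global best fitness has stagnated for a fixed number of iterations. This is a generic illustration of the idea, not necessarily one of the three strategies studied in the paper; the function name and the stagnation test are assumptions.

```python
import random

def reboot_if_stagnant(positions, best_history, window, lo, hi, rng):
    """Re-sample the whole swarm uniformly in the search space if the global
    best fitness recorded in best_history has not improved over the last
    `window` iterations (only the memory of the global best is kept)."""
    stagnant = (len(best_history) > window and
                min(best_history[-window:]) >= min(best_history[:-window]))
    if stagnant:
        positions = [[rng.uniform(lo, hi) for _ in p] for p in positions]
    return positions, stagnant

rng = random.Random(0)
swarm = [[0.5, -0.5], [1.0, 2.0], [-3.0, 4.0]]

# Best fitness flat over the last three iterations: the swarm is rebooted.
rebooted, flag1 = reboot_if_stagnant(
    swarm, [5.0, 4.0, 3.0, 3.0, 3.0, 3.0], 3, -5.0, 5.0, rng)
# Best fitness still improving: positions are left untouched.
kept, flag2 = reboot_if_stagnant(
    swarm, [5.0, 4.0, 3.0, 2.0, 1.0], 3, -5.0, 5.0, rng)
```

Keeping the global-best memory while scattering the positions is what lets a rebooted swarm escape a local optimum without discarding the best parameterization found so far.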


congress on evolutionary computation | 2016

GPU-powered and settings-free parameter estimation of biochemical systems

Marco S. Nobile; Andrea Tangherloni; Daniela Besozzi; Paolo Cazzaniga

To understand the emergent behavior of biochemical systems, computational analyses generally require the inference of unknown reaction kinetic constants, a problem known as parameter estimation (PE). In this work we propose a PE methodology that exploits Particle Swarm Optimization (PSO) to examine a set of candidate kinetic parameterizations, whose fitness is evaluated by comparing given target time-series of experimental data with in silico dynamics, simulated by using the parameterization encoded by each particle. In particular, we consider a Fuzzy Logic-based version of PSO, called Proactive Particles in Swarm Optimization (PPSO), that automatically tunes the settings (inertia, cognitive and social factors) of each particle, independently of all other particles in the swarm. Since the optimization phase requires a large number of simulations for each particle at each iteration, we exploit a GPU-accelerated deterministic simulator, called cupSODA, that automatically generates the system of Ordinary Differential Equations associated with the biochemical system and performs its simulation for each candidate parameterization. We compare the performance of PPSO with that of PSO on the PE problem by considering two biochemical systems as test cases. In addition, we evaluate the impact on PE of the different strategies adopted, in both PPSO and PSO, for the selection of the initial positions of particles within the search space. We prove the effectiveness of our settings-free PE methodology by showing that PPSO outperforms PSO with respect to the computational time required to execute the optimization, while achieving comparable results concerning the fitness of the best parameterization found.
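
The particle fitness used in this kind of PE can be illustrated with a toy one-parameter model: the candidate value encoded by a particle is simulated and compared against the target time series. Here forward Euler on dy/dt = -k*y stands in for the cupSODA-generated ODE simulation, and a grid search stands in for the swarm; all names are hypothetical.

```python
def simulate_decay(k, y0=10.0, h=0.1, steps=50):
    """Forward-Euler integration of the toy model dy/dt = -k*y (a stand-in
    for the cupSODA simulation of a full biochemical model)."""
    y, series = y0, []
    for _ in range(steps):
        series.append(y)
        y += h * (-k * y)
    return series

def particle_fitness(candidate_k, target):
    """Distance between the target time series and the dynamics simulated
    with the parameterization encoded by a particle."""
    sim = simulate_decay(candidate_k)
    return sum((s - t) ** 2 for s, t in zip(sim, target))

target = simulate_decay(0.8)     # pretend these are the experimental data
best_k = min((k / 100.0 for k in range(1, 200)),
             key=lambda k: particle_fitness(k, target))
```

Each fitness evaluation requires a full simulation, which is exactly why batching the candidate parameterizations on the GPU dominates the cost of the optimization.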


International Journal of Imaging Systems and Technology | 2018

NeXt for neuro‐radiosurgery: A fully automatic approach for necrosis extraction in brain tumor MRI using an unsupervised machine learning technique

Leonardo Rundo; Carmelo Militello; Andrea Tangherloni; Giorgio Ivan Russo; Salvatore Vitabile; Maria Carla Gilardi; Giancarlo Mauri

Stereotactic neuro‐radiosurgery is a well‐established therapy for intracranial diseases, especially brain metastases and highly invasive cancers that are difficult to treat with conventional surgery or radiotherapy. Nowadays, magnetic resonance imaging (MRI) is the most widely used modality in radiation therapy for soft‐tissue anatomical districts, allowing for accurate gross tumor volume (GTV) segmentation. Investigating the necrotic material within the whole tumor also has significant clinical value in treatment planning and cancer progression assessment. These pathological necrotic regions are generally characterized by hypoxia, which is implicated in several aspects of tumor development and growth. Therefore, particular attention must be paid to these hypoxic areas, which could lead to recurrent cancers and resistance to therapeutic damage. This article proposes a novel fully automatic method for necrosis extraction (NeXt), using the Fuzzy C‐Means algorithm after GTV segmentation. This unsupervised Machine Learning technique detects and delineates necrotic regions even in heterogeneous cancers. The overall processing pipeline is an integrated two‐stage segmentation approach useful to support neuro‐radiosurgery. NeXt can be exploited for dose escalation, allowing for a more selective strategy to increase the radiation dose in hypoxic radioresistant areas. Moreover, NeXt analyzes contrast‐enhanced T1‐weighted MR images alone and does not require multispectral MRI data, representing a clinically feasible solution. This study considers an MRI dataset composed of 32 brain metastatic cancers, 20 of which present necroses. The segmentation accuracy of NeXt was evaluated using both spatial overlap‐based and distance‐based metrics, achieving the following average values: Dice similarity coefficient 95.93% ± 4.23% and mean absolute distance 0.225 ± 0.229 pixels.
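
The clustering step at the core of NeXt is Fuzzy C-Means. A minimal 1-D version on toy intensity values (not the full two-stage MRI pipeline, and with a deterministic initialization chosen for this sketch) can be written as:

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Plain 1-D Fuzzy C-Means; centers are initialized evenly across the
    intensity range. Returns the sorted cluster centers."""
    step = (max(data) - min(data)) / (c - 1)
    centers = [min(data) + j * step for j in range(c)]
    for _ in range(iters):
        # Fuzzy membership of every intensity in every cluster
        u = []
        for x in data:
            dist = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((dist[j] / dist[k]) ** (2.0 / (m - 1.0))
                                for k in range(c))
                      for j in range(c)])
        # Update centers as membership-weighted means of the intensities
        centers = [sum(u[i][j] ** m * data[i] for i in range(len(data))) /
                   sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return sorted(centers)

# Toy "intensities": a dark necrotic-like cluster and a bright enhancing one.
intensities = [10, 12, 9, 11, 200, 210, 205, 195, 14, 198]
dark, bright = fuzzy_c_means(intensities)
```

Because memberships are fuzzy rather than crisp, voxels with ambiguous intensities contribute partially to both clusters, which is what lets the method delineate necrosis in heterogeneous tumors.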


italian workshop on neural nets | 2017

Computer-Assisted Approaches for Uterine Fibroid Segmentation in MRgFUS Treatments: Quantitative Evaluation and Clinical Feasibility Analysis

Leonardo Rundo; Carmelo Militello; Andrea Tangherloni; Giorgio Ivan Russo; Roberto Lagalla; Giancarlo Mauri; Maria Carla Gilardi; Salvatore Vitabile

Nowadays, uterine fibroids can be treated using Magnetic Resonance guided Focused Ultrasound Surgery (MRgFUS), a non-invasive therapy exploiting thermal ablation. In order to measure the Non-Perfused Volume (NPV) for treatment response assessment, the ablated fibroid areas (i.e., the Region of Treatment, ROT) are manually contoured by a radiologist. This operator-dependent methodology can affect the subsequent follow-up phases, due to the lack of result repeatability. In addition, the fully manual procedure is time-consuming, considerably increasing execution times. These critical issues can be addressed only by means of accurate and efficient automated Pattern Recognition approaches. In this contribution, we evaluate two computer-assisted segmentation methods, which we have already developed and validated, for uterine fibroid segmentation in MRgFUS treatments. A quantitative comparison of segmentation accuracy, in terms of area-based and distance-based metrics, was performed. The clinical feasibility of these approaches was assessed from the physicians' perspective, by proposing an integrated solution.
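
One of the area-based metrics typically used in such quantitative comparisons is the Dice similarity coefficient. A minimal sketch on toy binary masks (the masks and names here are illustrative, not data from the study):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as sets of
    pixel coordinates: 2 * |A & B| / (|A| + |B|)."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

automatic = {(0, 0), (0, 1), (1, 0), (1, 1)}   # pixels segmented automatically
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}      # pixels in the manual contour
overlap = dice_coefficient(automatic, manual)  # 2*3 / (4+4) = 0.75
```

Distance-based metrics (e.g., mean absolute distance between contours) complement Dice because two masks can overlap well in area while still disagreeing along the boundary.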


The Journal of Supercomputing | 2017

Gillespie's Stochastic Simulation Algorithm on MIC coprocessors

Andrea Tangherloni; Marco S. Nobile; Paolo Cazzaniga; Daniela Besozzi; Giancarlo Mauri

To investigate the behavior of biochemical systems, many runs of Gillespie's Stochastic Simulation Algorithm (SSA) are generally needed, causing excessive computational costs on Central Processing Units (CPUs). Since all SSA runs are independent, Intel Xeon Phi coprocessors based on the Many Integrated Core (MIC) architecture can be exploited to distribute the workload. We considered two execution modalities on MIC: one consisted of running exactly the same CPU code of SSA, while the other exploited MIC's vector instructions to reuse the CPU code with only a few modifications. MIC performance was compared with that of Graphics Processing Units (GPUs), for which SSA was specifically implemented in CUDA to optimize the use of the memory hierarchy. Our results show that the GPU largely outperforms MIC and CPU, but requires a complete redesign of SSA. MIC achieves a relevant speedup, especially when vector instructions are used, with the additional advantage of requiring minimal modifications to the CPU code.
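
Gillespie's direct method itself is short; below is a plain-Python sketch of the sequential CPU baseline that the MIC and GPU versions parallelize across independent runs. The model and function names are illustrative only.

```python
import random
from math import log

def ssa(reactions, state, t_end, seed=0):
    """Gillespie's direct method. reactions is a list of
    (propensity_function, state_change_vector) pairs."""
    rng = random.Random(seed)
    t, state = 0.0, list(state)
    while t < t_end:
        props = [f(state) for f, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:                         # no reaction can fire any more
            break
        t += -log(rng.random()) / a0          # exponential waiting time
        r, acc = rng.random() * a0, 0.0
        for p, (_, change) in zip(props, reactions):
            acc += p
            if r < acc:                       # pick reaction proportionally to propensity
                state = [s + d for s, d in zip(state, change)]
                break
    return state

# Toy model: irreversible isomerization A -> B with c = 1.0,
# starting from 100 molecules of A.
final = ssa([(lambda s: 1.0 * s[0], (-1, +1))], (100, 0), t_end=50.0)
```

Because each run only needs its own state vector and random stream, thousands of such runs can be launched in parallel with no communication, which is exactly the workload distribution exploited on MIC and GPU.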

Collaboration

Top Co-Authors

Marco S. Nobile (University of Milano-Bicocca)
Daniela Besozzi (University of Milano-Bicocca)
Leonardo Rundo (National Research Council)
Simone Spolaor (University of Milano-Bicocca)
Maria Carla Gilardi (University of Milano-Bicocca)