
Publication


Featured research published by Fabio Tordini.


Frontiers in Genetics | 2015

Integrating Multi-omic features exploiting Chromosome Conformation Capture data

Ivan Merelli; Fabio Tordini; Maurizio Drocco; Marco Aldinucci; Pietro Liò; Luciano Milanesi

The representation, integration, and interpretation of omic data is a complex task, particularly considering the huge amount of information produced daily in molecular biology laboratories around the world. The reason is that sequencing data regarding expression profiles, methylation patterns, and chromatin domains is difficult to harmonize in a systems biology view, since genome browsers only allow coordinate-based representations, discarding the functional clusters created by the spatial conformation of the DNA in the nucleus. In this context, recent progress in high-throughput molecular biology techniques and bioinformatics has provided insights into chromatin interactions on a larger scale and offers formidable support for the interpretation of multi-omic data. In particular, a novel sequencing technique called Chromosome Conformation Capture allows the analysis of chromosome organization in the cell's natural state; when performed genome-wide, this technique is usually called Hi-C. Inspired by service applications such as Google Maps, we developed NuChart, an R package that integrates Hi-C data to describe the chromosomal neighborhood starting from information about gene positions, with the possibility of mapping genomic features such as methylation patterns and histone modifications, along with expression profiles, onto the resulting graphs. In this paper we show the importance of the NuChart application for the integration of multi-omic data in a systems biology fashion, with particular interest in cytogenetic applications of these techniques. Moreover, we demonstrate how the integration of multi-omic data can help in understanding why genes occupy specific positions inside the nucleus and how epigenetic patterns correlate with their expression.
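The gene-centric, graph-based view described above can be sketched in code. The snippet below is only an illustration of the idea (NuChart itself is an R package; the record layouts, gene names, and coordinates here are invented): nodes are genes, and an edge is added whenever a Hi-C contact falls inside the coordinates of two genes.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Hypothetical, simplified records: a gene with genomic coordinates and a
// Hi-C contact between two genomic positions.
struct Gene    { std::string name; std::string chrom; std::uint64_t start, end; };
struct Contact { std::string chromA; std::uint64_t posA;
                 std::string chromB; std::uint64_t posB; };

// Names of all genes whose interval covers (chrom, pos).
// Linear scan: fine for a sketch; real tools would use an interval index.
static std::vector<std::string>
genes_at(const std::vector<Gene>& genes, const std::string& chrom, std::uint64_t pos) {
    std::vector<std::string> hits;
    for (const auto& g : genes)
        if (g.chrom == chrom && g.start <= pos && pos <= g.end)
            hits.push_back(g.name);
    return hits;
}

int main() {
    std::vector<Gene> genes = { {"GENE_A", "chr1", 1000, 2000},
                                {"GENE_B", "chr1", 5000, 6000},
                                {"GENE_C", "chr2",  800, 1800} };
    std::vector<Contact> contacts = { {"chr1", 1500, "chr1", 5500},   // A <-> B
                                      {"chr1", 1200, "chr2", 1000} }; // A <-> C

    // Gene-centric neighbourhood graph: gene name -> set of contacted genes.
    std::unordered_map<std::string, std::unordered_set<std::string>> graph;
    for (const auto& c : contacts)
        for (const auto& a : genes_at(genes, c.chromA, c.posA))
            for (const auto& b : genes_at(genes, c.chromB, c.posB))
                if (a != b) { graph[a].insert(b); graph[b].insert(a); }

    for (const auto& [gene, neighbours] : graph) {
        std::cout << gene << ":";
        for (const auto& n : neighbours) std::cout << ' ' << n;
        std::cout << '\n';
    }
}
```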


Frontiers in Genetics | 2016

The Genome Conformation As an Integrator of Multi-Omic Data: The Example of Damage Spreading in Cancer

Fabio Tordini; Marco Aldinucci; Luciano Milanesi; Pietro Liò; Ivan Merelli

Publicly available multi-omic databases, in particular if associated with medical annotations, are rich resources with the potential to drive a rapid transition from high-throughput molecular biology experiments to better clinical outcomes for patients. In this work, we propose a model for multi-omic data integration (i.e., genetic variations, gene expression, genome conformation, and epigenetic patterns), which exploits a multi-layer network approach to analyse, visualize, and obtain insights from such biological information, so that the results can be used at a macroscopic level. Using this representation, we can describe how driver and passenger mutations accumulate during the development of diseases, providing, for example, a tool able to characterize the evolution of cancer. Our test case concerns the MCF-7 breast cancer cell line, before and after stimulation with estrogen, since many datasets are available for this case study. In particular, the integration of data about cancer mutations, gene functional annotations, genome conformation, epigenetic patterns, gene expression, and metabolic pathways in our multi-layer representation allows a better interpretation of the mechanisms behind a complex disease such as cancer. Thanks to this multi-layer approach, we focus on the interplay of chromatin conformation and cancer mutations in different pathways, such as metabolic processes, that are very important for tumor development. Working on this model, a variance analysis can be implemented to identify normal variations within each omics layer and to characterize, by contrast, variations that can be attributed to pathological samples compared with normal ones. This integrative model can be used to identify novel biomarkers and to provide innovative omic-based guidelines for treating many diseases, improving the efficacy of the decision trees currently used in the clinic.
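As a rough sketch of the multi-layer idea, the following fragment models each omic level as a separate edge set over the same gene identifiers, so that a query for one gene can be answered layer by layer. The layer labels and toy edges are invented for illustration and are not taken from the paper.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>

// A minimal multi-layer network: every layer (e.g. "Hi-C contact",
// "co-expression" -- hypothetical labels) is an undirected edge set over
// the same universe of gene identifiers.
struct MultiLayerNetwork {
    std::map<std::string, std::set<std::pair<std::string, std::string>>> layers;

    void add_edge(const std::string& layer, std::string a, std::string b) {
        if (b < a) std::swap(a, b);          // canonical order for undirected edges
        layers[layer].insert({a, b});
    }

    // Collect, for one gene, its neighbours in every layer.
    std::map<std::string, std::set<std::string>> neighbourhood(const std::string& gene) const {
        std::map<std::string, std::set<std::string>> out;
        for (const auto& [name, edges] : layers)
            for (const auto& [a, b] : edges) {
                if (a == gene) out[name].insert(b);
                if (b == gene) out[name].insert(a);
            }
        return out;
    }
};

int main() {
    MultiLayerNetwork net;
    net.add_edge("Hi-C contact",   "BRCA1", "TP53");
    net.add_edge("co-expression",  "BRCA1", "ESR1");
    net.add_edge("shared pathway", "ESR1",  "TP53");

    for (const auto& [layer, neigh] : net.neighbourhood("BRCA1")) {
        std::cout << layer << ":";
        for (const auto& n : neigh) std::cout << ' ' << n;
        std::cout << '\n';
    }
}
```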


Parallel, Distributed and Network-Based Processing | 2015

Parallel Exploration of the Nuclear Chromosome Conformation with NuChart-II

Fabio Tordini; Maurizio Drocco; Claudia Misale; Luciano Milanesi; Pietro Liò; Ivan Merelli; Marco Aldinucci

High-throughput molecular biology techniques are widely used to identify physical interactions between genetic elements located throughout the human genome. Chromosome Conformation Capture (3C) and other related techniques make it possible to investigate the spatial organisation of chromosomes in the cell's natural state. Recent results have shown that there is a large correlation between co-localization and co-regulation of genes, but this important information is hampered by the lack of biologist-friendly analysis and visualisation software. In this work we introduce NuChart-II, a tool for Hi-C data analysis that provides a gene-centric view of the chromosomal neighbourhood in a graph-based manner. NuChart-II is an efficient and highly optimized C++ re-implementation of a previous prototype package developed in R. Representing Hi-C data with a graph-based approach overcomes the common view relying on genomic coordinates and permits the use of graph analysis techniques to explore the spatial conformation of a gene neighbourhood.
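Once Hi-C contacts have been turned into a gene graph, exploring the chromosomal neighbourhood amounts to standard graph analysis. The sketch below shows a plain breadth-first search that collects every gene within a given number of hops of a seed gene; it is a generic illustration, not NuChart-II code, and the gene names are placeholders.

```cpp
#include <iostream>
#include <queue>
#include <string>
#include <unordered_map>
#include <vector>

// Toy adjacency list standing in for a Hi-C derived gene graph.
using Graph = std::unordered_map<std::string, std::vector<std::string>>;

// Breadth-first search: every gene within `radius` hops of `seed`,
// together with its hop distance.
std::unordered_map<std::string, int> neighbourhood(const Graph& g,
                                                   const std::string& seed,
                                                   int radius) {
    std::unordered_map<std::string, int> dist{{seed, 0}};
    std::queue<std::string> frontier;
    frontier.push(seed);
    while (!frontier.empty()) {
        const std::string v = frontier.front(); frontier.pop();
        if (dist[v] == radius) continue;          // do not expand past the radius
        const auto it = g.find(v);
        if (it == g.end()) continue;
        for (const auto& w : it->second)
            if (!dist.count(w)) { dist[w] = dist[v] + 1; frontier.push(w); }
    }
    return dist;
}

int main() {
    Graph g = { {"SEED", {"G1", "G2"}}, {"G1", {"SEED", "G3"}},
                {"G2", {"SEED"}},       {"G3", {"G1", "G4"}}, {"G4", {"G3"}} };
    for (const auto& [gene, d] : neighbourhood(g, "SEED", 2))
        std::cout << gene << " at distance " << d << '\n';
}
```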


Parallel, Distributed and Network-Based Processing | 2013

Parallel Stochastic Simulators in System Biology: The Evolution of the Species

Marco Aldinucci; Maurizio Drocco; Fabio Tordini; Mario Coppo; Massimo Torquati

The stochastic simulation of biological systems is an increasingly popular technique in Bioinformatics. It is often an enlightening technique, especially for multi-stable systems whose dynamics can hardly be captured with ordinary differential equations. To be effective, stochastic simulations should be supported by powerful statistical analysis tools. The simulation-analysis workflow may, however, be computationally expensive, thus compromising the interactivity required in model tuning. In this work we advocate the high-level design of simulators for stochastic systems as a vehicle for building efficient and portable parallel simulators. In particular, the Calculus of Wrapped Compartments (CWC) simulator, which is designed according to the FastFlow pattern-based approach, is presented and discussed in this work. FastFlow has been extended to also support clusters of multi-cores with minimal coding effort, assessing the portability of the approach.
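To give a flavour of what parallel stochastic simulation means in practice, the sketch below runs independent replicas of a toy birth-death process with Gillespie's direct method and averages the results. It uses plain std::async rather than FastFlow, and the model is not the CWC calculus of the paper; the rates and run counts are arbitrary.

```cpp
#include <cmath>
#include <future>
#include <iostream>
#include <random>
#include <vector>

// One trajectory of a toy birth-death process (X -> X+1 at rate `birth`,
// X -> X-1 at rate `death * X`), simulated with Gillespie's direct method.
int simulate(unsigned seed, double birth, double death, double t_end, int x0) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double t = 0.0;
    int x = x0;
    while (true) {
        const double a1 = birth, a2 = death * x, a0 = a1 + a2;
        if (a0 <= 0.0) break;
        t += -std::log(u(rng)) / a0;            // exponential waiting time
        if (t > t_end) break;
        x += (u(rng) * a0 < a1) ? +1 : -1;      // which reaction fired
    }
    return x;                                   // copy number at t_end
}

int main() {
    const int tasks = 8, per_task = 250;        // 2000 independent replicas in total
    auto chunk = [per_task](unsigned base) {    // each task runs a batch of replicas
        double sum = 0.0;
        for (int i = 0; i < per_task; ++i)
            sum += simulate(base + i, 5.0, 0.1, 50.0, 0);
        return sum;
    };

    std::vector<std::future<double>> futures;
    for (int t = 0; t < tasks; ++t)
        futures.push_back(std::async(std::launch::async, chunk, 1000u * t));

    double total = 0.0;
    for (auto& f : futures) total += f.get();   // gather for statistical analysis
    std::cout << "mean copy number at t_end: " << total / (tasks * per_task) << '\n';
}
```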


Parallel, Distributed and Network-Based Processing | 2015

Memory-Optimised Parallel Processing of Hi-C Data

Maurizio Drocco; Claudia Misale; Guilherme Peretti Pezzi; Fabio Tordini; Marco Aldinucci

This paper presents the optimisation efforts on the creation of a graph-based representation of gene adjacency. The method is based on the Hi-C process, starting from Next Generation Sequencing data, and it analyses a huge amount of static data in order to produce maps for one or more genes. Straightforward parallelisation of this scheme does not yield acceptable performance on multicore architectures, since the scalability is rather limited due to the memory-bound nature of the problem. This work focuses on the memory optimisations that can be applied to the graph construction algorithm and its (complex) data structures to derive a cache-oblivious algorithm and eventually improve the memory bandwidth utilisation. As a running example we used NuChart-II, a tool for annotation and statistical analysis of Hi-C data that creates a gene-centric neighborhood graph. The proposed approach, which is exemplified for Hi-C, addresses several common issues in the parallelisation of memory-bound algorithms for multicore. Results show that the proposed approach is able to increase the parallel speedup from 7x to 22x (on a 32-core platform). Finally, the proposed C++ implementation outperforms the first NuChart R prototype, with which it was not possible to complete the graph generation because of severe memory-saturation problems.
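One general pattern behind this kind of memory optimisation is to keep the parallel phase free of writes to shared data structures: each thread fills a private accumulator over its own chunk of input, and the small per-thread results are merged afterwards. The sketch below illustrates only that generic idea, not the cache-oblivious algorithm of the paper; the data and gene encoding are invented.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <map>
#include <random>
#include <thread>
#include <utility>
#include <vector>

using Pair   = std::pair<int, int>;              // (gene id, gene id), invented encoding
using Counts = std::map<Pair, std::uint64_t>;    // contact count per gene pair

int main() {
    // Synthetic input: a large, read-only list of gene-pair contacts.
    std::vector<Pair> contacts(1'000'000);
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> gene(0, 99);
    for (auto& c : contacts) c = {gene(rng), gene(rng)};

    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<Counts> partial(nthreads);       // one private accumulator per thread
    std::vector<std::thread> pool;

    // Parallel phase: each thread scans its own chunk and writes only to its
    // private map, so no shared structure is touched while threads run.
    for (unsigned t = 0; t < nthreads; ++t)
        pool.emplace_back([&, t] {
            const std::size_t lo = contacts.size() * t / nthreads;
            const std::size_t hi = contacts.size() * (t + 1) / nthreads;
            for (std::size_t i = lo; i < hi; ++i) ++partial[t][contacts[i]];
        });
    for (auto& th : pool) th.join();

    // Sequential merge of the (much smaller) per-thread maps.
    Counts total;
    for (const auto& p : partial)
        for (const auto& [key, n] : p) total[key] += n;

    std::cout << "distinct gene pairs with at least one contact: " << total.size() << '\n';
}
```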


Computational Intelligence Methods for Bioinformatics and Biostatistics | 2015

NuchaRt: Embedding High-Level Parallel Computing in R for Augmented Hi-C Data Analysis

Fabio Tordini; Ivan Merelli; Pietro Liò; Luciano Milanesi; Marco Aldinucci

Recent advances in molecular biology and bioinformatics techniques have led to an explosion of information about the spatial organisation of DNA in the nucleus. High-throughput chromosome conformation capture techniques provide a genome-wide capture of chromatin contacts at unprecedented scales, which permits the identification of physical interactions between genetic elements located throughout the human genome. These important studies are hampered by the lack of biologist-friendly software. In this work we present NuchaRt, an R package that wraps NuChart-II, an efficient and highly optimized C++ tool for the exploration of Hi-C data. By raising the level of abstraction, NuchaRt proposes a high-performance pipeline that allows users to orchestrate analysis and visualisation of multi-omics data, making optimal use of the computing capabilities offered by modern multi-core architectures, combined with the versatile and well-known R environment for statistical analysis and data visualisation.
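A common mechanism for this kind of R-over-C++ embedding is Rcpp, where the heavy computation is compiled C++ exported as an ordinary R function. The sketch below is purely illustrative: the function name, the toy edge list, and the use of Rcpp as the binding layer are assumptions made for the example, not NuchaRt's actual code.

```cpp
#include <Rcpp.h>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Compile and load from R with:  Rcpp::sourceCpp("neighbour_count.cpp")
// then call:                     neighbour_count(c("GENE_A", "GENE_X"))
// [[Rcpp::export]]
Rcpp::IntegerVector neighbour_count(Rcpp::CharacterVector genes) {
    // Toy edge list standing in for a Hi-C derived gene graph.
    static const std::vector<std::pair<std::string, std::string>> edges = {
        {"GENE_A", "GENE_B"}, {"GENE_A", "GENE_C"}, {"GENE_B", "GENE_C"}};

    // Heavy lifting happens in C++ ...
    std::unordered_map<std::string, int> degree;
    for (const auto& e : edges) { ++degree[e.first]; ++degree[e.second]; }

    // ... while R receives an ordinary named integer vector.
    Rcpp::IntegerVector out(genes.size());
    for (R_xlen_t i = 0; i < genes.size(); ++i) {
        const std::string g = Rcpp::as<std::string>(genes[i]);
        const auto it = degree.find(g);
        out[i] = (it == degree.end()) ? 0 : it->second;
    }
    out.attr("names") = genes;
    return out;
}
```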


Computational Intelligence Methods for Bioinformatics and Biostatistics | 2014

NuChart-II: A Graph-Based Approach for Analysis and Interpretation of Hi-C Data

Fabio Tordini; Maurizio Drocco; Ivan Merelli; Luciano Milanesi; Pietro Liò; Marco Aldinucci

Long-range chromosomal associations between genomic regions, and their repositioning in the 3D space of the nucleus, are now considered key contributors to the regulation of gene expression and DNA rearrangements. Recent Chromosome Conformation Capture (3C) measurements performed with high-throughput sequencing techniques (Hi-C) and molecular dynamics studies show that there is a large correlation between co-localization and co-regulation of genes, but these important research efforts are hampered by the lack of biologist-friendly analysis and visualisation software. In this work we present NuChart-II, software that allows the user to annotate and visualize a list of input genes with information relying on Hi-C data, integrating knowledge about genomic features that are involved in the chromosome spatial organization. The software works directly with sequenced reads to identify the related Hi-C fragments, with the aim of creating gene-centric neighbourhood graphs on which multi-omics features can be mapped. NuChart-II is a highly optimized implementation of a previous prototype developed in R, in which the graph-based representation of Hi-C data was tested. The prototype showed inevitable problems of scalability while working genome-wide on large datasets, so particular attention has been paid to obtaining an efficient parallel implementation of the software. The normalization of Hi-C data has also been modified and improved, in order to provide a reliable estimation of the proximity likelihood for the genes.
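The closing remark on normalization can be pictured with a generic observed-over-expected scheme: the expected contact count is estimated per genomic-distance bin, and each gene pair is scored by how far it exceeds that expectation. This is only a sketch of the general idea, not the specific normalisation implemented in NuChart-II; the counts and distances are invented.

```cpp
#include <cmath>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

// An intra-chromosomal gene pair: genomic distance and observed contact count
// (values invented for the example).
struct PairContact { long long distance; double observed; };

int main() {
    std::vector<PairContact> pairs = {
        {10'000, 40},  {12'000, 35},  {11'000, 90},    // close on the chromosome
        {500'000, 4},  {480'000, 3},  {450'000, 12} }; // far apart

    // 1) Expected count per log10-distance bin (mean of the observations).
    auto bin_of = [](long long d) { return static_cast<int>(std::log10(double(d)) * 10); };
    std::map<int, std::pair<double, int>> bins;        // bin -> (sum, count)
    for (const auto& p : pairs) {
        auto& b = bins[bin_of(p.distance)];
        b.first += p.observed;
        ++b.second;
    }

    // 2) Score = observed / expected; values well above 1 suggest a proximity
    //    that genomic distance alone does not explain.
    for (const auto& p : pairs) {
        const auto& b = bins.at(bin_of(p.distance));
        const double expected = b.first / b.second;
        std::cout << "distance " << p.distance << ": score "
                  << p.observed / expected << '\n';
    }
}
```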


International Journal of High Performance Computing Applications | 2017

NuChart-II: The road to a fast and scalable tool for Hi-C data analysis

Fabio Tordini; Maurizio Drocco; Claudia Misale; Luciano Milanesi; Pietro Liò; Ivan Merelli; Massimo Torquati; Marco Aldinucci

Recent advances in molecular biology and bioinformatic techniques have brought about an explosion of information about the spatial organisation of the DNA in the nucleus of a cell. High-throughput molecular biology techniques provide a genome-wide capture of the spatial organisation of chromosomes at unprecedented scales, which permit one to identify physical interactions between genetic elements located throughout a genome. This important information is, however, hampered by the lack of biologist-friendly analysis and visualisation software: these disciplines are literally caught in a flood of data and are now facing many of the scale-out issues that high-performance computing has been addressing for years. Data must be managed, analysed and integrated, with substantial requirements of speed (in terms of execution time), application scalability and data representation. In this work, we present NuChart-II, an efficient and highly optimised tool for genomic data analysis that provides a gene-centric, graph-based representation of genomic information and which proposes an ex-post normalisation technique for Hi-C data. While designing NuChart-II, we addressed several common issues in the parallelisation of memory-bound algorithms for shared-memory systems.


International Conference on Distributed Computing Systems Workshops | 2014

Exercising High-Level Parallel Programming on Streams: A Systems Biology Use Case

Marco Aldinucci; Maurizio Drocco; Guilherme Peretti Pezzi; Claudia Misale; Fabio Tordini; Massimo Torquati

The stochastic modelling of biological systems, coupled with Monte Carlo simulation of models, is an increasingly popular technique in Bioinformatics. The simulation-analysis workflow may result in a computationally expensive task, reducing the interactivity required in model tuning. In this work, we advocate high-level software design as a vehicle for building efficient and portable parallel simulators for a variety of platforms, ranging from multi-core platforms to GPGPUs to the cloud. In particular, the Calculus of Wrapped Compartments (CWC) parallel simulator for systems biology, equipped with on-line mining of results and designed according to the FastFlow pattern-based approach, is discussed as a running example. The CWC simulator is used as a paradigmatic example of a complex C++ application where the quality of results is correlated with both computation and I/O bounds, and where high-quality results might turn into big data. The FastFlow parallel programming framework, which advocates C++ pattern-based parallel programming, makes it possible to develop portable parallel code without relinquishing either run-time efficiency or performance-tuning opportunities. Performance and effectiveness of the approach are validated on a variety of platforms, including cache-coherent multi-cores, clusters of multi-cores (Ethernet and InfiniBand), and the Amazon Elastic Compute Cloud.
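The on-line mining of results mentioned above can be pictured as a two-stage stream pipeline: one stage produces simulation outcomes while the next keeps running statistics, so nothing needs to be stored and re-read. The sketch below uses a hand-rolled thread-safe channel and Welford's algorithm rather than FastFlow's API; the data stream is synthetic.

```cpp
#include <cmath>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <random>
#include <thread>

// A minimal single-producer / single-consumer channel.
class Channel {
    std::queue<double> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool closed_ = false;
public:
    void push(double v) { { std::lock_guard<std::mutex> l(m_); q_.push(v); } cv_.notify_one(); }
    void close()        { { std::lock_guard<std::mutex> l(m_); closed_ = true; } cv_.notify_one(); }
    std::optional<double> pop() {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;         // closed and drained
        const double v = q_.front(); q_.pop();
        return v;
    }
};

int main() {
    Channel ch;
    std::thread producer([&] {                       // stage 1: "simulation" stream
        std::mt19937 rng(7);
        std::normal_distribution<double> sample(10.0, 2.0);
        for (int i = 0; i < 100000; ++i) ch.push(sample(rng));
        ch.close();
    });

    long long n = 0; double mean = 0.0, m2 = 0.0;    // stage 2: on-line statistics
    std::thread consumer([&] {
        while (auto v = ch.pop()) {                  // Welford's running mean/variance
            ++n;
            const double d = *v - mean;
            mean += d / n;
            m2 += d * (*v - mean);
        }
    });
    producer.join(); consumer.join();
    std::cout << "n=" << n << " mean=" << mean
              << " stddev=" << std::sqrt(m2 / (n - 1)) << '\n';
}
```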


Formal Methods | 2013

An abstract annotation model for skeletons

Marco Aldinucci; Sonia Campa; Peter Kilpatrick; Fabio Tordini; Massimo Torquati

Multi-core and many-core platforms are becoming increasingly heterogeneous and asymmetric. This significantly increases the porting and tuning effort required for parallel codes, which in turn often leads to a growing gap between peak machine power and actual application performance. In this work a first step toward the automated optimization of high-level skeleton-based parallel code is discussed. The paper presents an abstract annotation model for skeleton programs aimed at formally describing suitable mappings of parallel activities onto a high-level platform representation. The derived mapping and scheduling strategies are used to generate optimized run-time code.
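A toy rendition of the annotation idea: a skeleton expression is a small tree, each node carries an annotation describing its mapping onto an abstract platform (here just a parallelism degree), and a simple rule derives the resources a candidate mapping requires. The node kinds, the annotation, and the cost rule are invented for illustration; the paper's formal model is considerably richer.

```cpp
#include <iostream>
#include <memory>
#include <vector>

// A skeleton expression tree with a per-node mapping annotation.
struct Skeleton {
    enum Kind { Seq, Pipe, Farm } kind;
    int workers = 1;                                  // annotation: parallelism degree
    std::vector<std::unique_ptr<Skeleton>> children;

    static std::unique_ptr<Skeleton> seq() {
        auto s = std::make_unique<Skeleton>(); s->kind = Seq; return s;
    }
    static std::unique_ptr<Skeleton> farm(int w, std::unique_ptr<Skeleton> body) {
        auto s = std::make_unique<Skeleton>(); s->kind = Farm; s->workers = w;
        s->children.push_back(std::move(body)); return s;
    }
    static std::unique_ptr<Skeleton> pipe(std::unique_ptr<Skeleton> a,
                                          std::unique_ptr<Skeleton> b) {
        auto s = std::make_unique<Skeleton>(); s->kind = Pipe;
        s->children.push_back(std::move(a)); s->children.push_back(std::move(b));
        return s;
    }
};

// Cores required by a candidate mapping: pipe stages run concurrently,
// a farm replicates its body `workers` times, a sequential node takes one core.
int cores(const Skeleton& s) {
    switch (s.kind) {
        case Skeleton::Seq:  return 1;
        case Skeleton::Farm: return s.workers * cores(*s.children[0]);
        case Skeleton::Pipe: return cores(*s.children[0]) + cores(*s.children[1]);
    }
    return 0;
}

int main() {
    // pipe(seq, farm(4, seq)): one pre-processing stage feeding a 4-worker farm.
    auto expr = Skeleton::pipe(Skeleton::seq(),
                               Skeleton::farm(4, Skeleton::seq()));
    std::cout << "cores required by this annotation: " << cores(*expr) << '\n';
}
```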

Collaboration


Dive into Fabio Tordini's collaborations.

Top Co-Authors

Ivan Merelli
National Research Council

Pietro Liò
University of Cambridge

Peter Kilpatrick
Queen's University Belfast