Publications


Featured research published by Sergio Decherchi.


IEEE Transactions on Robotics | 2011

Tactile-Data Classification of Contact Materials Using Computational Intelligence

Sergio Decherchi; Paolo Gastaldo; Ravinder Dahiya; Maurizio Valle; Rodolfo Zunino

The two major components of a robotic tactile-sensing system are the tactile-sensing hardware at the lower level and the computational/software tools at the higher level. Focusing on the latter, this research assesses the suitability of computational-intelligence (CI) tools for tactile-data processing. In this context, this paper addresses the classification of sensed object material from the recorded raw tactile data. For this purpose, three CI paradigms, namely, the support-vector machine (SVM), regularized least square (RLS), and regularized extreme learning machine (RELM), have been employed, and their performance is compared for the said task. The comparative analysis shows that SVM provides the best tradeoff between classification accuracy and computational complexity of the classification algorithm. Experimental results indicate that the CI tools are effective in dealing with the challenging problem of material classification.
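As a rough illustration of the kind of pipeline the paper compares (not the authors' code, and with synthetic placeholder data standing in for tactile feature vectors), an RBF-kernel SVM classifier for contact materials could be set up as follows:

    # Hypothetical sketch: RBF-kernel SVM on synthetic stand-ins for tactile feature vectors.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_materials, n_samples, n_features = 3, 60, 32  # invented sizes

    # Each row stands in for a feature vector extracted from one raw tactile recording.
    X = np.vstack([rng.normal(loc=m, scale=1.0, size=(n_samples, n_features))
                   for m in range(n_materials)])
    y = np.repeat(np.arange(n_materials), n_samples)

    # RBF-kernel SVM; cross-validated accuracy stands in for the paper's comparison metric.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    print("CV accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())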


Neurocomputing | 2013

Circular-ELM for the reduced-reference assessment of perceived image quality

Sergio Decherchi; Paolo Gastaldo; Rodolfo Zunino; Erik Cambria; Judith Redi

Providing a satisfactory visual experience is one of the main goals for present-day electronic multimedia devices. All the enabling technologies for storage, transmission, compression, and rendering should preserve, and possibly enhance, the quality of the video signal; to do so, quality control mechanisms are required. These mechanisms rely on systems that can assess the visual quality of the incoming signal consistently with human perception. Computational Intelligence (CI) paradigms represent a suitable technology to tackle this challenging problem. The present research introduces an augmented version of the basic Extreme Learning Machine (ELM), the Circular-ELM (C-ELM), which proves effective in addressing the visual quality assessment problem. The C-ELM model derives from the original Circular BackPropagation (CBP) architecture, in which the input vector of a conventional MultiLayer Perceptron (MLP) is augmented by one additional dimension, the circular input; this paper shows that C-ELM can actually benefit from the enhancement provided by the circular input without losing any of the fruitful properties that characterize the basic ELM framework. In the proposed framework, C-ELM handles the actual mapping of visual signals into quality scores, successfully reproducing perceptual mechanisms. Its effectiveness is proved on recognized benchmarks and for four different types of distortions.
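A minimal sketch of the circular-input idea, under the assumption that the extra dimension is the squared norm of the input (as in the CBP scheme) appended before a standard ELM least-squares fit; the data is synthetic and this is not the actual quality-assessment pipeline:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 8))                      # stand-in feature vectors
    y = np.sin(X.sum(axis=1)) + 0.05 * rng.normal(size=200)    # stand-in quality scores

    def augment_circular(X):
        # Append the assumed "circular input" ||x||^2 as one extra dimension.
        return np.hstack([X, (X ** 2).sum(axis=1, keepdims=True)])

    def elm_fit(X, y, n_hidden=50):
        Xa = augment_circular(X)
        W = rng.normal(size=(Xa.shape[1], n_hidden))        # random, untrained hidden weights
        b = rng.normal(size=n_hidden)
        H = np.tanh(Xa @ W + b)                             # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)        # output weights in closed form
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(augment_circular(X) @ W + b) @ beta

    W, b, beta = elm_fit(X, y)
    print("train RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))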


IEEE Transactions on Circuits and Systems II: Express Briefs | 2012

Efficient Digital Implementation of Extreme Learning Machines for Classification

Sergio Decherchi; Paolo Gastaldo; Alessio Leoncini; Rodolfo Zunino

The availability of compact fast circuitry for the support of artificial neural systems is a long-standing and critical requirement for many important applications. This brief addresses the implementation of the powerful extreme learning machine (ELM) model on reconfigurable digital hardware (HW). The design strategy first provides a training procedure for ELMs, which effectively trades off prediction accuracy and network complexity. This, in turn, facilitates the optimization of HW resources. Finally, this brief describes and analyzes two implementation approaches: one involving field-programmable gate array devices and one embedding low-cost low-performance devices such as complex programmable logic devices. Experimental results show that, in both cases, the design approach yields efficient digital architectures with satisfactory performances and limited costs.
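The accuracy-versus-complexity trade-off can be illustrated with a plain software ELM whose hidden-layer size is swept, since fewer hidden nodes translate directly into fewer hardware resources; this is a generic sketch on synthetic data, not the fixed-point design of the brief:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=400, n_features=16, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    rng = np.random.default_rng(0)

    for n_hidden in (4, 8, 16, 32, 64):          # smaller hidden layer -> fewer HW resources
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        beta, *_ = np.linalg.lstsq(np.tanh(Xtr @ W + b), ytr * 2.0 - 1.0, rcond=None)
        pred = (np.tanh(Xte @ W + b) @ beta) > 0
        print(f"{n_hidden:3d} hidden nodes -> test accuracy {(pred == yte.astype(bool)).mean():.2f}")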


Scientific Reports | 2015

Kinetics of protein-ligand unbinding via smoothed potential molecular dynamics simulations

Luca Mollica; Sergio Decherchi; Syeda Rehana Zia; Roberto Gaspari; Andrea Cavalli; Walter Rocchia

Drug discovery is expensive and high-risk. The main reasons for failure are a drug candidate's lack of efficacy and its toxicity. Binding affinity for the biological target has usually been considered one of the most relevant figures of merit for judging a drug candidate, along with bioavailability, selectivity, and metabolic properties, which could depend on off-target interactions. Nevertheless, affinity does not always correlate satisfactorily with in vivo drug efficacy. It is indeed becoming increasingly evident that the time a drug spends in contact with its target (the residence time) can be a more reliable figure of merit. Experimental kinetic measurements are operatively limited by the cost and the time needed to synthesize the compounds to be tested, to express and purify the target, and to set up the assays. We present here a simple and efficient molecular-dynamics-based computational approach to prioritize compounds according to their residence time. We devised a multiple-replica scaled molecular dynamics protocol with suitably defined harmonic restraints to accelerate the unbinding events while preserving the native fold. Ligands are ranked according to the mean observed scaled unbinding time. The approach, trivially parallel and easily implementable, was validated against experimental information available on biological systems of pharmacological relevance.
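The ranking step described above reduces, in essence, to averaging the scaled unbinding times observed across replicas for each ligand; a schematic illustration with invented numbers (not data from the paper):

    import numpy as np

    # Invented scaled unbinding times (ns) from multiple replicas per ligand.
    scaled_unbinding_times = {
        "ligand_A": [12.1, 9.8, 15.3, 11.0],
        "ligand_B": [3.2, 4.1, 2.7, 3.9],
        "ligand_C": [25.4, 19.8, 30.1, 22.6],
    }

    # A longer mean scaled unbinding time is taken as a proxy for a longer residence time.
    ranking = sorted(scaled_unbinding_times.items(),
                     key=lambda item: np.mean(item[1]), reverse=True)
    for name, times in ranking:
        print(f"{name}: mean scaled unbinding time = {np.mean(times):.1f} ns "
              f"(+/- {np.std(times):.1f})")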


Nature Communications | 2015

The ligand binding mechanism to purine nucleoside phosphorylase elucidated via molecular dynamics and machine learning

Sergio Decherchi; Anna Berteotti; Giovanni Bottegoni; Walter Rocchia; Andrea Cavalli

The study of biomolecular interactions between a drug and its biological target is of paramount importance for the design of novel bioactive compounds. In this paper, we report on the use of molecular dynamics (MD) simulations and machine learning to study the binding mechanism of a transition state analogue (DADMe–immucillin-H) to the purine nucleoside phosphorylase (PNP) enzyme. Microsecond-long MD simulations allow us to observe several binding events, following different dynamical routes and reaching diverse binding configurations. These simulations are used to estimate kinetic and thermodynamic quantities, such as kon and binding free energy, obtaining a good agreement with available experimental data. In addition, we advance a hypothesis for the slow-onset inhibition mechanism of DADMe–immucillin-H against PNP. Combining extensive MD simulations with machine learning algorithms could therefore be a fruitful approach for capturing key aspects of drug–target recognition and binding.
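For context, a common way to turn observed binding events into an association rate is kon ≈ n_events / (total simulated time × ligand concentration); the sketch below uses invented numbers and is not necessarily the exact estimator used in the paper:

    AVOGADRO = 6.022e23

    n_binding_events = 5        # invented count of binding events observed in the trajectories
    total_time_s = 12e-6        # e.g. 12 microseconds of aggregate simulation time
    box_volume_L = 4.0e-22      # invented simulation-box volume in litres
    n_ligands = 1               # one free ligand copy in the box

    concentration_M = n_ligands / (AVOGADRO * box_volume_L)     # effective ligand concentration
    k_on = n_binding_events / (total_time_s * concentration_M)  # association rate, 1/(M*s)
    print(f"estimated k_on ~ {k_on:.2e} M^-1 s^-1")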


PLOS ONE | 2013

A General and Robust Ray-Casting-Based Algorithm for Triangulating Surfaces at the Nanoscale

Sergio Decherchi; Walter Rocchia

We present a general, robust, and efficient ray-casting-based approach to triangulating complex manifold surfaces arising in the nano-bioscience field. This feature is inserted in a more extended framework that: i) builds the molecular surface of nanometric systems according to several existing definitions, ii) can import external meshes, iii) performs accurate surface area estimation, iv) performs volume estimation, cavity detection, and conditional volume filling, and v) can color the points of a grid according to their locations with respect to the given surface. We implemented our methods in the publicly available NanoShaper software suite (www.electrostaticszone.eu). Robustness is achieved using the CGAL library and an ad hoc ray-casting technique. Our approach can deal with any manifold surface (including nonmolecular ones). Those explicitly treated here are the Connolly-Richards (SES), the Skin, and the Gaussian surfaces. Test results indicate that it is robust to rotation, scale, and atom displacement. This last aspect is evidenced by cavity detection on the highly symmetric structure of fullerene, a task in which MSMS fails and EDTSurf has problems. In terms of timings, NanoShaper builds the Skin surface three times faster than the single-threaded version in Lindow et al. on a 100,000-atom protein and triangulates it at least ten times more rapidly than the Kruithof algorithm. NanoShaper was integrated with the DelPhi Poisson-Boltzmann equation solver, where its SES grid coloring outperformed the DelPhi counterpart. To test the viability of our method on large systems, we chose one of the biggest molecular structures in the Protein Data Bank, namely the 1VSZ entry, which corresponds to the human adenovirus (180,000 atoms after hydrogen addition). We were able to triangulate the corresponding SES and Skin surfaces (6.2 and 7.0 million triangles, respectively, at a scale of 2 grids per Å) on a mid-range workstation.
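The grid-coloring step relies on a classic ray-casting parity test: a ray is cast from each grid point and an odd number of surface crossings marks the point as interior. A toy illustration with a single analytic sphere in place of a molecular surface:

    import numpy as np

    def ray_sphere_crossings(origin, direction, center, radius):
        # Count intersections of the ray origin + t*direction (t > 0) with a sphere.
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return 0
        roots = (-b + np.array([-1.0, 1.0]) * np.sqrt(disc)) / 2.0
        return int(np.sum(roots > 1e-9))

    center, radius = np.array([0.0, 0.0, 0.0]), 1.0
    direction = np.array([1.0, 0.0, 0.0])      # axis-aligned ray, as in grid-based casting

    for p in [(0.0, 0.0, 0.0), (0.5, 0.2, 0.1), (2.0, 0.0, 0.0)]:
        crossings = ray_sphere_crossings(np.array(p), direction, center, radius)
        print(p, "inside" if crossings % 2 == 1 else "outside")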


CISIS | 2009

Text Clustering for Digital Forensics Analysis

Sergio Decherchi; Simone Tacconi; Judith Redi; Alessio Leoncini; Fabio Sangiacomo; Rodolfo Zunino

In recent decades, digital forensics has become a prominent activity in modern investigations, since the devices examined during an investigation often constitute an important source of evidence. Given the complexity of this activity, the digital tools used to support it are a central concern. In this paper, a clustering-based text mining technique is introduced for investigational purposes. The proposed methodology is experimentally applied to the publicly available Enron dataset, which fits a plausible forensics analysis context well.
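A minimal sketch of the clustering-based text-mining step (TF-IDF vectors grouped by k-means, with a toy corpus standing in for the Enron e-mails; the paper's actual feature and clustering choices may differ):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    documents = [  # toy corpus standing in for seized e-mails
        "meeting scheduled to discuss the quarterly trading report",
        "please review the attached trading figures before the call",
        "family dinner on saturday, let me know if you can come",
        "are you joining us for the barbecue this weekend?",
    ]

    X = TfidfVectorizer(stop_words="english").fit_transform(documents)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    for label, doc in zip(labels, documents):
        print(f"cluster {label}: {doc}")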


Journal of Medicinal Chemistry | 2016

Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein–Ligand Residence Times

Luca Mollica; Isabelle Theret; Mathias Antoine; Françoise Perron-Sierra; Yves Charton; Jean-Marie Fourquez; Michel Wierzbicki; Jean A. Boutin; Gilles Ferry; Sergio Decherchi; Giovanni Bottegoni; Pierre Ducrot; Andrea Cavalli

Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.
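Since the protocol is meant to rank compounds rather than reproduce absolute residence times, the comparison with experiment can be summarized by a rank correlation; an illustrative sketch with invented values, not the glucokinase-activator data:

    from scipy.stats import spearmanr

    predicted_scaled_times = [1.2, 3.5, 0.8, 2.1, 4.0]       # arbitrary units, one per compound
    measured_residence_min = [5.0, 25.0, 2.5, 30.0, 55.0]    # minutes, one per compound (invented)

    rho, pval = spearmanr(predicted_scaled_times, measured_residence_min)
    print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")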


Journal of Chemical Information and Modeling | 2015

A Pipeline To Enhance Ligand Virtual Screening: Integrating Molecular Dynamics and Fingerprints for Ligand and Proteins

Francesca Spyrakis; Paolo Benedetti; Sergio Decherchi; Walter Rocchia; Andrea Cavalli; Stefano Alcaro; Francesco Ortuso; Massimo Baroni; Gabriele Cruciani

The importance of taking into account protein flexibility in drug design and virtual ligand screening (VS) has been widely debated in the literature, and molecular dynamics (MD) has been recognized as one of the most powerful tools for investigating intrinsic protein dynamics. Nevertheless, deciphering the amount of information hidden in MD simulations and recognizing a significant minimal set of states to be used in virtual screening experiments can be quite complicated. Here we present an integrated MD-FLAP (molecular dynamics-fingerprints for ligand and proteins) approach, comprising a pipeline of molecular dynamics, clustering and linear discriminant analysis, for enhancing accuracy and efficacy in VS campaigns. We first extracted a limited number of representative structures from tens of nanoseconds of MD trajectories by means of the k-medoids clustering algorithm as implemented in the BiKi Life Science Suite ( http://www.bikitech.com [accessed July 21, 2015]). Then, instead of applying arbitrary selection criteria, that is, RMSD, pharmacophore properties, or enrichment performances, we allowed the linear discriminant analysis algorithm implemented in FLAP ( http://www.moldiscovery.com [accessed July 21, 2015]) to automatically choose the best performing conformational states among medoids and X-ray structures. Retrospective virtual screenings confirmed that ensemble receptor protocols outperform single rigid receptor approaches, proved that computationally generated conformations comprise the same quantity/quality of information included in X-ray structures, and pointed to the MD-FLAP approach as a valuable tool for improving VS performances.
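The trajectory-reduction step can be sketched as follows: per-frame descriptors are clustered and one representative frame is kept per cluster. Here k-means plus a nearest-to-centre pick stands in for the k-medoids step performed in BiKi, and the FLAP/LDA selection stage is not reproduced:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(500, 30))      # invented per-frame structural descriptors

    n_states = 5
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(frames)

    representatives = []
    for k in range(n_states):
        members = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(frames[members] - km.cluster_centers_[k], axis=1)
        representatives.append(int(members[np.argmin(dists)]))   # medoid-like frame per cluster

    print("representative frame indices:", sorted(representatives))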


IEEE Transactions on Neural Networks | 2010

Using Unsupervised Analysis to Constrain Generalization Bounds for Support Vector Classifiers

Sergio Decherchi; Sandro Ridella; Rodolfo Zunino; Paolo Gastaldo; Davide Anguita

A crucial issue in designing learning machines is to select the correct model parameters. When the number of available samples is small, theoretical sample-based generalization bounds can prove effective, provided that they are tight and track the validation error correctly. The maximal discrepancy (MD) approach is a very promising technique for model selection for support vector machines (SVMs), and estimates a classifier's generalization performance by multiple training cycles on randomly labeled data. This paper presents a general method to compute the generalization bounds for SVMs, which is based on referring the SVM parameters to an unsupervised solution, and shows that such an approach yields tight bounds and attains effective model selection. When one estimates the generalization error, one uses an unsupervised reference to constrain the complexity of the learning machine, thereby possibly decreasing sharply the number of admissible hypotheses. Although the methodology has a general value, the method described in the paper adopts vector quantization (VQ) as a representation paradigm, and introduces a biased regularization approach in bound computation and learning. Experimental results validate the proposed method on complex real-world data sets.
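For intuition, the maximal-discrepancy quantity can be estimated by splitting the data in two halves, flipping the labels of one half, training on the modified set, and taking the difference of the two half-sample errors; the sketch below is a generic illustration, not the VQ-constrained procedure of the paper:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    half = len(y) // 2

    y_flipped = y.copy()
    y_flipped[half:] = 1 - y_flipped[half:]        # flip the labels of the second half

    clf = SVC(kernel="linear", C=1.0).fit(X, y_flipped)
    err_first = np.mean(clf.predict(X[:half]) != y[:half])
    err_second = np.mean(clf.predict(X[half:]) != y[half:])

    # A large discrepancy signals a hypothesis class rich enough to fit noise (looser bound).
    print(f"estimated maximal discrepancy: {err_second - err_first:.2f}")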

Collaboration


Dive into Sergio Decherchi's collaborations.

Top Co-Authors

Walter Rocchia
Istituto Italiano di Tecnologia

Judith Redi
Delft University of Technology

Giovanni Bottegoni
Istituto Italiano di Tecnologia

Andrea Spitaleri
Istituto Italiano di Tecnologia

José Colmenares
Istituto Italiano di Tecnologia