Publication


Featured research published by Salvatore Rampone.


International Journal of Modern Physics C | 2002

HS3D, A DATASET OF HOMO SAPIENS SPLICE REGIONS, AND ITS EXTRACTION PROCEDURE FROM A MAJOR PUBLIC DATABASE

Pasquale Pollastro; Salvatore Rampone

The aim of this work is to describe a cleaning procedure for GenBank data, producing material to train and assess the prediction accuracy of computational approaches to gene characterization. A procedure (GenBank2HS3D) has been defined, producing a dataset (HS3D, Homo Sapiens Splice Sites Dataset) of Homo sapiens splice regions extracted from GenBank (Rel. 123 at this time). It selects, from the complete GenBank Primate Division, entries of human nuclear DNA according to several assessed criteria; it then extracts exons and introns from these entries (currently 4523 + 3802). Donor and acceptor sites are then extracted as windows of 140 nucleotides around each splice site (3799 + 3799). After discarding windows not including canonical GT–AG junctions (65 + 74), windows with insufficient data (not enough material for a 140-nucleotide window) (686 + 589), windows including non-AGCT bases (29 + 30), and redundant windows (218 + 226), the remaining windows (2796 + 2880) are reported in the dataset. Finally, windows of false splice sites are selected by searching for canonical GT–AG pairs in non-splicing positions (271 937 + 332 296). False sites within a range of ±60 nucleotides from a true splice site are marked as proximal. HS3D, release 1.2 at this time, is available at the Web server of the University of Sannio.
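
A minimal sketch of the window-extraction step described above, not the authors' GenBank2HS3D code: the function names and the 0-based coordinate convention are assumptions, while the 140-nucleotide window width and the discard criteria (insufficient flanking material, non-AGCT bases, non-canonical junctions) follow the abstract.

```python
# Sketch of HS3D-style splice-site window extraction (assumptions noted above).

def extract_window(seq: str, junction: int, half_width: int = 70) -> str | None:
    """Return a 140-nt window centred on a splice junction, or None if one
    of the discard criteria from the abstract applies."""
    start, end = junction - half_width, junction + half_width
    if start < 0 or end > len(seq):
        return None                      # insufficient data for a full window
    window = seq[start:end]
    if set(window) - set("ACGT"):
        return None                      # window contains non-AGCT bases
    return window

def is_canonical_donor(seq: str, junction: int) -> bool:
    """Donor sites must start with the canonical GT dinucleotide."""
    return seq[junction:junction + 2] == "GT"
```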


Measurement | 2016

Feature extraction and soft computing methods for aerospace structure defect classification

Gianni D’Angelo; Salvatore Rampone

This study concerns the effectiveness of several signal-processing and data-interpretation techniques for the diagnosis of aerospace structure defects. This is done by applying different known feature extraction methods, in addition to a new CBIR-based one, and several soft computing techniques, including a recent HPC parallel implementation of the U-BRAIN learning algorithm, to Non-Destructive Testing data. The performance of the resulting detection systems is measured in terms of Accuracy, Sensitivity, Specificity, and Precision. Their effectiveness is evaluated by the Matthews correlation coefficient, the Area Under the Curve (AUC), and the F-Measure. Several experiments are performed on a standard dataset of eddy-current signal samples for aircraft structures. Our experimental results show that the key to a successful defect classifier is the feature extraction method (namely, the novel CBIR-based one outperforms all the competitors), and they illustrate the greater effectiveness of the U-BRAIN algorithm and the MLP neural network among the soft computing methods in this kind of application.
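
The abstract names the evaluation metrics but not their formulas. The following sketch shows how each can be computed from the four confusion-matrix counts; variable names are generic, not taken from the paper.

```python
# Standard binary-classification metrics from confusion-matrix counts.
import math

def classifier_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)                  # recall / true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    precision   = tp / (tp + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt(        # Matthews correlation coefficient
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                f_measure=f_measure, mcc=mcc)
```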


Applied Soft Computing | 2015

An uncertainty-managing batch relevance-based approach to network anomaly detection

Gianni D'Angelo; Francesco Palmieri; Massimo Ficco; Salvatore Rampone

Highlights:
- An adaptive network anomaly detection strategy based on a batch relevance-based fuzzified learning algorithm.
- Couples the capability of inferring decisional structures from incomplete observations with the flexibility of a fuzzy-based uncertainty management strategy.
- Infers the laws and rules governing normal or abnormal network traffic, in order to model its operating dynamics.
- Being based on a rule-based detection strategy, it is more effective against previously unknown phenomena and robust against obfuscation mechanisms.

The main aim in network anomaly detection is effectively spotting hostile events within the traffic pattern associated with network operations, distinguishing them from normal activities. This can be accomplished only by acquiring a priori knowledge about any kind of hostile behavior that can potentially affect the network (which is practically impossible) or, more easily, by building a model general enough to describe normal network behavior and detect violations from it. Earlier detection frameworks were only able to recognize already known phenomena within traffic data by using pre-trained models that match specific events against pre-classified chains of traffic patterns. More recent statistics-based approaches can detect outliers with respect to a statistical idealization of normal network behavior. Clearly, while the former approach cannot detect previously unknown phenomena (zero-day attacks), the latter has limited effectiveness since it cannot be aware of anomalous behaviors that do not generate significant changes in traffic volumes. Machine learning allows the development of adaptive, non-parametric detection strategies based on understanding the network dynamics: through a proper training phase, they acquire more precise knowledge about normal or anomalous phenomena in order to classify and handle more effectively any kind of behavior that can be observed on the network. Accordingly, we present a new anomaly detection strategy based on supervised machine learning, and more precisely on a batch relevance-based fuzzified learning algorithm, known as U-BRAIN, aiming at understanding through inductive inference the specific laws and rules governing normal or abnormal network traffic, in order to reliably model its operating dynamics. The inferred rules can be applied in real time to online network traffic. This proposal appears promising both in terms of identification accuracy and in robustness/flexibility when coping with uncertainty in the detection/classification process, as verified through extensive evaluation experiments.
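
The following is only an illustrative sketch of the general idea, not the authors' U-BRAIN implementation: matching inferred DNF rules against a traffic record, with a fuzzy membership value standing in for uncertain (missing) attributes. The rule format, attribute names, and threshold are all invented.

```python
# Toy DNF rule matching with fuzzy handling of missing attributes.

def term_membership(term: dict, record: dict, unknown: float = 0.5) -> float:
    """Degree to which one conjunctive term matches a record.
    Missing attributes contribute a fuzzy membership value 'unknown'."""
    degree = 1.0
    for attr, expected in term.items():
        if attr not in record:
            degree = min(degree, unknown)          # uncertain evidence
        elif record[attr] != expected:
            return 0.0                             # hard mismatch
    return degree

def classify(dnf_rules: list[dict], record: dict, threshold: float = 0.5) -> bool:
    """Flag a record as anomalous if any term fires above the threshold."""
    return max((term_membership(t, record) for t in dnf_rules), default=0.0) >= threshold

# Example with two invented rules over invented traffic attributes.
rules = [{"proto": "tcp", "flag": "S0"}, {"service": "private", "high_count": True}]
print(classify(rules, {"proto": "tcp", "flag": "S0", "service": "http"}))  # True
```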


International Journal of Modern Physics C | 2013

Neural Network aided Glitch-Burst Discrimination and Glitch Classification

Salvatore Rampone; V. Pierro; Luigi Troiano; I. M. Pinto

We investigate the potential of neural-network-based classifiers for discriminating gravitational wave bursts (GWBs) of a given canonical family (e.g. core-collapse supernova waveforms) from typical transient instrumental artifacts (glitches) in the data of a single detector. The further classification of glitches into typical sets is also explored. In order to provide a proof of concept, we use the core-collapse supernova waveform catalog produced by H. Dimmelmeier and co-workers, and the database of glitches observed in Laser Interferometer Gravitational-wave Observatory (LIGO) data maintained by P. Saulson and co-workers, to construct datasets of (windowed) transient waveforms (glitches and bursts) in additive (Gaussian and compound-Gaussian) noise with different signal-to-noise ratios (SNR). Principal component analysis (PCA) is next implemented for reducing data dimensionality, yielding results consistent with, and extending, those in the literature. Then, a multilayer perceptron trained by a backpropagation algorithm (MLP-BP) is fitted on a data subset and used to classify the transients as glitch or burst. A Self-Organizing Map (SOM) architecture is finally used to classify the glitches. The glitch/burst discrimination and glitch classification abilities are gauged in terms of the related truth tables. Preliminary results suggest that the approach is effective and robust throughout the SNR range of practical interest. Perspective applications pertain both to distributed (network, multi-sensor) detection of GWBs, where some intelligence at the single-node level can be introduced, and to instrument diagnostics/optimization, where spurious transients can be identified, classified, and hopefully traced back to their entry points.
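
A minimal sketch of the PCA-plus-MLP stage described above, using scikit-learn in place of the authors' MLP-BP implementation. The data here is synthetic stand-in noise; the real inputs were windowed transient waveforms, and the labels and component count are assumptions.

```python
# PCA for dimensionality reduction followed by an MLP glitch/burst classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 256))          # stand-in for windowed transients
y = rng.integers(0, 2, size=500)         # 0 = glitch, 1 = burst (invented labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(PCA(n_components=20),                   # reduce dimensionality
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))           # accuracy on held-out transients
```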


BMC Bioinformatics | 2014

Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications

Gianni D'Angelo; Salvatore Rampone

Background: The huge quantity of data produced in biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load-balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general-purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed iteratively by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), forming a probability distribution that drives the selection of the term literals. Its great versatility makes U-BRAIN applicable in many fields in which there are data to be analyzed. However, the memory and execution time required are of order O(n³) and O(n⁵), respectively, so the algorithm is unaffordable for huge data sets.
Results: We find mathematical and programming solutions that lead towards the implementation of the U-BRAIN algorithm on parallel computers. First we give a dynamic programming model of the U-BRAIN algorithm; then we minimize the representation of the relevances. When the data are large we are forced to use mass memory, and depending on where the data are actually stored, the access times can be quite different. Following an evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the cost of communication between different memories (RAM, cache, mass, virtual) and to achieve efficient I/O performance, we design a mass storage structure able to access its data with a high degree of temporal and spatial locality. We then develop a parallel implementation of the algorithm, modeled as an SPMD system together with a message-passing programming paradigm. We adopt the high-level message-passing system MPI (Message Passing Interface) in its version for the Java programming language, MPJ. The parallel processing is organized into four stages: partitioning, communication, agglomeration, and mapping. The decomposition of the U-BRAIN algorithm requires the design of a communication protocol among the processors involved. Efficient synchronization design is also discussed.
Conclusions: In the context of a collaboration between public and private institutions, the parallel model of U-BRAIN has been implemented and tested on the Intel Xeon E7xxx and E5xxx families of the CRESCO structure of the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), developed in the framework of the European Grid Infrastructure (EGI), a series of efforts to provide access to high-throughput computing resources across Europe using grid computing techniques. The implementation minimizes both the memory space and the execution time.
The test data used in this study are IPDATA (Irvine Primate splice-junction DATA set), a subset of HS3D (Homo Sapiens Splice Sites Dataset), and a subset of COSMIC (the Catalogue of Somatic Mutations in Cancer). The execution time and the speed-up on IPDATA reach their best values at about 90 processors; beyond that, the parallelization advantage is offset by the greater cost of non-local communication between the processors. A similar behaviour is evident on HS3D, but at a greater number of processors, evidencing the direct relationship between data size and parallelization gain. This behaviour is confirmed on COSMIC. Overall, the results show that the parallel version is up to 30 times faster than the serial one.
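
The four-stage SPMD pattern described above (partitioning, communication, agglomeration, mapping) can be illustrated with a minimal mpi4py sketch. Note that the authors used MPJ, the Java MPI binding, not Python; the per-chunk work function below is a placeholder, not the U-BRAIN relevance computation.

```python
# Minimal SPMD partition/scatter/reduce pattern (run with: mpiexec -n 4 python spmd.py).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Partitioning: the root splits the instances into one chunk per processor.
if rank == 0:
    instances = list(range(1000))
    chunks = [instances[i::size] for i in range(size)]
else:
    chunks = None

# Communication: each processor receives its chunk.
local = comm.scatter(chunks, root=0)

# Placeholder for the per-instance work (e.g. relevance computation).
local_result = sum(x * x for x in local)

# Agglomeration/mapping: partial results are combined on the root.
total = comm.reduce(local_result, op=MPI.SUM, root=0)
if rank == 0:
    print(total)
```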


Soft Computing | 2017

Developing a trust model for pervasive computing based on Apriori association rules learning and Bayesian classification

Gianni D'Angelo; Salvatore Rampone; Francesco Palmieri

Pervasive computing is one of the latest and most advanced paradigms currently available in the computing arena. Its ability to distribute computational services within the environments where people live, work, or socialize makes issues such as privacy, trust, and identity more challenging than in traditional computing environments. In this work, we review these general issues and propose a pervasive computing architecture based on a simple but effective trust model that is better able to cope with them. The proposed architecture combines several artificial intelligence techniques to achieve close resemblance to human-like decision making. Accordingly, the Apriori algorithm is first used to extract the behavioral patterns adopted by users during their network interactions. A naïve Bayes classifier is then used for the final decision, expressed in terms of the probability of user trustworthiness. To validate our approach, we applied it to some typical ubiquitous computing scenarios. The results demonstrate the usefulness of the approach and its competitiveness against existing ones.
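
A toy sketch of the two-stage pipeline described above: a hand-rolled frequent-itemset step standing in for Apriori, followed by a naïve Bayes posterior for trustworthiness. All behaviors, counts, and probabilities below are invented for illustration.

```python
# Stage 1: toy frequent-pair mining over interaction logs (Apriori stand-in).
from collections import Counter
from itertools import combinations

transactions = [{"keeps_promises", "shares_resources"},
                {"keeps_promises", "responds_quickly"},
                {"keeps_promises", "shares_resources"},
                {"drops_sessions"}]

pair_counts = Counter(p for t in transactions for p in combinations(sorted(t), 2))
frequent = {p for p, c in pair_counts.items() if c / len(transactions) >= 0.5}
print(frequent)   # {('keeps_promises', 'shares_resources')}

# Stage 2: naïve Bayes posterior P(class | observed behaviors).
prior = {"trust": 0.7, "distrust": 0.3}
lik = {"trust":    {"keeps_promises": 0.9, "drops_sessions": 0.1},
       "distrust": {"keeps_promises": 0.3, "drops_sessions": 0.6}}

def posterior(behaviors: set) -> dict:
    score = {c: prior[c] for c in prior}
    for c in score:
        for b in behaviors:
            score[c] *= lik[c].get(b, 0.5)     # unseen behaviors are neutral
    z = sum(score.values())
    return {c: v / z for c, v in score.items()}

print(posterior({"keeps_promises"}))   # high probability of trustworthiness
```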


2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC) | 2015

An Artificial Intelligence-Based Trust Model for Pervasive Computing

Gianni D'Angelo; Salvatore Rampone; Francesco Palmieri

Pervasive Computing is one of the latest and most advanced paradigms currently available in the computing arena. Its ability to distribute computational services within the environments where people live, work, or socialize makes issues such as privacy, trust, and identity more challenging than in traditional computing environments. In this work we review these general issues and propose a Pervasive Computing architecture based on a simple but effective trust model that is better able to cope with them. The proposed architecture combines several Artificial Intelligence techniques to achieve close resemblance to human-like decision making. Accordingly, the Apriori algorithm is first used to extract the behavioral patterns adopted by users during their network interactions. A Naïve Bayes classifier is then used for the final decision, expressed in terms of the probability of user trustworthiness. To validate our approach we applied it to some typical ubiquitous computing scenarios. The results demonstrate the usefulness of the approach and its competitiveness against existing ones.


IEEE International Workshop on Metrology for Aerospace | 2014

Diagnosis of aerospace structure defects by a HPC implemented soft computing algorithm

Gianni D'Angelo; Salvatore Rampone

This study concerns the diagnosis of aerospace structure defects by applying an HPC parallel implementation of a novel learning algorithm, named U-BRAIN. The soft computing approach allows advanced multi-parameter data processing in composite materials testing. The HPC parallel implementation overcomes the limits due to the great amount of data and the complexity of data processing. Our experimental results illustrate the effectiveness of the parallel U-BRAIN implementation as a defect classifier for aerospace structures. The resulting system is implemented on a Linux-based cluster with multi-core architecture.


International Journal of Modern Physics C | 2011

NEURAL NETWORK AIDED EVALUATION OF LANDSLIDE SUSCEPTIBILITY IN SOUTHERN ITALY

Salvatore Rampone; Alessio Valente

Landslide hazard mapping is often performed through the identification and analysis of hillslope instability factors. In heuristic approaches, these factors are rated by attributing scores based on the assumed role each plays in controlling the development of a sliding process. The objective of this research is to forecast landslide susceptibility through the application of artificial neural networks. Given the availability of past-event data, we focused mainly on the Calabria region (Italy). Vectors of eight hillslope factors (features) were considered for each event in this area (lithology, permeability, slope angle, vegetation cover in terms of type and density, land use, yearly rainfall, and yearly temperature range). We collected 106 vectors, each labeled with its landslide susceptibility, which is assumed to be the output variable. Subsequently, a set of these labeled vectors (examples) was used to train an artificial neural network of the Multi-Layer Perceptron (MLP) category to evaluate landslide susceptibility. The neural network predictions were then verified on the vectors not used in training (the validation set), i.e. at previously unseen locations. The comparison between the expected output and the neural network output showed satisfactory results, with a prediction discrepancy of less than 4.3%. This is an encouraging preliminary step towards a systematic introduction of artificial neural networks in landslide hazard assessment and mapping in the considered area.
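
A minimal sketch of the workflow described above, with scikit-learn's MLP standing in for the authors' network. The data shape (106 vectors, 8 hillslope features) follows the abstract; the values, network size, and split ratio are synthetic assumptions.

```python
# MLP regression of landslide susceptibility from 8 hillslope features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.random((106, 8))              # lithology, permeability, slope angle, ...
y = rng.random(106)                   # landslide susceptibility (target label)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000))
model.fit(X_tr, y_tr)
pred = model.predict(X_val)           # predictions at previously unseen locations
print(np.mean(np.abs(pred - y_val)))  # discrepancy on the validation set
```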


Ambient Intelligence | 2017

Prediction of seasonal temperature using soft computing techniques: application in Benevento (Southern Italy) area

Salvatore Rampone; Alessio Valente

In this work two soft computing methods, Artificial Neural Networks and Genetic Programming, are proposed in order to forecast the mean air temperature that will occur in future seasons. The area in which the soft computing techniques were applied is that of the surroundings of the town of Benevento, in the south of Italy, having the geographic coordinates (lat.

Collaboration


Top Co-Authors

A. Aloisio

University of Naples Federico II
