Gianni D'Angelo
University of Sannio
Publication
Featured researches published by Gianni D'Angelo.
BMC Bioinformatics | 2014
Gianni D'Angelo; Salvatore Rampone
Background: The huge quantity of data produced in biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general-purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty in data (U-BRAIN). U-BRAIN is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed iteratively by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; these conditions allow the computation of a set of coefficients (relevances) for each attribute (literal) that form a probability distribution, guiding the selection of the term literals. Its great versatility makes U-BRAIN applicable in many fields where there are data to be analyzed. However, the memory and execution time required are of order O(n³) and O(n⁵), respectively, so the algorithm is unaffordable for huge data sets. Results: We find mathematical and programming solutions that lead towards an implementation of the U-BRAIN algorithm on parallel computers. First we give a dynamic programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are very large we are forced to use mass memory, and, depending on where the data are actually stored, access times can differ widely.
According to an evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the cost of communication between different memories (RAM, cache, mass, virtual) and to achieve efficient I/O performance, we design a mass storage structure that accesses its data with a high degree of temporal and spatial locality. We then develop a parallel implementation of the algorithm, modeled as an SPMD system combined with a message-passing programming paradigm. Here we adopt the high-level message-passing system MPI (Message Passing Interface) in its version for the Java programming language, MPJ. The parallel processing is organized into four stages: partitioning, communication, agglomeration, and mapping. The decomposition of the U-BRAIN algorithm requires the design of a communication protocol among the processors involved. Efficient synchronization design is also discussed. Conclusions: In the context of a collaboration between public and private institutions, the parallel model of U-BRAIN has been implemented and tested on the Intel Xeon E7xxx and E5xxx families of the CRESCO infrastructure of the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), developed in the framework of the European Grid Infrastructure (EGI), a series of efforts to provide access to high-throughput computing resources across Europe using grid computing techniques. The implementation minimizes both memory space and execution time. The test data used in this study are IPDATA (Irvine Primate splice-junction DATA set), a subset of HS3D (Homo Sapiens Splice Sites Dataset), and a subset of COSMIC (the Catalogue of Somatic Mutations in Cancer). The execution time and the speed-up on IPDATA reach their best values at about 90 processors; beyond that, the parallelization advantage is offset by the greater cost of non-local communication between processors.
A similar behaviour is evident on HS3D, but at a greater number of processors, evidencing the direct relationship between data size and parallelization gain. This behaviour is confirmed on COSMIC. Overall, the results show that the parallel version is up to 30 times faster than the serial one.
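As a rough illustration of the term-construction idea described above, the following is a minimal greedy DNF learner in the same spirit; this is a toy sketch, not the U-BRAIN implementation (the real algorithm's relevance coefficients, handling of missing bits, and complexity bounds are more elaborate, and all names here are ours):

```python
# Toy greedy DNF learner, loosely inspired by the U-BRAIN idea of
# scoring literals by how well they separate positives from negatives.
# Assumes complete (no missing bits), consistent binary instances.

def satisfies(term, instance):
    """A term is a set of (index, value) literals; an instance is a
    tuple of 0/1 bits. The term is satisfied when all literals match."""
    return all(instance[i] == v for i, v in term)

def learn_dnf(positives, negatives):
    """Return a list of terms (a DNF formula) consistent with the data:
    every positive satisfies some term, no negative satisfies any."""
    uncovered = list(positives)
    formula = []
    while uncovered:
        seed = uncovered[0]          # build a term around one positive
        term, neg = set(), list(negatives)
        while neg:                   # add literals until all negatives excluded
            # relevance-like score: how many remaining negatives each
            # literal of the seed instance would rule out
            best = max(
                ((i, seed[i]) for i in range(len(seed))
                 if (i, seed[i]) not in term),
                key=lambda lit: sum(1 for n in neg if n[lit[0]] != lit[1]),
            )
            term.add(best)
            neg = [n for n in neg if satisfies(term, n)]
        formula.append(frozenset(term))
        uncovered = [p for p in uncovered
                     if not any(satisfies(t, p) for t in formula)]
    return formula
```

The seed positive always satisfies its own term, so each outer iteration covers at least one new positive and the loop terminates.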
Soft Computing | 2017
Gianni D'Angelo; Salvatore Rampone; Francesco Palmieri
Pervasive computing is one of the latest and most advanced paradigms currently available in the computing arena. Its ability to distribute computational services within the environments where people live, work, or socialize makes issues such as privacy, trust, and identity more challenging than in traditional computing environments. In this work, we review these general issues and propose a pervasive computing architecture based on a simple but effective trust model that is better able to cope with them. The proposed architecture combines several artificial intelligence techniques to achieve close resemblance to human-like decision making. Accordingly, the Apriori algorithm is first used to extract the behavioral patterns adopted by users during their network interactions. A Naïve Bayes classifier is then used for the final decision, expressed in terms of the probability of user trustworthiness. To validate our approach, we applied it to some typical ubiquitous computing scenarios. The results obtained demonstrate the usefulness of this approach and its competitiveness against existing ones.
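The Apriori-plus-Naïve-Bayes pipeline described above can be sketched roughly as follows; this is an illustrative toy, not the paper's implementation (the support threshold, the binary "pattern present" feature encoding, and all names are our assumptions):

```python
# Toy sketch: mine frequent behavioral patterns with a minimal Apriori,
# then score a new session with Naive Bayes over pattern-presence
# features. Sessions are frozensets of observed actions.

def apriori(transactions, min_support):
    """Tiny Apriori: frequent itemsets (frozensets) whose support
    (fraction of transactions containing them) >= min_support."""
    n = len(transactions)
    singletons = {frozenset([i]) for t in transactions for i in t}
    level = [s for s in singletons
             if sum(s <= t for t in transactions) / n >= min_support]
    frequent, k = [], 1
    while level:
        frequent.extend(level)
        k += 1
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = [c for c in candidates
                 if sum(c <= t for t in transactions) / n >= min_support]
    return frequent

def trust_probability(session, trusted, untrusted, patterns, alpha=1.0):
    """Naive Bayes over binary 'pattern present' features, with Laplace
    smoothing; returns P(trustworthy | session)."""
    total = len(trusted) + len(untrusted)
    def score(cls_sessions):
        p = len(cls_sessions) / total            # class prior
        for pat in patterns:
            present = pat <= session
            count = sum((pat <= s) == present for s in cls_sessions)
            p *= (count + alpha) / (len(cls_sessions) + 2 * alpha)
        return p
    pt, pu = score(trusted), score(untrusted)
    return pt / (pt + pu)
```

A session whose actions match patterns frequent among trusted users scores close to 1, while anomalous behavior scores close to 0.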
2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC) | 2015
Gianni D'Angelo; Salvatore Rampone; Francesco Palmieri
Mobility Management and Wireless Access | 2007
Massimo Ficco; Maurizio D'Arienzo; Gianni D'Angelo
The proliferation of mobile devices and wireless technologies paves the way to new scenarios that attract the interest of service providers and business operators. However, for mobile users to receive the same services independently of the context in which they operate, several open issues need to be solved. One of these issues is related to current levels of security, which often require mobile users to exchange secret credentials with the network operators or an authentication server. This constraint limits mobile users who wish to access wireless services anywhere without caring about the configuration of the devices used. This paper proposes a new device that enables a mobile user to access wireless services in a seamless and automatic way, using a context-aware approach in ubiquitous/nomadic computing environments. The device implements a Bluetooth personal identification badge that automatically authenticates and authorizes a mobile user to access services independently of the mobile terminal used. Any user's terminal can retrieve the proper credentials in order to authenticate and gain access to different network access points. The design and implementation of a prototype of such a device is presented.
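The abstract does not spell out the badge's authentication exchange, but a generic challenge-response scheme of the kind such a badge could use might look like this (HMAC-based; the class names, the flow, and the key handling are all illustrative assumptions, not the paper's protocol):

```python
import hashlib
import hmac
import os

# Illustrative challenge-response sketch: the badge holds the user's
# secret credential, so the (possibly untrusted) terminal never sees it;
# the access point verifies against its own copy of the shared secret.

class Badge:
    def __init__(self, user_id, secret):
        self.user_id = user_id
        self._secret = secret                    # never leaves the badge

    def respond(self, challenge):
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class AccessPoint:
    def __init__(self, credentials):             # user_id -> shared secret
        self._credentials = credentials

    def new_challenge(self):
        return os.urandom(16)                    # fresh nonce per attempt

    def verify(self, user_id, challenge, response):
        secret = self._credentials.get(user_id)
        if secret is None:
            return False
        expected = hmac.new(secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)
```

Because each challenge is a fresh random nonce, a captured response cannot be replayed against a later challenge.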
IEEE International Workshop on Metrology for Aerospace | 2014
Gianni D'Angelo; Salvatore Rampone
This study concerns the diagnosis of aerospace structure defects by applying an HPC parallel implementation of a novel learning algorithm, named U-BRAIN. The soft computing approach allows advanced multi-parameter data processing in composite materials testing, while the HPC parallel implementation overcomes the limits due to the great amount of data and the complexity of data processing. Our experimental results illustrate the effectiveness of the U-BRAIN parallel implementation as a defect classifier in aerospace structures. The resulting system is implemented on a Linux-based cluster with a multi-core architecture.
IEEE International Workshop on Metrology for Aerospace | 2015
Gianni D'Angelo; Salvatore Rampone
The aim of this work is to classify the aerospace structure defects detected by eddy current non-destructive testing. The proposed method is based on the assumption that the defect is bound to the reaction of the probe coil impedance during the test. Impedance plane analysis is used to extract a feature vector from the shape of the coil impedance in the complex plane, through the use of some geometric parameters. Shape recognition is tested with three different machine-learning-based classifiers: decision trees, neural networks, and Naive Bayes. The performance of the proposed detection system is measured in terms of accuracy, sensitivity, specificity, precision, and the Matthews correlation coefficient. Several experiments are performed on a dataset of eddy current signal samples for aircraft structures. The results obtained demonstrate the usefulness of our approach and its competitiveness against existing descriptors.
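As an illustration of extracting geometric parameters from an impedance trajectory in the complex plane, one might compute features like the following (the paper's exact descriptors are not reproduced here; the feature names and choices are ours):

```python
import cmath
import math

def impedance_features(z):
    """Simple geometric features of a coil-impedance trajectory given as
    a list of complex samples: peak amplitude and phase, bounding-box
    aspect ratio, and the length of the traced curve."""
    peak = max(z, key=abs)                        # point farthest from origin
    re = [p.real for p in z]
    im = [p.imag for p in z]
    width, height = max(re) - min(re), max(im) - min(im)
    length = sum(abs(b - a) for a, b in zip(z, z[1:]))   # polyline length
    return {
        "amplitude": abs(peak),
        "phase_deg": math.degrees(cmath.phase(peak)),
        "aspect": height / width if width else float("inf"),
        "curve_length": length,
    }
```

The resulting dictionary can be flattened into a fixed-length feature vector and fed to any of the three classifiers mentioned above.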
IEEE Transactions on Instrumentation and Measurement | 2018
Gianni D'Angelo; Marco Laracca; Salvatore Rampone; Giovanni Betta
In this paper, we present a fast method for the classification of defects detected by eddy current testing (ECT), using defects derived from lab experiments. For each defect, the ECT magnetic field response for different EC-probe paths is represented on a complex plane to obtain Lissajous figures. Their shapes are described by a few geometrical parameters forming a feature vector. Such vectors are used as signatures of the defects detected by the probe at different crossing angles and distances from the defect. The effectiveness of the proposed approach is evaluated by measuring the performance of three machine-learning-based classifiers (Naïve Bayes, C4.5/J48 decision tree, and multilayer perceptron neural network) through the following metrics: area under the ROC curve, the Matthews correlation coefficient, and F-measure. The results confirm the usefulness of the proposed approach for defect detection and classification without the need for an overall scan of the faulty area, minimizing the effort and, consequently, the cost of an ECT test.
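Two of the evaluation metrics named above, the Matthews correlation coefficient and the F-measure, follow directly from a binary confusion matrix; a minimal sketch:

```python
import math

def mcc_and_f1(tp, fp, fn, tn):
    """Matthews correlation coefficient and F-measure from the counts of
    true positives, false positives, false negatives, true negatives.
    MCC ranges in [-1, 1]; F1 is the harmonic mean of precision/recall."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return mcc, f1
```

Unlike accuracy, MCC stays informative when defect and non-defect samples are heavily imbalanced, which is why it is often preferred in defect detection.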
Applied Soft Computing | 2015
Gianni D'Angelo; Francesco Palmieri; Massimo Ficco; Salvatore Rampone
IEEE International Workshop on Metrology for Aerospace | 2017
Gianni D'Angelo; Massimo Tipaldi; Luigi Glielmo; Salvatore Rampone
IEEE International Workshop on Metrology for Aerospace | 2016
Gianni D'Angelo; Marco Laracca; Salvatore Rampone