
Publications

Featured research published by Simone Bassis.


Pattern Recognition | 2014

DANCo: An intrinsic dimensionality estimator exploiting angle and norm concentration

Claudio Ceruti; Simone Bassis; Alessandro Rozza; Gabriele Lombardi; Elena Casiraghi; Paola Campadelli

In the past decade the development of automatic intrinsic dimensionality estimators has gained considerable attention due to its relevance in several application fields. However, most of the proposed solutions prove not to be robust on noisy datasets, and provide unreliable results when the intrinsic dimensionality of the input dataset is high and the manifold where the points are assumed to lie is nonlinearly embedded in a higher-dimensional space. In this paper we propose a novel intrinsic dimensionality estimator (DANCo) and its faster variant (FastDANCo), which exploit the information conveyed both by the normalized nearest-neighbor distances and by the angles computed on pairs of neighboring points. The effectiveness and robustness of the proposed algorithms are assessed by experiments on synthetic and real datasets, by comparative evaluation with state-of-the-art methodologies, and by significance tests.
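The norm-concentration side of this idea can be illustrated with the classic Levina-Bickel maximum-likelihood estimator, which likewise works from nearest-neighbor distances. The sketch below is a simplified stand-in, not the DANCo algorithm itself (DANCo additionally models angle concentration and calibrates against synthetic data):

```python
import math

def mle_intrinsic_dim(points, k=10):
    """Levina-Bickel maximum-likelihood intrinsic dimensionality
    estimate from each point's k nearest-neighbor distances.
    A simplified stand-in for the norm-concentration component only."""
    inv_dims = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        knn = dists[:k]            # k smallest distances
        rk = knn[-1]               # distance to the k-th neighbor
        # average log-ratio of the k-th to the j-th neighbor distance
        s = sum(math.log(rk / d) for d in knn[:-1])
        inv_dims.append(s / (k - 1))
    # average the inverse estimates, then invert
    return 1.0 / (sum(inv_dims) / len(inv_dims))
```

Applied to points sampled from a plane embedded in 3-D, the estimate comes out close to 2 rather than 3, which is exactly the behavior an intrinsic dimensionality estimator is asked to exhibit.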


Information Sciences | 2009

Feature selection via Boolean independent component analysis

Bruno Apolloni; Simone Bassis; Andrea Brega

We devise a feature selection method in terms of a follow-out utility of a special classification procedure. In turn, we root the latter in binary features which we extract from the input patterns with a wrapper method. The whole contrivance results in a procedure that is progressive in two respects. As for features, we first compute a very essential representation of them in terms of Boolean independent components in order to reduce their entropy. Then we reverse the representation mapping to discover the subset of the original features supporting a successful classification. As for the classification, we split it into two less hard tasks. With the former we look for a clustering of input patterns that satisfies loose consistency constraints and benefits from the conciseness of the binary representation. With the latter we attribute labels to the clusters through the combined use of basically linear separators. We implement the method through a relatively quick numerical procedure by assembling a set of connectionist and symbolic routines, which we test on the benchmark of feature selection of DNA microarray data in cancer diagnosis and other ancillary datasets.


Information Sciences | 2006

Controlling the losing probability in a monotone game

Bruno Apolloni; Simone Bassis; Sabrina Gaito; Dario Malchiodi; Italo Zoppis

We deal with a complex game between Alice and Bob where each contender's probability of victory grows monotonically, by unknown amounts, with the resources employed. For a fixed effort on Alice's part, Bob increases his resources on the basis of the results of each round (victory, tie or defeat) with the aim of reducing the probability of defeat to below a given threshold. We read this goal in terms of computing a confidence interval for the probability of losing, and realize that the moves in some contests may bring in an indeterminacy trap: in certain games Bob cannot simultaneously have both a low probability-of-defeat measure and a narrow confidence interval. We use the inferential mechanism called the twisting argument to compute the above interval on the basis of two joint statistics. Careful use of such statistics allows us to avoid indeterminacy.
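For intuition about the interval being controlled, a standard frequentist interval for a losing probability can be computed as below. The Wilson score interval is a textbook stand-in, not the twisting-argument interval the paper derives:

```python
import math

def wilson_interval(losses, n, z=1.96):
    """Wilson score confidence interval (approx. 95% for z = 1.96)
    for the probability of losing, given `losses` defeats in n rounds.
    A textbook stand-in, not the paper's twisting-argument interval."""
    p = losses / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return max(0.0, center - half), min(1.0, center + half)
```

Here the interval narrows as more rounds are played; the indeterminacy trap described above arises because, in the adaptive game, playing more rounds also changes the resources and hence the probability being estimated.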


Archive | 2015

Advances in Neural Networks: Computational and Theoretical Issues

Simone Bassis; Anna Esposito; Francesco Carlo Morabito

This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics, grouped into chapters devoted to the discussion of novelties and innovations in the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and bio-inspired memristor-based networks. Providing insights into the latest research interests of a pool of international experts coming from different research fields, the volume will be valuable to all those with an interest in a holistic approach to implementing believable, autonomous, adaptive and context-aware Information Communication Technologies.


Systems, Man and Cybernetics | 2011

Confidence About Possible Explanations

Bruno Apolloni; Simone Bassis

We revise the notion of confidence with which we estimate the parameters of a given distribution law in terms of their compatibility with the sample we have observed. This recent perspective allows us to get a more intuitive feeling for the crucial concept of the confidence interval in parametric inference, together with quick tools for computing such intervals exactly even in conditions far from the common Gaussian framework where standard methods fail. The key artifact consists of working with a representation of the compatible parameters in terms of random variables without priors. This leads to new estimators that meet the most demanding requirements of modern statistical inference in terms of learning algorithms. We support our methods with a consistent theoretical framework, general-purpose estimation procedures, and a set of paradigmatic benchmarks.


Italian Workshop on Neural Networks | 2005

Learning Continuous Functions through a New Linear Regression Method

Bruno Apolloni; Simone Bassis; Sabrina Gaito; D. Iannizzi; Dario Malchiodi

We revisit the linear regression problem as a computational learning problem whose task is to identify a confidence region for a continuous function belonging, in particular, to the family of straight lines. Within the Algorithmic Inference framework this function is intended to explain a relation between pairs of variables that are observed through a limited sample. Hence it is a random item within the above family, and we look for a partial order relation allowing us to state a cumulative distribution function over the function specifications, hence a pair of quantiles identifying the confidence region. The region we compute in this way is theoretically and numerically attested to entirely contain the goal function with a given confidence. Its shape is quite different from the analogous region obtained through conventional methods as a collation of confidence intervals found for the expected value of the dependent variable as a function of the independent one.
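The conventional baseline the authors contrast their region with is ordinary least squares. A minimal fit, using only the standard textbook formulas and none of the Algorithmic Inference machinery, looks like this:

```python
def ols_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x: the conventional
    regression baseline whose pointwise confidence intervals the
    paper's confidence region is contrasted with."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope = covariance(x, y) / variance(x); intercept from the means
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b
```

The conventional region is then assembled from a confidence interval around a + b*x at each value of x, which is what produces the shape the paper's region differs from.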


International Symposium on Neural Networks | 2011

Training a network of mobile neurons

Bruno Apolloni; Simone Bassis; Lorenzo Valerio

We introduce a new paradigm of neural networks where neurons autonomously search for the best reciprocal position in a topological space so as to exchange information more profitably. The idea that elementary processors move within a network to reach a proper position is borne out by biological neurons in brain morphogenesis. The basic rule we state for this dynamics is that a neuron is attracted by the mates which are most informative and repelled by those which are most similar to it. By embedding this rule into a Newtonian dynamics, we obtain a network which autonomously organizes its layout. Thanks to this further adaptation, the network proves to be robustly trainable through an extended version of the back-propagation algorithm, even in the case of deep architectures. We test this network on two classic benchmarks and thereby gain many insights into how the network behaves, and when and why it succeeds.
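The attract/repel rule can be sketched as a force-based position update. The force law, the `info` and `sim` matrices, and the function name below are illustrative assumptions for the sketch, not the paper's actual equations:

```python
import math

def step_positions(pos, info, sim, lr=0.01):
    """One Euler step of a hypothesized attract/repel dynamics:
    neuron i is pulled toward informative mates (info[i][j] > 0)
    and pushed away from similar ones (sim[i][j] > 0).
    Illustrative only; the paper embeds its rule in a Newtonian dynamics."""
    new = []
    for i, (xi, yi) in enumerate(pos):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            dist = math.hypot(dx, dy) + 1e-9
            # net coefficient: attraction by informativeness minus
            # repulsion by similarity, along the unit direction to j
            coef = info[i][j] - sim[i][j]
            fx += coef * dx / dist
            fy += coef * dy / dist
        new.append((xi + lr * fx, yi + lr * fy))
    return new
```

Iterating such a step lets the layout self-organize before or during training, which is the adaptation the abstract credits for robust trainability.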


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2007

Fitting opportunistic networks data with a Pareto distribution

Bruno Apolloni; Simone Bassis; Sabrina Gaito

We contrast the properties and parameters of a Pareto distribution law with the behavior of the memory-endowed processes underlying the intercontact times of opportunistic networks. Within a general model where mobile agents meet as a consequence of a common goal they are carrying out, the memory of the process identifies with the agent's intention toward a goal, where intention consists, in turn, in the introduction of asymmetries into a random walk. With these elementary hypotheses we arrive at a very elementary agent mobility model as a semantic counterpart of the Pareto law. In particular, this model gives a suitable meaning to the law's parameters and a rationale for its fit to a benchmark of real intercontact times.
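The fitting step itself is standard: given a sample of intercontact times, the shape parameter of a Pareto law has a closed-form maximum-likelihood estimate. This textbook fit is shown only to illustrate the kind of law being reasoned about, not the paper's own procedure:

```python
import math

def fit_pareto(samples, xmin=None):
    """Maximum-likelihood fit of a Pareto law P(X > x) = (xmin/x)**alpha
    to a sample of intercontact times: alpha_hat = n / sum(log(x/xmin)).
    Textbook estimator, not the paper's agent-based derivation."""
    if xmin is None:
        xmin = min(samples)          # smallest observation as the scale
    n = len(samples)
    alpha = n / sum(math.log(x / xmin) for x in samples)
    return alpha, xmin
```

On synthetic Pareto data the estimator recovers the generating shape parameter closely, which is the sanity check one would run before fitting real intercontact traces.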


Italian Workshop on Neural Networks | 2005

Computing Confidence Intervals for the Risk of an SVM Classifier through Algorithmic Inference

Bruno Apolloni; Simone Bassis; Sabrina Gaito; Dario Malchiodi; A. Minora

We reconsider, within the Algorithmic Inference framework, the accuracy of a Boolean function learnt from examples. This framework is especially suitable when the Boolean function is learnt through a Support Vector Machine, since (i) we know the number of support vectors actually employed as an ancillary output of the learning procedure, and (ii) we can appreciate confidence intervals of the misclassification probability exactly as a function of the cardinality of these vectors. As a result we obtain confidence intervals that are up to an order of magnitude narrower than those supplied in the literature, having a slightly different meaning due to the different approach they come from, but the same operational function. We numerically check the coverage of these intervals.


Italian Workshop on Neural Nets | 2003

Cooperative Games in a Stochastic Environment

Bruno Apolloni; Simone Bassis; Sabrina Gaito; Dario Malchiodi

We introduce a very complex game based on an approximate solution of an NP-hard problem, so that the probability of victory grows monotonically, but by an unknown amount, with the resources each player employs. We formulate this model in the computational learning framework and focus on the problem of computing a confidence interval for the losing probability. We deal with the problem of reducing the width of this interval below a given threshold in both batch and on-line modality. While the former leads to a feasible polynomial complexity, the on-line learning strategy may get stuck in an indeterminacy trap: the more we play the game, the broader the confidence interval becomes. To avoid this indeterminacy we organize the knowledge in a better way, introducing the notion of a virtual game to achieve the goal efficiently. Then we extend the one-player game to a team mode. Namely, we improve the success of a team by redistributing the resources among the players and exploiting their mutual cooperation to treat the indeterminacy phenomenon suitably.

Collaboration

Top co-authors of Simone Bassis:
Anna Esposito

Seconda Università degli Studi di Napoli


Francesco Carlo Morabito

Mediterranea University of Reggio Calabria


Lorenzo Valerio

National Research Council


Luca Ferrari

National Autonomous University of Mexico
