
Publication


Featured research published by Antonino Freno.


Pattern Analysis and Applications | 2015

Techniques for dealing with incomplete data: a tutorial and survey

Marco Aste; Massimo Boninsegna; Antonino Freno; Edmondo Trentin

Real-world applications of pattern recognition and machine learning algorithms often present situations where the data are partly missing, corrupted by noise, or otherwise incomplete. In spite of that, developments in the machine learning community over the last decade have mostly focused on the mathematical analysis of learning machines, making it difficult for practitioners to gain an overview of the major approaches to this issue. Paradoxically, as a consequence, even established methodologies rooted in statistics appear to have long been forgotten. Although the relevant literature is too wide for any exhaustive coverage to be feasible today, the first goal of this paper is to provide the reader with a significant survey of major, theoretically sound techniques for dealing with the tasks of pattern recognition, machine learning, and density estimation from incomplete data. Secondly, the paper aims to serve as a viable tutorial for the interested practitioner, allowing for a self-contained, step-by-step understanding of several approaches. An effort is made to categorize the different techniques as follows: (1) heuristic methods; (2) statistical approaches; (3) connectionist-oriented techniques; (4) other approaches (dynamical systems, adversarial deletion of features, etc.).
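As a minimal illustration of the heuristic category surveyed in the paper, mean imputation fills each missing entry with the observed mean of its feature. This is a sketch of the general idea, not the paper's own method:

```python
import numpy as np

def mean_impute(X):
    """Replace NaN entries with the per-column mean of the observed values."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)   # column means, ignoring missing entries
    rows, cols = np.where(np.isnan(X))  # positions of the missing entries
    X[rows, cols] = col_means[cols]     # fill each gap with its column mean
    return X

# Two features, one missing value in each column:
X = np.array([[1.0, np.nan], [3.0, 4.0], [np.nan, 8.0]])
# observed column means are [2.0, 6.0], so the gaps are filled with 2.0 and 6.0
```

Mean imputation ignores correlations between features, which is exactly the weakness that the statistical approaches in category (2), such as EM-based estimation, are designed to address.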


Knowledge Discovery and Data Mining | 2009

Scalable pseudo-likelihood estimation in hybrid random fields

Antonino Freno; Edmondo Trentin; Marco Gori

Learning probabilistic graphical models from high-dimensional datasets is a computationally challenging task. In many interesting applications, the domain dimensionality is such as to prevent state-of-the-art statistical learning techniques from delivering accurate models in reasonable time. This paper presents a hybrid random field model for pseudo-likelihood estimation in high-dimensional domains. A theoretical analysis proves that the class of pseudo-likelihood distributions representable by hybrid random fields strictly includes the class of joint probability distributions representable by Bayesian networks. In order to learn hybrid random fields from data, we develop the Markov Blanket Merging algorithm. Theoretical and experimental evidence shows that Markov Blanket Merging scales up very well to high-dimensional datasets. As compared to other widely used statistical learning techniques, Markov Blanket Merging delivers accurate results in a number of link prediction tasks, while also achieving significant improvements in computational efficiency. Our software implementation of the models investigated in this paper is publicly available at http://www.dii.unisi.it/~freno/. The same website also hosts the datasets used in this work that are not available elsewhere with the same preprocessing used in our experiments.
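The pseudo-likelihood objective replaces the intractable joint likelihood with a product of local conditionals, one per variable. A sketch of the log-pseudo-likelihood computation, where `cond_prob` is a hypothetical callable standing in for the model's local conditional distributions (in a hybrid random field these would come from the local Bayesian network of each variable):

```python
import math

def log_pseudo_likelihood(sample, cond_prob):
    """Sum of log conditional probabilities of each variable given the others.

    `cond_prob(i, sample)` must return P(x_i | x_{-i}); this is an
    assumed interface, not the paper's actual API.
    """
    return sum(math.log(cond_prob(i, sample)) for i in range(len(sample)))

# Toy model: every binary variable is conditionally uniform given the rest.
uniform = lambda i, s: 0.5
# For a 3-variable sample, the log-pseudo-likelihood is 3 * log(0.5).
```

Because each conditional depends only on a variable's Markov blanket, the sum decomposes into small local terms, which is what makes pseudo-likelihood estimation scale to high-dimensional domains.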


International Symposium on Neural Networks | 2009

Scalable statistical learning: A modular Bayesian/Markov network approach

Antonino Freno; Edmondo Trentin; Marco Gori

In this paper we propose a hybrid probabilistic graphical model for pseudo-likelihood estimation in high-dimensional domains. The model is based on Bayesian networks and Markov random fields. On the one hand, we prove that the proposed model is more expressive than Bayesian networks in terms of the representable distributions. On the other hand, we develop a computationally efficient structure learning algorithm, and we provide theoretical and experimental evidence showing how the modular nature of our model allows structure learning to scale up very well to high-dimensional datasets. The capability of the hybrid model to accurately learn complex networks of conditional independencies is illustrated by promising results in pattern recognition applications.


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2007

Selecting features by learning Markov blankets

Antonino Freno

In this paper I propose a novel feature selection technique based on Bayesian networks. The main idea is to exploit the conditional independencies entailed by Bayesian networks in order to discard features that are not directly relevant for classification tasks. An algorithm for learning Bayesian networks and its use in feature selection are illustrated. The advantages of this algorithm with respect to other ones are then discussed. Finally, experimental results are offered which confirm the reliability of the algorithm.
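The core idea, that a variable is independent of everything outside its Markov blanket given the blanket itself, can be sketched once a Bayesian network structure is available. The following toy helper (an illustration, not the paper's algorithm) extracts the Markov blanket of a class variable from a DAG encoded as a parent map:

```python
def markov_blanket(dag, target):
    """Markov blanket of `target` in a DAG given as {node: set_of_parents}.

    The blanket is the union of parents, children, and the children's
    other parents; features outside it can be discarded when
    classifying `target`.
    """
    parents = set(dag.get(target, set()))
    children = {n for n, ps in dag.items() if target in ps}
    spouses = set().union(*(dag[c] for c in children)) if children else set()
    return (parents | children | spouses) - {target}

# Toy DAG: A -> C <- B, C -> D, with C as the class variable.
dag = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
# markov_blanket(dag, "C") keeps exactly {A, B, D} as selected features
```

Any feature outside the returned set is conditionally independent of the class given the blanket, which is what licenses discarding it without loss of classification-relevant information.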


Archive | 2011

Hybrid Random Fields

Antonino Freno; Edmondo Trentin

Contents: Introduction; Bayesian Networks; Markov Random Fields; Introducing Hybrid Random Fields: Discrete-Valued Variables; Extending Hybrid Random Fields: Continuous-Valued Variables; Applications; Probabilistic Graphical Models: Cognitive Science or Cognitive Technology?; Conclusions.


International Symposium on Neural Networks | 2009

Unsupervised nonparametric density estimation: A neural network approach

Edmondo Trentin; Antonino Freno

One major problem in pattern recognition is estimating probability density functions. Unfortunately, parametric techniques rely on an arbitrary assumption about the form of the underlying, unknown density function. On the other hand, nonparametric techniques, such as the popular kn-Nearest Neighbor (not to be confused with the k-Nearest Neighbor classification algorithm), make it possible to remove such an assumption. Albeit effective, the kn-Nearest Neighbor technique is affected by a number of limitations. Artificial neural networks are, in principle, an alternative family of nonparametric models. So far, artificial neural networks have been extensively used to estimate probabilities (e.g., class-posterior probabilities). However, they have not been exploited to estimate probability density functions. This paper introduces a simple, neural-based algorithm for unsupervised, nonparametric estimation of multivariate densities, relying on the kn-Nearest Neighbor technique. This approach overcomes the limitations of kn-Nearest Neighbor, possibly improving the estimation accuracy of the resulting pdf models. An experimental investigation of the algorithm's behavior is offered, exploiting random samples drawn from a mixture of Fisher-Tippett density functions.
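The kn-Nearest Neighbor estimator that the paper builds on approximates the density at a point as p(x) ≈ k / (n · V), where V is the volume of the smallest region around x containing the k nearest sample points. A one-dimensional sketch, under the assumption that x is not itself a sample point with k = 1 (which would give V = 0, one of the technique's known limitations):

```python
def knn_density_1d(x, data, k):
    """k_n-nearest-neighbor density estimate at point x (1-D case).

    p(x) ~= k / (n * V), where V = 2r is the length of the smallest
    interval centered at x containing the k nearest sample points.
    """
    n = len(data)
    dists = sorted(abs(x - d) for d in data)
    r = dists[k - 1]          # distance to the k-th nearest neighbor
    return k / (n * 2.0 * r)  # V = 2r in one dimension

# Five points uniform on [0, 4]; at x = 2 with k = 2 the estimate is
# 2 / (5 * 2) = 0.2, close to the true uniform density 0.25.
```

The estimate is not a proper pdf (it need not integrate to one), which is among the limitations the neural-based approach of the paper aims to overcome.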


Studies in Computational Intelligence | 2009

Probabilistic interpretation of neural networks for the classification of vectors, sequences and graphs

Edmondo Trentin; Antonino Freno

This chapter introduces a probabilistic interpretation of artificial neural networks (ANNs), moving the focus from posterior probabilities to probability density functions (pdfs). Parametric and non-parametric neural-based algorithms for unsupervised estimation of pdfs, relying on maximum-likelihood or on Parzen Window techniques, are reviewed. The approaches may overcome the limitations of traditional statistical estimation methods, possibly leading to improved pdf models. Two paradigms for combining ANNs and hidden Markov models (HMMs) for sequence recognition are then discussed. These models rely on (i) an ANN that estimates state posteriors under a maximum-a-posteriori criterion, or on (ii) a connectionist estimation of emission pdfs, featuring global optimization of HMM and ANN parameters under a maximum-likelihood criterion. Finally, the chapter faces the problem of the classification of graphs (structured data), by presenting a connectionist probabilistic model for the posterior probability of classes given a labeled graphical pattern. In all cases, empirical evidence and theoretical arguments underline the fact that plausible probabilistic interpretations of ANNs are viable and may lead to improved statistical classifiers, not only in the static but also in the sequential and structured pattern recognition setups.
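The Parzen Window technique mentioned here estimates a density as an average of kernels centered on the sample points; the neural-based variants reviewed in the chapter build on this same target. A minimal Gaussian-kernel sketch in one dimension (an illustration of the classical estimator, not the chapter's connectionist version):

```python
import math

def parzen_density(x, data, h):
    """Parzen-window density estimate with a Gaussian kernel of width h.

    p(x) ~= (1/n) * sum_i N(x; x_i, h^2)
    """
    n = len(data)
    norm = 1.0 / (h * math.sqrt(2.0 * math.pi))  # Gaussian normalizer
    return sum(norm * math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) / n

# With a single sample point at 0 and h = 1, the estimate at x = 0
# equals the peak of a standard Gaussian, 1 / sqrt(2 * pi).
```

Unlike the kn-Nearest Neighbor estimator, the Parzen estimate integrates to one by construction; the choice of the window width h plays the role that k plays there.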


Neural Processing Letters | 2018

Dynamic Hybrid Random Fields for the Probabilistic Graphical Modeling of Sequential Data: Definitions, Algorithms, and an Application to Bioinformatics

Marco Bongini; Antonino Freno; Vincenzo Laveglia; Edmondo Trentin

The paper introduces a dynamic extension of the hybrid random field (HRF), called dynamic HRF (D-HRF). The D-HRF is aimed at the probabilistic graphical modeling of arbitrary-length sequences of sets of (time-dependent) discrete random variables under Markov assumptions. Suitable maximum likelihood algorithms for learning the parameters and the structure of the D-HRF are presented. The D-HRF inherits the computational efficiency and the modeling capabilities of HRFs, subsuming both dynamic Bayesian networks and Markov random fields. The behavior of the D-HRF is first evaluated empirically on synthetic data drawn from probabilistic distributions having known form. Then, D-HRFs (combined with a recurrent autoencoder) are successfully applied to the prediction of the disulfide-bonding state of cysteines from the primary structure of proteins in the Protein Data Bank.


Archive | 2011

Probabilistic Graphical Models: Cognitive Science or Cognitive Technology?

Antonino Freno; Edmondo Trentin

This chapter is an attempt to make explicit the philosophical and cognitive perspective that the scientific work presented in Chapters 2–6 should be viewed from. This does not mean that the scientific material collected in this work needs a philosophical foundation in order to make sense or to be really interesting. The only aim of embedding scientific results within a philosophical framework is “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term” [275], which is what Wilfrid Sellars regarded as the general aim of philosophy. In other words, while proposing a philosophical reflection on the meaning of the technical results collected in the previous chapters, we do not think that the value of those results depends in any important way on their philosophical meaning. Our standpoint is rather that, if we ask how those results in AI “hang together” with other results in the cognitive sciences and with particular views advocated in the philosophy of mind, then the philosophical remarks contained in this chapter are the answer we give to that question. But the reader should keep in mind that our ‘philosophical’ reflections are more properly meant as a scientific contribution to philosophy, rather than a philosophical contribution to science, where the guiding idea is that science can take care of itself.


Archive | 2011

Markov Random Fields

Antonino Freno; Edmondo Trentin

Let’s give Bayesian networks a break, and let us go back to our favorite topic, namely soccer. Suppose you want to develop a probabilistic model of the ranking of your team in the domestic soccer league championship at any given time t throughout the current season. In this setup, it is reasonable to assume that t is a discrete time index, denoting the t-th game in the season and ranging from t = 1 (first match of the tournament) to t = T (season finale). Assuming the championship is organized as a round-robin tournament among N teams, then T = 2(N − 1). The ranking of your team at time t + 1 is likely to change with a certain probability distribution which (i) accounts for the randomness of the results at the end of the corresponding matchday, and (ii) depends on the ranking at time t.
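The setup described above, where the ranking at time t + 1 depends only on the ranking at time t, is a first-order Markov chain, and can be sketched as a simulation. Here `transition` is a hypothetical callable standing in for the chapter's probability distribution over next rankings:

```python
import random

def simulate_ranking(T, transition, start=1, seed=0):
    """Simulate a team's league ranking over T matchdays.

    `transition(rank)` returns a list of (next_rank, probability)
    pairs; the next ranking depends only on the current one
    (first-order Markov assumption).
    """
    rng = random.Random(seed)
    rank, path = start, [start]
    for _ in range(T):
        ranks, probs = zip(*transition(rank))
        rank = rng.choices(ranks, weights=probs)[0]  # sample next ranking
        path.append(rank)
    return path

# Round-robin among N teams: each team plays every other twice,
# so the season length is T = 2 * (N - 1); e.g. N = 10 gives T = 18.
```

With a real transition model, repeated simulation of this chain would give the distribution of the team's final ranking at t = T.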

Collaboration


Dive into Antonino Freno's collaborations.

Top Co-Authors

Martin Saveski

Massachusetts Institute of Technology
