Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rodolfo Zunino is active.

Publication


Featured research published by Rodolfo Zunino.


IEEE Transactions on Neural Networks | 1997

Circular backpropagation networks for classification

Sandro Ridella; Stefano Rovetta; Rodolfo Zunino

The class of mapping networks is a general family of tools to perform a wide variety of tasks. This paper presents a standardized, uniform representation for this class of networks, and introduces a simple modification of the multilayer perceptron with interesting practical properties, especially well suited to cope with pattern classification tasks. The proposed model unifies the two main representation paradigms found in the class of mapping networks for classification, namely, the surface-based and the prototype-based schemes, while retaining the advantage of being trainable by backpropagation. The enhancement in the representation properties and the generalization performance are assessed through results about the worst-case requirement in terms of hidden units and about the Vapnik-Chervonenkis dimension and cover capacity. The theoretical properties of the network also suggest that the proposed modification to the multilayer perceptron is in many senses optimal. A number of experimental verifications confirm the theoretical results about the model's increased performance, as compared with the multilayer perceptron and the Gaussian radial basis function network.
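As a rough, hedged sketch of the circular-input idea behind this architecture (names such as augment_circular are illustrative, not the paper's), the snippet below appends the squared norm of each sample as one extra input before an ordinary multilayer perceptron is trained on the augmented data; this single extra dimension is what lets hyperplane-based units also realize closed, prototype-like decision regions.

```python
# Illustrative sketch in plain NumPy: the "circular" augmentation prepares
# inputs for a standard MLP trained with ordinary backpropagation.
import numpy as np

def augment_circular(X):
    """Append the sum of squares of each sample as one extra input feature."""
    return np.hstack([X, np.sum(X ** 2, axis=1, keepdims=True)])

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))        # toy 2-D samples
X_cbp = augment_circular(X)          # shape (100, 3): original inputs + circular input
print(X_cbp.shape)
```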


IEEE Transactions on Industrial Electronics | 2000

Vector quantization for license-plate location and image coding

Rodolfo Zunino; Stefano Rovetta

License-plate location in sensor images plays an important role in vehicle identification for automated transport systems (ATS). This paper presents a novel method based on vector quantization (VQ) to process vehicle images. The proposed method makes it possible to perform superior picture compression for archival purposes and to support effective location at the same time. As compared with classical approaches, VQ encoding can give some hints about the contents of image regions; such additional information can be exploited to boost location performance. The VQ system can be trained by way of examples; this gives the advantages of adaptiveness and on-field tuning. The approach has been tested in a real industrial application and included satisfactorily in a complete ATS for vehicle identification.
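For readers unfamiliar with block-based vector quantization, here is a minimal sketch of the encoding step only: the image is tiled into small blocks and each block is replaced by the index of its nearest codeword. Block size, codebook size, and the random codebook below are illustrative placeholders, not the settings used in the paper.

```python
# Hedged sketch of VQ encoding: map each image block to its nearest codeword.
import numpy as np

def vq_encode(image, codebook, block=4):
    """Return a map of codeword indices for non-overlapping block x block tiles."""
    h, w = image.shape
    h, w = h - h % block, w - w % block          # drop any ragged border
    tiles = (image[:h, :w]
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2)
             .reshape(-1, block * block))
    # Squared Euclidean distance of every tile to every codeword.
    d = ((tiles[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1).reshape(h // block, w // block)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                       # stand-in grayscale image
codebook = rng.random((32, 16))                  # 32 codewords of 4x4 pixels
print(vq_encode(img, codebook).shape)            # (16, 16) index map
```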


Neurocomputing | 2015

An ELM-based model for affective analogical reasoning

Erik Cambria; Paolo Gastaldo; Federica Bisio; Rodolfo Zunino

Between the dawn of the Internet and 2003, there were just a few dozen exabytes of information on the Web. Today, that much information is created weekly. The opportunity to capture the opinions of the general public about social events, political movements, company strategies, marketing campaigns, and product preferences has raised increasing interest both in the scientific community, for the exciting open challenges, and in the business world, for the remarkable fallouts in marketing and financial prediction. Keeping up with the ever-growing amount of unstructured information on the Web, however, is a formidable task and requires fast and efficient models for opinion mining. In this paper, we explore how the high generalization performance, low computational complexity, and fast learning speed of extreme learning machines can be exploited to perform analogical reasoning in a vector space model of affective common-sense knowledge. In particular, by enabling a fast reconfiguration of such a vector space, extreme learning machines allow the polarity associated with natural language concepts to be calculated in a more dynamic and accurate way and, hence, perform better concept-level sentiment analysis.
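A minimal sketch of the extreme learning machine the paper builds on: hidden-layer weights are drawn at random and frozen, and only the output weights are obtained by a closed-form least-squares fit. The class below is illustrative, not the authors' implementation of affective analogical reasoning.

```python
# Toy ELM regressor: random fixed hidden layer, least-squares output weights.
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # fixed random weights
        self.b = self.rng.normal(size=self.n_hidden)                 # fixed random biases
        H = np.tanh(X @ self.W + self.b)                             # hidden-layer outputs
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)            # closed-form fit
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Usage on a noisy sine wave.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print(ELM(n_hidden=50).fit(X, y).predict(X[:3]))
```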


IEEE Transactions on Robotics | 2011

Tactile-Data Classification of Contact Materials Using Computational Intelligence

Sergio Decherchi; Paolo Gastaldo; Ravinder Dahiya; Maurizio Valle; Rodolfo Zunino

The two major components of a robotic tactile-sensing system are the tactile-sensing hardware at the lower level and the computational/software tools at the higher level. Focusing on the latter, this research assesses the suitability of computational-intelligence (CI) tools for tactile-data processing. In this context, this paper addresses the classification of sensed object material from the recorded raw tactile data. For this purpose, three CI paradigms, namely, the support-vector machine (SVM), regularized least square (RLS), and regularized extreme learning machine (RELM), have been employed, and their performance is compared for the said task. The comparative analysis shows that SVM provides the best tradeoff between classification accuracy and computational complexity of the classification algorithm. Experimental results indicate that the CI tools are effective in dealing with the challenging problem of material classification.
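As a hedged illustration of this kind of comparison, the snippet below cross-validates an SVM and a regularized least-squares (ridge) classifier on synthetic data that merely stands in for tactile feature vectors; it is not the paper's dataset, feature extraction, or RELM implementation.

```python
# Illustrative comparison of two of the paradigms mentioned above on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for tactile feature vectors labeled by material class.
X, y = make_classification(n_samples=300, n_features=32, n_informative=8,
                           n_classes=3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("RLS", RidgeClassifier(alpha=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")
```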


Neurocomputing | 2013

Circular-ELM for the reduced-reference assessment of perceived image quality

Sergio Decherchi; Paolo Gastaldo; Rodolfo Zunino; Erik Cambria; Judith Redi

Providing a satisfactory visual experience is one of the main goals for present-day electronic multimedia devices. All the enabling technologies for storage, transmission, compression, and rendering should preserve, and possibly enhance, the quality of the video signal; to do so, quality control mechanisms are required. These mechanisms rely on systems that can assess the visual quality of the incoming signal consistently with human perception. Computational Intelligence (CI) paradigms represent a suitable technology to tackle this challenging problem. The present research introduces an augmented version of the basic Extreme Learning Machine (ELM), the Circular-ELM (C-ELM), which proves effective in addressing the visual quality assessment problem. The C-ELM model derives from the original Circular BackPropagation (CBP) architecture, in which the input vector of a conventional MultiLayer Perceptron (MLP) is augmented by one additional dimension, the circular input; this paper shows that C-ELM can actually benefit from the enhancement provided by the circular input without losing any of the fruitful properties that characterize the basic ELM framework. In the proposed framework, C-ELM handles the actual mapping of visual signals into quality scores, successfully reproducing perceptual mechanisms. Its effectiveness is proved on recognized benchmarks and for four different types of distortions.
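Combining the two ingredients sketched for earlier entries, a C-ELM-style pipeline can be outlined as: append the circular input (the squared norm of the feature vector), then solve the ELM output weights in closed form. The snippet below is a self-contained toy on random data, not the paper's quality-assessment system.

```python
# Toy C-ELM-style regression: circular input augmentation + ELM training.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                                 # stand-in perceptual features
y = rng.random(200)                                           # stand-in quality scores

Xc = np.hstack([X, (X ** 2).sum(axis=1, keepdims=True)])      # add the circular input
W = rng.normal(size=(Xc.shape[1], 64))                        # fixed random hidden weights
b = rng.normal(size=64)
H = np.tanh(Xc @ W + b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)                  # closed-form output weights

pred = np.tanh(Xc @ W + b) @ beta                             # predicted quality scores
print(pred[:5])
```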


Vehicular Technology Conference | 1990

A distributed intelligence methodology for railway traffic control

Gianni Vernazza; Rodolfo Zunino

A distributed approach to railway traffic control is described. The approach overcomes the upper bounds imposed on the size of controlled areas by the requirement for real-time processing when centralized methodologies are applied. The control problem is modeled in terms of resource allocation tasks, and the concept of priority is generalized to rule local control decisions. The analysis of the global network's behavior, as derived from the integration of local microdecisions, prefigures a depletion effect which will protect the system from traffic jam collapses. Simulation runs are reported to show the control system's overall operation.
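Purely as an illustration of priority-ruled local decisions, and not the paper's algorithm, a local controller resolving contention for a single track section might be sketched as follows.

```python
# Illustrative only: grant a contended track section to the highest-priority train.
def grant_section(requests):
    """requests: list of (train_id, priority); return the id of the granted train."""
    return max(requests, key=lambda r: r[1])[0]

# Hypothetical train identifiers and priorities.
print(grant_section([("IC-501", 3.2), ("R-118", 1.4), ("FR-9620", 4.7)]))  # -> FR-9620
```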


IEEE Transactions on Circuits and Systems II: Express Briefs | 2012

Efficient Digital Implementation of Extreme Learning Machines for Classification

Sergio Decherchi; Paolo Gastaldo; Alessio Leoncini; Rodolfo Zunino

The availability of compact fast circuitry for the support of artificial neural systems is a long-standing and critical requirement for many important applications. This brief addresses the implementation of the powerful extreme learning machine (ELM) model on reconfigurable digital hardware (HW). The design strategy first provides a training procedure for ELMs, which effectively trades off prediction accuracy and network complexity. This, in turn, facilitates the optimization of HW resources. Finally, this brief describes and analyzes two implementation approaches: one involving field-programmable gate array devices and one embedding low-cost low-performance devices such as complex programmable logic devices. Experimental results show that, in both cases, the design approach yields efficient digital architectures with satisfactory performances and limited costs.
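One concrete facet of the accuracy/resource trade-off discussed here is quantizing a trained ELM's weights to fixed-point words before mapping them onto digital hardware; the sketch below illustrates that trade-off only and is not the brief's actual design flow.

```python
# Illustrative fixed-point quantization of (hypothetical) trained ELM output weights.
import numpy as np

def to_fixed_point(w, frac_bits=8):
    """Round weights to a signed fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return np.round(w * scale) / scale

rng = np.random.default_rng(0)
beta = rng.normal(size=32)                        # stand-in trained output weights
for bits in (4, 8, 12):
    err = np.max(np.abs(beta - to_fixed_point(beta, bits)))
    print(f"{bits} fractional bits: max quantization error = {err:.5f}")
```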


IEEE Transactions on Circuits and Systems for Video Technology | 2010

Color Distribution Information for the Reduced-Reference Assessment of Perceived Image Quality

Judith Redi; Paolo Gastaldo; Ingrid Heynderickx; Rodolfo Zunino

Reduced-reference systems can predict in real-time the perceived quality of images for digital broadcasting, only requiring that a limited set of features, extracted from the original undistorted signals, is transmitted together with the image data. This paper uses descriptors based on the color correlogram, analyzing the alterations in the color distribution of an image as a consequence of the occurrence of distortions, as the reduced-reference data. The processing architecture relies on a double layer at the receiver end. The first layer identifies the kind of distortion that may affect the received signal. The second layer deploys a dedicated prediction module for each type of distortion; every predictor yields an objective quality score, thus completing the estimation process. Computational-intelligence models are used extensively to support both layers with empirical training. The double-layer architecture implements a general-purpose image quality assessment system, not being tied to specific distortions, and, at the same time, it allows us to benefit from the accuracy of specific, distortion-targeted metrics. Experimental results based on subjective quality data confirm the general validity of the approach.
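A hedged sketch of the double-layer architecture: a first-stage classifier identifies the distortion type from the reduced-reference features, and a distortion-specific regressor then maps the same features to a quality score. Random vectors stand in for the color-correlogram descriptors, and the SVM/SVR choices below are illustrative rather than the paper's CI models.

```python
# Toy two-layer reduced-reference quality predictor on synthetic features.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.random((400, 20))                  # stand-in reduced-reference feature vectors
dist_type = rng.integers(0, 3, size=400)   # stand-in distortion labels (e.g. 3 kinds)
quality = rng.random(400)                  # stand-in subjective quality scores

stage1 = SVC().fit(X, dist_type)           # layer 1: which distortion affects the image?
stage2 = {k: SVR().fit(X[dist_type == k], quality[dist_type == k])
          for k in range(3)}               # layer 2: one quality predictor per distortion

x_new = rng.random((1, 20))
k = stage1.predict(x_new)[0]
print(f"distortion {k}, predicted quality {stage2[k].predict(x_new)[0]:.3f}")
```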


Neurocomputing | 2003

Hyperparameter design criteria for support vector classifiers

Davide Anguita; Sandro Ridella; Fabio Rivieccio; Rodolfo Zunino

The design of a support vector machine (SVM) consists in tuning a set of hyperparameter quantities, and requires an accurate prediction of the classifier's generalization performance. The paper describes the application of the maximal-discrepancy criterion to the hyperparameter-setting process, and points out the advantages of such an approach over existing theoretical frameworks. The resulting theoretical predictions are then compared with the k-fold cross-validation empirical method, which probably is the current best-performing approach to the SVM design problem. Experimental results on a wide range of real-world testbeds show that the features of the maximal-discrepancy method can notably narrow the gap that has so far separated theoretical and empirical estimates of a classifier's generalization error.
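For reference, the empirical baseline mentioned above, k-fold cross-validation over SVM hyperparameters, can be set up as in the snippet below; the maximal-discrepancy criterion itself is not reproduced here, and the data and hyperparameter grid are illustrative.

```python
# Illustrative k-fold (k=5) cross-validation search over SVM hyperparameters.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
search = GridSearchCV(SVC(kernel="rbf"),
                      param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                      cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```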


Proceedings of SPIE | 2011

Interactions of visual attention and quality perception

Judith Redi; Hantao Liu; Rodolfo Zunino; Ingrid Heynderickx

Several attempts to integrate visual saliency information in quality metrics are described in the literature, albeit with contradictory results. The way saliency is integrated in quality metrics should reflect the mechanisms underlying the interaction between image quality assessment and visual attention. This interaction is actually two-fold: (1) image distortions can attract attention away from the Natural Scene Saliency (NSS), and (2) the quality assessment task in itself can affect the way people look at an image. A subjective study was performed to analyze the deviation in attention from the NSS as a consequence of being asked to assess the quality of distorted images, and, in particular, whether, and if so how, this deviation depended on the kind and/or amount of distortion. Saliency maps were derived from eye-tracking data obtained while scoring distorted images, and they were compared to the corresponding NSS, derived from eye-tracking data obtained while freely looking at high-quality images. The study revealed some structural differences between the NSS maps and those obtained during quality assessment of the distorted images. These differences were related to the quality level of the images: the lower the quality, the higher the deviation from the NSS. The main change was identified as a shrinking of the region of interest, being most evident at low quality. No evident role for the kind of distortion in the change in saliency was found. Especially at low quality, the quality assessment task seemed to prevail over natural attention, forcing it to deviate in order to better evaluate the impact of artifacts.
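As a loose illustration of how the deviation from NSS can be quantified (not necessarily the measure used in this study), one can correlate a task-driven saliency map with the corresponding NSS map; the maps below are random stand-ins for fixation-density maps derived from eye-tracking data.

```python
# Illustrative comparison of two saliency maps via Pearson correlation.
import numpy as np

def saliency_correlation(map_a, map_b):
    """Pearson correlation between two equally sized saliency maps."""
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]

rng = np.random.default_rng(0)
nss = rng.random((48, 64))                           # stand-in natural-scene saliency map
task_map = 0.7 * nss + 0.3 * rng.random((48, 64))    # stand-in task-driven saliency map
print(saliency_correlation(nss, task_map))
```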

Collaboration


Dive into Rodolfo Zunino's collaborations.

Top Co-Authors

Sergio Decherchi

Istituto Italiano di Tecnologia


Judith Redi

Delft University of Technology
