Publication


Featured research published by Franco Mana.


Machine Learning | 1991

Rigel: An Inductive Learning System

Roberto Gemello; Franco Mana

This paper is aimed at showing the benefits obtained by explicitly introducing a priori control knowledge into the inductive process. The starting point is Michalski's Induce system, which has been modified and augmented. Although the basic philosophy has been changed as little as possible, Induce has been radically modified from the algorithmic point of view, resulting in the new learning system Rigel. The main ideas taken from Induce are the sequential learning of descriptions of each concept against all the others, the Covering algorithm, the Star definition, and the VL2 representation language. The modifications consist of a new way of computing the Star, the use of a separate body of heuristic knowledge to strongly direct the search, the implementation of a larger subset of the VL2 language, a reasoned way of selecting the seed, and the use of rules to evaluate the worthiness of the inductive assertions. The effectiveness of Rigel has been tested on both artificial and real-world case studies.
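
As a rough sketch of the sequential covering scheme mentioned above (learning each concept against all the others and removing covered examples), the following Python fragment illustrates the general loop; the rule representation, the `candidate_rules` generator and the `score` heuristic are placeholders and do not reproduce Rigel's Star computation or VL2 language.

```python
# Minimal sketch of a sequential covering loop (AQ/Induce-style).
# Illustrates the general scheme only; Rigel's Star computation,
# heuristic control knowledge and VL2 language are not reproduced here.

def covers(rule, example):
    """A rule is a dict of attribute -> set of allowed values; it covers
    an example when every constrained attribute matches."""
    return all(example.get(attr) in values for attr, values in rule.items())

def learn_one_rule(positives, negatives, candidate_rules, score):
    """Pick the candidate rule with the best heuristic score
    (e.g. many positives covered, few negatives covered)."""
    return max(candidate_rules(positives, negatives),
               key=lambda r: score(r, positives, negatives))

def sequential_covering(positives, negatives, candidate_rules, score):
    """Learn a disjunction of rules that together cover the positives."""
    rules, remaining = [], list(positives)
    while remaining:
        rule = learn_one_rule(remaining, negatives, candidate_rules, score)
        covered = [e for e in remaining if covers(rule, e)]
        if not covered:          # no progress: stop to avoid looping forever
            break
        rules.append(rule)
        remaining = [e for e in remaining if not covers(rule, e)]
    return rules
```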


International Symposium on Neural Networks | 1997

Continuous speech recognition with neural networks and stationary-transitional acoustic units

Roberto Gemello; Dario Albesano; Franco Mana

This paper proposes the use of a kind of acoustic units named stationary-transitional units within a hybrid hidden Markov model/neural network recognition framework as an alternative to standard context-independent phonemes. These units are made up of the stationary parts of the context-independent phonemes plus all the admissible transitions between them, and they represent a partition of the sounds of the language, like phonemes, but with more acoustic detail. These units are well suited to being modeled with neural networks, and their use may enhance the performance of hybrid HMM-NN systems by increasing their acoustic resolution. This hypothesis is verified for the Italian language by testing these units on a difficult spontaneous speech recognition domain, namely railway timetable vocal access with the Dialogos system. The results show that a relevant improvement is achieved with respect to the use of standard context-independent phonemes.
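
As a back-of-the-envelope illustration of how such a unit inventory is built and how it grows relative to plain phonemes, here is a minimal sketch; the tiny phoneme list and the "every ordered pair is admissible" rule are illustrative assumptions, not the Italian inventory actually used.

```python
# Hypothetical sketch: build a stationary-transitional unit inventory
# from a phoneme set. The toy phoneme list and the admissibility rule
# are illustrative assumptions only.

phonemes = ["a", "e", "i", "o", "u", "p", "t", "k"]  # placeholder set

# One stationary unit per phoneme...
stationary_units = list(phonemes)

# ...plus one transitional unit per admissible ordered pair of phonemes.
def admissible(p, q):
    return p != q  # placeholder rule; a real system restricts this set

transitional_units = [f"{p}-{q}" for p in phonemes for q in phonemes
                      if admissible(p, q)]

units = stationary_units + transitional_units
print(len(stationary_units), len(transitional_units), len(units))
# 8 stationary + 56 transitional = 64 units for this toy inventory,
# showing how the inventory gains acoustic detail over plain phonemes.
```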


International Symposium on Neural Networks | 1992

Word recognition with recurrent network automata

Dario Albesano; Roberto Gemello; Franco Mana

The authors report a method to directly encode temporal information into a neural network by explicitly modeling that information with a left-to-right automaton and teaching a recurrent network to identify the automaton states. The state length and position are adjusted with the usual train-and-resegment iterative procedure. The global model is a hybrid of a recurrent neural network, which implements the state transition models, and dynamic programming, which finds the best state sequence. The advantages achieved by using recurrent networks are outlined by applying the method to a speaker-independent digit recognition task.
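
To make the dynamic-programming half of the hybrid concrete, here is a minimal sketch of a Viterbi pass over a left-to-right automaton; in the model above the frame-level state scores would come from the recurrent network, while here they are random placeholders.

```python
# Minimal sketch of Viterbi alignment over a left-to-right automaton.
# In the hybrid described above, scores[t][s] would be the recurrent
# network's output for state s at frame t; here it is random data.
import math, random

def viterbi_left_to_right(scores):
    """scores[t][s]: log-score of state s at frame t.
    Left-to-right topology: from state s you may stay in s or move to s+1."""
    T, S = len(scores), len(scores[0])
    NEG = -math.inf
    delta = [[NEG] * S for _ in range(T)]
    back = [[0] * S for _ in range(T)]
    delta[0][0] = scores[0][0]              # must start in the first state
    for t in range(1, T):
        for s in range(S):
            stay = delta[t - 1][s]
            move = delta[t - 1][s - 1] if s > 0 else NEG
            prev = s if stay >= move else s - 1
            delta[t][s] = max(stay, move) + scores[t][s]
            back[t][s] = prev
    # must end in the last state; trace the best state sequence back
    path = [S - 1]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

random.seed(0)
frames, states = 10, 4
scores = [[math.log(random.random() + 1e-12) for _ in range(states)]
          for _ in range(frames)]
print(viterbi_left_to_right(scores))
```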


International Symposium on Neural Networks | 1998

Linear input network based speaker adaptation in the Dialogos system

Roberto Gemello; Franco Mana; Dario Albesano

Describes work devoted to experimenting with linear input networks (LIN) as a speaker adaptation technique for the neural recognition module of the Dialogos® system. The LIN technique is experimented with, and some variants devoted to reducing the number of estimated parameters are introduced. The obtained results confirm the validity of LIN for speaker adaptation, while the introduced variants are a valid alternative when a reduced model size is important. The potential and drawbacks of supervised and unsupervised speaker adaptation are illustrated. Experiments with a speaker-dependent database collected from real interactions with the Dialogos system are described in detail, showing, in both cases, a relevant improvement in comparison with the speaker-independent model.
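
As general background on the LIN idea (a trainable linear layer prepended to an otherwise frozen speaker-independent network, estimated only on adaptation data), here is a minimal numpy sketch of the forward pass; the shapes and the toy frozen network are assumptions, not the Dialogos configuration.

```python
# Minimal sketch of the linear-input-network (LIN) idea: a trainable
# linear transform of the input features feeding a frozen
# speaker-independent network. Shapes and the toy frozen_net are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
feat_dim, hidden_dim, n_classes = 12, 32, 10

# Frozen speaker-independent parameters (random stand-ins here).
W1 = rng.standard_normal((hidden_dim, feat_dim))
W2 = rng.standard_normal((n_classes, hidden_dim))

def frozen_net(x):
    h = np.tanh(W1 @ x)
    logits = W2 @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

# LIN: the only adapted parameters. Initialised to the identity so the
# adapted system starts out identical to the speaker-independent one.
A = np.eye(feat_dim)
b = np.zeros(feat_dim)

def adapted_net(x):
    return frozen_net(A @ x + b)

x = rng.standard_normal(feat_dim)
print(np.allclose(adapted_net(x), frozen_net(x)))  # True before adaptation
```

Variants that reduce the number of estimated parameters could, for instance, constrain the matrix to a diagonal or block-diagonal form; the specific variants studied in the paper are not reproduced here.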


IEEE Workshop on Neural Networks for Signal Processing | 1994

Recurrent network automata for speech recognition: a summary of recent work

Roberto Gemello; Dario Albesano; Franco Mana; Rossella Cancelliere

The integration of hidden Markov models (HMMs) and neural networks is an important research line for obtaining new speech recognition systems that combine a good time-alignment capability with powerful discrimination-based training. The recurrent network automata (RNA) model is a hybrid of a recurrent neural network, which estimates the state emission probabilities of an HMM, and dynamic programming, which finds the best state sequence. This paper reports the results obtained with the RNA model after three years of research and application in speaker-independent digit recognition over the public telephone network.
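
In hybrid systems of this family, a common recipe, stated here as general background rather than as the exact RNA formulation, is to turn the network's state posteriors into scaled likelihoods by dividing by the state priors before the dynamic-programming search:

```python
# Common hybrid HMM/NN recipe (general background, not necessarily the
# exact RNA formulation): scaled likelihood p(x|s) is proportional to
# the network posterior p(s|x) divided by the state prior p(s).
import numpy as np

def scaled_log_likelihoods(posteriors, priors, eps=1e-10):
    """posteriors: (frames, states) network outputs per frame;
    priors: (states,) state prior probabilities estimated from training data."""
    return np.log(posteriors + eps) - np.log(priors + eps)

posteriors = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.8, 0.1]])
priors = np.array([0.5, 0.3, 0.2])
print(scaled_log_likelihoods(posteriors, priors))
```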


Information Processing and Management of Uncertainty | 1988

Controlling inductive search in RIGEL learning system

Roberto Gemello; Franco Mana

Concept induction from examples has already proved to work on significant case studies and will be a fundamental tool in next-generation expert systems. For this reason it is important to improve induction techniques, particularly to face real-world problems. A typical problem of real applications is the size of the state spaces to be searched. This problem has been faced in the design of the inductive learning system RIGEL, which explicitly represents and uses task-dependent (but domain-independent) control knowledge to strongly focus the inductive search. RIGEL has been implemented in Common Lisp on TI Explorer, Symbolics and Sun machines and tested on several case studies taken from the literature. At present it is being applied to a real problem taken from the image recognition field.
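
One simple way to picture control knowledge focusing an inductive search is a beam search whose scoring function and beam width carry the heuristics; the skeleton below is generic and does not reproduce RIGEL's actual control rules.

```python
# Generic beam-search skeleton: heuristic knowledge enters only through
# the score function and the beam width, which is how a search over a
# large state space can be focused without changing the specialisation
# operator itself. An illustration, not RIGEL's control strategy.

def beam_search(start, expand, score, is_goal, beam_width=5, max_steps=100):
    """expand(node) -> candidate successors; score(node) -> heuristic value."""
    beam = [start]
    for _ in range(max_steps):
        goals = [n for n in beam if is_goal(n)]
        if goals:
            return max(goals, key=score)
        candidates = [child for node in beam for child in expand(node)]
        if not candidates:
            return max(beam, key=score)
        # Keep only the beam_width best candidates according to the heuristic.
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(beam, key=score)
```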


International Symposium on Neural Networks | 1994

Correlative training and recurrent network automata for speech recognition

Roberto Gemello; Dario Albesano; Franco Mana

Discriminative training is one of the more distinctive features of multilayer perceptron networks when used as classifiers. However, when dealing with overlapping classes, it may be useful to soften this feature, not compelling the MLP to discriminate where discrimination is impossible. This can be done adaptively, without any prior information about the classes, by introducing a straightforward modification of backpropagation, named correlative training. This new MLP feature has proved to be very useful when training the hybrid recurrent network automata model for speech recognition.
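
The exact correlative training rule is not given in this summary; purely as a generic illustration of why softened targets reduce the pressure to separate overlapping classes, here is a simple target-smoothing helper (this is not the paper's method, which adapts without prior class information).

```python
# Generic illustration only: softening the one-hot targets used in
# backpropagation relaxes the demand for full discrimination. This is
# NOT the correlative training rule of the paper; it merely shows the
# motivation of not forcing hard 1/0 separation for overlapping classes.
import numpy as np

def soft_targets(labels, n_classes, smoothing=0.1):
    """Replace 1/0 targets with (1 - smoothing) for the true class and
    smoothing spread uniformly over the remaining classes."""
    t = np.full((len(labels), n_classes), smoothing / (n_classes - 1))
    t[np.arange(len(labels)), labels] = 1.0 - smoothing
    return t

print(soft_targets(np.array([0, 2]), n_classes=3, smoothing=0.1))
```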


International Conference on Artificial Neural Networks | 1991

Self-Organizing Feature Maps for Contour Detection in Videophone Images

Roberto Gemello; Cataldo Lettera; Franco Mana; Lorenzo Masera

This paper describes an application of Self-Organizing Maps to a telecommunication task, namely speaker contour detection in videophone images. The problem of contour detection is related to improving quality and reducing the information transmitted during a videophone communication: only the area inside the speaker contour would be coded with high accuracy, exploiting the immobility of the background and its limited relevance in the image to code it at a lower bit rate. Speaker contour detection is addressed by the combined action of an algorithmic low-level image processing stage, which spots a cloud of pixels scattered around the speaker contour, and a Self-Organizing Map, which interpolates the points to approximate the contour.
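
As a sketch of how a one-dimensional self-organizing map can interpolate a scattered cloud of contour points, here is a minimal Kohonen-style update loop; the toy data and hyperparameters are assumptions, not the videophone setup.

```python
# Minimal 1-D self-organising map fitted to a scattered 2-D point cloud,
# sketching how a chain of units can interpolate a contour. Data and
# hyperparameters are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "cloud of pixels around a contour": a noisy half-circle.
angles = rng.uniform(0.0, np.pi, 400)
points = np.stack([np.cos(angles), np.sin(angles)], axis=1)
points += 0.05 * rng.standard_normal(points.shape)

n_units = 20
units = rng.uniform(-1.0, 1.0, (n_units, 2))   # initial chain positions

for it in range(2000):
    lr = 0.5 * (1.0 - it / 2000)               # decaying learning rate
    sigma = max(0.5, 3.0 * (1.0 - it / 2000))  # shrinking neighbourhood
    x = points[rng.integers(len(points))]
    winner = np.argmin(np.linalg.norm(units - x, axis=1))
    # Neighbourhood function along the 1-D chain of units.
    dist = np.abs(np.arange(n_units) - winner)
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    units += lr * h[:, None] * (x - units)

# After training, consecutive units trace an approximation of the contour.
print(np.round(units, 2))
```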


Neural Processing Letters | 1998

Speeding up MLP Execution through Difference Forward Propagation

Dario Albesano; Roberto Gemello; Franco Mana

At present the Multi-Layer Perceptron (MLP) is without doubt the most widely used neural network model in applications, so it is important, from an engineering point of view, to design and test methods that improve MLP efficiency at run time. This paper describes a simple but effective method to cut down execution time for MLP networks dealing with sequential input. This case is very common, including all kinds of temporal processing, such as speech, video, and in general signals varying in time. The suggested technique requires neither specialized hardware nor large amounts of additional memory. The method is based on the ubiquitous idea of difference transmission, widely used in signal coding. For each neuron, the activation value at a certain moment is compared with the corresponding activation value computed at the previous forward pass of the network: if no relevant change occurred, the neuron does not perform any computation; otherwise it propagates to the connected neurons the difference between its two activations multiplied by its outgoing weights. The method requires the introduction of a quantization of the unit activation function, which causes an error that is analyzed empirically. In particular, the effectiveness of the method is verified on two speech recognition tasks with two different neural network architectures. The results show a drastic reduction of the execution time on both neural architectures and no significant change in recognition quality.
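
A minimal single-layer sketch of the difference-propagation idea is given below; the quantization step, the layer sizes and the synthetic slowly varying input are illustrative assumptions.

```python
# Minimal sketch of difference forward propagation for one fully
# connected layer processing a sequence of frames. The quantisation
# step and layer sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
in_dim, out_dim, frames = 64, 32, 50
W = rng.standard_normal((out_dim, in_dim))
step = 0.05                                   # quantisation step (assumed)

def quantise(a):
    return np.round(a / step) * step

prev_act = np.zeros(in_dim)                   # last propagated activations
acc = np.zeros(out_dim)                       # running pre-activations of next layer

# Synthetic slowly varying input sequence (consecutive frames are similar).
inputs = np.cumsum(0.02 * rng.standard_normal((frames, in_dim)), axis=0)
skipped = 0
for x in inputs:
    act = quantise(np.tanh(x))                # quantised unit activations
    diff = act - prev_act
    changed = diff != 0.0
    skipped += int(np.sum(~changed))
    # Only changed units propagate, and they send just the difference
    # of their two activations through their outgoing weights.
    acc += W[:, changed] @ diff[changed]
    prev_act = act
    # acc now equals W @ act (up to quantisation), i.e. the full forward pass.

print("fraction of unit updates skipped:", skipped / (frames * in_dim))
print("max deviation from full forward pass:",
      np.abs(acc - W @ quantise(np.tanh(inputs[-1]))).max())
```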


Archive | 1999

Continuous Speech Recognition with Neural Networks: An Application to Railway Timetables Enquiries

Dario Albesano; Franco Mana; Roberto Gemello

This paper describes the use of Neural Network Automata (NNA) in Dialogos®, a real-time system for human-machine spoken dialogue over the telephone network devoted to railway timetable inquiries. NNA is the CSELT hybrid of a Hidden Markov Model (HMM) and a Neural Network (NN) devoted to speech recognition.

