Publication


Featured research published by Leonardo Franco.


Biological Cybernetics | 2007

Neuronal selectivity, population sparseness, and ergodicity in the inferior temporal visual cortex

Leonardo Franco; Edmund T. Rolls; Nikolaos C. Aggelopoulos; José M. Jerez

The sparseness of the encoding of stimuli by single neurons and by populations of neurons is fundamental to understanding the efficiency and capacity of representations in the brain, and was addressed as follows. The selectivity and sparseness of firing to visual stimuli of single neurons in the primate inferior temporal visual cortex were measured for a set of 20 visual stimuli, including objects and faces, in macaques performing a visual fixation task. Neurons with significantly different responses to the stimuli were analysed. The firing rate distribution of 36% of the neurons was exponential. Twenty-nine percent of the neurons had too few low rates to be fitted by an exponential distribution and were fitted by a gamma distribution. Interestingly, the raw firing rate distribution taken across all neurons fitted an exponential distribution very closely. The sparseness a_s, or selectivity, of the representation of the set of 20 stimuli provided by each of these neurons (which takes a maximal value of 1.0) had an average across all neurons of 0.77, indicating a rather distributed representation. The sparseness of the representation of a given stimulus by the whole population of neurons, the population sparseness a_p, also had an average value of 0.77. The similarity of the average single-neuron selectivity a_s and the population sparseness a_p for any one stimulus taken at any one time shows that the representation is weakly ergodic. For this to occur, the different neurons must have uncorrelated tuning profiles to the set of stimuli.
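
The abstract does not spell out the sparseness measure; the sketch below assumes the standard Treves-Rolls definition a = (sum_i r_i / n)^2 / (sum_i r_i^2 / n) used in this line of work, applied across stimuli for the single-neuron selectivity a_s and across neurons for the population sparseness a_p. The firing rates here are synthetic stand-ins, not the paper's data.

    import numpy as np

    def sparseness(rates):
        # Treves-Rolls sparseness: (mean rate)^2 / mean squared rate.
        # Values near 1.0 indicate a distributed (non-sparse) representation.
        rates = np.asarray(rates, dtype=float)
        return rates.mean() ** 2 / np.mean(rates ** 2)

    # rates[i, j]: mean firing rate of neuron i to stimulus j (toy data)
    rng = np.random.default_rng(0)
    rates = rng.gamma(shape=2.0, scale=5.0, size=(100, 20))

    a_s = np.array([sparseness(row) for row in rates])    # per neuron, across stimuli
    a_p = np.array([sparseness(col) for col in rates.T])  # per stimulus, across neurons

    # Weak ergodicity: the two averages approximately coincide
    print(f"mean a_s = {a_s.mean():.3f}, mean a_p = {a_p.mean():.3f}")

Because the toy neurons have uncorrelated tuning profiles, the two averages match closely, which is exactly the weak-ergodicity condition the paper identifies.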


Information Technology Interfaces | 2001

A neural network facial expression recognition system using unsupervised local processing

Leonardo Franco; Alessandro Treves

A local unsupervised processing stage is inserted within a neural network constructed to recognize facial expressions. The stage is applied in order to reduce the dimensionality of the input data while preserving some topological structure. The receptive fields of the neurons in the first hidden layer self-organize according to a local energy function that takes into account the variance of the input pixels. There is just one synapse going out from every input pixel, and these weights, connecting the first two layers, are trained with a Hebbian algorithm. The structure of the network is completed with specialised modules, trained with backpropagation, that classify the data into the different expression categories. Thus, the neural net architecture comprises 4 layers of neurons, which we train and test with images from the Yale Faces Database. We obtain a generalization rate of 84.5% on unseen faces, similar to the 83.2% rate obtained when using a similar system implementing PCA processing at the initial stage.
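
The key architectural constraint, one outgoing synapse per input pixel trained with a Hebbian rule, can be sketched as follows. The local energy function and the exact self-organization procedure are not given in the abstract, so this sketch substitutes a fixed block assignment of pixels to hidden units and an Oja-stabilized Hebbian update; the sizes, data, and learning rate are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_hidden, eta = 64 * 64, 16, 0.01      # assumed toy sizes
    images = rng.random((200, n_pixels))             # stand-in for face images

    # One outgoing synapse per input pixel: each pixel feeds exactly one
    # hidden unit. Contiguous blocks stand in for the paper's self-organized
    # local receptive fields.
    assignment = np.repeat(np.arange(n_hidden), n_pixels // n_hidden)
    weights = rng.normal(scale=0.01, size=n_pixels)

    for x in images:
        h = np.zeros(n_hidden)
        np.add.at(h, assignment, weights * x)        # local weighted sums
        # Oja-stabilized Hebbian update on each pixel's single synapse
        weights += eta * h[assignment] * (x - h[assignment] * weights)

    # h is the reduced 16-dimensional code that the backpropagation-trained
    # expression-classification modules would receive as input.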


Journal of Neurophysiology | 2009

Prediction of subjective affective state from brain activations.

Edmund T. Rolls; Fabian Grabenhorst; Leonardo Franco

Decoding and information-theoretic techniques were used to analyze the predictions that can be made from functional magnetic resonance neuroimaging data on individual trials. The subjective pleasantness produced by warm and cold applied to the hand could be predicted on single trials, typically with 60-80% accuracy, from the activations of groups of voxels in the orbitofrontal and medial prefrontal cortex and pregenual cingulate cortex, and the information available was typically in the range 0.1-0.2 bits (with a maximum of 0.6 bits). The prediction was typically a little better with multiple voxels than with one voxel, and the information increased sublinearly with the number of voxels, up to typically seven voxels. Thus the information from different voxels was not independent, and there was considerable redundancy across voxels. This redundancy was present even when the voxels were from different brain areas. The pairwise stimulus-dependent correlations between voxels, reflecting higher-order interactions, did not encode significant information. For comparison, the activity of a single neuron in the orbitofrontal cortex can predict with 90% accuracy whether an affectively positive or negative visual stimulus has been shown, encoding 0.5 bits of information, and the information encoded by small numbers of neurons is typically independent. In contrast, the activation of a 3 x 3 x 3 mm voxel reflects the activity of approximately 0.8 million neurons or their synaptic inputs and is not part of the information encoding used by the brain, thus providing a relatively poor readout of information compared with that available from small populations of neurons.
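
As an illustration of the single-trial decoding plus information readout described here, the sketch below decodes a binary pleasant/unpleasant state from toy voxel activations with a leave-one-out nearest-class-mean decoder and estimates the mutual information, in bits, from the resulting confusion matrix. The data, decoder choice, and sizes are assumptions; the paper's exact decoding procedure is not specified in the abstract.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 200, 7                 # assumed toy sizes
    y = rng.integers(0, 2, n_trials)            # subjective state per trial
    X = rng.normal(size=(n_trials, n_voxels)) + 0.8 * y[:, None]

    # Leave-one-out nearest-class-mean decoding on single trials
    pred = np.zeros(n_trials, dtype=int)
    for t in range(n_trials):
        m = np.arange(n_trials) != t
        means = [X[m & (y == c)].mean(axis=0) for c in (0, 1)]
        pred[t] = np.argmin([np.linalg.norm(X[t] - mu) for mu in means])

    # Mutual information (bits) between the true and decoded state,
    # estimated from the confusion matrix
    joint = np.histogram2d(y, pred, bins=2)[0] / n_trials
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    info = (joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum()
    print(f"{(pred == y).mean():.0%} correct, {info:.2f} bits")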


Experimental Brain Research | 2004

The use of decoding to analyze the contribution to the information of the correlations between the firing of simultaneously recorded neurons

Leonardo Franco; Edmund T. Rolls; Nikolaos C. Aggelopoulos; Alessandro Treves

A new decoding method is described that enables the information encoded by simultaneously recorded neurons to be measured. The algorithm measures the information contained not only in the number of spikes from each neuron, but also in the cross-correlations between the neuronal firing, including stimulus-dependent synchronization effects. The approach enables the effects of interactions between the ‘signal’ and ‘noise’ correlations to be identified and measured, as well as those from stimulus-dependent cross-correlations. The approach provides an estimate of the statistical significance of the stimulus-dependent synchronization information, enables its magnitude to be compared with that of the spike-count-related information, and indicates whether these two contributions are additive or redundant. The algorithm operates even with limited numbers of trials. The algorithm is validated by simulation. It was then used to analyze neuronal data from the primate inferior temporal visual cortex. The main conclusions from experiments with two to four simultaneously recorded neurons were that almost all of the information was available in the spike counts of the neurons; that this rate information included on average very little redundancy arising from stimulus-independent correlation effects; and that stimulus-dependent cross-correlation effects (i.e. stimulus-dependent synchronization) contribute very little to the encoding of information in the inferior temporal visual cortex about which object or face has been presented.
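
The following toy sketch illustrates the kind of comparison the method makes: decoding from spike counts alone versus decoding that also uses a per-trial cross-correlation (synchrony) feature. The synthetic data are deliberately constructed so that synchrony, not rate, carries the signal, purely to make the contrast visible; the paper's empirical finding for inferior temporal cortex was the opposite, with nearly all information in the spike counts. The decoder and data model are assumptions, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 400

    def trials(stim):
        # Two neurons with matched mean rates across stimuli; stimulus 1
        # adds a shared Poisson component, i.e. synchronous spikes.
        if stim == 0:
            return np.column_stack([rng.poisson(7.0, n), rng.poisson(7.0, n)])
        s = rng.poisson(6.0, n)
        return np.column_stack([rng.poisson(1.0, n) + s, rng.poisson(1.0, n) + s])

    def loo_accuracy(X, y):
        # Leave-one-out nearest-class-mean decoding
        hits = 0
        for t in range(len(X)):
            m = np.arange(len(X)) != t
            mu = [X[m & (y == c)].mean(0) for c in (0, 1)]
            hits += np.argmin([np.linalg.norm(X[t] - u) for u in mu]) == y[t]
        return hits / len(X)

    X = np.vstack([trials(0), trials(1)])
    y = np.repeat([0, 1], n)
    Xz = (X - X.mean(0)) / X.std(0)
    sync = (Xz[:, 0] * Xz[:, 1])[:, None]       # per-trial synchrony feature

    print(f"rate only: {loo_accuracy(Xz, y):.0%}")   # near chance by construction
    print(f"with sync: {loo_accuracy(np.hstack([Xz, sync]), y):.0%}")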


IEEE Transactions on Circuits and Systems | 2008

A New Decomposition Algorithm for Threshold Synthesis and Generalization of Boolean Functions

José Luis Subirats; José M. Jerez; Leonardo Franco

A new algorithm is introduced for obtaining efficient architectures, composed of threshold gates, that implement arbitrary Boolean functions. The method reduces the complexity of a given target function by splitting the function according to the variable with the highest influence. The procedure is applied iteratively until a set of threshold functions is obtained, leading to reduced-depth architectures in which the obtained threshold functions form the nodes and an AND or OR function is the output of the architecture. The algorithm is tested on a large set of benchmark functions and the results are compared to previously existing solutions, showing a considerable reduction in the number of gates and levels of the obtained architectures. An extension of the method for partially defined functions is also introduced, and the generalization ability of the method is analyzed.
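
A minimal sketch of the splitting step: compute each variable's Boolean influence (the fraction of inputs where flipping it changes the output), split on the most influential variable into its two cofactors, and recurse until each leaf is realizable by a single threshold gate. Thresholdness is tested here heuristically by running a perceptron over the full truth table; the paper's exact test, and its final flattening into an architecture with an AND or OR output gate, are not reproduced.

    import numpy as np

    def influence(tt, n, i):
        # Fraction of the 2^n inputs where flipping variable i flips the output
        return sum(tt[idx] != tt[idx ^ (1 << i)] for idx in range(2 ** n)) / 2 ** n

    def threshold_gate(tt, n, epochs=200):
        # Heuristic thresholdness test: perceptron training over the whole
        # truth table converges iff the function is linearly separable
        # (epochs caps the search, so a None result is only a probable negative).
        X = np.array([[(idx >> i) & 1 for i in range(n)]
                      for idx in range(2 ** n)], dtype=float)
        w, b = np.zeros(n), 0.0
        for _ in range(epochs):
            errs = 0
            for x, t in zip(X, tt):
                p = int(x @ w + b > 0)
                if p != t:
                    w += (t - p) * x
                    b += t - p
                    errs += 1
            if errs == 0:
                return w, b
        return None

    def decompose(tt, n):
        # Split on the highest-influence variable until every leaf is threshold
        gate = threshold_gate(tt, n)
        if gate is not None:
            return gate
        i = max(range(n), key=lambda v: influence(tt, n, v))
        lo = [tt[idx] for idx in range(2 ** n) if not idx & (1 << i)]  # x_i = 0
        hi = [tt[idx] for idx in range(2 ** n) if idx & (1 << i)]      # x_i = 1
        return (i, decompose(lo, n - 1), decompose(hi, n - 1))

    # 4-input parity is not a threshold function, so the algorithm must split
    parity4 = [bin(i).count("1") % 2 for i in range(16)]
    print(decompose(parity4, 4))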


IEEE Transactions on Neural Networks | 2001

Generalization properties of modular networks: implementing the parity function

Leonardo Franco; Sergio A. Cannas

The parity function is one of the most widely used Boolean functions for testing learning algorithms, because of both its simple definition and its great complexity. We construct a family of modular architectures that implement the parity function, in which every member of the family can be characterized by the maximum fan-in of the network, i.e., the maximum number of connections that a neuron can receive. We analyze the generalization ability of the modular networks, first by computing analytically the minimum number of examples needed for perfect generalization, and then by numerical simulations. Both results show that the generalization ability of these networks is systematically improved by the degree of modularity of the network. We also analyze the influence of the selection of examples on the emergence of generalization ability, by comparing the learning curves obtained through a random selection of examples to those obtained through examples selected according to a general algorithm we recently proposed (2000).
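
A small sketch of the construction idea: parity computed by a tree of small parity modules, where the fan-in parameter bounds how many inputs any module receives. Each module is itself a classic two-layer threshold-gate realization of parity; this shows the architecture only, and the paper's analytic learning-curve results are not reproduced.

    import numpy as np
    from itertools import product

    def parity_module(x):
        # k-input parity as a two-layer threshold network: hidden unit j
        # fires when sum(x) >= j; alternating +1/-1 output weights then
        # compute sum(x) mod 2.
        k = len(x)
        h = np.array([int(sum(x) >= j) for j in range(1, k + 1)])
        out_w = np.array([(-1) ** (j + 1) for j in range(1, k + 1)])
        return int(h @ out_w > 0)

    def modular_parity(bits, fan_in=2):
        # Tree of parity modules; fan_in is the network's maximum fan-in.
        vals = list(bits)
        while len(vals) > 1:
            vals = [parity_module(vals[i:i + fan_in])
                    for i in range(0, len(vals), fan_in)]
        return vals[0]

    # agrees with the direct definition on all 8-bit inputs, for fan-in 2 and 4
    for k in (2, 4):
        assert all(modular_parity(bits, k) == sum(bits) % 2
                   for bits in product([0, 1], repeat=8))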


TAEBC-2009 | 2009

Constructive Neural Networks

Leonardo Franco; David A. Elizondo; José M. Jerez

The book is a collection of invited papers on constructive methods for neural networks. Most of the chapters are extended versions of works presented at the special session on constructive neural network algorithms of the 18th International Conference on Artificial Neural Networks (ICANN 2008), held September 3-6, 2008 in Prague, Czech Republic. The book is devoted to constructive neural networks and other incremental learning algorithms that constitute an alternative to standard trial-and-error methods for searching for adequate architectures. It comprises 15 articles, which provide an overview of the most recent advances in the techniques being developed for constructive neural networks and their applications. It will be of interest to researchers in industry and academia, and to post-graduate students interested in the latest advances and developments in the field of artificial neural networks.


Neural Processing Letters | 2009

Neural network architecture selection: can function complexity help?

Iván Gómez; Leonardo Franco; José M. Jerez

This work analyzes the problem of selecting an adequate neural network architecture for a given function, comparing existing approaches and introducing a new one based on the complexity of the function under analysis. Numerical simulations using a large set of Boolean functions are carried out, and a comparative analysis of the results is done according to the architectures that the different techniques suggest and the generalization ability obtained in each case. The results show that a procedure utilizing the complexity of the function can help to achieve almost optimal results, despite the fact that some variability exists in the generalization ability of functions within similar complexity classes.
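
The abstract does not define the complexity measure. As a hedged illustration, the sketch below computes a boundary-based score, the fraction of Hamming-distance-1 input pairs on which the function disagrees, which we assume is the type of measure intended; an architecture-selection rule would then map such a score to a suggested number of hidden units.

    def boundary_complexity(tt, n):
        # Fraction of Hamming-distance-1 input pairs on which the Boolean
        # function disagrees -- a stand-in for the paper's complexity
        # measure, which we assume to be of this boundary-counting type.
        pairs = disagreements = 0
        for idx in range(2 ** n):
            for i in range(n):
                nb = idx ^ (1 << i)
                if nb > idx:                    # count each unordered pair once
                    pairs += 1
                    disagreements += tt[idx] != tt[nb]
        return disagreements / pairs

    # Parity flips on every hypercube edge (score 1.0); a constant function
    # scores 0, and harder functions should need larger architectures.
    parity6 = [bin(i).count("1") % 2 for i in range(2 ** 6)]
    print(boundary_complexity(parity6, 6))      # -> 1.0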


Constructive Neural Networks | 2009

Constructive Neural Network Algorithms for Feedforward Architectures Suitable for Classification Tasks

Maria do Carmo Nicoletti; João Roberto Bertini; David A. Elizondo; Leonardo Franco; José M. Jerez

This chapter presents and discusses several well-known constructive neural network algorithms suitable for building feedforward architectures for classification tasks involving two classes. The algorithms are divided into two groups: those driven by the minimization of classification errors and those based on a sequential model. Although the focus is on two-class classification algorithms, the chapter also briefly comments on the multiclass versions of several two-class algorithms, highlights some of the most popular constructive algorithms for regression problems, and refers to several other alternative algorithms.


IEEE Transactions on Industrial Informatics | 2014

FPGA Implementation of the C-Mantec Neural Network Constructive Algorithm

Francisco Ortega-Zamorano; José M. Jerez; Leonardo Franco

Competitive majority network trained by error correction (C-Mantec), a recently proposed constructive neural network algorithm that generates very compact architectures with good generalization capabilities, is implemented in a field-programmable gate array (FPGA). A clear difference from most existing neural network implementations (most of them based on the backpropagation algorithm) is that C-Mantec automatically generates an adequate neural architecture while the training of the data is performed. All the steps involved in the implementation, including the on-chip learning phase, are fully described, and a thorough analysis of the results is carried out using two sets of benchmark problems. The results show a clear increase in computation speed in comparison to the standard personal computer (PC)-based implementation, demonstrating the usefulness of the intrinsic parallelism of FPGAs for neurocomputational tasks and the suitability of the hardware version of the C-Mantec algorithm for application to real-world problems.
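
C-Mantec builds on Frean's thermal perceptron rule, in which the size of a weight update decays with the neuron's synaptic potential and with an annealed temperature. The sketch below shows one common form of that single-neuron rule only, not the full constructive algorithm (competition among neurons, architecture growth) or its FPGA mapping; the annealing schedule and toy data are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    def thermal_perceptron(X, y, epochs=100, T0=10.0):
        # Thermal perceptron rule: a misclassified example updates the
        # weights by a factor that decays both with the magnitude of the
        # synaptic potential phi and with temperature T, annealed to zero.
        w, b = rng.normal(scale=0.1, size=X.shape[1]), 0.0
        for epoch in range(epochs):
            T = max(T0 * (1 - epoch / epochs), 1e-9)   # assumed linear schedule
            for x, t in zip(X, y):
                phi = x @ w + b
                o = int(phi > 0)
                if o != t:
                    factor = np.exp(-abs(phi) / T)
                    w += (t - o) * factor * x
                    b += (t - o) * factor
        return w, b

    # Toy linearly separable problem: the rule should find a separating plane
    X = rng.normal(size=(100, 2))
    y = (X @ np.array([1.5, -2.0]) > 0).astype(int)
    w, b = thermal_perceptron(X, y)
    print(f"training accuracy: {((X @ w + b > 0).astype(int) == y).mean():.0%}")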

Collaboration


Dive into Leonardo Franco's collaboration.

Top Co-Authors

Sergio A. Cannas

National University of Cordoba
