José Luis Bernier
University of Granada
Publications
Featured research published by José Luis Bernier.
Neurocomputing | 2002
Ignacio Rojas; Héctor Pomares; José Luis Bernier; Julio Ortega; Begoña Pino; Francisco J. Pelayo; Alberto Prieto
This paper proposes a framework for constructing and training a radial basis function (RBF) neural network. For this purpose, a sequential learning algorithm is presented to adapt the structure of the network, making it possible to create new hidden units and also to detect and remove inactive ones. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters σ are introduced; this eliminates the symmetry restriction and provides the neurons in the hidden layer with greater flexibility for function approximation. Two other important characteristics of the proposed neural system are that the activation of the hidden neurons is normalized, which, as reported in the literature, performs better than unnormalized activations, and that the output weights, instead of being single parameters, are functions of the input variables, which leads to a significant reduction in the number of hidden units compared with the classical RBF network. Finally, we examine the result of applying the proposed algorithm to time series prediction.
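As a rough illustration of the pseudo-Gaussian idea, an asymmetric basis function with a separate width on each side of the center can be sketched as follows (the function name and exact formulation here are illustrative assumptions, not the paper's definition):

```python
import math

def pseudo_gaussian(x, center, sigma_left, sigma_right):
    """Asymmetric 'pseudo-Gaussian': a separate width parameter on each
    side of the center removes the symmetry restriction of an ordinary
    Gaussian. Illustrative sketch only, not the authors' exact form."""
    sigma = sigma_left if x < center else sigma_right
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

# With equal widths it reduces to an ordinary Gaussian; with unequal
# widths the two sides of the bump decay at different rates:
print(pseudo_gaussian(0.0, 0.0, 1.0, 2.0))   # peak value 1.0
```

The paper's network additionally normalizes the hidden activations; dividing each unit's output by the sum over all hidden units is the usual way to realize that.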
Computer Vision and Image Understanding | 2008
Javier Díaz; Eduardo Ros; Rodrigo Agís; José Luis Bernier
Optical-flow computation is a well-known technique, and there are important fields in which the application of this visual modality commands high interest. Nevertheless, most real-world applications require real-time processing, an issue which has only recently been addressed. Most real-time systems described to date use basic models which limit their applicability to generic tasks, especially when fast motion is present or when subpixel motion resolution is required. Therefore, instead of implementing a complex optical-flow approach, we describe here a very high-frame-rate optical-flow processing system. Recent advances in image sensor technology make it possible nowadays to use high-frame-rate sensors to properly sample fast motion (i.e. as a low-motion scene), which makes a gradient-based approach one of the best options in terms of accuracy and consumption of resources for any real-time implementation. Taking advantage of the regular data flow of this kind of algorithm, our approach implements a novel superpipelined, fully parallelized architecture for optical-flow processing. The system is fully working and is organized into more than 70 pipeline stages, which achieve a data throughput of one pixel per clock cycle. This computing scheme is well suited to FPGA technology and VLSI implementation. The developed customized DSP architecture is capable of processing up to 170 frames per second at a resolution of 800×600 pixels. We discuss the advantages of high-frame-rate processing and justify the optical-flow model chosen for the implementation. We analyze this architecture, measure the system resource requirements using FPGA devices, and finally evaluate the system's performance and compare it with other approaches described in the literature.
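For context, the gradient constraint underlying such approaches can be illustrated with a minimal pure-Python, Lucas-Kanade-style patch estimator (a software sketch of the gradient-based model family, not the paper's hardware pipeline; all names are assumptions):

```python
def lucas_kanade_patch(I1, I2):
    """Least-squares solution of the gradient constraint
    Ix*u + Iy*v + It = 0 over a whole patch, yielding one (u, v)
    flow vector. Sketch of the gradient-based model family only."""
    h, w = len(I1), len(I1[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (I1[y][x + 1] - I1[y][x - 1]) / 2.0  # spatial gradients
            iy = (I1[y + 1][x] - I1[y - 1][x]) / 2.0  # (central differences)
            it = I2[y][x] - I1[y][x]                  # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy                       # solve normal equations
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v

# A quadratic test image shifted by (0.5, 0.25); central differences are
# exact for quadratics, so the subpixel motion is recovered exactly:
I1 = [[(x - 4) ** 2 + (y - 4) ** 2 for x in range(9)] for y in range(9)]
I2 = [[(x - 4.5) ** 2 + (y - 4.25) ** 2 for x in range(9)] for y in range(9)]
u, v = lucas_kanade_patch(I1, I2)   # u ≈ 0.5, v ≈ 0.25
```

The higher the frame rate, the smaller the inter-frame displacement, which is exactly the regime in which this linearized constraint holds well.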
Neural Processing Letters | 2000
Ignacio Rojas; Héctor Pomares; J. González; José Luis Bernier; Eduardo Ros; Francisco J. Pelayo; Alberto Prieto
The main architectures, learning abilities and applications of radial basis function (RBF) neural networks are well documented. However, to the best of our knowledge, no in-depth analyses have been carried out into the influence on the behaviour of the neural network arising from the use of different alternatives for the design of an RBF (different non-linear functions, distances, number of neurons, structures, etc.). Thus, as a complement to the existing intuitive knowledge, it is necessary to have a more precise understanding of the significance of the different alternatives. In the present contribution, the relevance and relative importance of the parameters involved in such a design are investigated using a statistical tool, the ANalysis Of VAriance (ANOVA). In order to obtain results that are widely applicable, various problems of classification, functional approximation and time series estimation are analyzed. Conclusions are drawn regarding the full set of design alternatives considered.
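As a sketch of how ANOVA attributes significance to a design factor, the one-way F statistic (between-group variance over within-group variance) can be computed as follows (toy numbers, not data from the paper):

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group variance divided by
    within-group variance. A large F suggests the factor's levels
    (e.g. different numbers of RBF neurons) differ beyond chance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# e.g. test errors observed at two levels of a design factor (toy data):
print(one_way_anova_F([[1, 2], [3, 4]]))   # 8.0
```

In practice the F value is compared against an F distribution to decide whether the design parameter has a statistically significant effect.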
Parallel Problem Solving from Nature | 1996
José Luis Bernier; C. Ilia Herráiz; Juan Julián Merelo Guervós; S. Olmeda; Alberto Prieto
MasterMind is a game in which the player must discover, by making guesses, a hidden combination of colors set by the opponent. The player lays down a combination of colors, and the opponent reports the number of positions the player has guessed correctly (black tokens) and the number of correct colors that are in a different place from the hidden combination (white tokens). This problem can be formulated in the following way: the target of the game is to find a string composed of l symbols, drawn from an alphabet of cardinality c, using as constraints hints that restrict the search space. The partial objective of the search is to find a string that meets all the constraints made so far, the final objective being to find the hidden string. This problem can also be placed within the class of constrained optimization, although in this case not all the constraints are known in advance; hence its dynamic nature. Three algorithms playing MasterMind have been evaluated with respect to the number of guesses made by each one and the number of combinations examined before finding the solution: a random-search-with-constraints algorithm, simulated annealing (SA), and a genetic algorithm (GA). The random search and the genetic algorithm play at each step an optimal solution, i.e., one that is consistent with the constraints made so far, while simulated annealing plays the best solution found within certain time constraints. This paper shows that the algorithms following the optimal strategy behave similarly, finding the correct combination in roughly the same number of guesses; between them, the GA is better with respect to the number of combinations examined, and this difference increases with the size of the search space, while SA is much faster (around two orders of magnitude) and gives a good enough answer.
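The hint scoring and the consistency check that such algorithms rely on can be sketched in a few lines (illustrative helper names; the paper's encodings may differ):

```python
from collections import Counter

def score(guess, hidden):
    """Return (blacks, whites): symbols in the right position, and
    correct symbols that sit in the wrong position."""
    blacks = sum(g == h for g, h in zip(guess, hidden))
    common = sum((Counter(guess) & Counter(hidden)).values())
    return blacks, common - blacks

def consistent(candidate, history):
    """Under the optimal strategy, a combination is playable only if it
    would reproduce every (guess, blacks, whites) hint seen so far."""
    return all(score(guess, candidate) == (b, w) for guess, b, w in history)

print(score("AABB", "ABAB"))                    # (2, 2)
print(consistent("ABAB", [("AABB", 2, 2)]))     # True
```

The search algorithms differ only in how they look for a candidate that passes `consistent`: exhaustively with random restarts, by annealing, or by evolving a population.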
Neural Processing Letters | 2000
José Luis Bernier; Julio Ortega; Ignacio Rojas; Eduardo Ros; Alberto Prieto
When a learning algorithm is applied to an MLP structure, different solutions for the weight values can be obtained if the parameters of the applied rule or the initial conditions are changed. These solutions can present similar learning performance but differ in other respects, in particular in their fault tolerance against weight perturbations. In this paper, a backpropagation algorithm that maximizes fault tolerance is proposed. The algorithm explicitly adds to the backpropagation learning rule a new term, related to the mean squared error degradation in the presence of weight deviations, in order to minimize this degradation. The results obtained demonstrate the efficiency of the proposed learning rule in comparison with other algorithms.
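To see why such a term penalizes fragile weight configurations, consider a single linear neuron: the expected MSE increase under zero-mean relative weight noise has a simple closed form (a simplified sketch for intuition, not the paper's MLP formulation):

```python
def mse_degradation(w, xs, rel_sigma):
    """Expected MSE increase of a linear neuron y = w·x when each weight
    w_i deviates by independent zero-mean noise of std rel_sigma*|w_i|.
    The cross term vanishes in expectation, leaving
        E[dMSE] = rel_sigma**2 * mean_x sum_i (w_i * x_i)**2.
    Single-neuron sketch only; the paper treats full MLPs."""
    return rel_sigma ** 2 * sum(
        sum((wi * xi) ** 2 for wi, xi in zip(w, x)) for x in xs) / len(xs)

# Two weight vectors computing the same function on x = (1, 1), but the
# balanced one degrades half as much under relative weight noise:
print(mse_degradation([1.0, 0.0], [(1.0, 1.0)], 0.5))   # 0.25
print(mse_degradation([0.5, 0.5], [(1.0, 1.0)], 0.5))   # 0.125
```

Adding such a term to the loss steers training toward weight configurations with equal nominal error but lower sensitivity to perturbation.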
Neural Computation | 2000
José Luis Bernier; Julio Ortega; Eduardo Ros; Ignacio Rojas; Alberto Prieto
An analysis of the influence of weight and input perturbations in a multilayer perceptron (MLP) is made in this article. Quantitative measurements of fault tolerance, noise immunity, and generalization ability are provided. From the expressions obtained, it is possible to justify some previously reported conjectures and experimentally obtained results (e.g., the influence of weight magnitudes, the relation between training with noise and the generalization ability, the relation between fault tolerance and the generalization ability). The measurements introduced here are explicitly related to the mean squared error degradation in the presence of perturbations, thus constituting a selection criterion between different alternatives of weight configurations. Moreover, they allow us to predict the degradation of the learning performance of an MLP when its weights or inputs deviate from their nominal values; thus, the behavior of a physical implementation can be evaluated, before the weights are mapped onto it, according to its accuracy.
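The kind of degradation these measurements quantify in closed form can also be observed empirically; a minimal Monte Carlo sketch for a linear model (illustrative only, not the paper's expressions):

```python
import random

def perturbed_mse(w, xs, ys, sigma, trials=2000, seed=0):
    """Monte Carlo estimate of the MSE of a linear model y = w·x when
    each weight is deviated by additive N(0, sigma**2) noise: an
    empirical view of performance degradation under perturbation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        wp = [wi + rng.gauss(0.0, sigma) for wi in w]   # perturbed weights
        total += sum(
            (sum(wi * xi for wi, xi in zip(wp, x)) - y) ** 2
            for x, y in zip(xs, ys)) / len(xs)
    return total / trials

xs, ys = [(1.0,), (2.0,)], [2.0, 4.0]
print(perturbed_mse([2.0], xs, ys, 0.0))   # 0.0  (exact fit, no noise)
print(perturbed_mse([2.0], xs, ys, 0.1))   # ≈ 0.025
```

Having a closed-form expression, as the paper derives, removes the need for this sampling and lets one rank candidate weight configurations before mapping them onto imprecise hardware.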
Neurocomputing | 2000
José Luis Bernier; Julio Ortega; Ignacio Rojas; Alberto Prieto
This paper proposes a version of the backpropagation algorithm which increases the tolerance of a feedforward neural network to deviations in the weight values. Such deviations can originate either when the neural network is mapped onto a given VLSI circuit where the precision and/or weight matching are low, or from physical defects affecting the neural circuits. The modified backpropagation algorithm we propose uses the statistical sensitivity of the network to changes in the weights as a quantitative measure of network tolerance, and attempts to reduce this sensitivity while keeping the usual training performance figures (in error and time) similar to those obtained with the standard backpropagation algorithm.
Neural Processing Letters | 1999
José Luis Bernier; Julio Ortega; M. M. Rodríguez; Ignacio Rojas; Alberto Prieto
IEEE International Conference on Fuzzy Systems | 1997
Ignacio Rojas; J. J. Merelo; José Luis Bernier; Alberto Prieto
International Conference on Computational Science | 2006
J. Fernández; Mancia Anguita; Eduardo Ros; José Luis Bernier
Users of Scientific Computing Environments (SCEs) benefit from faster high-level software development at the cost of longer run times due to the interpreted environment. For time-consuming SCE applications, dividing the workload among several computers can be a cost-effective acceleration technique. Using our PVM and MPI toolboxes, Matlab® and Octave users in a computer cluster can parallelize their interpreted applications using the native cluster programming paradigm: message-passing. Our toolboxes are complete interfaces to the corresponding libraries, support all the compatible datatypes in the base SCE, and have been designed with performance and maintainability in mind. Although in this paper we focus on our new toolbox, MPITB for Octave, we describe the general design of these toolboxes and of the development aids offered to end users, mention related work and speedup results obtained by some of our users, and present speedup results for the NPB-EP benchmark with MPITB in both SCEs.
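Whatever the messaging layer, the first step in parallelizing an interpreted loop is partitioning its index space among processes; a minimal sketch (in Python for illustration; MPITB itself is an Octave/Matlab interface, and these helper names are not its API):

```python
def local_range(n, size, rank):
    """Contiguous block of indices 0..n-1 owned by process `rank` out of
    `size` processes: the usual owner-computes split behind
    message-passing loops. Illustrative only, not MPITB's API."""
    base, extra = divmod(n, size)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# Simulate 4 "ranks" each summing its slice of f(i) = i*i, then reducing:
partials = [sum(i * i for i in range(*local_range(10, 4, r)))
            for r in range(4)]
print(sum(partials))                  # 285, same as the serial sum
print(sum(i * i for i in range(10)))  # 285
```

In a real cluster run, each rank would compute only its own slice and the partial results would be combined with a reduce-style message-passing call.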