
Publication


Featured research published by Víctor M. Rivas.


Neurocomputing | 2000

G-Prop: Global optimization of multilayer perceptrons using GAs

Pedro A. Castillo; J. J. Merelo; Alberto Prieto; Víctor M. Rivas; G. Romero

A general problem in model selection is to obtain the right parameters that make a model fit observed data. For a multilayer perceptron (MLP) trained with back-propagation (BP), this means finding the appropriate layer size and initial weights. This paper proposes a method (G-Prop, genetic backpropagation) that attempts to solve that problem by combining a genetic algorithm (GA) and BP to train MLPs with a single hidden layer. The GA selects the initial weights and changes the number of neurons in the hidden layer through the application of specific genetic operators. G-Prop combines the advantages of the global search performed by the GA over the MLP parameter space and the local search of the BP algorithm. The application of the G-Prop algorithm to several real-world and benchmark problems shows that MLPs evolved using G-Prop are smaller and achieve a higher level of generalization than other perceptron training algorithms, such as Quick-Propagation or RPROP, and other evolutionary algorithms, such as G-LVQ.
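The hybrid loop the abstract describes (a GA proposing hidden-layer sizes and initial weights, with BP acting as local search inside the fitness evaluation) can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the XOR data, population sizes, operators and size penalty are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR problem (a stand-in for the paper's benchmarks).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_net(hidden):
    # A genome holds the hidden-layer size implicitly via the weight shapes.
    return {"W1": rng.normal(0, 1, (2, hidden)),
            "W2": rng.normal(0, 1, (hidden, 1))}

def backprop(net, epochs=200, lr=0.5):
    # Local search: plain gradient descent on squared error.
    W1, W2 = net["W1"].copy(), net["W2"].copy()
    for _ in range(epochs):
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h
    return {"W1": W1, "W2": W2}

def fitness(net):
    # Error after BP training plus a small penalty on hidden-layer size,
    # mimicking G-Prop's preference for smaller networks.
    t = backprop(net)
    out = sigmoid(sigmoid(X @ t["W1"]) @ t["W2"])
    return float(np.mean((out - y) ** 2)) + 0.001 * net["W1"].shape[1]

def mutate(net):
    # Specific operators: change the hidden-layer size, or perturb weights.
    hidden = net["W1"].shape[1]
    if rng.random() < 0.3 and hidden > 1:
        return init_net(hidden + int(rng.choice([-1, 1])))
    return {k: v + rng.normal(0, 0.3, v.shape) for k, v in net.items()}

pop = [init_net(int(h)) for h in rng.integers(2, 6, size=8)]
for _ in range(5):
    pop.sort(key=fitness)                       # truncation selection
    pop = pop[:4] + [mutate(p) for p in pop[:4]]

best = min(pop, key=fitness)
print("best hidden size:", best["W1"].shape[1])
```

The key design point carried over from the abstract is the division of labour: the GA only searches over structure and starting points, while gradient descent does the fine-grained weight tuning inside each evaluation.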


Neural Processing Letters | 2000

Evolving Multilayer Perceptrons

Pedro A. Castillo; J. Carpio; J. J. Merelo; Alberto Prieto; Víctor M. Rivas; G. Romero

This paper proposes a new version of a method (G-Prop, genetic backpropagation) that attempts to solve the problem of finding appropriate initial weights and learning parameters for a single hidden layer Multilayer Perceptron (MLP) by combining an evolutionary algorithm (EA) and backpropagation (BP). The EA selects the MLP initial weights, the learning rate and changes the number of neurons in the hidden layer through the application of specific genetic operators, one of which is BP training. The EA works on the initial weights and structure of the MLP, which is then trained using QuickProp; thus G-Prop combines the advantages of the global search performed by the EA over the MLP parameter space and the local search of the BP algorithm. The application of the G-Prop algorithm to several real-world and benchmark problems shows that MLPs evolved using G-Prop are smaller and achieve a higher level of generalization than other perceptron training algorithms, such as QuickPropagation or RPROP, and other evolutionary algorithms. It also shows some improvement over previous versions of the algorithm.


Applied Soft Computing | 2006

Finding a needle in a haystack using hints and evolutionary computation: the case of evolutionary MasterMind

Juan-Julián Merelo-Guervós; Pedro A. Castillo; Víctor M. Rivas

In this paper we present a new version of an evolutionary algorithm that finds the hidden combination in the game of MasterMind using hints that indicate how close each played combination is to it. The evolutionary algorithm finds the hidden combination in an optimal number of guesses, is efficient in terms of memory and CPU, and examines only a minimal part of the search space. The algorithm is fast, and indeed previous versions can be played in real time on the World Wide Web. This new version of the algorithm is presented and compared with theoretical bounds and other algorithms. We also examine how the algorithm scales with search space size, and its performance for different values of the EA parameters.
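The hint-driven search described above can be sketched as an EA that evolves combinations until one is consistent with every hint received so far, then plays it. The board size, genetic operators and the exhaustive fallback are illustrative assumptions, not the paper's actual algorithm.

```python
import itertools
import random

random.seed(1)
COLORS, PEGS = 6, 4  # classic MasterMind board (assumed sizes)

def hint(secret, guess):
    # Black pegs: right colour, right place; white: right colour, wrong place.
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum(min(secret.count(c), guess.count(c)) for c in range(COLORS))
    return black, common - black

def fitness(candidate, history):
    # A candidate is consistent (fitness 0) if playing every past guess
    # against it would reproduce exactly the hint that guess received.
    return sum(abs(b - hb) + abs(w - hw)
               for guess, (hb, hw) in history
               for b, w in [hint(candidate, guess)])

def evolve_guess(history, pop_size=60, gens=40):
    pop = [[random.randrange(COLORS) for _ in range(PEGS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, history))
        if fitness(pop[0], history) == 0:
            return pop[0]                 # fully consistent candidate found
        parents = pop[:pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            child[random.randrange(PEGS)] = random.randrange(COLORS)  # mutation
            children.append(child)
        pop = parents + children
    # Illustrative safeguard: exhaustive scan of the small search space.
    for c in itertools.product(range(COLORS), repeat=PEGS):
        if fitness(list(c), history) == 0:
            return list(c)
    return pop[0]

secret = [2, 0, 3, 1]
history, guess = [], [0, 0, 1, 1]  # fixed opening guess
while guess != secret:
    history.append((guess, hint(secret, guess)))
    guess = evolve_guess(history)
print("found", guess, "in", len(history) + 1, "guesses")
```

Because a consistent candidate can never equal an already-played non-winning guess, each iteration plays a new combination and the loop terminates; the EA's job is to find such a candidate without enumerating the space.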


Congress on Evolutionary Computation | 1999

G-Prop-II: global optimization of multilayer perceptrons using GAs

Pedro A. Castillo; Víctor M. Rivas; J. J. Merelo; Jesús González; Alberto Prieto; G. Romero

A general problem in model selection is to obtain the right parameters that make a model fit observed data. For a multilayer perceptron (MLP) trained with backpropagation (BP), this means finding the right hidden layer size, appropriate initial weights and learning parameters. The paper proposes a method (G-Prop-II) that attempts to solve that problem by combining a genetic algorithm (GA) and BP to train MLPs with a single hidden layer. The GA selects the initial weights and the learning rate of the network, and changes the number of neurons in the hidden layer through the application of specific genetic operators. G-Prop-II combines the advantages of the global search performed by the GA over the MLP parameter space and the local search of the BP algorithm. The application of the G-Prop-II algorithm to several real-world and benchmark problems shows that MLPs evolved using G-Prop-II are smaller and achieve a higher level of generalization than other perceptron training algorithms, such as QuickPropagation or RPROP, and other evolutionary algorithms, such as G-LVQ. It also shows some improvement over previous versions of the algorithm.


Neurocomputing | 2014

Short, medium and long term forecasting of time series using the L-Co-R algorithm

E. Parras-Gutierrez; Víctor M. Rivas; M. Garcia-Arenas; M.J. del Jesus

This paper describes the coevolutionary algorithm L-Co-R (Lags COevolving with Radial Basis Function Neural Networks, RBFNs) and analyzes its performance in the forecasting of time series in the short, medium and long terms. The method allows the coevolution, in a single process, of the RBFNs as the time series models, as well as the set of lags to be used for predictions, integrating two genetic algorithms with real and binary codification, respectively. The individuals of one population are radial basis function neural networks (used as models), while sets of candidate lags are the individuals of the second population. In order to test the behavior of the algorithm in the new context of a variable horizon, five different measures have been analyzed over more than 30 different databases, comparing this algorithm against six existing algorithms and over seven different prediction horizons. Statistical analysis of the results shows that L-Co-R outperforms the other methods, regardless of the horizon, and is capable of predicting short, medium or long horizons using real known values.
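The cooperative scheme above, two populations whose individuals are evaluated by pairing them with the other population's current best, can be sketched as follows. To stay self-contained, RBFN training is reduced to fitting output weights by least squares over Gaussian basis functions, and the series, population sizes and operators are invented for illustration; none of this is the L-Co-R implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy series: a noisy sinusoid.
t = np.arange(300, dtype=float)
series = np.sin(0.3 * t) + 0.1 * rng.normal(size=t.size)

MAX_LAG = 10

def make_dataset(lags):
    # Build (pattern, target) pairs using only the selected lags.
    rows = [[series[i - l] for l in lags] for i in range(MAX_LAG, len(series))]
    return np.array(rows), series[MAX_LAG:]

def rbf_features(X, centers, width=1.0):
    # Gaussian radial basis functions: a minimal stand-in for the RBFN layer.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

def fitness(lags, centers):
    X, y = make_dataset(lags)
    Phi = rbf_features(X, centers[:, :len(lags)])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # train output weights
    return float(np.mean((Phi @ w - y) ** 2))

def mutate_lags(p):
    # Nudge each selected lag by -1, 0 or +1, staying within [1, MAX_LAG].
    return sorted({int(min(MAX_LAG, max(1, l + rng.integers(-1, 2)))) for l in p})

# Population 1: sets of lags; population 2: RBF centre matrices.
lag_pop = [sorted(rng.choice(np.arange(1, MAX_LAG + 1), size=3, replace=False))
           for _ in range(6)]
ctr_pop = [rng.normal(0, 1, (5, MAX_LAG)) for _ in range(6)]

for _ in range(5):
    # Cooperative evaluation: each individual is scored together with
    # the other population's current best collaborator.
    best_ctr = min(ctr_pop, key=lambda c: fitness(lag_pop[0], c))
    lag_pop.sort(key=lambda l: fitness(l, best_ctr))
    best_lags = lag_pop[0]
    ctr_pop.sort(key=lambda c: fitness(best_lags, c))
    lag_pop = lag_pop[:3] + [mutate_lags(p) for p in lag_pop[:3]]
    ctr_pop = ctr_pop[:3] + [c + rng.normal(0, 0.2, c.shape) for c in ctr_pop[:3]]

print("best lags:", lag_pop[0])
```

The point mirrored from the paper is that the final answer is a pair: a model and the lag set it was coevolved with, rather than a model fitted to a fixed, hand-chosen window.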


Parallel Problem Solving from Nature | 2006

Multiobjective optimization of ensembles of multilayer perceptrons for pattern classification

Pedro A. Castillo; M. G. Arenas; J. J. Merelo; Víctor M. Rivas; G. Romero

Pattern classification seeks to minimize the error on unknown patterns; however, in many real-world applications, type I (false positive) and type II (false negative) errors have to be dealt with separately, which is a complex problem since an attempt to minimize one of them usually makes the other grow. In practice, one type of error can be more important than the other, and a trade-off that minimizes the most important error type must be reached. Despite the importance of type II errors, most pattern classification methods take into account only the global classification error. In this paper we propose to optimize both error types in classification by means of a multiobjective algorithm in which each error type and the network size are objectives of the fitness function. A modified version of the G-Prop method (optimization and design of multilayer perceptrons) is used to simultaneously optimize the network size and the type I and type II errors.
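The multiobjective notion the abstract relies on, keeping every network that is not beaten on all three objectives at once, comes down to Pareto dominance over (type I error, type II error, network size). A minimal non-dominated filter illustrates it; the candidate tuples below are made up, not results from the paper.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (all objectives are minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep every point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidates: (type I error, type II error, hidden units).
candidates = [(0.10, 0.30, 12), (0.20, 0.10, 20), (0.15, 0.15, 8),
              (0.25, 0.35, 30), (0.10, 0.32, 10)]
front = pareto_front(candidates)
print(front)
```

Here (0.25, 0.35, 30) drops out because (0.15, 0.15, 8) beats it on all three objectives, while the remaining candidates embody different trade-offs between the two error types and network size, which is exactly the choice the method leaves to the practitioner.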


International Work-Conference on Artificial and Natural Neural Networks | 1999

Optimizing web newspaper layout using simulated annealing

Jesús González; J. J. Merelo; Pedro A. Castillo; Víctor M. Rivas; G. Romero

This paper presents a new approach to the pagination problem. This problem has traditionally been solved offline for a variety of applications, such as the pagination of Yellow Pages or newspapers, but since these services appeared on the Internet, a new approach is needed to solve the problem in real time. This paper is concerned with the problem of paginating a selection of articles from web newspapers that match a query sent to a personalized news site by a user. The result should look like a real newspaper and adapt to the client's computer configuration (font faces and sizes, screen size and resolution, etc.). A combinatorial approach based on simulated annealing (SA) and written in JavaScript is proposed to solve the problem online on the client's computer. Experiments show that the SA achieves real-time layout optimization for up to 50 articles.
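The combinatorial SA approach can be sketched with a deliberately simplified cost function: balancing article heights across columns as a stand-in for the real newspaper-layout objective. The column count, article heights and cooling schedule are assumptions for illustration, and the sketch is in Python rather than the paper's JavaScript.

```python
import math
import random

random.seed(3)

# Hypothetical article heights (in pixels) to lay out in 3 columns.
heights = [random.randint(80, 400) for _ in range(20)]
COLS = 3

def cost(assignment):
    # Wasted-space proxy: spread between tallest and shortest column.
    col_h = [0] * COLS
    for h, c in zip(heights, assignment):
        col_h[c] += h
    return max(col_h) - min(col_h)

def anneal(steps=5000, t0=200.0, alpha=0.999):
    state = [random.randrange(COLS) for _ in heights]
    cur = cost(state)
    best, best_cost, t = state[:], cur, t0
    for _ in range(steps):
        i = random.randrange(len(heights))
        old, state[i] = state[i], random.randrange(COLS)  # move one article
        new = cost(state)
        # Accept improvements always; worse layouts with Boltzmann probability.
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best_cost:
                best, best_cost = state[:], cur
        else:
            state[i] = old  # revert the move
        t *= alpha         # geometric cooling
    return best, best_cost

layout, waste = anneal()
print("column imbalance:", waste)
```

The occasional acceptance of worse layouts at high temperature is what lets SA escape locally good but globally poor arrangements, which is the reason the paper prefers it over greedy placement for an online setting.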


European Conference on Applications of Evolutionary Computation | 2014

An Object-Oriented Library in JavaScript to Build Modular and Flexible Cross-Platform Evolutionary Algorithms

Víctor M. Rivas; Juan Julián Merelo Guervós; Gustavo Romero López; Maribel Arenas-García; Antonio M. Mora

This paper introduces jsEO, a new evolutionary computation library that runs in web browsers, as it is written in JavaScript. The library allows the rapid development of evolutionary algorithms and eases collaboration between different clients by means of individuals stored on a web server. In this work, jsEO has been tested on two simple problems, the Royal Road function and a 128-term equation, analysing how many machines take part and how many evaluations they perform. This paper attempts to reproduce the results of older papers using modern browsers and all kinds of devices that nowadays have JavaScript integrated in the browser, and is a complete rewrite of the code using the popular MooTools library. Results show that the system eases the development of evolutionary algorithms, suited to different chromosome representations and problems, that can be simultaneously executed on many different operating systems and web browsers, sharing the best solutions previously found.


Soft Computing | 2012

Coevolution of lags and RBFNs for time series forecasting: L-Co-R algorithm

E. Parras-Gutierrez; M. Garcia-Arenas; Víctor M. Rivas; M. J. del Jesus

This paper introduces Lags COevolving with RBFNs (L-Co-R), a coevolutionary method developed to face time-series forecasting problems. L-Co-R simultaneously evolves the model that provides the forecasted values and the set of time lags the model must use in the prediction process. Coevolution takes place by means of two populations that evolve at the same time, cooperating with each other; the first population is composed of radial basis function neural networks; the second one contains the individuals representing the sets of lags. Thus, the final solution provided by the method comprises both the neural net and the set of lags that best approximate the time series. The method has been tested across 34 different time series datasets, with the results compared to 6 different methods referenced in the literature and with respect to 4 different error measures. The results show that L-Co-R outperforms the other methods, as the statistical analysis carried out indicates.


International Symposium on Neural Networks | 2010

Time series forecasting: Automatic determination of lags and radial basis neural networks for a changing horizon environment

E. Parras-Gutierrez; Víctor M. Rivas

This paper shows how E-tsRBF deals with time-series prediction in a changing horizon environment. E-tsRBF is a meta-evolutionary algorithm that simultaneously evolves both the neural networks and the set of lags needed to forecast time series. The method uses radial basis function neural networks, a kind of net that has been successfully applied to time series prediction in the literature. Frequently, methods to build and train these networks must be given the past periods, or lags, to be used in order to create patterns and forecast any time series. Twenty-one time series are evaluated in this work, showing the behaviour of the new method.

Collaboration


Dive into Víctor M. Rivas's collaborations.

Top Co-Authors

G. Romero

University of Granada