Publication


Featured research published by Beatriz Pérez-Sánchez.


Pattern Recognition | 2010

A new convex objective function for the supervised learning of single-layer neural networks

Oscar Fontenla-Romero; Bertha Guijarro-Berdiñas; Beatriz Pérez-Sánchez; Amparo Alonso-Betanzos

This paper proposes a novel supervised learning method for single-layer feedforward neural networks. The approach uses an alternative objective function to the one based on the MSE, measuring the errors before the neurons' nonlinear activation functions instead of after them. In this case, the solution can be easily obtained by solving systems of linear equations, i.e., requiring much less computational power than regular methods. A theoretical study is included to prove the approximate equivalence between the global optimum of the objective function based on the regular MSE criterion and that of the proposed alternative MSE function. Furthermore, it is shown that the presented method allows incremental and distributed learning. An exhaustive experimental study, covering 10 classification and 16 regression problems, verifies the soundness and efficiency of the method. In addition, a comparison with other high-performance learning algorithms shows that the proposed method exhibits, on average, the highest performance with low computational requirements.
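
To make the idea concrete, here is a minimal NumPy sketch of measuring the error before the activation: for a logistic output, the desired outputs are mapped through the inverse activation, and the weights then come out of a single linear least-squares solve. The function names and the choice of logistic activation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_pre_activation(X, D, eps=1e-6):
    """X: (n_samples, n_inputs), D: (n_samples, n_outputs), entries of D in (0, 1)."""
    D = np.clip(D, eps, 1 - eps)                   # keep the logit finite
    Z = np.log(D / (1 - D))                        # f^{-1}(D): desired pre-activations
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    W, *_ = np.linalg.lstsq(Xb, Z, rcond=None)     # one linear system, no iterations
    return W

def predict(X, W):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return 1.0 / (1.0 + np.exp(-(Xb @ W)))         # activation applied after the linear map
```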


Intelligent Data Engineering and Automated Learning | 2007

A linear learning method for multilayer perceptrons using least-squares

Bertha Guijarro-Berdiñas; Oscar Fontenla-Romero; Beatriz Pérez-Sánchez; Paula Fraguela

Training multilayer neural networks is typically carried out using gradient descent techniques. Ever since backpropagation (BP), the first gradient-based algorithm proposed by Rumelhart et al., novel training algorithms have appeared to improve various facets of the learning process for feed-forward neural networks, learning speed being one of them. In this paper, a learning algorithm based on linear least squares is presented. We offer the theoretical basis for the method, and its performance is illustrated by applying it to several examples, comparing it with other learning algorithms on well-known data sets. Results show that the new algorithm improves the learning speed of several backpropagation variants while preserving good optimization accuracy. Due to its performance and low computational cost, it is an interesting alternative, even to second-order methods, particularly when dealing with large networks and training sets.
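
As a rough illustration of where linear least squares enters an MLP, the sketch below solves only the output layer of a two-layer network in closed form by inverting the output activation on the desired signals. The full method also derives desired signals for the hidden layer, which is omitted here; the random hidden weights and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_output_layer(X, D, n_hidden=20, eps=1e-6):
    W1 = rng.normal(size=(X.shape[1], n_hidden))   # hidden weights (here: fixed, random)
    H = np.tanh(X @ W1)                            # hidden-layer activations
    D = np.clip(D, -1 + eps, 1 - eps)              # keep arctanh finite
    Z = np.arctanh(D)                              # invert the tanh output activation
    W2, *_ = np.linalg.lstsq(H, Z, rcond=None)     # closed-form output weights
    return W1, W2
```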


Expert Systems With Applications | 2013

An online learning algorithm for adaptable topologies of neural networks

Beatriz Pérez-Sánchez; Oscar Fontenla-Romero; Bertha Guijarro-Berdiñas; David Martínez-Rego

Many real scenarios in machine learning are of a dynamic nature, and learning in these environments represents an important challenge for learning systems. In this context, the model used for learning should work in real time and have the ability to act and react by itself, adjusting its controlling parameters, and even its structure, depending on the requirements of the process. In a previous work, the authors presented an online learning algorithm for two-layer feedforward neural networks that includes a factor weighting the errors committed on each sample; this method is effective in dynamic environments as well as in stationary contexts. Building on this method's incremental nature, we explore the possibility of adapting the network topology according to the learning needs. In this paper, we demonstrate and justify the suitability of the online learning algorithm for working with adaptive structures without significantly degrading its performance. The theoretical basis for the method is given, and its performance is illustrated by applying it to different system identification problems. The results confirm that the proposed method is able to incorporate units into its hidden layer during the learning process without major performance degradation.
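
A toy sketch of the topology-adaptation idea follows: an online learner that appends a hidden unit when the recent windowed error stops improving. The growth trigger, the gradient update rule, and the class name are all illustrative choices, not the paper's algorithm.

```python
import numpy as np

class GrowingNet:
    """Toy online two-layer net that grows its hidden layer when learning stalls."""

    def __init__(self, n_in, n_hidden=2, lr=0.01):
        self.rng = np.random.default_rng(0)
        self.W1 = self.rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.w2 = self.rng.normal(scale=0.5, size=n_hidden)
        self.lr = lr
        self.errs = []

    def step(self, x, d):
        h = np.tanh(x @ self.W1)                              # hidden activations
        y = h @ self.w2                                       # scalar prediction
        e = d - y
        grad_W1 = e * np.outer(x, self.w2 * (1.0 - h ** 2))   # backprop through tanh
        self.w2 += self.lr * e * h                            # output-layer step
        self.W1 += self.lr * grad_W1                          # hidden-layer step
        self.errs.append(e * e)
        # Grow: append a hidden unit if the windowed error has plateaued
        if len(self.errs) >= 200:
            recent = np.mean(self.errs[-100:])
            older = np.mean(self.errs[-200:-100])
            if recent > 0.95 * older:
                new_col = self.rng.normal(scale=0.5, size=(self.W1.shape[0], 1))
                self.W1 = np.hstack([self.W1, new_col])
                self.w2 = np.append(self.w2, 0.0)   # new unit starts with zero output weight
                self.errs.clear()
        return y
```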


International Symposium on Neural Networks | 2010

An incremental learning method for neural networks in adaptive environments

Beatriz Pérez-Sánchez; Oscar Fontenla-Romero; Bertha Guijarro-Berdiñas

Many real scenarios in machine learning are non-stationary, which forces the development of new algorithms able to deal with changes, gradual or abrupt, in the underlying problem to be learnt. As the dynamics of these changes can differ, existing machine learning algorithms have difficulty coping with them. In this work we propose a new method based on introducing a forgetting function into an incremental online learning algorithm for two-layer feedforward neural networks. This forgetting function gives monotonically increasing importance to new data, so the network forgets in the presence of changes while maintaining stable behavior when the context is stationary. The theoretical basis for the method is given and its performance is illustrated by evaluating its behavior. The results confirm that the proposed method is able to work in evolving environments.
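
The forgetting idea can be illustrated with a generic analogue, exponentially weighted recursive least squares: a factor lam < 1 discounts past samples, so new data receive relatively greater weight. The paper's forgetting function is specific to its own algorithm; this sketch only shows the common mechanism.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with an exponential forgetting factor."""

    def __init__(self, n_in, lam=0.99, delta=100.0):
        self.lam = lam                    # lam < 1 => old samples fade away
        self.P = delta * np.eye(n_in)     # inverse-covariance estimate
        self.w = np.zeros(n_in)

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)      # gain vector for this sample
        e = d - self.w @ x                # a-priori prediction error
        self.w += k * e                   # correct the weights towards the new sample
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e
```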


International Conference on Artificial Neural Networks | 2010

Fault prognosis of mechanical components using on-line learning neural networks

David Martínez-Rego; Oscar Fontenla-Romero; Beatriz Pérez-Sánchez; Amparo Alonso-Betanzos

Predictive maintenance of industrial machinery has steadily emerged as an important topic of research. Through accurate automatic diagnosis and prognosis of faults, savings in current maintenance expenses can be obtained. The aim of this work is to develop an automatic prognosis system based on vibration data. An on-line version of the Sensitivity-Based Linear Learning Method algorithm for neural networks is applied to real vibration data in order to assess its forecasting capabilities. Moreover, the behavior of the method is compared with that of an efficient and fast method, the On-line Sequential Extreme Learning Machine. The accurate predictions of the proposed method pave the way for the future development of a complete prognosis system.


CAEPIA'09: Current Topics in Artificial Intelligence, 13th Conference of the Spanish Association for Artificial Intelligence | 2009

An incremental learning method for neural networks based on sensitivity analysis

Beatriz Pérez-Sánchez; Oscar Fontenla-Romero; Bertha Guijarro-Berdiñas

The Sensitivity-Based Linear Learning Method (SBLLM) is a learning method for two-layer feedforward neural networks based on sensitivity analysis that calculates the weights by solving a linear system of equations. This yields an important saving in computational time, which significantly enhances the behavior of this method compared with other batch learning algorithms. The SBLLM works in batch mode; however, several reasons justify the need for an on-line version of this algorithm. Among them are the need for real-time learning in many environments where information is not available at the outset but is continually acquired, and situations in which large databases must be managed with limited computing resources. In this paper an incremental version of the SBLLM is presented. The theoretical basis for the method is given and its performance is illustrated by comparing the results obtained by the on-line and batch versions of the algorithm.
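
A minimal sketch of what makes such a linear-system method incremental: the normal-equation statistics can be accumulated sample by sample, so the weights can be recomputed at any time without revisiting old data. The actual SBLLM update also involves sensitivity terms, which are omitted here; the class below is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

class IncrementalLS:
    """Linear least squares via running sums A = X^T X and b = X^T z."""

    def __init__(self, n_in):
        self.A = np.zeros((n_in, n_in))
        self.b = np.zeros(n_in)

    def add_sample(self, x, z):
        self.A += np.outer(x, x)          # rank-one update, O(n^2) per sample
        self.b += z * x                   # no stored history needed

    def solve(self, ridge=1e-8):
        # A small ridge term keeps the system well conditioned early on
        return np.linalg.solve(self.A + ridge * np.eye(len(self.b)), self.b)
```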


Artificial Intelligence Review | 2018

A review of adaptive online learning for artificial neural networks

Beatriz Pérez-Sánchez; Oscar Fontenla-Romero; Bertha Guijarro-Berdiñas

In real applications, learning algorithms have to address several issues such as huge amounts of data, samples that arrive continuously, and underlying data generation processes that evolve over time. Classical learning is not always appropriate for these environments, since it assumes independent and identically distributed data. Taking into account the requirements of the learning process, systems should be able to modify both their structures and their parameters. In this survey, our aim is to review the methodologies developed for adaptive learning with artificial neural networks, analyzing the strategies that have traditionally been applied over the years. We focus on sequential learning, the handling of the concept drift problem, and the determination of the network structure. Despite the research in this field, there are currently no standard methods for dealing with these environments, and diverse issues remain open problems.


Applied Soft Computing | 2017

An incremental non-iterative learning method for one-layer feedforward neural networks

Oscar Fontenla-Romero; Beatriz Pérez-Sánchez; Bertha Guijarro-Berdiñas

In the machine learning literature, and especially in the literature referring to artificial neural networks, most methods are iterative and operate in batch mode. However, many of the standard algorithms are not suitable for efficiently managing the emerging large-scale data sets obtained from new real-world applications. Novel proposals to address these challenges are mainly iterative approaches based on incremental or distributed learning algorithms. However, there are few learning methods based on non-iterative approaches, which have certain advantages over iterative models in dealing more efficiently with these new challenges. We have developed a non-iterative, incremental and hyperparameter-free learning method for one-layer feedforward neural networks without hidden layers. This method efficiently obtains the optimal parameters of the network regardless of whether the data contains more samples than variables or vice versa. It does so by using a square loss function that measures errors before the output activation functions and scales them by the slope of these functions at each data point. The outcome is a system of linear equations that obtains the network's weights and that is further transformed using Singular Value Decomposition. We analyze the behavior of the algorithm, comparing its performance and scaling properties to other state-of-the-art approaches. Experimental results demonstrate that the proposed method appropriately solves a wide range of classification problems and is able to deal efficiently with large-scale tasks.
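
A hedged sketch of the slope-scaled pre-activation loss for a logistic output: each target is mapped to its pre-activation, the residuals are scaled by the activation's slope at that point, and the resulting least-squares problem is solved through the SVD (here via the pseudo-inverse). The exact weighting and any regularization in the paper may differ.

```python
import numpy as np

def fit_scaled_svd(X, d, eps=1e-6):
    """X: (n_samples, n_inputs), d: (n_samples,) targets in (0, 1)."""
    d = np.clip(d, eps, 1 - eps)
    z = np.log(d / (1 - d))            # desired pre-activation for a logistic output
    slope = d * (1 - d)                # f'(z) = f(z)(1 - f(z)) = d(1 - d)
    Xs = X * slope[:, None]            # scale each sample's row by the slope
    zs = z * slope
    w = np.linalg.pinv(Xs) @ zs        # minimum-norm solution, computed via SVD;
    return w                           # works whether n_samples > n_inputs or not
```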


International Symposium on Neural Networks | 2015

Selecting target concept in one-class classification for handling class imbalance problem

Beatriz Pérez-Sánchez; Oscar Fontenla-Romero; Noelia Sánchez-Maroño

Microarray data classification is a difficult problem for computational techniques due to its inherent properties, mainly its imbalanced distribution and small sample size. Machine learning has been widely employed for handling this type of data, predominantly applying two-class classification techniques. However, the one-class approach has the ability to deal with imbalanced distributions and unexpected noise in the data. To handle these situations, the best option is considered to be using the minority class as the target concept. This is reinforced by the idea of obtaining a classifier able to adjust itself to the specificity of the given class, despite sacrificing the additional information about the second class. Although this consideration appears in different studies, there are no thorough studies that prove it experimentally. In this paper, we investigate the suitability of employing the minority class as the target concept in one-class classification to handle the class imbalance problem. A study over several microarray data sets is included. The results confirm that the use of the minority class allows us to obtain better performance in one-class classification.
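
The experimental setup can be sketched as follows: train a one-class model on the minority class only, then classify test points by whether they fall inside the learned target concept. scikit-learn's OneClassSVM stands in here for whichever one-class learner is used, and binary labels in {0, 1} are assumed.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def minority_one_class(X_train, y_train, X_test, minority_label=1):
    X_min = X_train[y_train == minority_label]        # target concept: minority class only
    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_min)
    inside = clf.predict(X_test) == 1                 # +1 means "belongs to the target class"
    return np.where(inside, minority_label, 1 - minority_label)
```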


Expert Systems With Applications | 2013

A comparative study of the scalability of a sensitivity-based learning algorithm for artificial neural networks

Diego Peteiro-Barral; Bertha Guijarro-Berdiñas; Beatriz Pérez-Sánchez; Oscar Fontenla-Romero

Until recently, the most common criterion in machine learning for evaluating the performance of algorithms was accuracy. However, the unrestrainable growth of the volume of data in recent years in fields such as bioinformatics, intrusion detection or engineering has raised new challenges in machine learning regarding not only accuracy but also scalability. In this research, we are concerned with the scalability of one of the most well-known paradigms in machine learning, artificial neural networks (ANNs), particularly with the training algorithm Sensitivity-Based Linear Learning Method (SBLLM). SBLLM is a learning method for two-layer feedforward ANNs based on sensitivity analysis that calculates the weights by solving a linear system of equations. The results show that SBLLM performs better in terms of scalability than five of the most popular and efficient training algorithms for ANNs.
