
Publication


Featured research published by Beatriz Lacruz.


Technometrics | 2001

Some Applications of Functional Networks in Statistics and Engineering

Enrique Castillo; José Manuel Gutiérrez; Ali S. Hadi; Beatriz Lacruz

Functional networks are a general framework useful for solving a wide range of problems in probability, statistics, and engineering applications. In this article, we demonstrate that functional networks can be used for many general purposes including (a) solving nonlinear regression problems without the rather strong assumption of a known functional form, (b) modeling chaotic time series data, (c) finding conjugate families of distribution functions needed for the applications of Bayesian statistical techniques, (d) analyzing the problem of stability with respect to maxima operations, which are useful in the theory and applications of extreme values, and (e) modeling the reproductivity and associativity laws that have many applications in applied probability. We also give two specific engineering applications—analyzing the Ikeda map with parameters leading to chaotic behavior and modeling beam stress subject to a given load. The main purpose of this article is to introduce functional networks and to show their power and usefulness in engineering and statistical applications. We describe the steps involved in working with functional networks including structural learning (specification and simplification of the initial topology), parametric learning, and model-selection procedures. The concepts and methodologies are illustrated using several examples of applications.


Computational Statistics & Data Analysis | 2008

Semi-parametric nonlinear regression and transformation using functional networks

Enrique Castillo; Ali S. Hadi; Beatriz Lacruz; Rosa Eva Pruneda

Functional networks are used to solve some nonlinear regression problems. One particular problem is how to find the optimal transformations of the response and/or the explanatory variables and obtain the best possible functional relation between the response and predictor variables. After a brief introduction to functional networks, two specific transformation models based on functional networks are proposed. Unlike in neural networks, where the selection of the network topology is arbitrary, the selection of the initial topology of a functional network is problem driven. This important feature of functional networks is illustrated for each of the two proposed models. An equivalent but simpler network may be obtained from the initial topology using functional equations. The resulting model is then checked for uniqueness of representation. When the functions specified by the transformations are unknown in form, families of linearly independent functions are used as approximations. Two different parametric criteria are used for learning these functions: constrained least squares and maximum canonical correlation. Model selection criteria are used to avoid the problem of overfitting. Finally, the performance of the proposed method is assessed and compared with other methods using a simulation study as well as several real-life data sets.
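The constrained-least-squares idea in the abstract can be illustrated with a toy sketch (a minimal illustrative example, not the paper's actual models): each unknown transformation is written as a linear combination of a small family of linearly independent functions, one coefficient is pinned to rule out the trivial all-zero solution, and the remaining coefficients are found by ordinary least squares. All basis choices below are assumptions made for the example.

```python
import numpy as np

# Synthetic data where the true relation is sqrt(y) = 1 + 2*x.
x = np.linspace(0.0, 2.0, 100)
y = (1.0 + 2.0 * x) ** 2

# Approximate the unknown transformations with small families of
# linearly independent functions:
#   g(y) = a1*y + a2*sqrt(y)   (response transformation)
#   f(x) = b0 + b1*x           (predictor side)
# and fit g(y) = f(x) by constrained least squares, pinning a2 = 1
# to avoid the trivial all-zero solution. Moving the pinned term to
# the right-hand side turns this into an ordinary least-squares problem.
A = np.column_stack([y, -np.ones_like(x), -x])  # unknowns: a1, b0, b1
rhs = -np.sqrt(y)                               # the pinned a2*sqrt(y) term
a1, b0, b1 = np.linalg.lstsq(A, rhs, rcond=None)[0]
```

On this noise-free data the fit recovers the generating transformation: a1 close to 0, b0 close to 1, and b1 close to 2.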


International Work-Conference on Artificial and Natural Neural Networks | 2001

Optimal Transformations in Multiple Linear Regression Using Functional Networks

Enrique Castillo; Ali S. Hadi; Beatriz Lacruz

Functional networks are used to determine the optimal transformations to be applied to the response and the predictor variables in linear regression. The main steps required to build the functional network (selection of the initial topology, simplification of the initial functional network, uniqueness of representation, and learning of the parameters) are discussed and illustrated with some examples.


Neurocomputing | 2013

A multi-objective micro genetic ELM algorithm

David Lahoz; Beatriz Lacruz; Pedro M. Mateo

The extreme learning machine (ELM) is a methodology for learning single-hidden-layer feedforward neural networks (SLFNs) which has been shown to be extremely fast and to provide very good generalization performance. ELM works by randomly choosing the weights and biases of the hidden nodes and then analytically obtaining the output weights and biases for an SLFN with a previously fixed number of hidden nodes. In this work, we develop a multi-objective micro genetic ELM (μG-ELM) which provides the appropriate number of hidden nodes for the problem being solved, as well as the weights and biases which minimize the MSE. The multi-objective algorithm is guided by two criteria: the number of hidden nodes and the mean square error (MSE). Furthermore, as a novelty, μG-ELM incorporates a regression device in order to decide whether the number of hidden nodes of the individuals of the population should be increased, decreased, or left unchanged. In general, the proposed algorithm achieves better errors while also requiring a smaller number of hidden nodes for the data sets and competitors considered.
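The basic ELM recipe described above (random hidden weights, analytic output weights) can be sketched in a few lines. This is a minimal illustration of plain ELM with a fixed number of hidden nodes, not the authors' μG-ELM; the function names and parameter choices are assumptions for the example.

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=None):
    """Basic ELM: hidden weights/biases are random, output weights are
    obtained analytically via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ y                             # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Fit a smooth 1-D function with 30 hidden nodes.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
W, b, beta = elm_train(X, y, n_hidden=30, seed=0)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

The entire "training" step is a single least-squares solve, which is what makes ELM so fast compared with gradient-based training.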


IEEE Workshop on Hybrid Intelligent Models and Applications | 2011

A bi-objective micro genetic Extreme Learning Machine

David Lahoz; Beatriz Lacruz; Pedro M. Mateo

The Extreme Learning Machine (ELM) is a recent algorithm for training single-hidden-layer feedforward neural networks (SLFNs) which has shown promising results when compared with other common tools. ELM randomly chooses the weights and biases of the hidden nodes and analytically obtains the output weights and biases. It constitutes a very fast algorithm with good generalization performance in most cases. Since the original ELM was presented, several papers have been published using similar ideas: EI-ELM, OP-ELM, OS-ELM, EM-ELM, etc. In this paper, we present a bi-objective micro genetic ELM (μG-ELM). This algorithm, instead of considering random hidden weights and biases, generates them by means of a micro genetic algorithm. The search is guided by two objectives: the number of hidden nodes and the mean square error (MSE). Furthermore, as a novelty, μG-ELM incorporates a regression model in order to decide whether the number of hidden nodes should be increased or decreased. The proposed algorithm reaches similar errors but, in general, involves a smaller number of hidden nodes, while maintaining competitive execution time.


Communications in Statistics-theory and Methods | 2001

Regression Diagnostics for the Least Absolute Deviations and the Minimax Methods

Enrique Castillo; Ali S. Hadi; Beatriz Lacruz

The literature on the detection of outliers and influential observations for the standard least squares regression model is abundant. By contrast, very little work has been done on regression diagnostics for the least absolute deviations (LAD) and the minimax (MM) methods. In this paper we propose regression diagnostics for the LAD and MM methods. Using the Exterior Point method, recently introduced by [1], we show how to compute diagnostic measures from the final solution of the corresponding linear programming problem without having to redo the calculations from scratch. This provides analysts with tools for assessing the influence of observations on the LAD and MM estimates. Both the Exterior Point method and the computation of the proposed diagnostic measures are illustrated with numerical and real-life examples.
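The LAD estimator itself can be posed as the linear program alluded to above: minimizing the sum of absolute residuals is equivalent to minimizing a sum of auxiliary variables that bound each residual from both sides. The sketch below uses a generic LP formulation solved with SciPy's HiGHS backend, not the Exterior Point method of the paper; the data are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least absolute deviations regression as a linear program:
    minimize sum_i t_i  subject to  -t_i <= y_i - x_i'beta <= t_i.
    Decision vector is [beta (p entries, free), t (n entries, >= 0)]."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])
    # X beta - t <= y   and   -X beta - t <= -y
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-2, 2, 50)])
y = X @ np.array([1.0, 2.0]) + rng.laplace(0.0, 0.1, 50)
beta_hat = lad_fit(X, y)
```

Because the estimate is the solution of an LP, diagnostics of the kind proposed in the paper can reuse the final LP solution rather than refitting from scratch after deleting an observation.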


international conference on artificial intelligence and statistics | 1996

Modeling and Monitoring Dynamic Systems by Chain Graphs

Alberto Lekuona; Beatriz Lacruz; Pilar Lasala

It is widely recognized that probabilistic graphical models provide a good framework for both knowledge representation and probabilistic inference (e.g., see [Cheeseman94], [Whittaker90]). The dynamic behaviour of a system which changes over time requires an implicit or explicit representation of time. In this paper, an implicit time representation using dynamic graphical models is proposed. Our goal is to model the state of a system and its evolution over time in a richer and more natural way than other approaches, together with a more suitable treatment of inference on the variables of interest.


Archive | 2008

Generalized Inverse Computation Based on an Orthogonal Decomposition Methodology

Patricia Gómez; Beatriz Lacruz; Rosa Eva Pruneda

The need to compute the generalized inverse of a matrix appears in several statistical, mathematical and engineering problems, such as the estimation of linear classification and regression functions, electrical circuit estimation, structural calculations, etc. In this paper, we propose applying an orthogonal decomposition methodology to compute a weak generalized inverse, based on the computation of a non-singular submatrix of the given matrix. Special attention is focused on updating the generalized inverse when some of the elements of the original matrix are modified. The proposed method allows us to perform this updating without restarting the process from scratch. The proposed procedures are illustrated with some examples and with an application to the estimation of linear regression coefficients in the presence of multicollinearity.
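A weak generalized inverse G of a matrix A is any matrix satisfying A G A = A; it exists even when A is singular. The snippet below is a generic illustration using NumPy's SVD-based Moore-Penrose pseudoinverse, not the paper's orthogonal-decomposition construction or its updating scheme; the example matrix is an assumption chosen to be rank-deficient.

```python
import numpy as np

# A rank-deficient matrix: the third column is the sum of the first two,
# so an ordinary inverse does not exist.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 3.0],
              [3.0, 3.0, 6.0]])
G = np.linalg.pinv(A)  # Moore-Penrose generalized inverse via SVD

# Defining property of a generalized inverse: A G A = A.
ok = np.allclose(A @ G @ A, A)

# Typical statistical use: solve A x = b in the minimum-norm
# least-squares sense (here b lies in the column space of A,
# so x is an exact solution).
b = np.array([1.0, 2.0, 3.0])
x = G @ b
```

This is exactly the device used to estimate regression coefficients under multicollinearity, where the normal-equations matrix is singular or nearly so.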


Statistics & Probability Letters | 2000

Dynamic graphical models and nonhomogeneous hidden Markov models

Beatriz Lacruz; Pilar Lasala; Alberto Lekuona

We propose a dynamic graphical model which generalizes nonhomogeneous hidden Markov models. Inference and forecast procedures are developed. A comparison with an exact propagation algorithm is established and equivalence is stated.


Neurocomputing | 2016

µG2-ELM

Beatriz Lacruz; David Lahoz; Pedro M. Mateo

µG-ELM is a multiobjective evolutionary algorithm which searches for the most accurate (in terms of the MSE) and most compact artificial neural network using the ELM methodology. In this work we present µG2-ELM, an upgraded version of µG-ELM previously presented by the authors. The upgrade is based on three key elements: a specifically designed approach for initializing the weights of the initial artificial neural networks, the introduction of a re-sowing process when selecting the population to be evolved, and a change in the process used to modify the weights of the artificial neural networks. To test our proposal we consider several state-of-the-art Extreme Learning Machine (ELM) algorithms and compare against them on a wide and well-known set of continuous regression and classification problems. The experiments show that µG2-ELM achieves better general performance than both the previous version and the other competitors. Therefore, the combination of evolutionary algorithms with the ELM methodology appears to be a promising subject of study, since together they allow for the design of better training algorithms for artificial neural networks.

Collaboration


Dive into Beatriz Lacruz's collaboration.

Top Co-Authors
Ali S. Hadi

American University in Cairo


David Lahoz

University of Zaragoza


José Manuel Gutiérrez

Spanish National Research Council
