Publication


Featured research published by David A. Elizondo.


Decision Support Systems | 2008

Bankruptcy forecasting: An empirical comparison of AdaBoost and neural networks

Esteban Alfaro; Noelia García; Matías Gámez; David A. Elizondo

The goal of this study is to present an alternative method for corporate failure prediction. In recent decades, artificial neural networks have been widely used for this task. These models have the advantage of being able to detect non-linear relationships and perform well in the presence of noisy information, as is usually the case in corporate failure prediction problems. AdaBoost is a novel ensemble learning algorithm that constructs its base classifiers in sequence using different versions of the training data set. In this paper, we compare the prediction accuracy of both techniques on a set of European firms, considering the usual predictive variables such as financial ratios, as well as qualitative variables such as firm size, activity, and legal structure. We show that our approach decreases the generalization error by about thirty percent with respect to the error produced with a neural network.
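
The comparison described above can be illustrated with off-the-shelf implementations. The sketch below is a minimal, hypothetical setup using scikit-learn: synthetic data stands in for the European-firm dataset (financial ratios plus qualitative variables), which is not reproduced here, and the hyperparameters are illustrative rather than the ones used in the paper.

```python
# Hedged sketch: compare AdaBoost and a small neural network on placeholder data.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for firm-level predictors (financial ratios, size, activity, ...).
X, y = make_classification(n_samples=1000, n_features=16, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

models = {
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
    "Neural network": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                                    random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    error = 1.0 - accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: generalization error = {error:.3f}")
```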


Agricultural and Forest Meteorology | 1994

Development of a neural network model to predict daily solar radiation

David A. Elizondo; Gerrit Hoogenboom; Ronald W. McClendon

Many computer simulation models which predict growth, development, and yield of agronomic and horticultural crops require daily weather data as input. One of these inputs is daily total solar radiation, which in many cases is not available owing to the high cost and complexity of the instrumentation needed to record it. The aim of this study was to develop a neural network model which can predict solar radiation as a function of readily available weather data and other environmental variables. Four sites in the southeastern USA, i.e. Tifton, GA, Clayton, NC, Gainesville, FL, and Quincy, FL, were selected because of the existence of long-term daily weather data sets which included solar radiation. A combined total of 23 complete years of weather data sets were available, and these data sets were separated into 11 years for the training data set and 12 years for the testing data set. Daily observed values of minimum and maximum air temperature and precipitation, together with daily calculated values for daylength and clear sky radiation, were used as inputs for the neural network model. Daylength and clear sky radiation were calculated as a function of latitude, day of year, solar angle, and solar constant. Optimum values for the momentum, learning rate, and number of hidden nodes were determined for further use in the development of the neural network model. After model development, the neural network model was tested against the independent data set. Root mean square error varied from 2.92 to 3.64 MJ m−2 and the coefficient of determination varied from 0.52 to 0.74 for the individual years used to test the accuracy of the model. Although this neural network model was developed and tested for a limited number of sites, the results suggest that it can be used to estimate daily solar radiation when measurements of only daily maximum and minimum air temperature and precipitation are available.
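
As an illustration of the kind of model described above, the sketch below trains a small feedforward regressor on synthetic weather-like inputs. The variable list mirrors the paper's inputs (maximum and minimum air temperature, precipitation, daylength, clear-sky radiation), but the data, network size, and train/test split are placeholders, not the study's.

```python
# Hedged sketch: a small neural network mapping weather variables to daily solar radiation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(20, 40, n),   # daily maximum air temperature (deg C)
    rng.uniform(5, 25, n),    # daily minimum air temperature (deg C)
    rng.exponential(3, n),    # daily precipitation (mm)
    rng.uniform(10, 14, n),   # daylength (h)
    rng.uniform(20, 32, n),   # clear-sky radiation (MJ m^-2)
])
# Synthetic target loosely tied to the inputs, for illustration only.
y = 0.6 * X[:, 4] - 0.2 * X[:, 2] + rng.normal(0, 2, n)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:1400], y[:1400])                       # "training years"
rmse = mean_squared_error(y[1400:], model.predict(X[1400:])) ** 0.5
print(f"RMSE on held-out data: {rmse:.2f} MJ m^-2")
```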


Transactions of the ASABE | 1994

Neural Network Models for Predicting Flowering and Physiological Maturity of Soybean

David A. Elizondo; Ronald W. McClendon; Gerrit Hoogenboom

It is important for farmers to know when various plant development stages occur in order to make appropriate and timely crop management decisions. Although computer simulation models have been developed to simulate plant growth and development, these models have not always been very accurate in predicting plant development for a wide range of environmental conditions. The objective of this study was to develop a neural network model to predict flowering and physiological maturity for soybean (Glycine max (L.) Merr.). An artificial neural network is a computer software system consisting of various simple and highly interconnected processing elements, similar to the neuron structure found in the human brain. A neural network model was used because it has the capability to identify relationships between variables in rather large and complex databases. For this study, field-observed flowering dates for the cultivar ‘Bragg’ from experimental studies conducted in Gainesville and Quincy, Florida, and Clayton, North Carolina, were used. Inputs considered for the neural network model were daily maximum and minimum air temperature, photoperiod, and days after planting or days after flowering. The data sets were split into training sets to develop the models and independent data sets to test the models. The average relative error of the test data sets was +0.143 days (n = 21, R2 = 0.987) for the date of flowering prediction and +2.19 days (n = 21, R2 = 0.950) for the date of physiological maturity prediction. It can be concluded from this study that the use of neural network models to predict flowering and physiological maturity dates is promising and needs to be explored further.


IEEE Transactions on Neural Networks | 2006

The linear separability problem: some testing methods

David A. Elizondo

The notion of linear separability is used widely in machine learning research. Learning algorithms that use this concept include neural networks (the single layer perceptron and the recursive deterministic perceptron) and kernel machines (support vector machines). This paper presents an overview of several of the methods for testing linear separability between two classes. The methods are divided into four groups: those based on linear programming, those based on computational geometry, one based on neural networks, and one based on quadratic programming. The Fisher linear discriminant method is also presented. A section on the quantification of the complexity of classification problems is included.
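
One of the surveyed approaches, testing linear separability with linear programming, can be sketched concisely: two classes are linearly separable exactly when a hyperplane with unit margin exists, which is a linear feasibility problem. The function below is an illustrative implementation of that idea with SciPy, not code from the paper; the other method families (computational geometry, neural networks, quadratic programming) are not shown.

```python
# Hedged sketch: linear-programming test for linear separability of two classes.
import numpy as np
from scipy.optimize import linprog

def linearly_separable(A, B):
    """A, B: arrays of shape (n_a, d) and (n_b, d). True iff a separating hyperplane exists."""
    d = A.shape[1]
    # Variables: [w_1..w_d, b]. We require w.x + b >= 1 on A and w.x + b <= -1 on B,
    # written as A_ub @ x <= b_ub for the LP feasibility problem (zero objective).
    A_ub = np.vstack([
        np.hstack([-A, -np.ones((len(A), 1))]),   # -(x.w + b) <= -1 for class A
        np.hstack([ B,  np.ones((len(B), 1))]),   #   x.w + b  <= -1 for class B
    ])
    b_ub = -np.ones(len(A) + len(B))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success

# Two clearly separated 2-D clusters vs. an XOR-like configuration.
A = np.array([[0.0, 0.0], [0.0, 1.0]])
B = np.array([[2.0, 0.0], [2.0, 1.0]])
print(linearly_separable(A, B))                        # True
print(linearly_separable(np.array([[0, 0], [1, 1]]),
                         np.array([[0, 1], [1, 0]])))  # False (XOR)
```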


International Journal of Neural Systems | 1997

A Survey of Partially Connected Neural Networks

David A. Elizondo; Emile Fiesler

Almost all artificial neural networks are by default fully connected, which often implies high redundancy and complexity. Little research has been devoted to the study of partially connected neural networks, despite their potential advantages, such as reduced training and recall time, improved generalization capabilities, and reduced hardware requirements, as well as being a step closer to biological reality. This publication presents an extensive survey of the various kinds of partially connected neural networks, clustered into a clear framework, followed by a detailed comparative discussion.
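
The basic idea of partial connectivity can be sketched as a dense layer whose weight matrix is multiplied by a fixed binary mask, so only some of the possible input-to-output connections exist. The example below is a minimal NumPy illustration with a random mask; the survey discusses many principled ways of choosing which connections to keep.

```python
# Hedged sketch: a partially connected layer implemented as a masked dense layer.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, density = 8, 4, 0.3          # keep ~30% of possible connections

W = rng.normal(size=(n_in, n_out))
mask = (rng.random((n_in, n_out)) < density).astype(float)
b = np.zeros(n_out)

def partially_connected_forward(x):
    """Forward pass: only the connections where mask == 1 contribute."""
    return np.tanh(x @ (W * mask) + b)

x = rng.normal(size=(1, n_in))
print(partially_connected_forward(x))
print(f"Connections used: {int(mask.sum())} of {n_in * n_out}")
```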


Expert Systems With Applications | 2013

Assessment of geometric features for individual identification and verification in biometric hand systems

Rafael Marcos Luque-Baena; David A. Elizondo; Ezequiel López-Rubio; Esteban J. Palomo; Tim Watson

This paper studies the reliability of geometric features for the identification of users based on hand biometrics. Our methodology is based on genetic algorithms and mutual information. The aim is to provide a system for user identification rather than classification. Additionally, a robust hand segmentation method to extract the hand silhouette and a set of geometric features in hard and complex environments is described. This paper focuses on studying how important and discriminating the hand geometric features are, and whether they are suitable for developing a robust and reliable biometric identification system. Several public databases have been used to test our method. As a result, the number of required features has been drastically reduced from datasets with more than 400 features. In fact, good classification rates are achieved with about 50 features on average, with 100% accuracy using the GA-LDA strategy for the GPDS database and approximately 97% for the CASIA and IITD databases. For these latter contactless databases, reasonable EER rates are also obtained.
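
The mutual-information side of this methodology can be sketched with scikit-learn: features are ranked by their mutual information with the identity label and only the top ones are kept. The snippet below is illustrative only; it uses synthetic data in place of the hand-geometry databases and omits the genetic-algorithm and GA-LDA search described in the paper.

```python
# Hedged sketch: mutual-information ranking to shrink a large geometric-feature set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Placeholder for a hand-geometry dataset: many geometric features per sample.
X, y = make_classification(n_samples=500, n_features=400, n_informative=40,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=50)
X_reduced = selector.fit_transform(X, y)

print(X.shape, "->", X_reduced.shape)          # (500, 400) -> (500, 50)
top = np.argsort(selector.scores_)[::-1][:5]
print("Most informative feature indices:", top)
```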


TAEBC-2009 | 2009

Constructive Neural Networks

Leonardo Franco; David A. Elizondo; José M. Jerez

This book is a collection of invited papers on constructive methods for neural networks. Most of the chapters are extended versions of work presented at the special session on constructive neural network algorithms of the 18th International Conference on Artificial Neural Networks (ICANN 2008), held September 3-6, 2008, in Prague, Czech Republic. The book is devoted to constructive neural networks and other incremental learning algorithms that constitute an alternative to standard trial-and-error methods for finding adequate architectures. It consists of 15 articles which provide an overview of the most recent advances in the techniques being developed for constructive neural networks and their applications. It will be of interest to researchers in industry and academia, and to postgraduate students interested in the latest advances and developments in the field of artificial neural networks.


Neural Networks | 1998

The recursive deterministic perceptron neural network

Mohamed Tajine; David A. Elizondo

We introduce a feedforward multilayer neural network which is a generalization of the single layer perceptron topology (SLPT), called the recursive deterministic perceptron (RDP). This new model is capable of solving any two-class classification problem, as opposed to the single layer perceptron, which can only solve classification problems dealing with linearly separable (LS) sets (two subsets X and Y of R^d are said to be linearly separable if there exists a hyperplane such that the elements of X and Y lie on the two opposite sides of R^d delimited by this hyperplane). We propose several growing methods for constructing an RDP. These growing methods build an RDP by successively adding intermediate neurons (INs) to the topology (an IN corresponds to an SLPT). Thus, as a result, we obtain a multilayer perceptron topology which, together with the weights, is determined automatically by the construction algorithms. Each IN augments the affine dimension of the set of input vectors. This augmentation is done by adding the output of each of these INs, as a new component, to every input vector. A new IN is constructed by selecting a subset of the set of augmented input vectors which is LS from the rest of this set. This process ends with LS classes in at most n-1 steps, where n is the number of input vectors. For this construction, if we assume that the selected LS subsets are of maximum cardinality, the problem is proven to be NP-complete. We also introduce a generalization of the RDP model for classification into m classes (m > 2) which always allows the m classes to be separated. This generalization is based on a new notion of linear separability for m classes and follows naturally from the RDP. This new model can be used to compute functions with a finite domain, and thus to approximate continuous functions. We have also compared, over several classification problems, the percentage of test data correctly classified and the topology of the two-class and m-class RDPs with those of backpropagation (BP), cascade correlation (CC), and two other growing methods.
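
The core construction step, training an intermediate neuron and appending its output as a new component of every input vector, can be sketched in a few lines. The code below is an illustrative reading of that step, not the authors' implementation: the subset-selection strategy and termination logic of the full RDP algorithm are omitted, and a hand-written perceptron stands in for the SLPT.

```python
# Hedged sketch: one RDP-style augmentation step on the XOR problem.
import numpy as np

def train_slpt(X, y, max_epochs=100):
    """Classic perceptron training; y in {0, 1}. Converges when the data is LS."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            if pred != yi:
                w += (yi - pred) * xi
                b += (yi - pred)
                errors += 1
        if errors == 0:
            break
    return w, b

def add_intermediate_neuron(X, subset_idx):
    """Train an SLPT to separate a chosen subset (which must be LS from the rest)
    and append its output to every input vector as a new component."""
    y = np.zeros(len(X))
    y[subset_idx] = 1
    w, b = train_slpt(X, y)
    out = (X @ w + b > 0).astype(float).reshape(-1, 1)   # output of the IN on all points
    return np.hstack([X, out])

# XOR is not linearly separable in 2-D; an IN trained to pick out the point (0, 0)
# yields a 3-D representation in which the two XOR classes become separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(add_intermediate_neuron(X, subset_idx=[0]))
```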


Constructive Neural Networks | 2009

Constructive Neural Network Algorithms for Feedforward Architectures Suitable for Classification Tasks

Maria do Carmo Nicoletti; João Roberto Bertini; David A. Elizondo; Leonardo Franco; José M. Jerez

This chapter presents and discusses several well-known constructive neural network algorithms suitable for constructing feedforward architectures aimed at classification tasks involving two classes. The algorithms are divided into two groups: those directed by the minimization of classification errors and those based on a sequential model. Although the focus is on two-class classification algorithms, the chapter also briefly comments on the multiclass versions of several two-class algorithms, highlights some of the most popular constructive algorithms for regression problems, and refers to several other alternative algorithms.


Neural Networks | 2012

2012 Special Issue: Application of growing hierarchical SOM for visualisation of network forensics traffic data

Esteban J. Palomo; John North; David A. Elizondo; Rafael Marcos Luque; Tim Watson

Digital investigation methods are becoming more and more important due to the proliferation of digital crimes and crimes involving digital evidence. Network forensics is a research area that gathers evidence by collecting and analysing network traffic data logs. This analysis can be a difficult process, especially because of the high variability of these attacks and the large amounts of data involved. Therefore, software tools that can help with these digital investigations are in great demand. In this paper, a novel approach to analysing and visualising network traffic data based on growing hierarchical self-organising maps (GHSOM) is presented. The self-organising map (SOM) has been shown to be successful for the analysis of high-dimensional input data in data mining applications, as well as for data visualisation in a more intuitive and understandable manner. However, the SOM has some problems related to its static topology and its inability to represent hierarchical relationships in the input data. The GHSOM tries to overcome these limitations by generating a hierarchical architecture that is automatically determined according to the input data and reflects the inherent hierarchical relationships among them. Moreover, the proposed GHSOM has been modified to correctly treat the qualitative features that are present in the traffic data in addition to the quantitative features. Experimental results show that this approach can be very useful for a better understanding of network traffic data, making it easier to search for evidence of attacks or anomalous behaviour in a network environment.
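
To make the underlying mapping concrete, the sketch below implements a minimal flat self-organising map in NumPy and bins records by their best-matching unit. It is only an illustration of the SOM principle: the GHSOM used in the paper additionally grows each map, adds hierarchy levels automatically, and handles qualitative features, none of which is reproduced here, and random vectors stand in for the traffic feature records.

```python
# Hedged sketch: a minimal flat SOM trained on placeholder "traffic" vectors.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 8))                 # placeholder for traffic feature vectors

rows, cols, dim = 6, 6, data.shape[1]
weights = rng.random((rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 2000, 0.5, 3.0
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    # Best-matching unit: the grid cell whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Decaying learning rate and neighbourhood radius.
    lr = lr0 * np.exp(-t / n_iter)
    sigma = sigma0 * np.exp(-t / n_iter)
    # Gaussian neighbourhood around the BMU on the 2-D grid.
    grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

# Map each record to its BMU; cell occupancy gives a simple visualisation.
hits = np.zeros((rows, cols), dtype=int)
for x in data:
    bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)), (rows, cols))
    hits[bmu] += 1
print(hits)
```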
