Javier Apolloni
National University of San Luis
Publications
Featured research published by Javier Apolloni.
Applied Soft Computing | 2016
Javier Apolloni; Guillermo Leguizamón; Enrique Alba
Highlights:
- We propose two new, simple, and efficient hybrid feature selection techniques.
- We use a feature-based ranking to initialize the Binary Differential Evolution.
- We also propose a new fitness function influenced by the features in the population.
- Several statistical tests show the robustness and effectiveness of the proposals.
- The reduction of the original feature set size is larger than 99%.

Microarray experiments generally deal with complex and high-dimensional samples, and in addition, the number of samples is much smaller than their dimensionality. Both issues can be alleviated by using a feature selection (FS) method. In this paper two new, simple, and efficient hybrid FS algorithms, called respectively BDE-XRank and BDE-XRankf, are presented. Both algorithms combine a wrapper FS method based on a Binary Differential Evolution (BDE) algorithm with a rank-based filter FS method. In addition, they generate the initial population with solutions involving only a small number of features. Some initial solutions are built considering only the most relevant features according to the filter method, and the remaining ones include only random features (to promote diversity). In BDE-XRankf, a new fitness function is incorporated, in which the score of a solution is influenced by the frequency of its features in the current population. The robustness of BDE-XRank and BDE-XRankf is shown by using four Machine Learning (ML) algorithms (NB, SVM, C4.5, and kNN). Six well-known high-dimensional data sets of microarray experiments are used to carry out an extensive experimental study based on statistical tests. This analysis shows the robustness of both proposals, as well as their ability to obtain highly accurate solutions at the early stages of the BDE evolutionary process.
Finally, BDE-XRank and BDE-XRankf are also compared against the results of nine state-of-the-art algorithms to highlight their competitiveness and their ability to successfully reduce the original feature set size by more than 99%.
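The rank-seeded initialization described in the highlights can be sketched as follows. This is a minimal illustration, not the authors' code: the population size, the fraction of rank-seeded individuals, the `top_k` pool, and the subset size of five features are all arbitrary choices, and `ranking` stands in for any filter-based relevance ordering of the features.

```python
import random

def seeded_population(n_features, pop_size, ranking, top_k=30, ranked_fraction=0.5):
    """Build an initial binary population for a BDE feature selector:
    some individuals activate only top-ranked features (filter seeding),
    the rest activate a few random features (to promote diversity)."""
    pop = []
    for i in range(pop_size):
        bits = [0] * n_features
        if i < int(pop_size * ranked_fraction):
            # seed from the filter ranking: pick among the top_k features
            chosen = random.sample(ranking[:top_k], k=min(5, top_k))
        else:
            # small random subset to keep the population diverse
            chosen = random.sample(range(n_features), k=5)
        for f in chosen:
            bits[f] = 1
        pop.append(bits)
    return pop

# toy usage: 100 features, ranking = indices sorted by a (hypothetical) filter score
ranking = list(range(100))
pop = seeded_population(100, 20, ranking)
```

Every individual starts with only five active features, which matches the idea of beginning the search from very small subsets.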
International Conference on Hybrid Intelligent Systems | 2008
Javier Apolloni; Guillermo Leguizamón; José García-Nieto; Enrique Alba
This paper presents a new distributed differential evolution (dDE) algorithm and evaluates it according to the standard procedure set in the special session on continuous optimization of CEC'05. We statistically validate our results in continuous optimization against several other efficient techniques. Our distributed differential evolution is simple and accurate, and at the same time amenable to a wide variety of problems, especially noisy and multimodal functions.
Applied Mathematics and Computation | 2014
Javier Apolloni; José García-Nieto; Enrique Alba; Guillermo Leguizamón
This paper presents a new distributed Differential Evolution (dDE) algorithm and provides an exhaustive evaluation of it using two standard benchmarks. One was proposed in the special session on Real-Parameter Optimization of CEC'05, and the other in the special session on Large Scale Global Optimization of CEC'08. We statistically validate and compare our results against all other techniques presented in these special sessions. This means that more than 25 problems, with different dimensions (30, 50, 100, and 500 variables), are evaluated, and 15 algorithms are compared in the experiments. Our dDE is simple, accurate, and competitive when applied to a wide variety of problems with scaling dimensions and different function features: noisy, non-separable, multimodal, rotated, etc.
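An island-model dDE of the kind described in these two abstracts can be sketched as follows. This is a generic illustration under common assumptions (a ring topology, best-replaces-worst migration every ten generations, DE/rand/1/bin on each island, and the sphere function as a placeholder benchmark); the papers' actual topology, migration policy, and parameters may differ.

```python
import random

def sphere(x):
    """Placeholder objective: sum of squares, optimum at the origin."""
    return sum(v * v for v in x)

def de_step(pop, fitness, F=0.5, CR=0.9):
    """One generation of classic DE/rand/1/bin on one island's subpopulation."""
    n, dim = len(pop), len(pop[0])
    out = []
    for i in range(n):
        a, b, c = random.sample([j for j in range(n) if j != i], 3)
        j_rand = random.randrange(dim)
        trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                 if (random.random() < CR or j == j_rand) else pop[i][j]
                 for j in range(dim)]
        # greedy selection: keep the trial only if it is at least as good
        out.append(trial if fitness(trial) <= fitness(pop[i]) else pop[i])
    return out

def ring_migration(islands, fitness):
    """Each island sends its best individual to the next island in the ring,
    where it replaces that island's worst individual."""
    bests = [min(isl, key=fitness) for isl in islands]
    for k, isl in enumerate(islands):
        worst = max(range(len(isl)), key=lambda i: fitness(isl[i]))
        isl[worst] = bests[(k - 1) % len(islands)][:]

random.seed(0)
# 4 islands, 20 individuals each, 10-dimensional search space in [-5, 5]
islands = [[[random.uniform(-5, 5) for _ in range(10)] for _ in range(20)]
           for _ in range(4)]
for gen in range(50):
    islands = [de_step(isl, sphere) for isl in islands]
    if gen % 10 == 9:                      # migrate every 10 generations
        ring_migration(islands, sphere)
best = min((min(isl, key=sphere) for isl in islands), key=sphere)
```

In a real distributed deployment each island would run in its own process, with migration implemented via message passing; here the islands are evolved sequentially for clarity.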
2nd International Symposium on Logistics and Industrial Informatics | 2009
José García-Nieto; Enrique Alba; Javier Apolloni
The efficient selection of predictive and accurate gene subsets for cell-type classification is nowadays a crucial problem in Microarray data analysis. The application and combination of dedicated computational intelligence methods holds great promise for tackling feature selection and classification. In this work we present a Differential Evolution (DE) approach for efficient automated gene subset selection. In this model, the selected subsets are evaluated by means of their classification rate using a Support Vector Machines (SVM) classifier. The proposed approach is tested on the DLBCL Lymphoma and Colon Tumor gene expression datasets. Experiments covering effectiveness and biological analyses of the results, in addition to comparisons with related methods in the literature, indicate that our DE-SVM model is highly reliable and competitive.

I. INTRODUCTION

DNA Microarrays (MA) (13) allow scientists to simultaneously analyze thousands of genes, thus giving important insights into cell function, since changes in the physiology of an organism are generally associated with changes in the expression patterns of gene ensembles. The vast amount of data involved in a typical Microarray experiment usually requires performing a complex statistical analysis, with the important goal of classifying the dataset into correct classes. The key issue in this classification is to identify significant and representative gene subsets that may be used to predict class membership for new external samples. Furthermore, these subsets should be as small as possible in order to develop fast and low-cost processes for future class prediction. The main difficulty in Microarray classification versus other domains is the availability of a relatively small number of samples in comparison with the number of genes in each sample (between 2,000 and more than 10,000 in MA).
In addition, expression data are highly redundant and noisy, and most genes are believed to be uninformative with respect to the studied classes, as only a fraction of genes may present distinct profiles for different classes of samples. In this context, machine learning techniques have been applied to handle large and heterogeneous datasets, since they are capable of isolating the useful information by rejecting redundancies. Concretely, feature selection is often considered a necessary preprocessing step in the analysis of large datasets, as it can reduce the dimensionality of the datasets and often leads to better analyses (9). Feature selection (gene selection in Biology) for gene expression analysis in cancer prediction often uses wrapper classification methods to discriminate a type of tumor (9), (11), to reduce the number of genes to investigate in the case of a new patient, and also to assist in drug discovery and early diagnosis. The formal definition of the feature selection problem that we consider here is given as follows:
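The wrapper evaluation at the core of the DE-SVM model, scoring a candidate gene subset by the classification rate it achieves, can be sketched as follows. To keep the sketch dependency-free, a leave-one-out nearest-centroid classifier stands in for the SVM, and the toy data and all names are illustrative rather than taken from the paper.

```python
import random

def subset_accuracy(X, y, features):
    """Leave-one-out accuracy of a nearest-centroid classifier restricted
    to the selected feature subset (a stand-in for the SVM wrapper score)."""
    if not features:
        return 0.0
    correct = 0
    for i in range(len(X)):
        # class centroids computed from all samples except the held-out one
        sums, counts = {}, {}
        for j, (row, label) in enumerate(zip(X, y)):
            if j == i:
                continue
            acc = sums.setdefault(label, [0.0] * len(features))
            for k, f in enumerate(features):
                acc[k] += row[f]
            counts[label] = counts.get(label, 0) + 1
        centroids = {lab: [v / counts[lab] for v in vec]
                     for lab, vec in sums.items()}
        # predict the class of the nearest centroid (squared distance)
        pred = min(centroids, key=lambda lab: sum(
            (X[i][f] - centroids[lab][k]) ** 2 for k, f in enumerate(features)))
        correct += pred == y[i]
    return correct / len(X)

# toy dataset: feature 0 separates the two classes, features 1-9 are noise
random.seed(1)
y = [i % 2 for i in range(20)]
X = [[(3.0 if label else -3.0) + random.gauss(0, 0.5)]
     + [random.gauss(0, 1.0) for _ in range(9)] for label in y]
acc_informative = subset_accuracy(X, y, [0])
```

In the wrapper, this score becomes the fitness of a DE individual encoding the subset, typically combined with a penalty on subset size so that smaller gene subsets are preferred at equal accuracy.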
Genetic and Evolutionary Computation Conference | 2009
José García-Nieto; Enrique Alba; Javier Apolloni
In this work we evaluate a Particle Swarm Optimizer hybridized with Differential Evolution and apply it to the Black-Box Optimization Benchmarking for noisy functions (BBOB 2009). We have performed the complete procedure established in this special session for noisy functions with dimensions of 2, 3, 5, 10, 20, and 40 variables. Our proposal obtained an accurate coverage rate, despite the simplicity of the model and the relatively small number of function evaluations used.
Genetic and Evolutionary Computation Conference | 2009
José García-Nieto; Enrique Alba; Javier Apolloni
In this work we evaluate a Particle Swarm Optimizer hybridized with Differential Evolution and apply it to the Black-Box Optimization Benchmarking for noiseless functions (BBOB 2009). We have performed the complete procedure established in this special session for noiseless functions with dimensions of 2, 3, 5, 10, 20, and 40 variables. Our proposal obtained an accurate coverage rate, despite the simplicity of the model and the relatively small number of function evaluations used.
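One common way to hybridize PSO with DE, consistent with the description in the two BBOB entries above, is to perturb each particle with a DE/rand/1 step over the personal bests and accept the trial only if it improves. The sketch below is a generic DEPSO variant under that assumption, not necessarily the authors' exact operator; the sphere function is a placeholder objective and all parameter values are conventional defaults.

```python
import random

def sphere(x):
    """Placeholder objective: sum of squares, optimum at the origin."""
    return sum(v * v for v in x)

def depso(fitness, dim=5, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, F=0.5, CR=0.9):
    """PSO in which each particle, after its velocity update, is also offered
    a DE/rand/1 trial built from the personal bests (greedy acceptance)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])   # index of the global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (pbest[g][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # DE-style perturbation over personal bests, accepted greedily
            a, b, c = random.sample(range(n), 3)
            trial = [pbest[a][d] + F * (pbest[b][d] - pbest[c][d])
                     if random.random() < CR else pos[i][d] for d in range(dim)]
            f_pos, f_trial = fitness(pos[i]), fitness(trial)
            if f_trial < f_pos:
                pos[i], f_pos = trial, f_trial
            if f_pos < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f_pos
                if f_pos < pbest_f[g]:
                    g = i
    return pbest[g], pbest_f[g]

random.seed(0)
best, best_f = depso(sphere)
```

The greedy DE acceptance keeps the hybrid from degrading the swarm, while the difference vectors add a mutation pressure that plain PSO lacks, which is especially useful on noisy and multimodal landscapes.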
XVIII Congreso Argentino de Ciencias de la Computación | 2012
Javier Apolloni; Guillermo Leguizamón; Enrique Alba Torres
X Congreso Argentino de Ciencias de la Computación | 2004
Carlos Kavka; Patricia Roggero; Javier Apolloni
VI Workshop de Investigadores en Ciencias de la Computación | 2004
Carlos Kavka; Patricia Roggero; Javier Apolloni
VIII Congreso Argentino de Ciencias de la Computación | 2002
Javier Apolloni; Carlos Kavka; Patricia Roggero