Héctor Fraire
Instituto Tecnológico de Ciudad Madero
Publication
Featured research published by Héctor Fraire.
international conference on computational science and its applications | 2007
Joaquín Pérez; Rodolfo Pazos; Laura Cruz; Gerardo Reyes; Rosy Basave; Héctor Fraire
Clustering problems arise in many different applications: machine learning, data mining, knowledge discovery, data compression, vector quantization, pattern recognition and pattern classification. One of the most popular and widely studied clustering methods is K-means. Several improvements to the standard K-means algorithm have been proposed, most of them related to the initial parameter values. In contrast, this article proposes an improvement based on a new convergence condition: execution stops when a local optimum is found or no more object exchanges among groups can be performed. To assess the improvement attained, the modified algorithm (Early Stop K-means) was tested on six databases from the UCI repository, and the results were compared against SPSS, Weka and the standard K-means algorithm. In the experiments, Early Stop K-means achieved significant reductions in the number of iterations and improvements in solution quality with respect to the other algorithms.
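The convergence condition described above can be sketched in a few lines: the loop halts as soon as a full pass over the data performs no cluster exchange. This is only an illustrative reconstruction of the idea, not the paper's implementation; the naive initialization (first k points) is an assumption made for reproducibility.

```python
def early_stop_kmeans(points, k, max_iters=100):
    """K-means with the early-stop convergence condition: halt as soon
    as no object changes cluster (a local optimum has been reached).
    Naive init (first k points) for determinism; real runs randomize."""
    centroids = [p for p in points[:k]]
    assign = [None] * len(points)
    for it in range(max_iters):
        changed = False
        for i, p in enumerate(points):
            # nearest centroid by squared Euclidean distance
            best = min(range(k), key=lambda c: sum((a - b) ** 2
                       for a, b in zip(p, centroids[c])))
            if best != assign[i]:
                assign[i] = best
                changed = True
        if not changed:  # early stop: no more exchanges among groups
            return assign, centroids, it
        # recompute each centroid as the mean of its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members)
                                     for xs in zip(*members))
    return assign, centroids, max_iters
```

On two well-separated blobs the loop typically converges after a single exchange-free pass, which is exactly the saving the abstract measures in iteration counts.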
international conference on computational science and its applications | 2004
Joaquín Pérez; Rodolfo Pazos; Juan Frausto; Guillermo Rodríguez; Laura Cruz; Graciela Mora; Héctor Fraire
In this paper, a new mechanism for automatically obtaining some control parameter values for Genetic Algorithms is presented, which is independent of problem domain and size. This approach differs from traditional methods, which require knowing first the problem domain and then how to select the parameter values for solving specific problem instances. The proposed method is based on a sample of problem instances, whose solution makes it possible to characterize the problem and to obtain the parameter values. To test the method, a combinatorial optimization model for data-object allocation in the Web (known as DFAR) was solved using Genetic Algorithms. We show how the proposed mechanism makes it possible to develop a set of mathematical expressions that relate the problem instance size to the control parameters of the algorithm. The experimental results show that self-tuning of the Genetic Algorithm's control parameter values for a given instance is possible, and that this mechanism yields satisfactory results in quality and execution time. We consider that the principles of the proposed method can be extended to the self-tuning of control parameters for other heuristic algorithms.
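The core of the mechanism, fitting mathematical expressions that map instance size to parameter values from a solved sample, can be illustrated with a simple least-squares fit. The log-linear form and the coefficients below are assumptions for illustration; the paper derives its own expressions for DFAR.

```python
import math

def fit_loglinear(samples):
    """Least-squares fit of p = a + b*ln(n) from (instance_size, best_param)
    tuning samples -- a generic sketch of deriving a size-to-parameter
    expression from a solved sample, not the paper's actual formulas."""
    xs = [math.log(n) for n, _ in samples]
    ys = [p for _, p in samples]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # (a, b)

def tuned_param(a, b, n):
    """Predict a control-parameter value for a new instance of size n."""
    return a + b * math.log(n)
```

Once fitted offline, `tuned_param` is cheap enough to call per instance, which is what makes domain-independent self-tuning practical.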
international conference on computational science and its applications | 2004
Joaquín Pérez; Rodolfo Pazos; Juan Frausto; Guillermo Rodríguez; Laura Cruz; Héctor Fraire
The traditional approach for comparing heuristic algorithms uses well-known statistical tests to relate the empirical performance of the algorithms meaningfully and concludes that one outperforms the other. In contrast, the method presented in this paper builds a predictive model of the algorithms' behavior using functions that relate performance to problem size, in order to define dominance regions. The method first generates a representative sample of the algorithms' performance, then determines performance functions through a common, simplified regression analysis, and finally incorporates these functions into an algorithm selection mechanism. For testing purposes, a set of same-class instances of the database distribution problem was solved using an exact algorithm (Branch & Bound) and a heuristic algorithm (Simulated Annealing). Experimental results show that problem size affects the two algorithms differently, such that there exist regions where one algorithm is more efficient than the other.
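The selection mechanism built on fitted performance functions reduces to evaluating each function at the new instance's size and picking the minimum; the crossover point between the functions is the dominance-region boundary. The function shapes and coefficients below are hypothetical, chosen only to mimic an exponential exact method versus a polynomial heuristic.

```python
def select_algorithm(perf_funcs, size):
    """Choose the algorithm whose fitted performance function predicts
    the lowest cost at this instance size (the dominance-region idea)."""
    return min(perf_funcs, key=lambda name: perf_funcs[name](size))

# Hypothetical fitted cost functions (illustrative coefficients only):
# the exact search grows exponentially, the heuristic polynomially.
perf = {
    "branch_and_bound": lambda n: 0.001 * 2 ** (n / 2),
    "simulated_annealing": lambda n: 5.0 * n ** 2,
}
```

Small instances fall in the exact algorithm's dominance region, large ones in the heuristic's, mirroring the regions the experiments identify.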
modeling decisions for artificial intelligence | 2004
Joaquín Pérez; Rodolfo Pazos; Juan Frausto; Laura Cruz; Héctor Fraire; Elizabeth Santiago; Norma Garcia
This paper deals with heuristic algorithm selection, which can be stated as follows: given a set of solved instances of an NP-hard problem, predict which algorithm will solve a new instance best. There are two main selection approaches for this problem. The first consists of developing functions that relate performance to problem size. The second incorporates more characteristics; however, these are defined neither formally nor systematically. In contrast, we propose a methodology to model algorithm performance predictors that incorporate critical characteristics. The relationship between performance and characteristics is learned from historical data using machine learning techniques. To validate our approach we carried out experiments using an extensive test set. In particular, for the classical bin packing problem, we developed predictors that incorporate the interrelation among five critical characteristics and the performance of seven heuristic algorithms. We obtained an accuracy of 81% in the selection of the best algorithm.
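Learning a selector from solved instances can be sketched with a nearest-neighbour stand-in: compute characteristics of a new bin packing instance and return the algorithm that performed best on the most similar historical instance. The feature set and the 1-NN rule are assumptions for illustration; the paper's five critical characteristics and learned predictors are not specified in this abstract.

```python
def features(items, capacity):
    """Hypothetical instance characteristics (stand-ins for the paper's
    five critical characteristics, which the abstract does not list)."""
    n = len(items)
    return (n, (sum(items) / n) / capacity, max(items) / capacity)

def select_1nn(history, items, capacity):
    """Return the algorithm that was best on the most similar solved
    instance; 1-nearest neighbour stands in for the learned predictors."""
    f = features(items, capacity)
    return min(history,
               key=lambda rec: sum((a - b) ** 2
                                   for a, b in zip(f, rec[0])))[1]
```

With richer features and a real learner (the paper's machine learning step), the same lookup structure yields the reported 81% selection accuracy.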
Recent Advances on Hybrid Approaches for Designing Intelligent Systems | 2014
Héctor Fraire; J. David Terán-Villanueva; Norberto Castillo García; Juan Javier González Barbosa; Eduardo Rodríguez del Angel; Yazmín Gómez Rojas
In this chapter we approach the vertex bisection problem (VB), which is relevant in the context of communication networks. A literature review shows that the reported exact methods are restricted to particular classes of graphs. As a first step toward solving the problem for general graphs using soft computing techniques, we propose two new integer linear programming models and a new branch and bound algorithm (B&B). For the first time, optimal solutions for an extensive set of standard instances were obtained.
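For tiny graphs the problem can be solved exhaustively, which is useful as a correctness reference for the exact methods the chapter proposes. The formulation assumed below (balance one side to n//2 vertices and minimise how many of its vertices have a neighbour on the other side) is one common statement of VB; this brute force is an illustration, not the chapter's B&B or ILP models.

```python
from itertools import combinations

def vertex_bisection_bruteforce(n, edges):
    """Exhaustive reference solver for tiny graphs only. Assumed VB
    formulation: choose B with |B| = n//2 and minimise the number of
    vertices of B that have at least one neighbour outside B."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best_cut, best_side = None, None
    for side in combinations(range(n), n // 2):
        B = set(side)
        cut = sum(1 for v in B if adj[v] - B)  # v sees the other side
        if best_cut is None or cut < best_cut:
            best_cut, best_side = cut, B
    return best_cut, best_side
```

The C(n, n//2) enumeration is hopeless beyond a few dozen vertices, which is precisely why exact B&B and ILP models, and eventually soft computing methods, are needed.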
brazilian symposium on artificial intelligence | 2004
Joaquín Pérez; Rodolfo Pazos; Graciela Mora; Guadalupe Castilla; José Martínez; Vanesa Landero; Héctor Fraire; Juan González
In this paper, a new mechanism for automatically obtaining some control parameter values for Genetic Algorithms is presented, which is independent of problem domain and size. This approach differs from traditional methods, which require knowing the problem domain first and then how to select the parameter values for solving specific problem instances. The proposed method uses a sample of problem instances, whose solution makes it possible to characterize the problem and to obtain the parameter values. To test the method, a combinatorial optimization model for data-object allocation in the Web (known as DFAR) was solved using Genetic Algorithms. We show how the proposed mechanism makes it possible to develop a set of mathematical expressions that relate the problem instance size to the control parameters of the algorithm. The expressions are then used, in an online process, to control the parameter values. We present the latest experimental results with the self-tuning mechanism applied to a sample of random instances that simulates a typical Web workload. We consider that the principles of the proposed method can be extended to the self-tuning of control parameters for other heuristic algorithms.
international symposium on methodologies for intelligent systems | 2008
Joaquín Pérez; Laura Cruz; Rodolfo Pazos; Vanesa Landero; Gerardo Reyes; Crispín Zavala; Héctor Fraire; Verónica Pérez
The problem of algorithm selection for solving NP problems arises with the appearance of a variety of heuristic algorithms. Early works claimed the supremacy of some algorithm for a given problem. Subsequent works revealed that the supremacy of an algorithm applies only to a subset of instances. However, it was not explained why an algorithm solves a given subset of instances better. In this respect, this work approaches the problem of explaining, through causal modeling, the interrelations between instance characteristics and the inner workings of algorithms. To validate the proposed approach, a set of experiments was carried out in a case study of the Tabu Search algorithm applied to the Bin Packing problem. Finally, the proposed approach can be useful for redesigning the logic of heuristic algorithms and for justifying the use of an algorithm to solve an instance subset. This information could contribute to algorithm selection for NP-hard problems.
international conference on computational science and its applications | 2007
Manuel Aguilar; Héctor Fraire; Laura Cruz; Juan Javier González; Guadalupe Castilla; Claudia Gómez
Prediction of exonic and intronic regions is an important problem in bioinformatics, which has been addressed with a set of coding measures of medium accuracy. In this work, we propose a new methodology for the prediction of exons and introns based on Kasiski's cryptanalysis method, using variants of three classical coding measures: codon usage, amino acid usage, and codon preference. We validated our approach on a test set of 178 sequences of different lengths, improving on the exon prediction level reported by Fickett. Additionally, we present the first results on intron prediction, with an accuracy of 83.4%.
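A codon usage measure, one of the three classical measures named above, scores a candidate window by how typical its codons are of known coding regions. The sketch below is a simplified log-likelihood version with an invented toy frequency table; the paper combines variants of such measures with Kasiski-style analysis, which is not reproduced here.

```python
import math

def codon_usage_score(seq, freq_table):
    """Log-likelihood of a window's codons under a coding-region codon
    frequency table -- a simplified codon-usage measure. Higher scores
    suggest the window reads like an exon in this reading frame."""
    score = 0.0
    for i in range(0, len(seq) - 2, 3):
        # unseen codons get a small pseudo-frequency to avoid log(0)
        score += math.log(freq_table.get(seq[i:i + 3], 1e-6))
    return score
```

Sliding such a score along a sequence (and across the three reading frames) gives the per-window signal that exon/intron predictors threshold.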
international conference on artificial intelligence and soft computing | 2006
Joaquín Pérez; Laura Cruz; Rodolfo Pazos; Vanesa Landero; Gerardo Reyes; Héctor Fraire; Juan Frausto
The problem of algorithm selection for solving NP problems arises with the appearance of a variety of heuristic algorithms. Early works claimed the supremacy of some algorithm for a given problem. Subsequent works revealed that the supremacy of an algorithm applies only to a subset of instances. However, it was not explained why an algorithm solves a given subset of instances better. In this respect, this work approaches the problem of explaining, through a causal model, the interrelations between instance characteristics and the inner workings of algorithms. To validate the proposed approach, a set of experiments was carried out in a case study of the Threshold Accepting algorithm applied to the Bin Packing problem. Finally, the proposed approach can be useful for redesigning the logic of heuristic algorithms and for justifying the use of an algorithm to solve an instance subset. This information could contribute to algorithm selection for NP problems.
Lecture Notes in Computer Science | 2005
Héctor Fraire; Guadalupe Castilla; Arturo Hernández; Claudia Gómez; Graciela Mora; Arquimedes Godoy
In this paper we approach the solution of large instances of the distribution design problem. Traditional approaches do not consider that the size of the instances can significantly affect the efficiency of the solution process. This paper shows the feasibility of solving large-scale instances of the distribution design problem by compressing the instance to be solved. The goal of the compression is to reduce the amount of resources needed to solve the original instance without significantly degrading the quality of its solution. To preserve solution quality, the compression summarizes the access pattern of the original instance using clustering techniques. To validate the approach we tested it on a new model of the replicated version of the distribution design problem that incorporates generalized database objects. The experimental results show that our approach makes it possible to reduce the computational resources needed for solving large instances, using an efficient clustering algorithm. We present experimental evidence of the clustering efficiency of the algorithm.
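The compression idea, summarizing the access pattern so a smaller equivalent instance can be solved, can be sketched in its most degenerate form: merging identical query access vectors and keeping a multiplicity count. This is only a stand-in for the clustering described above (which also groups merely similar patterns); the data layout is an assumption.

```python
def compress_access_matrix(rows):
    """Merge identical query access vectors, keeping a multiplicity
    count: a deliberately simple stand-in for the clustering-based
    compression of the access pattern described above."""
    groups = {}
    for r in rows:
        key = tuple(r)
        groups[key] = groups.get(key, 0) + 1
    return [(list(vec), count) for vec, count in groups.items()]
```

Solving the distribution design model over the compressed rows, weighted by their counts, needs far fewer variables than the original instance while approximating the same access workload.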