Antonio Gómez-Iglesias
University of Texas at Austin
Publications
Featured research published by Antonio Gómez-Iglesias.
Parallel, Distributed and Network-Based Processing | 2011
Miguel Cárdenas-Montes; Miguel A. Vega-Rodríguez; Juan José Rodríguez-Vázquez; Antonio Gómez-Iglesias
This paper focuses on solving large optimization problems using GPGPU. Evolutionary Algorithms for these problems suffer from the curse of dimensionality: their performance deteriorates quickly as the dimensionality of the search space increases. This makes performance studies of very high-dimensional problems challenging, especially under a limited time budget. The availability of low-cost, powerful parallel graphics cards has stimulated the implementation of diverse algorithms on Graphics Processing Units (GPUs). This paper describes the design of a GPGPU-based Parallel Particle Swarm Algorithm that tackles this type of problem within a limited execution-time budget. The implementation profits from an efficient mapping of the data elements (a swarm of very high-dimensional particles) to the parallel processing elements of the GPU. In this problem, the fitness evaluation is the most CPU-costly routine and therefore the main candidate for GPU implementation. The main conclusion is the speed-up curve as dimensionality increases, which shows an asymptotic limit stemming from the data-parallel mapping.
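The key idea above, mapping the whole swarm to parallel processing elements so that the costly fitness routine is evaluated for all particles at once, can be sketched as follows. This is a minimal illustration assuming a toy sphere objective and generic coefficients; NumPy vectorisation stands in for the paper's CUDA kernels, whose details are not reproduced here.

```python
import numpy as np

def sphere(x):
    # Toy fitness, evaluated for all particles in one vectorised call
    # (one row per particle). This mimics the data-parallel mapping in
    # which the fitness routine is offloaded to the GPU.
    return np.sum(x * x, axis=1)

def pso(dim=100, n_particles=64, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()                             # personal bests
    pbest_f = sphere(x)
    gbest = pbest[np.argmin(pbest_f)].copy()     # global best
    w, c1, c2 = 0.7, 1.5, 1.5                    # illustrative coefficients
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = sphere(x)                            # whole swarm in one call
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return float(pbest_f.min())

best = pso()
```

On a GPU the update and fitness steps become kernels over the particle-by-dimension array, which is exactly the mapping whose speed-up the paper measures against growing dimensionality.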
IEEE International Conference on High Performance Computing, Data, and Analytics | 2016
Carlos Rosales; John Cazes; Kent Milfeld; Antonio Gómez-Iglesias; Lars Koesterke; Lei Huang; Jérôme Vienne
Intel Knights Landing represents a qualitative change in the Many Integrated Core architecture: it is a self-hosted option and includes high-speed integrated memory (MCDRAM) together with a two-dimensional mesh that interconnects the cores. This leads to a number of possible runtime configurations with different characteristics and implications for application performance. This paper presents a study of the performance differences observed when using the three available MCDRAM configurations in combination with the three possible memory access (cluster) modes. We analyze the effects that memory affinity and process pinning have on different applications. The Mantevo suite of mini-applications and the NAS Parallel Benchmarks are used to analyze the behavior of very different application kernels, from molecular dynamics to CFD mini-applications. Two full applications, the Weather Research and Forecasting (WRF) model and a Lattice Boltzmann Suite (LBS3D), are also analyzed in detail to complete the study and present scalability results for a variety of applications.
Parallel, Distributed and Network-Based Processing | 2010
Antonio Gómez-Iglesias; Miguel A. Vega-Rodríguez; F. Castejón; Miguel Cárdenas-Montes; Enrique Morales-Ramos
The Artificial Bee Colony (ABC) algorithm is an optimisation algorithm based on the intelligent foraging behaviour of a honey bee swarm. In this work, the ABC algorithm is used to optimise the equilibrium of confined plasma in a nuclear fusion device. Plasma physics research for fusion still presents open problems that require a large computing capacity to be solved. Because this optimisation is very time-consuming, an environment like grid computing has to be used, so the first step is to adapt and extend the ABC algorithm to exploit the grid's capabilities. We present a modification of the original ABC algorithm, its adaptation to a grid computing environment, and the application of this algorithm to the equilibrium optimisation process.
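For readers unfamiliar with ABC, the core serial loop (employed-bee perturbation toward a neighbour, plus scout replacement of stagnant sources) can be sketched as below. This is a hedged simplification: it uses a toy Rastrigin objective instead of the plasma-equilibrium objective, omits the onlooker-bee phase, and does not show the grid adaptation; all parameters are illustrative.

```python
import numpy as np

def rastrigin(x):
    # Placeholder objective; the paper optimises plasma equilibrium instead.
    return 10 * len(x) + float(np.sum(x * x - 10 * np.cos(2 * np.pi * x)))

def abc(dim=10, n_sources=20, iters=300, limit=30, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = -5.12, 5.12
    food = rng.uniform(lo, hi, (n_sources, dim))   # food sources (solutions)
    fit = np.array([rastrigin(f) for f in food])
    trials = np.zeros(n_sources, dtype=int)        # stagnation counters
    for _ in range(iters):
        for i in range(n_sources):
            # Employed-bee phase: perturb one dimension toward a random
            # neighbour k != i.
            k = rng.integers(n_sources - 1)
            k = k + (k >= i)
            j = rng.integers(dim)
            cand = food[i].copy()
            cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
            cand[j] = np.clip(cand[j], lo, hi)
            cf = rastrigin(cand)
            if cf < fit[i]:
                food[i], fit[i], trials[i] = cand, cf, 0
            else:
                trials[i] += 1
            # Scout phase: abandon sources that stopped improving.
            if trials[i] > limit:
                food[i] = rng.uniform(lo, hi, dim)
                fit[i] = rastrigin(food[i])
                trials[i] = 0
    return float(fit.min())

best = abc()
```

In the grid version described above, the expensive fitness evaluations are the natural units to farm out as remote jobs, while the colony bookkeeping stays local.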
Parallel, Distributed and Network-Based Processing | 2008
Antonio Gómez-Iglesias; Miguel A. Vega-Rodríguez; Francisco Castejón-Magaña; M.R. del Solar; Miguel Cárdenas Montes
Fusion is a candidate for the next generation of energy production, and plasma confinement is a central part of the programme to develop it, so many investigations have focused on this problem. Grid computing is currently applied to scientific problems such as fusion energy and climate. Using grid computing, many of these computationally expensive problems can be simulated and optimised before special devices are built. A large number of tests can be performed on the grid in the time a single test takes on a single machine, so researchers can obtain better results in less time than with traditional techniques. In this paper, we present our first steps in applying grid computing to fusion energy, the starting point for analysing and optimising the configuration of a plasma-confinement device based on the transport inside the device. Finally, we propose some goals for using this work in future investigations.
Cluster Computing and the Grid | 2008
Manuel Rubio-Solar; Miguel A. Vega-Rodríguez; Juan Manuel Sánchez Pérez; Antonio Gómez-Iglesias; Miguel Cárdenas-Montes
In this work we present a grid implementation of an FPGA optimization tool. The application is based on a Distributed Genetic Algorithm (DGA) and solves the placement-and-routing problem in the FPGA design cycle. The grid infrastructure is based on both the gLite middleware and the GridWay metascheduler. The DGA's different islands are sent to the Worker Nodes (WN), where they evolve as remote jobs. We implemented a migration system between islands that centralizes the exchanged data on a local node. Starting from these data, the local node builds new islands, and the evolution continues until the stop criterion is reached. The results show that the main benefit of the distributed model is a large reduction in execution time. Using the distributed platform, users can launch more complex tasks and increase the number of experiments compared with sequential execution, while spending less time and effort.
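The island model with centralised migration described above can be sketched in a few lines. This is a hedged illustration assuming a toy OneMax objective in place of the real placement-and-routing cost, with local function calls standing in for the gLite/GridWay remote jobs; all GA parameters are invented for the example.

```python
import random

def fitness(bits):
    # Toy objective (OneMax: count of 1-bits); the paper's real objective
    # is FPGA placement-and-routing cost, not reproduced here.
    return sum(bits)

def evolve_island(pop, generations=20):
    # Simple generational GA per island: tournament selection, one-point
    # crossover, bit-flip mutation. Each island would run as a remote job.
    n, length = len(pop), len(pop[0])
    for _ in range(generations):
        nxt = []
        while len(nxt) < n:
            a = max(random.sample(pop, 3), key=fitness)
            b = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < 0.01) for bit in child]
            nxt.append(child)
        pop = nxt
    return pop

def centralised_migration(islands, n_migrants=2):
    # The local node gathers each island's best individuals and seeds the
    # rebuilt islands with the pooled migrants, as in the paper's scheme.
    pool = []
    for pop in islands:
        pool.extend(sorted(pop, key=fitness, reverse=True)[:n_migrants])
    for pop in islands:
        worst = sorted(range(len(pop)), key=lambda i: fitness(pop[i]))[:len(pool)]
        for idx, migrant in zip(worst, pool):
            pop[idx] = migrant[:]
    return islands

random.seed(42)
length = 40
islands = [[[random.randint(0, 1) for _ in range(length)] for _ in range(20)]
           for _ in range(4)]
for _ in range(5):  # epochs: islands evolve remotely, then migrate centrally
    islands = [evolve_island(pop) for pop in islands]
    islands = centralised_migration(islands)
best = max((max(pop, key=fitness) for pop in islands), key=fitness)
```

Centralising the exchange on one node keeps the islands decoupled, which is what makes it practical to run each epoch as independent grid jobs.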
International Conference on Adaptive and Natural Computing Algorithms | 2011
Miguel Cárdenas-Montes; Miguel A. Vega-Rodríguez; Antonio Gómez-Iglesias
This article presents an empirical study of the impact of changing the Random Number Generator on the performance of four Evolutionary Algorithms: Particle Swarm Optimisation, Differential Evolution, Genetic Algorithm, and Firefly Algorithm. Random Number Generators are a key component in the production of e-science, including optimisation by Evolutionary Algorithms; however, a Random Number Generator ought to be carefully selected, taking its quality into account. Analysing the impact of a change of Random Number Generator on the performance of an evolutionary algorithm requires producing a huge amount of simulated data, as well as statistical techniques to extract relevant information from the resulting large data sets. To support this production, a grid computing infrastructure has been employed. In this study, the most frequently employed high-quality Random Number Generators and Evolutionary Algorithms are coupled in order to cover the widest portfolio of cases. The study concludes with an evaluation of the impact that the choice of Random Number Generator has on the final performance of the Evolutionary Algorithm.
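The shape of such an experiment, holding the algorithm and budget fixed while swapping only the underlying generator, can be sketched with NumPy's real bit generators (PCG64 and MT19937). This is an assumption-laden miniature: one Differential Evolution variant on a toy sphere function, a single seed, and two generators, whereas the paper couples several generators with four algorithms over many grid-produced runs.

```python
import numpy as np
from numpy.random import Generator, PCG64, MT19937

def sphere(x):
    return float(np.sum(x * x))

def differential_evolution(rng, dim=10, n_pop=30, iters=150):
    # DE/rand/1/bin with fixed F and CR; only the RNG differs between runs.
    F, CR = 0.8, 0.9
    pop = rng.uniform(-5, 5, (n_pop, dim))
    fit = np.array([sphere(p) for p in pop])
    for _ in range(iters):
        for i in range(n_pop):
            others = [j for j in range(n_pop) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            tf = sphere(trial)
            if tf < fit[i]:
                pop[i], fit[i] = trial, tf
    return float(fit.min())

# Same algorithm, same budget, two different underlying bit generators.
results = {name: differential_evolution(Generator(bitgen(seed=7)))
           for name, bitgen in [("PCG64", PCG64), ("MT19937", MT19937)]}
```

A real study would repeat this over many seeds per generator and compare the resulting fitness distributions statistically, which is the part the grid infrastructure makes feasible.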
International Conference on Adaptive and Natural Computing Algorithms | 2011
Miguel Cárdenas-Montes; Miguel A. Vega-Rodríguez; Juan José Rodríguez-Vázquez; Antonio Gómez-Iglesias
Diverse technologies have been used to accelerate the execution of Evolutionary Algorithms. Nowadays, GPGPU cards have demonstrated high efficiency in improving execution times across a wide range of scientific problems, including some excellent examples with diverse categories of Evolutionary Algorithms. Nevertheless, in-depth studies of the efficiency of each of these technologies, and of how they affect the final performance, are still scarce. Such studies are relevant for reducing the execution-time budget and therefore for tackling higher-dimensional problems. In this work, the improvement in speed-up as a function of the percentage of threads used per block on the GPGPU card is analysed. The results show that a correct choice of occupancy (the number of threads per block) yields an additional speed-up.
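To see why the threads-per-block choice matters, consider a deliberately simplified occupancy model: occupancy is the fraction of a multiprocessor's warp slots kept busy, and it varies non-monotonically with block size because several per-SM limits interact. The limits below are illustrative defaults, not taken from the paper, and the model ignores register and shared-memory constraints that also bind on real GPUs.

```python
def occupancy(threads_per_block, max_threads_per_sm=1536,
              max_blocks_per_sm=8, warp_size=32, max_warps_per_sm=48):
    # Simplified model: occupancy = resident warps / warp capacity.
    # Real GPUs add register and shared-memory limits on top of these.
    warps_per_block = -(-threads_per_block // warp_size)  # ceil division
    blocks = min(max_blocks_per_sm,
                 max_threads_per_sm // threads_per_block,
                 max_warps_per_sm // warps_per_block)
    return blocks * warps_per_block / max_warps_per_sm

# Small blocks hit the blocks-per-SM cap; large blocks hit the warp cap.
occ = {tpb: occupancy(tpb) for tpb in (64, 128, 192, 256, 512)}
```

Under these assumed limits, 64-thread blocks leave two thirds of the warp slots idle while 192-thread blocks fill them, which is the kind of effect the sweep over threads per block in the paper exposes empirically.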
International Conference on Adaptive and Natural Computing Algorithms | 2009
Antonio Gómez-Iglesias; Miguel A. Vega-Rodríguez; Miguel Cárdenas-Montes; Enrique Morales-Ramos; Francisco Castejón-Magaña
Scatter search (SS) is an evolutionary algorithm (EA) of growing importance in current research, as the increasing number of publications shows. It is a very promising method for solving combinatorial and nonlinear optimisation problems. The algorithm is widely implemented for problems with short execution times, but no implementation exists for long-running processes that are likely to be executed on grid computing. Some problems arise when executing this algorithm on the grid, but once they are solved, the results obtained are very promising for many complex scientific applications, for example the optimisation of nuclear fusion devices. Using concurrent programming and the distributed techniques associated with the grid, the algorithm works as it would on a single computer.
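The serial scatter-search template that such a grid implementation distributes can be sketched as follows: a diverse pool, a small reference set, pairwise subset combination, a simple improvement step, and a reference-set update. Everything here is illustrative (a toy sphere objective, a random perturbation standing in for a proper local-improvement method); the paper's concurrent grid version farms the combination/improvement evaluations out as jobs.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def scatter_search(dim=5, pool_size=30, ref_size=6, iters=40, seed=3):
    rng = np.random.default_rng(seed)
    # Diversification: an initial pool of scattered solutions.
    pool = rng.uniform(-10, 10, (pool_size, dim))
    # Reference set: the best solutions found so far.
    ref = sorted(pool, key=sphere)[:ref_size]
    for _ in range(iters):
        children = []
        # Subset generation + combination: pair every two reference points.
        for i in range(ref_size):
            for j in range(i + 1, ref_size):
                lam = rng.random()
                child = lam * ref[i] + (1 - lam) * ref[j]  # linear combination
                child += rng.normal(0, 0.1, dim)  # stand-in improvement step
                children.append(child)
        # Reference-set update: keep the best of old refs and children.
        ref = sorted(ref + children, key=sphere)[:ref_size]
    return sphere(ref[0])

best = scatter_search()
```

Because each child's evaluation is independent, the inner double loop is the natural candidate to run concurrently on grid worker nodes.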
International Symposium on Parallel and Distributed Computing | 2008
Antonio Gómez-Iglesias; Miguel A. Vega-Rodríguez; Francisco Castejón-Magaña; Miguel Cárdenas-Montes; Enrique Morales-Ramos
Fusion is a candidate for the next generation of energy production. The devices scientists use for their research still need more energy than they produce, because fusion devices present many open problems. In magnetic confinement devices, one of these problems is the transport of particles in the confined plasma. Modeling tools can be used to improve transport levels, but the computational cost of these tools and the number of different configurations to simulate make it impossible to perform the required tests to obtain good designs. With grid computing we have the computational resources needed to run the required number of tests, and with genetic algorithms we can search for a good result without exploring the entire solution space.
Applied Soft Computing | 2013
Antonio Gómez-Iglesias; Miguel A. Vega-Rodríguez; Francisco Castejón
Solving large-scale scientific problems represents a challenging and broad area of numerical optimisation, and dedicated techniques may improve the results achieved for these problems. We aimed to design a specific optimisation technique for them: a new swarm-based algorithm based on bees' foraging behaviour. Such a system must rely on large computing infrastructures with specific characteristics, so we designed the algorithm to be executed on the grid. The resulting algorithm improves on the results obtained for the large-scale problem described in the paper by other algorithms, and it also delivers optimal usage of the computational resources. This work represents one of the few demonstrations of solving real large-scale scientific problems with a dedicated algorithm on large and complex computing infrastructures. We show the capabilities of this approach when solving such problems.