Hormoz Shahrzad
University of Texas at Austin
Publications
Featured research published by Hormoz Shahrzad.
Archive | 2013
Babak Hodjat; Hormoz Shahrzad
We present a method for estimating fitness functions that are computationally expensive to evaluate exactly. The proposed estimation method applies a number of partial evaluations based on incomplete information or uncertainties. We show how this method can yield results close to those of comparable methods in which fitness is measured over the entire dataset, but at a fraction of the computational and memory cost, and in a parallelizable manner. We describe our experience applying this method to a real-world application: evolving equity trading strategies.
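The core idea of partial evaluation can be sketched as follows. This is a minimal illustration, not the paper's actual estimator: the `score` function, the uniform sampling scheme, and the synthetic "returns" data are all placeholders.

```python
import random

def partial_fitness(score, dataset, sample_size, rng):
    """Estimate fitness by averaging `score` over a random sample
    of the dataset instead of evaluating every data point."""
    sample = rng.sample(dataset, min(sample_size, len(dataset)))
    return sum(score(x) for x in sample) / len(sample)

# Toy example: fitness of a threshold rule on synthetic "returns".
rng = random.Random(0)
data = [rng.gauss(0.01, 0.1) for _ in range(10_000)]
rule = lambda r: 1.0 if r > 0 else 0.0  # hypothetical trading signal

full = sum(rule(x) for x in data) / len(data)      # exact, expensive
est = partial_fitness(rule, data, 500, random.Random(1))  # cheap estimate
```

Because each sample is drawn independently, many such partial evaluations can run in parallel on different nodes and be combined later, which is what makes the approach parallelizable.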
GPTP | 2014
Babak Hodjat; Erik Hemberg; Hormoz Shahrzad; Una-May O’Reilly
We describe a system, ECStar, that outstrips extant genetic programming systems in many aspects of scale. One instance in the domain of financial strategies has executed for extended durations (months to years) on nodes distributed around the globe. ECStar system instances are almost never stopped and restarted, though they are resource-elastic; instead, they are interactively redirected to different parts of the problem space and updated with the latest learning. Their non-reproducibility (i.e., a single “play of the tape”), a consequence of their complexity, makes them similar to real biological systems. In this contribution we focus on how ECStar introduces a provocative, important new paradigm for GP through its sheer size and complexity. ECStar’s scale, volunteer compute nodes, and distributed hub-and-spoke design have implications for how a multi-node instance is managed. We describe the setup, deployment, operation, and updating of an instance of such a large, distributed, long-running system. Moreover, we outline how ECStar is designed to allow manual guidance and realignment of its evolutionary search trajectory.
Genetic and Evolutionary Computation Conference | 2016
Hormoz Shahrzad; Babak Hodjat; Risto Miikkulainen
In an age-layered evolutionary algorithm, candidates are evaluated on a small number of samples first; if they seem promising, they are evaluated with more samples, up to the entire training set. In this manner, weak candidates can be eliminated quickly, and evolution can proceed faster. In this paper, the fitness-level method is used to derive a theoretical upper bound on the runtime of a (k+1) age-layered evolution strategy, showing a significant potential speedup compared to a non-layered counterpart. The parameters of the upper bound are estimated experimentally on the 11-multiplexer problem, verifying that the theory can be useful in configuring age layering for maximum advantage. The predictions are validated in a practical implementation of age layering, confirming that 60-fold speedups are possible with this technique.
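The layered screening described above might look like this in outline. This is a sketch under simplifying assumptions, not the paper's algorithm: `evaluate` here is a toy Bernoulli estimator standing in for a real per-sample fitness measurement.

```python
import random

def age_layered_filter(candidates, evaluate, layers, threshold):
    """Screen candidates through increasingly expensive evaluations:
    each layer re-evaluates the survivors with more samples, so weak
    candidates are discarded cheaply at the early, noisy layers."""
    survivors = list(candidates)
    for n_samples in layers:
        survivors = [c for c in survivors
                     if evaluate(c, n_samples) >= threshold]
    return survivors

# Toy fitness: each candidate is its true success rate, estimated by
# averaging n Bernoulli trials (cheap layers are noisy, later ones precise).
rng = random.Random(0)
def evaluate(rate, n):
    return sum(rng.random() < rate for _ in range(n)) / n

pool = [0.0, 0.2, 0.8, 1.0]
kept = age_layered_filter(pool, evaluate, layers=[10, 100, 1000],
                          threshold=0.6)
```

Most of the computational budget is spent only on candidates that survive the early layers, which is the source of the speedup the paper bounds theoretically.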
arXiv: Neural and Evolutionary Computing | 2018
Hormoz Shahrzad; Daniel Fink; Risto Miikkulainen
An important benefit of multi-objective search is that it maintains a diverse population of candidates, which helps in deceptive problems in particular. Not all diversity is useful, however: candid...
Archive | 2015
Hormoz Shahrzad; Babak Hodjat
We demonstrate the effectiveness and power of the distributed GP platform EC-Star by comparing the computational effort needed to solve the 11-multiplexer function on a single machine using full-fitness evaluation with that needed using distributed, age-layered, partial-fitness evaluations and a Pitts-style representation. We study the impact of age layering and show how the system scales with distribution and tends toward smaller solutions. We also consider the effect of pool size and the choice of fitness function on convergence and total computation.
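For reference, the 11-multiplexer benchmark maps 3 address bits and 8 data bits to the addressed data bit. A direct Python definition (illustrative only; the bit ordering is an assumption, and the enumeration of cases is included just to show the problem size):

```python
def multiplexer11(bits):
    """11-multiplexer: bits[0:3] address one of the 8 data bits bits[3:11]."""
    assert len(bits) == 11
    addr = bits[0] * 4 + bits[1] * 2 + bits[2]
    return bits[3 + addr]

# The full truth table has 2**11 = 2048 cases; a full-fitness run scores
# a candidate on all of them, while partial-fitness evaluation scores it
# on a subset.
cases = [[(i >> (10 - b)) & 1 for b in range(11)] for i in range(2 ** 11)]
```

The small, fully enumerable truth table is what makes this function a convenient benchmark for comparing full versus partial fitness evaluation.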
ACM Sigevolution | 2018
Risto Miikkulainen; Babak Hodjat; Xin Qiu; Jason Zhi Liang; Elliot Meyerson; Aditya Rawal; Hormoz Shahrzad
Deep learning (DL) has transformed much of AI, and demonstrated how machine learning can make a difference in the real world. Its core technology is gradient descent, which has been used in neural networks since the 1980s. However, massive expansion of available training data and compute gave it a new instantiation that significantly increased its power.
Archive | 2016
Babak Hodjat; Hormoz Shahrzad
We introduce a cross-validation algorithm called nPool that can be applied in a distributed fashion. Unlike classic k-fold cross-validation, the data segments are mutually exclusive, and training takes place on only one segment. The system is well suited to run in concert with the EC-Star distributed evolutionary system, cross-validating solution candidates during a run. The system is tested with different numbers of validation segments on a real-world problem: classifying ICU blood-pressure time series.
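The mutually exclusive segmentation can be illustrated as follows. This is a hypothetical sketch, not the published implementation; in particular, assigning segments by stride is an assumption made for the example.

```python
def npool_split(data, n):
    """Partition `data` into n mutually exclusive segments. Unlike
    k-fold cross-validation, each worker trains on exactly one
    segment and validates candidates on all the others."""
    segments = [data[i::n] for i in range(n)]
    return [
        (train, [x for j, seg in enumerate(segments) if j != i for x in seg])
        for i, train in enumerate(segments)
    ]

# Three workers, each with a disjoint training segment.
splits = npool_split(list(range(12)), n=3)
```

Because no data point appears in more than one training segment, the n workers can run in parallel with no coordination beyond exchanging candidates for validation.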
arXiv: Neural and Evolutionary Computing | 2017
Risto Miikkulainen; Jason Zhi Liang; Elliot Meyerson; Aditya Rawal; Daniel Fink; Olivier Francon; Bala Raju; Hormoz Shahrzad; Arshak Navruzyan; Nigel Duffy; Babak Hodjat
Archive | 2010
Babak Hodjat; Hormoz Shahrzad
Archive | 2010
Babak Hodjat; Hormoz Shahrzad