Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Amitay Isaacs is active.

Publication


Featured research published by Amitay Isaacs.


IEEE Transactions on Evolutionary Computation | 2011

A Pareto Corner Search Evolutionary Algorithm and Dimensionality Reduction in Many-Objective Optimization Problems

Hemant Kumar Singh; Amitay Isaacs; Tapabrata Ray

Many-objective optimization refers to optimization problems containing a large number of objectives, typically more than four. Non-dominance is an inadequate strategy for convergence to the Pareto front for such problems, as almost all solutions in the population become non-dominated, resulting in a loss of convergence pressure. However, for some problems, it may be possible to generate the Pareto front using only a few of the objectives, rendering the rest of the objectives redundant. Such problems may be reducible to a manageable number of relevant objectives, which can be optimized using conventional multiobjective evolutionary algorithms (MOEAs). For dimensionality reduction, most existing proposals rely on analysis of a representative set of solutions obtained by running a conventional MOEA for a large number of generations, which is computationally overbearing. A novel algorithm, the Pareto corner search evolutionary algorithm (PCSEA), is introduced in this paper, which searches for the corners of the Pareto front instead of searching for the complete Pareto front. The solutions obtained using PCSEA are then used for dimensionality reduction to identify the relevant objectives. The potential of the proposed approach is demonstrated by studying its performance on a set of benchmark test problems and two engineering examples. While the preliminary results obtained using PCSEA are promising, there are a number of areas that need further investigation. This paper provides a number of useful insights into dimensionality reduction and, in particular, highlights some of the roadblocks that need to be cleared for future development of algorithms attempting to use few selected solutions for identifying relevant objectives.
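The corner-search idea lends itself to a compact illustration. The sketch below uses hypothetical helper names and is not the published ranking procedure: it marks as corner candidates the solutions that are best on a single objective or best on the norm of the remaining objectives.

```python
import numpy as np

def corner_candidates(F):
    """Pick candidate Pareto-corner solutions from an objective matrix F
    (rows = solutions, columns = objectives, all to be minimized).

    A solution is kept if it is best on some single objective, or best on the
    Euclidean norm of all objectives except one. This mirrors the corner-search
    intuition behind PCSEA but is an illustrative simplification only.
    """
    n, m = F.shape
    corners = set()
    for j in range(m):
        corners.add(int(np.argmin(F[:, j])))                        # best on objective j
        rest = np.delete(F, j, axis=1)
        corners.add(int(np.argmin(np.linalg.norm(rest, axis=1))))   # best on the remaining objectives
    return sorted(corners)

F = np.random.rand(50, 5)   # 50 solutions, 5 objectives
print(corner_candidates(F))
```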


Archive | 2009

Infeasibility Driven Evolutionary Algorithm for Constrained Optimization

Tapabrata Ray; Hemant Kumar Singh; Amitay Isaacs; Warren Smith

Real-life optimization problems often involve one or more constraints, and in most cases the optimal solutions to such problems lie on constraint boundaries. The performance of an optimization algorithm is known to be largely dependent on the underlying mechanism of constraint handling. Most population-based stochastic optimization methods prefer a feasible solution over an infeasible solution during their course of search. Such a preference drives the population to feasibility first before improving its objective function value, which effectively means that the solutions approach the constraint boundaries from the feasible side of the search space. In this chapter, we introduce an evolutionary algorithm that explicitly maintains a small percentage of infeasible solutions close to the constraint boundaries during its course of evolution. The presence of marginally infeasible solutions in the population allows the algorithm to approach the constraint boundary from the infeasible side of the search space in addition to its approach from the feasible side via the evolution of feasible solutions. Furthermore, “good” infeasible solutions are ranked higher than the feasible solutions, thereby focusing the search for the optimal solutions near the constraint boundaries. The performance of the proposed algorithm is compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II) on a set of single- and multi-objective test problems. The results clearly indicate that the rate of convergence of the proposed algorithm is better than that of NSGA-II on the studied test problems. Additionally, the algorithm provides a set of marginally infeasible solutions which are of great use in trade-off studies.
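A minimal sketch of the selection idea, assuming a generic single-objective constrained setting with plain array inputs (this is not the chapter's actual ranking scheme), reserves a small slice of the next population for the least-violating infeasible solutions:

```python
import numpy as np

def select_with_infeasible(pop, fitness, violation, size, infeasible_frac=0.2):
    """Pick the next population while reserving a slice for marginally
    infeasible solutions (a simplified sketch of the idea, not the published
    IDEA ranking).

    pop       : list of candidate solutions
    fitness   : objective value of each candidate (minimized)
    violation : aggregate constraint violation, 0 means feasible
    """
    fitness = np.asarray(fitness, dtype=float)
    violation = np.asarray(violation, dtype=float)
    feas = np.where(violation == 0)[0]
    infeas = np.where(violation > 0)[0]

    n_infeas = min(len(infeas), int(infeasible_frac * size))
    # keep the infeasible solutions closest to the constraint boundary
    keep_infeas = infeas[np.argsort(violation[infeas])][:n_infeas]
    # fill the remaining slots with the best feasible solutions
    # (if feasible solutions run short, the returned set is simply smaller;
    #  the full algorithm handles that case explicitly)
    keep_feas = feas[np.argsort(fitness[feas])][:size - n_infeas]
    chosen = list(keep_infeas) + list(keep_feas)
    return [pop[i] for i in chosen]
```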


Congress on Evolutionary Computation | 2009

Performance of infeasibility driven evolutionary algorithm (IDEA) on constrained dynamic single objective optimization problems

Hemant Kumar Singh; Amitay Isaacs; Trung Thanh Nguyen; Tapabrata Ray; Xin Yao

A number of population-based optimization algorithms have been proposed in recent years to solve unconstrained and constrained single- and multi-objective optimization problems. Most such algorithms inherently prefer a feasible solution over an infeasible one during the course of search, which translates to approaching the constraint boundary from the feasible side of the search space. Previous studies [1], [2] have already demonstrated the benefits of explicitly maintaining a fraction of infeasible solutions in the Infeasibility Driven Evolutionary Algorithm (IDEA) for single- and multi-objective constrained optimization problems. In this paper, the benefits of IDEA as a sub-evolve mechanism are highlighted for dynamic, constrained single-objective optimization problems. IDEA is particularly attractive for such problems as it offers a faster rate of convergence than a conventional EA, which is of significant interest in dynamic optimization problems. The algorithm is tested on two new dynamic constrained test problems. For both problems, the performance of IDEA is found to be significantly better than that of the conventional EA.
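The role of a fast-converging EA inside a dynamic problem can be pictured as a simple outer loop; all interfaces below are hypothetical placeholders rather than the paper's implementation:

```python
def solve_dynamic(evaluate_at, evolve, pop, n_steps, gens_per_step):
    """Outer loop for a dynamic constrained problem: the landscape may change
    at each time step, so the population is re-evaluated under the new
    landscape before the EA (an IDEA-style sub-evolve step in the paper) is
    run for a limited number of generations. `evaluate_at(x, t)` and
    `evolve(pop, scores, gens)` are assumed placeholders, not library calls.
    """
    for t in range(n_steps):
        scores = [evaluate_at(x, t) for x in pop]   # refresh stale evaluations after a change
        pop = evolve(pop, scores, gens_per_step)    # a fast-converging EA matters here
    return pop
```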


Australasian Joint Conference on Artificial Intelligence | 2008

Infeasibility Driven Evolutionary Algorithm (IDEA) for Engineering Design Optimization

Hemant Kumar Singh; Amitay Isaacs; Tapabrata Ray; Warren Smith

Engineering design often requires solutions to constrained optimization problems with highly nonlinear objective and constraint functions. The optimal solutions of most design problems lie on the constraint boundary. In this paper, the Infeasibility Driven Evolutionary Algorithm (IDEA) is presented, which searches for optimum solutions near the constraint boundary. IDEA explicitly maintains and evolves a small proportion of infeasible solutions. This behavior is fundamentally different from that of current state-of-the-art evolutionary algorithms, which rank the feasible solutions higher than the infeasible solutions and in the process approach the constraint boundary from the feasible side of the design space. In IDEA, the original constrained minimization problem with k objectives is reformulated as an unconstrained minimization problem with k + 1 objectives, where the additional objective is calculated based on the relative amount of constraint violation among the population members. The presence of infeasible solutions in IDEA leads to an improved rate of convergence as the solutions approach the constraint boundary from both the feasible and infeasible regions of the search space. As an added benefit, IDEA provides a set of marginally infeasible solutions for trade-off studies. The performance of IDEA is compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [1] on a set of single- and multi-objective mathematical and engineering optimization problems to highlight the benefits.
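The extra objective can be illustrated with a small sketch that converts per-constraint violations into relative ranks and sums them per solution. This follows the spirit of the description above, though the published formula may differ in detail:

```python
import numpy as np

def violation_measure(G):
    """Turn a matrix of constraint violations G (rows = solutions,
    columns = constraints, 0 = satisfied) into a single extra objective.

    Each constraint's violations are converted to relative ranks within the
    population and the ranks are summed per solution, so the measure reflects
    relative rather than absolute violation. Illustrative only.
    """
    G = np.maximum(np.asarray(G, dtype=float), 0.0)
    n, m = G.shape
    measure = np.zeros(n)
    for j in range(m):
        violated = G[:, j] > 0
        order = np.argsort(G[violated, j])        # smaller violation -> smaller rank
        ranks = np.empty(order.size)
        ranks[order] = np.arange(1, order.size + 1)
        measure[violated] += ranks                # solutions feasible on this constraint add 0
    return measure

# the original k objectives plus this measure form the (k + 1)-objective problem
```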


Simulated Evolution and Learning | 2008

A Study on the Performance of Substitute Distance Based Approaches for Evolutionary Many Objective Optimization

Hemant Kumar Singh; Amitay Isaacs; Tapabrata Ray; Warren Smith

The Non-dominated Sorting Genetic Algorithm II (NSGA-II) [1] and the Strength Pareto Evolutionary Algorithm (SPEA2) [2] are the two most widely used evolutionary multi-objective optimization algorithms. Although they have been quite successful so far in solving a wide variety of real-life optimization problems, mostly 2 or 3 objective in nature, their performance is known to deteriorate significantly with an increasing number of objectives. The term many-objective optimization refers to problems with a number of objectives significantly larger than two or three. In this paper, we provide an overview of the challenges involved in solving many-objective optimization problems and provide an in-depth study on the performance of recently proposed substitute distance based approaches, viz. subvector dominance, -eps-dominance, fuzzy Pareto dominance and sub-objective dominance count, for NSGA-II to deal with many-objective optimization problems. The present study has been conducted on scalable benchmark functions (DTLZ2-DTLZ3) and the recently proposed P* problem [3], since their convergence and diversity measures can be compared conveniently. An alternative substitute distance approach is introduced in this paper and compared with existing ones on the set of benchmark problems.
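One way to picture a substitute distance assignment is sketched below: each solution is scored by the largest number of objectives in which some other solution beats it, and smaller scores are preferred when breaking ties within a front. This is an illustrative measure only and does not reproduce the exact definitions studied in the paper:

```python
import numpy as np

def subvector_count(F):
    """For each solution (row of F, objectives minimized), return the largest
    number of objectives in which some other solution is strictly better.
    Solutions with a smaller count would be preferred when breaking ties inside
    a non-dominated front. Illustrative substitute-distance style measure.
    """
    F = np.asarray(F, dtype=float)
    n = F.shape[0]
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        better = (F < F[i]).sum(axis=1)   # per rival: objectives where the rival beats i
        better[i] = 0
        counts[i] = better.max()
    return counts
```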


World Congress on Computational Intelligence | 2008

Blessings of maintaining infeasible solutions for constrained multi-objective optimization problems

Amitay Isaacs; Tapabrata Ray; Warren Smith

The most common approach to handling constraints in a constrained optimization problem has been the use of penalty functions. In recent years, non-dominance based ranking methods have been applied for an efficient handling of constraints. These techniques favor the feasible solutions over the infeasible solutions, thus guiding the search through the feasible space. Usually the optimal solutions of constrained optimization problems are spread along the constraint boundary. In this paper, we propose a constraint handling method that maintains infeasible solutions in the population to aid the search for the optimal solutions through the infeasible space. The constraint handling method is implemented in the constraint handling evolutionary algorithm (CHEA), which is a modified Non-dominated Sorting Genetic Algorithm II (NSGA-II) [1]. The original constrained minimization problem with k objectives is reformulated as an unconstrained minimization problem with k + 1 objectives, where the additional objective function is the number of constraint violations. In CHEA, the infeasible solutions are ranked higher than the feasible solutions, thereby focusing the search for the optimal solutions near the constraint boundaries through the infeasible region. CHEA simultaneously obtains the solutions to the constrained as well as the unconstrained optimization problem. The performance of CHEA is compared with NSGA-II on the set of CTP test problems. For a fixed number of function evaluations, CHEA converges to the Pareto optimal solutions much faster than NSGA-II. It is observed that by retaining even a small number of infeasible solutions in the population, CHEA is able to prevent the search from prematurely converging to a sub-optimal Pareto front.
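The reformulation itself is simple to express; the sketch below appends the violation count as an extra column of the objective matrix (CHEA's ranking and evolutionary operators are omitted):

```python
import numpy as np

def reformulated_objectives(F, G):
    """Append the number of violated constraints as an extra objective, turning
    a constrained k-objective problem into an unconstrained (k + 1)-objective
    one, as described above. F: objective matrix (rows = solutions);
    G: constraint matrix where a positive entry means the constraint is violated.
    A sketch of the reformulation only.
    """
    F = np.asarray(F, dtype=float)
    G = np.asarray(G, dtype=float)
    n_violations = (G > 0).sum(axis=1, keepdims=True).astype(float)
    return np.hstack([F, n_violations])
```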


International Symposium on Neural Networks | 2008

Development of a memetic algorithm for Dynamic Multi-Objective Optimization and its applications for online neural network modeling of UAVs

Amitay Isaacs; Vishwas R. Puttige; Tapabrata Ray; Warren Smith; Sreenatha G. Anavatti

Dynamic multi-objective optimization (DMO) is one of the most challenging classes of optimization problems, where the objective functions change over time and the optimization algorithm is required to identify the corresponding Pareto optimal solutions with minimal time lag. DMO has received very little attention in the past; none of the existing multi-objective algorithms perform satisfactorily on test problems, and only a handful of such applications have been reported. In this paper, we introduce a memetic algorithm (MA) and illustrate its performance for online neural network (NN) identification of the multi-input multi-output unmanned aerial vehicle (UAV) system. As a typical case, the longitudinal model of the UAV is considered, and the performance of a NN trained with the memetic algorithm is compared to another trained with the Levenberg-Marquardt training algorithm using mini-batches. The memetic algorithm employs an orthogonal epsilon-constrained formulation to deal with multiple objectives, and a sequential quadratic programming (SQP) solver is embedded as its local search mechanism to improve the rate of convergence. The performance of the memetic algorithm is presented for two benchmark Fisher's Discriminant Analysis (FDA) problems, FDA1 and modified FDA2, before highlighting its benefits for online NN model identification for UAVs. Observations from our recent work indicated that the Mean Square Error (MSE) alone may not always be a good measure for training the networks. Hence the MSE and the maximum absolute value of the instantaneous error are considered as the objectives to be minimized, which requires a dynamic MO algorithm. The proposed memetic algorithm is aimed at solving such identification problems, and the same approach can be extended to control problems.
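The epsilon-constraint plus SQP combination can be sketched on a toy problem: a linear model is fitted by minimizing the mean squared error subject to a bound on the maximum absolute error, with SciPy's SLSQP solver standing in for the SQP step. The data, model and bound are illustrative stand-ins, not the UAV identification set-up, and the non-smooth max-error constraint would typically be smoothed in practice:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for the two training objectives mentioned above: mean squared
# error and maximum absolute instantaneous error of a (here, linear) model.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=100)

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

def max_abs_err(w):
    return float(np.max(np.abs(X @ w - y)))

# Epsilon-constraint scalarisation: minimise the MSE subject to the maximum
# absolute error staying below a chosen bound. In the paper this local step
# is embedded inside an evolutionary outer loop; here it is run stand-alone.
eps = 0.2
cons = [{"type": "ineq", "fun": lambda w: eps - max_abs_err(w)}]
result = minimize(mse, x0=np.zeros(3), method="SLSQP", constraints=cons)
print(result.x, mse(result.x), max_abs_err(result.x))
```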


Australian Conference on Artificial Life | 2007

An evolutionary algorithm with spatially distributed surrogates for multiobjective optimization

Amitay Isaacs; Tapabrata Ray; Warren Smith

In this paper, an evolutionary algorithm with spatially distributed surrogates (EASDS) for multiobjective optimization is presented. The algorithm performs actual analysis for the initial population and periodically every few generations thereafter. An external archive of the unique solutions evaluated using the actual analysis is maintained to train the surrogate models. The data points in the archive are split into multiple partitions using k-means clustering. A Radial Basis Function (RBF) network surrogate model is built for each partition using a fraction of the points in that partition. The rest of the points in the partition are used as validation data to decide the prediction accuracy of the surrogate model. The prediction of a new candidate solution is done by the surrogate model with the least prediction error in the neighborhood of that point. Five multiobjective test problems are presented in this study, and a comparison with the Nondominated Sorting Genetic Algorithm II (NSGA-II) is included to highlight the benefits offered by our approach. The EASDS algorithm consistently reported better nondominated solutions for all the test cases for the same number of actual evaluations, as compared to a single global surrogate model and NSGA-II.
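A compact sketch of the spatially distributed surrogate idea, using k-means and SciPy's RBF interpolator; parameter choices are placeholders, not those of the paper, and each partition is assumed to hold enough archive points for the default RBF kernel:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.cluster import KMeans

def build_spatial_surrogates(X, y, n_clusters=4, train_frac=0.8, seed=0):
    """Split archived (design, objective) pairs into spatial partitions with
    k-means and fit one RBF surrogate per partition, holding out part of each
    partition to estimate that model's prediction error. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    models = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        rng.shuffle(idx)
        n_train = max(2, int(train_frac * len(idx)))
        train, valid = idx[:n_train], idx[n_train:]
        model = RBFInterpolator(X[train], y[train])
        err = (np.mean(np.abs(model(X[valid]) - y[valid]))
               if len(valid) else np.inf)
        models.append((km.cluster_centers_[c], model, err))
    return models

def predict(models, x):
    """Predict with the surrogate whose partition centre is nearest to x; a
    fuller version would also weigh the stored validation error."""
    centre, model, _ = min(models, key=lambda m: np.linalg.norm(m[0] - x))
    return model(x[None, :])[0]

# small usage example on synthetic data
X = np.random.rand(120, 2)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
models = build_spatial_surrogates(X, y)
print(predict(models, np.array([0.4, 0.6])))
```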


Congress on Evolutionary Computation | 2007

A Hybrid Evolutionary Algorithm With Simplex Local Search

Amitay Isaacs; Tapabrata Ray; Warren Smith

Presented in this paper is the simplex search enabled evolutionary algorithm (SSEA), a hybrid algorithm that is fundamentally an evolutionary algorithm (EA) embedded with a local simplex search, for unconstrained optimization problems. Evolutionary algorithms have been quite successful in solving a wide class of intractable problems, and the non-dominated sorting genetic algorithm (NSGA-II) is a popular choice. However, like any other evolutionary algorithm, the rate of convergence of NSGA-II slows down with generations, and often there is no improvement in the best candidate solution over a number of generations. The simplex search component comes into effect once the basic evolutionary algorithm encounters a slow rate of convergence. To allow exploitation around multiple promising regions, the simplex search is invoked from multiple promising regions of the variable space identified using hierarchical agglomerative clustering. In this paper, results are presented for a series of unconstrained optimization test problems that cover problems with a single minimum, a few minima and a large number of minima. A comparison of results is provided with NSGA-II, fast evolutionary strategy (FES), fast evolutionary programming (FEP) and improved fast evolutionary programming (IFEP), where it is clear that SSEA outperforms all other algorithms for unimodal problems. On the suite of problems with a large number of minima, SSEA performs better on some of them. For problems with fewer minima, SSEA performs better than FES, FEP and IFEP, while demonstrating comparable performance to NSGA-II.
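The local-search phase can be sketched with hierarchical clustering of the population followed by a Nelder-Mead simplex run from the best member of each region; the surrounding evolutionary loop and the stall-detection trigger are omitted, and the clustering and solver settings are illustrative choices rather than the paper's:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.optimize import minimize

def simplex_restarts(f, X, fX, n_regions=3, maxiter=200):
    """Run a Nelder-Mead simplex search from the best point of each promising
    region, where the regions come from hierarchical agglomerative clustering
    of the current population X (rows = candidate solutions, fX = their
    objective values). Sketch of the local-search phase only.
    """
    labels = fcluster(linkage(X, method="average"), t=n_regions, criterion="maxclust")
    results = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        start = X[idx[np.argmin(fX[idx])]]          # best member of the region
        res = minimize(f, start, method="Nelder-Mead", options={"maxiter": maxiter})
        results.append(res)
    return min(results, key=lambda r: r.fun)

# example usage on a multimodal test function (Rastrigin)
f = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)
X = np.random.uniform(-5, 5, size=(30, 2))
fX = np.array([f(x) for x in X])
print(simplex_restarts(f, X, fX).x)
```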


International Journal of Product Development | 2009

Multi-objective design optimisation using multiple adaptive spatially distributed surrogates

Amitay Isaacs; Tapabrata Ray; Warren Smith

This paper introduces an evolutionary algorithm with Multiple Adaptive Spatially Distributed Surrogates (MASDS) for multi-objective optimisation. The core optimisation algorithm is a canonical evolutionary algorithm. The solutions are evaluated using the actual analysis periodically every few generations and evaluated using surrogate models in between. An external archive of the unique solutions evaluated using actual analysis is maintained to train the surrogate models. The solutions in the archive are split into multiple partitions using k-means clustering. A surrogate model based on the Radial Basis Function (RBF) network is built for each partition and its prediction accuracy is computed using a validation set. A surrogate model for a partition is only considered valid if its prediction error is below a user-defined threshold. The performance of a new candidate solution is predicted using a valid surrogate model with the least prediction error in the neighbourhood of that point. The results of six multi-objective test problems are presented in this study, along with a welded beam design optimisation problem. A detailed comparison of the results obtained using Nondominated Sorting Genetic Algorithm II (NSGA-II), the Single Surrogate (SS) model, the Multiple Spatially Distributed Surrogate (MSDS) model and finally, the MASDS model, is presented to highlight the benefits offered by the approach.
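The validity check can be sketched as a thin wrapper over the per-partition surrogates from the earlier sketch: a surrogate is used only if its validation error is below a user-defined threshold, otherwise the expensive analysis is called. Names and the threshold value are illustrative:

```python
import numpy as np

def predict_or_evaluate(models, x, true_eval, error_threshold=0.05):
    """Use the nearest spatially distributed surrogate only if its stored
    validation error is below the threshold; otherwise fall back to the actual
    analysis. `models` holds (centre, model, validation_error) triples as in
    the earlier sketch; `true_eval` is the expensive analysis. Illustrative only.
    """
    centre, model, err = min(models, key=lambda m: np.linalg.norm(m[0] - x))
    if err <= error_threshold:
        return model(x[None, :])[0]
    return true_eval(x)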

Collaboration


Dive into Amitay Isaacs's collaborations.

Top Co-Authors

Tapabrata Ray
University of New South Wales

Warren Smith
University of New South Wales

Hemant Kumar Singh
University of New South Wales

Russell R. Boyce
University of New South Wales

Ahmad F. Mohamad Ayob
University of New South Wales

Asafuddoula
University of New South Wales

Mohammad Sharif Khan
University of New South Wales

Kathryn E. Merrick
University of New South Wales