Publication


Featured research published by Edward P. K. Tsang.


European Journal of Operational Research | 1999

Guided local search and its application to the traveling salesman problem

Christos Voudouris; Edward P. K. Tsang

The Traveling Salesman Problem (TSP) is one of the most famous problems in combinatorial optimization. In this paper, we examine how the techniques of Guided Local Search (GLS) and Fast Local Search (FLS) can be applied to the problem. GLS sits on top of local search heuristics and its main aim is to guide these procedures in exploring efficiently and effectively the vast search spaces of combinatorial optimization problems. GLS can be combined with the neighborhood reduction scheme of FLS, which significantly speeds up the operations of the algorithm. The combination of GLS and FLS with TSP local search heuristics of different efficiency and effectiveness is studied in an effort to determine the dependence of GLS on the underlying local search heuristic used. Comparisons are made with some of the best TSP heuristic algorithms and general optimization techniques, which demonstrate the advantages of GLS over alternative heuristic approaches suggested for the problem.
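
To make the idea concrete, here is a minimal Python sketch of GLS applied to the TSP, using edges as the penalized features and a plain 2-opt descent as the underlying local search. It assumes a symmetric distance matrix; the function names, the utility rule and the lambda value are illustrative rather than taken from the paper, and FLS is omitted.

    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def augmented_length(tour, dist, penalties, lam):
        # GLS augmented objective: true tour length plus lambda times the
        # accumulated penalties of the edges the tour uses
        aug = tour_length(tour, dist)
        for i in range(len(tour)):
            e = frozenset((tour[i], tour[(i + 1) % len(tour)]))
            aug += lam * penalties[e]
        return aug

    def two_opt(tour, dist, penalties, lam):
        # plain 2-opt descent on the augmented objective (illustrative, not fast)
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if augmented_length(cand, dist, penalties, lam) < \
                            augmented_length(tour, dist, penalties, lam):
                        tour, improved = cand, True
        return tour

    def guided_local_search(dist, iterations=50, lam=0.3):
        n = len(dist)
        tour = list(range(n))
        random.shuffle(tour)
        penalties = {frozenset((a, b)): 0 for a in range(n) for b in range(a + 1, n)}
        best = tour[:]
        for _ in range(iterations):
            tour = two_opt(tour, dist, penalties, lam)
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
            # penalize the edges of this local optimum with maximum utility,
            # where utility = edge length / (1 + current penalty)
            utils = {}
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                utils[frozenset((a, b))] = dist[a][b] / (1 + penalties[frozenset((a, b))])
            max_util = max(utils.values())
            for e, u in utils.items():
                if u == max_util:
                    penalties[e] += 1
        return best, tour_length(best, dist)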


Information Sciences | 2005

DE/EDA: a new evolutionary algorithm for global optimization

Jianyong Sun; Qingfu Zhang; Edward P. K. Tsang

Differential evolution (DE) has been very successful in solving the global continuous optimization problem. It mainly uses the distance and direction information from the current population to guide its further search. The estimation of distribution algorithm (EDA) samples new solutions from a probability model which characterizes the distribution of promising solutions. This paper proposes a combination of DE and EDA (DE/EDA) for the global continuous optimization problem. DE/EDA combines global information extracted by EDA with differential information obtained by DE to create promising solutions. DE/EDA has been compared with the best version of the DE algorithm and an EDA on several commonly used test problems. Experimental results demonstrate that DE/EDA outperforms both the DE algorithm and the EDA. The effect of the parameters of DE/EDA on its performance is investigated experimentally.
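
A rough Python sketch of the DE/EDA idea, assuming minimization over real vectors: part of each offspring comes from a DE-style differential step, and part from a Gaussian model fitted to the better half of the population. The 50/50 component-wise mixing, the Gaussian model and the parameter names are simplifying assumptions, not the exact operator of the paper.

    import numpy as np

    def de_eda_offspring(pop, fitness, F=0.5, top_frac=0.5, rng=np.random.default_rng()):
        """Generate one offspring per parent by mixing DE differential information
        with samples from a Gaussian model of the promising solutions (illustrative)."""
        n, d = pop.shape
        # EDA part: fit a simple Gaussian model to the better half of the population
        order = np.argsort(fitness)
        elite = pop[order[: max(2, int(top_frac * n))]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12

        offspring = np.empty_like(pop)
        for i in range(n):
            r1, r2 = rng.choice(n, size=2, replace=False)
            de_part = pop[i] + F * (pop[r1] - pop[r2])   # DE differential information
            eda_part = rng.normal(mu, sigma)             # sample from the EDA model
            mask = rng.random(d) < 0.5                   # mix the two, component-wise
            offspring[i] = np.where(mask, de_part, eda_part)
        return offspring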


Wiley Encyclopedia of Operations Research and Management Science | 2010

Guided Local Search

Christos Voudouris; Edward P. K. Tsang; Abdullah Alsheddy

Combinatorial explosion is a well-known phenomenon that prevents complete algorithms from solving many real-life combinatorial optimization problems. In many situations, heuristic search methods are needed. This chapter describes the principles of Guided Local Search (GLS) and Fast Local Search (FLS) and surveys their applications. GLS is a penalty-based metaheuristic algorithm that sits on top of other local search algorithms, with the aim of improving their efficiency and robustness. FLS is a way of reducing the size of the neighbourhood to improve the efficiency of local search. The chapter also provides guidance for implementing and using GLS and FLS. Four problems, representative of general application categories, are examined, with detailed information provided on how to build a GLS-based method in each case.


IEEE Transactions on Evolutionary Computation | 2010

Expensive Multiobjective Optimization by MOEA/D With Gaussian Process Model

Qingfu Zhang; Wudong Liu; Edward P. K. Tsang; Botond Virginas

In some expensive multiobjective optimization problems (MOPs), several function evaluations can be carried out in a batch. Therefore, it is very desirable to develop methods which can generate multiple test points simultaneously. This paper proposes such a method, called MOEA/D-EGO, for dealing with expensive multiobjective optimization. MOEA/D-EGO decomposes the MOP in question into a number of single-objective optimization subproblems. A predictive model is built for each subproblem based on the points evaluated so far. Effort has been made to reduce the overhead for modeling and to improve the prediction quality. At each generation, MOEA/D is used for maximizing the expected improvement metric values of all the subproblems, and then several test points are selected for evaluation. Extensive experimental studies have been carried out to investigate the ability of the proposed algorithm.
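
For reference, a small Python sketch of the expected improvement metric that such a method maximizes for each subproblem, assuming the surrogate returns a Gaussian prediction with mean mu and standard deviation sigma at a candidate point; the Gaussian process fitting and the MOEA/D machinery are omitted.

    import math

    def expected_improvement(mu, sigma, f_best):
        """Expected improvement for minimization, given a Gaussian prediction
        N(mu, sigma^2) at a candidate point and the best value found so far, f_best."""
        if sigma <= 0.0:
            return max(f_best - mu, 0.0)
        z = (f_best - mu) / sigma
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        return (f_best - mu) * cdf + sigma * pdf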


IEEE Transactions on Evolutionary Computation | 2005

An evolutionary algorithm with guided mutation for the maximum clique problem

Qingfu Zhang; Jianyong Sun; Edward P. K. Tsang

Estimation of distribution algorithms sample new solutions (offspring) from a probability model which characterizes the distribution of promising solutions in the search space at each generation. The location information of solutions found so far (i.e., the actual positions of these solutions in the search space) is not directly used for generating offspring in most existing estimation of distribution algorithms. This paper introduces a new operator, called guided mutation. Guided mutation generates offspring by combining global statistical information with the location information of solutions found so far. An evolutionary algorithm with guided mutation (EA/G) for the maximum clique problem is proposed in this paper. Besides guided mutation, EA/G adopts a strategy for searching different search areas in different search phases. Marchiori's heuristic is applied to each new solution to produce a maximal clique in EA/G. Experimental results show that EA/G outperforms the heuristic genetic algorithm of Marchiori (the best evolutionary algorithm reported so far) and a MIMIC algorithm on DIMACS benchmark graphs.
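
As an illustration of the operator, a minimal Python sketch of guided mutation on binary strings: each gene is either sampled from a population-level probability vector (the global statistical information) or copied from a template such as the best solution found so far. The parameter name beta and the exact mixing rule are assumptions made for the sketch.

    import random

    def guided_mutation(template, prob_vector, beta=0.3, rng=random.Random()):
        """Create one offspring: with probability beta sample each bit from the
        global probability model, otherwise copy it from the template solution."""
        return [
            (1 if rng.random() < p else 0) if rng.random() < beta else t
            for t, p in zip(template, prob_vector)
        ]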


Operations Research Letters | 1997

Fast local search and guided local search and their application to British Telecom's workforce scheduling problem

Edward P. K. Tsang; Christos Voudouris

This paper reports a fast local search (FLS) algorithm, which helps to improve the efficiency of hill climbing, and a guided local search (GLS) algorithm, which was developed to help local search escape local optima and distribute search effort. To illustrate how these algorithms work, this paper describes their application to British Telecom's workforce scheduling problem, which is a hard real-life problem. The effectiveness of FLS and GLS is demonstrated by the fact that they both outperform all the methods applied to this problem so far, which include simulated annealing, genetic algorithms and constraint logic programming.
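
A schematic Python sketch of the fast local search mechanism: the neighbourhood is split into sub-neighbourhoods with activation bits, and only active ones are scanned. The `improve` callback and its return convention are hypothetical placeholders for illustration, not BT's scheduler or the paper's implementation.

    def fast_local_search(solution, sub_neighbourhoods, improve):
        """Scan only 'active' sub-neighbourhoods; deactivate those that yield no
        improvement and reactivate the ones affected by an accepted move.
        `improve(solution, nb)` is an assumed callback returning
        (new_solution, affected_sub_neighbourhood_indices) or None."""
        active = set(range(len(sub_neighbourhoods)))
        while active:
            i = active.pop()
            result = improve(solution, sub_neighbourhoods[i])
            if result is None:
                continue                  # no improving move here; stays inactive
            solution, affected = result
            active.add(i)                 # keep the fruitful sub-neighbourhood active
            active.update(affected)       # and reactivate the ones the move touched
        return solution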


IEEE International Conference on Evolutionary Computation | 2006

Combining Model-based and Genetics-based Offspring Generation for Multi-objective Optimization Using a Convergence Criterion

Aimin Zhou; Yaochu Jin; Qingfu Zhang; Bernhard Sendhoff; Edward P. K. Tsang

In our previous work (Aimin Zhou et al., 2005), it was shown that the performance of multi-objective evolutionary algorithms can be greatly enhanced if the regularity in the distribution of Pareto-optimal solutions is exploited. This paper suggests a new hybrid multi-objective evolutionary algorithm that introduces a convergence-based criterion to determine whether the model-based method or the genetics-based method should be used to generate offspring in each generation. The basic idea is that the genetics-based method, i.e., crossover and mutation, should be used when the population is far away from the Pareto front and no obvious regularity in the population distribution can be observed. When the population moves towards the Pareto front, the distribution of the individuals shows increasing regularity, and in this case the model-based method should be used to generate offspring. The proposed hybrid method is verified on widely used test problems, and our simulation results show that the method is effective in achieving Pareto-optimal solutions compared to two state-of-the-art evolutionary multi-objective algorithms, NSGA-II and SPEA2, and to our previous method in Aimin Zhou et al. (2005).


Engineering Computations | 2004

Hybrid estimation of distribution algorithm for global optimization

Qingfu Zhang; Jianyong Sun; Edward P. K. Tsang; John A. Ford

This paper introduces a new hybrid evolutionary algorithm (EA) for continuous global optimization problems, called the estimation of distribution algorithm with local search (EDA/L). Like other EAs, EDA/L maintains and improves a population of solutions in the feasible region. Initial candidate solutions are generated by uniform design so that they scatter evenly over the feasible region. To generate a new population, a marginal histogram model is built from the global statistical information extracted from the current population, and new solutions are then sampled from this model. The incomplete simplex method is applied to every new solution generated by uniform design or sampled from the histogram model. Unconstrained optimization by diagonal quadratic approximation is applied to several selected solutions resulting from the incomplete simplex method at each generation. We study the effectiveness of the main components of EDA/L. The experimental results demonstrate that EDA/L is better than four other recent EAs in terms of solution quality and computational cost.
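
A simplified Python sketch of the marginal histogram sampling step, assuming box-constrained real variables: one histogram per variable is built from the current population, and new solutions are sampled variable by variable. The bin count, the smoothing and the omission of the uniform-design initialization and the local search steps are all simplifications.

    import numpy as np

    def sample_from_marginal_histograms(pop, bounds, n_samples, bins=10,
                                        rng=np.random.default_rng()):
        """Build one histogram per variable from the current population and draw
        new solutions by sampling each variable independently (illustrative)."""
        d = pop.shape[1]
        samples = np.empty((n_samples, d))
        for j in range(d):
            lo, hi = bounds[j]
            counts, edges = np.histogram(pop[:, j], bins=bins, range=(lo, hi))
            probs = (counts + 1e-12) / (counts.sum() + bins * 1e-12)
            chosen = rng.choice(bins, size=n_samples, p=probs)
            # draw uniformly within each chosen bin
            samples[:, j] = rng.uniform(edges[chosen], edges[chosen + 1])
        return samples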


International Conference on Evolutionary Multi-Criterion Optimization | 2007

Prediction-based population re-initialization for evolutionary dynamic multi-objective optimization

Aimin Zhou; Yaochu Jin; Qingfu Zhang; Bernhard Sendhoff; Edward P. K. Tsang

Optimization in a changing environment is a challenging task, especially when multiple objectives are to be optimized simultaneously. The basic idea for addressing dynamic optimization problems is to utilize historical information to guide the future search. In this paper, two strategies for population re-initialization are introduced for when a change in the environment is detected. The first strategy is to predict the new location of individuals from the location changes that have occurred in the past. The current population is then partially or completely replaced by the new individuals generated based on the prediction. The second strategy is to perturb the current population with Gaussian noise whose variance is estimated from previous changes. The prediction-based population re-initialization strategies, together with the random re-initialization method, are then compared on two bi-objective test problems. Conclusions on the different re-initialization strategies are drawn based on the preliminary empirical results.
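
A toy Python sketch of the two re-initialization strategies, assuming populations are stored as NumPy arrays of decision vectors and that the populations from the last two environments are available; the exact prediction model and variance estimate in the paper differ, so treat this purely as an outline.

    import numpy as np

    def reinitialize(pop_t, pop_t_minus_1, strategy="predict",
                     rng=np.random.default_rng()):
        """Re-seed the population after a detected environment change."""
        step = pop_t - pop_t_minus_1                  # movement over the last change
        if strategy == "predict":
            # linear prediction of the new locations, plus a little noise
            noise = rng.normal(0.0, step.std() + 1e-12, size=pop_t.shape)
            return pop_t + step + noise
        # "perturb": Gaussian noise whose variance is estimated from the previous change
        return pop_t + rng.normal(0.0, step.std(axis=0) + 1e-12, size=pop_t.shape)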


Decision Support Systems | 2004

EDDIE-automation, a decision support tool for financial forecasting

Edward P. K. Tsang; Paul Yung; Jin Li

The Evolutionary Dynamic Data Investment Evaluator (EDDIE) is a genetic programming (GP)-based decision support tool for financial forecasting. EDDIE itself does not replace forecasting experts. It serves to improve the productivity of experts in searching the space of decision trees, with the aim of improving the odds in its users' favour. The efficacy of EDDIE has been reported in the literature. However, discovering patterns in historical data is only the first step towards building a practical financial forecasting tool. Data preparation, rule organization and application are all important issues. This paper describes an architecture that embeds EDDIE for learning from and monitoring the stock market.

Collaboration


Dive into Edward P. K. Tsang's collaborations.

Top Co-Authors

Qingfu Zhang

City University of Hong Kong

Shu-Heng Chen

National Chengchi University

Alvin C. M. Kwan

Hong Kong Polytechnic University
