
Publications


Featured research published by Sanaz Mostaghim.


IEEE Swarm Intelligence Symposium | 2003

Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO)

Sanaz Mostaghim; Jürgen Teich

In multi-objective particle swarm optimization (MOPSO) methods, selecting the best local guide (the global best particle) for each particle of the population from a set of Pareto-optimal solutions has a great impact on the convergence and diversity of solutions, especially when optimizing problems with a high number of objectives. This paper introduces the Sigma method, a new method for finding the best local guide for each particle of the population. The Sigma method is implemented and compared with another method, which uses the strategy of an existing MOPSO method for finding the local guides. These methods are examined on different test functions, and the results are compared with those of a multi-objective evolutionary algorithm (MOEA).
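As a rough illustration of the idea for two objectives: the Sigma method assigns each point a scalar sigma value, and a particle follows the archive member whose sigma value is closest to its own. The sketch below assumes minimization with strictly positive objective vectors; function names are mine, not from the paper.

```python
def sigma(f1, f2):
    # Two-objective sigma value: points on the same line through the
    # origin share the same sigma (assumes f1 and f2 are not both zero).
    return (f1 ** 2 - f2 ** 2) / (f1 ** 2 + f2 ** 2)

def select_local_guide(particle_obj, archive):
    # The particle's local guide is the archive member whose sigma value
    # is closest to the particle's own sigma value.
    s_p = sigma(*particle_obj)
    return min(archive, key=lambda a: abs(sigma(*a) - s_p))
```

A particle near one end of the front is thus guided by an archive member in the same angular region, which is what drives the method's diversity preservation.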


Archive | 2016

Introduction to Neural Networks

Rudolf Kruse; Christian Borgelt; Christian Braune; Sanaz Mostaghim; Matthias Steinbrecher

(Artificial) neural networks are information processing systems, whose structure and operation principles are inspired by the nervous system and the brain of animals and humans. They consist of a large number of fairly simple units, the so-called neurons, which are working in parallel. These neurons communicate by sending information in the form of activation signals, along directed connections, to each other.
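A single artificial neuron of the kind described above can be sketched in a few lines: a weighted sum of inputs plus a bias, passed through a logistic activation. This is a generic textbook example, not code from the book.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the incoming activation signals plus a bias term,
    # squashed through the logistic (sigmoid) activation function.
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))
```

With weights [10, 10] and bias -15, for example, the unit approximates a logical AND of two binary inputs.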


Congress on Evolutionary Computation | 2004

Covering Pareto-optimal fronts by subswarms in multi-objective particle swarm optimization

Sanaz Mostaghim; Jürgen Teich

Covering the whole set of Pareto-optimal solutions is a desired task of multi-objective optimization methods. Because it is in general not possible to determine this set, a restricted number of solutions is typically delivered to the decision makers. We propose a method using multi-objective particle swarm optimization to cover the Pareto-optimal front. The method works in two phases. In the first phase, the goal is to obtain a good approximation of the Pareto-front. In the second phase, subswarms are generated to cover the Pareto-front. The method is evaluated on different test functions and compared with an existing covering method using a real-world example in antenna design.
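The second phase can be caricatured as follows: split the phase-1 approximation of the front into groups, each of which seeds one subswarm that refines its own region. This is a minimal sketch (sorting along the first objective and chunking), not the paper's actual partitioning scheme.

```python
def make_subswarms(front, n_subswarms):
    # Order the approximated front along the first objective, then split
    # it into contiguous groups; each group seeds one subswarm.
    front = sorted(front)
    size = -(-len(front) // n_subswarms)  # ceiling division
    return [front[i:i + size] for i in range(0, len(front), size)]
```

Each subswarm then runs a (mostly) independent MOPSO restricted to its region, so together the subswarms cover the front more densely than one global swarm.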


International Conference on Evolutionary Multi-Criterion Optimization | 2007

Heatmap visualization of population based multi objective algorithms

Andy Pryke; Sanaz Mostaghim; Alireza Nazemi

Understanding the results of a multi-objective optimization process can be hard. Various visualization methods have been proposed previously, but the only consistently popular one is the 2D or 3D objective scatterplot, which cannot be extended to handle more than 3 objectives. Additionally, the visualization of high-dimensional parameter spaces has traditionally been neglected. We propose a new method, based on heatmaps, for the simultaneous visualization of objective and parameter spaces. We demonstrate its application on a simple 3D test function and also apply heatmaps to the analysis of real-world optimization problems. Finally we use the technique to compare the performance of two different multi-objective algorithms.
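The core data transformation behind such a heatmap is simple: normalize each column (a parameter or an objective) to [0, 1] so that every solution becomes one row of comparable heat values. A minimal sketch of that preprocessing step (the rendering itself is omitted):

```python
def heatmap_rows(solutions):
    # Column-wise min-max normalization: each solution (one row) holds its
    # parameter and objective values rescaled to [0, 1] per column.
    cols = list(zip(*solutions))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) or 1.0 for c in cols]  # guard constant columns
    return [[(v - l) / s for v, l, s in zip(row, lo, span)]
            for row in solutions]
```

Plotting the resulting matrix as colored cells, with rows optionally reordered by similarity, gives a single picture of arbitrarily many objectives and parameters.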


Congress on Evolutionary Computation | 2003

The role of ε-dominance in multi-objective particle swarm optimization methods

Sanaz Mostaghim; Jürgen Teich

In this paper, the influence of ε-dominance on multi-objective particle swarm optimization (MOPSO) methods is studied. The most important role of ε-dominance is to bound the number of non-dominated solutions stored in the archive (the archive size), which influences the computational time, convergence and diversity of solutions. Here, ε-dominance is compared with an existing clustering technique for fixing the archive size, and the solutions are compared in terms of computational time, convergence and diversity. A new diversity metric is also suggested. The results show that the ε-dominance method can find solutions much faster than the clustering technique, with comparable and in some cases even better convergence and diversity.
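The archive-bounding mechanism can be sketched concretely: the objective space is partitioned into boxes of side ε, and at most one representative per box is kept. The code below is an illustrative ε-dominance archive update for minimization, in the spirit of the technique, not the paper's exact implementation.

```python
def box(obj, eps):
    # Index of the eps-box containing an objective vector.
    return tuple(int(f // eps) for f in obj)

def dominates(a, b):
    # Pareto dominance for minimization.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate, eps):
    # Keep at most one representative per eps-box, which bounds the
    # archive size regardless of how many non-dominated points are found.
    cb = box(candidate, eps)
    kept = []
    for sol in archive:
        sb = box(sol, eps)
        if dominates(sb, cb) or (sb == cb and not dominates(candidate, sol)):
            return archive  # candidate is eps-dominated: reject it
        if dominates(cb, sb) or sb == cb:
            continue        # candidate displaces this archive member
        kept.append(sol)
    return kept + [candidate]
```

Because the number of occupied boxes is finite for a bounded front, the archive size (and hence update cost) stays bounded, which is exactly the speed advantage over clustering studied here.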


Genetic and Evolutionary Computation Conference | 2007

Multi-objective particle swarm optimization on computer grids

Sanaz Mostaghim; Juergen Branke; Hartmut Schmeck

In recent years, a number of authors have successfully extended particle swarm optimization to problem domains with multiple objectives. This paper addresses the issue of parallelizing multi-objective particle swarms. We propose and empirically compare two parallel versions which differ in the way they divide the swarm into subswarms that can be processed independently on different processors. One of the variants works asynchronously and is thus particularly suitable for heterogeneous computer clusters as occurring e.g. in modern grid computing platforms.
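The synchronous variant can be outlined as: split the swarm into subswarms, process each on its own worker, then merge before the next iteration. The sketch below uses a thread pool and a trivial stand-in for the per-subswarm MOPSO step; names and the worker mechanism are my assumptions, not the paper's grid setup.

```python
from concurrent.futures import ThreadPoolExecutor

def iterate_subswarm(subswarm):
    # Stand-in for one independent MOPSO iteration on a subswarm; a real
    # implementation would update velocities, positions and a local archive.
    return [x + 1 for x in subswarm]

def parallel_step(swarm, n_workers):
    # Synchronous variant: divide the swarm, process subswarms in
    # parallel, and merge the results before the next global iteration.
    size = -(-len(swarm) // n_workers)  # ceiling division
    subswarms = [swarm[i:i + size] for i in range(0, len(swarm), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        results = ex.map(iterate_subswarm, subswarms)
    return [p for sub in results for p in sub]
```

The asynchronous variant drops the merge barrier, letting fast workers continue while slow ones lag, which is why it suits heterogeneous clusters.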


IEEE Transactions on Evolutionary Computation | 2013

Experimental Analysis of Bound Handling Techniques in Particle Swarm Optimization

Sabine Helwig; Juergen Branke; Sanaz Mostaghim

Many practical optimization problems are constrained and have a bounded search space. In this paper, we propose and compare a wide variety of bound handling techniques for particle swarm optimization. By examining their performance on flat landscapes, we show that many bound handling techniques introduce significant search bias. Furthermore, we compare the performance of many bound handling techniques on a variety of test problems, demonstrating that the bound handling technique can have a major impact on the algorithm performance, and that the method recently proposed as the standard does not, in general, perform well.
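Three common bound handling strategies of the kind compared in such studies can be written per coordinate: absorb (clamp to the bound), reflect (mirror back inside), and wrap (periodic boundary). These are illustrative textbook versions, not the paper's exact definitions, and they only adjust the position, not the velocity.

```python
def absorb(x, lo, hi):
    # Absorb: place the violating coordinate directly on the bound.
    return min(max(x, lo), hi)

def reflect(x, lo, hi):
    # Reflect: mirror the violating coordinate back into [lo, hi],
    # bouncing repeatedly for large overshoots.
    span = hi - lo
    y = (x - lo) % (2 * span)
    return lo + (2 * span - y if y > span else y)

def wrap(x, lo, hi):
    # Wrap: treat [lo, hi] as periodic (toroidal search space).
    return lo + (x - lo) % (hi - lo)
```

The paper's flat-landscape experiments show that such choices are not neutral: absorbing, for instance, piles particles onto the bounds, a bias invisible on ordinary benchmarks.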


Parallel Problem Solving from Nature | 2006

About selecting the personal best in multi-objective particle swarm optimization

Jürgen Branke; Sanaz Mostaghim

In particle swarm optimization, a particle's movement is usually guided by two solutions: the swarm's global best and the particle's personal best. Selecting these guides in the case of multiple objectives is not straightforward. In this paper, we investigate the influence of the personal best particles in multi-objective particle swarm optimization. We show that selecting a proper personal guide has a significant impact on algorithm performance. We propose a new idea of allowing each particle to memorize all non-dominated personal best particles it has encountered. This means that if the updated personal best position is indifferent to the old one (neither dominates the other), we keep both in the personal archive. We also propose several strategies to select a personal best particle from the personal archive. These methods are empirically compared on some standard test problems.
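The personal-archive idea can be sketched as a non-dominated update plus a selection strategy. This is a minimal sketch assuming minimization; the random draw is just one of several possible selection strategies, and the function names are mine.

```python
import random

def dominates(a, b):
    # Pareto dominance for minimization.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_pbest_archive(archive, new_obj):
    # Keep every mutually non-dominated personal best: if the new position
    # is indifferent to the stored ones, both stay in the archive.
    if any(dominates(old, new_obj) for old in archive):
        return archive
    return [old for old in archive if not dominates(new_obj, old)] + [new_obj]

def pick_pbest(archive, rng=random):
    # One simple selection strategy: draw the personal guide uniformly
    # at random from the archive of non-dominated personal bests.
    return rng.choice(archive)
```

More informed strategies could, for example, prefer the archived personal best closest to the particle's current position or to its global guide.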


Multiobjective Optimization | 2008

Parallel Approaches for Multiobjective Optimization

El-Ghazali Talbi; Sanaz Mostaghim; Tatsuya Okabe; Hisao Ishibuchi; Günter Rudolph; Carlos A. Coello Coello

This chapter presents a general overview of parallel approaches for multiobjective optimization. For this purpose, we propose a taxonomy for parallel metaheuristics and exact methods. This chapter covers the design aspect of the algorithms as well as the implementation aspects on different parallel and distributed architectures.


International Conference on Evolutionary Multi-Criterion Optimization | 2003

Covering Pareto sets by multilevel evolutionary subdivision techniques

Oliver Schütze; Sanaz Mostaghim; Michael Dellnitz; Jürgen Teich

We present new hierarchical set-oriented methods for the numerical solution of multi-objective optimization problems. These methods are based on generating collections of subdomains (boxes) in parameter space which cover the entire set of Pareto points. In the course of the subdivision procedure these coverings get tighter until a desired granularity of the covering is reached. For the evaluation of these boxes we make use of evolutionary algorithms. We propose two particular strategies and discuss combinations of those which lead to a better algorithmic performance. Finally we illustrate the efficiency of our methods by several examples.
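The subdivision loop itself is simple to sketch: bisect every box along its longest edge, then discard boxes that a selection test rejects (in the paper, boxes an evolutionary algorithm judges unable to contain Pareto points). The sketch below substitutes a caller-supplied `keep` predicate for that EA-based test.

```python
def bisect(box):
    # Split a box (one (lo, hi) pair per dimension) along its longest edge.
    dim = max(range(len(box)), key=lambda d: box[d][1] - box[d][0])
    lo, hi = box[dim]
    mid = (lo + hi) / 2
    left, right = list(box), list(box)
    left[dim], right[dim] = (lo, mid), (mid, hi)
    return left, right

def subdivide(boxes, keep, depth):
    # Repeatedly bisect every surviving box; the covering of the kept
    # region gets tighter with each level of the hierarchy.
    for _ in range(depth):
        boxes = [half for b in boxes for half in bisect(b) if keep(half)]
    return boxes
```

After `depth` levels the surviving boxes form a covering of the target set with edge lengths halved `depth` times along the split dimensions.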

Collaboration


Dive into Sanaz Mostaghim's collaborations.

Top Co-Authors

Hartmut Schmeck (Karlsruhe Institute of Technology)
Heiner Zille (Otto-von-Guericke University Magdeburg)
Jürgen Teich (University of Erlangen-Nuremberg)
Marcus Geimer (Karlsruhe Institute of Technology)
Timo Kautzmann (Karlsruhe Institute of Technology)
Micaela Wünsche (Karlsruhe Institute of Technology)
Christian Borgelt (Otto-von-Guericke University Magdeburg)
Christian Braune (Otto-von-Guericke University Magdeburg)
Matthias Steinbrecher (Otto-von-Guericke University Magdeburg)