Alexander Scheidler
Leipzig University
Publications
Featured research published by Alexander Scheidler.
Memetic Computing | 2011
Konrad Diwold; Andrej Aderhold; Alexander Scheidler; Martin Middendorf
The artificial bee colony optimization (ABC) is a population-based algorithm for function optimization that is inspired by the foraging behavior of bees. The population consists of two types of artificial bees: employed bees (EBs), which scout for new, good solutions, and onlooker bees (OBs), which search in the neighborhood of solutions found by the EBs. In this paper we study in detail the influence of ABC’s parameters on its optimization behavior. It is also investigated whether the use of OBs is always advantageous. Moreover, we propose two new variants of ABC which use new methods for the position update of the artificial bees. Extensive empirical tests were performed to compare the new variants with the standard ABC and several other metaheuristics on a set of benchmark functions. Our findings show that the ideal parameter values depend on the hardness of the optimization goal and that the standard values suggested in the literature should be applied with care. Moreover, it is shown that in some situations it is advantageous to use OBs, but in others it is not. In addition, a potential problem of the ABC is identified, namely that it performs worse on many functions when the optimum is not located at the center of the search space. Finally, it is shown that the new ABC variants improve the algorithm’s performance and achieve very good performance in comparison to other metaheuristics under standard as well as hard optimization goals.
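For readers unfamiliar with the baseline algorithm, the sketch below outlines a standard, Karaboga-style ABC loop of the kind the paper studies and modifies. The sphere test function, parameter values, and implementation details are illustrative assumptions; the paper's new position-update variants are not reproduced here.

```python
# Minimal sketch of the standard ABC loop the paper builds on (Karaboga-style);
# the new position-update variants proposed in the paper are not reproduced here.
# Parameter values and the shifted sphere test function are illustrative assumptions.
import numpy as np

def abc_minimize(f, dim, bounds, n_sources=20, limit=50, max_iters=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_sources, dim))      # food sources (candidate solutions)
    fx = np.apply_along_axis(f, 1, x)
    trials = np.zeros(n_sources, dtype=int)

    def neighbor(i):
        # v_ij = x_ij + phi * (x_ij - x_kj) for one random dimension j and partner k != i
        k = rng.choice([s for s in range(n_sources) if s != i])
        j = rng.integers(dim)
        v = x[i].copy()
        v[j] += rng.uniform(-1, 1) * (x[i, j] - x[k, j])
        return np.clip(v, lo, hi)

    def try_update(i):
        v = neighbor(i)
        fv = f(v)
        if fv < fx[i]:                              # greedy selection
            x[i], fx[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_iters):
        for i in range(n_sources):                  # employed-bee phase
            try_update(i)
        fit = 1.0 / (1.0 + fx - fx.min())           # onlooker selection probabilities
        probs = fit / fit.sum()
        for _ in range(n_sources):                  # onlooker-bee phase
            try_update(rng.choice(n_sources, p=probs))
        worn = np.argmax(trials)                    # scout phase: abandon an exhausted source
        if trials[worn] > limit:
            x[worn] = rng.uniform(lo, hi, dim)
            fx[worn] = f(x[worn])
            trials[worn] = 0
    return x[np.argmin(fx)], fx.min()

# Example: sphere function with an optimum away from the center of the search space,
# the situation the abstract identifies as problematic for standard ABC.
best, val = abc_minimize(lambda v: np.sum((v - 30.0) ** 2), dim=10, bounds=(-100, 100))
```

The shifted optimum in the example mirrors the paper's observation that standard ABC can degrade when the optimum is not located at the center of the search space.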
Swarm Intelligence | 2011
Marco Antonio Montes de Oca; Eliseo Ferrante; Alexander Scheidler; Carlo Pinciroli; Mauro Birattari; Marco Dorigo
Collective decision-making is a process whereby the members of a group decide on a course of action by consensus. In this paper, we propose a collective decision-making mechanism for robot swarms deployed in scenarios in which robots can choose between two actions that have the same effects but that have different execution times. The proposed mechanism allows a swarm composed of robots with no explicit knowledge about the difference in execution times between the two actions to choose the one with the shorter execution time. We use an opinion formation model that captures important elements of the scenarios in which the proposed mechanism can be used in order to predict the system’s behavior. The model predicts that when the two actions have different average execution times, the swarm chooses with high probability the action with the shorter average execution time. We validate the model’s predictions through a swarm robotics experiment in which robot teams must choose one of two paths of different length that connect two locations. Thanks to the proposed mechanism, a swarm made of robot teams that do not measure time or distance is able to choose the shorter path.
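The mechanism can be illustrated with a minimal agent-based sketch of majority-rule opinion dynamics with differential latency: teams adopt the locally dominant opinion and then remain busy for a time that depends on the chosen action, so opinions tied to the faster action re-enter the decision pool more often. Team size, latency distributions, and swarm size below are illustrative assumptions, not the paper's experimental setup.

```python
# Illustrative sketch of majority-rule opinion formation with differential latency:
# teams adopt the local majority opinion and are then "busy" for a time that depends
# on the chosen action. Team size, latencies, and swarm size are assumptions.
import random

def simulate(n_robots=30, team_size=3, mean_time=(1.0, 2.0), steps=20000, seed=1):
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n_robots)]   # 0 = action A, 1 = action B
    free = list(range(n_robots))                              # robots waiting to form a team
    busy = []                                                 # (finish_time, robot, opinion)
    t = 0.0
    for _ in range(steps):
        # release robots whose action execution has finished
        busy, done = [b for b in busy if b[0] > t], [b for b in busy if b[0] <= t]
        for _, r, op in done:
            opinions[r] = op
            free.append(r)
        # form one team, apply the majority rule, send it off to execute the chosen action
        if len(free) >= team_size:
            team = [free.pop(rng.randrange(len(free))) for _ in range(team_size)]
            majority = round(sum(opinions[r] for r in team) / team_size)
            latency = rng.expovariate(1.0 / mean_time[majority])  # slower action -> longer latency
            busy.extend((t + latency, r, majority) for r in team)
        t += 0.01
    return sum(opinions) / n_robots   # fraction holding opinion B

print(simulate())   # tends toward 0: consensus on the action with the shorter mean execution time
```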
IEEE Transactions on Systems, Man, and Cybernetics | 2016
Alexander Scheidler; Arne Brutschy; Eliseo Ferrante; Marco Dorigo
In this paper, we propose a collective decision-making method for swarms of robots. The method enables a robot swarm to select, from a set of possible actions, the one that has the fastest mean execution time. By means of positive feedback the method achieves consensus on the fastest action. The novelty of our method is that it allows robots to collectively find consensus on the fastest action without measuring explicitly the execution times of all available actions. We study two analytical models of the decision-making method in order to understand the dynamics of the consensus formation process. Moreover, we verify the applicability of the method in a real swarm robotics scenario. To this end, we conduct three sets of experiments that show that a robotic swarm can collectively select the shortest of two paths. Finally, we use a Monte Carlo simulation model to study and predict the influence of different parameters on the method.
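In the spirit of the Monte Carlo model mentioned in the abstract, the sketch below estimates how often a swarm settles on the faster of two actions under a simple positive-feedback rule (free robots copy the opinion of whichever robot finished its action most recently). This rule is an illustrative stand-in, not the paper's actual decision-making method.

```python
# Hedged Monte Carlo sketch: repeated independent runs of a simple positive-feedback
# process estimate the probability that the swarm reaches consensus on the action
# with the shorter mean execution time. The copy rule and parameters are assumptions.
import random

def one_run(n=20, mean_time=(1.0, 2.0), max_events=5000, rng=None):
    rng = rng or random.Random()
    opinion = [rng.randint(0, 1) for _ in range(n)]                    # 0 = faster action
    finish = [rng.expovariate(1.0 / mean_time[opinion[i]]) for i in range(n)]
    for _ in range(max_events):
        if len(set(opinion)) == 1:
            return opinion[0]                                          # consensus reached
        i = min(range(n), key=finish.__getitem__)                      # next robot to finish
        j = rng.randrange(n)
        opinion[j] = opinion[i]                                        # positive feedback: copy the finisher
        finish[i] += rng.expovariate(1.0 / mean_time[opinion[i]])      # robot i starts its action again
    return None                                                        # no consensus within the budget

def consensus_rate(runs=200, **kw):
    rng = random.Random(7)
    wins = sum(1 for _ in range(runs) if one_run(rng=rng, **kw) == 0)
    return wins / runs

print(consensus_rate())    # close to 1: the faster action (index 0) usually wins
```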
NICSO | 2010
Andrej Aderhold; Konrad Diwold; Alexander Scheidler; Martin Middendorf
The artificial bee colony optimization (ABC) is a population-based algorithm for function optimization that is inspired by the foraging behaviour of bees. The population consists of two types of artificial bees: employed bees (EBs), which scout for new, good solutions in the search space, and onlooker bees (OBs), which search in the neighbourhood of solutions found by the EBs. In this paper we study the influence of the population size on the optimization behaviour of ABC. Moreover, we investigate when it is advantageous to use OBs. We also propose two variants of ABC which use new methods for the position update of the artificial bees. Empirical tests were performed on a set of benchmark functions. Our findings show that the ideal population size, and whether it is advantageous to use OBs, depend on the hardness of the optimization goal. Additionally, the newly proposed variants of ABC significantly outperform the standard ABC on all test functions. In comparison to several other optimization algorithms, the best ABC variant performs better than, or at least as well as, all reference algorithms in most cases.
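For reference, the position update of the standard ABC that the proposed variants replace can be written, in the notation commonly used in the literature (not taken verbatim from the paper), as

$$ v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}), \qquad \phi_{ij} \sim \mathcal{U}(-1, 1), $$

where $x_i$ is the current food source, $x_k$ with $k \neq i$ is a randomly chosen other source, and $j$ is a randomly chosen dimension; the candidate $v_i$ replaces $x_i$ only if it improves the objective value. Onlooker bees choose which source to refine with probability proportional to its fitness, $p_i = \mathrm{fit}_i / \sum_m \mathrm{fit}_m$.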
IEEE Swarm Intelligence Symposium | 2009
Hugo Hernández; Christian Blum; Martin Middendorf; Kai Ramsch; Alexander Scheidler
When asked whether ants rest or work untiringly all day long, most people would probably respond that they have no idea. In fact, when watching the bustling life of an ant hill, it is hard to imagine that ants take a rest now and then. However, biologists have discovered that ants rest for quite a large fraction of their time. Surprisingly, not only do single ants show alternating phases of resting and activity, but whole ant colonies exhibit synchronized activity phases that result from self-organization. Inspired by this self-synchronization behaviour of ant colonies, we develop a mechanism for self-synchronized duty-cycling in mobile sensor networks. In addition, we equip sensor nodes with energy harvesting capabilities such as solar cells. We show that the self-synchronization mechanism can be made adaptive to the available energy.
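A toy model can convey the idea: each sensor node's probability of waking up grows with the activity of its neighbours and is scaled by its harvested energy level, so activity spreads in bursts and is throttled when energy runs low. The topology, coupling function, and energy parameters below are assumptions for illustration, not the paper's model.

```python
# Illustrative sketch of self-synchronized, energy-aware duty-cycling: a node tends
# to wake up when its neighbours are active, stays active for a short burst, and its
# wake-up probability is scaled down when its harvested energy budget is low.
import random

def step(active, energy, neighbors, rng, base_p=0.01, couple=0.5, stay_p=0.7,
         cost=1.0, harvest=0.4, full=100.0):
    new_active = []
    for i in range(len(active)):
        frac = sum(active[j] for j in neighbors[i]) / len(neighbors[i])
        p_wake = min(1.0, (base_p + couple * frac) * (energy[i] / full))   # energy-aware wake-up
        if active[i]:
            new_active.append(rng.random() < stay_p)       # short activity bursts
        else:
            new_active.append(rng.random() < p_wake)
        energy[i] = max(0.0, min(full, energy[i] + harvest - (cost if new_active[i] else 0.0)))
    return new_active

def simulate(n=50, steps=2000, seed=3):
    rng = random.Random(seed)
    neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]   # ring communication topology
    active = [rng.random() < 0.1 for _ in range(n)]
    energy = [50.0] * n
    trace = []
    for _ in range(steps):
        active = step(active, energy, neighbors, rng)
        trace.append(sum(active))               # network-wide activity at each step
    return trace

trace = simulate()
print(min(trace), max(trace))                   # inspect how network-wide activity varies over the run
```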
IEEE Swarm Intelligence Symposium | 2007
Daniel Merkle; Martin Middendorf; Alexander Scheidler
A new approach to prevent negative emergent behaviors of adaptive or organic computing systems is presented. One characteristic of such computing systems is the use of self-organisation principles from nature and of components that make decentralized decisions. Controlling such systems is a difficult task. In this paper we propose to exert control by introducing a swarm of so-called anti-components into the system that can prevent the negative emergence. As an example, we use a model that is inspired by the emergent behavior of ants that cluster different items; this model system has already been used for several applications in computer science. Different types of anti-components (or anti-agents) that can prevent clustering behavior are designed for this system. Several cluster validity measures are used to investigate the clustering behavior of a system that contains standard clustering agents together with anti-clustering agents. It is shown that such systems can exhibit complex behavior over time, where a phase of item distributions with increasing order is followed by distributions with an increasing degree of clustering. It is also shown that a medium number of certain anti-clustering agents (which in larger numbers completely prevent any clustering) may even help the system to perform a good clustering faster.
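The sketch below pairs a Deneubourg/Lumer–Faieta-style pick/drop rule, which is the usual basis of ant-inspired clustering models, with one naive "anti-agent" that simply inverts it (picking up from dense areas and dropping in sparse ones). This is an illustrative stand-in; the paper designs and compares several anti-component types, and the constants used here are assumptions.

```python
# Minimal sketch of ant-based clustering with an "anti-agent". Regular agents use
# Deneubourg-style pick/drop probabilities; the anti-agent inverts them.
import random

K1, K2 = 0.1, 0.3   # standard pick/drop constants (illustrative values)

def local_density(grid, x, y, r=1):
    n, hits = 0, 0
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            if (dx, dy) != (0, 0):
                n += 1
                hits += grid[(x + dx) % len(grid)][(y + dy) % len(grid)]
    return hits / n

def p_pick(f):
    return (K1 / (K1 + f)) ** 2

def p_drop(f):
    return (f / (K2 + f)) ** 2

def agent_step(grid, x, y, carrying, rng, anti=False):
    f = local_density(grid, x, y)
    pick, drop = (p_drop(f), p_pick(f)) if anti else (p_pick(f), p_drop(f))
    if not carrying and grid[x][y] == 1 and rng.random() < pick:
        grid[x][y], carrying = 0, True
    elif carrying and grid[x][y] == 0 and rng.random() < drop:
        grid[x][y], carrying = 1, False
    # random walk to a neighbouring cell on the toroidal grid
    x = (x + rng.choice((-1, 0, 1))) % len(grid)
    y = (y + rng.choice((-1, 0, 1))) % len(grid)
    return x, y, carrying
```

A full experiment would scatter items on a toroidal grid and interleave agent_step calls for a mixture of regular and anti agents, tracking a cluster validity measure over time.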
Intelligent Robots and Systems | 2012
Arne Brutschy; Alexander Scheidler; Eliseo Ferrante; Marco Dorigo; Mauro Birattari
In swarm robotics, large groups of relatively simple robots cooperate so that they can perform tasks that go beyond their individual capabilities [1], [2]. The interactions among the robots are based on simple behavioral rules that exploit only local information. The robots in a swarm have neither global knowledge, nor a central controller. Therefore, decisions in the swarm have to be taken in a distributed manner based on local interactions. Because of these limitations, the design of collective decision-making methods in swarm robotic systems is a challenging problem. Moreover, the collective decision-making method must be efficient, robust with respect to robot failures, and scale well with the size of the swarm.
Ant Colony Optimization and Swarm Intelligence | 2008
Arne Brutschy; Alexander Scheidler; Daniel Merkle; Martin Middendorf
This paper proposes ant-inspired strategies for self-organized and decentralized collective decision-making in computing systems which employ reconfigurable units. The particular principles used for the design of these strategies are inspired by the house-hunting of the ant Temnothorax albipennis. The considered computing system consists of two types of units: so-called worker units that are able to execute jobs that come into the system, and scout units that are additionally responsible for the reconfiguration process of all units. The ant-inspired strategies are analyzed experimentally and are compared to a non-adaptive reference strategy. It is shown that the ant-inspired strategies lead to a collective decentralized decision process through which the units are able to find good configurations that lead to a high system throughput even in complex configuration spaces.
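A quorum threshold is a central ingredient of Temnothorax house-hunting and of strategies inspired by it: scouts assess candidate options, commit with a quality-dependent probability, and the colony switches once enough scouts agree. The sketch below illustrates only that ingredient applied to choosing a unit configuration; quality values, quorum size, and scout count are assumptions, and the worker/scout reconfiguration process of the paper is not reproduced.

```python
# Minimal sketch of quorum-based selection of a unit configuration, in the spirit of
# Temnothorax-style house hunting. Qualities, quorum size, and scout count are assumptions.
import random

def choose_configuration(qualities, n_scouts=20, quorum=10, max_rounds=10000, seed=5):
    rng = random.Random(seed)
    commitment = [None] * n_scouts                    # which configuration each scout backs
    for _ in range(max_rounds):
        s = rng.randrange(n_scouts)
        c = rng.randrange(len(qualities))             # scout assesses a random configuration
        if rng.random() < qualities[c]:               # commit with quality-dependent probability
            commitment[s] = c
        elif commitment[s] == c:
            commitment[s] = None                      # occasionally abandon a poor choice
        counts = [commitment.count(k) for k in range(len(qualities))]
        best = max(range(len(qualities)), key=counts.__getitem__)
        if counts[best] >= quorum:                    # quorum reached: reconfigure all units
            return best
    return None

# e.g. three candidate configurations with normalized throughput estimates
print(choose_configuration([0.2, 0.5, 0.8]))          # usually selects the last one
```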
International Parallel and Distributed Processing Symposium | 2006
Daniel Merkle; Martin Middendorf; Alexander Scheidler
A self-organized allocation scheme for service tasks in computing systems is proposed in this paper. Usually, components of a computing system need some service from time to time in order to perform their work efficiently. In adaptive computing systems the components and the necessary tasks adapt to the needs of users or of the environment. Since in such cases the type of service tasks will often change, it is attractive to use reconfigurable hardware to perform them. The studied system consists of normal worker components and helper components which have reconfigurable hardware and can perform different service tasks. The speed with which a service task is executed by a helper depends on the helper's current configuration. Different strategies for the helpers to decide about service task acceptance and reconfiguration are proposed. These strategies are inspired by the stimulus-threshold models that are used to explain task allocation in social insects.
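The stimulus-threshold models referred to here typically let an individual accept a task of stimulus intensity s with probability s^2 / (s^2 + theta^2), where theta is that individual's response threshold. The sketch below applies this rule to a helper component; the threshold values, the reconfiguration rule, and the task-type names are placeholders for illustration, not the strategies proposed in the paper.

```python
# Sketch of a stimulus-threshold acceptance rule for a helper component, in the
# spirit of response-threshold models of insect task allocation. A helper accepts a
# service task with probability s^2 / (s^2 + theta^2); theta is lower (acceptance
# more likely) for the task type its reconfigurable hardware is currently set up for.
# Thresholds, reconfiguration rule, and task-type names are illustrative assumptions.
import random

def accept_probability(stimulus, threshold):
    return stimulus ** 2 / (stimulus ** 2 + threshold ** 2)

class Helper:
    def __init__(self, configured_for, theta_low=0.2, theta_high=1.0, seed=None):
        self.configured_for = configured_for          # current hardware configuration
        self.theta_low, self.theta_high = theta_low, theta_high
        self.rng = random.Random(seed)

    def maybe_accept(self, task_type, stimulus):
        theta = self.theta_low if task_type == self.configured_for else self.theta_high
        if self.rng.random() < accept_probability(stimulus, theta):
            if task_type != self.configured_for:
                self.configured_for = task_type       # reconfigure before serving the task
            return True
        return False

helper = Helper(configured_for="task_a", seed=2)      # "task_a"/"task_b" are placeholder task types
print(helper.maybe_accept("task_b", stimulus=0.9))    # high stimulus: likely accepted despite reconfiguration
```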
Artificial Life | 2014
Giovanni Pini; Arne Brutschy; Alexander Scheidler; Marco Dorigo; Mauro Birattari
We study task partitioning in the context of swarm robotics. Task partitioning is the decomposition of a task into subtasks that can be tackled by different workers. We focus on the case in which a task is partitioned into a sequence of subtasks that must be executed in a certain order. This implies that the subtasks must interface with each other, and that the output of a subtask is used as input for the subtask that follows. A distinction can be made between task partitioning with direct transfer and with indirect transfer. We focus our study on the first case: The output of a subtask is directly transferred from an individual working on that subtask to an individual working on the subtask that follows. As a test bed for our study, we use a swarm of robots performing foraging. The robots have to harvest objects from a source, situated in an unknown location, and transport them to a home location. When a robot finds the source, it memorizes its position and uses dead reckoning to return there. Dead reckoning is appealing in robotics, since it is a cheap localization method and it does not require any additional external infrastructure. However, dead reckoning leads to errors that grow in time if not corrected periodically. We compare a foraging strategy that does not make use of task partitioning with one that does. We show that cooperation through task partitioning can be used to limit the effect of dead reckoning errors. This results in improved capability of locating the object source and in increased performance of the swarm. We use the implemented system as a test bed to study benefits and costs of task partitioning with direct transfer. We implement the system with real robots, demonstrating the feasibility of our approach in a foraging scenario.
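A back-of-the-envelope model shows why partitioning can limit dead-reckoning error: odometry noise accumulates with travelled distance, so a robot that covers only part of the source-to-home route drifts less than one covering the whole route. The additive Gaussian noise model and its magnitude below are assumptions for illustration, not the robot experiments of the paper.

```python
# Simple illustration of dead-reckoning error growth with travelled distance, and of
# how covering a shorter partitioned leg reduces the expected drift.
import random, math

def drift_after(distance, noise_per_unit=0.05, rng=None):
    rng = rng or random.Random()
    dx = dy = 0.0
    for _ in range(int(distance)):                 # accumulate per-unit odometry errors
        dx += rng.gauss(0.0, noise_per_unit)
        dy += rng.gauss(0.0, noise_per_unit)
    return math.hypot(dx, dy)

def mean_drift(distance, runs=2000, seed=11):
    rng = random.Random(seed)
    return sum(drift_after(distance, rng=rng) for _ in range(runs)) / runs

full_route, half_route = 100, 50
print(mean_drift(full_route), mean_drift(half_route))   # drift grows roughly with the square root of distance
```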