Martin Schlueter
Hokkaido University
Publications
Featured research published by Martin Schlueter.
Congress on Evolutionary Computation | 2013
Martin Schlueter; Masaharu Munetomo
Two different parallelization strategies for evolutionary algorithms for mixed-integer nonlinear programming (MINLP) are discussed and numerically compared in this contribution. The first strategy is to parallelize some internal parts of the evolutionary algorithm. The second strategy is to parallelize the MINLP function calls outside of, and independently of, the evolutionary algorithm. The first strategy is represented here by a genetic algorithm (arGA) for numerical testing; the second by an ant colony optimization algorithm (MIDACO). It can be shown that the first parallelization strategy, represented by arGA, is inferior to the serial version of MIDACO, even if massive parallelization via GPGPU is used. In contrast, theoretical and practical tests demonstrate that the parallelization strategy of MIDACO is promising for CPU-time-expensive MINLP problems, which often arise in real-world applications.
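The second strategy can be illustrated with a short sketch: the optimizer hands over a whole population of candidates, and the (expensive) function calls are evaluated in parallel, outside the algorithm itself. All names and the toy objective here are illustrative assumptions, not MIDACO's actual API; a thread pool stands in for the process- or MPI-based parallelism one would use in practice.

```python
# Sketch of the second strategy: MINLP function calls are distributed
# across workers, outside and independently of the evolutionary algorithm.
# minlp_objective and evaluate_population are illustrative names only.
from concurrent.futures import ThreadPoolExecutor

def minlp_objective(x):
    # Toy mixed-integer objective: x = (continuous part, integer part).
    c, k = x
    return (c - 0.5) ** 2 + (k - 3) ** 2

def evaluate_population(population, workers=4):
    # Results come back in population order, so the evolutionary
    # algorithm can consume them as if they were computed serially.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(minlp_objective, population))
```

Because the parallelization happens entirely outside the optimizer, the evolutionary algorithm itself needs no modification, which is the key practical appeal of this strategy.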
BHI 2013 Proceedings of the International Conference on Brain and Health Informatics - Volume 8211 | 2013
Courtney Powell; Masaharu Munetomo; Martin Schlueter; Masataka Mizukoshi
A new wearable computing era featuring devices such as Google Glass, smartwatches, and digital contact lenses is almost upon us, bringing with it usability issues that conventional human-computer interaction (HCI) modalities cannot resolve. Brain-computer interface (BCI) technology is also rapidly advancing and is now at a point where noninvasive BCIs are being used in games and in healthcare. Thought control of wearable devices is an intriguing vision and would facilitate more intuitive HCI; however, to achieve even a modicum of control, BCI currently requires massive processing power that is not available on mobile devices. Cloud computing is a maturing paradigm in which elastic computing power is provided on demand over networks. In this paper, we review the three technologies and examine possible ways cloud computing can be harnessed to provide the computational power needed to facilitate practical thought control of next-generation wearable computing devices.
Congress on Evolutionary Computation | 2015
Martin Schlueter; Chit Hong Yam; Takeshi Watanabe; Akira Oyama
Optimization of interplanetary space mission trajectories has been a long-standing challenge. Here a novel approach is presented that considers several aspects of the space mission simultaneously as a many-objective problem. This problem is then solved by a decomposition approach combined with a (massive) parallelization framework employing instances of Ant Colony Optimization algorithms. Numerical results show that the approach presented here has advantages over a classical weighted-sum approach and is well suited to exploiting massive parallelization efficiently.
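The decomposition idea can be sketched on a toy bi-objective problem: each weight vector defines an independent scalar subproblem (here via Tchebycheff scalarization), and that independence is what makes the subproblems amenable to parallel solution by separate ACO instances. The objective function, candidate set, and names below are illustrative assumptions, not the paper's actual mission model.

```python
# Minimal decomposition sketch: each weight vector yields one scalar
# subproblem; solving all subproblems traces out the Pareto front.
# The bi-objective function is a toy stand-in for the mission model.

def objectives(x):
    # Toy trade-off: minimize both x^2 and (x - 2)^2.
    return (x ** 2, (x - 2) ** 2)

def tchebycheff(f, weights, ideal):
    # Scalarize a vector of objectives against an ideal point.
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, ideal))

def solve_subproblem(weights, candidates, ideal=(0.0, 0.0)):
    # Each subproblem picks the candidate minimizing its scalarization;
    # in the paper's setting an ACO instance would do this search.
    return min(candidates, key=lambda x: tchebycheff(objectives(x), weights, ideal))

weight_vectors = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
candidates = [i / 10 for i in range(0, 21)]  # grid over [0, 2]
front = [solve_subproblem(w, candidates) for w in weight_vectors]  # → [0.0, 1.0, 2.0]
```

Each weight vector recovers a different trade-off point, and because the subproblems share no state they can run on separate processors without coordination.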
Congress on Evolutionary Computation | 2014
Martin Schlueter; Masaharu Munetomo
The impact of parallelization on the optimization process of space mission trajectories is investigated in this contribution. As the space mission trajectory reference model, the well-known Cassini1 benchmark, published by the European Space Agency (ESA), is considered and solved here with the MIDACO optimization software. It can be shown that significant speed-ups can be gained by applying parallelization.
Soft Computing | 2017
Martin Schlueter; Masaharu Munetomo
This contribution presents a numerical evaluation of the impact of parallelization on the performance of an evolutionary algorithm for mixed-integer nonlinear programming (MINLP). On a set of 200 MINLP benchmarks, the performance of the MIDACO solver is assessed with a parallelization factor gradually increasing from one to three hundred. The results demonstrate that the efficiency of the algorithm can be significantly improved by parallelized function evaluation. Furthermore, the results indicate that the efficiency scales approximately linearly with the parallelization factor, which implies that this approach will be promising even for very large parallelization factors. The presented research is especially relevant to CPU-time-consuming real-world applications, where only a small number of serially processed function evaluations can be calculated in reasonable time.
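The near-linear scale-up follows from a simple cost model: if function evaluation dominates run time and P candidates are co-evaluated per batch, the serial wall-clock cost is just the number of sequential batches. A minimal sketch of this idealized model (which ignores communication overhead and assumes a fixed evaluation budget; the numbers are illustrative):

```python
import math

def sequential_steps(total_evaluations, parallelization_factor):
    # Idealized cost model: with P co-evaluated candidates per batch,
    # wall-clock cost is the number of sequential batches. Overhead
    # and load imbalance are deliberately ignored in this sketch.
    return math.ceil(total_evaluations / parallelization_factor)

# Raising P from 1 to 300 cuts the serial cost proportionally.
steps = {p: sequential_steps(30000, p) for p in (1, 10, 300)}
```

Under this model, doubling P halves the sequential cost, which is the linear behaviour the benchmark results point to.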
Genetic and Evolutionary Computation Conference | 2018
Martin Schlueter; Masaharu Munetomo
This contribution presents numerical results for optimizing a many-objective space mission trajectory benchmark under massively parallelized co-evaluation of solution candidates. The considered benchmark is the well-known Cassini1 instance published by the European Space Agency (ESA), extended to four objectives. The MIDACO optimization software, an evolutionary algorithm based on Ant Colony Optimization (ACO), is applied to solve this benchmark with a varying fine-grained parallelization factor (P) ranging from one to 1024. It can be shown that the number of sequential steps required to solve this benchmark can be significantly reduced by applying massive parallelization, while still maintaining a sufficiently high number of well-distributed non-dominated solutions in the objective space.
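Maintaining non-dominated solutions presupposes a dominance test over the objective space. A minimal sketch for minimization problems (independent of MIDACO's internals; the sample points are illustrative):

```python
def dominates(a, b):
    # a dominates b (minimization): a is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    # Keep only points not dominated by any other point in the set.
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

With P candidates co-evaluated per step, each batch's objective vectors are filtered this way and merged into the archive of non-dominated solutions.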
IEEE Transactions on Systems, Man, and Cybernetics | 2018
Gabriele Oliva; Stefano Panzieri; Federica Pascucci; Martin Schlueter; Masaharu Munetomo; Roberto Setola
In this paper, we provide a novel framework to assess the vulnerability/robustness of a network with respect to pair-wise node connectivity. In particular, we consider attackers that aim, at the same time, to deal the maximum possible damage to the network, in terms of the residual connectivity after the attack, and to keep the cost of the attack (e.g., the number of attacked nodes) at a minimum. In contrast to the previous literature, we consider the attacker's perspective using a multiobjective formulation and, rather than making hypotheses about the attacker's mindset in terms of a particular tradeoff between the objectives, we consider the entire Pareto front of nondominated solutions. Based on that, we define novel global and local robustness/vulnerability indicators, and we show that such indices can form the basis for effective protection strategies. Specifically, we propose two different problem formulations and assess their performance. We conclude by analyzing, as case studies, the IEEE118 power network and the U.S. Airline Network as it was in 1997, comparing the proposed approach against centrality measures.
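The damage objective, residual pairwise connectivity, can be sketched with a union-find pass: count the node pairs that remain connected once the attacked nodes and their incident edges are removed. The graph and function names below are illustrative, not the paper's formulation.

```python
# Sketch of residual pairwise connectivity after a node attack.
from itertools import combinations

def connected_pairs(nodes, edges):
    # Union-find over the edge list, then count still-connected pairs.
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for u, v in edges:
        parent[find(u)] = find(v)
    return sum(1 for a, b in combinations(nodes, 2) if find(a) == find(b))

def residual_connectivity(nodes, edges, attacked):
    # Remove attacked nodes and their incident edges, then recount.
    kept = [n for n in nodes if n not in attacked]
    kept_edges = [(u, v) for u, v in edges
                  if u not in attacked and v not in attacked]
    return connected_pairs(kept, kept_edges)
```

An attacker's two objectives are then simply `len(attacked)` (cost, to minimize) and `residual_connectivity(...)` (to minimize as well, from the attacker's side), and the Pareto front over attack sets is what the indicators are built on.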
IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2017
Phyo Thandar Thant; Courtney Powell; Martin Schlueter; Masaharu Munetomo
Over the past decade, cloud computing has grown in popularity for the processing of scientific applications as a result of the scalability of the cloud and the ready availability of on-demand computing and storage resources. It is also a cost-effective alternative for scientific workflow executions with a pay-per-use paradigm. However, providing services with optimal performance at the lowest financial resource deployment cost is still challenging. Scientific workflow applications comprise many fine-grained tasks, and efficient execution of these tasks according to their processing dependencies, so as to minimize the overall makespan, is an important research area. In this paper, a system for level-wise workflow makespan optimization and virtual machine deployment cost minimization for overall workflow optimization in cloud infrastructure is proposed. Further, balanced task clustering, to ensure load balancing across virtual machine instances at each workflow level during execution, is also considered. The system retrieves the necessary workflow information from a directed acyclic graph and uses the non-dominated sorting genetic algorithm II (NSGA-II) to carry out multiobjective optimization. Pareto front solutions obtained for makespan and instance resource deployment cost for several scientific workflow applications verify the efficacy of our system.
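The level-wise balanced task clustering can be sketched with a greedy longest-processing-time heuristic, used here purely as an illustrative stand-in (the paper's NSGA-II search itself is not shown): tasks at each DAG level are spread over k VM slots, levels execute sequentially to respect dependencies, and each level costs its most-loaded slot.

```python
def balanced_clusters(runtimes, k):
    # Greedy LPT heuristic: assign each task (longest first) to the
    # currently least-loaded of k virtual-machine slots.
    loads = [0.0] * k
    clusters = [[] for _ in range(k)]
    for t in sorted(runtimes, reverse=True):
        i = loads.index(min(loads))
        loads[i] += t
        clusters[i].append(t)
    return clusters, max(loads)

def levelwise_makespan(levels, k):
    # Levels run one after another (dependency order); within a level
    # the k clusters run in parallel, so each level costs its max load.
    return sum(balanced_clusters(level, k)[1] for level in levels)
```

A multiobjective optimizer such as NSGA-II would then search over clusterings and VM instance choices, trading this makespan against deployment cost.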
European Conference on Applications of Evolutionary Computation | 2017
Martin Schlueter; Mohamed Wahib; Masaharu Munetomo
The design and optimization of interplanetary space mission trajectories is known to be a difficult challenge. The trajectory of the Messenger mission (launched by NASA in 2004) is one of the most complex ever created. The European Space Agency (ESA) makes available a numerical optimization benchmark that provides an accurate model of Messenger's full mission trajectory. This contribution presents an optimization approach capable of (robustly) solving ESA's Messenger full mission benchmark to its putative global solution within 24 h of run time on a moderate-sized computer cluster. The considered algorithm, named MXHPC, is a parallelization framework for the MIDACO optimization algorithm, an evolutionary method particularly suited to space trajectory design. The presented results demonstrate the effectiveness of evolutionary computing for complex real-world problems previously considered intractable.
Scientific Programming | 2017
Phyo Thandar Thant; Courtney Powell; Martin Schlueter; Masaharu Munetomo
Cloud computing in the field of scientific applications, such as scientific big data processing and big data analytics, has become popular because of its service-oriented model, which provides a pool of abstracted, virtualized, dynamically scalable computing resources and services on demand over the Internet. However, selecting the right instances for a given application of interest is a challenging problem for researchers. In addition, providing services with optimal performance at the lowest financial resource deployment cost based on users' resource selection is quite challenging for cloud service providers. Consequently, it is necessary to develop an optimization system that can benefit both users and service providers. In this paper, we conduct scientific workflow optimization from three perspectives: makespan minimization, virtual machine deployment cost minimization, and virtual machine failure minimization in the cloud infrastructure, in a level-wise manner. Further, balanced task assignment to the virtual machine instances at each level of the workflow is also considered. Finally, system efficiency is verified by evaluating the results with different multiobjective optimization algorithms such as SPEA2 and NSGA-II.