Publication


Featured research published by Simon Pickartz.


European Conference on Parallel Processing | 2014

Migration Techniques in HPC Environments

Simon Pickartz; Ramy Gad; Stefan Lankes; Lars Nagel; Tim Süß; André Brinkmann; Stephan Krempel

Process migration is an important feature in modern computing centers, as it allows for a more efficient use and maintenance of hardware. Especially in virtualized infrastructures, it is successfully exploited by schemes for load balancing and energy efficiency. The tools and techniques can be divided into three groups: process-level migration, virtual machine migration, and container-based migration.


International Conference on High Performance Computing and Simulation | 2016

Application migration in HPC — A driver of the exascale era?

Simon Pickartz; Stefan Lankes; Antonello Monti; Carsten Clauss; Jens Breitbart

Application migration is valuable for modern computing centers. Apart from facilitating the maintenance process, it enables dynamic load balancing to improve the system's efficiency. Although the concept is already widespread in cloud computing environments, it has not yet found broad adoption in HPC. As the major challenges of future exascale systems are resiliency, concurrency, and locality, we expect the migration of applications to be one means to cope with these challenges. In this paper, we investigate its viability for HPC by deriving the respective requirements for this specific field of application. In doing so, we sketch example scenarios demonstrating its potential benefits. Furthermore, we discuss challenges that result from the migration of OS-bypass networks and present a prototype migration mechanism enabling the seamless migration of MPI processes in HPC systems.


International Parallel and Distributed Processing Symposium | 2016

Non-intrusive Migration of MPI Processes in OS-Bypass Networks

Simon Pickartz; Carsten Clauss; Stefan Lankes; Stephan Krempel; Thomas Moschny; Antonello Monti

Load balancing, maintenance, and energy efficiency are key challenges for upcoming supercomputers. An indispensable tool for the accomplishment of these tasks is the ability to migrate applications during runtime. Especially in HPC, where any performance hit is frowned upon, such migration mechanisms have to come with minimal overhead. This constraint is usually not met by current practice, which adds further abstraction layers to the software stack. In this paper, we propose a concept for the migration of MPI processes communicating over OS-bypass networks such as InfiniBand. While being transparent to the application, our solution minimizes the runtime overhead by introducing a protocol for the shutdown of individual connections prior to the migration. It is implemented on the basis of an MPI library and evaluated using virtual machines based on KVM. Our evaluation reveals that the runtime overhead is negligibly small. The migration time itself is mainly determined by the particular migration mechanism, whereas the additional execution time of the presented protocol converges to 2 ms per connection if more than a few dozen connections are shut down at a time.
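The shutdown protocol itself is not spelled out in the abstract. The following is a minimal sketch, assuming one plain ibverbs queue pair per connection, of the teardown a migration-aware MPI layer might perform before handing control to the hypervisor; the helper name prepare_migration and the drain logic are illustrative, not taken from the paper.

```c
/* Sketch: releasing an InfiniBand connection prior to migration.
 * Assumes one ibverbs queue pair (QP) per remote peer; the helper
 * prepare_migration() is a hypothetical name, not the paper's API. */
#include <infiniband/verbs.h>

int prepare_migration(struct ibv_qp *qp, struct ibv_cq *cq,
                      struct ibv_mr *mr, int outstanding_sends)
{
    struct ibv_wc wc;

    /* 1. Drain: wait until all posted sends have completed so that no
     *    work request is lost when the QP disappears. */
    while (outstanding_sends > 0) {
        int n = ibv_poll_cq(cq, 1, &wc);
        if (n < 0)
            return -1;
        if (n == 1) {
            if (wc.status != IBV_WC_SUCCESS)
                return -1;
            outstanding_sends--;
        }
    }

    /* 2. Quiesce the QP so no further traffic uses this connection. */
    struct ibv_qp_attr attr = { .qp_state = IBV_QPS_ERR };
    if (ibv_modify_qp(qp, &attr, IBV_QP_STATE))
        return -1;

    /* 3. Release verbs resources that pin host memory or reference the
     *    physical HCA; these are the residual dependencies that would
     *    otherwise block a transparent migration. */
    ibv_destroy_qp(qp);
    ibv_dereg_mr(mr);

    /* The VM can now be migrated; the connection is re-established on
     * the destination node afterwards. */
    return 0;
}
```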


International Symposium on Parallel and Distributed Computing | 2012

Towards a Multicore Communications API Implementation (MCAPI) for the Intel Single-Chip Cloud Computer (SCC)

Carsten Clauss; Simon Pickartz; Stefan Lankes; Thomas Bemmerl

In this paper, we present a prototype implementation of the Multicore Communications API (MCAPI) for the Intel Single-Chip Cloud Computer (SCC). The SCC is a 48-core concept vehicle for future many-core systems that exhibit message-passing-oriented architectures. The MCAPI specification, recently developed by the Multicore Association, represents a lightweight interface for message passing in today's multicore systems. The presented prototype implementation is intended to evaluate MCAPI's capability and feasibility for employment in future many-core systems as well.


International Conference on Cluster Computing | 2017

Dynamic Co-Scheduling Driven by Main Memory Bandwidth Utilization

Jens Breitbart; Simon Pickartz; Stefan Lankes; Josef Weidendorfer; Antonello Monti

Most applications running on supercomputers achieve only a fraction of a system's peak performance. It has been demonstrated that the co-scheduling of applications can improve the overall system utilization. However, following this approach, applications need to fulfill certain criteria such that the mutual slowdown is kept at a minimum. In this paper, we present an HPC scheduler that applies co-scheduling and utilizes virtual machine migration for a re-orchestration of applications at runtime based on their main memory bandwidth requirements. Given a job queue consisting of main memory-bound and compute-bound applications, we see a throughput increase of up to 35% while at the same time reducing energy consumption by around 30%.
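As a rough illustration of such a bandwidth-driven policy, the sketch below places a compute-bound VM next to a memory-bound one whenever the measured bandwidth leaves headroom. All names (measured_membw_gbs, MEMBW_CAPACITY_GBS, migrate_vm_to) are hypothetical placeholders, not the paper's interface.

```c
/* Illustrative co-scheduling decision, not the paper's implementation.
 * measured_membw_gbs(), MEMBW_CAPACITY_GBS, and migrate_vm_to() are
 * hypothetical placeholders for hardware-counter readings and the
 * VM live-migration backend. */
#include <stdbool.h>

#define MEMBW_CAPACITY_GBS 100.0  /* assumed per-node memory bandwidth */

extern double measured_membw_gbs(int node);        /* e.g. from uncore counters */
extern bool   migrate_vm_to(int vm_id, int node);  /* triggers VM live migration */

/* Co-locate a compute-bound VM with a memory-bound one only if the
 * node's memory bandwidth is not yet close to saturation. */
bool try_coschedule(int compute_vm, int node, double vm_membw_estimate)
{
    double used = measured_membw_gbs(node);
    if (used + vm_membw_estimate < 0.9 * MEMBW_CAPACITY_GBS)
        return migrate_vm_to(compute_vm, node);
    return false;
}
```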


Concurrency and Computation: Practice and Experience | 2018

Prospects and challenges of virtual machine migration in HPC

Simon Pickartz; Carsten Clauss; Jens Breitbart; Stefan Lankes; Antonello Monti

The continuous growth of supercomputers is accompanied by increased complexity at the intra-node level and in the interconnection topology. Consequently, the whole software stack, ranging from the system software to the applications, has to evolve, e.g., by means of fault tolerance and support for the rising intra-node parallelism. Migration techniques are one means to address these challenges. On the one hand, they facilitate the maintenance process by enabling the evacuation of individual nodes during runtime, i.e., the implementation of fault avoidance. On the other hand, they enable dynamic load balancing to improve the system's efficiency. However, these prospects come along with certain challenges. On the process level, migration mechanisms have to resolve so-called residual dependencies on the source node, e.g., to the communication hardware. On the job level, migrations affect the communication topology, which should be addressed by the communication stack, i.e., the optimal communication path between a pair of processes might change after a migration. In this article, we explore migration mechanisms for HPC and discuss their prospects as well as their challenges. Furthermore, we present solutions enabling their efficient usage in this domain. Finally, we evaluate our prototype co-scheduler leveraging migration for workload optimization.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2017

A Locality-Aware Communication Layer for Virtualized Clusters

Simon Pickartz; Jonas Baude; Stefan Lankes; Antonello Monti

Locality-aware HPC communication stacks have been around since the emergence of SMP systems in the early 2000s. Common MPI implementations provide communication paths optimized for the underlying transport mechanism, i.e., two processes residing on the same SMP node should leverage local shared-memory communication, while inter-node communication should be realized by means of HPC interconnects. As virtualization gains more and more importance in the area of HPC, locality-awareness becomes relevant again. Commonly, HPC systems lack support for efficient communication among co-located VMs, i.e., they harness the local InfiniBand adapter as opposed to the shared physical memory on the host system. This results in significant performance penalties, especially for communication-intensive applications. With IVShmem, there exists a means to exploit the local memory as a communication medium. In this paper, we present a locality-aware MPI layer leveraging this technology for efficient intra-host inter-VM communication. We evaluate our implementation by drawing a comparison to a non-locality-aware communication layer in virtualized clusters.
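The abstract does not give the layer's interface; the sketch below only illustrates the selection logic, assuming each VM can learn a host identifier of its peer (e.g., exchanged during connection setup). The transport handles and helper names are hypothetical.

```c
/* Sketch of locality-aware channel selection between two VMs.
 * ivshmem_connect() and ib_connect() stand in for the shared-memory
 * (IVShmem) and InfiniBand transports of an MPI layer; both names are
 * hypothetical, not the paper's API. */
#include <string.h>

typedef struct channel channel_t;

extern channel_t *ivshmem_connect(int peer_rank);  /* intra-host, via IVShmem */
extern channel_t *ib_connect(int peer_rank);       /* inter-host, via InfiniBand */

/* Host IDs are assumed to be exchanged out-of-band at startup,
 * e.g. via the process manager. */
channel_t *open_channel(int peer_rank,
                        const char *my_host_id, const char *peer_host_id)
{
    if (strcmp(my_host_id, peer_host_id) == 0)
        return ivshmem_connect(peer_rank);  /* co-located: use shared memory */
    return ib_connect(peer_rank);           /* remote: use the HCA */
}
```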


Proceedings of the 24th European MPI Users' Group Meeting | 2017

Enabling hierarchy-aware MPI collectives in dynamically changing topologies

Simon Pickartz; Carsten Clauss; Stefan Lankes; Antonello Monti

Hierarchy-awareness for message passing has been around since the emergence of SMP systems in the early 2000s. Since then, many works have dealt with the optimization of collective communication operations (so-called collectives) for such hierarchical topologies. However, until now, all these optimizations have basically assumed that the hierarchical topology remains static in a parallel program. In contrast, this paper strives for a discussion of how dynamically changing topologies can be considered during runtime, especially with a focus on collective communication patterns. The discussion starter for this is the possibility of process migration, e.g., in virtualized environments where the MPI processes are encapsulated within virtual machines. Consequently, processes originally located on distinct nodes can then (dynamically) become neighbors on the same SMP node. The central subject for the discussion on how such changes can be taken into account for optimized collectives is a new experimental MPI function that we propose and detail within this paper.
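The experimental MPI function proposed in the paper is not named in this abstract and is therefore not reproduced here. As a point of reference, standard MPI-3 already allows a node-local sub-communicator to be derived from the current placement; a hierarchy-aware collective would have to repeat such a step whenever a migration changes which ranks share a node. A minimal sketch:

```c
/* Rebuild the node-local sub-communicator after the process placement
 * may have changed (e.g. following a migration). Uses only standard
 * MPI-3 calls; the paper's proposed experimental function is not shown
 * because its name is not given in the abstract. */
#include <mpi.h>

/* node_comm must be initialized to MPI_COMM_NULL by the caller. */
void rebuild_node_comm(MPI_Comm world, MPI_Comm *node_comm)
{
    if (*node_comm != MPI_COMM_NULL)
        MPI_Comm_free(node_comm);

    /* Group ranks by shared-memory domain, i.e. by SMP node. */
    MPI_Comm_split_type(world, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, node_comm);
}
```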


High Performance Embedded Architectures and Compilers | 2017

Co-scheduling on Upcoming Many-Core Architectures

Simon Pickartz; Jens Breitbart; Stefan Lankes

Co-scheduling is known to optimize the utilization of supercomputers. By choosing applications with distinct resource demands, the application throughput can be increased while avoiding an underutilization of the available nodes. This is especially true for traditional multi-core architectures, where a subset of the available cores is already able to saturate the main memory bandwidth. In this paper, we apply this concept to upcoming many-core architectures, taking the Intel KNL as an example. To this end, we take a memory-bound and a compute-bound kernel from the NPB as example applications. Furthermore, we examine the effect of different memory assignment strategies that are enabled by the two-layered memory hierarchy of the KNL.
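One such assignment strategy on the KNL in flat mode is to place only the bandwidth-critical buffers into MCDRAM while leaving the rest in DDR. Below is a minimal sketch using the memkind library's hbwmalloc interface; it assumes memkind is installed and MCDRAM is exposed as high-bandwidth memory, and is not taken from the paper.

```c
/* Place the bandwidth-critical working set in MCDRAM (high-bandwidth
 * memory) and fall back to DDR if none is available. Requires the
 * memkind library; link with -lmemkind. The caller must remember which
 * allocator was used and release with hbw_free() or free() accordingly. */
#include <hbwmalloc.h>
#include <stdio.h>
#include <stdlib.h>

double *alloc_stream_buffer(size_t n)
{
    if (hbw_check_available() == 0) {          /* 0 means HBM is present */
        double *buf = hbw_malloc(n * sizeof(double));   /* MCDRAM */
        if (buf)
            return buf;
    }
    fprintf(stderr, "MCDRAM unavailable, falling back to DDR\n");
    return malloc(n * sizeof(double));                  /* DDR fallback */
}
```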


IEEE International Conference on High Performance Computing, Data, and Analytics | 2016

Migrating LinuX Containers Using CRIU

Simon Pickartz; Niklas Eiling; Stefan Lankes; Lukas Razik; Antonello Monti

Process migration is one of the most important techniques in modern computing centers. It enables the implementation of load balancing strategies and eases system administration. As supercomputers continue to grow in size, corresponding mechanisms become interesting for High-Performance Computing (HPC) as well.
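The abstract does not detail the mechanism. The sketch below shows how a process tree can be checkpointed with CRIU's C library (libcriu), which is one way such a container migration can be driven; the image directory would then be transferred to the destination node and restored there with criu_restore(). It assumes a reachable CRIU service and sufficient privileges, and the exact option setters may vary between CRIU versions.

```c
/* Checkpoint a process tree with CRIU via libcriu; the resulting image
 * directory can be copied to the destination node and restored there.
 * Assumes a running CRIU service; link with -lcriu. */
#include <criu/criu.h>
#include <fcntl.h>
#include <stdbool.h>

int checkpoint_tree(int pid, const char *image_dir)
{
    int dir_fd = open(image_dir, O_DIRECTORY);
    if (dir_fd < 0)
        return -1;

    if (criu_init_opts() < 0)
        return -1;

    criu_set_pid(pid);               /* root of the process tree to dump */
    criu_set_images_dir_fd(dir_fd);  /* where the checkpoint images go */
    criu_set_shell_job(true);        /* needed for jobs started from a shell */
    criu_set_log_file("dump.log");
    criu_set_log_level(4);

    return criu_dump();              /* negative return indicates an error */
}
```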

Collaboration


Dive into Simon Pickartz's collaboration.

Top Co-Authors

Jonas Baude

RWTH Aachen University

Lukas Razik

RWTH Aachen University
