
Publication


Featured research published by Michael Schöttner.


parallel and distributed computing: applications and technologies | 2008

Checkpointing Process Groups in a Grid Environment

John Mehnert-Spahn; Michael Schöttner; Christine Morin

The EU-funded XtreemOS project implements a grid operating system that transparently exploits resources of virtual organizations through the standard POSIX interface. Grid checkpointing and restart require saving and restoring jobs executing in a distributed, heterogeneous grid environment. The latter may span millions of grid nodes (PCs, clusters, and mobile devices) using different system-specific checkpointers that save and restore application and kernel data structures for processes executing on a grid node. In this paper we briefly describe the XtreemOS grid checkpointing architecture and how we bridge the gap between the abstract grid level and the system-specific checkpointers. We then discuss how we keep track of processes and how different process grouping techniques are managed to ensure that all processes of a job, and any dependent ones, can be checkpointed and restarted. Finally, we present how Linux control groups can be used to address resource isolation issues during restart.
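
The abstract stays at the architecture level; the sketch below is a minimal illustration, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and a hypothetical per-job group name, of how all processes of a job could be grouped, frozen, and enumerated so a node-level checkpointer can snapshot them. It is not the XtreemOS implementation.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.List;

// Illustrative sketch: group a job's processes in a Linux control group (cgroup v2
// layout assumed), freeze them, and list all member PIDs before taking a checkpoint.
public class JobGroup {
    private final Path group;

    public JobGroup(String jobId) throws IOException {
        // Hypothetical per-job group under the cgroup v2 mount point.
        group = Path.of("/sys/fs/cgroup", "xtreemos-job-" + jobId);
        Files.createDirectories(group);
    }

    // Move a process (and, transitively, its future children) into the group.
    public void add(long pid) throws IOException {
        Files.writeString(group.resolve("cgroup.procs"), Long.toString(pid));
    }

    // Freeze or thaw every process in the group so a consistent snapshot can be taken.
    public void freeze(boolean on) throws IOException {
        Files.writeString(group.resolve("cgroup.freeze"), on ? "1" : "0");
    }

    // All PIDs currently in the group; these are handed to the node-level checkpointer.
    public List<Long> members() throws IOException {
        return Files.readAllLines(group.resolve("cgroup.procs"), StandardCharsets.UTF_8)
                    .stream().filter(s -> !s.isBlank()).map(Long::parseLong).toList();
    }
}
```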


international conference on cluster computing | 2014

Memory management for billions of small objects in a distributed in-memory storage

Florian Klein; Kevin Beineke; Michael Schöttner

Large-scale interactive applications and online analytic processing on graphs require fast access to huge sets of small data objects. DXRAM addresses these challenges by always keeping all data in memory of potentially many nodes aggregated in a data center. In this paper we focus on efficient memory management and the mapping of global IDs to local memory addresses, which is not trivial as each node may store up to one billion small data objects (16-64 bytes) in its local memory. We present an efficient paging-like translation scheme for global IDs and a memory management scheme optimized for many small data objects. The latter includes efficient incremental defragmentation supporting changing allocation granularities for dynamic data. Our evaluations show that the proposed memory management approach has only 4-5% overhead, compared to around 20% for state-of-the-art memory allocators, and that the paging-like mapping of global IDs is faster and more memory-efficient than hash-table-based approaches. Furthermore, we compare the memory overhead and read performance of DXRAM with RAMCloud.
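
As a rough illustration of the paging-like idea (not DXRAM's actual code), the following sketch splits the local part of a 64-bit ID into fixed-width index fields, one per table level, until a leaf holds the local memory address; the 12-bit field width and the four levels are assumptions.

```java
// Minimal sketch of a paging-like translation table: intermediate tables are
// created lazily, so sparse ID spaces cost little memory.
public final class IdTranslationTable {
    private static final int BITS_PER_LEVEL = 12;          // 4 levels * 12 bits = 48-bit local ID
    private static final int ENTRIES = 1 << BITS_PER_LEVEL;
    private static final int LEVELS = 4;

    private final Object[] root = new Object[ENTRIES];

    // Register the local memory address of an object identified by its local ID.
    public void put(long localId, long address) {
        Object[] table = root;
        for (int level = LEVELS - 1; level > 0; level--) {
            int idx = index(localId, level);
            if (table[idx] == null) {
                table[idx] = (level == 1) ? new long[ENTRIES] : new Object[ENTRIES];
            }
            if (level == 1) {
                ((long[]) table[idx])[index(localId, 0)] = address;
                return;
            }
            table = (Object[]) table[idx];
        }
    }

    // Translate a local ID to its memory address, or 0 if unknown.
    public long get(long localId) {
        Object[] table = root;
        for (int level = LEVELS - 1; level > 0; level--) {
            Object next = table[index(localId, level)];
            if (next == null) return 0;
            if (level == 1) return ((long[]) next)[index(localId, 0)];
            table = (Object[]) next;
        }
        return 0;
    }

    private static int index(long localId, int level) {
        return (int) ((localId >>> (level * BITS_PER_LEVEL)) & (ENTRIES - 1));
    }
}
```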


european conference on parallel processing | 2010

Adaptive conflict unit size for distributed optimistic synchronization

Kim-Thomas Rehmann; Marc-Florian Müller; Michael Schöttner

Distributed and parallel applications often require accessing shared data. Distributed transactional memory is an emerging concept for concurrent shared data access. By using optimistic synchronization, transactional memory is simpler to use and less error-prone than explicit lock-based synchronization. However, distributed transactional memories are particularly sensitive to phenomena such as true sharing and false sharing, which are caused by correlated data access patterns on multiple nodes. In this paper, we propose a transparent technique that adaptively manages conflict unit sizes for distributed optimistic synchronization in order to relieve application developers from reasoning about such sharing phenomena. Experiments with micro-benchmarks and an on-line data processing application similar to Twitter (using the MapReduce computing model) show the benefits of the proposed approach.
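
A minimal sketch of the adaptation idea under assumed thresholds: conflict units are address ranges used for validation, a unit is halved when aborts repeatedly involve disjoint halves (suggesting false sharing), and units at the minimum size are left alone. The data model and thresholds are illustrative, not the paper's implementation.

```java
import java.util.TreeMap;

// Illustrative conflict-unit manager for optimistic synchronization.
public class ConflictUnitMap {
    private static final int SPLIT_THRESHOLD = 8;   // false-sharing conflicts before a split
    private static final long MIN_SIZE = 64;        // smallest unit, e.g. one small object

    private final TreeMap<Long, Long> units = new TreeMap<>();          // start address -> length
    private final TreeMap<Long, Integer> falseSharing = new TreeMap<>();

    public ConflictUnitMap(long base, long length) {
        units.put(base, length);
    }

    // Start address of the unit covering a given address (addr must be >= base).
    public long unitOf(long addr) {
        return units.floorEntry(addr).getKey();
    }

    // Report two addresses whose conflicting accesses aborted a transaction.
    public void reportConflict(long a, long b) {
        long unit = unitOf(a);
        if (unit != unitOf(b)) return;              // different units: nothing to adapt
        long size = units.get(unit);
        boolean sameHalf = (a - unit < size / 2) == (b - unit < size / 2);
        if (sameHalf || size <= MIN_SIZE) return;   // looks like true sharing, keep the unit
        int n = falseSharing.merge(unit, 1, Integer::sum);
        if (n >= SPLIT_THRESHOLD) {                 // persistent false sharing: halve the unit
            falseSharing.remove(unit);
            units.put(unit, size / 2);
            units.put(unit + size / 2, size - size / 2);
        }
    }
}
```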


international conference on algorithms and architectures for parallel processing | 2009

A Software Transactional Memory Service for Grids

Kim-Thomas Möller; Marc-Florian Müller; Michael Sonnenfroh; Michael Schöttner

In-memory data sharing for grids allows location-transparent access to data stored in volatile memory. Existing Grid middleware typically supports only explicit data transfer between Grid nodes. We believe that Grid systems benefit from complementing traditional message-passing techniques with a data-oriented sharing technique. The latter includes automatic replica management, data consistency, and location-transparent access. As a proof of concept, we are implementing a POSIX-compatible object sharing service as part of the EU-funded XtreemOS project, which builds a Linux-based Grid operating system. In this paper we describe the software architecture of the object sharing service and design decisions including transactional consistency and the peer-to-peer network structure. We also present preliminary evaluation results analyzing the lower-bound transaction overhead using a parallel raytracing application.
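
The following single-JVM sketch illustrates the optimistic, transactional consistency model in miniature: reads record object versions, writes are buffered, and commit validates the read set before publishing. The actual service validates and replicates across Grid nodes; all names here are assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal object-sharing sketch with optimistic transactions.
public class ObjectSharingSketch {
    record Versioned(Object value, long version) {}

    private final Map<String, Versioned> store = new ConcurrentHashMap<>();

    public class Tx {
        private final Map<String, Long> readSet = new HashMap<>();
        private final Map<String, Object> writeSet = new HashMap<>();

        public Object read(String id) {
            if (writeSet.containsKey(id)) return writeSet.get(id);
            Versioned v = store.getOrDefault(id, new Versioned(null, 0));
            readSet.put(id, v.version());           // remember the version we saw
            return v.value();
        }

        public void write(String id, Object value) {
            writeSet.put(id, value);                // buffered until commit
        }

        // Validate the read set and, if it is still current, install the writes atomically.
        public boolean commit() {
            synchronized (store) {
                for (var e : readSet.entrySet()) {
                    long current = store.getOrDefault(e.getKey(), new Versioned(null, 0)).version();
                    if (current != e.getValue()) return false;   // conflict: caller restarts
                }
                for (var e : writeSet.entrySet()) {
                    long next = store.getOrDefault(e.getKey(), new Versioned(null, 0)).version() + 1;
                    store.put(e.getKey(), new Versioned(e.getValue(), next));
                }
                return true;
            }
        }
    }

    public Tx begin() { return new Tx(); }
}
```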


parallel and distributed computing applications and technologies | 2013

DXRAM: A Persistent In-Memory Storage for Billions of Small Objects

Florian Klein; Michael Schöttner

Large-scale interactive applications and real-time data processing face problems with traditional disk-based storage solutions. Because of their often irregular access patterns, they must keep almost all data in RAM caches, which need to be manually synchronized with secondary storage and take a long time to be reloaded after power outages. In this paper we propose a novel key-value storage that keeps all data always in RAM by aggregating the resources of potentially many nodes in a data center. We aim at supporting the management of billions of small data objects (16-64 bytes), as needed for example for storing graphs. Scalable, low-overhead meta-data management is realized using a novel range-based ID approach combined with a super-peer overlay network. Furthermore, we provide persistence through a novel SSD-aware logging approach that allows failed nodes to be recovered very fast.
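
A small sketch of the ID scheme that makes range-based meta-data compact, assuming a 16-bit node part and a 48-bit sequentially assigned local part (the exact split is not given in the abstract): IDs created on one node are consecutive, so many objects can be described by a single ID range.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative 64-bit ID layout: creator node in the upper bits, sequential
// counter in the lower bits, so consecutive creations form dense ranges.
public final class ChunkIds {
    private static final int LOCAL_BITS = 48;

    private final short nodeId;
    private final AtomicLong nextLocalId = new AtomicLong(1);

    public ChunkIds(short nodeId) { this.nodeId = nodeId; }

    // Create a new globally unique ID; consecutive calls yield consecutive IDs.
    public long create() {
        long local = nextLocalId.getAndIncrement();
        return ((long) (nodeId & 0xFFFF) << LOCAL_BITS) | local;
    }

    public static short nodeOf(long id)  { return (short) (id >>> LOCAL_BITS); }
    public static long  localOf(long id) { return id & ((1L << LOCAL_BITS) - 1); }
}
```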


european conference on parallel processing | 2015

Distributed Range-Based Meta-Data Management for an In-Memory Storage

Florian Klein; Kevin Beineke; Michael Schöttner

Large-scale interactive applications and online graph processing require fast access to billions of small data objects. DXRAM addresses this challenge by always keeping all data in RAM of potentially many nodes aggregated in a data center. Such storage clusters need space-efficient and fast meta-data management. In this paper we propose a range-based meta-data management that allows fast node lookups while remaining space efficient by combining data object IDs into ranges. A super-peer overlay network is used to manage these ranges together with backup-node information, allowing parallel and fast recovery of the meta-data and data of failed peers. Furthermore, the same concept can also be used for client-side caching. The measurement results show the benefits of the proposed concepts compared to other meta-data management strategies, as well as very good overall performance evaluated using the social network benchmark BG.
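
A minimal sketch of what a super-peer's range table might look like: each entry maps an ID range to its storage peer and backup peers, so a lookup is one ordered-map query and a failed peer's ranges can be recovered in parallel from their backups. The layout and names are assumptions.

```java
import java.util.TreeMap;

// Illustrative range-based lookup table held by a super-peer.
public class LookupTable {
    record Range(long firstId, long lastId, short peer, short[] backupPeers) {}

    // last ID of a range -> range descriptor; ceilingEntry finds the covering range.
    private final TreeMap<Long, Range> ranges = new TreeMap<>();

    public void register(long firstId, long lastId, short peer, short... backups) {
        ranges.put(lastId, new Range(firstId, lastId, peer, backups));
    }

    // Which peer currently stores the object with the given ID (or -1 if unknown)?
    public short lookup(long id) {
        var e = ranges.ceilingEntry(id);
        if (e == null || id < e.getValue().firstId()) return -1;
        return e.getValue().peer();
    }

    // Ranges whose primary peer failed; their backups drive the parallel recovery.
    public java.util.List<Range> rangesOf(short failedPeer) {
        return ranges.values().stream().filter(r -> r.peer() == failedPeer).toList();
    }
}
```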


parallel and distributed computing: applications and technologies | 2010

Sharing In-Memory Game States

Michael Sonnenfroh; Tobias Baeuerle; Peter Schulthess; Michael Schöttner

Massively multi-user virtual environments (MMVEs) are becoming increasingly popular, with millions of users. Typically, commercial implementations rely on client/server architectures for managing the game state and use message-passing mechanisms to communicate state changes to the clients. We have developed the Typed Grid Object Sharing (TGOS) service, which provides sharing of in-memory data. TGOS aims at simplifying the development of MMVEs by sharing scene graphs in a peer-to-peer fashion, as well as data of backend services. Replication is controlled by different consistency models, including restartable transactions combined with optimistic synchronization for strong consistency. In this paper we describe the data-centric architecture of the Wissenheim Worlds application and the relevant parts of TGOS. Furthermore, we present an evaluation showing the feasibility and efficiency of the proposed approach.
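
To illustrate why the transactions are called restartable: under optimistic synchronization an aborted game-state update has no visible effect and is simply re-executed until its commit validates. The tiny sketch below shows that retry loop; the interface is hypothetical, not TGOS's API.

```java
// Illustrative restartable-transaction loop for optimistic synchronization.
public class RestartableTx {
    public interface Transaction { boolean runAndTryCommit(); }

    // Re-run the transaction body until it commits; aborted attempts leave no trace.
    public static void atomically(Transaction tx) {
        while (!tx.runAndTryCommit()) {
            // conflict detected at commit time: changes were discarded, just retry
        }
    }
}
```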


international conference on cluster computing | 2016

High Throughput Log-Based Replication for Many Small In-Memory Objects

Kevin Beineke; Stefan Nothaas; Michael Schöttner

Online graph analytics and large-scale interactive applications such as social media networks require low-latency access to billions of small data objects. These applications have mostly irregular access patterns, making caching insufficient. Hence, more and more distributed in-memory systems are proposed that keep all data always in memory. These in-memory systems are typically not optimized for the sheer number of small data objects, which demands new concepts for local and global data management as well as for the fault-tolerance mechanisms required to mask node failures and power outages. In this paper we propose a novel two-level logging architecture with backup-side version control, enabling parallel recovery of in-memory objects after node failures. The presented fault-tolerance approach provides high throughput and minimal memory overhead when working with many small objects. We also present a highly concurrent log cleaning approach to keep logs compact. All proposed concepts have been implemented within the DXRAM system and evaluated using two benchmarks: the Yahoo! Cloud Serving Benchmark and RAMCloud's log cleaner benchmark. The experiments show that our approach has less memory overhead and outperforms state-of-the-art in-memory systems for the target application domains, including RAMCloud, Redis, and Aerospike.
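
As a much simplified illustration of backup-side version control (not DXRAM's two-level log format), the sketch below appends replicated updates as (ID, version, payload) records and, during recovery, keeps only the newest version of each object; the on-disk layout is an assumption.

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

// Simplified backup-side log: versions let recovery pick the newest write per object.
public class BackupLogSketch {
    private final DataOutputStream log;

    public BackupLogSketch(File file) throws IOException {
        log = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(file, true)));
    }

    // Append one replicated update; the version stamps which write is newest.
    public synchronized void append(long chunkId, int version, byte[] payload) throws IOException {
        log.writeLong(chunkId);
        log.writeInt(version);
        log.writeInt(payload.length);
        log.write(payload);
    }

    public synchronized void flush() throws IOException { log.flush(); }

    // Replay a log and keep only the newest version of each object.
    public static Map<Long, byte[]> recover(File file) throws IOException {
        Map<Long, byte[]> objects = new HashMap<>();
        Map<Long, Integer> versions = new HashMap<>();
        try (DataInputStream in = new DataInputStream(new BufferedInputStream(new FileInputStream(file)))) {
            while (in.available() > 0) {
                long id = in.readLong();
                int version = in.readInt();
                byte[] payload = new byte[in.readInt()];
                in.readFully(payload);
                if (version >= versions.getOrDefault(id, Integer.MIN_VALUE)) {
                    versions.put(id, version);
                    objects.put(id, payload);
                }
            }
        }
        return objects;
    }
}
```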


Proceedings of the First International Workshop on High Performance Graph Data Management and Processing | 2016

Distributed multithreaded breadth-first search on large graphs using DXGraph

Stefan Nothaas; Kevin Beineke; Michael Schöttner

Interactive graph applications often generate irregular access patterns on very large graphs with trillions of edges and billions of vertices. In order to provide short response times for interactive queries, all these small data objects need to be stored in memory. DXRAM is a distributed in-memory system optimized to efficiently manage large amounts of small data objects. In this paper, we present DXGraph, an extension that allows graph processing on DXRAM storage nodes. For a natural graph representation, each vertex is stored as an object. We describe DXGraph's implementation of a breadth-first search (BFS) algorithm, as specified by the Graph500 benchmark. A preliminary evaluation of the BFS algorithm shows that DXGraph's implementation is up to five times faster than Grappa's and GraphLab's, with a peak throughput of over 323 million traversed edges per second.
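
The level-synchronous BFS specified by Graph500 can be summarized in a few lines; the single-node sketch below builds the parent array frontier by frontier. DXGraph executes this distributed and multithreaded over vertices stored as DXRAM objects, which the sketch does not attempt to model.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Level-synchronous BFS producing a Graph500-style parent array.
public class BfsSketch {
    // adjacency[v] holds the neighbour IDs of vertex v
    public static long[] bfs(int[][] adjacency, int root) {
        long[] parent = new long[adjacency.length];
        Arrays.fill(parent, -1);
        parent[root] = root;                          // root is its own parent by convention

        List<Integer> frontier = List.of(root);
        while (!frontier.isEmpty()) {
            List<Integer> next = new ArrayList<>();
            for (int v : frontier) {                  // DXGraph expands the frontier with multiple threads
                for (int neighbour : adjacency[v]) {
                    if (parent[neighbour] == -1) {    // not yet visited
                        parent[neighbour] = v;
                        next.add(neighbour);
                    }
                }
            }
            frontier = next;                          // next level becomes the new frontier
        }
        return parent;
    }
}
```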


parallel and distributed computing: applications and technologies | 2010

Commit Protocols for a Distributed Transactional Memory

Marc-Florian Müller; Kim-Thomas Möller; Michael Schöttner

Collaboration


Dive into Michael Schöttner's collaborations.

Top Co-Authors

Kevin Beineke, University of Düsseldorf
Stefan Nothaas, University of Düsseldorf