Kevin Beineke
University of Düsseldorf
Publications
Featured research published by Kevin Beineke.
international conference on cluster computing | 2014
Florian Klein; Kevin Beineke; Michael Schöttner
Large-scale interactive applications and online analytic processing on graphs require fast access to huge sets of small data objects. DXRAM addresses these challenges by always keeping all data in memory on potentially many nodes aggregated in a data center. In this paper we focus on efficient memory management and the mapping of global IDs to local memory addresses, which is non-trivial as each node may store up to one billion small data objects (16–64 bytes) in its local memory. We present an efficient paging-like translation scheme for global IDs and a memory management optimized for many small data objects. The latter includes an efficient incremental defragmentation supporting changing allocation granularities for dynamic data. Our evaluations show that the proposed memory management has only 4–5% overhead, compared to around 20% for state-of-the-art memory allocators, and that the paging-like mapping of global IDs is faster and more efficient than hash-table-based approaches. Furthermore, we compare the memory overhead and read performance of DXRAM with RAMCloud.
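The paging-like ID translation described above can be illustrated with a minimal Python sketch. This is not DXRAM's actual implementation: it assumes the 64-bit global ID combines a 16-bit creator node ID with a 48-bit node-local ID, and it uses dictionaries as a stand-in for the flat per-level tables a real paging scheme would use.

```python
NODE_BITS = 16
LID_BITS = 48

def split_chunk_id(cid):
    """Split a 64-bit global ID into (creator node ID, local ID)."""
    node_id = cid >> LID_BITS
    local_id = cid & ((1 << LID_BITS) - 1)
    return node_id, local_id

class PagingTranslator:
    """Two-level, page-table-like mapping of 48-bit local IDs to addresses.

    Each level consumes 24 bits of the local ID. Dictionaries here stand in
    for the contiguous per-level arrays a real implementation would use.
    """
    LEVEL_BITS = 24

    def __init__(self):
        self.root = {}

    def put(self, local_id, address):
        hi = local_id >> self.LEVEL_BITS
        lo = local_id & ((1 << self.LEVEL_BITS) - 1)
        self.root.setdefault(hi, {})[lo] = address

    def get(self, local_id):
        hi = local_id >> self.LEVEL_BITS
        lo = local_id & ((1 << self.LEVEL_BITS) - 1)
        return self.root.get(hi, {}).get(lo)
```

With contiguous arrays per level, a translation is just a few shifts and array indexes, which is what makes such a scheme faster than hashing every ID.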
european conference on parallel processing | 2015
Florian Klein; Kevin Beineke; Michael Schöttner
Large-scale interactive applications and online graph processing require fast access to billions of small data objects. DXRAM addresses this challenge by always keeping all data in RAM on potentially many nodes aggregated in a data center. Such storage clusters need space-efficient and fast meta-data management. In this paper we propose a range-based meta-data management that allows fast node lookups while remaining space efficient by combining data object IDs into ranges. A super-peer overlay network manages these ranges together with backup-node information, allowing fast, parallel recovery of the meta-data and data of failed peers. The same concept can also be used for client-side caching. The measurement results show the benefits of the proposed concepts compared to other meta-data management strategies, as well as very good overall performance, evaluated using the social-network benchmark BG.
international conference on cluster computing | 2017
Kevin Beineke; Stefan Nothaas; Michael Schöttner
Social media networks as well as online graph analytics operate on large-scale graphs with millions, in some cases billions, of vertices. Low-latency access is essential, but caching suffers from the mostly irregular access patterns of these application domains. Hence, distributed in-memory systems that always keep all data in memory have been proposed. However, the sheer number of small data objects demands new concepts for local and global data management as well as for the fault-tolerance mechanisms that mask server failures and power outages. We propose a backup distribution mechanism and a parallel recovery concept that allow a failed server storing hundreds of millions of small objects to be recovered within 1 to 2 seconds. All proposed concepts have been implemented in the open-source system DXRAM and evaluated in the Microsoft Azure cloud with up to 72 virtual machines. The experiments show that DXRAM can recover a server storing 500,000,000 small objects from SSDs within 2 seconds.
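The parallel-recovery idea rests on spreading a failed server's backup ranges across many peers so they can be restored concurrently. A minimal sketch of such a distribution step follows; the names and the round-robin policy are illustrative assumptions, not DXRAM's actual strategy.

```python
def assign_recovery(backup_ranges, recovery_peers):
    """Distribute a failed server's backup ranges round-robin over peers.

    Each peer then restores its assigned ranges independently, so total
    recovery time shrinks roughly with the number of peers.
    """
    plan = {peer: [] for peer in recovery_peers}
    for i, backup_range in enumerate(backup_ranges):
        plan[recovery_peers[i % len(recovery_peers)]].append(backup_range)
    return plan
```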
international conference on cluster computing | 2016
Kevin Beineke; Stefan Nothaas; Michael Schöttner
Online graph analytics and large-scale interactive applications such as social media networks require low-latency access to billions of small data objects. These applications have mostly irregular access patterns, making caching insufficient. Hence, more and more distributed in-memory systems are proposed that always keep all data in memory. These systems are typically not optimized for the sheer number of small data objects, which demands new concepts for local and global data management as well as for the fault-tolerance mechanisms required to mask node failures and power outages. In this paper we propose a novel two-level logging architecture with backup-side version control that enables parallel recovery of in-memory objects after node failures. The presented fault-tolerance approach provides high throughput and minimal memory overhead when working with many small objects. We also present a highly concurrent log-cleaning approach to keep logs compact. All proposed concepts have been implemented in the DXRAM system and evaluated using two benchmarks: the Yahoo! Cloud Serving Benchmark and RAMCloud's log cleaner benchmark. The experiments show that our approach has less memory overhead and outperforms state-of-the-art in-memory systems for the target application domains, including RAMCloud, Redis, and Aerospike.
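The interplay of logging, version control, and cleaning can be modeled in a few lines. This is an illustrative Python sketch, not DXRAM's log format: each appended object carries a version number, recovery applies only the newest version of each object, and cleaning drops outdated entries to keep the log compact.

```python
class VersionedLog:
    """Append-only log with a backup-side version table.

    During recovery only the highest version of each object ID is applied;
    the cleaner removes entries that a newer version has superseded.
    """
    def __init__(self):
        self.entries = []   # (object_id, version, payload), append order
        self.versions = {}  # object_id -> latest known version

    def append(self, oid, payload):
        version = self.versions.get(oid, 0) + 1
        self.versions[oid] = version
        self.entries.append((oid, version, payload))

    def recover(self):
        # rebuild the in-memory state, keeping only current versions
        state = {}
        for oid, version, payload in self.entries:
            if version == self.versions.get(oid):
                state[oid] = payload
        return state

    def clean(self):
        # compact the log by dropping superseded entries
        self.entries = [e for e in self.entries
                        if e[1] == self.versions[e[0]]]
```

Keeping the version table on the backup side means the storage server never has to ask remote nodes which log entry is current, which is what makes recovery parallelizable.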
Proceedings of the First International Workshop on High Performance Graph Data Management and Processing | 2016
Stefan Nothaas; Kevin Beineke; Michael Schöttner
Interactive graph applications often generate irregular access patterns on very large graphs with trillions of edges and billions of vertices. To provide short response times for interactive queries, all these small data objects need to be stored in memory. DXRAM is a distributed in-memory system optimized to efficiently manage large amounts of small data objects. In this paper, we present DXGraph, an extension that allows graph processing on DXRAM storage nodes. For a natural graph representation, each vertex is stored as an object. We describe DXGraph's implementation of a breadth-first search (BFS) algorithm, as specified by the Graph500 benchmark. A preliminary evaluation of the BFS algorithm shows that DXGraph's implementation is up to five times faster than Grappa's and GraphLab's, with a peak throughput of over 323 million traversed edges per second.
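The kind of level-synchronous BFS that Graph500 specifies can be sketched generically; this single-machine Python version over an adjacency dictionary is illustrative only and says nothing about DXGraph's distributed implementation.

```python
def bfs_levels(adj, root):
    """Level-synchronous BFS: expand one whole frontier per iteration.

    adj maps each vertex to an iterable of its neighbors; the result maps
    every reached vertex to its depth (number of hops from the root).
    """
    depth = {root: 0}
    frontier = [root]
    while frontier:
        next_frontier = []
        for v in frontier:
            for w in adj.get(v, ()):
                if w not in depth:       # first visit fixes the depth
                    depth[w] = depth[v] + 1
                    next_frontier.append(w)
        frontier = next_frontier
    return depth
```

Graph500 reports throughput as traversed edges per second (TEPS), which is the metric behind the 323-million figure quoted above.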
cluster computing and the grid | 2018
Kevin Beineke; Stefan Nothaas; Michael Schöttner
arXiv: Distributed, Parallel, and Cluster Computing | 2018
Kevin Beineke; Stefan Nothaas; Michael Schöttner
international conference on parallel and distributed systems | 2017
Kevin Beineke; Stefan Nothaas; Michael Schöttner
GI-Jahrestagung | 2014
Kevin Beineke; Florian Klein; Michael Schöttner
international conference on parallel and distributed systems | 2012
Kim-Thomas Rehmann; Kevin Beineke; Michael Schöttner