
Publications


Featured research published by Florian Schintke.


Computer Communications | 2008

Range queries on structured overlay networks

Thorsten Schütt; Florian Schintke; Alexander Reinefeld

The efficient handling of range queries in peer-to-peer systems is still an open issue. Several approaches exist, but their lookup schemes are either too expensive (space-filling curves) or their queries lack expressiveness (topology-driven data distribution). We present two structured overlay networks that support arbitrary range queries. The first one, named Chord^#, has been derived from Chord by substituting Chord's hashing function with a key-order-preserving function. It has logarithmic routing performance and supports range queries, which is not possible with Chord. Its O(1) pointer update algorithm can be applied to any peer-to-peer routing protocol with exponentially increasing pointers. We present a formal proof of the logarithmic routing performance and show empirical results that demonstrate the superiority of Chord^# over Chord in systems with high churn rates. We then extend our routing scheme to multiple dimensions, resulting in SONAR, a Structured Overlay Network with Arbitrary Range queries. SONAR covers multi-dimensional data spaces and, in contrast to other approaches, SONAR's range queries are not restricted to rectangular shapes but may have arbitrary shapes. Empirical results with a data set of two million objects show the logarithmic routing performance in a geospatial domain.
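To make the key-order-preserving idea concrete, here is a minimal Python sketch, not taken from the paper, that contrasts Chord-style hashing with an order-preserving placement; the node names and key boundaries are invented for illustration. Under the order-preserving mapping, lexicographically adjacent keys land on the same or neighbouring nodes, which is why a range query only has to visit a contiguous stretch of the overlay.

```python
# Minimal sketch (not the paper's implementation): contrasting hashing-based
# placement with a key-order-preserving one. Node names and key boundaries
# are illustrative assumptions.
import bisect
import hashlib

NODES = ["n0", "n1", "n2", "n3"]   # hypothetical peers
BOUNDS = ["f", "m", "s", "~"]      # sorted upper key bound handled by each node

def node_by_hash(key: str) -> str:
    """Chord-style placement: hashing destroys key order."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

def node_by_order(key: str) -> str:
    """Chord#-style placement: a key-order-preserving mapping keeps
    lexicographically adjacent keys on the same or neighbouring nodes."""
    return NODES[bisect.bisect_left(BOUNDS, key)]

# A range query over ["g", "l"] only touches the contiguous nodes responsible
# for that key interval under order-preserving placement:
print({k: node_by_order(k) for k in ["g", "h", "k", "l"]})  # all on n1
print({k: node_by_hash(k) for k in ["g", "h", "k", "l"]})   # typically scattered
```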


Journal of Grid Computing | 2003

Modeling Replica Availability in Large Data Grids

Florian Schintke; Alexander Reinefeld

Large Grid systems not only provide massive aggregated computing power but also an unprecedented amount of distributed storage space. Unfortunately, the dynamic behavior of the Grid, caused by varying resource availability, unpredictable data updates, and the impact of local site policies, makes it difficult to exploit the full capabilities of Data Grids. We present an analytical model for determining the optimal number of replica servers, catalog servers, and catalog sizes to guarantee a given overall reliability in the face of unreliable components. Our model captures the characteristics of peer-to-peer-like environments as well as those of Grid systems. Empirical simulations confirm the accuracy of our analytical model.
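The core intuition behind such a model can be illustrated with the textbook availability formula: if each of n independent replica servers is reachable with probability p, at least one replica is reachable with probability 1 - (1 - p)^n. The sketch below solves this for the smallest sufficient n; it ignores the paper's treatment of catalog servers and catalog sizes, and the numbers are illustrative assumptions.

```python
# Back-of-the-envelope sketch of the basic idea behind replica-count planning
# (the paper's model additionally covers catalog servers and catalog sizes;
# the example numbers are illustrative assumptions).
import math

def min_replicas(server_availability: float, target_availability: float) -> int:
    """Smallest n with 1 - (1 - p)^n >= target, assuming independent failures."""
    p, t = server_availability, target_availability
    return math.ceil(math.log(1.0 - t) / math.log(1.0 - p))

# e.g. servers that are up 90% of the time, data reachable 99.95% of the time:
print(min_replicas(0.90, 0.9995))   # -> 4 replicas suffice
```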


Concurrency and Computation: Practice and Experience | 2006

Resource reservations with fuzzy requests

Thomas Röblitz; Florian Schintke; Alexander Reinefeld

We present a scheme for reserving job resources with imprecise requests. Typical parameters such as the estimated runtime, the start time, or the type or number of required CPUs need not be fixed at submission time but can be kept fuzzy in some aspects. Users may specify a list of preferences which guide the system in determining the best matching resources for the given job. Originally, the impetus for our work came from the need for efficient co-reservation mechanisms in the Grid, where rigid constraints on multiple job components often make it difficult to find a feasible solution. Our method for handling fuzzy reservation requests gives the users more freedom to specify their requirements and gives the Grid Reservation Service more flexibility to find optimal solutions. In the future, we will extend our methods to process co-reservations. We evaluated our algorithms with real workload traces from a large supercomputer site. The results indicate that our scheme greatly improves the flexibility of the solution process without having much effect on the overall workload of a site. From a user's perspective, only about 10% of the non-reservation jobs have a longer response time, and from a site administrator's view, the makespan of the original workload is extended by only 8% in the worst case.
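As a rough illustration of how a fuzzy request might be matched, the following sketch scores candidate slots against a start-time window, a set of acceptable CPU counts, and two user preferences. The data structures, candidate slots, and weights are hypothetical and are not the paper's algorithm.

```python
# Illustrative sketch of matching a fuzzy reservation request against
# candidate slots (field names, candidates, and scoring weights are
# hypothetical assumptions, not the paper's method).
from dataclasses import dataclass

@dataclass
class Slot:
    start: int      # start time (hours from now)
    cpus: int
    runtime: int    # granted runtime (hours)

def score(slot: Slot, earliest: int, latest: int, cpu_options: list[int],
          prefer_early: float = 1.0, prefer_many_cpus: float = 0.5) -> float:
    """Return a preference score, or -inf if the slot violates the request."""
    if not (earliest <= slot.start <= latest) or slot.cpus not in cpu_options:
        return float("-inf")
    return prefer_early * (latest - slot.start) + prefer_many_cpus * slot.cpus

candidates = [Slot(2, 64, 6), Slot(5, 128, 6), Slot(9, 32, 6)]
best = max(candidates, key=lambda s: score(s, earliest=0, latest=8,
                                           cpu_options=[32, 64, 128]))
print(best)   # Slot(start=5, cpus=128, runtime=6): CPU preference outweighs earliness here
```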


European Conference on Parallel Processing | 2007

A structured overlay for multi-dimensional range queries

Thorsten Schütt; Florian Schintke; Alexander Reinefeld

We introduce SONAR, a structured overlay to store and retrieve objects addressed by multi-dimensional names (keys). The overlay has the shape of a multi-dimensional torus, where each node is responsible for a contiguous part of the data space. A uniform distribution of keys over the data space is not necessary, because denser areas are assigned more nodes. To nevertheless support logarithmic routing, SONAR maintains, per dimension, fingers to other nodes that span an exponentially increasing number of nodes. Most other overlays maintain such fingers in the key space instead and therefore require a uniform data distribution. SONAR, in contrast, avoids hashing and is therefore able to perform range queries of arbitrary shape in a logarithmic number of routing steps, independent of the number of system and query dimensions. SONAR needs just one hop for updating an entry in its routing table: a longer finger is obtained by asking the node referred to by the next shorter finger for its finger at the same level. This doubles the number of spanned nodes and leads to exponentially spaced fingers.
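The one-hop finger update described above can be sketched on a toy ring: the finger at level i+1 is obtained by asking the node that the level-i finger points to for its own level-i finger, which doubles the number of spanned nodes per level. The classes below and the RPC-as-method-call shortcut are illustrative assumptions, not SONAR's implementation.

```python
# Toy sketch of the finger-doubling rule described above on a small ring
# (classes and the RPC-as-method-call shortcut are illustrative assumptions).

class Node:
    def __init__(self, ident: int):
        self.ident = ident
        self.successor: "Node" = None      # level-0 finger: the next node
        self.fingers: list["Node"] = []    # fingers[i] spans 2**i nodes

    def update_fingers(self, levels: int) -> None:
        """One 'hop' per level: ask the node my level-i finger points to
        for its level-i finger; that node becomes my level-(i+1) finger."""
        self.fingers = [self.successor]
        for i in range(levels - 1):
            self.fingers.append(self.fingers[i].fingers[i])

# Build a ring of 16 nodes and fill fingers level by level, so every update
# only relies on information the other nodes already have.
ring = [Node(i) for i in range(16)]
for i, n in enumerate(ring):
    n.successor = ring[(i + 1) % len(ring)]
for level in range(1, 5):
    for n in ring:
        n.update_fingers(level)

print([f.ident for f in ring[0].fingers])  # [1, 2, 4, 8]: each finger spans twice as many nodes
```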


Distributed Systems Operations and Management | 2007

Transactions for distributed wikis on structured overlays

Stefan Plantikow; Alexander Reinefeld; Florian Schintke

We present a transaction processing scheme for structured overlay networks and use it to develop a distributed Wiki application based on a relational data model. The Wiki supports rich metadata and additional indexes for navigation purposes. Ensuring consistency and durability requires handling of node failures. We mask such failures by providing high availability of nodes, constructing the overlay from replicated state machines (cell model). Atomicity is realized using two-phase commit with additional support for failure detection and restoration of the transaction manager. The developed transaction processing scheme provides the application with a mixture of pessimistic, hybrid optimistic, and multiversioning concurrency control techniques to minimize the impact of replication on latency and to optimize for read operations. We present pseudocode of the relevant Wiki functions and evaluate the different concurrency control techniques in terms of message complexity.
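A toy sketch of the optimistic-validation idea mentioned above: reads record item versions, and a commit only applies its writes if every recorded version is still current. The data structures and function names are illustrative and do not reflect the paper's cell model or replication machinery.

```python
# Toy sketch of optimistic validation at commit time (data structures and
# function names are illustrative assumptions, not the paper's code).

store = {"Page:Main": (1, "old text")}    # item -> (version, value)

def read(item):
    version, value = store[item]
    return version, value

def commit(read_set: dict, write_set: dict) -> bool:
    """Validate that every read version is still current, then apply writes."""
    for item, seen_version in read_set.items():
        if store[item][0] != seen_version:
            return False                  # conflict: another transaction won
    for item, value in write_set.items():
        old_version = store.get(item, (0, None))[0]
        store[item] = (old_version + 1, value)
    return True

v, text = read("Page:Main")
ok = commit({"Page:Main": v}, {"Page:Main": text + " + my edit"})
print(ok, store["Page:Main"])             # True (2, 'old text + my edit')
```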


Future Generation Grids | 2006

On Adaptability in Grid Systems

Artur Andrzejak; Alexander Reinefeld; Florian Schintke; Thorsten Schütt

With their increasing size and complexity, adaptability is among the most urgently needed properties of today's Grid systems. Adaptability refers to the degree to which a system can adjust its practices, processes, or structures in response to projected or actual changes in its environment.


Grid Computing | 2010

Enhanced Paxos Commit for Transactions on DHTs

Florian Schintke; Alexander Reinefeld; Seif Haridi; Thorsten Schütt

Key/value stores that are built on structured overlay networks often lack support for atomic transactions and strong data consistency among replicas. This is unfortunate, because consistency guarantees and transactions would allow a wide range of additional application domains to benefit from the inherent scalability and fault tolerance of DHTs. The Scalaris key/value store supports strong data consistency and atomic transactions. It uses an enhanced Paxos Commit protocol with only four communication steps rather than six. This improvement was possible by exploiting information from the replica distribution in the DHT. Scalaris enables the implementation of more reliable and scalable infrastructures for collaborative Web services that require strong consistency and atomic changes across multiple items.
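The sketch below shows only the majority-quorum intuition that strongly consistent replication builds on: any two quorums overlap in at least one replica, so a quorum read always observes the latest quorum write. It is explicitly not the enhanced Paxos Commit protocol of the paper, and the replica layout and counts are assumptions.

```python
# Simplified majority-quorum illustration (NOT the enhanced Paxos Commit
# protocol from the paper); replica layout and counts are assumptions.

REPLICAS = 5
QUORUM = REPLICAS // 2 + 1          # any two quorums overlap in >= 1 replica

replicas = [{"version": 0, "value": None} for _ in range(REPLICAS)]

def write(value, reachable):
    """Succeed only if a majority of replicas acknowledges the new version."""
    if len(reachable) < QUORUM:
        return False
    new_version = max(replicas[i]["version"] for i in reachable) + 1
    for i in reachable:
        replicas[i] = {"version": new_version, "value": value}
    return True

def read(reachable):
    """A majority read always sees the latest successfully written value."""
    assert len(reachable) >= QUORUM
    return max((replicas[i] for i in reachable), key=lambda r: r["version"])

write("v1", reachable=[0, 1, 2])           # replicas 3 and 4 were unreachable
print(read(reachable=[2, 3, 4])["value"])  # 'v1': the quorums overlap at replica 2
```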


International Conference on Computational Science | 2003

Efficient synchronization of replicated data in distributed systems

Thorsten Schütt; Florian Schintke; Alexander Reinefeld

We present nsync, a tool for synchronizing large replicated data sets in distributed systems. nsync computes nearly optimal synchronization plans based on a hierarchy of gossip algorithms that take the network topology into account. Our primary design goals were maximum performance and maximum scalability. We achieved these goals by exploiting parallelism in the planning and synchronization phases, by omitting the transfer of unnecessary metadata, by synchronizing at the block level rather than the file level, and by using sophisticated compression methods. With its relaxed consistency semantics, nsync needs neither a master copy nor a quorum for updating distributed replicas. Each replica is kept as an autonomous entity and can be modified with the usual tools.
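Block-level synchronization, as opposed to whole-file transfer, can be sketched as follows: both sides hash fixed-size blocks, and only blocks whose digests differ are shipped. The block size and hash choice are illustrative; nsync's planner, gossip hierarchy, and compression are not modelled here.

```python
# Sketch of block-level synchronization versus whole-file transfer
# (block size and hash choice are illustrative assumptions; nsync's planner,
# gossip hierarchy, and compression are not modelled).
import hashlib

BLOCK = 4096

def block_digests(data: bytes) -> list[bytes]:
    return [hashlib.sha1(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def blocks_to_send(local: bytes, remote_digests: list[bytes]) -> list[int]:
    """Indices of local blocks that differ from (or don't exist on) the remote."""
    local_digests = block_digests(local)
    return [i for i, d in enumerate(local_digests)
            if i >= len(remote_digests) or d != remote_digests[i]]

old = b"a" * (3 * BLOCK)
new = old[:BLOCK] + b"b" * BLOCK + old[2 * BLOCK:] + b"c" * 100   # edit + append
print(blocks_to_send(new, block_digests(old)))   # [1, 3]: only 2 of 4 blocks move
```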


European Conference on Parallel Processing | 2002

Concepts and Technologies for a Worldwide Grid Infrastructure

Alexander Reinefeld; Florian Schintke

Grid computing has received much attention lately, not only from the academic world but also from industry and business. But what remains when the dust of the many press articles has settled? We try to answer this question by investigating the concepts and techniques grids are based on. We distinguish three kinds of grids: the HTML-based Information Grid, the contemporary Resource Grid, and the newly evolving Service Grid. We show that grid computing is not just another hype, but has the potential to open new perspectives for the co-operative use of distributed resources. Grid computing is well on its way to solving a key problem in our distributed computing world: the discovery and coordinated use of distributed services that may be implemented by volatile, dynamic local resources.


Journal of Grid Computing | 2004

Autonomic Management of Large Clusters and Their Integration into the Grid

Thomas Röblitz; Florian Schintke; Alexander Reinefeld; O. Barring; Maite Barroso Lopez; German Cancio; S. Chapeland; Karim Chouikh; Lionel Cons; Piotr Poznański; Philippe Defert; Jan Iven; Thorsten Kleinwort; B. Panzer-Steindel; Jaroslaw Polok; Catherine Rafflin; Alan Silverman; T.J. Smith; Jan Van Eldik; David Front; Massimo Biasotto; Cristina Aiftimiei; Enrico Ferro; Gaetano Maron; Andrea Chierici; Luca Dell’agnello; Marco Serra; M. Michelotto; Lord Hess; V. Lindenstruth

We present a framework for the co-ordinated, autonomic management of multiple clusters in a compute center and their integration into a Grid environment. Site autonomy and the automation of administrative tasks are prime aspects of this framework. The system behavior is continuously monitored in a steering cycle, and appropriate actions are taken to resolve any problems. All presented components have been implemented in the course of the EU project DataGrid: the Lemon monitoring components, the FT fault-tolerance mechanism, the quattor system for software installation and configuration, the RMS job and resource management system, and the Gridification scheme that integrates clusters into the Grid.

Collaboration


Dive into Florian Schintke's collaborations.

Top Co-Authors

Alexander Reinefeld (Humboldt University of Berlin)
Andre Merzky (Louisiana State University)
Jens Simon (University of Paderborn)