
Publication


Featured research published by Daniel Seybold.


IEEE/ACM International Conference on Utility and Cloud Computing | 2015

Cloud orchestration features: are tools fit for purpose?

Daniel Baur; Daniel Seybold; Frank Griesinger; Athanasios Tsitsipas; Christopher B. Hauser; Jörg Domaschka

Even though the cloud era began almost a decade ago, many problems of the first hour are still around. Vendor lock-in and poor tool support hinder users from taking full advantage of the main cloud features: dynamism and scale. This has given rise to tools that target the seamless management and orchestration of cloud applications. All these tools promise similar capabilities and are barely distinguishable, which makes it hard to select the right tool. In this paper, we objectively investigate required and desired features of such tools and give a definition of them. We then select three open-source tools (Brooklyn, Cloudify, Stratos) and compare them according to the features they support, using the experience gained from deploying and operating a standard three-tier application. This exercise leads to a fine-grained feature list that enables the comparison of such tools based on objective criteria, as well as a rating of three popular cloud orchestration tools. In addition, it leads to the insight that the tools are on the right track, but that further development and particularly research is necessary to satisfy all demands.


IEEE International Conference on Big Data | 2016

Is Elasticity of Scalable Databases a Myth?

Daniel Seybold; Nicolas Wagner; Benjamin Erb; Jörg Domaschka

The age of cloud computing has introduced all the mechanisms needed to elastically scale distributed, cloud-enabled applications. At roughly the same time, NoSQL databases were proclaimed as the scalable alternative to relational databases. Since then, NoSQL databases have become a core component of many large-scale distributed applications. This paper evaluates the scalability and elasticity features of the three widely used NoSQL database systems Couchbase, Cassandra, and MongoDB under various workloads and settings, using throughput and latency as metrics. The numbers show that the three database systems have dramatically different baselines with respect to both metrics and also behave unexpectedly when scaling out. For instance, while Couchbase's throughput increases by 17% when scaled out from 1 to 4 nodes, MongoDB's throughput decreases by more than 50%. These surprising results show that not all of the tested NoSQL databases scale as expected and, even worse, that in some cases scaling harms performance.


Advances in Databases and Information Systems | 2017

Is Distributed Database Evaluation Cloud-Ready?

Daniel Seybold; Jörg Domaschka

The database landscape has significantly evolved over the last decade, as cloud computing enables distributed databases to run on virtually unlimited cloud resources. Hence, the already non-trivial task of selecting and deploying a distributed database system becomes even more challenging. Database evaluation frameworks aim at easing this task by guiding the database selection and deployment decision. The evaluation of databases has evolved as well, moving the evaluation focus from performance to distribution aspects such as scalability and elasticity. This paper presents a cloud-centric analysis of distributed database evaluation frameworks based on evaluation tiers and framework requirements. It analyses eight well-adopted evaluation frameworks. The results point out that the evaluation tiers performance, scalability, elasticity, and consistency are well supported, in contrast to resource selection and availability. Further, the analysed frameworks do not support cloud-centric requirements, but only classic evaluation requirements.


European Conference on Service-Oriented and Cloud Computing | 2015

Axe: A Novel Approach for Generic, Flexible, and Comprehensive Monitoring and Adaptation of Cross-Cloud Applications

Jörg Domaschka; Daniel Seybold; Frank Griesinger; Daniel Baur

Vendor lock-in has been a major problem since the advent of cloud computing, as it hinders, on the one hand, a quick transition between cloud providers and, on the other hand, the deployment of an application over various clouds at the same time (cross-cloud deployment). While the rise of cross-cloud deployment tools has to some extent limited the impact of vendor lock-in and given more freedom to operators, the fact that applications are now spread out over more than one cloud platform tremendously complicates matters: either the operator has to interact with the interfaces of various cloud providers, or he has to apply custom management tools. This is particularly true when it comes to the task of auto-scaling an application and adapting it to load changes. This paper introduces a novel approach to monitoring and adaptation management that is able to flexibly gather various monitoring data from virtual machines distributed across cloud providers, to dynamically aggregate the data in the cheapest possible manner, and finally, to evaluate the processed data in order to adapt the application according to user-defined rules.


Advances in Databases and Information Systems | 2017

A cloud-centric survey on distributed database evaluation

Daniel Seybold; Jörg Domaschka

The database landscape has significantly evolved over the last decade, and distributed databases running in the cloud have moved into focus. This evolution challenges the already non-trivial task of selecting and deploying a distributed database system. Database evaluation frameworks aim at easing this task by guiding the database selection and deployment decision. The evaluation of databases has evolved as well, now considering not only performance evaluation but also distributed database aspects such as scalability or elasticity. This paper presents a classification for distributed database evaluation frameworks based on evaluation tiers and framework requirements, with a focus on exploiting cloud computing. The classification is applied to eight well-adopted evaluation frameworks. The results point out that the evaluation tiers performance, scalability, elasticity, and consistency are well supported, while (cloud) resource selection and availability lack support. Further, the analysed frameworks support common database evaluation requirements to varying extents and lack support for cloud-centric requirements.


Software Language Engineering | 2016

Experiences of models@run-time with EMF and CDO

Daniel Seybold; Jörg Domaschka; Alessandro Rossini; Christopher B. Hauser; Frank Griesinger; Athanasios Tsitsipas

Model-driven engineering promotes models and model transformations as the primary assets in software development. The models@run-time approach provides an abstract representation of a system at run-time, whereby changes in the model and the system are constantly reflected on each other. In this paper, we report on more than three years of experience with realising models@run-time in scalable cloud scenarios using a technology stack consisting of the Eclipse Modelling Framework (EMF) and Connected Data Objects (CDO). We establish requirements for the three roles domain-specific language (DSL) designer, developer, and operator, and compare them against the capabilities of EMF/CDO. It turns out that this technology stack is well-suited for DSL designers, but less recommendable for developers and even less suited for operators. For these roles, we experienced a steep learning curve and several lacking features that hinder the implementation of models@run-time in scalable cloud scenarios. Performance experiences show limitations for write-heavy scenarios with an increasing number of total elements. While we do not discourage the use of EMF/CDO for such scenarios, we recommend that its adoption for similar use cases is carefully evaluated until this technology stack has realised our wish list of advanced features.


International Middleware Conference | 2017

Towards a framework for orchestrated distributed database evaluation in the cloud

Daniel Seybold

The selection and operation of a distributed database management system (DDBMS) in the cloud is a challenging task as supportive evaluation frameworks miss orchestrated evaluation scenarios, hindering comparable and reproducible evaluations for heterogeneous cloud resources. We propose a novel evaluation approach that supports orchestrated evaluation scenarios for scalability, elasticity and availability by exploiting cloud resources. We highlight the challenges in evaluating DDBMSs in the cloud and introduce a cloud-centric framework for orchestrated DDBMS evaluation, enabling reproducible evaluations and significant rating indices.


Conference on the Future of the Internet | 2017

A Cross-Layer BPaaS Adaptation Framework

Kyriakos Kritikos; Chrysostomos Zeginis; Frank Griesinger; Daniel Seybold; Joerg Domaschka

The notion of a BPaaS is currently gaining momentum as many organisations attempt to move and offer their business processes (BPs) in the cloud. Such BPs need to be adaptively provisioned so as to sustain the service level promised in the respective SLA. However, current cloud-based adaptation frameworks cannot cover all possible abstraction levels and usually rely on simplistic adaptation rules. As such, this paper proposes a novel BPaaS adaptation framework able to orchestrate actions on different abstraction levels so as to better address this problematic situation. The framework supports the dynamic generation of adaptation workflows as well as the recording of the adaptation history for analysis purposes. It is also coupled with the CAMEL language, which has been extended to support the specification of cross-level adaptation workflows.


OTM Confederated International Conferences "On the Move to Meaningful Internet Systems" | 2017

Gibbon: An Availability Evaluation Framework for Distributed Databases

Daniel Seybold; Christopher B. Hauser; Simon Volpert; Jörg Domaschka

Driven by new application domains, the database management system (DBMS) landscape has significantly evolved from single-node DBMSs to distributed database management systems (DDBMSs). In parallel, cloud computing has become the preferred solution for running distributed applications. Hence, modern DDBMSs are designed to run in the cloud. Yet, in distributed systems the probability of failures grows with the number of entities involved, and using cloud resources increases this probability even more. Therefore, DDBMSs apply data replication across multiple nodes to provide high availability. Yet, high availability limits consistency or partition tolerance, as stated by the CAP theorem. As the decision for two of the three attributes is not binary, the heterogeneous landscape of DDBMSs gets even more complex when it comes to their high-availability mechanisms. Hence, selecting a highly available DDBMS to run in the cloud becomes a very challenging task, as supportive evaluation frameworks are not yet available. In order to ease the selection and increase the trust in running DDBMSs in the cloud, we present the Gibbon framework, a novel availability evaluation framework for DDBMSs. Gibbon defines quantifiable availability metrics, a customisable evaluation methodology, and a novel evaluation framework architecture. Gibbon is demonstrated by an availability evaluation of MongoDB, analysing takeover and recovery times.


Archive | 2017

The cloud application modelling and execution language (CAMEL)

Alessandro Rossini; Kyriakos Kritikos; Nikolay Nikolov; Jörg Domaschka; Frank Griesinger; Daniel Seybold; Daniel Romero; Michal Orzechowski; Georgia M. Kapitsaki; Achilleas Achilleos

Cloud computing provides ubiquitous networked access to a shared and virtualised pool of computing capabilities that can be provisioned with minimal management effort [27]. Cloud applications are deployed on cloud infrastructures and delivered as services. The PaaSage project aims to facilitate the modelling and execution of cloud applications by leveraging model-driven engineering (MDE) and by exploiting multiple cloud infrastructures. The Cloud Application Modelling and Execution Language (CAMEL) is the core modelling and execution language developed in the PaaSage project and enables the specification of multiple aspects of cross-cloud applications (i.e., applications deployed across multiple private, public, or hybrid cloud infrastructures). By exploiting models at both design- and run-time, and by allowing both direct and programmatic manipulation of models, CAMEL enables the management of self-adaptive cross-cloud applications (i.e., cross-cloud applications that autonomously adapt to changes in the environment, requirements, and usage). In this paper, we describe the design and implementation of CAMEL, with emphasis on the integration of heterogeneous domain-specific languages (DSLs) that cover different aspects of self-adaptive cross-cloud applications. Moreover, we provide a real-world running example to illustrate how to specify models in a concrete textual syntax and how to dynamically adapt these models during the application life cycle. Finally, we provide an evaluation of CAMEL's usability and usefulness, based on the technology acceptance model (TAM).
