Network


Latest external collaborations at the country level.

Hotspot


Research topics in which David Bermbach is active.

Publication


Featured research published by David Bermbach.


International Conference on Web Engineering | 2016

Benchmarking Web API Quality

David Bermbach; Erik Wittern

Web APIs are increasingly becoming an integral part of web and mobile applications. As a consequence, the performance characteristics and availability of the APIs used directly impact the end-user experience. Still, the quality of web APIs is largely ignored and simply assumed to be sufficiently good and stable. Especially considering the geo-mobility of today's client devices, this can lead to negative surprises at runtime.
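
To make the kind of measurement the paper argues for concrete, here is a minimal sketch (not the authors' tool) that samples a web API's latency and availability; the endpoint URL, sample count, and inter-request delay are placeholders:

```python
# Hedged sketch: repeatedly invoke a web API, recording latency and
# availability. The URL below is hypothetical and will simply count
# as a failure if it does not resolve.
import time
import urllib.request
import urllib.error

API_URL = "https://api.example.com/health"  # hypothetical endpoint
SAMPLES = 10

latencies = []
failures = 0
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(API_URL, timeout=5) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    except (urllib.error.URLError, TimeoutError):
        failures += 1
    time.sleep(1)  # fixed inter-request delay; real benchmarks vary this

availability = (SAMPLES - failures) / SAMPLES
if latencies:
    print(f"mean latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")
print(f"availability: {availability:.0%}")
```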


Archive | 2017

Cloud service benchmarking

David Bermbach; Erik Wittern; Stefan Tai

Cloud service benchmarking can provide important, sometimes surprising insights into the quality of services, and it leads to a more quality-driven design and engineering of complex software architectures that use such services. Starting with a broad introduction to the field, this book guides readers step by step through the process of designing, implementing, and executing a cloud service benchmark, as well as understanding and dealing with its results. It covers all aspects of cloud service benchmarking, i.e., both benchmarking the cloud and benchmarking in the cloud, at a basic level.

The book is divided into five parts:
Part I discusses what cloud benchmarking is, provides an overview of cloud services and their key properties, and describes the notions of a cloud system and cloud-service quality. It also addresses the benchmarking lifecycle and the motivations for running benchmarks in particular phases of an application lifecycle.
Part II focuses on benchmark design, discussing key objectives (e.g., repeatability, fairness, or understandability), defining metrics and measurement methods, and giving advice on developing one's own measurement methods and metrics.
Part III explores benchmark execution and implementation challenges and objectives, as well as aspects like runtime monitoring and result collection.
Part IV addresses benchmark results, covering topics such as an abstract process for turning data into insights, data preprocessing, and basic data analysis methods.
Part V concludes the book with a summary, suggestions for further reading, and pointers to benchmarking tools available on the web.

The book is intended for researchers and graduate students of computer science and related subjects looking for an introduction to benchmarking cloud services, but also for industry practitioners who are interested in evaluating the quality of cloud services or who want to assess key qualities of their own implementations through cloud-based experiments.
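
As a concrete illustration of the Part IV theme of turning raw benchmark data into insights, here is a minimal sketch (not taken from the book) that aggregates per-request latencies into summary metrics; the sample values are invented:

```python
# Hedged sketch: basic analysis of raw latency measurements.
import statistics

latencies_ms = [12.1, 13.4, 11.8, 55.2, 12.9, 14.0, 13.1, 98.7, 12.5, 13.3]

latencies_ms.sort()
p50 = statistics.median(latencies_ms)
p99_index = min(len(latencies_ms) - 1, int(0.99 * len(latencies_ms)))
p99 = latencies_ms[p99_index]

print(f"mean:   {statistics.mean(latencies_ms):.1f} ms")
print(f"median: {p50:.1f} ms")
print(f"p99:    {p99:.1f} ms  (tail latency often matters most)")
```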


Technology Conference on Performance Evaluation and Benchmarking | 2014

Towards an Extensible Middleware for Database Benchmarking

David Bermbach; Jörn Kuhlenkamp; Akon Dey; Sherif Sakr; Raghunath Nambiar

Today’s database benchmarks are each designed to evaluate a particular type of database. Furthermore, popular benchmarks, like those from the TPC, come without a ready-to-use implementation, requiring benchmark users to implement the benchmarking tool from scratch. As a result, there is no single framework that can be used to compare arbitrary database systems. The primary reason for this, among others, is the complexity of designing and implementing distributed benchmarking tools.
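
As an illustration of the extensibility the authors call for, the sketch below (hypothetical, not the proposed middleware; all class and method names are invented) shows how a benchmarking core could program against an abstract connector interface, with each database system added as one implementation:

```python
# Hedged sketch: the benchmark runner depends only on an abstract
# connector; supporting a new database means implementing that interface.
from abc import ABC, abstractmethod


class DatabaseConnector(ABC):
    """One implementation per database system under test (hypothetical)."""

    @abstractmethod
    def connect(self) -> None: ...

    @abstractmethod
    def execute(self, operation: str, key: str, value=None) -> None: ...


class InMemoryConnector(DatabaseConnector):
    """Trivial stand-in so the sketch runs without a real database."""

    def connect(self) -> None:
        self.store = {}

    def execute(self, operation, key, value=None):
        if operation == "put":
            self.store[key] = value
        elif operation == "get":
            self.store.get(key)


def run_workload(db: DatabaseConnector, ops):
    # The runner never sees database-specific code.
    db.connect()
    for op, key, value in ops:
        db.execute(op, key, value)


run_workload(InMemoryConnector(), [("put", "k1", "v1"), ("get", "k1", None)])
```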


International Conference on Service-Oriented Computing | 2015

AISLE: Assessment of Provisioned Service Levels in Public IaaS-Based Database Systems

Jörn Kuhlenkamp; Kevin Rudolph; David Bermbach

When database systems running on top of public cloud services run into performance problems, it is hard to identify the specific infrastructure service for which provisioning additional resources would solve the problem. In this work, we present AISLE, an approach that develops a model of expected service levels and includes metrics that assess monitored service-level values in order to identify these cloud services. Using AISLE, we develop such a model for the Amazon EBS service and evaluate our approach in experiments with Apache Cassandra running on EBS-backed EC2 instances.
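
A minimal sketch of the underlying idea, with made-up numbers and thresholds rather than AISLE's actual model: monitored values are compared against expected service levels, and the service running closest to its provisioned limit is flagged as the likely bottleneck:

```python
# Hedged sketch: compare monitored metrics to an expected-service-level
# model. All values and the 95% saturation threshold are illustrative.

# Hypothetical expected service levels, e.g., for an EBS-like block store.
expected = {"iops": 3000, "throughput_mb_s": 125}

# Hypothetical monitored values observed during a performance problem.
monitored = {"iops": 2950, "throughput_mb_s": 60}

for metric, target in expected.items():
    utilization = monitored[metric] / target
    verdict = "likely bottleneck" if utilization >= 0.95 else "headroom left"
    print(f"{metric}: {monitored[metric]}/{target} ({utilization:.0%}) -> {verdict}")
```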


Archive | 2018

Data and Computation Movement in Fog Environments: The DITAS Approach

Pierluigi Plebani; David García-Pérez; Maya Anderson; David Bermbach; Cinzia Cappiello; Ronen I. Kat; Achilleas Marinakis; Vrettos Moulos; Frank Pallas; Stefan Tai; Monica Vitali

Data-intensive applications are becoming very important in several domains, including e-health, government 2.0, smart cities, and industry 4.0. In fact, the significant increase in sensor deployments in Internet of Things (IoT) environments, in conjunction with the huge amounts of data generated by smart and intelligent devices such as smartphones, requires proper data management. The goal of this chapter is to show how to improve data management when data are produced and consumed in a Fog Computing environment, where both resources at the edge of the network (e.g., sensors and mobile devices) and resources in the cloud (e.g., virtual machines) are involved and need to operate seamlessly together. Based on the approach proposed in the European DITAS project, data and computation movement between the edge and the cloud are studied with the goal of balancing characteristics such as latency and response time (when data are stored in edge-located resources) against scalability and reliability (when data reside in the cloud). To enable data and computation movement, an approach based on the principles of Service-Oriented Computing, applied to a Fog environment, has been adopted.
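
To illustrate the trade-off, the following sketch (invented, not the DITAS decision logic) encodes a simple placement rule that keeps latency-critical, narrowly shared data at the edge and moves widely shared data to the cloud:

```python
# Hedged sketch: a toy placement decision for the edge-vs-cloud trade-off.
# The rule of thumb and the consumer threshold are made up for illustration.

def choose_placement(latency_sensitive: bool, expected_consumers: int) -> str:
    # Latency-critical data with few consumers stays edge-local;
    # widely shared data benefits from cloud scalability and reliability.
    if latency_sensitive and expected_consumers < 100:
        return "edge"
    return "cloud"

print(choose_placement(latency_sensitive=True, expected_consumers=10))     # edge
print(choose_placement(latency_sensitive=False, expected_consumers=5000))  # cloud
```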


International Conference on Service-Oriented Computing | 2017

BenchFoundry: A Benchmarking Framework for Cloud Storage Services

David Bermbach; Jörn Kuhlenkamp; Akon Dey; Alan Fekete; Stefan Tai

Understanding the quality of services in general, and of cloud storage services in particular, is often crucial. Previous proposals to benchmark storage services are too restricted to cover the full variety of NoSQL stores, or else too simplistic to capture properties of use by realistic applications; they also typically measure only one facet of the complex tradeoffs between different qualities of service. In this paper, we present BenchFoundry, which is not a benchmark itself but rather a benchmarking framework that can execute arbitrary application-driven benchmark workloads in a distributed deployment while measuring multiple qualities at the same time. BenchFoundry can be used or extended for any kind of storage service. Specifically, BenchFoundry is the first system in which workload specifications become mere configuration files instead of code. In our design, we have put special emphasis on ease of use and deterministic repeatability of benchmark runs, which is achieved through a trace-based workload model.
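
A minimal sketch of the workload-as-configuration idea, using an invented trace format rather than BenchFoundry's actual one: a deterministic trace of timestamped operations is read from a file-like source and replayed in order:

```python
# Hedged sketch: replaying a trace-based workload. The trace format
# (relative time in ms, operation, key) is hypothetical.
import io

TRACE = """\
0 put user:1
10 get user:1
25 get user:1
"""

def replay(trace):
    for line in trace:
        offset_ms, op, key = line.split()
        # A real runner would wait until offset_ms and call the store;
        # printing keeps the sketch self-contained and deterministic.
        print(f"t={offset_ms}ms  {op}({key})")

replay(io.StringIO(TRACE))
```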


International Conference on Service-Oriented Computing | 2017

Designing Suitable Access Control for Web-Connected Smart Home Platforms

Sebastian Werner; Frank Pallas; David Bermbach

Access control in web-connected smart home platforms exhibits unique characteristics and challenges. In this paper, we therefore discuss suitable access control mechanisms specifically tailored to such platforms. Based on a set of relevant scenarios, we identify requirements and available technologies for fulfilling them. We then present our experiences gained from implementing access control meeting the identified requirements in OpenHAB, a widely used smart home platform.
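
As a rough illustration of per-device access control of the kind such platforms need (this is not OpenHAB's mechanism; the roles, device classes, and policy table are hypothetical), a deny-by-default policy check might look like this:

```python
# Hedged sketch: deny-by-default access control for smart home actions.
# Policy: which roles may perform which actions on which device classes.
POLICY = {
    ("resident", "light", "switch"): True,
    ("guest", "light", "switch"): True,
    ("guest", "door_lock", "unlock"): False,
    ("resident", "door_lock", "unlock"): True,
}

def is_allowed(role: str, device_class: str, action: str) -> bool:
    # Anything not explicitly granted is refused.
    return POLICY.get((role, device_class, action), False)

assert is_allowed("resident", "door_lock", "unlock")
assert not is_allowed("guest", "door_lock", "unlock")
```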


European Conference on Service-Oriented and Cloud Computing | 2017

DITAS: Unleashing the Potential of Fog Computing to Improve Data-Intensive Applications

Pierluigi Plebani; David García-Pérez; Maya Anderson; David Bermbach; Cinzia Cappiello; Ronen I. Kat; Achilleas Marinakis; Vrettos Moulos; Frank Pallas; Barbara Pernici; Stefan Tai; Monica Vitali

Although it was initially introduced in the telecommunication domain by Cisco [1], Fog Computing has recently been emerging as a hot topic in the software domain as well, especially for data-intensive applications (DIA), with the goal of creating a continuum between the resources living in the Cloud and the ones living at the Edge [3]. In fact, especially because of the significant increase in smart devices connected to the Internet (e.g., smartphones, Raspberry Pis), operators at the edge of the network are no longer considered merely content consumers but also content providers, i.e., so-called prosumers. This new scenario implies a paradigm shift, and Fog Computing is contributing to it by considering the Cloud and the Edge as parts of the same platform.


Archive | 2017

Implementation Objectives and Challenges

David Bermbach; Erik Wittern; Stefan Tai

The previous part of the book addressed how to design a good cloud service benchmark. In this part, we shift our focus from the design to its implementation as part of a benchmarking tool and (later on) to its runtime execution. In this chapter, we start by introducing implementation objectives for cloud service benchmarks: even with a careful benchmark design that considers all design objectives, the actual benchmark implementation can still run afoul of the goals initially set. In addition to outlining implementation objectives, we provide concrete examples of how they can be achieved in practice.
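
One example of such an implementation objective is keeping measurement overhead out of the measured interval. A minimal sketch, not taken from the book: timestamps are taken immediately around a (simulated) service call, and result handling is deferred until afterwards:

```python
# Hedged sketch: measure first, process later, so logging and bookkeeping
# do not inflate the measured latency. The service call is simulated.
import time

def call_service():
    time.sleep(0.01)  # stand-in for an actual cloud service request

results = []
for _ in range(5):
    start = time.perf_counter()
    call_service()
    stop = time.perf_counter()             # timestamp immediately ...
    results.append((start, stop - start))  # ... defer result handling

for start, duration in results:
    print(f"{duration * 1000:.2f} ms")
```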


Archive | 2017

Experiment Setup and Runtime

David Bermbach; Erik Wittern; Stefan Tai

The previous chapter described the objectives and challenges of implementing a cloud service benchmark that has already been designed. With an implementation at hand, it can then be used to run actual experiments. In this chapter, we discuss how to deploy, set up, and run such experiments. For this purpose, we start by outlining the typical process underlying experiment setup and execution. Afterwards, we discuss an important precondition for running experiments, namely ensuring that the required resources are readily available when needed. We then dive into challenges that occur directly before, during, and after running an experiment, including challenges associated with collecting benchmarking data, data provenance, and storing data.
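
As an illustration of the data-provenance point, here is a minimal sketch (field names are invented) that stores experiment metadata alongside the collected measurements so results remain interpretable later:

```python
# Hedged sketch: persist configuration, version, and timing metadata next
# to the raw measurements. All field names and values are illustrative.
import json
import time

experiment = {
    "benchmark": "example-run",          # hypothetical identifier
    "tool_version": "1.0.0",
    "config": {"threads": 8, "duration_s": 300},
    "started_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "measurements_file": "results.csv",  # raw data stored separately
}

with open("experiment_metadata.json", "w") as f:
    json.dump(experiment, f, indent=2)
```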

Collaboration


An overview of David Bermbach's collaborations.

Top Co-Authors

Stefan Tai, Technical University of Berlin
Frank Pallas, Technical University of Berlin
Jörn Kuhlenkamp, Technical University of Berlin
Jacob Eberhardt, Karlsruhe Institute of Technology
Akon Dey, University of Sydney
Achilleas Marinakis, National Technical University of Athens