Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christopher B. Hauser is active.

Publication


Featured research published by Christopher B. Hauser.


IEEE International Conference on Cloud Computing Technology and Science | 2014

The CACTOS Vision of Context-Aware Cloud Topology Optimization and Simulation

Per-Olov Östberg; Henning Groenda; Stefan Wesner; James Byrne; Dimitrios S. Nikolopoulos; Craig Sheridan; Jakub Krzywda; Ahmed Ali-Eldin; Johan Tordsson; Erik Elmroth; Christian Stier; Klaus Krogmann; Jörg Domaschka; Christopher B. Hauser; Peter J. Byrne; Sergej Svorobej; Barry McCollum; Zafeirios Papazachos; Darren Whigham; Stephan Ruth; Dragana Paurevic

Recent advances in hardware development coupled with the rapid adoption and broad applicability of cloud computing have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity through a combination of in-depth analysis of application behavior with insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.


IEEE/ACM International Conference on Utility and Cloud Computing | 2015

Cloud orchestration features: are tools fit for purpose?

Daniel Baur; Daniel Seybold; Frank Griesinger; Athanasios Tsitsipas; Christopher B. Hauser; Jörg Domaschka

Even though the cloud era began almost a decade ago, many problems of the first hour are still around. Vendor lock-in and poor tool support hinder users from taking full advantage of the main cloud features: dynamism and scale. This has given rise to tools that target the seamless management and orchestration of cloud applications. All these tools promise similar capabilities and are barely distinguishable, which makes it hard to select the right tool. In this paper, we objectively investigate required and desired features of such tools and define them. We then select three open-source tools (Brooklyn, Cloudify, Stratos) and compare them according to the features they support, using our experience gained from deploying and operating a standard three-tier application. This exercise leads to a fine-grained feature list that enables the comparison of such tools based on objective criteria, as well as a rating of three popular cloud orchestration tools. In addition, it leads to the insight that the tools are on the right track, but that further development, and particularly research, is necessary to satisfy all demands.


Enterprise Distributed Object Computing | 2014

Reliability and Availability Properties of Distributed Database Systems

Jörg Domaschka; Christopher B. Hauser; Benjamin Erb

Distributed database systems represent an essential component of modern enterprise application architectures. If the overall application needs to provide reliability and availability, the database has to guarantee these properties as well. Non-functional database features such as replication, consistency, conflict management, and partitioning represent subsequent challenges for successfully designing and operating an available and reliable database system. In this document, we identify why these concepts are important for databases and classify their design options. Moreover, we survey how eleven modern database systems implement these reliability and availability properties.
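The consistency design options surveyed above are often parameterised by read/write quorums. As a minimal illustration of the idea (not taken from the paper): with N replicas, read quorum R, and write quorum W, every read is guaranteed to see the latest write only if every read quorum overlaps every write quorum, i.e. R + W > N.

```python
# Majority-quorum overlap check: a well-known rule for quorum-based
# replication, shown here only to illustrate the design space the
# survey classifies (not the paper's own formalism).

def quorums_overlap(n: int, r: int, w: int) -> bool:
    """True if any read quorum must intersect any write quorum,
    so a read always observes the most recent committed write."""
    return r + w > n

print(quorums_overlap(3, 2, 2))  # True: classic majority quorums
print(quorums_overlap(3, 1, 1))  # False: stale reads are possible
```

Systems trade this overlap guarantee against latency: lowering R or W speeds up operations but permits stale reads, which is exactly the kind of design option the survey classifies.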


Software Language Engineering | 2016

Experiences of models@run-time with EMF and CDO

Daniel Seybold; Jörg Domaschka; Alessandro Rossini; Christopher B. Hauser; Frank Griesinger; Athanasios Tsitsipas

Model-driven engineering promotes models and model transformations as the primary assets in software development. The models@run-time approach provides an abstract representation of a system at run-time, whereby changes in the model and the system are constantly reflected on each other. In this paper, we report on more than three years of experience with realising models@run-time in scalable cloud scenarios using a technology stack consisting of the Eclipse Modelling Framework (EMF) and Connected Data Objects (CDO). We establish requirements for the three roles domain-specific language (DSL) designer, developer, and operator, and compare them against the capabilities of EMF/CDO. It turns out that this technology stack is well-suited for DSL designers, but less recommendable for developers and even less suited for operators. For these roles, we experienced a steep learning curve and several lacking features that hinder the implementation of models@run-time in scalable cloud scenarios. Performance experiences show limitations for write-heavy scenarios with an increasing number of total elements. While we do not discourage the use of EMF/CDO for such scenarios, we recommend that its adoption for similar use cases is carefully evaluated until this technology stack has realised our wish list of advanced features.


IEEE International Conference on Software Quality, Reliability and Security Companion | 2018

Predictability of Resource Intensive Big Data and HPC Jobs in Cloud Data Centres

Christopher B. Hauser; Jörg Domaschka; Stefan Wesner

Cloud data centres share physical resources among multiple users at the same time, which can lead to resource interference. Especially with resource-intensive computations such as HPC or big data processing jobs, neighbouring applications in a cloud data centre may experience degraded performance of their assigned virtual resources. This work evaluates the predictability of such resource-intensive jobs in principle. The assumption is that the execution behaviour of such computations depends on the computation parameters and the environment parameters. From these two influencing factors, predictability is achieved by removing the hardware-dependent environment parameters from the observed execution behaviour, in order to compute the execution behaviour of computations with similar computation parameters on a different environment. The assumptions are analysed and evaluated with the HPC application Molpro.
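The prediction idea described above can be sketched as factoring an observed runtime into a hardware-independent part and an environment-dependent part. The following is a hypothetical illustration of that separation; the function names, the multiplicative model, and the numbers are assumptions for the sketch, not taken from the paper.

```python
# Sketch: observed runtime = (hardware-independent work) x (environment factor).
# Calibrating the environment factor with a reference job lets us transfer a
# measured runtime to a different environment. Purely illustrative.

def environment_factor(reference_runtime: float, baseline_runtime: float) -> float:
    """How much slower (>1) or faster (<1) this environment is than the
    baseline, measured by running a known reference job on both."""
    return reference_runtime / baseline_runtime

def predict_runtime(observed_runtime: float,
                    observed_env_factor: float,
                    target_env_factor: float) -> float:
    """Remove the source environment's influence from the observation,
    then apply the target environment's factor."""
    hardware_independent = observed_runtime / observed_env_factor
    return hardware_independent * target_env_factor

# A job took 120 s on an environment 1.5x slower than baseline; predict its
# runtime on an environment that runs at 0.8x the baseline cost.
print(predict_runtime(120.0, 1.5, 0.8))  # 64.0
```

Real interference effects are rarely a single multiplicative factor, so a sketch like this only captures the paper's high-level idea of separating computation from environment parameters.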


IEEE International Conference on Cloud Computing Technology and Science | 2018

Context-aware cloud topology optimization for OpenStack

Christopher B. Hauser; Athanasios Tsitsipas; Jörg Domaschka

CACTOS offers cloud developers, operators, and consultants context-aware optimisation for private clouds. It leads to a better and more reliable user experience by optimising the mapping of virtual to physical resources, considering application requirements and heterogeneity. The optimisation and simulation require monitoring, as well as an integration for controlling and intercepting client requests.


Utility and Cloud Computing | 2017

Dynamic Network Scheduler for Cloud Data Centres with SDN

Christopher B. Hauser; Santhosh Ramapuram Palanivel

The presented dynamic network scheduler improves the fairness and efficiency of network utilization in a cloud data centre. The proposed design uses a directed graph that represents the network: routers, switches, physical hypervisors, and virtual machines (VMs) are graph nodes, and the physical network connections are weighted edges. The edges have a guaranteed transmission rate derived from the number of devices sharing an outgoing link with a defined bandwidth. Moreover, to maximize utilization of the resources, each node dynamically gets a deserved rate depending on the measured utilization metrics. A VM throttles its traffic up or down to its deserved rate. The conceptual design of the dynamic network scheduler is prototypically implemented and evaluated. The implementation uses Software-Defined Networking (SDN) with OpenFlow, the Ryu SDN controller, and Open vSwitch as the software switch on the hypervisor level. The presented dynamic network scheduler uses OpenFlow for monitoring and for applying flows to control the link bandwidth. The evaluation shows that the dynamic network scheduler maximizes fairness in resource sharing while minimizing unutilized resources.
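The two rates described above can be sketched as follows: each VM sharing an uplink gets a guaranteed rate of bandwidth divided by the number of sharers, and unused guaranteed capacity is redistributed to VMs whose measured demand exceeds their fair share, yielding a "deserved" rate. This is an illustrative max-min-style reallocation under assumed names and semantics, not the paper's actual scheduler.

```python
# Illustrative sketch of guaranteed vs. deserved rates on one shared uplink.
# All function names and the redistribution policy are assumptions.

def guaranteed_rates(link_bandwidth: float, vms: list[str]) -> dict[str, float]:
    """Equal split of an outgoing link among the VMs sharing it."""
    share = link_bandwidth / len(vms)
    return {vm: share for vm in vms}

def deserved_rates(link_bandwidth: float,
                   demand: dict[str, float]) -> dict[str, float]:
    """Give each VM min(demand, fair share); redistribute the leftover
    capacity among VMs whose demand exceeds their fair share."""
    fair = link_bandwidth / len(demand)
    rates = {vm: min(d, fair) for vm, d in demand.items()}
    leftover = link_bandwidth - sum(rates.values())
    hungry = [vm for vm, d in demand.items() if d > fair]
    while leftover > 1e-9 and hungry:
        bonus = leftover / len(hungry)
        leftover = 0.0
        still_hungry = []
        for vm in hungry:
            grant = min(bonus, demand[vm] - rates[vm])
            rates[vm] += grant
            leftover += bonus - grant           # capacity a sated VM returns
            if rates[vm] < demand[vm] - 1e-9:
                still_hungry.append(vm)
        hungry = still_hungry
    return rates

# 1000 Mbit/s uplink shared by three VMs with uneven measured demand:
# vm1 only needs 100, so vm2 and vm3 each deserve 450 instead of 333.
print(deserved_rates(1000.0, {"vm1": 100.0, "vm2": 600.0, "vm3": 600.0}))
```

In an SDN deployment this computation would run in the controller, which then installs per-VM rate-limiting flows; the graph structure from the abstract generalises the single-link case to every shared edge on a VM's path.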


IEEE International Conference on Cloud Computing Technology and Science | 2017

ViCE Registry: An Image Registry for Virtual Collaborative Environments

Christopher B. Hauser; Jörg Domaschka

The paper presents a concept and an implementation of an image registry for virtual collaborative environments (ViCE). This cross-platform and cross-organizational image registry bridges gaps between execution environment platforms and user communities. The presented concept consists of a conceptual architecture and a sophisticated set of metadata fields to describe images as virtual environments. The main challenge is that the definition of an image varies widely. The terminology defines execution environments, which consist of runtime technologies (virtual machines, containers, applications) and a management layer (basic management, cloud computing, container clusters, job schedulers). An execution environment runs a deployable implicit or declarative image to build a virtual environment. With this abstraction, the image registry can share virtual environments across cloud computing, HPC, and classroom setups, with any of KVM, Docker, Singularity, etc. in use. The open-source implementation is written in Go and is presented with a scalable microservice architecture, using Couchbase as the metadata store and RabbitMQ as the communication hub between software components.
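The metadata abstraction described above can be sketched as a record that names an image's runtime technology and management layer, so that heterogeneous execution environments can discover compatible images. The field names and the compatibility rule below are assumptions for illustration; they are not the registry's actual schema (which the paper implements in Go).

```python
# Hypothetical sketch of a ViCE-style image metadata record.
# Field names and values are illustrative, not the real schema.
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    name: str
    runtime: str            # e.g. "kvm", "docker", "singularity"
    management_layer: str   # e.g. "cloud", "container-cluster", "job-scheduler"
    declarative: bool       # declarative recipe vs. implicit binary image
    tags: list[str] = field(default_factory=list)

def compatible(record: ImageRecord, supported_runtimes: set[str]) -> bool:
    """An execution environment can run the image if it supports
    the image's runtime technology."""
    return record.runtime in supported_runtimes

img = ImageRecord("molpro-env", "singularity", "job-scheduler", declarative=True)
print(compatible(img, {"docker", "singularity"}))  # True: HPC site with Singularity
print(compatible(img, {"kvm"}))                    # False: VM-only cloud
```

The point of the abstraction is that the same record can be queried by a cloud platform, an HPC scheduler, or a classroom setup, each filtering on the runtimes it supports.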


OTM Confederated International Conferences "On the Move to Meaningful Internet Systems" | 2017

Gibbon: An Availability Evaluation Framework for Distributed Databases

Daniel Seybold; Christopher B. Hauser; Simon Volpert; Jörg Domaschka

Driven by new application domains, the database management system (DBMS) landscape has significantly evolved from single-node DBMSs to distributed database management systems (DDBMSs). In parallel, cloud computing became the preferred solution for running distributed applications. Hence, modern DDBMSs are designed to run in the cloud. Yet, in distributed systems the probability of failures grows with the number of entities involved, and by using cloud resources the probability of failures increases even more. Therefore, DDBMSs apply data replication across multiple nodes to provide high availability. Yet, high availability limits consistency or partition tolerance, as stated by the CAP theorem. As the decision for two of the three attributes is not binary, the heterogeneous landscape of DDBMSs gets even more complex when it comes to their high-availability mechanisms. Hence, the selection of a highly available DDBMS to run in the cloud becomes a very challenging task, as supportive evaluation frameworks are not yet available. In order to ease the selection and increase the trust in running DDBMSs in the cloud, we present the Gibbon framework, a novel availability evaluation framework for DDBMSs. Gibbon defines quantifiable availability metrics, a customisable evaluation methodology, and a novel evaluation framework architecture. Gibbon is discussed by means of an availability evaluation of MongoDB, analysing the takeover and recovery times.
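The two metrics mentioned above, takeover time and recovery time, can be sketched as differences between timestamped cluster events: how long until a replica takes over after the primary fails, and how long until the failed node has rejoined. The event names below are hypothetical; Gibbon's actual metric definitions may differ.

```python
# Hedged sketch of availability metrics computed from timestamped events
# (seconds since experiment start). Event names are illustrative.

def takeover_time(events: dict[str, float]) -> float:
    """Time from primary failure until a replica starts serving as primary."""
    return events["new_primary_elected"] - events["primary_failed"]

def recovery_time(events: dict[str, float]) -> float:
    """Time from primary failure until the failed node has rejoined
    and the replica set is back at full strength."""
    return events["node_rejoined"] - events["primary_failed"]

events = {"primary_failed": 10.0, "new_primary_elected": 14.5, "node_rejoined": 55.0}
print(takeover_time(events), recovery_time(events))  # 4.5 45.0
```

In an evaluation run these timestamps would come from injected failures and from observing the DDBMS (e.g. a MongoDB replica set election), which is the part the framework automates.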


Archive | 2017

CactoSim simulation framework final prototype: accompanying document for project deliverable D6.4

Gabriel González Castañé; Sergej Svorobej; James Byrne; Christian Stier; Sebastian Krach; Jakub Krzywda; Christopher B. Hauser; Athanasios Tsitsipas; Mayur Ahir; James Allsop; Kam Star; Peter J. Byrne; Ahmed Ali-Eldin

Figure 75: Simulation Plugin class diagram (SimPlugin with fields simulationProcess : SimProcess, remainingSimulationDuration : double, nextStopPoint : double and methods resumeSimulation(time : double), run(), finishExecution(), waitForSimProcessResume(), stopFromSim(time : double, mainEvent : SimEvent)).

The abstract methods to be implemented at the simulation engine are grouped into four categories depending on their main aim: controlling the simulation flow, controlling the optimisation steps, reacting to actions from the simulation engine, and managing the optimisation of resources.

Collaboration


Dive into Christopher B. Hauser's collaborations.

Top Co-Authors

Christian Stier

Center for Information Technology

Henning Groenda

Forschungszentrum Informatik

James Byrne

Dublin City University