Publication


Featured research published by Sascha Bosse.


Reliability Engineering & System Safety | 2016

Multi-objective optimization of IT service availability and costs

Sascha Bosse; Matthias Splieth; Klaus Turowski

The continuous provision of highly available IT services is a crucial task for IT service providers in order to fulfill service level agreements with customers. Although introducing redundant components increases availability, the associated costs may be very high. Decision makers in the IT service design stage therefore face a trade-off between cost and availability when defining suitable service level objectives. Although this task can be seen as a redundancy allocation problem, existing definitions in this area are not transferable to IT service design because they assume independent component failures, an assumption that has been identified as unrealistic in IT systems. In this paper, a multi-objective redundancy allocation problem for IT service design is defined, and a Petri net Monte Carlo simulation is developed that estimates the availability and costs of a specific design. In order to provide (sub)optimal solutions to an IT service redundancy allocation problem, two meta-heuristics, namely a genetic algorithm and tabu search, are adapted. The approach is applied to optimize the IT service design of an application service provider in terms of availability and cost, demonstrating its feasibility and suitability.
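
The availability estimation at the heart of this approach can be illustrated with a small simulation. The following is a minimal sketch, not the authors' Petri net model: a seeded Monte Carlo run for a two-component redundant design with a dependent (common-mode) failure, the kind of inter-component dependency the paper argues classical RAP formulations ignore. All probabilities are illustrative assumptions.

```python
import random

def estimate_availability(p_fail=0.05, p_common=0.01, n_samples=100_000, seed=42):
    """Estimate service availability by sampling independent time slices.

    The service is up when no common-mode failure occurred and at
    least one of the two redundant components is working. The
    common-mode term models the dependent failures that make
    classical, independence-based RAP formulations unrealistic
    for IT systems.
    """
    rng = random.Random(seed)
    up = 0
    for _ in range(n_samples):
        common = rng.random() < p_common   # both components fail together
        a = rng.random() >= p_fail         # component A up on its own
        b = rng.random() >= p_fail         # component B up on its own
        if not common and (a or b):
            up += 1
    return up / n_samples

# Analytically: (1 - 0.01) * (1 - 0.05**2) = 0.987525, so the
# estimate should land close to 0.9875.
availability_estimate = estimate_availability()
```

Note how the common-mode term dominates: without it, two-fold redundancy alone would yield 0.9975, so ignoring dependent failures would overstate availability.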


Hawaii International Conference on System Sciences | 2017

Collaborative Software Performance Engineering for Enterprise Applications

Hendrik Müller; Sascha Bosse; Markus Wirth; Klaus Turowski

In the domain of enterprise applications, organizations usually implement third-party standard software components in order to save costs. Hence, application performance monitoring activities constantly produce log entries that are comparable to a certain extent, holding the potential for valuable collaboration across organizational borders. Taking advantage of this fact, we propose a collaborative knowledge base aimed at supporting decisions in performance engineering activities carried out during early design phases of planned enterprise applications. To verify our assumption of cross-organizational comparability, machine learning algorithms were trained on monitoring logs of 18,927 standard application instances running productively at different organizations around the globe. Using random forests, we were able to predict the mean response time for selected standard business transactions with a mean relative error of 23.19 percent. Hence, the approach combines the benefits of existing measurement-based and model-based performance prediction techniques, leading to competitive advantages enabled by inter-organizational collaboration.
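
The reported error metric can be made concrete. The sketch below computes the mean relative error (MRE) between predicted and measured response times; the sample values are hypothetical, while the paper reports an MRE of 23.19 percent over the 18,927 monitored instances.

```python
def mean_relative_error(predicted, measured):
    """MRE = mean over all observations of |predicted - measured| / measured."""
    return sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(measured)

predicted = [125.0, 250.0, 75.0]   # ms, hypothetical model output
measured  = [100.0, 200.0, 100.0]  # ms, hypothetical monitoring data
mre = mean_relative_error(predicted, measured)  # -> 0.25, i.e. 25 percent
```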


Availability, Reliability and Security | 2015

Optimizing IT Service Costs with Respect to the Availability Service Level Objective

Sascha Bosse; Matthias Splieth; Klaus Turowski

Meeting the availability service level objective while minimizing the costs of IT service provision is a major challenge for IT service designers. In order to optimize component choices and redundancy mechanisms, the redundancy allocation problem (RAP) was defined. RAP solution algorithms support decision makers with (sub)optimal design configurations that trade off availability and costs. However, existing RAP definitions are not suitable for IT service design since they do not include inter-component dependencies such as common-mode failures. Therefore, a RAP definition is provided in this paper in which characteristics of modern IT systems such as standby mechanisms, performance degradation and generic dependencies are integrated. The RAP definition and an adapted genetic algorithm are applied to optimize the costs of an excerpt of an application service provider's IT system landscape. The results demonstrate that the developed approach is applicable and suitable for minimizing IT service costs while fulfilling the availability guarantees documented in service level agreements.
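
The adapted genetic algorithm is not published as code here, but its core idea can be sketched: evolve per-stage redundancy levels of a series system, penalizing designs that miss the availability objective so that cheap feasible designs win. Component availabilities, costs, and all GA parameters below are illustrative assumptions, not values from the paper.

```python
import random

P = [0.95, 0.99, 0.90]   # per-component availability, one entry per stage
C = [4.0, 10.0, 2.0]     # per-component cost
TARGET = 0.999           # availability service level objective

def availability(design):
    a = 1.0
    for k, p in zip(design, P):
        a *= 1.0 - (1.0 - p) ** k   # k-fold parallel redundancy per stage
    return a

def cost(design):
    return sum(k * c for k, c in zip(design, C))

def fitness(design):
    # Large penalty makes any feasible design beat any infeasible one.
    penalty = 1000.0 if availability(design) < TARGET else 0.0
    return cost(design) + penalty

def genetic_search(generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(1, 4) for _ in P] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            mom, dad = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(P))
            child = mom[:cut] + dad[cut:]          # one-point crossover
            i = rng.randrange(len(child))
            child[i] = rng.randint(1, 4)           # point mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = genetic_search()
```

With these toy numbers, a design such as [3, 2, 4] is feasible (availability about 0.9997 at cost 40), so the search has cheap feasible designs to converge toward.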


The Internet of Things | 2018

Internet of Things Middleware: How Suitable are Service-oriented Architecture and Resource-oriented Architecture?

Janick Kubela; Matthias Pohl; Sascha Bosse; Klaus Turowski

Over the last years, the Internet of Things (IoT) has been researched widely, and various IoT applications have been developed for different use cases. Numerous middleware solutions for the IoT are based on Service-oriented Architecture (SoA) and Resource-oriented Architecture (RoA). Both approaches support the connection of distributed objects, but no research has examined the suitability of SoA and RoA against common IoT requirements in an adequate scope. In this paper, the fundamental mechanisms of SoA and RoA are compared with regard to connectivity, compatibility, scalability, robustness and security. This comparison shows that both approaches are suitable as the basis of an IoT middleware. Nevertheless, RoA lacks support for bi-directional communication and real-time analysis, while SoA can rapidly become a heavyweight middleware solution. Therefore, the use of a mixed middleware is recommended.


International Conference on Cloud Computing and Services Science | 2018

A Data-Science-as-a-Service Model.

Matthias Pohl; Sascha Bosse; Klaus Turowski

The keen interest in data analytics, as well as its highly complex and time-consuming implementation, leads to an increasing demand for services of this kind. Several approaches claim to provide data analytics functions as a service; however, they do not perform data analysis at all and provide only an infrastructure, a platform or a software service. This paper presents a Data-Science-as-a-Service model that covers all of the related tasks in data analytics and, in contrast to former technical considerations, takes a problem-centric and technology-independent approach. The described model enables customers to categorize terms in data analytics environments.


The Internet of Things | 2017

A Holistic View of the IoT Process from Sensors to the Business Value.

Ateeq Khan; Matthias Pohl; Sascha Bosse; Stefan Willi Hart; Klaus Turowski

The Internet of Things (IoT) is a focus of research, and industries are investing heavily due to its potential benefits in various fields. This paper provides a holistic view of the different phases in IoT, covering everything from sensor data collection to the generation of business value. We propose to use the proven Six Sigma methodology for IoT projects and describe each phase using a structured approach, discussing the consequences of selecting each phase as an entry or starting point. We use predictive maintenance as a use case to demonstrate the practicability of our IoT process. Using these insights, IoT project managers can identify the required activities and competencies to increase the probability of success. Finally, we summarize the findings and highlight future work.


IEEE Conference on Business Informatics | 2017

Capacity Planning as a Service for Enterprise Standard Software

Hendrik Müller; Sascha Bosse; Matthias Pohl; Klaus Turowski

Too often, capacity planning activities that are crucial to software performance are pushed to late development phases, where trivial measurement-based assessment techniques can be employed on enterprise applications nearing completion. This procedure is highly inefficient and time-consuming, and it may result in disproportionately high correction costs to meet existing service level agreements. However, enterprise applications nowadays make extensive use of standard software that is shipped by large software vendors to a wide range of customers. Therefore, an application similar to the one whose capacity is being planned may already be in production and constantly produce log data as part of application performance monitoring facilities. In this paper, we demonstrate how potential capacity planning service providers can leverage the dissemination effects of standard software by applying machine learning techniques to measurement data from various running enterprise applications. Utilizing prediction models trained on monitoring data at large scale enables cost-efficient measurement-based prediction techniques to be used in early design phases. Therefore, we integrate knowledge discovery activities into well-known capacity planning steps, which we adapt to the special characteristics of enterprise applications. We evaluate the feasibility of the modeled process using measurement data from more than 1,800 productively running enterprise applications in order to predict the response time of a widely used standard business transaction. Based on the trained model, we demonstrate how to simulate and analyze future workload scenarios. Using a Pareto approach, we were able to identify cost-effective design alternatives for a planned enterprise application.
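
The Pareto step mentioned at the end can be illustrated independently of the trained models. The sketch below filters hypothetical design alternatives down to the non-dominated (cost, response time) pairs; all candidate names and values are made up.

```python
def pareto_front(designs):
    """Keep designs that no other design beats on both cost and
    response time simultaneously (lower is better for both)."""
    front = []
    for d in designs:
        dominated = any(
            o["cost"] <= d["cost"] and o["rt"] <= d["rt"] and o != d
            for o in designs
        )
        if not dominated:
            front.append(d)
    return front

candidates = [
    {"name": "A", "cost": 10, "rt": 300},
    {"name": "B", "cost": 20, "rt": 150},
    {"name": "C", "cost": 25, "rt": 160},  # dominated by B: pricier and slower
    {"name": "D", "cost": 40, "rt": 100},
]
front = pareto_front(candidates)  # keeps A, B and D
```

A decision maker then picks from the front according to the service level objective: A if cost dominates, D if response time does, B as the balanced option.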


IEEE Conference on Business Informatics | 2017

Providing Clarity on Big Data Technologies: A Structured Literature Review

Matthias Volk; Sascha Bosse; Klaus Turowski

The success of big data projects depends heavily on the ability to decide which technological expertise and architectures are required in such a context. However, confusion in scientific discourse regarding the systemization of big data technologies exacerbates this problem. Therefore, in this paper, a structured literature review is conducted to assess the current state of the art and give an overview of big data technology classifications. Only a very limited number of classification approaches exist, and these are partially incomplete or address only single aspects of big data technologies, such as the database type used. In conducting this literature review, possible starting points were also derived for extending existing classification approaches or developing new ones, which can form the basis of a future big data classification framework. Such a concept could be used to improve decision support for planning and managing big data projects. However, a common understanding would require unambiguous definitions of the term technology and of various other expressions that are used synonymously in the literature today.


Archive | 2017

Optimization of Data Center Fault Tolerance Design

Sascha Bosse; Klaus Turowski

Balancing the costs and quality of offered IT services is a challenging task for data center providers. In the case of availability, fault tolerance can be achieved by introducing redundancy mechanisms into the service design. Redundancy allocation problems can be defined as combinatorial optimization problems to identify cost-effective redundancy configurations in which availability objectives are met. However, these approaches should be flexible enough to trade off effort and benefit in a specific scenario. Therefore, a redundancy allocation problem is proposed in this chapter that is capable of modeling the specific characteristics of the IT system to be analyzed. In order to identify suitable design configurations, a generic Petri net simulation model is combined with a genetic algorithm. By defining the solution algorithm adaptively to the complexity of the considered problem definition, users are able to reduce modeling as well as computational effort. The suitability of the approach is demonstrated in the use case of an international application service provider.


Complex Systems Informatics and Modeling Quarterly | 2017

Capacity Management as a Service for Enterprise Standard Software

Hendrik Müller; Sascha Bosse; Klaus Turowski

Capacity management approaches optimize component utilization from a strongly technical perspective. The quality of the involved services is considered only implicitly, by linking it to resource capacity values. This practice hinders the evaluation of design alternatives with respect to given service levels that are expressed in user-centric metrics, such as the mean response time for a business transaction. We argue that the historical workload traces used often contain a variety of performance-related information that allows for the integration of performance prediction techniques through machine learning. Since enterprise applications make extensive use of standard software that is shipped by large software vendors to a wide range of customers, standardized prediction models can be trained and provisioned as part of a capacity management service, which we propose in this article. Therefore, we integrate knowledge discovery activities into well-known capacity planning steps, which we adapt to the special characteristics of enterprise applications. Using a real-world example, we demonstrate how prediction models trained on a large scale of monitoring data enable cost-efficient measurement-based prediction techniques in early design and redesign phases of planned or running applications. Finally, based on the trained model, we demonstrate how to simulate and analyze future workload scenarios. Using a Pareto approach, we were able to identify cost-effective design alternatives for an enterprise application whose capacity is being managed.

Collaboration


Top co-authors of Sascha Bosse, all affiliated with Otto-von-Guericke University Magdeburg:

Klaus Turowski
Hendrik Müller
Matthias Splieth
Ateeq Khan
Christian Schulz
Frederik Kramer
Naoum Jamous
Carsten Görling
Johannes Hintsch