Thomas Setzer
Karlsruhe Institute of Technology
Publications
Featured research published by Thomas Setzer.
IEEE International Conference on Cloud Computing Technology and Science | 2016
Andreas Wolke; Martin Bichler; Thomas Setzer
Nowadays, corporate data centers leverage virtualization technology to cut operational and management costs. Virtualization allows physical servers to be partitioned into virtual machines (VMs) that run particular business applications. This has led to a new stream in the capacity planning literature dealing with the problem of assigning VMs with volatile demands to physical servers in a static way such that energy costs are minimized. Live migration technology allows for dynamic resource allocation, where a controller responds to overload or underload on a server during runtime and reallocates VMs in order to maximize energy efficiency. Dynamic resource allocation is often seen as the most efficient means to allocate hardware resources in a data center. Unfortunately, there is hardly any experimental evidence for this claim. In this paper, we provide the results of an extensive experimental analysis of both capacity management approaches on a data center infrastructure. We show that, with typical workloads of transactional business applications, dynamic resource allocation does not increase energy efficiency over static allocation of VMs to servers and can even come at a cost, because migrations lead to overhead and service disruptions.
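The abstract does not specify the placement heuristic used for the static baseline; a minimal sketch, assuming a first-fit-decreasing placement on peak CPU demand (all names and the heuristic are illustrative, not the authors' implementation), could look like this:

```python
# Minimal sketch of a static VM-to-server placement baseline (illustrative only;
# the paper's actual placement and controller logic are not shown in the abstract).
# Assumption: each VM is summarized by its peak CPU demand, each server has a
# fixed capacity, and VMs are placed with first-fit decreasing.

def static_placement(vm_peaks, server_capacity):
    """Assign VMs (dict: name -> peak demand) to servers, first-fit decreasing."""
    servers = []  # each entry: {"free": remaining capacity, "vms": [names]}
    for name, peak in sorted(vm_peaks.items(), key=lambda kv: -kv[1]):
        for s in servers:
            if s["free"] >= peak:
                s["free"] -= peak
                s["vms"].append(name)
                break
        else:  # no existing server fits -> open a new one
            servers.append({"free": server_capacity - peak, "vms": [name]})
    return servers

# Example: six VMs with volatile demand summarized by their peaks
placement = static_placement(
    {"vm1": 30, "vm2": 45, "vm3": 20, "vm4": 60, "vm5": 25, "vm6": 35},
    server_capacity=100,
)
print(len(placement), "servers used")
```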
European Journal of Operational Research | 2013
Thomas Setzer; Martin Bichler
We consider the assignment of enterprise applications in virtual machines to physical servers, also known as the server consolidation problem. Data center operators try to minimize the number of servers, but at the same time provide sufficient computing resources at each point in time. While historical workload data would allow for accurate workload forecasting and optimal allocation of enterprise applications to servers, the volume of data and the large number of resulting capacity constraints in a mathematical problem formulation renders this task impossible for all but small instances. We use singular value decomposition (SVD) to extract significant features from a large constraint matrix and provide a new geometric interpretation of these features, which allows for allocating large sets of applications efficiently to physical servers with this new formulation. While SVD is typically applied for purposes such as time series decomposition, noise filtering, or clustering, in this paper features are used to transform the original allocation problem into a low-dimensional integer program with only the extracted features in a much smaller constraint matrix. We evaluate the approach using workload data from a large data center and show that it leads to high solution quality, but at the same time allows for solving considerably larger problem instances than what would be possible without data reduction and model transformation. The overall approach could also be applied to similar packing problems in service operations management.
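A rough sketch of the SVD feature-extraction step, assuming the workload matrix has one row per application and one column per time slot (the matrix, its dimensions, and the number of retained features are illustrative; the reduced integer program itself is not shown):

```python
# Sketch of extracting low-dimensional features from a workload/constraint matrix
# via truncated SVD (illustrative assumptions: 50 applications, 288 time slots,
# k = 3 retained features).
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((50, 288))           # stand-in workload matrix: applications x time slots

U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 3                               # number of significant features to keep
features = U[:, :k] * s[:k]         # each application described by k coordinates

# The allocation problem can then be stated over these k feature dimensions
# instead of the 288 original per-time-slot capacity constraints per server.
print(W.shape, "->", features.shape)
```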
European Journal of Operational Research | 2015
Sebastian M. Blanc; Thomas Setzer
We propose and empirically test statistical approaches to debiasing judgmental corporate cash flow forecasts. Accuracy of cash flow forecasts plays a pivotal role in corporate planning, as liquidity and foreign exchange risk management are based on such forecasts. Surprisingly, to our knowledge there is no previous empirical work on the identification, statistical correction, and interpretation of prediction biases in large enterprise financial forecast data in general, and cash flow forecasting in particular. Employing a unique set of empirical forecasts delivered by 34 legal entities of a multinational corporation over a multi-year period, we compare different forecast correction techniques such as Theil’s method and approaches employing robust regression, both with various discount factors. Our findings indicate that rectifiable mean as well as regression biases exist for all business divisions of the company and that statistical correction increases forecast accuracy significantly. We show that the parameters estimated by the models for different business divisions can also be related to the characteristics of the business environment and provide valuable insights for corporate financial controllers to better understand, quantify, and feed back the biases to the forecasters, aiming to systematically improve predictive accuracy over time.
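A minimal sketch of a Theil-style correction, assuming corrected forecasts are obtained from a regression of realized values on the judgmental forecasts (the numbers are made up; the paper's robust-regression variants and discount factors are omitted):

```python
# Theil-style forecast debiasing sketch: regress actuals on judgmental forecasts,
# then correct a new forecast with the fitted line, corrected = a + b * forecast.
import numpy as np

forecasts = np.array([100.0, 120.0, 90.0, 150.0, 110.0])   # judgmental cash flow forecasts
actuals   = np.array([ 95.0, 108.0, 88.0, 135.0, 101.0])   # realized cash flows

b, a = np.polyfit(forecasts, actuals, 1)    # slope and intercept of actuals ~ forecasts
new_forecast = 130.0
corrected = a + b * new_forecast
print(f"raw {new_forecast:.1f} -> corrected {corrected:.1f}")
```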
International Conference on Cloud Computing | 2012
Michael Seibold; Andreas Wolke; Martina-Cezara Albutiu; Martin Bichler; Alfons Kemper; Thomas Setzer
Running emerging main-memory database systems within virtual machines causes huge overhead, because these systems are highly optimized to get the most out of bare-metal servers. But running these systems on bare-metal servers results in low resource utilization, because database servers often have to be sized for peak loads, which are much higher than the average load. Instead, we propose to deploy them within lightweight containers that allow controlling resource usage and make it possible to use spare resources by temporarily running other applications on the database server using virtual machines (VMs). The servers on which these VMs would normally run can be suspended to save energy costs. But current database systems do not handle dynamic changes to resource allocation well, and accurate estimates of resource demand are required to maintain SLAs. We focus on emerging main-memory database systems that support the mixed workloads of today's business intelligence applications and propose a cooperative approach in which the DBMS communicates its resource demand, is informed about currently assigned resources, and adapts its resource usage accordingly. We analyze the performance impact on the database system when spare resources are used by VMs and monitor SLA compliance.
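The abstract names the cooperative exchange (DBMS reports demand, gets assigned resources, adapts) without defining a concrete protocol; a purely illustrative sketch of that exchange, with all class and parameter names invented for the example, might look like this:

```python
# Illustrative sketch of the cooperative resource-negotiation idea; no concrete
# API is given in the abstract, so every name and policy here is an assumption.
class Controller:
    def __init__(self, total_cores):
        self.total_cores = total_cores

    def assign(self, dbms_demand, vm_demand):
        # Give the DBMS what it asks for; only spare capacity goes to co-located VMs.
        dbms_share = min(dbms_demand, self.total_cores)
        spare = self.total_cores - dbms_share
        return dbms_share, min(vm_demand, spare)


class MainMemoryDBMS:
    def __init__(self):
        self.assigned_cores = 0

    def report_demand(self, expected_load):
        return max(2, expected_load)          # never report less than a safety minimum

    def adapt(self, assigned_cores):
        self.assigned_cores = assigned_cores  # e.g. resize thread pools accordingly


controller = Controller(total_cores=16)
db = MainMemoryDBMS()
dbms_cores, vm_cores = controller.assign(db.report_demand(expected_load=6), vm_demand=12)
db.adapt(dbms_cores)
print(dbms_cores, "cores for DBMS,", vm_cores, "cores for co-located VMs")
```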
Enterprise Modelling and Information Systems Architectures (EMISAJ) | 2016
Ralf Gitzel; Björn Schmitz; Hansjörg Fromm; Alf Isaksson; Thomas Setzer
Services in the industrial sector—commonly referred to as industrial services—are an important source of profit, differentiation and future growth for their providers. This sector includes industries such as heavy equipment manufacturing, energy production, chemical production and oil and gas. Despite its importance, neither the term industrial service(s) nor its concrete subareas are unambiguously defined. The goal of this paper is to motivate research which addresses the challenges currently faced in the industrial sector with regard to service. For this purpose, we review existing definitions of industrial services and identify the services in scope as well as the scientific disciplines that can contribute. The main part of the paper is a list of relevant current and future challenges which have been encountered by the authors during their daily practice.
Hawaii International Conference on System Sciences | 2014
Jochen Martin; Thomas Setzer
Corporate financial planning relies on thousands of financial forecasts generated by human forecasters with varying performance (forecast errors). Previous work proposes ARIMA prediction as a competitive benchmark for manual forecasts. However, ARIMA can also produce large errors, and a company needs to understand the sensitivity of ARIMA outcomes to time series characteristics before ARIMA benchmarks can be established. Using forecast data provided by a global corporation, we present a sensitivity analysis of ARIMA with respect to shifts in fitting periods, including the financial crisis. Results show that ARIMA performance is rather robust and on average dominates human forecasters, although it occasionally produces large errors that the forecasters do not make. We conclude that ARIMA can be applied to generate benchmarks in financial planning, which can then be refined to reflect novel expectations.
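As a rough illustration of generating an ARIMA benchmark and comparing it with a manual forecast (the data, the model order (1,1,1), and the error measure are placeholders, not the paper's specification), assuming statsmodels is available:

```python
# Sketch of an ARIMA benchmark for a judgmental financial forecast; shifting
# the `train` window mimics the fitting-period sensitivity studied in the paper.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(100, 10, size=48))   # stand-in for 48 monthly cash flows

train = series[:36]                                # fitting period (can be shifted)
model = ARIMA(train, order=(1, 1, 1)).fit()
benchmark = model.forecast(steps=12)               # ARIMA benchmark for the next year

human_forecast = series[36:] * 1.05                # stand-in for judgmental forecasts
mae_arima = np.mean(np.abs(benchmark - series[36:]))
mae_human = np.mean(np.abs(human_forecast - series[36:]))
print(f"MAE ARIMA {mae_arima:.1f} vs. human {mae_human:.1f}")
```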
IEEE Conference on Business Informatics | 2015
Katerina Shapoval; Thomas Setzer
Preventing customer churn is an important task in customer relationship management (CRM), in which the identification of customers with an intention to terminate one or more contracts plays a pivotal role. Today, survival analysis is typically used for this purpose. These approaches, in their standard configuration, assume a proportional, time-invariant influence of covariates. In telecommunications, for instance, these assumptions are questionable because of existing fixed-term contracts and term-of-notice clauses. These can be expected to result in non-monotonic cancellation probabilities over time, with increased frequencies of cancellation in the periods before minimum subscription periods end. In this paper, we consider customer-specific contract duration dates within established methods of survival analysis. We introduce a novel, non-standard feature generation procedure for this purpose. In addition, we study the impact of product variety in a customer's portfolio on the customer's churn probability, as there is evidence from both theory and practical experience in other industries that product variety can be related to loyalty. In the empirical part of the paper, we evaluate the proposed extended model using data provided by one of the largest telecommunication companies in Europe. Results show that both model extensions significantly increase churn prediction performance in out-of-sample tests.
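A minimal sketch of feeding a contract-duration feature and a product-variety feature into a survival model, assuming the lifelines package and made-up column names (the paper's actual feature generation procedure is more involved than this):

```python
# Cox regression sketch with an engineered "months until contract end" covariate
# capturing cancellation spikes around minimum subscription periods (illustrative).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "tenure_months":          [6, 14, 24, 25, 30, 9, 26, 13],
    "churned":                [0, 0, 1, 1, 0, 0, 1, 0],
    "num_products":           [1, 3, 1, 2, 4, 2, 1, 3],     # product variety in the portfolio
    "months_to_contract_end": [18, 10, 0, -1, 1, 15, 2, 11],  # engineered contract-duration feature
})

cph = CoxPHFitter(penalizer=0.1)   # small penalty to stabilize the tiny toy sample
cph.fit(df, duration_col="tenure_months", event_col="churned")
cph.print_summary()
```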
Network Operations and Management Symposium | 2012
Christian Markl; Thomas Setzer
Prioritization of business processes in distributed transaction processing systems is applied to circumvent drawbacks resulting from queueing effects. These effects arise because of the increasing workload of business processes using shared services. To predict the performance of such systems, queueing and simulation models are typically used, and prioritization schemes are created for specific business process definitions and demand mixes. Unfortunately, the derived solution may become obsolete after every demand change and process re-engineering cycle. We propose an approach that adapts the prioritization by continuously aligning threshold prices for service usage, reflecting the opportunity costs of using a service. The priority of a process is then reduced by the sum of the current threshold prices of the services invoked by that process. First, we show that this approach is asymptotically optimal with an increasing number of business processes and workload requests. Second, we conduct simulations to show that the dynamic prioritization approach dominates common static prioritization approaches even in scenarios with moderate numbers of processes and process requests.
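A small sketch of the prioritization rule itself (how the threshold prices are updated over time is not shown; all service and process names are invented for the example):

```python
# Sketch: a process's effective priority is its base priority minus the sum of
# the current threshold prices of the shared services it invokes (illustrative).

threshold_prices = {"billing": 0.4, "inventory": 0.1, "crm": 0.3}   # per shared service

def effective_priority(base_priority, invoked_services):
    return base_priority - sum(threshold_prices[s] for s in invoked_services)

processes = {
    "order_fulfillment": (5.0, ["billing", "inventory"]),
    "campaign_mailing":  (3.0, ["crm"]),
}
for name, (base, services) in processes.items():
    print(name, "->", effective_priority(base, services))
```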
Hawaii International Conference on System Sciences | 2017
Jennifer Schoch; Philipp Staudt; Thomas Setzer
Battery electric vehicles (BEVs) are increasingly used in mobility services such as car-sharing. A severe problem with BEVs is battery degradation, which further reduces the already very limited range of a BEV. Analytic models are required to determine the impact of service usage, to provide guidance on how to drive and charge, and to support service tasks such as predictive maintenance. However, while the increasing amount of sensor data in automotive applications allows for more fine-grained model parameterization and better predictive outcomes, in practical settings the amount of storage and transmission bandwidth is limited by technical and economic considerations. In a simulation-based analysis, dynamic user behavior is modeled based on real-world driving profiles parameterized by different driver characteristics and ambient conditions. We find that by using a reduced subset of variables the required storage can be cut considerably at low cost in terms of only slightly decreased predictive accuracy.
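As an illustration of the trade-off between a full and a reduced sensor-variable set, here is a minimal sketch, assuming synthetic data, a simple linear model, and R² as the accuracy measure (none of which is the authors' parameterization):

```python
# Sketch: compare predictive accuracy with all logged variables vs. a reduced
# subset, mimicking the storage/accuracy trade-off discussed above (illustrative).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500
X_full = rng.normal(size=(n, 20))                   # 20 logged sensor variables
y = X_full[:, :4] @ np.array([0.8, 0.5, 0.3, 0.2]) + rng.normal(0, 0.1, n)  # degradation proxy

X_tr, X_te, y_tr, y_te = train_test_split(X_full, y, random_state=0)

full = LinearRegression().fit(X_tr, y_tr)
reduced = LinearRegression().fit(X_tr[:, :4], y_tr)  # keep only 4 of 20 variables

print("R2 full:   ", round(full.score(X_te, y_te), 3))
print("R2 reduced:", round(reduced.score(X_te[:, :4], y_te), 3))
```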
Hawaii International Conference on System Sciences | 2015
Hansjoerg Fromm; Thomas Setzer
The Minitrack on Service Analytics is part of the Decision Analytics, Mobile Services and Service Science Track of the 48th Annual Hawaii International Conference on System Sciences (HICSS-48), January 5-8, 2015. Service Analytics describes all processes of capturing, processing, and analyzing data taken from a service system in order to improve, extend, and personalize the service provided and to create new value for both the provider and the customer.