Matthias Uflacker
Hasso Plattner Institute
Publications
Featured research published by Matthias Uflacker.
Future Generation Computer Systems | 2011
Matthias Uflacker; Alexander Zeier
The early stages of engineering projects are considered the most critical phase of a product lifecycle and need to be better understood. The increasing virtualization and geographic dispersion of project environments create demand for an adaptive design research methodology that takes the growing role of distributed online interactions into account. This work presents a generic approach for the quasi-real-time exploration of collaboration structures captured from heterogeneous groupware and communication resources. We introduce Team Collaboration Networks (TCN) as a model to describe the temporal relationships between different actors and information resources over the course of collaboration. A service-based TCN implementation has been applied in eleven distributed engineering design projects to support the assessment of communication patterns and to provide a live view into the online communication activities of conceptual design teams. The key findings of this pilot application are presented.
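The paper's TCN model is not reproduced here; as a minimal illustrative sketch (all class, method, and field names below are invented for this summary, not the authors' API), a temporal network linking actors and information resources could be kept as timestamped edges, enabling the quasi-real-time snapshots the abstract describes:

```python
class TeamCollaborationNetwork:
    """Illustrative sketch: actors and resources as nodes,
    timestamped interaction edges between them."""

    def __init__(self):
        self.edges = []  # (timestamp, actor, resource, kind)

    def record(self, timestamp, actor, resource, kind):
        """Capture one observed interaction from a groupware source."""
        self.edges.append((timestamp, actor, resource, kind))

    def snapshot(self, until):
        """Edges observed up to a point in time, giving a live,
        progressively growing view of collaboration."""
        return [e for e in self.edges if e[0] <= until]

    def degree(self, actor, until):
        """Number of distinct resources an actor has touched so far."""
        return len({r for t, a, r, _ in self.edges
                    if a == actor and t <= until})

tcn = TeamCollaborationNetwork()
tcn.record(1, "alice", "wiki:spec", "edit")
tcn.record(2, "bob", "mail:kickoff", "send")
tcn.record(3, "alice", "mail:kickoff", "reply")
print(tcn.degree("alice", until=3))  # → 2
```

A real implementation would ingest events from heterogeneous sources (mail, wikis, repositories) rather than manual `record` calls; the temporal `snapshot` view is the essential idea.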
Very Large Data Bases | 2015
David Schwalb; Markus Dreseler; Matthias Uflacker; Hasso Plattner
Non-volatile RAM (NVRAM) will fundamentally change in-memory databases: data structures no longer have to be explicitly backed up to hard drives or SSDs, but can be inherently persistent in main memory. To guarantee consistency even in the case of power failures, programmers need to ensure that data is flushed from volatile CPU caches, where it would be lost on a power outage, to NVRAM. In this paper, we present the NVC-Hashmap, a lock-free hashmap that is used for unordered dictionaries and delta indices in in-memory databases. The NVC-Hashmap is evaluated in both stand-alone and integrated database benchmarks and compared to a B+-Tree-based persistent data structure.
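The paper's lock-free implementation is not shown here; the toy model below (all names are placeholders, and the "flush" stands in for the cache-line flush and fence instructions real NVRAM code would use) only illustrates why explicit flushing matters: a write is visible immediately but survives a crash only once flushed:

```python
class SimulatedNVRAM:
    """Toy model, not the NVC-Hashmap itself: writes land in a
    volatile CPU cache and survive a crash only once explicitly
    flushed to the persistent medium."""

    def __init__(self):
        self.cache = {}       # volatile: lost on power failure
        self.persistent = {}  # NVRAM: survives power failure

    def store(self, key, value):
        self.cache[key] = value  # visible, but not yet durable

    def flush(self, key):
        # stand-in for a cache-line flush plus memory fence
        self.persistent[key] = self.cache[key]

    def power_failure(self):
        self.cache.clear()       # all volatile state is gone

    def load(self, key):
        return self.cache.get(key, self.persistent.get(key))

nv = SimulatedNVRAM()
nv.store("a", 1)
nv.flush("a")      # made durable
nv.store("b", 2)   # never flushed
nv.power_failure()
print(nv.load("a"), nv.load("b"))  # → 1 None
```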
International Conference on Software Engineering | 2016
Christoph Matthies; Thomas Kowark; Keven Richly; Matthias Uflacker; Hasso Plattner
Agile methods are best taught in a hands-on fashion in realistic projects. The main challenge in doing so is to assess whether students apply the methods correctly without requiring complete supervision throughout the entire project. This paper presents experiences from a classroom project in which 38 students developed a single system using a scaled version of Scrum. Surveys helped us to identify which elements of Scrum correlated most with student satisfaction or posed the biggest challenges. These insights were augmented by a team of tutors who accompanied the main meetings throughout the project to provide feedback to the teams and captured impressions of method application in practice. Finally, we performed a post-hoc, tool-supported analysis of collaboration artifacts to detect concrete indicators for anti-patterns in Scrum adoption. The combination of these techniques allowed us to understand how students implemented Scrum in this course and which elements require further lecturing and tutoring in future iterations. Automated analysis of collaboration artifacts proved to be a promising addition to the development process that could reduce manual effort in future courses and allow for more concrete, targeted feedback as well as more objective assessment.
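The paper's tooling is not reproduced here; as a minimal sketch of the idea (the metric, thresholds, and dates are invented for illustration), an artifact-based check for one common anti-pattern, work deferred to the sprint deadline, could measure how much commit activity lands in a sprint's final day:

```python
from datetime import datetime, timedelta

def last_minute_share(commit_times, sprint_start, sprint_end):
    """Fraction of a sprint's commits landing in its final 24 hours,
    a rough indicator of the 'last-minute work' anti-pattern."""
    in_sprint = [t for t in commit_times
                 if sprint_start <= t <= sprint_end]
    if not in_sprint:
        return 0.0
    cutoff = sprint_end - timedelta(days=1)
    return sum(t >= cutoff for t in in_sprint) / len(in_sprint)

start = datetime(2016, 3, 1)
end = datetime(2016, 3, 15)
commits = [datetime(2016, 3, 5, 10),
           datetime(2016, 3, 14, 20),
           datetime(2016, 3, 14, 23)]
print(round(last_minute_share(commits, start, end), 2))  # → 0.67
```

A tutor could flag teams whose share exceeds some threshold for targeted feedback, rather than supervising every meeting.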
International Conference on Human-Computer Interaction | 2007
Matthias Uflacker; Daniela K. Busse
Advanced business applications such as enterprise resource planning (ERP) systems are characterized by a high degree of complexity in data, functionality, and processes. This paper examines some decisive causes of this complexity and their implications for software configuration and user interaction in particular. A case study of SAP's R/3® Sales & Distribution module exemplifies complexity in order management systems and documents its impact on the user experience. We emphasize the need to shield users appropriately from underlying system complexity in order to provide a convenient and simple-to-use software tool. Several approaches to address this goal are discussed.
Archive | 2011
Matthias Uflacker; Thomas Kowark; Alexander Zeier
How do designers leverage information and communication technology to collaborate with team partners and other process participants? Given the increasingly complex, distributed, and virtual setups of design environments and processes, answering this question is challenging. At HPI, we have developed computational data collection and analysis techniques to improve the efficiency and range of observations in technology-enabled design spaces. Using our software, we were able to capture and evaluate complex characteristics of online interactions in distributed design teams in quasi real-time. Besides providing new insights into the communication behavior of design teams, we could demonstrate that the communication activity signatures of high-performance design teams differ significantly from those of low-performance teams. The combination of these new techniques with quantifiable performance metrics provides a stable foundation for real-time design team diagnostics.
Computer Supported Cooperative Work in Design | 2009
Matthias Uflacker; Alexander Zeier
We present a flexible approach to the observation of multi-modal communication streams over the course of project-based collaboration such as engineering design. We introduce team communication networks to formalize the occurrence and evolution of actors, resources, and semantic relationships in distributed design information spaces. By analyzing the digital traces of IT-mediated collaboration, we computationally construct team communication networks in an unobtrusive and progressive manner, thus providing a live view into the digital communication and information-sharing activities of design teams. A service-based team communication network platform has been implemented and applied in distributed engineering design projects to enable the temporal analysis of information-sharing activities and to support the identification of characteristic communication patterns in distributed collaboration groups.
Enterprise Distributed Object Computing | 2017
Sebastian Serth; Nikolai Podlesny; Marvin Bornstein; Jan Lindemann; Johanna Latt; Jan Selke; Rainer Schlosser; Martin Boissier; Matthias Uflacker
E-commerce marketplaces are highly dynamic, with constant competition. While this competition is challenging for many merchants, it also provides plenty of opportunities, e.g., by allowing them to automatically adjust prices in order to react to changing market situations. For practitioners, however, testing automated pricing strategies is time-consuming and potentially hazardous when done in production. Researchers, on the other hand, struggle to study how pricing strategies interact under heavy competition. As a consequence, we built Price Wars, an open continuous-time framework for simulating dynamic pricing competition. Its microservice-based architecture provides a scalable platform for large competitions with dozens of merchants and a large random stream of consumers. Our platform stores each event in a distributed log, which allows us to provide different performance measures so that users can compare the profit and revenue of various repricing strategies in real time. For researchers, price trajectories are shown, which eases the evaluation of mutual price reactions of competing strategies. Furthermore, merchants can access historical marketplace data and apply machine learning. By providing a set of customizable, artificial merchants, users can easily simulate both simple rule-based strategies and sophisticated data-driven strategies that use demand learning to optimize their pricing.
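None of the platform's actual merchant API is shown here; as a minimal sketch of the kind of rule-based repricing strategy such a simulation can host (function name, margin, and step size are invented for illustration), a merchant might undercut the cheapest competitor while never selling below cost:

```python
def undercut_price(own_cost, competitor_prices, margin=0.01, step=0.05):
    """Illustrative rule-based repricing: undercut the cheapest
    competitor by `step`, but never go below cost plus `margin`."""
    floor = own_cost * (1 + margin)
    if not competitor_prices:
        return round(floor * 1.5, 2)   # no competition: take a markup
    candidate = min(competitor_prices) - step
    return round(max(candidate, floor), 2)

print(undercut_price(10.0, [12.99, 11.49, 14.20]))  # → 11.44
```

In a simulation like the one described, dozens of such strategies (rule-based or learned from the logged market events) would react to each other's price updates, producing the price trajectories the platform visualizes.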
Very Large Data Bases | 2015
David Schwalb; Jan Kossmann; Martin Faust; Stefan Klauck; Matthias Uflacker; Hasso Plattner
In-memory database systems are well suited for enterprise workloads consisting of transactional and analytical queries. A growing number of users and an increasing demand for enterprise applications can saturate or even overload single-node database systems at peak times. Better performance can be achieved by improving a single machine's hardware, but it is often cheaper and more practicable to follow a scale-out approach and replicate data across additional machines. In this paper, we present Hyrise-R, a lazy master replication system for the in-memory database Hyrise. By setting up a snapshot-based Hyrise cluster, we increase both performance, by distributing queries over multiple instances, and availability, by utilizing the redundancy of the cluster structure. This paper describes the architecture of Hyrise-R and the details of the implemented replication mechanisms. We set up Hyrise-R on instances of Amazon's Elastic Compute Cloud and present a detailed performance evaluation of our system, including a linear query throughput increase for enterprise workloads.
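Hyrise-R's implementation is not reproduced here; the sketch below (class and method names are invented) only illustrates the routing idea behind lazy master replication: writes go to the master and reach replicas asynchronously, while reads are spread across replicas for throughput, at the cost of possibly stale reads before propagation:

```python
import itertools

class LazyMasterCluster:
    """Sketch of lazy master replication routing, not Hyrise-R itself:
    writes hit the master, which later propagates them to replicas;
    reads are distributed round-robin across replicas."""

    def __init__(self, n_replicas):
        self.master = {}
        self.replicas = [{} for _ in range(n_replicas)]
        self._rr = itertools.cycle(range(n_replicas))

    def write(self, key, value):
        self.master[key] = value   # replicas lag until propagation

    def propagate(self):
        for r in self.replicas:    # lazy: applied after commit
            r.update(self.master)

    def read(self, key):
        return self.replicas[next(self._rr)].get(key)

cluster = LazyMasterCluster(3)
cluster.write("x", 42)
print(cluster.read("x"))   # → None (replica not yet updated)
cluster.propagate()
print(cluster.read("x"))   # → 42
```

Spreading reads over n replicas is what yields the near-linear read throughput scaling the abstract reports for enterprise workloads.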
Archive | 2015
Franziska Häger; Thomas Kowark; Jens H. Krüger; Christophe Vetterli; Falk Übernickel; Matthias Uflacker
Design Thinking has shown its potential for generating innovative, user-centered concepts in various settings: in projects at d.schools, in innovation courses such as ME310, at design consultancies such as IDEO, and recently even in projects at large companies. However, if Design Thinking activities are not properly integrated with production processes, e.g. software development, handovers become necessary and potentially prevent great ideas from becoming real products.
Distributed Event-Based Systems | 2017
Guenter Hesse; Christoph Matthies; Benjamin Reissaus; Matthias Uflacker
Against the backdrop of ever-growing data volumes and trends like the Internet of Things (IoT) and Industry 4.0, Data Stream Processing Systems (DSPSs), and data stream processing architectures in general, are receiving increasing interest. Continuously analyzing streams of data allows immediate responses to environmental changes. A challenging task in this context is assessing and comparing data stream processing architectures in order to identify the most suitable one for a given setting. The present paper provides an overview of performance benchmarks that can be used for analyzing data stream processing applications. By describing the shortcomings of these benchmarks, we highlight the need for a new application benchmark in this area, especially one covering enterprise architectures. A key role in such an enterprise context is played by the combination of streaming data and business data, which is barely covered in current data stream processing benchmarks. Furthermore, we present first ideas towards the development of a solution, i.e., a new application benchmark that is able to fill the existing gap.
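The benchmark itself is only sketched in the paper; as a minimal illustration of the stream/business-data combination it highlights (all field names and values are invented), each streaming event can be enriched by joining it with static master data:

```python
def enrich(stream, business_data):
    """Join each streaming event with static business (master)
    data, the combination current stream benchmarks barely cover."""
    for event in stream:
        machine = business_data.get(event["machine_id"], {})
        yield {**event, "plant": machine.get("plant", "unknown")}

# Hypothetical master data and sensor events for illustration
machines = {"m1": {"plant": "Berlin"}, "m2": {"plant": "Potsdam"}}
events = [{"machine_id": "m1", "temp": 71.3},
          {"machine_id": "m2", "temp": 68.9}]
for row in enrich(events, machines):
    print(row["plant"], row["temp"])  # → Berlin 71.3 / Potsdam 68.9
```

An enterprise-oriented benchmark would measure exactly this kind of stream-to-table join under load, alongside the pure streaming operators existing benchmarks already cover.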