
Publications


Featured research published by Uwe Wloka.


International Conference on Data Engineering | 2008

DIPBench Toolsuite: A Framework for Benchmarking Integration Systems

Matthias Böhm; Dirk Habich; Wolfgang Lehner; Uwe Wloka

The optimization of integration processes between heterogeneous data sources is still an open challenge. A first step towards adequate techniques was the specification of a universal benchmark for integration systems. This benchmark, DIPBench, allows solutions to be compared under controlled conditions and helps generate interest in this research area. However, a sophisticated toolsuite is required to minimize the effort of benchmark execution. This demo illustrates the use of the DIPBench toolsuite. We show the macro-architecture as well as the micro-architecture of each tool. Furthermore, we present the first reference benchmark implementation using a federated DBMS and discuss the impact of the defined benchmark scale factors. Finally, we give guidance on how to benchmark other integration systems and how to extend the toolsuite with new distribution functions or other functionality.
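
As a rough illustration of how a driver in such a toolsuite might be parameterized by scale factors and a pluggable distribution function, consider the following sketch. All class, record, and method names here are hypothetical and do not reflect the actual DIPBench API; the paper only states that three scale factors exist, so their names below are assumptions.

```java
import java.util.function.IntUnaryOperator;

// Hypothetical sketch of a DIPBench-style driver; illustrative names only.
public class BenchmarkDriver {

    // Three scale factors influencing the core benchmark execution
    // (names are assumptions; timeWindow is unused in this sketch).
    record ScaleFactors(int dataSize, int load, int timeWindow) {}

    private final ScaleFactors sf;
    private final IntUnaryOperator distribution; // pluggable distribution function

    BenchmarkDriver(ScaleFactors sf, IntUnaryOperator distribution) {
        this.sf = sf;
        this.distribution = distribution;
    }

    void run() {
        for (int i = 0; i < sf.load(); i++) {
            int messageSize = distribution.applyAsInt(sf.dataSize());
            long start = System.nanoTime();
            executeIntegrationProcess(messageSize);
            long elapsed = System.nanoTime() - start;
            System.out.printf("msg %d: %d bytes, %.3f ms%n",
                    i, messageSize, elapsed / 1e6);
        }
    }

    // Placeholder for invoking the system under test (e.g., a federated DBMS).
    private void executeIntegrationProcess(int messageSize) { /* ... */ }

    public static void main(String[] args) {
        var driver = new BenchmarkDriver(
                new ScaleFactors(1_000, 100, 60),
                base -> base + (int) (Math.random() * base)); // uniform jitter
        driver.run();
    }
}
```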


Information Systems | 2011

Cost-based vectorization of instance-based integration processes

Matthias Boehm; Dirk Habich; Steffen Preissler; Wolfgang Lehner; Uwe Wloka

Integration processes are workflow-based integration tasks. Their inefficiency is often caused by low resource utilization and significant waiting times for external systems. To overcome these problems, we proposed the concept of process vectorization, in which instance-based integration processes are transparently executed with the pipes-and-filters execution model. The term vectorization is used in the sense of processing a sequence (vector) of messages by one standing process. Although process vectorization has been shown to achieve a significant throughput improvement, the concept has two major drawbacks. First, the theoretical performance of a vectorized integration process mainly depends on the performance of the most cost-intensive operator. Second, the practical performance strongly depends on the number of threads used and thus on the number of operators. In this paper, we present an advanced optimization approach that addresses these problems. We generalize the vectorization problem and explain how to vectorize process plans in a cost-based manner, taking into account the costs of the single operators in the form of their execution times. Due to the exponential time complexity of the exhaustive computation approach, we also provide a heuristic algorithm with linear time complexity. Furthermore, we explain how to apply the general cost-based vectorization to multiple process plans, and we discuss periodical re-optimization. Our evaluation shows that the message throughput can be significantly increased compared to both the instance-based execution and the rule-based vectorized execution.
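
To give a flavor of the cost-based idea, the sketch below shows one possible linear-time grouping heuristic: adjacent operators are merged into execution buckets as long as a bucket's accumulated cost stays within the cost of the most expensive single operator, which bounds the pipeline bottleneck while reducing the number of threads. This is our own simplification, not the algorithm from the paper.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified linear-time sketch of cost-based vectorization: group adjacent
// operators of a process plan into buckets; each bucket later runs as one
// standing thread. Illustrative heuristic only.
public class CostBasedGrouping {

    static List<List<Integer>> group(double[] operatorCosts) {
        // The most expensive operator is a lower bound on the bottleneck cost.
        double bound = 0;
        for (double c : operatorCosts) bound = Math.max(bound, c);

        List<List<Integer>> buckets = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        double currentCost = 0;
        for (int i = 0; i < operatorCosts.length; i++) {
            if (!current.isEmpty() && currentCost + operatorCosts[i] > bound) {
                buckets.add(current);          // close bucket: adding operator i
                current = new ArrayList<>();   // would exceed the bottleneck bound
                currentCost = 0;
            }
            current.add(i);
            currentCost += operatorCosts[i];
        }
        if (!current.isEmpty()) buckets.add(current);
        return buckets;
    }

    public static void main(String[] args) {
        // One expensive operator dominates; cheap neighbors are merged so
        // fewer threads are needed without worsening the bottleneck.
        System.out.println(group(new double[]{1, 2, 10, 1, 1, 3, 2}));
        // prints [[0, 1], [2], [3, 4, 5, 6]]
    }
}
```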


Conference on Information and Knowledge Management | 2008

Workload-based optimization of integration processes

Matthias Boehm; Uwe Wloka; Dirk Habich; Wolfgang Lehner

The efficient execution of integration processes between distributed, heterogeneous data sources and applications is a challenging research area of data management. These integration processes are an abstraction for workflow-based integration tasks, as used in EAI servers and WfMS. The major problem is significant workload changes during runtime. The performance of integration processes strongly depends on these dynamic workload characteristics, and hence workload-based optimization is important. However, existing approaches to workflow optimization address only rule-based optimization and disregard changing workload characteristics. To overcome the problem of inefficient process execution in the presence of workload shifts, we present an approach for the workload-based optimization of instance-based integration processes and show that significant execution time reductions are possible.
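
As an illustration of the underlying idea, the sketch below monitors a workload statistic with exponential smoothing and triggers a re-optimization once the statistic drifts too far from the value the current plan was optimized for. This is a simplified stand-in, not the paper's optimizer; the smoothing factor and drift threshold are assumed parameters.

```java
// Illustrative sketch (not the paper's algorithm): monitor a workload
// statistic and re-optimize the plan when the workload shifts.
public class WorkloadMonitor {

    private final double alpha;      // smoothing factor
    private final double threshold;  // relative drift that triggers re-optimization
    private double smoothed;         // smoothed statistic, e.g. message size
    private double optimizedFor;     // statistic value at last optimization

    WorkloadMonitor(double alpha, double threshold, double initial) {
        this.alpha = alpha;
        this.threshold = threshold;
        this.smoothed = initial;
        this.optimizedFor = initial;
    }

    // Feed one observation (e.g. the size of an incoming message).
    void observe(double value) {
        smoothed = alpha * value + (1 - alpha) * smoothed;
        if (Math.abs(smoothed - optimizedFor) / optimizedFor > threshold) {
            reoptimize();
        }
    }

    private void reoptimize() {
        System.out.printf("workload shift: %.1f -> %.1f, re-optimizing plan%n",
                optimizedFor, smoothed);
        optimizedFor = smoothed; // the new plan is optimized for this workload
    }

    public static void main(String[] args) {
        var monitor = new WorkloadMonitor(0.2, 0.5, 100);
        for (int i = 0; i < 20; i++) monitor.observe(400); // sustained shift
    }
}
```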


International Conference on Data Engineering | 2008

DIPBench: An Independent Benchmark for Data-Intensive Integration Processes

Matthias Böhm; Dirk Habich; Wolfgang Lehner; Uwe Wloka

The integration of heterogeneous data sources is one of the main challenges within the area of data engineering. Due to the absence of an independent and universal benchmark for data-intensive integration processes, we propose a scalable benchmark, called DIPBench (Data-Intensive Integration Process Benchmark), for evaluating the performance of integration systems. This benchmark can be used for subscription systems such as replication servers, distributed and federated DBMS, or message-oriented middleware platforms like Enterprise Application Integration (EAI) servers and Extraction Transformation Loading (ETL) tools. In order to reach the mentioned universal view of integration processes, the benchmark is designed in a conceptual, process-driven way. The benchmark comprises 15 integration process types. We specify the source and target data schemas and provide a toolsuite for the initialization of the external systems, the execution of the benchmark, and the monitoring of the integration system's performance. The core benchmark execution can be influenced by three scale factors. Finally, we discuss the metric unit used for evaluating the measured integration system performance, and we illustrate our reference benchmark implementation for a federated DBMS.
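
The paper defines its own metric unit, which is not reproduced here. Purely for illustration, the following sketch aggregates per-process-type run times into a single score using a geometric mean, a common choice in benchmark design; it should not be read as the DIPBench metric.

```java
// Illustrative only: aggregate one measured run time per integration
// process type (15 in DIPBench) into a single benchmark score.
public class BenchmarkScore {

    static double geometricMean(double[] runTimesMs) {
        double logSum = 0;
        for (double t : runTimesMs) logSum += Math.log(t);
        return Math.exp(logSum / runTimesMs.length);
    }

    public static void main(String[] args) {
        double[] runTimesMs = new double[15];
        java.util.Arrays.fill(runTimesMs, 120.0);
        runTimesMs[3] = 480.0; // one slow process type
        // The geometric mean dampens the influence of single outliers.
        System.out.printf("score: %.1f ms%n", geometricMean(runTimesMs));
    }
}
```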


International Workshop on Model-Based Software and Data Integration | 2008

Model-Driven Development of Complex and Data-Intensive Integration Processes

Matthias Böhm; Dirk Habich; Wolfgang Lehner; Uwe Wloka

Due to the changing scope of data management from centrally stored data towards the management of distributed and heterogeneous systems, integration takes place on different levels. The lack of standards for information integration as well as application integration has resulted in a large number of different integration models and proprietary solutions. With the aim of achieving a high degree of portability and reducing development efforts, model-driven development following the Model-Driven Architecture (MDA) is advantageous in this context as well. Hence, in the GCIP project (Generation of Complex Integration Processes), we focus on the model-driven generation and optimization of integration tasks using a process-based approach. In this paper, we contribute detailed generation aspects and finally discuss open issues and further challenges.
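
As a minimal sketch of the model-driven idea, the example below (our simplification, not the GCIP implementation) transforms a platform-independent process model into a platform-specific artifact. The operator types and the SQL target are assumptions chosen for illustration; another generator could target an EAI server from the same model.

```java
import java.util.List;

// Minimal sketch: a platform-independent process model is rendered into a
// platform-specific artifact by a per-platform generator. Illustrative only.
public class ProcessGenerator {

    record Operator(String type, String config) {}
    record ProcessModel(String name, List<Operator> operators) {}

    // Hypothetical target: an SQL script for a federated DBMS.
    static String toFederatedSql(ProcessModel m) {
        StringBuilder sql = new StringBuilder("-- generated from " + m.name() + "\n");
        for (Operator op : m.operators()) {
            switch (op.type()) {
                case "extract" -> sql.append("SELECT * FROM ")
                                     .append(op.config()).append(";\n");
                case "load"    -> sql.append("INSERT INTO ").append(op.config())
                                     .append(" SELECT * FROM staging;\n");
                default        -> sql.append("-- unsupported operator: ")
                                     .append(op.type()).append("\n");
            }
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        var model = new ProcessModel("orders", List.of(
                new Operator("extract", "src.orders"),
                new Operator("load", "dwh.orders")));
        System.out.print(toFederatedSql(model));
    }
}
```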


Extending Database Technology | 2009

GCIP: exploiting the generation and optimization of integration processes

Matthias Boehm; Uwe Wloka; Dirk Habich; Wolfgang Lehner

As a result of the changing scope of data management towards the management of highly distributed systems and applications, integration processes have gained in importance. Such integration processes represent an abstraction of workflow-based integration tasks. In practice, integration processes are pervasive, and the performance of complete IT infrastructures strongly depends on the performance of the central integration platform that executes the specified integration processes. In this area, the three major problems are: (1) significant development efforts, (2) low portability, and (3) inefficient execution. To overcome these problems, we follow a model-driven generation approach for integration processes. In this demo proposal, we introduce the GCIP framework (Generation of Complex Integration Processes), which allows the modeling of integration processes and the generation of different concrete integration tasks. The model-driven approach opens opportunities for rule-based and workload-based optimization techniques.
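
Building on the generation idea sketched for the workshop paper above, one way the "one model, several concrete integration tasks" notion could look in code is a registry of per-platform generators; all platform names and outputs below are illustrative, not part of the GCIP framework.

```java
import java.util.Map;
import java.util.function.Function;

// Sketch: the same process model can be turned into different concrete
// integration tasks by looking up a per-platform generator. Names are
// illustrative; the GCIP framework itself is not shown.
public class GeneratorRegistry {

    record ProcessModel(String name) {}

    private final Map<String, Function<ProcessModel, String>> generators = Map.of(
            "etl", m -> "ETL job for " + m.name(),
            "eai", m -> "EAI route for " + m.name(),
            "fdbms", m -> "federated view definition for " + m.name());

    String generate(String platform, ProcessModel model) {
        var gen = generators.get(platform);
        if (gen == null) throw new IllegalArgumentException("no generator: " + platform);
        return gen.apply(model);
    }

    public static void main(String[] args) {
        var registry = new GeneratorRegistry();
        var model = new ProcessModel("orders");
        for (String platform : new String[]{"etl", "eai", "fdbms"}) {
            System.out.println(registry.generate(platform, model));
        }
    }
}
```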


Advances in Databases and Information Systems | 2009

Cost-Based Vectorization of Instance-Based Integration Processes

Matthias Boehm; Dirk Habich; Steffen Preissler; Wolfgang Lehner; Uwe Wloka

The inefficiency of integration processes, as an abstraction of workflow-based integration tasks, is often caused by low resource utilization and significant waiting times for external systems. To overcome these problems, we proposed the concept of process vectorization, in which instance-based integration processes are transparently executed with the pipes-and-filters execution model. Here, the term vectorization is used in the sense of processing a sequence (vector) of messages by one standing process. Although process vectorization has been shown to achieve a significant throughput improvement, the concept has two major drawbacks. First, the theoretical performance of a vectorized integration process mainly depends on the performance of the most cost-intensive operator. Second, the practical performance strongly depends on the number of available threads. In this paper, we present an advanced optimization approach that addresses these problems. We generalize the vectorization problem and explain how to vectorize process plans in a cost-based manner. Due to the exponential complexity, we provide a heuristic computation approach and formally analyze its optimality. Our evaluation shows that the message throughput can be significantly increased compared to both the instance-based execution and the rule-based process vectorization.


International Conference on Enterprise Information Systems | 2009

Vectorizing Instance-Based Integration Processes

Matthias Boehm; Dirk Habich; Steffen Preissler; Wolfgang Lehner; Uwe Wloka

The inefficiency of integration processes, as an abstraction of workflow-based integration tasks, is often caused by low resource utilization and significant waiting times for external systems. Due to the increasing use of integration processes within IT infrastructures, throughput optimization has a high influence on the overall performance of such an infrastructure. In the area of computational engineering, low resource utilization is addressed with vectorization techniques. In this paper, we introduce the concept of vectorization in the context of integration processes in order to achieve a higher degree of parallelism; here, transactional behavior and serialized execution must be ensured. Our evaluation shows that the message throughput can be significantly increased.
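
A minimal sketch of the pipes-and-filters execution model behind vectorization is shown below, with each operator as a standing thread connected by FIFO queues: serialized order is preserved because each queue is FIFO and each stage is a single thread. This is illustrative only; transactional behavior, which the paper requires, is not handled here.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal pipes-and-filters sketch: each operator becomes a "standing"
// thread reading from an input queue and writing to an output queue, so a
// sequence (vector) of messages flows through the process concurrently.
public class StandingPipeline {

    static Thread stage(String name, BlockingQueue<String> in, BlockingQueue<String> out) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    String msg = in.take();            // blocks until a message arrives
                    out.put(name + "(" + msg + ")");   // "process" and forward
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();    // shut the stage down
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q1 = new ArrayBlockingQueue<>(16);
        BlockingQueue<String> q2 = new ArrayBlockingQueue<>(16);
        BlockingQueue<String> q3 = new ArrayBlockingQueue<>(16);
        stage("validate", q1, q2);   // two standing operators
        stage("transform", q2, q3);

        for (int i = 0; i < 3; i++) q1.put("msg" + i);  // a vector of messages
        for (int i = 0; i < 3; i++) System.out.println(q3.take());
        // FIFO queues and single-threaded stages preserve message order.
    }
}
```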


International Conference on Enterprise Information Systems | 2009

Invisible Deployment of Integration Processes

Matthias Boehm; Dirk Habich; Wolfgang Lehner; Uwe Wloka

Due to the changing scope of data management towards the management of heterogeneous and distributed systems and applications, integration processes gain in importance. This is particularly true for those processes used as abstractions of workflow-based integration tasks, which are widely applied in practice. In such scenarios, a typical IT infrastructure comprises multiple integration systems with overlapping functionalities. The major problems in this area are high development effort, low portability, and inefficiency. Therefore, in this paper, we introduce the vision of invisible deployment, which addresses the virtualization of multiple heterogeneous physical integration systems into a single logical integration system. This vision comprises several challenging issues regarding deployment as well as runtime aspects. We describe these challenges, discuss possible solutions, and present a detailed system architecture for the approach. As a result, the development effort can be reduced, and both portability and performance can be improved significantly.
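
As a sketch of the invisible deployment vision (our illustration, not the paper's architecture), a single logical integration system can route each deployed process to one of several physical systems; the least-loaded placement rule below is an assumption chosen to keep the example concrete.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: one logical integration system hides several physical ones.
// Callers address the logical system only; placement stays invisible.
public class LogicalIntegrationSystem {

    static class PhysicalSystem {
        final String name;
        final List<String> deployed = new ArrayList<>();
        PhysicalSystem(String name) { this.name = name; }
    }

    private final List<PhysicalSystem> systems;

    LogicalIntegrationSystem(List<PhysicalSystem> systems) { this.systems = systems; }

    void deploy(String process) {
        // Hypothetical placement rule: pick the least-loaded physical system.
        PhysicalSystem target = systems.stream()
                .min(Comparator.comparingInt((PhysicalSystem s) -> s.deployed.size()))
                .orElseThrow();
        target.deployed.add(process);
        System.out.println(process + " -> " + target.name);
    }

    public static void main(String[] args) {
        var logical = new LogicalIntegrationSystem(List.of(
                new PhysicalSystem("EAI server"),
                new PhysicalSystem("ETL tool"),
                new PhysicalSystem("federated DBMS")));
        for (int i = 0; i < 5; i++) logical.deploy("process" + i);
    }
}
```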


International Conference on Enterprise Information Systems | 2008

Model-Driven Generation and Optimization of Complex Integration Processes

Matthias Boehm; Uwe Wloka; Dirk Habich; Wolfgang Lehner

Collaboration


Dive into Uwe Wloka's collaboration network.

Top Co-Authors

Dirk Habich, Dresden University of Technology
Wolfgang Lehner, Dresden University of Technology
Matthias Boehm, Dresden University of Technology
Matthias Böhm, Dresden University of Technology
Steffen Preissler, Dresden University of Technology