Tino Fleuren
Kaiserslautern University of Technology
Publications
Featured research published by Tino Fleuren.
Software Engineering and Advanced Applications | 2008
Tino Fleuren; Paul Müller
When designing a grid workflow, it might be necessary to integrate different kinds of services. In an ideal scenario, all services are grid-enabled, but real workflows often consist of both grid-enabled and non-grid-enabled services. One reason is that grid-enabling services can be costly; it is therefore favorable to grid-enable only the compute-intensive and time-consuming applications. Additionally, workflows should be allowed to include grid jobs that execute legacy applications. Another reason is that third parties very often charge fees for accessing their services, so such a third-party service cannot be converted into a service that can be integrated into a grid environment at all. This paper discusses the problems of designing a workflow that consists of all these different kinds of services. The geospatial domain is used as an example to demonstrate the difficulties that workflow designers have to overcome, i.e., constructing a geospatial workflow from a combination of conventional Web services (XML-based), standard OGC Web services, and grid-enabled OGC Web services (WSRF-based). The concept of a workflow engine capable of enacting these workflows is presented, and an implementation based on the ActiveBPEL engine is proposed.
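The integration problem the abstract describes can be illustrated with a minimal sketch: an adapter layer gives the workflow engine one uniform way to invoke each service, while the adapter hides whether the target is a plain XML Web service, a standard OGC service, or a stateful WSRF resource. All names, endpoints, and invocation summaries below are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical service kinds from the paper's scenario.
PLAIN_WS = "plain-ws"        # conventional XML-based Web service
OGC_WS = "ogc-ws"            # standard OGC Web service
GRID_OGC_WS = "grid-ogc-ws"  # grid-enabled, WSRF-based OGC Web service

@dataclass
class Activity:
    name: str
    kind: str
    endpoint: str

def make_invoker(kind: str) -> Callable[[Activity, dict], dict]:
    """Return an adapter that hides the invocation details of each service kind."""
    def invoke(activity: Activity, inputs: dict) -> dict:
        if kind == GRID_OGC_WS:
            # A WSRF service is stateful: a resource is created first, then used.
            via = "create WS-Resource, then invoke"
        elif kind == OGC_WS:
            via = "HTTP request with OGC key-value encoding"
        else:
            via = "plain SOAP call"
        return {"activity": activity.name, "via": via, "inputs": inputs}
    return invoke

# A toy geospatial workflow mixing all three kinds (endpoints are made up).
workflow = [
    Activity("FetchMap", OGC_WS, "http://example.org/wms"),
    Activity("Reproject", GRID_OGC_WS, "http://example.org/grid-wps"),
    Activity("Publish", PLAIN_WS, "http://example.org/ws"),
]

for act in workflow:
    result = make_invoker(act.kind)(act, {"data": "geotiff"})
    print(result["activity"], "->", result["via"])
```

The point of the adapter is that the engine's enactment loop stays identical no matter which kind of service an activity targets.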
Asia-Pacific Services Computing Conference | 2011
Hong Linh Truong; Schahram Dustdar; Joachim Götze; Tino Fleuren; Paul Müller; Salah Eddine Tbahriti; Michael Mrissa; Chirine Ghedira
Rich types of data offered by data as a service (DaaS) in the cloud are typically associated with different and complex data concerns that DaaS service providers, data providers, and data consumers must carefully examine and agree on before passing and utilizing data. Unlike service agreements, data agreements, which reflect conditions established on the basis of data concerns between the relevant stakeholders, have received little attention. However, as data concerns are complex and contextual, and given the trend of mixing data sources by automated techniques such as data mashups, data agreements must be associated with data discovery, retrieval, and utilization. Unfortunately, the exchange of data agreements has so far not been automated and incorporated into service and data discovery and composition. In this paper, we analyze possible steps and propose interactions among data consumers, DaaS service providers, and data providers in exchanging data agreements. Based on that, we present a novel service for composing, managing, and analyzing data agreements for DaaS in cloud environments and data marketplaces.
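One reason data agreements matter for mashups can be sketched concretely: when data from several DaaS sources is mixed, the agreement governing the result must not be weaker than any source's agreement. The field names and the "most restrictive wins" composition rule below are illustrative assumptions, not the paper's agreement model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataAgreement:
    source: str
    retention_days: int   # how long the consumer may keep the data
    commercial_use: bool  # whether commercial use is permitted

def compose(agreements):
    """Derive the agreement governing a mashup: keep the most restrictive terms."""
    return DataAgreement(
        source="+".join(a.source for a in agreements),
        retention_days=min(a.retention_days for a in agreements),
        commercial_use=all(a.commercial_use for a in agreements),
    )

a = DataAgreement("weather-daas", retention_days=365, commercial_use=True)
b = DataAgreement("traffic-daas", retention_days=30, commercial_use=False)
print(compose([a, b]))
```

Real data concerns are contextual rather than simple scalars, which is precisely why the paper argues that exchanging and analyzing agreements needs dedicated service support.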
European Conference on Web Services | 2010
Joachim Götze; Tino Fleuren; Paul Müller; Simon Schwantzer
Processing licensed content with automatic compliance checking of the license terms is currently not supported in Grid environments. However, applications that process large amounts of data are a topic recently gaining more and more attention, and the use of high-performance computing capabilities, e.g., those provided by Grid environments, is an obvious choice to speed up processing time. Currently, most of the input data required by such Grid applications is freely accessible by standard Grid technology. However, many upcoming Grid Computing applications require data provided under a specific license, which also leads to license violations on a regular basis, because license compliance currently cannot be validated. Data under a specific license is often retrieved from outside the Grid over individual portals, directly from a content provider or distributor. Besides the additional effort for user and security management, such external distribution approaches prevent an association between the license information and the content. In this work, an internal distribution approach for licensed content in Grid environments (License4Grid) is designed that maintains the association between the license information and the corresponding data. The design respects the requirements emerging from handling both unprotected and protected content and makes use of the user and security mechanisms provided by common Grid technologies in order to fit into the environment homogeneously.
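The core idea of internal distribution, keeping the license bound to the data so compliance can be checked before a job runs, can be sketched minimally. All class and field names below are illustrative assumptions, not the License4Grid design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LicensedContent:
    content_id: str
    data: bytes
    license_id: str
    permitted_actions: frozenset  # e.g. {"read", "derive"}

def check_compliance(item: LicensedContent, action: str) -> bool:
    """Validate a requested action against the license that travels with the data."""
    return action in item.permitted_actions

item = LicensedContent(
    "dem-tile-42", b"...", "example-license-nc",
    frozenset({"read", "derive"}),
)
print(check_compliance(item, "read"))          # True
print(check_compliance(item, "redistribute"))  # False
```

With an external portal, by contrast, only the raw bytes reach the Grid and the `license_id` association is lost, so no such check is possible at processing time.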
Software Engineering and Advanced Applications | 2013
Tino Fleuren; Joachim Götze; Paul Müller
This paper describes an operator for configuring scientific workflows that facilitates the process of assigning workflow activities to cloud resources. In general, modeling and configuring scientific workflows is complex and error-prone, because workflows are built of highly parallel patterns comprising huge numbers of tasks. Reusing tested patterns as building blocks avoids repeating errors. Workflow skeletons are parametrizable building blocks describing such patterns; hence, scientists have a means to reuse validated parallel constructs for rapidly defining their in-silico experiments. Often, configurations of data-parallel patterns are generated automatically. However, for many task-parallel patterns, each task needs to be configured manually. In frameworks like MapReduce, scientists have no control over how tasks are assigned to cloud resources; what is a strength of such patterns in that setting may lead to unnecessary data transfers in other patterns. Workflow skeletons facilitate the configuration by providing an operator that accepts parameters. This allows for scalable configurations that save time and cost by allocating cloud resources just in time. In addition, this configuration operator helps to define configurations that avoid unnecessary data transfers.
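A parametrizable configuration operator of this kind can be sketched as follows: the operator takes a placement policy as a parameter, so tasks that exchange large data can be pinned to the same cloud resource instead of being scattered MapReduce-style. The function signature and the round-robin fallback are illustrative assumptions, not the paper's operator.

```python
from itertools import cycle

def configure(tasks, resources, pin_together=()):
    """Assign each task to a resource; tasks in one pin group share a resource."""
    assignment = {}
    rr = cycle(resources)
    for group in pin_together:        # co-located tasks: one shared resource
        res = next(rr)
        for task in group:
            assignment[task] = res
    for task in tasks:                # remaining tasks: round-robin
        if task not in assignment:
            assignment[task] = next(rr)
    return assignment

conf = configure(
    tasks=["split", "align", "merge", "plot"],
    resources=["vm-a", "vm-b"],
    pin_together=[("align", "merge")],  # avoid transferring data between them
)
print(conf)
```

Because the operator is driven by parameters rather than per-task manual edits, the same skeleton scales from a handful of tasks to thousands without reconfiguring each one.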
Software Engineering and Advanced Applications | 2013
Joachim Götze; Tino Fleuren; Bernd Reuther; Paul Müller
Cloud Computing provides flexible and dynamic provisioning of resources, services, and applications. As such, Cloud Computing is the ideal IT infrastructure for the ever-changing workload of companies and service providers. Although cloud providers offer functionality for usage accounting, this functionality is limited to their own requirements. Companies and service providers, as cloud users, also need usage accounting, but with different requirements than the cloud providers. Additionally, cloud users are not limited to a single cloud but make use of multiple cloud infrastructures and applications depending on their needs. Cloud users require a usage accounting infrastructure capable of supporting not only billing as an accounting application, but all kinds of applications, for example cost allocation or trend analysis. In order to manage such a complex and adaptable usage accounting infrastructure, we present a policy-based management approach. Policies are used as a high-level description of the intended behavior of the infrastructure and are then utilized to derive configurations for the infrastructure services. Such a solution ensures the efficient management and administration of complex usage accounting infrastructures.
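The derivation step, from one high-level policy to per-service configurations, can be sketched minimally. The policy fields and derived settings below are illustrative assumptions, not the paper's policy language.

```python
def derive_configs(policy, services):
    """Translate one intended-behavior policy into per-service settings."""
    configs = {}
    for svc in services:
        configs[svc] = {
            "record_retention_days": policy["retention_days"],
            # Trend analysis does not need identities; billing does.
            "anonymize_user_ids": policy["purpose"] == "trend-analysis",
            "forward_to_billing": policy["purpose"] == "billing",
        }
    return configs

policy = {"purpose": "trend-analysis", "retention_days": 90}
print(derive_configs(policy, ["collector", "aggregator"]))
```

The administrator states intent once, in the policy, and every infrastructure service receives a consistent configuration derived from it.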
Proceedings of the 6th International Workshop on Enhanced Web Service Technologies | 2011
Joachim Götze; Tino Fleuren; Bernd Reuther; Paul Müller
Service orientation is a successful architecture paradigm used within different application scenarios, among them high-performance computing (grid computing), the Internet of Services, and cloud computing. Usage accounting is a topic currently covered predominantly for computing services and networking resources, but a generic approach comprising arbitrary services does not exist. Such an approach to usage accounting can benefit the platform provider by taking all kinds of services and applications into account. It makes it possible not only to create a financially independent service platform, but also to take advantage of applications like cost allocation and trend analysis of service utilization. The main contributions of the solution for usage accounting proposed here are a generic usage record format able to support accounting information of any kind of service in a future-proof manner, as well as a scalable accounting infrastructure easily adaptable to the specific requirements of the application domain.
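One common way to make a usage record format generic and future-proof is to split it into a small fixed core (who used which service, when) and an open key-value part for service-specific metrics; the sketch below assumes that split, and all field names are illustrative, not the paper's format.

```python
from dataclasses import dataclass, field

@dataclass
class UsageRecord:
    record_id: str
    service: str
    consumer: str
    start: str  # ISO-8601 timestamp
    end: str    # ISO-8601 timestamp
    # Open part: arbitrary per-service metrics, so new service types
    # need no change to the record format itself.
    metrics: dict = field(default_factory=dict)

rec = UsageRecord(
    "r-001", "storage-service", "user-42",
    "2011-05-01T10:00:00Z", "2011-05-01T11:00:00Z",
    metrics={"bytes_stored": 10_737_418_240, "requests": 152},
)
print(rec.metrics["requests"])
```

A billing application reads only the metrics it prices, while a trend-analysis application can aggregate over any keys present, which is what lets one record format serve many accounting applications.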
Archive | 2010
Dirk Henrici; Aneta Kabzeva; Tino Fleuren; Paul Müller
One of the advantages of RFID technology over the still more widespread optical barcodes is the comparatively large data storage capacity. Conventional 1-dimensional barcodes can store just a few bytes of data. For instance, the EAN-13 code used at the point of sale in Europe stores 13 numerical digits identifying country, product manufacturer, and product type; there is no means of identifying each item uniquely. More complex 2-dimensional barcodes or larger 1-dimensional barcodes extend the amount of data that can be stored, but this comes at the cost of a larger printing area if readability is to be maintained. While the amount of data that can be stored in optical barcodes is therewith limited by the available area, RFID transponders offer a more comprehensive data storage capacity. Even comparatively simple tags can store a serial number capable of uniquely identifying objects globally. RFID transponders can thus serve as a means of unique identification for different kinds of objects such as clothes, foods, or documents. More expensive transponders can store an even larger amount of data, for instance additional data describing the tagged objects, a documentation of the objects' history, or even data putting the object in the context of other objects. The question arises how to make use of these additional capabilities: what data should be stored directly on the RFID transponders, and what data should be stored in databases in the backend of a system? This design decision influences many characteristics of the overall RFID system. Thus, data storage considerations are an important part of planning the architecture of such a system. This book chapter discusses different design possibilities for data storage in RFID systems and their impact on the quality factors of the resulting system. As will be shown, many characteristics of the systems are influenced.
The design decision on data storage in an RFID system is therefore of great importance and should be taken with care, considering all relevant aspects. Note that this book chapter relates only to RFID transponders used exclusively as data storage units; transponders with processors, cryptographic hardware, or sensors partially require separate inspection and are out of scope.
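The limited capacity of 1-dimensional barcodes mentioned above is easy to make concrete: an EAN-13 code carries 12 payload digits plus a check digit computed from them, so the entire symbol encodes only a product class, never an individual item. The check digit follows the standard GS1 rule (digits weighted alternately 1 and 3):

```python
def ean13_check_digit(digits12: str) -> int:
    """Compute the EAN-13 check digit from the first 12 digits."""
    s = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits12))
    return (10 - s % 10) % 10

# Example: the first 12 digits of the EAN 4006381333931.
print(ean13_check_digit("400638133393"))  # -> 1
```

Even a simple RFID tag, by contrast, adds a per-item serial number on top of such a product code, which is exactly the unique item identification the barcode cannot provide.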
European Conference on Web Services | 2011
Tino Fleuren; Joachim Götze; Paul Müller
Archive | 2011
Tino Fleuren; Paul Müller
Software Engineering and Advanced Applications | 2014
Tino Fleuren; Joachim Götze; Paul Müller