Publication


Featured research published by Heiko Schuldt.


ACM Transactions on Database Systems | 2002

Atomicity and isolation for transactional processes

Heiko Schuldt; Gustavo Alonso; Catriel Beeri; Hans-Jörg Schek

Processes are increasingly being used to make complex application logic explicit. Programming using processes has significant advantages but it poses a difficult problem from the system point of view in that the interactions between processes cannot be controlled using conventional techniques. In terms of recovery, the steps of a process are different from operations within a transaction. Each one has its own termination semantics and there are dependencies among the different steps. Regarding concurrency control, the flow of control of a process is more complex than in a flat transaction. A process may, for example, partially roll back its execution or may follow one of several alternatives. In this article, we deal with the problem of atomicity and isolation in the context of processes. We propose a unified model for concurrency control and recovery for processes and show how this model can be implemented in practice, thereby providing a complete framework for developing middleware applications using processes.
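The termination semantics described in this abstract can be illustrated with a small sketch. The following is a hypothetical, simplified illustration (not the paper's actual model): each process step carries a forward action and an optional compensation, and a failing step triggers a partial rollback of the compensatable steps completed so far, in reverse order. All names (`Step`, `run_process`) are invented for this sketch.

```python
# Hypothetical sketch of process-level atomicity via compensation:
# each step has a forward action and (optionally) a compensation.
# If a step fails, completed compensatable steps are undone in
# reverse order, i.e., a partial rollback of the process.

class Step:
    def __init__(self, name, action, compensation=None):
        self.name = name
        self.action = action            # callable, truthy on success
        self.compensation = compensation

def run_process(steps):
    """Run steps in order; on failure, compensate completed steps."""
    done = []
    for step in steps:
        if step.action():
            done.append(step)
        else:
            # partial rollback: undo compensatable steps in reverse order
            for s in reversed(done):
                if s.compensation:
                    s.compensation()
            return False
    return True

log = []
ok = run_process([
    Step("reserve", lambda: log.append("reserve") or True,
         lambda: log.append("undo-reserve")),
    Step("charge", lambda: log.append("charge-failed") and False),
])
# after the failed "charge" step, "reserve" has been compensated
```

The sketch omits everything that makes the paper's model interesting (alternatives, non-compensatable steps, concurrency control); it only shows the basic compensate-in-reverse discipline.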


International Workshop on Research Issues in Data Engineering | 1999

WISE: business to business e-commerce

Gustavo Alonso; Ulrich Fiedler; Claus Hagen; Amaia Lazcano; Heiko Schuldt; N. Weiler

The Internet and the proliferation of inexpensive computing power in the form of clusters of workstations or PCs provide the basic hardware infrastructure for business-to-business electronic commerce in small and medium enterprises (SMEs). Unfortunately, the corresponding software infrastructure is still missing. In this paper, we show a way to develop appropriate tools for electronic commerce by describing the approach we have taken in the WISE (Workflow-based Internet SErvices) project. The goals of WISE are to develop and deploy the software infrastructure that is necessary to support business-to-business electronic commerce in the form of virtual enterprises. The idea is to combine the tools and services of different companies as building blocks of a higher-level system in which a process acts as the blueprint for control and data flow within the virtual enterprise. From this idea, the final goal is to build the basic support for an Internet trading community where enterprises can join their services to provide value-added processes.


International Conference on Web Services | 2004

Scalable peer-to-peer process management - the OSIRIS approach

Christoph Schuler; Roger Weber; Heiko Schuldt; Hans-Jörg Schek

The functionality of applications is increasingly being made available by services. General concepts and standards like SOAP, WSDL, and UDDI support the discovery and invocation of single Web services. State-of-the-art process management is conceptually based on a centralized process manager. The resources of this coordinator limit the number of concurrent process executions, especially since the coordinator has to persistently store each state change for recovery purposes. In this paper, we overcome this limitation by executing processes in a peer-to-peer way exploiting all nodes of the system. By distributing the execution and navigation costs, we can achieve a higher degree of scalability allowing for a much larger throughput of processes compared to centralized solutions. This paper describes our prototype system OSIRIS, which implements such a true peer-to-peer process execution. We further present very promising results verifying the advantages over centralized process management in terms of scalability.


Archive | 2008

CASCOM: Intelligent Service Coordination in the Semantic Web

Michael Schumacher; Heikki Helin; Heiko Schuldt

This book presents the design, implementation and validation of a value-added supportive infrastructure for Semantic Web based business application services across mobile and fixed networks, applied to an emergency healthcare application. This infrastructure has been realized by the CASCOM European research project. For end users, the CASCOM framework provides seamless access to semantic Web services anytime, anywhere, by using any mobile computing device. For service providers, CASCOM offers an innovative development platform for intelligent and mobile business application services in the Semantic Web. The essential approach of CASCOM is the innovative inter-disciplinary combination of intelligent agent, Semantic Web, peer-to-peer, and mobile computing technology. Conventional peer-to-peer computing environments are extended with components for mobile and wireless communication. Semantic Web services are provided by peer software agents, which exploit the coordination infrastructure to efficiently operate in highly dynamic environments. The generic coordination support infrastructure includes efficient communication means, support for context-aware adaptation techniques, as well as flexible, resource-efficient service discovery, execution, and composition planning. The book has three main parts. First, the state of the art is reviewed in related research fields. Then, a full proof-of-concept design and implementation of the generic infrastructure is presented. Finally, quantitative and qualitative analysis is presented on the basis of the field trials of the emergency application.


International Conference on Service-Oriented Computing | 2003

Peer-to-peer Process Execution with OSIRIS

Christoph Schuler; Roger Weber; Heiko Schuldt; Hans-Jörg Schek

Standards like SOAP, WSDL, and UDDI facilitate the proliferation of services. Based on these technologies, processes are a means to combine services into applications and to provide new value-added services. For large information systems, a centralized process engine is no longer appropriate due to limited scalability. Instead, in this paper, we propose a distributed and decentralized process engine that routes process instances directly from one node to the next. Such a Peer-to-Peer Process Execution (P3E) promises good scalability characteristics since it is able to dynamically balance the load of processes and services among all available service providers. Therefore, navigation costs accumulate only on nodes that are directly involved in the execution. However, this requires sophisticated strategies for the replication of meta information for P3E. In particular, replication mechanisms should avoid frequent accesses to global information repositories. In our system called Osiris (Open Service Infrastructure for Reliable and Integrated Process Support), we deploy a clever publish/subscribe-based replication scheme together with freshness predicates to significantly reduce replication costs. This way, OSIRIS can support process-based applications in a dynamically evolving system without limiting scalability and correctness. First experiments have shown very promising results with respect to scalability.
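The idea of combining publish/subscribe replication with freshness predicates can be sketched as follows. This is a hypothetical illustration, not the actual OSIRIS code: subscribers attach a predicate saying when their replica is still "fresh enough", and the broker pushes an update only when the predicate no longer holds, which cuts down replication traffic. The `Broker` class and the load example are invented for this sketch.

```python
# Hypothetical sketch (not the actual OSIRIS implementation): a
# publish/subscribe replication scheme where each subscriber attaches
# a freshness predicate, so updates are pushed only when the replica
# would otherwise become too stale.

class Broker:
    def __init__(self):
        self.subs = []   # list of (replica_dict, freshness_predicate)

    def subscribe(self, replica, predicate):
        self.subs.append((replica, predicate))

    def publish(self, key, value):
        for replica, fresh_enough in self.subs:
            old = replica.get(key)
            # push only if the predicate says the replica is too stale
            if not fresh_enough(old, value):
                replica[key] = value

broker = Broker()
replica = {}
# tolerate load values that stay within 10 units of the master copy
broker.subscribe(replica, lambda old, new: old is not None
                 and abs(old - new) < 10)
broker.publish("load", 50)   # replica empty -> pushed
broker.publish("load", 55)   # within tolerance -> skipped
broker.publish("load", 70)   # drift of 20 -> pushed
```

The point of the predicate is exactly the trade-off named in the abstract: stale-but-bounded replicas avoid frequent accesses to a global repository.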


Symposium on Principles of Database Systems | 1999

Concurrency control and recovery in transactional process management

Heiko Schuldt; Gustavo Alonso; Hans-Jörg Schek

The unified theory of concurrency control and recovery integrates atomicity and isolation within a common framework, thereby avoiding many of the shortcomings resulting from treating them as orthogonal problems. This theory can be applied to the traditional read/write model as well as to semantically rich operations. In this paper, we extend the unified theory by applying it to generalized process structures, i.e., arbitrary partially ordered sequences of transaction invocations. Using the extended unified theory, our goal is to provide a more flexible handling of concurrent processes while allowing as much parallelism as possible. Unlike in the original unified theory, we take into account that not all activities of a process might be compensatable and the fact that these process structures require transactional properties more general than in traditional ACID transactions. We provide a correctness criterion for transactional processes and identify the key points in which the more flexible structure of transactional processes implies differences from traditional transactions.


Conference on Information and Knowledge Management | 2005

Decentralized coordination of transactional processes in peer-to-peer environments

Klaus Haller; Heiko Schuldt; Can Türker

Business processes executing in peer-to-peer environments usually invoke Web services on different, independent peers. Although peer-to-peer environments inherently lack global control, some business processes nevertheless require global transactional guarantees, i.e., atomicity and isolation applied at the level of processes. This paper introduces a new decentralized serialization graph testing protocol to ensure concurrency control and recovery in peer-to-peer environments. The uniqueness of the proposed protocol is that it ensures global correctness without relying on a global serialization graph. Essentially, each transactional process is equipped with partial knowledge that allows the transactional processes to coordinate. Globally correct execution is achieved by communication among dependent transactional processes and the peers they have accessed. In case of failures, a combination of partial backward and forward recovery is applied. Experimental results exhibit a significant performance gain over traditional distributed locking-based protocols with respect to the execution of transactions encompassing Web service requests.
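The core check behind serialization graph testing can be shown in a few lines. This is a deliberately centralized, hypothetical sketch (the paper's contribution is precisely that no global graph is needed): record an edge t1 -> t2 whenever an operation of t1 precedes a conflicting operation of t2, and keep the schedule only as long as the graph stays acyclic. The `has_cycle` helper is invented for this illustration.

```python
# Hypothetical sketch of serialization graph testing (SGT): a conflict
# edge t1 -> t2 means an operation of t1 preceded a conflicting
# operation of t2. The schedule is serializable as long as the graph
# is acyclic; a cycle forces an abort. (The paper distributes this
# knowledge across peers; here the graph is global for simplicity.)

def has_cycle(edges):
    """Detect a cycle in a directed graph given as {node: set_of_successors}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}

    def dfs(n):
        color[n] = GRAY
        for m in edges.get(n, ()):
            if color.get(m, WHITE) == GRAY:
                return True            # back edge -> cycle
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(color))

# t1 -> t2 -> t3 is serializable; adding t3 -> t1 closes a cycle,
# so one of the three transactions would have to be aborted.
acyclic = has_cycle({"t1": {"t2"}, "t2": {"t3"}, "t3": set()})
cyclic = has_cycle({"t1": {"t2"}, "t2": {"t3"}, "t3": {"t1"}})
```

In the decentralized setting of the paper, each process holds only the edges it is involved in and exchanges them with dependent processes, so the cycle test emerges from peer-to-peer communication rather than from a global graph.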


International Journal of Cooperative Information Systems | 2005

Peer-to-Peer Execution of (Transactional) Processes

Christoph Schuler; Heiko Schuldt; Can Türker; Roger Weber; Hans-Jörg Schek

Standards like SOAP, WSDL, and UDDI facilitate the proliferation of services. Based on these technologies, processes are a means to combine services to applications and to provide new value-added s...


International Journal on Digital Libraries | 2007

DILIGENT: integrating digital library and Grid technologies for a new Earth observation research infrastructure

Leonardo Candela; Fuat Akal; Henri Avancini; Donatella Castelli; Luigi Fusco; Veronica Guidetti; Christoph Langguth; Andrea Manzi; Pasquale Pagano; Heiko Schuldt; Manuele Simi; Michael Springmann; Laura Cristiana Voicu

This paper introduces DILIGENT, a digital library infrastructure built by integrating digital library and Grid technologies and resources. This infrastructure allows different communities to dynamically build specialised digital libraries capable of supporting the entire e-Science knowledge production and consumption life-cycle by using shared computing, storage, content, and application resources. The paper presents some of the main software services that implement the DILIGENT system. Moreover, it illustrates these features by showing how the DILIGENT infrastructure is being exploited in supporting the activity of user communities working in the Earth Science Environmental sector.


Lecture Notes in Computer Science | 2001

Supporting Reliable Transactional Business Processes by Publish/Subscribe Techniques

Christoph Schuler; Heiko Schuldt; Hans Schek

Processes have increasingly become an important design principle for complex intra- and inter-organizational e-services. In particular, processes make it possible to provide value-added services by seamlessly combining existing e-services into a coherent whole, even across corporate boundaries. Process management approaches support the definition and the execution of predefined processes as distributed applications. They ensure that execution guarantees are observed even in the presence of failures and concurrency. The implementation of a process management execution environment is a challenging task in several aspects. First, the processes to be executed are not necessarily static and may not follow a predefined pattern, but must be generated dynamically (e.g., choosing the best offer in a pre-sales interaction). Second, deferring the execution of some application services in case of overload or unavailability is often not acceptable and must be avoided by exploiting replicated services or even by automatically adding such services, and by monitoring and balancing the load. Third, in order to avoid a bottleneck at the process coordinator level, a centralized implementation must be avoided as much as possible. Hence, a framework is needed which supports both the modularization of the process coordinator's functionality and the flexibility needed for dynamically generating and adapting processes. In this paper we show how publish/subscribe techniques can be used for the implementation of process management. We show what the overall architecture looks like when using a computer cluster and publish/subscribe components as the basic infrastructure to drive the enactment of processes. In particular we describe how load balancing, process navigation, failure handling, and process monitoring are supported with minimal intervention of a centralized coordinator.

Collaboration


Dive into Heiko Schuldt's collaborations.

Top co-author: Christoph Schuler (École Polytechnique Fédérale de Lausanne)