
Publication


Featured research published by Edward So.


International World Wide Web Conference | 2004

XVM: a bridge between xml data and its behavior

Quanzhong Li; Michelle Y. Kim; Edward So; Steve Wood

XML has become one of the core technologies for contemporary business applications, especially web-based applications. To facilitate processing of diverse XML data, we propose an extensible, integrated XML processing architecture, the XML Virtual Machine (XVM), which connects XML data with their behaviors. At the same time, the XVM is also a framework for developing and deploying XML-based applications. Using component-based techniques, the XVM supports arbitrary granularity and provides a high degree of modularity and reusability. XVM components are dynamically loaded and composed during XML data processing. Using the XVM, both client-side and server-side XML applications can be developed and deployed in an integrated way. We also present an XML application container built on top of the XVM along with several sample applications to demonstrate the applicability of the XVM framework.
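The component model described above can be illustrated with a small sketch. The registry and handler names below are hypothetical, for illustration only, and not the XVM's actual API:

```python
# Hypothetical sketch of component-based XML dispatch in the spirit of the XVM:
# components register behavior for element names and are composed at processing time.
import xml.etree.ElementTree as ET

class ComponentRegistry:
    """Maps XML element names to behavior components (plain callables here)."""
    def __init__(self):
        self._components = {}

    def register(self, tag, component):
        self._components[tag] = component

    def process(self, xml_text):
        """Walk the document in order and invoke the component bound to each element."""
        results = []
        for elem in ET.fromstring(xml_text).iter():
            handler = self._components.get(elem.tag)
            if handler is not None:
                results.append(handler(elem))
        return results

registry = ComponentRegistry()
registry.register("price", lambda e: float(e.text) * 1.07)   # e.g. add tax
registry.register("sku", lambda e: e.text.upper())

doc = "<order><sku>ab-1</sku><price>10.00</price></order>"
print(registry.process(doc))  # handlers fire in document order
```

New element types can be supported by registering another component, without touching existing ones — the modularity/reusability point the abstract makes.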


Measurement and Modeling of Computer Systems | 2005

Low traffic overlay networks with large routing tables

Chunqiang Tang; Melissa J. Buco; Rong N. Chang; Sandhya Dwarkadas; Laura Z. Luan; Edward So; Christopher Ward

The routing tables of Distributed Hash Tables (DHTs) can vary from size O(1) to O(n). Currently, what is lacking is an analytic framework to suggest the optimal routing table size for a given workload. This paper (1) compares DHTs with O(1) to O(n) routing tables and identifies some good design points; and (2) proposes protocols to realize the potential of those good design points. We use total traffic as the uniform metric to compare heterogeneous DHTs and emphasize the balance between maintenance cost and lookup cost. Assuming a node on average processes 1,000 or more lookups during its entire lifetime, our analysis shows that large routing tables actually lead to both low traffic and low lookup hops. These good design points translate into one-hop routing for systems of medium size and two-hop routing for large systems. Existing one-hop or two-hop protocols are based on a hierarchy. We instead demonstrate that it is possible to achieve completely decentralized one-hop or two-hop routing, i.e., without giving up being peer-to-peer. We propose 1h-Calot for one-hop routing and 2h-Calot for two-hop routing. Assuming a moderate lookup rate, compared with DHTs that use O(log n) routing tables, 1h-Calot and 2h-Calot save traffic by up to 70% while resolving lookups in one or two hops as opposed to O(log n) hops.
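The maintenance-versus-lookup balance can be sketched numerically. The unit costs, lifetime lookup count, and table sizes below are illustrative assumptions, not the paper's calibrated model:

```python
# Back-of-envelope traffic comparison across DHT routing-table sizes.
# Assumed model: each routing-table entry costs maintenance messages over a
# node's lifetime, and each lookup costs one message per hop.
import math

def total_traffic(table_size, hops, lookups, hb_cost=1.0, msg_cost=1.0):
    """Messages a node incurs over its lifetime: table maintenance + lookup forwarding."""
    return table_size * hb_cost + lookups * hops * msg_cost

n, lookups = 100_000, 1_000            # system size; lookups per node lifetime
log_n = math.log2(n)

logn_dht = total_traffic(log_n, log_n, lookups)         # Chord-like: O(log n) table, O(log n) hops
one_hop  = total_traffic(n, 1, lookups)                 # full table, one-hop lookups
two_hop  = total_traffic(2 * math.sqrt(n), 2, lookups)  # ~O(sqrt n) table, two hops

print(f"O(log n): {logn_dht:.0f}  one-hop: {one_hop:.0f}  two-hop: {two_hop:.0f}")
```

Under these toy parameters the two-hop design incurs the least total traffic at large n, consistent with the abstract's conclusion that larger routing tables pay off once nodes perform enough lookups.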


International Conference on Web Services | 2005

Fresco: a Web services based framework for configuring extensible SLA management systems

Christopher Ward; Melissa J. Buco; Rong N. Chang; Laura Z. Luan; Edward So; Chunqiang Tang

A service level agreement (SLA) is a service contract that includes the evaluation criteria for agreed service quality standards. Since agreeable specifications on the evaluation criteria cannot be limited in practice, competitive SLA management products must be extensible in terms of their support for contract-specific SLA compliance evaluations. As the need for running and managing those software products as services increases, we have found that developing a good solution for configuring them as per contractual terms is a challenging task. This paper presents the Fresco framework, which facilitates configuring extensible SLA management systems using Web services. An XML-based specification of SLA management related data called SCOL is also presented to show how the framework supports contract-specific SLA terms and contract-specific extensions of the deployed SLA management software. The paper furthermore shows how the Fresco system uses a template-based approach to communicate with other Web services applications with support for various input and output formats. Our experience with implementing the Fresco framework for a leading commercial SLA management software product demonstrates that the framework facilitates the creation of effective and efficient solutions for configuring extensible SLA management systems.


IEEE International Conference on Services Computing | 2006

A Distributed Service Management Infrastructure for Enterprise Data Centers Based on Peer-to-Peer Technology

Chunqiang Tang; Rong N. Chang; Edward So

This paper presents a distributed service management infrastructure called BISE. One distinguishing feature of BISE is its adoption of the peer-to-peer (P2P) model in support of real-time service management. BISE offers significant advantages over existing systems in scalability, resilience, and manageability. Current P2P algorithms are mainly developed for file-sharing applications running on desktops, which have characteristics dramatically different from enterprise data centers. This difference led us to design our own P2P algorithms specifically optimized for enterprise environments. Based on these algorithms, we implemented a P2P substrate called BiseWeaver (25,000 lines of Java code) as the core of BISE. Our evaluation on a set of distributed machines shows that BiseWeaver is efficient and robust, and provides timely monitoring data in support of proactive SLA management.


IEEE International Conference on Services Computing | 2005

PEM: a framework enabling continual optimization of workflow process executions based upon business value metrics

Melissa J. Buco; Rong N. Chang; Laura Z. Luan; Edward So; Chunqiang Tang; Christopher Ward

The competitiveness of the market place and the advent of on demand services computing are encouraging many organizations to improve their business efficiency and agility via business process management technologies. A lot of work has been done in process codification, tracking, and automation. However, a significant gap still remains between the way an organization's codified processes execute and the organization's business objectives, such as maximizing profit with a high degree of customer satisfaction. This paper addresses this gap by proposing a process execution management (PEM) framework which enables continual optimization of workflow process executions based upon business value metrics such as SLA breach penalty, revenue, and customer satisfaction index. We have implemented the PEM framework based upon leading commercial products. We have also used the framework to develop two representative business performance management solutions for service quality management processes and application execution workflows. Our experimental results show that, when compared with a state-of-the-art commercial workflow product, our PEM system can reduce the loss of business value of a set of process execution requests by 67% on average.
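A toy illustration of value-driven execution ordering, not PEM's actual optimizer: on a single executor, sorting requests by loss rate over duration (Smith's rule) minimizes the total penalty-weighted completion time. The request names and numbers are invented:

```python
def schedule_by_value(requests):
    """Order process-execution requests to reduce business-value loss.
    Each request is (name, duration, loss_rate), where loss_rate is the
    business value lost per unit of elapsed time (e.g. SLA penalty accrual).
    Sorting by loss_rate/duration descending (Smith's rule) minimizes total
    weighted completion time on one executor."""
    return sorted(requests, key=lambda r: r[2] / r[1], reverse=True)

def total_value_loss(order):
    """Sum of loss_rate * completion_time over a schedule."""
    t, loss = 0.0, 0.0
    for _name, duration, rate in order:
        t += duration
        loss += rate * t
    return loss

reqs = [("billing", 4, 1.0), ("vip-report", 1, 5.0), ("batch", 8, 0.5)]
best = schedule_by_value(reqs)
print([r[0] for r in best], total_value_loss(best))
```

The value-aware order runs the short, high-penalty request first; comparing `total_value_loss(best)` against the FIFO order shows the kind of business-value saving the abstract reports.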


IEEE International Conference on Services Computing | 2009

An Optimal Capacity Planning Algorithm for Provisioning Cluster-Based Failure-Resilient Composite Services

Chun Zhang; Rong N. Chang; Chang-shing Perng; Edward So; Chunqiang Tang; Tao Tao

Resilience against unexpected server failures is a key desirable function of quality-assured service systems. A good capacity planning decision should cost-effectively allocate spare capacity for exploiting failure resilience mechanisms. In this paper, we propose an optimal capacity planning algorithm for server-cluster based service systems, particularly the ones that provision composite services via several servers. The algorithm takes into account two commonly used failure resilience mechanisms: intra-cluster load-controlling and inter-cluster failover. The goal is to minimize the resource cost while assuring service levels on the end-to-end throughput and response time of provisioned composite services under normal conditions and server failure conditions. We illustrate that the stated goal can be formalized as a capacity planning optimization problem and can be solved mathematically via convex analysis and linear optimization techniques. We also quantitatively demonstrate that the proposed algorithm can find the min-cost capacity planning solution that assures the end-to-end performance of managed composite services for both the non-failure case and the common server failure cases in a three-tier web-based service system with multiple server clusters. To the best of our knowledge, this paper presents the first research effort in optimizing the cost of supporting failure resilience for quality-assured composite services.
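The flavor of the optimization can be sketched with a brute-force stand-in for the paper's convex/LP formulation: pick the cheapest per-tier server counts that still meet end-to-end demand after any single server failure. Tier costs, capacities, and the demand figure are invented for illustration:

```python
# Toy min-cost capacity planning with single-server-failure resilience.
# Brute force replaces the paper's convex analysis / linear optimization.
from itertools import product

def plan_capacity(tiers, demand, max_servers=6):
    """tiers: list of (cost_per_server, capacity_per_server), one per tier.
    Returns (min_cost, server_counts) such that every tier still carries
    `demand` with one of its servers down (intra-cluster failover spare)."""
    best = None
    for counts in product(range(1, max_servers + 1), repeat=len(tiers)):
        if all((c - 1) * cap >= demand for c, (_, cap) in zip(counts, tiers)):
            cost = sum(c * price for c, (price, _) in zip(counts, tiers))
            if best is None or cost < best[0]:
                best = (cost, counts)
    return best

tiers = [(10, 100), (20, 250), (15, 150)]   # (cost, capacity) for web/app/db tiers
print(plan_capacity(tiers, demand=300))
```

The real algorithm additionally models response-time constraints and inter-cluster failover, which this sketch omits; it only shows why spare capacity must be priced per tier rather than globally.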


International Conference on Web Services | 2008

A Temporal Data-Mining Approach for Discovering End-to-End Transaction Flows

Ting Wang; Chang-shing Perng; Tao Tao; Chunqiang Tang; Edward So; Chun Zhang; Rong N. Chang; Ling Liu

Effective management of Web Services systems relies on accurate understanding of end-to-end transaction flows, which may change over time as the service composition evolves. This work takes a data mining approach to automatically recovering end-to-end transaction flows from (potentially obscure) monitoring events produced by monitoring tools. We classify the caller-callee relationships among monitoring events into three categories (identity, direct-invoke, and cascaded-invoke), and propose unsupervised learning algorithms to generate rules for each type of relationship. The key idea is to leverage the temporal information available in the monitoring data and extract patterns that have statistical significance. By piecing together the caller-callee relationships at each step along the invocation path, we can recover the end-to-end flow for every executed transaction. Experiments demonstrate that our algorithms outperform human experts in terms of solution quality, scale well with the data size, and are robust against noise in monitoring data.
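The temporal idea can be sketched in miniature: candidate caller-callee pairs are event types whose time intervals repeatedly nest, kept only when they recur often enough to be statistically meaningful. This is a toy version of our own devising, not the paper's algorithms, and it does not yet separate direct-invoke from cascaded-invoke:

```python
# Mine candidate caller->callee rules from (event_type, start, end) records
# by counting how often one type's interval nests inside another's.
from collections import Counter

def mine_caller_callee(events, min_support=2):
    """Return {(caller_type, callee_type)} pairs whose temporal containment
    is observed at least min_support times."""
    counts = Counter()
    for a_type, a_start, a_end in events:
        for b_type, b_start, b_end in events:
            if a_type != b_type and a_start < b_start and b_end < a_end:
                counts[(a_type, b_type)] += 1
    return {pair for pair, c in counts.items() if c >= min_support}

events = [
    ("servlet", 0.0, 10.0), ("ejb", 1.0, 4.0), ("sql", 2.0, 3.0),
    ("servlet", 20.0, 30.0), ("ejb", 21.0, 24.0), ("sql", 22.0, 23.0),
]
rules = mine_caller_callee(events)
print(rules)
```

Both transactions show `sql` nested inside `ejb` inside `servlet`, so all three containment pairs reach the support threshold; distinguishing the direct edge `ejb -> sql` from the cascaded `servlet -> sql` is exactly the classification problem the paper tackles.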


ACM Symposium on Applied Computing | 2004

XVM: XML Virtual Machine

Quanzhong Li; Michelle Y. Kim; Edward So; Steve Wood

XML is an emerging standard for data representation and data exchange on the Internet. XML-based web applications have been widely used in e-commerce and enterprise information management. In this paper, we propose an extensible, integrated XML processing architecture, the XML Virtual Machine (XVM). The XVM provides a framework for processing XML data, developing and deploying XML-based applications. By using a component-based technique, the XVM provides a high degree of modularity and reusability. XVM components are dynamically loaded and composed during XML data processing. New components can be easily added to existing applications and new applications can reuse existing components without difficulty. These features enable an XML application to keep up with requirements and schema evolution and to process compound documents. Both client-side and server-side XML applications can be developed and deployed in an integrated way. Also in this paper, we present an XML application container built on top of the XVM, along with several sample applications.


Network Operations and Management Symposium | 2012

Universal economic analysis methodology for IT transformations

Chang-shing Perng; Rong N. Chang; Tao Tao; Edward So; Mihwa Choi; Hidayatullah Shaikh

Economic analysis of the financial benefits and risks is crucial for deciding whether an IT transformation should take place. While financial analysis techniques for general investments are well known, and there have been case studies for many types of IT transformations, there is no good methodology that an IT professional can follow to readily conduct return-on-investment or total-cost analysis. This paper aims to fill this void and proposes an economic analysis methodology that is applicable to all IT transformations.
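The standard investment-analysis primitives such a methodology builds on can be stated concretely; the transformation figures below are hypothetical, purely for illustration:

```python
# Net present value and simple ROI -- the textbook building blocks of an
# IT-transformation business case.
def npv(rate, cashflows):
    """Net present value of yearly cashflows, year 0 first (negative = outlay)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def roi(investment, total_return):
    """Simple (undiscounted) return on investment."""
    return (total_return - investment) / investment

# Hypothetical transformation: 100k upfront, 40k savings/year for 4 years.
flows = [-100_000, 40_000, 40_000, 40_000, 40_000]
print(f"NPV @ 8%: {npv(0.08, flows):,.0f}")
print(f"simple ROI: {roi(100_000, 160_000):.0%}")
```

A positive NPV at the chosen discount rate is the usual go/no-go signal; the methodology's contribution is in how IT-specific costs, benefits, and risks are identified and fed into such formulas.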


ACM Multimedia | 2003

Interleaving media data for MPEG-4 presentations

Jeff Boston; Michelle Y. Kim; William L. Luken; Edward So; Steve Wood

A composite multimedia presentation may be represented by a sequence of virtual media data packets. An algorithm is presented for ordering these virtual media data packets so as to minimize the initial delay required to transfer the composite stream from a media server to a client. This algorithm has been implemented as part of the IBM Toolkit for MPEG-4.
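A simplified model of the ordering problem, assuming a constant transfer rate and earliest-deadline-first ordering (our assumptions, not necessarily the toolkit's exact algorithm):

```python
# Order media packets by decode deadline and compute the minimal initial
# buffering delay so every packet arrives before it is needed.
def min_startup_delay(packets, bandwidth):
    """packets: list of (size_bytes, decode_deadline_seconds).
    Returns (transmission_order, minimal_startup_delay_seconds)."""
    order = sorted(packets, key=lambda p: p[1])        # earliest deadline first
    sent, delay = 0.0, 0.0
    for size, deadline in order:
        sent += size
        # This packet finishes arriving sent/bandwidth after the stream starts;
        # the stream must begin `delay` before playback for it to be on time.
        delay = max(delay, sent / bandwidth - deadline)
    return order, delay

packets = [(500, 1.0), (1000, 0.0), (250, 2.0)]        # (bytes, seconds)
order, delay = min_startup_delay(packets, bandwidth=1000.0)
print(order, delay)
```

With an exchange argument one can show the deadline-sorted order never needs more startup delay than any other order, which is the sense in which interleaving by deadline minimizes the initial delay the abstract refers to.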
