

Publication


Featured research published by James Zu-chia Teng.


IBM Systems Journal | 1984

Managing IBM database 2 buffers to maximize performance

James Zu-chia Teng; Robert A. Gumaer

The relational data base system, IBM Database 2 (DB2), has a component that manages data buffering. This paper describes the design considerations of the Buffer Manager and the tradeoffs involved in managing the allocation of DB2 buffers to maximize performance.
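The page-buffering idea the abstract describes can be illustrated with a minimal sketch. This is not DB2's Buffer Manager: the `BufferPool` class, its LRU replacement rule, and all names here are hypothetical, chosen only to show the basic trade-off of holding hot pages in memory and evicting cold ones.

```python
# Illustrative sketch only: a tiny LRU buffer pool. Names and the LRU policy
# are assumptions for the example, not DB2's actual buffer management.
from collections import OrderedDict

class BufferPool:
    def __init__(self, capacity):
        self.capacity = capacity      # number of in-memory page frames
        self.frames = OrderedDict()   # page_id -> page data, LRU order
        self.hits = 0
        self.misses = 0

    def get_page(self, page_id, read_from_disk):
        if page_id in self.frames:
            self.frames.move_to_end(page_id)  # mark most recently used
            self.hits += 1
            return self.frames[page_id]
        self.misses += 1
        if len(self.frames) >= self.capacity:
            self.frames.popitem(last=False)   # evict least recently used
        self.frames[page_id] = read_from_disk(page_id)
        return self.frames[page_id]

pool = BufferPool(capacity=2)
fake_disk = lambda pid: f"page-{pid}"
for pid in [1, 2, 1, 3, 1]:   # re-referencing page 1 produces buffer hits
    pool.get_page(pid, fake_disk)
```

With only two frames, the repeated references to page 1 hit in the buffer while page 2 is evicted, which is the kind of allocation trade-off the paper analyzes.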


IBM Systems Journal | 1997

DB2's use of the coupling facility for data sharing

Jeffrey William Josten; C. Mohan; Inderpal Narang; James Zu-chia Teng

We examine the problems encountered in extending DATABASE 2™ (DB2®) for Multiple Virtual Storage/Enterprise Systems Architecture (MVS/ESA™), also called DB2 for OS/390™, an industrial-strength relational database management system originally designed for a single-system environment, to support the multisystem shared-data architecture. The multisystem data sharing function was delivered in DB2 Version 4. DB2 data sharing requires an S/390® Parallel Sysplex™ environment because DB2's use of the coupling facility technology plays a central role in delivering highly efficient and scalable data sharing functions. We call this the shared-data architecture because the coupling facility is the unique feature that it employs.


Measurement and Modeling of Computer Systems | 1993

Performance comparison of thrashing control policies for concurrent Mergesorts with parallel prefetching

Kun-Lung Wu; Philip S. Yu; James Zu-chia Teng

We study the performance of various run-time thrashing control policies for the merge phase of concurrent mergesorts using parallel prefetching, where initial sorted runs are stored on multiple disks and the final sorted run is written back to another dedicated disk. Parallel prefetching via multiple disks can be attractive in reducing the response times for concurrent mergesorts. However, severe thrashing may develop due to imbalances between input and output rates, and thus a large number of prefetched pages in the buffer can be replaced before being referenced. We evaluate through detailed simulations three run-time thrashing control policies: (a) disabling prefetching, (b) forcing synchronous writes, and (c) lowering the prefetch quantity in addition to forcing synchronous writes. The results show that (1) thrashing resulting from parallel prefetching can severely degrade the system response time; (2) though effective in reducing the degree of thrashing, disabling prefetching may worsen the response time, since more synchronous reads are needed; (3) forcing synchronous writes can both reduce thrashing and improve the response time; and (4) lowering the prefetch quantity in addition to forcing synchronous writes is most effective in reducing thrashing and improving the response time.
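The three policies compared above can be sketched as a simple dispatch that adjusts prefetch behavior once thrashing has been detected. The policy names, the `apply_policy` function, and the halving rule in case (c) are illustrative assumptions, not the paper's actual simulator.

```python
# Hypothetical sketch of the three run-time thrashing control policies
# compared in the paper; names and parameters are illustrative only.

def apply_policy(policy, state):
    """Return (prefetch_quantity, synchronous_writes) under a policy.

    state["prefetch_quantity"] is the current number of pages prefetched
    per request; thrashing is assumed to have already been detected.
    """
    q = state["prefetch_quantity"]
    if policy == "disable_prefetch":
        return 0, False                  # (a) stop prefetching entirely
    if policy == "force_sync_writes":
        return q, True                   # (b) throttle output via sync writes
    if policy == "sync_writes_and_lower_prefetch":
        return max(1, q // 2), True      # (c) also reduce prefetch quantity
    return q, False                      # no thrashing control
```

Policy (a) trades away prefetching's benefit entirely, while (b) and (c) rebalance the input and output rates, which matches the paper's finding that (c) is the most effective.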


International Conference on Data Engineering | 1994

Data placement and buffer management for concurrent mergesorts with parallel prefetching

Kun Lung Wu; Philip S. Yu; James Zu-chia Teng

Various data placement policies are studied for the merge phase of concurrent mergesorts using parallel prefetching, where the initial sorted runs (input) of a merge and its final sorted run (output) are stored on multiple disks but each run resides only on a single disk. Since the merge phase involves only sequential references, parallel prefetching can be attractive in reducing the average response time for concurrent merges. However, without careful buffer control, severe thrashing may develop under certain run placement policies, reducing the benefits of prefetching. The authors examine through detailed simulations three different run placement policies. The results show that even though buffer thrashing can be almost avoided by placing the output run of a job on the same disk with at least one of its input runs, this thrashing-avoiding run placement policy can be substantially outperformed by other policies that use buffer thrashing control. With buffer thrashing control, the best performance is achieved by a run placement policy that uses a proper subset of disks dedicated for writing the output runs while the rest of the disks are used for prefetching the input runs in parallel.


Distributed and Parallel Databases | 2000

Workfile Disk Management for Concurrent Mergesorts in a Multiprocessor Database System

Kun Lung Wu; Philip S. Yu; Jen Yao Chung; James Zu-chia Teng

This paper studies workfile disk management for concurrent mergesorts in a multiprocessor database system. Specifically, we examine the impacts of workfile disk allocation and data striping on the average mergesort response time. Concurrent mergesorts in a multiprocessor system can create severe I/O interference, in which a large number of sequential write requests are continuously issued to the same workfile disk and block other read requests for a long period of time. We examine through detailed simulations a logical partitioning approach to workfile disk management and evaluate the effectiveness of data striping. The results show that (1) without data striping, the best performance is achieved by using the entire set of workfile disks as a single partition if there are abundant workfile disks (or the system workload is light); (2) however, if there are limited workfile disks (or the system workload is heavy), the workfile disks should be partitioned into multiple groups, and the optimal partition size is workload dependent; (3) data striping is beneficial only if the striping unit size is properly chosen; and (4) with a proper striping size, the best performance is generally achieved by using the entire set of disks as a single logical partition.
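The striping scheme discussed above can be illustrated with a small sketch that maps a workfile page to a disk within one logical partition group. The round-robin mapping, the `disk_for_page` function, and its parameters are assumptions made for the example, not the paper's simulator.

```python
# Illustrative only: round-robin data striping of fixed-size units across
# the disks of one logical partition group. All names are hypothetical.

def disk_for_page(page_no, striping_unit, group_disks):
    """Return the disk (from group_disks) holding a given workfile page.

    page_no:       0-based page number within the workfile
    striping_unit: number of consecutive pages per striping unit
    group_disks:   list of disk ids in this logical partition group
    """
    unit_no = page_no // striping_unit
    return group_disks[unit_no % len(group_disks)]

# With a 4-page striping unit across disks [0, 1, 2]:
# pages 0-3 land on disk 0, pages 4-7 on disk 1, pages 8-11 on disk 2,
# and page 12 wraps back around to disk 0.
```

The striping unit size controls the trade-off the paper measures: small units spread sequential writes across disks (reducing interference), while very small units sacrifice sequential transfer efficiency.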


Knowledge and Information Systems | 1999

Run Placement Policies for Concurrent Mergesorts Using Parallel Prefetching

Kun-Lung Wu; Philip S. Yu; James Zu-chia Teng

We study the performance of various run placement policies on disks for the merge phase of concurrent mergesorts using parallel prefetching. The initial sorted runs (input) of a merge and its final sorted run (output) are stored on multiple disks but each run resides only on a single disk. In this paper, we examine through detailed simulations three different run placement policies and the impact of buffer thrashing. The results show that, with buffer thrashing avoidance, the best performance can be achieved by a run placement policy that uses a proper subset of the disks dedicated for writing the output runs while the rest of the disks are used for prefetching the input runs in parallel. However, the proper number of write disks is workload dependent, and if not carefully chosen, it can adversely affect the system performance. In practice, a reasonably good performance can be achieved by a run placement policy that does not place the output run of a merge on any of the disks that store its own input runs but allows the output run to share the same disk with some of the input runs of other merges.
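The practical rule described at the end of this abstract, placing a merge's output run on a disk that holds none of its own input runs while allowing it to share a disk with other merges' input runs, can be sketched as a simple selection function. The function name and fallback behavior are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the run placement rule described above; names are
# hypothetical and the fallback choice is an assumption for the example.

def place_output_run(all_disks, own_input_disks):
    """Pick a disk for a merge's output run, avoiding its own input disks."""
    candidates = [d for d in all_disks if d not in set(own_input_disks)]
    # Fall back to an arbitrary disk if every disk holds one of our inputs.
    return candidates[0] if candidates else all_disks[0]
```

For example, a merge reading its input runs from disks 1 and 2 out of disks 0 to 3 would write its output run to disk 0, even if disk 0 holds input runs belonging to other concurrent merges.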


Archive | 1991

Method for managing database recovery from failure of a shared store in a system including a plurality of transaction-based systems of the write-ahead logging type

C. Mohan; Inderpal Narang; James Zu-chia Teng


Archive | 1995

Efficient data base access using a shared electronic store in a multi-system environment with shared disks

Jeffrey William Josten; Tina Louise Masatani; C. Mohan; Inderpal Narang; James Zu-chia Teng


Archive | 1995

Query parallelism in a shared data DBMS system

William Robert Bireley; Tammie Dang; Paramesh S. Desai; Donald J. Haderle; Fen-Ling Lin; Maureen Mae McDevitt; Akira Shibamiya; Bryan Frederick Smith; James Zu-chia Teng; Hong Sang Tie; Yun Wang; Jerome Quan Wong; Kathryn Ruth Zeidenstein; Kou Horng Allen Yang


Archive | 1994

Goal-oriented resource allocation manager and performance index technique for servers

Jen-Yao Chung; Donald F. Ferguson; Christos Nikolaou; James Zu-chia Teng; George W. Wang

