Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jyh-Biau Chang is active.

Publication


Featured research published by Jyh-Biau Chang.


The Journal of Supercomputing | 2006

A Transparent Distributed Shared Memory for Clustered Symmetric Multiprocessors

Jyh-Biau Chang; Ce-Kuen Shieh; Tyng-Yeu Liang

A transparent distributed shared memory (DSM) system must achieve complete transparency in data distribution, workload distribution, and reconfiguration. The transparency of data distribution allows programmers to access and allocate shared data through the same user interface as is used in shared-memory systems. The transparency of workload distribution and reconfiguration can optimize parallelism at both the user level and the kernel level, and also improve the efficiency of run-time reconfiguration. In this paper, a transparent DSM system referred to as Teamster is proposed and implemented for clustered symmetric multiprocessors. With the transparency provided by Teamster, programmers can exploit all the computing power of the clustered SMP nodes as transparently as they would on a single SMP computer. Compared with the results of previous research, Teamster realizes the transparency of cluster computing while obtaining satisfactory system performance.
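The data-distribution transparency described above means a parallel program reads and writes shared variables directly, with no explicit message passing. The following Python sketch only illustrates that shared-memory programming model; Teamster itself exposes a C shared-memory interface on clustered SMP nodes, not this API.

```python
import threading

# Illustration of the shared-memory programming model that a transparent
# DSM preserves: worker threads access shared data directly, as they
# would on a single SMP machine. (Hypothetical sketch, not Teamster's API.)

shared = {"sum": 0}          # stands in for DSM-managed shared data
lock = threading.Lock()      # stands in for a DSM synchronization object

def worker(chunk):
    partial = sum(chunk)     # local computation on a slice of the data
    with lock:               # synchronized update of the shared variable
        shared["sum"] += partial

data = list(range(100))
threads = [threading.Thread(target=worker, args=(data[i::4],)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["sum"])  # 4950, the same result as a sequential sum
```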


cluster computing and the grid | 2001

Teamster: a transparent distributed shared memory for cluster symmetric multiprocessors

Jyh-Biau Chang; Ce-Kuen Shieh

Teamster is a transparent DSM system built on a cluster of symmetric x86 multiprocessors connected by 100 Mb Fast Ethernet. Teamster has a hybrid thread architecture, so a programmer can parallelize an application without concern for the underlying hardware configuration. The Global Memory Image (GMI) of Teamster provides a truly global memory image: all shared data and synchronization objects are placed into the GMI. Because these data and objects are declared and initialized at the beginning, the linker of the operating system helps form this image, and no explicit annotations are needed to propagate modifications of global static variables. With the support of the hybrid thread architecture and the Global Memory Image, Teamster provides a truly transparent DSM on a cluster of SMP computers. Moreover, in our measurements, the overhead of creating additional application threads and supporting the Global Memory Image does not affect the performance of Teamster.


middleware for grid computing | 2006

A multi-layer resource reconfiguration framework for grid computing

Po-Cheng Chen; Jyh-Biau Chang; Tyng-Yeu Liang; Ce-Kuen Shieh; Yi-Chang Zhuang

Grid is a non-dedicated and dynamic computing environment. Consequently, different programs have to compete with each other for the same resources, and resource availability varies over time. This causes the performance of user programs to degrade and to become unpredictable. To resolve this problem, we propose a multi-layer resource reconfiguration framework for grid computing. As its name suggests, this framework adopts different resource reconfiguration mechanisms for different resource workloads. We have implemented this framework on a grid-enabled DSM system called Teamster-G. Our experimental results show that the proposed framework allows Teamster-G not only to fully utilize abundant CPU cycles but also to minimize resource contention between the jobs of resource consumers and those of resource providers. As a result, the job throughput of Teamster-G is effectively increased.
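The idea of choosing a different reconfiguration mechanism per resource workload can be sketched as a simple policy function. The tiers, threshold values, and action names below are illustrative assumptions, not the exact policy used in Teamster-G.

```python
# Hypothetical sketch of a multi-layer reconfiguration policy: the
# mechanism applied to a node depends on the workload its owner is
# currently placing on it. Thresholds and tiers are assumptions for
# illustration only.

def choose_reconfiguration(owner_cpu_load):
    """Map an owner's CPU load (0.0-1.0) to a reconfiguration action."""
    if owner_cpu_load < 0.1:
        return "use-all-processors"   # node is idle: run guest threads everywhere
    elif owner_cpu_load < 0.6:
        return "shrink-thread-count"  # share the node: yield some processors
    else:
        return "withdraw-node"        # owner needs the node: migrate work away

for load in (0.05, 0.3, 0.9):
    print(load, choose_reconfiguration(load))
```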


Future Generation Computer Systems | 2007

A grid-enabled software distributed shared memory system on a wide area network

Tyng-Yeu Liang; Chun-Yi Wu; Ce-Kuen Shieh; Jyh-Biau Chang

This study implements a grid-enabled software distributed shared memory (SDSM) system called Teamster-G on a wide area network (WAN). With the support of Teamster-G, users can develop applications on computational grids by means of shared variables. When they wish to execute their applications, Teamster-G provides a transparent resource allocation service for the execution of the programs. To minimize the turnaround time, Teamster-G employs a session-oriented protocol to reduce the cost of resource allocation, and a two-level consistency protocol to minimize the cost of maintaining data consistency over the WAN. This paper presents the framework of Teamster-G and discusses its performance.
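The two-level consistency idea, keeping consistency traffic inside each fast LAN and deferring WAN traffic to synchronization points, can be modeled with a toy example. The class names and structure below are illustrative assumptions, not Teamster-G's implementation.

```python
# Toy model of a two-level consistency protocol: replicas inside a
# cluster are updated eagerly over the fast LAN, while updates cross
# the WAN only at explicit synchronization points. (Hypothetical
# sketch, not Teamster-G's actual protocol code.)

class Cluster:
    def __init__(self, nodes):
        # One dict per node stands in for that node's copy of shared memory.
        self.replicas = [dict() for _ in range(nodes)]

    def write(self, key, value):
        # Intra-cluster level: propagate the write to every local replica.
        for replica in self.replicas:
            replica[key] = value

def wan_sync(src, dst):
    # Inter-cluster level: batch the source cluster's state across the
    # WAN only when the program reaches a synchronization point.
    for key, value in src.replicas[0].items():
        dst.write(key, value)

a, b = Cluster(2), Cluster(2)
a.write("x", 1)
print(b.replicas[0])   # {}  (remote cluster not yet updated)
wan_sync(a, b)
print(b.replicas[0])   # {'x': 1}
```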


Applied Physics Letters | 2003

Observation of self-organized superlattice in AlGaInAsSb pentanary alloys

D. H. Jaw; Jyh-Biau Chang; Yan-Kuin Su

An unexpected self-organized superlattice structure has been observed in the AlGaInAsSb pentanary alloys grown by metalorganic vapor-phase epitaxy. The samples were studied by transmission electron microscopy, double-crystal x-ray diffraction, and secondary ion mass spectrometry measurements. The modulation strength and period of the self-organized superlattice are correlated to the alloy composition.


advanced information networking and applications | 2008

A Performance Study of Virtual Machine Migration vs. Thread Migration for Grid Systems

Po-Cheng Chen; Cheng-I Lin; Sheng-Wei Huang; Jyh-Biau Chang; Ce-Kuen Shieh; Tyng-Yeu Liang

Grid computing integrates abundant distributed resources into a single large-scale problem-solving environment for parallel applications. However, the grid is a non-dedicated and dynamic computing environment. Grid applications consequently compete with each other for non-dedicated shared resources; moreover, shared resources may be reclaimed by their owners according to administration policies, e.g. scheduled maintenance. Job migration mechanisms that take the non-dedicated and dynamic nature of grids into consideration therefore become important for optimizing application performance. This study presents experiments on two job migration mechanisms: virtual machine migration and node reconfiguration by thread migration. We completed experiments in both LAN and WAN scenarios with a page-based grid-enabled DSM system, Teamster-G. The experimental results suggest that the performance of virtual machine migration competes with node reconfiguration on equal terms; further, they demonstrate the potential of virtual machine techniques in the grid environment.


grid and pervasive computing | 2010

Variable-Sized map and locality-aware reduce on public-resource grids

Po-Cheng Chen; Yen-Liang Su; Jyh-Biau Chang; Ce-Kuen Shieh

This paper presents a grid-enabled MapReduce framework called Ussop. Ussop provides its users with a set of C-language MapReduce APIs and an efficient runtime system for exploiting the computing resources available on public-resource grids. Considering the volatile nature of the grid environment, Ussop introduces two novel task scheduling algorithms: Variable-Sized Map Scheduling (VSMS) and Locality-Aware Reduce Scheduling (LARS). VSMS dynamically adjusts the size of map tasks according to the computing power of grid nodes, while LARS minimizes the data transfer cost of exchanging intermediate data over a wide-area network. The experimental results indicate that both VSMS and LARS achieve superior performance compared to conventional scheduling algorithms.
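The two scheduling ideas can be sketched in a few lines each. The input shapes (relative node speeds, per-node intermediate data sizes) and function names are illustrative assumptions, not Ussop's actual API.

```python
# Sketches of the two Ussop scheduling ideas under assumed inputs;
# hypothetical helper functions, not the framework's C API.

def vsms_task_sizes(total_records, node_speeds):
    """Variable-Sized Map Scheduling: give each node a map task whose
    size is proportional to its relative computing power."""
    total_speed = sum(node_speeds)
    return [total_records * s // total_speed for s in node_speeds]

def lars_assign(intermediate_bytes_per_node):
    """Locality-Aware Reduce Scheduling: run a reduce task on the node
    already holding the most intermediate data for it, so the least
    data crosses the wide-area network."""
    return max(intermediate_bytes_per_node, key=intermediate_bytes_per_node.get)

print(vsms_task_sizes(1000, [1, 3, 6]))          # [100, 300, 600]
print(lars_assign({"nodeA": 50, "nodeB": 700}))  # nodeB
```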


Journal of Systems and Software | 2001

Proteus: an efficient runtime reconfigurable distributed shared memory system

Jyh-Chang Ueng; Ce-Kuen Shieh; Tyng-Yue Liang; Jyh-Biau Chang

This paper describes Proteus, a distributed shared memory (DSM) system which supports runtime node reconfiguration. Proteus allows users to change the node set during the execution of a DSM program. The capability of node addition allows users to further shorten the execution time of their DSM programs by dynamically adding newly available nodes to the system. Furthermore, competition for resources between system users and computer owners can be avoided by dynamically deleting nodes from the system. To make the system adapt to the node configuration efficiently, Proteus employs several techniques, including adaptive workload redistribution, affinity page movement, and forced update. Proteus supports both sequential consistency and release consistency. It provides an object-oriented parallel programming environment. This paper describes the design and implementation of node reconfiguration in Proteus, and presents the performance of the system. Experimental results indicate that Proteus can further improve the performance of the tested programs by taking advantage of node reconfiguration. Our results further demonstrate that the techniques employed in Proteus minimize communication and overhead.
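Node addition and deletion with workload redistribution can be sketched minimally as follows. The structure is a hypothetical illustration of rebalancing over a changing node set, not the paper's implementation (which operates on DSM pages and threads rather than a Python list).

```python
# Minimal sketch of runtime node reconfiguration with workload
# redistribution, in the spirit of Proteus. Hypothetical structure
# for illustration only.

def redistribute(work_items, nodes):
    """Spread work items evenly over the current node set."""
    assignment = {n: [] for n in nodes}
    for i, item in enumerate(work_items):
        assignment[nodes[i % len(nodes)]].append(item)
    return assignment

work = list(range(8))
nodes = ["n0", "n1"]
plan = redistribute(work, nodes)

nodes.append("n2")                 # node addition: a new node becomes available
plan = redistribute(work, nodes)   # work is rebalanced over 3 nodes

nodes.remove("n0")                 # node deletion: owner reclaims the machine
plan = redistribute(work, nodes)
print({n: len(items) for n, items in plan.items()})  # {'n1': 4, 'n2': 4}
```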


Applied Physics Letters | 1999

Measurement of AlInAsSb/GaInAsSb heterojunction band offset by photoluminescence spectroscopy

Jyh-Biau Chang; Yan-Kuin Su; C. L. Lin; Kuo-Ming Wu; W. C. Huang; Yalin Lu; D. H. Jaw; W. L. Li; Szu-Chao Chen

We have grown unstrained Al0.66In0.34As0.85Sb0.15/Ga0.64In0.36As0.84Sb0.16 multiple-quantum-well (MQW) structures on InP substrates by metalorganic vapor phase epitaxy. Low-temperature photoluminescence was performed for these MQW structures. By comparing the luminescence peak energies with the theoretical calculations, we estimated the conduction-band offset ratio to be 0.75±0.10 for the Al0.66In0.34As0.85Sb0.15/Ga0.64In0.36As0.84Sb0.16 heterostructure.


international conference on parallel and distributed systems | 1998

An efficient thread architecture for a distributed shared memory on symmetric multiprocessor clusters

Jyh-Biau Chang; Y. J. Tsai; Ce-Kuen Shieh; P. C. Chung

The purpose of this paper is to demonstrate an efficient thread architecture for a distributed shared memory (DSM) system on symmetric multiprocessor (SMP) clusters. For DSM systems on SMPs, utilizing the processors efficiently without wasting available computational power is a major issue. We discuss three approaches that map application threads onto execution entities using the process, the kernel-level thread, and the user-level thread respectively. Considering the advantages and disadvantages of each method, we construct our thread package by combining both the user-level thread and the kernel-level thread. User-level threads correspond to application threads, and kernel-level threads schedule these user-level threads across multiple processors. Threads in our package are lightweight and can be migrated. With this thread architecture, our DSM system performs well in elementary experiments.
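The hybrid architecture above is an M:N design: many user-level threads multiplexed onto a few kernel-level threads, one per processor. The Python model below illustrates that multiplexing only; the paper's package is a C-level thread library, not this code.

```python
import queue
import threading

# Illustrative M:N model: 16 "user-level threads" (lightweight tasks)
# are multiplexed onto 4 "kernel-level threads" (one per processor).
# Hypothetical sketch, not the paper's thread package.

tasks = queue.Queue()
for i in range(16):              # enqueue the user-level threads
    tasks.put(i)

results = []
results_lock = threading.Lock()

def kernel_thread():
    # Each kernel-level thread repeatedly picks a runnable
    # user-level thread and executes it to completion.
    while True:
        try:
            i = tasks.get_nowait()
        except queue.Empty:
            return
        with results_lock:
            results.append(i * i)

workers = [threading.Thread(target=kernel_thread) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sorted(results))  # squares of 0..15
```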

Collaboration


Dive into Jyh-Biau Chang's collaborations.

Top Co-Authors

Ce-Kuen Shieh
National Cheng Kung University

Tyng-Yeu Liang
National Kaohsiung University of Applied Sciences

Po-Cheng Chen
National Cheng Kung University

Yan-Kuin Su
National Cheng Kung University

Yi-Chang Zhuang
National Cheng Kung University

Yen-Liang Su
National Cheng Kung University

Chun-Yi Wu
National Cheng Kung University

Szu-Chao Chen
National Cheng Kung University