Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Tyng-Yeu Liang is active.

Publication


Featured research published by Tyng-Yeu Liang.


Cluster Computing and the Grid | 2005

Teamster-G: a grid-enabled software DSM system

Tyng-Yeu Liang; Chun-Yi Wu; Jyh-Biau Chang; Ce-Kuen Shieh

Providing users with a familiar programming tool for developing applications is an important issue in grid computing. Currently, much effort has been put into grid implementations of MPI, Java, and RPC. However, little work has been done on enabling software distributed shared memory (DSM) systems in the grid environment, even though software DSM offers an easier programming interface than these alternatives. To simplify programming in grids, we have developed Teamster-G, a grid-enabled software DSM system that allows users to run their DSM programs on a virtual dedicated homogeneous cluster formed by coupling multiple computers physically distributed at the same or different sites. In this paper, we present the framework of Teamster-G and discuss the preliminary results of its performance evaluation.


The Journal of Supercomputing | 2006

A Transparent Distributed Shared Memory for Clustered Symmetric Multiprocessors

Jyh-Biau Chang; Ce-Kuen Shieh; Tyng-Yeu Liang

A transparent distributed shared memory (DSM) system must achieve complete transparency in data distribution, workload distribution, and reconfiguration. The transparency of data distribution allows programmers to access and allocate shared data using the same interface as in shared-memory systems. The transparency of workload distribution and reconfiguration can optimize parallelism at both the user level and the kernel level, and also improve the efficiency of run-time reconfiguration. In this paper, a transparent DSM system referred to as Teamster is proposed and implemented for clustered symmetric multiprocessors. With the transparency provided by Teamster, programmers can exploit all the computing power of the clustered SMP nodes in a transparent way, as they would on a single SMP computer. Compared with the results of previous studies, Teamster realizes the transparency of cluster computing while obtaining satisfactory system performance.


Middleware for Grid Computing | 2006

A multi-layer resource reconfiguration framework for grid computing

Po-Cheng Chen; Jyh-Biau Chang; Tyng-Yeu Liang; Ce-Kuen Shieh; Yi-Chang Zhuang

The grid is a non-dedicated and dynamic computing environment. Consequently, different programs must compete with each other for the same resources, and resource availability varies over time. This causes the performance of user programs to degrade and become unpredictable. To resolve this problem, we propose a multi-layer resource reconfiguration framework for grid computing. As its name suggests, this framework adopts different resource reconfiguration mechanisms for different resource workloads. We have implemented this framework on a grid-enabled DSM system called Teamster-G. Our experimental results show that the proposed framework allows Teamster-G not only to fully utilize abundant CPU cycles but also to minimize resource contention between the jobs of resource consumers and those of resource providers. As a result, the job throughput of Teamster-G is effectively increased.


Future Generation Computer Systems | 2007

A grid-enabled software distributed shared memory system on a wide area network

Tyng-Yeu Liang; Chun-Yi Wu; Ce-Kuen Shieh; Jyh-Biau Chang

This study implements a grid-enabled software distributed shared memory (SDSM) system called Teamster-G on a wide area network (WAN). With the support of Teamster-G, users can develop applications on computational grids by means of shared variables. When they wish to execute their applications, Teamster-G provides a transparent resource allocation service for the execution of the programs. To minimize the turnaround time, Teamster-G employs a session-oriented protocol to reduce the cost of resource allocation, and a two-level consistency protocol to minimize the cost of maintaining data consistency over the WAN. This paper presents the framework of Teamster-G and discusses its performance.


Advanced Information Networking and Applications | 2008

A Performance Study of Virtual Machine Migration vs. Thread Migration for Grid Systems

Po-Cheng Chen; Cheng-I Lin; Sheng-Wei Huang; Jyh-Biau Chang; Ce-Kuen Shieh; Tyng-Yeu Liang

Grid computing integrates abundant distributed resources into a single large-scale problem-solving environment for parallel applications. However, the grid is a non-dedicated and dynamic computing environment. Grid applications consequently compete with each other for non-dedicated shared resources; moreover, shared resources may be reclaimed by their owners according to administration policies, e.g., scheduled maintenance. Job migration mechanisms that take the non-dedicated and dynamic nature of grids into consideration therefore become important for optimizing application performance. This study presents experiments on two job migration mechanisms: virtual machine migration and node reconfiguration by thread migration. We completed experiments in both LAN and WAN scenarios with a page-based grid-enabled DSM system, Teamster-G. The experimental results suggest that virtual machine migration competes with node reconfiguration on equal terms, and further demonstrate the potential of virtual machine techniques in the grid environment.


Computer Communications | 1999

A Hopfield neural network based task mapping method

Wanlei Zhu; Tyng-Yeu Liang; Ce-Kuen Shieh

With prior knowledge of a program, static mapping aims to identify an optimal clustering strategy that produces the best performance. In this paper we present a static method that uses a Hopfield neural network to cluster the tasks of a parallel program for a given system. The method takes into account both load balancing and communication minimization. It has been tested on a distributed shared memory system against three other clustering methods. Four programs, SOR, N-body, Gaussian Elimination, and VQ, were used in the test. The results show that our method is superior to the other three.
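
A toy sketch of the kind of objective such a task-mapping method minimizes (this is an illustration, not the paper's Hopfield formulation): total cost is the communication between tasks placed on different nodes plus a load-imbalance penalty; the `alpha` weight and the exhaustive search below are invented for the example.

```python
# Illustrative task-mapping objective: cross-node communication plus
# a load-imbalance penalty, minimized by brute force over assignments.
import itertools

def mapping_cost(assign, comm, loads, nodes, alpha=1.0):
    """assign: task -> node; comm: (t1, t2) -> traffic; loads: task -> work."""
    cut = sum(w for (a, b), w in comm.items() if assign[a] != assign[b])
    per_node = [sum(loads[t] for t in assign if assign[t] == n)
                for n in range(nodes)]
    imbalance = max(per_node) - min(per_node)
    return cut + alpha * imbalance

def best_mapping(tasks, comm, loads, nodes):
    """Exhaustive search over assignments (fine for a handful of tasks)."""
    return min(
        (dict(zip(tasks, combo))
         for combo in itertools.product(range(nodes), repeat=len(tasks))),
        key=lambda a: mapping_cost(a, comm, loads, nodes),
    )

tasks = ["a", "b", "c", "d"]
comm = {("a", "b"): 5, ("c", "d"): 5, ("b", "c"): 1}
loads = {t: 1 for t in tasks}
m = best_mapping(tasks, comm, loads, 2)
# heavy communicators a/b and c/d end up co-located, load stays balanced
```

A Hopfield network attacks the same trade-off by encoding it as an energy function and letting the network settle, which scales better than enumeration.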


International Parallel and Distributed Processing Symposium | 2012

Enabling Mixed OpenMP/MPI Programming on Hybrid CPU/GPU Computing Architecture

Tyng-Yeu Liang; Hung-Fu Li; Jun-Yao Chiu

The hybrid CPU/GPU computing architecture has recently become an alternative platform for high-performance computing. It provides massive computational power with lower energy consumption and economic cost than the traditional CPU-only architecture. However, the complexity of GPU programming is too high for many users to move their applications to this hybrid architecture. To resolve this problem, we propose a framework called OMPICUDA that lets users develop parallel applications on hybrid CPU/GPU clusters by mixing the APIs of OpenMP and MPI. Furthermore, the framework allows users to select GPUs or CPUs for the execution of different parallel regions in the same program according to the properties of those regions, and supports resource reallocation based on the states of the CPUs and GPUs.
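
The per-region device choice described above can be sketched as a simple heuristic; the function name, region attributes, and threshold below are hypothetical, invented purely to illustrate the idea of routing data-parallel regions to the GPU and the rest to the CPU.

```python
# Hypothetical sketch of per-parallel-region device selection
# (not OMPICUDA's actual API): large data-parallel regions go to
# the GPU, everything else stays on the CPU.
def pick_device(region):
    """Return 'gpu' for large data-parallel regions, 'cpu' otherwise."""
    if region["data_parallel"] and region["iterations"] >= 10_000:
        return "gpu"
    return "cpu"

regions = [
    {"name": "init",    "data_parallel": False, "iterations": 100},
    {"name": "stencil", "data_parallel": True,  "iterations": 1_000_000},
]
print([(r["name"], pick_device(r)) for r in regions])
# → [('init', 'cpu'), ('stencil', 'gpu')]
```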


Journal of Systems and Software | 2000

Distinguishing sharing types to minimize communication in software distributed shared memory systems

Tyng-Yeu Liang; Jyh-Chang Ueng; Ce-Kuen Shieh; Deh-Yuan Chuang; Jun-Qi Lee

Using thread migration to redistribute threads to processors is a common scheme for minimizing the communication needed to maintain data consistency in software distributed shared memory (DSM) systems. To minimize data-consistency communication, the number of shared pages is used to identify the pair of threads that will cause the most communication. This pair of threads is then co-located on the same node. Thread pairs sharing a given page can be classified into three types, i.e., read/read (r/r), read/write (r/w) and write/write (w/w). Under the memory-consistency protocol, these three types of sharing generate distinct amounts of data-consistency communication. Ignoring this factor leads to mispredicting the amount of communication caused by cross-node sharing and to wrong thread-migration decisions. This paper presents a new policy called distinguishing of types sharing (DOTS) for DSM systems. The basic concept of this policy is to classify sharing among threads as r/r, r/w or w/w, each with a different weight, and then evaluate communication cost in terms of these weights. Experiments show that considering sharing types is necessary for minimizing data-consistency communication in DSM. Using DOTS for thread mapping reduces communication more than considering only the number of shared pages.
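
The weighting idea can be sketched in a few lines; the weight values and data layout below are hypothetical placeholders, not the paper's measured costs.

```python
# Illustrative sketch (not the authors' implementation): weight
# cross-node sharing by type before choosing which threads to
# co-locate. Weight values are invented for the example.
WEIGHTS = {("r", "r"): 1, ("r", "w"): 4, ("w", "w"): 8}

def pair_cost(accesses_a, accesses_b):
    """Estimate consistency traffic for one thread pair.

    Each argument maps page id -> 'r' or 'w' (the thread's dominant
    access mode on that page).
    """
    cost = 0
    for page, mode_a in accesses_a.items():
        mode_b = accesses_b.get(page)
        if mode_b is not None:
            cost += WEIGHTS[tuple(sorted((mode_a, mode_b)))]
    return cost

def heaviest_pair(threads):
    """Return the thread pair predicted to cause the most traffic."""
    ids = list(threads)
    return max(
        ((a, b) for i, a in enumerate(ids) for b in ids[i + 1:]),
        key=lambda p: pair_cost(threads[p[0]], threads[p[1]]),
    )

threads = {
    "t0": {1: "w", 2: "r"},
    "t1": {1: "w", 3: "r"},
    "t2": {2: "r", 3: "r"},
}
print(heaviest_pair(threads))  # → ('t0', 't1'): they share page 1 write/write
```

A page-count-only policy would rate all three pairs equally here (one shared page each); the type weights are what single out the w/w pair for co-location.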


The Journal of Supercomputing | 2013

A compound OpenMP/MPI program development toolkit for hybrid CPU/GPU clusters

Hung-Fu Li; Tyng-Yeu Liang; Jun-Yao Chiu

In this paper, we propose a program development toolkit called OMPICUDA for hybrid CPU/GPU clusters. With the support of this toolkit, users can use a familiar programming model, i.e., compound OpenMP and MPI, instead of mixed CUDA and MPI or SDSM, to develop their applications on a hybrid CPU/GPU cluster. In addition, they can adapt the types of resources used for executing different parallel regions in the same program by means of an extended device directive, according to the property of each parallel region. The toolkit also supports a set of data-partition interfaces for users to achieve load balance at the application level, regardless of which type of resource executes their programs.


Parallel Computing | 2011

Data race avoidance and replay scheme for developing and debugging parallel programs on distributed shared memory systems

Yung Chang Chiu; Ce-Kuen Shieh; Tzu-Chi Huang; Tyng-Yeu Liang; Kuo Chih Chu

Distributed shared memory (DSM) allows parallel programs to run on distributed computers by simulating a global virtual shared memory, but data racing bugs may easily occur when the threads of a multi-threaded process concurrently access the physically distributed memory. Earlier tools to help programmers locate data racing bugs in non-DSM parallel programs are not easily applied to DSM systems. This study presents the data race avoidance and replay scheme (DRARS) to assist debugging parallel programs on DSM or multi-core systems. DRARS is a novel tool which controls the consistency protocol of the target program, automatically preventing a large class of data racing bugs when the parallel program is subsequently run, obviating much of the need for manual debugging. For data racing bugs that cannot be avoided automatically, DRARS performs a deterministic replay-type function on DSM systems, faithfully reproducing the behavior of the parallel program during run time. Because one class of data racing bugs has already been eliminated, the remaining manual debugging task is greatly simplified. Unlike previous debugging methods, DRARS does not require that the parallel program be written in a specific style or programming language. Moreover, DRARS can be implemented in most consistency protocols. In this paper, DRARS is realized and verified in real experiments using the eager release consistency protocol on a DSM system with various applications.
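
The deterministic-replay half of this idea can be illustrated with a toy lock that records its acquisition order and can later enforce that same order; DRARS actually works at the consistency-protocol level on a DSM, so the class below is only an invented, minimal analogy.

```python
# Minimal record/replay sketch of deterministic replay for lock-based
# races (illustrative only; not the DRARS implementation).
import threading

class ReplayLock:
    """A lock that logs who acquires it and can replay that order."""

    def __init__(self):
        self._lock = threading.Lock()
        self._cond = threading.Condition()
        self.log = []        # recorded acquisition order
        self._replay = None  # pending turns when replaying

    def start_replay(self, log):
        """Enforce a previously recorded acquisition order."""
        self._replay = list(log)

    def acquire(self, tid):
        if self._replay is not None:
            with self._cond:
                # block until it is this thread's recorded turn
                self._cond.wait_for(lambda: self._replay[0] == tid)
                self._lock.acquire()       # take the real lock in turn
                self._replay.pop(0)
                self.log.append(tid)
                self._cond.notify_all()
            return
        self._lock.acquire()               # recording mode
        self.log.append(tid)

    def release(self):
        self._lock.release()
```

Replaying a recorded order makes a racy run reproducible: however the scheduler interleaves the threads, the lock hands out its critical sections in the logged sequence.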

Collaboration


Dive into Tyng-Yeu Liang's collaboration.

Top Co-Authors

Ce-Kuen Shieh (National Cheng Kung University)
Jyh-Biau Chang (National Cheng Kung University)
Yen-Tso Liu (National Cheng Kung University)
Hung-Fu Li (National Kaohsiung University of Applied Sciences)
Po-Cheng Chen (National Cheng Kung University)
Chun-Yi Wu (National Cheng Kung University)
Yu-Jie Lin (National Kaohsiung University of Applied Sciences)
Weiping Zhu (University of Queensland)
Yi-Chang Zhuang (National Cheng Kung University)
Alvin W.Y. Su (National Cheng Kung University)