Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jhy-Chun Wang is active.

Publication


Featured research published by Jhy-Chun Wang.


Journal of Parallel and Distributed Computing | 1993

Embedding meshes on the star graph

Sanjay Ranka; Jhy-Chun Wang; Nangkang Yeh

We develop algorithms for mapping n-dimensional meshes on a star graph of degree n with expansion 1 and dilation 3. We show that an n-degree star graph can efficiently simulate an n-dimensional mesh.
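
A quick way to make the terminology concrete is sketched below in Python; it is an illustration, not the paper's construction. The n-star graph has the n! permutations of n symbols as vertices, with edges swapping the first symbol with the i-th, so "expansion 1" means the mesh has exactly n! nodes and "dilation" is the largest star-graph distance between images of adjacent mesh nodes. The function names and the naive lexicographic mapping are assumptions for demonstration, so the measured dilation will generally exceed the paper's bound of 3.

# Illustrative sketch (not the paper's embedding). Vertices of the n-star
# graph are the n! permutations of 1..n; two permutations are adjacent when
# one is obtained from the other by swapping the first symbol with the i-th.
from itertools import permutations, product
from collections import deque

def star_graph(n):
    verts = list(permutations(range(1, n + 1)))
    adj = {v: [] for v in verts}
    for v in verts:
        for i in range(1, n):  # swap position 0 with position i
            u = (v[i],) + v[1:i] + (v[0],) + v[i + 1:]
            adj[v].append(u)
    return adj

def distances_from(adj, src):
    # Breadth-first search distances in the star graph.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def dilation(adj, mesh_dims, phi):
    # Largest star-graph distance between images of adjacent mesh nodes.
    worst = 0
    for node in product(*(range(d) for d in mesh_dims)):
        dist = distances_from(adj, phi[node])
        for axis in range(len(mesh_dims)):
            if node[axis] + 1 < mesh_dims[axis]:
                nbr = node[:axis] + (node[axis] + 1,) + node[axis + 1:]
                worst = max(worst, dist[phi[nbr]])
    return worst

if __name__ == "__main__":
    n = 4
    adj = star_graph(n)                 # 4! = 24 vertices, degree 3
    mesh_dims = (2, 3, 4)               # 24 mesh nodes -> expansion 1
    mesh_nodes = list(product(*(range(d) for d in mesh_dims)))
    # Naive lexicographic assignment, only to exercise dilation(); the
    # paper's construction is what guarantees dilation 3.
    phi = dict(zip(mesh_nodes, sorted(adj)))
    print("dilation of the naive mapping:", dilation(adj, mesh_dims, phi))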


IEEE Transactions on Parallel and Distributed Systems | 1994

Static and run-time algorithms for all-to-many personalized communication on permutation networks

Sanjay Ranka; Jhy-Chun Wang; Geoffrey C. Fox

With the advent of new routing methods, the distance that a message travels is becoming relatively less important. Thus, assuming no link contention, permutation seems to be an efficient collective communication primitive. In this paper, we present several algorithms for decomposing all-to-many personalized communication into a set of disjoint partial permutations. We discuss several algorithms and study their effectiveness from the view of static scheduling as well as run-time scheduling. An approximate analysis shows that with n processors, and assuming that every processor sends and receives d messages to random destinations, our algorithm can perform the scheduling in O(dn ln d) time, on average, and can use an expected number of d + log d partial permutations to carry out the communication. We present experimental results of our algorithms on the CM-5.
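
The decomposition idea can be sketched in a few lines of Python. The fragment below is a hedged illustration, not the paper's scheduling algorithm: it greedily splits a list of point-to-point messages into phases in which every processor sends at most one message and receives at most one message, so each phase is a contention-free partial permutation. The function name decompose and the random workload are assumptions for demonstration.

# Hedged sketch: split point-to-point messages into "partial permutations",
# i.e. phases in which every processor sends at most one message and
# receives at most one message, so there is no node contention in a phase.
import random

def decompose(messages):
    # messages: list of (src, dst) pairs.
    # Returns a list of phases, each a contention-free list of (src, dst).
    remaining = list(messages)
    phases = []
    while remaining:
        busy_src, busy_dst = set(), set()
        phase, deferred = [], []
        for src, dst in remaining:
            if src not in busy_src and dst not in busy_dst:
                phase.append((src, dst))
                busy_src.add(src)
                busy_dst.add(dst)
            else:
                deferred.append((src, dst))
        phases.append(phase)
        remaining = deferred
    return phases

if __name__ == "__main__":
    n, d = 16, 4
    msgs = [(p, random.randrange(n)) for p in range(n) for _ in range(d)]
    phases = decompose(msgs)
    print(f"{len(msgs)} messages scheduled into {len(phases)} phases")
    for ph in phases:   # every phase is a partial permutation
        assert len({s for s, _ in ph}) == len(ph) == len({t for _, t in ph})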


international conference on parallel processing | 1993

Personalized Communication Avoiding Node Contention on Distributed Memory Systems

Sanjay Ranka; Jhy-Chun Wang; Manoj Kumar

In this paper, we present several algorithms for performing all-to-many personalized communication on distributed memory parallel machines. Each processor sends a different message (of potentially different size) to a subset of all the processors involved in the collective communication. The algorithms are based on decomposing the communication matrix into a set of partial permutations. We study the effectiveness of our algorithms both from the view of static scheduling as well as runtime scheduling.


hawaii international conference on system sciences | 1994

Distributed scheduling of unstructured collective communication on the CM-5

Jhy-Chun Wang; Tseng-Hui Lin; Sanjay Ranka

Parallelization of irregular applications often results in unstructured collective communication. We present a distributed algorithm for scheduling such communication on parallel machines. We describe the performance of this algorithm on the CM-5 and show that the scheduling algorithm gives a significant improvement over naive methods.


Journal of Parallel and Distributed Computing | 1995

Irregular personalized communication on distributed memory machines

Sanjay Ranka; Jhy-Chun Wang; Manoj Kumar

In this paper, we present several algorithms for performing all-to-many personalized communication on distributed memory parallel machines. We assume that each processor sends a different message (of potentially different size) to a subset of all the processors involved in the collective communication. The algorithms are based on decomposing the communication matrix into a set of partial permutations. We study the effectiveness of our algorithms from both the view of static scheduling and runtime scheduling.


international parallel processing symposium | 1993

A probabilistic analysis of a locality maintaining load balancing algorithm

Kishan G. Mehrotra; Sanjay Ranka; Jhy-Chun Wang

This paper presents a simple load balancing algorithm and its probabilistic analysis. Unlike most of the previous load balancing algorithms, this algorithm maintains locality. The authors show that the cost of this load balancing algorithm is small for practical situations and discuss some interesting applications for data remapping.
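
For illustration only, the Python sketch below shows one common locality-preserving remapping scheme; it is an assumption, not necessarily the algorithm analyzed in the paper. Items keep their global order and the ordered sequence is re-split into near-equal contiguous chunks, so items that were neighbours stay on the same or adjacent processors after balancing.

# Hedged sketch of a locality-preserving remap (illustrative assumption).
def rebalance(loads):
    # loads[p]: number of items currently on processor p.
    # Returns send_counts[p][q]: items processor p ships to processor q.
    n = len(loads)
    total = sum(loads)
    # Target chunk sizes after balancing, as even as possible.
    target = [total // n + (1 if q < total % n else 0) for q in range(n)]
    t_end = [sum(target[:q + 1]) for q in range(n)]   # prefix sums of targets
    send_counts = [[0] * n for _ in range(n)]
    start = 0
    for p in range(n):
        lo, hi = start, start + loads[p]              # global range held by p
        for q in range(n):
            q_lo, q_hi = t_end[q] - target[q], t_end[q]
            send_counts[p][q] = max(0, min(hi, q_hi) - max(lo, q_lo))
        start = hi
    return send_counts

if __name__ == "__main__":
    loads = [10, 1, 7, 2]          # imbalanced loads on 4 processors
    moves = rebalance(loads)
    for p, row in enumerate(moves):
        print(f"P{p} sends:", {q: c for q, c in enumerate(row) if c and q != p})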


Scalable Parallel Libraries Conference | 1993

Scalable libraries for Fortran 90D/High Performance Fortran

Zeki Bozkus; Alok N. Choudhary; Geoffrey C. Fox; T. Haupt; Sanjay Ranka; Rajeev Thakur; Jhy-Chun Wang

High Performance Fortran (HPF) is a new language, based on Fortran 90, developed by the HPF Forum. The language was designed to support data parallel programming with top performance on MIMD and SIMD computers with non-uniform memory access costs. The main features of the language include the FORALL construct, new intrinsic functions, and data distribution directives. A perusal of HPF shows that most of the parallelism is hidden in the runtime library. Further, efficient parallelization of the FORALL construct and array assignment functions on distributed memory machines requires the use of collective communication to access non-local data. This communication can be structured (like shift, broadcast, or all-to-all communication) or unstructured. Thus, the scalability of the code generated by the compiler depends on the scalability of these libraries. In this paper, we present the design and performance of a scalable library for the intrinsic functions and the collective communication library.
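
To make the role of structured collective communication concrete, the Python sketch below shows how an HPF-style assignment A(1:N-1) = B(2:N) on a BLOCK-distributed array reduces to a shift in which each processor fetches a single boundary element from its right-hand neighbour. It is illustrative only; block_ranges and shifted_copy are assumed names, not part of the Fortran 90D/HPF runtime library.

# Illustrative sketch with assumed helper names (not the HPF runtime API).
def block_ranges(n, p):
    # Split indices 0..n-1 into p contiguous blocks (HPF BLOCK distribution).
    base, extra = divmod(n, p)
    ranges, start = [], 0
    for q in range(p):
        size = base + (1 if q < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

def shifted_copy(b_blocks):
    # Compute a(i) = b(i+1) blockwise, assuming every block is non-empty:
    # each block only needs the first element of the next block (the shift).
    p = len(b_blocks)
    a_blocks = []
    for q in range(p):
        halo = b_blocks[q + 1][0] if q + 1 < p else None
        local = b_blocks[q][1:] + ([halo] if halo is not None else [])
        a_blocks.append(local)
    return a_blocks

if __name__ == "__main__":
    n, p = 10, 3
    b = list(range(100, 100 + n))                   # global array B
    b_blocks = [b[lo:hi] for lo, hi in block_ranges(n, p)]
    a_blocks = shifted_copy(b_blocks)
    print(sum(a_blocks, []))                        # equals B(2:N), i.e. b[1:]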


Parallel Processing Letters | 1995

DISTRIBUTED SCHEDULING OF UNSTRUCTURED COLLECTIVE COMMUNICATION ON THE CM-5

Jhy-Chun Wang; Tseng-Hui Lin; Sanjay Ranka

Parallelization of scientific applications often results in unstructured collective communication. In this paper, we present a distributed algorithm for scheduling such communication on parallel machines. We describe the performance of this algorithm on the CM-5 and show that the scheduling algorithm gives a significant improvement over naive methods.


Archive | 1993

Load balancing and communication support for irregular problems

Jhy-Chun Wang


Archive | 1994

Scalable Libraries for Fortran 90D/High Performance Fortran

Zeki Bozkus; Alok N. Choudhary; Geoffrey C. Fox; Tom Haupt; Sanjay Ranka; Rajeev Thakur; Jhy-Chun Wang

Collaboration


Dive into Jhy-Chun Wang's collaboration.

Top Co-Authors

Geoffrey C. Fox
Indiana University Bloomington

Rajeev Thakur
Argonne National Laboratory

Sanjay Ranka
Syracuse University