
Publication


Featured research published by Nan Dun.


Future Generation Computer Systems | 2013

Design and implementation of GXP make - A workflow system based on make

Kenjiro Taura; Takuya Matsuzaki; Makoto Miwa; Yoshikazu Kamoshida; Daisaku Yokoyama; Nan Dun; Takeshi Shibata; Choi Sung Jun; Jun’ichi Tsujii

This paper describes the rationale behind designing workflow systems based on Unix make, presenting a number of idioms useful for workflows comprising many tasks. It also demonstrates a specific design and implementation of such a workflow system called GXP make. GXP make supports all the features of GNU make and extends its reach from single-node systems to clusters, clouds, supercomputers, and distributed systems. Notably, this is achieved by a very small code base that does not modify the GNU make implementation at all. While not ideal for performance, it achieves useful performance and scalability, dispatching one million tasks in approximately 16,000 seconds (60 tasks per second, including dependence analysis) on an 8-core Intel Nehalem node. As a real application, the recognition and classification of protein-protein interactions from biomedical texts on a supercomputer with more than 8,000 cores is described.
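The make-style execution model the paper builds on (run a task only after all of its prerequisites have finished) can be sketched with Python's standard-library topological sorter; the task graph below is a hypothetical stand-in for the dependency lines of make rules, not taken from the paper:

```python
from graphlib import TopologicalSorter

# Hypothetical workflow: each task lists its prerequisites,
# as the dependency line of a make rule would.
deps = {
    "parse": set(),
    "extract": {"parse"},
    "classify": {"parse"},
    "report": {"extract", "classify"},
}

def dispatch_order(deps):
    """Return tasks in an order that respects every dependency,
    mimicking make's dependence analysis before dispatch."""
    return list(TopologicalSorter(deps).static_order())

order = dispatch_order(deps)
```

A real engine like GXP make would dispatch independent tasks (here, `extract` and `classify`) in parallel rather than serializing the whole order.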


High Performance Distributed Computing | 2010

ParaTrac: a fine-grained profiler for data-intensive workflows

Nan Dun; Kenjiro Taura; Akinori Yonezawa

The realistic characteristics of data-intensive workflows are critical to optimal workflow orchestration, and profiling is an effective approach to investigating the behavior of such complex applications. ParaTrac is a fine-grained profiler for data-intensive workflows that uses user-level file system and process tracing techniques. First, ParaTrac enables users to quickly understand I/O characteristics, from the entire application down to specific processes or files, by examining low-level I/O profiles. Second, ParaTrac automatically extracts fine-grained data-process interactions in a workflow to help users intuitively and quantitatively investigate the realistic execution of data-intensive workflows. Experiments on thoroughly profiling the Montage workflow demonstrate both the scalability and effectiveness of ParaTrac. The overhead of tracing thousands of processes is around 16%. We use low-level I/O profiles and informative workflow DAGs to illustrate the advantage of fine-grained profiling in helping users comprehensively understand application behavior and refine the scheduling of complex workflows. Our study also suggests that current workflow management systems may use fine-grained profiles to provide more flexible control for optimal workflow execution.
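The kind of low-level I/O profile ParaTrac derives can be approximated by aggregating traced events per file; the event records below are hypothetical, and the real tool captures them through a user-level file system rather than a list:

```python
from collections import defaultdict

# Hypothetical trace records: (pid, operation, path, bytes).
events = [
    (101, "read",  "in.dat",  4096),
    (101, "write", "tmp.dat", 2048),
    (102, "read",  "tmp.dat", 2048),
    (102, "write", "out.dat", 1024),
]

def io_profile(events):
    """Aggregate bytes read/written per file -- the kind of
    low-level profile used to spot I/O hotspots."""
    stats = defaultdict(lambda: {"read": 0, "write": 0})
    for pid, op, path, nbytes in events:
        stats[path][op] += nbytes
    return dict(stats)

profile = io_profile(events)
```

The same aggregation keyed by `pid` instead of `path` would give the per-process view the abstract mentions.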


International Conference on e-Science | 2010

Design and Implementation of GXP Make -- A Workflow System Based on Make

Kenjiro Taura; Takuya Matsuzaki; Makoto Miwa; Yoshikazu Kamoshida; Daisaku Yokoyama; Nan Dun; Takeshi Shibata; Choi Sung Jun; Jun’ichi Tsujii

This paper describes the rationale behind designing workflow systems based on Unix make, presenting a number of idioms useful for workflows comprising many tasks. It also demonstrates a specific design and implementation of such a workflow system called GXP make. GXP make supports all the features of GNU make and extends its reach from single-node systems to clusters, clouds, supercomputers, and distributed systems. Notably, this is achieved by a very small code base that does not modify the GNU make implementation at all. While not ideal for performance, it achieves useful performance and scalability, dispatching one million tasks in approximately 16,000 seconds (60 tasks per second, including dependence analysis) on an 8-core Intel Nehalem node. As a real application, the recognition and classification of protein-protein interactions from biomedical texts on a supercomputer with more than 8,000 cores is described.


International Parallel and Distributed Processing Symposium | 2012

An Empirical Performance Study of Chapel Programming Language

Nan Dun; Kenjiro Taura

In this paper we evaluate the performance of the Chapel programming language from the perspective of its language primitives and features, using microbenchmarks synthesized from our lessons learned in developing molecular dynamics simulation programs in Chapel. Experimental results show that most language building blocks have performance comparable to corresponding hand-written C code, while complex applications can achieve up to 70% of the performance of the C implementation. We identify several causes of overhead that can be further optimized by the Chapel compiler. This work not only helps Chapel users understand the performance implications of using Chapel, but also provides useful feedback for Chapel developers to build a better compiler.
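The microbenchmark methodology (time a language primitive against an equivalent hand-written loop and compare) can be sketched in Python; the summation workload here is a placeholder for the Chapel/C primitive pairs the paper actually measures:

```python
import timeit

def builtin_sum(n):
    # Language-primitive version of the workload.
    return sum(range(n))

def handwritten_sum(n):
    # Hand-written loop version of the same workload.
    total = 0
    for i in range(n):
        total += i
    return total

def relative_performance(n=100_000, repeat=3):
    """Return the handwritten/primitive time ratio; values above 1
    mean the primitive is faster. This mirrors the style of figure
    the study reports, not its actual numbers."""
    t_builtin = min(timeit.repeat(lambda: builtin_sum(n), number=10, repeat=repeat))
    t_hand = min(timeit.repeat(lambda: handwritten_sum(n), number=10, repeat=repeat))
    return t_hand / t_builtin
```

Taking the minimum over repeats reduces timing noise, a standard microbenchmarking precaution.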


Cluster Computing and the Grid | 2009

GMount: An Ad Hoc and Locality-Aware Distributed File System by Using SSH and FUSE

Nan Dun; Kenjiro Taura; Akinori Yonezawa

Developing and deploying distributed file systems has been important for Grid computing. With GMount, non-privileged users can instantaneously and effortlessly build a distributed file system on arbitrary machines reachable via SSH. It scales to hundreds of nodes in wide-area Grid environments and adapts to NATs and firewalls. Unlike conventional distributed file systems, GMount can directly harness the local file system of each node without importing or exporting application data, and it uses the network topology to make metadata operations locality-aware. In this paper, we present the design and implementation of GMount using two popular components: SSH and FUSE. We demonstrate its viability and locality-aware metadata operation performance in a large-scale Grid with over 320 nodes spread across 12 clusters connected by heterogeneous wide-area links.
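The locality-aware selection described above (prefer a server in the client's own cluster, derived from network topology) can be sketched as a small policy function; the node and cluster names are hypothetical:

```python
# Hypothetical node -> cluster mapping derived from network topology.
topology = {
    "n01": "clusterA", "n02": "clusterA",
    "n11": "clusterB", "n12": "clusterB",
}

def pick_metadata_server(client, servers, topology):
    """Prefer a metadata server in the client's own cluster so
    lookups stay on fast local links; fall back to any server
    (sorted for determinism) when none is local."""
    local = [s for s in servers if topology[s] == topology[client]]
    return (local or sorted(servers))[0]

server = pick_metadata_server("n02", ["n01", "n11"], topology)
```

A real system would weight the choice by measured latency rather than cluster labels alone; this only illustrates the locality preference.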


Grid Computing | 2010

Fine-Grained Profiling for Data-Intensive Workflows

Nan Dun; Kenjiro Taura; Akinori Yonezawa

Profiling is an effective dynamic analysis approach to investigating complex applications. ParaTrac is a user-level profiler that uses file system and process tracing techniques for data-intensive workflow applications. ParaTrac helps users refine the orchestration of workflows in two respects. First, the profiles of I/O characteristics enable users to quickly identify bottlenecks in the underlying I/O subsystems. Second, ParaTrac can extract fine-grained data-process interactions in workflow execution to help users understand, characterize, and manage realistic data-intensive workflows. Experiments on thoroughly profiling the Montage workflow demonstrate that ParaTrac scales to tracing events from thousands of processes and is effective in guiding fine-grained workflow scheduling and improvements to workflow management systems.
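The data-process interaction graph described above can be sketched with a simple rule: a process that reads a file depends on the last process that wrote it. The traced events below are hypothetical:

```python
# Hypothetical traced events: (pid, op, path), in time order.
events = [
    ("p1", "write", "a.dat"),
    ("p2", "read",  "a.dat"),
    ("p2", "write", "b.dat"),
    ("p3", "read",  "b.dat"),
]

def workflow_dag(events):
    """Infer producer -> consumer edges: each reader of a file
    depends on the most recent writer of that file."""
    last_writer = {}
    edges = set()
    for pid, op, path in events:
        if op == "write":
            last_writer[path] = pid
        elif op == "read" and path in last_writer:
            edges.add((last_writer[path], pid))
    return edges

edges = workflow_dag(events)
```

Such inferred edges are what make the resulting workflow DAG useful for scheduling decisions, e.g. co-locating a consumer with its producer's data.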


Many-Task Computing on Grids and Supercomputers | 2010

Easy and instantaneous processing for data-intensive workflows

Nan Dun; Kenjiro Taura; Akinori Yonezawa

This paper presents a lightweight and scalable framework that enables non-privileged users to effortlessly and instantaneously describe, deploy, and execute data-intensive workflows on arbitrary computing resources from clusters, clouds, and supercomputers. The framework consists of three major components: the GXP parallel/distributed shell as resource explorer and framework back-end, the GMount distributed file system as the underlying data-sharing approach, and GXP Make as the workflow engine. With this framework, domain researchers can intuitively write workflow descriptions as GNU make rules and harness resources from different domains with low learning and setup cost. By investigating the execution of real-world scientific applications using this framework on multi-cluster and supercomputer platforms, we demonstrate that our processing framework has practically useful performance and is suitable for common data-intensive workflow practice in various distributed computing environments.


Grid Computing | 2008

GMount: Build your grid file system on the fly

Nan Dun; Kenjiro Taura; Akinori Yonezawa

With GMount, non-privileged users can easily and quickly build ad hoc distributed file systems on any machines reachable via SSH. In wide-area Grid environments, it scales to hundreds of nodes and works across NATs and firewalls. Given the network topology, the file system's metadata operations are locality-aware. GMount can be effortlessly deployed across multiple clusters without superuser privileges. In this paper, we present the design and implementation of GMount and show its viability on a large-scale Grid platform with over 300 nodes spread across 11 clusters.


IPSJ Online Transactions | 2011

Performance Evaluation of a Distributed File System with Locality-Aware Metadata Lookups

Nan Dun; Kenjiro Taura; Akinori Yonezawa


Archive | 2008

GMount: Building Ad-hoc Distributed File Systems by GXP and SSHFS-MUX

Nan Dun; Kenjiro Taura; Akinori Yonezawa

Collaboration


Dive into Nan Dun's collaborations.

Top Co-Authors

Makoto Miwa

Toyota Technological Institute


Takuya Matsuzaki

National Institute of Informatics
