Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Guangming Tan is active.

Publication


Featured research published by Guangming Tan.


International Workshop on High Performance Reconfigurable Computing Technology and Applications | 2007

Implementation of the Smith-Waterman algorithm on a reconfigurable supercomputing platform

Peiheng Zhang; Guangming Tan; Guang R. Gao

XD1000, an innovative reconfigurable supercomputing platform, was developed by XtremeData Inc. to exploit the rapid progress of FPGA technology and the high performance of HyperTransport interconnects. In this paper, we present implementations of the Smith-Waterman algorithm for both DNA and protein sequences on this platform. The main features include: (1) a multistage PE (processing element) design which significantly reduces FPGA resource usage and hence allows more parallelism to be exploited; (2) a pipelined control mechanism with uneven stage latencies, a key to minimizing the overall PE pipeline cycle time; (3) a compressed substitution-matrix storage structure, resulting in a substantial decrease in on-chip SRAM usage. Finally, we implement a 384-PE systolic array running at 66.7 MHz, which achieves 25.6 GCUPS peak performance. Compared with the 2.2 GHz AMD Opteron host processor, the FPGA coprocessor achieves speedups of 185x and 250x for DNA and protein sequences, respectively.
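The systolic-array design above parallelizes the classic Smith-Waterman recurrence along anti-diagonals. A minimal sequential sketch of that recurrence (illustrative only; a simple linear gap penalty and toy scoring values are assumed, not the paper's substitution matrices):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score via the Smith-Waterman recurrence.

    Each cell H[i][j] depends only on its top, left, and top-left
    neighbors -- the anti-diagonal wavefront that a systolic PE
    array can exploit for parallelism.
    """
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # match/mismatch
                          H[i - 1][j] + gap,    # gap in b
                          H[i][j - 1] + gap)    # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACG", "ACG"))  # 6: three matches at +2 each
```

In hardware, one PE per column of H lets all cells on an anti-diagonal be computed in the same cycle, which is where the GCUPS figures come from.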


IEEE International Conference on High Performance Computing, Data, and Analytics | 2011

Fast implementation of DGEMM on Fermi GPU

Guangming Tan; Linchuan Li; Sean Triechle; Everett H. Phillips; Yungang Bao; Ninghui Sun

In this paper we present a thorough experience of tuning double-precision matrix-matrix multiplication (DGEMM) on the Fermi GPU architecture. We choose an optimal algorithm with blocking in both shared memory and registers to satisfy the constraints of the Fermi memory hierarchy. Our optimization strategy is further guided by performance modeling based on micro-architecture benchmarks. Our optimizations include software pipelining, use of vector memory operations, and instruction scheduling. Our best CUDA algorithm achieves performance comparable to the latest CUBLAS library. We further improve upon this with an implementation in the native machine language, leading to a 20% increase in performance. That is, the achieved peak performance (efficiency) is improved from 302 Gflop/s (58%) to 362 Gflop/s (70%).
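The blocking idea the paper tunes on Fermi can be sketched host-side: a toy tiled matrix multiply where the tile stands in for the two blocking levels (shared memory and registers). This is an illustration of the reuse principle, not the paper's CUDA kernel; the tile size T is a placeholder:

```python
def tiled_matmul(A, B, n, T=4):
    """C = A @ B for n x n row-major lists of lists, computed tile by tile.

    Accumulating one T x T tile of C at a time keeps its operand tiles
    'hot', improving data reuse -- the same constraint-driven blocking
    a GPU kernel applies at the shared-memory and register levels.
    """
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, T):
        for jj in range(0, n, T):
            for kk in range(0, n, T):
                for i in range(ii, min(ii + T, n)):
                    for k in range(kk, min(kk + T, n)):
                        a = A[i][k]  # scalar held across the j loop
                        for j in range(jj, min(jj + T, n)):
                            C[i][j] += a * B[k][j]
    return C
```

Holding `A[i][k]` in a local while streaming a row of B mirrors register blocking; the tile loops mirror shared-memory blocking.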


ACM Symposium on Parallel Algorithms and Architectures | 2007

A parallel dynamic programming algorithm on a multi-core architecture

Guangming Tan; Ninghui Sun; Guang R. Gao

Dynamic programming is an efficient technique for solving combinatorial search and optimization problems, and many parallel dynamic programming algorithms exist. The purpose of this paper is to study a family of dynamic programming algorithms where data dependences appear between non-consecutive stages; in other words, the data dependences are non-uniform. This kind of dynamic programming is typically called nonserial polyadic dynamic programming. Owing to the non-uniform data dependences, it is harder to optimize this problem for parallelism and locality on parallel architectures. In this paper, we address the challenge of exploiting fine-grained parallelism and locality of nonserial polyadic dynamic programming on a multi-core architecture. We present a programming and execution model for multi-core architectures with a memory hierarchy. In the framework of the new model, parallelism and locality benefit from a data-dependence transformation. We propose a parallel pipelined algorithm for filling the dynamic programming matrix by decomposing the computation operators. The new parallel algorithm tolerates memory access latency using multithreading and is easily improved with a tiling technique. We formulate and analytically solve the optimization problem of determining the tile size that minimizes total execution time. Experiments on a simulator validate the proposed model and show that the fine-grained parallel algorithm achieves sub-linear speedup and has potential for high scalability on multi-core architectures.
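Nonserial polyadic dynamic programming has a matrix-chain-like recurrence in which each cell reads cells from all shorter spans, not just the previous stage. A minimal sequential sketch (the cost function `w` is a hypothetical placeholder, and this is the serial fill, not the paper's pipelined parallel algorithm):

```python
def nonserial_polyadic_dp(n, w):
    """Fill c[i][j] = min over i < k < j of (c[i][k] + c[k][j]) + w(i, j).

    Cell (i, j) depends on cells of *every* shorter span -- the
    non-uniform, non-consecutive-stage dependence that makes this DP
    family hard to tile and parallelize.
    """
    c = [[0.0] * n for _ in range(n)]          # spans 0 and 1 cost nothing
    for span in range(2, n):                   # shortest spans first
        for i in range(0, n - span):
            j = i + span
            c[i][j] = min(c[i][k] + c[k][j] for k in range(i + 1, j)) + w(i, j)
    return c
```

With a toy cost `w(i, j) = j - i` and n = 4, the fill gives c[0][2] = 2 and c[0][3] = 5, illustrating how span-3 cells combine span-2 results.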


Programming Language Design and Implementation | 2013

SMAT: an input adaptive auto-tuner for sparse matrix-vector multiplication

Jiajia Li; Guangming Tan; Mingyu Chen; Ninghui Sun

Sparse Matrix-Vector multiplication (SpMV) is an important kernel in both traditional high performance computing and emerging data-intensive applications. So far, SpMV libraries have been optimized by either application-specific or architecture-specific approaches, making them too complicated to use extensively in real applications. In this work we develop a Sparse Matrix-vector multiplication Auto-Tuning system (SMAT) to bridge the gap between specific optimizations and general-purpose usage. SMAT provides users with a unified programming interface in compressed sparse row (CSR) format and automatically determines the optimal format and implementation for any input sparse matrix at runtime. For this purpose, SMAT leverages a learning model, generated in an off-line stage by a machine learning method with a training set of more than 2000 matrices from the UF sparse matrix collection, to quickly predict the best combination of the matrix feature parameters. Our experiments show that SMAT achieves impressive performance of up to 51 GFLOPS in single precision and 37 GFLOPS in double precision on mainstream x86 multi-core processors, both more than 3 times faster than the Intel MKL library. We also demonstrate its adaptability in an algebraic multigrid solver from the Hypre library, with a reported performance improvement above 20%.
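SMAT's unified interface accepts matrices in CSR format. A minimal y = A·x sketch over that layout (plain Python for illustration, not the tuned kernels an auto-tuner like SMAT selects at runtime):

```python
def csr_spmv(row_ptr, col_idx, vals, x):
    """y = A @ x with A stored in compressed sparse row (CSR) form.

    row_ptr[i]:row_ptr[i+1] delimits row i's nonzeros; col_idx and
    vals hold their column indices and values. Which storage format
    (CSR, ELL, DIA, COO, ...) runs fastest depends on the matrix's
    sparsity pattern -- exactly the decision an auto-tuner makes.
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for p in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[p] * x[col_idx[p]]
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]] in CSR:
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals = [4.0, 1.0, 2.0, 3.0, 5.0]
print(csr_spmv(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```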


Conference on High Performance Computing (Supercomputing) | 2006

Locality and parallelism optimization for dynamic programming algorithm in bioinformatics

Guangming Tan; Shengzhong Feng; Ninghui Sun

Dynamic programming has been one of the most efficient approaches to sequence analysis and structure prediction in biology. However, its performance is limited by the drastic increase in both the volume of biological data and the variety of computer architectures. This paper addresses the challenges of improving memory efficiency and network latency tolerance for nonserial polyadic dynamic programming, where the dependences are nonuniform. By relaxing the nonuniform dependences, we propose a new cache-oblivious scheme to enhance performance on memory-hierarchy architectures. Moreover, we develop and extend a tiling technique to parallelize this nonserial polyadic dynamic programming, using an alternate block-cyclic mapping strategy to balance the computational and memory load; an analytical parameterized model is formulated to determine the tile size that minimizes total execution time, and an algorithmic transformation is used to schedule tiles so that communication overlaps computation, further reducing communication overhead on parallel architectures. Numerical experiments were carried out on several high-performance computer systems. The new cache-oblivious dynamic programming algorithm achieves a 2-10x speedup, and the parallel tiling algorithm with communication-computation overlap shows promising potential for fine-grained parallel computing on massively parallel computer systems.
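The tiled parallelization rests on the fact that a tile of span d depends only on tiles of shorter spans, so tiles can be scheduled in anti-diagonal waves; tiles within a wave are independent, and communication for one wave can overlap computation of the next. A sketch of that schedule over the tile grid (illustrative wave enumeration, not the paper's block-cyclic mapping):

```python
def wavefront_schedule(tiles):
    """Group the upper-triangular tile grid of a span-based DP into
    anti-diagonal waves.

    Tile (I, J) with span J - I = d reads only tiles of span < d, so
    every tile within one wave can run in parallel, and sending a
    finished wave's results can overlap the next wave's computation.
    """
    waves = []
    for d in range(1, tiles):                      # one wave per span d
        waves.append([(i, i + d) for i in range(tiles - d)])
    return waves

# For a 4x4 tile grid:
# wave 1: (0,1) (1,2) (2,3)   wave 2: (0,2) (1,3)   wave 3: (0,3)
```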


Field-Programmable Custom Computing Machines | 2012

Accelerating Millions of Short Reads Mapping on a Heterogeneous Architecture with FPGA Accelerator

Wen Tang; Wendi Wang; Bo Duan; Chunming Zhang; Guangming Tan; Peiheng Zhang; Ninghui Sun

The explosion of Next Generation Sequencing (NGS) data, with over one billion reads per day, poses a great challenge to the capability of current computing systems. In this paper, we propose a CPU-FPGA heterogeneous architecture for accelerating a short-read mapping algorithm built upon the concept of a hash index. In particular, by extracting the most time-consuming and basic operations and mapping them to specialized processing elements (PEs), our new algorithm is amenable to efficient acceleration on FPGAs. The proposed architecture is implemented and evaluated on a customized FPGA accelerator card hosting a Xilinx Virtex-5 LX330 FPGA. Limited by the available data-transfer bandwidth, our NGS mapping accelerator, which operates at 175 MHz, integrates up to 100 PEs. Compared to a six-core Intel CPU, the speedup of our accelerator ranges from 22.2x to 42.9x.
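A hash-index short-read mapper boils down to two steps: index the reference's k-mers, then seed each read and verify candidate positions. A toy sketch of those basic operations (the k-mer size, single-seed scheme, and mismatch bound here are assumptions for illustration, not the accelerator's exact pipeline):

```python
def build_index(ref, k):
    """Hash every k-mer of the reference to its start positions."""
    index = {}
    for i in range(len(ref) - k + 1):
        index.setdefault(ref[i:i + k], []).append(i)
    return index

def map_read(read, ref, index, k, max_mismatch=2):
    """Seed with the read's first k-mer, then verify each candidate
    position by counting mismatches -- the repetitive compare-and-count
    work that is a natural fit for FPGA processing elements."""
    hits = []
    for pos in index.get(read[:k], []):
        cand = ref[pos:pos + len(read)]
        if len(cand) == len(read):
            mismatches = sum(a != b for a, b in zip(read, cand))
            if mismatches <= max_mismatch:
                hits.append(pos)
    return hits
```

Index construction is done once on the CPU; the verification inner loop is the part worth offloading, since it is simple, data-parallel, and dominates runtime.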


International Conference on Parallel Processing | 2009

A Parallel Algorithm for Computing Betweenness Centrality

Guangming Tan; Dengbiao Tu; Ninghui Sun

In this paper we present a multi-grained parallel algorithm for computing betweenness centrality, which is extensively used in large-scale network analysis. Our method is based on a novel algorithmic handling of access conflicts for a CREW PRAM algorithm. We propose a proper data-processor mapping, a novel edge-numbering strategy and a new triple-array data structure recording shortest paths to eliminate conflicting accesses to the shared memory. The algorithm requires O(n+m) space and O(nm/p) time on p processors.


IEEE Transactions on Parallel and Distributed Systems | 2009

Improving Performance of Dynamic Programming via Parallelism and Locality on Multicore Architectures

Guangming Tan; Ninghui Sun; Guang R. Gao


Measurement and Modeling of Computer Systems | 2009

Extending Amdahl's law in the multicore era

Erlin Yao; Yungang Bao; Guangming Tan; Mingyu Chen


The Journal of Supercomputing | 2011

Analysis and performance results of computing betweenness centrality on IBM Cyclops64

Guangming Tan; Vugranam C. Sreedhar; Guang R. Gao
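Both betweenness-centrality papers above build on Brandes-style shortest-path counting and dependency accumulation. A minimal sequential sketch for an unweighted graph (an illustrative baseline, not the CREW PRAM algorithm or the Cyclops64 implementation):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm on an unweighted graph given as adjacency lists.

    For each source: a BFS counts shortest paths (sigma), then a reverse
    sweep over the BFS order accumulates pair dependencies (delta) along
    shortest-path DAG edges. O(n + m) space per source, O(nm) total work.
    """
    n = len(adj)
    bc = [0.0] * n
    for s in range(n):
        sigma = [0] * n; sigma[s] = 1        # shortest-path counts
        dist = [-1] * n; dist[s] = 0
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:   # v precedes w on a shortest path
                    sigma[w] += sigma[v]
        delta = [0.0] * n
        for v in reversed(order):            # dependency accumulation
            for w in adj[v]:
                if dist[w] == dist[v] + 1:
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if v != s:
                bc[v] += delta[v]
    return bc
```

The accumulation phase is where concurrent updates to shared `sigma`/`delta` arrays collide on a parallel machine, which is the access-conflict problem the PRAM paper's edge numbering and triple arrays are designed to eliminate.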

Collaboration


Dive into Guangming Tan's collaborations.

Top Co-Authors

Ninghui Sun (Chinese Academy of Sciences)
Mingyu Chen (Chinese Academy of Sciences)
Shengzhong Feng (Chinese Academy of Sciences)
Erlin Yao (Chinese Academy of Sciences)
Chunming Zhang (Chinese Academy of Sciences)
Jie Yan (Chinese Academy of Sciences)
Peiheng Zhang (Chinese Academy of Sciences)
Dongrui Fan (Chinese Academy of Sciences)
Jiajia Li (Chinese Academy of Sciences)