
Publication


Featured research published by Dingwen Tao.


International Parallel and Distributed Processing Symposium | 2017

Significantly Improving Lossy Compression for Scientific Data Sets Based on Multidimensional Prediction and Error-Controlled Quantization

Dingwen Tao; Sheng Di; Zizhong Chen; Franck Cappello

Today's HPC applications are producing extremely large amounts of data, such that data storage and analysis are becoming more challenging for scientific research. In this work, we design a new error-controlled lossy compression algorithm for large-scale scientific data. Our key contribution is significantly improving the prediction hitting rate (or prediction accuracy) for each data point based on its nearby data values along multiple dimensions. We derive a series of multilayer prediction formulas and their unified formula in the context of data compression. One serious challenge is that the data prediction has to be performed on the preceding decompressed values during compression in order to guarantee the error bounds, which in turn may degrade the prediction accuracy. We explore the best layer for the prediction by considering the impact of compression errors on the prediction accuracy. Moreover, we propose an adaptive error-controlled quantization encoder, which can further improve the prediction hitting rate considerably. The data size can be reduced significantly after performing the variable-length encoding because of the uneven distribution produced by our quantization encoder. We evaluate the new compressor on production scientific data sets and compare it with many other state-of-the-art compressors: GZIP, FPZIP, ZFP, SZ-1.1, and ISABELA. Experiments show that our compressor is the best in class, especially with regard to compression factors (or bit-rates) and compression errors (including RMSE, NRMSE, and PSNR). Our solution outperforms the second-best solution by more than a 2x increase in compression factor and a 3.8x reduction in normalized root mean squared error on average, with reasonable error bounds and user-desired bit-rates.
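The prediction-plus-quantization pipeline described in this abstract can be sketched in one dimension. This is a minimal illustration, not the paper's multilayer multidimensional predictor; the function name and parameters are ours. The point the sketch preserves is that prediction uses previously decompressed values, which is exactly what guarantees the error bound.

```python
import numpy as np

def compress_decompress(data, err_bound):
    # Predict each point from the previous *decompressed* value, then
    # quantize the prediction error into bins of width 2*err_bound.
    codes = np.empty(len(data), dtype=np.int64)   # quantization codes
    recon = np.empty(len(data), dtype=float)      # decompressed values
    prev = 0.0                                    # prediction for the first point
    for i, v in enumerate(data):
        q = int(round((v - prev) / (2 * err_bound)))
        codes[i] = q
        recon[i] = prev + q * 2 * err_bound
        prev = recon[i]            # next prediction uses decompressed data
    return codes, recon

data = np.sin(np.linspace(0, 10, 1000))
codes, recon = compress_decompress(data, err_bound=1e-3)
assert np.max(np.abs(recon - data)) <= 1e-3 + 1e-12   # error bound holds pointwise
```

Because the codes cluster around zero, a variable-length encoder (e.g., Huffman) compresses the code stream well; that is the "uneven distribution" the abstract refers to.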


High Performance Distributed Computing | 2016

New-Sum: A Novel Online ABFT Scheme For General Iterative Methods

Dingwen Tao; Shuaiwen Leon Song; Sriram Krishnamoorthy; Panruo Wu; Xin Liang; Eddy Z. Zhang; Darren J. Kerbyson; Zizhong Chen

Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover from soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme. These designs are capable of addressing scenarios under different error rates. Our ABFT approaches apply to a wide range of iterative solvers that primarily rely on matrix-vector multiplication and vector linear operations. We evaluate our designs through comprehensive analytical and empirical analysis. Experimental evaluation on the Stampede supercomputer demonstrates the low performance overheads incurred by our two ABFT schemes for preconditioned CG (0.4% and 2.2%) and preconditioned BiCGSTAB (1.0% and 4.0%) for the largest SPD matrix from the UFL Sparse Matrix Collection. The evaluation also demonstrates the flexibility and effectiveness of our proposed designs for detecting and recovering various types of soft errors in general iterative methods.
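The checksum idea underlying ABFT for matrix-vector multiplication can be sketched as follows. This uses the classical all-ones checksum vector for illustration, not the paper's New-Sum encoding: if the checksum row c^T A is encoded once, then c^T (Ax) must equal (c^T A) x, and any single corrupted entry of the result violates the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

# Checksum vector: a classical ABFT choice is the all-ones vector,
# so c @ A encodes the column sums of A.
c = np.ones(n)
cA = c @ A            # encoded once, maintained alongside A

y = A @ x             # the protected matrix-vector multiplication
assert np.isclose(cA @ x, c @ y)       # invariant: c^T(Ax) == (c^T A)x

# A single corrupted entry breaks the invariant and is detected.
y_faulty = y.copy()
y_faulty[10] += 1.0
assert not np.isclose(cA @ x, c @ y_faulty)
```

The checksum test costs only two O(n) dot products per multiplication, which is why the overheads reported in the abstract can stay in the low single-digit percentages.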


High Performance Distributed Computing | 2016

Towards Practical Algorithm Based Fault Tolerance in Dense Linear Algebra

Panruo Wu; Qiang Guan; Nathan DeBardeleben; Sean Blanchard; Dingwen Tao; Xin Liang; Jieyang Chen; Zizhong Chen

Algorithm-based fault tolerance (ABFT) attracts renewed interest for its extremely low overhead and good scalability. However, the fault model used to design ABFT has been either abstract, simplistic, or both, leaving a gap between what occurs at the architecture level and what the algorithm expects. As the fault model is the deciding factor in choosing an effective checksum scheme, the resulting ABFT techniques have seen limited impact in practice. In this paper, we seek to close the gap by directly using a comprehensive architectural fault model and devising a comprehensive ABFT scheme that can tolerate multiple architectural faults of various kinds. We implement the new ABFT scheme in High Performance Linpack (HPL) to demonstrate its feasibility in a large-scale high-performance benchmark. We conduct architectural fault injection experiments and large-scale experiments to empirically validate its fault tolerance and demonstrate the overhead of error handling, respectively.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2016

GreenLA: green linear algebra software for GPU-accelerated heterogeneous computing

Jieyang Chen; Li Tan; Panruo Wu; Dingwen Tao; Hongbo Li; Xin Liang; Sihuan Li; Rong Ge; Laxmi N. Bhuyan; Zizhong Chen

While many linear algebra libraries have been developed to optimize their performance, none considers energy efficiency at library design time. In this paper, we present GreenLA, an energy-efficient linear algebra software package that leverages linear algebra algorithmic characteristics to maximize energy savings with negligible overhead. GreenLA is (1) energy efficient: it saves up to several times more energy than the best existing energy-saving approaches that do not modify library source code; (2) high performance: its performance is comparable to the highly optimized linear algebra library MAGMA; and (3) transparent to applications: with the same programming interface, existing MAGMA users do not need to modify their source code to benefit from GreenLA. Experimental results demonstrate that GreenLA is able to save up to three times more energy than the best existing energy-saving approaches while delivering performance similar to the state-of-the-art linear algebra library MAGMA.


ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 2017

Silent Data Corruption Resilient Two-sided Matrix Factorizations

Panruo Wu; Nathan DeBardeleben; Qiang Guan; Sean Blanchard; Jieyang Chen; Dingwen Tao; Xin Liang; Kaiming Ouyang; Zizhong Chen

This paper presents an algorithm-based fault tolerance method to harden three two-sided matrix factorizations against soft errors: reduction to Hessenberg form, tridiagonal form, and bidiagonal form. These two-sided factorizations are usually the prerequisites to computing eigenvalues/eigenvectors and the singular value decomposition. Algorithm-based fault tolerance has been shown to work on the three main one-sided matrix factorizations: LU, Cholesky, and QR, but extending it to cover two-sided factorizations is non-trivial because there is no obvious offline, problem-specific maintenance of checksums. We thus develop an online, algorithm-specific checksum scheme and show how to systematically adapt the two-sided factorization algorithms used in the LAPACK and ScaLAPACK packages to introduce algorithm-based fault tolerance. The resulting ABFT scheme can detect and correct arithmetic errors continuously during the factorizations, allowing timely error handling. Detailed analysis and experiments are conducted to show the cost and the gain in resilience. We demonstrate that our scheme covers a significant portion of the operations of the factorizations. Our checksum scheme achieves high error detection and correction coverage compared to the state of the art, with low overhead and high scalability.
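Because two-sided reductions are similarity transforms H = Q^T A Q, they preserve quantities such as the trace and the eigenvalues, which suggests cheap algorithm-level consistency checks. As an illustration only (this is a textbook Householder reduction and a trace check, not the paper's online checksum scheme):

```python
import numpy as np

def hessenberg(A):
    """Reduce A to upper Hessenberg form via Householder similarity
    transforms H = Q^T A Q (illustrative textbook version)."""
    H = A.astype(float).copy()
    n = H.shape[0]
    for k in range(n - 2):
        v = H[k + 1:, k].copy()
        alpha = -np.copysign(np.linalg.norm(v), v[0] if v[0] != 0 else 1.0)
        v[0] -= alpha
        nv = np.linalg.norm(v)
        if nv == 0:
            continue
        v /= nv
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])   # left:  P @ H
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)     # right: H @ P
    return H

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8))
H = hessenberg(A)
assert np.allclose(np.tril(H, -2), 0)        # upper Hessenberg structure
assert np.isclose(np.trace(H), np.trace(A))  # similarity preserves the trace
```

A single invariant like the trace is far weaker than a full checksum scheme, which is why the paper maintains per-step, algorithm-specific checksums instead.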


International Conference on Parallel and Distributed Systems | 2014

Extending checksum-based ABFT to tolerate soft errors online in iterative methods

Longxiang Chen; Dingwen Tao; Panruo Wu; Zizhong Chen

As the size and complexity of high-performance computers increase, more soft errors will be encountered during computations. Algorithm-Based Fault Tolerance (ABFT) has been proven to be a highly efficient technique for detecting soft errors in dense linear algebra operations, including matrix multiplication and Cholesky and LU factorization. While ABFT can also be applied to an iterative sparse linear algebra algorithm by applying it to every individual matrix-vector multiplication in the algorithm, it often introduces considerable overhead. In this paper, we propose novel extensions to ABFT that not only reduce the overhead but also protect computations that cannot be protected by existing ABFT. Instead of maintaining checksums in every individual matrix-vector multiplication, we modify the algorithms so that checksums established at the beginning of the algorithms are maintained at every iteration throughout. Because soft errors in most iterative sparse linear algebra algorithms propagate from one iteration to another, we do not have to verify the correctness of the checksums at each iteration to detect errors. By reducing the frequency of verification, the fault tolerance overhead can be greatly reduced. Experimental results demonstrate that, when used together with local diskless checkpoints, our approach introduces much less overhead than existing ABFT techniques.
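The core trick, establishing a checksum once and then maintaining it analytically across iterations instead of re-encoding every matrix-vector product, can be sketched with a simple fixed-point iteration. This is illustrative (all-ones checksum, dense matrix), not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
B = rng.standard_normal((n, n)) * (0.5 / n)   # contraction, so x <- Bx + f converges
f = rng.standard_normal(n)

c = np.ones(n)       # checksum vector (an illustrative classical choice)
cB = c @ B           # encoded once, before the iterations start

x = np.zeros(n)
s = 0.0              # running checksum of x (c @ x for x = 0)
for k in range(100):
    s = cB @ x + c @ f       # cheap O(n) analytic checksum update
    x = B @ x + f            # the actual O(n^2) iteration step
    if k % 25 == 0:          # verification can be infrequent: errors propagate
        assert np.isclose(c @ x, s)

assert np.isclose(c @ x, s)  # checksum still matches after 100 iterations
```

Verifying only every 25th iteration is safe here precisely because a corrupted x keeps disagreeing with the analytically maintained checksum in later iterations, which is the propagation property the abstract relies on.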


IEEE International Conference on High Performance Computing, Data, and Analytics | 2017

Correcting soft errors online in fast fourier transform

Xin Liang; Jieyang Chen; Dingwen Tao; Sihuan Li; Panruo Wu; Hongbo Li; Kaiming Ouyang; Yuanlai Liu; Fengguang Song; Zizhong Chen

While many algorithm-based fault tolerance (ABFT) schemes have been proposed to detect soft errors in the fast Fourier transform (FFT) offline, after the computation finishes, none of the existing ABFT schemes detect soft errors online, before the computation finishes. This paper presents an online ABFT scheme for FFT so that soft errors can be detected online and the corrupted computation can be terminated in a much more timely manner. We also extend our scheme to tolerate both arithmetic errors and memory errors, develop strategies to reduce its fault tolerance overhead and improve its numerical stability and fault coverage, and finally incorporate it into the widely used FFTW library, one of today's fastest FFT software implementations. Experimental results demonstrate that: (1) the proposed online ABFT scheme introduces much lower overhead than the existing offline ABFT schemes; (2) it detects errors in a much more timely manner; and (3) it also has higher numerical stability and better fault coverage.
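As a stand-in for the paper's checksum scheme, a well-known invariant such as Parseval's theorem illustrates the flavor of an algorithm-level consistency test for an FFT result (this is an offline-style check on the final output, not the proposed online ABFT):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1024)
X = np.fft.fft(x)

# Parseval's theorem for NumPy's unnormalized FFT:
#   sum |x|^2 == (1/N) * sum |X|^2
def parseval_ok(x, X, rtol=1e-9):
    return np.isclose(np.sum(np.abs(x) ** 2),
                      np.sum(np.abs(X) ** 2) / len(x), rtol=rtol)

assert parseval_ok(x, X)

# A corrupted transform coefficient breaks the invariant and is detected.
X_faulty = X.copy()
X_faulty[100] += 1e6
assert not parseval_ok(x, X_faulty)
```

An online scheme like the paper's instead checks invariants at intermediate butterfly stages, so a corrupted run can be aborted long before the transform completes.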


High Performance Distributed Computing | 2018

Improving performance of iterative methods by lossy checkpointing

Dingwen Tao; Sheng Di; Xin Liang; Zizhong Chen; Franck Cappello

Iterative methods are commonly used approaches to solve large, sparse linear systems, which are fundamental operations for many modern scientific simulations. When large-scale iterative methods are running with a large number of ranks in parallel, they have to checkpoint the dynamic variables periodically in case of unavoidable fail-stop errors, requiring fast I/O systems and large storage space. To this end, significantly reducing the checkpointing overhead is critical to improving the overall performance of iterative methods. Our contribution is fourfold. (1) We propose a novel lossy checkpointing scheme that can significantly improve the checkpointing performance of iterative methods by leveraging lossy compressors. (2) We formulate a lossy checkpointing performance model and derive theoretically an upper bound for the extra number of iterations caused by the distortion of data in lossy checkpoints, in order to guarantee the performance improvement under the lossy checkpointing scheme. (3) We analyze the impact of lossy checkpointing (i.e., the extra number of iterations caused by lossy checkpointing files) for multiple types of iterative methods. (4) We evaluate the lossy checkpointing scheme with optimal checkpointing intervals on a high-performance computing environment with 2,048 cores, using the well-known scientific computation package PETSc and a state-of-the-art checkpoint/restart toolkit. Experiments show that our optimized lossy checkpointing scheme can significantly reduce the fault tolerance overhead for iterative methods by 23%∼70% compared with traditional checkpointing and 20%∼58% compared with lossless-compressed checkpointing, in the presence of system failures.
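The performance argument can be made concrete with Young's classical first-order model for the optimal checkpoint interval (the numbers below are illustrative, not from the paper): if lossy compression shrinks the checkpoint cost C, the checkpointing overhead at the optimal interval shrinks roughly with the square root of the compression ratio.

```python
import math

def young_interval(ckpt_cost, mtbf):
    """Young's first-order approximation of the optimal checkpoint interval."""
    return math.sqrt(2 * ckpt_cost * mtbf)

def overhead_fraction(ckpt_cost, mtbf):
    """Approximate fraction of time spent writing checkpoints at the optimal
    interval (one checkpoint per interval; recomputation terms omitted)."""
    return ckpt_cost / young_interval(ckpt_cost, mtbf)

# Illustrative numbers: a 60 s lossless checkpoint shrinks to 12 s with a
# 5x-smaller lossy checkpoint; mean time between failures is one day.
mtbf = 86400.0
lossless = overhead_fraction(60.0, mtbf)
lossy = overhead_fraction(12.0, mtbf)
assert lossy < lossless
print(f"overhead: lossless {lossless:.2%}, lossy {lossy:.2%}")
```

The overhead fraction reduces to sqrt(C / (2 * MTBF)), so a 5x cheaper checkpoint cuts the overhead by about sqrt(5) ≈ 2.2x; the paper's model additionally accounts for the extra solver iterations that lossy restart data can cause.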


International Conference on Big Data | 2017

In-depth exploration of single-snapshot lossy compression techniques for N-body simulations

Dingwen Tao; Sheng Di; Zizhong Chen; Franck Cappello


arXiv: Information Theory | 2018

Fixed-PSNR Lossy Compression for Scientific Data

Dingwen Tao; Sheng Di; Xin Liang; Zizhong Chen; Franck Cappello

Collaboration


Dive into Dingwen Tao's collaborations.

Top Co-Authors

Zizhong Chen, University of California
Xin Liang, University of California
Franck Cappello, Argonne National Laboratory
Panruo Wu, University of California
Sheng Di, Argonne National Laboratory
Jieyang Chen, University of California
Hongbo Li, University of California
Kaiming Ouyang, University of California
Nathan DeBardeleben, Los Alamos National Laboratory
Qiang Guan, Los Alamos National Laboratory