Publication


Featured research published by Chu-Hsiang Huang.


IEEE Transactions on Communications | 2014

Gallager B LDPC Decoder with Transient and Permanent Errors

Chu-Hsiang Huang; Yao Li; Lara Dolecek

In this paper, the performance of a noisy Gallager B decoder used to decode regular LDPC codes is studied. We assume that the noisy decoder is subject to both transient processor errors and permanent memory errors. Due to the asymmetric nature of permanent errors, we model error propagation in the decoder via a suitable asymmetric channel. We then develop a density evolution type analysis on this asymmetric channel. The recursive expression for the bit error probability is derived as a function of the code parameters (node degrees), codeword weight, transmission error rate and the error rates of the permanent and the transient errors. Based on this analysis, we then derive the residual error of the Gallager B decoder for the regime where the transmission error rate and the processing error rates are small. In this regime, we further observe that the residual error can be well approximated by the sum of suitably combined transient errors and permanent errors, provided that the check node degree is large enough. Based on this insight we then propose and analyze a simple scheme for detecting permanent errors. The scheme exploits the parity check equations of the code itself and reuses the existing hardware to locate permanent errors in memory blocks. With high probability, the detection scheme discovers correct locations of permanent memory errors, while, with low probability, it mislabels the functional memory as being defective.
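
The recursive bit-error-probability analysis can be illustrated numerically. The sketch below runs a density-evolution-style recursion for a regular Gallager B decoder and injects transient processing errors as an independent symmetric flip of each computed message with probability alpha; the fixed flipping threshold, the symmetric-flip model, and the parameter values are simplifying assumptions for illustration and do not capture the paper's asymmetric permanent-error channel.

```python
# Density-evolution sketch for a noisy Gallager B decoder on a (dv, dc)-regular
# LDPC ensemble. Transient processing errors are modeled as an independent
# symmetric flip of each computed message with probability alpha -- a
# simplifying assumption; the asymmetric permanent-error channel studied in
# the paper is not modeled here.
from math import comb

def gallager_b_de(p0, dv, dc, alpha, b=None, iters=50):
    """Message error probability after `iters` iterations.

    p0     : transmission (channel) error rate
    dv, dc : variable and check node degrees
    alpha  : transient processing error rate (symmetric flip per message)
    b      : flipping threshold (here fixed; Gallager B normally optimizes it)
    """
    if b is None:
        b = dv - 1                       # flip only if all other neighbors disagree
    p = p0
    for _ in range(iters):
        # Check-to-variable message error rate, followed by a transient flip.
        q = (1.0 - (1.0 - 2.0 * p) ** (dc - 1)) / 2.0
        q = q * (1 - alpha) + (1 - q) * alpha
        # Variable-to-check update: flip the received value only if at least
        # b of the dv-1 incoming check messages disagree with it.
        stay_wrong = sum(comb(dv - 1, t) * (1 - q) ** t * q ** (dv - 1 - t)
                         for t in range(b))         # received bit wrong, not corrected
        go_wrong = sum(comb(dv - 1, t) * q ** t * (1 - q) ** (dv - 1 - t)
                       for t in range(b, dv))       # received bit right, flipped anyway
        p = p0 * stay_wrong + (1 - p0) * go_wrong
        p = p * (1 - alpha) + (1 - p) * alpha       # transient flip at the variable node
    return p

if __name__ == "__main__":
    for alpha in (0.0, 1e-4, 1e-3):
        print(f"alpha={alpha}: residual error = {gallager_b_de(0.02, 4, 8, alpha):.2e}")
```

For small alpha the printed residual error settles at a floor proportional to the processing error rate rather than vanishing, which mirrors the observation in the abstract that processing errors, not the channel, dominate the residual error.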


IEEE Communications Letters | 2012

Optimal Design of a Gallager B Noisy Decoder for Irregular LDPC Codes

S. M. Sadegh Tabatabaei Yazdi; Chu-Hsiang Huang; Lara Dolecek

In this letter, we study the performance of a noisy Gallager B decoder used to decode irregular low-density parity-check (LDPC) codes. We derive the final bit error rate (BER) as a function of both the transmission noise and processing errors. We allow different components of the decoder associated with certain computational units (i.e., bit and check nodes of varying degrees) to have different processing errors. We formulate an optimization problem to distribute available processing resources across different components of a noisy decoder to achieve minimal BER. Simulations demonstrate that the optimal resource allocation derived from our analysis outperforms uninformed (random) resource assignment.
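
As a toy version of the resource-allocation problem formulated in the letter, the sketch below splits a fixed reliability budget between the bit-node and check-node processing units by grid search, using a simplified symmetric-noise density-evolution recursion (a regular ensemble, for compactness) as the BER predictor. The budget model, in which a unit's error rate is inversely proportional to the resources it receives, and all parameter values are assumptions made for this illustration.

```python
# Toy resource-allocation search for a noisy decoder: split a reliability
# budget between bit-node and check-node processing units to minimize the
# final BER. The BER predictor is a simplified symmetric-noise, Gallager-B-style
# recursion on a regular ensemble, and the cost model (error rate inversely
# proportional to allocated resources) is an assumption for illustration; this
# is not the letter's exact irregular-code formulation.
from math import comb

def final_ber(p0, dv, dc, alpha_v, alpha_c, iters=30):
    b = dv - 1                                         # fixed flipping threshold
    p = p0
    for _ in range(iters):
        q = (1 - (1 - 2 * p) ** (dc - 1)) / 2
        q = q * (1 - alpha_c) + (1 - q) * alpha_c      # check-unit processing errors
        stay = sum(comb(dv - 1, t) * (1 - q) ** t * q ** (dv - 1 - t) for t in range(b))
        flip = sum(comb(dv - 1, t) * q ** t * (1 - q) ** (dv - 1 - t) for t in range(b, dv))
        p = p0 * stay + (1 - p0) * flip
        p = p * (1 - alpha_v) + (1 - p) * alpha_v      # bit-unit processing errors
    return p

def allocate(p0=0.02, dv=4, dc=8, budget=2e4, steps=99):
    """Grid search: resources r_v and budget-r_v give error rates 1/r_v and 1/(budget-r_v)."""
    best = None
    for i in range(1, steps + 1):
        r_v = budget * i / (steps + 1)
        ber = final_ber(p0, dv, dc, 1.0 / r_v, 1.0 / (budget - r_v))
        if best is None or ber < best[0]:
            best = (ber, r_v)
    return best

if __name__ == "__main__":
    ber, r_v = allocate()
    uninformed = final_ber(0.02, 4, 8, 2.0 / 2e4, 2.0 / 2e4)   # even (uninformed) split
    print(f"optimized split (bit units get {r_v:.0f} of 20000): BER = {ber:.2e}")
    print(f"even split:                                          BER = {uninformed:.2e}")
```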


IEEE Transactions on Communications | 2015

Belief Propagation Algorithms on Noisy Hardware

Chu-Hsiang Huang; Yao Li; Lara Dolecek

The wide recognition that emerging nano-devices will be inherently unreliable motivates the evaluation of information processing algorithms running on noisy hardware as well as the design of robust schemes for reliable performance against hardware errors of varied characteristics. In this paper, we investigate the performance of a popular statistical inference algorithm, belief propagation (BP) on probabilistic graphical models, implemented on noisy hardware, and we propose two robust implementations of the BP algorithm targeting different computation noise distributions. We assume that the BP messages are subject to zero-mean transient additive computation noise. We focus on graphical models satisfying the contraction mapping condition that guarantees the convergence of the noise-free BP. We first upper bound the distances between the noisy BP messages and the fixed point of (noise-free) BP as a function of the iteration number. Next, we propose two implementations of BP, namely, censoring BP and averaging BP, that are robust to computation noise. Censoring BP rejects incorrect computations to keep the algorithm on the right track to convergence, while averaging BP takes the average of the messages across all iterations so far to mitigate the effects of computation noise. Censoring BP works effectively when, with high probability, the computation noise is exactly zero, and averaging BP, although having a slightly larger overhead, works effectively for general zero-mean computation noise distributions. Sufficient conditions on the convergence of censoring BP and averaging BP are derived. Simulations on the Ising model demonstrate that the two proposed implementations successfully converge to the fixed point achieved by noise-free BP. Additionally, we apply averaging BP to a BP-based image denoising algorithm and as a BP decoder for LDPC codes. In the image denoising application, averaging BP successfully denoises an image even when nominal BP fails to do so in the presence of computation noise. In the BP LDPC decoder application, the power of averaging BP is manifested by the reduction in the residual error rates compared with the nominal BP decoder.
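
A minimal sketch of the averaging-BP idea, assuming a small Ising chain as the graphical model: sum-product BP is run with each message update corrupted by zero-mean Gaussian computation noise, and beliefs are then read off either from the latest (noisy) messages or from the running average of all messages computed so far. The chain, its potentials, and the Gaussian noise level are illustrative assumptions; the censoring variant and the convergence conditions from the paper are not reproduced.

```python
# Averaging BP sketch on a small Ising chain. Each sum-product message update
# is hit by zero-mean Gaussian computation noise; "averaging BP" forms beliefs
# from the running average of all messages computed so far. Model size,
# potentials, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, J, SIGMA, ITERS = 8, 0.4, 0.05, 50
h = rng.normal(0.0, 0.5, size=N)                          # random local fields
PAIR = np.exp(J * np.outer([-1.0, 1.0], [-1.0, 1.0]))     # psi(x_i, x_j), x in {-1,+1}
LOCAL = np.exp(np.outer(h, [-1.0, 1.0]))                  # phi_i(x_i) for each node
EDGES = [(i, i + 1) for i in range(N - 1)] + [(i + 1, i) for i in range(N - 1)]

def run_bp(noise_std=0.0, average=False):
    msgs = {e: np.full(2, 0.5) for e in EDGES}            # messages m_{i->j}(x_j)
    acc = {e: np.zeros(2) for e in EDGES}                 # running sums for averaging
    for _ in range(ITERS):
        new = {}
        for (i, j) in EDGES:
            incoming = np.ones(2)
            for (k, tgt) in EDGES:
                if tgt == i and k != j:
                    incoming = incoming * msgs[(k, i)]
            m = PAIR.T @ (LOCAL[i] * incoming)            # sum over x_i
            m = m + rng.normal(0.0, noise_std, size=2)    # additive computation noise
            m = np.clip(m, 1e-9, None)
            new[(i, j)] = m / m.sum()
            acc[(i, j)] += new[(i, j)]
        msgs = new
    use = {e: acc[e] / ITERS for e in EDGES} if average else msgs
    beliefs = np.empty((N, 2))
    for i in range(N):
        b = LOCAL[i].copy()
        for (k, tgt) in EDGES:
            if tgt == i:
                b = b * use[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs

b_exact = run_bp()                                        # noise-free sum-product BP
b_noisy = run_bp(noise_std=SIGMA)                         # nominal BP under noise
b_avg = run_bp(noise_std=SIGMA, average=True)             # averaging BP under noise
print("max belief error, nominal BP under noise  :", np.abs(b_noisy - b_exact).max())
print("max belief error, averaging BP under noise:", np.abs(b_avg - b_exact).max())
```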


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2013

Analysis of finite-alphabet iterative decoders under processing errors

Chu-Hsiang Huang; Lara Dolecek

It is widely recognized that emerging hardware technologies will be inherently unreliable. In this paper, we study the performance of finite-alphabet iterative decoders when implemented on noisy hardware built out of unreliable components. We derive a recursive expression for the error probability in terms of both the transmission noise and processing errors. We allow different components of the decoding algorithm associated with certain computational units (i.e., bit and check nodes of varying degrees in the underlying graph) to be implemented using a collection of processors with varying levels of processing error rates. Performance analysis and optimal resource allocation of a noisy Gallager E decoder is presented as an application example of our general derivation. Simulations demonstrate that the implementation of a noisy iterative decoder according to the proposed analysis-guided optimal resource allocation outperforms implementations based on uninformed resource allocation under the common resource budget.
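
To make the finite-alphabet setting concrete, the sketch below runs a density-evolution recursion for a Gallager E style decoder whose messages live in the ternary alphabet {-1, 0, +1}, with separate processing error rates for the check-node and bit-node units. The processing-error model (a computed symbol is replaced by one of the other two symbols uniformly at random), the fixed weight given to the channel value, and the parameter values are simplifying assumptions for illustration, not the paper's exact derivation.

```python
# Density-evolution sketch for a noisy Gallager E style decoder with ternary
# messages in {-1, 0, +1} (+1 = correct decision under the all-(+1) codeword
# assumption). A computed message is replaced, with probability alpha_c (check
# units) or alpha_v (bit units), by one of the other two symbols uniformly at
# random -- an assumed error model. The channel-value weight w is kept fixed,
# whereas the full algorithm normally adapts it per iteration.
from math import comb
from itertools import product

def noisy_symbol(dist, alpha):
    """Apply the symmetric ternary processing-error channel to (P(-1), P(0), P(+1))."""
    m, z, p = dist
    return (m * (1 - alpha) + (z + p) * alpha / 2,
            z * (1 - alpha) + (m + p) * alpha / 2,
            p * (1 - alpha) + (m + z) * alpha / 2)

def gallager_e_de(p0, dv, dc, w, alpha_v, alpha_c, iters=30):
    dist = (p0, 0.0, 1.0 - p0)                   # initial messages = channel decisions
    for _ in range(iters):
        m, z, p = dist
        # Check node: product of incoming signs, erased if any input is erased.
        zc = 1.0 - (1.0 - z) ** (dc - 1)
        pc = ((p + m) ** (dc - 1) + (p - m) ** (dc - 1)) / 2.0
        mc = ((p + m) ** (dc - 1) - (p - m) ** (dc - 1)) / 2.0
        mc, zc, pc = noisy_symbol((mc, zc, pc), alpha_c)
        # Bit node: sign of w*y + sum of dv-1 incoming ternary messages.
        out = [0.0, 0.0, 0.0]                    # probabilities of -1, 0, +1
        for y, py in ((+1, 1.0 - p0), (-1, p0)):
            for a, b in product(range(dv), repeat=2):     # a pluses, b minuses
                if a + b > dv - 1:
                    continue
                zeros = dv - 1 - a - b
                prob = (py * comb(dv - 1, a) * comb(dv - 1 - a, b)
                        * pc ** a * mc ** b * zc ** zeros)
                total = w * y + a - b
                out[2 if total > 0 else (0 if total < 0 else 1)] += prob
        dist = noisy_symbol(tuple(out), alpha_v)
    return dist

if __name__ == "__main__":
    for a in (0.0, 1e-3):
        m, z, _ = gallager_e_de(p0=0.03, dv=3, dc=6, w=1, alpha_v=a, alpha_c=a)
        print(f"alpha={a}: P(wrong) = {m:.2e}, P(erased) = {z:.2e}")
```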


IEEE Transactions on Communications | 2015

ACOCO: Adaptive Coding for Approximate Computing on Faulty Memories

Chu-Hsiang Huang; Yao Li; Lara Dolecek

With scaling of process technologies and increase in process variations, embedded memories will be inherently unreliable. Approximate computing is a new class of techniques that relax the accuracy requirement of computing systems. In this paper, we present the Adaptive Coding for approximate Computing (ACOCO) framework, which provides us with an analysis-guided design methodology to develop adaptive codes for different computations on the data read from faulty memories. In ACOCO, we first compress the data by introducing distortion in the source encoder, and then add redundant bits to protect the data against memory errors in the channel encoder. We are thus able to protect the data against memory errors without additional memory overhead so that the coded data have the same bit-length as the uncoded data. We design the source encoder by first specifying a cost function measuring the effect of the data compression on the system output, and then design the source code according to this cost function. We develop adaptive codes for two types of systems under ACOCO. The first type of systems we consider, which includes many machine learning and graph-based inference systems, is the systems dominated by product operations. We evaluate the cost function statistics for the proposed adaptive codes, and demonstrate its effectiveness via two application examples: max-product image denoising and naïve Bayesian classification. Next, we consider another type of systems: iterative decoders with min operation and sign-bit decision, which are widely applied in wireless communication systems. We develop an adaptive coding scheme for the min-sum decoder subject to memory errors. A density evolution analysis and simulations on finite length codes both demonstrate that the decoder with our adaptive code achieves a residual error rate that is on the order of the square of the residual error rate achieved by the nominal min-sum decoder.
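
The compress-then-protect idea behind ACOCO can be illustrated on a single stored operand headed into a product computation. In the sketch below, an 8-bit value is compressed by dropping its least significant bit (a small, bounded distortion) and the freed position is reused to carry a copy of the most significant bit, whose corruption would be the costliest for a product; the coded word keeps the original 8-bit length. The bit layout, the choice to duplicate the MSB, and the average-of-candidates fallback on a copy mismatch are assumptions made for this illustration and are not the paper's exact code constructions.

```python
# Illustrative same-bit-length "compress, then protect" code in the spirit of
# ACOCO (not the paper's exact construction). An 8-bit operand is compressed
# by dropping its LSB, and the freed position stores a copy of the MSB, whose
# corruption is the most damaging to a product computation. On read, a copy
# mismatch means the MSB is unreliable, and the two candidate values are
# averaged -- an assumed fallback rule chosen for this sketch.
WIDTH = 8
MSB = 1 << (WIDTH - 1)
MASK = (1 << WIDTH) - 1

def encode(x: int) -> int:
    """Drop the LSB of x and store a copy of the MSB in the LSB position."""
    msb_copy = 1 if x & MSB else 0
    return (x & ~1 & MASK) | msb_copy

def decode(word: int) -> float:
    """Reconstruct an estimate of x from a possibly corrupted stored word."""
    msb, msb_copy = (word & MSB) >> (WIDTH - 1), word & 1
    base = word & MASK & ~1 & ~MSB               # middle bits, as stored
    if msb == msb_copy:
        return base + msb * MSB                  # copies agree: trust the MSB
    return base + 0.5 * MSB                      # copies disagree: hedge halfway

if __name__ == "__main__":
    x = 0b10110101                               # 181
    w = encode(x)
    print("clean read        :", x, "->", decode(w))          # 180: only the LSB is lost
    print("MSB flip, coded   :", x, "->", decode(w ^ MSB))     # detected, error damped
    print("MSB flip, uncoded :", x, "->", x ^ MSB)             # full 128-magnitude error
```

For product operations the damage of a bit flip grows with the bit's significance, so trading LSB accuracy for protection of the high-order bits is the trade worth making; ACOCO's cost function formalizes this kind of trade-off for a given computation.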


Information Technology | 2015

NSF expedition on variability-aware software: Recent results and contributions

Lucas Francisco Wanner; Liangzhen Lai; Abbas Rahimi; Mark Gottscho; Pietro Mercati; Chu-Hsiang Huang; Frederic Sala; Yuvraj Agarwal; Lara Dolecek; Nikil D. Dutt; Puneet Gupta; Rajesh K. Gupta; Ranjit Jhala; Rakesh Kumar; Sorin Lerner; Subhasish Mitra; Alexandru Nicolau; Tajana Simunic Rosing; Mani B. Srivastava; Steven Swanson; Dennis Sylvester; Yuanyuan Zhou

In this paper, we summarize recent results and contributions from the NSF Expedition on Variability-Aware Software, a five-year, multi-university effort to tackle the problem of hardware variations and their implications and opportunities in software. The Expedition has made contributions in characterization and online monitoring of variations (particularly in microprocessors and flash memories), proposed new coding techniques for variability-tolerant storage, provided tools and platforms for the development of variability-aware software, and created new runtime support systems for variability-aware task scheduling and execution.


IEEE Transactions on Communications | 2015

Orthogonal Matching Pursuit on Faulty Circuits

Yao Li; Yuejie Chi; Chu-Hsiang Huang; Lara Dolecek

With the wide recognition that modern nanoscale devices will be error-prone, characterization of reliability of information processing systems built out of unreliable components has become an important topic. In this paper, we analyze the performance of orthogonal matching pursuit (OMP), a popular sparse recovery algorithm, running on faulty circuits. We identify sufficient conditions for correct recovery of the signal support and express these conditions in terms of the relationship among signal magnitudes, sparsity, and the mutual incoherence of the measurement matrix. We study both the effects of additive errors in arithmetic computations and logical errors in comparators. We find that the additive errors in the OMP computations have an impact on the overall performance comparable to that of the additive noise in the input measurements. We also show that parallel structures are more robust to logical errors than serial structures in the implementation of a noisy arg max operation, and thus lead to a better OMP performance.
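
The remark about comparator structure can be checked with a small simulation. The sketch below implements the arg max used in OMP's atom-selection step twice, once as a serial linear scan and once as a parallel tournament-tree reduction, with every pairwise comparison flipped independently with probability eps; the fault model, problem size, and error rate are assumptions for illustration, and the full OMP recovery loop is omitted.

```python
# Serial vs. parallel (tournament) arg max under faulty comparators, as in the
# atom-selection step of OMP. Each pairwise comparison returns the wrong
# answer independently with probability eps -- an illustrative fault model;
# the full OMP recovery loop is not reproduced here.
import random

random.seed(0)

def noisy_greater(a, b, eps):
    """Return (a > b), flipped with probability eps."""
    return (a > b) != (random.random() < eps)

def argmax_serial(vals, eps):
    best = 0
    for i in range(1, len(vals)):                 # a chain of n-1 comparisons
        if noisy_greater(vals[i], vals[best], eps):
            best = i
    return best

def argmax_tournament(vals, eps):
    idx = list(range(len(vals)))
    while len(idx) > 1:                           # about log2(n) rounds of pairwise duels
        nxt = [idx[-1]] if len(idx) % 2 else []   # odd element gets a bye
        for j in range(0, len(idx) - 1, 2):
            a, b = idx[j], idx[j + 1]
            nxt.append(a if noisy_greater(vals[a], vals[b], eps) else b)
        idx = nxt
    return idx[0]

if __name__ == "__main__":
    n, eps, trials = 64, 0.02, 20000
    wrong_serial = wrong_tree = 0
    for _ in range(trials):
        vals = [random.random() for _ in range(n)]
        truth = random.randrange(n)
        vals[truth] = 2.0                         # a clear maximum (the "correct atom")
        wrong_serial += argmax_serial(vals, eps) != truth
        wrong_tree += argmax_tournament(vals, eps) != truth
    print("serial scan error rate:", wrong_serial / trials)
    print("tournament  error rate:", wrong_tree / trials)
```

In the serial scan the running maximum must survive a comparison at every remaining position, so its failure probability grows with the list length, whereas in the tournament it participates in only a logarithmic number of comparisons.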


International Symposium on Information Theory (ISIT) | 2013

Gallager B LDPC Decoder with Transient and Permanent Errors

Chu-Hsiang Huang; Yao Li; Lara Dolecek

This paper studies the performance of a noisy Gallager B decoder for regular LDPC codes. We assume that the noisy decoder is subject to both transient processor errors and permanent memory errors. We permit different error rates at different functional components. In addition, for the sake of generality, we allow asymmetry in the permanent error rates of component outputs, and thus we model error propagation in the decoder via a suitable asymmetric channel. We then develop a density evolution-type analysis on this asymmetric channel. The recursive expression for the bit error probability is derived as a function of the code parameters (node degrees), codeword weight, transmission error rate, and the error rates of the permanent and the transient errors. Based on this analysis, we then derive the residual error of the Gallager B decoder for the regime where the transmission error rate and the processing error rates are small. In this regime, we further observe that the residual error rate can be well approximated by a suitable combination of the transient error rate and the permanent error rate at variable nodes, provided that the check node degree is large enough. Based on this insight, we then propose and analyze a scheme for detecting permanent errors and correcting detected residual errors. The scheme exploits the parity check equations of the code and reuses the existing hardware to locate permanent errors in memory blocks. Performance analysis and simulation results show that, with high probability, the detection scheme discovers correct locations of permanent memory errors, while, with low probability, it mislabels the functional memory as being defective. The proposed error detection-and-correction scheme can be implemented in-circuit and is useful in combating failures arising from aging.
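
One plausible flavor of the parity-check-based detection idea, written as a standalone heuristic rather than the paper's procedure: bits that keep showing up in unsatisfied parity checks across many reads accumulate suspicion, and a stuck memory cell stands out because every word stored through it re-triggers the same checks. The (7,4) Hamming code, the stuck-at fault injection, and the counting rule below are all illustrative assumptions; localization of this kind relies on the faulty bit participating in several checks, as variable nodes of an LDPC code do.

```python
# Heuristic localization of a stuck-at memory cell using the code's own parity
# checks (in the spirit of, but not identical to, the detection scheme above).
# Codewords are written through a memory with one stuck-at-1 cell; bits that
# repeatedly appear in unsatisfied checks accumulate "suspicion", and the
# stuck location stands out. Code choice, fault position, and error rates are
# illustrative assumptions.
import random

random.seed(3)

# Parity-check matrix of the (7,4) Hamming code (rows = checks, columns = bits).
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(msg):
    """Systematic (7,4) Hamming encoding consistent with H above."""
    m0, m1, m2, m3 = msg
    return [m0, m1, m2, m3, m0 ^ m1 ^ m3, m0 ^ m2 ^ m3, m1 ^ m2 ^ m3]

STUCK_POS, STUCK_VAL = 3, 1            # permanent fault: cell 3 is stuck at 1
TRANSIENT = 0.01                       # transient flip rate on each read

suspicion = [0] * 7
for _ in range(200):
    word = encode([random.randint(0, 1) for _ in range(4)])
    word[STUCK_POS] = STUCK_VAL                                   # permanent error
    word = [b ^ (random.random() < TRANSIENT) for b in word]      # transient errors
    for row in H:
        if sum(c * b for c, b in zip(row, word)) % 2:             # unsatisfied check
            for i, c in enumerate(row):
                suspicion[i] += c
print("per-bit suspicion counts:", suspicion)
print("flagged as permanently faulty cell:", suspicion.index(max(suspicion)))
```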


Design, Automation and Test in Europe (DATE) | 2016

Error resilience and energy efficiency: An LDPC decoder design study

Philipp Schläfer; Chu-Hsiang Huang; Clayton Schoeny; Christian Weis; Yao Li; Norbert Wehn; Lara Dolecek

Iterative decoding algorithms for low-density parity check (LDPC) codes have an inherent fault tolerance. In this paper, we exploit this robustness and optimize an LDPC decoder for high energy efficiency: we reduce energy consumption by opportunistically increasing error rates in decoder memories, while still achieving successful decoding in the final iteration. We develop a theory-guided unequal error protection (UEP) technique. UEP is implemented using dynamic voltage scaling that controls the error probability in the decoder memories on a per iteration basis. Specifically, via a density evolution analysis of an LDPC decoder, we first formulate the optimization problem of choosing an appropriate error rate for the decoder memories to achieve successful decoding under minimal energy consumption. We then propose a low complexity greedy algorithm to solve this optimization problem and map the resulting error rates to the corresponding supply voltage levels of the decoder memories in each iteration of the decoding algorithm. We demonstrate the effectiveness of our approach via ASIC synthesis results of a decoder for the LDPC code in the IEEE 802.11ad standard, implemented in 28nm FD-SOI technology. The proposed scheme achieves an increase in energy efficiency of up to 40% compared to the state-of-the-art solution.
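
A sketch of the greedy selection step, with invented numbers: each memory supply-voltage level comes with an energy cost and a bit-flip probability, and at every decoding iteration the cheapest level is chosen for which a density-evolution-style predictor can still reach the target residual error by running the most reliable level over the remaining iterations. The predictor is a simplified symmetric-noise, Gallager-B-style recursion standing in for the paper's analysis of the 802.11ad min-sum decoder, and the voltage/energy/error-rate table is hypothetical.

```python
# Greedy per-iteration memory-voltage selection, sketched with invented
# numbers. Each level has a relative energy cost and a memory flip
# probability; the cheapest level is accepted at an iteration if a simplified
# Gallager-B-style density-evolution predictor (a stand-in for the paper's
# min-sum analysis) can still meet the target residual error by using the most
# reliable level for all remaining iterations.
from math import comb

DV, DC, P0, MAX_ITERS, TARGET = 4, 8, 0.02, 30, 1e-5
B = DV - 1
# (voltage label, relative energy per iteration, memory flip probability) -- hypothetical
LEVELS = [("0.6V", 1.0, 3e-3), ("0.7V", 1.4, 5e-4), ("0.8V", 1.9, 5e-5), ("0.9V", 2.5, 1e-6)]

def de_step(p, alpha):
    """One decoding iteration of the predictor with memory flip probability alpha."""
    q = (1 - (1 - 2 * p) ** (DC - 1)) / 2
    q = q * (1 - alpha) + (1 - q) * alpha
    stay = sum(comb(DV - 1, t) * (1 - q) ** t * q ** (DV - 1 - t) for t in range(B))
    flip = sum(comb(DV - 1, t) * q ** t * (1 - q) ** (DV - 1 - t) for t in range(B, DV))
    p = P0 * stay + (1 - P0) * flip
    return p * (1 - alpha) + (1 - p) * alpha

def reachable(p, iters_left, alpha_best):
    """Can the target still be met by running the most reliable level from here on?"""
    for _ in range(iters_left):
        if p <= TARGET:
            return True
        p = de_step(p, alpha_best)
    return p <= TARGET

def greedy_schedule():
    alpha_best = LEVELS[-1][2]
    p, schedule, energy = P0, [], 0.0
    for it in range(MAX_ITERS):
        name, cost, alpha = LEVELS[-1]                    # fallback: most reliable level
        for cand in LEVELS:                               # try the cheapest level first
            if reachable(de_step(p, cand[2]), MAX_ITERS - it - 1, alpha_best):
                name, cost, alpha = cand
                break
        p = de_step(p, alpha)
        schedule.append(name)
        energy += cost
        if p <= TARGET:
            break
    return schedule, energy, p

if __name__ == "__main__":
    schedule, energy, p = greedy_schedule()
    print("per-iteration voltage schedule:", schedule)
    print(f"energy: {energy:.1f} vs {len(schedule) * LEVELS[-1][1]:.1f} "
          f"for an all-0.9V run of the same length; predicted residual error: {p:.1e}")
```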


International Symposium on Information Theory (ISIT) | 2015

Adaptive error correction coding scheme for computations in the noisy min-sum decoder

Chu-Hsiang Huang; Yao Li; Lara Dolecek

With scaling of process technologies and increase in process variations, embedded memories will be inherently unreliable. In this paper, we propose redundancy-free adaptive error-correcting codes for the noisy min-sum decoder subject to memory errors. We consider the popular memory error model based on a binary symmetric channel. We first revisit the density evolution analysis proposed by Balatsoukas-Stimming and Burg for the noisy min-sum decoder. Two important consequences of the density evolution analysis are: (a) after a large enough number of iterations, most of the messages have large magnitudes, and the residual errors are mostly from the sign bit flips due to memory failures, and (b) errors in the least significant bits in large-magnitude messages have a negligible effect on the residual error rate. We thus propose adaptive error-correcting codes to protect sign bits using least significant bits when the messages have large magnitudes. The proposed coding scheme does not require any further data storage (i.e., this code is redundancy-free). Density evolution analysis for the noisy min-sum decoder implementing the proposed coding scheme is derived, demonstrating that the proposed decoder achieves a residual error rate that is on the order of the square of the residual error rate achieved by the nominal min-sum decoder. Simulation results for a finite-block-length LDPC code also agree with this density evolution analysis.
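
A minimal sketch of the sign-protection idea on a single stored message, assuming a sign-magnitude register: when the magnitude is large, the magnitude's least significant bit (which barely matters to the min operation) is overwritten with a copy of the sign, and on read-back a mismatch between the two copies is treated as an unreliable sign, zeroing the message. An undetected sign flip then needs two memory bit errors, which is the mechanism behind the squared residual error rate; the register width, threshold, and the zero-on-mismatch rule are assumptions for this sketch rather than the paper's exact decoding rule.

```python
# Redundancy-free protection of the sign of a stored min-sum message, sketched
# for a sign-magnitude register: when |m| is large, the magnitude LSB is
# overwritten with a copy of the sign. On read, a copy mismatch is treated as
# an unreliable sign and the message is zeroed (an assumed rule for this
# sketch). An undetected sign flip then requires two bit errors.
import random

random.seed(7)
Q, THRESH = 6, 16                    # 1 sign bit + 5 magnitude bits; protect if |m| >= 16

def write(m):
    """Encode message m (|m| < 32) into a Q-bit sign-magnitude word."""
    s, mag = (1 if m < 0 else 0), abs(m)
    if mag >= THRESH:
        mag = (mag & ~1) | s                     # copy the sign into the magnitude LSB
    return (s << (Q - 1)) | mag

def read(word):
    """Decode a possibly corrupted word back into a message value."""
    s, mag = word >> (Q - 1), word & ((1 << (Q - 1)) - 1)
    if mag >= THRESH and (mag & 1) != s:
        return 0                                 # sign copies disagree: erase the message
    return -mag if s else mag

def store(word, p):
    """Pass the Q memory cells through a binary symmetric channel."""
    for k in range(Q):
        if random.random() < p:
            word ^= 1 << k
    return word

if __name__ == "__main__":
    p, trials, m = 0.01, 100000, 27              # a large-magnitude positive message
    sign_flips_coded = sign_flips_plain = 0
    for _ in range(trials):
        sign_flips_coded += read(store(write(m), p)) < 0
        sign_flips_plain += (store(m, p) >> (Q - 1)) == 1        # uncoded storage of +27
    print("uncoded sign-flip rate:", sign_flips_plain / trials)  # about p
    print("coded   sign-flip rate:", sign_flips_coded / trials)  # order p squared
```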

Collaboration


Dive into Chu-Hsiang Huang's collaborations.

Top Co-Authors

Lara Dolecek, University of California
Yao Li, Akamai Technologies
Abbas Rahimi, University of California
Chung-Kai Yu, University of California
Frederic Sala, University of California
Greg Pottie, University of California