Yen-Chin Liao
National Chiao Tung University
Publications
Featured research published by Yen-Chin Liao.
international conference on communications | 2008
Yi Hsuan Wu; Yu Ting Liu; Hsiu-Chi Chang; Yen-Chin Liao; Hsie-Chia Chang
A technique to prune the paths of the K-best sphere decoding algorithm (SDA) based on a radius constraint is presented. The proposed scheme preserves the breadth-first searching nature, and the distinct radii for each decoding layer are theoretically derived from the system model together with the noise statistics. In addition, based on the data range provided by the radius, a low-complexity sorting strategy is proposed. The proposed method can be applied to SDAs with various path cost functions; the Euclidean norm and the sum of absolute differences are demonstrated in this paper. With an SNR degradation of less than 0.2 dB, more than 47% and 90% of the computational complexity can be reduced in 16-QAM and 64-QAM 4×4 MIMO detection, respectively.
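The breadth-first K-best search with per-layer radius pruning can be sketched as follows. This is a minimal real-valued model, assuming an upper-triangular channel matrix R and hypothetical fixed per-layer radii passed in as constants, whereas the paper derives the radii from the noise statistics:

```python
def kbest_detect(R, y, symbols, K, radii):
    """K-best breadth-first tree search for y ~ R @ s with R upper
    triangular.  Partial paths whose accumulated cost exceeds the
    radius of the current layer are pruned before the K-best sort."""
    n = len(y)
    paths = [((), 0.0)]                      # (partial symbol tuple, cost)
    for layer in range(n - 1, -1, -1):       # detect from the last layer up
        cand = []
        for tail, cost in paths:
            for s in symbols:
                full_tail = (s,) + tail      # full_tail[j] = symbol at layer+j
                est = sum(R[layer][layer + j] * full_tail[j]
                          for j in range(len(full_tail)))
                c = cost + (y[layer] - est) ** 2
                if c <= radii[layer]:        # radius-constraint pruning
                    cand.append((full_tail, c))
        cand.sort(key=lambda t: t[1])        # keep the K best survivors
        paths = cand[:K]
    return paths[0][0] if paths else None
```

Tightening `radii` discards unpromising paths before sorting, which is where the complexity saving in the paper comes from.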
international symposium on circuits and systems | 2006
Hong-An Huang; Yen-Chin Liao; Hsie-Chia Chang
This paper presents a method for compensating the truncation error of fixed-width Booth multipliers, which keep the input and output bit-widths the same. The truncated part that produces the carry-out bits is replaced with a carry-estimation equation. To reduce the truncation error, multipliers of different input widths use different carry-estimation equations. Simulation results show that our self-compensation method leads to an 85% reduction in truncation error compared with direct-truncation multipliers, as well as a 40% reduction in area compared with traditional Booth multipliers. Compared with a 128-point fast Fourier transform (FFT) using traditional Booth multipliers, our approach achieves a 10% area reduction with only 1 dB of SQNR loss.
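The truncation error being compensated can be illustrated with a small behavioral model. This sketch uses a plain unsigned array multiplier rather than Booth encoding, and the `carry_est` argument is a hypothetical stand-in for the paper's input-width-dependent carry-estimation equations:

```python
def fixed_width_mul(a, b, n, carry_est=0):
    """Model of an n-bit fixed-width multiplier (unsigned, non-Booth).
    Each partial product's bits below column n are discarded, as in a
    direct-truncation design; `carry_est` is an estimated carry count
    injected at column n to compensate for the lost carry-outs.
    The exact reference result would be (a * b) >> n."""
    acc = 0
    for i in range(n):
        if (b >> i) & 1:
            acc += (a << i) >> n     # keep only columns >= n of this PP
    return acc + carry_est
```

Because the dropped columns can only contribute non-negative carries, the direct-truncation result never exceeds the exact one, which is why a positive carry estimate reduces the average error.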
signal processing systems | 2006
Yen-Chin Liao; Hsie-Chia Chang; Chih-Wei Liu
An n-bit fixed-width multiplier keeps the input width and output width the same by truncating the n least significant output bits. To reduce complexity, direct-truncation multipliers omit the half of the partial products corresponding to the truncated part; however, this introduces a large truncation error. Thus, error compensation, which is equivalent to estimating the carry bits, is required. In this paper, three carry-estimation schemes based on the dependency among the partial products and the inputs are proposed. Not only is this dependency investigated, but a statistical analysis of the estimation approaches is also provided. Applying the proposed schemes, at least 84% of the truncation error can be reduced.
international symposium on circuits and systems | 2008
Yi-Kai Lin; Chih-Lung Chen; Yen-Chin Liao; Hsie-Chia Chang
The progressive edge-growth (PEG) algorithm has been proven to be a simple and effective approach to designing good LDPC codes. However, the Tanner graph constructed by the PEG algorithm is non-structured, which leaves the positions of 1s in the corresponding parity-check matrix fully random. In this paper, we propose a general method based on the PEG algorithm to construct structured Tanner graphs. These hardware-oriented LDPC codes can reduce VLSI implementation complexity. Similar to the PEG method, our CP-PEG approach can be used to construct both regular and irregular Tanner graphs with flexible parameters. Modifications of the proposed algorithm that address encoding complexity and the error floor are also discussed. Simulation results show that our codes, in terms of bit error rate (BER) and packet error rate (PER), outperform other PEG-based LDPC codes and are better than the codes in IEEE 802.16e.
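The baseline PEG construction that CP-PEG builds on can be sketched as below. This is the original non-structured PEG (the paper's circulant-permutation constraint is not reproduced): each new edge of a variable node attaches to a check node that is farthest away in the current graph, or unreachable, breaking ties by lowest check-node degree:

```python
def peg(n_vars, n_checks, var_degrees):
    """Minimal progressive edge-growth: grows a Tanner graph edge by
    edge, maximizing the local girth for each new edge."""
    var_nb = [set() for _ in range(n_vars)]    # checks adjacent to each var
    chk_nb = [set() for _ in range(n_checks)]  # vars adjacent to each check
    for v in range(n_vars):
        for _ in range(var_degrees[v]):
            # BFS from v through the current graph, level by level
            seen_chk, seen_var, frontier = set(), {v}, {v}
            last_level = set()
            while True:
                level = {c for u in frontier for c in var_nb[u]} - seen_chk
                if not level:
                    break
                last_level = level
                seen_chk |= level
                frontier = {u for c in level for u in chk_nb[c]} - seen_var
                if not frontier:
                    break
                seen_var |= frontier
            # prefer unreachable checks (no new cycle); else the deepest level
            unreachable = set(range(n_checks)) - seen_chk
            cands = unreachable or (last_level - var_nb[v])
            c = min(cands, key=lambda c: (len(chk_nb[c]), c))
            var_nb[v].add(c)
            chk_nb[c].add(v)
    return var_nb
```

CP-PEG would additionally restrict the candidate set so that every chosen edge lands on a circulant-permutation position, yielding the structured parity-check matrix the paper targets.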
IEEE Communications Letters | 2013
Kuo-Kuang Yen; Yen-Chin Liao; Chih-Lung Chen; Hsie-Chia Chang
In this letter, we propose a scheme to modify the robust Soliton distribution (RSD) with respect to the expected ripple size. We adjust only the proportions of degree 1, degree 2, and the maximum degree to derive the modified RSD (MRSD). Thus the proposed scheme contains only two variables, and its complexity does not increase with the code length. Our objective is to increase the mean of the expected ripple size while simultaneously decreasing its variance. Furthermore, sequential quadratic programming is introduced to maximize the objective function under certain constraints. Simulation results show that, across different code lengths, MRSD saves 2% to 5.8% in the overhead required to decode all input symbols, compared to RSD.
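The baseline RSD that the MRSD re-tunes is the standard LT-code construction and can be computed directly. This sketch reproduces only the RSD itself; the paper's two-variable adjustment of the degree-1, degree-2, and maximum-degree proportions (and the SQP optimization) is not shown:

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust Soliton degree distribution over degrees 1..k:
    ideal Soliton rho plus the spike-and-tail term tau, renormalized."""
    R = c * math.log(k / delta) * math.sqrt(k)
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [0.0] * k
    spike = int(k / R)
    for d in range(1, k + 1):
        if d < spike:
            tau[d - 1] = R / (d * k)
        elif d == spike:
            tau[d - 1] = R * math.log(R / delta) / k
    beta = sum(rho) + sum(tau)
    return [(r + t) / beta for r, t in zip(rho, tau)]
```

An MRSD-style modification would move probability mass between index 0 (degree 1), index 1 (degree 2), and index k-1 before renormalizing, leaving all other entries untouched.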
IEEE Transactions on Multimedia | 2013
Kuo-Kuang Yen; Yen-Chin Liao; Chih-Lung Chen; John K. Zao; Hsie-Chia Chang
The performance of LT codes is highly related to the code length. When the code length is short, a decoder is more likely to deplete degree-1 encoding symbols and terminate at an early stage. In this work, we modify the robust Soliton distribution (RSD) and increase the degree-1 proportion, so that more degree-1 encoding symbols can be generated to relieve early decoding termination. The proportion of low degrees other than degree 1 is also reduced, so receivers collect fewer encoding symbols carrying redundant information. In addition, a Non-Repetitive (NR) encoding scheme is proposed to avoid producing repeated degree-1 encoding symbols. To improve video transmission quality, previous studies redesigned LT codes to provide Unequal Error Protection (UEP) for different Scalable Video Coding (SVC) layers. Unlike those studies, which modify the code structure, we integrate multiple NR encoders to achieve UEP capability. Experimental results show that our UEP scheme outperforms previous studies in terms of PSNR.
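The non-repetitive idea can be sketched as an LT encoder that remembers which inputs it has already emitted at degree 1. This is a hypothetical minimal interpretation of the NR scheme (the fallback-to-degree-2 choice and the degree distribution are assumptions, and the paper's multi-encoder UEP integration is not shown):

```python
import random

def nr_lt_encode(data, dist, rng=None):
    """Yield LT encoding symbols (index tuple, XOR of chosen inputs).
    Degree-1 symbols never repeat an input already sent at degree 1;
    once every input has been covered, degree 1 falls back to degree 2."""
    rng = rng or random.Random(0)
    k = len(data)
    sent_deg1 = set()
    degrees = list(range(1, len(dist) + 1))
    while True:
        d = rng.choices(degrees, weights=dist)[0]
        if d == 1:
            fresh = [i for i in range(k) if i not in sent_deg1]
            if fresh:
                i = rng.choice(fresh)
                sent_deg1.add(i)
                yield (i,), data[i]
                continue
            d = 2                         # all inputs covered at degree 1
        idx = tuple(rng.sample(range(k), d))
        sym = 0
        for i in idx:
            sym ^= data[i]                # XOR the selected input symbols
        yield idx, sym
```

With an ordinary LT encoder, two degree-1 symbols may cover the same input and one is wasted; the `sent_deg1` set removes exactly that redundancy.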
IEEE Transactions on Signal Processing | 2007
Yen-Chin Liao; Chien-Ching Lin; Hsie-Chia Chang; Chih-Wei Liu
The min-sum algorithm is the most common method of simplifying the belief-propagation algorithm for decoding low-density parity-check (LDPC) codes. However, there exists a performance gap between the min-sum and belief-propagation algorithms due to the nonlinear approximation. In this paper, a self-compensation technique using dynamic normalization is thus proposed to improve the approximation accuracy. The proposed scheme scales the min-sum algorithm by a dynamic factor that can be derived theoretically from order statistics. Moreover, applying the proposed technique to several LDPC codes for the DVB-S2 system, the average signal-to-noise ratio degradation, which results from approximation inaccuracy and quantization error, is reduced to 0.2 dB. Not only does the proposed self-compensation technique enhance the error-correcting capability of the min-sum algorithm, it also preserves a modest hardware cost. When realized with a 0.13-μm standard cell library, the dynamic normalization requires only about 100 additional gates for each check-node unit in the min-sum algorithm.
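The check-node update being normalized looks as follows. This sketch uses a fixed factor `alpha` purely to show where the scaling enters; in the paper the factor is dynamic, derived from order statistics of the incoming message magnitudes:

```python
def check_node_update(msgs, alpha=0.8):
    """Normalized min-sum check-node update.  For each output edge, the
    magnitude is the minimum |message| over all *other* edges, scaled by
    alpha, and the sign is the product of the other messages' signs."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        mag = min(abs(m) for m in others)
        out.append(sign * alpha * mag)   # alpha < 1 shrinks the min-sum
    return out                           # overestimate toward belief propagation
```

Since plain min-sum systematically overestimates the belief-propagation magnitude, a factor below 1 closes the gap; making it dynamic per iteration is what the paper's order-statistics derivation provides.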
signal processing systems | 2005
Yen-Chin Liao; Chien-Ching Lin; Chih-Wei Liu; Hsie-Chia Chang
In this paper, a dynamic normalization technique is proposed to approximate the nonlinear operation in decoding LDPC codes. The criterion for determining the normalization factor is also presented with theoretical analysis. The proposed method improves the approximation accuracy as well as the error performance of the min-sum algorithm. Furthermore, the hardware implementation benefits from a simplified normalization scheme, leading to reductions in complexity and implementation loss.
international symposium on circuits and systems | 2004
Hung-Yueh Lin; Tay-Jyi Lin; Chie-Min Chao; Yen-Chin Liao; Chih-Wei Liu; Chein-Wei Jen
Embedded DSP applications demand more dynamic range to prevent overflow and higher precision to improve quality. The most straightforward way to satisfy both is to use floating-point arithmetic, where each data sample is represented by an exponent and a mantissa and the data are normalized on every operation. Designers prefer fixed-point arithmetic for its much simpler hardware, but they must frequently and explicitly scale down intermediate results to prevent overflow. In this paper, we propose an alternative: a static floating-point unit, where the operands are represented as normalized fractional numbers, similar to the mantissa part of a floating-point unit. We instead use static techniques to normalize the intermediate values, with implicit exponent tracking in our software tool. Simulation results show that the proposed approach improves both the rounding error and the execution cycle count of fixed-point units with similar hardware complexity.
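The static floating-point idea can be illustrated with a Q1.15 fractional multiply whose exponent bookkeeping is carried alongside the mantissa. This is a loose behavioral sketch under assumed conventions (Q1.15 mantissas, magnitudes renormalized into [0.5, 1)); the actual hardware would see only the fractional multiply, with all exponent tracking done statically by the software tool:

```python
def to_q15(x):
    """Quantize a real value in [-1, 1) to a Q1.15 integer."""
    return max(-32768, min(32767, round(x * 32768)))

def sfp_mul(ma, ea, mb, eb):
    """'Static floating-point' multiply: Q1.15 mantissas with statically
    known exponents.  Returns the renormalized (mantissa, exponent);
    the renormalization shifts would be resolved at compile time."""
    m = (ma * mb) >> 15                  # fractional multiply in Q1.15
    e = ea + eb
    while m != 0 and -16384 < m < 16384: # bring |mantissa| back into [0.5, 1)
        m <<= 1
        e -= 1
    return m, e
```

For example, 3.5 = 0.875 * 2^2 and 2.0 = 0.5 * 2^2 multiply to 0.875 * 2^3 = 7.0, with the exponent arithmetic done entirely outside the datapath.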
signal processing systems | 2007
Hsiu-Chi Chang; Yen-Chin Liao; Hsie-Chia Chang
In multiple-input multiple-output (MIMO) systems, maximum-likelihood (ML) detection provides good performance; however, exhaustively searching for the ML solution becomes infeasible as the numbers of antennas and constellation points increase. Thus ML detection is often realized by the K-best sphere decoding algorithm. In this paper, two techniques are proposed to reduce the complexity of the K-best algorithm while maintaining an error probability similar to that of ML detection. The proposed K-best with predicted candidates approach reduces the computational complexity, and the proposed adaptive K-best algorithm provides a means to determine the value of K according to the received signals. Simulation results show that the complexity reduction of the 64-best algorithm ranges from 48% to 85%, while the corresponding SNR degradation is kept between 0.13 dB and 1.1 dB for a 64-QAM 4 × 4 MIMO system.