Jeongseok Ha
KAIST
Publications
Featured research published by Jeongseok Ha.
IEEE Transactions on Information Forensics and Security | 2011
Demijan Klinc; Jeongseok Ha; Steven W. McLaughlin; João Barros; Byung-Jae Kwak
This paper presents a coding scheme for the Gaussian wiretap channel based on low-density parity-check (LDPC) codes. The messages are transmitted over punctured bits to hide data from eavesdroppers. The proposed coding scheme is asymptotically effective in the sense that it yields a bit-error rate (BER) very close to 0.5 for an eavesdropper whose signal-to-noise ratio (SNR) is lower than the threshold SNR_E, even if the eavesdropper has the ability to use a bitwise maximum a posteriori (MAP) decoder. Such codes also achieve high reliability for the friendly parties provided they have an SNR above a second threshold SNR_B. It is shown how asymptotically optimized LDPC codes are designed with differential evolution, where the goal is to achieve high reliability between friendly parties while keeping the security gap SNR_B/SNR_E as small as possible to protect against passive eavesdroppers. The proposed coding scheme is encodable in linear time, applicable at finite block lengths, and can be combined with existing cryptographic schemes to deliver improved data security by taking advantage of the stochastic nature of many communication channels.
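The security gap mentioned in the abstract is the ratio between the lowest SNR at which the legitimate receiver decodes reliably (SNR_B) and the highest SNR at which the eavesdropper's BER stays near 0.5 (SNR_E). A minimal sketch of this bookkeeping, using made-up threshold values rather than the paper's actual design results:

```python
# The security gap is a ratio of linear SNRs, so in dB it is a difference.
# The threshold values below are illustrative only, not from the paper.

def security_gap_db(snr_b_db: float, snr_e_db: float) -> float:
    """Security gap in dB between the reliability and security thresholds."""
    return snr_b_db - snr_e_db

# Hypothetical thresholds: Bob needs 2.0 dB, Eve's BER stays ~0.5 below 0.5 dB
gap = security_gap_db(2.0, 0.5)
print(gap)  # 1.5 dB
```

A smaller gap means the system tolerates eavesdroppers whose channel quality is only slightly worse than the legitimate receiver's.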
IEEE Transactions on Information Theory | 2011
Hyoungsuk Jeon; Namshik Kim; Jinho Choi; Hyuckjae Lee; Jeongseok Ha
We investigate the secrecy capacity of an ergodic fading wiretap channel when the main and eavesdropper channels are correlated. Assuming that the transmitter knows the full channel state information (CSI) (i.e., the channel gains from the transmitter to the legitimate receiver and to the eavesdropper), we quantify the loss of secrecy capacity due to the correlation and investigate the asymptotic behavior of the secrecy capacity in the high signal-to-noise ratio (SNR) regime. While the ergodic capacity of fading channels generally grows logarithmically with SNR, we find that the secrecy capacity converges to an upper bound (for which a closed-form expression is derived) that is a function of two channel parameters: the correlation coefficient and the ratio of the main to eavesdropper channel gains. From this, we see how the two channel parameters affect the secrecy capacity and conclude that excessively large signal power does not help to improve the secrecy capacity, and that the loss due to correlation can be significant, especially when the ratio of the main to eavesdropper channel gains is low.
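The quantity studied above can be estimated numerically. The sketch below uses the standard full-CSI ergodic secrecy rate E[(log2(1+SNR|h_m|^2) − log2(1+SNR|h_e|^2))^+] for correlated Rayleigh fading; it is a Monte Carlo illustration, not the paper's closed-form bound, and the values of rho, ratio, and snr are assumed for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho = 0.5     # correlation coefficient between main and eavesdropper channels
ratio = 2.0   # ratio of main to eavesdropper average channel gains
snr = 100.0   # transmit SNR (linear scale, i.e. 20 dB)

# Unit-variance complex Gaussian main channel and independent innovation
h_m = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
# Eavesdropper channel correlated with the main channel, with smaller gain
h_e = (rho * h_m + np.sqrt(1 - rho**2) * z) / np.sqrt(ratio)

# Ergodic secrecy rate: average positive part of the rate difference
cs = np.maximum(np.log2(1 + snr * np.abs(h_m) ** 2)
                - np.log2(1 + snr * np.abs(h_e) ** 2), 0.0).mean()
```

Re-running with a larger `snr` shows the estimate saturating rather than growing logarithmically, consistent with the convergence to an upper bound described in the abstract.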
information theory workshop | 2009
Demijan Klinc; Jeongseok Ha; Steven W. McLaughlin; João Barros; Byung-Jae Kwak
A coding scheme for the Gaussian wiretap channel based on low-density parity-check (LDPC) codes is presented. The messages are transmitted over punctured bits to hide data from eavesdroppers. It is shown by means of density evolution that the BER of an eavesdropper, who operates below the code's SNR threshold and has the ability to use a bitwise MAP decoder, increases to 0.5 within a few dB. It is shown how asymptotically optimized LDPC codes can be designed with differential evolution, where the goal is to achieve high reliability between friendly parties and security against a passive eavesdropper while keeping the security gap as small as possible. The proposed coding scheme is also efficiently encodable in almost linear time.
international conference on communications | 2003
Jeongseok Ha; Steven W. McLaughlin
In this paper, we consider rate compatible puncturing of low density parity check (LDPC) codes. We present a general density evolution-based procedure which finds the optimal puncturing of a base code. We show that puncturing can be performed across a range of rates and code lengths in a manner that produces punctured codes with good thresholds. This allows one to implement a single optimal LDPC code of a low rate that can be punctured across a wide range of rates without loss of threshold performance. Simulation results show that the error floors of the codes do not degrade after puncturing.
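The rate-compatibility idea above rests on a simple relationship: puncturing (not transmitting) a fraction of the coded bits raises the effective rate of a fixed base code. A small sketch of that arithmetic, with a rate-1/2 base code and illustrative puncturing fractions:

```python
# Effective rate after puncturing: the same k information bits are carried by
# n*(1 - f) transmitted bits, so R = R0 / (1 - f) for puncturing fraction f.

def punctured_rate(base_rate: float, puncture_fraction: float) -> float:
    """Effective code rate of a base code after puncturing a bit fraction."""
    return base_rate / (1.0 - puncture_fraction)

# One rate-1/2 mother code spans a whole family of higher rates:
rates = [round(punctured_rate(0.5, f), 3) for f in (0.0, 0.2, 0.375, 0.5)]
print(rates)  # [0.5, 0.625, 0.8, 1.0]
```

The paper's contribution is choosing *which* bits to puncture (via density evolution) so that each of these rates keeps a good decoding threshold.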
international midwest symposium on circuits and systems | 2011
Jinho Choi; Jeongseok Ha
Distributed beamforming using multiple relay nodes is an effective means of providing power-efficient transmission with diversity gain. Various approaches have been proposed to decide the weight of each relay so that the relays can cooperatively forward signals. The maximum signal-to-noise ratio (MSNR) criterion has been widely adopted as a performance measure in deciding relay weights. In this paper, we instead adopt the minimum mean square error (MMSE) criterion to decide the relay weights, as the MMSE criterion readily allows a distributed implementation.
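As background for the MMSE criterion mentioned above, the textbook MMSE weight solution is w = R⁻¹p, where R is the covariance of the received vector and p its cross-correlation with the desired symbol. The sketch below is that generic solution with hypothetical numbers, not the paper's specific relay-network formulation:

```python
import numpy as np

def mmse_weights(R: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Solve R w = p for the MMSE combining weights (w = R^-1 p)."""
    return np.linalg.solve(R, p)

# Toy example (hypothetical values): 3 relays, white noise of variance 0.1
rng = np.random.default_rng(1)
h = rng.standard_normal(3)            # effective relay channel gains
R = np.outer(h, h) + 0.1 * np.eye(3)  # rank-one signal part plus white noise
p = h.copy()                          # cross-correlation with the data symbol
w = mmse_weights(R, p)
```

The appeal for distributed operation is that, under suitable structure in R, each relay can compute its own weight from locally available statistics rather than a network-wide optimization.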
international symposium on information theory | 2004
Jeongseok Ha; Jaehong Kim; Steven W. McLaughlin
In this paper we study and propose an algorithm to puncture finite length low density parity check (LDPC) codes (Ha, J, et al., 2002). The introduced puncturing criterion results in good performance (for 1024 and 4096 bits) when compared with both random puncturing and dedicated LDPC codes, i.e. unpunctured codes designed for a given rate. The comparison also shows that the proposed punctured LDPC codes have better block-error rates than the dedicated codes because of longer effective block lengths of the high-rate puncturing. Although we apply the idea for regular LDPC codes, we can easily modify the idea for irregular LDPC codes.
vehicular technology conference | 2007
Donghyuk Shin; Kyoungwoo Heo; Sangbong Oh; Jeongseok Ha
Low-density parity-check (LDPC) codes have an inherent stopping criterion: the parity-check constraints (equations). By testing the parity-check constraints, an LDPC decoder can detect successful decoding and stop, which is not possible with turbo codes. In this paper, we propose a stopping criterion that predicts decoding failure of LDPC codes instead of detecting successful decoding. If the decoder predicts decoding failure in advance, the receiver can respond to the transmitter more rapidly and request additional parity bits through an automatic repeat request (ARQ) protocol, which reduces overall system latency. The receiver can also save power by avoiding unnecessary decoder iterations. The proposed stopping criterion makes use of the variations in the number of satisfied parity-check constraints during belief-propagation (BP) decoding, a quantity that is already computed in conventional BP decoding to detect successful decoding. Thus, the proposed stopping criterion does not require any additional complexity. Counting the satisfied parity-check constraints reveals the behavior of the BP decoder that would otherwise have to be observed through changes in multi-bit log-likelihood ratio (LLR) values, at additional complexity.
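The quantity the criterion tracks is the per-iteration count of satisfied parity checks, i.e. the number of zero entries in the syndrome Hx mod 2. A minimal sketch of that count, using a (7,4) Hamming parity-check matrix as a stand-in for an LDPC matrix (the paper's criterion then watches how this count varies across BP iterations):

```python
import numpy as np

def satisfied_checks(H: np.ndarray, hard_bits: np.ndarray) -> int:
    """Number of parity-check equations satisfied by the hard decisions."""
    return int(np.sum((H @ hard_bits) % 2 == 0))

# Parity-check matrix of a (7,4) Hamming code (illustrative small example)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

c = np.zeros(7, dtype=int)      # the all-zero codeword
ok = satisfied_checks(H, c)     # all 3 checks hold: decoding success detected
c[0] = 1                        # inject a single bit error
bad = satisfied_checks(H, c)    # count drops, signalling trouble
print(ok, bad)  # 3 1
```

Since conventional BP decoding already evaluates the syndrome each iteration to test for success, monitoring this count adds no complexity, which is the point of the abstract's last sentences.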
Wireless Personal Communications | 2002
Jeongseok Ha; Apurva N. Mody; Joon Hyun Sung; John R. Barry; Steven W. McLaughlin; Gordon L. Stüber
Two-transmit, two-receive space-time processing with LDPC coding is evaluated for OFDM transmission. The two methods for space-time processing are Alamouti's combining and the SVD technique. The channel estimates are calculated and provided to the diversity combiner, the SVD filters, and the LDPC decoder. Noise variance estimates are provided to the LDPC decoder. Using the proposed scheme, we obtain a BER of 10^-5 at an SNR of 2.6 dB with a spectral efficiency of 0.4 bits/sec/Hz, and at 14.5 dB with a spectral efficiency of 4.2 bits/sec/Hz.
Applied Optics | 2003
Hossein Pishro-Nik; Nazanin Rahnavard; Jeongseok Ha; Ali Adibi
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to substantially reduce the bit error rate. Prior knowledge of the noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have performance superior to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulations show that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.
IEEE Transactions on Information Theory | 2003
Jeongseok Ha; Steven W. McLaughlin
We consider low-density parity-check code (LDPCC) design for additive white Gaussian noise (AWGN) channels with erasures. This model, for example, represents a common situation in magnetic and optical recording where defects or thermal asperities in the system are detected and presented to the decoder as erasures. We give thresholds of regular and irregular LDPCCs and discuss practical code design over the mixed Gaussian/erasures channel. The analysis is an extension of the Gaussian approximation work of Chung et al. In the two limiting cases of no erasures and large signal-to-noise ratio (SNR), the analysis tends to the results of Chung et al. (see ibid., vol. 47, pp. 657-670, Feb. 2001) and Luby et al. (1997), respectively, giving a general tool for a class of mixed channels. We derive a steady-state equation which gives a graphical interpretation of decoder convergence. This allows one to estimate the maximum erasure capability on the mixture channel, or conversely, to estimate the additional signal power required to compensate for the loss due to erasures. We see that a good (capacity-approaching) LDPCC over an AWGN channel is also good over the mixed channel up to a moderate erasure probability. We also investigate practical issues such as the maximum number of iterations of message-passing decoders, the coded block length, and the type of erasure pattern (random/block erasures). Finally, we design an LDPCC optimized for the mixed channel, which shows better performance when the erasure probability is larger than a certain value (0.1 in our simulations), at the expense of performance degradation in the unerased (pure AWGN) and low-erasure-probability regions (less than 0.1 in our simulations).
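On the mixed Gaussian/erasures channel described above, the decoder's channel inputs combine the two limiting cases in a simple way: an erased position carries no information (LLR 0), while an unerased position gets the usual AWGN LLR 2y/σ² for BPSK. A minimal sketch of this initialization (the variable names and values are illustrative, not from the paper):

```python
import numpy as np

def channel_llrs(y: np.ndarray, erased: np.ndarray, sigma2: float) -> np.ndarray:
    """Initial LLRs for BPSK over AWGN with erasures: erased bits get LLR 0,
    unerased bits get the standard AWGN LLR 2y/sigma^2."""
    llr = 2.0 * y / sigma2
    llr[erased] = 0.0
    return llr

# Toy received vector: third position flagged as erased by the detector
y = np.array([0.9, -1.1, 0.2])
erased = np.array([False, False, True])
llrs = channel_llrs(y, erased, 1.0)
print(llrs)  # [ 1.8 -2.2  0. ]
```

With this initialization, standard message-passing decoding handles both impairments at once, which is why the analysis can interpolate between the pure-AWGN and pure-erasure limiting results.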