Hwanchol Jang
Gwangju Institute of Science and Technology
Publications
Featured research published by Hwanchol Jang.
Optics Express | 2016
Woong-Bi Lee; Hwanchol Jang; Sangjun Park; Young Min Song; Heung-No Lee
In nature, the compound eyes of arthropods have evolved towards a wide field of view (FOV), infinite depth of field and fast motion detection. However, compound eyes have inferior resolution when compared with the camera-type eyes of vertebrates, owing to inherent structural constraints such as the optical performance and the number of ommatidia. To improve resolution, in this paper we propose the COMPUtational compound EYE (COMPU-EYE), a new design that increases the acceptance angles and uses a modern digital signal processing (DSP) technique. We demonstrate that the proposed COMPU-EYE provides at least a four-fold improvement in resolution.
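At a high level, the COMPU-EYE reconstruction can be thought of as inverting a linear forward model in which each ommatidium integrates the scene over a widened acceptance angle. The snippet below is a minimal, illustrative sketch of that idea, not the authors' implementation: it assumes a 1-D scene, Gaussian acceptance profiles, and a Tikhonov-regularized least-squares inversion, with all sizes and the regularization weight chosen only for illustration.

```python
import numpy as np

# Toy forward model: M ommatidia observe an N-point 1-D scene through
# overlapping Gaussian acceptance profiles (widened acceptance angles).
rng = np.random.default_rng(0)
N, M = 200, 50                       # scene samples > number of ommatidia
x_true = np.zeros(N)
x_true[60:80] = 1.0                  # simple bar target
x_true[120:125] = 0.7

centers = np.linspace(0, N - 1, M)   # look directions of the ommatidia
sigma = 6.0                          # acceptance-angle width (assumed)
grid = np.arange(N)
A = np.exp(-0.5 * ((grid[None, :] - centers[:, None]) / sigma) ** 2)
A /= A.sum(axis=1, keepdims=True)    # each ommatidium averages the scene

y = A @ x_true + 0.01 * rng.standard_normal(M)   # noisy ommatidial readings

# Computational step: undo the overlap with Tikhonov-regularized least
# squares, estimating N scene samples from only M readings.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ y)

print(f"ommatidial readings: {M}, recovered scene samples: {N}")
print(f"relative reconstruction error: "
      f"{np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```

Because many more scene samples than readings are estimated, the recovery hinges on the overlap between acceptance profiles, which is the property the DSP step exploits.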
Optics Express | 2014
Hwanchol Jang; Changhyeong Yoon; Euiheon Chung; Wonshik Choi; Heung-No Lee
Speckle suppression is one of the most important tasks in image transmission through turbid media. Insufficient speckle suppression requires an additional procedure such as temporal ensemble averaging over multiple exposures. In this paper, we consider the image recovery process based on the so-called transmission matrix (TM) of turbid media for image transmission through such media. We show that the speckle left unremoved by TM-based image recovery can be suppressed effectively via sparse representation (SR). SR is a relatively new signal reconstruction framework that works well even for ill-conditioned problems. This is the first study to show the benefit of using SR compared with phase conjugation (PC), the de facto standard method to date for TM-based imaging through turbid media, including imaging of a live cell through a tissue slice.
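In TM-based recovery the camera field can be modeled as y = Tx + n, where T is the measured transmission matrix. Phase conjugation back-projects with the adjoint of T, while sparse representation solves an L1-regularized inverse problem. The sketch below contrasts the two on a real-valued toy model using a small ISTA loop; the matrix sizes, sparsity level, regularization weight, and iteration count are assumptions for illustration, and this is not the code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 256                       # object pixels, speckle-field samples
T = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for a measured TM

x_true = np.zeros(n)                  # sparse object: a few bright pixels
x_true[rng.choice(n, 8, replace=False)] = 1.0
y = T @ x_true + 0.02 * rng.standard_normal(m)

# Phase-conjugation-style estimate: back-project with the adjoint of T.
x_pc = T.T @ y

# Sparse-representation estimate: ISTA for
# min_x 0.5 * ||y - Tx||_2^2 + lam * ||x||_1.
def ista(T, y, lam=0.05, iters=300):
    L = np.linalg.norm(T, 2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(T.shape[1])
    for _ in range(iters):
        g = x - T.T @ (T @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

x_sr = ista(T, y)

err = lambda z: np.linalg.norm(z - x_true) / np.linalg.norm(x_true)
print(f"PC relative error: {err(x_pc):.3f}")
print(f"SR relative error: {err(x_sr):.3f}")
```

On this toy model the adjoint estimate keeps the crosstalk between TM columns as residual speckle, while the L1 solve suppresses it, which mirrors the qualitative claim of the abstract.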
Journal of Optics | 2013
Richa Khokhra; Manoj Kumar; Nitin Rawat; P. B. Barman; Hwanchol Jang; Rajesh Kumar; Heung-No Lee
Nanosheets, nanoparticles, and microstructures of ZnO were synthesized via a wet chemical method. ZnO films with a thickness of 44-46 μm were fabricated by spray coating, and these have been investigated for their potential use in turbid lens applications. A morphology-dependent comparative study of the transmittance of the ZnO turbid films was conducted. Furthermore, these ZnO turbid films were used to enhance the numerical aperture (NA) of a Nikon objective lens. The variation in NA with the different morphologies was explained by size-dependent scattering in the fabricated films. A maximum NA of around 1.971 was achieved for the objective lens with a turbid film of ZnO nanosheets.
Optics Express | 2015
Hwanchol Jang; Changhyeong Yoon; Euiheon Chung; Wonshik Choi; Heung-No Lee
The input numerical aperture (NA) of a multimode fiber (MMF) can be effectively increased by placing turbid media at the input end of the MMF. This provides the potential for high-resolution imaging through the MMF. While the input NA is increased, the number of propagation modes in the MMF, and hence the output NA, remains the same. This makes the image reconstruction process underdetermined and may limit the quality of the image reconstruction. In this paper, we aim to improve the signal-to-noise ratio (SNR) of the image reconstruction in imaging through an MMF. We observe that turbid media placed at the input of the MMF transform the incoming waves into a better format for information transmission and information extraction. We call this transformation the holistic random (HR) encoding of turbid media. By exploiting the HR encoding, we make a considerable improvement in the SNR of the image reconstruction. For efficient utilization of the HR encoding, we employ sparse representation (SR), a relatively new signal reconstruction framework that performs well when provided with an HR-encoded signal. To our knowledge, this study shows for the first time the benefit of utilizing the HR encoding of turbid media for recovery in optically underdetermined systems, where the output NA is smaller than the input NA, for imaging through an MMF.
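One way to read the holistic random encoding argument is that the turbid layer acts like a dense random mixing matrix in front of an underdetermined channel (fewer output modes than input modes), which is precisely the regime in which sparse recovery is known to succeed. The sketch below illustrates that regime generically with orthogonal matching pursuit on a random underdetermined system; the dimensions, sparsity, and noise level are assumptions, and it is not the authors' reconstruction code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 400, 120          # input modes (after HR encoding) > output modes
A = rng.standard_normal((n_out, n_in)) / np.sqrt(n_out)  # dense random mixing

x_true = np.zeros(n_in)
support = rng.choice(n_in, 10, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], 10) * rng.uniform(1.0, 2.0, 10)
y = A @ x_true + 0.01 * rng.standard_normal(n_out)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A."""
    residual, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

x_hat = omp(A, y, k=10)
print("support recovered:", set(np.flatnonzero(x_hat)) == set(support))
print(f"relative error: "
      f"{np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```

The point of the toy is only that a dense random mixing in front of a dimension-reducing channel keeps a sparse input recoverable, even though the linear system itself is underdetermined.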
international midwest symposium on circuits and systems | 2010
Hwanchol Jang; Heung-No Lee; Saeid Nooshabadi
In this paper, we propose reduced-complexity sorted orthotope sphere decoding (OSD) and zero-forcing (ZF) sorted OSD algorithms with maximum-likelihood (ML)-like performance for spatial multiplexing (SM) in a multiple-input multiple-output (MIMO) system. In comparison with the original OSD, our technique reduces the number of partial Euclidean distance (PED) computations by up to 28% and 25% for QPSK and 16-QAM 4×4 MIMO systems, respectively.
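Sphere decoding searches the lattice tree obtained from a QR decomposition of the channel and prunes any branch whose partial Euclidean distance (PED) already exceeds the current best metric; the sorted OSD variants above reduce how many PEDs have to be evaluated. For orientation only, here is a minimal textbook depth-first sphere decoder with a PED counter; it is not the sorted OSD algorithm of the paper, and the real-valued 4×4 BPSK setup is an assumption.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
Nt = 4
constellation = np.array([-1.0, 1.0])        # BPSK per real dimension (assumed)

H = rng.standard_normal((Nt, Nt))
s_true = rng.choice(constellation, Nt)
y = H @ s_true + 0.1 * rng.standard_normal(Nt)

Q, R = np.linalg.qr(H)
z = Q.T @ y                                   # rotated receive vector

best = {"metric": np.inf, "s": None, "peds": 0}

def search(level, s_partial, ped):
    """Depth-first search from the last layer (level Nt-1) down to level 0."""
    if level < 0:
        if ped < best["metric"]:
            best["metric"], best["s"] = ped, s_partial.copy()
        return
    for sym in constellation:
        s_partial[level] = sym
        upper = R[level, level:] @ s_partial[level:]
        new_ped = ped + (z[level] - upper) ** 2   # partial Euclidean distance
        best["peds"] += 1
        if new_ped < best["metric"]:              # prune if outside the sphere
            search(level - 1, s_partial, new_ped)

search(Nt - 1, np.zeros(Nt), 0.0)

# Brute-force ML detection for reference.
ml = min(product(constellation, repeat=Nt),
         key=lambda s: np.linalg.norm(y - H @ np.array(s)) ** 2)
print("SD solution equals ML:", np.allclose(best["s"], ml))
print("PED terms computed:", best["peds"],
      "vs full enumeration:", len(constellation) ** Nt * Nt)
```

Counting the PED evaluations, as done here, is exactly the complexity measure the abstract's 28% and 25% reductions refer to.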
asilomar conference on signals, systems and computers | 2011
Hwanchol Jang; Heung-No Lee; Saeid Nooshabadi
In this work, a method for predicting the pruning potential of the sphere constraint (SC) in sphere decoding (SD) is developed. Because direct prediction of the pruning potential is not easy, the orthotope constraint (OC), an approximation of the SC, is used instead. This prediction makes it possible to increase pruning at the root of the search tree in SD, which is the most desirable location for pruning.
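The relation between the sphere constraint and its orthotope relaxation can be stated compactly. With the usual notation z = Q^H y, R upper triangular, and sphere radius C, the sphere constraint implies an elementwise (orthotope) bound; the display below is a standard derivation added for orientation, not a result quoted from the paper.

```latex
\sum_{i=1}^{N} \bigl| z_i - (\mathbf{R}\mathbf{s})_i \bigr|^2 \le C^2
\quad \Longrightarrow \quad
\bigl| z_i - (\mathbf{R}\mathbf{s})_i \bigr| \le C \quad \text{for all } i .
```

At the root layer of the search tree, (Rs)_i depends on a single symbol, so the number of symbols satisfying the orthotope bound can be counted before any PED is accumulated; this is one reading of why the OC serves as a cheap proxy for the pruning potential of the SC, though the exact prediction rule of the paper may differ.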
IEEE Transactions on Vehicular Technology | 2017
Hwanchol Jang; Saeid Nooshabadi; Kiseon Kim; Heung-No Lee
We propose a low-complexity, complex-valued sphere decoding (CV-SD) algorithm, which is referred to as circular sphere decoding (CSD) and is applicable to multiple-input–multiple-output (MIMO) systems with arbitrary 2-D constellations. CSD provides a new constraint test. This constraint test is carefully designed so that the elementwise dependence is removed in the metric computation for the test. As a result, the constraint test becomes simple to perform without restriction on its constellation structure. By additionally employing this simple test as a prescreening test, CSD reduces the complexity of the CV-SD search. We show that the complexity reduction is significant, while its maximum-likelihood (ML) performance is not compromised. We also provide a powerful tool to estimate the pruning capacity of any particular search tree. Using this tool, we propose the predict-and-change strategy, which leads to a further complexity reduction in CSD. Extension of the proposed methods to soft output sphere decoding (SD) is also presented.
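A toy illustration of a constellation-agnostic constraint test: after a complex-valued QR decomposition, the per-layer bound |z_i - R_ii s| ≤ C describes a disc in the complex plane around z_i / R_ii, and membership can be checked with one complex distance per candidate regardless of the constellation's shape. The snippet below sketches that prescreening idea only; it is not the CSD constraint test as specified in the paper, and the 8-PSK constellation and radius are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
Nt = 4
# Arbitrary 2-D constellation: 8-PSK (no rectangular structure).
constellation = np.exp(1j * 2 * np.pi * np.arange(8) / 8)

H = (rng.standard_normal((Nt, Nt))
     + 1j * rng.standard_normal((Nt, Nt))) / np.sqrt(2)
s_true = rng.choice(constellation, Nt)
y = H @ s_true + 0.05 * (rng.standard_normal(Nt)
                         + 1j * rng.standard_normal(Nt))

Q, R = np.linalg.qr(H)
z = Q.conj().T @ y
C = 1.0                                     # sphere radius (assumed)

# At the last layer, |z_i - R_ii * s| <= C is a disc centred at z_i / R_ii.
# Checking it costs one complex distance per candidate and does not rely on
# any real/imaginary (elementwise) structure of the constellation, so it can
# serve as a cheap prescreen before any multi-layer PED is accumulated.
i = Nt - 1
centre = z[i] / R[i, i]
radius = C / abs(R[i, i])
passed = [s for s in constellation if abs(s - centre) <= radius]

print("candidates passing the circular prescreen:",
      len(passed), "of", len(constellation))
```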
Proceedings of SPIE | 2016
Hwanchol Jang; Changhyeong Yoon; Wonshik Choi; Tae Joong Eom; Heung-No Lee
We provide an approach to improve the quality of image reconstruction in wide-field imaging through turbid media (WITM). In WITM, a calibration stage that measures the transmission matrix (TM), the set of responses of the turbid medium to a set of plane waves with different incident angles, precedes the image recovery. The TM is then used to estimate the object image in the image recovery stage. In this work, we aim to estimate a highly resolved angular spectrum and use it for high-quality image reconstruction. To this end, we propose to perform dense sampling for the TM measurement in the calibration stage, with finer incident-angle spacing. In conventional approaches, the incident-angle spacing is made large enough that the columns of the TM fall outside the memory-effect range of the turbid medium; otherwise, the columns of the TM are correlated and the inversion becomes difficult. We employ compressed sensing (CS) for successful high-resolution angular spectrum recovery with the densely sampled TM. CS is a relatively new information acquisition and reconstruction framework and has been shown to provide superb performance in ill-conditioned inverse problems. We observe that image quality metrics such as the contrast-to-noise ratio and the mean squared error are improved, and that the perceptual image quality is improved with reduced speckle noise in the reconstructed image. These results show that the WITM performance can be improved simply by performing dense sampling in the calibration stage and using an efficient signal reconstruction framework, without elaborating the overall optical imaging system.
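The trade-off the paper addresses can be seen in a toy numerical comparison: wide angular spacing keeps TM columns nearly uncorrelated, while dense sampling inside the memory-effect range produces many correlated columns and an ill-conditioned inversion, which is the situation compressed sensing is meant to handle. The sketch below is only a stand-in model of that effect; the pixel count, column counts, and the way correlation is injected are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
m = 400                                   # camera pixels per TM column (assumed)

def toy_tm(n_angles, correlated):
    """Stand-in transmission matrix: one column per incident plane-wave angle.
    With correlated=True, neighbouring columns are mixed to mimic the
    correlation that appears once the angular spacing becomes finer than the
    memory-effect range of the medium."""
    T = rng.standard_normal((m, n_angles))
    if correlated:
        T = T + 0.9 * np.roll(T, 1, axis=1) + 0.9 * np.roll(T, -1, axis=1)
    return T / np.linalg.norm(T, axis=0)

def report(name, T):
    G = np.abs(T.T @ T)                   # column cross-correlations
    np.fill_diagonal(G, 0.0)
    print(f"{name}: columns={T.shape[1]}, "
          f"max column correlation={G.max():.2f}, "
          f"condition number={np.linalg.cond(T):.1f}")

report("coarse angular sampling (conventional)", toy_tm(60, correlated=False))
report("dense angular sampling (proposed)", toy_tm(240, correlated=True))
# The densely sampled TM has many more, strongly correlated columns, so plain
# inversion is ill-conditioned; a sparsity-promoting (compressed-sensing)
# reconstruction is what makes this regime usable.
```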
asilomar conference on signals, systems and computers | 2011
Sangjun Park; Hwanchol Jang; Heung-No Lee
In this paper, we analyze the performance limit of a multiple sensor system (MSS) based on compressive sensing. In our MSS, all of the sensors measure signals from a common source, so there is redundancy in the measured signals. To reduce communication costs, this redundancy must be removed. For this purpose, we use compressive sensing at each sensor to obtain compressed measurements. After all of the sensors obtain compressed measurements, they transmit them to a central unit. A decoder at the central unit receives all of the transmitted signals and attempts to jointly estimate the correct support set, which is the set of indices corresponding to the locations of the non-zero coefficients of the measured signals. To analyze our MSS, we present a jointly typical decoder inspired by recent work [4]. We first obtain an upper bound on the probability that the jointly typical decoder fails to estimate the correct support set. Next, we prove that as the number of sensors increases, the number of compressed measurements per sensor (per-sensor measurements) can be reduced to the sparsity, i.e., the number of non-zero coefficients in the measured signal. We also present the number of sensors that is sufficient as the noise variance increases.
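The qualitative message, that pooling evidence across sensors lets the number of per-sensor measurements shrink toward the sparsity level, can be illustrated with a much simpler estimator than the jointly typical decoder analyzed in the paper. The sketch below pools matched-filter energies across sensors that share a common support; the signal length, sparsity, noise level, and sensor/measurement combinations are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 128, 5                      # signal length and sparsity (assumed)
support = rng.choice(n, k, replace=False)

def trial(num_sensors, m_per_sensor, noise=0.05):
    """Estimate the common support by pooling evidence across sensors."""
    score = np.zeros(n)
    for _ in range(num_sensors):
        x = np.zeros(n)
        x[support] = rng.standard_normal(k)        # common support per sensor
        Phi = rng.standard_normal((m_per_sensor, n)) / np.sqrt(m_per_sensor)
        y = Phi @ x + noise * rng.standard_normal(m_per_sensor)
        score += (Phi.T @ y) ** 2                  # pooled correlation energy
    est = set(np.argsort(score)[-k:])              # k largest scores
    return est == set(support)

for sensors, m in [(1, 10), (4, 10), (16, 10), (64, 6)]:
    rate = np.mean([trial(sensors, m) for _ in range(20)])
    print(f"sensors={sensors:3d}, per-sensor measurements={m}: "
          f"support recovery rate={rate:.2f}")
```

A single sensor with few measurements rarely identifies the support, while many sensors sharing the estimation task succeed even with per-sensor measurement counts close to the sparsity, which mirrors the trend proved in the paper.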
Journal of Nanoparticle Research | 2013
Pawan Kumar; Nitin Rawat; P. B. Barman; S. C. Katyal; Hwanchol Jang; Heung-No Lee; Rajesh Kumar