SukHwan Lim
Stanford University
Publication
Featured research published by SukHwan Lim.
international conference on image processing | 2001
SukHwan Lim; A. El Gamal
Gradient-based optical flow estimation methods such as the Lucas-Kanade (1981) method work well for scenes with small displacements but fail when objects move with large displacements. Hierarchical matching-based methods do not suffer from large displacements but are less accurate. By utilizing the high speed imaging capability of CMOS image sensors, the frame rate can be increased to obtain more accurate optical flow with a wide range of scene velocities in real time. Further, by integrating the memory and processing with the sensor on the same chip, optical flow estimation using high frame rate sequences can be performed without unduly increasing the off-chip data rate. The paper describes a method for obtaining high-accuracy optical flow at a standard frame rate using high frame rate sequences. The Lucas-Kanade method is used to obtain optical flow estimates at high frame rate, which are then accumulated and refined to obtain optical flow estimates at a standard frame rate. The method is tested on video sequences synthetically generated by perspective warping. The results demonstrate significant improvements in optical flow estimation accuracy with moderate memory and computational power requirements.
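The accumulate-then-refine idea above can be illustrated with a minimal sketch: a least-squares Lucas-Kanade estimate per high-rate frame pair, summed over the pairs spanning one standard-rate interval. This is an assumption-laden simplification (single flow vector per block, no refinement step), not the authors' implementation; the function names are hypothetical.

```python
import numpy as np

def lucas_kanade_block(I0, I1):
    """Estimate one (vx, vy) for a block by least squares on the
    brightness-constancy equation Ix*vx + Iy*vy + It = 0."""
    Ix = np.gradient(I0, axis=1)   # spatial gradient, x
    Iy = np.gradient(I0, axis=0)   # spatial gradient, y
    It = I1 - I0                   # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # displacement in pixels per high-rate frame

def accumulate_flow(frames):
    """Sum per-pair estimates over a high frame rate sequence to get
    the displacement over one standard-rate frame interval."""
    total = np.zeros(2)
    for f0, f1 in zip(frames[:-1], frames[1:]):
        total += lucas_kanade_block(f0, f1)
    return total
```

Because each high-rate displacement is small, the small-motion assumption of Lucas-Kanade holds per pair even when the total motion over the standard-rate interval is large.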
IEEE Transactions on Circuits and Systems | 2004
SukHwan Lim; A. El Gamal
The paper describes a method that uses a video sequence to correct gain fixed pattern noise (FPN) in an image sensor. The captured sequence and its optical flow are used to estimate gain FPN. Assuming brightness constancy along the motion trajectories, the pixels are grouped in blocks and the gains of the pixels in each block are estimated by iteratively minimizing the sum of the squared brightness variations along the motion trajectories. Significant reductions in gain FPN are demonstrated using both real and synthetically generated video sequences with modest computations.
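The iterative minimization described above can be sketched as alternating least squares on the bilinear model obs = gain x brightness, with brightness held constant along each motion trajectory. This is a hypothetical simplification for illustration (trajectories given as pixel indices, no block grouping), not the paper's exact algorithm.

```python
import numpy as np

def estimate_gain_fpn(obs, pix, n_pixels, iters=200):
    """Alternating least squares for obs[k, t] = g[pix[k, t]] * b[k]:
    b[k] is the constant brightness of trajectory k, g[p] the gain of
    pixel p. obs, pix have shape (num_trajectories, num_frames)."""
    g = np.ones(n_pixels)
    for _ in range(iters):
        gk = g[pix]                                  # gains along each trajectory
        b = (gk * obs).sum(1) / (gk ** 2).sum(1)     # per-trajectory brightness
        num = np.zeros(n_pixels)
        den = np.zeros(n_pixels)
        np.add.at(num, pix, b[:, None] * obs)        # accumulate per-pixel terms
        np.add.at(den, pix, (b ** 2)[:, None] * np.ones_like(obs))
        g = np.where(den > 0, num / np.maximum(den, 1e-12), 1.0)
        g /= g.mean()                                # fix scale: mean gain = 1
    return g
```

Each half-step is an exact least-squares minimization of the squared brightness variation, so the objective decreases monotonically; the gains are identifiable up to a global scale, which the mean-gain normalization removes.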
Proceedings of SPIE | 2001
Hui Tian; Xinqiao Liu; SukHwan Lim; Stuart Kleinfelder; Abbas El Gamal
CMOS image sensors have benefited from technology scaling down to 0.35 micrometers with only minor process modifications. Several studies have predicted that below 0.25 micrometers, it will become difficult, if not impossible, to implement CMOS image sensors with acceptable performance without more significant process modifications. To explore the imaging performance of CMOS image sensors fabricated in standard 0.18 micrometers technology, we designed a set of single pixel photodiode and photogate APS test structures. The test structures include pixels with different size n+/pwell and nwell/psub photodiodes and nMOS photogates. To reduce the leakages due to the in-pixel transistors, the follower, photogate, and transfer devices all use 3.3V thick oxide transistors. The paper reports on the key imaging parameters measured from these test structures including conversion gain, dark current and spectral response. We find that dark current density decreases super-linearly in reverse bias voltage, which suggests that it is desirable to run the photodetectors at low bias voltages. We find that QE is quite low due to high pwell doping concentration. Finally, we find that the photogate circuit suffered from high transfer gate off current. QE is not significantly affected by this problem, however.
Proceedings of SPIE | 2001
SukHwan Lim; Abbas El Gamal
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high speed imaging capability of CMOS image sensors to enable new applications such as multiple capture for enhancing dynamic range and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometers process and below. We show that enough memory and processing power can be integrated to not only perform the functions of a conventional camera system but also to perform applications such as real time optical flow estimation.
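The multiple-capture idea for dynamic range enhancement can be sketched as follows: take several exposures of increasing duration within one standard frame time, and for each pixel keep the longest exposure that has not saturated, normalized by exposure time. This is a minimal illustrative sketch under assumed conventions (captures sorted short to long, a single saturation level), not the method described in the paper; the function name is hypothetical.

```python
import numpy as np

def extend_dynamic_range(captures, exposure_times, sat_level=255):
    """Combine short-to-long exposures of the same scene: for each pixel,
    use the longest non-saturated exposure, scaled to a common radiance
    estimate (counts per unit exposure time)."""
    captures = np.asarray(captures, dtype=float)   # shape (n, H, W)
    times = np.asarray(exposure_times, dtype=float)
    radiance = captures / times[:, None, None]     # per-capture estimates
    valid = captures < sat_level                   # non-saturated mask
    # index of the longest valid exposure per pixel; falls back to the
    # longest (saturated) capture when every exposure is saturated
    idx = valid.shape[0] - 1 - np.argmax(valid[::-1], axis=0)
    return np.take_along_axis(radiance, idx[None], axis=0)[0]
```

Bright pixels are read from the short exposures before they clip, while dark pixels benefit from the long exposures' higher signal, extending dynamic range at both ends.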
ieee sensors | 2002
Ali Ozer Ercan; Feng Xiao; Xinqiao Liu; SukHwan Lim; A. El Gamal; Brian A. Wandell
CMOS image sensors are capable of very high-speed non-destructive readout, enabling many novel applications. To explore such applications, we designed and prototyped an experimental high speed imaging system based on a CMOS digital pixel sensor (DPS). The experimental system comprises a PCB that has the DPS chip interfaced to a PC via three I/O cards supported by an easy to use software environment. The system is capable of image acquisition at rates of up to 1,400 frames/sec. After describing the DPS chip and experimental imaging system, we present two applications: dynamic range extension and optical flow estimation. These applications rely on the DPS's ability to perform non-destructive readout of multiple frames at high speed.
electronic imaging | 2002
SukHwan Lim; Abbas El Gamal
Fixed pattern noise (FPN) or nonuniformity caused by device and interconnect parameter variations across an image sensor is a major source of image quality degradation, especially in CMOS image sensors. In a CMOS image sensor, pixels are read out through different chains of amplifiers, each with different gain and offset. Whereas offset variations can be significantly reduced using correlated double sampling (CDS), no widely used method exists for reducing gain FPN. In this paper, we propose to use a video sequence and its optical flow to estimate gain FPN for each pixel. This scheme can be used in a digital video or still camera by taking any video sequence with motion prior to capture and using it to estimate gain FPN. Our method assumes that brightness along the motion trajectory is constant over time. The pixels are grouped in blocks and each block's pixel gains are estimated by iteratively minimizing the sum of the squared brightness variations along the motion trajectories. We tested this method on synthetically generated sequences with gain FPN and obtained results that demonstrate significant reduction in gain FPN with modest computations.
IEEE Journal of Solid-state Circuits | 2001
Stuart Kleinfelder; SukHwan Lim; Xinqiao Liu; A. El Gamal
Archive | 2000
Abbas El Gamal; Xinqiao Liu; SukHwan Lim
Storage and Retrieval for Image and Video Databases | 2001
Hui Tian; Xinqiao Liu; SukHwan Lim; Stuart Kleinfelder; Abbas El Gamal
Archive | 2001
Stuart Kleinfelder; SukHwan Lim; Xinqiao Liu; Abbas El Gamal