Publication


Featured research published by Hae-Gon Jeon.


Computer Vision and Pattern Recognition | 2015

Accurate depth map estimation from a lenslet light field camera

Hae-Gon Jeon; Jaesik Park; Gyeongmin Choe; Jinsun Park; Yunsu Bok; Yu-Wing Tai; In So Kweon

This paper introduces an algorithm that accurately estimates depth maps using a lenslet light field camera. The proposed algorithm estimates multi-view stereo correspondences with sub-pixel accuracy using a cost volume. The foundation for constructing accurate costs is threefold. First, the sub-aperture images are displaced using the phase shift theorem. Second, the gradient costs are adaptively aggregated using the angular coordinates of the light field. Third, the feature correspondences between the sub-aperture images are used as additional constraints. With the cost volume, multi-label optimization propagates and corrects the depth map in weakly textured regions. Finally, the local depth map is iteratively refined by fitting a local quadratic function to estimate a non-discrete depth map. Because micro-lens images contain unexpected distortions, a method that corrects this error is also proposed. The effectiveness of the proposed algorithm is demonstrated on challenging real-world examples, including comparisons with advanced depth estimation algorithms.
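The phase-shift displacement of sub-aperture images can be illustrated with a short sketch. This is not the authors' implementation, only a minimal NumPy demonstration of the phase shift theorem the abstract relies on: a sub-pixel spatial shift becomes a linear phase ramp in the frequency domain.

```python
import numpy as np

def phase_shift(image, dx, dy):
    """Shift a 2-D image by a sub-pixel offset (dx, dy) via the phase
    shift theorem: multiplying the spectrum by a linear phase ramp
    shifts the image content without interpolation blur."""
    rows, cols = image.shape
    u = np.fft.fftfreq(cols)                 # horizontal frequencies
    v = np.fft.fftfreq(rows)                 # vertical frequencies
    U, V = np.meshgrid(u, v)
    ramp = np.exp(-2j * np.pi * (U * dx + V * dy))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * ramp))
```

Shifting each sub-aperture image this way for every candidate disparity is what allows the cost volume to be built with sub-pixel accuracy.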


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017

Geometric Calibration of Micro-Lens-Based Light Field Cameras Using Line Features

Yunsu Bok; Hae-Gon Jeon; In So Kweon

We present a novel method for the geometric calibration of micro-lens-based light field cameras. Accurate geometric calibration is the basis of various applications. Instead of using sub-aperture images, we directly utilize raw images for calibration. We select appropriate regions in raw images and extract line features from micro-lens images in those regions. For the entire process, we formulate a new projection model of a micro-lens-based light field camera, which contains a smaller number of parameters than previous models. The model is transformed into a linear form using line features. We compute the initial solution of both the intrinsic and the extrinsic parameters by a linear computation and refine them via non-linear optimization. Experimental results demonstrate the accuracy of the correspondences between rays and pixels in raw images, as estimated by the proposed method.
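The two-stage scheme described above, a linear initialization followed by non-linear refinement, follows a pattern common to calibration pipelines. Below is a generic sketch under assumed interfaces; the paper's projection model and residual function are not reproduced here, so `residual_fn`, `A`, and `b` are placeholders:

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate(A, b, residual_fn, observations):
    """Two-stage calibration sketch: solve the linearized system
    A x = b for an initial parameter vector, then refine it by
    minimizing the non-linear residuals of the camera model."""
    # Stage 1: linear initialization via least squares.
    x0, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Stage 2: non-linear refinement of intrinsics and extrinsics.
    result = least_squares(residual_fn, x0, args=(observations,))
    return result.x
```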


International Conference on Computer Vision | 2015

Learning a Deep Convolutional Network for Light-Field Image Super-Resolution

Youngjin Yoon; Hae-Gon Jeon; Donggeun Yoo; Joon-Young Lee; In So Kweon

Commercial Light-Field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this paper, we present a novel method for Light-Field image super-resolution (SR) via a deep convolutional neural network. Rather than the conventional optimization framework, we adopt a data-driven learning method to simultaneously up-sample both the angular and the spatial resolution of a Light-Field image. We first augment the spatial resolution of each sub-aperture image to enhance details using a spatial SR network. Then, novel views between the sub-aperture images are generated by an angular super-resolution network. These networks are trained independently and then fine-tuned via end-to-end training. The proposed method achieves state-of-the-art performance on the HCI synthetic dataset and is further evaluated in challenging real-world applications, including refocusing and depth map estimation.
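As a rough sketch of the two-network design described above (layer sizes are assumptions modeled on early SR networks, not the paper's exact configuration), a spatial SR network enhances each sub-aperture image, and an angular SR network synthesizes a novel view from a pair of neighboring views:

```python
import torch.nn as nn

class SpatialSR(nn.Module):
    """Three-layer SRCNN-style network that sharpens one
    (bicubically up-sampled) sub-aperture image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        return self.net(x)

class AngularSR(nn.Module):
    """Synthesizes a novel in-between view from two neighboring
    super-resolved sub-aperture images stacked as two channels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, pair):
        return self.net(pair)
```

Training each network on its own task first and then fine-tuning the cascade end-to-end, as the abstract describes, lets the angular network adapt to the spatial network's output statistics.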


Computer Vision and Pattern Recognition | 2016

High-Quality Depth from Uncalibrated Small Motion Clip

Hyowon Ha; Sunghoon Im; Jaesik Park; Hae-Gon Jeon; In So Kweon

We propose a novel approach that generates a high-quality depth map from a set of images captured with a small viewpoint variation, namely, a small motion clip. As opposed to prior methods that recover scene geometry and camera motion using pre-calibrated cameras, we introduce a self-calibrating bundle adjustment tailored for small motion. This allows our dense stereo algorithm to produce a high-quality depth map without the need for camera calibration. In the dense matching, the distributions of intensity profiles are analyzed to exploit the fact that intensity changes within the scene are negligible due to the minuscule variation in viewpoint. The depth maps obtained by the proposed framework show accurate and extremely fine structures, unmatched by previous work under the same small motion configuration.
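To make the idea of a self-calibrating bundle adjustment for small motion concrete, here is a minimal residual function under an assumed parameterization (shared focal length, per-frame small rotations and translations, per-point inverse depths). This is an illustrative sketch, not the paper's exact formulation; the small-angle approximation R x ≈ x + r × x is reasonable precisely because the viewpoint variation is tiny.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, uv_ref, uv_obs, n_frames):
    """Reprojection residuals for small-motion bundle adjustment.
    params = [f, r_1..r_{3F}, t_1..t_{3F}, w_1..w_N], where f is the
    focal length, (r_i, t_i) the per-frame small rotation/translation,
    and w_j the inverse depth of reference pixel uv_ref[j] (pixel
    coordinates assumed centered at the principal point)."""
    f = params[0]
    r = params[1:1 + 3 * n_frames].reshape(n_frames, 3)
    t = params[1 + 3 * n_frames:1 + 6 * n_frames].reshape(n_frames, 3)
    w = params[1 + 6 * n_frames:]

    n_pts = uv_ref.shape[0]
    rays = np.column_stack([uv_ref[:, 0] / f, uv_ref[:, 1] / f,
                            np.ones(n_pts)])
    X = rays / w[:, None]                       # 3-D points at depth 1/w

    res = []
    for i in range(n_frames):
        Xi = X + np.cross(r[i], X) + t[i]       # R x ~ x + r x x (small angle)
        proj = f * Xi[:, :2] / Xi[:, 2:3]       # pinhole projection
        res.append((proj - uv_obs[i]).ravel())
    return np.concatenate(res)

# usage: least_squares(residuals, params0, args=(uv_ref, uv_obs, n_frames))
```

Optimizing over the focal length together with the poses and inverse depths is what removes the need for a pre-calibrated camera.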


International Conference on Computer Vision | 2015

High Quality Structure from Small Motion for Rolling Shutter Cameras

Sunghoon Im; Hyowon Ha; Gyeongmin Choe; Hae-Gon Jeon; Kyungdon Joo; In So Kweon

We present a practical 3D reconstruction method to obtain a high-quality dense depth map from narrow-baseline image sequences captured by commercial digital cameras, such as DSLRs or mobile phones. Depth estimation from small motion has gained interest as a means of enabling various photographic editing applications, but it faces important limitations in the form of depth uncertainty caused by the narrow baseline and the rolling shutter. To address these problems, we introduce a novel 3D reconstruction method for narrow-baseline image sequences that effectively handles the rolling-shutter effect that occurs in most commercial digital cameras. Additionally, we present a depth propagation method that fills in the holes associated with unknown pixels based on our novel geometric guidance model. Both qualitative and quantitative experimental results show that our new algorithm consistently generates better 3D depth maps than the state-of-the-art method.
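The rolling-shutter handling can be pictured with a small sketch: because each image row is exposed at a slightly different time, the camera pose must be interpolated along the readout rather than held fixed per frame. The linear translation interpolation below is an assumed simplification, not the paper's exact model (which would also handle rotation):

```python
import numpy as np

def rowwise_translation(t_start, t_end, row, n_rows):
    """Interpolate the camera translation for a given image row,
    assuming the readout sweeps top to bottom at a constant rate.
    t_start/t_end are the translations at the first/last scanline."""
    alpha = row / (n_rows - 1)       # normalized readout time of this row
    return (1.0 - alpha) * np.asarray(t_start) + alpha * np.asarray(t_end)
```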


Experimental Mechanics | 1989

An optimal cam profile design considering dynamic characteristics of a cam-valve system

Hae-Gon Jeon; K. J. Park; Y. S. Park

In this work, an optimal cam profile design method for an OHV-type cam-valve train is studied, considering the dynamic characteristics of the valve system. When designing a cam profile for an internal combustion engine, it is desirable to make the valve lift area as large as possible and the valve peak acceleration and seating velocity as low as possible within the cam-event angle. These objectives, however, conflict with one another, so an optimal design must strike a compromise among them. Another important factor in valve train design is avoiding abnormal valve motions such as jump and bounce, which are known to be closely related to the dynamic characteristics of the valve train.

In this paper, a two-step optimization technique to design an optimal cam profile is proposed. In the first step, an attempt was made to maximize the valve lift area without causing abnormal valve motions while satisfying all the given constraints, such as the cam-event angle, the maximum valve acceleration, and the cam displacements at both ends of the cam-event angle. Then, in the second step, minor modifications were made to the cam developed in the first step in order to reduce the cam acceleration while maintaining the maximized valve lift area and satisfying the constraints of the first step.

To prove the effectiveness of the optimization method, the valve motion driven by the optimized cam was not only simulated with a four-degree-of-freedom model but also tested experimentally. The measured valve motions agree quite well with the simulation results. Comparing the valve motions of the optimized cam with those of the original cam, the optimized cam increases the valve-lift area by 8.6 percent while reducing the peak cam acceleration by 28.7 percent. It also raises the cam-valve train operating speed at which jump and bounce occur.
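A minimal sketch of the first optimization step, under assumed units and a simplified model: valve lift over a normalized cam-event angle is written as a sine series (so lift is zero at both ends of the event by construction), and the lift area is maximized subject to a peak-acceleration bound. The paper's actual formulation also enforces seating-velocity and dynamic (jump/bounce) constraints, which are omitted here:

```python
import numpy as np
from scipy.optimize import minimize

THETA = np.linspace(0.0, np.pi, 181)   # normalized cam-event angle
DTHETA = THETA[1] - THETA[0]
K = 5                                  # number of series coefficients
A_MAX = 40.0                           # assumed peak-acceleration limit

def lift(a):
    """Valve lift as a truncated sine series over the event angle."""
    return sum(a[k] * np.sin((k + 1) * THETA) for k in range(K))

def accel(a):
    """Second derivative of the lift series (valve acceleration)."""
    return sum(-a[k] * (k + 1) ** 2 * np.sin((k + 1) * THETA) for k in range(K))

res = minimize(
    lambda a: -lift(a).sum() * DTHETA,   # maximize the lift area
    x0=np.ones(K),
    constraints=[
        {"type": "ineq", "fun": lambda a: A_MAX - np.abs(accel(a))},
        {"type": "ineq", "fun": lambda a: lift(a)},   # lift stays non-negative
    ],
)
print("optimized lift area:", lift(res.x).sum() * DTHETA)
```

The paper's second step would then perturb the resulting profile to lower the acceleration further while holding the lift area fixed.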


International Conference on Computer Vision | 2013

Fluttering Pattern Generation Using Modified Legendre Sequence for Coded Exposure Imaging

Hae-Gon Jeon; Joon-Young Lee; Yudeog Han; Seon Joo Kim; In So Kweon

Finding a good binary sequence is critical in determining the performance of coded exposure imaging, but previous methods mostly rely on a random search for the binary codes, which can easily fail to find good long sequences because the search space grows exponentially. In this paper, we present a new computationally efficient algorithm for generating the binary sequence, which is especially well suited for longer sequences. We show that the concept of the low autocorrelation binary sequence, well exploited in the information theory community, can be applied to generating the fluttering patterns of the shutter, propose a new measure of a good binary sequence, and present a new algorithm that modifies the Legendre sequence for coded exposure imaging. Experiments on both synthetic and real data show that our algorithm consistently generates better binary sequences for the coded exposure problem, yielding better deblurring and resolution enhancement results than previous methods for generating binary codes.
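The Legendre sequence itself is easy to construct, which is part of why it makes an attractive starting point; the sketch below builds the unmodified sequence from quadratic residues and checks a spectral property that matters for deblurring (the paper's modification step is not reproduced here):

```python
import numpy as np

def legendre_sequence(p):
    """Binary Legendre sequence of prime length p: position i is 1 if
    i is a quadratic residue mod p, else 0; position 0 is set to 1 by
    convention. Such sequences have low autocorrelation and near-flat
    spectra, which keeps the motion blur invertible."""
    residues = {(i * i) % p for i in range(1, p)}
    return np.array([1] + [1 if i in residues else 0 for i in range(1, p)])

code = legendre_sequence(31)
# A common quality measure for a shutter code: the minimum magnitude
# of its DFT, which bounds how much deblurring amplifies noise.
print(code, np.abs(np.fft.fft(code)).min())
```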


IEEE Signal Processing Letters | 2017

Light-Field Image Super-Resolution Using Convolutional Neural Network

Youngjin Yoon; Hae-Gon Jeon; Donggeun Yoo; Joon-Young Lee; In So Kweon

Commercial light field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this letter, we present a novel method for light field image super-resolution (SR) that simultaneously up-samples both the spatial and the angular resolution of a light field image via a deep convolutional neural network. We first augment the spatial resolution of each subaperture image with a spatial SR network. Novel views between the super-resolved subaperture images are then generated by three different angular SR networks, chosen according to the novel view locations. We improve both training efficiency and the quality of the angular SR results by using weight sharing. In addition, we provide a new light field image dataset for training and validating the network. We train the whole network end-to-end and show state-of-the-art performance in quantitative and qualitative evaluations.
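The weight-sharing idea can be sketched as a shared feature extractor with one reconstruction head per novel-view location. Channel counts and the two-view input are assumptions for illustration (the network for diagonal view positions would plausibly take more neighboring views):

```python
import torch.nn as nn

class SharedAngularSR(nn.Module):
    """Three angular SR 'networks' realized as one shared trunk plus
    per-location heads, so most weights are learned jointly."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(2, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
        )
        self.heads = nn.ModuleDict({
            loc: nn.Conv2d(32, 1, kernel_size=5, padding=2)
            for loc in ("horizontal", "vertical", "diagonal")
        })

    def forward(self, views, location):
        return self.heads[location](self.shared(views))
```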


Computer Vision and Pattern Recognition | 2017

A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms

Ole Johannsen; Katrin Honauer; Bastian Goldluecke; Anna Alperovich; Federica Battisti; Yunsu Bok; Michele Brizzi; Marco Carli; Gyeongmin Choe; Maximilian Diebold; Marcel Gutsche; Hae-Gon Jeon; In So Kweon; Jaesik Park; Jinsun Park; Hendrik Schilling; Hao Sheng; Lipeng Si; Michael Strecke; Antonin Sulc; Yu-Wing Tai; Qing Wang; Ting-Chun Wang; Sven Wanner; Zhang Xiong; Jingyi Yu; Shuo Zhang; Hao Zhu

This paper presents the results of the depth estimation challenge for dense light fields, which took place at the second workshop on Light Fields for Computer Vision (LF4CV) in conjunction with CVPR 2017. The challenge consisted of submissions to a recent benchmark [7], which allows a thorough performance analysis. While individual results are readily available on the benchmark web page http://www.lightfield-analysis.net, we take this opportunity to give a detailed overview of the current participants. Based on the algorithms submitted to our challenge, we develop a taxonomy of light field disparity estimation algorithms and report on the current state of the art. In addition, we include more comparative metrics and discuss the relative strengths and weaknesses of the algorithms. We thus obtain a snapshot of where light field algorithm development stands at the moment and identify aspects with potential for further improvement.
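As background on how such benchmarks score submissions, the widely used BadPix measure counts the fraction of pixels whose disparity error exceeds a threshold; the 0.07-pixel default below matches a commonly reported variant, though the benchmark's exact settings may differ:

```python
import numpy as np

def badpix(disp_est, disp_gt, threshold=0.07):
    """Fraction of pixels whose absolute disparity error exceeds
    the threshold (lower is better)."""
    return float(np.mean(np.abs(disp_est - disp_gt) > threshold))
```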


European Conference on Computer Vision | 2016

All-Around Depth from Small Motion with a Spherical Panoramic Camera

Sunghoon Im; Hyowon Ha; Francois Rameau; Hae-Gon Jeon; Gyeongmin Choe; In So Kweon

With the growing use of head-mounted displays for virtual reality (VR), generating 3D content for these devices has become an important topic in computer vision. For capturing full 360-degree panoramas in a single shot, the Spherical Panoramic Camera (SPC) is gaining popularity. However, estimating depth from an SPC remains a challenging problem. In this paper, we propose a practical method that generates an all-around dense depth map from a narrow-baseline video clip captured by an SPC. While existing methods for depth from small motion rely on perspective cameras, we introduce a new bundle adjustment approach tailored to the SPC that minimizes the re-projection error directly on the unit sphere. This enables the estimation of approximate metric camera poses and 3D points. Additionally, we present a novel dense matching method called the sphere sweeping algorithm, which allows us to take advantage of the overlapping regions between the cameras. To validate the effectiveness of the proposed method, we evaluate our approach on both synthetic and real-world data. As an example application, we also present stereoscopic panorama images generated from our depth results.
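Minimizing re-projection error directly on the unit sphere can be illustrated with a small sketch: instead of projecting onto an image plane, the predicted 3D point is normalized to a bearing vector and compared with the observed bearing by their angle. The parameterization is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np

def sphere_residual(X, R, t, bearing_obs):
    """Angular residual between the observed unit bearing vector and
    the predicted one for 3-D point X under pose (R, t)."""
    pred = R @ X + t
    pred = pred / np.linalg.norm(pred)           # project onto the unit sphere
    cos_angle = np.clip(pred @ bearing_obs, -1.0, 1.0)
    return np.arccos(cos_angle)                  # residual in radians
```

Because every pixel of an SPC maps to a ray on the sphere, this residual treats all viewing directions uniformly, unlike a planar re-projection error.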
