Publication


Featured research published by Bumsub Ham.


IEEE Transactions on Image Processing | 2014

Fast global image smoothing based on weighted least squares.

Dongbo Min; Sunghwan Choi; Jiangbo Lu; Bumsub Ham; Kwanghoon Sohn; Minh N. Do

This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results of a quality comparable to state-of-the-art optimization-based techniques, but runs about 10-30 times faster. In addition, exploiting the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ-norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
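The separable 1D solver at the heart of this approach can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: it solves a single 1D weighted-least-squares smoothing problem with the linear-time tridiagonal (Thomas) algorithm, and the function name, the first-order prior, and the weight convention are ours.

```python
def smooth_1d_wls(f, w, lam):
    """Minimize sum_i (u_i - f_i)^2 + lam * sum_i w_i * (u_i - u_{i-1})^2.

    f   : input 1D signal (list of floats)
    w   : edge weights; w[i] couples samples i-1 and i (w[0] is unused)
    lam : smoothing strength

    The normal equations form a three-point (tridiagonal) Laplacian
    system, solved in O(n) with the Thomas algorithm.
    """
    n = len(f)
    a = [0.0] * n  # sub-diagonal
    b = [0.0] * n  # diagonal
    c = [0.0] * n  # super-diagonal
    d = list(f)    # right-hand side
    for i in range(n):
        wl = w[i] if i > 0 else 0.0          # weight to the left neighbor
        wr = w[i + 1] if i < n - 1 else 0.0  # weight to the right neighbor
        a[i] = -lam * wl
        c[i] = -lam * wr
        b[i] = 1.0 + lam * (wl + wr)
    # Forward elimination.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution.
    u = [0.0] * n
    u[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u
```

A 2D image would be smoothed by applying such 1D passes alternately along rows and columns; setting an edge weight near zero decouples the samples on either side, which is what preserves edges.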


international conference on image processing | 2010

Visual fatigue evaluation and enhancement for 2D-plus-depth video

Jaeseob Choi; Donghyun Kim; Bumsub Ham; Sunghwan Choi; Kwanghoon Sohn

3D video is expected to be a representative technology for realistic media systems, but it still causes problems such as visual fatigue and headaches. In this paper, we propose a visual fatigue evaluation algorithm that predicts the degree of visual fatigue from a 2D-plus-depth video. Spatial and temporal characteristics of the depth video are the main factors of visual fatigue on autostereoscopic displays. Using the depth image directly, we estimate the spatial and temporal complexities, depth position, and scene movement of the 3D video. The overall visual fatigue of the 3D video is then evaluated, using a linear regression, so as to correlate strongly with subjective fatigue evaluation. Moreover, we adjust the pixel values of the depth image in 3D videos that may induce severe fatigue, to produce a more comfortable 3D video. The results of the proposed algorithm show a considerable correlation with subjective visual fatigue.
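Feature-based fatigue prediction of this kind can be sketched in miniature. The features below (depth variance as spatial complexity, mean frame difference as temporal complexity, distance of mean depth from the screen plane as depth position) and the linear weights are hypothetical stand-ins, not the paper's actual model.

```python
def fatigue_score(depth_frames, weights=(0.5, 0.3, 0.2)):
    """Toy visual-fatigue predictor for a depth video.

    depth_frames : list of frames, each a list of depth values in [0, 255]
    weights      : hypothetical linear-regression coefficients for
                   (spatial complexity, temporal complexity, depth position)
    """
    n = len(depth_frames)
    spatial = temporal = position = 0.0
    for t, frame in enumerate(depth_frames):
        mean = sum(frame) / len(frame)
        # Spatial complexity: variance of depth within the frame.
        spatial += sum((v - mean) ** 2 for v in frame) / len(frame)
        # Depth position: how far the mean depth sits from the screen plane (128).
        position += abs(mean - 128.0)
        if t > 0:
            prev = depth_frames[t - 1]
            # Temporal complexity: mean absolute frame difference.
            temporal += sum(abs(a - b) for a, b in zip(frame, prev)) / len(frame)
    spatial /= n
    position /= n
    temporal /= max(n - 1, 1)
    ws, wt, wp = weights
    return ws * spatial + wt * temporal + wp * position
```

In the paper, such coefficients are fitted by regressing against subjective fatigue scores; here they are fixed purely for illustration.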


computer vision and pattern recognition | 2015

Robust image filtering using joint static and dynamic guidance

Bumsub Ham; Minsu Cho; Jean Ponce

Regularizing images under a guidance signal has been used in various tasks in computer vision and computational photography, particularly for noise reduction and joint upsampling. The aim is to transfer the fine structures of guidance signals to input images, restoring noisy or altered structures. One of the main drawbacks of such a data-dependent framework is that it does not handle differences in structure between the guidance and input images. We address this problem by jointly leveraging structural information from the guidance and input images. Image filtering is formulated as a nonconvex optimization problem, which is solved by the majorization-minimization algorithm. The proposed algorithm converges quickly while guaranteeing a local minimum. It effectively controls image structures at different scales and can handle a variety of types of data from different sensors. We demonstrate the flexibility and effectiveness of our model in several applications, including depth super-resolution, scale-space filtering, texture removal, flash/non-flash denoising, and RGB/NIR denoising.


international conference on image processing | 2009

Spatial and temporal up-conversion technique for depth video

Jinwook Choi; Dongbo Min; Bumsub Ham; Kwanghoon Sohn

This paper proposes a novel framework for up-conversion of depth video resolution in both the spatial and temporal domains. Time-of-Flight (TOF) sensors are widely used in computer vision, but although they provide depth video in real time, they deliver only low-resolution, low-frame-rate depth video. We propose an inexpensive solution that enhances the depth video obtained by a TOF sensor by combining it with a CCD camera. The proposed method provides high-quality depth video and is useful in applications such as 3DTV, free-viewpoint TV, and teleconferencing systems. High-quality depth video is obtained by motion-compensated frame interpolation (MCFI) and extended Joint Bilateral Upsampling (JBU). Experimental results show that depth video obtained by the proposed method has satisfactory quality.
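The Joint Bilateral Upsampling step can be illustrated with a 1D sketch, assuming a grayscale guide and Gaussian weights; the 1D restriction and the parameter values are simplifications for the example, not the paper's extended formulation.

```python
import math

def jbu_1d(depth_lr, color_hr, scale, sigma_s=1.0, sigma_r=10.0):
    """Upsample a low-resolution 1D depth signal guided by a
    high-resolution color signal (joint bilateral upsampling).

    Each high-res position averages the low-res depth samples,
    weighted by spatial distance on the high-res grid and by
    color similarity in the high-res guide.
    """
    n_hr = len(color_hr)
    out = []
    for i in range(n_hr):
        num = den = 0.0
        for j, d in enumerate(depth_lr):
            pos = min(j * scale, n_hr - 1)  # low-res sample on the high-res grid
            ws = math.exp(-((i - pos) ** 2) / (2.0 * (sigma_s * scale) ** 2))
            wr = math.exp(-((color_hr[i] - color_hr[pos]) ** 2) / (2.0 * sigma_r ** 2))
            num += ws * wr * d
            den += ws * wr
        out.append(num / den)
    return out
```

Because the range weight wr collapses across color edges in the guide, upsampled depth discontinuities snap to the high-resolution image edges instead of being blurred.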


IEEE Transactions on Image Processing | 2013

A Generalized Random Walk With Restart and its Application in Depth Up-Sampling and Interactive Segmentation

Bumsub Ham; Dongbo Min; Kwanghoon Sohn

In this paper, the origin of random walk with restart (RWR) and its generalization are described. It is well known that the random walk (RW) and the anisotropic diffusion models share the same energy functional, i.e., the former provides a steady-state solution and the latter gives a flow solution. In contrast, the theoretical background of the RWR scheme is different from that of the diffusion-reaction equation, although the restarting term of the RWR plays a role similar to the reaction term of the diffusion-reaction equation. The behaviors of the two approaches with respect to outliers reveal that they possess different attributes in terms of data propagation. This observation leads to the derivation of a new energy functional, where both volumetric heat capacity and thermal conductivity are considered together, and provides a common framework that unifies both the RW and the RWR approaches, in addition to other regularization methods. The proposed framework allows the RWR to be generalized (GRWR) in semilocal and nonlocal forms. The experimental results demonstrate the superiority of GRWR over existing regularization approaches in terms of depth map up-sampling and interactive image segmentation.
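The basic random walk with restart that the paper generalizes can be sketched in a few lines. The power-iteration solver, the restart probability, and the tiny example graph are illustrative choices, not the generalized (GRWR) formulation.

```python
def rwr(W, seed, restart=0.15, iters=500):
    """Random walk with restart by power iteration.

    W    : column-stochastic transition matrix, W[i][j] = P(j -> i)
    seed : restart distribution (sums to 1)

    Iterates r <- (1 - restart) * W r + restart * seed until the
    steady-state relevance scores are reached.
    """
    n = len(seed)
    r = list(seed)
    for _ in range(iters):
        r = [(1.0 - restart) * sum(W[i][j] * r[j] for j in range(n))
             + restart * seed[i] for i in range(n)]
    return r
```

On a three-node chain with the restart mass on node 0, the steady state ranks node 0 above node 2; it is this seed-anchored propagation that is exploited for depth up-sampling and interactive segmentation.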


IEEE Transactions on Image Processing | 2013

Space-Time Hole Filling With Random Walks in View Extrapolation for 3D Video

Sunghwan Choi; Bumsub Ham; Kwanghoon Sohn

In this paper, a space-time hole filling approach is presented to deal with disocclusions when a view is synthesized for 3D video. The problem becomes even more complicated when the view is extrapolated from a single view, since the hole is large and has no stereo depth cues. Although many techniques have been developed to address this problem, most of them focus only on view interpolation. We propose a space-time joint filling method for color and depth videos in view extrapolation. For proper texture and depth to be sampled in the subsequent hole filling process, the background of a scene is automatically segmented by random walker segmentation in conjunction with the hole formation process. Then, the patch candidate selection process is formulated as a labeling problem, which can be solved with random walks. The patch candidates that best describe the hole region are dynamically selected in the space-time domain, and the hole is filled with the optimal patch, ensuring both spatial and temporal coherence. The experimental results show that the proposed method is superior to state-of-the-art methods and provides both spatially and temporally consistent results with significantly reduced flicker artifacts.


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Mahalanobis Distance Cross-Correlation for Illumination-Invariant Stereo Matching

Seungryong Kim; Bumsub Ham; Bongjoe Kim; Kwanghoon Sohn

A robust similarity measure called the Mahalanobis distance cross-correlation (MDCC) is proposed for illumination-invariant stereo matching, which uses a local color distribution within support windows. It is shown that the Mahalanobis distance between the color itself and the average color is preserved under affine transformation. The MDCC converts pixels within each support window into the Mahalanobis distance transform (MDT) space. The similarity between MDT pairs is then computed using the cross-correlation with an asymmetric weight function based on the Mahalanobis distance. The MDCC considers correlation on cross-color channels, thus providing robustness to affine illumination variation. Experimental results show that the MDCC outperforms state-of-the-art similarity measures in terms of stereo matching for image pairs taken under different illumination conditions.
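The affine-invariance property that underlies the MDCC is easy to verify numerically. The sketch below uses two color channels and a closed-form 2x2 inverse to stay self-contained; the helper names are ours, not the paper's.

```python
def mean2(pts):
    n = float(len(pts))
    return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

def cov2(pts, mu):
    """Population covariance of 2-channel samples."""
    n = float(len(pts))
    sxx = sum((p[0] - mu[0]) ** 2 for p in pts) / n
    syy = sum((p[1] - mu[1]) ** 2 for p in pts) / n
    sxy = sum((p[0] - mu[0]) * (p[1] - mu[1]) for p in pts) / n
    return [[sxx, sxy], [sxy, syy]]

def inv2(m):
    """Closed-form inverse of a 2x2 matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def mahalanobis_sq(x, mu, cinv):
    """Squared Mahalanobis distance of x to the mean mu."""
    dx, dy = x[0] - mu[0], x[1] - mu[1]
    return (dx * (cinv[0][0] * dx + cinv[0][1] * dy)
            + dy * (cinv[1][0] * dx + cinv[1][1] * dy))
```

Applying any invertible affine map to every color in the window leaves each point's squared Mahalanobis distance to the window mean unchanged, which is why the measure is robust to affine illumination variation.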


IEEE Transactions on Image Processing | 2015

Depth Superresolution by Transduction

Bumsub Ham; Dongbo Min; Kwanghoon Sohn

This paper presents a depth superresolution (SR) method that uses both a low-resolution (LR) depth image and a high-resolution (HR) intensity image. We formulate depth SR as a graph-based transduction problem. In particular, the HR intensity image is represented as an undirected graph, in which pixels are characterized as vertices and their relations are encoded by an affinity function. When the vertices initially labeled with certain depth hypotheses (from the LR depth image) are regarded as input queries, all the vertices are scored with respect to their relevance to these queries by a classifying function. Each vertex is then labeled with the depth hypothesis that receives the highest relevance score. We design the classifying function by considering the local and global structures of the HR intensity image. This approach enables us to address the depth bleeding problem that typically appears in current depth SR methods. Furthermore, input queries are assigned in a probabilistic manner, making depth SR robust to noisy depth measurements. We also analyze existing depth SR methods in the context of transduction and discuss their theoretical relations. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art methods both qualitatively and quantitatively.
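The score-and-argmax transduction idea can be sketched on a tiny 1D example. The chain graph, the weak affinity standing in for an intensity edge, and the simple iterative solver below are illustrative assumptions, not the paper's classifying function.

```python
def transduce(weights, seeds, labels, alpha=0.9, iters=300):
    """Label pixels on a chain graph by transduction.

    weights : weights[i] is the affinity between pixels i and i+1
              (small where the guidance image has an edge)
    seeds   : dict mapping pixel index -> label (the input queries)
    labels  : the set of depth hypotheses

    Each label gets a per-pixel relevance score via the iteration
    r <- (1 - alpha) * y + alpha * (neighbor-weighted average of r);
    each pixel then takes the highest-scoring label.
    """
    n = len(weights) + 1
    scores = {}
    for lab in labels:
        y = [1.0 if seeds.get(i) == lab else 0.0 for i in range(n)]
        r = list(y)
        for _ in range(iters):
            new = []
            for i in range(n):
                num = den = 0.0
                if i > 0:
                    num += weights[i - 1] * r[i - 1]
                    den += weights[i - 1]
                if i < n - 1:
                    num += weights[i] * r[i + 1]
                    den += weights[i]
                new.append((1.0 - alpha) * y[i] + alpha * num / den)
            r = new
        scores[lab] = r
    return [max(labels, key=lambda lab: scores[lab][i]) for i in range(n)]
```

Because relevance barely crosses the weak link, the label boundary lands on the guide-image edge rather than bleeding across it.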


IEEE Transactions on Image Processing | 2014

Probability-Based Rendering for View Synthesis

Bumsub Ham; Dongbo Min; Changjae Oh; Minh N. Do; Kwanghoon Sohn

In this paper, a probability-based rendering (PBR) method is described for reconstructing an intermediate view with a steady-state matching probability (SSMP) density function. Conventionally, given multiple reference images, the intermediate view is synthesized via depth image-based rendering, in which geometric information (e.g., depth) is explicitly leveraged, leading to serious rendering artifacts on the synthesized view even with small depth errors. We address this problem by formulating the rendering process as an image fusion, in which the textures of all probable matching points are adaptively blended with the SSMP representing the likelihood that points among the input reference images are matched. The PBR hence becomes more robust against depth estimation errors than existing view synthesis approaches. The matching probability in the steady state, the SSMP, is inferred for each pixel via the random walk with restart (RWR). The RWR always guarantees a visually consistent matching probability, as opposed to conventional optimization schemes (e.g., diffusion- or filtering-based approaches), whose accuracy depends heavily on the parameters used. Experimental results demonstrate the superiority of the PBR over existing view synthesis approaches both qualitatively and quantitatively. In particular, the PBR is effective in suppressing flicker artifacts in virtual video rendering, even though no temporal aspect is considered. Moreover, the depth map itself, calculated by our RWR-based method (by simply choosing the most probable matching point), is shown to be comparable to those of state-of-the-art local stereo matching methods.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

Robust Guided Image Filtering Using Nonconvex Potentials

Bumsub Ham; Minsu Cho; Jean Ponce

Filtering images using a guidance signal, a process called guided or joint image filtering, has been used in various tasks in computer vision and computational photography, particularly for noise reduction and joint upsampling. Such filtering uses an additional guidance signal as a structure prior and transfers the structure of the guidance signal to an input image, restoring noisy or altered image structure. The main drawbacks of such a data-dependent framework are that it does not consider structural differences between the guidance and input images, and that it is not robust to outliers. We propose a novel SD (for static/dynamic) filter to address these problems in a unified framework, jointly leveraging structural information from the guidance and input images. Guided image filtering is formulated as a nonconvex optimization problem, which is solved by the majorization-minimization algorithm. The proposed algorithm converges quickly while guaranteeing a local minimum. The SD filter effectively controls the underlying image structure at different scales and can handle a variety of types of data from different sensors. It is robust to outliers and other artifacts such as gradient reversal and global intensity shift, and has good edge-preserving smoothing properties. We demonstrate the flexibility and effectiveness of the proposed SD filter in a variety of applications, including depth upsampling, scale-space filtering, texture removal, flash/non-flash denoising, and RGB/NIR denoising.
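The majorization-minimization strategy behind the SD filter can be illustrated in miniature with a robust average: a nonconvex (Welsch-type) potential is repeatedly majorized by a weighted quadratic whose minimizer is a reweighted mean. This is a one-variable sketch of the principle only, not the SD filter itself; the loss choice and parameters are assumptions.

```python
import math

def robust_mean(data, sigma=1.0, iters=50):
    """Minimize sum_j phi(u - f_j) with the nonconvex Welsch potential
    phi(d) = 1 - exp(-d^2 / (2 sigma^2)) by majorize-minimize: each
    step majorizes phi at the current estimate, yielding a weighted
    least-squares problem whose solution is a weighted mean."""
    u = sum(data) / len(data)  # initialize at the (outlier-sensitive) mean
    for _ in range(iters):
        # Weights from the majorizer: outliers get exponentially small weight.
        w = [math.exp(-((u - f) ** 2) / (2.0 * sigma ** 2)) for f in data]
        u = sum(wi * fi for wi, fi in zip(w, data)) / sum(w)
    return u
```

Each such step can only decrease the objective, which is the monotone convergence to a local minimum that the abstract refers to.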

Collaboration


Dive into Bumsub Ham's collaborations.

Top Co-Authors

Dongbo Min
Chungnam National University

Jean Ponce
École Normale Supérieure

Minsu Cho
Seoul National University