
Publication


Featured research published by Seon-Min Rhee.


IEEE Transactions on Visualization and Computer Graphics | 2007

Low-Cost Telepresence for Collaborative Virtual Environments

Seon-Min Rhee; Remo Ziegler; Jiyoung Park; Martin Naef; Markus H. Gross; Myoung-Hee Kim

We present a novel low-cost method for visual communication and telepresence in a CAVE™-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven, efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. It was designed to extend existing projection systems with acquisition capabilities at minimal hardware modification and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background, and consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics, as the mask and color image are merged using image warping based on a depth estimate. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.
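Because the near-IR mask and the color image come from different cameras, the depth-based warping step described above can be pictured roughly as follows. This is only an illustrative sketch under a rectified pinhole assumption (disparity = f·B/Z); the function name and parameters are invented for illustration and are not from the paper:

```python
import numpy as np

def warp_mask_to_color_view(mask, depth, baseline, focal):
    """Shift a segmentation mask from the IR camera into the color camera's
    view using the disparity implied by the depth estimate (f * B / Z)."""
    h, w = mask.shape
    warped = np.zeros_like(mask)
    disp = np.where(depth > 0, focal * baseline / np.maximum(depth, 1e-6), 0.0)
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                x2 = int(round(x + disp[y, x]))  # horizontal shift only
                if 0 <= x2 < w:
                    warped[y, x2] = True
    return warped
```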


The Visual Computer | 2012

Time-of-flight sensor and color camera calibration for multi-view acquisition

Hyunjung Shim; Rolf Adelsberger; James D. K. Kim; Seon-Min Rhee; Taehyun Rhee; Jae Young Sim; Markus H. Gross; Changyeong Kim

This paper presents a multi-view acquisition system using multi-modal sensors, composed of time-of-flight (ToF) range sensors and color cameras. Our system captures multiple pairs of color images and depth maps at multiple viewing directions. To ensure acceptable measurement accuracy, we compensate errors in sensor measurement and calibrate the multi-modal devices. Through extensive experiments and analysis, we identify the major sources of systematic error in sensor measurement and construct an error model for compensation. As a result, we provide a practical solution for real-time error compensation of depth measurement. Moreover, we implement a calibration scheme for the multi-modal devices, unifying the spatial coordinates of the multi-modal sensors. The main contribution of this work is a thorough analysis of systematic error in sensor measurement, yielding a reliable methodology for robust error compensation. The proposed system offers a real-time multi-modal sensor calibration method and is thereby applicable to the 3D reconstruction of dynamic scenes.
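The error-compensation idea, fitting a per-sensor model of systematic depth error and adding the predicted correction back to each measurement, can be sketched as follows. The polynomial form is a hypothetical stand-in for illustration; the paper's actual error model is not reproduced here:

```python
import numpy as np

def fit_depth_error_model(measured, truth, degree=2):
    """Fit a polynomial error model e(d) = truth - measured to calibration
    data; compensation then adds the predicted error back to a measurement."""
    coeffs = np.polyfit(measured, truth - measured, degree)
    return np.poly1d(coeffs)

def compensate(measured, model):
    """Apply the fitted model: corrected depth = measured + e(measured)."""
    return measured + model(measured)
```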


International Symposium on Visual Computing | 2008

Stereoscopic View Synthesis by View Morphing

Seon-Min Rhee; Jongmoo Choi; Ulrich Neumann

We propose a novel approach to generate an arbitrary in-between stereoscopic view from a wide-baseline stereo camera using view morphing. Conventionally, a stereoscopic view of a real scene has been generated by a stereo camera that simulates the human eye configuration, with approximately 65 mm of horizontal separation. Such a configuration, however, provides a fixed viewpoint and a sense of depth that depends wholly on the camera pose. In this work, we use a stereo camera with a wider baseline than the conventional one to increase flexibility in both the viewpoint and the degree of depth perception. View morphing is a shape-preserving transition method from a source view to a destination view. We adapt this method to choose the locations of two virtual cameras yielding an in-between stereoscopic view, and we control the degree of depth perception by choosing a different distance between the two virtual cameras. Experimental results show a series of synthesized in-between stereoscopic views generated from the Middlebury stereo data set. We also show interlaced stereo composition results using a pair of synthesized views with 65 mm and 130 mm baselines, generated from input views acquired by a 160 mm-baseline stereo camera.
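For a rectified stereo pair, an in-between view can be approximated by forward-warping pixels by a fraction of the disparity, which is the quantity the virtual-camera placement controls. The following is a toy sketch, not the full prewarp/morph/postwarp pipeline of view morphing, and all names are illustrative:

```python
import numpy as np

def interpolate_view(left, disparity, s):
    """Forward-warp a rectified left view toward the right view.

    s = 0 reproduces the left view; s = 1 approximates the right view.
    Pixels shift by s * disparity along the scanline (a toy stand-in for
    the shape-preserving transition of full view morphing)."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)  # holes remain where filled is False
    for y in range(h):
        for x in range(w):
            x2 = int(round(x - s * disparity[y, x]))
            if 0 <= x2 < w:
                out[y, x2] = left[y, x]
                filled[y, x2] = True
    return out, filled
```

Generating two such views at different interpolation parameters yields an in-between stereoscopic pair whose virtual baseline, and hence depth perception, is adjustable.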


International Conference on Image Processing | 2012

Split and merge approach for detecting multiple planes in a depth image

Seon-Min Rhee; Yong-beom Lee; James D. K. Kim; Taehyun Rhee

We propose a novel method for detecting multiple planar structures in a scene from a depth image and estimating their parametric models in real time. To this end, we split the entire depth image into small patches. Initially, we assume that every patch lies on a planar structure and estimate its parametric model using least-squares fitting. We select the patches for which this assumption is valid, and the selected planar patches are iteratively merged and refined when they lie on the same planar structure. Qualitative and quantitative experiments show that the proposed method has clear benefits in terms of accuracy and processing time.
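The split step, least-squares plane fitting per patch followed by a residual test, might look like this minimal NumPy sketch (the threshold and all names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) patch of depth
    points; returns (a, b, c) and the RMS residual used to accept/reject."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    rms = np.sqrt(np.mean((A @ coeffs - points[:, 2]) ** 2))
    return coeffs, rms

def planar_patches(patches, rms_thresh=0.01):
    """Split step: keep only patches whose plane-fit residual is small.
    Merging patches with compatible parameters would follow this step."""
    kept = []
    for pts in patches:
        coeffs, rms = fit_plane(pts)
        if rms < rms_thresh:
            kept.append((coeffs, pts))
    return kept
```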


Journal of Electronic Imaging | 2017

Deep neural network using color and synthesized three-dimensional shape for face recognition

Seon-Min Rhee; ByungIn Yoo; Jae-Joon Han; Wonjun Hwang

We present an approach for face recognition using synthesized three-dimensional (3-D) shape information together with two-dimensional (2-D) color in a deep convolutional neural network (DCNN). As 3-D facial shape is hardly affected by the extrinsic 2-D texture changes caused by illumination, make-up, and occlusions, it can provide reliable complementary features that work in harmony with the 2-D color feature in face recognition. Unlike other approaches that use 3-D shape information with the help of an additional depth sensor, our approach generates a personalized 3-D face model using only the face landmarks in the 2-D input image. Using the personalized 3-D face model, we generate a frontalized 2-D color facial image as well as 3-D facial images (e.g., a depth image and a normal image). In our DCNN, we first feed the 2-D and 3-D facial images into independent convolutional layers, where the low-level kernels are learned according to their own characteristics. We then merge them and feed them into higher-level layers of a single deep neural network. Our approach is evaluated on the Labeled Faces in the Wild dataset, and the results show that the verification error rate at a false acceptance rate of 1% is reduced by up to 32.1% compared with a baseline that uses only the 2-D color image.
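The two-branch layout, independent low-level convolutions for the 2-D color image and the synthesized 3-D maps followed by a channel-wise merge, can be illustrated with a toy NumPy forward pass. Random kernels stand in for learned ones, and nothing here reflects the paper's actual layer sizes or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_branch(img, kernels):
    """A toy 'low-level convolutional layer': valid cross-correlation of a
    single-channel image with a stack of 3x3 kernels, followed by ReLU."""
    h, w = img.shape
    out = np.zeros((len(kernels), h - 2, w - 2))
    for k, ker in enumerate(kernels):
        for y in range(h - 2):
            for x in range(w - 2):
                out[k, y, x] = np.sum(img[y:y + 3, x:x + 3] * ker)
    return np.maximum(out, 0.0)

# Independent branches for the 2-D color image and a synthesized 3-D map,
# each with its own kernels, then a channel-wise merge that higher-level
# shared layers would consume.
color = rng.random((8, 8))
depth = rng.random((8, 8))
color_feats = conv_branch(color, rng.standard_normal((4, 3, 3)))
depth_feats = conv_branch(depth, rng.standard_normal((4, 3, 3)))
merged = np.concatenate([color_feats, depth_feats], axis=0)  # 8 channels
```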


Proceedings of SPIE | 2014

SDTP: a robust method for interest point detection on 3D range images

Shandong Wang; Lujin Gong; Hui Zhang; Yongjie Zhang; Haibing Ren; Seon-Min Rhee; Hyong-Euk Lee

In the fields of intelligent robots and computer vision, the ability to select a few points representing salient structures has long been a focus of investigation. In this paper, we present a novel interest point detector for 3D range images, which can be used with good results in surface registration and object recognition applications. A local shape description around each point in the range image is first constructed from the distribution map of the signed distances to the tangent plane in its local support region. Using this shape description, an interest value is computed to indicate the probability of a point being an interest point. Finally, a non-maxima suppression procedure is performed to select stable interest points at positions with large surface variation in their vicinity. Our method is robust to noise, occlusion, and clutter, as evidenced by its higher repeatability values compared with state-of-the-art 3D interest point detectors in our experiments. In addition, the method is easy to implement and requires little computation time.
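The local shape description rests on signed distances of neighboring points to the tangent plane. One common way to obtain that plane is PCA of the local neighborhood, sketched below as an illustrative assumption; the paper's exact construction may differ:

```python
import numpy as np

def signed_distances(neighbors, point):
    """Signed distances of neighboring points to the tangent plane at `point`.

    The plane normal is estimated by PCA of the neighborhood: the eigenvector
    of the covariance matrix with the smallest eigenvalue."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # smallest eigenvalue -> normal
    return (neighbors - point) @ normal
```

A histogram (distribution map) of these distances over the support region would then serve as the local shape description from which the interest value is computed.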


International Conference on Consumer Electronics | 2013

Pose estimation of a depth camera using plane features

Seon-Min Rhee; Yong-beom Lee; James D. K. Kim; Taehyun Rhee

We present a novel method for pose estimation of a depth camera using plane features. Since conventional features for color images, mainly points and lines, are not applicable to a depth image, we propose a new type of feature that exploits the planar structures in a scene. To measure the accuracy of our method, we generated a synthetic scene and calculated the position error between the estimated location and its ground truth. We also applied our method to a real-world scene captured by a depth camera, verifying its practical usability.
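One standard way to use plane correspondences for the rotation part of a pose estimate is to align corresponding plane normals with an orthogonal Procrustes (Kabsch) solve, sketched below as an illustration rather than the paper's exact formulation:

```python
import numpy as np

def rotation_from_plane_normals(normals_a, normals_b):
    """Estimate the rotation R with R @ a_i ~ b_i for corresponding unit
    plane normals (rows of the inputs); three non-parallel planes suffice.
    Kabsch solve: SVD of the cross-covariance, with a sign fix to ensure
    a proper rotation (det = +1)."""
    H = normals_a.T @ normals_b
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

The translation would follow from the plane offsets once the rotation is known.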


International Conference on Consumer Electronics | 2014

Two-pass ICP with Color constraint for noisy RGB-D point cloud registration

Seon-Min Rhee; Yong Beom Lee; Hyong-Euk Lee

We present a novel method called two-pass iterative closest point (ICP), which considers color information and the depth-noise characteristics for registration of noisy RGB-D data. While the first pass is the same as conventional ICP, color information is used as a constraint in the second pass against incorrect matches caused by depth noise. Experimental results show that jitter is remarkably reduced by the proposed method.
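The color constraint of the second pass amounts to rejecting nearest-neighbor matches whose colors disagree. A minimal matching-step sketch, using brute-force nearest neighbors and an illustrative threshold (not the paper's parameters):

```python
import numpy as np

def match_points(src, dst, src_color=None, dst_color=None, color_thresh=0.1):
    """One ICP matching step: nearest neighbor in 3-D space. When colors are
    supplied (the second pass), matches whose color difference exceeds the
    threshold are discarded as likely artifacts of depth noise."""
    matches = []
    for i, p in enumerate(src):
        j = int(np.argmin(np.linalg.norm(dst - p, axis=1)))
        if src_color is not None:
            if np.linalg.norm(src_color[i] - dst_color[j]) > color_thresh:
                continue  # geometric match, but colors disagree: reject
        matches.append((i, j))
    return matches
```

The first pass would call this without colors; the second pass passes the colors and re-estimates the transform from the filtered matches.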


International Conference on Consumer Electronics | 2010

Accurate stereo view synthesis for an autostereoscopic 3D display

Seon-Min Rhee; Jongmoo Choi; Soo-Mi Choi

This paper presents a novel approach to synthesizing accurate stereo views using wide-baseline stereo images and disparity maps. To provide correct depth perception on an autostereoscopic 3D display, we develop a new hole-filling algorithm that considers stereo consistency. Our hole-filling algorithm generates more accurate results than previous methods because it considers the geometric relationship within hole regions.


International Conference on Universal Access in Human-Computer Interaction | 2007

Combining pointing gestures with video avatars for remote collaboration

Seon-Min Rhee; Myoung-Hee Kim

We present a simple and intuitive method of user interaction, based on pointing gestures, which can be used with video avatars in remote collaboration. By connecting the head and fingertip of a user in 3D space, we can identify the direction in which they are pointing. Stereo infrared cameras in front of the user, together with an overhead camera, are used to find the user's head and fingertip in a CAVE™-like system. The position of the head is taken to be the top of the user's silhouette, while the location of the user's fingertip is found directly in 3D space by matching its location in the stereo camera images against the overhead camera image in real time. The user can interact with the first object that collides with the pointing ray. In our experimental results, the outcome of the interaction is shown together with the video avatar visible to a remote collaborator.
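The pointing interaction reduces to casting a ray from the head through the fingertip and taking the first object it hits. A minimal sketch with spheres as stand-in objects (all names and the sphere representation are illustrative, not from the paper):

```python
import numpy as np

def pointing_ray(head, fingertip):
    """Pointing direction: the ray from the user's head through the fingertip."""
    d = fingertip - head
    return head, d / np.linalg.norm(d)

def first_hit(origin, direction, spheres):
    """Index of the nearest sphere (center, radius) the ray hits, or None.
    Uses the quadratic ray-sphere intersection and keeps the smallest
    positive hit distance."""
    best, best_t = None, np.inf
    for i, (c, r) in enumerate(spheres):
        oc = origin - c
        b = oc @ direction
        disc = b * b - (oc @ oc - r * r)
        if disc >= 0:
            t = -b - np.sqrt(disc)  # nearer of the two intersections
            if 0 < t < best_t:
                best, best_t = i, t
    return best
```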

Collaboration

Top co-authors of Seon-Min Rhee:

Ji-Young Park (Electronics and Telecommunications Research Institute)
Yoo-Joo Choi (Seoul National University)
Taehyun Rhee (Victoria University of Wellington)