
Publication


Featured research published by Sang Min Oh.


International Journal of Computer Vision | 2008

Learning and Inferring Motion Patterns using Parametric Segmental Switching Linear Dynamic Systems

Sang Min Oh; James M. Rehg; Tucker R. Balch; Frank Dellaert

Switching Linear Dynamic System (SLDS) models are a popular technique for modeling complex nonlinear dynamic systems. An SLDS can describe complex temporal patterns more concisely and accurately than an HMM by using continuous hidden states. However, the use of SLDS models in practical applications is challenging for three reasons. First, exact inference in SLDS models is computationally intractable. Second, the geometric duration model induced in standard SLDSs limits their representational power. Third, standard SLDSs do not provide a principled way to interpret systematic variations governed by higher-order parameters.

The contributions in this paper address all three of these challenges. First, we present a data-driven MCMC (DD-MCMC) sampling method for approximate inference in SLDSs. We show that DD-MCMC provides an efficient method for estimation and learning in SLDS models. Second, we present segmental SLDSs (S-SLDS), where the geometric distributions of the switching state durations are replaced with arbitrary duration models. Third, we extend the standard SLDS model with additional global parameters that can capture systematic temporal and spatial variations. The resulting parametric SLDS model (P-SLDS) uses EM to robustly interpret parametrized motions by incorporating additional global parameters that underlie systematic variations of the overall motion.

Together, these extensions provide a principled framework for interpreting complex motions. The framework is applied to the honeybee dance interpretation task in the context of the ongoing BioTracking project at the Georgia Institute of Technology. The experimental results suggest that the enhanced models provide an effective framework for a wide range of motion analysis applications.
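For readers unfamiliar with the model class the abstract assumes, the following minimal Python sketch shows the generative process of a basic SLDS: a discrete switching label selects which linear-Gaussian regime drives the continuous hidden state. The regime count, transition matrix, and dynamics matrices are toy values chosen for illustration, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 2, 2                       # number of regimes, state dimension
P = np.array([[0.95, 0.05],       # switching-state transition matrix
              [0.10, 0.90]])
A = [np.eye(D) * 0.99,            # per-regime state dynamics A_k
     np.array([[0.9, -0.2], [0.2, 0.9]])]
Q = [np.eye(D) * 0.01] * K        # per-regime process noise
C = np.eye(D)                     # observation matrix
R = np.eye(D) * 0.05              # observation noise

def sample_slds(T):
    """Draw (z, x, y): switch labels, hidden states, observations."""
    z = np.zeros(T, dtype=int)
    x = np.zeros((T, D))
    y = np.zeros((T, D))
    for t in range(1, T):
        z[t] = rng.choice(K, p=P[z[t - 1]])
        x[t] = A[z[t]] @ x[t - 1] + rng.multivariate_normal(np.zeros(D), Q[z[t]])
        y[t] = C @ x[t] + rng.multivariate_normal(np.zeros(D), R)
    return z, x, y

z, x, y = sample_slds(200)
```

Inference then means recovering z and x from y alone, which is exactly the part the paper shows to be intractable and approximates with DD-MCMC.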


international conference on robotics and automation | 2006

Traversability classification using unsupervised on-line visual learning for outdoor robot navigation

Dongshin Kim; Jie Sun; Sang Min Oh; James M. Rehg; Aaron F. Bobick

Estimating the traversability of terrain in an unstructured outdoor environment is a core functionality for autonomous robot navigation. While general-purpose sensing can be used to identify the existence of terrain features such as vegetation and sloping ground, the traversability of these regions is a complex function of the terrain characteristics and vehicle capabilities, which makes it extremely difficult to characterize a priori. Moreover, it is difficult to find general rules which work for a wide variety of terrain types such as trees, rocks, tall grass, logs, and bushes. As a result, methods which provide traversability estimates based on predefined terrain properties such as height or shape are unlikely to work reliably in unknown outdoor environments. Our approach is based on the observation that traversability in the most general sense is an affordance which is jointly determined by the vehicle and its environment. We describe a novel on-line learning method which can make accurate predictions of the traversability properties of complex terrain. Our method is based on autonomous training data collection, which exploits the robot's experience in navigating its environment to train classifiers without human intervention. This is in contrast to other learning methods in which training data is collected manually. We have implemented and tested our traversability learning method on an unmanned ground vehicle (UGV) and evaluated its performance in several realistic outdoor environments. The experiments quantify the benefit of our on-line traversability learning approach.
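To make the autonomous-training-data idea concrete, here is a hedged Python sketch of self-supervised on-line learning: the robot's own traversal outcomes label terrain patches for an incremental classifier, so no human annotation is needed. The feature extractor and the outcome signal are hypothetical stand-ins; the paper's actual features and classifier may differ.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")  # incremental linear classifier
classes = np.array([0, 1])            # 0 = non-traversable, 1 = traversable

def extract_features(patch):
    """Hypothetical descriptor: per-channel mean color + texture variance."""
    return np.concatenate([patch.mean(axis=(0, 1)), patch.var(axis=(0, 1))])

def on_drive_outcome(patch, succeeded):
    """Each traversal attempt becomes a labeled training example."""
    X = extract_features(patch).reshape(1, -1)
    clf.partial_fit(X, np.array([int(succeeded)]), classes=classes)

def predict_traversable(patch):
    """Score an unvisited patch with the classifier trained so far."""
    return clf.predict(extract_features(patch).reshape(1, -1))[0] == 1
```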


intelligent robots and systems | 2007

Traversability classification for UGV navigation: a comparison of patch and superpixel representations

Dongshin Kim; Sang Min Oh; James M. Rehg

Robot navigation in complex outdoor terrain can benefit from accurate traversability classification. Appearance-based traversability estimation can provide a long-range sensing capability which complements the traditional use of stereo or LIDAR ranging. In the standard approach to traversability classification, each image frame is decomposed into patches or pixels for further analysis. However, classification at the pixel level is prone to noise and complicates the task of identifying homogeneous regions for navigation. Fixed-sized patches aggregate pixel information, resulting in better noise properties, but they can span multiple distinct image regions, which can degrade the classification performance and make thin obstacles difficult to detect. We address the use of superpixels as the visual primitives for traversability estimation. Superpixels are obtained from an over-segmentation of the image and they aggregate visually homogeneous pixels while respecting natural terrain boundaries. We show that superpixels are superior to patches in classification accuracy and result in more effective navigation in complex terrain environments. Our experimental results include a study of the effect of patch and superpixel size on classification accuracy. We demonstrate that superpixels can be computed on-line on a real robot at a sufficient frame rate to support long-range sensing and planning.
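A small sketch of the superpixel pipeline the abstract describes, under the assumption that any over-segmentation respecting terrain boundaries will do: SLIC (which postdates the paper) is used here purely as a convenient stand-in, and the image is assumed to be RGB.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_features(image):
    """Over-segment the frame, then aggregate per-superpixel statistics
    for downstream traversability classification."""
    labels = slic(image, n_segments=200, compactness=10)
    feats = []
    for sp in np.unique(labels):
        pixels = image[labels == sp]          # all pixels in this superpixel
        feats.append(np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)]))
    return labels, np.array(feats)
```

Each feature vector then feeds the same kind of classifier one would use on fixed-size patches, but the segment boundaries follow the terrain rather than an arbitrary grid.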


intelligent robots and systems | 2004

Map-based priors for localization

Sang Min Oh; Sarah Tariq; Bruce N. Walker; Frank Dellaert

Localization from sensor measurements is a fundamental task for navigation. Particle filters are among the most promising candidates to provide a robust and real-time solution to the localization problem. They instantiate the localization problem as a Bayesian filtering problem and approximate the posterior density over location by a weighted sample set. In this paper, we introduce map-based priors for localization, using the semantic information available in maps to bias the motion model toward areas of higher probability. We show that such priors, under a particular assumption, can easily be incorporated into the particle filter by means of a pseudo-likelihood. The resulting filter is more reliable and more accurate. We show experimental results on a GPS-based outdoor people tracker that illustrate the approach and highlight its potential.
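The pseudo-likelihood idea can be illustrated with a minimal particle-filter update in Python. The is_walkable and map_prior functions below are hypothetical stand-ins for a semantic-map lookup, and the noise scales and toy map are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_walkable(xy):
    """Hypothetical map query: a 4 m wide east-west path around y = 0."""
    return abs(xy[1]) < 2.0

def map_prior(xy):
    """Pseudo-likelihood from the map: favor walkable regions."""
    return 0.9 if is_walkable(xy) else 0.1

def gps_likelihood(particles, gps_xy, sigma=3.0):
    """Gaussian GPS measurement likelihood per particle."""
    d2 = np.sum((particles - gps_xy) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def pf_update(particles, weights, motion, gps_xy):
    # 1. propagate through the motion model
    particles = particles + motion + rng.normal(0.0, 0.5, particles.shape)
    # 2. reweight by the map-based pseudo-likelihood
    weights = weights * np.array([map_prior(p) for p in particles])
    # 3. reweight by the sensor likelihood
    weights = weights * gps_likelihood(particles, gps_xy)
    weights = weights / weights.sum()
    # 4. resample to equal weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Step 2 is the map-based prior: it folds the semantic map into the filter exactly as an extra likelihood term, leaving the rest of the standard update untouched.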


international conference on computer vision | 2005

Learning and inference in parametric switching linear dynamic systems

Sang Min Oh; James M. Rehg; Tucker R. Balch; Frank Dellaert

We introduce parametric switching linear dynamic systems (P-SLDS) for learning and interpretation of parametrized motion, i.e., motion that exhibits systematic temporal and spatial variations. Our motivating example is the honeybee dance: bees communicate the orientation and distance to food sources through the dance angles and waggle lengths of their stylized dances. Switching linear dynamic systems (SLDS) are a compelling way to model such complex motions. However, SLDS does not provide a means to quantify systematic variations in the motion. Previously, Wilson & Bobick (1999) presented parametric HMMs, an extension to HMMs with which they successfully interpreted human gestures. Inspired by their work, we similarly extend the standard SLDS model to obtain parametric SLDS. We introduce additional global parameters that represent systematic variations in the motion, and present general expectation-maximization (EM) methods for learning and inference. In the learning phase, P-SLDS learns a canonical SLDS model from data. In the inference phase, P-SLDS simultaneously quantifies the global parameters and labels the data. We apply these methods to the automatic interpretation of honeybee dances, and present both qualitative and quantitative experimental results on actual bee tracks collected from noisy video data.
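As a toy illustration of how a global parameter can systematically transform canonical dynamics, the sketch below rotates a canonical dynamics matrix by a dance angle theta. The rotation parametrization is an assumption made for illustration; the paper's actual parametrization of the bee dance may differ.

```python
import numpy as np

def rotated_dynamics(A_canonical, theta):
    """Instantiate regime dynamics for a dance oriented at angle theta."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ A_canonical @ R.T

A_canonical = np.array([[1.0, 0.1],    # toy canonical waggle dynamics
                        [0.0, 1.0]])
A_instance = rotated_dynamics(A_canonical, np.deg2rad(30))
```

Learning fits the canonical model once; inference then searches over theta (and the switch labels) for each observed dance, which is how P-SLDS reads the communicated angle directly out of the data.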


computer vision and pattern recognition | 2006

Parameterized Duration Modeling for Switching Linear Dynamic Systems

Sang Min Oh; James M. Rehg; Frank Dellaert

We introduce an extension of switching linear dynamic systems (SLDS) with parameterized duration modeling capabilities. The proposed model allows arbitrary duration models and overcomes the limitation of the geometric distribution induced in standard SLDSs. By incorporating a duration model which reflects the data more closely, the resulting model provides reliable inference results which are robust against observation noise. Moreover, existing inference algorithms for SLDSs can be adopted with only modest additional effort in most cases where an SLDS model can be applied. In addition, we observe that duration models can vary across data sequences in certain domains, which complicates learning and inference tasks. Such variability in duration is overcome by introducing parameterized duration models. The experimental results on honeybee dance decoding tasks demonstrate the robust inference capabilities of the proposed model.
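A minimal sketch of the core idea: instead of the geometric dwell time implied by self-transitions, each regime draws an explicit segment length from a parameterized duration model. The Poisson distribution and the toy transition matrix below are illustrative assumptions only; the per-sequence rate parameter stands in for the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_segments(T, K, trans, dur_rate):
    """Sample a label sequence with explicit per-regime durations."""
    labels, z = [], 0
    while len(labels) < T:
        d = 1 + rng.poisson(dur_rate[z])     # explicit duration model
        labels.extend([z] * d)
        z = rng.choice(K, p=trans[z])        # switch after the segment ends
    return np.array(labels[:T])

trans = np.array([[0.0, 1.0], [1.0, 0.0]])   # toy: strictly alternate regimes
labels = sample_segments(100, 2, trans, dur_rate=[5.0, 10.0])
```

Letting dur_rate differ per data sequence is what the parameterized extension buys: the same canonical model explains dances whose phases run faster or slower.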


computer vision and pattern recognition | 2005

Mixture trees for modeling and fast conditional sampling with applications in vision and graphics

Frank Dellaert; Vivek Kwatra; Sang Min Oh

We introduce mixture trees, a tree-based data structure for modeling joint probability densities using a greedy hierarchical density estimation scheme. We show that the mixture tree models data efficiently at multiple resolutions, and present fast conditional sampling as one of many possible applications. In particular, the development of this data structure was spurred by a multi-target tracking application, where memory-based motion modeling calls for fast conditional sampling from large empirical densities. However, it is also suited to applications such as texture synthesis, where conditional densities play a central role. Results are presented for both of these applications.
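The following Python sketch conveys the mixture-tree idea under simplifying assumptions: nodes are fit by recursive median splits on the widest dimension, each storing a Gaussian, so samples can be drawn at a chosen resolution. The splitting rule is a simplification of the paper's greedy hierarchical estimation scheme, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(3)

class MixtureTree:
    def __init__(self, data, min_leaf=50):
        self.mean = data.mean(axis=0)
        self.cov = np.cov(data.T) + 1e-6 * np.eye(data.shape[1])
        self.children = []
        if len(data) > min_leaf:
            dim = int(np.argmax(np.ptp(data, axis=0)))   # widest dimension
            cut = np.median(data[:, dim])
            left, right = data[data[:, dim] <= cut], data[data[:, dim] > cut]
            if len(left) > 2 and len(right) > 2:
                self.weights = np.array([len(left), len(right)]) / len(data)
                self.children = [MixtureTree(left, min_leaf),
                                 MixtureTree(right, min_leaf)]

    def sample(self, depth):
        """Draw one sample from the mixture at the requested resolution."""
        if depth == 0 or not self.children:
            return rng.multivariate_normal(self.mean, self.cov)
        child = self.children[rng.choice(2, p=self.weights)]
        return child.sample(depth - 1)

tree = MixtureTree(rng.normal(size=(1000, 2)))
x = tree.sample(depth=3)   # deeper traversal = finer-resolution density
```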


international conference on robotics and automation | 2010

Learning visibility of landmarks for vision-based localization

Pablo Fernández Alcantarilla; Sang Min Oh; Gian Luca Mariottini; Luis Miguel Bergasa; Frank Dellaert

We aim to perform robust and fast vision-based localization using a pre-existing large map of the scene. A key step in localization is associating the features extracted from the image with the map elements at the current location. Although the problem of data association has greatly benefited from recent advances in appearance-based matching methods, less attention has been paid to the effective use of the geometric relations between the 3D map and the camera in the matching process. In this paper we propose to exploit the geometric relationship between the 3D map and the camera pose to determine the visibility of the features. In our approach, we model the visibility of every map feature with respect to the camera pose using a non-parametric distribution model. We learn these non-parametric distributions during the 3D reconstruction process, and develop efficient algorithms to predict the visibility of features during localization. With this approach, the matching process only uses those map features with the highest visibility score, yielding a much faster algorithm and superior localization results. We demonstrate an integrated system based on the proposed idea and highlight its potential benefits for localization in large and cluttered environments.
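A hedged sketch of non-parametric visibility prediction: each landmark stores the camera poses from which it was observed during reconstruction, and a kernel density over those poses scores visibility at localization time. The Gaussian kernel, bandwidth, and top-k cutoff are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def visibility_score(pose, observed_poses, bandwidth=2.0):
    """Kernel-density visibility of one landmark at the query pose."""
    d2 = np.sum((observed_poses - pose) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2.0 * bandwidth ** 2))))

def select_visible_features(pose, landmarks, top_k=200):
    """Keep only the landmarks most likely visible from the query pose;
    each landmark dict is assumed to carry its reconstruction-time poses."""
    scores = [visibility_score(pose, lm["poses"]) for lm in landmarks]
    order = np.argsort(scores)[::-1][:top_k]
    return [landmarks[i] for i in order]
```

Restricting matching to the top-scoring landmarks is what yields the speedup: the appearance matcher never touches features that geometry says cannot be seen from the current pose.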


national conference on artificial intelligence | 2005

Data-driven MCMC for learning and inference in switching linear dynamic systems

Sang Min Oh; James M. Rehg; Tucker R. Balch; Frank Dellaert


Archive | 2005

A variational inference method for switching linear dynamic systems

Sang Min Oh; Ananth Ranganathan; James M. Rehg; Frank Dellaert

Collaboration


Dive into Sang Min Oh's collaboration.

Top Co-Authors

Frank Dellaert
Georgia Institute of Technology

James M. Rehg
Georgia Institute of Technology

Tucker R. Balch
Georgia Institute of Technology

Dongshin Kim
Georgia Institute of Technology

Aaron F. Bobick
Georgia Institute of Technology

Bruce N. Walker
Georgia Institute of Technology

Gian Luca Mariottini
University of Texas at Arlington

Jie Sun
Georgia Institute of Technology

Sarah Tariq
Georgia Institute of Technology