Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Chuho Yi is active.

Publication


Featured research published by Chuho Yi.


Intelligent Robots and Systems | 2009

Bayesian robot localization using spatial object contexts

Chuho Yi; Il Hong Suh; Gi Hyun Lim; Byung-Uk Choi

We propose a semantic representation and Bayesian model for robot localization using spatial relations among objects that can be created by a single consumer-grade camera and odometry. We first suggest a semantic representation to be shared by human and robot. This representation consists of perceived objects and their spatial relationships, and a qualitatively defined odometry-based metric distance. We refer to this as a topological-semantic distance map. To support our semantic representation, we develop a Bayesian model for localization that enables the location of a robot to be estimated sufficiently well to navigate in an indoor environment. Extensive localization experiments in an indoor environment show that our Bayesian localization technique using a topological-semantic distance map is valid in the sense that localization accuracy improves whenever objects and their spatial relationships are detected and instantiated.
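The abstract above describes a discrete Bayesian update over a topological-semantic map. A minimal sketch of that idea follows, assuming a simplified model in which each map node lists the objects expected to be visible there; the node names, object names, and likelihood values are all illustrative, not taken from the paper.

```python
# Hypothetical topological-semantic map: each node lists objects expected
# to be visible there (names and likelihoods are illustrative placeholders).
NODE_OBJECTS = {
    "kitchen": {"fridge": 0.9, "sink": 0.8},
    "office": {"monitor": 0.9, "chair": 0.7},
    "hallway": {"door": 0.6},
}

def bayes_update(belief, observed_object, miss_prob=0.1):
    """One measurement update of a discrete Bayes filter:
    P(node | z) is proportional to P(z | node) * P(node)."""
    posterior = {}
    for node, prior in belief.items():
        # Likelihood of seeing this object at this node; a small floor
        # models false detections at nodes where the object is unexpected.
        likelihood = NODE_OBJECTS[node].get(observed_object, miss_prob)
        posterior[node] = likelihood * prior
    total = sum(posterior.values())
    return {n: p / total for n, p in posterior.items()}

# Start from a uniform prior; observing a fridge should concentrate
# belief on the kitchen node.
belief = {n: 1.0 / len(NODE_OBJECTS) for n in NODE_OBJECTS}
belief = bayes_update(belief, "fridge")
best = max(belief, key=belief.get)
```

Each detected object sharpens the belief a little more, which matches the paper's finding that accuracy improves whenever objects and their relations are instantiated.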


Systems, Man and Cybernetics | 2009

Active-semantic localization with a single consumer-grade camera

Chuho Yi; Il Hong Suh; Gi Hyun Lim; Byung-Uk Choi

This study addressed the problem of active localization, which requires massive computation. To solve the problem, we developed abstracted measurements consisting of qualitative metrics estimated by a single camera. These are contextual representations consisting of perceived landmarks and their spatial relations, and they can be shared by humans and robots. Next, we enhanced the Markov localization method to support contextual representations with which a robot's location can be sufficiently estimated. In contrast to passive methodologies, our approach actively uses a greedy technique to select a robot's action and improve localization results. The experiment was carried out in an indoor environment, and the results indicate that the proposed active-semantic localization yields more efficient localization.
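The greedy action selection described above can be sketched as picking the action whose predicted observation leaves the robot least uncertain about its location. This is a toy illustration, not the paper's model: the two candidate actions and their predicted posterior beliefs are invented for the example.

```python
import math

def entropy(belief):
    """Shannon entropy of a discrete belief; lower means more certain."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

# Hypothetical predicted posteriors over two places A and B, one per
# candidate action (values are illustrative): one view is ambiguous,
# the other contains a discriminative landmark.
PREDICTED = {
    "look_left":  {"A": 0.5, "B": 0.5},
    "look_right": {"A": 0.9, "B": 0.1},
}

def greedy_action(predicted):
    """Greedy active localization: choose the action whose predicted
    posterior has the lowest entropy."""
    return min(predicted, key=lambda a: entropy(predicted[a]))

best = greedy_action(PREDICTED)
```

Here the robot would turn toward the discriminative landmark, since the 0.9/0.1 belief has lower entropy than the ambiguous 0.5/0.5 one.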


Intelligent Robots and Systems | 2013

Semantic mapping and navigation: A Bayesian approach

Dong Wook Ko; Chuho Yi; Il Hong Suh

We propose Bayesian approaches for semantic mapping, active localization, and local navigation with affordable vision sensors. We develop a Bayesian model of an egocentric semantic map that consists of spatial object relationships and spatial node relationships. Our topological-semantic-metric (TSM) map has the characteristic that a node is one of the components of a general topological map that contains information about spatial relationships. In the localization part, view-dependent place recognition, reorientation, and active search are used for robot localization. The robot estimates its location by Bayesian filtering, which leverages the spatial relationships among observed objects. The robot can then infer the head direction needed to reach a goal in the semantic map. In the navigation part, the robot perceives navigable space with a Kinect sensor and then moves to the goal location while preserving the reference head direction. If obstacles are found in front, the robot changes its head direction to avoid them. After avoiding obstacles, the robot performs active localization and finds a new head direction to the goal location. Our Bayesian navigation program determines whether the robot should select an action that follows the current moving direction or one that avoids obstacles. We show that a mobile robot successfully navigates from the starting position to the goal node while avoiding obstacles using our proposed semantic navigation system with the TSM map.


International Journal of Advanced Robotic Systems | 2011

Indoor Place Classification Using Robot Behavior and Vision Data

Chuho Yi; Young Ceol Oh; Il Hong Suh; Byung-Uk Choi

To realize autonomous navigation of intelligent robots in a variety of settings, analysis and classification of places, and the ability to actively collect information are necessary. In this paper, visual data are organized into an orientation histogram to roughly express input images by extracting and cumulating straight lines according to direction angle. In addition, behavioral data are organized into a behavioral histogram by cumulating motions performed to avoid obstacles encountered while the robot is executing specified behavioral patterns. These visual and behavioral data are utilized as input data, and the probability that a place belongs to a specific class is calculated by designating the places already learnt by the robot as categories. The naïve Bayes classification method is employed, in which the probability is calculated that the input data belong to each specific category, and the category with the highest probability is then selected. The location of the robot is classified by merging the probabilities for visual and behavioral data. The experimental results are as follows. First, a comparison of behavioral patterns used by the robot to collect data about a place indicates that a rotational behavior pattern provides the best performance. Second, classification performance is more accurate with two types of input data than with a single type of data.
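The classification step described above multiplies the probabilities from the visual and behavioral modalities under a naive Bayes independence assumption. A minimal sketch follows; the place names, discretized histogram features, and likelihood values are illustrative stand-ins for the paper's orientation and behavioral histograms.

```python
# Hypothetical per-place likelihoods for discretized features
# (illustrative values, not from the paper's experiments).
VISION_LIK = {    # P(vision feature | place)
    "corridor": {"lines_vertical": 0.7, "lines_mixed": 0.3},
    "room":     {"lines_vertical": 0.2, "lines_mixed": 0.8},
}
BEHAVIOR_LIK = {  # P(behavior feature | place)
    "corridor": {"straight": 0.8, "turning": 0.2},
    "room":     {"straight": 0.3, "turning": 0.7},
}

def classify(vision_feat, behavior_feat, prior=0.5):
    """Naive Bayes over two modalities: multiply the independent
    likelihoods with the prior and pick the highest-posterior place."""
    scores = {}
    for place in VISION_LIK:
        scores[place] = (prior
                         * VISION_LIK[place][vision_feat]
                         * BEHAVIOR_LIK[place][behavior_feat])
    total = sum(scores.values())
    posterior = {p: s / total for p, s in scores.items()}
    return max(posterior, key=posterior.get), posterior

place, posterior = classify("lines_vertical", "straight")
```

Merging both modalities this way is what the paper reports as outperforming either input type alone.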


international conference on neural information processing | 2008

Cognitive representation and Bayesian model of spatial object contexts for robot localization

Chuho Yi; Il Hong Suh; Gi Hyun Lim; Seungdo Jeong; Byung-Uk Choi

This paper proposes a cognitive representation and Bayesian model for spatial relations among objects that can be constructed with perception data acquired by a single consumer-grade camera. We first suggest a cognitive representation to be shared by humans and robots consisting of perceived objects and their spatial relations. We then develop Bayesian models to support our cognitive representation with which the location of a robot can be estimated sufficiently well to allow the robot to navigate in an indoor environment. Based on extensive localization experiments in an indoor environment, we show that our cognitive representation is valid in the sense that the localization accuracy improves whenever new objects and their spatial relations are detected and instantiated.


The Internet of Things | 2015

Sensor fusion for accurate ego-motion estimation in a moving platform

Chuho Yi; Jungwon Cho

With the advent of Internet of Things (IoT) technology, many studies have sought to apply IoT to mobile platforms such as smartphones, robots, and moving vehicles. Estimating the ego-motion of a moving platform is an essential step in building a map and understanding the surrounding environment. In this paper, we describe an ego-motion estimation method using a vision sensor, a sensor type widely used in IoT systems. We then propose a new fusion method that improves the accuracy of motion estimation with other sensors in cases where a vision sensor alone is insufficient. Because each sensor measures data of a different dimensionality, simply adding values or taking averages biases the answer toward one of the data sources; the same problem arises with a weighted sum based on sensor covariances. To solve this problem, the proposed method uses the relatively accurate (but low-dimensional) sensor data to create artificial data that improve the accuracy of the estimate, even in unmeasured dimensions.
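One way to read the "artificial data" idea above is that an accurate low-dimensional measurement can rescale a noisy high-dimensional one. The sketch below is an illustrative interpretation, not the paper's exact algorithm: a vision sensor is assumed to estimate planar motion (dx, dy, dtheta) with a noisy scale, while a second sensor accurately measures only the traveled distance.

```python
import math

def fuse(vision_motion, accurate_distance):
    """Illustrative fusion (an assumption, not the paper's method):
    rescale the vision translation so its magnitude matches the
    accurate low-dimensional distance measurement, producing
    'artificial' full-dimensional data."""
    dx, dy, dtheta = vision_motion
    vision_dist = math.hypot(dx, dy)
    if vision_dist == 0.0:
        return (0.0, 0.0, dtheta)  # pure rotation: nothing to rescale
    scale = accurate_distance / vision_dist
    return (dx * scale, dy * scale, dtheta)

# Vision overestimates translation by 20%; the accurate sensor
# reports 1.0 m traveled. The unmeasured rotation is kept as-is.
fused = fuse((1.2, 0.0, 0.05), 1.0)
```

The accurate sensor corrects the dimension it can observe, while the vision estimate still supplies the direction and rotation it cannot.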


The Smart Computing Review | 2012

Map Representation for Robots

Chuho Yi; Seungdo Jeong; Jungwon Cho

Map-building and localization are the most basic technologies required to create autonomous mobile robots. Unfortunately, they are difficult problems to handle comprehensively. The problems can be eased by using expensive sensors or a variety of external devices, but limits remain for many environments and platforms. Therefore, many researchers have proposed a variety of methods over the years, and continue to do so today. In this paper, we first survey existing research on the map representations used in map-building and localization. We divide them into four main categories and compare the differences between them. The properties identified across the four categories can serve as useful criteria for choosing appropriate sensors or mathematical models when creating map-building and localization applications for robots.


IAS (1) | 2013

Ontology Representation and Instantiation for Semantic Map Building by a Mobile Robot

Gi Hyun Lim; Chuho Yi; Il Hong Suh; Dong Wook Ko; Seungwoo Hong

To offer sustainable robotic services, service robots must accumulate knowledge from recognition results and intelligently choose actions for services. Robust knowledge instantiation and updating from imperfect sensing data, such as misidentified percepts, is a main issue in implementing semantic robot intelligence. In this paper, a robust knowledge acquisition method is proposed that enables robots to detect false object recognitions for robust knowledge instantiation, considering spatial reasoning, temporal reasoning, movable properties, and data confidences.


International Conference on Grid and Distributed Computing | 2011

Detection and Recovery for Kidnapped-Robot Problem Using Measurement Entropy

Chuho Yi; Byung-Uk Choi

In robotics, the kidnapped robot problem commonly refers to a situation where an autonomous robot in operation is carried to an arbitrary location [1]. This is a very serious issue: because the robot computes its position only according to mathematical models, it is difficult to determine whether its reported position is accurate. In this paper, to solve the kidnapped-robot problem, we suggest a method of automatic detection and recovery using the entropy extracted from measurement likelihoods in our own semantic localization technique. We verify the usefulness of the proposed procedure via repeated indoor localization experiments.
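The detection idea above can be sketched as a threshold on the entropy of the normalized measurement likelihoods: when no hypothesized location explains the measurement well, the likelihoods flatten out and entropy rises. The threshold value and likelihood vectors below are illustrative, not the paper's tuned parameters.

```python
import math

def normalized_entropy(likelihoods):
    """Entropy of the normalized measurement likelihoods, scaled
    to [0, 1] by dividing by the maximum possible entropy."""
    total = sum(likelihoods)
    probs = [l / total for l in likelihoods]
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))

def kidnapped(likelihoods, threshold=0.9):
    """Flag a possible kidnapping when the measurement likelihoods
    are near-uniform, i.e. no location fits the observation well."""
    return normalized_entropy(likelihoods) > threshold

# Peaked likelihoods: the robot is well localized, no alarm.
ok = kidnapped([0.9, 0.05, 0.05])      # False
# Flat likelihoods: measurements match nowhere, trigger recovery.
lost = kidnapped([0.34, 0.33, 0.33])   # True
```

On detection, a recovery step (e.g. global re-localization) would be triggered, as the paper proposes.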


Multimedia and Ubiquitous Engineering | 2013

Age-Group Classification for Family Members Using Multi-Layered Bayesian Classifier with Gaussian Mixture Model

Chuho Yi; Seungdo Jeong; Kyeong-Soo Han; Han-Kyu Lee

This paper proposes a TV viewer age-group classification method for family members based on TV watching history. User profiling based on watching history is very complex and difficult to achieve. To overcome these difficulties, we propose a probabilistic approach that models TV watching history with a Gaussian mixture model (GMM) and implements a feature-selection method that identifies useful features for classifying the appropriate age-group class. Then, to improve the accuracy of age-group classification, a multi-layered Bayesian classifier is applied for demographic analysis. Extensive experiments showed that our multi-layered classifier with GMM is valid. The accuracy of classification was improved when certain features were singled out and demographic properties were applied.
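The layered classifier described above can be sketched as evaluating each class's Gaussian mixture likelihood, picking the most likely coarse class first, and then refining it with a second layer. The feature (a single "typical viewing hour"), the class names, and all mixture parameters below are illustrative assumptions, not values learned from real watching history.

```python
import math

def gauss(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gmm_pdf(x, components):
    """Likelihood under a Gaussian mixture model: weighted sum of
    component densities, components given as (weight, mu, sigma)."""
    return sum(w * gauss(x, mu, sigma) for w, mu, sigma in components)

# Hypothetical GMMs over a single viewing-hour feature (illustrative).
LAYER1 = {  # first layer: coarse classes
    "adult": [(0.6, 21.0, 2.0), (0.4, 13.0, 3.0)],
    "child": [(1.0, 17.0, 1.5)],
}
LAYER2 = {  # second layer: refine the adult class into age groups
    "adult": {"30s": [(1.0, 22.0, 1.5)], "60s": [(1.0, 19.0, 2.0)]},
}

def classify(x):
    """Multi-layered Bayesian classification: pick the most likely
    coarse class, then refine it with the second-layer GMMs."""
    coarse = max(LAYER1, key=lambda c: gmm_pdf(x, LAYER1[c]))
    if coarse in LAYER2:
        fine = max(LAYER2[coarse], key=lambda c: gmm_pdf(x, LAYER2[coarse][c]))
        return coarse, fine
    return coarse, None

label = classify(22.0)
```

Splitting the decision across layers keeps each classifier small, which mirrors the paper's use of a multi-layered structure to improve age-group accuracy.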

Collaboration


Dive into Chuho Yi's collaborations.

Top Co-Authors

Jungwon Cho, Jeju National University
Han-Kyu Lee, Electronics and Telecommunications Research Institute
Kyeong-Soo Han, Electronics and Telecommunications Research Institute