Kyung-Wook Park
Hanyang University
Publications
Featured research published by Kyung-Wook Park.
international conference on control, automation and systems | 2007
Seung-Ho Baeg; Jae-Han Park; Jaehan Koh; Kyung-Wook Park; Moon-Hong Baeg
This paper is concerned with a prototype smart home environment built in the research building of the Korea Institute of Industrial Technology (KITECH) to demonstrate the practicality of a robot-assisted future home environment. Key functionalities that a home service robot must provide are localization, navigation, object recognition, and object handling. A considerable amount of research has been conducted to make service robots perform these operations with their own sensors, actuators, and knowledge databases. Even equipped with all those heavy sensors, actuators, and a database, such robots have performed the given tasks only in limited environments, or have shown limited capabilities in natural environments. We initiated a smart home environment project in which light-weight service robots provide reliable services by interacting with the environment through wireless sensor networks. This environment consists of three main components: smart objects carrying a radio frequency identification (RFID) tag and smart appliances with sensor network functionality; a home server that connects the smart devices and maintains the information needed for reliable services; and the service robots that perform tasks in collaboration with the environment. In this paper, we introduce the various types of smart devices developed to assist the robot by providing sensing and actuation, and present our approach to integrating these devices into the smart home environment. Finally, we discuss the future directions of our project.
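A minimal sketch of how the three components described in the abstract could be modeled in code; all class and field names here are hypothetical illustrations, not the authors' actual KITECH implementation.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the abstract's three components; the names and
# fields are illustrative assumptions, not the paper's implementation.

@dataclass
class SmartObject:
    rfid_tag: str             # RFID code attached to the physical object
    name: str                 # human-readable label, e.g. "coffee mug"
    visual_descriptor: bytes  # serialized MPEG-7 descriptor for recognition

@dataclass
class HomeServer:
    registry: dict = field(default_factory=dict)  # rfid_tag -> SmartObject

    def register(self, obj: SmartObject) -> None:
        self.registry[obj.rfid_tag] = obj

    def lookup(self, rfid_tag: str) -> SmartObject | None:
        # The robot queries the server instead of carrying a full onboard
        # knowledge database, which is what keeps the robot light-weight.
        return self.registry.get(rfid_tag)

server = HomeServer()
server.register(SmartObject("E200-0001", "coffee mug", b"..."))
print(server.lookup("E200-0001").name)  # -> coffee mug
```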
international symposium on consumer electronics | 2014
Kyung-Wook Park; Hyun-Ki Hong; Dong-Ho Lee
In this paper, we focus on generating compact but efficient video signatures on mobile devices so that users can quickly find out whether near-duplicates already exist in a social network system when they upload a video. To this end, the proposed method employs the idea of an inverted index, one of the most popular text-retrieval structures. Experimental results show that our method achieves results comparable to a state-of-the-art method while requiring far lower memory and computation costs.
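A hedged sketch of near-duplicate lookup via an inverted index, as the abstract describes; quantizing a video into integer "visual words" and the 0.5 overlap threshold are assumptions standing in for the paper's actual compact signature.

```python
from collections import defaultdict

index = defaultdict(set)  # visual word -> ids of videos containing it

def add_video(video_id: str, words: list[int]) -> None:
    for w in set(words):
        index[w].add(video_id)

def near_duplicates(words: list[int], threshold: float = 0.5) -> list[str]:
    # Each indexed video votes once per shared word; report videos that
    # share at least `threshold` of the query's distinct words.
    votes = defaultdict(int)
    query = set(words)
    for w in query:
        for vid in index[w]:
            votes[vid] += 1
    return [vid for vid, v in votes.items() if v / len(query) >= threshold]

add_video("v1", [3, 17, 42, 42, 99])
print(near_duplicates([3, 17, 42, 7]))  # -> ['v1']
```

Lookup touches only the index entries for the query's words, which is what keeps the memory and computation costs low on a mobile device.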
database systems for advanced applications | 2007
Kyung-Wook Park; Jin-Woo Jeong; Dong-Ho Lee
One of the big issues facing current content-based image retrieval is how to automatically extract high-level concepts from images. In this paper, we present an efficient system that automatically extracts high-level concepts from images by using ontologies and semantic inference rules. In our method, MPEG-7 visual descriptors are used to extract the visual features of an image, and the visual features are mapped to semi-concepts via a mapping algorithm. We also build visual and animal ontologies to bridge the semantic gap. The visual ontology allows the definition of relationships among the classes describing the visual features and holds the semi-concepts as property values. The animal ontology can be exploited to identify the high-level concept in an image. The semantic inference rules are then applied to the ontologies to extract the high-level concept. Finally, we evaluate the proposed system on an image data set containing various animal objects and discuss the limitations of our system.
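An illustrative sketch of the feature-to-semi-concept mapping step; the prototype colors and labels are invented examples, not the paper's mapping table or its actual MPEG-7 descriptor handling.

```python
import math

SEMI_CONCEPTS = {          # semi-concept label -> prototype RGB value (assumed)
    "yellow": (230, 200, 40),
    "green":  (40, 160, 60),
    "gray":   (128, 128, 128),
}

def to_semi_concept(dominant_rgb: tuple[int, int, int]) -> str:
    # Map an MPEG-7 dominant-color value to the nearest semi-concept label.
    return min(SEMI_CONCEPTS,
               key=lambda c: math.dist(SEMI_CONCEPTS[c], dominant_rgb))

print(to_semi_concept((220, 190, 50)))  # -> yellow
```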
robot and human interactive communication | 2007
Seung-Ho Baeg; Jae-Han Park; Jaehan Koh; Kyung-Wook Park; Moon-Hong Baeg
Over the past few years, many research groups have attempted to build smart environments. Recently, several groups have been actively conducting research into the construction of smart home environments in which service robots act as assistants. A home service robot must have localization, navigation, object recognition, and object handling functionalities. These operations are usually performed by the service robot on its own, so the robot can perform the given tasks only in a limited environment and has shown limited capabilities in natural environments. We initiated a smart home environment project, RoboMaidHome, in which light-weight service robots provide reliable services by interacting with the environment through wireless sensor networks. This environment consists of three main components: (i) smart objects and smart appliances with sensor network functionality; (ii) a home server that connects the smart devices and maintains the information needed for reliable services; and (iii) service robots that perform tasks in collaboration with the environment. In this paper, we explain the basic concepts and the architecture of our project and address some key issues related to it. Finally, we discuss the future directions of our project.
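A minimal sketch of the robot-to-home-server interaction the architecture implies; the JSON message format and the registry contents are hypothetical illustrations, not RoboMaidHome's actual protocol.

```python
import json

def handle_request(registry: dict, message: str) -> str:
    """Home-server side: answer a robot's query about a tagged object."""
    req = json.loads(message)
    info = registry.get(req["rfid_tag"], {})
    return json.dumps({"rfid_tag": req["rfid_tag"], "info": info})

registry = {"E200-0001": {"name": "remote control", "room": "living room"}}
reply = handle_request(registry, json.dumps({"rfid_tag": "E200-0001"}))
print(reply)
# -> {"rfid_tag": "E200-0001", "info": {"name": "remote control", ...}}
```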
international conference on control, automation and systems | 2007
Jae-Han Park; Seung-Ho Baeg; Jaehan Koh; Kyung-Wook Park; Moon-Hong Baeg
This paper describes a new object recognition system for service robots in a smart environment full of RFID tags and connected by wireless networks. Object recognition is one of the basic functionalities a service robot should provide, and many researchers have attempted to make service robots recognize objects through vision processing in natural environments. However, no conventional vision system can recognize target objects robustly in a real-world workspace. In our smart environment, flexibility and robustness in object recognition are provided with the help of RFID tags and communication networks: RFID tags provide the presence and identity of an object, and the robot then recognizes the object through vision processing based on the detected RFID code and the downloaded visual descriptor information. For feature descriptors, we adopted MPEG-7 visual descriptors due to their concise and unambiguous description of complex media content. This paper focuses on developing our object recognition engine based on visual descriptor information in the smart environment. To this end, we propose a new object recognition architecture for service robots and present an implemented matching algorithm based on MPEG-7 visual descriptors such as color and texture. Experimental results show that the proposed system performs well in terms of recognition rate. This object recognition system will be incorporated into our service robot platform, and a pose estimation module is to be included in its next version.
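A sketch of how RFID detection can narrow descriptor matching, as the abstract describes; the plain L1 histogram distance stands in for the paper's actual MPEG-7 matching measure, and the function names are assumptions.

```python
import numpy as np

def l1_distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.abs(a - b).sum())

def recognize(region_hist: np.ndarray,
              detected_tags: list[str],
              descriptor_db: dict[str, np.ndarray]) -> str:
    # Only objects whose RFID tags were actually detected are candidates,
    # so the vision stage compares against a handful of descriptors rather
    # than the whole database.
    candidates = {t: descriptor_db[t] for t in detected_tags}
    return min(candidates,
               key=lambda t: l1_distance(candidates[t], region_hist))

db = {"E200-0001": np.array([4, 1, 0]), "E200-0002": np.array([0, 2, 5])}
print(recognize(np.array([3, 1, 1]), ["E200-0001", "E200-0002"], db))
# -> E200-0001
```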
intelligent robots and systems | 2007
Seung-Ho Baeg; Jae-Han Park; Jaehan Koh; Kyung-Wook Park; Moon-Hong Baeg
A prototype smart home environment for service robots has been constructed in the research building of the Korea Institute of Industrial Technology (KITECH) to demonstrate the feasibility of a robot-assisted future home environment. An inexpensive service robot outfitted with a camera, a radio frequency identification (RFID) reader, and a communication module is installed in the building as a future service robot system that is cheap but robust. In our robotic system, the RFID reader obtains rough location data and object information, and the robot then performs the object recognition scheme to obtain the exact position of the object in order to grasp it. Due to their concise and unambiguous description of complex media content, we adopted MPEG-7 visual descriptors for our object recognition system. In this paper, we propose a fast object recognition system for our smart home environment project based on MPEG-7 visual descriptors, including color and texture. Experimental results show that the proposed system performs well in terms of both speed and recognition rate. This object recognition system will be incorporated into our mobile robotics platform, and a shape descriptor is to be included in the next version of our object recognition system.
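Since this paper matches on both color and texture descriptors, a natural sketch is a weighted combination of the two distances; the 0.6/0.4 weights and the assumption that each descriptor is pre-normalized to a comparable range are placeholders, not tuned values from the paper.

```python
import numpy as np

def combined_distance(color_a, color_b, texture_a, texture_b,
                      w_color: float = 0.6, w_texture: float = 0.4) -> float:
    # Assumes both descriptors were normalized beforehand so the two L1
    # distances live on comparable scales; otherwise one term dominates.
    d_color = np.abs(np.asarray(color_a) - np.asarray(color_b)).sum()
    d_texture = np.abs(np.asarray(texture_a) - np.asarray(texture_b)).sum()
    return w_color * d_color + w_texture * d_texture

print(combined_distance([0.2, 0.8], [0.3, 0.7], [0.5, 0.5], [0.4, 0.6]))
```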
international conference on control, automation and systems | 2008
Jae-Han Park; Kyung-Wook Park; Seung-Ho Baeg; Moon-Hong Baeg
Object pose estimation from stereo images with unknown correspondence is a thoroughly studied problem in the computer vision and robot engineering literature. In particular, detecting suitable corresponding points across images is essential for object pose estimation, and many approaches have been proposed. Among them, local feature descriptors, which describe feature points that are robust to image deformations, are one of the most promising approaches and have been applied successfully to the stable feature detection problem. Although descriptors such as SIFT show superior performance, they are based on luminance rather than color information, and are therefore unstable under photometric variations such as shadows, highlights, and illumination changes. We therefore propose a novel method that extracts interest points insensitive to both geometric and photometric variations in order to estimate a more accurate object pose. In this method, we use photometric quasi-invariant features based on the dichromatic reflection model to achieve photometric invariance, while SIFT provides geometric invariance. The performance of the proposed method is compared with other local descriptors. Experimental results show that our method matches or outperforms them under various imaging conditions. Finally, we estimate object pose using the features extracted by the proposed method.
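A sketch of the core idea: run SIFT on a photometrically quasi-invariant channel instead of plain luminance. Using the hue channel as that invariant is a simplification of the paper's dichromatic-model quasi-invariants, and the image path is hypothetical.

```python
import cv2
import numpy as np

def quasi_invariant_keypoints(bgr: np.ndarray):
    # Hue is largely insensitive to shadows and shading under white
    # illumination, a crude stand-in for dichromatic quasi-invariants.
    hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(hue, None)

img = cv2.imread("object.png")  # hypothetical test image
keypoints, descriptors = quasi_invariant_keypoints(img)
print(len(keypoints))
```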
semantics, knowledge and grid | 2007
Kyung-Wook Park; Jeong Ho Lee; Young Shik Moon; Sung Han Park; Dong-Ho Lee
The need for automatic video annotation and summarization techniques has grown because digital videos are becoming available at an ever-increasing rate. In this paper, we present an automatic video annotation and summarization system that employs ontologies and semantic inference rules to facilitate video retrieval. In our work, high-level concepts at the shot, group, scene, and video levels are automatically extracted by applying semantic inference rules to the VideoAnnotation ontology and the object ontology. Finally, we show the retrieval effectiveness of our approach and discuss future work.
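A toy illustration of rule-based concept propagation across levels; the rule and concept names are invented examples, not drawn from the paper's ontologies.

```python
def scene_concepts(shot_concepts: list[set[str]]) -> set[str]:
    """Infer scene-level concepts from the concepts of the scene's shots."""
    inferred = set().union(*shot_concepts)
    # Example inference rule: a scene whose shots contain both "crowd" and
    # "ball" also receives the higher-level concept "sports".
    if {"crowd", "ball"} <= inferred:
        inferred.add("sports")
    return inferred

print(scene_concepts([{"crowd"}, {"ball", "grass"}]))
# -> {'crowd', 'ball', 'grass', 'sports'}
```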
acm multimedia | 2007
Jin-Woo Jeong; Kyung-Wook Park; Oukseh Lee; Dong-Ho Lee
Extracting high-level semantic concepts from low-level visual features of images is a very challenging research problem. Traditional machine learning approaches extract only fragmentary information from images, and their performance is still not satisfactory. In this paper, we propose a novel system that automatically extracts high-level concepts such as spatial relationships or natural-enemy relationships from images using a combination of ontologies and SVM classifiers. Our system consists of two phases. In the first phase, visual features are mapped to intermediate-level concepts (e.g., yellow, 45° angular stripes), and a set of these concepts is then classified into the relevant object concept (e.g., tiger) by SVM classifiers. A revision module is used in this phase to improve the accuracy of classification. In the second phase, based on the extracted visual information and a domain ontology, we deduce semantic relationships such as spatial and natural-enemy relationships between multiple objects in an image. Finally, we evaluate the proposed system using color images covering about 20 object concepts.
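A sketch of the first phase's second step, classifying sets of intermediate-level concepts into object concepts with an SVM; the two-sample training set and concept names are toy assumptions, not the paper's data.

```python
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import SVC

# Encode sets of intermediate-level concepts as binary feature vectors.
mlb = MultiLabelBinarizer()
X = mlb.fit_transform([{"yellow", "stripes"}, {"gray", "trunk"}])
y = ["tiger", "elephant"]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(mlb.transform([{"yellow", "stripes"}])))  # -> ['tiger']
```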
asian semantic web conference | 2006
Kyung-Wook Park; Dong-Ho Lee
One of the big issues facing current content-based image retrieval is how to automatically extract semantic information from images. In this paper, we propose an efficient method that automatically extracts semantic information from images by using ontologies and semantic inference rules. In our method, MPEG-7 visual descriptors are used to extract the visual features of an image, which are mapped to semi-concept values. We also introduce the visual and animal ontologies that are built to bridge the semantic gap. The visual ontology facilitates the mapping between visual features and semi-concept values, and allows the definition of relationships between the classes describing the visual features. The animal ontology, representing the animal taxonomy, can be exploited to identify the object in an image. We also propose semantic inference rules that automatically extract high-level concepts from images when applied to the visual and animal ontologies. Finally, we discuss the limitations of the proposed method and future work.
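A toy version of the rule-application step described above, complementing the semi-concept mapping sketch given earlier for the DASFAA 2007 paper; the rule bodies are invented examples, not the paper's actual inference rules.

```python
RULES = [
    # (required semi-concepts, inferred high-level concept) - assumed rules
    ({"yellow", "black-stripes"}, "tiger"),
    ({"gray", "big-ears"}, "elephant"),
]

def infer_concept(semi_concepts: set[str]) -> str | None:
    # Fire the first rule whose required semi-concepts are all present.
    for required, concept in RULES:
        if required <= semi_concepts:
            return concept
    return None

print(infer_concept({"yellow", "black-stripes", "grass"}))  # -> tiger
```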