Gi Hyun Lim
University of Aveiro
Publication
Featured research published by Gi Hyun Lim.
Systems, Man and Cybernetics | 2011
Gi Hyun Lim; Il Hong Suh; Hyo-Won Suh
A significant obstacle for service robots is the execution of complex tasks in real environments. For example, it is not easy for service robots to find objects that are partially observable and are located at a place which is not identical but near the place where the robots saw them previously. To overcome this challenge effectively, robot knowledge represented as a semantic network can be extremely useful. This paper presents an ontology-based unified robot knowledge framework that integrates low-level data with high-level knowledge for robot intelligence. The framework consists of two sections: knowledge description and knowledge association. Knowledge description comprehensively integrates robot knowledge derived from low-level knowledge regarding perceptual features, part objects, metric maps, and primitive behaviors, as well as high-level knowledge about perceptual concepts, objects, semantic maps, tasks, and contexts. Knowledge association uses logical inference with both unidirectional and bidirectional rules. This characteristic enables reasoning to be performed even when only partial information is available. Experimental results that demonstrate the advantages of the proposed knowledge framework are also presented.
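The knowledge-association idea above can be sketched in a few lines: unidirectional rules fire only forward, while bidirectional rules also fire in reverse, which is what lets inference proceed from partial information. The predicates and rules below are illustrative placeholders, not the paper's actual ontology.

```python
# Minimal sketch of fixed-point inference with unidirectional and
# bidirectional rules. Facts and rule contents are invented examples.

def infer(facts, uni_rules, bi_rules):
    """Apply rules to a fact set until no new fact can be derived."""
    facts = set(facts)
    # Bidirectional rules are applied in both directions.
    rules = list(uni_rules) + list(bi_rules) + [(b, a) for (a, b) in bi_rules]
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Unidirectional: a low-level percept implies a high-level concept.
uni = [("percept:red_blob", "object:cup")]
# Bidirectional: an object symbol and its typical location imply each other.
bi = [("object:cup", "place:kitchen_table")]

# Partial information: only the place is known, yet the object is inferred.
result = infer({"place:kitchen_table"}, uni, bi)
```

Running the reversed bidirectional rule derives `object:cup` from the place alone, which a purely unidirectional rule base could not do.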
Intelligent Robots and Systems | 2007
Il Hong Suh; Gi Hyun Lim; Wonil Hwang; Hyo-Won Suh; Jung-Hwa Choi; Young-Tack Park
An ontology-based multi-layered robot knowledge framework (OMRKF) is proposed to implement robot intelligence in real robot environments. OMRKF consists of four classes of knowledge (KClasses), axioms, and two types of rules. The four KClasses, covering perception, model, activity, and context, are organized in a hierarchy of three knowledge levels (KLevels) and three ontology layers (OLayers). The axioms specify the semantics of concepts and the relational constraints between ontological elements in each OLayer. One type of rule relates concepts in the same KClass but at different KLevels, and is used for unidirectional reasoning. The other type associates concepts across different KLevels and KClasses, and is used for bidirectional reasoning. These features enable OMRKF to let a robot integrate its knowledge, from sensor data and primitive behaviors up to symbolic data and contextual information, regardless of the class of knowledge. To show the validity of the proposed OMRKF, several experimental results are presented in which queries are answered using both unidirectional and bidirectional rules, even with partial and uncertain information.
Intelligent Robots and Systems | 2009
Chuho Yi; Il Hong Suh; Gi Hyun Lim; Byung-Uk Choi
We propose a semantic representation and Bayesian model for robot localization using spatial relations among objects that can be created by a single consumer-grade camera and odometry. We first suggest a semantic representation to be shared by human and robot. This representation consists of perceived objects and their spatial relationships, and a qualitatively defined odometry-based metric distance. We refer to this as a topological-semantic distance map. To support our semantic representation, we develop a Bayesian model for localization that enables the location of a robot to be estimated sufficiently well to navigate in an indoor environment. Extensive localization experiments in an indoor environment show that our Bayesian localization technique using a topological-semantic distance map is valid in the sense that localization accuracy improves whenever objects and their spatial relationships are detected and instantiated.
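The Bayesian localization model described above can be illustrated with a toy discrete filter: detecting an object whose typical place is known sharpens the belief over the robot's location. The places, objects, and likelihood values below are invented for illustration, not taken from the paper.

```python
# Toy Bayesian update over discrete places: posterior ∝ prior × likelihood.
# Places and observation likelihoods are illustrative placeholders.

def bayes_update(belief, likelihood):
    """Multiply the prior by the observation likelihood and renormalize."""
    post = {p: belief[p] * likelihood.get(p, 1e-6) for p in belief}
    z = sum(post.values())
    return {p: v / z for p, v in post.items()}

places = ["hall", "kitchen", "office"]
belief = {p: 1.0 / len(places) for p in places}   # uniform prior

# P(observe "mug on table" | place): mugs are mostly seen in the kitchen.
mug_likelihood = {"hall": 0.05, "kitchen": 0.8, "office": 0.15}
belief = bayes_update(belief, mug_likelihood)
best = max(belief, key=belief.get)                # most probable place
```

Each detected object and spatial relation contributes one such update, which is why localization accuracy improves whenever new objects are instantiated.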
Intelligent Robots and Systems | 2014
Miguel Oliveira; Gi Hyun Lim; Luís Seabra Lopes; S. Hamidreza Kasaei; Ana Maria Tomé; Aneesh Chauhan
This paper addresses the problem of grounding semantic representations in intelligent service robots. In particular, this work contributes to two important aspects, namely the anchoring of object symbols into the perception of the objects and the grounding of object category symbols into the perception of known instances of the categories. The paper discusses memory requirements for storing both semantic and perceptual data and, based on the analysis of these requirements, proposes an approach based on two memory components, namely a semantic memory and a perceptual memory. The perception, memory, learning and interaction capabilities, and the perceptual memory, are the main focus of the paper. Three main design options address the key computational issues involved in processing and storing perception data: a lightweight NoSQL database is used to implement the perceptual memory; a thread-based approach with zero-copy transport of messages is used in implementing the modules; and a multiplexing scheme for the processing of the different objects in the scene enables parallelization. The system is designed to acquire new object categories in an incremental and open-ended way based on user-mediated experiences. The system is fully integrated in a broader robot system comprising everything from low-level control and reactivity to high-level reasoning and learning.
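The two-memory split can be sketched as follows: a semantic memory holds symbolic facts, a perceptual memory holds raw view data keyed by object identifier (a NoSQL store in the paper; a plain dictionary here), and a shared object symbol anchors the two. All class names and data are illustrative, not the system's API.

```python
# Sketch of a dual-memory design: symbolic facts in one store, perceptual
# views in another, linked by a shared object identifier ("anchoring").

class SemanticMemory:
    def __init__(self):
        self.facts = set()                 # (subject, predicate, object) triples

    def assert_fact(self, s, p, o):
        self.facts.add((s, p, o))

class PerceptualMemory:
    def __init__(self):
        self.views = {}                    # object id -> list of raw views

    def store_view(self, obj_id, view):
        self.views.setdefault(obj_id, []).append(view)

sem, per = SemanticMemory(), PerceptualMemory()
# The symbol "obj1" anchors the semantic fact to its perceptual data.
sem.assert_fact("obj1", "instance_of", "Mug")
per.store_view("obj1", [0.12, 0.55, 0.33])   # stand-in for a point-cloud view
```

Keeping the bulky perceptual data out of the semantic store is what makes the lightweight-database and multiplexing choices above workable.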
Systems, Man and Cybernetics | 2009
Chuho Yi; Il Hong Suh; Gi Hyun Lim; Byung-Uk Choi
This study addressed the problem of active localization, which requires massive computation. To solve the problem, we developed abstracted measurements that consist of qualitative metrics estimated by a single camera. These are contextual representations consisting of perceived landmarks and their spatial relations, and they can be shared by humans and robots. Next, we enhanced the Markov localization method to support contextual representations with which a robot's location can be sufficiently estimated. In contrast to passive methodologies, our approach actively uses a greedy technique to select a robot's action and improve localization results. The experiment was carried out in an indoor environment, and the results indicate that the proposed active-semantic localization yields more efficient localization.
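The greedy action selection described above can be sketched as picking the action whose expected observation most reduces belief entropy. The actions, places, and observation models below are invented for illustration.

```python
import math

# Sketch of greedy active localization: choose the sensing action that
# minimizes the expected entropy of the posterior belief.

def entropy(belief):
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def expected_entropy(belief, obs_model):
    """obs_model[obs][place] = P(obs | place); average posterior entropy."""
    h = 0.0
    for obs, lik in obs_model.items():
        p_obs = sum(belief[pl] * lik[pl] for pl in belief)
        if p_obs == 0:
            continue
        post = {pl: belief[pl] * lik[pl] / p_obs for pl in belief}
        h += p_obs * entropy(post)
    return h

belief = {"A": 0.5, "B": 0.5}
actions = {
    # "look_left" sees a landmark mostly visible from A -> informative.
    "look_left":  {"seen": {"A": 0.9, "B": 0.1}, "none": {"A": 0.1, "B": 0.9}},
    # "look_right" gives the same reading everywhere -> uninformative.
    "look_right": {"seen": {"A": 0.5, "B": 0.5}, "none": {"A": 0.5, "B": 0.5}},
}
best_action = min(actions, key=lambda a: expected_entropy(belief, actions[a]))
```

The greedy rule picks `look_left`, because observing the A-side landmark (or its absence) collapses the belief, while `look_right` leaves it unchanged.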
Künstliche Intelligenz | 2014
Joachim Hertzberg; Jianwei Zhang; Liwei Zhang; Sebastian Rockel; Bernd Neumann; Jos Lehmann; Krishna Sandeep Reddy Dubba; Anthony G. Cohn; Alessandro Saffiotti; Federico Pecora; Masoumeh Mansouri; Štefan Konečný; Martin Günther; Sebastian Stock; Luís Seabra Lopes; M. Oliveira; Gi Hyun Lim; Hamidreza Kasaei; Vahid Mokhtari; Lothar Hotz; Wilfried Bohlken
This paper reports on the aims, the approach, and the results of the European project RACE. The project aim was to enhance the behavior of an autonomous robot by having the robot learn from conceptualized experiences of previous performance, based on initial models of the domain and its own actions in it. This paper introduces the general system architecture; it then sketches some results in detail regarding hybrid reasoning and planning used in RACE, and instances of learning from the experiences of real robot task execution. Enhancement of robot competence is operationalized in terms of performance quality and description length of the robot instructions, and such enhancement is shown to result from the RACE system.
Journal of Intelligent and Robotic Systems | 2015
S. Hamidreza Kasaei; Miguel Oliveira; Gi Hyun Lim; Luís Seabra Lopes; Ana Maria Tomé
3D object detection and recognition is increasingly used for manipulation and navigation tasks in service robots. It involves segmenting the objects present in a scene, estimating a feature descriptor for the object view and, finally, recognizing the object view by comparing it to the known object categories. This paper presents an efficient approach capable of learning and recognizing object categories in an interactive and open-ended manner. In this paper, “open-ended” implies that the set of object categories to be learned is not known in advance. The training instances are extracted from on-line experiences of a robot, and thus become gradually available over time, rather than at the beginning of the learning process. This paper focuses on two state-of-the-art questions: (1) How to automatically detect, conceptualize and recognize objects in 3D scenes in an open-ended manner? (2) How to acquire and use high-level knowledge obtained from the interaction with human users, namely when they provide category labels, in order to improve the system performance? This approach starts with a pre-processing step to remove irrelevant data and prepare a suitable point cloud for the subsequent processing. Clustering is then applied to detect object candidates, and object views are described based on a 3D shape descriptor called spin-image. Finally, a nearest-neighbor classification rule is used to predict the categories of the detected objects. A leave-one-out cross validation algorithm is used to compute precision and recall, in a classical off-line evaluation setting, for different system parameters. Also, an on-line evaluation protocol is used to assess the performance of the system in an open-ended setting. Results show that the proposed system is able to interact with human users, learning new object categories continuously over time.
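The open-ended learning loop above reduces to a simple structure: object views are feature vectors (standing in for spin-image descriptors), categories are sets of stored instances, and a nearest-neighbor rule predicts the category. The vectors and labels below are made up for illustration.

```python
# Compact sketch of open-ended nearest-neighbor category learning:
# categories are created on first teaching, not fixed in advance.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class OpenEndedClassifier:
    def __init__(self):
        self.memory = {}                      # category -> list of stored views

    def teach(self, label, view):
        """User-mediated teaching: store the view, creating the category
        on first use (the label set grows at run time)."""
        self.memory.setdefault(label, []).append(view)

    def predict(self, view):
        best, best_d = None, float("inf")
        for label, views in self.memory.items():
            for v in views:
                d = distance(view, v)
                if d < best_d:
                    best, best_d = label, d
        return best

clf = OpenEndedClassifier()
clf.teach("mug",  (0.9, 0.1))
clf.teach("book", (0.1, 0.9))
clf.teach("vase", (0.5, 0.5))                 # new category added incrementally
pred = clf.predict((0.85, 0.2))
```

Because training instances arrive gradually from on-line experiences, the instance memory grows over time exactly as `teach` is called here.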
Intelligent Service Robotics | 2010
Gi Hyun Lim; Il Hong Suh
Robot knowledge is considered to endow service robots with intelligence. In real environments, robot knowledge needs to represent a dynamically changing world. Despite its advantages for the semantic knowledge of service robots, robot knowledge may be instantiated and updated from imperfect sensing data, such as misidentifications by object recognition. When a commercially available visual recognition system is used, incorrect knowledge instances are frequently created and changed due to object misidentification and/or recognition failures. In this work, a robust semantic knowledge handling method is proposed that instantiates and updates robot knowledge through logical inference under imperfect object recognition, by estimating the confidence of the object recognition results. Two properties are applied to detect misidentifications during logical inference: temporal reasoning to represent relationships between time intervals, and statistical reasoning over the confidence of the object recognition results. To show the validity of the proposed method, experimental results are presented in which a commercial visual recognition system is employed.
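The combination of temporal and statistical reasoning can be sketched as a filter: a detection only becomes a knowledge instance if it persists across frames and its average recognizer confidence clears a threshold. The thresholds and the detection stream below are illustrative placeholders, not the paper's actual criteria.

```python
# Sketch of robust knowledge instantiation under imperfect recognition:
# require temporal persistence AND sufficient mean confidence before a
# detection is promoted to a knowledge instance.

def accept_detections(stream, min_hits=3, min_conf=0.6):
    """stream: time-ordered list of (object_id, confidence) per frame."""
    hits, total = {}, {}
    for obj, conf in stream:
        hits[obj] = hits.get(obj, 0) + 1
        total[obj] = total.get(obj, 0.0) + conf
    return {obj for obj in hits
            if hits[obj] >= min_hits and total[obj] / hits[obj] >= min_conf}

stream = [
    ("cup", 0.8), ("cup", 0.7), ("cup", 0.9),   # stable and confident -> accept
    ("phantom", 0.9),                            # one-frame misidentification
    ("box", 0.3), ("box", 0.4), ("box", 0.2),    # persistent but low confidence
]
known = accept_detections(stream)
```

The one-frame `phantom` fails the temporal test and the low-confidence `box` fails the statistical one, so only `cup` is instantiated.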
Robotics and Autonomous Systems | 2016
Miguel Oliveira; Luís Seabra Lopes; Gi Hyun Lim; S. Hamidreza Kasaei; Ana Maria Tomé; Aneesh Chauhan
This paper describes a 3D object perception and perceptual learning system developed for a complex artificial cognitive agent working in a restaurant scenario. This system, developed within the scope of the European project RACE, integrates detection, tracking, learning and recognition of tabletop objects. Interaction capabilities were also developed to enable a human user to take the role of instructor and teach new object categories. Thus, the system learns in an incremental and open-ended way from user-mediated experiences. Based on the analysis of memory requirements for storing both semantic and perceptual data, a dual memory approach, comprising a semantic memory and a perceptual memory, was adopted. The perceptual memory is the central data structure of the described perception and learning system. The goal of this paper is twofold: on one hand, we provide a thorough description of the developed system, starting with motivations, cognitive considerations and architecture design, then providing details on the developed modules, and finally presenting a detailed evaluation of the system; on the other hand, we emphasize the crucial importance of the Point Cloud Library (PCL) for developing such a system. (This paper is a revised and extended version of Oliveira et al. (2014).)
Robot and Human Interactive Communication | 2014
Gi Hyun Lim; Miguel Oliveira; Vahid Mokhtari; S. Hamidreza Kasaei; Aneesh Chauhan; Luís Seabra Lopes; Ana Maria Tomé
Intelligent service robots should be able to improve their knowledge from accumulated experiences through continuous interaction with the environment, and in particular with humans. A human user may guide the process of experience acquisition, teaching new concepts, or correcting insufficient or erroneous concepts through interaction. This paper reports on work towards interactive learning of objects and robot activities in an incremental and open-ended way. In particular, this paper addresses human-robot interaction and experience gathering. The robot's ontology is extended with concepts for representing human-robot interactions as well as the experiences of the robot. The human-robot interaction ontology includes not only instructor teaching activities but also robot activities to support appropriate feedback from the robot. Two simplified interfaces are implemented for the different types of instructions, including the teach instruction, which triggers the robot to extract experiences. These experiences, both in the robot activity domain and in the perceptual domain, are extracted and stored in memory, and they are used as input for learning methods. The functionalities described above are completely integrated in a robot architecture, and are demonstrated on a PR2 robot.
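The teach-instruction mechanism above can be sketched as a handler that, on a teach instruction, snapshots the current situation as a stored experience for later learning. The instruction fields and world-state keys below are invented for illustration, not the robot's actual interface.

```python
# Sketch of experience gathering triggered by a "teach" instruction:
# the instruction causes the current world state to be stored as an
# experience that learning methods can consume later.

class ExperienceMemory:
    def __init__(self):
        self.experiences = []

    def handle_instruction(self, instr, world_state):
        """Store a snapshot of the world on a teach instruction;
        other instruction types are simply acknowledged."""
        if instr["type"] == "teach":
            self.experiences.append({
                "label": instr["label"],
                "snapshot": dict(world_state),
            })
            return "stored experience for '%s'" % instr["label"]
        return "ok"

mem = ExperienceMemory()
reply = mem.handle_instruction(
    {"type": "teach", "label": "serve_coffee"},
    {"robot_at": "counter", "holding": "mug"},
)
```

The returned acknowledgement stands in for the robot feedback activities the interaction ontology models.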