Jinseok Woo
Tokyo Metropolitan University
Publications
Featured research published by Jinseok Woo.
2014 10th France-Japan / 8th Europe-Asia Congress on Mecatronics (MECATRONICS 2014, Tokyo) | 2014
Jinseok Woo; János Botzheim; Naoyuki Kubota
This paper proposes a conversation system for verbal communication between a human and a robot partner. The system has three main components. The first uses the relationship between utterance sentences and perceptual information; human utterance patterns are also used to realize a scenario-like conversation, and similarity information between verbs and adjectives of different sentences is utilized as well. The second component is a sentence building module based on a predefined set of rules, which allows the robot to respond with newly created sentences. In the third module, the robot selects sentences based on time-dependent information. We show experimental results of the proposed method using a real robot partner.
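As a rough illustration of the third module, a minimal sketch of time-dependent sentence selection is given below; the database format, time windows, and sentences are illustrative assumptions, not the paper's actual implementation.

```python
from datetime import datetime

# Hypothetical time-tagged utterance database: (start_hour, end_hour, sentence).
UTTERANCE_DB = [
    (6, 10, "Good morning. Did you sleep well?"),
    (11, 14, "It is almost lunch time. What would you like to eat?"),
    (18, 23, "Good evening. How was your day?"),
]

def select_timely_sentence(now=None):
    """Pick the first sentence whose time window contains the current hour."""
    hour = (now or datetime.now()).hour
    for start, end, sentence in UTTERANCE_DB:
        if start <= hour <= end:
            return sentence
    return "Hello."  # fallback when no window matches

print(select_timely_sentence())
```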
IEEE International Conference on Systems, Man, and Cybernetics | 2013
Jinseok Woo; Naoyuki Kubota
This paper proposes a conversation system based on multimodal perception for verbal communication between a human and a robot partner using various types of sensors. First, we describe the control structure of the robot partner and explain the architecture of the robot system. Next, evolutionary robot vision is applied to human and object detection. Then, a conversation system based on an informationally structured space is proposed. Furthermore, we propose a method of conversation learning based on the flow of human utterance patterns and their related perceptual information. Finally, we show experimental results of the proposed method and discuss the future direction of this research.
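The utterance-flow learning described here could be approximated by a first-order transition model over utterance patterns. The sketch below, with hypothetical pattern labels, is one such approximation and is not taken from the paper.

```python
from collections import defaultdict

class UtteranceFlow:
    """Minimal sketch: learn transition frequencies between utterance
    patterns and predict the most likely next pattern (a first-order
    Markov model over pattern labels)."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, previous_pattern, next_pattern):
        self.counts[previous_pattern][next_pattern] += 1

    def predict(self, current_pattern):
        followers = self.counts.get(current_pattern)
        if not followers:
            return None
        return max(followers, key=followers.get)

flow = UtteranceFlow()
flow.observe("greeting", "ask_condition")
flow.observe("greeting", "ask_condition")
flow.observe("greeting", "ask_weather")
print(flow.predict("greeting"))  # -> "ask_condition"
```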
World Automation Congress | 2014
János Botzheim; Jinseok Woo; Noel Tay Nuo Wi; Naoyuki Kubota; Toru Yamaguchi
When conducting natural communication, in addition to performing verbal communication, a robot partner should also understand non-verbal communication such as facial and gestural information. For the robot, to "understand" means to grasp the meaning of the gesture itself. In this paper we propose a smartphone-based system in which an emotional model connects the facial and gestural communication of a human and a robot partner. The input of the emotional model is based on face classification and gesture recognition on the human side. Based on the emotional model, output actions such as gestural and facial expressions for the robot are calculated.
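The paper does not publish code, but a minimal sketch of the idea, with an assumed two-dimensional (pleasure, arousal) emotional state and hypothetical input labels and output expressions, might look like this:

```python
# Classified human face/gesture labels pull the robot's emotional state
# toward an anchor point, and the nearest output expression is selected.
# All anchors and prototypes below are illustrative assumptions.

EMOTION_ANCHORS = {           # assumed label -> (pleasure, arousal)
    "smile":   ( 0.8,  0.4),
    "frown":   (-0.7,  0.3),
    "wave":    ( 0.5,  0.7),
    "neutral": ( 0.0,  0.0),
}

EXPRESSIONS = {               # robot output -> (pleasure, arousal) prototype
    "happy_gesture": ( 0.8,  0.5),
    "calm_face":     ( 0.1, -0.2),
    "sad_face":      (-0.6, -0.1),
}

def update_state(state, label, rate=0.3):
    """Move the emotional state part-way toward the anchor of the input."""
    ax, ay = EMOTION_ANCHORS.get(label, (0.0, 0.0))
    return (state[0] + rate * (ax - state[0]),
            state[1] + rate * (ay - state[1]))

def select_expression(state):
    """Pick the output whose prototype is closest to the current state."""
    return min(EXPRESSIONS, key=lambda e: (EXPRESSIONS[e][0] - state[0]) ** 2
                                        + (EXPRESSIONS[e][1] - state[1]) ** 2)

state = (0.0, 0.0)
for label in ["smile", "wave"]:
    state = update_state(state, label)
print(select_expression(state))  # positive inputs yield "happy_gesture"
```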
International Symposium on Micro-NanoMechatronics and Human Science | 2014
Jinseok Woo; János Botzheim; Naoyuki Kubota
Interaction with a robot partner requires many elements, including not only speech but embodiment as well. Gestures and facial expressions of the robot partner are also important for natural communication. This paper proposes a control system for the robot partner's facial and gestural expressions. First, we describe the control structure of the robot partner and explain the architecture of the robot system. Next, we propose a gesture expression system based on Laban movement analysis and an interactive evolution strategy. Then, facial expression generation is discussed. Finally, we show experimental results of the proposed method and discuss the future direction of this research.
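A simple way to picture the interactive evolution strategy is a (1+1)-ES whose fitness comes from a human rating of the displayed gesture. The sketch below uses that formulation with hypothetical gesture parameters and a stand-in rating function; it is a sketch of the general technique, not the paper's implementation.

```python
import random

def mutate(params, sigma=0.1):
    """Gaussian mutation of a gesture parameter vector
    (e.g., joint-angle amplitudes and speed)."""
    return [p + random.gauss(0.0, sigma) for p in params]

def interactive_es(initial, rate_fn, generations=10):
    """(1+1)-evolution strategy: keep whichever gesture the user prefers."""
    parent = initial
    parent_score = rate_fn(parent)
    for _ in range(generations):
        child = mutate(parent)
        child_score = rate_fn(child)     # user rates the displayed gesture
        if child_score >= parent_score:  # keep the preferred gesture
            parent, parent_score = child, child_score
    return parent

# Stand-in rating function for demonstration (a real system would show
# the gesture on the robot and ask the user for a score):
target = [0.6, 0.3, 0.8]
rate = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))
print(interactive_es([0.0, 0.0, 0.0], rate))
```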
IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) | 2015
Jinseok Woo; János Botzheim; Naoyuki Kubota
This paper proposes a verbal conversation system for a robot partner using an emotional model. The robot partner calculates its emotional state from the human's utterance sentences and then controls its own utterances based on the emotional parameters. As a result, the robot partner can interact with humans in an emotionally natural way. In this paper, we explain the three parts of the conversation system's structure. The first part is time-dependent selection based on the database contents; in this mode, the robot conveys time-critical content, such as schedules, and its mood parameter is used to modify the sentence. The second component is utterance flow learning for selecting the utterance contents: the robot selects an utterance sentence based on the utterance flow information and its mood value. The third component is sentence building based on predefined rules, which include a personality model of the robot partner. Throughout, we use emotional parameters derived from the human's sentences to make the communication natural. Finally, we show experimental results of the proposed method and conclude the paper. Future research for improving the robot partner system is discussed as well.
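As an illustration of how a mood parameter might change a time-dependent utterance, the sketch below selects among template variants by a mood value in [-1, 1]; the thresholds and templates are assumptions for the example only.

```python
# Hypothetical sketch: the robot's mood value selects among pre-written
# variants of the same schedule utterance.

SCHEDULE_TEMPLATES = {
    "positive": "Great news! You have {event} at {time}. I am looking forward to it!",
    "neutral":  "You have {event} at {time}.",
    "negative": "Just so you know... {event} is at {time}.",
}

def moody_sentence(mood, event, time):
    """Map a mood value in [-1, 1] to a sentence variant."""
    if mood > 0.3:
        key = "positive"
    elif mood < -0.3:
        key = "negative"
    else:
        key = "neutral"
    return SCHEDULE_TEMPLATES[key].format(event=event, time=time)

print(moody_sentence(0.7, "a dentist appointment", "3 pm"))
```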
Pacific Rim International Conference on Artificial Intelligence | 2010
Jinseok Woo; Naoyuki Kubota; Beom Hee Lee
This paper proposes a method of simultaneous localization and mapping based on computational intelligence for a robot partner in unknown environments. First, we propose a method of topological map building based on a growing neural network. Next, we propose a method of localization based on a steady-state genetic algorithm. Finally, we discuss the effectiveness of the proposed methods through several experimental results.
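A steady-state genetic algorithm replaces only one individual per step rather than a whole generation. The sketch below applies that idea to pose estimation, with a stand-in likelihood function in place of the paper's actual map-matching criterion; all parameters are illustrative.

```python
import random

def likelihood(pose):
    """Stand-in for map matching; peaks at an assumed true pose (2.0, 1.0, 0.5)."""
    x, y, th = pose
    return -((x - 2.0) ** 2 + (y - 1.0) ** 2 + (th - 0.5) ** 2)

def steady_state_ga(pop_size=20, steps=500, sigma=0.2):
    """Each individual is a candidate pose (x, y, theta); per step, the
    worst individual is replaced by a mutated copy of a better one."""
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(-3, 3)]
           for _ in range(pop_size)]
    for _ in range(steps):
        pop.sort(key=likelihood)                     # worst first
        parent = random.choice(pop[pop_size // 2:])  # pick from better half
        child = [g + random.gauss(0.0, sigma) for g in parent]
        pop[0] = child                               # replace the worst
    return max(pop, key=likelihood)

print(steady_state_ga())  # should approach (2.0, 1.0, 0.5)
```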
IEEE International Conference on Fuzzy Systems | 2012
Koh Nishimura; Naoyuki Kubota; Jinseok Woo
Recently, the need for robot partners has been increasing. Such robots should have an emotional model in order to co-exist with people and realize natural communication. In this communication, non-verbal communication and emotional expression based on an emotional model are very important for robot partners. Moreover, facial and gestural expression should adapt to the user of the robot. Therefore, we propose a design support system for the arm-gestural and facial expression of robot partners based on interactive evolutionary computation and Laban features. We conduct several experiments and discuss the effectiveness of the proposed method.
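Laban features can be derived from the gesture trajectory itself. The sketch below computes two Laban-inspired descriptors from a joint-angle trajectory, with the caveat that the paper's exact feature definitions may differ from these assumptions.

```python
# Two Laban-inspired features from a sampled joint-angle trajectory:
# "Time" as mean joint speed and "Weight" as mean change of speed.
# These definitions are illustrative, not the paper's.

def laban_features(trajectory, dt=0.1):
    """trajectory: list of joint-angle tuples sampled every dt seconds."""
    speeds, accels = [], []
    for t in range(1, len(trajectory)):
        v = [(a - b) / dt for a, b in zip(trajectory[t], trajectory[t - 1])]
        speeds.append(sum(abs(x) for x in v) / len(v))
    for t in range(1, len(speeds)):
        accels.append(abs(speeds[t] - speeds[t - 1]) / dt)
    time_effort = sum(speeds) / len(speeds) if speeds else 0.0
    weight_effort = sum(accels) / len(accels) if accels else 0.0
    return time_effort, weight_effort

demo = [(0.0, 0.0), (0.2, 0.1), (0.5, 0.4), (0.6, 0.6)]
print(laban_features(demo))
```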
International Symposium on Micro-NanoMechatronics and Human Science | 2015
Jinseok Woo; Chiaki Kasuya; Naoyuki Kubota
Recently, various types of smart-device-based applications have come into use owing to the development of information and network technology. Robot partners can also be developed based on the sensors a smart device is equipped with. In this paper, we propose a smart-device-based robot partner, "iPhonoid", to support people. The robot is able to perform gestures using the robot body we designed. Furthermore, we can use the personal information and contents stored on the device, e.g., calendar schedules, music contents, memos, and address books; this information is also used to generate conversation contents for the robot partner. We propose a method of using personal data for content-based conversation with people, providing information support as a secretary. We discuss the effectiveness of the proposed method through several experimental results.
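One plausible reading of the secretary-style conversation is turning upcoming calendar entries into utterances. The sketch below does exactly that, with an assumed data format and sentence template; it is not the paper's actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical on-device calendar data.
CALENDAR = [
    {"title": "a dentist appointment", "when": datetime(2016, 5, 10, 15, 0)},
    {"title": "the group meeting",     "when": datetime(2016, 5, 11, 10, 0)},
]

def upcoming_reminders(now, horizon_hours=24):
    """Turn calendar entries within the horizon into utterance sentences."""
    cutoff = now + timedelta(hours=horizon_hours)
    return [f"You have {e['title']} at {e['when']:%H:%M on %B %d}."
            for e in CALENDAR if now <= e["when"] <= cutoff]

for sentence in upcoming_reminders(datetime(2016, 5, 10, 8, 0)):
    print(sentence)
```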
2016 11th France-Japan & 9th Europe-Asia Congress on Mechatronics (MECATRONICS) / 17th International Conference on Research and Education in Mechatronics (REM) | 2016
Mikaël Jacquemont; Jinseok Woo; János Botzheim; Naoyuki Kubota; Nathan Sartori; Eric Benoit
Robot partners are very useful in helping elderly and disabled people maintain mental health. This paper explains how a robot partner with a human-centric point of view was designed and built through a cooperation between the Graduate School of System Design of Tokyo Metropolitan University, Japan, and the LISTIC of Polytech Annecy-Chambéry, France. First, we describe the physical design of the robot partner. Next, we present the software part of the robot partner, divided into offline speech recognition and the knowledge base. Then we show experimental results of the speech recognition. Finally, we discuss the future direction of this project.
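A knowledge base driven by an offline transcript could be as simple as keyword matching. The sketch below, with hypothetical entries, shows that shape; the paper's actual knowledge base design is not described in this abstract.

```python
import re

# Hypothetical keyword-triggered knowledge base, assuming the offline
# recognizer returns a plain-text transcript.
KNOWLEDGE_BASE = {
    ("weather",):      "I cannot check the weather while offline, sorry.",
    ("name", "who"):   "I am your robot partner.",
    ("time", "clock"): "Let me check the clock for you.",
}

def answer(transcript):
    """Return the first response whose keywords appear in the transcript."""
    words = set(re.findall(r"[a-z]+", transcript.lower()))
    for keywords, response in KNOWLEDGE_BASE.items():
        if words & set(keywords):
            return response
    return "I did not understand that."

print(answer("What is your name?"))  # -> "I am your robot partner."
```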
IEEE International Conference on Fuzzy Systems | 2013
Jinseok Woo; Naoyuki Kubota; Jun Shimazaki; Hiroyuki Masuta; Yusei Matsuo; Hun-ok Lim
This paper discusses a robot partner system for natural communication using emotional models. In our daily life, robot partners should have an emotional model in order to co-exist with people and realize natural communication. In this paper, we propose several emotional models for human-robot interaction based on computational intelligence. First, we discuss the importance of emotion and its functions in social interaction. Next, we propose an emotional model based on emotion, feeling, and mood. Furthermore, we use the emotional model as the basis of a communication system, and we also discuss Frankl's psychology as a basis of communication. Finally, we show several experimental results of the proposed method and discuss the utterance systems for a robot partner.
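The emotion/feeling/mood distinction suggests three state variables with fast, medium, and slow dynamics. The sketch below uses illustrative update rates that are assumptions, not the paper's values.

```python
# Sketch of a three-layer emotional model: fast "emotion" driven by stimuli,
# slower "feeling" tracking emotion, and slow "mood" tracking feeling.

class EmotionalModel:
    def __init__(self):
        self.emotion = 0.0   # fast component
        self.feeling = 0.0   # medium component
        self.mood = 0.0      # slow component

    def step(self, stimulus):
        """stimulus in [-1, 1], e.g., valence of the last human utterance."""
        self.emotion += 0.8 * (stimulus - self.emotion)
        self.feeling += 0.2 * (self.emotion - self.feeling)
        self.mood    += 0.05 * (self.feeling - self.mood)
        return self.emotion, self.feeling, self.mood

model = EmotionalModel()
for s in [0.9, 0.9, -0.5, 0.2]:
    print(["%.2f" % v for v in model.step(s)])
```

The nested low-pass structure makes the mood change slowly even when individual emotional reactions swing quickly, which matches the role the abstract assigns to mood.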