
Publication


Featured research published by Kasumi Abe.


intelligent robots and systems | 2012

Playmate robots that can act according to a child's mental state

Kasumi Abe; Akiko Iwasaki; Tomoaki Nakamura; Takayuki Nagai; Ayami Yokoyama; Takayuki Shimotomai; Hiroyuki Okada; Takashi Omori

We propose a playmate robot system that can play with a child. Unlike many therapeutic service robots, our proposed playmate system is implemented as a functionality of the domestic service robot with a high degree of freedom. This implies that the robot can play high-level games with children, i.e., beyond therapeutic play, using its physical features. The proposed system currently consists of ten play modules, including a chatbot with eye contact, card playing, and drawing. The algorithms of these modules are briefly discussed in this paper. To sustain the player's interest in the system, we also propose an action-selection strategy based on a transition model of the child's mental state. The robot can estimate the child's state and select an appropriate action in the course of play. A portion of the proposed algorithms was implemented on a real robot platform, and experiments were carried out to design and evaluate the proposed system.
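The abstract does not spell out the action-selection algorithm; the sketch below shows one plausible reading, in which the robot picks the play module whose expected next mental state is most valuable under a hand-made transition model. All state names, module names, and probabilities here are hypothetical illustrations, not values from the paper.

```python
# Hypothetical play modules (illustrative only; not from the paper).
ACTIONS = ["chat", "cards", "drawing"]

# P(next mental state | current state, chosen play module) -- invented numbers.
TRANSITIONS = {
    ("bored", "chat"):      {"bored": 0.3, "curious": 0.5, "engaged": 0.2},
    ("bored", "cards"):     {"bored": 0.6, "curious": 0.3, "engaged": 0.1},
    ("bored", "drawing"):   {"bored": 0.5, "curious": 0.3, "engaged": 0.2},
    ("curious", "chat"):    {"bored": 0.2, "curious": 0.4, "engaged": 0.4},
    ("curious", "cards"):   {"bored": 0.1, "curious": 0.3, "engaged": 0.6},
    ("curious", "drawing"): {"bored": 0.2, "curious": 0.5, "engaged": 0.3},
    ("engaged", "chat"):    {"bored": 0.1, "curious": 0.3, "engaged": 0.6},
    ("engaged", "cards"):   {"bored": 0.1, "curious": 0.2, "engaged": 0.7},
    ("engaged", "drawing"): {"bored": 0.2, "curious": 0.3, "engaged": 0.5},
}

# How much the robot values each estimated state of the child.
VALUE = {"bored": 0.0, "curious": 0.5, "engaged": 1.0}

def select_action(state):
    """Pick the play module maximizing the expected value of the next state."""
    def expected_value(action):
        return sum(p * VALUE[s] for s, p in TRANSITIONS[(state, action)].items())
    return max(ACTIONS, key=expected_value)
```

Under these made-up numbers, `select_action("bored")` prefers the chat module; a real system would learn the transition probabilities from observed play sessions rather than hand-coding them.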


intelligent robots and systems | 2014

Physical embodied communication between robots and children: An approach for relationship building by holding hands

Chie Hieida; Kasumi Abe; Muhammad Attamimi; Takayuki Shimotomai; Takayuki Nagai; Takashi Omori

The influence of holding hands on the relationship-building process between children and robots is investigated in this study. In particular, at a first meeting, it is difficult for a child to open up once he/she has begun to rebuff the robot partner, which significantly reduces the possibility of the child forming a friendship with the robot. Thus, the robot's initial approach to the child in the early stage of the relationship-building process should be appropriate. We hypothesize that physical embodied communication, such as walking hand in hand, improves the relationship between children and robots. A holding-hands system was implemented on a real robot, and an experiment was conducted at a kindergarten to validate our hypothesis. The results strongly support our hypothesis.


human-agent interaction | 2014

Toward playmate robots that can play with children considering personality

Kasumi Abe; Chie Hieida; Muhammad Attamimi; Takayuki Nagai; Takayuki Shimotomai; Takashi Omori; Natsuki Oka

It is difficult to design robotic playmates for introverted children. Therefore, we examined how a robot should play with such shy children. In this study, we hypothesized and tested an effective play strategy for building a good relationship with shy children. We conducted an experiment with 5- to 6-year-old children and a humanoid robot teleoperated by a preschool teacher. We developed a valid play strategy for shy children.


intelligent robots and systems | 2013

Integrated concept of objects and human motions based on multi-layered multimodal LDA

Muhammad Fadlil; Keisuke Ikeda; Kasumi Abe; Tomoaki Nakamura; Takayuki Nagai

The human understanding of things is based on prediction, which is made through concepts formed by the categorization of experience. To mimic this mechanism in robots, multimodal categorization, which enables the robot to form concepts, has been studied. On the other hand, segmentation and categorization of human motions have also been studied to recognize and predict future motions. This paper addresses the issue of how these different kinds of concepts are integrated to generate higher-level concepts and, more importantly, how the higher-level concepts affect the formation of each lower-level concept. To this end, we propose the multi-layered multimodal latent Dirichlet allocation (mMLDA), which is an expansion of the MLDA to learn and represent the hierarchical structure of concepts. We also examine a simple integration model and compare it with the mMLDA. The experimental results reveal that the mMLDA leads to a better inference performance and, indeed, forms higher-level concepts which integrate motions and objects that are necessary for real-world understanding.
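The paper compares mMLDA against a simple integration model. As a rough, hypothetical illustration of what integrating lower-level concepts can mean, the toy code below forms higher-level "event" concepts from object/motion concept pairs that co-occur frequently; all labels and the threshold are invented for this sketch, and the actual models are probabilistic LDA layers, not counts.

```python
from collections import Counter

# Toy episodes: each pairs a lower-level object concept with a motion
# concept (labels invented; in the paper both come from multimodal LDA).
episodes = [
    ("cup", "drink"), ("cup", "drink"), ("cup", "grasp"),
    ("ball", "throw"), ("ball", "throw"), ("ball", "roll"),
]

def integrate(episodes, min_count=2):
    """Treat frequently co-occurring (object, motion) pairs as higher-level
    concepts -- a naive, count-based stand-in for the top layer of mMLDA."""
    return {pair for pair, k in Counter(episodes).items() if k >= min_count}

higher = integrate(episodes)  # {("cup", "drink"), ("ball", "throw")}
```

The point of the hierarchical model over such counting is that the learned higher-level concepts feed back into, and improve, the lower-level categorizations.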


robot and human interactive communication | 2015

Model of strategic behavior for interaction that guides others' internal state

Takashi Omori; Takayuki Shimotomai; Kasumi Abe; Takayuki Nagai

Although communication is one of our basic activities, we cannot always interact effectively. It is well known that a key to successful interaction is engaging the other party in a good mood; that is, capturing the other's interest is a precondition for successful communication.


intelligent robots and systems | 2014

Integration of various concepts and grounding of word meanings using multi-layered multimodal LDA for sentence generation

Muhammad Attamimi; Muhammad Fadlil; Kasumi Abe; Tomoaki Nakamura; Kotaro Funakoshi; Takayuki Nagai

In the field of intelligent robotics, object handling by robots can be achieved by capturing not only the object concept through object categorization, but also other concepts (e.g., the movement while using the object), as well as the relationship between concepts. Moreover, capturing the concepts of places and people is also necessary to enable the robot to gain real-world understanding. In this study, we propose multi-layered multimodal latent Dirichlet allocation (mMLDA) to realize the formation of various concepts, and the integration of those concepts, by robots. Because concept formation and integration can be conducted by mMLDA, the formation of each concept affects the others, resulting in a more appropriate formation. Another issue addressed in this paper is language acquisition by robots. We propose a method to infer which words are originally connected to a concept using mutual information between words and concepts. Moreover, the order of concepts in teaching utterances can be learned using a simple Markov model, which corresponds to grammar. This grammar can be used to generate sentences that represent the observed information. We report the results of experiments to evaluate the effectiveness of the proposed method.
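The word-grounding step relies on mutual information between words and concepts. A minimal sketch of estimating MI from paired (word, concept) observations might look like the following; the example data are invented.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(W; C) = sum over (w, c) of p(w, c) * log(p(w, c) / (p(w) * p(c))),
    estimated from a list of observed (word, concept) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    words = Counter(w for w, _ in pairs)
    concepts = Counter(c for _, c in pairs)
    return sum(
        (k / n) * math.log((k / n) / ((words[w] / n) * (concepts[c] / n)))
        for (w, c), k in joint.items()
    )

# Perfectly aligned words and concepts give I = log 2 (in nats) here.
obs = [("cup", "object")] * 4 + [("grasp", "motion")] * 4
```

Words whose occurrence carries high mutual information with a particular concept would then be bound to that concept.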


robot and human interactive communication | 2017

Estimation of child personality for child-robot interaction

Kasumi Abe; Yuki Hamada; Takayuki Nagai; Masahiro Shiomi; Takashi Omori

We propose a technique to estimate a child's extraversion and agreeableness for social robots that interact with children. The proposed approach observes children's behavior using only the robot's sensors, without any sensor networks in the environment. An RGBD sensor was used to track children and identify their facial expressions. Children's interactions with the robot, such as their distance from the robot and the duration of their eye contact, were observed because such information provides clues for estimating their personality. Data were collected while a robot, teleoperated by preschool teachers, interacted with kindergarten children individually. Using data from 29 children, the children's personalities were estimated at rates significantly above chance.
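The abstract names the features (distance to the robot, eye contact) but not the estimator. A minimal, hypothetical sketch of labeling extraversion from such interaction features, using a nearest-neighbour rule over made-up training points, could look like this:

```python
import math

# Hypothetical training data: (mean distance to robot in m, eye-contact ratio)
# paired with an extraversion label. All values are invented for illustration.
TRAIN = [
    ((0.4, 0.7), "high"),
    ((0.5, 0.6), "high"),
    ((1.2, 0.2), "low"),
    ((1.4, 0.3), "low"),
]

def estimate_extraversion(features):
    """Label a child by the nearest labelled example in feature space."""
    nearest = min(TRAIN, key=lambda ex: math.dist(features, ex[0]))
    return nearest[1]
```

For example, a child observed at 0.45 m with a 65% eye-contact ratio would be labelled "high" here; the actual study, of course, fits its estimator to the recorded sessions of the 29 children.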


human-agent interaction | 2016

ChiCaRo: Tele-presence Robot for Interacting with Babies and Toddlers

Masahiro Shiomi; Kasumi Abe; Yachao Pei; Tingyi Zhang; Narumitsu Ikeda; Takayuki Nagai

This paper reports a tele-presence robot named ChiCaRo, which is designed for interaction with babies and toddlers. ChiCaRo can physically interact with babies and toddlers by moving around and using its small hand. We conducted a field trial at a playroom, where babies and toddlers play freely, to investigate ChiCaRo's effectiveness. In the experiment, adult participants interacted with their babies and toddlers through ChiCaRo and another robot. The adult participants rated ChiCaRo highly in the context of remote interaction with their babies and toddlers.


human robot interaction | 2016

Modeling of Honest Signals for Human Robot Interaction

Muhammad Attamimi; Yusuke Katakami; Kasumi Abe; Takayuki Nagai; Tomoaki Nakamura

Recent studies have shown that human beings unconsciously use signals that represent their thoughts and/or intentions when communicating with each other. These signals are known as “honest signals.” This study involves the use of a sociometer to capture multimodal data resulting from the interaction between humans. These data are then used to model the interaction using a multimodal hierarchical Dirichlet process hidden Markov model, which is then implemented in the robot. The model enables robots to generate “honest signals” and to interact in a natural manner.
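The paper's model is a multimodal hierarchical Dirichlet process HMM; the fixed-size, single-modality HMM below is only a toy stand-in showing how a learned transition/emission model can generate signal cues. The state names, cue names, and probabilities are all invented for this sketch.

```python
import random

# Toy 2-state interaction model (all names and numbers invented).
TRANS = {"listening": {"listening": 0.8, "speaking": 0.2},
         "speaking":  {"listening": 0.3, "speaking": 0.7}}
EMIT = {"listening": {"nod": 0.6, "gesture": 0.1, "utterance": 0.3},
        "speaking":  {"nod": 0.1, "gesture": 0.4, "utterance": 0.5}}

def generate_cues(n, start="listening", seed=0):
    """Sample n honest-signal cues by walking the HMM's hidden states."""
    rng = random.Random(seed)
    state, cues = start, []
    for _ in range(n):
        # Emit an observable cue from the current hidden state...
        cues.append(rng.choices(list(EMIT[state]),
                                weights=list(EMIT[state].values()))[0])
        # ...then move to the next hidden state.
        state = rng.choices(list(TRANS[state]),
                            weights=list(TRANS[state].values()))[0]
    return cues
```

In the study itself the number of hidden states is not fixed by hand: the hierarchical Dirichlet process prior lets the data determine it, and the emissions are multimodal sociometer features rather than discrete labels.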


international conference on neural information processing | 2013

Robots That Can Play with Children: What Makes a Robot Be a Friend

Muhammad Attamimi; Kasumi Abe; Akiko Iwasaki; Takayuki Nagai; Takayuki Shimotomai; Takashi Omori

In this paper, a playmate robot system, which can play with a child, is proposed. Unlike many therapeutic service robots, our proposed system is implemented as a functionality of the domestic service robot with a high degree of freedom. This implies that the robot can use its body and toys for playing high-level games with children, i.e., beyond therapeutic play, using its physical features. The proposed system currently consists of ten play modules, including a chatbot, card playing, and drawing. To sustain the player’s interest in the system, we also propose an action-selection strategy based on a transition model of the child’s mental state. The robot can estimate the child’s state and select an appropriate action in the course of play. A portion of the proposed algorithms was implemented on a real robot platform, and experiments were carried out to design and evaluate the proposed system.

Collaboration


Dive into Kasumi Abe's collaborations.

Top Co-Authors

Takayuki Nagai
University of Electro-Communications

Tomoaki Nakamura
University of Electro-Communications

Muhammad Attamimi
University of Electro-Communications

Natsuki Oka
Kyoto Institute of Technology

Yachao Pei
University of Electro-Communications