Publication


Featured research published by Michita Imai.


Proceedings of the IEEE | 2004

Development and evaluation of interactive humanoid robots

Takayuki Kanda; Hiroshi Ishiguro; Michita Imai; Tetsuo Ono

We report the development and evaluation of a new interactive humanoid robot that communicates with humans and is designed to participate in human society as a partner. A human-like body provides an abundance of nonverbal information and enables us to communicate smoothly with the robot. To achieve this, we developed a humanoid robot that autonomously interacts with humans by speaking and gesturing. Interaction is achieved through a large number of interactive behaviors, which are developed by using a visualizing tool for understanding the developed complex system. Each interactive behavior is designed by using knowledge obtained through cognitive experiments and implemented by using situated recognition. The robot is used as a testbed for studying embodied communication. Our strategy is to analyze human-robot interaction in terms of body movements using a motion-capturing system that allows us to measure the body movements in detail. We performed experiments to compare the body movements with subjective evaluation based on a psychological method. The results reveal the importance of well-coordinated behaviors as well as the performance of the developed interactive behaviors, and they suggest a new analytical approach to human-robot interaction.


Industrial Robot-an International Journal | 2001

Robovie: An interactive humanoid robot

Hiroshi Ishiguro; Tetsuo Ono; Michita Imai; Takeshi Maeda; Takayuki Kanda; Ryohei Nakatsu

The authors have developed a robot called “Robovie” that has unique mechanisms designed for communication with humans. Robovie can generate human-like behaviors by using human-like actuators and vision and audio sensors. Software is a key element in the system's development. Two important ideas in human-robot communication have been obtained through research from the viewpoint of cognitive science: one is the importance of physical expressions using the body, and the other is the effectiveness of the robot's autonomy in humans' recognition of the robot's utterances. Based on these psychological experiments, a new architecture that generates episode chains in interactions with humans was developed. The basic structure of the architecture is a network of situated modules. Each module consists of elemental behaviors to entrain humans and a behavior for communicating with humans.
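
As an illustration of the situated-module idea described above, here is a minimal Python sketch in which each module pairs a precondition on the sensed situation with an elemental behavior. The module names, sensor keys, and selection policy are assumptions for illustration, not Robovie's actual implementation.

```python
# Illustrative sketch of a "network of situated modules": each module couples a
# precondition on the current sensed situation with a behavior to execute.
# Module names, sensor keys, and preconditions are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SituatedModule:
    name: str
    precondition: Callable[[dict], bool]   # does this module apply to the situation?
    behavior: Callable[[], str]            # elemental behavior to execute

MODULES = [
    SituatedModule("greet", lambda s: s.get("person_detected") and not s.get("greeted"),
                   lambda: "Hello!"),
    SituatedModule("handshake", lambda s: s.get("hand_extended"),
                   lambda: "(extends hand)"),
    SituatedModule("idle", lambda s: True, lambda: "(looks around)"),
]

def step(situation):
    """Run the first module whose precondition matches the sensed situation."""
    for module in MODULES:
        if module.precondition(situation):
            return module.name, module.behavior()

print(step({"person_detected": True}))   # -> ('greet', 'Hello!')
print(step({"hand_extended": True}))     # -> ('handshake', '(extends hand)')
```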


human-robot interaction | 2009

How to approach humans?: strategies for social robots to initiate interaction

Satoru Satake; Takayuki Kanda; Dylan F. Glas; Michita Imai; Hiroshi Ishiguro; Norihiro Hagita

This paper proposes a model of approach behavior with which a robot can initiate conversation with people who are walking. We developed the model by learning from the failures of a simplistic approach behavior used in a real shopping mall. Sometimes people were unaware of the robot's presence, even when it spoke to them. Sometimes, people were not sure whether the robot was really trying to start a conversation, and they did not start talking with it even though they displayed interest. To prevent such failures, our model includes the following functions: predicting the walking behavior of people, choosing a target person, planning its approach path, and nonverbally indicating its intention to initiate a conversation. The approach model was implemented and used in a real shopping mall. The field trial demonstrated that our model significantly improves the robot's performance in initiating conversations.
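
To make the listed functions concrete, the following is a minimal Python sketch of one possible reading: pedestrians are extrapolated with a constant-velocity model, a reachable target is chosen, and an approach point is planned ahead of the target along their walking direction. All names, thresholds, and the prediction model are assumptions; the paper's anticipation and planning methods are more sophisticated.

```python
import math
from dataclasses import dataclass

# Hypothetical types and thresholds for illustration only.

@dataclass
class Pedestrian:
    x: float      # position (m)
    y: float
    vx: float     # velocity (m/s)
    vy: float

def predict(p: Pedestrian, t: float):
    """Constant-velocity extrapolation of a pedestrian's position after t seconds."""
    return p.x + p.vx * t, p.y + p.vy * t

def choose_target(robot_xy, pedestrians, horizon=5.0, robot_speed=0.7):
    """Pick the pedestrian the robot can most easily reach within the horizon."""
    best, best_margin = None, -math.inf
    for p in pedestrians:
        px, py = predict(p, horizon)
        dist = math.hypot(px - robot_xy[0], py - robot_xy[1])
        margin = robot_speed * horizon - dist   # > 0 means reachable in time
        if margin > best_margin:
            best, best_margin = p, margin
    return best if best_margin > 0 else None

def approach_point(target: Pedestrian, horizon=5.0, standoff=1.0):
    """Plan a point slightly ahead of the target's walking direction, so the
    robot ends up in front of the person rather than chasing from behind."""
    px, py = predict(target, horizon)
    speed = math.hypot(target.vx, target.vy) or 1.0
    ux, uy = target.vx / speed, target.vy / speed
    return px + ux * standoff, py + uy * standoff

if __name__ == "__main__":
    people = [Pedestrian(2.0, 0.0, -0.3, 0.0), Pedestrian(8.0, 5.0, 1.2, 0.0)]
    target = choose_target((0.0, 0.0), people)
    if target:
        print("approach point:", approach_point(target))
```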


intelligent robots and systems | 2002

A constructive approach for developing interactive humanoid robots

Takayuki Kanda; Hiroshi Ishiguro; Michita Imai; Tetsuo Ono; Kenji Mase

There is a strong correlation between the number of appropriate behaviors an interactive robot can produce and its perceived intelligence. We propose a robot architecture for implementing a large number of behaviors and a visualizing tool for understanding the developed complex system. Behaviors are designed by using knowledge obtained through cognitive experiments and implemented by using situated recognition. By representing relationships between behaviors, episode rules help to guide the robot in communicating with people in a consistent manner. We have implemented over 100 behaviors and 800 episode rules in a humanoid robot. As a result, the robot could entice people to relate to it interpersonally. The Episode Editor is a tool that supports the development of episode rules and visualizes the complex relationships among the behaviors. We consider this visualization to be necessary for the constructive approach.
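
The following is a minimal sketch of how episode rules might constrain behavior selection, assuming rules of the form "if the recent behavior history ends with a given pattern, allow a given next behavior". The rule format and the behaviors are hypothetical, not the implemented rule set.

```python
# Hypothetical episode rules: (pattern over recent behaviors, allowed next behavior).
EPISODE_RULES = [
    (("greet",), "self_introduction"),
    (("greet", "self_introduction"), "ask_name"),
    (("ask_name",), "shake_hands"),
]

def candidate_behaviors(history, rules=EPISODE_RULES):
    """Return behaviors whose rule pattern matches the tail of the episode so far."""
    matches = []
    for pattern, nxt in rules:
        if tuple(history[-len(pattern):]) == pattern:
            matches.append(nxt)
    return matches

history = ["greet", "self_introduction"]
print(candidate_behaviors(history))   # -> ['ask_name']
```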


Connection Science | 2006

Humanlike conversation with gestures and verbal cues based on a three-layer attention-drawing model

Osamu Sugiyama; Takayuki Kanda; Michita Imai; Hiroshi Ishiguro; Norihiro Hagita; Yuichiro Anzai

When describing a physical object, we indicate which object by pointing and using reference terms, such as ‘this’ and ‘that’, to inform the listener quickly of an indicated object's location. Therefore, this research proposes a three-layer attention-drawing model for humanoid robots that incorporates such gestures and verbal cues. The proposed three-layer model consists of three sub-models: the Reference Term Model (RTM); the Limit Distance Model (LDM); and the Object Property Model (OPM). The RTM selects an appropriate reference term for the distance, based on a quantitative analysis of human behaviour. The LDM decides whether to use a property of the object, such as colour, as an additional term for distinguishing the object from its neighbours. The OPM determines which property should be used for this additional reference. Based on this concept, an attention-drawing system was developed for a communication robot named ‘Robovie’, and its effectiveness was tested.
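
A minimal sketch of the RTM, LDM, and OPM flow described above, with assumed distance thresholds, object properties, and utterance template; the paper derives its actual models from quantitative analyses of human behaviour.

```python
# Illustrative three-layer flow: RTM -> LDM -> OPM. All thresholds are assumptions.

def reference_term(distance_m):
    """RTM: choose a reference term from the robot-object distance (threshold assumed)."""
    return "this" if distance_m < 1.0 else "that"

def needs_property(target, neighbors, limit_m=0.5):
    """LDM: add a property if any neighbor lies closer to the target than limit_m."""
    return any(abs(n["x"] - target["x"]) < limit_m for n in neighbors)

def pick_property(target, neighbors):
    """OPM: pick the first property whose value distinguishes the target from all neighbors."""
    for prop in ("color", "size", "shape"):
        if all(n.get(prop) != target.get(prop) for n in neighbors):
            return prop
    return None

def describe(target, neighbors, distance_m):
    term = reference_term(distance_m)
    if needs_property(target, neighbors):
        prop = pick_property(target, neighbors)
        if prop:
            return f"{term} {target[prop]} {target['name']}"
    return f"{term} {target['name']}"

cup = {"name": "cup", "x": 0.0, "color": "red"}
others = [{"name": "cup", "x": 0.3, "color": "blue"}]
print(describe(cup, others, distance_m=0.8))   # -> "this red cup"
```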


user interface software and technology | 2013

SenSkin: adapting skin as a soft interface

Masayasu Ogata; Yuta Sugiura; Yasutoshi Makino; Masahiko Inami; Michita Imai

We present a sensing technology and input method that uses skin deformation estimated through a thin band-type device attached to the human body, the appearance of which seems socially acceptable in daily life. An input interface usually requires feedback. SenSkin provides tactile feedback that enables users to know which part of the skin they are touching in order to issue commands. The user, having found an acceptable area before beginning the input operation, can continue to input commands without receiving explicit feedback. We developed an experimental device with two armbands to sense three-dimensional pressure applied to the skin. Sensing tangential force on uncovered skin without haptic obstacles has not previously been achieved. SenSkin is also novel in that quantitative tangential force applied to the skin, such as that of the forearm or fingers, is measured. An infrared (IR) reflective sensor is used since its durability and inexpensiveness make it suitable for everyday human sensing purposes. The multiple sensors located on the two armbands allow the tangential and normal force applied to the skin to be sensed. The input command is learned and recognized using a Support Vector Machine (SVM). Finally, we show an application in which this input method is implemented.
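
A minimal sketch of the command-recognition step, assuming each frame from the armbands is a fixed-length vector of IR reflection readings and using an SVM classifier as mentioned in the abstract; the sensor data here are synthetic.

```python
# Illustrative SVM training/prediction on synthetic IR reflection frames.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_frame(kind):
    """Synthetic 8-channel IR reading; a real frame would come from the armbands."""
    base = {"push": 0.8, "stroke_left": 0.4, "stroke_right": 0.2}[kind]
    return base + 0.05 * rng.standard_normal(8)

labels = ["push", "stroke_left", "stroke_right"]
X = np.array([fake_frame(k) for k in labels for _ in range(30)])
y = np.array([k for k in labels for _ in range(30)])

clf = SVC(kernel="rbf", C=1.0)   # SVM classifier, as in the abstract
clf.fit(X, y)

print(clf.predict([fake_frame("push")]))   # expected: ['push']
```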


human-robot interaction | 2008

How quickly should communication robots respond

Toshiyuki Shiwa; Takayuki Kanda; Michita Imai; Hiroshi Ishiguro; Norihiro Hagita

This paper reports a study of system response time (SRT) in communication robots that utilize human-like social features, such as anthropomorphic appearance and conversation in natural language. The purpose of our research is to establish a design guideline for SRT in communication robots. The first experiment observed user preferences toward different SRTs in interaction with a robot. In other existing user interfaces, a faster response is usually preferred. In contrast, our experimental results indicated that user preference for SRT in a communication robot is highest at one second, and user preference ratings level off at two seconds. However, a robot cannot always respond in as short a time as one or two seconds. Thus, the important question is “What should a robot do if it cannot respond quickly enough?” The second experiment tested the effectiveness of a conversational filler: a behavior that notifies listeners that the robot is going to respond. In Japanese, “etto” is used to buy time to think and resembles “well...” and “uh...” in English. We used the same strategy in a communication robot to cover the system response time. Our results indicated that the robot's use of a conversational filler moderated the user's impression of a long SRT.
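
A minimal sketch of the conversational-filler strategy, assuming a threaded dialogue pipeline: if the reply is not ready within about one second (the preferred SRT reported above), the robot utters a filler and then delivers the reply when it arrives. The timing and concurrency details are illustrative assumptions.

```python
import concurrent.futures
import time

def think_of_reply(utterance):
    time.sleep(2.5)                      # stand-in for slow dialogue processing
    return f"Here is my answer to: {utterance}"

def say(text):
    print(f"[robot] {text}")

def respond(utterance, filler_after=1.0):
    """Emit a filler if the reply takes longer than filler_after seconds."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(think_of_reply, utterance)
        try:
            reply = future.result(timeout=filler_after)
        except concurrent.futures.TimeoutError:
            say("etto...")               # notify the listener that a response is coming
            reply = future.result()      # wait for the real reply
    say(reply)

respond("Where is the shoe shop?")
```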


user interface software and technology | 2012

iRing: intelligent ring using infrared reflection

Masayasu Ogata; Yuta Sugiura; Hirotaka Osawa; Michita Imai

We present the iRing, an intelligent ring-type input device developed for measuring finger gestures and external input. iRing recognizes rotation, finger bending, and external force via an infrared (IR) reflection sensor that leverages skin characteristics such as reflectance and softness. Furthermore, iRing supports push and stroke input, which is popular on touch displays. The ring design has the potential to be used as a wearable controller because its accessory shape is socially acceptable, easy to put on, and safe, and it does not require extra devices. We present examples of iRing applications and discuss its validity as an inexpensive wearable interface and as a human sensing device.
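
A minimal sketch of one way a single normalized IR reflection channel could be mapped to coarse ring input states; the thresholds are illustrative assumptions, not calibrated values from the paper.

```python
# Hypothetical thresholding of a normalized IR reflection reading from the ring.

def classify_reading(ir_value, rest=0.2, bend=0.5, push=0.8):
    """Map a normalized IR reflection reading to a coarse input state."""
    if ir_value >= push:
        return "push"
    if ir_value >= bend:
        return "bend"
    if ir_value >= rest:
        return "rest"
    return "no_contact"

for v in (0.1, 0.3, 0.6, 0.9):
    print(v, "->", classify_reading(v))
```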


robot and human interactive communication | 2001

Physical relation and expression: joint attention for human-robot interaction

Michita Imai; Tetsuo Ono; Hiroshi Ishiguro

This paper proposes a speech generation system named Linta-III, which generates utterances dependent on the real-world situation. To generate situated utterances, Linta-III has a joint attention mechanism, which develops joint attention between a person and a robot. The joint attention mechanism employs eye contact and an attention expression. Eye contact promotes the relationship between the person and the robot. The attention expression manifests relevant sensor information with a physical expression. With eye contact and the attention expression, the joint attention mechanism is able to draw the person's attention to the same sensor information as the robot. As a result of the joint attention, Linta-III is able to omit words that are obvious in the situation from an utterance description. We also conducted a psychological experiment on the development of joint attention. The results indicated that eye contact and the attention expression are significant factors in the development of joint attention.
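
A minimal sketch of situated utterance generation in the spirit described above: once joint attention on an object is established, the explicit object name can be dropped in favor of a deictic word. The state representation and wording are assumptions, not Linta-III's actual design.

```python
# Hypothetical joint-attention state: a set of objects both parties are attending to.

def generate_utterance(obj, joint_attention):
    """Ask for `obj`, omitting its name when it is already jointly attended to."""
    if obj in joint_attention:
        return "Please pass it to me."          # object is obvious from the shared situation
    return f"Please pass the {obj} to me."      # object must be named explicitly

attention = {"red cup"}
print(generate_utterance("red cup", attention))   # -> "Please pass it to me."
print(generate_utterance("notebook", attention))  # -> "Please pass the notebook to me."
```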


ubiquitous computing | 2011

Toward a cooperative programming framework for context-aware applications

Bin Guo; Daqing Zhang; Michita Imai

OPEN is an ontology-based programming framework for the rapid prototyping, sharing, and personalization of context-aware applications. Unlike previous systems that provide programming support for a single group of users, OPEN provides different programming support for users with diverse technical skills. According to the programming requirements of different users, several cooperation patterns are identified, and mechanisms to facilitate resource sharing and reuse are built into the framework. Three corresponding programming modes are elaborated by showing how a context-aware game was developed with the support of the OPEN framework, and the usability of our system is validated through an initial user study.
