Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hirotaka Osawa is active.

Publication


Featured research published by Hirotaka Osawa.


User Interface Software and Technology | 2012

iRing: intelligent ring using infrared reflection

Masayasu Ogata; Yuta Sugiura; Hirotaka Osawa; Michita Imai

We present iRing, an intelligent ring-shaped input device developed for measuring finger gestures and external input. iRing recognizes rotation, finger bending, and external force via an infrared (IR) reflection sensor that leverages skin characteristics such as reflectance and softness. Furthermore, iRing supports push and stroke input, which are popular on touch displays. The ring design has potential as a wearable controller because its accessory shape is socially acceptable, easy to put on, and safe, and iRing requires no extra devices. We present examples of iRing applications and discuss its validity as an inexpensive wearable interface and as a human sensing device.
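As a rough illustration of the sensing idea described above (not the authors' implementation), the sketch below distinguishes a quick "push" from a sustained "bend" using a single normalized IR-reflectance channel; the thresholds, window size, and synthetic trace are all assumptions made for this example.

```python
# Illustrative sketch only: classify "push" vs. "bend" from one
# normalized IR-reflectance channel. All thresholds are assumptions.
from collections import deque
from typing import Deque

WINDOW = 20          # samples kept for classification (assumed)
PUSH_DELTA = 0.15    # assumed rapid rise indicating a press against the skin
BEND_LEVEL = 0.60    # assumed sustained level indicating a bent finger

def classify(samples: Deque[float]) -> str:
    """Rough heuristic: a fast rise suggests a push, a sustained high
    average suggests finger bending; anything else is treated as idle."""
    if len(samples) < WINDOW:
        return "idle"
    values = list(samples)
    if values[-1] - values[0] > PUSH_DELTA:
        return "push"
    if sum(values) / len(values) > BEND_LEVEL:
        return "bend"
    return "idle"

if __name__ == "__main__":
    # Synthetic reflectance trace standing in for real ADC readings:
    # flat baseline, a quick press, then a sustained bend.
    trace = [0.30] * 20 + [0.30 + 0.02 * i for i in range(10)] + [0.70] * 30
    window: Deque[float] = deque(maxlen=WINDOW)
    for value in trace:
        window.append(value)
        state = classify(window)
        if state != "idle":
            print(f"{value:.2f} -> {state}")
```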


International Journal of Social Robotics | 2009

Using attachable humanoid parts for realizing imaginary intention and body image

Hirotaka Osawa; Ren Ohmura; Michita Imai

We propose a new approach to human-robot interaction (HRI) in which a common target object is anthropomorphized using attachable humanoid parts. With this approach, the user perceives that the target has an intention and senses what the imaginary body of the target looks like through the attached body parts. We investigated how users accepted the intentions and imaginary body image of a common target object equipped with the humanoid parts. We also compared the resulting HRI with that of the general communication robot Robovie to demonstrate the possibilities our proposed method offers. The results indicated that an anthropomorphized target object can interact with users through this approach. Furthermore, compared with interaction with an independent communication robot, users remembered the functions of the anthropomorphized target to a greater extent and felt more intimacy toward it.


Journal of Advanced Computational Intelligence and Intelligent Informatics | 2007

Anthropomorphization Framework for Human-Object Communication

Hirotaka Osawa; Jun Mukai; Michita Imai

We propose an anthropomorphization framework that determines an object’s body image. This framework directly intervenes and anthropomorphizes objects in ubiquitous-computing environments through robotic body parts shaped like those of human beings, which provide information through spoken directions and body language. Our purpose is to demonstrate that an object acquires subjective representations through anthropomorphization. Using this framework, people can more fully understand instructions given by an object. We designed an anthropomorphization framework that changes the body image by attaching body parts. We also conducted experiments to evaluate this framework. Results indicate that the site at which an anthropomorphization device is attached influences human perception of the object’s virtual body image, and participants in experiments understood several instructions given by the object more clearly. Results also indicate that participants better intuited their devices’ instructions and movement in ubiquitous-computing environments.


Asia-Pacific Computer and Human Interaction | 2012

Pygmy: a ring-shaped robotic device that promotes the presence of an agent on human hand

Masa Ogata; Yuta Sugiura; Hirotaka Osawa; Michita Imai

The human hand is an appropriate body part on which to attach an agent robot. Pygmy is an anthropomorphic device that produces a presence on a human hand by magnifying finger expressions. The device is a trial toward developing an interaction model of an agent on the hand. It is based on the concept of hand anthropomorphism and uses finger movements to create the anthropomorphic effect. Wearing the device is similar to having eyes and a mouth on the hand; the wearer's hand spontaneously expresses the agent's presence with the emotions conveyed by the eyes and mouth. Interactive manipulation by controllers and sensors makes the hand look animated. We observed that the character animated with the device elicited user collaboration and interaction as though there were a living thing on the user's hand. Further, users played with the device, treating the characters animated with Pygmy as their doubles.
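As a loose illustration of how such a finger-worn agent might be driven (the servo channels, angle values, and set_servo helper below are hypothetical, not the authors' design), this sketch maps a named expression to actuator angles for two "eyes" and a "mouth".

```python
# Illustrative sketch only: named expressions mapped to hypothetical
# servo angles for eye and mouth actuators worn on the fingers.
from dataclasses import dataclass

@dataclass
class Expression:
    left_eye_deg: float    # eyelid opening angle (assumed)
    right_eye_deg: float
    mouth_deg: float       # mouth opening angle (assumed)

# Assumed expression table, for illustration only.
EXPRESSIONS = {
    "neutral":   Expression(30, 30, 10),
    "surprised": Expression(80, 80, 60),
    "sleepy":    Expression(10, 10, 5),
}

def set_servo(channel: int, angle_deg: float) -> None:
    """Hypothetical actuator call; replace with a real servo driver."""
    print(f"servo {channel} -> {angle_deg:.0f} deg")

def show(name: str) -> None:
    expr = EXPRESSIONS[name]
    set_servo(0, expr.left_eye_deg)
    set_servo(1, expr.right_eye_deg)
    set_servo(2, expr.mouth_deg)

if __name__ == "__main__":
    for name in ("neutral", "surprised", "sleepy"):
        print(name)
        show(name)
```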


Human Factors in Computing Systems | 2013

FlashTouch: data communication through touchscreens

Masayasu Ogata; Yuta Sugiura; Hirotaka Osawa; Michita Imai

FlashTouch is a new technology that enables data communication between touchscreen-based mobile devices and digital peripheral devices. The touchscreen can be used as a communication medium using visible light and capacitive touch. In this paper, we describe the concept of FlashTouch through a stylus prototype we designed. With this prototype, users can easily transfer data from one mobile device to another. It eliminates the complexity associated with data sharing among mobile users, which is currently achieved through online data-sharing services or wireless connections that require a pairing operation to establish a link between devices. It can therefore be of particular value to people who are not adept with current software services and hardware functions. Finally, we demonstrate applications in online payment via mobile devices and in data communication for mobile robots.
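The sketch below illustrates the general idea of screen-to-peripheral data transfer by on-off keying of bright and dark frames; it is not the FlashTouch protocol itself, and the bit order, framing, and sample message are assumptions for this example.

```python
# Illustrative sketch only: encode bytes as bright/dark screen frames
# (on-off keying) and decode them back; no synchronization or error
# handling is modeled.

def encode(data: bytes) -> list[int]:
    """Turn each byte into 8 frames, MSB first: 1 = bright, 0 = dark."""
    frames = []
    for byte in data:
        for bit in range(7, -1, -1):
            frames.append((byte >> bit) & 1)
    return frames

def decode(frames: list[int]) -> bytes:
    """Reassemble bytes from complete 8-frame groups."""
    out = bytearray()
    for i in range(0, len(frames) - len(frames) % 8, 8):
        byte = 0
        for bit in frames[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    message = b"pay:120JPY"   # arbitrary sample payload
    assert decode(encode(message)) == message
    print(decode(encode(message)))
```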


Web Intelligence and Agent Systems: An International Journal | 2012

Embodiment of an agent by anthropomorphization of a common object

Hirotaka Osawa; Yuji Matsuda; Ren Ohmura; Michita Imai

This paper proposes a direct anthropomorphization method to improve interaction between humans and an agent. In this method, an artifact is converted into an agent by attaching humanoid parts to it. Many studies have provided valuable information on delivering spoken directions and gestures via anthropomorphic agents such as computer-graphics agents and communication robots. In the direct anthropomorphization method, an artifact is directly anthropomorphized by being fitted with robotic parts shaped like human body parts. An anthropomorphized artifact with such robotic parts can provide information to people through spoken directions and body language. This persuades people to pay more attention to the artifact than when an anthropomorphic virtual or robot agent is used. The authors conducted experiments to investigate how users' responses to an explanation of an artifact's functions change with the direct anthropomorphization method. The results of a pre-experiment indicated that participants paid more attention to the target artifact and memorized its functions more quickly and easily with the direct anthropomorphization method than with a humanoid agent. In the following two experiments, the authors compared human-like aspects separately and evaluated which elements are key to anthropomorphization. The authors found that “voice” was the key factor for rendering an object as an anthropomorphic agent. Furthermore, the “eyes” were found to be more effective in interactions than the “mouth”.


IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems | 2012

Give me a hand — How users ask a robotic arm for help with gestures

Mahisorn Wongphati; Yusuke Kanai; Hirotaka Osawa; Michita Imai

A task that requires two hands, such as soldering, usually needs additional tools for holding an object (e.g., a cable) in, or adding an object (e.g., solder) to, a specific position. A robotic manipulator or robotic arm is one solution to this requirement. When gestures are chosen as the method for controlling a robot, their characteristics are needed for designing and developing a gesture recognition system. To this end, we conducted an experiment to obtain a set of user-defined gestures for the soldering task and to identify their properties and patterns for future development of our research. 152 gestures were collected from 19 participants by presenting the “effect” of the gesture (a robotic arm movement) and then asking the participants to perform its “cause” (a user-defined gesture). The analyzed data show that hands are the most used body parts even when they are occupied by the task, that one-handed and two-handed gestures were used interchangeably by the participants, that the majority of participants performed reversible gestures for reversible movements, and that participants expected better recognition performance for gestures that were easier to plan. Our findings can serve as a guideline for creating gesture sets and systems for controlling robotic arms based on the natural behavior of users.
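As a small illustration of the kind of tallying behind such an elicitation analysis (not the authors' analysis code), the sketch below counts, per robot-arm movement, which gestures and body parts were proposed; the records are made-up examples.

```python
# Illustrative sketch only: tally user-defined gestures by referent
# (robot-arm movement) and by body part used, with hypothetical data.
from collections import Counter, defaultdict

# (participant, referent, gesture label, body part) -- hypothetical records
records = [
    (1, "move left",  "point left",  "one hand"),
    (1, "move right", "point right", "one hand"),
    (2, "move left",  "sweep left",  "two hands"),
    (2, "stop",       "open palm",   "one hand"),
    (3, "stop",       "open palm",   "one hand"),
]

by_referent = defaultdict(Counter)
body_parts = Counter()
for _, referent, gesture, part in records:
    by_referent[referent][gesture] += 1
    body_parts[part] += 1

for referent, counts in by_referent.items():
    top_gesture, n = counts.most_common(1)[0]
    print(f"{referent}: most proposed gesture = {top_gesture} ({n}x)")
print("body part usage:", dict(body_parts))
```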


Human Factors in Computing Systems | 2012

Pygmy: a ring-like anthropomorphic device that animates the human hand

Masayasu Ogata; Yuta Sugiura; Hirotaka Osawa; Michita Imai

Pygmy is an anthropomorphic device that magnifies hand expressions. It is based on the concept of hand anthropomorphism, and it uses finger movements to create the anthropomorphic effect. Wearing the device is similar to having eyes and a mouth on the hand; the wearer's hand spontaneously expresses their emotions. Interactive manipulation by controllers and sensors makes the hand look animated.


Robot and Human Interactive Communication | 2008

Towards anthropomorphized spaces: Human responses to anthropomorphization of a space using attached body parts

Hirotaka Osawa; Michita Imai

This study investigated the effects of a method of anthropomorphizing a target space using attachable robotic human parts. As a result of this method, users may perceive a space as having lifelike characteristics and may accept a virtual body image for various objects. We developed robotic human parts to evaluate our method and conducted an experiment in which subjects positioned objects according to instructions given within an anthropomorphic space to determine their acceptance of the virtual body images presented and the anthropomorphic representation of the space. The results showed that subjects' positioning of objects changed according to the position of the human parts. We found that our method successfully generated a virtual body image for a particular region that would not normally be recognized in this way by users.


Artificial Life and Robotics | 2011

Social modification using implementation of partial agency toward objects

Hirotaka Osawa; Seiji Yamada

This article considers what kind of partial agency can be implemented in objects to bring about better agencies for interacting with humans. We humans have the ability to inform others of our intentions, internal states, and requirements through verbal means, gestures, attitudes, timings, and other representations. These representations help us maintain the belief that we are sufficient agents. Robots and virtual agents also mimic these representations; they act as if they have such agency. However, their agency is sometimes excessive compared to their task. This mismatch places a high cognitive load on users and consequently leads to breakdowns in interaction; it prevents human-agent interaction from serving as a modality in certain applications. We have devised an agency with multiple selectable features. We believe that selectable features promote good designs for virtual agents, robots, machinery, and home appliances according to their intended traits. We categorized these agencies into several groups and discuss which elements lead to these features. The article also describes a method of identifying these features in human behavior.

Collaboration


Dive into Hirotaka Osawa's collaborations.

Top Co-Authors

Seiji Yamada

National Institute of Informatics
