
Publication


Featured research published by Tomoko Yonemura.


Robot and Human Interactive Communication | 2010

A study on wearable behavior navigation system (II) - a comparative study on remote behavior navigation systems for first-aid treatment

Eimei Oyama; Norifumi Watanabe; Hiroaki Mikado; Hikaru Araoka; Jun Uchida; Takashi Omori; Kousuke Shinoda; Itsuki Noda; Naoji Shiroma; Arvin Agah; Tomoko Yonemura; Hideyuki Ando; Daisuke Kondo; Taro Maeda

The capability to perform specific human tasks with the assistance of expert navigation is expected to be realized through the development of wearable and ubiquitous computing technology. For instance, when an injured or ill person requires first-aid treatment but only non-experts are nearby, instruction from an expert at a remote site is necessary. A behavior navigation system will allow the user to provide first-aid treatment in the same manner as an expert. Focusing on first-aid treatment, we have proposed and developed a prototype wearable behavior navigation system (WBNS) that uses augmented reality (AR) technology. This prototype WBNS has been evaluated in experiments in which participants wore the prototype and successfully administered various first-aid treatments. Although the effectiveness of the WBNS has been confirmed, many challenges must be addressed before the system can be commercialized. The head-mounted displays (HMDs) used in the WBNS have a number of drawbacks, for example, their high cost (which is not expected to decrease in the near future) and the time required for an ordinary user to become accustomed to the display. Furthermore, some individuals may experience motion sickness while wearing the HMD. We expect these drawbacks of the current technology to be resolved in the future; in the meantime, a near-future remote behavior navigation system (RBNS) is necessary. Accordingly, we have developed RBNSs for first-aid treatment using off-the-shelf components, in addition to the WBNS. In this paper, we present the basic mechanisms of the RBNSs, experiments on the demonstration of expert behavior, and a comparative study of the WBNS and the RBNSs.


Scientific Reports | 2015

Alternating images of congruent and incongruent movement creates the illusion of agency

Takumi Yokosaka; Hiroyuki Iizuka; Tomoko Yonemura; Daisuke Kondo; Hideyuki Ando; Taro Maeda

We report a novel illusion whereby people perceive both congruent and incongruent hand motions as a united, single, and continuous motion of one's own hand (i.e., a sense of agency). This illusion arises when individuals watch congruent and incongruent hand motions alternately from a first-person perspective. Even though an individual knows that he or she is not performing the motion, the illusion can still arise. Although a sense of agency might require congruency between predicted and actual movements, the united motion is incongruent with the predicted movement because it contains oscillating movements that result from switching between the hand movement images. This illusion offers new insights into how predicted and observed movements are integrated in agency judgments. We investigated this illusion from both a subjective-experience and a motion-response point of view.
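To picture the alternating-presentation manipulation described above, here is a minimal sketch; it is not the authors' experimental code, and the frame rate, switching interval, and stream labels are assumptions chosen for illustration. It interleaves frames from a "congruent" and an "incongruent" hand-motion stream in fixed-length blocks, the kind of schedule that yields the oscillating yet seemingly continuous motion the abstract describes.

```python
from typing import List, Tuple

def alternating_schedule(n_frames: int, block_frames: int) -> List[Tuple[int, str]]:
    """Return (frame_index, source) pairs, switching source every block_frames frames."""
    schedule = []
    for i in range(n_frames):
        # Even-numbered blocks show the congruent stream, odd-numbered blocks the incongruent one.
        source = "congruent" if (i // block_frames) % 2 == 0 else "incongruent"
        schedule.append((i, source))
    return schedule

if __name__ == "__main__":
    fps = 60                 # assumed display frame rate
    switch_every_ms = 500    # assumed switching interval between the two streams
    block = fps * switch_every_ms // 1000
    for frame, src in alternating_schedule(n_frames=4 * block, block_frames=block)[:8]:
        print(frame, src)
```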


Augmented Human International Conference | 2011

Inducing human motion by visual manipulation

Shin Okamoto; Hiroki Kawasaki; Hiroyuki Iizuka; Takumi Yokosaka; Tomoko Yonemura; Yuki Hashimoto; Hideyuki Ando; Taro Maeda

This paper reports a study of augmenting human motions by manipulating the visual images displayed to users. The target motions are not only those that can be seen in the subjects' views (i.e., hand or foot motions) but also full-body motions that cannot be captured from their own perspective. As a result, it is shown that the motions can be modulated by manipulated images alone, without any physical contact.


Augmented Human International Conference | 2011

Effective galvanic vestibular stimulation in synchronizing with ocular movement

Aru Sugisaki; Yuki Hashimoto; Tomoko Yonemura; Hiroyuki Iizuka; Hideyuki Ando; Taro Maeda

It is known that galvanic vestibular stimulation (GVS) can cause ocular movement. Our final goal is to use GVS to support ocular movements. However, the effects of GVS on ocular movements have mostly been investigated while gazing at a fixed point, despite the fact that we have two different strategies for following a moving target: saccades and smooth pursuit. The effect might differ because the two strategies use different mechanisms. Therefore, this paper investigates the effects of GVS during saccades. As a result, we show that the effect of GVS depends on the timing at which GVS is applied after the target marker moves.
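The timing dependence reported above can be illustrated with a simple trial schedule. The sketch below is hypothetical and not the paper's protocol: the delay values, stimulation duration, and randomization are assumptions, used only to show how GVS onset can be varied relative to the target-marker jump that triggers a saccade.

```python
import random
from typing import Dict, List

def make_trials(n_trials: int, delays_ms: List[int], seed: int = 0) -> List[Dict]:
    """Randomize the GVS onset delay (ms after the target jump) across trials."""
    rng = random.Random(seed)
    trials = []
    for i in range(n_trials):
        trials.append({
            "trial": i,
            "gvs_delay_ms": rng.choice(delays_ms),  # time from target jump to GVS onset
            "gvs_duration_ms": 200,                 # assumed stimulation duration
        })
    return trials

if __name__ == "__main__":
    for t in make_trials(5, delays_ms=[0, 50, 100, 200, 400]):
        print(t)
```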


International Conference on Computer Graphics and Interactive Techniques | 2012

A video see-through face mounted display for view sharing

Yuki Hashimoto; Daisuke Kondo; Tomoko Yonemura; Hiroyuki Iizuka; Hideyuki Ando; Taro Maeda

If the feeling of presence can be transferred to a place other than the one where we actually exist, our lifestyle will change drastically. By extending robot-human telexistence [1] technology to human-human situations, we are developing an environment in which a skilled person who actually exists at a different place can work with high efficacy on site in place of a non-skilled person. In order to realize such a telexistence environment in human interactions, we are developing remote communication technologies that exploit sense-motion sharing. In this project, we have developed a view sharing system for sharing first-person perspectives between two remote people [2]. The system consists of a head-mounted display and cameras, which together make video see-through possible (VST-HMD). The user wearing the HMD can see both his own view and the partner's view, and can also send his own view to the partner. Our aim is to share experiences and to transmit skills from one person to another by sharing vision and motions [3]. We have developed a new view sharing system to improve its effectiveness and expand its applications.
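As a rough illustration of the view-sharing idea, the following is a minimal sketch under assumptions; the published VST-HMD may combine or switch the two views differently. It simply alpha-blends the user's own camera frame with the partner's frame into the image to be displayed, with synthetic NumPy arrays standing in for real camera input and a hypothetical blend weight.

```python
import numpy as np

def blend_views(own_view: np.ndarray, partner_view: np.ndarray,
                own_weight: float = 0.6) -> np.ndarray:
    """Alpha-blend two equal-sized RGB frames into the image shown in the HMD."""
    assert own_view.shape == partner_view.shape
    mixed = (own_weight * own_view.astype(np.float32)
             + (1.0 - own_weight) * partner_view.astype(np.float32))
    return np.clip(mixed, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    own = np.full((480, 640, 3), 200, dtype=np.uint8)     # stand-in for the user's own camera frame
    partner = np.full((480, 640, 3), 60, dtype=np.uint8)  # stand-in for the partner's received frame
    shown = blend_views(own, partner, own_weight=0.6)
    print(shown.shape, shown.dtype, shown[0, 0])
```

In a real system the partner's frame would arrive over the network, and the blend weight could be adjusted to favor whichever viewpoint the current task requires.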


International Conference on Robotics and Automation | 2010

A study on wearable behavior navigation system - Development of simple parasitic humanoid system -

Eimei Oyama; Norifumi Watanabe; Hiroaki Mikado; Hikaru Araoka; Jun Uchida; Takashi Omori; Kousuke Shinoda; Itsuki Noda; Naoji Shiroma; Arvin Agah; Kazutaka Hamada; Tomoko Yonemura; Hideyuki Ando; Daisuke Kondo; Taro Maeda


Augmented Human International Conference | 2011

Parasitic Humanoid: the wearable robotics as a behavioral assist interface like oneness between horse and rider

Taro Maeda; Hideyuki Ando; Hiroyuki Iizuka; Tomoko Yonemura; Daisuke Kondo; Masataka Niwa


Archive | 2013

MOTION GUIDE PRESENTATION METHOD AND SYSTEM THEREFOR, AND MOTION GUIDE PRESENTATION DEVICE

Taro Maeda; Hideyuki Ando; Hiroyuki Izuka; Tomoko Yonemura; Daisuke Kondo; Takumi Yokosaka


Proceedings of the Conference of Transdisciplinary Federation of Science and Technology | 2011

Multi-modal transmission and sharing by a parasitic humanoid

Yuki Hashimoto; Tomoko Yonemura; Daisuke Kondo; Hiroyuki Iizuka; Hideyuki Ando; Taro Maeda


The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) | 2010

1A1-G14 Image Stabilization for View Shared Remote Cooperative Work

Daisuke Kondo; Keitaro Kurosaki; Tomoko Yonemura; Hiroyuki Iizuka; Hideyuki Ando; Taro Maeda

Collaboration


Dive into Tomoko Yonemura's collaborations.

Top Co-Authors

Eimei Oyama

National Institute of Advanced Industrial Science and Technology

Itsuki Noda

National Institute of Advanced Industrial Science and Technology
