Publication


Featured research published by Tomoko Yonezawa.


eye tracking research & applications | 2008

Remote gaze estimation with a single camera based on facial-feature tracking without special calibration actions

Hirotake Yamazoe; Akira Utsumi; Tomoko Yonezawa; Shinji Abe

We propose a real-time gaze estimation method based on facial-feature tracking with a single video camera that does not require any special user action for calibration. Many gaze estimation methods have already been proposed; however, most conventional gaze tracking algorithms can only be applied in experimental environments due to their complex calibration procedures and lack of usability. In this paper, we propose a gaze estimation method that can be applied to daily-life situations. Gaze directions are determined as 3D vectors connecting the eyeball and iris centers. Since the eyeball center and radius cannot be observed directly from images, the geometrical relationship between the eyeball centers and the facial features, together with the eyeball radius (the face/eye model), is calculated in advance. The 2D positions of the eyeball centers can then be determined by tracking the facial features. While conventional methods require users to perform special actions during calibration, such as looking at several reference points, the proposed method requires no such actions; it combines 3D eye-model-based gaze estimation with circle-based algorithms for eye-model calibration. Experimental results show that the gaze estimation accuracy of the proposed method is 5° horizontally and 7° vertically. With our method, applications that require gaze information in daily-life situations, such as gaze-communication robots and gaze-based interactive signboards, become possible.
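
The geometric core of the method is simple enough to sketch. Below is a minimal illustration, not the authors' implementation: the gaze direction is taken as the 3D unit vector from the eyeball center to the iris center. The coordinates are hypothetical placeholders standing in for the outputs of facial-feature tracking and the precomputed face/eye model.

```python
# Minimal sketch of the gaze-vector idea. The 3D positions below are
# hypothetical; in the paper they come from facial-feature tracking
# combined with a precomputed face/eye model.
import numpy as np

def gaze_direction(eyeball_center: np.ndarray, iris_center: np.ndarray) -> np.ndarray:
    """Return the unit gaze vector pointing from the eyeball center to the iris center."""
    v = iris_center - eyeball_center
    return v / np.linalg.norm(v)

# Hypothetical 3D positions in camera coordinates (meters).
eyeball = np.array([0.030, 0.020, 0.600])  # from face/eye model + feature tracking
iris = np.array([0.031, 0.018, 0.588])     # from iris detection
print(gaze_direction(eyeball, iris))
```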


acm multimedia | 2008

Intuitive page-turning interface of e-books on flexible e-paper based on user studies

Taichi Tajika; Tomoko Yonezawa; Noriaki Mitsunaga

In this paper, we propose an intuitive page-turning and browsing interface for e-books on flexible e-paper, based on user studies. Our user studies revealed various types of page-turning actions, such as flipping, grasping, and sliding, depending on the situation and the user. We grouped these actions into three categories: turning, flipping through, and leafing through the page(s). Based on this categorization, we developed a conceptual design and prototype of an e-book reader interface that enables intuitive page-turning interactions with a simple hardware and software architecture. The prototype consists of a flexible plastic sheet with bend sensors attached to a small LCD monitor, physically uniting the visual display with a tangible control interface based on the natural page-turning actions used when reading a real book. The prototype handles all three page-turning actions observed in the user studies by interpreting the bend degree of the sheet.
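
As a rough illustration of how the three actions might be separated by bend degree, here is a hedged sketch; the thresholds, normalization, and windowing are assumptions, not the prototype's actual logic.

```python
# Hypothetical classifier for a short window of normalized bend readings
# (0 = flat sheet, 1 = fully bent). Thresholds would need per-sheet calibration.
from typing import List

def classify_turn(bend_samples: List[float],
                  light_bend: float = 0.2,
                  deep_bend: float = 0.6) -> str:
    """Map a window of bend-sensor readings to one of the three page-turning actions."""
    peak = max(bend_samples)
    if peak < light_bend:
        return "none"
    # A deep, sustained bend suggests leafing through many pages at once.
    sustained = sum(1 for s in bend_samples if s > deep_bend) > len(bend_samples) // 2
    if peak >= deep_bend and sustained:
        return "leaf through"
    if peak >= deep_bend:
        return "flip through"
    return "turn"  # a light bend-and-release turns a single page

print(classify_turn([0.1, 0.3, 0.5, 0.3, 0.1]))  # -> "turn"
```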


computer vision and pattern recognition | 2008

Remote and head-motion-free gaze tracking for real environments with automated head-eye model calibrations

Hirotake Yamazoe; Akira Utsumi; Tomoko Yonezawa; Shinji Abe

We propose a gaze estimation method that substantially relaxes the practical constraints imposed by most conventional methods. Gaze estimation research has a long history, and many systems, including commercial ones, have been proposed. However, the application domain of gaze estimation is still limited (e.g., measurement devices for HCI research, input devices for VDT work) due to the limitations of such systems. First, users must be close to the system (or must wear it), since most systems employ IR illumination and/or stereo cameras. Second, users must perform manual calibration to obtain geometrically meaningful data. These limitations prevent applications that capture and utilize useful human gaze information in daily situations. In our method, inspired by a bundle adjustment framework, the parameters of the 3D head-eye model are robustly estimated by minimizing the pixel-wise re-projection error between single-camera input images and eye-model projections over multiple frames with jointly estimated head poses. Since this process runs automatically, users do not need to be aware of it. Using the estimated parameters, 3D head poses and gaze directions for newly observed images can be determined directly by the same error minimization. This mechanism enables robust gaze estimation from low-resolution single-camera images without user-aware preparation tasks (i.e., calibration). Experimental results show that the proposed method achieves 6° accuracy with QVGA (320 × 240) images. The proposed algorithm is independent of observation distance; we confirmed that our system works at long observation distances (10 meters).
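
The calibration idea, minimizing pixel-wise re-projection error over multiple frames in the spirit of bundle adjustment, can be sketched with an ordinary least-squares solver. Everything below (the pinhole model, the focal length, reducing the head-eye model to a single 3-parameter eye offset, and the synthetic observations) is a simplifying assumption, not the paper's actual model.

```python
# Toy re-projection-error calibration: recover a fixed eyeball-center offset
# in the head frame from noisy pixel observations across many frames.
import numpy as np
from scipy.optimize import least_squares

F = 500.0  # hypothetical focal length in pixels

def project(p3d: np.ndarray) -> np.ndarray:
    """Pinhole projection of a 3D camera-frame point to pixel coordinates."""
    return F * p3d[:2] / p3d[2]

def residuals(params, head_positions, observed_px):
    """Pixel-wise re-projection residuals for a 3-vector eye offset."""
    res = []
    for head_pos, obs in zip(head_positions, observed_px):
        res.extend(project(head_pos + params) - obs)
    return res

# Synthetic data: head origin per frame plus observed eye-center pixels.
true_offset = np.array([0.03, -0.02, 0.05])
heads = [np.array([x, 0.0, 0.6 + 0.01 * x]) for x in np.linspace(-0.1, 0.1, 20)]
obs = [project(h + true_offset) + np.random.normal(0, 0.5, 2) for h in heads]

fit = least_squares(residuals, x0=np.zeros(3), args=(heads, obs))
print(fit.x)  # should approach true_offset
```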


international symposium on wearable computers | 2013

Wearable partner agent with anthropomorphic physical contact with awareness of user's clothing and posture

Tomoko Yonezawa; Hirotake Yamazoe

In this paper, we introduce a wearable partner agent that makes physical contact corresponding to the user's clothing, posture, and detected contexts. Physical contact is generated by combining haptic stimuli with anthropomorphic motions of the agent. The agent performs two types of behavior: a) it notifies the user of a message by patting the user's arm, and b) it generates emotional expression by strongly enfolding the user's arm. Our experimental results demonstrate that haptic communication from the agent increases the intelligibility of the agent's messages and strengthens the user's familiar impression of the agent.
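
The two contact behaviors suggest a simple dispatch from detected context to a haptic/gestural action. The sketch below is hypothetical; the event names and action descriptors are stand-ins, not the agent's actual interface.

```python
# Hypothetical behavior selection for the wearable partner agent: map a
# detected event to a combined haptic stimulus and anthropomorphic motion.
def contact_behavior(event: str) -> dict:
    """Choose a physical-contact behavior for a detected event."""
    if event == "notification":
        # Notify the user of a message by patting the arm.
        return {"haptic": "pat", "motion": "tap arm", "repetitions": 3}
    if event == "emotional":
        # Express emotion by strongly enfolding the user's arm.
        return {"haptic": "squeeze", "motion": "enfold arm", "repetitions": 1}
    return {"haptic": "none", "motion": "idle", "repetitions": 0}

print(contact_behavior("notification"))
```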


intelligent robots and systems | 2008

GazeRoboard: Gaze-communicative guide system in daily life on stuffed-toy robot with interactive display board

Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe

In this paper, we propose a guide system for daily life in semipublic spaces that adopts a gaze-communicative stuffed-toy robot and a gaze-interactive display board. The system provides naturally anthropomorphic guidance through a) gaze-communicative behaviors of the stuffed-toy robot ("joint attention" and "eye-contact reactions") that virtually express its internal mind, b) voice guidance, and c) projection on the board corresponding to the user's gaze orientation. The user's gaze is estimated by our remote gaze-tracking method. The results of subjective/objective evaluations and of demonstration experiments in a semipublic space show i) the holistic operation of the system and ii) the inherent effectiveness of the gaze-communicative guide.
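
A minimal sketch of the gaze-driven dispatch: depending on whether the estimated gaze falls on the robot or the board, different guidance behaviors are triggered. The target labels and behavior strings here are assumptions, not the GazeRoboard implementation.

```python
# Hypothetical dispatch from the estimated gaze target to guide behaviors.
def guide_behavior(gaze_target: str) -> list:
    """Select robot/board behaviors from the user's estimated gaze target."""
    if gaze_target == "robot":
        # Eye-contact reaction: the robot returns the gaze, then speaks.
        return ["eye-contact reaction", "voice guidance"]
    if gaze_target == "board":
        # Joint attention: the robot looks where the user looks, and the
        # board projects content for that region.
        return ["joint attention", "project content at gaze point"]
    return ["idle gaze wandering"]

print(guide_behavior("board"))
```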


international conference on multimodal interfaces | 2002

Musically expressive doll in face-to-face communication

Tomoko Yonezawa; Kenji Mase

We propose an application that uses music as a multimodal expression to activate and support communication in parallel with traditional conversation. We examine a personified doll-shaped interface designed for musical expression. To direct gestures toward communication, we adopted an augmented stuffed toy with tactile interaction as a musically expressive device. We equipped the doll with various sensors for user-context recognition, a configuration that enables the translation of interaction into melodic statements. We demonstrate the doll's effect on face-to-face conversation by comparing experimental results across different input interfaces and output sounds. We found that conversation with the doll was positively affected by the musical output, the doll interface, and their combination.
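
One way to picture the translation of tactile interaction into melodic statements is a mapping from sensor readings to notes. The sketch below is purely illustrative; the pentatonic scale, the normalized inputs, and the phrase lengths are hypothetical choices, not the doll's actual mapping.

```python
# Hypothetical gesture-to-melody mapping: stronger pressure picks higher
# scale degrees, faster strokes produce longer phrases.
PENTATONIC = [60, 62, 64, 67, 69]  # MIDI note numbers (C major pentatonic)

def interaction_to_notes(pressure: float, stroke_speed: float) -> list:
    """Map a tactile gesture (both inputs normalized to [0, 1]) to a short phrase."""
    degree = min(int(pressure * len(PENTATONIC)), len(PENTATONIC) - 1)
    length = 1 + int(stroke_speed * 3)  # 1 to 4 notes per phrase
    return [PENTATONIC[(degree + i) % len(PENTATONIC)] for i in range(length)]

print(interaction_to_notes(pressure=0.8, stroke_speed=0.5))  # -> [69, 60]
```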


Paladyn: Journal of Behavioral Robotics | 2013

Attractive, Informative, and Communicative Robot System on Guide Plate as an Attendant with Awareness of User’s Gaze

Tomoko Yonezawa; Hirotake Yamazoe; Akira Utsumi; Shinji Abe

In this paper, we introduce an interactive guide plate system that adopts a gaze-communicative stuffed-toy robot and a gaze-interactive display board. A stuffed-toy robot attached to the system naturally presents anthropomorphic guidance corresponding to the user's gaze orientation. The guidance is presented through a) gaze-communicative behaviors of the stuffed-toy robot, using joint attention and eye-contact reactions to virtually express its own mind, in conjunction with b) vocal guidance and c) projection on the guide plate. We adopted our image-based remote gaze-tracking method to detect the user's gaze orientation. The results of empirical studies with subjective/objective evaluations, together with observations from our demonstration experiments in a semipublic space, show i) the total operation of the system, ii) the elicitation of the user's interest by the robot's gaze behaviors, and iii) the effectiveness of a gaze-communicative guide that adopts an anthropomorphic robot.


intelligent robots and systems | 2013

Physical contact using haptic and gestural expressions for ubiquitous partner robot

Tomoko Yonezawa; Hirotake Yamazoe; Shinji Abe

In this paper, we propose a portable robot that expresses physical contact in parallel with other modalities: it enfolds the user's arm in its arms and taps the user's arm. The physical contact expressions are generated by combining several haptic stimuli with the robot's anthropomorphic behaviors based on its internal state. The aim of our research is to build a caregiver-like robot medium, and the system was designed for gentle, delicate communication between the user and the robot during the user's outings. The haptic stimuli express warmth/coldness, patting, and squeezing. Experimental results show that the robot's haptic communicative behaviors increase the intelligibility of the robot's messages and strengthen the user's familiar impression of the robot.
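
The mapping from the robot's internal state to one of the named haptic stimuli can be pictured as a small lookup. The state names and intensity values below are hypothetical, not the robot's actual control code.

```python
# Hypothetical internal-state model choosing among the haptic stimuli named
# in the abstract: warm/cold, patting, and squeezing.
def haptic_stimulus(internal_state: str) -> dict:
    """Map the robot's internal state to a haptic stimulus on the user's arm."""
    table = {
        "comforting": {"stimulus": "warm", "intensity": 0.4},
        "alerting":   {"stimulus": "patting", "intensity": 0.7},
        "affection":  {"stimulus": "squeezing", "intensity": 0.6},
        "withdrawn":  {"stimulus": "cold", "intensity": 0.3},
    }
    return table.get(internal_state, {"stimulus": "none", "intensity": 0.0})

print(haptic_stimulus("affection"))
```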


Proceedings of the 5th ACM International Workshop on Context-Awareness for Self-Managing Systems | 2011

Privacy protected life-context-aware alert by simplified sound spectrogram from microphone sensor

Tomoko Yonezawa; Naoki Okamoto; Hirotake Yamazoe; Shinji Abe; Fumio Hattori; Norihiro Hagita

This paper introduces the design of a life-context-aware alert system based on multiple small microphone sensors placed around the home. To support the comfortable daily lives of elderly people who live alone, it is important to know their daily activities at home without exposing their privacy. When the monitoring data reveal an emergency, the system must report the situation to a hospital, an ambulance service, or the family. To reduce the data for fast computation on a PIC microcontroller and to protect privacy, the system adopts a simplified sound spectrogram from each installed microphone module. The system first analyzes these multiple signals to roughly understand the current situation and decide what type of daily-life activity is taking place. When the user's situation appears to be an emergency, the system alerts the appropriate contact person or institution. This paper describes, in particular, how the raw microphone data are simplified in the frequency and time domains, both to reduce the amount of data and to protect privacy.
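
A minimal sketch of the simplification step, under assumed bin counts: compute per-frame power spectra, then average them into a few coarse frequency bands and time slots, so that speech content becomes unrecoverable while overall activity patterns survive. The frame size, bin counts, and synthetic signal are hypothetical choices, not the paper's parameters.

```python
# Coarsen a spectrogram in both frequency and time for privacy-preserving
# activity sensing. Bin counts are illustrative only.
import numpy as np

def simplified_spectrogram(signal, frame=256, freq_bins=8, time_bins=16):
    """Return a coarse (freq_bins x time_bins) energy map of the input audio."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # per-frame power spectrum
    # Average neighboring frequency bands into a few coarse bands.
    bands = np.array_split(spec, freq_bins, axis=1)
    coarse_f = np.stack([b.mean(axis=1) for b in bands], axis=0)
    # Average neighboring frames into a few coarse time slots.
    slots = np.array_split(coarse_f, time_bins, axis=1)
    return np.stack([s.mean(axis=1) for s in slots], axis=1)

# One second of synthetic 8 kHz audio: a tone plus noise.
t = np.arange(8000) / 8000.0
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(8000)
print(simplified_spectrogram(audio).shape)  # (8, 16)
```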


IWEC | 2003

Awareness Communications by Entertaining Toy Doll Agents

Kazuyuki Saitoh; Tomoko Yonezawa; Kenji Mase

In this paper, we propose a sensor-doll system that provides multiple users at remote locations with an awareness communication channel. A doll serves as the interface agent of the local user, and this agent is connected to a remote doll by local and/or wide area networks. The doll sends information on local ambient activities and the user's intentional interactions to the remote agent and, at the same time, displays the received remote activities by adapting its presentation to the local context. Musical sound expression is used to display remote awareness, mixing the local response with the remote activities. Music also provides an entertaining and sympathetic intimacy with the doll and, eventually, with the remote user. The design and implementation of the networked sensor doll, equipped with various tactile sensors and a PC, are described in detail. We also discuss issues of awareness communication and give preliminary experimental results.
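
The awareness exchange can be pictured as a small message protocol. The JSON fields and the context-adapted rendering below are hypothetical stand-ins for the dolls' actual networking and musical display.

```python
# Hypothetical awareness exchange between two networked sensor dolls.
import json

def make_awareness_message(ambient_level: float, user_touch: bool) -> str:
    """Encode the local doll's ambient activity and user interaction."""
    return json.dumps({"ambient": ambient_level, "touch": user_touch})

def render_remote_awareness(message: str, local_busy: bool) -> str:
    """Adapt the musical display to the local context: quieter when busy."""
    data = json.loads(message)
    volume = 0.3 if local_busy else 0.8
    motif = "intimate phrase" if data["touch"] else "ambient texture"
    return f"play {motif} at volume {volume} (remote activity {data['ambient']:.1f})"

msg = make_awareness_message(ambient_level=0.6, user_touch=True)
print(render_remote_awareness(msg, local_busy=False))
```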

Collaboration


Dive into Tomoko Yonezawa's collaborations.

Top Co-Authors

Shinji Abe

Hiroshima Institute of Technology


Kazuki Joe

Nara Women's University


Asuka Komeda

Nara Women's University
