Hideaki Kuzuoka
University of Tsukuba
Publication
Featured research published by Hideaki Kuzuoka.
human factors in computing systems | 1992
Hideaki Kuzuoka
Collaboration in three-dimensional space, termed "spatial workspace collaboration," is introduced, and an approach to supporting it via a video-mediated communication system is described. The analysis focuses primarily on verbal expressions. Based on experimental results, movability of a focal point, sharing of focal points, movability of a shared workspace, and the ability to confirm viewing intentions and movements were determined to be the system requirements necessary to support spatial workspace collaboration. A newly developed system, SharedView, which supports spatial workspace collaboration, is also introduced and tested, and experimental results are described.
conference on computer supported cooperative work | 1994
Hideaki Kuzuoka; Toshio Kosuge; Masatomo Tanaka
An approach to supporting spatial workspace collaboration via a video-mediated communication system is described. Based on experimental results, the following were determined to be the system requirements for supporting spatial workspace collaboration: independence of the field of view, predictability, confidence in transmission, and sympathy toward the system. Additionally, a newly developed camera system, the GestureCam system, is introduced. A camera is mounted on an actuator with three degrees of freedom and is controlled either by a master-slave method or through a touch-sensitive CRT. A laser pointer is also mounted to assist with remote pointing. Preliminary experiments were conducted and their results are described.
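To make the control architecture concrete, here is a minimal sketch of a GestureCam-style master-slave loop. The class names, joint limits, and command format are assumptions made for this illustration, not details taken from the original system.

```python
# Hypothetical sketch of a GestureCam-style master-slave control loop.
# Joint limits and class names are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Pose:
    """Orientation of the three-degree-of-freedom camera actuator (degrees)."""
    pan: float = 0.0
    tilt: float = 0.0
    roll: float = 0.0


class GestureCamSlave:
    """Remote-site actuator that mirrors the instructor's master device."""

    # Assumed mechanical limits; the real hardware limits are not documented here.
    LIMITS = {"pan": (-90.0, 90.0), "tilt": (-45.0, 45.0), "roll": (-30.0, 30.0)}

    def __init__(self) -> None:
        self.pose = Pose()
        self.laser_on = False

    def apply_master_pose(self, target: Pose) -> Pose:
        """Clamp the master's orientation to the slave's limits and adopt it."""
        self.pose = Pose(
            pan=self._clamp("pan", target.pan),
            tilt=self._clamp("tilt", target.tilt),
            roll=self._clamp("roll", target.roll),
        )
        return self.pose

    def point_with_laser(self, enable: bool) -> None:
        """Toggle the mounted laser pointer used for remote pointing."""
        self.laser_on = enable

    def _clamp(self, axis: str, value: float) -> float:
        lo, hi = self.LIMITS[axis]
        return max(lo, min(hi, value))


if __name__ == "__main__":
    slave = GestureCamSlave()
    # The instructor moves the master device; the slave follows within its limits.
    print(slave.apply_master_pose(Pose(pan=120.0, tilt=10.0, roll=0.0)))
    slave.point_with_laser(True)
```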
conference on computer supported cooperative work | 2000
Hideaki Kuzuoka; Shinya Oyama; Keiichi Yamazaki; Kenji Suzuki; Mamoru Mitsuishi
When designing systems that support remote instruction on physical tasks, one must consider four requirements: 1) participants should be able to use non-verbal expressions, 2) they must be able to adopt an appropriate body arrangement to see and show gestures, 3) the instructor should be able to monitor the operators and objects, and 4) they must be able to organize the arrangement of bodies, tools, and gestural expressions sequentially and interactively. GestureMan was developed to satisfy these four requirements using a mobile robot that embodies a remote instructor's actions; the robot carries a camera and a remote-controlled laser pointer. Based on experiments with the system, we discuss the advantages and disadvantages of the current implementation and describe implications for improving it.
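The sketch below illustrates, under assumptions of my own, what an instructor-to-robot command protocol for such a system could look like: the paper does not specify a wire format, so the command names and fields here are hypothetical.

```python
# Illustrative sketch only: a minimal instructor-to-robot command protocol for a
# GestureMan-style mobile robot. Command names and fields are assumptions.

import json
from typing import Callable, Dict


class GestureManRobot:
    """Mobile robot that embodies a remote instructor's actions."""

    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[dict], None]] = {
            "drive": self.drive,           # move the mobile base
            "orient_camera": self.orient,  # turn the mounted camera
            "laser": self.laser,           # point with the remote-controlled laser
        }

    def dispatch(self, raw: str) -> None:
        """Decode one JSON command from the instructor's site and execute it."""
        msg = json.loads(raw)
        handler = self.handlers.get(msg.get("cmd"))
        if handler is None:
            print(f"ignoring unknown command: {msg!r}")
            return
        handler(msg)

    def drive(self, msg: dict) -> None:
        print(f"base: forward {msg['distance_m']} m, turn {msg['turn_deg']} deg")

    def orient(self, msg: dict) -> None:
        print(f"camera: pan {msg['pan_deg']} deg, tilt {msg['tilt_deg']} deg")

    def laser(self, msg: dict) -> None:
        print(f"laser pointer {'on' if msg['on'] else 'off'}")


if __name__ == "__main__":
    robot = GestureManRobot()
    robot.dispatch('{"cmd": "drive", "distance_m": 0.5, "turn_deg": 30}')
    robot.dispatch('{"cmd": "orient_camera", "pan_deg": -15, "tilt_deg": 5}')
    robot.dispatch('{"cmd": "laser", "on": true}')
```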
human factors in computing systems | 2008
Akiko Yamazaki; Keiichi Yamazaki; Yoshinori Kuno; Matthew Burdelski; Michie Kawashima; Hideaki Kuzuoka
Research over the last several decades has shown that non-verbal actions such as face and head movement play a crucial role in human interaction, and such resources are likely to play an important role in human-robot interaction as well. In developing a robotic system that employs embodied resources such as face and head movement, we cannot simply program the robot to move at random; rather, we need to consider how these actions may be timed to specific points in the talk. This paper discusses our work in developing a museum guide robot that moves its head at interactionally significant points during its explanation of an exhibit. We first examined the coordination of verbal and non-verbal actions in human guide-visitor interaction. Based on this analysis, we developed a robot that moves its head at interactionally significant points in its talk. We then conducted several experiments to examine human participants' non-verbal responses to the robot's head and gaze turns. Our results show that participants are more likely to display non-verbal actions, and to do so with precise timing, when the robot turns its head and gaze at interactionally significant points than when it turns its head at points that are not interactionally significant. Based on these findings, we propose several suggestions for the design of a guide robot.
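As a rough illustration of timing head turns to specific points in the talk, the following sketch marks up an explanation script and triggers a head turn whenever a marker is reached. The marker syntax, speech-rate model, and function names are assumptions for this example, not details from the paper.

```python
# Minimal sketch: schedule head turns at interactionally significant points in a
# guide robot's explanation. Marker syntax and timing model are assumptions.

import re
import time

# Explanation script with <turn:...> markers placed where analysis of human
# guides suggests the robot should turn its head.
SCRIPT = (
    "This painting was completed in 1503. <turn:visitor> "
    "Notice the unusual use of light here. <turn:exhibit> "
    "It took the artist roughly four years to finish."
)

WORDS_PER_SECOND = 2.5  # assumed speech rate for the timing simulation


def turn_head(target: str) -> None:
    """Placeholder for the robot's head/gaze actuator."""
    print(f"  [robot turns head toward the {target}]")


def explain(script: str) -> None:
    """Speak the script, turning the head exactly where markers occur."""
    for chunk in re.split(r"(<turn:\w+>)", script):
        match = re.fullmatch(r"<turn:(\w+)>", chunk)
        if match:
            turn_head(match.group(1))
        elif chunk.strip():
            print(chunk.strip())
            time.sleep(len(chunk.split()) / WORDS_PER_SECOND)


if __name__ == "__main__":
    explain(SCRIPT)
```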
Personal and Ubiquitous Computing | 1999
Saul Greenberg; Hideaki Kuzuoka
Digital but physical surrogates are tangible representations of remote people (typically members of small, intimate teams), positioned within an office and under digital control. Surrogates selectively collect and present awareness information about the people they represent. They also react to people's explicit and implicit physical actions: a person's explicit acts include grasping and moving them, while implicit acts include how the person moves towards or away from the surrogate. By responding appropriately to these physical actions, surrogates can control the communication capabilities of a media space in a natural way. Surrogates also balance awareness and privacy by limiting and abstracting how activities are portrayed, and by offering different levels of salience to their users. The combination of these attributes means that surrogates can make it easy for intimate collaborators to move smoothly from awareness of each other to casual interaction while mitigating privacy and distraction concerns. Exploring different surrogate designs and how they work together can be straightforward if a good infrastructure is in place. We use an awareness server based on a distributed model-view-controller architecture, which automatically captures, stores, and distributes events. We also package surrogates as physical widgets, or phidgets, with a well-defined interface; this makes it easy for a programmer to plug a surrogate into the awareness server as a controller (to generate awareness events), a view (to display events that others have produced), or both. Because surrogate design, implementation, and use are still a new discipline, we also present several issues and next steps.
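The controller/view plug-in pattern described above can be sketched in a few lines: an awareness server stores and distributes events, while surrogate "phidgets" attach as controllers (event producers) and/or views (event consumers). Class and method names below are my own; the actual toolkit's API differs.

```python
# Hedged sketch of an awareness server with surrogates plugged in as
# controllers and views. Names and interfaces are illustrative assumptions.

from typing import Callable, Dict, List


class AwarenessServer:
    """Captures, stores, and distributes awareness events (model role)."""

    def __init__(self) -> None:
        self.history: List[dict] = []                  # stored events
        self.views: List[Callable[[dict], None]] = []  # registered view callbacks

    def add_view(self, callback: Callable[[dict], None]) -> None:
        self.views.append(callback)

    def publish(self, event: dict) -> None:
        """Called by controllers; stores the event and notifies all views."""
        self.history.append(event)
        for view in self.views:
            view(event)


class SurrogateController:
    """Surrogate acting as a controller: turns physical acts into events."""

    def __init__(self, server: AwarenessServer, person: str) -> None:
        self.server = server
        self.person = person

    def grasped_and_moved(self, distance_cm: float) -> None:
        self.server.publish(
            {"person": self.person, "act": "moved", "distance_cm": distance_cm}
        )


class SurrogateView:
    """Surrogate acting as a view: renders events others produced."""

    def __call__(self, event: dict) -> None:
        print(f"surrogate display: {event['person']} {event['act']} ({event})")


if __name__ == "__main__":
    server = AwarenessServer()
    server.add_view(SurrogateView())
    SurrogateController(server, "Saul").grasped_and_moved(12.0)
```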
international symposium on wearable computers | 2004
Takeshi Kurata; Nobuchika Sakata; Masakatsu Kourogi; Hideaki Kuzuoka; Mark Billinghurst
The wearable active camera/laser (WACL) allows remote collaborators not only to set their viewpoints into the wearer's workplace independently, but also to point to real objects directly with the laser spot. In this paper, we report a user test that examines the advantages and limitations of the WACL interface in remote collaboration by comparing it with a headset interface based on a head-mounted display and head-mounted camera. Results show that the WACL is more comfortable to wear, is more eye-friendly, and causes less fatigue to the wearer, although there is no significant difference in task completion time. We first review related work and user studies on wearable collaborative systems, and then describe the details of the user test.
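One way to picture how the remote collaborator's viewpoint can stay independent of the wearer is to compensate the camera/laser pan and tilt for the wearer's own orientation. The arithmetic below is an assumed simplification for illustration, not the WACL's actual control law.

```python
# Illustrative sketch (assumed math, not from the paper): holding the remote
# collaborator's chosen viewing direction steady despite the wearer's motion.

from dataclasses import dataclass


@dataclass
class Direction:
    pan_deg: float   # horizontal angle in world coordinates
    tilt_deg: float  # vertical angle in world coordinates


def wacl_joint_angles(desired: Direction, wearer_heading_deg: float,
                      wearer_pitch_deg: float) -> Direction:
    """Convert a world-frame viewing direction chosen by the remote collaborator
    into joint angles for the body-worn camera/laser, cancelling the wearer's
    own orientation so the viewpoint stays where the remote user put it."""
    return Direction(
        pan_deg=(desired.pan_deg - wearer_heading_deg + 180.0) % 360.0 - 180.0,
        tilt_deg=desired.tilt_deg - wearer_pitch_deg,
    )


if __name__ == "__main__":
    target = Direction(pan_deg=30.0, tilt_deg=-10.0)  # where the remote user looks/points
    # The wearer turns 45 degrees; the actuator counter-rotates to hold the view.
    print(wacl_joint_angles(target, wearer_heading_deg=45.0, wearer_pitch_deg=0.0))
```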
human factors in computing systems | 2009
Naomi Yamashita; Rieko Inaba; Hideaki Kuzuoka; Toru Ishida
When people communicate in their native languages using machine translation, they face various problems in constructing common ground. This study investigates the difficulties of constructing common ground when multiparty groups (consisting of more than two language communities) communicate using machine translation. We composed triads whose members came from three different language communities--China, Korea, and Japan--and compared their referential communication under two conditions: in their shared second language (English) and in their native languages using machine translation. Our study suggests the importance of grounding not only between speaker and addressee but also between addressees in constructing effective machine-translation-mediated communication. Furthermore, to successfully build common ground between addressees, it seems important for them to be able to monitor what is going on between the speaker and the other addressees.
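The communication setting can be pictured as a relay that translates each utterance separately for every addressee, so addressees never see the version the other addressee received. The toy sketch below uses hypothetical participant names and a stub in place of a real machine translation engine.

```python
# Toy sketch of machine-translation-mediated triad communication. The
# translate() stub only tags the text; a real MT backend is assumed, not shown.

from typing import Dict

NATIVE_LANGUAGE: Dict[str, str] = {"Li": "zh", "Kim": "ko", "Sato": "ja"}


def translate(text: str, source: str, target: str) -> str:
    """Stand-in for a machine translation call (not a real MT engine)."""
    return f"[{source}->{target}] {text}"


def relay(speaker: str, text: str) -> Dict[str, str]:
    """Deliver the speaker's utterance to each addressee in their own language."""
    source = NATIVE_LANGUAGE[speaker]
    return {
        addressee: translate(text, source, NATIVE_LANGUAGE[addressee])
        for addressee in NATIVE_LANGUAGE
        if addressee != speaker
    }


if __name__ == "__main__":
    # Each addressee gets a different rendering of the same referring expression,
    # which is one reason grounding between addressees becomes difficult.
    for who, rendered in relay("Li", "the round red piece near the corner").items():
        print(who, "receives:", rendered)
```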
human factors in computing systems | 1999
Hideaki Kuzuoka; Saul Greenberg
Digital but physical surrogates are tangible representations of remote people positioned within an office and under digital control. Surrogates selectively collect and present awareness information about the people they represent. By having them react to physical actions of people, surrogates can control the communication capabilities of a media space. This enables the smooth transition from awareness to casual interaction while mitigating concerns about privacy.
conference on computer supported cooperative work | 2004
Hideaki Kuzuoka; Jun'ichi Kosaka; Keiichi Yamazaki; Yasuko Suga; Akiko Yamazaki; Paul Luff; Christian Heath
In this paper we investigate systems for supporting remote collaboration that use mobile robots as communication media. It is argued that the use of a remote-controlled robot as a device to support communication involves two distinct ecologies: an ecology at the remote (instructor's) site and an ecology at the operator's (robot's) site. In designing a robot as a viable communication medium, it is essential to consider how these ecologies can be mediated and supported. We propose design guidelines to overcome the problems inherent in these dual ecologies and describe the development of a robot, GestureMan-3, based on them. Our experiments with GestureMan-3 showed that the system supports sequential aspects of the organization of communication.
conference on computer supported cooperative work | 2008
Naomi Yamashita; Keiji Hirata; Shigemi Aoyagi; Hideaki Kuzuoka; Yasunori Harada
In this study, we examine how changes in seating position across different sites affect video-mediated communication. We experimentally investigated the effects of altering seating positions on conversations in four-person groups, with two participants co-located at each of two sites: distant parties seated across from each other vs. distant parties seated side-by-side. In the latter arrangement, we found that speaker switches were more evenly distributed between distance-separated and co-located participants at points where there was no verbal indication of the next speaker. Participants also shared a higher sense of unity and reached slightly better group solutions. These findings demonstrate the importance of providing people with various seating arrangements across distant sites to facilitate different group activities.