Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kentaro Go is active.

Publication


Featured research published by Kentaro Go.


Interactions | 2004

The blind men and the elephant: views of scenario-based system design

Kentaro Go; John M. Carroll

Six blind men encounter an elephant. Each of them touches a different part of the elephant and describes what the elephant is. Although they are touching the same elephant, each man's description is completely different from that of the others. We have been using this story as a metaphor for understanding different views of scenario-based system design.


International Conference on Distributed Computing Systems Workshops | 2004

A gaze and speech multimodal interface

Qiaohui Zhang; Atsumi Imamiya; Xiaoyang Mao; Kentaro Go

Eyesight and speech are two channels that humans naturally use to communicate with each other. However, both the eye-tracking and speech-recognition technologies available today are still far from perfect. Our goal is to find out how to make effective use of this error-prone information from both modes: to use one mode to correct the errors of the other, overcome the immaturity of the recognition techniques, resolve the ambiguity of the user's speech, and improve interaction speed. The integration strategies and the evaluation experiment demonstrate that these two modalities can be combined to improve the usability and efficiency of the user interface in ways that are not available to speech-only or gaze-only systems.
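
The abstract gives no implementation, so the following is only a minimal sketch of the mutual-correction idea it describes: letting the gaze fixation down-weight speech hypotheses that name distant objects, so each error-prone channel compensates for the other. The data structures, the Gaussian gaze weighting, and all names (SpeechHypothesis, ScreenObject, fuse) are assumptions made for illustration, written here in Python.

import math
from dataclasses import dataclass

@dataclass
class SpeechHypothesis:          # one entry of an n-best recognizer list (hypothetical structure)
    target_name: str             # object name heard, e.g. "folder"
    confidence: float            # recognizer score in [0, 1]

@dataclass
class ScreenObject:
    name: str
    x: float
    y: float

def fuse(gaze_xy, hypotheses, objects, gaze_radius=80.0):
    """Pick the object that best reconciles both error-prone channels:
    speech proposes candidate names, gaze down-weights objects far from
    the current fixation point. Returns the best-scoring object or None."""
    gx, gy = gaze_xy
    best, best_score = None, 0.0
    for hyp in hypotheses:
        for obj in objects:
            if obj.name != hyp.target_name:
                continue
            dist = math.hypot(obj.x - gx, obj.y - gy)
            gaze_weight = math.exp(-(dist / gaze_radius) ** 2)   # 1 near the fixation, ~0 far away
            score = hyp.confidence * gaze_weight
            if score > best_score:
                best, best_score = obj, score
    return best

objects = [ScreenObject("folder", 100, 120), ScreenObject("trash", 600, 400)]
hypotheses = [SpeechHypothesis("trash", 0.55), SpeechHypothesis("folder", 0.45)]
# The fixation near the folder overrides the slightly higher-scoring but wrong "trash" hypothesis.
print(fuse((110, 130), hypotheses, objects).name)   # -> folder

In this toy run the gaze channel rescues a speech misrecognition, which is the kind of mutual correction the evaluation in the paper is concerned with.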


Intelligent User Interfaces | 2004

Overriding errors in a speech and gaze multimodal architecture

Qiaohui Zhang; Atsumi Imamiya; Kentaro Go; Xiaoyang Mao

This work explores how to use gaze and speech commands simultaneously to select an object on the screen. Multimodal systems have long been seen as a key means of reducing the recognition errors of individual components, but a multimodal system generates errors as well. The present study classifies these multimodal errors, analyzes their causes, and proposes solutions for eliminating them. The goal is to gain insight into multimodal integration errors and to develop an error self-recoverable multimodal architecture, so that error-prone recognition technologies can perform at a more stable and robust level within a multimodal architecture.
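
The paper's actual error taxonomy and recovery policies are not reproduced in the abstract; the sketch below is a hypothetical illustration of what an error self-recoverable selection step could look like, classifying which channel likely failed and mapping each class to a recovery action. The class names, the classify function, and the RECOVERY table are all assumptions.

from enum import Enum, auto

class MultimodalError(Enum):        # assumed error taxonomy, not the paper's own
    SPEECH_MISRECOGNITION = auto()  # speech hypothesis matches no visible object
    GAZE_DRIFT = auto()             # fixation lands on no selectable object
    INTEGRATION_CONFLICT = auto()   # both channels are plausible but disagree
    NONE = auto()

def classify(speech_target, gazed_target, visible_objects):
    """Rough self-diagnosis step for a fused speech+gaze selection."""
    if speech_target not in visible_objects:
        return MultimodalError.SPEECH_MISRECOGNITION
    if gazed_target is None:
        return MultimodalError.GAZE_DRIFT
    if speech_target != gazed_target:
        return MultimodalError.INTEGRATION_CONFLICT
    return MultimodalError.NONE

RECOVERY = {   # one possible mapping from error class to recovery action
    MultimodalError.SPEECH_MISRECOGNITION: "trust gaze; ask for verbal confirmation",
    MultimodalError.GAZE_DRIFT: "trust speech; highlight the named object",
    MultimodalError.INTEGRATION_CONFLICT: "ask the user to repeat or re-fixate",
    MultimodalError.NONE: "commit the selection",
}

error = classify("print icon", "trash icon", {"print icon", "trash icon", "folder"})
print(error.name, "->", RECOVERY[error])   # INTEGRATION_CONFLICT -> ask the user to repeat or re-fixate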


Workshop on Mobile Computing Systems and Applications | 2003

Designing a robust speech and gaze multimodal system for diverse users

Qiaohui Zhang; Kentaro Go; Atsumi Imamiya; Xiaoyang Mao

Recognition errors make recognition-based systems brittle and lead to usability problems. A multimodal system is generally believed to be an effective means of contributing to error avoidance and recovery. This work explores how to combine gaze and speech, two error-prone modes, to obtain a robust multimodal architecture. Combining the two overcomes the imperfections of the recognition techniques, compensates for the drawbacks of a single mode, resolves language ambiguity, and leads to a much more effective system. In addition, we employ a new performance criterion for error-handling ability to analyze and assess the multimodal integration strategies. With this measure, we not only demonstrate the benefits of mutual disambiguation of individual input signals within the multimodal architecture, but also identify the conditions under which the multimodal system is most effective.
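
As a stand-in for the paper's error-handling criterion (which the abstract names but does not define), the sketch below computes, from logged selection trials, each mode's unimodal error rate, the fused error rate, and the fraction of trials in which fusion recovered from at least one unimodal error. The Trial record and its fields are assumed for illustration.

from dataclasses import dataclass

@dataclass
class Trial:                # one logged selection attempt (assumed format)
    speech_correct: bool    # was the speech-only interpretation right?
    gaze_correct: bool      # was the gaze-only interpretation right?
    fused_correct: bool     # was the integrated interpretation right?

def error_handling_report(trials):
    n = len(trials)
    speech_err = sum(not t.speech_correct for t in trials) / n
    gaze_err = sum(not t.gaze_correct for t in trials) / n
    fused_err = sum(not t.fused_correct for t in trials) / n
    # "Recovered" trials: at least one mode failed, yet fusion still got it right.
    recovered = sum(
        t.fused_correct and (not t.speech_correct or not t.gaze_correct)
        for t in trials
    ) / n
    return {"speech_err": speech_err, "gaze_err": gaze_err,
            "fused_err": fused_err, "recovered": recovered}

trials = [
    Trial(True, False, True),   # gaze failed, speech rescued the selection
    Trial(False, True, True),   # speech failed, gaze rescued it
    Trial(True, True, True),
    Trial(False, False, False), # both modes failed; fusion cannot recover
]
print(error_handling_report(trials))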


Human Factors in Computing Systems | 2010

Arranging touch screen software keyboard split-keys based on contact surface

Kentaro Go; Leo Tsurumi

Touch screen devices, which have become ubiquitous in our daily lives, offer users flexible input and output operations. Typical operation methods for touch screen devices include the use of a stylus or a finger, and a user can choose between them depending on their situation and preference. In this paper, we propose a dynamic method of assigning symbols to the keys of a software keyboard on a touch screen device. This method adjusts flexibly to both stylus operation and finger operation.
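
The abstract does not give the assignment algorithm, so the fragment below only illustrates the underlying idea under assumed values: the measured contact area distinguishes a stylus tip from a fingertip and selects an appropriately sized split-key arrangement. The threshold and both layout descriptions are made up for the example.

# Minimal sketch (assumed thresholds and layouts, not the paper's actual design):
# choose a software-keyboard arrangement from the measured touch contact area.

STYLUS_LAYOUT = {"key_width_px": 24, "symbols_per_key": 1}   # dense grid, one symbol per key
FINGER_LAYOUT = {"key_width_px": 56, "symbols_per_key": 2}   # wider split keys, symbols grouped

def choose_layout(contact_area_mm2: float, stylus_threshold_mm2: float = 8.0) -> dict:
    """A stylus tip produces a far smaller contact patch than a fingertip,
    so the contact area alone is enough to pick a key arrangement."""
    return STYLUS_LAYOUT if contact_area_mm2 < stylus_threshold_mm2 else FINGER_LAYOUT

print(choose_layout(3.0))    # stylus-sized contact   -> dense layout
print(choose_layout(45.0))   # fingertip-sized contact -> wide split keys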


Eye Tracking Research & Applications | 2004

Resolving ambiguities of a gaze and speech interface

Qiaohui Zhang; Atsumi Imamiya; Kentaro Go; Xiaoyang Mao

Recognition ambiguity in a recognition-based user interface is inevitable. A multimodal architecture should be an effective means of reducing this ambiguity and contributing to error avoidance and recovery, compared with a unimodal one. But does a multimodal architecture always perform better than a unimodal one? If not, when does it perform better, and when is it optimal? Furthermore, how can modalities best be combined to gain the advantage of synergy? Little is known about these issues in the available literature. In this paper we try to answer these questions by analyzing integration strategies for the gaze and speech modalities, together with an evaluation experiment verifying the analyses. The approach involves studying cases of mutual correction and investigating when mutual correction occurs. The goal of this study is to gain insight into integration strategies and to develop an optimal system that makes error-prone recognition technologies perform at a more stable and robust level within a multimodal architecture.
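
One way to see why the answer to "does multimodal always win?" is no comes from a back-of-the-envelope bound that is not in the paper: assume gaze-only and speech-only interpretation err independently with probabilities p_g and p_s, and that fusion succeeds whenever at least one mode is correct and can disambiguate the other. Then

\[
  p_{\mathrm{fused}} \;=\; p_g \, p_s \;<\; \min(p_g,\, p_s)
  \qquad \text{for } 0 < p_g < 1 \text{ and } 0 < p_s < 1,
\]

so the benefit shrinks as either mode's error rate approaches 1 and disappears entirely if both modes tend to fail on the same inputs, which is consistent with the abstract's point that the advantage of integration is conditional rather than automatic.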


IEEE Transactions on Software Engineering | 1999

A decomposition of a formal specification: an improved constraint-oriented method

Kentaro Go; Norio Shiratori

In this paper, the authors propose a decomposition method for a formal specification that divides the specification into two subspecifications composed by a parallel operator. To keep the specification's behavior equivalent before and after decomposition, the method automatically synthesizes an additional control specification, which contains the synchronization information of the decomposed subspecifications. The authors prove that the parallel composition of the decomposed subspecifications, synchronized with the control specification, is strongly equivalent to the original (monolithic) specification. The authors also write a formal specification of the OSI application layer's association-control service and decompose it with their method as an example of decomposing a practical specification. The decomposition method can be applied to top-down system development based on stepwise refinement.
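
The central claim can be summarized compactly; the notation here is assumed rather than taken from the paper. Writing S for the original specification, S_1 and S_2 for the decomposed subspecifications, C for the synthesized control specification, \parallel_G for parallel composition synchronized on a gate set G, and \sim for strong equivalence, the correctness result is

\[
  S \;\sim\; \bigl( S_1 \parallel_{G} S_2 \bigr) \parallel_{G_C} C ,
\]

i.e. the subspecifications running in parallel, kept in step by the control specification, are behaviorally indistinguishable from the monolithic specification, which is what justifies applying the decomposition during stepwise refinement.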


Advanced Information Networking and Applications | 2004

Designing a mobile phone of the future: requirements elicitation using photo essays and scenarios

Kentaro Go; Yasuaki Takamoto; John M. Carroll

We report a case study of designing a mobile phone of the future, involving participatory requirements elicitation using a form of scenario-based design. Participants took photographs and wrote essays that illustrated their personal interests and perspectives on given themes. They then analyzed the photos and essays, created scenarios, and posed questions about the scenarios to envision contexts of future use. The participants produced novel design concepts and provided design insights even though they had no design training.


Human Factors in Computing Systems | 2003

PRESPE: participatory requirements elicitation using scenarios and photo essays

Kentaro Go; Yasuaki Takamoto; John M. Carroll; Atsumi Imamiya; Hisanori Masuda

We describe our ongoing investigation of the PRESPE (Participatory Requirements Elicitation using Scenarios and Photo Essays) method. PRESPE enables participants to reflect upon their personal experiences when using systems and to create photo essays based on this reflection. The participants can then analyze these experiences by forming design concepts, envision scenarios by imagining contexts of use, and create artifacts by sketching these scenarios. Our case study showed that PRESPE enabled participants, even those with no prior design education, to create novel ideas regarding system development.


International Conference on Pervasive Computing | 2010

Iterative design of Teleoperative Slit Lamp Microscopes for Telemedicine

Kentaro Go; Kenji Kashiwagi; Naohiko Tanabe; Ken'ichi Horiuchi; Nobuya Koike

This paper reports on a design project for teleoperative slit lamp microscopes for telemedicine. The project is a case study of the development and deployment of a telemedicine system using human-centered design approaches. At the beginning of the design process, we conducted field research and paper prototyping of the doctor's terminal with ophthalmologists. Later, we developed working prototypes for evaluation and collected ophthalmologists' fixation data to redesign the user interface of the latest prototype.

Collaboration


Dive into Kentaro Go's collaborations.

Top Co-Authors

Koji Yanagida
Kurashiki University of Science and the Arts

Kazuhiko Yamazaki
Chiba Institute of Technology

Xiaoyang Mao
University of Yamanashi

John M. Carroll
Pennsylvania State University