Kiyoshi Kiyokawa
Osaka University
Publications
Featured research published by Kiyoshi Kiyokawa.
IEEE Computer Graphics and Applications | 2008
Doug A. Bowman; Sabine Coquillart; Bernd Froehlich; Michitaka Hirose; Yoshifumi Kitamura; Kiyoshi Kiyokawa; Wolfgang Stuerzlinger
Three-dimensional user interfaces (3D UIs) let users interact with virtual objects, environments, or information using direct 3D input in the physical and/or virtual space. In this article, the founders and organizers of the IEEE Symposium on 3D User Interfaces reflect on the state of the art in several key aspects of 3D UIs and speculate on future research.
International Symposium on Mixed and Augmented Reality | 2002
Kiyoshi Kiyokawa; Mark Billinghurst; Sean Hayes; Anoop Gupta; Yuki Sannohe; Hirokazu Kato
We conducted two experiments comparing the communication behaviors of co-located users in collaborative augmented reality (AR) interfaces. In the first experiment, we compared optical see-through, stereo- and mono-video see-through, and immersive head-mounted displays (HMDs) using a target identification task. It was found that differences in real-world visibility severely affect communication behaviors. The optical see-through case produced the best results, requiring the least additional communication. Generally, the more difficult it was to use non-verbal communication cues, the more people resorted to speech cues to compensate. In the second experiment, we compared three different combinations of task and communication spaces using a 2D icon design task with optical see-through HMDs. It was found that the spatial relationship between the task and communication spaces also severely affected communication behaviors. Placing the task space between the participants produced the most active behavior in terms of initiatory body language and utterances, with the fewest miscommunications.
IEEE MultiMedia | 2000
Kiyoshi Kiyokawa; Haruo Takemura; Naokazu Yokoya
The paper discusses SeamlessDesign, a novel collaborative workspace for the rapid creation of 3D objects with constraints. Its seamless design supports both the shape and behavioral design of 3D objects in a unified and intuitive manner. Virtual and augmented setups support both multiple perspectives for parallel activity and face-to-face interaction for rich awareness.
IEEE Transactions on Visualization and Computer Graphics | 2015
Alexander Plopski; Yuta Itoh; Christian Nitschke; Kiyoshi Kiyokawa; Gudrun Klinker; Haruo Takemura
In recent years, optical see-through head-mounted displays (OST-HMDs) have moved from conceptual research to a market of mass-produced devices, with new models and applications released continuously. It remains challenging to deploy augmented reality (AR) applications that require consistent spatial visualization, such as maintenance, training, and medical tasks, because the view of the attached scene camera is shifted from the user's view. A calibration step can compute the relationship between the HMD screen and the user's eye to align the digital content. However, this alignment is only viable as long as the display does not move, an assumption that rarely holds for an extended period of time. As a consequence, continuous recalibration is necessary. Manual calibration methods are tedious and rarely support practical applications. Existing automated methods do not account for user-specific parameters and are error prone. We propose combining a pre-calibrated display with a per-frame estimation of the user's cornea position to estimate the individual eye center and continuously recalibrate the system. This also yields the gaze direction, which allows for instantaneous uncalibrated eye gaze tracking without additional hardware or complex illumination. Contrary to existing methods, we use simple image processing and do not rely on iris tracking, which is typically noisy and can be ambiguous. Evaluation with simulated and real data shows that our approach achieves a more accurate and stable eye pose estimation, resulting in an improved and practical calibration with a largely improved distribution of projection error.
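A minimal sketch of the continuous recalibration loop the abstract describes, in Python with NumPy. The fixed cornea-to-eye-center offset, the pinhole projection model, and all names are illustrative assumptions; the paper estimates the individual eye center rather than using a fixed offset.

```python
import numpy as np

# Illustrative constant: offset from the cornea center back to the eye's
# rotation center (an assumption, not the paper's estimated value).
CORNEA_TO_EYE_CENTER_MM = 5.3

def estimate_eye_center(cornea_center, optical_axis):
    """Step back from the tracked cornea center along the optical axis
    to approximate the eye's rotation center."""
    d = optical_axis / np.linalg.norm(optical_axis)
    return cornea_center - CORNEA_TO_EYE_CENTER_MM * d

def recalibrated_projection(K_display, eye_center):
    """Rebuild the eye-to-screen projection so the virtual camera sits at
    the current eye center, reusing pre-calibrated display intrinsics."""
    T = np.eye(4)
    T[:3, 3] = -eye_center                       # world -> eye translation
    P = np.hstack([K_display, np.zeros((3, 1))]) # 3x4 pinhole projection
    return P @ T

# Each frame: read the eye camera, update the projection, render.
cornea = np.array([1.2, -0.4, 18.0])   # mm, illustrative measurement
axis = np.array([0.05, -0.02, 1.0])    # toward the scene
P = recalibrated_projection(np.eye(3), estimate_eye_center(cornea, axis))
```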
Virtual Reality | 2002
Mark Billinghurst; Hirokazu Kato; Kiyoshi Kiyokawa; Daniel Belcher; Ivan Poupyrev
We describe a design approach, Tangible Augmented Reality, for developing face-to-face collaborative Augmented Reality (AR) interfaces. Tangible Augmented Reality combines Augmented Reality techniques with Tangible User Interface elements to create interfaces in which users can interact with spatial data as easily as real objects. Tangible AR interfaces remove the separation between the real and virtual worlds, and so enhance natural face-to-face communication. We present several examples of Tangible AR interfaces and results from a user study that compares communication in a collaborative AR interface to more traditional approaches. We find that in a collaborative AR interface people use behaviours that are more similar to unmediated face-to-face collaboration than in a projection screen interface.
International Symposium on Mixed and Augmented Reality | 2000
Kiyoshi Kiyokawa; Yoshinori Kurata; Hiroyuki Ohno
In a mixed reality system, mutual occlusion of real and virtual environments enhances the user's feeling that virtual objects truly exist in the real world. However, conventional optical see-through displays cannot present mutual occlusion correctly, since synthetic objects always appear as semi-transparent ghosts floating in front of the real scene. In this paper, we propose a novel display design that attacks this well-known unsolved problem. Our optical see-through display with mutual occlusion capability has the following advantages: (1) since the light-blocking mechanism is embedded in the display and no additional setup is needed, it can be used anywhere, e.g. outdoors; (2) since incoming light can easily be cut off in any situation, virtual images keep their originally intended colors, e.g. black; (3) since the light-blocking mechanism is separate from the display for color graphics, most existing see-through displays can be employed. We also describe our prototype display, which has confirmed the effectiveness of the approach.
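A minimal sketch of how such a light-blocking mechanism might be driven per frame, assuming the blocking element is an addressable panel (e.g., an LCD shutter) aligned with the color display; the mask derivation and all names are illustrative, not the prototype's actual pipeline.

```python
import numpy as np

def occlusion_mask(virtual_rgba, alpha_threshold=0.0):
    """Wherever the rendered virtual layer is opaque, block the incoming
    real-world light so virtual colors (even black) keep their intended
    appearance instead of blending with the scene behind them."""
    alpha = virtual_rgba[..., 3]
    return (alpha > alpha_threshold).astype(np.float32)  # 1.0 = block light

# Illustrative frame: one opaque virtual square on a 480x640 display.
frame = np.zeros((480, 640, 4), dtype=np.float32)
frame[100:200, 150:250, 3] = 1.0
mask = occlusion_mask(frame)   # driven onto the light-blocking panel
```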
International Journal of Human-Computer Interaction | 2003
Mark Billinghurst; Daniel Belcher; Arnab Gupta; Kiyoshi Kiyokawa
The authors present an analysis of communication behavior in face-to-face collaboration using a multi-user augmented reality (AR) interface. Two experiments were conducted. In the first experiment, collaboration with AR technology was compared with more traditional unmediated and screen-based collaboration. In the second, the authors compared collaboration with three different AR displays. Several measures were used to analyze communication behavior, and the authors found that users exhibited many of the same behaviors in a collaborative AR interface as in face-to-face unmediated collaboration. However, user communication behavior changed with the type of AR display used. The authors describe implications of these results for the design of collaborative AR interfaces and directions for future research.
International Symposium on Mixed and Augmented Reality | 2007
Kiyoshi Kiyokawa
The development of a wide field-of-view (FOV) head-mounted display (HMD) has been a technological challenge for decades. Previous HMDs tackled this problem using multiple display units (tiling) or multiple curved mirrors. The former approach tends to be expensive and heavy, whereas the latter tends to suffer from image distortion and a small exit pupil. To provide a wide-FOV image with a large exit pupil, this paper proposes a novel head-mounted projective display (HMPD) using a hyperbolic half-silvered mirror rather than a conventional planar mirror. The first bench-top prototype has successfully demonstrated wide field-of-view projection capability.
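The optical idea rests on the standard reflection property of a hyperboloidal mirror; a brief sketch of that property follows. The prototype's actual mirror geometry is not given in the abstract, so this is background, not the paper's derivation.

```latex
% Hyperboloid of two sheets with foci F_{\pm} = (0, 0, \pm c):
\[
  \frac{z^{2}}{a^{2}} - \frac{x^{2} + y^{2}}{b^{2}} = 1,
  \qquad c = \sqrt{a^{2} + b^{2}} .
\]
% Reflection property: a ray aimed at one focus is reflected through the
% other. A projector pupil placed at F_{+} is therefore re-imaged at
% F_{-}, where the viewer's eye can sit, keeping the exit pupil large
% across a wide field of view.
```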
Human Factors in Computing Systems | 2002
D. Mogilev; Kiyoshi Kiyokawa; Mark Billinghurst; J. Pair
The AR Pad is a handheld display with a Spaceball and a camera that can be used to view and interact with Augmented Reality models in a collaborative setting.
Intelligent User Interfaces | 2013
Jason Orlosky; Kiyoshi Kiyokawa; Haruo Takemura
Reading text safely and easily while mobile has been an issue with see-through displays for many years. For example, to use optical see-through Head-Mounted Displays (HMDs) or Head-Up Display (HUD) systems effectively in constantly changing dynamic environments, variables such as lighting conditions, human or vehicular obstructions in a user's path, and scene variation must be dealt with effectively. This paper introduces a new intelligent text management system that actively manages the movement of text in a user's field of view. Research to date lacks a method to migrate user-centric content such as e-mail or text messages throughout a user's environment while mobile. Unlike most current annotation and view management systems, we use camera tracking to find dark, uniform regions along the route on which a user is travelling in real time. We then move text from one viable location to the next to maximize readability. A pilot experiment with 19 participants shows that the text placement of our system is preferred to fixed-location text configurations.
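A minimal sketch of the dark-uniform-region search the abstract describes, in Python with NumPy; the window size, stride, scoring weights, and names are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def best_text_region(gray, win=(60, 240), stride=20, w_dark=1.0, w_flat=1.0):
    """Scan the camera frame for a dark, uniform window to host the text:
    score each candidate by mean luminance plus luminance variance and
    keep the minimum-scoring window."""
    h, w = win
    best_xy, best_score = None, np.inf
    for y in range(0, gray.shape[0] - h + 1, stride):
        for x in range(0, gray.shape[1] - w + 1, stride):
            patch = gray[y:y + h, x:x + w].astype(np.float32)
            score = w_dark * patch.mean() + w_flat * patch.var()
            if score < best_score:
                best_xy, best_score = (x, y), score
    return best_xy  # top-left corner of the text block

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in frame
x, y = best_text_region(frame)
```

In practice one would likely add hysteresis, moving the text only when its current region degrades, to avoid distracting jumps between frames; the abstract's "one viable location to the next" phrasing suggests such a policy, but its details are not specified.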