Kazutaka Kurihara
National Institute of Advanced Industrial Science and Technology
Publication
Featured research published by Kazutaka Kurihara.
international conference on robotics and automation | 2002
Kazutaka Kurihara; Shin'ichiro Hoshino; Katsu Yamane; Yoshihiko Nakamura
This paper presents real-time processing of optical motion capture with pan-tilt camera tracking. Pan-tilt camera tracking dynamically expands the capture range. An asymmetrical marker distribution and a polyhedron search algorithm achieve labeling that is robust against missing markers. The algorithm is designed for parallel cluster computation and enables real-time data processing. Experimental results demonstrate the effectiveness of the system.
human factors in computing systems | 2006
Kazutaka Kurihara; Masataka Goto; Jun Ogata; Takeo Igarashi
It is tedious to handwrite long passages of text. To make this process more efficient, we propose predictive handwriting, which provides input predictions as the user writes by hand. A predictive handwriting system presents possible next words as a list and allows the user to select one, skipping manual writing. Since it is not clear whether people are willing to use prediction, we first ran a user study comparing handwriting with selecting from the list. The result shows that, in Japanese, people prefer to select, especially when the expected performance gain from using selection is large. Based on these observations, we designed a multimodal input system, called speech-pen, that assists digital writing during lectures or presentations with background speech and handwriting recognition. The system recognizes speech and handwriting in the background and provides the instructor with predictions for further writing. The speech-pen system also allows context information for predictions to be shared between the instructor and the audience: the result of the instructor's speech recognition is sent to the audience to support their own note-taking. Our preliminary study shows the effectiveness of this system and the implications for further improvements.
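The core interaction the abstract describes, presenting likely next words as a selection list, can be illustrated with a minimal bigram predictor. This is an illustrative sketch only, not the speech-pen system itself; the corpus, function names, and list size `k` are all assumptions for the example.

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """Count word-pair frequencies over a list of sentences."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Return up to k most likely next words, i.e. the selection list
    a predictive-handwriting UI would show the writer."""
    return [w for w, _ in model[word].most_common(k)]

# Toy corpus standing in for recognized speech/handwriting context.
corpus = [
    "the system recognizes speech",
    "the system recognizes handwriting",
    "the system presents predictions",
]
model = build_bigram_model(corpus)
print(predict_next(model, "system"))
```

In the actual system the prediction context comes from background speech and handwriting recognition rather than a fixed corpus; the list-selection step is the same.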
advances in computer entertainment technology | 2013
Kazutaka Kurihara; Masakazu Takasu; Kazuhiro Sasao; Hal Seki; Takayuki Narabu; Mitsuo Yamamoto; Satoshi Iida; Hiroyuki Yamamoto
This paper demonstrates that face-like structures are everywhere and can be detected automatically, even by computers. A huge number of satellite images of the Earth, the Moon, and Mars were explored, and many interesting face-like structures were detected. Through this result, we believe that science and technology can alert people not to jump too easily to occult explanations.
advances in computer entertainment technology | 2017
Kazutaka Kurihara; Akari Itaya; Aiko Uemura; Tetsuro Kitahara; Katashi Nagao
In this paper, we describe and evaluate Picognizer, a JavaScript library that detects and recognizes user-specified synthesized sounds using a template-matching approach. In their daily lives, people are surrounded by various synthesized sounds, so it is valuable to establish a way to recognize such sounds as triggers for invoking information systems. However, it is not easy for end-user programmers to create custom-built recognizers for each usage scenario through supervised learning. Thus, exploiting a property of synthesized sounds, that their auditory deviation is small across replays, we implemented a JavaScript library that detects and recognizes sounds using traditional pattern-matching algorithms. We evaluated its performance quantitatively and show its effectiveness by proposing various usage scenarios, such as an autoplay system for digital games and the augmentation of digital games, including gamification.
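The template-matching idea the abstract relies on can be sketched independently of the library: because a synthesized jingle sounds nearly identical on every replay, sliding a stored template over the incoming signal and taking the best normalized cross-correlation is enough to locate it. This is a hedged Python illustration of the general technique, not Picognizer's actual JavaScript API or feature pipeline.

```python
import numpy as np

def detect_template(stream, template):
    """Slide the template over the stream and return the offset with the
    highest normalized cross-correlation (basic template matching)."""
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_score, best_offset = -np.inf, -1
    for i in range(len(stream) - n + 1):
        win = stream[i:i + n]
        w = (win - win.mean()) / (win.std() + 1e-12)
        score = float(np.dot(w, t)) / n
        if score > best_score:
            best_score, best_offset = score, i
    return best_offset, best_score

# A synthesized "jingle" deviates little between replays, so an exact-ish
# match against one recorded template suffices.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 20, 100))
stream = rng.normal(0, 0.1, 500)
stream[220:320] += template          # embed the jingle at offset 220
offset, score = detect_template(stream, template)
print(offset)
```

A real recognizer would correlate spectral features (e.g. spectrogram frames) rather than raw samples, but the matching step is the same.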
augmented human international conference | 2014
Kazutaka Kurihara; Yoko Sasaki; Jun Ogata; Masataka Goto
In video content such as feature films, the main themes and messages are often sufficiently conveyed through dialogue and narration. To augment human capability to consume video content, here we propose a system for watching such videos at very high speed while ensuring that speech is still comprehensible. Specifically, we employ a purpose-built automatic speech detector to realize two-level fast-forwarding for a wide variety of video content: very fast during segments without speech, and understandably fast during segments with speech. In our experiments, practical performance was achieved by frame-by-frame audio classification using Gaussian mixture models trained on subtitle information from 120 commercial DVD movies.
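The classification step described above, deciding frame by frame whether speech is present so playback can switch between "very fast" and "understandably fast", can be sketched with a toy generative classifier. The sketch below uses single diagonal Gaussians as a stand-in for the paper's Gaussian mixture models, and the 4-dimensional synthetic features are assumptions for the example; the real system trains on subtitle-aligned audio features from DVD movies.

```python
import numpy as np

def fit_gaussian(frames):
    """Fit a diagonal Gaussian (a one-component stand-in for a GMM)
    to feature frames of shape (n_frames, n_dims)."""
    return frames.mean(axis=0), frames.var(axis=0) + 1e-6

def log_likelihood(frame, mean, var):
    return float(-0.5 * np.sum(np.log(2 * np.pi * var)
                               + (frame - mean) ** 2 / var))

def classify_frames(frames, speech_model, nonspeech_model):
    """Label each frame 1 (speech) or 0 (non-speech); a playback controller
    would then pick the per-segment fast-forward speed from these labels."""
    return np.array([
        1 if log_likelihood(f, *speech_model) > log_likelihood(f, *nonspeech_model)
        else 0
        for f in frames
    ])

rng = np.random.default_rng(1)
speech_train = rng.normal(1.0, 0.3, size=(200, 4))      # toy "speech" features
nonspeech_train = rng.normal(-1.0, 0.3, size=(200, 4))  # toy "non-speech" features
speech_model = fit_gaussian(speech_train)
nonspeech_model = fit_gaussian(nonspeech_train)

test_frames = np.vstack([rng.normal(1.0, 0.3, size=(5, 4)),
                         rng.normal(-1.0, 0.3, size=(5, 4))])
print(classify_frames(test_frames, speech_model, nonspeech_model))
```

Smoothing the per-frame labels into contiguous segments (e.g. with a minimum-duration rule) would then yield the two-level playback schedule.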
international conference on human-computer interaction | 2013
Kazutaka Kurihara; Koji Tsukada
Freedom of expression is welcomed in democratic nations, but there is no end of cases in which recorded video is processed to report information not intended by the person in the video. For this article, we have developed a prototype system for preventing this sort of bias in reporting. The system is a smartphone application that allows users, who are the subject of news-gathering, to also record the material themselves, post it to a video sharing site, and to display a QR code containing a link to the video. The system enables a link to a video reproducing the original statements to be forcefully embedded in the report video, which should inhibit bias in the reporting as it is presented later.
international conference on human-computer interaction | 2011
Yuichi Murata; Kazutaka Kurihara; Toshio Mochizuki; Buntarou Shizuki; Jiro Tanaka
We describe the design of an overhead projector (OHP) shadow metaphor-based presentation interface that visualizes a presenter's actions. Our interface works with graphics tablet devices. It superimposes a pen-shaped shadow based on the position, altitude, and azimuth of the pen. A presenter can easily point at the slide with the shadow, and the audience can observe the presenter's actions through it. We gave two presentations using a prototype system and gathered feedback from them, on the basis of which we decided on the design of the shadows.
Archive | 2003
Yoshihiko Nakamura; Katsu Yamane; Kazutaka Kurihara; Ichiro Suzuki
international conference on multimodal interfaces | 2007
Kazutaka Kurihara; Masataka Goto; Jun Ogata; Yosuke Matsusaka; Takeo Igarashi
Archive | 2003
Yoshihiko Nakamura; Katsu Yamane; Ichiro Suzuki; Kazutaka Kurihara; Koji Tatani