Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Noriaki Ozawa is active.

Publication


Featured research published by Noriaki Ozawa.


User Interface Software and Technology | 2011

Gesture keyboard requiring only one camera

Taichi Murase; Atsunori Moteki; Noriaki Ozawa; Nobuyuki Hara; Takehiro Nakai; Katsuhito Fujimoto

In this paper, we propose a novel gesture-based virtual keyboard (Gesture Keyboard) with a QWERTY key layout that requires only one camera. Gesture Keyboard tracks the user's fingers and recognizes gestures as input, and each virtual key follows its corresponding finger. It is therefore possible to input characters at the user's preferred hand position, even if the hands are displaced while typing. Because Gesture Keyboard requires only one camera to obtain sensor information, keyboard-less devices can incorporate it easily.
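The abstract does not include code, but the "each virtual key follows its finger" idea can be illustrated with a minimal, purely hypothetical sketch. Here the column of the keyboard travels with the tracked finger, and only the vertical reach relative to the home row selects the key; `HOME_ROW`, `key_for_tap`, and the finger labels are all invented for this illustration, not taken from the paper:

```python
# Hypothetical sketch: each left-hand finger carries its own QWERTY column,
# so the layout follows the hand wherever it drifts during typing.
HOME_ROW = {
    "pinky_l":  ("q", "a", "z"),
    "ring_l":   ("w", "s", "x"),
    "middle_l": ("e", "d", "c"),
    "index_l":  ("r", "f", "v"),
}

def key_for_tap(finger: str, dy: float, row_height: float = 30.0) -> str:
    """Resolve a tap to a key: the finger fixes the column, and the
    vertical offset from the finger's home position (dy, in pixels)
    picks the top, home, or bottom row."""
    top, home, bottom = HOME_ROW[finger]
    if dy < -row_height / 2:
        return top        # finger reached up: top row
    if dy > row_height / 2:
        return bottom     # finger reached down: bottom row
    return home           # near rest position: home row
```

Because keys are resolved relative to the tracked fingertip rather than fixed screen coordinates, the hand can move mid-sentence without mistyping, which matches the behavior the abstract describes.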


Document Recognition and Retrieval | 2003

Slide identification for lecture movies by matching characters and images

Noriaki Ozawa; Hiroaki Takebe; Yutaka Katsuyama; Satoshi Naoi; Haruo Yokota

Slide identification is very important when creating e-Learning materials because it detects when slides change during lecture movies. Simply detecting the change is not enough for e-Learning purposes; identifying which slide is currently displayed in the frame also matters. A matching technique combined with a presentation file containing the answer information is very useful for identifying slides in a movie frame. We propose two slide-identification methods in this paper. The first is character-based and uses the relationship between character codes and their coordinates. The other is image-based and uses normalized correlation and dynamic programming. We used actual movies to evaluate the performance of these methods, both independently and in combination, and the experimental results show that they are very effective at identifying slides in lecture movies.
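The image-based method relies on normalized correlation, which can be sketched in a few lines. This is only an illustration of the standard zero-mean normalized correlation score (the paper's actual pipeline, including its dynamic-programming step, is not reproduced here), and the function names are assumptions:

```python
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized correlation between two equal-size grayscale
    images; 1.0 means a perfect (brightness/contrast-invariant) match."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    return float((a * b).sum() / denom)

def identify_slide(frame: np.ndarray, slides: list) -> int:
    """Return the index of the slide image that best matches the frame."""
    scores = [normalized_correlation(frame, s) for s in slides]
    return int(np.argmax(scores))
```

Because the score subtracts the mean and divides by the norms, it is unaffected by the overall brightness and contrast shifts that a projector-and-camera chain introduces, which is why it suits matching slides against movie frames.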


Symposium on 3D User Interfaces | 2012

Poster: Head gesture 3D interface using a head mounted camera

Atsunori Moteki; Nobuyuki Hara; Taichi Murase; Noriaki Ozawa; Takehiro Nakai; Takahiro Matsuda; Katsuhito Fujimoto

In this paper, we propose a real-world UI that uses head gestures. The UI detects the user's head motion from images obtained by a head-mounted camera (HMC) and estimates the relative position and distance between the user's head and the objects the user is viewing. To prevent erroneous judgments, a head-specific motion model is applied during gesture recognition. As feedback to the user, detailed object information is displayed on a head-mounted display (HMD). This UI allows hands-free interaction with surrounding objects, and we demonstrate its effectiveness through experiments.
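The abstract does not specify the head-specific motion model, so the following is only a rough, hypothetical illustration of the underlying idea: when the head turns, the whole HMC image shifts, and the dominant shift axis distinguishes a shake from a nod. The function name, inputs, and threshold are all invented for this sketch:

```python
import numpy as np

def classify_head_gesture(dx_history, dy_history, min_motion: float = 5.0) -> str:
    """Classify a head gesture from per-frame image shifts seen by a
    head-mounted camera: a mostly-horizontal shift pattern reads as a
    shake, a mostly-vertical one as a nod, and weak motion as stillness."""
    dx = float(np.sum(np.abs(dx_history)))  # accumulated horizontal image shift
    dy = float(np.sum(np.abs(dy_history)))  # accumulated vertical image shift
    if max(dx, dy) < min_motion:
        return "still"
    return "shake" if dx > dy else "nod"
```

Summing absolute shifts over a short window is what lets alternating motion (left-right-left for a shake) accumulate instead of canceling out, which a plain average of signed shifts would do.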


Proceedings of SPIE | 2012

Hybrid gesture recognition system for short-range use

Akihiro Minagawa; Wei Fan; Yutaka Katsuyama; Hiroaki Takebe; Noriaki Ozawa; Yoshinobu Hotta; Jun Sun

In recent years, various gesture recognition systems have been studied for use with televisions and video games [1]. In such systems, motion areas ranging from 1 to 3 meters deep have been evaluated [2]. However, with the burgeoning popularity of small mobile displays, gesture recognition systems capable of operating at much shorter ranges have become necessary. The problems of such systems are exacerbated by the fact that the camera's field of view is unknown to the user during operation, which imposes several restrictions on his or her actions. To overcome the restrictions imposed by such mobile camera devices, and to create a more flexible gesture recognition interface, we propose a hybrid hand-gesture system in which two types of gesture recognition modules are prepared and the most appropriate module is selected by a dedicated switching module. The two recognition modules of this system are shape analysis using a boosting approach (a detection-based approach) [3] and motion analysis using image frame differences (a motion-based approach; see, for example, [4]). We evaluated this system with sample users and classified the resulting errors into three categories: errors that depend on the recognition module, errors caused by incorrect module selection, and errors resulting from user actions. In this paper, we report the results of our investigations and explain the problems of short-range gesture recognition systems.
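The switching idea can be sketched minimally. This is an illustration only, not the paper's algorithm: here a hypothetical switching module routes each frame pair by frame-difference energy, sending strong motion to the motion-based recognizer and near-static input to the detection-based one. All names and the threshold are assumptions:

```python
import numpy as np

def motion_energy(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute frame difference — a crude measure of hand motion."""
    return float(np.mean(np.abs(curr.astype(float) - prev.astype(float))))

def recognize(prev_frame, curr_frame, detect_module, motion_module,
              threshold: float = 10.0):
    """Switching module: pick the more appropriate recognizer per frame pair.

    Large frame differences suggest a dynamic gesture, so the motion-based
    module handles it; small differences suggest a held hand shape, so the
    detection-based (boosting) module handles it.
    """
    if motion_energy(prev_frame, curr_frame) > threshold:
        return motion_module(prev_frame, curr_frame)
    return detect_module(curr_frame)
```

Separating the routing decision from the recognizers themselves mirrors the error taxonomy in the abstract: module-specific errors, switching (module-selection) errors, and user-action errors can each be measured in isolation.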


Archive | 2007

Computer readable recording medium recorded with learning management program, learning management system and learning management method

Noriaki Ozawa; Satoshi Naoi; Hiroto Toda; Osamu Iemoto


Archive | 2008

Document recognizing apparatus and method

Hiroaki Takebe; Noriaki Ozawa; Akihiro Minagawa; Yusaku Fujii; Yoshinobu Hotta; Hiroshi Tanaka; Katsuhito Fujimoto; Junichi Hirai; Seiji Takahashi


Archive | 2005

Program, method and apparatus for generating fill-in-the-blank test questions

Yutaka Katsuyama; Noriaki Ozawa; Satoshi Naoi


Archive | 2003

Apparatus and method of analyzing layout of document, and computer product

Noriaki Ozawa; Hiroaki Takebe; Katsuhito Fujimoto; Satoshi Naoi


Archive | 2006

Learning management program and learning management device

Osamu Iemoto; Satoshi Naoi; Noriaki Ozawa; Hiroto Toda


Archive | 2011

Character identification method and character identification device

Lanlan Chang; Sun Jun; Noriaki Ozawa; Komei Takebe; Hao Yu; Satoshi Naoi; Etsunobu Hotta

Collaboration


Dive into Noriaki Ozawa's collaborations.
