Network


Kenji Mase's latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kenji Mase is active.

Publication


Featured research published by Kenji Mase.


IEEE Pervasive Computing | 2002

Activity and location recognition using wearable sensors

Seon-Woo Lee; Kenji Mase

Using measured acceleration and angular velocity data gathered through inexpensive, wearable sensors, this dead-reckoning method can determine a user's location, detect transitions between preselected locations, and recognize and classify sitting, standing, and walking behaviors. Experiments demonstrate the proposed method's effectiveness.
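
As a rough illustration of the kind of pipeline described above, the sketch below classifies sitting, standing, and walking from windowed accelerometer statistics. The sampling rate, axis convention, features, and thresholds are illustrative assumptions, not the paper's actual algorithm.

# Illustrative sketch only: windowed accelerometer features with a simple
# threshold rule for sitting / standing / walking. Sampling rate, axis
# convention, and thresholds are assumptions, not the paper's parameters.
import numpy as np

def classify_activity(accel, fs=50, window_s=2.0):
    """accel: (N, 3) acceleration in g; returns one label per window."""
    win = int(fs * window_s)
    labels = []
    for start in range(0, len(accel) - win + 1, win):
        seg = accel[start:start + win]
        mag = np.linalg.norm(seg, axis=1)      # total acceleration magnitude
        energy = np.var(mag)                   # motion energy in the window
        mean_vertical = np.mean(seg[:, 2])     # assumed z = vertical axis
        if energy > 0.05:                      # high-energy, periodic motion
            labels.append("walking")
        elif mean_vertical > 0.8:              # near 1 g along the torso axis
            labels.append("standing")
        else:
            labels.append("sitting")
    return labels

# Synthetic example: 4 s of standing still followed by 4 s of walking-like noise.
rng = np.random.default_rng(0)
still = np.tile([0.0, 0.0, 1.0], (200, 1)) + rng.normal(0, 0.01, (200, 3))
walk = np.tile([0.0, 0.0, 1.0], (200, 1)) + rng.normal(0, 0.3, (200, 3))
print(classify_activity(np.vstack([still, walk])))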


Systems and Computers in Japan | 1991

Automatic lipreading by optical‐flow analysis

Kenji Mase; Alex Pentland

While the acoustic signal is the primary cue in human speech recognition, the visual cue is also very useful, especially when the acoustic signal is distorted. A computer system is developed for automatic recognition of continuously spoken words using only visual data. The velocity of lip motions can be measured from optical-flow data, which allows muscle action to be estimated. Pauses in muscle action result in zero velocity of the flow and are used to locate word boundaries. The pattern of muscle action is then used to recognize the spoken words. In limited experiments involving the recognition of digits, the visually derived patterns of muscle action appear to be stable across multiple utterances of the same word. Even across speakers the patterns are similar enough that speaker-independent recognition is possible. An overall accuracy of approximately 70 percent, including word spotting and recognition, is obtained for continuously spoken test samples from three speakers.
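
The word-boundary idea above can be sketched as follows: given a per-frame lip-motion speed signal (e.g., mean optical-flow magnitude over the mouth region, assumed to be computed elsewhere), sufficiently long near-zero-velocity runs separate candidate words. The threshold and minimum pause length are illustrative assumptions.

# Illustrative sketch: locate word boundaries from a per-frame lip-motion
# speed signal. The threshold and minimum pause length are assumptions.
import numpy as np

def word_boundaries(speed, pause_thresh=0.1, min_pause_frames=3):
    """Return (start, end) frame indices of segments separated by pauses."""
    moving = speed > pause_thresh
    segments, start, quiet_run = [], None, 0
    for i, m in enumerate(moving):
        if m:
            if start is None:
                start = i
            quiet_run = 0
        else:
            quiet_run += 1
            # a sufficiently long near-zero-velocity run ends the current word
            if start is not None and quiet_run >= min_pause_frames:
                segments.append((start, i - quiet_run + 1))
                start = None
    if start is not None:
        segments.append((start, len(speed)))
    return segments

# Two bursts of motion separated by a pause -> two candidate words.
speed = np.concatenate([np.full(10, 0.5), np.zeros(5), np.full(8, 0.4), np.zeros(4)])
print(word_boundaries(speed))   # [(0, 10), (15, 23)]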


Lecture Notes in Computer Science | 1998

C-MAP: Building a Context-Aware Mobile Assistant for Exhibition Tours

Yasuyuki Sumi; Tameyuki Etani; Sidney S. Fels; Nicolas Simonet; Kaoru Kobayashi; Kenji Mase

This paper presents the objectives and progress of the Context-aware Mobile Assistant Project (C-MAP). C-MAP is an attempt to build a personal mobile assistant that provides visitors touring exhibitions with information based on their locations and individual interests. We have prototyped the first version of the mobile assistant and used an open-house exhibition held by our research laboratory as a testbed. A personal guide agent with a life-like animated character on a mobile computer guides users with exhibition maps that are personalized according to their physical and mental contexts. The paper also describes services for facilitating new encounters and information sharing among visitors and exhibitors who have shared interests during and after the exhibition tours.
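
A minimal sketch of the kind of personalization described above: exhibits are scored by proximity to the visitor and overlap with declared interests. The exhibit records, distance model, and weighting are hypothetical, not the project's actual data or scoring.

# Hypothetical sketch of location- and interest-based exhibit recommendation;
# the exhibit records, distance model, and weights are illustrative only.
import math

EXHIBITS = [
    {"name": "Gesture Interfaces",  "pos": (2.0, 1.0), "topics": {"interaction", "vision"}},
    {"name": "Wearable Sensors",    "pos": (8.0, 3.0), "topics": {"sensors", "context"}},
    {"name": "Virtual Walkthrough", "pos": (3.0, 5.0), "topics": {"graphics", "interaction"}},
]

def recommend(visitor_pos, interests, k=2):
    def score(ex):
        dist = math.dist(visitor_pos, ex["pos"])
        overlap = len(interests & ex["topics"])
        return overlap - 0.2 * dist          # closer and more relevant is better
    return sorted(EXHIBITS, key=score, reverse=True)[:k]

for ex in recommend((2.5, 2.0), {"interaction"}):
    print(ex["name"])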


International Symposium on Wearable Computers | 2000

Recognizing user context via wearable sensors

Brian P. Clarkson; Kenji Mase; Alex Pentland

We describe experiments in recognizing a person's situation from only a wearable camera and microphone. The types of situations considered in these experiments are coarse locations (such as at work, in a subway, or in a grocery store) and coarse events (such as being in a conversation or walking down a busy street) that would require only global, non-attentional features to distinguish them.
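
The sketch below illustrates one simple reading of "global, non-attentional features": a coarse color histogram plus an audio energy term, classified by nearest centroid. Both the feature choice and the classifier are simplifying assumptions, not the models used in the paper.

# Illustrative sketch: nearest-centroid classification of coarse situations
# from global features (coarse color histogram + audio power). The features
# and classifier are simplifying assumptions, not the paper's models.
import numpy as np

def global_features(frame, audio):
    """frame: (H, W, 3) uint8 image; audio: 1-D waveform for the same instant."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3).astype(float),
                             bins=(4, 4, 4), range=((0, 256),) * 3)
    hist = hist.ravel() / hist.sum()                          # normalized color histogram
    energy = np.array([np.mean(audio.astype(float) ** 2)])    # audio power
    return np.concatenate([hist, energy])

def train_centroids(examples):
    """examples: dict label -> list of feature vectors."""
    return {label: np.mean(vecs, axis=0) for label, vecs in examples.items()}

def classify(features, centroids):
    return min(centroids, key=lambda lbl: np.linalg.norm(features - centroids[lbl]))

rng = np.random.default_rng(1)
quiet = [global_features(rng.integers(0, 256, (32, 32, 3), dtype=np.uint8),
                         rng.normal(0, 0.05, 800)) for _ in range(5)]
noisy = [global_features(rng.integers(0, 256, (32, 32, 3), dtype=np.uint8),
                         rng.normal(0, 1.0, 800)) for _ in range(5)]
centroids = train_centroids({"office": quiet, "busy street": noisy})
print(classify(quiet[0], centroids))    # expected: office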


User Interface Software and Technology | 1998

Path drawing for 3D walkthrough

Takeo Igarashi; Rieko Kadobayashi; Kenji Mase; Hidehiko Tanaka

This paper presents an interaction technique for walkthrough in virtual 3D spaces, where the user draws the intended path directly on the scene, and the avatar automatically moves along the path. The system calculates the path by projecting the stroke drawn on the screen onto the walking surface in the 3D world. Using this technique, the user can specify not only the goal position, but also the route to take and the camera direction at the goal with a single stroke. A prototype system is tested using a display-integrated tablet, and experimental results suggest that the technique can enhance existing walkthrough techniques.
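
The core projection step can be sketched as ray casting: each stroke pixel defines a ray through a pinhole camera, intersected with a flat floor. The intrinsics, level camera pose, and flat-floor plane are illustrative assumptions, not the paper's setup.

# Illustrative sketch: project a 2-D screen stroke onto the ground plane
# (y = 0) by casting rays through a pinhole camera. Intrinsics, level camera
# pose, and the flat-floor assumption are illustrative only.
import numpy as np

def project_stroke_to_floor(stroke_px, cam_pos, focal_px, principal_pt):
    """stroke_px: list of (u, v) pixels; returns 3-D points on the plane y = 0."""
    cx, cy = principal_pt
    path = []
    for u, v in stroke_px:
        # Ray direction for a level camera: x right, y up (screen v grows downward), z forward.
        d = np.array([(u - cx) / focal_px, -(v - cy) / focal_px, 1.0])
        d /= np.linalg.norm(d)
        if d[1] >= 0:                      # ray never reaches the floor
            continue
        t = -cam_pos[1] / d[1]             # solve cam_pos.y + t * d.y = 0
        path.append(cam_pos + t * d)
    return np.array(path)

# Camera 1.6 m above the floor; a stroke drawn upward on the screen maps to a
# path running away from the viewer along the floor.
cam = np.array([0.0, 1.6, 0.0])
stroke = [(320, 400), (320, 350), (320, 300)]
print(project_stroke_to_floor(stroke, cam, focal_px=500.0, principal_pt=(320, 240)))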


International Conference on Pattern Recognition | 1990

Simultaneous multiple optical flow estimation

Masahiko Shizawa; Kenji Mase

The authors propose a simultaneous closed-form estimation method for multiple optical flows from image sequences in which each image point has multiple motions. The method requires only convolutions for space-time filtering and a low-dimensional eigensystem analysis as the optimization process. The authors propose a mixture flow model of multiple flows and energy-integral minimization as the model-fitting method. It is shown that symmetry between the component flows of the mixture flow can reduce the dimension of the eigensystem and make the optimization unimodal and stable. Successful experiments on double-flow estimation of random texture patterns and natural scene images are reported.
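
For the two-motion (transparency) case, the constraint can be restated compactly as the composition of two single-flow brightness-constancy operators; the following is a sketch under the usual differentiability assumptions, not a reproduction of the paper's full derivation.

% Sketch of the double-flow constraint (composition of two single-flow operators).
\[
\bigl(u_1 \partial_x + v_1 \partial_y + \partial_t\bigr)
\bigl(u_2 \partial_x + v_2 \partial_y + \partial_t\bigr) f = 0 ,
\]
which expands into a constraint that is linear in symmetric combinations of the two flows:
\[
u_1 u_2\, f_{xx} + (u_1 v_2 + u_2 v_1)\, f_{xy} + v_1 v_2\, f_{yy}
+ (u_1 + u_2)\, f_{xt} + (v_1 + v_2)\, f_{yt} + f_{tt} = 0 .
\]
Collecting the second derivatives of $f$ (obtained by space-time filtering) into a data vector and the symmetric parameter combinations into an unknown vector $\mathbf{a}$ gives one homogeneous linear equation per pixel; minimizing the summed squared residuals subject to $\lVert \mathbf{a} \rVert = 1$ leads to a small eigensystem, which is the low-dimensional eigensystem analysis mentioned in the abstract.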


Computers & Graphics | 1994

“Finger-Pointer”: Pointing interface by image processing

Masaaki Fukumoto; Yasuhito Suenaga; Kenji Mase

We have developed an experimental system for the 3D direct pointing interface “Finger-Pointer,” which can recognize finger-pointing actions and simple hand forms in real time by processing image sequences captured by stereoscopic TV cameras. The operator is not required to attach any special device such as a Data-Glove or a magnetic sensor. Simple and fast image-processing algorithms employed in the system enable real-time processing without any special image-processing hardware. By introducing the notion of the “VPO (Virtual Projection Origin),” the system can recognize pointing actions stably and accurately regardless of the operator's pointing style. The system also synchronizes and integrates the audio and visual channels by introducing the “Timing Tag” technique.
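
A minimal sketch of the pointing computation implied above: the target is taken as the intersection of the ray from a virtual projection origin (VPO) through the triangulated fingertip with the screen plane. The VPO position, screen geometry, and fingertip coordinates are illustrative assumptions; stereo triangulation is not reproduced here.

# Illustrative sketch: intersect the VPO -> fingertip ray with the screen plane.
# The VPO position, screen plane, and fingertip coordinates are assumptions;
# stereo triangulation of the fingertip is assumed to happen elsewhere.
import numpy as np

def pointed_position(vpo, fingertip, plane_point, plane_normal):
    """Intersect the VPO->fingertip ray with a plane; returns a 3-D point or None."""
    d = fingertip - vpo
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:                     # ray parallel to the screen plane
        return None
    t = np.dot(plane_normal, plane_point - vpo) / denom
    return None if t < 0 else vpo + t * d

# Screen plane at z = 2.0 m in front of the operator.
vpo = np.array([0.0, 1.5, 0.0])               # assumed near the dominant eye
fingertip = np.array([0.1, 1.4, 0.5])         # assumed triangulated fingertip
print(pointed_position(vpo, fingertip,
                       plane_point=np.array([0.0, 0.0, 2.0]),
                       plane_normal=np.array([0.0, 0.0, 1.0])))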


Ubiquitous Intelligence and Computing | 2007

Ontology-based semantic recommendation for context-aware e-learning

Zhiwen Yu; Yuichi Nakamura; Seiie Jang; Shoji Kajita; Kenji Mase

Nowadays, e-learning systems are widely used for education and training in universities and companies because they provide electronic access to course content and virtual classroom participation. However, with the rapid increase of learning content on the Web, it is time-consuming for learners to find the content they really want and need to study. Aiming to enhance the efficiency and effectiveness of learning, we propose an ontology-based approach to semantic content recommendation for context-aware e-learning. The recommender takes into consideration knowledge about the learner (user context), knowledge about the content, and knowledge about the domain being learned. Ontologies are used to model and represent these kinds of knowledge. The recommendation consists of four steps: semantic relevance calculation, recommendation refining, learning path generation, and recommendation augmentation. As a result, a personalized, complete, and augmented learning program is suggested for the learner.
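
The semantic relevance step can be illustrated with a toy sketch: content concepts are matched against the learner's context, with full weight for exact matches and partial weight when only a more general ancestor concept matches. The ontology, weights, and records below are hypothetical, not the paper's actual ontology or scoring.

# Hypothetical sketch of the semantic-relevance step; the toy ontology,
# weights, and concept sets are illustrative assumptions only.
IS_A = {                                   # child -> parent in a toy domain ontology
    "python": "programming",
    "java": "programming",
    "programming": "computer_science",
    "databases": "computer_science",
}

def ancestors(concept):
    while concept in IS_A:
        concept = IS_A[concept]
        yield concept

def relevance(content_concepts, learner_concepts):
    score = 0.0
    for c in content_concepts:
        if c in learner_concepts:
            score += 1.0                   # exact concept match
        elif any(a in learner_concepts for a in ancestors(c)):
            score += 0.5                   # match via a more general concept
    return score / max(len(content_concepts), 1)

learner = {"python", "databases"}
print(relevance({"python", "web"}, learner))      # 0.5: one exact match out of two
print(relevance({"java"}, {"programming"}))       # 0.5: matched via a parent concept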


Intelligent Robots and Systems | 2002

A constructive approach for developing interactive humanoid robots

Takayuki Kanda; Hiroshi Ishiguro; Michita Imai; Tetsuo Ono; Kenji Mase

There is a strong correlation between the number of appropriate behaviors an interactive robot can produce and its perceived intelligence. We propose a robot architecture for implementing a large number of behaviors and a visualization tool for understanding the resulting complex system. Behaviors are designed using knowledge obtained through cognitive experiments and implemented using situated recognition. By representing relationships between behaviors, episode rules help guide the robot in communicating with people in a consistent manner. We have implemented over 100 behaviors and 800 episode rules in a humanoid robot. As a result, the robot could entice people to relate to it interpersonally. The Episode Editor is a tool that supports the development of episode rules and visualizes the complex relationships among the behaviors. We consider this visualization to be necessary for the constructive approach.
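
One way to picture how episode rules could gate behavior selection is sketched below: a rule maps a recent suffix of the interaction history to a preferred next behavior. The rule format and behavior names are hypothetical, not the robot's actual rule language.

# Hypothetical sketch: episode rules as "if the recent behavior history ends
# with this sequence, prefer that next behavior". Rule format and behaviors
# are illustrative assumptions only.
EPISODE_RULES = [
    (("greet",), "introduce_self"),
    (("greet", "introduce_self"), "offer_handshake"),
    (("offer_handshake", "handshake_accepted"), "ask_name"),
]

def next_behavior(history, default="idle_gesture"):
    """Pick the behavior whose rule matches the longest suffix of the history."""
    best, best_len = default, 0
    for pattern, behavior in EPISODE_RULES:
        n = len(pattern)
        if n > best_len and tuple(history[-n:]) == pattern:
            best, best_len = behavior, n
    return best

print(next_behavior(["greet"]))                     # introduce_self
print(next_behavior(["greet", "introduce_self"]))   # offer_handshake
print(next_behavior(["wave"]))                      # idle_gesture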


Proceedings of the IEEE Workshop on Visual Motion | 1991

Principle of superposition: a common computational framework for analysis of multiple motion

Masahiko Shizawa; Kenji Mase

The principle of superposition is applied to various motion-estimation problems. It can potentially resolve the difficulty of analyzing multiple motions, transparent motion, and motion boundaries within a common mathematical structure. The authors demonstrate that, by applying the principle, the techniques of optical flow, 3D motion and structure from flow fields, the direct method for 3D motion and structure recovery, and motion and structure from correspondences in two frames can be extended coherently to deal with multiple motions. The theory not only produces multiple-motion versions of the existing algorithms, but also provides tools for the theoretical analysis of multiple motion. Since the approach operates not at the algorithm level, as conventional segmentation paradigms do, but at the level of computational theory, i.e. of constraints, the theoretical results derived also contribute to psychophysical and physiological studies on the preattentive stages of biological motion-vision systems. The paper emphasizes the universality of the principle.
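
Stated compactly (a sketch, not the paper's full formalism), the principle composes the single-motion constraint operators so that one homogeneous constraint covers all superposed components:

% Sketch: composition of single-motion constraint operators for n motions.
\[
L_1 L_2 \cdots L_n\, f = 0,
\qquad
L_i = u_i\,\partial_x + v_i\,\partial_y + \partial_t ,
\]
where each $L_i$ is the constraint operator of one motion component. The double-flow equation in the 1990 paper above is the $n = 2$ case, and the same composition idea underlies the extensions to the other motion-analysis formulations listed in the abstract.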

Collaboration


Dive into Kenji Mase's collaborations.

Top Co-Authors

Yasuyuki Sumi
Future University Hakodate

Kazushi Nishimoto
Japan Advanced Institute of Science and Technology

Sidney S. Fels
University of British Columbia