Publication


Featured research published by Keiju Okabayashi.


International Conference on Robotics and Automation | 2006

Embedded vision system for mobile robot navigation

Naoyuki Sawasaki; Manabu Nakao; Yoshinobu Yamamoto; Keiju Okabayashi

We developed an embedded vision system that accelerates the basic image processing functions needed for mobile robot navigation in compact, low-power hardware. The system consists of a digital signal processor (DSP) and a dedicated LSI for low-level image processing, specifically spatial filtering, feature extraction, and block matching. The image processing LSI contains a systolic array of 64 processing elements that accelerates the basic block operations for image feature calculation and correlation-based image matching. Power consumption is only 10 W, about one-seventh that of a typical Pentium 4 personal computer, yet correlation matching runs roughly three times faster than on such a system. Using this vision system, we implemented a stereo-vision-based navigation algorithm on our mobile service robot and performed a visual navigation experiment in a building hallway.
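
The correlation-based block matching that the LSI accelerates can be illustrated in software. The sketch below is a minimal sum-of-absolute-differences (SAD) search in Python; the block size, search radius, and function names are illustrative assumptions, not the parameters of the paper's systolic array.

import numpy as np

def block_match(ref, target, block_xy, block_size=8, search_radius=16):
    """Find the offset in `target` that best matches a block of `ref`
    by minimizing the sum of absolute differences (SAD)."""
    bx, by = block_xy
    block = ref[by:by + block_size, bx:bx + block_size].astype(np.int32)
    best_offset, best_sad = (0, 0), np.inf
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + block_size > target.shape[0] or x + block_size > target.shape[1]:
                continue  # candidate block would fall outside the target image
            cand = target[y:y + block_size, x:x + block_size].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_sad, best_offset = sad, (dx, dy)
    return best_offset, best_sad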


IEEE/SICE International Symposium on System Integration | 2011

Research and development environments for robot services and its implementation

Yuka Kato; Toru Izui; Yoshihiko Murakawa; Keiju Okabayashi; Miwa Ueki; Yosuke Tsuchiya; Masahiko Narita

We have proposed the RSi Research Cloud (RSi-Cloud), which enables integration of robot services with Internet services. So far, we have developed a surveillance service using robot cameras in the cloud environment and have connected robots to the Internet via RSNP (Robot Service Network Protocol). This paper reports the design of the Robot Service HTML interface (RSHi), a mechanism that treats robot applications deployed in RSi-Cloud in the same way as RSNP robot clients. RSHi makes it possible to deploy RSNP service applications inside firewalled systems, so an expansion of robot services can be expected. We also show a usage example of micro-services on a service robot that uses a face detection function.
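
As a rough illustration of the micro-service idea only (not the RSNP or RSHi specification), the sketch below exposes a robot-side face detection function over plain HTTP; the endpoint name, port, and detect_faces() stub are hypothetical.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def detect_faces(image_bytes):
    """Stub: on a real robot this would run the onboard face detector."""
    return [{"x": 120, "y": 80, "w": 64, "h": 64}]

class FaceServiceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/faces":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        faces = detect_faces(self.rfile.read(length))   # run detection on the uploaded image
        body = json.dumps({"faces": faces}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), FaceServiceHandler).serve_forever()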


International Conference on Robotics and Automation | 1994

Satellite berthing experiment with a two-armed space robot

Arata Ejiri; Ichiro Watanabe; Keiju Okabayashi; Masayoshi Hashima; Masayuki Tatewaki; Takashi Aoki; Tsugito Maruyama

To study autonomous control of a space robot, the authors developed the Advanced Space Robot Testbed with Redundant Arms (ASTRA). Berthing a satellite moving in space is a difficult and dangerous task when done manually. This paper describes a robot that berthed a mockup satellite moving and rotating in zero gravity. The robot autonomously approaches the moving target satellite using visual estimation of the satellite's motion, tracks the satellite handles with two robot arms under real-time visual tracking control, and grasps the handles with minimal mechanical shock using a flexible wrist mechanism and impedance control. The authors ran the satellite berthing experiment on ASTRA to check its performance.
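
Grasping with minimal mechanical shock relies on impedance control, i.e. making the wrist behave like a virtual mass-spring-damper around the desired motion. The sketch below is a generic Cartesian impedance update in Python, not the controller actually used on ASTRA; all gains and dimensions are illustrative.

import numpy as np

def impedance_step(x, v, x_des, v_des, f_ext, M, D, K, dt):
    """One step of the virtual dynamics M*a + D*(v - v_des) + K*(x - x_des) = f_ext."""
    a = np.linalg.solve(M, f_ext - D @ (v - v_des) - K @ (x - x_des))
    v_next = v + a * dt
    x_next = x + v_next * dt
    return x_next, v_next

# Illustrative 3-DOF translational wrist, softer along the approach (z) axis.
M = np.diag([2.0, 2.0, 2.0])        # virtual mass [kg]
D = np.diag([40.0, 40.0, 20.0])     # damping [N*s/m]
K = np.diag([400.0, 400.0, 100.0])  # stiffness [N/m]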


Conference of the Industrial Electronics Society | 2013

Reliable cloud-based robot services

Masahiko Narita; Sen Okabe; Yuka Kato; Yoshihiko Murakawa; Keiju Okabayashi; Shinji Kanda

Internet- and cloud-based robot services and their platforms are becoming attractive, and much related work has been done, such as DAvinCi, ROS on Android, and RoboEarth. However, when robot services are provided over computer networks, highly reliable services are required to handle short- and long-term disconnections between services and robots caused by wireless LAN dropouts, robot service problems, system errors on robots, and so on. In this paper, we adopt the Robot Service Network Protocol (RSNP) as a robot service platform in order to integrate robot services with Internet services. In addition, we propose a method and architecture for realizing reliable cloud-based robot services over RSNP from a communication viewpoint.
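
The kind of reliability the paper argues for, i.e. surviving short- and long-term disconnections, typically comes down to buffering messages on the robot side and retrying after reconnection. The sketch below illustrates that general pattern only; the class and method names are hypothetical and do not model RSNP itself.

import collections
import time

class ReliableUplink:
    """Buffer outgoing messages and deliver them in order, tolerating disconnections."""

    def __init__(self, send_fn, retry_interval=5.0):
        self._send = send_fn                      # callable that raises OSError on network failure
        self._buffer = collections.deque()        # holds unsent messages across disconnections
        self._retry_interval = retry_interval

    def publish(self, message):
        self._buffer.append(message)
        self._flush()

    def _flush(self):
        while self._buffer:
            try:
                self._send(self._buffer[0])       # oldest first; drop only after a successful send
                self._buffer.popleft()
            except OSError:
                time.sleep(self._retry_interval)  # back off; the message stays buffered for later
                return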


IEEE/SICE International Symposium on System Integration | 2011

Verification of the effectiveness of robots for sales promotion in commercial facilities

Yoshihiko Murakawa; Keiju Okabayashi; Shinji Kanda; Miwa Ueki

Over the course of four years, we have conducted field operational tests placing robots in commercial facilities to examine which services are feasible and what issues stand in the way of practical use. We found a large gap between users' perception of robots and the robots' actual functions, as well as issues such as the cost of implementing functions that match user expectations. It is therefore necessary to examine services that are feasible given the limits of current technology and to verify and evaluate them in a real environment. Among the services we conducted, we focused on sales promotion services, whose costs and quantitative effects can be shown clearly, and measured and evaluated their effectiveness. We present these results and observations, along with operational guidelines for practical application and the contents of the services to be implemented.


International Conference on Interaction Design & International Development | 2014

Immediately-available Input Method Using One-handed Motion in Arbitrary Postures

Moyuru Yamada; Jiang Shan; Katsushi Sakai; Yuichi Murase; Keiju Okabayashi

This paper presents an input method that enables the user to interact immediately with services using one-handed motion in arbitrary postures. The method is mainly targeted at workers who work with dirty hands or hold tools in their hands. To allow such workers to control applications as intended, without handheld devices, while carrying out manual work in various postures, we propose intentional segmentation, motion trajectory estimation of the wrist, and three types of inputs using wrist motions. The segmentation controller lets users intentionally distinguish input motion from normal hand work based on the wrist state, without limiting the range of upper-limb motion, and immediately begin the input motion. We estimate the motion trajectories of the wrist within the input segment using inertial sensors attached to the wrist and recognize gestures from the trajectories without training data sets. Users can control various applications through three types of inputs: continuous inputs linked to wrist motions, discrete inputs by gestures, and specific inputs generated from these. We also developed a wristwatch-like wearable device to evaluate the effectiveness of the proposed method. Through two experimental evaluations, we showed that the proposed method is easy to learn and achieved a mean recognition rate of 94.3% across various postures for six defined gestures.
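
Recognizing gestures from wrist trajectories without training data can be done with simple rules on the estimated path. The sketch below classifies a 2-D trajectory by its net displacement direction; the four directional classes and the circle heuristic are illustrative assumptions, not the six gestures defined in the paper.

import numpy as np

def classify_trajectory(traj):
    """traj: (N, 2) array of estimated wrist positions within one input segment."""
    traj = np.asarray(traj, dtype=float)
    disp = traj[-1] - traj[0]                                  # net displacement over the segment
    path = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum() # total path length
    if path < 1e-6:
        return "none"
    if np.linalg.norm(disp) / path < 0.3:                      # little net motion -> closed shape
        return "circle"
    angle = np.degrees(np.arctan2(disp[1], disp[0]))
    if -45 <= angle < 45:
        return "right"
    if 45 <= angle < 135:
        return "up"
    if -135 <= angle < -45:
        return "down"
    return "left"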


IEEE/SICE International Symposium on System Integration | 2014

Developing a wearable wrist glove for fieldwork support: A user activity-driven approach

Shan Jiang; Katsushi Sakai; Moyuru Yamada; Junya Fujimoto; Hiroshi Hidaka; Keiju Okabayashi; Yuichi Murase

This paper presents a wearable assistance system for service and maintenance personnel in industrial facilities, used together with mobile smart devices. It provides a practical interaction metaphor designed to fully utilize the user's physical activities, such as touch and hand gestures. As a result of a user activity-driven design approach, the system frees the user from various constraints during working operations. In particular, our newly developed gesture input method enables the user to interact immediately with services using one-handed motions in arbitrary postures. Furthermore, an event-driven architecture is proposed to solve the power management problem. We also developed a wristwatch-like wearable device to evaluate the effectiveness of the proposed method. The results presented in this paper demonstrate that the proposed system not only allows for effective and satisfactory service but also reduces errors and oversights. Finally, potential applications of this research, including design guidelines for future interface configurations, are discussed.
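
The event-driven idea mentioned for power management can be summarized as: sample and recognize only between explicit start and end events rather than polling continuously. The loop below is a heavily simplified sketch with hypothetical event names; it does not reflect the actual firmware design.

import queue

events = queue.Queue()

def on_wrist_event(kind):
    """Called from a sensor interrupt or callback; just enqueues the event."""
    events.put(kind)

def main_loop(process_segment):
    active = False
    while True:
        kind = events.get()          # block (low power) until an event arrives
        if kind == "segment_start":
            active = True
        elif kind == "segment_end" and active:
            process_segment()        # run trajectory estimation / recognition once
            active = False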


Society of Instrument and Control Engineers of Japan | 2015

Recognition of hand action using body-conducted sounds

Katsushi Miura; Shan Jiang; Yoshiro Hada; Keiju Okabayashi

Several methods of recognizing hand actions by examining the vibrations conducted through the body from muscular activity (which we call “body-conducted sounds” in this paper) have been proposed in previous work. However, they did not consider the transfer characteristics of body-conducted sounds. In this paper, we propose a hand action recognition method that extracts the main frequency components of body-conducted sounds using Mel-Frequency Cepstrum Coefficients (MFCCs) and divides the hidden states of Hidden Markov Models (HMMs) based on the MFCCs. Experimental results show that our method correctly recognizes 95% of hand actions on average.
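
A generic MFCC-plus-HMM pipeline can be sketched with off-the-shelf libraries; the paper's specific contribution of dividing hidden states based on the MFCCs is not reproduced here, and the sampling rate, state count, and library choices (librosa, hmmlearn) are assumptions.

import numpy as np
import librosa
from hmmlearn import hmm

def train_action_model(signals, sr=16000, n_states=5, n_mfcc=13):
    """Fit one Gaussian HMM to the body-conducted sound examples of a single hand action."""
    feats = [librosa.feature.mfcc(y=s, sr=sr, n_mfcc=n_mfcc).T for s in signals]
    lengths = [f.shape[0] for f in feats]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(np.vstack(feats), lengths)
    return model

def recognize(signal, models, sr=16000, n_mfcc=13):
    """Pick the action label whose HMM gives the highest log-likelihood for the signal."""
    feat = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T
    return max(models, key=lambda label: models[label].score(feat))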


Augmented Human International Conference | 2017

Intuitive visualization method for locating off-screen objects inspired by motion perception in peripheral vision

Shizuko Matsuzoe; Shan Jiang; Miwa Ueki; Keiju Okabayashi

This paper describes an intuitive user interface (UI) that guides and attracts a user's attention toward off-screen objects using a vibrating icon. Mobile augmented reality is often used for viewing annotations overlaid on a real-world image, such as operating instructions for maintenance and inspection. However, owing to the limited size of the device screen, it is difficult to convey annotations located off-screen to the user. In this work, we focused on motion perception and cognition properties and attempted to show off-screen information intuitively with a simple approach: the motion of a small, circular, vibrating icon. Experiments with tasks involving locating an off-screen target and memorizing numbers were conducted to evaluate the usability of the proposed guidance UI. The results show that with the proposed UI, users locate a target more quickly, memorize numbers more precisely, and report higher satisfaction than with conventional guidance UIs.
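
Placing the vibrating icon requires deciding where on the screen border the cue should appear for a given off-screen target. The helper below computes that border point by scaling the center-to-target direction onto an inset screen rectangle; it is a generic sketch, and the vibration animation itself is not shown.

def edge_indicator(target, screen_w, screen_h, margin=20):
    """Return the point on the screen border (inset by `margin`) that lies on the line
    from the screen center toward an off-screen target given in screen coordinates."""
    cx, cy = screen_w / 2.0, screen_h / 2.0
    dx, dy = target[0] - cx, target[1] - cy
    if abs(dx) < 1e-9 and abs(dy) < 1e-9:
        return cx, cy                                  # degenerate case: target at center
    half_w, half_h = cx - margin, cy - margin
    # Scale the direction vector so it just reaches the inset border rectangle.
    scale = min(half_w / abs(dx) if dx else float("inf"),
                half_h / abs(dy) if dy else float("inf"))
    return cx + dx * scale, cy + dy * scale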


Archive | 1993

System for and method of recognizing and tracking target mark

Masayoshi Hashima; Fumi Hasegawa; Keiju Okabayashi; Ichiro Watanabe; Shinji Kanda; Naoyuki Sawasaki; Yuichi Murase
