Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shigeyuki Sakane is active.

Publication


Featured research published by Shigeyuki Sakane.


Advanced Robotics | 1987

Occlusion avoidance of visual sensors based on a hand-eye action simulator system: HEAVEN

Shigeyuki Sakane; Masaru Ishii; Masayoshi Kakikura

In order to construct an intelligent robot system, the implementation of sensor systems is a prime requirement. With increasing demand for hand-eye coordination systems, the problems of how to design, construct, and effectively utilize hand-eye systems have become apparent and must be solved. For example, teaching tasks to a hand-eye system requires off-line programming of the visual sensors as well as the manipulators. Although much attention has recently been paid to robot simulators that assist the off-line programming of manipulators, less work has been done on simulators of the sensors. We have developed a hand-eye action simulator system called HEAVEN. This system provides model-based functions to assist the hand-eye system in visual recognition and in monitoring the robot environment. This paper describes a function that assists cameras in occlusion avoidance so that adequate image data can be input without occlusion. The problem of selecting the best viewpoint for a camera is defined as evaluating the viewpo...
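The viewpoint-selection idea in the abstract can be illustrated with a toy visibility score: candidate camera positions are ranked by the fraction of sampled target points that obstacles (modeled here as spheres) do not occlude. This is a hedged sketch of the general idea, not HEAVEN's actual evaluation function; all names, geometry, and the occlusion test are illustrative assumptions.

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """True if the ray origin + t*direction (t > 0, unit direction) meets the sphere."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    return disc >= 0 and -b - np.sqrt(disc) > 0

def visibility_score(camera, target_points, obstacles):
    """Fraction of sampled target points visible from a candidate camera position."""
    visible = 0
    for p in target_points:
        d = p - camera
        dist = np.linalg.norm(d)
        d = d / dist
        # Approximate occlusion test: a sphere whose center lies nearer than the
        # target point and which the ray intersects is assumed to block the view.
        blocked = any(
            ray_hits_sphere(camera, d, c, r) and np.linalg.norm(c - camera) < dist
            for c, r in obstacles
        )
        visible += 0 if blocked else 1
    return visible / len(target_points)

# Rank two candidate viewpoints against an obstacle hovering above the target.
target = [np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])]
obstacles = [(np.array([0.0, 0.0, 0.5]), 0.2)]
candidates = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
best = max(candidates, key=lambda c: visibility_score(c, target, obstacles))
```

The overhead candidate sees nothing through the obstacle, so the side viewpoint wins; a real planner would score many viewpoints and combine visibility with other constraints such as image resolution.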


Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 1983

Design and implementation of SPIDER—A transportable image processing software package

Hideyuki Tamura; Shigeyuki Sakane; Fumiaki Tomita; Naokazu Yokoya; Masahide Kaneko; Katsuhiko Sakaue

SPIDER is a general-purpose image processing software package consisting of over 400 FORTRAN IV subroutines for various image processing algorithms and several utility programs for managing them. The package was developed for the benefit of extensive interchange and accumulation of programs among research groups; thus, high transportability of the software is emphasized above all in its design concept. In effect, all the image processing subroutines are implemented to be completely free of I/O work such as file access or driving peripheral image devices. The specifications of SPIDER programs also regulate the style of comments in source programs and of documentation for the user's manual. SPIDER may also be very useful as a research tool in other scientific disciplines, as well as for integrating fundamental algorithms in the image processing community. The design concepts, specifications, and contents of SPIDER are described.


International Conference on Robotics and Automation | 1991

Automatic planning of light source and camera placement for an active photometric stereo system

Shigeyuki Sakane; T. Sato

An automatic planning method for light source and camera placement in an active photometric stereo system is presented. Since photometric stereo systems normally use multiple light sources fixed in the environment, they cannot avoid shadows cast by surrounding objects. Using a movable light source and actively adapting its placement to the task environment eliminates such shadows. Candidate positions for light source placement are obtained from a three-dimensional model of the environment and image processing of a virtual sphere around the target objects. Possible combinations are evaluated using the combined criteria of reliability and detectability of the measurement. Experimental results obtained using a light source affixed to a manipulator demonstrate the usefulness of the method. To improve the detectability of the active photometric stereo system, an extension has been made by using a movable camera in place of a fixed camera.
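For context, the measurement the planner serves is classical Lambertian photometric stereo: with three known, non-coplanar light directions, the surface normal and albedo follow from the three observed intensities. A minimal sketch of that recovery step (the light directions and test values are illustrative, not from the paper):

```python
import numpy as np

# Three known, non-coplanar light directions, one row per image (illustrative).
L = np.array([
    [0.0, 0.0, 1.0],
    [0.8, 0.0, 0.6],
    [0.0, 0.8, 0.6],
])

def recover_normal(intensities, lights=L):
    """Lambertian photometric stereo: I = rho * (lights @ n).
    Solve lights @ g = I for g = rho * n, then split albedo and unit normal."""
    g, *_ = np.linalg.lstsq(lights, np.asarray(intensities, dtype=float), rcond=None)
    rho = np.linalg.norm(g)
    return rho, g / rho

# A surface patch facing straight up (+z) with unit albedo:
I = L @ np.array([0.0, 0.0, 1.0])
rho, n = recover_normal(I)
```

Shadowing breaks this system of equations, which is exactly why the paper plans the light-source placement: every intensity must come from a surface point actually lit by the corresponding source.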


International Conference on Robotics and Automation | 2000

A human-robot interface using an interactive hand pointer that projects a mark in the real work space

Shin Sato; Shigeyuki Sakane

A human-robot interface system is under development that takes into account the flexibility of the DigitalDesk approach. The prototype consists of a projector subsystem for information display and a real-time tracking vision subsystem to recognize the human's actions. Two levels of interaction, using a virtual operational panel and an interactive image panel, have been developed. This paper presents the third subsystem, the interactive hand pointer, used for selecting objects or positions in the environment via the operator's hand gestures. The system visually tracks the operator's pointing hand and projects a mark at the indicated position using an LCD projector. Since the mark can be observed directly in the real work space without monitor displays or HMDs, the operator can easily correct the indicated position by moving the hand. The system can project a mark not only onto a target plane of known height but also onto a plane of unknown height. Experimental results of a pick-and-place task demonstrate the usefulness of the proposed system.


IEEE Transactions on Robotics | 2008

Sensor Planning for Mobile Robot Localization: A Hierarchical Approach Using a Bayesian Network and a Particle Filter

Hongjun Zhou; Shigeyuki Sakane

In this paper, we propose a hierarchical approach to solving sensor planning for the global localization of a mobile robot. Our system consists of two subsystems: a lower layer and a higher layer. The lower layer uses a particle filter to evaluate the posterior probability of the localization. When the particles converge into clusters, the higher layer starts particle clustering and sensor planning to generate an optimal sensing action sequence for the localization. The higher layer uses a Bayesian network for probabilistic inference. The sensor planning takes into account both localization belief and sensing cost. We conducted simulations and actual robot experiments to validate our proposed approach.
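The lower layer's particle filter can be sketched in one dimension: predict with the motion model, weight particles by the likelihood of a range measurement, and resample when the effective sample size collapses. This is a generic textbook particle filter under assumed Gaussian noise, not the authors' implementation; the landmark, noise level, and corridor geometry are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, motion, measurement, landmark, noise=0.1):
    """One predict-weight-resample cycle of a 1-D particle filter.
    measurement is an observed distance to a known landmark."""
    # Predict: apply the motion command with process noise.
    particles = particles + motion + rng.normal(0.0, noise, particles.shape)
    # Weight: Gaussian likelihood of the range measurement.
    expected = np.abs(landmark - particles)
    w = weights * np.exp(-0.5 * ((measurement - expected) / noise) ** 2)
    w = w / w.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(w ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles, w = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, w

particles = rng.uniform(0.0, 10.0, 500)   # global uncertainty along a corridor
weights = np.full(500, 1.0 / 500)
# True robot at 2.0 moves +1.0, then ranges a landmark at 8.0 (true range 5.0).
particles, weights = pf_step(particles, weights, 1.0, 5.0, 8.0)
estimate = np.sum(particles * weights)
```

After one such update the particles concentrate near the true position; the paper's higher layer takes over once distinct particle clusters emerge, since a single range reading often leaves several symmetric hypotheses alive.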


Intelligent Robots and Systems | 1995

Strategy for unfolding a fabric piece by cooperative sensing of touch and vision

Eiichi Ono; Nobuyuki Kita; Shigeyuki Sakane

A hand/eye system for handling flexible materials is under development. Our concern is to increase the effectiveness of cooperative sensing of touch and vision for handling flexible and limp materials such as fabrics, for which such cooperation is especially important. This paper presents a sensor-based manipulation strategy for unfolding a fabric piece, covering a set of primitive fabric-handling movements. Vision and tactile sensing are used for picking up the folded part.


Advanced Robotics | 1991

Illumination setup planning for a hand-eye system based on an environmental model

Shigeyuki Sakane; Ruprecht Niepold; Tomomasa Sato; Yoshiaki Shirai

In hand-eye systems for advanced robotic applications such as assembly, the degrees of freedom of the vision sensor should be increased and actively exploited to cope with unstable scene conditions. Particularly when a simple vision sensor is used, intelligent adaptation of the sensor is essential to compensate for its inability to adapt to a changing environment. This paper proposes a vision sensor setup planning system that operates on environmental models and generates plans for using the sensor and its illumination, assuming freedom of positioning for both. A typical vision task, in which the edges of an object are measured to determine its position and orientation, is assumed for the sensor setup planning. In this context, the system is able to generate plans for the camera and illumination positions and to select the set of edges best suited for determining the object's position. The system operates for stationary or moving objects by evaluating scene conditions such as edge length, ...


Robotics and Autonomous Systems | 2007

Mobile robot localization using active sensing based on Bayesian network inference

Hongjun Zhou; Shigeyuki Sakane

In this paper we propose a novel method of sensor planning for a mobile robot localization problem. We represent the conditional dependence relation between local sensing results, actions, and belief of the global localization using a Bayesian network. Initially, the structure of the Bayesian network is learned from the complete data of the environment using the K2 algorithm combined with a genetic algorithm (GA). In the execution phase, when the robot is kidnapped to some place, it plans an optimal sensing action by taking into account the trade-off between the sensing cost and the global localization belief, which is obtained by inference in the Bayesian network. We have validated the learning and planning algorithm by simulation experiments in an office environment.
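The trade-off the planner evaluates, localization belief versus sensing cost, can be illustrated with a toy expected-utility selection. The action names, posteriors, costs, and weighting below are invented for illustration; in the paper, the belief terms come from inference in the learned Bayesian network rather than from a fixed table.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete belief."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Current belief over three candidate robot locations (illustrative numbers).
belief = [0.5, 0.3, 0.2]

# Each action: (name, sensing cost, expected posterior belief after executing it).
actions = [
    ("look_left",     1.0, [0.80, 0.10, 0.10]),
    ("look_right",    1.0, [0.40, 0.40, 0.20]),
    ("move_and_look", 2.0, [0.95, 0.03, 0.02]),
]

def utility(cost, posterior, cost_weight=0.2):
    """Expected entropy reduction minus weighted sensing cost."""
    return (entropy(belief) - entropy(posterior)) - cost_weight * cost

best = max(actions, key=lambda a: utility(a[1], a[2]))
```

With these numbers the expensive but highly informative action wins; shrinking `cost_weight` or the action costs shifts the balance, which is precisely the trade-off the planner has to tune.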


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 1996

Estimation of contact position between a grasped object and the environment based on sensor fusion of vision and force

T. Ishikawa; Shigeyuki Sakane; T. Sato; H. Tsukune

Robots require various sensor fusion techniques to achieve manipulation tasks effectively. This paper presents a method of fusing vision and force information to estimate the contact position between a grasped object and other objects in the environment. The technique is especially useful for assembly tasks, since manipulating an object using visual information alone often runs into difficulties because of occlusions caused by the grasped object, surrounding objects, and the manipulator itself. In such situations, force sensor information helps to estimate the contact position even when the exact contact position is invisible. Consequently, fusing vision and force improves the adaptability of robot systems to changing situations in the task. Experiments using a robot system demonstrate the usefulness of the proposed method. In addition, we address the problem of sensor planning to automatically select sensor information, taking the fusion of vision and force into account.
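The force-side estimate rests on a standard rigid-body identity: a contact applying a pure force f at point r produces wrist torque tau = r x f, so a measured (f, tau) pair constrains r to a line, and vision can disambiguate the point along it. A minimal sketch of that identity (not the authors' full estimator, which also handles contact moments and sensor noise):

```python
import numpy as np

def contact_line(f, tau):
    """A contact applying pure force f at point r yields wrist torque tau = r x f.
    All points consistent with (f, tau) form the line r(t) = r0 + t * f_hat;
    return (r0, f_hat). Vision would then pick the point on this line closest
    to the visually estimated contact region (that fusion step is omitted here)."""
    f = np.asarray(f, dtype=float)
    tau = np.asarray(tau, dtype=float)
    f2 = np.dot(f, f)
    r0 = np.cross(f, tau) / f2        # minimum-norm point on the line
    return r0, f / np.sqrt(f2)

# Unit force along +z applied at (1, 0, 0) gives tau = (0, -1, 0):
r0, axis = contact_line([0.0, 0.0, 1.0], [0.0, -1.0, 0.0])
```

Because force alone cannot tell where along its own line of action the contact lies, even a partially occluded visual estimate is enough to resolve the remaining single degree of freedom.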


Intelligent Robots and Systems | 1993

Distributed sensing system with 3D model-based agents

Shigeyuki Sakane; Hiroaki Okoshi; Tomomasa Sato; Masayoshi Kakikura

In recent years, demand for advanced sensing in robot tasks using multiple sensing modules has increased. A traditional approach to constructing such a sensing system is to organize the modules in a single, centrally controlled system. However, this approach has various disadvantages with respect to flexibility, robustness, extensibility, and efficiency in achieving tasks. To overcome these difficulties, decentralized and cooperative control of multiple robots is a promising approach. The authors propose a decentralized and cooperative sensing system for robot vision tasks and present a prototype of such a visual sensing system implemented on a multirobot system. It performs visual inspection of water leakage on the sleeve of a valve. In this task, active and cooperative control of cameras and light sources plays a very important role. The authors extend the system with 3-D model-based agents, which utilize 3-D model information about the task environment for distributed sensor planning.

Collaboration


Dive into Shigeyuki Sakane's collaborations.

Top Co-Authors

Nobuyuki Kita

National Institute of Advanced Industrial Science and Technology

Masaru Ishii

Massachusetts Institute of Technology

Sebastien Rougeaux

Australian National University

Yoshio Mikami

Tokyo Metropolitan Technical College
