
Publication


Featured research published by Nobuyuki Kita.


IFAC Proceedings Volumes | 2004

Real-time 3D SLAM with wide-angle vision

Andrew J. Davison; Yolanda González Cid; Nobuyuki Kita

The performance of single-camera SLAM is improved when wide-angle optics provide a field of view greater than the 40 to 50 degrees of the lenses normally used in computer vision. The issue is one of feature contact: each landmark object mapped remains visible through a larger range of camera motion, meaning that feature density can be reduced and camera movement range can be increased. Further, localisation stability is improved, since features at widely differing viewing angles are simultaneously visible. We present the first real-time (30 frames per second), fully automatic implementation of 3D SLAM using a hand-waved wide-angle camera, and demonstrate significant advances over previous narrow field-of-view implementations in the range and agility of motions which can be tracked.
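The field-of-view argument can be made concrete with a back-of-the-envelope comparison (the lens and sensor numbers below are hypothetical, not those of the paper): a perspective lens images at r = f·tan(θ) and runs out of sensor quickly, while an equidistant wide-angle lens images at r = f·θ and keeps features at large viewing angles on the sensor.

```python
import math

def pinhole_fov_deg(sensor_half_width, focal_length):
    """Full field of view of a perspective (r = f*tan(theta)) lens,
    limited by the sensor edge. Units: any, as long as consistent."""
    return 2 * math.degrees(math.atan(sensor_half_width / focal_length))

def equidistant_fov_deg(sensor_half_width, focal_length):
    """Full field of view of an equidistant wide-angle (r = f*theta) lens."""
    return 2 * math.degrees(sensor_half_width / focal_length)

# Hypothetical numbers: an 8 mm-wide sensor (4 mm half-width).
narrow = pinhole_fov_deg(4.0, 8.0)    # ~53 degrees: the "normal" lens regime
wide = equidistant_fov_deg(4.0, 4.0)  # ~115 degrees: wide-angle regime
```

A landmark 50 degrees off the optical axis falls off the sensor entirely in the first case but stays trackable in the second, which is why feature density can be reduced.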


asian conference on computer vision | 1995

Active Stereo Vision System with Foveated Wide Angle Lenses

Yasuo Kuniyoshi; Nobuyuki Kita; Sebastien Rougeaux; Takashi Suehiro

A novel active stereo vision system with a pair of foveated wide-angle lenses is presented. The projection curve of the lens is designed so that it facilitates active vision algorithms for motion analysis, object identification, and precise fixation. A pair of such lenses are mounted on a specially designed active stereo vision platform. The platform is compact and light, so it can be mounted on a mobile robot or a manipulator. A real-time stereo tracking system is constructed using the platform, a dual-processor servo controller, and pipelined image processors with a multi-processor backend.
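A projection curve of the kind described, fine resolution in a central fovea with compressed resolution in the periphery, might be sketched as follows (the specific curve and the 10-degree foveal half-angle are illustrative assumptions, not the lens actually designed in the paper):

```python
import math

FOVEA = math.radians(10)  # assumed foveal half-angle (illustrative)

def foveated_radius(theta, f=1.0):
    """Illustrative foveated projection curve: perspective (fine) imaging
    inside the fovea, logarithmically compressed imaging in the periphery."""
    if theta <= FOVEA:
        return f * math.tan(theta)
    # Continuous continuation beyond the fovea; image radius grows
    # only as log(theta), so a wide field fits on a small sensor.
    return f * math.tan(FOVEA) + f * math.log(theta / FOVEA)
```

Near the optical axis this behaves like an ordinary perspective lens, preserving resolution for precise fixation, while peripheral directions are squeezed onto the remaining sensor area for coarse motion detection.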


international conference on robotics and automation | 2009

Clothes state recognition using 3D observed data

Yasuyo Kita; Toshio Ueshiba; Ee Sian Neo; Nobuyuki Kita

In this paper, we propose a deformable-model-driven method to recognize the state of hanging clothes using three-dimensional (3D) observed data. For the task of picking up a specific part of the clothes, it is indispensable to obtain the 3D position and posture of that part. In order to obtain such information robustly from 3D observed data of the clothes, we take a deformable-model-driven approach [4], which recognizes the clothes state by comparing the observed data with candidate shapes predicted in advance. To carry out this approach despite the large shape variation of the clothes, we propose a two-staged method. First, a small number of representative 3D shapes are calculated through physical simulations of hanging the clothes. Then, after observing the clothes, each representative shape is deformed so as to fit the observed 3D data better. The consistency between the adjusted shapes and the observed data is checked to select the correct state. Experimental results using actual observations have shown the good prospects of the proposed method.
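The selection step can be illustrated with a minimal sketch: score each candidate shape by the mean distance from the observed 3D points to their nearest candidate points, and keep the best-scoring candidate. The scoring function below is an illustrative stand-in for the paper's consistency check, not the authors' implementation.

```python
import numpy as np

def fit_score(candidate, observed):
    """Mean distance from each observed 3D point to its nearest point
    on a candidate shape (lower = more consistent with the observation)."""
    d = np.linalg.norm(observed[:, None, :] - candidate[None, :, :], axis=2)
    return d.min(axis=1).mean()

def select_state(candidates, observed):
    """Return the index of the candidate shape that best explains the
    observed 3D data, plus all scores (one score per candidate)."""
    scores = [fit_score(c, observed) for c in candidates]
    return int(np.argmin(scores)), scores
```

In the two-staged method, each candidate would first be deformed to fit the data before scoring; the sketch shows only the final comparison.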


international conference on robotics and automation | 2004

A deformable model driven visual method for handling clothes

Yasuyo Kita; Fuminori Saito; Nobuyuki Kita

In this paper, we propose a deformable-model-driven method to obtain, from observation with stereo cameras, the 3D information necessary for handling clothes with manipulators. The task considered in this paper is to hold up a target part of the clothes (e.g. one shoulder of a pullover) with a second manipulator while the clothes are held in the air at an arbitrary point by a first manipulator. First, the method calculates possible 3D shapes of the hanging clothes by simulating the clothes' deformation. The 3D shape whose appearance gives the best fit with the observed images is selected as the estimate of the current state. Then, based on the estimated shape, the 3D position and normal direction of the part which the second manipulator should hold are calculated. The results of preliminary experiments using two actual manipulators have shown the good potential of the proposed method.


intelligent robots and systems | 1995

Strategy for unfolding a fabric piece by cooperative sensing of touch and vision

Eiichi Ono; Nobuyuki Kita; Shigeyuki Sakane

A hand/eye system for handling flexible materials is under development. Our concern is to increase the effectiveness of cooperative sensing of touch and vision, which becomes especially important when flexible or limp objects such as fabrics are handled. This paper presents a strategy of sensor-based manipulation for unfolding a fabric piece using primitive fabric-handling movements. Vision and tactile sensing are used for picking up the folded part.


Robotics and Autonomous Systems | 2001

Sequential localisation and map-building for real-time computer vision and robotics

Andrew J. Davison; Nobuyuki Kita

Reviewing the important problem of simultaneous localisation and map-building, we emphasise its genericity and in particular draw parallels between the often divided fields of computer vision and robot navigation. We compare sequential techniques with the batch methodologies currently prevalent in computer vision, and explain the additional challenges presented by real-time constraints: there is still much work to be done in the sequential case, and solving it will lead to impressive and useful applications. In a detailed tutorial on map-building using first-order error propagation, particular attention is drawn to the roles of modelling and an active methodology. Finally, recognising the critical role of software in tackling a generic problem such as this, we announce the distribution of a proven and carefully designed open-source software framework intended for use in a wide range of robot and vision applications.
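First-order error propagation is the covariance-update step at the heart of sequential map-building: the state mean is pushed through the motion model and the covariance through its Jacobian, P → F P Fᵀ + Q. A minimal linear example (a hypothetical 1D constant-velocity model, not the framework announced in the paper):

```python
import numpy as np

dt = 1.0
# Hypothetical 1D constant-velocity state [position, velocity].
F = np.array([[1.0, dt],
              [0.0, 1.0]])    # Jacobian of the (here linear) motion model
Q = np.diag([0.01, 0.04])    # assumed process-noise covariance

def propagate(x, P):
    """First-order error propagation: mean through the motion model,
    covariance as F P F^T + Q."""
    return F @ x, F @ P @ F.T + Q

x, P = np.array([0.0, 1.0]), np.eye(2) * 0.1
x, P = propagate(x, P)       # position advances; uncertainty grows
```

The same two-line update underlies EKF-style SLAM, with the map features stacked into the state vector and F the sparse Jacobian of the full system.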


workshop on applications of computer vision | 2002

A model-driven method of estimating the state of clothes for manipulating it

Yasuyo Kita; Nobuyuki Kita

Aiming at manipulating clothes, a model-driven method of estimating the state of hanging clothes is proposed. We suppose a system consisting of two manipulators and a camera. The task considered in this paper is to hold a pullover at its two shoulders with the two manipulators, as a first step towards folding it. The proposed method estimates the state of the clothes held by one manipulator in a model-driven way and indicates the position to be held next by the other manipulator. First, the possible appearances of the pullover when held at one point are roughly predicted. Using discriminative features of the predicted appearances, the possible states for the observed appearance are selected. The appearance of each possible state is then partially deformed so as to approach the observed appearance. The state whose appearance approaches the observed appearance most closely is selected as the final decision. The point to be held next is determined according to this state. The results of preliminary experiments using actual images have shown the good potential of the proposed method.


international conference on pattern recognition | 2004

A deformable model driven method for handling clothes

Yasuyo Kita; Fuminori Saito; Nobuyuki Kita

A model-driven method for handling clothes with two manipulators, based on observation with stereo cameras, is proposed. The task considered in this paper is to hold up a specific part of the clothes (e.g. one shoulder of a pullover) with a second manipulator while the clothes are held in the air by a first manipulator. First, the method calculates possible 3D shapes of the hanging clothes by simulating the clothes' deformation. The 3D shape whose appearance gives the best fit with the observed appearance is selected as the estimate of the current state. Then, based on the estimated shape, the 3D position and normal direction of the part which the second manipulator should hold are calculated. Experiments using two actual manipulators have shown the good potential of the proposed method.


ieee-ras international conference on humanoid robots | 2011

Clothes handling based on recognition by strategic observation

Yasuyo Kita; Fumio Kanehiro; Toshio Ueshiba; Nobuyuki Kita

In this paper, we propose a method to recognize clothing shape based on strategic observation during handling. When a robot handles largely deformed objects such as clothes, it is important for the robot to recognize their constantly varying shape. Large variation in shape and complex self-occlusion, however, make recognition very difficult. To address these difficulties, we have proposed a model-driven strategy using actions for informative observation and have developed some core methods based on this strategy [1][2][3]. In this paper, we show how these core methods can be used for an actual task that involves handling an item of clothing. In addition to proposing a sequence for this task, basic functions for realizing the sequence are also described. Experimental results using a robot demonstrated the practical utility of the proposed strategy.


intelligent robots and systems | 2000

Active visual localisation for cooperating inspection robots

Andrew J. Davison; Nobuyuki Kita

In the routine inspection of industrial or other areas, teams of robots with various sensors could operate together to great effect, but require reliable, accurate and flexible localisation capabilities to be able to move around safely. We demonstrate accurate localisation for an inspection team consisting of a robot with stereo active vision and its companion with an active lighting system, and show that in this case a single sensor can be used for measuring the position of known or unknown scene features, measuring the relative location of the two robots, and actually carrying out an inspection task.

Collaboration

Top Co-Authors

Yasuyo Kita | National Institute of Advanced Industrial Science and Technology
Sebastien Rougeaux | Australian National University
Fumio Kanehiro | National Institute of Advanced Industrial Science and Technology
Toshio Ueshiba | National Institute of Advanced Industrial Science and Technology
Ee Sian Neo | National Institute of Advanced Industrial Science and Technology
Kazuhito Yokoi | National Institute of Advanced Industrial Science and Technology