Idaku Ishii
University of Tokyo
Publication
Featured research published by Idaku Ishii.
international conference on robotics and automation | 1996
Idaku Ishii; Yoshihiro Nakabo; Masatoshi Ishikawa
Most conventional visual feedback systems using a CCD camera are restricted to video rates and therefore cannot adapt to a changing environment sufficiently quickly. To solve this problem we developed a 1 ms visual feedback system using a general-purpose massively parallel vision system in which photo-detectors and processing elements are directly connected. High-speed visual feedback also requires fast image processing algorithms. In particular, because of the high frame rate, the difference between successive frames in our system is very small. Exploiting this property, several image processing techniques can be realized with simpler algorithms. In this paper we propose a simple target-tracking algorithm that exploits this feature of high-speed vision, and we demonstrate target tracking on the 1 ms visual feedback system.
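The key idea of this abstract, that at a 1 ms frame rate the target moves only a few pixels between frames, so segmentation can be confined to a small window around the previous position, can be sketched as follows. This is a hypothetical NumPy illustration, not the paper's implementation; the function and parameter names are ours.

```python
import numpy as np

def track_centroid(frame, prev_cx, prev_cy, radius=3, threshold=128):
    """Locate the target centroid, searching only a small window around the
    previous centroid: at a 1 kHz frame rate, inter-frame motion is at most
    a few pixels, so a tiny search window suffices."""
    h, w = frame.shape
    y0, y1 = max(0, prev_cy - radius), min(h, prev_cy + radius + 1)
    x0, x1 = max(0, prev_cx - radius), min(w, prev_cx + radius + 1)
    ys, xs = np.nonzero(frame[y0:y1, x0:x1] > threshold)
    if len(xs) == 0:
        return prev_cx, prev_cy  # target lost; keep the last estimate
    return int(round(x0 + xs.mean())), int(round(y0 + ys.mean()))
```

Because the window is a few pixels wide regardless of image size, the per-frame cost stays constant, which is what makes a 1 ms feedback loop plausible.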
international workshop on computer architecture for machine perception | 1997
Takashi Komuro; Idaku Ishii; Masatoshi Ishikawa
This paper describes our proposed vision chip architecture for high-speed vision systems. The chip has general-purpose processing elements (PEs) in a massively parallel architecture, with each PE directly connected to a photo-detector. Control programs allow various visual processing applications and algorithms to be implemented. A sampling rate of 1 ms is sufficient to realize high-speed visual feedback for robot control. To integrate as many PEs as possible on a single chip, a compact design is required, so we aim at a very simple architecture. A sample design has been implemented on an FPGA; a full-custom chip has also been designed and submitted for fabrication.
international conference on robotics and automation | 1999
Akio Namiki; Yoshihiro Nakabo; Idaku Ishii; Masatoshi Ishikawa
In most conventional manipulation systems, changes in the environment cannot be observed in real time because the vision sensor is too slow. As a result, the system cannot cope with dynamic changes or sudden accidents. To solve this problem we have developed a grasping system using high-speed visual and force feedback: a multi-fingered hand-arm with a hierarchical parallel processing system and a high-speed vision system called SPE-256. The most important feature of the system is its ability to process sensory feedback at high speed, in about 1 ms. Using an algorithm with parallel sensory feedback on this system, grasping with high responsiveness and adaptivity to dynamic changes in the environment is realized.
international conference on robotics and automation | 1999
Idaku Ishii; Masatoshi Ishikawa
In high-speed real-time vision systems such as vision chips, whose frame rate is much higher than video rates (NTSC 30 Hz/PAL 25 Hz), the change in the image between frames is small, so many conventional image processing techniques can be realized with simplified algorithms. We propose segmentation/correspondence algorithms, called self-windowing, that are simplified by exploiting this property. We show that the self-windowing algorithms work on our vision chip, and we present experimental results for several image sequences at high frame rates.
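A minimal sketch of the self-windowing idea: segment the new frame only inside a one-pixel dilation of the previous target region, which is valid when inter-frame motion is under one pixel at high frame rates. The NumPy code below is our own illustration under that assumption, not the published chip algorithm.

```python
import numpy as np

def dilate(mask):
    """4-neighborhood dilation of a binary mask (one-pixel growth)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def track_step(prev_mask, frame, threshold=128):
    """One self-window step: the segmentation of the new frame is restricted
    to a one-pixel dilation of the previous target region, so each target
    keeps its identity without any explicit correspondence search."""
    window = dilate(prev_mask)
    return window & (frame > threshold)

def centroid(mask):
    """Centroid (x, y) of a binary region."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()
```

Because every operation is a local binary update, the loop maps naturally onto a pixel-parallel PE array of the kind these papers describe.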
Systems and Computers in Japan | 2001
Idaku Ishii; Masatoshi Ishikawa
With high-speed vision systems such as vision chips, which support frame rates much higher than video signal rates (NTSC 30 Hz), image variation from frame to frame is small. Using this feature of high-speed vision, various image processing algorithms can be simplified. In particular, this paper proposes the self-windowing algorithm for image segmentation and matching. The operability of the proposed algorithm was validated experimentally on the S3PE vision chip architecture.
Proceedings Fifth IEEE International Workshop on Computer Architectures for Machine Perception | 2000
Idaku Ishii; Takashi Komuro; Masatoshi Ishikawa
A digital vision chip is a one-chip vision system that can perform operations much faster than the video rate (NTSC 30 Hz/PAL 25 Hz). The digital vision chip has a compact design suited to integration and can execute several algorithms at high speed for robot control. To realize high-speed image processing on digital vision chips, it is important to develop image processing algorithms that fit the processing architecture. While most research on vision chips has concerned image-to-image processing, there has been little research on image-to-scalar feature extraction, which is indispensable for visual feedback control. In this paper, we propose bit-plane feature decomposition (BPFD) as a method of image-to-scalar feature extraction in digital vision chip systems. We show the effectiveness of the proposed method by evaluating its application in a digital vision chip system.
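Bit-plane decomposition of moment features rests on a simple identity: any moment of an 8-bit image equals a bit-significance-weighted sum of the same moment taken over its binary bit planes, and each binary-plane moment is cheap on a bit-serial PE array. The NumPy sketch below is our own illustration of that identity, not the chip implementation described in the paper.

```python
import numpy as np

def bitplane_moment(img, p, q, bits=8):
    """Image moment m_pq of an 8-bit image via bit-plane decomposition:
    m_pq = sum_b 2^b * m_pq(plane_b), where plane_b is the binary image of
    bit b. Each per-plane moment involves only 0/1 pixels."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = 0
    for b in range(bits):
        plane = (img >> b) & 1  # binary bit plane b
        total += (1 << b) * np.sum(plane * (xs ** p) * (ys ** q))
    return total

def direct_moment(img, p, q):
    """Reference: the same moment computed directly on grayscale values."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.sum(img.astype(np.int64) * (xs ** p) * (ys ** q))
```

With m00, m10, and m01 available as scalars, the target centroid (m10/m00, m01/m00) can be fed to a controller, which is exactly the image-to-scalar step the abstract identifies as the missing piece for visual feedback.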
Systems and Computers in Japan | 1999
Takashi Owaki; Yoshihiro Nakabo; Akio Namiki; Idaku Ishii; Masatoshi Ishikawa
A system was developed that supports virtually touching objects in a real environment by real-time transformation of visual data into haptic data. With the proposed system, local visual data of real objects are acquired using active vision; the visual data are then subjected to high-speed parallel image processing and transformed into virtual reactive forces from the objects. A virtual tactile sensation is generated by exerting a reactive force on the operator's fingertip with a force display device. By using an original high-speed image processing system, the loop from visual data sensing through haptic display is operated at about 200 Hz. The paper presents the basic concept of modality transformation, the system configuration, and the results of experiments with objects of various shapes carried out to verify the system's operation.
Advanced Robotics | 1997
Takashi Komuro; Idaku Ishii; Masatoshi Ishikawa
To solve the I/O bottleneck problem in existing vision systems and to realize versatile processing adaptive to various and changing environments, we propose a new vision chip architecture for applications such as robot vision. The chip has general-purpose processing elements (PEs) with each PE being directly connected to a photo detector (PD) and can implement various visual processing algorithms. We developed and simulated some sample programs for the chip and proved that they can be processed within 1 ms/frame, a rate that is high enough for high-speed visual feedback for robot control. Aiming to complete the chip, we are now developing test chips based on the architecture. The latest design has 8 x 8 PEs and PDs in an area 3.3 mm x 3.0 mm using a 0.8 μm CMOS process.
Systems and Computers in Japan | 2003
Idaku Ishii; Takashi Komuro; Masatoshi Ishikawa
Recently, an interesting general-purpose digital vision chip has been realized by integrating, for each pixel, a directly connected photodetector (PD) and processing element (PE) on a single chip. This paper proposes bit-plane (BP) feature decomposition as a realization of feature calculation suited to the massively parallel processing structure integrated on the digital vision chip. Moment calculation based on the method is discussed. An evaluation of the digital vision chip using the proposed calculation method is presented, and its effectiveness is demonstrated.
2000 International Topical Meeting on Optics in Computing (OC2000) | 2000
Daisuke Kawamata; Makoto Naruse; Idaku Ishii; Masatoshi Ishikawa
The extraordinary increase in digital image content requires methods to classify, or quickly search for, video sequences in movie databases. In this paper, we suggest a novel algorithm based on eigenvalue decomposition to construct and search video databases, one that can be implemented using smart-pixel optoelectronic systems. By successive iterations, the images are progressively classified into sets and subsets in a tree-type configuration. To evaluate this method, a movie containing 2,262 frames was analyzed, and a successful classification of these images according to their contents was obtained. The realization of this algorithm on a smart-pixel system called OCULAR-II is also discussed, and a demonstration of the database search algorithm on the OCULAR-II system is described.
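The abstract does not detail the tree construction; one plausible reading of an eigenvalue-based tree classification is a recursive principal-component split, in which frame vectors are divided by the sign of their projection onto the leading eigenvector of each subset's covariance. The sketch below is our own assumption-laden illustration, not the OCULAR-II algorithm; all names are ours.

```python
import numpy as np

def build_tree(X, ids, max_depth=3, min_size=2):
    """Recursively split the rows of X (flattened frames) by the sign of
    their centered projection onto the leading principal eigenvector,
    yielding a binary search tree over the frame set."""
    if max_depth == 0 or len(ids) <= min_size:
        return {"leaf": list(ids)}
    Xc = X[ids] - X[ids].mean(axis=0)
    # leading eigenvector of the covariance, via SVD of the centered data
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ vt[0]
    left, right = ids[proj < 0], ids[proj >= 0]
    if len(left) == 0 or len(right) == 0:
        return {"leaf": list(ids)}
    return {"axis": vt[0], "mean": X[ids].mean(axis=0),
            "left": build_tree(X, left, max_depth - 1, min_size),
            "right": build_tree(X, right, max_depth - 1, min_size)}

def search(tree, x):
    """Descend to the leaf whose frames lie on the same side of every split."""
    while "leaf" not in tree:
        side = "left" if (x - tree["mean"]) @ tree["axis"] < 0 else "right"
        tree = tree[side]
    return tree["leaf"]
```

Each query touches only one root-to-leaf path, so search cost grows with tree depth rather than with the number of stored frames.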