Publication


Featured research published by Takeshi Takaki.


international conference on robotics and automation | 2010

2000 fps real-time vision system with high-frame-rate video recording

Idaku Ishii; Tetsuro Tatebe; Qingyi Gu; Yuta Moriue; Takeshi Takaki; Kenji Tajima

This paper introduces a high-speed vision system called IDP Express, which can execute real-time image processing and high-frame-rate video recording simultaneously. In IDP Express, a dedicated FPGA (Field Programmable Gate Array) board processes 512 × 512 pixel images from two camera heads by implementing image processing algorithms as hardware logic; the input images and processed results are transferred to standard PC memory at a rate of 2000 fps or more. Owing to this simultaneous high-frame-rate video processing and recording, IDP Express can be used as an intelligent video logger for long-term analysis of high-speed phenomena, even when the measured objects move quickly over a wide area. We applied IDP Express to a mechanical target tracking system that records high-frame-rate, high-resolution video of a crucial moment, magnified by tracking the measured objects under visual feedback control. Several experiments on moving objects undergoing sudden shape deformation were performed; the results for the explosion of a rotating balloon and the crash of falling custard pudding verify the effectiveness of IDP Express.
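
As a rough sanity check on the stated transfer rate, the raw pixel bandwidth for this configuration can be estimated as follows. This is a back-of-the-envelope sketch assuming 8-bit monochrome pixels, which the abstract does not specify:

```python
# Back-of-the-envelope bandwidth estimate for an IDP Express-style setup.
# Assumption (not stated in the abstract): 8-bit monochrome pixels.
width, height = 512, 512        # pixels per frame
bytes_per_pixel = 1             # assumed 8-bit grayscale
fps = 2000                      # frames per second per camera head
cameras = 2                     # two camera heads

bytes_per_frame = width * height * bytes_per_pixel
throughput = bytes_per_frame * fps * cameras  # bytes per second

print(f"per-frame size  : {bytes_per_frame / 1024:.0f} KiB")   # 256 KiB
print(f"total throughput: {throughput / 1e9:.2f} GB/s")        # ~1.05 GB/s
# This ~1 GB/s of raw pixel data must be sustained into PC memory
# alongside the processed results that the FPGA also transfers.
```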


IEEE Transactions on Circuits and Systems for Video Technology | 2012

High-Frame-Rate Optical Flow System

Idaku Ishii; Taku Taniguchi; Kenkichi Yamamoto; Takeshi Takaki

In this paper, we develop a high-frame-rate (HFR) vision system that can estimate optical flow in real time at 1000 f/s for 1024×1024 pixel images via the hardware implementation of an improved optical flow detection algorithm on a high-speed vision platform. Based on the Lucas-Kanade method, we adopt an improved gradient-based algorithm that adaptively selects a pseudo-variable frame rate according to the amplitude of the estimated optical flow, so that the flow can be detected accurately for objects moving at both high and low speeds in the same image. The performance of our HFR optical flow system was verified through experimental results for high-speed movements such as a top's spinning motion and a human's pitching motion.
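
A minimal sketch of the adaptive idea follows. This is my own software simplification, not the paper's hardware implementation: gradient-based flow is estimated against a reference frame taken further back in the buffer whenever the per-frame motion is too small to measure reliably, so fast and slow objects are both resolved. The window size and thresholds are illustrative.

```python
import numpy as np

def lk_flow_at(prev, curr, y, x, win=7):
    """Lucas-Kanade flow for one (win x win) window (pure-NumPy sketch)."""
    h = win // 2
    p = prev[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    c = curr[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Iy, Ix = np.gradient(c)          # spatial gradients
    It = c - p                       # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)   # solve A v = -It
    return flow                      # (vx, vy) in pixels per frame step

def adaptive_flow(frames, t, y, x, max_skip=8, min_disp=0.5):
    """Pseudo-variable frame rate: widen the frame interval until the
    measured displacement is large enough to be reliable, then normalize
    back to pixels/frame. Thresholds are illustrative, not the paper's."""
    for skip in range(1, max_skip + 1):
        v = lk_flow_at(frames[t - skip], frames[t], y, x)
        if np.hypot(*v) >= min_disp:
            return v / skip
    return v / max_skip
```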


IEEE Transactions on Circuits and Systems for Video Technology | 2013

Fast FPGA-Based Multiobject Feature Extraction

Qingyi Gu; Takeshi Takaki; Idaku Ishii

This paper describes a high-frame-rate (HFR) vision system that can extract the locations and features of multiple objects in an image at 2000 f/s for 512 × 512 images by implementing a cell-based multiobject feature extraction algorithm as hardware logic on a field-programmable gate array (FPGA)-based high-speed vision platform. In the hardware implementation of the algorithm, 25 higher-order local autocorrelation (HLAC) features of 1024 objects in an image can be extracted simultaneously for multiobject recognition by dividing the image into 8 × 8 cells, concurrently with calculation of the zeroth- and first-order moments to obtain the sizes and locations of the objects. Our HFR multiobject extraction system was verified by performing several experiments: tracking of multiple objects rotating at 16 r/s, recognition of multiple patterns projected at 1000 f/s, and recognition of human gestures with quick finger motion.
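
A simplified software analogue of the cell-based moment computation is sketched below. The paper implements this as FPGA hardware logic; this NumPy version covers only the zeroth- and first-order moments per cell (object size and location), not the 25 HLAC features:

```python
import numpy as np

def cell_moments(binary, cell=8):
    """Zeroth- and first-order moments per (cell x cell) block.

    binary : 2D 0/1 array, e.g. a thresholded 512 x 512 image.
    Returns per-cell area (m00) and centroid (cx, cy)."""
    H, W = binary.shape
    ys, xs = np.mgrid[0:H, 0:W]

    def block_sum(a):
        # Sum each (cell x cell) block in one reshape, no Python loops.
        return a.reshape(H // cell, cell, W // cell, cell).sum(axis=(1, 3))

    m00 = block_sum(binary)
    m10 = block_sum(binary * xs)
    m01 = block_sum(binary * ys)
    cx = np.where(m00 > 0, m10 / np.maximum(m00, 1), np.nan)
    cy = np.where(m00 > 0, m01 / np.maximum(m00, 1), np.nan)
    return m00, cx, cy

img = (np.random.rand(512, 512) > 0.99).astype(np.uint8)
area, cx, cy = cell_moments(img)     # 64 x 64 grid of per-cell statistics
```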


international conference on robotics and automation | 2010

Force visualization mechanism using a Moiré fringe applied to endoscopic surgical instruments

Takeshi Takaki; Youhei Omasa; Idaku Ishii; Tomohiro Kawahara; Masazumi Okajima

This paper presents a force visualization mechanism for endoscopic surgical instruments using a Moiré fringe. The mechanism can display fringes or characters corresponding to the magnitude of the force between the surgical instruments and internal organs without the use of electronic elements such as amplifiers and strain gauges. Because the mechanism is simply attached to the surgical instruments, there is no need for additional devices in the operating room or for wires to connect such devices. The structure is simple, and its fabrication is inexpensive. An example is shown with the mechanism mounted on 10-mm forceps. We experimentally verified in vivo, using a pig, that the mechanism can display characters corresponding to the magnitude of the force, thus visualizing the force even in the endoscopic image.
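
The displacement amplification behind such a mechanism can be sketched numerically. This is the generic Moiré-fringe relation, with illustrative grating pitches that are not taken from the paper: overlaying two gratings of pitch p1 and p2 produces beat fringes of period p1*p2/|p1 - p2|, so a small relative displacement is magnified by roughly p2/|p1 - p2|.

```python
# Generic Moire-fringe magnification (illustrative values, not the paper's).
p1 = 0.100  # mm, pitch of the fixed grating (assumed)
p2 = 0.105  # mm, pitch of the grating that shifts under load (assumed)

fringe_period = p1 * p2 / abs(p1 - p2)  # mm, beat period of the overlay
magnification = p2 / abs(p1 - p2)       # fringe shift per unit displacement

print(f"fringe period : {fringe_period:.2f} mm")  # 2.10 mm
print(f"magnification : {magnification:.0f}x")    # 21x
# A 10 micrometer elastic deflection of the instrument would shift the
# fringes by ~0.2 mm, large enough to read directly in the endoscope view.
```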


IEEE Transactions on Automation Science and Engineering | 2015

Simultaneous Vision-Based Shape and Motion Analysis of Cells Fast-Flowing in a Microchannel

Qingyi Gu; Tadayoshi Aoyama; Takeshi Takaki; Idaku Ishii

This paper proposes a novel concept for simultaneous cell shape and motion analysis in fast microchannel flows, realized by implementing a multiobject feature extraction algorithm on a frame-straddling high-speed vision platform. The system synchronizes two camera inputs sharing the same view with a tiny time delay, adjustable on the sub-microsecond timescale. Real-time video processing is performed in hardware logic by extracting the moment features of multiple cells in 512 × 256 images at 4000 fps for the two camera inputs, whose frame-straddling time can be adjusted from 0 to 0.25 ms in 9.9 ns steps. By setting the frame-straddling time in a range that avoids large image displacements between the two camera inputs, the platform can perform simultaneous shape and motion analysis of cells in fast microchannel flows of 1 m/s or greater. The results of real-time experiments conducted to analyze the deformabilities and velocities of sea urchin egg cells fast-flowing in microchannels verify the efficacy of our vision-based cell analysis system.
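
The core measurement idea can be sketched as follows. This is my simplification: velocity from the centroid displacement across the straddling interval, and deformability from a moment-based ellipse fit; dt, the pixel pitch, and the centroids are illustrative values.

```python
import numpy as np

def deformability(binary):
    """Aspect ratio (major/minor axis) from second-order image moments."""
    ys, xs = np.nonzero(binary)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    root = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    major = np.sqrt(2 * (mu20 + mu02 + root))
    minor = np.sqrt(2 * (mu20 + mu02 - root))
    return major / minor            # 1.0 for a circle, > 1 when deformed

# Velocity from frame straddling: the same cell is imaged by two
# synchronized cameras separated by a known small delay dt.
dt = 50e-6            # s, straddling time (a value in the 0-0.25 ms range)
pixel_pitch = 1e-6    # m per pixel in the object plane (assumed optics)
cx0, cx1 = 100.0, 150.0   # x-centroids from the two straddled images (px)
v = (cx1 - cx0) * pixel_pitch / dt
print(f"cell velocity: {v:.2f} m/s")   # -> 1.00 m/s
```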


international conference on robotics and automation | 2014

Simultaneous projection mapping using high-frame-rate depth vision

Jun Chen; Takashi Yamamoto; Tadayoshi Aoyama; Takeshi Takaki; Idaku Ishii

In this paper, we report on the development of a projection mapping system that can project RGB light patterns enhanced for three-dimensional (3-D) scenes using a GPU-based high-frame-rate (HFR) vision system synchronized with HFR projectors. Our system can acquire 512×512 depth images in real time at 500 fps. The depth image processing is accelerated by a GPU board that performs parallel processing of a gray-code structured light method with infrared (IR) light patterns projected from an IR projector. Using the computed depth images, suitable RGB light patterns are generated in real time for enhanced application tasks and projected from an RGB projector as augmented information onto the 3-D scene with pixel-wise correspondence, even when the scene is time-varying. Experimental results for enhanced application tasks on time-varying 3-D scenes, such as (1) depth-based color mapping and (2) an augmented reality (AR) spirit level, confirm the efficacy of our system.
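
A minimal sketch of the gray-code structured-light step is shown below. Gray coding is the standard technique named in the abstract, but the bit count, resolution, and the perfect-capture simulation are illustrative, and the GPU parallelization is omitted:

```python
import numpy as np

def gray_code_patterns(width=512, bits=9):
    """Stripe patterns encoding each projector column: Gray-code bit planes."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                  # binary -> Gray code
    return [((gray >> b) & 1).astype(np.uint8)
            for b in range(bits - 1, -1, -1)]  # MSB first

def decode_gray(bit_images):
    """Recover the projector column index from captured bit planes."""
    binary = bit_images[0].astype(np.int64)    # b0 = g0
    code = binary.copy()
    for g in bit_images[1:]:                   # b_i = b_{i-1} XOR g_i
        binary = binary ^ g
        code = (code << 1) | binary
    return code                                # per-pixel column index

planes = gray_code_patterns()
# Simulate a perfect capture: each camera pixel sees its column's stripes.
captured = [np.tile(p, (4, 1)) for p in planes]   # toy 4-row "camera"
assert (decode_gray(captured) == np.arange(512)).all()
```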


Journal of Electronic Imaging | 2012

Color-histogram-based tracking at 2000 fps

Idaku Ishii; Tetsuro Tatebe; Qingyi Gu; Takeshi Takaki

A high-speed vision system can be applied to color-histogram-based tracking at 2000 fps by hardware-implementing an improved CamShift algorithm. In the improved CamShift algorithm, the size, position, and orientation of a color-patterned object to be tracked in an image can be extracted simultaneously using only the hardware implementation of a color-histogram circuit module that calculates moment features of binary images quantized into 16 hue-based color bins. By implementing the color-histogram circuit modules on a high-speed vision platform, IDP Express, the improved CamShift algorithm enables color-histogram-based tracking at 2000 fps for 512×512 pixel images in real time. By installing our tracking system on a two-axis mechanical active vision system, we demonstrate the effectiveness of 2000 fps color-histogram-based tracking in several experiments in which color-patterned objects remain tracked in the camera view even when they move rapidly against complicated backgrounds.
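
A software sketch of the per-bin moment computation described above follows: the hue plane is quantized into 16 bins, and for each bin's binary image the area, centroid, and orientation are computed from moments. This is my NumPy simplification of what the paper does in FPGA logic; the angle formula is the standard second-order-moment orientation.

```python
import numpy as np

def hue_bin_moments(hue, bins=16):
    """Size, position, and orientation per quantized hue bin.

    hue : 2D array of hue values in [0, 180), OpenCV-style range."""
    H, W = hue.shape
    ys, xs = np.mgrid[0:H, 0:W]
    labels = (hue.astype(int) * bins) // 180   # quantize into 16 bins
    results = []
    for b in range(bins):
        mask = labels == b                     # binary image for this bin
        m00 = int(mask.sum())
        if m00 == 0:
            results.append(None)
            continue
        cx, cy = xs[mask].mean(), ys[mask].mean()
        mu20 = ((xs[mask] - cx) ** 2).sum()
        mu02 = ((ys[mask] - cy) ** 2).sum()
        mu11 = ((xs[mask] - cx) * (ys[mask] - cy)).sum()
        theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # orientation, rad
        results.append((m00, (cx, cy), theta))
    return results
```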


IEEE Sensors Journal | 2013

Dynamics-Based Stereo Visual Inspection Using Multidimensional Modal Analysis

Hua Yang; Qingyi Gu; Tadayoshi Aoyama; Takeshi Takaki; Idaku Ishii

This paper proposes the concept of multidirectional-modal-parameter-based visual inspection with high-frame-rate (HFR) stereo video analysis as a novel active sensing methodology for determining the dynamic properties of a vibrating object. HFR stereo video is used to observe the 3-D vibration distribution of an object under unknown excitations in the audio-frequency range, and the projections of the vibration displacement vectors along multiple directions can be analyzed using output-only modal analysis to estimate modal parameters such as resonant frequencies and mode shapes. By implementing a fast output-only modal parameter estimation algorithm on a 10000-fps stereo vision platform, we developed a real-time multidirectional-modal-parameter-based visual inspection system; it can measure the 3-D vibration displacement vectors of 30 points on a beam-shaped object from 512 × 96 pixel stereo images at 10000 fps and can determine the object's resonant frequencies and mode shapes along 72 different directions around its beam axis as input-invariant modal parameters. To demonstrate the performance of our system, the asymmetric dynamic properties of several steel beams with artificial cracks, vibrating at dozens of hertz, were inspected in real time by determining the modal parameters along 72 directions around their beam axes.
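
A toy version of the output-only estimation follows: pick the resonant frequency from the averaged spectrum of the measured displacements and read the mode shape from the per-point spectral amplitudes at that peak. The paper's algorithm is more sophisticated; the beam signal here is synthetic and all parameter values are illustrative.

```python
import numpy as np

fs = 10000                    # Hz, matching the stereo vision frame rate
t = np.arange(4096) / fs
n_points = 30                 # measurement points along the beam

# Synthetic first bending mode: 42 Hz, half-sine shape, plus noise.
shape = np.sin(np.pi * np.arange(n_points) / (n_points - 1))
disp = (shape[:, None] * np.sin(2 * np.pi * 42.0 * t)
        + 0.05 * np.random.randn(n_points, t.size))

# Output-only estimate: averaged spectrum -> resonant frequency;
# per-point amplitude at that frequency -> mode shape.
spec = np.fft.rfft(disp, axis=1)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = (np.abs(spec) ** 2).mean(axis=0)
peak = power[1:].argmax() + 1          # skip the DC bin
mode = np.abs(spec[:, peak])
mode /= mode.max()                     # normalized mode shape

print(f"estimated resonant frequency: {freqs[peak]:.1f} Hz")  # ~42 Hz
```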


international conference on robotics and automation | 2009

High performance anthropomorphic robot hand with grasp force magnification mechanism

Takeshi Takaki; Toru Omata

This paper presents a lightweight 328 g anthropomorphic robot hand that can exert a large grasp force. We propose a mechanism combining a flexion-drive and a force-magnification-drive for a cable-driven multifingered robot hand. The flexion-drive, consisting of a feed screw, enables quick finger motion, and the force-magnification-drive, consisting of an eccentric cam, a bearing, and a pulley, enables a firm grasp. This paper also proposes a three-dimensional linkage for the thumb. The linkage consists of four links and is driven by a feed screw; it can oppose a large force exerted by the other fingers with the force-magnification-drive. These mechanisms are compact enough to be installed in the developed lightweight hand. We experimentally verify that the maximum fingertip force of the hand exceeds 20 N and that the thumb can hold a large force of 100 N. The time to fully close the hand using the flexion-drives is 0.47 s. After the fingers make contact with an object, the time to achieve a firm grasp using the force-magnification-drive is approximately 1 s.
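
One way to see how an eccentric cam trades stroke for tension is the toy moment-arm model below. The geometry and torque values are my assumptions for illustration only; the abstract gives no such parameters, and the actual mechanism combines the cam with a bearing and a pulley.

```python
import numpy as np

# Toy eccentric-cam force magnifier: the cam rotates about an offset axis,
# so the output cable's effective moment arm shrinks as the cam turns,
# trading stroke for tension. All values below are assumed.
r_cam = 10.0e-3   # m, cam radius (assumed)
e = 8.0e-3        # m, eccentricity of the rotation axis (assumed)
tau = 0.1         # N*m, motor torque at the cam (assumed)

for theta_deg in (90, 30, 10):
    theta = np.radians(theta_deg)
    arm = r_cam - e * np.cos(theta)   # effective moment arm in this model
    print(f"theta={theta_deg:3d} deg: arm={arm * 1e3:4.1f} mm, "
          f"tension={tau / arm:5.1f} N")
# As theta -> 0 the arm approaches r_cam - e = 2 mm, so the same motor
# torque yields ~5x the cable tension available at theta = 90 deg.
```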


intelligent robots and systems | 2013

Real-time feature-based video mosaicing at 500 fps

Ken-ichi Okumura; Sushil Raut; Qingyi Gu; Tadayoshi Aoyama; Takeshi Takaki; Idaku Ishii

We conducted high-frame-rate (HFR) video mosaicing for real-time synthesis of a panoramic image by implementing an improved feature-based video mosaicing algorithm on a field-programmable gate array (FPGA)-based high-speed vision platform. In the implementation, feature point extraction is accelerated by a parallel processing circuit module for Harris corner detection in the FPGA on the high-speed vision platform. Feature point correspondence matching can be executed for hundreds of selected feature points in the current frame by searching for them within small neighborhoods in the previous frame, exploiting the fact that frame-to-frame image displacement becomes considerably smaller in HFR vision. The developed system can mosaic 512×512 images at 500 fps into a single synthesized image in real time by stitching the images based on their estimated frame-to-frame changes in displacement and orientation. The results of an experiment in which an outdoor scene was captured using a hand-held camera head that was moved quickly verify the performance of our system.
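
A software sketch of this pipeline (using OpenCV's Harris-based corner detector in place of the paper's FPGA module) is given below: corners are matched to their nearest neighbors within a small radius, which is valid only because HFR capture keeps frame-to-frame displacement small, and a rigid rotation-plus-translation is then fitted by least squares. The parameter values are illustrative.

```python
import numpy as np
import cv2

def frame_to_frame_motion(prev_gray, curr_gray, radius=8.0):
    """Rigid (R, t) motion between consecutive frames: Harris corners,
    nearest-neighbor matching in a small radius, 2-D Kabsch fit."""
    detect = lambda img: cv2.goodFeaturesToTrack(
        img, maxCorners=300, qualityLevel=0.01, minDistance=10,
        useHarrisDetector=True).reshape(-1, 2)
    p, q = detect(prev_gray), detect(curr_gray)

    # Match each previous corner to the nearest current corner in range.
    d = np.linalg.norm(p[:, None] - q[None, :], axis=2)
    nn = d.argmin(axis=1)
    ok = d[np.arange(len(p)), nn] < radius
    src, dst = p[ok], q[nn[ok]]

    # 2-D Kabsch: R, t minimizing ||R @ src + t - dst|| over the matches.
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    if np.linalg.det(U @ Vt) < 0:      # enforce a proper rotation
        Vt[-1] *= -1
    R = (U @ Vt).T
    t = dc - R @ sc
    return R, t                        # accumulate over frames to mosaic
```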

Collaboration


Dive into Takeshi Takaki's collaborations.

Top Co-Authors

Hua Yang (Hiroshima University)
Hiroshi Matsuda (Tokyo University of Agriculture and Technology)
Tomohiro Kawahara (Kyushu Institute of Technology)
Hao Gao (Hiroshima University)
Jun Chen (Hiroshima University)
Kenkichi Yamamoto (Tokyo University of Agriculture and Technology)