Publication


Featured research published by Shinsuke Yasukawa.


Neural Networks | 2016

Real-time object tracking based on scale-invariant features employing bio-inspired hardware

Shinsuke Yasukawa; Hirotsugu Okuno; Kazuo Ishii; Tetsuya Yagi

We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focused on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering with a configurable filter size, and the FPGA to pipeline-process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video.
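The frequency-band decomposition at the heart of this approach can be sketched in software as a difference-of-Gaussians band split: progressively stronger low-pass filters are applied, and adjacent results are subtracted. This is a minimal numpy sketch of the general technique; the helper functions and the chosen sigmas are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable 2-D Gaussian filtering (rows, then columns)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def frequency_bands(img, sigmas=(1.0, 2.0, 4.0)):
    """Split an image into band-pass components as differences of
    progressively stronger low-pass (Gaussian) images."""
    lowpass = [img] + [gaussian_blur(img, s) for s in sigmas]
    return [lowpass[i] - lowpass[i + 1] for i in range(len(sigmas))]

img = np.random.rand(64, 64)
bands = frequency_bands(img)
# The band-pass images plus the final low-pass reconstruct the original.
reconstruction = sum(bands) + gaussian_blur(img, 4.0)
```

The bands telescope: summing them and adding back the coarsest low-pass image recovers the input, which is why each band can be processed in parallel without losing information.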


Journal of Marine Science and Technology | 2017

Erratum to: Enhancement of deep-sea floor images obtained by an underwater vehicle and its evaluation by crab recognition

Jonghyun Ahn; Shinsuke Yasukawa; Takashi Sonoda; Tamaki Ura; Kazuo Ishii

The underwater robot is one of the most important research tools in deep-sea exploration, where high pressure, extreme darkness, and radio attenuation prevent direct access. In particular, Autonomous Underwater Vehicles (AUVs) are the focus of much attention since they have no tethered cables and can navigate freely. Ideally, however, AUVs should be able to make independent decisions even with the limited information available from their mounted sensors; that is, they need powerful yet low-power-consumption computer systems that enable them to recognize their surroundings and cruise for long distances. Modern computer development thus makes AUVs one of the most practical solutions for deep-sea exploration and investigation. As a next-generation AUV, we have been developing a Sampling-AUV that can dive into deep-sea regions and bring back samples of marine creatures in order to understand the marine ecosystem more fully. In our mission scenario, the Sampling-AUV transmits deep-sea floor images to scientists on the research ship using acoustic communication; the scientists select the marine creatures to sample, and the AUV is tasked with retrieving them. The AUV then returns to the area where the interesting marine creatures were observed, collects samples, and brings them back. To realize this mission scenario, the sea-floor images need to be enhanced to assist the scientists' judgment, since the color red attenuates rapidly and the images become bluish, while small differences in the AUV's altitude above the sea floor also affect image brightness due to light attenuation. Moreover, because underwater acoustic communication is slow and unreliable, the AUV is required to select interesting images that include marine life. In this paper, we propose a deep-sea floor image enhancement method based on Retinex theory and evaluate its performance on deep-sea floor images taken by an AUV, using crab recognition as the evaluation task.
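Retinex-style enhancement can be illustrated with a single-scale sketch: the illumination component is estimated by heavy smoothing and divided out in the log domain, flattening the uneven lighting that altitude changes produce. This is a minimal numpy illustration of the general idea; the box-filter illumination estimate, the blur radius, and the final rescaling are assumptions, not the parameters used in the paper.

```python
import numpy as np

def box_blur(img, radius):
    """Crude low-pass estimate of illumination via separable box filtering."""
    k = np.ones(2 * radius + 1)
    k /= k.size
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def single_scale_retinex(img, radius=15):
    """log(image) - log(estimated illumination), rescaled to [0, 1]."""
    img = img.astype(np.float64) + 1e-6          # avoid log(0)
    retinex = np.log(img) - np.log(box_blur(img, radius) + 1e-6)
    lo, hi = retinex.min(), retinex.max()
    return (retinex - lo) / (hi - lo + 1e-12)

# A dim synthetic "sea-floor" image: reflectance times uneven lighting.
rng = np.random.default_rng(0)
reflectance = rng.random((64, 64))
illumination = np.linspace(0.2, 1.0, 64)[None, :] * np.ones((64, 64))
observed = reflectance * illumination
enhanced = single_scale_retinex(observed)
```

Because the smooth illumination gradient is subtracted in the log domain, the enhanced output depends mainly on the reflectance detail, which is what a scientist judging sea-floor images cares about.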


International Conference on Mechatronics and Automation | 2013

High-speed multiple spatial band-pass filtering using a resistive network

Shinsuke Yasukawa; Hirotsugu Okuno; Seiji Kameda; Tetsuya Yagi

In this study, we developed a vision system that separates an image into a set of spatial frequency bands using multiple spatial filters within each single frame sampling period of 20 ms. The vision system comprises active pixel sensors (APS), two sample-and-hold (S/H) circuits, an analog resistive network, and a field-programmable gate array (FPGA). The resistive network performs instantaneous spatial filtering, with a spatial property that depends on its resistance. We implemented a digital circuit in the FPGA that controls the resistance of the resistive network and the timing at which the spatially filtered images are read out within a single frame. We examined the performance of the system by presenting a set of test images; the system output three spatial band-pass filtered images at 50 fps.


OCEANS Conference | 2016

Development of an autonomous underwater vehicle with human-aware robot navigation

Yuya Nishida; Takashi Sonoda; Shinsuke Yasukawa; Jonghyun Ahn; Kazunori Nagano; Kazuo Ishii; Tamaki Ura

The AUV Tuna-Sand 2, developed in February 2016, can photograph the seafloor at high resolution every five seconds using a camera and two LED strobes, and can detect sea creatures in the images based on color information. If a sea creature is detected in a photograph, the AUV transmits a compressed version of the image to the ship using an acoustic modem for image transmission. The operator on the ship can then select, from the received photographs, a photographing position to which the AUV should return. After instruction by the operator via an acoustic command modem, the AUV can return to the photographing position within ±0.3 m. In future work, we will develop visual feedback and manipulation methods for sea-creature sampling.
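The color-based detection mentioned above can be sketched as thresholding a red-to-blue ratio, since reddish creatures stand out against a bluish sea floor. This is an illustrative heuristic only; the vehicle's actual detector, its thresholds, and the helper names here are assumptions.

```python
import numpy as np

def detect_reddish_regions(img_rgb, ratio=1.5, min_pixels=20):
    """Flag an image when enough pixels are much redder than blue."""
    r = img_rgb[..., 0].astype(float)
    b = img_rgb[..., 2].astype(float) + 1e-6   # avoid division by zero
    mask = (r / b) > ratio
    return mask.sum() >= min_pixels, mask

# Bluish background with a small reddish "crab" patch.
img = np.zeros((32, 32, 3))
img[..., 2] = 0.6                      # blue background
img[10:16, 10:16, 0] = 0.9             # red patch
img[10:16, 10:16, 2] = 0.3
found, mask = detect_reddish_regions(img)
```

A ratio test is robust to overall brightness changes, which matters when altitude variations change how much light reaches the sea floor.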


International Journal of Mechatronics and Automation | 2014

A vision sensor system with a real-time multi-scale filtering function

Shinsuke Yasukawa; Hirotsugu Okuno; Seiji Kameda; Tetsuya Yagi

We developed a compact and energy-efficient vision sensor system that separates an image into a set of spatial frequency bands at 50 fps. The vision sensor system comprises a photo-sensor array, a metal-oxide-semiconductor (MOS)-based resistive network, and a field-programmable gate array (FPGA). To apply the multiple spatial filters required for this separation efficiently, we employed a MOS-based resistive network, whose strengths are instantaneous filtering, a configurable filter size, and low power consumption. A digital circuit for controlling the filter size of the resistive network was programmed in the FPGA; this circuit changes the filter size four times in a single frame sampling period, generating four filtered images from a single resistive network. The system was applied to edge extraction from photographs and movies of natural scenes, and it successfully extracted edges and separated them by spatial frequency in real time, e.g., the outline and stripe patterns of a zebra.


IEEE/SICE International Symposium on System Integration | 2012

Detection of scale-invariant key points employing a resistive network

Shinsuke Yasukawa; Hirotsugu Okuno; Tetsuya Yagi

We assessed the feasibility of applying a resistive network (RN) filter to the scale-invariant feature transform (SIFT) algorithm by performing computer simulations of the hardware implementation of the filter. SIFT is a computer vision algorithm that detects and describes local features that are invariant to the scale and rotation of objects; however, it is difficult to perform the multiple spatial filterings of the SIFT algorithm in real time because of their high computational cost. To solve this problem, we employed an RN, which performs spatial filtering instantaneously with extremely low power dissipation. In order to apply an RN filter to the SIFT algorithm in place of the Gaussian filter employed in the original algorithm, we investigated the difference in the spatial properties of the two filters. We simulated the SIFT algorithm employing the RN filter on a computer and demonstrated that key points were detected at the same place irrespective of image size, and that the scale of each key point was detected appropriately.
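The key-point detection step can be sketched as finding local extrema across a stack of difference images: a pixel is a candidate key point if it is the maximum or minimum over its 3x3x3 neighbourhood in position and scale. This is a minimal numpy sketch of scale-space extrema detection in general, not the authors' implementation; the threshold value is an assumption.

```python
import numpy as np

def local_extrema(dog_stack, threshold=0.03):
    """Find (scale, y, x) positions that are extrema over their
    3x3x3 neighbourhood in a difference-of-filters stack."""
    s, h, w = dog_stack.shape
    keypoints = []
    for i in range(1, s - 1):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                v = dog_stack[i, y, x]
                if abs(v) < threshold:        # reject weak responses
                    continue
                cube = dog_stack[i - 1:i + 2, y - 1:y + 2, x - 1:x + 2]
                if v == cube.max() or v == cube.min():
                    keypoints.append((i, y, x))
    return keypoints

# A synthetic stack with one strong blob response at scale 1, (8, 8).
stack = np.zeros((3, 16, 16))
stack[1, 8, 8] = 1.0
kps = local_extrema(stack)
```

Because the extremum is sought across the scale axis as well as image position, the reported scale index indicates the filter size that best matches the blob, which is what makes the detection scale invariant.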


OCEANS 2017 - Aberdeen | 2017

Sea-floor image transmission system for AUV

Jonghyun Ahn; Shinsuke Yasukawa; Tharindu Weerakoon; Takashi Sonoda; Yuya Nishida; Tamaki Ura; Kazuo Ishii

The Autonomous Underwater Vehicle (AUV) has become one of the most promising tools for ocean exploration during the last few decades and, in particular, is a solution for long-term spatial-temporal investigations over wide areas. One of the next missions expected of AUVs is deep-sea specimen sampling, which is currently performed by Remotely Operated Vehicles (ROVs) or Human Occupied Vehicles (HOVs), with the sampling targets selected by scientists on-line. To establish a similar on-line investigation with an AUV system, the sea-floor images have to be transmitted to the scientists on the support vessel by acoustic communication. However, acoustic communication is slow compared with radio communication, and data can be lost because of the directionality of the acoustic modem, the positional relationship between the AUV and the support vessel, attenuation, and so on. A robust image transmission system over acoustic communication is therefore necessary for in-situ decision making for sampling by an AUV with many tasks. In this paper, we propose a sea-floor image transmission system with image compression and evaluate it in sea trials in Suruga Bay. The image compression method is based on a set of color palettes, where the colors of a palette are assigned from the main colors obtained by minimum variance quantization of a typical sea-floor image. The colors of the captured images are replaced by the most similar colors in the palette. Images compressed with a 16-color palette were evaluated with the Structural SIMilarity (SSIM) method and showed an SSIM index of 88.5%. The duration of one image transmission was about 40 seconds in the sea trials, and the transmission success rate was 75%.
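Palette-based compression of this kind can be sketched as clustering an image's colors into 16 representatives and replacing every pixel with its nearest palette entry. In this sketch, k-means stands in for the paper's minimum variance quantization and a single global (unwindowed) SSIM stands in for the full windowed SSIM metric; both substitutions are simplifying assumptions.

```python
import numpy as np

def build_palette(pixels, k=16, iters=10, seed=0):
    """Cluster RGB pixels into k palette colors (k-means stand-in
    for minimum variance quantization)."""
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                palette[j] = members.mean(axis=0)
    return palette

def quantize(img, palette):
    """Replace each pixel by its nearest palette color."""
    flat = img.reshape(-1, 3)
    d = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    return palette[d.argmin(axis=1)].reshape(img.shape)

def global_ssim(a, b, data_range=1.0):
    """Single-window SSIM over whole grayscale images."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / \
           ((ma**2 + mb**2 + c1) * (va + vb + c2))

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))
palette = build_palette(img.reshape(-1, 3))
compressed = quantize(img, palette)
score = global_ssim(img.mean(axis=2), compressed.mean(axis=2))
```

After quantization each pixel can be encoded as a 4-bit palette index instead of a full color triple, which is what makes the image small enough to send over a slow acoustic link.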


OCEANS 2016 - Shanghai | 2016

Image enhancement and compression of deep-sea floor image for acoustic transmission

Jonghyun Ahn; Shinsuke Yasukawa; Takashi Sonoda; Yuya Nishida; Kazuo Ishii; Tamaki Ura


Journal of Robotics and Mechatronics | 2018

Underwater Platform for Intelligent Robotics and its Application in Two Visual Tracking Systems

Yuya Nishida; Takashi Sonoda; Shinsuke Yasukawa; Kazunori Nagano; Mamoru Minami; Kazuo Ishii; Tamaki Ura


Journal of Robotics and Mechatronics | 2018

Image Mosaicing Using Multi-Modal Images for Generation of Tomato Growth State Map

Takuya Fujinaga; Shinsuke Yasukawa; Binghe Li; Kazuo Ishii

Collaboration


Dive into Shinsuke Yasukawa's collaboration.

Top Co-Authors

Takashi Sonoda (Kyushu Institute of Technology)
Tamaki Ura (Kyushu Institute of Technology)
Jonghyun Ahn (Kyushu Institute of Technology)
Kazuo Ishii (Kyushu Institute of Technology)
Yuya Nishida (Kyushu Institute of Technology)