Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Hayato Hagiwara is active.

Publication


Featured research published by Hayato Hagiwara.


IEEE Global Conference on Consumer Electronics | 2012

Real-time image processing system by using FPGA for service robots

Hayato Hagiwara; Kenichi Asami; Mochimitsu Komori

This paper presents a real-time image processing system for a mobile service robot, implemented on an FPGA reconfigurable logic device. Digital image processing of frames acquired by a CMOS image sensor is performed on the embedded FPGA board and a Linux-based real-time video communication module. The integrated robot vision system aims to provide a platform that is usable from both the hardware and the software side. In addition, we present spatial recognition by edge detection and tracking of moving objects using the frame difference method, both of which are necessary for autonomous mobile robots.
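
As a rough illustration of the frame difference method mentioned in this abstract, the following minimal Python/NumPy sketch (not the authors' FPGA implementation; the function names and the threshold value are assumptions) flags pixels that changed between two consecutive grayscale frames and reports the bounding box of the moving region.

import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=30):
    """Return a binary motion mask from two consecutive grayscale frames."""
    # Absolute per-pixel difference between consecutive frames
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    # Pixels whose intensity changed more than the threshold count as "moving"
    return diff > threshold

def bounding_box(mask):
    """Bounding box (top, left, bottom, right) of the moving region, or None."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()

# Example with synthetic frames: a bright square moves by 5 pixels
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
prev[40:60, 40:60] = 200
curr[40:60, 45:65] = 200
print(bounding_box(frame_difference(prev, curr)))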


International Conference on Hybrid Information Technology | 2011

FPGA Implementation of Image Processing for Real-Time Robot Vision System

Hayato Hagiwara; Kenichi Asami; Mochimitsu Komori

This paper presents a real-time robot vision system integrating adequate image processing and pan-tilt motion control which is implemented by FPGA reconfigurable logic device. The digital image processing acquired by CMOS image sensor is performed on the embedded FPGA board and Linux real-time video communication module. The integrated robot vision system aims to achieve a suitable platform available from both hardware and software. In addition, we also present spatial recognition by edge detection and tracking function of moving objects by determining color which are necessary for autonomous mobile robots.
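
For the color-based tracking mentioned here, a common software analogue (a sketch under assumptions, not the paper's FPGA circuit; the tolerance and target color are made up) is to threshold an RGB frame around a target color and take the centroid of the matching pixels as the object position.

import numpy as np

def track_by_color(frame_rgb, target_rgb, tolerance=40):
    """Centroid (row, col) of pixels within `tolerance` of the target color, or None."""
    diff = np.abs(frame_rgb.astype(np.int16) - np.array(target_rgb, dtype=np.int16))
    mask = np.all(diff <= tolerance, axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic frame with a reddish patch centered near (30, 80)
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[25:36, 75:86] = (200, 30, 30)
print(track_by_color(frame, target_rgb=(200, 30, 30)))   # approximately (30.0, 80.0)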


IEEE Global Conference on Consumer Electronics | 2012

Visual navigation system based on evolutionary computation on FPGA for patrol service robot

Kenichi Asami; Hayato Hagiwara; Mochimitsu Komori

A visual navigation system for a patrol service robot using image processing and evolutionary computation on an FPGA is presented. The image processing and evolutionary computation are implemented in a reconfigurable device as original logic. The image processing circuit captures digital images from a CMOS camera and extracts edges and feature points from the images for self-localization, which is used to adjust the current course on the traveling map data. The genetic algorithm optimizes threshold values for the filtering operations according to various lighting environments. The visual navigation system has been developed as a linkage between flexible hardware circuits and real-time software applications for robot vision.
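
The abstract describes a genetic algorithm that tunes filtering thresholds for different lighting conditions. As a hedged sketch of that idea (the fitness function, population size, and mutation scheme below are assumptions, not the paper's design), the following Python code evolves a single edge detection threshold so that the fraction of edge pixels in a frame approaches a target density.

import numpy as np

rng = np.random.default_rng(0)

def edge_density(image, threshold):
    """Fraction of pixels whose horizontal gradient magnitude exceeds the threshold."""
    grad = np.abs(np.diff(image.astype(np.int16), axis=1))
    return np.mean(grad > threshold)

def fitness(image, threshold, target_density=0.05):
    """Higher is better: penalize deviation from the desired edge density."""
    return -abs(edge_density(image, threshold) - target_density)

def evolve_threshold(image, pop_size=20, generations=30):
    """Toy genetic algorithm: keep the fittest half, mutate copies with Gaussian noise."""
    population = rng.uniform(1, 255, size=pop_size)
    for _ in range(generations):
        scores = np.array([fitness(image, t) for t in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]   # best half survives
        children = parents + rng.normal(0, 5, size=parents.shape)   # mutated copies
        population = np.clip(np.concatenate([parents, children]), 1, 255)
    scores = np.array([fitness(image, t) for t in population])
    return population[np.argmax(scores)]

frame = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
print(evolve_threshold(frame))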


Trust, Security and Privacy in Computing and Communications | 2011

Visual Navigation System with Real-Time Image Processing for Patrol Service Robot

Kenichi Asami; Hayato Hagiwara; Mochimitsu Komori

A visual navigation system for a mobile patrol robot using image processing on an FPGA and real-time Linux is presented. The CMOS image sensor and the stepper motor driver ICs are connected to external I/O ports of the FPGA. The image processing and motor drive circuits are implemented in the reconfigurable device as original logic. The image capture circuit applies a state machine and a FIFO memory buffer to adjust the timing of pixel data transmission. The motor drive circuit generates clock signals for the steps according to a value set by the processor in the FPGA. A real-time device driver has been developed as the linkage between the flexible hardware circuits and real-time software applications for robot vision.
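
The motor drive circuit described above behaves like a programmable clock divider: a register value written by the processor sets how many system clock cycles elapse between step pulses. The toy Python model below (the clock frequency and divider value are assumptions for illustration) shows that relationship.

def step_pulse_times(system_clock_hz, divider, num_steps):
    """Times (in seconds) at which step pulses are emitted.

    The drive circuit emits one step pulse every `divider` system clock cycles,
    so the step frequency is system_clock_hz / divider.
    """
    period = divider / system_clock_hz
    return [i * period for i in range(num_steps)]

# Example: 50 MHz FPGA clock with a divider of 25_000 gives a 2 kHz step rate
times = step_pulse_times(50_000_000, 25_000, num_steps=5)
print(times)                        # [0.0, 0.0005, 0.001, 0.0015, 0.002]
print(1 / (times[1] - times[0]))    # 2000.0 steps per second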


International Conference on Hybrid Information Technology | 2011

Development of Visual Navigation System for Patrol Service Robot

Kenichi Asami; Hayato Hagiwara; Mochimitsu Komori

A visual navigation system for a mobile patrol robot using image processing on an FPGA and a real-time device driver is presented. The CMOS image sensor and the stepper motor driver ICs are connected to external I/O ports of the FPGA. The image processing and motor drive circuits are implemented in the FPGA device together with a state machine and a FIFO memory buffer that adjust the timing of pixel data transmission. The real-time device driver couples the flexible hardware circuits with software applications for robot vision.


International Conference on Informatics, Electronics and Vision | 2015

Scene recognition based on gradient feature for autonomous mobile robot and its FPGA implementation

Tsukasa Nakamura; Yasufumi Touma; Hayato Hagiwara; Kenichi Asami; Mochimitsu Komori

This paper introduces an image processing system for scene recognition using a gradient feature and its FPGA implementation for mobile robots. We propose a hierarchical gradient feature descriptor that can be implemented as a compact logic circuit on the FPGA. The gradient feature includes corner detection based on the dispersion of the directional gradient. The proposed hierarchical gradient feature analyzes gradient magnitude and direction in 17 regional blocks, after the input image is smoothed as preprocessing by a Gaussian filter built from 7 line buffers with 8 parallel circuits.
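
For readers unfamiliar with gradient features, the sketch below shows, in plain Python/NumPy, how gradient magnitude and direction can be computed per pixel and pooled into regional blocks; the 4x4 block layout and bin count are illustrative assumptions, not the paper's 17-block hierarchical descriptor.

import numpy as np

def gradient_magnitude_direction(image):
    """Per-pixel gradient magnitude and direction from central differences."""
    img = image.astype(np.float32)
    gy, gx = np.gradient(img)            # vertical and horizontal gradients
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)       # radians in (-pi, pi]
    return magnitude, direction

def block_histogram(image, blocks=(4, 4), bins=8):
    """Pool gradient directions, weighted by magnitude, into per-block histograms."""
    magnitude, direction = gradient_magnitude_direction(image)
    h, w = image.shape
    bh, bw = h // blocks[0], w // blocks[1]
    descriptor = []
    for by in range(blocks[0]):
        for bx in range(blocks[1]):
            m = magnitude[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            d = direction[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            hist, _ = np.histogram(d, bins=bins, range=(-np.pi, np.pi), weights=m)
            descriptor.append(hist)
    return np.concatenate(descriptor)

image = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(block_histogram(image).shape)   # (4 * 4 * 8,) = (128,)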


International Conference on Informatics, Electronics and Vision | 2015

FPGA-based stereo vision system using census transform for autonomous mobile robot

Kaichiro Nakazato; Yasufumi Touma; Hayato Hagiwara; Kenichi Asami; Mochimitsu Komori

This paper presents an FPGA-based stereo vision system using the census transform for autonomous mobile robots. Most autonomous mobile robots need to recognize their surroundings quickly in order to avoid obstructions and find pathways, so faster image processing methods are required to support mobility. Accordingly, an FPGA-based stereo vision system using the census transform is proposed, in which the feature description adapts to various environments and can be represented as compact binary information in logic circuits. In the experiments, the proposed FPGA-based stereo vision system was much faster than a software implementation on the same platform, and the mismatch rate of the stereo correspondence was very low.
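
The census transform encodes each pixel as a bit string recording whether each neighbor in a small window is darker than the center, so stereo matching reduces to comparing bit strings with the Hamming distance, which maps well onto FPGA logic. A minimal Python sketch of that idea follows; the window size, search range, and synthetic test images are assumptions, not the paper's configuration.

import numpy as np

def census_transform(image, window=3):
    """Encode each pixel as a bit vector: 1 where a neighbor is darker than the center."""
    r = window // 2
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            bits.append((shifted < image).astype(np.uint8))
    return np.stack(bits, axis=-1)       # shape (h, w, window*window - 1)

def disparity_map(left, right, max_disparity=16):
    """Pick, per pixel, the disparity with the smallest Hamming distance of census codes."""
    cl, cr = census_transform(left), census_transform(right)
    h, w, _ = cl.shape
    costs = np.full((h, w, max_disparity), np.iinfo(np.int32).max, dtype=np.int32)
    for d in range(max_disparity):
        shifted = np.roll(cr, d, axis=1)          # right code at x - d
        costs[:, d:, d] = np.sum(cl[:, d:] != shifted[:, d:], axis=-1)
    return np.argmin(costs, axis=-1)

rng = np.random.default_rng(2)
right = rng.integers(0, 256, size=(48, 64), dtype=np.uint8)
left = np.roll(right, 5, axis=1)                  # synthetic 5-pixel horizontal shift
print(np.median(disparity_map(left, right)))      # expect about 5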


International Conference on Informatics, Electronics and Vision | 2015

3D map construction based on structure from motion using stereo vision

Daiki Kitayama; Yasufumi Touma; Hayato Hagiwara; Kenichi Asami; Mochimitsu Komori

In this paper, 3D map construction using stereo vision for an autonomous mobile robot is presented. Parallax images are obtained and the triangulation method is used to calculate the 3D coordinates of the feature points. In addition, the Structure from Motion method is used to estimate the camera's self-position in order to integrate the 3D map. To generate a more accurate 3D map, the stereo vision system must deal with problems such as the aperture problem and noise. The stereo measurement system is constructed and experiments on mitigating these visual problems are conducted. A practical 3D map of a typical corridor environment was generated from high-speed stereo image processing within 900 ms per scene.
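
As background for the triangulation step described above, the depth of a feature point in a rectified stereo pair follows from the disparity via Z = f * B / d. The short Python sketch below converts one matched pixel pair into a 3D point; the focal length, baseline, and pixel coordinates are made-up example values, not the paper's calibration.

import numpy as np

def triangulate(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """3D point (X, Y, Z) in meters from a rectified stereo correspondence.

    x_left, x_right : horizontal pixel coordinates of the same feature point
    focal_px        : focal length in pixels
    baseline_m      : distance between the two camera centers in meters
    cx, cy          : principal point (image center) in pixels
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point is at infinity or the match is invalid")
    z = focal_px * baseline_m / disparity      # depth along the optical axis
    x = (x_left - cx) * z / focal_px           # lateral offset
    y3d = (y - cy) * z / focal_px              # vertical offset
    return np.array([x, y3d, z])

# Example values (assumed, for illustration only)
print(triangulate(x_left=340, x_right=320, y=250, focal_px=700, baseline_m=0.12, cx=320, cy=240))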


Soft Computing | 2014

Local binary feature based on census transform for mobile robot

Yasufumi Touma; Hayato Hagiwara; Kenichi Asami; Mochimitsu Komori

This paper presents a very simple descriptor for local image features detected by the DOB (Difference of Boxes) detector proposed in CenSurE. Image features are often used to navigate a mobile robot, and methods that are robust to scale and illumination changes, such as SIFT and CenSurE, have been developed. However, these descriptors are still computationally heavy, and using them is a large burden for low-specification computers. We therefore propose a simple feature descriptor based on the census transform that is robust to several kinds of image change, in particular illumination change. The experimental results show that the proposed method is robust to several image changes and is especially robust to illumination change.
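
The DOB (Difference of Boxes) detector from CenSurE approximates a Laplacian-of-Gaussian response by subtracting the mean of a large box window from the mean of a small box window centered on the same pixel, which requires only integral-image sums. A minimal Python sketch of that response is given below; the box sizes and the synthetic test image are assumptions for illustration.

import numpy as np

def integral_image(image):
    """Summed-area table with a zero row/column prepended for easy box sums."""
    return np.pad(image.astype(np.float64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def box_mean(ii, y, x, r):
    """Mean intensity of the (2r+1)x(2r+1) box centered at (y, x)."""
    y0, y1 = y - r, y + r + 1
    x0, x1 = x - r, x + r + 1
    total = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return total / ((2 * r + 1) ** 2)

def dob_response(image, inner=1, outer=3):
    """Difference-of-Boxes response at every pixel far enough from the border."""
    ii = integral_image(image)
    h, w = image.shape
    response = np.zeros((h, w))
    for y in range(outer, h - outer):
        for x in range(outer, w - outer):
            response[y, x] = box_mean(ii, y, x, inner) - box_mean(ii, y, x, outer)
    return response

image = np.zeros((32, 32), dtype=np.uint8)
image[15:18, 15:18] = 255                     # a small bright blob
resp = dob_response(image)
print(np.unravel_index(np.argmax(np.abs(resp)), resp.shape))   # peaks near (16, 16)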


Journal of Robotics and Mechatronics | 2015

FPGA-Based Stereo Vision System Using Gradient Feature Correspondence

Hayato Hagiwara; Yasufumi Touma; Kenichi Asami; Mochimitsu Komori

Collaboration


Dive into Hayato Hagiwara's collaboration.

Top Co-Authors

Kenichi Asami
Kyushu Institute of Technology

Mochimitsu Komori
Kyushu Institute of Technology

Yasufumi Touma
Kyushu Institute of Technology

Daiki Kitayama
Kyushu Institute of Technology

Kaichiro Nakazato
Kyushu Institute of Technology

Tsukasa Nakamura
Kyushu Institute of Technology