Publication


Featured research published by Eiichiro Mutoh.


Proceedings of SPIE | 2009

Infrared image guidance for ground vehicle based on fast wavelet image focusing and tracking

Akira Akiyama; Nobuaki Kobayashi; Eiichiro Mutoh; Hideo Kumagai; Hirofumi Yamada; Hiromitsu Ishii

We studied infrared image guidance for a ground vehicle based on fast wavelet image focusing and tracking. Here we use the image from an uncooled infrared imager mounted on a two-axis gimbal system, together with a newly developed auto-focusing algorithm based on the Daubechies wavelet transform. The new focusing algorithm processes the high-pass output of the Daubechies wavelet transform for direct detection of objects. This focusing smoothly gives the distance information of the outside world, and the gimbal system gives the direction of objects in the outside world in terms of a spherical coordinate system. We installed this system on a handmade electric ground vehicle platform powered by a 24 VDC battery. The vehicle is equipped with rotary encoder units and inertial rate sensor units for accurate navigation. The image tracking also uses the new wavelet focusing within several image-processing steps. The handmade electric ground vehicle platform is about 1 m long, 0.75 m wide, and 1 m high, and weighs about 50 kg. We tested the infrared image guidance, based on the new wavelet image focusing and tracking, with the electric vehicle both indoors and outdoors. The tests show good results for the developed infrared image guidance for the ground vehicle.
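The focusing idea above, scoring images by their Daubechies high-pass response and picking the sharpest, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the focus measure (sum of absolute high-pass coefficients along rows and columns) and the function names are our assumptions.

```python
import numpy as np

# Daubechies-4 (db2) low-pass coefficients and the quadrature-mirror high-pass filter.
H = np.array([1 + 3**0.5, 3 + 3**0.5, 3 - 3**0.5, 1 - 3**0.5]) / (4 * 2**0.5)
G = np.array([H[3], -H[2], H[1], -H[0]])  # high-pass; responds zero to constant regions

def wavelet_focus_measure(img):
    """Sum of absolute Daubechies high-pass responses along rows and columns.
    A sharper (better focused) image yields a larger measure."""
    img = np.asarray(img, dtype=float)
    hp_rows = np.apply_along_axis(lambda r: np.convolve(r, G, mode="valid"), 1, img)
    hp_cols = np.apply_along_axis(lambda c: np.convolve(c, G, mode="valid"), 0, img)
    return float(np.abs(hp_rows).sum() + np.abs(hp_cols).sum())

def best_focus(stack):
    """Index of the best-focused image in a focus-sweep stack."""
    return max(range(len(stack)), key=lambda i: wavelet_focus_measure(stack[i]))
```

With the gimbal angles attached to each frame, the index returned by `best_focus` maps to a lens setting and hence a distance along that viewing direction.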


Proceedings of SPIE | 2008

Space imaging infrared optical guidance for autonomous ground vehicle

Akira Akiyama; Nobuaki Kobayashi; Eiichiro Mutoh; Hideo Kumagai; Hirofumi Yamada; Hiromitsu Ishii

We have developed the Space Imaging Infrared Optical Guidance for Autonomous Ground Vehicle, based on an uncooled infrared camera and a focusing technique, to detect objects to be evaded and to set the drive path. For this purpose we built a servomotor drive system to control the focus of the infrared camera lens. To determine the best focus position we use auto-focus image processing based on the 4-term Daubechies wavelet transform. From the determined best focus position we compute the distance to the object. We built an aluminum-frame ground vehicle, 900 mm long and 800 mm wide, to mount the auto-focus infrared unit. The vehicle has an Ackermann front steering system and a rear motor drive system. To confirm the guidance ability of the system, we conducted experiments on the detection of an actual car on the road and of the roadside wall by the infrared auto-focus unit. As a result, the auto-focus image processing based on the Daubechies wavelet transform detects the best-focus image clearly and gives the depth of the object from the infrared camera unit.
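The step "from the determined best focus position we compute the distance to the object" can be illustrated with the standard thin-lens relation; the abstract does not state which model the authors used, so this is only a plausible sketch, and the numbers below are illustrative.

```python
def object_distance(f_mm, v_mm):
    """Thin-lens estimate: 1/f = 1/u + 1/v  =>  u = f*v / (v - f).
    f_mm: lens focal length; v_mm: lens-to-detector distance at best focus.
    Returns the object distance u in the same units."""
    if v_mm <= f_mm:
        raise ValueError("image distance must exceed the focal length")
    return f_mm * v_mm / (v_mm - f_mm)
```

For example, with a 50 mm lens focused when the detector sits at 52.5 mm, the object lies at 50 * 52.5 / 2.5 = 1050 mm; as the best-focus detector position approaches the focal length, the estimated distance grows toward infinity.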


Proceedings of SPIE | 2014

Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

Akira Akiyama; Eiichiro Mutoh; Hideo Kumagai

We have developed stereo matching image processing by synthesized color and the corresponding characteristic area of the synthesized color, for ranging the object and for image recognition. Typical images from a pair of stereo imagers may disagree with each other because of size change, displacement, appearance change, and deformation of the characteristic area. We construct the synthesized color, and corresponding color areas sharing the same synthesized color, to make the stereo matching distinct. The construction takes three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency distribution of density, from which the binarization threshold is found; we used the Daubechies wavelet transform for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternately in the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels of the same synthesized color and grouping them by 4-directional connectivity. The matching areas for stereo matching are determined from the synthesized color areas, and the matching point is the center of gravity of each synthesized color area. The parallax between the pair of images is then derived easily from the centers of gravity of the synthesized color areas. We performed a stereo-matching experiment on a toy soccer ball; it showed that stereo matching by the synthesized color technique is simple and effective.
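The third step, grouping same-color pixels by 4-directional connectivity and taking each area's center of gravity as the matching point, can be sketched as below. This is our illustration under the assumption of a single-channel "synthesized color" label per pixel; the function name is hypothetical.

```python
import numpy as np
from collections import deque

def label_regions_4conn(img):
    """Group pixels of identical (synthesized) color using 4-directional
    connectivity; return a label map and the centroid (matching point,
    i.e. center of gravity) of each color area."""
    img = np.asarray(img)
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)
    centroids = []
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] >= 0:
                continue                      # already assigned to a region
            color = img[sy, sx]
            label = len(centroids)
            labels[sy, sx] = label
            queue, members = deque([(sy, sx)]), []
            while queue:                      # breadth-first flood fill
                y, x = queue.popleft()
                members.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] < 0 and img[ny, nx] == color):
                        labels[ny, nx] = label
                        queue.append((ny, nx))
            ys, xs = zip(*members)
            centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return labels, centroids
```

Running this on the left and right images, the horizontal difference between the centroids of corresponding color areas gives the parallax directly.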


Proceedings of SPIE | 2013

Stereo matching image processing by selected finite length edge line matching on least square method

Akira Akiyama; Nobuaki Kobayashi; Eiichiro Mutoh; Hideo Kumagai

We have developed stereo matching image processing by selected finite-length edge line matching based on the least-squares method, to find local distance information of the view. The method uses a pair of high-pass wavelet images to find the matching edge lines; the same high-pass images are also used to choose the focused images by thresholding them. Each imager has functions for focusing, changing the view angle, and changing the aperture by servomotors and microcomputers. It is mounted on a gimbal unit allowing independent yaw and pitch movement, and the pair of imagers is mounted on a yaw gimbal so that both make the same yaw movement. The matching edge line is derived by: making a binary high-pass image corresponding to the focused image; grouping high-valued pixels of the binary high-pass image by the 8-directional connectivity rule; thinning the grouped image by the Hilditch thinning method; tracing the thinned line image to number the pixels along the line continuously; calculating the line linearity by the least-squares method at each pixel point using an adjacent finite number of pixel points; finding line segments whose linearity lies within a limited root mean square of the difference between the least-squares line and the thinned line segment; and constructing the standard matching edge line by reducing the number of pixels of the matching edge line to tolerate the deformation between the pair of images. The selected standard matching edge line is evaluated by autocorrelation on the standard thinned-line image to check for the existence of similar line segments. Given the autocorrelation information, the edge line matching is evaluated by moving the pixel point through the paired thinned-line images and calculating the root mean square of the difference between them.
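The per-pixel linearity test, fitting a least-squares line to a finite window of traced line pixels and measuring the RMS deviation, can be sketched as follows. We use a perpendicular (total) least-squares fit via SVD so that vertical segments are handled too; the abstract does not specify this variant, so treat the choice and the names as our assumptions.

```python
import numpy as np

def linearity_rms(points, window=5):
    """For each numbered pixel on a thinned line, fit a least-squares line to
    `window` neighbouring points and return the RMS perpendicular deviation;
    small values mark locally straight segments, large values mark corners."""
    pts = np.asarray(points, dtype=float)
    half = window // 2
    rms = np.full(len(pts), np.inf)           # ends of the line are left unscored
    for i in range(half, len(pts) - half):
        seg = pts[i - half:i + half + 1]
        centered = seg - seg.mean(axis=0)
        # Smallest singular value^2 = sum of squared perpendicular residuals
        # from the best-fit (total least squares) line through the centroid.
        s = np.linalg.svd(centered, full_matrices=False, compute_uv=False)
        rms[i] = s[-1] / np.sqrt(len(seg))
    return rms
```

Thresholding this RMS value selects the straight finite-length segments that become candidates for the standard matching edge line.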


Proceedings of SPIE | 2012

Geometrical stereo matching image guidance for ground vehicle on focused image pixel grouping and stacked images statistical operation

Akira Akiyama; Nobuaki Kobayashi; Hideo Kumagai; Eiichiro Mutoh; Hiromitsu Ishii

We have developed geometrical stereo matching image guidance for a ground vehicle based on focused-image pixel grouping and a statistical operation on stacked images. The two imagers are mounted on a 5-degrees-of-freedom gimbal unit that gives each imager independent yaw and pitch movement and applies the same rigid yaw rotation to both. The fast-focus image is found by calculating the developed wavelet focus measure value on the horizontal and vertical high-pass images of the Daubechies wavelet transformed image; the highest wavelet focus measure value among the stack directly gives the best-focus image. This focusing operation works well, similarly to other differential-image techniques. We use a stereo matching operation between the binary blocked high-pass images corresponding to the best-focus image. To construct the binary blocked high-pass image, we apply 8-directional adjacent-pixel connection to the binary high-pass image; the group of main block elements of the binary image serves as an appropriate matching block. The wide-image and narrow-image stereo matching operations on the binary high-pass image give correct matching; in particular, the narrow-image stereo matching provides the common area of the right and left images. To find the surface, we use the brightness variation of each pixel point through the stacked images of the focusing operation. The brightness variations calculated are the standard deviation and the absolute deviation from the average brightness at each pixel point. We apply a threshold to the variation and deviation to classify the image into mild-variation and rough-variation brightness surface areas; the rough-variation area covers the group of main blocked elements in the binary image.
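The stacked-images statistical operation described above, per-pixel standard deviation and absolute deviation through the focus sweep, thresholded into mild and rough surface classes, can be sketched like this; the threshold value and names are illustrative assumptions.

```python
import numpy as np

def classify_surface(stack, threshold):
    """Per-pixel brightness variation through a stack of focus-sweep images.
    stack: array-like of shape (n_images, h, w).
    Pixels whose standard deviation exceeds `threshold` are classified as
    rough-variation surface; the rest as mild-variation surface."""
    stack = np.asarray(stack, dtype=float)
    std = stack.std(axis=0)                               # per-pixel standard deviation
    abs_dev = np.abs(stack - stack.mean(axis=0)).mean(axis=0)  # mean absolute deviation
    rough = std > threshold                               # boolean surface-class map
    return rough, std, abs_dev
```

The boolean map `rough` is then compared with the grouped main block elements of the binary high-pass image, which it should cover.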


Proceedings of SPIE | 2011

Wide and narrow dual image guidance system for ground vehicle on fast focusing and stereo matching operation

Akira Akiyama; Nobuaki Kobayashi; Eiichiro Mutoh; Hideo Kumagai; Hiromitsu Ishii

We have developed a wide and narrow dual-image guidance system for a ground vehicle based on fast focusing and a stereo matching operation. The fast focusing captures the distance information of the outside world. The stereo matching operation on the two focused wide images finds the characteristic position, from which fine distance information is obtained through fast focusing on the narrow images from the long-focal-length camera. Our fast focusing algorithm works precisely on differential images such as the Daubechies wavelet transformed high-pass image and the Roberts, Prewitt, Sobel, and Laplacian images. After the stereo matching operation on the focused wide images, the two cameras serve the narrow-image focusing operation. This procedure establishes the reliability of the object detection and gives fine image information of the object. The pointing operation of the long-focal-length camera for the narrow image uses the related pixel address information from the stereo matching and a 2-axis gimbal of precise resolution. We experimented with object detection by stereo matching and fine distance ranging by narrow-image focusing. The experiment gives appropriate detection and fine pointing of the narrow-image focusing, meeting the guidance requirements of the ground vehicle.
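The coarse range from wide-image stereo matching follows the standard rectified-stereo relation between disparity and depth, which we sketch here for illustration; the camera parameters below are made-up example values, not the authors'.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard rectified-stereo relation: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal pixel offset of the matched feature."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a 700-pixel focal length, 0.3 m baseline, and 21-pixel disparity give a depth of 10 m; that coarse depth then tells the narrow-image focusing stage where to start its sweep.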


Proceedings of SPIE | 2010

Dual-image guidance system for autonomous vehicle on fast focusing and RGB similarity operation

Akira Akiyama; Nobuaki Kobayashi; Eiichiro Mutoh; Hideo Kumagai; Hirofumi Yamada; Hiromitsu Ishii

We have developed a dual-camera image guidance system for an autonomous vehicle based on fast focusing and a spot RGB spectrum similarity operation. The fast focusing captures the distance information of the outside world as a whole. The spot RGB spectrum similarity operation finds the object surface portion in the image. Our fast focusing algorithm works precisely on differential images such as the Daubechies wavelet transformed high-pass image and the Roberts, Prewitt, Sobel, and Laplacian images. The spot RGB spectrum similarity operation for surface detection comes from the idea of the laser range finder: an illuminating coherent laser reflects on the object surface, and the reflected laser is detected by a spectrum-band detector. The RGB spectrum distribution of a selected spot on one camera should give a similar spectrum at the position-matched spot on the other camera if the selected spot corresponds to the surface of the object. We move the autonomous vehicle based on the distance detection and the surface detection of the outside world by the controlled dual color camera system. Our autonomous vehicle is equipped with a controllable independent four-wheel drive, so it can evade an object geometrically even when the object is directly in front of it. We mount the dual-camera image guidance system on a two-axis gimbal system to aim at objects in space.
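The spot RGB similarity check, comparing the RGB distribution of a spot in one camera with the position-matched spot in the other, can be sketched as below. The abstract does not name a similarity metric; cosine similarity of the mean RGB vectors is one plausible choice, and the function name is ours.

```python
import numpy as np

def rgb_spot_similarity(spot_a, spot_b):
    """Cosine similarity between the mean RGB vectors of two image spots
    (each an array-like of RGB pixels). Values near 1 suggest both spots
    see the same object surface; low values suggest a mismatch."""
    a = np.asarray(spot_a, dtype=float).reshape(-1, 3).mean(axis=0)
    b = np.asarray(spot_b, dtype=float).reshape(-1, 3).mean(axis=0)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

A spot pair passing a similarity threshold is accepted as a surface point; failing pairs are rejected as matching errors before the vehicle plans its evasion path.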


Proceedings of SPIE | 2007

Space imaging optical guidance for ground vehicle

Akira Akiyama; Nobuaki Kobayashi; Eiichiro Mutoh; Hideo Kumagai; Hirofumi Yamada; Hiromitsu Ishii

We have developed the Space Imaging Optical Guidance for Ground Vehicle, which uses the narrow field of view of the Space Imaging Measurement System, based on a fixed lens and a fast-moving detector, to detect objects to be evaded, and the wide field of view of a fine visible optical camera to set the drive path. In particular, the angle between the optical axis of the narrow field of view and a roadside object is very small, so we applied image segmentation processing to the narrow field of view. This provides accurate detection of the roadside object and its distance. To confirm its guidance ability, we tested the Space Imaging Optical Guidance for Ground Vehicle on a road with an object on it and a roadside wall. The system detects the object and the surface of the wall, together with their distances.


Proceedings of SPIE | 2006

Space imaging measurement system based on fixed lens and moving detector

Akira Akiyama; Minoru Doshida; Eiichiro Mutoh; Hideo Kumagai; Hirofumi Yamada; Hiromitsu Ishii

We have developed the Space Imaging Measurement System, based on a fixed lens and a fast-moving detector, for the control of an autonomous ground vehicle. Space measurement is the most important task in the development of an autonomous ground vehicle. In this study we move the detector back and forth along the optical axis at a fast rate to measure three-dimensional image data. This system is well suited to an autonomous ground vehicle because it does not emit any optical energy to measure distance, which keeps it safe. We use a digital camera in the visible range, which reduces the cost of three-dimensional image data acquisition compared with an imaging laser system. Many pieces of narrow space imaging measurement data can be combined to construct wide-range three-dimensional data, which improves image recognition of the object space. To achieve fast movement of the detector, we built a counter-mass balance into the mechanical crank system of the Space Imaging Measurement System, and we added a duct to prevent optical noise from rays not coming through the lens. The object distance is derived from the focus distance, which is related to the best-focused image data. The best-focused image data is selected as the image with the maximum standard deviation among the standard deviations of the image series.
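The selection rule stated at the end, picking the frame with the maximum brightness standard deviation from the detector sweep, is simple to sketch; the function name is ours, and a real system would map the winning frame index back to the detector position for the distance estimate.

```python
import numpy as np

def best_focus_frame(frames):
    """Select the best-focused frame from a detector sweep as the one with
    the maximum brightness standard deviation (a simple contrast measure).
    Returns (index, standard deviation) of the winning frame."""
    stds = [float(np.asarray(f, dtype=float).std()) for f in frames]
    i = int(np.argmax(stds))
    return i, stds[i]
```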


Optical Engineering | 2005

Optical fiber imaging laser radar

Akira Akiyama; Yukiteru Kakimoto; Kazuhisa Kanda; Masahiro Kuwabara; Hiroyuki Yasuo; Eiichiro Mutoh; Hideo Kumagai; Takahiro Watanabe; Minoru Doshida; Hiromitsu Ishii

We develop an optical fiber imaging laser radar based on a focal plane array detection method that uses fewer detectors than the focal plane array resolution. For this focal plane array detection method, we produced an optical fiber dissector, a movable aperture, and a small parallel multichannel pulse counter receiver. The optical fiber dissector has one vertical cross section of a 35×35 optical fiber square array at the focal-plane end and 25 vertical cross sections of 25 optical fiber bundles, for the 25-channel parallel multichannel pulse counter receiver, at the other end. Each optical fiber bundle has 49 optical fibers selected from the 35×35 optical fiber square array with no overlap. The movable aperture has a window the size of a 5×5 optical fiber cross section to ensure no crosstalk in the detection of the divergent pulse laser beam, which is focused on some 5×5 area of the 35×35 optical fiber square array according to its scanning direction. The developed optical fiber imaging laser radar shows high range resolution and crosstalk-free angle resolution: the range resolution is under 15 cm, and the angle resolution is 1 pixel.

Collaboration


Top co-authors of Eiichiro Mutoh:

Akira Akiyama, Tokyo Institute of Technology
Nobuaki Kobayashi, Kanazawa Institute of Technology
Fumio Wani, Kawasaki Heavy Industries
Hiroki Nagaoka, Kawasaki Heavy Industries
Tomohito Takada, Kawasaki Heavy Industries