Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kazunori Onoguchi is active.

Publication


Featured research published by Kazunori Onoguchi.


international conference on pattern recognition | 2004

A practical stereo scheme for obstacle detection in automotive use

Hiroaki Nakai; Nobuyuki Takeda; Hiroshi Hattori; Yasukazu Okamoto; Kazunori Onoguchi

We propose a novel stereo scheme for obstacle detection aimed at practical automotive use. The basic methodology involves simple region matching between images observed from a stereo camera rig, where it is assumed that the images are related by a pseudo-projective transform. It provides an effective solution for determining the boundaries of obstacles in noisy conditions, e.g. those caused by weather or poor illumination, which conventional planar projection approaches cannot cope with. The linearity of the camera model also contributes significantly to compensating for road inclination. Essentially, precise lane detection and prior knowledge concerning obstacles or ambient conditions are unnecessary, and the proposed scheme is therefore applicable to a wide variety of outdoor scenes. We have also developed a multi-VLIW processor that fulfills the essential specifications for automotive use. Our scheme for obstacle detection is largely reflected in the processor design so that real-time on-board processing can be realized at a cost acceptable to both automobile users and manufacturers. The implementation of a prototype and experimental results illustrate our method.
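To illustrate the core idea of detecting obstacles through a planar (pseudo-projective) relation between the two views, here is a minimal sketch: it assumes the road-plane homography between the stereo images is already known and simply flags blocks whose matching error after warping is large. The function name, block size and threshold are illustrative, not the paper's implementation.

```python
import cv2
import numpy as np

def obstacle_mask(left, right, H_road, block=8, thresh=20.0):
    """Warp the right image onto the left using the road-plane homography
    and flag blocks whose matching error is large (likely obstacles).
    `H_road` is assumed to be a 3x3 homography relating road pixels between
    the two views (an illustrative stand-in for the paper's
    pseudo-projective transform)."""
    h, w = left.shape[:2]
    warped = cv2.warpPerspective(right, H_road, (w, h))
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            ys, xs = by * block, bx * block
            a = left[ys:ys + block, xs:xs + block].astype(np.float32)
            b = warped[ys:ys + block, xs:xs + block].astype(np.float32)
            # Road regions satisfy the planar transform, so their per-block
            # difference stays small; regions above the road plane do not.
            mask[by, bx] = np.mean(np.abs(a - b)) > thresh
    return mask
```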


international conference on intelligent transportation systems | 2010

Lane Detection System for Vehicle Platooning using Multi-information Map

Tatsuya Kasai; Kazunori Onoguchi

This paper proposes a method of lane marker detection for platooning. In order to reduce air resistance, it is desirable to shorten the distance between two vehicles. If the vehicular gap is very short, conventional methods, which detect lane markers ahead in images captured by a front camera, are useless because the lane markers are occluded by the vehicle in front. To solve this problem, the proposed method recognizes lane markers in images captured by two downward side cameras mounted on the front side and the rear side of a vehicle. First, candidate points of lane markers are extracted in each image by edge pair detection. Then, straight lines representing lane markers are detected by applying the Hough transform to these candidate points. The lateral position in a traffic lane is estimated from the position of the straight line in the image of each downward side camera. The yaw angle relative to the traffic lane is calculated from these lateral positions and the distance between the two downward side cameras. Because a downward side camera covers only a narrow area directly under it, lane markers must be detected from short segments. Therefore, the proposed method uses a multi-information map containing lane marker information to detect lane markers. The proposed method has been implemented in image processing hardware whose CPU satisfies on-vehicle specifications. Experimental results show the effectiveness of the proposed method and the lane detection device. In experiments conducted on a test course and on a highway under construction, the vehicle ran automatically at 80 km/h along a straight lane.
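The yaw-angle step described above reduces to simple trigonometry on the two lateral offsets. A minimal sketch, assuming the offsets are measured in metres and the camera baseline along the vehicle is known; the function name and the numbers in the example are illustrative.

```python
import math

def estimate_yaw(lateral_front, lateral_rear, camera_distance):
    """Yaw of the vehicle relative to the lane, estimated from the lateral
    offsets (in metres) measured under the front and rear side cameras and
    the longitudinal distance between the two cameras."""
    return math.atan2(lateral_front - lateral_rear, camera_distance)

# Example: front camera sees the marker 0.42 m away, rear camera 0.38 m,
# cameras mounted 3.0 m apart -> yaw of roughly 0.76 degrees.
yaw = estimate_yaw(0.42, 0.38, 3.0)
print(math.degrees(yaw))
```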


international conference on pattern recognition | 2006

Moving Object Detection Using a Cross Correlation between a Short Accumulated Histogram and a Long Accumulated Histogram

Kazunori Onoguchi

This paper presents a method for effectively detecting moving objects in bad-visibility weather, such as snowfall or dense fog. In such weather, the visibility changes rapidly over a short time and the intensity of each pixel changes sharply from frame to frame. In order to overcome these problems, the proposed method divides the input image into grid regions and, in each region, calculates a cross correlation between two histograms accumulated over different numbers of frames. A short accumulated histogram, generated by accumulating a small number of frames, changes quickly whenever a moving object enters the region. On the other hand, a long accumulated histogram, generated by accumulating a larger number of frames, changes slowly. Therefore, moving objects are detected by measuring the variation of the cross correlation between the short accumulated histogram and the long accumulated histogram. Experimental results obtained with heavy-snow images show the effectiveness of the proposed method.
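A minimal sketch of the short/long accumulated-histogram comparison for one grid region, assuming a normalized cross correlation; the buffer lengths, bin count and threshold are illustrative, not the paper's parameters.

```python
import numpy as np
from collections import deque

def normalized_correlation(h1, h2):
    """Normalized cross correlation between two intensity histograms."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 1.0

class RegionMotionDetector:
    """Keeps a short and a long frame buffer for one grid region and flags
    motion when the correlation between their accumulated histograms drops."""
    def __init__(self, short_len=3, long_len=30, bins=32, thresh=0.8):
        self.short = deque(maxlen=short_len)
        self.long = deque(maxlen=long_len)
        self.bins, self.thresh = bins, thresh

    def update(self, region_pixels):
        self.short.append(region_pixels)
        self.long.append(region_pixels)
        h_short, _ = np.histogram(np.concatenate([p.ravel() for p in self.short]),
                                  bins=self.bins, range=(0, 256), density=True)
        h_long, _ = np.histogram(np.concatenate([p.ravel() for p in self.long]),
                                 bins=self.bins, range=(0, 256), density=True)
        # A moving object entering the region changes the short histogram
        # quickly while the long one lags, lowering the correlation.
        return normalized_correlation(h_short, h_long) < self.thresh
```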


international conference on pattern recognition applications and methods | 2018

Road Boundary Detection using In-vehicle Monocular Camera.

Kazuki Goro; Kazunori Onoguchi

When a lane marker such as a white line is not drawn on the road or is hidden by snow, detecting the boundary line between the road and roadside objects such as curbs, grass, and side walls is important for the lateral motion control of a vehicle. Especially when the road is covered with snow, it is necessary to detect the boundary between the snow side wall and the road because other roadside objects are occluded by snow. In this paper, we propose a novel method to detect the shoulder line of a road, including the boundary with a snow side wall, from an image of an in-vehicle monocular camera. Vertical lines on an object whose height differs from the road surface are projected onto slanted lines when the input image is mapped to the road surface by inverse perspective mapping. The proposed method detects the road boundary using this characteristic. In order to cope with snow surfaces where various textures appear, we introduce a degree of road boundary that responds strongly at the boundary with areas where slanted edges are dense. Since the shape of a snow wall is complicated, the boundary line is extracted by Snakes using the degree of road boundary as the image force. Experimental results using the KITTI dataset and our own dataset including snowy roads show the effectiveness of the proposed method.
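As a rough illustration of the "degree of road boundary" idea, the sketch below computes the local density of slanted edges in an inverse-perspective-mapped image; the gradient threshold, angle range and window size are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def slant_edge_density(ipm_image, slant_min_deg=20, slant_max_deg=70, win=15):
    """Rough 'degree of road boundary' map: density of slanted edges in the
    inverse-perspective-mapped image.  Vertical structures (curbs, snow walls)
    map to slanted lines under IPM, so areas rich in slanted edges are likely
    beyond the road boundary."""
    gray = ipm_image if ipm_image.ndim == 2 else cv2.cvtColor(ipm_image, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.abs(np.arctan2(gy, gx)))   # 0..180 degrees
    ang = np.minimum(ang, 180 - ang)               # fold to 0..90 degrees
    slanted = (mag > 30) & (ang > slant_min_deg) & (ang < slant_max_deg)
    # Local density of slanted edge pixels; peaks along the road boundary.
    return cv2.boxFilter(slanted.astype(np.float32), -1, (win, win))
```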


international conference on pattern recognition applications and methods | 2014

Snow Side Wall Detection using a Single Camera

Kazunori Onoguchi; Takahito Sato

In this paper, we propose a novel method that measures the distance to a side wall with a single camera. Our method creates an inverse perspective mapping (IPM) image by projecting the input image onto a virtual plane that is parallel to the moving direction of the vehicle and perpendicular to the road surface. Then, the distance to the side wall is calculated from a histogram whose bins are the lengths of the optical flows detected in the IPM image. The optical flow in the IPM image is detected by block matching, and the motion of the side wall is obtained from the peak of the histogram. Our method is robust to changes in the appearance of the texture on the side wall that occur when a vehicle moves along a road. Experimental results using simulation images and real road images show the effectiveness of the proposed method.
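A minimal sketch of the histogram-peak step, under the illustrative assumption that the apparent flow length on the virtual vertical plane scales inversely with the true distance to the wall; the paper's exact calibration may differ.

```python
import numpy as np

def wall_distance_from_flow(flow_lengths, vehicle_disp_px, plane_distance_m,
                            bins=64, max_len=50.0):
    """Estimate the distance to the side wall from the optical-flow lengths
    measured in the IPM image of a vertical virtual plane.
    Assumption (illustrative): a wall lying exactly on the virtual plane at
    `plane_distance_m` would move by `vehicle_disp_px` pixels per frame, and
    apparent flow length scales inversely with the true distance."""
    hist, edges = np.histogram(flow_lengths, bins=bins, range=(0.0, max_len))
    peak = np.argmax(hist)
    peak_len = 0.5 * (edges[peak] + edges[peak + 1])   # dominant flow length
    if peak_len <= 0:
        return None
    return plane_distance_m * vehicle_disp_px / peak_len
```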


international conference on intelligent transportation systems | 2014

Snowfall Detection Under Bad Visibility Scene

Hiroshi Kawarabuki; Kazunori Onoguchi

The effect of weather is critical in outdoor camera surveillance applications. We therefore focus on bad weather caused by snow and propose an algorithm for detecting snowfall from surveillance camera images. The proposed algorithm increases the contrast of an image by haze removal. The contrast of a hazy image is reduced by the atmospheric haze color. When several haze colors exist in an image, the degree of contrast degradation differs for each haze color. To deal with this problem, the proposed method segments the image for every haze color and estimates the airlight, the transmission, and the dehazing weight in each segmented area individually. Most haze removal algorithms require tone curve correction as post-processing; however, it is very difficult to select an optimal tone curve correction for an arbitrary image. To deal with this problem, we propose a novel algorithm that does not require any tone curve correction. General haze removal algorithms treat only hazy images, but in outdoor camera surveillance it is desirable to realize an algorithm that can be applied to various weather conditions. Our algorithm can detect snowfall quickly and stably not only in bad weather but also in good weather.
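To show what per-segment haze removal looks like with the standard scattering model, here is a hedged sketch; the airlight is assumed to be given per segment and the transmission is a coarse dark-channel-style estimate, not the authors' estimators for airlight, transmission and dehazing weight.

```python
import numpy as np

def dehaze_region(region, airlight, omega=0.9, t_min=0.1):
    """Recover a haze-free region with the standard scattering model
    I = J*t + A*(1-t).  `region` is an HxWx3 image segment and `airlight`
    the estimated haze colour (length-3 array) for that segment; omega and
    t_min are illustrative constants."""
    norm = region.astype(np.float32) / airlight.astype(np.float32)
    t = 1.0 - omega * norm.min(axis=2)          # per-pixel transmission
    t = np.clip(t, t_min, 1.0)[..., None]
    recovered = (region.astype(np.float32) - airlight) / t + airlight
    return np.clip(recovered, 0, 255).astype(np.uint8)
```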


international conference on pattern recognition applications and methods | 2016

Raindrop Detection on a Windshield Based on Edge Ratio

Junki Ishizuka; Kazunori Onoguchi

This paper proposes a method for detecting raindrops with various shapes on a windshield from an in-vehicle monocular camera. Since raindrops on a windshield have various adverse effects on video-based automobile applications, such as obstacle detection and lane estimation, a driving safety support system or an automatic driving vehicle needs to understand the state of the raindrops adhering to the windshield. Previous works consider only isolated spherical raindrops, but raindrops on a windshield take various shapes, such as a band-like shape. The proposed method can detect raindrops regardless of shape. In the daytime, the difference in blur between a region and its surrounding areas is checked for raindrop detection. The ratio of the edge strength extracted from two kinds of smoothed images is used as the degree of blur. At night, bright areas in which the intensity does not change much are detected as raindrops.
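A minimal sketch of the daytime edge-ratio cue, assuming two Gaussian smoothings and Sobel edge strength; the sigmas and any threshold applied to the resulting map are illustrative.

```python
import cv2
import numpy as np

def edge_ratio_map(gray, sigma_small=1.0, sigma_large=3.0):
    """Degree-of-blur map used to flag raindrop candidates in the daytime:
    the ratio of edge strength in a lightly smoothed image to that in a
    strongly smoothed one.  Inside a raindrop the image is already blurred,
    so the extra smoothing removes little edge energy and the ratio stays
    close to 1; sharp background regions give a much larger ratio."""
    small = cv2.GaussianBlur(gray, (0, 0), sigma_small)
    large = cv2.GaussianBlur(gray, (0, 0), sigma_large)

    def edge_strength(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        return np.hypot(gx, gy)

    return edge_strength(small) / (edge_strength(large) + 1e-6)
```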


international conference on pattern recognition applications and methods | 2016

Detection of Raindrop with Various Shapes on a Windshield

Junki Ishizuka; Kazunori Onoguchi

This paper presents a method to detect raindrops with various shapes on a windshield from an in-vehicle single camera. Raindrops on a windshield cause various adverse effects for video-based automobile applications, such as pedestrian detection and lane detection. Therefore, it is important for a driving safety support system or an automatic driving vehicle to understand the state of the raindrops on the windshield. Although conventional methods consider only isolated spherical raindrops, our method can be applied to raindrops with various shapes, e.g. a band-like shape. In the daytime, our method detects raindrop candidates by examining the difference in blur between a region and its surrounding areas. We use the ratio of the edge strength extracted from two kinds of smoothed images as the degree of blur. At night, bright areas whose intensity does not change much are detected as raindrops.
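For the night-time cue mentioned at the end of the abstract, a hedged sketch: bright pixels whose intensity varies little over a short frame stack are kept as raindrop candidates. The thresholds are illustrative.

```python
import numpy as np

def night_raindrop_mask(frames, bright_thresh=180, var_thresh=50.0):
    """Night-time cue: raindrop candidates are bright pixels whose intensity
    stays nearly constant over time, unlike lights seen through the
    windshield, which move and flicker as the vehicle drives.
    `frames` is a sequence of grayscale frames of equal size."""
    stack = np.stack(frames).astype(np.float32)   # shape (T, H, W)
    mean_intensity = stack.mean(axis=0)
    temporal_var = stack.var(axis=0)
    return (mean_intensity > bright_thresh) & (temporal_var < var_thresh)
```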


international conference on pattern recognition applications and methods | 2014

SCHOG Feature for Pedestrian Detection

Ryuichi Ozaki; Kazunori Onoguchi

Co-occurrence Histograms of Oriented Gradients (CoHOG) have succeeded in describing the detailed shape of an object by using the co-occurrence of features. However, unlike HOG, CoHOG does not consider the difference in gradient magnitude between the foreground and the background, and the dimension of the CoHOG feature is also very large. In this paper, we propose the Similarity Co-occurrence Histogram of Oriented Gradients (SCHOG), which considers both the similarity and the co-occurrence of features. Unlike CoHOG, which quantizes the edge gradient direction into eight directions, SCHOG quantizes it into four. Therefore, the feature dimension for the co-occurrence between edge gradient directions decreases greatly. In addition to the co-occurrence between edge gradient directions, a binary code representing the similarity between features is introduced. In this paper, we use the pixel intensity, the edge gradient magnitude, and the edge gradient direction as the similarity measures. Despite reducing the resolution of the edge gradient direction, SCHOG achieves higher performance and lower dimensionality than CoHOG by adding this similarity. In experiments using the INRIA Person Dataset, SCHOG is evaluated in comparison with the conventional CoHOG.
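A minimal sketch of the co-occurrence histogram shared by CoHOG and SCHOG, assuming the gradient directions have already been quantized (four bins for SCHOG, eight for CoHOG); the similarity binary code that SCHOG adds on top is omitted.

```python
import numpy as np

def cooccurrence_histogram(direction, offset=(0, 1), n_dirs=4):
    """Co-occurrence histogram of quantized gradient directions.
    `direction` is an integer map with values in [0, n_dirs); `offset` is
    one co-occurrence displacement (dy, dx).  Each histogram bin (i, j)
    counts pixel pairs where the base pixel has direction i and the pixel
    at the given offset has direction j."""
    dy, dx = offset
    h, w = direction.shape
    hist = np.zeros((n_dirs, n_dirs), dtype=np.int64)
    base = direction[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    shifted = direction[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(hist, (base.ravel(), shifted.ravel()), 1)
    return hist
```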


international conference on intelligent transportation systems | 2014

Occluded Side Road Detection Using a Single Camera

Itaru Kikuchi; Kazunori Onoguchi

This paper presents a method to detect the entrance to a side road from a monocular image. First, candidates for the entrance to a side road are estimated in the normal inverse perspective mapping (IPM) image created by mapping the input image onto the road plane. In the IPM image, entrance candidates are searched for along the road boundary extracted by a rectangular separability filter. Next, the input image is projected onto a virtual side plane that is parallel to the vehicle's moving direction and perpendicular to the road surface; we call this mapping IPM-VSP. The distance to the candidate region is measured in the IPM-VSP image. The depth is discontinuous between a side road and the roadside objects on that side. Therefore, when a distance gap is detected around a candidate region, the candidate is selected as the entrance to a side road.
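As an illustration of the rectangular separability filter mentioned above, a hedged sketch of the separability score between two adjacent rectangular regions (a Fisher-style ratio of between-class to total variance); the region layout and any threshold are assumptions, not the paper's configuration.

```python
import numpy as np

def separability(region_a, region_b):
    """Separability between two adjacent rectangular image regions: the
    ratio of between-class scatter to total scatter of the pixel
    intensities.  The score approaches 1 where the two regions differ
    strongly (e.g. road vs. roadside) and 0 where they look alike."""
    a = region_a.astype(np.float32).ravel()
    b = region_b.astype(np.float32).ravel()
    all_px = np.concatenate([a, b])
    total = all_px.var() * all_px.size
    if total == 0:
        return 0.0
    m = all_px.mean()
    between = a.size * (a.mean() - m) ** 2 + b.size * (b.mean() - m) ** 2
    return float(between / total)
```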

Collaboration


Dive into Kazunori Onoguchi's collaborations.
