
Publication


Featured research published by Hideyuki Kume.


International Conference on Pattern Recognition | 2010

Extrinsic Camera Parameter Estimation Using Video Images and GPS Considering GPS Positioning Accuracy

Hideyuki Kume; Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya

This paper proposes a method for estimating extrinsic camera parameters using video images and position data acquired by GPS. In conventional methods, the accuracy of the estimated camera position depends largely on the accuracy of the GPS positioning data, because these methods assume that the GPS position error is very small or normally distributed. However, the actual GPS positioning error often grows to the 10 m level, and its distribution changes depending on satellite positions and environmental conditions. To achieve more accurate camera positioning in outdoor environments, this study employs a simple assumption: the true GPS position lies within a certain range of the observed GPS position, and the size of that range depends on the GPS positioning accuracy. Concretely, the proposed method estimates camera parameters by minimizing an energy function defined by the reprojection error and a penalty term for GPS positioning.
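An energy of the kind described above (reprojection error plus a GPS penalty that activates only when the camera leaves the positioning-accuracy range) can be sketched as follows. The function names, the quadratic penalty form, and the `weight` parameter are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def energy(cam_pos, points3d, obs2d, project, gps_pos, gps_radius, weight=1.0):
    """Sketch: reprojection error plus a penalty when the camera position
    leaves the sphere of radius gps_radius around the observed GPS position."""
    # Reprojection term: squared pixel distance between observed feature
    # positions and the projections of their 3D points.
    reproj = sum(np.sum((project(cam_pos, X) - x) ** 2)
                 for X, x in zip(points3d, obs2d))
    # Penalty term: zero inside the accuracy range, quadratic outside,
    # encoding the assumption that the true position lies within the range.
    d = np.linalg.norm(cam_pos - gps_pos)
    penalty = weight * max(0.0, d - gps_radius) ** 2
    return reproj + penalty
```

In a full pipeline this energy would be minimized over all camera poses with a nonlinear least-squares solver; the sketch only shows the cost's shape.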


International Conference on Image Processing | 2013

Detection of 3D points on moving objects from point cloud data for 3D modeling of outdoor environments

Tsunetake Kanatani; Hideyuki Kume; Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya

A 3D modeling technique for an urban environment can be applied to several applications such as landscape simulations, navigation systems, and mixed reality systems. In this field, the target environment is first measured using several types of sensors (laser rangefinders, cameras, GPS sensors, and gyroscopes). A 3D model of the environment is then constructed based on the results of the 3D measurements. In this 3D modeling process, 3D points that exist on moving objects become obstacles or outliers that prevent the construction of an accurate 3D model. To solve this problem, we propose a method for detecting 3D points on moving objects in 3D point cloud data based on photometric consistency and knowledge of the road environment. In our method, 3D points on moving objects are detected based on luminance variations obtained by projecting the 3D points onto omnidirectional images. After candidate points are detected based on this evaluation value, the detection is refined using prior knowledge of the road environment.
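The photometric-consistency test can be sketched as follows: a static 3D point should have similar luminance in every frame it projects into, so strong variation suggests a moving object. The variance statistic and threshold here are hypothetical stand-ins for the paper's evaluation value, and the projection step is omitted:

```python
import numpy as np

def moving_point_mask(luminances, var_threshold):
    """Flag 3D points whose luminance varies strongly across frames.

    luminances: (n_points, n_frames) array of intensities sampled by
    projecting each 3D point into each omnidirectional image.
    Points on static surfaces should look similar in every frame;
    a large variance suggests the point lay on a moving object.
    """
    variances = np.var(np.asarray(luminances, dtype=float), axis=1)
    return variances > var_threshold
```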


Computer Vision and Image Understanding | 2015

Bundle adjustment using aerial images with two-stage geometric verification

Hideyuki Kume; Tomokazu Sato; Naokazu Yokoya

Highlights: A new SfM pipeline that uses aerial images as external references is proposed. Good matches between ground and aerial images are found by two-stage verification. Consistency of orientation and scale from a feature descriptor is locally verified. Outliers are removed by global verification with sampling-based bundle adjustment.

In this paper, a new structure-from-motion pipeline for ground-view images is proposed that uses feature points on an aerial image as references for removing accumulative errors. The challenge here is to design a method for discriminating correct matches from unreliable matches between ground-view images and an aerial image. If we depend only on local image features, it is not possible in principle to remove all the incorrect matches, because repetitive and/or similar patterns, such as road signs, frequently exist. To overcome this difficulty, we employ geometric consistency verification of matches using a RANSAC scheme that comprises two stages: (1) sampling-based local verification focusing on the orientation and scale information extracted by a feature descriptor, and (2) global verification using camera poses estimated by bundle adjustment on sampled matches.
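The first (local) verification stage might look roughly like the following one-point RANSAC over orientation and scale offsets: sample one match as a hypothesis and keep the largest set of matches that agree with it. The tolerance values and the one-point hypothesis form are assumptions for illustration, not the paper's settings:

```python
import math
import random

def local_verify(matches, angle_tol=0.3, log_scale_tol=0.4, iters=50, seed=0):
    """Stage-1 (local) verification sketch in the spirit of the RANSAC
    scheme above. matches: list of (orientation_difference, scale_ratio)
    pairs extracted from a feature descriptor."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        h_theta, h_scale = rng.choice(matches)   # one-match hypothesis
        inliers = [m for m in matches
                   # wrapped angular difference against the hypothesis
                   if abs(math.atan2(math.sin(m[0] - h_theta),
                                     math.cos(m[0] - h_theta))) < angle_tol
                   # scale ratios compared in log space
                   and abs(math.log(m[1] / h_scale)) < log_scale_tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```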


Intelligent Vehicles Symposium | 2014

Evaluation of image processing algorithms on vehicle safety system based on free-viewpoint image rendering

Akitaka Oko; Tomokazu Sato; Hideyuki Kume; Takashi Machida; Naokazu Yokoya

Development of algorithms for vehicle safety systems, which support safe driving, takes a long time and a huge cost because it requires an evaluation stage in which a huge number of combinations of possible driving situations must be evaluated using videos captured beforehand in real environments. In this paper, we address this problem by using free-viewpoint images instead of real images. More concretely, we generate free-viewpoint images from a combination of a 3D point cloud measured by laser scanners and an omnidirectional image sequence acquired in a real environment. We basically rely on the 3D point cloud to obtain geometrically correct virtual-viewpoint images. In order to remove the holes caused by unmeasured regions of the 3D point cloud and to remove false silhouettes in surface reconstruction, we have developed a technique for free-viewpoint image generation that uses both a 3D point cloud and depth information extracted from images. In the experiments, we evaluated our framework with a white-line detection algorithm, and the results show the applicability of free-viewpoint images for the evaluation of such algorithms.
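The point-cloud half of such a rendering pipeline can be sketched as a generic project-and-z-buffer step: transform the cloud into a virtual pinhole camera and keep the nearest depth per pixel. This is a textbook point-based rendering sketch, not the authors' implementation, and it omits the hole filling and image-based depth refinement described above:

```python
import numpy as np

def render_depth(points, R, t, K, width, height):
    """Project a 3D point cloud into a virtual pinhole camera (R, t, K)
    and keep the nearest depth per pixel (z-buffering).
    points: (N, 3) world coordinates."""
    depth = np.full((height, width), np.inf)
    cam = points @ R.T + t                 # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]               # keep points in front of the camera
    uvw = cam @ K.T                        # pinhole projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, z in zip(u[ok], v[ok], cam[ok][:, 2]):
        depth[vi, ui] = min(depth[vi, ui], z)   # nearest point wins
    return depth
```

Pixels never hit by a point remain at infinity; those are exactly the holes the paper's image-based depth information is meant to fill.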


Society of Instrument and Control Engineers of Japan | 2014

Detection of moving objects from point cloud data using photometric consistency and prior knowledge of road environment

Tsunetake Kanatani; Hideyuki Kume; Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya

3D models of urban environments are constructed from data measured using several types of sensors, such as laser rangefinders, cameras, GPS sensors, and gyroscopes. In this 3D modeling process, 3D points on moving objects become obstacles that prevent the construction of an accurate 3D model. To solve this problem, this paper proposes a method for detecting and removing 3D points on moving objects from 3D point cloud data based on photometric consistency and knowledge of the road environment. In our method, 3D points on moving objects are detected based on luminance variations computed by projecting the 3D points onto omnidirectional images. After candidates for points on moving objects are detected, points inside regions of a certain size, determined by prior knowledge of the road environment, are removed. We show the effectiveness of the method through experiments in a real outdoor environment.


IPSJ Transactions on Computer Vision and Applications | 2013

6-DOF Camera Pose Estimation Using Reference Points on an Aerial Image without Altitude Information

Taiki Sekii; Tomokazu Sato; Hideyuki Kume; Naokazu Yokoya

A new method for estimating a six-degrees-of-freedom camera pose for a ground-view image using reference points on an aerial image is presented. Unlike typical PnP problems, altitude information is not available for the reference points in our case. The camera pose is estimated by minimizing a cost function defined as the sum of squared distances between observed 2D positions of reference points on a ground-view image and corresponding lines that are projections of 3D vertical lines passing through 2D reference points on an aerial image. The accuracy of the proposed method is evaluated quantitatively in both simulation and real environments. The practical applicability of the proposed method is demonstrated by generating AR images from aerial and ground-view images downloaded from Google Maps and Flickr.
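The cost function can be sketched directly: for each observed 2D reference point and the 2D line obtained by projecting the corresponding vertical 3D line, sum the squared point-to-line distances. The homogeneous line representation `(a, b, c)` is a standard choice for illustration, not necessarily the paper's parameterization (the projection of the vertical lines, which depends on the camera pose being optimized, is omitted):

```python
import math

def pose_cost(obs_points, lines):
    """Sum of squared distances between observed 2D reference points on
    the ground-view image and their corresponding projected 2D lines.
    lines: (a, b, c) coefficients with a*x + b*y + c = 0."""
    total = 0.0
    for (x, y), (a, b, c) in zip(obs_points, lines):
        d = (a * x + b * y + c) / math.hypot(a, b)  # signed point-line distance
        total += d * d
    return total
```

In the full method, the line coefficients are functions of the unknown camera pose, and this cost is minimized over the pose.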


Journal of Machine Vision and Applications | 2013

Vehicle Localization along a Previously Driven Route Using Image Database

Hideyuki Kume; Arne Suppé; Takeo Kanade


IIEEJ Transactions on Image Electronics and Visual Computing | 2015

Removal of Moving Objects from Point Cloud Data for 3D Modeling of Outdoor Environments

Tsunetake Kanatani; Hideyuki Kume; Takafumi Taketomi


International Conference on Computer Vision Theory and Applications | 2014

Sampling based bundle adjustment using feature matches between ground-view and aerial images

Hideyuki Kume; Tomokazu Sato; Naokazu Yokoya


The Journal of the Institute of Image Electronics Engineers of Japan | 2014

Extrinsic Camera Parameter Estimation from Video Images Considering GPS Positioning Confidence

Hideyuki Kume; Tetsuji Anai; Tomokazu Sato; Takafumi Taketomi; Nobuo Kochi; Naokazu Yokoya

Collaboration

Top co-authors of Hideyuki Kume:

Takafumi Taketomi, Nara Institute of Science and Technology
Taiki Sekii, Nara Institute of Science and Technology
Arne Suppé, Carnegie Mellon University
Takeo Kanade, Carnegie Mellon University