Yoshihiko Nomura
Nagoya University
Publication
Featured research published by Yoshihiko Nomura.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992
Yoshihiko Nomura; Michihiro Sagara; Hiroshi Naruse; Atsushi Ide
A simple and useful calibration method for a TV camera with a high-distortion lens is presented. The parameters to be calibrated are the effective focal length, the one-pixel width on the image plane, the image distortion center, and the distortion coefficient. A simple-pattern calibration chart composed of parallel straight lines is introduced as the calibration reference. An ordinary 2D model fitting is decomposed into two 1D model fittings along the column and row of the frame buffer passing through the image distortion center, by exploiting the point symmetry of the image distortion. Some parameters associated with the calibration chart are eliminated by setting up the chart precisely and by exploiting the negligibly low distortion near the image distortion center. The number of unknown parameters to be calibrated is thus drastically reduced, enabling simple and practical calibration. The effectiveness of the proposed method is confirmed by experiment.
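The decomposition into 1D fittings is easiest to picture for a single radial distortion coefficient. The Python sketch below, which is a simplification and not the paper's exact formulation, fits such a coefficient along one image row through an assumed distortion center; the chart-line spacing, noise level, and coefficient value are made up for illustration.

```python
# Minimal sketch (not the paper's exact formulation): fitting a single radial
# distortion coefficient k along one image row that passes through the
# distortion center, assuming the center (cx) and the true, equally spaced
# positions of the chart lines are known.
import numpy as np
from scipy.optimize import least_squares

cx = 320.0                                  # assumed distortion center (x) in pixels
x_true = cx + np.arange(-8, 9) * 30.0       # undistorted line positions on the row

def distort(x_u, k):
    """First-order radial distortion restricted to the row through the center."""
    r = x_u - cx
    return cx + r * (1.0 + k * r ** 2)

# Synthesize 'observed' positions with a known coefficient plus pixel noise.
k_true = 1.5e-6
x_obs = distort(x_true, k_true) + np.random.normal(0.0, 0.1, x_true.size)

# 1-D least-squares fit of k alone (the other parameters are held fixed here).
res = least_squares(lambda k: distort(x_true, k[0]) - x_obs, x0=[0.0])
print("estimated k:", res.x[0], " true k:", k_true)
```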
intelligent robots and systems | 1991
Dili Zhang; Yoshihiko Nomura; Seizo Fujii
The relation between the accuracy of calibrated TV camera parameters and the calibration condition is examined by applying the law of error propagation, and the optimal calibration condition is proposed, where an iterative method is applied to calibrate the parameter values. Furthermore, the variance of the estimated 3D information is determined quantitatively under the optimal calibration condition.
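As a rough illustration of the law of error propagation in a least-squares setting, the Python sketch below computes a parameter covariance from a residual Jacobian and propagates it to a derived 3D quantity; the Jacobian and gradient are placeholders, not the authors' camera model.

```python
# A minimal sketch of the law of error propagation as it is typically used for
# least-squares camera calibration: with residual Jacobian J and image-noise
# variance sigma^2, the parameter covariance is approximately sigma^2 (J^T J)^-1.
# The Jacobian below is a made-up placeholder, not the paper's camera model.
import numpy as np

sigma = 0.1                                  # assumed image-point noise (pixels)
J = np.random.randn(200, 4)                  # placeholder: 200 residuals, 4 parameters

cov_params = sigma ** 2 * np.linalg.inv(J.T @ J)
print("parameter standard deviations:", np.sqrt(np.diag(cov_params)))

# Propagating further to a derived quantity g(theta) via its gradient:
grad_g = np.array([1.0, 0.5, 0.0, -2.0])     # placeholder gradient of g w.r.t. parameters
var_g = grad_g @ cov_params @ grad_g
print("variance of derived 3-D quantity:", var_g)
```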
international conference on pattern recognition | 1996
Yoshihiko Nomura; Dili Zhang; Yuko Sakaida; Seizo Fujii
Based on model-driven image matching, the 3-D object pose is iteratively estimated. Shading images and edge images are synthesized from an object model and are matched individually with the input images by using a nonlinear least-squares method. The fusion of the shading and the edge information is achieved by choosing the better of the two image-matching results.
intelligent robots and systems | 1990
Yoshihiko Nomura; Hiroshi Naruse; Atsushi Ide; Michihiro Sagara
Presents a novel visual robot applied as a visual inspection system for manhole facilities. The system enables remote inspection from the ground and makes inspection work both efficient and safe. Furthermore, it draws a facilities plan automatically by digital picture processing, enabling the construction of an electronic database of plant records. The basic technologies of the system are highly accurate measurement algorithms for edge location, edge orientation, and TV camera calibration.
computer vision and pattern recognition | 1996
Yoshihiko Nomura; Dili Zhang; Yuko Sakaida; Seizo Fujii
Human beings seem to recognize objects through a kind of model matching, i.e., a virtual manipulation of mental images. This paper presents a 3-D object pose estimation method that simulates this human recognition scheme. The computer synthesizes not only an edge image but also a shading image from an object model. It then matches the two kinds of synthesized images with the input images individually by using a nonlinear least-squares method and estimates the pose parameter values. Finally, it chooses the better of the individually estimated poses. Thus, the fusion of the shading and the edge information is achieved. Since the two pieces of information complement each other, this method attains much higher robustness and accuracy of pose estimation than ordinary model-matching techniques that rely only on geometrical features such as vertices or edges.
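The fusion scheme can be illustrated with a toy Python sketch in which two made-up renderers stand in for the synthesized shading and edge images, each is matched to its input by nonlinear least squares, and the lower-residual estimate is kept; the renderers, noise levels, and one-parameter "pose" are assumptions of the sketch, not the paper's models.

```python
# Toy sketch of the fusion scheme: each 'renderer' maps a pose parameter to a
# synthetic 1-D profile standing in for a shading or edge image, both are
# matched to the input by nonlinear least squares from a coarse initial pose,
# and the better of the two estimates is kept.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-1.0, 1.0, 200)

def render_shading(pose):
    return np.exp(-((x - pose) ** 2) / 0.1)            # smooth intensity profile

def render_edge(pose):
    return 0.5 * (1.0 + np.tanh((x - pose) / 0.05))    # blurred step-edge profile

pose_true = 0.3
observed_shading = render_shading(pose_true) + np.random.normal(0, 0.05, x.size)
observed_edge = render_edge(pose_true) + np.random.normal(0, 0.05, x.size)

fits = []
for render, observed in [(render_shading, observed_shading),
                         (render_edge, observed_edge)]:
    r = least_squares(lambda p: render(p[0]) - observed, x0=[0.2])
    fits.append((r.cost, r.x[0]))                      # (half residual sum of squares, pose)

best_cost, best_pose = min(fits)                        # keep the better matching
print("estimated pose:", best_pose)
```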
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1988
Yoshihiko Nomura; Hiroshi Naruse
Noise that replaces, rather than perturbs, a signal is not amenable to the usual approaches of noise removal. Here, multiple images of a stationary scene are used to reduce random obscuration or dropout noise by the following method. For each image coordinate, a gray-level initial histogram is computed over all the images. The initial histograms are averaged for all image coordinates, and for each image coordinate a gray-level histogram corresponding to the object is derived by subtracting the averaged histogram from the initial one. The gray level of the object is obtained from the resulting object histogram. The effectiveness of this method is confirmed through experiments using a scene obscured by air bubbles in a water tank.
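A minimal Python sketch of this histogram-subtraction procedure follows; the 8-bit gray-level range, the stack layout, and the peak-picking step are assumptions of the sketch.

```python
# Minimal sketch of the histogram-subtraction idea, assuming an image stack of
# shape (n_images, H, W) with 8-bit gray levels and randomly occurring
# obscuration: per-pixel histograms are built over the stack, the scene-wide
# mean histogram is subtracted, and the object's gray level is read off the
# remaining peak.
import numpy as np

def restore(stack, n_levels=256):
    n, h, w = stack.shape
    # Per-pixel gray-level histograms over the image stack: shape (H, W, levels).
    hist = np.zeros((h, w, n_levels))
    for g in range(n_levels):
        hist[:, :, g] = np.sum(stack == g, axis=0)
    # Average histogram over all image coordinates (one curve of length n_levels).
    mean_hist = hist.mean(axis=(0, 1))
    # Subtract to isolate each pixel's object contribution, then pick its peak.
    object_hist = np.clip(hist - mean_hist, 0.0, None)
    return np.argmax(object_hist, axis=2).astype(np.uint8)

# Example: restored = restore(np.stack(list_of_noisy_frames))
```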
Applications in Optical Science and Engineering | 1993
Yoshihiko Nomura; Yujiro Harada; Seizo Fujii
This paper presents a simple and robust pattern matching algorithm that works at the image-data level and requires no feature extraction. A model picture is transformed into an estimated picture, and the estimated picture is matched to the actually input picture. Both a geometrical affine transformation and a linear gray-level transformation are considered, and the transformation parameters relating to rotation, translation, expansion, and brightness are estimated using a statistical optimization technique, i.e., an iterative nonlinear least-squares method in which the residual sum of squares between the actually input picture and the estimated picture serves as the evaluation function. The characteristic of the proposed method is that the parameters are estimated by linear matrix calculations, so the computation is markedly simplified and can be processed in parallel for all pixels. The matrices are easily calculated from the gray level and its spatial derivatives in the horizontal and vertical directions in the model picture, together with the gray level in the actually input picture. Experiments on a simple pattern and a complicated one confirm that the translation parameter is estimated with an accuracy of approximately 0.1 pixel. The dynamics of the parameter estimation are also examined.
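The core of such gradient-based matching is a linearized least-squares update built from spatial derivatives. The Python sketch below restricts it to pure 2D translation, a simplification of the full affine-plus-gray-level model described above, estimating the shift from the model picture's derivatives and the gray-level residual.

```python
# Minimal sketch of the linearized update at the heart of gradient-based image
# matching, restricted to pure 2-D translation: the residual between the model
# picture and the input picture is expanded to first order in the shift, and
# the shift follows from a 2x2 linear system built from spatial derivatives.
import numpy as np

def estimate_shift(model, observed):
    gy, gx = np.gradient(model.astype(float))         # spatial derivatives of the model picture
    r = observed.astype(float) - model.astype(float)  # gray-level residual
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = np.array([np.sum(gx * r), np.sum(gy * r)])
    return np.linalg.solve(A, b)                      # (dx, dy), one-iteration estimate

# Example: a smooth synthetic picture shifted by one pixel in x comes back as ~(1, 0).
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
print(estimate_shift(img, np.roll(img, -1, axis=1)))
```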
Systems and Computers in Japan | 1992
Hiroshi Naruse; Atsushi Ide; Mitsuhiro Tateda; Yoshihiko Nomura
This paper proposes a new model-fitting-type edge feature measurement method. Its characteristic is the introduction of a blurred edge model that matches well the gray-level pattern of an edge actually observed in an image. The blurred edge model is constructed using as parameters not only the edge features, i.e., the edge position and orientation within the pixel, but also the point spread function expressing the image degradation incurred during recording. Using this model, the gray levels of the pixels near the edge are calculated for various edge feature values, yielding a map from the edge features to the gray-level pattern. Next, the inverse map, which recovers the edge features from a gray-level pattern, is obtained in advance through learning with three-layer error-backpropagation neural networks. Using the obtained inverse map, the edge features are determined from the gray-level pattern of the actually observed image. Conventionally, the inverse map had to be obtained analytically, so the usable edge model was restricted to the step-edge type. With the proposed method, which utilizes neural networks, an arbitrary edge model optimal for an individual image recording device can be used; for this reason, the edge features can be determined precisely even from local information. Many measurement experiments with varied edge positions and orientations were performed, and the effectiveness of the method was confirmed.
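A toy Python sketch of the learning step is given below: 3x3 gray-level patterns are generated from a Gaussian-blurred step-edge model, and a small network is trained to invert the map from pattern to edge position and orientation. The blur width, window size, and network architecture are assumptions, and scikit-learn stands in for the paper's backpropagation network.

```python
# Toy sketch of the learning scheme: gray levels of a 3x3 neighbourhood are
# generated from a Gaussian-blurred step-edge model for many sub-pixel edge
# positions and orientations, and a small three-layer network is trained to
# invert the map (gray-level pattern -> edge position and orientation).
# The blur width, network size, and 3x3 window are assumptions of this sketch.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
u, v = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])   # pixel-centre offsets

def blurred_edge(position, angle, sigma=0.6):
    """Gray levels of a 3x3 window for an edge at 'position' along its normal."""
    signed_dist = u * np.cos(angle) + v * np.sin(angle) - position
    return norm.cdf(signed_dist / sigma).ravel()

# Sample the forward map on random edge features -> training set for the inverse map.
positions = rng.uniform(-0.5, 0.5, 5000)
angles = rng.uniform(-np.pi / 4, np.pi / 4, 5000)
X = np.array([blurred_edge(p, a) for p, a in zip(positions, angles)])
y = np.column_stack([positions, angles])

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
print(net.predict(blurred_edge(0.2, 0.1)[None, :]))      # roughly [0.2, 0.1]
```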
visual communications and image processing | 1989
Hiroshi Naruse; Yoshihiko Nomura; Michihiro Sagara
An important preprocess in computer vision is to measure the position of an object and to correct the lens distortion that hampers recognition of an object from its image. First, the slit-ray projection method is investigated and several ideas for increasing measurement accuracy are proposed. A regression to a normal distribution is carried out by the least-squares method on the pixel positions and their intensities, so the slit-ray center position can be determined with a high accuracy of 0.05 pixel. A method for correcting the distortion of the slit-ray intensity caused by non-uniform reflection is described; it compensates the intensity on the basis of a uniform reference light. Optimum measurement conditions for the slit-ray width and the regression range are obtained by examining the relation between intensity fluctuations and measurement error. To increase 3-D measurement accuracy, methods for calibrating various parameters are described. Next, by combining three ideas, i.e., referencing the shift quantities, processing pixel groups en bloc, and separating the transformation into two directions, the non-linear coordinate transformation time is reduced, which is effective for rapid correction of image distortion.
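The sub-pixel center estimation can be sketched as a least-squares regression of a normal (Gaussian) profile to one scan line's intensities, as in the Python sketch below; the profile width, amplitude, and noise level are made-up values for illustration.

```python
# Minimal sketch of recovering the slit-ray center with sub-pixel accuracy by
# least-squares regression of a normal (Gaussian) profile to the measured
# intensities along one scan line; width, amplitude, and noise are assumed.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, centre, width):
    return amplitude * np.exp(-((x - centre) ** 2) / (2.0 * width ** 2))

pixels = np.arange(0, 40, dtype=float)
true_centre = 19.37                                   # sub-pixel ground truth
intensity = gaussian(pixels, 200.0, true_centre, 2.5) + np.random.normal(0, 2.0, pixels.size)

params, _ = curve_fit(gaussian, pixels, intensity,
                      p0=[intensity.max(), pixels[np.argmax(intensity)], 2.0])
print("estimated centre:", params[1])                 # typically within a few hundredths of a pixel
```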
Optics, Illumination, and Image Sensing for Machine Vision VII | 1993
Dili Zhang; Yoshihiko Nomura; Seizo Fujii
A simple and accurate camera calibration method is presented, and the relation between the accuracy of the calibrated TV camera parameters and the calibration condition is examined by applying the law of error propagation. The optimal calibration condition is proposed, where an iterative method is applied to calibrate the parameter values. Furthermore, the variance of the estimated 3-D information is determined quantitatively under the optimal calibration condition. These results are confirmed through experiments.