Noriyoshi Okamoto
Kanto Gakuin University
Publications
Featured research published by Noriyoshi Okamoto.
Canadian Conference on Electrical and Computer Engineering | 2003
Seiichi Nagumo; Hiroaki Hasegawa; Noriyoshi Okamoto
In the majority of conventional papers, the extraction of forward vehicles for ITS (intelligent transport systems) is intended for use in the daytime. However, if accident prevention is taken into consideration, measures for nighttime, when more accidents occur, become indispensable. Therefore, this paper considers forward vehicle extraction in the daytime and at night separately. In the daytime, since vehicles contain many horizontal surfaces, vehicle extraction is performed by taking surfaces wider than a threshold value as the width of the vehicle region, while at night, taillight extraction is performed using the Cr component of the YCrCb color space. In light of recent research and advances in infrastructure, the work in this paper was conducted on the premise that accurate lane tracking can be performed easily, so that issue is not considered. Comparatively good results were obtained for vehicle extraction both in the daytime and at night. Future studies may investigate the applicability of the technique in a wider range of environments.
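For illustration only (not the authors' implementation), the following Python sketch, assuming OpenCV, thresholds the Cr channel of a YCrCb-converted frame to isolate candidate taillight regions at night; the threshold value and the morphological cleanup step are hypothetical choices.

```python
import cv2
import numpy as np

def extract_taillight_candidates(frame_bgr, cr_threshold=150):
    """Rough sketch of nighttime taillight extraction via the Cr channel.

    The Cr component of YCrCb responds strongly to red regions such as
    taillights; pixels above a (hypothetical) threshold are kept as
    candidate taillight areas.
    """
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1]                      # Cr channel
    _, mask = cv2.threshold(cr, cr_threshold, 255, cv2.THRESH_BINARY)
    # Small morphological opening to drop isolated noise pixels.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask

# Usage (assuming a captured night frame on disk):
# mask = extract_taillight_candidates(cv2.imread("night_frame.png"))
```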
Canadian Conference on Electrical and Computer Engineering | 1997
Noriyoshi Okamoto; Wenjie Chen; N. Iida; Toshi Minami
The paper proposes a new algorithm for extracting contour lines and feature points from profile images for automatic personal identification. First, the authors capture a side view of a human head against a dark background under uniform lighting with a CCD camera, then transform the input into a color-difference signal image. After enhancing the edges of the image using Sobel operators, they binarize the luminance level of each pixel to portray a black profile on a white background. Next, they differentiate the black profile, binarize the differentiated result, and obtain a contour line using a thinning operator. Finally, they encode the extracted contour line with Freeman's chain code and, using the encoded data, calculate the digital curvature in concave sections of the contour line to determine the concave feature points. They then draw straight lines connecting adjacent feature points and take the points on the contour line farthest from these lines as the convex feature points. In this way, the feature points of the profile can be obtained automatically and reliably.
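The edge-enhance / binarize / thin sequence could look roughly like the sketch below, assuming OpenCV and scikit-image; the kernel size and threshold are illustrative values, not the authors'.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def profile_contour(gray, bin_threshold=128):
    """Sketch of the edge-enhance / binarize / thin steps of profile
    contour extraction (parameters are illustrative)."""
    # Sobel edge enhancement in x and y, combined as gradient magnitude.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    mag = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Binarize so the profile edge stands out from the background.
    _, binary = cv2.threshold(mag, bin_threshold, 255, cv2.THRESH_BINARY)
    # Thin the binary edge image down to a one-pixel-wide contour line.
    thin = skeletonize(binary > 0)
    return thin.astype(np.uint8) * 255
```

The resulting one-pixel contour can then be traced and chain-coded for the curvature-based feature point detection described above.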
Canadian Conference on Electrical and Computer Engineering | 2004
Keita Torii; Noriyoshi Okamoto
In conventional fingerprint authentication systems for personal identification, identification rates are reduced when the surface of the fingerprint input device is soiled or damaged by contact. Moreover, two-value information on ridges and valleys is fundamentally insufficient to prevent impersonation (spoofing). We have examined the possibility of non-contact fingerprint authentication to address these problems. Problems caused by contact can be avoided, and information on the form and color of the finger can be acquired, if the input method is changed. In this study, a method for extracting the principal lines (valley lines) of a fingerprint from a color image of a fingertip obtained with a non-contact visual input is proposed as an approach to non-contact fingerprint authentication. In the proposed technique, a wavelet transform is first applied to the input image, and the edge components corresponding to the valley lines are separated from the subband-divided signal. Binarization and thinning are then performed on the edge components. This processing is applied to the Y and Z components of the XYZ color system, and the valley lines are extracted by combining the results. A non-contact input method offers the further advantage that the form of the finger can also be acquired. Although rotation processing of a fingerprint image is very complicated in conventional systems, effective use of form information makes this processing simple in the proposed technique. The central line of the finger is computed from the form information, and the axis of rotation is set at the fingertip. Accurate rotation is possible when the rotation is performed by an affine transform so that the central line becomes vertical.
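As a rough sketch of the wavelet step, assuming PyWavelets: a single-level 2-D DWT of one color channel, with the detail subbands combined and binarized as valley-line candidates. The wavelet family, level, and threshold are assumptions; the paper applies the processing to the Y and Z channels of the XYZ color system and follows it with thinning.

```python
import numpy as np
import pywt

def valley_line_edges(channel, wavelet="haar"):
    """Sketch of separating the edge (valley-line) component of a
    fingertip image channel with a 2-D wavelet transform."""
    # Single-level 2-D DWT: approximation cA plus horizontal, vertical,
    # and diagonal detail subbands. The detail subbands carry the
    # edge-like components corresponding to fingerprint valley lines.
    cA, (cH, cV, cD) = pywt.dwt2(channel.astype(np.float32), wavelet)
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)
    # Binarize the combined detail energy (threshold chosen heuristically).
    threshold = detail.mean() + detail.std()
    return (detail > threshold).astype(np.uint8)

# Usage: apply to the Y and Z channels of an XYZ-converted image and
# combine the two binary maps before thinning.
```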
Canadian Conference on Electrical and Computer Engineering | 2002
S. Aoki; Noriyoshi Okamoto
An encoding method intended to improve the performance of color image compression by wavelet transform is proposed. Its main elements are a zerotree structure for the high-frequency components and 5-dimensional vector quantization using the LBG algorithm. The range of quantization is examined experimentally, and it is shown that the range of zero-value quantization is almost the same as that of the vector quantization. Results of applying the method to color images are also described. The simulation results show the effectiveness of the proposed method, which unifies these techniques.
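A minimal sketch of LBG codebook training for 5-dimensional coefficient vectors is shown below; the initialization-by-splitting scheme is the standard LBG procedure, and the codebook size, perturbation factor, and iteration count are illustrative rather than the paper's settings.

```python
import numpy as np

def lbg_codebook(vectors, codebook_size=16, epsilon=0.01, n_iter=20):
    """Minimal LBG (generalized Lloyd) sketch for 5-dimensional vectors
    drawn from the wavelet high-frequency subbands."""
    codebook = vectors.mean(axis=0, keepdims=True)      # start from centroid
    while codebook.shape[0] < codebook_size:
        # Split every codeword into a perturbed pair, doubling the codebook.
        codebook = np.vstack([codebook * (1 + epsilon),
                              codebook * (1 - epsilon)])
        for _ in range(n_iter):
            # Assign each vector to its nearest codeword (brute force,
            # kept simple for clarity rather than speed).
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            # Move each codeword to the centroid of its assigned vectors.
            for k in range(codebook.shape[0]):
                members = vectors[nearest == k]
                if len(members) > 0:
                    codebook[k] = members.mean(axis=0)
    return codebook

# Usage: 5-D vectors formed from high-frequency wavelet coefficients.
# codebook = lbg_codebook(np.random.randn(10000, 5).astype(np.float32))
```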
Canadian Conference on Electrical and Computer Engineering | 1998
Wenjie Chen; Noriyoshi Okamoto; Takuya Minami
This paper proposes a new algorithm for automatic personal identification using contour lines and feature points extracted from human face profiles. As the decision function for identification, we use the norm of a 19-dimensional feature vector whose components are the weighted distances between pairs of feature points and the angles between the two lines connecting three consecutive feature points. The 11 feature points are extracted from the contour line of the input profile, expressed by Freeman's chain code, using the digital curvature of the line. The effects on identification accuracy of profile deformation caused by face panning, tilting, and mouth opening have been investigated in detail. To overcome these deformation effects, we propose registering three profiles per person: a normal head-position profile, a tilted profile, and a panned profile. Simulation results for 68 subjects show an identification accuracy of 91.7% for the same persons and a discrimination accuracy of 99.9% for different persons, demonstrating the effectiveness of the proposed algorithm.
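With 11 ordered feature points, 10 inter-point distances plus 9 angles between consecutive point triples give a 19-dimensional vector, which matches the dimensionality above; the exact pairing and weights in the paper may differ. A hedged sketch:

```python
import numpy as np

def profile_feature_vector(points, weights=None):
    """Illustrative construction of a feature vector from ordered profile
    feature points (2-D coordinates): consecutive distances and the
    angles at each interior point."""
    points = np.asarray(points, dtype=np.float64)
    # Distances between consecutive feature points.
    diffs = np.diff(points, axis=0)
    dists = np.linalg.norm(diffs, axis=1)
    if weights is None:
        weights = np.ones_like(dists)
    # Angles between the two lines joining each three consecutive points.
    v1, v2 = diffs[:-1], diffs[1:]
    cos = (v1 * v2).sum(axis=1) / (np.linalg.norm(v1, axis=1) *
                                   np.linalg.norm(v2, axis=1))
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return np.concatenate([weights * dists, angles])   # 10 + 9 = 19 values

def is_same_person(fv_input, fv_registered, threshold=1.0):
    """Decision by the norm of the feature-vector difference
    (the threshold is hypothetical)."""
    return np.linalg.norm(fv_input - fv_registered) < threshold
```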
Canadian Conference on Electrical and Computer Engineering | 2005
Daisuke Takahashi; Noriyoshi Okamoto
In recent years, much research on biometrics using parts of the human body has been reported. The human face is the part most widely used in daily life for personal identification. Information obtained from the face reveals a person's identity and intentions, which makes it an important site: the direction of the face, in particular, is used to measure a person's attention or to compensate posture in personal authentication. Conventional posture estimation has mostly used either only a 2-dimensional (2-D) image or only 3-dimensional (3-D) data, and both approaches have problems. Estimation at fine angles is inaccurate when only a 2-D image is used, while a special camera is required when only 3-D data are used; moreover, the result may differ from the user's intention because of inaccurate estimates in either case. With 2-D images alone, the posture cannot be estimated under poor lighting conditions and the range of angles (facial directions) that can be estimated is limited; with 3-D data alone, the system sometimes cannot adapt to rapidly changing images. This paper presents an accurate and convenient technique that estimates posture from 2-D images using previously stored 3-D reference images from which features have been extracted. The number of rotation angles to be evaluated is reduced by forecast processing, which accelerates the posture estimation. A key feature of the technique is that it performs posture estimation with robust accuracy under unfavorable conditions such as spectacles, dirt on the lens, defocus, and lighting changes.
Canadian Conference on Electrical and Computer Engineering | 2004
Daisuke Takahashi; Noriyoshi Okamoto
Numerous studies in recent years have analyzed gestures and sounds from video and used them for man-machine interfaces. Human posture estimation by conventional techniques uses only two-dimensional (2D) images or only 3D data. However, estimation at fine angles is difficult using 2D images alone, while special photography equipment is needed to gather 3D data. In this paper, a technique for performing facial posture estimation from 2D video using a 3D reference image is proposed. In this technique, the features of a human face are extracted from a 3D model of the face obtained beforehand with a range finder. The 3D reference images after feature extraction are accumulated in a database. The facial posture in a video sequence is then estimated by comparing the 2D facial region extracted from the video with the accumulated 3D reference images. In the posture estimation processing, calibration is first performed on the facial region extracted from the 2D input image using a circular mask in order to normalize the size and inclination of the face. Furthermore, a movement vector is computed from the relation between the previous and subsequent processing frames, and the face motion in the current frame is predicted from this result. Based on the prediction, posture estimation is accelerated by thinning out the rotation angles of the 3D reference that need to be examined. The key feature of this technique is its ability to perform posture estimation with sufficient accuracy, robust to the size and inclination of the face.
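A simple stand-in for the movement-vector forecast and angle thinning is sketched below: a constant-velocity prediction of the next pose, followed by restriction of the 3D-reference rotation angles to a window around the prediction. The search range and step are assumptions, not the paper's values.

```python
import numpy as np

def predict_next_pose(prev_pose, prev_prev_pose):
    """Constant-velocity prediction of the next face pose (yaw, pitch, roll)
    from the two preceding frames; a simple stand-in for the paper's
    movement-vector forecast."""
    velocity = np.asarray(prev_pose) - np.asarray(prev_prev_pose)
    return np.asarray(prev_pose) + velocity

def candidate_angles(predicted, search_range=10.0, step=2.0):
    """Thin out the rotation angles of the 3D reference: only poses within
    +/- search_range degrees of the prediction are matched against the
    2D frame (range and step are illustrative)."""
    offsets = np.arange(-search_range, search_range + step, step)
    return [tuple(predicted + np.array([y, p, r]))
            for y in offsets for p in offsets for r in offsets]
```

Restricting the matching to this predicted window is what allows the estimation to keep up with video-rate input.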
Canadian Conference on Electrical and Computer Engineering | 2001
Atushi Kurosawa; Noriyoshi Okamoto
Research on and application of face recognition have become important, as face recognition is widely used in fields such as security systems and databases. We propose a method for detecting facial regions in any given frame of a moving picture. Moving pictures unavoidably contain occluded facial regions; our method can extract distinct facial regions in moving scenes, including partially occluded faces. To detect facial regions, we utilize color, shape, and movement vectors. A computer simulation of the proposed method demonstrates its effectiveness on occluded facial regions, achieving an extraction rate of 87% over 1500 frames.
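One plausible way to combine the color and motion cues, not necessarily the authors', is a YCrCb skin-color mask gated by frame differencing; the skin ranges below are common heuristics and the motion threshold is hypothetical.

```python
import cv2
import numpy as np

def face_region_candidates(frame_bgr, prev_gray=None,
                           cr_range=(133, 173), cb_range=(77, 127)):
    """Sketch of combining a skin-color mask with a motion mask for
    face-region candidate detection."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
    skin = ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1])).astype(np.uint8) * 255
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        # Motion evidence from frame differencing keeps moving face regions
        # even when they are partially occluded.
        motion = cv2.absdiff(gray, prev_gray)
        _, motion = cv2.threshold(motion, 20, 255, cv2.THRESH_BINARY)
        skin = cv2.bitwise_and(skin, motion)
    # Return the current grayscale frame so the caller can pass it back in
    # as prev_gray for the next frame.
    return skin, gray
```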
IEEJ Transactions on Electronics, Information and Systems | 2007
Daisuke Takahashi; Noriyoshi Okamoto
In this study, we aim at posture estimation that takes the influence of lighting changes into account, and we propose an algorithm that handles such unfavorable conditions by using 3-D data together with 2-D video. A decrease in the forward-gaze rate caused by the driver's psychological state or fatigue is a problem for safe driving. One countermeasure is to detect prolonged looking away or looking down by estimating the driver's facial posture from images. However, conventional image-based posture estimation techniques are strongly affected by lighting conditions and by the size of the database. The proposed technique focuses on 3-D data and texture information, which are robust to lighting changes. By constructing the database so as to resolve these problems, an average accuracy of 89% and an estimation speed of 6 frames/s were obtained.
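As an illustration of the driver-monitoring application mentioned above, the sketch below flags prolonged looking-away or looking-down episodes from per-frame yaw/pitch estimates; the angle limits and minimum duration are hypothetical.

```python
import numpy as np

def inattentive_frames(yaw_deg, pitch_deg, yaw_limit=30.0, pitch_limit=20.0,
                       min_duration=15):
    """Illustrative detection of prolonged looking-away / looking-down from
    per-frame pose estimates (limits and duration are assumptions)."""
    yaw = np.abs(np.asarray(yaw_deg)) > yaw_limit        # looking away
    pitch = np.asarray(pitch_deg) < -pitch_limit         # looking down
    off_road = yaw | pitch
    # Flag frames belonging to a run of at least min_duration off-road frames.
    flags = np.zeros_like(off_road)
    run = 0
    for i, v in enumerate(off_road):
        run = run + 1 if v else 0
        if run >= min_duration:
            flags[i - min_duration + 1:i + 1] = True
    return flags
```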
Canadian Conference on Electrical and Computer Engineering | 2006
Daisuke Takahashi; Noriyoshi Okamoto
Recent studies of biometrics concerned with recognizing human postures and domain-specific features for personal authentication using 3-dimensional (3D) face-profiling models have attracted much attention. However, changes in posture and lighting remain critical issues in face profiling. The human face has been, and remains, the biometric most extensively exploited for personal authentication in daily life. Color and brightness, which vary with the direction and intensity of illumination, critically influence 3D appearance and are difficult to handle in practice. In addition, changes in the face domain may be abrupt, instantaneous, and unpredictable, which complicates image detection and extraction and ultimately reduces recognition precision. In the present study, we developed an economically viable and technically reliable technique to accurately estimate the face posture under sudden changes in illumination. Our approach involves prior storage of 3D reference images with extracted features of the subjects for subsequent posture estimation of 2D movie images. The posture estimation processing predicts face movement with a motion vector and adjusts the estimate according to the texture status. Our investigations show that posture estimation remains independent of illumination changes, even at high speed, when the 3D reference images are rotated in sync with the posture estimation processing.