Publications
Featured research published by Yoshio Nagashima.
Visual Communications and Image Processing | 1990
Hiroshi Agawa; Gang Xu; Yoshio Nagashima; Fumio Kishino
We have studied a stereo-based approach to three-dimensional face modeling and the reconstruction of facial images virtually viewed from different angles. This paper describes the system, in particular its image analysis and facial shape feature extraction techniques, which use information about the color and position of the face and its components together with image histogram and line segment analysis. Using these techniques, the system extracts facial features precisely and automatically, independent of facial image size and face tilt. In our system, input images viewed from the front and side of the face are processed as follows. The input images are first transformed into a set of color pictures with significant features. Regions are segmented by thresholding or slicing after analyzing the histograms of the pictures. Using knowledge about the color and position of the face, the face and hair regions are obtained and the facial boundaries extracted. Feature points along the extracted profile are located using the amplitude and sign of the curvature together with knowledge about the distances between feature points. In the facial areas that contain facial components, regions are segmented again by the same techniques, using color information for each component, and the component regions are recognized using knowledge of facial component positions. In each region, the pictures are filtered with differential operators selected according to the picture and region, thinned images are obtained from the filtered images by image processing and line segment analysis, and feature points of the front and side views are extracted. Finally, differences in size, position, and tilt between the two input images are compensated for by matching the common feature points in the two views, yielding the three-dimensional data of the feature points and the boundaries of the face. Two base face models, representing a typical Japanese man and woman, are prepared, and the model of the matching sex is deformed linearly with the 3D data from the extracted feature points and boundaries. Images virtually viewed from different angles are then reconstructed by mapping the facial texture onto the modified model.
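The profile feature step (curvature amplitude and sign, constrained by distances between feature points) can be illustrated with a short sketch. This is a minimal reconstruction of the general technique, not the authors' implementation; the function names, the toy profile, and the min_separation spacing rule are assumptions standing in for the paper's knowledge about feature-point distances.

```python
import numpy as np

def signed_curvature(x, y):
    """Signed curvature of a sampled planar curve via finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

def profile_feature_points(x, y, min_separation=10):
    """Keep local maxima of |curvature|, enforcing a minimum index spacing
    between accepted points (a stand-in for the paper's distance knowledge)."""
    k = np.abs(signed_curvature(x, y))
    picked = []
    for i in np.argsort(k)[::-1]:          # strongest curvature first
        if all(abs(i - j) >= min_separation for j in picked):
            picked.append(i)
    return sorted(picked)

# Toy profile: an arc with sharp bumps playing the role of nose tip and chin.
t = np.linspace(0, np.pi, 200)
x, y = np.cos(t) + 0.2 * np.abs(np.sin(4 * t)), np.sin(t)
print(profile_feature_points(x, y)[:5])
```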
Visual Communications and Image Processing | 1992
Yasuichi Kitamura; Yoshio Nagashima; Jun Ohya; Fumio Kishino
We have studied the generation of realistic computer-graphics facial actions synchronized with actual facial actions. This paper describes a method of extracting facial feature points and reproducing facial actions for a virtual space teleconferencing system that achieves a realistic virtual presence. First, we need an individual facial wire frame model, built either with a 3D digitizer or from front and side images of the face. Second, we trace the feature points around the eyes and the mouth by watching the eye and mouth regions: when they move, the image intensity changes, which lets us locate the eyes and the mouth. The deformation of the facial skin cannot be extracted from facial action images; tracing the eye and mouth regions in the front view of the face yields only the movement of these regions in 2D space. We therefore propose a new hierarchical wire frame model that can represent facial actions, including wrinkles. The lower layer of the wire frame moves according to the movement of the feature points; the upper layer slides over the lower layer and is deformed based on the lower layer's movement. By applying this method to a telecommunication system, we confirm highly realistic facial action in virtual space.
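The tracing step rests on the observation quoted above: when the eyes or mouth move, the image intensity in those regions changes. Below is a minimal frame-differencing sketch of that idea; the threshold values, the flood-fill blob labeling, and the function name are illustrative assumptions, not the paper's method.

```python
import numpy as np

def moving_regions(prev_frame, cur_frame, threshold=25, min_pixels=30):
    """Bounding boxes (top, left, bottom, right) of regions whose intensity
    changed between frames -- a crude stand-in for locating the eye and
    mouth regions.  Blobs are grown with a simple flood fill."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    seen = np.zeros(mask.shape, dtype=bool)
    boxes = []
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        stack, pix = [(r, c)], []
        while stack:
            i, j = stack.pop()
            if 0 <= i < mask.shape[0] and 0 <= j < mask.shape[1] \
                    and mask[i, j] and not seen[i, j]:
                seen[i, j] = True
                pix.append((i, j))
                stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        if len(pix) >= min_pixels:
            rows, cols = zip(*pix)
            boxes.append((min(rows), min(cols), max(rows), max(cols)))
    return boxes

# Two synthetic grayscale frames: a bright blob appears in the second.
a = np.zeros((60, 60), dtype=np.uint8)
b = a.copy()
b[10:20, 10:25] = 200
print(moving_regions(a, b))   # -> [(10, 10, 19, 24)]
```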
Visual Communications and Image Processing | 1993
Yoshio Nagashima; Gen Suzuki
To support various kinds of long-distance collaboration through visual telecommunication, it is necessary to transmit visual information about the participants and the topical materials. When people collaborate in a shared workspace, they use visual cues such as facial expressions and eye movement, so realizing a sense of coexistence in a collaborative workspace requires supporting these cues; it is therefore important that the facial images be large enough to be useful. During collaboration, especially dynamic activities such as equipment operation or lectures, participants often move within the workspace, and the more frequently or widely they move, the greater the need for automatic human tracking. Based on the subject's movement area and the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results with a real-time tracking system show that the extracted area tracks the movement of the human head fairly well.
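The abstract does not detail the two tracking methods, but the trade-off it names (movement area versus resolution of the extracted area) suggests re-centering the view only when the subject nears its edge. The sketch below shows that re-centering logic alone; it is a hypothetical reading, and the function name, motion-centroid subject estimate, and margin policy are all assumptions.

```python
import numpy as np

def track_step(motion_mask, view_center, view_size, margin=0.2):
    """One step of a simplified tracker: hold the current view while the
    subject (centroid of moving pixels) stays inside it, and re-center
    only when the subject drifts past the margin -- the analogue of
    cropping from memory versus steering the camera."""
    view_center = np.asarray(view_center, dtype=float)
    ys, xs = np.nonzero(motion_mask)
    if len(xs) == 0:
        return view_center                      # no motion: hold the view
    subject = np.array([ys.mean(), xs.mean()])  # centroid of moving pixels
    half = np.asarray(view_size, dtype=float) / 2.0
    if np.any(np.abs(subject - view_center) > (1.0 - margin) * half):
        return subject                          # re-center on the subject
    return view_center

# The subject's head moves toward the right edge of a 120x160 view.
mask = np.zeros((120, 160), dtype=bool)
mask[30:40, 150:160] = True
print(track_step(mask, (60, 80), (120, 160)))   # -> [ 34.5 154.5]
```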
Visual Communications and Image Processing | 1989
Gang Xu; Hiroshi Agawa; Yoshio Nagashima; Yukio Kobayashi
The goal of this research is to generate three-dimensional facial models from facial images and to synthesize images of the models virtually viewed from different angles, an integral component of the ATR virtual space conferencing system project. We take a stereo-based approach. Since there is a great gap between the images and a 3D model, we argue that it is necessary to have a base face model to provide a framework. The base model is built by carefully selecting and measuring a set of points on the extremal boundaries that can be readily identified from the stereo output, and another set of points inside the boundaries that can be easily determined given the boundary points. A front view and a side view of a face are employed. First, the extremal boundaries are extracted or interpolated, and features such as the eyes, nose, and mouth are extracted. The extracted features are then matched between the two images and their 3D positions calculated. Using these 3D data, the prepared base face model is modified to approximate the face. Finally, the points on the modified 3D model are assigned intensity values derived from the original stereo images, and images are synthesized assuming new virtual viewing angles. The originality and significance of this work lie in the fact that the system can generate a face model without a human operator's interaction, which other conventional face modeling techniques require.
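The geometric core of combining a front view and a side view is easy to state: under ideal orthographic projection the front view supplies (x, y) and the side view supplies (z, y), with the shared y coordinate used to match features. The sketch below shows only that idealized triangulation; real use would need the calibration and compensation steps the paper describes, and the coordinates are hypothetical.

```python
import numpy as np

def reconstruct_3d(front_pt, side_pt):
    """Recover a 3D point from matched features in two orthogonal views,
    assuming ideal orthographic projection: the front view gives (x, y),
    the side view gives (z, y).  Real cameras would need the calibration
    and size/tilt compensation described in the paper."""
    (x, y_front), (z, y_side) = front_pt, side_pt
    y = 0.5 * (y_front + y_side)   # shared coordinate, averaged for noise
    return np.array([x, y, z])

# A hypothetical nose-tip feature matched across the two views.
print(reconstruct_3d((120.0, 200.0), (85.0, 203.0)))   # -> [120. 201.5 85.]
```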
Archive | 1995
Gen Suzuki; Shohei Sugawara; Hiroya Tanigawa; Machio Moriuchi; Yoshio Nagashima; Yasuhiro Nakajima; Hiroyuki Arita; Yumi Murakami
Archive | 2004
Takanori Hishiki; Tomoaki Komuro; Yoshio Nagashima; Satoshi Sakuma; Hajime Sakurai; Kazuto Usuda
IEICE Transactions on Information and Systems | 1994
Shohei Sugawara; Gen Suzuki; Yoshio Nagashima; Michiaki Matsuura; Hiroya Tanigawa; Machio Moriuchi
Archive | 2000
Masaru Ando; Toru Ishihara; Masao Masugi; Yumi Murakami; Yoshio Nagashima; Hitoshi Sato; Kotaro Shinkawa
Archive | 2004
Kenichi Asasaka; Tomoaki Komuro; Yoshio Nagashima; Satoshi Sakuma; Machiko Suzuki