Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Kunio Fukunaga is active.

Publication


Featured research published by Kunio Fukunaga.


International Journal of Computer Vision | 2002

Natural Language Description of Human Activities from Video Images Based on Concept Hierarchy of Actions

Atsuhiro Kojima; Takeshi Tamura; Kunio Fukunaga

We propose a method for describing human activities from video images based on a concept hierarchy of actions. The major difficulty in transforming video images into textual descriptions is bridging the semantic gap between them, a task also known as the inverse Hollywood problem. In general, concepts of human events or actions can be classified by semantic primitives. By associating these concepts with semantic features extracted from video images, appropriate syntactic components such as verbs and objects are determined and then translated into natural language sentences. We also demonstrate the performance of the proposed method through several experiments.
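
The following toy sketch, not from the paper, illustrates the kind of concept-hierarchy matching the abstract describes: each action concept is defined by semantic primitives, and the most specific concept whose primitives are all present in the extracted features supplies the verb. The hierarchy, primitive names, and verbs are hypothetical.

```python
# Illustrative sketch only: a toy concept hierarchy of actions and a matcher
# that picks the most specific action concept whose semantic primitives are
# all present in the features extracted from video. The hierarchy, primitive
# names, and verbs below are hypothetical, not taken from the paper.

CONCEPT_HIERARCHY = {
    "move":     {"parent": None,     "primitives": {"location_change"},                        "verb": "move"},
    "walk":     {"parent": "move",   "primitives": {"location_change", "upright"},             "verb": "walk"},
    "approach": {"parent": "move",   "primitives": {"location_change", "distance_decreasing"}, "verb": "approach"},
    "handle":   {"parent": None,     "primitives": {"hand_near_object"},                       "verb": "handle"},
    "pick_up":  {"parent": "handle", "primitives": {"hand_near_object", "object_rising"},      "verb": "pick up"},
}

def depth(name):
    d = 0
    while CONCEPT_HIERARCHY[name]["parent"] is not None:
        name = CONCEPT_HIERARCHY[name]["parent"]
        d += 1
    return d

def select_action(features):
    """Return the verb of the deepest (most specific) matching concept."""
    matches = [n for n, c in CONCEPT_HIERARCHY.items() if c["primitives"] <= features]
    if not matches:
        return None
    return CONCEPT_HIERARCHY[max(matches, key=depth)]["verb"]

# Example: semantic features extracted from one video segment.
print(select_action({"location_change", "distance_decreasing"}))  # -> "approach"
```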


international conference on pattern recognition | 2000

Generating natural language description of human behavior from video images

Atsuhiro Kojima; Masao Izumi; Takeshi Tamura; Kunio Fukunaga

In visual surveillance applications, it is becoming popular to perceive video images and interpret them using natural language concepts. We propose an approach to generating natural language descriptions of human behavior appearing in real video images. First, the head region of a human, standing in for the whole body, is extracted from each frame. Using a model-based method, the three-dimensional pose and position of the head are estimated. Next, the trajectory of these parameters is divided into segments of monotonic motion. For each segment, we evaluate conceptual features such as the degree of change in pose and position and the relative distance to surrounding objects. By calculating the product of these feature values, the most suitable verb is selected and the other syntactic elements are supplied. Finally, natural language text is generated using machine translation techniques.
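
As a rough illustration of the verb-selection step (taking the product of conceptual feature values per segment), here is a minimal Python sketch. The feature functions, weights, and candidate verbs are assumptions for the example, not the paper's actual definitions.

```python
# Illustrative sketch only (not the paper's implementation): selecting the most
# suitable verb for a motion segment by taking the product of conceptual
# feature values. The verbs and feature functions are made-up examples.

import numpy as np

def feature_values(segment):
    """Map measured quantities of one motion segment to conceptual feature
    degrees in [0, 1]."""
    return {
        "pose_change":     min(1.0, abs(segment["pose_delta"]) / 90.0),  # degrees
        "position_change": min(1.0, segment["displacement"] / 2.0),      # metres
        "near_object":     np.exp(-segment["distance_to_object"]),       # closer -> 1
    }

# Each candidate verb weights the conceptual features it relies on.
VERB_MODELS = {
    "turn":    {"pose_change": 1.0, "position_change": 0.1, "near_object": 0.5},
    "walk to": {"pose_change": 0.2, "position_change": 1.0, "near_object": 1.0},
    "stand":   {"pose_change": 0.1, "position_change": 0.1, "near_object": 0.5},
}

def select_verb(segment):
    feats = feature_values(segment)
    scores = {
        verb: np.prod([feats[f] * w for f, w in weights.items()])
        for verb, weights in VERB_MODELS.items()
    }
    return max(scores, key=scores.get)

print(select_verb({"pose_delta": 5.0, "displacement": 1.8, "distance_to_object": 0.3}))  # -> "walk to"
```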


international conference on pattern recognition | 2000

Blackboard segmentation using video image of lecture and its applications

Masaki Onishi; Masao Izumi; Kunio Fukunaga

We propose a method for segmenting written regions on a blackboard in a lecture room using video images. We first detect static edges, i.e., edges whose locations in the image remain stationary. Next, we extract several rectangular regions in which these static edges are densely located. Finally, using fuzzy rules, the extracted rectangles are merged into contextual regions, where the letters and figures in each contextual region describe a single context. We apply our method to the automatic production of lecture video and achieve segmentation of written regions on the blackboard in lecture rooms.
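
A minimal sketch of the first step, detecting static edges as edge pixels that persist across frames, might look as follows; the gradient-based edge detector and persistence threshold are placeholders, not the paper's choices.

```python
# Illustrative sketch only: detecting "static edges" (edge pixels whose image
# locations stay fixed across frames). Frames are assumed to be greyscale
# numpy arrays; all thresholds are arbitrary.

import numpy as np

def edge_map(frame, thresh=30.0):
    """Crude gradient-magnitude edge detector (stand-in for Sobel/Canny)."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy) > thresh

def static_edges(frames, persistence=0.9):
    """Keep edge pixels present in at least `persistence` of the frames."""
    counts = np.zeros(frames[0].shape, dtype=float)
    for frame in frames:
        counts += edge_map(frame)
    return counts / len(frames) >= persistence

# Example with random frames (in practice these come from the lecture video).
frames = [np.random.randint(0, 255, (120, 160), dtype=np.uint8) for _ in range(30)]
mask = static_edges(frames)
print("static edge pixels:", int(mask.sum()))
```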


international conference on pattern recognition | 2004

Shooting the lecture scene using computer-controlled cameras based on situation understanding and evaluation of video images

Masaki Onishi; Kunio Fukunaga

We propose a computer-controlled camera-work system that shoots scenes by modeling professional cameramen's work and selects the best image among multiple video images as a switcher. We apply this system to shooting a lecture scene. First, our system estimates the teacher's action based on features of the teacher and the blackboard. Next, each camera is automatically directed to a shooting area based on the teacher's action. Finally, the system selects the best image among the multiple images according to an evaluation rule. Moreover, we have conducted experiments on shooting lecture scenes and have confirmed the effectiveness of our approach.
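
The switcher step could be sketched as below, assuming each camera view is scored by a simple weighted evaluation rule; the scoring terms and weights are invented for illustration and are not the paper's actual evaluation rule.

```python
# Illustrative sketch only: a software "switcher" that scores each camera's
# current frame with an evaluation rule and selects the best one. The terms
# (teacher visibility, blackboard coverage) and weights are assumptions.

def evaluate(view):
    """view: dict with crude per-camera measurements in [0, 1]."""
    return 0.6 * view["teacher_visibility"] + 0.4 * view["board_coverage"]

def switch(views):
    """Return the id of the camera whose frame scores highest."""
    return max(views, key=lambda cam_id: evaluate(views[cam_id]))

views = {
    "cam_wide":  {"teacher_visibility": 0.5, "board_coverage": 0.9},
    "cam_close": {"teacher_visibility": 0.9, "board_coverage": 0.4},
}
print(switch(views))  # -> "cam_close"
```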


international conference on pattern recognition | 2002

Textual description of human activities by tracking head and hand motions

Atsuhiro Kojima; Takeshi Tamura; Kunio Fukunaga

We propose a method for describing human activities from video images by tracking human skin regions: facial and hand regions. To detect skin regions robustly, three kinds of probabilistic information are extracted and integrated using Dempster-Shafer theory. The main difficulty in transforming video images into textual descriptions is bridging the semantic gap between them. By associating visual features of head and hand motion with natural language concepts, appropriate syntactic components such as verbs and objects are determined and translated into natural language.
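
For the evidence-fusion step, the sketch below shows Dempster's rule of combination over the two-hypothesis frame {skin, nonskin}; the three mass functions stand in for the paper's three kinds of probabilistic information, and their values are made up.

```python
# Illustrative sketch only: combining three independent skin-detection cues
# with Dempster's rule of combination. Each mass function assigns belief to
# "skin", "nonskin", and the whole frame "either" (ignorance).

def combine(m1, m2):
    """Dempster's rule for the two-hypothesis frame {skin, nonskin}."""
    conflict = m1["skin"] * m2["nonskin"] + m1["nonskin"] * m2["skin"]
    k = 1.0 - conflict
    return {
        "skin":    (m1["skin"] * m2["skin"] + m1["skin"] * m2["either"] + m1["either"] * m2["skin"]) / k,
        "nonskin": (m1["nonskin"] * m2["nonskin"] + m1["nonskin"] * m2["either"] + m1["either"] * m2["nonskin"]) / k,
        "either":  (m1["either"] * m2["either"]) / k,
    }

# Hypothetical evidence from three cues for one pixel.
cues = [
    {"skin": 0.6, "nonskin": 0.1, "either": 0.3},
    {"skin": 0.5, "nonskin": 0.2, "either": 0.3},
    {"skin": 0.7, "nonskin": 0.1, "either": 0.2},
]
m = cues[0]
for cue in cues[1:]:
    m = combine(m, cue)
print(m)  # classify the pixel as skin if m["skin"] dominates
```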


international conference on pattern recognition | 2004

Scene recognition based on relationship between human actions and objects

Mirai Higuchi; Shigeki Aoki; Atsuhiro Kojima; Kunio Fukunaga

In this paper, we propose a novel method for scene recognition from video images through the analysis of human activities. We aim to recognize three kinds of entities: human activities, objects, and the environment. In previous methods, the locations and orientations of objects are estimated using shape models, which are often dependent on the individual scene. Instead of shape models, we employ conceptual knowledge about the function and usage of objects as well as about human actions. In our method, the location and usage of objects can be identified by observing how humans interact with them.
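
A toy version of this idea, inferring what an object is from how a person uses it, might look like the following; the interaction-to-function table is a hypothetical stand-in for the conceptual knowledge the paper employs.

```python
# Illustrative sketch only: identifying an object by the way a person
# interacts with it, instead of matching shape models. The table below is a
# made-up example of conceptual knowledge about object function and usage.

FUNCTION_KNOWLEDGE = {
    # observed interaction -> plausible object categories
    "sit_on":     {"chair", "sofa"},
    "drink_from": {"cup", "bottle"},
    "write_on":   {"notebook", "whiteboard"},
    "open":       {"door", "refrigerator", "notebook"},
}

def infer_object(observed_interactions):
    """Intersect the candidate sets for every interaction seen at one location."""
    candidates = None
    for act in observed_interactions:
        cats = FUNCTION_KNOWLEDGE.get(act, set())
        candidates = cats if candidates is None else candidates & cats
    return candidates or set()

# A person was seen opening and then writing on the same object:
print(infer_object(["open", "write_on"]))  # -> {'notebook'}
```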


international conference on innovative computing, information and control | 2008

Recognition and Textual Description of Human Activities by Mobile Robot

Atsuhiro Kojima; Mamoru Takaya; Shigeki Aoki; Takao Miyamoto; Kunio Fukunaga

In this paper, we propose a method for recognizing human actions and objects and translating them into natural language text. First, a 3D environmental map is constructed by accumulating range maps captured from a 3D range sensor mounted on a mobile robot. Then, the pose of a person in the scene is estimated by fitting an articulated cylindrical model, and objects are recognized by matching 3D models. When the person handles an object, the interaction with the object is classified. Finally, using a conceptual model representing human actions and related objects, the natural language expression that best explains the person's action is generated.
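
The map-building step could be sketched as below: range scans taken at known robot poses are transformed into a world frame and accumulated into a voxel grid. The poses, resolution, and data here are synthetic placeholders, not the system's actual parameters.

```python
# Illustrative sketch only: accumulating range maps taken from different robot
# poses into a single 3D occupancy grid. In a real system the points and poses
# come from the range sensor and the robot's odometry.

import numpy as np

VOXEL = 0.1  # grid resolution in metres (assumed)

def to_world(points_robot, R, t):
    """Transform Nx3 points from the robot frame to the world frame."""
    return points_robot @ R.T + t

def accumulate(scans, poses):
    """scans: list of Nx3 arrays; poses: list of (R, t) pairs.
    Returns the set of occupied voxel indices."""
    occupied = set()
    for pts, (R, t) in zip(scans, poses):
        world = to_world(pts, R, t)
        for v in np.floor(world / VOXEL).astype(int):
            occupied.add(tuple(v))
    return occupied

# Two synthetic scans observed from two poses.
scan = np.random.rand(100, 3)
identity = (np.eye(3), np.zeros(3))
shifted = (np.eye(3), np.array([1.0, 0.0, 0.0]))
grid = accumulate([scan, scan], [identity, shifted])
print("occupied voxels:", len(grid))
```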


international conference on document analysis and recognition | 1993

Incremental acquisition of knowledge about layout structures from examples of documents

Koichi Kise; Naoko Yajima; Noboru Babaguchi; Kunio Fukunaga

Document image analysis systems often utilize knowledge about layout structures to extract logically labeled layout objects. However, the lack of a facility for knowledge acquisition limits the applicability of such systems. The authors propose a method of acquiring knowledge for document image analysis. Given examples of document images and their logically labeled layout objects, the method generates and modifies the knowledge. The method is incremental, so the knowledge can be efficiently modified using additional examples. Counterexamples, generated as errors in the analysis of an example image, can also be reflected in the knowledge so that the system no longer generates those errors. Experimental results on both knowledge acquisition and analysis using the acquired knowledge are also presented.
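
One possible reading of the incremental scheme is sketched below: per-label knowledge is kept as feature intervals that examples widen and counterexamples narrow. The features, labels, and update rule are assumptions, not the authors' actual knowledge representation.

```python
# Illustrative sketch only: an incremental learner where each layout label is
# described by feature intervals, widened by positive examples and narrowed by
# counterexamples. Features (normalised y-position, height) are hypothetical.

class LayoutKnowledge:
    def __init__(self):
        self.rules = {}  # label -> {feature: [lo, hi]}

    def add_example(self, label, features):
        rule = self.rules.setdefault(label, {f: [v, v] for f, v in features.items()})
        for f, v in features.items():
            rule[f][0] = min(rule[f][0], v)
            rule[f][1] = max(rule[f][1], v)

    def add_counterexample(self, wrong_label, features):
        """An object was wrongly analysed as `wrong_label`; shrink that rule
        so it no longer covers the offending feature values."""
        rule = self.rules.get(wrong_label)
        if not rule:
            return
        for f, v in features.items():
            lo, hi = rule[f]
            if lo <= v <= hi:
                if v - lo < hi - v:
                    rule[f][0] = v + 1e-6
                else:
                    rule[f][1] = v - 1e-6
                break

    def classify(self, features):
        for label, rule in self.rules.items():
            if all(rule[f][0] <= features[f] <= rule[f][1] for f in rule):
                return label
        return None

kb = LayoutKnowledge()
kb.add_example("title", {"y": 0.05, "height": 0.08})
kb.add_example("title", {"y": 0.10, "height": 0.06})
kb.add_example("body",  {"y": 0.50, "height": 0.02})
print(kb.classify({"y": 0.07, "height": 0.07}))  # -> "title"
```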


ieee conference on cybernetics and intelligent systems | 2004

Learning and recognizing behavioral patterns using position and posture of human

Shigeki Aoki; Masaki Onishi; Atsuhiro Kojima; Kunio Fukunaga

In general, it is possible to find certain behavioral patterns in human daily activity; such patterns are called daily behavioral patterns. The purpose of this research is to learn and recognize behavioral patterns. Previous methods find it difficult to recognize in detail how a person acts in a room, because they recognize only a sequence of positions using information from infrared sensors or the on/off switching of electrical appliances. On the other hand, many methods have been proposed for recognizing human motion from image sequences, most of which require motion models to be prepared in advance. In this paper, we propose a method for learning and recognizing human motion without any motion models. In addition, we propose methods for recognizing behavioral patterns that take into consideration not only the sequence of positions but also the sequence of motions. Experiments show that our approach is able to learn and recognize human behavior and confirm the effectiveness of our method.
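
A rough sketch in the spirit of model-free learning is given below: motion symbols are formed by simple online clustering of observed (position, motion) feature vectors, and two observation periods are compared as symbol sequences. The clustering scheme, thresholds, and sequence distance are illustrative choices, not the paper's method.

```python
# Illustrative sketch only: learning motion "symbols" from observed feature
# vectors by greedy online clustering (no prepared motion models), then
# comparing the symbol sequences of two observation periods.

import numpy as np

def learn_symbols(samples, radius=0.8):
    """Each new sample far from all existing centres starts a new symbol."""
    centres = []
    for x in samples:
        if not centres or min(np.linalg.norm(x - c) for c in centres) > radius:
            centres.append(x.copy())
    return centres

def to_sequence(samples, centres):
    return [int(np.argmin([np.linalg.norm(x - c) for c in centres])) for x in samples]

def edit_distance(a, b):
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i-1, j] + 1, d[i, j-1] + 1,
                          d[i-1, j-1] + (a[i-1] != b[j-1]))
    return int(d[-1, -1])

# Synthetic (position, motion) features for two observation periods.
day1 = np.random.rand(50, 4)
day2 = day1 + 0.05 * np.random.rand(50, 4)
centres = learn_symbols(day1)
print("symbols:", len(centres),
      "distance:", edit_distance(to_sequence(day1, centres), to_sequence(day2, centres)))
```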


international conference on pattern recognition | 2000

Production of video images by computer controlled camera operation based on distribution of spatiotemporal mutual information

Masaki Onishi; Masao Izumi; Kunio Fukunaga

This paper defines a spatiotemporal mutual information on the pixels of a given video image on the basis of information theory (Shannon's communication theory), which can be interpreted as a theoretical estimate of the features of interest to a human viewer. As an application of this spatiotemporal mutual information, we propose a method of producing vivid video images of distance learning by using computer-controlled camera operation and switching among multiple camera images on the basis of video image processing. Results of a questionnaire survey on video images produced by the method confirm the effectiveness of our approach.
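
The sketch below shows a standard histogram-based estimate of Shannon mutual information between two image regions (for example, the same region in consecutive frames); the paper's per-pixel spatiotemporal definition differs, so this only illustrates the underlying information-theoretic quantity.

```python
# Illustrative sketch only: histogram-based Shannon mutual information between
# two greyscale regions, e.g. a region at time t and at time t+1.

import numpy as np

def mutual_information(a, b, bins=32):
    """I(A;B) in bits for two equally sized greyscale regions."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

frame_t = np.random.randint(0, 255, (120, 160))
frame_t1 = np.clip(frame_t + np.random.randint(-10, 10, frame_t.shape), 0, 255)
print("temporal MI:", mutual_information(frame_t, frame_t1))
```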

Collaboration


Dive into Kunio Fukunaga's collaboration.

Top Co-Authors

Masao Izumi, Osaka Prefecture University
Atsuhiro Kojima, Osaka Prefecture University
Akio Ogihara, Osaka Prefecture University
Masaki Onishi, Osaka Prefecture University
Koichi Kise, Osaka Prefecture University
Masaya Ohta, Osaka Prefecture University
Shigeki Aoki, Osaka Prefecture University
Katsumi Harashima, Osaka Institute of Technology
Takao Miyamoto, Osaka Prefecture University