
Publications


Featured research published by Yoshiaki Akazawa.


Human Factors in Computing Systems | 2011

An actuated physical puppet as an input device for controlling a digital manikin

Wataru Yoshizaki; Yuta Sugiura; Albert C. Chiou; Sunao Hashimoto; Masahiko Inami; Takeo Igarashi; Yoshiaki Akazawa; Katsuaki Kawachi; Satoshi Kagami; Masaaki Mochimaru

We present an actuated handheld puppet system for controlling the posture of a virtual character. Physical puppet devices have been used in the past to intuitively control character posture. In our research, an actuator is added to each joint of such an input device to provide physical feedback to the user. This enhancement offers many benefits. First, the user can upload pre-defined postures to the device to save time. Second, the system is capable of dynamically adjusting joint stiffness to counteract gravity, while allowing control to be maintained with relatively little force. Third, the system supports natural human body behaviors, such as whole-body reaching and joint coupling. This paper describes the user interface and implementation of the proposed technique and reports the results of an expert evaluation. We also conducted two user studies to evaluate the effectiveness of our method.
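The stiffness-adjustment idea in this abstract can be sketched in a few lines: each joint outputs a feedforward torque that cancels the gravity load of the links beyond it, plus a deliberately weak PD term toward the held pose. The link masses, lengths, and gains below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical link parameters for a 3-joint puppet limb.
LINK_LENGTH = np.array([0.10, 0.08, 0.06])   # meters
LINK_MASS   = np.array([0.05, 0.04, 0.03])   # kilograms
G = 9.81

def gravity_torques(q):
    """Torque each joint must output to hold pose q (relative joint angles,
    radians from horizontal) against gravity: joint i supports every link
    from i outward, each link's weight acting at its midpoint."""
    ang = np.cumsum(q)                                    # absolute link angles
    joint_x = np.concatenate(([0.0], np.cumsum(LINK_LENGTH * np.cos(ang))))
    mid_x = joint_x[:-1] + 0.5 * LINK_LENGTH * np.cos(ang)
    tau = np.zeros(len(q))
    for i in range(len(q)):
        # Moment arm = horizontal distance from joint i to each link midpoint.
        tau[i] = np.sum(LINK_MASS[i:] * G * (mid_x[i:] - joint_x[i]))
    return tau

def joint_command(q, q_target, qdot=None, stiffness=0.5, damping=0.05):
    """Weak PD toward the held pose plus gravity feedforward, so the limb
    neither droops nor resists the user's hand with much force."""
    qdot = np.zeros_like(q) if qdot is None else qdot
    return stiffness * (q_target - q) - damping * qdot + gravity_torques(q)

q = np.array([0.3, 0.2, 0.1])
print(joint_command(q, q_target=q))   # torques that just hold the current pose
```

Because gravity is cancelled separately, the PD stiffness can stay low enough for the user to reposition joints by hand while the puppet still holds whatever pose it is left in.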


International Conference on Multimedia and Expo | 2002

Real-time video based motion capture system based on color and edge distributions

Yoshiaki Akazawa; Yoshihiro Okada; Koichi Niijima

This paper proposes a real-time, video-based motion capture system that uses two video cameras. Conventional video-based motion capture systems use many cameras and take a long time to process the resulting video streams, so they cannot generate motion data in real time. The proposed prototype instead uses at most two cameras and a very simple motion-tracking method based on object color and edge distributions: from video images of a person it extracts the x, y positions of the hands, feet, and head, and generates motion data for these body parts in real time. With two cameras it generates 3D motion data in real time. This paper mainly describes the system as a real-time motion capture system for the extremities of the human body, i.e., the hands, feet, and head, and validates its usefulness through virtual reality (VR) application examples.
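A minimal sketch of the kind of color-and-edge window tracking the abstract describes: candidate windows near the previous position are scored by color-histogram similarity and edge-density agreement with the target. The bin counts, weights, and search radius are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def color_hist(patch, bins=8):
    """Normalized joint histogram over quantized RGB values of a patch."""
    q = (patch // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / max(h.sum(), 1.0)

def edge_density(gray_patch):
    """Mean gradient magnitude as a cheap edge-distribution cue."""
    gy, gx = np.gradient(gray_patch.astype(float))
    return float(np.hypot(gx, gy).mean())

def track(frame, gray, prev_xy, ref_hist, ref_edge, win=16, radius=8):
    """Score windows near the previous position by color-histogram similarity
    (Bhattacharyya coefficient) plus edge-density agreement, and return the
    best-matching window's top-left corner."""
    best_score, best_xy = -np.inf, prev_xy
    x0, y0 = prev_xy
    for dy in range(-radius, radius + 1, 2):
        for dx in range(-radius, radius + 1, 2):
            x, y = x0 + dx, y0 + dy
            if x < 0 or y < 0:
                continue
            patch = frame[y:y + win, x:x + win]
            if patch.shape[:2] != (win, win):
                continue
            color_sim = np.sum(np.sqrt(ref_hist * color_hist(patch)))
            edge_sim = -abs(edge_density(gray[y:y + win, x:x + win]) - ref_edge)
            score = color_sim + 0.1 * edge_sim
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy
```

Running one such tracker per body part (each hand, foot, and the head) keeps the per-frame cost low enough for real time; with two calibrated cameras, the two 2D positions can be triangulated into 3D.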


Proceedings of SPIE, the International Society for Optical Engineering | 2007

Voice and gesture-based 3D multimedia presentation tool

Hiromichi Fukutake; Yoshiaki Akazawa; Yoshihiro Okada

This paper proposes a 3D multimedia presentation tool that the user manipulates intuitively through voice and gesture input alone, without a standard keyboard or mouse. The authors developed the system for presentation rooms equipped with a large screen, such as an exhibition room in a museum, where voice commands and gesture-based pointing are preferable to a keyboard or mouse. The system was built with IntelligentBox, a component-based 3D graphics software development system. IntelligentBox provides various 3D visible, reactive functional components called boxes, e.g., a voice-input component and various multimedia-handling components, together with a dynamic data-linkage mechanism called slot connection that lets the user build 3D graphics applications by combining existing boxes through direct manipulation on a computer screen. The proposed presentation tool was likewise assembled from such components purely through direct manipulation. The authors had previously proposed a 3D multimedia presentation tool based on a stage metaphor with a voice-input interface; this work extends the system to accept gesture input in addition to voice commands. This paper describes the proposed tool in detail, focusing on its component-based voice and gesture input interfaces.
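The slot-connection mechanism described above can be illustrated with a toy version: components expose named slots, and writing one slot forwards the value along registered connections to slots of other components. The class and slot names here are hypothetical, not IntelligentBox's actual API.

```python
class Box:
    """A component exposing named slots; writing a slot forwards the value
    along any registered connections (cycles are not guarded in this sketch)."""
    def __init__(self, name):
        self.name = name
        self.slots = {}
        self.connections = []          # (my_slot, other_box, other_slot)

    def connect(self, my_slot, other, other_slot):
        self.connections.append((my_slot, other, other_slot))

    def set_slot(self, slot, value):
        self.slots[slot] = value
        for src, other, dst in self.connections:
            if src == slot:
                other.set_slot(dst, value)

# A voice-input box drives a slide-display box; neither needs to know the
# other's implementation, only the slot it is wired to.
voice = Box("voice_input")
stage = Box("slide_display")
voice.connect("command", stage, "action")
voice.set_slot("command", "next_slide")
print(stage.slots["action"])           # -> "next_slide"
```

The appeal of this design is that adding a new input modality (here, gestures) means adding one box and one connection rather than modifying the presentation components.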


Computer Graphics, Imaging and Visualization | 2005

3D object layout by voice commands based on contact constraints

Hiromichi Fukutake; Yoshiaki Akazawa; Yoshihiro Okada; Koichi Niijima

It is well known that accurately moving or rotating a 3D object to a specific position and orientation in a virtual 3D space is very difficult with direct mouse manipulation on a 2D display. To address this problem, the authors previously proposed an automatic 3D object layout method based on contact constraints. After a 3D scene is generated automatically with this method, the user often wants to modify it manually. For this purpose, this paper proposes voice commands for 3D object layout based on the same contact constraints. With these constraints, laying out 3D objects by voice commands becomes easier. Voice commands are an intuitive interface and are very efficient when the user cannot use a mouse.
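One way to picture why contact constraints help voice-driven layout: the commanded direction is projected into the contact plane, so a spoken "move left" can never pull the object off its supporting surface. The command vocabulary and constraint model below are assumptions for illustration, not the paper's grammar.

```python
import numpy as np

# Hypothetical mapping from spoken commands to world-space directions.
DIRECTIONS = {"left":    np.array([-1.0, 0.0, 0.0]),
              "right":   np.array([1.0, 0.0, 0.0]),
              "forward": np.array([0.0, 0.0, -1.0]),
              "back":    np.array([0.0, 0.0, 1.0])}

def constrained_move(position, contact_normal, command, step=0.05):
    """Project the commanded direction onto the contact plane, then step,
    so the object slides along the surface it rests on."""
    d = DIRECTIONS[command]
    n = contact_normal / np.linalg.norm(contact_normal)
    d_in_plane = d - np.dot(d, n) * n      # remove any off-surface component
    return position + step * d_in_plane

# "move left" while resting on a tabletop (normal = +y): stays on the table.
pos = constrained_move(np.array([0.0, 0.8, 0.0]), np.array([0.0, 1.0, 0.0]), "left")
print(pos)
```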


Active Media Technology | 2005

Manipulation guide using contact constraints for construction of 3D composite objects

Yoshihiro Okada; Yoshiaki Akazawa; Koichi Niijima

It is well known that accurately moving or rotating a 3D object to a specific position and orientation in a virtual 3D space is very difficult with direct mouse manipulation on a 2D display. To address this problem, the authors previously proposed an automatic 3D object layout method based on contact constraints. After a 3D scene is generated automatically with this method, the user often wants to modify it manually. For this purpose, this paper proposes manipulation-guide functionality based on the same contact constraints. These contact constraints are also useful for the construction of 3D composite objects, and this paper reports the applicability of the layout method and the manipulation guide to that task.
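A minimal sketch of a manipulation guide in the spirit of this abstract: while the user drags a part, its pose snaps to the nearest valid contact pose once it comes close enough, turning imprecise 2D mouse motion into an exact 3D attachment. The candidate poses and threshold are illustrative assumptions.

```python
import numpy as np

def snap_to_contact(dragged_pos, contact_poses, threshold=0.1):
    """Return the closest predefined contact pose if the drag is near one,
    otherwise the raw dragged position (free movement)."""
    contact_poses = np.asarray(contact_poses)
    dist = np.linalg.norm(contact_poses - dragged_pos, axis=1)
    i = int(np.argmin(dist))
    return contact_poses[i] if dist[i] < threshold else dragged_pos

# A chair leg dragged near its socket snaps exactly into place.
sockets = [[0.2, 0.0, 0.2], [-0.2, 0.0, 0.2], [0.2, 0.0, -0.2], [-0.2, 0.0, -0.2]]
print(snap_to_contact(np.array([0.19, 0.02, 0.21]), sockets))
```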


IEEE International Workshop on Haptic Audio Visual Environments and their Applications | 2005

Intelligent and intuitive interface for construction of 3D composite objects

Yoshiaki Akazawa; Yoshihiro Okada; Koichi Niijima

This paper presents an intelligent and intuitive interface for the construction of 3D composite objects. Constructing such objects manually takes a long time because 3D objects have six degrees of freedom (DOF) and are difficult to control with a standard 2D input device such as a mouse. To address this problem, the authors previously proposed an automatic 3D object layout system based on contact constraints. In this paper, the authors propose manipulation-guide functionality based on the same contact constraints for constructing 3D composite objects. Because the manipulation guide works with DeMoCa, a video-based hand motion tracking and hand posture recognition system previously proposed by the same authors, it provides an intuitive interface.
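A sketch of why contact constraints make hand tracking a sufficient input: once an object rests against a surface, its six DOF collapse to two in-plane translations plus one rotation about the contact normal, which a tracked 2D hand displacement and roll angle can drive directly. This illustrates the DOF-reduction idea under assumed names; it is not the DeMoCa interface itself.

```python
import numpy as np

def apply_hand_input(position, heading, contact_normal, hand_dxdy, hand_droll,
                     gain=0.01):
    """Map a 2D hand displacement to in-plane translation and hand roll to
    rotation about the contact normal -- the only free DOFs left."""
    n = contact_normal / np.linalg.norm(contact_normal)
    # Build an orthonormal basis (u, v) spanning the contact plane.
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:           # normal parallel to z: pick another axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    new_pos = position + gain * (hand_dxdy[0] * u + hand_dxdy[1] * v)
    return new_pos, heading + hand_droll

# Slide a block across a tabletop and twist it, from tracked hand motion.
pos, yaw = apply_hand_input(np.zeros(3), 0.0, np.array([0.0, 1.0, 0.0]),
                            hand_dxdy=(12.0, -5.0), hand_droll=0.2)
print(pos, yaw)
```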


Robot and Human Interactive Communication | 2007

Virtual Space Construction Based on Contact Constraints Using Robot Vision Technology for 3D Graphics Applications

Naoto Nakamura; Yoshiaki Akazawa; Shigeru Takano; Yoshihiro Okada

This paper discusses an automatic 3D scene generation system. 3D graphics applications are in increasingly great demand in various fields, especially the video game industry, yet developing them remains difficult and time-consuming. Many tools and development systems for 3D graphics applications have been proposed, but even with such a system, developers must prepare 3D scenes by manually laying out 3D objects with a mouse on a computer screen, which also takes much time. A tool that generates 3D scenes automatically is therefore needed. The authors previously proposed an automatic 3D scene generation system based on contact constraints, in which the contact constraints among 3D objects are defined manually as semantic database records. It would be useful to generate such records automatically from the layout of real objects in the real world, so this paper discusses the usability of robot vision technology for automatically generating these semantic database records.
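The semantic database records the abstract mentions might look like the toy records below: each states which surface of one object type contacts which surface of another, and a scene generator places objects accordingly. Robot vision would fill this table automatically by observing real rooms; the record layout and placement routine are assumptions for illustration.

```python
from dataclasses import dataclass
import random

@dataclass
class ContactRecord:
    child: str        # object type being placed
    parent: str       # object type it rests on / attaches to
    surface: str      # which parent surface ("top", "front", ...)

# Records as robot vision might emit them after observing a real room.
RECORDS = [ContactRecord("lamp", "desk", "top"),
           ContactRecord("desk", "floor", "top"),
           ContactRecord("poster", "wall", "front")]

def generate_scene(object_types):
    """Place each requested object against a surface that some observed
    contact record says it can rest on."""
    scene = []
    for t in object_types:
        options = [r for r in RECORDS if r.child == t]
        if options:
            r = random.choice(options)
            scene.append((t, f"on {r.surface} of {r.parent}"))
    return scene

print(generate_scene(["desk", "lamp", "poster"]))
```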


7th International Conference on Intelligent Games and Simulation, GAME-ON 2006 | 2006

Interactive learning interface for automatic 3D scene generation

Yoshiaki Akazawa; Yoshihiro Okada; Koichi Niijima


Archive | 2002

Real-time motion capture system using one video camera based on color and edge distribution

Yoshiaki Akazawa; Yoshihiro Okada; Koichi Niijima


Machine Vision and Applications | 2002

Robust Tracking Algorithm Based on Color and Edge Distribution for Real-time Video Based Motion Capture Systems

Yoshiaki Akazawa; Yoshihiro Okada; Koichi Niijima

Collaboration


Dive into Yoshiaki Akazawa's collaboration.

Top Co-Authors

Katsuaki Kawachi

National Institute of Advanced Industrial Science and Technology


Masaaki Mochimaru

National Institute of Advanced Industrial Science and Technology


Satoshi Kagami

National Institute of Advanced Industrial Science and Technology


Takeo Igarashi

The University of Tokyo


Wataru Yoshizaki

National Institute of Advanced Industrial Science and Technology
