Tsukasa Noma
Kyushu Institute of Technology
Publications
Featured research published by Tsukasa Noma.
IEEE Computer Graphics and Applications | 2000
Tsukasa Noma; Liwei Zhao; Norman I. Badler
We have created a virtual human presenter who accepts speech texts with embedded commands as inputs. The presenter acts in real-time 3D animation synchronized with speech. The system was developed on the Jack animated-agent system. Jack provides a 3D graphical environment for controlling articulated figures, including detailed human models.
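A minimal sketch of the kind of input such a presenter might consume: speech text interleaved with embedded animation commands. The backslash-and-braces command syntax and the parser below are illustrative assumptions; the abstract does not specify the actual markup.

```python
import re

# Hypothetical embedded-command syntax: \command{argument} inside speech text.
COMMAND = re.compile(r"\\(\w+)\{([^}]*)\}")

def parse_presentation(text):
    """Yield ('say', words) and ('command', name, arg) items in input order."""
    pos = 0
    for m in COMMAND.finditer(text):
        speech = text[pos:m.start()].strip()
        if speech:
            yield ("say", speech)
        yield ("command", m.group(1), m.group(2))
        pos = m.end()
    tail = text[pos:].strip()
    if tail:
        yield ("say", tail)

if __name__ == "__main__":
    script = r"Welcome to the demo. \point{screen} Here you can see the results. \nod{}"
    for item in parse_presentation(script):
        print(item)
```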
Journal of Visualization and Computer Animation | 1994
Jun-ichi Nakamura; Tetsuya Kaku; Kyungsil Hyun; Tsukasa Noma; Sho Yoshida
Since adding background music and sound effects even to short animations is not simple, an automatic music generation system would help improve the total quality of computer-generated animations. This paper describes a prototype system which automatically generates background music and sound effects for existing animations. The inputs to the system are music parameters (mood types and musical motifs) and motion parameters for individual scenes of an animation. Music is generated for each scene. The key for a scene is determined by considering the mood type and its degree, and the key of the previous scene. The melody for a scene is generated from the given motifs and the chord progression for the scene, which is determined according to appropriate rules. The harmony accompaniment for a scene is selected based on the mood type. The rhythm accompaniment for a scene is selected based on the mood type and tempo. The sound effects for motions are determined according to the characteristics and intensity of the motions. Both the background music and sound effects are generated so that the transitions between scenes are smooth.
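A minimal sketch of per-scene, rule-based selection in the spirit of this abstract: the key follows from the mood type, its degree, and the previous scene's key, while accompaniment follows from mood and tempo. The mood names, lookup tables, and circle-of-fifths stepping rule are illustrative assumptions, not the paper's actual rules.

```python
# Illustrative rule tables; the real system's rules are not given in the abstract.
CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "Db", "Ab", "Eb", "Bb", "F"]

MOOD_KEY_STEP = {"calm": 0, "happy": 1, "tense": 3, "sad": -1}   # steps around the circle
MOOD_HARMONY = {"calm": "sustained pads", "happy": "arpeggios",
                "tense": "tremolo strings", "sad": "block chords"}

def choose_key(prev_key, mood, degree):
    """Move around the circle of fifths; stronger moods move further from the previous key."""
    step = MOOD_KEY_STEP[mood] * max(1, degree)
    idx = (CIRCLE_OF_FIFTHS.index(prev_key) + step) % len(CIRCLE_OF_FIFTHS)
    return CIRCLE_OF_FIFTHS[idx]

def choose_rhythm(mood, tempo_bpm):
    return ("driving eighth notes" if tempo_bpm > 120 or mood == "tense"
            else "quarter-note pulse")

def score_scene(prev_key, mood, degree, tempo_bpm):
    return {"key": choose_key(prev_key, mood, degree),
            "harmony": MOOD_HARMONY[mood],
            "rhythm": choose_rhythm(mood, tempo_bpm)}

if __name__ == "__main__":
    print(score_scene(prev_key="C", mood="tense", degree=2, tempo_bpm=140))
```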
IEEE Computer Graphics and Applications | 1987
Xiaoyang Mao; Tosiyasu L. Kunii; Issei Fujishiro; Tsukasa Noma
This article proposes the G-octree as an extension of the G-quadtree to three dimensions. A G-octree reflects in its construction a hierarchy of gray-level homogeneity as well as a hierarchy of spatial resolution. The article also develops two-way G-quadtree/G-octree conversion procedures based on the algorithms for the binary case. These procedures provide an integrated processing environment for hierarchically represented 2D/3D gray-scale images. We demonstrate our approach with an application to the color coding of macro-autoradiography images taken from rat brains.
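A minimal sketch of an octree whose leaves are homogeneous in gray level, in the spirit of a G-octree: a cube is subdivided while its gray values vary by more than a tolerance. The node layout, the tolerance test, and the synthetic test volume are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def build(volume, tol=8):
    """Recursively subdivide a cubic gray-scale volume until each node is homogeneous."""
    lo, hi = int(volume.min()), int(volume.max())
    if hi - lo <= tol or volume.shape[0] == 1:
        return {"leaf": True, "value": int(volume.mean())}
    h = volume.shape[0] // 2
    children = [build(volume[x:x + h, y:y + h, z:z + h], tol)
                for x in (0, h) for y in (0, h) for z in (0, h)]
    return {"leaf": False, "children": children}

def count_leaves(node):
    if node["leaf"]:
        return 1
    return sum(count_leaves(c) for c in node["children"])

if __name__ == "__main__":
    # Synthetic 16^3 gray-scale volume: a bright ball inside a dark cube.
    idx = np.indices((16, 16, 16)) - 8
    vol = np.where((idx ** 2).sum(axis=0) < 36, 200, 20).astype(np.uint8)
    tree = build(vol, tol=8)
    print("leaf nodes:", count_leaves(tree))
```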
Journal of Visualization and Computer Animation | 1994
Moon-Ryul Jung; Norman I. Badler; Tsukasa Noma
In this paper, we present a rule-based heuristic method of motion planning for an animated human agent with massively redundant degrees of freedom. It constructs motion plans to achieve 3D-space goals of control points on the body, e.g. a hand, while avoiding collisions. Like the artificial potential field approach, the method performs motion decisions in 3D world space rather than in joint space. To handle the massively redundant degrees of freedom, we use a qualitative kinematic model, which specifies motions of body parts and dependencies among them, without specifying the exact distance parameters. This model helps the body select appropriate primitive motions for given goals of control points more globally than does the gradient vector of an artificial potential field of the body. The method simulates (in imagination) the suggested plan to find whether some body parts hit objects, and how much they penetrate the objects. Based on this simulated collision information, the method suggests intermediate goals of the collision body parts. A subplan to achieve these intermediate goals is again postulated by using the qualitative kinematic model. This explicit reasoning helps alleviate the basic cause of local minima in the potential field approach, namely, conflicts between attractive potential fields due to goal positions of control points and repulsive potential fields due to obstacles.
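A toy sketch of the plan / simulate / propose-intermediate-goal loop described above, reduced to a 2D point "hand" and one circular obstacle. The real method plans for an articulated body via a qualitative kinematic model; the geometry and the detour rule below are illustrative simplifications.

```python
import math

OBSTACLE = ((5.0, 0.0), 2.0)  # (centre, radius) of a single circular obstacle

def simulate_segment(p, q, steps=100):
    """Return the deepest penetration of the straight move p -> q into the obstacle."""
    (cx, cy), r = OBSTACLE
    worst = 0.0
    for i in range(steps + 1):
        t = i / steps
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        worst = max(worst, r - math.hypot(x - cx, y - cy))
    return worst

def plan(start, goal):
    """Insert intermediate goals until the simulated path is collision-free."""
    penetration = simulate_segment(start, goal)
    if penetration <= 0.0:
        return [start, goal]
    # Intermediate goal: lift the path clear of the obstacle by the penetration depth.
    (cx, cy), r = OBSTACLE
    mid = (cx, cy + r + penetration + 0.5)
    return plan(start, mid)[:-1] + plan(mid, goal)

if __name__ == "__main__":
    print(plan((0.0, 0.0), (10.0, 0.0)))
```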
Archive | 1988
Tsukasa Noma; Tosiyasu L. Kunii; Nami Kin; Hirohisa Enomoto; E. Aso; T. Yamamoto
This paper proposes a novel approach to 2D picture description. In this approach, points essential to specifying pictures are defined through repetitive geometrical constructions, and the final image is drawn by referring to those points. The method fulfills the requirements for picture description: easiness, intuitiveness, and universality. In addition, to clarify the mechanism of drawing input, we formulate the specification of a drawing input system. To represent the relationships among points, lines, and circles, the specification uses geometrical operations. We show the validity of drawing input through three applications: engineering drawing, apparel pattern-making, and Tibetan mandala image generation.
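A minimal sketch of defining picture points by geometric construction and then drawing by referring to them: two given points, a compass-style circle-circle intersection that constructs a third, and a segment list as the resulting picture. The small helper below is an illustrative assumption, not the paper's drawing-input language.

```python
import math

def circle_circle(c1, r1, c2, r2, pick_upper=True):
    """Intersection point of two circles (assumes they intersect)."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    sign = 1.0 if pick_upper else -1.0
    return (mx - sign * h * dy / d, my + sign * h * dx / d)

if __name__ == "__main__":
    A = (0.0, 0.0)                      # given points
    B = (4.0, 0.0)
    C = circle_circle(A, 4.0, B, 4.0)   # constructed: apex of an equilateral triangle
    segments = [(A, B), (B, C), (C, A)] # the picture is drawn by referring to A, B, C
    for s in segments:
        print(s)
```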
IEEE Computer Graphics and Applications | 1985
Tsukasa Noma; Tosiyasu L. Kunii
CIM animation systems are designed to provide exact and unambiguous displays of moving objects. Our system also offers a special geometric model for near-real-time display.
Archive | 1989
Nami Kin; Tsukasa Noma; Tosiyasu L. Kunii
Picture Editor is proposed as a new constraint-based picture drawing system. Most constraint-based systems are based on numerical methods, which can exhibit numerical instability. Our system, however, satisfies constraints by converting them to construction operations. This technique is stable and enables consistency among constraints to be checked in a straightforward manner.
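A tiny sketch of the idea of satisfying a constraint by construction rather than by numerical iteration: "P lies on the ray AB at distance d from A" is built directly, with no solver and hence no convergence or instability concerns. The function name is an illustrative assumption.

```python
import math

def point_on_ray_at_distance(A, B, d):
    """Construct P with |AP| = d on the ray from A through B."""
    dx, dy = B[0] - A[0], B[1] - A[1]
    length = math.hypot(dx, dy)
    return (A[0] + d * dx / length, A[1] + d * dy / length)

if __name__ == "__main__":
    print(point_on_ray_at_distance((0.0, 0.0), (3.0, 4.0), 10.0))  # -> (6.0, 8.0)
```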
eurographics symposium on rendering techniques | 1995
Tsukasa Noma
This paper presents an approach to generating and rendering texels (3D volume textures) faithful to the geometry of a collection of surfaces, and thus to bridging an existing gap between surface rendering and volume/texel rendering. This approach enables us to render the same collection of surfaces both at a far distance and close at hand. Experiments on clouds of spheres and on trees/forests are reported.
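A minimal sketch of turning a collection of surfaces (here, spheres) into a texel-like 3D density grid: each voxel stores the fraction of sample points falling inside any sphere, so the far-field appearance approximates the surface geometry. The sampling scheme and resolution are illustrative assumptions, not the paper's texel construction.

```python
import numpy as np

def spheres_to_texel(spheres, res=32, samples=4):
    """Rasterize a list of (centre, radius) spheres into a res^3 density grid."""
    rng = np.random.default_rng(0)
    grid = np.zeros((res, res, res))
    for idx in np.ndindex(res, res, res):
        # Jittered sample points inside the unit-cube voxel at `idx`.
        pts = (np.array(idx) + rng.random((samples, 3))) / res
        inside = np.zeros(samples, dtype=bool)
        for centre, radius in spheres:
            inside |= np.linalg.norm(pts - centre, axis=1) < radius
        grid[idx] = inside.mean()
    return grid

if __name__ == "__main__":
    cloud = [((0.5, 0.5, 0.5), 0.25), ((0.3, 0.7, 0.4), 0.15)]
    texel = spheres_to_texel(cloud, res=16)
    print("occupied voxels:", int((texel > 0).sum()), "of", texel.size)
```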
pacific conference on computer graphics and applications | 1999
Tsukasa Noma; I. Oishi; Hiroshi Futsuhara; Hiromi Baba; Takeshi Ohashi; Toshiaki Ejima
This paper proposes a motion generator approach to translating human motion from video image sequences into computer animations in real time. In the motion generator approach, a motion generator infers the current human motion and posture from the data obtained by processing the source video sequence, and then generates and sends a set of joint angles to the target human body model. Compared with the existing motion capture approach, our approach is more robust and tolerant of a broader range of environmental and postural conditions. Experiments on a prototype system show that an animated virtual human can walk, sit, and lie as the real human does, without special illumination control.
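A toy sketch in the spirit of a motion generator: coarse features extracted from a processed video frame (here, only a silhouette bounding box and centroid height) select a posture and emit a canned set of joint angles for the target body model. The features, thresholds, and joint-angle tables are illustrative assumptions, not the paper's inference rules.

```python
# Hypothetical per-posture joint-angle tables (degrees) for the target body model.
POSTURE_ANGLES = {
    "stand": {"hip": 0.0, "knee": 0.0, "torso": 0.0},
    "sit":   {"hip": 90.0, "knee": 90.0, "torso": 5.0},
    "lie":   {"hip": 0.0, "knee": 0.0, "torso": 90.0},
}

def infer_posture(bbox_height, bbox_width, centroid_y, frame_height):
    """Classify the posture from coarse silhouette features (image y grows downward)."""
    aspect = bbox_height / max(bbox_width, 1)
    if aspect < 0.7:                        # silhouette much wider than tall
        return "lie"
    if centroid_y > 0.6 * frame_height:     # centre of mass low in the frame
        return "sit"
    return "stand"

def generate_joint_angles(observation):
    posture = infer_posture(**observation)
    return posture, POSTURE_ANGLES[posture]

if __name__ == "__main__":
    obs = {"bbox_height": 120, "bbox_width": 60,
           "centroid_y": 300, "frame_height": 480}
    print(generate_joint_angles(obs))
```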
The Visual Computer | 1989
Tsukasa Noma; Tosiyasu L. Kunii; Nami Kin; Hirohisa Enomoto; Emako Aso; Tetsushi Yamamoto
This paper proposes a novel approach to picture description called constructive picture description. In this approach, the points which specify pictures are defined through repetitive geometrical constructions, and the final image is drawn by referring to those points. This approach fulfills the requirements for picture description: easiness, intuitiveness, uniqueness, and low computational cost. In addition, to clarify the features of our constructive picture description, we discuss the relationship between our proposal and formal elementary plane geometry. We show the usefulness of constructive picture description through three examples: a general-purpose preprocessor, apparel pattern-making, and Tibetan mandala image generation.