Publications


Featured research published by Satoshi Yonemoto.


Articulated Motion and Deformable Objects | 2002

Real-Time Human Motion Analysis Based on Analysis of Silhouette Contour and Color Blob

Ryuya Hoshino; Daisaku Arita; Satoshi Yonemoto; Rin-ichiro Taniguchi

This paper presents real-time human motion analysis for human-machine interfaces. In general, a smart man-machine interface requires a real-time human motion capturing system that works without special devices or markers. Although vision-based human motion capturing systems do not use such devices or markers, they are essentially unstable and can acquire only partial information because of self-occlusion. When full-body motion is analyzed, the problem becomes more severe. We therefore have to introduce a robust pose estimation strategy that can deal with the relatively poor results of image analysis. To solve this problem, we have developed a method to estimate full-body human postures in which an initial estimate is acquired by real-time inverse kinematics and, based on that estimate, a more accurate one is searched for with reference to the processed image. The key points are that our system combines silhouette contour analysis and color blob analysis to achieve robust feature extraction, and that it can estimate full-body human postures from limited perceptual cues, such as the positions of the head, hands, and feet, which the feature extraction process can acquire stably. In this paper, we outline a real-time, on-line human motion analysis system.
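
As an aside on the inverse-kinematics step mentioned above, the sketch below shows the general flavor of such a solver for a single planar two-link limb. It is a minimal illustration only, not the paper's full-body estimator; the function name and link lengths are assumptions.

```python
# Minimal sketch: analytic inverse kinematics for a planar two-link limb
# (e.g. shoulder-elbow-wrist). The paper's full-body solver and its
# image-based refinement are not reproduced here.
import numpy as np

def two_link_ik(target, l1, l2):
    """Return (shoulder_angle, elbow_angle) placing the wrist at `target`."""
    x, y = target
    d2 = x * x + y * y
    # Law of cosines for the elbow; clip guards unreachable targets.
    cos_elbow = np.clip((d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2), -1.0, 1.0)
    elbow = np.arccos(cos_elbow)
    shoulder = np.arctan2(y, x) - np.arctan2(l2 * np.sin(elbow),
                                             l1 + l2 * np.cos(elbow))
    return shoulder, elbow

# Example: upper arm 0.3 m, forearm 0.25 m, wrist observed at (0.4, 0.2).
print(two_link_ik((0.4, 0.2), 0.3, 0.25))
```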


Conference on Information Visualization | 2006

A Tangible Interface for Hands-on Learning

Satoshi Yonemoto; Takahiro Yotsumoto; Rin-ichiro Taniguchi

In this paper, we describe a tangible interface for hands-on learning using physical, tangible objects (pattern blocks). A computer recognizes the states of the objects handled by the user in real time and, when necessary, gives the user advice for carrying out learning tasks. In our approach, a pattern block is employed as a primitive piece: it lets the user perform direct manipulation in the real environment, while events displayed in the virtual environment support the learning tasks. Concretely, we have developed an interaction system that can support hands-on math activities.
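
To make the real-time block recognition concrete, the following is a minimal sketch, assuming colored blocks on a plain background and an ordinary webcam, of how block states (color, position, orientation) might be extracted per frame. The hue ranges, camera index, and all names are illustrative, not taken from the paper.

```python
# Minimal sketch: per-frame recognition of colored pattern blocks via
# HSV thresholding and contour analysis. Hue ranges are assumptions.
import cv2
import numpy as np

HUE_RANGES = {"green": (40, 80), "blue": (100, 130)}  # assumed block colors

def find_blocks(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    blocks = []
    for name, (lo, hi) in HUE_RANGES.items():
        mask = cv2.inRange(hsv, (lo, 80, 80), (hi, 255, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 500:      # ignore small noise blobs
                continue
            (cx, cy), (w, h), angle = cv2.minAreaRect(c)
            blocks.append((name, (cx, cy), angle))
    return blocks

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(find_blocks(frame))
cap.release()
```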


IEEE International Conference on Information Visualization | 2004

Human figure control software for real-virtual application

Satoshi Yonemoto; Rin-ichiro Taniguchi

We present human figure control software that uses vision-based human motion capturing. We have already developed core techniques for vision-based human motion tracking and motion synthesis, and have released the first version of the software as an open source kit. In this work, we have developed a second version with modified motion synthesis. The software consists of two parts: vision software (2D blob tracking modules and a stereo vision module) and motion synthesis software (skeletal structure reconstruction from a limited number of 3D positions, i.e., a physically simulated joint position estimator). It can be employed in many real-virtual applications that need the user's posture as input, such as avatar-based video conferencing, virtual human control, and 3D games. The software requires two or more FireWire cameras for vision processing and one or more PCs for vision processing, motion synthesis, and rendering. In our real-virtual system, it also serves as a virtual object manipulation interface with augmented avatar representation.
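
The stereo vision module mentioned above reduces, at its core, to triangulating tracked 2D blob centers into 3D. Below is a minimal sketch of that step, assuming an idealized calibrated stereo pair; the projection matrices are illustrative placeholders rather than the software's actual calibration.

```python
# Minimal sketch: triangulate a blob center seen in two calibrated cameras.
import cv2
import numpy as np

# Assumed rectified stereo pair: identical intrinsics, 0.1 m baseline.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])

def triangulate_blob(pt_left, pt_right):
    """2D blob centers (pixels) in each view -> 3D point in metres."""
    xl = np.array(pt_left, dtype=float).reshape(2, 1)
    xr = np.array(pt_right, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, xl, xr)
    return (X_h[:3] / X_h[3]).ravel()

# A hand blob seen at (400, 240) in the left view, (360, 240) in the right.
print(triangulate_blob((400, 240), (360, 240)))  # ~(0.2, 0.0, 2.0)
```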


International Conference on Pattern Recognition | 2002

Vision-based 3D direct manipulation interface for smart interaction

Satoshi Yonemoto; Rin-ichiro Taniguchi

This paper describes a real-time interaction system that enables 3D direct manipulation. Our purpose is to map human action in the real world seamlessly into virtual environments. With the aim of making computing systems better suited to users, we have developed a vision-based 3D direct manipulation interface that acts as a smart pointing device. Our system performs human motion analysis by 3D blob tracking, and human figure motion synthesis to generate realistic motion from a limited number of blobs. To realize smart interaction, we assume that virtual objects in the virtual environment can afford human figure action, that is, the virtual environment provides action information for a human figure model, or avatar. Extending this affordance-based approach, the system can also employ scene constraints in the virtual environment to generate more realistic motion.
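
One way to picture the affordance idea described above is as a lookup from virtual objects to the actions and contact points they offer the avatar. The sketch below is a hypothetical data structure for that mapping; the field names and scene contents are assumptions, not the paper's API.

```python
# Minimal sketch: virtual objects publish the action they afford and the
# contact point the avatar's end effector should reach, so motion synthesis
# can snap a noisy tracked hand target to them. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Affordance:
    action: str            # e.g. "grasp", "sit", "push"
    contact_point: tuple   # where the avatar's end effector should go
    approach_dir: tuple    # preferred approach direction

SCENE = {
    "mug":   Affordance("grasp", (0.42, 0.10, 0.95), (0, 0, -1)),
    "chair": Affordance("sit",   (1.50, 0.00, 0.45), (-1, 0, 0)),
}

def resolve_target(tracked_hand_pos, obj_name):
    """Replace a noisy tracked target with the object's afforded contact."""
    aff = SCENE[obj_name]
    return aff.action, aff.contact_point

print(resolve_target((0.40, 0.12, 0.90), "mug"))
```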


International Conference on Image Analysis and Processing | 2003

Real-time human figure control using tracked blobs

Satoshi Yonemoto; Hiroshi Nakano; Rin-ichiro Taniguchi

This paper describes vision-based human figure motion control. Our purpose is to map human motion in the real world seamlessly into virtual environments. With the aim of making computing systems better suited to users, we have developed a vision-based human motion analysis and synthesis method. The analysis method is implemented by blob tracking, and the synthesis method focuses on generating realistic motion from a limited number of blobs, using physical constraints together with other constraints. To synthesize more realistic motion, we introduce additional constraints into the synthesis method, which we estimated by analyzing real motion capture data. As a perceptual user interface (PUI) application, we have applied these methods to real-time 3D interaction such as 3D direct manipulation interfaces.
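
The constraint-estimation idea above can be illustrated simply: derive per-joint angle ranges from recorded motion capture data and clamp synthesized poses to them. The sketch below, with an assumed (frames × joints) data layout and random stand-in data, shows that pattern only.

```python
# Minimal sketch: estimate joint-angle limits from mocap data, then use
# them as constraints on synthesized poses. Data shapes are assumptions.
import numpy as np

def estimate_limits(mocap_angles):
    """mocap_angles: (frames, joints) array of joint angles in radians."""
    lo = np.percentile(mocap_angles, 1, axis=0)   # robust min per joint
    hi = np.percentile(mocap_angles, 99, axis=0)  # robust max per joint
    return lo, hi

def apply_constraints(pose, lo, hi):
    """Clamp a synthesized pose vector to the observed joint ranges."""
    return np.clip(pose, lo, hi)

rng = np.random.default_rng(0)
recorded = rng.normal(0.0, 0.4, size=(1000, 16))  # stand-in mocap data
lo, hi = estimate_limits(recorded)
print(apply_constraints(rng.normal(0.0, 1.0, 16), lo, hi))
```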


International Conference on Image Analysis and Processing | 2013

An Interactive Image Rectification Method Using Quadrangle Hypothesis

Satoshi Yonemoto

In this paper, we propose an interactive image rectification method for general planar objects. The method offers two interactive techniques that allow a user to choose the target region of interest: user-stroke-based cropping and box-based cropping. It can be applied to non-rectangular objects. The idea is to exploit horizontal and vertical lines on the target object, which we assume can be detected in abundance; in practice, at least two horizontal lines and two vertical lines must be observed. The method proceeds as follows: first, detect primitive line segments and select horizontal and vertical segments using baselines; next, form a quadrangle hypothesis as a combination of four line segments; then evaluate whether the re-projected line segments are horizontal (or vertical). The quadrangle hypothesis with the highest goodness score is the final solution. In our experiments, we showed promising cropping results for several images and demonstrated real-time marker-less tracking using the rectified reference image.
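
Once a quadrangle hypothesis is accepted, the rectification itself is a standard homography warp from the four corners to an axis-aligned rectangle. The sketch below shows only that final step, with illustrative corner coordinates; the line-segment detection and hypothesis scoring of the paper are not reproduced.

```python
# Minimal sketch: warp an accepted quadrangle to an axis-aligned rectangle.
import cv2
import numpy as np

def rectify_quad(image, quad, out_w=400, out_h=300):
    """quad: 4 corner points, ordered TL, TR, BR, BL."""
    src = np.array(quad, dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (out_w, out_h))

img = cv2.imread("scene.jpg")                     # any test image
if img is not None:
    quad = [(120, 80), (520, 110), (500, 400), (90, 360)]  # illustrative
    cv2.imwrite("rectified.png", rectify_quad(img, quad))
```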


ACM Multimedia | 2003

Avatar motion control by user body postures

Satoshi Yonemoto; Hiroshi Nakano; Rin-ichiro Taniguchi

This paper describes avatar motion control by user body postures. Our goal is to map human motion in the real world seamlessly into virtual environments, and we expect direct human motion sensing to be used in future interfaces. With the aim of making computing systems better suited to users, we have developed computer-vision-based avatar motion control. Human motion sensing is based on skin-color blob tracking, and our method can generate realistic avatar motion from the sensing data. Our framework uses the virtual scene context as a priori knowledge: we assume that virtual objects in the virtual environment can afford avatar actions, that is, the virtual environment provides action information for the avatar. The avatar's motion is then controlled by simulating this idea of affordance, extended into the virtual environment.
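
The skin-color blob tracking named above is commonly done by thresholding in a chroma space. Below is a minimal sketch using textbook YCrCb skin thresholds, which are assumptions rather than the paper's values, that returns the largest skin blobs as head/hand candidates.

```python
# Minimal sketch: skin-color blob detection in YCrCb for avatar control.
import cv2
import numpy as np

SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bound
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bound

def skin_blobs(frame_bgr, max_blobs=3):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:max_blobs]
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers  # candidate head/hand positions for avatar control
```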


Advanced Visual Interfaces | 2002

Real-time human motion analysis for human-machine interface

Rin-ichiro Taniguchi; Satoshi Yonemoto; Daisaku Arita

This paper presents real-time human motion analysis for human-machine interfaces. In general, a smart man-machine interface requires a real-time human motion capturing system that works without special devices or markers. Although vision-based human motion capturing systems do not use such devices or markers, they are essentially unstable and can acquire only partial information because of self-occlusion. When full-body motion is analyzed, the problem becomes more severe. We therefore have to introduce a robust pose estimation strategy that can deal with the relatively poor results of image analysis. To solve this problem, we have developed a method to estimate full-body human postures in which an initial estimate is acquired by real-time inverse kinematics and, based on that estimate, a more accurate one is searched for with reference to the processed image. The key point is that our system can estimate full-body human postures from limited perceptual cues, such as the positions of the head, hands, and feet, which can be stably acquired by silhouette contour analysis.
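
To illustrate the silhouette contour analysis that supplies the perceptual cues above, here is a minimal sketch that takes a binary foreground mask and reports the extremal points of the largest contour as rough head/hand/foot candidates. Background subtraction and the posture search itself are outside its scope, and the interpretation of the extrema is an assumption.

```python
# Minimal sketch: extremal points of the largest silhouette contour as
# rough cues for head, hands, and feet.
import cv2
import numpy as np

def silhouette_cues(fg_mask):
    """fg_mask: binary foreground image; returns extremal contour points."""
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea).reshape(-1, 2)
    return {
        "top":    tuple(c[c[:, 1].argmin()]),   # head candidate
        "bottom": tuple(c[c[:, 1].argmax()]),   # foot candidate
        "left":   tuple(c[c[:, 0].argmin()]),   # hand candidates
        "right":  tuple(c[c[:, 0].argmax()]),
    }
```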


17th International Conference on Information Visualisation | 2013

A Reference Image Generation Method for Marker-less AR

Satoshi Yonemoto

In this paper, we present a marker-less AR framework that enables virtual graffiti creation and reference image generation. The framework supports 3D annotations such as image textures (virtual graffiti), 3D objects, and 3D text, which are superimposed over the video stream. We adopt a marker-less tracking technique based on keypoint descriptors and associated trackers. In general, the reference image for marker-less AR must be acquired from a real image in advance, and most marker-less tracking approaches force the user to capture the front view of the target object. We argue that the reference image does not have to be captured under such a condition. In our experiments, we showed the estimation accuracy of reference image generation, and we demonstrated real-time marker-less tracking that includes reference image generation, easy-to-use virtual graffiti creation, and immediate superimposition.
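
The keypoint-based tracking loop underlying the framework can be sketched as descriptor matching followed by RANSAC homography estimation. The version below uses ORB as a stand-in descriptor choice, which is an assumption; the paper's reference-image generation method itself is not reproduced.

```python
# Minimal sketch: track a planar target by matching keypoint descriptors
# between a reference image and the current frame, then fitting a homography.
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track_plane(reference, frame):
    """Return the 3x3 homography mapping reference -> frame, or None."""
    kp_r, des_r = orb.detectAndCompute(reference, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    if des_r is None or des_f is None:
        return None
    matches = matcher.match(des_r, des_f)
    if len(matches) < 8:                      # too few matches to trust
        return None
    src = np.float32([kp_r[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # annotations can then be warped into the frame by H
```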


Cyberworlds | 2012

A Video Annotation Tool Using Vision-based AR Technology

Satoshi Yonemoto

In this paper, we present a video annotation tool that uses vision-based Augmented Reality (AR) technology. We apply AR and computer vision methods to make videos with 3D annotations such as image textures, video clips, 3D objects, and 3D text. A planar object is selected manually in a static scene, while it can be tracked semi-automatically in a dynamic scene. Tracking the planar object in this way, called marker-less tracking, is often used in AR applications. When the user selects a planar object in the real image, the external camera parameters are estimated from the four corner points of the planar object. As the marker-less tracking method, we use SURF-based feature descriptors and trackers. The 3D annotation is then rendered on the selected (or tracked) planar object.
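
The abstract states directly that external camera parameters are estimated from the four corner points of the selected planar object. A minimal sketch of that computation with cv2.solvePnP follows; the intrinsics and the plane's physical size are assumptions for illustration.

```python
# Minimal sketch: camera pose (rotation, translation) from the 4 corners
# of a planar object of known size.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # assumed intrinsics
W, H = 0.20, 0.15                                          # assumed plane size (m)

# Plane corners in its own coordinate frame (z = 0), ordered TL, TR, BR, BL.
object_pts = np.array([[0, 0, 0], [W, 0, 0], [W, H, 0], [0, H, 0]],
                      dtype=np.float32)

def camera_pose(corner_pixels):
    """corner_pixels: 4 image points, same order as object_pts."""
    img_pts = np.array(corner_pixels, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None

# Corners selected by the user in the first frame (illustrative values).
print(camera_pose([(250, 180), (420, 190), (415, 320), (245, 310)]))
```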
