Yoshinari Kameda
University of Tsukuba
Publication
Featured research published by Yoshinari Kameda.
international symposium on mixed and augmented reality | 2004
Yoshinari Kameda; Taisuke Takemasa; Yuichi Ohta
This paper presents a new outdoor mixed-reality system designed for people who carry a camera-attached small handheld device through an outdoor scene in which a number of surveillance cameras are embedded. We propose a new functionality for outdoor mixed reality: the handheld device can display the live status of invisible areas hidden by structures such as buildings and walls. The function is implemented on a camera-attached small handheld subnotebook PC (HPC). The videos of the invisible areas are taken by surveillance cameras and precisely overlaid on the video of the HPC camera, so a user can notice objects in the invisible areas and see directly what those objects are doing. We utilize the surveillance cameras for two purposes: (1) they capture videos of the invisible areas, which are trimmed and warped so they can be superimposed on the video of the HPC camera; (2) they update the textures of the calibration markers in order to handle texture changes in the real outdoor world. We have implemented a preliminary system with four surveillance cameras and shown that it can visualize invisible areas in real time.
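The warp that maps a surveillance-camera view onto the HPC camera's image of a planar region is a homography. A minimal numpy sketch of estimating one from four marker correspondences via the direct linear transform (DLT); the coordinate values are hypothetical, not from the paper:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from 4+ point pairs (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)          # null vector of A is the homography
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                   # normalize so H[2, 2] == 1

def warp_point(H, pt):
    """Apply homography H to a single 2D point."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical marker positions seen in the surveillance and handheld views.
surv = [(0, 0), (1, 0), (0, 1), (1, 1)]
hpc  = [(5, 3), (6, 3), (5, 4), (6, 4)]   # a pure translation, for illustration
H = homography_dlt(surv, hpc)
```

In a real system the same estimation would run per frame over the tracked calibration markers before warping the surveillance video into the HPC image.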
International Journal of Computer Vision | 2007
Yuichi Ohta; Itaru Kitahara; Yoshinari Kameda; Hiroyuki Ishikawa; Takayoshi Koyama
This paper proposes a method to realize a 3D video system that can capture video data from multiple cameras, reconstruct 3D models, transmit 3D video streams via the network, and display them on remote PCs. All processes are done in real time. We represent a player with a simplified 3D model consisting of a single plane and a live video texture extracted from multiple cameras. This 3D model is simple enough to be transmitted via a network. A prototype system has been developed and tested at actual soccer stadiums. A 3D video of a typical soccer scene, which includes more than a dozen players, was processed at video rate and transmitted to remote PCs through the internet at 15–24 frames per second.
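A single-plane player model of the kind described can be rendered as a camera-facing billboard. The following numpy sketch computes the world-space corners of such a plane; the player dimensions and the world-up axis are assumptions for illustration, not values from the paper:

```python
import numpy as np

def billboard_corners(player_pos, cam_pos, width=0.8, height=1.8):
    """Four world-space corners of a billboard centered on the player and
    facing the virtual camera. width/height are illustrative meters.
    Assumes the camera is not directly above the player (view not parallel
    to world up)."""
    view = cam_pos - player_pos
    view = view / np.linalg.norm(view)       # plane normal, toward the viewer
    up = np.array([0.0, 1.0, 0.0])           # assumed world-up axis
    right = np.cross(up, view)
    right = right / np.linalg.norm(right)
    up2 = np.cross(view, right)              # orthogonal in-plane up
    w, h = right * width / 2, up2 * height / 2
    return [player_pos - w - h, player_pos + w - h,
            player_pos + w + h, player_pos - w + h]
```

The live video texture extracted from the nearest capture camera would then be mapped onto this quad, which is what keeps the model cheap enough to stream.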
international symposium on mixed and augmented reality | 2007
Shinya Minatani; Itaru Kitahara; Yoshinari Kameda; Yuichi Ohta
This paper proposes a novel remote face-to-face mixed reality (MR) system that enables two people in distant places to share an MR space. Challenging issues in realizing such an MR system include capturing, sending, and rendering each user's appearance in real time. We developed a method to represent a user's upper body and hands on the table as a single deformed billboard. An MR Othello game is implemented as a test bed for the remote face-to-face MR system. Users can play the tabletop game as if their opponent were sitting across the table, despite being physically separated. By detecting the status of each real game board and sending it to the other site, both users feel that they are sharing the tabletop objects.
international conference on pattern recognition | 2000
Masaaki Iiyama; Yoshinari Kameda; Michihiko Minoh
Recognizing the structure of the human body is important for modeling human motion. The human body is usually represented as an articulated model, which consists of rigid parts and the joint points between them. The structure of the human body is specified by the joint points. We propose a method for estimating the locations of joint points from successive volume data. Our joint point estimation method consists of three steps. In the first step, rigid parts are extracted from two successive volume frames under the constraint of rigid transformation. In the second step, the joint points are estimated from the rigid parts. In the last step, false joint points are eliminated by using additional successive frames. Applying the method to simulated data, we correctly estimated the locations of the human joint points.
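Testing whether a group of points between two successive frames moves rigidly reduces to fitting the rigid transformation that best aligns them. A standard least-squares solution is the Kabsch algorithm, shown here as a generic numpy sketch rather than the paper's exact procedure:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t.
    P and Q are 3xN arrays of corresponding points (Kabsch algorithm)."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Voxels whose residual under the fitted (R, t) stays small would belong to the same rigid part; joints then lie where neighboring parts meet.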
international conference on pattern recognition | 2010
Yoshinari Kameda; Yuichi Ohta
We propose a new computer vision approach to locating a walking pedestrian from a first-person-vision camera image in practical situations. We assume reference points have been registered against other first-person-vision images. We utilize SURF and define seven matching criteria, derived from the properties of first-person vision, that reject false matches. We have implemented a preliminary system that can respond to a query within 0.5 seconds for a path approximately 1 km long in downtown Tokyo, where pedestrians and vehicles are always present in the images.
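The abstract does not list the seven criteria, so as a generic illustration of rejecting false descriptor matches, here is the standard distance-ratio test in numpy (synthetic low-dimensional descriptors stand in for SURF ones):

```python
import numpy as np

def ratio_test_matches(desc_q, desc_r, ratio=0.7):
    """Match each query descriptor to the reference set, keeping a match only
    when the best distance is clearly smaller than the second best (Lowe's
    ratio test) -- one generic rejection rule, not the paper's seven."""
    matches = []
    for i, d in enumerate(desc_q):
        dists = np.linalg.norm(desc_r - d, axis=1)
        j, k = np.argsort(dists)[:2]          # best and second-best candidates
        if dists[j] < ratio * dists[k]:       # ambiguous matches are dropped
            matches.append((i, j))
    return matches
```

Criteria specific to first-person vision (e.g. geometric consistency of matched points under forward walking motion) would be layered on top of a filter like this.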
international conference on multimedia and expo | 2004
Yoshinari Kameda; Takayoshi Koyama; Yasuhiro Mukaigawa; Fumito Yoshikawa; Yuichi Ohta
We present a new video browsing method for multiple videos taken in a large-scale space during live 3D events such as soccer games. With our method, multiple viewers over a computer network can browse a live 3D event from any viewpoint, and each viewer can move his or her viewpoint freely. Our algorithm consists of five steps: the system first captures videos from multiple cameras, then extracts texture segments from the videos, selects appropriate segments according to a viewpoint given dynamically by each user, transmits them to the users, and lays out the segments in virtual space so that each viewer sees them as if present at the event. Our 3D video display system requires at most 10 Mbps to browse a soccer game. We conducted experiments at two real soccer stadiums and succeeded in realizing live, realistic visualization with a free viewpoint at about 26 fps.
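The segment-selection step must pick, for each requested viewpoint, a capture camera whose texture will distort least when re-projected. A simple stand-in for that selection (not the paper's exact criterion) is to choose the camera whose viewing direction is closest to the virtual one:

```python
import numpy as np

def select_camera(cam_dirs, view_dir):
    """Pick the index of the capture camera whose (unit) viewing direction
    best aligns with the virtual viewpoint's direction. A hypothetical,
    angle-only criterion for illustration."""
    v = view_dir / np.linalg.norm(view_dir)
    scores = [np.dot(d / np.linalg.norm(d), v) for d in cam_dirs]
    return int(np.argmax(scores))
```

Only the segments from the selected camera then need to be transmitted, which is what keeps the required bandwidth low.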
Proceedings of the 2005 international conference on Augmented tele-existence | 2005
Jeremy Bluteau; Itaru Kitahara; Yoshinari Kameda; Haruo Noma; Kiyoshi Kogure; Yuichi Ohta
This paper presents a system that allows patients and physicians to experience better communication during medical consultations using Augmented Reality (AR) technology. The AR system can superimpose augmentations (i.e., human body components) onto the real patient's body, and such annotated information serves as the cornerstone for collaborative work between the two actors. We focus on the advantages of projector-based technology and ARToolKit. Our technique, based on thermal markers (i.e., using human body temperature as a source of information), is used for tracking the location of pain in the patient through the projected augmentations. A second aim of using thermal markers is to protect the patient's privacy. The required calibration method between the thermal camera and the projector is also presented. The system's feasibility is demonstrated through the development of a complete application.
advances in multimedia | 2007
Norihiro Ishii; Itaru Kitahara; Yoshinari Kameda; Yuichi Ohta
We propose an adaptive method that can estimate the 3D position of a soccer ball by using videos from two viewpoints. The 3D position of the ball is essential for realizing a 3D free-viewpoint browsing system and for analyzing soccer games. At the image processing step, our method detects the ball by selecting the best algorithm for the ball's current state, so as to minimize the chance of missing the ball and to reduce the computation cost. The 3D position of the ball is then estimated from the estimated 2D positions in the two camera images. When it is impossible to obtain the 3D position because the ball is lost in an image, we utilize a Kalman filter to compensate for the missing position information and predict the 3D ball position. We implemented a preliminary system and succeeded in tracking the ball in 3D at nearly online speed.
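Kalman-filter compensation for frames where the ball is lost can be sketched with a constant-velocity model over the 3D state; the noise parameters below are illustrative, not the paper's:

```python
import numpy as np

class BallKalman:
    """Constant-velocity Kalman filter over state [x, y, z, vx, vy, vz].
    Noise covariances are illustrative placeholders."""
    def __init__(self, dt=1 / 30):
        self.x = np.zeros(6)
        self.P = np.eye(6) * 1e3                 # large initial uncertainty
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt          # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position
        self.Q = np.eye(6) * 1e-4                # process noise
        self.R = np.eye(3) * 1e-2                # measurement noise

    def step(self, z=None):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update only when the ball was actually detected this frame
        if z is not None:
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```

Calling `step(None)` on a frame with no detection returns the predicted position, which is how the filter bridges frames where the ball is occluded or missed.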
pacific-rim symposium on image and video technology | 2013
Hidehiko Shishido; Itaru Kitahara; Yoshinari Kameda; Yuichi Ohta
To build a robust visual tracking method, it is important to consider issues such as low observation resolution and variation in the target object's shape. When a fast-moving object is captured by a video camera, motion blur is observed. This paper introduces a visual trajectory estimation method that uses blur characteristics in 3D space. We acquire a movement speed vector based on the shape of the motion blur region. This method can extract both the position and the speed of the moving object from a single image frame and apply them to a visual tracking process using a Kalman filter. We estimate the 3D position of the object from the information obtained from two different viewpoints, as shown in figure 1. We evaluated our proposed method by estimating the trajectory of a badminton shuttlecock from video sequences of a badminton game.
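Recovering a 3D position from observations in two viewpoints is classical linear triangulation. A minimal numpy sketch via the DLT; the projection matrices and pixel coordinates are hypothetical:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from pixel observations
    x1, x2 under 3x4 projection matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],    # each observation contributes two
        x1[1] * P1[2] - P1[1],    # linear constraints on the homogeneous
        x2[0] * P2[2] - P2[0],    # 3D point X
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # null vector of A, up to scale
    return X[:3] / X[3]
```

The triangulated positions per frame would then feed the Kalman filter as measurements, alongside the speed vector recovered from the blur shape.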
acm multimedia | 2010
Nozomu Kasuya; Itaru Kitahara; Yoshinari Kameda; Yuichi Ohta
Our research aims to generate a player's-view video stream by using a 3D free-viewpoint video technique. Since player trajectories are necessary to generate the video, we propose a real-time player trajectory estimation method that utilizes the shadow regions in soccer scenes. This paper describes our effort to realize real-time processing. We divide the processing between capture computers and a server computer, and we further reduce the processing cost with pipeline parallelization and optimization. We applied our proposed method to an actual soccer match held in a stadium and show its effectiveness.