Yuko Uematsu
Keio University
Publication
Featured research published by Yuko Uematsu.
international conference on artificial reality and telexistence | 2007
Yuko Uematsu; Hideo Saito
This paper presents a method for improving the accuracy of marker-based tracking with a 2D marker for augmented reality. We focus on the fact that tracking becomes unstable when the camera's view direction is almost perpendicular to the marker plane; in particular, tracking of the Z axis, which is perpendicular to the marker plane (X-Y), becomes unstable. To improve tracking accuracy in this case, we search for the rotation parameters that best fit the projected pattern using a particle filter. With this particle filtering technique, our method can correctly estimate the rotation parameters of the camera, which are essential for tracking the 3D coordinate system, and thereby improves the accuracy of the 3D coordinate system. The method also reduces jitter between frames, which is a major problem in AR. In experiments, we demonstrate that our method improves the tracking accuracy of the 3D coordinate system compared with using ARToolKit alone.
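The rotation search described above can be sketched as a generic particle filter over three rotation angles. Everything here is illustrative: the `fitness` callback (how well the marker pattern projected with a candidate rotation matches the image), the particle count, and the noise parameters are assumptions, not the paper's actual implementation.

```python
import numpy as np

def particle_filter_rotation(fitness, n_particles=200, n_iters=10,
                             init_sigma=0.3, noise_sigma=0.05, seed=0):
    """Estimate three rotation angles (radians) by particle filtering.

    `fitness(angles)` is a hypothetical log-score for how well the marker
    pattern, projected with these angles, matches the observed image.
    """
    rng = np.random.default_rng(seed)
    # Spread initial hypotheses around the marker detector's raw estimate.
    particles = rng.normal(0.0, init_sigma, size=(n_particles, 3))
    for _ in range(n_iters):
        w = np.array([fitness(p) for p in particles])
        w = np.exp(w - w.max())                  # stabilize the exponent
        w /= w.sum()
        # Resample in proportion to fitness, then diffuse slightly.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx] + rng.normal(0.0, noise_sigma,
                                                size=(n_particles, 3))
    return particles.mean(axis=0)                # posterior-mean estimate
```

Averaging the resampled particles (rather than taking the single best one) is what damps the frame-to-frame jitter the abstract mentions.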
international symposium on mixed and augmented reality | 2010
Michihiko Goto; Yuko Uematsu; Hideo Saito; Shuji Senda; Akihiko Iketani
This paper presents an instructional support system based on augmented reality (AR). The system helps a user work intuitively by overlaying visual information, in the same way as a navigation system. In typical AR systems, the content overlaid onto real space is created with 3D computer graphics, and in most cases such content is newly created for each application. However, there are many existing 2D videos that show how to take apart or assemble electric appliances and PCs, how to cook, and so on. Our system therefore employs such existing 2D videos as instructional videos. By transforming the instructional video according to the user's view and overlaying it onto the user's view space, the proposed system intuitively provides the user with visual guidance. To avoid visual confusion between the displayed instructional video and the user's view, we add various visual effects to the instructional video, such as transparency and enhancement of contours. By dividing the instructional video into sections according to the operations to be carried out to complete a task, we ensure that the user can interactively move to the next step of the video after an operation is completed, so the user can carry out the task at his/her own pace. In a usability test, users evaluated the instructional video in our system through two tasks: a building-blocks task and an origami task. We found that a user's visibility improves when the instructional video is transformed according to his/her view. Furthermore, from the evaluation of the visual effects, we can classify the effects according to the task and obtain guidelines for using our system as an instructional support system for various other tasks.
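The two visual effects named above, transparency and contour enhancement, can be sketched as a simple blend on grayscale frames. This is a minimal toy, not the paper's renderer; the gain and alpha values are arbitrary assumptions.

```python
import numpy as np

def overlay_instruction(view, instr, alpha=0.4, edge_gain=2.0):
    """Blend an instructional frame into the user's view (toy sketch).

    `view`, `instr`: float images in [0, 1] with the same (H, W) shape.
    The instructional frame is made semi-transparent, and its contours
    (finite-difference gradient magnitude) are boosted so the guidance
    stays visible over the real scene behind it.
    """
    gy, gx = np.gradient(instr)                       # contour strength
    edges = np.clip(edge_gain * np.hypot(gx, gy), 0.0, 1.0)
    enhanced = np.clip(instr + edges, 0.0, 1.0)       # emphasize contours
    return (1.0 - alpha) * view + alpha * enhanced    # transparency blend
```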
international conference on image analysis and processing | 2005
Yuko Uematsu; Hideo Saito
We propose a novel vision-based registration approach for augmented reality that integrates arbitrary multiple planes. In our approach, we estimate the camera rotation and translation from an uncalibrated image sequence that includes arbitrary multiple planes. Since the geometrical relationship among these planes is unknown, we assign a 3D coordinate system to each plane independently and, to integrate them, construct a projective 3D space defined by the projective geometry of two reference images. By integrating the planes through this projective space, we can use arbitrary multiple planes and achieve highly accurate registration at every position in the input images.
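Registration methods of this kind rest on plane-induced homographies between images. As background, the standard Direct Linear Transform (DLT) for estimating a homography from point correspondences can be sketched as follows; this is the textbook algorithm, not code from the paper.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: 3x3 homography H mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding points on a plane, N >= 4.
    Each correspondence contributes two rows of the linear system
    A h = 0; the solution is the right singular vector of A with the
    smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]        # fix the projective scale
```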
multimedia signal processing | 2010
Takanori Hashimoto; Yuko Uematsu; Hideo Saito
This paper presents a method for generating a new-viewpoint movie of a baseball game. One of the most interesting viewpoints in a baseball game is from behind the catcher. If only one camera is placed behind the catcher, however, the view is occluded by the umpire and the catcher. In this paper, we propose a method for generating a see-through movie captured from behind the catcher by recovering the pitcher's appearance with multiple cameras, so that the obstacles (catcher and umpire) can be virtually removed from the movie. Our method consists of three processes: recovering the pitcher's appearance by homography, detecting obstacles by graph cut, and projecting the ball's trajectory. To demonstrate the effectiveness of our method, we generate a see-through movie by applying it to multiple-camera footage taken in a real baseball stadium. In the resulting see-through movie, the pitcher appears through the catcher and the umpire.
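The final compositing step can be sketched as follows, assuming the two earlier stages are already done: `recovered` is the pitcher's appearance warped into the main camera by homography, and `obstacle_mask` comes from the graph-cut segmentation. Names and the hard (non-blended) compositing are illustrative assumptions.

```python
import numpy as np

def see_through(front, recovered, obstacle_mask):
    """Diminished-reality compositing (toy sketch, grayscale frames).

    `front`: frame from the camera behind the catcher.
    `recovered`: the pitcher's appearance at the same viewpoint,
    recovered from the other cameras (assumed given here).
    `obstacle_mask`: boolean mask of catcher/umpire pixels.
    Masked pixels are replaced by the recovered view.
    """
    front = np.asarray(front, dtype=float)
    recovered = np.asarray(recovered, dtype=float)
    return np.where(obstacle_mask, recovered, front)
```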
Archive | 2008
Yuko Uematsu; Hideo Saito
Augmented Reality (AR) is a technique for overlaying virtual objects onto the real world. AR has recently been applied to many kinds of entertainment applications using vision-based tracking techniques, such as [Klein & Drummond, 2004; Henrysson et al., 2005; Haller et al., 2005; Schmalstieg & Wagner, 2007; Looser et al., 2007]. AR can give users an immersive feeling by allowing interaction between the real and virtual worlds. In AR entertainment applications, virtual objects (a virtual world) generated with computer graphics are overlaid onto the real world: the real 3D world is captured by a camera, and the virtual objects are superimposed onto the captured images. By seeing the real world through some sort of display, the users find that the virtual world is mixed with the real world. In such AR applications, the users carry a camera and move around the real world in order to change their viewpoints. Therefore, the pose and position of the moving user's camera must be obtained so that the virtual objects can be overlaid at the correct position in the real world according to the camera motion. Such camera tracking must also be performed in real time for interactive operation of AR applications. Vision-based camera tracking for AR is a popular research area because, in contrast with sensor-based approaches, vision-based methods do not require any special device except a camera. Moreover, the marker-based approach is an easy way to make vision-based tracking robust and real-time; this chapter focuses on the marker-based approach. In particular, ARToolKit [Kato & Billinghurst, 1999] is a very popular tool for implementing simple on-line AR applications: it uses a planar square marker for camera tracking and estimates the camera position and pose with respect to the marker.
Using the camera position and pose, virtual objects are overlaid onto the images as if the objects existed in the real world where the marker is placed. Since the user only has to place the marker, this kind of marker-based registration makes AR systems very easy to implement. If only one marker is used, however, the camera's movable area is limited to positions from which the camera (user) can see the marker. Moreover, when the marker cannot be recognized properly because of a change in its visibility, the registration of the virtual objects becomes unstable. Using multiple markers is a popular way to solve these problems. When multiple markers are used in a vision-based method, the geometrical arrangement of the markers, such as their positions and poses, must be known in advance. For example, the method in [Umlauf et al., 2002] requires the position and pose of a square marker, and the method in [Kato et al., 2000] needs the position of a point marker in advance. In [Genc et al., 2002], a two-step approach was proposed: a learning process and
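The pose estimation that square-marker trackers such as ARToolKit perform can be sketched with the standard decomposition of a plane-to-image homography. This is the textbook derivation (the marker lies in the plane Z = 0, so the homography's columns are the first two rotation columns and the translation, scaled by the intrinsics), simplified by omitting the orthonormalization of the recovered rotation.

```python
import numpy as np

def pose_from_marker_homography(H, K):
    """Recover camera pose (R, t) from a planar-marker homography.

    H maps marker-plane coordinates (Z = 0) to image pixels; K is the
    3x3 camera intrinsic matrix. For noise-free H the recovered R is
    exactly a rotation; with real data it should be re-orthonormalized.
    """
    M = np.linalg.inv(K) @ H
    scale = np.linalg.norm(M[:, 0])      # first column should be unit r1
    M = M / scale
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    r3 = np.cross(r1, r2)                # complete the rotation basis
    R = np.column_stack([r1, r2, r3])
    return R, t
```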
multimedia signal processing | 2010
Francois de Sorbier; Yuko Uematsu; Hideo Saito
Stereoscopic displays are becoming very popular as more and more content becomes available. As an extension, auto-stereoscopic screens allow several users to watch stereoscopic images without wearing any glasses. For the moment, synthesized content is the easiest way to provide, in real time, all the multiple input images required by this kind of technology. However, live video is a very important issue in fields such as augmented reality, yet remains difficult to use with auto-stereoscopic displays. In this paper, we present a system in which a depth camera and a color camera are combined to produce the multiple input images in real time. The result of this approach can easily be used with any kind of auto-stereoscopic screen.
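Generating multiple viewpoints from one color-plus-depth pair is typically done by shifting each pixel by its disparity. A deliberately naive sketch of one such synthesized view follows; it ignores z-ordering of competing pixels and leaves disocclusions as holes, both of which a real system must handle.

```python
import numpy as np

def synthesize_view(color, depth, baseline, focal):
    """Naive depth-image-based rendering of one horizontally shifted view.

    Each pixel moves by disparity = focal * baseline / depth, as needed
    to feed the N input images of an auto-stereoscopic display.
    Disoccluded pixels are left as -1 (holes to be inpainted later).
    `color` and `depth` are (H, W) grayscale arrays.
    """
    h, w = color.shape
    out = np.full((h, w), -1.0)
    disparity = np.round(focal * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = color[y, x]
    return out
```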
virtual systems and multimedia | 2010
Francois de Sorbier; Yuki Takaya; Yuko Uematsu; Ismaël Daribo; Hideo Saito
This paper presents a capture system based on a depth camera that is used for an augmented reality application. Most depth cameras are unable to capture the color information corresponding to the same viewpoint. A color camera is therefore added beside the depth camera, and we apply a transformation algorithm to match the depth map with the color camera's viewpoint. Then, using a depth-image-based rendering (DIBR) approach, it becomes possible to synthesize new virtual views from the 2D-plus-depth data. We also address a research issue in the generation of the virtual views: dealing with the newly exposed areas, appearing as holes and denoted as occlusions, which may be revealed in each warped image. The color image and its corresponding enhanced depth image are then combined to produce a mesh representing the real scene, which makes it easy to integrate virtual objects into the real scene. Finally, the result can be rendered to create the input images required by an auto-stereoscopic screen.
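The depth-to-color registration step can be sketched as the usual back-project/transform/re-project loop. This is a generic illustration, not the paper's algorithm: `K_d`, `K_c`, `R`, and `t` are the assumed calibration between the two cameras, and the holes it leaves are exactly the occlusions discussed above.

```python
import numpy as np

def register_depth_to_color(depth, K_d, K_c, R, t):
    """Warp a depth map into the color camera's viewpoint (sketch).

    Each depth pixel is back-projected with the depth camera's
    intrinsics K_d, moved into the color camera's frame by (R, t), and
    re-projected with K_c. Pixels never hit stay 0 (holes to fill).
    """
    h, w = depth.shape
    out = np.zeros((h, w))
    Kd_inv = np.linalg.inv(K_d)
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:
                continue                                  # invalid depth
            p = z * (Kd_inv @ np.array([u, v, 1.0]))      # 3D, depth frame
            q = K_c @ (R @ p + t)                         # color frame
            uc = int(round(q[0] / q[2]))
            vc = int(round(q[1] / q[2]))
            if 0 <= uc < w and 0 <= vc < h:
                out[vc, uc] = q[2]          # depth as seen by color camera
    return out
```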
international conference on computer vision | 2012
Ruiko Miyano; Takuya Inoue; Takuya Minagawa; Yuko Uematsu; Hideo Saito
Augmented Reality (AR) on mobile phones has recently attracted attention because smartphones have become increasingly popular. For an AR system, we have to know the camera pose of the smartphone. A sensor-based method is one of the most popular ways to estimate the camera pose, but it cannot estimate the pose accurately. A vision-based method is another way to estimate the camera pose, but it is not suitable for scenes with few interest points, such as a sports field. In this paper, we propose a novel camera pose estimation method for scenes without interest points that combines sensor-based and vision-based approaches. In our proposed method, we use an acceleration sensor and a magnetic sensor to roughly estimate the camera pose, and then search for the accurate pose by matching the captured image against a set of reference images. Our experiments show that our proposed method is accurate and fast enough for a real-time AR system.
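The two-stage idea can be sketched as a prior-gated image search: the sensor estimate prunes the reference set, and image matching picks the winner. The pose representation, the SSD score, and the `window` threshold are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def refine_pose(captured, sensor_pose, references, window=0.5):
    """Hybrid sensor/vision pose estimation (sketch).

    `sensor_pose`: rough orientation angles from the acceleration and
    magnetic sensors. `references`: list of (pose, image) pairs
    prepared offline. Only references within `window` of the sensor
    estimate are compared (sum of squared differences) against the
    captured image; the best-matching reference pose is returned.
    """
    captured = np.asarray(captured, dtype=float)
    best_pose, best_score = sensor_pose, np.inf
    for pose, img in references:
        if np.max(np.abs(np.asarray(pose) - np.asarray(sensor_pose))) > window:
            continue                          # outside the sensor prior
        score = float(np.sum((np.asarray(img, dtype=float) - captured) ** 2))
        if score < best_score:
            best_pose, best_score = pose, score
    return best_pose
```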
international symposium on safety, security, and rescue robotics | 2008
Toshio Takeuchi; Yuko Uematsu; Hideo Saito; Yoshimitsu Aoki; Akihisa Ohya; Fumitoshi Matsuno; Iwaki Akiyama
The authors have studied a radar system with two-dimensional array antennas for finding survivors accidentally buried in rubble, operating from the top of the rubble in a noisy environment. This paper concentrates on two issues to improve the survivor-detection ability. One is a method for measuring the three-dimensional location of each antenna using ARToolKit. The other is a method for reducing impulsive noise in the received signals. Experimental results showed the feasibility of both methods.
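As one simple stand-in for the impulsive-noise-reduction step (the paper's actual filter is not specified here), a sliding median removes isolated spikes while preserving the slowly varying echo:

```python
import numpy as np

def remove_impulse_noise(signal, k=3):
    """Suppress impulsive noise in a received trace with a median filter.

    `k` is the (odd) window length. The signal is edge-padded so the
    output has the same length as the input; isolated spikes shorter
    than k // 2 + 1 samples are rejected by the median.
    """
    pad = k // 2
    padded = np.pad(np.asarray(signal, dtype=float), pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(len(signal))])
```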
international conference on artificial reality and telexistence | 2006
Yuko Uematsu; Hideo Saito
This paper presents the “On-line AR Baseball Presentation System”, a vision-based AR application for entertainment. In this system, a user can watch a virtual baseball game scene on a real baseball field model placed on a tabletop, through a web camera attached to a hand-held LCD monitor. The virtual baseball scene is synthesized from the input history data of an actual baseball game; visualizing this history data helps the user understand the content of the game. For aligning the coordinate system of the virtual baseball scene with that of the real field model, we use multiple planar markers manually distributed over the real field model. In contrast with most AR approaches using multiple markers, we do not need any manual measurement of the geometrical relationship of the markers, so the user can easily start and enjoy this system.
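Once the field model's coordinate frame is registered via the markers, drawing a virtual player reduces to projecting its 3D field position with the estimated 3x4 camera matrix. A minimal sketch of that projection (standard pinhole model, illustrative names):

```python
import numpy as np

def project_points(P, X):
    """Project 3D points into the image with a 3x4 camera matrix P.

    X: (N, 3) points in the registered field coordinate system.
    Returns (N, 2) pixel positions after perspective division, where
    each virtual player or ball would be drawn on the camera image.
    """
    Xh = np.column_stack([X, np.ones(len(X))])   # homogeneous coordinates
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]                  # perspective division
```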