Publication


Featured research published by Shinji Uchiyama.


international symposium on mixed and augmented reality | 2005

A hybrid and linear registration method utilizing inclination constraint

Daisuke Kotake; Kiyohide Satoh; Shinji Uchiyama; Hiroyuki Yamamoto

This paper describes a new hybrid vision-based registration method that uses an inclination sensor. In the method, a camera is tracked by solving linear equations under an inclination constraint. Linear operations are faster than nonlinear optimization, but their output does not usually satisfy the orthonormality constraint. The proposed method, in contrast, calculates the camera position and azimuth directly, so the result satisfies the orthonormality constraint. Many hybrid approaches using inertial sensors have been proposed for AR/MR, but such methods still depend on vision-based methods for initialization. The proposed method, on the other hand, is fully hybrid in that the inclination measured by the sensor is always incorporated into the pose calculation together with the visual information. The method can be used to initialize conventional hybrid methods as well as serve as an independent registration method, and it can be applied not only to inside-out-style camera tracking but also to outside-in-style object tracking.
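The linearization idea can be sketched as follows: with roll and pitch fixed by the inclination sensor, the only rotational unknown is the azimuth, and treating its cosine and sine as independent unknowns makes the pinhole projection equations linear in five parameters. This is a minimal numpy sketch under assumed conventions (unit focal length, world z as the gravity axis, names invented here), not the paper's implementation:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the world z (gravity) axis, i.e. the azimuth."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def solve_azimuth_translation(X, uv, R_inc):
    """Linear least-squares for azimuth and translation given the known
    inclination R_inc. Camera point: Xc = R_inc @ (Rz(theta) @ Xw) + t,
    which is linear in (cos theta, sin theta, tx, ty, tz)."""
    A_rows, b_rows = [], []
    for (x, y, z), (u, v) in zip(X, uv):
        M = R_inc @ np.array([[x, -y], [y, x], [0.0, 0.0]])  # (cos, sin) part
        k = R_inc @ np.array([0.0, 0.0, z])                  # constant part
        # u = Xc[0]/Xc[2] and v = Xc[1]/Xc[2] become two linear equations:
        A_rows.append([M[0, 0] - u * M[2, 0], M[0, 1] - u * M[2, 1], 1.0, 0.0, -u])
        b_rows.append(u * k[2] - k[0])
        A_rows.append([M[1, 0] - v * M[2, 0], M[1, 1] - v * M[2, 1], 0.0, 1.0, -v])
        b_rows.append(v * k[2] - k[1])
    w, *_ = np.linalg.lstsq(np.array(A_rows), np.array(b_rows), rcond=None)
    c, s = w[:2] / np.hypot(w[0], w[1])  # re-impose cos^2 + sin^2 = 1
    return np.arctan2(s, c), w[2:]
```

Because cos and sin are solved as free unknowns, the result is normalized afterwards; the recovered rotation is orthonormal by construction, which is the property the abstract highlights.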


international symposium on mixed and augmented reality | 2003

Robust vision-based registration utilizing bird's-eye view with user's view

Kiyohide Satoh; Shinji Uchiyama; Hiroyuki Yamamoto; Hideyuki Tamura

This paper describes new vision-based registration methods that use not only cameras on a user's head-mounted display but also a bird's-eye view camera that observes the user from an objective viewpoint. Two new methods, the line constraint method (LCM) and the global error minimization method (GEM), are proposed. The former reduces the number of unknown parameters of the user's viewpoint by restricting it to lie on the line of sight from the bird's-eye view. The latter minimizes the total error, the sum of distances between the fiducials observed in each view and their positions calculated from the current viewing parameters, over both the user's view and the bird's-eye view. The proposed methods reduce the number of points that must be observed from the user's viewpoint for registration, thus improving stability. In addition to theoretical discussions, this paper demonstrates the effectiveness of our methods through experiments comparing them with methods that use only a user's view camera or only a bird's-eye view camera.
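The GEM idea, one stacked reprojection error covering both views, can be sketched as a residual function for a nonlinear least-squares solver. Everything here is an assumed simplification (unit focal length, a single marker rigidly attached to the HMD standing in for the fiducials the bird's-eye camera observes, invented names), not the paper's formulation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(R, t, X):
    """Pinhole projection (unit focal length) of Nx3 world points."""
    Xc = (R @ X.T).T + t
    return Xc[:, :2] / Xc[:, 2:3]

def gem_residuals(p, X_world, uv_user, marker_cam, uv_bird, R_b, t_b):
    """Stacked reprojection errors over both views.
    p = [rotation vector (3), translation (3)]: user's world-to-camera pose."""
    R_u = Rotation.from_rotvec(p[:3]).as_matrix()
    t_u = p[3:]
    # Scene fiducials as seen in the user's view.
    r_user = project(R_u, t_u, X_world) - uv_user
    # A marker fixed in the user-camera frame, seen by the bird's-eye camera:
    # its world position follows from the inverse of the user's pose.
    marker_world = R_u.T @ (marker_cam - t_u)
    r_bird = project(R_b, t_b, marker_world[None, :]) - uv_bird
    return np.concatenate([r_user.ravel(), r_bird.ravel()])
```

A call such as `least_squares(gem_residuals, p0, args=(...))` then refines an initial pose estimate by driving both views' errors down together, which is the "global" part of GEM.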


international symposium on mixed and augmented reality | 2004

A head tracking method using bird's-eye view camera and gyroscope

Kiyohide Satoh; Shinji Uchiyama; Hiroyuki Yamamoto

This paper describes a new head tracking method that uses a gyroscope mounted on a head-mounted display (HMD) and a bird's-eye view camera that observes the HMD from a fixed third-person viewpoint. Furthermore, we propose an extension of this method to hybrid registration, combining it with user's view cameras. The HMD is equipped with a gyroscope and a marker. The gyroscope measures the orientation of the user's view camera, reducing the number of pose parameters to be solved; the remaining parameters are estimated from the bird's-eye view camera observing the marker. This method improves on the conventional outside-in-style vision-based tracker, which uses only visual information. Hence, it can be thought of as an alternative both to a physical head tracker such as a magnetic sensor and to an inside-out-style vision-based tracker. In addition to theoretical discussions, this paper demonstrates the effectiveness of our methods by experiments. We also propose methods for calibrating the gyroscope and the marker on the HMD, which are essential to implementing the tracking method.
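The parameter reduction has a neat consequence worth illustrating: once orientation is supplied by the gyroscope, the remaining position unknowns appear linearly in the projection equations, so the bird's-eye observations of the marker yield a tiny linear system. A hedged sketch (unit-focal pinhole model, names invented here; not the paper's exact solver):

```python
import numpy as np

def solve_position(R, X, uv):
    """With camera orientation R known (here: from the gyroscope), recover
    the translation t from 2D observations uv of known 3D marker points X.
    Camera point: Xc = R @ Xw + t, so each observation gives two equations
    linear in (tx, ty, tz)."""
    A, b = [], []
    for Xw, (u, v) in zip(X, uv):
        y = R @ Xw
        # u = (y0 + tx)/(y2 + tz)  ->  tx - u*tz = u*y2 - y0 (same for v)
        A.append([1.0, 0.0, -u]); b.append(u * y[2] - y[0])
        A.append([0.0, 1.0, -v]); b.append(v * y[2] - y[1])
    t, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return t
```

Two marker points already give four equations for three unknowns; more points simply make the least-squares estimate more robust to detection noise.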


symposium on haptic interfaces for virtual environment and teleoperator systems | 2007

Visuo-Haptic Systems: Half-Mirrors Considered Harmful

Christian Sandor; Shinji Uchiyama; Hiroyuki Yamamoto

In recent years, systems that allow users to see and touch virtual objects in the same space (visuo-haptic systems) have been investigated. Most research projects employ a half-mirror, while few use a video see-through head-mounted display (HMD). The work presented in this paper points out advantages of the HMD-based approach. First, we present an experiment that analyzes human performance in a target acquisition task, comparing a half-mirror system with an HMD system. Our main finding is that a half-mirror significantly reduces performance. Second, we present an HMD-based painting application that introduces new interaction techniques that could not be implemented with a half-mirror display. We believe that our findings could inspire other researchers employing a half-mirror to reconsider their approach.


international symposium on mixed and augmented reality | 2007

A Fast Initialization Method for Edge-based Registration Using an Inclination Constraint

Daisuke Kotake; Kiyohide Satoh; Shinji Uchiyama; Hiroyuki Yamamoto

We propose a hybrid camera pose estimation method that uses an inclination sensor value and correspondence-free line segments. In this method, possible azimuths of the camera pose are hypothesized by a voting method under an inclination constraint. Some camera positions are then calculated for each possible azimuth, based on the detected line segments that voted for it. Finally, the most consistent of the resulting position-azimuth pairs is selected as the camera pose. Unlike many other tracking methods, our method does not use past information but estimates the camera pose from present information only. This makes it useful as an initialization step for registration in augmented reality (AR) systems. This paper describes the details of the method and shows its effectiveness with experiments in which the method is actually used in an AR application.
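The azimuth-voting step can be illustrated with a one-dimensional Hough-style accumulator. This is a deliberate simplification of what the paper describes: real votes would be cast by line segments under the full projection model, whereas here observed and model line directions are compared directly, modulo pi:

```python
import numpy as np

def vote_azimuth(obs_dirs, model_dirs, n_bins=180):
    """Each (observed, model) direction pair votes for the azimuth that
    would rotate the model direction onto the observation; the fullest bin
    wins. Line directions are angles modulo pi (a line has no arrowhead).
    Correspondence-free: every pair votes, and clutter spreads thinly
    while the true azimuth accumulates consistent votes."""
    votes = np.zeros(n_bins)
    for a in obs_dirs:
        for m in model_dirs:
            theta = (a - m) % np.pi
            votes[int(theta / np.pi * n_bins) % n_bins] += 1
    best = int(np.argmax(votes))
    return (best + 0.5) * np.pi / n_bins  # bin centre
```

The same pattern, hypothesize cheaply by voting, then verify candidates by consistency, is what lets the method run without frame-to-frame tracking history.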


ieee-ras international conference on humanoid robots | 2007

Viewing and reviewing how humanoids sensed, planned and behaved with Mixed Reality technology

Kazuhiko Kobayashi; Koichi Nishiwaki; Shinji Uchiyama; Hiroyuki Yamamoto; Satoshi Kagami

How can we see how humanoids sensed, planned, and behaved in the actual environment? We propose a mixed reality environment for viewing and reviewing the internal parameters computed by humanoids, registered against the actual environment. The parameters captured at each sampling time are treated as data streams and stored in distributed log servers in real time. 3-D graphical representation of the parameters helps us observe multiple multidimensional parameters, and a video see-through head-mounted display is used for viewing this representation. The stored parameters can be projected onto the current actual scene from arbitrary viewpoints. With these viewing and reviewing functions, the mixed reality environment becomes a powerful tool for developers of autonomous behaviors, who can debug the actual behavior against the actual environment. This paper describes the implementation of the system with a full-size humanoid, HRP-2, and shows some experimental examples.


international conference on robotics and automation | 2008

Mixed reality environment for autonomous robot development

Koichi Nishiwaki; Kazuhiko Kobayashi; Shinji Uchiyama; Hiroyuki Yamamoto; Satoshi Kagami

This video demonstrates a mixed reality (MR) environment constructed for the development of autonomous robot behaviors. Realizing an autonomous behavior requires integrating many kinds of functions. For example, autonomous navigation of a humanoid robot needs environment recognition, localization and mapping, path planning, gait planning, dynamically stable biped walking pattern generation, and sensor-feedback stabilization of walking. The technologies behind each individual function have been investigated in many research works, but additional effort is required to construct an autonomous behavior by integrating them. We demonstrate an MR environment in which the internal status of a robot, such as sensor readings, recognition results, planning results, and motion control parameters, can be projected onto the environment and the robot's body. Using the proposed system, we can intuitively understand how each function works as part of the total system in the real environment, which helps solve the integration problems. The video clip presents an overview of the system, the projection of each internal status, and an application to an autonomous locomotion experiment.


international symposium on mixed and augmented reality | 2006

A registration evaluation system using an industrial robot

Kiyohide Satoh; Kazuki Takemoto; Shinji Uchiyama; Hiroyuki Yamamoto

This paper describes an evaluation system using an industrial robot, constructed for the purpose of evaluating registration technology for Mixed Reality. In this evaluation system, the tip of the robot arm plays the role of the user's head, where a head-mounted display is mounted. By using an industrial robot, we can obtain the ground truth of the camera pose with a high level of accuracy and robustness, and we can play back the same specified operations repeatedly under identical conditions. In addition to the system implementation, we propose evaluation methods for motion robustness, relative orientation robustness, relative distance robustness, and jitter, as well as an overall evaluation. We verify the validity of this system through experiments.
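One of the listed criteria, jitter, lends itself to a small sketch. The definition assumed here (RMS deviation of estimated positions while the arm holds a fixed pose, so any spread is estimator noise) is an illustration, not necessarily the paper's exact formula:

```python
import numpy as np

def jitter(positions):
    """RMS deviation of estimated camera positions from their mean,
    measured while the robot arm holds one fixed, ground-truth pose.
    Under that condition the spread is pure registration noise."""
    p = np.asarray(positions, dtype=float)
    d = p - p.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
```

The repeatability of the robot is what makes such a metric meaningful: the same trajectory can be replayed while only the registration method under test changes.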


international symposium on mixed and augmented reality | 2007

Overlay what Humanoid Robot Perceives and Thinks to the Real-world by Mixed Reality System

Kazuhiko Kobayashi; Koichi Nishiwaki; Shinji Uchiyama; Hiroyuki Yamamoto; Satoshi Kagami; Takeo Kanade

One of the difficulties in developing a humanoid robot is that intermediate results, such as how the robot perceives the environment and how it plans its path, are hard to observe online in the physical environment. All developers can see is the resulting behavior, so they usually investigate logged data afterwards to analyze how well each component worked, or which component failed in the total system. In this paper, we present a novel environment for robot development in which intermediate results of the system are overlaid on physical space using mixed reality technology. Real-time observation lets developers see intuitively in what situations specific intermediate results are generated, and understand how the results of one component affect the total system. This makes development efficient and precise. The environment also provides a human-robot interface that shows the robot's internal state intuitively, not only during development but also during operation.


Systems and Computers in Japan | 1996

The Delaunay triangulation for accurate three-dimensional graphic model

Hiroyuki Yamamoto; Shinji Uchiyama; Hideyuki Tamura

This paper presents a method to construct the Delaunay triangulation for creating a polygon patch model, used as a graphic model for applications such as computer graphics and virtual reality. The method is based on the conventional incremental method for constructing the Voronoi diagram, but has two characteristics that make it especially suitable for three-dimensional geometric modeling. The first is that a point can be removed from the mesh, which is useful for interactive modeling applications in which points are often removed and moved. The second is that the line segment between two points can be constrained to be an edge of the mesh, which is necessary for creating an accurate three-dimensional model; without this feature, the resulting mesh usually has a three-dimensional structure totally different from that of the real object. The paper describes the details of the method and a process for applying it to a radial range image, and presents experimental results and an evaluation.
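The editable-mesh behavior, points can be inserted and removed and the triangulation stays Delaunay, can be mimicked with a toy wrapper. Note the gap between this sketch and the paper: scipy's `Delaunay` is batch-only and unconstrained, so edits here simply re-triangulate (acceptable for small interactive point sets), and constrained edges are not supported at all:

```python
import numpy as np
from scipy.spatial import Delaunay

class EditableMesh:
    """Toy editable 2-D triangulation: each insert/remove rebuilds the
    Delaunay triangulation from scratch, standing in for the paper's
    true incremental update."""
    def __init__(self, pts):
        self.pts = [tuple(p) for p in pts]
        self._rebuild()

    def _rebuild(self):
        self.tri = Delaunay(np.asarray(self.pts, dtype=float))

    def insert(self, p):
        self.pts.append(tuple(p))
        self._rebuild()

    def remove(self, i):
        del self.pts[i]
        self._rebuild()
```

For example, a unit square with a centre point triangulates into four triangles; removing the centre leaves the two-triangle triangulation of the square, exactly the local re-stitching the paper's removal operation performs incrementally.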
