Ronald Azuma
HRL Laboratories
Publications
Featured research published by Ronald Azuma.
Presence: Teleoperators & Virtual Environments | 1997
Ronald Azuma
This paper surveys the field of augmented reality (AR), in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality.
IEEE Computer Graphics and Applications | 2001
Ronald Azuma; Yohan Baillot; Reinhold Behringer; Steven Feiner; Simon J. Julier; Blair MacIntyre
In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer the reader to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies.
international conference on computer graphics and interactive techniques | 1994
Ronald Azuma; Gary Bishop
In Augmented Reality, see-through HMDs superimpose virtual 3D objects on the real world. This technology has the potential to enhance a user's perception of and interaction with the real world. However, many Augmented Reality applications will not be accepted until we can accurately register virtual objects with their real counterparts. In previous systems, such registration was achieved only from a limited range of viewpoints, when the user kept his head still. This paper offers improved registration in two areas. First, our system demonstrates accurate static registration across a wide variety of viewing angles and positions. An optoelectronic tracker provides the required range and accuracy. Three calibration steps determine the viewing parameters. Second, dynamic errors that occur when the user moves his head are reduced by predicting future head locations. Inertial sensors mounted on the HMD aid head-motion prediction. Accurate determination of prediction distances requires low-overhead operating systems and eliminating unpredictable sources of latency. On average, prediction with inertial sensors produces errors 2-3 times lower than prediction without inertial sensors and 5-10 times lower than using no prediction at all. Future steps that may further improve registration are outlined.
ieee virtual reality conference | 1999
Suya You; Ulrich Neumann; Ronald Azuma
The biggest single obstacle to building effective augmented reality (AR) systems is the lack of accurate wide-area sensors for trackers that report the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of natural or man-made interference sources. Vision-based systems can use passive landmarks, but they are more computationally demanding and often exhibit erroneous behavior due to occlusion or numerical instability. Inertial sensors are completely passive, requiring no external devices or targets; however, the drift rates in portable strapdown configurations are too great for practical use. In this paper, we present a hybrid approach to AR tracking that integrates inertial and vision-based technologies. We exploit the complementary nature of the two technologies to compensate for the weaknesses in each component. Analysis and experimental results demonstrate this system's effectiveness.
Communications of The ACM | 1993
Ronald Azuma
In this issue, Fitzmaurice and Feiner describe two different augmented-reality systems. Such systems require highly capable head and object trackers to create an effective illusion of virtual objects coexisting with the real world. For ordinary virtual environments that completely replace the real world with a virtual world, it suffices to know the approximate position and orientation of the user's head. Small errors are not easily discernible because the user's visual sense tends to override the conflicting signals from his or her vestibular and proprioceptive systems. But in augmented reality, virtual objects supplement rather than supplant the real world. Preserving the illusion that the two coexist requires proper alignment and registration of the virtual objects to the real world. Even tiny errors in registration are easily detectable by the human visual system.
What does augmented reality require from trackers to avoid such errors? First, a tracker must be accurate to a small fraction of a degree in orientation and a few millimeters (mm) in position. Errors in measured head orientation usually cause larger registration offsets than object orientation errors do, making this requirement more critical for systems based on Head-Mounted Displays (HMDs). Try the following simple demonstration. Take out a dime and hold it at arm's length. The diameter of the dime covers approximately 1.5 degrees of arc; in comparison, a full moon covers 1/2 degree of arc. Now imagine a virtual coffee cup sitting on the corner of a real table two meters away from you. An angular error of 1.5 degrees in head orientation moves the cup by about 52 mm. Clearly, small orientation errors could result in a cup suspended in midair or interpenetrating the table. Similarly, if we want the cup to stay within 1 to 2 mm of its true position, then we cannot tolerate tracker positional errors of more than 1 to 2 mm.
[Figure 1: Conceptual drawing of sensors viewing beacons in the ceiling.]
Second, the combined latency of the tracker and the graphics engine must be very low. Combined latency is the delay from the time the tracker subsystem takes its measurements to the time the corresponding images appear in the display devices. Many HMD-based systems have a combined latency over 100 milliseconds (ms). At a moderate head or object rotation rate of 50 degrees per second, 100 ms of latency causes 5 degrees of angular error. At a rapid rate …
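The back-of-the-envelope numbers above can be checked with a few lines of arithmetic. This is a sketch for illustration; the function names are ours, not from the article:

```python
import math

def registration_offset_mm(angular_error_deg, distance_m):
    """Lateral offset of a virtual object caused by an angular
    head-orientation error, at a given viewing distance."""
    return 1000.0 * distance_m * math.tan(math.radians(angular_error_deg))

def latency_error_deg(rotation_rate_deg_s, latency_ms):
    """Angular error accumulated while the tracker-to-display
    pipeline lags behind a rotating head."""
    return rotation_rate_deg_s * latency_ms / 1000.0

# A 1.5-degree orientation error moves a cup 2 m away by ~52 mm.
print(round(registration_offset_mm(1.5, 2.0)))   # -> 52

# 100 ms of latency at 50 deg/s of head rotation -> 5 degrees of error.
print(latency_error_deg(50.0, 100.0))            # -> 5.0
```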
IEEE Computer Graphics and Applications | 1999
Suya You; Ulrich Neumann; Ronald Azuma
Our work stems from a program focused on developing tracking technologies for wide-area augmented realities in unprepared outdoor environments. Other participants in the Defense Advanced Research Projects Agency (DARPA) funded Geospatial Registration of Information for Dismounted Soldiers (GRIDS) program included the University of North Carolina at Chapel Hill and Raytheon. We describe a hybrid orientation tracking system combining inertial sensors and computer vision. We exploit the complementary nature of these two sensing technologies to compensate for their respective weaknesses. Our multiple-sensor fusion is novel in augmented reality tracking systems, and the results demonstrate its utility.
ieee virtual reality conference | 1999
Ronald Azuma; Bruce Hoff; Howard Neely; Ron Sarfaty
Almost all previous Augmented Reality (AR) systems work indoors. Outdoor AR systems offer the potential for new application areas. However, building an outdoor AR system is difficult due to portability constraints, the inability to modify the environment, and the greater range of operating conditions. We demonstrate a hybrid tracker that stabilizes an outdoor AR display with respect to user motion, achieving more accurate registration than previously shown in an outdoor AR system. The hybrid tracker combines rate gyros with a compass and tilt orientation sensor in a near real-time system. Sensor distortions and delays required compensation to achieve good results. The measurements from the two sensors are fused together to compensate for each other's limitations. From static locations with moderate head rotation rates, peak registration errors are approximately 2 degrees, with typical errors under 1 degree, although errors can become larger over long time periods due to compass drift. Without our stabilization, even small motions make the display nearly unreadable.
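The abstract does not give the fusion algorithm itself. A complementary filter is one standard way to combine a drifting but smooth rate gyro with an absolute but noisy compass; the sketch below is our own illustration of that general idea, not the authors' code, and the function name and parameters are assumptions:

```python
def complementary_filter(gyro_rates, compass_headings, dt, alpha=0.98):
    """Fuse a rate gyro (smooth short-term, drifts long-term) with a
    compass heading (absolute, but noisy). alpha close to 1 trusts the
    integrated gyro estimate; the (1 - alpha) compass term slowly pulls
    the estimate back, correcting gyro drift. Angles in degrees,
    gyro rates in deg/s, dt in seconds."""
    heading = compass_headings[0]   # initialize from the absolute sensor
    out = []
    for rate, compass in zip(gyro_rates, compass_headings):
        gyro_estimate = heading + rate * dt
        heading = alpha * gyro_estimate + (1.0 - alpha) * compass
        out.append(heading)
    return out

# Stationary head at a true heading of 90 degrees: a biased gyro
# (+1 deg/s of drift) alone would wander off, but the compass term
# holds the fused estimate close to 90 degrees.
est = complementary_filter([1.0] * 200, [90.0] * 200, dt=0.02)
```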
interactive 3d graphics and games | 1992
Mark Ward; Ronald Azuma; Robert Bennett; Stefan Gottschalk; Henry Fuchs
An optoelectronic head-tracking system for head-mounted displays is described. The system features a scalable work area that currently measures 10 ft x 12 ft, a measurement update rate of 20-100 Hz with 20-60 ms of delay, and a resolution specification of 2 mm and 0.2 degrees. The sensors consist of four head-mounted imaging devices that view infrared light-emitting diodes (LEDs) mounted in a 10 ft x 12 ft grid of modular 2 ft x 2 ft suspended ceiling panels. Photogrammetric techniques allow the head's location to be expressed as a function of the known LED positions and their projected images on the sensors. The work area is scaled by simply adding panels to the ceiling's grid. Discontinuities that occurred when changing working sets of LEDs were reduced by carefully managing all error sources, including LED placement tolerances, and by adopting an overdetermined mathematical model for the computation of head position: space resection by collinearity. The working system was demonstrated in the Tomorrow's Realities gallery at the ACM SIGGRAPH 91 conference.
international conference on computer graphics and interactive techniques | 1995
Ronald Azuma; Gary Bishop
The use of prediction to eliminate or reduce the effects of system delays in Head-Mounted Display systems has been the subject of several recent papers. A variety of methods have been proposed, but almost all the analysis has been empirical, making comparisons of results difficult and providing little direction to the designer of new systems. In this paper, we characterize the performance of two classes of head-motion predictors by analyzing them in the frequency domain. The first predictor is a polynomial extrapolation and the other is based on the Kalman filter. Our analysis shows that even with perfect, noise-free inputs, the error in predicted position grows rapidly with increasing prediction intervals and input signal frequencies. Given the spectra of the original head motion, this analysis estimates the spectra of the predicted motion, quantifying a predictor's performance on different systems and applications. Acceleration sensors are shown to be more useful to a predictor than velocity sensors. The methods described will enable designers to determine maximum acceptable system delay based on maximum tolerable error and the characteristics of user motions in the application.
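As an illustration of the first predictor class, here is a minimal second-order polynomial extrapolator, plus a small numeric check that, even with perfect noise-free derivatives, prediction error grows with both the prediction interval and the motion frequency, consistent with the paper's conclusion. This sketch is ours, not the authors' code:

```python
import math

def predict(pos, vel, acc, dt):
    """Second-order polynomial extrapolation of head motion:
    p(t + dt) = p + v*dt + 0.5*a*dt^2."""
    return pos + vel * dt + 0.5 * acc * dt * dt

def peak_error(freq_hz, dt):
    """Worst-case prediction error over one period of unit-amplitude
    sinusoidal head motion, with exact (noise-free) position, velocity,
    and acceleration fed to the predictor. The residual is the Taylor
    remainder, which grows with both dt and frequency."""
    w = 2.0 * math.pi * freq_hz
    worst = 0.0
    for i in range(1000):
        t = i / 1000.0 / freq_hz            # sample one full motion period
        true_future = math.sin(w * (t + dt))
        predicted = predict(math.sin(w * t),
                            w * math.cos(w * t),
                            -w * w * math.sin(w * t), dt)
        worst = max(worst, abs(predicted - true_future))
    return worst

# Longer prediction intervals and faster head motion both hurt:
# peak_error(1 Hz, 100 ms) > peak_error(1 Hz, 50 ms), and
# peak_error(2 Hz, 50 ms)  > peak_error(1 Hz, 50 ms).
```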
international symposium on mixed and augmented reality | 2002
Chris Furmanski; Ronald Azuma; Michael J. Daily
One unique feature of mixed and augmented reality (MR/AR) systems is that hidden and occluded objects can be readily visualized. We call this specialized use of MR/AR obscured information visualization (OIV). In this paper, we describe the beginning of a research program designed to develop such visualizations through the use of principles derived from perceptual psychology and cognitive science. We survey the cognitive science literature as it applies to such visualization tasks, describe experimental questions derived from these cognitive principles, and generate general guidelines that can be used in designing future OIV systems (as well as improving AR displays more generally). We also report the results from an experiment that used a functioning AR-OIV system: in relative depth judgments, subjects reported rendered objects as being in front of real-world objects, except when additional occlusion and motion cues were presented together.