Suya You
University of Southern California
Publications
Featured research published by Suya You.
IEEE Virtual Reality Conference | 1999
Suya You; Ulrich Neumann; Ronald Azuma
The biggest single obstacle to building effective augmented reality (AR) systems is the lack of accurate wide-area sensors for trackers that report the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of natural or man-made interference sources. Vision-based systems can use passive landmarks, but they are more computationally demanding and often exhibit erroneous behavior due to occlusion or numerical instability. Inertial sensors are completely passive, requiring no external devices or targets; however, the drift rates in portable strapdown configurations are too great for practical use. In this paper, we present a hybrid approach to AR tracking that integrates inertial and vision-based technologies. We exploit the complementary nature of the two technologies to compensate for the weaknesses in each component. Analysis and experimental results demonstrate this system's effectiveness.
IEEE Virtual Reality Conference | 2001
Suya You; Ulrich Neumann
A novel framework enables accurate augmented reality (AR) registration with integrated inertial gyroscope and vision tracking technologies. The framework includes a two-channel complementary motion filter that combines the low-frequency stability of vision sensors with the high-frequency tracking of gyroscope sensors, hence achieving stable static and dynamic six-degree-of-freedom pose tracking. Our implementation uses an extended Kalman filter (EKF). Quantitative analysis and experimental results show that the fusion method achieves dramatic improvements in tracking stability and robustness over either sensor alone. We also demonstrate a new fiducial design and detection system in our example AR annotation systems that illustrate the behavior and benefits of the new tracking method.
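The two-channel idea described above can be sketched in a few lines. This is an illustrative complementary filter, not the authors' implementation (which uses an EKF): the gyroscope channel integrates angular rate for high-frequency response, and the vision channel pulls the estimate back toward a drift-free reference at low frequency. The blend weight `alpha` is a hypothetical tuning parameter.

```python
# Minimal one-axis complementary filter: fuse gyro rate (high-frequency)
# with a vision-derived angle (low-frequency, drift-free).
def complementary_filter(angle_prev, gyro_rate, vision_angle, dt, alpha=0.98):
    """Return a fused orientation estimate (radians) for one axis."""
    gyro_angle = angle_prev + gyro_rate * dt          # integrate angular rate
    return alpha * gyro_angle + (1 - alpha) * vision_angle  # vision correction

# Simulate a stationary camera whose gyro has a constant bias of 0.01 rad/s.
# Pure integration would drift by 0.1 rad over 10 s; the vision channel
# bounds the error instead.
angle = 0.0
for _ in range(1000):
    angle = complementary_filter(angle, gyro_rate=0.01, vision_angle=0.0, dt=0.01)
```

With the correction active, the bias-induced error settles near `alpha * bias * dt / (1 - alpha)` rather than growing without limit, which is the static-stability property the abstract describes.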
IEEE Computer Graphics and Applications | 1999
Suya You; Ulrich Neumann; Ronald Azuma
Our work stems from a program focused on developing tracking technologies for wide-area augmented realities in unprepared outdoor environments. Other participants in the Defense Advanced Research Projects Agency (DARPA)-funded Geospatial Registration of Information for Dismounted Soldiers (GRIDS) program included the University of North Carolina at Chapel Hill and Raytheon. We describe a hybrid orientation tracking system combining inertial sensors and computer vision. We exploit the complementary nature of these two sensing technologies to compensate for their respective weaknesses. Our multiple-sensor fusion is novel in augmented reality tracking systems, and the results demonstrate its utility.
IEEE Transactions on Multimedia | 1999
Ulrich Neumann; Suya You
Natural scene features stabilize and extend the tracking range of augmented reality (AR) pose-tracking systems. We develop robust computer vision methods to detect and track natural features in video images. Point and region features are automatically and adaptively selected for properties that lead to robust tracking. A multistage tracking algorithm produces accurate motion estimates, and the entire system operates in a closed-loop that stabilizes its performance and accuracy. We present demonstrations of the benefits of using tracked natural features for AR applications that illustrate direct scene annotation, pose stabilization, and extendible tracking range. Our system represents a step toward integrating vision with graphics to produce robust wide-area augmented realities.
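The phrase "selected for properties that lead to robust tracking" can be made concrete with a small sketch. A common criterion (in the spirit of Shi-Tomasi feature selection, used here as an assumption rather than the paper's exact method) scores each candidate pixel by the smaller eigenvalue of the local gradient covariance: corners score high, edges and flat regions score low. Window size and the synthetic image are illustrative.

```python
# Score pixels by the smaller eigenvalue of the 2x2 gradient covariance
# matrix over a small window; strong responses mark trackable point features.
def min_eig_score(img, x, y, r=1):
    """Smaller eigenvalue of the gradient covariance around (x, y)."""
    sxx = sxy = syy = 0.0
    for j in range(y - r, y + r + 1):
        for i in range(x - r, x + r + 1):
            gx = img[j][i + 1] - img[j][i - 1]   # central differences
            gy = img[j + 1][i] - img[j - 1][i]
            sxx += gx * gx; sxy += gx * gy; syy += gy * gy
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = max(tr * tr / 4 - det, 0.0) ** 0.5
    return tr / 2 - disc   # corners have BOTH eigenvalues large

# Synthetic 8x8 image containing a bright square; its corner at (4, 4)
# should be the most trackable point.
img = [[1.0 if (x >= 4 and y >= 4) else 0.0 for x in range(8)] for y in range(8)]
scores = {(x, y): min_eig_score(img, x, y) for x in range(2, 6) for y in range(2, 6)}
best = max(scores, key=scores.get)
```

Pixels on the square's straight edges have one dominant gradient direction, so their smaller eigenvalue is near zero; only the corner scores well, which is exactly the "adaptive selection" behavior the abstract refers to.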
IEEE Computer Graphics and Applications | 2003
Jinhui Hu; Suya You; Ulrich Neumann
Large-scale urban modeling technologies use a variety of sensors and data acquisition techniques. The authors categorize current approaches and describe their advantages and disadvantages. Their survey examines current research with respect to several performance criteria including data acquisition sources, user interaction level, geometric fidelity, model completeness, and intended applications. Although modeling systems vary with respect to these criteria, data acquisition strongly influences model characteristics and usefulness. We therefore cluster the methods into those based on photogrammetry, active sensors, and hybrid sensor systems.
First ACM SIGMM International Workshop on Video Surveillance | 2003
Ismail Oner Sebe; Jinhui Hu; Suya You; Ulrich Neumann
Recent advances in sensing and computing technologies have inspired a new generation of data analysis and visualization systems for video surveillance applications. We present a novel visualization system for video surveillance based on an Augmented Virtual Environment (AVE) that fuses dynamic imagery with 3D models in a real-time display to help observers comprehend multiple streams of temporal data and imagery from arbitrary views of the scene. This paper focuses on our recent technical extensions to our AVE system, including moving object detection, tracking, and 3D display for effective dynamic event comprehension and situational awareness. Moving objects are detected and tracked in video sequences and visualized as pseudo-3D elements in the AVE scene display in real-time. We show results that illustrate the utility and benefits of these new capabilities.
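A minimal sketch of the moving-object detection step described above, assuming a simple running-average background model (the paper's actual detector may differ): pixels that deviate from the maintained background beyond a threshold are flagged as moving. The threshold and learning rate are illustrative values.

```python
# Background subtraction: flag pixels far from a slowly updated background.
def detect_moving(frame, background, thresh=0.2, rate=0.05):
    """Return a binary motion mask and the updated background (both 2D lists)."""
    mask, new_bg = [], []
    for f_row, b_row in zip(frame, background):
        mask.append([abs(f - b) > thresh for f, b in zip(f_row, b_row)])
        new_bg.append([(1 - rate) * b + rate * f for f, b in zip(f_row, b_row)])
    return mask, new_bg

# Static 4x4 background with one "object" pixel appearing in the new frame.
bg = [[0.0] * 4 for _ in range(4)]
frame = [row[:] for row in bg]
frame[1][2] = 1.0                       # the moving object
mask, bg = detect_moving(frame, bg)
moving = [(y, x) for y in range(4) for x in range(4) if mask[y][x]]
```

Detected regions like `moving` would then be tracked across frames and rendered as pseudo-3D elements in the AVE scene display.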
International Conference on Computational Science and Its Applications | 2003
Suya You; Jinhui Hu; Ulrich Neumann; Pamela Fox
This paper presents a complete modeling system that extracts complex building structures with irregular shapes and surfaces. Our modeling approach is based on the use of airborne LiDAR, which offers a fast and effective way to acquire models for a large urban environment. To verify and refine the reconstructed ragged model, we present a primitive-based model refinement approach that requires minor user assistance. Given the limited user input, the system automatically segments the building boundary, refines the model, and assembles the complete building model. By adapting a set of appropriate geometric primitives and fitting strategies, the system can model a range of complex buildings with irregular shapes. We demonstrate this system's ability to model a variety of complex buildings rapidly and accurately from LiDAR data of the entire USC campus.
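One ingredient of primitive-based refinement can be illustrated with the simplest primitive: fitting a planar roof patch z = ax + by + c to LiDAR samples by least squares. The normal equations and Cramer's-rule solver below are a generic sketch; the paper's actual primitive library and fitting strategies are richer.

```python
# Least-squares fit of the plane z = a*x + b*y + c to (x, y, z) samples,
# solving the 3x3 normal equations by Cramer's rule.
def fit_plane(points):
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    d = det3(A)
    sol = []
    for col in range(3):          # Cramer's rule: replace one column at a time
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = rhs[r]
        sol.append(det3(M) / d)
    return tuple(sol)             # (a, b, c)

# Samples taken exactly from the plane z = 0.5x - 0.25y + 2.
pts = [(x, y, 0.5 * x - 0.25 * y + 2.0) for x in range(4) for y in range(4)]
a, b, c = fit_plane(pts)
```

With noisy LiDAR returns the same solver yields the best-fit plane in the least-squares sense, and the residuals provide a natural check for whether the chosen primitive actually matches the roof.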
Computers & Graphics | 1999
Ronald Azuma; Jong Weon Lee; Bolan Jiang; Jun Park; Suya You; Ulrich Neumann
Many augmented reality applications require accurate tracking. Existing tracking techniques require prepared environments to ensure accurate results. This paper motivates the need to pursue augmented reality tracking techniques that work in unprepared environments, where users are not allowed to modify the real environment, such as in outdoor applications. Accurate tracking in such situations is difficult, requiring hybrid approaches. This paper summarizes two 3-DOF results: a real-time system with a compass-inertial hybrid, and a non-real-time system fusing optical and inertial inputs. We then describe the preliminary results of 5- and 6-DOF tracking methods run in simulation. Future work and limitations are described.
IEEE Virtual Reality Conference | 2004
Bolan Jiang; Ulrich Neumann; Suya You
We present a real-time hybrid tracking system that integrates gyroscopes and line-based vision tracking technology. Gyroscope measurements are used to predict orientation and image line positions. Gyroscope drift is corrected by vision tracking. System robustness is achieved by using a heuristic control system to evaluate measurement quality and select measurements accordingly. Experiments show that the system achieves robust, accurate, and real-time performance for outdoor augmented reality.
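The gating logic implied by "evaluate measurement quality and select measurements accordingly" can be sketched as follows. This is a generic validation gate, assumed here for illustration rather than taken from the paper: the gyroscope predicts where a tracked line should appear, and a vision measurement is accepted only if it falls within a gate around that prediction; otherwise the prediction is kept, preventing outliers from corrupting the estimate. The gate width is an illustrative parameter.

```python
# Accept a vision measurement only if it lies within a gate around the
# gyro-predicted position; otherwise fall back on the prediction.
def gated_update(predicted, measured, gate=5.0):
    """Return the fused value and whether the measurement was accepted."""
    if abs(measured - predicted) <= gate:
        return measured, True     # trusted measurement corrects gyro drift
    return predicted, False       # outlier rejected; keep the prediction

pos, ok = gated_update(predicted=100.0, measured=102.0)        # inlier
bad, bad_ok = gated_update(predicted=100.0, measured=140.0)    # outlier
```

Rejected measurements leave drift temporarily uncorrected, which is why a heuristic of this kind trades a little accuracy for robustness against mismatched lines.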
IEEE Virtual Reality Conference | 2003
Ulrich Neumann; Suya You; Jinhui Hu; Bolan Jiang; Jong Weon Lee
An augmented virtual environment (AVE) fuses dynamic imagery with 3D models. The AVE provides a unique approach to visualize and comprehend multiple streams of temporal data or images. Models are used as a 3D substrate for the visualization of temporal imagery, providing improved comprehension of scene activities. The core elements of AVE systems include model construction, sensor tracking, real-time video/image acquisition, and dynamic texture projection for 3D visualization. This paper focuses on the integration of these components and the results that illustrate the utility and benefits of the resulting augmented virtual environment.
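The dynamic texture projection step can be sketched with basic pinhole-camera math: each model vertex is projected through the video camera's 3x4 matrix, and the resulting image coordinates index into the live video frame. The canonical camera matrix below (focal length 1, camera at the origin) is a made-up example, not a calibration from the paper.

```python
# Project a 3D model vertex through a 3x4 camera matrix to obtain the
# image/texture coordinate where the live video "paints" that vertex.
def project(P, v):
    """Project point v = (x, y, z) through 3x4 matrix P; return (u, w)."""
    x, y, z = v
    h = [sum(P[r][c] * val for c, val in enumerate((x, y, z, 1.0)))
         for r in range(3)]
    return h[0] / h[2], h[1] / h[2]    # perspective divide

P = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]                     # canonical pinhole camera

# A vertex 4 units in front of the camera, offset 2 right and 1 up,
# projects to (0.5, 0.25) in the image plane.
u, v = project(P, (2.0, 1.0, 4.0))
```

Doing this per vertex every frame, with the tracked camera pose folded into `P`, is what lets temporal imagery drape over the 3D substrate as the sensors move.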