Jonathan Mooser
University of Southern California
Publications
Featured research published by Jonathan Mooser.
Interactive 3D Graphics and Games | 2005
John P. Lewis; Jonathan Mooser; Zhigang Deng; Ulrich Neumann
Blendshapes (linear shape interpolation models) are perhaps the most commonly employed technique in facial animation practice. A major problem in creating blendshape animation is that of blendshape interference: the adjustment of a single blendshape slider may degrade the effects obtained with previous slider movements, because the blendshapes have overlapping, non-orthogonal effects. Because models used in commercial practice may have 100 or more individual blendshapes, the interference problem is the subject of considerable manual effort. Modelers iteratively resculpt models to reduce interference where possible, and animators must compensate for those interference effects that remain. In this short paper we consider the blendshape interference problem from a linear algebra point of view. We find that while full orthogonality is not desirable, the goal of preserving previous adjustments to the model can be effectively approached by allowing the user to temporarily designate a set of points as representative of the previous (desired) adjustments. We then simply solve for blendshape slider values that mimic desired new movement while moving these tagged points as little as possible. The resulting algorithm is easy to implement and demonstrably reduces cases of blendshape interference found in existing models.
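The core idea, solving for slider values that achieve a new movement while keeping tagged points still, can be pictured as a small regularized least-squares problem. The numpy sketch below is an illustrative rendering of that idea, not the authors' implementation; the matrix names, the tagged-point penalty term, and the weight alpha are assumptions introduced for this example.

```python
import numpy as np

def solve_weights(B_target, delta_target, B_tagged, w_prev, alpha=10.0):
    """Hypothetical sketch of the regularized solve described in the abstract.

    B_target     : (3m, n) blendshape offsets at the m vertices the animator
                   wants to move.
    delta_target : (3m,) desired displacement of those vertices.
    B_tagged     : (3k, n) blendshape offsets at the k vertices tagged as
                   representing previous (desired) adjustments.
    w_prev       : (n,) current slider values.
    alpha        : strength of the "keep tagged points still" penalty.
    """
    # Stack the two objectives into one least-squares system:
    #   minimize || B_target (w - w_prev) - delta_target ||^2
    #          + alpha * || B_tagged (w - w_prev) ||^2
    A = np.vstack([B_target, np.sqrt(alpha) * B_tagged])
    b = np.concatenate([delta_target, np.zeros(B_tagged.shape[0])])
    dw, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w_prev + dw
```

Larger alpha values trade fidelity to the new movement for better preservation of the previous adjustments, which is the balance the abstract describes.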
International Symposium on Mixed and Augmented Reality | 2007
Jonathan Mooser; Suya You; Ulrich Neumann
We present an efficient and accurate object tracking algorithm based on the concept of graph cut segmentation. The ability to track visible objects in real-time provides an invaluable tool for the implementation of markerless Augmented Reality. Once an object has been detected, its location in future frames can be used to position virtual content, and thus annotate the environment. Unlike many object tracking algorithms, our approach does not rely on a preexisting 3D model or any other information about the object or its environment. It takes, as input, a set of pixels representing an object in an initial frame and uses a combination of optical flow and graph cut segmentation to determine the corresponding pixels in each future frame. Experiments show that our algorithm robustly tracks objects of disparate shapes and sizes over hundreds of frames, and can even handle difficult cases where an object contains many of the same colors as its background. We further show how this technology can be applied to practical AR applications.
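As a rough illustration of the flow-then-segment idea, the sketch below propagates a binary object mask with dense optical flow and refines it with OpenCV's grabCut, a graph-cut segmenter. This is not the paper's energy formulation or implementation; Farneback flow and grabCut are stand-ins chosen because they are readily available.

```python
import cv2
import numpy as np

def track_mask(prev_gray, gray, frame_bgr, prev_mask):
    """Propagate a binary object mask from the previous frame to the
    current one: optical flow gives a rough prediction, a graph-cut
    segmentation refines the boundary."""
    # Dense optical flow from the previous frame to the current one.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.nonzero(prev_mask)
    # Warp the foreground pixels forward along the estimated flow.
    xs2 = np.clip((xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    ys2 = np.clip((ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    # Seed the segmentation: warped pixels are probable foreground,
    # everything else probable background.
    gc_mask = np.full((h, w), cv2.GC_PR_BGD, np.uint8)
    gc_mask[ys2, xs2] = cv2.GC_PR_FGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame_bgr, gc_mask, None, bgd, fgd, 3, cv2.GC_INIT_WITH_MASK)
    return np.isin(gc_mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```

Running this per frame, with the returned mask fed back in as prev_mask, gives the kind of model-free, initialization-only tracking loop the abstract outlines.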
Interactive 3D Graphics and Games | 2008
Jonathan Mooser; Suya You; Ulrich Neumann
We present a three degree-of-freedom control designed for viewing large documents and images on a mobile device equipped with a camera. Tracking natural features detected in the camera's field of view, we can roughly estimate the motion of the device, using the results to scroll and zoom the current document. Central to our implementation is the manner by which we amplify the motion, allowing the user to scroll through large portions of the document with minimal hand movement. Then, using a Hidden Markov Model, we determine when the user is scrolling, zooming, or some combination of the two, thus providing smoother, more fluid control. We demonstrate a prototype of our 3DOF control that can easily navigate documents that are many times larger than the display area, and show how it might be incorporated into a larger document retrieval application.
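A minimal sketch of the front half of such a control loop is shown below, assuming OpenCV feature tracking: a similarity transform fitted to tracked features yields a translation (scroll) and scale (zoom) signal, which is then amplified. The gain value and function name are illustrative, and the Hidden Markov Model that arbitrates between scrolling and zooming is omitted.

```python
import cv2
import numpy as np

def estimate_scroll_zoom(prev_gray, gray, gain=4.0):
    """Rough per-frame scroll/zoom estimate from tracked natural features.
    Returns an amplified (dx, dy, zoom) triple; the gain stands in for the
    motion amplification described in the abstract."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return 0.0, 0.0, 1.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    if good.sum() < 10:
        return 0.0, 0.0, 1.0
    # A 4-DOF similarity fit: its translation drives scrolling,
    # its scale drives zooming.
    M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good],
                                       method=cv2.RANSAC)
    if M is None:
        return 0.0, 0.0, 1.0
    dx, dy = M[0, 2], M[1, 2]
    scale = float(np.hypot(M[0, 0], M[0, 1]))
    return gain * dx, gain * dy, 1.0 + gain * (scale - 1.0)
```

In a full system, the raw (dx, dy, scale) observations would be fed to the state classifier before being applied to the document view.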
International Conference on Multimedia and Expo | 2007
Jonathan Mooser; Lu Wang; Suya You; Ulrich Neumann
Recent years have seen growing interest in mobile augmented reality. The ability to retrieve information and display it as virtual content overlaid on top of an image of the real world is a natural extension to a mobile device equipped with a camera and wireless connectivity. Such applications need to address a number of technical hurdles including target recognition and camera pose estimation. Beyond these fundamental challenges, however, there is the problem of data connectivity and user interface presentation. How do we connect mobile clients to multiple data sources that may be changing in real time and provide the user with a flexible interface for navigating through the relevant content? We present an AR user interface framework specifically designed to expose disparate data sources through a single application server. Our proposed system uses a multi-tier architecture to separate back-end data retrieval from front-end graphical presentation and UI event handling. We describe how it might be used to build an oil platform equipment maintenance system, as an example of a collaborative, data-driven mobile application.
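The separation the abstract describes can be pictured as in the hypothetical sketch below: back-end adapters for each data source, an application server that aggregates them, and a thin client that only renders the merged result. All class names and the sample payload are invented for illustration and are not the paper's API.

```python
from abc import ABC, abstractmethod
from typing import Dict, List


class DataSource(ABC):
    """Back-end tier: one adapter per external data source
    (sensor feed, maintenance database, ...)."""
    @abstractmethod
    def query(self, target_id: str) -> Dict[str, str]:
        ...


class EquipmentStatusSource(DataSource):
    def query(self, target_id: str) -> Dict[str, str]:
        # Stand-in for a live query against plant telemetry.
        return {"pressure": "ok", "last_service": "unknown"}


class ApplicationServer:
    """Middle tier: aggregates every registered source behind one call,
    so mobile clients never talk to the back ends directly."""
    def __init__(self, sources: List[DataSource]) -> None:
        self.sources = sources

    def annotations_for(self, target_id: str) -> Dict[str, str]:
        merged: Dict[str, str] = {}
        for source in self.sources:
            merged.update(source.query(target_id))
        return merged


# Front-end tier (the AR client) only renders what the server returns.
server = ApplicationServer([EquipmentStatusSource()])
print(server.annotations_for("pump-17"))
```

The point of the split is that new data sources can be added behind the server without touching the mobile client's rendering or event-handling code.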
International Conference on Multimedia and Expo | 2006
Jonathan Mooser; Suya You; Ulrich Neumann
Visual markers, or fiducials, have become one of the most common methods of camera pose estimation in augmented reality (AR). Many present-day fiducial-based AR systems use arbitrary patterns, such as simple line drawings or alphanumeric characters, and require that an application be trained to recognize its pattern set. These techniques work well on a small scale, but as the number of fiducials grows, accuracy and performance degrade. We describe a new fiducial design called TriCodes that, like a barcode, provides a systematic way of printing and identifying a vast library of patterns. We compare TriCodes to the popular ARToolKit package, demonstrating its advantages in the presence of large numbers of fiducials.
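The barcode analogy, systematically mapping a numeric ID to a distinct, machine-checkable pattern rather than training on arbitrary artwork, can be illustrated with the toy encoder below. The abstract does not describe the TriCodes geometry, so the bit layout and parity check here are purely hypothetical.

```python
def encode_id(marker_id: int, payload_bits: int = 12) -> list:
    """Map a marker ID to a distinct bit pattern with a parity check,
    illustrating the barcode-like property the abstract highlights
    (the real TriCodes layout is geometric and not reproduced here)."""
    assert 0 <= marker_id < (1 << payload_bits)
    bits = [(marker_id >> i) & 1 for i in range(payload_bits)]
    bits.append(sum(bits) % 2)          # parity bit for a basic validity check
    return bits


def decode_id(bits: list) -> int:
    """Invert encode_id, rejecting patterns that fail the parity check."""
    payload, parity = bits[:-1], bits[-1]
    if sum(payload) % 2 != parity:
        raise ValueError("corrupted pattern")
    return sum(b << i for i, b in enumerate(payload))
```

With 12 payload bits this already yields 4096 distinct, verifiable IDs without any per-pattern training, which is the scaling property contrasted with trained pattern sets.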
Workshop on Applications of Computer Vision | 2009
Jonathan Mooser; Suya You; Ulrich Neumann; Quan Wang
We demonstrate a complete system for markerless augmented reality using robust structure from motion. The proposed system includes two main components. The first is a means of learning the appearance of complex 3D objects and augmenting them with virtual annotations. Its output is a database of recognizable landmarks along with 3D descriptions of accompanying virtual objects. The second component uses this data to recognize the previously learned landmarks, recover camera pose, and render the associated virtual content. Both components make use of the recently developed subtrack optimization algorithm for structure from motion, which we demonstrate to be a useful tool both for learning the structure of objects and for tracking camera pose after recognition. The complete system is demonstrated on several complex real-world examples.
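A toy version of the landmark database component might look like the sketch below, which stores feature descriptors per learned object together with its annotation and recognizes objects by descriptor matching. ORB features and brute-force matching are substitutes chosen for brevity; they are not the recognition machinery or the subtrack-based structure from motion used in the paper.

```python
import cv2

class LandmarkDatabase:
    """Toy stand-in for the 'recognizable landmarks plus virtual content'
    database described in the abstract."""
    def __init__(self):
        self.orb = cv2.ORB_create(nfeatures=1000)
        self.entries = []          # (name, descriptors, annotation)

    def learn(self, name, image_gray, annotation):
        _, desc = self.orb.detectAndCompute(image_gray, None)
        if desc is not None:
            self.entries.append((name, desc, annotation))

    def recognize(self, image_gray, min_matches=25):
        _, desc = self.orb.detectAndCompute(image_gray, None)
        if desc is None:
            return None
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        best = None
        for name, ref_desc, annotation in self.entries:
            matches = matcher.match(ref_desc, desc)
            if len(matches) >= min_matches and (
                    best is None or len(matches) > best[0]):
                best = (len(matches), name, annotation)
        return None if best is None else best[1:]
```

In the full system, a successful recognition would additionally trigger pose recovery so the stored virtual content can be rendered in the correct place.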
International Symposium on Mixed and Augmented Reality | 2009
Wei Guan; Lu Wang; Jonathan Mooser; Suya You; Ulrich Neumann
We present a robust camera pose estimation approach for stereo images captured in untextured environments. Unlike most existing registration algorithms, which are point-based and rely on the intensities of neighboring pixels, our approach incorporates line segments into the registration process. With line segments as primitives, the proposed algorithm can handle untextured images such as scenes captured in man-made environments, as well as cases with large viewpoint or illumination changes. Furthermore, because the proposed algorithm is robust to wide-baseline stereo pairs, the accuracy of 3D point reconstruction improves. With accurately computed camera poses and object positions in 3D space, we can embed virtual objects into the existing scene with higher accuracy for realistic effects. In our experiments, 2D labels are embedded in the 3D scene to achieve annotation effects as in AR.
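Only the reconstruction and annotation tail of this pipeline is easy to sketch from the abstract; the line-based registration itself is not. The snippet below, under the assumption that a line segment has already been matched across a calibrated stereo pair, triangulates its endpoints with OpenCV and projects a label anchor back into an image.

```python
import cv2
import numpy as np

def triangulate_segment(P_left, P_right, seg_left, seg_right):
    """Given one line segment matched across a calibrated stereo pair
    (endpoints in pixel coordinates), recover its two endpoints in 3D.
    P_left / P_right are the 3x4 camera projection matrices."""
    pts_l = np.asarray(seg_left, dtype=float).T    # 2x2: endpoints as columns
    pts_r = np.asarray(seg_right, dtype=float).T
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)   # 4x2 homogeneous
    return (X_h[:3] / X_h[3]).T                    # two 3D endpoints


def project_label_anchor(P, endpoints_3d):
    """Project the segment's 3D midpoint back into an image so a 2D
    annotation can be pinned to the reconstructed segment."""
    mid = endpoints_3d.mean(axis=0)
    uvw = P @ np.append(mid, 1.0)
    return uvw[:2] / uvw[2]
```

This reflects the annotation step the abstract mentions; the line-segment matching and pose estimation that precede it are the paper's actual contribution and are not reproduced here.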
Asian Conference on Computer Vision | 2009
Jonathan Mooser; Suya You; Ulrich Neumann; Raphael Grasset; Mark Billinghurst
We present a novel algorithm for improving the accuracy of structure from motion on video sequences. Its goal is to efficiently recover scene structure and camera pose by using dynamic programming to maximize the lengths of putative keypoint tracks. By efficiently discarding poor correspondences while maintaining the largest possible set of inliers, it ultimately provides a robust and accurate scene reconstruction. Traditional outlier detection strategies, such as RANSAC and its derivatives, cannot handle high-dimensional problems such as structure from motion over long image sequences. We prove that, given an estimate of the camera pose at a given frame, the outlier detection is optimal and runs in low-order polynomial time. The algorithm is applied online, processing each frame in sequential order. Results are presented on several indoor and outdoor video sequences processed both with and without the proposed optimization. The improvement in average reprojection errors demonstrates its effectiveness.
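The flavor of the track-level dynamic program can be conveyed with the toy example below: for a single keypoint track with per-frame reprojection errors, it keeps the longest chain of observations that stay under an error threshold and are never separated by a large frame gap. The thresholds and the chain criterion are assumptions for illustration; the paper's actual objective, maximizing track lengths given a pose estimate, is richer than this.

```python
def best_subtrack(errors, err_thresh=2.0, max_gap=3):
    """Select the longest chain of observations of one keypoint track such
    that every kept observation reprojects within err_thresh pixels and no
    two consecutive kept observations are more than max_gap frames apart.
    A simple longest-chain dynamic program, used as a stand-in for the
    paper's subtrack optimization."""
    ok = [i for i, e in enumerate(errors) if e < err_thresh]
    if not ok:
        return []
    best = [1] * len(ok)          # best[j]: longest chain ending at ok[j]
    prev = [-1] * len(ok)         # prev[j]: predecessor index for backtracking
    for j in range(1, len(ok)):
        for i in range(j):
            if ok[j] - ok[i] <= max_gap and best[i] + 1 > best[j]:
                best[j], prev[j] = best[i] + 1, i
    # Recover the winning chain of frame indices.
    j = max(range(len(ok)), key=lambda t: best[t])
    chain = []
    while j != -1:
        chain.append(ok[j])
        j = prev[j]
    return chain[::-1]
```

For example, best_subtrack([0.4, 0.6, 9.0, 0.5, 0.7]) keeps frames 0, 1, 3, and 4 and drops the gross outlier at frame 2.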
Archive | 2008
Jonathan Mooser; Quan Wang; Suya You; Ulrich Neumann
Archive | 2008
John P. Lewis; Jonathan Mooser; Zhigang Deng; Ulrich Neumann