Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Benjamin Nuernberger is active.

Publication


Featured research published by Benjamin Nuernberger.


Virtual Reality Software and Technology | 2014

In touch with the remote world: remote collaboration with augmented reality drawings and virtual navigation

Steffen Gauglitz; Benjamin Nuernberger; Matthew Turk; Tobias Höllerer

Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.
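
As a rough illustration of the depth-inference step, the sketch below lifts each 2D stroke point into 3D by intersecting its viewing ray with scene geometry. This is not the paper's implementation: it assumes a pinhole camera, and a single plane stands in for the live 3D reconstruction.

```python
# Minimal sketch (not the paper's implementation): anchor a 2D freehand
# drawing in 3D by intersecting each stroke point's viewing ray with scene
# geometry. A single plane stands in for the live 3D reconstruction.
import numpy as np

def unproject_stroke(stroke_px, K, cam_pos, cam_R, plane_point, plane_normal):
    """Lift 2D stroke points (pixels) to world-stabilized 3D annotation points.

    stroke_px : (N, 2) pixel coordinates of the drawing
    K         : (3, 3) camera intrinsics
    cam_pos   : (3,) camera center in world coordinates
    cam_R     : (3, 3) rotation mapping camera coordinates to world coordinates
    plane_*   : proxy scene geometry (a point on the plane and its unit normal)
    """
    K_inv = np.linalg.inv(K)
    pts_3d = []
    for u, v in stroke_px:
        ray = cam_R @ (K_inv @ np.array([u, v, 1.0]))   # viewing ray in world frame
        ray /= np.linalg.norm(ray)
        denom = ray @ plane_normal
        if abs(denom) < 1e-9:                           # ray parallel to the plane
            continue
        t = ((plane_point - cam_pos) @ plane_normal) / denom
        if t > 0:                                       # keep hits in front of the camera
            pts_3d.append(cam_pos + t * ray)
    return np.array(pts_3d)

# Toy usage: camera at the origin looking down +Z at a plane 2 m away.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
stroke = np.array([[300, 220], [320, 240], [340, 260]])
print(unproject_stroke(stroke, K, np.zeros(3), np.eye(3),
                       np.array([0, 0, 2.0]), np.array([0, 0, 1.0])))
```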


Symposium on 3D User Interfaces | 2016

Interpreting 2D gesture annotations in 3D augmented reality

Benjamin Nuernberger; Kuo-Chin Lien; Tobias Höllerer; Matthew Turk

A 2D gesture annotation provides a simple way to annotate the physical world in augmented reality for a range of applications such as remote collaboration. When rendered from novel viewpoints, these annotations have previously only worked with statically positioned cameras or planar scenes. However, if the camera moves and is observing an arbitrary environment, 2D gesture annotations can easily lose their meaning when shown from novel viewpoints due to perspective effects. In this paper, we present a new approach towards solving this problem by using gesture-enhanced annotation interpretation. By first classifying which type of gesture the user drew, we show that it is possible to render the 2D annotations in 3D in a way that conforms more to the original intention of the user than with traditional methods. We first determined a generic vocabulary of important 2D gestures for an augmented reality enhanced remote collaboration scenario by running an Amazon Mechanical Turk study with 88 participants. Next, we designed a novel real-time method to automatically handle the two most common 2D gesture annotations - arrows and circles - and give a detailed analysis of the ambiguities that must be handled in each case. Arrow gestures are interpreted by identifying their anchor points and using scene surface normals for better perspective rendering. For circle gestures, we designed a novel energy function to help infer the object of interest using both 2D image cues and 3D geometric cues. Results indicate that our method outperforms previous approaches by better conveying the meaning of the original drawing from different viewpoints.
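
To illustrate the arrow case, the sketch below (an assumed simplification, not the paper's method) places an arrow at a 3D anchor point and constrains its direction to the surface's tangent plane using the scene normal, which is what keeps the rendering readable from new viewpoints.

```python
# Minimal sketch (assumed simplification, not the paper's method): anchor an
# arrow gesture at the 3D point its tip refers to, and flatten its direction
# into the surface's tangent plane using the local scene normal.
import numpy as np

def anchor_arrow(tip_3d, tail_3d, surface_normal):
    """Return (anchor, unit direction) for rendering a 3D arrow annotation.

    tip_3d         : 3D scene point hit by the ray through the 2D arrow tip
    tail_3d        : 3D point obtained for the arrow tail (e.g. by unprojection)
    surface_normal : unit normal of the scene surface at tip_3d
    """
    n = surface_normal / np.linalg.norm(surface_normal)
    d = tip_3d - tail_3d
    d_tangent = d - (d @ n) * n              # drop the out-of-plane component
    norm = np.linalg.norm(d_tangent)
    if norm < 1e-9:                          # degenerate: drawn along the normal
        d_tangent = np.cross(n, [1.0, 0.0, 0.0])
        norm = np.linalg.norm(d_tangent)
    return tip_3d, d_tangent / norm

anchor, direction = anchor_arrow(np.array([0.0, 0.0, 2.0]),
                                 np.array([-0.3, -0.1, 2.1]),
                                 np.array([0.0, 0.0, 1.0]))
print(anchor, direction)
```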


International Symposium on Mixed and Augmented Reality | 2015

Efficient Computation of Absolute Pose for Gravity-Aware Augmented Reality

Chris Sweeney; John Flynn; Benjamin Nuernberger; Matthew Turk; Tobias Höllerer

We propose a novel formulation for determining the absolute pose of a single or multi-camera system given a known vertical direction. The vertical direction may be easily obtained by detecting the vertical vanishing points with computer vision techniques, or with the aid of IMU sensor measurements from a smartphone. Our solver is general and able to compute absolute camera pose from two 2D-3D correspondences for single or multi-camera systems. We run several synthetic experiments that demonstrate our algorithm's improved robustness to image and IMU noise compared to the current state of the art. Additionally, we run an image localization experiment that demonstrates the accuracy of our algorithm in real-world scenarios. Finally, we show that our algorithm provides increased performance for real-time model-based tracking compared to solvers that do not utilize the vertical direction and show our algorithm in use with an augmented reality application running on a Google Tango tablet.
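
One way to see why a known vertical direction helps: if the camera frame is pre-rotated so its measured gravity vector coincides with the world vertical, only a yaw angle and a translation (4 degrees of freedom) remain, which is what makes two 2D-3D correspondences sufficient. The sketch below shows only that pre-alignment step, a standard Rodrigues construction, not the paper's solver.

```python
# Minimal sketch (not the paper's minimal solver): use a measured gravity
# direction to pre-align the camera's vertical axis with the world's, which
# leaves only a yaw rotation plus translation to be estimated from two
# 2D-3D correspondences.
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix that rotates unit vector a onto unit vector b (Rodrigues)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = a @ b
    if np.isclose(c, -1.0):                  # opposite vectors: 180-degree rotation
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-9:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# Example: gravity measured in the camera frame vs. the world "down" axis.
gravity_cam = np.array([0.1, -0.97, 0.2])
world_down = np.array([0.0, -1.0, 0.0])
R_align = rotation_aligning(gravity_cam, world_down)
print(R_align @ (gravity_cam / np.linalg.norm(gravity_cam)))  # ~ world_down
```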


Symposium on 3D User Interfaces | 2017

Evaluating gesture-based augmented reality annotation

Yun Suk Chang; Benjamin Nuernberger; Bo Luan; Tobias Höllerer

Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users drew on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and that Beautified annotations are drawn faster than Non-Beautified ones; participants also preferred Surface-Drawing and Beautified.
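
As an illustration of what beautified gesture input can look like, the sketch below replaces a wobbly, roughly circular 3D stroke with an ideal circle fitted to it. The fitting rule is an assumption for illustration, not the study's pipeline.

```python
# Minimal sketch (an assumption about the beautification step, not the study's
# pipeline): replace a raw, wobbly circle stroke with an ideal circle fitted to
# it, keeping the stroke's plane, center, and mean radius.
import numpy as np

def beautify_circle(stroke, samples=64):
    """Fit an ideal circle to a roughly circular 3D stroke (N x 3 array)."""
    center = stroke.mean(axis=0)
    centered = stroke - center
    # Best-fit plane normal = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[2]
    radius = np.linalg.norm(centered, axis=1).mean()
    # Orthonormal basis of the stroke's plane.
    u = vt[0]
    v = np.cross(normal, u)
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    return center + radius * (np.outer(np.cos(angles), u) + np.outer(np.sin(angles), v))

# Toy usage: a noisy circle of radius 0.5 drawn in the z = 2 plane.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
raw = np.stack([0.5 * np.cos(t), 0.5 * np.sin(t), np.full_like(t, 2.0)], axis=1)
raw += np.random.default_rng(0).normal(scale=0.02, size=raw.shape)
print(beautify_circle(raw)[:3])
```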


International Symposium on Mixed and Augmented Reality | 2015

[POSTER] 2D-3D Co-segmentation for AR-based Remote Collaboration

Kuo-Chin Lien; Benjamin Nuernberger; Matthew Turk; Tobias Höllerer

In Augmented Reality (AR) based remote collaboration, a remote user can draw a 2D annotation that emphasizes an object of interest to guide a local user in accomplishing a task. This annotation is typically performed only once and then sticks to the selected object in the local user's view, independent of his or her camera movement. In this paper, we present an algorithm to segment the selected object, including its occluded surfaces, such that the 2D selection can be appropriately interpreted in 3D and rendered as a useful AR annotation even when the local user moves and significantly changes the viewpoint.
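
For contrast, a naive baseline can be sketched: keep only the reconstructed 3D points whose projections fall inside the circled region and reproject them from the local user's new viewpoint. Such a visibility-based test cannot recover occluded surfaces, which is what the proposed segmentation adds; the function and scene below are illustrative assumptions.

```python
# Minimal sketch (naive baseline, not the paper's co-segmentation): keep the
# 3D points whose projections fall inside the circled 2D region, then
# reproject them from a new viewpoint so the selection follows the object as
# the local camera moves. Occluded surfaces are not recovered here.
import numpy as np

def project(points, K, R, t):
    """Project Nx3 world points with camera [R|t] and intrinsics K."""
    cam = points @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def select_points(points, K, R, t, circle_center_px, circle_radius_px):
    uv = project(points, K, R, t)
    dist = np.linalg.norm(uv - circle_center_px, axis=1)
    return points[dist < circle_radius_px]

# Toy scene: a small cluster of points 2 m in front of the annotation camera.
rng = np.random.default_rng(1)
scene = rng.normal(loc=[0.0, 0.0, 2.0], scale=0.1, size=(200, 3))
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
selected = select_points(scene, K, np.eye(3), np.zeros(3),
                         circle_center_px=np.array([320.0, 240.0]),
                         circle_radius_px=40.0)
# Re-render the selection from a second viewpoint shifted 0.5 m to the right.
uv_new = project(selected, K, np.eye(3), np.array([-0.5, 0.0, 0.0]))
print(len(selected), uv_new[:2])
```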


IEEE Virtual Reality Conference | 2017

Gesture-based augmented reality annotation

Yun Suk Chang; Benjamin Nuernberger; Bo Luan; Tobias Höllerer; John O'Donovan

Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. To explore and evaluate different 3D hand-gesture-based annotation drawing methods, we have developed an annotation drawing application using the HoloLens augmented reality development platform. The application can be used for highlighting objects at a distance and for multi-user collaboration through annotations in the real world.


Virtual Reality Software and Technology | 2016

Multi-view gesture annotations in image-based 3D reconstructed scenes

Benjamin Nuernberger; Kuo-Chin Lien; Lennon Grinta; Chris Sweeney; Matthew Turk; Tobias Höllerer

We present a novel 2D gesture annotation method for use in image-based 3D reconstructed scenes with applications in collaborative virtual and augmented reality. Image-based reconstructions allow users to virtually explore a remote environment using image-based rendering techniques. To collaborate with other users, either synchronously or asynchronously, simple 2D gesture annotations can be used to convey spatial information to another user. Unfortunately, prior methods are either unable to disambiguate such 2D annotations in 3D from novel viewpoints or require relatively dense reconstructions of the environment. In this paper, we propose a simple multi-view annotation method that is useful in a variety of scenarios and applicable to both very sparse and dense 3D reconstructions. Specifically, we employ interactive disambiguation of the 2D gestures via a second annotation drawn from another viewpoint, triangulating two drawings to achieve a 3D result. Our method automatically chooses an appropriate second viewpoint and uses image-based rendering transitions to keep the user oriented while moving to the second viewpoint. User experiments in an asynchronous collaboration scenario demonstrate the usability of the method and its superiority over a baseline method. In addition, we showcase our method running on a variety of image-based reconstruction datasets and highlight its use in a synchronous local-remote user collaboration system.
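
The triangulation of the two drawings can be illustrated with standard two-view geometry; the sketch below uses linear (DLT) triangulation with two known camera matrices and is an assumption about the geometric step, not the paper's code.

```python
# Minimal sketch (standard two-view triangulation, assumed to stand in for the
# paper's disambiguation step): each 2D gesture point drawn in the first view
# is paired with the corresponding point of the second drawing and
# triangulated using the two camera matrices, yielding a 3D annotation point.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 camera matrices."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy cameras: identical intrinsics, second camera centered 0.5 m along +x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 2.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # ~ [0.2, -0.1, 2.0]
```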


International Symposium on Mixed and Augmented Reality | 2016

PPV: Pixel-Point-Volume Segmentation for Object Referencing in Collaborative Augmented Reality

Kuo-Chin Lien; Benjamin Nuernberger; Tobias Höllerer; Matthew Turk

We present a method for collaborative augmented reality (AR) that enables users from different viewpoints to interpret object references specified via 2D on-screen circling gestures. Based on a user's 2D drawing annotation, the method segments out the user-selected object using an incomplete or imperfect scene model and the color image from the drawing viewpoint. Specifically, we propose a novel segmentation algorithm that utilizes both 2D and 3D scene cues, structured into a three-layer graph of pixels, 3D points, and volumes (supervoxels), solved via standard graph cut algorithms. This segmentation enables an appropriate rendering of the user's 2D annotation from other viewpoints in 3D augmented reality. Results demonstrate the superiority of the proposed method over existing methods.
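
A toy version of the layered graph can be built with an off-the-shelf min-cut solver. The node names and capacities below are purely illustrative stand-ins for the paper's unary and pairwise terms.

```python
# Minimal sketch (toy graph, not the paper's full energy): a three-layer
# pixel / 3D-point / supervoxel graph whose unary terms come from the 2D
# circling gesture and whose inter-layer edges encourage consistent labels,
# solved as an s-t minimum cut with networkx. Node names and capacities are
# illustrative only.
import networkx as nx

G = nx.DiGraph()

def add_edge(u, v, cap):
    # Symmetric capacities so the cut behaves like an undirected smoothness term.
    G.add_edge(u, v, capacity=cap)
    G.add_edge(v, u, capacity=cap)

# Unary terms: pixels inside the circled region prefer "object" (source s),
# pixels outside prefer "background" (sink t).
add_edge('s', 'pix_inside', 5.0)
add_edge('pix_outside', 't', 5.0)

# Layer links: pixels to the 3D points they observe, points to their supervoxel.
add_edge('pix_inside', 'pt_a', 3.0)
add_edge('pix_outside', 'pt_b', 3.0)
add_edge('pt_a', 'vol_1', 2.0)
add_edge('pt_b', 'vol_1', 2.0)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
object_nodes = source_side - {'s'}     # nodes labeled as the selected object
print(cut_value, sorted(object_nodes))
```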


IEEE Virtual Reality Conference | 2016

Anchoring 2D gesture annotations in augmented reality

Benjamin Nuernberger; Kuo-Chin Lien; Tobias Höllerer; Matthew Turk

Augmented reality enhanced collaboration systems often allow users to draw 2D gesture annotations onto video feeds to help collaborators complete physical tasks. This works well for static cameras, but for movable cameras, perspective effects cause problems when trying to render 2D annotations from a new viewpoint in 3D. In this paper, we present a new approach towards solving this problem by using gesture-enhanced annotations. By first classifying which type of gesture the user drew, we show that it is possible to render annotations in 3D in a way that conforms more to the original intention of the user than with traditional methods. We first determined a generic vocabulary of important 2D gestures for remote collaboration by running an Amazon Mechanical Turk study with 88 participants. Next, we designed a novel system to automatically handle the top two 2D gesture annotations - arrows and circles. Arrows are handled by identifying their anchor points and using surface normals for better perspective rendering. For circles, we designed a novel energy function to help infer the object of interest using both 2D image cues and 3D geometric cues. Results indicate that our approach outperforms previous methods in terms of better conveying the original drawing's meaning from different viewpoints.
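
A crude stand-in for the gesture classification step is a closure test on the 2D stroke; the heuristic below is an assumption for illustration, not the classifier used in the paper.

```python
# Minimal sketch (simple heuristic stand-in, not the paper's classifier):
# call a 2D stroke a circle if its endpoints nearly meet relative to its
# length, otherwise treat it as an arrow whose anchor is the final point.
import numpy as np

def classify_gesture(stroke, closure_ratio=0.2):
    """stroke: (N, 2) array of 2D points in drawing order."""
    seg = np.diff(stroke, axis=0)
    path_len = np.linalg.norm(seg, axis=1).sum()
    endpoint_gap = np.linalg.norm(stroke[-1] - stroke[0])
    if path_len > 0 and endpoint_gap / path_len < closure_ratio:
        return 'circle'
    return 'arrow'

t = np.linspace(0, 2 * np.pi, 30)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
arrow = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2]])
print(classify_gesture(circle), classify_gesture(arrow))
```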


Symposium on 3D User Interfaces | 2014

Poster: Investigating viewpoint visualizations for click & go navigation

Benjamin Nuernberger; Steffen Gauglitz; Tobias Höllerer; Matthew Turk

We present an investigation of viewpoint visualizations for “Click & Go” 3D navigation interfaces based on a pre-populated set of viewpoints. These scenarios often occur in 3D navigation systems that are based on sets of photos and possibly an underlying 3D reconstruction. Given these photos (and the 3D reconstruction), how does one most effectively navigate through this environment? Existing systems often employ Click & Go interfaces which allow users to navigate with one click of the mouse or tap of the finger. In this work, we investigate viewpoint visualizations for such Click & Go interfaces, describing a preliminary user study and providing valuable insights into Click & Go and its viewpoint visualizations.
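
One plausible way a Click & Go interface can map a single click to a destination viewpoint is sketched below; the scoring rule (distance to the clicked point plus a facing term) is an illustrative assumption, not the interface evaluated in the study.

```python
# Minimal sketch (assumed selection rule, not the study's interface): given a
# click that has been lifted to a 3D scene point via the reconstruction, pick
# the pre-populated viewpoint that both sits close to that point and faces it.
import numpy as np

def choose_viewpoint(clicked_point, cam_positions, cam_forwards, w_facing=1.0):
    """Score each stored viewpoint by distance to the clicked 3D point and by
    how directly its viewing direction faces that point; return the best index."""
    to_point = clicked_point - cam_positions
    dist = np.linalg.norm(to_point, axis=1)
    facing = np.einsum('ij,ij->i', to_point / dist[:, None], cam_forwards)
    score = dist - w_facing * facing          # lower is better
    return int(np.argmin(score))

# Toy usage: three stored viewpoints around a point 2 m down the +Z axis.
cams = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 0.0, 4.0]])
fwd = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
print(choose_viewpoint(np.array([0.0, 0.0, 2.0]), cams, fwd))
```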

Collaboration


Dive into Benjamin Nuernberger's collaborations.

Top Co-Authors

Matthew Turk (University of California)
Kuo-Chin Lien (University of California)
Bo Luan (University of California)
Chris Sweeney (University of California)
Yun Suk Chang (University of California)