
Publication


Featured research published by Ohan Oda.


Intelligent Technologies for Interactive Entertainment | 2008

Developing an augmented reality racing game

Ohan Oda; Levi Lister; Sean White; Steven Feiner

Augmented reality (AR) makes it possible to create games in which virtual objects are overlaid on the real world, and real objects are tracked and used to control virtual ones. We describe the development of an AR racing game created by modifying an existing racing game, using an AR infrastructure that we developed for use with the XNA game development platform. In our game, the driver wears a tracked video see-through head-worn display, and controls the car with a passive tangible controller. Other players can participate by manipulating waypoints that the car must pass and obstacles with which the car can collide. We discuss our AR infrastructure, which supports the creation of AR applications and games in a managed code environment, the user interface we developed for the AR racing game, the game's software and hardware architecture, and feedback and observations from early demonstrations.


User Interface Software and Technology | 2015

Virtual Replicas for Remote Assistance in Virtual and Augmented Reality

Ohan Oda; Carmine Elvezio; Mengu Sukan; Steven K. Feiner; Barbara Tversky

In many complex tasks, a remote subject-matter expert may need to assist a local user to guide actions on objects in the local user's environment. However, effective spatial referencing and action demonstration in a remote physical environment can be challenging. We introduce two approaches that use Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display. Both approaches allow the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. This can be especially useful for parts that are occluded or difficult to access. In one approach, the expert points in 3D to portions of virtual replicas to annotate them. In another approach, the expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains, comparing both approaches to an approach in which the expert uses a 2D tablet-based drawing system similar to ones developed for prior work on remote assistance. The study showed the 3D demonstration approach to be faster than the others. In addition, the 3D pointing approach was faster than the 2D tablet in the case of a highly trained expert.


IEEE International Conference on Pervasive Computing and Communications | 2009

Social mobile Augmented Reality for retail

Sinem Guven; Ohan Oda; Mark Podlaseck; Harry Stavropoulos; Sai Kolluri; Gopal Pingali

Consumers are increasingly relying on web-based social content, such as product reviews, prior to making a purchase. Recent surveys in the retail industry confirm that social content is indeed the #1 aid in a buying decision. Currently, accessing or adding to this valuable web-based social content repository is mostly limited to computers far removed from the site of the shopping experience itself. We present a mobile Augmented Reality application, which extends such social content from the computer monitor into the physical world through mobile phones, providing consumers with in situ information on products right when and where they need to make buying decisions.


User Interface Software and Technology | 2014

ParaFrustum: visualization techniques for guiding a user to a constrained set of viewing positions and orientations

Mengu Sukan; Carmine Elvezio; Ohan Oda; Steven K. Feiner; Barbara Tversky

Many tasks in real or virtual environments require users to view a target object or location from one of a set of strategic viewpoints to see it in context, avoid occlusions, or view it at an appropriate angle or distance. We introduce ParaFrustum, a geometric construct that represents this set of strategic viewpoints and viewing directions. ParaFrustum is inspired by the look-from and look-at points of a computer graphics camera specification, which precisely delineate a location for the camera and a direction in which it looks. We generalize this approach by defining a ParaFrustum in terms of a look-from volume and a look-at volume, which establish constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DoF pose when it is not required by the task. We describe two visualization techniques for virtual or augmented reality that guide a user to assume one of the poses defined by a ParaFrustum, and present the results of a user study measuring the performance of these techniques. The study shows that the constraints of a tightly constrained ParaFrustum (e.g., approximating a conventional camera frustum) require significantly more time to satisfy than those of a loosely constrained one. The study also reveals interesting differences in participant trajectories in response to the two techniques.
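The look-from/look-at constraint test can be sketched in code. The sketch below is a minimal illustration, not the paper's implementation: it assumes both volumes are spheres (the paper supports more general volumes), and all names and parameters are hypothetical. A pose satisfies the ParaFrustum when the eye lies inside the look-from volume and the gaze ray passes through the look-at volume.

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))

def satisfies_parafrustum(eye, gaze_dir,
                          from_center, from_radius,
                          at_center, at_radius):
    """True if `eye` lies in the look-from sphere and the gaze ray
    from `eye` along `gaze_dir` passes through the look-at sphere."""
    n = _norm(gaze_dir)
    d = tuple(c / n for c in gaze_dir)  # normalize gaze direction

    # Constraint 1: eye position inside the look-from volume.
    if _norm(_sub(eye, from_center)) > from_radius:
        return False

    # Constraint 2: gaze ray intersects the look-at volume
    # (closest point on the ray is within the sphere's radius).
    to_target = _sub(at_center, eye)
    t = _dot(to_target, d)
    if t < 0:  # look-at volume is behind the viewer
        return False
    closest = tuple(e + t * c for e, c in zip(eye, d))
    return _norm(_sub(closest, at_center)) <= at_radius
```

Loosening the two radii widens the set of acceptable poses, which is the tolerance the paper shows users can satisfy faster.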


Symposium on 3D User Interfaces | 2013

Poster: 3D referencing for remote task assistance in augmented reality

Ohan Oda; Mengu Sukan; Steven Feiner; Barbara Tversky

We present a 3D referencing technique tailored for remote maintenance tasks in augmented reality. The goal is to improve the accuracy and efficiency with which a remote expert can point out a real physical object at a local site to a technician at that site. In a typical referencing task, the remote expert instructs the local technician to navigate to a location from which a target object can be viewed, and then to attend to that object. The expert and technician both wear head-tracked, stereo, see-through, head-worn displays, and the expert's hands are tracked by a set of depth cameras. The remote expert first selects one of a set of prerecorded viewpoints of the local site, and a representation of that viewpoint is presented to the technician to help them navigate to the correct position and orientation. The expert then uses hand gestures to indicate the target.


International Symposium on Mixed and Augmented Reality | 2012

3D referencing techniques for physical objects in shared augmented reality

Ohan Oda; Steven Feiner

We introduce an augmented reality referencing technique for shared environments that is designed to improve the accuracy with which one user can point out a real physical object to another user. Our technique, GARDEN (Gesturing in an Augmented Reality Depth-mapped ENvironment), is intended for use in otherwise unmodeled environments in which objects in the environment, and the hand of the user performing a selection, are interactively observed by a depth camera, and users wear tracked see-through displays. We present the results of a user study that compares GARDEN against existing augmented reality referencing techniques, as well as the use of a physical laser pointer. GARDEN performed significantly more accurately than all the comparison techniques when the participating users have sufficiently different views of the scene, and significantly more accurately than one of these techniques when the participating users have similar perspectives.


International Symposium on Mixed and Augmented Reality | 2011

Creating hybrid user interfaces with a 2D multi-touch tabletop and a 3D see-through head-worn display

Nicolas J. Dedual; Ohan Oda; Steven Feiner

How can multiple different display and interaction devices be used together to create an effective augmented reality environment? We explore the design of several prototype hybrid user interfaces that combine a 2D multi-touch tabletop display with a 3D head-tracked video-see-through display. We describe a simple modeling application and an urban visualization tool in which the information presented on the head-worn display supplements the information displayed on the tabletop, using a variety of approaches to track the head-worn display relative to the tabletop. In all cases, our goal is to allow users who can see only the tabletop to interact effectively with users wearing head-worn displays.


International Symposium on Mixed and Augmented Reality | 2009

Interference avoidance in multi-user hand-held augmented reality

Ohan Oda; Steven Feiner

In a multi-user augmented reality application for a shared physical environment, it is possible for users to interfere with each other. For example, in a multi-player game in which each player holds a display whose tracked position and orientation affect the outcome, one player may physically block another player's view or physically contact another player. We explore software techniques intended to avoid such interference. These techniques modify what a user sees or hears, and what interaction capabilities they have, when their display gets too close to another user's display. We present Redirected Motion, an effective, yet nondistracting, interference avoidance technique for hand-held AR, which transforms the 3D space in which the user moves their display, to direct the display away from other displays. We conducted a within-subject, formal user study to evaluate the effectiveness and distraction level of Redirected Motion compared to other interference avoidance techniques. The study is based on an instrumented, two-player, first-person-shooter, augmented reality game, in which each player holds a 6DOF-tracked ultra-mobile computer. Comparison conditions include an unmanipulated control condition and three other software techniques for avoiding interference: dimming the display, playing disturbing sounds, and disabling interaction capabilities. Subjective evaluation indicates that Redirected Motion was unnoticeable, and quantitative analysis shows that the mean distance between users during Redirected Motion was significantly larger than for the comparison conditions.
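The core idea of redirecting motion can be illustrated with a toy sketch: as two displays approach each other, the virtual-space mapping is nudged so that each user naturally moves their display apart. The distance threshold, the linear gain curve, and every name below are assumptions for illustration; the published technique's space transformation is more elaborate.

```python
import math

def redirect(own_pos, other_pos, min_dist=0.3, max_push=0.1):
    """Return an offset (in meters) to add to the rendered position of
    this user's display, pushing the virtual content away from the
    other display once physical distance drops below `min_dist`."""
    delta = tuple(a - b for a, b in zip(own_pos, other_pos))
    dist = math.sqrt(sum(c * c for c in delta))
    if dist >= min_dist or dist == 0.0:
        return (0.0, 0.0, 0.0)  # far enough apart: no redirection
    # Gain grows smoothly from 0 at min_dist toward max_push at contact,
    # keeping the manipulation small enough to go unnoticed.
    scale = max_push * (1.0 - dist / min_dist)
    return tuple(c / dist * scale for c in delta)
```

A small, smoothly varying offset like this is what lets the manipulation stay below the user's noticing threshold while still increasing inter-user distance.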


International Conference on Computer Graphics and Interactive Techniques | 2005

Fast dynamic fracture of brittle objects

Ohan Oda; Stephen Chenney

Fracture is an important feature for computer games to enhance interactivity and realism. Due to its high computational expense, most computer games do not provide accurate fracturing features. Those that do typically pre-calculate cracks ahead of time in order to process the break instantaneously, but then only a small number of possible outcomes are available, regardless of how the object is hit. The ideal fracture model for computer games should be fast (things should break instantaneously when there is a collision) and respond to user actions somewhat realistically (if a ball hits the corner of a brick wall, the corner should break off, but not the center of the wall). Most existing fracture modeling techniques were designed for non-real-time computer animations (e.g. [O’Brien and Hodgins August 1999; Smith et al. 2001]) and are both more expensive and more realistic than necessary for computer games. Our model adds a novel multi-stage dynamic refinement scheme to Smith et al. [2001] to reduce the computational cost while retaining the realism of fracture.
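One way to picture the idea of multi-stage dynamic refinement is that elements near the impact point receive more subdivision stages than distant ones, bounding the per-frame cost. The sketch below is only an illustration of that distance-banded idea; the stage counts, radii, and function name are invented and do not come from the paper.

```python
import math

def refinement_stages(element_center, impact_point,
                      radii=(0.05, 0.15, 0.4)):
    """Return how many subdivision stages an element receives, based on
    its distance from the impact point: the innermost band gets the
    most stages, elements outside the outermost band get none."""
    d = math.dist(element_center, impact_point)
    for band, r in enumerate(radii):
        if d <= r:
            return len(radii) - band  # closer band -> more stages
    return 0
```

Concentrating subdivision where the crack actually forms is what keeps the break instantaneous at interactive rates.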


User Interface Software and Technology | 2010

ARmonica: a collaborative sonic environment

Mengu Sukan; Ohan Oda; Xiang Shi; Manuel Entrena; Shrenik Sadalgi; Jie Qi; Steven Feiner

ARmonica is a 3D audiovisual augmented reality environment in which players can position and edit virtual bars that play sounds when struck by virtual balls launched under the influence of physics. Players experience ARmonica through head-tracked head-worn displays and tracked hand-held ultramobile personal computers, and interact through tracked Wii remotes and touch-screen taps. The goal is for players to collaborate in the creation and editing of an evolving sonic environment. Research challenges include supporting walk-up usability without sacrificing deeper functionality.

Collaboration


Dive into Ohan Oda's collaborations.

Top Co-Authors

Jie Qi

Columbia University
