Publications


Featured research published by Mengu Sukan.


User Interface Software and Technology | 2015

Virtual Replicas for Remote Assistance in Virtual and Augmented Reality

Ohan Oda; Carmine Elvezio; Mengu Sukan; Steven K. Feiner; Barbara Tversky

In many complex tasks, a remote subject-matter expert may need to assist a local user by guiding actions on objects in the local user's environment. However, effective spatial referencing and action demonstration in a remote physical environment can be challenging. We introduce two approaches that use Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display. Both approaches allow the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. This can be especially useful for parts that are occluded or difficult to access. In one approach, the expert points in 3D to portions of virtual replicas to annotate them. In another approach, the expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains, comparing both approaches to an approach in which the expert uses a 2D tablet-based drawing system similar to ones developed for prior work on remote assistance. The study showed the 3D demonstration approach to be faster than the others. In addition, the 3D pointing approach was faster than the 2D tablet in the case of a highly trained expert.
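
The study's core measure hinges on when a 6DOF alignment counts as complete. As a hedged illustration (not the paper's code), here is a minimal C# sketch of such a check, using System.Numerics types and assumed position and rotation tolerances:

```csharp
// A hedged sketch (not the paper's code) of a 6DOF alignment check: a pose
// matches its target when both position error and rotation error fall
// below assumed tolerances.
using System;
using System.Numerics;

static class AlignmentCheck
{
    // Angle (radians) of the relative rotation between two unit quaternions.
    static float RotationError(Quaternion current, Quaternion target)
    {
        Quaternion delta = Quaternion.Concatenate(Quaternion.Inverse(current), target);
        // For a unit quaternion, W = cos(theta / 2); clamp against rounding.
        float w = Math.Clamp(Math.Abs(delta.W), 0f, 1f);
        return 2f * (float)Math.Acos(w);
    }

    public static bool IsAligned(Vector3 pos, Quaternion rot,
                                 Vector3 targetPos, Quaternion targetRot,
                                 float posTolMeters, float rotTolRadians)
    {
        return Vector3.Distance(pos, targetPos) <= posTolMeters
            && RotationError(rot, targetRot) <= rotTolRadians;
    }
}
```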


International Symposium on Mixed and Augmented Reality | 2012

Quick viewpoint switching for manipulating virtual objects in hand-held augmented reality using stored snapshots

Mengu Sukan; Steven Feiner; Barbara Tversky

Magic-lens-style augmented reality applications allow users to control camera pose easily by manipulating a portable hand-held device and provide immediate visual feedback. However, strategic vantage points must often be revisited repeatedly, adding time and error and taxing memory. We describe a new approach that allows users to take snapshots of augmented scenes that can be virtually revisited at later times. The system stores still images of scenes along with camera poses, so that augmentations remain dynamic and interactive. Users can manipulate virtual objects while viewing snapshots, instead of moving to real-world views. We present a study comparing performance in snapshot and live-mode conditions in a task in which a virtual object must be aligned with two pairs of physical objects. Proper alignment requires sequentially visiting two viewpoints. Participants completed the alignment task significantly faster and more accurately using snapshots than when using the live mode. Moreover, participants preferred manipulating virtual objects using snapshots to the live mode.
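
A minimal sketch of the data the technique needs to store, with illustrative names that are not from the paper's implementation: each snapshot pairs a frozen camera frame with the camera pose at capture time, so virtual content can still be rendered, and manipulated, over the still image:

```csharp
// A minimal sketch with illustrative names (not the paper's implementation):
// each snapshot pairs the frozen camera frame with the camera pose at
// capture time, so augmentations can still be rendered, and manipulated,
// over the still image.
using System.Collections.Generic;
using System.Numerics;

class Snapshot
{
    public byte[] StillImage;         // frozen background frame, e.g. JPEG bytes
    public Vector3 CameraPosition;    // camera pose recorded at capture time
    public Quaternion CameraRotation;
}

class SnapshotManager
{
    readonly List<Snapshot> snapshots = new List<Snapshot>();

    // Null means "render augmentations from the live tracked camera pose".
    public Snapshot Current;

    public void Capture(byte[] frame, Vector3 pos, Quaternion rot) =>
        snapshots.Add(new Snapshot { StillImage = frame, CameraPosition = pos, CameraRotation = rot });

    // Switching viewpoints just swaps which stored pose the renderer uses;
    // the virtual content itself stays dynamic and interactive.
    public void SwitchTo(int index) => Current = snapshots[index];
    public void BackToLive() => Current = null;
}
```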


User Interface Software and Technology | 2014

ParaFrustum: visualization techniques for guiding a user to a constrained set of viewing positions and orientations

Mengu Sukan; Carmine Elvezio; Ohan Oda; Steven K. Feiner; Barbara Tversky

Many tasks in real or virtual environments require users to view a target object or location from one of a set of strategic viewpoints to see it in context, avoid occlusions, or view it at an appropriate angle or distance. We introduce ParaFrustum, a geometric construct that represents this set of strategic viewpoints and viewing directions. ParaFrustum is inspired by the look-from and look-at points of a computer graphics camera specification, which precisely delineate a location for the camera and a direction in which it looks. We generalize this approach by defining a ParaFrustum in terms of a look-from volume and a look-at volume, which establish constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DoF pose when it is not required by the task. We describe two visualization techniques for virtual or augmented reality that guide a user to assume one of the poses defined by a ParaFrustum, and present the results of a user study measuring the performance of these techniques. The study shows that the constraints of a tightly constrained ParaFrustum (e.g., approximating a conventional camera frustum) require significantly more time to satisfy than those of a loosely constrained one. The study also reveals interesting differences in participant trajectories in response to the two techniques.
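
As a rough illustration of the construct, the sketch below tests whether a head pose satisfies a ParaFrustum, approximating both volumes as spheres; the paper's volumes are more general, and the types and names here are assumptions:

```csharp
// A hedged sketch of the ParaFrustum test, approximating the look-from and
// look-at volumes as spheres. The paper's volumes are more general; the
// types and names below are assumptions for illustration only.
using System.Numerics;

struct SphereVolume
{
    public Vector3 Center;
    public float Radius;
    public bool Contains(Vector3 p) => Vector3.Distance(Center, p) <= Radius;
}

static class ParaFrustumTest
{
    // Satisfied when the eyes lie inside the look-from volume and the gaze
    // ray passes through the look-at volume.
    public static bool Satisfied(Vector3 eyePos, Vector3 gazeDir,
                                 SphereVolume lookFrom, SphereVolume lookAt)
    {
        if (!lookFrom.Contains(eyePos)) return false;

        Vector3 dir = Vector3.Normalize(gazeDir);
        float along = Vector3.Dot(lookAt.Center - eyePos, dir);
        if (along < 0) return false; // look-at volume is behind the user

        // Distance from the look-at center to the closest point on the gaze ray.
        Vector3 closest = eyePos + dir * along;
        return Vector3.Distance(closest, lookAt.Center) <= lookAt.Radius;
    }
}
```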


International Symposium on Mixed and Augmented Reality | 2010

SnapAR: Storing snapshots for quick viewpoint switching in hand-held augmented reality

Mengu Sukan; Steven Feiner

Many tasks require a user to move between various locations within an environment to get different perspectives. This can take significant time and effort, especially when the user must switch among those viewpoints repeatedly. We explore augmented reality interaction techniques that involve taking still pictures of a physical scene using a tracked hand-held magic lens and seamlessly switching between augmenting either the live view or one of the still views, without needing to physically revisit the snapshot locations. We describe our optical-marker-tracking-based implementation and how we represent and switch among snapshots. To determine the effectiveness of our techniques, we developed a test application that lets its user view physical and virtual objects from different viewpoints.


Symposium on 3D User Interfaces | 2013

Poster: 3D referencing for remote task assistance in augmented reality

Ohan Oda; Mengu Sukan; Steven Feiner; Barbara Tversky

We present a 3D referencing technique tailored for remote maintenance tasks in augmented reality. The goal is to improve the accuracy and efficiency with which a remote expert can point out a real physical object at a local site to a technician at that site. In a typical referencing task, the remote expert instructs the local technician to navigate to a location from which a target object can be viewed, and then to attend to that object. The expert and technician both wear head-tracked, stereo, see-through, head-worn displays, and the expert's hands are tracked by a set of depth cameras. The remote expert first selects one of a set of prerecorded viewpoints of the local site, and a representation of that viewpoint is presented to the technician to help them navigate to the correct position and orientation. The expert then uses hand gestures to indicate the target.


IEEE Virtual Reality Conference | 2017

Travel in large-scale head-worn VR: Pre-oriented teleportation with WIMs and previews

Carmine Elvezio; Mengu Sukan; Steven Feiner; Barbara Tversky

We demonstrate an interaction technique that allows a user to point at a world-in-miniature representation of a city-scale virtual environment and perform efficient and precise teleportation by pre-orienting an avatar. A preview of the post-teleport view of the full-scale virtual environment updates interactively as the user adjusts the position, yaw, and pitch of the avatar's head with a pair of 6DoF-tracked controllers. We describe design decisions and contrast our technique with alternative approaches to virtual travel.
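
A minimal sketch of the mapping such a technique implies, assuming a uniformly scaled world-in-miniature (the names and scale parameter are illustrative, not the paper's implementation): the avatar pose set in the miniature is scaled back up to a full-scale pose that drives both the interactive preview and the final teleport:

```csharp
// A minimal sketch of the teleport mapping, assuming a uniformly scaled
// world-in-miniature (WIM); wimOrigin, wimScale, and all names here are
// illustrative, not the paper's implementation.
using System.Numerics;

static class WimTeleport
{
    // wimScale is the miniature's scale factor, e.g. 0.001f for a 1:1000
    // city model. The yaw and pitch the user set on the avatar's head in
    // the WIM carry over to full scale unchanged.
    public static (Vector3 pos, Quaternion rot) ToFullScale(
        Vector3 avatarPosInWim, Quaternion avatarHeadRot,
        Vector3 wimOrigin, float wimScale)
    {
        Vector3 fullScalePos = (avatarPosInWim - wimOrigin) / wimScale;
        return (fullScalePos, avatarHeadRot);
    }
}
```

While the user adjusts the avatar, a preview camera could re-render from ToFullScale(...) each frame; confirming the teleport then applies the same pose to the user's viewpoint.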


Symposium on Spatial User Interaction | 2016

Providing Assistance for Orienting 3D Objects Using Monocular Eyewear

Mengu Sukan; Carmine Elvezio; Steven K. Feiner; Barbara Tversky

Many tasks require that a user rotate an object to match a specific orientation in an external coordinate system. This includes tasks in which one object must be oriented relative to a second prior to assembly and tasks in which objects must be held in specific ways to inspect them. Research has investigated guidance mechanisms for some 6DOF tasks, using wide-field-of-view, stereoscopic virtual and augmented reality head-worn displays (HWDs). However, there has been relatively little work directed toward lightweight monoscopic HWDs with smaller fields of view, such as Google Glass, which may remain more comfortable and less intrusive than stereoscopic HWDs in the near future. We have designed and implemented a novel visualization approach and three additional visualizations representing different paradigms for guiding unconstrained manual 3DOF rotation, targeting these monoscopic HWDs. We describe our exploration of these paradigms and present the results of a user study evaluating the relative performance of the visualizations and showing the advantages of our new approach.
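
Whatever the visualization paradigm, a guidance system of this kind needs the remaining rotation between the current and target orientations. Below is a hedged C# sketch of that computation (not the paper's code), expressing the error as an axis and angle a display could render as a cue:

```csharp
// A hedged sketch (not the paper's code) of the quantity any rotation
// guidance needs: the remaining rotation from the object's current
// orientation to the target, as an axis and angle to drive a visual cue.
using System;
using System.Numerics;

static class RotationGuidance
{
    public static (Vector3 axis, float angleRad) RemainingRotation(
        Quaternion current, Quaternion target)
    {
        Quaternion delta = Quaternion.Normalize(
            Quaternion.Concatenate(Quaternion.Inverse(current), target));
        if (delta.W < 0) delta = Quaternion.Negate(delta); // take the shorter arc

        float angle = 2f * (float)Math.Acos(Math.Clamp(delta.W, -1f, 1f));
        Vector3 axis = new Vector3(delta.X, delta.Y, delta.Z);
        if (axis.LengthSquared() > 1e-8f) axis = Vector3.Normalize(axis);
        return (axis, angle);
    }
}
```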


International Symposium on Mixed and Augmented Reality | 2015

[POSTER] Interactive Visualizations for Monoscopic Eyewear to Assist in Manually Orienting Objects in 3D

Carmine Elvezio; Mengu Sukan; Steven Feiner; Barbara Tversky

Assembly or repair tasks often require objects to be held in specific orientations to view or fit together. Research has addressed the use of AR to assist in these tasks, delivered as registered overlaid graphics on stereoscopic head-worn displays. In contrast, we are interested in using monoscopic head-worn displays, such as Google Glass. To accommodate their small monoscopic field of view, off center from the user's line of sight, we are exploring alternatives to registered overlays. We describe four interactive rotation guidance visualizations for tracked objects intended for these displays.


Human Factors in Computing Systems | 2018

Mercury: A Messaging Framework for Modular UI Components

Carmine Elvezio; Mengu Sukan; Steven Feiner

In recent years, the entity-component-system pattern has become a fundamental feature of the software architectures of game-development environments such as Unity and Unreal, which are used extensively in developing 3D user interfaces. In these systems, UI components typically respond to events, requiring programmers to write application-specific callback functions. In some cases, components are organized in a hierarchy that is used to propagate events among vertically connected components. When components need to communicate horizontally, programmers must connect those components manually and register/unregister events as needed. Moreover, events and callback signatures may be incompatible, making modular UIs cumbersome to build and share within or across applications. To address these problems, we introduce a messaging framework, Mercury, to facilitate communication among components. We provide an overview of Mercury, outline its underlying protocol and how it propagates messages to responders using relay nodes, describe a reference implementation in Unity, and present example systems built using Mercury to explain its advantages.
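
The abstract does not give Mercury's API, but the pattern it describes, senders publishing to relay nodes that forward messages to registered responders, might look roughly like this speculative sketch:

```csharp
// A speculative sketch of the pattern the abstract describes (this is not
// Mercury's actual API): components register as responders with a relay
// node, and senders publish to the relay instead of wiring callbacks to
// each other directly.
using System;
using System.Collections.Generic;

interface IMessage { }

class RelayNode
{
    readonly List<Action<IMessage>> responders = new List<Action<IMessage>>();
    readonly List<RelayNode> children = new List<RelayNode>();

    public void AddResponder(Action<IMessage> responder) => responders.Add(responder);
    public void AddChild(RelayNode child) => children.Add(child);

    // Deliver to local responders, then propagate down the relay graph, so
    // horizontally related components never reference each other directly.
    public void Broadcast(IMessage msg)
    {
        foreach (var r in responders) r(msg);
        foreach (var c in children) c.Broadcast(msg);
    }
}
```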


2016 IEEE 9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS) | 2016

A framework to facilitate reusable, modular widget design for real-time interactive systems

Carmine Elvezio; Mengu Sukan; Steven Feiner

Game engines have become popular development platforms for real-time interactive systems. Contemporary game engines, such as Unity and Unreal, feature component-based architectures, in which an object's appearance and behavior are determined by a collection of component scripts added to that object. This design pattern allows common functionality to be contained within component scripts and shared among different types of objects. In this paper, we describe a flexible framework that enables programmers to design modular, reusable widgets for real-time interactive systems using a collection of component scripts. We provide a reference implementation written in C# for the Unity game engine. Making an object, or a group of objects, part of our managed widget framework can be accomplished with just a few drag-and-drop operations in the Unity Editor. While our framework provides hooks and default implementations for common widget behavior (e.g., initialization, refresh, and toggling visibility), programmers can also define custom behavior for a particular widget or combine simple widgets into a hierarchy and build arbitrarily rich ones. Finally, we provide an overview of an accompanying library of scripts that support functionality for testing and networking.
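
As a hedged sketch of the hook pattern described above (method names are illustrative, not the framework's), a base class can supply default implementations of the common behaviors while composite widgets forward lifecycle calls to their children:

```csharp
// A hedged sketch of the hook pattern (method names are illustrative, not
// the framework's): a base class supplies default implementations of the
// common behaviors, and widgets override only what they customize.
using System.Collections.Generic;

abstract class ManagedWidget
{
    public bool Visible { get; private set; } = true;

    // Default implementations of the common hooks.
    public virtual void Initialize() { }
    public virtual void Refresh() { }
    public virtual void ToggleVisibility()
    {
        Visible = !Visible;
        Refresh();
    }
}

// Composite widgets forward lifecycle calls to their children, so simple
// widgets can be combined into arbitrarily rich hierarchies.
class CompositeWidget : ManagedWidget
{
    readonly List<ManagedWidget> children = new List<ManagedWidget>();

    public void Add(ManagedWidget w) => children.Add(w);
    public override void Initialize() { foreach (var c in children) c.Initialize(); }
    public override void Refresh() { foreach (var c in children) c.Refresh(); }
}
```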

Collaboration


Dive into Mengu Sukan's collaborations.

Top Co-Authors

Jie Qi, Columbia University