
Publications


Featured research published by Yun Suen Pai.


International Conference on Computer Graphics and Interactive Techniques | 2016

GazeSim: simulating foveated rendering using depth in eye gaze for VR

Yun Suen Pai; Benjamin Tag; Noriyasu Vontin; Kazunori Sugiura; Kai Kunze

We present a novel technique implementing customized hardware that uses eye gaze focus depth as an input modality for virtual reality applications. By utilizing eye tracking technology, our system can detect the point in depth the viewer focuses on, and therefore promises more natural responses of the eye to stimuli, which helps overcome VR sickness and nausea. The obtained eye focus depth information allows the use of foveated rendering to keep the computing workload low and to create a more natural image that is sharp in the focused field but blurred outside it.
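The abstract describes depth-driven foveated rendering only at a high level. As a rough illustration (not the authors' implementation; the function name and constants are assumptions), one plausible way to scale blur with the gap between a fragment's depth and the gaze focus depth is:

```python
def blur_radius(pixel_depth, focus_depth, max_blur=8.0, gain=1.0):
    """Return a blur radius (pixels) for a fragment at pixel_depth (metres)
    when the viewer's gaze is focused at focus_depth (metres).

    Defocus is measured in dioptres (1/distance), so near-field depth
    mismatches blur more strongly than distant ones, loosely mimicking
    eye accommodation. Fragments at the focus depth stay sharp.
    """
    defocus = abs(1.0 / pixel_depth - 1.0 / focus_depth)  # dioptres
    return min(max_blur, gain * defocus * max_blur)
```

A fragment at the focus depth gets zero blur, while fragments far off the focal plane saturate at `max_blur`.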


Symposium on Spatial User Interaction | 2016

AnyOrbit: Fluid 6DOF Spatial Navigation of Virtual Environments using Orbital Motion

Yun Suen Pai; Kevin Fan; Kouta Minamizawa; Kai Kunze

Emerging media technologies such as 3D film and head-mounted displays (HMDs) call for new types of spatial interaction. Here we describe and evaluate AnyOrbit: a novel orbital navigation technique that enables flexible and intuitive 3D spatial navigation in virtual environments (VEs). Unlike existing orbital methods, we exploit toroidal rather than spherical orbital surfaces, which allow independent control of orbital curvature in the vertical and horizontal directions. This control enables intuitive and smooth orbital navigation between any desired orbital centers and between any vantage points within VEs. AnyOrbit leverages our proprioceptive sense of rotation to enable navigation in VEs without inconvenient external motion trackers. In user studies within a sports spectating context, we demonstrate that the technique allows smooth shifts in perspective at a rate comparable to broadcast sports, is fast to learn, and does not cause excessive simulator sickness in most users. The technique is widely applicable to gaming, computer-aided design (CAD), data visualisation, and telepresence.
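The toroidal orbital surface the abstract contrasts with spherical orbits can be sketched with the standard torus parameterization (a generic illustration; symbol names and default radii are assumptions, not values from the paper):

```python
import math

def torus_camera_pos(theta, phi, R=5.0, r=2.0, center=(0.0, 0.0, 0.0)):
    """Camera position on a toroidal orbital surface around `center`.

    theta: horizontal orbital angle (radians), curvature set by R.
    phi:   vertical orbital angle (radians), curvature set by r.
    Because R and r are independent, horizontal and vertical orbital
    curvature can be controlled separately, unlike on a sphere.
    """
    cx, cy, cz = center
    x = cx + (R + r * math.cos(phi)) * math.cos(theta)
    y = cy + r * math.sin(phi)  # y is treated as "up"
    z = cz + (R + r * math.cos(phi)) * math.sin(theta)
    return (x, y, z)
```

Sweeping `theta` orbits the camera horizontally at radius up to `R + r`, while sweeping `phi` arcs it vertically at radius `r`.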


International Conference on Computer Graphics and Interactive Techniques | 2017

GazeSphere: navigating 360-degree-video environments in VR using head rotation and eye gaze

Yun Suen Pai; Benjamin Tag; Megumi Isogai; Daisuke Ochi; Kai Kunze

Viewing 360-degree images and videos through head-mounted displays (HMDs) currently lacks a compelling interface for transitioning between them. We propose GazeSphere, a navigation system that provides seamless transitions between locations in 360-degree-video environments through orbit-like motion, via head rotation and eye gaze tracking. The significance of this approach is threefold: 1) it allows navigation and transition through spatially continuous 360-degree-video environments, 2) it leverages the human proprioceptive sense of rotation for locomotion that is intuitive and mitigates motion sickness, and 3) it uses eye tracking for a completely seamless, hands-free, and unobtrusive interaction. The proposed method uses an orbital motion technique for navigation in virtual space, which we demonstrate in applications such as navigation and interaction in computer-aided design (CAD), data visualization, as a game mechanic, and for virtual tours.


Mobile and Ubiquitous Multimedia | 2017

Armswing: using arm swings for accessible and immersive navigation in AR/VR spaces

Yun Suen Pai; Kai Kunze

Navigating naturally in augmented reality (AR) and virtual reality (VR) spaces is a major challenge. To this end, we present ArmSwingVR, a locomotion solution for AR/VR spaces that preserves immersion while having a lower profile than current solutions, particularly walking-in-place (WIP) methods. The user simply swings their arms naturally to navigate in the direction the arms are swung, without any feet or head movement. The benefits of ArmSwingVR are that arm swinging feels natural for bipedal organisms, second only to leg movement; no additional peripherals or sensors are required; swinging our arms is less obtrusive than WIP methods; and it requires less energy, allowing prolonged use in AR/VR. A user study found that our method does not sacrifice immersion while being lower profile and less energy-consuming than WIP.
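The abstract does not specify how swing motion maps to locomotion speed. One toy mapping (all names and constants here are illustrative, not from the paper) is to drive forward speed from the mean vertical speed of the swinging controllers:

```python
def swing_speed(hand_heights, dt, gain=1.5, max_speed=3.0):
    """Map vertical controller oscillation to forward speed (m/s).

    hand_heights: sampled controller heights (metres) over a short window.
    dt:           sampling interval (seconds).
    Larger or faster arm swings raise the mean vertical speed and thus
    the locomotion speed, capped at max_speed.
    """
    if len(hand_heights) < 2:
        return 0.0
    travel = sum(abs(b - a) for a, b in zip(hand_heights, hand_heights[1:]))
    mean_vertical_speed = travel / ((len(hand_heights) - 1) * dt)
    return min(max_speed, gain * mean_vertical_speed)
```

A stationary hand yields zero speed, so the user stops simply by letting their arms rest.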


Human Factors in Computing Systems | 2017

IN360: A 360-Degree-Video Platform to Change Students' Preconceived Notions on Their Career

Fathima Assilmia; Yun Suen Pai; Keiko Okawa; Kai Kunze

To motivate primary school students in Indonesia to learn more, career inspiration sessions are usually held by communities of professionals. However, these activities face limitations such as time, distance, and physical infrastructure. We propose IN360, an exploration of alternative media for delivering career education to elementary students in remote, undeveloped, and frontier areas of Indonesia using a digital platform and 360-degree video. The goals of this research are twofold: (1) to create a sustainable system or model for career education content using the 360-degree-video format, and (2) to deliver it through a digital platform.


User Interface Software and Technology | 2016

Transparent Reality: Using Eye Gaze Focus Depth as Interaction Modality

Yun Suen Pai; Noriyasu Vontin; Kai Kunze

We present a novel, eye-gaze-based interaction technique that uses focus depth as an input modality for virtual reality (VR) applications, along with a custom hardware prototype implementation. Comparing focus-depth-based interaction to a scroll wheel interface in a user study with 10 participants playing a simple VR game, we find no statistically significant difference in performance (focus depth works slightly better) and a subjective preference for it among users. This indicates that it is a suitable interaction modality that should be explored further. Finally, we give some application scenarios and guidelines for using focus depth interactions in VR applications.


International Conference on Computer Graphics and Interactive Techniques | 2018

Make-a-face: a hands-free, non-intrusive device for tongue/mouth/cheek input using EMG

Takuro Nakao; Yun Suen Pai; Megumi Isogai; Hideaki Kimata; Kai Kunze

Current devices aim to be more hands-free by providing users with the means to interact with them using other forms of input, such as voice, which can be intrusive. We propose Make-a-Face, a wearable device that allows the user to perform tongue, mouth, or cheek gestures via a mask-shaped device that senses muscle movement on the lower half of the face. The significance of this approach is threefold: 1) it allows a more non-intrusive approach to interaction, 2) we designed both the hardware and software from the ground up to accommodate the sensor electrodes, and 3) we propose several use-case scenarios ranging from smartphones to interactions with virtual reality (VR) content.


Human Computer Interaction with Mobile Devices and Services | 2018

PinchMove: improved accuracy of user mobility for near-field navigation in virtual environments

Yun Suen Pai; Zikun Chen; Liwei Chan; Megumi Isogai; Hideaki Kimata; Kai Kunze

Navigation and mobility mechanics for virtual environments aim to be realistic or fun, but rarely prioritize the accuracy of movement. We propose PinchMove, a highly accurate navigation mechanic for confined environments that utilizes pinch gestures and manipulation of the viewport for accurate movement. We ran a pilot study to first determine the degree of simulator sickness caused by this mechanic, and a comprehensive user study to evaluate its accuracy in a virtual environment. We found that an 80° tunneling effect at a maximum speed of 15.18° per second was suitable for PinchMove in reducing motion sickness. We also found our system to be, on average, more accurate in enclosed virtual environments than conventional methods. This paper makes the following three contributions: 1) we propose a navigation solution for accurate movement in near-field virtual environments, 2) we determine the appropriate tunneling effect for our method to minimize motion sickness, and 3) we validate our proposed solution by comparing it with conventional navigation solutions in terms of accuracy of movement. We also propose several use-case scenarios where accuracy of movement is desirable and further discuss the effectiveness of PinchMove.
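The reported 80° tunnel and 15.18°/s speed cap suggest a simple vignette schedule. The sketch below assumes a linear narrowing and a hypothetical 110° headset field of view; only the two constants come from the abstract:

```python
def tunnel_fov(rotation_speed, max_speed=15.18, full_fov=110.0, tunnel=80.0):
    """Narrow the visible field of view (degrees) as viewport rotation
    speed (deg/s) rises, bottoming out at the 80-degree tunnel once the
    15.18 deg/s speed cap is reached. full_fov is an assumed headset FOV.
    """
    t = min(1.0, max(0.0, rotation_speed / max_speed))
    return full_fov + t * (tunnel - full_fov)
```

At rest the full field of view is shown; the vignette tightens smoothly with rotation speed, which is the mechanism tunneling effects use to reduce vection-induced motion sickness.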


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

AnyOrbit: orbital navigation in virtual environments with eye-tracking

Yun Suen Pai; Tanner Person; Kouta Minamizawa; Kai Kunze

Gaze-based interactions promise to be fast, intuitive, and effective in controlling virtual and augmented environments. Yet, there is still a lack of usable 3D navigation and observation techniques. In this work: 1) we introduce a highly advantageous orbital navigation technique, AnyOrbit, providing an intuitive and hands-free method of observation in virtual environments that uses eye tracking to control the orbital center of movement; 2) we demonstrate the versatility of the technique with several control schemes and use cases in virtual/augmented reality head-mounted display and desktop setups, including observation of 3D astronomical data and spectator sports.


International Symposium on Wearable Computers | 2017

face2faceVR: using AR to assist VR in ubiquitous environment usage

Yun Suen Pai; Megumi Isogai; Daisuke Ochi; Hideaki Kimata; Kai Kunze

As virtual reality (VR) usage becomes more popular, one of the issues that still prevents VR from being used in a more ubiquitous manner, unlike augmented reality (AR), is spatial awareness. Generally, there are two forms of such awareness: recognizing the environment and recognizing other people around us. We propose face2faceVR, an easy-to-use implementation of AR tracking to assist VR in recognizing other nearby VR users. The contributions of this work are the following: 1) it is compatible with mobile VR technology that already caters to wider adoption, 2) it does not require a networked or shared virtual environment, and 3) it is an inexpensive implementation without any additional peripherals or hardware.
