Publication


Featured research published by Benjamin Tag.


International Conference on Computer Graphics and Interactive Techniques | 2016

GazeSim: simulating foveated rendering using depth in eye gaze for VR

Yun Suen Pai; Benjamin Tag; Noriyasu Vontin; Kazunori Sugiura; Kai Kunze

We present a novel technique of implementing customized hardware that uses eye gaze focus depth as an input modality for virtual reality applications. By utilizing eye tracking technology, our system can detect the point in depth the viewer focuses on, and therefore promises more natural responses of the eye to stimuli, which will help overcome VR sickness and nausea. The obtained information about the eye's depth of focus allows the use of foveated rendering to keep the computing workload low and to create a more natural image that is sharp in the focused field but blurred outside it.
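
As an illustration of the depth-from-gaze idea, here is a minimal sketch (not the authors' implementation) of estimating focus depth from the vergence of the two gaze rays; the eye positions and unit direction vectors are hypothetical eye tracker outputs.

```python
# Minimal sketch: depth of the point of closest approach of two gaze rays.
import numpy as np

def focus_depth(p_left, d_left, p_right, d_right):
    """p_*: 3D eye positions; d_*: unit gaze direction vectors (numpy arrays)."""
    w = p_left - p_right
    a = d_left @ d_left          # = 1 for unit vectors
    b = d_left @ d_right
    c = d_right @ d_right        # = 1 for unit vectors
    d = d_left @ w
    e = d_right @ w
    denom = a * c - b * b        # ~0 when the rays are parallel (far focus)
    if abs(denom) < 1e-6:
        return float("inf")
    s = (b * e - c * d) / denom  # parameter along the left gaze ray
    t = (a * e - b * d) / denom  # parameter along the right gaze ray
    midpoint = 0.5 * ((p_left + s * d_left) + (p_right + t * d_right))
    return midpoint[2]           # depth along the viewing (z) axis
```

The returned depth would then select the focal plane for foveated rendering, with regions away from that plane rendered blurred.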


Human Factors in Computing Systems | 2016

In the Eye of the Beholder: The Impact of Frame Rate on Human Eye Blink

Benjamin Tag; Junichi Shimizu; Chi Zhang; Kai Kunze; Naohisa Ohta; Kazunori Sugiura

We introduce a study investigating the impact of high frame rate videos on viewers' eye blink frequency. A series of videos with varying combinations of motion complexity and frame rate were shown to participants while their eye blinks were counted with J!NS MEME smart eyewear. Lower frame rates and lower motion complexity caused higher blink frequencies, which are markers for stress and emotional arousal.
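
A minimal sketch of the blink-counting step (assumed signal shape and thresholds, not the study's actual pipeline): blinks appear as large deflections in the vertical EOG channel, so a threshold crossing with a refractory period gives a per-condition blink frequency.

```python
# Minimal sketch: blinks per minute from a vertical EOG trace.
import numpy as np

def blink_frequency(veog, fs, threshold=100.0, refractory_s=0.2):
    """veog: 1D signal in microvolts; fs: sampling rate in Hz.
    A blink is an upward threshold crossing; the refractory period
    prevents one blink from being counted twice."""
    above = veog > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    blinks, last = 0, -np.inf
    for i in onsets:
        if (i - last) / fs >= refractory_s:
            blinks += 1
            last = i
    minutes = len(veog) / fs / 60.0
    return blinks / minutes
```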


International Conference on Computer Graphics and Interactive Techniques | 2017

GazeSphere: navigating 360-degree-video environments in VR using head rotation and eye gaze

Yun Suen Pai; Benjamin Tag; Megumi Isogai; Daisuke Ochi; Kai Kunze

Viewing 360-degree images and videos through head-mounted displays (HMDs) currently lacks a compelling interface for transitioning between them. We propose GazeSphere, a navigation system that provides seamless transitions between locations in 360-degree-video environments through orbit-like motion, via head rotation and eye gaze tracking. The significance of this approach is threefold: 1) it allows navigation and transition through spatially continuous 360-degree-video environments, 2) it leverages the human proprioceptive sense of rotation for locomotion that is intuitive and counteracts motion sickness, and 3) it uses eye tracking for completely seamless, hands-free, and unobtrusive interaction. The proposed method uses an orbital motion technique for navigation in virtual space, which we demonstrate in applications such as navigation and interaction in computer-aided design (CAD), data visualization, as a game mechanic, and for virtual tours.
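
A minimal sketch of the orbit-like motion (assumed math, not the authors' code): head yaw and pitch are mapped onto a camera position on a sphere around a target point, with the camera always looking back at the target.

```python
# Minimal sketch: orbital camera position from head orientation.
import math

def orbit_camera(target, radius, yaw, pitch):
    """target: (x, y, z) point being orbited; radius: orbit distance;
    yaw/pitch: head rotation in radians."""
    x = target[0] + radius * math.cos(pitch) * math.sin(yaw)
    y = target[1] + radius * math.sin(pitch)
    z = target[2] + radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)  # the renderer then points the view vector at `target`
```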


Human Factors in Computing Systems | 2017

Facial Thermography for Attention Tracking on Smart Eyewear: An Initial Study

Benjamin Tag; Ryan Mannschreck; Kazunori Sugiura; George Chernyshov; Naohisa Ohta; Kai Kunze

We describe the first step towards the development of an unobtrusive, open eyewear system for attention tracking in daily life. We log thermographic data from infrared imaging and electrooculographic (EOG) readings from off-the-shelf smart glasses, and measure the cognitive engagement of people in different situations. We identify new potential areas on the face for contactless IR temperature sensing. Attached to smart glasses and combined with the EOG potentials, the sensor lets us monitor the wearer's facial temperature changes, eye movements, and eye blinks in everyday situations, a major step towards measuring attention in unconstrained settings and thus making it manageable.
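
A minimal sketch of the logging side (hypothetical sensor interfaces; the actual hardware APIs are not described in the abstract): both modalities are written into one time-stamped record stream, the kind of synchronized log an attention analysis would run on.

```python
# Minimal sketch: synchronized IR-temperature and EOG logging.
import csv
import time

def log_session(ir_sensor, eog_sensor, path, duration_s=60.0, rate_hz=50.0):
    """ir_sensor.read() -> face temperature in degrees C (assumed API);
    eog_sensor.read() -> (horizontal, vertical) EOG potentials (assumed API)."""
    period = 1.0 / rate_hz
    end = time.time() + duration_s
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "face_temp_c", "eog_h", "eog_v"])
        while time.time() < end:
            h, v = eog_sensor.read()
            writer.writerow([time.time(), ir_sensor.read(), h, v])
            time.sleep(period)
```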


Ubiquitous Computing | 2016

Eye blink as an input modality for a responsive adaptable video system

Benjamin Tag; Junichi Shimizu; Chi Zhang; Naohisa Ohta; Kai Kunze; Kazunori Sugiura

We propose a unique system that allows real-time adaptation of video settings to a viewer's physical state. A custom-made program toggles between videos according to the average eye blink frequency of each viewer. The physical data is captured with J!NS MEME smart glasses, which utilize electrooculography (EOG). To the best of our knowledge, this is the first adaptable multimedia system that responds in real time to physical data and alters the technical settings of video content.
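
A minimal sketch of the toggling logic (the threshold, window, player API, and switching policy are all assumptions, not the paper's values): a running average of the blink rate selects between two encodings of the same video.

```python
# Minimal sketch: blink-rate-driven video switching.
from collections import deque

class AdaptivePlayer:
    def __init__(self, player, high_rate_source, low_rate_source,
                 threshold_bpm=20.0, window=30):
        self.player = player                  # hypothetical video player object
        self.sources = {"high": high_rate_source, "low": low_rate_source}
        self.threshold = threshold_bpm        # blinks per minute (assumed)
        self.samples = deque(maxlen=window)   # recent per-interval blink rates

    def on_blink_rate(self, blinks_per_minute):
        self.samples.append(blinks_per_minute)
        avg = sum(self.samples) / len(self.samples)
        # Assumed policy: higher blink rates were associated with lower frame
        # rates, so a raised average triggers the higher-frame-rate source.
        target = "high" if avg > self.threshold else "low"
        self.player.switch_to(self.sources[target])  # assumed method
```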


International Symposium on Wearable Computers | 2017

atmoSphere: mindfulness over haptic-audio cross modal correspondence

Benjamin Tag; Takuya Goto; Kouta Minamizawa; Ryan Mannschreck; Haruna Fushimi; Kai Kunze

We explore cross-modal correspondence between haptic and audio output for meditation support. To this end, we implement atmoSphere, a haptic ball used to prototype several haptic/audio designs. AtmoSphere consists of a sphere-shaped device that provides haptic feedback. Users can experience designs aimed at instructing them in breathing techniques shown to enhance meditation. The aim of the haptic/audio design is to guide the user into a particular rhythm of breathing. We detect this rhythm using smart eyewear (J!NS MEME) that estimates cardiac and respiratory parameters with embedded motion sensors. Once the rhythm is achieved, the feedback stops; if the user drops out of the rhythm, the haptic/audio feedback starts again.
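
A minimal sketch of that feedback loop (target rate, tolerance, and device APIs are assumptions): guidance runs until the measured breathing rate stays near the target rhythm, and resumes when the user drifts away.

```python
# Minimal sketch: start/stop guidance around a target breathing rhythm.
import time

def guidance_loop(eyewear, sphere, target_bpm=6.0, tolerance=0.5):
    """eyewear.breathing_rate() -> breaths per minute (assumed API);
    sphere.start()/sphere.stop() drive the haptic/audio pattern (assumed API)."""
    guiding = True
    sphere.start()
    while True:
        rate = eyewear.breathing_rate()
        in_rhythm = abs(rate - target_bpm) <= tolerance
        if guiding and in_rhythm:
            sphere.stop()      # rhythm achieved: feedback stops
            guiding = False
        elif not guiding and not in_rhythm:
            sphere.start()     # user dropped out of the rhythm: resume
            guiding = True
        time.sleep(1.0)        # poll the estimate about once per second
```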


International Symposium on Wearable Computers | 2017

Wearable aura: an interactive projection on personal space to enhance communication

Dingding Zheng; Laura Lugaresi; George Chernyshov; Benjamin Tag; Masa Inakage; Kai Kunze

This study focuses on how technology can encourage and ease awkwardness-free communication between people in real-world scenarios. We propose a device, the Wearable Aura, that projects a personalized animation onto one's personal distance zone. This projection, as an extension of oneself, is reactive to the user's cognitive state and aware of its environment, its context, and the user's activity. Our user study supports the idea that an interactive projection around an individual can indeed benefit communication with other individuals.


International Symposium on Wearable Computers | 2018

Shape memory alloy wire actuators for soft, wearable haptic devices

George Chernyshov; Benjamin Tag; Cedric Caremel; Feier Cao; Gemma Liu; Kai Kunze

This paper presents a new approach to implementing wearable haptic devices using Shape Memory Alloy (SMA) wires. The proposed concept allows building silent, soft, flexible, and lightweight wearable devices capable of producing a sense of pressure on the skin without any bulky mechanical actuators. We explore possible design considerations and applications for such devices, present user studies proving the feasibility of delivering meaningful information, and use nonlinear autoregressive neural networks to compensate for inherent drawbacks of SMAs, such as delayed onset, enabling us to characterize and predict the physical behavior of the device.
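
A minimal sketch of the NARX-style modeling step (assumed data layout and model size, not the paper's network): the actuator's force output is regressed on lagged drive inputs and lagged past outputs, which is what lets the model anticipate the SMA's delayed onset.

```python
# Minimal sketch: NARX-style features for predicting SMA actuator output.
import numpy as np
from sklearn.neural_network import MLPRegressor

def narx_features(u, y, lags=5):
    """Build [u_{t-1..t-lags}, y_{t-1..t-lags}] -> y_t training pairs.
    u: drive current series; y: measured force series (numpy arrays)."""
    X, t = [], []
    for i in range(lags, len(y)):
        X.append(np.concatenate([u[i - lags:i], y[i - lags:i]]))
        t.append(y[i])
    return np.array(X), np.array(t)

# u_train, y_train would come from logged actuation experiments, e.g.:
# X, t = narx_features(u_train, y_train)
# model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, t)
```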


Augmented Human International Conference | 2018

Towards Enhancing Emotional Responses to Media using Auto-Calibrating Electric Muscle Stimulation (EMS)

Takashi Goto; Benjamin Tag; Kai Kunze; Tilman Dingler

We evaluate the use of Electric Muscle Stimulation (EMS) as a method of amplifying emotional responses to multimedia content. This paper presents an auto-calibration method for stimulating two facial expressions, frowning and smiling, using EMS. To control the facial muscles via facial feedback, our computer vision system detects the facial expression and auto-calibrates the EMS parameters (intensity and duration) based on the user's current facial expression. We present results from a pilot study with four participants evaluating the auto-calibration system and collecting initial feedback on the use of EMS to augment, for example, media experiences: while users watch movies, we can enhance their emotional response during happy and sad scenes by stimulating the corresponding facial muscles.
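
A minimal sketch of such an auto-calibration loop (hypothetical EMS and vision APIs, assumed intensity range): stimulation intensity is raised in small steps until the vision system detects the target expression, and those parameters are then stored for that user.

```python
# Minimal sketch: ramp EMS intensity until the target expression is detected.
def calibrate(ems, camera, target="smile",
              start_ma=1.0, step_ma=0.5, max_ma=8.0, pulse_ms=200):
    """ems.stimulate(intensity_ma, duration_ms) triggers one pulse (assumed API);
    camera.detect_expression() -> label such as 'smile'/'frown' (assumed API)."""
    intensity = start_ma
    while intensity <= max_ma:
        ems.stimulate(intensity, pulse_ms)
        if camera.detect_expression() == target:
            return {"intensity_ma": intensity, "duration_ms": pulse_ms}
        intensity += step_ma
    raise RuntimeError("calibration failed within the safe intensity range")
```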


Tangible and Embedded Interaction | 2017

Seamless Multithread Films in Virtual Reality

Oneris Daniel Rico Garcia; Benjamin Tag; Naohisa Ohta; Kazunori Sugiura

In this paper we propose a new system for the production of VR stories that allows users seamless interaction with the content. Our system plays real-life footage rather than animation, allowing for interactive live-action experiences. Users, unaware of their influence, make subliminal decisions by focusing their attention on predefined regions of interest (ROIs). The decision point is always placed seconds before the actual video branching happens, allowing the system to preload only the chosen storyline and keep the computational workload as low as possible. We believe that this unique system will not only change the way we experience VR content, but will lead to a paradigm shift in film production.
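
A minimal sketch of the branching decision (assumed gaze API, dwell times, and player methods): the branch whose ROI accumulates the most gaze dwell within a short window before the branch point wins, and only that storyline is preloaded.

```python
# Minimal sketch: gaze-dwell branch selection ahead of a video branch point.
import time

def choose_branch(gaze, rois, dwell_s=1.5, window_s=4.0):
    """gaze.current_point() -> (x, y) in screen space (assumed API);
    rois: {branch_name: (x0, y0, x1, y1)} screen rectangles."""
    dwell = {name: 0.0 for name in rois}
    step, end = 0.05, time.time() + window_s
    while time.time() < end:
        x, y = gaze.current_point()
        for name, (x0, y0, x1, y1) in rois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += step
        time.sleep(step)
    winner = max(dwell, key=dwell.get)
    return winner if dwell[winner] >= dwell_s else "default"

# branch = choose_branch(gaze_tracker, rois)  # seconds before the branch point
# player.preload(branch)                      # assumed method: load only this storyline
```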

