Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jason Orlosky is active.

Publication


Featured research published by Jason Orlosky.


intelligent user interfaces | 2013

Dynamic text management for see-through wearable and heads-up display systems

Jason Orlosky; Kiyoshi Kiyokawa; Haruo Takemura

Reading text safely and easily while mobile has been an issue with see-through displays for many years. For example, in order to effectively use optical see-through head mounted displays (HMDs) or heads-up display (HUD) systems in constantly changing dynamic environments, variables like lighting conditions, human or vehicular obstructions in a user's path, and scene variation must be dealt with effectively. This paper introduces a new intelligent text management system that actively manages movement of text in a user's field of view. Research to date lacks a method to migrate user-centric content such as e-mail or text messages throughout a user's environment while mobile. Unlike most current annotation and view management systems, we use camera tracking to find dark, uniform regions along the route on which a user is travelling in real time. We then implement methodology to move text from one viable location to the next to maximize readability. A pilot experiment with 19 participants shows that the text placement of our system is preferred to text in fixed-location configurations.
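
As a rough illustration of the kind of region search described above (not the authors' implementation), the sketch below scores grid cells of a camera frame by darkness and uniformity and picks the best candidate for text placement; the grid size and weighting are assumptions.

```python
# Hypothetical sketch: score candidate regions of a camera frame by darkness
# and uniformity, as a plausible basis for placing readable overlay text.
import cv2
import numpy as np

def best_text_region(frame_bgr, grid=(4, 6), var_weight=0.5):
    """Return ((row, col), score) of the darkest, most uniform grid cell."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    h, w = gray.shape
    rows, cols = grid
    best, best_score = None, np.inf
    for r in range(rows):
        for c in range(cols):
            cell = gray[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            # Low mean brightness and low variance both favor readable overlays.
            score = cell.mean() + var_weight * cell.var()
            if score < best_score:
                best, best_score = (r, c), score
    return best, best_score
```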


symposium on spatial user interaction | 2014

Fisheye vision: peripheral spatial compression for improved field of view in head mounted displays

Jason Orlosky; Qifan Wu; Kiyoshi Kiyokawa; Haruo Takemura; Christian Nitschke

A current problem with many video see-through displays is the lack of a wide field of view, which can make them dangerous to use in real world augmented reality applications since peripheral vision is severely limited. Existing wide field of view displays are often bulky, lack stereoscopy, or require complex setups. To solve this problem, we introduce a prototype that utilizes fisheye lenses to expand a user's peripheral vision inside a video see-through head mounted display. Our system provides an undistorted central field of view, so that natural stereoscopy and depth judgment can occur. The peripheral areas of the display show content through the curvature of each of the two fisheye lenses using a modified compression algorithm so that objects outside of the inherent viewing angle of the display become visible. We first test an initial prototype with 180° field of view lenses, and then build an improved version with 238° lenses. We also describe solutions to several problems associated with aligning undistorted binocular vision and the compressed periphery, and finally compare our prototype to natural human vision in a series of visual acuity experiments. Results show that users can effectively see objects up to 180°, and that overall detection rate is 62.2% for the display versus 89.7% for the naked eye.
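
A minimal sketch of a piecewise radial mapping in the spirit of the approach described: angles within a central region pass through unchanged, while wider angles are squeezed into the remaining display angle. The field-of-view values and the linear compression are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: map a real-world viewing angle (from the optical axis)
# to a display angle, keeping the center undistorted and compressing the rest.
import numpy as np

def compress_angle(theta_deg, central_fov=60.0, lens_fov=238.0, display_fov=100.0):
    """Return the display angle corresponding to a world angle theta_deg."""
    half_central = central_fov / 2.0
    half_lens = lens_fov / 2.0
    half_display = display_fov / 2.0
    t = np.abs(theta_deg)
    if t <= half_central:
        return float(np.sign(theta_deg) * t)  # undistorted central region
    # Linearly squeeze the remaining lens coverage into the leftover display angle.
    frac = (t - half_central) / (half_lens - half_central)
    return float(np.sign(theta_deg) * (half_central + frac * (half_display - half_central)))
```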


international symposium on mixed and augmented reality | 2014

Analysing the effects of a wide field of view augmented reality display on search performance in divided attention tasks

Naohiro Kishishita; Kiyoshi Kiyokawa; Jason Orlosky; Tomohiro Mashita; Haruo Takemura; Ernst Kruijff

A wide field of view augmented reality display is a special type of head-worn device that enables users to view augmentations in the peripheral visual field. However, the actual effects of a wide field of view display on the perception of augmentations have not been widely studied. To improve our understanding of this type of display when conducting divided attention search tasks, we conducted an in-depth experiment testing two view management methods, in-view and in-situ labelling. With in-view labelling, search target annotations appear on the display border with a corresponding leader line, whereas in-situ annotations appear without a leader line, as if they are affixed to the referenced objects in the environment. Results show that target discovery rates consistently drop with in-view labelling and increase with in-situ labelling as display angle approaches 100 degrees of field of view. Past this point, the performances of the two view management methods begin to converge, suggesting equivalent discovery rates at approximately 130 degrees of field of view. Results also indicate that users exhibited lower discovery rates for targets appearing in peripheral vision, and that there is little impact of field of view on response time and mental workload.
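
To make the in-view labelling idea concrete, here is a hypothetical sketch that clamps an off-screen target's projected position to the display border, where a leader line would then connect the label to the target. The normalized coordinates and margin value are assumptions, not the study's implementation.

```python
# Hypothetical sketch of "in-view" labelling: if the target's projected position
# falls outside the visible area, clamp the label to the nearest border point.
import numpy as np

def in_view_label(target_xy, margin=0.05):
    """Return (label_xy, needs_leader_line) for a target in normalized [0,1] coords."""
    x, y = target_xy
    clamped = np.clip([x, y], margin, 1.0 - margin)
    needs_leader = not np.allclose(clamped, [x, y])
    return (float(clamped[0]), float(clamped[1])), needs_leader
```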


Mobile Computing and Communications Review | 2014

Managing mobile text in head mounted displays: studies on visual preference and text placement

Jason Orlosky; Kiyoshi Kiyokawa; Haruo Takemura

In recent years, the development of wearable displays has seen a drastic increase. However, there is still strong resistance to using wearable technology for fear of decreased visibility and attention to one's surroundings. To address this concern, this paper describes a series of experiments that study user tendencies related to viewing and placing text in head mounted displays (HMDs). From the results of two pilot experiments, we show that awareness is to some extent better for HMDs compared to smartphones, and find that users would prefer to place text in the background rather than on the HMD screen. We then build an intelligent system to manage placement of text such as e-mail and messaging using computer vision algorithms. Finally, through two experiments comparing automatic and manual text placement, we show that our system can mimic human tendencies with approximately 70% accuracy.
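
As an illustration of how agreement between automatic and manual placement might be scored (the paper's exact metric is not given here), a minimal sketch:

```python
# Hypothetical sketch: fraction of frames where the system's chosen placement
# region matches the region a participant selected for the same frame.
def placement_agreement(auto_regions, manual_regions):
    """Return the agreement rate between automatic and manual placements."""
    assert len(auto_regions) == len(manual_regions)
    matches = sum(a == m for a, m in zip(auto_regions, manual_regions))
    return matches / len(auto_regions)
```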


IEEE Transactions on Visualization and Computer Graphics | 2015

ModulAR: Eye-Controlled Vision Augmentations for Head Mounted Displays

Jason Orlosky; Takumi Toyama; Kiyoshi Kiyokawa; Daniel Sonntag

In the last few years, the advancement of head mounted display technology and optics has opened up many new possibilities for the field of Augmented Reality. However, many commercial and prototype systems often have a single display modality, fixed field of view, or inflexible form factor. In this paper, we introduce Modular Augmented Reality (ModulAR), a hardware and software framework designed to improve flexibility and hands-free control of video see-through augmented reality displays and augmentative functionality. To accomplish this goal, we introduce the use of integrated eye tracking for on-demand control of vision augmentations such as optical zoom or field of view expansion. Physical modification of the device's configuration can be accomplished on the fly using interchangeable camera-lens modules that provide different types of vision enhancements. We implement and test functionality for several primary configurations using telescopic and fisheye camera-lens systems, though many other customizations are possible. We also implement a number of eye-based interactions to engage and control the vision augmentations in real time, and explore different methods for merging streams of augmented vision into the user's normal field of view. In a series of experiments, we conduct an in-depth analysis of visual acuity and head and eye movement during search and recognition tasks. Results show that methods with a larger field of view that utilize binary on/off and gradual zoom mechanisms outperform snapshot and sub-windowed methods, and that the type of eye engagement has little effect on performance.
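
A hypothetical sketch of a gaze-dwell toggle of the general kind described for engaging a binary on/off zoom augmentation; the dwell threshold and the simple state machine are assumptions, not ModulAR's implementation.

```python
# Hypothetical sketch: dwelling the gaze on an activation region for a short
# period toggles the augmented stream between normal and zoomed views.
import time

class GazeZoomToggle:
    def __init__(self, dwell_seconds=0.8):
        self.dwell_seconds = dwell_seconds
        self.dwell_start = None
        self.zoom_on = False

    def update(self, gaze_in_activation_region: bool, now=None) -> bool:
        """Feed one gaze sample; return the current zoom state."""
        now = time.monotonic() if now is None else now
        if gaze_in_activation_region:
            if self.dwell_start is None:
                self.dwell_start = now
            elif now - self.dwell_start >= self.dwell_seconds:
                self.zoom_on = not self.zoom_on   # binary on/off engagement
                self.dwell_start = None           # require a fresh dwell to toggle again
        else:
            self.dwell_start = None
        return self.zoom_on
```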


advanced visual interfaces | 2014

A natural interface for multi-focal plane head mounted displays using 3D gaze

Takumi Toyama; Jason Orlosky; Daniel Sonntag; Kiyoshi Kiyokawa

In mobile augmented reality (AR), it is important to develop interfaces for wearable displays that not only reduce distraction, but that can be used quickly and in a natural manner. In this paper, we propose a focal-plane-based interaction approach with several advantages over traditional methods designed for head mounted displays (HMDs) with only one focal plane. Using a novel prototype that combines a monoscopic multi-focal plane HMD and eye tracker, we facilitate interaction with virtual elements such as text or buttons by measuring eye convergence on objects at different depths. This can prevent virtual information from being unnecessarily overlaid onto real-world objects that are at a different range but in the same line of sight. We then use our prototype in a series of experiments testing the feasibility of interaction. Despite only being presented with monocular depth cues, users have the ability to correctly select virtual icons in near, mid, and far planes in 98.6% of cases.
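
A minimal sketch of depth-from-vergence selection, assuming the interpupillary distance and per-eye horizontal gaze angles are available; the focal-plane distances are placeholders, not the prototype's values.

```python
# Hypothetical sketch: triangulate the fixation depth from the inward rotation
# of the two eyes, then snap it to the nearest focal plane (near/mid/far).
import math

def vergence_depth(ipd_m, left_angle_rad, right_angle_rad):
    """Estimate fixation distance (meters) from the inward gaze angle of each eye."""
    vergence = left_angle_rad + right_angle_rad      # total convergence angle
    if vergence <= 0:
        return float("inf")                          # eyes parallel: far fixation
    return (ipd_m / 2.0) / math.tan(vergence / 2.0)

def nearest_plane(depth_m, planes_m=(0.3, 1.0, 3.0)):
    """Select the focal plane closest to the estimated depth."""
    return min(planes_m, key=lambda p: abs(p - depth_m))
```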


intelligent user interfaces | 2015

An Interactive Pedestrian Environment Simulator for Cognitive Monitoring and Evaluation

Jason Orlosky; Markus Weber; Yecheng Gu; Daniel Sonntag; Sergey A. Sosnovsky

Recent advances in virtual and augmented reality have led to the development of a number of simulations for different applications. In particular, simulations for monitoring, evaluation, training, and education have started to emerge for the consumer market due to the availability and affordability of immersive display technology. In this work, we introduce a virtual reality environment that provides an immersive traffic simulation designed to observe behavior and monitor relevant skills and abilities of pedestrians who may be at risk, such as elderly persons with cognitive impairments. The system provides basic reactive functionality, such as display of navigation instructions and notifications of dangerous obstacles during navigation tasks. Methods for interaction using hand and arm gestures are also implemented to allow users to explore the environment in a more natural manner.


international conference on distributed, ambient, and pervasive interactions | 2014

Using Eye-Gaze and Visualization to Augment Memory

Jason Orlosky; Takumi Toyama; Daniel Sonntag; Kiyoshi Kiyokawa

In our everyday lives, bits of important information are lost because our brain fails to convert a large portion of short-term memory into long-term memory. In this paper, we propose a framework that uses an eye-tracking interface to store pieces of forgotten information and present them back to the user later with an integrated head mounted display (HMD). This process occurs in three main steps: context recognition, data storage, and augmented reality (AR) display. We demonstrate the system's ability to recall information with the example of a lost book page by detecting when the user reads the book again and intelligently presenting the last read position back to the user. Two short user evaluations show that the system can recall book pages within 40 milliseconds, and that the position where a user left off can be calculated with approximately 0.5 centimeter accuracy.
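
A hypothetical sketch of the store-and-recall pattern: remember the last gaze position on a recognized document and return it when that document is seen again. Identifiers and structure are illustrative, not the framework's actual data model.

```python
# Hypothetical sketch: per-document memory of the last gaze position, so the
# last read position can be presented again when the document is recognized.
from typing import Dict, Optional, Tuple

class ReadingMemory:
    def __init__(self):
        self._last_position: Dict[str, Tuple[float, float]] = {}

    def update(self, document_id: str, gaze_xy: Tuple[float, float]) -> None:
        """Record the most recent gaze position on the recognized document."""
        self._last_position[document_id] = gaze_xy

    def recall(self, document_id: str) -> Optional[Tuple[float, float]]:
        """Return the stored position, or None if the document was never read."""
        return self._last_position.get(document_id)
```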


intelligent user interfaces | 2015

Attention Engagement and Cognitive State Analysis for Augmented Reality Text Display Functions

Takumi Toyama; Daniel Sonntag; Jason Orlosky; Kiyoshi Kiyokawa

Human eye gaze has recently been used as an effective input interface for wearable displays. In this paper, we propose a gaze-based interaction framework for optical see-through displays. The proposed system can automatically judge whether a user is engaged with virtual content in the display or focused on the real environment, and can determine his or her cognitive state. With these analytic capabilities, we implement several proactive system functions including adaptive brightness, scrolling, messaging, notification, and highlighting, which would otherwise require manual interaction. The goal is to manage the relationship between virtual and real, creating a more cohesive and seamless experience for the user. We conduct user experiments including attention engagement and cognitive state analysis, such as reading detection and gaze position estimation in a wearable display, toward the design of augmented reality text display applications. The results from the experiments show the robustness of the attention engagement and cognitive state analysis methods. A majority of the experiment participants (8/12) stated that the proactive system functions are beneficial.
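
A hypothetical sketch of the engagement decision, treating a gaze sample as "virtual" when it lands on the content region at roughly the display's focal distance and as "real" otherwise; the thresholds and region format are assumptions.

```python
# Hypothetical sketch: classify a single gaze sample as attending to virtual
# content or to the real environment behind the optical see-through display.
def classify_engagement(gaze_xy, gaze_depth_m, content_box,
                        display_depth_m=2.0, depth_tol_m=0.5):
    """content_box is (x0, y0, x1, y1) in the same coordinates as gaze_xy."""
    x0, y0, x1, y1 = content_box
    on_content = x0 <= gaze_xy[0] <= x1 and y0 <= gaze_xy[1] <= y1
    at_display_depth = abs(gaze_depth_m - display_depth_m) <= depth_tol_m
    return "virtual" if (on_content and at_display_depth) else "real"
```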


international symposium on mixed and augmented reality | 2016

Automated Spatial Calibration of HMD Systems with Unconstrained Eye-cameras

Alexander Plopski; Jason Orlosky; Yuta Itoh; Christian Nitschke; Kiyoshi Kiyokawa; Gudrun Klinker

Properly calibrating an optical see-through head-mounted display (OST-HMD) and maintaining a consistent calibration over time can be a very challenging task. Automated methods need an accurate model of both the OST-HMD screen and the user's constantly changing eye position to correctly project virtual information. While some automated methods exist, they often have restrictions, including fixed eye-cameras that cannot be adjusted for different users. To address this problem, we have developed a method that automatically determines the position of an adjustable eye-tracking camera and its unconstrained position relative to the display. Unlike methods that require a fixed pose between the HMD and eye camera, our framework allows for automatic calibration even after adjustments of the camera to a particular individual's eye and even after the HMD moves on the user's face. Using two sets of IR-LEDs rigidly attached to the camera and OST-HMD frame, we can calculate the correct projection for different eye positions in real time and changes in HMD position within several frames. To verify the accuracy of our method, we conducted two experiments with a commercial HMD by calibrating a number of different eye and camera positions. Ground truth was measured through markers on both the camera and HMD screens, and we achieve a viewing accuracy of 1.66 degrees for the eyes of 5 different experiment participants.
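
One ingredient of such a calibration can be sketched with a standard perspective-n-point solve, recovering the eye-camera's pose from detected image positions of IR-LEDs whose 3D layout on the HMD frame is known. The data shapes and placeholder intrinsics are assumptions, not the paper's pipeline.

```python
# Hypothetical sketch: estimate the eye-camera pose relative to a rigid LED
# constellation on the HMD frame using OpenCV's PnP solver.
import cv2
import numpy as np

def camera_pose_from_leds(led_points_3d, led_points_2d, camera_matrix, dist_coeffs=None):
    """Return (R, t) mapping HMD-frame coordinates into the eye-camera frame."""
    dist_coeffs = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(led_points_3d, dtype=np.float64),   # Nx3 LED positions on the frame
        np.asarray(led_points_2d, dtype=np.float64),   # Nx2 detected image positions
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed; check LED detections")
    R, _ = cv2.Rodrigues(rvec)                         # rotation vector -> 3x3 matrix
    return R, tvec
```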

Collaboration


Dive into Jason Orlosky's collaborations.

Top Co-Authors

Haruo Takemura

Nara Institute of Science and Technology
