Publication


Featured research published by Oliver S. Schneider.


user interface software and technology | 2015

Tactile Animation by Direct Manipulation of Grid Displays

Oliver S. Schneider; Ali Israr; Karon E. MacLean

Chairs, wearables, and handhelds have become popular sites for spatial tactile display. Visual animators, already expert in using time and space to portray motion, could readily transfer their skills to produce rich haptic sensations if given the right tools. We introduce the tactile animation object, a directly manipulated phantom tactile sensation. This abstraction has two key benefits: 1) efficient, creative, iterative control of spatiotemporal sensations, and 2) the potential to support a variety of tactile grids, including sparse displays. We present Mango, an editing tool for animators, including its rendering pipeline and perceptually-optimized interpolation algorithm for sparse vibrotactile grids. In our evaluation, professional animators found it easy to create a variety of vibrotactile patterns, with both experts and novices preferring the tactile animation object over controlling actuators individually.
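The "phantom tactile sensation" the abstract names is conventionally rendered by splitting one virtual vibration across two physical actuators. The paper's own perceptually optimized interpolation is not reproduced here; as an illustrative sketch, the classic energy-summation pan law for a 1-D actuator pair looks like this (function name and 0..1 position convention are assumptions):

```python
import math

def phantom_amplitudes(position, intensity):
    """Render a virtual (phantom) vibration between two actuators.

    position: 0.0 (at actuator A) .. 1.0 (at actuator B)
    intensity: perceived amplitude of the phantom sensation
    Returns (amp_a, amp_b) under the energy-summation pan law, which
    keeps amp_a**2 + amp_b**2 == intensity**2 so perceived magnitude
    stays constant as the phantom moves across the gap.
    """
    amp_a = math.sqrt(1.0 - position) * intensity
    amp_b = math.sqrt(position) * intensity
    return amp_a, amp_b
```

Animating a tactile object then reduces to sweeping `position` over time, which is what makes direct manipulation feasible on sparse grids.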


conference on computer supported cooperative work | 2011

Chalk sounds: the effects of dynamic synthesized audio on workspace awareness in distributed groupware

Carl Gutwin; Oliver S. Schneider; Robert Xiao; Stephen A. Brewster

Awareness of other people's activity is an important part of shared-workspace collaboration, and is typically supported using visual awareness displays such as radar views. These visual presentations are limited in that the user must be able to see and attend to the view in order to gather awareness information. Using audio to convey awareness information does not suffer from these limitations, and previous research has shown that audio can provide valuable awareness in distributed settings. In this paper we evaluate the effectiveness of synthesized dynamic audio information, both on its own and as an adjunct to a visual radar view. We developed a granular-synthesis engine that produces realistic chalk sounds for off-screen activity in a groupware workspace, and tested the audio awareness in two ways. First, we measured people's ability to identify off-screen activities using only sound, and found that people are almost as accurate with synthesized sounds as with real sounds. Second, we tested dynamic audio awareness in a realistic groupware scenario, and found that adding audio to a radar view significantly improved awareness of off-screen activities in situations where it was difficult to see or attend to the visual display. Our work provides new empirical evidence about the value of dynamic synthesized audio in distributed groupware.
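The granular-synthesis technique the abstract relies on can be sketched in a few lines: window short "grains" taken from a recorded texture and overlap-add them into an output buffer. This is a generic sketch, not the paper's engine; all names and parameters are illustrative:

```python
import math
import random

def granular_render(source, n_grains, grain_len, out_len, gain=1.0, seed=0):
    """Minimal granular-synthesis sketch: scatter Hann-windowed grains
    taken from `source` across an output buffer, overlap-adding them.
    A chalk-like texture emerges when `source` holds scratchy material
    and grain density is driven by the remote user's stroke speed."""
    rng = random.Random(seed)
    out = [0.0] * out_len
    # Precompute a Hann window so each grain fades in and out smoothly.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
              for i in range(grain_len)]
    for _ in range(n_grains):
        src_start = rng.randrange(0, len(source) - grain_len)
        dst_start = rng.randrange(0, out_len - grain_len)
        for i in range(grain_len):
            out[dst_start + i] += gain * window[i] * source[src_start + i]
    return out
```

Mapping off-screen activity to sound then amounts to modulating `n_grains` and `gain` from the activity level of each remote participant.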


user interface software and technology | 2014

FeelCraft: crafting tactile experiences for media using a feel effect library

Siyan Zhao; Oliver S. Schneider; Roberta L. Klatzky; Jill Fain Lehman; Ali Israr

FeelCraft is a media plugin that monitors events and states in the media and associates them with expressive tactile content using a library of feel effects (FEs). A feel effect (FE) is a user-defined haptic pattern that, by virtue of its connection to a meaningful event, generates dynamic and expressive effects on the users body. We compiled a library of more than fifty FEs associated with common events in games, movies, storybooks, etc., and used them in a sandbox-type gaming platform. The FeelCraft plugin allows a game designer to quickly generate haptic effects, associate them to events in the game, play them back for testing, save them and/or broadcast them to other users to feel the same haptic experience. Our demonstration shows an interactive procedure for authoring haptic media content using the FE library, playing it back during interactions in the game, and broadcasting it to a group of guests.
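The plugin architecture described above, mapping media events to feel effects from a library, can be sketched as follows. The class and method names are hypothetical illustrations of the idea, not the FeelCraft API:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FeelEffect:
    """A named haptic pattern: per-frame actuator intensities in 0..1."""
    name: str
    pattern: List[float]

@dataclass
class FeelEffectLibrary:
    """Holds feel effects plus bindings from media events to effects,
    mirroring the event-monitoring role the abstract describes."""
    _effects: Dict[str, FeelEffect] = field(default_factory=dict)
    _bindings: Dict[str, str] = field(default_factory=dict)

    def add(self, fe: FeelEffect) -> None:
        self._effects[fe.name] = fe

    def bind(self, event: str, fe_name: str) -> None:
        self._bindings[event] = fe_name

    def on_event(self, event: str) -> List[float]:
        """Return the haptic pattern to play for a media event,
        or an empty pattern if the event has no binding."""
        fe = self._effects.get(self._bindings.get(event, ""))
        return fe.pattern if fe else []
```

Saving, sharing, or broadcasting an experience then reduces to serializing the library and its bindings alongside the media.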


ieee haptics symposium | 2014

Improvising design with a Haptic Instrument

Oliver S. Schneider; Karon E. MacLean

As the need to deploy informative, expressive haptic phenomena in consumer devices gains momentum, the inadequacy of current design tools is becoming more critically obstructive. Current tools do not support collaboration or serendipitous exploration. Collaboration is critical, but direct means of sharing haptic sensations are limited, and the absence of unifying conceptual models for working with haptic sensations further restricts communication between designers and stakeholders. This is especially troublesome for pleasurable, affectively targeted interactions that rely on subjective user experience. In this paper, we introduce an alternative design approach inspired by musical instruments - a new tool for real-time, collaborative manipulation of haptic sensations - and describe a first example, mHIVE, a mobile Haptic Instrument for Vibrotactile Exploration. Our qualitative study shows that mHIVE supports exploration and communication but requires additional visualization and recording capabilities for tweaking designs; it also expands previous work on haptic language.


international computing education research workshop | 2012

Toward a validated computing attitudes survey

Allison Elliott Tew; Brian Dorn; Oliver S. Schneider

The Computing Attitudes Survey (CAS) is a newly designed instrument, adapted from the Colorado Learning Attitudes about Science Survey (CLASS), for measuring novice to expert-like perceptions about computer science. In this paper we outline the iterative design process used for the adaptation and present our progress toward establishing the instruments validity. We present results of think-aloud interviews and discuss procedures used to determine expert consensus for CAS items. We also detail results of a pilot of the instrument with 447 introductory students in Fall 2011 along with a preliminary factor analysis of this data. Findings to date show consistent interpretation of statements by faculty and students, establish expert consensus of opinion and identify eight candidate factors for further analysis.


human factors in computing systems | 2016

HapTurk: Crowdsourcing Affective Ratings of Vibrotactile Icons

Oliver S. Schneider; Hasti Seifi; Salma Kashani; Matthew Chun; Karon E. MacLean

Vibrotactile (VT) display is becoming a standard component of informative user experience, where notifications and feedback must convey information eyes-free. However, effective design is hindered by incomplete understanding of relevant perceptual qualities, together with the need for user feedback to be accessed in-situ. To access evaluation streamlining now common in visual design, we introduce proxy modalities as a way to crowdsource VT sensations by reliably communicating high-level features through a crowd-accessible channel. We investigate two proxy modalities to represent a high-fidelity tactor: a new VT visualization, and low-fidelity vibratory translations playable on commodity smartphones. We translated 10 high-fidelity vibrations into both modalities, and in two user studies found that both proxy modalities can communicate affective features, and are consistent when deployed remotely over Mechanical Turk. We analyze fit of features to modalities, and suggest future improvements.


AsiaHaptics | 2015

FeelCraft: User-Crafted Tactile Content

Oliver S. Schneider; Siyan Zhao; Ali Israr

Despite ongoing research into delivering haptic content, users still have no accessible way to add haptics to their experiences. Lack of haptic media infrastructure, few libraries of haptic content, and individual differences all provide barriers to creating mainstream haptics. In this paper, we present an architecture that supports generation of haptic content, haptic content repositories, and customization of haptic experiences. We introduce FeelCraft, a software plugin that monitors activities in media and associates them with expressive tactile patterns known as feel effects. The FeelCraft plugin allows end users to quickly generate haptic effects, associate them to events in the media, play them back for testing, save them, share them, and/or broadcast them to other users to feel the same haptic experience. The FeelCraft architecture supports both existing and future media content, and can be applied to a wide range of social, educational, and assistive applications.


human factors in computing systems | 2015

Exploring Embedded Haptics for Social Networking and Interactions

Ali Israr; Siyan Zhao; Oliver S. Schneider

Haptic feedback is frequently used for user interactions with mobile devices, wearables, and handheld controllers in virtual reality and entertainment settings. We explore the use of vibrotactile (VT) feedback for social and interpersonal communication on embedded systems, particularly in a mobile context. We propose an architecture that supports compact packet communication between devices and triggers expressive VT patterns in a typical messenger application. We present a communication API, haptic vocabularies, and an interface for receiving and authoring haptic messages. Finally, we conclude with an informal survey for using haptics in a social setting.


Pervasive and Mobile Computing | 2014

RRACE: Robust realtime algorithm for cadence estimation

Idin Karuei; Oliver S. Schneider; Bryan Stern; Michelle Chuang; Karon E. MacLean

We present an algorithm which analyzes walking cadence (momentary step frequency) via frequency-domain analysis of accelerometer signals available in common smartphones, and report its accuracy relative to the published state-of-the-art algorithms based on the data gathered in a controlled user study. We show that our algorithm (RRACE) is more accurate in all conditions, and is also robust to speed change and largely insensitive to orientation, location on person, and user differences. RRACE's performance is suitable for interactive mobile applications: it runs in realtime (~2 s latency), requires no tuning or a priori information, uses an extensible architecture, and can be optimized for the intended application. In addition, we provide an implementation that can be easily deployed on common smartphone platforms. Power consumption is measured and compared to that of the current commercially available mobile apps. We also describe a novel experiment design and analysis for verification of the best-optimized RRACE's performance under different conditions, executed outdoors to capture normal walking. The resulting extensive dataset allowed a direct comparison (conditions fully matched) of RRACE variants with a published time-based algorithm. We have made this verification design and dataset publicly available, so it can be re-used for gait (general attributes of walking movement) and cadence measurement studies or gait and cadence algorithm verification.
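The published RRACE implementation is not reproduced here, but the core idea the abstract names, frequency-domain analysis of accelerometer magnitude, can be sketched as a windowed FFT peak-pick. The window length, sample rate, and search band below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def estimate_cadence(accel_xyz, fs, band=(1.0, 3.5)):
    """Estimate walking cadence (steps/s) from a 3-axis accelerometer
    window. Using the magnitude makes the estimate largely insensitive
    to phone orientation; restricting the search to a plausible walking
    band rejects low-frequency drift and high-frequency noise.

    accel_xyz: array of shape (n, 3); fs: sample rate in Hz.
    """
    accel_xyz = np.asarray(accel_xyz, dtype=float)
    mag = np.linalg.norm(accel_xyz, axis=1)
    mag -= mag.mean()                       # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(mag * np.hanning(len(mag))))
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]
```

A realtime variant would slide this window over the incoming stream, which is where the ~2 s latency figure for such an approach comes from: the window must span a few steps before the spectral peak stabilizes.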


intelligent user interfaces | 2013

Real-time gait classification for persuasive smartphone apps: structuring the literature and pushing the limits

Oliver S. Schneider; Karon E. MacLean; Kerem Altun; Idin Karuei; Michael M.A. Wu

Persuasive technology is now mobile and context-aware. Intelligent analysis of accelerometer signals in smartphones and other specialized devices has recently been used to classify activity (e.g., distinguishing walking from cycling) to encourage physical activity, sustainable transport, and other social goals. Unfortunately, results vary drastically due to differences in methodology and problem domain. The present report begins by structuring a survey of current work within a new framework, which highlights comparable characteristics between studies; this provides a tool by which we and others can understand the current state of the art and guide research towards existing gaps. We then present a new user study, positioned in an identified gap, that pushes limits of current success with a challenging problem: the real-time classification of 15 similar and novel gaits suitable for several persuasive application areas, focused on the growing phenomenon of exercise games. We achieve a mean correct classification rate of 78.1% of all 15 gaits with a minimal amount of personalized training of the classifier for each participant when carried in any of 6 different carrying locations (not known a priori). When narrowed to a subset of four gaits and one location that is known, this improves to means of 92.2% with and 87.2% without personalization. Finally, we group our findings into design guidelines and quantify variation in accuracy when an algorithm is trained for a known location and participant.
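The paper's classifier and 15-gait taxonomy are not reproduced here; as an illustration of the general pipeline (windowed accelerometer features plus a trained classifier, where "personalization" means refitting with a few windows from the target user added to the training set), a nearest-centroid toy might look like this. All names are assumptions:

```python
import numpy as np

def window_features(accel_mag, fs):
    """Tiny feature vector for one window of accelerometer magnitude:
    mean absolute deviation, variability, and dominant frequency."""
    x = np.asarray(accel_mag, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    dom = freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin
    return np.array([np.mean(np.abs(x)), x.std(), dom])

class NearestCentroidGait:
    """Nearest-centroid classifier over window features. Personalizing
    amounts to calling fit() again with a few labeled windows from the
    target user appended to the training data."""
    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.labels_ = sorted(set(y))
        self.centroids_ = np.stack([X[y == c].mean(axis=0)
                                    for c in self.labels_])
        return self

    def predict(self, x):
        d = np.linalg.norm(self.centroids_ - np.asarray(x), axis=1)
        return self.labels_[int(np.argmin(d))]
```

Real gait classifiers use richer feature sets and stronger models, but the structure (per-window features, per-user refitting, unknown carrying location handled by training across locations) is the same shape as what the abstract describes.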

Collaboration


Dive into Oliver S. Schneider's collaborations.

Top Co-Authors

Karon E. MacLean, University of British Columbia
Carl Gutwin, University of Saskatchewan
David Marino, University of British Columbia
Hasti Seifi, University of British Columbia
Idin Karuei, University of British Columbia
Paul Bucci, University of British Columbia