Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Julia Schwarz is active.

Publication


Featured research published by Julia Schwarz.


user interface software and technology | 2011

TapSense: enhancing finger interaction on touch surfaces

Chris Harrison; Julia Schwarz; Scott E. Hudson

We present TapSense, an enhancement to touch interaction that allows conventional surfaces to identify the type of object being used for input. This is achieved by segmenting and classifying sounds resulting from an object's impact. For example, the diverse anatomy of a human finger allows different parts to be recognized, including the tip, pad, nail and knuckle, without having to instrument the user. This opens several new and powerful interaction opportunities for touch input, especially in mobile devices, where input is extremely constrained. Our system can also identify different sets of passive tools. We conclude with a comprehensive investigation of classification accuracy and training implications. Results show our proof-of-concept system can support sets with four input types at around 95% accuracy. Small but useful input sets of two (e.g., pen and finger discrimination) can operate in excess of 99% accuracy.
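The core idea, classifying a touch by the sound of its impact, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the spectral band-energy features and nearest-centroid classifier here are assumptions chosen for brevity.

```python
import numpy as np

def band_energies(segment, n_bands=8):
    """Log energy in n_bands equal-width frequency bands of a short
    audio segment captured around the moment of impact."""
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([np.sum(b ** 2) for b in bands]))

class NearestCentroid:
    """Classify an impact by the closest mean feature vector per class
    (e.g. 'tip', 'pad', 'nail', 'knuckle')."""
    def fit(self, features, labels):
        self.classes = sorted(set(labels))
        self.centroids = {
            c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in self.classes
        }
        return self

    def predict(self, feature):
        return min(self.classes,
                   key=lambda c: np.linalg.norm(feature - self.centroids[c]))
```

Different impact types excite different frequency bands (a soft pad versus a hard nail), which is what makes even this crude feature separable.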


user interface software and technology | 2010

A framework for robust and flexible handling of inputs with uncertainty

Julia Schwarz; Scott E. Hudson; Jennifer Mankoff; Andrew D. Wilson

New input technologies (such as touch), recognition-based input (such as pen gestures) and next-generation interactions (such as inexact interaction) all hold the promise of more natural user interfaces. However, these techniques all create inputs with some uncertainty. Unfortunately, conventional infrastructure lacks a method for easily handling uncertainty, and as a result input produced by these technologies is often converted to conventional events as quickly as possible, leading to a stunted interactive experience. We present a framework for handling input with uncertainty in a systematic, extensible, and easy-to-manipulate fashion. To illustrate this framework, we present several traditional interactors which have been extended to provide feedback about uncertain inputs and to allow for the possibility that the input will, in the end, be judged wrong (or end up going to a different interactor). Our six demonstrations include tiny buttons that are manipulable using touch input, a text box that can handle multiple interpretations of spoken input, a scrollbar that can respond to inexactly placed input, and buttons which are easier to click for people with motor impairments. Our framework supports all of these interactions by carrying uncertainty forward all the way through selection of possible target interactors, interpretation by interactors, generation of (uncertain) candidate actions to take, and a mediation process that decides (in a lazy fashion) which actions should become final.
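The dispatch-and-mediate flow described above can be sketched in a few lines. Everything here (the `Button` scoring model, the `mediate` function, the threshold) is an illustrative assumption, not the framework's API; the point is that an uncertain touch is scored against every candidate target and a final action is committed only lazily, once one interpretation clearly dominates.

```python
import math
from dataclasses import dataclass

@dataclass
class Button:
    name: str
    x: float
    y: float
    radius: float

    def score(self, touch_x, touch_y):
        """Likelihood-like score that this touch was aimed at the button:
        a Gaussian falloff with distance from the button's center."""
        d = math.hypot(touch_x - self.x, touch_y - self.y)
        return math.exp(-(d / self.radius) ** 2)

def mediate(buttons, touch_x, touch_y, threshold=0.8):
    """Lazily resolve an uncertain touch: commit only when one target
    clearly dominates; otherwise report the ambiguity so the interface
    can give feedback instead of guessing."""
    scores = {b.name: b.score(touch_x, touch_y) for b in buttons}
    total = sum(scores.values())
    posteriors = {n: s / total for n, s in scores.items()}
    best = max(posteriors, key=posteriors.get)
    if posteriors[best] >= threshold:
        return ("commit", best, posteriors)
    return ("ambiguous", None, posteriors)
```

A touch landing near one tiny button commits immediately; a touch equidistant between two stays ambiguous, which is exactly the state the paper's extended interactors surface as feedback.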


human factors in computing systems | 2012

Phone as a pixel: enabling ad-hoc, large-scale displays using mobile devices

Julia Schwarz; David Klionsky; Chris Harrison; Paul Henry Dietz; Andrew D. Wilson

We present Phone as a Pixel: a scalable, synchronization-free, platform-independent system for creating large, ad-hoc displays from a collection of smaller devices. In contrast to most tiled-display systems, the only requirement for participation is for devices to have an internet connection and a web browser. Thus, most smartphones, tablets, laptops and similar devices can be used. Phone as a Pixel uses a color-transition encoding scheme to identify and locate displays. This approach has several advantages: devices can be arbitrarily arranged (i.e., not in a grid) and infrastructure consists of a single conventional camera. Further, additional devices can join at any time without re-calibration. These are desirable properties to enable collective displays in contexts like sporting events, concerts and political rallies. In this paper we describe our system, show results from proof-of-concept setups, and quantify the performance of our approach on hundreds of displays.
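The identification step can be sketched as follows. The concrete code below (a fixed-length base-3 sequence over red/green/blue frames) is an illustrative assumption, not the paper's actual encoding scheme; it only shows why a camera watching color changes can recover each device's identity without calibration or a grid layout.

```python
COLORS = ["R", "G", "B"]

def encode_id(device_id, length=8):
    """Device ID -> sequence of full-screen colors the device displays,
    least-significant base-3 digit first. length=8 supports 3**8 IDs."""
    seq = []
    for _ in range(length):
        seq.append(COLORS[device_id % 3])
        device_id //= 3
    return seq

def decode_id(seq):
    """Color sequence observed by the camera for one screen-blob ->
    the device ID that produced it."""
    return sum(COLORS.index(c) * 3 ** i for i, c in enumerate(seq))
```

In a real deployment the camera would also need to segment each screen into a blob and align the start of the sequence (the paper's transition-based scheme addresses synchronization); this sketch assumes both are solved.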


human factors in computing systems | 2010

Cord input: an intuitive, high-accuracy, multi-degree-of-freedom input method for mobile devices

Julia Schwarz; Chris Harrison; Scott E. Hudson; Jennifer Mankoff

A cord, although simple in form, has many interesting physical affordances that make it powerful as an input device. Not only can a length of cord be grasped in different locations, but it can also be pulled, twisted and bent---four distinct and expressive dimensions that could potentially act in concert. Such an input mechanism could be readily integrated into headphones, backpacks, and clothing. Once grasped in the hand, a cord can be used in an eyes-free manner to control mobile devices, which often feature small screens and cramped buttons. In this note, we describe a proof-of-concept cord-based sensor, which senses three of the four input dimensions we propose. In addition to a discussion of potential uses, we also present results from our preliminary user study. The latter sought to compare the targeting performance and selection accuracy of different cord-based input modalities. We conclude with a brief set of design recommendations drawn from the results of our study.


interactive tabletops and surfaces | 2015

Estimating 3D Finger Angle on Commodity Touchscreens

Robert Xiao; Julia Schwarz; Chris Harrison

We describe a novel approach for estimating the pitch and yaw of fingers relative to a touchscreen's surface, offering two additional, analog degrees of freedom for interactive functions. Further, we show that our approach can be achieved on off-the-shelf consumer touchscreen devices: a smartphone and smartwatch. We validate our technique through a user study on both devices and conclude with several demo applications that illustrate the value and immediate feasibility of our approach.
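One way to see where such an estimate can come from: an angled finger produces an elongated contact blob on the capacitive image, and the blob's major axis points along the finger. The moment-based yaw estimate below is a simplified illustration of that geometric intuition, not the paper's actual (more sophisticated) method.

```python
import math

def blob_yaw(image):
    """image: 2D list of capacitance values for one touch blob.
    Returns the orientation of the blob's major axis in degrees,
    computed from second-order image moments."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            m00 += v
            m10 += v * x
            m01 += v * y
    cx, cy = m10 / m00, m01 / m00          # blob centroid
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            mu20 += v * (x - cx) ** 2
            mu02 += v * (y - cy) ** 2
            mu11 += v * (x - cx) * (y - cy)
    # Principal-axis angle of the covariance of the blob's mass.
    return math.degrees(0.5 * math.atan2(2 * mu11, mu20 - mu02))
```

Pitch is harder, since a steeper finger mostly changes blob size and intensity rather than orientation; that is where per-device modeling comes in.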


human factors in computing systems | 2014

TouchTools: leveraging familiarity and skill with physical tools to augment touch interaction

Chris Harrison; Robert Xiao; Julia Schwarz; Scott E. Hudson

The average person can skillfully manipulate a plethora of tools, from hammers to tweezers. However, despite this remarkable dexterity, gestures on today's touch devices are simplistic, relying primarily on the chording of fingers: one-finger pan, two-finger pinch, four-finger swipe and similar. We propose that touch gesture design be inspired by the manipulation of physical tools from the real world. In this way, we can leverage user familiarity and fluency with such tools to build a rich set of gestures for touch interaction. With only a few minutes of training on a proof-of-concept system, users were able to summon a variety of virtual tools by replicating their corresponding real-world grasps.


user interface software and technology | 2011

Monte carlo methods for managing interactive state, action and feedback under uncertainty

Julia Schwarz; Jennifer Mankoff; Scott E. Hudson

Current input handling systems provide effective techniques for modeling, tracking, interpreting, and acting on user input. However, new interaction technologies violate the standard assumption that input is certain. Touch, speech recognition, gestural input, and sensors for context often produce uncertain estimates of user inputs. Current systems tend to remove uncertainty early on. However, information available in the user interface and application can help to resolve uncertainty more appropriately for the end user. This paper presents a set of techniques for tracking the state of interactive objects in the presence of uncertain inputs. These techniques use a Monte Carlo approach to maintain a probabilistically accurate description of the user interface that can be used to make informed choices about actions. Samples are used to approximate the distribution of possible inputs, possible interactor states that result from inputs, and possible actions (callbacks and feedback) interactors may execute. Because each sample is certain, the developer can specify most of the behavior of interactors in a familiar, non-probabilistic fashion. This approach retains all the advantages of maintaining information about uncertainty while minimizing the need for the developer to work in probabilistic terms. We present a working implementation of our framework and illustrate the power of these techniques within a paint program that includes three different kinds of uncertain input.
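The key move, approximating the distribution over interface states with concrete samples so that each sample can be handled by ordinary non-probabilistic event code, can be sketched as below. The slider example, the Gaussian input model, and the resampling details are illustrative assumptions, not the paper's implementation.

```python
import random

class SliderSample:
    """One concrete hypothesis about a slider's state. Its event handler
    is ordinary, certain code: no probabilities appear here."""
    def __init__(self, value=0.0):
        self.value = value

    def handle_drag(self, dx):
        self.value = min(1.0, max(0.0, self.value + dx))

def step(samples, uncertain_dx, n=200, rng=random):
    """Advance the sample set under one uncertain drag event, modeled as
    (mean, stddev). Each new state sample pairs a resampled prior state
    with one input sample drawn from the event's distribution."""
    mean, std = uncertain_dx
    new = []
    for _ in range(n):
        s = SliderSample(rng.choice(samples).value)
        s.handle_drag(rng.gauss(mean, std))
        new.append(s)
    return new

def estimate(samples):
    """Point estimate (mean) over the current state distribution."""
    return sum(s.value for s in samples) / len(samples)
```

Because every sample is certain, `handle_drag` looks exactly like conventional interactor code; the uncertainty lives entirely in the population of samples, which can also drive probabilistic feedback or lazy mediation.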


human factors in computing systems | 2014

Probabilistic palm rejection using spatiotemporal touch features and iterative classification

Julia Schwarz; Robert Xiao; Jennifer Mankoff; Scott E. Hudson; Chris Harrison

Tablet computers are often called upon to emulate classical pen-and-paper input. However, touchscreens typically lack the means to distinguish between legitimate stylus and finger touches and touches with the palm or other parts of the hand. This forces users to rest their palms elsewhere or hover above the screen, resulting in ergonomic and usability problems. We present a probabilistic touch filtering approach that uses the temporal evolution of touch contacts to reject palms. Our system improves upon previous approaches, reducing accidental palm inputs to 0.016 per pen stroke, while correctly passing 98% of stylus inputs.
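The iterative part of the approach can be sketched as follows: each touch is re-classified over a series of growing time windows as more of its history becomes available, with later, better-informed windows weighted more heavily. The single feature used here (mean contact area) and the threshold are illustrative assumptions; the actual system uses richer spatiotemporal features.

```python
def classify_window(touch_history, window_ms, threshold=120.0):
    """Classify one touch using only samples within window_ms of its
    start. Palms tend to show large contact area early and persistently;
    stylus tips stay small."""
    window = [s for s in touch_history if s["t_ms"] <= window_ms]
    if not window:
        return None
    mean_area = sum(s["area"] for s in window) / len(window)
    return "palm" if mean_area > threshold else "stylus"

def iterative_classify(touch_history, windows_ms=(25, 50, 100, 200)):
    """Vote across increasingly long windows, weighting longer windows
    more heavily, and return the final decision for the touch."""
    votes = {"palm": 0.0, "stylus": 0.0}
    for w in windows_ms:
        label = classify_window(touch_history, w)
        if label:
            votes[label] += w  # longer window -> stronger vote
    return max(votes, key=votes.get)
```

Deferring the final decision this way is what lets the system stay probabilistic early (e.g. tentatively rendering a stroke) and revise its call once the contact's temporal evolution disambiguates it.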


human factors in computing systems | 2015

An Architecture for Generating Interactive Feedback in Probabilistic User Interfaces

Julia Schwarz; Jennifer Mankoff; Scott E. Hudson

Increasingly natural, sensed, and touch-based input is being integrated into devices. Along the way, both custom and more general solutions have been developed for dealing with the uncertainty that is associated with these forms of input. However, it is difficult to provide dynamic, flexible, and continuous feedback about uncertainty using traditional interactive infrastructure. Our contribution is a general architecture with the goal of providing support for continual feedback about uncertainty. Our architecture is based on prior work in modeling uncertainty using Monte Carlo sampling, and tracks multiple interfaces -- one for each plausible and differentiable sequence of input that the user may have intended. Importantly, it considers how the presentation of uncertainty can be organized and implemented in a general way. Our primary contribution is a method for reducing the number of alternative interfaces and fusing possible interfaces into a single interface that both communicates uncertainty and allows for disambiguation. We demonstrate the value of this result through a collection of 11 new and existing feedback techniques along with two applications demonstrating the use of the feedback architecture.


human computer interaction with mobile devices and services | 2014

Around-body interaction: sensing & interaction techniques for proprioception-enhanced input with mobile devices

Xiang 'Anthony' Chen; Julia Schwarz; Chris Harrison; Jennifer Mankoff; Scott E. Hudson

The space around the body provides a large interaction volume that can allow for big interactions on small mobile devices. However, interaction techniques making use of this opportunity are underexplored, with prior work primarily focusing on distributing information in the space around the body. We demonstrate three types of around-body interaction, including canvas, modal and context-aware interactions, in six demonstration applications. We also present a sensing solution using standard smartphone hardware: a phone's front camera, accelerometer and inertial measurement unit. Our solution allows a person to interact with a mobile device by holding and positioning it between a normal field of view and its vicinity around the body. By leveraging a user's proprioceptive sense, around-body interaction opens a new input channel that enhances conventional interaction on a mobile device without requiring additional hardware.

Collaboration


Dive into Julia Schwarz's collaborations.

Top Co-Authors

Chris Harrison, Carnegie Mellon University
Robert Xiao, Carnegie Mellon University
Scott E. Hudson, Carnegie Mellon University
Jennifer Mankoff, Carnegie Mellon University