Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jessica R. Cauchard is active.

Publication


Featured research published by Jessica R. Cauchard.


ubiquitous computing | 2015

Drone & me: an exploration into natural human-drone interaction

Jessica R. Cauchard; Jane L. E; Kevin Y. Zhai; James A. Landay

Personal drones are becoming popular, yet designing how to interact with these flying robots remains challenging. We present a Wizard-of-Oz (WoZ) elicitation study that informs how to interact naturally with drones. Results show strong agreement between participants for many interaction techniques, such as when gesturing for the drone to stop. We discovered that people interact with drones as they would with a person or a pet, using interpersonal gestures such as beckoning the drone closer. We detail the interaction metaphors observed and offer design insights for human-drone interaction.


user interface software and technology | 2011

Visual separation in mobile multi-display environments

Jessica R. Cauchard; Markus Löchtefeld; Pourang Irani; J. Schoening; Antonio Krüger; Mike Fraser; Sriram Subramanian

Projector phones, handheld game consoles and many other mobile devices increasingly include more than one display, and therefore present a new breed of mobile Multi-Display Environments (MDEs) to users. Existing studies illustrate the effects of visual separation between displays in MDEs and suggest interaction techniques that mitigate these effects. Currently, mobile devices with heterogeneous displays such as projector phones are often designed without reference to visual separation issues; therefore it is critical to establish whether concerns and opportunities raised in the existing MDE literature apply to the emerging category of Mobile MDEs (MMDEs). This paper investigates the effects of visual separation in the context of MMDEs and contrasts these with fixed MDE results, and explores design factors for Mobile MDEs. Our study uses a novel eye-tracking methodology for measuring switches in visual context between displays and identifies that MMDEs offer increased design flexibility over traditional MDEs in terms of visual separation. We discuss these results and identify several design implications.


ubiquitous computing | 2012

Steerable projection: exploring alignment in interactive mobile displays

Jessica R. Cauchard; Mike Fraser; Teng Han; Sriram Subramanian

Emerging smartphones and other handheld devices are now being fitted with a set of new embedded technologies such as pico-projection. They are usually designed with the pico-projector embedded in the top of the device. Despite the potential of personal mobile projection to support new forms of interactivity such as augmented reality techniques, these devices have not yet made a significant impact on the ways in which mobile data is experienced. We suggest that this ‘traditional’ configuration of fixed pico-projectors within the device is unsuited to many projection tasks because it couples the orientation of the device to the management of the projection space, preventing users from easily and simultaneously using the mobile device and looking at the projection. We present a study that demonstrates this problem, establishes the requirement for steerable projection behaviour, and captures initial user preferences for different projection coupling angles according to context. Our study highlights the importance of flexible interactive projection that can support interaction techniques on the device and in the projection space according to the task. This inspires a number of interaction techniques that create different personal and shared interactive display alignments to suit a range of mobile projection situations.


human factors in computing systems | 2016

ActiVibe: Design and Evaluation of Vibrations for Progress Monitoring

Jessica R. Cauchard; Janette L. Cheng; Thomas Pietrzak; James A. Landay

Smartwatches and activity trackers are becoming prevalent, providing information about health and fitness, and offering personalized progress monitoring. These wearable devices often offer multimodal feedback with embedded visual, audio, and vibrotactile displays. Vibrations are particularly useful when providing discreet feedback, without users having to look at a display or anyone else noticing, thus preserving the flow of the primary activity. Yet, current use of vibrations is limited to basic patterns, since representing more complex information with a single actuator is challenging. Moreover, it is unclear how much the user's current physical activity may interfere with their understanding of the vibrations. We address both issues through the design and evaluation of ActiVibe, a set of vibrotactile icons designed to represent progress through the values 1 to 10. We demonstrate a recognition rate of over 96% in a laboratory setting using a commercial smartwatch. ActiVibe was also evaluated in situ with 22 participants for a 28-day period. We show that the recognition rate is 88.7% in the wild and give a list of factors that affect the recognition, as well as provide design guidelines for communicating progress via vibrations.


human computer interaction with mobile devices and services | 2012

m+pSpaces: virtual workspaces in the spatially-aware mobile environment

Jessica R. Cauchard; Markus Löchtefeld; Mike Fraser; Antonio Krüger; Sriram Subramanian

We introduce spatially-aware virtual workspaces for the mobile environment. The notion of virtual workspaces was initially conceived to alleviate mental workload in desktop environments with limited display real-estate. Using spatial properties of mobile devices, we translate this approach and illustrate that mobile virtual workspaces greatly improve task performance for mobile devices. In a first study, we compare our spatially-aware prototype (mSpaces) to existing context switching methods for navigating amongst multiple tasks in the mobile environment. We show that users are faster, make more accurate decisions and require less mental and physical effort when using spatially-aware prototypes. We furthermore prototype pSpaces and m+pSpaces, two spatially-aware systems equipped with pico-projectors as auxiliary displays to provide dual-display capability to the handheld device. A final study reveals advantages of each of the different configurations and functionalities when comparing all three prototypes. Drawing on these findings, we identify design considerations to create, manipulate and manage spatially-aware virtual workspaces in the mobile environment.


human factors in computing systems | 2017

BrushTouch: Exploring an Alternative Tactile Method for Wearable Haptics

Evan Strasnick; Jessica R. Cauchard; James A. Landay

Haptic interfaces are ideal in situations where visual/auditory attention is impossible, unsafe, or socially unacceptable. However, conventional (vibrotactile) wearable interfaces often possess a limited bandwidth for expressing information. We explore a novel form of tactile stimulation through brushing, and demonstrate BrushTouch, a wearable prototype for brushing haptics. We also present schemes for conveying information such as time and direction through multi-tactor wrist-worn haptic interfaces. To evaluate BrushTouch, we ran two user studies comparing it to a conventional vibrotactile wristband across a number of tasks in both lab and mobile conditions. We show that for certain cues brushing can be more accurately recognized than vibration, enabling more effective spatial schemes for presenting information through haptic means. We then show that BrushTouch is capable of greater information transfer using such cues. We believe that brushing, like other non-vibrotactile haptic techniques, merits further investigation as a potential vehicle for richer haptic feedback.


human factors in computing systems | 2017

Drone & Wo: Cultural Influences on Human-Drone Interaction Techniques

Jane L. E; Ilene L. E; James A. Landay; Jessica R. Cauchard

As drones become ubiquitous, it is important to understand how cultural differences impact human-drone interaction. A previous elicitation study performed in the USA illustrated how users would intuitively interact with drones. We replicated this study in China to gain insight into how these user-defined interactions vary across the two cultures. We found that as per the US study, Chinese participants chose to interact primarily using gesture. However, Chinese participants used multi-modal interactions more than their US counterparts. Agreement for many proposed interactions was high within each culture. Across cultures, there were notable differences despite similarities in interaction modality preferences. For instance, culturally-specific gestures emerged in China, such as a T-shape gesture for stopping the drone. Participants from both cultures anthropomorphized the drone, and welcomed it into their personal space. We describe the implications of these findings on designing culturally-aware and intuitive human-drone interaction.


virtual systems and multimedia | 2006

Virtual manuscripts for an enhanced museum and web experience ‘living manuscripts’

Jessica R. Cauchard; Peter Ainsworth; Daniela M. Romano; Bob Banks

Due to preservation and conservation issues, manuscripts are normally kept in research libraries far from public gaze. On rare occasions, visitors can see these priceless objects, typically separated from them by a sealed case, with only a fixed double page spread visible from a manuscript that may contain hundreds of folios. This restricts the amount of knowledge offered by these books. This paper proposes the creation of virtual manuscripts as exhibits in their own right in a museum context, and as part of a web-based virtual learning environment offering visitors the unique opportunity of engaging with the manuscripts, providing further possibilities for accessing the heritage and cultural information contained in them. A database supplying information about and from the manuscripts, held in a virtual environment, creates the illusion of their “real” presence and materiality. ‘Living Manuscripts’ aims to stimulate and encourage engagement with vulnerable materials via an innovative virtual experience.


user interface software and technology | 2011

Mobile multi-display environments

Jessica R. Cauchard


human robot interaction | 2016

Emotion Encoding in Human-Drone Interaction

Jessica R. Cauchard; Kevin Y. Zhai; Marco Spadafora; James A. Landay

Collaboration


Dive into Jessica R. Cauchard's collaborations.

Top Co-Authors
Jérémy Frey

Interdisciplinary Center Herzliya
