Jarrod Knibbe
University of Copenhagen
Publications
Featured research published by Jarrod Knibbe.
human factors in computing systems | 2014
Sue Ann Seah; Diego Martinez Plasencia; Peter Bennett; Abhijit Karnik; Vlad Stefan Otrocol; Jarrod Knibbe; Andy Cockburn; Sriram Subramanian
We present SensaBubble, a chrono-sensory mid-air display system that generates scented bubbles to deliver information to the user via a number of sensory modalities. The system reliably produces single bubbles of specific sizes along a directed path. Each bubble produced by SensaBubble is filled with fog containing a scent relevant to the notification. The chrono-sensory aspect of SensaBubble means that information is presented both temporally and multimodally. Temporal information is enabled through two forms of persistence: firstly, a visual display projected onto the bubble, which only endures until it bursts; secondly, a scent released upon the bursting of the bubble, which slowly disperses and leaves a longer-lasting perceptible trace of the event. We report details of SensaBubble's design and implementation, as well as results of technical and user evaluations. We then discuss and demonstrate how SensaBubble can be adapted for use in a wide range of application contexts -- from an ambient peripheral display for persistent alerts, to an engaging display for gaming or education.
human factors in computing systems | 2014
Jarrod Knibbe; Diego Martinez Plasencia; Christopher Bainbridge; Chee-Kin Chan; Jiawei Wu; Thomas Cable; Hassan Munir; David Coyle
The size of a smart watch limits the available interactive surface for the user. Most current smart watches use a combination of a touch screen and physical buttons. Unfortunately, a small touch screen's usability is limited, as it can easily be occluded, for example by a finger. In this paper, we look at extending the interactive surface of a smart watch to the back of the hand. Our approach reduces screen occlusion by enabling off-device gestural interaction. We define a range of supported bimanual gestures and present a prototype device.
user interface software and technology | 2015
Jarrod Knibbe; Hrvoje Benko; Andrew D. Wilson
Projector-camera (pro-cam) systems afford a wide range of interactive possibilities, combining both natural and mixed-reality 3D interaction. However, the latency inherent within these systems can cause the projection to 'slip' from any moving target, so pro-cam systems have typically shied away from truly dynamic scenarios. We explore software-only techniques to reduce latency, considering the best achievable results with widely adopted commodity devices (e.g. 30Hz depth cameras and 60Hz projectors). We achieve 50% projection alignment on objects in free flight (a 34% improvement) and 69% alignment on dynamic human movement (a 40% improvement).
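The abstract does not spell out the prediction pipeline; as a rough illustration of the general idea (not the paper's implementation), the sketch below extrapolates a tracked position forward by the system latency using constant-velocity prediction, one common software-only way to keep a projection on a moving target. The frame rate and latency values are assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): compensate projector-camera
# latency by extrapolating a tracked object's position forward in time.
CAMERA_DT = 1.0 / 30.0      # time between depth frames (30 Hz camera)
SYSTEM_LATENCY = 0.100      # assumed capture-to-projection latency in seconds

def predict_position(prev_pos, curr_pos, latency=SYSTEM_LATENCY):
    """Constant-velocity extrapolation of a 3D target position.

    prev_pos, curr_pos: positions (metres) from the last two depth frames.
    Returns where the projection should be drawn so it lands on the target
    by the time the frame reaches the projector.
    """
    velocity = (np.asarray(curr_pos) - np.asarray(prev_pos)) / CAMERA_DT
    return np.asarray(curr_pos) + velocity * latency

# Example: an object moving 3 cm per frame along x is projected ~9 cm ahead.
print(predict_position([0.00, 1.0, 2.0], [0.03, 1.0, 2.0]))
```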
interactive tabletops and surfaces | 2015
Jarrod Knibbe; Tovi Grossman; George W. Fitzmaurice
We present the Smart Makerspace: a context-rich, immersive instructional workspace for novice and intermediate makers. The Smart Makerspace guides makers through the completion of a DIY task, while providing detailed contextually-relevant assistance, domain knowledge, tool location, usage cues, and safety advice. Through an initial exploratory study, we investigate the challenges faced in completing maker tasks. Our observations allow us to define design goals and a design space for a connected workshop. We describe our implementation, including a digital workbench, augmented toolbox, instrumented power tools and environmentally aware audio. We present a qualitative user study that produced encouraging results, with features that users unanimously found useful.
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies | 2017
Jarrod Knibbe; Paul Strohmeier; Sebastian Boring; Kasper Hornbæk
Electric muscle stimulation (EMS) can enable mobile force feedback, support pedestrian navigation, or confer object affordances. To date, however, EMS is limited by two interlinked problems. (1) EMS is low resolution -- achieving only coarse movements and constraining opportunities for exploration. (2) EMS requires time-consuming, expert calibration -- confining these interaction techniques to the lab. EMS arrays have been shown to increase stimulation resolution, but because calibration complexity grows exponentially with the number of electrodes, we require heuristics or automated procedures for successful calibration. We explore the feasibility of using electromyography (EMG) to auto-calibrate high-density EMS arrays. We determine regions of muscle activity during human-performed gestures to inform stimulation patterns for EMS-performed gestures. We report on a study showing that auto-calibration of a 60-electrode array is feasible: achieving 52% accuracy across six gestures, with 82% accuracy across our best three gestures. By highlighting the electrode-array calibration problem, and presenting a first exploration of a potential solution, this work lays the foundations for high-resolution, wearable and, perhaps one day, ubiquitous EMS beyond the lab.
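As a loose illustration of the auto-calibration idea (not the authors' pipeline), the sketch below picks which electrodes of a high-density EMS array to drive for a gesture by ranking per-electrode EMG activity recorded while the user performs that gesture themselves; the electrode count matches the abstract, but the signal model and number of active electrodes are assumptions.

```python
import numpy as np

# Illustrative sketch only: use per-electrode EMG activity from a
# human-performed gesture to choose a stimulation pattern for that gesture.
def calibrate_gesture(emg, n_active=8):
    """emg: array of shape (n_electrodes, n_samples) recorded during one
    human-performed gesture. Returns indices of the electrodes with the
    highest RMS activity, used as the stimulation pattern."""
    rms = np.sqrt(np.mean(emg ** 2, axis=1))   # per-electrode RMS activity
    return np.argsort(rms)[-n_active:]          # most active electrodes

# 60-electrode array, 2 s of simulated EMG at 1 kHz
rng = np.random.default_rng(0)
emg = rng.normal(scale=0.02, size=(60, 2000))
emg[10:18] += rng.normal(scale=0.2, size=(8, 2000))   # an active muscle region
print(sorted(calibrate_gesture(emg)))                  # -> electrodes 10..17
```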
human factors in computing systems | 2016
Paul Worgan; Jarrod Knibbe; Mike Fraser; Diego Martinez Plasencia
Current devices have limited battery life, typically lasting less than one day. This can lead to situations where critical tasks, such as making an emergency phone call, are not possible. Other devices, supporting different functionality, may have sufficient battery life to enable this task. We present PowerShake: an exploration of power as a shareable commodity between mobile (and wearable) devices. PowerShake enables users to control the balance of power levels in their own devices (intra-personal transactions) and to trade power with others (inter-personal transactions) according to their ongoing usage requirements. This paper demonstrates Wireless Power Transfer (WPT) between mobile devices. PowerShake is simple to perform on the go, supports ongoing/continuous tasks (transferring at ~3.1W), fits in a small form factor, and complies with electromagnetic safety guidelines while providing charging efficiency similar to other standards (48.2% vs. 51.2% in Qi). Based on our proposed technical implementation, we run a series of workshops to derive candidate designs for PowerShake-enabled devices and interactions, and to bring to light the social implications of power as a tradable asset.
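For a sense of scale, the back-of-the-envelope sketch below estimates how long two devices would need to stay in contact to transfer a useful amount of charge at the reported ~3.1W rate; the battery voltage and target charge are illustrative assumptions, not figures from the paper.

```python
# Rough estimate only: time needed to move enough charge for a short
# emergency call between two phones via PowerShake-style transfer.
TRANSFER_POWER_W = 3.1    # transfer rate reported in the abstract
EFFICIENCY = 0.482        # end-to-end charging efficiency from the abstract
BATTERY_VOLTAGE = 3.8     # assumed nominal Li-ion cell voltage
TARGET_MAH = 100          # assumed charge needed for a brief call

energy_needed_wh = TARGET_MAH / 1000 * BATTERY_VOLTAGE      # ~0.38 Wh
minutes = energy_needed_wh / TRANSFER_POWER_W * 60
print(f"~{minutes:.0f} min of contact to receive {TARGET_MAH} mAh")

# The donor spends more than it delivers because of the 48.2% efficiency:
donor_drain_wh = energy_needed_wh / EFFICIENCY
print(f"donor spends ~{donor_drain_wh:.2f} Wh to deliver {energy_needed_wh:.2f} Wh")
```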
human computer interaction with mobile devices and services | 2016
Aske Mottelson; Christoffer Larsen; Mikkel Lyderik; Paul Strohmeier; Jarrod Knibbe
The small displays of smartwatches make text entry difficult and time consuming. While text entry rates can be increased, this continues to occur at the expense of available screen display space. Soft keyboards can easily use half the display space of tiny-screened devices. To combat this problem, we present Invisiboard: an invisible text entry method using the entire display simultaneously for both text input and output. Invisiboard combines a numberpad-like layout with swipe gestures. This maximizes input target size, provides a familiar layout, and maximizes display space. Through this, Invisiboard achieves entry rates comparable to, or even faster than, an existing research baseline. A user study with 12 participants writing 3264 words revealed an entry rate of 10.6 Words Per Minute (WPM) after 30 minutes, 7% faster than ZoomBoard. Furthermore, with nominal training, some participants demonstrated entry rates of over 30 WPM.
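Entry rates like the 10.6 WPM above conventionally treat a "word" as five characters, including spaces. One common form of the calculation (a generic formula, not code from the paper) is sketched below.

```python
# Standard text-entry metric: words per minute with a 5-character "word".
def words_per_minute(transcribed: str, seconds: float) -> float:
    """WPM = (characters / 5) / minutes elapsed for the phrase."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

# Example: a 22-character phrase entered in 25 seconds -> ~10.6 WPM
print(round(words_per_minute("the quick brown fox 12", 25), 1))
```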
human factors in computing systems | 2014
Diego Martinez Plasencia; Jarrod Knibbe; Andy Haslam; Eddie Latimer; Barnaby Dennis; Gareth Lewis; Matthew Whiteley; David Coyle
Tabletop systems are great platforms for collaborative work and social interaction. However, many fail to also accommodate content visible only to some users, or they do so by reducing the surface visible to the rest of the users. We present ReflectoSlates: retroreflective sheets combined with a chest-mounted camera-projector system connected to the user's mobile device. When placed on the tabletop, ReflectoSlates allow users to see their private content while other users continue to see the tabletop. They can be lifted and moved while still displaying each user's individual content. Users can also interact with them using mid-air gestures detected by the camera-projector system. This way, users do not interfere with others while their content is on the tabletop, and they can continue to interact with it when they lift the ReflectoSlate or walk away from the tabletop.
augmented human international conference | 2018
Henning Pohl; Kasper Hornbæk; Jarrod Knibbe
Electric Muscle Stimulation (EMS) has emerged as an interaction paradigm for HCI. It has been used to confer object affordance, provide walking directions, and assist with sketching. However, the electrical signals used for EMS are multi-dimensional and require expert calibration before use. To date, this calibration has occurred as a collaboration between the experimenter, or interaction designer, and the user/participant. However, this is time-consuming, results in sampling only a limited space of possible signal configurations, and removes control from the participant. We present a calibration and signal-exploration technique that enables users to control their own stimulation, and thus their comfort, while supporting exploration of the continuous space of stimulation signals.
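As an illustrative sketch of what user-controlled exploration of that continuous space might look like (not the authors' system), the snippet below represents an EMS signal as a point in a parameter space and lets the user nudge it within a self-chosen comfort ceiling; all parameter names, units, and limits are assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch only: an EMS signal as a point in a continuous space.
@dataclass
class EmsSignal:
    amplitude_ma: float      # current amplitude in milliamps
    pulse_width_us: float    # pulse width in microseconds
    frequency_hz: float      # pulse repetition rate

def nudge(signal: EmsSignal, d_amp=0.0, d_width=0.0, d_freq=0.0,
          comfort_ceiling_ma=20.0) -> EmsSignal:
    """Move a small step through the stimulation space, never exceeding
    the user-chosen comfort ceiling on amplitude."""
    return EmsSignal(
        amplitude_ma=min(max(signal.amplitude_ma + d_amp, 0.0), comfort_ceiling_ma),
        pulse_width_us=max(signal.pulse_width_us + d_width, 50.0),
        frequency_hz=max(signal.frequency_hz + d_freq, 1.0),
    )

s = EmsSignal(amplitude_ma=5.0, pulse_width_us=200.0, frequency_hz=80.0)
s = nudge(s, d_amp=1.0)   # user asks for slightly stronger stimulation
print(s)
```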
human factors in computing systems | 2015
Thomas Charlesworth; Helena Ford; Luke Milton; Thomas Mortensson; James Pedlingham; Jarrod Knibbe; Sue Ann Seah
TellTale is a wearable device that seeks to augment communication with subconscious emotion information. By sensing a user's heart rate and galvanic skin response, two major biological indicators of physiological state, TellTale can provide insight into true physiological and emotional response. In this way, TellTale acts as a playful, wearable polygraph or lie detector. Through abstracted visualisations of the physiological data, we aim to position TellTale in line with the learned skills of communication. In this paper, we motivate the design of TellTale, detail a prototype device and pilot study, and present future areas of exploration for TellTale.
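One simple way such signals can be abstracted for visualisation (an assumption for illustration, not TellTale's actual processing) is to z-score heart rate and skin conductance against a resting baseline and average them into a single value, as sketched below; the baseline figures are placeholders.

```python
import numpy as np

# Illustrative sketch only: collapse heart rate and galvanic skin response
# into one abstract "arousal" value relative to a resting baseline.
def arousal_index(heart_rate_bpm, gsr_microsiemens,
                  baseline_hr=(70.0, 5.0), baseline_gsr=(2.0, 0.5)):
    """baseline_* are (mean, std) from a short resting recording."""
    hr_z = (np.asarray(heart_rate_bpm) - baseline_hr[0]) / baseline_hr[1]
    gsr_z = (np.asarray(gsr_microsiemens) - baseline_gsr[0]) / baseline_gsr[1]
    return (hr_z + gsr_z) / 2.0

# Elevated heart rate and skin conductance -> positive index (~2.7 here)
print(arousal_index([85.0], [3.2]))
```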