Sebastian Boring
University of Copenhagen
Publications
Featured research published by Sebastian Boring.
user interface software and technology | 2011
Nicolai Marquardt; Robert Diaz-Marino; Sebastian Boring; Saul Greenberg
People naturally understand and use proxemic relationships (e.g., their distance and orientation towards others) in everyday situations. However, only a few ubiquitous computing (ubicomp) systems interpret such proxemic relationships to mediate interaction (proxemic interaction). A technical problem is that developers find it challenging and tedious to access proxemic information from sensors. Our Proximity Toolkit solves this problem. It simplifies the exploration of interaction techniques by supplying fine-grained proxemic information between people, portable devices, large interactive surfaces, and other non-digital objects in a room-sized environment. The toolkit offers three key features. (1) It facilitates rapid prototyping of proxemic-aware systems by supplying developers with the orientation, distance, motion, identity, and location information between entities. (2) It includes various tools, such as a visual monitoring tool, that allow developers to visually observe, record, and explore proxemic relationships in 3D space. (3) Its flexible architecture separates sensing hardware from the proxemic data model derived from these sensors, which means that a variety of sensing technologies can be substituted or combined to derive proxemic information. We illustrate the versatility of the toolkit with proxemic-aware systems built by students.
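To make the toolkit's model concrete, here is a minimal sketch of how a proxemic-aware client might consume pairwise relationship updates. It is written in Python for illustration only; the entity names, event fields, and thresholds are hypothetical and do not reflect the toolkit's actual API.

```python
# Hypothetical sketch of consuming pairwise proxemic events, in the
# spirit of the Proximity Toolkit's model (not its actual API).
from dataclasses import dataclass

@dataclass
class ProxemicEvent:
    a: str              # identity of the first entity (e.g., "person1")
    b: str              # identity of the second entity (e.g., "tabletop")
    distance: float     # distance between the entities, in metres
    orientation: float  # angle (degrees) between a's facing vector and b

def on_relation_updated(event: ProxemicEvent) -> None:
    """React as a person approaches and faces the tabletop."""
    facing = abs(event.orientation) < 30.0  # roughly oriented towards b
    if event.distance < 1.5 and facing:
        print(f"{event.a} is engaged with {event.b}: show full content")
    elif event.distance < 4.0:
        print(f"{event.a} is near {event.b}: show an ambient preview")

# A sensing layer would emit these updates continuously; simulate one.
on_relation_updated(ProxemicEvent("person1", "tabletop", 1.2, 12.0))
```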
human computer interaction with mobile devices and services | 2012
Sebastian Boring; David Ledo; Xiang ‘Anthony’ Chen; Nicolai Marquardt; Anthony Tang; Saul Greenberg
Modern mobile devices allow a rich set of multi-finger interactions that combine modes into a single fluid act, for example, one finger for panning blending into a two-finger pinch gesture for zooming. Such gestures require the use of both hands: one holding the device while the other is interacting. While on the go, however, only one hand may be available to both hold the device and interact with it. This mostly limits interaction to a single touch (i.e., the thumb), forcing users to switch between input modes explicitly. In this paper, we contribute the Fat Thumb interaction technique, which uses the thumb's contact size as a form of simulated pressure. This adds a degree of freedom, which can be used, for example, to integrate panning and zooming into a single interaction. Contact size determines the mode (i.e., panning with a small size, zooming with a large one), while thumb movement performs the selected mode. We discuss nuances of the Fat Thumb based on the thumb's limited operational range and motor skills when that hand holds the device. We compared Fat Thumb to three alternative techniques in which people had to precisely pan and zoom to a predefined region on a map, and found that the Fat Thumb technique compared well to existing techniques.
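The core mode-selection logic can be pictured in a few lines. The following is a minimal sketch assuming a touch event that reports an estimated contact size; the threshold and field names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of Fat Thumb's contact-size mode switch: small
# contacts pan, large contacts zoom (thresholds are illustrative only).
SIZE_THRESHOLD = 9.0  # assumed contact-diameter cut-off, in mm

def handle_thumb_move(contact_size_mm, dx, dy, view):
    """Route one thumb-move sample to panning or zooming."""
    if contact_size_mm < SIZE_THRESHOLD:
        view["x"] += dx                   # small contact: pan the map
        view["y"] += dy
    else:
        view["zoom"] *= 1.0 + dy * 0.01   # large contact: dy drives zoom

view = {"x": 0.0, "y": 0.0, "zoom": 1.0}
handle_thumb_move(6.5, 4.0, -2.0, view)   # light touch -> pan
handle_thumb_move(12.0, 0.0, 5.0, view)   # pressed-flat thumb -> zoom
print(view)
```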
australasian computer-human interaction conference | 2009
Sebastian Boring; Marko Jurmu; Andreas Butz
Large and public displays mostly provide little interactivity due to technical constraints, making it difficult for people to capture interesting information or to influence the screen's content. Through the combination of large-scale visual output and the mobile phone as an input device, bidirectional interaction with large public displays can be enabled. In this paper, we propose and compare three different interaction techniques (Scroll, Tilt and Move) for continuous control of a pointer located on a remote display using a mobile phone. Since each of these techniques seemed to have arguments for and against it, we conducted a comparative evaluation and discovered their specific strengths and weaknesses. We report the implementation of the techniques, their design, and the results of our user study. The experiment revealed that while Move and Tilt can be faster, they also introduce higher error rates for selection tasks.
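As a rough illustration of how the three techniques differ, the sketch below maps each technique's input (keypad steps, tilt angles, camera-based optical flow) to pointer motion. The sensor fields and gain values are assumptions, not the study's implementation.

```python
# Hypothetical sketch of the three pointer-control mappings compared in
# the paper (sensor fields and gain values are illustrative only).

def scroll_step(key_dx: int, key_dy: int, gain: float = 5.0):
    """Scroll: discrete keypad/joystick steps nudge the pointer."""
    return key_dx * gain, key_dy * gain

def tilt_step(pitch_deg: float, roll_deg: float, gain: float = 0.8):
    """Tilt: the phone's pitch/roll is mapped to pointer velocity."""
    return roll_deg * gain, pitch_deg * gain

def move_step(flow_dx: float, flow_dy: float, gain: float = 2.0):
    """Move: optical flow from the phone's camera drags the pointer."""
    return flow_dx * gain, flow_dy * gain

x, y = 400.0, 300.0  # pointer position on the remote display
for dx, dy in (scroll_step(1, 0), tilt_step(-3.0, 5.0), move_step(2.5, 1.0)):
    x, y = x + dx, y + dy
print(round(x, 1), round(y, 1))
```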
tangible and embedded interaction | 2009
Raphael Wimmer; Sebastian Boring
As mobile and tangible devices are getting smaller and smaller, it is desirable to extend the interaction area to their whole surface. The HandSense prototype employs capacitive sensors for detecting when it is touched or held against a body part. HandSense is also able to detect in which hand the device is held, and how. The general properties of our approach were confirmed by a user study. HandSense was able to correctly classify over 80 percent of all touches, discriminating six different ways of touching the device (hold left/right, pick up left/right, pick up at top/bottom). This information can be used to implement or enhance implicit and explicit interaction with mobile phones and other tangible user interfaces. For example, graphical user interfaces can be adjusted to the user's handedness.
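A grip classifier of this kind can be sketched as a nearest-neighbour match of capacitive readings against labelled prototypes. The sensor layout, values, and method below are invented for illustration and are not HandSense's actual classifier.

```python
# Hypothetical sketch of HandSense-style grip classification: a nearest-
# neighbour match of capacitive sensor readings against labelled grips.
# Sensor layout, prototype values, and labels are invented.
import math

TRAINING = {
    "hold-left":     [0.9, 0.2, 0.8, 0.1],
    "hold-right":    [0.2, 0.9, 0.1, 0.8],
    "pickup-left":   [0.7, 0.1, 0.2, 0.6],
    "pickup-right":  [0.1, 0.7, 0.6, 0.2],
    "pickup-top":    [0.5, 0.5, 0.9, 0.1],
    "pickup-bottom": [0.5, 0.5, 0.1, 0.9],
}

def classify(reading):
    """Return the grip whose prototype is closest to the reading."""
    return min(TRAINING, key=lambda g: math.dist(TRAINING[g], reading))

print(classify([0.85, 0.25, 0.75, 0.15]))  # -> "hold-left"
```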
interactive tabletops and surfaces | 2012
Nicolai Marquardt; Till Ballendat; Sebastian Boring; Saul Greenberg; Ken Hinckley
The increasing number of digital devices in our environment enriches how we interact with digital content. Yet, cross-device information transfer -- which should be a common operation -- is surprisingly difficult. One has to know which devices can communicate, what information they contain, and how information can be exchanged. To mitigate this problem, we formulate the gradual engagement design pattern that generalizes prior work in proxemic interactions and informs future system designs. The pattern describes how we can design device interfaces to gradually engage the user by disclosing connectivity and information exchange capabilities as a function of inter-device proximity. These capabilities flow across three stages: (1) awareness of device presence/connectivity, (2) reveal of exchangeable content, and (3) interaction methods for transferring content between devices tuned to particular distances and device capabilities. We illustrate how we can apply this pattern to design, and show how existing and novel interaction techniques for cross-device transfers can be integrated to flow across its various stages. We explore how techniques differ between personal and semi-public devices, and how the pattern supports interaction of multiple users.
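A minimal sketch of the pattern's core mapping, from inter-device distance to engagement stage, might look as follows; the distance thresholds are hypothetical.

```python
# Hypothetical sketch of the gradual engagement pattern: inter-device
# proximity selects one of the three stages (thresholds are illustrative).

def engagement_stage(distance_m: float) -> str:
    """Map device-to-device distance to an engagement stage."""
    if distance_m < 0.5:
        return "transfer"   # stage 3: enable content-transfer techniques
    if distance_m < 2.0:
        return "reveal"     # stage 2: reveal exchangeable content
    if distance_m < 6.0:
        return "awareness"  # stage 1: show presence/connectivity only
    return "idle"

for d in (7.0, 4.0, 1.0, 0.3):
    print(f"{d} m -> {engagement_stage(d)}")
```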
human factors in computing systems | 2011
Sebastian Boring; Sven Gehring; Alexander Wiethoff; Anna Magdalena Blöckner; Johannes Schöning; Andreas Butz
The increasing number of media facades in urban spaces offers great potential for new forms of interaction, especially for collaborative multi-user scenarios. In this paper, we present a way to directly interact with them through live video on mobile devices. We extend the Touch Projector interface to accommodate multiple users by showing individual content on the mobile display that would otherwise clutter the facade's canvas or distract other users. To demonstrate our concept, we built two collaborative multi-user applications: (1) painting on the facade and (2) solving a 15-puzzle. We gathered informal feedback during the Ars Electronica Festival in Linz, Austria, and found that our interaction technique is (1) considered easy to learn, but (2) may leave users unaware of the actions of others.
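At the heart of this style of interaction is mapping a touch on the phone's live video to a point on the facade. One common way to express such a mapping is a homography, sketched below; the matrix is a stand-in for what a tracking pipeline would estimate each frame, not the system's actual implementation.

```python
# Hypothetical sketch of the Touch Projector mapping: a touch on the
# phone's live video is transformed into facade coordinates via a
# homography estimated by tracking (the matrix here is a stand-in).

def apply_homography(h, x, y):
    """Map a point through a 3x3 homography (row-major nested lists)."""
    xs = h[0][0] * x + h[0][1] * y + h[0][2]
    ys = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xs / w, ys / w

# Placeholder values; a real system re-estimates H for every frame.
H = [[2.0, 0.0, 50.0],
     [0.0, 2.0, 20.0],
     [0.0, 0.0, 1.0]]

touch_x, touch_y = 160.0, 240.0               # touch on the live video
print(apply_homography(H, touch_x, touch_y))  # paint at this facade pixel
```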
international symposium on pervasive displays | 2012
Miaosen Wang; Sebastian Boring; Saul Greenberg
Effective street peddlers monitor passersby and tune their message to capture and keep a passerby's attention over the entire duration of the sales pitch. Similarly, advertising displays in today's public environments could be more effective if they were able to tune their content in response to how passersby attend to them, rather than just showing fixed content in a loop. Previously, others have prototyped displays that monitor and react to the presence or absence of a person within a few proxemic (spatial) zones surrounding the screen, where these zones are used as an estimate of attention. However, the coarseness and discrete nature of these zones mean that they cannot respond to subtle changes in the user's attention towards the display. In this paper, we contribute an extension to existing proxemic models. Our Peddler Framework captures (1) fine-grained continuous proxemic measures by (2) monitoring the passerby's distance and orientation with respect to the display at all times. We use this information to infer (3) the passerby's interest or digression of attention at any given time, and (4) their attentional state with respect to their short-term interaction history over time. Depending on this attentional state, we tune content to lead the passerby into a more attentive stage, ultimately resulting in a purchase. We also contribute a prototype of a public advertising display -- called Proxemic Peddler -- that demonstrates these extensions as applied to content from the Amazon.com website.
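One plausible way to picture such a continuous attention estimate is a score derived from distance and orientation, smoothed over recent history. The weights and smoothing constant below are illustrative assumptions, not the framework's actual model.

```python
# Hypothetical sketch of a Peddler-style attention estimate: a continuous
# score from distance and orientation, smoothed over recent samples.
# The fall-off ranges and smoothing constant are illustrative only.

def instant_interest(distance_m: float, facing_deg: float) -> float:
    """0..1 score: near and facing the display means high interest."""
    closeness = max(0.0, 1.0 - distance_m / 8.0)     # fades out by 8 m
    facing = max(0.0, 1.0 - abs(facing_deg) / 90.0)  # fades out by 90 deg
    return closeness * facing

def update_attention(history: float, sample: float, alpha=0.3) -> float:
    """Exponentially smooth interest to capture short-term history."""
    return (1 - alpha) * history + alpha * sample

attention = 0.0  # a passerby approaching and turning towards the display
for d, a in [(7.0, 80.0), (5.0, 40.0), (3.0, 10.0), (1.5, 5.0)]:
    attention = update_attention(attention, instant_interest(d, a))
    print(round(attention, 2))
```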
interactive tabletops and surfaces | 2011
Nicolai Marquardt; Johannes Kiemer; David Ledo; Sebastian Boring; Saul Greenberg
Recent work in multi-touch tabletop interaction has introduced many novel techniques that let people manipulate digital content through touch. Yet most only detect touch blobs. This ignores richer interactions that would be possible if we could identify (1) which part of the hand, (2) which side of the hand, and (3) which person is actually touching the surface. Fiduciary-tagged gloves were previously introduced as a simple but reliable technique for providing this information. The problem is that their low-level programming model hinders the way developers can rapidly explore new kinds of user- and handpart-aware interactions. We contribute the TouchID toolkit to solve this problem. It allows rapid prototyping of expressive multi-touch interactions that exploit the aforementioned characteristics of touch input. TouchID provides an easy-to-use event-driven API as well as higher-level tools that facilitate development: a glove configurator to rapidly associate particular glove parts with handparts, and a posture configurator and gesture configurator for registering new hand postures and gestures for the toolkit to recognize. We illustrate TouchID's expressiveness by showing how we developed a suite of techniques that exploits knowledge of which handpart is touching the surface.
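The flavor of such an event-driven, handpart-aware API can be sketched as follows. All names are invented for illustration; the actual toolkit's API differs.

```python
# Hypothetical sketch of a handpart-aware, event-driven touch API in the
# spirit of TouchID (names are invented; the real toolkit differs).

class TouchDispatcher:
    def __init__(self):
        self._handlers = {}  # (person, handpart) -> callback

    def on_touch(self, person, handpart, callback):
        """Register a callback for touches by a given person/handpart."""
        self._handlers[(person, handpart)] = callback

    def dispatch(self, person, handpart, x, y):
        """Route an identified touch to its registered handler, if any."""
        cb = self._handlers.get((person, handpart))
        if cb:
            cb(x, y)

dispatcher = TouchDispatcher()
dispatcher.on_touch("alice", "right-index-finger",
                    lambda x, y: print(f"alice draws at ({x}, {y})"))
dispatcher.on_touch("alice", "fist",
                    lambda x, y: print("alice erases a region"))
dispatcher.dispatch("alice", "right-index-finger", 120, 80)
```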
human factors in computing systems | 2012
Dominikus Baur; Sebastian Boring; Steven Feiner
Handheld optical projectors provide a simple way to overcome the limited screen real estate on mobile devices. We present virtual projection (VP), an interaction metaphor inspired by how we intuitively control the position, size, and orientation of a handheld optical projector's image. VP is based on tracking a handheld device without an optical projector and allows selecting a target display on which to position, scale, and orient an item in a single gesture. By relaxing the optical projection metaphor, we can deviate from modeling perspective projection, for example, to constrain scale or orientation, create multiple copies, or offset the image. VP also supports dynamic filtering based on the projection frustum, creating overview-and-detail applications, and selecting portions of a larger display for zooming and panning. We show exemplary use cases implemented using our optical feature-tracking framework and present the results of a user study demonstrating the effectiveness of VP in complex interactions with large displays.
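A simplified way to picture the underlying geometry is to intersect the tracked device's pointing ray with the display plane and scale the placed item with distance. The sketch below makes that concrete; all numbers and the scaling rule are illustrative assumptions, not the paper's model.

```python
# Hypothetical sketch of the virtual projection idea: intersect the
# tracked handheld's pointing ray with the display plane (z = 0) to
# place the "projected" item, scaling it with the ray parameter.

def project(pos, direction):
    """Intersect a ray from the device with the display plane z = 0."""
    px, py, pz = pos
    dx, dy, dz = direction
    if dz == 0:
        return None  # ray is parallel to the display plane
    t = -pz / dz
    if t <= 0:
        return None  # device is pointing away from the display
    return px + t * dx, py + t * dy, t  # t grows with distance

pos = (1.0, 1.5, 2.0)          # device position, metres from the display
direction = (0.1, -0.2, -1.0)  # where the device is pointing
hit = project(pos, direction)
if hit:
    x, y, t = hit
    scale = 0.5 * t            # farther away -> larger projected image
    print(f"place item at ({x:.2f}, {y:.2f}) with scale {scale:.2f}")
```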
nordic conference on human-computer interaction | 2010
Raphael Wimmer; Fabian Hennecke; Florian Schulz; Sebastian Boring; Andreas Butz; Heinrich Hußmann
Current desktop workspace environments consist of a vertical area (e.g., a screen with a virtual desktop) and a horizontal area (e.g., the physical desk). Daily working activities benefit from different intrinsic properties of both of these areas. However, the two areas are distinct from each other, making data exchange between them cumbersome. Therefore, we present Curve, a novel interactive desktop environment, which combines the advantages of vertical and horizontal working areas using a continuous curved connection. This connection offers new ways of direct multi-touch interaction and new ways of information visualization. We describe our basic design and the ergonomic adaptations we made, and discuss technical challenges we met and expect to meet while building and configuring the system.