Christian Corsten
RWTH Aachen University
Publications
Featured research published by Christian Corsten.
Interactive Tabletops and Surfaces | 2013
Christian Corsten; Ignacio Avellino; Max Möllers; Jan O. Borchers
Dedicated input devices are frequently used for system control. We present Instant User Interfaces, an interaction paradigm that loosens this dependency and allows a system to be operated even when its dedicated controller is unavailable. We implemented a reliable, marker-free object tracking system that enables users to assign semantic meaning to different object poses or to touches in different areas. With this system, users can repurpose everyday objects as ad-hoc input devices and program them either through a GUI or by demonstration. Participants tested and ranked these methods alongside a Wizard-of-Oz speech interface; as a group they showed no clear preference, but individuals did.
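The core mechanism described here, binding semantic meaning to object poses or touch regions at runtime, can be pictured as a small lookup table that is filled either through a GUI or by demonstration. A minimal Python sketch of that idea; all names are hypothetical and this is not the authors' implementation:

class InstantUI:
    def __init__(self):
        self.bindings = {}  # (object_id, pose) -> command name

    def record_by_demonstration(self, object_id, pose, command):
        # The user performs the pose, then names the command it should trigger.
        self.bindings[(object_id, pose)] = command

    def handle_pose(self, object_id, pose):
        # Called by the tracking system whenever an object's pose changes.
        command = self.bindings.get((object_id, pose))
        if command is not None:
            print("trigger:", command)

ui = InstantUI()
ui.record_by_demonstration("mug", "upside_down", "mute_speakers")
ui.handle_pose("mug", "upside_down")  # -> trigger: mute_speakers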
Human Factors in Computing Systems | 2015
Christian Corsten; Christian Cherek; Thorsten Karrer; Jan O. Borchers
Using a smartphone for touch input to control apps and games mirrored to a distant screen is difficult, as the user cannot see where she is touching while looking at the distant display. We present HaptiCase, an interaction technique that provides back-of-device tactile landmarks that the user senses with her fingers to estimate the location of her finger in relation to the touchscreen. By pinching the thumb resting above the touchscreen to a finger at the back, the finger position is transferred to the front as the thumb touches the screen. In a study, we compared touch performance of different landmark layouts with a regular landmark-free mobile device. Using a landmark design of dots on a 3×5 grid significantly improves eyes-free tapping accuracy and allows targets to be as small as 17.5 mm (a 14% reduction in target size) to cover 99% of all touches. When users can look at the touchscreen, landmarks have no significant effect on performance. HaptiCase is low-cost, requires no electronics, and works with unmodified software.
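The reported target size can be read as a coverage computation over touch offsets: if a square target is centered on the intended point, a touch is covered when both offset components fall within half the side length, so the required side is twice the 99th percentile of the larger per-axis offset. A rough Python sketch of that calculation, using synthetic offsets rather than the study's data:

import numpy as np

rng = np.random.default_rng(0)
offsets_mm = rng.normal(0.0, 3.0, size=(1000, 2))  # hypothetical touch offsets in mm

# A touch lands inside a centered square of side s iff max(|dx|, |dy|) <= s / 2.
radial = np.max(np.abs(offsets_mm), axis=1)
side_mm = 2 * np.percentile(radial, 99)
print("square target side for 99%% coverage: %.1f mm" % side_mm)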
Human Factors in Computing Systems | 2013
Christian Corsten; Chat Wacharamanotham; Jan O. Borchers
We introduce Fillables: low-cost, ubiquitous everyday vessels that are appropriated as tangible controllers whose haptics are tuned ad-hoc by filling them, e.g., with water. We show how Fillables can assist users in video navigation and drawing tasks with physical controllers whose adjustable output granularity harmonizes with their haptic feedback. As a proof of concept, we implemented a drawing application that uses vessels to control a virtual brush whose stroke width corresponds to the filling level. Furthermore, we found that humans can distinguish nine levels of haptic feedback when sliding water-filled paper cups (300 ml capacity) over a wooden surface. This discrimination follows Weber's Law and was facilitated by the sloshing of the water.
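Weber's Law states that the just-noticeable difference grows in proportion to the stimulus intensity, so distinguishable fill levels are spaced geometrically rather than linearly. A short Python illustration of that reasoning; the Weber fraction k below is a placeholder, not the value measured in the study:

def discriminable_levels(minimum, maximum, k):
    # Geometrically spaced stimulus levels: each step is (1 + k) times the previous one.
    levels = [minimum]
    while levels[-1] * (1 + k) <= maximum:
        levels.append(levels[-1] * (1 + k))
    return levels

# Water mass in a 300 ml cup, from 30 ml upwards, with an assumed k = 0.3:
print(discriminable_levels(30.0, 300.0, 0.3))  # length of this list = number of distinguishable levels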
Human Factors in Computing Systems | 2010
Christian Corsten
DragonFly is an application designed for reviewing lecture recordings of mind map-structured presentations. Instead of using a timeline slider, the lecture recording is controlled by selecting elements located at different positions on the map; hence, video time is controlled by navigating in space. A controlled experiment showed that reviewers using DragonFly found a specific scene in a lecture recording 1.5 times faster than reviewers working with QuickTime Player and a mind map printout.
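Navigating "in space instead of time" amounts to indexing the recording by mind-map element: each node stores the moment it was presented, and selecting a node seeks the video there. A toy Python sketch; the node labels and timestamps are invented for illustration:

node_to_time_s = {
    "Introduction": 0.0,
    "Related Work": 312.5,
    "Study Design": 897.0,
}

def time_for_node(node_label):
    # Return the timestamp (in seconds) at which the clicked node was presented.
    return node_to_time_s[node_label]

print(time_for_node("Study Design"))  # -> 897.0, where the player would seek to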
Human Factors in Computing Systems | 2018
Christian Corsten; Simon Voelker; Andreas Link; Jan O. Borchers
Picking values from long ordered lists, such as when setting a date or time, is a common task on smartphones. However, the system pickers and tables used for this require significant screen space for spinning and dragging, covering other information or pushing it off-screen. The Force Picker reduces this footprint by letting users increase and decrease values over a wide range using force touch for rate-based control. However, changing input direction this way is difficult. We propose three techniques to address this. With our best candidate, Thumb-Roll, the Force Picker lets untrained users achieve accuracy similar to a standard picker, albeit less quickly. Shrinking it to a single table row, 20% of the iOS picker height, slightly affects completion time, but not accuracy. Intriguingly, after 70 minutes of training, users were significantly faster with this minimized Thumb-Roll Picker than with the standard picker, at the same accuracy and only 6% of the gesture footprint. We close with application examples.
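Rate-based control means the applied force sets how fast the value changes per frame rather than mapping force to a value directly. A simplified Python sketch of such a mapping; the transfer function and constants are placeholders, not the Force Picker's actual parameters:

def value_rate(force_normalized, max_rate=30.0):
    # Map normalized force (0..1) to a change rate in items per second.
    return max_rate * force_normalized ** 2  # gentle near zero, fast when pressed hard

def step(value, force_normalized, direction, dt):
    # Advance the picker by one frame; direction is +1 or -1 (e.g., set via Thumb-Roll).
    return value + direction * value_rate(force_normalized) * dt

v = 0.0
for _ in range(60):          # one second at 60 fps with medium force
    v = step(v, 0.5, +1, 1 / 60)
print(round(v, 1))           # ~7.5 items scrolled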
Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces | 2017
Christian Corsten; Simon Voelker; Jan O. Borchers
Modern smartphones, like the iPhone 7, feature touchscreens with co-located force sensing. This makes touch input more expressive, e.g., by enabling single-finger continuous zooming when coupling zoom levels to force intensity. Often, however, the user wants to select and confirm a particular force value, say, to lock a certain zoom level. The most common confirmation techniques are Dwell Time (DT) and Quick Release (QR). While DT has been shown to be reliable, it slows the interaction, as the user must typically wait for 1 s before her selection is confirmed. Conversely, QR is fast but reported to be less reliable, although no reference reports how to actually detect and implement it. In this paper, we set out to challenge the low reliability of QR: We collected user data to (1) report how it can be implemented and (2) show that it is as reliable as DT (97.6% vs. 97.2% success). Since QR was also the faster technique and preferred by users, we recommend it over DT for force confirmation on modern smartphones.
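The two confirmation techniques can be viewed as two predicates over the force-over-time signal: Dwell Time confirms once the force has stayed above a threshold long enough, while Quick Release confirms when the force drops steeply as the finger relaxes or lifts. The Python sketch below illustrates this distinction; all thresholds are illustrative assumptions, not the detection parameters derived from the collected user data:

def dwell_time_confirmed(samples, threshold=0.3, dwell_s=1.0):
    # samples: list of (time_s, force 0..1). Confirm once force stays above
    # threshold for dwell_s seconds without interruption.
    start = None
    for t, f in samples:
        if f >= threshold:
            start = t if start is None else start
            if t - start >= dwell_s:
                return True
        else:
            start = None
    return False

def quick_release_confirmed(samples, drop_rate=3.0):
    # Confirm when force falls faster than drop_rate (force units per second)
    # between consecutive samples, i.e., the finger snaps off quickly.
    for (t0, f0), (t1, f1) in zip(samples, samples[1:]):
        if t1 > t0 and (f0 - f1) / (t1 - t0) > drop_rate:
            return True
    return False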
Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces | 2017
Nur Al-huda Hamdan; Ravi Kanth Kosuru; Christian Corsten; Jan O. Borchers
Devices like smartphones, smartwatches, and fitness trackers enable runners to control music, query fitness parameters such as heart rate and speed, or be guided by coaching apps. But while these devices are portable, interacting with them while running is difficult: they usually have small buttons or touchscreens that force the user to slow down to operate them properly. On-body tapping is an interaction technique that allows users to trigger actions eyes-free by tapping different body locations. This paper investigates on-body tapping as a potential input technique for runners. We conducted a user study to evaluate where and how accurately runners can tap on their body, motion-capturing participants while they tapped locations on their body and ran on a treadmill at different speeds. Results show that a uniform layout of five targets per arm and two targets on the abdomen achieved a 96% accuracy rate. We present a set of design implications to inform the design of on-body interfaces for runners.
Human-Computer Interaction with Mobile Devices and Services | 2016
Christian Corsten; Andreas Link; Thorsten Karrer; Jan O. Borchers
Using a smartphone touchscreen to control apps mirrored to a distant display is hard, since the user cannot see where she is touching while looking at the distant screen. Tactile landmarks at the back of the phone can mitigate this problem, especially in landscape mode [3]: By moving a finger across these landmarks, the user can haptically estimate the finger position relative to the touchscreen. Upon pinching the thumb resting above the touchscreen towards that finger at the back, the finger position is transferred to the front and registered as a touch. However, despite proprioception, this technique leads to a shift between the back and front positions, denoted as pinch error. We investigated this error using different target locations, device thicknesses, and tilt angles to derive target sizes that can be acquired at a 96% success rate.
Interactive Tabletops and Surfaces | 2014
Simon Voelker; Christian Corsten; Nur Al-huda Hamdan; Kjell Ivar Øvergård; Jan O. Borchers
Tangibles on interactive surfaces enable users to physically manipulate digital content by placing, manipulating, or removing a tangible object. However, information about whether and how a user grasps these objects has not been mapped out for tangibles on interactive surfaces so far. Based on Buxton's Three-State Model for graphical input, we present an interaction model that describes input on tangibles that are aware of the user's grasp. We present two examples showing how the user benefits from this extended interaction model. Furthermore, we show how the interaction with other existing tangibles for interactive tabletops can be modeled.
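Buxton's original Three-State Model distinguishes out-of-range, tracking, and dragging states for graphical input; making a tangible grasp-aware essentially adds whether the object is currently held to its surface state. A minimal Python sketch of such an extended state space; the states and transitions are an illustrative reading of the abstract, not the authors' exact model:

from enum import Enum, auto

class TangibleState(Enum):
    OFF_SURFACE = auto()         # tangible is not on the interactive surface
    ON_SURFACE = auto()          # placed on the surface, not held
    ON_SURFACE_GRASPED = auto()  # placed and held, so manipulation is imminent

def tangible_state(on_surface, grasped):
    # Derive the tangible's state from surface contact and grasp sensing.
    if not on_surface:
        return TangibleState.OFF_SURFACE
    return TangibleState.ON_SURFACE_GRASPED if grasped else TangibleState.ON_SURFACE

print(tangible_state(on_surface=True, grasped=True))  # TangibleState.ON_SURFACE_GRASPED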
Human Factors in Computing Systems | 2014
Simon Voelker; Christian Corsten; Nur Al-huda Hamdan; Kjell Ivar Øvergård; Jan O. Borchers
Tangibles on interactive surfaces enable users to physically manipulate digital content by placing, moving, manipulating, or removing a tangible object. However, information about whether and how a user grasps these tangibles has not been exploited for input so far. Based on Buxton's Three-State Model for graphical input, we present an interaction model that describes input on tangibles that are aware of the user's touch and grasp. We present two examples showing how the user benefits from this extended interaction model.