

Publications

Featured research published by Huy Viet Le.


Human-Computer Interaction with Mobile Devices and Services | 2017

A smartphone prototype for touch interaction on the whole device surface

Huy Viet Le; Sven Mayer; Patrick Bader; Niels Henze

Previous research proposed a wide range of interaction methods and use cases based on the previously unused back side and edge of a smartphone. Common approaches to implementing Back-of-Device (BoD) interaction include attaching two smartphones back to back and building a prototype completely from scratch. Changes in the device's form factor can influence hand grip and input performance, as shown in previous work. Further, the lack of an established operating system and SDK requires more effort to implement novel interaction methods. In this work, we present a smartphone prototype that runs Android and has a form factor nearly identical to an off-the-shelf smartphone. It further provides capacitive images of the hand holding the device for use cases such as grip-pattern recognition. We describe technical details and share source files so that others can rebuild our prototype. We evaluated the prototype with 8 participants to demonstrate the data that can be retrieved for an exemplary grip classification.
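To illustrate what "grip-pattern recognition from capacitive images" could look like in practice, here is a minimal, hypothetical sketch. The paper does not specify this classifier; the left/right intensity-mass heuristic, the toy frame, and the grip-side assumption are all invented for illustration.

```python
# Hypothetical sketch: guessing grip side from a back-of-device
# capacitive image. The real prototype streams such images; the
# simple half-vs-half comparison below is an illustrative stand-in,
# not the paper's method.

def grip_side(capacitive_image):
    """Return a grip-side guess from a 2D list of sensor intensities.

    Assumption (for this toy example only): a right-hand grip places
    more of the palm on the left half of the back panel, as seen from
    behind, so we compare summed intensity per half.
    """
    cols = len(capacitive_image[0])
    left = sum(v for row in capacitive_image for v in row[:cols // 2])
    right = sum(v for row in capacitive_image for v in row[cols // 2:])
    return "right-hand grip" if left > right else "left-hand grip"

# Toy 4x6 capacitive frame with strong activation on the left half.
frame = [
    [9, 8, 7, 1, 0, 0],
    [9, 9, 6, 1, 0, 0],
    [8, 9, 7, 2, 1, 0],
    [7, 8, 6, 1, 0, 0],
]
print(grip_side(frame))  # right-hand grip (under this toy heuristic)
```

A real classifier would be trained on labelled capacitive frames rather than relying on a fixed rule, but the input representation (a low-resolution intensity grid) is the same.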


Nordic Conference on Human-Computer Interaction | 2016

Investigating Screen Shifting Techniques to Improve One-Handed Smartphone Usage

Huy Viet Le; Patrick Bader; Thomas Kosch; Niels Henze

With increasingly large smartphones, it becomes more difficult to use these devices one-handed. Due to the large touchscreen, users cannot reach across the whole screen with their thumb. In this paper, we investigate approaches that move the screen content to increase reachability during one-handed use of large smartphones. In a first study, we compare three approaches based on back-of-device (BoD) interaction to move the screen content. We then compare the most preferred BoD approach with direct touch on the front and Apple's Reachability feature. We show that direct touch enables faster target selection than the other approaches but does not allow interaction with large parts of the screen. While Reachability is faster than a BoD screen shift method, only the BoD approach makes the whole front screen accessible.


Human Factors in Computing Systems | 2016

Impact of Video Summary Viewing on Episodic Memory Recall: Design Guidelines for Video Summarizations

Huy Viet Le; Sarah Clinch; Corina Sas; Tilman Dingler; Niels Henze; Nigel Davies

Reviewing lifelogging data has been proposed as a useful tool to support human memory. However, the sheer volume of data (particularly images) that can be captured by modern lifelogging systems makes the selection and presentation of material for review a challenging task. We present the results of a five-week user study involving 16 participants and over 69,000 images that explores both individual requirements for video summaries and the differences in cognitive load, user experience, memory experience, and recall experience between review using video summarisations and non-summary review techniques. Our results can be used to inform the design of future lifelogging data summarisation systems for memory augmentation.


Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces | 2017

Estimating the Finger Orientation on Capacitive Touchscreens Using Convolutional Neural Networks

Sven Mayer; Huy Viet Le; Niels Henze

In recent years, touchscreens have become the most common input device for a wide range of computers. While touchscreens are truly pervasive, commercial devices reduce the richness of touch input to two-dimensional positions on the screen. Recent work proposed interaction techniques to extend the richness of the input vocabulary using the finger orientation. Approaches for determining a finger's orientation using off-the-shelf capacitive touchscreens proposed in previous work already enable compelling use cases. However, the low estimation accuracy limits the usability and restricts the usage of finger orientation to non-precise input. With this paper, we provide a ground-truth data set for capacitive touchscreens recorded with a high-precision motion capture system. Using this data set, we show that a Convolutional Neural Network can outperform approaches proposed in previous work. Instead of relying on hand-crafted features, we trained the model on the raw capacitive images. Thereby we reduce the pitch error by 9.8% and the yaw error by 45.7%.
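The hand-crafted baselines that such a CNN is compared against typically derive yaw from the shape of the finger's contact blob: the blob is roughly an ellipse whose major axis follows the finger. A minimal sketch of that classic moment-based estimate (an illustrative stand-in, not the paper's exact pipeline) could look like this:

```python
import math

def yaw_from_blob(img):
    """Estimate finger yaw (degrees) from a capacitive image.

    Classic hand-crafted feature: compute the intensity-weighted
    centroid and central second moments of the blob, then take the
    orientation of the ellipse's major axis as the yaw estimate.
    img is a 2D list of sensor intensities.
    """
    total = sum(v for row in img for v in row)
    cy = sum(y * v for y, row in enumerate(img) for v in row) / total
    cx = sum(x * v for row in img for x, v in enumerate(row)) / total
    mu20 = sum((x - cx) ** 2 * v for row in img for x, v in enumerate(row))
    mu02 = sum((y - cy) ** 2 * v for y, row in enumerate(img) for v in row)
    mu11 = sum((x - cx) * (y - cy) * v
               for y, row in enumerate(img) for x, v in enumerate(row))
    return math.degrees(0.5 * math.atan2(2 * mu11, mu20 - mu02))

# A blob elongated along the x-axis should yield a yaw near 0 degrees.
blob = [
    [0, 1, 2, 2, 1, 0],
    [1, 3, 5, 5, 3, 1],
    [0, 1, 2, 2, 1, 0],
]
print(round(yaw_from_blob(blob), 1))  # 0.0
```

The CNN approach in the paper skips such features entirely and learns directly from the raw intensity grid, which is where the reported accuracy gains come from.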


Human Factors in Computing Systems | 2017

Interaction Methods and Use Cases for a Full-Touch Sensing Smartphone

Huy Viet Le; Sven Mayer; Patrick Bader; Frank Bastian; Niels Henze

Touchscreens are successful in recent smartphones because they combine input and output in a single interface. Despite their advantages, touch input still suffers from common limitations such as the fat-finger problem. To address these limitations, prior work proposed a variety of interaction techniques based on input sensors beyond the touchscreen, which were evaluated from a technical perspective. In contrast, we envision a smartphone that senses touch input on the whole device. Through interviews with experienced interaction designers, we elicited interaction methods that address touch input limitations from a different perspective. In this work, we focus on the interview results and present a smartphone prototype that senses touch input on the whole device. It has dimensions similar to regular phones and can be used to evaluate the presented findings under realistic conditions in future work.


Human Factors in Computing Systems | 2018

PalmTouch: Using the Palm as an Additional Input Modality on Commodity Smartphones

Huy Viet Le; Thomas Kosch; Patrick Bader; Sven Mayer; Niels Henze

Touchscreens are the most successful input method for smartphones. Despite their flexibility, touch input is limited to the location of taps and gestures. We present PalmTouch, an additional input modality that differentiates between touches of fingers and the palm. Touching the display with the palm can be a natural gesture since moving the thumb towards the device's top edge implicitly places the palm on the touchscreen. We present different use cases for PalmTouch, including the use as a shortcut and for improving reachability. To evaluate these use cases, we have developed a model that differentiates between finger and palm touch with an accuracy of 99.53% in realistic scenarios. Results of the evaluation show that participants perceive the input modality as intuitive and natural to perform. Moreover, they appreciate PalmTouch as an easy and fast solution to address the reachability issue during one-handed smartphone interaction compared to thumb stretching or grip changes.
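PalmTouch itself uses a trained model to reach the reported 99.53% accuracy; the intuition behind it can be sketched with a much cruder stand-in. The observation is that a palm covers far more capacitive cells than a fingertip, so even a contact-area threshold separates the two classes on clean examples. All thresholds and frames below are invented for this toy illustration:

```python
def classify_touch(capacitive_image, activation_threshold=2, area_threshold=12):
    """Rough palm-vs-finger classification by contact area.

    Counts cells whose intensity exceeds activation_threshold and
    labels the touch a palm when that area is large. The thresholds
    are arbitrary toy values, not values from the paper.
    """
    area = sum(1 for row in capacitive_image
               for v in row if v > activation_threshold)
    return "palm" if area >= area_threshold else "finger"

fingertip = [[0, 0, 0, 0],
             [0, 5, 6, 0],
             [0, 6, 5, 0],
             [0, 0, 0, 0]]   # small, compact blob
palm = [[4, 5, 6, 5],
        [5, 7, 8, 6],
        [6, 8, 8, 6],
        [5, 6, 6, 4]]        # large blob covering the frame
print(classify_touch(fingertip), classify_touch(palm))  # finger palm
```

A learned classifier replaces the fixed thresholds with decision boundaries fitted to real capacitive data, which is what makes the approach robust across users and grips.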


Human Factors in Computing Systems | 2018

Fingers' Range and Comfortable Area for One-Handed Smartphone Interaction Beyond the Touchscreen

Huy Viet Le; Sven Mayer; Patrick Bader; Niels Henze

Previous research and recent smartphone development presented a wide range of input controls beyond the touchscreen. Fingerprint scanners, silent switches, and Back-of-Device (BoD) touch panels offer additional ways to perform input. However, with the increasing amount of input controls on the device, unintentional input or limited reachability can hinder interaction. In a one-handed scenario, we conducted a study to investigate the areas that can be reached without losing grip stability (comfortable area), and with stretched fingers (maximum range), using four different phone sizes. We describe the characteristics of the comfortable area and maximum range for different phone sizes and derive four design implications for the placement of input controls to support one-handed BoD and edge interaction. Among other findings, we show that the index and middle finger are the most suited fingers for BoD interaction and that the grip shifts towards the top edge with increasing phone size.


Nordic Conference on Human-Computer Interaction | 2018

How to communicate new input techniques.

Sven Mayer; Lars Lischke; Adrian Lanksweirt; Huy Viet Le; Niels Henze

Touchscreens are among the most ubiquitous input technologies. Commercial devices typically limit the input to 2D touch points. While a body of work enhances the interaction through finger recognition and diverse gestures, advanced input techniques have had a limited commercial impact. A major challenge is explaining new input techniques to users. In this paper, we investigate how to communicate novel input techniques for smartphones. Through interviews with 12 UX experts, we identified three potential approaches: Depiction uses an icon to visualize the input technique, Pop-up shows a modal dialog when the input technique is available, and Tutorial explains all available input techniques in a centralized way. To understand which approach users prefer, we conducted a study with 36 participants that introduced novel techniques using one of the communication methods. While Depiction was preferred, we found that the approach should be selected based on the complexity of the interaction, novelty to the user, and the device size.


User Interface Software and Technology | 2018

InfiniTouch: Finger-Aware Interaction on Fully Touch Sensitive Smartphones

Huy Viet Le; Sven Mayer; Niels Henze

Smartphones are the most successful mobile devices and offer intuitive interaction through touchscreens. Current devices treat all fingers equally and only sense touch contacts on the front of the device. In this paper, we present InfiniTouch, the first system that enables touch input on the whole device surface and identifies the fingers touching the device without external sensors while keeping the form factor of a standard smartphone. We first developed a prototype with capacitive sensors on the front, the back and on three sides. We then conducted a study to train a convolutional neural network that identifies fingers with an accuracy of 95.78% while estimating their position with a mean absolute error of 0.74cm. We demonstrate the usefulness of multiple use cases made possible with InfiniTouch, including finger-aware gestures and finger flexion state as an action modifier.
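InfiniTouch uses a convolutional neural network for both finger identification and position estimation (reporting a mean absolute error of 0.74 cm). As a much simpler illustration of how a position in centimetres can be read out of a capacitive frame at all, the sketch below computes the intensity-weighted centroid of the frame and converts grid coordinates to centimetres. The 4 mm sensor pitch and the toy frame are assumptions for this example, not specifications from the paper:

```python
def touch_position_cm(img, pitch_mm=4.0):
    """Estimate a touch position from a capacitive image.

    Computes the intensity-weighted centroid of the 2D intensity grid
    and converts grid coordinates to centimetres using an assumed
    4 mm spacing between sensor cells. A stand-in for illustration;
    InfiniTouch itself uses a trained CNN for this step.
    """
    total = sum(v for row in img for v in row)
    cx = sum(x * v for row in img for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(img) for v in row) / total
    return (cx * pitch_mm / 10.0, cy * pitch_mm / 10.0)  # (x_cm, y_cm)

frame = [
    [0, 0, 0, 0],
    [0, 2, 2, 0],
    [0, 2, 2, 0],
]
x_cm, y_cm = touch_position_cm(frame)
print(round(x_cm, 2), round(y_cm, 2))  # 0.6 0.6
```

A centroid works only for a single clean contact; identifying which finger produced each contact across front, back, and edges is what requires the learned model.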


Human-Computer Interaction with Mobile Devices and Services | 2018

Designing finger orientation input for mobile touchscreens

Sven Mayer; Huy Viet Le; Niels Henze

A large number of today's systems use interactive touch surfaces as the main input channel. Current devices reduce the richness of touch input to two-dimensional positions on the screen. A growing body of work develops methods that enrich touch input to provide additional degrees of freedom for touch interaction. In particular, previous work proposed to use the finger's orientation as additional input. To efficiently implement new input techniques that make use of the new input dimensions, we need to understand the limitations of the input. Therefore, we conducted a study to derive the ergonomic constraints for using finger orientation as additional input in a two-handed smartphone scenario. We show that for both hands, the comfort and the non-comfort zone depend on how the user interacts with a touch surface. For two-handed smartphone scenarios, the range is 33.3% larger than for tabletop scenarios. We further show that the phone orientation correlates with the finger orientation. Finger orientations that are harder to perform result in phone orientations where the screen does not directly face the user.

Collaboration

An overview of Huy Viet Le's closest collaborators.

Top Co-Authors

Niels Henze (University of Stuttgart)
Sven Mayer (University of Stuttgart)
Thomas Kosch (University of Stuttgart)