Hui Shyong Yeo
University of St Andrews
Publications
Featured research published by Hui Shyong Yeo.
Multimedia Tools and Applications | 2015
Hui Shyong Yeo; Byung-Gook Lee; Hyotaek Lim
Human-Computer Interaction (HCI) is ubiquitous in our daily lives, yet it is usually mediated by a physical controller such as a mouse, keyboard or touch screen. This hinders Natural User Interfaces (NUI), as such devices impose a strong barrier between the user and the computer. Various hand tracking systems are available on the market, but they are complex and expensive. In this paper, we present the design and development of a robust, marker-less hand/finger tracking and gesture recognition system using low-cost hardware. We propose a simple but efficient method that allows robust and fast hand tracking despite complex backgrounds and motion blur. Our system translates the detected hands or gestures into different functional inputs and interfaces with other applications via several methods, enabling intuitive HCI and interactive motion gaming. We also developed sample applications that utilize the inputs from the hand tracking system. Our results show that an intuitive HCI and motion gaming system can be achieved with minimal hardware requirements.
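A minimal sketch of one common low-cost pipeline for this kind of marker-less tracking, using OpenCV skin segmentation and convexity defects to count extended fingers. The color thresholds, camera index, and defect-depth cutoff below are illustrative assumptions, not the paper's calibrated values:

```python
# Sketch: marker-less hand tracking via skin segmentation + convexity defects.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # low-cost webcam (index is an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough skin-color range; would need per-environment calibration.
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)  # assume largest blob is the hand
        fingers = 0
        if len(hand) > 3:
            hull = cv2.convexHull(hand, returnPoints=False)
            defects = cv2.convexityDefects(hand, hull)
            if defects is not None:
                for i in range(defects.shape[0]):
                    _, _, _, depth = defects[i, 0]
                    if depth > 10000:  # depth is in 1/256-pixel units
                        fingers += 1   # deep defects = gaps between fingers
        cv2.drawContours(frame, [hand], -1, (0, 255, 0), 2)
        cv2.putText(frame, f"defects: {fingers}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```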
user interface software and technology | 2016
Hui Shyong Yeo; Gergely Flamich; Patrick Maurice Schrempf; David Harris-Birtill; Aaron J. Quigley
With RadarCat we present a small, versatile radar-based system for material and object classification, enabling new forms of everyday proximate interaction with digital devices. We demonstrate that we can train on and classify different types of materials and objects, which we can then recognize in real time. Based on established research designs, we report the results of three studies: first with 26 materials (including complex composite objects), next with 16 transparent materials (with different thicknesses and varying dyes), and finally with 10 body parts from 6 participants. Both leave-one-out and 10-fold cross-validation demonstrate that our approach of classifying radar signals with a random forest classifier is robust and accurate. We further demonstrate four working examples built on RadarCat, including a physical object dictionary, a painting and photo editing application, body shortcuts, and automatic refill. We conclude with a discussion of our results and limitations, and outline future directions.
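The classification stage named in the abstract (a random forest evaluated with leave-one-out and 10-fold cross-validation) can be sketched with scikit-learn as below. The real system extracts features from a radar sensor; here X and y are synthetic placeholders standing in for precomputed radar feature vectors and material labels:

```python
# Sketch of the classification stage only; features are assumed precomputed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(260, 64))    # placeholder: 260 frames x 64 radar features
y = np.repeat(np.arange(26), 10)  # placeholder: 26 material classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)

kfold_acc = cross_val_score(clf, X, y, cv=KFold(10, shuffle=True, random_state=0))
print(f"10-fold accuracy: {kfold_acc.mean():.3f}")

loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {loo_acc.mean():.3f}")
```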
Journal of Network and Computer Applications | 2014
Hui Shyong Yeo; Xiao Shen Phang; Hoon Jae Lee; Hyotaek Lim
Despite a high adoption rate among consumers, cloud storage services still suffer from many functional limitations and security issues. Recent studies propose the use of RAID-like techniques across multiple cloud storage services as an effective solution, but to the best of our knowledge, no prior work has applied this approach to resource-constrained mobile devices. In this paper, we propose a solution for mobile devices that unifies storage from multiple cloud providers into a centralized storage pool that is better in terms of availability, capacity, performance, reliability and security. First, we explore the feasibility of applying various storage technologies to address the aforementioned issues. Then, we validate our solution against single cloud storage by implementing a working prototype on a mobile device. Our results show that it can improve the usage of consumer cloud storage at zero monetary cost, while the minimal overhead incurred is compensated by the performance gained.
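A minimal sketch of the RAID-0-style striping at the heart of such a unified storage pool: split a file into fixed-size chunks and spread them round-robin across providers. The providers list of upload callables is hypothetical; a real client would wrap each vendor's SDK and would also add parity, replication or encryption for the reliability and security goals above:

```python
# Sketch: RAID-0-style striping of one file across multiple cloud providers.
from typing import Callable, List

CHUNK = 1024 * 1024  # 1 MiB stripe unit (illustrative)

def stripe_upload(data: bytes,
                  providers: List[Callable[[str, bytes], None]],
                  name: str) -> List[str]:
    """Split `data` into chunks and upload them round-robin across providers.
    Returns the chunk keys needed to reassemble the file later."""
    keys = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = f"{name}.part{i // CHUNK}"
        provider = providers[(i // CHUNK) % len(providers)]
        provider(key, chunk)  # each call stands in for one cloud PUT
        keys.append(key)
    return keys
```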
human factors in computing systems | 2016
Daniel Saakes; Hui Shyong Yeo; Seung-Tak Noh; Gyeol Han; Woontack Woo
Virtual fitting rooms equipped with magic mirrors let people evaluate fashion items without actually putting them on: the mirrors superimpose virtual clothes on the user's reflection. We contribute the Mirror Mirror system, which not only supports mixing and matching of existing fashion items, but also lets users design new items in front of the mirror and export the designs to fabric printers. While much of the related work deals with interactive cloth simulation on live user data, we focus on collaborative design activities and explore various ways of designing on the body with a mirror.
human computer interaction with mobile devices and services | 2016
Hui Shyong Yeo; Juyoung Lee; Andrea Bianchi; Aaron J. Quigley
The screen size of a smartwatch provides limited space for expressive multi-touch input, resulting in a markedly difficult and limited experience. We present WatchMI: Watch Movement Input, which enhances touch interaction on a smartwatch to support continuous pressure touch, twist and pan gestures, and their combinations. Our novel approach relies on software that analyzes, in real time, the data from the built-in Inertial Measurement Unit (IMU) in order to determine, with great accuracy and at different levels of granularity, the actions performed by the user, without requiring additional hardware or modification of the watch. We report the results of an evaluation of the system and demonstrate that the three proposed input interfaces are accurate, noise-resistant, easy to use and deployable on a variety of smartwatches. We then showcase the potential of this work with seven different applications, including map navigation, an alarm clock, a music player, pan-gesture recognition, text entry, a file explorer, and control of remote devices or a game character.
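A minimal sketch of the underlying idea: classify each IMU sample into a coarse WatchMI action from gyroscope and accelerometer readings alone. The axis conventions and thresholds are illustrative assumptions, not the paper's calibrated pipeline:

```python
# Sketch: coarse twist/pan detection from a single smartwatch IMU sample.
def classify_sample(accel, gyro, twist_thresh=1.5, tilt_thresh=0.3):
    """accel in g, gyro in rad/s; returns one coarse action label per sample."""
    ax, ay, _ = accel  # device tilt shows up on the x/y acceleration axes
    _, _, gz = gyro    # rotation rate about the screen normal (z axis)
    if abs(gz) > twist_thresh:
        return "twist_cw" if gz > 0 else "twist_ccw"
    if abs(ax) > tilt_thresh or abs(ay) > tilt_thresh:
        return "pan"   # sustained tilt maps to 2D panning
    return "idle"      # pressure sensing would track accel-magnitude spikes
```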
human factors in computing systems | 2017
Hui Shyong Yeo; Xiao-Shen Phang; Steven J. Castellucci; Per Ola Kristensson; Aaron J. Quigley
The popularity of mobile devices with large screens is making single-handed interaction difficult. We propose and evaluate a novel design point: a tilt-based text entry technique that supports single-handed usage. Our technique is based on the gesture keyboard (shape writing); however, instead of drawing gestures with a finger or stylus, users articulate a gesture by tilting the device. This can be especially useful when the user's other hand is encumbered or unavailable. We show that novice users achieve an entry rate of 15 words per minute (wpm) after minimal practice. A pilot longitudinal study reveals that a single participant achieved an entry rate of 32 wpm after approximately 90 minutes of practice. Our data indicate that tilt-based gesture keyboard entry enables walk-up use, provides a text entry rate suitable for occasional use, and can act as a promising alternative to single-handed typing in certain situations.
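A minimal sketch of the input stage, assuming pitch and roll angles from the device IMU: tilt displaces a cursor over a QWERTY grid, and the traced cursor path would then be matched against word gesture templates as in standard shape-writing decoders. The gain and key geometry are illustrative:

```python
# Sketch: mapping device tilt to a cursor over a QWERTY keyboard grid.
KEY_W, KEY_H = 40, 60  # key size in pixels (illustrative)
GAIN = 400             # pixels of cursor travel per radian of tilt

def tilt_to_cursor(pitch: float, roll: float, center=(200, 90)):
    """Map pitch/roll (radians) to an (x, y) cursor over the keyboard."""
    return center[0] + GAIN * roll, center[1] + GAIN * pitch

def cursor_to_key(x: float, y: float,
                  rows=("qwertyuiop", "asdfghjkl", "zxcvbnm")) -> str:
    """Quantize the cursor position to the nearest key on the grid."""
    row = min(max(int(y // KEY_H), 0), len(rows) - 1)
    col = min(max(int(x // KEY_W), 0), len(rows[row]) - 1)
    return rows[row][col]
```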
human factors in computing systems | 2018
Nicolas Villar; Daniel Cletheroe; Greg Saul; Christian Holz; Tim Regan; Oscar Salandin; Misha Sra; Hui Shyong Yeo; William Field; Haiyan Zhang
We present Project Zanzibar: a flexible mat that can locate, uniquely identify and communicate with tangible objects placed on its surface, as well as sense a user's touch and hover hand gestures. We describe the underlying technical contributions: efficient and localised Near Field Communication (NFC) over a large surface area; object tracking combining NFC signal strength and capacitive footprint detection; and manufacturing techniques for a rollable device form factor that enables portability while providing a sizable interaction area when unrolled. In addition, we detail design patterns for tangibles of varying complexity and interactive capabilities, including the ability to sense orientation on the mat, harvest power, provide additional input and output, stack, or extend sensing outside the bounds of the mat. Capabilities and interaction modalities are illustrated with self-generated applications. Finally, we report on the experience of professional game developers building novel physical/digital experiences using the platform.
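A highly simplified sketch of how the two sensing modalities might be fused for object tracking: a coarse position from whichever NFC antenna cell reads the tag most strongly, refined by the centroid of the capacitive footprint. The grid geometry, thresholds and arrays are assumptions for illustration, not Zanzibar's actual firmware:

```python
# Sketch: fusing coarse NFC signal strength with a finer capacitive image.
import numpy as np

def locate(nfc_rssi: np.ndarray, cap_image: np.ndarray, cell_mm: float = 25.0):
    """nfc_rssi: coarse per-antenna signal-strength grid; cap_image: finer
    capacitive touch image covering the same mat area. Returns (x, y) in mm."""
    # Coarse estimate: center of the antenna cell with the strongest tag read.
    cy, cx = np.unravel_index(np.argmax(nfc_rssi), nfc_rssi.shape)
    coarse = np.array([cx + 0.5, cy + 0.5]) * cell_mm
    # Fine estimate: centroid of the capacitive footprint above baseline.
    ys, xs = np.nonzero(cap_image > cap_image.mean())
    if len(xs) == 0:
        return coarse
    px_mm = cell_mm * nfc_rssi.shape[1] / cap_image.shape[1]
    fine = np.array([xs.mean() + 0.5, ys.mean() + 0.5]) * px_mm
    # Trust the capacitive centroid only when it agrees with the NFC cell.
    return fine if np.linalg.norm(fine - coarse) < cell_mm else coarse
```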
international symposium on wearable computers | 2017
Juyoung Lee; Hui Shyong Yeo; Murtaza Dhuliawala; Jedidiah Akano; Junichi Shimizu; Thad Starner; Aaron J. Quigley; Woontack Woo; Kai Kunze
We propose a sensing technique for detecting finger movements on the nose, using EOG sensors embedded in the frame of a pair of eyeglasses. Eyeglass wearers can use their fingers to exert different types of movement on the nose, such as flicking, pushing or rubbing. These subtle gestures can be used to control a wearable computer without drawing attention to the user in public. We present two user studies in which we test the recognition accuracy of these movements.
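A minimal sketch of the recognition step such studies evaluate: window the EOG channels, extract simple summary features, and cross-validate a classifier. The feature set and SVM choice are generic assumptions (the paper does not publish code), and the data here are synthetic placeholders:

```python
# Sketch: classifying windowed EOG signals into nose-gesture labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def features(window: np.ndarray) -> np.ndarray:
    """window: (samples, channels). Per-channel mean, std and range."""
    return np.concatenate([window.mean(0), window.std(0),
                           window.max(0) - window.min(0)])

rng = np.random.default_rng(0)
windows = rng.normal(size=(120, 100, 2))     # placeholder 2-channel EOG windows
y = np.repeat(["flick", "push", "rub"], 40)  # the three nose gestures
X = np.array([features(w) for w in windows])
print(f"5-fold accuracy: {cross_val_score(SVC(), X, y, cv=5).mean():.3f}")
```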
international conference on computer graphics and interactive techniques | 2015
Daniel Saakes; Hui Shyong Yeo; Seung-Tak Noh; Gyeol Han; Woontack Woo
When choosing what to wear, people often use mirrors to try on clothing items and see the fit on their body. What if we could not only evaluate items in front of the mirror, but also design new items and have them fabricated on the spot?
human computer interaction with mobile devices and services | 2017
Hui Shyong Yeo; Juyoung Lee; Andrea Bianchi; David Harris-Birtill; Aaron J. Quigley
SpeCam is a lightweight surface color and material sensing approach for mobile devices that uses only the front-facing camera and the display as a multi-spectral light source. We leverage the natural use of mobile devices (placing them face-down) to detect the material underneath and thereby infer the location or placement of the device. SpeCam can then be used to support discreet micro-interactions that avoid the numerous distractions users face daily with today's mobile devices. Our two-part study shows that SpeCam can (i) recognize colors in the HSB space that are 10 degrees apart near the three dominant colors, and 4 degrees apart otherwise, and (ii) recognize 30 types of surface material with 99% accuracy. These findings are further supported by a spectroscopy study. Finally, we suggest a series of applications based on simple mobile micro-interactions suitable for using the phone when it is placed face-down.
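A minimal sketch of the sensing loop implied by the abstract: flash a sequence of display colors and record the front camera's mean response under each, yielding one multi-spectral feature vector per surface that a classifier can then be trained on. set_screen_color() and capture_frame() are hypothetical platform hooks, and the color set is illustrative:

```python
# Sketch: using the display as a multi-spectral light source for sensing.
import numpy as np

COLORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255),
          (255, 255, 0), (0, 255, 255), (255, 0, 255), (255, 255, 255)]

def sense_surface(set_screen_color, capture_frame) -> np.ndarray:
    """Return the mean camera RGB response under each illumination color.
    Both arguments are hypothetical hooks into the phone's display/camera."""
    responses = []
    for color in COLORS:
        set_screen_color(color)      # display acts as the light source
        frame = capture_frame()      # (H, W, 3) image of the nearby surface
        responses.append(frame.reshape(-1, 3).mean(axis=0))
    return np.concatenate(responses)  # 21-D multi-spectral feature vector
```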