Chi Thanh Vi
University of Sussex
Publication
Featured research published by Chi Thanh Vi.
Human Factors in Computing Systems | 2012
Chi Thanh Vi; Sriram Subramanian
This paper examines the ability to detect a characteristic brain potential called the Error-Related Negativity (ERN) using off-the-shelf headsets and explores its applicability to HCI. The ERN is triggered when a user either makes a mistake or the application behaves differently from their expectation. We first show that the ERN can be seen in signals captured by EEG headsets such as the Emotiv™ during a typical multiple-choice reaction time (RT) task, the Flanker task. We then present a single-trial online ERN detection algorithm that pre-computes the coefficient matrix of a logistic regression classifier using calibration data from a multiple-choice RT task and then uses it to classify incoming signals from that task on a single trial of data. We apply it to an interactive selection task in which users selected an object under time pressure. Furthermore, the study was conducted in a typical office environment with ambient noise. Our results show that online single-trial ERN detection is possible using off-the-shelf headsets during tasks that are typical of interactive applications. We then design a Superflick experiment with an integrated module mimicking an ERN detector to evaluate the accuracy of detecting the ERN in the context of assisting users in interactive tasks. Based on these results, we discuss and present several HCI scenarios for the use of ERN.
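The two-phase scheme the abstract describes (offline weight fitting on calibration trials, then online scoring of each incoming epoch) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature extraction, learning rate, and thresholds are assumptions, and real EEG epochs would first be band-pass filtered and reduced to feature vectors.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_ern_weights(X, y, lr=0.1, steps=500):
    """Offline phase: pre-compute logistic-regression coefficients
    from calibration epochs X (n_trials x n_features) with labels y
    (1 = error trial, 0 = correct trial)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)  # gradient step on log-loss
    return w

def classify_trial(x, w, threshold=0.5):
    """Online phase: score one incoming single-trial feature vector."""
    p = sigmoid(x @ w)
    return p, p >= threshold  # probability of an ERN, and the decision
```

In use, `fit_ern_weights` would run once after the calibration task, while `classify_trial` runs on every epoch of the live interactive task, which is what makes single-trial online detection feasible.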
Human Factors in Computing Systems | 2016
Marianna Obrist; Carlos Velasco; Chi Thanh Vi; Nimesha Ranasinghe; Ali Israr; Adrian David Cheok; Charles Spence; P. Gopalakrishnakone
The senses we call upon when interacting with technology are very restricted. We mostly rely on vision and audition, increasingly harnessing touch, whilst taste and smell remain largely underexploited. In spite of our current knowledge about sensory systems and sensory devices, the biggest stumbling block for progress concerns the need for a deeper understanding of people's multisensory experiences in HCI. It is essential to determine what tactile, gustatory, and olfactory experiences we can design for, and how we can meaningfully stimulate such experiences when interacting with technology. Importantly, we need to determine the contribution of the different senses along with their interactions in order to design more effective and engaging digital multisensory experiences. Finally, it is vital to understand what limitations come into play when users need to monitor more than one sense at a time. The aim of this workshop is to deepen and expand the discussion on touch, taste, and smell within the CHI community and promote the relevance of multisensory experience design and research in HCI.
IEEE MultiMedia | 2017
Marianna Obrist; Elia Gatti; Emanuela Maggioni; Chi Thanh Vi; Carlos Velasco
For decades, the use of vision and audition for interaction dominated the field of human-computer interaction (HCI), despite the fact that nature has provided many more senses for perceiving and interacting with the world. Recently, HCI researchers have started trying to capitalize on touch, taste, and smell when designing interactive tasks, especially in gaming, multimedia, and art environments. Here, the authors provide a snapshot of their research into touch, taste, and smell, carried out at the Sussex Computer Human Interaction (SCHI) Lab at the University of Sussex in Brighton, UK.
International Conference on Human-Computer Interaction | 2015
Camille Jeunet; Chi Thanh Vi; Daniel Spelmezan; Bernard N'Kaoua; Fabien Lotte; Sriram Subramanian
Motor-Imagery based Brain-Computer Interfaces (MI-BCIs) allow users to interact with computers by imagining limb movements. MI-BCIs are very promising for a wide range of applications as they offer a new, non-time-locked modality of control. However, most MI-BCIs involve visual feedback to inform the user about the system's decisions, which makes them difficult to use when integrated with visual interactive tasks. This paper presents our design and evaluation of a tactile feedback glove for MI-BCIs, which provides continuously updated tactile feedback. We first determined the best parameters for this tactile feedback and then tested it in a multitasking environment: while performing the MI tasks, users were asked to count distracters. Our results suggest that, compared to an equivalent visual feedback, the use of tactile feedback leads to higher recognition accuracy on the MI-BCI tasks and fewer errors in counting distracters.
Human Factors in Computing Systems | 2017
Patricia Ivette Cornelio Martinez; Silvana De Pirro; Chi Thanh Vi; Sriram Subramanian
Touchless interfaces allow users to view, control and manipulate digital content without physically touching an interface. They are being explored in a wide range of application scenarios, from medical surgery to car dashboard controllers. One aspect of touchless interaction that has not been explored to date is the Sense of Agency (SoA). The SoA refers to the subjective experience of voluntary control over actions in the external world. In this paper, we investigated the SoA in touchless systems using the intentional binding paradigm. We first compared touchless systems with physical interactions and then augmented the interaction with different types of haptic feedback to explore how different outcome modalities influence intentional binding. Our experiments demonstrate that an intentional binding effect is observed in both physical and touchless interactions, with no statistically significant difference between them. Additionally, we found that haptic and auditory feedback help to increase the SoA compared with visual feedback in touchless interfaces. We discuss these findings and identify design opportunities that take agency into consideration.
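The intentional binding paradigm quantifies agency through time perception: when an action feels voluntary, the interval between the action and its outcome is judged as shorter than it really is. A minimal sketch of that analysis, under the assumption that participants give verbal interval estimates in milliseconds (the function names and the 250 ms interval are illustrative, not the paper's protocol):

```python
import statistics

def judgement_error(estimates_ms, actual_ms):
    # Mean over-/under-estimation of the action-outcome interval.
    return statistics.mean(e - actual_ms for e in estimates_ms)

def binding_effect(voluntary_estimates, baseline_estimates, actual_ms=250):
    # A stronger sense of agency compresses the perceived interval in the
    # voluntary condition, so the binding effect (baseline error minus
    # voluntary error) grows more positive.
    return (judgement_error(baseline_estimates, actual_ms)
            - judgement_error(voluntary_estimates, actual_ms))
```

Comparing this score across outcome modalities (visual, auditory, haptic) is one way the influence of feedback type on the SoA can be expressed as a single number per condition.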
Human Factors in Computing Systems | 2014
Chi Thanh Vi; Izdihar Jamil; David Coyle; Sriram Subramanian
Error-Related Negativity (ERN) is triggered when a user either makes a mistake or the application behaves differently from their expectation. It can also appear while observing another user making a mistake. This paper investigates the ERN in collaborative settings, where observing another user (the executer) perform a task is typical, and then explores its applicability to HCI. We first show that the ERN can be detected in signals captured by commodity EEG headsets such as an Emotiv headset when observing another person perform a typical multiple-choice reaction time task. We then investigate anticipation effects by detecting the ERN in the time interval when an executer is reaching towards an answer. We show that we can detect this signal with both a clinical EEG device and with an Emotiv headset. Our results show that online single-trial detection is possible using both headsets during tasks that are typical of collaborative interactive applications. However, there is a trade-off between detection speed and the quality/price of the headsets. Based on these results, we discuss and present several HCI scenarios for the use of ERN in observation tasks and collaborative settings.
International Conference on Computer Graphics and Interactive Techniques | 2013
Yoshifumi Kitamura; Chi Thanh Vi; Gengdai Liu; Kazuki Takashima; Yuichi Itoh; Sriram Subramanian
We propose D-FLIP, a novel algorithm that dynamically displays a set of digital photos using different principles for organizing them. A combination of requirements for photo arrangements can be flexibly replaced or added through interaction, and the results are continuously and dynamically displayed. D-FLIP uses a combinatorial optimization and emergent computation approach, in which parameters such as location, size, and photo angle are treated as functions of time, dynamically determined by local relationships among adjacent photos at every time instance. Consequently, the global layout of all photos varies automatically.
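The emergent-computation idea above, global layout arising from purely local, per-time-step updates, can be sketched as a simple force integration. This is an illustrative reduction, not D-FLIP itself: the anchor targets (standing in for an arrangement principle such as a grid or timeline), the force constants, and the overlap radius are all assumed, and only photo position is modelled, not size or angle.

```python
import numpy as np

def layout_step(positions, anchors, dt=0.1, k_anchor=1.0, k_repel=0.5, min_dist=1.0):
    """One time step of a dynamic photo layout.
    positions, anchors: (n, 2) arrays of photo centres and the targets
    suggested by the current arrangement principle."""
    n = len(positions)
    # Global arrangement emerges from two local rules per photo:
    forces = k_anchor * (anchors - positions)  # 1) pull toward the principle's target
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = positions[i] - positions[j]
            dist = np.linalg.norm(d)
            if 0 < dist < min_dist:
                # 2) push apart photos that are closer than min_dist
                forces[i] += k_repel * (d / dist) * (min_dist - dist)
    return positions + dt * forces
```

Calling `layout_step` in a render loop gives the continuous motion described above; swapping the `anchors` array mid-run is the analogue of replacing one arrangement requirement with another.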
Scientific Reports | 2018
Chi Thanh Vi; Marianna Obrist
Taking risks is part of everyday life. Some people actively pursue risky activities (e.g., jumping out of a plane), while others avoid any risk (e.g., people with anxiety disorders). Paradoxically, risk-taking is a primitive behaviour that may lead to a happier life by offering a sense of excitement through self-actualization. Here, we demonstrate for the first time that sour, amongst the five basic tastes (sweet, bitter, sour, salty, and umami), promotes risk-taking. Based on a series of three experiments, we show that sour has the potential to modulate risk-taking behaviour across two countries (UK and Vietnam), and across individual differences in risk-taking personality and styles of thinking (analytic versus intuitive). Modulating risk-taking can improve everyday life for a wide range of people.
Proceedings of the 3rd International Workshop on Multisensory Approaches to Human-Food Interaction - MHFI'18 | 2018
Chi Thanh Vi; Daniel Arthur; Marianna Obrist
When we are babies we put anything and everything in our mouths, from Lego to crayons. As we grow older we increasingly rely on our other senses to explore our surroundings and the objects in the world. When interacting with technology, we mainly rely on our senses of vision, touch, and hearing, and the sense of taste becomes reduced to the context of eating and food experiences. In this paper, we build on initial efforts to enhance gaming experiences through gustatory stimuli. We introduce TasteBud, a gustatory gaming interface that we integrated with the classic Minesweeper game. We first describe the details of the hardware and software design for the taste stimulation and then present initial findings from a user study. We discuss how taste has the potential to transform gaming experiences by systematically exploiting the experiences that individual gustatory stimuli (e.g., sweet, bitter, sour) can elicit.
Intelligent User Interfaces | 2016
Bo Wan; Chi Thanh Vi; Sriram Subramanian; Diego Martinez Plasencia
Transcranial Direct Current Stimulation (tDCS) is a non-invasive type of neural stimulation known to modulate cortical excitability, with positive effects on working memory and attention. The availability of low-cost, consumer-grade tDCS devices has democratized access to this technology, allowing us to explore its applicability to HCI. We review the relevant literature and identify potential avenues for exploration of tDCS in the context of enhancing interactivity in HCI.