Publications

Featured research published by Ronnie Taib.


human factors in computing systems | 2007

Galvanic skin response (GSR) as an index of cognitive load

Yu Shi; Natalie Ruiz; Ronnie Taib; Eric H. C. Choi; Fang Chen

Multimodal user interfaces (MMUI) allow users to control computers using speech and gesture, and have the potential to minimise users' experienced cognitive load, especially when performing complex tasks. In this paper, we describe our attempt to use a physiological measure, namely Galvanic Skin Response (GSR), to objectively evaluate users' stress and arousal levels while using unimodal and multimodal versions of the same interface. Preliminary results show that users' GSR readings increase significantly when the task's cognitive load level increases. Moreover, users' GSR readings are found to be lower when using a multimodal interface rather than a unimodal one. Cross-examination of the GSR data with multimodal data annotation showed promising results in explaining the peaks in the GSR data, which were found to correlate with sub-task user events. This interesting result verifies that GSR can serve as an objective indicator of user cognitive load level in real time, with very fine granularity.
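The peak-to-event cross-examination described above can be pictured with a small sketch. This is not the authors' analysis pipeline: the peak criterion, the rise threshold, the matching window and the toy data are all hypothetical.

```python
import numpy as np

def gsr_peaks(signal, min_rise=0.05):
    """Indices of local maxima rising more than `min_rise`
    (microsiemens, an assumed unit) above a neighbour."""
    s = np.asarray(signal, dtype=float)
    return [i for i in range(1, len(s) - 1)
            if s[i] > s[i - 1] and s[i] >= s[i + 1]
            and s[i] - min(s[i - 1], s[i + 1]) > min_rise]

def match_events(peak_times, event_times, window=2.0):
    """Pair each GSR peak time with the closest logged sub-task
    event within `window` seconds, or None if nothing is close."""
    pairs = []
    for t in peak_times:
        diffs = [abs(t - e) for e in event_times]
        j = min(range(len(diffs)), key=diffs.__getitem__)
        pairs.append((t, event_times[j] if diffs[j] <= window else None))
    return pairs

# Toy signal: slow ramp sampled at 10 Hz with one bump at t = 3 s
fs = 10.0
t = np.arange(0, 6, 1 / fs)
sig = 0.01 * t
sig[30] += 0.2                      # artificial skin-conductance peak
peaks = gsr_peaks(sig)              # -> [30]
pairs = match_events([p / fs for p in peaks], [2.9, 5.0])
print(pairs)                        # the 3.0 s peak pairs with the 2.9 s event
```

Matching each conductance peak against the nearest annotated user event is one simple way to test, offline, whether the peaks track sub-task activity.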


australasian computer-human interaction conference | 2006

Examining the redundancy of multimodal input

Natalie Ruiz; Ronnie Taib; Fang Chen

Speech and gesture modalities can allow users to interact with complex applications in novel ways. Often users will adapt their multimodal behaviour to cope with increasing levels of domain complexity. These strategies can change how multimodal constructions are planned and executed by users. Within the framework of Baddeley's Theory of Working Memory, we present some of the results from an empirical study conducted with users of a multimodal interface under varying levels of cognitive load. In particular, we examine how multimodal behavioural features are sensitive to cognitive load variations. We report significant decreases in multimodal redundancy (33.6%) and trends of increased multimodal complementarity as cognitive load increases.


international conference on multimodal interfaces | 2010

Cognitive skills learning: pen input patterns in computer-based athlete training

Natalie Ruiz; Qian Qian Feng; Ronnie Taib; Tara Handke; Fang Chen

In this paper, we describe a longitudinal user study with athletes using a cognitive training tool, equipped with an interactive pen interface, and think-aloud protocols. The aim is to verify whether cognitive load can be inferred directly from changes in geometric and temporal features of the pen trajectories. We compare trajectories across cognitive load levels and overall Pre and Post training tests. The results show trajectory durations and lengths decrease while speeds increase, all significantly, as cognitive load increases. These changes are attributed to mechanisms for dealing with high cognitive load in working memory, with minimal rehearsal. With more expertise, trajectory durations further decrease and speeds further increase, which is attributed in part to cognitive skill acquisition and to schema development, both in extraneous and intrinsic networks, between Pre and Post tests. As such, these pen trajectory features offer insight into implicit communicative changes related to load fluctuations.
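As an illustration of the kind of geometric and temporal pen features compared above (duration, length, speed), here is a minimal sketch; the sampling format and function are assumptions for illustration, not the study's actual feature extraction.

```python
import math

def trajectory_features(points):
    """Duration, path length and mean speed of one pen trajectory,
    given (t, x, y) samples; units follow the input (e.g. s, px)."""
    duration = points[-1][0] - points[0][0]
    length = sum(math.hypot(x2 - x1, y2 - y1)
                 for (_, x1, y1), (_, x2, y2) in zip(points, points[1:]))
    speed = length / duration if duration > 0 else 0.0
    return duration, length, speed

# A 1-second stroke moving along two 3-4-5 segments (10 units total)
stroke = [(0.0, 0, 0), (0.5, 3, 4), (1.0, 6, 8)]
print(trajectory_features(stroke))   # (1.0, 10.0, 10.0)
```

Computing these per trajectory and comparing their distributions across load levels (and Pre vs Post tests) mirrors the kind of analysis the abstract reports.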


intelligent user interfaces | 2006

Multimodal interaction styles for hypermedia adaptation

Ronnie Taib; Natalie Ruiz

We explore the concept of interaction styles used to navigate hypermedia systems. A demonstrator was built to conduct a user study with the objective of detecting whether any interaction pattern exists in relation to input modality choices. Our lightweight server-side web demonstrator adapts output modalities as a function of the input received from the user. The interface and content displayed are built from predefined presentation schemes that attempt to optimise the user's experience and the website's functionality. The results suggest that some level of entrenchment does occur with respect to modality choices, with 45% of participants deviating from their preferred pattern in one or fewer interaction turns.


human factors in computing systems | 2013

Assessing recovery from cognitive load through pen input

Ling Luo; Ronnie Taib

This paper explores the impact of rest duration on recovery from cognitively demanding tasks, focusing on pen input features as indicators of load and recovery. We designed a user experiment involving a cognitively loading task with three levels of difficulty, followed by a controlled rest period, and then a fixed-difficulty task. Participants answered the tasks by writing alphabet letters on a tablet monitor. Subjective ratings validated the increasing difficulty of the first task (Friedman ANOVA, p < 0.05), and also indicated that the rest duration had a significant impact on the perceived difficulty of the subsequent task (p = 0.048). In terms of pen features, the height of the written characters decreased significantly when the rest duration was reduced (ANOVA, p < 0.05), and the pen pressure decreased significantly as the task difficulty increased (p = 0.009). These encouraging results suggest adding a crucial time factor to cognitive load theory, and offer benefits to HCI practitioners through better control of content and information pace.
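A Friedman test of the kind reported above can be run with `scipy.stats.friedmanchisquare`. The ratings below are invented purely to show the call shape; they are not the study's data.

```python
from scipy.stats import friedmanchisquare

# Hypothetical subjective difficulty ratings from six participants
# for the three difficulty levels of the first task (not real data)
low    = [2, 3, 2, 1, 3, 2]
medium = [4, 5, 4, 3, 5, 4]
high   = [7, 8, 6, 7, 8, 7]

# Friedman is the non-parametric repeated-measures test: it ranks
# each participant's three ratings, then tests the rank sums.
stat, p = friedmanchisquare(low, medium, high)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```

Because every hypothetical participant rates low < medium < high, the ranks are maximally consistent and the test comes out significant.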


analysis, design, and evaluation of human-machine systems | 2007

Multimodal Human-Machine Interface and User Cognitive Load Measurement

Yu Shi; Ronnie Taib; Natalie Ruiz; Eric H. C. Choi; Fang Chen

A multimodal user interface (MMUI) is an emerging technology that aims to provide a more intuitive and natural way for people to operate and control a computer or machine. An MMUI allows users to control a computer using various input modalities, including speech, touch, gestures and handwriting. It has the potential to minimise users' cognitive load when performing complex tasks. In this paper we present our work in building an MMUI research platform for intelligent transport system applications, and our attempt to evaluate a user's cognitive load based on analysis of his or her multimodal behaviours and physiological measurements.


international conference on multimodal interfaces | 2014

Synchronising Physiological and Behavioural Sensors in a Driving Simulator

Ronnie Taib; Benjamin Itzstein; Kun Yu

Accurate and noise robust multimodal activity and mental state monitoring can be achieved by combining physiological, behavioural and environmental signals. This is especially promising in assistive driving technologies, because vehicles now ship with sensors ranging from wheel and pedal activity, to voice and eye tracking. In practice, however, multimodal user studies are confronted with challenging data collection and synchronisation issues, due to the diversity of sensing, acquisition and storage systems. Referencing current research on cognitive load measurement in a driving simulator, this paper describes the steps we take to consistently collect and synchronise signals, using the Orbit Measurement Library (OML) framework, combined with a multimodal version of a cinema clapperboard. The resulting data is automatically stored in a networked database, in a structured format, including metadata about the data and experiment. Moreover, fine-grained synchronisation between all signals is provided without additional hardware, and clock drift can be corrected post-hoc.
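The post-hoc clock-drift correction mentioned above can be pictured as a linear mapping pinned down by two sync events observed on both clocks (such as two clapperboard claps). This sketch illustrates the general technique under that assumption; it is not the OML implementation.

```python
def drift_correct(sensor_times, sync_sensor, sync_ref):
    """Map a sensor's timestamps onto the reference clock using two
    sync events seen by both, assuming linear clock drift between them."""
    (s0, s1), (r0, r1) = sync_sensor, sync_ref
    rate = (r1 - r0) / (s1 - s0)   # reference seconds per sensor second
    return [r0 + (ts - s0) * rate for ts in sensor_times]

# Sensor clock starts 2 s late and runs ~1% fast vs. the reference
corrected = drift_correct([10.0, 60.5, 111.0],
                          sync_sensor=(10.0, 111.0),
                          sync_ref=(12.0, 112.0))
print(corrected)   # first and last samples map back onto the sync events
```

With only two anchor points the correction is linear; more claps would allow a least-squares fit and a check that the drift really is linear.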


international conference on intelligent transportation systems | 2006

Multimodal Human-Computer Interfaces for Incident Handling in Metropolitan Transport Management Centre

Yu Shi; Ronnie Taib; Eric H. C. Choi; Fang Chen

Efficient road traffic incident management in metropolitan areas is crucial for smooth traffic flow and for the mobility and safety of the community. Traffic incident management requires fast and accurate collection and retrieval of critical data, such as incident conditions and contact information for the intervention crew, public safety organisations and other resources. Access to critical data by traffic control operators can be facilitated through various human-computer interfaces. This paper describes the judicious introduction of a multimodal interaction paradigm to the user interfaces for incident handling in a metropolitan transport management centre. Prototypes supporting speech and gestural interaction have been built based on a user-centred design methodology, and their evaluations have been conducted through user studies. The presented innovative user interfaces provide traffic control operators with intuitive, cognitively efficient ways to record traffic incident conditions, facilitate fast retrieval of contact details, and support time-critical incident handling.


international conference on control, automation, robotics and vision | 2006

GestureCam: A Smart Camera for Gesture Recognition and Gesture-Controlled Web Navigation

Yu Shi; Ronnie Taib; Serge Lichman


international conference on multimodal interfaces | 2005

A study of manual gesture-based selection for the PEMMI multimodal transport management interface

Fang Chen; Eric H. C. Choi; Julien Epps; Serge Lichman; Natalie Ruiz; Yu Shi; Ronnie Taib; Mike Wu

Collaboration


Dive into Ronnie Taib's collaborations.

Top Co-Authors

Julien Epps

University of New South Wales

Kun Yu

University of New South Wales

Qian Qian Feng

University of New South Wales