Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Dingyun Zhu is active.

Publication


Featured research published by Dingyun Zhu.


Human Factors in Computing Systems | 2011

Exploring camera viewpoint control models for a multi-tasking setting in teleoperation

Dingyun Zhu; Tamas Gedeon; Ken Taylor

Control of the camera viewpoint plays a vital role in many teleoperation activities, as watching live video streams is still the fundamental way for operators to obtain situational awareness of remote environments. Motivated by a real-world industrial setting in mining teleoperation, we explore several possible solutions to a common multi-tasking situation where an operator is required to control a robot and simultaneously operate a remote camera. Conventional control interfaces are predominantly used in such teleoperation settings, but they can overload an operator's hand-operation capability and require frequent attention switches, which can decrease productivity. We report on an empirical user study in a model multi-tasking teleoperation setting where the user has a main task that requires their attention. We compare three different camera viewpoint control models: (1) dual manual control, (2) natural interaction (combining eye gaze and head motion) and (3) autonomous tracking. The results indicate the advantages of the natural interaction model, while the manual control model performed the worst.


International Conference on Neural Information Processing | 2009

A Hybrid Fuzzy Approach for Human Eye Gaze Pattern Recognition

Dingyun Zhu; B. Sumudu U. Mendis; Tamas Gedeon; Akshay Asthana; Roland Goecke

Face perception and text reading are two of the most developed visual perceptual skills in humans. Understanding which features of the respective visual patterns distinguish them from each other is very important for investigating the correlation between human visual behavior and cognitive processes. We introduce a hybrid approach, combining fuzzy signatures with a Levenberg-Marquardt optimization method, for recognizing the different eye gaze patterns produced when a human views faces or text documents. Our experimental results show the effectiveness of this method in a real-world case. A further comparison with Support Vector Machines (SVM) also demonstrates that, by defining the classification process in a similar way to SVM, our hybrid approach provides comparable performance but with a more interpretable form of the learned structure.


Computer Games | 2008

Eye gaze assistance for a game-like interactive task

Tamas Gedeon; Dingyun Zhu; B. Sumudu U. Mendis

Human beings communicate in abbreviated ways dependent on prior interactions and shared knowledge. Furthermore, humans share information about intentions and future actions using eye gaze. Among primates, humans are unique in the whiteness of the sclera and the amount of sclera shown, which is essential for communication via the interpretation of eye gaze. This paper extends our previous work on a game-like interactive task through the use of computerised recognition of eye gaze and fuzzy signature-based interpretation of possible intentions. This extends our notion of robot instinctive behaviour to intentional behaviour. We show a good improvement in speed of response from a simple use of eye gaze information. We also show a significant and more sophisticated use of the eye gaze information, which eliminates the need for control actions on the user's part. We also make a suggestion as to returning visibility of control to the user in these cases.


Australasian Computer-Human Interaction Conference | 2010

Head or gaze?: controlling remote camera for hands-busy tasks in teleoperation: a comparison

Dingyun Zhu; Tamas Gedeon; Ken Taylor

Head motion and eye gaze are general modes of natural human interaction. Recent computer-vision-based head tracking and eye tracking technologies have expanded the possibilities for designing and developing more natural and intuitive user interfaces for a wide range of applications. In this work, we focus on common hands-busy situations in teleoperation activities, where operators often have to control multiple devices simultaneously by hand in order to accomplish operational tasks. This overloads an operator's hand-control ability and also reduces productivity. We present an empirical user study comparing head motion and eye gaze as input modalities for remote camera control while a user carries out a hands-busy task. Both objective and subjective measures were used in the study. Based on the results, we demonstrate the advantages of using gaze for remote camera control in such hands-busy settings.


Agent and Multi-Agent Systems: Technologies and Applications | 2008

Fuzzy logic for cooperative robot communication

Dingyun Zhu; Tamas Gedeon

This paper proposes a new approach that applies a recently developed fuzzy technique, the fuzzy signature, to model communication between cooperative intelligent robots. The fuzzy signature is regarded not only as one of the key solutions to the rule explosion in traditional fuzzy inference systems, but also as an effective approach for modeling complex problems or systems with a hierarchical structure. Apart from the application of fuzzy signatures, another modeling structure, pattern matching with possibility calculation, is designed for the further intentional inference of cooperative robot communication. By combining these two techniques, a codebook for intelligent robot decision making has been developed, along with its implementation, a Cooperative Robot Communication Simulator.
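The core idea of a fuzzy signature, as used in the abstract above, is a hierarchy whose leaves hold membership values and whose internal nodes aggregate their children. The sketch below is a minimal illustration of that idea only; the node labels, aggregation choices and example values are invented for illustration and are not the structures used in the paper.

```python
# Minimal sketch of a fuzzy signature: a tree whose leaves are membership
# values in [0, 1] and whose internal nodes combine their children with an
# aggregation operator (min, max, or arithmetic mean here).

def aggregate(node):
    """Recursively reduce a fuzzy signature tree to a single membership value."""
    if isinstance(node, (int, float)):   # leaf: a raw membership value
        return node
    op, children = node                  # internal node: (aggregation, children)
    values = [aggregate(c) for c in children]
    if op == "min":                      # conjunctive aggregation
        return min(values)
    if op == "max":                      # disjunctive aggregation
        return max(values)
    return sum(values) / len(values)     # default: arithmetic mean

# Toy signature for a hypothetical "ready to cooperate" robot state:
signature = ("min", [
    ("mean", [0.5, 0.75]),   # e.g. a sensor-confidence sub-branch (invented)
    ("max", [0.25, 0.75]),   # e.g. a task-relevance sub-branch (invented)
])
print(aggregate(signature))  # min(mean(0.5, 0.75), max(0.25, 0.75)) -> 0.625
```

Because each sub-branch is evaluated independently and then aggregated, a deep signature avoids enumerating every combination of inputs as explicit rules, which is the rule-explosion advantage the abstract refers to.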


Human Factors in Computing Systems | 2010

Natural interaction enhanced remote camera control for teleoperation

Dingyun Zhu; Tamas Gedeon; Ken Taylor

In teleoperation, operators usually have to control multiple devices simultaneously, which requires frequent hand switches between different controllers. We designed and implemented two prototypes, one by applying head motion and the other by integrating eye gaze as intrinsic elements of teleoperation for remote camera control in a multi-control setting. We report a user study of a modeled multi-control experiment that compares the performance of head tracking control, eye tracking control and traditional joystick control. The results provide clear evidence that eye tracking control significantly outperforms joystick and head tracking control in both objective measures and subjective measures.


International Conference on Human-Computer Interaction | 2013

Wands Are Magic: A Comparison of Devices Used in 3D Pointing Interfaces

Martin Henschke; Tamas Gedeon; Richard Jones; Sabrina Caldwell; Dingyun Zhu

In our pilot study with 12 participants, we compared three interfaces, a 3D mouse, a glove and a wand, in a naturalistic 3D environment. The latter two were controlled by the same absolute pointing method and so are essentially identical except for the selection mechanism: grasp action versus button. We found that the mouse performed worst in terms of both time and errors, which is reasonable for a relative pointing device in an absolute pointing setting, with the wand both outperforming the glove and being favored by users over it. We conclude that the presence of a held object in a pointing interface changes the user's perception of the system and magically leads to a different experience.


International Conference on Computer Graphics and Interactive Techniques | 2013

MobileHelper: remote guiding using smart mobile devices, hand gestures and augmented reality

Kostia Robert; Dingyun Zhu; Weidong Huang; Leila Alem; Tamas Gedeon

Due to the rapid development of wearable computing, gestural interaction and augmented reality in recent years, remote collaboration has become a fast-growing field with many advanced designs and implementations for a wide range of applications. Most existing remote guiding or collaboration solutions still rely on specifically designed hardware systems on both the helper and worker sides, with limitations on usage, mobility, flexibility and portability. The widespread deployment of smart mobile devices such as smartphones and tablets in the past few years offers numerous opportunities for migrating conventional remote guiding solutions to such powerful platforms, with the possibility of overcoming many existing issues and limits. In this paper, we introduce MobileHelper, a remote guiding prototype developed on a tablet device that allows helpers to use hand gestures to guide a remote worker in various physical tasks. The interface on the worker side integrates a near-eye display to support mobility and real-time representations of the helper's hand gestures using augmented reality technologies. We present the design and features of MobileHelper along with a detailed description of the prototype system's implementation. Stable system performance is also reported from our preliminary internal test runs.


International Conference on E-Learning and Games | 2012

Developing a situated virtual reality simulation for telerobotic control and training

Tamas Gedeon; Dingyun Zhu; Stephane Bersot

In this paper, we present the development of a situated virtual reality simulation for control and training in a telerobotic mining setting. The original research scenario is derived from a real-world rock-breaking task in mining teleoperation. To provide better situational awareness and a better user control model for this application, we simulate the entire setting in a 3D Virtual Reality (VR) environment. Users are therefore able to obtain more information (e.g. depth information) and feedback from the remote environment in this simulation than when working only with real video streams from the remote camera(s). In addition, the concept of natural interaction has been applied to build more intuitive user control interfaces than conventional manual modes. Both human eye gaze and head movements have been used to develop natural and interactive viewpoint control models for users to complete the teleoperation task. With such a 3D simulation, training in the complex telerobotic control process can be carried out effectively, with the capability of changing visual and control conditions easily. A user study has also been conducted as a preliminary evaluation of the simulation. Experimental participants provided encouraging feedback regarding task learning, which suggests the effectiveness of using the simulation.


International Conference on Neural Information Processing | 2011

Document classification on relevance: a study on eye gaze patterns for reading

Daniel Fahey; Tamas Gedeon; Dingyun Zhu

This paper presents a study investigating the connection between the way people read and the way they understand content. In the experiment, participants read information on selected documents while an eye-tracking system recorded their eye movements; they were then asked to answer questions and complete tasks on the information they had read. To investigate effective analysis approaches, both statistical methods and Artificial Neural Networks (ANN) were applied to analyse the collected gaze data in terms of several defined measures of text relevance. The statistical analysis did not show any significant correlations between those measures and the relevance of the text. However, good classification results were obtained using an Artificial Neural Network. This suggests that advanced learning approaches may provide more insightful differentiation than simple statistical methods, particularly in analysing eye gaze reading patterns.
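The classification setup described above, learning a relevance label from gaze-derived measures, can be sketched with a deliberately tiny model. Everything here is an assumption for illustration: the two feature names, the toy data and the single logistic "neuron" are invented, and the study's actual ANN and gaze measures were certainly richer.

```python
# Illustrative sketch only (not the paper's model): a single logistic-
# regression neuron trained by stochastic gradient descent to label
# documents as relevant (1) or irrelevant (0) from two invented,
# normalised gaze features.

import math

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit weights and bias by SGD on the logistic log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                      # gradient of log-loss w.r.t. z
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Threshold the linear score at zero to get a 0/1 label."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy data: [fixation-duration feature, regression-count feature] (invented)
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]                 # 1 = relevant, 0 = irrelevant
w, b = train(X, y)
print([predict(w, b, x) for x in X])   # separable toy set -> [1, 1, 0, 0]
```

A real ANN adds hidden layers between the features and the output neuron, which is what lets it pick up the non-linear structure that the paper's simple statistical measures missed.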

Collaboration


Dive into Dingyun Zhu's collaborations.

Top Co-Authors

Tamas Gedeon, Australian National University
Ken Taylor, Commonwealth Scientific and Industrial Research Organisation
B. Sumudu U. Mendis, Australian National University
Akshay Asthana, Australian National University
Aleck C. H. Lin, Australian National University
Amir Riaz, Australian National University
Daniel Fahey, Australian National University
Huajie Wu, Australian National University
Kostia Robert, Commonwealth Scientific and Industrial Research Organisation
Lachlan Paget, Australian National University