Winyu Chinthammit
University of Tasmania
Publications
Featured research published by Winyu Chinthammit.
Australasian Computer-Human Interaction Conference | 2011
Simon Stannus; Daniel Rolf; Arko Lucieer; Winyu Chinthammit
Geographical Information Systems (GISs) are playing an increasingly important role in society. Not only have the capabilities of GIS packages expanded, but their reach has been widened by the popularisation of software such as Google Earth, which has added an extra dimension to navigation while still using the same interaction method. We argue that traditional GIS interfaces limit productivity by not being sufficiently intuitive for new users and by causing extra delay through unnecessary modality. As a step towards solving these problems, we propose an ideal gesture-based system and present the results of a mostly qualitative user experiment on our current prototype for gestural navigation in Google Earth, which back up our assumptions about the importance of gestural interactions being both bimanual and simultaneous.
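To make the bimanual, simultaneous navigation idea concrete, here is a minimal sketch (not the authors' prototype; the function names and camera model are assumptions) of how two tracked hand positions could drive pan, zoom, and rotation in a single per-frame update, with no mode switching:

```python
import numpy as np

def bimanual_camera_update(prev_l, prev_r, cur_l, cur_r, cam):
    """Update a map camera from two tracked hand positions.

    prev_*/cur_* are 3-element numpy arrays (hand positions in metres);
    cam is a dict with 'center' (3-vector), 'range' (zoom, metres) and
    'heading' (radians). All names here are illustrative, not from the paper.
    """
    prev_mid = (prev_l + prev_r) / 2.0
    cur_mid = (cur_l + cur_r) / 2.0
    # Pan: move the view centre opposite to the hands' common motion.
    cam['center'] -= (cur_mid - prev_mid) * cam['range']

    # Zoom: scale by the change in the distance between the hands.
    prev_span = np.linalg.norm(prev_r - prev_l)
    cur_span = np.linalg.norm(cur_r - cur_l)
    if prev_span > 1e-6:
        cam['range'] *= prev_span / max(cur_span, 1e-6)

    # Rotate: the change in the angle of the left-to-right hand vector
    # about the vertical axis adjusts the heading.
    prev_v, cur_v = prev_r - prev_l, cur_r - cur_l
    cam['heading'] += (np.arctan2(cur_v[1], cur_v[0])
                       - np.arctan2(prev_v[1], prev_v[0]))
    return cam
```

Because all three mappings are evaluated every frame, pan, zoom, and rotation remain simultaneously available, which is the property the experiment highlights.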
BioMed Research International | 2014
Winyu Chinthammit; Troy Merritt; Sj Pedersen; Ad Williams; Denis Visentin; Robert Rowe; Thomas A. Furness
This paper describes a pilot study using a prototype telerehabilitation system (Ghostman). Ghostman is a visual augmentation system designed to allow a physical therapist and patient to inhabit each other's viewpoint in an augmented real-world environment. This allows the therapist to deliver instruction remotely and observe performance of a motor skill through the patient's point of view. In a pilot study, we investigated the efficacy of Ghostman by using it to teach participants to use chopsticks. Participants were randomized to a single training session, receiving instruction either through Ghostman or face-to-face from the same skilled instructor. Learning was assessed by measuring retention of skills at 24 hours and 7 days post-instruction. As hypothesised, there were no differences in reduction of error or time to completion between participants using Ghostman and those receiving face-to-face instruction. These initial results in a healthy population are promising and demonstrate the potential application of this technology to patients requiring learning or relearning of motor skills, as may be required following a stroke or brain injury.
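The outcome measure described above, reduction of error at the retention tests, could be summarised along these lines; this is a hypothetical sketch with an assumed data schema, not the study's analysis code:

```python
from statistics import mean

def retention_summary(participants):
    """Summarise skill retention per training condition.

    Each participant dict holds 'group' ('ghostman' or 'face_to_face')
    plus error counts immediately post-training and at the 7-day test.
    Field names are illustrative; the paper does not publish its schema.
    """
    summary = {}
    for group in ('ghostman', 'face_to_face'):
        rows = [p for p in participants if p['group'] == group]
        # Reduction in error from post-training to the day-7 retention test.
        reductions = [p['errors_baseline'] - p['errors_day7'] for p in rows]
        summary[group] = mean(reductions) if reductions else None
    return summary
```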
International Conference on Computer Graphics and Interactive Techniques | 2014
Mayra Donaji Barrera Machuca; Winyu Chinthammit; Yi Yang; Henry Been-Lirn Duh
We propose the use of 3D mobile interactions as a way to give public display viewers (users) new ways to collaborate with one another through the public display. Getting and maintaining users' attention is one of the main struggles for public displays, but previous research has shown that collaboration among viewers can engage users with public displays more deeply. Our proposed 3D mobile interactions for public displays utilize mobile devices as 3D user interfaces, letting users apply their natural skills to control 3D content. The 3D content can also be positioned outside the public display, which lets us explore new interaction techniques. We present a prototype of our 3D mobile interaction that demonstrates the proposed interaction and describe one of its use-case scenarios.
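One plausible building block for such an interaction, sketched here under assumed conventions (the paper does not publish its implementation), is casting a ray from the tracked phone and intersecting it with the display plane, so that 3D content can be addressed on, or positioned beyond, the screen:

```python
import numpy as np

def device_ray_to_display(device_pos, device_quat, plane_z=0.0):
    """Cast a ray from a handheld device and intersect the display plane.

    device_pos: device position in room coordinates (3-vector, metres).
    device_quat: orientation as (w, x, y, z); the ray leaves the device
    along its local -Z axis (a common phone-camera convention).
    Returns the hit point on the plane z == plane_z, or None if the ray
    points away from it. Names and conventions are assumptions, not the
    paper's implementation.
    """
    w, x, y, z = device_quat
    # Rotate the local -Z axis by the quaternion (standard formula).
    fwd = np.array([
        -(2 * (x * z + w * y)),
        -(2 * (y * z - w * x)),
        -(1 - 2 * (x * x + y * y)),
    ])
    if abs(fwd[2]) < 1e-9:
        return None
    t = (plane_z - device_pos[2]) / fwd[2]
    return device_pos + t * fwd if t > 0 else None
```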
IEEE Virtual Reality Conference | 2003
Winyu Chinthammit; Eric J. Seibel; Thomas A. Furness
The operation and performance of a six degree-of-freedom (DOF) shared-aperture tracking system with image overlay is described. This unique tracking technology shares the same aperture, or scanned optical beam, with the visual display, a virtual retinal display (VRD). This display technology provides high brightness in an AR helmet-mounted display, especially in the extreme environment of a military cockpit. The VRD generates an image by optically scanning visible light directly onto the viewer's eye. By scanning both visible and infrared light, the head-worn display can be directly coupled to a head-tracking system. As a result, the proposed tracking system requires minimal calibration between the user's viewpoint and the tracker's viewpoint. This paper demonstrates that the proposed shared-aperture tracking system produces high accuracy and computational efficiency. The current proof-of-concept system has a precision of ±0.05 and ±0.01 deg. in the horizontal and vertical axes, respectively. The static registration error was measured to be 0.08 ± 0.04 and 0.03 ± 0.02 deg. for the horizontal and vertical axes, respectively. The dynamic registration error, or system latency, was measured to be within 16.67 ms, equivalent to our display refresh rate of 60 Hz. In all testing, the VRD was fixed and the calibrated motion of a robot arm was tracked. By moving the robot arm within a restricted volume, this real-time shared-aperture method of tracking was extended to six-DOF measurements. Future AR applications of our shared-aperture tracking and display system include highly accurate head tracking when the VRD is helmet mounted and worn within an enclosed space, such as an aircraft cockpit.
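Two of the reported numbers are easy to put in context with a little arithmetic (ours, not the paper's): the latency bound is exactly one refresh period at 60 Hz, and an angular registration error translates into a linear overlay offset that grows with viewing distance:

```python
import math

# One display refresh at 60 Hz bounds the reported system latency:
frame_ms = 1000.0 / 60.0          # 16.67 ms, as stated in the abstract

# What a static registration error of 0.08 deg (the horizontal mean)
# means in practice: the apparent offset of an overlay at distance d.
def overlay_offset_mm(error_deg, distance_m):
    """Linear misregistration of an AR overlay at a given viewing distance."""
    return math.tan(math.radians(error_deg)) * distance_m * 1000.0

print(f"frame period: {frame_ms:.2f} ms")
print(f"0.08 deg at 1 m  -> {overlay_offset_mm(0.08, 1.0):.2f} mm")   # ~1.40 mm
print(f"0.08 deg at 10 m -> {overlay_offset_mm(0.08, 10.0):.1f} mm")  # ~14.0 mm
```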
Human Factors in Computing Systems | 2014
Winyu Chinthammit; Henry Been-Lirn Duh; Jun Rekimoto
Food is essential to the survival of the world population. Several processes are involved in making food available to consumers: for example, production, transportation, and consumption. Since the global demand for food is always on the rise, there is a need to improve efficiency across all processes in the food industries. For example, food production industries are often not equipped with the right decision-making tools to allow farmers to properly deal with important factors such as environmental change. On the other hand, tools are not difficult to create but can be very challenging for professionals to adopt successfully, especially when the tools require them to change their normal work practices. In this SIG, we will discuss how HCI can improve the food industries by providing suitable information for each food process.
Australasian Computer-Human Interaction Conference | 2014
Mark Brown; Winyu Chinthammit; Paddy Nixon
Although finger touch is widely expected as the control mechanism for touch tables, tangible object interaction is another, if rarely implemented, possibility. Little empirical research exists showing uptake, user engagement, or use preferences for adult users of multi-touch tangible systems (Antle & Wise, 2013; Schneider et al., 2010), with the majority of past research on tangible objects focusing on children (Marshall et al., 2003; Price et al., 2008; Zuckerman et al., 2005). Yet it is adults, as decision makers, who are the true targets of increasingly available commercial multi-touch table applications. By observing the interaction behaviours of 20 participants, this research investigates the appeal of two distinctly different styles of tangible objects compared with their finger-touch equivalents. The explorative study measures user preferences, perceived engagement, fit for purpose, usability, and enjoyment. The aim is to determine how the inclusion of tangible object interaction as part of the interface influences user preferences compared with a touch-only system. This provides valuable baseline information for predicting the potential uptake and preferences of local adult users for future tangible or hybrid tangible-touch systems.
International Conference on Human-Computer Interaction | 2013
Steven Neale; Winyu Chinthammit; Christopher Lueg; Paddy Nixon
In an ideal world, physical museum artefacts could be touched, handled, examined and passed between interested viewers by hand. Unfortunately, this is not always possible – artefacts may be too fragile to handle or pass around, or groups of people with mutual interests in objects may not be in the same location. This can be problematic when attempting to explain or make sense of the physical properties of artefacts.
Symmetry | 2018
Mayra Donaji Barrera Machuca; Winyu Chinthammit; Weidong Huang; Rainer Wasinger; Henry Been-Lirn Duh
Collaboration is common in workplaces, in various engineering settings, and in our daily activities. However, how to effectively engage collaborators with collaborative tasks has long been an issue due to various situational and technical constraints. The research in this paper addresses the issue in a specific scenario: how to enable users to interact with public information from their own perspective. We describe a 3D mobile interaction technique that allows users to collaborate with other people by creating a symmetric and collaborative ambience, which in turn can increase their engagement with public displays. In order to better understand the benefits and limitations of this technique, we conducted a usability study with a total of 40 participants. The results indicate that the 3D mobile interaction technique promotes collaboration between users and also improves their engagement with public displays.
2017 International Symposium on Big Data Visual Analytics (BDVA) | 2017
Elisabeth Adelia Widjojo; Winyu Chinthammit; Ulrich Engelke
Visual Analytics (VA) is a discipline that integrates computational and human efforts, allowing for effective data exploration with interactive and insightful user interfaces. Human-Data Interaction (HDI) is a relatively new term that we interpret here as the interactive interface between a human and the visual representation of the data, analogous to human-computer interaction. Advanced user interfaces are widely integrated into modern computing devices, enabling more effective human interaction with data sets. However, the limited control and display modalities of computing devices often still do not easily facilitate exploration of the multidimensionality and heterogeneity of large data sets. Virtual Reality (VR) technologies have demonstrated their potential to unlock human understanding of complex data sets through immersion and natural spatial interaction. In this position paper, we share our views on how VR-based HDI can support exploration of multidimensional large data sets, with the aim of providing direction for open research areas that may serve the design of a VR-based HDI system in this emerging field of research.
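As one concrete example of the kind of pipeline such a system needs, a multidimensional data set must first be given 3D positions before it can be explored immersively; the sketch below uses a plain-numpy PCA projection, purely as an illustration, since the paper prescribes no particular method:

```python
import numpy as np

def project_to_3d(data):
    """Project an (n_samples, n_features) array to 3-D via PCA.

    A common first step for placing high-dimensional records as points
    in an immersive VR scatterplot; shown here only to make the idea
    concrete, not as the paper's approach.
    """
    centered = data - data.mean(axis=0)
    # Right-singular vectors of the centred data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T  # (n_samples, 3) positions for the scene

# Usage: positions = project_to_3d(np.random.rand(1000, 20))
```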
Australasian Computer-Human Interaction Conference | 2013
SooJeong Yoo; Callum Parker; Winyu Chinthammit; Susan Turland
Currently, first-year chemistry students at the University of Tasmania learn about three-dimensional molecular structures using a combination of lectures, tutorials, and practical hands-on experience with molecular chemistry kits. We have developed a basic 3D molecule construction simulation, called MolyPoly, to help students grasp the concepts of chemistry through immersion and natural interaction with 3D molecules. It was designed to augment the teaching of organic chemistry with enhanced natural interaction and 3D visualization techniques. This paper presents the results of a pilot study conducted with the aforementioned chemistry class. Participating students were split into two groups: MolyPoly and traditional. The results demonstrated that the two groups achieved similar learning outcomes at the end of the four class sessions.
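The heart of a molecule-construction tool like this is enforcing chemistry rules as students assemble atoms; the sketch below shows a hypothetical valence check (illustrative only, not MolyPoly's code):

```python
# Each element may form only as many single bonds as it has free
# valence slots; a builder can refuse any bond that would exceed them.
VALENCE = {'H': 1, 'O': 2, 'N': 3, 'C': 4}

class Atom:
    def __init__(self, element):
        self.element = element
        self.bonds = []          # atoms this one is bonded to

    def free_valence(self):
        return VALENCE[self.element] - len(self.bonds)

def try_bond(a, b):
    """Create a single bond if both atoms have a free valence slot."""
    if a is not b and a.free_valence() > 0 and b.free_valence() > 0:
        a.bonds.append(b)
        b.bonds.append(a)
        return True
    return False

# e.g. methane: one carbon accepts four hydrogens, a fifth is refused.
c = Atom('C')
hydrogens = [Atom('H') for _ in range(5)]
results = [try_bond(c, h) for h in hydrogens]   # [True]*4 + [False]
```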