Chaklam Silpasuwanchai
Kochi University of Technology
Publications
Featured research published by Chaklam Silpasuwanchai.
International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2015
Chaklam Silpasuwanchai; Xiangshi Ren
Full-body gestures provide a more natural and intuitive alternative input for video games. However, full-body game gestures designed by developers may not always be the most suitable gestures available. A key challenge in full-body game gestural interfaces lies in how to design gestures that accommodate the intensive, dynamic nature of video games, e.g., several gestures may need to be executed simultaneously using different body parts. This paper investigates suitable simultaneous full-body game gestures, with the aim of accommodating high interactivity during intense gameplay. Three user studies were conducted: first, to determine user preferences, a user-elicitation study was conducted in which participants were asked to define gestures for common game actions/commands; second, to identify suitable and alternative body parts, participants were asked to rate the suitability of each body part (one and two hands, one and two legs, head, eyes, and torso) for common game actions/commands; third, to explore the consensus on suitable simultaneous gestures, we proposed a novel choice-based elicitation approach in which participants were asked to mix and match gestures from a predefined list to produce their preferred simultaneous gestures. Our key findings include (i) user preferences for game gestures, (ii) a set of suitable and alternative body parts for common game actions/commands, (iii) a consensus set of simultaneous full-body game gestures that assist interaction in different interactive game situations, and (iv) generalized design guidelines for future full-body game interfaces. These results can assist designers and practitioners in developing more effective full-body game gestural interfaces and other highly interactive full-body gestural interfaces.
Designing Interactive Systems | 2016
Chaklam Silpasuwanchai; Xiaojuan Ma; Hiroaki Shigemasu; Xiangshi Ren
Engagement is a key reason for introducing gamification to learning and thus serves as an important measure of its effectiveness. Based on a literature review and meta-synthesis, this paper proposes a comprehensive framework of engagement in gamification for learning. The framework sketches out the connections among gamification strategies, dimensions of engagement, and the ultimate learning outcome. It also elicits other task- and user-related factors that may potentially impact the effect of gamification on learner engagement. To verify and further strengthen the framework, we conducted a user study to demonstrate that: 1) different gamification strategies can trigger different facets of engagement; 2) the three dimensions of engagement have varying effects on skill acquisition and transfer; and 3) task nature and learner characteristics that were overlooked in previous studies can influence the engagement process. Our framework provides an in-depth understanding of the mechanism of gamification for learning and can serve as a theoretical foundation for future research and design.
Designing Interactive Systems | 2016
Nem Khan Dim; Chaklam Silpasuwanchai; Sayan Sarcar; Xiangshi Ren
Mid-air gestures enable intuitive and natural interactions. However, few studies have investigated the use of mid-air gestures by blind people. TV interaction is one promising use of mid-air gestures for blind people, as listening to TV is one of their most common activities. We therefore investigated mid-air TV gestures for blind people through two studies. Study 1 used a user-elicitation approach in which blind participants were asked to define gestures for a given set of commands; from it, we present a classification of gesture types and the frequency of body-part usage. However, our participants had difficulty imagining gestures for some commands, so we conducted Study 2, which used a choice-based elicitation approach in which participants selected their favorite gesture from a predefined list of choices. We found that providing choices helps guide users to discover suitable gestures for unfamiliar commands. We discuss concrete design guidelines for mid-air TV gestures for blind people.
Proceedings of the International Symposium on Interactive Technology and Ageing Populations | 2016
Sayan Sarcar; Jussi Jokinen; Antti Oulasvirta; Chaklam Silpasuwanchai; Zhenxin Wang; Xiangshi Ren
This paper addresses the design of user interfaces for aging adults. Older people differ vastly in how aging affects their perceptual, motor, and cognitive abilities; when it comes to interface design for aging users, the "one design for all" approach fails. We present first results from attempts to extend ability-based design to the aging population. We describe a novel approach that uses age-related differences as the principle for optimizing interactive tasks. We argue that, to be successful, predictive models must take into account how users adapt their behavioral strategies as a function of their abilities. When combined with design optimization, such models allow us to investigate optimal designs more broadly, examining trade-offs among several design factors. We present first results on optimizing text entry methods for user groups with different age-related declines.
Human Factors in Computing Systems | 2017
Jussi P. P. Jokinen; Sayan Sarcar; Antti Oulasvirta; Chaklam Silpasuwanchai; Zhenxin Wang; Xiangshi Ren
Predicting how users learn new or changed interfaces is a long-standing objective in HCI research. This paper contributes to the understanding of visual search and learning in text entry. With the goal of explaining the variance in novices' typing performance that is attributable to visual search, a model was designed to predict how users learn to locate keys on a keyboard: initially relying on visual short-term memory, then transitioning to recall-based search. This allows search times and visual search patterns to be predicted for completely and partially new layouts. The model complements models of motor performance and learning in text entry by predicting changes in visual search patterns over time. Practitioners can use it to estimate how long it takes to reach a desired level of performance with a given layout.
Human Factors in Computing Systems | 2014
Chaklam Silpasuwanchai; Xiangshi Ren
Motion gestures enable natural and intuitive input in video games. However, game gestures designed by developers may not always be the optimal gestures for players. A key challenge in designing appropriate game gestures lies in the interaction-intensive nature of video games, i.e., several actions/commands may need to be executed concurrently using different body parts. This study analyzes user preferences in game gestures, with the aim of accommodating high interactivity during gameplay. Two user-elicitation studies were conducted: first, to determine user preferences, participants were asked to define gestures for common game actions/commands; second, to develop effective combined gestures, participants were asked to define possible game gestures using each body part (one and two hands, one and two legs, head, eyes, and torso). Our study presents a set of suitable and alternative body parts for common game actions/commands. We also present simultaneously applied game gestures that assist interaction in highly interactive game situations (e.g., selecting a weapon with the feet while shooting with the hand). Design implications are further discussed, e.g., transferability between hand and leg gestures.
Proceedings of the Second International Symposium of Chinese CHI | 2014
Ryo Mizobata; Chaklam Silpasuwanchai; Xiangshi Ren
Full-body motion gestures enable realistic and intuitive input in video games. However, little is known regarding how different kinds of players engage or disengage with full-body game interaction. In this paper, adopting a user-typing approach, we explore player differences and player preferences in full-body gesture interaction (i.e., with Kinect). Specifically, we hypothesize three human factors that influence player engagement in full-body game interaction: the player's motivation to succeed (achiever vs. casual player), motivation to move (mover vs. non-mover), and game expertise (gamer vs. non-gamer). To explore these hypotheses, we conducted an experiment in which participants played three different video games supporting full-body game gestures. The results suggest a significant correlation and main effect of the three factors on players' engagement. The results also suggest three important game properties that affect players' preferences: level of cognitive challenge, level of physical challenge, and level of realistic interaction.
Human Factors in Computing Systems | 2017
Mahmoud Mohamed Hussien Ahmed; Chaklam Silpasuwanchai; Kavous Salehzadeh Niksirat; Xiangshi Ren
In our fast-paced society, stress and anxiety have become increasingly common, and meditation for relaxation has received much attention. Meditation apps exploit various senses, e.g., touch, audio, and vision, but the relationship between human senses and interactive meditation is not well understood. This paper empirically evaluates the effects of single and combined human senses on interactive meditation. We found that the effectiveness of human senses can be defined by their respective roles in maintaining the balance between relaxation and focus. This work is the first attempt to understand these relationships. The findings have broad implications for the field of multi-modal interaction and for interactive meditation applications.
Human Factors in Computing Systems | 2016
Neil Charness; Mark D. Dunlop; Cosmin Munteanu; Emma Nicol; Antti Oulasvirta; Xiangshi Ren; Sayan Sarcar; Chaklam Silpasuwanchai
This SIG advances the study of mobile user interfaces for the aging population. The topic is timely: the mobile device has become the most widely used computer terminal, and the number of older people will soon exceed the number of children worldwide. However, most HCI research addresses younger adults and has had little impact on older adults. Some design trends, like the mantra "smaller is smarter", contradict the needs of older users. Developments like this may diminish older adults' ability to access information and participate in society, leading to further isolation (social and physical) and a widening of the digital divide. This SIG aims to discuss mobile interfaces for older adults and has three goals: (i) to map the state of the art, (ii) to build a community gathering experts from related areas, and (iii) to raise awareness within the SIGCHI community. The SIG will be open to all at CHI.
Human Factors in Computing Systems | 2017
Sayan Sarcar; Cosmin Munteanu; Jussi P. P. Jokinen; Antti Oulasvirta; Chaklam Silpasuwanchai; Neil Charness; Mark D. Dunlop; Xiangshi Ren
We are concurrently witnessing two significant shifts: mobiles are becoming the most used computing devices, and older people are becoming the largest demographic group. However, despite the recent increase in related CHI publications, older adults continue to be underrepresented in HCI research as well as commercially, further widening the digital divide they face and hampering their social participation. This workshop aims to increase the momentum for such research within CHI and related fields such as gerontechnology. We plan to create a space for discussing and sharing principles and strategies to design and evaluate mobile user interfaces for the aging population. We thus welcome contributions on empirical studies, theories, and the design and evaluation of mobile interfaces for older adults.