Xiangshi Ren
Kochi University of Technology
Publications
Featured research published by Xiangshi Ren.
ACM Transactions on Computer-Human Interaction | 2000
Xiangshi Ren; Shinji Moriya
Two experiments were conducted to compare pen-based selection strategies and their characteristics. Two state transition models were also formulated; they provide a new vocabulary for investigating interactions related to target selection. Six strategies, all describable by the state transition models, were used in the experiments. We determined the best of the six to be the “Slide Touch” strategy, in which the target is selected at the moment the pen tip touches the target for the first time after landing on the screen surface. The six strategies were also classified into strategy groups according to their characteristics. We determined the best strategy group to be the “In-Out” group, in which the target is selected by contact either inside or outside the target. Analyses show that differences between strategies are influenced by variations in target size; however, they are not affected by the distance to the target (pen-movement distance) or the direction of pen movement (pen-movement direction). We also found “the smallest maximum size” of five pixels, i.e., the boundary target size below which there are significant differences in error rate between the strategies and above which there are none. Relationships between interaction states, routes, and strategy efficiency were also investigated.
User Interface Software and Technology (UIST) | 2009
Feng Wang; Xiang Cao; Xiangshi Ren; Pourang Irani
Current interactions on direct-touch interactive surfaces are often modeled on properties of the input channel that are common in traditional graphical user interfaces (GUIs), such as x-y coordinate information. Leveraging additional information available on the surfaces could potentially result in richer and novel interactions. In this paper we specifically explore the role of finger orientation. This property is typically ignored in touch-based interactions, partly because of the ambiguity in determining it solely from the contact shape. We present a simple algorithm that unambiguously detects the directed finger orientation vector in real time from contact information only, by considering the dynamics of the finger landing process. Results of an experimental evaluation show that our algorithm is stable and accurate. We then demonstrate how finger orientation can be leveraged to enable novel interactions and to infer higher-level information such as hand occlusion or user position. We present a set of orientation-aware interaction techniques and widgets for direct-touch surfaces.
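The core idea of resolving orientation from landing dynamics can be illustrated with a minimal sketch (function and parameter names here are hypothetical, not the paper's implementation): the fingertip touches first, and as more of the finger pad settles the contact centroid drifts toward the finger body, so the centroid's displacement during landing resolves the 180-degree ambiguity of the contact-shape ellipse.

```python
import math

def finger_orientation(centroids):
    """Estimate a directed finger orientation angle (radians) from the
    sequence of contact centroids sampled while the finger lands.

    The displacement from the first centroid (fingertip touch-down) to
    the last (settled contact) points from fingertip toward finger body,
    which disambiguates the two candidate orientations of the ellipse.
    """
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    return math.atan2(y1 - y0, x1 - x0)

# Centroid drifting rightward during landing implies the finger points
# along the positive x-axis.
angle = finger_orientation([(10.0, 5.0), (12.0, 5.0), (14.0, 5.0)])
```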
Human Factors in Computing Systems (CHI) | 2009
Feng Wang; Xiangshi Ren
Current multi-touch interaction techniques typically use only the x-y coordinates of the human finger's contact with the screen. However, when fingers contact a touch-sensitive surface, they usually approach at an angle and cover a relatively large 2D area rather than a precise single point. In this paper, a Frustrated Total Internal Reflection (FTIR) based multi-touch device is used to collect finger imprint data. We designed a series of experiments to explore human finger input properties and identified several useful properties, such as contact area, contact shape and contact orientation, which can be exploited to improve the performance of multi-touch selecting and pointing tasks. Based on the experimental results, we discuss some implications for the design of human finger input interfaces and propose several design prototypes that incorporate these implications. A set of raw data and several concrete recommendations useful for the research community are also presented.
Proceedings of the IFIP TC2/TC13 WG2.7/WG13.4 Seventh Working Conference on Engineering for Human-Computer Interaction | 1998
Xiangshi Ren; Shinji Moriya
This paper describes six strategies for selecting small targets on pen-based systems. We classified the strategies into strategy groups according to their characteristics. An experiment was conducted comparing selection time, error rate and user preference ratings for the six selection strategies. We focused our attention on the three variables associated with pen-based selection: size, direction and distance to target. Three target sizes, eight pen-movement directions and three pen-movement distances were applied to all six strategies. Experimental results show that the best strategy was the “Landon2” strategy when the strategies were evaluated individually, and the best strategy group was the “In-Out” group when they were evaluated in groups. Analyses also showed that differences between strategies were influenced by variations in target size; however, they were not influenced by pen-movement distance or pen-movement direction. Analyses of the grouped strategies produced the same results. Ideas for future research are also presented.
Human Factors in Computing Systems (CHI) | 2008
Xinyong Zhang; Xiangshi Ren; Hongbin Zha
To improve the stability of the eye cursor, we introduce three methods, force field (FF), speed reduction (SR), and warping to target center (TC), which modulate eye cursor trajectories by counteracting eye jitter, the main cause of eye cursor instability. We evaluated these methods in two controlled experiments. The first, an attention task experiment, indicates that both FF and SR significantly alleviate eye cursor instability, whereas TC does not, contrary to our expectation. The second, a 2D pointing task experiment, shows that FF and SR, as well as an improved implementation of SR (iSR), indeed improve human performance in dwell-based eye pointing, the dominant form of eye-based interaction. The iSR method is especially effective, accelerating eye pointing (by 10.5% and 8.5%) and reducing error rates (by 6.1% and 2.7%) for target diameters D = 45 and 60 pixels, respectively.
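A speed-reduction style stabilizer can be sketched as a displacement gain: small gaze movements (likely jitter) are damped, while large saccade-scale movements pass through at full speed. The threshold and gain values below are illustrative assumptions, not the parameters used in the paper.

```python
def stabilize(cursor, gaze, jitter_radius=15.0, gain=0.2):
    """Move the cursor toward the latest gaze sample, damping small
    displacements that fall inside the jitter band while letting
    saccade-scale movements through unattenuated."""
    dx, dy = gaze[0] - cursor[0], gaze[1] - cursor[1]
    dist = (dx * dx + dy * dy) ** 0.5
    k = gain if dist < jitter_radius else 1.0  # reduce speed only for jitter
    return (cursor[0] + k * dx, cursor[1] + k * dy)

# A 5-pixel jitter moves the cursor only 1 pixel; a 100-pixel saccade lands fully.
small = stabilize((0.0, 0.0), (5.0, 0.0))
large = stabilize((0.0, 0.0), (100.0, 0.0))
```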
Journal of Information Processing | 2010
Minghui Sun; Xiangshi Ren; Xiang Cao
This paper investigates the relationship between “error feedback” (presented when tracking or trajectory errors are made) and user performance in steering tasks. The experiment examines feedback presented in visual, auditory and tactile modalities, both individually and in combination. The results indicate that feedback significantly affects the accuracy of steering tasks but not the movement time. The results also show that users perform most accurately with tactile feedback. This paper contributes to the basic understanding of “error feedback” and how it impacts steering tasks, and it offers insights and implications for the future design of multimodal feedback mechanisms for steering tasks.
International Conference on Human-Computer Interaction | 2007
Xiangshi Ren; Jibin Yin; Shengdong Zhao; Yang Li
We present the Adaptive Hybrid Cursor, a novel target acquisition technique for pen-based interfaces. To assist a user in a target selection task, this technique automatically adapts the size of the cursor and/or its contexts (the target size and the selection background) based on pen pressure input. We systematically evaluated the new technique with various 2D target acquisition tasks. The experimental results indicated that the Adaptive Hybrid Cursor had better selection performance, and was particularly effective for small-target and high-density environments in which the regular cursor and the Bubble Cursor [13] failed to show significant advantages. The Adaptive Hybrid Cursor is a novel way to improve target acquisition via pressure input, and our study demonstrated its viability and potential for pen-based interfaces.
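The pressure-driven adaptation can be sketched as a simple mapping from normalized pen pressure to cursor size (a hypothetical linear mapping for illustration; the actual technique also adapts the target size and the selection background, as described above):

```python
def cursor_radius(pressure, r_min=4.0, r_max=32.0):
    """Map normalized pen pressure in [0, 1] to a cursor radius in
    pixels, so pressing harder grows the activation area at the pen tip."""
    p = min(max(pressure, 0.0), 1.0)  # clamp out-of-range sensor values
    return r_min + p * (r_max - r_min)
```

A light touch keeps a precise point cursor for dense layouts, while heavier pressure expands the activation area to ease acquisition of isolated small targets.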
international conference on multimodal interfaces | 2003
Yang Li; James A. Landay; Zhiwei Guan; Xiangshi Ren; Guozhong Dai
Informal presentations are a lightweight means for fast and convenient communication of ideas. People communicate their ideas to others on paper and whiteboards, which afford fluid sketching of graphs, words and other expressive symbols. Unlike existing authoring tools that are designed for formal presentations, we created SketchPoint to help presenters design informal presentations via freeform sketching. In SketchPoint, presenters can quickly author presentations by sketching slide content, overall hierarchical structures and hyperlinks. To facilitate the transition from idea capture to communication, a note-taking workspace was built for accumulating ideas and sketching presentation outlines. Informal feedback showed that SketchPoint is a promising tool for idea communication.
Behaviour & Information Technology | 2010
Xiaolei Zhou; Xiangshi Ren
The steering law is an excellent performance model for trajectory-based tasks, such as drawing and writing in GUIs. Current studies of steering tasks focus on the effect of system factors (i.e., path width and amplitude) on movement time and on applications of the steering law. We conducted a series of experiments to further explore the effect of different operational biases (biasing speed or accuracy) on steering completion time and its standard deviation for two steering trajectory shapes, a straight steering task and a circular steering task, and then established a new model accommodating both system and subjective factors in steering tasks. Empirical results showed that the new model is more predictive and robust than the traditional steering law.
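The baseline steering law that the new model extends predicts movement time from a path's index of difficulty (ID); for the two tunnel shapes used in the experiments it reduces to closed forms. A minimal sketch, with coefficients a and b as illustrative placeholders (in practice they are fit per experiment):

```python
import math

def steering_time(index_of_difficulty, a=0.1, b=0.08):
    """Steering law: MT = a + b * ID, with a and b fit empirically."""
    return a + b * index_of_difficulty

def straight_id(amplitude, width):
    """Straight tunnel of length A and constant width W: ID = A / W."""
    return amplitude / width

def circular_id(radius, width):
    """Circular tunnel of radius R and width W: ID = 2 * pi * R / W."""
    return 2 * math.pi * radius / width

# Narrower tunnels are harder: halving W doubles ID and the predicted
# movement-time increment.
```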
International Journal of Human-Computer Studies | 2015
Chaklam Silpasuwanchai; Xiangshi Ren
Full-body gestures provide a more natural and intuitive input alternative for video games. However, the full-body game gestures designed by developers may not always be the most suitable ones available. A key challenge for full-body game gestural interfaces lies in designing gestures that accommodate the intensive, dynamic nature of video games, where, e.g., several gestures may need to be executed simultaneously using different body parts. This paper investigates suitable simultaneous full-body game gestures, with the aim of accommodating high interactivity during intense gameplay. Three user studies were conducted: first, to determine user preferences, a user-elicitation study was conducted in which participants were asked to define gestures for common game actions/commands; second, to identify suitable and alternative body parts, participants were asked to rate the suitability of each body part (one and two hands, one and two legs, head, eyes, and torso) for common game actions/commands; third, to explore the consensus on suitable simultaneous gestures, we proposed a novel choice-based elicitation approach in which participants were asked to mix and match gestures from a predefined list to produce their preferred simultaneous gestures. Our key findings include (i) user preferences for game gestures, (ii) a set of suitable and alternative body parts for common game actions/commands, (iii) a consensus set of simultaneous full-body game gestures that assist interaction in different interactive game situations, and (iv) generalized design guidelines for future full-body game interfaces. These results can assist designers and practitioners in developing more effective full-body game gestural interfaces and other highly interactive full-body gestural interfaces.