Publications


Featured research published by G. Michael Poor.


Nordic Conference on Human-Computer Interaction | 2014

An evaluation of touchless hand gestural interaction for pointing tasks with preferred and non-preferred hands

Alvin Jude; G. Michael Poor; Darren Guinness

Performance evaluations of touchless gestural interaction are generally done by benchmarking pointing performance against existing interactive devices, which requires the use of the user's preferred hand. However, as there is no reason for this interaction to be limited to one hand, evaluation should rightfully consider both hands. In this paper we evaluate the performance of touchless gestural interaction for pointer manipulation with both the preferred and non-preferred hands. This interaction is benchmarked against the mouse and the touchpad with a multidirectional task. We compared the performance of all devices, the improvement in performance between two rounds, and the degradation of performance between hands. The results show that the mouse has no performance increase between rounds but high degradation across hands, the touchpad has a medium performance increase and medium degradation, and gestural interaction has the highest performance increase and the lowest degradation between hands.


International Conference on Human-Computer Interaction | 2011

More than speed? An empirical study of touchscreens and body awareness on an object manipulation task

Rachelle Kristof Hippler; Dale S. Klopfer; Laura Marie Leventhal; G. Michael Poor; Brandi A. Klein; Samuel D. Jaffee

Touchscreen interfaces do more than allow users to execute speedy interactions. Three interfaces (touchscreen, mouse-drag, on-screen button) were used in the service of performing an object manipulation task. Results showed that planning time was shortest with touchscreens, that touchscreens allowed users with high action knowledge to perform the task more efficiently, and that only with touchscreens was the ability to rotate the object the same across all axes of rotation. The concept of closeness is introduced to explain the potential advantages of touchscreen interfaces.


ACM Transactions on Computing Education | 2012

No User Left Behind: Including Accessibility in Student Projects and the Impact on CS Students’ Attitudes

G. Michael Poor; Laura Marie Leventhal; Julie Barnes; Duke Hutchings; Paul B. Albee; Laura A. Campbell

Usability and accessibility have become increasingly important in computing curricula. This article briefly reviews how these concepts may be included in existing courses. The authors conducted a survey of student attitudes toward these issues at the start and end of a usability engineering course that included a group project with an accessibility component. Results of the survey indicate that students' awareness of issues related to usability and accessibility is increased after taking the course and completing the project. Our work and results are potentially valuable to CS educators in three ways: (1) they validate the usefulness of the survey instrument in assessing pedagogies in usability engineering, (2) they provide useful insights into the attitudes of CS majors relative to the important topics of usability and accessibility, and (3) they point to possible benefits of including usability and accessibility topics in CS curricula.


Human Factors in Computing Systems | 2012

Leveraging motor learning for a tangible password system

Martez E. Mott; Thomas J. Donahue; G. Michael Poor; Laura Marie Leventhal

Tangible user interfaces (TUIs) may allow users to have more direct interaction with systems when compared to traditional graphical user interfaces (GUIs). However, the full range of applications where TUIs can be utilized in practice is unclear. To resolve this problem, the benefits of TUIs must be analyzed and matched to an application domain where they hold advantages over more traditional systems. Since TUIs require users to use their hands in order to interact with the system, these systems could leverage motor learning to help users perform specific tasks. In this paper we describe an early attempt to understand how motor learning can be used to create a tangible password system. A novel tangible password system was created and a small study conducted in order to identify future research objectives.


Conference on Computers and Accessibility | 2014

Gestures with speech for hand-impaired persons

Darren Guinness; G. Michael Poor; Alvin Jude

Mid-air hand-gestural interaction generally causes fatigue because implementations require the user to hold their arm out during the interaction. Recent research has identified an approach that reduces gesture-related fatigue by allowing users to rest their elbow on a surface and calibrate their interaction space from this rested position [1]. This approach also reduced stress on the hand and wrist compared to the mouse by shifting much of the load to the forearm and shoulder muscles. In this paper we evaluate gesture-and-speech multimodal interaction as a form of assistive interaction for people with hand impairments. Two participants with hand impairments were recruited to perform the evaluation. We collected qualitative and quantitative data, which showed promising results for using this method as an assistive interaction.


ACM Transactions on Computer-Human Interaction | 2016

Applying the Norman 1986 User-Centered Model to Post-WIMP UIs: Theoretical Predictions and Empirical Outcomes

G. Michael Poor; Samuel D. Jaffee; Laura Marie Leventhal; Jordan Ringenberg; Dale S. Klopfer; Guy W. Zimmerman; Brandi A. Klein

In recent decades, “post-WIMP” interactions have revolutionized user interfaces (UIs) and led to improved user experiences. However, accounts of post-WIMP UIs typically do not provide theoretical explanations of why these UIs lead to superior performance. In this article, we use Norman’s 1986 model of interaction to describe how post-WIMP UIs enhance users’ mental representations of UI and task. In addition, we present an empirical study of three UIs; in the study, participants completed a standard three-dimensional object manipulation task. We found that the post-WIMP UI condition led to enhancements of mental representation of UI and task. We conclude that the Norman model is a good theoretical framework to study post-WIMP UIs. In addition, by studying post-WIMP UIs in the context of the Norman model, we conclude that mental representation of task may be influenced by the interaction itself; this supposition is an extension of the original Norman model.


International Conference on Human-Computer Interaction | 2013

Mobility Matters: Identifying Cognitive Demands That Are Sensitive to Orientation

G. Michael Poor; Guy W. Zimmerman; Dale S. Klopfer; Samuel D. Jaffee; Laura Marie Leventhal; Julie Barnes

Prior studies have shown benefits of interactions on mobile devices. Device mobility itself changes the nature of the user experience; interactions on mobile devices may present better support for cognition. To better understand cognitive demands related to mobility, the current study investigated presentations on a mobile device for a three-dimensional construction task. The task imposed considerable cognitive load, particularly in its demands for mental rotation; individual differences in spatial ability are known to interact with these demands. This study specifically investigated mobile device orientations and participants' spatial ability. Participants with low spatial ability were able to complete the task more effectively when shown the presentation in a favorable orientation. Individuals who saw the presentation in an unfavorable orientation and those of low spatial ability were differentially disadvantaged. We conclude that mobility can reduce cognitive load by limiting demands for spatial processing related to reorientation.


International Conference on Human-Computer Interaction | 2011

How do I line up? Reducing mental transformations to improve performance

Guy W. Zimmerman; Dale S. Klopfer; G. Michael Poor; Julie Barnes; Laura Marie Leventhal; Samuel D. Jaffee

Mobile devices and visual-spatial presentations of information are pervasive, especially for tasks in which the mobile device can be moved into close proximity of the task. This mobility allows the user to offload mental workload through physical transformations of the device. In this study, we compared a fixed mobile device, a non-fixed mobile device, and a fixed desktop display to determine the effects imposed by the mental workload of transforming the frames of reference into alignment. Our results indicate that allowing the user to manipulate the device's position can influence performance by reducing the need for mental transformations.


Human Factors in Computing Systems | 2017

Bimanual Word Gesture Keyboards for Mid-air Gestures

Garrett Benoit; G. Michael Poor; Alvin Jude

Mid-air hand gestural interaction has generally been researched as a pointing method. However, recent research has shown potential for text input with word gesture keyboards (WGK), where the input system must identify when a gesture starts and when it stops. Previous research has had success with the same hand moving the cursor and performing the activation gesture. In this paper we introduce a bimanual interaction for gestural text input with a WGK, in which one hand moves the cursor while the other performs the activation. In our user studies, the bimanual method significantly outperformed the state-of-the-art single-handed method: we achieved 16 words per minute, about 39% higher than the benchmark, with significantly lower error rates.


Symposium on Spatial User Interaction | 2016

Improving Gestural Interaction With Augmented Cursors

Ashley Dover; G. Michael Poor; Darren Guinness; Alvin Jude

Gesture-based interaction has become more affordable and ubiquitous as an interaction style in recent years. One issue with gestural pointing is the lack of accuracy with smaller targets. In this paper we propose that the use of augmented cursors - which have been shown to improve small-target acquisition with a standard mouse - also improves small-target acquisition for gestural pointing. In our study we explored the use of the Bubble Lens and the Bubble Cursor as means to improve acquisition of smaller targets, and compared them with unaugmented interaction. Our study showed that both methods significantly improved target selection. As part of our study, we also identified the parameters for configuring the Bubble Cursor for optimal results.

Collaboration


Dive into G. Michael Poor's collaborations.

Top Co-Authors

Laura Marie Leventhal
Bowling Green State University

Darren Guinness
University of Colorado Boulder

Samuel D. Jaffee
Bowling Green State University

Dale S. Klopfer
Bowling Green State University

Guy W. Zimmerman
Bowling Green State University

Julie Barnes
Bowling Green State University