Publication


Featured research published by Michelle Annett.


User Interface Software and Technology | 2011

Medusa: a proximity-aware multi-touch tabletop

Michelle Annett; Tovi Grossman; Daniel Wigdor; George W. Fitzmaurice

We present Medusa, a proximity-aware multi-touch tabletop. Medusa uses 138 inexpensive proximity sensors to detect a user's presence and location, determine body and arm locations, distinguish between the right and left arms, and map touch points to specific users and specific hands. Our tracking algorithms and hardware designs are described. Exploring this unique design, we develop and report on a collection of interactions enabled by Medusa in support of multi-user collaborative design, specifically within the context of Proxi-Sketch, a multi-user UI prototyping tool. We discuss design issues, system implementation, limitations, and generalizable concepts throughout the paper.


Canadian Conference on Artificial Intelligence | 2008

A comparison of sentiment analysis techniques: polarizing movie blogs

Michelle Annett; Grzegorz Kondrak

With the ever-growing popularity of online media such as blogs and social networking sites, the Internet is a valuable source of information for product and service reviews. Attempting to classify a subset of these documents using polarity metrics can be a daunting task. After a survey of previous research on sentiment polarity, we propose a novel approach based on Support Vector Machines. We compare our method to previously proposed lexical-based and machine learning (ML) approaches by applying it to a publicly available set of movie reviews. Our algorithm will be integrated within a blog visualization tool.
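The SVM-based polarity classification described above can be sketched roughly as follows. This is a minimal illustration using scikit-learn, not the authors' implementation; the toy review snippets, TF-IDF features, and default classifier settings are all assumptions for demonstration.

```python
# Minimal sketch of SVM-based sentiment polarity classification,
# in the spirit of the approach described above (not the authors' code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy movie-review snippets (hypothetical stand-ins for a real corpus).
reviews = [
    "A brilliant, moving film with superb acting.",
    "Dull plot and wooden performances; a waste of time.",
    "I loved every minute of this movie.",
    "Terrible pacing and an incoherent story.",
]
labels = ["pos", "neg", "pos", "neg"]

# Bag-of-words features weighted by TF-IDF, fed into a linear SVM.
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(reviews, labels)

print(classifier.predict(["Dull and terrible pacing."]))
```

In practice, a lexical baseline (counting positive and negative words from a polarity lexicon) would be compared against this pipeline on a labeled movie-review corpus.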


Australasian Computer-Human Interaction Conference | 2009

Using a multi-touch tabletop for upper extremity motor rehabilitation

Michelle Annett; Fraser Anderson; Darrell Goertzen; Jonathan Halton; Quentin Ranson; Walter F. Bischof; Pierre Boulanger

Millions of people in Canada have impairments that result in a loss of function and directly affect their ability to carry out activities of daily living. Many individuals with disabilities enter into rehabilitation programs to improve their motor functioning and quality of life. Currently, many of the activities and exercises that are performed are monotonous, uninteresting, and do not inspire patients to perform to the best of their abilities. The use of traditional exercises can also make it difficult for therapists to objectively measure and track patient progress. The integration of highly interactive and immersive technologies into rehabilitation programs has the potential to benefit both patients and therapists. We have developed a multi-touch tabletop system, the AIR Touch, which combines existing multi-touch technologies with a suite of new rehabilitation-centric applications. The AIR Touch was developed under the guidance of practicing occupational therapists.


Human Factors in Computing Systems | 2014

In the blink of an eye: investigating latency perception during stylus interaction

Albert Han Ng; Michelle Annett; Paul Henry Dietz; Anoop Gupta; Walter F. Bischof

While pen computing has become increasingly more popular, device responsiveness, or latency, still plagues such interaction. Although there have been advances in digitizer technology over the last few years, commercial end-to-end latencies are unfortunately similar to those found with touchscreens, i.e., 65 to 120 milliseconds. We report on a prototype stylus-enabled device, the High Performance Stylus System (HPSS), designed to display latencies as low as one millisecond while users ink or perform dragging tasks. To understand the role of latency while inking with a stylus, psychophysical just-noticeable difference experiments were conducted using the HPSS. While participants performed dragging and scribbling tasks, very low levels of latency could be discriminated, i.e., ~1 versus 2 milliseconds while dragging and ~7 versus 40 milliseconds while scribbling. The HPSS and our experimentation have provided further motivation for the implementation of latency saving measures in pen-based hardware and software systems.
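Just-noticeable difference experiments of this kind are typically run with an adaptive staircase that converges on the smallest discriminable difference. The sketch below shows a generic 1-up/2-down staircase against a simulated observer; it is a standard psychophysical procedure, not the paper's specific protocol, and every parameter (step size, reversal count, the observer model) is an illustrative assumption.

```python
# Generic sketch of an adaptive 1-up/2-down staircase for estimating a
# latency just-noticeable difference (JND). This is a textbook procedure,
# NOT the protocol used in the paper; the simulated observer and all
# parameters below are illustrative assumptions.
import random

def simulated_observer(latency_ms, reference_ms=1.0, true_jnd_ms=1.0):
    """Hypothetical observer: reports a difference when the test latency
    exceeds the reference by more than the observer's true JND (plus noise)."""
    return (latency_ms - reference_ms) > true_jnd_ms + random.gauss(0, 0.2)

def run_staircase(start_ms=40.0, step_ms=2.0, reversals_needed=8):
    latency, correct_streak, reversals, last_dir = start_ms, 0, [], None
    while len(reversals) < reversals_needed:
        if simulated_observer(latency):
            correct_streak += 1
            if correct_streak < 2:
                continue
            # Two correct in a row -> make the task harder (smaller latency).
            correct_streak, direction = 0, "down"
            latency = max(latency - step_ms, 0.0)
        else:
            # One miss -> make the task easier (larger latency).
            correct_streak, direction = 0, "up"
            latency += step_ms
        if last_dir is not None and direction != last_dir:
            reversals.append(latency)  # record each direction reversal
        last_dir = direction
    # The threshold estimate is the mean latency at the reversal points.
    return sum(reversals) / len(reversals)

random.seed(0)
print(f"Estimated JND: ~{run_staircase():.1f} ms")
```

The staircase descends quickly from an easily discriminable starting latency and then oscillates around the threshold, which is estimated from the reversal points.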


ACM Transactions on Computer-Human Interaction | 2014

Exploring and Understanding Unintended Touch during Direct Pen Interaction

Michelle Annett; Anoop Gupta; Walter F. Bischof

The user experience on tablets that support both touch and styli is less than ideal, due in large part to the problem of unintended touch or palm rejection. Devices are often unable to distinguish between intended touch (i.e., interaction on the screen intended for action) and unintended touch (i.e., incidental interaction from the palm, forearm, or fingers). This often results in stray ink strokes and accidental navigation, frustrating users. We present a data collection experiment where participants performed inking tasks, and where natural tablet and stylus behaviors were observed and analyzed from both digitizer and behavioral perspectives. An analysis and comparison of novel and existing unintended touch algorithms revealed that the use of stylus information can greatly reduce unintended touch. Our analysis also revealed many natural stylus behaviors that influence unintended touch, underscoring the importance of application and ecosystem demands, and providing many avenues for future research and technological advancement.


Presence: Teleoperators and Virtual Environments | 2010

Investigating the application of virtual reality systems to psychology and cognitive neuroscience research

Michelle Annett; Walter F. Bischof

With interest in virtual reality (VR) technologies, techniques, and devices growing at a quick pace, many researchers in areas such as psychology and cognitive neuroscience want to use VR. The software and VR systems available today do not support the skill sets or experimental requirements of this group of users. We describe a number of concerns and requirements that researchers express, and focus on the extent to which today's VR systems support non-VR experts. The work concludes with a number of suggestions and potential development avenues that should be undertaken to ensure that VR systems are usable by a wide range of researchers, regardless of their programming skills or technical backgrounds.


Human Factors in Computing Systems | 2013

Your left hand can do it too!: investigating intermanual, symmetric gesture transfer on touchscreens

Michelle Annett; Walter F. Bischof

This work examines intermanual gesture transfer, i.e., learning a gesture with one hand and performing it with the other. Using a traditional retention and transfer paradigm from the motor learning literature, participants learned four gestures on a touchscreen. The study found that touchscreen gestures transfer, and do so symmetrically. Regardless of the hand used during training, gestures were performed with a comparable level of error and speed by the untrained hand, even after 24 hours. In addition, the form of a gesture, i.e., its length or curvature, was found to have no influence on transferability. These results have important implications for the design of stroke-based gestural interfaces: acquisition could occur with either hand and it is possible to interchange the hand used to perform gestures. The work concludes with a discussion of these implications and highlights how they can be applied to gesture learning and current gestural systems.


User Interface Software and Technology | 2017

Frictio: Passive Kinesthetic Force Feedback for Smart Ring Output

Teng Han; Qian Han; Michelle Annett; Fraser Anderson; Da-Yuan Huang; Xing-Dong Yang

Smart rings have a unique form factor suitable for many applications; however, they offer little opportunity to provide the user with natural output. We propose passive kinesthetic force feedback as a novel output method for rotational input on smart rings. With this new output channel, friction force profiles can be designed, programmed, and felt by a user when they rotate the ring. This modality enables new interactions for ring form factors. We demonstrate the potential of this new haptic output method through Frictio, a prototype smart ring. In a controlled experiment, we determined the recognizability of six force profiles: Hard Stop, Ramp-Up, Ramp-Down, Resistant Force, Bump, and No Force. The results showed that participants could distinguish between the force profiles with 94% accuracy. We conclude by presenting a set of novel interaction techniques that Frictio enables, and discuss insights and directions for future research.


IEEE Virtual Reality Conference | 2010

Virtual equine assisted therapy

Fraser Anderson; Michelle Annett; Walter F. Bischof; Pierre Boulanger

People with a wide spectrum of disabilities, ranging from spinal injuries to autism, have benefited from equine assisted therapy (EAT). Using EAT, therapy patients have improved both physically and psychologically (e.g., demonstrating increased attention, motivation, and communication skills). There are still many open questions regarding this therapy and the reasons for its success. Many of these questions have remained unanswered due in large part to the uncontrolled nature of EAT. The Virtual Equine Assisted Therapy (VEAT) Project integrates a robotic platform with virtual reality technologies to provide a safe, controlled environment through which various aspects of EAT can be isolated and studied. The system incorporates realistic equine motions with visual, auditory, olfactory, and somatosensory stimuli to provide highly immersive experiences to patients.


Graphics Interface | 2017

No Need to Stop What You’re Doing: Exploring No-Handed Smartwatch Interaction

Seongkook Heo; Michelle Annett; Benjamin J. Lafreniere; Tovi Grossman; George W. Fitzmaurice

Smartwatches have the potential to enable quick micro-interactions throughout daily life. However, because they require both hands to operate, their full potential is constrained, particularly in situations where the user is actively performing a task with their hands. We investigate the space of no-handed interaction with smartwatches in scenarios where one or both hands are not free. Specifically, we present a taxonomy of scenarios in which standard touchscreen interaction with smartwatches is not possible, and discuss the key constraints that limit such interaction. We then implement a set of interaction techniques and evaluate them via two user studies: one where participants viewed video clips of the techniques and another where participants used the techniques in simulated hand-constrained scenarios. Our results revealed a preference for foot-based interaction and surfaced novel design considerations to be mindful of when designing for no-handed smartwatch interaction scenarios.

Collaboration


Dive into Michelle Annett's collaborations.

Top Co-Authors

Barrett Ens

University of Manitoba
