Publication


Featured research published by Jamie A. Ward.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Eye Movement Analysis for Activity Recognition Using Electrooculography

Andreas Bulling; Jamie A. Ward; Hans Gellersen; Gerhard Tröster

In this work, we investigate eye movement analysis as a new sensing modality for activity recognition. Eye movement data were recorded using an electrooculography (EOG) system. We first describe and evaluate algorithms for detecting three eye movement characteristics from EOG signals (saccades, fixations, and blinks) and propose a method for assessing repetitive patterns of eye movements. We then devise 90 different features based on these characteristics and select a subset of them using minimum redundancy maximum relevance (mRMR) feature selection. We validate the method using an eight-participant study in an office environment with an example set of five activity classes: copying a text, reading a printed paper, taking handwritten notes, watching a video, and browsing the Web. We also include periods with no specific activity (the NULL class). Using a support vector machine (SVM) classifier and person-independent (leave-one-person-out) training, we obtain an average precision of 76.1 percent and recall of 70.5 percent over all classes and participants. The work demonstrates the promise of eye-based activity recognition (EAR) and opens up discussion on the wider applicability of EAR to other activities that are difficult, or even impossible, to detect using common sensing modalities.
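
The evaluation scheme described above can be illustrated with a short sketch. The following is a minimal, hypothetical example of person-independent (leave-one-person-out) training and testing with an SVM, assuming the eye movement features have already been extracted and mRMR-reduced into a matrix X with labels y and participant IDs; the linear kernel and macro-averaged metrics are assumptions for the illustration, not details from the paper.

    # Hypothetical sketch of leave-one-person-out SVM evaluation; feature
    # extraction and mRMR selection are assumed to have already produced X.
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.svm import SVC
    from sklearn.metrics import precision_score, recall_score

    def evaluate_lopo(X, y, person_ids):
        """Train on all but one participant, test on the held-out one."""
        X, y, person_ids = np.asarray(X), np.asarray(y), np.asarray(person_ids)
        precisions, recalls = [], []
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=person_ids):
            clf = SVC(kernel="linear")  # kernel choice is an assumption
            clf.fit(X[train_idx], y[train_idx])
            y_pred = clf.predict(X[test_idx])
            precisions.append(precision_score(y[test_idx], y_pred,
                                              average="macro", zero_division=0))
            recalls.append(recall_score(y[test_idx], y_pred,
                                        average="macro", zero_division=0))
        return float(np.mean(precisions)), float(np.mean(recalls))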


International Conference on Pervasive Computing | 2004

Recognizing Workshop Activity Using Body Worn Microphones and Accelerometers

Paul Lukowicz; Jamie A. Ward; Holger Junker; Mathias Stäger; Gerhard Tröster; Amin Atrash; Thad Starner

The paper presents a technique to automatically track the progress of maintenance or assembly tasks using body-worn sensors. The technique is based on a novel way of combining data from accelerometers with simple frequency-matching sound classification. This includes the intensity analysis of signals from microphones at different body locations to correlate environmental sounds with user activity. To evaluate our method we apply it to activities in a wood shop. On a simulated assembly task our system can successfully segment and identify most shop activities in a continuous data stream with zero false positives and 84.4% accuracy.
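
The intensity analysis mentioned above lends itself to a simple sketch: a sound is attributed to the wearer's own hand activity when a wrist-worn microphone is markedly louder than a reference microphone elsewhere on the body. The window length, the chest reference position, and the ratio threshold below are illustrative assumptions rather than the paper's actual parameters.

    # Rough sketch of microphone intensity comparison between body locations;
    # all parameter values are illustrative assumptions.
    import numpy as np

    def rms(x):
        """Root-mean-square intensity of an audio window."""
        return float(np.sqrt(np.mean(np.square(np.asarray(x, dtype=float)))))

    def user_generated_sound(wrist_audio, chest_audio, win=4410, ratio_thresh=2.0):
        """Per-window flag: True where the wrist channel clearly dominates."""
        n_windows = min(len(wrist_audio), len(chest_audio)) // win
        flags = []
        for i in range(n_windows):
            w = rms(wrist_audio[i * win:(i + 1) * win])
            c = rms(chest_audio[i * win:(i + 1) * win]) + 1e-12  # avoid divide-by-zero
            flags.append(w / c > ratio_thresh)
        return np.array(flags)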


ACM Transactions on Intelligent Systems and Technology | 2011

Performance metrics for activity recognition

Jamie A. Ward; Paul Lukowicz; Hans-Werner Gellersen

In this article, we introduce and evaluate a comprehensive set of performance metrics and visualisations for continuous activity recognition (AR). We demonstrate how standard evaluation methods, often borrowed from related pattern recognition problems, fail to capture common artefacts found in continuous AR—specifically event fragmentation, event merging and timing offsets. We support our assertion with an analysis on a set of recently published AR papers. Building on an earlier initial work on the topic, we develop a frame-based visualisation and corresponding set of class-skew invariant metrics for the one class versus all evaluation. These are complemented by a new complete set of event-based metrics that allow a quick graphical representation of system performance—showing events that are correct, inserted, deleted, fragmented, merged and those which are both fragmented and merged. We evaluate the utility of our approach through comparison with standard metrics on data from three different published experiments. This shows that where event- and frame-based precision and recall lead to an ambiguous interpretation of results in some cases, the proposed metrics provide a consistently unambiguous explanation.
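
As an illustration of the event-level categories discussed above, the sketch below derives deletions, insertions, fragmented and merged events from frame-wise binary labels. It is a simplified reading of the idea, not the paper's reference implementation, and the half-open interval overlap test is an implementation assumption.

    # Simplified sketch: a truth event with no overlapping prediction is a
    # deletion, one split across several predictions is fragmented; a
    # predicted event with no overlapping truth is an insertion, one spanning
    # several truth events merges them.
    import numpy as np

    def events(frames):
        """Return half-open (start, end) index pairs of contiguous runs of 1s."""
        padded = np.concatenate(([0], np.asarray(frames), [0]))
        diff = np.diff(padded)
        return list(zip(np.where(diff == 1)[0], np.where(diff == -1)[0]))

    def overlaps(ev, others):
        """Count how many events in `others` overlap the event `ev`."""
        s, e = ev
        return sum(1 for (os_, oe) in others if max(s, os_) < min(e, oe))

    def score_events(truth_frames, pred_frames):
        t_ev, p_ev = events(truth_frames), events(pred_frames)
        return dict(
            deletions=sum(overlaps(t, p_ev) == 0 for t in t_ev),
            fragmented=sum(overlaps(t, p_ev) > 1 for t in t_ev),
            insertions=sum(overlaps(p, t_ev) == 0 for p in p_ev),
            merging=sum(overlaps(p, t_ev) > 1 for p in p_ev),
        )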


International Conference on Pervasive Computing | 2009

Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography

Andreas Bulling; Jamie A. Ward; Hans Gellersen; Gerhard Tröster

In this work we analyse the eye movements of people in transit in an everyday environment using a wearable electrooculographic (EOG) system. We compare three approaches for continuous recognition of reading activities: a string matching algorithm which exploits typical characteristics of reading signals, such as saccades and fixations; and two variants of Hidden Markov Models (HMMs) - mixed Gaussian and discrete. The recognition algorithms are evaluated in an experiment performed with eight subjects reading freely chosen text without pictures while sitting at a desk, standing, walking indoors and outdoors, and riding a tram. A total dataset of roughly 6 hours was collected with reading activity accounting for about half of the time. We were able to detect reading activities over all subjects with a top recognition rate of 80.2% (71.0% recall, 11.6% false positives) using string matching. We show that EOG is a potentially robust technique for reading recognition across a number of typical daily situations.
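
The string matching approach mentioned above can be sketched as follows: detected saccades are encoded as characters, and a reading-like pattern (several small rightward saccades followed by a large leftward return sweep) is searched for in the resulting string. The character alphabet, amplitude thresholds and regular expression here are assumptions for illustration; the paper's actual algorithm and parameters may differ.

    # Hedged sketch of encoding saccades as a string and matching a
    # reading-like pattern; thresholds and pattern are illustrative only.
    import re

    def encode_saccades(amplitudes, small_thresh=2.0, large_thresh=6.0):
        """Map signed horizontal saccade amplitudes (degrees) to characters."""
        out = []
        for a in amplitudes:
            if 0 < a <= small_thresh:
                out.append("r")   # small rightward saccade (reading along a line)
            elif a <= -large_thresh:
                out.append("L")   # large leftward return sweep (new line)
            else:
                out.append(".")   # anything else
        return "".join(out)

    def reading_segments(saccade_string, pattern=r"(?:r\.?){3,}L"):
        """Return (start, end) spans in the string that look like reading."""
        return [m.span() for m in re.finditer(pattern, saccade_string)]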


Tests and Proofs | 2012

Multimodal recognition of reading activity in transit using body-worn sensors

Andreas Bulling; Jamie A. Ward; Hans Gellersen

Reading is one of the most well-studied visual activities. Vision research traditionally focuses on understanding the perceptual and cognitive processes involved in reading. In this work we recognize reading activity by jointly analyzing eye and head movements of people in an everyday environment. Eye movements are recorded using an electrooculography (EOG) system; body movements using body-worn inertial measurement units. We compare two approaches for continuous recognition of reading: String matching (STR) that explicitly models the characteristic horizontal saccades during reading, and a support vector machine (SVM) that relies on 90 eye movement features extracted from the eye movement data. We evaluate both methods in a study performed with eight participants reading while sitting at a desk, standing, walking indoors and outdoors, and riding a tram. We introduce a method to segment reading activity by exploiting the sensorimotor coordination of eye and head movements during reading. Using person-independent training, we obtain an average precision for recognizing reading of 88.9% (recall 72.3%) using STR and of 87.7% (recall 87.9%) using SVM over all participants. We show that the proposed segmentation scheme improves the performance of recognizing reading events by more than 24%. Our work demonstrates that the joint analysis of eye and body movements is beneficial for reading recognition and opens up discussion on the wider applicability of a multimodal recognition approach to other visual and physical activities.
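
The segmentation step described above is only summarised in the abstract, so the sketch below is purely illustrative: per window it checks whether horizontal eye velocity (from EOG) and head yaw rate (from the IMU) move together, keeping windows whose correlation exceeds a threshold. Signal names, window length and threshold are assumptions; the paper's actual scheme may differ in detail.

    # Illustrative windowed eye-head coordination check; all parameters are
    # assumptions, not the paper's values.
    import numpy as np

    def coordinated_windows(eog_h_vel, head_yaw_rate, win=256, corr_thresh=0.3):
        """Return indices of windows whose |correlation| exceeds the threshold."""
        e_all = np.asarray(eog_h_vel, dtype=float)
        h_all = np.asarray(head_yaw_rate, dtype=float)
        n = min(len(e_all), len(h_all)) // win
        keep = []
        for i in range(n):
            e = e_all[i * win:(i + 1) * win]
            h = h_all[i * win:(i + 1) * win]
            if e.std() > 0 and h.std() > 0:
                r = np.corrcoef(e, h)[0, 1]
                if abs(r) > corr_thresh:
                    keep.append(i)
        return keep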


Ubiquitous Computing | 2009

Eye movement analysis for activity recognition

Andreas Bulling; Jamie A. Ward; Hans Gellersen; Gerhard Tröster

In this work we investigate eye movement analysis as a new modality for recognising human activity. We devise 90 different features based on the main eye movement characteristics: saccades, fixations and blinks. The features are derived from eye movement data recorded using a wearable electrooculographic (EOG) system. We describe a recognition methodology that combines minimum redundancy maximum relevance feature selection (mRMR) with a support vector machine (SVM) classifier. We validate the method in an eight participant study in an office environment using five activity classes: copying a text, reading a printed paper, taking hand-written notes, watching a video and browsing the web. In addition, we include periods with no specific activity. Using a person-independent (leave-one-out) training scheme, we obtain an average precision of 76.1% and recall of 70.5% over all classes and participants. We discuss the most relevant features and show that eye movement analysis is a rich and thus promising modality for activity recognition.
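
The mRMR step mentioned above can be sketched as a greedy search that repeatedly picks the feature with the highest relevance to the class labels minus its average redundancy with the features already chosen. The mutual information estimators from scikit-learn and the number of features to keep are assumptions for this illustration.

    # Minimal greedy mRMR sketch (difference criterion: relevance minus
    # redundancy); estimator choices and k are illustrative assumptions.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

    def mrmr_select(X, y, k=15):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        relevance = mutual_info_classif(X, y)          # MI(feature; class)
        selected, remaining = [], list(range(X.shape[1]))
        while remaining and len(selected) < k:
            scores = []
            for j in remaining:
                if selected:
                    redundancy = np.mean([mutual_info_regression(
                        X[:, [j]], X[:, s])[0] for s in selected])
                else:
                    redundancy = 0.0
                scores.append(relevance[j] - redundancy)
            best = remaining[int(np.argmax(scores))]
            selected.append(best)
            remaining.remove(best)
        return selected                                 # indices of chosen features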


Ambient Intelligence | 2005

Gesture spotting using wrist worn microphone and 3-axis accelerometer

Jamie A. Ward; Paul Lukowicz; Gerhard Tröster

We perform continuous activity recognition using only two wrist-worn sensors - a 3-axis accelerometer and a microphone. We build on the intuitive notion that two very different sensors are unlikely to agree in classification of a false activity. By comparing imperfect, jumping window classifications from each of these sensors, we are able to discern activities of interest from null or uninteresting activities. Where one sensor alone is unable to perform such partitioning, using this comparison we are able to report good overall system performance of up to 70% accuracy. In presenting these results, we attempt to give a more in-depth visualization of the errors than can be gathered from confusion matrices alone.
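
The comparison idea described above can be sketched as a simple agreement rule over per-window labels: a window is kept as an activity of interest only where the accelerometer-based and sound-based classifiers agree, and is otherwise treated as NULL. Using 0 for the NULL label is an assumption for this illustration, and the authors' actual comparison scheme may be more involved.

    # Sketch of agreement-based fusion of two imperfect per-window label
    # streams; label 0 standing for NULL is an assumption.
    import numpy as np

    def fuse_by_agreement(acc_labels, sound_labels, null_label=0):
        """Keep a window's label only where both sensor classifiers agree."""
        acc = np.asarray(acc_labels)
        snd = np.asarray(sound_labels)
        fused = np.full_like(acc, null_label)
        agree = acc == snd
        fused[agree] = acc[agree]
        return fused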


Location and Context Awareness | 2006

Evaluating performance in continuous context recognition using event-driven error characterisation

Jamie A. Ward; Paul Lukowicz; Gerhard Tröster

Evaluating the performance of a continuous activity recognition system can be a challenging problem. To date there is no widely accepted standard for dealing with this, and in general methods and measures are adapted from related fields such as speech and vision. Much of the problem stems from the often imprecise and ambiguous nature of the real-world events that an activity recognition system has to deal with. A recognised event might have variable duration, or be shifted in time from the corresponding real-world event. Equally it might be broken up into smaller pieces, or joined together to form larger events. Most evaluation attempts tend to smooth over these issues, using “fuzzy” boundaries, or some other parameter-based error decision, so as to make possible the use of standard performance measures (such as insertions and deletions). However, we argue that reducing the various facets of an activity recognition system into limited error categories – that were originally intended for different problem domains – can be overly restrictive. In this paper we attempt to identify and characterise the errors typical to continuous activity recognition, and develop a method for quantifying them in an unambiguous manner. By way of an initial investigation, we apply the method to an example taken from previous work, and discuss the advantages that this provides over two of the most commonly used methods.
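
The timing errors discussed above (recognised events shifted, starting early or ending late relative to the real-world event) can be illustrated with a small sketch that, for a matched truth/prediction event pair, measures how many frames the prediction overfills or underfills at each boundary. The terms and the frame-count formulation here follow the spirit of the paper but are not its exact definitions.

    # Illustrative timing-error measures for one matched truth/prediction
    # event pair, each given as half-open (start, end) frame indices.
    def timing_errors(truth_event, pred_event):
        (ts, te), (ps, pe) = truth_event, pred_event
        return dict(
            start_overfill=max(0, ts - ps),   # prediction starts before the truth
            start_underfill=max(0, ps - ts),  # prediction starts after the truth
            end_overfill=max(0, pe - te),     # prediction ends after the truth
            end_underfill=max(0, te - pe),    # prediction ends before the truth
        )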


Ubiquitous Computing | 2016

Towards recognising collaborative activities using multiple on-body sensors

Jamie A. Ward; Gerald Pirkl; Peter Hevesi; Paul Lukowicz

This paper describes the initial stages of new work on recognising collaborative activities involving two or more people. In the experiment described, a physically demanding construction task is completed by a team of four volunteers. The task, to build a large video wall, requires communication, coordination, and physical collaboration between group members. Minimal outside assistance is provided to better reflect the ad-hoc and loosely structured nature of real-world construction tasks. On-body inertial measurement units (IMUs) record each subject's head and arm movements; a wearable eye-tracker records gaze and ego-centric video; and audio is recorded from each person's head and dominant arm. A first look at the data reveals promising correlations between, for example, the movement patterns of two people carrying a heavy object. Also revealed are clues on how complementary information from different sensor types, such as sound and vision, might further aid collaboration recognition.
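
The kind of movement correlation hinted at above can be sketched as a normalised cross-correlation between two wearers' arm acceleration magnitudes, returning the lag at which their movements couple most strongly (for example, while jointly carrying a heavy object). Equal-length, time-synchronised streams and the maximum lag are assumptions of this illustration.

    # Hypothetical sketch of pairwise movement coupling between two wearers'
    # IMU streams; assumes equal-length, time-synchronised inputs.
    import numpy as np

    def accel_magnitude(xyz):
        """xyz: array of shape (n_samples, 3) -> per-sample magnitude."""
        return np.linalg.norm(np.asarray(xyz, dtype=float), axis=1)

    def coupling(acc_a, acc_b, max_lag=50):
        """Return (best_lag, correlation) between two acceleration streams."""
        a = np.asarray(acc_a, dtype=float)
        b = np.asarray(acc_b, dtype=float)
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        best_lag, best_r = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                x, y = a[lag:], b[:len(b) - lag]
            else:
                x, y = a[:len(a) + lag], b[-lag:]
            r = float(np.mean(x * y))
            if r > best_r:
                best_lag, best_r = lag, r
        return best_lag, best_r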


Ubiquitous Computing | 2016

What's my line? glass versus paper for cold reading in duologues

Jamie A. Ward; Paul Lukowicz

Part of an actor's job is being able to cold read: to take words directly from the page and to read them as if they were his or her own, often without the chance to read the lines beforehand. This is particularly difficult when two or more actors need to perform a dialogue cold. The need to hold a paper script in hand hinders the actor's ability to move freely. It also introduces a visual distraction between actors trying to engage with one another in a scene. This preliminary study uses cue cards displayed on Google Glass as an alternative to traditional scripts, and compares the two approaches through a series of two-person, cold-read performances. Each performance was judged by a panel of theatre experts. The study finds that Glass has the potential to aid performance by freeing actors to better engage with one another. However, it also finds that, by limiting the display to one line of script at a time, the Glass application used here makes it difficult for some actors to grasp the text. In a further study, when asked to later perform the text from memory, actors who had used Glass recalled only slightly fewer lines than when they had learned using paper.

Collaboration


Jamie A. Ward's top co-authors.

Thad Starner

Georgia Institute of Technology
