
Publications


Featured research published by Patrick Peursum.


International Conference on Computer Vision | 2005

Combining image regions and human activity for indirect object recognition in indoor wide-angle views

Patrick Peursum; Geoff A. W. West; Svetha Venkatesh

Traditional methods of object recognition rely on shape and so are very difficult to apply in cluttered, wide-angle and low-detail views such as surveillance scenes. To address this, a method of indirect object recognition is proposed, where human activity is used to infer both the location and identity of objects. No shape analysis is necessary. The concept is dubbed interaction signatures, since the premise is that a human interacts with objects in ways characteristic of the function of that object - for example, a person sits in a chair and drinks from a cup. The human-centred approach means that recognition is possible in low-detail views and is largely invariant to the shape of objects within the same functional class. This paper implements a Bayesian network for classifying region patches with object labels, building upon our previous work in automatically segmenting and recognising a human's interactions with the objects. Experiments show that interaction signatures can successfully find and label objects in low-detail views and are equally effective at recognising test objects that differ markedly in appearance from the training objects.
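
To make the evidence-accumulation idea concrete, here is a minimal sketch of labelling a scene region from repeated human interactions, assuming a small discrete set of object labels and interaction types; the labels and likelihood table are illustrative and this is not the Bayesian network described in the paper.

```python
import numpy as np

# Hypothetical labels, interaction types and likelihood table P(interaction | label);
# these values are illustrative, not taken from the paper.
LABELS = ["chair", "cup", "floor"]
INTERACTIONS = ["sit", "drink", "walk"]
likelihood = np.array([
    [0.80, 0.05, 0.15],   # chair: mostly sat on
    [0.05, 0.90, 0.05],   # cup:   mostly drunk from
    [0.10, 0.05, 0.85],   # floor: mostly walked on
])

def label_posterior(observed_interactions, prior=None):
    """Accumulate evidence for one region over repeated human interactions."""
    post = np.full(len(LABELS), 1.0 / len(LABELS)) if prior is None else prior.copy()
    for inter in observed_interactions:
        post *= likelihood[:, INTERACTIONS.index(inter)]   # Bayes update
        post /= post.sum()                                  # renormalise
    return post

# A region that people repeatedly sit on ends up labelled as a chair.
print(dict(zip(LABELS, label_posterior(["sit", "sit", "walk", "sit"]).round(3))))
```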


Computer Vision and Pattern Recognition | 2009

Efficient algorithms for subwindow search in object detection and localization

Senjian An; Patrick Peursum; Wanquan Liu; Svetha Venkatesh

Recently, a simple yet powerful branch-and-bound method called Efficient Subwindow Search (ESS) was developed to speed up sliding-window search in object detection. A major drawback of ESS is that its computational complexity varies widely, from O(n²) to O(n⁴) for n × n matrices. Our experimental experience shows that ESS's performance is highly related to the optimal confidence levels, which indicate the probability of the object's presence. In particular, when the object is not in the image, the optimal subwindow scores low and ESS may take a large number of iterations to converge to the optimal solution, and so perform very slowly. Addressing this problem, we present two significantly faster methods based on the linear-time Kadane's algorithm for 1D maximum subarray search. The first algorithm is a novel, computationally superior branch-and-bound method whose worst-case complexity is reduced to O(n³). Experiments on the PASCAL VOC 2006 data set demonstrate that this method is significantly and consistently faster (approximately 30 times faster on average) than the original ESS. Our second algorithm is an approximate algorithm based on alternating search, whose computational complexity is typically O(n²). Experiments show that (on average) it is 30 times faster again than our first algorithm, or 900 times faster than ESS. It is thus well-suited for real-time object detection.
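
For reference, the linear-time Kadane's algorithm that the paper builds on can be sketched as below; this is only the standard 1D maximum-subarray routine, not the paper's 2D branch-and-bound or alternating-search extensions.

```python
def max_subarray(scores):
    """Kadane's algorithm: best contiguous sum of a 1D score array in O(n).

    Returns (best_sum, start, end), end exclusive. In subwindow search, per-row
    or per-column classifier scores would play the role of `scores`.
    """
    best_sum, best_start, best_end = float("-inf"), 0, 0
    cur_sum, cur_start = 0, 0
    for i, s in enumerate(scores):
        if cur_sum <= 0:                    # restart the running interval
            cur_sum, cur_start = s, i
        else:
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best_start, best_end = cur_sum, cur_start, i + 1
    return best_sum, best_start, best_end

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # (6, 3, 7): the run [4, -1, 2, 1]
```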


Ubiquitous Intelligence and Computing | 2010

A smartphone-based obstacle sensor for the visually impaired

En Peng; Patrick Peursum; Ling Li; Svetha Venkatesh

In this paper, we present a real-time obstacle detection system that improves mobility for the visually impaired using a handheld smartphone. Though there are many existing assistants for the visually impaired, not a single one is low-cost, ultra-portable, non-intrusive and able to detect low-height objects on the floor. This paper proposes a system to detect any objects attached to the floor regardless of their height. Unlike some existing systems, where only histogram or edge information is used, the proposed system combines both cues and overcomes some limitations of existing systems. The obstacles on the floor in front of the user can be reliably detected in real time using the proposed system implemented on a smartphone. The proposed system has been tested on different types of floor and a field trial with five blind participants has been conducted. The experimental results demonstrate its reliability in comparison to existing systems.
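
As a rough illustration of combining a colour-histogram cue with an edge cue over a floor strip (not the paper's actual pipeline), a hypothetical OpenCV check might look like the following; the strip layout and thresholds are assumptions.

```python
import cv2
import numpy as np

def obstacle_score(frame_bgr, floor_rows=0.8, hist_thresh=0.6, edge_thresh=0.02):
    """Crude two-cue check of the floor strip in front of the camera.

    Cue 1: colour histogram of the near-floor strip vs. a reference floor strip.
    Cue 2: edge density in the near-floor strip.
    Strip boundaries and thresholds are illustrative assumptions.
    """
    h, w = frame_bgr.shape[:2]
    near = frame_bgr[int(floor_rows * h):, :]               # strip closest to the user
    ref = frame_bgr[int(0.6 * h):int(floor_rows * h), :]    # reference floor strip

    def hist(img):
        hh = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                          [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hh, hh).flatten()

    colour_sim = cv2.compareHist(hist(near), hist(ref), cv2.HISTCMP_CORREL)
    edges = cv2.Canny(cv2.cvtColor(near, cv2.COLOR_BGR2GRAY), 50, 150)
    edge_density = float(np.count_nonzero(edges)) / edges.size

    # Flag an obstacle if the near strip looks unlike the floor or is edge-heavy.
    return colour_sim < hist_thresh or edge_density > edge_thresh

# Example use on a captured frame: obstacle_score(frame) -> True/False
```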


Computer Vision and Pattern Recognition | 2007

Tracking-as-Recognition for Articulated Full-Body Human Motion Analysis

Patrick Peursum; Svetha Venkatesh; Geoff A. W. West

This paper addresses the problem of markerless tracking of a human in full 3D with a high-dimensional (29D) body model. Most work in this area has focused on achieving accurate tracking in order to replace marker-based motion capture, but does so at the cost of relying on relatively clean observing conditions. This paper takes a different perspective, proposing a body-tracking model that is explicitly designed to handle real-world conditions such as occlusions by scene objects, failure recovery, long-term tracking, auto-initialisation, generalisation to different people and integration with action recognition. To achieve these goals, an action's motions are modelled with a variant of the hierarchical hidden Markov model. The model is quantitatively evaluated with several tests, including comparison to the annealed particle filter, tracking different people and tracking with a reduced resolution and frame rate.


International Journal of Computer Vision | 2010

A Study on Smoothing for Particle-Filtered 3D Human Body Tracking

Patrick Peursum; Svetha Venkatesh; Geoff A. W. West

Stochastic models have become the dominant means of approaching the problem of articulated 3D human body tracking, where approximate inference is employed to tractably estimate the high-dimensional (∼30D) posture space. Of these approximate inference techniques, particle filtering is the most commonly used approach. However, filtering only takes into account past observations; almost no body tracking research employs smoothing to improve the filtered inference estimate, despite the fact that smoothing considers both past and future evidence and so should be more accurate. In an effort to objectively determine the worth of existing smoothing algorithms when applied to human body tracking, this paper investigates three approximate smoothed-inference techniques: particle-filtered backwards smoothing, variational approximation and Gibbs sampling. Results are quantitatively evaluated on both the HumanEva dataset and a scene containing occluding clutter. Surprisingly, it is found that existing smoothing techniques are unable to provide much improvement on the filtered estimate, and possible reasons why are explored and discussed.
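
To illustrate what particle-filtered backwards smoothing adds over filtering, here is a minimal sketch on an assumed 1D random-walk toy model, far simpler than the ~30D posture space studied in the paper; the backward pass reweights each filtered particle using the transition density to the next frame's particles.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(x, mu, sigma):
    """Gaussian density, broadcasting over arrays."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Toy 1D model (assumed): random-walk state, noisy direct observation of the state.
T, N, trans_sd, obs_sd = 50, 200, 0.3, 0.5
true_x = np.cumsum(rng.normal(0, trans_sd, T))
obs = true_x + rng.normal(0, obs_sd, T)

# --- Bootstrap particle filter: uses past evidence only ---
particles = np.zeros((T, N))
weights = np.zeros((T, N))
particles[0] = rng.normal(obs[0], obs_sd, N)
weights[0] = 1.0 / N
for t in range(1, T):
    idx = rng.choice(N, size=N, p=weights[t - 1])                        # resample
    particles[t] = particles[t - 1][idx] + rng.normal(0, trans_sd, N)    # propagate
    w = gauss(obs[t], particles[t], obs_sd)                              # weight by likelihood
    weights[t] = w / w.sum()

# --- Backward smoothing pass: folds in future evidence ---
smooth_w = weights.copy()
for t in range(T - 2, -1, -1):
    trans = gauss(particles[t + 1][:, None], particles[t][None, :], trans_sd)  # f(x_{t+1}^j | x_t^i)
    denom = trans @ weights[t]                              # sum_k w_t^k f(x_{t+1}^j | x_t^k)
    smooth_w[t] = weights[t] * ((trans / denom[:, None]).T @ smooth_w[t + 1])
    smooth_w[t] /= smooth_w[t].sum()

filtered = (particles * weights).sum(axis=1)
smoothed = (particles * smooth_w).sum(axis=1)
print("filtered MAE:", np.abs(filtered - true_x).mean())
print("smoothed MAE:", np.abs(smoothed - true_x).mean())
```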


Pervasive Computing and Communications | 2003

Object labelling from human action recognition

Patrick Peursum; Svetha Venkatesh; Geoff A. W. West; Hung Hai Bui

The paper presents a method for finding and classifying objects within real-world scenes by using the activity of humans interacting with these objects to infer the objects' identity. Objects are labelled using evidence accumulated over time and multiple instances of human interaction. This approach is inspired by the problems and opportunities that exist in recognition tasks for intelligent homes, namely cluttered, wide-angle views coupled with significant and repeated human activity within the scene. The advantages of such an approach include the ability to detect salient objects in a cluttered scene independently of each object's physical structure, adapt to changes in the scene and resolve conflicts in labels by weight of past evidence. This initial investigation seeks to label chairs and open floor spaces by recognising activities such as walking and sitting. Findings show that the approach can locate objects with a reasonably high degree of accuracy, with occlusions of the human actor being a significant aid in reducing over-labelling.


EURASIP Journal on Advances in Signal Processing | 2005

Robust recognition and segmentation of human actions using HMMs with missing observations

Patrick Peursum; Hung Hai Bui; Svetha Venkatesh; Geoff A. W. West

This paper describes the integration of missing observation data with hidden Markov models to create a framework that is able to segment and classify individual actions from a stream of human motion using an incomplete 3D human pose estimation. Based on this framework, a model is trained to automatically segment and classify an activity sequence into its constituent subactions during inferencing. This is achieved by introducing action labels into the observation vector and setting these labels as missing data during inferencing, thus forcing the system to infer the probability of each action label. Additionally, missing data provides recognition-level support for occlusions and imperfect silhouette segmentation, permitting the use of a fast (real-time) pose estimation that delegates the burden of handling undetected limbs onto the action recognition system. Findings show that the use of missing data to segment activities is an accurate and elegant approach. Furthermore, action recognition can be accurate even when almost half of the pose feature data is missing due to occlusions, since not all of the pose data is important all of the time.
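
A minimal sketch of the missing-data mechanism, assuming diagonal-Gaussian emissions: dimensions marked as missing (NaN here) are marginalised out of each state's observation likelihood, so the standard forward pass runs unchanged on incomplete pose vectors. The model sizes and parameters below are illustrative, and the paper's further trick of treating action labels as missing observation entries is not shown.

```python
import numpy as np

def emission_loglik(obs, means, stds):
    """Per-state log-likelihood of one observation vector, with NaNs as missing.

    For diagonal-Gaussian emissions, marginalising a missing dimension simply
    drops its term from the sum of per-dimension log densities.
    """
    visible = ~np.isnan(obs)                       # mask of observed dimensions
    d = obs[visible]
    mu, sd = means[:, visible], stds[:, visible]
    return -0.5 * (((d - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2)).sum(axis=1)

def forward_loglik(obs_seq, log_pi, log_A, means, stds):
    """Standard HMM forward pass in log space, tolerant of missing data."""
    alpha = log_pi + emission_loglik(obs_seq[0], means, stds)
    for obs in obs_seq[1:]:
        alpha = (np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
                 + emission_loglik(obs, means, stds))
    return np.logaddexp.reduce(alpha)

# Toy 2-state, 3D example with occluded (NaN) pose dimensions.
means = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
stds = np.ones((2, 3))
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])
seq = np.array([[0.1, np.nan, -0.2], [np.nan, np.nan, 1.9], [2.1, 1.8, np.nan]])
print(forward_loglik(seq, log_pi, log_A, means, stds))
```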


International Conference on Pattern Recognition | 2004

Human action segmentation via controlled use of missing data in HMMs

Patrick Peursum; Hung Hai Bui; Svetha Venkatesh; Geoff A. W. West

Segmentation of individual actions from a stream of human motion is an open problem in computer vision. This paper approaches the problem of segmenting higher-level activities into their component sub-actions using hidden Markov models modified to handle missing data in the observation vector. By controlling the use of missing data, action labels can be inferred from the observation vector during inferencing, thus performing segmentation and classification simultaneously. The approach is able to segment both prominent and subtle actions, even when subtle actions are grouped together. The advantage of this method over sliding windows and Viterbi state sequence interrogation is that segmentation is performed as a trainable task, and the temporal relationship between actions is encoded in the model and used as evidence for action labelling.


IEEE Pervasive Computing | 2004

Using interaction signatures to find and label chairs and floors

Patrick Peursum; Svetha Venkatesh; Geoff A. W. West; Hung Hai Bui

Our research takes an action-centered approach to automatically learning and classifying functional objects. Our premise is that interpreting human motion is much easier than recognizing arbitrary objects because the human body has constraints on its motion. Moreover, humans tend to interact differently with different objects, so you should be able to identify an object by analyzing how people move when they manipulate it. We call these motions the human-object interaction signature. An interaction signature is a method to find and classify objects on the basis of how humans interact with those objects. The method addresses many key problems encountered in smart-home monitoring systems.


International Conference on Pattern Recognition | 2006

Observation-Switching Linear Dynamic Systems for Tracking Humans Through Unexpected Partial Occlusions by Scene Objects

Patrick Peursum; Svetha Venkatesh; Geoff A. W. West

This paper focuses on the problem of tracking people through occlusions by scene objects. Rather than relying on models of the scene to predict when occlusions will occur, as other researchers have done, this paper proposes a linear dynamic system that switches between two alternative position measurements in order to handle occlusions as they occur. The filter automatically switches from a foot-based measure of position (assuming z = 0) to a head-based position measure (given the person's height) when an occlusion of the person's lower body occurs. No knowledge of the scene or its occluding objects is used. Unlike similar research (Fleuret et al., 2005; Zhao and Nevatia, 2004), the approach does not assume a fixed height for people and so is able to track humans through occlusions even when they change height during the occlusion. The approach is evaluated on three furnished scenes containing tables, chairs, desks and partitions. Occlusions range from occlusions of the legs and occlusions whilst seated to near-total occlusions where only the person's head is visible. Results show that the approach provides a significant reduction in false-positive tracks in a multi-camera environment, and more than halves the number of lost tracks in single monocular camera views.
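
To illustrate the switching idea (abstracting away the camera model and measurement extraction), here is a minimal constant-velocity Kalman filter on ground-plane position that swaps to a noisier head-derived measurement whenever the lower body is flagged as occluded; all matrices and noise values are assumptions, not the paper's parameters.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # constant-velocity model over state [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # both measures observe ground-plane (x, y)
              [0, 1, 0, 0]], dtype=float)
Q = 0.05 * np.eye(4)             # process noise (assumed)
R_foot = 0.02 * np.eye(2)        # foot-based measure: precise when feet are visible
R_head = 0.20 * np.eye(2)        # head-based measure: noisier (relies on height estimate)

def step(x, P, z, lower_body_occluded):
    """One predict/update cycle, switching the measurement noise by occlusion."""
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    R = R_head if lower_body_occluded else R_foot      # switch observation model
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ (z - H @ x)                            # update with the chosen measure
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Toy usage: each z would come from back-projecting the detected foot point
# (z = 0 plane) or the head point (using the person's estimated height).
x, P = np.zeros(4), np.eye(4)
for z, occluded in [((1.0, 0.5), False), ((1.9, 1.1), True), ((3.1, 1.4), False)]:
    x, P = step(x, P, np.array(z), occluded)
print(x)
```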

Collaboration


Top co-authors of Patrick Peursum include Senjian An (University of Western Australia).