Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David C. Hogg is active.

Publication


Featured research published by David C. Hogg.


British Machine Vision Conference | 1995

Learning the distribution of object trajectories for event recognition

Neil Johnson; David C. Hogg

The advent in recent years of robust, real-time, model-based tracking techniques for rigid and non-rigid moving objects has made automated surveillance and event recognition a possibility. A statistically based model of object trajectories is presented which is learnt from the observation of long image sequences. Trajectory data is supplied by a tracker using Active Shape Models, from which a model of the distribution of typical trajectories is learnt. Experimental results are included to show the generation of the model for trajectories within a pedestrian scene. We indicate how the resulting model can be used for the identification of atypical events.
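The trajectory model above can be illustrated with a minimal sketch. This is not the paper's algorithm: it substitutes plain k-means over trajectories resampled to a fixed length, and flags an event as atypical when its trajectory lies far from every learnt prototype. All function names, sizes, and the synthetic data are illustrative assumptions.

```python
# Sketch: learn a distribution of object trajectories, then score new
# trajectories by distance to the nearest learnt prototype (large => atypical).
import numpy as np

def resample(traj, n=8):
    """Resample a trajectory (list of (x, y)) to n evenly spaced points."""
    traj = np.asarray(traj, dtype=float)
    t = np.linspace(0.0, 1.0, len(traj))
    ts = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(ts, t, traj[:, i]) for i in range(2)])

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; each centre becomes a trajectory prototype."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(0)
    return centres

def atypicality(traj, centres):
    """Distance from the nearest learnt prototype."""
    v = resample(traj).ravel()
    return np.sqrt(((centres - v) ** 2).sum(1)).min()

# Typical training trajectories: roughly horizontal walks across the scene.
train = [[(x, 50 + dy) for x in range(0, 100, 10)] for dy in range(-5, 6)]
X = np.array([resample(t).ravel() for t in train])
centres = kmeans(X, k=2)

walk = [(x, 52) for x in range(0, 100, 10)]          # similar to training data
loiter = [(50 + 5 * np.cos(a), 50 + 5 * np.sin(a))   # circular loitering
          for a in np.linspace(0, 2 * np.pi, 10)]
assert atypicality(walk, centres) < atypicality(loiter, centres)
```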


Image and Vision Computing | 1983

Model-based vision: a program to see a walking person

David C. Hogg

For a machine to be able to ‘see’, it must know something about the object it is ‘looking’ at. A common method in machine vision is to provide the machine with general rather than specific knowledge about the object. An alternative technique, and the one used in this paper, is a model-based approach in which particulars about the object are given and these drive the analysis. The computer program described here, the WALKER model, maps images into a description in which a person is represented by a series of hierarchical levels, i.e. a person has an arm, which has a lower arm, which has a hand. The performance of the program is illustrated by superimposing the machine-generated picture over the original photographic images.


European Conference on Computer Vision | 1994

Learning flexible models from image sequences

Adam Baumberg; David C. Hogg

The “Point Distribution Model”, derived by analysing the modes of variation of a set of training examples, can be a useful tool in machine vision. One of the drawbacks of this approach to date is that the training data is acquired with human intervention where fixed points must be selected by eye from example images. A method is described for generating a similar flexible shape model automatically from real image data. A cubic B-spline is used as the shape vector for training the model. Large training sets are used to generate a robust model of the human profile for use in the labelling and tracking of pedestrians in real-world scenes. Furthermore, an extended model is described which incorporates direction of motion, allowing the extrapolation of direction from shape.
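The Point Distribution Model at the heart of the abstract above is a principal component analysis of aligned shape vectors. The sketch below shows the idea on synthetic shapes; the paper's shape vectors come from cubic B-splines fitted automatically to image data, whereas the training set, names, and mode count here are illustrative assumptions.

```python
# Sketch of a Point Distribution Model: PCA over aligned shape vectors,
# then synthesis of plausible shapes as mean + weighted modes of variation.
import numpy as np

def fit_pdm(shapes, n_modes=2):
    """shapes: (N, 2k) array of aligned shape vectors (x1, y1, ..., xk, yk)."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(evals)[::-1][:n_modes]    # keep the largest modes
    return mean, evecs[:, order], evals[order]

def synthesise(mean, modes, b):
    """Generate a shape from mode weights b: mean + sum_i b_i * mode_i."""
    return mean + modes @ b

rng = np.random.default_rng(1)
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
# Synthetic training set: ellipses whose vertical extent varies.
shapes = np.array([
    np.column_stack([np.cos(angles),
                     (1 + 0.3 * rng.standard_normal()) * np.sin(angles)]).ravel()
    for _ in range(50)])

mean, modes, evals = fit_pdm(shapes, n_modes=1)
# A shape two standard deviations along the first mode of variation.
shape = synthesise(mean, modes, np.array([2.0 * np.sqrt(evals[0])]))
assert shape.shape == mean.shape
```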


European Conference on Computer Vision | 2008

Detecting Carried Objects in Short Video Sequences

Dima Damen; David C. Hogg

We propose a new method for detecting objects such as bags carried by pedestrians depicted in short video sequences. In common with earlier work [1,2] on the same problem, the method starts by averaging aligned foreground regions of a walking pedestrian to produce a representation of motion and shape (known as a temporal template) that has some immunity to noise in foreground segmentations and phase of the walking cycle. Our key novelty is for carried objects to be revealed by comparing the temporal templates against view-specific exemplars generated offline for unencumbered pedestrians. A likelihood map obtained from this match is combined in a Markov random field with a map of prior probabilities for carried objects and a spatial continuity assumption, from which we obtain a segmentation of carried objects using the MAP solution. We have re-implemented the earlier state of the art method [1] and demonstrate a substantial improvement in performance for the new method on the challenging PETS2006 dataset [3]. Although developed for a specific problem, the method could be applied to the detection of irregularities in appearance for other categories of object that move in a periodic fashion.
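The temporal template described in the abstract above is, at its core, an average of aligned foreground masks. A minimal sketch, assuming mask extraction and alignment have been done upstream; the toy data and names are illustrative:

```python
# Sketch of a "temporal template": aligned binary foreground masks of a
# walking pedestrian are averaged, giving per-pixel foreground frequencies
# with some immunity to segmentation noise and walking phase.
import numpy as np

def temporal_template(masks):
    """masks: (T, H, W) binary foreground masks, aligned on the body.
    Returns the per-pixel frequency of being foreground over the sequence."""
    return np.asarray(masks, dtype=float).mean(axis=0)

# Toy sequence: a torso column that is always foreground, and a "swinging
# arm" pixel that is foreground in half the frames.
T, H, W = 4, 5, 3
masks = np.zeros((T, H, W), dtype=int)
masks[:, :, 1] = 1        # torso: foreground in every frame
masks[::2, 2, 2] = 1      # arm pixel: foreground in frames 0 and 2
tpl = temporal_template(masks)
assert tpl[2, 1] == 1.0 and tpl[2, 2] == 0.5
```

Carried objects then show up as regions where such a template disagrees with an exemplar template for an unencumbered pedestrian.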


Computer Vision and Image Understanding | 2001

Learning Variable-Length Markov Models of Behavior

Aphrodite Galata; Neil Johnson; David C. Hogg

In recent years there has been an increased interest in the modeling and recognition of human activities involving highly structured and semantically rich behavior such as dance, aerobics, and sign language. A novel approach for automatically acquiring stochastic models of the high-level structure of an activity without the assumption of any prior knowledge is presented. The process involves temporal segmentation into plausible atomic behavior components and the use of variable-length Markov models for the efficient representation of behaviors. Experimental results that demonstrate the synthesis of realistic sample behaviors and the performance of models for long-term temporal prediction are presented.
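The variable-length Markov model named in the abstract above can be sketched as a table of context counts in which prediction falls back through successively shorter contexts, so the effective memory length varies with the data. The toy alphabet below stands in for learnt atomic behaviour components; all names are illustrative.

```python
# Sketch of a variable-length Markov model over discrete behaviour symbols.
from collections import defaultdict

def train_vlmm(seq, max_order=3):
    """Count next-symbol frequencies for every context up to max_order."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq)):
        for order in range(max_order + 1):
            if i - order < 0:
                break
            context = tuple(seq[i - order:i])
            counts[context][seq[i]] += 1
    return counts

def predict(counts, history, max_order=3):
    """Most likely next symbol under the longest context seen in training."""
    for order in range(min(max_order, len(history)), -1, -1):
        context = tuple(history[len(history) - order:])
        if context in counts:
            nxt = counts[context]
            return max(nxt, key=nxt.get)
    return None

# 'abc' repeats, so after seeing 'ab' the model should predict 'c'.
seq = list("abcabcabcabd")
model = train_vlmm(seq)
assert predict(model, list("ab")) == "c"
```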


International Conference on Automatic Face and Gesture Recognition | 1996

Towards 3D hand tracking using a deformable model

Tony Heap; David C. Hogg

In this paper we first describe how we have constructed a 3D deformable Point Distribution Model of the human hand, capturing training data semi-automatically from volume images via a physically-based model. We then show how we have attempted to use this model in tracking an unmarked hand moving with 6 degrees of freedom (plus deformation) in real time using a single video camera. In the course of this we show how to improve on a weighted least-squares pose parameter approximation at little computational cost. We note the successes and shortcomings of our system and discuss how it might be improved.


ISPRS Journal of Photogrammetry and Remote Sensing | 1999

Automated reconstruction of 3D models from real environments

Vítor Sequeira; Kia Ng; Erik Wolfart; João G. M. Gonçalves; David C. Hogg

This paper describes an integrated approach to the construction of textured 3D scene models of building interiors from laser range data and visual images. This approach has been implemented in a collection of algorithms and sensors within a prototype device for 3D reconstruction, known as the EST (Environmental Sensor for Telepresence). The EST can take the form of a push trolley or of an autonomous mobile platform. The Autonomous EST (AEST) has been designed to provide an integrated solution for automating the creation of complete models. Embedded software performs several functions, including triangulation of the range data, registration of video texture, and registration and integration of data acquired from different capture points. Potential applications include facilities management for the construction industry and creating reality models to be used in general areas of virtual reality, for example, virtual studios, virtualised reality for content-related applications (e.g., CD-ROMs), social telepresence, architecture and others. The paper presents the main components of the EST/AEST, and presents some example results obtained from the prototypes. The reconstructed model is encoded in VRML format so that it is possible to access and view the model via the World Wide Web.


Computer Vision and Pattern Recognition (CVPR) | 2009

Recognizing Linked Events: Searching the Space of Feasible Explanations

Dima Damen; David C. Hogg

The ambiguity inherent in a localized analysis of events from video can be resolved by exploiting constraints between events and examining only feasible global explanations. We show how jointly recognizing and linking events can be formulated as labeling of a Bayesian network. The framework can be extended to multiple linking layers, expressing explanations as compositional hierarchies. The best global explanation is the maximum a posteriori (MAP) solution over a set of feasible explanations. The search space is sampled using reversible jump Markov chain Monte Carlo (RJMCMC). We propose a set of general move types that is extensible to multiple layers of linkage, and use simulated annealing to find the MAP solution given all observations. We provide experimental results for a challenging two-layer linkage problem, demonstrating the ability to recognise and link drop and pick events of bicycles in a rack over five days.
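The search over feasible global explanations described above can be illustrated on a toy drop-and-pick linking problem. The paper samples the space with RJMCMC and simulated annealing; the sketch below instead enumerates exhaustively, which is only viable because the example is tiny. The scoring function, constants, and event times are illustrative assumptions.

```python
# Sketch: each pick event may link to at most one earlier drop event; the
# best global explanation is the highest-scoring feasible assignment (MAP).
from itertools import product

def map_explanation(drops, picks, link_score, unlinked_score=0.1):
    """drops, picks: lists of event times. Returns (links, score), where
    links[i] is the drop index explaining pick i, or None if unlinked."""
    best = (None, -1.0)
    for assign in product([None] + list(range(len(drops))), repeat=len(picks)):
        used = [d for d in assign if d is not None]
        if len(used) != len(set(used)):
            continue          # a drop explains at most one pick
        if any(d is not None and drops[d] >= picks[i]
               for i, d in enumerate(assign)):
            continue          # links must run forward in time
        score = 1.0
        for i, d in enumerate(assign):
            score *= unlinked_score if d is None else link_score(drops[d], picks[i])
        if score > best[1]:
            best = (assign, score)
    return best

# Illustrative likelihood that a drop at t1 and a pick at t2 are the same
# bicycle: decays with the time gap between the two events.
score = lambda t1, t2: 1.0 / (1.0 + 0.01 * (t2 - t1))
links, p = map_explanation(drops=[10, 30], picks=[40, 50], link_score=score)
assert set(links) == {0, 1}   # each pick is explained by a distinct drop
```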


British Machine Vision Conference | 2004

Detecting inexplicable behaviour

Hannah Dee; David C. Hogg

This paper presents a novel approach to the detection of unusual or interesting events in videos involving certain types of intentional behaviour, such as pedestrian scenes. The approach is not based upon a statistical measure of typicality, but upon building an understanding of the way people navigate towards a goal. The activity of agents moving around within the scene is evaluated based upon whether the behaviour in question is consistent with a simple model of goal-directed behaviour and a model of those goals and obstacles known to be in the scene. The advantages of such an approach are multiple: it handles the presence of movable obstacles (for example, parked cars) with ease; trajectories which have never before been presented to the system can be classified as explicable; and the technique as a whole has a prima facie psychological plausibility. A system based upon these principles is demonstrated in two scenes: a car-park and a foyer.
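The goal-directed test described above can be caricatured in a few lines: a trajectory counts as explicable if it makes consistent progress towards one of the known scene goals, regardless of whether it is statistically typical. This is a crude stand-in for the paper's navigational model; the goals, tolerance, and progress measure are all illustrative assumptions.

```python
# Sketch: explicability as consistent progress towards a known goal,
# measured as the fraction of steps that reduce distance to that goal.
import math

def explicable(traj, goals, tolerance=0.9):
    """True if, for some goal, the agent moves closer to it in at least
    `tolerance` of the steps along the trajectory."""
    for g in goals:
        d = [math.dist(p, g) for p in traj]
        closer = sum(b < a for a, b in zip(d, d[1:]))
        if closer >= tolerance * (len(d) - 1):
            return True
    return False

goals = [(100, 0), (0, 100)]                        # e.g. scene exits
walk = [(x, 0) for x in range(0, 101, 10)]          # heads straight to a goal
wander = [(50, 50), (40, 60), (55, 45), (45, 55), (60, 40), (50, 50)]
assert explicable(walk, goals) and not explicable(wander, goals)
```

Note that a trajectory never seen in training is still explicable under this test, which is the property the abstract emphasises.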


Image and Vision Computing | 2000

Constructing qualitative event models automatically from video input

Jonathan H. Fernyhough; Anthony G. Cohn; David C. Hogg

We describe an implemented technique for generating event models automatically based on qualitative reasoning and a statistical analysis of video input. Using an existing tracking program which generates labelled contours for objects in every frame, the view from a fixed camera is partitioned into semantically relevant regions based on the paths followed by moving objects. The paths are indexed with temporal information so objects moving along the same path at different speeds can be distinguished. Using a notion of proximity based on the speed of the moving objects and qualitative spatial reasoning techniques, event models describing the behaviour of pairs of objects can be built, again using statistical methods. The system has been tested on a traffic domain and learns various event models expressed in the qualitative calculus which represent human observable events. The system can then be used to recognise subsequent selected event occurrences or unusual behaviours.
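The abstraction step in the approach above, from metric tracks to qualitative descriptions of pairs of objects, can be sketched simply: distances are mapped to a small vocabulary of relations and runs of the same relation are collapsed into a sequence. The labels, threshold, and toy tracks below are illustrative, not the paper's calculus.

```python
# Sketch: abstract a pair of tracks into a qualitative relation sequence
# ("approaching" / "stable" / "receding"), collapsing repeated relations.
import math

def qualitative_relations(traj_a, traj_b, eps=1.0):
    d = [math.dist(a, b) for a, b in zip(traj_a, traj_b)]
    rels = []
    for prev, cur in zip(d, d[1:]):
        if cur < prev - eps:
            rel = "approaching"
        elif cur > prev + eps:
            rel = "receding"
        else:
            rel = "stable"
        if not rels or rels[-1] != rel:
            rels.append(rel)       # collapse runs of the same relation
    return rels

# Two "vehicles": one drives past the other, so they approach then recede.
a = [(x, 0) for x in range(0, 60, 10)]
b = [(30, 0)] * 6
assert qualitative_relations(a, b) == ["approaching", "receding"]
```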

Collaboration


Dive into David C. Hogg's collaborations.

Top Co-Authors


Hannah Dee

Aberystwyth University


Kia Ng

University of Leeds
