Publication


Featured research published by James Orwell.


IEEE Transactions on Intelligent Transportation Systems | 2011

A Review of Computer Vision Techniques for the Analysis of Urban Traffic

Norbert Erich Buch; Sergio A. Velastin; James Orwell

Automatic video analysis from urban surveillance cameras is a fast-emerging field based on computer vision techniques. We present here a comprehensive review of the state-of-the-art computer vision for traffic video with a critical analysis and an outlook to future research directions. This field is of increasing relevance for intelligent transport systems (ITSs). The decreasing hardware cost and, therefore, the increasing deployment of cameras have opened a wide application field for video analytics. Several monitoring objectives such as congestion, traffic rule violation, and vehicle interaction can be targeted using cameras that were typically originally installed for human operators. Systems for the detection and classification of vehicles on highways have successfully been using classical visual surveillance techniques such as background estimation and motion tracking for some time. The urban domain is more challenging with respect to traffic density, lower camera angles that lead to a high degree of occlusion, and the variety of road users. Methods from object categorization and 3-D modeling have inspired more advanced techniques to tackle these challenges. There is no commonly used data set or benchmark challenge, which makes the direct comparison of the proposed algorithms difficult. In addition, evaluation under challenging weather conditions (e.g., rain, fog, and darkness) would be desirable but is rarely performed. Future work should be directed toward robust combined detectors and classifiers for all road users, with a focus on realistic conditions during evaluation.
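
As a toy illustration of the classical surveillance pipeline mentioned above (background estimation followed by motion detection), the sketch below maintains a running-average reference frame and thresholds the difference image; the smoothing rate and threshold are illustrative assumptions, not values from the review.

```python
import numpy as np

def update_background(background, frame, alpha=0.02):
    """Running-average background estimate (exponential smoothing)."""
    return (1.0 - alpha) * background + alpha * frame

def detect_foreground(background, frame, threshold=30.0):
    """Pixels that differ strongly from the reference frame are flagged as foreground."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff > threshold

# Toy usage on synthetic greyscale frames.
rng = np.random.default_rng(0)
background = rng.uniform(90, 110, size=(120, 160))   # static scene estimate
frame = background.copy()
frame[40:60, 70:90] += 80                            # a "vehicle" entering the view
mask = detect_foreground(background, frame)
background = update_background(background, frame)
print("foreground pixels:", int(mask.sum()))
```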


ACM Transactions on Computing Education / ACM Journal of Educational Resources in Computing | 2005

Automatic test-based assessment of programming: A review

Christopher Douce; David Livingstone; James Orwell

Systems that automatically assess student programming assignments have been designed and used for over forty years. Systems that objectively test and mark student programming work were developed simultaneously with programming assessment in the computer science curriculum. This article reviews a number of influential automatic assessment systems, including descriptions of the earliest systems, and presents some of the most recent developments. The final sections explore a number of directions automated assessment systems may take, presenting current developments alongside a number of important emerging e-learning specifications.
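
As a rough sketch of the test-based assessment idea reviewed here (not any particular system described in the article), a marker can execute a hidden suite of input/output checks against a submitted function and count the passing cases; the function `student_submission` and the test cases are hypothetical.

```python
def student_submission(xs):
    """A hypothetical student answer: return the sum of a list of numbers."""
    total = 0
    for x in xs:
        total += x
    return total

# (expected_output, input) pairs forming the hidden marking test suite.
TEST_CASES = [
    (0, []),
    (6, [1, 2, 3]),
    (-1, [4, -5]),
]

def mark(func, cases):
    """Run each test case; exceptions count as failures. Returns (passed, total)."""
    passed = 0
    for expected, arg in cases:
        try:
            if func(arg) == expected:
                passed += 1
        except Exception:
            pass
    return passed, len(cases)

print("score: %d/%d" % mark(student_submission, TEST_CASES))
```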


International Conference on Image Analysis and Processing | 1999

A multi-agent framework for visual surveillance

James Orwell; Simon Massey; Paolo Remagnino; Darrel Greenhill; Graeme A. Jones

We describe an architecture for implementing scene understanding algorithms in the visual surveillance domain. To achieve a high level description of events observed by multiple cameras, many inter-related event-driven processes must be executed. We use the agent paradigm to provide a framework in which these processes can be managed. Each camera has an associated agent, which detects and tracks moving regions of interest. This is used to construct and update object agents. Each camera is calibrated so that image co-ordinates can be transformed into ground plane locations. By comparing properties, two object agents can infer that they have the same referent, i.e. that two cameras are observing the same entity, and as a consequence merge identities. Each object's trajectory is classified with a type of activity, with reference to a ground plane agent. We demonstrate objects simultaneously tracked by two cameras, which infer this shared observation. The combination of the agent framework and the visual surveillance application provides an excellent environment for the development and evaluation of scene understanding algorithms.
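
A minimal sketch of the calibration step described above: each camera's image co-ordinates are mapped onto the common ground plane by a homography, and two object agents infer a shared referent when their ground-plane positions agree. The homography matrices and the 1 m merge radius are illustrative assumptions.

```python
import numpy as np

def to_ground_plane(H, image_point):
    """Apply a 3x3 image-to-ground-plane homography to an (x, y) pixel position."""
    x, y = image_point
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]

def same_referent(H_a, pt_a, H_b, pt_b, radius=1.0):
    """Two object agents merge identities if their ground-plane positions agree."""
    ga = to_ground_plane(H_a, pt_a)
    gb = to_ground_plane(H_b, pt_b)
    return np.linalg.norm(ga - gb) < radius

# Toy calibrations: pure scaling for camera A; camera B's mapping includes a 2 m ground-plane offset.
H_a = np.array([[0.05, 0, 0], [0, 0.05, 0], [0, 0, 1.0]])
H_b = np.array([[0.05, 0, -2.0], [0, 0.05, 0], [0, 0, 1.0]])
print(same_referent(H_a, (200, 100), H_b, (240, 100)))  # True: both observations map to the same spot
```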


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Real-Time Modeling of 3-D Soccer Ball Trajectories From Multiple Fixed Cameras

Jinchang Ren; Ming Xu; James Orwell; Graeme A. Jones

In this paper, model-based approaches for real-time 3-D soccer ball tracking are proposed, using image sequences from multiple fixed cameras as input. The main challenges include filtering false alarms, tracking through missing observations, and estimating 3-D positions from single or multiple cameras. The key innovations are: 1) incorporating motion cues and temporal hysteresis thresholding in ball detection; 2) modeling each ball trajectory as curve segments in successive virtual vertical planes so that the 3-D position of the ball can be determined from a single camera view; and 3) introducing four motion phases (rolling, flying, in possession, and out of play) and employing phase-specific models to estimate ball trajectories, which enables high-level semantics to be applied in low-level tracking. In addition, unreliable or missing ball observations are recovered using spatio-temporal constraints and temporal filtering. The system accuracy and robustness are evaluated by comparing the estimated ball positions and phases with manual ground-truth data of real soccer sequences.
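
A minimal geometric sketch of innovation 2): once the virtual vertical plane containing a trajectory segment is known, the 3-D ball position follows from intersecting the camera's viewing ray with that plane. The camera centre, ray direction and plane parameters below are made-up values, not calibration data from the paper.

```python
import numpy as np

def ray_plane_intersection(cam_centre, ray_dir, plane_point, plane_normal):
    """Intersect the viewing ray C + t*d with the plane (p - p0) . n = 0."""
    denom = np.dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane
    t = np.dot(plane_point - cam_centre, plane_normal) / denom
    return cam_centre + t * ray_dir

# Toy setup: a vertical plane x = 20 (normal along x), camera 5 m above the pitch origin.
cam_centre = np.array([0.0, 0.0, 5.0])
ray_dir = np.array([20.0, 10.0, -3.0])          # back-projected pixel ray (unnormalised)
plane_point = np.array([20.0, 0.0, 0.0])
plane_normal = np.array([1.0, 0.0, 0.0])        # vertical plane containing the trajectory segment

print(ray_plane_intersection(cam_centre, ray_dir, plane_point, plane_normal))  # 3-D ball position
```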


Versus | 1999

Multi-camera colour tracking

James Orwell; Paolo Remagnino; Graeme A. Jones

We propose a colour tracker for use in visual surveillance. The tracker is part of a framework designed to monitor a dynamic scene with more than one camera. Colour tracking complements spatial tracking: it can also be used over large temporal intervals, and between spatially uncalibrated cameras. The colour distributions from objects are modelled, and measures of difference between them are discussed. A context is required for assessing the significance of any difference. It is provided by an analysis of the noise processes: first on the camera capture, then on the underlying variability of the signal. We present results comparing parametric and explicit representations, the inclusion and omission of intensity data, and single and multiple cameras.
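
One common way to realise a measure of difference between colour distributions is a Bhattacharyya-style comparison of normalised histograms, sketched below on 1-D histograms as a simplification; this is an assumption for illustration rather than the paper's exact representation.

```python
import numpy as np

def colour_histogram(values, bins=16):
    """Normalised 1-D histogram of colour values in [0, 1)."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def bhattacharyya_distance(p, q):
    """0 for identical distributions; larger means more dissimilar."""
    bc = np.sum(np.sqrt(p * q))
    return -np.log(max(bc, 1e-12))

rng = np.random.default_rng(1)
obj_a = colour_histogram(rng.normal(0.3, 0.05, 500) % 1.0)   # object seen by camera 1
obj_b = colour_histogram(rng.normal(0.31, 0.05, 500) % 1.0)  # same object seen by camera 2
obj_c = colour_histogram(rng.normal(0.7, 0.05, 500) % 1.0)   # a different object
print(bhattacharyya_distance(obj_a, obj_b), bhattacharyya_distance(obj_a, obj_c))
```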


Computer Vision and Image Understanding | 2009

Tracking the soccer ball using multiple fixed cameras

Jinchang Ren; James Orwell; Graeme A. Jones; Ming Xu

This paper demonstrates innovative techniques for estimating the trajectory of a soccer ball from multiple fixed cameras. Since the ball is nearly always moving and frequently occluded, its size and shape appearance varies over time and between cameras. Knowledge about the soccer domain is utilized and expressed in terms of field, object and motion models to distinguish the ball from other movements in the tracking and matching processes. Using ground plane velocity, longevity, normalized size and color features, each of the tracks obtained from a Kalman filter is assigned a likelihood of representing the ball. This measure is further refined by reasoning through occlusions and back-tracking in the track history, which is shown to improve the accuracy and continuity of the results. Finally, a simple 3D trajectory model is presented, and the estimated 3D ball positions are fed back to constrain the 2D processing for more efficient and robust detection and tracking. Experimental results with quantitative evaluations from several long sequences are reported.
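
The Kalman filter that produces the candidate tracks can be sketched as a standard constant-velocity filter on ground-plane position; the frame rate and noise covariances below are illustrative, not the paper's tuned values.

```python
import numpy as np

# Constant-velocity Kalman filter for a 2-D ball position on the ground plane.
# State: [x, y, vx, vy]; the noise covariances below are illustrative assumptions.
dt = 0.04                                   # 25 fps
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = 0.01 * np.eye(4)                        # process noise
R = 0.25 * np.eye(2)                        # measurement noise

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.0, 0.0]), np.array([1.4, 0.1]), np.array([1.8, 0.2])]:
    x, P = predict(x, P)
    x, P = update(x, P, z)
print("estimated position/velocity:", np.round(x, 2))
```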


British Machine Vision Conference | 2002

Learning Surveillance Tracking Models for the Self-Calibrated Ground Plane

J. R. Renno; James Orwell; Graeme A. Jones

Tracking strategies usually employ motion and appearance models to locate observations of the tracked object in successive frames. The subsequent model update procedure renders the approach highly sensitive to the inevitable observation and occlusion noise processes. In this work, two robust mechanisms are proposed which rely on knowledge about the ground plane. First, a highly constrained bounding box appearance model is proposed which is determined solely from predicted image location and visual motion. Second, tracking is performed on the ground plane, enabling global real-world observation and dynamic noise models to be defined. Finally, a novel auto-calibration procedure is developed to recover the image-to-ground-plane homography by simply accumulating event observations.
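
The image-to-ground-plane homography recovered by the auto-calibration step can be illustrated with a standard direct linear transform (DLT) fit from point correspondences; the correspondences below stand in for accumulated event observations and are made up.

```python
import numpy as np

def fit_homography(img_pts, ground_pts):
    """Least-squares DLT estimate of H such that ground ~ H * image (homogeneous)."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, ground_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)              # null-space vector, reshaped to 3x3

def apply_homography(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Four (or more) image <-> ground correspondences, e.g. accumulated object footprints.
img_pts = [(100, 400), (500, 400), (450, 200), (150, 200)]
ground_pts = [(0.0, 0.0), (10.0, 0.0), (10.0, 20.0), (0.0, 20.0)]
H = fit_homography(img_pts, ground_pts)
print(np.round(apply_homography(H, (100, 400)), 3))  # should be close to (0, 0)
```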


International Conference on Image Processing | 2004

A general framework for 3D soccer ball estimation and tracking

Jinchang Ren; James Orwell; Graeme A. Jones; Ming Xu

A general framework for automatic 3D soccer ball estimation and tracking from multiple image sequences is proposed. Firstly, the ball trajectory is modelled as planar curves in consecutive virtual vertical planes. These planes are then determined by two ball positions with accurately estimated height, namely critical points, which are extracted by curvature thresholding and nearest distance of 3D lines from single or multiple views respectively. Finally, unreliable or missing ball observations are recovered using geometric constraints and polynomial interpolation. Experiments on video sequences from different cameras, with over 5000 frames each, have demonstrated a comprehensive solution for accurate and robust 3D ball estimation and tracking, with over 90% of ball estimations within 2.5 metres of manually derived ground truth.
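
The recovery of missing ball observations by polynomial interpolation can be sketched as fitting a low-degree polynomial to the observed samples and evaluating it at the gaps; the quadratic degree and the synthetic trajectory are assumptions for illustration.

```python
import numpy as np

def fill_missing(times, heights, query_times, degree=2):
    """Fit a polynomial to the observed (time, height) samples and evaluate at the gaps."""
    coeffs = np.polyfit(times, heights, degree)
    return np.polyval(coeffs, query_times)

# A flying ball observed at most frames; frames 3 and 4 were missed or unreliable.
observed_t = np.array([0, 1, 2, 5, 6, 7], dtype=float)
observed_z = 10.0 * observed_t - 0.98 * observed_t**2   # synthetic ballistic-style arc
missing_t = np.array([3.0, 4.0])
print(np.round(fill_missing(observed_t, observed_z, missing_t), 2))
```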


International Conference on Image Processing | 2004

Adaptive eigen-backgrounds for object detection

Jonathan D. Rymel; John-Paul Renno; Darrel Greenhill; James Orwell; Graeme A. Jones

Most tracking algorithms detect moving objects by comparing incoming images against a reference frame. Crucially, this reference image must adapt continuously to the current lighting conditions if objects are to be accurately differentiated. In this work, a novel appearance model method is presented based on the eigen-background approach. The image can be efficiently represented by a set of appearance models with few significant dimensions. Rather than accumulating the necessarily enormous training set to generate the eigen model, the described technique builds and adapts the eigen-model online, evolving both the parameters and the number of significant dimensions. For each incoming image, a reference frame may be efficiently hypothesized from a subsample of the incoming pixels. A comparative evaluation that measures segmentation accuracy using large amounts of manually derived ground truth is presented.
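
A minimal batch sketch of the eigen-background idea (the paper's contribution is an online, adaptive variant of this): PCA of background frames defines a low-dimensional subspace, projecting an incoming frame onto it yields a lighting-adapted reference, and large residuals mark foreground. The subspace dimension and threshold are illustrative assumptions.

```python
import numpy as np

def build_eigen_background(frames, k=3):
    """Batch PCA of vectorised background frames: mean image and top-k eigen-images."""
    X = frames.reshape(len(frames), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                       # shapes: (pixels,), (k, pixels)

def segment(frame, mean, basis, threshold=25.0):
    """Reconstruct the frame from the eigen-background subspace and threshold the residual."""
    x = frame.reshape(-1).astype(float) - mean
    reconstruction = basis.T @ (basis @ x) + mean
    residual = np.abs(frame.reshape(-1) - reconstruction)
    return (residual > threshold).reshape(frame.shape)

# Synthetic example: background frames under slowly varying illumination.
rng = np.random.default_rng(2)
base = rng.uniform(80, 120, size=(40, 60))
frames = np.stack([base * g + rng.normal(0, 1, base.shape) for g in np.linspace(0.8, 1.2, 20)])
mean, basis = build_eigen_background(frames)
test = base * 1.1
test[10:20, 20:30] += 90                      # an object enters the scene
print("foreground pixels:", int(segment(test, mean, basis).sum()))
```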


British Machine Vision Conference | 2009

3D extended histogram of oriented gradients (3DHOG) for classification of road users in urban scenes

Norbert Erich Buch; James Orwell; Sergio A. Velastin

This paper proposes and demonstrates a novel method for the detection and classification of individual vehicles and pedestrians in urban scenes. In this scenario, shadows, lights and various occlusions compromise the accuracy of foreground segmentation and hence there are challenges with conventional silhouette-based methods. 2D features derived from histograms of oriented gradients (HOG) have been shown to be effective for detecting pedestrians and other objects. However, the appearance of vehicles varies substantially with the viewing angle and local features may be often occluded. In this paper, a novel method is proposed that overcomes limitations in the use of 2D HOG. Full 3D models are used for the object categories to be detected and the feature patches are defined over these models. A calibrated camera allows an affine transform of the observation into a normalised representation from which ‘3DHOG’ features are defined. A variable set of interest points is used in the detection and classification processes, depending on which points in the 3D model are visible. Experiments on real CCTV data of urban scenes demonstrate the proposed method. The 3DHOG feature is compared with features based on FFT and simple histograms. A baseline method using overlap between wire-frame models and motion silhouettes is also included. The results demonstrate that the proposed method achieves comparable performance. In particular, an advantage of the proposed method is that it is more robust than motion silhouettes which are often compromised in real data by variable lighting, camera quality and occlusions from other objects.
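
The underlying HOG computation on a normalised patch, which the proposed '3DHOG' extends to patches defined over 3D models, can be sketched as follows; the cell size and bin count are typical choices rather than the paper's exact settings.

```python
import numpy as np

def hog_descriptor(patch, cell=8, bins=9):
    """Unsigned-gradient orientation histograms over non-overlapping cells, L2-normalised."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation in degrees
    h, w = patch.shape
    descriptor = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            ori = orientation[i:i + cell, j:j + cell].ravel()
            mag = magnitude[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(ori, bins=bins, range=(0.0, 180.0), weights=mag)
            descriptor.append(hist)
    descriptor = np.concatenate(descriptor)
    return descriptor / (np.linalg.norm(descriptor) + 1e-9)

rng = np.random.default_rng(3)
patch = rng.uniform(0, 255, size=(32, 32))        # stand-in for a normalised feature patch
print(hog_descriptor(patch).shape)                # (4x4 cells) x (9 bins) = (144,)
```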

Collaboration


Dive into James Orwell's collaborations.

Top Co-Authors

Ming Xu

Xi'an Jiaotong-Liverpool University
