
Publication


Featured research published by Brian C. Lovell.


Pattern Recognition | 2012

Shadow detection: A survey and comparative evaluation of recent methods

Andres Sanin; Conrad Sanderson; Brian C. Lovell

This paper presents a survey and a comparative evaluation of recent techniques for moving cast shadow detection. We identify shadow removal as a critical step for improving object detection and tracking. The survey covers methods published during the last decade, and places them in a feature-based taxonomy comprising four categories: chromacity, physical, geometry and textures. A selection of prominent methods across the categories is compared in terms of quantitative performance measures (shadow detection and discrimination rates, colour desaturation) as well as qualitative observations. Furthermore, we propose the use of tracking performance as an unbiased approach for determining the practical usefulness of shadow detection methods. The evaluation indicates that all shadow detection approaches make different contributions and all have individual strengths and weaknesses. Out of the selected methods, the geometry-based technique has strict assumptions and is not generalisable to various environments, but it is a straightforward choice when the objects of interest are easy to model and their shadows have a different orientation. The chromacity-based method is the fastest to implement and run, but it is sensitive to noise and less effective in scenes with low saturation. The physical method improves upon the accuracy of the chromacity-based method by adapting to local shadow models, but fails when the spectral properties of the objects are similar to those of the background. The small-region texture-based method is especially robust for pixels whose neighbourhood is textured, but may take longer to implement and is the most computationally expensive. The large-region texture-based method produces the most accurate results, but has a significant computational load due to its multiple processing steps.
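The chromacity-based family surveyed above can be illustrated with a simple HSV rule: a foreground pixel is classified as cast shadow when it is darker than the background but keeps a similar hue and saturation. The function name and threshold values below are illustrative assumptions, not taken from any surveyed method, and hue wrap-around is ignored for brevity.

```python
import numpy as np

def chromacity_shadow_mask(frame_hsv, bg_hsv,
                           v_ratio_low=0.4, v_ratio_high=0.9,
                           s_diff_max=0.15, h_diff_max=0.2):
    """Classify pixels as cast shadow: darker than the background model
    but with similar hue and saturation (all HSV channels in [0, 1]).
    Hue wrap-around is ignored in this simplified sketch."""
    h_f, s_f, v_f = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    h_b, s_b, v_b = bg_hsv[..., 0], bg_hsv[..., 1], bg_hsv[..., 2]
    v_ratio = v_f / np.maximum(v_b, 1e-6)       # attenuation of brightness
    return ((v_ratio >= v_ratio_low) & (v_ratio <= v_ratio_high) &
            (np.abs(s_f - s_b) <= s_diff_max) &
            (np.abs(h_f - h_b) <= h_diff_max))
```

A darkened pixel with unchanged chromacity passes the test, while a pixel whose hue differs strongly from the background (i.e. an actual object) is rejected.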


Signal Processing | 1998

Unsupervised cell nucleus segmentation with active contours

Pascal Bamford; Brian C. Lovell

The task of segmenting cell nuclei from cytoplasm in conventional Papanicolaou (Pap) stained cervical cell images is a classical image analysis problem which may prove to be crucial to the development of successful systems that automate the analysis of Pap smears for detection of cancer of the cervix. Although simple thresholding techniques will extract the nucleus in some cases, accurate unsupervised segmentation of very large image databases remains elusive. Conventional active contour models, as introduced by Kass, Witkin and Terzopoulos (1988), offer a number of advantages in this application, but suffer from the well-known drawbacks of initialisation and minimisation. Here we show that a Viterbi search-based dual active contour algorithm is able to overcome many of these problems and achieve over 99% accurate segmentation on a database of 20,130 Pap-stained cell images.
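The Viterbi search behind such contour fitting can be illustrated as a dynamic program over candidate radii along rays cast from a seed point inside the nucleus: each ray contributes edge evidence, and a smoothness penalty links neighbouring rays. This is a minimal open-chain sketch under assumed energy weights, not the paper's dual-contour formulation (a truly circular contour needs an extra pass over the first node).

```python
import numpy as np

def viterbi_contour(edge_strength, smooth_weight=1.0):
    """Dynamic-programming (Viterbi-style) search for a radial contour.

    edge_strength: (n_angles, n_radii) array; entry [a, r] is the edge
    evidence for placing the contour at radius index r on ray a.
    Returns one radius index per ray, minimising
        sum_a [ -edge_strength[a, r_a] + smooth_weight * (r_a - r_{a-1})**2 ]
    over an open chain of rays."""
    n_angles, n_radii = edge_strength.shape
    cost = -edge_strength[0].astype(float)          # cost-to-here per radius
    back = np.zeros((n_angles, n_radii), dtype=int)  # best predecessor
    radii = np.arange(n_radii)
    for a in range(1, n_angles):
        # trans[i, j]: cost of arriving at radius j from previous radius i
        trans = cost[:, None] + smooth_weight * (radii[:, None] - radii[None, :]) ** 2
        back[a] = np.argmin(trans, axis=0)
        cost = trans[back[a], radii] - edge_strength[a]
    # trace back the globally best path
    path = np.empty(n_angles, dtype=int)
    path[-1] = int(np.argmin(cost))
    for a in range(n_angles - 1, 0, -1):
        path[a - 1] = back[a, path[a]]
    return path
```

Because every ray is solved jointly, the result does not depend on a good initial contour, which is exactly the initialisation weakness of conventional snakes that the search avoids.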


Computer Vision and Pattern Recognition | 2011

Graph embedding discriminant analysis on Grassmannian manifolds for improved image set matching

Mehrtash Tafazzoli Harandi; Conrad Sanderson; Sareh Shirazi; Brian C. Lovell

A convenient way of dealing with image sets is to represent them as points on Grassmannian manifolds. While several recent studies explored the applicability of discriminant analysis on such manifolds, the conventional formalism of discriminant analysis suffers from not considering the local structure of the data. We propose a discriminant analysis approach on Grassmannian manifolds, based on a graph-embedding framework. We show that by introducing within-class and between-class similarity graphs to characterise intra-class compactness and inter-class separability, the geometrical structure of data can be exploited. Experiments on several image datasets (PIE, BANCA, MoBo, ETH-80) show that the proposed algorithm obtains considerable improvements in discrimination accuracy, in comparison to three recent methods: Grassmann Discriminant Analysis (GDA), Kernel GDA, and the kernel version of Affine Hull Image Set Distance. We further propose a Grassmannian kernel, based on canonical correlation between subspaces, which can increase discrimination accuracy when used in combination with previous Grassmannian kernels.
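A standard Grassmannian kernel built on canonical correlations, of the kind the paragraph above refers to, is the projection kernel: the sum of the squared cosines of the principal angles between two subspaces. A minimal sketch (the kernel actually proposed in the paper may differ in form):

```python
import numpy as np

def orthonormal_basis(X):
    """Orthonormal basis of the column span of X (thin QR)."""
    Q, _ = np.linalg.qr(X)
    return Q

def projection_kernel(Y1, Y2):
    """Grassmannian projection kernel: sum of squared canonical
    correlations between two subspaces, each given as a D x d matrix
    with orthonormal columns. Equals d for identical subspaces and
    0 for orthogonal ones."""
    return float(np.linalg.norm(Y1.T @ Y2, 'fro') ** 2)
```

In an image-set setting, each set is reduced to a d-dimensional subspace (e.g. by SVD of the stacked feature vectors), and the kernel matrix over all sets feeds the discriminant analysis.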


Computer Vision and Pattern Recognition | 2011

Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition

Yongkang Wong; Shaokang Chen; Sandra Mau; Conrad Sanderson; Brian C. Lovell

In video-based face recognition, face images are typically captured over multiple frames in uncontrolled conditions, where head pose, illumination, shadowing, motion blur and focus change over the sequence. Additionally, inaccuracies in face localisation can also introduce scale and alignment variations. Using all face images, including images of poor quality, can actually degrade face recognition performance. While one solution is to use only the ‘best’ images, current face selection techniques are incapable of simultaneously handling all of the above-mentioned issues. We propose an efficient patch-based face image quality assessment algorithm which quantifies the similarity of a face image to a probabilistic face model, representing an ‘ideal’ face. Image characteristics that affect recognition are taken into account, including variations in geometric alignment (shift, rotation and scale), sharpness, head pose and cast shadows. Experiments on the FERET and PIE datasets show that the proposed algorithm is able to identify images which are simultaneously the most frontal, aligned, sharp and well illuminated. Further experiments on a new video surveillance dataset (termed ChokePoint) show that the proposed method provides better face subsets than existing face selection techniques, leading to significant improvements in recognition accuracy.
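The idea of scoring a face against a probabilistic model of an ‘ideal’ face can be sketched with per-location diagonal Gaussian patch models: a higher average patch log-likelihood then indicates a more typical (frontal, aligned, well-lit) image. This is a simplified stand-in under assumed patch size and model form, not the paper's algorithm.

```python
import numpy as np

def train_patch_model(faces, patch=8):
    """Per-location diagonal Gaussian over vectorised non-overlapping
    patches, learned from aligned training faces of shape (N, H, W)."""
    N, H, W = faces.shape
    blocks = faces.reshape(N, H // patch, patch, W // patch, patch)
    blocks = blocks.transpose(0, 1, 3, 2, 4).reshape(N, -1, patch * patch)
    return blocks.mean(0), blocks.var(0) + 1e-6   # mu, var per patch location

def quality_score(face, mu, var, patch=8):
    """Average patch log-likelihood of one aligned face (H, W) under the
    trained model; higher means closer to the 'ideal' face."""
    H, W = face.shape
    blocks = face.reshape(H // patch, patch, W // patch, patch)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    ll = -0.5 * (np.log(2 * np.pi * var) + (blocks - mu) ** 2 / var)
    return float(ll.mean())
```

Frame selection then simply keeps the top-scoring faces from each track before recognition.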


International Conference on Computer Vision | 2013

Unsupervised Domain Adaptation by Domain Invariant Projection

Mahsa Baktashmotlagh; Mehrtash Tafazzoli Harandi; Brian C. Lovell; Mathieu Salzmann

Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.
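A standard way to measure the distance between the empirical source and target distributions, as described above, is the maximum mean discrepancy (MMD); in this line of work the quantity is minimised over projections of the data. A sketch of the biased RBF-kernel estimate (the kernel choice and bandwidth are illustrative):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy between
    samples X (n, d) and Y (m, d) under an RBF kernel.
    Zero when the two samples coincide; grows as the distributions
    move apart."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())
```

Learning the projection then amounts to minimising `rbf_mmd2(Xs @ W, Xt @ W)` over orthonormal W, so that projected source and target samples become statistically indistinguishable.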


Computer Vision and Pattern Recognition | 2011

Improved anomaly detection in crowded scenes via cell-based analysis of foreground speed, size and texture

Vikas Reddy; Conrad Sanderson; Brian C. Lovell

A robust and efficient anomaly detection technique is proposed, capable of dealing with crowded scenes where traditional tracking-based approaches tend to fail. Initial foreground segmentation of the input frames confines the analysis to foreground objects and effectively ignores irrelevant background dynamics. Input frames are split into non-overlapping cells, followed by extracting features based on motion, size and texture from each cell. Each feature type is independently analysed for the presence of an anomaly. Unlike most methods, a refined estimate of object motion is achieved by computing the optical flow of only the foreground pixels. The motion and size features are modelled by an approximated version of kernel density estimation, which is computationally efficient even for large training datasets. Texture features are modelled by an adaptively grown codebook, with the number of entries in the codebook selected in an online fashion. Experiments on the recently published UCSD Anomaly Detection dataset show that the proposed method obtains considerably better results than three recent approaches: MPPCA, social force, and mixture of dynamic textures (MDT). The proposed method is also several orders of magnitude faster than MDT, the next best performing method.
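The kernel-density modelling of motion and size features can be sketched as follows: a cell's feature vector that falls in a low-density region of the training distribution is flagged as anomalous. The bandwidth and threshold values are illustrative assumptions, and this plain KDE omits the paper's efficient approximation.

```python
import numpy as np

def kde_score(x, train, bandwidth=0.5):
    """Gaussian kernel density estimate of a feature vector x (d,)
    under training features train (n, d)."""
    d = train.shape[1]
    sq = ((train - x) ** 2).sum(1) / (2 * bandwidth ** 2)
    norm = (2 * np.pi * bandwidth ** 2) ** (d / 2)
    return float(np.exp(-sq).mean() / norm)

def is_anomalous(x, train, threshold=1e-3, bandwidth=0.5):
    """Flag a cell feature vector whose estimated density is too low."""
    return kde_score(x, train, bandwidth) < threshold
```

Each cell and feature type keeps its own training set, so a normally fast corridor and a normally slow doorway get different notions of "anomalous speed".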


IEEE Transactions on Signal Processing | 1992

The statistical performance of some instantaneous frequency estimators

Brian C. Lovell; Robert C. Williamson

The authors examine the class of smoothed central finite difference (SCFD) instantaneous frequency (IF) estimators, which are based on finite differencing of the phase of the analytic signal. These estimators are closely related to IF estimation via the (periodic) first moment, with respect to frequency, of discrete time-frequency representations (TFRs) in L. Cohen's (1966) class. The authors determine the distribution of this class of estimators and establish a framework which allows the comparison of several other estimators, such as the zero-crossing estimator and one based on linear regression on the signal phase. It is found that the regression IF estimator is biased and exhibits a large threshold for much of the frequency range. By replacing the linear convolution operation in the regression estimator with the appropriate convolution operation for circular data, the authors obtain the parabolic SCFD (PSCFD) estimator, which is unbiased and has a frequency-independent variance, yet retains the optimal performance and simplicity of the original estimator.
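The building block of the SCFD class, a plain (unsmoothed) central finite difference of the analytic-signal phase, can be sketched as below. The FFT-based analytic-signal construction is the standard one (equivalent to `scipy.signal.hilbert`); the smoothing windows that define the full SCFD family are omitted.

```python
import numpy as np

def cfd_instantaneous_frequency(x, fs=1.0):
    """Central-finite-difference IF estimate (in cycles per sample * fs)
    from the unwrapped phase of the analytic signal of real input x."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                 # one-sided spectrum weights
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    z = np.fft.ifft(X * h)          # analytic signal
    phase = np.unwrap(np.angle(z))
    # central difference: (phi[k+1] - phi[k-1]) / (2 * dt), divided by 2*pi
    return (phase[2:] - phase[:-2]) / (4 * np.pi) * fs
```

For a pure sinusoid the estimate recovers the true frequency; noise sensitivity is exactly what the smoothing in the SCFD class, and the circular-data correction in the PSCFD estimator, are designed to address.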


Lecture Notes in Computer Science | 2012

Sparse coding and dictionary learning for symmetric positive definite matrices: A kernel approach

Mehrtash Tafazzoli Harandi; Conrad Sanderson; Richard I. Hartley; Brian C. Lovell

Recent advances suggest that a wide range of computer vision problems can be addressed more appropriately by considering non-Euclidean geometry. This paper tackles the problem of sparse coding and dictionary learning in the space of symmetric positive definite matrices, which form a Riemannian manifold. With the aid of the recently introduced Stein kernel (related to a symmetric version of Bregman matrix divergence), we propose to perform sparse coding by embedding Riemannian manifolds into reproducing kernel Hilbert spaces. This leads to a convex and kernel version of the Lasso problem, which can be solved efficiently. We furthermore propose an algorithm for learning a Riemannian dictionary (used for sparse coding), closely tied to the Stein kernel. Experiments on several classification tasks (face recognition, texture classification, person re-identification) show that the proposed sparse coding approach achieves notable improvements in discrimination accuracy, in comparison to state-of-the-art methods such as tensor sparse coding, Riemannian locality preserving projection, and symmetry-driven accumulation of local features.
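The Stein kernel mentioned above follows directly from its definition: the symmetric Stein divergence S(X, Y) = log det((X + Y)/2) - 0.5 log det(X Y), exponentiated as k(X, Y) = exp(-beta * S(X, Y)). A minimal sketch:

```python
import numpy as np

def stein_divergence(X, Y):
    """Symmetric Stein (S) divergence between SPD matrices X and Y,
    computed via log-determinants for numerical stability."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def stein_kernel(X, Y, beta=1.0):
    """Stein kernel k(X, Y) = exp(-beta * S(X, Y)); equals 1 when X = Y
    and decays as the matrices diverge. Positive definiteness holds only
    for suitable values of beta."""
    return float(np.exp(-beta * stein_divergence(X, Y)))
```

The kernel matrix over a set of covariance descriptors then plays the role of the Gram matrix in the kernelised Lasso used for sparse coding.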


Workshop on Applications of Computer Vision | 2013

Spatio-temporal covariance descriptors for action and gesture recognition

Andres Sanin; Conrad Sanderson; Mehrtash Tafazzoli Harandi; Brian C. Lovell

We propose a new action and gesture recognition method based on spatio-temporal covariance descriptors and a weighted Riemannian locality preserving projection approach that takes into account the curved space formed by the descriptors. The weighted projection is then exploited during boosting to create a final multiclass classification algorithm that employs the most useful spatio-temporal regions. We also show how the descriptors can be computed quickly through the use of integral video representations. Experiments on the UCF sport, CK+ facial expression and Cambridge hand gesture datasets indicate superior performance of the proposed method compared to several recent state-of-the-art techniques. The proposed method is robust and does not require additional processing of the videos, such as foreground detection, interest-point detection or tracking.
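A covariance descriptor over a spatio-temporal block can be sketched as the covariance of simple per-pixel features. The feature set below (position, time, intensity, absolute gradients) is an illustrative choice, not necessarily the paper's, and the integral-video speed-up is omitted.

```python
import numpy as np

def covariance_descriptor(video):
    """Covariance descriptor of a spatio-temporal block.
    video: (T, H, W) grey-level block. Each pixel-in-time contributes the
    feature vector (x, y, t, intensity, |dx|, |dy|, |dt|); the descriptor
    is the 7 x 7 covariance matrix of these vectors, a symmetric
    positive semi-definite matrix living on (the closure of) the SPD
    manifold."""
    g = video.astype(float)
    T, H, W = g.shape
    t, y, x = np.meshgrid(np.arange(T), np.arange(H), np.arange(W),
                          indexing='ij')
    dt, dy, dx = np.gradient(g)
    feats = np.stack([x, y, t, g, np.abs(dx), np.abs(dy), np.abs(dt)],
                     axis=-1)
    F = feats.reshape(-1, 7)
    return np.cov(F.T)
```

Because the descriptor size depends only on the number of features, blocks of different spatial and temporal extent become directly comparable.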


Workshop on Applications of Computer Vision | 2012

Kernel analysis over Riemannian manifolds for visual recognition of actions, pedestrians and textures

Mehrtash Tafazzoli Harandi; Conrad Sanderson; Arnold Wiliem; Brian C. Lovell

A convenient way of analysing Riemannian manifolds is to embed them in Euclidean spaces, with the embedding typically obtained by flattening the manifold via tangent spaces. This general approach is not free of drawbacks. For example, only distances from points to the tangent pole are equal to true geodesic distances. This is restrictive and may lead to inaccurate modelling. Instead of using tangent spaces, we propose embedding into the Reproducing Kernel Hilbert Space by introducing a Riemannian pseudo kernel. We furthermore propose to recast a locality preserving projection technique from Euclidean spaces to Riemannian manifolds, in order to demonstrate the benefits of the embedding. Experiments on several visual classification tasks (gesture recognition, person re-identification and texture classification) show that, in comparison to tangent-based processing and state-of-the-art methods (such as tensor canonical correlation analysis), the proposed approach obtains considerable improvements in discrimination accuracy.
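The tangent-space drawback noted above (only distances to the pole are true geodesic distances) can be checked numerically on the manifold of symmetric positive definite matrices, flattening at the identity pole. This sketch uses the affine-invariant geodesic distance; the eigendecomposition-based helpers are standard, not from the paper.

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.log(w)) @ V.T

def spd_geodesic_dist(X, Y):
    """Affine-invariant geodesic distance on the SPD manifold:
    || logm(X^{-1/2} Y X^{-1/2}) ||_F."""
    w, V = np.linalg.eigh(X)
    X_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return np.linalg.norm(spd_log(X_inv_sqrt @ Y @ X_inv_sqrt), 'fro')

def tangent_dist(X, Y):
    """Distance after flattening the manifold at the identity pole:
    exact for pairs involving the pole, approximate elsewhere."""
    return np.linalg.norm(spd_log(X) - spd_log(Y), 'fro')
```

Distances from the identity agree exactly, while distances between two generic (non-commuting) points differ, which is precisely the modelling inaccuracy that motivates the kernel embedding instead.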

Collaboration


Dive into Brian C. Lovell's collaboration.

Top co-authors:

Arnold Wiliem (University of Queensland)
Ildiko Horvath (University of Queensland)
Shaokang Chen (University of Queensland)
Abbas Bigdeli (University of Queensland)
Ting Shan (University of Queensland)
Pascal Bamford (University of Queensland)