Publications


Featured research published by Azadeh Alavi.


International Conference on Image Processing | 2012

Clustering on Grassmann manifolds via kernel embedding with application to action analysis

Sareh Shirazi; Mehrtash Tafazzoli Harandi; Conrad Sanderson; Azadeh Alavi; Brian C. Lovell

With the aim of improving the clustering of data (such as image sequences) lying on Grassmann manifolds, we propose to embed the manifolds into Reproducing Kernel Hilbert Spaces. To this end, we define a measure of cluster distortion and embed the manifolds such that the distortion is minimised. We show that the optimal solution is a generalised eigenvalue problem that can be solved very efficiently. Experiments on several clustering tasks (including human action clustering) show that in comparison to the recent intrinsic Grassmann k-means algorithm, the proposed approach obtains notable improvements in clustering accuracy, while also being several orders of magnitude faster.
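
As a rough illustration of the kind of pipeline this abstract describes, the sketch below clusters Grassmann points with the projection kernel k(X, Y) = ||X^T Y||_F^2 and off-the-shelf spectral clustering. It is a minimal stand-in, not the paper's method: the distortion-minimising embedding and the generalised eigenvalue solver are not reproduced, and all data, sizes and parameters are invented for the example.

```python
# Minimal sketch (not the paper's code): cluster subspaces via a Grassmann
# projection kernel followed by spectral clustering on the Gram matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

def grassmann_point(data, p):
    """Summarise a set of vectors (e.g. an image sequence) by the p-dimensional
    subspace spanned by its leading left singular vectors."""
    U, _, _ = np.linalg.svd(data, full_matrices=False)
    return U[:, :p]                              # orthonormal basis, shape (d, p)

def projection_kernel(subspaces):
    """Gram matrix of the projection kernel k(X, Y) = ||X^T Y||_F^2."""
    n = len(subspaces)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = np.linalg.norm(subspaces[i].T @ subspaces[j], 'fro') ** 2
    return K

# toy example: 30 random "image sequences", each summarised by a 3-dim subspace
rng = np.random.default_rng(0)
points = [grassmann_point(rng.standard_normal((20, 10)), p=3) for _ in range(30)]
K = projection_kernel(points)
labels = SpectralClustering(n_clusters=3, affinity='precomputed',
                            random_state=0).fit_predict(K)
print(labels)
```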


Pattern Recognition | 2014

Visual learning and classification of human epithelial type 2 cell images through spontaneous activity patterns

Yan Yang; Arnold Wiliem; Azadeh Alavi; Brian C. Lovell; Peter Hobson

Identifying the presence of anti-nuclear antibody (ANA) in human epithelial type 2 (HEp-2) cells via the indirect immunofluorescence (IIF) protocol is commonly used to diagnose various connective tissue diseases in clinical pathology tests. As it is a labour- and time-intensive diagnostic process, several computer aided diagnostic (CAD) systems have been proposed. However, existing CAD systems suffer from numerous shortcomings due to the selection of features, which is commonly based on expert experience. Such a choice of features may not work well when the CAD systems are retasked to another dataset. To address this, in our previous work we proposed a novel approach that learns a set of filters from HEp-2 cell images. It is inspired by the receptive fields in the mammalian visual system, since the receptive fields can be thought of as a set of filters for similar shapes. We obtain robust filters for HEp-2 cell classification by employing the independent component analysis (ICA) framework. However, this approach may be held back by one particular problem: ICA learning requires a sufficiently large volume of training data, which is not always available. In this paper, we demonstrate a biologically inspired solution to this issue via the use of spontaneous activity patterns (SAP). Spontaneous activity patterns, which are related to the spontaneous neural activity initiated by chemical release in the brain, are found to be the typical stimuli for visual cell development in newborn animals. In the classification system for HEp-2 cells, we propose to model SAP as a set of small image patches containing randomly positioned Gaussian spots. The SAP image patches are generated and mixed with the training images in order to learn filters via the ICA framework. The obtained filters are used to extract a set of responses from a HEp-2 cell image. We then take regions from this set of responses, stack them into “cubic regions”, and apply classification based on the correlation information of the features. We show that applying the additional SAP leads to better classification performance on HEp-2 cell images compared to using only the existing patterns for training ICA filters. The improvement in classification is particularly significant when there are not enough specimen images available in the training set, as SAP adds more variation to the existing data, which makes the learned ICA model more robust. We show that the proposed approach consistently outperforms three recently proposed CAD systems on two publicly available datasets: ICPR HEp-2 contest and SNPHEp-2.
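
To make the SAP idea concrete, here is a minimal sketch (not the authors' code; the patch size, spot count, spot width and filter count are all assumptions) that generates Gaussian-spot patches and mixes them with training patches before ICA filter learning with scikit-learn's FastICA.

```python
# Hedged illustration: spontaneous-activity-pattern (SAP) patches as small
# images containing randomly positioned Gaussian spots, mixed with real
# patches before ICA filter learning.
import numpy as np
from sklearn.decomposition import FastICA

def make_sap_patch(size=16, n_spots=3, sigma=1.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    yy, xx = np.mgrid[0:size, 0:size]
    patch = np.zeros((size, size))
    for _ in range(n_spots):
        cy, cx = rng.uniform(0, size, 2)          # random spot centre
        patch += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return patch

rng = np.random.default_rng(0)
sap = np.stack([make_sap_patch(rng=rng).ravel() for _ in range(2000)])
# real_patches would be patches cropped from HEp-2 training images (n, 256);
# replaced by noise here purely so the sketch runs end to end.
real_patches = rng.standard_normal((2000, 256))
training = np.vstack([real_patches, sap])
filters = FastICA(n_components=64, random_state=0, max_iter=500).fit(training).components_
print(filters.shape)   # (64, 256): 64 learned 16x16 filters
```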


International Conference on Image Processing | 2013

Multi-shot person re-identification via relational Stein divergence

Azadeh Alavi; Yan Yang; Mehrtash Tafazzoli Harandi; Conrad Sanderson

Person re-identification is particularly challenging due to significant appearance changes across separate camera views. In order to re-identify people, a representative human signature should effectively handle differences in illumination, pose and camera parameters. While general appearance-based methods are modelled in Euclidean spaces, it has been argued that some applications in image and video analysis are better modelled via non-Euclidean manifold geometry. To this end, recent approaches represent images as covariance matrices, and interpret such matrices as points on Riemannian manifolds. As direct classification on such manifolds can be difficult, in this paper we propose to represent each manifold point as a vector of similarities to class representers, via a recently introduced form of Bregman matrix divergence known as the Stein divergence. This is followed by using a discriminative mapping of similarity vectors for final classification. The use of similarity vectors is in contrast to the traditional approach of embedding manifolds into tangent spaces, which can suffer from representing the manifold structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS datasets for the person re-identification task show that the proposed approach obtains better performance than recent techniques such as Histogram Plus Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local Features.
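
For readers unfamiliar with the ingredients, the sketch below shows a region covariance descriptor and the Stein divergence S(X, Y) = log det((X + Y)/2) - 0.5 log det(XY) between two SPD matrices. It only illustrates the building blocks, with an assumed feature dimension; the paper's similarity vectors and discriminative mapping are not reproduced here.

```python
# Sketch under stated assumptions (not the authors' code): a region covariance
# descriptor and the Stein divergence between two SPD matrices.
import numpy as np

def covariance_descriptor(features):
    """features: (n_pixels, d) per-pixel features (e.g. position, colour,
    gradients).  Returns a d x d covariance matrix, regularised so that it is
    strictly positive definite."""
    C = np.cov(features, rowvar=False)
    return C + 1e-6 * np.eye(C.shape[0])

def stein_divergence(X, Y):
    """S(X, Y) = log det((X + Y)/2) - 0.5 * (log det X + log det Y)."""
    _, logdet_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, logdet_x = np.linalg.slogdet(X)
    _, logdet_y = np.linalg.slogdet(Y)
    return logdet_mid - 0.5 * (logdet_x + logdet_y)

rng = np.random.default_rng(0)
A = covariance_descriptor(rng.standard_normal((500, 7)))
B = covariance_descriptor(rng.standard_normal((500, 7)))
print(stein_divergence(A, B), stein_divergence(A, A))   # second value is ~0
```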


Workshop on Applications of Computer Vision | 2014

Random projections on manifolds of Symmetric Positive Definite matrices for image classification

Azadeh Alavi; Arnold Wiliem; Kun Zhao; Brian C. Lovell; Conrad Sanderson

Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, manifold shape is only approximated which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well-preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
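
One plausible reading of the projection step, sketched under heavy assumptions (the anchor set, the Stein-divergence kernel and the random weights are all choices made for illustration, not necessarily the paper's construction): a hyperplane in the RKHS is expressed as a random weighting of kernel evaluations against anchor SPD matrices, and each matrix is then represented by its vector of projection coefficients, which any Euclidean classifier can consume.

```python
# Loose sketch of random projections of kernelised SPD matrices.
import numpy as np

def stein_divergence(X, Y):
    return (np.linalg.slogdet((X + Y) / 2.0)[1]
            - 0.5 * (np.linalg.slogdet(X)[1] + np.linalg.slogdet(Y)[1]))

def stein_kernel(X, anchors, sigma=1.0):
    """Kernel evaluations of X against a set of anchor SPD matrices."""
    return np.array([np.exp(-sigma * stein_divergence(X, A)) for A in anchors])

def random_projection_coefficients(X, anchors, W, sigma=1.0):
    """W: (n_projections, n_anchors) random weights defining hyperplanes in the
    RKHS spanned by the anchors; returns X's coordinate on each hyperplane."""
    return W @ stein_kernel(X, anchors, sigma)

rng = np.random.default_rng(0)
def random_spd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

anchors = [random_spd(5) for _ in range(20)]
W = rng.standard_normal((10, len(anchors)))          # 10 random hyperplanes
X = random_spd(5)
print(random_projection_coefficients(X, anchors, W))  # 10-dim Euclidean vector
```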


IEEE International Conference on Automatic Face & Gesture Recognition | 2017

KEPLER: Keypoint and Pose Estimation of Unconstrained Faces by Learning Efficient H-CNN Regressors

Amit Kumar; Azadeh Alavi; Rama Chellappa

Keypoint detection is one of the most important pre-processing steps in tasks such as face modeling, recognition and verification. In this paper, we present an iterative method for Keypoint Estimation and Pose prediction of unconstrained faces by Learning Efficient H-CNN Regressors (KEPLER) for addressing the face alignment problem. Recent state-of-the-art methods have shown improvements in face keypoint detection by employing Convolutional Neural Networks (CNNs). Although a simple feed-forward neural network can learn the mapping between input and output spaces, it cannot learn the inherent structural dependencies. We present a novel architecture called H-CNN (Heatmap-CNN) which captures structured global and local features and thus favors accurate keypoint detection. H-CNN is jointly trained on the visibility, fiducials and 3D pose of the face. As the iterations proceed, the error decreases, making the gradients small and thus requiring efficient training of DCNNs to mitigate this. KEPLER performs global corrections in pose and fiducials for the first four iterations, followed by local corrections in a subsequent stage. As a by-product, KEPLER also accurately provides the 3D pose (pitch, yaw and roll) of the face. In this paper, we show that, without using any 3D information, KEPLER outperforms state-of-the-art methods for alignment on challenging datasets such as AFW [38] and AFLW [17].
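
The sketch below is a toy PyTorch model in the spirit of the heatmap idea: a small convolutional backbone with one heatmap channel per keypoint and a three-value pose head. It is illustrative only; the real H-CNN architecture, its joint visibility/fiducial/pose losses and the iterative correction scheme are not reproduced, and every layer size here is invented.

```python
# Tiny heatmap-regression CNN, for illustration only.
import torch
import torch.nn as nn

class TinyHeatmapCNN(nn.Module):
    def __init__(self, n_keypoints=21):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # one heatmap channel per keypoint
        self.heatmap_head = nn.Conv2d(64, n_keypoints, 1)
        # coarse 3D pose (pitch, yaw, roll) from pooled features
        self.pose_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 3))

    def forward(self, x):
        feats = self.backbone(x)
        return self.heatmap_head(feats), self.pose_head(feats)

model = TinyHeatmapCNN()
heatmaps, pose = model(torch.randn(2, 3, 128, 128))
print(heatmaps.shape, pose.shape)   # (2, 21, 128, 128), (2, 3)
```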


International Conference on Image Processing | 2013

Classification of human epithelial type 2 cell images using independent component analysis

Yan Yang; Arnold Wiliem; Azadeh Alavi; Peter Hobson

Identifying the presence of Anti-Nuclear Antibody in Human Epithelial type 2 (HEp-2) cells via Indirect Immunofluorescence (IIF) is commonly used to diagnose various connective tissue diseases in clinical pathology tests. This pathology test can be automated by computer vision algorithms. However, the existing automated systems, namely Computer Aided Diagnostic (CAD) systems, suffer from numerous shortcomings such as using pre-selected features. To overcome such shortcomings, we propose a novel approach that learns filters from image statistics. Specifically, we train a filter bank from unlabelled cell images using Independent Component Analysis (ICA). The filter bank is then applied to images in order to extract a set of filter responses. We extract regions from this set of responses and stack them into “cubic regions”. Average filter responses over 1 × 1, 2 × 2 and 4 × 4 grids of the cubic region are used as the “ICA feature”. ICA features from multiple regions are stored in a feature collection matrix to represent each image. Finally, we use a Support Vector Machine (SVM) in conjunction with a histogram correlation kernel to classify the cell images. We show that our approach outperforms three recently proposed CAD systems on two publicly available datasets: ICPR HEp-2 contest and SNPHEp-2.
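
A rough sketch of the feature-extraction stage only, with placeholder data and assumed sizes: a FastICA filter bank is learned from unlabelled patches, applied to a cell image by convolution, and the responses are average-pooled over 1x1, 2x2 and 4x4 grids to form the "ICA feature". The histogram correlation kernel and SVM training are omitted.

```python
# Illustration of ICA filter learning and grid-pooled filter responses.
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patch_size, n_filters = 8, 16
patches = rng.standard_normal((5000, patch_size * patch_size))  # stand-in for real patches
ica = FastICA(n_components=n_filters, random_state=0, max_iter=500).fit(patches)
filters = ica.components_.reshape(n_filters, patch_size, patch_size)

def ica_feature(image, filters):
    feats = []
    for f in filters:
        resp = np.abs(convolve2d(image, f, mode='same'))
        for g in (1, 2, 4):                       # 1x1, 2x2, 4x4 average pooling
            for rows in np.array_split(resp, g, axis=0):
                for block in np.array_split(rows, g, axis=1):
                    feats.append(block.mean())
    return np.array(feats)                        # n_filters * (1 + 4 + 16) values

cell_image = rng.standard_normal((64, 64))        # stand-in for a HEp-2 cell image
print(ica_feature(cell_image, filters).shape)     # (336,)
```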


International Symposium on Neural Networks | 2009

Automated classification of dopaminergic neurons in the rodent brain

Azadeh Alavi; Brenton Luke Cavanagh; Gervase Tuxworth; Adrian Cuda Banda Meedeniya; Alan Mackay-Sim; Michael Myer Blumenstein

Accurate morphological characterization of the multiple neuronal classes of the brain would facilitate the elucidation of brain function and of the functional changes that underlie neurological disorders such as Parkinson's disease or schizophrenia. Manual morphological analysis is very time-consuming and suffers from a lack of accuracy because some cell characteristics are not readily quantified. This paper presents an investigation into automating the classification of dopaminergic neurons located in the brainstem of the rodent, a region critical to the regulation of motor behaviour and implicated in multiple neurological disorders including Parkinson's disease. Using a Carl Zeiss Axioimager Z1 microscope with Apotome, salient information was obtained from images of dopaminergic neurons using a structural feature extraction technique. A data set of 100 images of neurons was generated and a set of 17 features was used to describe their morphology. In order to identify differences between neurons, 2-dimensional and 3-dimensional image representations were analyzed. This paper compares the performance of three popular classification methods in bioimage classification (Support Vector Machines (SVMs), Back Propagation Neural Networks (BPNNs) and Multinomial Logistic Regression (MLR)), and the results show a significant difference between machine classification (97% accuracy) and human expert based classification (72% accuracy).
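
The comparison described above can be mocked up as follows with scikit-learn; the feature values and labels are placeholders rather than the study's data, so the reported accuracies cannot be reproduced from this sketch.

```python
# Hedged illustration: compare SVM, a back-propagation neural network (MLP)
# and multinomial logistic regression on 17 morphological features per neuron.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 17))                # 100 neurons, 17 features (placeholder)
y = rng.integers(0, 3, 100)                       # placeholder class labels

models = {
    "SVM": SVC(kernel="rbf"),
    "BPNN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "MLR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```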


Workshop on Applications of Computer Vision | 2013

Relational divergence based classification on Riemannian manifolds

Azadeh Alavi; Mehrtash Tafazzoli Harandi; Conrad Sanderson

A recent trend in computer vision is to represent images through covariance matrices, which can be treated as points on a special class of Riemannian manifolds. A popular way of analysing such manifolds is to embed them in Euclidean spaces, a process which can be interpreted as warping the feature space. Embedding manifolds is not without problems, as the manifold structure may not be accurately preserved. In this paper, we propose a new method for analysing Riemannian manifolds, where embedding into Euclidean spaces is not explicitly required. To this end, we propose to represent Riemannian points through their similarities to a set of reference points on the manifold, with the aid of the recently proposed Stein divergence, which is a symmetrised version of Bregman matrix divergence. Classification problems on manifolds are then effectively converted into the problem of finding appropriate machinery over the space of similarities, which can be tackled by conventional Euclidean learning methods such as linear discriminant analysis. Experiments on face recognition, person re-identification and texture classification show that the proposed method outperforms state-of-the-art approaches, such as Tensor Sparse Coding, Histogram Plus Epitome and the recent Riemannian Locality Preserving Projection.
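
A minimal sketch of the relational idea, with synthetic SPD matrices and an arbitrary choice of reference points: each matrix is described by its (exponentiated) Stein divergences to the references, and a standard Euclidean classifier, here linear discriminant analysis, is trained on those similarity vectors. The sigma parameter and the one-representer-per-class choice are assumptions made for illustration, not the paper's settings.

```python
# Relational representation via Stein divergence, followed by LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def stein_divergence(X, Y):
    return (np.linalg.slogdet((X + Y) / 2.0)[1]
            - 0.5 * (np.linalg.slogdet(X)[1] + np.linalg.slogdet(Y)[1]))

def relational_features(spd_matrices, references, sigma=1.0):
    """Each SPD matrix becomes a vector of similarities to the reference points."""
    return np.array([[np.exp(-sigma * stein_divergence(X, R)) for R in references]
                     for X in spd_matrices])      # shape (n_samples, n_references)

def random_spd(d, rng):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

rng = np.random.default_rng(0)
train = [random_spd(5, rng) for _ in range(60)]
labels = np.repeat([0, 1, 2], 20)
refs = [train[0], train[20], train[40]]           # e.g. one representer per class
clf = LinearDiscriminantAnalysis().fit(relational_features(train, refs), labels)
print(clf.predict(relational_features([random_spd(5, rng)], refs)))
```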


Archive | 2013

Graph-Embedding Discriminant Analysis on Riemannian Manifolds for Visual Recognition

Sareh Shirazi; Azadeh Alavi; Mehrtash Tafazzoli Harandi; Brian C. Lovell

Recently, several studies have utilised non-Euclidean geometry to address a range of computer vision problems, including object tracking [17], characterising the diffusion of water molecules as in diffusion tensor imaging [24], face recognition [23, 31], human re-identification [4], texture classification [16], pedestrian detection [39] and action recognition [22, 43].


Image and Vision Computing | 2018

KEPLER: Simultaneous estimation of keypoints and 3D pose of unconstrained faces in a unified framework by learning efficient H-CNN regressors

Amit Kumar; Azadeh Alavi; Rama Chellappa

Keypoint detection is one of the most important pre-processing steps in tasks such as face modeling, recognition and verification. In this paper, we present an iterative method for Keypoint Estimation and Pose prediction of unconstrained faces by Learning Efficient H-CNN Regressors (KEPLER) for addressing the unconstrained face alignment problem. Recent state-of-the-art methods have shown improvements in facial keypoint detection by employing Convolutional Neural Networks (CNNs). Although a simple feed-forward neural network can learn the mapping between input and output spaces, it does not learn the inherent structural dependencies that well. We present a novel architecture called H-CNN (Heatmap-CNN), acting on an N-dimensional input image, which captures informative structured global and local features and thus favors accurate keypoint detection in in-the-wild face images. H-CNN is jointly trained on the visibility, fiducials and 3D pose of the face. As the iterations proceed, the error decreases, making the gradients small and thus requiring efficient training of deep networks to mitigate this. KEPLER performs global corrections in pose and fiducials for the first four iterations, followed by local corrections at a later stage. As a by-product, KEPLER also provides a robust estimate of the 3D pose (pitch, yaw and roll) of the face. We also show that, without using any 3D information, KEPLER outperforms recent state-of-the-art methods for alignment on challenging datasets such as AFW [1] and AFLW [2].
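
As a small companion illustration (an assumption for exposition, not part of KEPLER's released code), the helper below turns a stack of predicted heatmaps into pixel keypoint coordinates and confidences via a per-channel argmax.

```python
# Decode keypoint heatmaps into (x, y) coordinates and confidences.
import numpy as np

def decode_heatmaps(heatmaps):
    """heatmaps: (n_keypoints, H, W) array of predicted heatmaps.
    Returns (n_keypoints, 2) pixel coordinates and (n_keypoints,) confidences."""
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    confidences = heatmaps.reshape(n, -1).max(axis=1)
    return np.stack([xs, ys], axis=1), confidences

rng = np.random.default_rng(0)
coords, conf = decode_heatmaps(rng.random((21, 64, 64)))
print(coords.shape, conf.shape)   # (21, 2), (21,)
```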

Collaboration


Dive into Azadeh Alavi's collaborations.

Top Co-Authors

Arnold Wiliem (University of Queensland)
Yan Yang (University of Queensland)
Kun Zhao (University of Queensland)