Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Akshay Asthana is active.

Publication


Featured research published by Akshay Asthana.


IEEE International Conference on Automatic Face and Gesture Recognition | 2011

Emotion recognition using PHOG and LPQ features

Abhinav Dhall; Akshay Asthana; Roland Goecke; Tamas Gedeon

We propose a method for automatic emotion recognition as part of the FERA 2011 competition. The system extracts pyramid of histogram of gradients (PHOG) and local phase quantisation (LPQ) features for encoding the shape and appearance information. For selecting the key frames, K-means clustering is applied to the normalised shape vectors derived from constrained local model (CLM) based face tracking on the image sequences. Shape vectors closest to the cluster centres are then used to extract the shape and appearance features. We demonstrate the results on the SSPNet GEMEP-FERA dataset, which comprises both person-specific and person-independent partitions. For emotion classification, we use a support vector machine (SVM) and large margin nearest neighbour (LMNN) classifier and compare our results to the pre-computed FERA 2011 emotion challenge baseline.
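
As a rough illustration of the key-frame selection step, the sketch below clusters per-frame shape vectors with K-means and keeps the frames nearest each cluster centre; the `shapes` array, the cluster count and the use of scikit-learn are assumptions for illustration, not details taken from the paper.

```python
# A minimal sketch, assuming `shapes` is an (n_frames, 2 * n_landmarks) array
# of normalised shape vectors from a CLM face tracker (tracker not shown).
import numpy as np
from sklearn.cluster import KMeans

def select_key_frames(shapes: np.ndarray, n_clusters: int = 5) -> np.ndarray:
    """Return indices of the frames whose shape vectors lie closest to the
    K-means cluster centres, as in the key-frame selection described above."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(shapes)
    idx = [int(np.argmin(np.linalg.norm(shapes - c, axis=1)))
           for c in km.cluster_centers_]
    return np.unique(idx)

# PHOG and LPQ descriptors would then be extracted from these key frames and
# classified with an SVM (e.g. sklearn.svm.SVC) or an LMNN-based classifier.
```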


Affective Computing and Intelligent Interaction | 2009

Evaluating AAM fitting methods for facial expression recognition

Akshay Asthana; Jason M. Saragih; Michael Wagner; Roland Goecke

The human face is a rich source of information for the viewer, and facial expressions are a major component in judging a person's affective state, intention and personality. Facial expressions are an important part of human-human interaction and have the potential to play an equally important part in human-computer interaction. This paper evaluates various Active Appearance Model (AAM) fitting methods, including both the original formulation as well as several state-of-the-art methods, for the task of automatic facial expression recognition. The AAM is a powerful statistical model for modelling and registering deformable objects. The results of the fitting process are used in a facial expression recognition task using a region-based intermediate representation related to Action Units, with the expression classification task realised using a Support Vector Machine. Experiments are performed for both person-dependent and person-independent setups. Overall, the best facial expression recognition results were obtained by using the Iterative Error Bound Minimisation method, which consistently resulted in accurate face model alignment and facial expression recognition even when the initial face detection used to initialise the fitting procedure was poor.
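
The sketch below gives one plausible reading of that pipeline: per-region landmark displacements from a neutral face, classified with an SVM. The region slices, the 68-point markup and the cross-validation call are illustrative assumptions; the paper's Action-Unit-related representation may differ.

```python
# An illustrative sketch only: region slices assume a 68-point markup and do
# not reproduce the paper's exact Action-Unit-related representation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

REGIONS = {"brows": slice(17, 27), "eyes": slice(36, 48), "mouth": slice(48, 68)}

def region_features(landmarks: np.ndarray, neutral: np.ndarray) -> np.ndarray:
    """Per-region displacement of AAM-fitted landmarks ((n_points, 2) arrays)
    from a neutral face, concatenated into one feature vector."""
    return np.concatenate([(landmarks[r] - neutral[r]).ravel()
                           for r in REGIONS.values()])

# X: one feature vector per image, y: expression labels; a person-independent
# setup would split folds by subject rather than at random.
# scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
```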


IEEE Intelligent Vehicles Symposium | 2007

Visual Vehicle Egomotion Estimation using the Fourier-Mellin Transform

Roland Goecke; Akshay Asthana; Niklas Pettersson; Lars Petersson

This paper is concerned with the problem of estimating the motion of a single camera from a sequence of images, with an application scenario of vehicle egomotion estimation. Egomotion estimation has been an active area of research for many years and various solutions to the problem have been proposed. Many methods rely on optical flow or local image features to establish the spatial relationship between two images. A new method of egomotion estimation is presented which makes use of the Fourier-Mellin Transform for registering images in a video sequence, from which the rotation and translation of the camera motion can be estimated. The Fourier-Mellin Transform provides an accurate and efficient way of computing the camera motion parameters. It is a global method that takes the contributions from all pixels into account. The performance of the proposed approach is compared to two variants of optical flow methods and results are presented for a real-world video sequence taken from a moving vehicle.
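
For intuition, the sketch below shows plain phase correlation, the frequency-domain registration primitive underlying the Fourier-Mellin approach; the full method additionally resamples the magnitude spectrum into log-polar coordinates so that rotation (and scale) can be recovered the same way. Function and variable names are illustrative.

```python
# A minimal sketch: phase correlation between two equally sized grayscale
# frames, recovering the integer-pixel translation between them.
import numpy as np

def phase_correlation(f1: np.ndarray, f2: np.ndarray):
    """Estimate the integer-pixel shift between two equally sized frames."""
    cross = np.fft.fft2(f1) * np.conj(np.fft.fft2(f2))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))  # normalised spectrum
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Peaks beyond half the image size correspond to negative shifts.
    if dy > f1.shape[0] // 2:
        dy -= f1.shape[0]
    if dx > f1.shape[1] // 2:
        dx -= f1.shape[1]
    return dx, dy
```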


British Machine Vision Conference | 2009

Learning-based face synthesis for pose-robust recognition from single image

Akshay Asthana; Conrad Sanderson; Tamas Gedeon; Roland Goecke

Face recognition in real-world conditions requires the ability to deal with a number of conditions, such as variations in pose, illumination and expression. In this paper, we focus on variations in head pose and use a computationally efficient regression-based approach for synthesising face images in different poses, which are used to extend the face recognition training set. In this data-driven approach, the correspondences between facial landmark points in frontal and non-frontal views are learnt offline from manually annotated training data via Gaussian Process Regression. We then use this learner to synthesise non-frontal face images from any unseen frontal image. To demonstrate the utility of this approach, two frontal face recognition systems (the commonly used PCA and the recent Multi-Region Histograms) are augmented with synthesised non-frontal views for each person. This synthesis and augmentation approach is experimentally validated on the FERET dataset, showing a considerable improvement in recognition rates for ±40° and ±60° views, while maintaining high recognition rates for ±15° and ±25° views.
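
A minimal sketch of the offline learning step follows, assuming paired frontal and posed landmark annotations and using scikit-learn's Gaussian Process Regression as a stand-in; one regressor per target pose, with the kernel choice being an assumption.

```python
# A minimal sketch, assuming X_frontal holds flattened frontal landmark
# vectors and Y_posed the same faces' landmarks at one target pose; one
# regressor is trained per pose. The kernel here is a placeholder.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def train_pose_regressor(X_frontal: np.ndarray, Y_posed: np.ndarray):
    """Learn the frontal -> non-frontal landmark mapping with GPR; the fitted
    model predicts posed landmarks for unseen frontal shapes, which are then
    used to warp texture and synthesise the non-frontal training views."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
        X_frontal, Y_posed)
```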


Computer Vision and Pattern Recognition | 2009

Learning based automatic face annotation for arbitrary poses and expressions from frontal images only

Akshay Asthana; Roland Goecke; Novi Quadrianto; Tamas Gedeon

Statistical approaches for building non-rigid deformable models, such as the active appearance model (AAM), have enjoyed great popularity in recent years, but typically require tedious manual annotation of training images. In this paper, a learning-based approach for the automatic annotation of visually deformable objects from a single annotated frontal image is presented and demonstrated on the example of automatically annotating face images that can be used for building AAMs for fitting and tracking. This approach employs the idea of initially learning the correspondences between landmarks in a frontal image and a set of training images with a face in arbitrary poses. Using this learner, virtual images of unseen faces at any arbitrary pose for which the learner was trained can be reconstructed by predicting the new landmark locations and warping the texture from the frontal image. View-based AAMs are then built from the virtual images and used for automatically annotating unseen images, including images of different facial expressions, at any random pose within the maximum range spanned by the virtually reconstructed images. The approach is experimentally validated by automatically annotating face images from three different databases.
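
The view-synthesis step can be sketched as follows, using scikit-image's piecewise-affine warp as a stand-in for the paper's texture-warping machinery; the function and argument names are illustrative.

```python
# An illustrative sketch using scikit-image; src_pts/dst_pts are (n, 2) arrays
# of (x, y) landmark coordinates in the frontal and target-pose images.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def synthesise_view(frontal_img: np.ndarray, src_pts: np.ndarray,
                    dst_pts: np.ndarray) -> np.ndarray:
    """Warp the frontal texture so its landmarks move to the predicted
    target-pose locations, producing one 'virtual' training image."""
    tform = PiecewiseAffineTransform()
    tform.estimate(dst_pts, src_pts)  # warp() expects the inverse mapping
    return warp(frontal_img, tform, output_shape=frontal_img.shape)
```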


International Conference on Neural Information Processing | 2010

Facial expression based automatic album creation

Abhinav Dhall; Akshay Asthana; Roland Goecke

With simple, cost-effective imaging solutions widely available these days, there has been an enormous rise in the number of images consumers take. Due to this increase, searching, browsing and managing images in multimedia systems has become more complex. One solution to this problem is to divide images into albums for meaningful and effective browsing. We propose a novel automated, expression-driven album creation approach for consumer image management systems. The system groups images with faces having similar expressions into albums. Facial expressions of the subjects are grouped into albums by the Structural Similarity Index measure, which is based on how readily the human visual system can extract the shape information of a scene. We also propose a search by similar expression, in which the user can create albums by providing example facial expression images. A qualitative analysis of the performance of the system is presented on the basis of a user study.
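
A minimal sketch of the grouping criterion is given below, assuming equally sized grayscale face crops and a made-up SSIM threshold; the paper's exact clustering procedure may differ.

```python
# A minimal sketch, assuming `faces` is a list of equally sized grayscale
# face crops (NumPy arrays); the 0.7 threshold is purely illustrative.
from skimage.metrics import structural_similarity as ssim

def group_by_expression(faces, threshold=0.7):
    """Greedily assign each face to the first album whose exemplar it matches
    (SSIM above threshold); otherwise it seeds a new album."""
    albums = []  # each album is a list of indices; faces[album[0]] is its exemplar
    for i, face in enumerate(faces):
        for album in albums:
            exemplar = faces[album[0]]
            if ssim(face, exemplar,
                    data_range=face.max() - face.min()) >= threshold:
                album.append(i)
                break
        else:
            albums.append([i])
    return albums
```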


Pattern Recognition | 2011

Regression based automatic face annotation for deformable model building

Akshay Asthana; Simon Lucey; Roland Goecke

A major drawback of statistical models of non-rigid, deformable objects, such as the active appearance model (AAM), is the required pseudo-dense annotation of landmark points for every training image. We propose a regression-based approach for automatic annotation of face images at arbitrary pose and expression, and for deformable model building using only the annotated frontal images. We pose the problem of learning the pattern of manual annotation as a data-driven regression problem and explore several regression strategies to effectively predict the spatial arrangement of the landmark points for unseen face images, with arbitrary expression, at arbitrary poses. We show that the proposed fully sparse non-linear regression approach outperforms other regression strategies by effectively modelling the changes in the shape of the face under varying pose, and is capable of capturing the subtleties of different facial expressions at the same time, thus ensuring the high quality of the generated synthetic images. We show the generalisability of the proposed approach by automatically annotating the face images from four different databases and verifying the results by comparing them with a ground truth obtained from manual annotations.
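
As a hedged sketch of the regression idea, the snippet below trains a sparse non-linear mapping from frontal shapes to target-pose shapes using support vector regression; the paper compares several strategies, and its exact model, kernel and hyperparameters are not reproduced here.

```python
# A hedged sketch: support vector regression (sparse in its support vectors)
# stands in for the paper's fully sparse non-linear regressor; hyperparameters
# are placeholders, not values from the paper.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def train_landmark_regressor(X_frontal: np.ndarray, Y_target: np.ndarray):
    """X_frontal: (n_faces, 2 * n_landmarks) frontal shape vectors;
    Y_target: the same faces' landmarks at one target pose/expression."""
    return MultiOutputRegressor(
        SVR(kernel="rbf", C=10.0, epsilon=0.01)).fit(X_frontal, Y_target)
```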


International Conference on Pattern Recognition | 2010

Linear Facial Expression Transfer with Active Appearance Models

Miles de la Hunty; Akshay Asthana; Roland Goecke

The issue of transferring facial expressions from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, with the proliferation of online image and video collections and web applications, such as Google Street View, the question of preserving privacy through face de-identification has gained interest in the computer vision community. In this paper, we focus on the problem of real-time dynamic facial expression transfer using an Active Appearance Model framework. We provide a theoretical foundation for a generalisation of two well-known expression transfer methods and demonstrate the improved visual quality of the proposed linear extrapolation transfer method on examples of face swapping and expression transfer using the AVOZES data corpus. Realistic talking faces can be generated in real-time at low computational cost.
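
In AAM parameter space, the basic linear transfer can be sketched in a few lines: the source's offset from its neutral parameters is added to the target's neutral parameters. The gain factor and variable names are illustrative; the paper's generalised extrapolation method differs in detail.

```python
# An illustrative sketch in AAM parameter space; `gain` and all names are
# placeholders, and the paper's generalised method refines this basic form.
import numpy as np

def transfer_expression(p_src: np.ndarray, p_src_neutral: np.ndarray,
                        p_tgt_neutral: np.ndarray,
                        gain: float = 1.0) -> np.ndarray:
    """Return target AAM parameters carrying the source's expression: the
    source's offset from neutral is linearly extrapolated onto the target."""
    return p_tgt_neutral + gain * (p_src - p_src_neutral)

# The resulting parameter vector is rendered through the target's AAM to
# produce the transferred expression frame.
```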


International Conference on Neural Information Processing | 2009

A Hybrid Fuzzy Approach for Human Eye Gaze Pattern Recognition

Dingyun Zhu; B. Sumudu U. Mendis; Tamas Gedeon; Akshay Asthana; Roland Goecke

Face perception and text reading are two of the most developed visual perceptual skills in humans. Understanding which features in the respective visual patterns make them differ from each other is very important for investigating the correlation between human visual behaviour and cognitive processes. We introduce a hybrid approach, based on fuzzy signatures with a Levenberg-Marquardt optimisation method, for recognising the different eye gaze patterns that occur when a human is viewing faces or text documents. Our experimental results show the effectiveness of this method in a real-world setting. A further comparison with Support Vector Machines (SVM) also demonstrates that, by defining the classification process in a similar way to SVM, our hybrid approach provides comparable performance but with a more interpretable form of the learned structure.
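
A loose sketch of the optimisation step follows, assuming precomputed leaf membership values and a simple weighted aggregation tuned by Levenberg-Marquardt via SciPy; the fuzzy signatures in the paper are hierarchical, so this flat form is a simplification.

```python
# A loose sketch, assuming `memberships` is an (n_samples, n_leaves) array of
# precomputed fuzzy membership values and a flat weighted aggregation; the
# paper's fuzzy signatures are hierarchical, so this is a simplification.
import numpy as np
from scipy.optimize import least_squares

def fit_aggregation_weights(memberships: np.ndarray,
                            targets: np.ndarray) -> np.ndarray:
    """Tune aggregation weights with Levenberg-Marquardt so the weighted
    membership sum matches the desired per-sample class score."""
    def residuals(w):
        return memberships @ w - targets
    w0 = np.full(memberships.shape[1], 1.0 / memberships.shape[1])
    return least_squares(residuals, w0, method="lm").x
```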


International Conference on Image Processing | 2009

Automatic frontal face annotation and AAM building for arbitrary expressions from a single frontal image only

Akshay Asthana; Asim A. Khwaja; Roland Goecke

In recent years, statistically motivated approaches for the registration and tracking of non-rigid objects, such as the Active Appearance Model (AAM), have become very popular. A major drawback of these approaches is that they require manual annotation of all training images, which can be tedious and error-prone. In this paper, an MPEG-4 based approach for the automatic annotation of frontal face images, having any arbitrary facial expression, from a single annotated frontal image is presented. This approach utilises an MPEG-4 based facial animation system to generate virtual images having different expressions and uses the existing AAM framework to automatically annotate unseen images. The approach demonstrates excellent generalisability by automatically annotating face images from two different databases.

Collaboration


Dive into Akshay Asthana's collaborations.

Top Co-Authors

Tamas Gedeon

Australian National University

Asim A. Khwaja

Australian National University

B. Sumudu U. Mendis

Australian National University

Dingyun Zhu

Australian National University

Jason M. Saragih

Commonwealth Scientific and Industrial Research Organisation

Lars Petersson

Australian National University

M. de la Hunty

Australian National University
