Publication


Featured research published by Abhinav Dhall.


IEEE MultiMedia | 2012

Collecting Large, Richly Annotated Facial-Expression Databases from Movies

Abhinav Dhall; Roland Goecke; Simon Lucey; Tamas Gedeon

Two large facial-expression databases depicting challenging real-world conditions were constructed using a semi-automatic approach via a recommender system based on subtitles.
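As a rough illustration of the subtitle-driven, semi-automatic approach, the sketch below scans SRT-format subtitle text for emotion-related keywords and recommends the matching time spans for human verification. The keyword list, regular expression and function names are illustrative assumptions, not the authors' actual recommender system.

```python
# Illustrative sketch only: a toy subtitle-based clip recommender.
# The keyword-to-label map and SRT parsing are assumptions, not the
# authors' actual system.
import re

EMOTION_KEYWORDS = {
    "furious": "Anger", "angry": "Anger",
    "happy": "Happiness", "laughing": "Happiness",
    "crying": "Sadness", "sad": "Sadness",
    "terrified": "Fear", "scared": "Fear",
    "disgusting": "Disgust", "shocked": "Surprise",
}

# One SRT block: index, "HH:MM:SS,mmm --> HH:MM:SS,mmm", then the text.
SRT_BLOCK = re.compile(
    r"(\d+)\s+(\d{2}:\d{2}:\d{2}),\d{3}\s*-->\s*"
    r"(\d{2}:\d{2}:\d{2}),\d{3}\s+(.+?)(?:\n\n|\Z)",
    re.S,
)

def recommend_clips(srt_text):
    """Return (start, end, suggested_label, subtitle_text) tuples for
    blocks mentioning an emotion keyword; a human annotator then checks
    the corresponding movie segment and its facial expressions."""
    clips = []
    for _idx, start, end, text in SRT_BLOCK.findall(srt_text):
        lowered = text.lower()
        for keyword, label in EMOTION_KEYWORDS.items():
            if keyword in lowered:
                clips.append((start, end, label, text.strip()))
                break
    return clips
```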


IEEE International Conference on Automatic Face and Gesture Recognition | 2011

Emotion recognition using PHOG and LPQ features

Abhinav Dhall; Akshay Asthana; Roland Goecke; Tamas Gedeon

We propose a method for automatic emotion recognition as part of the FERA 2011 competition. The system extracts pyramid of histogram of oriented gradients (PHOG) and local phase quantisation (LPQ) features to encode shape and appearance information. For selecting key frames, K-means clustering is applied to the normalised shape vectors derived from constrained local model (CLM) based face tracking on the image sequences. Shape vectors closest to the cluster centres are then used to extract the shape and appearance features. We demonstrate the results on the SSPNet GEMEP-FERA dataset, which comprises both person-specific and person-independent partitions. For emotion classification, we use support vector machine (SVM) and large margin nearest neighbour (LMNN) classifiers and compare our results to the pre-computed FERA 2011 emotion challenge baseline.
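The key-frame selection step described above can be sketched in a few lines. The code below assumes a hypothetical `shapes` array holding one CLM-derived, normalised shape vector per frame; PHOG/LPQ extraction and the classifiers are only indicated, so this is a sketch of the clustering idea rather than the authors' implementation.

```python
# Sketch of key-frame selection via K-means on per-frame shape vectors.
# `shapes` is a hypothetical (n_frames, n_dims) array from CLM tracking.
import numpy as np
from sklearn.cluster import KMeans

def select_key_frames(shapes, n_clusters=5):
    """Return indices of the frames whose shape vectors lie closest to
    the K-means cluster centres; these frames are then used for PHOG
    and LPQ feature extraction."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(shapes)
    key_frames = {
        int(np.argmin(np.linalg.norm(shapes - centre, axis=1)))
        for centre in km.cluster_centers_
    }
    return sorted(key_frames)

# Downstream (indicated only): PHOG + LPQ descriptors from the selected
# frames would feed an SVM or LMNN classifier for emotion recognition.
```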


International Conference on Computer Vision | 2011

Static facial expression analysis in tough conditions: Data, evaluation protocol and benchmark

Abhinav Dhall; Roland Goecke; Simon Lucey; Tamas Gedeon

Quality data recorded in varied realistic environments is vital for effective human face related research. Currently available datasets for human facial expression analysis have been generated in highly controlled lab environments. We present a new static facial expression database, Static Facial Expressions in the Wild (SFEW), extracted from a temporal facial expressions database, Acted Facial Expressions in the Wild (AFEW) [9], which we have extracted from movies. In the past, many robust methods have been reported in the literature. However, these methods have been evaluated on different databases or with different protocols within the same databases. The lack of a standard protocol makes it difficult to compare systems and hinders progress in the field. Therefore, we propose a person-independent training and testing protocol for expression recognition as part of the BEFIT workshop. Further, we compare our dataset with the JAFFE and Multi-PIE datasets and provide baseline results.
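A person-independent protocol of the kind proposed here can be sketched with scikit-learn's GroupKFold, which guarantees that no subject appears in both the training and test partitions. The `features`, `labels` and `subject_ids` arrays below are hypothetical placeholders, and the linear SVM is a generic stand-in classifier.

```python
# Sketch of a person-independent evaluation protocol: folds are split by
# subject identity, so no person appears in both train and test sets.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def person_independent_cv(features, labels, subject_ids, n_splits=5):
    """features: (n_images, d); labels: (n_images,); subject_ids:
    (n_images,) identity of the person shown in each image."""
    scores = []
    for train_idx, test_idx in GroupKFold(n_splits=n_splits).split(
            features, labels, groups=subject_ids):
        clf = SVC(kernel="linear").fit(features[train_idx], labels[train_idx])
        scores.append(accuracy_score(labels[test_idx],
                                     clf.predict(features[test_idx])))
    return float(np.mean(scores))
```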


International Conference on Multimodal Interfaces | 2015

Video and Image based Emotion Recognition Challenges in the Wild: EmotiW 2015

Abhinav Dhall; O. V. Ramana Murthy; Roland Goecke; Jyoti Joshi; Tamas Gedeon

The third Emotion Recognition in the Wild (EmotiW) challenge 2015 consists of audio-video based emotion and static image based facial expression classification sub-challenges, which mimic real-world conditions. The two sub-challenges are based on the Acted Facial Expressions in the Wild (AFEW) 5.0 and the Static Facial Expressions in the Wild (SFEW) 2.0 databases, respectively. The paper describes the data, baseline method, challenge protocol and the challenge results. A total of 12 and 17 teams participated in the video based emotion and image based expression sub-challenges, respectively.


Journal of the American Chemical Society | 2012

Semisynthetic, Site-Specific Ubiquitin Modification of α-Synuclein Reveals Differential Effects on Aggregation

Franziska Meier; Tharindumala Abeywardana; Abhinav Dhall; Nicholas P. Marotta; Jobin Varkey; Ralf Langen; Champak Chatterjee; Matthew R. Pratt

The process of neurodegeneration in Parkinson's disease is intimately associated with the aggregation of the protein α-synuclein into toxic oligomers and fibrils. Interestingly, many of these protein aggregates are found to be post-translationally modified by ubiquitin at several different lysine residues. However, the inability to generate homogeneously ubiquitin-modified α-synuclein at each site has prevented an understanding of the specific biochemical consequences. We have used protein semisynthesis to generate nine site-specifically ubiquitin-modified α-synuclein derivatives and have demonstrated that different ubiquitination sites have differential effects on α-synuclein aggregation.


ACM Multimedia | 2013

Diagnosis of depression by behavioural signals: a multimodal approach

Nicholas Cummins; Jyoti Joshi; Abhinav Dhall; Vidhyasaharan Sethu; Roland Goecke; Julien Epps

Quantifying behavioural changes in depression using affective computing techniques is the first step in developing an objective diagnostic aid, with clinical utility, for clinical depression. As part of the AVEC 2013 Challenge, we present a multimodal approach for the Depression Sub-Challenge, using a GMM-UBM system with three different kernels for the audio subsystem and Space-Time Interest Points in a Bag-of-Words approach for the vision subsystem. These are then fused at the feature level to form the combined AV system. Key results include the strong performance of the acoustic audio features and the bag-of-words visual features in predicting an individual's level of depression via regression. Interestingly, given the small amount of literature on the subject, our feature-level multimodal fusion technique is able to outperform both the audio and visual challenge baselines.
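Feature-level fusion as described here amounts to concatenating the per-recording audio and visual vectors before regression. A minimal sketch, with support vector regression as a generic stand-in for the paper's regressor and all variable names assumed:

```python
# Sketch of feature-level audio-visual fusion for depression-scale
# regression. Inputs are hypothetical: e.g. GMM-UBM-derived audio
# vectors and STIP bag-of-words histograms, one row per recording.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fuse_and_regress(audio_feats, visual_feats, depression_scores):
    """audio_feats: (n, d_a); visual_feats: (n, d_v);
    depression_scores: (n,) clinical scale values."""
    fused = np.hstack([StandardScaler().fit_transform(audio_feats),
                       StandardScaler().fit_transform(visual_feats)])
    return SVR(kernel="rbf").fit(fused, depression_scores)
```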


Journal on Multimodal User Interfaces | 2013

Multimodal assistive technologies for depression diagnosis and monitoring

Jyoti Joshi; Roland Goecke; Sharifa Alghowinem; Abhinav Dhall; Michael Wagner; Julien Epps; Gordon Parker; Michael Breakspear

Depression is a severe mental health disorder with high societal costs. Current clinical practice depends almost exclusively on self-report and clinical opinion, risking a range of subjective biases. The long-term goal of our research is to develop assistive technologies to support clinicians and sufferers in the diagnosis and monitoring of treatment progress in a timely and easily accessible format. In the first phase, we aim to develop a diagnostic aid using affective sensing approaches. This paper describes the progress to date and proposes a novel multimodal framework comprising audio-video fusion for depression diagnosis. We exploit the proposition that auditory and visual channels of human communication complement each other, which is well known in auditory-visual speech processing, and investigate this hypothesis for depression analysis. For the video data analysis, intra-facial muscle movements and the movements of the head and shoulders are analysed by computing spatio-temporal interest points. In addition, various audio features (fundamental frequency f0, loudness, intensity and mel-frequency cepstral coefficients) are computed. Next, a bag of visual features and a bag of audio features are generated separately. In this study, we compare fusion methods at the feature level, score level and decision level. Experiments are performed on an age- and gender-matched clinical dataset of 30 patients and 30 healthy controls. The results from the multimodal experiments show the proposed framework's effectiveness in depression analysis.
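The three fusion levels compared in this paper can be contrasted schematically. The functions below are illustrative placeholders, not the paper's actual classifiers or weights:

```python
# Schematic contrast of the three fusion levels compared in the paper.
import numpy as np

def feature_level(audio_x, video_x):
    # Concatenate modality features; a single classifier sees both.
    return np.hstack([audio_x, video_x])

def score_level(audio_score, video_score, w=0.5):
    # Weighted combination of per-modality classifier confidences.
    return w * audio_score + (1.0 - w) * video_score

def decision_level(audio_label, video_label, fallback_label):
    # Combine hard per-modality decisions; disagreements need a rule,
    # here a simple fallback label for illustration.
    return audio_label if audio_label == video_label else fallback_label
```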


ACS Chemical Biology | 2011

Chemical Approaches To Understand the Language of Histone Modifications

Abhinav Dhall; Champak Chatterjee

Genomic DNA in the eukaryotic cell nucleus is present in the form of chromatin. Histones are the principal protein component of chromatin, and their post-translational modifications play important roles in regulating the structure and function of chromatin and thereby in determining cell development and disease. An understanding of how histone modifications translate into downstream cellular events is important from both developmental and therapeutic perspectives. However, biochemical studies of histone modifications require access to quantities of homogeneously modified histones that cannot be easily isolated from natural sources or generated by enzymatic methods. In the past decade, chemical synthesis has proven to be a powerful tool in translating the language of histone modifications by providing access to uniformly modified histones and through the development of stable analogues of thermodynamically labile modifications. This Review highlights the various synthetic and semisynthetic strategies that have enabled biochemical and biophysical characterization of site-specifically modified histones.


Journal of Biological Chemistry | 2014

Sumoylated Human Histone H4 Prevents Chromatin Compaction by Inhibiting Long-range Internucleosomal Interactions

Abhinav Dhall; Sijie Wei; Beat Fierz; Christopher L. Woodcock; Tae-Hee Lee; Champak Chatterjee

Background: Human histone H4 is post-translationally modified at Lys-12 by the small ubiquitin-like modifier protein (SUMO-3).

Results: Chemical sumoylation at H4 Lys-12 revealed the inhibition of chromatin compaction and oligomerization by SUMO-3.

Conclusion: Sumoylation changes chromatin structure by inhibiting long-range internucleosomal interactions and decreasing the affinity between adjacent nucleosomes.

Significance: Learning how sumoylation changes the structure of chromatin suggests that it may mediate gene repression without chromatin compaction.

The structure of eukaryotic chromatin directly influences gene function and is regulated by chemical modifications of the core histone proteins. Modification of the human histone H4 N-terminal tail region by the small ubiquitin-like modifier protein, SUMO-3, is associated with transcription repression. However, the direct effect of sumoylation on chromatin structure and function remains unknown. Therefore, we employed a disulfide-directed strategy to generate H4 homogeneously and site-specifically sumoylated at Lys-12 (suH4ss). Chromatin compaction and oligomerization assays with nucleosomal arrays containing suH4ss established that SUMO-3 inhibits array folding and higher-order oligomerization, which underlie chromatin fiber formation. Moreover, the effect of sumoylation differed from that of acetylation and could be recapitulated with the structurally similar protein ubiquitin. Mechanistic studies at the level of single nucleosomes revealed that, unlike acetylation, the effect of SUMO-3 arises from the attenuation of long-range internucleosomal interactions more than from the destabilization of a compacted dinucleosome state. Altogether, our results present the first insight into the direct structural effects of histone H4 sumoylation and reveal a novel mechanism by which SUMO-3 inhibits chromatin compaction.


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

The more the merrier: Analysing the affect of a group of people in images

Abhinav Dhall; Jyoti Joshi; Karan Sikka; Roland Goecke; Nicu Sebe

The recent advancement of social media has given users a platform to socially engage and interact with a global population. With millions of images being uploaded onto social media platforms, there is an increasing interest in inferring the emotion and mood displayed by a group of people in images. Automatic affect analysis research has come a long way but has traditionally focussed on a single subject in a scene. In this paper, we study the problem of inferring the emotion of a group of people in an image. This group affect has wide applications in retrieval, advertisement, content recommendation and security. The contributions of the paper are: 1) a novel emotion-labelled database of groups of people in images; 2) a Multiple Kernel Learning based hybrid affect inference model; 3) a scene context based affect inference model; 4) a user survey to better understand the attributes that influence the perception of the affect of a group of people in an image. The detailed experimental validation provides a rich baseline for the proposed database.
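As a rough sketch of a Multiple Kernel Learning style hybrid model, the code below combines a face-based kernel and a scene-context kernel as a fixed convex combination fed to a precomputed-kernel SVM. Real MKL learns the kernel weights from data, and all names here are hypothetical.

```python
# Sketch of an MKL-style hybrid: a convex combination of a face kernel
# and a scene-context kernel drives one SVM. Real MKL learns w_face.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(face_a, face_b, scene_a, scene_b, w_face=0.6):
    return (w_face * rbf_kernel(face_a, face_b)
            + (1.0 - w_face) * rbf_kernel(scene_a, scene_b))

def train_group_affect(face_train, scene_train, labels):
    gram = combined_kernel(face_train, face_train, scene_train, scene_train)
    return SVC(kernel="precomputed").fit(gram, labels)

# Prediction uses the cross-kernel between test and training samples:
# clf.predict(combined_kernel(face_test, face_train,
#                             scene_test, scene_train))
```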

Collaboration


Abhinav Dhall's top co-authors and their affiliations.

Top Co-Authors

Tamas Gedeon, Australian National University

Jyoti Joshi, University of Canberra

Karan Sikka, University of California

Santanu Chaudhury, Indian Institute of Technology Delhi