Muralikrishna Sridhar
University of Leeds
Publications
Featured research published by Muralikrishna Sridhar.
International Conference on Computer Vision Systems | 2013
Aryana Tavanai; Muralikrishna Sridhar; Feng Gu; Anthony G. Cohn; David C. Hogg
This paper proposes a novel approach that detects and tracks carried objects by modelling the person-carried-object relationship that is characteristic of the carry event. To detect a generic class of carried objects, we propose the use of geometric shape models, instead of using pre-trained object class models or relying solely on protrusions. To track the carried objects, we propose a novel optimization procedure that combines the spatio-temporal consistency characteristic of the carry event with conventional properties such as appearance and motion smoothness. The proposed approach substantially outperforms a state-of-the-art approach on two challenging datasets, PETS2006 and MINDSEYE2012.
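The abstract describes a track score that combines appearance consistency and motion smoothness with spatio-temporal consistency between the object and its carrier. The sketch below illustrates one plausible form of such a combined score; the feature representation, distance terms, and weights are all illustrative assumptions, not the paper's actual formulation.

```python
import math

def track_score(track, carrier, w_app=1.0, w_mot=1.0, w_carry=1.0):
    """Score a candidate carried-object track (list of (x, y, appearance)
    tuples) against the carrying person's track (list of (x, y) tuples).
    Higher is better; all terms and weights are illustrative."""
    appearance = motion = carry = 0.0
    for t in range(1, len(track)):
        (x0, y0, a0), (x1, y1, a1) = track[t - 1], track[t]
        appearance -= abs(a1 - a0)               # appearance consistency
        motion -= math.hypot(x1 - x0, y1 - y0)   # motion-smoothness penalty
    for (x, y, _), (cx, cy) in zip(track, carrier):
        carry -= math.hypot(x - cx, y - cy)      # consistency with the carrier
    return w_app * appearance + w_mot * motion + w_carry * carry
```

Under this scoring, a track that stays close to the carrier with stable appearance and small inter-frame motion outscores an erratic one.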
Neurocomputing | 2016
Feng Gu; Muralikrishna Sridhar; Anthony G. Cohn; David C. Hogg; Francisco Flórez-Revuelta; Dorothy Ndedi Monekosso; Paolo Remagnino
In computer vision, an increasing number of weakly annotated videos have become available, because it is often difficult and time-consuming to annotate all the details in collected videos. Learning methods that analyse human activities in weakly annotated video data have gained great interest in recent years. They are categorised as weakly supervised learning and usually form a multi-instance multi-label (MIML) learning problem. In addition to the commonly known difficulties of MIML learning, i.e. ambiguities in instances and labels, a weakly supervised method also has to cope with the large data size, high dimensionality, and large proportion of noisy examples usually found in video data. In this work, we propose a novel learning framework that iteratively optimises over a scalable MIML model and an instance selection process incorporating pairwise spatio-temporal smoothing during training. The learned knowledge is then generalised to testing via a noise removal process based on the support vector data description (SVDD) algorithm. In experiments on three challenging benchmark video datasets, the proposed framework yields a more discriminative MIML model and less noisy training and testing data, and thus improves system performance. It outperforms state-of-the-art weakly supervised, and even fully supervised, approaches in the literature at annotating and detecting actions of a single person and interactions between a pair of people.
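The noise-removal step above is based on support vector data description, which encloses the normal data in a minimal hypersphere and treats points outside it as noise. The sketch below is a crude stand-in for that idea, assuming a fixed centre (the mean) and a percentile radius; a real SVDD optimises the sphere with slack variables and kernels, e.g. via a one-class SVM.

```python
import math

def remove_noisy_examples(features, keep_fraction=0.9):
    """Crude stand-in for SVDD-style noise removal: enclose the data in a
    sphere centred on the mean and discard points that fall outside the
    radius covering `keep_fraction` of the examples."""
    dim = len(features[0])
    centre = [sum(f[i] for f in features) / len(features) for i in range(dim)]
    dists = [math.dist(f, centre) for f in features]
    radius = sorted(dists)[max(0, int(keep_fraction * len(features)) - 1)]
    return [f for f, d in zip(features, dists) if d <= radius]
```

Applied to a tight cluster plus one far-away point, this keeps the cluster and drops the outlier.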
British Machine Vision Conference | 2015
Aryana Tavanai; Muralikrishna Sridhar; Eris Chinellato; Anthony G. Cohn; David C. Hogg
This paper proposes a novel method for jointly estimating the track of a moving object and the events in which it participates. The method is intended for generic objects that are hard to localise and track with the performance of current detection algorithms; our focus is on events involving carried objects. The tracks for other objects with which the target object interacts (e.g. the carrying person) are assumed to be given. The method is posed as maximisation of a posterior probability defined over event sequences and temporally-disjoint subsets of the tracklets from an earlier tracking process. The probability function is a Hidden Markov Model coupled with a term that penalises non-smooth tracks and large gaps in the observed data. We evaluate the method using tracklets output by three state-of-the-art trackers on the newly created MINDSEYE2015 dataset and demonstrate improved performance.
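At the core of such an HMM-based formulation is Viterbi decoding of the most probable hidden event sequence given the observations. The sketch below shows only that core step; the states, observations, and probabilities are toy assumptions, and the paper's additional track-smoothness and gap-penalty terms are omitted.

```python
import math

def viterbi(states, start_p, trans_p, emit_p, observations):
    """Most probable hidden-state sequence under a simple HMM.
    In the paper's setting the hidden states would be carry events and the
    observations tracklet features; here everything is a toy illustration."""
    # best[s] = (log-probability of the best path ending in s, that path)
    best = {s: (math.log(start_p[s]) + math.log(emit_p[s][observations[0]]), [s])
            for s in states}
    for obs in observations[1:]:
        best = {
            s: max(
                (p + math.log(trans_p[prev][s]) + math.log(emit_p[s][obs]),
                 path + [s])
                for prev, (p, path) in best.items()
            )
            for s in states
        }
    return max(best.values())[1]
```

With sticky transitions and motion-dependent emissions, the decoder labels a "moving, moving, static" observation run as carrying followed by idling.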
International Conference on Pattern Recognition | 2014
Aryana Tavanai; Muralikrishna Sridhar; Feng Gu; Anthony G. Cohn; David C. Hogg
This paper presents a novel approach to incorporating multiple contextual factors into a tracking process, for the purpose of reducing false positive detections. While much previous work has focused on using context to improve object detection in static images, context has not been integrated into the tracking process. Our hypothesis is that a significant improvement can result from using context to dynamically influence the linking of object detections during tracking. To verify this hypothesis, we augment a state-of-the-art dynamic-programming-based tracker with contextual information by reformulating its maximum a posteriori (MAP) estimation. This formulation introduces contextual factors that both augment detection strengths and provide temporal context. We allow both types of factors to contribute organically to the linking process by learning their relative contributions jointly during a gradient-descent-based optimisation process. Our experiments demonstrate that the proposed approach yields significantly superior performance on a recent challenging video dataset, which captures complex scenes with a wide range of object types and diverse backgrounds.
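The dynamic-programming linking that such trackers build on selects one detection per frame so that accumulated detection strength minus linking cost is maximised. The sketch below shows that backbone, assuming a plain Euclidean linking cost; the weights `w_det` and `w_link` merely stand in for the contextual-factor weights the paper learns by gradient descent.

```python
import math

def link_detections(frames, w_det=1.0, w_link=1.0):
    """Dynamic-programming tracker sketch: each frame holds candidate
    detections (x, y, score); pick one per frame maximising weighted
    detection strength minus a motion-based linking cost."""
    # best[i] = (accumulated score, chosen detection indices) ending at
    # detection i of the current frame
    best = [(w_det * s, [i]) for i, (_, _, s) in enumerate(frames[0])]
    prev_frame = frames[0]
    for frame in frames[1:]:
        new_best = []
        for i, (x, y, s) in enumerate(frame):
            score, path = max(
                (p - w_link * math.hypot(x - px, y - py), path)
                for (p, path), (px, py, _) in zip(best, prev_frame)
            )
            new_best.append((score + w_det * s, path + [i]))
        best, prev_frame = new_best, frame
    return max(best)[1]
```

Given a smooth high-scoring candidate and an erratic distractor in each frame, the linker follows the smooth chain of detections.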
European Conference on Artificial Intelligence | 2008
Muralikrishna Sridhar; Anthony G. Cohn; David C. Hogg
National Conference on Artificial Intelligence | 2010
Muralikrishna Sridhar; Anthony G. Cohn; David C. Hogg
Conference on Spatial Information Theory | 2011
Muralikrishna Sridhar; Anthony G. Cohn; David C. Hogg
Principles of Knowledge Representation and Reasoning | 2012
Anthony G. Cohn; Jochen Renz; Muralikrishna Sridhar
Archive | 2011
Muralikrishna Sridhar; Anthony G. Cohn; David C. Hogg
European Conference on Artificial Intelligence | 2010
Muralikrishna Sridhar; Anthony G. Cohn; David C. Hogg