Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sadeep Jayasumana is active.

Publication


Featured research published by Sadeep Jayasumana.


International Conference on Computer Vision (ICCV) | 2015

Conditional Random Fields as Recurrent Neural Networks

Shuai Zheng; Sadeep Jayasumana; Bernardino Romera-Paredes; Vibhav Vineet; Zhizhong Su; Dalong Du; Chang Huang; Philip H. S. Torr

Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.
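As a rough illustration of the mean-field recurrence described above, here is a toy numpy sketch: a dense Gaussian affinity over pixel positions stands in for the paper's bilateral filters, and a simple Potts-style compatibility replaces the learned transform (function and parameter names are illustrative, not from the paper).

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def crf_rnn_mean_field(unary, positions, n_iters=5, gamma=1.0, w=1.0):
    """Mean-field inference for a CRF with Gaussian pairwise potentials,
    unrolled as fixed-point iterations (the recurrence CRF-RNN implements).
    unary: (N, L) negative log-unary scores; positions: (N, d)."""
    # Gaussian affinity between pixels based on position (toy stand-in
    # for the spatial/bilateral filtering used in the paper)
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    A = np.exp(-gamma * d2)
    np.fill_diagonal(A, 0.0)              # exclude self-messages
    Q = softmax(-unary)                   # initialization step
    for _ in range(n_iters):
        msg = A @ Q                       # message passing (filtering)
        pairwise = -w * msg               # Potts-style compatibility
        Q = softmax(-(unary + pairwise))  # add unary, normalize
    return Q
```

Unrolling a fixed number of these iterations and backpropagating through them is what lets the CRF be trained jointly with the CNN.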


Computer Vision and Pattern Recognition (CVPR) | 2013

Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices

Sadeep Jayasumana; Richard I. Hartley; Mathieu Salzmann; Hongdong Li; Mehrtash Tafazzoli Harandi

Symmetric Positive Definite (SPD) matrices have become popular to encode image information. Accounting for the geometry of the Riemannian manifold of SPD matrices has proven key to the success of many algorithms. However, most existing methods only approximate the true shape of the manifold locally by its tangent plane. In this paper, inspired by kernel methods, we propose to map SPD matrices to a high dimensional Hilbert space where Euclidean geometry applies. To encode the geometry of the manifold in the mapping, we introduce a family of provably positive definite kernels on the Riemannian manifold of SPD matrices. These kernels are derived from the Gaussian kernel, but exploit different metrics on the manifold. This lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM and kernel PCA, to the Riemannian manifold of SPD matrices. We demonstrate the benefits of our approach on the problems of pedestrian detection, object categorization, texture analysis, 2D motion segmentation and Diffusion Tensor Imaging (DTI) segmentation.
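A minimal sketch of one kernel from this family: the Gaussian kernel built on the log-Euclidean metric, k(X, Y) = exp(-γ‖log X − log Y‖_F²), which the paper shows is positive definite for all γ > 0 (this standalone function is illustrative; the paper's experiments plug such kernels into SVM and kernel PCA).

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_gaussian_kernel(X, Y, gamma=0.5):
    """Gaussian kernel on SPD matrices under the log-Euclidean metric:
    k(X, Y) = exp(-gamma * ||log(X) - log(Y)||_F^2).
    X, Y must be symmetric positive definite, so the matrix logarithm
    is real and well defined."""
    d = np.linalg.norm(logm(X) - logm(Y), ord='fro')
    return np.exp(-gamma * d ** 2)
```

Because the metric maps SPD matrices into a vector space via the matrix logarithm, the usual Euclidean Gaussian kernel machinery applies unchanged.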


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels

Sadeep Jayasumana; Richard I. Hartley; Mathieu Salzmann; Hongdong Li; Mehrtash Tafazzoli Harandi

In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels.
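The positive-definiteness claim can be checked numerically for one of the two manifolds studied; the sketch below builds a Gaussian RBF Gram matrix on the Grassmannian using the projection metric d(X, Y) = ‖XXᵀ − YYᵀ‖_F (an illustrative implementation, not the authors' code).

```python
import numpy as np

def grassmann_projection_rbf(subspaces, gamma=0.2):
    """Gaussian RBF Gram matrix on the Grassmann manifold with the
    projection metric d(X, Y) = ||X X^T - Y Y^T||_F, one of the metrics
    for which the Gaussian kernel is positive definite. Each element of
    `subspaces` is an orthonormal basis of shape (n, p)."""
    P = [U @ U.T for U in subspaces]     # projection-matrix embedding
    m = len(P)
    K = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            K[i, j] = np.exp(-gamma * np.linalg.norm(P[i] - P[j]) ** 2)
    return K
```

The eigenvalues of the resulting Gram matrix are non-negative for any set of subspaces, which is exactly what positive definiteness of the kernel guarantees.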


European Conference on Computer Vision (ECCV) | 2014

Expanding the Family of Grassmannian Kernels: An Embedding Perspective

Mehrtash Tafazzoli Harandi; Mathieu Salzmann; Sadeep Jayasumana; Richard I. Hartley; Hongdong Li

Modeling videos and image-sets as linear subspaces has proven beneficial for many visual recognition tasks. However, it also incurs challenges arising from the fact that linear subspaces do not obey Euclidean geometry, but lie on a special type of Riemannian manifold known as the Grassmannian. To leverage the techniques developed for Euclidean spaces (e.g., support vector machines) with subspaces, several recent studies have proposed to embed the Grassmannian into a Hilbert space by making use of a positive definite kernel. Unfortunately, only two Grassmannian kernels are known, none of which -as we will show- is universal, which limits their ability to approximate a target function arbitrarily well. Here, we introduce several positive definite Grassmannian kernels, including universal ones, and demonstrate their superiority over previously-known kernels in various tasks, such as classification, clustering, sparse coding and hashing.
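As a hedged sketch of the embedding perspective, one way to obtain a richer kernel is to exponentiate the linear projection kernel ‖XᵀY‖_F²; the exact form and the parameter name below are illustrative rather than taken verbatim from the paper.

```python
import numpy as np

def exp_projection_kernel(X, Y, beta=1.0):
    """Exponential of the linear projection kernel on the Grassmannian,
    k(X, Y) = exp(beta * ||X^T Y||_F^2). Since ||X^T Y||_F^2 =
    <X X^T, Y Y^T> is itself a positive definite kernel, its exponential
    (a power series with non-negative coefficients) is positive definite
    too. X, Y are orthonormal bases of shape (n, p)."""
    return np.exp(beta * np.linalg.norm(X.T @ Y, ord='fro') ** 2)
```

A Gram matrix built from this kernel is symmetric with non-negative eigenvalues, which is the property universality arguments build on.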


European Conference on Computer Vision (ECCV) | 2016

Higher Order Conditional Random Fields in Deep Neural Networks

Anurag Arnab; Sadeep Jayasumana; Shuai Zheng; Philip H. S. Torr

We address the problem of semantic segmentation using deep learning. Most segmentation systems include a Conditional Random Field (CRF) to produce a structured output that is consistent with the image's visual features. Recent deep learning approaches have incorporated CRFs into Convolutional Neural Networks (CNNs), with some even training the CRF end-to-end with the rest of the network. However, these approaches have not employed higher order potentials, which have previously been shown to significantly improve segmentation performance. In this paper, we demonstrate that two types of higher order potential, based on object detections and superpixels, can be included in a CRF embedded within a deep network. We design these higher order potentials to allow inference with the differentiable mean field algorithm. As a result, all the parameters of our richer CRF model can be learned end-to-end with our pixelwise CNN classifier. We achieve state-of-the-art segmentation performance on the PASCAL VOC benchmark with these trainable higher order potentials.
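A toy sketch of the superpixel-consistency idea (not the paper's actual potential, which is richer and learned end-to-end): pull each pixel's marginal toward its superpixel's average distribution, a differentiable update compatible with mean-field inference.

```python
import numpy as np

def superpixel_consistency_update(Q, superpixels, strength=0.5):
    """Toy mean-field-style update for a superpixel higher-order term:
    pixels in the same superpixel are nudged toward the superpixel's
    average label distribution, encouraging region-level consistency.
    Q: (N, L) pixel marginals; superpixels: (N,) segment ids."""
    Q_new = Q.copy()
    for s in np.unique(superpixels):
        idx = superpixels == s
        avg = Q[idx].mean(axis=0)                        # segment consensus
        Q_new[idx] = (1 - strength) * Q[idx] + strength * avg
    return Q_new / Q_new.sum(axis=1, keepdims=True)      # renormalize
```

Because the update is a smooth function of Q, gradients flow through it, which is what makes end-to-end training with the CNN possible.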


Computer Vision and Pattern Recognition (CVPR) | 2014

Optimizing over Radial Kernels on Compact Manifolds

Sadeep Jayasumana; Richard I. Hartley; Mathieu Salzmann; Hongdong Li; Mehrtash Tafazzoli Harandi

We tackle the problem of optimizing over all possible positive definite radial kernels on Riemannian manifolds for classification. Kernel methods on Riemannian manifolds have recently become increasingly popular in computer vision. However, the number of known positive definite kernels on manifolds remains very limited. Furthermore, most kernels typically depend on at least one parameter that needs to be tuned for the problem at hand. A poor choice of kernel, or of parameter value, may yield a significant performance drop-off. Here, we show that positive definite radial kernels on the unit n-sphere, the Grassmann manifold and Kendall's shape manifold can be expressed in a simple form whose parameters can be automatically optimized within a support vector machine framework. We demonstrate the benefits of our kernel learning algorithm on object, face, action and shape recognition.
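A sketch of such a parametric family on the unit sphere, under the simplifying assumption that the kernel is a non-negative combination of elementwise powers of the Gram matrix; in the paper the coefficients are optimized inside the SVM, while here they are fixed for illustration.

```python
import numpy as np

def radial_kernel_on_sphere(X, weights):
    """Parametric radial kernel on the unit sphere written as a
    non-negative combination of powers of the inner product:
    K = sum_i a_i * (X X^T)^{.i}  (elementwise power).
    By the Schur product theorem each power is positive semidefinite,
    so any non-negative combination is too.
    X: (m, n) unit-norm rows; weights: non-negative coefficients a_i."""
    G = X @ X.T                          # cosines of geodesic distances
    K = np.zeros_like(G)
    for i, a in enumerate(weights):
        K += a * G ** i                  # Schur products keep K PSD
    return K
```

Optimizing the weights then amounts to a search over a convex family of valid kernels rather than a single tuning parameter.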


International Conference on Computer Vision (ICCV) | 2013

A Framework for Shape Analysis via Hilbert Space Embedding

Sadeep Jayasumana; Mathieu Salzmann; Hongdong Li; Mehrtash Tafazzoli Harandi

We propose a framework for 2D shape analysis using positive definite kernels defined on Kendall's shape manifold. Different representations of 2D shapes are known to generate different nonlinear spaces. Due to the nonlinearity of these spaces, most existing shape classification algorithms resort to nearest neighbor methods and to learning distances on shape spaces. Here, we propose to map shapes on Kendall's shape manifold to a high dimensional Hilbert space where Euclidean geometry applies. To this end, we introduce a kernel on this manifold that permits such a mapping, and prove its positive definiteness. This kernel lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM, MKL and kernel PCA, to the shape manifold. We demonstrate the benefits of our approach over the state-of-the-art methods on shape classification, clustering and retrieval.
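An illustrative implementation of a kernel of this kind on Kendall's 2D shape space: landmarks as a complex vector, normalized to a preshape, with a Gaussian kernel built on the full Procrustes distance (a sketch; the variable names and normalization choices here are assumptions, not the paper's code).

```python
import numpy as np

def procrustes_gaussian_kernel(z, w, gamma=1.0):
    """Gaussian kernel on Kendall's 2D shape space. Shapes are complex
    vectors of landmarks; centering and scaling to unit norm gives the
    preshape, and the full Procrustes distance
    d(z, w)^2 = 1 - |<z, w>|^2 is additionally invariant to rotation,
    so the kernel depends only on shape."""
    def preshape(x):
        x = x - x.mean()                 # remove translation
        return x / np.linalg.norm(x)     # remove scale
    z, w = preshape(z), preshape(w)
    d2 = 1.0 - abs(np.vdot(z, w)) ** 2   # rotation-invariant distance
    return np.exp(-gamma * d2)
```

Two shapes related by any similarity transform (translation, scale, rotation) therefore receive kernel value 1, exactly as a shape-space kernel should.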


Digital Image Computing: Techniques and Applications (DICTA) | 2013

Combining Multiple Manifold-Valued Descriptors for Improved Object Recognition

Sadeep Jayasumana; Richard I. Hartley; Mathieu Salzmann; Hongdong Li; Mehrtash Tafazzoli Harandi

We present a learning method for classification using multiple manifold-valued features. Manifold techniques are becoming increasingly popular in computer vision since Riemannian geometry often comes up as a natural model for many descriptors encountered in different branches of computer vision. We propose a feature combination and selection method that optimally combines descriptors lying on different manifolds while respecting the Riemannian geometry of each underlying manifold. We use our method to improve object recognition by combining HOG and Region Covariance descriptors that reside on two different manifolds. To this end, we propose a kernel on the n-dimensional unit sphere and prove its positive definiteness. Our experimental evaluation shows that combining these two powerful descriptors using our method results in significant improvements in recognition accuracy.
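The combination step can be sketched as a multiple-kernel-learning-style non-negative sum of per-descriptor kernel matrices, each built with a metric that respects its own manifold; such a sum stays positive definite (weights are fixed here for illustration, whereas the paper learns the combination).

```python
import numpy as np

def combine_manifold_kernels(kernels, weights):
    """Combine per-descriptor Gram matrices (one per manifold-valued
    feature) into a single kernel via a non-negative weighted sum.
    A non-negative combination of positive semidefinite matrices is
    positive semidefinite, so the result is a valid kernel."""
    K = np.zeros_like(kernels[0])
    for Km, b in zip(kernels, weights):
        assert b >= 0, "weights must be non-negative to preserve PSD"
        K += b * Km
    return K
```

The combined Gram matrix can then be handed to any standard kernel classifier, with each descriptor's geometry already baked into its own kernel.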


British Machine Vision Conference (BMVC) | 2015

Prototypical Priors: From Improving Classification to Zero-Shot Learning.

Saumya Jetley; Bernardino Romera-Paredes; Sadeep Jayasumana; Philip H. S. Torr



IEEE Signal Processing Magazine | 2018

Conditional Random Fields Meet Deep Neural Networks for Semantic Segmentation: Combining Probabilistic Graphical Models with Deep Learning for Structured Prediction

Anurag Arnab; Shuai Zheng; Sadeep Jayasumana; Bernardino Romera-Paredes; Måns Larsson; Alexander Kirillov; Bogdan Savchynskyy; Carsten Rother; Fredrik Kahl; Philip H. S. Torr


Collaboration


Dive into Sadeep Jayasumana's collaborations.

Top Co-Authors

Mathieu Salzmann

École Polytechnique Fédérale de Lausanne

Hongdong Li

Australian National University

Richard I. Hartley

Australian National University

Paul Fremantle

University of Portsmouth