Måns Larsson
Chalmers University of Technology
Publication
Featured research published by Måns Larsson.
Scandinavian Conference on Image Analysis | 2017
Måns Larsson; Yuhang Zhang; Fredrik Kahl
A fully automatic system for abdominal organ segmentation is presented. As a first step, the organ is localized via a robust and efficient feature registration method that estimates the center of the organ together with a region of interest surrounding it. Then, a convolutional neural network performing voxelwise classification is applied. The network consists of several fully 3D convolutional layers and takes both low- and high-resolution image data as input, a design intended to ensure both local and global consistency. Despite limited training data, our experimental results are on par with state-of-the-art approaches that have been developed over many years. More specifically, the method was applied to the MICCAI 2015 challenge “Multi-Atlas Labeling Beyond the Cranial Vault” in the free competition for organ segmentation in the abdomen. It achieved the best results for 3 out of the 13 organs, with a total mean Dice coefficient of 0.757 over all organs. Top scores were achieved for the gallbladder, the aorta and the right adrenal gland.
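To make the dual-resolution design concrete, here is a minimal sketch, assuming a PyTorch-style implementation, of a two-branch fully 3D CNN that classifies each voxel from a high-resolution patch and a downsampled context patch. The names, layer counts, channel widths and patch sizes (DualResolution3DCNN, two convolutions per branch, 25³ inputs) are illustrative assumptions, not the architecture used in the paper.

```python
# Hypothetical sketch of a two-branch, fully 3D CNN for voxelwise organ
# classification, loosely following the description in the abstract.
import torch
import torch.nn as nn

class DualResolution3DCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # High-resolution branch: captures local detail around each voxel.
        self.high_res = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3), nn.ReLU(),
        )
        # Low-resolution branch: a downsampled view of the same region
        # supplies global context.
        self.low_res = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3), nn.ReLU(),
        )
        # Fused features are mapped to per-voxel class scores.
        self.classifier = nn.Conv3d(64, num_classes, kernel_size=1)

    def forward(self, patch_high, patch_low):
        f_high = self.high_res(patch_high)
        f_low = self.low_res(patch_low)
        # Assumes both branches output feature maps of equal spatial size.
        fused = torch.cat([f_high, f_low], dim=1)
        return self.classifier(fused)  # per-voxel logits

# Example: 25^3 patches from both scales, 14 classes (13 organs + background).
model = DualResolution3DCNN(num_classes=14)
logits = model(torch.randn(1, 1, 25, 25, 25), torch.randn(1, 1, 25, 25, 25))
```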
IEEE Signal Processing Magazine | 2018
Anurag Arnab; Shuai Zheng; Sadeep Jayasumana; Bernardino Romera-Paredes; Måns Larsson; Alexander Kirillov; Bogdan Savchynskyy; Carsten Rother; Fredrik Kahl; Philip H. S. Torr
Semantic segmentation is the task of labeling every pixel in an image with a predefined object category. It has numerous applications in scenarios where a detailed understanding of an image is required, such as autonomous vehicles and medical diagnosis. This problem has traditionally been solved with probabilistic models known as conditional random fields (CRFs) due to their ability to model the relationships between the pixels being predicted. However, deep neural networks (DNNs) have recently been shown to excel at a wide range of computer vision problems due to their ability to automatically learn rich feature representations from data, as opposed to relying on traditional handcrafted features. The idea of combining CRFs and DNNs has achieved state-of-the-art results in a number of domains. We review the literature on combining the modeling power of CRFs with the representation-learning ability of DNNs, ranging from early work that combines these two techniques as independent stages of a common pipeline to recent approaches that embed inference of probabilistic models directly in the neural network itself. Finally, we summarize future research directions.
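As a rough illustration of what "embedding inference in the network itself" can look like, the sketch below implements one mean-field update for a grid CRF on top of CNN unaries, using a single fixed spatial Gaussian kernel and Potts compatibility, written with differentiable tensor operations. It is a simplified toy version of that idea, not code from any of the reviewed papers.

```python
# Toy mean-field update for a grid CRF on top of CNN unaries. The fixed
# Gaussian spatial kernel and Potts compatibility are simplifying assumptions.
import torch
import torch.nn.functional as F

def gaussian_kernel2d(size=7, sigma=2.0):
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    k = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    k[size // 2, size // 2] = 0.0              # no self-connection
    return k / k.sum()

def mean_field_step(unary_logits, kernel, compat=1.0):
    # unary_logits: (B, L, H, W) class scores from the CNN backbone.
    B, L, H, W = unary_logits.shape
    q = F.softmax(unary_logits, dim=1)          # current marginals
    weight = kernel.repeat(L, 1, 1, 1)          # depthwise filter, (L, 1, k, k)
    # Message passing: smooth each label's marginal over the image plane.
    msg = F.conv2d(q, weight, padding=kernel.shape[-1] // 2, groups=L)
    # Potts compatibility: the cost of label l is the mass of all other labels.
    pairwise = compat * (msg.sum(dim=1, keepdim=True) - msg)
    return F.softmax(unary_logits - pairwise, dim=1)

# One differentiable refinement step on random unaries (21 classes, 64x64 image).
q = mean_field_step(torch.randn(1, 21, 64, 64), gaussian_kernel2d())
```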
Scandinavian Conference on Image Analysis | 2017
Måns Larsson; Jennifer Alvén; Fredrik Kahl
During the last few years, most work on image segmentation has focused on deep learning and Convolutional Neural Networks (CNNs) in particular. CNNs are powerful for modeling complex connections between input and output data but lack the ability to directly model dependent output structures, for instance, enforcing properties such as smoothness and coherence. This drawback motivates the use of Conditional Random Fields (CRFs), widely applied as a post-processing step in semantic segmentation. In this paper, we propose a learning framework that jointly trains the parameters of a CNN paired with a CRF. To this end, we develop theoretical tools that make it possible to optimize a max-margin objective with back-propagation. The max-margin loss function gives the model good generalization capabilities, so the method is especially suitable for applications where labelled data is limited, for example, medical applications. This generalization capability is reflected in our results, where we show good performance on two relatively small medical datasets. The method is also evaluated on a public benchmark frequently used for semantic segmentation, yielding results competitive with the state of the art. Overall, we demonstrate that end-to-end max-margin training is preferable to piecewise training when combining a CNN with a CRF.
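For intuition about the max-margin objective, the following is a small sketch of a structured hinge loss with a per-pixel Hamming task loss. To keep loss-augmented inference trivial, it scores labellings with unary terms only, whereas the paper's objective also involves the CRF pairwise terms; the function name and shapes are assumptions.

```python
# Minimal sketch of a max-margin (structured hinge) objective with a Hamming
# task loss. Unary-only scoring is a simplification: loss-augmented inference
# then decomposes per pixel.
import torch

def structured_hinge_loss(unary, target):
    # unary: (B, L, H, W) per-pixel class scores, target: (B, H, W) labels.
    B, L, H, W = unary.shape
    gt_score = unary.gather(1, target.unsqueeze(1)).sum()
    # Loss augmentation: add 1 to the score of every incorrect label
    # (Hamming loss), then take the best augmented labelling.
    one_hot = torch.zeros_like(unary).scatter_(1, target.unsqueeze(1), 1.0)
    augmented = unary + (1.0 - one_hot)
    aug_score = augmented.max(dim=1).values.sum()
    # Hinge: penalise the margin by which the augmented best labelling
    # outscores the ground truth.
    return torch.clamp(aug_score - gt_score, min=0.0) / (B * H * W)

loss = structured_hinge_loss(torch.randn(2, 5, 8, 8, requires_grad=True),
                             torch.randint(0, 5, (2, 8, 8)))
loss.backward()  # gradients flow back into the CNN scores
```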
Energy Minimization Methods in Computer Vision and Pattern Recognition | 2017
Måns Larsson; Anurag Arnab; Fredrik Kahl; Shuai Zheng; Philip H. S. Torr
Are we using the right potential functions in the Conditional Random Field models that are popular in the vision community? Semantic segmentation and other pixel-level labelling tasks have made significant progress recently due to the deep learning paradigm. However, most state-of-the-art structured prediction methods also include a random field model with a hand-crafted Gaussian potential to model spatial priors, label consistencies and feature-based image conditioning. In this paper, we challenge this view by developing a new inference and learning framework which can learn pairwise CRF potentials restricted only by their dependence on the image pixel values and the size of the support. Both standard spatial and high-dimensional bilateral kernels are considered. Our framework is based on the observation that CRF inference can be achieved via projected gradient descent and, consequently, can easily be integrated into deep neural networks to allow for end-to-end training. It is empirically demonstrated that such learned potentials can improve segmentation accuracy and that certain label class interactions are indeed better modelled by a non-Gaussian potential. In addition, we compare our inference method to the commonly used mean-field algorithm. Our framework is evaluated on several public benchmarks for semantic segmentation with improved performance compared to previous state-of-the-art CNN+CRF models.
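To illustrate how projected gradient descent inference can be dropped into a network, here is a condensed sketch under several assumptions: the relaxed energy uses only a learned spatial pairwise filter (no bilateral kernel), the filter is taken to be symmetric so that constant factors can be folded into the step size, and the simplex projection follows the standard sort-based algorithm. It is not the authors' implementation.

```python
# Sketch of relaxed CRF inference by projected gradient descent, so that the
# whole loop is differentiable and can sit inside a neural network.
import torch
import torch.nn.functional as F

def project_to_simplex(v):
    # Euclidean projection of the last dimension of v onto the probability
    # simplex (standard sort-based algorithm).
    sorted_v, _ = torch.sort(v, dim=-1, descending=True)
    css = sorted_v.cumsum(dim=-1) - 1.0
    ks = torch.arange(1, v.shape[-1] + 1, device=v.device, dtype=v.dtype)
    rho = (sorted_v - css / ks > 0).sum(dim=-1, keepdim=True)
    theta = css.gather(-1, rho - 1) / rho.to(v.dtype)
    return torch.clamp(v - theta, min=0.0)

def crf_pgd_inference(unary, pairwise_filter, steps=5, lr=0.1):
    # unary: (B, L, H, W) scores; pairwise_filter: (L, L, k, k) learned
    # spatial potentials (assumed symmetric for a simple gradient).
    q = F.softmax(unary, dim=1)                 # start on the simplex
    pad = pairwise_filter.shape[-1] // 2
    for _ in range(steps):
        # Gradient of E(q) = -<q, unary> + pairwise coupling of neighbours.
        grad = -unary + F.conv2d(q, pairwise_filter, padding=pad)
        q = q - lr * grad
        # Project each pixel's label vector back onto the simplex.
        q = project_to_simplex(q.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
    return q

# Example with random unaries and a small 3x3 spatial pairwise filter
# (the filter would be learned end-to-end in practice).
L = 21
filt = torch.randn(L, L, 3, 3) * 0.01
out = crf_pgd_inference(torch.randn(1, L, 32, 32), filt)
```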
Archive | 2017
Måns Larsson; Fredrik Kahl; Shuai Zheng; Anurag Arnab; Philip H. S. Torr; Richard I. Hartley
Archive | 2018
Måns Larsson
Archive | 2017
Måns Larsson; Anurag Arnab; Fredrik Kahl; Shuai Zheng; Philip H. S. Torr
SSBA | 2016
Måns Larsson; Yuhang Zhang; Fredrik Kahl