
Publications


Featured research published by Christian Rupprecht.


International Conference on 3D Vision | 2016

Deeper Depth Prediction with Fully Convolutional Residual Networks

Iro Laina; Christian Rupprecht; Vasileios Belagiannis; Federico Tombari; Nassir Navab

This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires less training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.
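The reverse Huber (berHu) loss described above behaves like L1 for small residuals and quadratically for large ones. A minimal NumPy sketch follows; the default threshold of one fifth of the maximum absolute residual per batch follows common practice for this loss and is an assumption here, not a verbatim reproduction of the paper's implementation.

```python
import numpy as np

def berhu_loss(pred, target, c=None):
    """Reverse Huber (berHu) loss.

    L1 for small residuals, quadratic for large ones. If c is not given,
    it is set to 0.2 * max |residual| in the batch (a common convention).
    """
    residual = np.abs(np.asarray(pred, float) - np.asarray(target, float))
    if c is None:
        c = 0.2 * residual.max()
    # |r| <= c: absolute branch; |r| > c: (r^2 + c^2) / (2c), which matches
    # the absolute branch at |r| = c and grows quadratically beyond it.
    quadratic = (residual ** 2 + c ** 2) / (2.0 * c)
    return np.where(residual <= c, residual, quadratic).mean()
```

Compared to plain L2, the quadratic branch still puts strong gradient on large depth errors while the L1 branch avoids over-smoothing small ones.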


International Conference on Computer Vision | 2015

Robust Optimization for Deep Regression

Vasileios Belagiannis; Christian Rupprecht; Gustavo Carneiro; Nassir Navab

Convolutional Neural Networks (ConvNets) have successfully contributed to improving the accuracy of regression-based methods for computer vision tasks such as human pose estimation, landmark localization, and object detection. Network optimization has usually been performed with the L2 loss and without considering the impact of outliers on the training process, where an outlier in this context is defined as a sample estimation that lies at an abnormal distance from the other training sample estimations in the objective space. In this work, we propose a regression model with ConvNets that achieves robustness to such outliers by minimizing Tukey's biweight function, an M-estimator robust to outliers, as the loss function for the ConvNet. In addition to the robust loss, we introduce a coarse-to-fine model, which processes input images of progressively higher resolutions for improving the accuracy of the regressed values. In our experiments, we demonstrate faster convergence and better generalization of our robust loss function for the tasks of human pose estimation and age estimation from face images. We also show that the combination of the robust loss function with the coarse-to-fine model produces comparable or better results than current state-of-the-art approaches in four publicly available human pose estimation datasets.
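A minimal NumPy sketch of Tukey's biweight function used as a loss. The tuning constant c = 4.685 is the standard value for roughly 95% asymptotic efficiency under Gaussian noise; the MAD-based residual scaling the paper applies beforehand is omitted here for brevity.

```python
import numpy as np

def tukey_biweight(residuals, c=4.685):
    """Tukey's biweight loss, bounded so that gross outliers all receive
    the same constant penalty c^2/6 and therefore contribute zero gradient.
    """
    r = np.asarray(residuals, dtype=float)
    inlier = np.abs(r) <= c
    rho = np.full_like(r, c ** 2 / 6.0)  # saturated value for outliers
    rho[inlier] = (c ** 2 / 6.0) * (1.0 - (1.0 - (r[inlier] / c) ** 2) ** 3)
    return rho.mean()
```

The saturation is exactly what makes training robust: an annotation error producing a huge residual cannot dominate the gradient the way it would under L2.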


Computer Vision and Pattern Recognition | 2015

Image Segmentation in Twenty Questions

Christian Rupprecht; Loïc Peter; Nassir Navab

Consider the following scenario between a human user and the computer. Given an image, the user thinks of an object to be segmented within this picture, but is only allowed to provide binary inputs to the computer (yes or no). In these conditions, can the computer guess this hidden segmentation by asking well-chosen questions to the user? We introduce a strategy for the computer to increase the accuracy of its guess in a minimal number of questions. At each turn, the current belief about the answer is encoded in a Bayesian fashion via a probability distribution over the set of all possible segmentations. To efficiently handle this huge space, the distribution is approximated by sampling representative segmentations using an adapted version of the Metropolis-Hastings algorithm, whose proposal moves build on a geodesic distance transform segmentation method. Following a dichotomic search, the question halving the weighted set of samples is finally picked, and the provided answer is used to update the belief for the upcoming rounds. The performance of this strategy is assessed on three publicly available datasets with diverse visual properties. Our approach proves to be a tractable and very adaptive solution to this problem.
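The question-selection step can be illustrated with a toy sketch (helper names are hypothetical; in the paper the candidate segmentations are sampled with Metropolis-Hastings rather than enumerated). The most informative yes/no question is the pixel whose weighted "inside" probability is closest to 0.5, since either answer then discards about half the belief mass.

```python
import numpy as np

def best_question(samples, weights):
    """Pick the pixel whose answer best halves the weighted sample set.

    samples: (n_samples, n_pixels) binary masks drawn from the current
    belief; weights: per-sample probabilities summing to 1.
    """
    p_inside = weights @ samples  # weighted fraction of samples containing each pixel
    return int(np.argmin(np.abs(p_inside - 0.5)))

def update_belief(samples, weights, pixel, answer):
    """Zero out samples inconsistent with the yes/no answer and renormalize."""
    consistent = samples[:, pixel] == answer
    new_w = weights * consistent
    return new_w / new_w.sum()
```

Repeating ask-then-update rounds concentrates the weight on segmentations consistent with every answer so far.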


Medical Image Computing and Computer-Assisted Intervention | 2017

Concurrent Segmentation and Localization for Tracking of Surgical Instruments

Iro Laina; Nicola Rieke; Christian Rupprecht; Josué Page Vizcaíno; Abouzar Eslami; Federico Tombari; Nassir Navab

Real-time instrument tracking is a crucial requirement for various computer-assisted interventions. To overcome problems such as specular reflection and motion blur, we propose a novel method that takes advantage of the interdependency between localization and segmentation of the surgical tool. In particular, we reformulate the 2D pose estimation as a heatmap regression and thereby enable a robust, concurrent regression of both tasks via deep learning. In our experimental results, we demonstrate that this modeling leads to a significantly better performance than directly regressing the tool position, and that our method outperforms the state-of-the-art on a Retinal Microsurgery benchmark and the MICCAI EndoVis Challenge 2015.
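Reformulating 2D pose estimation as heatmap regression amounts to rendering the instrument position as a dense Gaussian target map instead of regressing coordinates directly; the argmax of the predicted map recovers the position at test time. A minimal sketch, with sigma chosen arbitrarily for illustration:

```python
import numpy as np

def point_to_heatmap(x, y, shape, sigma=2.0):
    """Render a 2D point as a Gaussian heatmap target for regression.

    shape: (height, width) of the output map. The peak (value 1.0) sits
    at column x, row y.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
```

Dense targets of this kind let a fully convolutional network share features with the segmentation branch, which is the interdependency the abstract exploits.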


Scientific Reports | 2017

Automatic Segmentation of Kidneys using Deep Learning for Total Kidney Volume Quantification in Autosomal Dominant Polycystic Kidney Disease

Kanishka Sharma; Christian Rupprecht; Anna Caroli; Maria Carolina Aparicio; Andrea Remuzzi; Maximilian Baust; Nassir Navab

Autosomal Dominant Polycystic Kidney Disease (ADPKD) is the most common inherited disorder of the kidneys. It is characterized by enlargement of the kidneys caused by progressive development of renal cysts, and thus assessment of total kidney volume (TKV) is crucial for studying disease progression in ADPKD. However, automatic segmentation of polycystic kidneys is a challenging task due to severe alteration in the morphology caused by non-uniform cyst formation and presence of adjacent liver cysts. In this study, an automated segmentation method based on deep learning has been proposed for TKV computation on computed tomography (CT) dataset of ADPKD patients exhibiting mild to moderate or severe renal insufficiency. The proposed method has been trained (n = 165) and tested (n = 79) on a wide range of TKV (321.2–14,670.7 mL) achieving an overall mean Dice Similarity Coefficient of 0.86 ± 0.07 (mean ± SD) between automated and manual segmentations from clinical experts and a mean correlation coefficient (ρ) of 0.98 (p < 0.001) for segmented kidney volume measurements in the entire test set. Our method facilitates fast and reproducible measurements of kidney volumes in agreement with manual segmentations from clinical experts.
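The Dice Similarity Coefficient reported above (0.86 ± 0.07) is straightforward to compute from a pair of binary masks; a minimal NumPy sketch:

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). 1.0 = perfect overlap, 0.0 = disjoint.
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)
```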


Intelligent Robots and Systems | 2016

Sensor substitution for video-based action recognition

Christian Rupprecht; Colin Lea; Federico Tombari; Nassir Navab; Gregory D. Hager

There are many applications where domain-specific sensing, such as accelerometers, kinematics, or force sensing, provides unique and important information for control or for analysis of motion. However, it is not always the case that these sensors can be deployed or accessed beyond laboratory environments. For example, it is possible to instrument humans or robots to measure motion in the laboratory in ways that cannot be replicated in the wild. An alternative, which we explore in this paper, is to address situations where accurate sensing is available while training an algorithm, but for which only video is available for deployment. We present two examples of this sensory substitution methodology. The first variation trains a convolutional neural network to regress real-valued signals, including robot end-effector pose, from video. The second example regresses binary signals derived from accelerometer data which signify when specific objects are in motion. We evaluate these on the JIGSAWS dataset for robotic surgery training assessment and the 50 Salads dataset for modeling complex structured cooking tasks. We evaluate the trained models for video-based action recognition and show that the trained models provide information that is comparable to the sensory signals they replace.


International Conference on Computer Vision | 2017

Learning in an Uncertain World: Representing Ambiguity Through Multiple Hypotheses

Christian Rupprecht; Iro Laina; Robert S. DiPietro; Maximilian Baust

Many prediction tasks contain uncertainty. In some cases, uncertainty is inherent in the task itself. In future prediction, for example, many distinct outcomes are equally valid. In other cases, uncertainty arises from the way data is labeled. For example, in object detection, many objects of interest often go unlabeled, and in human pose estimation, occluded joints are often labeled with ambiguous values. In this work we focus on a principled approach for handling such scenarios. In particular, we propose a framework for reformulating existing single-prediction models as multiple hypothesis prediction (MHP) models and an associated meta loss and optimization procedure to train them. To demonstrate our approach, we consider four diverse applications: human pose estimation, future prediction, image classification and segmentation. We find that MHP models outperform their single-hypothesis counterparts in all cases, and that MHP models simultaneously expose valuable insights into the variability of predictions.
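The meta loss can be pictured as a relaxed winner-takes-all scheme: the hypothesis closest to the target receives most of the weight and the remaining heads a small share ε, which keeps every head training while letting heads specialize on different modes. This sketch assumes a squared-error base loss and is only an approximation of the paper's formulation, not its exact code.

```python
import numpy as np

def mhp_meta_loss(hypotheses, target, eps=0.05):
    """Relaxed winner-takes-all meta loss for multiple hypothesis prediction.

    hypotheses: (M, d) array of M predictions; target: (d,). The closest
    hypothesis gets weight 1 - eps, the others share eps equally.
    """
    m = len(hypotheses)
    per_hyp = ((hypotheses - target) ** 2).mean(axis=1)  # base loss per hypothesis
    if m == 1:
        return float(per_hyp[0])
    weights = np.full(m, eps / (m - 1))
    weights[per_hyp.argmin()] = 1.0 - eps
    return float(weights @ per_hyp)
```

With eps = 0 this degenerates to pure winner-takes-all, which can leave unused heads with zero gradient; the small eps avoids that.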


International Conference on 3D Vision | 2013

3D Semantic Parameterization for Human Shape Modeling: Application to 3D Animation

Christian Rupprecht; Olivier Pauly; Christian Theobalt; Slobodan Ilic

Statistical human body models, like SCAPE, capture static 3D human body shapes and poses and are applied to many Computer Vision problems. Defined in a statistical context, their parameters do not explicitly capture semantics of the human body shapes such as height, weight, limb length, etc. Having a set of semantic parameters would allow users and automated algorithms to sample the space of possible body shape variations in a more intuitive way. Therefore, in this paper we propose a method for re-parameterization of statistical human body models such that shapes are controlled by a small set of intuitive semantic parameters. These parameters are learned directly from the available statistical human body model. In order to apply any arbitrary animation to our human body shape model we perform retargeting. From any set of 3D scans, a semantically parameterized model can be generated and animated with the presented methods using any animation data. We quantitatively show that our semantic parameterization is more reliable than standard semantic parameterizations, and show a number of animations retargeted to our semantic body shape model.
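One way to picture such a re-parameterization is a least-squares map from semantic parameters (height, weight, ...) to the statistical model's shape coefficients. This is purely an illustrative stand-in, with hypothetical helper names; the paper learns its mapping from the statistical body model itself rather than by this simple regression.

```python
import numpy as np

def fit_semantic_mapping(semantic, shape_coeffs):
    """Fit an affine map from semantic parameters to shape coefficients
    via least squares. semantic: (n, k); shape_coeffs: (n, d)."""
    A = np.hstack([semantic, np.ones((len(semantic), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(A, shape_coeffs, rcond=None)
    return W  # (k + 1, d)

def apply_semantic_mapping(W, semantic):
    """Map one semantic parameter vector to shape coefficients."""
    A = np.hstack([np.atleast_2d(semantic), np.ones((1, 1))])
    return (A @ W)[0]
```

Once fitted, editing "height" or "weight" moves the model along the corresponding direction in shape-coefficient space, which is the intuitive control the paper is after.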


arXiv: Computer Vision and Pattern Recognition | 2016

Hands-free segmentation of medical volumes via binary inputs

Florian Dubost; Loïc Peter; Christian Rupprecht; Benjamin Gutierrez Becker; Nassir Navab

We propose a novel hands-free method to interactively segment 3D medical volumes. In our scenario, a human user progressively segments an organ by answering a series of questions of the form “Is this voxel inside the object to segment?”. At each iteration, the chosen question is defined as the one halving the set of candidate segmentations consistent with the answered questions. For a quick and efficient exploration, these segmentations are sampled according to the Metropolis-Hastings algorithm. Our sampling technique relies on a combination of a relaxed shape prior, a learnt probability map, and consistency with previous answers. We demonstrate the potential of our strategy on a prostate segmentation MRI dataset. Through the study of failure cases with synthetic examples, we demonstrate the adaptation potential of our method. We also show that our method outperforms two intuitive baselines: one based on random questions, the other based on thresholding the probability map.
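The Metropolis-Hastings sampling underlying the candidate set can be sketched generically. Here `log_prob` stands in for the combined score (shape prior, probability map, answer consistency) and the symmetric proposal is an assumption of this sketch; the paper's proposal moves over segmentations are more structured.

```python
import numpy as np

def metropolis_hastings(log_prob, propose, x0, n_steps, rng=None):
    """Generic Metropolis-Hastings sampler with a symmetric proposal.

    Accepts a candidate with probability min(1, p(cand) / p(current)),
    otherwise keeps the current state. Returns the chain of states.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    samples, x, lp = [], x0, log_prob(x0)
    for _ in range(n_steps):
        cand = propose(x, rng)
        lp_cand = log_prob(cand)
        if np.log(rng.uniform()) < lp_cand - lp:  # accept/reject step
            x, lp = cand, lp_cand
        samples.append(x)
    return samples
```

Because only the ratio of probabilities is needed, the target distribution over segmentations never has to be normalized, which is what makes the huge segmentation space tractable.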


arXiv: Computer Vision and Pattern Recognition | 2016

A Taxonomy and Library for Visualizing Learned Features in Convolutional Neural Networks

Felix Grün; Christian Rupprecht; Nassir Navab; Federico Tombari

Collaboration


Dive into Christian Rupprecht's collaborations.

Top Co-Authors

Anna Caroli (Mario Negri Institute for Pharmacological Research)
Chris Paxton (Johns Hopkins University)
Colin Lea (Johns Hopkins University)
Raman Arora (Johns Hopkins University)