Alexander Krull
Max Planck Society
Publications
Featured research published by Alexander Krull.
Cell | 2013
Vaishnavi Ananthanarayanan; Martin H. Schattat; Sven K. Vogel; Alexander Krull; Nenad Pavin; Iva M. Tolić-Nørrelykke
Cytoplasmic dynein is a motor protein that exerts force on microtubules. To generate force for the movement of large organelles, dynein needs to be anchored, with the anchoring sites being typically located at the cell cortex. However, the mechanism by which dyneins target sites where they can generate large collective forces is unknown. Here, we directly observe single dyneins during meiotic nuclear oscillations in fission yeast and identify the steps of the dynein binding process: from the cytoplasm to the microtubule and from the microtubule to cortical anchors. We observed that dyneins on the microtubule move either in a diffusive or directed manner, with the switch from diffusion to directed movement occurring upon binding of dynein to cortical anchors. This dual behavior of dynein on the microtubule, together with the two steps of binding, enables dyneins to self-organize into a spatial pattern needed for them to generate large collective forces.
Computer Vision and Pattern Recognition | 2016
Eric Brachmann; Frank Michel; Alexander Krull; Michael Ying Yang; Stefan Gumhold; Carsten Rother
In recent years, the task of estimating the 6D pose of object instances and complete scenes, i.e. camera localization, from a single input image has received considerable attention. Consumer RGB-D cameras have made this feasible, even for difficult, texture-less objects and scenes. In this work, we show that a single RGB image is sufficient to achieve visually convincing results. Our key concept is to model and exploit the uncertainty of the system at all stages of the processing pipeline. The uncertainty comes in the form of continuous distributions over 3D object coordinates and discrete distributions over object labels. We give three technical contributions. Firstly, we develop a regularized, auto-context regression framework which iteratively reduces uncertainty in object coordinate and object label predictions. Secondly, we introduce an efficient way to marginalize object coordinate distributions over depth. This is necessary to deal with missing depth information. Thirdly, we utilize the distributions over object labels to detect multiple objects simultaneously with a fixed budget of RANSAC hypotheses. We tested our system for object pose estimation and camera localization on commonly used data sets. We see a major improvement over competing systems.
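A minimal numerical sketch of the second contribution described above, marginalizing a per-pixel distribution over object coordinates across a discretized depth range when depth is missing; this is an illustration with hypothetical toy numbers, not the authors' code.

```python
import numpy as np

def marginalize_over_depth(p_coord_given_depth, p_depth):
    """p_coord_given_depth: (D, K) array with p(coordinate bin k | depth bin d);
    p_depth: (D,) prior over depth bins; returns the marginal p(coordinate bin k)."""
    return p_coord_given_depth.T @ p_depth   # sum_d p(k | d) * p(d)

# Hypothetical toy numbers: 4 depth bins along the viewing ray, 3 coordinate bins.
p_k_given_d = np.array([[0.7, 0.2, 0.1],
                        [0.5, 0.4, 0.1],
                        [0.2, 0.6, 0.2],
                        [0.1, 0.5, 0.4]])
p_d = np.array([0.1, 0.4, 0.4, 0.1])
print(marginalize_over_depth(p_k_given_d, p_d))   # -> [0.36, 0.47, 0.17]
```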
Nature Cell Biology | 2013
Iana Kalinina; Amitabha Nandi; Petrina Delivani; Mariola R. Chacón; Anna H. Klemm; Damien Ramunno-Johnson; Alexander Krull; Benjamin Lindner; Nenad Pavin; Iva M. Tolić-Nørrelykke
During cell division, spindle microtubules attach to chromosomes through kinetochores, protein complexes on the chromosome. The central question is how microtubules find kinetochores. According to the pioneering idea termed search-and-capture, numerous microtubules grow from a centrosome in all directions and by chance capture kinetochores. The efficiency of search-and-capture can be improved by a bias in microtubule growth towards the kinetochores, by nucleation of microtubules at the kinetochores and at spindle microtubules, by kinetochore movement, or by a combination of these processes. Here we show in fission yeast that kinetochores are captured by microtubules pivoting around the spindle pole, instead of growing towards the kinetochores. This pivoting motion of microtubules is random and independent of ATP-driven motor activity. By introducing a theoretical model, we show that the measured random movement of microtubules and kinetochores is sufficient to explain the process of kinetochore capture. Our theory predicts that the speed of capture depends mainly on how fast microtubules pivot, which was confirmed experimentally by speeding up and slowing down microtubule pivoting. Thus, pivoting motion allows microtubules to explore space laterally, as they search for targets such as kinetochores.
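A toy Monte Carlo sketch of the qualitative prediction stated above, namely that capture speed is governed by how fast microtubules pivot; this is my own illustration with arbitrary parameters, not the published theoretical model. A single orientation angle undergoes rotational diffusion with coefficient D until it enters a small capture window.

```python
import numpy as np

def capture_time(D, target=np.pi / 2, half_width=0.05, dt=0.1, rng=None):
    """Time until the microtubule orientation falls within the capture window."""
    rng = np.random.default_rng() if rng is None else rng
    theta, t = 0.0, 0.0
    while abs(((theta - target + np.pi) % (2 * np.pi)) - np.pi) > half_width:
        theta += np.sqrt(2 * D * dt) * rng.normal()   # random pivoting step
        t += dt
    return t

rng = np.random.default_rng(0)
for D in (0.02, 0.08, 0.32):   # faster pivoting should give faster capture
    times = [capture_time(D, rng=rng) for _ in range(100)]
    print(f"D = {D}: mean capture time ~ {np.mean(times):.0f} (arbitrary units)")
```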
Asian Conference on Computer Vision | 2014
Alexander Krull; Frank Michel; Eric Brachmann; Stefan Gumhold; Stephan Ihrke; Carsten Rother
This work investigates the problem of 6-Degrees-Of-Freedom (6-DOF) object tracking from RGB-D images, where the object is rigid and a 3D model of the object is known. As in many previous works, we utilize a Particle Filter (PF) framework. In order to have a fast tracker, the key aspect is to design a clever proposal distribution which works reliably even with a small number of particles. To achieve this we build on a recently developed state-of-the-art system for single image 6D pose estimation of known 3D objects, using the concept of so-called 3D object coordinates. The idea is to train a random forest that regresses the 3D object coordinates from the RGB-D image. Our key technical contribution is a two-way procedure to integrate the random forest predictions in the proposal distribution generation. This has many practical advantages, in particular better generalization ability with respect to occlusions, changes in lighting and fast-moving objects. We demonstrate experimentally that we exceed state-of-the-art on a given, public dataset. To raise the bar in terms of fast-moving objects and object occlusions, we also create a new dataset, which will be made publicly available.
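An illustrative one-dimensional particle-filter step with an observation-informed proposal distribution, which is the role the random-forest predictions play in the tracker described above; the mixture proposal, noise levels and 1D state are simplifications of my own, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(x, mu, std):
    return np.exp(-0.5 * ((x - mu) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def pf_step(particles, obs, motion_std=0.1, obs_std=0.05, mix=0.5):
    """One predict / weight / resample cycle for a 1D state."""
    n = len(particles)
    # Proposal: mixture of the motion model and a component centred on the
    # observation-driven estimate (here simply the observation itself).
    from_obs = rng.random(n) < mix
    proposed = np.where(from_obs,
                        obs + obs_std * rng.normal(size=n),
                        particles + motion_std * rng.normal(size=n))
    # Importance weights: likelihood * transition prior / proposal density.
    lik = gauss(obs, proposed, obs_std)
    prior = gauss(proposed, particles, motion_std)
    prop = mix * gauss(proposed, obs, obs_std) + (1 - mix) * gauss(proposed, particles, motion_std)
    w = lik * prior / prop
    w /= w.sum()
    # Resample every step (for simplicity) to avoid weight degeneracy.
    return proposed[rng.choice(n, size=n, p=w)]

particles = rng.normal(0.0, 1.0, size=500)
for obs in (0.20, 0.25, 0.31):        # a slowly drifting toy "pose"
    particles = pf_step(particles, obs)
print(particles.mean())               # estimate should sit near the last observation
```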
Optics Express | 2014
Alexander Krull; André Steinborn; Vaishnavi Ananthanarayanan; Damien Ramunno-Johnson; Uwe Petersohn; Iva M. Tolić-Nørrelykke
In cell biology and other fields, the automatic, accurate localization of sub-resolution objects in images is an important tool. The signal is often corrupted by multiple forms of noise, including excess noise resulting from the amplification by an electron multiplying charge-coupled device (EMCCD). Here we present our novel Nested Maximum Likelihood Algorithm (NMLA), which solves the problem of localizing multiple overlapping emitters in a setting affected by excess noise, by repeatedly solving the task of independent localization for single emitters in an excess noise-free system. NMLA dramatically improves scalability and robustness, when compared to a general purpose optimization technique. Our method was successfully applied to the in vivo localization of fluorescent proteins.
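A greedy fit-and-subtract sketch in the same spirit as decomposing multi-emitter localization into repeated single-emitter fits; this is an illustration only (plain Gaussian PSF, least-squares fits, no excess-noise model), not the NMLA itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, x0, y0, amp, sigma):
    x, y = coords
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def localize_emitters(image, n_emitters, sigma_guess=1.5):
    residual = image.astype(float).copy()
    ys, xs = np.mgrid[:image.shape[0], :image.shape[1]]
    coords = (xs.ravel(), ys.ravel())
    found = []
    for _ in range(n_emitters):
        # Single-emitter sub-problem: fit one Gaussian to the brightest spot,
        # then subtract it and repeat on the residual.
        yb, xb = np.unravel_index(residual.argmax(), residual.shape)
        p0 = (float(xb), float(yb), residual.max(), sigma_guess)
        popt, _ = curve_fit(gauss2d, coords, residual.ravel(), p0=p0)
        found.append(popt[:2])
        residual -= gauss2d(coords, *popt).reshape(image.shape)
    return np.array(found)

# Toy frame with two overlapping emitters plus noise.
ys, xs = np.mgrid[:32, :32]
frame = (gauss2d((xs, ys), 14, 16, 100, 1.5)
         + gauss2d((xs, ys), 18, 15, 80, 1.5)
         + np.random.default_rng(1).normal(0, 2, (32, 32)))
print(localize_emitters(frame, n_emitters=2))   # estimated (x, y) positions
```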
Computer Vision and Pattern Recognition | 2017
Eric Brachmann; Alexander Krull; Sebastian Nowozin; Jamie Shotton; Frank Michel; Stefan Gumhold; Carsten Rother
RANSAC is an important algorithm in robust optimization and a central building block for many computer vision applications. In recent years, traditionally hand-crafted pipelines have been replaced by deep learning pipelines, which can be trained in an end-to-end fashion. However, RANSAC has so far not been used as part of such deep learning pipelines, because its hypothesis selection procedure is non-differentiable. In this work, we present two different ways to overcome this limitation. The most promising approach is inspired by reinforcement learning, namely to replace the deterministic hypothesis selection by a probabilistic selection for which we can derive the expected loss w.r.t. all learnable parameters. We call this approach DSAC, the differentiable counterpart of RANSAC. We apply DSAC to the problem of camera localization, where deep learning has so far failed to improve on traditional approaches. We demonstrate that by directly minimizing the expected loss of the output camera poses, robustly estimated by RANSAC, we achieve an increase in accuracy. In the future, any deep learning pipeline can use DSAC as a robust optimization component.
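A minimal sketch of the probabilistic selection idea described above, as I read it from the abstract: hypotheses are weighted by a softmax over their scores and training minimizes the resulting expected task loss, which is smooth in the scores; the scores and losses below are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

def expected_loss(scores, losses, temperature=1.0):
    """scores, losses: per-hypothesis score and task loss arrays."""
    z = scores / temperature
    p = np.exp(z - z.max())
    p /= p.sum()                      # softmax selection probabilities
    return float(p @ losses)          # E_{h ~ p}[loss(h)]

scores = np.array([2.0, 0.5, -1.0])   # e.g. inlier-count-like hypothesis scores
losses = np.array([0.1, 0.4, 0.9])    # pose error of each hypothesis
print(expected_loss(scores, losses))  # smooth in `scores`, so gradients exist
# Hard argmax selection would return losses[scores.argmax()] = 0.1, but it has
# zero gradient almost everywhere, which is the limitation addressed above.
```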
Computer Vision and Pattern Recognition | 2017
Frank Michel; Alexander Kirillov; Eric Brachmann; Alexander Krull; Stefan Gumhold; Bogdan Savchynskyy; Carsten Rother
This paper addresses the task of estimating the 6D-pose of a known 3D object from a single RGB-D image. Most modern approaches solve this task in three steps: i) compute local features, ii) generate a pool of pose-hypotheses, iii) select and refine a pose from the pool. This work focuses on the second step. While all existing approaches generate the hypotheses pool via local reasoning, e.g. RANSAC or Hough-Voting, we are the first to show that global reasoning is beneficial at this stage. In particular, we formulate a novel fully-connected Conditional Random Field (CRF) that outputs a very small number of pose-hypotheses. Despite the potential functions of the CRF being non-Gaussian, we give a new, efficient two-step optimization procedure, with some guarantees for optimality. We utilize our global hypotheses generation procedure to produce results that exceed state-of-the-art for the challenging Occluded Object Dataset.
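For context, the "local reasoning" baseline mentioned above typically builds the hypothesis pool by sampling minimal sets of predicted 3D-3D correspondences and fitting a rigid transform; a generic sketch of that baseline follows (it does not show the paper's CRF, and the toy pose and noise levels are my own assumptions).

```python
import numpy as np

def kabsch(obj_pts, cam_pts):
    """Least-squares rigid transform (R, t) with cam_pts ~ R @ obj_pts + t."""
    co, cc = obj_pts.mean(axis=0), cam_pts.mean(axis=0)
    H = (obj_pts - co).T @ (cam_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cc - R @ co

def hypothesis_pool(obj_pts, cam_pts, n_hypotheses=8, rng=None):
    """Pose hypotheses from randomly sampled minimal (3-point) correspondence sets."""
    rng = np.random.default_rng() if rng is None else rng
    pool = []
    for _ in range(n_hypotheses):
        idx = rng.choice(len(obj_pts), size=3, replace=False)
        pool.append(kabsch(obj_pts[idx], cam_pts[idx]))
    return pool

# Toy data: object-coordinate points observed under a known ground-truth pose.
rng = np.random.default_rng(0)
obj = rng.normal(size=(50, 3))
ang = 0.7
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.0, 0.5])
cam = obj @ R_true.T + t_true + rng.normal(0.0, 0.01, size=(50, 3))
pool = hypothesis_pool(obj, cam, rng=rng)
print(pool[0][1])   # translation of one hypothesis; should lie near t_true
```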
British Machine Vision Conference | 2015
Frank Michel; Alexander Krull; Eric Brachmann; Michael Ying Yang; Stefan Gumhold; Carsten Rother
Accurate pose estimation of object instances is a key aspect in many applications, including augmented reality or robotics. For example, a task of a domestic robot could be to fetch an item from an open drawer. The poses of both the drawer and the item have to be known by the robot in order to fulfil the task. 6D pose estimation of rigid objects has been addressed with great success in recent years. In large part, this has been due to the advent of consumer-level RGB-D cameras, which provide rich, robust input data. However, the practical use of state-of-the-art pose estimation approaches is limited by the assumption that objects are rigid. In cluttered, domestic environments this assumption often does not hold. Examples are doors, many types of furniture, certain electronic devices and toys. A robot might encounter these items in any state of articulation. This work considers the task of one-shot pose estimation of articulated object instances from an RGB-D image. In particular, we address objects with the topology of a kinematic chain of any length, i.e. objects are composed of a chain of parts interconnected by joints. We restrict joints to either revolute joints with 1 DOF (degrees of freedom) rotational movement or prismatic joints with 1 DOF translational movement. This topology covers a wide range of common objects (see our dataset for examples). However, our approach can easily be expanded to any topology, and to joints with higher degrees of freedom.
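A toy forward-kinematics sketch of the object topology described above, a chain of rigid parts connected by 1-DOF revolute or prismatic joints; the joint axes and example articulation values are illustrative assumptions, and this is not the paper's estimation method.

```python
import numpy as np

def revolute(angle):
    """Homogeneous transform: rotation about the joint's local z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def prismatic(offset):
    """Homogeneous transform: translation along the joint's local x-axis."""
    T = np.eye(4)
    T[0, 3] = offset
    return T

def chain_pose(base_pose, joints, params):
    """Pose of the last part, given the base pose, joint types and joint values."""
    T = base_pose.copy()
    for joint, q in zip(joints, params):
        T = T @ (revolute(q) if joint == "revolute" else prismatic(q))
    return T

base = np.eye(4)
# A drawer-like object (one prismatic joint) and a door-like object (one revolute joint).
print(chain_pose(base, ["prismatic"], [0.15])[:3, 3])       # drawer pulled out 0.15 m
print(chain_pose(base, ["revolute"], [np.pi / 4])[:3, :3])  # door opened by 45 degrees
```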
Computer Vision and Pattern Recognition | 2017
Alexander Krull; Eric Brachmann; Sebastian Nowozin; Frank Michel; Jamie Shotton; Carsten Rother
State-of-the-art computer vision algorithms often achieve efficiency by making discrete choices about which hypotheses to explore next. This allows allocation of computational resources to promising candidates; however, such decisions are non-differentiable. As a result, these algorithms are hard to train in an end-to-end fashion. In this work we propose to learn an efficient algorithm for the task of 6D object pose estimation. Our system optimizes the parameters of an existing state-of-the-art pose estimation system using reinforcement learning, where the pose estimation system now becomes the stochastic policy, parametrized by a CNN. Additionally, we present an efficient training algorithm that dramatically reduces computation time. We show empirically that our learned pose estimation procedure makes better use of limited resources and improves upon the state-of-the-art on a challenging dataset. Our approach enables differentiable end-to-end training of complex algorithmic pipelines and learns to make optimal use of a given computational budget.
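A minimal REINFORCE-style sketch of the underlying idea, training a softmax policy over discrete choices with the score-function gradient of the expected loss, which is one standard way to optimize through non-differentiable decisions; the three-choice toy problem and per-choice losses are hypothetical, not the paper's CNN policy.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)                        # policy logits for 3 hypothetical choices
true_losses = np.array([0.9, 0.2, 0.6])    # per-choice loss, unknown to the learner

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr, baseline = 0.5, 0.0
for step in range(200):
    p = softmax(theta)
    a = rng.choice(3, p=p)                        # sample an action (discrete choice)
    loss = true_losses[a] + 0.05 * rng.normal()   # noisy observed task loss
    grad_logp = -p
    grad_logp[a] += 1.0                           # d log pi(a) / d theta
    theta -= lr * (loss - baseline) * grad_logp   # descend the expected loss
    baseline = 0.9 * baseline + 0.1 * loss        # variance-reducing baseline

print(softmax(theta))   # probability mass should concentrate on the low-loss choice
```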
International Conference on Robotics and Automation | 2017
Daniela Massiceti; Alexander Krull; Eric Brachmann; Carsten Rother; Philip H. S. Torr
This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step, Random Forests (RFs) are typically used. On the other hand, Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets; however, this does not translate to final 6D camera pose accuracy, where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset [1]. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.
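One way to make a robust average differentiable is to run a fixed number of Weiszfeld-style reweighting iterations toward the geometric median, using only smooth operations so gradients can flow through the ensemble aggregation; the sketch below illustrates that general idea and is not necessarily the paper's exact formulation.

```python
import numpy as np

def soft_robust_average(points, n_iter=10, eps=1e-3):
    """points: (N, D) ensemble predictions; returns a robust (D,) estimate."""
    centre = points.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(points - centre, axis=1)
        w = 1.0 / (d + eps)            # down-weight far-away (outlier) members
        w /= w.sum()
        centre = w @ points
    return centre

preds = np.array([[1.00, 2.00],
                  [1.02, 1.98],
                  [0.99, 2.01],
                  [9.00, -5.0]])       # one gross outlier in the ensemble
print(preds.mean(axis=0))              # the plain mean is dragged off
print(soft_robust_average(preds))      # the robust average stays near (1, 2)
```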