Carlos Joaquin Becker
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Carlos Joaquin Becker.
Medical Image Computing and Computer-Assisted Intervention | 2013
Carlos Joaquin Becker; Roberto Rigamonti; Vincent Lepetit; Pascal Fua
We present a novel, fully discriminative method for curvilinear structure segmentation that simultaneously learns a classifier and the features it relies on. Our approach requires almost no parameter tuning and, in the case of 2D images, removes the requirement for hand-designed features, thus freeing the practitioner from the time-consuming tasks of parameter and feature selection. Our approach relies on the Gradient Boosting framework to learn discriminative convolutional filters in closed form at each stage, and can operate on raw image pixels as well as additional data sources, such as the output of other methods like the Optimally Oriented Flux. We show that it outperforms state-of-the-art curvilinear segmentation methods on both 2D images and 3D image stacks.
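The closed-form weak-learner idea can be sketched in a few lines: at each boosting stage, a linear filter over vectorized patches is fit to the current residuals by ridge regression (a single closed-form solve), and its shrunken response is added to the ensemble score. This is a simplified squared-loss sketch with hypothetical names and settings, not the paper's exact formulation.

```python
import numpy as np

def boost_filters(patches, labels, n_stages=10, shrinkage=0.5, ridge=1e-3):
    """Gradient boosting where each weak learner is a linear filter fit
    in closed form (ridge regression on the current residuals).
    patches: (N, D) vectorized image patches; labels: (N,) in {-1, +1}.
    Squared loss is used for simplicity."""
    N, D = patches.shape
    score = np.zeros(N)
    filters = []
    for _ in range(n_stages):
        residual = labels - score               # negative gradient of squared loss
        # Closed-form ridge solution: w = (X^T X + lam I)^-1 X^T r
        A = patches.T @ patches + ridge * np.eye(D)
        w = np.linalg.solve(A, patches.T @ residual)
        score += shrinkage * (patches @ w)
        filters.append(w)
    return filters, score
```

With a shrinkage of 0.5 the residuals halve at every stage, so a handful of stages already reaches the ridge solution on this toy setup; the learned `filters` play the role of the discriminative convolutional filters described above.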
IEEE Transactions on Medical Imaging | 2013
Carlos Joaquin Becker; Karim Ali; Graham Knott; Pascal Fua
We present a new approach for the automated segmentation of synapses in image stacks acquired by electron microscopy (EM) that relies on image features specifically designed to take spatial context into account. These features are used to train a classifier that can effectively learn cues such as the presence of a nearby post-synaptic region. As a result, our algorithm successfully distinguishes synapses from the numerous other organelles that appear within an EM volume, including those whose local textural properties are relatively similar. Furthermore, as a by-product of the segmentation, our method flawlessly determines synaptic orientation, a crucial element in the interpretation of brain circuits. We evaluate our approach on three different datasets, compare it against the state-of-the-art in synapse segmentation and demonstrate our ability to reliably collect shape, density, and orientation statistics over hundreds of synapses.
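The spatial-context idea can be illustrated with a minimal feature bank: each channel is the mean intensity of a box centered at the voxel displaced by a fixed offset, so a downstream classifier can respond to cues located near a voxel rather than at it. The offsets, box size, and function names below are hypothetical, not the paper's exact features.

```python
import numpy as np

def box_mean(vol, box=5):
    """Mean intensity over a cubic box of odd side `box` (wrap-around
    borders, acceptable for an illustration)."""
    r = box // 2
    acc = np.zeros(vol.shape, dtype=float)
    for dz in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                acc += np.roll(vol, (dz, dy, dx), axis=(0, 1, 2))
    return acc / box ** 3

def context_features(vol, offsets, box=5):
    """One channel per offset: the mean intensity of a box centered at
    the voxel displaced by that offset, letting a classifier pick up
    cues such as a post-synaptic density next to a candidate cleft."""
    smoothed = box_mean(vol, box)
    chans = [np.roll(smoothed, (-dz, -dy, -dx), axis=(0, 1, 2))
             for dz, dy, dx in offsets]
    return np.stack(chans, axis=-1)
```

Stacking many such offset channels gives the classifier an explicit view of the neighborhood, which is what lets it separate synapses from organelles with similar local texture.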
Medical Image Computing and Computer-Assisted Intervention | 2014
Raphael Sznitman; Carlos Joaquin Becker; Pascal Fua
Automatic visual detection of instruments in minimally invasive surgery (MIS) can significantly augment the procedure experience for operating clinicians. In this paper, we present a novel technique for detecting surgical instruments by constructing a robust and reliable instrument-part detector. While such detectors are typically slow to evaluate, we introduce a novel early stopping scheme for multiclass ensemble classifiers which acts as a cascade and significantly reduces the computational requirements at test time, ultimately allowing it to run at frame rate. We evaluate the effectiveness of our approach on instrument detection in retinal microsurgery and laparoscopic image sequences and demonstrate significant improvements in both accuracy and speed.
Medical Image Computing and Computer-Assisted Intervention | 2012
Carlos Joaquin Becker; Karim Ali; Graham Knott; Pascal Fua
We present a new approach for the automated segmentation of excitatory synapses in image stacks acquired by electron microscopy. We rely on a large set of image features specifically designed to take spatial context into account and train a classifier that can effectively utilize cues such as the presence of a nearby post-synaptic region. As a result, our algorithm successfully distinguishes synapses from the numerous other organelles that appear within an EM volume, including those whose local textural properties are relatively similar. This enables us to achieve very high detection rates with very few false positives.
International Conference on Computer Vision | 2013
Engin Türetken; Carlos Joaquin Becker; Przemyslaw Glowacki; Fethallah Benmansour; Pascal Fua
We propose a new approach to detecting irregular curvilinear structures in noisy image stacks. In contrast to earlier approaches that rely on circular models of the cross-sections, ours allows for the arbitrarily-shaped ones that are prevalent in biological imagery. This is achieved by maximizing the image gradient flux along multiple directions and radii, instead of only two with a unique radius as is usually done. This yields a more complex optimization problem for which we propose a computationally efficient solution. We demonstrate the effectiveness of our approach on a wide range of challenging gray scale and color datasets and show that it outperforms existing techniques, especially on very irregular structures.
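The multi-direction, multi-radius flux measurement can be sketched in 2D: for each pixel, sample the image gradient at several radii along several directions and record the inward flux of each sample separately, instead of summing a single circle of one radius. Keeping per-direction responses is what allows arbitrarily-shaped cross-sections to score well. Sampling below is nearest-neighbor for brevity; the paper's formulation is more careful, and all names are illustrative.

```python
import numpy as np

def directional_flux(image, radii, n_dirs=16):
    """Inward image-gradient flux sampled per direction and per radius.
    Returns an array of shape (len(radii), n_dirs, H, W)."""
    gy, gx = np.gradient(image.astype(float))
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    angles = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    flux = np.zeros((len(radii), n_dirs, H, W))
    for i, r in enumerate(radii):
        for j, a in enumerate(angles):
            uy, ux = np.sin(a), np.cos(a)
            # nearest-neighbor sample of the gradient at distance r along u
            py = np.clip(np.round(ys + r * uy).astype(int), 0, H - 1)
            px = np.clip(np.round(xs + r * ux).astype(int), 0, W - 1)
            # inward flux through this boundary point
            flux[i, j] = -(gy[py, px] * uy + gx[py, px] * ux)
    return flux
```

For a bright tubular or blob-like structure the gradient points inward at its boundary, so the inward flux is positive at the centerline; a detector can then combine the per-direction responses instead of assuming a circular cross-section.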
Computer Vision and Pattern Recognition | 2013
Raphael Sznitman; Carlos Joaquin Becker; François Fleuret; Pascal Fua
Cascade-style approaches to implementing ensemble classifiers can deliver significant speed-ups at test time. While highly effective, they remain challenging to tune and their overall performance depends on the availability of large validation sets to estimate rejection thresholds. These characteristics are often prohibitive and thus limit their applicability. We introduce an alternative approach to speeding up classifier evaluation which overcomes these limitations. It involves maintaining a probability estimate of the class label at each intermediary response and stopping when the corresponding uncertainty becomes small enough. As a result, the evaluation terminates early based on the sequence of responses observed. Furthermore, it does so independently of the type of ensemble classifier used or the way it was trained. We show through extensive experimentation that our method provides 2- to 10-fold speed-ups over existing state-of-the-art methods, at almost no loss in accuracy, on a number of object classification tasks.
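The flavor of early stopping can be shown with a deterministic simplification: evaluate the weighted weak learners one at a time and stop as soon as the remaining weight can no longer flip the sign of the running score. The paper's method instead stops when a probabilistic estimate of the label becomes certain enough, which terminates even earlier; the bound below is only a sketch.

```python
import numpy as np

def early_stop_predict(weak_scores, weights):
    """Evaluate a weighted ensemble incrementally, returning (label,
    number of weak learners actually evaluated). weak_scores: per-learner
    responses in [-1, 1]; weights: positive weights."""
    remaining = float(np.sum(np.abs(weights)))
    score = 0.0
    for t, (h, w) in enumerate(zip(weak_scores, weights), start=1):
        score += w * h
        remaining -= abs(w)
        if abs(score) > remaining:   # later learners cannot flip the sign
            return np.sign(score), t
    return np.sign(score), len(weights)
```

On easy examples most of the ensemble is never evaluated, which is where the test-time speed-up comes from.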
IEEE Transactions on Medical Imaging | 2015
Aurelien Lucchi; Pablo Márquez-Neila; Carlos Joaquin Becker; Yunpeng Li; Kevin Smith; Graham Knott; Pascal Fua
Efficient and accurate segmentation of cellular structures in microscopic data is an essential task in medical imaging. Many state-of-the-art approaches to image segmentation use structured models whose parameters must be carefully chosen for optimal performance. A popular choice is to learn them using a large-margin framework and more specifically structured support vector machines (SSVM). Although SSVMs are appealing, they suffer from certain limitations. First, they are restricted in practice to linear kernels because the more powerful nonlinear kernels cause the learning to become prohibitively expensive. Second, they require iteratively finding the most violated constraints, which is often intractable for the loopy graphical models used in image segmentation. This requires approximations that can degrade the quality of learning. In this paper, we propose three novel techniques to overcome these limitations. We first introduce a method to “kernelize” the features so that a linear SSVM framework can leverage the power of nonlinear kernels without incurring much additional computational cost. Moreover, we employ a working set of constraints to increase the reliability of approximate subgradient methods and introduce a new way to select a suitable step size at each iteration. We demonstrate the strength of our approach on both 2-D and 3-D electron microscopic (EM) image data and show consistent performance improvement over state-of-the-art approaches.
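One standard way to "kernelize" features for a linear learner is random Fourier features: an explicit map z(x) whose inner products approximate an RBF kernel, so a linear SSVM trained on z(x) behaves like a nonlinear kernel machine at linear-model cost. This is a known general-purpose construction used here for illustration; the paper's exact kernelization may differ.

```python
import numpy as np

def rff_map(X, n_features=100, gamma=1.0, seed=0):
    """Random Fourier feature map z(x) such that
    z(x) . z(y) ~= exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # w ~ N(0, 2*gamma*I) is the spectral density of the RBF kernel
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

The approximation error shrinks like 1/sqrt(n_features), so a few thousand features usually suffice, and the downstream linear training cost grows only linearly in that number.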
IEEE Transactions on Medical Imaging | 2015
Carlos Joaquin Becker; C. Mario Christoudias; Pascal Fua
Electron and light microscopy imaging can now deliver high-quality image stacks of neural structures. However, the amount of human annotation effort required to analyze them remains a major bottleneck. While machine learning algorithms can be used to help automate this process, they require training data, which is time-consuming to obtain manually, especially in image stacks. Furthermore, due to changing experimental conditions, successive stacks often exhibit differences that are severe enough to make it difficult to use a classifier trained for a specific one on another. This means that this tedious annotation process has to be repeated for each new stack. In this paper, we present a domain adaptation algorithm that addresses this issue by effectively leveraging labeled examples across different acquisitions and significantly reducing the annotation requirements. Our approach can handle complex, nonlinear image feature transformations and scales to large microscopy datasets that often involve high-dimensional feature spaces and large 3D data volumes. We evaluate our approach on four challenging electron and light microscopy applications that exhibit very different image modalities and where annotation is very costly. Across all applications we achieve a significant improvement over the state-of-the-art machine learning methods and demonstrate our ability to greatly reduce human annotation effort.
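A far simpler linear baseline conveys the goal of adaptation across acquisitions: whiten the source features and re-color them with the target covariance (CORAL-style alignment) so a classifier trained on the aligned source transfers better. The paper's method handles nonlinear feature shifts and uses labeled examples across acquisitions; this sketch only matches second-order statistics and is not the paper's algorithm.

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-3):
    """Align source features Xs to target features Xt by matching means
    and covariances (both (N, D) arrays). Returns the transformed source."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def sqrtm(C, inv=False):
        # matrix (inverse) square root via eigendecomposition; C is SPD
        vals, vecs = np.linalg.eigh(C)
        vals = 1.0 / np.sqrt(vals) if inv else np.sqrt(vals)
        return (vecs * vals) @ vecs.T

    return (Xs - Xs.mean(0)) @ sqrtm(Cs, inv=True) @ sqrtm(Ct) + Xt.mean(0)
```

After alignment the source statistics match the target's, which is the weakest form of the "leverage labels across acquisitions" idea; handling the nonlinear transformations described above requires the richer machinery of the paper.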
Medical Image Computing and Computer-Assisted Intervention | 2014
Aurelien Lucchi; Carlos Joaquin Becker; Pablo Márquez Neila; Pascal Fua
In this paper, we improve upon earlier approaches to segmenting mitochondria in Electron Microscopy images by explicitly modeling the double membrane that encloses mitochondria, as well as using features that capture context over an extended neighborhood. We demonstrate that this results in both improved classification accuracy and reduced computational requirements for training.
Medical Image Computing and Computer-Assisted Intervention | 2016
Róger Bermúdez-Chacón; Carlos Joaquin Becker; Mathieu Salzmann; Pascal Fua
While Machine Learning algorithms are key to automating organelle segmentation in large EM stacks, they require annotated data, which is hard to come by in sufficient quantities. Furthermore, images acquired from one part of the brain are not always representative of another due to the variability in the acquisition and staining processes. Therefore, a classifier trained on the first may perform poorly on the second and additional annotations may be required. To remove this cumbersome requirement, we introduce an Unsupervised Domain Adaptation approach that can leverage annotated data from one brain area to train a classifier that applies to another for which no labeled data is available. To this end, we establish noisy visual correspondences between the two areas and develop a Multiple Instance Learning approach to exploiting them. We demonstrate the benefits of our approach over several baselines for the purpose of synapse and mitochondria segmentation in EM stacks of different parts of mouse brains.