
Publication


Featured research published by Tal Arbel.


IEEE Transactions on Medical Imaging | 2015

The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

Bjoern H. Menze; András Jakab; Stefan Bauer; Jayashree Kalpathy-Cramer; Keyvan Farahani; Justin S. Kirby; Yuliya Burren; Nicole Porz; Johannes Slotboom; Roland Wiest; Levente Lanczi; Elizabeth R. Gerstner; Marc-André Weber; Tal Arbel; Brian B. Avants; Nicholas Ayache; Patricia Buendia; D. Louis Collins; Nicolas Cordier; Jason J. Corso; Antonio Criminisi; Tilak Das; Hervé Delingette; Çağatay Demiralp; Christopher R. Durst; Michel Dojat; Senan Doyle; Joana Festa; Florence Forbes; Ezequiel Geremia

In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
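
To make the fusion idea concrete, here is a minimal sketch of per-voxel label fusion and Dice overlap. This is not the BRATS evaluation code: it uses a plain (non-hierarchical) majority vote rather than the hierarchical scheme described in the paper, and the function names and toy data are invented for illustration.

```python
import numpy as np

def dice_score(seg_a, seg_b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote_fusion(label_maps):
    """Fuse several candidate label maps by a per-voxel majority vote.

    label_maps: list of integer arrays of identical shape, one per algorithm.
    Returns the label chosen by the most algorithms at every voxel.
    """
    stack = np.stack(label_maps, axis=0)              # (n_algorithms, ...)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)], axis=0)
    return votes.argmax(axis=0)

# Toy example: three "algorithms" segmenting a 1-D volume with labels {0, 1}.
maps = [np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0]), np.array([1, 1, 1, 0])]
fused = majority_vote_fusion(maps)                    # -> [0, 1, 1, 0]
print(dice_score(fused == 1, np.array([0, 1, 1, 0]) == 1))
```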


IEEE Transactions on Medical Imaging | 2011

Evaluation of Registration Methods on Thoracic CT: The EMPIRE10 Challenge

K. Murphy; B. van Ginneken; Joseph M. Reinhardt; Sven Kabus; Kai Ding; Xiang Deng; Kunlin Cao; Kaifang Du; Gary E. Christensen; V. Garcia; Tom Vercauteren; Nicholas Ayache; Olivier Commowick; Grégoire Malandain; Ben Glocker; Nikos Paragios; Nassir Navab; V. Gorbunova; Jon Sporring; M. de Bruijne; Xiao Han; Mattias P. Heinrich; Julia A. Schnabel; Mark Jenkinson; Cristian Lorenz; Marc Modat; Jamie R. McClelland; Sebastien Ourselin; S. E. A. Muenzing; Max A. Viergever

EMPIRE10 (Evaluation of Methods for Pulmonary Image REgistration 2010) is a public platform for fair and meaningful comparison of registration algorithms which are applied to a database of intra-patient thoracic CT image pairs. Evaluation of nonrigid registration techniques is a nontrivial task. This is compounded by the fact that researchers typically test only on their own data, which varies widely. For this reason, reliable assessment and comparison of different registration algorithms has been virtually impossible in the past. In this work we present the results of the launch phase of EMPIRE10, which comprised the comprehensive evaluation and comparison of 20 individual algorithms from leading academic and industrial research groups. All algorithms are applied to the same set of 30 thoracic CT pairs. Algorithm settings and parameters are chosen by researchers expert in the configuration of their own method, and the evaluation is independent, using the same criteria for all participants. All results are published on the EMPIRE10 website (http://empire10.isi.uu.nl). The challenge remains ongoing and open to new participants. Full results from 24 algorithms have been published at the time of writing. This paper details the organization of the challenge, the data and evaluation methods, and the outcome of the initial launch with 20 algorithms. The gain in knowledge and future work are discussed.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Detection, Localization, and Sex Classification of Faces from Arbitrary Viewpoints and under Occlusion

Matthew Toews; Tal Arbel

This paper presents a novel framework for detecting, localizing, and classifying faces in terms of visual traits, e.g., sex or age, from arbitrary viewpoints and in the presence of occlusion. All three tasks are embedded in a general viewpoint-invariant model of object class appearance derived from local scale-invariant features, where features are probabilistically quantified in terms of their occurrence, appearance, geometry, and association with visual traits of interest. An appearance model is first learned for the object class, after which a Bayesian classifier is trained to identify the model features indicative of visual traits. The framework can be applied in realistic scenarios in the presence of viewpoint changes and partial occlusion, unlike other techniques assuming data that are single viewpoint, upright, prealigned, and cropped from background distraction. Experimentation establishes the first result for sex classification from arbitrary viewpoints, an equal error rate of 16.3 percent, based on the color FERET database. The method is also shown to work robustly on faces in cluttered imagery from the CMU profile database. A comparison with the geometry-free bag-of-words model shows that geometrical information provided by our framework improves classification. A comparison with support vector machines demonstrates that Bayesian classification results in superior performance.
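
The reported equal error rate is a standard summary of a binary classifier's score distribution. As a reading aid, the sketch below shows one generic way to compute an equal error rate from scores; it is not the paper's evaluation code, and the scores and labels are invented.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Equal error rate: the operating point where the false-positive rate
    equals the false-negative rate.

    scores: higher means more confidence in the positive class (e.g. one sex).
    labels: 1 for the positive class, 0 otherwise.
    """
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    best = (np.inf, None)
    for t in np.unique(scores):
        pred = scores >= t
        fpr = np.mean(pred[labels == 0]) if np.any(labels == 0) else 0.0
        fnr = np.mean(~pred[labels == 1]) if np.any(labels == 1) else 0.0
        gap = abs(fpr - fnr)
        if gap < best[0]:
            best = (gap, 0.5 * (fpr + fnr))
    return best[1]

# Toy scores: positives tend to score higher than negatives.
print(equal_error_rate([0.9, 0.8, 0.4, 0.7, 0.3, 0.2], [1, 1, 1, 0, 0, 0]))
```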


NeuroImage | 2010

Feature-Based Morphometry: Discovering Group-related Anatomical Patterns

Matthew Toews; William M. Wells; D. Louis Collins; Tal Arbel

This paper presents feature-based morphometry (FBM), a new fully data-driven technique for discovering patterns of group-related anatomical structure in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between subjects, FBM explicitly aims to identify distinctive anatomical patterns that may only be present in subsets of subjects, due to disease or anatomical variability. The image is modeled as a collage of generic, localized image features that need not be present in all subjects. Scale-space theory is applied to analyze image features at the characteristic scale of underlying anatomical structures, instead of at arbitrary scales such as global or voxel-level. A probabilistic model describes features in terms of their appearance, geometry, and relationship to subject groups, and is automatically learned from a set of subject images and group labels. Features resulting from learning correspond to group-related anatomical structures that can potentially be used as image biomarkers of disease or as a basis for computer-aided diagnosis. The relationship between features and groups is quantified by the likelihood of feature occurrence within a specific group vs. the rest of the population, and feature significance is quantified in terms of the false discovery rate. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer's (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and an equal error classification rate of 0.80 is achieved for subjects aged 60-80 years exhibiting mild AD (CDR=1).
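
The two quantities named in the abstract, a group-vs-rest occurrence likelihood ratio and a false discovery rate, are both standard and easy to illustrate. The sketch below is not the FBM implementation: it shows a generic occurrence ratio and the Benjamini-Hochberg FDR procedure, with all names and parameters chosen for illustration.

```python
import numpy as np

def occurrence_likelihood_ratio(in_group, out_group, eps=1e-6):
    """Ratio of a feature's occurrence rate within a group to its rate in the
    rest of the population (values > 1 suggest association with the group).

    in_group, out_group: boolean arrays, True where the feature occurred.
    """
    p_in = (np.sum(in_group) + eps) / (len(in_group) + eps)
    p_out = (np.sum(out_group) + eps) / (len(out_group) + eps)
    return p_in / p_out

def benjamini_hochberg(p_values, alpha=0.05):
    """Boolean mask of discoveries controlling the false discovery rate at
    level alpha (Benjamini-Hochberg procedure)."""
    p = np.asarray(p_values, float)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresh
    keep = np.zeros(m, dtype=bool)
    if passed.any():
        cutoff = np.max(np.where(passed)[0])
        keep[order[:cutoff + 1]] = True
    return keep
```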


International Conference on Computer Vision | 1999

Viewpoint selection by navigation through entropy maps

Tal Arbel; Frank P. Ferrie

In this paper, we show how entropy maps can be used to guide an active observer along an optimal trajectory, by which the identity and pose of objects in the world can be inferred with confidence, while minimizing the amount of data that must be gathered. Specifically, we consider the case of active object recognition, where entropy maps are used to encode prior knowledge about the discriminability of objects as a function of viewing position. The paper describes how these maps are computed using optical flow signatures as a case study, and how a gaze-planning strategy can be formulated by using entropy minimization as a basis for choosing a next best view. Experimental results are presented which show the strategy's effectiveness for active object recognition using a single monochrome television camera.
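
The core decision rule, pick the viewpoint whose stored entropy is lowest, is simple to sketch. The snippet below is only an illustration of that idea, not the paper's system; the dictionary-based entropy map and the candidate views are invented.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete distribution over object hypotheses."""
    p = np.asarray(p, float)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + eps)))

def next_best_view(entropy_map):
    """Pick the viewing position with the lowest stored entropy, i.e. the
    viewpoint expected to be most discriminative.

    entropy_map: dict mapping a candidate view -> precomputed entropy of the
    recognition posterior for observations taken from that view.
    """
    return min(entropy_map, key=entropy_map.get)

# Toy entropy map over four candidate viewpoints (radians around the object).
emap = {0.0: 1.2, 1.57: 0.4, 3.14: 0.9, 4.71: 0.7}
print(next_best_view(emap))   # -> 1.57, the most discriminative view
```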


Computer Aided Surgery | 2004

Automatic non-linear MRI-ultrasound registration for the correction of intra-operative brain deformations

Tal Arbel; Xavier Morandi; Roch M. Comeau; D. L. Collins

Objective: Movements of brain tissue during neurosurgical procedures reduce the effectiveness of using pre-operative images for intra-operative surgical guidance. In this paper, we explore the use of acquiring intra-operative ultrasound (US) images for the quantification of and correction for non-linear brain deformations. Materials and Methods: We will present a multi-modal registration strategy that automatically matches pre-operative images (e.g., MRI) to intra-operative US to correct for these deformations. The strategy involves using the predicted appearance of neuroanatomical structures in US images to build “pseudo ultrasound” images based on pre-operative segmented MRI. These images can then be non-linearly registered to intra-operative US using cross-correlation measurements within the ANIMAL package. The feasibility of the theory is demonstrated through its application to clinical patient data acquired during 12 neurosurgical procedures. Results: Results of applying the method to 12 surgical cases, including those with brain tumors and selective amygdalo-hippocampectomies, indicate that our strategy significantly recovers from non-linear brain deformations occurring during surgery. Quantitative results at tumor boundaries indicate up to 87% correction for brain shift. Conclusions: Qualitative and quantitative examination of the results indicate that the system is able to correct for non-linear brain deformations in clinical patient data.
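
The non-linear registration step relies on a cross-correlation similarity measure between the pseudo-ultrasound and the intra-operative ultrasound. The following is a generic normalized cross-correlation sketch, not the ANIMAL package's implementation; the function name and toy images are assumptions for illustration.

```python
import numpy as np

def normalized_cross_correlation(a, b, eps=1e-12):
    """Normalized cross-correlation between two images of identical shape.

    Returns a value in [-1, 1]; higher means the intensity patterns agree,
    which is the kind of similarity a registration optimizer would maximize.
    """
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b)) + eps
    return float(np.sum(a * b) / denom)

# Toy check: an image correlates perfectly with itself.
img = np.random.rand(8, 8)
print(normalized_cross_correlation(img, img))   # ~1.0
```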


Image and Vision Computing | 2001

Entropy-based gaze planning

Tal Arbel; Frank P. Ferrie

This paper describes an algorithm for recognizing known objects in an unstructured environment (e.g. landmarks) from measurements acquired with a single monochrome television camera mounted on a mobile observer. The approach is based on the concept of an entropy map, which is used to guide the mobile observer along an optimal trajectory that minimizes the ambiguity of recognition as well as the amount of data that must be gathered. Recognition itself is based on the optical flow signatures that result from the camera motion; these signatures are inherently ambiguous due to the confounding of motion, structure and imaging parameters. We show how gaze planning partially alleviates this problem by generating trajectories that maximize discriminability. A sequential Bayes approach is used to handle the remaining ambiguity by accumulating evidence for different object hypotheses over time until a clear assertion can be made. Results from an experimental recognition system using a gantry-mounted television camera are presented to show the effectiveness of the algorithm on a large class of common objects.
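
The sequential Bayes accumulation of evidence mentioned above amounts to repeatedly multiplying a belief over object hypotheses by the likelihood of each new observation and renormalizing. The sketch below shows that generic update; it is not the paper's recognition system, and the hypothesis counts and likelihood values are invented.

```python
import numpy as np

def sequential_bayes_update(prior, likelihoods):
    """One recognition step: multiply the prior over object hypotheses by the
    likelihood of the new observation under each hypothesis, then renormalize.

    prior: array of hypothesis probabilities, sums to 1.
    likelihoods: p(observation | hypothesis), one entry per hypothesis.
    """
    posterior = np.asarray(prior, float) * np.asarray(likelihoods, float)
    return posterior / posterior.sum()

# Three object hypotheses, two successive ambiguous observations.
belief = np.array([1 / 3, 1 / 3, 1 / 3])
for lik in ([0.6, 0.3, 0.1], [0.7, 0.2, 0.1]):
    belief = sequential_bayes_update(belief, lik)
print(belief)   # evidence accumulates in favour of the first hypothesis
```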


IEEE Transactions on Medical Imaging | 2012

Multi-Modal Image Registration Based on Gradient Orientations of Minimal Uncertainty

Dante De Nigris; D. L. Collins; Tal Arbel

In this paper, we propose a new multi-scale technique for multi-modal image registration based on the alignment of selected gradient orientations of reduced uncertainty. We show how the registration robustness and accuracy can be improved by restricting the evaluation of gradient orientation alignment to locations where the uncertainty of fixed image gradient orientations is minimal, which we formally demonstrate correspond to locations of high gradient magnitude. We also embed a computationally efficient technique for estimating the gradient orientations of the transformed moving image (rather than resampling pixel intensities and recomputing image gradients). We have applied our method to different rigid multi-modal registration contexts. Our approach outperforms mutual information and other competing metrics in the context of rigid multi-modal brain registration, where we show sub-millimeter accuracy with cases obtained from the retrospective image registration evaluation project. Furthermore, our approach shows significant improvements over standard methods in the highly challenging clinical context of image guided neurosurgery, where we demonstrate misregistration of less than 2 mm with relation to expert selected landmarks for the registration of pre-operative brain magnetic resonance images to intra-operative ultrasound images.
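
The key restriction described above, evaluating orientation agreement only where the fixed-image gradient magnitude is high, can be sketched in a few lines. The snippet below is a simplified 2-D illustration under assumptions of my own (a plain cosine alignment term and a quantile-based magnitude threshold), not the metric or implementation from the paper.

```python
import numpy as np

def orientation_alignment(fixed, moving, keep_fraction=0.05, eps=1e-12):
    """Similarity based on gradient orientation agreement, evaluated only at
    the fixed-image locations with the largest gradient magnitude (where
    orientation estimates are least uncertain)."""
    fgy, fgx = np.gradient(np.asarray(fixed, float))
    mgy, mgx = np.gradient(np.asarray(moving, float))
    mag = np.hypot(fgx, fgy)
    # Keep only the strongest fixed-image gradients.
    thresh = np.quantile(mag, 1.0 - keep_fraction)
    mask = mag >= thresh
    # Cosine of the angle between fixed and moving gradients at those points.
    dot = fgx[mask] * mgx[mask] + fgy[mask] * mgy[mask]
    norms = np.hypot(fgx[mask], fgy[mask]) * np.hypot(mgx[mask], mgy[mask]) + eps
    return float(np.mean(dot / norms))

# Toy check: an image is perfectly aligned with itself.
img = np.random.rand(32, 32)
print(orientation_alignment(img, img))   # ~1.0
```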


Medical Image Analysis | 2011

Evaluating intensity normalization on MRIs of human brain with multiple sclerosis

Mohak Shah; Yiming Xiao; Nagesh K. Subbanna; Simon J. Francis; Douglas L. Arnold; D. Louis Collins; Tal Arbel

Intensity normalization is an important pre-processing step in the study and analysis of Magnetic Resonance Images (MRI) of human brains. As most parametric supervised automatic image segmentation and classification methods base their assumptions regarding the intensity distributions on a standardized intensity range, intensity normalization takes on a very significant role. One of the fast and accurate approaches proposed for intensity normalization is that of Nyul and colleagues. In this work, we present, for the first time, an extensive validation of this approach in a real clinical domain where, even after intensity inhomogeneity correction that accounts for scanner-specific artifacts, the MRI volumes can be affected by variations such as data heterogeneity resulting from multi-site multi-scanner acquisitions, the presence of multiple sclerosis (MS) lesions and the stage of disease progression in the brain. Using the distributional divergence criteria, we evaluate the effectiveness of the normalization in rendering, under the distributional assumptions of segmentation approaches, intensities that are more homogeneous for the same tissue type while simultaneously resulting in better tissue type separation. We also demonstrate the advantage of the decile-based piece-wise linear approach on the task of MS lesion segmentation against a linear normalization approach over three image segmentation algorithms: a standard Bayesian classifier, an outlier detection based approach and a Bayesian classifier with Markov Random Field (MRF) based post-processing. Finally, to demonstrate the independence of the effectiveness of normalization from the complexity of the segmentation algorithm, we evaluate the Nyul method against the linear normalization on Bayesian algorithms of increasing complexity, including a standard Bayesian classifier with Maximum Likelihood parameter estimation and a Bayesian classifier with integrated data priors, in addition to the above Bayesian classifier with MRF based post-processing to smooth the posteriors. In all relevant cases, the observed results are verified for statistical relevance using significance tests.
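
The decile-based piece-wise linear idea can be illustrated with a short sketch: match a volume's intensity deciles to landmarks on a standard scale and interpolate linearly in between. This is a simplified, single-pass stand-in for illustration only; the published method also learns the standard landmarks from a training set, and the function name, landmark choice and toy volume here are assumptions.

```python
import numpy as np

def piecewise_linear_normalize(volume, standard_landmarks):
    """Map a volume's intensity deciles onto a standard intensity scale with a
    piecewise-linear transform.

    standard_landmarks: increasing intensities for the 0th..100th percentiles
    in steps of 10 (11 values) on the standard scale.
    """
    v = np.asarray(volume, float)
    percentiles = np.linspace(0, 100, 11)          # deciles incl. min and max
    source_landmarks = np.percentile(v, percentiles)
    return np.interp(v, source_landmarks, standard_landmarks)

# Toy use: map a synthetic volume onto a 0..4095 standard scale.
vol = np.random.gamma(shape=2.0, scale=100.0, size=(4, 4, 4))
standard = np.linspace(0, 4095, 11)
normed = piecewise_linear_normalize(vol, standard)
print(normed.min(), normed.max())                  # ~0.0 and ~4095.0
```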


International Journal of Computer Vision | 2006

Efficient Discriminant Viewpoint Selection for Active Bayesian Recognition

Catherine Laporte; Tal Arbel

This paper presents a novel viewpoint selection criterion for active object recognition and pose estimation whose key advantage resides in its low computational cost with respect to current popular approaches in the literature. The proposed observation selection criterion associates high utility with observations that predictably facilitate distinction between pairs of competing hypotheses by a Bayesian classifier. Rigorous experimentation of the proposed approach was conducted on two case studies, involving synthetic and real data, respectively. The results show the proposed algorithm to perform better than a random navigation strategy in terms of the amount of data required for recognition while being much faster than a strategy based on mutual information, without compromising accuracy.
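
To give a feel for the kind of criterion described, here is a hedged sketch that scores a candidate viewpoint by how separable the competing hypotheses' predicted observations are, using a symmetric KL divergence summed over hypothesis pairs. This is a generic stand-in chosen for illustration, not the utility function defined in the paper.

```python
import numpy as np
from itertools import combinations

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two discrete
    predicted-observation distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def viewpoint_utility(predicted_obs_by_hypothesis):
    """Utility of a candidate viewpoint: how distinguishable the competing
    hypotheses' predicted observations are, summed over hypothesis pairs.

    predicted_obs_by_hypothesis: list of discrete distributions, one per
    hypothesis, describing what would be observed from this viewpoint.
    """
    return sum(symmetric_kl(p, q)
               for p, q in combinations(predicted_obs_by_hypothesis, 2))

# A viewpoint where two hypotheses predict very different observations scores
# higher than one where they predict the same thing.
print(viewpoint_utility([[0.9, 0.1], [0.1, 0.9]]))   # well separated
print(viewpoint_utility([[0.5, 0.5], [0.5, 0.5]]))   # indistinguishable -> 0.0
```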

Collaboration


Dive into Tal Arbel's collaborations.

Top Co-Authors

D. Louis Collins | Montreal Neurological Institute and Hospital

Matthew Toews | École de technologie supérieure

Catherine Laporte | École de technologie supérieure

Simon J. Francis | Montreal Neurological Institute and Hospital