Jacinto C. Nascimento
Instituto Superior Técnico
Publications
Featured research published by Jacinto C. Nascimento.
IEEE Journal of Selected Topics in Signal Processing | 2009
Margarida Silveira; Jacinto C. Nascimento; Jorge S. Marques; André R. S. Marçal; Teresa Mendonça; Syogo Yamauchi; Junji Maeda; Jorge Rozeira
In this paper, we propose and evaluate six methods for the segmentation of skin lesions in dermoscopic images. This set includes some state-of-the-art techniques which have been successfully used in many medical imaging problems (gradient vector flow (GVF) and the level set method of Chan et al. (C-LS)). It also includes a set of methods developed by the authors which were tailored to this particular application (adaptive thresholding (AT), adaptive snake (AS), EM level set (EM-LS), and the fuzzy-based split-and-merge algorithm (FBSM)). The segmentation methods were applied to 100 dermoscopic images and evaluated with four different metrics, using the segmentation result obtained by an experienced dermatologist as the ground truth. The best results were obtained by the AS and EM-LS methods, which are semi-supervised. The best fully automatic method was FBSM, with results only slightly worse than those of AS and EM-LS.
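As a toy illustration of this kind of metric-based evaluation, the sketch below computes a pixel-wise XOR (Hammoude-style) distance between a candidate segmentation and a ground-truth mask. The 1-D masks and the specific metric are illustrative choices, not the four metrics actually used in the paper.

```python
def xor_error(seg, gt):
    """XOR distance: fraction of pixels where the candidate segmentation
    and the ground-truth mask disagree, normalised by the area of their
    union. 0 means perfect agreement, 1 means no overlap at all."""
    union = sum(1 for s, g in zip(seg, gt) if s or g)
    disagree = sum(1 for s, g in zip(seg, gt) if s != g)
    return disagree / union if union else 0.0

# toy 1-D "masks": 1 = lesion, 0 = background
gt  = [0, 1, 1, 1, 1, 0, 0, 0]
seg = [0, 0, 1, 1, 1, 1, 0, 0]
print(xor_error(seg, gt))  # 2 disagreeing pixels over a 5-pixel union -> 0.4
```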
IEEE Transactions on Multimedia | 2006
Jacinto C. Nascimento; Jorge S. Marques
In this paper, we propose novel methods to evaluate the performance of object detection algorithms in video sequences. This procedure allows us to highlight characteristics (e.g., region splitting or merging) which are specific to the method being used. The proposed framework compares the output of the algorithm with the ground truth and measures the differences according to objective metrics. In this way it is possible to perform a fair comparison among different methods, evaluating their strengths and weaknesses and allowing the user to make a reliable choice of the best method for a specific application. We apply this methodology to recently proposed segmentation algorithms and describe their performance. These methods were evaluated in order to assess how well they can detect moving regions in an outdoor scene in fixed-camera situations.
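A minimal sketch of this style of evaluation, assuming axis-aligned bounding boxes and an IoU overlap threshold (both illustrative simplifications, not the paper's metrics): each ground-truth object is labelled as detected, split across several detections, or missed.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    if inter == 0:
        return 0.0
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def classify_frame(detections, ground_truth, thr=0.2):
    """Label each ground-truth object by how the detector covered it:
    'detected' (exactly one overlapping detection), 'split' (several
    detections overlap it), or 'miss' (none).  Merging would be handled
    symmetrically, counting ground-truth objects per detection."""
    labels = []
    for gt in ground_truth:
        hits = sum(1 for d in detections if iou(d, gt) > thr)
        labels.append('miss' if hits == 0 else 'detected' if hits == 1 else 'split')
    return labels

# one person, detected as two half-body regions -> a region-splitting event
print(classify_frame([(0, 0, 5, 10), (5, 0, 10, 10)], [(0, 0, 10, 10)]))  # ['split']
```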
International Conference on Computer Communications and Networks | 2005
D. Hall; Jacinto C. Nascimento; P. Ribeiro; E. Andrade; Plinio Moreno; S. Pesnel; Thor List; R. Emonet; Robert B. Fisher; J.S. Victor; J.L. Crowley
This article compares the performance of target detectors based on adaptive background differencing on public benchmark data. Five state-of-the-art methods are described, and their performance is evaluated using state-of-the-art measures with respect to ground truth. The original contributions are the comparison against hand-labelled ground truth and the evaluation on a large database. The simpler methods, LOTS and SGM, are more appropriate to the particular task than MGM, which uses a more complex background model.
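The common idea behind these detectors can be sketched with a running-average background model on a 1-D "frame"; the update rule and threshold below are a generic illustration, not any of the five evaluated methods.

```python
def update_background(bg, frame, alpha=0.05, thr=30):
    """One step of adaptive background differencing: pixels that differ
    from the background by more than `thr` are flagged as foreground; the
    background is slowly updated with a running average, but only at
    background pixels, so moving targets are not absorbed into it."""
    fg = [abs(f - b) > thr for f, b in zip(frame, bg)]
    bg = [b if m else (1 - alpha) * b + alpha * f
          for f, b, m in zip(frame, bg, fg)]
    return bg, fg

bg = [100.0, 100.0, 100.0, 100.0]
frame = [102.0, 98.0, 200.0, 100.0]
bg, fg = update_background(bg, frame)
print(fg)  # only the pixel that jumped to 200 is flagged as foreground
```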
IEEE Transactions on Image Processing | 2008
João M. Sanches; Jacinto C. Nascimento; Jorge S. Marques
Multiplicative noise is often present in medical and biological imaging, such as magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), single photon emission computed tomography (SPECT), and fluorescence microscopy. Noise reduction in medical images is a difficult task in which linear filtering algorithms usually fail. Bayesian algorithms have been used with success but they are time consuming and computationally demanding. In addition, the increasing importance of 3-D and 4-D medical image analysis in medical diagnosis procedures increases the amount of data that must be efficiently processed. This paper presents a Bayesian denoising algorithm which copes with additive white Gaussian noise and with multiplicative noise described by Poisson and Rayleigh distributions. The algorithm is based on the maximum a posteriori (MAP) criterion, and edge-preserving priors which avoid the distortion of relevant anatomical details. The main contribution of the paper is the unification of a set of Bayesian denoising algorithms for additive and multiplicative noise using a well-known mathematical framework, the Sylvester-Lyapunov equation, developed in the context of control theory.
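For intuition only, the toy below denoises a 1-D signal by gradient descent on a MAP objective with a Gaussian data term and an edge-preserving (Huber-type) prior. The paper's algorithm instead solves a Sylvester-Lyapunov equation and also handles the multiplicative-noise cases, none of which this sketch reproduces.

```python
def huber_grad(t, delta):
    # derivative of the Huber penalty: quadratic near 0, linear in the
    # tails, so large jumps (edges) are penalised less than by a pure
    # quadratic prior and are therefore better preserved
    return t if abs(t) <= delta else delta * (1 if t > 0 else -1)

def map_denoise_1d(y, lam=1.0, delta=0.5, step=0.1, iters=500):
    """Gradient descent on  sum_i (x_i - y_i)^2 + lam * sum_i huber(x_{i+1} - x_i)."""
    x = list(y)
    for _ in range(iters):
        g = [2 * (xi - yi) for xi, yi in zip(x, y)]       # data term
        for i in range(len(x) - 1):                        # prior term
            d = huber_grad(x[i + 1] - x[i], delta)
            g[i]     -= lam * d
            g[i + 1] += lam * d
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# a noisy step: the flat parts get smoothed, the step itself survives
x = map_denoise_1d([0.1, -0.1, 0.05, 1.1, 0.9, 1.05])
```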
IEEE Transactions on Image Processing | 2010
Jacinto C. Nascimento; Mário A. T. Figueiredo; Jorge S. Marques
This paper proposes an approach for recognizing human activities (more specifically, pedestrian trajectories) in video sequences, in a surveillance context. A system for automatic processing of video information for surveillance purposes should be capable of detecting, recognizing, and collecting statistics of human activity, reducing human intervention as much as possible. In the method described in this paper, human trajectories are modeled as a concatenation of segments produced by a set of low level dynamical models. These low level models are estimated in an unsupervised fashion, based on a finite mixture formulation, using the expectation-maximization (EM) algorithm; the number of models is automatically obtained using a minimum message length (MML) criterion. This leads to a parsimonious set of models tuned to the complexity of the scene. We describe the switching among the low-level dynamic models by a hidden Markov chain; thus, the complete model is termed a switched dynamical hidden Markov model (SD-HMM). The performance of the proposed method is illustrated with real data from two different scenarios: a shopping center and a university campus. A set of human activities in both scenarios is successfully recognized by the proposed system. These experiments show the ability of our approach to properly describe trajectories with sudden changes.
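The low-level model estimation can be illustrated with plain EM on a 1-D Gaussian mixture. This toy fixes the number of components instead of selecting it with MML, and clusters points rather than trajectory segments.

```python
import math

def em_gmm_1d(data, k=2, iters=100):
    """Plain EM for a 1-D Gaussian mixture: the E-step computes soft
    assignments (responsibilities), the M-step re-estimates the weights,
    means and variances.  Means are initialised spread over the data range."""
    lo, hi = min(data), max(data)
    mu = [lo + j * (hi - lo) / (k - 1) for j in range(k)]
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: weighted re-estimation of the parameters
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(1e-6, sum(r[j] * (x - mu[j]) ** 2
                                   for r, x in zip(resp, data)) / nj)
    return w, mu, var

# two well-separated clusters; the means converge near 0.05 and 5.05
weights, means, variances = em_gmm_1d([0.0, 0.1, -0.1, 0.2, 5.0, 5.1, 4.9, 5.2])
```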
IEEE Transactions on Image Processing | 2008
Jacinto C. Nascimento; Jorge S. Marques
This paper addresses object tracking in ultrasound images using a robust multiple model tracker. The proposed tracker has the following features: 1) it uses multiple dynamic models to track the evolution of the object boundary, and 2) it models invalid observations (outliers), reducing their influence on the shape estimates. The problem considered in this paper is the tracking of the left ventricle, which is known to be a challenging problem. The heart motion presents two phases (diastole and systole) with different dynamics; the multiple models used in this tracker address this difficulty. In addition, ultrasound images are corrupted by strong multiplicative noise which prevents the use of standard deformable models. Robust estimation techniques are used to address this difficulty. The multiple model data association (MMDA) tracker proposed in this paper is based on a bank of nonlinear filters, organized in a tree structure. The algorithm determines which model is active at each instant of time and updates its state by propagating the probability distribution, using robust estimation techniques.
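The model-switching idea can be sketched as a discrete probability update over a bank of models. This generic recursion is only an illustration, not the MMDA tracker's tree-structured bank of nonlinear filters.

```python
def update_model_probs(probs, trans, likelihoods):
    """One step of discrete model-probability propagation: mix the prior
    model probabilities through the switching (Markov) matrix, weight each
    model by how well it explains the current observation, and renormalise."""
    k = len(probs)
    mixed = [sum(trans[i][j] * probs[i] for i in range(k)) for j in range(k)]
    post = [m * l for m, l in zip(mixed, likelihoods)]
    s = sum(post)
    return [p / s for p in post]

# two phases (e.g. diastole/systole) with "sticky" switching
trans = [[0.9, 0.1],
         [0.1, 0.9]]
p = [0.5, 0.5]
# an observation much better explained by model 0
p = update_model_probs(p, trans, [0.8, 0.1])
# the posterior now strongly favours model 0
```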
IEEE Transactions on Image Processing | 2012
Gustavo Carneiro; Jacinto C. Nascimento; António Freitas
We present a new supervised learning model designed for the automatic segmentation of the left ventricle (LV) of the heart in ultrasound images. We address the following problems inherent to supervised learning models: 1) the need for a large set of training images; 2) robustness to imaging conditions not present in the training data; and 3) a complex search process. The innovations of our approach reside in a formulation that decouples the rigid and nonrigid detections, deep learning methods that model the appearance of the LV, and efficient derivative-based search algorithms. The functionality of our approach is evaluated using a data set of diseased cases containing 400 annotated images (from 12 sequences) and another data set of normal cases comprising 80 annotated images (from two sequences), where both sets present long axis views of the LV. Using several error measures to compute the degree of similarity between the manual and automatic segmentations, we show that our method not only has high sensitivity and specificity but also presents variations with respect to a gold standard (computed from the manual annotations of two experts) within interuser variability on a subset of the diseased cases. We also compare the segmentations produced by our approach and by two state-of-the-art LV segmentation models on the data set of normal cases; the results show that our approach produces segmentations comparable to those of the two other approaches using only 20 training images, and that increasing the training set to 400 images makes our approach generally more accurate. Finally, we show that efficient search methods reduce the complexity of the method by up to tenfold while still producing competitive segmentations.
In the future, we plan to include a dynamical model to improve the performance of the algorithm, to use semi-supervised learning methods to further reduce the dependence on rich and large training sets, and to design a shape model that is less dependent on the training set.
IEEE Transactions on Image Processing | 2005
Jacinto C. Nascimento; Jorge S. Marques
Deformable models (e.g., snakes) perform poorly in many image analysis problems. The contour model is attracted by edge points detected in the image. However, many edge points do not belong to the object contour, preventing the active contour from converging toward the object boundary. A new algorithm is proposed in this paper to overcome this difficulty. The algorithm is based on two key ideas. First, edge points are associated in strokes. Second, each stroke is classified as valid (inlier) or invalid (outlier), and a confidence degree is associated with each stroke. The expectation-maximization algorithm is used to update the confidence degrees and to estimate the object contour. It is shown that this is equivalent to the use of an adaptive potential function which varies during the optimization process. Valid strokes receive high confidence degrees, while the confidence degrees of invalid strokes tend to zero during the optimization process. Experimental results are presented to illustrate the performance of the proposed algorithm in the presence of clutter, showing a remarkable robustness.
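The confidence-degree update can be sketched as an E-step that computes posterior inlier probabilities from each stroke's distance to the current contour estimate. The Gaussian inlier model and uniform outlier density below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def update_confidences(stroke_errors, p_inlier=0.5, sigma=1.0, outlier_density=0.05):
    """E-step-style update: a stroke close to the current contour estimate
    (small error) gets a confidence near 1; a far-away stroke (clutter)
    gets a confidence near 0.  Inliers follow a Gaussian error model,
    outliers a flat (uniform) density."""
    conf = []
    for e in stroke_errors:
        lik_in = (math.exp(-e ** 2 / (2 * sigma ** 2))
                  / math.sqrt(2 * math.pi * sigma ** 2))
        num = p_inlier * lik_in
        conf.append(num / (num + (1 - p_inlier) * outlier_density))
    return conf

# strokes at distances 0.2 and 0.5 look like contour evidence;
# the stroke at distance 4.0 is clutter and is effectively switched off
c = update_confidences([0.2, 0.5, 4.0])
```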
Medical Image Computing and Computer-Assisted Intervention | 2015
Gustavo Carneiro; Jacinto C. Nascimento; Andrew P. Bradley
We show two important findings on the use of deep convolutional neural networks (CNN) in medical image analysis. First, we show that CNN models that are pre-trained using computer vision databases (e.g., ImageNet) are useful in medical image applications, despite the significant differences in image appearance. Second, we show that multiview classification is possible without the pre-registration of the input images. Rather, we use the high-level features produced by the CNNs trained on each view separately. Focusing on the classification of mammograms using craniocaudal (CC) and mediolateral oblique (MLO) views and their respective mass and micro-calcification segmentations of the same breast, we initially train a separate CNN model for each view and each segmentation map using an ImageNet pre-trained model. Then, using the features learned from each segmentation map and unregistered views, we train a final CNN classifier that estimates the patient's risk of developing breast cancer using the Breast Imaging-Reporting and Data System (BI-RADS) score. We test our methodology on two publicly available datasets (INbreast and DDSM), containing hundreds of cases, and show that it produces a volume under ROC surface of over 0.9 and an area under ROC curve (for a 2-class problem: benign vs. malignant) of over 0.9. In general, our approach shows state-of-the-art classification results and demonstrates a new comprehensive way of addressing this challenging classification problem.
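At its simplest, the registration-free multiview fusion amounts to concatenating the high-level feature vectors from each view and segmentation map before the final classifier. The sketch below is a hypothetical stand-in: the feature values and dimensions are invented, in place of what the per-view CNNs would actually produce.

```python
def fuse_views(feature_maps):
    """High-level CNN features from each (unregistered) view/segmentation
    are simply concatenated; no spatial alignment is needed because the
    features are already abstracted away from pixel coordinates."""
    fused = []
    for f in feature_maps:
        fused.extend(f)
    return fused

# hypothetical 4-dim feature vectors from the CC and MLO views
cc, mlo = [0.1, 0.9, 0.0, 0.3], [0.2, 0.1, 0.7, 0.0]
x = fuse_views([cc, mlo])   # 8-dim input for the final BI-RADS classifier
```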
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013
Gustavo Carneiro; Jacinto C. Nascimento
We present a new statistical pattern recognition approach for the problem of left ventricle endocardium tracking in ultrasound data. The problem is formulated as a sequential importance resampling algorithm such that the expected segmentation of the current time step is estimated based on the appearance, shape, and motion models that take into account all previous and current images and previous segmentation contours produced by the method. The new appearance and shape models decouple the affine and nonrigid segmentations of the left ventricle to reduce the running time complexity. The proposed motion model combines the systole and diastole motion patterns and an observation distribution built by a deep neural network. The functionality of our approach is evaluated using a dataset of diseased cases containing 16 sequences and another dataset of normal cases comprising four sequences, where both sets present long axis views of the left ventricle. Using a training set comprising diseased and healthy cases, we show that our approach produces more accurate results than current state-of-the-art endocardium tracking methods in two test sequences from healthy subjects. Using three test sequences containing different types of cardiopathies, we show that our method correlates well with interuser statistics produced by four cardiologists.
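Sequential importance resampling itself fits in a few lines. The 1-D random-walk motion model and Gaussian observation model below are illustrative stand-ins for the paper's learned shape, appearance, and motion models.

```python
import math, random

def sir_step(particles, weights, observation, motion_std=0.5, obs_std=0.5, rng=random):
    """One sequential-importance-resampling step: propagate each particle
    through a random-walk motion model, reweight by the observation
    likelihood, then resample proportionally to the weights."""
    # predict: diffuse the particles through the motion model
    particles = [p + rng.gauss(0, motion_std) for p in particles]
    # update: reweight by a Gaussian observation model
    weights = [w * math.exp(-(p - observation) ** 2 / (2 * obs_std ** 2))
               for p, w in zip(particles, weights)]
    s = sum(weights)
    weights = [w / s for w in weights]
    # resample (multinomial), then reset to uniform weights
    particles = rng.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

# track a 1-D "state" drifting towards 3.0
rng = random.Random(1)
parts, w = [0.0] * 200, [1.0 / 200] * 200
for obs in [1.0, 2.0, 3.0, 3.0, 3.0]:
    parts, w = sir_step(parts, w, obs, rng=rng)
# the particle cloud now concentrates around the last observations
```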