Filipe Condessa
Instituto Superior Técnico
Publications
Featured research published by Filipe Condessa.
Proceedings of SPIE | 2014
José M. Bioucas-Dias; Filipe Condessa; Jelena Kovacevic
Image segmentation is fundamentally a discrete problem. It consists of finding a partition of the image domain such that the pixels in each element of the partition exhibit some kind of similarity. The solution is often obtained by minimizing an objective function containing terms measuring the consistency of the candidate partition with respect to the observed image, and regularization terms promoting solutions with desired properties. This formulation ends up being an integer optimization problem that, apart from a few exceptions, is NP-hard and thus impossible to solve exactly. This roadblock has stimulated active research aimed at computing “good” approximations to the solutions of those integer optimization problems. Relevant lines of attack have focused on the representation of the regions (i.e., the partition elements) in terms of functions, instead of subsets, and on convex relaxations which can be solved in polynomial time. In this paper, inspired by the “hidden Markov measure field” introduced by Marroquin et al. in 2003, we sidestep the discrete nature of image segmentation by formulating the problem in the Bayesian framework and introducing a hidden set of real-valued random fields determining the probability of a given partition. Armed with this model, the original discrete optimization is converted into a convex program. To infer the hidden fields, we introduce the Segmentation via the Constrained Split Augmented Lagrangian Shrinkage Algorithm (SegSALSA). The effectiveness of the proposed methodology is illustrated with simulated and real hyperspectral and medical images.
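To make the hidden-field construction concrete, the convex program can be sketched roughly as follows. This is a schematic rendering based on the description above; the notation (p_i for the per-pixel vector of class likelihoods produced by a probabilistic classifier, and a vectorial total variation regularizer VTV) is an assumption, not necessarily the paper's exact formulation.

```latex
% z_i \in \mathbb{R}^K is the hidden probability vector at pixel i (n pixels, K classes)
\min_{z_1,\dots,z_n}\; -\sum_{i=1}^{n} \ln\!\left(p_i^{\top} z_i\right)
   \;+\; \lambda\,\mathrm{VTV}(z)
\quad \text{s.t.}\quad z_i \ge 0,\;\; \mathbf{1}^{\top} z_i = 1,\quad i=1,\dots,n.
```

The data term and the per-pixel simplex constraints are jointly convex, so the program is amenable to a constrained Split Augmented Lagrangian (ADMM-type) solver; the segmentation is then read off by taking the per-pixel maximum of the inferred field.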
international conference on image analysis and recognition | 2012
Filipe Condessa; José M. Bioucas-Dias
In this paper, we introduce a new methodology to segment and detect colorectal polyps in endoscopic images obtained by a wireless capsule endoscopic device. The cornerstone of our approach is the fact that polyps are protrusions emerging from the colonic walls; thus, they can be segmented by simple curvature descriptors. Curvature is based on derivatives and is therefore very sensitive to noise and image artifacts. Furthermore, the acquired images are sampled on a grid, which further complicates the computation of derivatives. To cope with these degradation mechanisms, we use Local Polynomial Approximation, which simultaneously denoises the observed images and provides a continuous representation suitable for computing derivatives. On top of the image segmentation, we build a support vector machine to classify the segmented regions as polyps or non-polyps. The features used in the classifier are selected with a wrapper selection algorithm (greedy forward feature selection with support vector machines). The proposed segmentation and detection methodology is tested in several scenarios and presents very good results, both when the same video sequences are used for training and testing (cross-feature validation) and when different video sequences are used for training and testing.
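As an illustration of the wrapper-style selection mentioned above (greedy forward selection wrapped around an SVM), a minimal scikit-learn sketch could look like the following. The feature matrix, labels, and parameter values are placeholders, not the paper's actual curvature descriptors or settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SequentialFeatureSelector

# Placeholder data: one row per segmented region, columns are candidate
# curvature-based descriptors; y marks polyp (1) vs non-polyp (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Greedy forward selection: repeatedly add the feature that most improves
# cross-validated SVM accuracy (a wrapper method, as described above).
selector = SequentialFeatureSelector(
    svm, n_features_to_select=5, direction="forward", cv=5
)
selector.fit(X, y)

X_selected = selector.transform(X)
svm.fit(X_selected, y)            # final polyp / non-polyp classifier
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```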
workshop on hyperspectral image and signal processing evolution in remote sensing | 2014
Filipe Condessa; José M. Bioucas-Dias; Jelena Kovacevic
Image segmentation is fundamentally a discrete problem. It consists of finding a partition of the image domain such that the pixels in each element of the partition exhibit some kind of similarity. The solution is usually obtained via integer optimization, which, apart from a few exceptions, is NP-hard. We sidestep the discrete nature of image segmentation by formulating the problem in the Bayesian framework and introducing a hidden set of real-valued random fields determining the probability of a given partition. Armed with this model, the original discrete optimization is converted into a convex program. To infer the hidden fields, we introduce the Segmentation via the Constrained Split Augmented Lagrangian Shrinkage Algorithm (SegSALSA). The effectiveness of the proposed methodology is illustrated with hyperspectral image segmentation.
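The constraint set in this convex program is the probability simplex at every pixel, and a SALSA/ADMM-style solver needs a projection onto that set at each iteration. Below is a small, generic numpy sketch of that projection step (the standard sort-based algorithm); it is illustrative and not taken from the SegSALSA implementation.

```python
import numpy as np

def project_rows_onto_simplex(Z):
    """Project each row of Z onto the probability simplex {z >= 0, sum(z) = 1}.

    Standard sort-based projection; rows play the role of per-pixel
    hidden-field vectors in a SegSALSA-style solver.
    """
    n, k = Z.shape
    U = np.sort(Z, axis=1)[:, ::-1]                 # sort each row descending
    css = np.cumsum(U, axis=1) - 1.0                # cumulative sums minus 1
    idx = np.arange(1, k + 1)
    cond = U - css / idx > 0                        # active prefix of each row
    rho = cond.sum(axis=1)                          # last active index per row
    theta = css[np.arange(n), rho - 1] / rho        # per-row threshold
    return np.maximum(Z - theta[:, None], 0.0)

# Example: three "pixels", four classes
Z = np.array([[0.2, 1.5, -0.3, 0.1],
              [0.25, 0.25, 0.25, 0.25],
              [3.0, 0.0, 0.0, 0.0]])
print(project_rows_onto_simplex(Z))                 # each row sums to 1, nonnegative
```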
international symposium on biomedical imaging | 2013
Filipe Condessa; José M. Bioucas-Dias; Carlos A. Castro; John A. Ozolek; Jelena Kovacevic
We propose a new algorithm that merges classification with a reject option and classification using contextual information. A reject option is desirable in many image-classification applications that require a robust classifier and in which the need for high classification accuracy surpasses the need to classify the entire image. Moreover, our algorithm improves classifier performance by including local and nonlocal contextual information, at the expense of rejecting a fraction of the samples. As a probabilistic model, we adopt multinomial logistic regression. We describe the problem with a discriminative random field model; we introduce the reject option into the classification problem through the association potential, and contextual information through the interaction potential. We validate the method on images of H&E-stained teratoma tissue and show the increase in classifier performance when rejecting part of the assigned class labels.
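The abstract ties the reject option to the class posterior of a multinomial logistic regression. A much-simplified sketch of that idea (posterior thresholding only, without the random field's association and interaction potentials) is shown below; the threshold, data, and helper names are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

REJECT = -1  # label used for rejected samples

def classify_with_rejection(model, X, threshold=0.8):
    """Assign a class only when the maximum posterior exceeds `threshold`;
    otherwise reject. A simplified stand-in for the paper's reject option,
    which is instead encoded in a random-field association potential."""
    proba = model.predict_proba(X)
    labels = model.classes_[np.argmax(proba, axis=1)]
    labels[np.max(proba, axis=1) < threshold] = REJECT
    return labels

# Placeholder feature vectors (e.g., per-pixel descriptors of stained tissue)
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(300, 8)), rng.integers(0, 4, size=300)
X_test = rng.normal(size=(50, 8))

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # softmax over classes
pred = classify_with_rejection(clf, X_test, threshold=0.6)
print("rejected fraction:", np.mean(pred == REJECT))
```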
Pattern Recognition | 2017
Filipe Condessa; José M. Bioucas-Dias; Jelena Kovacevic
Classifiers with rejection are essential in real-world applications where misclassifications and their effects are critical. However, if no problem-specific cost function is defined, there are no established measures to assess the performance of such classifiers. We introduce a set of desired properties for performance measures for classifiers with rejection and, based on these, propose three performance measures that satisfy the desired properties. The nonrejected accuracy measures the ability of the classifier to accurately classify nonrejected samples; the classification quality measures the correct decision making of the classifier with rejection; and the rejection quality measures the ability to concentrate all misclassified samples onto the set of rejected samples. From these measures, we derive the concept of relative optimality, which allows us to connect the measures to a family of cost functions that take into account the trade-off between rejection and misclassification. We illustrate the use of the proposed performance measures on classifiers with rejection applied to synthetic and real-world data.
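Based on the verbal definitions above, the three measures can be sketched as follows. The nonrejected accuracy and classification quality follow directly from the description; the exact normalization used here for the rejection quality is an assumption and may differ from the paper's definition.

```python
import numpy as np

def rejection_measures(correct, rejected):
    """correct, rejected: boolean arrays over all samples.
    `correct[i]` is True if the classifier's label for sample i is right;
    `rejected[i]` is True if the classifier refused to label sample i."""
    correct, rejected = np.asarray(correct), np.asarray(rejected)
    kept = ~rejected

    # Nonrejected accuracy: accuracy restricted to the samples actually classified.
    nonrejected_accuracy = correct[kept].mean() if kept.any() else np.nan

    # Classification quality: fraction of "good decisions"
    # (correctly classified and kept, or misclassified and rejected).
    classification_quality = ((correct & kept) | (~correct & rejected)).mean()

    # Rejection quality (assumed form): how much more concentrated the
    # misclassifications are in the rejected set than in the data overall.
    frac_wrong_rejected = (~correct & rejected).sum() / max(rejected.sum(), 1)
    frac_wrong_overall = (~correct).mean()
    rejection_quality = (frac_wrong_rejected / frac_wrong_overall
                         if frac_wrong_overall else np.nan)

    return nonrejected_accuracy, classification_quality, rejection_quality

# Toy example: 10 samples, 3 misclassified, 2 of them rejected
correct  = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0], dtype=bool)
rejected = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 0], dtype=bool)
print(rejection_measures(correct, rejected))
```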
international geoscience and remote sensing symposium | 2015
Filipe Condessa; José M. Bioucas-Dias; Jelena Kovacevic
Hyperspectral image classification is challenging: obtaining complete and representative training sets is costly, pixels can belong to unknown classes, and the problem is generally ill-posed. The need to achieve high classification accuracy may surpass the need to classify the entire image. To account for this scenario, we use classification with rejection, providing the classifier with an option not to classify a pixel and consequently reject it. We present and analyze two approaches for supervised hyperspectral image classification that combine contextual priors with classification with rejection: 1) jointly computing context and rejection and 2) sequentially computing context and rejection. In the joint approach, rejection is introduced as an extra class that models the probability of classifier failure. In the sequential approach, rejection results from the hidden field associated with a marginal maximum a posteriori classification of the image. We validate both approaches on real hyperspectral data.
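The "extra class" view used in the joint approach can be illustrated schematically: append to the per-pixel class posterior a channel modeling the probability of classifier failure and take a maximum a posteriori decision over the augmented label set. The sketch below uses a constant failure probability and no contextual prior, so it is only a toy stand-in for the joint context-and-rejection computation described above.

```python
import numpy as np

def classify_with_reject_class(posteriors, p_fail=0.35):
    """posteriors: (n_pixels, n_classes) class probabilities per pixel.
    Appends a rejection 'class' with constant probability p_fail
    (a stand-in for a learned model of classifier failure) and
    returns MAP labels, where label n_classes means 'rejected'."""
    n, _ = posteriors.shape
    scaled = posteriors * (1.0 - p_fail)                 # rescale known classes
    augmented = np.hstack([scaled, np.full((n, 1), p_fail)])
    return np.argmax(augmented, axis=1)                  # MAP over augmented labels

posteriors = np.array([[0.70, 0.20, 0.10],   # confident   -> class 0
                       [0.40, 0.35, 0.25]])  # ambiguous   -> rejected (label 3)
print(classify_with_reject_class(posteriors, p_fail=0.35))
```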
international geoscience and remote sensing symposium | 2016
Yi Liu; Filipe Condessa; José M. Bioucas-Dias; Jun Li; Antonio Plaza
The superpixels provided by an unsupervised segmentation algorithm are sets of neighboring pixels that are homogeneous in some sense. It is therefore very likely that, in a classification problem, most pixels in a superpixel belong to the same class, particularly if the homogeneity criterion is compatible with the class statistics. Superpixels are thus a powerful device for expressing spatial contextual information. However, exploiting superpixels in a principled way is not straightforward. Recent efforts attack this problem within a discrete optimization framework, by including regularization terms promoting consistency of the labels within superpixels and computing approximate labelings with graph-cut algorithms. The well-known hardness of integer optimization problems is a major limitation of this line of attack. In this paper, we introduce a new strategy, based on convex relaxation, to include the spatial information provided by superpixels in classification problems. The convex relaxation of an integer optimization problem opens a door to including extra information, such as the spatial partitioning given by over-segmented superpixels. The resulting convex optimization problem is solved using the SALSA algorithm. Experimental results with the ROSIS Pavia University dataset illustrate the effectiveness of the proposed framework.
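A very rough way to see how superpixels inject spatial information is to pool class evidence within each superpixel; the sketch below averages per-pixel class probabilities over superpixels before taking labels. It is a heuristic illustration only, not the convex-relaxation-plus-SALSA formulation proposed in the paper.

```python
import numpy as np

def superpixel_pooled_labels(pixel_probs, superpixels):
    """pixel_probs: (n_pixels, n_classes) per-pixel class probabilities.
    superpixels: (n_pixels,) integer superpixel id per pixel.
    Averages probabilities within each superpixel and assigns every
    pixel the superpixel-level MAP label."""
    n_sp = superpixels.max() + 1
    n_classes = pixel_probs.shape[1]
    sums = np.zeros((n_sp, n_classes))
    np.add.at(sums, superpixels, pixel_probs)          # accumulate per superpixel
    counts = np.bincount(superpixels, minlength=n_sp)[:, None]
    sp_labels = np.argmax(sums / counts, axis=1)       # superpixel-level decision
    return sp_labels[superpixels]                      # broadcast back to pixels

# Toy example: 6 pixels, 2 superpixels, 3 classes
probs = np.array([[0.6, 0.3, 0.1], [0.4, 0.5, 0.1], [0.7, 0.2, 0.1],
                  [0.1, 0.2, 0.7], [0.2, 0.3, 0.5], [0.3, 0.4, 0.3]])
sp = np.array([0, 0, 0, 1, 1, 1])
print(superpixel_pooled_labels(probs, sp))             # -> [0 0 0 2 2 2]
```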
workshop on hyperspectral image and signal processing evolution in remote sensing | 2015
Filipe Condessa; José M. Bioucas-Dias; Jelena Kovacevic
In this paper, we present a supervised hyperspectral image segmentation algorithm based on a convex formulation of a marginal maximum a posteriori segmentation with hidden fields and structure tensor regularization: Segmentation via the Constrained Split Augmented Lagrangian Shrinkage by Structure Tensor Regularization (SegSALSA-STR). This formulation avoids the generally discrete nature of segmentation problems and the NP-hardness of the associated integer optimization. We extend the Segmentation via the Constrained Split Augmented Lagrangian Shrinkage (SegSALSA) algorithm [1] by generalizing the vectorial total variation prior to a structure tensor prior constructed from a patch-based Jacobian [2]. The resulting algorithm is convex, time-efficient, and highly parallelizable. This shows the potential of combining hidden fields with convex optimization through the inclusion of different regularizers. The SegSALSA-STR algorithm is validated on the segmentation of real hyperspectral images.
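Schematically, the change relative to SegSALSA is the regularizer: the vectorial total variation on the hidden field z is replaced by a structure tensor penalty built from a patch-based Jacobian. A rough rendering follows; the Schatten-norm notation and the symbol (J_P z)_i for the patch-based Jacobian at pixel i are assumptions, and [1], [2] give the exact form.

```latex
% SegSALSA:      data term + \lambda \, \mathrm{VTV}(z)
% SegSALSA-STR:  data term + \lambda \sum_i \| (J_P z)_i \|_{\mathcal{S}_q}
%                (Schatten norm of the patch-based Jacobian at pixel i),
%                subject to the same per-pixel simplex constraints on z.
\min_{z}\; -\sum_{i=1}^{n} \ln\!\left(p_i^{\top} z_i\right)
   \;+\; \lambda \sum_{i=1}^{n} \big\| (J_P z)_i \big\|_{\mathcal{S}_q}
\quad \text{s.t.}\quad z_i \ge 0,\;\; \mathbf{1}^{\top} z_i = 1 .
```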
Proceedings of the 2nd International Workshop on Social Sensing | 2017
Filipe Condessa; Radu Marculescu
Social media activity analysis can provide an open window into the inception and evolution of ideas. In this paper, we introduce a general model of the spatiotemporal evolution of an arbitrary number of ideas in social media. As the main theoretical contribution, we map user messages into a latent hidden field and derive a multidimensional social signal that encapsulates an arbitrary number of ideas. We then analyze the distance (in time and space) of individual ideas from the general stream of ideas, thus allowing the characterization of the spatiotemporal behavior of individual idea trajectories. Finally, using Twitter data, we observe that the spatiotemporal behavior of ideas is content dependent, that is, different ideas evolve differently in time and space. Consequently, we identify four major patterns of behavior of ideas in space (local vs. global) and time (rare vs. pervasive), which can be used to understand the spatiotemporal nature of social media dynamics.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2016
Filipe Condessa; José M. Bioucas-Dias; Jelena Kovacevic
Hyperspectral image classification is a challenging problem: obtaining complete and representative training sets is costly, pixels can belong to unknown classes, and the problem is generally ill-posed. The need to achieve high classification accuracy may surpass the need to classify the entire image. To achieve this, we use classification with rejection, providing the classifier with an option not to classify a pixel and consequently reject it. We propose a method for supervised hyperspectral image classification that combines the use of contextual priors with classification with rejection. Rejection is introduced as an extra class that models the probability of classifier failure. We validate the resulting algorithm on the AVIRIS Indian Pines scene and illustrate the performance increase resulting from classification with rejection.