Pedro Quelhas
University of Porto
Publications
Featured research published by Pedro Quelhas.
International Conference on Computer Vision | 2005
Pedro Quelhas; Florent Monay; Jean-Marc Odobez; Daniel Gatica-Perez; Tinne Tuytelaars; Luc Van Gool
We present a new approach to model visual scenes in image collections, based on local invariant features and probabilistic latent space models. Our formulation provides answers to three open questions: (1) whether invariant local features are suitable for scene (rather than object) classification; (2) whether unsupervised latent space models can be used for feature extraction in the classification task; and (3) whether the latent space formulation can discover visual co-occurrence patterns, motivating novel approaches for image organization and segmentation. Using a 9,500-image dataset, our approach is validated on each of these issues. First, we show with extensive experiments on binary and multi-class scene classification tasks that a bag-of-visterms representation, derived from local invariant descriptors, consistently outperforms state-of-the-art approaches. Second, we show that probabilistic latent semantic analysis (PLSA) generates a compact scene representation that is discriminative for accurate classification and significantly more robust when less training data are available. Third, we exploit the ability of PLSA to automatically extract visually meaningful aspects to propose new algorithms for aspect-based image ranking and context-sensitive image segmentation.
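To make the bag-of-visterms representation concrete, here is a minimal Python sketch that builds a visual vocabulary by k-means quantization of local descriptors and turns one image's descriptors into a normalized visterm histogram. The function names and the use of scikit-learn's KMeans are illustrative assumptions, not the paper's implementation.

```python
# Minimal bag-of-visterms (BOV) sketch; assumes local invariant
# descriptors (e.g. SIFT-like vectors) have already been extracted.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors: np.ndarray, n_visterms: int = 1000) -> KMeans:
    """Quantize training descriptors into a vocabulary of visterms."""
    return KMeans(n_clusters=n_visterms, n_init=10).fit(all_descriptors)

def bov_histogram(descriptors: np.ndarray, vocab: KMeans) -> np.ndarray:
    """Normalized histogram of visterm occurrences for one image."""
    labels = vocab.predict(descriptors)
    hist = np.bincount(labels, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()
```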
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007
Pedro Quelhas; Florent Monay; Jean-Marc Odobez; Daniel Gatica-Perez; Tinne Tuytelaars
This paper presents a novel approach for visual scene modeling and classification, investigating the combined use of text modeling methods and local invariant features. Our work attempts to elucidate (1) whether a text-like bag-of-visterms (BOV) representation (histogram of quantized local visual features) is suitable for scene (rather than object) classification, (2) whether some analogies between discrete scene representations and text documents exist, and (3) whether unsupervised, latent space models can be used both as feature extractors for the classification task and to discover patterns of visual co-occurrence. Using several data sets, we validate our approach, presenting and discussing experiments on each of these issues. We first show, with extensive experiments on binary and multiclass scene classification tasks using a 9,500-image data set, that the BOV representation consistently outperforms classical scene classification approaches. On other data sets, we show that our approach competes with or outperforms other recent, more complex methods. We also show that probabilistic latent semantic analysis (PLSA) generates a compact scene representation, is discriminative for accurate classification, and is more robust than the BOV representation when less labeled training data are available. Finally, through aspect-based image ranking experiments, we show the ability of PLSA to automatically extract visually meaningful scene patterns, making such representations useful for browsing image collections.
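The PLSA model referenced above can be fit with a short EM loop over the document-by-visterm count matrix. The sketch below is a textbook PLSA implementation, included only to show how the compact aspect representation P(z|d) arises; it is not the authors' code, and the dense responsibility tensor would need chunking for large vocabularies.

```python
import numpy as np

def plsa(counts: np.ndarray, n_aspects: int = 20, n_iter: int = 50, seed: int = 0):
    """Fit PLSA by EM on a (documents x visterms) count matrix.
    Returns P(z|d), the compact per-image aspect mixture, and P(w|z)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.random((n_aspects, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((n_docs, n_aspects))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) proportional to P(z|d) * P(w|z)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        weighted = counts[:, None, :] * joint  # n(d,w) * P(z|d,w)
        # M-step: re-estimate both multinomials from expected counts
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```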
The Plant Cell | 2011
Luis Sanz; Walter Dewitte; Celine Forzani; Farah Patell; Jeroen Nieuwland; Bo Wen; Pedro Quelhas; Sarah M. de Jager; Craig Titmus; Aurélio Campilho; Hong Ren; Mark Estelle; Hong Wang; James Augustus Henry Murray
Root branching is stimulated by auxin and occurs as a result of lateral roots initiated from the pericycle. Here, new light is shed on how the cell cycle is regulated during lateral root initiation by two interacting cell cycle regulatory proteins, the D-type cyclin CYCD2;1 and the auxin-regulated inhibitory protein ICK2/KRP2. The integration of cell division in root growth and development requires mediation of developmental and physiological signals through regulation of cyclin-dependent kinase activity. Cells within the pericycle form de novo lateral root meristems, and D-type cyclins (CYCD), as regulators of the G1-to-S phase cell cycle transition, are anticipated to play a role. Here, we show that the D-type cyclin protein CYCD2;1 is nuclear in Arabidopsis thaliana root cells, with the highest concentration in apical and lateral meristems. Loss of CYCD2;1 has a marginal effect on unstimulated lateral root density, but CYCD2;1 is rate-limiting for the response to low levels of exogenous auxin. However, while CYCD2;1 expression requires sucrose, it does not respond to auxin. The protein Inhibitor-Interactor of CDK/Kip Related Protein2 (ICK2/KRP2), which interacts with CYCD2;1, inhibits lateral root formation, and ick2/krp2 mutants show increased lateral root density. ICK2/KRP2 can modulate the nuclear levels of CYCD2;1, and since auxin reduces ICK2/KRP2 protein levels, it affects both activity and cellular distribution of CYCD2;1. Hence, as ICK2/KRP2 levels decrease, the increase in lateral root density depends on CYCD2;1, irrespective of ICK2/CYCD2;1 nuclear localization. We propose that ICK2/KRP2 restrains root ramification by maintaining CYCD2;1 inactive and that this modulates pericycle responses to auxin fluctuations.
Conference on Image and Video Retrieval | 2006
Pedro Quelhas; Jean-Marc Odobez
This paper presents a novel approach for visual scene representation, combining the use of quantized color and texture local invariant features (referred to here as visterms) computed over interest point regions. In particular, we investigate different ways to fuse local texture and color information in order to provide a better visterm representation. We develop and test our methods on the task of image classification using a 6-class natural scene database. We perform classification based on the bag-of-visterms (BOV) representation (histogram of quantized local descriptors), extracted from both texture and color features. We investigate two different fusion approaches at the feature level: fusing local descriptors together to create one representation of joint texture-color visterms, or concatenating the histogram representations of color and texture obtained independently from each local feature. On our classification task we show that the appropriate use of color improves the results with respect to a texture-only representation.
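The two fusion strategies investigated can be summarized in a few lines. In the sketch below, early_fusion concatenates texture and color descriptors before quantization (yielding joint texture-color visterms), while late_fusion concatenates independently built BOV histograms; the function names and the weighting parameter are illustrative assumptions.

```python
import numpy as np

def early_fusion(texture_desc: np.ndarray, color_desc: np.ndarray) -> np.ndarray:
    """Joint texture-color descriptors: concatenate per-point features
    before quantization, so each visterm encodes both cues."""
    return np.hstack([texture_desc, color_desc])

def late_fusion(texture_hist: np.ndarray, color_hist: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Concatenate BOV histograms built independently per cue;
    w balances the contribution of texture vs. color."""
    return np.hstack([w * texture_hist, (1.0 - w) * color_hist])
```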
Nature Methods | 2017
Vladimír Ulman; Martin Maška; Klas E. G. Magnusson; Olaf Ronneberger; Carsten Haubold; Nathalie Harder; Pavel Matula; Petr Matula; David Svoboda; Miroslav Radojevic; Ihor Smal; Karl Rohr; Joakim Jaldén; Helen M. Blau; Oleh Dzyubachyk; Boudewijn P. F. Lelieveldt; Pengdong Xiao; Yuexiang Li; Siu-Yeung Cho; Alexandre Dufour; Jean-Christophe Olivo-Marin; Constantino Carlos Reyes-Aldasoro; José Alonso Solís-Lemus; Robert Bensch; Thomas Brox; Johannes Stegmaier; Ralf Mikut; Steffen Wolf; Fred A. Hamprecht; Tiago Esteves
We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell segmentation and tracking algorithms. With 21 participating algorithms and a data repository consisting of 13 data sets from various microscopy modalities, the challenge displays today's state-of-the-art methodology in the field. We analyzed the challenge results using performance measures for segmentation and tracking that rank all participating methods. We also analyzed the performance of all of the algorithms in terms of biological measures and practical usability. Although some methods scored high in all technical aspects, none obtained fully correct solutions. We found that methods that either take prior information into account using learning strategies or analyze cells in a global spatiotemporal video context performed better than other methods under the segmentation and tracking scenarios included in the challenge.
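The abstract does not spell out the measures, but the challenge's segmentation score (SEG) is based on the Jaccard index between matched reference and computed objects. A minimal sketch of that underlying overlap computation for two binary masks (the full matching protocol is omitted):

```python
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard index (intersection over union) of two binary masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / union if union else 0.0
```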
PLOS ONE | 2011
Diana S. Nascimento; Mariana Valente; Tiago Esteves; Maria de Fátima de Pina; Joana G. Guedes; Ana G. Freire; Pedro Quelhas; Perpétua Pinto-do-Ó
Background: The cardiac regenerative potential of newly developed therapies is traditionally evaluated in rodent models of surgically induced myocardial ischemia. A generally accepted key parameter for determining the success of the applied therapy is the infarct size. Although regarded as the gold-standard method for infarct size estimation in heart ischemia, histological planimetry is time-consuming and highly variable among studies. The purpose of this work is to contribute towards the standardization and simplification of infarct size assessment by providing free access to a novel semi-automated software tool, named MIQuant. Methodology/Principal Findings: Mice were subjected to permanent coronary artery ligation, and the size of chronic infarcts was estimated by area and midline-length methods using manual planimetry and MIQuant. Repeatability and reproducibility of MIQuant scores were verified. The validation showed high correlation (r = 0.981 for midline length; r = 0.970 for area) and agreement (Bland-Altman analysis), free from bias for midline length and with a negligible bias of 1.21% to 3.72% for area quantification. Further analysis demonstrated that MIQuant reduced the time spent on the analysis 4.5-fold and, importantly, that MIQuant's effectiveness is independent of user proficiency. The results indicate that MIQuant can be regarded as a better alternative to manual measurement. Conclusions: We conclude that MIQuant is reliable, easy-to-use software for infarct size quantification. Its widespread use will contribute towards the standardization of infarct size assessment across studies and, therefore, to the systematization of the evaluation of the cardiac regenerative potential of emerging therapies.
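As an illustration of the quantities being compared, the area method reduces to a ratio of segmented pixel counts, and the reported agreement statistics follow the standard Bland-Altman recipe. The sketch below shows both under those textbook definitions; it is not MIQuant's internal code.

```python
import numpy as np

def infarct_size_area(infarct_mask: np.ndarray, lv_mask: np.ndarray) -> float:
    """Area method: infarcted fraction of the left ventricle, in percent."""
    return 100.0 * infarct_mask.sum() / lv_mask.sum()

def bland_altman(scores_a: np.ndarray, scores_b: np.ndarray):
    """Bland-Altman bias and 95% limits of agreement between two raters."""
    diffs = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    bias = diffs.mean()
    loa = 1.96 * diffs.std(ddof=1)
    return bias, (bias - loa, bias + loa)
```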
Computer Vision and Pattern Recognition | 2006
Florent Monay; Pedro Quelhas; Jean-Marc Odobez; Daniel Gatica-Perez
We present a novel approach for contextual segmentation of complex visual scenes, based on the use of bags of local invariant features (visterms) and probabilistic aspect models. Our approach uses context in two ways: (1) by exploiting the fact that specific learned aspects correlate with the semantic classes, which resolves some cases of visual polysemy, and (2) by formalizing the notion that scene context is image-specific: what an individual visterm represents depends on what the rest of the visterms in the same bag represent. We demonstrate the validity of our approach on a man-made vs. natural visterm classification problem. Experiments on an image collection of complex scenes show that the approach improves region discrimination, producing satisfactory results and outperforming a non-contextual method. Furthermore, through the subsequent use of a Markov random field model, we also show that co-occurrence and spatial contextual information can be conveniently integrated for improved visterm classification.
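The contextual effect can be made concrete with a hedged sketch: given PLSA parameters P(w|z) and an image-specific mixture P(z|d), the aspect posterior for each visterm depends on the whole bag, so the same visterm can receive different labels in different images. The hard aspect-to-class assignment below is a simplification for illustration, and aspect_class is an assumed learned mapping, not the paper's exact formulation.

```python
import numpy as np

def label_visterms(visterm_ids: np.ndarray, p_z_given_image: np.ndarray,
                   p_w_z: np.ndarray, aspect_class: np.ndarray) -> np.ndarray:
    """Context-sensitive labeling: P(z|d,w) is proportional to P(w|z) * P(z|d),
    so the image-specific mixture P(z|d) disambiguates polysemous visterms."""
    posterior = p_w_z[:, visterm_ids] * p_z_given_image[:, None]
    posterior /= posterior.sum(axis=0, keepdims=True) + 1e-12
    best_aspect = posterior.argmax(axis=0)   # hard assignment (simplified)
    return aspect_class[best_aspect]         # e.g. 0 = man-made, 1 = natural
```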
Machine Vision and Applications | 2012
Tiago Esteves; Pedro Quelhas; Ana Maria Mendonça; Aurélio Campilho
Computational methods used in microscopy cell image analysis have greatly augmented the impact of imaging techniques, becoming fundamental for biological research. Understanding cell regulation processes is very important in biology, and confocal fluorescence imaging in particular plays a relevant role in the in vivo observation of cells. However, most biology researchers still analyze cells by visual inspection alone, which is time-consuming and prone to subjective bias. This makes automatic cell image analysis essential for large-scale, objective studies of cells. While the classic approach to automatic cell analysis is image segmentation, for in vivo confocal fluorescence microscopy images of plants such an approach is neither trivial nor robust to variations in image quality. To analyze plant cells in in vivo confocal fluorescence microscopy images with robustness and increased performance, we propose the use of local convergence filters (LCF). These filters are based on gradient convergence and as such can handle illumination variations, noise, and low contrast. We apply a range of existing convergence filters to cell nuclei analysis of the Arabidopsis thaliana root tip. To further increase contrast invariance, we present an augmentation of local convergence approaches based on image phase information. Through the use of convergence index filters, we improved the results for cell nuclei detection and shape estimation compared with baseline approaches. Using phase congruency information, we were able to further increase performance by 11% for nuclei detection accuracy and 4% for shape adaptation. Shape regularization was also applied, but with no significant gain, which indicates that shape estimation was already good for the applied filters.
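The core idea of a convergence index filter is to score each candidate point by how consistently the surrounding gradients point toward it; because only gradient orientation is used, not magnitude, the response tolerates contrast and illumination changes. A toy version for a single point with a circular support region (assuming the point lies at least `radius` pixels from the image border):

```python
import numpy as np

def convergence_index(gx: np.ndarray, gy: np.ndarray,
                      center: tuple, radius: int) -> float:
    """Mean cosine between image gradients in a disc and the direction
    toward `center`; high values indicate convex, blob-like structures."""
    cy, cx = center
    ys, xs = np.mgrid[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    dy, dx = cy - ys, cx - xs                 # vectors pointing at the center
    dist = np.hypot(dx, dy)
    disc = (dist > 0) & (dist <= radius)      # circular support, center excluded
    gxd, gyd = gx[ys[disc], xs[disc]], gy[ys[disc], xs[disc]]
    gnorm = np.hypot(gxd, gyd) + 1e-12        # use orientation only, not magnitude
    cos = (gxd * dx[disc] + gyd * dy[disc]) / (gnorm * dist[disc])
    return float(cos.mean())
```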
Lecture Notes in Computer Science | 2005
Florent Monay; Pedro Quelhas; Daniel Gatica-Perez; Jean-Marc Odobez
We propose the use of latent space models applied to local invariant features for object classification. We investigate whether latent space models make it possible to learn patterns of visual co-occurrence and whether the learned visual models improve performance when less labeled data are available. We present and discuss results that support these hypotheses. Probabilistic latent semantic analysis (PLSA) automatically identifies aspects in the data with semantic meaning, producing an unsupervised soft clustering. The resulting compact representation retains sufficient discriminative information for accurate object classification and improves classification accuracy through the use of unlabeled data when less labeled training data are available. We perform experiments on a 7-class object database containing 1,776 images.
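In practice, the aspect mixtures P(z|d) serve as low-dimensional feature vectors for a standard supervised classifier. A minimal usage sketch with placeholder data (real mixtures would come from a PLSA fit such as the one sketched earlier; the SVM choice is an assumption for illustration):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
p_z_d = rng.dirichlet(np.ones(20), size=200)   # placeholder P(z|d) aspect mixtures
labels = rng.integers(0, 7, size=200)          # 7 object classes, as in the paper
clf = SVC(kernel="rbf").fit(p_z_d, labels)     # classify images by aspect mixture
```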
Conference on Image and Video Retrieval | 2007
Pedro Quelhas; Jean-Marc Odobez
In the past, quantized local descriptors have been shown to be a good basis for image representation, applicable to a wide range of tasks. However, current approaches typically consider only one level of quantization when creating the final image representation, restricting the image description to a single level of visual detail. We propose to build image representations from multi-level quantization of local interest point descriptors automatically extracted from the images. This new multi-level representation describes both fine and coarse local image detail in one framework. To evaluate the performance of our approach, we perform scene image classification using a 13-class data set. We show that using information from multiple quantization levels increases classification performance, which suggests that the different granularities captured by multi-level quantization produce a more discriminative image representation. Moreover, with a multi-level approach, the time needed to learn the quantization models can be reduced by learning the different models in parallel.
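One simple way to realize multi-level quantization (the paper's exact scheme may differ) is to build vocabularies of several sizes and concatenate the resulting normalized histograms, so that coarse and fine visual detail coexist in one feature vector:

```python
import numpy as np
from sklearn.cluster import KMeans

def multilevel_bov(per_image_descriptors: list, all_descriptors: np.ndarray,
                   levels: tuple = (100, 300, 1000)) -> np.ndarray:
    """Concatenate BOV histograms built at several vocabulary sizes;
    the vocabularies are independent, so they can be learned in parallel."""
    vocabs = [KMeans(n_clusters=k, n_init=4).fit(all_descriptors) for k in levels]
    features = []
    for desc in per_image_descriptors:
        hists = []
        for vocab in vocabs:
            h = np.bincount(vocab.predict(desc), minlength=vocab.n_clusters)
            hists.append(h / h.sum())   # normalize each level separately
        features.append(np.concatenate(hists))
    return np.vstack(features)
```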