Glauco Vitor Pedrosa
University of São Paulo
Publication
Featured research published by Glauco Vitor Pedrosa.
Pattern Recognition Letters | 2010
Glauco Vitor Pedrosa; Célia A. Zorzo Barcelos
This work presents a new method for detecting shape corner points. These points are characterized by high curvature values, and their detection is an important task in several applications, including motion tracking and object recognition. Since noisy points also have high curvature values, we propose a framework that combines smoothing and corner-point localization. First, a function associates each shape contour point with its curvature value; the proposed method then automatically smooths this function via an anisotropic filter based on an evolutionary equation, simultaneously localizing the corner points. The results obtained show that the proposed model performs well when compared with three other techniques.
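The pipeline described above (curvature estimation, smoothing, peak localization) can be sketched as follows. This is a minimal illustration only: plain diffusion stands in for the paper's anisotropic evolutionary filter, and all function names and parameters are hypothetical.

```python
import numpy as np

def curvature(contour):
    # Discrete curvature of a closed contour given as an (N, 2) array,
    # using periodic central differences.
    x, y = contour[:, 0], contour[:, 1]
    dx = (np.roll(x, -1) - np.roll(x, 1)) / 2.0
    dy = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
    ddx = np.roll(x, -1) - 2 * x + np.roll(x, 1)
    ddy = np.roll(y, -1) - 2 * y + np.roll(y, 1)
    return np.abs(dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)

def detect_corners(contour, smooth_iters=50, dt=0.1, thresh=0.3):
    # Smooth the curvature signal (simple diffusion here, standing in
    # for the anisotropic filter), then keep contour indices whose
    # smoothed curvature is a strict local maximum above a threshold.
    k = curvature(contour)
    for _ in range(smooth_iters):
        k = k + dt * (np.roll(k, 1) - 2 * k + np.roll(k, -1))
    peaks = (k > np.roll(k, 1)) & (k > np.roll(k, -1)) & (k > thresh * k.max())
    return np.where(peaks)[0]
```

On a square contour sampled point by point, the four vertices produce the only curvature spikes, and the detector returns exactly those four indices.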
Neurocomputing | 2013
Glauco Vitor Pedrosa; Marcos Aurélio Batista; Célia A. Zorzo Barcelos
The work presented in this article aims at shape feature extraction and description. We propose a shape-based image retrieval technique that uses salience points to describe shapes. The saliences of a shape are defined as the higher-curvature points along the shape contour. The technique consists of: a salience point detector; a salience representation using the angular relative position and curvature value analyzed from a multi-scale perspective; and a matching algorithm that considers local and global features to calculate the dissimilarity. The proposed technique is robust to noise and performs well when dealing with shapes from different classes that are visually similar. Experiments illustrate the performance of the proposed technique, and the results show that it compares favorably with other shape-based methods in the literature.
international symposium on circuits and systems | 2011
Glauco Vitor Pedrosa; Célia A. Zorzo Barcelos; Marcos Aurélio Batista
Content-Based Image Retrieval (CBIR) systems have been developed to support image retrieval based on image properties such as color, shape and texture. In this paper, we are concerned with shape-based image retrieval. In this context, we propose a method to describe shapes based on salience points. The proposed descriptor uses a salience detector that is robust to noise, and an elastic matching algorithm to measure the similarity between two shapes represented by their salience points. The proposed approach is robust to noise and gives good results in recognizing shapes of the same class, even when they are represented by different numbers of salience points.
international conference on image processing | 2011
Glauco Vitor Pedrosa; Célia A. Zorzo Barcelos; Marcos Aurélio Batista
In this paper, we propose a shape-based image retrieval technique that uses salience points to describe shapes. The technique consists of a salience point detector robust to noise; a salience representation using the angular relative position and curvature value, invariant to rotation, translation and scaling; and an elastic matching algorithm to analyze the similarity. The proposed technique is robust to noise and performs well when dealing with shapes from different classes that are visually similar. Experiments illustrate the performance of the proposed technique, and the results show that it compares favorably with other shape-based methods in the literature.
computer based medical systems | 2014
Glauco Vitor Pedrosa; Agma J. M. Traina; Caetano Traina
Bag-of-Visual-Words (BoVW) is a well-known approach to representing images for visual recognition and retrieval tasks. It represents an image as a histogram of visual words, and the dissimilarity between two images is measured by comparing those histograms. When performing comparisons involving a specific type of image, some visual words can be more informative and discriminative than others. To take advantage of this fact, assigning appropriate weights can improve the performance of image retrieval. In this paper, we develop a novel modeling approach based on sub-dictionaries. We extract a sub-dictionary as the subset of visual words that best represents a specific image class. To measure the dissimilarity between images, we take into account the distance of the histograms obtained using the full visual dictionary and the distances of the sub-histograms obtained from each sub-dictionary. The proposed approach was evaluated by classifying a standard biomedical image dataset into categories defined by image modality and body part, as well as natural image scenes. The experimental results demonstrate the gain obtained by the proposed weighting approach when compared to the traditional weighting approach based on TF-IDF (Term Frequency-Inverse Document Frequency). The proposed approach shows promising results, boosting classification accuracy as well as retrieval precision, and it does so without increasing the feature vector dimensionality.
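The idea of combining a full-dictionary histogram distance with per-sub-dictionary distances can be sketched as follows. The combination rule (a simple `alpha`-weighted sum) and all names are hypothetical, not the paper's exact formulation.

```python
import numpy as np

def bovw_histogram(word_ids, dict_size):
    # L1-normalized histogram of visual-word occurrences in one image.
    h = np.bincount(word_ids, minlength=dict_size).astype(float)
    return h / max(h.sum(), 1.0)

def subdict_distance(words_a, words_b, dict_size, sub_dicts, alpha=0.5):
    # Dissimilarity mixing the L1 distance over the full dictionary
    # with the mean L1 distance over each class-specific sub-dictionary
    # (each sub_dict is an index array selecting its visual words).
    ha = bovw_histogram(words_a, dict_size)
    hb = bovw_histogram(words_b, dict_size)
    d_global = np.abs(ha - hb).sum()
    d_sub = sum(np.abs(ha[idx] - hb[idx]).sum() for idx in sub_dicts)
    return alpha * d_global + (1 - alpha) * d_sub / max(len(sub_dicts), 1)
```

Identical word assignments yield distance zero; images dominated by words from different sub-dictionaries are pushed further apart than the global histogram alone would indicate.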
acm symposium on applied computing | 2015
Glauco Vitor Pedrosa; Agma J. M. Traina
The Bag-of-Visual-Words approach has been successfully used for video and image analysis by encoding local features as visual words; the final representation is a histogram of the visual words detected in the image. One limitation of this approach lies in its inability to encode the spatial distribution of the visual words within an image, which is important for measuring similarity between images. In this paper, we present a novel technique to incorporate spatial information, called Global Spatial Arrangement (GSA). The idea is to split the image space into quadrants using each detected point as the origin. To ensure rotation invariance, we use the gradient of each detected point to define each quarter of the quadrant. The final representation adds only two extra values to the final feature vector to encode the spatial arrangement of visual words, with the advantage of being invariant to rotation. We performed representative experimental evaluations using several public datasets. Compared to other techniques, such as the Spatial Pyramid (SP), the proposed method needs 90% less information to encode the spatial arrangement of visual words. The results in image retrieval and classification demonstrate that our approach improves retrieval accuracy compared to other traditional techniques, while being the most compact descriptor.
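The quadrant-splitting idea can be sketched as below: each visual word is reduced to a 2D location, and the quadrant labeling around a chosen origin point is rotated by that point's gradient angle so the labels do not change when the image rotates. Names and the exact labeling convention are hypothetical.

```python
import numpy as np

def quadrant_ids(points, origin, origin_grad_angle):
    # Assign each point an id in {0, 1, 2, 3} according to which
    # quadrant around `origin` it falls in, with the quadrant axes
    # rotated by the origin point's gradient angle. Rotating the
    # whole configuration (points + gradient) leaves the ids fixed.
    v = points - origin
    rel = (np.arctan2(v[:, 1], v[:, 0]) - origin_grad_angle) % (2 * np.pi)
    return (rel // (np.pi / 2)).astype(int)
```

Because the gradient angle rotates together with the image content, the relative angle `rel` is rotation-invariant, which is the property GSA relies on.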
Multimedia Tools and Applications | 2017
Glauco Vitor Pedrosa; Agma J. M. Traina; Célia A. Zorzo Barcelos
This paper proposes a novel shape feature description based on salient points, called Bag-of-Salience-Points (BoSP). The proposed descriptor is compact and provides a fast solution for finding the correspondences between two sets of salient points, contributing to speeding up the task of shape matching. The novelty of the descriptor lies in combining local sparse features (salient points), encoded in global and spatial-based histograms, with a few other shape factors such as eccentricity. The proposed shape descriptor retrieves the best matching even in occlusion situations, where some points in the two shapes cannot be properly matched. BoSP is validated on several benchmark datasets for 2D shape matching, and the results show that it maintains superior discriminative power while being invariant to geometric transformations and demanding a low computational cost to measure the similarity of shapes.
brazilian symposium on multimedia and the web | 2012
Glauco Vitor Pedrosa; Solange Oliveira Rezende; Agma J. M. Traina
Bag-of-Features is a popular approach to describe multimedia information using visual words. SIFT (Scale Invariant Feature Transform) is one of the most widely used descriptors to model multimedia information in Bag-of-Features. The data is described as a set of keypoints, and a feature vector is assigned to each keypoint. This feature vector is composed of 128 values, which represent the region around the keypoint. In general, some of the detected keypoints are not relevant and can be discarded without losing local discriminative power. In this paper, we propose a technique to reduce the number of keypoints detected by SIFT, as well as a technique to reduce the feature vector dimensionality. Experiments analyze the performance of the proposed reduction techniques on two different image databases. The results demonstrate that the proposed techniques improve image retrieval performance by reducing the SIFT feature vector dimensionality by up to 50%, while at the same time reducing the computational time of modeling an image with Bag-of-Features.
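The two reductions (fewer keypoints, shorter vectors) can be illustrated as below. The abstract does not specify the actual relevance criterion or projection, so this sketch uses detector-response ranking and PCA purely as stand-ins, with hypothetical names.

```python
import numpy as np

def reduce_keypoints(descriptors, responses, keep_ratio=0.5):
    # Keep only the strongest keypoints, ranked here by detector
    # response (the paper's relevance criterion may differ).
    k = max(1, int(len(descriptors) * keep_ratio))
    order = np.argsort(responses)[::-1][:k]
    return descriptors[order]

def reduce_dims(descriptors, out_dim=64):
    # Project 128-d SIFT vectors onto their top principal components
    # via SVD (PCA stands in for the unspecified reduction), halving
    # the dimensionality when out_dim=64.
    x = descriptors - descriptors.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:out_dim].T
```

Both steps shrink the data fed into the visual-dictionary clustering, which is where the modeling-time savings mentioned above would come from.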
computer-based medical systems | 2016
Glauco Vitor Pedrosa; Agma J. M. Traina
This paper proposes a novel model, called Similarity Based on Visual Attention Features (SimVisual), to enhance the similarity analysis between images by considering features extracted from salient regions mapped by visual attention models. Visual attention models have proven very useful for encoding perceptual semantic information of the image content. Thus, aggregating saliency features into the final image representation is a powerful asset to enhance the similarity analysis between images, while increasing the accuracy of retrieval tasks. The goal of SimVisual is to combine different saliency models with traditional image descriptors, aiming to increase the descriptive power of these descriptors without modifying the original algorithms. We performed experiments using a large dataset composed of 32 different biomedical image categories, and the results show that SimVisual boosts retrieval accuracy by up to 13% for simple image descriptors, such as color histograms. The experiments show that SimVisual is a valuable approach to increase the efficacy of content-based image retrieval systems, without user interaction.
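One simple way to fold a saliency map into a traditional descriptor, in the spirit of SimVisual, is to weight each pixel's histogram vote by its saliency value. This is an illustrative sketch with hypothetical names, not the paper's exact combination scheme.

```python
import numpy as np

def saliency_weighted_histogram(channel, saliency, bins=16):
    # Color histogram over one channel (values in [0, 1]) where each
    # pixel contributes its saliency value instead of a unit vote,
    # so salient regions dominate the descriptor.
    idx = np.clip((channel * bins).astype(int), 0, bins - 1)
    h = np.zeros(bins)
    np.add.at(h, idx.ravel(), saliency.ravel())
    return h / max(h.sum(), 1.0)
```

The base descriptor (here a plain color histogram) is unchanged; only the voting weights come from the attention model, matching the goal of not modifying the original algorithms.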
acm symposium on applied computing | 2015
Glauco Vitor Pedrosa; Agma J. M. Traina; Célia A. Zorzo Barcelos
Salient points are very important for image description because they are related to the visually most important parts of the image, leading to a compact and more discriminative representation close to human perception. Based on these promising features, in this paper we propose a new shape descriptor, namely Bag-of-Salience-Points (BoSP), which combines shape salience points with the Bag-of-Visual-Words modeling approach. Each salience point, after being extracted from the shape contour, is represented by its curvature value using a multi-scale procedure proposed in this work. Taking advantage of this representation, each salience is assigned to a visual word according to a Dictionary of Curvatures. The final shape representation is obtained by computing a histogram of the visual words detected in the shape, combined with a spatial pooling approach that encodes the distance distribution of the visual words in relation to the shape centroid. This new shape description allows the dissimilarity between shapes to be analyzed using fast distance functions, such as the City-block distance, even if the two shapes have different numbers of salience points. This is a powerful asset to reduce the computational complexity when retrieving images. Compared to other shape descriptors, the BoSP descriptor has the advantage of providing a powerful shape description with high recognition accuracy and a compact representation invariant to geometric transformations, while demanding a low computational cost to measure the dissimilarity of shapes.
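The curvature-dictionary histogram plus City-block comparison can be sketched as follows. The dictionary here is just a hypothetical array of curvature centroids, and the spatial pooling component is omitted for brevity.

```python
import numpy as np

def curvature_word_histogram(curvatures, centroids):
    # Quantize each salience point's curvature to its nearest entry in
    # a small "Dictionary of Curvatures" and build an L1-normalized
    # histogram, so shapes with different numbers of salience points
    # become comparable fixed-length vectors.
    words = np.argmin(np.abs(curvatures[:, None] - centroids[None, :]), axis=1)
    h = np.bincount(words, minlength=len(centroids)).astype(float)
    return h / max(h.sum(), 1.0)

def cityblock(h1, h2):
    # L1 (City-block) distance between two histograms.
    return np.abs(h1 - h2).sum()
```

Two shapes whose salience points share the same curvature distribution compare as identical even when one shape has twice as many points, which is what makes the fast fixed-length comparison possible.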
Collaboration
National Council for Scientific and Technological Development