Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Gabriel L. Oliveira is active.

Publication


Featured research published by Gabriel L. Oliveira.


Iberoamerican Congress on Pattern Recognition | 2012

STOP: Space-Time Occupancy Patterns for 3D Action Recognition from Depth Map Sequences

Antônio Wilson Vieira; Erickson R. Nascimento; Gabriel L. Oliveira; Zicheng Liu; Mario Fernando Montenegro Campos

This paper presents Space-Time Occupancy Patterns (STOP), a new visual representation for 3D action recognition from sequences of depth maps. In this new representation, space and time axes are divided into multiple segments to define a 4D grid for each depth map sequence. The advantage of STOP is that it preserves spatial and temporal contextual information between space-time cells while being flexible enough to accommodate intra-action variations. Our visual representation is validated with experiments on a public 3D human action dataset. For the challenging cross-subject test, we significantly improved the recognition accuracy from the previously reported 74.7% to 84.8%. Furthermore, we present an automatic segmentation and time alignment method for online recognition of depth sequences.
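
A minimal sketch of the occupancy idea described above: the foreground points of each depth frame are binned into a 4D (x, y, z, t) grid, and the per-cell counts are saturated and flattened into a feature vector. The grid size, bounds handling and saturation constant below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def stop_features(point_sequences, grid=(4, 4, 4, 3), bounds=None, saturation=8):
    """Sketch of a space-time occupancy feature for one depth-map sequence.

    point_sequences: list of (N_t, 3) arrays, one per frame, holding the
    (x, y, z) coordinates of the foreground points extracted from each
    depth map. grid = (nx, ny, nz, nt) is the number of segments per axis;
    these values, the bounds and the saturation constant are illustrative.
    """
    nx, ny, nz, nt = grid
    T = len(point_sequences)
    occupancy = np.zeros(grid, dtype=np.float32)

    # Spatial bounds of the sequence (could also be fixed per dataset).
    all_pts = np.vstack(point_sequences)
    lo = all_pts.min(axis=0) if bounds is None else bounds[0]
    hi = all_pts.max(axis=0) if bounds is None else bounds[1]

    for t, pts in enumerate(point_sequences):
        # Map the frame index to a temporal segment and each point to a spatial cell.
        ti = min(int(t / T * nt), nt - 1)
        idx = ((pts - lo) / (hi - lo + 1e-9) * [nx, ny, nz]).astype(int)
        idx = np.clip(idx, 0, [nx - 1, ny - 1, nz - 1])
        for ix, iy, iz in idx:
            occupancy[ix, iy, iz, ti] += 1

    # Saturating occupancy: cells with "enough" points count the same,
    # which makes the representation tolerant to intra-action variation.
    occupancy = np.minimum(occupancy, saturation) / saturation
    return occupancy.ravel()  # high-dimensional STOP-like feature vector
```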


Pattern Recognition Letters | 2014

On the improvement of human action recognition from depth map sequences using Space-Time Occupancy Patterns

Antônio Wilson Vieira; Erickson R. Nascimento; Gabriel L. Oliveira; Zicheng Liu; Mario Fernando Montenegro Campos

We present a new visual representation for 3D action recognition from sequences of depth maps. In this new representation, space and time axes are divided into multiple segments to define a 4D grid for each depth map sequence. Each cell in the grid is associated with an occupancy value, which is a function of the number of space-time points falling into that cell. The occupancy values of all the cells form a high-dimensional feature vector, called the Space-Time Occupancy Pattern (STOP). We then perform dimensionality reduction to obtain lower-dimensional feature vectors. The advantage of STOP is that it preserves spatial and temporal contextual information between space-time cells while being flexible enough to accommodate intra-action variations. Furthermore, we combine depth maps with skeletons in order to obtain view invariance, and we present an automatic segmentation and time alignment method for online recognition of depth sequences. Our visual representation is validated with experiments on a public 3D human action dataset.
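
Since the raw occupancy vector is high-dimensional, this journal version adds a dimensionality-reduction step before classification. The sketch below uses PCA and a 1-nearest-neighbour classifier as stand-ins; the reduction technique, the number of components, the classifier and the synthetic data are all assumptions, not the paper's choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Reduce high-dimensional occupancy vectors, then classify actions.
# X_train/X_test would come from an occupancy extractor such as the
# stop_features() sketch above; here they are random placeholders.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4 * 4 * 4 * 3))   # 200 training sequences (synthetic)
y_train = rng.integers(0, 10, size=200)      # 10 action classes (synthetic labels)
X_test = rng.random((20, 4 * 4 * 4 * 3))

pca = PCA(n_components=32).fit(X_train)      # keep 32 components (assumed value)
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)
predicted_actions = clf.predict(pca.transform(X_test))
```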


International Conference on Robotics and Automation | 2012

Sparse Spatial Coding: A novel approach for efficient and accurate object recognition

Gabriel L. Oliveira; Erickson R. Nascimento; Antônio Wilson Vieira; Mario Fernando Montenegro Campos

Successful state-of-the-art object recognition techniques have been based on powerful methods, such as sparse representation, to replace the also popular vector quantization (VQ) approach. Recently, sparse coding, which represents a signal with a small number of active dictionary elements, has raised the bar on several object recognition benchmarks. However, one serious drawback of sparse-coding-based methods is that similar local features can be quantized into different visual words. This paper presents a new method, called Sparse Spatial Coding (SSC), which combines sparse coding dictionary learning, a spatially constrained coding stage and an online classification method to improve object recognition. An efficient new off-line classification algorithm is also presented. We overcome the limitation of techniques that rely on sparse representation alone by generating the final representation with SSC and max pooling and feeding it to an online learning classifier. Experimental results on the Caltech 101, Caltech 256, Corel 5000 and Corel 10000 databases show that, to the best of our knowledge, our approach surpasses in accuracy the best results published to date on these databases. As an extension, we also report strong results on the MIT-67 indoor scene recognition dataset.
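
For illustration, the snippet below shows a generic sparse coding plus max-pooling pipeline of the kind this work builds on. It does not reproduce SSC's spatial-constraint stage or its online/off-line classifiers, and the dictionary size, sparsity level and descriptor dimensions are assumed values.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

# Generic sparse coding + max pooling over local descriptors of one image.
rng = np.random.default_rng(0)
descriptors = rng.random((500, 128))  # e.g. 500 SIFT-like local descriptors (synthetic)

# Learn an overcomplete dictionary from local descriptors.
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0)
dico.fit(descriptors)

# Sparse-encode each descriptor: similar descriptors get similar, mostly-zero codes.
codes = sparse_encode(descriptors, dico.components_, algorithm="lasso_lars", alpha=0.15)

# Max pooling over all local codes yields one fixed-length image representation.
image_representation = np.abs(codes).max(axis=0)   # shape: (256,)
```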


Intelligent Robots and Systems | 2012

BRAND: A robust appearance and depth descriptor for RGB-D images

Erickson R. Nascimento; Gabriel L. Oliveira; Mario Fernando Montenegro Campos; Antônio Wilson Vieira; William Robson Schwartz

This work introduces a novel descriptor called Binary Robust Appearance and Normals Descriptor (BRAND), which efficiently combines appearance and geometric shape information from RGB-D images and is largely invariant to rotation and scale transformations. The proposed approach encodes point information as a binary string, providing a descriptor suitable for applications that demand high speed and low memory consumption. Results of several experiments demonstrate that, as far as precision and robustness are concerned, BRAND achieves improved results when compared with state-of-the-art descriptors based on texture, on geometry, or on a combination of both. We also demonstrate that our descriptor is robust and provides reliable results in a registration task even in a sparsely textured and poorly illuminated scene.
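
To make the idea of a combined appearance-and-geometry binary string concrete, here is a loose BRIEF-style sketch. The actual sampling pattern, pairwise tests and bit-fusion rule used by BRAND differ from this simplification, and the patch size, threshold and number of bits are assumptions.

```python
import numpy as np

def binary_rgbd_descriptor(intensity_patch, depth_patch, pairs, tau=0.01):
    """Loose sketch of a BRIEF-style binary descriptor mixing appearance
    and geometry, in the spirit of BRAND (but not its actual test design).

    intensity_patch, depth_patch: 2D arrays around the keypoint.
    pairs: list of ((y1, x1), (y2, x2)) pixel pairs sampled inside the patch.
    """
    bits = []
    for (y1, x1), (y2, x2) in pairs:
        appearance = intensity_patch[y1, x1] < intensity_patch[y2, x2]
        geometry = (depth_patch[y1, x1] - depth_patch[y2, x2]) > tau
        bits.append(appearance or geometry)   # one bit fuses both cues
    return np.packbits(np.array(bits, dtype=np.uint8))

# Example: a 256-bit descriptor from random pairs in a 48x48 patch (assumed sizes).
rng = np.random.default_rng(0)
pairs = [tuple(map(tuple, rng.integers(0, 48, size=(2, 2)))) for _ in range(256)]
desc = binary_rgbd_descriptor(rng.random((48, 48)), rng.random((48, 48)), pairs)
```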


Neurocomputing | 2013

On the development of a robust, fast and lightweight keypoint descriptor

Erickson R. Nascimento; Gabriel L. Oliveira; Antônio Wilson Vieira; Mario Fernando Montenegro Campos

In this paper we introduce BRAND (Binary Robust Appearance and Normal Descriptor), a novel descriptor that efficiently combines appearance and geometric information from RGB-D images and is largely invariant to rotation and scale transformations. Based on relevant characteristics of successful image-only descriptors, we define a set of eight fundamental requirements to guide the design and evaluation of descriptors that also use depth information. We then describe the design of BRAND, followed by an evaluation of its performance according to those requirements. We also show how BRAND can be simplified into a higher-performance version, which we named BASE, for applications that require high speed but do not demand rigorous scale and rotation invariance. We compare the performance of BRAND against three standard descriptors on real-world data. Results of several experiments demonstrate that, as far as precision and robustness are concerned, BRAND compares favorably to SIFT and SURF for textured images, and to Spin-Image for geometric shape information. Furthermore, BRAND attains improved results when compared with state-of-the-art descriptors based on texture or geometry alone, or on their combination. Finally, we report on the use of BRAND in two applications, for which we show that it provides reliable results for the registration of indoor textured depth maps and for object recognition in tasks that require the extraction of semantic knowledge.
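
One reason binary descriptors such as BRAND and BASE suit speed- and memory-constrained applications is that matching reduces to a XOR followed by a population count. The generic brute-force Hamming matcher below illustrates this; it is not code from the paper.

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Brute-force Hamming matching of two sets of packed binary descriptors
    (shape: n x n_bytes, dtype uint8). Generic illustration of why binary
    descriptors are cheap to compare: XOR the bytes, then count set bits.
    """
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    distances = np.unpackbits(xor, axis=2).sum(axis=2)
    nearest = distances.argmin(axis=1)            # best match in B for each A
    return nearest, distances[np.arange(len(desc_a)), nearest]

# Example with random 256-bit (32-byte) descriptors.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(100, 32), dtype=np.uint8)
b = rng.integers(0, 256, size=(120, 32), dtype=np.uint8)
idx, dist = hamming_match(a, b)
```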


Brazilian Symposium on Computer Graphics and Image Processing | 2012

Appearance and Geometry Fusion for Enhanced Dense 3D Alignment

Erickson R. Nascimento; William Robson Schwartz; Gabriel L. Oliveira; Antônio W. Vieira; Mario Fernando Montenegro Campos; Daniel Balbino de Mesquita

This work proposes a novel RGB-D feature descriptor called Binary Appearance and Shape Elements (BASE) that efficiently combines intensity and shape information to improve discriminative power and enable an enhanced and faster matching process. The new descriptor is used to align a set of RGB-D point clouds to generate dense three-dimensional models of indoor environments. We compare the performance of state-of-the-art feature descriptors with the proposed descriptor for scene alignment through the registration of multiple indoor textured depth maps. Experimental results show that the proposed descriptor outperforms the other approaches in computational cost, memory consumption and match quality. Additionally, experiments based on cloud alignment show that the BASE descriptor is suitable for the registration of RGB-D data even when the environment is only partially illuminated.
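
The registration step that such descriptors feed into can be illustrated with the standard closed-form rigid alignment (Kabsch/SVD) from matched 3D keypoints. A real pipeline like the one described here would additionally reject outlier matches (e.g. with RANSAC) and refine with ICP, which this sketch omits.

```python
import numpy as np

def rigid_transform_from_matches(src, dst):
    """Closed-form rigid alignment (Kabsch/SVD) from matched 3D points.

    src, dst: (N, 3) arrays of corresponding points, e.g. keypoints matched
    with an RGB-D descriptor. Textbook estimation step only.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t                                   # dst ~ R @ src + t

# Example: recover a known rotation/translation from synthetic correspondences.
rng = np.random.default_rng(0)
src = rng.random((50, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_transform_from_matches(src, dst)
```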


Latin American Robotics Symposium | 2010

Growing Cell Structures Applied to Sensor Fusion and SLAM

Silvia Silva da Costa Botelho; Celina da Rocha; Monica Figueiredo; Paulo Drews; Gabriel L. Oliveira

This paper proposes the use of topological maps to implement a SLAM approach based on sensor fusion, in order to better handle inaccuracy and uncertainty in sensor data. The contribution of this work is an algorithm that uses multiple sensory sources and multiple topological maps to improve the localization estimate in a way that is as generic as possible. Better results can be obtained when fusion is performed with sensors of contrasting characteristics, because something not perceived by one sensor might be perceived by another; this also reduces the effect of measurement errors, yielding a method that copes with the uncertainties of the individual sensors. A system was developed to validate the proposed method through a series of tests on a set of real data. The results show the robustness of the system with respect to sensor imprecision and a gain in predicting the robot's location, resulting in a method better suited to dealing with the errors associated with each sensor.
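
As a small illustration of why fusing sensors with different error characteristics helps, the sketch below combines independent position estimates by inverse-variance weighting. This is a generic textbook example, not the growing-cell-structure algorithm proposed in the paper.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance fusion of independent position estimates.

    estimates: (k, d) array with one d-dimensional pose estimate per sensor.
    variances: (k,) array with each sensor's error variance.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = (w[:, None] * estimates).sum(axis=0) / w.sum()
    fused_variance = 1.0 / w.sum()   # never larger than the best sensor's variance
    return fused, fused_variance

# Example: noisy odometry fused with a more precise laser-based estimate.
fused, var = fuse_estimates(np.array([[1.2, 0.9], [1.0, 1.1]]), [0.5, 0.1])
```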


Cadernos do LESTE | 2018

Índice de atendimento escolar: um modelo para avaliação comparativa entre oferta e demanda de escolas do ensino fundamental em Belo Horizonte / School service index: a model for the comparative evaluation of the supply of and demand for elementary schools in Belo Horizonte

Gabriel L. Oliveira


Revista Espinhaço (UFVJM) | 2017

Municípios recém-criados no Vale do Jequitinhonha e promoção da cidadania: uma análise comparativa dos indicadores de bem-estar social / Newly created municipalities in the Vale do Jequitinhonha and the promotion of citizenship: a comparative analysis of social well-being indicators

Marcos Antônio Nunes; Gabriel L. Oliveira


Revista Espinhaço (UFVJM) | 2015

Municípios recém-criados no Vale do Jequitinhonha e promoção da cidadania: uma análise comparativa dos indicadores de bem-estar social / Newly created municipalities in the Vale do Jequitinhonha and the promotion of citizenship: a comparative analysis of social well-being indicators

Marcos Antônio Nunes; Gabriel L. Oliveira

Collaboration


An overview of Gabriel L. Oliveira's collaborations.

Top Co-Authors

Erickson R. Nascimento
Universidade Federal de Minas Gerais

Mario Fernando Montenegro Campos
Universidade Federal de Minas Gerais

Antônio Wilson Vieira
Universidade Federal de Minas Gerais

Marcos Antônio Nunes
Universidade Federal de Minas Gerais

William Robson Schwartz
Universidade Federal de Minas Gerais

Celina da Rocha
Universidade Federal do Rio Grande do Sul

Daniel Balbino de Mesquita
Universidade Federal de Minas Gerais

Monica Figueiredo
Universidade Federal do Rio Grande do Sul

Ralfo Matos
Universidade Federal de Minas Gerais