Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Erickson R. Nascimento is active.

Publication


Featured research published by Erickson R. Nascimento.


Iberoamerican Congress on Pattern Recognition | 2012

STOP: Space-Time Occupancy Patterns for 3D Action Recognition from Depth Map Sequences

Antônio Wilson Vieira; Erickson R. Nascimento; Gabriel L. Oliveira; Zicheng Liu; Mario Fernando Montenegro Campos

This paper presents Space-Time Occupancy Patterns (STOP), a new visual representation for 3D action recognition from sequences of depth maps. In this new representation, space and time axes are divided into multiple segments to define a 4D grid for each depth map sequence. The advantage of STOP is that it preserves spatial and temporal contextual information between space-time cells while being flexible enough to accommodate intra-action variations. Our visual representation is validated with experiments on a public 3D human action dataset. For the challenging cross-subject test, we significantly improved the recognition accuracy from the previously reported 74.7% to 84.8%. Furthermore, we present an automatic segmentation and time alignment method for online recognition of depth sequences.
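
The core of STOP as described above is a 4D partition of the space and time axes with a per-cell occupancy count. Below is a minimal sketch of that step, assuming the depth-map sequence has already been back-projected into (x, y, z, t) points; the function name and grid sizes are illustrative, not the authors' implementation.

```python
# Minimal sketch of a space-time occupancy grid (STOP-style), not the authors' exact
# implementation: points is an (N, 4) array of (x, y, z, t) samples from a depth-map
# sequence, and each axis is split into a fixed number of segments.
import numpy as np

def space_time_occupancy(points, bins=(10, 10, 10, 5)):
    """Count how many space-time points fall into each cell of a 4D grid."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # Normalize every coordinate to [0, 1) so cell indices are easy to compute.
    norm = (points - mins) / np.maximum(maxs - mins, 1e-9)
    idx = np.minimum((norm * bins).astype(int), np.array(bins) - 1)
    grid = np.zeros(bins, dtype=np.int32)
    np.add.at(grid, tuple(idx.T), 1)            # accumulate one count per point
    return grid

# Example with synthetic points; a real pipeline would back-project depth maps instead.
pts = np.random.rand(5000, 4)
print(space_time_occupancy(pts).shape)          # (10, 10, 10, 5)
```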


Pattern Recognition Letters | 2014

On the improvement of human action recognition from depth map sequences using Space-Time Occupancy Patterns

Antônio Wilson Vieira; Erickson R. Nascimento; Gabriel L. Oliveira; Zicheng Liu; Mario Fernando Montenegro Campos

We present a new visual representation for 3D action recognition from sequences of depth maps. In this new representation, space and time axes are divided into multiple segments to define a 4D grid for each depth map sequence. Each cell in the grid is associated with an occupancy value, which is a function of the number of space-time points falling into that cell. The occupancy values of all the cells form a high-dimensional feature vector, called a Space-Time Occupancy Pattern (STOP). We then perform dimensionality reduction to obtain lower-dimensional feature vectors. The advantage of STOP is that it preserves spatial and temporal contextual information between space-time cells while being flexible enough to accommodate intra-action variations. Furthermore, we combine depth maps with skeletons in order to obtain view invariance, and we present an automatic segmentation and time alignment method for online recognition of depth sequences. Our visual representation is validated with experiments on a public 3D human action dataset.
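
The abstract above describes flattening the occupancy grid of each sequence into one high-dimensional vector and then reducing its dimensionality. The sketch below illustrates that step using grids like the one sketched earlier and PCA as one possible projection; the paper's exact reduction and saturation function may differ.

```python
# Hedged sketch of the "flatten then reduce" step: each sequence's 4D occupancy grid
# becomes one long feature vector, and PCA (one possible choice; the paper's exact
# projection may differ) maps those vectors to a lower-dimensional space.
import numpy as np
from sklearn.decomposition import PCA

def stop_features(grids, n_components=32):
    """grids: list of 4D occupancy arrays, one per depth-map sequence."""
    X = np.stack([g.reshape(-1) for g in grids]).astype(float)
    X = np.minimum(X, 1.0)   # binary occupancy; a soft saturation of counts also works
    pca = PCA(n_components=n_components)
    return pca.fit_transform(X), pca

grids = [np.random.poisson(0.2, size=(10, 10, 10, 5)) for _ in range(40)]
feats, _ = stop_features(grids)
print(feats.shape)  # (40, 32)
```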


International Conference on Robotics and Automation | 2012

Sparse Spatial Coding: A novel approach for efficient and accurate object recognition

Gabriel L. Oliveira; Erickson R. Nascimento; Antônio Wilson Vieira; Mario Fernando Montenegro Campos

State-of-the-art object recognition techniques for images have increasingly relied on powerful methods such as sparse representation to replace the popular vector quantization (VQ) approach. Recently, sparse coding, which represents a signal in a sparse space, has raised the bar on several object recognition benchmarks. However, one serious drawback of sparse-space-based methods is that similar local features can be quantized into different visual words. In this paper we present a new method, called Sparse Spatial Coding (SSC), which combines sparse coding dictionary learning, a spatial constraint coding stage, and an online classification method to improve object recognition. An efficient new off-line classification algorithm is also presented. We overcome the limitation of techniques that use sparse representation alone by generating the final representation with SSC and max pooling, which is then fed to an online learning classifier. Experimental results on the Caltech 101, Caltech 256, Corel 5000, and Corel 10000 databases show that, to the best of our knowledge, our approach surpasses in accuracy the best results published to date on the same databases. As an extension, we also show high-performance results on the MIT-67 indoor scene recognition dataset.
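
The sketch below illustrates the sparse coding and max-pooling steps named above, using scikit-learn's dictionary learning; the spatial constraint stage and the online classifier of the full SSC pipeline are omitted, and all sizes are illustrative.

```python
# Minimal sparse-coding-plus-max-pooling sketch in the spirit of SSC (not the paper's
# full pipeline, which also includes a spatial constraint and an online classifier).
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
descriptors = rng.standard_normal((500, 128))      # e.g. SIFT-like local features
dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=20, random_state=0)
D = dico.fit(descriptors).components_              # learned dictionary, shape (64, 128)

def encode_image(local_feats, D, alpha=1.0):
    """Sparse-code an image's local features and max-pool them into one global vector."""
    codes = sparse_encode(local_feats, D, algorithm="lasso_lars", alpha=alpha)
    return np.abs(codes).max(axis=0)               # max pooling over local features

image_feats = rng.standard_normal((200, 128))
print(encode_image(image_feats, D).shape)          # (64,)
```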


Brazilian Symposium on Computer Graphics and Image Processing | 2009

Stereo Based Structure Recovery of Underwater Scenes from Automatically Restored Images

Erickson R. Nascimento; Mario Fernando Montenegro Campos; Wagner Barros

In this paper we present a fully automatic methodology for underwater image restoration based on classical physical models of light propagation in participating media. The technique uses pairs of images acquired from distinct viewpoints under the same environmental conditions. At the core of the method is an iterative algorithm, based on a contrast metric, that automatically estimates all parameters of the model with good accuracy and at low computational cost. We then present an algorithm that uses the model with the estimated parameters to improve the quality of images of underwater scenes taken under natural illumination (i.e., without any special light source). First, we assess the quality of the parameters estimated by our approach by comparing them against the same parameters obtained manually, as in other works in the literature. Since better parameter estimates greatly influence the quality of the restored images, we performed experiments with images from both synthetic and real scenes to verify the performance of the proposed method. Two main aspects were considered: image quality and the quality of the disparity maps produced by a standard stereo algorithm. Image quality was assessed by a quantitative measure of contrast, which is typically used in the related literature. We also compare the results obtained by our methodology with those obtained with classic image enhancement tools. The results demonstrate improvements both in the contrast of the recovered underwater images and in the accuracy of the disparity maps under different water turbidity levels.
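
The restoration relies on a classical image-formation model for participating media. The sketch below assumes the standard attenuation-plus-backscatter form and treats the coefficients as already known; in the paper they are estimated automatically from the image pair, so the values used here are purely illustrative.

```python
# Sketch of a classical underwater image-formation model and its inversion:
#   I = J * exp(-c * z) + B_inf * (1 - exp(-c * z)),
# where J is the scene radiance, z the distance, c the attenuation coefficient and
# B_inf the backscatter colour. The coefficients below are illustrative; the paper
# estimates them automatically.
import numpy as np

def restore(image, depth, c, B_inf, eps=1e-3):
    """Invert the model to recover scene radiance J from the observed image I."""
    t = np.exp(-c * depth)[..., None]          # per-pixel transmission
    J = (image - B_inf * (1.0 - t)) / np.maximum(t, eps)
    return np.clip(J, 0.0, 1.0)

I = np.random.rand(240, 320, 3)                # observed underwater image in [0, 1]
z = np.random.rand(240, 320) * 5.0             # per-pixel distance in metres
print(restore(I, z, c=0.4, B_inf=np.array([0.1, 0.4, 0.45])).shape)  # (240, 320, 3)
```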


Image and Vision Computing | 2007

Fully automatic coloring of grayscale images

Luiz Filipe M. Vieira; Erickson R. Nascimento; Fernando A. Fernandes; Rodrigo L. Carceroni; Rafael D. Vilela; Arnaldo de Albuquerque Araújo

This paper introduces a methodology for adding color to grayscale images in a way that is completely automatic. Towards this goal, we build on a technique that was recently developed to transfer colors from a user-selected source image to a target grayscale image. More specifically, in order to eliminate the need for manual selection of the source image, we use content-based image retrieval methods to find suitable source images in an image database. To assess the merit of our methodology, we performed a survey where volunteers were asked to rate the plausibility of the colorings generated automatically for grayscale images. In most cases, automatically colored images were rated either as totally plausible or as mostly plausible.
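
The color-transfer technique this methodology builds on matches luminance statistics between the retrieved source image and the target grayscale image and then borrows chromaticity from source pixels with similar luminance. Below is a simplified, per-pixel sketch; the underlying technique also uses local texture statistics and swatches, so this is only an illustration.

```python
# Simplified sketch of luminance-based colour transfer (per-pixel only; the technique
# this work builds on also matches local texture and can use swatches).
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def colorize(gray, source_rgb):
    """gray: (H, W) in [0, 1]; source_rgb: (h, w, 3) in [0, 1]."""
    src_lab = rgb2lab(source_rgb).reshape(-1, 3)
    # Remap the target luminance to the source's mean/std so comparisons make sense.
    L = gray * 100.0
    L = (L - L.mean()) / (L.std() + 1e-6) * src_lab[:, 0].std() + src_lab[:, 0].mean()
    # For each target pixel, borrow a/b from the source pixel with the closest luminance.
    order = np.argsort(src_lab[:, 0])
    pos = np.clip(np.searchsorted(src_lab[order, 0], L.ravel()), 0, len(order) - 1)
    ab = src_lab[order][pos, 1:]
    out = np.concatenate([L.reshape(-1, 1), ab], axis=1).reshape(*gray.shape, 3)
    return np.clip(lab2rgb(out), 0.0, 1.0)

print(colorize(np.random.rand(64, 64), np.random.rand(64, 64, 3)).shape)  # (64, 64, 3)
```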


IEEE Computer Graphics and Applications | 2016

Underwater Depth Estimation and Image Restoration Based on Single Images

Paulo Drews; Erickson R. Nascimento; Silvia Silva da Costa Botelho; Mario Fernando Montenegro Campos

In underwater environments, the scattering and absorption phenomena affect the propagation of light, degrading the quality of captured images. In this work, the authors present a method based on a physical model of light propagation that takes into account the most significant effects contributing to image degradation: absorption, scattering, and backscattering. The proposed method uses statistical priors to restore the visual quality of images acquired in typical underwater scenarios.
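
One widely used family of statistical priors for single-image underwater restoration is a dark-channel-style prior computed over the green and blue channels only, since red light is absorbed quickly under water. The sketch below shows such a prior as an illustration of the idea; it is not necessarily the authors' exact formulation.

```python
# Minimal sketch of a dark-channel-style prior over the green and blue channels, a
# statistical prior commonly used for single-image underwater transmission estimation
# (an illustration of the idea, not necessarily the authors' exact formulation).
import numpy as np
from scipy.ndimage import minimum_filter

def underwater_dark_channel(image, patch=15):
    """image: (H, W, 3) RGB in [0, 1]; returns the per-pixel prior value."""
    gb_min = image[..., 1:].min(axis=2)          # minimum over green and blue
    return minimum_filter(gb_min, size=patch)    # minimum over a local patch

def transmission_estimate(image, patch=15, omega=0.95):
    # Larger prior values indicate more scattering, i.e. lower transmission.
    return 1.0 - omega * underwater_dark_channel(image, patch)

img = np.random.rand(120, 160, 3)
print(transmission_estimate(img).shape)  # (120, 160)
```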


Intelligent Robots and Systems | 2012

BRAND: A robust appearance and depth descriptor for RGB-D images

Erickson R. Nascimento; Gabriel L. Oliveira; Mario Fernando Montenegro Campos; Antônio Wilson Vieira; William Robson Schwartz

This work introduces a novel descriptor called the Binary Robust Appearance and Normals Descriptor (BRAND), which efficiently combines appearance and geometric shape information from RGB-D images and is largely invariant to rotation and scale transformations. The proposed approach encodes point information as a binary string, providing a descriptor that is suitable for applications that demand high speed and low memory consumption. Results of several experiments demonstrate that, as far as precision and robustness are concerned, BRAND achieves improved results when compared to state-of-the-art descriptors based on texture, on geometry, and on a combination of both. We also demonstrate that our descriptor is robust and provides reliable results in a registration task even for a sparsely textured and poorly illuminated scene.
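
The central idea is a binary string whose bits fuse an appearance test with a geometric test evaluated on sampled pixel pairs around the keypoint. The sketch below illustrates that kind of fusion; the specific tests, sampling pattern, and threshold are illustrative, and BRAND's actual tests differ in detail.

```python
# Sketch of fusing appearance and shape into one binary string: for each sampled pixel
# pair, set the bit if either an intensity test or a surface-normal test fires. This
# illustrates the idea of a combined binary descriptor, not BRAND's exact rule.
import numpy as np

def binary_descriptor(intensity, normals, keypoint, pairs):
    """intensity: (H, W); normals: (H, W, 3) unit normals; pairs: (256, 4) offsets."""
    y, x = keypoint
    bits = np.zeros(len(pairs), dtype=np.uint8)
    for i, (dy1, dx1, dy2, dx2) in enumerate(pairs):
        p1, p2 = (y + dy1, x + dx1), (y + dy2, x + dx2)
        appearance = intensity[p1] < intensity[p2]
        geometry = np.dot(normals[p1], normals[p2]) < 0.9   # surface bends between p1 and p2
        bits[i] = appearance or geometry
    return np.packbits(bits)                                # 256 bits packed into 32 bytes

rng = np.random.default_rng(1)
I = rng.random((100, 100))
N = rng.standard_normal((100, 100, 3)); N /= np.linalg.norm(N, axis=2, keepdims=True)
pairs = rng.integers(-12, 13, size=(256, 4))
print(binary_descriptor(I, N, (50, 50), pairs).shape)       # (32,)
```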


Brazilian Symposium on Computer Graphics and Image Processing | 2003

Automatically choosing source color images for coloring grayscale images

Luiz Filipe M. Vieira; Rafael D. Vilela; Erickson R. Nascimento; Fernando A. Fernandes; Rodrigo L. Carceroni; Arnaldo de Albuquerque Araújo

We introduce a methodology for adding color to grayscale images in a way that is completely automatic. Towards this goal, we build on a technique that was recently developed to transfer colors from a user-selected source image to a target grayscale image. More specifically, we eliminate the need for human intervention in the selection of the source color images, which can be regarded as a first step towards real-time video colorization. To assess the merit of our methodology, we performed a survey where volunteers were asked to rate the plausibility of the colorings generated automatically for individual images. In most cases automatically-colored images were rated either as totally plausible or as mostly plausible.
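
The retrieval step can be sketched as ranking candidate color images by how well simple statistics of their luminance match the target grayscale image; the feature used below (a plain luminance histogram) is an illustrative stand-in and is likely simpler than the descriptors actually used in the paper.

```python
# Hedged sketch of the source-selection step: rank candidate colour images by how close
# their luminance histograms are to the target grayscale image and pick the best match.
import numpy as np

def luminance_histogram(gray, bins=32):
    h, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0), density=True)
    return h / (h.sum() + 1e-9)

def pick_source(target_gray, candidate_rgbs):
    t = luminance_histogram(target_gray)
    def score(rgb):
        lum = rgb @ np.array([0.299, 0.587, 0.114])          # simple luma approximation
        return np.abs(luminance_histogram(lum) - t).sum()    # L1 histogram distance
    return min(range(len(candidate_rgbs)), key=lambda i: score(candidate_rgbs[i]))

rng = np.random.default_rng(2)
target = rng.random((64, 64))
candidates = [rng.random((64, 64, 3)) for _ in range(5)]
print(pick_source(target, candidates))  # index of the best-matching source image
```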


Neurocomputing | 2013

On the development of a robust, fast and lightweight keypoint descriptor

Erickson R. Nascimento; Gabriel L. Oliveira; Antônio Wilson Vieira; Mario Fernando Montenegro Campos

In this paper we introduce BRAND (Binary Robust Appearance and Normal Descriptor), a novel descriptor which efficiently combines appearance and geometric information from RGB-D images and is largely invariant to rotation and scale transformations. Based on relevant characteristics of successful image-only descriptors, we define a set of eight fundamental requirements to guide the design and evaluation of descriptors that also use depth information. We then describe the design of BRAND, followed by an evaluation of its performance according to those requirements. We also show how BRAND can be simplified to obtain a higher-performance version, which we named BASE, for applications that require speed but do not demand rigorous scale and rotation invariance. We compare the performance of BRAND against three standard descriptors on real-world data. Results of several experiments demonstrate that, as far as precision and robustness are concerned, BRAND compares favorably to SIFT and SURF for textured images, and to Spin-Image for geometric shape information. Furthermore, BRAND attains improved results when compared to state-of-the-art descriptors that are based either on texture or geometry alone, or on their combination. Finally, we report on the use of BRAND in two applications, showing that it provides reliable results for the registration of indoor textured depth maps and for object recognition in tasks that require the extraction of semantic knowledge.
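
Binary descriptors such as BRAND and BASE are compared with the Hamming distance, i.e. the popcount of the XOR of two packed bit strings, which is what keeps matching fast and memory consumption low. A minimal sketch of that matching step follows; the descriptor length and brute-force search are illustrative.

```python
# Matching packed binary descriptors with the Hamming distance (popcount of XOR).
import numpy as np

_POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def hamming(a, b):
    """a, b: (n_bytes,) packed binary descriptors."""
    return int(_POPCOUNT[np.bitwise_xor(a, b)].sum())

def match(queries, database):
    """For each query descriptor, return the index of its nearest database descriptor."""
    return [min(range(len(database)), key=lambda j: hamming(q, database[j])) for q in queries]

rng = np.random.default_rng(3)
db = rng.integers(0, 256, size=(100, 32), dtype=np.uint8)   # 100 descriptors of 256 bits
qs = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
print(match(qs, db))
```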


IEEE Transactions on Image Processing | 2014

Sparse Spatial Coding: A Novel Approach to Visual Recognition

Gabriel Leivas Oliveira; Erickson R. Nascimento; Antônio Wilson Vieira; Mario Fernando Montenegro Campos

Successful image-based object recognition techniques have been founded on powerful techniques such as sparse representation, in lieu of the popular vector quantization approach. However, one serious drawback of sparse-space-based methods is that local features that are quite similar can be quantized into quite distinct visual words. We address this problem with a novel approach for object recognition, called sparse spatial coding, which efficiently combines a sparse coding dictionary learning stage and a spatial constraint coding stage. We performed experimental evaluations using the Caltech 101, Caltech 256, Corel 5000, and Corel 10000 data sets, which were specifically designed for object recognition evaluation. Our results show that our approach achieves accuracy comparable to the best single-feature methods previously published on those databases. For the same databases, our method outperformed several multiple-feature methods and provided results equivalent to, and in a few cases slightly less accurate than, other techniques specifically designed for that purpose. Finally, we report state-of-the-art results for scene recognition on the COsy Localization Dataset (COLD) and high-performance results on the MIT-67 indoor scene recognition dataset, thus demonstrating the generalization of our approach to such tasks.
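
One common way to realize the spatial constraint coding stage mentioned above is locality-constrained coding in the style of LLC, where each local feature is reconstructed from its k nearest dictionary atoms. The sketch below follows that formulation as an illustration; it is not necessarily the paper's exact constraint.

```python
# Sketch of a locality-constrained coding stage (LLC-style): each local feature is
# reconstructed from only its k nearest dictionary atoms, one common way to impose the
# kind of constraint described here (the paper's exact formulation may differ).
import numpy as np

def constrained_codes(features, D, k=5, reg=1e-4):
    """features: (N, d) local descriptors; D: (m, d) dictionary. Returns (N, m) codes."""
    codes = np.zeros((features.shape[0], D.shape[0]))
    for i, x in enumerate(features):
        nn = np.argsort(np.linalg.norm(D - x, axis=1))[:k]   # k nearest atoms
        B = D[nn]                                            # (k, d) local basis
        G = (B - x) @ (B - x).T + reg * np.eye(k)            # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        codes[i, nn] = w / w.sum()                           # codes sum to one
    return codes

rng = np.random.default_rng(4)
D = rng.standard_normal((64, 128))
feats = rng.standard_normal((10, 128))
print(constrained_codes(feats, D).shape)  # (10, 64)
```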

Collaboration


Dive into Erickson R. Nascimento's collaborations.

Top Co-Authors

Mario Fernando Montenegro Campos (Universidade Federal de Minas Gerais)
Gabriel L. Oliveira (Universidade Federal de Minas Gerais)
William Robson Schwartz (Universidade Federal de Minas Gerais)
Antônio Wilson Vieira (Universidade Federal de Minas Gerais)
Felipe Cadar Chamone (Universidade Federal de Minas Gerais)
João Pedro Klock Ferreira (Universidade Federal de Minas Gerais)
Paulo Drews (Universidade Federal de Minas Gerais)
Daniel Balbino de Mesquita (Universidade Federal de Minas Gerais)
Marco Túlio Alves N. Rodrigues (Universidade Federal de Minas Gerais)
Arnaldo de Albuquerque Araújo (Universidade Federal de Minas Gerais)