Keiller Nogueira
Universidade Federal de Minas Gerais
Publications
Featured research published by Keiller Nogueira.
Computer Vision and Pattern Recognition | 2015
Otávio Augusto Bizetto Penatti; Keiller Nogueira; Jefersson Alex dos Santos
In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We experimentally evaluate ConvNets trained for recognizing everyday objects on the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors, or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.
Pattern Recognition | 2017
Keiller Nogueira; Otávio Augusto Bizetto Penatti; Jefersson Alex dos Santos
We present an analysis of three possible strategies for exploiting the power of existing convolutional neural networks (ConvNets or CNNs) in scenarios different from those for which they were trained: full training, fine tuning, and using ConvNets as feature extractors. In many applications, including remote sensing, it is not feasible to fully design and train a new ConvNet, as this usually requires a considerable amount of labeled data and demands high computational costs. Therefore, it is important to understand how to make the best use of existing ConvNets. We perform experiments with six popular ConvNets using three remote sensing datasets. We also compare ConvNets in each strategy with existing descriptors and with state-of-the-art baselines. Results indicate that fine tuning tends to be the best performing strategy. In fact, using the features from the fine-tuned ConvNet with a linear SVM obtains the best results. We also achieved state-of-the-art results for the three datasets used.
Brazilian Symposium on Computer Graphics and Image Processing | 2015
Keiller Nogueira; Waner O. Miranda; Jefersson Alex dos Santos
The performance of image classification is highly dependent on the quality of the extracted features. For high-resolution remote sensing images, encoding the spatial features in an efficient and robust fashion is the key to generating discriminative models to classify them. Even though many visual descriptors have been proposed or successfully used to encode spatial features of remote sensing images, some applications using this sort of image demand more specific description techniques. Deep learning, an emergent machine learning approach based on neural networks, is capable of learning specific features and classifiers at the same time, adjusting at each step to better fit the needs of each problem. For several tasks, such as image classification, it has achieved very good results, mainly boosted by its feature learning, which allows the method to extract specific and adaptable visual features depending on the data. In this paper, we propose a novel network capable of learning specific spatial features from remote sensing images, without any pre-processing step or descriptor evaluation, and of classifying them. Specifically, the automatic feature learning task aims at discovering hierarchical structures in the raw data, leading to a more representative description. This task not only poses interesting challenges for existing vision and recognition algorithms, but also brings huge opportunities for urban planning, crop and forest management, and climate modelling. The proposed convolutional neural network has six layers: three convolutional, two fully connected, and one classifier layer. The first five layers are responsible for extracting visual features, while the last one is responsible for classifying the images. We conducted a systematic evaluation of the proposed method using two datasets: (i) the popular aerial image dataset UCMerced Land-use and (ii) a multispectral high-resolution dataset of Brazilian coffee scenes.
The experiments show that the proposed method outperforms state-of-the-art algorithms in terms of overall accuracy.
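A minimal NumPy forward pass can make the described three-convolutional / two-fully-connected / one-classifier structure concrete. All sizes below (16×16 input, 3×3 kernels, 32- and 16-unit dense layers, 4 classes) are illustrative stand-ins, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    # valid cross-correlation of a single-channel map with one kernel
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

relu = lambda z: np.maximum(z, 0)

x = rng.normal(size=(16, 16))               # toy single-channel "image"
k1, k2, k3 = (rng.normal(size=(3, 3)) for _ in range(3))

# three convolutional layers: 16x16 -> 14x14 -> 12x12 -> 10x10
h = relu(conv2d(relu(conv2d(relu(conv2d(x, k1)), k2)), k3))
feat = h.ravel()                            # flatten: 10x10 -> 100 features

# two fully-connected layers extract the final feature vector
W1 = rng.normal(size=(32, feat.size))
W2 = rng.normal(size=(16, 32))
f = relu(W2 @ relu(W1 @ feat))

# classifier layer: softmax over 4 hypothetical classes
Wc = rng.normal(size=(4, 16))
logits = Wc @ f
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

The first five layers (three conv, two dense) produce the representation; only the final softmax layer performs the classification, mirroring the split described in the abstract.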
International Conference on Pattern Recognition | 2016
Keiller Nogueira; Mauro Dalla Mura; Jocelyn Chanussot; William Robson Schwartz; Jefersson Alex dos Santos
Land cover classification is a task that requires methods capable of learning high-level features while dealing with high volumes of data. Overcoming these challenges, Convolutional Networks (ConvNets) can learn specific and adaptable features depending on the data while, at the same time, learning classifiers. In this work, we propose a novel technique to automatically perform pixel-wise land cover classification. To the best of our knowledge, there is no other work in the literature that performs pixel-wise semantic segmentation based on data-driven feature descriptors for high-resolution remote sensing images. The main idea is to exploit the power of ConvNet feature representations to learn how to semantically segment remote sensing images. First, our method learns each label in a pixel-wise manner by taking into account the spatial context of each pixel. In the prediction phase, the probability of a pixel belonging to a class is likewise estimated according to its spatial context and the learned patterns. We conducted a systematic evaluation of the proposed algorithm using two remote sensing datasets with very distinct properties. Our results show that the proposed algorithm provides improvements over traditional and state-of-the-art methods that range from 5% to 15% in terms of accuracy.
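The "spatial context of each pixel" idea amounts to classifying a small window centred on that pixel rather than the pixel alone. A minimal sketch of such context-window extraction, with edge padding so border pixels also get a full window (the window size is an assumption for illustration):

```python
import numpy as np

def context_patch(img, row, col, size=5):
    # pad with edge values so border pixels also receive a full-size window
    r = size // 2
    padded = np.pad(img, r, mode="edge")
    # (row, col) in img maps to (row + r, col + r) in the padded array,
    # so this slice is the size x size window centred on the pixel
    return padded[row:row + size, col:col + size]

img = np.arange(64).reshape(8, 8)   # toy 8x8 single-band image
p = context_patch(img, 0, 0)        # full 5x5 window even at the corner
```

Each such patch would then be fed to the network as the input describing its centre pixel, both during training and when estimating class probabilities at prediction time.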
Multimedia Tools and Applications | 2016
Keiller Nogueira; Adriano Veloso; Jefersson Alex dos Santos
In this paper, we present effective algorithms to automatically annotate clothes from social media data, such as Facebook and Instagram. Clothing annotation can be informally stated as recognizing, as accurately as possible, the garment items appearing in the query photo. This task brings huge opportunities for recommender and e-commerce systems, such as capturing new fashion trends based on which clothes have been worn most recently. It also poses interesting challenges for existing vision and recognition algorithms, such as distinguishing between similar but different types of clothes, or identifying the pattern of a garment even when it appears in different colors and shapes. We formulate the annotation task as a multi-label and multi-modal classification problem: (i) both image and textual content (i.e., tags about the image) are available for learning classifiers, (ii) the classifiers must recognize a set of labels (i.e., a set of garment items), and (iii) the decision on which labels to assign to the query photo comes from a set of instances that is used to build a function which separates labels that should be assigned to the query photo from those that should not. Using this configuration, we propose two approaches: (i) a pointwise approach, called MMCA, which receives a single image as input, and (ii) a multi-instance (pairwise) approach, called M3CA, which uses pairs of images to build the classifiers. We conducted a systematic evaluation of the proposed algorithms using everyday photos collected from two major fashion-related social media sites, namely pose.com and chictopia.com. Our results show that the proposed approaches provide improvements over popular first-choice multi-label, multi-modal, multi-instance algorithms that range from 20% to 30% in terms of accuracy.
Iberoamerican Congress on Pattern Recognition | 2015
Keiller Nogueira; William Robson Schwartz; Jefersson Alex dos Santos
Identifying crops from remote sensing images is fundamental to understanding and monitoring land use. However, manual identification is expensive and may be impracticable given the amount of data. Automatic methods, although interesting, are highly dependent on the quality of the extracted features, since encoding the spatial features in an efficient and robust fashion is the key to generating discriminative models. Even though many visual descriptors have been proposed or successfully used to encode spatial features, in some cases more specific description techniques are needed. Deep learning has achieved very good results in some tasks, mainly boosted by its feature learning, which allows the method to extract specific and adaptable visual features depending on the data. In this paper, we propose two multi-scale methods, based on deep learning, to identify coffee crops. Specifically, we propose the Cascade Convolutional Neural Networks, or simply CCNN, which identifies crops considering a hierarchy of networks, and the Iterative Convolutional Neural Network, called ICNN, which feeds the same network with data several times. We conducted a systematic evaluation of the proposed algorithms using a remote sensing dataset. The experiments show that the proposed methods outperform the baseline consisting of state-of-the-art components by a factor that ranges from 3% to 6% in terms of average accuracy.
2016 9th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) | 2016
Keiller Nogueira; Jefersson Alex dos Santos; Tamires Fornazari; Thiago Sanna Freire Silva; Leonor Patricia C. Morellato; Ricardo da Silva Torres
In this paper, we analyse the use of Convolutional Neural Networks (CNNs or ConvNets) to discriminate vegetation species with few labelled samples. To the best of our knowledge, this is the first work dedicated to investigating the use of deep features in such a task. The experimental evaluation demonstrates that deep features significantly outperform well-known feature extraction techniques. The achieved results also show that it is possible to learn and classify vegetation patterns even with few samples. This makes the use of our approach feasible for real-world mapping applications, where it is often difficult to obtain large training sets.
Brazilian Symposium on Computer Graphics and Image Processing | 2014
Adriano Veloso; Jefersson Alex dos Santos; Keiller Nogueira
In this paper, we present an effective algorithm to automatically annotate clothes in everyday photos posted on online social networks, such as Facebook and Instagram. Specifically, clothing annotation can be informally stated as predicting, as accurately as possible, the garment items appearing in the target photo. This task not only poses interesting challenges for existing vision and recognition algorithms, but also brings huge opportunities for recommender and e-commerce systems. We formulate the annotation task as a multi-modal, multi-label and multi-instance classification problem: (i) both image and textual content (i.e., comments about the image) are available for learning classifiers, (ii) the classifiers must predict a set of labels (i.e., a set of garment items), and (iii) the decision on which labels to predict comes from a bag of instances that is used to build a function which separates labels that should be predicted from those that should not be. Under this setting, we propose a classification algorithm which employs association rules in order to build a prediction model that combines image and textual information, and adopts an entropy-minimization strategy in order to find the best set of labels to predict. We conducted a systematic evaluation of the proposed algorithm using everyday photos collected from two major fashion-related social networks, namely pose.com and chictopia.com. Our results show that the proposed algorithm provides improvements over popular first-choice multi-label algorithms that range from 2% to 40% in terms of accuracy.
Brazilian Symposium on Computer Graphics and Image Processing | 2017
Rafael Baeta; Keiller Nogueira; David Menotti; Jefersson Alex dos Santos
Geographic mapping of coffee crops using remote sensing images and supervised classification has been a challenging research subject. Besides the intrinsic problems caused by the nature of multi-spectral information, coffee crops are non-seasonal and usually planted in mountains, which requires encoding and learning a huge diversity of patterns during classifier training. In this paper, we propose a new approach for automatically mapping coffee crops by combining two recent trends in pattern recognition for remote sensing applications: deep learning and fusion/selection of features from multiple scales. The proposed approach is a pixel-wise strategy that consists of training and combining convolutional neural networks designed to receive as input different context windows around labeled pixels. Final maps are created by combining the outputs of those networks for a non-labeled set of pixels. Experimental results show that using multiple scales produces better coffee crop maps than using a single scale. The experiments also show that the proposed approach is effective compared with the baselines.
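The final combination step, where the per-scale networks' outputs are merged into one map, can be sketched as simple probability averaging. This is one plausible fusion rule, not necessarily the paper's exact combination scheme, and the tile size, class count, and probabilities below are invented for the example:

```python
import numpy as np

def fuse(prob_maps):
    # average the per-scale class probabilities for each pixel
    return np.mean(np.stack(prob_maps), axis=0)

rng = np.random.default_rng(1)
# three hypothetical per-scale network outputs for a 4x4 tile, 3 classes:
# each pixel gets a probability distribution over the classes
maps = [rng.dirichlet(np.ones(3), size=(4, 4)) for _ in range(3)]

fused = fuse(maps)                  # shape (4, 4, 3), still a distribution
label_map = fused.argmax(axis=-1)   # final per-pixel class map
```

Averaging keeps each pixel's fused scores a valid distribution, so the argmax over classes directly yields the final crop map.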
MediaEval | 2017
Keiller Nogueira; Samuel G. Fadel; Ícaro C. Dourado; Rafael de Oliveira Werneck; Javier A. V. Muñoz; Otávio Augusto Bizetto Penatti; Rodrigo Tripodi Calumby; Lin Li; Jefersson Alex dos Santos; Ricardo da Silva Torres