Publication


Featured research published by Luciano Oliveira.


IEEE Transactions on Intelligent Transportation Systems | 2010

On Exploration of Classifier Ensemble Synergism in Pedestrian Detection

Luciano Oliveira; Urbano Nunes; Paulo Peixoto

A single feature extractor-classifier is not usually able to deal with the diversity of multiple image scenarios. Therefore, integrating features and classifiers can help cope with this problem, particularly when the parts are carefully chosen and synergistically combined. In this paper, we address the problem of pedestrian detection with a novel ensemble method. Initially, histograms of oriented gradients (HOGs) and local receptive fields (LRFs), which are provided by a convolutional neural network, were both classified by multilayer perceptrons (MLPs) and support vector machines (SVMs). A diversity measure is used to refine the initial set of feature extractors and classifiers. A final classifier ensemble was then structured with an HOG and an LRF as features, classified by two SVMs and one MLP. We have analyzed two classes of fusion methods for combining the outputs of the component classifiers: (1) majority vote and (2) fuzzy integral. The first part of the performance evaluation consisted of running the final proposed ensemble over the DaimlerChrysler crop-wise data set, which was also artificially modified to simulate sunny and shadowy illumination conditions typical of outdoor scenarios. Then, a window-wise study was performed over a collected video sequence. Experiments have highlighted a state-of-the-art classification system that performs consistently better than the component classifiers and other methods.
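
A minimal sketch of the ensemble idea, assuming pre-cropped training windows: two SVMs and one MLP over HOG descriptors are combined by hard majority vote with scikit-learn. The LRF (CNN-derived) feature branch and the fuzzy-integral fusion are omitted, and all names are illustrative rather than the authors' code.

```python
# Hard majority-vote ensemble over HOG features (illustrative sketch only).
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier

def hog_features(windows):
    """Compute HOG descriptors for a stack of grayscale windows."""
    return np.array([hog(w, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for w in windows])

def train_ensemble(X_windows, y):
    """X_windows: (N, H, W) grayscale crops; y: (N,) labels in {0, 1}."""
    X = hog_features(X_windows)
    ensemble = VotingClassifier(
        estimators=[("svm_rbf", SVC(kernel="rbf", gamma="scale")),
                    ("svm_lin", SVC(kernel="linear")),
                    ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))],
        voting="hard")  # majority vote over the three component classifiers
    return ensemble.fit(X, y)
```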


Pattern Recognition | 2010

Semantic fusion of laser and vision in pedestrian detection

Luciano Oliveira; Urbano Nunes; Paulo Peixoto; Marco Silva; Fernando Moita

Fusion of laser and vision in object detection has been accomplished by two main approaches: (1) independent integration of sensor-driven features or sensor-driven classifiers, or (2) laser segmentation to find a region of interest (ROI), followed by an image classifier that labels the projected ROI. Here, we propose a novel fusion approach based on semantic information and embodied at multiple levels. Sensor fusion is based on the spatial relationships of parts-based classifiers and is performed via a Markov logic network. The proposed system deals with partial segments, is able to recover depth information even if the laser fails, and models the integration through contextual information, characteristics not found in previous approaches. Experiments in pedestrian detection demonstrate the effectiveness of our method over data sets gathered in urban scenarios.
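
The fusion itself is carried out with a Markov logic network, which is not reproduced here; the fragment below is only a crude stand-in that scores a pedestrian hypothesis as a weighted sum of soft rules over vision-detected parts and laser segments. Rule weights and part names are invented for illustration.

```python
# Crude stand-in for rule-based laser/vision fusion (not the paper's MLN).
from dataclasses import dataclass

@dataclass
class PartEvidence:
    name: str           # e.g. "head", "torso", "legs" (hypothetical part names)
    vision_conf: float  # confidence of the image parts-based classifier
    laser_hit: bool     # whether a laser segment overlaps the part

RULE_WEIGHTS = {"part_seen": 1.0, "laser_agrees": 0.7, "parts_stacked": 1.5}

def pedestrian_score(parts):
    """Accumulate weighted rule satisfactions (a rough stand-in for MLN inference)."""
    score = 0.0
    for p in parts:
        score += RULE_WEIGHTS["part_seen"] * p.vision_conf
        if p.laser_hit:
            score += RULE_WEIGHTS["laser_agrees"]
    # spatial-relationship rule: all main parts present in the hypothesis
    if {"head", "torso", "legs"} <= {p.name for p in parts}:
        score += RULE_WEIGHTS["parts_stacked"]
    return score
```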


IEEE Intelligent Vehicles Symposium | 2010

Context-aware pedestrian detection using LIDAR

Luciano Oliveira; Urbano Nunes

LIDAR-based object detection usually relies on geometric feature extraction followed by a generative or discriminative classification approach. Instead, we propose to change the way objects are detected with LIDAR: not only through a featureless approach, but also by inferring context-aware relations among object parts. For the former, a coarse-to-fine segmentation based on a β-skeleton random graph is proposed; after segmentation, each segment is labeled and scored by a Procrustes analysis. For the latter, after defining the sub-segments of each object, a contextual analysis assesses levels of intra-object and inter-object relationship, ultimately integrated into a Markov logic network. In this way, we contribute a system that deals with partial segmentation while also embodying contextual information. The proof of concept is pedestrian detection, but the rationale of the approach can be applied to any other object once its physical structure is defined. The effectiveness of the proposed method was assessed over a data set gathered in challenging scenarios, with a significant gain in accuracy over a full-segmentation version of the system.
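
One piece of the pipeline, the shape scoring of a candidate LIDAR segment, can be illustrated with standard Procrustes analysis; the β-skeleton graph segmentation and the Markov-logic contextual stage are not shown, and the template contour is a hypothetical placeholder.

```python
# Scoring a 2D LIDAR segment against a reference contour via Procrustes analysis.
import numpy as np
from scipy.spatial import procrustes

def resample(points, n=32):
    """Uniformly resample an ordered (N, 2) point sequence to n points."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n)
    return np.c_[np.interp(t, d, points[:, 0]), np.interp(t, d, points[:, 1])]

def procrustes_score(segment_xy, template_xy, n=32):
    """Lower disparity = better shape match after translation/scale/rotation."""
    _, _, disparity = procrustes(resample(template_xy, n), resample(segment_xy, n))
    return disparity
```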


International Conference on Intelligent Transportation Systems | 2008

On Integration of Features and Classifiers for Robust Vehicle Detection

Luciano Oliveira; Urbano Nunes

Previous research has demonstrated that a single recognition system is not usually able to deal with the diversity of environmental conditions found in images. In this paper, aiming to find a robust method that compensates for the inability of a single classifier under certain circumstances, an extensive study on how to combine features and classifiers is performed. Two ways of integrating features and classifiers are proposed: a concatenated vector and an ensemble architecture. Both methods are composed of histograms of oriented gradients (HOGs) and local receptive fields (LRFs) as feature extractors, and a multilayer perceptron (MLP) and support vector machines (SVMs) as classifiers. A thorough analysis of the robustness of the proposed methods to artificial illumination changes was experimentally carried out on a front and rear vehicle recognition task. Results demonstrated that the ensemble architecture with a heuristic majority voting presented the best performance (four other classification fusion methods based on majority voting and the fuzzy integral were also evaluated). The ensemble classifier obtained an average hit rate of 92.4% and a false alarm rate below 1% across multiple datasets and environmental conditions.
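
A sketch of the concatenated-vector variant under stated assumptions: HOG descriptors are concatenated with a second descriptor (an LBP histogram standing in for the CNN-derived LRF features, which are not reproduced here) and fed to a single SVM. Names are illustrative, not the authors' code.

```python
# Concatenated feature vector (HOG + placeholder texture descriptor) -> one SVM.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def concat_descriptor(window):
    """window: 2D grayscale crop."""
    h = hog(window, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    # LBP histogram used only as a placeholder for the LRF branch
    lbp = local_binary_pattern(window, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])

def train_concat_svm(windows, labels):
    X = np.array([concat_descriptor(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    return clf.fit(X, labels)
```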


IEEE Intelligent Vehicles Symposium | 2013

Pedestrian detection based on LIDAR-driven sliding window and relational parts-based detection

Luciano Oliveira; Urbano Nunes

Most standard image object detectors comprise one or more feature extractors and classifiers within a sliding-window framework. Nevertheless, this type of approach has demonstrated very limited performance on datasets of cluttered scenes and real-life situations. To tackle these issues, LIDAR space is exploited here in order to detect 2D objects in 3D space, avoiding the problems inherent in regular sliding-window techniques. Additionally, we propose a relational parts-based pedestrian detection in a probabilistic non-i.i.d. framework. With the proposed framework, we achieved state-of-the-art performance on a pedestrian dataset gathered in a challenging urban scenario. The proposed system demonstrated superior performance in comparison with pure sliding-window-based image detectors.
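
A minimal sketch of the LIDAR-driven windowing step, assuming a calibrated camera: each 3D LIDAR cluster is projected into the image and its bounding box becomes the detection window, replacing an exhaustive sliding window. The relational parts-based classification stage is not shown; K and T_cam_lidar are assumed given.

```python
# Project a LIDAR cluster into the image to obtain a detection ROI.
import numpy as np

def cluster_to_roi(cluster_xyz, K, T_cam_lidar):
    """cluster_xyz: (N, 3) points; K: 3x3 intrinsics; T_cam_lidar: 4x4 extrinsics.
    Returns (x0, y0, x1, y1) in pixel coordinates (assumes points in front of camera)."""
    pts_h = np.c_[cluster_xyz, np.ones(len(cluster_xyz))]   # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]                    # to camera frame
    pts_cam = pts_cam[:, pts_cam[2] > 0]                     # keep points in front
    uv = (K @ pts_cam) / pts_cam[2]                          # perspective divide
    u, v = uv[0], uv[1]
    return int(u.min()), int(v.min()), int(u.max()), int(v.max())
```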


Brazilian Symposium on Computer Graphics and Image Processing | 2012

Multi-Scale Spectral Residual Analysis to Speed up Image Object Detection

Grimaldo Silva; Leizer Schnitman; Luciano Oliveira

Accuracy in image object detection has usually been achieved at the expense of a heavy computational load. Therefore, a trade-off between detection performance and fast execution commonly represents the ultimate goal of an object detector in real-life applications. In this work, we propose a novel method toward that goal. The proposed method is grounded in a multi-scale spectral residual (MSR) analysis for saliency detection. Compared to a regular sliding-window search over the images, in our experiments MSR was able to reduce by 75% (on average) the number of windows to be evaluated by an object detector. The proposed method was thoroughly evaluated over a subset of the LabelMe dataset (person images), improving detection performance in most cases.
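
The single-scale spectral residual computation (Hou and Zhang's formulation) that underlies MSR can be sketched as follows; the multi-scale combination and the window-filtering policy on top of it are not shown here.

```python
# Single-scale spectral residual saliency map.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, out_sigma=3.0):
    """gray: 2D float array; returns a saliency map normalized to [0, 1]."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)      # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, out_sigma)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```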


IEEE Intelligent Vehicles Symposium | 2013

Learning to segment roads for traffic analysis in urban images

Marcelo Santos; Marcelo Linder; Leizer Schnitman; Urbano Nunes; Luciano Oliveira

Road segmentation plays an important role in many computer vision applications, either for in-vehicle perception or for traffic surveillance. In camera-equipped vehicles, road detection methods are being developed for advanced driver assistance, lane departure, and aerial incident detection, just to cite a few. In traffic surveillance, segmenting road information brings particular benefits: automatically delimiting the regions where traffic analysis takes place (consequently speeding up flow analysis in videos), helping to detect driving violations (by improving contextual information in traffic videos), and so forth. Methods and techniques can be used interchangeably for both types of application. In particular, we are interested in segmenting road regions from the rest of the image, aiming to support traffic flow analysis tasks. In the proposed method, road segmentation relies on superpixel detection based on a novel edge density estimation method; from each superpixel, priors are extracted from features of gray amount, texture homogeneity, traffic motion and horizon line. A feature vector with all these priors feeds a support vector machine classifier, which ultimately makes the superpixel-wise decision of road or non-road. A dataset of challenging scenes was gathered from traffic video surveillance cameras in our city to demonstrate the effectiveness of the method.
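
A minimal sketch under stated assumptions: SLIC superpixels stand in for the paper's edge-density-based superpixels, and only two simple priors (gray amount and a texture-homogeneity proxy) are computed per superpixel before an SVM decides road vs. non-road; the traffic-motion and horizon-line priors would require video and calibration and are omitted.

```python
# Per-superpixel features + SVM classification (illustrative sketch only).
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2gray
from sklearn.svm import SVC

def superpixel_features(image_rgb, n_segments=300):
    """Returns the superpixel label map and a (K, 2) feature matrix."""
    gray = rgb2gray(image_rgb)
    labels = slic(image_rgb, n_segments=n_segments, compactness=10)
    feats = []
    for sp in np.unique(labels):
        vals = gray[labels == sp]
        feats.append([vals.mean(), vals.std()])   # gray amount, texture proxy
    return labels, np.array(feats)

# Training (road_labels: one {0,1} label per superpixel, assumed annotated):
# labels, X = superpixel_features(frame)
# clf = SVC(kernel="rbf", gamma="scale").fit(X, road_labels)
```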


Computer Aided Systems Theory | 2007

Towards a robust vision-based obstacle perception with classifier fusion in cybercars

Luciano Oliveira; Gonçalo Monteiro; Paulo Peixoto; Urbano Nunes

Several single classifiers have been proposed to recognize objects in images. Since this approach has restrictions in certain situations, methods have been suggested to combine the outcomes of classifiers in order to increase overall classification accuracy. In this sense, we propose an effective method for a frame-by-frame classification task, aiming at a trade-off between decreasing false alarms and increasing the true positive detection rate. The strategy relies on a class set reduction method using a Mamdani fuzzy system, and it is applied to recognize pedestrians and vehicles in typical cybercar scenarios. The proposed system brings twofold contributions: (i) improved performance with respect to the component classifiers and (ii) extensibility to include other types of classifiers and object classes. The final results show the effectiveness of the system.
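
The class set reduction relies on a Mamdani fuzzy system; the toy fragment below is only an illustrative Mamdani-style rule base (invented memberships and rules, not the paper's) mapping two classifier confidences to a "keep the pedestrian class" score via min firing, max aggregation and centroid defuzzification.

```python
# Toy Mamdani-style inference over two classifier confidences.
import numpy as np

def rising(x, lo, hi):   # membership: 0 below lo, 1 above hi, linear in between
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def falling(x, lo, hi):  # membership: 1 below lo, 0 above hi, linear in between
    return float(np.clip((hi - x) / (hi - lo), 0.0, 1.0))

def keep_pedestrian_score(ped_conf, veh_conf):
    y = np.linspace(0.0, 1.0, 101)                   # output universe
    keep_high = np.clip((y - 0.5) / 0.5, 0.0, 1.0)   # output fuzzy sets
    keep_low = np.clip((0.5 - y) / 0.5, 0.0, 1.0)
    # Rule 1: pedestrian confidence high AND vehicle confidence low -> keep high
    r1 = np.minimum(min(rising(ped_conf, .3, .7), falling(veh_conf, .3, .7)), keep_high)
    # Rule 2: pedestrian confidence low -> keep low
    r2 = np.minimum(falling(ped_conf, .3, .7), keep_low)
    agg = np.maximum(r1, r2)                         # max aggregation
    return float((agg * y).sum() / (agg.sum() + 1e-9))  # centroid defuzzification
```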


Computational Intelligence Paradigms | 2008

Support Vector Machines and Features for Environment Perception in Mobile Robotics

Rui Araújo; Urbano Nunes; Luciano Oliveira; Pedro Angelo Morais de Sousa; Paulo Peixoto

Environment perception is one of the most challenging and fundamental tasks that allows a mobile robot to perceive obstacles and landmarks and to extract useful information for safe navigation. In this sense, classification techniques applied to sensor data may enhance the way mobile robots sense their surroundings. Among several techniques to classify data and to extract relevant information from the environment, support vector machines (SVMs) have demonstrated promising results, being used in several practical approaches. This chapter presents the core theory of SVMs and applications in two different scopes: using LIDAR (Light Detection and Ranging) to label specific places, and vision-based human detection aided by LIDAR.
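
The chapter's SVM exposition is not reproduced here; purely as a textbook reference point, the standard soft-margin primal problem and the kernel decision function it leads to are:

```latex
% Standard soft-margin SVM (textbook form; notation may differ from the chapter)
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;\;
  \frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{n} \xi_{i}
\quad \text{s.t.} \quad
  y_{i}\bigl(\mathbf{w}^{\top}\phi(\mathbf{x}_{i}) + b\bigr) \ge 1 - \xi_{i},
  \quad \xi_{i} \ge 0, \quad i = 1, \dots, n

% Resulting kernel decision function (dual variables \alpha_i, kernel K):
f(\mathbf{x}) = \operatorname{sign}\!\Bigl(\sum_{i=1}^{n} \alpha_{i} y_{i}
  K(\mathbf{x}_{i}, \mathbf{x}) + b\Bigr)
```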


Image and Vision Computing | 2018

ISEC: Iterative over-segmentation via edge clustering

Marcelo Mendonça; Luciano Oliveira

Several image pattern recognition tasks rely on superpixel generation as a fundamental step. Image analysis based on superpixels facilitates domain-specific applications and also speeds up the overall processing time of the task. Recent superpixel methods have been designed to favor boundary adherence, usually regulating the size and shape of each superpixel in order to mitigate undersegmentation failures. Superpixel regularity and compactness sometimes impose an excessive number of segments in the image, which ultimately decreases the efficiency of the final segmentation, especially in video segmentation. We propose here a novel method to generate superpixels, called iterative over-segmentation via edge clustering (ISEC), which addresses the over-segmentation problem from a different perspective than recent state-of-the-art approaches. ISEC iteratively clusters edges extracted from the image objects, providing superpixels that are adaptive in size, shape and quantity, while preserving suitable adherence to the real object boundaries. All this is achieved at a very low computational cost. Experiments show that ISEC stands out from existing methods, striking a favorable balance between segmentation stability and accurate representation of motion discontinuities, features especially suitable for video segmentation.
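
ISEC's iterative edge clustering is not reproduced here; the fragment below is only a crude stand-in that derives segments from an edge map, illustrating the general idea of letting edge structure, rather than a fixed grid, dictate segment size and shape.

```python
# Edge-driven segments as a rough illustration (not the ISEC algorithm itself).
import numpy as np
from skimage.feature import canny
from skimage.measure import label
from skimage.color import rgb2gray

def edge_based_segments(image_rgb, sigma=2.0):
    gray = rgb2gray(image_rgb)
    edges = canny(gray, sigma=sigma)          # binary edge map
    segments = label(~edges, connectivity=1)  # label regions bounded by edges
    return segments                           # integer label map (0 = edge pixels)
```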

Collaboration


Dive into Luciano Oliveira's collaborations.

Top Co-Authors

Leizer Schnitman

Federal University of Bahia


Grimaldo Silva

Federal University of Bahia
