Publication


Featured research published by Cristiano Premebida.


International Conference on Intelligent Transportation Systems | 2007

A Lidar and Vision-based Approach for Pedestrian and Vehicle Detection and Tracking

Cristiano Premebida; Gonçalo Monteiro; Urbano Nunes; Paulo Peixoto

This paper presents a sensorial-cooperative architecture to detect, track and classify entities in semi-structured outdoor scenarios for intelligent vehicles. To accomplish this task, information provided by in-vehicle Lidar and monocular vision is used. The detection and tracking phases are performed in the laser space, and the object classification methods work both in the laser space (using a Gaussian Mixture Model classifier) and in the vision space (an AdaBoost classifier). A Bayesian-sum decision rule is used to combine the results of both classification techniques, and hence a more reliable object classification is achieved. Experiments confirm the effectiveness of the proposed architecture.
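
As a rough sketch of the sum-rule fusion described above, the snippet below combines the per-class posteriors of a laser-space classifier (e.g. the GMM) and a vision-space classifier (e.g. AdaBoost) with a Kittler-style sum rule; the function name, the uniform prior and the three-class example are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def sum_rule_fusion(p_laser, p_vision, prior=None):
    # Per-class posteriors from the laser-space classifier (e.g. a GMM) and
    # the vision-space classifier (e.g. AdaBoost); both should sum to 1.
    p_laser = np.asarray(p_laser, dtype=float)
    p_vision = np.asarray(p_vision, dtype=float)
    if prior is None:
        prior = np.full_like(p_laser, 1.0 / p_laser.size)  # uniform class prior
    # Sum rule over R = 2 classifiers: (1 - R) * P(w) + sum_i P(w | x_i).
    fused = (1 - 2) * prior + p_laser + p_vision
    fused = np.clip(fused, 1e-12, None)                    # guard against negatives
    return fused / fused.sum()

# Hypothetical example with classes (pedestrian, vehicle, other).
posterior = sum_rule_fusion([0.60, 0.30, 0.10], [0.75, 0.15, 0.10])
print(posterior.argmax())   # index of the fused decision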


Robot and Human Interactive Communication | 2014

A probabilistic approach for human everyday activities recognition using body motion from RGB-D images

Diego R. Faria; Cristiano Premebida; Urbano Nunes

In this work, we propose an approach that relies on depth cues from RGB-D images, where features related to human body motion (3D skeleton features) are used by multiple learning classifiers to recognize human activities on a benchmark dataset. A Dynamic Bayesian Mixture Model (DBMM) is designed to combine multiple classifier likelihoods into a single form, assigning weights (via an uncertainty measure) to counterbalance the likelihoods as a posterior probability. Temporal information is incorporated in the DBMM by means of prior probabilities, taking previous probabilistic inference into consideration to reinforce current-frame classification. The publicly available Cornell Activity Dataset [1], with 12 different human activities, was used to evaluate the proposed approach. Reported results on the test set show that our approach outperforms state-of-the-art methods in terms of precision, recall and overall accuracy. This work enables activity classification for applications where human behaviour recognition is important, such as human-robot interaction and assisted living for elderly care, among others.
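
A minimal sketch of the kind of update the DBMM performs is shown below, assuming (for illustration only) an entropy-based uncertainty measure for the classifier weights; the paper's exact weighting and normalization may differ.

import numpy as np

def entropy_weights(likelihoods):
    # Assumed uncertainty measure: lower-entropy (more confident) base
    # classifiers receive larger weights; weights are normalized to sum to 1.
    p = likelihoods / likelihoods.sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    confidence = np.exp(-entropy)
    return confidence / confidence.sum()

def dbmm_update(likelihoods, prev_posterior):
    # likelihoods: (n_classifiers, n_classes) per-frame class likelihoods.
    # prev_posterior: previous frame's posterior, used as the temporal prior.
    likelihoods = np.asarray(likelihoods, dtype=float)
    w = entropy_weights(likelihoods)
    mixture = w @ likelihoods                  # weighted mixture of likelihoods
    posterior = np.asarray(prev_posterior) * mixture
    return posterior / posterior.sum()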


Intelligent Robots and Systems | 2014

Pedestrian detection combining RGB and dense LIDAR data

Cristiano Premebida; Joao Carreira; Jorge Batista; Urbano Nunes

Why is pedestrian detection still very challenging in realistic scenes? How much would a successful solution to monocular depth inference aid pedestrian detection? In order to answer these questions we trained a state-of-the-art deformable parts detector using different configurations of optical images and their associated 3D point clouds, in conjunction and independently, leveraging upon the recently released KITTI dataset. We propose novel strategies for depth upsampling and contextual fusion that together lead to detection performance which exceeds that of the RGB-only systems. Our results suggest depth cues as a very promising mid-level target for future pedestrian detection approaches.
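
For context, densifying the sparse LIDAR returns on the image grid could look roughly like the sketch below; it uses plain nearest-neighbour interpolation as a stand-in, whereas the paper proposes its own upsampling strategy.

import numpy as np
from scipy.interpolate import griddata

def upsample_sparse_depth(uv, depth, height, width):
    # uv: (N, 2) pixel coordinates (u, v) of LIDAR points projected into the
    # image; depth: (N,) range value per point. Returns a dense (H, W) map.
    grid_v, grid_u = np.mgrid[0:height, 0:width]
    dense = griddata(uv[:, ::-1], depth, (grid_v, grid_u), method="nearest")
    return dense.astype(np.float32)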


International Conference on Intelligent Transportation Systems | 2009

Exploiting LIDAR-based features on pedestrian detection in urban scenarios

Cristiano Premebida; Oswaldo Ludwig; Urbano Nunes

Reliable detection and classification of vulnerable road users constitute a critical issue in safety/protection systems for intelligent vehicles driving in urban zones. In this context, most perception systems have LIDAR and/or radar as primary detection modules and vision-based systems for object classification. This work, on the other hand, presents an analysis of pedestrian detection in urban scenarios using exclusively LIDAR-based features. The aim is to explore how much information can be extracted from LIDAR sensors for pedestrian detection. Moreover, this study will be useful for composing multi-sensor pedestrian detection systems using not only LIDAR but also vision sensors. Experimental results using our dataset and a detailed classification performance analysis are presented, with comparisons among various classification techniques.
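
A few of the simplest geometric features one can compute from a 2D LIDAR segment are sketched below; the feature set analysed in the paper is considerably richer, so treat these purely as illustrative examples.

import numpy as np

def segment_features(points):
    # points: (N, 2) array of x, y laser hits belonging to one object candidate.
    points = np.asarray(points, dtype=float)
    n_points = len(points)
    centroid = points.mean(axis=0)
    width = np.linalg.norm(points[-1] - points[0])            # end-to-end extent
    spread = np.linalg.norm(points - centroid, axis=1).std()  # scatter about centroid
    mean_range = np.linalg.norm(points, axis=1).mean()        # distance from the sensor
    return np.array([n_points, width, spread, mean_range])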


The International Journal of Robotics Research | 2013

Fusing LIDAR, camera and semantic information: A context-based approach for pedestrian detection

Cristiano Premebida; Urbano Nunes

In this work, a context-based multisensor system, applied to pedestrian detection in urban environments, is presented. The proposed system comprises three main processing modules: (i) a LIDAR-based module acting as the primary object detector, (ii) a module which supplies the system with contextual information obtained from a semantic map of the roads, and (iii) an image-based detection module, using sliding window detectors, with the role of validating the presence of pedestrians in the regions of interest generated by the LIDAR module. A Bayesian strategy is used to combine information from sensors onboard the vehicle (‘local’ information) with information contained in a digital map of the roads (‘global’ information). To support experimental analysis, a multisensor dataset, named the Laser and Image Pedestrian Detection dataset (LIPD), is used. The LIPD dataset was collected in an urban environment, under daylight conditions, using an electrical vehicle driven at low speed. A down-sampling method, using support vectors extracted from multiple linear SVMs, was used to reduce the cardinality of the training set and, as a consequence, to decrease the CPU time during the training process of the image-based classifiers. The performance of the system is evaluated, in terms of detection rate and the number of false positives per frame, using three image-detectors: a linear SVM, a SVM-cascade, and a benchmark method. Additionally, experiments are performed to assess the impact of contextual information on the performance of the detection system.
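
The local/global combination can be pictured as a simple product of the detector posterior with a map-derived prior, as in the hedged sketch below; the class layout and the numbers are hypothetical.

import numpy as np

def contextual_posterior(p_detector, p_context):
    # p_detector: posterior from the onboard LIDAR/image pipeline ('local').
    # p_context:  prior taken from the semantic road map ('global'), e.g. a
    #             sidewalk or crosswalk region raises the pedestrian prior.
    p = np.asarray(p_detector, dtype=float) * np.asarray(p_context, dtype=float)
    return p / p.sum()

# Hypothetical two-class example (pedestrian, non-pedestrian): the detector is
# uncertain, but the ROI lies on a sidewalk, so the fused belief shifts.
print(contextual_posterior([0.55, 0.45], [0.80, 0.20]))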


International Conference on Intelligent Transportation Systems | 2006

A Multi-Target Tracking and GMM-Classifier for Intelligent Vehicles

Cristiano Premebida; Urbano Nunes

Intelligent vehicles need reliable information about the environment in order to operate safely. In this paper we propose a flexible multi-module architecture for a multi-target detection and tracking system (MTDTS) complemented with a Bayesian object classification layer based on finite Gaussian mixture models (GMM). The GMM parameters are estimated with an expectation-maximization (EM) algorithm, so finite-component models are generated from feature vectors extracted from the object classes during the training stage. Using the joint Gaussian mixture pdf modelled for each class, a Bayesian approach is used to distinguish the object categories (persons, tree-trunks/posts, and cars) in a semi-structured outdoor environment based on data from a laser range finder (LRF). Experiments using real scan data confirm the robustness of the proposed architecture. This paper investigates a particular problem: detection, tracking and classification of objects in cybercar-like outdoor environments.
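
Below is a generic sketch of per-class GMMs fitted by EM with a Bayes decision rule on top, in the spirit of the classifier described above; scikit-learn's GaussianMixture stands in for the paper's implementation, and the number of components is an arbitrary choice.

import numpy as np
from sklearn.mixture import GaussianMixture

class GMMBayesClassifier:
    def __init__(self, n_components=3):
        self.n_components = n_components
        self.models, self.priors = {}, {}

    def fit(self, X, y):
        # One mixture per class, fitted with EM; class priors from frequencies.
        for label in np.unique(y):
            Xc = X[y == label]
            self.models[label] = GaussianMixture(self.n_components).fit(Xc)
            self.priors[label] = len(Xc) / len(X)
        return self

    def predict(self, X):
        # Bayes rule up to a constant: log p(x | c) + log P(c), argmax over c.
        labels = list(self.models)
        scores = np.column_stack(
            [self.models[c].score_samples(X) + np.log(self.priors[c]) for c in labels]
        )
        return np.array(labels)[scores.argmax(axis=1)]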


Computer-Aided Engineering | 2013

Pedestrian detection in far infrared images

Daniel Olmeda; Cristiano Premebida; Urbano Nunes; José María Armingol; Arturo de la Escalera

This paper presents an experimental study on pedestrian classification and detection in far infrared (FIR) images. The study includes an in-depth evaluation of several combinations of features and classifiers, which include features previously used for daylight scenarios, as well as a new descriptor, HOPE (Histograms of Oriented Phase Energy), specifically targeted at infrared images, and a new adaptation of a latent-variable SVM approach to FIR images. The presented results are validated on a new classification and detection dataset of FIR images collected in outdoor environments from a moving vehicle. The classification set contains 16152 pedestrian and 65440 background samples evenly selected from several sequences acquired at different temperatures and under different illumination conditions. The detection dataset consists of 15224 images with ground-truth information. The authors are making this dataset public for benchmarking new detectors in the area of intelligent vehicles and field robotics applications.


International Conference on Intelligent Transportation Systems | 2010

A cascade classifier applied in pedestrian detection using laser and image-based features

Cristiano Premebida; Oswaldo Ludwig; Marco Silva; Urbano Nunes

In this paper we present a multistage method for pedestrian detection using information from a LIDAR and a monocular camera mounted on an electric vehicle driving in urban scenarios. The proposed method is a cascade of classifiers trained on two subsets of features, one with laser-based features and the other with a set of image-based features. A specific training approach was developed to adjust the cascade stages in order to enhance the classification performance. The proposed method differs from the conventional cascade in the way selected samples are propagated through the cascade: the subsequent stages receive both negatives and positives from previous ones, relying on a decision-margin process. Experiments were conducted in off-line mode, for a set of single-component classifiers and for the proposed cascade technique. The results are compared in terms of classification performance metrics and ROC curves.
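
One possible reading of the margin-based propagation is sketched below: each stage labels only the samples it is confident about and hands the borderline ones (both positives and negatives) to the next stage. The margin value and the decision_function interface are assumptions for illustration, not the paper's exact procedure.

import numpy as np

def run_cascade(stages, X, margin=0.25):
    # stages: fitted classifiers exposing decision_function(X) -> signed score,
    #         e.g. one trained on laser-based and one on image-based features.
    labels = np.zeros(len(X), dtype=int)           # +1 pedestrian, -1 background
    active = np.arange(len(X))
    for i, stage in enumerate(stages):
        scores = stage.decision_function(X[active])
        last_stage = (i == len(stages) - 1)
        # Confident samples (or anything reaching the last stage) are labelled;
        # samples inside the decision margin are propagated to the next stage.
        confident = (np.abs(scores) >= margin) | last_stage
        labels[active[confident]] = np.where(scores[confident] >= 0, 1, -1)
        active = active[~confident]
        if active.size == 0:
            break
    return labels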


Robot and Human Interactive Communication | 2015

Probabilistic human daily activity recognition towards robot-assisted living

Diego R. Faria; Mario Vieira; Cristiano Premebida; Urbano Nunes

In this work, we present a human-centered robot application in the scope of daily activity recognition towards robot-assisted living. Our approach consists of a probabilistic ensemble of classifiers acting as a dynamic mixture model within a Bayesian framework, where each base classifier contributes to the inference in proportion to its posterior belief. The classification model relies on the confidence obtained from an uncertainty measure that assigns a weight to each base classifier to counterbalance the joint posterior probability. Spatio-temporal 3D skeleton-based features extracted from RGB-D sensor data are modelled to characterize daily activities, including risk situations (e.g., falling down, running or jumping in a room). To assess our proposed approach, challenging public datasets such as MSR-Action3D and MSR-Activity3D [1] [2] were used to compare the results with other recent methods. Reported results show that our proposed approach outperforms state-of-the-art methods in terms of overall accuracy. Moreover, we implemented our approach in the Robot Operating System (ROS) environment to validate the DBMM running on-the-fly on a mobile robot with an onboard RGB-D sensor, identifying daily activities for a robot-assisted living application.


International Conference on Intelligent Transportation Systems | 2011

Evaluation of Boosting-SVM and SRM-SVM cascade classifiers in laser and vision-based pedestrian detection

Oswaldo Ludwig; Cristiano Premebida; Urbano Nunes; Rui Araújo

Pedestrian detection systems constitute an important field of research and development in computer vision, especially when applied in protection/safety systems in urban scenarios, due to their direct impact on society, specifically in terms of traffic casualties. In order to face this challenge, this work exploits developments in statistical machine learning theory, in particular structural risk minimization (SRM) in a cascade ensemble. Namely, the ensemble applies the principle of SRM to a set of linear support vector machines (SVM). The linear SVM complexity, in the Vapnik sense, is controlled by choosing the dimension of the feature space in each cascade stage. To support experimental analysis, a multi-sensor dataset comprising data from a LIDAR, a monocular camera, an IMU, an encoder and a DGPS is introduced in this paper. The dataset, named the Laser and Image Pedestrian Detection (LIPD) dataset, was collected in an urban environment, under daylight conditions, using an electrical vehicle driven at low speed. Labelled pedestrian and non-pedestrian samples are also available for benchmarking purposes. The cascade of SVMs, trained with image-based features (HOG and COV descriptors), is used to detect pedestrian evidence in regions of interest (ROI) generated by a LIDAR-based processing system. Finally, the paper presents experimental results comparing the performance of a Boosting-SVM cascade and the proposed SRM-SVM cascade classifier, in terms of detection errors.
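
A toy illustration of controlling stage complexity through feature-space dimension: linear SVMs are trained on progressively larger slices of the feature vector and chained as early-reject stages. The dimension schedule, the column ordering and the use of LinearSVC are illustrative choices, not the paper's configuration.

import numpy as np
from sklearn.svm import LinearSVC

def train_dimension_cascade(X, y, dims=(16, 64, 256)):
    # Each stage sees only the first k feature dimensions, so capacity (in the
    # Vapnik sense) grows from stage to stage.
    return [(k, LinearSVC(C=1.0).fit(X[:, :k], y)) for k in dims]

def cascade_decision(stages, x):
    # Early rejection: a candidate window must survive every stage.
    for k, clf in stages:
        if clf.decision_function(x[None, :k])[0] < 0:
            return -1            # rejected as non-pedestrian
    return 1                     # accepted by all stages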
