Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dario Figueira is active.

Publication


Featured research published by Dario Figueira.


advanced video and signal based surveillance | 2013

Semi-supervised multi-feature learning for person re-identification

Dario Figueira; Loris Bazzani; Ha Quang Minh; Marco Cristani; Alexandre Bernardino; Vittorio Murino

Person re-identification is probably the key open challenge for low-level video surveillance in the presence of a camera network with non-overlapping fields of view. A large number of direct approaches have emerged in the last five years, often proposing novel visual features specifically designed to highlight the most discriminant aspects of people, which are invariant to pose, scale and illumination. On the other hand, learning-based methods are usually based on simpler features and are trained on pairs of cameras to discriminate between individuals. In this paper, we present a method that joins these two ideas: given an arbitrary state-of-the-art set of features, no matter their number, dimensionality or descriptor, the proposed multi-class learning approach learns how to fuse them, ensuring that the features agree on the classification result. The approach consists of a semi-supervised multi-feature learning strategy that requires as little as a single image per person as training data. To validate our approach, we present results on different datasets, using several heterogeneous features, which set a new level of performance in the person re-identification problem.
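To illustrate the general idea of fusing heterogeneous features, the sketch below combines per-feature distance matrices with a normalized, uniformly weighted sum. This is only an illustrative stand-in, not the semi-supervised learning algorithm of the paper; the feature names, dimensionalities and weights are hypothetical.

```python
import numpy as np

def l2_distance_matrix(probe, gallery):
    """Pairwise Euclidean distances between probe and gallery feature matrices."""
    diff = probe[:, None, :] - gallery[None, :, :]
    return np.linalg.norm(diff, axis=2)

def fuse_feature_scores(probe_feats, gallery_feats, weights=None):
    """Combine per-feature distance matrices into one ranking score.

    probe_feats / gallery_feats: dicts mapping a feature name (e.g. a color
    histogram or a texture descriptor) to an (N x D_f) array.  Each feature may
    have a different dimensionality, mirroring the "arbitrary set of features"
    setting; the uniform weighting used here is a placeholder, not the learned
    fusion of the original method.
    """
    names = list(probe_feats)
    if weights is None:
        weights = {n: 1.0 / len(names) for n in names}
    fused = None
    for n in names:
        d = l2_distance_matrix(probe_feats[n], gallery_feats[n])
        d = (d - d.min()) / (d.max() - d.min() + 1e-9)   # make features comparable
        fused = weights[n] * d if fused is None else fused + weights[n] * d
    return fused                                          # lower = more similar

# Toy usage with two hypothetical descriptors of different dimensionality.
rng = np.random.default_rng(0)
probe   = {"color_hist": rng.random((3, 48)), "texture": rng.random((3, 16))}
gallery = {"color_hist": rng.random((5, 48)), "texture": rng.random((5, 16))}
ranking = np.argsort(fuse_feature_scores(probe, gallery), axis=1)
print(ranking)   # gallery indices ordered from best to worst match, per probe
```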


intelligent robots and systems | 2009

ISROBOTNET: A testbed for sensor and robot network systems

Marco Barbosa; Alexandre Bernardino; Dario Figueira; José António Gaspar; Nelson Gonçalves; Pedro U. Lima; Plinio Moreno; Abdolkarim Pahliani; José Santos-Victor; Matthijs T. J. Spaan; João Sequeira

This paper introduces a testbed for sensor and robot network systems, currently composed of 10 cameras and 5 mobile wheeled robots equipped with sensors for self-localization and obstacle avoidance, vision cameras, and wireless communications. The testbed includes a service-oriented middleware to enable fast prototyping and implementation of algorithms previously tested in simulation, as well as to simplify the integration of subsystems developed by different partners. We survey an integrated approach to human-robot interaction developed on the testbed under a European research project. The application integrates innovative methods and algorithms for people tracking and waving detection, cooperative perception among static and mobile cameras to improve people-tracking accuracy, and decision-theoretic approaches to sensor selection and task allocation within the sensor network.


european conference on computer vision | 2014

The HDA+ Data Set for Research on Fully Automated Re-identification Systems

Dario Figueira; Matteo Taiana; Athira M. Nambiar; Jacinto C. Nascimento; Alexandre Bernardino

There are no available datasets to evaluate integrated Pedestrian Detection and Re-Identification systems, and the standard evaluation metric for Re-Identification (the Cumulative Matching Characteristic curve) does not properly assess the errors that arise from integrating Pedestrian Detectors with Re-Identification (false positives and missed detections). Real-world Re-Identification systems require Pedestrian Detectors in order to function automatically, and the integration of Pedestrian Detection algorithms with Re-Identification produces errors that must be dealt with. We provide not only a dataset that allows the evaluation of integrated Pedestrian Detection and Re-Identification systems, but also sample Pedestrian Detection data and meaningful evaluation metrics and software, so as to make it “one-click easy” to test your own Re-Identification algorithm in an integrated PD+REID system without having to implement a Pedestrian Detector yourself. We also provide body-part detection data on top of the manually labeled data and the Pedestrian Detection data, so as to make it trivial to extract features from relevant local regions (actual body parts). Finally, we provide camera synchronization data to allow the testing of inter-camera tracking algorithms. We expect this dataset and software to be widely used and to boost research in integrated Pedestrian Detection and Re-Identification systems, bringing them closer to reality.
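For reference, the Cumulative Matching Characteristic curve mentioned above can be computed from a probe-gallery distance matrix as in the minimal sketch below (identities and distances are synthetic). It also makes explicit why plain CMC assumes every probe has a true gallery match and therefore cannot penalize detector false positives or missed detections, which is what the HDA+ evaluation software adds.

```python
import numpy as np

def cmc_curve(dist, probe_ids, gallery_ids):
    """Cumulative Matching Characteristic from a probe x gallery distance matrix.

    cmc[k-1] = fraction of probes whose correct identity appears within the
    top-k ranked gallery entries.  It assumes every probe has a true match in
    the gallery, which is exactly why it cannot account for detector errors.
    """
    order = np.argsort(dist, axis=1)                    # gallery indices, best first
    ranked_ids = np.asarray(gallery_ids)[order]
    hits = ranked_ids == np.asarray(probe_ids)[:, None]
    first_hit = hits.argmax(axis=1)                     # rank of the correct match
    cmc = np.zeros(dist.shape[1])
    for r in first_hit:
        cmc[r:] += 1
    return cmc / dist.shape[0]

# Synthetic example: 4 probes, 6 gallery entries.
rng = np.random.default_rng(1)
dist = rng.random((4, 6))
print(cmc_curve(dist, probe_ids=[0, 1, 2, 3], gallery_ids=[3, 0, 1, 2, 4, 5]))
```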


International Journal of Machine Intelligence and Sensory Signal Processing | 2014

A multi-camera video dataset for research on high-definition surveillance

Athira M. Nambiar; Matteo Taiana; Dario Figueira; Jacinto C. Nascimento; Alexandre Bernardino

We present a fully labelled image sequence dataset for benchmarking video surveillance algorithms. The dataset was acquired from 13 indoor cameras distributed over three floors of one building, recording simultaneously for 30 minutes. The dataset was specially designed and labelled to tackle the person detection and re-identification problems. Around 80 persons participated in the data collection, most of them appearing in more than one camera. The dataset is heterogeneous: there are three distinct types of cameras (standard, high and very high resolution), different view types (corridors, doors, open spaces) and different frame rates. This diversity is essential for a proper assessment of the robustness of video analytics algorithms in different imaging conditions. We illustrate the application of pedestrian detection and re-identification algorithms to the given dataset, pointing out important criteria for benchmarking and the impact of high-resolution imagery on the performance of the algorithms.


international conference on robotics and automation | 2009

From pixels to objects: Enabling a spatial model for humanoid social robots

Dario Figueira; Manuel Lopes; Rodrigo Ventura; Jonas Ruesch

This work adds the concept of an object to an existing low-level attention system of the humanoid robot iCub. Objects are defined as clusters of SIFT visual features. When the robot first encounters an unknown object, found to be within a certain (small) distance from its eyes, it stores a cluster of the features present within an interval about that distance, using depth perception. Whenever a previously stored object crosses the robot's field of view again, it is recognized, mapped into an egocentric frame of reference, and gazed at. This mapping is persistent, in the sense that the object's identity and position are kept even when it is not visible to the robot. Features are stored and recognized in a bottom-up way. Experimental results on the humanoid robot iCub validate this approach. This work lays the foundation for linking the bottom-up attention system with top-down, object-oriented information provided by humans.
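A hedged OpenCV sketch of the underlying idea: an object modeled as a cluster of SIFT descriptors and later recognized by descriptor matching with a ratio test. The depth gating of the original system is omitted, and the thresholds and file names are hypothetical.

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def learn_object(image):
    """Store an 'object' as the cluster of SIFT descriptors found in the image.
    The original system additionally gated features by depth (only points close
    to the robot's eyes); that step is omitted here."""
    _, descriptors = sift.detectAndCompute(image, None)
    return descriptors

def recognise(image, object_descriptors, ratio=0.75, min_matches=10):
    """Return True if enough SIFT matches pass Lowe's ratio test.
    'ratio' and 'min_matches' are illustrative values, not the paper's."""
    _, descriptors = sift.detectAndCompute(image, None)
    if descriptors is None or object_descriptors is None:
        return False
    knn = matcher.knnMatch(descriptors, object_descriptors, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches

# Usage (with hypothetical image files):
# model = learn_object(cv2.imread("object_view.png", cv2.IMREAD_GRAYSCALE))
# print(recognise(cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE), model))
```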


international conference on computer vision | 2011

Multiple Hypothesis Tracking in camera networks

David Miguel Antunes; Dario Figueira; David Martins de Matos; Alexandre Bernardino; José António Gaspar

In this paper we address the problem of tracking multiple targets across a network of cameras with non-overlapping fields of view. Existing methods to measure similarity between detected targets and those previously encountered in the network (the re-identification problem) frequently produce incorrect correspondences between observations and existing targets. We show that these issues can be corrected by Multiple Hypothesis Tracking (MHT), using its capability for disambiguation when new information becomes available. MHT is recognized in the multi-target tracking field for its ability to solve difficult assignment problems. Experiments both in simulation and in the real world show clear advantages of MHT with respect to the simpler MAP approach.
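The deferred-decision idea behind MHT can be illustrated with the drastically simplified sketch below: every surviving hypothesis branches on each possible identity for a new observation, and the set is pruned back to a small beam. Track creation and deletion, gating, and the mutual-exclusion constraints of a full MHT are omitted, and the log-likelihood values are invented.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Hypothesis:
    score: float                                 # higher = more plausible
    assignments: dict = field(compare=False)     # observation id -> identity

def branch(hypotheses, obs_id, identity_scores, beam=5):
    """One MHT-style update: extend every surviving hypothesis with each
    candidate identity for the new observation, then prune to the 'beam' best.
    'identity_scores' maps candidate identity -> log-likelihood, e.g. from a
    re-identification similarity; the values used below are illustrative."""
    expanded = []
    for h in hypotheses:
        for ident, loglik in identity_scores.items():
            assignments = dict(h.assignments)
            assignments[obs_id] = ident
            expanded.append(Hypothesis(h.score + loglik, assignments))
    return heapq.nlargest(beam, expanded)        # keep only the top hypotheses

# Toy usage: two observations with ambiguous identities A/B; the decision for
# obs1 is only settled once obs2 has been seen.
hyps = [Hypothesis(0.0, {})]
hyps = branch(hyps, "obs1", {"A": -0.1, "B": -0.3, "new": -1.0})
hyps = branch(hyps, "obs2", {"A": -0.4, "B": -0.2, "new": -1.0})
print(hyps[0].assignments)   # best joint assignment after both observations
```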


international conference on image analysis and recognition | 2011

Re-identification of visual targets in camera networks: a comparison of techniques

Dario Figueira; Alexandre Bernardino

In this paper we address the problem of re-identification of people: given a camera network with non-overlapping fields of view, we study how to correctly pair detections in different cameras (the one-to-many problem: search for similar cases) or match detections to a database of individuals (the one-to-one problem: search for the best match). We propose a novel color-histogram-based feature that increases the re-identification rate. Furthermore, we evaluate five different classifiers: three fixed distance metrics, one learned distance metric, and a classifier based on sparse representation, novel to the field of re-identification. A new database, along with the MATLAB code produced, is made available on request.
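An illustrative sketch of the overall pipeline: a color histogram per detection, compared with the Bhattacharyya distance (one commonly used fixed metric) to rank a gallery. The binning is arbitrary and this is not the specific feature proposed in the paper.

```python
import numpy as np

def hsv_histogram(image_hsv, bins=(8, 8, 4)):
    """Color histogram descriptor for one detection (HSV image as an H x W x 3
    uint8 array).  The binning is illustrative, not the paper's feature."""
    hist, _ = np.histogramdd(
        image_hsv.reshape(-1, 3).astype(float),
        bins=bins,
        range=((0, 256), (0, 256), (0, 256)),
    )
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

def bhattacharyya(p, q):
    """A fixed distance metric commonly used to compare normalized histograms."""
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

def rank_gallery(probe_hist, gallery_hists):
    """One-to-many search: sort gallery detections by distance to the probe."""
    d = np.array([bhattacharyya(probe_hist, g) for g in gallery_hists])
    return np.argsort(d), d

# Toy usage with random crops standing in for pedestrian detections.
rng = np.random.default_rng(2)
probe   = hsv_histogram(rng.integers(0, 256, (64, 32, 3), dtype=np.uint8))
gallery = [hsv_histogram(rng.integers(0, 256, (64, 32, 3), dtype=np.uint8)) for _ in range(4)]
print(rank_gallery(probe, gallery)[0])   # gallery indices ordered by similarity
```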


international symposium on visual computing | 2009

Optical Flow Based Detection in Mixed Human Robot Environments

Dario Figueira; Plinio Moreno; Alexandre Bernardino; José António Gaspar; José Santos-Victor

In this paper we compare several optical-flow-based features in order to distinguish between humans and robots in a mixed human-robot environment. In addition, we propose two modifications to the optical flow computation: (i) a way to standardize the optical flow vectors, which relates real-world motions to image motions, and (ii) a way to improve flow robustness to noise by selecting the sampling times as a function of the spatial displacement of the target in the world. We add temporal consistency to the flow-based features by using a TemporalBoost algorithm. We compare combinations of: (i) several temporal supports, (ii) flow-based features, (iii) flow standardization, and (iv) flow sub-sampling. We implement the best-performing combination and validate it in a real outdoor setup, attaining real-time performance.
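A hedged sketch of the kind of optical-flow feature involved: dense Farneback flow inside a target's bounding box, summarized as an orientation histogram. The paper's flow standardization, sub-sampling and TemporalBoost stages are not reproduced, and the parameters and file names are illustrative.

```python
import cv2
import numpy as np

def flow_orientation_histogram(prev_gray, curr_gray, box, bins=8, min_mag=0.5):
    """Histogram of optical-flow orientations inside a bounding box.

    box = (x, y, w, h) in pixels.  Dense flow is computed with OpenCV's
    Farneback method; the flow standardization and temporal support described
    in the paper are not included in this sketch.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y, w, h = box
    fx = flow[y:y + h, x:x + w, 0].ravel()
    fy = flow[y:y + h, x:x + w, 1].ravel()
    mag = np.hypot(fx, fy)
    ang = np.arctan2(fy, fx)[mag > min_mag]          # keep only moving pixels
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    return hist / (hist.sum() + 1e-9)

# Usage with two hypothetical consecutive grayscale frames:
# f0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
# f1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
# print(flow_orientation_histogram(f0, f1, box=(100, 50, 40, 80)))
```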


systems man and cybernetics | 2016

A Window-Based Classifier for Automatic Video-Based Reidentification

Dario Figueira; Matteo Taiana; Jacinto C. Nascimento; Alexandre Bernardino

The vast quantity of visual data generated by the rapid expansion of large-scale distributed multi-camera networks makes automated person detection and re-identification (RE-ID) essential components of modern surveillance systems. However, the integration of automated person detection and RE-ID algorithms is not without problems, and the errors arising in this integration must be measured (e.g., detection failures that may hamper RE-ID performance). In this paper, we present a window-based classifier, built on a recently proposed architecture for the integration of pedestrian detectors and RE-ID algorithms, that takes the output of any bounding-box RE-ID classifier and exploits the temporal continuity of persons in video streams. We evaluate our contributions on a recently proposed dataset featuring 13 high-definition cameras and over 80 people, acquired during 30 minutes at rush hour in an office-space scenario. We expect our contributions to drive research in integrated pedestrian detection and RE-ID systems, bringing them closer to practical applications.
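The temporal-continuity idea can be illustrated by majority-voting the per-frame outputs of any bounding-box RE-ID classifier over a sliding window; this is a minimal stand-in, not the classifier proposed in the paper.

```python
from collections import Counter, deque

def windowed_identity(frame_labels, window=5):
    """Smooth per-frame re-identification decisions over a sliding temporal window.

    frame_labels: identity labels produced frame-by-frame by any bounding-box
    RE-ID classifier for one tracked person.  Each output is the majority label
    inside the last 'window' frames, a simple stand-in for the window-based
    classifier described in the paper.
    """
    recent = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        recent.append(label)
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

# A spurious single-frame error ('B') is suppressed by the temporal window.
print(windowed_identity(["A", "A", "B", "A", "A", "A"]))
```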


International Journal of Pattern Recognition and Artificial Intelligence | 2015

People and Mobile Robot Classification Through Spatio-Temporal Analysis of Optical Flow

Plinio Moreno; Dario Figueira; Alexandre Bernardino; José Santos-Victor

The goal of this work is to distinguish between humans and robots in a mixed human-robot environment. We analyze the spatio-temporal patterns of optical-flow-based features along several frames. We consider the Histogram of Optical Flow (HOF) and the Motion Boundary Histogram (MBH) features, which have shown good results on people detection. The spatio-temporal patterns are composed of groups of feature components that have similar values on previous frames. The groups of features are fed into the FuzzyBoost algorithm, which at each round selects the spatio-temporal pattern (i.e. feature set) having the lowest classification error. The search for patterns is guided by grouping feature dimensions, considering three algorithms: (a) similarity of weights from dimensionality-reduction matrices, (b) Boost Feature Subset Selection (BFSS) and (c) Sequential Floating Feature Selection (SFSS), which avoid the brute-force approach. The similarity weights are computed by Multiple Metric Learning for Large Margin Nearest Neighbor (MMLMNN), a linear dimensionality-reduction algorithm that provides a type of Mahalanobis metric [Weinberger and Saul, J. Mach. Learn. Res. 10 (2009) 207–244]. The experiments show that FuzzyBoost brings good generalization properties, better than GentleBoost, Support Vector Machines (SVM) with linear kernels, and SVM with Radial Basis Function (RBF) kernels. The classifier was implemented and tested in a real-time, multi-camera dynamic setting.
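For concreteness, the Motion Boundary Histogram mentioned above can be sketched as orientation histograms of the spatial gradients of each flow component, which suppresses constant (e.g. camera-induced) motion; the bin count and the absence of cell/block pooling are simplifications of the full descriptor.

```python
import numpy as np

def motion_boundary_histogram(flow, bins=8):
    """Motion Boundary Histogram (MBH) of a dense optical-flow field.

    flow: (H, W, 2) array of per-pixel flow vectors.  For each flow component
    the spatial gradient is taken and its orientations are histogrammed
    (magnitude-weighted), so uniform motion contributes nothing; the cell/block
    pooling of the full descriptor is omitted in this sketch.
    """
    hists = []
    for c in range(2):                                # x and y flow components
        gy, gx = np.gradient(flow[..., c])
        ang = np.arctan2(gy, gx).ravel()
        mag = np.hypot(gx, gy).ravel()
        hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
        hists.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(hists)

# Toy flow field: uniform translation plus a moving square, showing that the
# descriptor reacts to motion boundaries rather than to the global motion.
flow = np.ones((64, 64, 2))
flow[20:40, 20:40, 0] += 2.0
print(motion_boundary_histogram(flow).round(3))
```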

Collaboration


Dive into Dario Figueira's collaborations.

Top Co-Authors

Matteo Taiana (Instituto Superior Técnico)
Athira M. Nambiar (Instituto Superior Técnico)
Plinio Moreno (Instituto Superior Técnico)
Jonas Ruesch (Instituto Superior Técnico)
Rodrigo Ventura (Instituto Superior Técnico)
Abdolkarim Pahliani (Technical University of Lisbon)