Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gustavo Fernández is active.

Publications


Featured research published by Gustavo Fernández.


European Conference on Computer Vision | 2016

The Visual Object Tracking VOT2014 Challenge Results

Matej Kristan; Roman P. Pflugfelder; Aleš Leonardis; Jiri Matas; Luka Cehovin; Georg Nebehay; Tomas Vojir; Gustavo Fernández; Alan Lukezic; Aleksandar Dimitriev; Alfredo Petrosino; Amir Saffari; Bo Li; Bohyung Han; CherKeng Heng; Christophe Garcia; Dominik Pangersic; Gustav Häger; Fahad Shahbaz Khan; Franci Oven; Horst Bischof; Hyeonseob Nam; Jianke Zhu; Jijia Li; Jin Young Choi; Jin-Woo Choi; João F. Henriques; Joost van de Weijer; Jorge Batista; Karel Lebeda

Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow the developments in the field. One of the reasons is that there is a lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labeled per-frame with visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).
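As a rough illustration of how per-frame attribute labels and bounding-box annotations can drive such a comparison, the sketch below computes an overlap-based accuracy and a failure count for one sequence, restricted to frames carrying a given attribute. The (x, y, w, h) box format, the zero-overlap failure criterion and the function names are assumptions made for the example, not the VOT toolkit's actual interface.

```python
# Hedged sketch of an attribute-aware accuracy/robustness computation.
# Box format (x, y, w, h), the zero-overlap failure criterion and all
# names are assumptions for illustration, not the VOT toolkit API.

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def evaluate_sequence(predictions, ground_truth, frame_attributes, attribute):
    """Average overlap (accuracy) over frames labelled with `attribute`,
    plus the number of frames on which the tracker lost the target."""
    overlaps, failures = [], 0
    for pred, gt, attrs in zip(predictions, ground_truth, frame_attributes):
        overlap = iou(pred, gt)
        if attribute in attrs:
            overlaps.append(overlap)
        if overlap == 0.0:  # treat zero overlap as a tracking failure
            failures += 1
    accuracy = sum(overlaps) / len(overlaps) if overlaps else 0.0
    return accuracy, failures

# Toy usage: two frames labelled "occlusion"; the tracker drifts in frame 2.
preds = [(10, 10, 50, 50), (200, 200, 40, 40)]
gts = [(12, 11, 50, 50), (15, 15, 50, 50)]
attrs = [{"occlusion"}, {"occlusion"}]
print(evaluate_sequence(preds, gts, attrs, "occlusion"))  # (~0.44, 1)
```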


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

A Novel Performance Evaluation Methodology for Single-Target Trackers

Matej Kristan; Jiri Matas; Aleš Leonardis; Tomas Vojir; Roman P. Pflugfelder; Gustavo Fernández; Georg Nebehay; Fatih Porikli; Luka Cehovin

This paper addresses the problem of single-target tracker performance evaluation. We consider the performance measures, the dataset and the evaluation system to be the most important components of tracker evaluation and propose requirements for each of them. The requirements are the basis of a new evaluation methodology that aims at a simple and easily interpretable tracker comparison. The ranking-based methodology addresses tracker equivalence in terms of statistical significance and practical differences. A fully annotated dataset with per-frame labels for several visual attributes is introduced. The diversity of its visual properties is maximized in a novel way by clustering a large number of videos according to their visual attributes. This makes it the most sophisticatedly constructed and annotated dataset to date. A multi-platform evaluation system allowing easy integration of third-party trackers is presented as well. The proposed evaluation methodology was tested on the VOT2014 challenge on the new dataset and 38 trackers, making it the largest benchmark to date. Most of the tested trackers are indeed state-of-the-art since they outperform the standard baselines, resulting in a highly challenging benchmark. An exhaustive analysis of the dataset from the perspective of tracking difficulty is carried out. To facilitate tracker comparison, a new performance visualization technique is proposed.
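A toy sketch of the diversity-maximising idea described above: encode each sequence by how often each visual attribute is active, cluster the resulting profiles, and keep one representative per cluster. The attribute encoding, the use of scikit-learn's KMeans and the function names are assumptions made for illustration, not the construction procedure used for the dataset.

```python
# Hedged sketch: cluster sequences by their attribute profiles and keep one
# representative per cluster. Encoding and KMeans usage are assumptions.
import numpy as np
from sklearn.cluster import KMeans

ATTRIBUTES = ["occlusion", "illumination_change", "motion_change",
              "size_change", "camera_motion"]

def attribute_profile(per_frame_attrs):
    """Fraction of frames in which each attribute is active."""
    n = len(per_frame_attrs)
    return np.array([sum(a in frame for frame in per_frame_attrs) / n
                     for a in ATTRIBUTES])

def select_diverse_subset(sequences, k):
    """Cluster sequence profiles and return the index closest to each centroid."""
    profiles = np.stack([attribute_profile(seq) for seq in sequences])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles)
    chosen = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(profiles[members] - km.cluster_centers_[c], axis=1)
        chosen.append(int(members[np.argmin(dists)]))
    return chosen
```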


International Conference on Computer Vision | 2015

The Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge Results

Michael Felsberg; Amanda Berg; Gustav Häger; Jörgen Ahlberg; Matej Kristan; Jiri Matas; Aleš Leonardis; Luka Cehovin; Gustavo Fernández; Tomas Vojir; Georg Nebehay; Roman P. Pflugfelder

The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the following novelties: (i) the newly collected LTIR (Linköping TIR) dataset is used, (ii) the VOT2013 attributes are adapted to TIR data, (iii) the evaluation is performed using insights gained during VOT2013 and VOT2014 and is similar to VOT2015.


International Conference on Distributed Smart Cameras | 2013

Ella: Middleware for multi-camera surveillance in heterogeneous visual sensor networks

Bernhard Dieber; Jennifer Simonjan; Lukas Esterle; Bernhard Rinner; Georg Nebehay; Roman P. Pflugfelder; Gustavo Fernández

Despite significant interest in the research community, the development of multi-camera applications is still quite challenging. This paper presents Ella, a dedicated publish/subscribe middleware system that facilitates distribution, component reuse and communication for heterogeneous multi-camera applications. We present the key components of this middleware system and demonstrate its applicability based on an autonomous multi-camera person tracking application. Ella is able to run on resource-limited and heterogeneous visual sensor networks (VSNs). We present performance measurements on different hardware platforms as well as operating systems.
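The communication pattern Ella provides can be illustrated with a few lines of publish/subscribe plumbing; the broker class, method names and topic strings below are invented for the example and do not reflect Ella's actual interface or its handling of heterogeneous VSN nodes.

```python
# Minimal publish/subscribe broker illustrating the pattern only; this is not
# Ella's API, and the topic names are invented for the example.
from collections import defaultdict
from typing import Any, Callable

class Broker:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        """Register a callback that receives every message published on `topic`."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        """Deliver a message to all subscribers of `topic`."""
        for callback in self._subscribers[topic]:
            callback(message)

# A camera node publishes detections; a person-tracking node consumes them.
broker = Broker()
broker.subscribe("detections/cam1", lambda det: print("tracker received", det))
broker.publish("detections/cam1", {"bbox": (34, 80, 60, 120), "score": 0.9})
```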


International Conference on Intelligent Transportation Systems | 2008

Video based Traffic Congestion Prediction on an Embedded System

Holger Glasl; David Schreiber; Nikolaus Viertl; Stephan Veigl; Gustavo Fernández

In recent years, computer vision methods have been exploited in traffic surveillance systems to perform video image analysis, e.g. extracting statistical traffic information and detecting events. However, little work has been dedicated to the prediction of events, in particular of traffic congestion. This paper has two contributions: first, it presents an embedded computer vision system which collects traffic data, and second, it reports an innovative method for predicting traffic congestion. For the latter purpose, three traffic parameters are measured and analysed: average speed, vehicle density and the number of lane changes. The novelty of the current work resides in the use of lane changes to predict a traffic congestion. It is shown how the number of lane changes can be used to improve the prediction of a traffic congestion event several minutes before the congestion starts. The validity of the proposed method is tested using data from a real scenario, which were collected by the embedded computer vision system also presented in this work. The obtained results are discussed, along with possible future improvements and new research directions.
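A minimal sketch of how the three measured parameters might feed a congestion warning is given below; the thresholds and the simple conjunction rule are invented for illustration and are not the prediction method proposed in the paper.

```python
# Illustrative warning rule over the three measured parameters; the thresholds
# and the rule itself are assumptions, not the paper's prediction method.
from dataclasses import dataclass

@dataclass
class TrafficWindow:
    avg_speed_kmh: float         # mean vehicle speed in the observation window
    density_veh_per_km: float    # vehicles per kilometre of lane
    lane_changes_per_min: float  # lane-change events per minute

def congestion_warning(w: TrafficWindow) -> bool:
    """Warn when speed drops, density rises and lane-change activity spikes,
    roughly the combination of parameters the abstract describes."""
    return (w.avg_speed_kmh < 60.0
            and w.density_veh_per_km > 30.0
            and w.lane_changes_per_min > 4.0)

print(congestion_warning(TrafficWindow(45.0, 38.0, 6.5)))  # True
```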


International Conference on Intelligent Transportation Systems | 2008

Sensor Fusion on an Embedded System for Traffic Data Analysis - ETRADA-V System

Martin Litzenberger; Holger Glasl; Bernhard Kohn; Bernhard Schalko; Gustavo Fernández

This paper describes an embedded system for traffic flow analysis based on fused traffic information coming from two different sensors; it can serve as a test bed for traffic monitoring. Data are acquired by two sensor systems: an Embedded Traffic Data Sensor (TDS) and a CCTV camera. Video data acquired by the CCTV camera are transmitted to an embedded hardware platform, where image processing software detects and tracks objects with the final goal of event detection and event prediction. The system is tested in a real scenario. Experimental results are presented for the image processing analysis, i.e. object detection and object tracking, as well as for the statistical data inferred from it.
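As a rough illustration of the fusion step, the sketch below combines per-interval vehicle counts from the two sensors with a confidence-weighted average; the weights, the averaging rule and the function name are assumptions and not the fusion scheme of the ETRADA-V system.

```python
# Toy confidence-weighted fusion of two vehicle-count estimates; weights and
# the averaging rule are assumptions, not ETRADA-V's fusion scheme.
def fuse_counts(tds_count: float, video_count: float,
                tds_weight: float = 0.5, video_weight: float = 0.5) -> float:
    """Confidence-weighted average of two independent count estimates."""
    total = tds_weight + video_weight
    return (tds_weight * tds_count + video_weight * video_count) / total

# Example: the TDS reports 42 vehicles in an interval, the video tracker 38.
print(fuse_counts(42, 38, tds_weight=0.6, video_weight=0.4))  # 40.4
```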


European Conference on Computer Vision | 2016

The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results

Michael Felsberg; Matej Kristan; Aleš Leonardis; Roman P. Pflugfelder; Gustav Häger; Amanda Berg; Abdelrahman Eldesokey; Jörgen Ahlberg; Luka Cehovin; Tomáš Vojír̃; Alan Lukežič; Gustavo Fernández; Alfredo Petrosino; Álvaro García-Martín; Andres Solis Montero; Anton Varfolomieiev; Aykut Erdem; Bohyung Han; Chang-Ming Chang; Dawei Du; Erkut Erdem; Fahad Shahbaz Khan; Fatih Porikli; Fei Zhao; Filiz Bunyak; Francesco Battistone; Gao Zhu; Hongdong Li; Honggang Qi; Horst Bischof



Proceedings of the 10th International Conference on Distributed Smart Camera | 2016

Architecture for Dynamic Allocation of Computer Vision Tasks

Axel Weissenfeld; Andreas Opitz; Roman P. Pflugfelder; Gustavo Fernández

The use of a reconfigurable computer vision architecture for image processing tasks is an important and challenging application in real-time systems with limited resources. It is an emerging field, as new computing architectures are developed, new algorithms are proposed and users define new applications in surveillance. In this paper, a computer vision architecture capable of reconfiguring the processing chain of computer vision algorithms is summarised. The processing chain consists of multiple computer vision tasks, which can be distributed over various computing units. One key characteristic of the designed architecture is graceful degradation, which prevents the system from failing. This characteristic is achieved by distributing computer vision tasks to other nodes and parametrizing each task depending on the specified quality of service. Experiments using an object detector applied to a public dataset are presented.
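The graceful-degradation idea can be sketched as a greedy scheduler that places each task on the node with the most spare capacity and redistributes a failed node's tasks among the surviving nodes; the class names and the greedy policy below are illustrative assumptions, not the architecture's actual allocation algorithm.

```python
# Hedged sketch of load-aware task allocation with redistribution on failure;
# names and the greedy policy are assumptions, not the paper's scheduler.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: float                      # abstract processing budget
    tasks: dict = field(default_factory=dict)

    def load(self) -> float:
        return sum(self.tasks.values())

def assign(nodes, task_name: str, cost: float) -> "Node":
    """Place a task on the node with the largest remaining capacity."""
    best = max(nodes, key=lambda n: n.capacity - n.load())
    best.tasks[task_name] = cost
    return best

def handle_failure(nodes, failed: "Node") -> None:
    """Move a failed node's tasks to the remaining nodes (graceful degradation)."""
    survivors = [n for n in nodes if n is not failed]
    for task, cost in failed.tasks.items():
        assign(survivors, task, cost)
    failed.tasks.clear()

nodes = [Node("smart-camera", 1.0), Node("server", 4.0)]
assign(nodes, "object_detector", 2.0)
assign(nodes, "tracker", 0.5)
handle_failure(nodes, nodes[1])  # the server fails; its tasks move to the camera
print({n.name: n.tasks for n in nodes})
```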


Proceedings of the 19th Computer Vision Winter Workshop | 2014

The VOT2013 challenge: overview and additional results

Matej Kristan; Roman P. Pflugfelder; Aleš Leonardis; Jiri Matas; Fatih Porikli; Luka Cehovin; Georg Nebehay; Gustavo Fernández; Tomas Vojir


4th International Conference on Imaging for Crime Detection and Prevention (ICDP 2011) | 2011

GPGPU-accelerated visual search in large surveillance archives

Csaba Beleznai; Gustavo Fernández; Stephan Veigl; Bernhard Strobl

Collaboration


Dive into Gustavo Fernández's collaborations.

Top Co-Authors

Roman P. Pflugfelder, Austrian Institute of Technology
Luka Cehovin, University of Ljubljana
Georg Nebehay, Austrian Institute of Technology
Jiri Matas, Czech Technical University in Prague
Tomas Vojir, Czech Technical University in Prague
Fatih Porikli, Australian National University