
Publication


Featured research published by Roman P. Pflugfelder.


European Conference on Computer Vision | 2016

The Visual Object Tracking VOT2014 Challenge Results

Matej Kristan; Roman P. Pflugfelder; Aleš Leonardis; Jiri Matas; Luka Cehovin; Georg Nebehay; Tomas Vojir; Gustavo Fernández; Alan Lukezic; Aleksandar Dimitriev; Alfredo Petrosino; Amir Saffari; Bo Li; Bohyung Han; CherKeng Heng; Christophe Garcia; Dominik Pangersic; Gustav Häger; Fahad Shahbaz Khan; Franci Oven; Horst Bischof; Hyeonseob Nam; Jianke Zhu; Jijia Li; Jin Young Choi; Jin-Woo Choi; João F. Henriques; Joost van de Weijer; Jorge Batista; Karel Lebeda

Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow the developments in the field. One of the reasons is the lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers, as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labeled per-frame with visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

A Novel Performance Evaluation Methodology for Single-Target Trackers

Matej Kristan; Jiri Matas; Aleš Leonardis; Tomas Vojir; Roman P. Pflugfelder; Gustavo Fernández; Georg Nebehay; Fatih Porikli; Luka Cehovin

This paper addresses the problem of single-target tracker performance evaluation. We consider the performance measures, the dataset and the evaluation system to be the most important components of tracker evaluation and propose requirements for each of them. The requirements are the basis of a new evaluation methodology that aims at a simple and easily interpretable tracker comparison. The ranking-based methodology addresses tracker equivalence in terms of statistical significance and practical differences. A fully annotated dataset with per-frame annotations of several visual attributes is introduced. The diversity of its visual properties is maximized in a novel way by clustering a large number of videos according to their visual attributes, making it the most rigorously constructed and annotated dataset to date. A multi-platform evaluation system allowing easy integration of third-party trackers is presented as well. The proposed evaluation methodology was tested on the VOT2014 challenge on the new dataset and 38 trackers, making it the largest benchmark to date. Most of the tested trackers are indeed state-of-the-art, since they outperform the standard baselines, resulting in a highly challenging benchmark. An exhaustive analysis of the dataset from the perspective of tracking difficulty is carried out. To facilitate tracker comparison, a new performance visualization technique is proposed.
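The ranking step of such a methodology can be sketched in a few lines; a hypothetical NumPy simplification (function name, score layout and tie handling are illustrative assumptions; the paper additionally merges trackers whose differences are not statistically or practically significant, which is omitted here):

```python
import numpy as np

def average_ranks(scores):
    """scores: (n_sequences, n_trackers) performance per sequence
    (higher = better). Rank trackers on every sequence, then average
    the ranks so no single sequence dominates the comparison.
    Note: ties get arbitrary distinct ranks in this sketch."""
    order = (-scores).argsort(axis=1)   # best tracker first
    ranks = order.argsort(axis=1) + 1   # rank 1 = best
    return ranks.mean(axis=0)

# toy example: three trackers evaluated on two sequences
scores = np.array([[0.9, 0.5, 0.7],
                   [0.8, 0.6, 0.9]])
```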


Workshop on Applications of Computer Vision | 2014

Consensus-based matching and tracking of keypoints for object tracking

Georg Nebehay; Roman P. Pflugfelder

We propose a novel keypoint-based method for long-term model-free object tracking in a combined matching-and-tracking framework. In order to localise the object in every frame, each keypoint casts votes for the object center. As erroneous keypoints are hard to avoid, we employ a novel consensus-based scheme for outlier detection in the voting behaviour. To make this approach computationally feasible, we propose not to employ an accumulator space for votes, but rather to cluster votes directly in the image space. By transforming votes based on the current keypoint constellation, we account for changes of the object in scale and rotation. In contrast to competing approaches, we refrain from updating the appearance information, thus avoiding the danger of making errors. The use of fast keypoint detectors and binary descriptors allows our implementation to run in real-time. We demonstrate experimentally on a diverse dataset of 60 sequences that our method outperforms the state-of-the-art when high accuracy is required, and visualise these results by employing a variant of success plots.
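The voting-and-clustering idea above can be sketched as follows; this is a minimal, hypothetical NumPy simplification (the function name, the fixed `radius`, and the greedy clustering are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def consensus_center(keypoints, offsets, radius=20.0):
    """keypoints: (N, 2) current keypoint positions;
    offsets: (N, 2) keypoint-to-center offsets learned in the first frame.
    Each keypoint casts a vote for the object center; votes are clustered
    directly in image space, and keypoints whose votes fall outside the
    dominant cluster are rejected as outliers."""
    votes = keypoints + offsets                 # each keypoint's center vote
    # greedy clustering: for every vote, count how many votes agree with it
    d = np.linalg.norm(votes[:, None, :] - votes[None, :, :], axis=2)
    support = (d < radius).sum(axis=1)
    inliers = d[np.argmax(support)] < radius    # members of the largest cluster
    return votes[inliers].mean(axis=0), inliers
```

The mean of the inlier votes gives the consensus object center; the inlier mask identifies the keypoints still considered to lie on the object.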


Computer Vision and Pattern Recognition | 2015

Clustering of static-adaptive correspondences for deformable object tracking

Georg Nebehay; Roman P. Pflugfelder

We propose a novel method for establishing correspondences on deformable objects for single-target object tracking. The key ingredient is a dissimilarity measure between correspondences that takes into account their geometric compatibility, allowing us to separate inlier correspondences from outliers. We employ both static correspondences from the initial appearance of the object as well as adaptive correspondences from the previous frame to address the stability-plasticity dilemma. The geometric dissimilarity measure enables us to also disambiguate keypoints that are difficult to match. Based on these ideas we build a keypoint-based tracker that outputs rotated bounding boxes. We demonstrate in a rigorous empirical analysis that this tracker outperforms the state of the art on a dataset of 77 sequences.
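A toy sketch of the geometric dissimilarity between correspondences (hypothetical names and threshold; the paper's actual measure also handles rotation and combines static with adaptive correspondences): two correspondences are compatible when the distance between their keypoints is preserved from the initial to the current frame, and agglomerative clustering on this dissimilarity separates inliers from outliers.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def inlier_correspondences(src, dst, cutoff=10.0):
    """src, dst: (N, 2) matched keypoint positions in the initial and the
    current frame. Dissimilarity of correspondences i, j = how much the
    distance between their keypoints changed between the two frames."""
    D = np.abs(
        np.linalg.norm(src[:, None] - src[None, :], axis=2)
        - np.linalg.norm(dst[:, None] - dst[None, :], axis=2)
    )
    Z = linkage(squareform(D, checks=False), method='average')
    labels = fcluster(Z, t=cutoff, criterion='distance')
    return labels == np.bincount(labels).argmax()   # largest cluster = inliers
```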


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Localization and Trajectory Reconstruction in Surveillance Cameras with Nonoverlapping Views

Roman P. Pflugfelder; Horst Bischof

This paper proposes a method that localizes two surveillance cameras and simultaneously reconstructs object trajectories in 3D space. The method is an extension of the Direct Reference Plane method, which formulates the localization and the reconstruction as a system of linear equations that is globally solvable by Singular Value Decomposition. The method's assumptions are static synchronized cameras, smooth trajectories, known camera internal parameters, and a known rotation between the cameras in a world coordinate system. The paper describes the method in the context of self-calibrating cameras, where the internal parameters and the rotation can be jointly obtained assuming a man-made scene with orthogonal structures. Experiments with synthetic and real-image data show that the method can recover the camera centers with an error of less than half a meter, even in the presence of a 4-meter gap between the fields of view.
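The "globally solvable by Singular Value Decomposition" step refers to a standard technique: all projection constraints are stacked into one homogeneous linear system A x = 0, where x holds the unknown camera centers and 3D trajectory points, and the solution is the right singular vector of A with the smallest singular value. A minimal, hypothetical sketch on a toy system (not the paper's actual constraint matrix):

```python
import numpy as np

def nullspace_solution(A):
    """Unit vector x minimizing ||A x||: the right singular vector
    belonging to the smallest singular value of A."""
    _, _, Vh = np.linalg.svd(A)
    return Vh[-1]

# toy homogeneous system with a known one-dimensional null space
A = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
x = nullspace_solution(A)   # proportional to (1, 1, 1)
```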


International Conference on Computer Vision | 2015

The Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge Results

Michael Felsberg; Amanda Berg; Gustav Häger; Jörgen Ahlberg; Matej Kristan; Jiri Matas; Aleš Leonardis; Luka Cehovin; Gustavo Fernández; Tomas Vojir; Georg Nebehay; Roman P. Pflugfelder

The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the following novelties: (i) the newly collected LTIR (Linköping TIR) dataset is used, (ii) the VOT2013 attributes are adapted to TIR data, (iii) the evaluation is performed using insights gained during VOT2013 and VOT2014 and is similar to VOT2015.


Archive | 2009

Self-Calibrating Cameras in Video Surveillance

Roman P. Pflugfelder; Branislav Micusik

This chapter introduces intrinsic camera calibration with focus on video surveillance. Calibration of camera-specific parameters such as the focal length is mandatory for metrological problems, for example, measuring a vehicle’s speed. However, it also improves target classification, target detection, and target tracking. Geometry has become important in multi-camera systems that hand over and track objects across cameras. We present the basic geometric concept behind calibration and show which information about the cameras, the scene, and the images is necessary to realize automatic methods. Self-calibration will be a key technology for the practical deployment of future smart video cameras.


International Conference on Distributed Smart Cameras | 2013

Ella: Middleware for multi-camera surveillance in heterogeneous visual sensor networks

Bernhard Dieber; Jennifer Simonjan; Lukas Esterle; Bernhard Rinner; Georg Nebehay; Roman P. Pflugfelder; Gustavo Fernández

Despite significant interest in the research community, the development of multi-camera applications is still quite challenging. This paper presents Ella, a dedicated publish/subscribe middleware system that facilitates distribution, component reuse and communication for heterogeneous multi-camera applications. We present the key components of this middleware system and demonstrate its applicability based on an autonomous multi-camera person tracking application. Ella is able to run on resource-limited and heterogeneous visual sensor networks (VSNs). We present performance measurements on different hardware platforms as well as operating systems.


IEEE Computer | 2015

Self-Aware and Self-Expressive Camera Networks

Bernhard Rinner; Lukas Esterle; Jennifer Simonjan; Georg Nebehay; Roman P. Pflugfelder; Gustavo Fernández Domínguez; Peter R. Lewis

Smart cameras perform on-board image analysis, adapt their algorithms to changes in their environment, and collaborate with other networked cameras to analyze the dynamic behavior of objects. A proposed computational framework adopts the concepts of self-awareness and self-expression to more efficiently manage the complex tradeoffs among performance, flexibility, resources, and reliability. The Web extra at http://youtu.be/NKe31_OKLz4 is a video demonstrating CamSim, a smart-camera simulation tool that enables users to test self-adaptive and self-organizing smart-camera techniques without deploying a smart-camera network.


Computer Vision and Pattern Recognition | 2010

Localizing non-overlapping surveillance cameras under the L-Infinity norm

Branislav Micusik; Roman P. Pflugfelder

This paper presents a new approach to the problem of camera localization with non-overlapping camera views, particularly relevant for video surveillance systems. We show how to recast localization as quasi-convex optimization under the L-Infinity norm. Thereby we add the problem of reconstructing camera centers and 3D points for non-overlapping cameras with known internal parameters and known rotations to the class of known geometric problems solvable with Second Order Cone Programming. The 3D points are never seen by more than one camera, which makes the localization problem ill-posed. Therefore, the proposed approach employs temporal consistency of the 3D points to supply the missing constraints. Our formulation allows a global optimal solution to be found with a clear physical meaning of the cost function being minimized.
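The paper's formulation relies on quasi-convex reprojection-error constraints solved via Second Order Cone Programming. As a hypothetical, much simpler illustration of L-Infinity optimization (names and data are ours, not the paper's): minimizing the maximum residual of a linear system is already a linear program, min t subject to |a_i x - b_i| <= t for every constraint i.

```python
import numpy as np
from scipy.optimize import linprog

def linf_fit(A, b):
    """Minimize max_i |a_i x - b_i| over x, posed as the LP
    min t  s.t.  A x - t <= b  and  -A x - t <= -b."""
    m, n = A.shape
    c = np.r_[np.zeros(n), 1.0]                     # objective: minimize t
    A_ub = np.block([[A, -np.ones((m, 1))],
                     [-A, -np.ones((m, 1))]])
    b_ub = np.r_[b, -b]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1))  # x and t unbounded below
    return res.x[:n], res.x[-1]                     # solution, max residual
```

For scalar data b = (0, 1, 2) with unit coefficients, the L-Infinity fit is the midrange x = 1 with maximum residual t = 1, unlike the least-squares fit, which would also be 1 here but in general differs.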

Collaboration


Dive into Roman P. Pflugfelder's collaboration.

Top Co-Authors

- Georg Nebehay (Austrian Institute of Technology)
- Branislav Micusik (Austrian Institute of Technology)
- Gustavo Fernández (Austrian Institute of Technology)
- Bernhard Rinner (Alpen-Adria-Universität Klagenfurt)
- Luka Cehovin (University of Ljubljana)
- Horst Bischof (Graz University of Technology)
- Jennifer Simonjan (Alpen-Adria-Universität Klagenfurt)
- Lukas Esterle (Vienna University of Technology)