Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ondrej Miksik is active.

Publication


Featured research published by Ondrej Miksik.


computer vision and pattern recognition | 2016

Staple: Complementary Learners for Real-Time Tracking

Luca Bertinetto; Jack Valmadre; Stuart Golodetz; Ondrej Miksik; Philip H. S. Torr

Correlation Filter-based trackers have recently achieved excellent performance, showing great robustness to challenging situations exhibiting motion blur and illumination changes. However, since the model that they learn depends strongly on the spatial layout of the tracked object, they are notoriously sensitive to deformation. Models based on colour statistics have complementary traits: they cope well with variation in shape, but suffer when illumination is not consistent throughout a sequence. Moreover, colour distributions alone can be insufficiently discriminative. In this paper, we show that a simple tracker combining complementary cues in a ridge regression framework can operate faster than 80 FPS and outperform not only all entries in the popular VOT14 competition, but also recent and far more sophisticated trackers according to multiple benchmarks.
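The core fusion idea, blending a template-based correlation-filter response with a colour-histogram score, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function name, merge factor, and toy score maps are all illustrative assumptions:

```python
import numpy as np

def fuse_scores(cf_response, colour_score, merge_factor=0.3):
    """Linearly blend a correlation-filter response map with a
    per-pixel colour score map (both the same shape, roughly in [0, 1])."""
    return (1.0 - merge_factor) * cf_response + merge_factor * colour_score

# Toy example: the colour cue disambiguates when the CF response is flat.
cf = np.array([[0.2, 0.5, 0.5],
               [0.1, 0.4, 0.3]])
col = np.array([[0.1, 0.2, 0.9],
                [0.0, 0.1, 0.8]])
fused = fuse_scores(cf, col)
peak = np.unravel_index(np.argmax(fused), fused.shape)
```

Here the correlation-filter map alone has two tied maxima, and the colour cue breaks the tie; the tracked position would be read off `peak`.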


intelligent robots and systems | 2015

Incremental dense multi-modal 3D scene reconstruction

Ondrej Miksik; Yousef Amar; Vibhav Vineet; Patrick Pérez; Philip H. S. Torr

Acquiring reliable depth maps is an essential prerequisite for accurate and incremental 3D reconstruction used in a variety of robotics applications. Depth maps produced by affordable Kinect-like cameras have become a de-facto standard for indoor reconstruction and the driving force behind the success of many algorithms. However, Kinect-like cameras are less effective outdoors, where one should rely on other sensors. Often, a combination of a stereo camera and lidar is used, but the acquired data are processed in independent pipelines, which generally leads to sub-optimal performance since the two sensors suffer from different drawbacks. In this paper, we propose a probabilistic model that efficiently exploits the complementarity between different depth-sensing modalities for incremental dense scene reconstruction. Our model uses a piecewise-planarity prior, an assumption that is common to both indoor and outdoor scenes. We demonstrate the effectiveness of our approach on the KITTI dataset, and provide qualitative and quantitative results showing high-quality dense reconstruction of a number of scenes.
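The paper's probabilistic model is considerably richer (piecewise-planar prior, incremental updates), but the basic idea of weighting each depth modality by its reliability can be illustrated with a simple per-pixel inverse-variance fusion. The function name and the numbers below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def fuse_depths(d_stereo, var_stereo, d_lidar, var_lidar):
    """Maximum-likelihood fusion of two independent Gaussian depth
    estimates: each measurement is weighted by its inverse variance."""
    w_s = 1.0 / var_stereo
    w_l = 1.0 / var_lidar
    fused = (w_s * d_stereo + w_l * d_lidar) / (w_s + w_l)
    fused_var = 1.0 / (w_s + w_l)
    return fused, fused_var

# Toy values: stereo is noisier at range, lidar is more accurate.
d, v = fuse_depths(np.array([10.5]), np.array([0.5]),
                   np.array([10.0]), np.array([0.05]))
```

The fused estimate lands close to the more reliable lidar measurement, and the fused variance is smaller than either input's, which is what makes combining the modalities attractive.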


british machine vision conference | 2015

Joint Object-Material Category Segmentation from Audio-Visual Cues.

Anurag Arnab; Michael Sapienza; Stuart Golodetz; Julien P. C. Valentin; Ondrej Miksik; Shahram Izadi; Philip H. S. Torr

It is not always possible to recognise objects and infer material properties for a scene from visual cues alone, since objects can look visually similar whilst being made of very different materials. In this paper, we therefore present an approach that augments the available dense visual cues with sparse auditory cues in order to estimate dense object and material labels. Since estimates of object class and material properties are mutually informative, we optimise our multi-output labelling jointly using a random-field framework. We evaluate our system on a new dataset with paired visual and auditory data that we make publicly available. We demonstrate that this joint estimation of object and material labels significantly outperforms the estimation of either category in isolation.
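A toy illustration of why joint estimation can beat independent decisions (the labels, unary scores, and compatibility table are entirely made up; the paper uses a full random-field model over dense pixel labels, not a two-label table):

```python
import numpy as np

# Rows: objects, columns: materials. A mug is plausibly ceramic,
# a can is plausibly metal; the cross combinations are unlikely.
objects = ["mug", "can"]
materials = ["ceramic", "metal"]

obj_unary = np.array([0.55, 0.45])   # visual cue slightly favours "mug"
mat_unary = np.array([0.40, 0.60])   # auditory cue favours "metal"
compat = np.array([[1.0, 0.1],
                   [0.1, 1.0]])      # object-material compatibility

# Independent decisions ignore compatibility and can be inconsistent...
indep = (objects[int(np.argmax(obj_unary))],
         materials[int(np.argmax(mat_unary))])

# ...while the joint decision maximises unaries and compatibility together.
joint_score = obj_unary[:, None] * mat_unary[None, :] * compat
i, j = np.unravel_index(np.argmax(joint_score), joint_score.shape)
joint = (objects[i], materials[j])
```

Independent maximisation yields the implausible pair ("mug", "metal"), whereas the joint decision picks the mutually consistent ("can", "metal").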


european conference on computer vision | 2016

The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results

Michael Felsberg; Matej Kristan; Aleš Leonardis; Roman P. Pflugfelder; Gustav Häger; Amanda Berg; Abdelrahman Eldesokey; Jörgen Ahlberg; Luka Cehovin; Tomáš Vojíř; Alan Lukežič; Gustavo Fernández; Alfredo Petrosino; Álvaro García-Martín; Andres Solis Montero; Anton Varfolomieiev; Aykut Erdem; Bohyung Han; Chang-Ming Chang; Dawei Du; Erkut Erdem; Fahad Shahbaz Khan; Fatih Porikli; Fei Zhao; Filiz Bunyak; Francesco Battistone; Gao Zhu; Hongdong Li; Honggang Qi; Horst Bischof

The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the following novelties: (i) the newly collected LTIR (Linköping TIR) dataset is used, (ii) the VOT2013 attributes are adapted to TIR data, (iii) the evaluation is performed using insights gained during VOT2013 and VOT2014 and is similar to VOT2015.


european conference on computer vision | 2016

Coarse-to-fine Planar Regularization for Dense Monocular Depth Estimation

Stephan Liwicki; Christopher Zach; Ondrej Miksik; Philip H. S. Torr

Simultaneous localization and mapping (SLAM) using the whole image data is an appealing framework to address the shortcomings of sparse feature-based methods – in particular their frequent failures in textureless environments. Hence, direct methods bypassing the need for feature extraction and matching have recently become popular. Many of these methods operate by alternating between pose estimation and computing (semi-)dense depth maps, and are therefore not fully exploiting the advantages of joint optimization with respect to depth and pose. In this work, we propose a framework for monocular SLAM, and its local model in particular, which optimizes simultaneously over depth and pose. In addition to a planarity-enforcing smoothness regularizer for the depth, we also constrain the complexity of depth-map updates, which provides a natural way to avoid poor local minima and reduces the number of unknowns in the optimization. Starting from a holistic objective, we develop a method suitable for online and real-time monocular SLAM. We evaluate our method quantitatively in pose and depth on the TUM dataset, and qualitatively on our own video sequences.
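One simple form of planarity-enforcing regularizer penalizes second-order differences of the depth map, which vanish exactly when the depth is affine (planar) in each image direction. The sketch below is an illustrative assumption in that spirit, not the authors' actual objective:

```python
import numpy as np

def planarity_penalty(depth):
    """Sum of squared second differences along both image axes.
    Returns exactly zero for a depth map that is affine in x and y."""
    d2x = depth[:, 2:] - 2.0 * depth[:, 1:-1] + depth[:, :-2]
    d2y = depth[2:, :] - 2.0 * depth[1:-1, :] + depth[:-2, :]
    return float((d2x ** 2).sum() + (d2y ** 2).sum())

xs, ys = np.meshgrid(np.arange(5), np.arange(5))
plane = 2.0 + 0.5 * xs + 0.25 * ys   # a planar depth map
bumpy = plane + np.eye(5)            # add non-planar structure
```

A planar depth map incurs no penalty, while any deviation from planarity is penalized, so minimizing this term alongside a data term biases the depth toward piecewise-planar solutions.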


computer vision and pattern recognition | 2017

ROAM: A Rich Object Appearance Model with Application to Rotoscoping

Ondrej Miksik; Juan-Manuel Perez-Rua; Philip H. S. Torr; Patrick Pérez

Rotoscoping, the detailed delineation of scene elements through a video shot, is a painstaking task of tremendous importance in professional post-production pipelines. While pixel-wise segmentation techniques can help for this task, professional rotoscoping tools rely on parametric curves that offer artists much better interactive control over the definition, editing and manipulation of the segments of interest. Sticking to this prevalent rotoscoping paradigm, we propose a novel framework to capture and track the visual aspect of an arbitrary object in a scene, given a first closed outline of this object. This model combines a collection of local foreground/background appearance models spread along the outline, a global appearance model of the enclosed object and a set of distinctive foreground landmarks. The structure of this rich appearance model allows simple initialization, efficient iterative optimization with exact minimization at each step, and on-line adaptation in videos. We demonstrate qualitatively and quantitatively the merit of this framework through comparisons with tools based on either dynamic segmentation with a closed curve or pixel-wise binary labelling.


international conference on robotics and automation | 2015

Incremental dense semantic stereo fusion for large-scale semantic scene reconstruction

Vibhav Vineet; Ondrej Miksik; Morten Lidegaard; Matthias Nießner; Stuart Golodetz; Victor Adrian Prisacariu; Olaf Kähler; David W. Murray; Shahram Izadi; Patrick Pérez; Philip H. S. Torr


human factors in computing systems | 2015

The Semantic Paintbrush: Interactive 3D Mapping and Recognition in Large Outdoor Spaces

Ondrej Miksik; Vibhav Vineet; Morten Lidegaard; Ram Prasaath; Matthias Nießner; Stuart Golodetz; Stephen L. Hicks; Patrick Pérez; Shahram Izadi; Philip H. S. Torr


british machine vision conference | 2014

Distributed Non-convex ADMM-based inference in large-scale random fields.

Ondrej Miksik; Vibhav Vineet; Patrick Pérez; Philip H. S. Torr


arXiv: Artificial Intelligence | 2016

Playing Doom with SLAM-Augmented Deep Reinforcement Learning.

Shehroze Bhatti; Alban Desmaison; Ondrej Miksik; Nantas Nardelli; N. Siddharth; Philip H. S. Torr

Collaboration


Dive into Ondrej Miksik's collaborations.

Top Co-Authors

Vibhav Vineet

Oxford Brookes University

View shared research outputs