
Publication


Featured research published by Vladimir Tankovich.


International Conference on Computer Graphics and Interactive Techniques | 2016

Fusion4D: real-time performance capture of challenging scenes

Mingsong Dou; Sameh Khamis; Yury Degtyarev; Philip Lindsley Davidson; Sean Ryan Fanello; Adarsh Prakash Murthy Kowdle; Sergio Orts Escolano; Christoph Rhemann; David Kim; Jonathan Taylor; Pushmeet Kohli; Vladimir Tankovich; Shahram Izadi

We contribute a new pipeline for live multi-view performance capture, generating temporally coherent, high-quality reconstructions in real-time. Our algorithm supports both incremental reconstruction, improving the surface estimation over time, and parameterization of the nonrigid scene motion. Our approach is highly robust to both large frame-to-frame motion and topology changes, allowing us to reconstruct extremely challenging scenes. We demonstrate advantages over related real-time techniques that either deform an online-generated template or continually fuse depth data nonrigidly into a single reference model. Finally, we show geometric reconstruction results on par with offline methods that require orders of magnitude more processing time and many more RGBD cameras.


User Interface Software and Technology | 2016

Holoportation: Virtual 3D Teleportation in Real-time

Sergio Orts-Escolano; Christoph Rhemann; Sean Ryan Fanello; Wayne Chang; Adarsh Prakash Murthy Kowdle; Yury Degtyarev; David Kim; Philip Lindsley Davidson; Sameh Khamis; Mingsong Dou; Vladimir Tankovich; Charles T. Loop; Qin Cai; Philip A. Chou; Sarah Mennicken; Julien P. C. Valentin; Vivek Pradeep; Shenlong Wang; Sing Bing Kang; Pushmeet Kohli; Yuliya Lutchyn; Cem Keskin; Shahram Izadi

We present an end-to-end system for augmented and virtual reality telepresence, called Holoportation. Our system demonstrates high-quality, real-time 3D reconstructions of an entire space, including people, furniture and objects, using a set of new depth cameras. These 3D models can also be transmitted in real-time to remote users. This allows users wearing virtual or augmented reality displays to see, hear and interact with remote participants in 3D, almost as if they were present in the same physical space. From an audio-visual perspective, communicating and interacting with remote users edges closer to face-to-face communication. This paper describes the Holoportation technical system in full, its key interactive capabilities, the application scenarios it enables, and an initial qualitative study of using this new communication medium.


Computer Vision and Pattern Recognition | 2016

HyperDepth: Learning Depth from Structured Light without Matching

Sean Ryan Fanello; Christoph Rhemann; Vladimir Tankovich; Adarsh Kowdle; Sergio Orts Escolano; David Kim; Shahram Izadi

Structured light sensors are popular due to their robustness to untextured scenes and multipath. These systems triangulate depth by solving a correspondence problem between each camera and projector pixel. This is often framed as a local stereo matching task, correlating patches of pixels in the observed and reference image. However, this is computationally intensive, leading to reduced depth accuracy and framerate. We contribute an algorithm for solving this correspondence problem efficiently, without compromising depth accuracy. For the first time, this problem is cast as a classification-regression task, which we solve extremely efficiently using an ensemble of cascaded random forests. Our algorithm scales with the number of disparities, and each pixel can be processed independently and in parallel. No matching, or even access to the corresponding reference pattern, is required at runtime, and regressed labels are directly mapped to depth. Our GPU-based algorithm runs at 1 kHz for 1.3 MP input/output images, with a disparity error of 0.1 subpixels. We show a prototype high-framerate depth camera running at 375 Hz, useful for solving tracking-related problems. We demonstrate our algorithmic performance, creating high-resolution real-time depth maps that surpass the quality of current state-of-the-art depth technologies, highlighting quantization-free results with reduced holes, edge fattening and other stereo-based depth artifacts.
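The core idea — predict each pixel's disparity label directly from its patch, with no matching against the reference pattern at test time — can be illustrated with a deliberately simplified sketch. Everything below is a hypothetical toy, not the paper's implementation: a small ensemble of random ferns over pixel-pair features stands in for the cascaded random forests, a 1-D synthetic stripe pattern stands in for the projected dot pattern, and the classifier is trained for a single camera column (the paper trains per scanline).

```python
import random

# Toy HyperDepth-style sketch (hypothetical simplification): classify a
# pixel's disparity directly from its patch, so no reference-pattern
# matching is needed at runtime.
rng = random.Random(0)
PATTERN = [rng.randint(0, 255) for _ in range(64)]  # fixed projected stripe
HALF = 3  # patch radius -> 7-pixel patches

def patch(disparity, x):
    """Camera patch around column x when the surface shifts the pattern by `disparity`."""
    return [PATTERN[(x + off - disparity) % len(PATTERN)]
            for off in range(-HALF, HALF + 1)]

def fern_key(i, j, p):
    # Cheap pixel-pair feature: one comparison plus a coarse intensity bucket.
    return (p[i] > p[j], p[i] // 16)

def train_column(x, disparities, n_ferns=8):
    """Record, for one camera column, which disparities produce which fern keys."""
    ferns = []
    for _ in range(n_ferns):
        i, j = rng.randrange(2 * HALF + 1), rng.randrange(2 * HALF + 1)
        table = {}
        for d in disparities:
            table.setdefault(fern_key(i, j, patch(d, x)), []).append(d)
        ferns.append((i, j, table))
    return ferns

def classify(ferns, p):
    """Vote over ferns; the reference pattern is never touched here."""
    votes = {}
    for i, j, table in ferns:
        for d in table.get(fern_key(i, j, p), []):
            votes[d] = votes.get(d, 0) + 1
    return max(votes, key=votes.get)

ferns = train_column(30, range(16))
assert all(classify(ferns, patch(d, 30)) == d for d in range(16))
```

Because each pixel's label is predicted from its own patch alone, every pixel can be processed independently and in parallel — the property the abstract highlights as the source of the 1 kHz throughput.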


International Conference on Computer Graphics and Interactive Techniques | 2017

Motion2fusion: real-time volumetric performance capture

Mingsong Dou; Philip L. Davidson; Sean Ryan Fanello; Sameh Khamis; Adarsh Kowdle; Christoph Rhemann; Vladimir Tankovich; Shahram Izadi

We present Motion2Fusion, a state-of-the-art 360° performance capture system that enables real-time reconstruction of arbitrary non-rigid scenes. We provide three major contributions over prior work: 1) a new non-rigid fusion pipeline allowing far more faithful reconstruction of high-frequency geometric details, avoiding the over-smoothing and visual artifacts observed previously; 2) a high-speed pipeline coupled with a machine learning technique for 3D correspondence field estimation, reducing tracking errors and artifacts attributed to fast motions; 3) a backward and forward non-rigid alignment strategy that deals more robustly with topology changes while remaining free of scene priors. Our novel performance capture system demonstrates real-time results nearing a 3x speed-up over previous state-of-the-art work on the exact same GPU hardware. Extensive quantitative and qualitative comparisons show more precise geometric and texturing results, with fewer artifacts from fast motions or topology changes than prior art.


Computer Vision and Pattern Recognition | 2017

UltraStereo: Efficient Learning-Based Matching for Active Stereo Systems

Sean Ryan Fanello; Julien P. C. Valentin; Christoph Rhemann; Adarsh Kowdle; Vladimir Tankovich; Philip L. Davidson; Shahram Izadi

Efficient estimation of depth from pairs of stereo images is one of the core problems in computer vision. We efficiently solve the specialized problem of stereo matching under active illumination using a new learning-based algorithm. This type of active stereo, i.e. stereo matching where scene texture is augmented by an active light projector, is proving compelling for designing depth cameras, largely due to improved robustness when compared to time-of-flight or traditional structured light techniques. Our algorithm uses an unsupervised greedy optimization scheme that learns features that are discriminative for estimating correspondences in infrared images. The proposed method optimizes a series of sparse hyperplanes that are used at test time to remap all the image patches into a compact binary representation in O(1). The proposed algorithm is cast in a PatchMatch Stereo-like framework, producing depth maps at 500 Hz. In contrast to standard structured light methods, our approach generalizes to different scenes, does not require tedious per-camera calibration procedures, and is not adversely affected by interference from overlapping sensors. Extensive evaluations show we surpass the quality and overcome the limitations of current depth sensing technologies.
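The descriptor idea — project each patch onto a set of hyperplanes, keep only the signs to get a compact binary code, then match codes by Hamming distance — can be sketched as follows. This is a hypothetical toy under loud assumptions: random Gaussian hyperplanes stand in for the sparse hyperplanes the paper learns with its unsupervised greedy scheme, a brute-force scanline search stands in for the PatchMatch-style inference, and the "scene" is a synthetic 1-D row shifted by a constant disparity.

```python
import random

# Toy UltraStereo-style sketch (hypothetical simplification): binarize
# patches with hyperplane sign tests, then match by Hamming distance.
rng = random.Random(1)
W = 96       # image width
HALF = 4     # 9-pixel 1-D patches
K = 32       # bits per binary code

# A textured scene row seen by left and right cameras with a constant shift.
TRUE_D = 7
right_row = [rng.randint(0, 255) for _ in range(W)]
left_row = [right_row[(x - TRUE_D) % W] for x in range(W)]

hyperplanes = [[rng.gauss(0, 1) for _ in range(2 * HALF + 1)] for _ in range(K)]

def code(row, x):
    """Map the patch at column x to a K-bit code: one sign test per hyperplane."""
    p = [row[(x + off) % W] for off in range(-HALF, HALF + 1)]
    return tuple(sum(w * v for w, v in zip(h, p)) > 0 for h in hyperplanes)

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def disparity(x, max_d=16):
    """Pick the shift whose right-image code is closest in Hamming distance."""
    c = code(left_row, x)
    return min(range(max_d), key=lambda d: hamming(c, code(right_row, x - d)))

assert all(disparity(x) == TRUE_D for x in range(0, W, 6))
```

The payoff is that comparing two patches collapses to XOR-and-popcount on short bit strings, which is what makes the O(1) remapping and the very high matching throughput plausible; the learned (rather than random) hyperplanes in the paper are what make the codes discriminative on real infrared imagery.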


Archive | 2008

Techniques to perform relative ranking for search results

Vladimir Tankovich; Dmitriy Meyerzon; Michael J. Taylor; Stephen E. Robertson


Archive | 2008

Index Optimization for Ranking Using a Linear Model

Vladimir Tankovich; Dmitriy Meyerzon; Mihai Petriuc


Archive | 2011

Generating and Presenting Deep Links

Vladimir Tankovich; Victor Poznanski; Dmitriy Meyerzon


Archive | 2008

Search results ranking using editing distance and document information

Vladimir Tankovich; Hang Li; Dmitriy Meyerzon; Jun Xu


Archive | 2008

Document length as a static relevance feature for ranking search results

Vladimir Tankovich; Dmitriy Meyerzon; Michael J. Taylor

