David P. Simons
Adobe Systems
Publications
Featured research published by David P. Simons.
International Conference on Computer Graphics and Interactive Techniques | 2009
Xue Bai; Jue Wang; David P. Simons; Guillermo Sapiro
Although tremendous success has been achieved for interactive object cutout in still images, accurately extracting dynamic objects in video remains a very challenging problem. Previous video cutout systems suffer from two major limitations: (1) reliance on global statistics, and thus an inability to deal with complex and diverse scenes; and (2) treating segmentation as a global optimization, and thus the lack of a practical workflow that can guarantee convergence to the desired results. We present Video SnapCut, a robust video object cutout system that significantly advances the state of the art. In our system, segmentation is achieved by the collaboration of a set of local classifiers, each adaptively integrating multiple local image features. We show how this segmentation paradigm naturally supports local user edits and propagates them across time. The object cutout system is completed with a novel coherent video matting technique. A comprehensive evaluation and comparison demonstrates the effectiveness of the proposed system at achieving high-quality results, as well as its robustness against various types of inputs.
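The idea of segmentation by a collaboration of local classifiers can be sketched roughly as follows. This is an illustrative simplification, not the paper's implementation: each window here uses only a distance to hypothetical foreground/background mean colors, whereas SnapCut adaptively weights several local features, and the equal-weight blending stands in for the paper's confidence-weighted combination.

```python
import numpy as np

def local_classifier_probability(patch, fg_mean, bg_mean):
    """Score each pixel of a patch by its distance to hypothetical
    foreground/background mean colors (a stand-in for adaptively
    weighted local color, shape, and motion features)."""
    d_fg = np.linalg.norm(patch - fg_mean, axis=-1)
    d_bg = np.linalg.norm(patch - bg_mean, axis=-1)
    return d_bg / (d_fg + d_bg + 1e-8)  # near 1 where the pixel looks like foreground

def blend_local_windows(frame, windows):
    """Combine overlapping local classifier outputs into one probability
    map, weighting each window's vote equally (illustrative only)."""
    prob = np.zeros(frame.shape[:2])
    weight = np.zeros(frame.shape[:2])
    for (y0, y1, x0, x1), fg_mean, bg_mean in windows:
        p = local_classifier_probability(frame[y0:y1, x0:x1], fg_mean, bg_mean)
        prob[y0:y1, x0:x1] += p
        weight[y0:y1, x0:x1] += 1.0
    return np.divide(prob, weight, out=np.zeros_like(prob), where=weight > 0)
```

Because each window holds its own model, a user correction in one window only retrains that window's classifier, which is what makes local editing (and its propagation across frames) natural in this paradigm.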
Computer Vision / Computer Graphics Collaboration Techniques | 2011
Xue Bai; Jue Wang; David P. Simons
Extracting temporally coherent alpha mattes in video is an important but challenging problem in post-production. Previous video matting systems are highly sensitive to initial conditions and image noise, and thus cannot reliably produce stable alpha mattes without temporal jitter. In this paper we propose an improved video matting system with two new components: (1) an accurate trimap propagation mechanism for setting up the initial matting conditions in a temporally coherent way; and (2) a temporal matte filter that improves the temporal coherence of the mattes while maintaining the matte structures on individual frames. Experimental results show that, compared with previous methods, the two new components lead to alpha mattes with better temporal coherence.
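The tension the temporal matte filter resolves can be illustrated with a minimal sketch. This is not the paper's filter; it is a hypothetical stand-in that blends a frame's alpha toward its temporal neighbours only where the neighbours roughly agree, so flicker is damped while genuine per-frame matte structure is left untouched.

```python
import numpy as np

def temporal_matte_filter(alphas, t, strength=0.5, tolerance=0.2):
    """Illustrative temporal filter for alpha mattes: smooth frame t
    toward the average of frames t-1 and t+1, but only in regions where
    the neighbours agree with frame t (i.e., likely jitter, not motion).
    `strength` and `tolerance` are hypothetical knobs, not from the paper."""
    prev_a, cur, next_a = alphas[t - 1], alphas[t], alphas[t + 1]
    neighbour_avg = 0.5 * (prev_a + next_a)
    agree = np.abs(neighbour_avg - cur) < tolerance  # stable regions only
    out = cur.copy()
    out[agree] = (1 - strength) * cur[agree] + strength * neighbour_avg[agree]
    return out
```

Naive temporal averaging without the agreement mask would also blur legitimate matte changes (e.g., moving hair strands), which is exactly the structure-preservation problem the paper's filter addresses.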
ACM Transactions on Graphics | 2017
Jakub Fišer; Ondřej Jamriška; David P. Simons; Eli Shechtman; Jingwan Lu; Paul Asente; Michal Lukác; Daniel Sýkora
We introduce a novel approach to example-based stylization of portrait videos that preserves both the subject's identity and the visual richness of the input style exemplar. Unlike the current state of the art based on neural style transfer [Selim et al. 2016], our method performs non-parametric texture synthesis that retains more of the local textural details of the artistic exemplar and does not suffer from the image-warping artifacts caused by aligning the style exemplar with the target face. Our method allows the creation of videos with less than full temporal coherence [Ruder et al. 2016]. By introducing a controllable amount of temporal dynamics, it more closely approximates the appearance of real hand-painted animation, in which every frame was created independently. We demonstrate the practical utility of the proposed solution on a variety of style exemplars and target videos.
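The notion of a "controllable amount of temporal dynamics" can be pictured as a single knob between two extremes. The sketch below is purely illustrative, and both inputs are hypothetical stand-ins for the guided texture-synthesis outputs the paper actually produces.

```python
import numpy as np

def mix_temporal_dynamics(fresh_frame, advected_frame, k):
    """Illustrative knob for temporal dynamics: k = 0 reuses the
    stylization advected (motion-compensated) from the previous frame,
    giving full coherence; k = 1 takes an independent per-frame
    synthesis, giving the maximum hand-painted 'boil'. Intermediate k
    trades one against the other."""
    k = float(np.clip(k, 0.0, 1.0))
    return (1.0 - k) * advected_frame + k * fresh_frame
```

Setting k between the extremes is what lets the output flicker slightly, like a real painted animation, without dissolving into frame-to-frame noise.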
Archive | 2006
Daniel O'Donnell; James Acquavella; David P. Simons
Archive | 1996
David P. Simons; Scott S. Snibbe; Daniel M. Wilk
Archive | 2012
Xue Bai; Jue Wang; David P. Simons
Archive | 1996
David P. Simons; Scott S. Snibbe
Archive | 2001
Daniel O'Donnell; James Acquavella; David P. Simons
Archive | 1996
David F. Herbstman; Lazarus I. Long; David P. Simons
Archive | 2013
Jue Wang; David P. Simons; Daniel M. Wilk; Xue Bai