Daniel Sýkora
Czech Technical University in Prague
Publications
Featured research published by Daniel Sýkora.
Non-Photorealistic Animation and Rendering | 2004
Daniel Sýkora; Jan Buriánek; Jiří Žára
We present a novel color-by-example technique that combines image segmentation, patch-based sampling, and probabilistic reasoning. The method automates colorization when new color information is applied to an already designed black-and-white cartoon. Our technique is especially suitable for cartoons digitized from classical celluloid films, which were originally produced with a paper- or cel-based method. In this case the background is usually a static image and only the dynamic foreground needs to be colored frame by frame. We also assume that objects in the foreground layer consist of several clearly visible outlines that emphasize the shape of homogeneous regions.
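The color-by-example idea can be pictured with a toy sketch: match each region of a new frame to the most similar already-colored region of a reference frame and reuse its color. This is not the paper's algorithm (which uses patch-based sampling and probabilistic reasoning); the descriptor, function names, and data layout below are illustrative assumptions.

```python
# Toy sketch of color-by-example region matching. Each region is described by
# a crude descriptor (area, mean intensity); the new region inherits the color
# of the closest reference region. Purely illustrative, not the paper's method.

def descriptor(region):
    return (region["area"], region["mean_intensity"])

def transfer_colors(reference_regions, new_regions):
    colored = []
    for region in new_regions:
        best = min(
            reference_regions,
            key=lambda r: sum((a - b) ** 2 for a, b in zip(descriptor(r), descriptor(region))),
        )
        colored.append({**region, "color": best["color"]})
    return colored

reference = [{"area": 120, "mean_intensity": 40, "color": "skin"},
             {"area": 300, "mean_intensity": 200, "color": "shirt"}]
new_frame = [{"area": 310, "mean_intensity": 195},
             {"area": 115, "mean_intensity": 45}]
print([r["color"] for r in transfer_colors(reference, new_frame)])  # ['shirt', 'skin']
```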
Computer Graphics Forum | 2010
Daniel Sýkora; David Sedlacek; S. Jinchao; John Dingliana; Steven Collins
This paper presents a novel interactive approach for adding depth information to hand-drawn cartoon images and animations. In comparison to previous depth assignment techniques, our solution requires minimal user effort and enables the creation of consistent pop-ups in a matter of seconds. Inspired by perceptual studies, we formulate a custom-tailored optimization framework that tries to mimic the way a human reconstructs depth information from a single image. Its key advantage is that it completely avoids inputs requiring knowledge of absolute depth and instead uses a set of sparse depth (in)equalities that are much easier to specify. Since these constraints lead to a quadratic programming formulation that is time-consuming to evaluate, we propose a simple approximate algorithm yielding similar results with much lower computational overhead. We demonstrate its usefulness in the context of a cartoon animation production pipeline, including applications such as enhancement, registration, composition, 3D modelling, and stereoscopic display.
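To illustrate the flavor of working with sparse depth (in)equalities, the toy sketch below resolves "region a is in front of region b" constraints by simple relaxation. It is not the paper's quadratic-programming formulation or its approximate solver; the function name and constraint encoding are assumptions made for the example.

```python
# Toy sketch: assign relative depth values to regions from sparse ordering
# constraints of the form "region a lies in front of region b", by repeatedly
# pushing violated constraints until a consistent ordering is reached.

def assign_depths(num_regions, in_front_of, iterations=100):
    """in_front_of: list of (a, b) pairs meaning depth[a] >= depth[b] + 1."""
    depth = [0.0] * num_regions
    for _ in range(iterations):
        changed = False
        for a, b in in_front_of:
            if depth[a] < depth[b] + 1.0:
                depth[a] = depth[b] + 1.0   # push region a in front of region b
                changed = True
        if not changed:
            break
    return depth

# Example: torso (0) behind head (1), head behind hand (2)
print(assign_depths(3, [(1, 0), (2, 1)]))   # -> [0.0, 1.0, 2.0]
```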
Non-Photorealistic Animation and Rendering | 2009
Daniel Sýkora; John Dingliana; Steven Collins
We present a new approach to deformable image registration suitable for articulated images such as hand-drawn cartoon characters and human postures. For this type of data, state-of-the-art techniques typically yield undesirable results. We propose a novel geometrically motivated iterative scheme in which point movements are decoupled from shape consistency. By combining locally optimal block matching with as-rigid-as-possible shape regularization, our algorithm can register images undergoing large free-form deformations and appearance variations. We demonstrate its practical usability in various challenging tasks performed in the cartoon animation production pipeline, including unsupervised inbetweening, example-based shape deformation, auto-painting, editing, and motion retargeting.
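The following sketch illustrates only the rigidity-regularization half of such an iteration: positions proposed by block matching are pulled back toward the best rigid fit of the rest configuration (a standard Kabsch/Procrustes solve). The paper's actual scheme and energy differ; the function names and blending weight are illustrative assumptions.

```python
import numpy as np

# Toy sketch of "match, then regularize": after block matching proposes new
# positions for control points, blend them with their best rigid
# (rotation + translation) fit to keep the shape nearly rigid.

def best_rigid_fit(rest, moved):
    """Least-squares rotation R and translation t mapping rest -> moved (Kabsch)."""
    c_rest, c_moved = rest.mean(axis=0), moved.mean(axis=0)
    H = (rest - c_rest).T @ (moved - c_moved)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_moved - R @ c_rest
    return R, t

def regularize(rest, proposed, rigidity=0.5):
    """Blend block-matching proposals with their rigid fit."""
    R, t = best_rigid_fit(rest, proposed)
    rigid = rest @ R.T + t
    return (1.0 - rigidity) * proposed + rigidity * rigid

rest = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
proposed = rest + np.array([[0.05, 0.0], [0.3, 0.1], [0.0, 0.0], [0.0, 0.02]])
print(regularize(rest, proposed))
```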
ACM Transactions on Graphics | 2014
Daniel Sýkora; Ladislav Kavan; Martin Čadík; Ondřej Jamriška; Alec Jacobson; Brian Whited; Maryann Simmons; Olga Sorkine-Hornung
We present a new approach for generating global illumination renderings of hand-drawn characters using only a small set of simple annotations. Our system exploits the concept of bas-relief sculptures, making it possible to generate 3D proxies suitable for rendering without requiring side-views or extensive user input. We formulate an optimization process that automatically constructs approximate geometry sufficient to evoke the impression of a consistent 3D shape. The resulting renders provide the richer stylization capabilities of 3D global illumination while still retaining the 2D hand-drawn look-and-feel. We demonstrate our approach on a varied set of hand-drawn images and animations, showing that even in comparison to ground-truth renderings of full 3D objects, our bas-relief approximation is able to produce convincing global illumination effects, including self-shadowing, glossy reflections, and diffuse color bleeding.
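As a loose illustration of turning a flat drawing into relief-like geometry, the sketch below inflates a binary region mask into a smooth height field with a Poisson-style relaxation. This is a generic inflation, not the paper's annotation-driven bas-relief optimization; all names and parameters are assumptions.

```python
import numpy as np

# Toy inflation of a drawn region into a smooth height field: Jacobi-style
# relaxation with a constant source term inside the mask produces a
# relief-like bulge, with height clamped to zero outside the region.

def inflate(mask, iterations=200, source=1.0):
    h = np.zeros_like(mask, dtype=float)
    for _ in range(iterations):
        avg = 0.25 * (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
                      np.roll(h, 1, 1) + np.roll(h, -1, 1))
        h = np.where(mask, avg + source, 0.0)
    return h

mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True            # a square "character" region
relief = inflate(mask)
print(round(relief.max(), 1))      # peak height of the inflated relief
```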
Computer Graphics Forum | 2009
Daniel Sýkora; John Dingliana; Steven Collins
In this paper we present LazyBrush, a novel interactive tool for painting hand-made cartoon drawings and animations. Its key advantages are simplicity and flexibility. In contrast to previous custom-tailored approaches [SBv05, QWH06], LazyBrush does not rely on style-specific features such as homogeneous regions or pattern continuity, yet still requires comparable or even less manual effort for a broad class of drawing styles. In addition, it is not sensitive to imprecise placement of color strokes, which makes painting less tedious and brings significant time savings in the context of cartoon animation. LazyBrush stems from a requirements analysis carried out with professional ink-and-paint illustrators, who established a list of useful features for an ideal painting tool. We incorporate this list into an optimization framework leading to a variant of the Potts energy with several interesting theoretical properties. We show how to minimize it efficiently and demonstrate its usefulness in various practical scenarios, including the ink-and-paint production pipeline.
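For readers unfamiliar with Potts-type energies, the generic form is shown below: the data term scores assigning a label (color) to a pixel, and the pairwise term penalizes label changes between neighbors. LazyBrush's specific data and smoothness terms differ, so this shows only the family of energies, not the paper's exact formulation.

```latex
% Generic Potts-type labeling energy (illustrative; not LazyBrush's exact terms).
% D_p(\ell_p): data cost of giving pixel p the label \ell_p (e.g., derived from color scribbles).
% The pairwise term pays w_{pq} whenever neighboring pixels p, q receive different labels.
E(\ell) = \sum_{p} D_p(\ell_p) + \lambda \sum_{(p,q) \in \mathcal{N}} w_{pq}\,[\ell_p \neq \ell_q]
```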
Image and Vision Computing | 2005
Daniel Sýkora; Jan Buriánek; Jiří Žára
We introduce a novel colorization framework for old black-and-white cartoons originally produced with cel- or paper-based technology. In this case, the dynamic part of the scene is represented by a set of outlined homogeneous regions that superimpose the static background. To reduce the large amount of manual intervention, we combine unsupervised image segmentation, background reconstruction, and structural prediction. Unlike most previous approaches, which operate only with hue and saturation, our system allows the user to specify the brightness of applied colors. We also present simple but effective color modulation, composition, and dust-spot removal techniques able to produce color images of broadcast quality without additional user intervention.
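The brightness-aware color application mentioned above can be pictured with a toy modulation rule: scale a chosen flat color by the ratio of the observed grayscale intensity to a reference intensity. This is only a conceptual stand-in, not the paper's color modulation formula; names and values are illustrative.

```python
# Toy brightness-preserving color modulation: darker grayscale pixels receive
# a proportionally darker shade of the chosen color.
def modulate(gray_value, color_rgb, reference_gray):
    scale = gray_value / max(reference_gray, 1e-6)
    return tuple(min(255, int(c * scale)) for c in color_rgb)

print(modulate(90, (200, 120, 40), reference_gray=180))  # darker shade of the same hue
```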
Computer Graphics Forum | 2012
Gioacchino Noris; Daniel Sýkora; Ariel Shamir; Stelian Coros; Brian Whited; Maryann Simmons; Alexander Hornung; Markus H. Gross; Robert W. Sumner
We present ‘Smart Scribbles’—a new scribble‐based interface for user‐guided segmentation of digital sketchy drawings. In contrast to previous approaches based on simple selection strategies, Smart Scribbles exploits richer geometric and temporal information, resulting in a more intuitive segmentation interface. We introduce a novel energy minimization formulation in which both geometric and temporal information from digital input devices is used to define stroke‐to‐stroke and scribble‐to‐stroke relationships. Although the minimization of this energy is, in general, an NP‐hard problem, we use a simple heuristic that leads to a good approximation and permits an interactive system able to produce accurate labellings even for cluttered sketchy drawings. We demonstrate the power of our technique in several practical scenarios such as sketch editing, as‐rigid‐as‐possible deformation and registration, and on‐the‐fly labelling based on pre‐classified guidelines.
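A greedy caricature of scribble-to-stroke assignment combining geometric and temporal cues: each stroke takes the label of the scribble with the smallest weighted space-time distance. The paper instead minimizes a global energy with a heuristic; the weights, fields, and function below are assumptions for illustration.

```python
# Toy labelling of sketch strokes from user scribbles using both spatial
# proximity and drawing-time proximity. Greedy per-stroke assignment only;
# not the paper's energy minimization.

def label_strokes(strokes, scribbles, w_space=1.0, w_time=0.5):
    labels = []
    for s in strokes:
        best = min(
            scribbles,
            key=lambda c: w_space * ((s["x"] - c["x"]) ** 2 + (s["y"] - c["y"]) ** 2)
                        + w_time * (s["t"] - c["t"]) ** 2,
        )
        labels.append(best["label"])
    return labels

strokes = [{"x": 1.0, "y": 1.0, "t": 0.0}, {"x": 5.0, "y": 5.0, "t": 9.0}]
scribbles = [{"x": 0.0, "y": 0.0, "t": 0.0, "label": "hair"},
             {"x": 6.0, "y": 6.0, "t": 10.0, "label": "hat"}]
print(label_strokes(strokes, scribbles))  # ['hair', 'hat']
```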
International Conference on Computer Graphics and Interactive Techniques | 2016
Jakub Fišer; Ondřej Jamriška; Michal Lukáč; Eli Shechtman; Paul Asente; Jingwan Lu; Daniel Sýkora
We present an approach to example-based stylization of 3D renderings that better preserves the rich expressiveness of hand-created artwork. Unlike previous techniques, which are mainly guided by colors and normals, our approach is based on light propagation in the scene. This novel type of guidance can distinguish among context-dependent illumination effects, for which artists typically use different stylization techniques, and delivers a look closer to realistic artwork. In addition, we demonstrate that the current state of the art in guided texture synthesis produces artifacts that can significantly decrease the fidelity of the synthesized imagery, and we propose an improved algorithm that alleviates them. Finally, we demonstrate our method's effectiveness on a variety of scenes and styles, in applications such as interactive shading study and autocompletion.
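A minimal per-pixel caricature of guidance-driven synthesis: for each target pixel, pick the source pixel with the closest guidance value (e.g., an illumination channel) and copy the artist's style color from that location. The paper's method is patch-based with multiple light-propagation channels and an improved synthesis algorithm; everything below is a simplified assumption.

```python
import numpy as np

# Toy guided style transfer: nearest-neighbor lookup in a single guidance
# channel, copying the exemplar's style color. Conceptual sketch only.

def guided_transfer(src_guide, src_style, tgt_guide):
    flat_guide = src_guide.ravel()
    flat_style = src_style.reshape(-1, src_style.shape[-1])
    out = np.empty(tgt_guide.shape + (src_style.shape[-1],), dtype=src_style.dtype)
    for idx, g in np.ndenumerate(tgt_guide):
        nearest = np.argmin(np.abs(flat_guide - g))
        out[idx] = flat_style[nearest]
    return out

src_guide = np.linspace(0.0, 1.0, 16).reshape(4, 4)                     # illumination of the exemplar
src_style = np.stack([src_guide * 255] * 3, axis=-1).astype(np.uint8)   # artist exemplar (grayscale here)
tgt_guide = np.random.rand(3, 3)                                        # illumination of the new scene
print(guided_transfer(src_guide, src_style, tgt_guide).shape)           # (3, 3, 3)
```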
Non-Photorealistic Animation and Rendering | 2011
Daniel Sýkora; Mirela Ben-Chen; Martin Čadík; Brian Whited; Maryann Simmons
We present a novel and practical texture mapping algorithm for hand-drawn cartoons that allows the production of visually rich animations with minimal user effort. Unlike previous techniques, our approach works entirely in the 2D domain and does not require the knowledge or creation of a 3D proxy model. Inspired by the fact that the human visual system tends to focus on the most salient features of a scene, which for hand-drawn cartoons we observe to be the contours rather than the interiors of regions, we can create the illusion of temporally coherent animation using only rough 2D image registration. This key observation allows us to design a simple yet effective algorithm that significantly reduces the amount of manual labor required to add visually complex detail to an animation, thus enabling efficient cartoon texturing for computer-assisted animation production pipelines. We demonstrate our technique on a variety of input animations and provide examples of post-processing operations that can be applied to simulate 3D-like effects entirely in the 2D domain.
International Conference on Computer Graphics and Interactive Techniques | 2013
Michal Lukáč; Jakub Fišer; Jean-Charles Bazin; Ondřej Jamriška; Alexander Sorkine-Hornung; Daniel Sýkora
In this paper we propose a reinterpretation of the brush and fill tools for digital image painting. The core idea is to provide an intuitive approach that allows users to paint in the visual style of arbitrary example images. Rather than a static library of colors, brushes, or fill patterns, we offer users entire images as their palette, from which they can select arbitrary contours or textures as their brush or fill tool in their own creations. Compared to previous example-based techniques related to the painting-by-numbers paradigm, we propose a new strategy in which users generate salient texture boundaries with our randomized graph-traversal algorithm and apply a content-aware fill to transfer textures into the delimited regions. This workflow allows users of our system to intuitively create visually appealing images that better preserve the visual richness and fluidity of arbitrary example images. We demonstrate the potential of our approach in various applications including interactive image creation, editing, and vector image stylization.