Publication


Featured research published by Paul Asente.


International Conference on Computer Graphics and Interactive Techniques | 1988

An overview of the X toolkit

Joel McCormack; Paul Asente

The X11 Window System defines a network protocol [6] for communication between a graphics server and an application. The X library [3] provides a procedural interface to the protocol. The X toolkit [4] is an object-oriented construction kit built on top of the X library. The toolkit is used to write user interface components (“widgets”), to organize a set of widget instances into a complete user interface, and to link a user interface with the functionality provided by an application. This paper describes the capabilities and structure of the X toolkit from three viewpoints: application developer, widget writer, and application user. We discuss the toolkit's mechanisms to address inefficiencies caused by the separation of application and server, and by the extensive user configurability of toolkit-based applications. We point out some drawbacks to using the toolkit, and briefly describe the tools being developed to overcome these problems.


International Conference on Computer Graphics and Interactive Techniques | 2016

StyLit: illumination-guided example-based stylization of 3D renderings

Jakub Fišer; Ondřej Jamriška; Michal Lukác; Eli Shechtman; Paul Asente; Jingwan Lu; Daniel Sýkora

We present an approach to example-based stylization of 3D renderings that better preserves the rich expressiveness of hand-created artwork. Unlike previous techniques, which are mainly guided by colors and normals, our approach is based on light propagation in the scene. This novel type of guidance can distinguish among context-dependent illumination effects, for which artists typically use different stylization techniques, and delivers a look closer to realistic artwork. In addition, we demonstrate that the current state of the art in guided texture synthesis produces artifacts that can significantly decrease the fidelity of the synthesized imagery, and propose an improved algorithm that alleviates them. Finally, we demonstrate our method's effectiveness on a variety of scenes and styles, in applications such as interactive shading studies and autocompletion.
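
The guided-synthesis core can be illustrated as a brute-force nearest-neighbor search: every pixel carries guidance channels (the paper derives these from light propagation; any per-pixel render buffers serve for illustration), and each target pixel copies the style color of the source pixel whose guidance neighborhood matches best. The array shapes and the exhaustive search below are illustrative assumptions, not the authors' optimized algorithm:

```python
# A brute-force sketch of guidance-driven synthesis (illustrative only).
import numpy as np

def stylize(target_guide, source_guide, source_style, patch=5):
    """Copy, for each target pixel, the style color of the source pixel
    whose guidance neighborhood is most similar (sum of squared errors)."""
    r = patch // 2
    h, w = target_guide.shape[:2]
    sh, sw = source_guide.shape[:2]
    tg = np.pad(target_guide.astype(float), ((r, r), (r, r), (0, 0)), mode="edge")
    sg = np.pad(source_guide.astype(float), ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros((h, w, source_style.shape[2]), dtype=source_style.dtype)
    for y in range(h):
        for x in range(w):
            t = tg[y:y + patch, x:x + patch]
            best, best_err = (0, 0), np.inf
            for sy in range(sh):        # exhaustive search, for clarity only
                for sx in range(sw):
                    err = np.sum((t - sg[sy:sy + patch, sx:sx + patch]) ** 2)
                    if err < best_err:
                        best_err, best = err, (sy, sx)
            out[y, x] = source_style[best]
    return out
```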


User Interface Software and Technology | 2015

Procedural Modeling Using Autoencoder Networks

Mehmet Ersin Yumer; Paul Asente; Radomir Mech; Levent Burak Kara

Procedural modeling systems allow users to create high quality content through parametric, conditional or stochastic rule sets. While such approaches create an abstraction layer by freeing the user from direct geometry editing, the nonlinear nature and the high number of parameters associated with such design spaces result in arduous modeling experiences for non-expert users. We propose a method to enable intuitive exploration of such high dimensional procedural modeling spaces within a lower dimensional space learned through autoencoder network training. Our method automatically generates a representative training dataset from the procedural modeling rule set based on shape similarity features. We then leverage the samples in this dataset to train an autoencoder neural network, while also structuring the learned lower dimensional space for continuous exploration with respect to shape features. We demonstrate the efficacy of our method with user studies in which designers create content more than ten times faster with our system than with the classic procedural modeling interface.
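
A minimal sketch of the idea, not the authors' system: sample the procedural rule set, train an autoencoder on the resulting vectors, and let the user explore the low-dimensional latent space, decoding each latent position back to a full parameter set. The dimensions, architecture, and random stand-in dataset are all assumptions:

```python
# A tiny stand-in for the paper's pipeline (dimensions and data are assumed).
import torch
import torch.nn as nn

n_params, latent_dim = 32, 2   # rule-set parameters; exploration space

encoder = nn.Sequential(nn.Linear(n_params, 16), nn.ReLU(),
                        nn.Linear(16, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                        nn.Linear(16, n_params))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)

# Random vectors stand in for the shape-similarity-based training dataset.
samples = torch.rand(4096, n_params)
for _ in range(200):
    recon = decoder(encoder(samples))
    loss = nn.functional.mse_loss(recon, samples)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Exploration: a 2D slider position decodes to a complete parameter set.
with torch.no_grad():
    params = decoder(torch.tensor([[0.3, -0.7]]))
```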


International Conference on Computer Graphics and Interactive Techniques | 2015

LazyFluids: appearance transfer for fluid animations

Ondřej Jamriška; Jakub Fišer; Paul Asente; Jingwan Lu; Eli Shechtman; Daniel Sýkora

In this paper we present a novel approach to appearance transfer for fluid animations based on flow-guided texture synthesis. In contrast to common practice, where pre-captured sets of fluid elements are combined in order to achieve the desired motion and look, we bring the possibility of fine-tuning motion properties in advance using CG techniques and then transferring the desired look from a selected appearance exemplar. We demonstrate that such a practical workflow cannot simply be implemented using current state-of-the-art techniques, analyze the main obstacles, and propose a solution to resolve them. In addition, we extend the algorithm to allow for synthesis with rich boundary effects and video exemplars. Finally, we present numerous results that demonstrate the versatility of the proposed approach.
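
To see what "flow-guided" means here, a toy sketch (not the paper's synthesis): advect per-pixel exemplar coordinates along the fluid's velocity field and resample the appearance exemplar each frame. The real method re-synthesizes patches to counteract the distortion this naive advection accumulates and handles boundary effects; everything below is an illustrative assumption:

```python
# Naive flow-guided lookup (illustrative; accumulates distortion over time).
import numpy as np

def advect_coords(coords, flow):
    """coords: (h, w, 2) exemplar lookup positions; flow: (h, w, 2) velocity.
    Backward advection: each pixel inherits the coords of where it came from."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_y = np.clip(ys - flow[..., 1], 0, h - 1).astype(int)
    src_x = np.clip(xs - flow[..., 0], 0, w - 1).astype(int)
    return coords[src_y, src_x]

def render(coords, exemplar):
    """Resample the appearance exemplar at the advected coordinates."""
    cy = np.clip(coords[..., 1], 0, exemplar.shape[0] - 1).astype(int)
    cx = np.clip(coords[..., 0], 0, exemplar.shape[1] - 1).astype(int)
    return exemplar[cy, cx]
```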


International Conference on Computer Graphics and Interactive Techniques | 2005

Dynamic planar map illustration

Paul Asente; Mike Schuster; Teri Pettit

There are many types of illustrations that are easier to create in planar-map-based illustration systems than in the more common stacking-based systems. One weakness shared by all existing planar-map-based systems is that the editability of the drawing is severely hampered once coloring has begun. The paths that define the areas to be filled become divided wherever they intersect, making it difficult or impossible to edit them as a whole. Live Paint is a new metaphor that allows planar-map-based coloring while maintaining all the original paths unchanged. When a user makes a change, the regions and edges defined by the new paths take on fill and stroke attributes from the previous regions and edges. This results in greater editing flexibility and ease of use. Live Paint uses a set of heuristics to match each region and edge in a changed illustration with a region or edge in the previous version, a task that is more difficult than it at first appears. It then transfers fill and stroke attributes accordingly.
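
The attribute-transfer step can be sketched with a simple overlap heuristic, assuming the shapely library and hypothetical region lists; the paper's heuristics are more involved (they also match edges and handle region splits and merges):

```python
# Overlap-based attribute transfer (a simplification; shapely assumed).
from shapely.geometry import Polygon

def transfer_fills(old_regions, new_regions):
    """old_regions: list of (Polygon, fill); new_regions: list of Polygon.
    Each new region takes the fill of the old region it overlaps most."""
    fills = []
    for new_poly in new_regions:
        best_fill, best_area = None, 0.0
        for old_poly, fill in old_regions:
            area = new_poly.intersection(old_poly).area
            if area > best_area:
                best_area, best_fill = area, fill
        fills.append(best_fill)   # None means a brand-new, unfilled region
    return fills
```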


Non-Photorealistic Animation and Rendering | 2012

Consistent stylization and painterly rendering of stereoscopic 3D images

Lesley Northam; Paul Asente; Craig S. Kaplan

We present a method for stylizing stereoscopic 3D images that guarantees consistency between the left and right views. Our method decomposes the left and right views of an input image into discretized disparity layers and merges the corresponding layers from the left and right views into a single layer where stylization takes place. We then construct new stylized left and right views by compositing portions of the stylized layers. Because the left and right views come from the same source layers, our method eliminates common artifacts that cause viewer discomfort. We also present a stereoscopic 3D painterly rendering algorithm tailored to our layer-based approach. This method uses disparity information to assist in stroke creation so that strokes follow surface geometry without ignoring painted surface patterns. Finally, we conduct a user study that demonstrates that our approach to stereoscopic 3D image stylization leads to images that are more comfortable to view than those created using other techniques.
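
A rough sketch of the layer decomposition and compositing, with assumed array shapes and layer ordering (not the authors' implementation); the per-layer merging of left and right content, which is the key to view consistency, is elided:

```python
# Quantize disparity into layers, stylize each once, then composite.
import numpy as np

def disparity_layers(image, disparity, n_layers=8):
    """Split an image into n_layers masked slices by quantized disparity."""
    bins = np.linspace(disparity.min(), disparity.max(), n_layers + 1)
    labels = np.clip(np.digitize(disparity, bins) - 1, 0, n_layers - 1)
    return [(labels == i)[..., None] * image for i in range(n_layers)]

def composite(stylized_layers):
    """Assumes layers ordered far to near: nearer layers overwrite farther
    ones wherever they have content."""
    out = np.zeros_like(stylized_layers[0])
    for layer in stylized_layers:
        mask = layer.sum(axis=-1, keepdims=True) > 0
        out = np.where(mask, layer, out)
    return out
```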


Proceedings of the Symposium on Computational Aesthetics | 2013

Patch-based geometric texture synthesis

Zainab AlMeraj; Craig S. Kaplan; Paul Asente

Inspired by the results of recent studies on the perception of geometric textures, we present a patch-based geometric synthesis algorithm that mimics observed synthesis strategies. Our synthesis process first constructs an overlapping grid of copies of the exemplar, and then culls individual motifs based on overlaps and the enforcement of minimum distances.
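
A compact sketch of that strategy with assumed parameters (not the authors' code): tile overlapping copies of the exemplar's motif positions over the output, then greedily cull any motif that lands closer than a minimum distance to one already kept:

```python
# Grid-and-cull synthesis of motif positions (parameters are assumptions).
import numpy as np

def synthesize(motifs, exemplar_size, out_size, overlap=0.25, min_dist=10.0):
    """motifs: (n, 2) motif centers inside one square exemplar tile."""
    step = exemplar_size * (1.0 - overlap)     # spacing of overlapping copies
    candidates = []
    for gy in np.arange(0.0, out_size, step):
        for gx in np.arange(0.0, out_size, step):
            candidates.extend(motifs + np.array([gx, gy]))
    kept = []
    for c in candidates:                        # greedy culling pass
        if all(np.hypot(*(c - k)) >= min_dist for k in kept):
            kept.append(c)
    return np.array(kept)
```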


Computer Graphics Forum | 2015

Brushables: Example-based Edge-aware Directional Texture Painting

Michal Lukác; Jakub Fišer; Paul Asente; Jingwan Lu; Eli Shechtman; Daniel Sýkora

In this paper we present Brushables, a novel approach to example-based painting that respects user-specified shapes at the global level and preserves textural details of the source image at the local level. We formulate the synthesis as a joint optimization problem that simultaneously synthesizes the interior and the boundaries of the region, transferring relevant content from the source to meaningful locations in the target. We also provide an intuitive interface to control both local and global direction of textural details in the synthesized image. A key advantage of our approach is that it enables a "combing" metaphor in which the user can incrementally modify the target direction field to achieve the desired look. Based on this, we implement an interactive texture painting tool capable of handling more complex textures than ever before, and demonstrate its versatility on difficult inputs including vegetation, textiles, hair and painting media.
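
The "combing" interaction can be approximated by building a dense direction field from sparse stroke samples; the nearest-sample lookup below is a crude stand-in for a smooth interpolation, and everything here is an assumption rather than the paper's joint optimizer:

```python
# Dense direction field from sparse stroke samples (nearest-sample lookup).
import numpy as np

def direction_field(stroke_pts, stroke_dirs, h, w):
    """stroke_pts: (n, 2) xy positions; stroke_dirs: (n, 2) directions."""
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 2)
    d2 = ((grid[:, None, :] - stroke_pts[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)                # index of closest stroke sample
    field = stroke_dirs[nearest].reshape(h, w, 2).astype(float)
    return field / np.linalg.norm(field, axis=-1, keepdims=True)
```

Re-running this after each new stroke gives the incremental "combing" feel: each added sample locally redirects the field that the synthesis then follows.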


Eurographics | 2014

Color Me Noisy: Example-based Rendering of Hand-colored Animations with Temporal Noise Control

Jakub Fišer; Michal Lukác; Ondřej Jamriška; Martin Čadík; Yotam I. Gingold; Paul Asente; Daniel Sýkora

We present an example-based approach to rendering hand-colored animations which delivers visual richness comparable to real artwork while enabling control over the amount of perceived temporal noise. This is important both for artistic purposes and viewing comfort, but is tedious or even intractable to achieve manually. We analyse typical features of real hand-colored animations and propose an algorithm that tries to mimic them using only static examples of drawing media. We apply the algorithm to various animations using different drawing media and compare the quality of synthetic results with real artwork. To verify our method perceptually, we conducted experiments confirming that our method delivers distinguishable noise levels and reduces eye strain. Finally, we demonstrate the capabilities of our method to mask imperfections such as shower-door artifacts.
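
A toy illustration of the noise-control knob (an assumption-laden stand-in for the paper's synthesis): each frame, only a fraction of pixels resample the static media exemplar while the rest keep their previous appearance, so noise=1 re-renders everything per frame (maximum flicker) and noise=0 freezes the media:

```python
# Per-pixel temporal resampling with a single noise-level knob.
import numpy as np

rng = np.random.default_rng(0)

def next_frame(prev_media, exemplar, noise=0.3):
    """Keep most of last frame's media; resample a `noise` fraction of pixels."""
    h, w = prev_media.shape[:2]
    flicker = rng.random((h, w)) < noise
    ys = rng.integers(0, exemplar.shape[0], size=(h, w))
    xs = rng.integers(0, exemplar.shape[1], size=(h, w))
    fresh = exemplar[ys, xs]                   # random exemplar lookups
    return np.where(flicker[..., None], fresh, prev_media)
```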


Computers & Graphics | 2013

Stereoscopic 3D image stylization

Lesley Northam; Paul Asente; Craig S. Kaplan

We present a method for stylizing stereoscopic 3D images that guarantees consistency between the left and right views. Our method decomposes the left and right views of an input image into discretized disparity layers and merges the corresponding layers from the left and right views into a single layer where stylization takes place. We then construct new stylized left and right views by compositing portions of the stylized layers. Because the new left and right views come from the same stylized source layers, our method eliminates common stylization artifacts that cause viewer discomfort. We also present a stereoscopic 3D painterly rendering algorithm tailored to our layer-based approach. This method uses disparity information to assist in stroke creation so that strokes follow surface geometry without ignoring painted surface patterns. Finally, we conduct a user study that demonstrates that our approach to stereoscopic 3D image stylization leads to images that are more comfortable to view than those created using other techniques.

Highlights:
- A method for stylizing stereo 3D images that guarantees consistency between views.
- Our method works with many stylization filters, including Adobe Photoshop filters.
- We present a stereo 3D painterly rendering algorithm tailored to our approach.
- We present user study results that demonstrate that users prefer our method to others.

Collaboration


Dive into Paul Asente's collaborations.

Top Co-Authors

Daniel Sýkora

Czech Technical University in Prague

Jakub Fišer

Czech Technical University in Prague
