Publication


Featured research published by Jingwan Lu.


ACM Transactions on Graphics | 2011

Perceptual models of viewpoint preference

Adrian Secord; Jingwan Lu; Adam Finkelstein; Manish Singh; Andrew Nealen

The question of what constitutes a good view of a 3D object has been addressed by numerous researchers in perception, computer vision, and computer graphics. This has led to a large variety of measures for the goodness of views as well as some special-case viewpoint selection algorithms. In this article, we leverage the results of a large user study to optimize the parameters of a general model for viewpoint goodness, such that the fitted model can predict people's preferred views for a broad range of objects. Our model is represented as a combination of attributes known to be important for view selection, such as projected model area and silhouette length. Moreover, this framework can easily incorporate new attributes in the future, based on the data from our existing study. We demonstrate our combined goodness measure in a number of applications, such as automatically selecting a good set of representative views, optimizing camera orbits to pass through good views and avoid bad views, and trackball controls that gently guide the viewer towards better views.
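
To make the idea of a fitted goodness measure concrete, here is a minimal Python sketch that fits a linear combination of per-view attributes to preference scores. The attribute values, the purely linear form, and the least-squares fit are illustrative assumptions, not the paper's actual model or study data.

    import numpy as np

    # Hypothetical per-view attributes for several candidate viewpoints
    # (e.g. projected model area, silhouette length), one row per view.
    attributes = np.array([
        [0.42, 1.30],
        [0.55, 1.10],
        [0.31, 1.75],
        [0.60, 1.45],
    ])
    # Hypothetical preference scores for the same views (e.g. from a user study).
    preference = np.array([0.35, 0.50, 0.20, 0.70])

    # Fit a linear goodness measure g(view) = w . attributes + b by least squares
    # (the paper fits a more general parametric model to real study data).
    A = np.hstack([attributes, np.ones((len(attributes), 1))])
    w, *_ = np.linalg.lstsq(A, preference, rcond=None)

    def goodness(view_attributes):
        """Predict viewpoint goodness for a new view's attribute vector."""
        return float(np.append(view_attributes, 1.0) @ w)

    print(goodness([0.5, 1.2]))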


Computer Vision and Pattern Recognition | 2017

Scribbler: Controlling Deep Image Synthesis with Sketch and Color

Patsorn Sangkloy; Jingwan Lu; Chen Fang; Fisher Yu; James Hays

Several recent works have used deep convolutional networks to generate realistic imagery. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch-based image synthesis system which allows users to scribble over the sketch to indicate preferred colors for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of the user. The network is feed-forward, which allows users to see the effect of their edits in real time. We compare to recent work on sketch-to-image synthesis and show that our approach generates more realistic, diverse, and controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.
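
As an illustration of the conditioning idea only, the following PyTorch sketch shows a tiny feed-forward generator that takes a channel-wise concatenation of a one-channel sketch and a three-channel sparse color-stroke map. The layer sizes are arbitrary assumptions, and the adversarial training that the paper relies on is omitted.

    import torch
    import torch.nn as nn

    # Minimal feed-forward generator sketch: conditioning is expressed by
    # concatenating the sketch (1 channel) and color-stroke map (3 channels).
    # The real Scribbler architecture is far deeper and trained adversarially.
    generator = nn.Sequential(
        nn.Conv2d(1 + 3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # RGB output
    )

    sketch = torch.rand(1, 1, 128, 128)          # sketched boundaries
    color_strokes = torch.zeros(1, 3, 128, 128)  # sparse user color hints
    fake_image = generator(torch.cat([sketch, color_strokes], dim=1))
    print(fake_image.shape)  # torch.Size([1, 3, 128, 128])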


International Conference on Computer Graphics and Interactive Techniques | 2013

RealBrush: painting with examples of physical media

Jingwan Lu; Connelly Barnes; Stephen DiVerdi; Adam Finkelstein

Conventional digital painting systems rely on procedural rules and physical simulation to render paint strokes. We present an interactive, data-driven painting system that uses scanned images of real natural media to synthesize both new strokes and complex stroke interactions, obviating the need for physical simulation. First, users capture images of real media, including examples of isolated strokes, pairs of overlapping strokes, and smudged strokes. Online, the user inputs a new stroke path, and our system synthesizes its 2D texture appearance with optional smearing or smudging when strokes overlap. We demonstrate high-fidelity paintings that closely resemble the captured media style, and also quantitatively evaluate our synthesis quality via user studies.
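
The sketch below illustrates one simplified piece of such a data-driven pipeline: looking up the scanned exemplar stroke whose overall shape best matches a new query path. The resampling scheme, the whole-stroke matching cost, and the dictionary-based library are assumptions for illustration; RealBrush actually stitches texture appearance from several partial exemplar matches.

    import numpy as np

    def resample(path, n=32):
        """Resample a 2D polyline to n points uniformly spaced in arc length."""
        path = np.asarray(path, dtype=float)
        seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(seg)])
        u = np.linspace(0.0, t[-1], n)
        return np.column_stack([np.interp(u, t, path[:, 0]),
                                np.interp(u, t, path[:, 1])])

    def nearest_exemplar(query_path, library):
        """Pick the scanned exemplar stroke whose shape best matches the query.
        (An illustrative simplification: the actual system combines texture
        from multiple partially matching exemplars.)"""
        q = resample(query_path)
        q -= q.mean(axis=0)
        best, best_cost = None, np.inf
        for name, exemplar_path in library.items():
            e = resample(exemplar_path)
            e -= e.mean(axis=0)
            cost = np.sum((q - e) ** 2)
            if cost < best_cost:
                best, best_cost = name, cost
        return best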


International Conference on Computer Graphics and Interactive Techniques | 2012

HelpingHand: example-based stroke stylization

Jingwan Lu; Fisher Yu; Adam Finkelstein; Stephen DiVerdi

Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework can optionally also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.
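
A minimal Python sketch of the lookup idea follows: for each 2D sample of a query stroke, it copies the 4D pose (pressure, 2D tilt, rotation) of the library sample whose local neighborhood of positions matches best. The window-based descriptor and plain nearest-neighbour search are illustrative assumptions; the actual method also enforces smoothness along the synthesized trajectory.

    import numpy as np

    def local_feature(points, i, window=3):
        """Local shape descriptor: neighboring 2D offsets around sample i."""
        lo, hi = max(0, i - window), min(len(points), i + window + 1)
        offsets = points[lo:hi] - points[i]
        feat = np.zeros((2 * window + 1, 2))
        feat[:hi - lo] = offsets
        return feat.ravel()

    def synthesize_pose(query_xy, library_xy, library_pose):
        """query_xy: (N, 2) positions; library_xy: (M, 2) positions;
        library_pose: (M, 4) pressure/tilt-x/tilt-y/rotation recorded by artists.
        Returns an (N, 4) pose sequence copied from locally matching samples."""
        lib_feats = np.array([local_feature(library_xy, j)
                              for j in range(len(library_xy))])
        out = np.zeros((len(query_xy), 4))
        for i in range(len(query_xy)):
            f = local_feature(query_xy, i)
            j = np.argmin(np.sum((lib_feats - f) ** 2, axis=1))
            out[i] = library_pose[j]
        return out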


Interactive 3D Graphics and Games | 2010

Interactive painterly stylization of images, videos and 3D animations

Jingwan Lu; Pedro V. Sander; Adam Finkelstein

We introduce a real-time system that converts images, video, or 3D animation sequences to artistic renderings in various painterly styles. The algorithm, which is entirely executed on the GPU, can efficiently process 512×512-resolution frames containing 60,000 individual strokes at over 30 fps. In order to exploit the parallel nature of GPUs, our algorithm determines the placement of strokes entirely from local pixel neighborhood information. The strokes are rendered as point sprites with textures. Temporal coherence is achieved by treating the brush strokes as particles and moving them based on optical flow. Our system renders high quality results while allowing the user interactive control over many stylistic parameters such as stroke size, texture and density.
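
A minimal sketch of the temporal-coherence step, assuming stroke positions stored as an (N, 2) array and a dense per-pixel optical-flow field; the paper's GPU implementation and its stroke creation and removal logic are not shown.

    import numpy as np

    def advect_strokes(stroke_xy, flow):
        """Move brush-stroke particles along a dense optical-flow field so they
        follow the underlying video content between frames.
        stroke_xy: (N, 2) float pixel positions (x, y).
        flow: (H, W, 2) per-pixel displacement from the previous frame."""
        h, w = flow.shape[:2]
        x = np.clip(stroke_xy[:, 0].round().astype(int), 0, w - 1)
        y = np.clip(stroke_xy[:, 1].round().astype(int), 0, h - 1)
        return stroke_xy + flow[y, x]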


International Conference on Computer Graphics and Interactive Techniques | 2016

StyLit: illumination-guided example-based stylization of 3D renderings

Jakub Fišer; Ondřej Jamriška; Michal Lukác; Eli Shechtman; Paul Asente; Jingwan Lu; Daniel Sýkora

We present an approach to example-based stylization of 3D renderings that better preserves the rich expressiveness of hand-created artwork. Unlike previous techniques, which are mainly guided by colors and normals, our approach is based on light propagation in the scene. This novel type of guidance can distinguish among context-dependent illumination effects, for which artists typically use different stylization techniques, and delivers a look closer to realistic artwork. In addition, we demonstrate that the current state of the art in guided texture synthesis produces artifacts that can significantly decrease the fidelity of the synthesized imagery, and propose an improved algorithm that alleviates them. Finally, we demonstrate our method's effectiveness on a variety of scenes and styles, in applications such as interactive shading studies and autocompletion.
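
The guidance idea can be summarized as a patch-matching energy that penalizes both appearance differences and differences between illumination guidance channels rendered for the exemplar and the target scene. The Python sketch below is a simplified version of such an energy; the choice of channels, the weight mu, and the omission of StyLit's exemplar-reuse control are assumptions for illustration.

    import numpy as np

    def guided_patch_error(style_patch, target_patch,
                           style_guide_patch, target_guide_patch, mu=2.0):
        """Error between a style-exemplar patch and a target patch during
        guided synthesis. 'Guide' patches hold illumination channels rendered
        for both scenes (e.g. full render, direct diffuse, direct specular,
        first diffuse bounce); mu weights guidance against appearance."""
        appearance = np.sum((style_patch - target_patch) ** 2)
        guidance = np.sum((style_guide_patch - target_guide_patch) ** 2)
        return appearance + mu * guidance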


International Conference on Computer Graphics and Interactive Techniques | 2015

LazyFluids: appearance transfer for fluid animations

Ondřej Jamriška; Jakub Fišer; Paul Asente; Jingwan Lu; Eli Shechtman; Daniel Sýkora

In this paper we present a novel approach to appearance transfer for fluid animations based on flow-guided texture synthesis. In contrast to common practice, where pre-captured sets of fluid elements are combined in order to achieve the desired motion and look, our approach makes it possible to fine-tune motion properties in advance using CG techniques and then transfer the desired look from a selected appearance exemplar. We demonstrate that such a practical workflow cannot be simply implemented using current state-of-the-art techniques, analyze what the main obstacles are, and propose a solution to resolve them. In addition, we extend the algorithm to allow for synthesis with rich boundary effects and video exemplars. Finally, we present numerous results that demonstrate the versatility of the proposed approach.


Computer Graphics Forum | 2015

Brushables: Example-based Edge-aware Directional Texture Painting

Michal Lukác; Jakub Fišer; Paul Asente; Jingwan Lu; Eli Shechtman; Daniel Sýkora

In this paper we present Brushables—a novel approach to example‐based painting that respects user‐specified shapes at the global level and preserves textural details of the source image at the local level. We formulate the synthesis as a joint optimization problem that simultaneously synthesizes the interior and the boundaries of the region, transferring relevant content from the source to meaningful locations in the target. We also provide an intuitive interface to control both local and global direction of textural details in the synthesized image. A key advantage of our approach is that it enables a “combing” metaphor in which the user can incrementally modify the target direction field to achieve the desired look. Based on this, we implement an interactive texture painting tool capable of handling more complex textures than ever before, and demonstrate its versatility on difficult inputs including vegetation, textiles, hair and painting media.


Non-Photorealistic Animation and Rendering | 2014

RealPigment: paint compositing by example

Jingwan Lu; Stephen DiVerdi; Willa Chen; Connelly Barnes; Adam Finkelstein

The color of composited pigments in digital painting is generally computed in one of two ways: either alpha blending in RGB, or the Kubelka-Munk equation (KM). The former fails to reproduce paint-like appearances, while the latter is difficult to use. We present a data-driven pigment model that reproduces arbitrary compositing behavior by interpolating sparse samples in a high-dimensional space. The input is an image of a color chart, which provides the composition samples. We propose two different prediction algorithms, one doing simple interpolation using radial basis functions (RBF), and another that trains a parametric model based on the KM equation to compute novel values. We show that RBF is able to reproduce arbitrary compositing behaviors, even non-paint-like ones such as additive blending, while KM compositing is more robust to acquisition noise and can generalize results over a broader range of values.
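
A minimal sketch of the RBF variant, assuming 6D inputs (foreground RGB plus background RGB) mapped to the measured composited RGB; the Gaussian kernel, its width, and the two toy chart samples are illustrative assumptions rather than the paper's acquisition setup.

    import numpy as np

    def fit_rbf(samples_in, samples_out, sigma=0.2):
        """Fit Gaussian radial-basis-function interpolation from observed
        pigment pairs (foreground RGB + background RGB) to the composited RGB
        measured from a color chart."""
        d = samples_in[:, None, :] - samples_in[None, :, :]
        K = np.exp(-np.sum(d ** 2, axis=-1) / (2 * sigma ** 2))
        weights = np.linalg.solve(K + 1e-6 * np.eye(len(K)), samples_out)
        return samples_in, weights, sigma

    def predict(model, query_in):
        """Interpolate the composited color for a new foreground/background pair."""
        centers, weights, sigma = model
        d = query_in[None, :] - centers
        k = np.exp(-np.sum(d ** 2, axis=-1) / (2 * sigma ** 2))
        return k @ weights

    # Toy example: two foreground/background pairs and their observed composites.
    pairs = np.array([[1.0, 0.0, 0.0, 1.0, 1.0, 0.0],   # red over yellow
                      [0.0, 0.0, 1.0, 1.0, 1.0, 1.0]])  # blue over white
    composites = np.array([[0.9, 0.4, 0.0],
                           [0.2, 0.2, 0.9]])
    model = fit_rbf(pairs, composites)
    print(predict(model, np.array([0.5, 0.0, 0.5, 1.0, 1.0, 0.5])))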


ACM Transactions on Graphics | 2017

Example-based synthesis of stylized facial animations

Jakub Fišer; Ondřej Jamriška; David P. Simons; Eli Shechtman; Jingwan Lu; Paul Asente; Michal Lukác; Daniel Sýkora

We introduce a novel approach to example-based stylization of portrait videos that preserves both the subject's identity and the visual richness of the input style exemplar. Unlike the current state-of-the-art based on neural style transfer [Selim et al. 2016], our method performs non-parametric texture synthesis that retains more of the local textural details of the artistic exemplar and does not suffer from image warping artifacts caused by aligning the style exemplar with the target face. Our method allows the creation of videos with less than full temporal coherence [Ruder et al. 2016]. By introducing a controllable amount of temporal dynamics, it more closely approximates the appearance of real hand-painted animation in which every frame was created independently. We demonstrate the practical utility of the proposed solution on a variety of style exemplars and target videos.

Collaboration


Dive into Jingwan Lu's collaborations.

Top Co-Authors


Daniel Sýkora

Czech Technical University in Prague

Jakub Fišer

Czech Technical University in Prague

Ondřej Jamriška

Czech Technical University in Prague
