Featured Researches

Graphics

Interactive volume illumination of slice-based ray casting

Volume rendering plays an important role in medical imaging and industrial design. In recent years, realistic and interactive volume rendering with global illumination has been shown to improve the perception of shape and depth in volumetric datasets. In this paper, a novel and flexible slice-based ray casting method is proposed to implement volume illumination effects such as volume shadows and other scattering effects. It benefits from slice-based illumination attenuation buffers, computed over the whole set of geometry slices from the viewpoint of the light source, and from an efficient per-sample shadow or scattering coefficient calculation during ray casting. Our tests show that the method obtains much better volume illumination effects and more scalable performance than local volume illumination in ray casting volume rendering or other similar slice-based global volume illumination methods.
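The attenuation-buffer idea can be sketched in a few lines: sweep the slices front-to-back from the light source and record, per slice, how much light still arrives there, so that ray-casting samples can look up their shadow coefficient cheaply. The function name and the toy volume below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def light_attenuation_buffers(opacity_slices):
    """Accumulate transmittance slice by slice, front-to-back from the
    light. attenuation[k] holds the fraction of light that still reaches
    slice k, usable as a per-sample shadow coefficient in ray casting."""
    num_slices = opacity_slices.shape[0]
    attenuation = np.empty_like(opacity_slices, dtype=np.float64)
    transmittance = np.ones(opacity_slices.shape[1:])
    for k in range(num_slices):
        attenuation[k] = transmittance              # light arriving at slice k
        transmittance = transmittance * (1.0 - opacity_slices[k])
    return attenuation

# toy volume: 3 slices of 2x2 opacities, ordered from the light source
vol = np.array([[[0.5, 0.0], [0.0, 0.0]],
                [[0.5, 0.0], [0.0, 0.0]],
                [[0.0, 0.0], [0.0, 0.0]]])
buf = light_attenuation_buffers(vol)
```

A sample behind the two opaque-ish texels receives only 0.5 × 0.5 = 0.25 of the light, which is exactly what the buffer stores for it.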

Graphics

Intuitive Facial Animation Editing Based On A Generative RNN Framework

For decades, producing convincing facial animation has garnered great interest, and that interest has only accelerated with the recent explosion of 3D content in both entertainment and professional activities. Motion capture and retargeting have arguably become the dominant solution to address this demand. Yet, despite their high level of quality and automation, performance-based animation pipelines still require manual cleaning and editing to refine raw results, which is a time- and skill-demanding process. In this paper, we leverage machine learning to make facial animation editing faster and more accessible to non-experts. Inspired by recent image inpainting methods, we design a generative recurrent neural network that generates realistic motion in designated segments of an existing facial animation, optionally following user-provided guiding constraints. Our system handles different supervised and unsupervised editing scenarios such as motion filling during occlusions, expression correction, semantic content modification, and noise filtering. We demonstrate the usability of our system on several animation editing use cases.

Graphics

Jittering Samples using a kd-Tree Stratification

Monte Carlo sampling techniques are used to estimate high-dimensional integrals that model the physics of light transport in virtual scenes for computer graphics applications. These methods rely on the law of large numbers to estimate expectations via simulation, typically resulting in slow convergence. Their errors usually manifest as undesirable grain in the pictures generated by image synthesis algorithms. It is well known that these errors diminish when the samples are chosen appropriately. A well-known technique for reducing error operates by subdividing the integration domain, estimating the integral in each stratum, and aggregating these values into a stratified sampling estimate. Naïve stratification methods based on a lattice (grid) are known to improve the convergence rate of Monte Carlo estimation, but require a number of samples that grows exponentially with the dimensionality of the domain. We propose a simple stratification scheme for d-dimensional hypercubes using the kd-tree data structure. Our scheme enables the generation of an arbitrary number of equal-volume partitions of the rectangular domain, and n samples can be generated in O(n) time. Since we do not always need to explicitly build a kd-tree, we provide a simple procedure that allows the sample set to be drawn fully in parallel without any precomputation or storage, speeding up sampling to O(log n) time per sample when executed on n cores. If the tree is implicitly precomputed (O(n) storage), the parallelized run time reduces to O(1) per sample on n cores. In addition to these benefits, we provide an upper bound on the worst-case star discrepancy for n samples matching that of lattice-based sampling strategies, which occur as a special case of our proposed method. We use a number of quantitative and qualitative tests to compare our method against state-of-the-art samplers for image synthesis.
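The equal-volume kd-splitting idea can be sketched recursively: split each cell so that the two children receive cell counts proportional to their volumes (so every leaf ends up with volume 1/n), cycling the split axis, then jitter one uniform sample per leaf. The function name and split policy below are my assumptions, not the paper's exact procedure.

```python
import random

def kd_jitter(n, d, lo=None, hi=None, axis=0, rng=random):
    """Split [0,1]^d into n equal-volume cells via axis-cycling kd splits,
    then draw one uniform (jittered) sample inside each cell."""
    if lo is None:
        lo, hi = [0.0] * d, [1.0] * d
    if n == 1:
        return [[rng.uniform(a, b) for a, b in zip(lo, hi)]]
    n_left = n // 2
    # split position chosen so each child's volume is proportional to its
    # cell count, keeping all leaves at volume 1/n
    split = lo[axis] + (hi[axis] - lo[axis]) * n_left / n
    left_hi = hi[:]; left_hi[axis] = split
    right_lo = lo[:]; right_lo[axis] = split
    nxt = (axis + 1) % d
    return (kd_jitter(n_left, d, lo, left_hi, nxt, rng)
            + kd_jitter(n - n_left, d, right_lo, hi, nxt, rng))

samples = kd_jitter(7, 2)   # 7 stratified samples in the unit square
```

Because the recursion only needs the cell index path, each leaf can also be located independently, which is what allows the fully parallel, storage-free variant described above.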

Graphics

Joint Stabilization and Direction of 360° Videos

360° video provides an immersive experience for viewers, allowing them to freely explore the world by turning their head. However, creating high-quality 360° video content can be challenging, as viewers may miss important events by looking in the wrong direction, or they may see things that ruin the immersion, such as stitching artifacts and the film crew. We take advantage of the fact that not all directions are equally likely to be observed; most viewers are more likely to see content located at "true north", i.e., in front of them, due to ergonomic constraints. We therefore propose 360° video direction, where the video is jointly optimized to orient important events to the front of the viewer and visual clutter behind them, while producing smooth camera motion. Unlike traditional video, viewers can still explore the space as desired, but with the knowledge that the most important content is likely to be in front of them. Constraints can be user guided, either added directly on the equirectangular projection or by recording "guidance" viewing directions while watching the video in a VR headset, or automatically computed, such as via visual saliency or forward motion direction. To accomplish this, we propose a new motion estimation technique specifically designed for 360° video which outperforms the commonly used 5-point algorithm on wide angle video. We additionally formulate the direction problem as an optimization where a novel parametrization of spherical warping allows us to correct for some degree of parallax effects. We compare our approach to recent methods that address stabilization-only and converting 360° video to narrow field-of-view video.

Graphics

LCollision: Fast Generation of Collision-Free Human Poses using Learned Non-Penetration Constraints

We present LCollision, a learning-based method that synthesizes collision-free 3D human poses. At the crux of our approach is a novel deep architecture that simultaneously decodes new human poses from the latent space and predicts colliding body parts. These two components of our architecture are used as the objective function and surrogate hard constraints in a constrained optimization for collision-free human pose generation. A novel aspect of our approach is the use of a bilevel autoencoder that decomposes whole-body collisions into groups of collisions between localized body parts. By solving the constrained optimizations, we show that a significant amount of collision artifacts can be resolved. Furthermore, in a large test set of 2.5 × 10^6 randomized poses from SCAPE, our architecture achieves a collision-prediction accuracy of 94.1% with 80× speedup over exact collision detection algorithms. To the best of our knowledge, LCollision is the first approach that accelerates collision detection and resolves penetrations using a neural network.

Graphics

LOCALIS: Locally-adaptive Line Simplification for GPU-based Geographic Vector Data Visualization

Visualization of large vector line data is a core task in geographic and cartographic systems. Vector maps are often displayed at different cartographic generalization levels, traditionally by using several discrete levels-of-detail (LODs). This limits the generalization levels to a fixed and predefined set of LODs, and generally does not support smooth LOD transitions. However, fast GPUs and novel line rendering techniques can be exploited to integrate dynamic vector map LOD management into GPU-based algorithms for locally-adaptive line simplification and real-time rendering. We propose a new technique that interactively visualizes large line vector datasets at variable LODs. It is based on the Douglas-Peucker line simplification principle, generating an exhaustive set of line segments whose specific subsets represent the lines at any variable LOD. At run time, an appropriate and view-dependent error metric supports screen-space adaptive LODs and the display of the correct subset of line segments accordingly. Our implementation shows that we can simplify and display large line datasets interactively. We can successfully apply line style patterns, dynamic LOD selection lenses, and anti-aliasing techniques to our line rendering.
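The underlying Douglas-Peucker principle can be illustrated with its classic recursive form: keep the point farthest from the chord between the endpoints if its distance exceeds a tolerance, otherwise drop all interior points. This is a plain-Python sketch of the textbook algorithm, not the paper's GPU variant.

```python
import math

def douglas_peucker(points, eps):
    """Recursive Douglas-Peucker line simplification with tolerance eps."""
    def point_line_dist(p, a, b):
        # perpendicular distance from p to the line through a and b
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        length = math.hypot(dx, dy)
        if length == 0.0:
            return math.hypot(px - ax, py - ay)
        return abs(dx * (ay - py) - dy * (ax - px)) / length

    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        dist = point_line_dist(points[i], points[0], points[-1])
        if dist > dmax:
            dmax, idx = dist, i
    if dmax <= eps:
        return [points[0], points[-1]]      # chord approximates well enough
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right                # merge, dropping duplicate pivot

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(line, 1.0)
```

The paper's variable-LOD scheme can be thought of as precomputing the recursion for all tolerances at once, so the run-time error metric merely selects which recorded segments to draw.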

Graphics

LSMAT: Least Squares Medial Axis Transform

The medial axis transform has applications in numerous fields including visualization, computer graphics, and computer vision. Unfortunately, traditional medial axis transformations are usually brittle in the presence of outliers, perturbations and/or noise along the boundary of objects. To overcome this limitation, we introduce a new formulation of the medial axis transform which is naturally robust in the presence of these artifacts. Unlike previous work which has approached the medial axis from a computational geometry angle, we consider it from a numerical optimization perspective. In this work, we follow the definition of the medial axis transform as "the set of maximally inscribed spheres". We show how this definition can be formulated as a least squares relaxation where the transform is obtained by minimizing a continuous optimization problem. The proposed approach is inherently parallelizable by performing independent optimization of each sphere using Gauss-Newton, and its least-squares form allows it to be significantly more robust compared to traditional computational geometry approaches. Extensive experiments on 2D and 3D objects demonstrate that our method provides superior results to the state of the art on both synthetic and real data.
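To illustrate the Gauss-Newton machinery behind such per-sphere optimization, here is a generic least-squares sphere fit minimizing the residuals ||p_i - c|| - r. This is only a sketch of the numerical technique; the actual LSMAT energy, which enforces maximal inscription, differs, and all names below are my own.

```python
import numpy as np

def fit_sphere_gauss_newton(points, c0, r0, iters=20):
    """Minimize sum_i (||p_i - c|| - r)^2 over center c and radius r
    with Gauss-Newton steps (linearize residuals, solve, update)."""
    c, r = np.asarray(c0, dtype=float), float(r0)
    for _ in range(iters):
        diff = points - c                          # (n, d)
        dist = np.linalg.norm(diff, axis=1)        # ||p_i - c||
        res = dist - r                             # residual vector
        J = np.hstack([-diff / dist[:, None],      # d res / d c
                       -np.ones((len(points), 1))])  # d res / d r
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)
        c, r = c + step[:-1], r + step[-1]
    return c, r

# samples on a circle centered at (1, 2) with radius 3
rng = np.random.default_rng(0)
ang = rng.uniform(0.0, 2.0 * np.pi, 200)
pts = np.stack([1 + 3 * np.cos(ang), 2 + 3 * np.sin(ang)], axis=1)
c, r = fit_sphere_gauss_newton(pts, c0=[0.0, 0.0], r0=1.0)
```

Because each sphere's residuals depend only on its own (c, r), many such solves can run independently, which is the source of the parallelism mentioned above.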

Graphics

Lagrangian Neural Style Transfer for Fluids

Artistically controlling the shape, motion and appearance of fluid simulations poses major challenges in visual effects production. In this paper, we present a neural style transfer approach from images to 3D fluids formulated in a Lagrangian viewpoint. Using particles for style transfer has unique benefits compared to grid-based techniques. Attributes are stored on the particles and hence are trivially transported by the particle motion. This intrinsically ensures temporal consistency of the optimized stylized structure and notably improves the resulting quality. Simultaneously, the expensive, recursive alignment of stylization velocity fields required by grid approaches becomes unnecessary, reducing the computation time to less than an hour and rendering neural flow stylization practical in production settings. Moreover, the Lagrangian representation improves artistic control as it allows for multi-fluid stylization and consistent color transfer from images, and the generality of the method enables stylization of smoke and liquids alike.

Graphics

Laplacian Spectral Basis Functions

Representing a signal as a linear combination of a set of basis functions is central in a wide range of applications, such as approximation, de-noising, compression, shape correspondence and comparison. In this context, our paper addresses the main aspects of signal approximation, such as the definition, computation, and comparison of basis functions on arbitrary 3D shapes. Focusing on the class of basis functions induced by the Laplace-Beltrami operator and its spectrum, we introduce the diffusion and Laplacian spectral basis functions, which are then compared with the harmonic and Laplacian eigenfunctions. As the main properties of these basis functions, which are commonly used for numerical geometry processing and shape analysis, we discuss the partition of unity and non-negativity; the intrinsic definition and invariance with respect to shape transformations (e.g., translation, rotation, uniform scaling); the locality, smoothness, and orthogonality; the numerical stability with respect to the domain discretisation; and the computational cost and storage overhead. Finally, we consider geometric metrics, such as the area, conformal, and kernel-based norms, for the comparison and characterisation of the main properties of the Laplacian basis functions.
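The basic workflow of approximating a signal in a Laplacian eigenbasis can be sketched on a 1D stand-in, using a path graph in place of a discretised 3D shape. The setup below is illustrative only; it is not the paper's construction or data.

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian L = D - A of a path graph with n vertices,
    a 1D stand-in for a discretised Laplace-Beltrami operator."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

n = 64
evals, evecs = np.linalg.eigh(path_laplacian(n))   # spectral basis functions

# approximate a smooth signal with the first k basis functions
t = np.linspace(0.0, 1.0, n)
signal = np.sin(2 * np.pi * t) + 0.3 * np.cos(4 * np.pi * t)
k = 10
coeffs = evecs[:, :k].T @ signal        # project onto the truncated basis
approx = evecs[:, :k] @ coeffs          # reconstruct from k coefficients

err = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
```

Since the eigenvectors are orthonormal and the subspaces are nested, adding basis functions can only reduce the projection error, which is the sense in which a truncated spectrum compresses a smooth signal.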

Graphics

Large-Scale Evaluation of Shape-Aware Neighborhood Weights and Neighborhood Sizes

In this paper, we define and evaluate a weighting scheme for neighborhoods in point sets. Our weighting takes the shape of the geometry, i.e., the normal information, into account. This causes the obtained neighborhoods to be more reliable in the sense that connectivity also depends on the orientation of the point set. We utilize a sigmoid to define the weights based on the normal variation. To evaluate the weighting scheme, we turn to a Shannon entropy model for feature separation and rigorously prove its non-degeneracy for our family of weights. Based on this model, we evaluate our weighting terms at large scale on both clean and real-world models. This evaluation provides results regarding the choice of optimal parameters within our weighting scheme. Furthermore, the large-scale evaluation also reveals that neighborhood sizes should not be fixed globally when processing models. This is in contrast to current general practice in the field of geometry processing.
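A sigmoid weight on normal variation can be sketched as follows: neighbors whose normals agree with the center point's normal get a weight near 1, while strongly deviating normals (e.g., across a sharp feature) are damped toward 0. The steepness and offset values below are hypothetical placeholders, not the parameters evaluated in the paper.

```python
import numpy as np

def neighborhood_weights(normals, center_normal, steepness=10.0, offset=0.5):
    """Sigmoid weights on the normal variation between a center point and
    its neighbors: weight ~1 for aligned normals, ~0 for strong variation.
    steepness/offset are illustrative defaults, not tuned values."""
    cos_var = normals @ center_normal      # cosine of the normal deviation
    return 1.0 / (1.0 + np.exp(-steepness * (cos_var - offset)))

center = np.array([0.0, 0.0, 1.0])
neighbors = np.array([[0.0, 0.0, 1.0],     # same orientation as the center
                      [0.0, 1.0, 0.0]])    # 90-degree normal deviation
w = neighborhood_weights(neighbors, center)
```

With such weights, a neighbor across a crease contributes little to the neighborhood even if it is spatially close, which is what makes the connectivity orientation-aware.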
