Featured Research

Graphics

Generative Adversarial Networks for photo to Hayao Miyazaki style cartoons

This paper takes on the problem of transferring the style of cartoon images to real-life photographic images, building on previous work done in CartoonGAN. We trained a Generative Adversarial Network (GAN) on over 60,000 images from works by Hayao Miyazaki at Studio Ghibli. To evaluate our results, we conducted a qualitative survey comparing our results with two state-of-the-art methods. The 117 survey responses indicated that, on average, our model outranked the state-of-the-art methods on cartoon-likeness.
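As a hedged illustration (not the authors' code), the adversarial objective behind this kind of GAN training can be sketched as two opposing binary cross-entropy losses, here in plain NumPy with hypothetical inputs:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-7):
    """BCE loss for the discriminator: push D(real photo-to-cartoon
    targets) toward 1 and D(generated cartoons) toward 0."""
    d_real = np.clip(d_real, eps, 1 - eps)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    return -np.mean(np.log(d_real)) - np.mean(np.log(1 - d_fake))

def generator_loss(d_fake, eps=1e-7):
    """Non-saturating generator loss: push D(generated) toward 1."""
    d_fake = np.clip(d_fake, eps, 1 - eps)
    return -np.mean(np.log(d_fake))

# A confident discriminator has near-zero loss; a fooled one does not.
d_loss_good = discriminator_loss(np.array([0.99]), np.array([0.01]))
d_loss_bad = discriminator_loss(np.array([0.5]), np.array([0.5]))
```

The generator and discriminator minimize these losses in alternation; CartoonGAN-style methods add content and edge-promoting terms on top of this adversarial core.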

Read more
Graphics

Generative Modelling of BRDF Textures from Flash Images

We learn a latent space for easy capture, semantic editing, consistent interpolation, and efficient reproduction of visual material appearance. When users provide a photo of a stationary natural material captured under flash light illumination, it is converted in milliseconds into a latent material code. In a second step, conditioned on the material code, our method, again in milliseconds, produces an infinite and diverse spatial field of BRDF model parameters (diffuse albedo, specular albedo, roughness, normals) that allows rendering in complex scenes and illuminations, matching the appearance of the input picture. Technically, we jointly embed all flash images into a latent space using a convolutional encoder, and -- conditioned on these latent codes -- convert random spatial fields into fields of BRDF parameters using a convolutional neural network (CNN). We condition these BRDF parameters to match the visual characteristics (statistics and spectra of visual features) of the input under matching light. A user study confirms that the semantics of the latent material space agree with user expectations and compares our approach favorably to previous work.
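To make the BRDF parameter fields concrete, here is a minimal sketch of shading a single point from such parameters (diffuse albedo, specular albedo, roughness, normal), using a simple Lambert + Blinn-Phong stand-in rather than the paper's actual BRDF model:

```python
import numpy as np

def shade(diffuse, specular, roughness, normal, light_dir, view_dir):
    """Shade one surface point from BRDF parameters (illustrative
    Lambert + Blinn-Phong stand-in, not the paper's exact model)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = l + v
    h = h / np.linalg.norm(h)
    n_dot_l = max(np.dot(n, l), 0.0)
    shininess = 2.0 / max(roughness ** 2, 1e-4)  # rough -> broad highlight
    spec = specular * max(np.dot(n, h), 0.0) ** shininess
    return (diffuse + spec) * n_dot_l

# Head-on flash-style lighting on a reddish, slightly glossy point.
c = shade(np.array([0.8, 0.2, 0.2]), 0.5, 0.3,
          np.array([0.0, 0.0, 1.0]),
          np.array([0.0, 0.0, 1.0]),
          np.array([0.0, 0.0, 1.0]))
```

The generative model's job is to produce spatial fields of exactly these per-point parameters so that a renderer can evaluate them under arbitrary lights and views.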

Read more
Graphics

Geodesic Centroidal Voronoi Tessellations: Theories, Algorithms and Applications

Nowadays, big data of digital media (including images, videos, and 3D graphical models) are frequently modeled as low-dimensional manifold meshes embedded in a high-dimensional feature space. In this paper, we summarize our recent work on geodesic centroidal Voronoi tessellations (GCVTs), which are intrinsic geometric structures on manifold meshes. We show that GCVTs find a wide range of interesting applications in computer vision and graphics, due to the efficiency of search, location, and indexing inherent in these intrinsic geometric structures. We then present the challenging issues of building the combinatorial structures of GCVTs and establish their time and space complexities, covering both theoretical and algorithmic results.
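For intuition, the Euclidean version of a centroidal Voronoi tessellation can be computed with Lloyd relaxation; the geodesic variant studied in the paper replaces these planar distances with geodesic distances on the mesh. A small illustrative sketch (not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantization_energy(sites, points):
    """Mean distance from each sample point to its nearest site."""
    d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
    return d.min(axis=1).mean()

def lloyd_cvt(sites, points, iterations=20):
    """Lloyd relaxation toward a centroidal Voronoi tessellation in
    the plane (the GCVT analogue uses geodesic distances instead)."""
    sites = sites.copy()
    for _ in range(iterations):
        # Assign each sample point to its nearest site (Voronoi region).
        d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
        owner = np.argmin(d, axis=1)
        # Move each site to the centroid of its region.
        for i in range(len(sites)):
            mask = owner == i
            if mask.any():
                sites[i] = points[mask].mean(axis=0)
    return sites

points = rng.random((2000, 2))     # dense samples of the unit square
init = rng.random((8, 2))          # random initial sites
sites = lloyd_cvt(init, points)
```

Each iteration never increases the quantization energy, which is what makes the resulting cells "centroidal" and useful for search and indexing.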

Read more
Graphics

Geodesic Distance Field-based Curved Layer Volume Decomposition for Multi-Axis Support-free Printing

This paper presents a new curved layer volume decomposition method for multi-axis support-free printing of freeform solid parts. Given a solid model to be printed that is represented as a tetrahedral mesh, we first establish a geodesic distance field embedded on the mesh, whose value at any vertex is the geodesic distance to the base of the model. Next, the model is naturally decomposed into curved layers by interpolating a number of iso-geodesic distance surfaces (IGDSs). These IGDSs morph bottom-up in an intrinsic and smooth way owing to the nature of geodesics, and serve as curved printing layers that are friendly to multi-axis printing. In addition, to meet the collision-free requirement and to improve printing efficiency, we also propose a printing sequence optimization algorithm for determining the printing order of the IGDSs, which helps reduce the air-move path length. Ample experiments in both computer simulation and physical printing are performed, and the experimental results confirm the advantages of our method.
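The two core ingredients — a geodesic distance field from the base and iso-distance layering — can be sketched discretely with Dijkstra's algorithm over mesh edges (an approximation of true geodesics, and a hypothetical simplification of the paper's method):

```python
import heapq

def geodesic_field(adj, base):
    """Dijkstra shortest paths over mesh edges: a discrete stand-in
    for the geodesic distance field. adj maps each vertex to a list
    of (neighbor, edge_length) pairs; base is the set of base vertices."""
    dist = {v: float("inf") for v in adj}
    for b in base:
        dist[b] = 0.0
    heap = [(0.0, b) for b in base]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def iso_layers(dist, thickness):
    """Group vertices into curved layers by iso-geodesic-distance bands."""
    return {v: int(d // thickness) for v, d in dist.items()}

# Toy "mesh": a path of 4 vertices with unit-length edges, base at vertex 0.
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0)]}
dist = geodesic_field(adj, base=[0])
layers = iso_layers(dist, thickness=1.5)
```

Each band of the field then corresponds to one curved printing layer, ordered bottom-up from the base.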

Read more
Graphics

Geometric Sample Reweighting for Monte Carlo Integration

We present a general sample reweighting scheme and its underlying theory for the integration of an unknown function with low dimensionality. Our method produces better results than standard weighting schemes for common sampling strategies, while avoiding bias. Our main insight is to link the weight derivation to the function reconstruction process during integration. The implementation of our solution is simple and results in an improved convergence behavior. We illustrate its benefit by applying our method to multiple Monte Carlo rendering problems.
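For context, the standard weighting scheme that such a method improves on assigns each sample the weight 1/(N·p(x)); this keeps the estimator unbiased for any sampling density. A minimal sketch (the baseline, not the paper's reweighting):

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_mc(f, samples, pdf):
    """Unbiased Monte Carlo estimate of the integral of f over [0, 1]
    using per-sample weights 1 / (N * p(x)) — the standard scheme that
    geometric reweighting aims to improve on."""
    weights = 1.0 / (len(samples) * pdf(samples))
    return np.sum(weights * f(samples))

f = lambda x: 3.0 * x ** 2                 # integral of 3x^2 over [0,1] is 1
# Nonuniform samples with density p(x) = 2x, drawn by inversion.
samples = np.sqrt(rng.random(200000))
estimate = weighted_mc(f, samples, lambda x: 2.0 * x)
```

The paper's insight is that these weights can instead be derived from the function-reconstruction step of integration, improving convergence while keeping the estimator unbiased.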

Read more
Graphics

Geometry-Based Layout Generation with Hyper-Relations Among Objects

Recent studies show increasing demand and interest in automatically generating layouts, yet there is still much room for improving plausibility and robustness. In this paper, we present a data-driven layout framework that requires no model formulation or loss-term optimization. We acquire and organize priors directly from dataset samples instead of sampling probabilistic models, which enables our method to express and generate mathematically inexpressible relations among three or more objects. A non-learning geometric algorithm then arranges objects plausibly, taking into account constraints such as walls and windows. Experiments show that our generated layouts outperform the state of the art and that our framework is competitive with human designers.
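As a toy stand-in for the geometric arrangement step (not the paper's algorithm), consider rejection-sampling positions so that objects stay inside the room and do not overlap:

```python
import random

def overlaps(a, b):
    """Axis-aligned rectangles given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_objects(sizes, room_w, room_h, tries=1000, seed=0):
    """Rejection-sample a plausible, collision-free placement of
    rectangular objects inside a rectangular room (illustrative only;
    the paper's algorithm also handles hyper-relations and semantics)."""
    rng = random.Random(seed)
    placed = []
    for w, h in sizes:
        for _ in range(tries):
            x = rng.uniform(0, room_w - w)
            y = rng.uniform(0, room_h - h)
            cand = (x, y, w, h)
            if not any(overlaps(cand, p) for p in placed):
                placed.append(cand)
                break
    return placed

layout = place_objects([(2, 1), (1, 1), (1, 2)], room_w=6, room_h=4)
```

A real system layers sample-derived priors (e.g. which objects co-occur and how they relate) on top of such hard geometric constraints.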

Read more
Graphics

Geometry-guided Dense Perspective Network for Speech-Driven Facial Animation

Realistic speech-driven 3D facial animation is a challenging problem due to the complex relationship between speech and face. In this paper, we propose a deep architecture, called Geometry-guided Dense Perspective Network (GDPnet), to achieve speaker-independent realistic 3D facial animation. The encoder is designed with dense connections to strengthen feature propagation and encourage the re-use of audio features, and the decoder is integrated with an attention mechanism to adaptively recalibrate point-wise feature responses by explicitly modeling interdependencies between different neuron units. We also introduce a non-linear face reconstruction representation as a guidance of the latent space to obtain more accurate deformation, which helps solve the geometry-related deformation and aids generalization across subjects. Huber and HSIC (Hilbert-Schmidt Independence Criterion) constraints are adopted to promote the robustness of our model and to better exploit the non-linear and high-order correlations. Experimental results on a public dataset and a real scanned dataset validate the superiority of our proposed GDPnet compared with state-of-the-art models.
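The "adaptively recalibrate point-wise feature responses" idea resembles squeeze-and-excitation-style channel gating; a hedged NumPy sketch of that general mechanism (not GDPnet's exact decoder) looks like this:

```python
import numpy as np

def recalibrate(features, w1, w2):
    """Squeeze-and-excitation-style gating: summarize channels globally,
    model their interdependencies with a small bottleneck, then rescale
    each channel's point-wise responses with a sigmoid gate in (0, 1)."""
    squeeze = features.mean(axis=0)                  # global pooling per channel
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # per-channel gates
    return features * gates[None, :]

rng = np.random.default_rng(2)
feats = rng.standard_normal((128, 8))   # 128 mesh points, 8 feature channels
w1 = rng.standard_normal((4, 8))        # hypothetical bottleneck weights
w2 = rng.standard_normal((8, 4))
out = recalibrate(feats, w1, w2)
```

Because the gates lie in (0, 1), the mechanism can only attenuate channels, letting the network emphasize the features most relevant to the current deformation.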

Read more
Graphics

Global Illumination of non-Euclidean spaces

This paper presents a path-tracing algorithm that computes the global illumination of non-Euclidean manifolds, using the 3D torus as an example.
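The 3-torus case is geometrically flat but topologically wrapped, so the key change a path tracer needs is that rays advance Euclidean-style locally while coordinates wrap modulo the fundamental domain. A minimal sketch of that wrapping (a hypothetical helper, not the paper's code):

```python
def advance_in_torus(origin, direction, t, size=1.0):
    """March a point along a ray in the 3-torus: locally Euclidean,
    but each coordinate wraps modulo the fundamental-domain size,
    so light can circle back and illuminate the scene repeatedly."""
    return tuple((o + d * t) % size for o, d in zip(origin, direction))

# A ray leaving the center along +x wraps around past the boundary.
p = advance_in_torus((0.5, 0.5, 0.5), (1.0, 0.0, 0.0), t=0.75)
```

Light transport then accumulates contributions along these wrapped paths, which is why even a single emitter can light the scene from many apparent directions.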

Read more
Graphics

GrabAR: Occlusion-aware Grabbing Virtual Objects in AR

Existing augmented reality (AR) applications often ignore occlusion between real hands and virtual objects when incorporating virtual objects into our views. The challenges come from the lack of accurate depth and the mismatch between real and virtual depth. This paper presents GrabAR, a new approach that directly predicts the real-and-virtual occlusion and bypasses depth acquisition and inference. Our goal is to enhance AR applications with interactions between the hand (real) and grabbable objects (virtual). With paired images of hand and object as inputs, we formulate a neural network that learns to generate the occlusion mask. To train the network, we compile a synthetic dataset to pre-train it and a real dataset to fine-tune it, thus reducing the burden of manual labeling and addressing the domain difference. We then embed the trained network in a prototyping AR system that supports hand grabbing of various virtual objects, demonstrate the system performance both quantitatively and qualitatively, and showcase interaction scenarios in which we can use a bare hand to grab virtual objects and directly manipulate them.
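Once the network predicts an occlusion mask, compositing is a simple per-pixel blend; a hedged sketch of that final step (hypothetical data, not the paper's pipeline code):

```python
import numpy as np

def composite(camera_rgb, virtual_rgb, occlusion_mask):
    """Blend the rendered virtual object with the camera frame under a
    predicted occlusion mask (1 where the real hand occludes the
    virtual object, 0 where the virtual object is in front)."""
    m = occlusion_mask[..., None]          # broadcast mask over RGB channels
    return m * camera_rgb + (1.0 - m) * virtual_rgb

cam = np.zeros((2, 2, 3)); cam[..., 0] = 1.0      # "red" hand pixels
virt = np.zeros((2, 2, 3)); virt[..., 2] = 1.0    # "blue" virtual object
mask = np.array([[1.0, 0.0], [0.0, 1.0]])         # hand in front on the diagonal
out = composite(cam, virt, mask)
```

Predicting this mask directly is what lets GrabAR sidestep the noisy depth comparison that per-pixel depth tests would require.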

Read more
Graphics

GraphSeam: Supervised Graph Learning Framework for Semantic UV Mapping

Recently there has been a significant effort to automate UV mapping, the process of mapping 3D surfaces to the UV space while minimizing distortion and seam length. Although state-of-the-art methods, such as Autocuts and OptCuts, address this task via energy-minimization approaches, they fail to produce semantic seam styles, an essential factor for professional artists. The recent emergence of Graph Neural Networks (GNNs), together with the fact that a mesh can be represented as a particular form of graph, has opened a bridge to novel graph learning-based solutions in the computer graphics domain. In this work, we use the power of supervised GNNs for the first time to propose a fully automated UV mapping framework that enables users to replicate their desired seam styles while reducing distortion and seam length. To this end, we provide augmentation and decimation tools that enable artists to create their own datasets and train the network to produce their desired seam style. We also provide a complementary graph-based post-processing approach that reduces distortion by refining low-confidence seam predictions and reduces seam length (or the number of shells, in our supervised case) using a skeletonization method.
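To illustrate the flavor of refining low-confidence seam predictions on a graph (a toy majority-vote scheme, not the paper's skeletonization-based algorithm), one might write:

```python
def refine_seams(probs, neighbors, lo=0.4, hi=0.6):
    """Toy graph refinement: keep confident per-edge seam predictions
    as-is, and resolve low-confidence edges by a majority vote over
    the labels of adjacent edges. probs maps edge -> seam probability;
    neighbors maps edge -> list of adjacent edges."""
    labels = {e: p >= 0.5 for e, p in probs.items()}
    for e, p in probs.items():
        if lo < p < hi:  # low-confidence edge: defer to its neighborhood
            votes = [labels[n] for n in neighbors[e]]
            labels[e] = sum(votes) > len(votes) / 2
    return labels

probs = {"e0": 0.9, "e1": 0.55, "e2": 0.1, "e3": 0.95}
neighbors = {"e0": ["e1"], "e1": ["e0", "e3"],
             "e2": ["e3"], "e3": ["e1", "e2"]}
labels = refine_seams(probs, neighbors)
```

The underlying idea is the same as in the paper's post-process: local graph structure disambiguates edges the network is unsure about, yielding cleaner, more connected seams.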

Read more
