Featured Research

Graphics

Data to Physicalization: A Survey of the Physical Rendering Process

Physical representations of data offer tangible, spatial ways of looking at, navigating, and interacting with data. While digital fabrication has facilitated the creation of objects with data-driven geometry, rendering data as a physically fabricated object is still a daunting leap for many physicalization designers. Rendering, in the scope of this research, refers to the iterative process of moving between digital design and digital fabrication, together with its specific challenges. We developed a corpus of example data physicalizations from the research literature and from physicalization practice. This survey then unpacks the "rendering" phase of the extended InfoVis pipeline in greater detail through these examples, with the aim of identifying ways that researchers, artists, and industry practitioners "render" physicalizations using digital design and fabrication tools.

Data-Driven Crowd Simulation with Generative Adversarial Networks

This paper presents a novel data-driven crowd simulation method that can mimic the observed traffic of pedestrians in a given environment. Given a set of observed trajectories, we use a recent form of neural networks, Generative Adversarial Networks (GANs), to learn the properties of this set and generate new trajectories with similar properties. We define a way for simulated pedestrians (agents) to follow such a trajectory while handling local collision avoidance. As such, the system can generate a crowd that behaves similarly to observations, while still enabling real-time interactions between agents. Via experiments with real-world data, we show that our simulated trajectories preserve the statistical properties of their input. Our method simulates crowds in real time that resemble existing crowds, while also allowing insertion of extra agents, combination with other simulation methods, and user interaction.
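
To make the adversarial setup concrete, here is a minimal sketch of a trajectory GAN in PyTorch. It is not the paper's architecture: the trajectory length, latent dimension, and MLP layers are all illustrative assumptions, and real training data would come from the observed trajectories.

```python
# Minimal trajectory-GAN sketch, assuming fixed-length sequences of T 2D positions.
import torch
import torch.nn as nn

T = 32           # hypothetical trajectory length (waypoints per trajectory)
NOISE_DIM = 64   # hypothetical latent dimension

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, T * 2),          # outputs T (x, y) waypoints
        )
    def forward(self, z):
        return self.net(z).view(-1, T, 2)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(T * 2, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),              # real/fake logit
        )
    def forward(self, traj):
        return self.net(traj.view(-1, T * 2))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_traj):
    """One adversarial update on a batch of observed trajectories."""
    b = real_traj.size(0)
    fake = G(torch.randn(b, NOISE_DIM))
    # Discriminator: push real toward 1, generated toward 0.
    loss_d = bce(D(real_traj), torch.ones(b, 1)) + \
             bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to fool the updated discriminator.
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.randn(8, T, 2))  # stand-in batch of "observed" trajectories
```

Once trained, sampled trajectories would be handed to the agents, which follow them subject to local collision avoidance as described above.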

Data-Driven Physical Face Inversion

Facial animation is one of the most challenging problems in computer graphics, and it is often solved using linear heuristics such as blend-shape rigging. More expressive approaches like physical simulation have emerged, but these methods are very difficult to tune, especially when simulating a real actor's face. We propose to use a simple finite element simulation approach for face animation, and present a novel method for recovering the required simulation parameters in order to best match a real actor's facial motion. Our method involves reconstructing a very small number of 3D head poses of the actor, chosen so that the poses span different directions of the gravitational force relative to the face. Our algorithm can then automatically recover both the gravity-free rest shape of the face and the spatially varying physical material stiffness such that a forward simulation matches the captured targets as closely as possible. As a result, our system can produce actor-specific physical parameters that can be immediately used in recent physical simulation methods for faces. Furthermore, as the simulation results depend heavily on the chosen spatial layout of material clusters, we analyze and compare different spatial layouts.
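
The inverse problem can be illustrated with a toy sketch: jointly optimize a rest shape and per-vertex stiffness so that a forward simulation under several gravity directions matches the captured poses. The `forward_sim` below is a deliberately trivial stand-in for a finite element solve, and all names, sizes, and the random "captures" are hypothetical.

```python
# Conceptual sketch of the inversion, with a toy stand-in for the FEM solve.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_verts, n_poses = 50, 3                     # tiny illustrative sizes
gravity_dirs = np.array([[0, -1], [1, 0], [-1, 0]], float)  # head orientations
captured = [rng.normal(size=(n_verts, 2)) for _ in range(n_poses)]  # targets

def forward_sim(rest, stiffness, g):
    """Toy equilibrium: deflection under gravity g, damped by stiffness.
    A real system would run a static finite element solve here."""
    return rest + g / stiffness[:, None]

def objective(x):
    rest = x[:n_verts * 2].reshape(n_verts, 2)
    stiffness = np.exp(x[n_verts * 2:])      # log-parameterized, keeps k > 0
    err = 0.0
    for g, target in zip(gravity_dirs, captured):
        err += np.sum((forward_sim(rest, stiffness, g) - target) ** 2)
    return err

# Initialize the rest shape from one capture and stiffness at exp(0) = 1.
x0 = np.concatenate([captured[0].ravel(), np.zeros(n_verts)])
res = minimize(objective, x0, method="L-BFGS-B")
print("residual:", res.fun)
```

The log parameterization of stiffness is one simple way to keep the recovered material values positive during unconstrained optimization.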

Data-Driven Space-Filling Curves

We propose a data-driven space-filling curve method for 2D and 3D visualization. Our flexible curve traverses the data elements in the spatial domain such that the resulting linearization better preserves features in space compared to existing methods. We achieve this data coherency by calculating a Hamiltonian path that approximately minimizes an objective function describing the similarity of data values and location coherency in a neighborhood. An extended variant also supports multiscale data via quadtrees and octrees. Our method is useful in many areas of visualization, including multivariate or comparative visualization, ensemble visualization of 2D and 3D data on regular grids, and multiscale visual analysis of particle simulations. The effectiveness of our method is evaluated with numerical comparisons to existing techniques and through examples with ensemble and multivariate datasets.
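
The core trade-off, that consecutive cells along the curve should be both spatially adjacent and similar in value, can be illustrated with a greedy sketch. The paper instead computes an approximately optimal Hamiltonian path; this nearest-neighbor heuristic only demonstrates the cost being minimized, and `alpha` is an assumed weighting parameter.

```python
# Greedy illustration of the linearization objective on a 2D grid.
import numpy as np

def linearize(data, alpha=0.5):
    """data: 2D array; alpha trades off value similarity vs. spatial coherency."""
    h, w = data.shape
    unvisited = {(i, j) for i in range(h) for j in range(w)}
    cur = (0, 0)
    path = [cur]
    unvisited.remove(cur)
    while unvisited:
        # Prefer unvisited 4-neighbors; fall back to any cell if boxed in.
        nbrs = [(cur[0] + di, cur[1] + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cand = [c for c in nbrs if c in unvisited] or list(unvisited)
        def cost(c):
            value_term = abs(float(data[c]) - float(data[cur]))
            dist_term = abs(c[0] - cur[0]) + abs(c[1] - cur[1])
            return alpha * value_term + (1 - alpha) * dist_term
        cur = min(cand, key=cost)
        path.append(cur)
        unvisited.remove(cur)
    return path

grid = np.random.default_rng(1).random((8, 8))
print(linearize(grid)[:5])   # first few cells of the data-driven ordering
```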

DecoSurf: Recursive Geodesic Patterns on Triangle Meshes

In this paper, we show that many of the complex patterns that characterize the decorative style of artisanal objects can be generated by the recursive application of only four operators. Each operator is derived from tracing the isolines or the integral curves of geodesic fields generated from selected seeds on the surface. Based on this formulation, we present an interactive application that lets designers model complex recursive patterns directly on the object surface, without relying on parametrization. We support interaction on commodity hardware with meshes of a few million triangles by combining lightweight data structures with an efficient approximate graph-based geodesic solver. We validate our approach by matching decoration styles from real-world photos, by analyzing the speed and accuracy of our geodesic solver, and by evaluating the interface in a user study.
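
As a rough illustration of the kind of solver involved, the sketch below computes an approximate geodesic distance field by running Dijkstra over the mesh edge graph from a set of seed vertices; isolines of such a field are the curves the pattern operators trace. DecoSurf's actual solver is more elaborate and accurate, so treat this purely as a conceptual sketch.

```python
# Graph-based approximate geodesic field via multi-source Dijkstra.
import heapq

def geodesic_field(vertices, edges, seeds):
    """vertices: list of 3D points; edges: (i, j) index pairs; seeds: vertex ids.
    Returns a per-vertex approximate geodesic distance to the nearest seed."""
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        d = sum((a - b) ** 2 for a, b in zip(vertices[i], vertices[j])) ** 0.5
        adj[i].append((j, d))
        adj[j].append((i, d))
    dist = [float("inf")] * len(vertices)
    heap = [(0.0, s) for s in seeds]
    for s in seeds:
        dist[s] = 0.0
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue                      # stale heap entry
        for u, w in adj[v]:
            if d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return dist

# Tiny example: a unit square split into two triangles, seeded at vertex 0.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(geodesic_field(verts, edges, seeds=[0]))
```

Because the path is restricted to mesh edges, the result overestimates true geodesic distances; graph-based solvers like this trade accuracy for speed and simplicity.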

Decomposition and Modeling in the Non-Manifold Domain

The problem of decomposing non-manifold objects has already been studied in solid modeling. However, the few proposed solutions are limited to decomposing solids described through their boundaries. In this thesis we study the problem of decomposing an arbitrary non-manifold simplicial complex into more regular components. A formal notion of decomposition is developed using combinatorial topology. The proposed decomposition is unique for a given complex and is computable for complexes of any dimension, and we propose a decomposition algorithm that is linear in the size of the input. In three or higher dimensions a decomposition into manifold parts is not always possible; we therefore decompose a non-manifold complex into a decidable superclass of manifolds that we call initial quasi-manifolds. We also define a two-layered, dimension-independent data structure, the Extended Winged data structure, conceived to model non-manifolds through their decomposition into initial quasi-manifold parts. The first layer describes each component separately, while the second layer encodes the connectivity structure of the decomposition. We analyze the space requirements of the Extended Winged data structure and give algorithms to build and navigate it. Finally, we discuss the time requirements for computing topological relations and show that, for surfaces and tetrahedralizations embedded in real 3D space, all topological relations can be extracted in optimal time. This approach offers a compact, dimension-independent representation for non-manifolds that is useful whenever the modeled object has few non-manifold singularities.
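
For intuition, the simplest instance of the singularities being split away can be shown on triangle meshes: an edge incident to more than two triangles is non-manifold. The thesis handles simplicial complexes of arbitrary dimension; this sketch covers only that 2D special case.

```python
# Detect non-manifold edges of a triangle mesh (edges shared by > 2 faces).
from collections import defaultdict

def non_manifold_edges(triangles):
    """triangles: list of vertex-index triples. Returns edges with > 2 faces."""
    edge_faces = defaultdict(list)
    for f, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces[tuple(sorted(e))].append(f)
    return {e: fs for e, fs in edge_faces.items() if len(fs) > 2}

# Three triangles sharing edge (0, 1): a non-manifold "fan".
tris = [(0, 1, 2), (0, 1, 3), (0, 1, 4)]
print(non_manifold_edges(tris))   # {(0, 1): [0, 1, 2]}
```

Cutting the complex along such singular simplices is what separates it into the more regular components the decomposition describes.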

Deep Detail Enhancement for Any Garment

Creating fine garment details requires significant effort and substantial computational resources. In contrast, a coarse shape may be easy to acquire in many scenarios (e.g., via low-resolution physically based simulation, linear blend skinning driven by skeletal motion, or portable scanners). In this paper, we show how to synthesize, in a data-driven manner, rich yet plausible details on top of a coarse garment geometry. Once the parameterization of the garment is given, we formulate the task as a style transfer problem over the space of associated normal maps. To facilitate generalization across garment types and character motions, we introduce a patch-based formulation that produces high-resolution details by matching a Gram-matrix-based style loss, hallucinating geometric details (i.e., wrinkle density and shape). We extensively evaluate our method on a variety of production scenarios and show that it is simple, lightweight, and efficient, and that it generalizes across underlying garment types, sewing patterns, and body motions.
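
Here is a minimal sketch of a Gram-matrix style loss in PyTorch, of the kind the patch-based formulation matches over normal-map features. The random tensors stand in for network activations; the paper's feature extractor and training setup are not reproduced.

```python
# Gram-matrix style loss, in the spirit of neural style transfer.
import torch

def gram_matrix(feat):
    """feat: (B, C, H, W) feature maps -> (B, C, C) channel correlations."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(feat_generated, feat_reference):
    """Match second-order feature statistics, ignoring spatial layout."""
    return torch.mean((gram_matrix(feat_generated) -
                       gram_matrix(feat_reference)) ** 2)

# Random "feature maps" standing in for activations on normal-map patches.
fg = torch.randn(1, 64, 32, 32, requires_grad=True)
fr = torch.randn(1, 64, 32, 32)
loss = style_loss(fg, fr)
loss.backward()
print(float(loss))
```

Because the Gram matrix discards spatial layout and keeps only channel correlations, matching it transfers wrinkle statistics without requiring the coarse and detailed patches to align pixel by pixel.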

Deep Feature-preserving Normal Estimation for Point Cloud Filtering

Point cloud filtering, whose main challenge is removing noise and outliers while preserving geometric features, is a fundamental problem in 3D geometry processing. Two-step schemes involving normal estimation and position update have been shown to produce promising results. Nevertheless, current normal estimation methods, both optimization-based and deep learning-based, often either offer limited automation or fail to preserve sharp features. In this paper, we propose a novel feature-preserving normal estimation method for point cloud filtering. It is a learning-based method and thus predicts normals automatically. In the training phase, we first generate patch-based samples, which are fed to a classification network that separates feature from non-feature points. We then train on the feature and non-feature samples separately to achieve good results. At test time, given a noisy point cloud, its normals are estimated automatically. For point cloud filtering, we iterate this normal estimation together with an existing position update algorithm for a few iterations. Various experiments demonstrate that our method outperforms state-of-the-art normal estimation methods and point cloud filtering techniques, both qualitatively and quantitatively.
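
The two-stage routing can be sketched as follows, with small placeholder MLPs standing in for the paper's networks; the patch dimensionality and network sizes are assumptions.

```python
# Pipeline sketch: classify each patch, then route it to a separately
# trained normal regressor for feature or non-feature points.
import torch
import torch.nn as nn

PATCH_DIM = 64 * 3   # hypothetical: 64 neighbor points, flattened xyz

def mlp(out_dim):
    return nn.Sequential(nn.Linear(PATCH_DIM, 128), nn.ReLU(),
                         nn.Linear(128, out_dim))

classifier = mlp(2)          # feature vs. non-feature
reg_feature = mlp(3)         # normal regressor for feature patches
reg_smooth = mlp(3)          # normal regressor for non-feature patches

def estimate_normals(patches):
    """patches: (N, PATCH_DIM) local neighborhoods, one per query point."""
    is_feature = classifier(patches).argmax(dim=1) == 1
    normals = torch.where(is_feature[:, None],
                          reg_feature(patches),
                          reg_smooth(patches))
    return nn.functional.normalize(normals, dim=1)

print(estimate_normals(torch.randn(5, PATCH_DIM)).shape)  # torch.Size([5, 3])
```

Training the two regressors on disjoint sample sets is what lets the feature branch specialize on sharp regions instead of averaging them away.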

Deep Generation of Face Images from Sketches

Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. We take a local-to-global approach. We first learn feature embeddings of key face components, and push corresponding parts of input sketches towards underlying component manifolds defined by the feature vectors of face component samples. We also propose another deep neural network to learn the mapping from the embedded component features to realistic images with multi-channel feature maps as intermediate results to improve the information flow. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches. Our tool is easy to use even for non-artists, while still supporting fine-grained control of shape details. Both qualitative and quantitative evaluations show the superior generation ability of our system to existing and alternative solutions. The usability and expressiveness of our system are confirmed by a user study.
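
The "soft constraint" idea, pulling a rough sketch toward the manifold of plausible components, can be approximated by blending the nearest neighbors of the sketch's embedding among embeddings of real face components. This nearest-neighbor projection is a simplified sketch, not the paper's learned mapping; all names and sizes are illustrative.

```python
# Project a rough component embedding toward the component manifold
# by convex combination of its K nearest database neighbors.
import numpy as np

def project_to_manifold(query_embedding, database, k=5):
    """query_embedding: (D,); database: (N, D) embeddings of real components.
    Returns a distance-weighted blend of the K nearest database samples."""
    dists = np.linalg.norm(database - query_embedding, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)   # closer samples weigh more
    weights /= weights.sum()
    return weights @ database[nearest]        # lies near the manifold

rng = np.random.default_rng(2)
db = rng.normal(size=(1000, 128))   # embeddings of sampled face components
rough = rng.normal(size=128)        # embedding of a rough input sketch
print(project_to_manifold(rough, db).shape)   # (128,)
```

Because the projected embedding is a blend of plausible samples rather than the raw input, a rough or incomplete stroke still decodes to a realistic component.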

Deep Geometric Texture Synthesis

Recently, deep generative adversarial networks for image generation have advanced rapidly, yet only a small amount of research has focused on generative models for irregular structures, particularly meshes. Nonetheless, mesh generation and synthesis remain fundamental topics in computer graphics. In this work, we propose a novel framework for synthesizing geometric textures. It learns geometric texture statistics from local neighborhoods (i.e., local triangular patches) of a single reference 3D model. It learns deep features on the faces of the input triangulation, which are used to subdivide the mesh and generate offsets across multiple scales, without parameterization of the reference or target mesh. Our network displaces mesh vertices in any direction (i.e., in both normal and tangential directions), enabling the synthesis of geometric textures that cannot be expressed by a simple 2D displacement map. Learning and synthesizing on local geometric patches yields a genus-oblivious framework, facilitating texture transfer between shapes of different genus.
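
The displacement step can be sketched as follows: offsets predicted per vertex are expressed in a local frame built from the vertex normal, so displacement can be tangential as well as normal. The frame construction and the random offsets below are placeholder assumptions, not the paper's learned network output.

```python
# Per-vertex displacement in a local (tangent, tangent, normal) frame.
import numpy as np

def local_frames(normals):
    """Build an orthonormal (t1, t2, n) frame per vertex from its normal."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    # Pick a helper axis not parallel to n, then Gram-Schmidt via cross products.
    helper = np.where(np.abs(n[:, :1]) < 0.9, [[1, 0, 0]], [[0, 1, 0]])
    t1 = np.cross(helper, n)
    t1 /= np.linalg.norm(t1, axis=1, keepdims=True)
    t2 = np.cross(n, t1)
    return t1, t2, n

def displace(vertices, normals, offsets):
    """offsets: (V, 3) coefficients in the (t1, t2, n) frame."""
    t1, t2, n = local_frames(normals)
    return (vertices + offsets[:, :1] * t1
                     + offsets[:, 1:2] * t2
                     + offsets[:, 2:] * n)

V = 4
verts = np.random.rand(V, 3)
norms = np.tile([0.0, 0.0, 1.0], (V, 1))
print(displace(verts, norms, 0.01 * np.random.randn(V, 3)))
```

Allowing the tangential components is what distinguishes this from a scalar displacement map, which can only move vertices along their normals.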
