Featured Research

Graphics

3D Custom Fit Garment Design with Body Movement

The standardized sizes used in the garment industry do not cover the range of individual differences in body shape for most people, leading to ill-fitting clothes, high return rates and overproduction. Recent research efforts in both industry and academia therefore focus on on-demand fabrication of individually fitting garments. We propose an interactive design tool for creating custom-fit garments based on 3D body scans of the intended wearer. Our method explicitly incorporates transitions between various body poses to ensure a better fit and freedom of movement. The core of our method focuses on tools to create a 3D garment shape directly on an avatar without an underlying sewing pattern, and on the adjustment of that garment's rest shape while interpolating and moving through the different input poses. We alternate between cloth simulation steps and rest shape adjustment steps based on stretch to achieve the final shape of the garment. At any step in the real-time process, we allow for interactive changes to the garment. Once the garment shape is finalized for production, established techniques can be used to parameterize it into a 2D sewing pattern or transform it into a knitting pattern.
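The alternation described above can be sketched as a small loop. The snippet below is a minimal illustration only, assuming a hypothetical `simulate_cloth(rest_lengths, pose)` solver and a per-edge stretch-based update rule, neither of which is specified in the abstract.

```python
import numpy as np

def adjust_rest_shape(rest_lengths, simulated_lengths, relaxation=0.5, tolerance=0.05):
    """Relax rest edge lengths toward the simulated lengths wherever the
    stretch ratio exceeds a tolerance (hypothetical update rule)."""
    stretch = simulated_lengths / rest_lengths
    overstretched = np.abs(stretch - 1.0) > tolerance
    return np.where(
        overstretched,
        rest_lengths + relaxation * (simulated_lengths - rest_lengths),
        rest_lengths,
    )

def fit_garment(rest_lengths, poses, simulate_cloth, n_outer=10):
    """Alternate cloth simulation and rest-shape adjustment while moving
    through the input poses, as the abstract describes at a high level."""
    for _ in range(n_outer):
        for pose in poses:  # interpolate / move through the body poses
            simulated_lengths = simulate_cloth(rest_lengths, pose)
            rest_lengths = adjust_rest_shape(rest_lengths, simulated_lengths)
    return rest_lengths
```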

Graphics

3D Dynamic Point Cloud Inpainting via Temporal Consistency on Graphs

With the development of 3D laser scanning techniques and depth sensors, 3D dynamic point clouds have attracted increasing attention as a representation of 3D objects in motion, enabling various applications such as 3D immersive tele-presence, gaming and navigation. However, dynamic point clouds usually exhibit holes of missing data, mainly due to fast motion, acquisition limitations and complicated structure. Leveraging graph signal processing tools, we represent irregular point clouds on graphs and propose a novel inpainting method exploiting both intra-frame self-similarity and inter-frame consistency in 3D dynamic point clouds. Specifically, for each missing region in every frame of the point cloud sequence, we search for its self-similar regions in the current frame and corresponding ones in adjacent frames as references. We then formulate dynamic point cloud inpainting as an optimization problem based on the two types of references, regularized by a graph-signal smoothness prior. Experimental results show that the proposed approach significantly outperforms three competing methods in both objective and subjective quality.
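As a rough illustration of the kind of objective described, a generic graph-regularized formulation might look as follows; the exact weighting and reference construction in the paper will differ. Here x is the signal on the missing region, x_s and x_t the intra- and inter-frame references, L the graph Laplacian, and the lambdas trade-off weights.

```latex
% Generic sketch of a graph-regularized inpainting objective (not the
% paper's exact formulation): x is the signal on the missing region,
% x_s an intra-frame self-similar reference, x_t an inter-frame
% reference from adjacent frames, L the graph Laplacian.
\min_{\mathbf{x}} \;
  \|\mathbf{x} - \mathbf{x}_s\|_2^2
  + \lambda_1 \|\mathbf{x} - \mathbf{x}_t\|_2^2
  + \lambda_2 \, \mathbf{x}^{\top} \mathbf{L} \, \mathbf{x}
```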

Graphics

3D Magic Mirror: Automatic Video to 3D Caricature Translation

A caricature is an abstraction of a real person that distorts or exaggerates certain features while still retaining a likeness. While most existing works focus on reconstructing 3D caricatures from 2D caricatures or translating 2D photos to 2D caricatures, this paper presents a real-time, automatic algorithm for creating expressive 3D caricatures with a caricature-style texture map from 2D photos or videos. To solve this challenging ill-posed reconstruction and cross-domain translation problem, we first reconstruct the 3D face shape for each frame, and then translate the 3D face shape from normal style to caricature style using a novel identity- and expression-preserving VAE-CycleGAN. Based on a labeling formulation, the caricature texture map is constructed from a set of multi-view caricature images generated by CariGANs. The effectiveness and efficiency of our method are demonstrated by comparison with baseline implementations. A perceptual study shows that the 3D caricatures generated by our method meet people's expectations of 3D caricature style.
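A high-level sketch of the per-frame pipeline as the abstract describes it; `reconstruct_face`, `translate_shape`, and `build_texture` are placeholder names standing in for the paper's components, not its API.

```python
def video_to_3d_caricature(frames, reconstruct_face, translate_shape, build_texture):
    """Per-frame pipeline sketched from the abstract; the three callables
    are placeholders for the modules the paper describes."""
    # Texture map constructed from CariGAN multi-view caricature images;
    # built once here for simplicity.
    texture = build_texture(frames[0])
    caricatures = []
    for frame in frames:
        normal_shape = reconstruct_face(frame)      # 3D face shape for this frame
        cari_shape = translate_shape(normal_shape)  # normal -> caricature style,
                                                    # identity/expression preserving
        caricatures.append((cari_shape, texture))
    return caricatures
```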

Graphics

3D Modeling and WebVR Implementation using Azure Kinect, Open3D, and Three.js

This paper proposes a method of extracting an RGB-D image using Azure Kinect, a depth camera; creating a fragment, i.e., a 6D (RGBXYZ) image, using Open3D; turning it into a point cloud object; and implementing WebVR using three.js. Furthermore, it presents the limitations of this approach and its potential for further development.
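A minimal Open3D sketch of the fragment-to-point-cloud step, assuming the Azure Kinect color and depth frames have already been captured and saved to disk; the file names and camera intrinsics below are placeholders. The WebVR side would then load the exported file in three.js, for example with its PLY loader.

```python
import open3d as o3d

# Load an RGB frame and the aligned depth frame previously captured
# from the Azure Kinect (file names are placeholders).
color = o3d.io.read_image("color.png")
depth = o3d.io.read_image("depth.png")

# Combine them into an RGB-D image; back-projecting depth with the
# camera intrinsics yields the 6D (RGBXYZ) "fragment".
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, convert_rgb_to_intensity=False)

# Placeholder intrinsics; the real Azure Kinect calibration should be used.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

# Export for the web viewer; three.js can load PLY point clouds.
o3d.io.write_point_cloud("fragment.ply", pcd)
```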

Graphics

3D Primitives GPGPU Generation for Volume Visualization in 3D Graphics Systems

This article studies the generation of 3D graphic volume primitives (3D segments) in a computer system based on General Purpose Graphics Processing Unit (GPGPU) technology for 3D volume visualization systems. It builds on the general method of volume 3D primitive generation and an algorithm for the voxelization of 3D lines previously proposed and studied by the authors. We consider a Compute Unified Device Architecture (CUDA) implementation of a parametric method for generating 3D line segments and characterize its generation performance on modern Graphics Processing Units. Experiments on the test bench showed the relative inefficiency of generating a single 3D line segment and the efficiency of generating 3D segments of both fixed and arbitrary length on a Graphics Processing Unit (GPU). Experimental studies demonstrate the effectiveness and the quality of the solutions produced by our method when compared to existing state-of-the-art approaches.
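Purely for illustration, here is a CPU/NumPy sketch of the parametric idea behind 3D line segment voxelization; the paper's contribution is the CUDA/GPGPU formulation and its performance analysis, which this sketch does not reproduce.

```python
import numpy as np

def voxelize_segment(p0, p1, voxel_size=1.0):
    """Parametric voxelization of one 3D line segment: sample
    p(t) = p0 + t * (p1 - p0) with a step of at most half a voxel,
    then map the samples to integer voxel coordinates."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n_steps = int(np.ceil(np.linalg.norm(p1 - p0) / voxel_size)) * 2 + 1
    t = np.linspace(0.0, 1.0, n_steps)[:, None]
    samples = p0 + t * (p1 - p0)
    voxels = np.unique(np.floor(samples / voxel_size).astype(int), axis=0)
    return voxels
```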

Graphics

3D Pseudo-Stereo Visualization with GPGPU Support

This article studies a computer system for creating 3D pseudo-stereo images and videos, using hardware and software support to accelerate the synthesis process with General Purpose Graphics Processing Unit (GPGPU) technology. Based on the general strategy of 3D pseudo-stereo synthesis previously proposed by the authors, the Compute Unified Device Architecture (CUDA) implementation covers the main stages of 3D pseudo-stereo synthesis: (i) the practical implementation study; (ii) the synthesis characteristics for obtaining images; and (iii) video in Ultra-High Definition (UHD) 4K resolution using the Graphics Processing Unit (GPU). In tests on 4K content with GPU-equipped evaluation systems, average accelerations of 60.6 times for images and 6.9 times for videos are obtained. The results are consistent with previously identified forecasts for processing 4K image frames and confirm the possibility of running 3D pseudo-stereo synthesis algorithms in real time with the support of modern Graphics Processing Units/Graphics Processing Clusters (GPU/GPC).
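The authors' synthesis strategy is defined in their earlier work. Purely as an illustration of pseudo-stereo synthesis in general, the sketch below shifts pixels horizontally by a depth-derived disparity to fake two viewpoints, which is a generic depth-image-based rendering idea and not the paper's method.

```python
import numpy as np

def pseudo_stereo_pair(image, depth, max_disparity=16):
    """Generic sketch: sample each output pixel from a column shifted by a
    disparity inversely related to depth, producing a left/right pair."""
    h, w = depth.shape
    # Nearer pixels (smaller depth) get larger disparity.
    disparity = (max_disparity * (depth.max() - depth) /
                 max(depth.max() - depth.min(), 1e-6)).astype(int)
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    left_cols = np.clip(cols + disparity // 2, 0, w - 1)
    right_cols = np.clip(cols - disparity // 2, 0, w - 1)
    left = image[rows, left_cols]
    right = image[rows, right_cols]
    return left, right
```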

Graphics

A Bayesian Inference Framework for Procedural Material Parameter Estimation

Procedural material models have been gaining traction in many applications thanks to their flexibility, compactness, and easy editability. We explore the inverse rendering problem of procedural material parameter estimation from photographs, presenting a unified view of the problem in a Bayesian framework. In addition to computing point estimates of the parameters by optimization, our framework uses a Markov Chain Monte Carlo approach to sample the space of plausible material parameters, providing a collection of plausible matches that a user can choose from, and efficiently handling both discrete and continuous model parameters. To demonstrate the effectiveness of our framework, we fit procedural models of a range of materials---wall plaster, leather, wood, anisotropic brushed metals and layered metallic paints---to both synthetic and real target images.
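A compact Metropolis-Hastings sketch of the sampling idea, assuming a callable procedural `render(params)` and a simple Gaussian image-space likelihood; the paper's likelihood, priors, and handling of discrete parameters are richer than this.

```python
import numpy as np

def metropolis_hastings(render, target, init_params, n_samples=2000,
                        step=0.05, noise_sigma=0.05, rng=None):
    """Sample plausible procedural-material parameters whose renderings
    match a target photograph, under a Gaussian image-space likelihood."""
    rng = np.random.default_rng() if rng is None else rng

    def log_likelihood(params):
        residual = render(params) - target
        return -0.5 * np.sum(residual ** 2) / noise_sigma ** 2

    params = np.asarray(init_params, float)
    log_p = log_likelihood(params)
    samples = []
    for _ in range(n_samples):
        proposal = params + step * rng.standard_normal(params.shape)
        log_p_new = log_likelihood(proposal)
        if np.log(rng.uniform()) < log_p_new - log_p:   # accept/reject
            params, log_p = proposal, log_p_new
        samples.append(params.copy())
    return np.array(samples)   # a collection of plausible matches
```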

Graphics

A Causal Convolutional Neural Network for Motion Modeling and Synthesis

We propose a novel deep generative model based on causal convolutions for multi-subject motion modeling and synthesis, inspired by the success of WaveNet in multi-subject speech synthesis. However, it is nontrivial to adapt WaveNet to handle high-dimensional and physically constrained motion data. To this end, we add an encoder and a decoder to the WaveNet to translate the motion data into features and back into predicted motions. We also add 1D convolution layers that take the skeleton configuration as an input to model skeleton variations across different subjects. As a result, our network scales up well to large-scale motion data sets across multiple subjects and supports various applications, such as random and controllable motion synthesis, motion denoising, and motion completion, in a unified way. Complex motions, such as punching, kicking, and kicking while punching, are also well handled. Moreover, our network can synthesize motions for novel skeletons not in the training dataset. After fine-tuning the network with a small amount of motion data for a novel skeleton, it is able to capture the personalized style implied in the motion and generate high-quality motions for that skeleton. Thus, it has the potential to be used as a pre-trained network in few-shot learning for motion modeling and synthesis. Experimental results show that our model can effectively handle variations in skeleton configuration and runs fast enough to synthesize different types of motions online. We also perform user studies to verify that the quality of motions generated by our network is superior to that of state-of-the-art human motion synthesis methods.
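A rough PyTorch sketch of the ingredients named above: a left-padded (causal) 1D convolution and an encoder/decoder wrapped around a dilated stack. Layer sizes are placeholders and the skeleton-configuration branch is omitted; this is not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1D convolution padded on the left only, so frame t never sees
    frames later than t (the causality property borrowed from WaveNet)."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                      # x: (batch, channels, frames)
        return self.conv(F.pad(x, (self.pad, 0)))

class MotionModel(nn.Module):
    """Encoder -> stack of dilated causal convolutions -> decoder, a rough
    sketch of the architecture the abstract outlines (sizes are placeholders)."""
    def __init__(self, pose_dim=63, hidden=128, n_layers=4):
        super().__init__()
        self.encoder = nn.Conv1d(pose_dim, hidden, kernel_size=1)
        self.blocks = nn.ModuleList(
            [CausalConv1d(hidden, dilation=2 ** i) for i in range(n_layers)])
        self.decoder = nn.Conv1d(hidden, pose_dim, kernel_size=1)

    def forward(self, motion):                 # motion: (batch, pose_dim, frames)
        h = self.encoder(motion)
        for block in self.blocks:
            h = torch.relu(block(h)) + h       # residual connection
        return self.decoder(h)                 # features mapped back to pose space
```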

Graphics

A Characterization of 3D Printability

Additive manufacturing (AM) technologies are positioned to provide an unprecedented transformation in how products are designed and manufactured. Due to differences in the technical specifications of AM technologies, the final fabricated parts can vary significantly from the original CAD models, raising issues regarding accuracy, surface finish, robustness, mechanical properties, and functional and geometrical constraints. Various researchers have studied the correlation between AM technologies and design rules. In this work we propose a novel approach to assessing the capability of a 3D model to be printed successfully (a.k.a. printability) on a specific AM machine. This is achieved by taking into consideration the model's mesh complexity and certain part characteristics. A printability score is derived for a model with reference to a specific 3D printing technology, expressing the probability of obtaining a robust and accurate end result when 3D printing on a specific AM machine. The printability score can be used either to determine which 3D printing technology is more suitable for manufacturing a specific model or as a guide to redesign the model to ensure printability. We verify this framework by conducting 3D printing experiments on benchmark models printed on three AM machines employing different technologies: Fused Deposition Modeling (FDM), Binder Jetting (3DP), and Material Jetting (PolyJet).
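Purely as an illustration of how such a score might be assembled, the sketch below combines normalized, hypothetical features with technology-specific weights; the actual features, weights, and calibration against the three AM technologies are the paper's contribution and are not reproduced here.

```python
def printability_score(features, weights):
    """Combine normalized model features (e.g. mesh simplicity, minimum wall
    thickness, overhang support need) into a single [0, 1] score for a given
    AM technology. Feature names and weights are illustrative only."""
    score = sum(weights[name] * value for name, value in features.items())
    return max(0.0, min(1.0, score))

# Hypothetical example: the same model scored against two technology profiles.
features = {"mesh_simplicity": 0.8, "min_wall_thickness": 0.6, "overhang_support": 0.4}
fdm_weights = {"mesh_simplicity": 0.2, "min_wall_thickness": 0.5, "overhang_support": 0.3}
polyjet_weights = {"mesh_simplicity": 0.2, "min_wall_thickness": 0.3, "overhang_support": 0.5}
print(printability_score(features, fdm_weights),
      printability_score(features, polyjet_weights))
```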

Graphics

A Comparison of Radial and Linear Charts for Visualizing Daily Pattern

Radial charts are generally considered less effective than linear charts. Perhaps the only exception is in visualizing periodical time-dependent data, which is believed to be naturally supported by the radial layout. It has been demonstrated that the drawbacks of radial charts outweigh the benefits of this natural mapping. Visualization of daily patterns, as a special case, has not been systematically evaluated using radial charts. In contrast to yearly or weekly recurrent trends, the analysis of daily patterns on a radial chart may benefit from our trained skill in reading radial clocks, which are ubiquitous in our culture. In a crowd-sourced experiment with 92 non-expert users, we evaluated the accuracy, efficiency, and subjective ratings of radial and linear charts for visualizing daily traffic accident patterns. We systematically compared juxtaposed 12-hour variants and single 24-hour variants of both layouts in four low-level tasks and one high-level interpretation task. Our results show that, over all tasks, the most elementary 24-hour linear bar chart is the most accurate and efficient and is also preferred by the users. This provides strong evidence for the use of linear layouts, even for visualizing periodical daily patterns.
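For reference, a small matplotlib sketch of the two single-24-hour chart variants compared in the study (linear and radial bar charts), using synthetic hourly counts; the stimuli and data used in the experiment are the paper's own.

```python
import numpy as np
import matplotlib.pyplot as plt

hours = np.arange(24)
counts = 50 + 30 * np.sin((hours - 7) / 24 * 2 * np.pi) ** 2  # synthetic daily pattern

fig = plt.figure(figsize=(9, 4))

# Linear 24-hour bar chart.
ax1 = fig.add_subplot(1, 2, 1)
ax1.bar(hours, counts)
ax1.set_xlabel("hour of day")
ax1.set_ylabel("accidents")

# Radial 24-hour bar chart: one sector per hour, midnight at the top, clockwise.
ax2 = fig.add_subplot(1, 2, 2, projection="polar")
ax2.bar(hours / 24 * 2 * np.pi, counts, width=2 * np.pi / 24)
ax2.set_theta_zero_location("N")
ax2.set_theta_direction(-1)
ax2.set_xticks(hours[::3] / 24 * 2 * np.pi)
ax2.set_xticklabels(hours[::3])

plt.tight_layout()
plt.show()
```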

