Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nicholas Vining is active.

Publication


Featured research published by Nicholas Vining.


International Conference on Computer Graphics and Interactive Techniques | 2013

PolyCut: monotone graph-cuts for PolyCube base-complex construction

Marco Livesu; Nicholas Vining; Alla Sheffer; James Gregson; Riccardo Scateni

PolyCubes, or orthogonal polyhedra, are useful as parameterization base-complexes for various operations in computer graphics. However, computing quality PolyCube base-complexes for general shapes, providing a good trade-off between mapping distortion and singularity counts, remains a challenge. Our work improves on the state of the art in PolyCube computation by adopting a graph-cut-inspired approach. We observe that, given an arbitrary input mesh, the computation of a suitable PolyCube base-complex can be formulated as associating, or labeling, each input mesh triangle with one of six signed principal axis directions. Most of the criteria for a desirable PolyCube labeling can be satisfied using a multi-label graph-cut optimization with suitable local unary and pairwise terms. However, the highly constrained nature of PolyCubes, imposed by the need to align each chart with one of the principal axes, enforces additional global constraints that the labeling must satisfy. To enforce these constraints, we develop a constrained discrete optimization technique, PolyCut, which embeds a graph-cut multi-label optimization within a hill-climbing local search framework that looks for solutions that minimize the cut energy while satisfying the global constraints. We further optimize our generated PolyCube base-complexes through a combination of distortion-minimizing deformation, followed by a labeling update and a final PolyCube parameterization step. Our PolyCut formulation captures the desired properties of a PolyCube base-complex, balancing parameterization distortion against singularity count, and produces demonstrably better PolyCube base-complexes than previous work.
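The labeling step can be illustrated with a minimal sketch: each triangle receives one of the six signed axes using a unary alignment cost plus a pairwise smoothness term. The paper solves this with a multi-label graph cut inside a hill-climbing search; here a simple iterated-conditional-modes relaxation stands in for that solver, and `normals` and `adjacency` are hypothetical inputs (per-triangle unit normals and neighbor index lists).

```python
# Illustrative sketch only: ICM relaxation as a stand-in for PolyCut's
# multi-label graph-cut optimization over six signed axis labels.
import numpy as np

AXES = np.array([[ 1, 0, 0], [-1, 0, 0],
                 [ 0, 1, 0], [ 0,-1, 0],
                 [ 0, 0, 1], [ 0, 0,-1]], dtype=float)

def label_triangles(normals, adjacency, smoothness=0.5, iters=20):
    unary = 1.0 - normals @ AXES.T          # (n_tris, 6) alignment cost
    labels = unary.argmin(axis=1)           # data-term-only initialization
    for _ in range(iters):                  # ICM sweeps
        changed = False
        for t in range(len(labels)):
            cost = unary[t].copy()
            for nb in adjacency[t]:         # penalize label disagreement
                cost += smoothness * (np.arange(6) != labels[nb])
            best = cost.argmin()
            if best != labels[t]:
                labels[t], changed = best, True
        if not changed:
            break
    return labels                            # chart label per triangle

# Example: two nearly coplanar triangles snap to the same +Z chart.
normals = np.array([[0.05, 0.0, 0.999], [0.0, 0.05, 0.999]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(label_triangles(normals, adjacency=[[1], [0]]))   # -> [4 4]
```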


International Conference on Computer Graphics and Interactive Techniques | 2015

Flow aligned surfacing of curve networks

Hao Pan; Yang Liu; Alla Sheffer; Nicholas Vining; Changjian Li; Wenping Wang

We propose a new approach for automatic surfacing of 3D curve networks, a long-standing computer graphics problem which has garnered new attention with the emergence of sketch-based modeling systems capable of producing such networks. Our approach is motivated by recent studies suggesting that artist-designed curve networks consist of descriptive curves that convey intrinsic shape properties, and are dominated by representative flow lines designed to convey the principal curvature lines on the surface. Studies indicate that viewers complete the intended surface shape by envisioning a surface whose curvature lines smoothly blend these flow-line curves. Following these observations, we design a surfacing framework that automatically aligns the curvature lines of the constructed surface with the representative flow lines and smoothly interpolates these representative flow, or curvature, directions while minimizing undesired curvature variation. Starting with an initial triangle mesh of the network, we dynamically adapt the mesh to maximize the agreement between the principal curvature direction field on the surface and a smooth flow field suggested by the representative flow-line curves. Our main technical contribution is a framework for curvature-based surface modeling that facilitates the creation of surfaces with prescribed curvature characteristics. We validate our method via visual inspection, via comparison to artist-created and ground-truth surfaces, as well as comparison to prior art, and confirm that our results are well aligned with the computed flow fields and with viewer perception of the input networks.
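The agreement the method maximizes can be illustrated by a small alignment measure between a per-face principal curvature direction field and the flow field suggested by the input curves. Directions are unoriented (a 180-degree flip is the same direction), hence the squared dot product. This only evaluates the objective; the actual method adapts the mesh to improve it, and both inputs here are hypothetical arrays of per-face unit tangent vectors.

```python
# Illustrative sketch: measure how well curvature directions follow a flow field.
import numpy as np

def flow_alignment(kdir, flow):
    """Return mean alignment in [0, 1]; 1 means perfectly flow-aligned."""
    cos2 = np.einsum('ij,ij->i', kdir, flow) ** 2
    return cos2.mean()

kdir = np.array([[1.0, 0.0, 0.0], [0.7071, 0.7071, 0.0]])
flow = np.array([[1.0, 0.0, 0.0], [1.0,    0.0,    0.0]])
print(flow_alignment(kdir, flow))   # ~0.75: one face aligned, one at 45 degrees
```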


International Conference on Computer Graphics and Interactive Techniques | 2016

Physics-driven pattern adjustment for direct 3D garment editing

Aric Bartle; Alla Sheffer; Vladimir G. Kim; Danny M. Kaufman; Nicholas Vining; Floraine Berthouzoz

Designers frequently reuse existing designs as a starting point for creating new garments. In order to apply garment modifications, which the designer envisions in 3D, existing tools require meticulous manual editing of 2D patterns. These 2D edits need to account both for the envisioned geometric changes in the 3D shape, as well as for various physical factors that affect the look of the draped garment. We propose a new framework that allows designers to directly apply the changes they envision in 3D space, and that creates the 2D patterns that replicate this envisioned target geometry when lifted into 3D via a physical draping simulation. Our framework removes the need for laborious and knowledge-intensive manual 2D edits and allows users to effortlessly mix existing garment designs as well as adjust for garment length and fit. Following each user-specified editing operation, we first compute a target 3D garment shape, one that maximally preserves the input garment's style (its proportions, fit, and shape) subject to the modifications specified by the user. We then automatically compute 2D patterns that recreate the target garment shape when draped around the input mannequin within a user-selected simulation environment. To generate these patterns, we propose a fixed-point optimization scheme that compensates for the deformation due to the physical forces affecting the drape and is independent of the underlying simulation tool used. Our experiments show that this method quickly and reliably converges to patterns that, under simulation, form the desired target look, and works well with different black-box physical simulators. We demonstrate a range of edited and resimulated garments, and further validate our approach via expert and amateur critique, and comparisons to alternative solutions.
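The fixed-point idea can be sketched by treating the simulator as a black box and nudging the pattern by the residual between the target shape and its simulated drape. The names (`simulate`, `pattern`, `target`) and the toy simulator below are hypothetical; the real system operates on garment panel meshes with an off-the-shelf cloth solver.

```python
# Illustrative sketch of a drape-compensating fixed-point iteration.
import numpy as np

def fit_pattern(simulate, pattern, target, step=1.0, tol=1e-6, max_iters=100):
    """Fixed-point iteration: pattern <- pattern + step * (target - drape)."""
    for _ in range(max_iters):
        drape = simulate(pattern)            # black-box physical simulation
        residual = target - drape
        if np.linalg.norm(residual) < tol:
            break
        pattern = pattern + step * residual  # compensate for drape deformation
    return pattern

# Toy stand-in simulator: draping shrinks and sags the input slightly.
simulate = lambda p: 0.9 * p - 0.05
target = np.array([1.0, 2.0, 3.0])
pattern = fit_pattern(simulate, target.copy(), target)
print(simulate(pattern))   # converges to ~[1. 2. 3.]
```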


International Conference on Computer Graphics and Interactive Techniques | 2015

Practical hex-mesh optimization via edge-cone rectification

Marco Livesu; Alla Sheffer; Nicholas Vining; Marco Tarini

The usability of hexahedral meshes depends on the degree to which the shape of their elements deviates from a perfect cube; a single concave, or inverted element makes a mesh unusable. While a range of methods exist for discretizing 3D objects with an initial topologically suitable hex mesh, their output meshes frequently contain poorly shaped and even inverted elements, requiring a further quality optimization step. We introduce a novel framework for optimizing hex-mesh quality capable of generating inversion-free high-quality meshes from such poor initial inputs. We recast hex quality improvement as an optimization of the shape of overlapping cones, or unions, of tetrahedra surrounding every directed edge in the hex mesh, and show the two to be equivalent. We then formulate cone shape optimization as a sequence of convex quadratic optimization problems, where hex convexity is encoded via simple linear inequality constraints. As this solution space may be empty, we therefore present an alternate formulation which allows the solver to proceed even when constraints cannot be satisfied exactly. We iteratively improve mesh element quality by solving at each step a set of local, per-cone, convex constrained optimization problems, followed by a global energy minimization step which reconciles these local solutions. This latter method provides no theoretical guarantees on the solution but produces inversion-free, high quality meshes in practice. We demonstrate the robustness of our framework by optimizing numerous poor quality input meshes generated using a variety of initial meshing methods and producing high-quality inversion-free meshes in each case. We further validate our algorithm by comparing it against previous work, and demonstrate a significant improvement in both worst and average element quality.
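The element quality the paper improves can be illustrated with the standard minimum scaled Jacobian of a hexahedron: 1 for a perfect cube, and values at or below 0 for concave or inverted corners. This sketch only evaluates one element's quality; it is not the paper's edge-cone optimization. Vertex ordering follows the common convention of bottom face v0..v3, top face v4..v7.

```python
# Illustrative sketch: minimum scaled Jacobian of a single hex element.
import numpy as np

# For each corner, its three edge-adjacent vertices, ordered so that a
# perfect cube yields a determinant of +1.
CORNER_NEIGHBORS = [(1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
                    (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3)]

def min_scaled_jacobian(hex_verts):
    """hex_verts: (8, 3) array. Returns the worst corner quality in [-1, 1]."""
    quality = 1.0
    for c, (a, b, d) in enumerate(CORNER_NEIGHBORS):
        edges = hex_verts[[a, b, d]] - hex_verts[c]        # three corner edges
        edges /= np.linalg.norm(edges, axis=1, keepdims=True)
        quality = min(quality, np.linalg.det(edges))       # scaled Jacobian
    return quality

cube = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                 [0,0,1],[1,0,1],[1,1,1],[0,1,1]], dtype=float)
print(min_scaled_jacobian(cube))                # 1.0 for a perfect cube
skewed = cube.copy(); skewed[6] = [2.0, 2.0, 0.2]
print(min_scaled_jacobian(skewed) < 1.0)        # True: degraded quality
```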


ACM Transactions on Graphics | 2015

Modeling Character Canvases from Cartoon Drawings

Mikhail Bessmeltsev; William S. C. Chang; Nicholas Vining; Alla Sheffer; Karan Singh

We introduce a novel technique for the construction of a 3D character proxy, or canvas, directly from a 2D cartoon drawing and a user-provided correspondingly posed 3D skeleton. Our choice of input is motivated by the observation that traditional cartoon characters are well approximated by a union of generalized surface of revolution body parts, anchored by a skeletal structure. While typical 2D character contour drawings allow ambiguities in 3D interpretation, our use of a 3D skeleton eliminates such ambiguities and enables the construction of believable character canvases from complex drawings. Our canvases conform to the 2D contours of the input drawings, and are consistent with the perceptual principles of Gestalt continuity, simplicity, and contour persistence. We first segment the input 2D contours into individual body-part outlines corresponding to 3D skeletal bones using the Gestalt continuation principle to correctly resolve inter-part occlusions in the drawings. We then use this segmentation to compute the canvas geometry, generating 3D generalized surfaces of revolution around the skeletal bones that conform to the original outlines and balance simplicity against contour persistence. The combined method generates believable canvases for characters drawn in complex poses with numerous inter-part occlusions, variable contour depth, and significant foreshortening. Our canvases serve as 3D geometric proxies for cartoon characters, enabling unconstrained 3D viewing, articulation, and non-photorealistic rendering. We validate our algorithm via a range of user studies and comparisons to ground-truth 3D models and artist-drawn results. We further demonstrate a compelling gallery of 3D character canvases created from a diverse set of cartoon drawings with matching 3D skeletons.
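The geometric primitive the canvases are built from, a generalized surface of revolution around a skeletal bone, can be sketched as rings of vertices swept around the bone axis with a per-ring radius. The real method fits these radii to the segmented 2D contours; here the radius profile is supplied directly and all names are illustrative.

```python
# Illustrative sketch: sample a generalized surface of revolution around a bone.
import numpy as np

def bone_surface_of_revolution(p0, p1, radii, ring_samples=16):
    """Return (len(radii) * ring_samples, 3) vertices around the bone p0->p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    axis = p1 - p0
    axis /= np.linalg.norm(axis)
    # Build an orthonormal frame (u, v) perpendicular to the bone axis.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(axis @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    verts = []
    for t, r in zip(np.linspace(0.0, 1.0, len(radii)), radii):
        center = p0 + t * (p1 - p0)
        for a in np.linspace(0.0, 2.0 * np.pi, ring_samples, endpoint=False):
            verts.append(center + r * (np.cos(a) * u + np.sin(a) * v))
    return np.array(verts)

# A forearm-like part: thicker near the elbow, thinner near the wrist.
part = bone_surface_of_revolution([0, 0, 0], [0, 0, 2], radii=[0.4, 0.35, 0.2])
print(part.shape)   # (48, 3)
```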


Symposium on Computer Animation | 2015

Real-time dynamic wrinkling of coarse animated cloth

Russell Gillette; Craig Peters; Nicholas Vining; Essex Edwards; Alla Sheffer

Dynamic folds and wrinkles are an important visual cue for creating believably dressed characters in virtual environments. Adding these fine details to real-time cloth visualization is challenging, as the low-quality cloth used for real-time applications often has no reference shape, an extremely low triangle count, and poor temporal and spatial coherence. We introduce a novel real-time method for adding dynamic, believable wrinkles to such coarse cloth animation. We trace spatially and temporally coherent wrinkle paths, overcoming the inaccuracies and noise in low-end cloth animation, by employing a two-stage stretch tensor estimation process. We first employ a graph-cut segmentation technique to extract spatially and temporally reliable surface motion patterns, detecting consistent compressing, stable, and stretching patches. We then use the detected motion patterns to compute a per-triangle temporally adaptive reference shape and a stretch tensor based on it. We use this tensor to dynamically generate new wrinkle geometry on the coarse cloth mesh by taking advantage of the GPU tessellation unit. Our algorithm produces plausible fine wrinkles on real-world data sets at real-time frame rates, and is suitable for the current generation of consoles and PC graphics cards.
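The per-triangle stretch estimate that drives wrinkle placement can be sketched by comparing a triangle's deformed 3D edges against its reference 2D shape and reading compression off the eigenvalues of the right Cauchy-Green tensor; principal stretches below 1 indicate compression, and wrinkles are typically added perpendicular to the compression direction. The inputs below are illustrative single-triangle arrays, not the paper's full motion-pattern pipeline.

```python
# Illustrative sketch: per-triangle principal stretches from a reference shape.
import numpy as np

def principal_stretches(rest_2d, deformed_3d):
    """rest_2d: (3, 2), deformed_3d: (3, 3). Returns stretches and directions."""
    Dm = (rest_2d[1:] - rest_2d[0]).T          # 2x2 reference edge matrix
    Ds = (deformed_3d[1:] - deformed_3d[0]).T  # 3x2 deformed edge matrix
    F = Ds @ np.linalg.inv(Dm)                 # deformation gradient (3x2)
    C = F.T @ F                                # right Cauchy-Green tensor (2x2)
    evals, evecs = np.linalg.eigh(C)
    return np.sqrt(evals), evecs               # stretches, directions in rest space

rest = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# Deformed: compressed to 60% along x, unchanged along y.
deformed = np.array([[0.0, 0.0, 0.0], [0.6, 0.0, 0.0], [0.0, 1.0, 0.0]])
stretches, dirs = principal_stretches(rest, deformed)
print(stretches)   # ~[0.6, 1.0] -> compression along the first direction
```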


Computer Graphics Forum | 2014

Game level layout from design specification

Chongyang Ma; Nicholas Vining; Sylvain Lefebvre; Alla Sheffer

The design of video game environments, or levels, aims to control gameplay by steering the player through a sequence of designer-controlled steps, while simultaneously providing a visually engaging experience. Traditionally these levels are painstakingly designed by hand, often from pre-existing building blocks, or space templates. In this paper, we propose an algorithmic approach for automatically laying out game levels from user-specified blocks. Our method allows designers to retain control of the gameplay flow via user-specified level connectivity graphs, while relieving them from the tedious task of manually assembling the building blocks into a valid, plausible layout. Our method produces sequences of diverse layouts for the same input connectivity, allowing for repeated replay of a given level within a visually different, new environment. We support complex graph connectivities and various building block shapes, and are able to compute complex layouts in seconds. The two key components of our algorithm are the use of configuration spaces defining feasible relative positions of building blocks within a layout, and a graph-decomposition-based layout strategy that leverages graph connectivity to speed up convergence and avoid local minima. Together these two tools quickly steer the solution toward feasible layouts. We demonstrate our method on a variety of real-life inputs, and generate appealing layouts conforming to user specifications.
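The configuration-space idea can be illustrated for the simplest case: two axis-aligned rectangular blocks that must be adjacent (share part of a wall) without overlapping. The sketch below enumerates feasible relative placements on an integer grid; the actual method works with continuous configuration spaces and arbitrary block shapes, so this is only a discretized illustration.

```python
# Illustrative sketch: discrete configuration space of two adjacent rectangles.
def adjacent_placements(size_a, size_b):
    """Offsets (dx, dy) of block B relative to block A placed at the origin."""
    (wa, ha), (wb, hb) = size_a, size_b
    feasible = []
    for dx in range(-wb, wa + 1):
        for dy in range(-hb, ha + 1):
            overlap_x = min(wa, dx + wb) - max(0, dx)
            overlap_y = min(ha, dy + hb) - max(0, dy)
            # Share a wall segment: positive contact along one axis and
            # flush (zero overlap) along the other -> no interior overlap.
            if (overlap_x > 0 and overlap_y == 0) or \
               (overlap_y > 0 and overlap_x == 0):
                feasible.append((dx, dy))
    return feasible

# Feasible positions of a 2x1 room against a 3x2 room sharing a wall segment.
print(len(adjacent_placements((3, 2), (2, 1))))   # 12 wall-sharing offsets
```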


International Conference on Computer Graphics and Interactive Techniques | 2016

Gesture3D: posing 3D characters via gesture drawings

Mikhail Bessmeltsev; Nicholas Vining; Alla Sheffer

Artists routinely use gesture drawings to communicate ideated character poses for storyboarding and other digital media. During subsequent posing of the 3D character models, they use these drawings as a reference, and perform the posing itself using 3D interfaces which require time and expert 3D knowledge to operate. We propose the first method for automatically posing 3D characters directly using gesture drawings as an input, sidestepping the manual 3D posing step. We observe that artists are skilled at quickly and effectively conveying poses using such drawings, and design them to facilitate a single perceptually consistent pose interpretation by viewers. Our algorithm leverages perceptual cues to parse the drawings and recover the artist-intended poses. It takes as input a vector-format rough gesture drawing and a rigged 3D character model, and plausibly poses the character to conform to the depicted pose. No other input is required. Our contribution is two-fold: we first analyze and formulate the pose cues encoded in gesture drawings; we then employ these cues to compute a plausible image-space projection of the conveyed pose and to imbue it with depth. Our framework is designed to robustly overcome errors and inaccuracies frequent in typical gesture drawings. We exhibit a wide variety of character models posed by our method created from gesture drawings of complex poses, including poses with occlusions and foreshortening. We validate our approach via result comparisons to artist-posed models generated from the same reference drawings, via studies that confirm that our results agree with viewer perception, and via comparison to algorithmic alternatives.
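The depth-lifting step rests on the classic foreshortening relation for a single bone: if a bone of known length L projects to a 2D segment of length l <= L, its depth extent is ±sqrt(L² - l²), with the sign left to other cues. The sketch below shows only that relation, not the paper's full perceptual-cue analysis; names and inputs are illustrative.

```python
# Illustrative sketch: depth extent of one bone from its 2D projection.
import math

def bone_depth_extent(p2d_parent, p2d_child, bone_length):
    dx = p2d_child[0] - p2d_parent[0]
    dy = p2d_child[1] - p2d_parent[1]
    planar_sq = dx * dx + dy * dy
    # Drawings are inexact: clamp when the drawn segment exceeds the bone length.
    return math.sqrt(max(bone_length ** 2 - planar_sq, 0.0))

# A forearm of length 1.0 drawn strongly foreshortened (2D length 0.6):
print(bone_depth_extent((0.0, 0.0), (0.6, 0.0), 1.0))   # 0.8
```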


ACM Transactions on Graphics | 2017

FlowRep: descriptive curve networks for free-form design shapes

Giorgio Gori; Alla Sheffer; Nicholas Vining; Enrique Rosales; Nathan A. Carr; Tao Ju

We present FlowRep, an algorithm for extracting descriptive compact 3D curve networks from meshes of free-form man-made shapes. We infer the desired compact curve network from complex 3D geometries by using a series of insights derived from perception, computer graphics, and design literature. These sources suggest that visually descriptive networks are cycle-descriptive, i.e., their cycles unambiguously describe the geometry of the surface patches they surround. They also indicate that such networks are designed to be projectable, or easy to envision when observed from a static general viewpoint; in other words, 2D projections of the network should be strongly indicative of its 3D geometry. Research suggests that both properties are best achieved by using networks dominated by flowlines, surface curves aligned with principal curvature directions across anisotropic regions and strategically extended across sharp features and isotropic areas. Our algorithm leverages these observations in the construction of a compact descriptive curve network. Starting with a curvature-aligned quad-dominant mesh, we first extract sequences of mesh edges that form long, well-shaped, and reliable flowlines by leveraging directional similarity between nearby meaningful flowline directions. We then use a compact subset of the extracted flowlines and the model's sharp-feature, or trim, curves to form a sparse, projectable network which describes the underlying surface. We validate our method by demonstrating a range of networks computed from diverse inputs, using them for surface reconstruction, and showing extensive comparisons with prior work and artist-generated networks.
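The flowline extraction step can be sketched as greedy chaining: starting from a mesh edge, repeatedly extend the chain with the outgoing edge whose direction is most similar to the current one, and stop when directional similarity drops below a threshold. The paper builds these chains on a curvature-aligned quad-dominant mesh; here the mesh is a hypothetical dict mapping each vertex to its outgoing edges.

```python
# Illustrative sketch: greedy directional chaining of edges into a flowline.
import numpy as np

def trace_flowline(edges_out, start, direction, min_cos=0.9, max_steps=100):
    """edges_out: {vertex: [(next_vertex, unit_direction), ...]}."""
    chain, current, d = [start], start, np.asarray(direction, float)
    for _ in range(max_steps):
        candidates = edges_out.get(current, [])
        if not candidates:
            break
        nxt, nd = max(candidates, key=lambda e: np.dot(d, e[1]))
        if np.dot(d, nd) < min_cos:       # direction changes too sharply
            break
        chain.append(nxt)
        current, d = nxt, np.asarray(nd, float)
    return chain

# Three nearly collinear edges followed by a sharp turn: the turn is rejected.
edges = {0: [(1, (1.0, 0.0))],
         1: [(2, (0.98, 0.2))],
         2: [(3, (0.0, 1.0))]}
print(trace_flowline(edges, 0, (1.0, 0.0)))   # [0, 1, 2]
```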


International Conference on Computer Graphics and Interactive Techniques | 2018

Box Cutter: Atlas Refinement for Efficient Packing via Void Elimination

Max Limper; Nicholas Vining; Alla Sheffer

Packed atlases, consisting of 2D parameterized charts, are ubiquitously used to store surface signals such as texture or normals. Tight packing is similarly used to arrange and cut out 2D panels for fabrication from sheet materials. Packing efficiency, or the ratio between the areas of the packed atlas and its bounding box, significantly impacts downstream applications. We propose Box Cutter, a new method for optimizing packing efficiency suitable for both settings. Our algorithm improves packing efficiency without changing distortion by strategically cutting and repacking the atlas charts or panels. It preserves the local mapping between the 3D surface and the atlas charts and retains global mapping continuity across the newly formed cuts. We balance packing efficiency improvement against increase in chart boundary length and enable users to directly control the acceptable amount of boundary elongation. While the problem we address is NP-hard, we provide an effective practical solution by iteratively detecting large rectangular empty spaces, or void boxes, in the current atlas packing and eliminating them by first refining the atlas using strategically placed axis-aligned cuts and then repacking the refined charts. We repeat this process until no further improvement is possible, or until the desired balance between packing improvement and boundary elongation is achieved. Packed chart atlases are only useful for the applications we address if their charts are overlap-free; yet many popular parameterization methods, used as-is, produce atlases with global overlaps. Our pre-processing step eliminates all input overlaps while explicitly minimizing the boundary length of the resulting overlap-free charts. We demonstrate our combined strategy on a large range of input atlases produced by diverse parameterization methods, as well as on multiple sets of 2D fabrication panels. Our framework dramatically improves the output packing efficiency on all inputs; for instance, with boundary length increase capped at 50%, we improve packing efficiency by 68% on average.
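The void-box detection step can be sketched on an occupancy bitmap of the packed atlas: find the largest empty axis-aligned rectangle via the classic histogram/stack method. The paper then refines and repacks charts to eliminate such boxes; the bitmap below is an illustrative stand-in for a real atlas rasterization.

```python
# Illustrative sketch: largest empty rectangle ("void box") in an occupancy bitmap.
import numpy as np

def largest_void_box(occupied):
    """occupied: 2D bool array (True = covered). Returns the largest empty area."""
    rows, cols = occupied.shape
    heights = np.zeros(cols, dtype=int)
    best = 0
    for r in range(rows):
        # Column heights of contiguous empty cells ending at row r.
        heights = np.where(occupied[r], 0, heights + 1)
        stack = []                         # column indices with increasing heights
        for c in range(cols + 1):
            h = heights[c] if c < cols else 0
            while stack and heights[stack[-1]] >= h:
                top = stack.pop()
                width = c - (stack[-1] + 1 if stack else 0)
                best = max(best, heights[top] * width)
            stack.append(c)
    return best

atlas = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0]], dtype=bool)
print(largest_void_box(atlas))   # 6: a 2x3 empty block on the right
```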

Collaboration


Dive into Nicholas Vining's collaborations.

Top Co-Authors

Alla Sheffer
University of British Columbia

Mikhail Bessmeltsev
University of British Columbia

Craig Peters
University of British Columbia

Enrique Rosales
University of British Columbia

Essex Edwards
University of British Columbia

Giorgio Gori
University of British Columbia