Featured Research

Graphics

Least-Squares Affine Reflection Using Eigen Decomposition

This note summarizes the steps for computing the best-fitting affine reflection that aligns two sets of corresponding points.
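As a concrete illustration, the classic orthogonal Procrustes/Kabsch construction solves this least-squares problem; a minimal NumPy sketch, using an SVD (closely related to an eigen decomposition of the cross-covariance) with the determinant forced to -1 so the result is a reflection rather than a rotation, might look like this (the note's exact algorithm may differ):

```python
import numpy as np

def best_fit_reflection(P, Q):
    """Least-squares orthogonal transform with det = -1 (a reflection),
    plus a translation, mapping point set P onto Q.

    P, Q: (n, d) arrays of corresponding points.
    Returns (R, t) with Q ~= P @ R.T + t and det(R) = -1.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = P.shape[1]
    S = np.eye(d)
    # Flip the smallest singular direction so that det(R) = -1.
    S[-1, -1] = -np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ S @ U.T
    t = cq - R @ cp
    return R, t
```

If the correspondence really was generated by a reflection, the recovered transform reproduces it exactly (up to floating-point error).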

Read more
Graphics

Length Learning for Planar Euclidean Curves

In this work, we used deep neural networks (DNNs) to address a fundamental problem in differential geometry. The literature contains many closed-form expressions for calculating curvature, length, and other geometric properties. Since these concepts are well understood analytically, we are motivated to reconstruct them using deep neural networks; in this framework, our goal is to learn geometric properties from examples. The simplest geometric object is a curve, so this work focuses on learning the length of planar curves sampled from a dataset of sine waves. To this end, the fundamental length axioms were reconstructed using a supervised learning approach. Following these axioms, a simplified DNN model, which we call ArcLengthNet, was established. Its robustness to additive noise and discretization errors was tested.
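For context, the ground-truth label in such a supervised setup is the standard discrete arc length, i.e. the summed distances between consecutive samples. A small sketch (not the paper's ArcLengthNet) of computing it for a sampled sine curve:

```python
import numpy as np

def discrete_length(points):
    """Discrete arc length of a planar sampled curve: the sum of
    Euclidean distances between consecutive sample points."""
    diffs = np.diff(points, axis=0)
    return np.sqrt((diffs ** 2).sum(axis=1)).sum()

# Example: 1000 samples of y = sin(x) on [0, 2*pi].
x = np.linspace(0.0, 2.0 * np.pi, 1000)
curve = np.stack([x, np.sin(x)], axis=1)
L = discrete_length(curve)
```

With dense enough sampling this converges to the analytic arc length (about 7.640 for one period of the sine), which is what makes supervised training against such labels feasible.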

Read more
Graphics

Length-optimal tool path planning for freeform surfaces with preferred feed directions

This paper presents a new method to generate tool paths for machining freeform surfaces represented either as parametric surfaces or as triangular meshes. The method allows for an optimal tradeoff between following the preferred feed direction field and maintaining a constant scallop height, and it yields a minimized overall path length. Optimality is achieved by formulating tool path planning as a Poisson problem that minimizes a simple, quadratic energy. This Poisson formulation considers all tool paths at once, without resorting to heuristic sampling or an initial tool path choice as in existing methods, and thus yields a globally optimal solution. Finding the optimal tool paths amounts to solving a well-conditioned sparse linear system, which is computationally convenient and efficient. Tool paths are represented with an implicit scheme that completely avoids the challenging topological issues of path singularities and self-intersections seen in previous methods. The presented method has been validated with a series of examples and comparisons.
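To make the Poisson idea concrete, here is a toy sketch (not the paper's energy or discretization): solving a discrete Poisson equation on a small grid with the standard 5-point stencil reduces to one well-conditioned linear solve, and iso-contours of the resulting scalar field play the role of the implicitly represented tool paths:

```python
import numpy as np

def solve_poisson_2d(n, f):
    """Solve -laplacian(phi) = f on an n x n interior grid with zero
    Dirichlet boundary, via the 5-point stencil and a dense solve.
    Level sets of phi then serve as implicit curves (here: a stand-in
    for tool paths extracted as iso-contours)."""
    N = n * n
    A = np.zeros((N, N))
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, idx(ii, jj)] = -1.0
    phi = np.linalg.solve(A, f.ravel())
    return phi.reshape(n, n)
```

In the paper the linear system is sparse and the energy encodes feed directions and scallop height; this sketch only shows why "all paths at once" reduces to a single linear solve.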

Read more
Graphics

Levitating Rigid Objects with Hidden Rods and Wires

We propose a novel algorithm to efficiently generate hidden structures that support arrangements of floating rigid objects. Our optimization finds a small set of rods and wires, connecting objects to each other or to a supporting surface (e.g., a wall or ceiling), that holds all objects in force and torque equilibrium. Our objective function includes a sparsity-inducing total volume term and a linear visibility term based on efficiently pre-computed Monte Carlo integration, to encourage solutions that are as hidden as possible. The resulting optimization is convex, and the global optimum can be efficiently recovered via a linear program. Our representation allows for a user-controllable mixture of tension-, compression-, and shear-resistant rods or tension-only wires. We explore applications to theatre set design, museum exhibit curation, and other artistic endeavours.
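The equilibrium constraints at the heart of such an optimization can be sketched as follows. Assuming, for illustration, wires with known attachment points and directions, force and torque balance give six linear equations in the wire tensions; the paper additionally minimizes volume and visibility over these constraints via a linear program, which this sketch omits:

```python
import numpy as np

def wire_tensions(attach, dirs, weight):
    """Solve the force/torque equilibrium for wires supporting one
    rigid object (a sketch of the constraints only, not the paper's LP).

    attach: (m, 3) attachment points relative to the center of mass;
    dirs:   (m, 3) unit directions the wires pull along;
    weight: gravity force vector, e.g. (0, 0, -m*g).
    Solves, in the least-squares sense,
        sum_i t_i * dirs[i]                   = -weight   (forces)
        sum_i t_i * cross(attach[i], dirs[i]) = 0         (torques)
    """
    m = len(dirs)
    A = np.zeros((6, m))
    for i in range(m):
        A[:3, i] = dirs[i]
        A[3:, i] = np.cross(attach[i], dirs[i])
    b = np.concatenate([-np.asarray(weight, float), np.zeros(3)])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

For three vertical wires attached symmetrically about the center of mass, the solution is, as expected, an equal share of the weight per wire.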

Read more
Graphics

Light Stage Super-Resolution: Continuous High-Frequency Relighting

The light stage has been widely used in computer graphics for the past two decades, primarily to enable the relighting of human faces. By capturing the appearance of a human subject under different light sources, one obtains the light transport matrix of that subject, which enables image-based relighting in novel environments. However, due to the finite number of lights in the stage, the light transport matrix only represents a sparse sampling of the entire sphere of directions. As a consequence, relighting the subject with a point light or a directional source that does not coincide exactly with one of the stage lights requires interpolating and resampling the images corresponding to nearby lights, which leads to ghosting shadows, aliased specularities, and other artifacts. To ameliorate these artifacts and produce better results under arbitrary high-frequency lighting, this paper proposes a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage. Given an arbitrary "query" light direction, our method aggregates the captured images corresponding to neighboring lights in the stage and uses a neural network to synthesize a rendering of the face that appears to be illuminated by a "virtual" light source at the query location. This neural network must circumvent the inherent aliasing and regularity of the light stage data that was used for training, which we accomplish through the use of regularized traditional interpolation methods within our network. Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights, and it generalizes across a wide variety of subjects.
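For reference, the kind of baseline interpolation the learned method improves on can be sketched as an inverse-angular-distance blend of the images captured under the nearest stage lights (function and parameter names here are illustrative, not from the paper); it is exactly this blending that produces the ghosting and aliasing artifacts described above:

```python
import numpy as np

def blend_neighbors(query_dir, light_dirs, images, k=3, eps=1e-8):
    """Baseline light-stage relighting: blend the images captured
    under the k stage lights nearest to the query direction, weighted
    by inverse angular distance."""
    q = np.asarray(query_dir, float)
    q = q / np.linalg.norm(q)
    d = np.asarray(light_dirs, float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    ang = np.arccos(np.clip(d @ q, -1.0, 1.0))   # angle to each stage light
    near = np.argsort(ang)[:k]                   # k nearest lights
    w = 1.0 / (ang[near] + eps)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(images, float)[near], axes=1)
```

When the query direction coincides with a stage light, the blend collapses to that light's captured image; in between, it produces the smeared interpolation the paper's network is trained to sharpen.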

Read more
Graphics

LightGuider: Guiding Interactive Lighting Design using Suggestions, Provenance, and Quality Visualization

LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
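The weighting of quality criteria by designer preference, and the ranking of simulated next steps, can be sketched with a toy provenance-tree node (names and structure are illustrative, not LightGuider's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class DesignNode:
    """One snapshot in a LightGuider-style provenance tree.
    `scores` records how well the design meets each quality
    criterion, on a 0..1 scale."""
    label: str
    scores: dict
    children: list = field(default_factory=list)

    def quality(self, weights):
        """Criteria scores aggregated by the designer's preference weights."""
        total = sum(weights.values())
        return sum(weights[c] * self.scores.get(c, 0.0)
                   for c in weights) / total

    def best_suggestion(self, weights):
        """Among simulated next modeling steps (children), pick the one
        delivering the largest weighted quality improvement."""
        return max(self.children,
                   key=lambda ch: ch.quality(weights) - self.quality(weights))
```

Reweighting the criteria changes which simulated step is suggested, which is the mechanism by which the designer's current focus steers the guidance.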

Read more
Graphics

Lightform: Procedural Effects for Projected AR

Projected augmented reality, also called projection mapping or video mapping, is a form of augmented reality that uses projected light to directly augment 3D surfaces, as opposed to using pass-through screens or headsets. The value of projected AR is its ability to add a layer of digital content directly onto physical objects or environments in a way that can be viewed instantly by multiple people, unencumbered by a screen or additional setup. Because projected AR typically involves projecting onto non-flat, textured objects (especially those that are not conventionally used as projection surfaces), the digital content needs to be mapped and aligned to precisely fit the physical scene to ensure a compelling experience. Current projected AR techniques require extensive calibration at the time of installation, which is not conducive to iteration or change, whether intentional (the scene is reconfigured) or not (the projector is bumped or settles). The workflows are undefined and fragmented, making projected AR confusing and difficult for many to approach. For example, a digital artist may have the software expertise to create AR content but lack the experience in mounting, blending, and realigning projectors needed to complete an installation; the converse is true for many A/V installation teams and professionals. Projection mapping has therefore been limited to high-end event productions, concerts, and films, because it requires expensive, complex tools and skilled teams ($100K+ budgets). Lightform provides a technology that makes projected AR approachable, practical, intelligent, and robust through integrated hardware and computer-vision software. Lightform unites a currently fragmented workflow into a single cohesive process that gives users an approachable and robust way to create and control projected AR experiences.

Read more
Graphics

Liver Pathology Simulation: Algorithm for Haptic Rendering and Force Maps for Palpation Assessment

Preoperative gestures include tactile sampling of the mechanical properties of biological tissue for both histological and pathological considerations. Tactile properties used in conjunction with visual cues can provide useful feedback to the surgeon. The development of novel, cost-effective haptic-based simulators and their introduction into the minimally invasive surgery learning cycle can absorb the learning curve for young residents. Receiving pre-training in a core set of surgical skills can reduce skill acquisition time and risks. We present the integration of a real-time surface stiffness adjustment algorithm and a novel paradigm -- force maps -- in a visuo-haptic simulator module designed to train the diagnosis of internal organ diseases through palpation.
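The general idea behind stiffness-based haptic palpation can be sketched with a minimal penalty model, assuming (for illustration only, not the paper's algorithm) that the rendered force is a locally adjusted stiffness times the tool's penetration depth, with stiffness looked up in a 2D map over the organ surface:

```python
import numpy as np

def palpation_force(stiffness_map, u, v, depth):
    """Penalty-based haptic force for palpation (illustrative sketch).

    stiffness_map: 2D array of per-location stiffness values over the
    organ surface; (u, v) in [0, 1]^2 picks the contact location;
    depth is the tool's penetration into the surface."""
    h, w = stiffness_map.shape
    i = min(int(v * h), h - 1)
    j = min(int(u * w), w - 1)
    # Restoring force: local stiffness times penetration (zero if no contact).
    return stiffness_map[i, j] * max(depth, 0.0)
```

A stiffer patch in the map (e.g. a simulated lesion) yields a larger force at the same depth, which is the kind of spatial contrast a force map makes explicit for palpation assessment.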

Read more
Graphics

Local Fourier Slice Photography

Light field cameras provide intriguing possibilities, such as post-capture refocusing or the ability to synthesize images from novel viewpoints. This comes, however, at the price of significant storage requirements. Compression techniques can be used to reduce these requirements, but refocusing and reconstruction have so far still required a dense pixel representation. To avoid this, we introduce local Fourier slice photography, which allows refocused image reconstruction directly from a sparse wavelet representation of a light field, yielding either an image or a compressed representation of one. The result is made possible by wavelets that respect the "slicing's" intrinsic structure and enable us to derive exact reconstruction filters for the refocused image in closed form. Image reconstruction then amounts to applying these filters to the light field's wavelet coefficients, so no reconstruction of a dense pixel representation is required. We demonstrate that this substantially reduces storage requirements as well as computation times. We furthermore analyze the computational complexity of our algorithm and show that it scales linearly with the size of the reconstructed region and the number of non-negligible wavelet coefficients, i.e., with the visual complexity.
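For contrast, the classical dense-pixel refocusing that the paper avoids can be sketched as shift-and-add synthetic-aperture imaging over the sub-aperture views (a baseline, not the paper's wavelet-domain method):

```python
import numpy as np

def refocus_shift_and_add(views, offsets, alpha):
    """Classical synthetic-aperture refocusing: shift each sub-aperture
    view by its aperture offset scaled by the focus parameter alpha,
    then average.  Operates on a dense pixel representation, which is
    precisely what local Fourier slice photography sidesteps.

    views:   (m, h, w) sub-aperture images;
    offsets: (m, 2) integer (dy, dx) aperture positions."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, (dy, dx) in zip(views, offsets):
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        acc += np.roll(img, shift, axis=(0, 1))
    return acc / len(views)
```

At the alpha matching a point's disparity, its contributions stack coherently (the point comes into focus); at other alphas they spread out, which is the familiar refocus blur.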

Read more
Graphics

LookOut! Interactive Camera Gimbal Controller for Filming Long Takes

The job of a camera operator is more challenging, and potentially dangerous, when filming long moving camera shots. Broadly, the operator must keep the actors in-frame while safely navigating around obstacles and fulfilling an artistic vision. We propose a unified hardware and software system that distributes some of the camera operator's burden, freeing them up to focus on safety and aesthetics during a take. Our real-time system provides a solo operator with end-to-end control, so they can balance on-set responsiveness to the action against planned storyboards and framing, all while looking where they are going. By default, we film without a field monitor. Our LookOut system is built around a lightweight commodity camera gimbal, with heavy modifications to the controller, which would normally just provide active stabilization. Our control algorithm reacts to speech commands, video, and a pre-made script. Specifically, our automatic monitoring of the live video feed saves the operator from distractions. In pre-production, an artist uses our GUI to design a sequence of high-level camera "behaviors." Those can be specific, based on a storyboard, or looser objectives, such as "frame both actors." During filming, a machine-readable script exported from the GUI is combined with the sensor readings to drive the gimbal. To validate our algorithm, we compared tracking strategies, interfaces, and hardware protocols, and collected impressions from a) film-makers who used all aspects of our system, and b) film-makers who watched footage filmed using LookOut.
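A framing objective like "frame both actors" ultimately reduces to driving tracked on-screen positions toward a target composition point; a minimal PD-style sketch of that loop (illustrative only, not LookOut's actual controller) might look like:

```python
class FramingController:
    """PD loop that steers a gimbal so a tracked actor's on-screen
    position approaches a target composition point (e.g. a
    rule-of-thirds intersection).  Gains are illustrative."""
    def __init__(self, kp=2.0, kd=0.4):
        self.kp, self.kd = kp, kd
        self.prev_err = (0.0, 0.0)

    def update(self, actor_xy, target_xy, dt):
        """Return (yaw_rate, pitch_rate) commands from the current
        on-screen framing error and its finite-difference derivative."""
        ex = target_xy[0] - actor_xy[0]
        ey = target_xy[1] - actor_xy[1]
        dex = (ex - self.prev_err[0]) / dt
        dey = (ey - self.prev_err[1]) / dt
        self.prev_err = (ex, ey)
        return (self.kp * ex + self.kd * dex,
                self.kp * ey + self.kd * dey)
```

In a full system the actor position would come from the live video feed and the target from the machine-readable behavior script; here both are just supplied by the caller.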

Read more
