Publications


Featured research published by Tim Weyrich.


International Conference on Computer Graphics and Interactive Techniques | 2006

Analysis of human faces using a measurement-based skin reflectance model

Tim Weyrich; Wojciech Matusik; Hanspeter Pfister; Bernd Bickel; Craig Donner; Chien Tu; Janet McAndless; Jinho Lee; Addy Ngan; Henrik Wann Jensen; Markus H. Gross

We have measured 3D face geometry, skin reflectance, and subsurface scattering using custom-built devices for 149 subjects of varying age, gender, and race. We developed a novel skin reflectance model whose parameters can be estimated from measurements. The model decomposes the large amount of measured skin data into a spatially-varying analytic BRDF, a diffuse albedo map, and diffuse subsurface scattering. Our model is intuitive, physically plausible, and -- since we do not use the original measured data -- easy to edit as well. High-quality renderings come close to reproducing real photographs. The analysis of the model parameters for our sample population reveals variations according to subject age, gender, skin type, and external factors (e.g., sweat, cold, or makeup). Using our statistics, a user can edit the overall appearance of a face (e.g., changing skin type and age) or change small-scale features using texture synthesis (e.g., adding moles and freckles). We are making the collected statistics publicly available to the research community for applications in face synthesis and analysis.
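
As a rough illustration of the decomposition described above, the sketch below evaluates a spatially varying skin reflectance as a diffuse albedo term plus a single microfacet-style specular lobe. The lobe shape and parameter names are assumptions chosen for illustration; the paper's model additionally includes diffuse subsurface scattering and its own parameterization.

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def skin_brdf(albedo, rho_s, m, n, l, v):
    """Diffuse albedo plus one Beckmann-style specular lobe.
    albedo : RGB diffuse albedo at this texel
    rho_s  : specular intensity, m : roughness (per-texel parameters)
    n, l, v: surface normal, light and view directions
    Fresnel, shadowing and subsurface scattering are omitted for brevity."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)                       # half vector
    cos_h = max(float(n @ h), 1e-4)
    tan2 = (1.0 - cos_h ** 2) / cos_h ** 2
    d = np.exp(-tan2 / m ** 2) / (np.pi * m ** 2 * cos_h ** 4)
    spec = rho_s * d / (4.0 * max(float(n @ l), 1e-4) * max(float(n @ v), 1e-4))
    return np.asarray(albedo) / np.pi + spec

# One skin texel lit slightly off-axis from the camera
f = skin_brdf([0.6, 0.4, 0.35], rho_s=0.05, m=0.3,
              n=[0, 0, 1], l=[0, 0.3, 1], v=[0, -0.3, 1])
```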


International Conference on Computer Graphics and Interactive Techniques | 2012

3D-printing of non-assembly, articulated models

Jacques Calì; Dan Andrei Calian; Cristina Amati; Rebecca Kleinberger; Anthony Steed; Jan Kautz; Tim Weyrich

Additive manufacturing (3D printing) is commonly used to produce physical models for a wide variety of applications, from archaeology to design. While static models are directly supported, it is desirable to also be able to print models with functional articulations, such as a hand with joints and knuckles, without the need for manual assembly of joint components. Apart from having to address limitations inherent to the printing process, this poses a particular challenge for articulated models that should be posable: to allow the model to hold a pose, joints need to exhibit internal friction to withstand gravity, without their parts fusing during 3D printing. This has not been possible with previous printable joint designs. In this paper, we propose a method for converting 3D models into printable, functional, non-assembly models with internal friction. To this end, we have designed an intuitive work-flow that takes an appropriately rigged 3D model, automatically fits novel 3D-printable and posable joints, and provides an interface for specifying rotational constraints. We show a number of results for different articulated models, demonstrating the effectiveness of our method.
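
The key constraint above is that a print-in-place joint needs a gap large enough that the parts do not fuse during printing, yet small enough to leave internal friction so the model holds a pose. The toy calculation below sizes such a ball-and-socket joint; the ratios and millimetre values are hypothetical placeholders, not dimensions from the paper.

```python
def size_ball_joint(bone_radius_mm, min_clearance_mm=0.4, friction_margin_mm=0.1):
    """Pick ball and socket radii for a print-in-place joint.
    The gap must exceed the printer's minimum resolvable clearance so the
    parts do not fuse, yet stay small enough that friction can hold a pose.
    All ratios and millimetre values are illustrative placeholders."""
    ball_r = 0.75 * bone_radius_mm                 # ball scaled to the limb
    gap = min_clearance_mm + friction_margin_mm    # just above the fuse limit
    socket_r = ball_r + gap
    return ball_r, socket_r

ball_r, socket_r = size_ball_joint(bone_radius_mm=6.0)
```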


Eurographics | 2004

Post-processing of scanned 3D surface data

Tim Weyrich; Mark Pauly; Richard Keiser; Simon Heinzle; Sascha Scandella; Markus H. Gross

3D shape acquisition has become a major tool for creating digital 3D surface data in a variety of application fields. Despite the steady increase in accuracy, most available scanning techniques cause severe scanning artifacts such as noise, outliers, holes, or ghost geometry. To apply sophisticated modeling operations on these data sets, substantial post-processing is usually required. In this paper, we address a variety of scanning artifacts that are created by common optical scanners and provide a comprehensive set of user-guided tools to process corrupted data sets. These include an eraser tool, low-pass filters for noise removal, a set of outlier detection methods, and various up-sampling and hole-filling tools. These techniques can be applied early in the content creation pipeline. Therefore, all our tools are implemented to operate directly on the acquired point cloud. We also emphasize the need for extensive user control and an efficient visual feedback loop. The effectiveness of our scan cleaning tools is demonstrated on various models acquired with commercial laser-range scanners and low-cost structured light scanners.
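
Because all of these tools operate directly on the acquired point cloud, many of them reduce to neighbourhood queries on the points. A minimal sketch of one such tool, a k-nearest-neighbour outlier test, is shown below; the thresholding rule is one simple choice for illustration, whereas the paper provides several detection methods.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_factor=2.0):
    """Drop points whose mean distance to their k nearest neighbours lies far
    above the global average. points: (N, 3) array. The thresholding rule is
    one simple choice among many possible outlier tests."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)     # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_factor * mean_d.std()
    return points[keep]

# Example: a noisy plane with a few far-away outliers mixed in
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(0, 1, (500, 2)), rng.normal(0, 0.002, 500)])
outliers = rng.uniform(-5, 5, (10, 3))
cleaned = remove_outliers(np.vstack([plane, outliers]))
```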


International Conference on 3D Vision | 2013

Real-Time 3D Reconstruction in Dynamic Scenes Using Point-Based Fusion

Maik Keller; Damien Lefloch; Martin Lambers; Shahram Izadi; Tim Weyrich; Andreas Kolb

Real-time or online 3D reconstruction has wide applicability and receives further interest due to the availability of consumer depth cameras. Typical approaches use a moving sensor to accumulate depth measurements into a single model which is continuously refined. Designing such systems is an intricate balance between reconstruction quality, speed, spatial scale, and scene assumptions. Existing online methods either trade scale to achieve higher-quality reconstructions of small objects/scenes, or handle larger scenes by trading real-time performance and/or quality, or by limiting the bounds of the active reconstruction. Additionally, many systems assume a static scene, and cannot robustly handle scene motion or reconstructions that evolve to reflect scene changes. We address these limitations with a new system for real-time dense reconstruction with equivalent quality to existing online methods, but with support for additional spatial scale and robustness in dynamic scenes. Our system is designed around a simple and flat point-based representation, which directly works with the input acquired from range/depth sensors, without the overhead of converting between representations. The use of points enables speed and memory efficiency, directly leveraging the standard graphics pipeline for all central operations, i.e., camera pose estimation, data association, outlier removal, fusion of depth maps into a single denoised model, and detection and update of dynamic objects. We conclude with qualitative and quantitative results that highlight robust tracking and high quality reconstructions of a diverse set of scenes at varying scales.
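
At the heart of such a point-based pipeline, each incoming depth sample that associates with an existing surfel is merged by a confidence-weighted running average. The sketch below shows that style of update in NumPy; the field names and the exact weighting are assumptions for illustration, not the system's implementation.

```python
import numpy as np

def fuse_sample(pos, nrm, weight, new_pos, new_nrm, new_weight=1.0):
    """Merge one associated depth sample into an existing surfel by a
    confidence-weighted running average of position and normal."""
    total = weight + new_weight
    pos = (weight * pos + new_weight * new_pos) / total
    nrm = weight * nrm + new_weight * new_nrm
    nrm /= np.linalg.norm(nrm)
    return pos, nrm, total

# A surfel accumulated from two earlier samples absorbs a third measurement
p, n, w = fuse_sample(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]), 2.0,
                      np.array([0.01, 0.0, 0.98]), np.array([0.05, 0.0, 1.0]))
```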


International Conference on Computer Graphics and Interactive Techniques | 2008

A system for high-volume acquisition and matching of fresco fragments: reassembling Theran wall paintings

Benedict J. Brown; Corey Toler-Franklin; Diego Nehab; Michael Burns; David P. Dobkin; Andreas Vlachopoulos; Christos Doumas; Szymon Rusinkiewicz; Tim Weyrich

Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.
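
For intuition about the alignment step, the sketch below shows a single generic ICP-style iteration: nearest-neighbour correspondences followed by a Kabsch (SVD) rigid fit. The paper's own registration and matching algorithms are more robust and automated; this is only the textbook building block.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One rigid-alignment iteration: nearest-neighbour correspondences into
    dst, then a Kabsch (SVD) fit of rotation and translation. src, dst: (N, 3)."""
    _, idx = cKDTree(dst).query(src)
    d = dst[idx]
    src_c, d_c = src - src.mean(0), d - d.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ d_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = d.mean(0) - R @ src.mean(0)
    return src @ R.T + t                       # transformed source points
```

Iterating such a step a few times refines the pose of one scan against another once a rough initial alignment is available.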


International Conference on Computer Graphics and Interactive Techniques | 2009

Fabricating microgeometry for custom surface reflectance

Tim Weyrich; Pieter Peers; Wojciech Matusik; Szymon Rusinkiewicz

We propose a system for manufacturing physical surfaces that, in aggregate, exhibit a desired surface appearance. Our system begins with a user specification of a BRDF, or simply a highlight shape, and infers the required distribution of surface slopes. We sample this distribution, optimize for a maximally-continuous and valley-minimizing height field, and finally mill the surface using a computer-controlled machine tool. We demonstrate a variety of surfaces, ranging from reproductions of measured BRDFs to materials with unconventional highlights.
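
The pipeline above turns a target distribution of surface slopes into a millable height field. A crude stand-in for the final step is direct integration of sampled slopes, as sketched below; the paper instead optimizes for a maximally continuous, valley-minimizing surface, so this is illustration only.

```python
import numpy as np

def integrate_slopes(sx, sy, dx=1.0):
    """Forward integration of per-texel target slopes (dz/dx, dz/dy) into a
    height field: along the top row first, then down each column. A real
    system would instead solve a least-squares (Poisson-style) problem."""
    h = np.zeros_like(sx, dtype=float)
    h[0, 1:] = np.cumsum(sx[0, :-1]) * dx
    for i in range(1, h.shape[0]):
        h[i, :] = h[i - 1, :] + sy[i - 1, :] * dx
    return h

# Slopes drawn from an isotropic Gaussian approximate a rough, single-lobe highlight
rng = np.random.default_rng(1)
sx, sy = rng.normal(0.0, 0.15, size=(2, 128, 128))
height = integrate_slopes(sx, sy, dx=0.05)
```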


International Conference on Computer Graphics and Interactive Techniques | 2007

Digital bas-relief from 3D scenes

Tim Weyrich; Jia Deng; Connelly Barnes; Szymon Rusinkiewicz; Adam Finkelstein

We present a system for semi-automatic creation of bas-relief sculpture. As an artistic medium, relief spans the continuum between 2D drawing or painting and full 3D sculpture. Bas-relief (or low relief) presents the unique challenge of squeezing shapes into a nearly-flat surface while maintaining as much as possible the perception of the full 3D scene. Our solution to this problem adapts methods from the tone-mapping literature, which addresses the similar problem of squeezing a high dynamic range image into the (low) dynamic range available on typical display devices. However, the bas-relief medium imposes its own unique set of requirements, such as maintaining small, fixed-size depth discontinuities. Given a 3D model, camera, and a few parameters describing the relative attenuation of different frequencies in the shape, our system creates a relief that gives the illusion of the 3D shape from a given vantage point while conforming to a greatly compressed height.
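
Since the approach borrows from gradient-domain tone mapping, the core operation is attenuating large depth gradients more strongly than small ones and then re-integrating. The sketch below applies a Fattal-style attenuation to a depth map's gradients; the exponent and scale are illustrative values, and the final Poisson re-integration into a relief height field is omitted.

```python
import numpy as np

def attenuate_depth_gradients(depth, alpha=0.1, beta=0.6):
    """Fattal-style gradient attenuation applied to a depth map: gradients
    larger than alpha (depth discontinuities) are compressed, fine detail is
    largely preserved. The attenuated field (gx, gy) would then be
    re-integrated into a relief height field, e.g. by a Poisson solve
    (omitted here). alpha and beta are illustrative values."""
    gx = np.diff(depth, axis=1, append=depth[:, -1:])
    gy = np.diff(depth, axis=0, append=depth[-1:, :])
    mag = np.hypot(gx, gy) + 1e-8
    scale = (alpha / mag) * (mag / alpha) ** beta
    return gx * scale, gy * scale

# Example: a synthetic depth map with a sharp step that the relief must flatten
depth = np.zeros((64, 64))
depth[:, 32:] = 5.0
gx, gy = attenuate_depth_gradients(depth)
```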


International Conference on Computer Graphics and Interactive Techniques | 2008

A layered, heterogeneous reflectance model for acquiring and rendering human skin

Craig Donner; Tim Weyrich; Eugene d'Eon; Ravi Ramamoorthi; Szymon Rusinkiewicz

We introduce a layered, heterogeneous spectral reflectance model for human skin. The model captures the inter-scattering of light among layers, each of which may have an independent set of spatially-varying absorption and scattering parameters. For greater physical accuracy and control, we introduce an infinitesimally thin absorbing layer between scattering layers. To obtain parameters for our model, we use a novel acquisition method that begins with multi-spectral photographs. By using an inverse rendering technique, along with known chromophore spectra, we optimize for the best set of parameters for each pixel of a patch. Our method finds close matches to a wide variety of inputs with low residual error. We apply our model to faithfully reproduce the complex variations in skin pigmentation. This is in contrast to most previous work, which assumes that skin is homogeneous or composed of homogeneous layers. We demonstrate the accuracy and flexibility of our model by creating complex skin visual effects such as veins, tattoos, rashes, and freckles, which would be difficult to author using only albedo textures at the skin's outer surface. Also, by varying the parameters to our model, we simulate effects from external forces, such as visible changes in blood flow within the skin due to external pressure.
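
For a sense of what a per-pixel parameter fit involves, the sketch below fits chromophore concentrations to multi-spectral measurements under a simple Beer-Lambert linearisation. The paper's actual inverse rendering of a layered, scattering skin model is considerably more involved; the extinction spectra and band counts here are inputs one would supply.

```python
import numpy as np

def fit_chromophores(reflectance, extinction):
    """Per-pixel least-squares fit of chromophore concentrations under a
    Beer-Lambert linearisation: -log R(lambda) ~ sum_i c_i * e_i(lambda).
    reflectance: (n_pixels, n_bands), extinction: (n_bands, n_chromophores).
    A stand-in for the paper's full inverse rendering of a layered model."""
    absorbance = -np.log(np.clip(reflectance, 1e-6, 1.0))
    c, *_ = np.linalg.lstsq(extinction, absorbance.T, rcond=None)
    return c.T                                   # (n_pixels, n_chromophores)

# Toy example: two hypothetical chromophores measured in four spectral bands
extinction = np.array([[0.9, 0.1], [0.7, 0.3], [0.4, 0.6], [0.2, 0.8]])
true_c = np.array([[0.5, 0.2], [0.1, 0.7]])
reflectance = np.exp(-(true_c @ extinction.T))
print(fit_chromophores(reflectance, extinction))   # recovers true_c
```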


Eurographics | 2006

GPU-based ray-casting of quadratic surfaces

Christian Sigg; Tim Weyrich; Mario Botsch; Markus H. Gross

Quadratic surfaces are frequently used primitives in geometric modeling and scientific visualization, such as rendering of tensor fields, particles, and molecular structures. While high visual quality can be achieved using sophisticated ray tracing techniques, interactive applications typically use either coarsely tessellated polygonal approximations or pre-rendered depth sprites, thereby trading off visual quality and perspective correctness for higher rendering performance. In contrast, we propose an efficient rendering technique for quadric primitives based on GPU-accelerated splatting. While providing performance similar to point sprites, our method provides perspective correctness and superior visual quality using per-pixel ray-casting.
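
The per-pixel work in such a splat-based ray-caster amounts to intersecting the view ray with an implicit quadric. The NumPy sketch below shows that intersection for a quadric given as a homogeneous 4x4 matrix Q; a fragment shader would perform the same arithmetic per pixel.

```python
import numpy as np

def ray_quadric(o, d, Q, eps=1e-12):
    """Intersect the ray o + t*d with the quadric x^T Q x = 0, where Q is a
    symmetric 4x4 matrix in homogeneous coordinates. Returns the nearest hit
    point in front of the ray origin, or None."""
    oh, dh = np.append(o, 1.0), np.append(d, 0.0)
    a, b, c = dh @ Q @ dh, 2.0 * (oh @ Q @ dh), oh @ Q @ oh
    if abs(a) < eps:
        return None                            # degenerate ray/quadric pairing
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                            # the ray misses the quadric
    roots = [(-b - np.sqrt(disc)) / (2.0 * a), (-b + np.sqrt(disc)) / (2.0 * a)]
    ts = [t for t in roots if t >= 0.0]
    if not ts:
        return None                            # both intersections behind the ray
    return np.asarray(o) + min(ts) * np.asarray(d)

# Unit sphere: x^2 + y^2 + z^2 - 1 = 0
Q = np.diag([1.0, 1.0, 1.0, -1.0])
hit = ray_quadric([0.0, 0.0, 3.0], [0.0, 0.0, -1.0], Q)   # -> [0, 0, 1]
```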


Computer Vision and Pattern Recognition | 2011

Capturing Time-of-Flight data with confidence

Malcolm Reynolds; Jozef Doboš; Leto Peel; Tim Weyrich; Gabriel J. Brostow

Time-of-Flight cameras provide high-frame-rate depth measurements within a limited range of distances. These readings can be extremely noisy and display unique errors, for instance, where scenes contain depth discontinuities or materials with low infrared reflectivity. Previous works have treated the amplitude of each Time-of-Flight sample as a measure of confidence. In this paper, we demonstrate the shortcomings of this common lone heuristic, and propose an improved per-pixel confidence measure using a Random Forest regressor trained with real-world data. Using an industrial laser scanner for ground truth acquisition, we evaluate our technique on data from two different Time-of-Flight cameras. We argue that an improved confidence measure leads to superior reconstructions in subsequent steps of traditional scan processing pipelines. At the same time, data with confidence reduces the need for point cloud smoothing and median filtering.
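
The abstract describes training a Random Forest regressor on real captures against laser-scanned ground truth. The sketch below sets up that kind of per-pixel regression with scikit-learn on synthetic stand-in data; the feature set (amplitude, depth, local depth-gradient magnitude) is a guess for illustration, not the feature set used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_confidence_model(amplitude, depth, ground_truth, n_estimators=100):
    """Regress per-pixel depth error from simple ToF features. The target is
    the absolute error against a ground-truth depth map; low predicted error
    then maps to high confidence."""
    gy, gx = np.gradient(depth)
    X = np.column_stack([amplitude.ravel(), depth.ravel(),
                         np.hypot(gx, gy).ravel()])
    y = np.abs(depth - ground_truth).ravel()
    model = RandomForestRegressor(n_estimators=n_estimators, random_state=0)
    model.fit(X, y)
    return model

# Synthetic stand-ins for a ToF capture and a laser-scanned reference
rng = np.random.default_rng(0)
depth_gt = rng.uniform(0.5, 4.0, (64, 64))
amplitude = rng.uniform(0.05, 1.0, (64, 64))
depth = depth_gt + rng.normal(0.0, 0.03, (64, 64)) / amplitude   # dimmer = noisier
model = train_confidence_model(amplitude, depth, depth_gt)
# At test time, the model's predicted error yields a per-pixel confidence map.
```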

Collaboration


Dive into Tim Weyrich's collaborations.

Top Co-Authors

Melissa Terras (University College London)
Alberto Campagnolo (University of the Arts London)
Adam Gibson (University College London)
S Robson (University College London)
Jan Kautz (University College London)
Kazim Pal (University College London)