Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Diego Nehab is active.

Publication


Featured research published by Diego Nehab.


International Conference on Computer Graphics and Interactive Techniques | 2005

Efficiently combining positions and normals for precise 3D geometry

Diego Nehab; Szymon Rusinkiewicz; James Davis; Ravi Ramamoorthi

Range scanning, manual 3D editing, and other modeling approaches can provide information about the geometry of surfaces in the form of either 3D positions (e.g., triangle meshes or range images) or orientations (normal maps or bump maps). We present an algorithm that combines these two kinds of estimates to produce a new surface that approximates both. Our formulation is linear, allowing it to operate efficiently on complex meshes commonly used in graphics. It also treats high- and low-frequency components separately, allowing it to optimally combine outputs from data sources such as stereo triangulation and photometric stereo, which have different error-vs.-frequency characteristics. We demonstrate the ability of our technique to both recover high-frequency details and avoid low-frequency bias, producing surfaces that are more widely applicable than position or orientation data alone.
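
To make the linear formulation concrete, here is a minimal 1D sketch (not the paper's implementation): a height field is recovered by linear least squares from noisy position measurements and from slopes standing in for measured normals, with a weight trading off the two terms. The function name, weight, and toy data are all illustrative.

```python
import numpy as np

def fuse_positions_normals(p, s, lam=0.1):
    """Solve  min_z  lam * sum (z[i]-p[i])^2 + sum ((z[i+1]-z[i]) - s[i])^2.
    Positions anchor the low frequencies; normal-derived slopes supply detail."""
    n = len(p)
    A_pos = np.sqrt(lam) * np.eye(n)          # position (data) term
    D = np.zeros((n - 1, n))                  # gradient term: z[i+1] - z[i]
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = np.vstack([A_pos, D])
    b = np.concatenate([np.sqrt(lam) * p, s])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

# Noisy low-frequency positions, accurate high-frequency slopes
x = np.linspace(0, 2 * np.pi, 200)
truth = np.sin(3 * x)
rng = np.random.default_rng(0)
p = truth + rng.normal(0, 0.3, x.shape)       # noisy position estimates
s = np.diff(truth)                            # slopes from "normals"
z = fuse_positions_normals(p, s)
print("rms error:", np.sqrt(np.mean((z - truth) ** 2)))
```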


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Spacetime stereo: a unifying framework for depth from triangulation

James Davis; Diego Nehab; Ravi Ramamoorthi; Szymon Rusinkiewicz

Depth from triangulation has traditionally been investigated in a number of independent threads of research, with methods such as stereo, laser scanning, and coded structured light considered separately. We propose a common framework called spacetime stereo that unifies and generalizes many of these previous methods. To show the practical utility of the framework, we develop two new algorithms for depth estimation: depth from unstructured illumination change and depth estimation in dynamic scenes. Based on our analysis, we show that methods derived from the spacetime stereo framework can be used to recover depth in situations in which existing methods perform poorly.
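
A toy illustration of the unifying idea, assuming rectified image stacks and a plain SSD cost: the matching window simply extends over time as well as space. The function names and synthetic data below are invented for illustration, not taken from the paper.

```python
import numpy as np

def spacetime_cost(left, right, x, y, t, d, wx=2, wy=2, wt=1):
    """SSD cost for disparity d at pixel (x, y), frame t, aggregated over
    a (2wt+1) x (2wy+1) x (2wx+1) spatiotemporal window.
    left, right: image stacks of shape (T, H, W)."""
    lw = left[t - wt:t + wt + 1, y - wy:y + wy + 1, x - wx:x + wx + 1]
    rw = right[t - wt:t + wt + 1, y - wy:y + wy + 1, x - d - wx:x - d + wx + 1]
    return float(np.sum((lw - rw) ** 2))

def best_disparity(left, right, x, y, t, dmax):
    return int(np.argmin([spacetime_cost(left, right, x, y, t, d)
                          for d in range(dmax + 1)]))

# Synthetic stack: the right view is the left shifted by 3 pixels
rng = np.random.default_rng(0)
left = rng.random((5, 32, 32))
right = np.roll(left, -3, axis=2)
print(best_disparity(left, right, x=16, y=16, t=2, dmax=8))  # -> 3
```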


International Conference on Computer Graphics and Interactive Techniques | 2008

A system for high-volume acquisition and matching of fresco fragments: reassembling Theran wall paintings

Benedict J. Brown; Corey Toler-Franklin; Diego Nehab; Michael Burns; David P. Dobkin; Andreas Vlachopoulos; Christos Doumas; Szymon Rusinkiewicz; Tim Weyrich

Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.
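
The fragment-matching step can be hinted at with a heavily simplified 2D sketch (the paper matches scanned 3D geometry): fragment boundaries are reduced to turning-angle signatures, and a candidate mate is scored by comparing one signature against the reversed, negated signature of the other over all cyclic shifts. Everything here, including the equal-length-contour assumption, is hypothetical.

```python
import numpy as np

def turning_signature(contour):
    """Turning angles along a closed 2D contour (N x 2 array), a crude
    1D shape signature for a fragment boundary."""
    d = np.diff(contour, axis=0, append=contour[:1])
    ang = np.arctan2(d[:, 1], d[:, 0])
    turn = np.diff(ang, append=ang[:1])
    return np.arctan2(np.sin(turn), np.cos(turn))   # wrap to [-pi, pi]

def match_score(sig_a, sig_b):
    """Score two equal-length boundary signatures: a mating fragment
    traverses the shared break in the opposite direction with opposite
    turning, so compare against the reversed, negated signature over
    all cyclic shifts. Smaller is a better candidate match."""
    rev = -sig_b[::-1]
    return min(float(np.mean((sig_a - np.roll(rev, k)) ** 2))
               for k in range(len(rev)))

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
blob = np.stack([np.cos(theta) + 0.2 * np.cos(5 * theta),
                 np.sin(theta)], axis=1)
sig_a = turning_signature(blob)
sig_b = -sig_a[::-1]              # signature of a perfectly mating edge
print(match_score(sig_a, sig_b))  # ~0 at the correct alignment
```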


International Conference on Computer Graphics and Interactive Techniques | 2007

Fast triangle reordering for vertex locality and reduced overdraw

Pedro V. Sander; Diego Nehab; Joshua Barczak

We present novel algorithms that optimize the order in which triangles are rendered, to improve post-transform vertex cache efficiency as well as for view-independent overdraw reduction. The resulting triangle orders perform on par with previous methods, but are orders of magnitude faster to compute. The improvements in processing speed allow us to perform the optimization right after a model is loaded, when more information on the host hardware is available. This allows our vertex cache optimization to often outperform other methods. In fact, our algorithms can even be executed interactively, allowing for re-optimization in case of changes to geometry or topology, which happen often in CAD/CAM applications. We believe that most real-time rendering applications will immediately benefit from these new results.
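
For a sense of what the triangle orders are optimized against, here is a small sketch of the standard post-transform cache model: the average cache miss ratio (ACMR) counts vertex-processing invocations per triangle under a FIFO cache. The grid mesh and names are illustrative; the paper's contribution is the fast reordering itself, which is not shown.

```python
import random
from collections import deque

def acmr(triangles, cache_size=16):
    """Average cache miss ratio: vertex-shader invocations per triangle
    under a FIFO post-transform cache model. Lower is better; 0.5 is the
    asymptotic lower bound on a regular grid, 3.0 the worst case."""
    cache = deque(maxlen=cache_size)
    misses = 0
    for tri in triangles:
        for v in tri:
            if v not in cache:
                misses += 1
                cache.append(v)   # FIFO eviction via maxlen
    return misses / len(triangles)

# A scanline-ordered grid reuses cached vertices; a shuffled order barely does.
grid = [(x + y * 11, x + 1 + y * 11, x + (y + 1) * 11)
        for y in range(10) for x in range(10)]
shuffled = random.sample(grid, len(grid))
print(acmr(grid), acmr(shuffled))
```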


Eurographics | 2004

Stratified point sampling of 3D models

Diego Nehab; Philip Shilane

Point sampling is an important intermediate step for a variety of computer graphics applications, and specialized sampling strategies have been developed to satisfy the requirements of each problem. In this article, we present a technique to generate a stratified sampling of 3D models that is applicable across many domains. The algorithm voxelizes the model and selects one sample per voxel, restricted to the original model's surface. Parameters allow control of the uniformity of the sample placement and the minimum distance between samples. We demonstrate the effectiveness of this technique in selecting stroke locations for painterly rendering of models and for producing sampled geometry used as input to shape descriptors.
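
A minimal sketch of the voxel-stratified sampling as described, assuming a triangle mesh: draw candidate points uniformly by area, then keep one sample per occupied voxel. The paper's controls for uniformity and minimum sample distance are omitted, and all names are illustrative.

```python
import numpy as np

def stratified_samples(vertices, faces, voxel_size, n_candidates=10000):
    """One-sample-per-voxel stratified sampling of a triangle mesh."""
    rng = np.random.default_rng(0)
    tris = vertices[faces]                           # (F, 3, 3)
    # Area-weighted triangle selection
    e1, e2 = tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    idx = rng.choice(len(faces), n_candidates, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle
    r1, r2 = rng.random(n_candidates), rng.random(n_candidates)
    u = 1 - np.sqrt(r1)
    v = np.sqrt(r1) * (1 - r2)
    w = np.sqrt(r1) * r2
    pts = (u[:, None] * tris[idx, 0] + v[:, None] * tris[idx, 1]
           + w[:, None] * tris[idx, 2])
    # Keep the first candidate that lands in each voxel
    keys = np.floor(pts / voxel_size).astype(int)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return pts[keep]

# Unit square made of two triangles
V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
F = np.array([[0, 1, 2], [0, 2, 3]])
print(len(stratified_samples(V, F, voxel_size=0.1)))  # roughly 100 samples
```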


International Conference on Computer Graphics and Interactive Techniques | 2008

Random-access rendering of general vector graphics

Diego Nehab; Hugues Hoppe

We introduce a novel representation for random-access rendering of antialiased vector graphics on the GPU, along with efficient encoding and rendering algorithms. The representation supports a broad class of vector primitives, including multiple layers of semitransparent filled and stroked shapes, with quadratic outlines and color gradients. Our approach is to create a coarse lattice in which each cell contains a variable-length encoding of the graphics primitives it overlaps. These cell-specialized encodings are interpreted at runtime within a pixel shader. Advantages include localized memory access and the ability to map vector graphics onto arbitrary surfaces, or under arbitrary deformations. Most importantly, we perform both prefiltering and supersampling within a single pixel shader invocation, achieving inter-primitive antialiasing at no added memory bandwidth cost. We present an efficient encoding algorithm, and demonstrate high-quality real-time rendering of complex, real-world examples.
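
A drastically simplified CPU sketch of the cell-specialized representation, using filled circles in place of the paper's layered vector primitives: each cell of a coarse lattice stores the primitives that overlap it, and the per-pixel "shader" interprets only its own cell's short list. The variable-length encoding and the antialiasing, the paper's key points, are omitted.

```python
import numpy as np

# Primitives: filled circles (cx, cy, r, gray), a stand-in for the
# paper's richer primitive set, over the unit square.
circles = [(0.3, 0.3, 0.25, 0.2), (0.7, 0.6, 0.3, 0.8), (0.5, 0.5, 0.1, 0.5)]

def build_lattice(prims, n=8):
    """Coarse n x n lattice; each cell lists the primitives whose
    bounding box overlaps it (a conservative test)."""
    cells = [[[] for _ in range(n)] for _ in range(n)]
    for i, (cx, cy, r, _) in enumerate(prims):
        x0, x1 = int(max(cx - r, 0) * n), int(min(cx + r, 1 - 1e-9) * n)
        y0, y1 = int(max(cy - r, 0) * n), int(min(cy + r, 1 - 1e-9) * n)
        for gy in range(y0, y1 + 1):
            for gx in range(x0, x1 + 1):
                cells[gy][gx].append(i)
    return cells

def shade(px, py, prims, cells, n=8):
    """'Pixel shader': interpret the cell-local list back-to-front."""
    color = 1.0  # white background
    for i in cells[int(py * n)][int(px * n)]:
        cx, cy, r, gray = prims[i]
        if (px - cx) ** 2 + (py - cy) ** 2 <= r * r:
            color = gray
    return color

cells = build_lattice(circles)
img = np.array([[shade((x + 0.5) / 64, (y + 0.5) / 64, circles, cells)
                 for x in range(64)] for y in range(64)])
print(img.shape, img.min(), img.max())
```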


ACM Transactions on Graphics | 2014

Automating Image Morphing Using Structural Similarity on a Halfway Domain

Jing Liao; Rodolfo S. Lima; Diego Nehab; Hugues Hoppe; Pedro V. Sander; Jinhui Yu

The main challenge in achieving good image morphs is to create a map that aligns corresponding image elements. Our aim is to help automate this often tedious task. We compute the map by optimizing the compatibility of corresponding warped image neighborhoods using an adaptation of structural similarity. The optimization is regularized by a thin-plate spline and may be guided by a few user-drawn points. We parameterize the map over a halfway domain and show that this representation offers many benefits. The map is able to treat the image pair symmetrically, model simple occlusions continuously, span partially overlapping images, and define extrapolated correspondences. Moreover, it enables direct evaluation of the morph in a pixel shader without mesh rasterization. We improve the morphs by optimizing quadratic motion paths and by seamlessly extending content beyond the image boundaries. We parallelize the algorithm on a GPU to achieve a responsive interface and demonstrate challenging morphs obtained with little effort.
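
A toy sketch of evaluating a halfway-domain map directly per pixel, under invented simplifications (a constant vector field and nearest-neighbor sampling): the halfway point for an output pixel is found by a short fixed-point iteration, and the two images are then sampled at p - v and p + v and blended.

```python
import numpy as np

def sample(img, pt):
    """Nearest-neighbor sample with clamping (bilinear in practice)."""
    h, w = img.shape
    x = int(np.clip(round(pt[0]), 0, w - 1))
    y = int(np.clip(round(pt[1]), 0, h - 1))
    return img[y, x]

def morph_pixel(q, t, I0, I1, v):
    """Evaluate the morph at output pixel q, time t in [0, 1]. v(p) is
    the halfway vector field: halfway point p maps to p - v(p) in image
    0 and p + v(p) in image 1, and the output position is p + (2t-1)v(p),
    so p is recovered by a short fixed-point iteration."""
    p = np.asarray(q, float)
    for _ in range(8):                         # fixed-point iteration
        p = np.asarray(q, float) - (2 * t - 1) * v(p)
    return (1 - t) * sample(I0, p - v(p)) + t * sample(I1, p + v(p))

# Two images related by a constant 4-pixel horizontal shift
I0 = np.zeros((32, 32)); I0[12:20, 8:16] = 1.0
I1 = np.zeros((32, 32)); I1[12:20, 12:20] = 1.0
v = lambda p: np.array([2.0, 0.0])             # constant halfway field
print(morph_pixel((12, 15), 0.5, I0, I1, v))   # halfway blend -> 1.0
```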


International Conference on Computer Graphics and Interactive Techniques | 2009

Amortized supersampling

Lei Yang; Diego Nehab; Pedro V. Sander; Pitchaya Sitthi-amorn; Jason Lawrence; Hugues Hoppe

We present a real-time rendering scheme that reuses shading samples from earlier time frames to achieve practical antialiasing of procedural shaders. Using a reprojection strategy, we maintain several sets of shading estimates at subpixel precision, and incrementally update these such that for most pixels only one new shaded sample is evaluated per frame. The key difficulty is to prevent accumulated blurring during successive reprojections. We present a theoretical analysis of the blur introduced by reprojection methods. Based on this analysis, we introduce a nonuniform spatial filter, an adaptive recursive temporal filter, and a principled scheme for locally estimating the spatial blur. Our scheme is appropriate for antialiasing shading attributes that vary slowly over time. It works in a single rendering pass on commodity graphics hardware, and offers results that surpass 4x4 stratified supersampling in quality, at a fraction of the cost.
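
The recursive temporal filter at the core of the scheme can be sketched in 1D, assuming a static camera so that reprojection is the identity; in general the history buffer is resampled at the reprojected position, and that resampling is the source of the blur the paper analyzes. The shader and parameters below are illustrative.

```python
import numpy as np

def amortized_aa(shade, n_frames, width, alpha=0.25):
    """One jittered shading sample per pixel per frame, accumulated with
    the recursive filter  f = alpha * s + (1 - alpha) * f_prev."""
    rng = np.random.default_rng(0)
    history = np.zeros(width)
    for _ in range(n_frames):
        jitter = rng.random(width) - 0.5           # subpixel offsets
        s = shade(np.arange(width) + jitter)       # one new sample/pixel
        history = alpha * s + (1 - alpha) * history
    return history

# A signal far above the pixel rate: point sampling aliases badly,
# while the amortized estimate is pulled toward the footprint average.
shade = lambda x: np.sign(np.sin(40.0 * x))
print(shade(np.arange(16.0)))                      # aliased single samples
print(np.round(amortized_aa(shade, 64, 16), 2))    # smoothed estimates
```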


International Conference on Computer Graphics and Interactive Techniques | 2011

GPU-efficient recursive filtering and summed-area tables

Diego Nehab; André Maximo; Rodolfo S. Lima; Hugues Hoppe

Image processing operations like blurring, inverse convolution, and summed-area tables are often computed efficiently as a sequence of 1D recursive filters. While much research has explored parallel recursive filtering, prior techniques do not optimize across the entire filter sequence. Typically, a separate filter (or often a causal-anticausal filter pair) is required in each dimension. Computing these filter passes independently results in significant traffic to global memory, creating a bottleneck in GPU systems. We present a new algorithmic framework for parallel evaluation. It partitions the image into 2D blocks, with a small band of additional data buffered along each block perimeter. We show that these perimeter bands are sufficient to accumulate the effects of the successive filters. A remarkable result is that the image data is read only twice and written just once, independent of image size, and thus total memory bandwidth is reduced even compared to the traditional serial algorithm. We demonstrate significant speedups in GPU computation.
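
For context, this is the 1D building block the abstract refers to: a causal-anticausal pair of first-order recursive filters. The paper's contribution, evaluating chains of such passes blockwise with perimeter bands so the image crosses global memory only a fixed number of times, is not reproduced here.

```python
import numpy as np

def recursive_smooth(x, a):
    """Causal-anticausal pair of first-order recursive filters:
        causal:     y[i] = (1-a) * x[i] + a * y[i-1]
        anticausal: z[i] = (1-a) * y[i] + a * z[i+1]
    Applied along rows then columns, this gives a fast separable blur
    whose cost is independent of the effective radius."""
    y = np.empty_like(x, dtype=float)
    acc = 0.0
    for i in range(len(x)):                 # causal pass (left-to-right)
        acc = (1 - a) * x[i] + a * acc
        y[i] = acc
    z = np.empty_like(y)
    acc = 0.0
    for i in reversed(range(len(y))):       # anticausal pass (right-to-left)
        acc = (1 - a) * y[i] + a * acc
        z[i] = acc
    return z

impulse = np.zeros(21); impulse[10] = 1.0
print(np.round(recursive_smooth(impulse, a=0.6), 3))  # smooth symmetric bump
```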


Computer Vision and Pattern Recognition | 2008

Dense 3D reconstruction from specularity consistency

Diego Nehab; Tim Weyrich; Szymon Rusinkiewicz

In this work, we consider the dense reconstruction of specular objects. We propose the use of a specularity constraint, based on surface normal/depth consistency, to define a matching cost function that can drive standard stereo reconstruction methods. We discuss the types of ambiguity that can arise, and suggest an aggregation method based on anisotropic diffusion that is particularly suitable for this matching cost function. We also present a controlled illumination setup that includes a pair of cameras and one LCD monitor, which is used as a calibrated, variable-position light source. We use this setup to evaluate the proposed method on real data, and demonstrate its capacity to recover high-quality depth and orientation from specular objects.
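
A minimal sketch of the normal/depth consistency idea, under an invented orthographic height-field setup with distant camera and light: a depth hypothesis is scored by how well its finite-difference normal agrees with the half-vector normal that a mirror-like specular observation requires.

```python
import numpy as np

def geometric_normal(z, x, y):
    """Normal of height field z at (x, y) via central differences."""
    dzdx = (z[y, x + 1] - z[y, x - 1]) / 2.0
    dzdy = (z[y + 1, x] - z[y - 1, x]) / 2.0
    n = np.array([-dzdx, -dzdy, 1.0])
    return n / np.linalg.norm(n)

def specular_cost(z, x, y, view_dir, light_dir):
    """Consistency cost: 1 - cos(angle) between the geometric normal of
    the depth hypothesis and the half-vector normal that the observed
    specular highlight demands under mirror reflection."""
    h = view_dir + light_dir
    h = h / np.linalg.norm(h)
    return 1.0 - float(geometric_normal(z, x, y) @ h)

# A fronto-parallel plane lit and viewed head-on is perfectly consistent
z = np.zeros((16, 16))
view = np.array([0.0, 0.0, 1.0])
light = np.array([0.0, 0.0, 1.0])
print(specular_cost(z, 8, 8, view, light))  # ~0.0
```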

Collaboration


Dive into Diego Nehab's collaborations.

Top Co-Authors

Pedro V. Sander
Hong Kong University of Science and Technology

Lei Yang
Hong Kong University of Science and Technology

Rodolfo S. Lima
Instituto Nacional de Matemática Pura e Aplicada

Luiz Henrique de Figueiredo
Instituto Nacional de Matemática Pura e Aplicada

André Maximo
Federal University of Rio de Janeiro

Luiz Velho
Instituto Nacional de Matemática Pura e Aplicada

James Davis
University of California