Publications


Featured research published by Ren Ng.


International Conference on Computer Graphics and Interactive Techniques | 2002

Chromium: a stream-processing framework for interactive rendering on clusters

Greg Humphreys; Mike Houston; Ren Ng; Randall J. Frank; Sean Ahern; P. D. Kirchner; James T. Klosowski

We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
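
The stream-filter idea can be illustrated with a toy Python sketch (hypothetical class and method names, not Chromium's actual API): graphics commands flow through a chain of filters, and a sort-first filter routes each command to the cluster nodes whose screen tiles it overlaps.

    # Toy sketch of graphics-command stream filters (hypothetical names,
    # not Chromium's API). Commands carry a screen-space bounding box.

    class PrintRenderer:
        """Terminal node: stands in for a commodity graphics accelerator."""
        def __init__(self, name, x0, y0, x1, y1):
            self.name, self.bounds = name, (x0, y0, x1, y1)

        def process(self, cmd):
            print(f"{self.name} renders {cmd['op']}")

    class SortFirstFilter:
        """Route each command to every tile its bounding box overlaps."""
        def __init__(self, tiles):
            self.tiles = tiles

        def process(self, cmd):
            cx0, cy0, cx1, cy1 = cmd["bbox"]
            for tile in self.tiles:
                tx0, ty0, tx1, ty1 = tile.bounds
                if cx0 < tx1 and cx1 > tx0 and cy0 < ty1 and cy1 > ty0:
                    tile.process(cmd)

    tiles = [PrintRenderer("node0", 0, 0, 512, 512),
             PrintRenderer("node1", 512, 0, 1024, 512)]
    chain = SortFirstFilter(tiles)
    chain.process({"op": "draw_triangle", "bbox": (400, 100, 600, 200)})  # overlaps both tiles

A sort-last arrangement would instead let every node render its share of the geometry and append a compositing filter at the end of the chain.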


International Conference on Computer Graphics and Interactive Techniques | 2005

Fourier slice photography

Ren Ng

This paper contributes to the theory of photograph formation from light fields. The main result is a theorem that, in the Fourier domain, a photograph formed by a full lens aperture is a 2D slice of the 4D light field. Photographs focused at different depths correspond to slices at different trajectories in the 4D space. The paper demonstrates the utility of this theorem in two different ways. First, the theorem is used to analyze the performance of digital refocusing, where one computes photographs focused at different depths from a single light field. The analysis shows in closed form that the sharpness of refocused photographs increases linearly with directional resolution. Second, the theorem yields a Fourier-domain algorithm for digital refocusing, where we extract the appropriate 2D slice of the light field's Fourier transform, and perform an inverse 2D Fourier transform. This method is faster than previous approaches.
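
A sketch of the central result, stated up to a normalization constant that depends on the light field parameterization: if L(x, y, u, v) is the in-camera light field and \hat{L} its 4D Fourier transform, then the Fourier transform of the photograph refocused at relative depth \alpha is a 2D slice

\[
\hat{P}_\alpha(k_x, k_y) \;\propto\; \hat{L}\big(\alpha k_x,\; \alpha k_y,\; (1-\alpha)\,k_x,\; (1-\alpha)\,k_y\big).
\]

The Fourier-domain refocusing algorithm follows directly: take one 4D Fourier transform of the light field, extract this 2D slice for each desired \alpha, and apply an inverse 2D Fourier transform, which is cheaper per photograph than the spatial-domain shift-and-add integral over the full 4D data.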


International Conference on Computer Graphics and Interactive Techniques | 2006

Light field microscopy

Marc Levoy; Ren Ng; Andrew Adams; Matthew J. Footer; Mark Horowitz

By inserting a microlens array into the optical train of a conventional microscope, one can capture light fields of biological specimens in a single photograph. Although diffraction places a limit on the product of spatial and angular resolution in these light fields, we can nevertheless produce useful perspective views and focal stacks from them. Since microscopes are inherently orthographic devices, perspective views represent a new way to look at microscopic specimens. The ability to create focal stacks from a single photograph allows moving or light-sensitive specimens to be recorded. Applying 3D deconvolution to these focal stacks, we can produce a set of cross sections, which can be visualized using volume rendering. In this paper, we demonstrate a prototype light field microscope (LFM), analyze its optical performance, and show perspective views, focal stacks, and reconstructed volumes for a variety of biological specimens. We also show that synthetic focusing followed by 3D deconvolution is equivalent to applying limited-angle tomography directly to the 4D light field.
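
The synthetic focusing step mentioned above is the standard shift-and-add computation on the captured light field; below is a minimal numpy sketch of that general idea (the array layout and the `shear` parameter are assumptions for illustration, and the 3D deconvolution stage is omitted).

    import numpy as np

    def synthetic_refocus(lf, shear):
        """Shift-and-add refocusing of a light field lf[u, v, y, x].

        Each angular view is translated in proportion to its offset from the
        central view, then all views are averaged; sweeping `shear` produces
        a focal stack from a single photograph.
        """
        nu, nv, ny, nx = lf.shape
        u0, v0 = (nu - 1) / 2.0, (nv - 1) / 2.0
        out = np.zeros((ny, nx))
        for u in range(nu):
            for v in range(nv):
                dy = int(round(shear * (u - u0)))
                dx = int(round(shear * (v - v0)))
                out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
        return out / (nu * nv)

    lf = np.random.rand(5, 5, 64, 64)                      # toy 5x5-view light field
    focal_stack = [synthetic_refocus(lf, s) for s in (-1.0, 0.0, 1.0)]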


International Conference on Computer Graphics and Interactive Techniques | 2004

Triple product wavelet integrals for all-frequency relighting

Ren Ng; Ravi Ramamoorthi; Pat Hanrahan

This paper focuses on efficient rendering based on pre-computed light transport, with realistic materials and shadows under all-frequency direct lighting such as environment maps. The basic difficulty is representation and computation in the 6D space of light direction, view direction, and surface position. While image-based and synthetic methods for real-time rendering have been proposed, they do not scale to high sampling rates with variation of both lighting and viewpoint. Current approaches are therefore limited to lower dimensionality (only lighting or viewpoint variation, not both) or lower sampling rates (low frequency lighting and materials). We propose a new mathematical and computational analysis of pre-computed light transport. We use factored forms, separately pre-computing and representing visibility and material properties. Rendering then requires computing triple product integrals at each vertex, involving the lighting, visibility and BRDF. Our main contribution is a general analysis of these triple product integrals, which are likely to have broad applicability in computer graphics and numerical analysis. We first determine the computational complexity in a number of bases like point samples, spherical harmonics and wavelets. We then give efficient linear and sublinear-time algorithms for Haar wavelets, incorporating non-linear wavelet approximation of lighting and BRDFs. Practically, we demonstrate rendering of images under new lighting and viewing conditions in a few seconds, significantly faster than previous techniques.
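
The per-vertex quantity described here is a triple product integral; in any orthonormal basis {\Psi_i} it reduces to a sum weighted by tripling coefficients (a sketch of the general form only; the paper's contribution is the complexity analysis and the sparsity structure of these coefficients for Haar wavelets):

\[
\int_\Omega L(\omega)\,V(\omega)\,\rho(\omega)\,d\omega
\;=\; \sum_{i,j,k} C_{ijk}\, L_i V_j \rho_k,
\qquad
C_{ijk} = \int_\Omega \Psi_i(\omega)\,\Psi_j(\omega)\,\Psi_k(\omega)\,d\omega,
\]

where L_i, V_j and \rho_k are the basis coefficients of the lighting, visibility and BRDF. Most Haar tripling coefficients vanish, which is what makes the linear and sublinear-time algorithms possible.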


SIGGRAPH/Eurographics Conference on Graphics Hardware | 2002

Efficient partitioning of fragment shaders for multipass rendering on programmable graphics hardware

Eric E. Chan; Ren Ng; Pradeep Sen; Kekoa Proudfoot; Pat Hanrahan

Real-time programmable graphics hardware has resource constraints that prevent complex shaders from rendering in a single pass. One way to virtualize these resources is to partition shading computations into multiple passes, each of which satisfies the given constraints. Many such partitions exist for a shader, but it is important to find one that renders efficiently. We present Recursive Dominator Split (RDS), a polynomial-time algorithm that uses a cost model to find near-optimal partitions of arbitrarily complex shaders. Using a simulator, we analyze partitions for architectures with different resource constraints and show that RDS performs well on different graphics architectures. We also demonstrate that shader partitions computed by RDS can run efficiently on programmable graphics hardware available today.
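
To make the cost trade-off concrete, here is a deliberately naive greedy partitioner in Python (an illustrative toy under assumed inputs, not the RDS algorithm): it packs a straight-line list of shader operations into passes under an instruction limit and counts the intermediate values that must be saved to and restored from texture memory between passes, which is the kind of cost a partitioning cost model has to weigh.

    # Toy multipass partitioning sketch (not RDS). Each op is (name, inputs).

    def greedy_partition(ops, max_instructions):
        """Pack operations, in order, into passes of at most max_instructions."""
        passes, current = [], []
        for op in ops:
            if len(current) == max_instructions:
                passes.append(current)
                current = []
            current.append(op)
        if current:
            passes.append(current)
        return passes

    def save_restore_cost(passes):
        """Count values produced in one pass and consumed in a later pass."""
        cost = 0
        for i, p in enumerate(passes):
            produced = {name for name, _ in p}
            later_inputs = {x for q in passes[i + 1:] for _, ins in q for x in ins}
            cost += len(produced & later_inputs)
        return cost

    shader = [("a", []), ("b", ["a"]), ("c", ["a"]), ("d", ["b", "c"])]
    parts = greedy_partition(shader, max_instructions=2)
    print(len(parts), "passes,", save_restore_cost(parts), "values saved/restored")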


Eurographics Symposium on Rendering Techniques | 2006

Efficient wavelet rotation for environment map rendering

Rui Wang; Ren Ng; David Luebke; Greg Humphreys

Real-time shading with environment maps requires the ability to rotate the global lighting to each surface point's local coordinate frame. Although extensive previous work has studied rotation of functions represented by spherical harmonics, little work has investigated efficient rotation of wavelets. Wavelets are superior at approximating high frequency signals such as detailed high dynamic range lighting and very shiny BRDFs, but present difficulties for interactive rendering due to the lack of an analytic solution for rotation. In this paper we present an efficient computational solution for wavelet rotation using precomputed matrices. Each matrix represents the linear transformation of source wavelet bases defined in the global coordinate frame to target wavelet bases defined in sampled local frames. Since wavelets have compact support, these matrices are very sparse, enabling efficient storage and fast computation at run-time. In this paper, we focus on the application of our technique to interactive environment map rendering. We show that using these matrices allows us to evaluate the integral of dynamic lighting with dynamic BRDFs at interactive rates, incorporating efficient non-linear approximation of both illumination and reflection. Our technique improves on previous work by eliminating the need for prefiltering environment maps, and is thus significantly faster for accurate rendering of dynamic environment lighting with high frequency reflection effects.
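
At run time the operation described above is just a sparse matrix-vector product: pick the precomputed matrix for the sampled local frame nearest the shading point and apply it to the global-frame wavelet coefficients of the lighting. A minimal scipy sketch with random stand-in data follows (the sizes and the random matrices are assumptions; building the real matrices requires projecting rotated wavelet bases, which is the precomputation the paper describes).

    import numpy as np
    from scipy import sparse

    n_coeffs = 1024    # wavelet coefficients per spherical function
    n_frames = 16      # number of sampled local coordinate frames

    # Stand-ins for the precomputed per-frame rotation matrices; in the real
    # system each nonzero entry is an inner product of a rotated source wavelet
    # with a target wavelet, and compact support keeps the matrices sparse.
    rotation_matrices = [sparse.random(n_coeffs, n_coeffs, density=0.01, format="csr")
                         for _ in range(n_frames)]

    lighting_global = np.random.rand(n_coeffs)   # wavelet coeffs of the environment map

    def rotate_lighting(frame_index):
        """Re-express the global lighting in a local frame: one sparse mat-vec."""
        return rotation_matrices[frame_index] @ lighting_global

    local_lighting = rotate_lighting(3)
    # Shading then evaluates the wavelet-domain product of local_lighting with
    # the BRDF coefficients defined in the same local frame.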


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Digital correction of lens aberrations in light field photography

Ren Ng; Pat Hanrahan

Digital light field photography consists of recording the radiance along all rays (the 4D light field) flowing into the image plane inside the camera, and using the computer to control the final convergence of rays in final images. The light field is sampled with integral photography techniques, using a microlens array in front of the sensor inside a conventional digital camera. Previous work has shown that this approach enables refocusing of photographs after the fact. This paper explores computation of photographs with reduced lens aberrations by digitally re-sorting aberrated rays to where they should have terminated. The paper presents a test with a prototype light field camera, and simulated results across a set of 35mm format lenses.
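
The digital re-sorting step can be pictured as weighted ray re-binning: each recorded ray sample is assigned, via a table precomputed by tracing the actual lens design, to the pixel an ideal lens would have delivered it to, and its radiance is accumulated there. The numpy sketch below uses synthetic ray data purely for illustration; in the real system the corrected destinations come from the lens prescriptions mentioned above.

    import numpy as np

    H, W, N = 64, 64, 20000
    rng = np.random.default_rng(0)
    radiance = rng.random(N)            # radiance of each recorded ray sample
    ideal_y = rng.integers(0, H, N)     # pixel the ray *should* have reached
    ideal_x = rng.integers(0, W, N)     # (precomputed from the lens design)

    corrected = np.zeros((H, W))
    counts = np.zeros((H, W))
    np.add.at(corrected, (ideal_y, ideal_x), radiance)   # re-sort rays digitally
    np.add.at(counts, (ideal_y, ideal_x), 1.0)
    corrected /= np.maximum(counts, 1.0)                 # average the rays per pixel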


arXiv: Computer Vision and Pattern Recognition | 2018

DiffuserCam: lensless single-exposure 3D imaging

Nick Antipa; Grace Kuo; Reinhard Heckel; Ben Mildenhall; Emrah Bostan; Ren Ng; Laura Waller

We demonstrate a compact and easy-to-build computational camera for single-shot 3D imaging. Our lensless system consists solely of a diffuser placed in front of a standard image sensor. Every point within the volumetric field-of-view projects a unique pseudorandom pattern of caustics on the sensor. By using a physical approximation and simple calibration scheme, we solve the large-scale inverse problem in a computationally efficient way. The caustic patterns enable compressed sensing, which exploits sparsity in the sample to solve for more 3D voxels than pixels on the 2D sensor. Our 3D voxel grid is chosen to match the experimentally measured two-point optical resolution across the field-of-view, resulting in 100 million voxels being reconstructed from a single 1.3 megapixel image. However, the effective resolution varies significantly with scene content. Because this effect is common to a wide range of computational cameras, we provide new theory for analyzing resolution in such systems.
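
The reconstruction described here is a sparsity-regularized linear inverse problem. One standard solver for problems of this shape is ISTA (proximal gradient descent on a least-squares data term plus an ℓ1 penalty); the sketch below uses a small dense random matrix A as a stand-in for DiffuserCam's structured caustic forward model, just to show why a sparse volume can be recovered from fewer measurements than unknowns.

    import numpy as np

    def ista(A, b, lam=0.1, n_iter=200):
        """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient descent."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    rng = np.random.default_rng(1)
    n_pixels, n_voxels = 128, 512              # fewer measurements than unknowns
    A = rng.standard_normal((n_pixels, n_voxels)) / np.sqrt(n_pixels)
    x_true = np.zeros(n_voxels)
    x_true[rng.choice(n_voxels, 10, replace=False)] = 1.0   # sparse ground truth
    b = A @ x_true
    x_hat = ista(A, b, lam=0.02)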


International Conference on Computational Photography | 2016

Single-shot diffuser-encoded light field imaging

Nicholas Antipa; Sylvia Necula; Ren Ng; Laura Waller

We capture 4D light field data in a single 2D sensor image by encoding spatio-angular information into a speckle field (caustic pattern) through a phase diffuser. Using wave-optics theory and a coherent phase retrieval method, we calibrate the system by measuring the diffuser surface height from through-focus images. Wave-optics theory further informs the design of system geometry such that a purely additive ray-optics model is valid. Light field reconstruction is done using nonlinear matrix inversion methods, including ℓ1 minimization. We demonstrate a prototype system and present empirical results of 4D light field reconstruction and computational refocusing from a single diffuser-encoded 2D image.
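
The "purely additive ray-optics model" means the measurement is linear in the light field: each light-field component casts a calibrated caustic pattern on the sensor, so the image is b = A x with the columns of A given by those patterns. A minimal sketch with a random stand-in for the calibrated A (the paper's actual reconstruction additionally uses nonlinear methods such as ℓ1 minimization):

    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(2)
    n_pixels, n_lf = 256, 400              # sensor pixels vs. light-field samples

    # Stand-in system matrix: column j is the caustic pattern that
    # light-field component j casts on the sensor (measured in calibration).
    A = rng.random((n_pixels, n_lf))

    x_true = rng.random(n_lf)              # unknown 4D light field, flattened
    b = A @ x_true                         # single 2D sensor image, flattened

    x_hat = lsqr(A, b, damp=1e-3)[0]       # damped least-squares reconstruction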


Symposium on Interactive 3D Graphics and Games | 2007

4D compression and relighting with high-resolution light transport matrices

Ewen Cheslack-Postava; Nolan D. Goodnight; Ren Ng; Ravi Ramamoorthi; Greg Humphreys

This paper presents a method for efficient compression and relighting with high-resolution, precomputed light transport matrices. We accomplish this using a 4D wavelet transform, transforming the columns of the transport matrix, in addition to the 2D row transform used in previous work. We show that a standard 4D wavelet transform can actually inflate portions of the matrix, because high-frequency lights lead to high-frequency images that cannot easily be compressed. Therefore, we present an adaptive 4D wavelet transform that terminates at a level that avoids inflation and maximizes sparsity in the matrix data. Finally, we present an algorithm for fast relighting from adaptively compressed transport matrices. Combined with a GPU-based precomputation pipeline, this results in an image and geometry relighting system that performs significantly better than 2D compression techniques, on average 2x-3x better in terms of storage cost and rendering speed for equal quality matrices.
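
The relighting step itself is a matrix-vector product between the wavelet-compressed transport matrix and the wavelet-transformed light vector. The numpy sketch below shows only the row-transform case, with a 1D Haar transform standing in for the 2D wavelet transform of the lighting (sizes, data, and the 5% truncation are illustrative assumptions; the paper's adaptive 4D transform additionally transforms the matrix columns).

    import numpy as np

    def haar_matrix(n):
        """Orthonormal Haar transform matrix for n a power of two."""
        if n == 1:
            return np.array([[1.0]])
        h = haar_matrix(n // 2)
        scaling = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)
        detail = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)
        return np.vstack([scaling, detail])

    rng = np.random.default_rng(3)
    n_lights, n_pixels = 256, 1024
    T = rng.random((n_pixels, n_lights))   # precomputed light transport matrix
    light = rng.random(n_lights)           # environment lighting vector

    W = haar_matrix(n_lights)
    T_w = T @ W.T                          # transport rows in the wavelet domain
    light_w = W @ light                    # lighting in the wavelet domain

    # Keep only the largest 5% of each row's coefficients (lossy compression).
    k = n_lights // 20
    thresh = np.partition(np.abs(T_w), -k, axis=1)[:, -k]
    T_sparse = np.where(np.abs(T_w) >= thresh[:, None], T_w, 0.0)

    image = T_sparse @ light_w             # relighting with the truncated coefficients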

Collaboration


Ren Ng's top co-authors.

Top Co-Authors

Laura Waller, University of California
Grace Kuo, University of California
Nick Antipa, University of California
Ben Mildenhall, University of California