Jack Tumblin
Northwestern University
Publications
Featured research published by Jack Tumblin.
International Conference on Computer Graphics and Interactive Techniques | 2007
Ashok Veeraraghavan; Ramesh Raskar; Amit K. Agrawal; Ankit Mohan; Jack Tumblin
We describe a theoretical framework for reversibly modulating 4D light fields using an attenuating mask in the optical path of a lens-based camera. Based on this framework, we present a novel design to reconstruct the 4D light field from a 2D camera image without any additional refractive elements as required by previous light field cameras. The patterned mask attenuates light rays inside the camera instead of bending them, and the attenuation recoverably encodes the rays on the 2D sensor. Our mask-equipped camera focuses just as a traditional camera does, capturing conventional 2D photos at full sensor resolution, but the raw pixel values also hold a modulated 4D light field. The light field can be recovered by rearranging the tiles of the 2D Fourier transform of sensor values into 4D planes, and computing the inverse Fourier transform. One can also recover the full-resolution image information for the in-focus parts of the scene. We also show how a broadband mask placed at the lens enables us to compute refocused images at full sensor resolution for layered Lambertian scenes. This partial encoding of 4D ray-space data enables editing of image contents by depth, yet does not require computational recovery of the complete 4D light field.
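A minimal sketch of the recovery step just described, assuming a grayscale sensor image whose mask produces an n_ang x n_ang grid of spectral tiles; the tile layout and scaling here are illustrative, not the paper's exact pipeline:

    import numpy as np

    def recover_light_field(sensor, n_ang):
        """Sketch of heterodyne light-field recovery: cut the centered 2D
        spectrum of the mask-modulated sensor image into n_ang x n_ang
        tiles, stack them as the angular axes of a 4D spectrum, and
        inverse-transform. Assumes both sensor dimensions are divisible
        by n_ang."""
        H, W = sensor.shape
        h, w = H // n_ang, W // n_ang               # spatial samples per tile
        S = np.fft.fftshift(np.fft.fft2(sensor))    # centered 2D spectrum
        # (n_ang, h, n_ang, w) -> (n_ang, n_ang, h, w): angular axes first
        tiles = S.reshape(n_ang, h, n_ang, w).transpose(0, 2, 1, 3)
        lf = np.fft.ifftn(np.fft.ifftshift(tiles))  # 4D inverse transform
        return np.real(lf)                          # L(u, v, x, y) up to scale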
IEEE Computer Graphics and Applications | 1993
Jack Tumblin; Holly E. Rushmeier
Radiosity and other global illumination methods for image synthesis calculate the real-world radiance values of a scene instead of the display radiance values that will represent them. Though radiosity and ray tracing methods can compute extremely accurate and wide-ranging scene radiances, modern display devices emit light only in a tiny fixed range. The radiances must be converted, but ad hoc conversions cause serious errors and give little assurance that the evoked visual sensations are truly equivalent. Sensation-preserving conversions for display, already known in photography, printing, and television as tone reproduction methods, are discussed. Computer graphics workers can apply the existing photographic methods, but may also extend them to include more complex and subtle effects of human vision using the published findings of vision researchers. Ways of constructing a sensation-preserving display converter, or tone reproduction operator, for monochrome images are demonstrated.
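As one concrete illustration, a brightness-matching power-law operator in the spirit of the paper; the Stevens-law exponent below uses commonly quoted constants and should be read as an assumption, not the authors' exact published formulation:

    import numpy as np

    def stevens_gamma(L):
        """Brightness exponent versus adaptation luminance L (cd/m^2),
        a Stevens-law fit; the constants are illustrative."""
        return np.where(L > 100.0, 2.655,
                        1.855 + 0.4 * np.log10(L + 2.3e-5))

    def tone_reproduce(world_lum, world_adapt, display_adapt):
        """Map scene luminances to display luminances so the evoked
        brightness sensations approximately match: a power law whose
        exponent is the ratio of the two viewing states' gammas."""
        g = stevens_gamma(world_adapt) / stevens_gamma(display_adapt)
        return display_adapt * (world_lum / world_adapt) ** g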
International Conference on Computer Graphics and Interactive Techniques | 1999
Jack Tumblin; Greg Turk
High contrast scenes are difficult to depict on low contrast displays without loss of important fine details and textures. Skilled artists preserve these details by drawing scene contents in coarse-to-fine order using a hierarchy of scene boundaries and shadings. We build a similar hierarchy using multiple instances of a new low curvature image simplifier (LCIS), a partial differential equation inspired by anisotropic diffusion. Each LCIS reduces the scene to many smooth regions that are bounded by sharp gradient discontinuities, and a single parameter K chosen for each LCIS controls region size and boundary complexity. With a few chosen K values (K1 > K2 > K3 ...) LCIS makes a set of progressively simpler images, and image differences form a hierarchy of increasingly important details, boundaries and large features. We construct a high detail, low contrast display image from this hierarchy by compressing only the large features, then adding back all small details. Unlike linear filter hierarchies such as wavelets, filter banks, or image pyramids, LCIS hierarchies do not smooth across scene boundaries, avoiding “halo” artifacts common to previous contrast reducing methods and some tone reproduction operators. We demonstrate LCIS effectiveness on several example images.
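LCIS itself is a low-curvature PDE, but the edge-stopping behaviour that builds the hierarchy can be illustrated with the closely related Perona-Malik diffusion; the sketch below is that simpler second-order stand-in, not LCIS:

    import numpy as np

    def edge_stopping_diffuse(img, K, n_iter=100, dt=0.2):
        """Perona-Malik diffusion as a simplified stand-in for LCIS:
        the conductance g = exp(-(d/K)^2) lets smoothing proceed inside
        regions but stall at gradients larger than K, so strong edges
        stay sharp while textures are removed."""
        u = img.astype(np.float64).copy()
        g = lambda d: np.exp(-(d / K) ** 2)
        for _ in range(n_iter):
            # differences to the four neighbours (borders wrap, for brevity)
            dn = np.roll(u, -1, 0) - u
            ds = np.roll(u,  1, 0) - u
            de = np.roll(u, -1, 1) - u
            dw = np.roll(u,  1, 1) - u
            u += dt * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
        return u

    # Hierarchy as in the paper: base = edge_stopping_diffuse(img, K1)
    # is the simplest version; successive differences between levels
    # built with smaller K values form the detail hierarchy.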
International Conference on Computer Graphics and Interactive Techniques | 2005
Amy Ashurst Gooch; Sven C. Olsen; Jack Tumblin; Bruce Gooch
Visually important image features often disappear when color images are converted to grayscale. The algorithm introduced here reduces such losses by attempting to preserve the salient features of the color image. The Color2Gray algorithm is a 3-step process: 1) convert RGB inputs to a perceptually uniform CIE L*a*b* color space, 2) use chrominance and luminance differences to create grayscale target differences between nearby image pixels, and 3) solve an optimization problem designed to selectively modulate the grayscale representation as a function of the chroma variation of the source image. The Color2Gray results offer viewers salient information missing from previous grayscale image creation methods.
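A simplified sketch of the three steps, restricted to 4-neighbour pixel pairs and an assumed skimage dependency for the Lab conversion; the paper's larger neighbourhoods and user-controlled chroma sign are simplified away:

    import numpy as np
    from skimage import color   # assumed dependency for RGB -> CIE L*a*b*

    def color2gray_sketch(rgb, alpha=10.0, iters=300, lr=0.05):
        """Simplified Color2Gray on a float RGB image in [0, 1]:
        (1) convert to Lab, (2) build target differences between
        neighbouring pixels from luminance and chrominance, (3) solve
        for grayscale values matching those targets by gradient descent."""
        lab = color.rgb2lab(rgb)
        L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
        shifts = [(0, 1), (1, 0)]                 # right and down neighbours
        targets = []
        for s in shifts:
            dL = L - np.roll(L, s, axis=(0, 1))
            dC = np.hypot(a - np.roll(a, s, axis=(0, 1)),
                          b - np.roll(b, s, axis=(0, 1)))
            crunch = alpha * np.tanh(dC / alpha)  # bounded chroma influence
            # keep the luminance difference unless chroma contrast dominates;
            # the sign fallback simplifies the paper's user hue-angle choice
            targets.append(np.where(np.abs(dL) > crunch,
                                    dL, crunch * np.sign(dL + 1e-12)))
        g = L.copy()
        for _ in range(iters):                    # minimize the sum of
            grad = np.zeros_like(g)               # (g_i - g_j - t_ij)^2
            for s, t in zip(shifts, targets):
                d = g - np.roll(g, s, axis=(0, 1)) - t
                grad += d - np.roll(d, tuple(-c for c in s), axis=(0, 1))
            g -= lr * grad
        return g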
International Conference on Computer Graphics and Interactive Techniques | 2000
Sumanta N. Pattanaik; Jack Tumblin; Hector Yee; Donald P. Greenberg
Human vision takes time to adapt to large changes in scene intensity, and these transient adjustments have a profound effect on visual appearance. This paper offers a new operator to include these appearance changes in animations or interactive real-time simulations, and to match a user's visual responses to those the user would experience in a real-world scene. Large, abrupt changes in scene intensities can cause dramatic compression of visual responses, followed by a gradual recovery of normal vision. Asymmetric mechanisms govern these time-dependent adjustments, and offer adaptation to increased light that is much more rapid than adjustment to darkness. We derive a new tone reproduction operator that simulates these mechanisms. The operator accepts a stream of scene intensity frames and creates a stream of color display images. All operator components are derived from published quantitative measurements from physiology, psychophysics, color science, and photography. Kept intentionally simple to allow fast computation, the operator is meant for use with real-time walk-through renderings, high dynamic range video cameras, and other interactive applications. We demonstrate its performance on both synthetically generated and acquired “real-world” scenes with large dynamic variations of illumination and contrast.
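A toy version of the asymmetric adaptation idea; the time constants and the Naka-Rushton-style response below are illustrative stand-ins for the paper's physiologically derived components:

    import numpy as np

    def update_adaptation(adapt, frame_lum, dt, tau_light=0.1, tau_dark=4.0):
        """One frame of an asymmetric adaptation model: the adaptation
        level chases the scene's mean luminance quickly when it brightens
        (tau_light) and slowly when it darkens (tau_dark). The time
        constants are illustrative guesses, not measured values."""
        target = float(np.mean(frame_lum))
        tau = tau_light if target > adapt else tau_dark
        return adapt + (target - adapt) * (1.0 - np.exp(-dt / tau))

    def visual_response(frame_lum, adapt, n=0.9):
        """Naka-Rushton-style sigmoid response in [0, 1) for nonnegative
        luminances, with the semi-saturation level tied to the current
        adaptation state."""
        sigma = adapt
        return frame_lum ** n / (frame_lum ** n + sigma ** n)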
ACM Transactions on Graphics | 1999
Jack Tumblin; Jessica K. Hodgins; Brian K. Guenter
High contrast images are common in night scenes and other scenes that include dark shadows and bright light sources. These scenes are difficult to display because their contrasts greatly exceed the range of most image display devices. As a result, the image contrasts are compressed or truncated, obscuring subtle textures and details. Humans view and understand high contrast scenes easily, “adapting” their visual response to avoid compression or truncation with no apparent loss of detail. By imitating some of these visual adaptation processes, we developed methods for the improved display of high-contrast images. The first builds a display image from several layers of lighting and surface properties. Only the lighting layers are compressed, drastically reducing contrast while preserving much of the image detail. This method is practical only for synthetic images where the layers can be retained from the rendering process. The second method interactively adjusts the displayed image to preserve local contrasts in a small “foveal” neighborhood. Unlike the first method, this technique is usable on any image and includes a new tone reproduction operator. Both methods use a sigmoid function for contrast compression. This function has no effect when applied to small signals but compresses large signals to fit within an asymptotic limit. We demonstrate the effectiveness of these approaches by comparing processed and unprocessed images.
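The sigmoid behaviour described above is easy to state concretely; tanh is one function with exactly these properties, used here as a stand-in for the paper's specific choice:

    import numpy as np

    def sigmoid_compress(log_lum, limit):
        """Contrast compression as described above: near-identity for
        small excursions about the (foveal or global) mean, asymptotic
        to +/- limit for large ones."""
        mean = np.mean(log_lum)
        return mean + limit * np.tanh((log_lum - mean) / limit)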
IEEE Transactions on Visualization and Computer Graphics | 1998
Jessica K. Hodgins; James F. O'Brien; Jack Tumblin
Human figures have been animated using a variety of geometric models, including stick figures, polygonal models and NURBS-based models with muscles, flexible skin or clothing. This paper reports on experimental results indicating that a viewer's perception of motion characteristics is affected by the geometric model used for rendering. Subjects were shown a series of paired motion sequences and asked if the two motions in each pair were the same or different. The motion sequences in each pair were rendered using the same geometric model. For the three types of motion variation tested, sensitivity scores indicate that subjects were better able to observe changes with the polygonal model than they were with the stick-figure model.
International Conference on Computer Graphics and Interactive Techniques | 2007
Sylvain Paris; Pierre Kornprobst; Jack Tumblin
This course reviews the wealth of work related to bilateral filtering. The bilateral filter is ubiquitous in computational photography applications. It is increasingly common in computer graphics research papers, but no single reference summarizes its properties and applications. This course provides a graphical, intuitive introduction to bilateral filtering, and a practical guide for image editing, tone-mapping, video processing and more.
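For reference, the filter itself is compact; a brute-force sketch on a grayscale float image (borders wrap, for brevity):

    import numpy as np

    def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1):
        """Brute-force bilateral filter: each output pixel is a
        normalized average of its neighbours, weighted both by spatial
        distance (sigma_s, in pixels) and by intensity difference
        (sigma_r, in the same units as img), so smoothing stops at
        strong edges."""
        radius = int(3 * sigma_s)
        out = np.zeros_like(img, dtype=np.float64)
        norm = np.zeros_like(out)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = np.roll(img, (dy, dx), axis=(0, 1))
                w = (np.exp(-(dy*dy + dx*dx) / (2 * sigma_s**2))
                     * np.exp(-(shifted - img)**2 / (2 * sigma_r**2)))
                out += w * shifted
                norm += w
        return out / norm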
Eurographics Symposium on Rendering Techniques | 2004
Jack Tumblin; Prasun Choudhury
Pixels store a digital image as a grid of point samples that can reconstruct a limited-bandwidth continuous 2-D source image. Although convenient for anti-aliased display, these bandwidth limits irreversibly discard important visual boundary information that is difficult or impossible to accurately recover from pixels alone. We propose bixels instead: they also store a digital image as a grid of point samples, but each sample keeps 8 extra bits to set embedded geometric boundaries that are infinitely sharp, more accurately placed, and directly machine-readable. Bixels represent images as piecewise-continuous, with discontinuous intensities and gradients at boundaries that form planar graphs. They reversibly combine vector and raster image features, decouple boundary sharpness from the number of samples used to store them, and do not mix unrelated but adjacent image contents, e.g., blue sky and green leaf. Bixels are meant to be compatible with pixels. A bixel is an image sample point with an 8-bit code for local boundaries. We describe a boundary-switched bilinear filter kernel for bixel reconstruction and pre-filtering to find bixel samples, a bixels-to-pixels conversion method for display, and an iterative method to combine pixels and given boundaries to make bixels. We discuss applications in texture synthesis, matting and compositing. We demonstrate sharpness-preserving enlargement, warping and bixels-to-pixels conversion with example images.
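A toy illustration of the boundary-switched reconstruction, with hypothetical per-sample region labels standing in for decoded 8-bit boundary codes; the paper's actual kernel is more elaborate:

    import numpy as np

    def bixel_bilinear(samples, labels, y, x):
        """Toy boundary-switched bilinear filter. `labels` assigns each
        sample a region id (a stand-in for decoding the boundary codes):
        only the cell corners on the same side of the boundary as the
        query point's nearest corner contribute, with the surviving
        bilinear weights renormalized, so edges stay sharp at any
        magnification."""
        yi, xi = int(y), int(x)
        fy, fx = y - yi, x - xi
        w = np.array([(1-fy)*(1-fx), (1-fy)*fx, fy*(1-fx), fy*fx])
        corners = [(yi, xi), (yi, xi+1), (yi+1, xi), (yi+1, xi+1)]
        side = labels[corners[int(np.argmax(w))]]  # region of nearest corner
        keep = np.array([labels[c] == side for c in corners], dtype=float)
        w = w * keep
        vals = np.array([samples[c] for c in corners], dtype=float)
        return float(w @ vals / w.sum())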
Eurographics | 2008
Ankit Mohan; Ramesh Raskar; Jack Tumblin
We advocate the use of quickly-adjustable, computer-controlled color spectra in photography, lighting and displays. We present an optical relay system that allows mechanical or electronic color spectrum control and use it to modify a conventional camera and projector. We use a diffraction grating to disperse the rays into different colors, and introduce a mask (or LCD/DMD) in the optical path to modulate the spectrum. We analyze the trade-offs and limitations of this design, and demonstrate its use in a camera, projector and light source. We propose applications such as adaptive color primaries, metamer detection, scene contrast enhancement, photographing fluorescent objects, and high dynamic range photography using spectrum modulation.
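The core "agile spectrum" operation reduces to a per-wavelength product; a minimal sketch, assuming the spectrum and mask are sampled at the same wavelengths:

    import numpy as np

    def modulate_spectrum(spd, mask):
        """After a grating disperses the rays by wavelength, a mask
        (or LCD/DMD) in the rainbow plane scales each wavelength's power
        before the rays are recombined. `spd` and `mask` are assumed
        sampled at the same wavelengths; mask values lie in [0, 1]."""
        return spd * np.clip(mask, 0.0, 1.0)

    # Example: a notch mask that blocks 570-590 nm, e.g. to probe metamers.
    wl = np.arange(400, 701)                  # wavelengths, nm
    spd = np.ones_like(wl, dtype=float)       # flat test illuminant
    mask = np.where((wl >= 570) & (wl <= 590), 0.0, 1.0)
    filtered = modulate_spectrum(spd, mask)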