Publication


Featured research published by Todd J. Kosloff.


International Conference on Computational Science and Its Applications | 2007

An algorithm for rendering generalized depth of field effects based on simulated heat diffusion

Todd J. Kosloff; Brian A. Barsky

Depth of field refers to the swath through a 3D scene that is imaged in acceptable focus through an optics system, such as a camera lens. Control over depth of field is an important artistic tool that can be used to emphasize the subject of a photograph. In a real camera, the control over depth of field is limited by the nature of the image formation process and by physical constraints. The depth of field effect has been simulated in computer graphics, but with the same limited control as found in real camera lenses. In this paper, we use diffusion in a non-homogeneous medium to generalize depth of field in computer graphics by enabling the user to independently specify the degree of blur at each point in three-dimensional space. Generalized depth of field provides a novel tool to emphasize an area of interest within a 3D scene, to pick objects out of a crowd, and to render a busy, complex picture more understandable by focusing only on relevant details that may be scattered throughout the scene. Our algorithm operates by blurring a sequence of nonplanar layers that form the scene. Choosing a suitable blur algorithm for the layers is critical; thus, we develop appropriate blur semantics such that the blur algorithm will properly generalize depth of field. We found that diffusion in a non-homogeneous medium is the process that best suits these semantics.
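The core idea, blur as heat diffusion with a spatially varying conductivity, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the per-pixel diffusivity map `kappa` stands in for the user-specified 3D blur field, and a simple explicit finite-difference scheme is used.

```python
import numpy as np

def diffuse_blur(image, kappa, steps=50, dt=0.2):
    """Blur an image by simulated heat diffusion in a non-homogeneous
    medium: kappa is a per-pixel diffusivity in [0, 1] (0 = stay sharp,
    1 = blur strongly).  Explicit finite differences, stable for dt <= 0.25."""
    u = image.astype(float).copy()
    k = np.clip(np.asarray(kappa, dtype=float), 0.0, 1.0)
    for _ in range(steps):
        # Neighbor values with replicated (insulating) borders.
        up = np.roll(u, 1, axis=0);     up[0, :] = u[0, :]
        down = np.roll(u, -1, axis=0);  down[-1, :] = u[-1, :]
        left = np.roll(u, 1, axis=1);   left[:, 0] = u[:, 0]
        right = np.roll(u, -1, axis=1); right[:, -1] = u[:, -1]
        lap = up + down + left + right - 4.0 * u
        u = u + dt * k * lap  # diffusivity modulates the heat flow
    return u
```

Wherever `kappa` is zero the image is left untouched, so a point of interest stays perfectly sharp while arbitrarily shaped regions around it melt into blur.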


Archive | 2010

Three Techniques for Rendering Generalized Depth of Field Effects

Todd J. Kosloff; Brian A. Barsky

Depth of field refers to the swath that is imaged in sufficient focus through an optics system, such as a camera lens. Control over depth of field is an important artistic tool that can be used to emphasize the subject of a photograph. In a real camera, the control over depth of field is limited by the laws of physics and by physical constraints. Depth of field has been rendered in computer graphics, but usually with the same limited control as found in real camera lenses. In this paper, we generalize depth of field in computer graphics by allowing the user to specify the distribution of blur throughout a scene in a more flexible manner. Generalized depth of field provides a novel tool to emphasize an area of interest within a 3D scene, to select objects from a crowd, and to render a busy, complex picture more understandable by focusing only on relevant details that may be scattered throughout the scene. We present three approaches for rendering generalized depth of field based on nonlinear distributed ray tracing, compositing, and simulated heat diffusion. Each of these methods has a different set of strengths and weaknesses, so it is useful to have all three available. The ray tracing approach allows the amount of blur to vary with depth in an arbitrary way. The compositing method creates a synthetic image with focus and aperture settings that vary per pixel. The diffusion approach provides full generality by allowing each point in 3D space to have an arbitrary amount of blur.

1 Background and Previous Work

1.1 Simulated Depth of Field

A great deal of work has been done in rendering realistic (non-generalized) depth of field effects, e.g. [4, 6, 12, 15, 13, 17]. Distributed ray tracing [4] can be considered a gold standard; at great computational cost, highly accurate simulations of geometric optics can be obtained. For each pixel, a number of rays are chosen to sample the aperture. Accumulation buffer methods [6] provide essentially the same results as distributed ray tracing, but render entire images per aperture sample in order to utilize graphics hardware. Both distributed ray tracing and accumulation buffer methods are quite expensive, so a variety of faster post-process methods have been created [12, 15, 2]. Post-process methods use image filters to blur images originally rendered with everything in perfect focus. They are fast, sometimes to the point of real-time [13, 17, 9], but generally do not match the image quality of distributed ray tracing. A full literature review of depth of field methods is beyond the scope of this paper, but the interested reader should consult the following surveys: [1, 2, 5]. Kosara [8] introduced the notion of semantic depth of field, a notion somewhat similar to generalized depth of field. Semantic depth of field is non-photorealistic depth of field used for visualization purposes; it operates at per-object granularity, allowing each object to have a different amount of blur. Generalized depth of field goes further, allowing each point in space to have a different blur value, so blur can be controlled per pixel rather than only per object. Heat diffusion has previously been shown to be useful in depth of field simulation, by Bertalmio [3] and Kass [7]. However, they simulated traditional depth of field, not generalized depth of field. We introduced generalized depth of field via simulated heat diffusion in [10]. For completeness, the present work contains one section dedicated to the heat diffusion method. Please see [10] for complete details.

1.2 Terminology

The purpose of this section is to explain certain terms that are important in discussions of simulated depth of field. Perhaps the most fundamental concept is that of the point spread function, or PSF. The PSF is the blurred image of a single point of light, and it completely characterizes the appearance of blur. In the terminology of linear systems, the PSF is the impulse response of the lens. Photographers use the Japanese word bokeh to describe the appearance of the out-of-focus parts of a photograph; different PSFs lead to different bokeh. Typical high-quality lenses have a PSF shaped like their diaphragm, i.e. circles or polygons. Computer-generated images, on the other hand, often use Gaussian PSFs.
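Since the PSF is the impulse response of the blur, convolving a single-point image with a candidate PSF reproduces that PSF exactly. The following sketch (illustrative only, not from the paper) contrasts a diaphragm-shaped disc PSF with the Gaussian PSF common in graphics:

```python
import numpy as np

def disc_psf(radius):
    """Disc-shaped PSF: the bokeh of an ideal circular diaphragm."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x * x + y * y <= radius * radius).astype(float)
    return k / k.sum()  # unit energy: blur must conserve light

def gaussian_psf(sigma, radius):
    """Gaussian PSF, a common approximation in computer graphics."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = np.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
    return k / k.sum()

def blur(image, psf):
    """Naive zero-padded 2D convolution: blurring with a PSF."""
    ph, pw = psf.shape
    ih, iw = image.shape
    pad = np.zeros((ih + ph - 1, iw + pw - 1))
    for (dy, dx), w in np.ndenumerate(psf):
        pad[dy:dy + ih, dx:dx + iw] += w * image
    return pad[ph // 2:ph // 2 + ih, pw // 2:pw // 2 + iw]

# The PSF is the impulse response: blurring a single point of light
# reproduces the PSF itself, centered on the point.
impulse = np.zeros((21, 21))
impulse[10, 10] = 1.0
bokeh = blur(impulse, disc_psf(4.0))
```

A disc PSF yields the bright, hard-edged highlights of a real lens, while a Gaussian smears them into a softer, less photographic look.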


International Conference on Computational Science | 2005

New 3D graphics rendering engine architecture for direct tessellation of spline surfaces

Adrian Sfarti; Brian A. Barsky; Todd J. Kosloff; Egon C. Pasztor; Alex Kozlowski; Eric Roman; Alex Perelman

In current 3D graphics architectures, the bus between the triangle server and the rendering engine (the GPU) is clogged with triangle vertices and their many attributes (normal vectors, colors, texture coordinates). We develop a new 3D graphics architecture that uses data compression to unclog this bus. The compression is achieved by replacing the conventional idea of a GPU that renders triangles with a GPU that tessellates surface patches into triangles.
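The compression win can be sketched in miniature (an illustrative sketch with a bicubic Bezier patch; the function names are mine, not the paper's): the bus carries only 16 control points, and the tessellator expands them into a full triangle grid on the GPU side.

```python
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis evaluated at parameter t."""
    s = 1.0 - t
    return np.array([s**3, 3 * s * s * t, 3 * s * t * t, t**3])

def tessellate_patch(ctrl, n=8):
    """Tessellate a bicubic Bezier patch (4x4 grid of 3D control points)
    into an (n+1) x (n+1) vertex grid and 2*n*n triangles: the expansion
    a patch-tessellating GPU performs after the bus transfer."""
    verts = np.empty((n + 1, n + 1, 3))
    for i, u in enumerate(np.linspace(0.0, 1.0, n + 1)):
        bu = bernstein3(u)
        for j, v in enumerate(np.linspace(0.0, 1.0, n + 1)):
            bv = bernstein3(v)
            # Tensor-product evaluation: sum_i sum_j bu[i] bv[j] ctrl[i, j]
            verts[i, j] = np.einsum('i,j,ijk->k', bu, bv, ctrl)
    tris = []  # each triangle is three (row, col) indices into verts
    for i in range(n):
        for j in range(n):
            a, b, c, d = (i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)
            tris.append((a, b, c))
            tris.append((a, c, d))
    return verts, tris
```

At n = 8, 16 control points (48 floats) on the bus become 81 vertices and 128 triangles on the far side, and the tessellation density can be raised without any extra bus traffic.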


Applied Perception in Graphics and Visualization | 2004

An opponent process approach to modeling the blue shift of the human color vision system

Brian A. Barsky; Todd J. Kosloff; Steven D. Upstill

Low light level affects human visual perception in various ways. Visual acuity is reduced and scenes appear bluer, darker, less saturated, and with reduced contrast. We confine our attention to an approach to modeling the appearance of the bluish cast in dim light, which is known as blue shift. Both photographs and computer-generated images of night scenes can be made to appear more realistic by understanding these phenomena as well as how they are produced by the retina.
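The bluish cast can be imitated with a crude tone adjustment. The sketch below is purely illustrative and is not the opponent-process model the paper develops; the Rec. 709 luminance weights are standard, but the `shift` and `desat` parameters are arbitrary choices of mine.

```python
import numpy as np

def night_tone(rgb, shift=0.35, desat=0.5):
    """Illustrative low-light look (not the authors' model): reduce
    saturation toward luminance, then bias the balance toward blue."""
    rgb = np.asarray(rgb, dtype=float)
    w = np.array([0.2126, 0.7152, 0.0722])         # Rec. 709 luminance
    lum = np.sum(rgb * w, axis=-1, keepdims=True)  # shape (..., 1)
    out = (1.0 - desat) * rgb + desat * lum        # desaturate toward gray
    out = out * np.array([1.0 - shift, 1.0 - 0.5 * shift, 1.0])  # blue cast
    return np.clip(out, 0.0, 1.0)
```

Applied to a daylight rendering or photograph, the result reads as a night scene: darker, less saturated, and shifted toward blue, matching the perceptual changes the abstract describes.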


International Conference on Computational Science | 2006

Extensions for 3D graphics rendering engine used for direct tessellation of spline surfaces

Adrian Sfarti; Brian A. Barsky; Todd J. Kosloff; Egon C. Pasztor; Alex Kozlowski; Eric Roman; Alex Perelman

In current 3D graphics architectures, the bus between the triangle server and the rendering engine (the GPU) is clogged with triangle vertices and their many attributes (normal vectors, colors, texture coordinates). We have developed a new 3D graphics architecture that uses data compression to unclog this bus; the architecture is described in [1]. In the present paper we describe further developments of the proposed architecture, including several extensions such as back-surface rejection, real-time NURBS tessellation, and a surface-based API. We also show how the implementation of our architecture operates on top of pixel shaders.
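Back-surface rejection, one of the listed extensions, can be sketched conservatively: a patch is discarded only when it faces entirely away from the eye. This illustrative test uses corner normals and may differ from the paper's actual criterion; a robust implementation would bound the normals over the whole patch, e.g. with a normal cone.

```python
import numpy as np

def backfacing_patch(corner_points, corner_normals, eye):
    """Conservative back-surface rejection sketch: reject a patch only
    if every corner normal points away from the eye.  If any corner
    might face the viewer, the patch is kept and tessellated."""
    pts = np.asarray(corner_points, dtype=float)    # shape (4, 3)
    nrm = np.asarray(corner_normals, dtype=float)   # shape (4, 3)
    view = np.asarray(eye, dtype=float) - pts       # corner -> eye vectors
    dots = np.einsum('ij,ij->i', nrm, view)         # per-corner facing test
    return bool(np.all(dots < 0.0))
```

Rejecting whole patches before tessellation saves far more work than per-triangle culling, since a single rejected patch removes an entire triangle grid.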


Annual Conference on Computers | 2008

Algorithms for rendering depth of field effects in computer graphics

Brian A. Barsky; Todd J. Kosloff


Graphics Interface | 2009

Depth of field postprocessing for layered scenes using constant-time rectangle spreading

Todd J. Kosloff; Michael W. Tao; Brian A. Barsky
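The constant-time rectangle spreading named in the title follows the classic corner-mark and prefix-sum trick: instead of writing a filled rectangle per source pixel, write four signed corner values and integrate once at the end, so each rectangle costs O(1) regardless of its size. Below is a minimal sketch of that primitive only; the paper's layered depth-of-field pipeline around it is not shown.

```python
import numpy as np

def spread_rectangles(h, w, rects):
    """Spread axis-aligned rectangles of constant value, each in O(1):
    mark the four corners with signed values, then reconstruct the
    image with a single 2D prefix sum.
    rects: iterable of (y0, x0, y1, x1, value), half-open bounds."""
    acc = np.zeros((h + 1, w + 1))
    for y0, x0, y1, x1, v in rects:
        acc[y0, x0] += v   # turn the rectangle on ...
        acc[y0, x1] -= v   # ... and off again at its right edge
        acc[y1, x0] -= v   # and at its bottom edge
        acc[y1, x1] += v   # double-subtracted corner restored
    return np.cumsum(np.cumsum(acc, axis=0), axis=1)[:h, :w]
```

For depth-of-field post-processing, every source pixel spreads a rectangle sized by its blur amount, so total cost is proportional to the pixel count, independent of how large the blur gets.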


Archive | 2010

Fast image filters for depth-of-field post-processing

Brian A. Barsky; Todd J. Kosloff


International Conference on Computer Vision Theory and Applications | 2010

Two New Approaches to Depth-of-Field Post-processing: Pyramid Spreading and Tensor Filtering

Todd J. Kosloff; Brian A. Barsky


Archive | 2006

Fifth International Workshop on Computer Graphics and Geometric Modeling (CGGM 2006): Extensions for 3D Graphics Rendering Engine Used for Direct Tessellation of Spline Surfaces

Adrian Sfarti; Brian A. Barsky; Todd J. Kosloff; Egon C. Pasztor; Alex Kozlowski; Eric Roman; Alex Perelman

Collaboration


Top co-authors of Todd J. Kosloff:

Adrian Sfarti (University of California)

Alex Kozlowski (University of California)

Alex Perelman (University of California)

Eric Roman (University of California)

Michael W. Tao (University of California)