Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Maneesh Agrawala is active.

Publication


Featured research published by Maneesh Agrawala.


international conference on computer graphics and interactive techniques | 2004

Digital photography with flash and no-flash image pairs

Georg F. Petschnigg; Richard Szeliski; Maneesh Agrawala; Michael F. Cohen; Hugues Hoppe; Kentaro Toyama

Digital photography has made it possible to quickly and easily take a pair of images of low-light environments: one with flash to capture detail and one without flash to capture ambient illumination. We present a variety of applications that analyze and combine the strengths of such flash/no-flash image pairs. Our applications include denoising and detail transfer (to merge the ambient qualities of the no-flash image with the high-frequency flash detail), white-balancing (to change the color tone of the ambient image), continuous flash (to interactively adjust flash intensity), and red-eye removal (to repair artifacts in the flash image). We demonstrate how these applications can synthesize new images that are of higher quality than either of the originals.
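The denoising and detail-transfer idea can be sketched with a joint (cross) bilateral filter: spatial weights as usual, but range weights computed on the low-noise flash image, so ambient noise is averaged away without blurring across flash-detected edges. A minimal grayscale sketch, with illustrative parameter names and values rather than the paper's implementation:

```python
import math

def joint_bilateral(ambient, flash, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Denoise `ambient` using edges from `flash` (joint/cross bilateral filter).

    Both inputs are 2D lists of floats in [0, 1]. Range weights come from
    the low-noise flash image, so ambient noise does not blur across edges.
    """
    h, w = len(ambient), len(ambient[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        diff = flash[ny][nx] - flash[y][x]  # range term on flash
                        wr = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        num += ws * wr * ambient[ny][nx]
                        den += ws * wr
            out[y][x] = num / den
    return out
```

Run on a noisy step image, this smooths each side of the step while keeping the edge sharp, because the flash-derived range weights drop to near zero across the step.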


international conference on computer graphics and interactive techniques | 2005

Interactive video cutout

Jue Wang; Pravin Bhat; R. Alex Colburn; Maneesh Agrawala; Michael F. Cohen

We present an interactive system for efficiently extracting foreground objects from a video. We extend previous min-cut based image segmentation techniques to the domain of video with four new contributions. We provide a novel painting-based user interface that allows users to easily indicate the foreground object across space and time. We introduce a hierarchical mean-shift preprocess in order to minimize the number of nodes that min-cut must operate on. Within the min-cut we also define new local cost functions to augment the global costs defined in earlier work. Finally, we extend 2D alpha matting methods designed for images to work with 3D video volumes. We demonstrate that our matting approach preserves smoothness across both space and time. Our interactive video cutout system allows users to quickly extract foreground objects from video sequences for use in a variety of applications including compositing onto new backgrounds and NPR cartoon style rendering.
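The min-cut core can be illustrated on a 1D "scanline": unary costs tie pixels to foreground/background seed statistics, pairwise terms penalize label changes between neighbors, and a max-flow computation yields the optimal labeling. This sketch uses plain Edmonds-Karp rather than the specialized graph-cut solvers used in practice, and the cost functions are simplified stand-ins:

```python
from collections import deque, defaultdict

def min_cut_segment(values, fg_seeds, bg_seeds, smoothness=0.3):
    """Binary foreground/background labeling of a 1D scanline by min-cut.

    Unary costs are distances to the seed means; pairwise terms penalize
    label changes between neighbors. Returns True (foreground) per pixel.
    """
    n = len(values)
    fg_mean = sum(values[i] for i in fg_seeds) / len(fg_seeds)
    bg_mean = sum(values[i] for i in bg_seeds) / len(bg_seeds)
    S, T = n, n + 1
    cap = defaultdict(float)
    adj = defaultdict(set)

    def add_edge(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual edge

    INF = float("inf")
    for i, v in enumerate(values):
        add_edge(S, i, INF if i in fg_seeds else abs(v - bg_mean))
        add_edge(i, T, INF if i in bg_seeds else abs(v - fg_mean))
    for i in range(n - 1):
        add_edge(i, i + 1, smoothness)
        add_edge(i + 1, i, smoothness)

    while True:  # Edmonds-Karp: BFS augmenting paths in the residual graph
        parent = {S: None}
        q = deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            break
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= bottleneck
            cap[(v, u)] += bottleneck

    seen = {S}  # source side of the residual graph = foreground
    q = deque([S])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen and cap[(u, v)] > 1e-12:
                seen.add(v)
                q.append(v)
    return [i in seen for i in range(n)]
```

With one bright foreground seed and one dark background seed, the smoothness term pulls the cut to the single best boundary rather than labeling each pixel independently.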


human factors in computing systems | 2006

Gaze-based interaction for semi-automatic photo cropping

Anthony Santella; Maneesh Agrawala; Douglas DeCarlo; David Salesin; Michael F. Cohen

We present an interactive method for cropping photographs given minimal information about important content location, provided by eye tracking. Cropping is formulated in a general optimization framework that facilitates adding new composition rules, and adapting the system to particular applications. Our system uses fixation data to identify important image content and compute the best crop for any given aspect ratio or size, enabling applications such as automatic snapshot recomposition, adaptive documents, and thumbnailing. We validate our approach with studies in which users compare our crops to ones produced by hand and by a completely automatic approach. Experiments show that viewers prefer our gaze-based crops to uncropped images and fully automatic crops.
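The optimization can be caricatured as a brute-force search over candidate crops that rewards captured fixation weight and mildly penalizes crop area. The scoring function here is a hypothetical stand-in; the paper's framework adds real composition rules:

```python
def best_crop(fixations, img_w, img_h, aspect, step=10):
    """Brute-force search for a crop maximizing fixation coverage per area.

    `fixations` is a list of (x, y, weight). The score rewards included
    fixation weight and mildly penalizes crop area, so the crop tightens
    around the gaze-identified content.
    """
    best, best_score = None, float("-inf")
    w = step
    while w <= img_w:
        h = w / aspect
        if h <= img_h:
            for x0 in range(0, int(img_w - w) + 1, step):
                for y0 in range(0, int(img_h - h) + 1, step):
                    inside = sum(wt for fx, fy, wt in fixations
                                 if x0 <= fx <= x0 + w and y0 <= fy <= y0 + h)
                    score = inside - 0.5 * (w * h) / (img_w * img_h)
                    if score > best_score:
                        best_score, best = score, (x0, y0, w, h)
        w += step
    return best
```

Because excluding any fixation costs more than the largest possible area penalty, the best crop always covers the fixation cluster, and the penalty shrinks it around that cluster.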


international conference on computer graphics and interactive techniques | 2001

Rendering effective route maps: improving usability through generalization

Maneesh Agrawala; Chris Stolte

Route maps, which depict a path from one location to another, have emerged as one of the most popular applications on the Web. Current computer-generated route maps, however, are often very difficult to use. In this paper we present a set of cartographic generalization techniques specifically designed to improve the usability of route maps. Our generalization techniques are based both on cognitive psychology research studying how route maps are used and on an analysis of the generalizations commonly found in hand-drawn route maps. We describe algorithmic implementations of these generalization techniques within LineDrive, a real-time system for automatically designing and rendering route maps. Feedback from over 2200 users indicates that almost all believe LineDrive maps are preferable to using standard computer-generated route maps alone.
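One of the key generalizations, growing short streets and compressing long highways while preserving each turn's direction, can be sketched with a simple monotone (logarithmic) length remapping. The constants and the exact mapping are illustrative, not LineDrive's:

```python
import math

def generalize_route(segments, min_len=20.0):
    """Rescale route segments LineDrive-style: compress long roads and
    grow short ones (log scaling) while keeping each turn's direction.

    `segments` is a list of (angle_radians, length) pairs; returns the
    generalized polyline as a list of (x, y) points starting at (0, 0).
    """
    pts = [(0.0, 0.0)]
    x = y = 0.0
    for angle, length in segments:
        new_len = min_len + 40.0 * math.log1p(length / min_len)  # monotone, compressive
        x += new_len * math.cos(angle)
        y += new_len * math.sin(angle)
        pts.append((x, y))
    return pts
```

A 10-unit side street and a 10,000-unit highway (a 1000:1 ratio) both stay readable on the page, while the highway is still drawn longer than the side street.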


international conference on computer graphics and interactive techniques | 1997

The two-user Responsive Workbench: support for collaboration through individual views of a shared space

Maneesh Agrawala; Andrew C. Beers; Ian E. McDowall; Bernd Fröhlich; Mark T. Bolas; Pat Hanrahan

We present the two-user Responsive Workbench: a projection-based virtual reality system that allows two people to simultaneously view individual stereoscopic image pairs from their own viewpoints. The system tracks the head positions of both users and computes four images, one for each eye of each person. To display the four images as two stereo pairs, we must ensure each image is correctly presented to the appropriate eye. We describe a hardware solution to this display problem as well as registration and calibration procedures. These procedures ensure that when two users point to the same location on a virtual object, their fingers will physically touch. Since the stereo pairs are independent, we have the option of displaying specialized views of the shared virtual environment to each user. We present several scenarios in which specialized views might be useful.


human factors in computing systems | 2006

Hover widgets: using the tracking state to extend the capabilities of pen-operated devices

Tovi Grossman; Ken Hinckley; Patrick Baudisch; Maneesh Agrawala; Ravin Balakrishnan

We present Hover Widgets, a new technique for increasing the capabilities of pen-based interfaces. Hover Widgets are implemented by using the pen movements above the display surface, in the tracking state. Short gestures while hovering, followed by a pen down, access the Hover Widgets, which can be used to activate localized interface widgets. By using the tracking state movements, Hover Widgets create a new command layer which is clearly distinct from the input layer of a pen interface. In a formal experiment Hover Widgets were found to be faster than a more traditional command activation technique, and also reduced errors due to divided attention.
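A Hover Widget activation check can be caricatured as matching the hover trace against a direction pattern (e.g. an "L": right, then down) before the pen-down. The jitter threshold and pattern encoding here are a hypothetical sketch, not the paper's recognizer:

```python
def detect_hover_gesture(samples, pattern=("right", "down"), min_move=5.0):
    """Check whether a hover trace matches a direction pattern, as a
    stand-in for Hover Widget activation. `samples` is a list of (x, y)
    hover positions; the pen is assumed to stay in the tracking state
    throughout, with activation on the subsequent pen-down.
    """
    def direction(dx, dy):
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"

    strokes = []
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        dx, dy = x1 - x0, y1 - y0
        if dx * dx + dy * dy < min_move ** 2:
            continue  # ignore hover jitter below the movement threshold
        d = direction(dx, dy)
        if not strokes or strokes[-1] != d:
            strokes.append(d)  # merge consecutive moves in the same direction
    return tuple(strokes) == tuple(pattern)
```

Requiring a distinctive multi-stroke pattern is what keeps ordinary hover movement from triggering the command layer by accident.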


international conference on computer graphics and interactive techniques | 2007

Multiscale shape and detail enhancement from multi-light image collections

Raanan Fattal; Maneesh Agrawala; Szymon Rusinkiewicz

We present a new image-based technique for enhancing the shape and surface details of an object. The input to our system is a small set of photographs taken from a fixed viewpoint, but under varying lighting conditions. For each image we compute a multiscale decomposition based on the bilateral filter and then reconstruct an enhanced image that combines detail information at each scale across all the input images. Our approach does not require any information about light source positions or camera calibration, and can produce good results with 3 to 5 input images. In addition, our system provides a few high-level parameters for controlling the amount of enhancement and does not require pixel-level user input. We show that the bilateral filter is a good choice for our multiscale algorithm because it avoids the halo artifacts commonly associated with the traditional Laplacian image pyramid. We also develop a new scheme for computing our multiscale bilateral decomposition that is simple to implement, fast (O(N² log N)), and accurate.
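The decomposition can be sketched in 1D: repeatedly smooth with a bilateral filter whose sigmas double per level, record the detail removed at each level, then recombine with boosted detail. The edge-preserving filter is what keeps the boost from producing halos at strong edges. Levels, sigmas, and boost factor are illustrative:

```python
import math

def bilateral_1d(sig, sigma_s, sigma_r, radius):
    """Edge-preserving smoothing of a 1D signal (naive bilateral filter)."""
    out = []
    for i, v in enumerate(sig):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(sig), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((sig[j] - v) ** 2) / (2 * sigma_r ** 2))
            num += w * sig[j]
            den += w
        out.append(num / den)
    return out

def enhance(sig, levels=3, boost=2.0):
    """Multiscale detail enhancement: smooth with doubling sigmas per
    level, collect the detail removed at each level, then reconstruct
    the signal with boosted details."""
    base, details = sig, []
    for lvl in range(levels):
        smoothed = bilateral_1d(base, sigma_s=2.0 * 2 ** lvl,
                                sigma_r=0.2, radius=2 * 2 ** lvl)
        details.append([b - s for b, s in zip(base, smoothed)])
        base = smoothed
    out = base
    for d in details:
        out = [o + boost * di for o, di in zip(out, d)]
    return out
```

Small ripples come back amplified, while flat regions pass through unchanged and large steps are treated as edges rather than detail.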


international conference on computer graphics and interactive techniques | 2008

Interactive 3D architectural modeling from unordered photo collections

Sudipta N. Sinha; Drew Steedly; Richard Szeliski; Maneesh Agrawala; Marc Pollefeys

We present an interactive system for generating photorealistic, textured, piecewise-planar 3D models of architectural structures and urban scenes from unordered sets of photographs. To reconstruct 3D geometry in our system, the user draws outlines overlaid on 2D photographs. The 3D structure is then automatically computed by combining the 2D interaction with the multi-view geometric information recovered by performing structure from motion analysis on the input photographs. We utilize vanishing point constraints at multiple stages during the reconstruction, which is particularly useful for architectural scenes where parallel lines are abundant. Our approach enables us to accurately model polygonal faces from 2D interactions in a single image. Our system also supports useful operations such as edge snapping and extrusions. Seamless texture maps are automatically generated by combining multiple input photographs using graph cut optimization and Poisson blending. The user can add brush strokes as hints during the texture generation stage to remove artifacts caused by unmodeled geometric structures. We build models for a variety of architectural scenes from collections of up to about a hundred photographs.
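The vanishing-point constraint rests on standard projective geometry: in homogeneous coordinates, the cross product of two points gives the line through them, and the cross product of two lines gives their intersection. So two image segments that are parallel in 3D meet at their vanishing point:

```python
def cross(a, b):
    """Cross product of two homogeneous 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vanishing_point(seg1, seg2):
    """Vanishing point of two image segments assumed parallel in 3D.

    Each segment is a pair of (x, y) endpoints; endpoints are lifted to
    homogeneous coordinates, crossed to get lines, and the lines are
    crossed to get their intersection.
    """
    l1 = cross((*seg1[0], 1.0), (*seg1[1], 1.0))
    l2 = cross((*seg2[0], 1.0), (*seg2[1], 1.0))
    x, y, w = cross(l1, l2)
    return (x / w, y / w)  # w near zero means the segments are parallel in the image
```

Averaging many such intersections (robustly, since segment detections are noisy) is the usual way such systems estimate a scene's dominant vanishing points.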


visual analytics science and technology | 2007

Design Considerations for Collaborative Visual Analytics

Jeffrey Heer; Maneesh Agrawala

Information visualization leverages the human visual system to support the process of sensemaking, in which information is collected, organized, and analyzed to generate knowledge and inform action. Though most research to date assumes a single-user focus on perceptual and cognitive processes, in practice, sensemaking is often a social process involving parallelization of effort, discussion, and consensus building. This suggests that to fully support sensemaking, interactive visualization should also support social interaction. However, the most appropriate collaboration mechanisms for supporting this interaction are not immediately clear. In this article, we present design considerations for asynchronous collaboration in visual analysis environments, highlighting issues of work parallelization, communication, and social organization. These considerations provide a guide for the design and evaluation of collaborative visualization systems.


international conference on computer graphics and interactive techniques | 1996

Rendering from compressed textures

Andrew C. Beers; Maneesh Agrawala; Navin Chaddha

We present a simple method for rendering directly from compressed textures in hardware and software rendering systems. Textures are compressed using a vector quantization (VQ) method. The advantage of VQ over other compression techniques is that textures can be decompressed quickly during rendering. The drawback of using lossy compression schemes such as VQ for textures is that such methods introduce errors into the textures. We discuss techniques for controlling these losses. We also describe an extension to the basic VQ technique for compressing mipmaps. We have observed compression rates of up to 35:1, with minimal loss in visual quality and a small impact on rendering time. The simplicity of our technique lends itself to an efficient hardware implementation.
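The decompression path that makes VQ attractive for rendering is just a table lookup per block. A toy round trip, with 2x2 grayscale blocks clustered by k-means, as a simplified stand-in for the paper's VQ scheme:

```python
import random

def vq_compress(texture, codebook_size=4, iters=10, seed=0):
    """Toy VQ texture compression: split a grayscale texture into 2x2
    blocks, cluster them with k-means (Lloyd iterations), and store one
    codebook index per block.
    """
    h, w = len(texture), len(texture[0])
    blocks = []
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            blocks.append([texture[y][x], texture[y][x + 1],
                           texture[y + 1][x], texture[y + 1][x + 1]])
    rng = random.Random(seed)
    codebook = [list(b) for b in rng.sample(blocks, min(codebook_size, len(blocks)))]

    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    for _ in range(iters):
        assign = [min(range(len(codebook)), key=lambda k: dist(b, codebook[k]))
                  for b in blocks]
        for k in range(len(codebook)):
            members = [b for b, a in zip(blocks, assign) if a == k]
            if members:
                codebook[k] = [sum(col) / len(members) for col in zip(*members)]
    indices = [min(range(len(codebook)), key=lambda k: dist(b, codebook[k]))
               for b in blocks]
    return codebook, indices

def vq_decompress(codebook, indices, w):
    """Rebuild the texture: each index fetches its 2x2 codeword.
    This lookup is the entire per-texel decompression cost at render time."""
    bw = w // 2
    h = (len(indices) // bw) * 2
    out = [[0.0] * w for _ in range(h)]
    for i, idx in enumerate(indices):
        by, bx = (i // bw) * 2, (i % bw) * 2
        cw = codebook[idx]
        out[by][bx], out[by][bx + 1] = cw[0], cw[1]
        out[by + 1][bx], out[by + 1][bx + 1] = cw[2], cw[3]
    return out
```

Storage drops from four texels per block to one small index plus a shared codebook, which is where the reported compression rates come from; larger textures with fewer codewords trade quality for ratio.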

Collaboration


Dive into Maneesh Agrawala's collaborations.

Top Co-Authors


David Salesin

University of Washington


Jeffrey Heer

University of Washington


Brian Curless

University of Washington
