Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lavanya Sharan is active.

Publication


Featured research published by Lavanya Sharan.


Nature | 2007

Image statistics and the perception of surface qualities

Isamu Motoyoshi; Shin'ya Nishida; Lavanya Sharan; Edward H. Adelson

The world is full of surfaces, and by looking at them we can judge their material qualities. Properties such as colour or glossiness can help us decide whether a pancake is cooked, or a patch of pavement is icy. Most studies of surface appearance have emphasized textureless matte surfaces, but real-world surfaces, which may have gloss and complex mesostructure, are now receiving increased attention. Their appearance results from a complex interplay of illumination, reflectance and surface geometry, which are difficult to tease apart given an image. If there were simple image statistics that were diagnostic of surface properties it would be sensible to use them. Here we show that the skewness of the luminance histogram and the skewness of sub-band filter outputs are correlated with surface gloss and inversely correlated with surface albedo (diffuse reflectance). We find evidence that human observers use skewness, or a similar measure of histogram asymmetry, in making judgements about surfaces. When the image of a surface has positively skewed statistics, it tends to appear darker and glossier than a similar surface with lower skewness, and this is true whether the skewness is inherent to the original image or is introduced by digital manipulation. We also find a visual after-effect based on skewness: adaptation to patterns with skewed statistics can alter the apparent lightness and glossiness of surfaces that are subsequently viewed. We suggest that there are neural mechanisms sensitive to skewed statistics, and that their outputs can be used in estimating surface properties.
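
For readers who want a concrete sense of the statistic involved, the sketch below computes the skewness of an image's luminance distribution and of one band-pass sub-band, roughly in the spirit of the measurements described above. It is a minimal illustration, not the paper's code: the Rec. 709 luma weights, the Laplacian-of-Gaussian sub-band, and all function and variable names are assumptions made here.

# Hypothetical sketch: skewness of the luminance histogram and of one
# band-pass (sub-band) filtered image, for an RGB float image in [0, 1].
import numpy as np
from scipy.stats import skew
from scipy.ndimage import gaussian_laplace

def surface_skewness_stats(rgb, sigma=2.0):
    """Return (luminance skewness, sub-band skewness) for one image."""
    # Rec. 709 luma weights as a simple luminance approximation.
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    # Skewness of the pixel luminance distribution (histogram asymmetry).
    lum_skew = skew(lum.ravel())
    # One band-pass sub-band via a Laplacian-of-Gaussian filter.
    subband = gaussian_laplace(lum, sigma=sigma)
    subband_skew = skew(subband.ravel())
    return lum_skew, subband_skew

# Toy usage: an image with sparse bright, highlight-like spots on a darker base
# should yield higher (more positive) skewness than a symmetric matte one.
rng = np.random.default_rng(0)
matte = rng.normal(0.5, 0.1, size=(128, 128, 3)).clip(0, 1)
glossy = matte.copy()
glossy[rng.random((128, 128)) > 0.99] = 1.0  # add sparse specular-like spots
print(surface_skewness_stats(matte))
print(surface_skewness_stats(glossy))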


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2010

Exploring features in a Bayesian framework for material recognition

Ce Liu; Lavanya Sharan; Edward H. Adelson; Ruth Rosenholtz

We are interested in identifying the material category, e.g. glass, metal, fabric, plastic or wood, from a single image of a surface. Unlike other visual recognition tasks in computer vision, it is difficult to find good, reliable features that can tell material categories apart. Our strategy is to use a rich set of low and mid-level features that capture various aspects of material appearance. We propose an augmented Latent Dirichlet Allocation (aLDA) model to combine these features under a Bayesian generative framework and learn an optimal combination of features. Experimental results show that our system performs material recognition reasonably well on a challenging material database, outperforming state-of-the-art material/texture recognition systems.
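
As a rough illustration of the kind of pipeline the abstract describes, the sketch below quantizes local features into visual words, models each image as a mixture of latent topics using scikit-learn's standard LDA (a simplified stand-in for the paper's augmented aLDA model), and then classifies the topic mixtures. The synthetic descriptors, vocabulary size, topic count, and classifier are all illustrative assumptions, not the paper's settings.

# Simplified stand-in for a bag-of-words + topic-model material classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Pretend each image yields 200 local descriptors of dimension 16
# (in practice these would be color, texture, or curvature features).
def fake_descriptors(label, n=200, dim=16):
    return rng.normal(loc=label, scale=1.0, size=(n, dim))

labels = np.repeat(np.arange(3), 20)           # 3 material classes, 20 images each
images = [fake_descriptors(y) for y in labels]

# 1. Build a visual vocabulary by clustering all descriptors.
vocab = KMeans(n_clusters=50, n_init=4, random_state=0).fit(np.vstack(images))

# 2. Represent each image as a histogram of visual-word counts.
def word_histogram(desc):
    return np.bincount(vocab.predict(desc), minlength=50)

counts = np.array([word_histogram(d) for d in images])

# 3. Learn latent topics over the word counts; describe each image by its topic mixture.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
topic_mix = lda.fit_transform(counts)

# 4. Classify the topic mixtures (the paper instead learns the feature
#    combination inside its generative aLDA model).
clf = SVC(kernel="linear").fit(topic_mix, labels)
print("training accuracy:", clf.score(topic_mix, labels))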


International Journal of Computer Vision | 2013

Recognizing Materials Using Perceptually Inspired Features

Lavanya Sharan; Ce Liu; Ruth Rosenholtz; Edward H. Adelson

Our world consists not only of objects and scenes but also of materials of various kinds. Being able to recognize the materials that surround us (e.g., plastic, glass, concrete) is important for humans as well as for computer vision systems. Unfortunately, materials have received little attention in the visual recognition literature, and very few computer vision systems have been designed specifically to recognize materials. In this paper, we present a system for recognizing material categories from single images. We propose a set of low and mid-level image features that are based on studies of human material recognition, and we combine these features using an SVM classifier. Our system outperforms a state-of-the-art system (Varma and Zisserman, TPAMI 31(11):2032–2047, 2009) on a challenging database of real-world material categories (Sharan et al., J Vis 9(8):784–784a, 2009). When the performance of our system is compared directly to that of human observers, humans outperform our system quite easily. However, when we account for the local nature of our image features and the surface properties they measure (e.g., color, texture, local shape), our system rivals human performance. We suggest that future progress in material recognition will come from: (1) a deeper understanding of the role of non-local surface properties (e.g., extended highlights, object identity); and (2) efforts to model such non-local surface properties in images.
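
The sketch below shows, in simplified form, the "image features plus SVM classifier" structure that the abstract describes. The two features used here, per-channel color histograms and a histogram of gradient magnitudes as a crude texture cue, are stand-ins chosen for brevity and are not the paper's perceptually inspired feature set.

# Hedged sketch: hand-crafted features concatenated and fed to an SVM.
import numpy as np
from sklearn.svm import SVC

def image_features(rgb, bins=16):
    """Concatenate per-channel color histograms with a gradient-magnitude histogram."""
    color = [np.histogram(rgb[..., c], bins=bins, range=(0, 1), density=True)[0]
             for c in range(3)]
    gray = rgb.mean(axis=-1)
    gy, gx = np.gradient(gray)
    grad = np.histogram(np.hypot(gx, gy), bins=bins, range=(0, 1), density=True)[0]
    return np.concatenate(color + [grad])

# Toy data: two "material classes" with different color and texture statistics.
rng = np.random.default_rng(1)
def toy_image(cls):
    lo, hi = (0.2, 0.4) if cls == 0 else (0.5, 0.9)
    return rng.uniform(lo, hi, (64, 64, 3))

X = np.array([image_features(toy_image(c)) for c in [0, 1] * 30])
y = np.array([0, 1] * 30)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))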


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2008

Image statistics for surface reflectance perception

Lavanya Sharan; Yuanzhen Li; Isamu Motoyoshi; Shin'ya Nishida; Edward H. Adelson

Human observers can distinguish the albedo of real-world surfaces even when the surfaces are viewed in isolation, contrary to the Gelb effect. We sought to measure this ability and to understand the cues that might underlie it. We took photographs of complex surfaces such as stucco and asked observers to judge their diffuse reflectance by comparing them to a physical Munsell scale. Their judgments, while imperfect, were highly correlated with the true reflectance. The judgments were also highly correlated with certain image statistics, such as moment and percentile statistics of the luminance and subband histograms. When we digitally manipulated these statistics in an image, human judgments were correspondingly altered. Moreover, linear combinations of such statistics allow a machine vision system (operating within the constrained world of single surfaces) to estimate albedo with an accuracy similar to that of human observers. Taken together, these results indicate that some simple image statistics have a strong influence on the judgment of surface reflectance.
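
To make these statistics concrete, the sketch below computes moment and percentile statistics of a luminance image and one sub-band, then fits a linear combination of those statistics to predict albedo on synthetic data. The synthetic "surfaces", the particular statistics chosen, and all names are assumptions for illustration only, not the paper's actual cues or dataset.

# Illustrative sketch: moment and percentile statistics, combined linearly
# to estimate diffuse reflectance on synthetic single-surface images.
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.ndimage import gaussian_laplace

def histogram_statistics(lum):
    """Moment and percentile statistics of an image and one of its sub-bands."""
    subband = gaussian_laplace(lum, sigma=2.0)
    feats = []
    for x in (lum.ravel(), subband.ravel()):
        feats += [x.mean(), x.std(), skew(x), kurtosis(x)]   # moment statistics
        feats += list(np.percentile(x, [10, 50, 90]))         # percentile statistics
    return np.array(feats)

# Synthetic training set: surfaces with known albedo under multiplicative "shading".
rng = np.random.default_rng(2)
albedos = rng.uniform(0.1, 0.9, 100)
images = [a * rng.lognormal(mean=0.0, sigma=0.3, size=(64, 64)) for a in albedos]
X = np.array([histogram_statistics(im) for im in images])

# Linear combination of statistics, echoing the abstract's machine-vision estimate.
A = np.column_stack([X, np.ones(len(X))])
coeffs, *_ = np.linalg.lstsq(A, albedos, rcond=None)
pred = A @ coeffs
print("correlation with true albedo:", np.corrcoef(pred, albedos)[0, 1])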


Journal of Vision | 2014

Accuracy and speed of material categorization in real-world images.

Lavanya Sharan; Ruth Rosenholtz; Edward H. Adelson

It is easy to visually distinguish a ceramic knife from one made of steel, a leather jacket from one made of denim, and a plush toy from one made of plastic. Most studies of material appearance have focused on the estimation of specific material properties such as albedo or surface gloss, and as a consequence, almost nothing is known about how we recognize material categories like leather or plastic. We have studied judgments of high-level material categories with a diverse set of real-world photographs, and we have shown (Sharan, 2009) that observers can categorize materials reliably and quickly. Performance on our tasks cannot be explained by simple differences in color, surface shape, or texture. Nor can the results be explained by observers merely performing shape-based object recognition. Rather, we argue that fast and accurate material categorization is a distinct, basic ability of the visual system.


Journal of Vision | 2010

Do colored highlights look like highlights?

Shin'ya Nishida; Isamu Motoyoshi; Lisa Nakano; Yuanzhen Li; Lavanya Sharan; Edward H. Adelson

Case IV: When a colored specular component was combined with a white diffuse component (e.g., red on white), the surface images looked somewhat strange. They looked less glossy, and more importantly, did not appear to have a uniform reflectance. Colored highlight regions appeared to be spatially segregated from the surrounding white-body regions, as if pieces of colored foil were attached to a white matte surface.


ACM Transactions on Applied Perception | 2015

The Perception of Lighting Inconsistencies in Composite Outdoor Scenes

Minghui Tan; Jean-François Lalonde; Lavanya Sharan; Holly E. Rushmeier; Carol O'Sullivan

It is known that humans can be insensitive to large changes in illumination. For example, if an object of interest is extracted from one digital photograph and inserted into another, we do not always notice the differences in illumination between the object and its new background. This inability to spot illumination inconsistencies is often the key to success in digital “doctoring” operations. We present a set of experiments in which we explore the perception of illumination in outdoor scenes. Our results can be used to predict when and why inconsistencies go unnoticed. Applications of the knowledge gained from our studies include smarter digital “cut-and-paste” and digital “fake” detection tools, and image-based composite scene backgrounds for layout and previsualization.


ACM Transactions on Applied Perception | 2010

Perceptually motivated guidelines for voice synchronization in film

Elizabeth J. Carter; Lavanya Sharan; Laura C. Trutoiu; Iain A. Matthews; Jessica K. Hodgins

We consume video content in a multitude of ways, including in movie theaters, on television, on DVDs and Blu-rays, online, on smartphones, and on portable media players. For quality control purposes, it is important to have a uniform viewing experience across these various platforms. In this work, we focus on voice synchronization, an aspect of video quality that is strongly affected by current post-production and transmission practices. We examined the synchronization of an actor's voice and lip movements in two distinct scenarios. First, we simulated the temporal mismatch between the audio and video tracks that can occur during dubbing or during broadcast. Next, we recreated the pitch changes that result from conversions between formats with different frame rates. We show, for the first time, that these audiovisual mismatches affect viewer enjoyment. When temporal synchronization is noticeably absent, there is a decrease in the perceived performance quality and the perceived emotional intensity of a performance. For pitch changes, we find that higher-pitched voices are not preferred, especially for male actors. Based on our findings, we conclude that mismatched audio and video signals negatively affect the viewer experience.
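
As a small worked example of the frame-rate-induced pitch change mentioned above, the snippet below computes the pitch shift implied by playing 24 fps material back at 25 fps (the common PAL speed-up). The specific frame rates are an assumption for illustration; the paper's exact conditions may differ.

# Pitch shift implied by a frame-rate conversion, in semitones.
import math

source_fps, playback_fps = 24.0, 25.0
speed_factor = playback_fps / source_fps            # audio plays ~4.2% faster
pitch_shift_semitones = 12 * math.log2(speed_factor)
print(f"speed factor: {speed_factor:.4f}")
print(f"pitch shift: {pitch_shift_semitones:+.2f} semitones")
# -> about +0.71 semitones, a shift listeners can notice.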


ACM SIGGRAPH (International Conference on Computer Graphics and Interactive Techniques) | 2005

Compressing and companding high dynamic range images with subband architectures

Yuanzhen Li; Lavanya Sharan; Edward H. Adelson


Journal of Vision | 2010

Material perception: What can you see in a brief glance?

Lavanya Sharan; Ruth Rosenholtz; Edward H. Adelson

Collaboration


Dive into Lavanya Sharan's collaborations.

Top Co-Authors

Edward H. Adelson
Massachusetts Institute of Technology

Ruth Rosenholtz
Massachusetts Institute of Technology

Yuanzhen Li
Massachusetts Institute of Technology

Shin'ya Nishida
Nippon Telegraph and Telephone