
Publication


Featured research published by Roland W. Fleming.


Journal of Vision | 2003

Real-world illumination and the perception of surface reflectance properties

Roland W. Fleming; Ron O. Dror; Edward H. Adelson

Under typical viewing conditions, we find it easy to distinguish between different materials, such as metal, plastic, and paper. Recognizing materials from their surface reflectance properties (such as lightness and gloss) is a nontrivial accomplishment because of confounding effects of illumination. However, if subjects have tacit knowledge of the statistics of illumination encountered in the real world, then it is possible to reject unlikely image interpretations, and thus to estimate surface reflectance even when the precise illumination is unknown. A surface reflectance matching task was used to measure the accuracy of human surface reflectance estimation. The results of the matching task demonstrate that subjects can match surface reflectance properties reliably and accurately in the absence of context, as long as the illumination is realistic. Matching performance declines when the illumination statistics are not representative of the real world. Together these findings suggest that subjects do use stored assumptions about the statistics of real-world illumination to estimate surface reflectance. Systematic manipulations of pixel and wavelet properties of illuminations reveal that the visual system's assumptions about illumination are of intermediate complexity (e.g., presence of edges and bright light sources), rather than of high complexity (e.g., presence of recognizable objects in the environment).


International Conference on Computer Graphics and Interactive Techniques | 2005

Image-based material editing

Erum Arif Khan; Erik Reinhard; Roland W. Fleming; Heinrich H. Bülthoff

Photo editing software allows digital images to be blurred, warped or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a method for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image as input. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies, while being sensitive to others. By adjusting our simulations to be careful about those aspects to which the human visual system is sensitive, we are for the first time able to demonstrate significant material changes on the basis of a single photograph as input.


International Conference on Computer Graphics and Interactive Techniques | 2007

Do HDR displays support LDR content?: a psychophysical evaluation

Ahmet Oğuz Akyüz; Roland W. Fleming; Bernhard E. Riecke; Erik Reinhard; Heinrich H. Bülthoff

The development of high dynamic range (HDR) imagery has brought us to the verge of arguably the largest change in image display technologies since the transition from black-and-white to color television. Novel capture and display hardware will soon enable consumers to enjoy the HDR experience in their own homes. The question remains, however, of what to do with existing images and movies, which are intrinsically low dynamic range (LDR). Can this enormous volume of legacy content also be displayed effectively on HDR displays? We have carried out a series of rigorous psychophysical investigations to determine how LDR images are best displayed on a state-of-the-art HDR monitor, and to identify which stages of the HDR imaging pipeline are perceptually most critical. Our main findings are: (1) As expected, HDR displays outperform LDR ones. (2) Surprisingly, HDR images that are tone-mapped for display on standard monitors are often no better than the best single LDR exposure from a bracketed sequence. (3) Most importantly of all, LDR data does not necessarily require sophisticated treatment to produce a compelling HDR experience. Simply boosting the range of an LDR image linearly to fit the HDR display can equal or even surpass the appearance of a true HDR image. Thus the potentially tricky process of inverse tone mapping can be largely circumvented.
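The "simply boosting the range of an LDR image linearly" idea described above can be sketched in a few lines (a minimal sketch, assuming a normalized LDR image in [0, 1]; the function name and the 3000 cd/m² peak luminance are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def linear_boost(ldr, peak_luminance=3000.0):
    """Linearly scale a normalized LDR image (values in [0, 1]) so that
    white maps to the HDR display's peak luminance.

    This illustrates the simple linear expansion the abstract refers to,
    not the authors' exact procedure; peak_luminance is a hypothetical
    display maximum in cd/m^2."""
    ldr = np.clip(ldr, 0.0, 1.0)   # guard against out-of-range input
    return ldr * peak_luminance    # stretch to the display's full range

# Example: a mid-grey pixel maps to half the display's peak luminance.
img = np.array([[0.0, 0.5, 1.0]])
hdr = linear_boost(img, peak_luminance=3000.0)
```

The point of the sketch is how little machinery is involved, which is exactly the paper's surprise: such a trivial mapping can rival a true HDR image.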


ACM Transactions on Applied Perception | 2005

Low-Level Image Cues in the Perception of Translucent Materials

Roland W. Fleming; Heinrich H. Bülthoff

When light strikes a translucent material (such as wax, milk or fruit flesh), it enters the body of the object, scatters and reemerges from the surface. The diffusion of light through translucent materials gives them a characteristic visual softness and glow. What image properties underlie this distinctive appearance? What cues allow us to tell whether a surface is translucent or opaque? Previous work on the perception of semitransparent materials was based on a very restricted physical model of thin filters [Metelli 1970; 1974a,b]. However, recent advances in computer graphics [Jensen et al. 2001; Jensen and Buhler 2002] allow us to efficiently simulate the complex subsurface light transport effects that occur in real translucent objects. Here we use this model to study the perception of translucency, using a combination of psychophysics and image statistics. We find that many of the cues that were traditionally thought to be important for semitransparent filters (e.g., X-junctions) are not relevant for solid translucent objects. We discuss the role of highlights, color, object size, contrast, blur, and lighting direction in the perception of translucency. We argue that the physics of translucency are too complex for the visual system to estimate intrinsic physical parameters by inverse optics. Instead, we suggest that we identify translucent materials by parsing them into key regions and by gathering image statistics from these regions.


International Conference on Computer Graphics and Interactive Techniques | 2009

Evaluation of reverse tone mapping through varying exposure conditions

Belen Masia; Sandra Agustin; Roland W. Fleming; Olga Sorkine; Diego Gutierrez

Most existing image content has low dynamic range (LDR), which necessitates effective methods to display such legacy content on high dynamic range (HDR) devices. Reverse tone mapping operators (rTMOs) aim to take LDR content as input and adjust the contrast intelligently to yield output that recreates the HDR experience. In this paper we show that current rTMO approaches fall short when the input image is not exposed properly. More specifically, we report a series of perceptual experiments using a Brightside HDR display and show that, while existing rTMOs perform well for under-exposed input data, the perceived quality degrades substantially with over-exposure, to the extent that in some cases subjects prefer the LDR originals to images that have been treated with rTMOs. We show that, in these cases, a simple rTMO based on gamma expansion avoids the errors introduced by other methods, and propose a method to automatically set a suitable gamma value for each image, based on the image key and empirical data. We validate the results both by means of perceptual experiments and using a recent image quality metric, and show that this approach enhances visible details without causing artifacts in incorrectly-exposed regions. Additionally, we perform another set of experiments which suggest that spatial artifacts introduced by rTMOs are more disturbing than inaccuracies in the expanded intensities. Together, these findings suggest that when the quality of the input data is unknown, reverse tone mapping should be handled with simple, non-aggressive methods to achieve the desired effect.


Psychological Science | 2011

Visual Perception of Thick Transparent Materials

Roland W. Fleming; Frank Jäkel; Laurence T. Maloney

Under typical viewing conditions, human observers readily distinguish between materials such as silk, marmalade, or granite, an achievement of the visual system that is poorly understood. Recognizing transparent materials is especially challenging. Previous work on the perception of transparency has focused on objects composed of flat, infinitely thin filters. In the experiments reported here, we considered thick transparent objects, such as ice cubes, which are irregular in shape and can vary in refractive index. An important part of the visual evidence signaling the presence of such objects is distortions in the perceived shape of other objects in the scene. We propose a new class of visual cues derived from the distortion field induced by thick transparent objects, and we provide experimental evidence that cues arising from the distortion field predict both the successes and the failures of human perception in judging refractive indices.


Current Biology | 2011

Visual Motion and the Perception of Surface Material

Katja Doerschner; Roland W. Fleming; Ozgur Yilmaz; Paul R. Schrater; Bruce Hartung; Daniel Kersten

Many critical perceptual judgments, from telling whether fruit is ripe to determining whether the ground is slippery, involve estimating the material properties of surfaces. Very little is known about how the brain recognizes materials, even though the problem is likely as important for survival as navigating or recognizing objects. Though previous research has focused nearly exclusively on the properties of static images, recent evidence suggests that motion may affect the appearance of surface material. However, what kind of information motion conveys and how this information may be used by the brain is still unknown. Here, we identify three motion cues that the brain could rely on to distinguish between matte and shiny surfaces. We show that these motion measurements can override static cues, leading to dramatic changes in perceived material depending on the image motion characteristics. A classifier algorithm based on these cues correctly predicts both successes and some striking failures of human material perception. Together these results reveal a previously unknown use for optic flow in the perception of surface material properties.


Journal of Vision | 2013

Perceptual qualities and material classes.

Roland W. Fleming; Christiane B. Wiebel; Karl R. Gegenfurtner

Under typical viewing conditions, we can easily group materials into distinct classes (e.g., woods, plastics, textiles). Additionally, we can also make many other judgments about material properties (e.g., hardness, rigidity, colorfulness). Although these two types of judgment (classification and inferring material properties) have different requirements, they likely facilitate one another. We conducted two experiments to investigate the interactions between material classification and judgments of material qualities in both the visual and semantic domains. In Experiment 1, nine students viewed 130 images of materials from 10 different classes. For each image, they rated nine subjective properties (glossiness, transparency, colorfulness, roughness, hardness, coldness, fragility, naturalness, prettiness). In Experiment 2, 65 subjects were given the verbal names of six material classes, which they rated in terms of 42 adjectives describing material qualities. In both experiments, there was notable agreement between subjects, and a relatively small number of factors (weighted combinations of different qualities) were substantially independent of one another. Despite the difficulty of classifying materials from images (Liu, Sharan, Adelson, & Rosenholtz, 2010), the different classes were well clustered in the feature space defined by the subjective ratings. K-means clustering could correctly identify class membership for over 90% of the samples, based on the average ratings across subjects. We also found a high degree of consistency between the two tasks, suggesting subjects access similar information about materials whether judging their qualities visually or from memory. Together, these findings show that perceptual qualities are well defined, distinct, and systematically related to material class membership.
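The k-means step described above (clustering per-image rating vectors and checking class membership) can be sketched generically as follows; this is a plain textbook k-means on toy data, not the authors' pipeline, and the deterministic initialization is an assumption for reproducibility:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means on the rows of X (here, images described by their
    mean ratings on nine quality scales). A generic sketch of the
    clustering step, not the authors' exact implementation."""
    # deterministic init: k evenly spaced samples as starting centers
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each sample to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy data: two well-separated "material classes" in a 9-D rating space.
rng = np.random.default_rng(1)
ratings = np.vstack([rng.normal(0.0, 0.1, (20, 9)),
                     rng.normal(1.0, 0.1, (20, 9))])
labels = kmeans(ratings, k=2)
```

When classes form tight, separated clusters in the rating space, as the paper reports for real materials, this simple assignment recovers class membership; the over-90% figure in the abstract is the empirical analogue of that situation.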


Computers & Graphics | 2009

Computational Aesthetics 2008: Categorizing art: Comparing humans and computers

Christian Wallraven; Roland W. Fleming; Douglas W. Cunningham; Jaume Rigau; Miquel Feixas; Mateu Sbert

The categorization of art (paintings, literature) into distinct styles such as Expressionism or Surrealism has had a profound influence on how art is presented, marketed, analyzed, and historicized. Here, we present results from human and computational experiments with the goal of determining to what degree such categories can be explained by simple, low-level appearance information in the image. Following experimental methods from perceptual psychology on category formation, naive, non-expert participants were first asked to sort printouts of artworks from different art periods into categories. Converting these data into similarity data and running a multi-dimensional scaling (MDS) analysis, we found distinct categories which corresponded, sometimes surprisingly well, to canonical art periods. The result was cross-validated on two complementary sets of artworks for two different groups of participants, showing the stability of art interpretation. The second focus of this paper was on determining how far computational algorithms would be able to capture human performance or, more generally, to separate different art categories. Using several state-of-the-art algorithms from computer vision, we found that whereas low-level appearance information can give some clues about category membership, human grouping strategies also drew on much higher-level concepts.
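The MDS step mentioned above turns pairwise similarity (sorting) data into spatial coordinates whose distances mirror the dissimilarities. A minimal classical (Torgerson) MDS is sketched below; this is the textbook construction, not necessarily the variant the authors used:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed n items in `dims` dimensions so
    that Euclidean distances approximate the dissimilarity matrix D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dims]      # keep the largest eigenvalues
    scale = np.sqrt(np.maximum(w[idx], 0.0))
    return V[:, idx] * scale              # one coordinate row per item

# Toy check: three equally spaced points on a line are recovered exactly.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
coords = classical_mds(D, dims=1)
```

Applied to the sorting-derived dissimilarities, clusters in the resulting coordinates correspond to the perceived art categories.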


Proceedings of the National Academy of Sciences of the United States of America | 2011

Estimation of 3D shape from image orientations

Roland W. Fleming; Daniel Holtmann-Rice; Heinrich H. Bülthoff

One of the main functions of vision is to estimate the 3D shape of objects in our environment. Many different visual cues, such as stereopsis, motion parallax, and shading, are thought to be involved. One important cue that remains poorly understood comes from surface texture markings. When a textured surface is slanted in 3D relative to the observer, the surface patterns appear compressed in the retinal image, providing potentially important information about 3D shape. What is not known, however, is how the brain actually measures this information from the retinal image. Here, we explain how the key information could be extracted by populations of cells tuned to different orientations and spatial frequencies, like those found in the primary visual cortex. To test this theory, we created stimuli that selectively stimulate such cell populations, by "smearing" (filtering) images of 2D random noise into specific oriented patterns. We find that the resulting patterns appear vividly 3D, and that increasing the strength of the orientation signals progressively increases the sense of 3D shape, even though the filtering we apply is physically inconsistent with what would occur with a real object. This finding suggests we have isolated key mechanisms used by the brain to estimate shape from texture. Crucially, we also find that adapting the visual system's orientation detectors to orthogonal patterns causes unoriented random noise to look like a specific 3D shape. Together these findings demonstrate a crucial role of orientation detectors in the perception of 3D shape.
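The "smearing" manipulation described above, filtering 2D noise so that only energy near a chosen orientation survives, can be sketched in the Fourier domain. This is one plausible way to implement such a filter; the Gaussian orientation mask and its bandwidth are illustrative assumptions, not the paper's exact stimulus-generation code:

```python
import numpy as np

def orientation_smear(noise, theta=0.0, bandwidth=0.2):
    """'Smear' 2-D random noise into an oriented pattern by attenuating
    Fourier energy away from orientation theta (radians)."""
    F = np.fft.fftshift(np.fft.fft2(noise))
    h, w = noise.shape
    fy, fx = np.meshgrid(np.arange(h) - h // 2,
                         np.arange(w) - w // 2, indexing="ij")
    ang = np.arctan2(fy, fx)              # orientation of each frequency
    # angular distance to theta, folded so opposite directions coincide
    d = np.angle(np.exp(2j * (ang - theta))) / 2.0
    mask = np.exp(-(d ** 2) / (2 * bandwidth ** 2))
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real

# Example: smear white noise along the 45-degree orientation.
rng = np.random.default_rng(0)
pattern = orientation_smear(rng.standard_normal((64, 64)),
                            theta=np.pi / 4)
```

Varying `bandwidth` plays the role of the paper's "strength of the orientation signals": a narrower mask yields more strongly oriented, and hence more vividly 3D-looking, patterns.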

Collaboration


Dive into Roland W. Fleming's collaboration.

Top Co-Authors

Edward H. Adelson

Massachusetts Institute of Technology
