
Publication


Featured research published by Bei Xiao.


Journal of Vision | 2006

Bayesian model of human color constancy.

David H. Brainard; Philippe Longère; Peter B. Delahunt; William T. Freeman; James M. Kraft; Bei Xiao

Vision is difficult because images are ambiguous about the structure of the world. For object color, the ambiguity arises because the same object reflects a different spectrum to the eye under different illuminations. Human vision typically does a good job of resolving this ambiguity, an ability known as color constancy. The past 20 years have seen an explosion of work on color constancy, with advances in both experimental methods and computational algorithms. Here, we connect these two lines of research by developing a quantitative model of human color constancy. The model includes an explicit link between psychophysical data and illuminant estimates obtained via a Bayesian algorithm. The model is fit to the data through a parameterization of the prior distribution of illuminant spectral properties. The fit to the data is good, and the derived prior provides a succinct description of human performance.
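
For readers who want the gist of the Bayesian step, here is a minimal sketch of MAP illuminant estimation under a linear illuminant model with a Gaussian prior on its weights. The basis, sensor sensitivities, and surface reflectance below are hypothetical placeholders, not the paper's actual parameterization.

```python
# Minimal sketch: MAP illuminant estimation under a linear illuminant model.
# All spectra here are random placeholders, not real colorimetric data.
import numpy as np

rng = np.random.default_rng(0)
n_wl = 31                              # wavelength samples, e.g. 400-700 nm
basis = rng.random(size=(n_wl, 2))     # illuminant linear-model basis (hypothetical)
sensors = rng.random(size=(3, n_wl))   # cone spectral sensitivities (hypothetical)
surface = rng.random(size=n_wl)        # known surface reflectance (hypothetical)

def render(w):
    """Cone responses to `surface` under the illuminant with basis weights w."""
    illum = basis @ w
    return sensors @ (illum * surface)

# Observed responses under the true illuminant, plus a little sensor noise.
w_true = np.array([1.2, 0.4])
obs = render(w_true) + rng.normal(scale=0.01, size=3)

# Posterior ∝ likelihood × prior, evaluated on a grid of illuminant weights.
grid = np.linspace(-2, 2, 201)
W1, W2 = np.meshgrid(grid, grid)
log_post = np.empty_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        w = np.array([W1[i, j], W2[i, j]])
        log_lik = -np.sum((render(w) - obs) ** 2) / (2 * 0.01 ** 2)
        log_prior = -np.sum(w ** 2) / 2      # standard-normal prior on weights
        log_post[i, j] = log_lik + log_prior

i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
print("MAP illuminant weights:", W1[i, j], W2[i, j])  # should be near w_true
```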


Journal of Vision | 2014

Looking against the light: How perception of translucency depends on lighting direction

Bei Xiao; Bruce Walter; Ioannis Gkioulekas; Todd E. Zickler; Edward H. Adelson; Kavita Bala

Translucency is an important aspect of material appearance. To some extent, humans are able to estimate translucency in a consistent way across different shapes and lighting conditions, i.e., to exhibit translucency constancy. However, Fleming and Bülthoff (2005) have shown that there can be large failures of constancy, with lighting direction playing an important role. In this paper, we explore the interaction of shape, illumination, and degree of translucency constancy more deeply by including in our analysis the variations in translucent appearance that are induced by the shape of the scattering phase function. This is an aspect of translucency that has been largely neglected. We used appearance matching to measure how perceived translucency depends on both lighting and phase function. The stimuli were rendered scenes that contained a figurine, and the lighting direction was represented by a spherical harmonic basis function. Observers adjusted the density of a figurine under one lighting condition to match the material property of a target figurine under another lighting condition. Across the trials, we varied both the lighting direction and the phase function of the target. The phase functions were sampled from a 2D space proposed by Gkioulekas et al. (2013) to span an important range of translucent appearance. We find that the degree of translucency constancy depends strongly on the phase function's location in this 2D space, suggesting that the space captures useful information about different types of translucency. We also find that the geometry of an object is important. We compare the case of a torus, which has a simple smooth shape, with that of the figurine, which has more complex geometric features. The complex shape shows a greater range of apparent translucencies and a higher degree of constancy failure. In summary, humans show significant failures of translucency constancy across changes in lighting direction, but the effect depends on both shape complexity and the scattering phase function.
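
As background on phase-function parameterizations, the sketch below evaluates a two-lobe Henyey-Greenstein mixture, a common family for trading off forward versus backward scattering. The actual 2D space of Gkioulekas et al. (2013) is a related but richer family, so treat this only as an illustration; the lobe parameters are arbitrary.

```python
# Minimal sketch: a two-lobe Henyey-Greenstein (HG) phase function mixture.
import numpy as np

def henyey_greenstein(cos_theta, g):
    """HG phase function; g in (-1, 1) controls forward/backward scattering."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * cos_theta) ** 1.5)

def two_lobe_phase(cos_theta, g_fwd=0.8, g_bwd=-0.3, w=0.9):
    """Convex mixture of a forward lobe and a backward lobe (arbitrary values)."""
    return (w * henyey_greenstein(cos_theta, g_fwd)
            + (1 - w) * henyey_greenstein(cos_theta, g_bwd))

theta = np.linspace(0, np.pi, 181)
p = two_lobe_phase(np.cos(theta))
# Sanity check: the phase function integrates to 1 over the sphere.
print(np.trapz(p * 2 * np.pi * np.sin(theta), theta))  # ≈ 1.0
```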


Journal of Vision | 2012

The color constancy of three-dimensional objects.

Bei Xiao; Brendan Hurst; Lauren MacIntyre; David H. Brainard

Human color constancy has been studied for over 100 years, and there is extensive experimental data for the case where a spatially diffuse light source illuminates a set of flat matte surfaces. In natural viewing, however, three-dimensional objects are viewed in three-dimensional scenes. Little is known about color constancy for three-dimensional objects. We used a forced-choice task to measure the achromatic chromaticity of matte disks, matte spheres, and glossy spheres. In all cases, the test stimuli were viewed in the context of stereoscopically viewed graphics simulations of three-dimensional scenes, and we varied the scene illuminant. We studied conditions both where all cues were consistent with the simulated illuminant change (consistent-cue conditions) and where local contrast was silenced as a cue (reduced-cue conditions). We computed constancy indices from the achromatic chromaticities. To first order, constancy was similar for the three test object types. There was, however, a reliable interaction between test object type and cue condition. In the consistent-cue conditions, constancy tended to be best for the matte disks, while in the reduced-cue conditions constancy was best for the spheres. The presence of this interaction presents an important challenge for theorists who seek to generalize models that account for constancy for flat tests to the more general case of three-dimensional objects.
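
As an illustration of how a constancy index can be derived from achromatic chromaticities, the sketch below uses one common formulation, assumed here rather than taken from the paper: the achromatic point's chromaticity shift projected onto, and normalized by, the illuminant's chromaticity shift. All chromaticity values are hypothetical.

```python
# Minimal sketch of a constancy index from achromatic chromaticities.
# This is one common formulation, not necessarily the paper's exact index.
import numpy as np

def constancy_index(achrom_base, achrom_test, illum_base, illum_test):
    """1 = perfect constancy (achromatic point tracks the illuminant change),
    0 = no constancy (achromatic point does not move at all)."""
    illum_shift = np.asarray(illum_test) - np.asarray(illum_base)
    achrom_shift = np.asarray(achrom_test) - np.asarray(achrom_base)
    # Projection of the achromatic shift onto the illuminant shift, normalized.
    return achrom_shift @ illum_shift / (illum_shift @ illum_shift)

# Hypothetical (u', v') chromaticity pairs for baseline and test illuminants.
print(constancy_index(achrom_base=[0.20, 0.47], achrom_test=[0.22, 0.48],
                      illum_base=[0.20, 0.47], illum_test=[0.24, 0.49]))  # 0.5
```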


Journal of Vision | 2014

RenderToolbox3: MATLAB tools that facilitate physically based stimulus rendering for vision research.

Benjamin S. Heasly; Nicolas P. Cottaris; Daniel P. Lichtman; Bei Xiao; David H. Brainard

RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.
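
To make the parametric scene-family workflow concrete in language-agnostic terms, here is a minimal Python sketch. It deliberately does not reproduce RenderToolbox3's actual MATLAB API; the file names and the render_scene() call are hypothetical stand-ins for the toolbox's conditions files and renderer invocations.

```python
# Minimal sketch of a parametric scene family: one render job per point in
# the cross-product of scene attributes. Names below are hypothetical.
from itertools import product

reflectance_files = ["surface-1.spd", "surface-2.spd"]  # surface spectra
illuminant_files = ["D65.spd", "D50.spd"]               # illuminant spectra
materials = ["matte", "glossy"]                         # material specifications

def render_scene(reflectance, illuminant, material):
    """Placeholder for a physically based renderer call that would return a
    multispectral image converted to physical units of radiance."""
    print(f"render: {reflectance} under {illuminant}, {material}")

for refl, illum, mat in product(reflectance_files, illuminant_files, materials):
    render_scene(refl, illum, mat)
```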


Applied Perception in Graphics and Visualization | 2006

Color perception of 3D objects: constancy with respect to variation of surface gloss

Bei Xiao; David H. Brainard

What determines the color appearance of real objects viewed under natural conditions? The light reflected from different locations on a single object can vary enormously. This variation is enhanced when the material properties of the object are changed from matte to glossy. Yet humans have no trouble assigning a color name to most things. We studied how people perceive the color of spheres in complex scenes. Observers viewed graphics simulations of a three-dimensional scene containing two spheres, a test and a match. The observer's task was to adjust the match sphere until its color appearance was the same as that of the test sphere. The match sphere was always matte, and observers varied its color by changing the simulated spectral reflectance function. The surface gloss of the test spheres was varied across conditions. The data show that for fixed test sphere body reflectance, color appearance depends on surface gloss. This effect is small, however, compared to the variation that would be expected if observers simply matched the average of the light reflected from the test.
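
The naive baseline in the last sentence is easy to make concrete: average the light reflected from the test object's pixels and treat that average as the predicted match. A minimal sketch, with a hypothetical image and mask:

```python
# Minimal sketch of the "match the mean reflected light" baseline.
# The rendered image and sphere mask are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random(size=(64, 64, 3))        # rendered scene, linear RGB
sphere_mask = np.zeros((64, 64), dtype=bool)
sphere_mask[16:48, 16:48] = True            # pixels belonging to the test sphere

mean_rgb = image[sphere_mask].mean(axis=0)
# A glossy sphere's specular highlights pull this average toward the
# illuminant, so the baseline predicts large appearance shifts with gloss.
print("mean reflected light (baseline match):", mean_rgb)
```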


Journal of Vision | 2016

Can you see what you feel? Color and folding properties affect visual-tactile material discrimination of fabrics.

Bei Xiao; Wenyan Bi; Xiaodan Jia; Hanhan Wei; Edward H. Adelson

Humans can often estimate tactile properties of objects from vision alone. For example, during online shopping, we can often infer material properties of clothing from images and judge how the material would feel against our skin. What visual information is important for tactile perception? Previous studies in material perception have focused on measuring surface appearance, such as gloss and roughness, and using verbal reports of material attributes and categories. However, in real life, predicting tactile properties of an object might not require accurate verbal descriptions of its surface attributes or categories. In this paper, we use tactile perception as ground truth to measure visual material perception. Using fabrics as our stimuli, we measure how observers match what they see (photographs of fabric samples) with what they feel (physical fabric samples). The data show a significant main effect of color: removing color significantly reduces matching accuracy, especially when the images contain 3-D folds. We also find that images of draped fabrics, which reveal 3-D shape information, yield better matching accuracy than images of flattened fabrics. The data show a strong interaction between color and folding condition on matching accuracy, suggesting that in the 3-D folding conditions the visual system takes advantage of chromatic gradients to infer tactile properties, but not in the flattened conditions. Together, using a visual-tactile matching task, we show that humans use folding and color information in matching the visual and tactile properties of fabrics.
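
The reported main effect and color-by-folding interaction correspond to a standard two-way ANOVA on matching accuracy. The sketch below runs that analysis on synthetic data; the condition means are invented purely for illustration.

```python
# Minimal sketch: two-way ANOVA for a color × folding interaction on
# matching accuracy. All data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "color": np.repeat(["color", "grayscale"], 40),
    "folding": np.tile(np.repeat(["draped", "flat"], 20), 2),
})
# Invented means with an interaction: removing color hurts most when
# 3-D folds (and their chromatic gradients) are visible.
means = {("color", "draped"): 0.80, ("color", "flat"): 0.65,
         ("grayscale", "draped"): 0.60, ("grayscale", "flat"): 0.60}
df["accuracy"] = [means[(c, f)] for c, f in zip(df.color, df.folding)]
df["accuracy"] += rng.normal(scale=0.05, size=len(df))

model = smf.ols("accuracy ~ C(color) * C(folding)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and the interaction term
```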


Journal of Vision | 2018

Estimating mechanical properties of cloth from videos using dense motion trajectories: Human psychophysics and machine learning

Wenyan Bi; Peiran Jin; Hendrikje Nienborg; Bei Xiao

Humans can visually estimate the mechanical properties of deformable objects (e.g., cloth stiffness). While much of the recent work on material perception has focused on static image cues (e.g., textures and shape), little is known about whether humans can integrate information over time to make a judgment. Here we investigated the effect of spatiotemporal information across multiple frames (multiframe motion) on estimating the bending stiffness of cloth. Using high-fidelity cloth animations, we first examined how the perceived bending stiffness changed as a function of the physical bending stiffness defined in the simulation model. Using maximum-likelihood difference-scaling methods, we found that the perceived stiffness and physical bending stiffness were highly correlated. A second experiment, in which we scrambled the frame sequences, diminished this correlation, suggesting that multiframe motion plays an important role. To provide further evidence for this finding, we extracted dense motion trajectories from the videos across 15 consecutive frames and used the trajectory descriptors to train a machine-learning model on the measured perceptual scales. The model can predict human perceptual scales in new videos with varied winds, optical properties of cloth, and scene setups. When the correct multiframe motion was removed (by training the model on either scrambled videos or two-frame optical flow), the predictions significantly worsened. Our findings demonstrate that multiframe motion information is important for both humans and machines in estimating mechanical properties. In addition, we show that dense motion trajectories are effective features for building a successful automatic cloth-estimation system.
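
A minimal sketch of the pipeline the abstract describes: build a descriptor from point displacements tracked across 15 consecutive frames, then regress measured perceptual scales on those descriptors. The displacements below are random stand-ins for real optical flow, and the choice of regressor is an assumption, not the paper's model.

```python
# Minimal sketch: dense-trajectory features -> perceptual-scale regression.
# Displacements are random placeholders for optical-flow-based tracking.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

def trajectory_descriptor(n_points=50, track_len=15):
    """Toy stand-in for a dense-trajectory descriptor: per-frame displacements
    of points tracked across 15 consecutive frames. A real system would obtain
    the displacements from optical flow computed on the video."""
    pts = rng.random(size=(n_points, 2))
    frames = [pts]
    for _ in range(track_len - 1):
        pts = pts + rng.normal(scale=0.01, size=pts.shape)  # stand-in for flow
        frames.append(pts)
    disp = np.diff(np.stack(frames), axis=0)   # (14, n_points, 2) displacements
    return disp.reshape(-1)                    # one feature vector per video

# Synthetic training set: one descriptor and one measured perceptual scale per
# video; in the study, the scales came from the difference-scaling experiments.
X = np.stack([trajectory_descriptor() for _ in range(100)])
y = rng.random(size=100)
model = GradientBoostingRegressor().fit(X, y)
print("predicted stiffness scale:", model.predict(X[:1]))
```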


Journal of Vision | 2015

Perceptual Dimensions of Material Properties of Fabrics in Dynamic Scenes

Bei Xiao; William Kistler

Deformable objects such as fabrics, rubber, and food can be distinguished by their textures and also by the different motions they exhibit when interacting with external forces. These motion patterns can be used to estimate the intrinsic mechanical properties of the objects (Bouman et al., 2013). Little is known, however, about human perception of mechanical properties (e.g., stiffness) in dynamic scenes. We used a dissimilarity scaling method to study how perception of the mechanical properties of fabrics is related to the physical parameters of mass and bending stiffness, using videos of both simulated and real fabrics. The stimuli were videos containing a hanging fabric moving under an oscillating wind. In each trial, observers were shown a pair of these videos and asked to indicate, on a scale of 0-100, how different the material properties of the two fabrics were from each other. In Experiment 1, we used the Blender physics engine to simulate the cloth behavior. Four values were sampled for each of three parameters: mass, structural stiffness, and bending stiffness. All the fabrics had the same texture. For each rendered video, the wind direction was randomized along a lateral plane. Five observers completed 2016 paired comparisons, which were analyzed with a non-metric multidimensional scaling method to learn a lower-dimensional embedding of the data. In Experiment 2, we performed the same experiment with videos of real fabrics in similar scene settings. In the 2D perceptual embedding, we found that one dimension was best correlated with mass, while the other dimension was correlated with bending stiffness. Structural stiffness and wind direction did not predict perceptual dissimilarities. The 2D embedding of the real fabrics showed similar results. Together, the experiments suggest that humans can estimate the intrinsic mechanical properties of fabrics from dynamic scenes while discounting variations in texture and in the direction of the wind force. Meeting abstract presented at VSS 2015.
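
The non-metric MDS analysis can be sketched directly with scikit-learn: embed a 64-stimulus dissimilarity matrix (64 · 63 / 2 = 2016 pairs, matching the abstract's count) in two dimensions, then correlate each dimension with the physical parameters. The ratings below are synthetic placeholders.

```python
# Minimal sketch: non-metric MDS on pairwise dissimilarity ratings.
# The dissimilarity matrix is synthetic; real data would be averaged ratings.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(4)
n_videos = 64                        # 64 stimuli -> 64*63/2 = 2016 pairs
d = rng.random(size=(n_videos, n_videos)) * 100   # 0-100 ratings (synthetic)
dissim = (d + d.T) / 2               # symmetrize
np.fill_diagonal(dissim, 0)          # zero self-dissimilarity

mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
embedding = mds.fit_transform(dissim)
# Each embedding dimension can then be correlated with the physical mass
# and bending-stiffness values of the corresponding stimuli.
print(embedding.shape)               # (64, 2)
```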


Visual Neuroscience | 2008

Surface gloss and color perception of 3D objects

Bei Xiao; David H. Brainard


Journal of Vision | 2013

Can you see what you feel? Tactile and visual matching of material properties of fabrics

Bei Xiao; Xiaodan Jia; Edward H. Adelson

Collaboration


Dive into Bei Xiao's collaboration.

Top Co-Authors

David H. Brainard
University of Pennsylvania

Edward H. Adelson
Massachusetts Institute of Technology

Xiaodan Jia
Massachusetts Institute of Technology