Katja Doerschner
Bilkent University
Publications
Featured research published by Katja Doerschner.
Nature Genetics | 2011
Tanyeri Barak; Kenneth Y. Kwan; Angeliki Louvi; Veysi Demirbilek; Serap Saygi; Beyhan Tüysüz; Murim Choi; Huseyin Boyaci; Katja Doerschner; Ying Zhu; Hande Kaymakçalan; Saliha Yılmaz; Mehmet Bakırcıoğlu; Ahmet Okay Caglayan; Ali K. Ozturk; Katsuhito Yasuno; William J. Brunken; Ergin Atalar; Cengiz Yalcinkaya; Alp Dinçer; Richard A. Bronen; Shrikant Mane; Tayfun Ozcelik; Richard P. Lifton; Nenad Sestan; Kaya Bilguvar; Murat Gunel
The biological basis for regional and inter-species differences in cerebral cortical morphology is poorly understood. We focused on consanguineous Turkish families with a single affected member with complex bilateral occipital cortical gyration abnormalities. By using whole-exome sequencing, we initially identified a homozygous 2-bp deletion in LAMC3, the laminin γ3 gene, leading to an immediate premature termination codon. In two other affected individuals with nearly identical phenotypes, we identified a homozygous nonsense mutation and a compound heterozygous mutation. In human but not mouse fetal brain, LAMC3 is enriched in postmitotic cortical plate neurons, localizing primarily to the somatodendritic compartment. LAMC3 expression peaks between late gestation and late infancy, paralleling the expression of molecules that are important in dendritogenesis and synapse formation. The discovery of the molecular basis of this unusual occipital malformation furthers our understanding of the complex biology underlying the formation of cortical gyrations.
Journal of Vision | 2014
Burak Akin; Ceylan Ozdem; Seda Eroglu; Dudu Taslak Keskin; Fang Fang; Katja Doerschner; Daniel Kersten; Huseyin Boyaci
In early retinotopic areas of the human visual system, information from the left and right visual hemifields (VHFs) is processed contralaterally in the two hemispheres. Despite this segregation, we have the perceptual experience of a unified, coherent, and uninterrupted single visual field. How exactly the visual system integrates information from the two VHFs and achieves this perceptual experience remains largely unknown. In this fMRI study, we explored candidate areas involved in interhemispheric integration and the perceptual experience of a unified, global motion across VHFs. Stimuli were two-dimensional, computer-generated objects with parts in both VHFs. The retinal image in the left VHF always remained stationary, but in the experimental condition it appeared to have local motion because of the perceived global motion of the object. This perceptual effect could be weakened by directing attention away from the global motion with a demanding fixation task. Results show that lateral occipital areas, including the middle temporal complex, play an important role in the perceptual experience of a unified global motion across VHFs. In early areas, including the lateral geniculate nucleus and V1, we observed correlates of this perceptual experience only when attention was not directed away from the object. These findings reveal effects of attention on interhemispheric integration in motion perception and imply that both the bilateral activity of higher-tier visual areas and feedback mechanisms leading to bilateral activity of early areas play roles in the perceptual experience of a unified visual field.
Current Biology | 2011
Katja Doerschner; Roland W. Fleming; Ozgur Yilmaz; Paul R. Schrater; Bruce Hartung; Daniel Kersten
Many critical perceptual judgments, from telling whether fruit is ripe to determining whether the ground is slippery, involve estimating the material properties of surfaces. Very little is known about how the brain recognizes materials, even though the problem is likely as important for survival as navigating or recognizing objects. Though previous research has focused nearly exclusively on the properties of static images, recent evidence suggests that motion may affect the appearance of surface material. However, what kind of information motion conveys and how this information may be used by the brain is still unknown. Here, we identify three motion cues that the brain could rely on to distinguish between matte and shiny surfaces. We show that these motion measurements can override static cues, leading to dramatic changes in perceived material depending on the image motion characteristics. A classifier algorithm based on these cues correctly predicts both successes and some striking failures of human material perception. Together these results reveal a previously unknown use for optic flow in the perception of surface material properties.
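The three motion cues and the classifier reported in this study are not reproduced here. Purely as an illustration of the general approach of classifying material from optic-flow statistics, the following is a minimal Python sketch; the function names, the features (a speed histogram plus a dispersion statistic), and the threshold are assumptions made for this example, not the paper's method.

import numpy as np

def flow_features(vx, vy, bins=16):
    # Summarize a dense optic-flow field (vx, vy in pixels/frame) as a
    # normalized speed histogram plus the coefficient of variation of speed.
    speed = np.hypot(vx, vy).ravel()
    hist, _ = np.histogram(speed, bins=bins, range=(0.0, float(speed.max()) + 1e-9))
    hist = hist / hist.sum()
    dispersion = speed.std() / (speed.mean() + 1e-9)
    return np.concatenate([hist, [dispersion]])

def classify_reflectance(vx, vy, dispersion_threshold=0.4):
    # Toy rule with an illustrative threshold: widely scattered image speeds
    # are labeled 'shiny'; spatially coherent speeds are labeled 'matte'.
    return "shiny" if flow_features(vx, vy)[-1] > dispersion_threshold else "matte"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Matte-like flow: near-uniform translation of surface texture.
    vx_matte = 2.0 + 0.05 * rng.standard_normal((64, 64))
    vy_matte = 0.5 + 0.05 * rng.standard_normal((64, 64))
    # Shiny-like flow: widely scattered speeds, as sliding highlights can produce.
    vx_shiny = 2.0 * rng.standard_normal((64, 64))
    vy_shiny = 2.0 * rng.standard_normal((64, 64))
    print(classify_reflectance(vx_matte, vy_matte))  # -> matte
    print(classify_reflectance(vx_shiny, vy_shiny))  # -> shiny

In this toy setup, the near-uniform field stands in for a textured matte surface moving rigidly, and the dispersed field stands in for the fast, incoherent image motion that specular reflections can generate under object motion.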
Journal of Vision | 2004
Katja Doerschner; Huseyin Boyaci; Laurence T. Maloney
In complex scenes, the light absorbed and re-emitted by one surface can serve as a source of illumination for a second. We examine whether observers systematically discount this secondary illumination when estimating surface color. We asked six naive observers to make achromatic settings of a small test patch adjacent to a brightly colored orange cube in rendered scenes. The orientation of the test patch with respect to the cube was varied from trial to trial, altering the amount of secondary illumination reaching the test patch. Observers systematically took orientation into account in making their settings, discounting the added secondary illumination more at orientations where it was more intense. Overall, they tended to under-compensate for the added secondary illumination.
Journal of Vision | 2006
Huseyin Boyaci; Katja Doerschner; Laurence T. Maloney
We investigate how human observers make use of three candidate cues in their lightness judgments. Each cue potentially provides information about the spatial distribution of light sources in complex, rendered 3D scenes. The illumination (lighting model) of each scene consisted of a punctate light source combined with a diffuse light source. The cues were (1) cast shadows, (2) surface shading, and (3) specular highlights. Observers were asked to judge the albedo of a matte grayscale test patch that varied in orientation with respect to the punctate light source. We tested their performance in scenes containing only one type of cue and in scenes where all three cue types were present. From the results, we deduced how accurately they had estimated the spatial distribution of light sources in each scene given the cues available. In Experiment 1, we established that each of the individual cues was used in isolation: the highlight and cast shadow cues were used by more than half of the observers, and for only one observer could we reject the hypothesis that the shading cue was not used. In Experiment 2, we showed that the observers combined information from multiple cues when all three cues were presented together.
Vision Research | 2007
Katja Doerschner; Huseyin Boyaci; Laurence T. Maloney
We investigated limits on the human visual system's ability to discount directional variation in complex light fields when estimating Lambertian surface color. Directional variation in the light field was represented in the frequency domain using spherical harmonics. The bidirectional reflectance distribution function of a Lambertian surface acts as a low-pass filter on directional variation in the light field. Consequently, the visual system needs to discount only the low-pass component of the incident light, corresponding to the first nine terms of a spherical harmonics expansion [Basri, R., & Jacobs, D. (2001). Lambertian reflectance and linear subspaces. In: International Conference on Computer Vision II, pp. 383-390; Ramamoorthi, R., & Hanrahan, P. (2001). An efficient representation for irradiance environment maps. SIGGRAPH '01. New York: ACM Press, pp. 497-500], to accurately estimate surface color. We tested experimentally whether the visual system discounts directional variation in the light field up to this physical limit. Our results are consistent with the claim that the visual system can compensate for all of the complexity in the light field that affects the appearance of Lambertian surfaces.
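For context on the "first nine terms" claim: following Basri and Jacobs (2001) and Ramamoorthi and Hanrahan (2001), the irradiance E reaching a Lambertian surface with unit normal \mathbf{n} can be written as the light field's spherical-harmonic expansion filtered by the Lambertian kernel (the summary below uses standard notation and is not quoted from the paper):

E(\mathbf{n}) \;=\; \sum_{l \ge 0} \sum_{m=-l}^{l} \hat{A}_l \, L_{lm} \, Y_{lm}(\mathbf{n}), \qquad \hat{A}_0 = \pi, \quad \hat{A}_1 = \tfrac{2\pi}{3}, \quad \hat{A}_2 = \tfrac{\pi}{4}, \quad \hat{A}_l \approx 0 \ \text{for } l > 2,

where L_{lm} are the spherical-harmonic coefficients of the incident light field and Y_{lm} are the spherical harmonics evaluated at the surface normal. Because the kernel \hat{A}_l decays rapidly, truncating the expansion at l = 2, i.e., keeping the first (2+1)^2 = 9 coefficients, reproduces the irradiance on a matte surface almost exactly; this is the physical limit referred to above.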
Journal of Vision | 2005
Jacqueline L. Snyder; Katja Doerschner; Laurence T. Maloney
We report the results of three experiments in which observers judged the albedo of surfaces at different locations in rendered, three-dimensional scenes consisting of two rooms connected by a doorway. All surfaces composing the rooms were achromatic and Lambertian, and a gradient of illumination increased with depth. Observers made asymmetric albedo matches between a standard surface placed in the rooms at different depths along the line of sight and an adjustable surface at a fixed location. In Experiment 1, gradients of intensity on the walls, floor, and ceiling of the scene, as well as its three-dimensional structure, provided information about variations in the intensity of illumination across depth (the illumination profile). In Experiment 2, specular spheres provided an additional veridical cue to the illumination profile. We sought to determine whether observers would make use of this additional cue. They did: all observers exhibited a greater degree of lightness constancy in Experiment 2 than in Experiment 1. In Experiment 3, the specular spheres reflected an illumination profile in conflict with that signaled by the other cues in the scene. We found that observers chose albedo matches consistent with an illumination profile that was a mixture of the illumination profiles signaled by the specular spheres and by the remaining cues.
Journal of Vision | 2010
Katja Doerschner; Laurence T. Maloney; Huseyin Boyaci
We investigated how spatial pattern, background, and dynamic range affect perceived gloss in brightly lit real scenes. Observers viewed spherical objects against uniform backgrounds. There were three possible objects. Two were black matte spheres with circular matte white dots painted on them (matte-dot spheres). The third sphere was painted glossy black (glossy black sphere). Backgrounds were either black or white matte, and observers saw each of the objects in turn on each background. Scenes were illuminated by an intense collimated source. On each trial, observers matched the apparent albedo of the sphere to an albedo reference scale and its apparent gloss to a gloss reference scale. We found that matte-dot spheres and the black glossy sphere were perceived as glossy on both backgrounds. All spheres were judged to be significantly glossier when in front of the black background. In contrast with previous research using conventional computer displays, we find that background markedly affects perceived gloss. This finding is surprising because darker surfaces are normally perceived as glossier (F. Pellacini, J. A. Ferwerda, & D. P. Greenberg, 2000). We conjecture that there are cues to surface material signaling glossiness present in high dynamic range scenes that are absent or weak in scenes presented using conventional computer displays.
Archive | 2011
Laurence T. Maloney; Holly E. Gerhard; Huseyin Boyaci; Katja Doerschner
Previous research on surface color perception has typically used Mondrian stimuli consisting of a small number of matte surface patches in a plane perpendicular to the line of sight. In such scenes, reliable estimation of the color of a surface is a difficult if not impossible computational problem (Maloney, 1999). In three-dimensional scenes consisting of surfaces at many different orientations it is at least in theory possible to estimate surface color. However, the difficulty of the problem increases, in part, because the effective illumination incident on each surface (the light field) now depends on surface orientation and location. We review recent work in multiple laboratories that examines the degree to which the human visual system discounts the light field in judging matte surface lightness and color and how the visual system estimates the flow of light in a scene.
Pattern Recognition | 2011
Katja Doerschner; Daniel Kersten; Paul R. Schrater
We propose a method for rapidly classifying surface reflectance directly from the output of spatio-temporal filters applied to an image sequence of rotating objects. Using image data from only a single frame, we compute histograms of image velocities and classify these as being generated by a specular or a diffusely reflecting object. Exploiting characteristics of material-specific image velocities, we show that our classification approach can predict the reflectance of novel 3D objects, as well as human perception.
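As a rough, self-contained illustration of such a pipeline (not the authors' spatio-temporal filter bank or classifier), the Python sketch below estimates dense image velocities from two frames with a simple gradient-based, windowed least-squares method and reduces them to a speed histogram; a matte-versus-shiny rule like the one sketched under the Current Biology entry above could then operate on such histograms. All names and parameters here are assumptions for this example.

import numpy as np
from scipy.ndimage import convolve

def dense_velocities(frame0, frame1, win=5):
    # Gradient-based stand-in for a spatio-temporal filter bank: solve the
    # optical-flow constraint by least squares in each win x win window
    # (valid only for small displacements between the two frames).
    Ix = np.gradient(frame0, axis=1)
    Iy = np.gradient(frame0, axis=0)
    It = frame1 - frame0
    k = np.ones((win, win))
    Sxx = convolve(Ix * Ix, k); Sxy = convolve(Ix * Iy, k); Syy = convolve(Iy * Iy, k)
    Sxt = convolve(Ix * It, k); Syt = convolve(Iy * It, k)
    det = Sxx * Syy - Sxy ** 2
    det = np.where(np.abs(det) < 1e-9, np.nan, det)  # mask ill-conditioned windows
    vx = (-Syy * Sxt + Sxy * Syt) / det
    vy = (Sxy * Sxt - Sxx * Syt) / det
    return vx, vy

def speed_histogram(vx, vy, bins=16):
    # Normalized histogram of image speeds: the kind of compact summary a
    # reflectance classifier could be trained on.
    speed = np.hypot(vx, vy)
    speed = speed[np.isfinite(speed)]
    hist, edges = np.histogram(speed, bins=bins)
    return hist / max(hist.sum(), 1), edges

if __name__ == "__main__":
    X, Y = np.meshgrid(np.arange(128), np.arange(128))
    frame0 = np.sin(0.2 * X) * np.cos(0.15 * Y)
    frame1 = np.sin(0.2 * (X - 0.5)) * np.cos(0.15 * Y)  # pattern drifts 0.5 px rightward
    vx, vy = dense_velocities(frame0, frame1)
    hist, _ = speed_histogram(vx, vy)
    print("median horizontal velocity:", round(float(np.nanmedian(vx)), 2), "(expected ~0.5)")
    print("speed histogram:", np.round(hist, 2))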