
Publication


Featured research published by Ayelet McKyton.


Current Biology | 2015

The Limits of Shape Recognition following Late Emergence from Blindness

Ayelet McKyton; Itay Ben-Zion; Ravid Doron; Ehud Zohary

Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia who suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation.
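
The mid-level cue tested here, shape from shading, rests on an inference assumption such as the classic light-from-above prior: the same shading pattern reads as a bump or a dent depending on the assumed light direction. Below is a minimal, hypothetical sketch of that prior (not code from the study; the function name and threshold logic are illustrative assumptions):

```python
import numpy as np

def perceived_relief(shading_profile, light_from_above=True):
    """Classify a vertical shading profile as convex or concave
    under an assumed light direction (hypothetical illustration).

    shading_profile: 1D array of luminance values, top to bottom.
    Under a light-from-above prior, a patch that is brighter on top
    is interpreted as bulging toward the viewer (convex); the
    reverse shading reads as a dent (concave).
    """
    mid = len(shading_profile) // 2
    brighter_on_top = shading_profile[:mid].mean() > shading_profile[mid:].mean()
    if light_from_above:
        return "convex" if brighter_on_top else "concave"
    return "concave" if brighter_on_top else "convex"

# A disc shaded bright-to-dark from top to bottom is seen as a bump.
profile = np.linspace(1.0, 0.0, 32)
print(perceived_relief(profile))  # -> convex
```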


Journal of Vision | 2009

Pattern matching is assessed in retinotopic coordinates

Ayelet McKyton; Yoni Pertzov; Ehud Zohary

We typically examine scenes by performing multiple saccades to different objects of interest within the image. Therefore, an extra-retinotopic representation, invariant to the changes in the retinal image caused by eye movements, might be useful for high-level visual processing. We investigate here, using a matching task, whether the representation of complex natural images is retinotopic or screen-based. Subjects observed two simultaneously presented images, made a saccadic eye movement to a new fixation point, and viewed a third image. Their task was to judge whether the third image was identical to one of the two earlier images or different. Identical images could appear either in the same retinotopic position, in the same screen position, or in totally different locations. Performance was best when the identical images appeared in the same retinotopic position and worst when they appeared in the opposite hemifield. Counter to commonplace intuition, no advantage was conferred by presenting the identical images in the same screen position. This, together with performance sensitivity to image translations of a few degrees, suggests that image matching, which can often be judged without overall recognition of the scene, is mostly determined by neuronal activity in earlier brain areas containing a strictly retinotopic representation and small receptive fields.
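
The retinotopic versus screen-based distinction behind this design reduces to a simple coordinate transform: retinal position is screen position minus gaze position. A minimal one-dimensional sketch, with illustrative coordinates in degrees of visual angle (the values are not the experiment's actual layout):

```python
def retinal_position(screen_pos, gaze_pos):
    """Stimulus position on the retina, relative to the fovea:
    screen position minus current gaze position (in degrees)."""
    return screen_pos - gaze_pos

# Before the saccade: fixation at 0 deg, image centered at +5 deg.
print(retinal_position(5.0, 0.0))   # +5 deg retinotopic offset

# After a saccade to +5 deg, an image shown again at screen +5 deg
# lands on the fovea: same screen position, new retinal position.
print(retinal_position(5.0, 5.0))   # 0 deg

# To preserve the retinotopic position instead, the image must move
# with the eyes: screen +10 deg reproduces the +5 deg retinal offset.
print(retinal_position(10.0, 5.0))  # +5 deg
```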


Journal of Vision | 2012

Motion adaptation reveals that the motion vector is represented in multiple coordinate frames

Tal Seidel Malkinson; Ayelet McKyton; Ehud Zohary

Accurately perceiving the velocity of an object during smooth pursuit is a complex challenge: although the object is moving in the world, it is almost still on the retina. Yet we can perceive the veridical motion of a visual stimulus in such conditions, suggesting a nonretinal representation of the motion vector. To explore this issue, we studied the frames of representation of the motion vector by evoking the well-known motion aftereffect during smooth-pursuit eye movements (SPEM). In the retinotopic configuration, a stationary adapting random-dot stimulus was actually moving on the retina due to an accompanying smooth pursuit. Motion adaptation could therefore only result from motion in retinal coordinates. In contrast, in the spatiotopic configuration, the adapting stimulus moved on the screen but was practically stationary on the retina due to a matched SPEM. Hence, adaptation here would suggest a representation of the motion vector in spatiotopic coordinates. We found that exposure to spatiotopic motion led to significant adaptation. Moreover, the degree of adaptation in that condition was greater than the adaptation induced by viewing a random-dot stimulus that moved only on the retina. Finally, pursuit of the same target, without a random-dot array background, yielded no adaptation. Thus, in our experimental conditions, adaptation is not induced by the SPEM per se. Our results suggest that motion computation is likely to occur in parallel in two distinct representations: a low-level, retinal-motion-dependent mechanism and a high-level representation in which the veridical motion is computed through integration of information from other sources.
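
The logic of the two adaptation conditions follows from the basic kinematics of pursuit: retinal motion equals world (screen) motion minus eye motion. A minimal sketch with illustrative speeds (not the study's actual parameters):

```python
def retinal_velocity(screen_velocity, eye_velocity):
    """Motion of a stimulus on the retina during pursuit:
    screen (world) motion minus eye motion, in deg/s."""
    return screen_velocity - eye_velocity

# Spatiotopic configuration: dots move on the screen at 10 deg/s and
# the eyes pursue at the same speed, so the retinal image is still.
print(retinal_velocity(10.0, 10.0))  # 0.0 -> motion only in the spatiotopic frame

# Retinotopic configuration: dots are static on the screen, but a
# 10 deg/s pursuit sweeps them across the retina in the opposite direction.
print(retinal_velocity(0.0, 10.0))   # -10.0 -> motion only in the retinal frame
```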


Vision Research | 2008

The coordinate frame of pop-out learning

Ayelet McKyton; Ehud Zohary

Saccades are ubiquitous in natural vision. One way to generate a coherent representation of a scene across saccades is to produce an extra-retinal coordinate frame (such as a head-based representation). We investigate this issue by behavioral means: participants learned to detect a 3D pop-out target in a fixed position. Next, the target was relocated in one coordinate frame while remaining fixed in the others. Performance was severely affected only when the change in target position occurred in a retinotopic coordinate frame. This further suggests that perceptual learning occurs in retinotopic regions whose receptive fields are restricted to a hemifield.
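
The dissociation logic can be made concrete: shifting fixation while keeping the target's screen position changes only its retinotopic coordinate, whereas shifting fixation and target together preserves the retinotopic coordinate and changes the screen-based one. A minimal one-dimensional sketch with illustrative positions in degrees (not the experiment's actual layout):

```python
def target_in_frames(target_screen, gaze):
    """Target position in the two candidate frames (1D, degrees)."""
    retinotopic = target_screen - gaze  # relative to fixation
    screen_based = target_screen        # head/screen frame
    return retinotopic, screen_based

# Training: fixation at 0 deg, pop-out target learned at +4 deg.
print(target_in_frames(4.0, 0.0))   # (+4 retinotopic, +4 screen)

# Shift fixation to +8 deg, keep the target at +4 deg on the screen:
# screen position is unchanged, retinotopic position flips to -4 deg.
print(target_in_frames(4.0, 8.0))   # (-4 retinotopic, +4 screen)

# Shift fixation and target together by +8 deg: retinotopic position
# is preserved while the screen position changes.
print(target_in_frames(12.0, 8.0))  # (+4 retinotopic, +12 screen)
```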


Psychological Science | 2018

Lack of Automatic Imitation in Newly Sighted Individuals

Ayelet McKyton; Itay Ben-Zion; Ehud Zohary

Viewing a hand action performed by another person facilitates a response-compatible action and slows a response-incompatible one, even when the viewed action is irrelevant to the task. This automatic imitation effect is taken as the clearest evidence for a direct mapping between action viewing and motor performance. But there is an ongoing debate about whether this effect is innate or experience-dependent. We tackled this issue by studying a unique group of newly sighted children who suffered from dense bilateral cataracts from early infancy and were surgically treated only years later. The newly sighted children were less affected by viewing task-irrelevant actions than were control children, even 2 years after the cataract-removal surgery. This strongly suggests that visually guided motor experience is necessary for the development of automatic imitation. At the very least, our results indicate that if imitation is based on innate mechanisms, these are clearly susceptible to long periods of visual deprivation.


Current Biology | 2017

Size constancy following long-term visual deprivation

Elena Andres; Ayelet McKyton; Itay Ben-Zion; Ehud Zohary

We can estimate the veridical size of nearby objects reasonably well irrespective of their viewing distance. This perceptual capability, termed size constancy, is accomplished by combining information about retinal image size with the viewing distance, or by using the relational information available in the scene via direct perception [1]. A previous study [2] showed that children typically underestimate the size of a distant object. This underestimation is reduced with time, suggesting that years of visual experience may be essential for attaining true size constancy. But what if you have had very limited vision during the early years of life? We studied 23 Ethiopian children suffering from bilateral, early-onset cataract who were surgically treated only years after birth. Surprisingly, most children were able to estimate object size reasonably well irrespective of distance; in fact, they usually tended to overestimate the far-object size. Closer examination indicated that, although before surgery the patients were diagnosed as having a full, mature bilateral cataract, they nevertheless had some residual form of vision, typically limited to very close range. Gandhi et al. [3] earlier reported immediate susceptibility to geometric visual illusions in a similar group of newly sighted children, concluding that size constancy was probably innate. We suggest that their immediate ability to judge physical size irrespective of distance is more likely to result from their previous visual experience.
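
Size constancy amounts to scaling the retinal (angular) size by an estimate of viewing distance: an object of size S at distance d subtends a visual angle of 2*arctan(S/2d), so physical size can be recovered as 2*d*tan(angle/2). A minimal sketch with illustrative numbers (not data from the study):

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle (deg) subtended by an object of a given size."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

def inferred_size_m(angle_deg, assumed_distance_m):
    """Size-constancy computation: scale the angular (retinal) size
    by the assumed viewing distance to recover physical size."""
    return 2 * assumed_distance_m * math.tan(math.radians(angle_deg) / 2)

# A 0.2 m object at 2 m and a 0.4 m object at 4 m subtend the same
# visual angle; only a distance estimate disambiguates their sizes.
theta = visual_angle_deg(0.2, 2.0)                   # ~5.7 deg
print(round(visual_angle_deg(0.4, 4.0) - theta, 6))  # 0.0

print(round(inferred_size_m(theta, 4.0), 3))  # 0.4 m with the correct distance
# Underestimating the distance (3 m instead of 4 m) produces a far-object
# size underestimation of the kind reported for typical children [2].
print(round(inferred_size_m(theta, 3.0), 3))  # 0.3 m
```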


Neuroreport | 2011

Tactile interactions activate mirror system regions in the human brain.

Ayelet McKyton

Communicating with others is essential for the development of a society. Although forms of communication such as language and visual gestures have been thoroughly investigated in the past, little research has examined interactions through touch. To study this, we used functional magnetic resonance imaging. Twelve participants were scanned with their eyes covered while stroking four kinds of items representing different somatosensory stimuli: a human hand, a realistic rubber hand, an object, and a simple texture. Although the human and the rubber hands had the same overall shape, three regions showed significantly more blood-oxygen-level-dependent activation when touching the real hand: the anterior medial prefrontal cortex, the ventral premotor cortex, and the posterior superior temporal cortex. The latter two regions are part of the mirror network and are known to be activated by visual interactions such as gestures. Interestingly, in this study, these areas were activated through a somatosensory interaction. A control experiment ruled out confounds of temperature, texture, and imagery, suggesting that the activation in these areas was specific to touching a human hand. These results reveal the neuronal network behind human tactile interactions and highlight the participation of the mirror system in such functions.


Cerebral Cortex | 2007

Beyond Retinotopic Mapping: The Spatial Representation of Objects in the Human Lateral Occipital Complex

Ayelet McKyton; Ehud Zohary


Journal of Vision | 2018

Impairment of "vision for action" functions in the newly sighted, following early-onset and prolonged visual deprivation

Ehud Zohary; Itay Ben-Zion; Caterin Schreiber; Ayelet McKyton


Journal of Vision | 2014

Sensitivity to spatiotopic location in the human visual system

Yuval Porat; Tanya Orlov; Ayelet McKyton; Ehud Zohary

Collaboration


Dive into Ayelet McKyton's collaborations.

Top Co-Authors

Ehud Zohary, Hebrew University of Jerusalem
Caterin Schreiber, Hebrew University of Jerusalem
Elena Andres, Hebrew University of Jerusalem
Tal Seidel Malkinson, Hebrew University of Jerusalem
Tanya Orlov, Hebrew University of Jerusalem
Yoni Pertzov, Hebrew University of Jerusalem
Yuval Porat, Hebrew University of Jerusalem