Hila Harris
Weizmann Institute of Science
Publications
Featured research published by Hila Harris.
The Journal of Neuroscience | 2015
Yaron Meirovitch; Hila Harris; Eran Dayan; Amos Arieli; Tamar Flash
The short-lasting attenuation of brain oscillations is termed event-related desynchronization (ERD). It is frequently found in the alpha and beta bands in humans during generation, observation, and imagery of movement and is considered to reflect cortical motor activity and action-perception coupling. The shared information driving ERD in all these motor-related behaviors is unknown. We investigated whether particular laws governing production and perception of curved movement may account for the attenuation of alpha and beta rhythms. Human movement appears to be governed by relatively few kinematic laws of motion. One dominant law in biological motion kinematics is the 2/3 power law (PL), which imposes a strong dependency of movement speed on curvature and is prominent in action-perception coupling. Here we directly examined whether the 2/3 PL elicits ERD during motion observation by characterizing the spatiotemporal signature of ERD. ERDs were measured while human subjects observed a cloud of dots moving along elliptical trajectories either complying with or violating the 2/3 PL. We found that ERD within both frequency bands was consistently stronger, arose faster, and was more widespread while observing motion obeying the 2/3 PL. An activity pattern showing clear 2/3 PL preference and lying within the alpha band was observed exclusively above central motor areas, whereas 2/3 PL preference in the beta band was observed in additional prefrontal–central cortical sites. Our findings reveal that compliance with the 2/3 PL is sufficient to elicit a selective ERD response in the human brain.
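The speed–curvature dependency imposed by the 2/3 power law can be made concrete with a short sketch: for path curvature C, angular speed follows A = K·C^(2/3), which is equivalent to tangential speed v = K·C^(-1/3), so motion slows in high-curvature segments. A minimal illustration in Python, with the ellipse dimensions and gain K chosen arbitrarily (the study's actual dot-cloud stimuli are not reproduced here):

```python
import numpy as np

# Illustrative parameters (not taken from the study).
a, b, K = 2.0, 1.0, 1.0          # ellipse semi-axes and velocity gain

theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
x, y = a * np.cos(theta), b * np.sin(theta)   # trajectory points

# Curvature of an ellipse parameterized by theta:
# kappa = a*b / (a^2 sin^2(theta) + b^2 cos^2(theta))^(3/2)
kappa = (a * b) / (a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2) ** 1.5

# 2/3 power law: angular speed A = K * C^(2/3), hence tangential speed
# v = A / C = K * C^(-1/3) -- speed drops where curvature is high
# (the ends of the major axis), as in biological motion.
v = K * kappa ** (-1.0 / 3.0)
```

A motion stimulus "violating" the law, as in the experiment, would assign speeds with some other curvature dependence (e.g., constant speed) along the same elliptical path.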
Nature Neuroscience | 2015
Hila Harris; David Israeli; Nancy J. Minshew; Yoram Bonneh; David J. Heeger; Marlene Behrmann; Dov Sagi
Inflexible behavior is a core characteristic of autism spectrum disorder (ASD), but its underlying cause is unknown. Using a perceptual learning protocol, we observed initially efficient learning in ASD that was followed by anomalously poor learning when the location of the target was changed (over-specificity). Reducing stimulus repetition eliminated over-specificity. Our results indicate that inflexible behavior may be evident ubiquitously in ASD, even in sensory learning, but can be circumvented by specifically designed stimulation protocols.
Vision Research | 2016
Noga Pinchuk-Yacobi; Hila Harris; Dov Sagi
Sensory adaptation and perceptual learning are two forms of plasticity in the visual system, with some potential overlapping neural mechanisms and functional benefits. However, they have been largely considered in isolation. Here we examined whether extensive perceptual training with oriented textures (texture discrimination task, TDT) induces adaptation tilt aftereffects (TAE). Texture elements were oriented lines at -22.5° (target) and 22.5° (background). Observers were trained in 5 daily sessions on the TDT, with 800–1000 trials/session. Thresholds increased within the daily sessions, showing within-session performance deterioration, but decreased between days, showing learning. To evaluate TAE, perceived vertical (0°) was measured prior to and after each daily session using a single line element. The results showed a TAE of ∼1.5° at retinal locations consistently stimulated by the target, but none at locations consistently stimulated by the background texture. Retinal locations equally stimulated by target and background elements showed a significant TAE (∼0.7°), in a direction expected by target-driven sensory adaptation. Moreover, these locations showed increasing TAE persistence with training. Additional experiments with a modified target, in order to have balanced stimulation around the vertical direction in all target locations, confirmed the locality of the task-dependent TAE. The present results support a strong link between perceptual learning and local orientation-selective adaptation leading to TAE; the latter was shown here to be task and experience dependent.
Scientific Reports | 2016
Nitzan Censor; Hila Harris; Dov Sagi
Perceptual learning refers to improvement in perception thresholds with practice; however, extended training sessions show reduced performance during training, interfering with learning. These effects were taken to indicate a tight link between sensory adaptation and learning. Here we show a dissociation between adaptation and consolidated learning. Participants trained with a texture discrimination task, in which visual processing time is limited by a temporal target-to-mask window defined as the stimulus onset asynchrony (SOA). An initial training phase, previously shown to produce efficient learning, was followed by training structures with varying numbers of SOAs. The largest interference with learning was found in structures containing the largest SOA density, when SOA was gradually decreased. When SOAs were largely kept unchanged, learning was effective. All training structures yielded the same within-session performance reduction, as expected from sensory adaptation. The results point to a dissociation between within-day effects, which depend on the number of trials per se, regardless of their temporal structure, and consolidation effects observed on the following day, which were mediated by the temporal structure of practice. These results add a new dimension to consolidation in perceptual learning, suggesting that the degree of its effectiveness depends on variations in the temporal properties of the visual stimuli.
Frontiers in Integrative Neuroscience | 2016
Hila Harris; David Israeli; Nancy J. Minshew; Yoram Bonneh; David J. Heeger; Marlene Behrmann; Dov Sagi
In a recent study, we tested perceptual learning in adults with autism spectrum disorder (ASD) (Harris et al., 2015), employing the standard and well-established texture-learning paradigm [TDT; (Karni and Sagi, 1991; Sagi, 1995; Harris et al., 2012)]. In this paradigm, observers learn to discriminate an oriented texture target embedded at a fixed location in a background of elements having a different orientation. Performance is measured as a function of the time interval between the onset of the target and a mask (stimulus onset asynchrony, SOA), with threshold defined as the minimal time (SOA) required to reach a predefined criterion level of performance. Typical observers improve their performance (show reduced thresholds) with training across 3–4 days, but need to relearn the task when the target is moved to a different location in the visual field, showing specificity. We (Harris et al., 2015) reported similar results with observers with ASD, but unlike the typical observers, who showed faster learning at the second location (Sagi, 2011), ASD observers showed difficulty in relearning the task at the second location, suggesting that training with the target at the first location might have interfered with training at the new, second location. We termed this anomalously poor learning "over-specificity" (OS), reflecting the narrowness of the learning and the failure to generalize, and quantified OS as the average threshold difference between the second and the first learning curves (for generalization, OS ≈ 0). A modified learning paradigm, in which standard target trials were interleaved with no-target trials ("dummy" trials) during training, showed generalization of learning (OS ≈ 0), which had not been observed before.
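The OS measure described above (average threshold difference between the second-location and first-location learning curves) can be sketched in a few lines; the SOA threshold values below are made up purely for illustration:

```python
import numpy as np

def over_specificity(first_curve, second_curve):
    """OS = mean(second - first) over matched training days.

    Values near 0 indicate generalization to the second location;
    positive values indicate interference (over-specificity).
    """
    return float(np.mean(np.asarray(second_curve) - np.asarray(first_curve)))

# Hypothetical SOA thresholds (ms) across four training days.
first = [120, 100, 85, 80]        # trained (first) location
transfer = [150, 135, 125, 120]   # second (transfer) location

os_score = over_specificity(first, transfer)
```

Identical curves at both locations would give OS = 0, the generalization case; elevated thresholds at the second location, as observed in the ASD group, would yield a positive OS.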
Journal of Vision | 2015
Hila Harris; Noga Pinchuk-Yacobi; Dov Sagi
In texture learning, observers are presented with repeated stimulations, resulting in within-day threshold elevation, as well as between day threshold reductions. Within-day deteriorations were shown to be location and orientation specific (Mednick et. al., 2002; Ofen et al., 2007). Accordingly, the declined performance was suggested to reflect sensory adaptation. Here we test whether extensive training produces adaptation dependent tilt after-effects (TAE). Six observers were trained for 5 days on the texture discrimination task (Karni & Sagi, 1991), 800-1000 trials/day. The target (40ms) was composed of 3 lines of 22.5o orientation, stacked vertically or horizontally (2AFC task), embedded in a background of -22.5o lines. The target was followed by a variable blank interval and a mask (100ms). Multiple tests of perceived vertical (PV) were carried out prior and after each daily session to evaluate TAE at four locations, corresponding to: a target line, +22.5o on all trials (T+); both target and background lines, balanced, either +22.5o or -22.5o (T0); and the two background locations, near (BN) and far from the target (BF). Results showed learning across days, with within-day deterioration that varied across days. There was a significant TAE immediately following TDT training at both targets locations, at T+ (-1.6°±0.3, mean±SEM; p< 0.01) and, more surprisingly, at T0 (-0.8°±0.2; p=0.03) but not at background locations (BN, 0.3°±0.3; BF, 0.3°±0.1). The persistence of the TAE varied across days, as indicated by successive PV tests; decaying faster at location T+ (p=0.02), while persisting longer at location T0 (p=0.02). Here we show that texture training induces a localized target-selective TAE, which in return has a training-dependent component. The absence of background TAE and the target-biased TAE at the balanced location indicate that aftereffects are not determined by stimulus statistics, but rather by experience-dependent task-relevance. 
This supports mutual interactions between sensory adaptation and perceptual learning. Meeting abstract presented at VSS 2015.
Scientific Reports | 2018
Hila Harris; Dov Sagi
Visual learning is known to be specific to the trained target location, showing little transfer to untrained locations. Recently, learning was shown to transfer across equal-eccentricity retinal locations when sensory adaptation due to repetitive stimulation was minimized. It was suggested that learning transfers to previously untrained locations when the learned representation is location invariant, with sensory adaptation introducing location-dependent representations, thus preventing transfer. Spatial invariance may also fail when the trained and tested locations are at different distances from the center of gaze (different retinal eccentricities), due to differences in the corresponding low-level cortical representations (e.g., the allocated cortical area decreases with eccentricity). Thus, if learning improves performance by better classifying target-dependent early visual representations, generalization is predicted to fail when locations of different retinal eccentricities are trained and tested in the absence of sensory adaptation. Here, using the texture discrimination task, we show specificity of learning across different retinal eccentricities (4–8°) using reduced-adaptation training. The existence of generalization across equal-eccentricity locations but not across different eccentricities demonstrates that learning accesses visual representations preceding location-independent representations, with the specificity of learning explained by inhomogeneous sensory representation.
Frontiers in Psychology | 2015
Mikhail Katkov; Hila Harris; Dov Sagi
Our experience with the natural world, as composed of ordered entities, implies that perception captures relationships between image parts. For instance, regularities in the visual scene are rapidly identified by our visual system. Defining the regularities that govern perception is a basic, unresolved issue in neuroscience. Mathematically, perfect regularities are represented by symmetry (perfect order). The transition from ordered configurations to completely random ones has been extensively studied in statistical physics, where the amount of order is characterized by a symmetry-specific order parameter. Here we applied tools from statistical physics to study order detection in humans. Different sets of visual textures, parameterized by the thermodynamic temperature in the Boltzmann distribution, were designed. We investigated how much order is required in a visual texture for it to be discriminated from random noise. The performance of human observers was compared to Ideal and Order observers (based on the order parameter). The results indicated highly consistent performance across human observers, well below that of the Ideal observer but well approximated by the Order observer. Overall, we provide a novel quantitative paradigm to address order perception. Our findings, based on this paradigm, suggest that the statistical physics formalism of order captures regularities to which the human visual system is sensitive. An additional analysis revealed that some order perception properties are captured by traditional texture discrimination models, according to which discrimination is based on integrated energy within maps of oriented linear filters.
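The idea of textures parameterized by a Boltzmann temperature can be illustrated with a minimal sketch, assuming a nearest-neighbor (Ising-style) binary texture model sampled by the Metropolis algorithm; the lattice size, step count, and temperature values are arbitrary choices, and the paper's symmetry-specific order parameter is stood in for here by a simple nearest-neighbor correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_texture(n=32, T=2.0, steps=20000):
    """Metropolis sampling of an Ising-like binary texture at temperature T.

    Low T yields ordered (locally aligned) textures; high T yields
    textures approaching random noise.
    """
    s = rng.choice([-1, 1], size=(n, n))
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        # Energy change from flipping s[i, j] (nearest neighbors,
        # periodic boundary conditions, unit coupling).
        nb = (s[(i + 1) % n, j] + s[(i - 1) % n, j]
              + s[i, (j + 1) % n] + s[i, (j - 1) % n])
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return s

def order_parameter(s):
    """Mean nearest-neighbor agreement: ~1 = ordered, ~0 = random noise."""
    return 0.5 * (np.mean(s * np.roll(s, 1, axis=0))
                  + np.mean(s * np.roll(s, 1, axis=1)))

cold = sample_texture(T=0.5)   # well below the ordering transition
hot = sample_texture(T=50.0)   # effectively random noise
```

Sweeping T between these extremes produces a family of textures with graded amounts of order, which is the kind of parameterized stimulus set the abstract describes.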
Current Biology | 2012
Hila Harris; Michael Gliksberg; Dov Sagi
Vision Research | 2015
Hila Harris; Dov Sagi