Shaul Hochstein
Hebrew University of Jerusalem
Publications
Featured research published by Shaul Hochstein.
Neuron | 2002
Shaul Hochstein; Merav Ahissar
We propose that explicit vision advances in a reverse hierarchical direction, as shown for perceptual learning. Processing along the feedforward hierarchy of areas, leading to increasingly complex representations, is automatic and implicit, while conscious perception begins at the hierarchy's top, gradually returning downward as needed. Thus, our initial conscious percept, "vision at a glance", matches a high-level, generalized, categorical scene interpretation, identifying the forest before the trees. For later "vision with scrutiny", reverse hierarchy routines focus attention on specific, active, low-level units, incorporating into conscious perception the detailed information available there. Reverse Hierarchy Theory dissociates early explicit perception from implicit low-level vision, explaining a variety of phenomena. Feature-search pop-out is attributed to high areas, where large receptive fields underlie spread attention detecting categorical differences. Search for conjunctions or fine discriminations depends on reentry to low-level, specific receptive fields using serial focused attention, consistent with recently reported primary visual cortex effects.
Trends in Cognitive Sciences | 2004
Merav Ahissar; Shaul Hochstein
Perceptual learning can be defined as practice-induced improvement in the ability to perform specific perceptual tasks. We previously proposed the Reverse Hierarchy Theory as a unifying concept that links behavioral findings of visual learning with physiological and anatomical data. Essentially, it asserts that learning is a top-down guided process, which begins at high-level areas of the visual system and, when these do not suffice, progresses backwards to the input levels, which have a better signal-to-noise ratio. This simple concept has proved powerful in explaining a broad range of findings, including seemingly contradictory data. We now extend this concept to describe the dynamics of skill acquisition and to interpret recent behavioral and electrophysiological findings.
Vision Research | 2005
Orit Hershler; Shaul Hochstein
To determine the nature of face perception, several studies used the visual search paradigm, whereby subjects detect an odd target among distractors. When detection reaction time is set-size independent, the odd element is said to pop out, reflecting a basic mechanism or map for the relevant feature. A number of previous studies suggested that schematic faces do not pop out. We show that natural face stimuli do pop out among assorted non-face objects. Animal faces, on the other hand, do not pop out from among the same assorted non-face objects. In addition, search for a face among distractors of another object category is easier than the reverse search, and face search is mediated by holistic face characteristics, rather than by face parts. Our results indicate that the association of pop out with elementary features and lower cortical areas may be incorrect. Instead, face search, and indeed all feature search, may reflect high-level activity with generalization over spatial and other property details.
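The pop-out criterion in this paradigm is quantitative: if the slope of detection reaction time against display set size is near zero, search is parallel ("pop-out"); a steep slope indicates serial, attention-demanding search. A minimal sketch of that analysis follows; the reaction times and the ~10 ms/item cutoff are illustrative assumptions, not data from the study:

```python
def rt_slope(set_sizes, rts):
    """Least-squares slope of mean reaction time vs. display set size (ms/item)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical mean RTs (ms) at set sizes 4, 8, 16:
face_rts = [520, 523, 528]     # nearly flat: set-size independent, "pop-out"
animal_rts = [540, 660, 900]   # steep: serial, attention-demanding search

print(rt_slope([4, 8, 16], face_rts))    # ~0.7 ms/item -> parallel search
print(rt_slope([4, 8, 16], animal_rts))  # 30 ms/item -> serial search
```

A flat function like the first is the signature reported here for natural face targets; the steep function resembles the search for animal faces among the same distractors.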
Vision Research | 1996
Merav Ahissar; Shaul Hochstein
Training induces dramatic improvement in the performance of pop-out detection. In this study, we examined the specificities of this improvement to stimulus characteristics. We found that learning is specific within basic visual dimensions: orientation, size and position. Accordingly, following training with one set of orientations, rotating target and distractors by 30 deg or more substantially hampers performance. Furthermore, rotation of either target or distractors alone greatly increases threshold. Learning is not transferred to reduced-size stimuli. Position specificity near fixation may be finer than 0.7 deg. On the other hand, learning transfers to the untrained eye, to expanded images, to mirror image transformations and to homologous positions across the midline (near fixation). Thus, learning must occur at a processing level which is early enough to maintain fine separability along basic stimulus dimensions, yet sufficiently high to manifest the described generalizations. We suggest that the site of early perceptual learning is one of the cortical areas which receive input from primary visual cortex, V1, and where top-down attentional control is present.
Nature | 2000
Tanya Orlov; Volodya Yakovlev; Shaul Hochstein; Ehud Zohary
The recall of a list of items in a serial order is a basic cognitive skill. However, it is unknown whether a list of arbitrary items is remembered by associations between sequential items or by associations between each item and its ordinal position. Here, to study the nonverbal strategies used for such memory tasks, we trained three macaque monkeys on a delayed sequence recall task. Thirty abstract images, divided into ten triplets, were presented repeatedly in fixed temporal order. On each trial the monkeys viewed three sequentially presented sample stimuli, followed by a test stimulus consisting of the same three images and a distractor image (chosen randomly from the remaining 27). The task was to touch the three images in their original order without touching the distractor. The most common error was touching the distractor when it had the same ordinal number (in its own triplet) as the correct image. Thus, the monkeys' natural tendency was to categorize images by their ordinal number. Additional, secondary strategies were eventually used to avoid the distractor images. These included memory of the sample images (working memory) and associations between sequence triplet members. Thus, monkeys use multiple mnemonic strategies according to their innate tendencies and the requirements of the task.
Vision Research | 2000
Merav Ahissar; Shaul Hochstein
We examined the roles of two determinants of spatial attention in governing the spread of perceptual learning, namely, stimulus location distribution and task difficulty. Subjects were trained on detection of a target element with an odd orientation embedded in an array of light bars of otherwise uniform orientation. To assess the effects of target distribution on attention and learning, target positions were distributed so that attention was allocated not only to the target positions themselves, but also to intermediate positions where the target was not presented. Target detection performance substantially improved, and improvement spread to match the induced window of spatial attention rather than only the actual target locations. To assess the effect of task difficulty on the spread of attention and learning, the target-distractor orientation difference and the time interval available for processing were manipulated. In addition, we compared performance of subjects with more versus less detection difficulty. A consistent pattern emerged: when the task becomes more difficult, the window of attention shrinks and learning becomes more localized. We conclude that task-specific spatial attention is both necessary and sufficient to induce learning. The spread of spatial attention, and thus of learning, is determined by the integrated effects of target distribution and task difficulty. We propose a theoretical framework whereby these factors combine to determine the cortical level of the focus of attention, which in turn enables learning modifications.
Vision Research | 1993
Nava Rubin; Shaul Hochstein
A considerable body of evidence suggests the existence of a two-stage mechanism for the detection of global motion. In the first stage, the motion of elongated contours is extracted; at the second stage, these one-dimensional (1D) motion signals are combined. What is the nature of the computation carried out in combining the 1D motion signals towards forming a global motion percept? We devised a set of stimuli that differentiate between the possible computations. In particular, they distinguish between a velocity-space construction (such as intersection of constraints) and a linear computation such as vector averaging. In addition, these stimuli do not contain two-dimensional (2D) motion signals, such as line intersections, which allow unambiguous determination of global velocity. Stimuli were presented in uncrossed disparity relative to the aperture through which they were presented, to reduce the effect of line-terminator motion. We found that subjects are unable to detect the veridical global direction of motion for these stimuli. Instead, they perceive the stimulus pattern to be moving in a direction which reflects the average of its 1D motion components. Our results suggest that the visual system is not equipped with a mechanism implementing a velocity-space computation of global motion.
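The two candidate combination rules can be made concrete. Each 1D component constrains the global velocity v to a line v·n_i = s_i in velocity space (n_i the contour-normal direction, s_i the measured normal speed); intersection of constraints (IOC) solves these constraint equations, while vector averaging simply averages the normal-velocity vectors s_i·n_i. A minimal sketch, with illustrative (assumed) component angles, showing that the two rules predict different perceived directions:

```python
import math

def ioc(normals, speeds):
    """Intersection of constraints: solve v . n_i = s_i exactly
    for the 2D global velocity from two independent constraint lines."""
    (a, b), (c, d) = normals
    s1, s2 = speeds
    det = a * d - b * c
    return ((s1 * d - s2 * b) / det, (a * s2 - c * s1) / det)

def vector_average(normals, speeds):
    """Linear combination: average of the 1D normal-velocity vectors s_i * n_i."""
    n = len(normals)
    return (sum(s * nx for (nx, _), s in zip(normals, speeds)) / n,
            sum(s * ny for (_, ny), s in zip(normals, speeds)) / n)

# A rightward-moving pattern (true velocity (1, 0)) seen through two 1D
# components whose contour normals point at +30 and -70 degrees:
v_true = (1.0, 0.0)
normals = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
           for a in (30.0, -70.0)]
speeds = [nx * v_true[0] + ny * v_true[1] for nx, ny in normals]

vx, vy = ioc(normals, speeds)
print(math.degrees(math.atan2(vy, vx)))   # ~0 deg: veridical direction

ax, ay = vector_average(normals, speeds)
print(math.degrees(math.atan2(ay, ax)))   # ~7 deg: biased, average-like percept
```

The finding reported here, that subjects perceive the component average rather than the veridical direction, corresponds to the second computation, not the first.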
Vision Research | 2006
Orit Hershler; Shaul Hochstein
In this issue of Vision Research, VanRullen (2006, "On second glance: Still no high-level pop-out effect for faces", Vision Research, in press) challenges our earlier Vision Research paper (Hershler, O., & Hochstein, S. (2005). At first sight: A high-level pop-out effect for faces. Vision Research, 45, 1707-1724). In that paper, we showed that faces pop out from a great variety of heterogeneous distractors. This search must have been based on a holistic combination of facial features, since it could not have relied on any single low-level distinguishing feature, each of which was present in at least some of the distractors. VanRullen implies that the pop-out effect is not limited to faces, is not holistic, and is due to a low-level confound, namely that the low-level Fourier amplitude spectrum may differentiate between faces and other categories. We now show that he fails to substantiate all three claims. His first experiment replicates our own and shows once again that faces do indeed pop out, while other objects, such as cars, do not. The claim regarding the non-holistic nature of face search is based on a failure to differentiate between holistic processing for face detection and for individual face identification. His central claim is that the Fourier amplitude spectrum is processed at a low level and could be used for face pop-out. However, changing the amplitude spectrum may well affect high-level representations as well. For example, his demonstration uses hybrid images which are extremely fuzzy, rendering them difficult to identify. More importantly, this claim would lead to the conclusion that targets with a non-face phase spectrum and only a face amplitude spectrum would pop out among distractors with different amplitude spectra. We demonstrate that this is not the case and that the Fourier amplitude spectrum is not the hoped-for low-level confound. Until another such hidden low-level feature is found, we must accept that face pop-out depends on a high-level mechanism.
Vision Research | 1998
Merav Ahissar; Roni Laiwand; Gal Kozminsky; Shaul Hochstein
Studies of perceptual learning have consistently found that improvement is stimulus specific. These findings were interpreted as indicating an early cortical learning site. In line with this interpretation, we consider two alternative hypotheses: the earliest-modification and the output-level-modification assumptions, which respectively assume that learning occurs within the earliest representation that is selective for the trained stimuli, or at cortical levels receiving its output. We studied performance in a pop-out task using light-bar distractor elements of one orientation and a target element rotated by 30 degrees (or 90 degrees). We tested the alternative hypotheses by examining pop-out learning through an initial training phase, a subsequent learning stage with swapped target and distractor orientations, and a final re-test with the originally trained stimuli. We found that learning does not transfer across orientation swapping. However, following training with swapped orientations, a similar performance level is reached as with the original orientations. That is, learning neither facilitates nor interferes to a substantial degree with subsequent performance with altered stimuli. Furthermore, this re-training does not hamper performance with the originally trained stimuli. If training changed the earliest orientation-selective representation (specializing it for performance of the particular task), it would necessarily affect performance with swapped orientations as well. The co-existence of similar asymptotes for apparently conflicting stimulus sets refutes the earliest-modification hypothesis, supporting the alternative output-level-modification hypothesis. We conclude that secondary cortical processing levels use outputs from the earliest orientation representation to compute higher-order structures, promoting and improving successful task performance.
Journal of Cognitive Neuroscience | 2002
Marina Pavlovskaya; Haim Ring; Zeev Groswasser; Shaul Hochstein
We address two longstanding conflicts in the visual search and unilateral neglect literature by studying feature and conjunction search performance of neglect patients using laterally presented search arrays. The first issue relates to whether feature search is performed independently of attention, or rather requires spread attention. If feature search is preattentive, it should survive neglect. However, we find neglect effects for both feature and conjunction search, suggesting that feature search, too, has an attentional requirement. The second controversy refers to the space- or object-based nature of neglect following unilateral right-hemisphere parietal lobe damage. If neglect were a purely spatial phenomenon, then we would expect no detriment in performance in the right (nonneglect) field, and diminished performance for the whole left (neglect) field. On the other hand, if neglect were purely object-based, we would expect diminished performance on the left side of the search array, irrespective of its location in the visual field. We now demonstrate a combination of strong object-based and space-based neglect effects for conjunction search with laterally placed element arrays, suggesting that these two mechanisms work in tandem.