Publication


Featured research published by Shimon Edelman.


Neuron | 1999

Differential Processing of Objects under Various Viewing Conditions in the Human Lateral Occipital Complex

Kalanit Grill-Spector; Tammar Kushnir; Shimon Edelman; Galia Avidan; Yacov Itzchak; Rafael Malach

The invariant properties of human cortical neurons cannot be studied directly by fMRI due to its limited spatial resolution. Here, we circumvented this limitation by using fMR adaptation, namely, reduction of the fMR signal due to repeated presentation of identical images. Object-selective regions (lateral occipital complex [LOC]) showed a monotonic signal decrease as repetition frequency increased. The invariant properties of fMR adaptation were studied by presenting the same object in different viewing conditions. LOC exhibited stronger fMR adaptation to changes in size and position (more invariance) compared to illumination and viewpoint. The effect revealed two putative subdivisions within LOC: caudal-dorsal (LO), which exhibited substantial recovery from adaptation under all transformations, and posterior fusiform (PF/LOa), which displayed stronger adaptation. This study demonstrates the utility of fMR adaptation for revealing functional characteristics of neurons in fMRI studies.
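As a rough illustration of the adaptation logic (not the study's analysis pipeline; the index definition and all signal values below are hypothetical), invariance to a transformation can be quantified by how little the fMR signal recovers when the repeated object is shown under that transformation:

```python
# Hypothetical sketch of a recovery-from-adaptation index. All numbers are
# made up; the formula is one common way to express the logic, not
# necessarily the one used in the paper.

def recovery_index(transformed, identical, different):
    """0 = adaptation fully transfers across the transformation (invariance);
    1 = the signal fully recovers (no invariance)."""
    return (transformed - identical) / (different - identical)

# Percent-signal-change values for repeated identical images, repeated images
# shown under a transformation, and non-repeated (different) images.
identical, different = 0.6, 1.8
for name, transformed in [("size", 0.8), ("position", 0.9),
                          ("illumination", 1.5), ("viewpoint", 1.6)]:
    idx = recovery_index(transformed, identical, different)
    print(f"{name:12s} recovery = {idx:.2f}")
```

A low recovery value for size and position changes, and a high one for illumination and viewpoint changes, would correspond to the pattern of invariance the abstract reports for LOC.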


Human Brain Mapping | 1998

A sequence of object-processing stages revealed by fMRI in the human occipital lobe.

Kalanit Grill-Spector; Tammar Kushnir; Talma Hendler; Shimon Edelman; Yacov Itzchak; Rafael Malach

Functional magnetic resonance imaging was used in combined functional selectivity and retinotopic mapping tests to reveal object‐related visual areas in the human occipital lobe. Subjects were tested with right, left, up, or down hemivisual field stimuli which were composed of images of natural objects (faces, animals, man‐made objects) or highly scrambled (1,024 elements) versions of the same images. In a similar fashion, the horizontal and vertical meridians were mapped to define the borders of these areas. Concurrently, the same cortical sites were tested for their sensitivity to image‐scrambling by varying the number of scrambled picture fragments (from 16 to 1,024) while controlling for the Fourier power spectrum of the pictures and their order of presentation. Our results reveal a stagewise decrease in retinotopy and an increase in sensitivity to image‐scrambling. Three main distinct foci were found in the human visual object recognition pathway (Ungerleider and Haxby [1994]: Curr Opin Neurobiol 4:157–165): 1) Retinotopic primary areas V1–3 did not exhibit significant reduction in activation to scrambled images. 2) Areas V4v (Sereno et al., [1995]: Science 268:889–893) and V3A (DeYoe et al., [1996]: Proc Natl Acad Sci USA 93:2382–2386; Tootell et al., [1997]: J Neurosci 71:7060–7078) manifested both retinotopy and decreased activation to highly scrambled images. 3) The essentially nonretinotopic lateral occipital complex (LO) (Malach et al., [1995]: Proc Natl Acad Sci USA 92:8135–8139; Tootell et al., [1996]: Trends Neurosci 19:481–489) exhibited the highest sensitivity to image scrambling, and appears to be homologous to the macaque infero‐temporal (IT) cortex (Tanaka [1996]: Curr Opin Neurobiol 523–529). Breaking the images into 64, 256, or 1,024 randomly scrambled blocks reduced activation in LO voxels. However, many LO voxels remained significantly activated by mildly scrambled images (16 blocks). These results suggest the existence of object‐fragment representation in LO. Hum. Brain Mapping 6:316–328, 1998.


Vision Research | 1992

Orientation dependence in the recognition of familiar and novel views of three-dimensional objects

Shimon Edelman; Heinrich H. Bülthoff

We report four experiments that investigated the representation of novel three-dimensional (3D) objects by the human visual system. In the first experiment, canonical views were demonstrated for novel objects seen equally often from all test viewpoints. The next two experiments showed that the canonical views persisted under repeated testing, and in the presence of a variety of depth cues, including binocular stereo. The fourth experiment probed the ability of subjects to generalize recognition to unfamiliar views of objects previously seen at a limited range of attitudes. Both mono and stereo conditions yielded the same increase in the error rate with misorientation relative to the training attitude. Taken together, these results support the notion that 3D objects are represented by multiple specific views, possibly augmented by partial viewer-centered 3D information.


Neuron | 1998

Cue-Invariant Activation in Object-Related Areas of the Human Occipital Lobe

Kalanit Grill-Spector; Tammar Kushnir; Shimon Edelman; Yacov Itzchak; Rafael Malach

The extent to which primary visual cues such as motion or luminance are segregated in different cortical areas is a subject of controversy. To address this issue, we examined cortical activation in the human occipital lobe using functional magnetic resonance imaging (fMRI) while subjects performed a fixed visual task, object recognition, using three different primary visual cues: motion, texture, or luminance contrast. In the first experiment, a region located on the lateral aspect of the occipital lobe (LO complex) was preferentially activated in all 11 subjects both by luminance and motion-defined object silhouettes compared to full-field moving and stationary noise (ratios, 2.00 ± 0.19 and 1.86 ± 0.65, respectively). In the second experiment, all subjects showed enhanced activation in the LO complex to objects defined both by luminance and texture contrast compared to full-field texture patterns (ratios, 1.43 ± 0.08 and 1.32 ± 0.08, respectively). An additional smaller dorsal focus that exhibited convergence of object-related cues appeared to correspond to area V3a or a region slightly anterior to it. These results show convergence of visual cues in LO and provide strong evidence for its role in object processing.


Vision Research | 1993

Long-term learning in vernier acuity: Effects of stimulus orientation, range and of feedback

Manfred Fahle; Shimon Edelman

In hyperacuity, as in many other tasks, performance improves with practice. To better understand the underlying mechanisms, we measured thresholds of 41 inexperienced observers for the discrimination of vernier displacements. In spite of considerable inter-individual differences, mean thresholds decreased monotonically over the 10,000 stimuli presented to each observer, if stimulus orientation was constant. Generalization of learning seemed to be possible across offset-ranges, but not across orientations. Learning was slightly faster with error feedback than without it in one experiment. These results effectively constrain the range of conceivable models for learning of hyperacuity.


Proceedings of the National Academy of Sciences of the United States of America | 2005

Unsupervised learning of natural languages

Zach Solan; David Horn; Eytan Ruppin; Shimon Edelman

We address the problem, fundamental to linguistics, bioinformatics, and certain other disciplines, of using corpora of raw symbolic sequential data to infer underlying rules that govern their production. Given a corpus of strings (such as text, transcribed speech, chromosome or protein sequence data, sheet music, etc.), our unsupervised algorithm recursively distills from it hierarchically structured patterns. The ADIOS (automatic distillation of structure) algorithm relies on a statistical method for pattern extraction and on structured generalization, two processes that have been implicated in language acquisition. It has been evaluated on artificial context-free grammars with thousands of rules, on natural languages as diverse as English and Chinese, and on protein data correlating sequence with function. This unsupervised algorithm is capable of learning complex syntax, generating grammatical novel sentences, and proving useful in other fields that call for structure discovery from raw data, such as bioinformatics.
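The following toy sketch conveys only the general flavor of unsupervised pattern distillation: it repeatedly replaces the most frequent adjacent pair of tokens with a fresh symbol. This is closer to byte-pair-style chunking than to the actual ADIOS procedure (which extracts patterns from a graph over the lexicon using a statistical significance criterion and structured generalization); the corpus, symbol names, and thresholds below are all illustrative.

```python
# Toy sketch of recursive pattern distillation from raw token sequences.
# Not the ADIOS algorithm itself; just an illustration of hierarchical
# chunking without supervision.

from collections import Counter

def distill(corpus, rounds=5, min_count=2):
    """corpus: list of token lists. Returns (rewritten corpus, learned rules)."""
    rules = {}
    for i in range(rounds):
        pairs = Counter()
        for sent in corpus:
            pairs.update(zip(sent, sent[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < min_count:
            break
        new_sym = f"P{i}"            # fresh nonterminal for the pattern (a, b)
        rules[new_sym] = (a, b)
        rewritten = []
        for sent in corpus:
            out, j = [], 0
            while j < len(sent):
                if j + 1 < len(sent) and (sent[j], sent[j + 1]) == (a, b):
                    out.append(new_sym)   # replace the pattern with its symbol
                    j += 2
                else:
                    out.append(sent[j])
                    j += 1
            rewritten.append(out)
        corpus = rewritten
    return corpus, rules

toy = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat sat on a mat".split(),
]
compressed, rules = distill(toy)
print(rules)        # e.g. {'P0': ('sat', 'on'), ...}
print(compressed)
```

Because each learned symbol can itself appear inside later patterns, repeated application yields hierarchically nested structure, which is the property the abstract emphasizes.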


Behavioral and Brain Sciences | 1998

Representation is representation of similarities

Shimon Edelman

Advanced perceptual systems are faced with the problem of securing a principled (ideally, veridical) relationship between the world and its internal representation. I propose a unified approach to visual representation, addressing the need for superordinate and basic-level categorization and for the identification of specific instances of familiar categories. According to the proposed theory, a shape is represented internally by the responses of a small number of tuned modules, each broadly selective for some reference shape, whose similarity to the stimulus it measures. This amounts to embedding the stimulus in a low-dimensional proximal shape space spanned by the outputs of the active modules. This shape space supports representations of distal shape similarities that are veridical in the sense of Shepard's (1968) second-order isomorphism (i.e., correspondence between distal and proximal similarities among shapes, rather than between distal shapes and their proximal representations). Representation in terms of similarities to reference shapes supports processing (e.g., discrimination) of shapes that are radically different from the reference ones, without the need for the computationally problematic decomposition into parts required by other theories. Furthermore, a general expression for similarity between two stimuli, based on comparisons to reference shapes, can be used to derive models of perceived similarity ranging from continuous, symmetric, and hierarchical ones, as in multidimensional scaling (Shepard 1980), to discrete and nonhierarchical ones, as in the general contrast models (Shepard & Arabie 1979; Tversky 1977).
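A minimal numerical sketch of the second-order-isomorphism idea follows. The "shapes" here are plain random parameter vectors, and the Gaussian similarity measure and its width are assumptions made for illustration, not the paper's model: each stimulus is coded by its similarities to a few reference shapes, and distances in that low-dimensional code are compared with distances among the stimuli themselves.

```python
# Illustrative sketch: represent each stimulus by its similarities to a
# handful of reference "shapes", then check that distances in this
# low-dimensional proximal space track distances among the distal stimuli
# (second-order isomorphism). All vectors and parameters are made up.

import numpy as np

rng = np.random.default_rng(0)

references = rng.normal(size=(5, 20))      # 5 reference shapes, 20 distal dims
stimuli    = rng.normal(size=(40, 20))     # 40 stimuli in the same distal space

def similarity_code(x, refs, sigma=4.0):
    """Broadly tuned module responses: Gaussian similarity to each reference."""
    d = np.linalg.norm(refs - x, axis=1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

proximal = np.array([similarity_code(s, references) for s in stimuli])  # 40 x 5

def pairwise(X):
    diff = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diff, axis=-1)

distal_d   = pairwise(stimuli)     # distances among the stimuli themselves
proximal_d = pairwise(proximal)    # distances among their 5-D similarity codes

iu = np.triu_indices(len(stimuli), k=1)
r = np.corrcoef(distal_d[iu], proximal_d[iu])[0, 1]
print(f"correlation of distal vs. proximal distances: {r:.2f}")
```

A high correlation means the five-dimensional similarity code preserves the similarity structure of the distal stimuli even though it never represents any individual stimulus veridically, which is the sense of veridicality the abstract claims.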


Vision Research | 1995

Fast perceptual learning in hyperacuity

Manfred Fahle; Shimon Edelman; Tomaso Poggio

We investigated fast improvement of visual performance in several hyperacuity tasks such as vernier acuity and stereoscopic depth perception in almost 100 observers. Results indicate that the fast phase of perceptual learning, occurring within less than 1 hr of training, is specific for the visual field position and for the particular hyperacuity task, but is only partly specific for the eye trained and for the offset tested. Learning occurs without feedback. We conjecture that the site of learning may be quite early in the visual pathway.


Neural Information Processing Systems | 1989

A self-organizing multiple-view representation of 3D objects

Daphna Weinshall; Shimon Edelman; Heinrich H. Bülthoff

We explore a representation of 3D objects in which several distinct 2D views are stored for each object. We demonstrate the ability of a two-layer network of thresholded summation units to support such representations. Using unsupervised Hebbian relaxation, the network learned to recognize ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects.
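A simplified sketch of the general scheme follows. The stimulus model, tuning width, learning rate, and the explicit grouping of stored views by object used in the readout are all assumptions made for illustration; in the paper the grouping emerges from the unsupervised Hebbian relaxation itself, and the units are thresholded summation units rather than Gaussian ones.

```python
# Simplified multiple-view recognition sketch: view-tuned units become
# mutually associated through a Hebbian rule when views of the same object
# are presented close together in time; a novel view is then labeled by the
# object whose stored views gather the most support.

import numpy as np

rng = np.random.default_rng(1)
n_objects, views_per_object, dim = 10, 4, 50

prototypes = rng.normal(size=(n_objects, dim))

def make_view(obj, noise=0.3):
    v = prototypes[obj] + noise * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# First layer: one view-tuned unit per stored training view.
stored_views = np.array([make_view(o)
                         for o in range(n_objects)
                         for _ in range(views_per_object)])
view_object = np.repeat(np.arange(n_objects), views_per_object)

def layer1(x, sigma=0.6):
    """Graded responses of the view-tuned units to an input view."""
    d = np.linalg.norm(stored_views - x, axis=1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

# Unsupervised Hebbian association: views of one object are presented in
# temporal proximity, so the view units they co-activate become linked.
A = np.zeros((len(stored_views), len(stored_views)))
for epoch in range(3):
    for obj in range(n_objects):
        for _ in range(views_per_object):
            r = layer1(make_view(obj))
            A += 0.05 * np.outer(r, r)        # Hebbian co-activation update
np.fill_diagonal(A, 0.0)

def recognize(x):
    """Spread activity once through the learned associations, then read out
    the object whose view units gather the most support (grouping given
    explicitly here for brevity)."""
    r = layer1(x)
    support = r + A @ r
    per_object = [support[view_object == o].sum() for o in range(n_objects)]
    return int(np.argmax(per_object))

n_test, correct = 200, 0
for _ in range(n_test):
    obj = int(rng.integers(n_objects))
    correct += int(recognize(make_view(obj, noise=0.5)) == obj)
print(f"novel-view recognition accuracy: {correct / n_test:.0%}")
```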


Minds and Machines | 1993

Representation, Similarity, and the Chorus of Prototypes

Shimon Edelman

It is proposed to conceive of representation as an emergent phenomenon that is supervenient on patterns of activity of coarsely tuned and highly redundant feature detectors. The computational underpinnings of the outlined concept of representation are (1) the properties of collections of overlapping graded receptive fields, as in the biological perceptual systems that exhibit hyperacuity-level performance, and (2) the sufficiency of a set of proximal distances between stimulus representations for the recovery of the corresponding distal contrasts between stimuli, as in multidimensional scaling. The present preliminary study appears to indicate that this concept of representation is computationally viable, and is compatible with psychological and neurobiological data.
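As a small numerical sketch of the second point (all parameters are illustrative, and the stimuli are plain scalar values rather than visual stimuli), a bank of coarsely tuned, overlapping Gaussian receptive fields yields population codes whose mutual distances preserve the ordering of the distal differences between the stimuli, which is what a multidimensional-scaling style recovery of distal contrasts requires:

```python
# Illustrative sketch: encode scalar stimulus values with a small bank of
# coarsely tuned, heavily overlapping Gaussian "receptive fields", then check
# that distances between the population codes (proximal distances) preserve
# the ordering of differences between the stimuli (distal contrasts).

import numpy as np

rng = np.random.default_rng(2)

centers = np.linspace(0.0, 10.0, 7)          # 7 broadly tuned detectors
width = 3.0                                   # coarse tuning: heavy overlap

def population_code(value):
    return np.exp(-((value - centers) ** 2) / (2 * width ** 2))

stimuli = np.sort(rng.uniform(0.0, 10.0, size=25))
codes = np.array([population_code(s) for s in stimuli])

iu = np.triu_indices(len(stimuli), k=1)
distal   = np.abs(stimuli[:, None] - stimuli[None, :])[iu]
proximal = np.linalg.norm(codes[:, None, :] - codes[None, :, :], axis=-1)[iu]

# Rank correlation between distal and proximal distances; a value near 1
# means the coarse, redundant code preserves the distal similarity structure.
def rank(a):
    return np.argsort(np.argsort(a))

r = np.corrcoef(rank(distal), rank(proximal))[0, 1]
print(f"rank correlation (distal vs. proximal distances): {r:.2f}")
```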

Collaboration


Dive into Shimon Edelman's collaborations.

Top Co-Authors

Tomaso Poggio

Massachusetts Institute of Technology


Daphna Weinshall

Hebrew University of Jerusalem
