Publication


Featured research published by Kourosh Saberi.


Nature | 1999

Cognitive restoration of reversed speech

Kourosh Saberi; David R. Perrott

Speech is the most complex auditory signal and requires the most processing. The human brain devotes large cortical areas to deciphering the information it contains, as well as to parsing speech sounds produced simultaneously by several speakers. The brain can also invoke corrective measures to restore distortions in speech; for example, if a brief speech sound is replaced by an interfering sound that masks it, such as a cough, the listener perceives the missing speech as if the brain had interpolated through the absent segment. We have studied the intelligibility of speech and find that it is resistant to time reversal of local segments of a spoken sentence, a manipulation that has been described as “the most drastic form of time scale distortion”.
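
The manipulation studied here, local time reversal, is easy to reproduce: cut the waveform into fixed-duration segments and reverse the samples within each segment while preserving segment order. Below is a minimal Python/NumPy sketch of that idea; the 50-ms segment duration and the function name are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

def locally_reverse(signal, fs, segment_ms=50.0):
    """Reverse a waveform within fixed-length local segments.

    Segment order is preserved; only the samples inside each
    segment are flipped, producing locally time-reversed speech.
    """
    seg_len = max(1, int(fs * segment_ms / 1000.0))
    out = np.copy(signal)
    for start in range(0, len(signal), seg_len):
        out[start:start + seg_len] = signal[start:start + seg_len][::-1]
    return out

# Example: white noise as a stand-in for a recorded sentence.
fs = 16000
sentence = np.random.randn(fs)          # 1 s of audio
distorted = locally_reverse(sentence, fs, segment_ms=50.0)
```

Sweeping segment_ms is the natural way to probe how much local reversal intelligibility tolerates.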


Attention, Perception, & Psychophysics | 1990

Auditory psychomotor coordination and visual search performance

David R. Perrott; Kourosh Saberi; Kathleen Brown; Thomas Z. Strybel

In Experiments 1 and 2, the time to locate and identify a visual target (visual search performance in a two-alternative forced-choice paradigm) was measured as a function of the location of the target relative to the subject’s initial line of gaze. In Experiment 1, tests were conducted within a 260° region on the horizontal plane at a fixed elevation (eye level). In Experiment 2, the position of the target was varied in both the horizontal (260°) and the vertical (±46° from the initial line of gaze) planes. In both experiments, and for all locations tested, the time required to conduct a visual search was reduced substantially (175–1,200 msec) when a 10-Hz click train was presented from the same location as that occupied by the visual target. Significant differences in latencies were still evident when the visual target was located within 10° of the initial line of gaze (central visual field). In Experiment 3, we examined head and eye movements that occur as subjects attempt to locate a sound source. Concurrent movements of the head and eyes are commonly encountered during auditorily directed search behavior. In over half of the trials, eyelid closures were apparent as the subjects attempted to orient themselves toward the sound source. The results from these experiments support the hypothesis that the auditory spatial channel has a significant role in regulating visual gaze.


Cerebral Cortex | 2010

Hierarchical Organization of Human Auditory Cortex: Evidence from Acoustic Invariance in the Response to Intelligible Speech

Kayoko Okada; Feng Rong; Jon Venezia; William Matchin; I-Hui Hsieh; Kourosh Saberi; John T. Serences; Gregory Hickok

Hierarchical organization of human auditory cortex has been inferred from functional imaging observations that core regions respond to simple stimuli (tones) whereas downstream regions are selectively responsive to more complex stimuli (band-pass noise, speech). It is assumed that core regions code low-level features, which are combined at higher levels in the auditory system to yield more abstract neural codes. However, this hypothesis has not been critically evaluated in the auditory domain. We assessed sensitivity to acoustic variation within intelligible versus unintelligible speech using functional magnetic resonance imaging and a multivariate pattern analysis. Core auditory regions on the dorsal plane of the superior temporal gyrus exhibited high levels of sensitivity to acoustic features, whereas downstream auditory regions in both anterior superior temporal sulcus and posterior superior temporal sulcus (pSTS) bilaterally showed greater sensitivity to whether speech was intelligible or not and less sensitivity to acoustic variation (acoustic invariance). Acoustic invariance was most pronounced in more posterior STS regions of both hemispheres, which we argue support phonological-level representations. This finding provides direct evidence for a hierarchical organization of human auditory cortex and clarifies the cortical pathways supporting the processing of intelligible speech.
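
The logic of the multivariate pattern analysis used here (decode a stimulus distinction from voxel-wise response patterns in a region; high decoding accuracy means the region is sensitive to that distinction, while chance accuracy across acoustic variants of intelligible speech is the signature of acoustic invariance) can be sketched with standard tools. The snippet below is a minimal illustration using scikit-learn on synthetic data; the array shapes, labels, and effect size are invented stand-ins, not the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for one ROI: 80 trials x 200 voxels.
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)      # two acoustic variants
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5              # weak class-dependent signal

# Cross-validated decoding accuracy: above chance implies sensitivity
# to the acoustic distinction; chance-level accuracy for variants of
# intelligible speech would indicate acoustic invariance.
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```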


The Journal of Neuroscience | 2011

Functional Anatomy of Language and Music Perception: Temporal and Structural Factors Investigated Using Functional Magnetic Resonance Imaging

Corianne Rogalsky; Feng Rong; Kourosh Saberi; Gregory Hickok

Language and music exhibit similar acoustic and structural properties, and both appear to be uniquely human. Several recent studies suggest that speech and music perception recruit shared computational systems, and a common substrate in Broca's area for hierarchical processing has recently been proposed. However, this claim has not been tested by directly comparing the spatial distribution of activations to speech and music processing within subjects. In the present study, participants listened to sentences, scrambled sentences, and novel melodies. As expected, large swaths of activation for both sentences and melodies were found bilaterally in the superior temporal lobe, overlapping in portions of auditory cortex. However, substantial nonoverlap was also found: sentences elicited more ventrolateral activation, whereas the melodies elicited a more dorsomedial pattern, extending into the parietal lobe. Multivariate pattern classification analyses indicate that even within the regions of blood oxygenation level-dependent response overlap, speech and music elicit distinguishable patterns of activation. Regions involved in processing hierarchical aspects of sentence perception were identified by contrasting sentences with scrambled sentences, revealing a bilateral temporal lobe network. Music perception showed no overlap whatsoever with this network. Broca's area was not robustly activated by any stimulus type. Overall, these findings suggest that basic hierarchical processing for music and speech recruits distinct cortical networks, neither of which involves Broca's area. We suggest that previous claims are based on data from tasks that tap higher-order cognitive processes, such as working memory and/or cognitive control, which can operate in both speech and music domains.


Journal of the Acoustical Society of America | 1990

Minimum audible angle thresholds for sources varying in both elevation and azimuth

David R. Perrott; Kourosh Saberi

Minimum audible angle (MAA) thresholds were obtained for four subjects in a two-alternative, forced-choice, three-up/one-down adaptive paradigm as a function of the orientation of the array of sources. With sources distributed on the horizontal plane, the mean MAA threshold was 0.97 degrees. With the sources distributed on the vertical plane (array rotated 90 degrees), the mean MAA threshold was 3.65 degrees. Performance in both conditions was well in line with previous experiments of this type. Tests were also conducted with sources distributed on oblique planes. As the array was rotated from 10 degrees-60 degrees from the horizontal plane, relatively little change in the MAA threshold was observed; the mean MAA thresholds ranged from 0.78 degrees to 1.06 degrees. Only when the array was nearly vertical (80 degrees) was there any appreciable loss in spatial resolution; the MAA threshold had increased to 1.8 degrees. The relevance of these results to research on auditory localization under natural listening conditions, especially in the presence of head movements, is also discussed.
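
The adaptive procedure named in this abstract is a standard staircase: the angular separation shrinks after a run of correct responses and grows after an error, converging near a fixed point on the psychometric function (79.4% correct for a three-down/one-up rule; Levitt, 1971). The sketch below simulates such a track in Python; the simulated listener, step size, starting angle, and reversal count are all illustrative assumptions, not values from the paper.

```python
import random

def simulated_response(angle_deg, threshold_deg):
    """Illustrative 2AFC listener: 50% correct at zero separation,
    approaching 100% as separation grows (not the paper's data)."""
    p = 0.5 + 0.5 / (1.0 + (threshold_deg / max(angle_deg, 1e-6)) ** 2)
    return random.random() < p

def run_staircase(start_deg=10.0, step_deg=0.5, n_reversals=12,
                  threshold_deg=1.0):
    """Three-down/one-up track: three consecutive correct responses
    shrink the separation, any error grows it."""
    angle, run, direction, reversals = start_deg, 0, 0, []
    while len(reversals) < n_reversals:
        if simulated_response(angle, threshold_deg):
            run += 1
            if run == 3:                     # three in a row: harder
                run = 0
                if direction == +1:
                    reversals.append(angle)  # downward reversal
                direction = -1
                angle = max(angle - step_deg, 0.05)
        else:                                # any error: easier
            run = 0
            if direction == -1:
                reversals.append(angle)      # upward reversal
            direction = +1
            angle += step_deg
    last = reversals[-8:]                    # average the late reversals
    return sum(last) / len(last)

print(f"estimated MAA threshold: {run_staircase():.2f} degrees")
```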


Journal of the Acoustical Society of America | 1991

Free-field release from masking

Kourosh Saberi; Lynda Dostal; Toktam Sadralodabai; Vicki Bull; David R. Perrott

Free-field release from masking was studied as a function of the spatial separation of a signal and masker in a two-interval, forced-choice (2IFC) adaptive paradigm. The signal was a 250-ms train of clicks (100/s) generated by filtering 50-microsecond pulses with a TDH-49 speaker (0.9 to 9.0 kHz). The masker was continuous broadband (0.7 to 11 kHz) white noise presented at a level of 44 dBA measured at the position of the subject's head. In experiment I, masked and absolute thresholds were measured for 36 signal source locations (10-degree increments) along the horizontal plane as a function of seven masking source locations (30-degree increments). In experiment II, both absolute and masked thresholds were measured for seven signal locations along three vertical planes located at azimuthal rotations of 0 degrees (median vertical plane), 45 degrees, and 90 degrees. In experiment III, monaural absolute and masked thresholds were measured for various signal-masker configurations. Masking-level differences (MLDs) were computed relative to the condition where the signal and masker were in front of the subject, after using absolute thresholds to account for differences in the signal's sound-pressure level (SPL) due to direction. Maximum MLDs were 15 dB along the horizontal plane, 8 dB along the vertical, and 9 dB under monaural conditions.
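
Once the thresholds are measured, the MLD computation described above is simple bookkeeping: each masked threshold is first corrected by the direction-dependent absolute threshold, then expressed relative to the co-located reference condition. A minimal sketch, with invented threshold values (only the arithmetic mirrors the method):

```python
# Hypothetical thresholds in dB; the values are invented for illustration.

def corrected_threshold(masked_db, absolute_db):
    # Remove direction-dependent audibility differences before comparing.
    return masked_db - absolute_db

# Reference: signal and masker co-located in front of the subject.
ref = corrected_threshold(masked_db=44.0, absolute_db=10.0)

# Test: signal spatially separated from the masker.
sep = corrected_threshold(masked_db=30.0, absolute_db=11.0)

# Positive MLD = release from masking due to spatial separation.
mld_db = ref - sep
print(f"MLD: {mld_db:.1f} dB")  # 15.0 dB with these invented numbers
```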


Human Factors | 1991

Aurally aided visual search in the central visual field: effects of visual load and visual enhancement of the target

David R. Perrott; Toktam Sadralodabai; Kourosh Saberi; Thomas Z. Strybel

Visual search performance was examined in a two-alternative, forced-choice paradigm. The task involved locating and identifying which of two visual targets was present on a trial. The location of the targets varied relative to the subject's initial fixation point from 0 to 14.8 deg. The visual targets were either presented concurrently with a sound located at the same position as the visual target or were presented in silence. Both the number of distractor visual figures (0-63) present in the field during the search (Experiments 1 and 2) and the distinctness of the visual target relative to the distractors (Experiment 2) were considered. Under all conditions, visual search latencies were reduced when spatially correlated sounds were present. Aurally guided search was particularly enhanced when the visual target was located in the peripheral regions of the central visual field and when a larger number of distractor images (63) were present. Similar results were obtained under conditions in which the target was visually enhanced. These results indicate that spatially correlated sounds may have considerable utility in high-information environments (e.g., piloting an aircraft).


Proceedings of the National Academy of Sciences of the United States of America | 2012

Orthogonal acoustic dimensions define auditory field maps in human cortex

Brian Barton; Jonathan H. Venezia; Kourosh Saberi; Gregory Hickok; Alyssa A. Brewer

The functional organization of human auditory cortex has not yet been characterized beyond a rudimentary level of detail. Here, we use functional MRI to measure the microstructure of orthogonal tonotopic and periodotopic gradients forming complete auditory field maps (AFMs) in human core and belt auditory cortex. These AFMs show clear homologies to subfields of auditory cortex identified in nonhuman primates and in human cytoarchitectural studies. In addition, we present measurements of the macrostructural organization of these AFMs into “clover leaf” clusters, consistent with the macrostructural organization seen across human visual cortex. As auditory cortex is at the interface between peripheral hearing and central processes, improved understanding of the organization of this system could open the door to a better understanding of the transformation from auditory spectrotemporal signals to higher-order information such as speech categories.


Neuron | 1998

Effects of Interaural Decorrelation on Neural and Behavioral Detection of Spatial Cues

Kourosh Saberi; Yoshifumi Takahashi; Masakazu Konishi; Yehuda Albeck; Ben J. Arthur; Haleh Farahbod

The detection of interaural time differences (ITDs) for sound localization critically depends on the similarity between the left and right ear signals (interaural correlation). We show that, like humans, owls can localize phantom sound sources well until the correlation declines to a very low value, below which their performance rapidly deteriorates. Decreasing interaural correlation also causes the response of the owl's tectal auditory neurons to decline nonlinearly, with a rapid drop followed by a more gradual reduction. A detection-theoretic analysis of the statistical properties of neuronal responses could account for the variance of behavioral responses as interaural correlation is decreased. Finally, cross-correlation analysis suggests that low interaural correlations cause misalignment of cross-correlation peaks across different frequencies, contributing heavily to the nonlinear decline in neural and ultimately behavioral performance.
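
Both ingredients of this study have standard signal-processing forms: an interaural correlation is set by mixing a common noise source with an independent one, and the ITD is read off as the lag of the interaural cross-correlation peak. The Python/NumPy sketch below illustrates both steps; the sampling rate, lag range, and imposed delay are illustrative assumptions, not stimulus parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_pair(n, rho):
    """Two unit-variance noise channels with interaural correlation
    rho in [0, 1]: a common source mixed with an independent one."""
    common = rng.standard_normal(n)
    private = rng.standard_normal(n)
    left = common
    right = rho * common + np.sqrt(1.0 - rho ** 2) * private
    return left, right

def itd_from_xcorr(left, right, fs, max_lag_us=500):
    """Estimate the ITD as the lag (in microseconds) of the
    interaural cross-correlation peak within a bounded lag range."""
    n = len(left)
    max_lag = int(fs * max_lag_us * 1e-6)
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.dot(left[max(0, -lag):n - max(0, lag)],
                 right[max(0, lag):n - max(0, -lag)]) for lag in lags]
    return lags[int(np.argmax(xc))] / fs * 1e6

fs = 48000
left, right = correlated_pair(n=fs // 10, rho=0.9)  # 100 ms of noise
right = np.roll(right, 10)  # impose a 10-sample (~208 us) delay
print(f"estimated ITD: {itd_from_xcorr(left, right, fs):.0f} us")
```

As rho is lowered toward zero, the cross-correlation peak becomes shallower and less reliable, which is the degradation that the behavioral and neural measurements track.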


Journal of the Acoustical Society of America | 1990

Minimum audible movement angles as a function of sound source trajectory

Kourosh Saberi; David R. Perrott

Auditory resolution of moving sound sources was determined in a simulated motion paradigm for sources traveling along horizontal, vertical, or oblique orientations in the subject's frontal plane. With motion restricted to the horizontal orientation, minimum audible movement angles (MAMA) ranged from about 1.7 degrees at the lowest velocity (1.8 degrees/s) to roughly 10 degrees at the highest velocity (320 degrees/s). With the sound moving along an oblique orientation (rotated 45 degrees relative to the horizontal), MAMAs generally matched those of the horizontal condition. When motion was restricted to the vertical, MAMAs were substantially larger at all velocities (often exceeding 8 degrees). Subsequent tests indicated that MAMAs are a U-shaped function of velocity, with optimum resolution obtained at about 2 degrees/s for the horizontal (and oblique) and 7-11 degrees/s for the vertical orientation. Additional tests conducted at a fixed velocity of 1.8 degrees/s along oblique orientations of 80 degrees and 87 degrees indicated that even a small deviation from the vertical had a significant impact on MAMAs. A displacement of 10 degrees from the vertical orientation (a slope of 80 degrees) was sufficient to reduce thresholds (obtained at a velocity of 1.8 degrees/s) from about 11 degrees to approximately 2 degrees (a fivefold increase in acuity). These results are in good agreement with our previous study of minimum audible angles along oblique planes [Perrott and Saberi, J. Acoust. Soc. Am. 87, 1728-1731 (1990)].

Collaboration


Dive into Kourosh Saberi's collaborations.

Top Co-Authors

Gregory Hickok, University of California
I-Hui Hsieh, National Central University
David R. Perrott, California State University
Haleh Farahbod, University of California
Masakazu Konishi, California Institute of Technology
Eric R. Jensen, University of California
Feng Rong, University of California