Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andrew T. Sabin is active.

Publication


Featured research published by Andrew T. Sabin.


The Journal of Neuroscience | 2010

Enhancing Perceptual Learning by Combining Practice with Periods of Additional Sensory Stimulation

Beverly A. Wright; Andrew T. Sabin; Yuxuan Zhang; Nicole Marrone; Matthew B. Fitzgerald

Perceptual skills can be improved even in adulthood, but this learning seldom occurs by stimulus exposure alone. Instead, it requires considerable practice performing a perceptual task with relevant stimuli. It is thought that task performance permits the stimuli to drive learning. A corresponding assumption is that the same stimuli do not contribute to improvement when encountered separately from relevant task performance because of the absence of this permissive signal. However, these ideas are based on only two types of studies, in which the task was either always performed or not performed at all. Here we demonstrate enhanced perceptual learning on an auditory frequency-discrimination task in human listeners when practice on that target task was combined with additional stimulation. Learning was enhanced regardless of whether the periods of additional stimulation were interleaved with or provided exclusively before or after target-task performance, and even though that stimulation occurred during the performance of an irrelevant (auditory or written) task. The additional exposures were only beneficial when they shared the same frequency with, though they did not need to be identical to, those used during target-task performance. Their effectiveness also was diminished when they were presented 15 min after practice on the target task and was eliminated when that separation was increased to 4 h. These data show that exposure to an acoustic stimulus can facilitate learning when encountered outside of the time of practice on a perceptual task. By properly using additional stimulation one may markedly improve the efficiency of perceptual training regimens.


The Journal of Neuroscience | 2010

Generalization lags behind learning on an auditory perceptual task.

Beverly A. Wright; Roselyn M. Wilson; Andrew T. Sabin

The generalization of learning from trained to untrained conditions is of great potential value because it markedly increases the efficacy of practice. In principle, generalization and the learning itself could arise from either the same or distinct neural changes. Here, we assessed these two possibilities in the realm of human perceptual learning by comparing the time course of improvement on a trained condition (learning) to that on an untrained condition (generalization) for an auditory temporal-interval discrimination task. While significant improvement on the trained condition occurred within 2 d, generalization to the untrained condition lagged behind, only emerging after 4 d. The different time courses for learning and generalization suggest that these two types of perceptual improvement can arise from at least partially distinct neural changes. The notably longer time course for generalization than learning demonstrates that increasing the duration of training can be an effective means to increase the number of conditions to which learning generalizes on perceptual tasks.


Hearing Research | 2011

Separable developmental trajectories for the abilities to detect auditory amplitude and frequency modulation

Karen Banai; Andrew T. Sabin; Beverly A. Wright

Amplitude modulation (AM) and frequency modulation (FM) are inherent components of most natural sounds. The ability to detect these modulations, considered critical for normal auditory and speech perception, improves over the course of development. However, the extent to which the development of AM and FM detection skills follow different trajectories, and therefore can be attributed to the maturation of separate processes, remains unclear. Here we explored the relationship between the developmental trajectories for the detection of sinusoidal AM and FM in a cross-sectional design employing children aged 8-10 and 11-12 years and adults. For FM of tonal carriers, both average performance (mean) and performance consistency (within-listener standard deviation) were adult-like in the 8-10 y/o. In contrast, in the same listeners, average performance for AM of wideband noise carriers was still not adult-like in the 11-12 y/o, though performance consistency was already mature in the 8-10 y/o. Among the children there were no significant correlations for either measure between the degrees of maturity for AM and FM detection. These differences in developmental trajectory between the two modulation cues and between average detection thresholds and performance consistency suggest that at least partially distinct processes may underlie the development of AM and FM detection as well as the abilities to detect modulation and to do so consistently.


International Conference on Independent Component Analysis and Signal Separation | 2007

Modeling perceptual similarity of audio signals for blind source separation evaluation

Brendan Fox; Andrew T. Sabin; Bryan Pardo; Alec Zopf

Existing perceptual models of audio quality, such as PEAQ, were designed to measure audio codec performance and are not well suited to evaluation of audio source separation algorithms. The relationship of many other signal quality measures to human perception is not well established. We collected subjective human assessments of distortions encountered when separating audio sources from mixtures of two to four harmonic sources. We then correlated these assessments to 18 machine-measurable parameters. Results show a strong correlation (r=0.96) between a linear combination of a subset of four of these parameters and mean human assessments. This correlation is stronger than that between human assessments and several measures currently in use.
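The evaluation approach described above (fitting a linear combination of machine-measurable parameters to mean human assessments and checking the correlation) can be sketched as follows. This is a minimal illustration on synthetic data; the parameter values, weights, and clip counts are placeholders, not the study's actual measures.

```python
import numpy as np

# Synthetic stand-in for the study's data: a handful of machine-measurable
# signal-quality parameters per separated clip, plus mean human ratings.
rng = np.random.default_rng(0)

n_clips = 30
# Four hypothetical quality parameters measured on each separated source
# (names and values are illustrative only).
params = rng.normal(size=(n_clips, 4))
true_weights = np.array([0.8, -0.5, 0.3, 0.1])
ratings = params @ true_weights + rng.normal(scale=0.1, size=n_clips)

# Least-squares fit of a linear combination (with intercept) to the ratings.
X = np.column_stack([params, np.ones(n_clips)])
weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# Pearson correlation between the fitted combination and the ratings.
predicted = X @ weights
r = np.corrcoef(predicted, ratings)[0, 1]
print(f"correlation r = {r:.2f}")
```

In the paper, the same kind of fit over a subset of four of the eighteen measured parameters reached r = 0.96 against mean human assessments.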


The Journal of Neuroscience | 2012

Perceptual Learning Evidence for Tuning to Spectrotemporal Modulation in the Human Auditory System

Andrew T. Sabin; David A. Eddins; Beverly A. Wright

Natural sounds are characterized by complex patterns of sound intensity distributed across both frequency (spectral modulation) and time (temporal modulation). Perception of these patterns has been proposed to depend on a bank of modulation filters, each tuned to a unique combination of a spectral and a temporal modulation frequency. There is considerable physiological evidence for such combined spectrotemporal tuning. However, direct behavioral evidence is lacking. Here we examined the processing of spectrotemporal modulation behaviorally using a perceptual-learning paradigm. We trained human listeners for ∼1 h/d for 7 d to discriminate the depth of spectral (0.5 cyc/oct; 0 Hz), temporal (0 cyc/oct; 32 Hz), or upward spectrotemporal (0.5 cyc/oct; 32 Hz) modulation. Each trained group learned more on their respective trained condition than did controls who received no training. Critically, this depth-discrimination learning did not generalize to the trained stimuli of the other groups or to downward spectrotemporal (0.5 cyc/oct; −32 Hz) modulation. Learning on discrimination also led to worsening on modulation detection, but only when the same spectrotemporal modulation was used for both tasks. Thus, these influences of training were specific to the trained combination of spectral and temporal modulation frequencies, even when the trained and untrained stimuli had one modulation frequency in common. This specificity indicates that training modified circuitry that had combined spectrotemporal tuning, and therefore that circuits with such tuning can influence perception. These results are consistent with the possibility that the auditory system analyzes sounds through filters tuned to combined spectrotemporal modulation.


Archives of Otolaryngology-Head & Neck Surgery | 2013

Sound Localization in Unilateral Deafness With the Baha or TransEar Device

Robert A. Battista; Krystine Mullins; R. Mark Wiet; Andrew T. Sabin; Joyce Kim; Vasilike Rauch

OBJECTIVE To evaluate the sound localization capabilities of patients with unilateral, profound sensorineural hearing loss who had been treated with either a bone-anchored hearing device (Baha BP100) or a TransEar 380-HF bone-conduction hearing device. STUDY DESIGN Nonrandomized, prospective study. SETTING Tertiary referral private practice. PATIENTS Patients with unilateral, profound sensorineural hearing loss treated with a BP100 (n = 10) or a TransEar (n = 10) device. Patients wore the hearing device for at least 1 month and had normal hearing in the contralateral ear. Ten patients with normal, bilateral hearing served as controls. INTERVENTIONS Sound localization of a 3-second recorded sound with and without a TransEar or Baha device was assessed using an array of 7 speakers at head level separated by approximately 45 degrees. The recorded sounds were that of a barking dog or a police siren. Randomized trials of 4 presentations per speaker were given for each hearing condition. MAIN OUTCOME MEASURES Sound localization was assessed by the accuracy in response and the generalized laterality of response. RESULTS The mean accuracy of speaker localization was 24% and 26% for the aided condition using the BP100 and TransEar devices, respectively. The mean accuracy of laterality judgment was 59% and 69% for the aided condition using the BP100 and TransEar devices, respectively. These results were only slightly better than chance. There was no statistical difference in localization accuracy or laterality judgment between the BP100 and TransEar groups. CONCLUSION Neither the BP100 nor the TransEar device improved sound localization accuracy or laterality judgment ability in patients with unilateral, profound sensorineural hearing loss compared with performance in the unaided condition.


Creativity and Cognition | 2009

2DEQ: an intuitive audio equalizer

Andrew T. Sabin; Bryan Pardo

The complexity of music production tools can be a significant bottleneck in the creative process. Here we describe the development of a simple, intuitive audio equalizer with the idea that our approach could also be applied to other types of music production tools. First, users generate a large set of equalization curves representative of the most common types of modifications. Next, we represent the entire set of curves in 2-dimensional space and determine the spatial location of common auditory adjectives. Finally we create an interface, called 2DEQ, where the user can drag a single dot to control equalization in this adjective-labeled space.
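The 2DEQ pipeline described above (collect a large set of equalization curves, embed them in a 2-dimensional space, then map a dot position back to a full EQ curve) can be sketched with a simple principal-component embedding. The paper does not specify the dimensionality-reduction method, so treat PCA here as one plausible realization, applied to synthetic curves in place of the user-generated set.

```python
import numpy as np

# Illustrative sketch: project a collection of 40-band EQ curves onto
# their first two principal components, then reconstruct a curve from
# any 2-D "dot" position. All curves here are synthetic placeholders.
rng = np.random.default_rng(1)

n_curves, n_bands = 200, 40
bands = np.linspace(0, 1, n_bands)
# Synthetic curves built from two smooth underlying shapes plus noise.
shapes = np.stack([np.sin(2 * np.pi * bands), bands - 0.5])
curves = rng.normal(size=(n_curves, 2)) @ shapes \
    + 0.05 * rng.normal(size=(n_curves, n_bands))

# PCA via SVD of the mean-centered curve set.
mean_curve = curves.mean(axis=0)
U, S, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
components = Vt[:2]                            # 2-D basis for curve space
coords = (curves - mean_curve) @ components.T  # each curve's (x, y) position

def curve_at(x, y):
    """Reconstruct a full EQ curve from a 2-D dot position."""
    return mean_curve + np.array([x, y]) @ components

# Dragging the on-screen dot to a new position yields a new EQ curve.
eq = curve_at(1.0, -0.5)
print(eq.shape)  # (40,)
```

In the real system, common auditory adjectives would then be placed at the 2-D coordinates of their associated curves, labeling the space the user drags through.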


Experimental Brain Research | 2012

Perceptual learning of auditory spectral modulation detection

Andrew T. Sabin; David A. Eddins; Beverly A. Wright

Normal sensory perception requires the ability to detect and identify patterns of activity distributed across the receptor surface. In the visual system, the ability to perceive these patterns across the retina improves with training. This learning differs in magnitude for different trained stimuli and does not generalize to untrained spatial frequencies or retinal locations. Here, we asked whether training to detect patterns of activity across the cochlea yields learning with similar characteristics. Differences in learning between the visual and auditory systems would be inconsistent with the suggestion that the ability to detect these patterns is limited by similar constraints in these two systems. We trained three groups of normal-hearing listeners to detect spectral envelopes with a sinusoidal shape (spectral modulation) at 0.5, 1, or 2 cycles/octave and compared the performance of each group to that of a separate group that received no training. On average, as the trained spectral modulation frequency increased, the magnitude of training-induced improvement and the time to reach asymptotic performance decreased, while the tendency for performance to worsen within a training session increased. The training-induced improvements did not generalize to untrained spectral modulation frequencies or untrained carrier spectra. Thus, for both visual-spatial and auditory spectral modulation detection, learning depended upon and was specific to analogous features of the trained stimulus. Such similarities in learning could arise if, as has been suggested, similar constraints limit the ability to detect patterns across the receptor surface between the auditory and visual systems.


ACM Multimedia | 2009

A method for rapid personalization of audio equalization parameters

Andrew T. Sabin; Bryan Pardo

Potential users of audio production software, such as audio equalizers, may be discouraged by the complexity of the interface. We describe a system that simplifies the interface by quickly mapping an individual's preferred sound manipulation onto parameters for audio equalization. This system learns mappings by presenting a sequence of equalizer settings to the user and correlating the gain in each frequency band with the user's preference rating. Learning typically converges in 25 user ratings (under two minutes). The system then creates a simple on-screen slider that lets the user manipulate the audio in terms of the descriptive term, without needing to learn or use the parameters of an equalizer. Results are reported on the speed and effectiveness of the system for a set of 19 users and a set of five descriptive terms.
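The rating-correlation loop described above can be sketched roughly as follows, with a simulated user standing in for real preference ratings. The band count, rating model, and slider mapping are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch of the personalization loop: present random equalizer settings,
# collect a preference rating for each, then correlate each band's gain
# with the ratings to obtain a per-band weighting curve for one slider.
rng = np.random.default_rng(2)

n_settings, n_bands = 25, 40
gains = rng.uniform(-12.0, 12.0, size=(n_settings, n_bands))  # dB per band

# Simulated user who secretly prefers boosted high bands ("bright").
hidden_pref = np.linspace(-1.0, 1.0, n_bands)
ratings = gains @ hidden_pref + rng.normal(scale=2.0, size=n_settings)

# Pearson correlation of each band's gain with the ratings.
g = gains - gains.mean(axis=0)
r = ratings - ratings.mean()
weights = (g * r[:, None]).sum(axis=0) / (
    np.sqrt((g ** 2).sum(axis=0)) * np.sqrt((r ** 2).sum())
)

def slider_to_eq(position, max_gain_db=12.0):
    """Map a single slider position in [-1, 1] to a per-band EQ curve."""
    return position * max_gain_db * weights

curve = slider_to_eq(0.8)
```

After 25 rated settings, the learned weighting curve tracks the simulated user's hidden preference, and the single slider scales that curve up or down in place of forty individual band controls.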


Otolaryngology-Head and Neck Surgery | 2013

Hearing outcomes in stapes surgery: a comparison of fat, fascia, and vein tissue seals.

Richard J. Wiet; Robert A. Battista; R. Mark Wiet; Andrew T. Sabin

Objective To compare short- and long-term hearing results following stapedectomy using 3 different oval window grafting materials with the same stapes prosthesis. Study Design Database review. Setting Tertiary referral private practice. Subjects and Methods Subjects were ears that underwent stapedectomy for otosclerosis, with placement of fat, fascia, or vein as an oval window seal and reconstruction with a titanium bucket handle prosthesis. A total of 365 procedures met these inclusion criteria: 98 fat grafts, 135 fascia grafts, and 132 vein grafts. Outcome measures included short-term (<1 year) and long-term follow-up air-bone gap. We compared the preoperative and postoperative amount of change in air-bone gap and preoperative and postoperative amount of change in the high-frequency bone conduction average. Results Overall median times to short-term and long-term follow-ups were 2.2 months and 36.1 months, respectively. There were no statistically significant differences between the 3 tissue seal groups in the amount of change in air-bone gap. There was no significant difference in amount of change in high-frequency bone conduction (representing sensorineural hearing level) between the 3 tissue seal groups. Most patients in all 3 groups had an air-bone gap at long-term follow-up of ≤10 dB (fat, 79.5%; fascia, 78.8%; and vein, 75.6%), with 90.3% of all patients at ≤20 dB. Conclusions In both the short-term postoperative period and long-term follow-up, there were no significant differences in hearing results among 3 types of tissue seals of the oval window in stapes surgery. Fat, fascia, and vein grafts all provide satisfactory hearing outcomes in stapedectomy.

Collaboration


Dive into Andrew T. Sabin's collaborations.

Top Co-Authors

David A. Eddins
University of South Florida

Bryan Pardo
Northwestern University

R. Mark Wiet
Rush University Medical Center

Alec Zopf
Northwestern University