
Publication


Featured research published by Anastasios Sarampalis.


Journal of the Acoustical Society of America | 2015

Validation of a simple response-time measure of listening effort

Carina Pals; Anastasios Sarampalis; Hedderik van Rijn; Deniz Başkent

This study compares two response-time measures of listening effort that can be combined with a clinical speech test for a more comprehensive evaluation of total listening experience: verbal response times to auditory stimuli (RTs(aud)) and response times to a visual task (RTs(vis)) in a dual-task paradigm. The listening task was presented in five masker conditions: no noise, and two types of noise at two fixed intelligibility levels. Both RTs(aud) and RTs(vis) showed effects of noise. However, only RTs(aud) showed an effect of intelligibility. Because of its simplicity of implementation, RTs(aud) may be a useful effort measure for clinical applications.


Journal of the Acoustical Society of America | 2006

Cognitive effects of noise reduction strategies

Anastasios Sarampalis; Sridhar Kalluri; Brent Edwards; Ervin R. Hafter

Noise reduction algorithms that attempt to improve speech intelligibility in noise have been used in hearing aids for decades. Yet, the surprising result has been that they are not effective in improving the speech reception threshold (SRT). One explanation is that the artificial processing is redundant, providing the same information as that extracted by the listener’s own auditory system. Our working hypothesis is that natural processing is effective only if there are sufficient cognitive resources to support it and that a reduction in those resources may increase the functional value of the computational algorithm. In the present experiments, subjects performed in a dual‐task paradigm in which they listened to and repeated sentences presented in a noisy background while doing a competing visual task, such as a simple driving game. Using driving performance as an indicator of mental effort, we evaluate the effects of different noise reduction algorithms on speech reception. The logic is that, while ther...


Trends in hearing | 2016

Cognitive Compensation of Speech Perception With Hearing Impairment, Cochlear Implants, and Aging: How and to What Degree Can It Be Achieved?

Deniz Başkent; Jeanne Clarke; Carina Pals; Michel Ruben Benard; Pranesh Bhargava; Jefta D. Saija; Anastasios Sarampalis; Anita Wagner; Etienne Gaudrain

External degradations in incoming speech reduce understanding, and hearing impairment further compounds the problem. While cognitive mechanisms alleviate some of the difficulties, their effectiveness may change with age. In our research, reviewed here, we investigated cognitive compensation with hearing impairment, cochlear implants, and aging, via (a) phonemic restoration as a measure of top-down filling of missing speech, (b) listening effort and response times as a measure of increased cognitive processing, and (c) visual world paradigm and eye gazing as a measure of the use of context and its time course. Our results indicate that between speech degradations and their cognitive compensation, there is a fine balance that seems to vary greatly across individuals. Hearing impairment or inadequate hearing device settings may limit compensation benefits. Cochlear implants seem to allow the effective use of sentential context, but likely at the cost of delayed processing. Linguistic and lexical knowledge, which play an important role in compensation, may be successfully employed in advanced age, as some compensatory mechanisms seem to be preserved. These findings indicate that cognitive compensation in hearing impairment can be highly complicated—not always absent, but also not easily predicted by speech intelligibility tests only.


Journal of the Acoustical Society of America | 2013

Attention and effort during speech processing

Anastasios Sarampalis

The concepts of attention and effort are not new in auditory science, yet it is only recently that we have started systematically studying their involvement in speech processing. The task of deciphering speech can vary in its cognitive demands, depending on a number of factors, such as sound quality, the state of the auditory and cognitive systems, room acoustics, and the semantic complexity of the signal itself. Understanding these interactions not only sheds light on the functions supporting speech processing, but is also critical when evaluating new hearing aids and cochlear implant strategies. This presentation will describe work that is either based on Erv Hafter's ideas while I was at UC Berkeley or inspired by discussions with him in subsequent years. Its central theme is the measurement of listening effort and its implications for digital signal processing, cochlear implants, aging, and understanding non-native languages.


Journal of the Acoustical Society of America | 2018

Second-language learning in adolescents with cochlear implants

Deniz Başkent; Dorit Enja Jung; Wander Lowie; Anastasios Sarampalis

Speech signals delivered via cochlear implants (CIs) lack spectro-temporal details, yet young-implanted children can develop good native language (L1) skills. This study explores three research questions: 1. Can adolescents with CIs learn a second language (L2)? 2. Is there a difference in spoken (auditory-A) vs. written (visual-V) L2 skills? 3. Which perceptual and cognitive factors influence L2 learning? Two groups (L1 = Dutch, age 12-17 years), one with normal hearing (NH) and one with CIs, and both learning English (L2) at school, participated. L1 and L2 proficiency was measured in receptive vocabulary (A), comprehension (A, V), and general proficiency (V). Further, basic auditory functioning, in temporal (gap detection) and spectral (spectral ripple detection) resolution, and cognitive functioning, in IQ, working memory, and attention, were measured. Preliminary data (n = 7 per group) indicated comparable L1 proficiency between the NH and CI groups. While some CI users showed L2 proficiency within the NH range, on average, L2 proficiency was lower for the CI group. This effect was more pronounced for auditory tests. Reduced temporal and spectral resolution, but no difference in cognitive tests, were observed in the CI group compared to the NH group, emphasizing the importance of auditory factors in L2 learning.


Ear and Hearing | 2018

Effects of Additional Low-pass–filtered Speech on Listening Effort for Noise-band–vocoded Speech in Quiet and in Noise

Carina Pals; Anastasios Sarampalis; Mart van Dijk; Deniz Başkent

Objectives: Residual acoustic hearing in electric–acoustic stimulation (EAS) can benefit cochlear implant (CI) users through increased sound quality, improved speech intelligibility, and greater tolerance to noise. The goal of this study was to investigate whether the low-pass–filtered acoustic speech in simulated EAS can provide the additional benefit of reducing listening effort for the spectrotemporally degraded signal of noise-band–vocoded speech. Design: Listening effort was investigated using a dual-task paradigm as a behavioral measure and the NASA Task Load indeX as a subjective self-report measure. The primary task of the dual-task paradigm was identification of sentences presented in three experiments at three fixed intelligibility levels: near-ceiling, 50%, and 79% intelligibility, achieved by manipulating the presence and level of speech-shaped noise in the background. Listening effort for the primary intelligibility task was reflected in performance on the secondary, visual response-time task. Experimental speech processing conditions included monaural or binaural vocoder, with added low-pass–filtered speech (to simulate EAS) or without (to simulate CI). Results: In Experiment 1, in quiet with intelligibility near ceiling, additional low-pass–filtered speech reduced listening effort compared with binaural vocoder, in line with our expectations, although not compared with monaural vocoder. In Experiments 2 and 3, for speech in noise, added low-pass–filtered speech allowed the desired intelligibility levels to be reached at less favorable speech-to-noise ratios, as expected. Interestingly, this came without the cost of increased listening effort usually associated with poor speech-to-noise ratios; at 50% intelligibility, a reduction in listening effort was even observed on top of the increased tolerance to noise. The NASA Task Load indeX did not capture these differences.
Conclusions: The dual-task results provide partial evidence for a potential decrease in listening effort as a result of adding low-frequency acoustic speech to noise-band–vocoded speech. Whether these findings translate to CI users with residual acoustic hearing will need to be addressed in future research because the quality and frequency range of low-frequency acoustic sound available to listeners with hearing loss may differ from our idealized simulations, and additional factors, such as advanced age and varying etiology, may also play a role.


Journal of Psychopharmacology | 2016

The effects of acute tryptophan depletion on speech and behavioural mimicry in individuals at familial risk for depression

Koen Hogenelst; Anastasios Sarampalis; N. Pontus Leander; Barbara C. N. Müller; Robert A. Schoevers; Marije aan het Rot

Major depressive disorder (MDD) has been associated with abnormalities in speech and behavioural mimicry. These abnormalities may contribute to the impairments in interpersonal functioning that are often seen in MDD patients. MDD has also been associated with disturbances in the brain serotonin system, but the extent to which serotonin regulates speech and behavioural mimicry remains unclear. In a randomized, double-blind, crossover study, we induced acute tryptophan depletion (ATD) in individuals with or without a family history of MDD. Five hours afterwards, participants engaged in two behavioural-mimicry experiments in which speech and behaviour were recorded. ATD reduced the time participants waited before speaking, which might indicate increased impulsivity. However, ATD did not significantly alter speech otherwise, nor did it affect mimicry. This suggests that a brief lowering of brain serotonin has limited effects on verbal and non-verbal social behaviour. The null findings may be due to low test sensitivity, but they otherwise suggest that low serotonin has little effect on social interaction quality in never-depressed individuals. It remains possible that recovered MDD patients are more strongly affected.


Journal of the Acoustical Society of America | 2013

The Ear Club: Ervin R. Hafter's academic family

Frederick J. Gallun; G. Christopher Stecker; Psyche Loui; Anastasios Sarampalis

Ervin R. Hafter is a direct academic descendant of Wilhelm Wundt, William James, James Cattell, Robert Woodworth, and Warner Brown. Erv's Ph.D. advisor, Lloyd Jeffress, is one of the most renowned names in binaural hearing theory. For more than 40 years, auditory scientists from around the globe have been traveling to the University of California, Berkeley, where Erv has been teaching and researching, mentoring undergraduate and Ph.D. students, and supervising postdocs. Many of those who worked in Erv's lab have gone on to run labs of their own and have graduated and supervised many of their own students and postdocs, who in turn now run their own labs. This presentation will document the extensive web of connections that all started with the Hafter Lab and the weekly seminar series, the Ear Club, where for decades auditory scientists of all ranges of background and experience have been coming together to talk, listen, and make friends for a lifetime.


Journal of the Acoustical Society of America | 2004

Amplitude modulation detection with cochlear implants: Effects of electrode separation and stimulus level

Anastasios Sarampalis; Monita Chatterjee

Amplitude modulation (AM) detection performance has been studied in the past with normal‐hearing and hearing‐impaired populations. The temporal modulation transfer function (TMTF) is a plot of AM detection performance as a function of modulation rate and provides a way of characterizing temporal sensitivity. Typically the TMTF takes the form of a low‐pass filter, with performance declining above 50–70‐Hz modulation rate. TMTFs have also been measured with cochlear implant patients, showing a similar low‐pass characteristic, with a cutoff around 140‐Hz rate, while sensitivity to AM was found to increase with increasing current level. The present study investigated the effects of stimulation level and electrode separation on TMTFs with cochlear implant patients. TMTFs were measured for narrow through wide electrode separations and three different (loudness‐balanced) percentages of the dynamic range. Preliminary results indicate that sensitivity increases (lower thresholds) with increasing stimulation level,...


Journal of Speech, Language, and Hearing Research | 2009

Objective Measures of Listening Effort: Effects of Background Noise and Noise Reduction

Anastasios Sarampalis; Sridhar Kalluri; Brent Edwards; Ervin R. Hafter

Collaboration


Dive into Anastasios Sarampalis's collaborations.

Top Co-Authors

Deniz Başkent
University Medical Center Groningen

Carina Pals
University Medical Center Groningen

Anita Wagner
University of Groningen

Etienne Gaudrain
University Medical Center Groningen

Brent Edwards
University of California