Kostas Kokkinakis
University of Kansas
Publications
Featured research published by Kostas Kokkinakis.
Journal of the Acoustical Society of America | 2010
Kostas Kokkinakis; Philipos C. Loizou
Bilateral cochlear implant (BI-CI) recipients achieve high word recognition scores in quiet listening conditions. Still, speech recognition performance drops substantially in the presence of reverberation and more than one interferer. BI-CI users utilize information from just two directional microphones placed on opposite sides of the head in a so-called independent stimulation mode. To enhance the ability of BI-CI users to communicate in noise, two computationally inexpensive multi-microphone adaptive noise reduction strategies are proposed that exploit information collected simultaneously by the microphones of two behind-the-ear (BTE) processors (one per ear). To this end, four microphones are employed in total (two omni-directional and two directional), two in each of the two BTE processors. In the proposed two-microphone binaural strategies, all four microphones (two behind each ear) are used in a coordinated stimulation mode. The hypothesis is that such strategies combine spatial information from all microphones to form a better representation of the target than is available from any single input. Speech intelligibility is assessed in BI-CI listeners using IEEE sentences corrupted by up to three steady speech-shaped noise sources. Results indicate that multi-microphone strategies improve speech understanding in single- and multi-noise source scenarios.
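For readers who want a concrete starting point, the sketch below shows a generic two-microphone adaptive noise canceller built on a normalized LMS (NLMS) filter. It is only an illustration of the class of computationally inexpensive adaptive strategies described above, not the authors' coordinated binaural algorithm, and the microphone roles (a primary channel aimed at the target and a noise reference channel) are assumptions.

```python
import numpy as np

def nlms_noise_canceller(primary, reference, filter_len=64, mu=0.1, eps=1e-8):
    """Estimate the noise leaking into `primary` from the `reference` channel
    with an NLMS adaptive filter and subtract it (generic illustration only)."""
    w = np.zeros(filter_len)
    out = np.zeros_like(primary, dtype=float)
    for n in range(filter_len, len(primary)):
        x = reference[n - filter_len:n][::-1]      # most recent reference samples
        noise_est = np.dot(w, x)                   # current noise estimate
        e = primary[n] - noise_est                 # enhanced output sample
        w += mu * e * x / (np.dot(x, x) + eps)     # normalized LMS weight update
        out[n] = e
    return out
```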
international conference on acoustics, speech, and signal processing | 2007
Kostas Kokkinakis; Philipos C. Loizou
This paper describes a highly practical blind signal separation (BSS) scheme operating on subband domain data to blindly segregate convolutive mixtures of speech. The proposed method relies on spatiotemporal separation carried out in the time domain by using a multichannel blind deconvolution (MBD) algorithm that enforces separation by entropy maximization through the popular natural gradient algorithm (NGA). Numerical experiments with binaural impulse responses affirm the validity and illustrate the practical appeal of the presented technique even for difficult speech separation setups.
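As a rough illustration of the entropy-maximization idea behind the MBD approach, the snippet below implements the natural gradient update for the simpler instantaneous (non-convolutive) mixing case. The subband processing and convolutive unmixing filters of the actual method are omitted, and the tanh score function is a common but assumed choice.

```python
import numpy as np

def natural_gradient_ica(X, n_iter=200, lr=0.01):
    """Natural gradient ICA for instantaneous mixtures.
    X: (n_channels, n_samples). Update: dW = lr * (I - E[g(y) y^T]) W, g = tanh."""
    n, T = X.shape
    W = np.eye(n)
    for _ in range(n_iter):
        Y = W @ X
        G = np.tanh(Y)                              # assumed score function
        W += lr * (np.eye(n) - (G @ Y.T) / T) @ W   # natural gradient step
    return W @ X, W
```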
Journal of the Acoustical Society of America | 2011
Kostas Kokkinakis; Philipos C. Loizou
The purpose of this study is to determine the relative impact of reverberant self-masking and overlap-masking effects on speech intelligibility by cochlear implant listeners. Sentences were presented in two conditions: one wherein reverberant consonant segments were replaced with clean consonants, and another wherein reverberant vowel segments were replaced with clean vowels. The underlying assumption is that self-masking effects would dominate in the first condition, whereas overlap-masking effects would dominate in the second condition. Results indicated that the degradation of speech intelligibility in reverberant conditions is caused primarily by self-masking effects that give rise to flattened formant transitions.
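A minimal sketch of how such stimuli could be constructed is given below. It assumes the clean and reverberant recordings are time-aligned and that consonant/vowel boundaries are available from hand labels; both are illustrative assumptions rather than details taken from the study.

```python
import numpy as np

def splice_clean_segments(reverberant, clean, segments, fs):
    """Replace labeled segments (start_s, end_s) of a reverberant sentence with
    the corresponding samples from the time-aligned clean recording."""
    out = np.array(reverberant, dtype=float, copy=True)
    for start_s, end_s in segments:
        i, j = int(start_s * fs), int(end_s * fs)
        out[i:j] = clean[i:j]
    return out

# e.g., to probe self-masking, replace only the consonant segments:
# stimulus = splice_clean_segments(reverb_speech, clean_speech, consonant_times, fs=16000)
```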
Trends in Amplification | 2012
Kostas Kokkinakis; Behnam Azimi; Yi Hu; David R. Friedland
To restore hearing sensation, cochlear implants deliver electrical pulses to the auditory nerve by relying on sophisticated signal processing algorithms that convert acoustic inputs to electrical stimuli. Although individuals fitted with cochlear implants perform well in quiet, their speech intelligibility degrades in background noise far more than that of normal-hearing listeners. Traditionally, single-microphone noise reduction strategies have been used to increase performance in noise. More recently, a number of approaches have suggested that speech intelligibility in noise can be improved further by making use of two or more microphones instead. Processing strategies based on multiple microphones can better exploit the spatial diversity of speech and noise because such strategies rely mostly on spatial information about the relative positions of the competing sound sources. In this article, we identify and elucidate the most significant theoretical aspects that underpin single- and multi-microphone noise reduction strategies for cochlear implants. More specifically, we focus on strategies of both types that have been shown to be promising for use in current-generation implant devices. We present data from past and more recent studies, and we outline the direction that future research in the area of noise reduction for cochlear implants could follow.
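As a concrete, deliberately simple example of a multi-microphone strategy that exploits spatial information, the sketch below implements a fixed delay-and-sum beamformer. It stands in for the broader family of strategies reviewed in the article rather than reproducing any specific algorithm from it, and the integer steering delays are assumed to be known.

```python
import numpy as np

def delay_and_sum(mics, delays_samples):
    """Fixed delay-and-sum beamformer.
    mics: array of shape (n_mics, n_samples); delays_samples: integer steering
    delays (relative to a reference microphone) that time-align the target."""
    aligned = [np.roll(m, -int(d)) for m, d in zip(mics, delays_samples)]
    return np.mean(aligned, axis=0)  # coherent average reinforces the aligned target

# Note: np.roll wraps samples around the ends of the signal; in practice the
# first/last max(delays_samples) output samples should be discarded.
```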
Journal of the Acoustical Society of America | 2014
Kostas Kokkinakis; Natalie Pak
This paper investigates to what extent users of bilateral and bimodal fittings should expect to benefit from all three different binaural advantages found to be present in normal-hearing listeners. Head-shadow and binaural squelch are advantages occurring under spatially separated speech and noise, while summation emerges when speech and noise coincide in space. For 14 bilateral or bimodal listeners, speech reception thresholds in the presence of four-talker babble were measured in sound-field under various speech and noise configurations. Statistical analysis revealed significant advantages of head-shadow and summation for both bilateral and bimodal listeners. Squelch was significant only for bimodal listeners.
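The three binaural advantages are typically quantified as differences between speech reception thresholds (SRTs) measured in different spatial configurations; the helper below shows one common way to compute them. The condition names and sign conventions are illustrative assumptions, not the exact definitions used in the paper.

```python
def binaural_benefits(srt):
    """srt: dict of SRTs in dB SNR (lower is better) for hypothetical conditions.
    Positive return values indicate a benefit."""
    # Head shadow: monaural listening with the ear near vs. away from the noise.
    head_shadow = srt['mon_ear_near_noise'] - srt['mon_ear_far_from_noise']
    # Squelch: adding the second device to the better monaural condition.
    squelch = srt['mon_ear_far_from_noise'] - srt['bilateral_noise_on_side']
    # Summation: one device vs. two with speech and noise co-located in front.
    summation = srt['mon_colocated'] - srt['bilateral_colocated']
    return {'head_shadow': head_shadow, 'squelch': squelch, 'summation': summation}
```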
Journal of the Acoustical Society of America | 2014
Yi Hu; Kostas Kokkinakis
The purpose of this study was to determine the overall impact of early and late reflections on the intelligibility of reverberated speech by cochlear implant listeners. Two specific reverberation times were assessed. For each reverberation time, sentences were presented in three different conditions wherein the target signal was filtered through the early, late or entire part of the acoustic impulse response. Results obtained with seven cochlear implant listeners indicated that while early reflections neither enhanced nor reduced overall speech perception performance, late reflections severely reduced speech intelligibility in both reverberant conditions tested.
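A minimal sketch of how the early/late filtering conditions can be realized is shown below: the room impulse response is split at a boundary after the direct-path peak (50 ms is a common convention, assumed here rather than taken from the study) and the speech is convolved with each part.

```python
import numpy as np
from scipy.signal import fftconvolve

def early_late_split(rir, fs, boundary_ms=50.0):
    """Split a room impulse response at `boundary_ms` after the direct sound."""
    direct = int(np.argmax(np.abs(rir)))          # index of the direct-path peak
    cut = direct + int(boundary_ms * 1e-3 * fs)
    early, late = rir.copy(), rir.copy()
    early[cut:] = 0.0
    late[:cut] = 0.0
    return early, late

# Three stimulus conditions analogous to those described above:
# y_full  = fftconvolve(speech, rir)[:len(speech)]
# y_early = fftconvolve(speech, early)[:len(speech)]
# y_late  = fftconvolve(speech, late)[:len(speech)]
```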
PLOS ONE | 2014
Ying-Yee Kong; Ala Mullangi; Kostas Kokkinakis
Objective: To investigate a set of acoustic features and classification methods for the classification of three groups of fricative consonants differing in place of articulation. Method: A support vector machine (SVM) algorithm was used to classify fricatives extracted from the TIMIT database in quiet and in speech babble noise at various signal-to-noise ratios (SNRs). Spectral features including the four spectral moments, spectral peak and slope, Mel-frequency cepstral coefficients (MFCCs), Gammatone filter outputs, and magnitudes of the fast Fourier transform (FFT) spectrum were used for classification. The analysis frame was restricted to only 8 ms. In addition, commonly used linear and nonlinear principal component analysis dimensionality reduction techniques that project a high-dimensional feature vector onto a lower-dimensional space were examined. Results: With 13 MFCC coefficients, or with 14 or 24 Gammatone filter outputs, classification performance was greater than or equal to 85% in quiet and at +10 dB SNR. Using 14 Gammatone filter outputs above 1 kHz, classification accuracy remained high (greater than 80%) for a wide range of SNRs from +20 to +5 dB. Conclusions: High levels of classification accuracy for fricative consonants in quiet and in noise could be achieved using only spectral features extracted from a short time window. Results of this work have a direct impact on the development of speech enhancement algorithms for hearing devices.
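A stripped-down sketch of this kind of classifier is given below, using only FFT magnitude features (one of the feature sets listed above) and a scikit-learn SVM. The frame length, kernel, and data handling are illustrative assumptions and do not reproduce the full feature comparison or dimensionality reduction reported in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fft_magnitude_features(frames):
    """Log-magnitude FFT spectrum of each short (e.g., 8-ms) fricative frame.
    frames: array of shape (n_frames, frame_len)."""
    frames = np.asarray(frames, dtype=float)
    window = np.hamming(frames.shape[1])
    spectra = np.abs(np.fft.rfft(frames * window, axis=1))
    return np.log(spectra + 1e-10)

# X_train/X_test: 8-ms waveform frames; y_train/y_test: place-of-articulation labels.
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
# clf.fit(fft_magnitude_features(X_train), y_train)
# accuracy = clf.score(fft_magnitude_features(X_test), y_test)
```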
Journal of Speech Language and Hearing Research | 2014
Michelle Mason; Kostas Kokkinakis
Purpose: The purpose of this study was to evaluate the contribution of a contralateral hearing aid to the perception of consonants, in terms of voicing, manner, and place-of-articulation cues, in reverberation and noise by adult cochlear implantees aided by bimodal fittings. Method: Eight postlingually deafened adult cochlear implant (CI) listeners with a fully inserted CI in 1 ear and low-frequency hearing in the other ear were tested on consonant perception. They were presented with consonant stimuli processed in the following experimental conditions: 1 quiet condition, 2 different reverberation times (0.3 s and 1.0 s), and the combination of the 2 reverberation times with a single signal-to-noise ratio (5 dB). Results: Consonant perception improved significantly when listening with a CI in combination with a contralateral hearing aid as opposed to listening with a CI alone in 0.3 s and 1.0 s of reverberation. Significantly higher scores were also noted when noise was added to 0.3 s of reverberation. Conclusions: A considerable benefit was noted from the additional acoustic information in conditions of reverberation and reverberation plus noise. The bimodal benefit observed was more pronounced for voicing and manner of articulation than for place of articulation.
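For reference, the snippet below shows one straightforward way to generate reverberation-plus-noise stimuli of the kind described in the Method section. The room impulse responses and babble recording are assumed inputs, and the mixing procedure is a generic sketch rather than the study's exact protocol.

```python
import numpy as np
from scipy.signal import fftconvolve

def add_noise_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise ratio equals `snr_db`, then mix."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

# e.g., reverberation (hypothetical RIR for RT60 = 0.3 s) plus noise at 5 dB SNR:
# reverberant = fftconvolve(clean, rir_0p3s)[:len(clean)]
# stimulus = add_noise_at_snr(reverberant, babble, snr_db=5.0)
```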
Journal of the Acoustical Society of America | 2016
Navin Viswanathan; Kostas Kokkinakis; Brittany T. Williams
Several studies demonstrate that in complex auditory scenes, speech recognition is improved when the competing background and the target speech differ linguistically. However, such studies typically utilize spatially co-located speech sources, which may not fully capture typical listening conditions. Furthermore, co-located presentation may overestimate the observed benefit of linguistic dissimilarity. The current study examines the effect of spatial separation on linguistic release from masking. Results demonstrate that linguistic release from masking does extend to spatially separated sources. The overall magnitude of the observed effect, however, appears to be diminished relative to the co-located presentation conditions.
Journal of the Acoustical Society of America | 2015
Kostas Kokkinakis; Christina L. Runge; Qudsia Tahmina; Yi Hu
The smearing effects of room reverberation can significantly impair the ability of cochlear implant (CI) listeners to understand speech. To ameliorate the effects of reverberation, current dereverberation algorithms focus on recovering the direct sound from the reverberated signal by inverse filtering the reverberation process. This contribution describes and evaluates a spectral subtraction (SS) strategy capable of suppressing late reflections. Late reflections are the most detrimental to speech intelligibility by CI listeners as reverberation increases. By tackling only the late part of reflections, it is shown that users of CI devices can benefit from the proposed strategy even in highly reverberant rooms. The proposed strategy is also compared against an ideal reverberant (binary) masking approach. Speech intelligibility results indicate that the proposed SS solution is able to suppress additive reverberant energy to a degree comparable to that achieved by an ideal binary mask. The added advantage is that the SS strategy proposed in this work can allow for a potentially real-time implementation in clinical CI processors.
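The sketch below illustrates the general spectral-subtraction approach to suppressing late reverberant energy in the short-time Fourier transform domain. The late-reverberation model (a delayed, exponentially decayed copy of the reverberant power, in the spirit of Lebart-style estimators), the 50-ms early/late boundary, and the spectral floor are assumptions made for illustration, not the exact strategy evaluated in the paper.

```python
import numpy as np
from scipy.signal import stft, istft

def suppress_late_reverb(x, fs, rt60=1.0, boundary_s=0.05, floor=0.1):
    """Spectral subtraction of an estimated late-reverberation power spectrum."""
    nperseg, noverlap = 512, 384
    _, _, X = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    hop = nperseg - noverlap
    delay_frames = max(1, int(round(boundary_s * fs / hop)))
    # Energy decays 60 dB over rt60 seconds, i.e. by 10**(-6*boundary_s/rt60)
    # across the assumed early/late boundary.
    decay = 10.0 ** (-6.0 * boundary_s / rt60)
    P = np.abs(X) ** 2
    P_late = np.zeros_like(P)
    P_late[:, delay_frames:] = decay * P[:, :-delay_frames]   # late-reverb estimate
    gain = np.sqrt(np.maximum(1.0 - P_late / (P + 1e-12), floor ** 2))
    _, y = istft(gain * X, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return y
```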