Jinqiu Sang
University of Southampton
Publications
Featured research published by Jinqiu Sang.
international conference on acoustics, speech, and signal processing | 2011
Jalal Taghia; Jalil Taghia; Nasser Mohammadiha; Jinqiu Sang; Vaclav Bouse; Rainer Martin
Noise power spectral density (PSD) estimation is an important component of speech enhancement systems because of its considerable effect on the quality and intelligibility of the enhanced speech. Recently, many new algorithms have been proposed and significant progress in noise tracking has been made. In this paper, we present an evaluation framework for measuring the performance of some recently proposed and some well-known noise PSD estimators, and compare their performance in adverse acoustic environments. In this investigation we consider not only the mean of a spectral distance measure but also the variance of the estimators, as the latter is related to undesirable fluctuations known as musical noise. By testing against a variety of non-stationary noises, the robustness of the noise estimators in adverse environments is examined.
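The evaluation idea, reporting both the mean and the variance of a spectral distance between the true and the estimated noise PSD, can be sketched as follows. This is a minimal illustration with a hypothetical `log_error_stats` helper and a toy biased estimator, not the paper's actual framework:

```python
import numpy as np

def log_error_stats(true_psd, est_psd, eps=1e-12):
    """Mean and variance of the log-spectral error (in dB) between a true
    and an estimated noise PSD, arranged as frames x frequency bins.
    The mean captures overall accuracy; the variance reflects frame-to-frame
    fluctuations that can surface as musical noise after enhancement."""
    err = 10.0 * np.log10((est_psd + eps) / (true_psd + eps))
    return float(np.mean(np.abs(err))), float(np.var(err))

# toy example: a slightly biased, noisy estimator of a flat noise PSD
rng = np.random.default_rng(0)
true_psd = np.ones((100, 129))                     # 100 frames, 129 bins
est_psd = true_psd * 10 ** (rng.normal(0.5, 1.0, true_psd.shape) / 10)
mean_err, var_err = log_error_stats(true_psd, est_psd)
```

A low mean error with a high variance would still be undesirable, which is why the abstract argues for evaluating both statistics.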
international conference on acoustics, speech, and signal processing | 2011
Hongmei Hu; Jinqiu Sang; Mark E. Lutman; Stefan Bleeck
Hearing loss simulation (HLS) systems can give normal-hearing (NH) listeners a demonstration of the consequences of hearing impairment. An accurate simulation of hearing loss can be a valuable tool for developing signal processing strategies for hearing aids. This paper presents a novel HLS system based on a physiologically motivated compressive gammachirp auditory filter bank that simulates several aspects of hearing loss, including an elevated hearing threshold, loudness recruitment and reduced frequency selectivity. The model was evaluated by speech-in-noise tests. An experiment with normal-hearing and hearing-impaired listeners showed that the proposed HLS model can mimic typical hearing loss. It is concluded that a physiologically inspired hearing loss model can perform in the same way as phenomenological models, yet has a more fundamental underpinning.
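Loudness recruitment, one of the simulated aspects, is often illustrated as an expansion of the level range above the elevated threshold: levels below the threshold are inaudible, and the residual range grows abnormally fast toward the ceiling. The sketch below is a simple phenomenological expansion rule for illustration only, not the paper's compressive-gammachirp model; all parameter values are assumptions:

```python
import numpy as np

def simulate_recruitment(env_db, threshold_db=40.0, ceiling_db=100.0):
    """Map a channel envelope level (dB) through a recruitment-like
    expansion: the impaired range [threshold_db, ceiling_db] is stretched
    onto the full normal range [0, ceiling_db], so loudness grows with an
    abnormally steep slope; levels below threshold are clamped to silence."""
    env_db = np.asarray(env_db, dtype=float)
    slope = ceiling_db / (ceiling_db - threshold_db)   # > 1, i.e. expansion
    out = ceiling_db - slope * (ceiling_db - env_db)
    return np.maximum(out, 0.0)
```

With the default values, an input at the 100 dB ceiling is unchanged, an input at the 40 dB threshold maps to silence, and intermediate levels are expanded (70 dB in becomes 50 dB out).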
international conference on communication technology | 2011
Jinqiu Sang; Hongmei Hu; Guoping Li; Mark E. Lutman; Stefan Bleeck
Hearing-impaired people struggle more than normal-hearing listeners to understand speech corrupted by noise. In this paper we develop a supervised single-channel sparse coding (SC) strategy for hearing aid (HA) users in noisy environments. In this algorithm, sparse coding and shrinkage principles are applied to noisy speech. The algorithm is implemented in the temporal domain by arranging the one-dimensional speech signal into a data matrix. The strategy not only reduces background noise but also extracts key information from the speech. The performance of the supervised sparse coding strategy is compared with other state-of-the-art noise reduction strategies (Wiener filtering and spectral subtraction) in both objective and subjective experiments. Results show that sparse coding leads to better sound quality (objective measures) while preserving the level of intelligibility (subjective measures).
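The shrinkage step at the core of such a sparse coding strategy can be illustrated with plain soft-thresholding of sparse coefficients. This is a generic sketch of the shrinkage principle only; the paper's dictionary learning and temporal data-matrix arrangement are omitted, and the toy signal is an assumption:

```python
import numpy as np

def soft_shrink(coeffs, thr):
    """Soft-thresholding shrinkage: coefficients smaller than thr (mostly
    noise) are set to zero, larger ones are pulled toward zero by thr."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

# toy demo: a sparse coefficient vector buried in Gaussian noise
rng = np.random.default_rng(1)
clean = np.zeros(256)
clean[rng.choice(256, 8, replace=False)] = rng.uniform(3.0, 5.0, 8)
sigma = 0.3
noisy = clean + rng.normal(0.0, sigma, 256)
denoised = soft_shrink(noisy, 3.0 * sigma)   # threshold at ~3 sigma
```

Because the clean signal is sparse, zeroing the small coefficients removes most of the noise energy while the few large coefficients survive (slightly shrunk), which is the intuition behind applying shrinkage to sparse speech representations.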
international conference on acoustics, speech, and signal processing | 2013
Hongmei Hu; Jinqiu Sang; Mark E. Lutman; Stefan Bleeck
Cochlear implants (CIs) require efficient speech processing to maximize information transfer to the brain, especially in noise. Since speech information in a CI is coded in the waveform envelope, which is non-negative and highly correlated with the firing of auditory neurons, a novel CI processing strategy is proposed in which sparsity-constrained non-negative matrix factorization (NMF) is applied to the envelope matrix of 22 frequency channels in order to improve CI performance in noisy environments. The proposed strategy is evaluated by subjective speech reception threshold (SRT) experiments and subjective quality rating tests at three SNRs. Compared to the default commercially available CI processing strategy, the advanced combination encoder (ACE), the NMF algorithm significantly enhanced speech intelligibility and improved speech quality at 0 dB and 5 dB SNR for normal-hearing subjects listening to vocoded speech, but not at 10 dB.
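A sparsity-constrained NMF of a non-negative envelope matrix can be sketched with standard multiplicative updates plus an L1 penalty on the activations. This is an illustrative implementation under assumed parameters; the paper's exact cost function, sparsity constraint and settings are not specified in the abstract:

```python
import numpy as np

def sparse_nmf(V, rank=4, n_iter=500, sparsity=0.1, seed=0):
    """Approximate a non-negative matrix V (channels x frames) as W @ H
    using multiplicative updates for min ||V - WH||^2 + sparsity*||H||_1.
    The L1 term pushes the activations H toward sparse solutions."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# toy envelope matrix: 22 channels x 100 frames built from 4 patterns,
# standing in for the 22-channel CI envelope matrix
rng = np.random.default_rng(2)
V = rng.random((22, 4)) @ rng.random((4, 100))
W, H = sparse_nmf(V, rank=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The factorization stays non-negative by construction, matching the non-negativity of the CI envelopes that motivates NMF here.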
ieee international conference on intelligent systems and knowledge engineering | 2011
Hongmei Hu; Jalil Taghia; Jinqiu Sang; Jalal Taghia; Nasser Mohammadiha; Masoumeh Azarpour; Rajyalakshmi Dokku; Shouyan Wang; Mark E. Lutman; Stefan Bleeck
Automatic speech recognition (ASR) often fails in acoustically noisy environments. To improve the recognition scores of an ASR system in a realistic acoustic environment, a speech pre-processing system is proposed in this paper which consists of several stages: first, convolutive blind source separation (BSS) is applied to the spectrograms of signals that have been pre-processed by binaural Wiener filtering (BWF); second, the target speech is identified using the recognition rate of a Hidden Markov Model (HMM) based ASR system. To evaluate the performance of the proposed algorithm, the signal-to-interference ratio (SIR), the signal-to-noise ratio improvement (ISNR) and the speech recognition rates of the output signals were calculated on the signal corpus of the CHiME database. The results show an improvement in SIR and ISNR, but no clear improvement in speech recognition scores. Improvements for future research are suggested.
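The SIR metric used in the evaluation can be computed directly from the target and interference components of a separated output. This is a generic sketch assuming access to the ground-truth components, as is common in BSS evaluation; it is not tied to the paper's specific tooling:

```python
import numpy as np

def sir_db(target, interference, eps=1e-12):
    """Signal-to-interference ratio in dB, given the target and residual
    interference components of a separated output signal."""
    num = np.sum(np.asarray(target, dtype=float) ** 2) + eps
    den = np.sum(np.asarray(interference, dtype=float) ** 2) + eps
    return 10.0 * np.log10(num / den)

# toy check: attenuating the interference amplitude by a factor of 10
# (20 dB in power) raises the SIR by exactly 20 dB
rng = np.random.default_rng(3)
s = rng.normal(size=1000)
n = rng.normal(size=1000)
improvement = sir_db(s, 0.1 * n) - sir_db(s, n)
```

ISNR is the analogous difference between the output and input SNR of the enhancement system, again in dB.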
international symposium on communications and information technologies | 2011
Jinqiu Sang; Hongmei Hu; Ian M. Winter; Matthew Wright; Stefan Bleeck
We present a novel noise reduction strategy that is inspired by the physiology of the auditory brainstem. Following the hypothesis that neurons code sound based on fractional derivatives, we develop a model in which sound is transformed into a ‘neural space’. In this space, sound is represented by various fractional derivatives of the envelopes of a 22-channel filter bank. We demonstrate that noise reduction schemes can work in the neural space and that the sound can be resynthesized from it. A supervised sparse coding strategy reduces noise while keeping the sound quality intact; this was confirmed in preliminary subjective listening tests. We conclude that new signal processing schemes, inspired by neuronal processing, offer exciting opportunities to implement novel noise reduction and speech enhancement algorithms.
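The transform into the ‘neural space’ rests on fractional derivatives of channel envelopes. One standard frequency-domain definition multiplies the envelope spectrum by (iω)^α; the sketch below uses that definition as an assumed illustration, since the abstract does not specify the paper's implementation:

```python
import numpy as np

def frac_derivative(env, alpha, dt=1.0):
    """Order-alpha fractional derivative of a sampled envelope, computed
    in the frequency domain as multiplication by (i*omega)**alpha.
    For alpha = 1 this reduces to the ordinary derivative; non-integer
    alpha interpolates between the signal and its derivatives."""
    n = len(env)
    omega = 2j * np.pi * np.fft.fftfreq(n, d=dt)
    spec = np.fft.fft(env) * omega ** alpha
    spec[0] = 0.0  # suppress the DC term
    return np.real(np.fft.ifft(spec))

# sanity check: for alpha = 1 this reproduces the ordinary derivative
t = np.arange(256) / 256.0
env = np.sin(2 * np.pi * 4 * t)               # 4 cycles, exactly periodic
deriv = frac_derivative(env, 1.0, dt=1.0 / 256.0)
```

Varying alpha across a bank of such operators yields the family of envelope representations that the abstract calls the neural space.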
european signal processing conference | 2011
Hongmei Hu; Guoping Li; Liang Chen; Jinqiu Sang; Shouyan Wang; Mark E. Lutman; Stefan Bleeck
In: (pp. 1793-1796) | 2011
Jinqiu Sang; Guoping Li; Hongmei Hu; Mark E. Lutman; Stefan Bleeck
Hearing Research | 2015
Jinqiu Sang; Hongmei Hu; Chengshi Zheng; Guoping Li; Mark E. Lutman; Stefan Bleeck
conference of the international speech communication association | 2011
Jinqiu Sang; Guoping Li; Hongmei Hu; Mark E. Lutman; Stefan Bleeck