Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Chng Eng Siong is active.

Publication


Featured research published by Chng Eng Siong.


IEEE Region 10 Conference | 2004

Foreground motion detection by difference-based spatial temporal entropy image

Guo Jing; Chng Eng Siong; Deepu Rajan

Human motion detection is a fundamental task in computer vision. The most popular approach is background subtraction, which requires maintaining a background model. This paper describes an entropy-based method for human motion detection that requires no background model. Difference images between consecutive frames are computed, and at each pixel a spatio-temporal histogram is built by accumulating difference-image values over a local neighbourhood. The histogram is normalized to compute an entropy value, whose magnitude indicates the significance of motion. Experimental results demonstrate that the method detects moving objects effectively and reliably.
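The per-pixel entropy computation described above can be sketched as follows. This is a minimal NumPy illustration under assumed parameters (spatial window size and histogram bin count are not specified in the abstract):

```python
import numpy as np

def motion_entropy(frames, window=3, bins=16):
    """Per-pixel entropy of difference-image values accumulated over a
    spatio-temporal neighbourhood (a sketch; window/bin sizes are guesses)."""
    # Absolute difference images between consecutive frames.
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    _, H, W = diffs.shape
    entropy = np.zeros((H, W))
    edges = np.linspace(0, 255, bins + 1)
    r = window // 2
    for y in range(H):
        for x in range(W):
            # Accumulate difference values in a spatial window across all frames.
            patch = diffs[:, max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            hist, _ = np.histogram(patch, bins=edges)
            p = hist / hist.sum()            # normalise the histogram
            p = p[p > 0]
            entropy[y, x] = -(p * np.log2(p)).sum()  # high entropy => motion
    return entropy
```

Static pixels see a single-bin histogram (entropy 0), while pixels near a moving object accumulate a spread of difference values and score high.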


International Conference on Pattern Recognition | 2004

High accuracy classification of EEG signal

Wenjie Xu; Cuntai Guan; Chng Eng Siong; S. Ranganatha; M. Thulasidas; Jiankang Wu

Improving classification accuracy is a key issue in advancing brain-computer interface (BCI) research from the laboratory to real-world applications. This work presents a high-accuracy EEG classification method that uses single-trial EEG signals to detect left and right finger movement. We apply an optimal temporal filter to remove irrelevant signal components and subsequently extract key features from the spatial patterns of the EEG signal to perform classification. Specifically, the proposed method transforms the original EEG signal into a spatial pattern and applies an RBF-based feature selection method to generate robust features. Classification is performed by an SVM, and our experimental results show that the classification accuracy of the proposed method reaches 90%, compared with the best previously reported accuracy of 84%.
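The spatial-pattern-plus-SVM pipeline can be illustrated with a generic common spatial patterns (CSP) transform, the standard way of extracting spatial-pattern features for left/right motor classification. This is a sketch under assumptions — the paper's exact transform and its RBF feature selection step are not reproduced here; scikit-learn's `SVC` stands in for the classifier:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(X_left, X_right, n_pairs=2):
    """Common-spatial-pattern filters from two classes of EEG trials
    (trials x channels x samples). A generic CSP stand-in."""
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)
    C1, C2 = avg_cov(X_left), avg_cov(X_right)
    # Generalised eigenproblem C1 w = lambda (C1 + C2) w.
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    # Keep filters from both ends of the eigenvalue spectrum.
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, idx].T

def log_var_features(W, X):
    """Log-variance of spatially filtered trials, the standard CSP feature."""
    Z = np.einsum('fc,tcs->tfs', W, X)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

An SVM trained on these log-variance features separates the two classes when class-dependent spatial variance patterns exist.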


Conference on Intelligent Text Processing and Computational Linguistics | 2015

Modelling Public Sentiment in Twitter: Using Linguistic Patterns to Enhance Supervised Learning

Prerna Chikersal; Soujanya Poria; Erik Cambria; Alexander F. Gelbukh; Chng Eng Siong

This paper describes a Twitter sentiment analysis system that classifies a tweet as positive or negative based on its overall tweet-level polarity. Supervised learning classifiers often misclassify tweets containing conjunctions such as “but” and conditionals such as “if”, due to their special linguistic characteristics. These classifiers also assign a decision score very close to the decision boundary for a large number of tweets, which suggests that they are simply unsure, rather than completely wrong, about these tweets. To counter these two challenges, this paper proposes a system that enhances supervised learning for polarity classification by leveraging linguistic rules and sentic computing resources. The proposed method is evaluated on two publicly available Twitter corpora to illustrate its effectiveness.
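The idea of deferring to linguistic rules when the classifier is unsure can be sketched as below. This is a toy illustration, not the paper's actual rules, lexicon, or sentic computing resources; the word lists, margin, and "clause after but dominates" heuristic are illustrative assumptions:

```python
# Toy illustration: when a supervised classifier's decision score falls near
# the boundary, defer to a simple "but" rule that lets the clause after the
# conjunction dominate polarity.
POS = {"good", "great", "love", "nice"}
NEG = {"bad", "terrible", "hate", "boring"}

def lexicon_polarity(text):
    words = text.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

def rule_based_polarity(tweet):
    if " but " in tweet.lower():
        # Heuristic: the clause after "but" carries the dominant sentiment.
        after = tweet.lower().split(" but ", 1)[1]
        return lexicon_polarity(after)
    return lexicon_polarity(tweet)

def classify(tweet, clf_score, margin=0.2):
    """clf_score: signed decision score from a supervised classifier."""
    if abs(clf_score) < margin:          # classifier is unsure
        rule = rule_based_polarity(tweet)
        if rule != 0:
            return "positive" if rule > 0 else "negative"
    return "positive" if clf_score > 0 else "negative"
```

Confident classifier decisions pass through untouched; only near-boundary tweets are re-decided by the rule.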


PLOS ONE | 2014

Severity-based adaptation with limited data for ASR to aid dysarthric speakers

Mumtaz Begum Mustafa; Siti Salwah Salim; Noraini Mohamed; Bassam Ali Al-Qatab; Chng Eng Siong

Automatic speech recognition (ASR) is currently used in many assistive technologies, such as helping individuals with speech impairment communicate. One challenge in ASR for speech-impaired individuals is the difficulty of obtaining a good speech database of impaired speakers for building an effective acoustic model. Because the few existing databases of impaired speech are limited in size, the obvious way to build an acoustic model of impaired speech is to employ adaptation techniques. However, two issues have not been addressed by existing studies of adaptation for speech impairment: (1) identifying the most effective adaptation technique for impaired speech; and (2) the use of suitable source models to build an effective impaired-speech acoustic model. This research investigates both issues for dysarthria, a type of speech impairment affecting millions of people. We used both unimpaired and impaired speech as the source model with well-known adaptation techniques, namely maximum likelihood linear regression (MLLR) and constrained MLLR (C-MLLR). The recognition accuracy of each impaired-speech acoustic model is measured in terms of word error rate (WER), with further assessment of phoneme insertion, substitution and deletion rates. Unimpaired speech combined with limited high-quality speech-impaired data improves the performance of ASR systems in recognising severely impaired dysarthric speech. C-MLLR was also found to be better than MLLR at recognising mildly and moderately impaired speech, based on statistical analysis of the WER. Phoneme substitution was found to be the biggest contributor to WER in dysarthric speech at all levels of severity.
The results show that speech acoustic models derived from suitable adaptation techniques improve the performance of ASR systems in recognising impaired speech with limited adaptation data.
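The core of MLLR adaptation is a shared affine transform of the source model's Gaussian means, mu_hat = A*mu + b, estimated from adaptation data. The sketch below uses a least-squares fit as a simplified stand-in; real MLLR estimates [A b] by maximising the likelihood of the adaptation data via EM, which this does not implement:

```python
import numpy as np

def fit_mllr_like_transform(source_means, target_means):
    """Least-squares affine transform mapping source Gaussian means toward
    adaptation-data means: mu_hat = A mu + b. A simplification of MLLR,
    which ties one transform across many Gaussians."""
    n, d = source_means.shape
    X = np.hstack([source_means, np.ones((n, 1))])   # extended means [mu; 1]
    Wt, *_ = np.linalg.lstsq(X, target_means, rcond=None)
    return Wt.T                                       # d x (d+1) transform [A b]

def adapt_means(W, means):
    """Apply the shared transform to every Gaussian mean."""
    X = np.hstack([means, np.ones((len(means), 1))])
    return X @ W.T
```

Because one transform is shared across all Gaussians, even limited adaptation data can move the whole acoustic model toward the impaired speaker — the property the paper exploits.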


International Conference on Signal and Information Processing | 2013

Robust sound event recognition under TV playing conditions

Ng Wen Zheng Terence; Tran Huy Dat; Jonathan William Dennis; Chng Eng Siong

The ability to automatically recognize sound events in real-life conditions is an important part of applications such as acoustic surveillance and smart home automation. The main challenge for these applications is that the sound sources often come from unknown distances in different acoustic environments, which are also noisy and reverberant. Among in-home noises, the most difficult to deal with are non-stationary interferences such as a TV, radio or music playing. In this paper, we address one of the hardest situations in sound event recognition: the presence of interference under reverberant conditions. Our system takes a dual-microphone approach and combines several modules: first, a novel regression-based noise cancellation (RNC) to reduce the interference, and second, an improved subband power distribution image feature (iSPD-IF) to classify the noise-cancelled signals. A comprehensive experiment demonstrates nearly perfect classification accuracy under severely noisy and reverberant conditions.
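The regression idea behind RNC — learn a mapping from a reference microphone (capturing mostly the interference) to the interference component in the primary channel, then subtract the prediction — can be sketched with an offline least-squares FIR fit. This is a generic illustration under assumptions; the paper's RNC learns its own empirical mapping, and the filter length here is arbitrary:

```python
import numpy as np

def rnc_cancel(primary, reference, taps=16):
    """Least-squares FIR sketch of regression-based noise cancellation:
    fit a filter mapping the reference-microphone signal to the
    interference in the primary channel, then subtract the prediction."""
    N = len(primary)
    # Regressor matrix of delayed reference samples: R[i, k] = reference[i - k].
    R = np.zeros((N, taps))
    for k in range(taps):
        R[k:, k] = reference[:N - k]
    # Fit the filter by least squares and subtract the predicted interference.
    h, *_ = np.linalg.lstsq(R, primary, rcond=None)
    return primary - R @ h
```

Because the target signal is largely uncorrelated with the reference channel, the subtraction removes the interference while leaving the sound event mostly intact.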


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference | 2013

Adaptive semi-supervised tree SVM for sound event recognition in home environments

Ng Wen Zheng Terence; Tran Huy Dat; Huynh Thai Hoa; Chng Eng Siong

This paper addresses a problem in sound event recognition, more specifically for home environments in which training data is not readily available. Our proposed method is an extension of our previous method based on a robust semi-supervised Tree-SVM classifier. The key step in this paper is that the MFCC features are adapted using custom filters constructed at each classification node of the tree. This is shown to significantly improve the discriminative capability. Experimental results under realistic noisy environments demonstrate that our proposed framework outperforms conventional methods.


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference | 2013

A robust sound event recognition framework under TV playing conditions

Ng Wen Zheng Terence; Tran Huy Dat; Jonathan William Dennis; Chng Eng Siong

In this paper, we address the problem of performing sound event recognition in the presence of a television playing in a home environment. Our proposed framework consists of two modules: (1) a novel regression-based noise cancellation (RNC), a preprocessing step that utilises an additional reference microphone placed near the television to reduce the noise; RNC learns an empirical mapping, rather than using conventional adaptive methods, to achieve better noise reduction; and (2) an improved subband power distribution image feature (iSPD-IF), which builds on our existing classification framework by enhancing the feature extraction. A comprehensive experiment on our recorded data demonstrates high classification accuracy under severe television noise.


International Conference on Audio, Language and Image Processing | 2012

Spectral local harmonicity feature for voice activity detection

Pham Chau Khoa; Chng Eng Siong

In this paper, we propose a method to exploit the harmonicity of voiced human speech using only the most harmonic sub-part of the spectrum. The technique searches over all potential sub-windows of the spectrum and measures their local harmonicity using a newly proposed metric, which works in the spectral autocorrelation domain and employs a novel sinusoidal fitting approach. Experiments show that the new feature detects voiced speech frames heavily corrupted by non-stationary noise, even at 0 dB SNR, with high precision and recall, outperforming the Windowed Autocorrelation Lag Energy (WALE), a recently proposed voicing feature, under complex factory noise scenarios.
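The underlying observation — a harmonic spectrum is (quasi-)periodic in frequency, so its spectral autocorrelation shows a strong secondary peak at the harmonic spacing — can be sketched as a whole-spectrum score. This is a generic illustration, not the paper's sub-window search or its sinusoidal-fitting metric:

```python
import numpy as np

def spectral_harmonicity(frame):
    """Harmonicity score from the autocorrelation of the magnitude spectrum.
    Harmonic frames produce a strong normalised peak at the lag matching
    the harmonic spacing; noise frames do not."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    spec = spec - spec.mean()
    # Autocorrelation of the spectrum over non-negative lags.
    ac = np.correlate(spec, spec, mode='full')[len(spec) - 1:]
    ac = ac / ac[0]
    # Strongest normalised peak away from lag 0 indicates harmonicity.
    return ac[5:len(ac) // 2].max()
```

A frame made of stacked harmonics scores well above a white-noise frame, which is the contrast a voice activity detector thresholds on.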


Conference of the International Speech Communication Association | 2012

Detecting Converted Speech and Natural Speech for anti-Spoofing Attack in Speaker Recognition.

Zhizheng Wu; Chng Eng Siong; Haizhou Li


Conference of the International Speech Communication Association | 2014

Kernel density-based acoustic model with cross-lingual bottleneck features for resource limited LVCSR.

Van Hai Do; Xiong Xiao; Chng Eng Siong; Haizhou Li

Collaboration


Dive into Chng Eng Siong's collaboration.

Top Co-Authors

Haizhou Li (National University of Singapore)
Xiong Xiao (Nanyang Technological University)
Zhizheng Wu (University of Edinburgh)
Chin-Wei Eugene Koh (Nanyang Technological University)
Deepu Rajan (Nanyang Technological University)
Erik Cambria (Nanyang Technological University)