Norhaslinda Kamaruddin
Universiti Teknologi MARA
Publication
Featured research published by Norhaslinda Kamaruddin.
international symposium on consumer electronics | 2011
Kazi Shahzabeen Rahnuma; Abdul Wahab; Norhaslinda Kamaruddin; Hariyati Majid
Coping with stress has been shown to help avoid many complications in medical conditions. In this paper we present an alternative method for analyzing and understanding stress using the four basic emotions of happy, calm, sad and fear as our basis functions. Electroencephalogram (EEG) signals were captured from the scalp and measured in response to stimuli for the four basic emotions and to stress-inducing stimuli based on the IAPS emotion stimuli. Features were extracted from the EEG signals using Kernel Density Estimation (KDE) and classified using the Multilayer Perceptron (MLP), a neural network classifier, to recognize the subject's emotion leading to stress. Results show the potential of using the basic-emotion basis functions to visualize stress perception as an alternative tool for engineers and psychologists.
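The KDE-plus-MLP pipeline described in the abstract can be sketched as follows. This is an illustrative reconstruction on synthetic signals, not the authors' code: the density grid, trial counts and network width are all assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def kde_features(signal, grid=np.linspace(-3, 3, 16)):
    """Estimate the amplitude density of one EEG channel and sample it
    on a fixed grid, yielding a fixed-length feature vector."""
    return gaussian_kde(signal)(grid)

# Synthetic stand-in for EEG epochs: 80 trials, 4 emotion classes
# (happy, calm, sad, fear), 256 samples per trial.
X = np.array([kde_features(rng.normal(scale=1 + 0.2 * (i % 4), size=256))
              for i in range(80)])
y = np.array([i % 4 for i in range(80)])

# A small MLP, as in the paper; the hidden-layer size is an assumption.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

On real data the grid range would be set from the observed EEG amplitude distribution, and accuracy would be reported on held-out trials rather than the training set.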
international conference of the ieee engineering in medicine and biology society | 2012
Norhaslinda Kamaruddin; Abdul Wahab
People typically associate health with only physical health. However, health is also interconnected with mental and emotional health. People who are emotionally healthy are in control of their behaviors and experience a better quality of life. Hence, understanding human behavior is very important in ensuring a complete understanding of one's holistic health. In this paper, we attempt to map human behavior state (HBS) profiles onto a recalibrated speech affective space model (rSASM). The approach is derived from the hypotheses that: 1) behavior is influenced by emotion; 2) emotion can be quantified through speech; 3) emotion is dynamic and changes over time; and 4) emotion conveyance is conditioned by culture. Empirical results illustrate that the proposed approach can complement other types of behavior analysis by offering more explanatory components from the perspective of the emotion primitives (valence and arousal). Four different driving HBS, namely distracted, laughing, sleepy and normal, are profiled onto the rSASM to visualize the correlation between HBS and emotion. This approach can be incorporated into future behavior analysis to achieve better performance.
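Profiling behavior states on a valence-arousal plane, as described above, amounts to locating each state's frames in the 2-D space and summarizing them. The sketch below uses entirely hypothetical (valence, arousal) values; real coordinates would come from the rSASM, which is not reproduced here.

```python
import numpy as np

# Hypothetical per-frame (valence, arousal) estimates for each driving
# human behavior state (HBS); the centroids and spreads are invented.
rng = np.random.default_rng(3)
hbs_frames = {
    "normal":     rng.normal([0.2, -0.1], 0.1, size=(50, 2)),
    "laughing":   rng.normal([0.7,  0.6], 0.1, size=(50, 2)),
    "sleepy":     rng.normal([-0.1, -0.7], 0.1, size=(50, 2)),
    "distracted": rng.normal([-0.5, 0.4], 0.1, size=(50, 2)),
}

# Profile each HBS as its centroid in the valence-arousal plane.
profiles = {name: frames.mean(axis=0) for name, frames in hbs_frames.items()}
for name, (v, a) in profiles.items():
    print(f"{name:10s} valence={v:+.2f} arousal={a:+.2f}")
```

Plotting these centroids (and the per-frame scatter around them) gives the kind of visual HBS-emotion correlation the paper describes.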
international symposium on consumer electronics | 2011
Norzaliza Md Nor; Abdul Wahab; Norhaslinda Kamaruddin; Hariyati Majid
In this paper, we propose to understand and analyze driver behavior through the affective space model, which represents emotion in terms of valence (V) and arousal (A). Through this analysis, we can determine the correlation between driver behavior and the basic emotions, a correlation broadly agreed upon by psychologists in this area. The VA representation also lets us examine driver behavior after an accident. The data were collected using an electroencephalogram (EEG) machine from 4 subjects (2 females and 2 males) and divided into two parts. The first part covers three basic emotions for each driver (happy, calm and sad), whereas the second part contains 3 driving tasks. We use Mel-frequency cepstral coefficients (MFCC) for feature extraction, together with a neural network classifier, the multilayer perceptron (MLP). According to the preliminary experiment, the results show reasonable accuracy for verifying emotions and identifying subjects. Understanding driver behavior will assist us in developing a system that can easily detect a highly emotionally agitated driver so that accidents can be prevented.
international conference on software engineering | 2010
Abdul Wahab; Norhaslinda Kamaruddin; Lavan Palaniappan; Ma Li; Reza Khosrowabadi
This paper proposes an emotion recognition system using electroencephalographic (EEG) signals. Both time-domain and frequency-domain approaches to feature extraction were evaluated using a neural network (NN) and a fuzzy neural network (FNN) as classifiers. Data were collected using psychological stimulation experiments. Three basic emotions, namely angry, happy and sad, were selected for recognition, with relaxed as an emotionless state. Both the time-domain (based on statistical features) and frequency-domain (based on MFCC) approaches show potential for emotion recognition using EEG signals.
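The time-domain statistical approach mentioned above can be approximated with simple per-channel moments. The exact statistics used in the paper are not specified, so the choice below (mean, standard deviation, skewness, kurtosis) and the channel count are assumptions.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def statistical_features(epoch):
    """epoch: (channels, samples) array of EEG; returns one feature
    vector of per-channel statistical moments."""
    return np.concatenate([
        epoch.mean(axis=1),        # per-channel mean amplitude
        epoch.std(axis=1),         # per-channel variability
        skew(epoch, axis=1),       # asymmetry of the amplitude distribution
        kurtosis(epoch, axis=1),   # tailedness of the amplitude distribution
    ])

rng = np.random.default_rng(1)
epoch = rng.normal(size=(8, 512))   # 8 channels, 512 samples (assumed)
feats = statistical_features(epoch)
print(feats.shape)                  # 4 statistics x 8 channels = (32,)
```

The resulting vector would then feed the NN or FNN classifier, exactly as the MFCC-based frequency-domain features would.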
international conference on information and communication technology | 2013
Norhaslinda Kamaruddin; Abdul Wahab
The speech emotion recognition field is growing due to the increasing need for effective human-computer interaction. There are many approaches in terms of feature extraction methods coupled with classifiers to obtain optimum performance. However, none can claim superiority, as performance is very data-dependent and domain-oriented. In this paper, appropriate sets of features are investigated using a segregation method and the feature ranking algorithm of Automatic Relevance Determination (ARD) [1]. Two popular classifiers, the Multilayer Perceptron (MLP) [2] and the Generic Self-organizing Fuzzy Neural Network (GenSoFNN) [3], are employed to discriminate emotions in the FAU Aibo Emotion Corpus [4, 5]. The experimental results show that Mel Frequency Cepstral Coefficient (MFCC) [6] features are able to yield accuracy comparable to the baseline result [5]. In addition, it is observed that MLP can perform slightly better than GenSoFNN. Hence, an appropriate combination of extracted features and a good classifier is fundamental to a good speech emotion recognition system.
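The idea behind ARD-based feature ranking can be illustrated with scikit-learn's `ARDRegression`: the sparse Bayesian prior drives the weights of irrelevant features toward zero, so coefficient magnitude serves as a relevance score. The toy features and targets below are invented and stand in for the paper's acoustic features.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))   # 10 candidate features (toy stand-ins)
# Construct a target that depends only on features 2 and 7.
y = 2.0 * X[:, 2] - X[:, 7] + rng.normal(scale=0.1, size=200)

ard = ARDRegression().fit(X, y)
# ARD prunes irrelevant weights toward zero, so |coef| ranks relevance.
ranking = np.argsort(-np.abs(ard.coef_))
print(ranking[:3])  # features 2 and 7 should head the ranking
```

In the paper's setting the ranked features would then be segregated into subsets and fed to the MLP and GenSoFNN classifiers for comparison.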
international conference on information and communication technology | 2013
Hamwira Yaacob; Wahab Abdul; Norhaslinda Kamaruddin
Emotions are frequently studied based on two approaches: categorical and dimensional. In this study, a Multi-Layer Perceptron (MLP) was employed to classify four affective states posited by these approaches. It was observed that emotional states viewed from the dimensional perspective are well discriminated using a memory test. In addition, the dynamics of each of the four emotions were also presented, indicating that an emotional state does not occur abruptly.
international conference of the ieee engineering in medicine and biology society | 2012
Hamwira Yaacob; Izzah Karim; Abdul Wahab; Norhaslinda Kamaruddin
Emotions are ambiguous. Many techniques have been employed to perform emotion prediction and to understand emotional elicitation. Brain signals measured using the electroencephalogram (EEG) are also used in studies of emotion. Using KDE as the feature extraction technique and MLP for supervised learning on the brain signals, it was shown that all EEG channels can capture emotional experience. In addition, it was indicated that emotions are dynamic, as represented by the level of valence and the intensity of arousal. Such findings are useful in biomedical studies, especially in dealing with emotional disorders, and can lead to the use of a two-channel EEG device for neurofeedback applications.
international conference on information and communication technology | 2014
Norhaslinda Kamaruddin; Abdul Wahab Abdul Rahman; Nor Sakinah Abdullah
Human speech communication conveys the semantic information of the uttered words as well as the underlying emotional state of the interlocutor. Emotion identification is important, as it could give many applications added features that improve the human-computer interaction aspect. Such improvement can help retain customer satisfaction and loyalty in the long run and serve as an attraction factor for new customers. Although researchers have used many approaches to recognize emotion from speech, no one can claim superiority of their findings, because different feature extraction methods coupled with various classifiers may produce different performance depending on the data used. This paper presents a comparative analysis of a speech emotion identification system using two different feature extraction methods, Mel Frequency Cepstral Coefficient (MFCC) and Linear Prediction Coefficient (LPC), coupled with a Multilayer Perceptron (MLP) classifier. For further exploration, different numbers of MFCC filters are employed to observe the performance of the proposed system. The results indicate that MFCC-40 gives slightly better performance than the other MFCC configurations on the Berlin EMO-DB and NTU_American corpora, whereas MFCC-20 performs well for NTU_Asian. It is also observed that MFCC consistently performed better than LPC in all experiments, which is in line with many reported findings. Such understanding can be extended to further study of speech emotion in order to develop a more robust, lower-error system in the future.
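The MFCC-20 versus MFCC-40 comparison above comes down to the number of triangular mel filters in the filter bank. A minimal single-frame MFCC, with the filter count as a parameter, can be sketched as follows; the sample rate, frame length and toy sinusoid are assumptions, and production systems would add pre-emphasis, framing and deltas.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, sr, n_filters, n_ceps=13):
    """MFCC of one windowed frame; n_filters is the mel filter-bank
    size varied in the experiments (e.g. 20 vs 40)."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame)) ** 2
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, len(power)))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        if c > lo: fbank[i - 1, lo:c] = (np.arange(lo, c) - lo) / (c - lo)
        if hi > c: fbank[i - 1, c:hi] = (hi - np.arange(c, hi)) / (hi - c)
    log_energy = np.log(fbank @ power + 1e-10)
    return dct(log_energy, type=2, norm='ortho')[:n_ceps]

sr = 16000
t = np.arange(512) / sr
frame = np.hamming(512) * np.sin(2 * np.pi * 440 * t)  # toy "speech" frame
for n_filters in (20, 40):   # the MFCC-20 / MFCC-40 settings compared above
    print(n_filters, mfcc(frame, sr, n_filters)[:3])
```

Changing `n_filters` alters how finely the spectrum is summarized before the DCT, which is why the optimum differed across the three corpora.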
international conference on information and communication technology | 2014
Hamwira Yaacob; Wahab Abdul; Imad Fakhribo Al Shaikhli; Norhaslinda Kamaruddin
Several studies have profiled emotions from EEG signals using the affective computing approach, which includes data acquisition, signal pre-processing, feature extraction and classification. Different combinations of feature extraction and classification techniques have been proposed; however, the results are subjective, and very few studies include subject-independent classification. In this paper, a new profiling model, known as the CMAC-based Computational Model of Affects (CCMA), is proposed. CMAC is presumed to be a reasonable model for processing EEG signals, with its innate capability to solve non-linear problems through self-organizing feature mapping (SOFM). Features extracted using CCMA are trained using the Evolving Fuzzy Neural Network (EFuNN) as the classifier. For comparison, classification of emotions using features derived from the power spectral density (PSD) was also performed. The results show that profiling emotions with CCMA outperforms classifying emotions from PSD features.
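The PSD baseline used for comparison can be sketched with Welch's method. CCMA and EFuNN themselves are not reproduced here, and the band definitions, sampling rate and channel count below are assumptions (conventional EEG bands, a 14-channel headset).

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # conventional EEG bands

def psd_band_powers(epoch, fs=256):
    """epoch: (channels, samples); returns the mean Welch PSD per
    frequency band per channel as one feature vector."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs, axis=1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

rng = np.random.default_rng(2)
epoch = rng.normal(size=(14, 1024))  # 14 channels, 4 s at 256 Hz (assumed)
feats = psd_band_powers(epoch)
print(feats.shape)                   # 5 bands x 14 channels = (70,)
```

These band-power vectors are the kind of PSD features the paper classifies for comparison against the CCMA-derived features.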
Smart Mobile In-Vehicle Systems | 2014
Abdul Wahab; Norhaslinda Kamaruddin; Norzaliza Md Nor; Hüseyin Abut
There are many contributing factors that result in the high number of traffic accidents on roads and highways today. Globally, human (operator) error is observed to be the leading cause. These errors may be triggered by the driver's emotional state, which leads to his/her uncontrolled driving behavior. It has been reported in a number of recent studies that emotion has a direct influence on driver behavior. In this chapter, the pre- and postaccident emotion of the driver is studied in order to better understand the behavior of the driver. A two-dimensional Affective Space Model (ASM) is used to determine the correlation between driver behavior and driver emotion. The 2-D ASM developed in this study consists of the valence and arousal values extracted from electroencephalogram (EEG) signals of ten subjects while driving a simulator under three different conditions: initialization, pre-accident, and postaccident. The initialization condition refers to the subject's brain signals during the initial period, where he/she is asked to open and close his/her eyes. In order to elicit appropriate precursor emotions in the driver, picture stimuli selected for three basic emotions, namely happiness, fear, and sadness, are used. The brain signals of the drivers are captured and labeled as the EEG reference signals for each driver. The Mel frequency cepstral coefficient (MFCC) feature extraction method is then employed to extract relevant features to be used by the multilayer perceptron (MLP) classifier to verify emotion. Experimental results show an acceptable accuracy for emotion verification and subject identification. Subsequently, the 2-D ASM is employed to determine the correlation between the emotion and the behavior of drivers; the analysis provides a visualization tool to facilitate better understanding of the pre- and postaccident driver emotion.