Merlin Teodosia Suarez
De La Salle University
Publication
Featured research published by Merlin Teodosia Suarez.
knowledge and systems engineering | 2011
Ella T. Mampusti; Jose S. Ng; Jarren James I. Quinto; Grizelda L. Teng; Merlin Teodosia Suarez; Rhia Trogo
Multiple studies show that electroencephalogram (EEG) signals behave differently when humans experience various emotions. The objective of this project is to create a model of human academic emotions (namely boredom, confusion, engagement, and frustration) using EEG signals. Raw EEG signals were collected from nineteen (19) students while solving Berg's Card Sorting Task. Noise reduction was performed using an 8-30 Hz 10th-order Butterworth band-pass filter. The following statistical features of the raw EEG signals were computed: mean, standard deviation, mean of absolute first and second differences, and standardized mean of absolute first and second differences. k-Nearest Neighbor, Support Vector Machines, and Multilayer Perceptron were used as classifiers. Their highest accuracy scores were 54.09%, 46.86%, and 40.72%, respectively, using batch cross-validation.
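The preprocessing and feature-extraction pipeline described above is straightforward to reproduce. The following is a minimal Python sketch, assuming a single EEG channel sampled at a known rate fs; the filter specification (8-30 Hz, 10th-order Butterworth) and the feature list follow the abstract, while everything else (function names, array layout, SciPy/NumPy as tooling) is an assumption for illustration.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_8_30(signal, fs, order=10):
    # 10th-order Butterworth band-pass (8-30 Hz), applied zero-phase.
    sos = butter(order, [8.0, 30.0], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def eeg_features(signal):
    # Statistical features listed in the abstract, for one channel.
    d1 = np.abs(np.diff(signal, n=1))   # absolute first differences
    d2 = np.abs(np.diff(signal, n=2))   # absolute second differences
    sd = np.std(signal)
    return {
        "mean": np.mean(signal),
        "std": sd,
        "mean_abs_d1": d1.mean(),
        "mean_abs_d2": d2.mean(),
        "std_mean_abs_d1": d1.mean() / sd,  # standardized variants
        "std_mean_abs_d2": d2.mean() / sd,
    }

The resulting feature vectors can then be fed to any standard classifier (k-NN, SVM, or MLP), as in the study.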
intelligent tutoring systems | 2008
Merlin Teodosia Suarez; Raymund Sison
Machine learning techniques have been applied to the task of student modeling, particularly in building tutors for acquiring programming skill. These were developed for various languages (Pascal, Prolog, Lisp, C++) and programming paradigms (procedural and declarative), but never for object-oriented programming in Java. JavaBugs builds a bug library automatically using discrepancies between a student's program and a correct program. While other works analyze code snippets or UML diagrams to infer student knowledge of object-oriented design and programming, JavaBugs examines a complete Java program, identifies the correct program most similar to the student's solution among a collection of correct solutions, and builds trees of misconceptions using similarity measures and background knowledge. Experiments show that JavaBugs can detect the most similar correct program 97% of the time, and discover and detect 61.4% of the student misconceptions identified by the expert.
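As a rough illustration of the selection step above (finding the correct program most similar to a student's solution), here is a minimal sketch. JavaBugs itself compares program structures using similarity measures and background knowledge; this stand-in uses a simple token-sequence ratio from Python's difflib, and all names in it are hypothetical.

import difflib

def most_similar_reference(student_src, reference_programs):
    # Return the key of the reference solution most similar to the student's code.
    # Token-level similarity is a simplification of JavaBugs' structural comparison.
    student_tokens = student_src.split()
    def score(src):
        return difflib.SequenceMatcher(None, student_tokens, src.split()).ratio()
    return max(reference_programs, key=lambda name: score(reference_programs[name]))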
privacy security risk and trust | 2011
Juan Lorenzo Hagad; Roberto S. Legaspi; Masayuki Numao; Merlin Teodosia Suarez
Research in psychology and social signal processing (SSP) often describes posture as one of the most expressive nonverbal cues. Various studies in psychology particularly link posture-mirroring behaviour to rapport. Currently, however, there are few studies that deal with the automatic analysis of posture, and none focus specifically on its connection with rapport. This study presents a method for automatically predicting rapport in dyadic interactions based on posture and congruence. We begin by constructing a dataset of dyadic interactions with self-reported rapport annotations. Then, we present a simple system for posture classification and use it to detect posture congruence in dyads. Sliding time windows are used to collect posture congruence statistics across video segments. Lastly, various machine learning techniques are tested and used to create rapport models. Among the machine learners tested, Support Vector Machines and Multilayer Perceptrons performed best, at around 71% average accuracy.
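The sliding-window step described above can be sketched as follows, assuming per-frame posture labels are already available for both participants; the window and step sizes, the label encoding, and the function name are assumptions, not the paper's actual parameters.

import numpy as np

def congruence_over_windows(postures_a, postures_b, window, step):
    # Fraction of frames in which the two participants share the same posture
    # label, computed for each sliding window over the interaction.
    a = np.asarray(postures_a)
    b = np.asarray(postures_b)
    match = (a == b).astype(float)
    return np.array([match[s:s + window].mean()
                     for s in range(0, len(match) - window + 1, step)])

The per-window statistics would then serve as inputs to the rapport models (SVM or MLP) mentioned in the abstract.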
2010 3rd International Conference on Human-Centric Computing | 2010
Miguel Miranda; Julie Ann Alonzo; Janelle Campita; Stephanie Lucila; Merlin Teodosia Suarez
Laughter is an important aspect of non-verbal communication. Though laughter is often associated with happiness, this is not always the case; laughter may also convey different kinds of emotions. We posit that a variety of other emotions occur during laughter and therefore investigate this phenomenon. The objective of this research is to identify the underlying emotions in Filipino laughter. This research focuses on studying existing machine learning techniques for emotion identification from laughter audio signals in order to derive more suitable solutions. We present a comparative study of the performance of Multilayer Perceptron (MLP) and Support Vector Machines (SVM) using our system. Manual segmentation was done on recorded audio, and pre-processing was implemented using low-pass filters. The 13 Mel-frequency cepstral coefficients (MFCCs) and prosodic features (pitch, intensity, and formants) were extracted from the audio signals and separately fed to the machine classifiers. Results showed that the highest rate of correctly classified instances is achieved using prosodic features only: MLP yielded a rate of 44.4444% while SVM achieved 18.5185%.
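Extracting the 13 MFCCs per laughter segment, as described above, might look like the following sketch. librosa is one common choice, though the paper does not name its tooling, and the clip-level averaging here is an assumption.

import librosa

def mfcc_feature_vector(path, n_mfcc=13):
    # Load one laughter clip and average its 13 MFCCs over time,
    # yielding a single feature vector per clip.
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

Prosodic features (pitch, intensity, formants) would be extracted separately and, as in the study, fed to the MLP or SVM classifier on their own.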
affective computing and intelligent interaction | 2013
Zakia Hammal; Merlin Teodosia Suarez
This is an introduction to the Second International Workshop on Context Based Affect Recognition (CBAR 2013), held in conjunction with Affective Computing and Intelligent Interaction, 2-5 September 2013, Geneva, Switzerland.
Archive | 2012
Shin-ya Nishizaki; Masayuki Numao; Jaime Caro; Merlin Teodosia Suarez
Proceedings front matter (table of contents excerpt): [...]ion of Operations of Aspect-Oriented Languages, Sosuke Moriguchi and Takuo Watanabe, p. 187; Detection of the Music or Video Files in BitTorrent, Zhou Zhiqiang and Noriaki Yoshiura, p. 202; Author Index, p. 215.
artificial intelligence in education | 2011
Dana May Bustos; Geoffrey Loren Chua; Richard Thomas Cruz; Jose Miguel Santos; Merlin Teodosia Suarez
This paper investigates the feasibility of using gestures and posture for building affect models for an intelligent tutoring system (ITS). Recordings of students studying with a computer were taken, and an HMM was built to recognize gestures and posture. Results indicate that distinctions can be achieved with an accuracy of 43.10% using leave-one-out cross-validation. Results further indicate the relevance of hand location, movement, and speed of movement as features for affect modeling using gestures and posture.
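A per-class HMM classifier of the kind implied above (one model per gesture, posture, or affect class) can be sketched with hmmlearn, assuming feature sequences such as hand location, movement, and speed have already been extracted as 2D arrays of shape (frames, features) per recording; the number of hidden states and all names here are illustrative.

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_class_hmms(sequences_by_label, n_states=4):
    # Fit one Gaussian HMM per class label on its training sequences.
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)                  # concatenated observations
        lengths = [len(s) for s in seqs]     # per-sequence lengths
        models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return models

def classify_sequence(models, seq):
    # The predicted label is the HMM assigning the highest log-likelihood.
    return max(models, key=lambda label: models[label].score(seq))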
2010 3rd International Conference on Human-Centric Computing | 2010
Jocelynn Cu; Rafael Cabredo; Gregory Cu; Roberto S. Legaspi; Paul Salvador Inventado; Rhia Trogo; Merlin Teodosia Suarez
Advancement in ambient intelligence is driving the trend towards innovative interaction with computing systems. In this paper, we present our efforts towards the development of the ambient intelligent space TALA, which has the concept of empathy in cognitive science as its architecture's backbone to guide its human-system interactions. We envision TALA to be capable of automatically identifying its occupant, modeling his/her affective states and activities, and providing empathic responses via changes in ambient settings. We present here the empirical results and analyses we obtained for the first two of these three capabilities. We constructed face and voice datasets for identity and affect recognition, as well as an activity dataset. Using a multimodal approach, specifically a decision-level fusion of independent face and voice models, we obtained accuracies of 88% and 79% for identity and affect recognition, respectively. For activity recognition, classification is 80% accurate even without employing any fusion technique.
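The decision-level fusion step mentioned above can be illustrated with a small sketch; the paper does not specify its exact fusion rule, so the weighted-sum combination and the weight below are assumptions.

import numpy as np

def fuse_decisions(face_probs, voice_probs, w_face=0.5):
    # Combine per-class posterior estimates from the independent face and
    # voice models and return the index of the winning class.
    fused = w_face * np.asarray(face_probs) + (1.0 - w_face) * np.asarray(voice_probs)
    return int(np.argmax(fused))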
International Journal of Distance Education Technologies | 2013
Judith J. Azcarraga; Merlin Teodosia Suarez
Brainwave (EEG) signals and mouse behavior information are shown to be useful in predicting academic emotions, such as confidence, excitement, frustration, and interest. Twenty-five college students were asked to use the Aplusix math learning software while their brainwave signals and mouse behavior (number of clicks, duration of each click, and distance traveled by the mouse) were automatically captured. It is shown that by combining the features extracted from EEG signals with data representing mouse-click behavior, the accuracy in predicting academic emotions substantially increases compared to using only features extracted from EEG signals or mouse behavior alone. Furthermore, experiments were conducted to assess the prediction accuracy of the system at points during the learning session where several of the extracted features deviate significantly in value from their mean. The experiments confirm that prediction performance increases as the number of feature values that deviate significantly from the mean increases.
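The deviation analysis described in the last two sentences can be sketched as follows, assuming the combined EEG and mouse features are arranged one row per observation; the z-score threshold is an assumption, since the abstract does not say how "deviate significantly" is operationalized.

import numpy as np

def deviating_feature_counts(feature_rows, z_thresh=2.0):
    # For each observation, count how many features lie more than z_thresh
    # standard deviations away from that feature's mean.
    X = np.asarray(feature_rows, dtype=float)
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    return (np.abs(z) > z_thresh).sum(axis=1)

Prediction accuracy can then be reported separately for subsets of observations grouped by this count, mirroring the experiment above.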
pacific rim international conference on artificial intelligence | 2012
Jocelynn Cu; Rafael Cabredo; Roberto S. Legaspi; Merlin Teodosia Suarez
Rhythm is one of the most essential elements of music and can easily capture the attention of the listener. In this study, we explored various rhythm features and used them to build emotion models. The emotion labels used are based on Thayer's Model of Mood, which includes contentment, exuberance, anxiety, and depression. Empirical results identify 11 low-level rhythmic features for classifying music emotion. We also determined that KStar can be used to build user-specific emotion models with a precision of 0.476, recall of 0.480, and F-measure of 0.475.
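Evaluating a user-specific emotion model with the precision, recall, and F-measure reported above might look like the sketch below. KStar is a Weka instance-based learner with no direct scikit-learn equivalent, so a k-nearest-neighbour classifier stands in purely for illustration; the feature matrix, labels, and cross-validation setup are assumptions.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_recall_fscore_support

def evaluate_user_model(X_rhythm, y_emotion):
    # X_rhythm: one row of rhythm features per song; y_emotion: Thayer-quadrant labels.
    clf = KNeighborsClassifier(n_neighbors=3)   # stand-in for Weka's KStar
    preds = cross_val_predict(clf, X_rhythm, y_emotion, cv=5)
    return precision_recall_fscore_support(y_emotion, preds, average="weighted")[:3]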