Roberto S. Legaspi
Osaka University
Publications
Featured research published by Roberto S. Legaspi.
intelligent user interfaces | 2007
Roberto S. Legaspi; Yuya Hashimoto; Koichi Moriyama; Satoshi Kurihara; Masayuki Numao
The consideration of human feelings in automated music generation by intelligent music systems, albeit a compelling theme, has received very little attention. This work aims to computationally specify a system's music compositional intelligence that tightly couples with the listener's affective perceptions. First, the system induces a model that describes the relationship between feelings and musical structures. The model is learned by applying the inductive logic programming paradigm of FOIL, coupled with the Diverse Density weighting metric, over a dataset constructed from musical score fragments hand-labeled by the listener according to a semantic differential scale that uses bipolar affective descriptor pairs. A genetic algorithm, whose fitness function is based on the acquired model and follows basic music theory, is then used to generate variants of the original musical structures. Lastly, the system creates chordal and non-chordal tones out of the GA-obtained variants. Empirical results show that the system is 80.6% accurate on average in classifying the affective labels of the musical structures and that it can automatically generate musical pieces that stimulate four kinds of impressions, namely favorable-unfavorable, bright-dark, happy-sad, and heartrending-not heartrending.
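The GA step described above can be sketched as follows. This is a minimal illustration only: the fragment representation (a list of scale degrees) and the `affect_score` stand-in for the induced FOIL/Diverse-Density model are toy assumptions, not the paper's actual encoding or fitness function.

```python
import random

def affect_score(fragment):
    # Hypothetical stand-in for the learned affect model: reward
    # smooth stepwise motion and higher ("brighter") scale degrees.
    smoothness = -sum(abs(a - b) for a, b in zip(fragment, fragment[1:]))
    brightness = sum(fragment)
    return smoothness + 0.5 * brightness

def mutate(fragment, rate=0.2):
    # Nudge scale degrees by at most one step, clamped to 0..7,
    # mimicking a basic-music-theory constraint on variants.
    return [min(7, max(0, g + random.choice([-1, 0, 1])))
            if random.random() < rate else g for g in fragment]

def crossover(a, b):
    # Single-point crossover between two parent fragments.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(seed_fragment, generations=50, pop_size=20):
    # Initialize the population with heavy mutations of the original
    # structure, then select the top half each generation.
    population = [mutate(seed_fragment, rate=0.5) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=affect_score, reverse=True)
        parents = population[:pop_size // 2]
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + [mutate(c) for c in children]
    return max(population, key=affect_score)

random.seed(0)
seed_fragment = [0, 2, 4, 5, 7, 5, 4, 2]
best = evolve(seed_fragment)
```

Because the top-scoring parents survive unmutated each generation, the best fitness in the population never decreases, so the returned variant scores at least as well as anything seen during the search.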
Knowledge Based Systems | 2008
Toshihito Sugimoto; Roberto S. Legaspi; Akihiro Ota; Koichi Moriyama; Satoshi Kurihara; Masayuki Numao
This research investigates the use of emotion data, derived from analyzing changes in autonomic nervous system (ANS) activity as revealed by brainwave production, to support the creative music compositional intelligence of an adaptive interface. A relational model of the influence of musical events on the listener's affect is first induced using inductive logic programming paradigms, with the emotion data and musical score features as inputs of the induction task. The components of composition, such as interval and scale, instrumentation, chord progression, and melody, are automatically combined using a genetic algorithm and melodic transformation heuristics that depend on the predictive knowledge and character of the induced model. Of the four targeted basic emotional states, namely stress, joy, sadness, and relaxation, the empirical results reported here show that the system is able to successfully compose tunes that convey one of these affective states.
International Conference on Innovative Techniques and Applications of Artificial Intelligence | 2007
Toshihito Sugimoto; Roberto S. Legaspi; Akihiro Ota; Koichi Moriyama; Satoshi Kurihara; Masayuki Numao
Journal on Multimodal User Interfaces | 2015
Vanus Vachiratamporn; Roberto S. Legaspi; Koichi Moriyama; Ken-ichi Fukui; Masayuki Numao
The trend of multimodal interaction in interactive gaming has grown significantly as demonstrated for example by the wide acceptance of the Wii Remote and the Kinect as tools not just for commercial games but for game research as well. Furthermore, using the player’s affective state as an additional input for game manipulation has opened the realm of affective gaming. In this paper, we analyzed the affective states of players prior to and after witnessing a scary event in a survival horror game. Player affect data were collected through our own affect annotation tool that allows the player to report his affect labels while watching his recorded gameplay and facial expressions. The affect data were then used for training prediction models with the player’s brainwave and heart rate signals, as well as keyboard–mouse activities collected during gameplay. Our results show that (i) players are likely to get more fearful of a scary event when they are in the suspense state and that (ii) heart rate is a good candidate for detecting player affect. Using our results, game designers can maximize the fear level of the player by slowly building tension until the suspense state and showing a scary event after that. We believe that this approach can be applied to the analyses of different sets of emotions in other games as well.
affective computing and intelligent interaction | 2013
Vanus Vachiratamporn; Roberto S. Legaspi; Koichi Moriyama; Masayuki Numao
An upcoming trend in affective gaming is using a player's emotional state to manipulate gameplay. This is an interesting field to explore, especially for the survival horror genre, which excels at producing intense emotions in players. In this research, we analyzed different player affective states prior to (i.e., Neutral, Anxiety, Suspense) and after (i.e., Low-Fear, Mid-Fear, High-Fear) a scary event, using an affect annotation tool to collect player self-reports of their affective states during the game. Brainwave signals, heart rate, and keyboard-mouse activity were also collected to analyze the potential of automatically detecting horror-related affect. Results indicated that players were more likely to experience fear from a scary event when they were in a suspense state than when they were in a neutral state; in the neutral state, players experienced fear only after first experiencing surprise. Heart rate data gave the best result in classifying player affect, achieving up to 90% overall accuracy. This highlights the potential of using player affect in survival horror games to adapt a scary event to evoke more fear from players.
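Heart-rate-based affect classification of the kind described above can be sketched roughly as follows. The window features (mean and variability) and the nearest-centroid classifier are illustrative assumptions, not the pipeline used in the paper.

```python
import statistics

def hr_features(window):
    # Summarize one heart-rate window by its mean level and variability.
    return (statistics.mean(window), statistics.pstdev(window))

def train_centroids(labelled_windows):
    # labelled_windows: {affect_label: [window, ...]}
    # One feature-space centroid per affect label.
    centroids = {}
    for label, windows in labelled_windows.items():
        feats = [hr_features(w) for w in windows]
        centroids[label] = tuple(statistics.mean(f[i] for f in feats)
                                 for i in range(2))
    return centroids

def classify(window, centroids):
    # Assign the label whose centroid is nearest in feature space.
    x = hr_features(window)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lbl])))

# Toy training data: fearful windows show elevated, more variable heart rate.
train = {
    "low-fear":  [[65, 66, 64, 65], [63, 64, 65, 63]],
    "high-fear": [[92, 98, 90, 101], [95, 99, 93, 104]],
}
centroids = train_centroids(train)
label = classify([94, 100, 91, 103], centroids)  # a new gameplay window
```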
conference on human system interactions | 2008
Roberto S. Legaspi; Satoshi Kurihara; Kenichi Fukui; Koichi Moriyama; Masayuki Numao
Empathy is a learnable skill that requires experiential learning and practice of empathic ability for it to improve and mature. In the context of human-system interaction (HSI), this can mean that a system should be permitted to have initial knowledge of empathy provision that is inaccurate or incomplete, with this knowledge evolving and progressing over time through learning from experience. This problem has yet to be defined and dealt with in HSI. This paper is an attempt to state an empathy learning problem for an ambient intelligent system to self-improve its empathic responses based on user affective states.
Computers in Human Behavior | 2008
Roberto S. Legaspi; Raymund Sison; Ken-ichi Fukui; Masayuki Numao
This paper discusses a cluster knowledge-based predictive modeling framework, actualized in a learning agent, that leverages the capability of a clustering algorithm to discover in logged tutorial interactions unknown structures that may exhibit predictive characteristics. The learned cluster models are described along learner-system interaction attributes, i.e., in terms of the learner's knowledge state and behaviour and the system's tutoring actions. The agent utilizes the knowledge of its various clusters to learn predictive models of high-level student information that can be used to support fine-grained individualized adaptation. We investigated using the Self-Organizing Map as the clustering algorithm, and the naive Bayesian classifier and the perceptron as weighting algorithms to learn the predictive models. Despite the difficulty imposed by the experimentation dataset, empirical results show that utilizing cluster knowledge has the potential to improve coarse-grained prediction for more informed and improved pedagogic decision-making.
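The Self-Organizing Map clustering step mentioned above can be sketched in miniature. This one-dimensional SOM and the toy learner-interaction features (knowledge score, hint requests) are illustrative assumptions, not the map configuration or attributes used in the paper.

```python
import math
import random

def train_som(data, n_units=4, epochs=100, lr=0.5, radius=1.0):
    # Prototypes arranged on a 1-D map; neighbours of the best-matching
    # unit are also pulled toward each input, preserving map topology.
    dim = len(data[0])
    random.seed(0)
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs  # learning rate decays over time
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: sum((u - v) ** 2
                                        for u, v in zip(units[i], x)))
            for i in range(n_units):
                # Gaussian neighbourhood in map coordinates.
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    units[i][d] += lr * decay * h * (x[d] - units[i][d])
    return units

def assign(x, units):
    # Cluster assignment = index of the nearest prototype.
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))

# Toy learner-interaction vectors: (knowledge score, hint-request rate).
data = [(0.9, 0.1), (0.85, 0.15), (0.2, 0.8), (0.25, 0.9)]
units = train_som(data)
```

After training, high-knowledge and high-hint learners land on different map units, and those cluster assignments are what a downstream weighting algorithm would consume.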
intelligent tutoring systems | 2004
Roberto S. Legaspi; Raymund Sison; Masayuki Numao
Though various approaches have been used to tackle the task of instructional planning, the compelling need is for ITSs to improve their own plans dynamically. We have developed a Category-based Self-improving Planning Module (CSPM) for a tutor agent that utilizes the knowledge learned from automatically derived student categories to support efficient on-line self-improvement. We have tested and validated the learning capability of CSPM to alter its planning knowledge towards achieving effective plans for various student categories using recorded teaching scenarios.
privacy security risk and trust | 2011
Juan Lorenzo Hagad; Roberto S. Legaspi; Masayuki Numao; Merlin Teodosia Suarez
Research in psychology and social signal processing (SSP) often describes posture as one of the most expressive nonverbal cues. Various studies in psychology particularly link posture mirroring behaviour to rapport. Currently, however, there are few studies that deal with the automatic analysis of postures, and none focuses particularly on its connection with rapport. This study presents a method for automatically predicting rapport in dyadic interactions based on posture and congruence. We begin by constructing a dataset of dyadic interactions and self-reported rapport annotations. Then, we present a simple system for posture classification and use it to detect posture congruence in dyads. Sliding time windows are used to collect posture congruence statistics across video segments. Lastly, various machine learning techniques are tested and used to create rapport models. Among the machine learners tested, Support Vector Machines and Multilayer Perceptrons performed best, at around 71% average accuracy.
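The sliding-window congruence statistics described above can be sketched as follows. The posture label set, window length, and step size are illustrative assumptions; the paper's actual features feed a trained rapport model rather than being read off directly.

```python
def congruence_ratio(postures_a, postures_b):
    # Fraction of frames in which the two partners hold the same posture.
    matches = sum(a == b for a, b in zip(postures_a, postures_b))
    return matches / len(postures_a)

def sliding_congruence(postures_a, postures_b, window=4, step=2):
    # One congruence statistic per overlapping video segment.
    ratios = []
    for start in range(0, len(postures_a) - window + 1, step):
        ratios.append(congruence_ratio(postures_a[start:start + window],
                                       postures_b[start:start + window]))
    return ratios

# Toy per-frame posture labels for the two members of a dyad.
a = ["lean_fwd", "lean_fwd", "arms_crossed", "lean_fwd", "lean_back", "lean_back"]
b = ["lean_fwd", "lean_back", "arms_crossed", "lean_fwd", "lean_back", "lean_fwd"]
ratios = sliding_congruence(a, b)  # one value per window
```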
2010 3rd International Conference on Human-Centric Computing | 2010
Jocelynn Cu; Rafael Cabredo; Gregory Cu; Roberto S. Legaspi; Paul Salvador Inventado; Rhia Trogo; Merlin Teodosia Suarez
Advancement in ambient intelligence is driving the trend towards innovative interaction with computing systems. In this paper, we present our efforts towards the development of the ambient intelligent space TALA, which has the cognitive-science concept of empathy as its architecture's backbone to guide its human-system interactions. We envision TALA to be capable of automatically identifying its occupant, modeling his/her affective states and activities, and providing empathic responses via changes in ambient settings. We present here the empirical results and analyses obtained for the first two of these three capabilities. We constructed face and voice datasets for identity and affect recognition, and an activity dataset. Using a multimodal approach, specifically a decision-level fusion of independent face and voice models, we obtained accuracies of 88% and 79% for identity and affect recognition, respectively. For activity recognition, classification is 80% accurate even without employing any fusion technique.
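Decision-level fusion of the kind described above can be sketched simply: each modality's classifier emits per-class scores, and the fused decision is the class with the highest weighted sum. The weights and score values here are illustrative assumptions, not those used in TALA.

```python
def fuse(face_scores, voice_scores, w_face=0.6, w_voice=0.4):
    # Weighted sum of per-class scores from the two independent models;
    # the fused label is the class with the highest combined score.
    fused = {label: w_face * face_scores.get(label, 0.0)
                    + w_voice * voice_scores.get(label, 0.0)
             for label in set(face_scores) | set(voice_scores)}
    return max(fused, key=fused.get), fused

# Toy per-class scores from the face and voice affect models.
face = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}
voice = {"happy": 0.3, "neutral": 0.6, "sad": 0.1}
label, fused = fuse(face, voice)
```

Fusing at the decision level, rather than concatenating raw features, lets each modality's model be trained and replaced independently, which matches the "independent face and voice models" design described above.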