Javier Gonzalez-Sanchez
Arizona State University
Publications
Featured research published by Javier Gonzalez-Sanchez.
Working IEEE/IFIP Conference on Software Architecture | 2011
Javier Gonzalez-Sanchez; Maria Elena Chavez-Echeagaray; Robert K. Atkinson; Winslow Burleson
The computer's ability to recognize human emotional states from physiological signals is gaining popularity as a way to create empathetic systems such as learning environments, health-care systems, and videogames. Despite this, there are few frameworks, libraries, architectures, or software tools that allow system developers to easily integrate emotion recognition into their software projects. The work reported here offers a first step toward filling this gap, addressing: (a) the modeling of an agent-driven, component-based architecture for multimodal emotion recognition, called ABE, and (b) the use of ABE to implement a multimodal emotion recognition framework that supports third-party systems in becoming empathetic systems.
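The abstract's key idea is an agent coordinating pluggable sensing components. A minimal sketch of that style of architecture (all class names and values here are hypothetical, not the actual ABE code) might look like:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """A pluggable sensing component (EEG, skin conductance, camera, ...)."""
    @abstractmethod
    def read(self) -> dict:
        """Return a channel -> value mapping for the current sample."""

class EEGSensor(Sensor):
    def read(self) -> dict:
        return {"engagement": 0.7}  # stub value standing in for real hardware

class SkinConductanceSensor(Sensor):
    def read(self) -> dict:
        return {"arousal": 0.4}  # stub value standing in for real hardware

class EmotionAgent:
    """Agent that fuses readings from all registered sensors into one state."""
    def __init__(self):
        self.sensors: list[Sensor] = []

    def register(self, sensor: Sensor) -> None:
        self.sensors.append(sensor)

    def infer(self) -> dict:
        state: dict = {}
        for sensor in self.sensors:
            state.update(sensor.read())
        return state

agent = EmotionAgent()
agent.register(EEGSensor())
agent.register(SkinConductanceSensor())
print(agent.infer())  # {'engagement': 0.7, 'arousal': 0.4}
```

Because each sensor hides its hardware behind the same `read()` interface, a third-party system only talks to the agent, which is the integration point the abstract describes.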
Computers & Education | 2014
Lishan Zhang; Kurt VanLehn; Sylvie Girard; Winslow Burleson; Maria Elena Chavez-Echeagaray; Javier Gonzalez-Sanchez; Yoalli Hidalgo-Pontet
Modelling is an important skill to acquire, but it is not an easy one for students to learn. Existing instructional technology has had limited success in teaching modelling. We have applied a recently developed technology, meta-tutoring, to address the important problem of teaching model construction. More specifically, we have developed and evaluated a system that has two parts, a tutor and a meta-tutor. The tutor is a simple step-based tutoring system that can give correct/incorrect feedback on students' steps and can demonstrate steps for students when asked. Because deep modelling requires difficult analyses of the quantitative relationships in a given system, we expected, and found, that students tended to avoid deep modelling by abusing the tutor's help. In order to increase the frequency of deep modelling, we added a meta-tutor that coached students to follow a learning strategy that decomposed the overall modelling problem into a series of “atomic” modelling problems. We conducted three experiments to test the effectiveness of the meta-tutor. The results indicate that students who studied with the meta-tutor did indeed engage in more deep modelling practices. However, when the meta-tutor and tutor were turned off, students tended to revert to shallow modelling. Thus, the next stage of the research is to add an affective agent that will try to persuade students to persist in using the taught strategies even when the meta-tutoring and tutoring have ceased.
Intelligent Tutoring Systems | 2014
Kurt VanLehn; Winslow Burleson; Sylvie Girard; Maria Elena Chavez-Echeagaray; Javier Gonzalez-Sanchez; Yoalli Hidalgo-Pontet; Lishan Zhang
The Affective Meta-Tutoring system comprises (1) a tutor that teaches system dynamics modeling, (2) a meta-tutor that teaches good strategies for learning how to model from the tutor, and (3) an affective learning companion that encourages students to use the learning strategy that the meta-tutor teaches. The affective learning companion's messages are selected by using physiological sensors and log data to determine the student's affective state. Evaluations compared the learning gains of three conditions: the tutor alone; the tutor plus the meta-tutor; and the tutor, meta-tutor, and affective learning companion.
Artificial Intelligence in Education | 2013
Sylvie Girard; Maria Elena Chavez-Echeagaray; Javier Gonzalez-Sanchez; Yoalli Hidalgo-Pontet; Lishan Zhang; Winslow Burleson; Kurt VanLehn
Research in affective computing and educational technology has shown the potential of affective interventions to increase students' self-concept and motivation while learning. Our project aims to investigate whether the use of affective interventions in a meta-cognitive tutor can help students achieve deeper modeling of dynamic systems by being persistent in their use of meta-cognitive strategies during and after tutoring. This article is an experience report on how we designed and implemented the affective intervention. (The meta-tutor is described in a separate paper.) We briefly describe the theories of affect underlying the design and how the agent's affective behavior is defined and implemented. Finally, we present an evaluation of the detector-driven categorization of student behavior that guides the agent's affective interventions, comparing it against a categorization performed by human coders.
User Interface Software and Technology | 2012
Ryan Bernays; Jeremy Mone; Patty Yau; Michael Murcia; Javier Gonzalez-Sanchez; Maria Elena Chavez-Echeagaray; Robert Christopherson; Robert K. Atkinson
Environments able to adjust to the user have long been sought, particularly in the area of Human-Computer Interaction, and environments able to recognize users' emotions and react accordingly are of special interest in Affective Computing. This work presents an adaptable 3D video game, Lost in the Dark: Emotion Adaption, which uses the user's emotions as input to alter and adjust the gaming environment. To achieve this, we used an Emotiv® EPOC headset, an interface capable of reading brain waves, facial expressions, and head motion. We read emotions such as meditation, excitement, and engagement into the game, altering the lighting, music, gates, colors, and other elements to appeal to the user's emotional state. This closes the loop: emotions serve as input, the system adjusts accordingly, and the adjusted environment in turn elicits emotions.
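One iteration of the loop the abstract describes, affective readings in, environment adjustments out, can be sketched as follows; the mapping rules and parameter names are illustrative assumptions, not the game's actual logic:

```python
def adapt_environment(affect: dict) -> dict:
    """Map affective readings (each in 0..1) to game parameters.

    The rules below are hypothetical examples of emotion-driven adaptation.
    """
    settings = {}
    # More excitement -> brighter scene.
    settings["light_level"] = round(0.2 + 0.8 * affect.get("excitement", 0.0), 2)
    # Engaged players get faster-paced music.
    settings["music_tempo"] = "fast" if affect.get("engagement", 0.0) > 0.5 else "slow"
    # Calmer (meditative) players get less visual noise.
    settings["fog_density"] = round(1.0 - affect.get("meditation", 0.0), 2)
    return settings

# One pass of the closed loop: sensed affect in, environment adjustments out.
print(adapt_environment({"excitement": 0.5, "engagement": 0.8, "meditation": 0.3}))
# {'light_level': 0.6, 'music_tempo': 'fast', 'fog_density': 0.7}
```

In the real system this function would run continuously, so the adjusted environment feeds back into the player's next emotional readings.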
International Conference on Advanced Learning Technologies | 2011
Javier Gonzalez-Sanchez; Robert Christopherson; Maria Elena Chavez-Echeagaray; David Gibson; Robert K. Atkinson; Winslow Burleson
The human element is crucial for designing and implementing interactive intelligent systems, and therefore for instructional design. This tutorial provides a description and hands-on demonstration of the detection of affective states, along with a description of the devices, methodologies, and tools necessary for automatic detection of affective states. Automatic detection of affective states requires the computer to sense information that is complex and diverse; it can range from brain-wave signals and biofeedback readings, to face- and gesture-based emotion recognition, to posture and pressure sensing. Obtaining, processing, and understanding that information to create systems that improve learning requires the use of several sensing devices (and their perceiving algorithms) and the application of software tools.
Intelligent Tutoring Systems | 2014
Javier Gonzalez-Sanchez; Maria Elena Chavez-Echeagaray; Kurt VanLehn; Winslow Burleson; Sylvie Girard; Yoalli Hidalgo-Pontet; Lishan Zhang
Intelligent Tutoring Systems (ITSs) constitute an alternative to expert human tutors, providing direct customized instruction and feedback to students. ITSs could positively impact education if adopted on a large scale, but doing so requires tools to enable their mass production. This circumstance is the key motivation for this work. We present a component-based approach to a system architecture for ITSs equipped with meta-tutoring and affective capabilities. We elicited the requirements that such systems should address and created a system architecture that models their structure and behavior to drive development efforts. We discuss our experience applying the architecture in the incremental implementation of a four-year project.
Artificial Intelligence in Education | 2013
Lishan Zhang; Winslow Burleson; Maria Elena Chavez-Echeagaray; Sylvie Girard; Javier Gonzalez-Sanchez; Yoalli Hidalgo-Pontet; Kurt VanLehn
While modeling dynamic systems efficiently is an important skill for a scientist, it is a difficult one to acquire. A simple step-based tutoring system, called AMT, was designed to help students learn how to construct models of dynamic systems using deep modeling practices. In order to increase the frequency of deep modeling and reduce the amount of guessing/gaming, a meta-tutor that coaches students to follow a deep modeling strategy was added to the original modeling tool. This paper presents the results of two experiments investigating the effectiveness of the meta-tutor when compared to the original software. The results indicate that students who studied with the meta-tutor did indeed engage more in deep modeling practices.
Proceedings of the 18th Conference on Pattern Languages of Programs | 2011
Javier Gonzalez-Sanchez; Maria Elena Chavez-Echeagaray; Kurt VanLehn; Winslow Burleson
Intelligent Tutoring Systems are software applications capable of complementing and enhancing the learning process by providing direct customized instruction and feedback to students in various disciplines. Although Intelligent Tutoring Systems can differ widely in their attached knowledge bases and user interfaces (including interaction mechanisms), their behaviors are quite similar. Therefore, it should be possible to establish a common software model for them. A common software model is a step toward moving these systems from proof-of-concept and academic research tools to widely available tools in schools and homes. The work reported here addresses: (1) the use of Design Patterns to create an object-oriented software model for Intelligent Tutoring Systems; (2) our experience using this model in a three-year development project and its impact on facets such as creating a common language among stakeholders, supporting incremental development, and adjusting to a highly shifting development team; and (3) the qualities achieved and trade-offs made.
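As an illustration of the idea, not the paper's actual model, a Strategy-style design pattern can separate the common tutoring behavior from the domain-specific knowledge base that varies between ITSs; all class names and the toy domain rule below are hypothetical:

```python
from abc import ABC, abstractmethod

class KnowledgeBase(ABC):
    """The part that varies per domain; the tutoring loop below does not."""
    @abstractmethod
    def check(self, step: str) -> bool:
        """Return True if the student's step is correct in this domain."""

class AlgebraKB(KnowledgeBase):
    def check(self, step: str) -> bool:
        return step == "x = 4"  # toy rule standing in for a real domain model

class Tutor:
    """Common step-based behavior, parameterized by a domain KnowledgeBase."""
    def __init__(self, kb: KnowledgeBase):
        self.kb = kb

    def feedback(self, step: str) -> str:
        return "correct" if self.kb.check(step) else "incorrect"

tutor = Tutor(AlgebraKB())
print(tutor.feedback("x = 4"))  # correct
print(tutor.feedback("x = 5"))  # incorrect
```

Swapping in a different `KnowledgeBase` (or user interface) yields a new tutor without touching the shared feedback logic, which is the reuse argument the abstract makes.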
European Conference on Pattern Languages of Programs | 2012
Javier Gonzalez-Sanchez; Maria Elena Chavez-Echeagaray; Robert K. Atkinson; Winslow Burleson
There is growing interest in how to leverage information about users' emotions as a means of personalizing the response of computer systems. This is particularly useful for computer-aided learning, health, and entertainment systems. However, there are few architectures, frameworks, libraries, or software tools that allow developers to easily integrate emotion recognition into their software projects. The work reported in this paper offers a way to address this shortcoming by proposing the use of software design patterns to model a multimodal emotion recognition framework. The framework is designed to: (1) integrate existing sensing devices and SDK platforms, (2) include diverse inference algorithms, and (3) correlate measurements from diverse sources. We describe our experience using this model and its impact on facets such as creating a common language among stakeholders, supporting incremental development, and adjusting to a highly shifting development team, as well as the qualities achieved and trade-offs made.
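Requirement (3), correlating measurements from diverse sources, typically means aligning timestamped readings from independent sensors so they can be fused. A minimal sketch under that assumption (the function, stream names, and window size are hypothetical):

```python
def correlate(streams: dict, window: float) -> list:
    """Group timestamped readings from several sources into aligned windows.

    streams maps a source name to a list of (timestamp, value) pairs;
    readings within `window` seconds of a window's start are grouped together.
    """
    events = sorted(
        (t, source, value)
        for source, readings in streams.items()
        for t, value in readings
    )
    aligned, current, start = [], {}, None
    for t, source, value in events:
        if start is None or t - start > window:
            if current:
                aligned.append(current)  # close the previous window
            current, start = {}, t
        current[source] = value
    if current:
        aligned.append(current)
    return aligned

streams = {
    "eeg": [(0.00, 0.7), (1.00, 0.6)],
    "face": [(0.05, "smile"), (1.10, "neutral")],
}
print(correlate(streams, window=0.5))
# [{'eeg': 0.7, 'face': 'smile'}, {'eeg': 0.6, 'face': 'neutral'}]
```

Each aligned group can then be handed to an inference algorithm, matching the framework's pipeline of sensing, correlation, and inference.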