Carol Forsyth
University of Memphis
Publications
Featured research published by Carol Forsyth.
Serious Games and Edutainment Applications | 2011
Keith K. Millis; Carol Forsyth; Heather A. Butler; Patty Wallace; Arthur C. Graesser; Diane F. Halpern
Operation ARIES! is a serious game that teaches critical thinking about scientific inquiry. The player must help to identify aliens on Earth who are intentionally publishing bad research. The game combines aspects of video games and intelligent tutors in which the player holds conversations with animated agents using natural language. The player first takes a training course with a virtual trainee, followed by a module in which the player identifies flaws in research cases. In the third and final module, the player interviews suspected alien scientists on their research. Operation ARIES! is designed for high school seniors and adults.
Current Directions in Psychological Science | 2014
Arthur C. Graesser; Haiying Li; Carol Forsyth
Learning is facilitated by conversational interactions both with human tutors and with computer agents that simulate human tutoring and ideal pedagogical strategies. In this article, we describe some intelligent tutoring systems (e.g., AutoTutor) in which agents interact with students in natural language while being sensitive to their cognitive and emotional states. These systems include one-on-one tutorial dialogues, conversational trialogues in which two agents (a tutor and a “peer”) interact with a human student, and other conversational ensembles in which agents take on different roles. Tutorial conversations with agents have also been incorporated into educational games. These learning environments have been developed for different populations (elementary through high school students, college students, adults with reading difficulties) and different subjects spanning science, technology, engineering, mathematics, reading, writing, and reasoning. This article identifies some of the conversation patterns that are implemented in the dialogues and trialogues.
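One way to picture the trialogue pattern described here is as a scripted sequence of turns among a tutor agent, a peer agent, and the human student. The sketch below is a generic data structure for such a conversation pattern, with hypothetical moves and text; it is an illustration, not the AutoTutor implementation.

    from dataclasses import dataclass

    @dataclass
    class Turn:
        speaker: str   # "tutor", "peer", or "student"
        move: str      # pedagogical move, e.g., "question", "answer", "prompt", "feedback"
        text: str

    # Hypothetical trialogue fragment: the tutor poses a question, the peer agent
    # offers a tentative answer, and the human student is prompted to evaluate it.
    trialogue = [
        Turn("tutor", "question", "Why does this study need a control group?"),
        Turn("peer", "answer", "Maybe so we can compare the treatment with doing nothing?"),
        Turn("tutor", "prompt", "What do you think of the peer's answer? Is anything missing?"),
        Turn("student", "answer", "<free-text response from the human learner>"),
        Turn("tutor", "feedback", "Right. A control group rules out alternative explanations."),
    ]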
Intelligent Tutoring Systems | 2010
Arthur C. Graesser; Anne Britt; Keith K. Millis; Patty Wallace; Diane F. Halpern; Zhiqiang Cai; Kris Kopp; Carol Forsyth
Operation ARIES! is a computer environment that helps students learn about scientific methods and inquiry. The system has several components designed to optimize learning and motivation, such as game features, animated agents, natural language communication, trialogues among agents, an eBook, multimedia, and formative assessment. The present focus is on a Case Study learning module that involves critiquing reports of scientific findings in news media that have flawed scientific methodology. After the human student lists the methodological flaws of a Case Study in natural language, a teacher agent and a peer agent hold a trialogue with the student that evaluates each listed flaw and uncovers additional flaws that the student missed.
Archive | 2018
Arthur C. Graesser; Peter W. Foltz; Yigal Rosen; David Williamson Shaffer; Carol Forsyth
An assessment of Collaborative Problem Solving (CPS) proficiency was developed by an expert group for the PISA 2015 international evaluation of student skills and knowledge. The assessment framework defined CPS skills by crossing three major CPS competencies with four problem solving processes that were adopted from PISA 2012 Complex Problem Solving to form a matrix of 12 specific skills. The three CPS competencies are (1) establishing and maintaining shared understanding, (2) taking appropriate action, and (3) establishing and maintaining team organization. For the assessment, computer-based agents provide the means to assess students by varying group composition and discourse across multiple collaborative situations within a short period of time. Student proficiency is then measured by the extent to which students respond to requests and initiate actions or communications to advance the group goals. This chapter identifies considerations and challenges in the design of a collaborative problem solving assessment for large-scale testing.
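The 12-skill matrix described above is the cross product of the three competencies and the four problem-solving processes. The sketch below simply enumerates the cells; the cell labels and the process names (the commonly cited PISA 2012 problem-solving processes) are assumptions for illustration, not the official framework wording.

    # Illustrative enumeration of the 12-cell CPS skill matrix.
    competencies = [
        "establishing and maintaining shared understanding",
        "taking appropriate action",
        "establishing and maintaining team organization",
    ]
    # Process labels assumed from PISA 2012 problem solving, as commonly cited.
    processes = [
        "exploring and understanding",
        "representing and formulating",
        "planning and executing",
        "monitoring and reflecting",
    ]

    # Crossing the two dimensions yields the 12 specific CPS skills.
    skill_matrix = {
        f"{chr(ord('A') + pi)}{ci + 1}": (proc, comp)
        for pi, proc in enumerate(processes)
        for ci, comp in enumerate(competencies)
    }

    assert len(skill_matrix) == 12   # 4 processes x 3 competencies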
Artificial Intelligence in Education | 2013
Carol Forsyth; Arthur C. Graesser; Breya Walker; Keith K. Millis; Philip I. Pavlik; Diane F. Halpern
Operation ARA is a serious game that teaches scientific inquiry using natural language conversations. Within the context of the game, students completed up to two distinct training modules that teach either didactic or applied conceptual information about research methodology (e.g., validity of dependent variables, need for control groups). An experiment using a 4-condition between-subjects pretest-interaction-posttest design was conducted in which 81 undergraduate college students interacted with varying modules of Operation ARA. The four conditions were designed to test the impact of the two distinct modules on different types of learning measured by multiple-choice, short-answer, and case-based assessment questions. Results revealed significant effects of training condition on learning gains for two of the three question types.
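A pretest-interaction-posttest design of this kind is typically summarized with per-condition learning gains. The sketch below computes raw and normalized gains for hypothetical condition groupings; the condition names, scores, and field names are illustrative, not data from the study.

    from statistics import mean

    # Hypothetical participant records with proportion-correct scores (illustrative only).
    records = [
        {"condition": "didactic-only", "pretest": 0.45, "posttest": 0.70},
        {"condition": "applied-only",  "pretest": 0.50, "posttest": 0.66},
        {"condition": "didactic-only", "pretest": 0.40, "posttest": 0.62},
    ]

    def normalized_gain(pre, post):
        """Hake-style normalized gain: fraction of the possible improvement achieved."""
        return (post - pre) / (1.0 - pre) if pre < 1.0 else 0.0

    def gains_by_condition(rows):
        """Average raw and normalized pretest-to-posttest gains per training condition."""
        grouped = {}
        for row in rows:
            grouped.setdefault(row["condition"], []).append(row)
        return {
            cond: {
                "raw_gain": mean(r["posttest"] - r["pretest"] for r in group),
                "normalized_gain": mean(normalized_gain(r["pretest"], r["posttest"]) for r in group),
            }
            for cond, group in grouped.items()
        }

    print(gains_by_condition(records))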
Artificial Intelligence in Education | 2017
Carol Forsyth; Stephanie Peters; Diego Zapata-Rivera; Jennifer Lentini; Arthur C. Graesser; Zhiqiang Cai
Teachers often have difficulty understanding many aspects of score reports for assessments, which hinders their ability to help students. Computerized environments with natural language conversations may help teachers better understand these reports. We therefore created a tutor on score reports for teachers, based on the AutoTutor conversational framework, which conventionally teaches various topics to students rather than teachers. We conducted a pilot study in which eight teachers interacted with the tutor, providing a total of 98 responses. Results revealed specific ways the framework may be altered for teachers, as well as teachers’ overall favorable attitudes towards the tutor.
Intelligent Tutoring Systems | 2012
Zhiqiang Cai; Carol Forsyth; Arthur C. Graesser; Keith K. Millis
Operation ARIES! is an ITS that uses natural language conversations to teach research methodology to students in a serious game environment. Regular expressions and Latent Semantic Analysis (LSA) are used to evaluate the semantic matches between student contributions, expected good answers, and misconceptions. The current implementation of these algorithms yields accuracy comparable to human ratings of student contributions. The performance of LSA can be further improved by using a domain-specific rather than a generic corpus as the space for interpreting the meaning of student-generated contributions. ARIES can therefore accurately compute the quality of student answers during natural language tutorial conversations.
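The matching strategy described in this abstract can be illustrated with a short sketch: a regular expression flags required phrases, while a cosine similarity over LSA vectors scores how close a contribution is to an expected good answer or a known misconception. This is a minimal illustration under assumed inputs (precomputed LSA vectors, a 0.6 threshold, and hypothetical function names), not the ARIES implementation.

    import re
    import numpy as np

    def phrase_present(student_answer, pattern):
        """Regular-expression check for a required phrase in the student contribution."""
        return re.search(pattern, student_answer, flags=re.IGNORECASE) is not None

    def cosine_similarity(vec_a, vec_b):
        """Cosine between two LSA document vectors."""
        return float(np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))

    def classify_contribution(answer_vec, expectation_vec, misconception_vec, threshold=0.6):
        """Label a contribution as matching the expected good answer, a known
        misconception, or neither, based on its LSA similarity to each."""
        sim_expected = cosine_similarity(answer_vec, expectation_vec)
        sim_misconception = cosine_similarity(answer_vec, misconception_vec)
        if sim_expected >= threshold and sim_expected >= sim_misconception:
            return "expectation_covered"
        if sim_misconception >= threshold:
            return "misconception_detected"
        return "no_match"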
International Conference on Augmented Cognition | 2018
Diego Zapata-Rivera; Priya Kannan; Carol Forsyth; Stephanie Peters; Andrew D. Bryant; Enruo Guo; Rodolfo Long
The effective communication of assessment results to the intended audience is an important issue with implications for accomplishing the goals of an assessment. New assessments can provide score report users with a greater variety of evidence about the test taker’s knowledge, skills, and abilities than has been possible with traditional assessments. Two audience-specific score reporting systems for highly interactive assessments are currently being developed to provide formative feedback for teachers. The first system provides formative feedback to preservice teachers based on their performance teaching a group of virtual student avatars in a simulated classroom. The second system provides teachers with information relevant to how students interact with a conversation-based assessment. These two score reporting systems offer good examples of the types of communication and interaction issues that arise in the development of new types of assessments. In this paper, we describe these two reporting systems, discuss commonalities between the two systems, focusing particularly on the design and evaluation processes, and elaborate on the implications for future work in this area.
Artificial Intelligence in Education | 2017
Carol Forsyth; G. Tanner Jackson; Delano Hebert; Blair Lehman; Pat Inglese; Lindsay D. Grace
Game-based assessment (GBA) is a new frontier in the assessment industry. However, as with serious games, it will likely be important to find an optimal balance between making the game “fun” and achieving the educational goals. We created two minigames to assess students’ knowledge of argumentation skills. We conducted an iterative counter-balanced pre-survey-interaction-post-survey study with 124 students. We discovered that game presentation sequence and game perceptions are related to performance in two games with varying numbers of game features and varying alignment to educational content. Specifically, understanding how to play the games is related to performance when users start with a familiar environment and move to one with more game features, whereas enjoyment is related to performance when users start with a more gamified experience before moving to a familiar environment.
Artificial Intelligence in Education | 2015
Carol Forsyth; Arthur C. Graesser; Andrew Olney; Keith K. Millis; Breya Walker; Zhiqiang Cai
The current study investigated teacher emotions, student emotions, and discourse features in relation to learning in a serious game. The experiment consisted of 48 subjects participating in a 4-condition within-subjects counter-balanced pretest-interaction-posttest design. Participants interacted with a serious game teaching research methodology through natural language conversations between the human student and two artificial pedagogical agents. The discourse of the artificial pedagogical agents was manipulated to evoke student affective states. Student emotion was measured via affect grids, and discourse features were measured with computational linguistics techniques. Results indicated that learners’ arousal levels impacted learning and that language use was correlated with learning.
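The analysis relating arousal and language use to learning can be illustrated with a small correlation sketch. The affect grid yields valence and arousal ratings; the values below, and the choice of words-per-turn as a discourse feature, are hypothetical and not data from the study.

    import numpy as np

    # Hypothetical per-participant measures (illustrative only):
    # affect-grid arousal rating, a simple discourse feature, and a learning gain.
    arousal = np.array([3, 6, 5, 7, 4, 8])
    words_per_turn = np.array([12.0, 18.5, 15.0, 22.0, 10.5, 25.0])
    learning_gain = np.array([0.10, 0.25, 0.18, 0.30, 0.08, 0.35])

    def pearson_r(x, y):
        """Pearson correlation between two measures."""
        return float(np.corrcoef(x, y)[0, 1])

    print("arousal vs. learning gain:        r =", round(pearson_r(arousal, learning_gain), 2))
    print("words per turn vs. learning gain: r =", round(pearson_r(words_per_turn, learning_gain), 2))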