
Publication


Featured research published by Brent Morgan.


Human Factors | 2013

Individual Differences in Multitasking Ability and Adaptability

Brent Morgan; Sidney K. D'Mello; Robert G. Abbott; Gabriel A. Radvansky; Michael Joseph Haass; Andrea K. Tamplin

Objective: The aim of this study was to identify the cognitive factors that predict ability and adaptability during multitasking with a flight simulator. Background: Multitasking has become increasingly prevalent as most professions require individuals to perform multiple tasks simultaneously. Considerable research has been undertaken to identify the characteristics of people (i.e., individual differences) that predict multitasking ability. Although working memory is a reliable predictor of general multitasking ability (i.e., performance in normal conditions), there is the question of whether different cognitive faculties are needed to rapidly respond to changing task demands (adaptability). Method: Participants first completed a battery of cognitive individual differences tests followed by multitasking sessions with a flight simulator. After a baseline condition, difficulty of the flight simulator was incrementally increased via four experimental manipulations, and performance metrics were collected to assess multitasking ability and adaptability. Results: Scholastic aptitude and working memory predicted general multitasking ability (i.e., performance at baseline difficulty), but spatial manipulation (in conjunction with working memory) was a major predictor of adaptability (performance in difficult conditions after accounting for baseline performance). Conclusion: Multitasking ability and adaptability may be overlapping but separate constructs that draw on overlapping (but not identical) sets of cognitive abilities. Application: The results of this study are applicable to practitioners and researchers in human factors to assess multitasking performance in real-world contexts and with realistic task constraints. We also present a framework for conceptualizing multitasking adaptability on the basis of five adaptability profiles derived from performance on tasks with consistent versus increased difficulty.


International Conference on Human-Computer Interaction | 2013

Automating the Mentor in a Serious Game: A Discourse Analysis Using Finite State Machines.

Brent Morgan; Fazel Keshtkar; Arthur Graesser; David Williamson Shaffer

Serious games are increasingly becoming a popular, effective supplement to standard classroom instruction [1]. Similar to recreational games, multi-party chat is a standard method of communication in serious games. As players collaborate in a serious game, mentoring is often needed to facilitate progress and learning [2, 3, 4]. This role is almost exclusively provided by a human at the present time. However, the cost of training a human mentor represents a critical barrier to the widespread use of collaborative epistemic games. Although great strides have been made in automating one-on-one tutorial dialogues [5, 6], multi-party chat presents a significant challenge for natural language processing. The goal of this research, then, is to provide a preliminary understanding of player-mentor conversations in the context of an epistemic game, Land Science [7].


International Journal of STEM Education | 2018

ElectronixTutor: An Intelligent Tutoring System with Multiple Learning Resources for Electronics.

Arthur C. Graesser; Xiangen Hu; Benjamin D. Nye; Kurt VanLehn; Rohit Kumar; Cristina Heffernan; Neil T. Heffernan; Beverly Park Woolf; Andrew Olney; Vasile Rus; Frank Andrasik; Philip I. Pavlik; Zhiqiang Cai; Jon Wetzel; Brent Morgan; Andrew J. Hampton; Anne Lippert; Lijia Wang; Qinyu Cheng; Joseph E. Vinson; Craig Kelly; Cadarrius McGlown; Charvi A. Majmudar; Bashir I. Morshed; Whitney O. Baer

Background: The Office of Naval Research (ONR) organized a STEM Challenge initiative to explore how intelligent tutoring systems (ITSs) can be developed in a reasonable amount of time to help students learn STEM topics. This competitive initiative sponsored four teams that separately developed systems that covered topics in mathematics, electronics, and dynamical systems. After the teams shared their progress at the conclusion of an 18-month period, the ONR decided to fund a joint applied project in the Navy that integrated those systems on the subject matter of electronic circuits. The University of Memphis took the lead in integrating these systems in an intelligent tutoring system called ElectronixTutor. This article describes the architecture of ElectronixTutor, the learning resources that feed into it, and the empirical findings that support the effectiveness of its constituent ITS learning resources.

Results: A fully integrated ElectronixTutor was developed that included several intelligent learning resources (AutoTutor, Dragoon, LearnForm, ASSISTments, BEETLE-II) as well as texts and videos. The architecture includes a student model that has (a) a common set of knowledge components on electronic circuits to which individual learning resources contribute and (b) a record of student performance on the knowledge components as well as a set of cognitive and non-cognitive attributes. There is a recommender system that uses the student model to guide the student on a small set of sensible next steps in their training. The individual components of ElectronixTutor have shown learning gains in previous decades of research.

Conclusions: The ElectronixTutor system successfully combines multiple empirically based components into one system to teach a STEM topic (electronics) to students. A prototype of this intelligent tutoring system has been developed and is currently being tested.
ElectronixTutor is unique in its assembling a group of well-tested intelligent tutoring systems into a single integrated learning environment.
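The student model and recommender described in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not ElectronixTutor's actual implementation: the knowledge-component names, the mastery-update rule, and the "recommend the weakest components" policy are all illustrative assumptions.

```python
# Hypothetical sketch of a knowledge-component student model with a
# simple recommender, loosely inspired by the architecture described
# above. All names, update rules, and thresholds are assumptions.

class StudentModel:
    def __init__(self, knowledge_components):
        # Mastery estimate per knowledge component, initialized to 0.0.
        self.mastery = {kc: 0.0 for kc in knowledge_components}

    def record(self, kc, correct, rate=0.3):
        # Simple exponential update toward 1 (correct) or 0 (incorrect).
        target = 1.0 if correct else 0.0
        self.mastery[kc] += rate * (target - self.mastery[kc])

def recommend(model, resources, k=3):
    # Suggest resources covering the least-mastered components first.
    weakest = sorted(model.mastery, key=model.mastery.get)
    picks = []
    for kc in weakest:
        picks.extend(r for r in resources.get(kc, []) if r not in picks)
    return picks[:k]

model = StudentModel(["ohms_law", "series_circuits", "rc_filters"])
model.record("ohms_law", correct=True)
model.record("ohms_law", correct=True)
model.record("rc_filters", correct=False)

# Illustrative mapping from knowledge components to learning resources.
resources = {
    "series_circuits": ["AutoTutor: series circuits"],
    "rc_filters": ["Dragoon: RC filter model"],
    "ohms_law": ["ASSISTments: Ohm's law drill"],
}
print(recommend(model, resources))
```

The point of the sketch is the division of labor the article describes: learning resources report performance into a shared set of knowledge components, and the recommender reads only the student model, not the individual resources.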


Acta Psychologica | 2016

The influence of positive vs. negative affect on multitasking.

Brent Morgan; Sidney K. D'Mello

Considerable research has investigated how affect influences performance on a single task; however, little is known about the role of affect in complex multitasking environments. In this paper, 178 participants multitasked in a synthetic work environment (SYNWORK) consisting of memory, visual monitoring, auditory monitoring, and math tasks. Participants multitasked for a 3-min baseline phase (MT1), following which they were randomly assigned to watch one of three affect-induction videos: positive, neutral, or negative. Participants then resumed multitasking for two additional critical phases (MT2, MT3; 3 min each). In MT2, performance of the positive and neutral conditions was statistically equivalent and higher than the negative condition. In MT3, the positive condition performed better than the negative condition, with the neutral condition not significantly different from the other two. The differences in overall multitasking scores were largely driven by errors in the Math task (the most cognitively demanding task) in MT2 and the Memory task in MT3. These findings have implications for how positive and negative affective states influence processing in a cognitively demanding multitasking environment.


Intelligent Tutoring Systems | 2014

Question Asking During Collaborative Problem Solving in an Online Game Environment

Haiying Li; Ying Duan; Danielle N. Clewley; Brent Morgan; Arthur C. Graesser; David Williamson Shaffer; Jenny Saucerman

This paper investigated the frequency and depth of questions in terms of both task difficulty and game phase as players collaboratively solved problems in an online game environment, Land Science. The results showed that the frequency of questions increased with task difficulty and with unfamiliar tasks across game phases. We also found that players asked many more shallow questions than intermediate and deep questions, but more deep questions than intermediate questions.


Artificial Intelligence in Education | 2011

Typed versus spoken conversations in a multi-party epistemic game

Brent Morgan; Candice Burkett; Elizabeth Bagley; Arthur C. Graesser

Multi-party chat is a standard feature of popular online games and is increasingly available in collaborative learning environments. This paper addresses the differences between spoken and typed conversations as high school students interacted with the epistemic game Urban Science. Coh-Metrix analyses showed that speech was associated with narrativity and cohesion whereas typed input was associated with syntactic simplicity and word concreteness. These findings suggest that the modality in group communication should be considered.


Computers in Human Behavior | 2017

Assessment with computer agents that engage in conversational dialogues and trialogues with learners

Arthur C. Graesser; Zhiqiang Cai; Brent Morgan; Lijia Wang

This article describes conversation-based assessments with computer agents that interact with humans through chat, talking heads, or embodied animated avatars. Some of these agents perform actions, interact with multimedia, hold conversations with humans in natural language, and adaptively respond to a person's actions, verbal contributions, and emotions. Data are logged throughout the interactions in order to assess the individual's mastery of subject matters, skills, and proficiencies on both cognitive and noncognitive characteristics. There are different agent-based designs that focus on learning and assessment. Dialogues occur between one agent and one human, as in the case of intelligent tutoring systems. Three-party conversations, called trialogues, involve two agents interacting with a human. The two agents can take on different roles (such as tutors and peers), model actions and social interactions, stage arguments, solicit help from the human, and collaboratively solve problems. Examples of assessment with these agent-based environments are presented in the context of intelligent tutoring, educational games, and interventions to help struggling adult readers. Most of these involve assessment at varying grain sizes to guide the intelligent interaction, but conversation-based assessment with agents is also currently being used in high-stakes assessments.

Highlights: Computer agents interact with humans in dialogues and trialogues with two agents. Intelligent agents reliably analyze many but not all aspects of natural language. Human tutoring has systematic discourse patterns that can be simulated by agents. Intelligent tutoring systems with agents improve learning of difficult content. Computer agents are used in conversation-based formative and summative assessments.


Quarterly Journal of Experimental Psychology | 2015

The fluid events model: Predicting continuous task action change

Gabriel A. Radvansky; Sidney K. D'Mello; Robert G. Abbott; Brent Morgan; Karl Fike; Andrea K. Tamplin

The fluid events model is a behavioural model aimed at predicting the likelihood that people will change their actions in ongoing, interactive events. From this view, not only are people responding to aspects of the environment, but they are also basing responses on prior experiences. The fluid events model is an attempt to predict the likelihood that people will shift the type of actions taken within an event on a trial-by-trial basis, taking into account both event structure and experience-based factors. The event-structure factors are: (a) changes in event structure, (b) suitability of the current action to the event, and (c) time on task. The experience-based factors are: (a) whether a person has recently shifted actions, (b) how often a person has shifted actions, (c) whether there has been a dip in performance, and (d) a person's propensity to switch actions within the current task. The model was assessed using data from a series of tasks in which a person was producing responses to events. These were two stimulus-driven figure-drawing studies, a conceptually driven decision-making study, and a probability matching study using a standard laboratory task. This analysis predicted trial-by-trial action switching in a person-independent manner with an average accuracy of 70%, which reflects a 34% improvement above chance. In addition, correlations between predicted and actual overall switch rates were remarkably high (mean r = .98). The experience-based factors played a more major role than the event-structure factors, but this might be attributable to the nature of the tasks.
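One way to picture how the seven factors listed in the abstract could combine into a per-trial switch prediction is a simple logistic combination. This is a sketch under assumptions, not the authors' actual model: the feature values, weights, and bias below are hypothetical, and only the factor names come from the abstract.

```python
# Illustrative sketch (not the published fluid events model): combining
# the event-structure and experience-based factors listed above into a
# logistic prediction of whether a person switches actions on a trial.
# All feature values and weights are hypothetical.

import math

def switch_probability(features, weights, bias=0.0):
    # Weighted sum of factor values passed through a logistic function.
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical trial: event structure just changed, performance dipped,
# and the person switched actions recently.
trial = {
    "event_changed": 1.0,       # event-structure factor (a)
    "action_suitability": 0.4,  # event-structure factor (b)
    "time_on_task": 0.6,        # event-structure factor (c)
    "recent_switch": 1.0,       # experience factor (a)
    "switch_frequency": 0.3,    # experience factor (b)
    "performance_dip": 1.0,     # experience factor (c)
    "switch_propensity": 0.5,   # experience factor (d)
}
weights = {k: w for k, w in zip(trial, [0.8, -1.2, 0.2, -0.5, 1.0, 0.9, 1.1])}

p = switch_probability(trial, weights, bias=-1.0)
print(f"Predicted switch probability: {p:.2f}")
```

In the published work the prediction was evaluated trial by trial against observed switches, which is how the 70% person-independent accuracy figure above would be computed.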


Intelligent Tutoring Systems | 2012

Using state transition networks to analyze multi-party conversations in a serious game

Brent Morgan; Fazel Keshtkar; Ying Duan; Padraig Nash; Arthur C. Graesser

As players interact in a serious game, mentoring is often needed to facilitate progress and learning. Although human mentors are the current standard, they present logistical difficulties. Automating the mentor's role is a difficult task, however, especially for multi-party collaborative learning environments. In order to better understand the conversational demands of a mentor, this paper investigates the dynamics and linguistic features of multi-party chat in the context of an online epistemic game, Urban Science. We categorized thousands of player and mentor contributions into eight different speech acts and analyzed the sequence of dialogue moves using State Transition Networks. The results indicate that dialogue transitions are relatively stable with respect to gameplay goals; however, task-oriented stages emphasize mentor-player scaffolding, whereas discussion-oriented stages feature player-player collaboration.
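The core of a state transition network analysis like the one described above is counting which coded dialogue move follows which, then normalizing into transition probabilities. A minimal sketch, assuming a hypothetical set of speech-act labels and a fabricated chat sequence (the paper's eight categories are not listed in this abstract):

```python
# Minimal sketch of building a state transition network from coded
# dialogue moves, in the spirit of the analysis described above.
# The speech-act labels and the sample sequence are hypothetical.

from collections import Counter, defaultdict

def transition_probabilities(moves):
    # Count adjacent pairs of moves, then normalize each row so the
    # outgoing probabilities from each state sum to 1.
    counts = defaultdict(Counter)
    for prev, nxt in zip(moves, moves[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {nxt: n / sum(row.values()) for nxt, n in row.items()}
        for prev, row in counts.items()
    }

coded_chat = ["question", "answer", "directive", "answer",
              "question", "answer", "feedback"]
probs = transition_probabilities(coded_chat)
print(probs["question"])  # both questions here are followed by answers
```

Comparing such transition tables across task-oriented and discussion-oriented game stages is what reveals the scaffolding versus collaboration patterns the paper reports.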


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

The Effect of Positive vs. Negative Emotion on Multitasking

Brent Morgan; Sidney D’Mello

Emotions have been shown to affect cognition and performance on a multitude of individual tasks; however, people increasingly choose (or are required) to perform multiple tasks simultaneously (multitask). How, then, do emotions affect multitasking performance? This question was assessed in an experiment wherein participants first multitasked in a Baseline phase, watched a video designed to induce a positive, neutral, or negative emotion, and then resumed multitasking for two additional phases. The results indicated that both the positive and neutral video conditions were superior to the negative condition; however, a marginally significant interaction indicated that the neutral condition was equivalent to the negative condition at the final multitasking phase. We conclude by discussing the theoretical and applied aspects of these findings.

Collaboration


Dive into Brent Morgan's collaborations.

Top Co-Authors
Robert G. Abbott

Sandia National Laboratories


Michael Joseph Haass

Sandia National Laboratories
