
Publication


Featured research published by Nia Dowell.


Intelligent Tutoring Systems | 2014

What Works: Creating Adaptive and Intelligent Systems for Collaborative Learning Support

Nia Dowell; Whitney L. Cade; Yla R. Tausczik; James W. Pennebaker; Arthur C. Graesser

An emerging trend in classrooms is the use of collaborative learning environments that promote lively exchanges between learners in order to facilitate learning. This paper explored the possibility of using discourse features to predict student and group performance during collaborative learning interactions. We investigated the linguistic patterns of group chats within an online collaborative learning exercise on five discourse dimensions using an automated linguistic facility, Coh-Metrix. The results indicated that students who engaged in deeper cohesive integration and generated more complicated syntactic structures performed significantly better. The overall group-level results indicated that collaborative groups who engaged in deeper cohesive and expository-style interactions performed significantly better on posttests. Although students do not directly express knowledge construction and cognitive processes, our results indicate that these states can be monitored by analyzing language and discourse. Implications are discussed regarding computer-supported collaborative learning and ITSs to facilitate productive communication in collaborative learning environments.
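
As a rough illustration of the kind of analysis described above, the sketch below relates hypothetical per-student discourse scores (stand-ins for Coh-Metrix dimensions such as cohesion and syntactic complexity) to posttest performance with an ordinary least-squares regression. This is not the authors' pipeline; all column names and values are invented.

```python
# Minimal sketch (not the paper's pipeline): relate per-student discourse
# features to posttest scores with an ordinary least-squares model.
# The feature columns and values below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

chats = pd.DataFrame({
    "cohesion":          [0.42, 0.55, 0.31, 0.60, 0.48],  # deep cohesive integration
    "syntax_complexity": [0.30, 0.45, 0.22, 0.51, 0.40],  # syntactic structure score
    "posttest":          [0.58, 0.74, 0.49, 0.81, 0.66],  # proportion correct
})

model = smf.ols("posttest ~ cohesion + syntax_complexity", data=chats).fit()
print(model.summary())  # positive coefficients would mirror the reported pattern
```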


Journal of Abnormal Psychology | 2016

Participant, rater, and computer measures of coherence in posttraumatic stress disorder.

David C. Rubin; Samantha A. Deffler; Christin M. Ogle; Nia Dowell; Arthur C. Graesser; Jean C. Beckham

We examined the coherence of trauma memories in a trauma-exposed community sample of 30 adults with and 30 without posttraumatic stress disorder. The groups had similar categories of traumas and were matched on multiple factors that could affect the coherence of memories. We compared the transcribed oral trauma memories of participants with their most important and most positive memories. A comprehensive set of 28 measures of coherence, including 3 ratings by the participants, 7 ratings by outside raters, and 18 computer-scored measures, provided a variety of approaches to defining and measuring coherence. A multivariate analysis of variance indicated differences in coherence among the trauma, important, and positive memories, but not between the diagnostic groups or their interaction with these memory types. Most differences were small in magnitude; in some cases, the trauma memories were more, rather than less, coherent than the control memories. Where differences existed, the results agreed with the existing literature, suggesting that factors other than the incoherence of trauma memories are most likely to be central to the maintenance of posttraumatic stress disorder and thus its treatment.
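
The design described above (memory type crossed with diagnostic group, analyzed multivariately) can be sketched with statsmodels' MANOVA as follows; the two coherence measures and all values are hypothetical placeholders, not the paper's 28 measures.

```python
# Illustrative sketch only: a MANOVA over two hypothetical coherence measures
# by memory type, diagnostic group, and their interaction.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

data = pd.DataFrame({
    "memory":    ["trauma", "important", "positive"] * 4,
    "group":     ["PTSD"] * 6 + ["control"] * 6,
    "coh_rater": [3.1, 3.6, 3.4, 2.9, 3.5, 3.3, 3.2, 3.7, 3.5, 3.0, 3.6, 3.4],
    "coh_comp":  [0.48, 0.66, 0.55, 0.51, 0.58, 0.62, 0.45, 0.69, 0.57, 0.49, 0.60, 0.53],
})

fit = MANOVA.from_formula("coh_rater + coh_comp ~ memory * group", data=data)
# An effect of memory type, but no group or interaction effect, would match the paper.
print(fit.mv_test())
```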


IEEE Transactions on Affective Computing | 2013

Unimodal and Multimodal Human Perception of Naturalistic Non-Basic Affective States during Human-Computer Interactions

Sidney K. D'Mello; Nia Dowell; Arthur C. Graesser

The present study investigated unimodal and multimodal emotion perception by humans, with an eye for applying the findings towards automated affect detection. The focus was on assessing the reliability by which untrained human observers could detect naturalistic expressions of non-basic affective states (boredom, engagement/flow, confusion, frustration, and neutral) from previously recorded videos of learners interacting with a computer tutor. The experiment manipulated three modalities to produce seven conditions: face, speech, context, face+speech, face+context, speech+context, face+speech+context. Agreement between two observers (OO) and between an observer and a learner (LO) were computed and analyzed with mixed-effects logistic regression models. The results indicated that agreement was generally low (kappas ranged from .030 to .183), but, with one exception, was greater than chance. Comparisons of overall agreement (across affective states) between the unimodal and multimodal conditions supported redundancy effects between modalities, but there were superadditive, additive, redundant, and inhibitory effects when affective states were individually considered. There was both convergence and divergence of patterns in the OO and LO data sets; however, LO models yielded lower agreement but higher multimodal effects compared to OO models. Implications of the findings for automated affect detection are discussed.
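
The core agreement statistic in the study is Cohen's kappa between pairs of judges. A minimal sketch, computed with scikit-learn on invented affect labels from two hypothetical observers:

```python
# Sketch of the agreement measure (Cohen's kappa) on hypothetical judgments.
from sklearn.metrics import cohen_kappa_score

observer_1 = ["boredom", "flow", "confusion", "neutral", "flow", "frustration"]
observer_2 = ["boredom", "flow", "neutral",   "neutral", "flow", "confusion"]

kappa = cohen_kappa_score(observer_1, observer_2)
print(f"observer-observer kappa: {kappa:.3f}")  # the paper reports kappas of .030 to .183
```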


Review of Educational Research | 2018

How Do We Model Learning at Scale? A Systematic Review of Research on MOOCs

Srećko Joksimović; Oleksandra Poquet; Vitomir Kovanović; Nia Dowell; Caitlin Mills; Dragan Gasevic; Shane Dawson; Arthur C. Graesser; Christopher Brooks

Despite a surge of empirical work on student participation in online learning environments, the causal links between learning-related factors and processes and the desired learning outcomes remain unexplored. This study presents a systematic literature review of approaches to modeling learning in Massive Open Online Courses, offering an analysis of the learning-related constructs used in the prediction and measurement of student engagement and learning outcomes. Based on our literature review, we identify current gaps in the research, including a lack of solid frameworks to explain learning in open online settings. Finally, we put forward a novel framework suitable for open online contexts based on a well-established model of student engagement. Our model is intended to guide future work studying the association between contextual factors (i.e., demographic, classroom, and individual needs), student engagement (i.e., academic, behavioral, cognitive, and affective engagement metrics), and learning outcomes (i.e., academic, social, and affective). The proposed model affords further interstudy comparisons as well as comparative studies with more traditional education models.


Archive | 2017

Assessing Collaborative Problem Solving Through Conversational Agents

Arthur C. Graesser; Nia Dowell; Danielle N. Clewley

Communication is a core component of collaborative problem solving and its assessment. Advances in computational linguistics and discourse science have made it possible to analyze conversation on multiple levels of language and discourse in different educational settings. Most of these advances have focused on tutoring contexts in which a student and a tutor collaboratively solve problems, but there has also been some progress in analyzing conversations in small groups. Naturalistic patterns of collaboration in one-on-one tutoring and in small groups have also been compared with theoretically ideal patterns. Conversation-based assessment is currently being applied to measure various competencies, such as literacy, mathematics, science, reasoning, and collaborative problem solving. One conversation-based assessment approach is to design computerized conversational agents that interact with the human in natural language. This chapter reports research that uses one or more agents to assess human competencies while the humans and agents collaboratively solve problems or answer difficult questions. AutoTutor holds a collaborative dialogue in natural language and concurrently assesses student performance. The agent converses through a variety of dialogue moves: questions, short feedback, pumps for information, hints, prompts for specific words, corrections, assertions, summaries, and requests for summaries. Trialogues are conversations between the human and two computer agents that play different roles (e.g., peer, tutor, expert). Trialogues are being applied in both training and assessment contexts on particular skills and competencies. Agents are currently being developed at Educational Testing Service for assessments of individuals on various competencies, including the Programme for International Student Assessment 2015 assessment of collaborative problem solving.
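
As a toy illustration only (not AutoTutor's actual dialogue manager), the sketch below maps an assessed answer quality onto one of the dialogue-move types listed above; the thresholds and the scoring scale are assumptions.

```python
# Toy sketch: choose the tutor agent's next dialogue move from a 0-1
# assessment of the learner's last contribution. Thresholds are hypothetical.
def next_move(answer_quality: float) -> str:
    if answer_quality > 0.8:
        return "short feedback: positive"   # confirm and move on
    if answer_quality > 0.5:
        return "pump for information"       # "Can you say more?"
    if answer_quality > 0.2:
        return "hint"                       # nudge toward the expected content
    return "prompt for specific word"       # elicit a missing key term

for score in (0.9, 0.6, 0.3, 0.1):
    print(score, "->", next_move(score))
```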


Artificial Intelligence in Education | 2011

Does topic matter? Topic influences on linguistic and rubric-based evaluation of writing

Nia Dowell; Sidney K. D'Mello; Caitlin Mills; Arthur C. Graesser

Although writing is an integral part of education, there is limited knowledge on how assigned topics influence writing quality, both in terms of micro-level linguistic features and macro-level subjective evaluations by human judges. We addressed this question by conducting a study in which 44 students wrote short essays on three different types of topics: traditional academic topics such as those used in standardized tests, personal emotional experiences, and socially charged topics. The essays were automatically scored on five linguistic dimensions (narrativity, situation model cohesion, referential cohesion, syntactic complexity, and word abstractness). They were also manually scored by human judges based on a rubric focusing on macro-level dimensions (i.e., introduction, thesis, and conclusion). The results indicated that topic-related differences were observed on both the rubric-based and linguistic assessments, although there were weak relationships between these two measures.
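
A minimal sketch of the comparison between the automated linguistic scores and the rubric scores, computed per topic; the single dimension shown (narrativity) and all values are hypothetical.

```python
# Per-topic correlation between one automated linguistic dimension and the
# rubric score; data are invented placeholders.
import pandas as pd

essays = pd.DataFrame({
    "topic":       ["academic", "emotional", "social"] * 3,
    "narrativity": [0.35, 0.72, 0.55, 0.30, 0.68, 0.52, 0.40, 0.75, 0.58],
    "rubric":      [4.0, 3.0, 3.5, 4.5, 2.5, 3.0, 3.5, 3.5, 4.0],
})

print(essays.groupby("topic")[["narrativity", "rubric"]].corr())
```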


Learning Analytics and Knowledge | 2018

Are MOOC forums changing?

Oleksandra Poquet; Nia Dowell; Christopher Brooks; Shane Dawson

There has been a growing trend in higher education towards increased use and adoption of Massive Open Online Courses (MOOCs). Despite this interest in learning at scale, limited work has compared MOOC activity across subsequent course offerings. In this study, we explore forum activity in ten iterations of the same MOOC. Our results suggest that participation in MOOC forums has changed over the past four years of delivery. First, overall participation in MOOC forums has decreased. Second, in later iterations, cohorts of more committed forum users start to resemble formal online courses in size (67 > n > 36). However, despite the smaller groups of learners, who should find it easier to form connections with one another, our analysis did not reveal the expected increase in the quality of social activity. Instead, MOOC forums evolved into smaller on-task question and answer (Q&A) spaces, not capitalizing on the opportunities for social learning. We discuss practical and research implications of such changes.
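
The descriptive comparison across course iterations could be sketched as below, counting total posts and "committed" posters per iteration from a hypothetical post log; the commitment threshold here is illustrative, not the paper's definition.

```python
# Count posts and committed posters per course iteration (hypothetical log).
import pandas as pd

posts = pd.DataFrame({
    "iteration": [1, 1, 1, 1, 2, 2, 2, 3, 3, 3],
    "user":      ["a", "a", "b", "c", "a", "d", "d", "e", "e", "e"],
})

per_user = posts.groupby(["iteration", "user"]).size().rename("n_posts").reset_index()
summary = per_user.groupby("iteration").agg(
    total_posts=("n_posts", "sum"),
    committed_users=("n_posts", lambda s: (s >= 2).sum()),  # threshold of 2 posts is illustrative
)
print(summary)
```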


Artificial Intelligence in Education | 2018

Temporal Changes in Affiliation and Emotion in MOOC Discussion Forum Discourse

Jing Hu; Nia Dowell; Christopher Brooks; Wenfei Yan

Studies have shown the discourse constructs of affiliation and emotion to be critical factors influencing learning performance and outcomes in both traditional environments and Massive Open Online Courses (MOOCs). However, there is limited research investigating the affiliation and emotions of MOOC learners and how these factors develop over time. To gain a deeper understanding of the MOOC population and to facilitate MOOC environment design, we addressed this gap by conducting a longitudinal analysis of changes in affiliation and emotion expressed in the discussion forums of five Coursera courses that were offered numerous times from 2012 to 2015. We demonstrate that, for most courses, the discussion forums reflected decreasing affiliation and increasing negative emotion over the four years, with no significant overall change in positive emotion.
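
A minimal sketch of the longitudinal trend test, assuming one affiliation score per course offering (all values invented); a negative slope over time would mirror the reported decrease.

```python
# Fit a linear trend of a forum-level affiliation score against offering year.
import pandas as pd
import statsmodels.formula.api as smf

offerings = pd.DataFrame({
    "year":        [2012, 2013, 2013, 2014, 2014, 2015],
    "affiliation": [0.61, 0.58, 0.56, 0.52, 0.50, 0.47],  # hypothetical scores
})

trend = smf.ols("affiliation ~ year", data=offerings).fit()
print(trend.params["year"], trend.pvalues["year"])  # slope and its p-value
```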


International Interactions | 2018

Leader Language and Political Survival Strategies

Leah Windsor; Nia Dowell; Alistair Windsor; John Kaltner

Authoritarian leaders’ language provides clues to their survival strategies for remaining in office. This line of inquiry fits within an emerging literature that refocuses attention from state-level features to the dynamic role that individual heads of state and government play in international relations, especially in authoritarian regimes. The burgeoning text-as-data field can be used to deepen our understanding of the nuances of leader survival and political choices; for example, language can serve as a leading indicator of leader approval, which itself is a good predictor of leader survival. In this paper, we apply computational linguistics tools to an authoritarian leader corpus consisting of 102 speeches from nine leaders of countries across the Middle East and North Africa between 2009 and 2012. We find systematic differences in the language of these leaders, which help advance a more broadly applicable theory of authoritarian leader language and tenure.
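
In the spirit of the text-as-data approach described, the sketch below extracts simple per-speech features (token count and first-person pronoun rate) from a tiny invented corpus and averages them by leader; it is illustrative only and not the authors' feature set.

```python
# Extract simple language features per speech and compare leaders (toy corpus).
import re
import pandas as pd

corpus = [
    {"leader": "A", "text": "We will stand together and we will prevail."},
    {"leader": "A", "text": "I have always served this nation faithfully."},
    {"leader": "B", "text": "The enemies of the state will be punished."},
]

def features(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    first_person = sum(t in {"i", "we", "my", "our", "us", "me"} for t in tokens)
    return {"n_tokens": len(tokens), "first_person_rate": first_person / len(tokens)}

df = pd.DataFrame([{**doc, **features(doc["text"])} for doc in corpus])
print(df.groupby("leader")[["n_tokens", "first_person_rate"]].mean())
```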


Behavior Research Methods | 2018

Group communication analysis: A computational linguistics approach for detecting sociocognitive roles in multiparty interactions

Nia Dowell; Tristan M. Nixon; Arthur C. Graesser

Roles are one of the most important concepts in understanding human sociocognitive behavior. During group interactions, members take on different roles within the discussion. Roles have distinct patterns of behavioral engagement (i.e., active or passive, leading or following), contribution characteristics (i.e., providing new information or echoing given material), and social orientation (i.e., individual or group). Different combinations of roles can produce characteristically different group outcomes, and thus can be either less or more productive with regard to collective goals. In online collaborative-learning environments, this can lead to better or worse learning outcomes for the individual participants. In this study, we propose and validate a novel approach for detecting emergent roles from participants’ contributions and patterns of interaction. Specifically, we developed a group communication analysis (GCA) by combining automated computational linguistic techniques with analyses of the sequential interactions of online group communication. GCA was applied to three large collaborative interaction datasets (participant N = 2,429, group N = 3,598). Cluster analyses and linear mixed-effects modeling were used to assess the validity of the GCA approach and the influence of learner roles on student and group performance. The results indicated that participants’ patterns of linguistic coordination and cohesion are representative of the roles that individuals play in collaborative discussions. More broadly, GCA provides a framework for researchers to explore the micro intra- and interpersonal patterns associated with participants’ roles and the sociocognitive processes related to successful collaboration.
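
The two analysis steps named above, clustering participants on communication features into candidate roles and then relating role to performance with a mixed-effects model (group as a random effect), can be sketched as follows; the feature names, values, and role labels are hypothetical and are not the GCA measures themselves.

```python
# Step 1: cluster participants into candidate roles from communication features.
# Step 2: relate role to performance with a linear mixed-effects model.
# All data below are invented placeholders.
import pandas as pd
from sklearn.cluster import KMeans
import statsmodels.formula.api as smf

participants = pd.DataFrame({
    "group":         ["g1", "g1", "g1", "g2", "g2", "g2", "g3", "g3", "g3"],
    "participation": [0.9, 0.4, 0.1, 0.8, 0.5, 0.2, 0.7, 0.6, 0.1],
    "responsivity":  [0.7, 0.5, 0.2, 0.8, 0.4, 0.1, 0.6, 0.5, 0.2],
    "performance":   [0.82, 0.70, 0.55, 0.85, 0.68, 0.50, 0.78, 0.72, 0.52],
})

km = KMeans(n_clusters=2, n_init=10, random_state=0)
participants["role"] = km.fit_predict(participants[["participation", "responsivity"]])

mixed = smf.mixedlm("performance ~ C(role)", data=participants,
                    groups=participants["group"]).fit()
print(mixed.summary())
```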

Collaboration


Dive into Nia Dowell's collaborations.

Top Co-Authors

Shane Dawson, University of South Australia

Oleksandra Poquet, University of South Australia

Oleksandra Skrypnyk, University of South Australia