Publication


Featured research published by Andrew Olney.


IEEE Transactions on Education | 2005

AutoTutor: an intelligent tutoring system with mixed-initiative dialogue

Arthur C. Graesser; Patrick Chipman; Brian C. Haynes; Andrew Olney

AutoTutor simulates a human tutor by holding a conversation with the learner in natural language. The dialogue is augmented by an animated conversational agent and three-dimensional (3-D) interactive simulations in order to enhance the learner's engagement and the depth of the learning. Grounded in constructivist learning theories and tutoring research, AutoTutor achieves learning gains of approximately 0.8 sigma (nearly one letter grade), depending on the learning measure and comparison condition. The computational architecture of the system uses the .NET framework and has simplified deployment for classroom trials.


Behavior Research Methods, Instruments, & Computers | 2004

AutoTutor: A tutor with dialogue in natural language

Arthur C. Graesser; Shulan Lu; George Tanner Jackson; Heather Hite Mitchell; Mathew Ventura; Andrew Olney; Max M. Louwerse

AutoTutor is a learning environment that tutors students by holding a conversation in natural language. AutoTutor has been developed for Newtonian qualitative physics and computer literacy. Its design was inspired by explanation-based constructivist theories of learning, intelligent tutoring systems that adaptively respond to student knowledge, and empirical research on dialogue patterns in tutorial discourse. AutoTutor presents challenging problems (formulated as questions) from a curriculum script and then engages in mixed initiative dialogue that guides the student in building an answer. It provides the student with positive, neutral, or negative feedback on the student’s typed responses, pumps the student for more information, prompts the student to fill in missing words, gives hints, fills in missing information with assertions, identifies and corrects erroneous ideas, answers the student’s questions, and summarizes answers. AutoTutor has produced learning gains of approximately .70 sigma for deep levels of comprehension.
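The mixed-initiative moves listed above (pump, hint, prompt, assertion, summary) can be pictured as a policy over how completely the student has covered the expected answer. The sketch below is purely illustrative: the function name, thresholds, and coverage scale are hypothetical stand-ins, not AutoTutor's actual implementation.

```python
def select_move(coverage: float) -> str:
    """Map how completely the student has covered an expectation
    (0.0 = nothing said, 1.0 = fully stated) to a tutor dialogue move.
    Thresholds are illustrative only."""
    if coverage >= 0.9:
        return "summary"   # expectation covered: summarize the answer
    if coverage >= 0.6:
        return "prompt"    # nearly there: prompt for a missing word
    if coverage >= 0.3:
        return "hint"      # partial answer: give a hint
    return "pump"          # little content yet: pump for more information

for c in (0.1, 0.4, 0.7, 0.95):
    print(c, "->", select_move(c))
```

In the real system the coverage score would come from comparing the student's typed response against curriculum-script expectations (e.g. via LSA), and erroneous ideas would trigger corrective moves as well.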


Intelligent Tutoring Systems | 2012

Guru: a computer tutor that models expert human tutors

Andrew Olney; Sidney K. D'Mello; Natalie K. Person; Whitney L. Cade; Patrick Hays; Claire Williams; Blair Lehman; Arthur C. Graesser

We present Guru, an intelligent tutoring system for high school biology that has conversations with students, gestures and points to virtual instructional materials, and presents exercises for extended practice. Guru's instructional strategies are modeled after expert tutors and focus on brief interactive lectures followed by rounds of scaffolding as well as summarizing, concept mapping, and Cloze tasks. This paper describes the Guru session and presents learning outcomes from an in-school study comparing Guru, human tutoring, and classroom instruction. Results indicated significant learning gains for students in the Guru and human tutoring conditions compared to classroom controls.


North American Chapter of the Association for Computational Linguistics | 2003

Utterance classification in AutoTutor

Andrew Olney; Max M. Louwerse; Eric Matthews; Johanna Marineau; Heather Hite-Mitchell; Arthur C. Graesser

This paper describes classification of typed student utterances within AutoTutor, an intelligent tutoring system. Utterances are classified to one of 18 categories, including 16 question categories. The classifier presented uses part of speech tagging, cascaded finite state transducers, and simple disambiguation rules. Shallow NLP is well suited to the task: session log file analysis reveals significant classification of eleven question categories, frozen expressions, and assertions.
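As a rough illustration of how shallow surface cues can drive utterance classification, the toy classifier below keys on a handful of patterns. Its categories and rules are hypothetical stand-ins for the paper's 18-category scheme, which is built on POS tagging and cascaded finite state transducers rather than the simple string tests shown here.

```python
def classify(utterance: str) -> str:
    """Classify a student utterance with shallow surface rules.
    Categories here are illustrative, not the paper's taxonomy."""
    text = utterance.strip().lower()
    wh_words = ("what", "why", "how", "when", "where", "who", "which")
    if text.endswith("?"):
        words = text.split()
        first = words[0] if words else ""
        if first in wh_words:
            return f"question:{first}"   # wh-question subtype
        return "question:yes-no"         # verification question
    if text in ("ok", "okay", "i don't know", "dunno"):
        return "frozen-expression"       # fixed conversational phrase
    return "assertion"                   # default: contentful statement

print(classify("Why does the ball fall?"))   # -> question:why
print(classify("okay"))                      # -> frozen-expression
print(classify("The RAM stores data."))      # -> assertion
```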


Empirical Methods in Natural Language Processing | 2005

An Orthonormal Basis for Topic Segmentation in Tutorial Dialogue

Andrew Olney; Zhiqiang Cai

This paper explores the segmentation of tutorial dialogue into cohesive topics. A latent semantic space was created using conversations from human-to-human tutoring transcripts, allowing cohesion between utterances to be measured using vector similarity. Previous cohesion-based segmentation methods that focus on expository monologue are reapplied to these dialogues to create benchmarks for performance. A novel moving window technique using orthonormal bases of semantic vectors significantly outperforms these benchmarks on this dialogue segmentation task.
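A minimal illustration of the cohesion measure the abstract starts from: utterances are represented as vectors, and a topic boundary is proposed wherever the cosine similarity between consecutive utterances dips. This sketch uses tiny hand-made vectors in place of LSA-derived ones, and plain pairwise similarity in place of the paper's orthonormal-basis moving window; the function names and threshold are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def dip_boundaries(vectors, threshold=0.2):
    """Propose a topic boundary after utterance i whenever cohesion
    (similarity of consecutive utterance vectors) dips below threshold."""
    return [i for i in range(len(vectors) - 1)
            if cosine(vectors[i], vectors[i + 1]) < threshold]

# Toy 'semantic' vectors: utterances 0-1 share a topic, 2-3 share another.
utts = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.0], [0.0, 0.1, 1.0], [0.1, 0.0, 0.9]]
print(dip_boundaries(utts))  # -> [1]: boundary after utterance 1
```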


Computers in Education | 2004

A framework of synthesizing tutoring conversation capability with web-based distance education courseware

Ki-Sang Song; Xiangen Hu; Andrew Olney; Arthur C. Graesser

Whereas existing learning environments on the Web lack high level interactivity, we have developed a human tutor-like tutorial conversation system for the Web that enhances educational courseware through mixed-initiative dialog with natural language processing. The conversational tutoring agent is composed of an animated tutor, a Latent Semantic Analysis (LSA) module, a database with curriculum scripts, and a dialog manager. As in the case of human tutors, the meaning of learners' contributions in natural language is compared with the content of expected answers to questions or problems specified in curriculum scripts. LSA is used to evaluate the conceptual matches between learner input and tutor expectations, whereas the dialog manager determines how the tutor adaptively responds to the learner by selecting content from the curriculum script. The integration of available courseware with the tutorial dialog system guarantees the reusability of existing Web tutorials with minimal effort in the modification of the curriculum script and LSA module. This development thereby simplifies the transition to more valuable Web-based training courseware.


PLOS ONE | 2015

Motion Tracker: Camera-Based Monitoring of Bodily Movements Using Motion Silhouettes

Jacqueline Kory Westlund; Sidney D’Mello; Andrew Olney

Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in the bodily response systems including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy to calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker's estimates of movement and body movements recorded from the seat (r = .720) and back (r = .695 for participants with higher back movement) of a chair affixed with pressure-sensors while completing a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r = .606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker's movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3 the within-subject cross-correlation was high (r = .855) when Motion Tracker's estimates were correlated with the movement of a person's head as tracked with a Kinect while the person was seated at a desk. Best-practice recommendations, limitations, and planned extensions of the system are discussed.
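The movement estimate can be illustrated with simple frame differencing, a standard computer-vision building block behind motion silhouettes: pixels that change substantially between consecutive frames mark where the body moved. The code below is a hypothetical sketch on toy 2-D intensity lists, not Motion Tracker's actual pipeline.

```python
def motion_estimate(prev_frame, curr_frame, threshold=20):
    """Count pixels whose grayscale intensity (0-255) changed by more
    than `threshold` between consecutive frames; the count serves as a
    proxy for the amount of body movement in that interval."""
    moved = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            if abs(p - c) > threshold:
                moved += 1
    return moved

still = [[10, 10], [10, 10]]
moved = [[10, 200], [10, 10]]   # one pixel changed a lot
print(motion_estimate(still, still))   # -> 0
print(motion_estimate(still, moved))   # -> 1
```

A real implementation would operate on camera frames (e.g. arrays decoded from video), smooth the per-frame counts over time, and calibrate the threshold to the lighting conditions.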


Intelligent Tutoring Systems | 2010

Collaborative lecturing by human and computer tutors

Sidney K. D'Mello; Patrick Hays; Claire Williams; Whitney L. Cade; Jennifer Brown; Andrew Olney

We implemented and evaluated a collaborative lecture module in an ITS that models the pedagogical and motivational tactics of expert human tutors. Inspired by the lecture delivery styles of the expert tutors, the collaborative lectures of the ITS were conversational and interactive, instead of a polished one-way information delivery from tutor to student. We hypothesized that the enhanced interactivity of the expert tutor lectures was linked to efforts to promote student engagement. This hypothesis was tested in an experiment that compared the collaborative lecture module (dialogue) to less interactive alternatives such as monologues and vicarious dialogues. The results indicated that students in the collaborative lecture condition reported more arousal (a key component of engagement) than the controls and that arousal was positively correlated with learning gains. We discuss the implications of our findings for ITSs that aspire to model expert human tutors.


Intelligent Tutoring Systems | 2010

Tutorial Dialog in Natural Language

Andrew Olney; Arthur C. Graesser; Natalie K. Person

This chapter reviews our past and ongoing investigations into conversational interaction during human tutoring and our attempts to build intelligent tutoring systems (ITS) to simulate this interaction. We have previously modeled the strategies, actions, and dialogue of novice tutors in an ITS, called AutoTutor, with learning gains comparable to novice tutors. There is evidence, however, that expert human tutors may foster exceptional learning gains beyond those reported for some categories of human tutors. We have undertaken a rigorous, large-scale study of expert human tutors and are using these data to create Guru, an expert ITS for high school biology. Based on our analyses, expert human tutoring has several distinctive features which differ from novice human tutoring. These distinctive features have implications for the development of an expert ITS, and we briefly describe how these are being addressed in Guru.


Archive | 2013

Affect, Meta-affect, and Affect Regulation During Complex Learning

Sidney D’Mello; Amber Chauncey Strain; Andrew Olney; Arthur C. Graesser

Complex learning of difficult subject matter with educational technologies involves a coordination of cognitive, metacognitive, and affective processes. While extensive theoretical and empirical research has examined learners’ cognitive and metacognitive processes, research on affective processes during learning has been slow to emerge. Because learners’ affective states can significantly impact their thoughts, feelings, behavior, and learning outcomes, inquiry into how these states emerge and influence engagement and learning is of vital importance. In this chapter, we describe several key theories of affect, meta-affect, and affect regulation during learning. We then describe our own empirical research that focuses on identifying the affective states that spontaneously emerge during learning with educational technologies, how affect relates to learning outcomes, and how affect can be regulated. The studies that we describe incorporate a variety of educational technologies, different learning contexts, a number of student populations, and diverse methodologies to track affect. We then describe and evaluate an affect-sensitive version of AutoTutor, a fully-automated intelligent tutoring system that detects and helps learners regulate their negative affective states (frustration, boredom, confusion) in order to increase engagement, task persistence, and learning gains. We conclude by discussing future directions of research on affect, meta-affect, and affect regulation during learning with educational technologies.

Collaboration


Dive into Andrew Olney's collaborations.

Top Co-Authors

Martin Nystrand

University of Wisconsin-Madison

Xiaoyi Sun

University of Wisconsin-Madison


Philip I. Pavlik

Carnegie Mellon University
