Publication


Featured research published by Peter Hastings.


Behavior Research Methods | 2008

Research Methods Tutor: evaluation of a dialogue-based tutoring system in the classroom.

Elizabeth Arnott; Peter Hastings; David Allbritton

Research Methods Tutor (RMT) is a dialogue-based intelligent tutoring system for use in conjunction with undergraduate psychology research methods courses. RMT includes five topics that correspond to the curriculum of introductory research methods courses: ethics, variables, reliability, validity, and experimental design. We evaluated the effectiveness of the RMT system in the classroom using a nonequivalent control group design. Students in three classes (n = 83) used RMT, and students in two classes (n = 53) did not use RMT. Results indicated that the use of RMT yielded strong learning gains of 0.75 standard deviations above classroom instruction alone. Further, the dialogue-based tutoring condition of the system resulted in higher gains than did the textbook-style condition (CAI version) of the system. Future directions for RMT include the addition of new topics and tutoring elements.


Journal of Educational Computing Research | 2007

Tutoring Bilingual Students with an Automated Reading Tutor that Listens

Robert Poulsen; Peter Hastings; David Allbritton

Children from non-English-speaking homes are doubly disadvantaged when learning English in school. They enter school with less prior knowledge of English sounds, word meanings, and sentence structure, and they get little or no reinforcement of their learning outside of the classroom. This article compares the standard classroom practice of sustained silent reading with the Project LISTEN Reading Tutor, which uses automated speech recognition to “listen” to children read aloud, providing both spoken and graphical feedback. Previous research with the Reading Tutor has focused primarily on native-speaking populations. In this study, 34 Hispanic students spent one month in the classroom and one month using the Reading Tutor for 25 minutes per day. The Reading Tutor condition produced significant learning gains in several measures of fluency, with effect sizes ranging from 0.55 to 1.27. These dramatic results from a one-month treatment indicate this technology may have much to offer English language learners.


Behavior Research Methods | 2012

Assessing the use of multiple sources in student essays

Peter Hastings; Simon Hughes; Joseph P. Magliano; Susan R. Goldman; Kimberly A. Lawless

The present study explored different approaches for automatically scoring student essays that were written on the basis of multiple texts. Specifically, these approaches were developed to classify whether or not important elements of the texts were present in the essays. The first was a simple pattern-matching approach called “multi-word” that allowed for flexible matching of words and phrases in the sentences. The second technique was latent semantic analysis (LSA), which was used to compare student sentences to original source sentences using its high-dimensional vector-based representation. Finally, the third was a machine-learning technique, support vector machines, which learned a classification scheme from the corpus. The results of the study suggested that the LSA-based system was superior for detecting the presence of explicit content from the texts, but the multi-word pattern-matching approach was better for detecting inferences outside or across texts. These results suggest that the best approach for analyzing essays of this nature should draw upon multiple natural language processing approaches.
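For illustration only, here is a minimal sketch (not the authors' code) of two of the approaches named in this abstract, using scikit-learn: an LSA-style comparison of student sentences to source sentences, and an SVM classifier trained on labeled sentences. The toy sentences, labels, and dimensionality are invented for the example.

```python
# Sketch of LSA similarity and SVM classification for essay-content detection.
# All sentences and labels below are made-up toy data, not the study's corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

source_sentences = [
    "greenhouse gases trap heat in the atmosphere",
    "burning fossil fuels releases carbon dioxide",
]
student_sentences = [
    "carbon dioxide comes from burning coal and oil",
    "my essay is about the weather",
]

# LSA: project TF-IDF vectors into a low-dimensional latent space and compare
# each student sentence to each source sentence by cosine similarity.
tfidf = TfidfVectorizer().fit_transform(source_sentences + student_sentences)
lsa = TruncatedSVD(n_components=2).fit_transform(tfidf)
sims = cosine_similarity(lsa[len(source_sentences):], lsa[:len(source_sentences)])
print("LSA similarities:\n", sims)

# SVM: learn a classifier from labeled example sentences (labels are invented)
# and predict whether each student sentence contains target content.
train_texts = source_sentences + ["I like writing essays", "school starts in fall"]
train_labels = [1, 1, 0, 0]  # 1 = contains target content, 0 = does not
svm = make_pipeline(TfidfVectorizer(), LinearSVC())
svm.fit(train_texts, train_labels)
print("SVM predictions:", svm.predict(student_sentences))
```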


Artificial Intelligence in Education | 2017

Different Approaches to Assessing the Quality of Explanations Following a Multiple-Document Inquiry Activity in Science

Jennifer Wiley; Peter Hastings; Dylan Blaum; Allison J. Jaeger; Simon Hughes; Patricia S. Wallace; Thomas D. Griffin; M. Anne Britt

This article describes several approaches to assessing student understanding using written explanations that students generate as part of a multiple-document inquiry activity on a scientific topic (global warming). The current work attempts to capture the causal structure of student explanations as a way to detect the quality of the students’ mental models and understanding of the topic by combining approaches from Cognitive Science and Artificial Intelligence, and applying them to Education. First, several attributes of the explanations are explored by hand coding and leveraging existing technologies (LSA and Coh-Metrix). Then, we describe an approach for inferring the quality of the explanations using a novel, two-phase machine-learning approach for detecting causal relations and the causal chains that are present within student essays. The results demonstrate the benefits of using a machine-learning approach for detecting content, but also highlight the promise of hybrid methods that combine ML, LSA and Coh-Metrix approaches for detecting student understanding. Opportunities to use automated approaches as part of Intelligent Tutoring Systems that provide feedback toward improving student explanations and understanding are discussed.


Artificial Intelligence in Education | 2011

Text categorization for assessing multiple documents integration, or John Henry visits a data mine

Peter Hastings; Simon Hughes; Joseph P. Magliano; Susan R. Goldman; Kimberly A. Lawless

A critical need for students in the digital age is to learn how to gather, analyze, evaluate, and synthesize complex and sometimes contradictory information across multiple sources and contexts. Yet reading is most often taught with single sources. In this paper, we explore techniques for analyzing student essays to give feedback to teachers on how well their students deal with multiple texts. We compare the performance of a simple regular expression matcher to Latent Semantic Analysis and to Support Vector Machines, a machine learning approach.


Intelligent Tutoring Systems | 2012

Automated approaches for detecting integration in student essays

Simon Hughes; Peter Hastings; Joseph P. Magliano; Susan R. Goldman; Kimberly A. Lawless

Integrating information across multiple sources is an important literacy skill, yet there has been little research into automated methods for measuring integration in written text. This study investigated the efficacy of three different algorithms at classifying student essays according to an expert model of the essay topic which categorized statements by argument function, including claims and integration. A novel classification algorithm is presented which uses multi-word regular expressions. Its performance is compared to that of Latent Semantic Analysis and several variants of the Support Vector Machine algorithm at the same classification task. One variant of the SVM approach worked best overall, but another proved more successful at detecting integration within and across texts. This research has important implications for systems that can gauge the level of integration in written essays.
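As a purely illustrative sketch of the general idea behind a multi-word pattern matcher (the published algorithm's details are not reproduced here), one could require a concept's word stems to occur in order within a sentence, allowing only a limited gap between them. The concept patterns and gap limit below are assumptions for the example.

```python
# Hypothetical multi-word matcher: a concept matches a sentence if its word
# stems occur in order, with at most max_gap intervening words between hits.
import re

# Concept name -> ordered word stems that should co-occur in a sentence.
CONCEPT_PATTERNS = {
    "claim_warming": ["temperature", "ris"],
    "integration":   ["both", "source"],
}

def multiword_match(sentence, stems, max_gap=4):
    """True if the stems occur in order, at most max_gap words apart."""
    words = re.findall(r"[a-z]+", sentence.lower())
    pos = -1
    for stem in stems:
        hits = [i for i, w in enumerate(words) if w.startswith(stem) and i > pos]
        if not hits or (pos >= 0 and hits[0] - pos > max_gap + 1):
            return False
        pos = hits[0]
    return True

sentence = "Both of the sources agree that average temperatures are rising."
labels = [c for c, stems in CONCEPT_PATTERNS.items() if multiword_match(sentence, stems)]
print(labels)  # ['claim_warming', 'integration']
```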


Intelligent Tutoring Systems | 2010

Squeezing out gaming behavior in a dialog-based ITS

Peter Hastings; Elizabeth Arnott-Hill; David Allbritton

Research Methods Tutor (RMT) is a dialog-based intelligent tutoring system that has been used by students in Research Methods in Psychology classes since 2003. Students interact with RMT to reinforce what they learn in class on five different topics. In this paper, we evaluate a different population of students and replicate our prior research: despite the relatively small amount of exposure to RMT during the term compared to other course-related activities, students learn significantly more on topics covered with RMT [1]. However, we did not find the same advantage for the dialog-based tutoring mode of RMT over the CAI mode. When transcript analyses indicated that a small but significant number of students were gaming the system by entering empty or nonsense responses, we modified the tutor to require reasonable attempts. This did lead some students to reform their gaming ways. In other cases, however, it resulted in disengagement from tutoring, at least temporarily, because reasonable answers were not recognized.


Artificial Intelligence in Education | 2015

Machine Learning for Holistic Evaluation of Scientific Essays

Simon Hughes; Peter Hastings; Mary Anne Britt; Patricia S. Wallace; Dylan Blaum

In the US in particular, there is an increasing emphasis on the importance of science in education. To better understand a scientific topic, students need to compile information from multiple sources and determine the principal causal factors involved. We describe an approach for automatically inferring the quality and completeness of causal reasoning in essays on two separate scientific topics using a novel, two-phase machine learning approach for detecting causal relations. For each core essay concept, we initially trained a window-based tagging model to predict which individual words belonged to that concept. Using the predictions from this first set of models, we then trained a second stacked model on all the predicted word tags present in a sentence to predict inferences between essay concepts. The results indicate we could use such a system to provide explicit feedback to students to improve reasoning and essay writing skills.
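A minimal, hypothetical sketch of this two-phase idea (not the published models): a window-based word tagger is trained first, and a second, stacked classifier then predicts a sentence-level relation from the counts of predicted tags. The toy training data, window size, and concept labels are assumptions for illustration.

```python
# Phase 1: window-based word tagger. Phase 2: stacked sentence-level model
# over the predicted tag counts. All data below is invented toy data.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def window_features(words, i, size=1):
    """Features for word i: the word itself plus its neighbors in a small window."""
    feats = {"w0": words[i]}
    for k in range(1, size + 1):
        feats[f"w-{k}"] = words[i - k] if i - k >= 0 else "<s>"
        feats[f"w+{k}"] = words[i + k] if i + k < len(words) else "</s>"
    return feats

# Phase 1: train a word-level concept tagger on toy word/tag pairs.
tagged = [("co2", "CAUSE"), ("traps", "O"), ("heat", "EFFECT"),
          ("emissions", "CAUSE"), ("raise", "O"), ("temperatures", "EFFECT")]
words = [w for w, _ in tagged]
X1 = [window_features(words, i) for i in range(len(words))]
y1 = [t for _, t in tagged]
vec1 = DictVectorizer()
tagger = LogisticRegression(max_iter=1000).fit(vec1.fit_transform(X1), y1)

# Phase 2: stacked model that maps a sentence's predicted tag counts to a relation.
def tag_counts(sentence_words):
    feats = [window_features(sentence_words, i) for i in range(len(sentence_words))]
    tags = tagger.predict(vec1.transform(feats))
    return {t: list(tags).count(t) for t in set(tags)}

sentences = [["co2", "traps", "heat"],
             ["emissions", "raise", "temperatures"],
             ["essays", "are", "fun"]]
y2 = ["causal", "causal", "none"]
vec2 = DictVectorizer()
relation_model = LogisticRegression(max_iter=1000).fit(
    vec2.fit_transform([tag_counts(s) for s in sentences]), y2)

print(relation_model.predict(vec2.transform([tag_counts(["co2", "raise", "heat"])])))
```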


Artificial Intelligence in Education | 2018

Active Learning for Improving Machine Learning of Student Explanatory Essays

Peter Hastings; Simon Hughes; M. Anne Britt

There is an increasing emphasis, especially in STEM areas, on students’ abilities to create explanatory descriptions. Holistic, overall evaluations of explanations can be performed relatively easily with shallow language processing by humans or computers. However, this provides little information about an essential element of explanation quality: the structure of the explanation, i.e., how it connects causes to effects. The difficulty of providing feedback on explanation structure can lead teachers either to avoid giving this type of assignment or to provide only shallow feedback on it. Using machine learning techniques, we have developed successful computational models for analyzing explanatory essays. A major cost of developing such models is the time and effort required for human annotation of the essays. As part of a large project studying students’ reading processes, we have collected a large number of explanatory essays and thoroughly annotated them. We then used the annotated essays to train our machine learning models. In this paper, we focus on how to get the best payoff from the expensive annotation process within such an educational context, and we evaluate a method called Active Learning.
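For illustration, a minimal sketch of pool-based active learning with uncertainty sampling, one common instantiation of the Active Learning idea (the paper's exact query strategy, features, and data are not reproduced here; the toy data and seed-set size are assumptions):

```python
# Pool-based active learning with uncertainty sampling on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy pool of "essays" represented as feature vectors with gold labels.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Seed the labeled set with a few examples of each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    model.fit(X[labeled], y[labeled])
    # Query the pool example the current model is least certain about.
    probs = model.predict_proba(X[pool])
    uncertainty = 1 - probs.max(axis=1)
    query = pool.pop(int(np.argmax(uncertainty)))
    labeled.append(query)  # in practice, a human annotator would label this essay
    print(f"round {round_}: labeled set size = {len(labeled)}")
```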


World Conference on Information Systems and Technologies | 2017

Social Quizzes with Scuiz

Massimo Di Pierro; Peter Hastings

Scuiz is a platform for social quizzes. Students, not only the teacher, create quiz questions, which are then randomly assigned to other students. Questions are organized in feeds similar to those found in common social networks. The system allows students to “challenge” a question, that is, dispute its premise or answer, and discuss it. A mechanism of incentives ensures that students engage with the system early and substantively. The system keeps statistical information about students’ questions and automatically flags some of them as “too easy” or “too difficult”. The teacher can review these challenges, accept or reject the submitted questions, participate in the discussion, and change the status of questions. This paper describes Scuiz, which was implemented by the first author, some issues arising from its use in classes, and future research directions concerning its effectiveness.
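As a purely hypothetical sketch of how a question might be flagged from answer statistics (Scuiz's actual rules are not given in this description; the thresholds and minimum-attempt rule below are assumptions):

```python
# Flag a quiz question from its answer statistics; cut-offs are illustrative.
def flag_question(correct, attempts, min_attempts=10):
    """Return 'too easy', 'too difficult', or 'ok' for a question."""
    if attempts < min_attempts:
        return "ok"  # not enough data to judge yet
    rate = correct / attempts
    if rate > 0.9:
        return "too easy"
    if rate < 0.2:
        return "too difficult"
    return "ok"

print(flag_question(correct=19, attempts=20))  # -> 'too easy'
print(flag_question(correct=2, attempts=25))   # -> 'too difficult'
```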

Collaboration


Dive into Peter Hastings's collaborations.

Top Co-Authors

Dylan Blaum (Northern Illinois University)
Joseph P. Magliano (Northern Illinois University)
M. Anne Britt (Northern Illinois University)
Kimberly A. Lawless (University of Illinois at Chicago)
Susan R. Goldman (University of Illinois at Chicago)
Patricia S. Wallace (Northern Illinois University)
Kristopher Kopp (Northern Illinois University)