Publications


Featured research published by Nathaniel Blanchard.


Intelligent Tutoring Systems | 2014

Automated Physiological-Based Detection of Mind Wandering during Learning

Nathaniel Blanchard; Robert Bixler; Tera Joyce; Sidney K. D'Mello

Unintentional lapses of attention, or mind wandering, are ubiquitous and detrimental during learning. Hence, automated methods that detect and combat mind wandering might be beneficial to learning. As an initial step in this direction, we propose to detect mind wandering by monitoring physiological measures of skin conductance and skin temperature. We conducted a study in which students' physiological signals were measured while they learned topics in research methods from instructional texts. Momentary self-reports of mind wandering were collected with standard probe-based methods. We computed features from the physiological signals in windows leading up to the probes and trained supervised classification models to detect mind wandering. We obtained a kappa, a measure of accuracy corrected for random guessing, of .22, signaling the feasibility of detecting mind wandering in a student-independent manner. Though modest, we consider this result an important step towards fully automated, unobtrusive detection of mind wandering during learning.
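The kappa statistic reported above corrects raw agreement for chance agreement. A minimal sketch of Cohen's kappa on hypothetical probe labels (1 = mind wandering, 0 = on task; the data below is illustrative, not from the study):

```python
from collections import Counter

def cohens_kappa(truth, pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(truth) == len(pred)
    n = len(truth)
    observed = sum(t == p for t, p in zip(truth, pred)) / n
    # Expected chance agreement from the marginal label frequencies.
    t_counts, p_counts = Counter(truth), Counter(pred)
    expected = sum(t_counts[k] * p_counts[k] for k in t_counts) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical self-report probes vs. classifier output.
truth = [1, 0, 0, 1, 0, 1, 0, 0]
pred  = [1, 0, 1, 1, 0, 0, 0, 0]
print(round(cohens_kappa(truth, pred), 2))  # → 0.47
```

Kappa of 0 means chance-level agreement and 1 means perfect agreement, which is why the paper reports it instead of raw accuracy.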


International Conference on Multimodal Interfaces | 2015

Multimodal Capture of Teacher-Student Interactions for Automated Dialogic Analysis in Live Classrooms

Sidney K. D'Mello; Andrew Olney; Nathaniel Blanchard; Borhan Samei; Xiaoyi Sun; Brooke Ward; Sean Kelly

We focus on data collection designs for the automated analysis of teacher-student interactions in live classrooms with the goal of identifying instructional activities (e.g., lecturing, discussion) and assessing the quality of dialogic instruction (e.g., analysis of questions). Our designs were motivated by multiple technical requirements and constraints. Most importantly, teachers could be individually mic'd, but their audio needed to be of excellent quality for automatic speech recognition (ASR) and spoken utterance segmentation. Individual students could not be mic'd, but classroom audio quality only needed to be sufficient to detect student spoken utterances. Visual information could only be recorded if students could not be identified. Design 1 used an omnidirectional laptop microphone to record both teacher and classroom audio and was quickly deemed unsuitable. In Designs 2 and 3, teachers wore a wireless Samson AirLine 77 vocal headset system, a unidirectional microphone with a cardioid pickup pattern. In Design 2, classroom audio was recorded with dual first-generation Microsoft Kinects placed at the front corners of the class. Design 3 used a Crown PZM-30D pressure zone microphone mounted on the blackboard to record classroom audio. Designs 2 and 3 were tested by recording audio in 38 live middle school classrooms from six U.S. schools while trained human coders simultaneously performed live coding of classroom discourse. Qualitative and quantitative analyses revealed that Design 3 was suitable for three of our core tasks: (1) ASR on teacher speech (word recognition rate of 66% and word overlap rate of 69% using the Google Speech ASR engine); (2) teacher utterance segmentation (F-measure of 97%); and (3) student utterance segmentation (F-measure of 66%). Ideas to incorporate video and skeletal tracking with dual second-generation Kinects to produce Design 4 are discussed.


Artificial Intelligence in Education | 2015

A Study of Automatic Speech Recognition in Noisy Classroom Environments for Automated Dialog Analysis

Nathaniel Blanchard; Michael Connolly Brady; Andrew Olney; Marci Glaus; Xiaoyi Sun; Martin Nystrand; Borhan Samei; Sean Kelly; Sidney D’Mello

The development of large-scale automatic classroom dialog analysis systems requires accurate speech-to-text transcription. A variety of automatic speech recognition (ASR) engines were evaluated for this purpose. Recordings of teachers in noisy classrooms were used for testing. In comparing ASR results, Google Speech and Bing Speech were more accurate, with word accuracy scores of 0.56 for Google and 0.52 for Bing, compared to 0.41 for AT&T Watson, 0.08 for Microsoft, 0.14 for Sphinx with the HUB4 model, and 0.00 for Sphinx with the WSJ model. Further analysis revealed that both the Google and Bing engines were largely unaffected by speaker, class session, and speech characteristics. Bing results were validated across speakers in a laboratory study, and a method of improving Bing results is presented. Results provide a useful understanding of the capabilities of contemporary ASR engines in noisy classroom environments. Results also highlight a list of issues to be aware of when selecting an ASR engine for difficult speech recognition tasks.
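Word accuracy scores like those above are typically derived from a word-level edit-distance alignment between the reference transcript and the ASR hypothesis. A minimal sketch (the sample sentences are invented for illustration, not drawn from the classroom corpus):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate via word-level Levenshtein distance.
    Word accuracy is 1 - WER (floored at 0 in practice)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

ref = "please open your books to page ten"
hyp = "please open the books to page ten"
print(round(1 - word_error_rate(ref, hyp), 2))  # → 0.86
```

One substitution out of seven reference words gives a word accuracy of about 0.86; the 0.00 score for Sphinx/WSJ above means essentially no reference words were recovered.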


International Learning Analytics and Knowledge Conference | 2017

Words matter: automatic detection of teacher questions in live classroom discourse using linguistics, acoustics, and context

Patrick Donnelly; Nathaniel Blanchard; Andrew Olney; Sean Kelly; Martin Nystrand; Sidney K. D'Mello

We investigate automatic detection of teacher questions from audio recordings collected in live classrooms with the goal of providing automated feedback to teachers. Using a dataset of audio recordings from 11 teachers across 37 class sessions, we automatically segment the audio into individual teacher utterances and code each as containing a question or not. We train supervised machine learning models to detect the human-coded questions using high-level linguistic features extracted from automatic speech recognition (ASR) transcripts, acoustic and prosodic features from the audio recordings, as well as context features, such as timing and turn-taking dynamics. Models are trained and validated independently of the teacher to ensure generalization to new teachers. We are able to distinguish questions and non-questions with a weighted F1 score of 0.69. A comparison of the three feature sets indicates that a model using linguistic features outperforms those using acoustic-prosodic and context features for question detection, but the combination of features yields a 5% improvement in overall accuracy compared to linguistic features alone. We discuss applications for pedagogical research, teacher formative assessment, and teacher professional development.
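The weighted F1 score reported above averages per-class F1 scores, weighting each class by its frequency, so the abundant non-question class does not drown out the question class. A small sketch with hypothetical labels (`q` = question, `o` = other; the data is illustrative only):

```python
from collections import Counter

def weighted_f1(truth, pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(truth)
    total = 0.0
    for c in set(truth):
        tp = sum(t == c and p == c for t, p in zip(truth, pred))
        fp = sum(t != c and p == c for t, p in zip(truth, pred))
        fn = sum(t == c and p != c for t, p in zip(truth, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += f1 * support[c] / len(truth)
    return total

# Hypothetical human codes vs. model predictions for six utterances.
truth = ["q", "q", "o", "o", "o", "q"]
pred  = ["q", "o", "o", "o", "q", "q"]
print(round(weighted_f1(truth, pred), 2))  # → 0.67
```

This matches what scikit-learn's `f1_score(..., average="weighted")` computes, though the paper does not specify its tooling.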


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2016

Identifying Teacher Questions Using Automatic Speech Recognition in Classrooms.

Nathaniel Blanchard; Patrick Donnelly; Andrew Olney; Borhan Samei; Brooke Ward; Xiaoyi Sun; Sean Kelly; Martin Nystrand; Sidney K. D'Mello

We investigate automatic question detection from recordings of teacher speech collected in live classrooms. Our corpus contains audio recordings of 37 class sessions taught by 11 teachers. We automatically segment teacher speech into utterances using an amplitude envelope thresholding approach followed by filtering non-speech via automatic speech recognition (ASR). We manually code the segmented utterances as containing a teacher question or not based on an empirically-validated scheme for coding classroom discourse. We compute domain-independent natural language processing (NLP) features from transcripts generated by three ASR engines (AT&T, Bing Speech, and Azure Speech). Our teacher-independent supervised machine learning model detects questions with an overall weighted F1 score of 0.59, a 51% improvement over chance. Furthermore, the proportion of automatically-detected questions per class session strongly correlates (Pearson’s r = 0.85) with human-coded question rates. We consider our results to reflect a substantial (37%) improvement over the state-of-the-art in automatic question detection from naturalistic audio. We conclude by discussing applications of our work for teachers, researchers, and other stakeholders.
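The amplitude envelope thresholding step described above can be sketched as follows: keep spans where the envelope stays above a threshold, and bridge brief dips so one utterance is not split by a short pause. The threshold, frame gap, and envelope values below are illustrative assumptions, not the paper's parameters:

```python
def segment_utterances(envelope, threshold=0.1, min_gap=5):
    """Split an amplitude envelope into half-open (start, end) frame spans
    where it stays above threshold; sub-threshold gaps shorter than
    min_gap frames are bridged rather than ending the span."""
    spans, start, gap = [], None, 0
    for i, amp in enumerate(envelope):
        if amp >= threshold:
            if start is None:
                start = i     # a new candidate utterance begins
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:
                # Gap is long enough: close the span at the last loud frame.
                spans.append((start, i - gap + 1))
                start, gap = None, 0
    if start is not None:     # envelope ended inside an utterance
        spans.append((start, len(envelope) - gap))
    return spans

env = [0, 0, 0.5, 0.6, 0, 0.5, 0, 0, 0, 0, 0, 0.4, 0.3, 0]
print(segment_utterances(env, threshold=0.1, min_gap=3))  # → [(2, 6), (11, 13)]
```

The candidate spans would then be passed through ASR, as in the paper, to filter out segments containing no recognizable speech.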


International Conference on Multimodal Interfaces | 2016

Multi-sensor modeling of teacher instructional segments in live classrooms

Patrick Donnelly; Nathaniel Blanchard; Borhan Samei; Andrew Olney; Xiaoyi Sun; Brooke Ward; Sean Kelly; Martin Nystrand; Sidney K. D'Mello

We investigate multi-sensor modeling of teachers' instructional segments (e.g., lecture, group work) from audio recordings collected in 56 classes from eight teachers across five middle schools. Our approach fuses two sensors: a unidirectional microphone for teacher audio and a pressure zone microphone for general classroom audio. We segment and analyze the audio streams with respect to discourse timing, linguistic, and paralinguistic features. We train supervised classifiers to identify the five instructional segments that collectively comprised a majority of the data, achieving teacher-independent F1 scores ranging from 0.49 to 0.60. With respect to individual segments, the individual sensor models and the fused model were on par for Question & Answer and Procedures & Directions segments. For Supervised Seatwork, Small Group Work, and Lecture segments, the classroom model outperformed both the teacher and fusion models. Across all segments, a multi-sensor approach yielded an average 8% improvement over the state-of-the-art approach that analyzed teacher audio alone. We discuss implications of our findings for the emerging field of multimodal learning analytics.


Educational Data Mining | 2014

Domain Independent Assessment of Dialogic Properties of Classroom Discourse

Borhan Samei; Andrew Olney; Sean Kelly; Martin Nystrand; Sidney K. D'Mello; Nathaniel Blanchard; Xiaoyi Sun; Marci Glaus; Arthur C. Graesser


International Conference on Multimodal Interfaces | 2015

Automatic Detection of Mind Wandering During Reading Using Gaze and Physiology

Robert Bixler; Nathaniel Blanchard; Luke Garrison; Sidney K. D'Mello


Educational Data Mining | 2015

Automatic Classification of Question & Answer Discourse Segments from Teacher's Speech in Classrooms.

Nathaniel Blanchard; Sidney K. D'Mello; Andrew Olney; Martin Nystrand


Educational Data Mining | 2015

Modeling Classroom Discourse: Do Models That Predict Dialogic Instruction Properties Generalize across Populations?.

Borhan Samei; Andrew Olney; Sean Kelly; Martin Nystrand; Sidney K. D'Mello; Nathaniel Blanchard; Arthur C. Graesser

Collaboration


Nathaniel Blanchard's top co-authors:

Martin Nystrand (University of Wisconsin-Madison)
Xiaoyi Sun (University of Wisconsin-Madison)
Sean Kelly (University of Notre Dame)
Brooke Ward (University of Wisconsin-Madison)
Robert Bixler (University of Notre Dame)