Michael Sao Pedro
Worcester Polytechnic Institute
Publications
Featured research published by Michael Sao Pedro.
User Modeling and User-adapted Interaction | 2013
Michael Sao Pedro; Ryan S. Baker; Janice D. Gobert; Orlando Montalvo; Adam Nakama
We present work toward automatically assessing and estimating science inquiry skills as middle school students engage in inquiry within a physical science microworld. Toward this goal, we generated machine-learned models that can detect when students test their articulated hypotheses, design controlled experiments, and engage in planning behaviors using two inquiry support tools. Models were trained using labels generated through “text replay tagging,” a new method for hand-coding log files. This approach led to detectors that can automatically and accurately identify these inquiry skills under student-level cross-validation. The resulting detectors can be applied at run-time to drive scaffolding interventions. They can also be leveraged to automatically score all practice attempts, rather than hand-classifying them, and to build models of latent skill proficiency. As part of this work, we compared two approaches for estimating latent skill: Bayesian Knowledge Tracing and an averaging approach that assumes a static inquiry skill level. These approaches were compared on their efficacy at predicting skill before a student engages in an inquiry activity, predicting performance on a paper-style multiple-choice test of inquiry, and predicting performance on a transfer task requiring data collection skills. Overall, we found that both approaches were effective at estimating student skills within the environment. Additionally, the models’ skill estimates were significant predictors of the two types of inquiry transfer tests.
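The contrast between the two skill-estimation approaches can be illustrated with a short sketch. This is a minimal illustration, not the paper’s implementation; the guess, slip, learn, and prior parameter values and the `estimate_skill` helper are hypothetical placeholders.

```python
# Minimal sketch (not the paper's implementation) contrasting Bayesian Knowledge
# Tracing with a static averaging estimate over detector-scored practice attempts.
# All parameter values are illustrative placeholders, not fitted values.

def bkt_update(p_know, correct, p_learn=0.1, p_guess=0.2, p_slip=0.1):
    """One BKT step: revise P(known) given one scored attempt, then account
    for the chance the student learned during that attempt."""
    if correct:
        evidence = p_know * (1 - p_slip)
        marginal = evidence + (1 - p_know) * p_guess
    else:
        evidence = p_know * p_slip
        marginal = evidence + (1 - p_know) * (1 - p_guess)
    posterior = evidence / marginal
    return posterior + (1 - posterior) * p_learn

def estimate_skill(attempts, prior=0.3):
    """Return (BKT estimate, averaging estimate) for a sequence of
    detector-scored attempts (True = skill demonstrated)."""
    p_know = prior
    for correct in attempts:
        p_know = bkt_update(p_know, correct)
    average = sum(attempts) / len(attempts)  # assumes a static skill level
    return p_know, average

print(estimate_skill([False, True, True, True]))
```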
American Behavioral Scientist | 2013
Arnon Hershkovitz; Ryan S. Baker; Janice D. Gobert; Michael Wixon; Michael Sao Pedro
In recent years, an increasing number of analyses in learning analytics and educational data mining (EDM) have adopted a “discovery with models” approach, where an existing model is used as a key component in a new EDM or analytics analysis. This article presents a theoretical discussion of the emergence of discovery with models, its potential to enhance research on learning and learners, and key lessons learned in how discovery with models can be conducted validly and effectively. We illustrate these issues through discussion of a case study where discovery with models was used to investigate a form of disengaged behavior (i.e., carelessness) in the context of middle school computer-based science inquiry. This behavior was acknowledged as a problem in education as early as the 1920s, and with the increasing use of high-stakes testing, the cost of student carelessness can be even higher. For instance, within computer-based learning environments, careless errors can result in reduced educational effectiveness, with students continuing to receive material they have already mastered. Despite the importance of this problem, it has received minimal research attention, in part because of difficulties in operationalizing carelessness as a construct. Building from theory on carelessness and a Bayesian framework for knowledge modeling, we use machine-learned detectors to predict carelessness within authentic use of a computer-based learning environment. We then use a discovery with models approach to link these validated carelessness measures to survey data, studying the correlations between the prevalence of carelessness and student goal orientation.
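The “discovery with models” step can be sketched roughly as follows: an existing detector’s per-action estimates are aggregated per student and correlated with survey scores. The data, column names, and detector outputs below are hypothetical stand-ins for the paper’s actual measures.

```python
# Minimal sketch of a "discovery with models" analysis: aggregate an existing
# carelessness detector's per-action estimates to a per-student prevalence,
# then correlate that prevalence with a survey-based goal-orientation score.
# All values and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

# One row per logged student action, with the detector's carelessness estimate.
actions = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3, 3],
    "p_careless": [0.10, 0.40, 0.05, 0.15, 0.60, 0.50],
})
# One row per student, with a mastery-goal-orientation survey score.
survey = pd.DataFrame({
    "student_id": [1, 2, 3],
    "mastery_goal": [3.5, 2.0, 4.5],
})

per_student = (actions.groupby("student_id")["p_careless"]
               .mean().rename("carelessness").reset_index())
merged = survey.merge(per_student, on="student_id")
r, p = pearsonr(merged["carelessness"], merged["mastery_goal"])
print(f"r = {r:.2f}, p = {p:.3f}")
```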
intelligent tutoring systems | 2014
Luc Paquette; Ryan S. Baker; Michael Sao Pedro; Janice D. Gobert; Lisa M. Rossi; Adam Nakama; Zakkai Kauffman-Rogoff
Recently, there has been considerable interest in understanding the relationship between student affect and cognition. This research is facilitated by the advent of automated sensor-free detectors that have been designed to “infer” affect from the logs of student interactions within a learning environment. Such detectors allow for fine-grained analysis of the impact of different affective states on a range of learning outcome measures. However, these detectors have to date only been developed for a subset of online learning environments, including problem-solving tutors, dialogue tutors, and narrative-based virtual environments. In this paper, we extend sensor-free affect detection to a science microworld environment, affording the possibility of more deeply studying and responding to student affect in this type of learning environment.
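A minimal sketch of this kind of sensor-free detector is shown below, assuming log-derived features and student-level cross-validation so that no student appears in both training and test folds. The features, labels, and classifier choice are illustrative assumptions, not the detectors built in the paper.

```python
# Minimal sketch of training a sensor-free affect detector from interaction-log
# features, evaluated with student-level cross-validation (GroupKFold).
# Features, labels, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 200                              # one row per observation clip of a log
X = rng.random((n, 3))               # e.g., pause length, actions/minute, help requests
y = rng.integers(0, 2, n)            # 1 = affect observed, 0 = not (hypothetical labels)
students = rng.integers(0, 40, n)    # which student produced each clip

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, groups=students,
                         cv=GroupKFold(n_splits=5), scoring="roc_auc")
print("student-level cross-validated AUC:", scores.mean())
```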
artificial intelligence in education | 2011
Arnon Hershkovitz; Michael Wixon; Ryan S. Baker; Janice D. Gobert; Michael Sao Pedro
In this paper, we study the relationship between goal orientation within a science inquiry learning environment for middle school students and carelessness, i.e., not demonstrating an inquiry skill despite knowing it. Carelessness is measured based on a machine-learned model. We find, surprisingly, that carelessness is higher for students with strong mastery or learning goals, compared to students who lack strong goal orientation.
intelligent tutoring systems | 2014
Michael Sao Pedro; Janice D. Gobert; Cameron G. Betts
There are well-acknowledged challenges to scaling computerized performance-based assessments. One such challenge is reliably and validly identifying ill-defined skills. We describe an approach that leverages a data mining framework to build and validate a detector that evaluates an ill-defined inquiry process skill, designing controlled experiments. The detector was originally built and validated for use with physical science simulations that have a simpler, linear causal structure. In this paper, we show that the detector can be used to identify demonstration of skill within a life science simulation on Ecosystems that has a complex underlying causal structure. The detector is evaluated in three ways: 1) identifying skill demonstration for a new student cohort, 2) handling the variability in how students conduct experiments, and 3) using it to determine when students are off-track before they finish collecting data.
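As a rough illustration of what detecting “designing controlled experiments” involves operationally, the rule-based sketch below flags pairs of trials that vary exactly one variable while holding the rest constant. The paper’s detector is machine-learned from log features, so this heuristic is only an illustrative stand-in, and the ecosystem variables are hypothetical.

```python
# Rule-based sketch of a controlled-experimentation check: among the trials a
# student has run, find pairs that differ in exactly one variable. This is an
# illustrative stand-in for the machine-learned detector described above.

def controlled_pairs(trials):
    """Return (i, j, variable) for trial pairs differing in exactly one variable."""
    pairs = []
    for i in range(len(trials)):
        for j in range(i + 1, len(trials)):
            diffs = [v for v in trials[i] if trials[i][v] != trials[j][v]]
            if len(diffs) == 1:
                pairs.append((i, j, diffs[0]))
    return pairs

# Hypothetical ecosystem-simulation trials (variable settings per run).
trials = [
    {"producers": 10, "herbivores": 5, "sunlight": "high"},
    {"producers": 10, "herbivores": 5, "sunlight": "low"},   # controlled contrast on sunlight
    {"producers": 20, "herbivores": 8, "sunlight": "low"},   # confounded: two variables changed
]
print(controlled_pairs(trials))  # [(0, 1, 'sunlight')]
```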
Archive | 2018
Janice D. Gobert; Raha Moussavi; Haiying Li; Michael Sao Pedro; Rachel Dickler
This chapter addresses students’ data interpretation, a key NGSS inquiry practice with which students have several different types of difficulties. In this work, we unpack the difficulties associated with data interpretation from those associated with warranting claims. We do this within the context of Inq-ITS (Inquiry Intelligent Tutoring System), a lightweight LMS providing computer-based assessment and tutoring for science inquiry practices/skills. We conducted a systematic analysis of a subset of our data to address whether our scaffolding supports students in the acquisition and transfer of these inquiry skills. We also describe an additional study, which used Bayesian Knowledge Tracing (Corbett and Anderson, User Model User-Adapt Interact 4(4):253–278, 1995), a computational approach that allows for the analysis of the fine-grained sub-skills underlying our practices of data interpretation and warranting claims.
Modeling and Simulation for Military Operations II | 2007
Nicholas J. Pioch; Corey Lofdahl; Michael Sao Pedro; Basil Krikeles; Liam Morley
To foster shared battlespace awareness in Air Operations Centers supporting the Joint Forces Commander and Joint Force Air Component Commander, BAE Systems is developing a Commanders Model Integration and Simulation Toolkit (CMIST), an Integrated Development Environment (IDE) for model authoring, integration, validation, and debugging. CMIST is built on the versatile Eclipse framework, a widely used open development platform composed of extensible frameworks that enable development of tools for building, deploying, and managing software. CMIST provides two distinct layers: 1) a Commanders IDE for supporting staff to author models spanning the Political, Military, Economic, Social, Infrastructure, Information (PMESII) taxonomy; integrate multiple native (third-party) models; validate model interfaces and outputs; and debug the integrated models via intuitive controls and time-series visualization; and 2) a PMESII IDE for modeling and simulation developers to rapidly incorporate new native simulation tools and models, making them available for use in the Commanders IDE. The PMESII IDE provides shared ontologies and repositories for world state, modeling concepts, and native tool characterization. CMIST includes extensible libraries for 1) reusable data transforms for semantic alignment of native data with the shared ontology, and 2) interaction patterns to synchronize multiple native simulations with disparate modeling paradigms, such as continuous-time system dynamics, agent-based discrete event simulation, and aggregate solution methods such as Monte Carlo sampling over dynamic Bayesian networks. This paper describes the CMIST system architecture, our technical approach to addressing these semantic alignment and synchronization problems, and initial results from integrating Political-Military-Economic models of post-war Iraq spanning multiple modeling paradigms.
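One way to realize synchronization of native simulations with disparate paradigms is a common time-stepped adapter interface, sketched below. The class and method names are hypothetical illustrations, not CMIST’s actual APIs.

```python
# Minimal sketch of a time-stepped adapter pattern for coordinating native
# simulations with different modeling paradigms. Names are hypothetical,
# not CMIST's actual interfaces.
from abc import ABC, abstractmethod

class SimulationAdapter(ABC):
    """Wraps a native model so a toolkit can advance it and exchange shared state."""

    @abstractmethod
    def advance_to(self, sim_time: float) -> None:
        """Advance the native model to the given simulation time."""

    @abstractmethod
    def export_state(self) -> dict:
        """Translate native outputs into the shared world-state representation."""

    @abstractmethod
    def import_state(self, world_state: dict) -> None:
        """Translate shared world state into the native model's inputs."""

def run_integrated(adapters, horizon: float, step: float) -> dict:
    """Lock-step coordination: advance every model one step, then merge and
    redistribute the shared world state before the next step."""
    world_state: dict = {}
    t = 0.0
    while t < horizon:
        t += step
        for adapter in adapters:
            adapter.import_state(world_state)
            adapter.advance_to(t)
        for adapter in adapters:
            world_state.update(adapter.export_state())
    return world_state
```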
artificial intelligence in education | 2013
Janice D. Gobert; Ermal Toto; Michael Brigham; Michael Sao Pedro
We present a study that addressed whether providing students with scaffolding on how to “integrate” science text and animations impacts content learning. Scaffolding was delivered by a pedagogical agent and driven by students’ eye-gaze movements (compared to controls). We hypothesized that students in the pedagogical agent condition would engage in richer learning, as evidenced by a more “integrated” gaze pattern from text to animation and back. In addition to eye-gaze data, we collected pre- and post-test measures of domain knowledge and open responses to explanation-type questions. We are currently analyzing these data.
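A rough sketch of the kind of gaze-driven trigger described above: count transitions between the text and animation areas of interest and scaffold when there is too little back-and-forth. The AOI labels, window, and threshold are assumptions, not the study’s actual rules.

```python
# Minimal sketch of a gaze-driven scaffolding trigger: count switches between
# the 'text' and 'animation' areas of interest (AOIs) in a recent gaze window.
# AOI labels, window size, and threshold are hypothetical.

def count_integration_transitions(gaze_aois):
    """Count switches between the 'text' and 'animation' AOIs."""
    transitions = 0
    for prev, curr in zip(gaze_aois, gaze_aois[1:]):
        if {prev, curr} == {"text", "animation"}:
            transitions += 1
    return transitions

def should_scaffold(gaze_aois, min_transitions=2):
    """Trigger the agent's scaffold if the window shows too little
    back-and-forth between text and animation."""
    return count_integration_transitions(gaze_aois) < min_transitions

recent_window = ["text", "text", "text", "text", "animation", "animation"]
print(should_scaffold(recent_window))  # True: only one switch in this window
```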
The Journal of the Learning Sciences | 2013
Janice D. Gobert; Michael Sao Pedro; Juelaila J. Raziuddin; Ryan S. Baker
educational data mining | 2012
Janice D. Gobert; Michael Sao Pedro; Ryan S. Baker; Ermal Toto; Orlando Montalvo