
Publication


Featured research published by Philip I. Pavlik.


Cognitive Science | 2005

Practice and forgetting effects on vocabulary memory: an activation-based model of the spacing effect.

Philip I. Pavlik; John R. Anderson

An experiment was performed to investigate the effects of practice and spacing on retention of Japanese-English vocabulary paired associates. The relative benefit of spacing increased with increased practice and with longer retention intervals. Data were fitted with an activation-based memory model, which proposes that each time an item is practiced it receives an increment of strength but that these increments decay as a power function of time. The rate of decay for each presentation depended on the activation at the time of the presentation. This mechanism limits long-term benefits from further practice at higher levels of activation and produces the spacing effect and its observed interactions with practice and retention interval. The model was compared with another model of the spacing effect (Raaijmakers, 2003) and was fit to some results from the literature on spacing and memory.
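The activation mechanism described in this abstract can be sketched in a few lines of Python. This is an illustrative reading of the model rather than the paper's exact implementation, and the parameter values (`c`, `a`, `tau`, `s`) are placeholders, not the fitted values reported in the paper.

```python
import math

def next_decay(m_prev, c=0.2, a=0.18):
    """Decay rate for a new practice: d_n = c * exp(m_prev) + a.
    Practicing at higher activation yields a faster-decaying trace,
    which limits the benefit of massed practice (the spacing effect)."""
    return c * math.exp(m_prev) + a

def activation(ages, decays):
    """Activation is the log of summed power-decayed trace strengths:
    m = ln(sum_i t_i^(-d_i)), where t_i is the age of practice i."""
    return math.log(sum(t ** -d for t, d in zip(ages, decays)))

def recall_probability(m, tau=-0.7, s=0.26):
    """Logistic mapping from activation to probability of recall."""
    return 1.0 / (1.0 + math.exp((tau - m) / s))

def study_schedule(practice_times, test_time):
    """Assign each practice a decay based on activation at the moment it
    occurs, then return the resulting activation at test_time."""
    decays = []
    for i, when in enumerate(practice_times):
        if i == 0:
            m_prev = float("-inf")  # no prior traces: exp(-inf) = 0, so d = a
        else:
            ages = [when - p for p in practice_times[:i]]
            m_prev = activation(ages, decays)
        decays.append(next_decay(m_prev))
    return activation([test_time - p for p in practice_times], decays)
```

With these placeholder parameters, a spaced schedule (practices at times 0, 100, and 200) yields higher activation at a retention test at time 1000 than a massed schedule (0, 1, 2), reproducing the qualitative interaction of spacing with retention interval that the abstract describes.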


Journal of Experimental Psychology: Applied | 2008

Using a model to compute the optimal schedule of practice.

Philip I. Pavlik; John R. Anderson

By balancing the spacing effect against the effects of recency and frequency, this paper explains how practice may be scheduled to maximize learning and retention. In an experiment, an optimized condition using an algorithm derived with this method was compared with other conditions. The optimized condition showed significant benefits, with large effect sizes for both improved recall and recall latency. The optimization method achieved these benefits through a modeling approach that yields a quantitative algorithm for dynamically maximizing learning: for each item, it identifies the spacing interval at which long-term gain per unit of practice time is maximal, balancing wider temporal spacing (which produces better long-term recall) against narrower spacing (which reduces the failure-related time cost of each practice). As practice repetitions accumulate, each item becomes more stable in memory and this optimal interval increases.


Cognitive Science | 2013

iMinerva: A Mathematical Model of Distributional Statistical Learning

Erik D. Thiessen; Philip I. Pavlik

Statistical learning refers to the ability to identify structure in the input based on its statistical properties. For many linguistic structures, the relevant statistical features are distributional: They are related to the frequency and variability of exemplars in the input. These distributional regularities have been suggested to play a role in many different aspects of language learning, including phonetic categories, using phonemic distinctions in word learning, and discovering non-adjacent relations. On the surface, these different aspects share few commonalities. Despite this, we demonstrate that the same computational framework can account for learning in all of these tasks. These results support two conclusions. The first is that much, and perhaps all, of distributional statistical learning can be explained by the same underlying set of processes. The second is that some aspects of language can be learned due to domain-general characteristics of memory.


Artificial Intelligence in Education | 2011

Using contextual factors analysis to explain transfer of least common multiple skills

Philip I. Pavlik; Michael Yudelson; Kenneth R. Koedinger

Transfer of learning to new or different contexts has always been a chief concern of education because, unlike training for a specific job, education must establish skills without knowing exactly how those skills might be called upon. Research on transfer can be difficult because it is often superficially unclear why transfer occurs, or more frequently does not, in a particular paradigm. While initial results with Learning Factors Transfer (LiFT) analysis (a search procedure using Performance Factors Analysis, PFA) show that more predictive models can be built by paying attention to these transfer factors [1, 2], like preceding models such as AFM (Additive Factors Model) [3], these models rely on a Q-matrix analysis that treats skills as discrete units at transfer. Because of this discrete treatment, the models are more parsimonious, but may lose resolution on aspects of component transfer. To improve understanding of this transfer, we develop new logistic regression model variants that predict learning differences as a function of the context of learning. One advantage of these models is that they allow us to disentangle learning of transferable knowledge from the actual transfer performance episodes.
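As a rough sketch of the Performance Factors Analysis (PFA) family that these logistic regression variants extend: PFA predicts a response as a logistic function of per-knowledge-component difficulty plus weighted counts of the student's prior successes and failures. The parameter values below are invented for illustration, not taken from the paper.

```python
import math

def pfa_probability(kcs, successes, failures):
    """Performance Factors Analysis prediction for one item attempt.

    kcs: list of (beta, gamma, rho) triples, one per knowledge component
         (KC) the item exercises: beta = KC easiness, gamma = credit per
         prior success, rho = credit per prior failure.
    successes, failures: per-KC counts of the student's prior outcomes.

    logit(p) = sum_k (beta_k + gamma_k * s_k + rho_k * f_k)
    """
    logit = sum(beta + gamma * s + rho * f
                for (beta, gamma, rho), s, f in zip(kcs, successes, failures))
    return 1.0 / (1.0 + math.exp(-logit))
```

A contextual variant of the kind the paper develops would additionally condition the success and failure weights on the context in which each prior practice occurred, rather than treating all prior attempts on a KC identically.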


Topics in Cognitive Science | 2016

Testing Theories of Transfer Using Error Rate Learning Curves

Kenneth R. Koedinger; Michael Yudelson; Philip I. Pavlik

We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question about the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed during learning, causing a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power to explain task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students but hurt generalization across items and sacrifice explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions.


Discourse Processes | 2016

A New Measure of Text Formality: An Analysis of Discourse of Mao Zedong

Haiying Li; Arthur C. Graesser; Mark W. Conley; Zhiqiang Cai; Philip I. Pavlik; James W. Pennebaker

Formality has long been of interest in the study of discourse, with periodic discussions of the best measure of formality and the relationship between formality and text categories. In this research, we explored what features predict formality as humans perceive the construct. We categorized a corpus consisting of 1,158 discourse samples published in the Collected/Selected Works of Mao Zedong into the following categories: conversations, speeches, letters, comments, published articles, telegrams, and official documents. We developed two models of human formality perception: one measured at the multitextual level and one at the word level. We compared these two models with the previous metrics of formality when predicting human formality judgments. The weighted formality model at multiple levels of language, discourse, and psychological features best captured the concept of formality as humans perceive it using genre (narrativity), discourse cohesion, topic-related words (e.g., embodiment), and emotional words (positive emotion and negative emotion).


International Conference on User Modeling, Adaptation, and Personalization | 2011

User modeling: a notoriously black art

Michael Yudelson; Philip I. Pavlik; Kenneth R. Koedinger

This paper is intended as guidance for those who are familiar with the user modeling field but are less fluent in statistical methods. It addresses potential problems with user model selection and evaluation that are often clear to expert modelers but are not obvious to others. These problems are frequently the result of a deceptively straightforward application of statistics to user modeling (e.g., over-reliance on model fit metrics). In such cases, absolute trust in arguably shallow model accuracy measures can lead to selecting models that are hard to interpret, less meaningful, over-fit, and less generalizable. We offer a list of questions to consider in order to avoid these modeling pitfalls. Each of the listed questions is backed by an illustrative example based on the user modeling approach called Performance Factors Analysis (PFA) [9].


International Journal of STEM Education | 2018

ElectronixTutor: An Intelligent Tutoring System with Multiple Learning Resources for Electronics.

Arthur C. Graesser; Xiangen Hu; Benjamin D. Nye; Kurt VanLehn; Rohit Kumar; Cristina Heffernan; Neil T. Heffernan; Beverly Park Woolf; Andrew Olney; Vasile Rus; Frank Andrasik; Philip I. Pavlik; Zhiqiang Cai; Jon Wetzel; Brent Morgan; Andrew J. Hampton; Anne Lippert; Lijia Wang; Qinyu Cheng; Joseph E. Vinson; Craig Kelly; Cadarrius McGlown; Charvi A. Majmudar; Bashir I. Morshed; Whitney O. Baer

Background: The Office of Naval Research (ONR) organized a STEM Challenge initiative to explore how intelligent tutoring systems (ITSs) can be developed in a reasonable amount of time to help students learn STEM topics. This competitive initiative sponsored four teams that separately developed systems covering topics in mathematics, electronics, and dynamical systems. After the teams shared their progress at the conclusion of an 18-month period, the ONR decided to fund a joint applied project in the Navy that integrated those systems on the subject matter of electronic circuits. The University of Memphis took the lead in integrating these systems into an intelligent tutoring system called ElectronixTutor. This article describes the architecture of ElectronixTutor, the learning resources that feed into it, and the empirical findings that support the effectiveness of its constituent ITS learning resources.

Results: A fully integrated ElectronixTutor was developed that includes several intelligent learning resources (AutoTutor, Dragoon, LearnForm, ASSISTments, BEETLE-II) as well as texts and videos. The architecture includes a student model that has (a) a common set of knowledge components on electronic circuits to which individual learning resources contribute and (b) a record of student performance on the knowledge components as well as a set of cognitive and non-cognitive attributes. A recommender system uses the student model to guide the student toward a small set of sensible next steps in their training. The individual components of ElectronixTutor have shown learning gains in previous decades of research.

Conclusions: The ElectronixTutor system successfully combines multiple empirically based components into one system to teach a STEM topic (electronics) to students. A prototype of this intelligent tutoring system has been developed and is currently being tested. ElectronixTutor is unique in assembling a group of well-tested intelligent tutoring systems into a single integrated learning environment.


Artificial Intelligence in Education | 2015

A Measurement Model of Microgenetic Transfer for Improving Instructional Outcomes

Philip I. Pavlik; Michael Yudelson; Kenneth R. Koedinger

Efforts to improve instructional task design often make reference to the mental structures, such as “schemas” (e.g., Gick & Holyoak, 1983) or “identical elements” (Thorndike & Woodworth, 1901), that are common to both the instructional and target tasks. This component-based (e.g., Singley & Anderson, 1989) approach has been employed in psychometrics (Tatsuoka, 1983), cognitive science (Koedinger & MacLaren, 2002), and most recently in educational data mining (Cen, Koedinger, & Junker, 2006). A typical assumption of these theory-based models is that an itemization of “knowledge components” shared between tasks is sufficient to predict transfer between those tasks. In this paper we step back from these more cognitive theory-based models of transfer and suggest a psychometric measurement model that removes most cognitive assumptions, allowing us to understand the data without the bias of a theory of transfer or domain knowledge. The goal of this work is to provide a methodology that lets researchers analyze complex data without the theoretical assumptions that are clearly part of other methods. Our experimentally controlled examples illustrate the non-intuitive nature of some transfer situations, which motivates the need for the unbiased analysis that our model provides. We explain how to use this Contextual Performance Factors Analysis (CPFA) model to measure learning progress of related skills at a fine granularity. The CPFA analysis then allows us to answer questions regarding the best order of practice for related skills and the appropriate amount of repetition depending on whether students are succeeding or failing with each individual practice problem. We conclude by describing how the model allows us to test theories, discussing how well two different cognitive theories agree with the qualitative results of the model.


International Journal of STEM Education | 2018

SKOPE-IT (Shareable Knowledge Objects as Portable Intelligent Tutors): overlaying natural language tutoring on an adaptive learning system for mathematics

Benjamin D. Nye; Philip I. Pavlik; Alistair Windsor; Andrew Olney; Mustafa H. Hajeer; Xiangen Hu

Background: This study investigated learning outcomes and user perceptions from interactions with a hybrid intelligent tutoring system created by combining the AutoTutor conversational tutoring system with the Assessment and Learning in Knowledge Spaces (ALEKS) adaptive learning system for mathematics. This hybrid intelligent tutoring system (ITS) uses a service-oriented architecture to combine the two web-based systems. Self-explanation tutoring dialogs were used to talk students through step-by-step worked examples of algebra problems. Each worked example presented a problem isomorphic to the preceding algebra problem that the student could not solve in the adaptive learning system.

Results: Due to crossover issues between conditions, experimental versus control condition assignment did not show significant differences in learning gains. However, strong dose-dependent learning gains were observed that could not otherwise be explained by either initial mastery or time-on-task. User perceptions of the dialog-based tutoring were mixed, and survey results indicate that this may be due to the pacing of dialog-based tutoring using voice, students judging the agents based on their own performance (i.e., the quality of their answers to agent questions), and the students’ expectations about mathematics pedagogy (i.e., expecting to solve problems rather than talk about concepts). Across all users, learning was most strongly influenced by time spent studying, which correlated with students’ self-reported tendencies toward effort avoidance, effective study habits, and beliefs about their ability to improve in mathematics with effort.

Conclusions: Integrating multiple adaptive tutoring systems with complementary strengths shows some potential to improve learning. However, managing learner expectations during transitions between systems remains an open research area. Finally, while personalized adaptation can improve learning efficiency, effort and time-on-task remain dominant factors that must be considered by interventions.

Collaboration


Dive into Philip I. Pavlik's collaborations.

Top Co-Authors

Michael Yudelson

Carnegie Mellon University

John R. Anderson

Carnegie Mellon University