Publications


Featured research published by Patricia L. Albacete.


Conference on Computers and Accessibility | 1994

Iconic language design for people with significant speech and multiple impairments

Patricia L. Albacete; Shi-Kuo Chang; Giuseppe Polese; Bruce R. Baker

We present an approach to iconic language design for people with significant speech and multiple impairments (SSMI), based upon the theory of Icon Algebra and the theory of Conceptual Dependency (CD), to derive the meaning of iconic sentences. An interactive design environment based upon this methodology is described.


Artificial Intelligence in Education | 2015

Is a Dialogue-Based Tutoring System that Emulates Helpful Co-constructed Relations During Human Tutoring Effective?

Patricia L. Albacete; Pamela W. Jordan; Sandra Katz

We present an initial field evaluation of Rimac, a natural-language tutoring system which implements decision rules that simulate the highly interactive nature of human tutoring. We compared this rule-driven version of the tutor with a non-rule-driven control in high school physics classes. Although students learned from both versions of the system, the experimental group outperformed the control group. A particularly interesting finding is that the experimental version was especially beneficial for female students.


Artificial Intelligence in Education | 2013

Pilot Test of a Natural-Language Tutoring System for Physics That Simulates the Highly Interactive Nature of Human Tutoring

Sandra Katz; Patricia L. Albacete; Michael E. Ford; Pamela W. Jordan; Michael Lipschultz; Diane J. Litman; Scott Silliman; Christine Wilson

This poster describes Rimac, a natural-language tutoring system that engages students in dialogues that address physics concepts and principles, after students have solved quantitative physics problems. We summarize our approach to deriving decision rules that simulate the highly interactive nature of human tutoring, and describe a pilot test that compares two versions of Rimac: an experimental version that deliberately executes these decision rules within a Knowledge Construction Dialogue (KCD) framework, and a control KCD system that does not intentionally execute these rules.


European Conference on Technology Enhanced Learning | 2017

The “Grey Area”: A Computational Approach to Model the Zone of Proximal Development

Irene-Angelica Chounta; Patricia L. Albacete; Pamela W. Jordan; Sandra Katz; Bruce M. McLaren

In this paper, we propose a computational approach to model the Zone of Proximal Development (ZPD) using predicted probabilities of correctness while students engage in reflective dialogue. We employ a predictive model that uses a linear function of a variety of parameters, including difficulty and student knowledge, as students use a natural-language tutoring system that presents conceptual reflection questions after they solve high-school physics problems. To operationalize our approach, we introduce the concept of the “Grey Area”, that is, the area of uncertainty in which the student model cannot predict with acceptable accuracy whether a student is able to give a correct answer without support. We further discuss the impact of our approach on student modeling, the limitations of this work, and future work on systematically and rigorously evaluating the approach.
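
As a rough illustration of the “Grey Area” idea, the following Python sketch classifies a student-model prediction into three regions: questions the student can likely answer unaided, questions that are likely out of reach even with support, and an uncertainty band in between that serves as a proxy for the ZPD. The threshold values and function name are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of the "Grey Area" idea: the student model yields a
# probability that the student answers the next question correctly without
# support; predictions inside an uncertainty band are treated as ZPD candidates.

GREY_LOW = 0.4    # assumed lower bound of the uncertainty band (illustrative only)
GREY_HIGH = 0.6   # assumed upper bound of the uncertainty band (illustrative only)

def classify_prediction(p_correct: float) -> str:
    """Map a predicted probability of correctness to a pedagogical region."""
    if p_correct >= GREY_HIGH:
        return "likely correct unaided"            # little scaffolding needed
    if p_correct <= GREY_LOW:
        return "likely incorrect without support"  # probably too hard right now
    return "grey area (ZPD candidate)"             # uncertain: target for scaffolded dialogue

if __name__ == "__main__":
    for p in (0.15, 0.50, 0.82):
        print(f"p={p:.2f} -> {classify_prediction(p)}")
```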


Artificial Intelligence in Education | 2017

Adapting Step Granularity in Tutorial Dialogue Based on Pretest Scores

Pamela W. Jordan; Patricia L. Albacete; Sandra Katz

We explore the effectiveness of adaptively deciding whether to further decompose a step in a line of reasoning during tutorial dialogue based on students’ pretest scores. We compare two versions of a tutorial dialogue system in high school classrooms: one that always decomposes a step to its simplest substeps and one that adaptively decides to decompose a step based on a student’s performance on pretest items that target the knowledge required to correctly answer that step. We hypothesize that students using the two versions of the tutoring system will learn similarly but that students who use the version that adaptively decomposes a step will learn more efficiently. Our results from classroom studies offer partial support for this hypothesis: although students overall learned similar amounts with similar efficiency across conditions, high-prior-knowledge students in the adaptive condition learned similar amounts to, but significantly more efficiently than, high-prior-knowledge students in the control condition.
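
The adaptive decision described above can be sketched roughly as follows, assuming each dialogue step is annotated with the pretest items that target the knowledge it requires; the data structures, the mastery threshold, and the function names are hypothetical.

```python
# Hypothetical sketch: decide whether to decompose a dialogue step into substeps
# based on the student's performance on the pretest items that target that step.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Step:
    prompt: str
    pretest_items: List[str]                       # ids of pretest items targeting this step
    substeps: List["Step"] = field(default_factory=list)

MASTERY_THRESHOLD = 0.67   # assumed cutoff, not the value used in the study

def steps_to_ask(step: Step, pretest: Dict[str, bool]) -> List[Step]:
    """Return the steps the adaptive tutor would actually ask the student."""
    scores = [pretest.get(item, False) for item in step.pretest_items]
    mastery = sum(scores) / len(scores) if scores else 0.0
    if mastery >= MASTERY_THRESHOLD or not step.substeps:
        return [step]                              # keep the step at its original grain size
    # otherwise recurse into the simpler substeps
    return [s for sub in step.substeps for s in steps_to_ask(sub, pretest)]
```

The control condition in the study always decomposes a step to its simplest substeps; in this sketch that would correspond to ignoring the mastery check and always recursing into the substeps.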


Artificial Intelligence in Education | 2013

Interactive Event: The Rimac Tutor - A Simulation of the Highly Interactive Nature of Human Tutorial Dialogue

Pamela W. Jordan; Patricia L. Albacete; Michael E. Ford; Sandra Katz; Michael Lipschultz; Diane J. Litman; Scott Silliman; Christine Wilson

Rimac is a natural-language intelligent tutoring system that engages students in dialogues that address physics concepts and principles, after they have solved quantitative physics problems. Much research has been devoted to identifying features of tutorial dialogue that can explain its effectiveness (e.g., [1]), so that these features can be simulated in natural-language tutoring systems. One hypothesis is that the highly interactive nature of tutoring itself promotes learning. Several studies indicate that our understanding of interactivity needs refinement, because it cannot be defined simply by the amount or granularity of the interaction but must also take into consideration how well the interaction is carried out (e.g., [2]). This need for refinement suggests that we should more closely examine the linguistic mechanisms evident in tutorial dialogue. Towards this end, we first identified which of a subset of co-constructed discourse relations correlate with learning and operationalized our findings with a set of nine decision rules, which we implemented in Rimac [3]. To test for causality, we are conducting pilot tests that compare learning outcomes for two versions of Rimac: an experimental version that deliberately executes the nine decision rules within a Knowledge Construction Dialogue (KCD) framework, and a control KCD system that does not intentionally execute these rules.

In this interactive event, participants will experience the two versions of the system that students have been using in high school classrooms during pilot testing. Students first take a pre-test and then complete a homework assignment in which they solve four quantitative physics problems. In a subsequent class they use the Rimac system, and during the next class meeting they take a post-test. When working with Rimac, students are asked to first view a brief video that describes how to solve a homework problem and are then engaged in a reflective dialogue about that problem. See [4] for a more detailed description of the pilot study and planned analyses. Demo participants will have the opportunity to experience exactly what the students experience when working with Rimac: they will see the video and engage in a reflective dialogue about that problem with the highly interactive tutor.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2015

Exploring the Effects of Redundancy within a Tutorial Dialogue System: Restating Students' Responses

Pamela W. Jordan; Patricia L. Albacete; Sandra Katz

Although restating part of a student’s correct response correlates with learning, and various types of restatements have been incorporated into tutorial dialogue systems, this tactic has not been tested in isolation to determine whether it causally contributes to learning. When we explored the effect of tutor restatements that support inference on student learning, we found that they did not benefit all students equally: students with lower incoming knowledge tend to benefit more from an increased level of these types of restatement, while students with higher incoming knowledge tend to benefit more from a decreased level of such restatements. This finding has implications for tutorial dialogue system design, since an inappropriate use of restatements could dampen learning.


The International Journal of Learning | 2014

Predicting semantic changes in abstraction in tutor responses to students

Michael Lipschultz; Diane J. Litman; Sandra Katz; Patricia L. Albacete; Pamela W. Jordan

Post-problem reflective tutorial dialogues between human tutors and students are examined to predict when the tutor changed the level of abstraction from the student's preceding turn (i.e., used more general terms or more specific terms); such changes correlate with learning. Prior work examined lexical changes in abstraction; in this work, we consider semantic changes. Since we are interested in developing a fully automatic computer-based tutor, we use only automatically extractable features (e.g., percent of domain words in the student turn) or features available in a tutoring system (e.g., correctness). We find patterns that predict tutor changes in abstraction better than a majority-class baseline. Generalisation is best predicted using student and reflection features; specification is best predicted using student and problem features.
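
To make the prediction task concrete, here is a minimal sketch of the kind of feature-based classification the abstract describes, compared against a majority-class baseline. The feature set, labels, and toy data are purely illustrative assumptions, not the paper's corpus or results, and the model is scored on its own training data only to show the API.

```python
# Hypothetical sketch: predict whether the tutor generalises, specifies, or keeps
# the same level of abstraction, using only features a tutoring system could
# extract automatically. Toy data for illustration only.

from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression

# each row: [fraction of domain words in the student turn,
#            student answer correct (0/1),
#            index of the reflection question within the dialogue]
X = [[0.10, 1, 1], [0.45, 0, 2], [0.30, 1, 3],
     [0.60, 0, 1], [0.25, 1, 2], [0.50, 0, 3]]
y = ["same", "specify", "generalise", "specify", "same", "generalise"]

baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
model = LogisticRegression(max_iter=1000).fit(X, y)

print("majority-class baseline accuracy:", baseline.score(X, y))
print("feature-based model accuracy:    ", model.score(X, y))
```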


Artificial Intelligence in Education | 2018

Providing Proactive Scaffolding During Tutorial Dialogue Using Guidance from Student Model Predictions

Patricia L. Albacete; Pamela W. Jordan; Dennis Lusetich; Irene-Angelica Chounta; Sandra Katz; Bruce M. McLaren

This paper discusses how a dialogue-based tutoring system makes decisions to proactively scaffold students during conceptual discussions about physics. The tutor uses a student model to predict the likelihood that the student will answer the next question in a dialogue script correctly. Based on these predictions, the tutor will, step by step, choose the granularity at which the next step in the dialogue is discussed. The tutor attempts to pursue the discussion at the highest possible level, with the goal of helping the student achieve mastery, but with the constraint that the questions it asks are within the student’s ability to answer when appropriately supported; that is, the tutor aims to stay within its estimate of the student’s zone of proximal development for the targeted concepts. The scaffolding provided by the tutor is further adapted by adjusting the way the questions are expressed.
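
A minimal sketch of this decision loop, assuming the tutor has several formulations of the next step ordered from the coarsest (most abstract) to the finest (most scaffolded) grain size and a student-model callback that predicts the probability of a correct answer; the names and the threshold are assumptions, not the system's actual interface.

```python
# Hypothetical sketch: ask the next question at the highest level of granularity
# that the student model predicts the student can answer with appropriate support.

from typing import Callable, List

MIN_P_SUPPORTED = 0.4   # assumed minimum predicted probability of success with support

def choose_question(candidates: List[str],
                    predict_correct: Callable[[str], float]) -> str:
    """candidates are ordered from the coarsest formulation to the most decomposed one."""
    for question in candidates:                       # prefer the highest-level formulation
        if predict_correct(question) >= MIN_P_SUPPORTED:
            return question                           # within the estimated ZPD
    return candidates[-1]                             # fall back to the most supported version
```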


Artificial Intelligence in Education | 2018

A Comparison of Tutoring Strategies for Recovering from a Failed Attempt During Faded Support

Pamela W. Jordan; Patricia L. Albacete; Sandra Katz

The support the tutor provides for a student is expected to fade over time as the student makes progress towards mastery of the learning objectives. One way in which the tutor can fade support is to prompt or elicit a next step that requires the student to fill in some intermediate actions or reasoning on her own. But what should the tutor do if the student is unable to complete such a step? In human-human tutoring interactions, a tutor may remediate by explicitly covering the missing intermediate steps with the student, and in some contexts this behavior correlates with learning. But if there are multiple intermediate steps that need to be made explicit, the tutor could focus the student’s attention on the last successful step and then move forward through the intermediate steps (forward reasoning), or the tutor could focus the student’s attention on the intermediate step just before the failed step and move backward through the intermediate steps (backward reasoning). In this paper we explore when the forward strategy or the backward strategy may be beneficial for remediation. We also compared the two faded-support+remediation strategies to a control in which support is never faded. We found that faded support was not detrimental to student learning outcomes when the two remediation strategies were available, and that it took significantly less time on task to achieve similar learning gains when the tutor-student interaction started with faded support.
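
To make the two remediation orderings concrete, here is a small sketch, assuming the intermediate steps between the last successful step and the failed step are available as a list in their natural (forward) order; the representation and names are hypothetical.

```python
# Hypothetical sketch of the forward and backward remediation orderings.
# `intermediate` holds the skipped steps between the last successful step and
# the failed step, listed in the forward order of the line of reasoning.

from typing import List

def remediation_order(intermediate: List[str], strategy: str) -> List[str]:
    if strategy == "forward":
        # start just after the last successful step and move ahead toward the failed step
        return list(intermediate)
    if strategy == "backward":
        # start just before the failed step and move back toward the last successful step
        return list(reversed(intermediate))
    raise ValueError(f"unknown strategy: {strategy}")

# illustrative example: three implicit substeps were skipped by the faded prompt
steps = ["identify the forces acting on the block",
         "apply Newton's second law along the incline",
         "solve for the acceleration"]
print(remediation_order(steps, "backward"))
```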

Collaboration


Dive into Patricia L. Albacete's collaborations.

Top Co-Authors

Sandra Katz
University of Pittsburgh

Kurt VanLehn
Arizona State University

Bruce M. McLaren
Carnegie Mellon University

Scott Silliman
University of Pittsburgh