
Publication


Featured research published by Stephanie Siler.


Cognitive Science | 2001

Learning from human tutoring

Michelene T. H. Chi; Stephanie Siler; Heisawn Jeong; Takashi Yamauchi; Robert G.M. Hausmann

Human one-to-one tutoring has been shown to be a very effective form of instruction. Three contrasting hypotheses, a tutor-centered one, a student-centered one, and an interactive one, could all potentially explain the effectiveness of tutoring. To test these hypotheses, analyses focused not only on the effectiveness of the tutors' moves, but also on the effect of the students' own constructive activity on learning, as well as on their interaction. The interaction hypothesis was further tested in a second study by manipulating the tutoring tactics tutors were permitted to use. To promote an interactive rather than a didactic style of dialogue, tutors were prohibited from giving explanations and feedback; instead, they were encouraged to prompt the students. Surprisingly, students learned just as effectively even when tutors were prevented from giving explanations and feedback. Their learning in the interactive style of tutoring is attributed to constructing knowledge during a greater number of deeper scaffolding episodes, as well as to their greater effort to take control of their own learning by reading more. What they learned from reading was limited, however, by their reading abilities.


Cognition and Instruction | 2003

Why Do Only Some Events Cause Learning During Human Tutoring?

Kurt VanLehn; Stephanie Siler; Charles Murray; Takashi Yamauchi; William B. Baggett

Developers of intelligent tutoring systems would like to know what human tutors do and which activities are responsible for their success in tutoring. We address these questions by comparing episodes where tutoring does and does not cause learning. Approximately 125 hr of tutorial dialog between expert human tutors and physics students are analyzed to see what features of the dialog are associated with learning. Successful learning appears to require that the student reach an impasse. When students were not at an impasse, learning was uncommon regardless of the tutorial explanations employed. On the other hand, once students were at an impasse, tutorial explanations were sometimes associated with learning. Moreover, for different types of knowledge, different types of tutorial explanations were associated with learning different types of knowledge.


Intelligent Tutoring Systems | 2002

The Architecture of Why2-Atlas: A Coach for Qualitative Physics Essay Writing

Kurt VanLehn; Pamela W. Jordan; Carolyn Penstein Rosé; Dumisizwe Bhembe; Michael Böttner; Andy Gaydos; Maxim Makatchev; Umarani Pappuswamy; Michael A. Ringenberg; Antonio Roque; Stephanie Siler; Ramesh Srivastava

The Why2-Atlas system teaches qualitative physics by having students write paragraph-long explanations of simple mechanical phenomena. The tutor uses deep syntactic analysis and abductive theorem proving to convert the student's essay into a proof. The proof formalizes not only what was said, but also the likely beliefs behind what was said. This allows the tutor to uncover misconceptions as well as to detect missing correct parts of the explanation. If the tutor finds such a flaw in the essay, it conducts a dialogue intended to remedy the missing or misconceived beliefs, then asks the student to correct the essay. It often takes several iterations of essay correction and dialogue to get the student to produce an acceptable explanation. Pilot subjects have been run, and an evaluation is in progress. After explaining the research questions that the system addresses, the bulk of the paper describes the system's architecture and operation.


Intelligent Tutoring Systems | 1998

Student Modeling from Conversational Test Data: A Bayesian Approach Without Priors

Kurt VanLehn; Zhendong Niu; Stephanie Siler; Abigail S. Gertner

Although conventional tests are often used for determining a student's overall competence, they are seldom used for determining a fine-grained student model. However, this problem does arise occasionally, such as when a conventional test is used to initialize the student model of an ITS. Existing psychometric techniques for solving this problem are intractable. Straightforward Bayesian techniques are also inapplicable because they depend too strongly on the priors, which are often not available. Our solution is to base the assessment on the difference between the prior and posterior probabilities. If the test data raise the posterior probability of mastery of a piece of knowledge even slightly above its prior probability, then that is interpreted as evidence that the student has mastered that piece of knowledge. Evaluation of this technique with artificial students indicates that it can deliver highly accurate assessments.
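The core idea of the abstract, judging mastery by the direction of change from prior to posterior rather than by the posterior's absolute value, can be illustrated with a minimal sketch. The update rule and the `slip`/`guess` parameters below are standard Bayesian-update conventions chosen for illustration, not details taken from the paper:

```python
# Hypothetical sketch of prior-vs-posterior mastery assessment.
# slip  = P(wrong answer | knowledge mastered)
# guess = P(correct answer | knowledge not mastered)
# These parameter names and values are illustrative assumptions.

def posterior_mastery(prior, answers, slip=0.1, guess=0.2):
    """Bayesian update of P(mastered) from a sequence of correct/incorrect answers."""
    p = prior
    for correct in answers:
        if correct:
            like_mastered = 1 - slip    # mastered students rarely slip
            like_unmastered = guess     # unmastered students sometimes guess right
        else:
            like_mastered = slip
            like_unmastered = 1 - guess
        evidence = like_mastered * p + like_unmastered * (1 - p)
        p = like_mastered * p / evidence
    return p

def assess(prior, answers):
    """Judge mastery by whether the data raised the posterior above the prior,
    so the verdict does not hinge on the (unknown) absolute prior value."""
    return "mastered" if posterior_mastery(prior, answers) > prior else "not mastered"
```

For example, starting from a prior of 0.3, two correct answers raise the posterior above 0.3 and the student is judged to have mastered that piece of knowledge, while two incorrect answers lower it. The same verdicts follow from other moderate priors, which is the point of using the difference rather than a threshold on the posterior itself.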


Intelligent Tutoring Systems | 2002

A Hybrid Language Understanding Approach for Robust Selection of Tutoring Goals

Carolyn Penstein Rosé; Dumisizwe Bhembe; Antonio Roque; Stephanie Siler; Ramesh Srivastava; Kurt VanLehn

In this paper, we explore the problem of selecting appropriate interventions for students based on an analysis of their interactions with a tutoring system. In the context of the WHY2 conceptual physics tutoring system, we describe CarmelTC, a hybrid symbolic/statistical approach for analysing conceptual physics explanations in order to determine which Knowledge Construction Dialogues (KCDs) students need, for the purpose of encouraging them to include important points that are missing. We briefly describe our tutoring approach. We then present a model that demonstrates a general problem with selecting interventions based on an analysis of student performance in circumstances where the interpretation is uncertain, such as with speech or text-based natural language input, complex and error-prone mathematical or other formal language input, graphical input (e.g., diagrams), or gestures. In particular, when student performance completeness is high, intervention selection accuracy is more sensitive to analysis accuracy, and increasingly so as performance completeness increases. In light of this model, we have evaluated our CarmelTC approach and have demonstrated that it performs favourably in comparison with the widely used LSA approach, a Naive Bayes approach, and finally a purely symbolic approach.


Cognition and Instruction | 2004

Can Tutors Monitor Students' Understanding Accurately?

Michelene T. H. Chi; Stephanie Siler; Heisawn Jeong


Archive | 2000

Interactive Conceptual Tutoring in Atlas-Andes

Carolyn P. Rosé; Reva Freedman; Pamela W. Jordan; Michael A. Ringenberg; Antonio Roque; Kay G. Schulze; Stephanie Siler; Donald Treacy; Kurt VanLehn; Anders Weinstein


Archive | 2002

Student Initiative and Questioning Strategies in Computer-Mediated Human Tutoring Dialogues

Pamela W. Jordan; Stephanie Siler


Proceedings of the Annual Meeting of the Cognitive Science Society | 2003

Accuracy of Tutors’ Assessments of their Students by Tutoring Context

Stephanie Siler; Kurt VanLehn


Archive | 2012

Detecting, Classifying, and Remediating

Stephanie Siler; David Klahr

Collaboration


Top co-authors of Stephanie Siler:

David Klahr, Carnegie Mellon University
Kurt VanLehn, Arizona State University
Antonio Roque, University of Southern California
Cressida Magaro, Carnegie Mellon University