
Publications


Featured research published by Jack Mostow.


Artificial Intelligence | 1989

Design by derivational analogy: issues in the automated replay of design plans

Jack Mostow

Derivational analogy solves a problem by replaying the plan used to solve a previous problem, modifying it where necessary. We analyze how four published systems use this approach to help design (or redesign) complex artifacts like programs and circuits. We compare how they represent, acquire, and retrieve design plans; how they determine which parts of the old and new designs correspond; how they decide which steps of a design plan are appropriate to replay and adapt them to the new problem; and how they reuse partial plans. We show how each system's approach to these seven issues affects the SCOPE of problems it can solve, its EVOLVABILITY to solve new problems, the QUALITY of its solutions, the EFFICIENCY of its computation, and its AUTONOMY from the user.


Natural Language Engineering | 2006

Some useful tactics to modify, map and mine data from intelligent tutors

Jack Mostow; Joseph E. Beck

Mining data logged by intelligent tutoring systems has the potential to discover information of value to students, teachers, authors, developers, researchers, and the tutors themselves – information that could make education dramatically more efficient, effective, and responsive to individual needs. We factor this discovery process into tactics to modify tutors, map heterogeneous event streams into tabular data sets, and mine them. This model and the tactics identified mark out a roadmap for the emerging area of tutorial data mining, and may provide a useful vocabulary and framework for characterizing past, current, and future work in this area. We illustrate this framework using experiments that tested interventions by an automated reading tutor to help children decode words and comprehend stories.
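The "map" tactic described above, turning heterogeneous event streams into tabular data sets, can be sketched in a few lines. The event types and field names below are hypothetical illustrations, not the Reading Tutor's actual logging schema:

```python
from collections import defaultdict

# Hypothetical tutor event log: (student, event_type, payload).
# Event and field names are illustrative, not from the paper.
events = [
    ("s1", "word_shown",   {"word": "cat"}),
    ("s1", "help_request", {"word": "cat"}),
    ("s1", "word_read",    {"word": "cat", "correct": True}),
    ("s2", "word_shown",   {"word": "dog"}),
    ("s2", "word_read",    {"word": "dog", "correct": False}),
]

def to_table(events):
    """Roll a heterogeneous event stream up into one row per (student, word)."""
    rows = defaultdict(lambda: {"helps": 0, "attempts": 0, "correct": 0})
    for student, etype, payload in events:
        key = (student, payload["word"])
        if etype == "help_request":
            rows[key]["helps"] += 1
        elif etype == "word_read":
            rows[key]["attempts"] += 1
            rows[key]["correct"] += int(payload["correct"])
    return {k: dict(v) for k, v in rows.items()}

table = to_table(events)
```

Each resulting row is then directly usable by standard mining tools, which is the point of the "map" step: the stream's varied event shapes collapse into a fixed set of columns.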


Artificial Intelligence in Engineering | 1989

Automated reuse of design plans

Jack Mostow; Michael W. Barley; Timothy Weinrich

BOGART, part of the VEXED knowledge-based circuit design editor, partially automates circuit (re-)design by mechanically ‘replaying’ the recorded history of design decisions made in a previous design. We illustrate how it designs part of a content-addressable memory by replaying the design plan for a comparator, and how it helps implement several specification changes for a simple arithmetic and logic unit (ALU). We evaluate how well BOGART addresses five general issues raised by this approach to intelligent design automation. BOGART has been used by students in a VLSI course to help design simple NMOS digital circuits, and its techniques have been applied to mechanical design and algorithm design.


Intelligent Tutoring Systems | 2008

Does Help Help? Introducing the Bayesian Evaluation and Assessment Methodology

Joseph E. Beck; Kai-min Chang; Jack Mostow; Albert T. Corbett

Most ITSs have a means of providing assistance to the student, either on student request or when the tutor determines it would be effective. Presumably, such assistance is included by the ITS designers because they believe it benefits the students. However, whether, and how, help helps students has not been a well-studied problem in the ITS community. In this paper we present three approaches for evaluating the efficacy of the Reading Tutor's help: creating experimental trials from data, learning decomposition, and Bayesian Evaluation and Assessment, an approach that uses dynamic Bayesian networks. We have found that experimental trials and learning decomposition both find a negative benefit for help: that is, help hurts! However, the Bayesian Evaluation and Assessment framework finds that help both promotes student long-term learning and provides additional scaffolding on the current problem. We discuss why these approaches give divergent results, and suggest that the Bayesian Evaluation and Assessment framework is the strongest of the three. In addition to introducing Bayesian Evaluation and Assessment, a method for simultaneously assessing students and evaluating tutorial interventions, this paper describes how help can both scaffold the current problem attempt and teach the student knowledge that will transfer to later problems.


Intelligent Tutoring Systems | 2006

A bayes net toolkit for student modeling in intelligent tutoring systems

Kai-min Chang; Joseph E. Beck; Jack Mostow; Albert T. Corbett

This paper describes an effort to model a student's changing knowledge state during skill acquisition. Dynamic Bayes Nets (DBNs) provide a powerful way to represent and reason about uncertainty in time series data, and are therefore well-suited to model student knowledge. Many general-purpose Bayes net packages have been implemented and distributed; however, constructing DBNs often involves complicated coding effort. To address this problem, we introduce a tool called BNT-SM. BNT-SM inputs a data set and a compact XML specification of a Bayes net model hypothesized by a researcher to describe causal relationships among student knowledge and observed behavior. BNT-SM generates and executes the code to train and test the model using the Bayes Net Toolbox [1]. Compared to the BNT code it outputs, BNT-SM reduces the number of lines of code required to use a DBN by a factor of 5. In addition to supporting more flexible models, we illustrate how to use BNT-SM to simulate Knowledge Tracing (KT) [2], an established technique for student modeling. The trained DBN does a better job of modeling and predicting student performance than the original KT code (Area Under Curve = 0.610 > 0.568), due to differences in how it estimates parameters.
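The Knowledge Tracing model that BNT-SM simulates is a two-step per-observation update: a Bayesian revision of p(known) given whether the response was correct (allowing for guess and slip), followed by a learning transition. A minimal sketch, with illustrative parameter values rather than fitted ones:

```python
def kt_update(p_know, correct, p_trans=0.1, p_guess=0.2, p_slip=0.1):
    """One Knowledge Tracing step: Bayesian update on the observation,
    then the learning transition. Parameter values are illustrative."""
    if correct:
        # P(known | correct response), accounting for guessing
        post = p_know * (1 - p_slip) / (p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # P(known | incorrect response), accounting for slips
        post = p_know * p_slip / (p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Learning transition: an unknown skill may become known
    return post + (1 - post) * p_trans

p = 0.4  # p(L0): initial probability the skill is known
for obs in [True, True, False, True]:
    p = kt_update(p, obs)
```

BNT-SM's contribution is that this update need not be hand-coded: the researcher declares the model structure in XML and the tool emits the equivalent BNT training and inference code.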


Intelligent Tutoring Systems | 2008

How Who Should Practice: Using Learning Decomposition to Evaluate the Efficacy of Different Types of Practice for Different Types of Students

Joseph E. Beck; Jack Mostow

A basic question of instruction is how much students will actually learn from it. This paper presents an approach called learning decomposition, which determines the relative efficacy of different types of learning opportunities. This approach is a generalization of learning curve analysis, and uses non-linear regression to determine how to weight different types of practice opportunities relative to each other. We analyze 346 students reading 6.9 million words and show that different types of practice differ reliably in how efficiently students acquire the skill of reading words quickly and accurately. Specifically, massed practice is generally not effective for helping students learn words, and rereading the same stories is not as effective as reading a variety of stories. However, we were able to analyze data for individual students' learning and use bottom-up processing to detect small subgroups of students who did benefit from rereading (11 students) and from massed practice (5 students). The existence of these subgroups has two implications: 1) one-size-fits-all instruction is adequate for perhaps 95% of the student population using computer tutors, but as a community we can do better, and 2) the ITS community is well poised to study what type of instruction is optimal for the individual.
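The core of learning decomposition is fitting an exponential learning curve in which one practice type counts as B equivalent trials of another; B is the relative-efficacy weight the paper refers to. A minimal sketch with synthetic data and a grid search standing in for the non-linear regression (the model form is standard; the parameter values and data are illustrative assumptions):

```python
import math

def predict(A, b, B, wide, massed):
    """Learning-decomposition curve: each massed trial counts as B 'wide' trials."""
    return A * math.exp(-b * (wide + B * massed))

# Synthetic observations (trial counts -> e.g. reading time), purely illustrative:
# generated with A=2.0, b=0.3, and a true weight B=0.5.
data = [((w, m), predict(2.0, 0.3, 0.5, w, m)) for w in range(5) for m in range(5)]

def fit_B(data, A=2.0, b=0.3):
    """Grid-search the relative-efficacy weight B (A and b held fixed for brevity;
    a real analysis would fit all parameters by non-linear regression)."""
    best_B, best_err = None, float("inf")
    for i in range(101):
        B = i / 50.0  # search B in [0, 2]
        err = sum((predict(A, b, B, w, m) - y) ** 2 for (w, m), y in data)
        if err < best_err:
            best_B, best_err = B, err
    return best_B
```

A recovered B below 1 means a massed trial is worth less than a spaced one, which is the form the paper's "massed practice is generally not effective" finding takes.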


International Conference on Multimodal Interfaces | 2002

Experimentally augmenting an intelligent tutoring system with human-supplied capabilities: adding human-provided emotional scaffolding to an automated reading tutor that listens

Gregory Aist; Barry Kort; Rob Reilly; Jack Mostow; Rosalind W. Picard

We present the first statistically reliable empirical evidence from a controlled study for the effect of human-provided emotional scaffolding on student persistence in an intelligent tutoring system. We describe an experiment that added human-provided emotional scaffolding to an automated Reading Tutor that listens, and discuss the methodology we developed to conduct this experiment. Each student participated in one (experimental) session with emotional scaffolding, and in one (control) session without emotional scaffolding, counterbalanced by order of session. Each session was divided into several portions. After each portion of the session was completed, the Reading Tutor gave the student a choice: continue, or quit. We measured persistence as the number of portions the student completed. Human-provided emotional scaffolding added to the automated Reading Tutor resulted in increased student persistence, compared to the Reading Tutor alone. Increased persistence means increased time on task, which ought to lead to improved learning. If these results for reading turn out to hold for other domains too, the implication for intelligent tutoring systems is that they should respond not just with cognitive support, but with emotional scaffolding as well. Furthermore, the general technique of adding human-supplied capabilities to an existing intelligent tutoring system should prove useful for studying other ITSs too.


International Conference on User Modeling, Adaptation, and Personalization | 2003

Predicting student help-request behavior in an intelligent tutor for reading

Joseph E. Beck; Peng Jia; June Sison; Jack Mostow

This paper describes our efforts at constructing a fine-grained student model in Project LISTEN's intelligent tutor for reading. Reading is different from most domains that have been studied in the intelligent tutoring community, and presents unique challenges. Constructing a model of the user from voice input and mouse clicks is difficult, as is constructing a model when there is not a well-defined domain model. We use a database describing student interactions with our tutor to train a classifier that predicts whether students will click on a particular word for help with 83.2% accuracy. We have augmented the classifier with features describing properties of the word's individual graphemes, and discuss how such knowledge can be used to assess student skills that cannot be directly measured.
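The help-request prediction task above can be sketched as a small supervised classifier over per-word features. The features (word length, log frequency) and data below are hypothetical stand-ins, not Project LISTEN's actual feature set, and plain logistic regression stands in for whatever classifier the paper trained:

```python
import math

# Toy feature rows per word encounter: (word_length, log_frequency), with
# label 1 if the student clicked on the word for help. Illustrative data only.
X = [(3, 5.0), (9, 1.0), (4, 4.5), (10, 0.5), (8, 1.2), (3, 4.8)]
y = [0, 1, 0, 1, 1, 0]

def train(X, y, lr=0.1, epochs=2000):
    """Plain logistic regression by stochastic gradient descent (no libraries)."""
    w, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(X, y):
            p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + bias)))
            err = p - target  # gradient of log-loss w.r.t. the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            bias -= lr * err
    return w, bias

def predict(model, x):
    (w, bias), (x1, x2) = model, x
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + bias))) > 0.5

model = train(X, y)
```

On this toy data the learned weights capture the intuitive pattern that long, rare words attract help requests; the paper's contribution is doing this at scale from logged tutor interactions, plus grapheme-level features.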


User Interface Software and Technology | 1995

Demonstration of a reading coach that listens

Jack Mostow; Alexander G. Hauptmann; Steven F. Roth

Project LISTEN stands for “Literacy Innovation that Speech Technology ENables.” We will demonstrate a prototype automated reading coach that displays text on a screen, listens to a child read it aloud, and helps where needed. We have tested successive prototypes of the coach on several dozen second graders. [1] reports implementation details and evaluation results. Here we summarize its functionality, the issues it raises in human-computer interaction, and how it addresses them. We are redesigning the coach based on our experience, and will demonstrate its successor at UIST ’95.


Information and Communication Technologies and Development | 2009

Improving child literacy in Africa: Experiments with an automated reading tutor

G. Ayorkor Mills-Tettey; Jack Mostow; M. Bernardine Dias; Tracy Morrison Sweet; Sarah Belousov; M. Frederick Dias; Haijun Gong

This paper describes a research endeavor aimed at exploring the role that technology can play in improving child literacy in developing communities. An initial pilot study and subsequent four-month-long controlled field study in Ghana investigated the viability and effectiveness of an automated reading tutor in helping urban children enhance their reading skills in English. In addition to quantitative data suggesting that automated tutoring can be useful for some children in this setting, these studies and an additional preliminary pilot study in Zambia yielded useful qualitative observations regarding the feasibility of applying technology solutions to the challenge of enhancing child literacy in developing communities. This paper presents the findings, observations and lessons learned from the field studies.

Collaboration


Dive into Jack Mostow's collaborations.

Top Co-Authors

Joseph E. Beck (Worcester Polytechnic Institute)
Gregory Aist (Carnegie Mellon University)
Andrew Cuneo (Carnegie Mellon University)
Brian Tobin (Carnegie Mellon University)
Albert T. Corbett (Carnegie Mellon University)
Yanbo Xu (Carnegie Mellon University)
Juliet Bey (Carnegie Mellon University)