Charles F. Schmidt
Rutgers University
Publications
Featured research published by Charles F. Schmidt.
Artificial Intelligence | 1978
Charles F. Schmidt; N. S. Sridharan; John L. Goodson
Understanding actions involves inferring the goal of the actor and organizing the actions into a plan structure. The BELIEVER system is a psychological theory of how human observers understand the actions of others. The present theory is concerned with single-actor sequences and can account for goal-directed actions that may succeed or fail in accomplishing the goal, as well as actions governed by norms. After a discussion of how AI can be applied to psychological theory construction, the BELIEVER system is presented by specifying a plan recognition process and its knowledge sources.
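As a purely illustrative sketch (hypothetical plan library, actions, and scoring; not the BELIEVER implementation), the following Python fragment shows the core plan-recognition step the abstract describes: candidate goals are scored by how well the observed actions fit their stored plans.

# Toy plan-recognition step: match observed actions against a plan library
# and return the goal whose plan they fit best (hypothetical example data).
PLAN_LIBRARY = {
    "make_coffee": ["boil_water", "grind_beans", "pour_water"],
    "make_tea": ["boil_water", "steep_teabag"],
}

def recognize_goal(observed):
    """Score each candidate goal by the fraction of its plan matched by the observations."""
    def score(plan):
        return sum(1 for a in observed if a in plan) / len(plan)
    return max(PLAN_LIBRARY, key=lambda g: score(PLAN_LIBRARY[g]))

print(recognize_goal(["boil_water", "grind_beans"]))  # -> make_coffee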
Intelligence/SIGART Bulletin | 1977
N. S. Sridharan; Charles F. Schmidt
The BELIEVER theory is an attempt to specify an information processing system that constructs intentional interpretations of an observed sequence of human actions. A frame-based system, AIMDS, is used to define three domains: the physical world; the plan domain, where interpretations are constructed using plan structures composed from plan traits; and the psychological description of the actor. The system achieves a shift of representation from propositions about physical events to statements about beliefs and intentions of the actor by hypothesizing and attributing a plan structure to the actor.
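To make the representation shift concrete, here is a minimal sketch assuming simple dictionary-based frames and invented slot names (AIMDS itself is not reproduced): a proposition about a physical event is re-described as a statement about the actor's beliefs and intentions by attributing a hypothesized plan step.

# Hypothetical frames illustrating the shift from a physical-event description
# to a belief/intention description via an attributed plan step.
physical_event = {"actor": "John", "action": "pick_up", "object": "key"}

def attribute_plan(event, goal):
    """Re-describe a physical event as an intentional statement under a hypothesized goal."""
    return {
        "actor": event["actor"],
        "intends": goal,
        "by_means_of": event["action"],  # hypothesized plan step
        "believes": f"{event['action']} helps achieve {goal}",
    }

print(attribute_plan(physical_event, "unlock_door"))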
Psychonomic Science | 1970
Irwin P. Levin; Charles F. Schmidt
Ss were presented with a series of pairs of person descriptions. Each person description consisted of one or more personality trait adjectives. Within each pair of person descriptions, S had to choose the person he would most like to have as a friend and indicate how much he preferred that person over the other person in the pair. When a description consisted of several adjectives, it was suggested that Ss integrate the information through a weighted averaging process. Consistent with this notion, a reliable set size effect was obtained with positive-valued adjective sets; however, no reliable set size effect was obtained with negative adjectives. There was no evidence for discounting of inconsistent information when a pair of antonyms was included in the description of a given person.
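A minimal numeric illustration of the weighted averaging account (in Python, with hypothetical weights and scale values rather than the paper's fitted parameters): the rated impression is a weighted average of the adjectives' scale values and an initial impression, so adding more equally positive adjectives pulls the average further from neutral, producing a set size effect.

# Weighted averaging model: impression = (w0*s0 + w*sum(values)) / (w0 + w*n).
def impression(values, w=1.0, s0=0.0, w0=1.0):
    """Weighted average of adjective scale values with an initial impression s0."""
    return (w0 * s0 + w * sum(values)) / (w0 + w * len(values))

# Set size effect: more equally positive adjectives -> a more extreme impression.
print(impression([2.0]))            # 1.0
print(impression([2.0, 2.0, 2.0]))  # 1.5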
Proceedings of the Annual AI Systems in Government Conference | 1989
Charles F. Schmidt; John L. Goodson; Stacy Marsella; John L. Bresina
The problem addressed is how to plan in tactical situations (such as anti-submarine warfare) where planning must be responsive to events, or to other agents' actions, that lie outside the predictive capability of the planner. The basic difficulty presented by this type of planning problem is how to engage in actions that are coherently related to the achievement of the overall goal even though the planner often cannot develop a complete plan for achieving that goal. To overcome this difficulty, the authors have developed a planning model within which the planner is controlled by knowledge organized into what is termed a situation space. The situation space guides the selection of goals and the construction of complete subplans that are appropriate to the situations that arise and are coherently related to the overall goal. The situation space also supports principled handling of plan failure and replanning in a reactive environment.
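The following Python sketch is illustrative only (hypothetical situations, subgoals, and subplanner; not the authors' system): a situation space maps recognized situations to subgoals judged appropriate to them, a complete subplan is constructed for whichever subgoal the current situation prescribes, and a change of situation triggers replanning at this level.

# Toy situation space: situation -> subgoal appropriate to it (hypothetical entries).
SITUATION_SPACE = {
    "contact_lost": "reacquire_contact",
    "contact_held": "maintain_track",
    "threat_detected": "evade",
}

def plan_step(situation, subplan_for):
    """Select the subgoal the situation space prescribes and build a complete subplan for it."""
    subgoal = SITUATION_SPACE.get(situation)
    return subplan_for(subgoal) if subgoal else None

# A stand-in subplanner for the example.
print(plan_step("contact_lost", lambda goal: ["run_search_pattern", goal]))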
Machine Learning Methods for Planning | 1993
Stacy Marsella; Charles F. Schmidt
This chapter discusses a method for biasing the learning of nonterminal reduction rules. Learning is addressed as an integrated component of a problem reduction planner. The planner receives as input an ordered sequence of problems that it attempts to solve in order. The main criterion placed on the learning and modification of rules is the degree of dependency in both the derivation of the plans and the execution of the solutions in those plans. This focus on degree of dependency suggests that learning nonterminal rules can be viewed in terms of restrictions on the actions taken by solutions to the subproblems; that is, independence between subproblems can be achieved by restricting the actions taken to solve them. Two problem reduction learners, PRL and BU-PRL, have been designed and implemented to evaluate this restriction-based approach for biasing the learning of nonterminal rules. Both systems use a refinement-based approach to learning, whereby an initial rule can be modified by feedback from problem-solving experience. The distinction between the two is that PRL uses a hypothesis-driven approach to forming that initial rule, whereas BU-PRL explores the opposing proposition that a desirable plan structure need not be coerced but can instead be derived from a state-space solution.
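As a rough sketch of the restriction-based bias (hypothetical problem, subproblems, and action sets; not PRL or BU-PRL code), a nonterminal reduction rule can record which actions each subproblem's solution is allowed to use, and the subproblems are treated as independent when those action sets cannot interfere with one another.

# Hypothetical nonterminal reduction rule with per-subproblem action restrictions.
rule = {
    "problem": "clear_and_stack",
    "subproblems": [
        {"goal": "clear_block", "allowed_actions": {"unstack", "putdown"}},
        {"goal": "stack_block", "allowed_actions": {"pickup", "stack"}},
    ],
}

def independent(rule):
    """Subproblems count as independent if their allowed action sets are pairwise disjoint,
    so one subproblem's solution cannot undo another's effects."""
    sets = [sp["allowed_actions"] for sp in rule["subproblems"]]
    return all(a.isdisjoint(b) for i, a in enumerate(sets) for b in sets[i + 1:])

print(independent(rule))  # True: the restriction makes the reductions independent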
Theoretical Issues in Natural Language Processing | 1975
Charles F. Schmidt
Journal of Personality and Social Psychology | 1969
Charles F. Schmidt
Journal of Experimental Psychology | 1969
Irwin P. Levin; Charles F. Schmidt
International Joint Conference on Artificial Intelligence | 1977
Charles F. Schmidt; N. S. Sridharan
Journal of Personality and Social Psychology | 1972
Charles F. Schmidt