Renée Elio
University of Alberta
Publications
Featured research published by Renée Elio.
Memory & Cognition | 1984
Renée Elio; John R. Anderson
Three experiments investigated the effects of information order and representativeness on schema abstraction in a category learning task. A set of category members, in which the variability and frequency of member types were correlated, was divided into four study samples. In the high-variance condition, each sample was representative of the allowable variation in the category and the frequency with which it occurred. In the low-variance condition, the initial study sample focused only on the most frequently occurring category members. Subsequent samples gradually introduced exemplars, and hence additional variance, from remaining member types. After the fourth study sample, all subjects in all conditions had seen the same category members. Experiment 1 revealed that transfer performance was better if subjects began with a low-variance sample and were gradually introduced to the allowable variation on subsequent samples than if they consistently saw representative samples. Experiments 2 and 3 suggested that this information-order effect may interact with learning mode: Subjects induced to be more analytic about the material performed better if their initial and subsequent samples were representative of the category variation.
Cognitive Science | 1990
Renée Elio; Peternela B. Scharf
This research presents a computer model called EUREKA that begins with novice-like strategies and knowledge organizations for solving physics word problems and acquires features of knowledge organizations and basic approaches that characterize experts in this domain. EUREKA learns a highly interrelated network of problem-type schemas with associated solution methodologies. Initially, superficial features of the problem statement form the basis for both the problem-type schemas and the discriminating features that organize them in the P-MOP (Problem Memory Organization Packet) network. As EUREKA solves more problems, the content of the schemas and the discriminating features change to reflect more fundamental physics principles. This changing network allows EUREKA to shift from a novice-like means-ends strategy to a more expert-like “knowledge development” strategy in which abstract concepts are triggered by problem features. In this model, the strategy shift emerges as a natural consequence of the evolving expert-like organization of problem-type schemas. EUREKA captures many of the descriptive models of novice-expert differences, and also suggests a number of empirically testable assumptions regarding problem-solving strategies and the representation of problem-solving knowledge.
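The re-indexing idea can be illustrated with a minimal sketch (not the EUREKA implementation): problems are grouped under whichever features currently discriminate between schemas, and re-indexing the same problems on deeper features models the novice-to-expert shift described above. The problem descriptions and feature names below are hypothetical.

```python
# Hypothetical problem descriptions: a surface feature plus the physics
# principle that actually governs the solution.
problems = [
    {"surface": "inclined plane", "principle": "energy conservation"},
    {"surface": "pulley",         "principle": "energy conservation"},
    {"surface": "inclined plane", "principle": "newton second law"},
]

def build_index(problems, feature):
    """Group problems under a discriminating feature -- a toy analogue
    of organizing problem-type schemas in a P-MOP network."""
    index = {}
    for p in problems:
        index.setdefault(p[feature], []).append(p)
    return index

# Novice-like organization: schemas discriminated by surface features.
novice = build_index(problems, "surface")
# Expert-like organization: the same problems re-indexed by principle.
expert = build_index(problems, "principle")

print(sorted(novice))   # ['inclined plane', 'pulley']
print(sorted(expert))   # ['energy conservation', 'newton second law']
```

The novice index puts the two inclined-plane problems together despite their different solution methods; the expert index groups by solution principle instead.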
Cognitive Science | 1997
Renée Elio; Francis Jeffry Pelletier
This study examines the problem of belief revision, defined as deciding which of several initially accepted sentences to disbelieve, when new information presents a logical inconsistency with the initial set. In the first three experiments, the initial sentence set included a conditional sentence, a non-conditional (ground) sentence, and an inferred conclusion drawn from the first two. The new information contradicted the inferred conclusion. Results indicated that conditional sentences were more readily abandoned than ground sentences, even when either choice would lead to a consistent belief state, and that this preference was more pronounced when problems used natural language cover stories rather than symbols. The pattern of belief revision choices differed depending on whether the contradicted conclusion from the initial belief set had been a modus ponens or modus tollens inference. Two additional experiments examined alternative model-theoretic definitions of minimal change to a belief state, using problems that contained multiple models of the initial belief state and of the new information that provided the contradiction. The results indicated that people did not follow any of four formal definitions of minimal change on these problems. The new information and the contradiction it offered were not, for example, used to select a particular model of the initial belief state as a way of reconciling the contradiction. The preferred revision was to retain only those initial sentences that had the same, unambiguous truth value within and across both the initial and new information sets. The study and results are presented in the context of certain logic-based formalizations of belief revision, syntactic and model-theoretic representations of belief states, and performance models of human deduction.
Principles by which some types of sentences might be more “entrenched” than others in the face of contradiction are also discussed from the perspective of induction and theory revision.
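The revision problem in the first three experiments can be sketched in miniature (this is an illustration, not the authors' materials): an initial set containing a conditional, a ground sentence, and a modus ponens conclusion becomes inconsistent with new information, and revision amounts to choosing among the maximal subsets of the initial beliefs that remain consistent.

```python
from itertools import combinations

# Initial beliefs as propositional sentences over p, q, each encoded as
# a function of a truth assignment.
initial = {
    "if p then q": lambda v: (not v["p"]) or v["q"],
    "p":           lambda v: v["p"],
    "q":           lambda v: v["q"],     # the modus ponens conclusion
}
new_info = lambda v: not v["q"]          # new information: not-q

def consistent(sentences):
    """True if some truth assignment over p, q satisfies every sentence."""
    return any(all(s({"p": p, "q": q}) for s in sentences)
               for p in (True, False) for q in (True, False))

def maximal_consistent(initial, new_info):
    """Maximal subsets of the initial beliefs consistent with new_info."""
    ok = [frozenset(names)
          for r in range(len(initial) + 1)
          for names in combinations(initial, r)
          if consistent([initial[n] for n in names] + [new_info])]
    # Keep only subsets not strictly contained in another consistent one.
    return [s for s in ok if not any(s < t for t in ok)]

# The conclusion q must go either way; the remaining choice is whether
# to keep the conditional or the ground sentence -- the choice the
# experiments above probed.
print(sorted(sorted(s) for s in maximal_consistent(initial, new_info)))
# [['if p then q'], ['p']]
```

Both revisions are logically acceptable; the experimental finding was that people preferentially abandon the conditional, keeping the ground sentence.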
Cognitive Science | 1986
Renée Elio
Continued practice on a task is characterized by several quantitative and qualitative changes in performance. The most salient is the speed-up in the time to execute the task. To account for these effects, some models of skilled performance have proposed automatic mechanisms that merge knowledge structures associated with the task into fewer, larger structures. The present study investigated how the representation of similar cognitive procedures might interact with the success of such automatic mechanisms. In five experiments, subjects learned complex, multistep mental arithmetic procedures. These procedures included two types of knowledge thought to characterize most cognitive procedures: “component” knowledge for achieving intermediate results and “integrative” knowledge for organizing and integrating intermediate results. Subjects simultaneously practiced two procedures that had either the same component steps or the same integrative structure. Practice-effect models supported a procedure-independent representation for common component steps. The availability of these common steps for use in a new procedure was also measured. Steps practiced in the context of two procedures were expected to show greater transfer to a new procedure than steps learned in the context of a single procedure. This did not always occur. A model using the component/integrative knowledge distinction reconciled these results by proposing that integrative knowledge operated on all steps of the procedure: an integral part of the knowledge associated with achieving an intermediate result or state includes how it contributes to later task demands. These results are discussed in the context of automatic mechanisms for skill acquisition.
Machine Learning | 1991
Renée Elio; Larry Watanabe
This article describes LAIR, a constructive induction system that acquires conjunctive concepts by applying a domain theory to introduce new features into the evolving concept description. Each acquired concept is added to the domain theory, making LAIR a closed-loop learning system that weakens the inductive bias with each iteration of the learning loop. LAIR's novel feature is the use of an incremental deductive strategy for constructive induction, reducing the amount of inference required for learning. A series of experiments manipulated features of learning tasks to assess this incremental method of constructive induction relative to an uncontrolled constructive induction process that extends each example description with all derivable features. These learning tasks differed in global characteristics of the domain theory, the training sequence, and the percentage of irrelevant features in the example descriptions. The results show that LAIR's constructive induction approach saves considerable inferencing effort, with little or no cost in the number of examples needed to reach a learning criterion. The experimental results also underscored the importance of viewing a domain theory as a search space, identifying several factors that impact the deductive and inductive aspects of constructive induction, such as concept definition overlap, density of features, and fan-in and fan-out of inference chains. The paper also discusses LAIR's operation as a PAC-learner and its relation to other constructive induction techniques.
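A minimal sketch of the idea (not LAIR itself, and with an invented toy theory): domain-theory rules derive new features from observed ones, and a conjunctive concept is induced as the intersection of the extended feature sets of the positive examples. Note that this sketch saturates each example with all derivable features, i.e. the uncontrolled baseline the article compares LAIR's incremental strategy against.

```python
# Hypothetical domain theory: observed feature -> derived feature.
theory = {
    "has_wings": "can_fly",
    "can_fly":   "airborne",
}

def saturate(features, theory):
    """Forward-chain the domain theory to closure over one example,
    adding every derivable feature to its description."""
    features = set(features)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in theory.items():
            if premise in features and conclusion not in features:
                features.add(conclusion)
                changed = True
    return features

# Two positive examples of the target concept, described by raw features.
positives = [{"has_wings", "small"}, {"can_fly", "noisy"}]

# Conjunctive concept: features shared by all saturated positives.
concept = set.intersection(*(saturate(p, theory) for p in positives))
print(sorted(concept))  # ['airborne', 'can_fly']
```

The derived features make the two examples intersect at all, which is exactly the work constructive induction does; the irrelevant features ("small", "noisy") drop out.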
Computational Intelligence | 1997
Francis Jeffry Pelletier; Renée Elio
This is a position paper concerning the role of empirical studies of human default reasoning in the formalization of AI theories of default reasoning. We note that AI motivates its theoretical enterprise by reference to human skill at default reasoning, but that the actual research does not make any use of this sort of information and instead relies on intuitions of individual investigators. We discuss two reasons theorists might not consider human performance relevant to formalizing default reasoning: (a) that intuitions are sufficient to describe a model, and (b) that human performance in this arena is irrelevant to a competence model of the phenomenon. We provide arguments against both these reasons. We then bring forward three further considerations against the use of intuitions in this arena: (a) it leads to an unawareness of predicate ambiguity, (b) it presumes an understanding of ordinary language statements of typicality, and (c) it is similar to discredited views in other fields. We advocate empirical investigation of the range of human phenomena that intuitively embody default reasoning. Gathering such information would provide data with which to generate formal default theories and against which to test the claims of proposed theories. Our position is that such data are the very phenomena that default theories are supposed to explain.
Cooperative Information Agents | 1999
Pat Langley; Cynthia A. Thompson; Renée Elio; Afsaneh Haddadi
In this paper, we describe the Adaptive Place Advisor, a conversational interface designed to help users decide on a destination. We view the selection of destinations as an interactive process of constraint satisfaction, with the advisory system proposing attributes and the human responding. We further characterize this task in terms of heuristic search, which leads us to consider the system's representation of problem states, the operators it uses to generate those states, and the heuristics it invokes to select these operators. In addition, we report a graphical interface that supports this process for the specific task of recommending restaurants, as well as two methods for constructing user models from interaction traces. We contrast our approach to recommendation systems with the more common scheme of showing users a ranked list of items, but we also discuss related work on conversational systems. In closing, we present our plans to evaluate the Adaptive Place Advisor experimentally and to extend its functionality.
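The interactive constraint-satisfaction view can be sketched as follows (a toy illustration with invented data, not the Place Advisor's code): the system proposes an attribute, the user supplies a value, and each answer shrinks the candidate set until few enough items remain to present.

```python
# Hypothetical restaurant catalog.
restaurants = [
    {"name": "Casa Lupe",  "cuisine": "mexican", "price": "low"},
    {"name": "Le Central", "cuisine": "french",  "price": "high"},
    {"name": "Taqueria X", "cuisine": "mexican", "price": "high"},
]

def narrow(candidates, answers):
    """Keep only candidates consistent with every user-supplied
    attribute value -- one constraint-satisfaction step per answer."""
    return [c for c in candidates
            if all(c[attr] == val for attr, val in answers.items())]

# Simulated dialogue: the advisor asks about cuisine, then price.
answers = {"cuisine": "mexican", "price": "high"}
result = narrow(restaurants, answers)
print([c["name"] for c in result])  # ['Taqueria X']
```

In heuristic-search terms, each problem state is a partial set of answers, the operators are attribute questions, and a question's usefulness can be judged by how evenly it splits the remaining candidates.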
Synthese | 2005
Francis Jeffry Pelletier; Renée Elio
Default reasoning occurs whenever the truth of the evidence available to the reasoner does not guarantee the truth of the conclusion being drawn. Despite this, one is entitled to draw the conclusion “by default” on the grounds that we have no information which would make us doubt that the inference should be drawn. It is the type of conclusion we draw in the ordinary world and ordinary situations in which we find ourselves.

Formally speaking, ‘nonmonotonic reasoning’ refers to argumentation in which one uses certain information to reach a conclusion, but where it is possible that adding some further information to those very same premises could make one want to retract the original conclusion. It is easily seen that the informal notion of default reasoning manifests a type of nonmonotonic reasoning. Generally speaking, default statements are said to be true about the class of objects they describe, despite the acknowledged existence of “exceptional instances” of the class. In the absence of explicit information that an object is one of the exceptions, we are enjoined to apply the default statement to the object. But further information may later tell us that the object is in fact one of the exceptions. So this is one of the points where nonmonotonicity resides in default reasoning.

The informal notion has been seen as central to a number of areas of scholarly investigation, and we canvass some of them before turning our attention to its role in AI. It is because ordinary people so cleverly and effortlessly use default reasoning to solve interesting cognitive tasks that nonmonotonic formalisms were introduced into AI, and we argue that this is a form of psychologism, despite the fact that it is not usually recognized as such in AI.

We close by mentioning some of the results from our empirical investigations that we believe should be incorporated into nonmonotonic formalisms.
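The nonmonotonicity described above can be shown in a few lines (a toy sketch, not a full default logic, using the stock birds-fly example): a default rule licenses a conclusion in the absence of exception information, and adding a premise retracts that conclusion.

```python
# Hypothetical exception classes for the default "birds fly".
EXCEPTIONS = {"penguin", "ostrich"}

def flies(facts):
    """Apply the default: conclude flight for a bird unless the known
    facts place it in an exception class."""
    return "bird" in facts and not (facts & EXCEPTIONS)

beliefs = {"bird"}
print(flies(beliefs))      # True: the conclusion is drawn by default

beliefs.add("penguin")     # further information arrives...
print(flies(beliefs))      # False: the same premises plus one fact
                           # force the conclusion to be retracted
```

Classical logic is monotonic (adding premises never removes conclusions), so this retraction behavior is precisely what the nonmonotonic formalisms discussed above were built to capture.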
Studia Logica | 2008
Francis Jeffry Pelletier; Renée Elio; Philip P. Hanson
Psychologism in logic is the doctrine that the semantic content of logical terms is in some way a feature of human psychology. We consider the historically influential version of the doctrine, Psychological Individualism, and the many counter-arguments to it. We then propose and assess various modifications to the doctrine that might allow it to avoid the classical objections. We call these Psychological Descriptivism, Teleological Cognitive Architecture, and Ideal Cognizers. These characterizations give some order to the wide range of modern views that are seen as psychologistic because of one or another feature. Although these can avoid some of the classic objections to psychologism, some still hold.
Journal of Atmospheric and Oceanic Technology | 1987
Renée Elio; Johannes De Haan; G. S. Strong
Abstract An experienced forecaster can use several different types of knowledge in forecasting. First, there is his theoretical understanding of meteorology, which is well entrenched in current numerical models. A second type is his “local knowledge,” gained over years of experience, of how weather is likely to form in his forecast area. This kind of local familiarity is not easily captured with traditional numeric techniques, but might provide additional insights for prediction that someone unfamiliar with the area might not have. A third type of knowledge is how to interpret forecast tools already in use. This might include knowledge of a tool's limitations and of how it works in a particular locale. Capturing these types of knowledge is important in building computing systems that can serve as intelligent consultants to forecasters. This paper describes a prototype system, called METEOR, that incorporates all these types of knowledge to predict the location, severity, and motion of convective storms in Albe...
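How the three knowledge types might feed one estimate can be sketched as a tiny rule set (entirely invented for illustration; not METEOR's rules, and real scoring would be far more involved): one rule per knowledge type contributes to a rough storm-likelihood score.

```python
def forecast(observations):
    """Combine three hypothetical rules -- one per knowledge type
    described above -- into a rough convective-storm likelihood,
    expressed as a percentage (0..100)."""
    score = 0
    # Theoretical knowledge: high instability (CAPE) favors convection.
    if observations.get("cape", 0) > 1000:
        score += 40
    # Local knowledge: e.g. a flow pattern known to trigger storms
    # in this particular forecast area.
    if observations.get("upslope_flow"):
        score += 30
    # Tool interpretation: weight a model's storm signal by how
    # reliable that tool is known to be in this locale.
    if observations.get("model_predicts_storm"):
        score += 20 if observations.get("model_reliable_here") else 10
    return min(score, 100)

print(forecast({"cape": 1500, "upslope_flow": True,
                "model_predicts_storm": True}))  # 80
```

The point of the sketch is structural: the local-knowledge and tool-interpretation rules encode exactly the experience that, as the abstract notes, numerical models do not capture.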