Witold Łukaszewicz
University of Warsaw
Publications
Featured research published by Witold Łukaszewicz.
Computational Intelligence | 1988
Witold Łukaszewicz
In this paper we analyze Reiter's default logic and suggest a modification of the notion of an extension for default theories. This modification leads to two important properties which are not guaranteed in Reiter's formalism: the existence of extensions and semi-monotonicity.
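For readers unfamiliar with the formalism, a default rule and the standard counterexample to the existence of extensions can be sketched as follows (a textbook illustration, not taken from the paper itself):

\[
\frac{\alpha : \beta_1, \ldots, \beta_n}{\gamma}
\qquad\qquad
D = \Bigl\{\, \frac{\;: \neg p}{p} \,\Bigr\},\quad W = \emptyset .
\]

A rule of the first form may be applied when its prerequisite \(\alpha\) is derivable and each justification \(\beta_i\) is consistent with what is believed; its consequent \(\gamma\) is then added. The default theory \((D, W)\) on the right has no extension in Reiter's sense, whereas in the modified formalism every default theory possesses at least one extension.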
Journal of Automated Reasoning | 1997
Patrick Doherty; Witold Łukaszewicz; Andrzej Szałas
In recent years, a great deal of attention has been devoted to logics of common-sense reasoning. Among the candidates proposed, circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic reasoning, but difficult to apply in practice. The major reason for this is the second-order nature of circumscription axioms and the difficulty in finding proper substitutions of predicate expressions for predicate variables. One solution to this problem is to compile, where possible, second-order formulas into equivalent first-order formulas. Although some progress has been made using this approach, the results are not as strong as one might desire and they are isolated in nature. In this article, we provide a general method that can be used in an algorithmic manner to reduce certain circumscription axioms to first-order formulas. The algorithm takes as input an arbitrary second-order formula and either returns as output an equivalent first-order formula, or terminates with failure. The class of second-order formulas, and analogously the class of circumscriptive theories that can be reduced, provably subsumes those covered by existing results. We demonstrate the generality of the algorithm using circumscriptive theories with mixed quantifiers (some involving Skolemization), variable constants, nonseparated formulas, and formulas with n-ary predicate variables. In addition, we analyze the strength of the algorithm, compare it with existing approaches, and provide formal subsumption results.
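The second-order character referred to above can be seen in the standard circumscription schema (quoted here in its usual textbook form, with minimized predicate P and varied predicates Z):

\[
\mathrm{CIRC}(T; P; Z) \;\equiv\; T(P, Z)\ \wedge\ \forall \Phi\, \forall \Psi\,
\bigl[\, T(\Phi, \Psi) \wedge (\Phi \le P)\ \rightarrow\ (P \le \Phi) \,\bigr],
\]

where \(\Phi \le P\) abbreviates \(\forall \bar{x}\,(\Phi(\bar{x}) \rightarrow P(\bar{x}))\). The algorithm described in the article (referred to elsewhere in this list as the DLS algorithm) attempts to eliminate the second-order quantifiers \(\forall \Phi\, \forall \Psi\) and, when it succeeds, returns an equivalent first-order formula.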
Information Fusion | 2007
Patrick Doherty; Witold Łukaszewicz; Andrzej Szałas
In real-world applications, robots and software agents often have to be equipped with higher-level cognitive functions that enable them to reason, act and perceive in changing, incompletely known and unpredictable environments. One of the major tasks in such circumstances is to fuse information from various data sources. There are many levels of information fusion, ranging from the fusing of low-level sensor signals to the fusing of high-level, complex knowledge structures. In a dynamically changing environment, even a single agent may have varying abilities to perceive its environment, which are dependent on particular conditions. The situation becomes even more complex when different agents have different perceptual capabilities and need to communicate with each other. In this paper, we propose a framework that provides agents with the ability to fuse both low- and high-level approximate knowledge in the context of dynamically changing environments while taking account of heterogeneous and contextually limited perceptual capabilities. To model limitations on an agent's perceptual capabilities, we introduce the idea of partial tolerance spaces. We assume that each agent has one or more approximate databases, where approximate relations are represented using lower and upper approximations on sets. Approximate relations are generalizations of rough sets. It is shown how sensory and other limitations can be taken into account when constructing and querying approximate databases for each respective agent. Complex relations inherit the approximativeness of primitive relations used in their definitions. Agents then query these databases and receive answers through the filters of their perceptual limitations, as represented by (partial) tolerance spaces and approximate queries. The techniques used are all tractable.
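As a minimal illustration of the lower and upper approximations mentioned above (an illustrative sketch only; the approximate databases in the paper are considerably richer, and all names and the similarity threshold below are invented for the example):

    # Rough-set-style lower/upper approximations relative to a
    # tolerance (similarity) relation. Illustrative sketch only.

    def neighbourhood(x, universe, similar):
        """All elements of the universe similar to x (including x itself)."""
        return {y for y in universe if similar(x, y)}

    def lower_approximation(target, universe, similar):
        """Elements whose whole neighbourhood lies inside the target set."""
        return {x for x in universe if neighbourhood(x, universe, similar) <= target}

    def upper_approximation(target, universe, similar):
        """Elements whose neighbourhood intersects the target set."""
        return {x for x in universe if neighbourhood(x, universe, similar) & target}

    # Toy example: integers are considered similar when they differ by at most 1.
    universe = set(range(10))
    target = {3, 4, 5}
    similar = lambda x, y: abs(x - y) <= 1
    print(lower_approximation(target, universe, similar))  # {4}
    print(upper_approximation(target, universe, similar))  # {2, 3, 4, 5, 6}

The lower approximation collects elements that certainly belong to the concept given the limited discernibility, while the upper approximation collects those that possibly belong to it; complex relations built from such approximations inherit their approximate character, as the abstract notes.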
Fundamenta Informaticae | 1998
Patrick Doherty; Witold Łukaszewicz; Andrzej Szałas
We first define general domain circumscription (GDC) and provide it with a semantics. GDC subsumes existing domain circumscription proposals in that it allows varying of arbitrary predicates, functions, or constants, to maximize the minimization of the domain of a theory. We then show that, for the class of semi-universal theories without function symbols, the domain circumscription of such theories can be constructively reduced to logically equivalent first-order theories by using an extension of the DLS algorithm, previously proposed by the authors for reducing second-order formulas. We also show that, for a certain class of domain circumscribed theories, any arbitrary second-order circumscription policy applied to these theories is guaranteed to be reducible to a logically equivalent first-order theory. In the case of semi-universal theories with functions and arbitrary theories which are not separated, we provide additional results which, although not guaranteed to provide reductions in all cases, do provide reductions in some cases. These results are based on the use of fixpoint reductions.
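For orientation, the classical domain-circumscription schema that GDC generalizes can be written as follows for a function-free theory T with constants c_1, ..., c_k (a standard textbook formulation; the paper's own notation may differ):

\[
\mathrm{DC}(T) \;\equiv\; T\ \wedge\ \forall \Phi\,
\bigl[\, \Phi(c_1) \wedge \ldots \wedge \Phi(c_k) \wedge T^{\Phi}\ \rightarrow\ \forall x\, \Phi(x) \,\bigr],
\]

where \(T^{\Phi}\) denotes T with every quantifier relativized to \(\Phi\). Intuitively, the axiom forces the domain to be as small as the theory allows; GDC additionally permits chosen predicates, functions, or constants to vary during this minimization.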
Granular Computing | 2003
Patrick Doherty; Witold Łukaszewicz; Andrzej Szałas
The premise of this paper is that the acquisition, aggregation, merging and use of information require some new ideas, tools and techniques which can simplify the construction, analysis and use of what we call ephemeral knowledge structures. Ephemeral knowledge structures are used and constructed by granular agents. Each agent contains its own granular information structure, and the granular information structures of agents can be combined. The main concept considered in this paper is that of an information granule. An information granule is a concise conceptual unit that can be integrated into a larger information infrastructure consisting of other information granules and dependencies between them. The novelty of this paper is that it provides a concise and formal definition of a particular view of information granules and their associated operators, as required in advanced knowledge representation applications.
Fundamenta Informaticae | 1996
Patrick Doherty; Witold Łukaszewicz; Andrzej Szałas
Circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic and commonsense reasoning, but difficult to apply in practice due to the use of second-order formulas. One proposal for dealing with the computational problems is to identify classes of first-order formulas whose circumscription can be shown to be equivalent to a first-order formula. In previous work, we presented an algorithm which reduces certain classes of second-order circumscription axioms to logically equivalent first-order formulas. The basis for the algorithm is an elimination lemma due to Ackermann. In this paper, we capitalize on the use of a generalization of Ackermann's Lemma in order to deal with a subclass of universal formulas called semi-Horn formulas. Our results subsume previous results by Kolaitis and Papadimitriou regarding a characterization of circumscribed definite logic programs which are first-order expressible. The method for distinguishing which formulas are reducible is based on a boundedness criterion. The approach we use is to first reduce a circumscribed semi-Horn formula to a fixpoint formula, which is reducible precisely when the formula is bounded. In addition to a number of other extensions, we also present a fixpoint calculus which is shown to be sound and complete for bounded fixpoint formulas.
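In one standard formulation, the elimination lemma referred to above reads as follows (a dual version is obtained by reversing the implication and the polarity condition):

\[
\exists \Phi\, \bigl[\, \forall \bar{y}\, (A(\bar{y}) \rightarrow \Phi(\bar{y}))\ \wedge\ B(\Phi) \,\bigr]
\;\equiv\; B(\Phi := A),
\]

provided that A contains no occurrences of \(\Phi\) and \(\Phi\) occurs only negatively in B. Roughly speaking, the generalization exploited in the paper relaxes these syntactic restrictions at the price of replacing the substitution by a fixpoint expression, which, as the abstract states, is reducible to first-order form exactly when the fixpoint formula is bounded.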
Mexican International Conference on Artificial Intelligence | 2005
Ewa Madalińska-Bugaj; Witold Łukaszewicz
In this paper, we propose a new belief revision operator, together with a method of its calculation. Our formalization differs from most of the traditional approaches in two respects. Firstly, we formally distinguish between defeasible observations and indefeasible knowledge about the considered world. In particular, our operator is specified differently depending on whether an input formula is an observation or a piece of knowledge. Secondly, we assume that a new observation, but not a new piece of knowledge, describes exactly what a reasoning agent knows at the moment about the aspect of the world the observation concerns.
Granular Computing | 2005
Patrick Doherty; Witold Łukaszewicz; Andrzej Szałas
The relation of similarity is essential in understanding and developing frameworks for reasoning with vague and approximate concepts. There is a wide spectrum of choice as to what properties we associate with similarity, and such choices determine the nature of vague and approximate concepts defined in terms of these relations. Additionally, robotic systems naturally have to deal with vague and approximate concepts due to the limitations in reasoning and sensor capabilities. Halpern [1] introduces the use of subjective and objective states in a modal logic formalizing vagueness and distinctions in transitivity when an agent reasons in the context of sensory and other limitations. He also relates these ideas to a solution of the Sorites and other paradoxes. In this paper, we generalize and apply the idea of similarity and tolerance spaces [2,3,4,5], a means of constructing approximate and vague concepts from such spaces, and an explicit way to distinguish between an agent's objective and subjective states. We also show how some of the intuitions from Halpern can be used with similarity spaces to formalize the above-mentioned Sorites and other paradoxes.
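In the formulation used in the authors' related work (the exact parameters vary across the cited papers), a tolerance space can be presented as a universe equipped with a similarity measure and a threshold:

\[
TS = \langle U, \sigma, p \rangle, \qquad
\tau(x, y) \;\text{ iff }\; \sigma(x, y) \ge p, \qquad
n(x) = \{\, y \in U \mid \tau(x, y) \,\},
\]

where \(\sigma : U \times U \rightarrow [0,1]\) is reflexive and symmetric. The induced relation \(\tau\) is a tolerance relation, reflexive and symmetric but not necessarily transitive, and this failure of transitivity is precisely what links such spaces to the Sorites paradox discussed above; the neighbourhood function \(n(\cdot)\) then induces lower and upper approximations of concepts in the usual rough-set manner.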
Fundamenta Informaticae | 2009
Ewa Madalińska-Bugaj; Witold Łukaszewicz
In this paper we generalize the MPMA belief update operator by admitting first-order knowledge bases and restricted first-order update formulae. This allows us to consider scenarios which cannot be properly formalized either in the original MPMA or in other existing approaches to belief update.
Fundamenta Informaticae | 2015
Michał Korpusik; Witold Łukaszewicz; Ewa Madalińska-Bugaj
In this paper we extend a consistency-based approach (originally introduced by Delgrande and Schaub) to belief revision for structured belief bases. We explicitly distinguish between observations, i.e., facts that an epistemic agent observes or is told, and rules representing general knowledge about the considered world. When new information becomes available, the respective sets are altered in different ways so as to preserve parts of knowledge during the revision process. Such an approach allows us to deal with difficult and complex scenarios, involving defeasible information and derivation filtering, with common-sense results.