Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gabriele Kern-Isberner is active.

Publication


Featured research published by Gabriele Kern-Isberner.


Artificial Intelligence | 1998

Characterizing the principle of minimum cross-entropy within a conditional-logical framework

Gabriele Kern-Isberner

The principle of minimum cross-entropy (ME-principle) is often used as an elegant and powerful tool to build up complete probability distributions when only partial knowledge is available. The inputs it may be applied to are a prior distribution P and some new information R, and it yields as a result the one distribution P* that satisfies R and is closest to P in an information-theoretic sense. More generally, it provides a “best” solution to the problem “How to adjust P to R?” In this paper, we show how probabilistic conditionals allow a new and constructive approach to this important principle. Though popular and widely used for knowledge representation, conditionals quantified by probabilities are not easily dealt with. We develop four principles that describe their handling in a reasonable and consistent way, taking into consideration the conditional-logical as well as the numerical and probabilistic aspects. Finally, the ME-principle turns out to be the only method for adjusting a prior distribution to new conditional information that obeys all these principles. Thus a characterization of the ME-principle within a conditional-logical framework is achieved, and its implicit logical mechanisms are revealed clearly.
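As a rough numerical illustration of what the ME-principle computes (not the paper's conditional-logical construction), the sketch below adjusts a toy prior over two Boolean variables to a single probabilistic conditional by minimizing the Kullback-Leibler divergence; the variables, prior, and probability values are illustrative assumptions, not taken from the paper.

```python
# Minimal numerical sketch of the ME-principle: adjust a prior P to a new
# probabilistic conditional R by minimizing cross-entropy (KL divergence).
# Toy example only; names and numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

# Worlds over two Boolean variables (a, b): indices 0..3 = (00, 01, 10, 11).
prior = np.array([0.4, 0.1, 0.3, 0.2])          # prior distribution P

# New information R: the probabilistic conditional (b | a)[0.8],
# i.e. P*(a and b) = 0.8 * P*(a).
def conditional_gap(q):
    p_a  = q[2] + q[3]          # worlds with a = 1
    p_ab = q[3]                 # world with a = 1 and b = 1
    return p_ab - 0.8 * p_a

def kl(q):
    q = np.clip(q, 1e-12, 1.0)
    return float(np.sum(q * np.log(q / prior)))

constraints = [
    {"type": "eq", "fun": lambda q: np.sum(q) - 1.0},   # normalization
    {"type": "eq", "fun": conditional_gap},              # the conditional R
]
res = minimize(kl, prior, bounds=[(0, 1)] * 4, constraints=constraints)
p_star = res.x
print("P* =", np.round(p_star, 4))
print("check P*(b|a) =", p_star[3] / (p_star[2] + p_star[3]))
```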


Artificial Intelligence | 2004

Combining probabilistic logic programming with the power of maximum entropy

Gabriele Kern-Isberner; Thomas Lukasiewicz

This paper is on the combination of two powerful approaches to uncertain reasoning: logic programming in a probabilistic setting, on the one hand, and the information-theoretical principle of maximum entropy, on the other hand. More precisely, we present two approaches to probabilistic logic programming under maximum entropy. The first one is based on the usual notion of entailment under maximum entropy, and is defined for the very general case of probabilistic logic programs over Boolean events. The second one is based on a new notion of entailment under maximum entropy, where the principle of maximum entropy is coupled with the closed world assumption (CWA) from classical logic programming. It is only defined for the more restricted case of probabilistic logic programs over conjunctive events. We then analyze the nonmonotonic behavior of both approaches along benchmark examples and along general properties for default reasoning from conditional knowledge bases. It turns out that both approaches have very nice nonmonotonic features. In particular, they realize some inheritance of probabilistic knowledge along subclass relationships, without suffering from the problem of inheritance blocking and from the drowning problem. They both also satisfy the property of rational monotonicity and several irrelevance properties. We finally present algorithms for both approaches, which are based on generalizations of recent techniques for probabilistic logic programming under logical entailment. The algorithm for the first approach still produces quite large weighted entropy maximization problems, while the one for the second approach generates optimization problems of the same size as the ones produced in probabilistic logic programming under logical entailment.
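The optimization underlying the first notion of entailment can be stated directly for a toy program; the sketch below is only that bare statement, not the paper's algorithms, and the events and probabilities (a small bird/penguin program) are made up for illustration.

```python
# Sketch of entailment under maximum entropy for a tiny probabilistic program
# over Boolean events {bird, penguin, fly}. Illustrative only; this states the
# underlying optimization directly rather than using the paper's algorithms.
import itertools
import numpy as np
from scipy.optimize import minimize

worlds = list(itertools.product([0, 1], repeat=3))   # (bird, penguin, fly)

def prob(q, pred):
    return sum(qi for qi, w in zip(q, worlds) if pred(w))

# Toy program: (fly | bird)[0.9], (bird | penguin)[0.95], (fly | penguin)[0.05]
def program_constraints(q):
    return [
        prob(q, lambda w: w[0] and w[2]) - 0.90 * prob(q, lambda w: w[0]),
        prob(q, lambda w: w[1] and w[0]) - 0.95 * prob(q, lambda w: w[1]),
        prob(q, lambda w: w[1] and w[2]) - 0.05 * prob(q, lambda w: w[1]),
        sum(q) - 1.0,
    ]

def neg_entropy(q):
    q = np.clip(q, 1e-12, 1.0)
    return float(np.sum(q * np.log(q)))

n = len(worlds)
res = minimize(neg_entropy, np.full(n, 1.0 / n), bounds=[(0, 1)] * n,
               constraints=[{"type": "eq",
                             "fun": lambda q, i=i: program_constraints(q)[i]}
                            for i in range(4)])
me = res.x
# ME-entailed probability of flying for non-penguin birds: the rule
# (fly | bird)[0.9] is inherited rather than drowned by the penguin exception.
num = prob(me, lambda w: w[0] and not w[1] and w[2])
den = prob(me, lambda w: w[0] and not w[1])
print("P_ME(fly | bird, not penguin) =", num / den)
```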


Artificial Intelligence | 2002

Explanations, belief revision and defeasible reasoning

Marcelo Alejandro Falappa; Gabriele Kern-Isberner; Guillermo Ricardo Simari

We present different constructions for nonprioritized belief revision, that is, belief changes in which the input sentences are not always accepted. First, we present the concept of explanation in a deductive way. Second, we define multiple revision operators with respect to sets of sentences (representing explanations), giving representation theorems. Finally, we relate the formulated operators with argumentative systems and default reasoning frameworks.


Archive | 2000

Methoden wissensbasierter Systeme

Christoph Beierle; Gabriele Kern-Isberner



Annals of Mathematics and Artificial Intelligence | 2004

A Thorough Axiomatization of a Principle of Conditional Preservation in Belief Revision

Gabriele Kern-Isberner

Although the crucial role of if-then-conditionals for the dynamics of knowledge has been known for several decades, they do not seem to fit well in the framework of classical belief revision theory. In particular, the propositional paradigm of minimal change guiding the AGM-postulates of belief revision proved to be inadequate for preserving conditional beliefs under revision. In this paper, we present a thorough axiomatization of a principle of conditional preservation in a very general framework, considering the revision of epistemic states by sets of conditionals. This axiomatization is based on a nonstandard approach to conditionals, which focuses on their dynamic aspects, and uses the newly introduced notion of conditional valuation functions as representations of epistemic states. In this way, probabilistic revision as well as possibilistic revision and the revision of ranking functions can all be dealt with within one framework. Moreover, we show that our approach can also be applied in a merely qualitative environment, extending AGM-style revision to the proper handling of conditional beliefs.
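For readers unfamiliar with conditional valuation functions, a ranking function (ordinal conditional function, OCF) is one qualitative instance. The toy sketch below shows only the standard acceptance test for a conditional under such a function, not the revision operators axiomatized in the paper; the worlds and ranks are illustrative assumptions.

```python
# Toy sketch of a ranking function (OCF) as one instance of a conditional
# valuation function, with the usual acceptance test for a conditional (B|A):
# kappa accepts (B|A) iff kappa(A and B) < kappa(A and not B).
import itertools

VARS = ("bird", "fly")
worlds = list(itertools.product([False, True], repeat=len(VARS)))

# A ranking function assigns each world a degree of implausibility (0 = most plausible).
kappa = {
    (True,  True):  0,   # bird that flies
    (True,  False): 1,   # bird that does not fly
    (False, False): 0,   # non-bird that does not fly
    (False, True):  2,   # non-bird that flies
}

def rank(formula):
    """Rank of a formula = minimum rank of its models (infinity if unsatisfiable)."""
    ranks = [kappa[w] for w in worlds if formula(dict(zip(VARS, w)))]
    return min(ranks) if ranks else float("inf")

def accepts(antecedent, consequent):
    """Does kappa accept the conditional (consequent | antecedent)?"""
    return (rank(lambda v: antecedent(v) and consequent(v))
            < rank(lambda v: antecedent(v) and not consequent(v)))

print(accepts(lambda v: v["bird"], lambda v: v["fly"]))   # True: birds normally fly
```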


European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty | 1999

Probabilistic Logic Programming under Maximum Entropy

Thomas Lukasiewicz; Gabriele Kern-Isberner

In this paper, we focus on the combination of probabilistic logic programming with the principle of maximum entropy. We start by defining probabilistic queries to probabilistic logic programs and their answer substitutions under maximum entropy. We then present an efficient linear programming characterization for the problem of deciding whether a probabilistic logic program is satisfiable. Finally, and as a central contribution of this paper, we introduce an efficient technique for approximative probabilistic logic programming under maximum entropy. This technique reduces the original entropy maximization task to solving a modified and relatively small optimization problem.
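The satisfiability question can be illustrated with the naive world-based linear program; the sketch below encodes two illustrative conditionals this way and checks feasibility with an off-the-shelf LP solver. This is the textbook formulation, not the paper's more compact characterization, and the program is a made-up example.

```python
# Naive linear-programming view of satisfiability: a set of probabilistic
# conditionals (B|A)[p] is satisfiable iff the linear system over world
# probabilities has a solution. Toy formulation for illustration only.
import itertools
import numpy as np
from scipy.optimize import linprog

worlds = list(itertools.product([0, 1], repeat=2))   # (a, b)

def row(antecedent, consequent, p):
    # (B|A)[p] becomes: sum over worlds of q_w * (1{A and B} - p * 1{A}) = 0.
    return [(1.0 if (antecedent(w) and consequent(w)) else 0.0)
            - p * (1.0 if antecedent(w) else 0.0) for w in worlds]

A_eq = [
    row(lambda w: True,      lambda w: w[0], 0.7),   # (a | True)[0.7]
    row(lambda w: w[0] == 1, lambda w: w[1], 0.4),   # (b | a)[0.4]
    [1.0] * len(worlds),                              # normalization row
]
b_eq = [0.0, 0.0, 1.0]

res = linprog(c=np.zeros(len(worlds)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(worlds))
print("satisfiable:", res.success)
```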


Knowledge Engineering Review | 2011

Review: On the evolving relation between belief revision and argumentation

Marcelo Alejandro Falappa; Alejandro J. García; Gabriele Kern-Isberner; Guillermo Ricardo Simari

Research on the relation between Belief Revision and Argumentation has always been fruitful in both directions: some argumentation formalisms can be used to define belief change operators, and belief change techniques have also been used for modeling the dynamics of beliefs in argumentation formalisms. In this paper, we give a historical perspective on how belief revision has evolved in the last three decades, and how it has been combined with argumentation. First, we will recall the foundational works concerning the links between both areas. On the basis of such insights, we will present a conceptual view on this topic and some further developments. We offer a glimpse into the future of research in this area based on the understanding of argumentation and belief revision as complementary, mutually useful disciplines.


International Journal of Intelligent Systems | 2004

Belief revision and information fusion on optimum entropy

Gabriele Kern-Isberner; Wilhelm Rödder

This article presents new methods for probabilistic belief revision and information fusion. By making use of the information theoretical principles of optimum entropy (ME principles), we define a generalized revision operator that aims at simulating the human learning of lessons, and we introduce a fusion operator that handles probabilistic information faithfully. This ME-fusion operator satisfies basic demands, such as commutativity and the Pareto principle. A detailed analysis shows that it merges the corresponding epistemic states. Furthermore, it induces a numerical fusion operator that computes the information theoretical mean of probabilities.
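The abstract's "information theoretical mean" admits a natural reading as log-linear pooling, i.e. a normalized geometric mean of probabilities; the toy sketch below assumes that reading and is not claimed to reproduce the ME-fusion operator actually defined in the paper.

```python
# Toy sketch of fusing two probability distributions by log-linear pooling
# (a normalized weighted geometric mean). This is only an assumed reading of
# the "information theoretical mean" mentioned in the abstract, not necessarily
# the paper's ME-fusion operator.
import numpy as np

def log_linear_pool(p, q, w=0.5):
    """Normalized weighted geometric mean of two distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    fused = (p ** w) * (q ** (1.0 - w))
    return fused / fused.sum()

p = [0.6, 0.3, 0.1]
q = [0.2, 0.5, 0.3]
print(log_linear_pool(p, q))   # fused distribution
print(log_linear_pool(q, p))   # same result: commutative for w = 0.5
```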


Information Systems | 1996

Representation and extraction of information by probabilistic logic

Wilhelm Rödder; Gabriele Kern-Isberner

In general, a probabilistic knowledge base consists of a joint probability distribution on discrete random variables. Though it allows for easy computability and efficient propagation methods, its inherent knowledge is hardly accessible to the user. The concept introduced in this paper permits an interactive communication between man and machine by use of probabilistic logic: the user is able to convey all available know-how to the system, and conversely, knowledge embodied by the distribution is revealed in an understandable way. Uncertain rules constitute the link between commonsense and probabilistic knowledge representation. The concept developed in this paper is partly realized in the probabilistic expert system shell SPIRIT. An application of SPIRIT to a real-life example is described in the appendix.
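The basic idea of reading an uncertain rule off a joint distribution can be sketched in a few lines; the example below uses made-up variables and numbers and does not reflect SPIRIT's actual representation or propagation machinery.

```python
# Minimal sketch: a joint distribution over discrete variables, and the degree
# of an uncertain rule "if smoker then cough" read off it as a conditional
# probability. Variables and numbers are illustrative only.
import itertools

VARS = ("smoker", "cough")
worlds = list(itertools.product([False, True], repeat=len(VARS)))
joint = dict(zip(worlds, [0.55, 0.15, 0.10, 0.20]))   # probabilities sum to 1

def prob(event):
    """Probability of an event, given as a predicate over variable assignments."""
    return sum(p for w, p in joint.items() if event(dict(zip(VARS, w))))

def rule_probability(antecedent, consequent):
    """Degree of the uncertain rule 'if antecedent then consequent'."""
    return prob(lambda v: antecedent(v) and consequent(v)) / prob(antecedent)

print(rule_probability(lambda v: v["smoker"], lambda v: v["cough"]))  # P(cough | smoker)
```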


Journal of Philosophical Logic | 2012

Prioritized and Non-prioritized Multiple Change on Belief Bases

Marcelo Alejandro Falappa; Gabriele Kern-Isberner; Maurício D. Luís Reis; Guillermo Ricardo Simari

In this article we explore multiple change operators, i.e., operators in which the epistemic input is a set of sentences instead of a single sentence. We propose two types of change: prioritized change, in which the input set is fully accepted, and symmetric change, where the epistemic state and the epistemic input are treated equally. For both kinds of operators we propose a set of postulates and present different constructions: kernel changes and partial meet changes.
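To give a feel for the kernel-style construction, the toy sketch below works in a deliberately impoverished setting where beliefs are propositional literals and the only minimal inconsistent subsets are complementary pairs; it is not one of the paper's operators, which are defined for arbitrary sentences with full logical entailment.

```python
# Toy sketch of a prioritized multiple change in kernel style, in a simplified
# setting: beliefs are literals such as "p" and "-p", and kernels (minimal
# inconsistent subsets) are just complementary pairs. This only shows the shape
# of the construction, not the operators defined in the paper.

def kernels(base):
    """Minimal inconsistent subsets: here, just complementary literal pairs."""
    return [frozenset({lit, "-" + lit}) for lit in base
            if not lit.startswith("-") and "-" + lit in base]

def incision(kernel_sets, protected):
    """Give up at least one element of every kernel, preferring unprotected ones."""
    cut = set()
    for k in kernel_sets:
        candidates = sorted(k - protected) or sorted(k)
        cut.add(candidates[0])
    return cut

def prioritized_multiple_revision(base, inputs):
    """Accept the whole input set, then restore consistency at the expense of the base."""
    expanded = set(base) | set(inputs)
    return expanded - incision(kernels(expanded), protected=set(inputs))

print(sorted(prioritized_multiple_revision({"p", "q", "-r"}, {"r", "s"})))
# -> ['p', 'q', 'r', 's']
```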

Collaboration


Dive into Gabriele Kern-Isberner's collaboration.

Top Co-Authors

Matthias Thimm, University of Koblenz and Landau

Christian Eichhorn, Technical University of Dortmund

Patrick Krümpelmann, Technical University of Dortmund

Marco Wilhelm, Technical University of Dortmund

Marco Ragni, University of Freiburg