Nina Gierasimczuk
University of Amsterdam
Publications
Featured research published by Nina Gierasimczuk.
Tbilisi Symposium on Logic, Language and Computation | 2007
Nina Gierasimczuk
This paper is concerned with a possible mechanism for learning the meanings of quantifiers in natural language. The meaning of a natural language construction is identified with a procedure for recognizing its extension. Acquisition of natural language quantifiers is therefore taken to consist in collecting procedures for computing their denotations. A method for encoding the classes of finite models corresponding to given quantifiers is shown: each such class is represented by an appropriate language. Some facts describing dependencies between classes of quantifiers and classes of recognizing devices are presented. In the second part of the paper, examples of syntax-learning models are shown, and new results in quantifier learning are presented on the basis of these models. Finally, the question of the adequacy of syntax-learning tools for describing the process of semantic learning is raised.
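As a rough illustration of the encoding idea described above (a hedged sketch, not taken from the paper; all function names are hypothetical), a finite model for "Q As are B" can be written as a binary word, one symbol per element of A, so that recognizing the quantifier's extension becomes a language-recognition problem, and different quantifiers call for different classes of recognizing devices:

```python
# Illustrative sketch: quantifier meanings as language-recognition procedures.
# A finite model (A, B) is encoded as a binary word, one symbol per element
# of A: '1' if the element is also in B, '0' otherwise.

def encode_model(A, B):
    """Encode a finite model (A, B) as a word over {0, 1}."""
    return "".join("1" if x in B else "0" for x in sorted(A))

def at_least_three(word):
    """Acceptor for 'at least three': a regular language, recognizable by a
    finite automaton that counts 1s up to three."""
    return word.count("1") >= 3

def most(word):
    """Acceptor for 'most': comparing the number of 1s and 0s is not a
    regular property, so a finite automaton no longer suffices."""
    return word.count("1") > word.count("0")

A = {"a1", "a2", "a3", "a4"}
B = {"a1", "a3", "a4"}
w = encode_model(A, B)                  # here: "1011"
print(w, at_least_three(w), most(w))    # 1011 True True
```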
Theoretical Aspects of Rationality and Knowledge | 2011
Alexandru Baltag; Nina Gierasimczuk; Sonja Smets
We analyze the learning power of iterated belief revision methods, and in particular their universality: whether or not they can learn everything that can be learnt. We look at three popular methods: conditioning, lexicographic revision and minimal revision. Our main result is that conditioning and lexicographic revision are universal on arbitrary epistemic states, provided that the observational setting is sound and complete (only true data are observed, and all true data are eventually observed) and provided that a non-standard (non-well-founded) prior plausibility relation is allowed. We show that a standard (well-founded) belief-revision setting is in general too narrow for this. We also show that minimal revision is not universal. Finally, we consider situations in which observational errors (false observations) may occur. Given a fairness condition (saying that only finitely many errors occur, and that every error is eventually corrected), we show that lexicographic revision is still universal in this setting, while the other two methods are not.
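For intuition, here is a hedged Python sketch of two of the revision policies named above, using standard textbook definitions rather than the paper's formal setting; the representation of a plausibility order as a list and the function names are illustrative assumptions:

```python
# A plausibility order is represented as a list of worlds, from most to
# least plausible; an observation is a predicate on worlds.

def conditioning(order, observation):
    """Conditioning (update): discard all worlds refuted by the observation."""
    return [w for w in order if observation(w)]

def lexicographic(order, observation):
    """Lexicographic revision: promote all worlds satisfying the observation
    above the rest, preserving relative order within each group."""
    satisfied = [w for w in order if observation(w)]
    refuted = [w for w in order if not observation(w)]
    return satisfied + refuted

worlds = ["h1", "h2", "h3", "h4"]           # candidate hypotheses
obs = lambda w: w in {"h2", "h4"}           # data consistent only with h2, h4
print(conditioning(worlds, obs))            # ['h2', 'h4']
print(lexicographic(worlds, obs))           # ['h2', 'h4', 'h1', 'h3']
# Current belief = the most plausible worlds, i.e. the head of the list.
```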
Lecture Notes in Computer Science | 2009
Nina Gierasimczuk; Lena Kurzen; Fernando R. Velázquez-Quesada
In formal approaches to inductive learning, the ability to learn is understood as the ability to single out a correct hypothesis from a range of possibilities. Although most of the existing research focuses on the characteristics of the learner, in many paradigms the significance of the teacher's abilities and strategies is in fact undeniable. Motivated by this observation, this paper highlights the interactive nature of learning by showing its relation with games. We show how learning can be seen as a sabotage-type game between Teacher and Learner, and we present different variants based on the level of cooperativeness and the actions available to the players, characterizing the existence of winning strategies by formulas of Sabotage Modal Logic and analyzing their complexity. We also give a two-way conceptual account of how to further combine games and learning: we propose to use game theory to analyze the grammar inference approach, and moreover, we indicate that existing inductive inference games can be analyzed using learning theory tools. Our work aims at unifying the game-theoretical and logical approaches to formal learning theory.
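For readers unfamiliar with sabotage games, the following hedged Python sketch shows the standard version (not the paper's specific Teacher–Learner variants; the function name and example graph are hypothetical): one player moves along directed edges toward a goal vertex, and after each move the opponent deletes one edge.

```python
def learner_wins(edges, position, goal, learner_moves=True):
    """Naive search of the sabotage game tree (exponential; tiny graphs only)."""
    if position == goal:
        return True
    if learner_moves:
        successors = [v for (u, v) in edges if u == position]
        return any(learner_wins(edges, v, goal, False) for v in successors)
    if not edges:
        return False                    # mover is stuck away from the goal
    # The saboteur removes the edge that is worst for the mover.
    return all(learner_wins(edges - {e}, position, goal, True) for e in edges)

# The mover starts at 's' and tries to reach 'g'.
E = frozenset({("s", "a"), ("a", "g"), ("s", "g")})
print(learner_wins(E, "s", "g"))        # True: moving straight to 'g' wins
```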
Language and Automata Theory and Applications | 2009
Nina Gierasimczuk
This work provides a comparison of learning by erasing [1] and iterated epistemic update [2] as analyzed in dynamic epistemic logic (see, e.g., [3]). We show that finite identification can be modelled in dynamic epistemic logic and that the elimination process of learning by erasing can be seen as iterated belief revision modelled in dynamic doxastic logic.
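A schematic Python rendering of the analogy (not the paper's formal definitions; it assumes the simplest erasing strategy, namely erasing every hypothesis inconsistent with the positive data seen so far, and the function name is hypothetical): the current conjecture is the first hypothesis not yet erased, and erasing amounts to eliminating possibilities, just as a hard epistemic update does.

```python
def learn_by_erasing(hypotheses, stream):
    """hypotheses: list of (name, language-as-set), ordered by priority.
    stream: iterable of observed elements (positive data).
    Yields the conjecture after each datum."""
    alive = list(hypotheses)
    for datum in stream:
        # erase every hypothesis whose language does not contain the datum
        alive = [(name, lang) for (name, lang) in alive if datum in lang]
        yield alive[0][0] if alive else None

H = [("L1", {0}), ("L2", {0, 1}), ("L3", {0, 1, 2})]
print(list(learn_by_erasing(H, [0, 1, 2])))   # ['L1', 'L2', 'L3']
```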
Springer International Publishing | 2014
Nina Gierasimczuk; Vincent F. Hendricks; Dick de Jongh
Learning and learnability have been long-standing topics of interest within the linguistic, computational, and epistemological accounts of inductive inference. Johan van Benthem’s vision of the “dynamic turn” has not only brought renewed life to research agendas in logic as the study of information processing, but has likewise helped bring logic and learning into close proximity. This proximity relation is examined with respect to learning and belief revision, updating and efficiency, and with respect to how learnability fits in the greater scheme of dynamic epistemic logic and scientific method.
The Computer Journal | 2013
Nina Gierasimczuk; Dick de Jongh
This work is concerned with finite identifiability of languages from positive data. We focus on the characterization of finite identifiability [Mukouchi (1992), Lange and Zeugmann (1992)], which uses definite finite tell-tale sets (DFTTs for short): finite subsets of languages that are uniquely characteristic of them. We introduce preset learners, learning functions that explicitly use (collections of) DFTTs, and, in cases where there exist only finitely many DFTTs for each language, strict preset learners, which use this whole finite collection. We also introduce the concept of a fastest learner, a learner that comes up with the right conjecture on any input string that objectively leaves only the right choice of language. We study the use of minimal DFTTs and their influence on the speed of finite identification. We show that: (a) in the case of finite collections of finite sets, finding a minimal DFTT is polynomial-time computable, while finding a minimal-size DFTT is NP-complete; (b) in the general case, finite identifiability, minimal strict preset finite identifiability, and fastest finite identifiability are mutually non-equivalent. Finally, we discuss the relevance of this work for dynamic epistemic logic.
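A hedged Python sketch of the notions of DFTT and preset learner (definitions in the spirit of Mukouchi (1992); the code, example languages, and function names are only illustrative): a definite finite tell-tale of a language L_i is a finite subset of L_i contained in no other language of the class, and a preset learner answers i as soon as the data seen cover some DFTT of L_i.

```python
def is_dftt(D, i, languages):
    """Check whether the finite set D is a DFTT of languages[i]."""
    return D <= languages[i] and all(
        not D <= L for j, L in enumerate(languages) if j != i
    )

def preset_learner(dftts, stream):
    """dftts: dict mapping a language index to one of its DFTTs.
    Yields the conjecture after each incoming datum ('?' = undecided)."""
    seen = set()
    for datum in stream:
        seen.add(datum)
        yield next((i for i, D in dftts.items() if D <= seen), "?")

langs = [{1, 2}, {2, 3}, {1, 4}]
dftts = {0: frozenset({1, 2}), 1: frozenset({3}), 2: frozenset({4})}
assert all(is_dftt(dftts[i], i, langs) for i in dftts)
print(list(preset_learner(dftts, [1, 2])))    # ['?', 0]
```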
Electronic Proceedings in Theoretical Computer Science | 2015
Alexandru Baltag; Nina Gierasimczuk; Sonja Smets
The 15th Conference on Theoretical Aspects of Rationality and Knowledge (TARK) took place at Carnegie Mellon University, Pittsburgh, USA, from June 4 to 6, 2015. The mission of the TARK conferences is to bring together researchers from a wide variety of fields, including Artificial Intelligence, Cryptography, Distributed Computing, Economics and Game Theory, Linguistics, Philosophy, and Psychology, in order to further our understanding of interdisciplinary issues involving reasoning about rationality and knowledge. These proceedings consist of a subset of the papers and abstracts presented at the conference.
Theoretical Aspects of Rationality and Knowledge | 2011
Nina Gierasimczuk; Jakub Szymanik
We study a generalization of the Muddy Children puzzle by allowing public announcements with arbitrary generalized quantifiers. We propose a new concise logical modelling of the puzzle based on the number triangle representation of quantifiers. Our general aim is to discuss the possibility of epistemic modelling that is tailored to specific informational dynamics. Moreover, we show that the puzzle is solvable for any number of agents if and only if the quantifier in the announcement is positively active (satisfies a form of variety).
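A hedged simulation sketch of the generalized puzzle (a brute-force epistemic model, not the paper's number-triangle modelling; function names and the examples are illustrative assumptions): worlds are bit-vectors recording who is muddy, the initial announcement keeps only the worlds where the quantifier holds of the number of muddy children, and each round publicly reveals who already knows their own state.

```python
from itertools import product

def knows(world, i, worlds):
    """Child i knows her state iff all worlds she cannot tell apart from
    `world` (differing at most at position i) agree on position i."""
    indist = [w for w in worlds
              if all(w[j] == world[j] for j in range(len(world)) if j != i)]
    return len({w[i] for w in indist}) == 1

def muddy_children(actual, quantifier, max_rounds=20):
    n = len(actual)
    worlds = [w for w in product((0, 1), repeat=n) if quantifier(sum(w))]
    for rnd in range(1, max_rounds + 1):
        answers = tuple(knows(actual, i, worlds) for i in range(n))
        if all(answers):
            return rnd                   # every child knows her own state
        # public announcement of who knows and who does not
        worlds = [w for w in worlds
                  if tuple(knows(w, i, worlds) for i in range(n)) == answers]
    return None                          # never solved within max_rounds

# Three children, two of them muddy.
print(muddy_children((1, 1, 0), lambda k: k >= 1))   # informative: solved
print(muddy_children((1, 1, 0), lambda k: True))     # trivial: returns None
```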
Information & Computation | 2011
Cédric Dégremont; Nina Gierasimczuk
Formal learning theory constitutes an attempt to describe and explain the phenomenon of learning, in particular of language acquisition. The considerations in this domain are also applicable in the philosophy of science, where they can be interpreted as a description of the process of scientific inquiry. The theory focuses on various properties of the process of hypothesis change over time. Treating conjectures as informational states, we link the process of conjecture change to epistemic update. We reconstruct and analyze the temporal aspect of learning in the context of dynamic and temporal logics of epistemic change. We first introduce the basic formal notions of learning theory and of epistemic logic. We provide a translation of the components of learning scenarios into the domain of epistemic logic. Then we propose a characterization of finite identifiability in an epistemic temporal language. Finally, we discuss consequences and possible extensions of our work.
International Workshop on Dynamic Logic | 2017
Alexandru Baltag; Nina Gierasimczuk; Ana Lucia Vargas Sandoval; Sonja Smets
Building on previous work [4, 5] that bridged Formal Learning Theory and Dynamic Epistemic Logic in a topological setting, we introduce a Dynamic Logic for Learning Theory (DLLT), extending Subset Space Logics [9, 17] with dynamic observation modalities \([o]\varphi \), as well as with a learning operator Open image in new window , which encodes the learner’s conjecture after observing a finite sequence of data Open image in new window . We completely axiomatise DLLT, study its expressivity and use it to characterise various notions of knowledge, belief, and learning.