
Publication


Featured research published by Lea Frermann.


Conference of the European Chapter of the Association for Computational Linguistics | 2014

A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge

Lea Frermann; Ivan Titov; Manfred Pinkal

Scripts representing common sense knowledge about stereotyped sequences of events have been shown to be a valuable resource for NLP applications. We present a hierarchical Bayesian model for unsupervised learning of script knowledge from crowdsourced descriptions of human activities. Events and constraints on event ordering are induced jointly in one unified framework. We use a statistical model over permutations which captures event ordering constraints in a more flexible way than previous approaches. In order to alleviate the sparsity problem caused by using relatively small datasets, we incorporate in our hierarchical model an informed prior on word distributions. The resulting model substantially outperforms a state-of-the-art method on the event ordering task.
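The abstract mentions a statistical model over permutations for capturing event ordering. As an illustrative sketch (not the authors' implementation), a minimal Mallows-style model scores a permutation by its Kendall-tau distance from a canonical event order, so orderings with fewer pairwise inversions get higher probability:

```python
def mallows_logprob(perm, canonical, rho):
    """Unnormalised log-probability under a simple Mallows model:
    log P(perm) = -rho * d(perm, canonical) + const,
    where d is the Kendall-tau distance (number of pairwise inversions)."""
    pos = {event: i for i, event in enumerate(canonical)}
    mapped = [pos[event] for event in perm]
    # Kendall-tau distance: count pairs that are out of order
    inversions = sum(
        1
        for i in range(len(mapped))
        for j in range(i + 1, len(mapped))
        if mapped[i] > mapped[j]
    )
    return -rho * inversions


# Toy script: a stereotyped restaurant event sequence
canonical = ["enter", "order", "eat", "pay", "leave"]
best = mallows_logprob(canonical, canonical, rho=1.0)        # 0.0 (no inversions)
swap = mallows_logprob(
    ["order", "enter", "eat", "pay", "leave"], canonical, 1.0  # one inversion
)
```

The paper's model is richer (a Generalized Mallows model with per-position dispersion, embedded in a hierarchical Bayesian framework), but the core idea is the same: concentrate probability mass on orderings close to a canonical script.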


Conference of the European Chapter of the Association for Computational Linguistics | 2014

Incremental Bayesian Learning of Semantic Categories

Lea Frermann; Mirella Lapata

Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper we focus on categories acquired from natural language stimuli, that is words (e.g., chair is a member of the FURNITURE category). We present a Bayesian model which, unlike previous work, learns both categories and their features in a single process. Our model employs particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference in an incremental setting. Comparison against a state-of-the-art graph-based approach reveals that our model learns qualitatively better categories and demonstrates cognitive plausibility during learning.
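Particle filters, mentioned in the abstract, maintain a population of hypotheses that is reweighted and resampled as each observation arrives. A toy bootstrap filter (an illustrative sketch under simplifying assumptions, not the paper's category-learning model) estimating a coin's bias shows the propagate/weight/resample loop:

```python
import random


def particle_filter(observations, n_particles, init, transition, likelihood):
    """Minimal bootstrap particle filter: propagate, weight, resample."""
    particles = [init() for _ in range(n_particles)]
    for obs in observations:
        particles = [transition(p) for p in particles]          # propagate
        weights = [likelihood(p, obs) for p in particles]       # weight
        total = sum(weights)
        if total == 0:
            continue  # degenerate case: keep particles unchanged
        probs = [w / total for w in weights]
        particles = random.choices(particles, weights=probs, k=n_particles)  # resample
    return particles


random.seed(0)
obs = [1] * 8 + [0] * 2  # 8 heads, 2 tails
final = particle_filter(
    obs,
    n_particles=2000,
    init=lambda: random.random(),           # uniform prior over the bias
    transition=lambda p: p,                 # static parameter, no dynamics
    likelihood=lambda p, o: p if o else 1 - p,
)
est = sum(final) / len(final)  # approximately the Beta(9, 3) posterior mean, 0.75
```

In the paper the particles instead carry category assignments of the words seen so far, which is what makes the inference incremental: each new stimulus updates the particle population without revisiting earlier data.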


Cognitive Science | 2016

Incremental Bayesian Category Learning from Natural Language

Lea Frermann; Mirella Lapata

Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., chair is a member of the furniture category). We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: (a) the acquisition of features that discriminate among categories, and (b) the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that sequentially integrates newly observed data and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories which yield a closer fit to behavioral data than related models, while at the same time acquiring features which characterize the learned categories. (An earlier version of this work was published in Frermann and Lapata.)


North American Chapter of the Association for Computational Linguistics | 2015

A Bayesian Model for Joint Learning of Categories and their Features

Lea Frermann; Mirella Lapata

Categories such as ANIMAL or FURNITURE are acquired at an early age and play an important role in processing, organizing, and conveying world knowledge. Theories of categorization largely agree that categories are characterized by features such as function or appearance and that feature and category acquisition go hand in hand; however, previous work has considered these problems in isolation. We present the first model that jointly learns categories and their features. The set of features is shared across categories, and strength of association is inferred in a Bayesian framework. We approximate the learning environment with natural language text, which allows us to evaluate performance on a large scale. Compared to highly engineered pattern-based approaches, our model is cognitively motivated, knowledge-lean, and learns categories and features which are perceived by humans as more meaningful.
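The feature-category association strengths the abstract describes can be caricatured with a Dirichlet-multinomial posterior mean: given (category, feature) observations, add a symmetric pseudo-count and normalise over the shared feature set. This is a hypothetical simplification for illustration, not the paper's full joint model:

```python
from collections import defaultdict


def association_strengths(observations, alpha=0.5):
    """Posterior mean of a Dirichlet-multinomial over features per category.

    observations: iterable of (category, feature) pairs
    alpha: symmetric Dirichlet pseudo-count shared across all features
    """
    counts = defaultdict(lambda: defaultdict(int))
    features = set()  # feature inventory shared across categories
    for cat, feat in observations:
        counts[cat][feat] += 1
        features.add(feat)
    strengths = {}
    for cat, fcounts in counts.items():
        total = sum(fcounts.values()) + alpha * len(features)
        strengths[cat] = {f: (fcounts[f] + alpha) / total for f in features}
    return strengths


# Hypothetical toy data in the spirit of the paper's examples
obs = [
    ("FURNITURE", "has_legs"), ("FURNITURE", "has_legs"), ("FURNITURE", "indoor"),
    ("ANIMAL", "has_legs"), ("ANIMAL", "breathes"), ("ANIMAL", "breathes"),
]
s = association_strengths(obs)
# s["FURNITURE"]["has_legs"] > s["ANIMAL"]["has_legs"]: the feature is more
# strongly associated with FURNITURE in this sample, yet shared by both.
```

Because the feature inventory is shared, every category assigns some (possibly small) strength to every observed feature, which is the knowledge-lean sharing the abstract emphasises.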


Transactions of the Association for Computational Linguistics | 2016

A Bayesian Model of Diachronic Meaning Change

Lea Frermann; Mirella Lapata


Empirical Methods in Natural Language Processing | 2017

Inducing Semantic Micro-Clusters from Deep Multi-View Representations of Novels

Lea Frermann; György Szarvas


Meeting of the Association for Computational Linguistics | 2012

Cross-lingual Parse Disambiguation based on Semantic Correspondence

Lea Frermann; Francis Bond


North American Chapter of the Association for Computational Linguistics | 2018

Unsupervised Induction of Linguistic Categories with Records of Reading, Speaking, and Writing

Maria Barrett; Lea Frermann; Ana Valeria Gonzalez-Garduño; Anders Søgaard


Transactions of the Association for Computational Linguistics | 2018

Whodunnit? Crime Drama as a Case for Natural Language Understanding

Lea Frermann; Shay B. Cohen; Mirella Lapata


Archive | 2017

Bayesian models of category acquisition and meaning development

Lea Frermann

Collaboration


Dive into Lea Frermann's collaborations.

Top Co-Authors

György Szarvas, Technische Universität Darmstadt
Francis Bond, Nanyang Technological University
Ivan Titov, University of Amsterdam
Maria Barrett, University of Copenhagen
Shay B. Cohen, Carnegie Mellon University