Martha Lewis
University of Oxford
Publications
Featured research published by Martha Lewis.
Electronic Proceedings in Theoretical Computer Science | 2016
Josef Bolt; Bob Coecke; Fabrizio Genovese; Martha Lewis; Dan Marsden; Robin Piedeleu
We propose applying the categorical compositional scheme of [6] to conceptual space models of cognition. In order to do this, we introduce the category of convex relations as a new setting for categorical compositional semantics, emphasizing the convex structure important to conceptual space applications. We show how conceptual spaces for composite types such as adjectives and verbs can be constructed. We illustrate this new model with detailed examples.
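As a rough illustration of the kind of composition at work (not the category of convex relations itself), the sketch below treats concepts as axis-aligned boxes, a simple family of convex regions, and lets an adjective act on a noun by intersecting their regions. The spaces, dimensions and values are hypothetical.

```python
# Toy sketch only: concepts as axis-aligned boxes (a simple kind of convex
# region) in a conceptual space; an adjective modifies a noun by
# intersecting their regions dimension-wise.

def intersect(box_a, box_b):
    """Intersect two boxes given as {dimension: (low, high)}."""
    out = {}
    for dim in box_a.keys() & box_b.keys():
        lo = max(box_a[dim][0], box_b[dim][0])
        hi = min(box_a[dim][1], box_b[dim][1])
        if lo > hi:
            return None  # empty region: the combination has no instances
        out[dim] = (lo, hi)
    return out

# Hypothetical regions on "hue" and "size" dimensions (values illustrative).
banana = {"hue": (0.10, 0.25), "size": (0.2, 0.4)}
yellow = {"hue": (0.12, 0.20), "size": (0.0, 1.0)}

print(intersect(yellow, banana))  # convex region for "yellow banana"
```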
International Journal of Approximate Reasoning | 2014
Martha Lewis; Jonathan Lawry
We introduce a model for the linguistic hedges ‘very’ and ‘quite’ within the label semantics framework, combined with the prototype and conceptual spaces theories of concepts. The proposed model emerges naturally from the representational framework we use and, as such, has a clear semantic grounding. We give generalisations of these hedge models and show that they can be composed with themselves and with other functions, going on to examine their behaviour in the limit of composition.
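To illustrate what behaviour "in the limit of composition" means, the toy code below uses classic Zadeh-style hedges (‘very’ as squaring, ‘quite’ as a square root) rather than the label-semantics model of the paper; repeated application drives membership values towards 0 or 1.

```python
# Illustrative only: Zadeh-style hedges, not the paper's label-semantics model.

def very(mu):
    return mu ** 2      # intensifying hedge

def quite(mu):
    return mu ** 0.5    # weakening hedge

def compose(hedge, n, mu):
    """Apply a hedge n times to a membership value mu."""
    for _ in range(n):
        mu = hedge(mu)
    return mu

mu = 0.8
for n in (1, 2, 5, 20):
    print(n, compose(very, n, mu), compose(quite, n, mu))
# Repeated 'very' drives memberships below 1 towards 0,
# while repeated 'quite' pushes them towards 1.
```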
Workshop on Logic, Language, Information and Computation | 2017
Bob Coecke; Fabrizio Genovese; Martha Lewis; Dan Marsden
Categorical compositional models of natural language exploit grammatical structure to calculate the meaning of sentences from the meanings of individual words. This approach outperforms conventional techniques for some standard NLP tasks. More recently, similar compositional techniques have been applied to conceptual space models of cognition.
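The general recipe referred to here can be sketched numerically: a transitive verb is a third-order tensor that is contracted with subject and object vectors along the wires dictated by the grammar, leaving a vector in the sentence space. The dimensions and vectors below are illustrative, not taken from any of the cited models.

```python
# Minimal sketch of the compositional recipe (illustrative vectors only):
# a transitive verb as a 3rd-order tensor, contracted with subject and
# object vectors according to the grammar subject-verb-object.
import numpy as np

d = 4  # toy dimension of the noun space
rng = np.random.default_rng(0)

subj = rng.random(d)           # meaning of the subject noun
obj = rng.random(d)            # meaning of the object noun
verb = rng.random((d, d, d))   # verb tensor: (subject, sentence, object)

# Contract the subject and object wires, leaving a vector in sentence space.
sentence = np.einsum('i,isj,j->s', subj, verb, obj)
print(sentence.shape)  # (4,)
```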
Theoretical Computer Science | 2018
Bob Coecke; Fabrizio Genovese; Martha Lewis; Dan Marsden; Alex Toumi
Categorical compositional models of natural language exploit grammatical structure to calculate the meaning of sentences from the meanings of individual words. This approach outperforms conventional techniques for some standard NLP tasks. More recently, similar compositional techniques have been applied to conceptual space models of cognition.
International Symposium on Quantum Interaction | 2016
Yaared Al-Mehairi; Bob Coecke; Martha Lewis
We accommodate the Integrated Connectionist/Symbolic Architecture (ICS) of [32] within the categorical compositional semantics (CatCo) of [13], forming a model of categorical compositional cognition (CatCog). This resolves intrinsic problems with ICS such as the fact that representations inhabit an unbounded space and that sentences with differing tree structures cannot be directly compared. We do so in a way that makes the most of the grammatical structure available, in contrast to strategies like circular convolution. Using the CatCo model also allows us to make use of tools developed for CatCo such as the representation of ambiguity and logical reasoning via density matrices, structural meanings for words such as relative pronouns, and addressing over- and under-extension, all of which are present in cognitive processes. Moreover, the CatCog framework is sufficiently flexible to allow for entirely different representations of meaning, such as conceptual spaces. Interestingly, since the CatCo model was largely inspired by categorical quantum mechanics, so too is CatCog.
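One of the tools mentioned, representing ambiguity via density matrices, can be sketched as follows: an ambiguous word is a weighted mixture of its sense vectors rather than a single vector, and the mixture's von Neumann entropy reflects the ambiguity. The vectors and weights below are hypothetical.

```python
# Sketch of "ambiguity via density matrices": an ambiguous word as a mixture
# of its sense vectors rather than a single vector. Vectors are illustrative.
import numpy as np

def density(vectors, weights):
    """Weighted mixture of pure states |v><v| built from unit vectors."""
    dim = len(vectors[0])
    rho = np.zeros((dim, dim))
    for v, w in zip(vectors, weights):
        v = v / np.linalg.norm(v)
        rho += w * np.outer(v, v)
    return rho

# Two hypothetical senses of "bank": financial institution vs. river bank.
bank_money = np.array([1.0, 0.2, 0.0])
bank_river = np.array([0.0, 0.1, 1.0])

rho = density([bank_money, bank_river], weights=[0.5, 0.5])

# A pure (unambiguous) state has von Neumann entropy 0; here it is positive.
eigvals = np.linalg.eigvalsh(rho)
entropy = -sum(p * np.log(p) for p in eigvals if p > 1e-12)
print(round(entropy, 3))
```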
Simulation of Adaptive Behavior | 2014
Martha Lewis; Anna Fedor; Michael Öllinger; Eörs Szathmáry; Chrisantha Fernando
We investigate reaction times for classification of visual stimuli composed of combinations of shapes, to distinguish between parallel and serial processing of stimuli. Reaction times in a visual XOR task are slower than in AND/OR tasks in which pairs of shapes are categorised. This behaviour is explained by the time needed to perceive shapes in the various tasks, using a parallel drift diffusion model. The parallel model explains reaction times in an extension of the XOR task with up to 7 shapes. Subsequently, the behaviour is explained by a combined model that assumes perceptual chunking: shapes within a chunk are processed in parallel, and the chunks themselves in series. The pure parallel model also explains reaction times for ALL and EXISTS tasks. An extension to the perceptual chunking model adds the time taken to apply a logical rule. Including this extra parameter improves the fit to the data, but model selection does not support it. We further simulate this behaviour using an echo state network, successfully recreating the pattern seen in humans.
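A minimal drift diffusion simulation, with illustrative parameters rather than the fitted values from the paper, shows how reaction times arise in such a model: noisy evidence accumulates with some drift until it reaches a bound, and the time to reach the bound plus a non-decision component gives the reaction time.

```python
# Toy drift diffusion simulation (parameters illustrative, not fitted values).
import numpy as np

rng = np.random.default_rng(0)

def simulate_rt(drift=1.0, bound=1.0, noise=1.0, dt=0.001, non_decision=0.3):
    """One trial: accumulate noisy evidence until it hits +/- bound."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return non_decision + t, x > 0  # (reaction time in seconds, upper bound?)

rts = [simulate_rt()[0] for _ in range(200)]
print(round(float(np.mean(rts)), 3))  # mean simulated reaction time
```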
Artificial Intelligence | 2016
Martha Lewis; Jonathan Lawry
arXiv: Computation and Language | 2016
Bob Coecke; Martha Lewis; Dan Marsden
arXiv: Artificial Intelligence | 2015
Bob Coecke; Martha Lewis
arXiv: Logic in Computer Science | 2017
Joe Bolt; Bob Coecke; Fabrizio Genovese; Martha Lewis; Dan Marsden; Robin Piedeleu