Didier Schwab
Universiti Sains Malaysia
Publications
Featured research published by Didier Schwab.
International Conference on Computational Linguistics | 2002
Didier Schwab; Mathieu Lafourcade; Violaine Prince
For meaning representations in NLP, we focus our attention on thematic aspects and conceptual vectors. The learning strategy of conceptual vectors relies on a morphosyntactic analysis of human usage dictionary definitions linked to vector propagation. This analysis currently does not take negation phenomena into account. This work aims at studying the antonymy aspects of negation, with the larger goal of integrating it into the thematic analysis. We present a model based on the idea of symmetry that is compatible with conceptual vectors. Then, we define antonymy functions which allow the construction of an antonymous vector and the enumeration of its potentially antonymic lexical items. Finally, we introduce a measure which evaluates to what extent a given word is an acceptable antonym for a term.
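The abstract does not spell out the antonymy functions, so the following Python sketch is only an illustration of the general idea under strong assumptions: a toy conceptual-vector lexicon, hand-picked antonymic concept pairs acting as symmetry pivots, and a cosine-based acceptability score. None of the names, dimensions, or values come from the paper.

```python
import numpy as np

# Toy conceptual-vector lexicon: each word is a vector over a small set of
# concept dimensions (life/death, heat/cold, light/dark). Purely illustrative
# values, not the vectors learned from dictionary definitions in the paper.
CONCEPTS = ["life", "death", "heat", "cold", "light", "dark"]
LEXICON = {
    "alive": np.array([0.9, 0.1, 0.3, 0.2, 0.4, 0.1]),
    "dead":  np.array([0.1, 0.9, 0.2, 0.3, 0.1, 0.4]),
    "hot":   np.array([0.2, 0.1, 0.9, 0.1, 0.3, 0.1]),
    "cold":  np.array([0.1, 0.2, 0.1, 0.9, 0.1, 0.3]),
}

# Pairs of concept dimensions assumed to be antonymic pivots.
PIVOTS = [(0, 1), (2, 3), (4, 5)]

def antonymous_vector(vec):
    """Build an antonymous vector by swapping the weights of each antonymic
    pair of concept dimensions (a crude symmetry operation)."""
    anti = vec.copy()
    for i, j in PIVOTS:
        anti[i], anti[j] = vec[j], vec[i]
    return anti

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def antonymy_measure(word, candidate):
    """Score how acceptable `candidate` is as an antonym of `word`: similarity
    of the candidate's vector to the antonymous vector of `word`."""
    return cosine(antonymous_vector(LEXICON[word]), LEXICON[candidate])

if __name__ == "__main__":
    for cand in ("dead", "cold", "hot"):
        print(f"antonymy(alive, {cand}) = {antonymy_measure('alive', cand):.3f}")
```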
International Journal of Speech Technology | 2010
Michael Zock; Olivier Ferret; Didier Schwab
No doubt, words play a major role in language production, hence finding them is of vital importance, be it for writing or for speaking (spontaneous discourse production, simultaneous translation). Words are stored in a dictionary, and the general belief holds that the more entries, the better. Yet, to be truly useful the resource should contain not only many entries and a lot of information concerning each one of them, but also adequate navigational means to reveal the stored information. Information access depends crucially on the organization of the data (words) and the access keys (meaning/form), two factors largely overlooked. We will present here some ideas about how an existing electronic dictionary could be enhanced to support a speaker or writer in finding the word he or she is looking for. To this end we suggest adding to an existing electronic dictionary an index based on the notion of association, i.e. words co-occurring in a well-balanced corpus, the latter being assumed to represent the average citizen's knowledge of the world. Before describing our approach, we will briefly take a critical look at the work done by colleagues on automatic, spontaneous or deliberate language production, that is, computer-generated language, simulation of the mental lexicon, or WordNet (WN), to see how adequate these approaches are with regard to our goal.
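As a rough illustration of the kind of association index suggested here, the sketch below builds a simple co-occurrence index from a tokenized corpus and retrieves the words most strongly associated with a given access key. The window size, raw-count weighting, and function names are assumptions made for the example, not the authors' actual resource.

```python
from collections import defaultdict

def build_association_index(sentences, window=5):
    """Count co-occurrences within a sliding window; the resulting index maps
    an access key (any word) to associated words with association strengths."""
    index = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, w in enumerate(tokens):
            for v in tokens[i + 1:i + 1 + window]:
                if v != w:
                    index[w][v] += 1
                    index[v][w] += 1
    return index

def associated_words(index, key, top_n=5):
    """Return the words most strongly associated with `key`."""
    return sorted(index.get(key, {}).items(), key=lambda kv: -kv[1])[:top_n]

if __name__ == "__main__":
    corpus = [
        "the waiter brought the bill to the table".split(),
        "we paid the bill after dinner at the restaurant".split(),
        "the menu listed dishes and the waiter took the order".split(),
    ]
    idx = build_association_index(corpus)
    # No stop-word filtering in this toy sketch, so frequent function words rank high.
    print(associated_words(idx, "bill"))
```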
International Conference on Computational Linguistics | 2008
Michael Zock; Didier Schwab
Words play a major role in language production, hence finding them is of vital importance, be it for speaking or writing. Words are stored in a dictionary, and the general belief holds that the bigger, the better. Yet, to be truly useful the resource should contain not only many entries and a lot of information concerning each one of them, but also adequate means to reveal the stored information. Information access depends crucially on the organization of the data (words) and on the navigational tools. It also depends on the grouping, ranking and indexing of the data, a factor too often overlooked.

We will present here some preliminary results, showing how an existing electronic dictionary could be enhanced to support language producers in finding the word they are looking for. To this end we have started to build a corpus-based association matrix, composed of target words and access keys (meaning elements, related concepts/words), the two being connected at their intersection in terms of weight and type of link, information used subsequently for grouping, ranking and navigation.
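The abstract gives no implementation details, so the snippet below is only a minimal sketch of such an association matrix, assuming typed, weighted links between access keys and target words and a simple summed-weight ranking; all entries, link types, and weights are invented for illustration.

```python
from collections import defaultdict

# Toy association matrix: access key -> {target word: (weight, link type)}.
# Entries are invented; the real matrix would be corpus-derived.
MATRIX = {
    "coffee":  {"espresso": (0.8, "kind-of"), "mug": (0.6, "co-occurrence"),
                "caffeine": (0.7, "contains")},
    "hot":     {"espresso": (0.5, "attribute"), "mug": (0.2, "co-occurrence")},
    "italian": {"espresso": (0.6, "origin")},
}

def candidate_targets(keys):
    """Given a set of access keys (meaning elements the speaker can produce),
    rank target words by the summed weight of their links to those keys."""
    scores = defaultdict(float)
    links = defaultdict(list)
    for key in keys:
        for target, (weight, link_type) in MATRIX.get(key, {}).items():
            scores[target] += weight
            links[target].append((key, link_type))
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [(target, score, links[target]) for target, score in ranked]

if __name__ == "__main__":
    # A speaker who cannot retrieve "espresso" but can produce related cues:
    for target, score, via in candidate_targets({"coffee", "hot", "italian"}):
        print(f"{target:10s} {score:.2f}  via {via}")
```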
International Journal of Web Engineering and Technology | 2013
Didier Schwab; Jérôme Goulian; Andon Tchechmedjiev
Word sense disambiguation (WSD) is a difficult problem for natural language processing. Algorithms that aim to solve the problem focus on the quality of the disambiguation alone and require considerable computational time. In this article, we focus on the study of three unsupervised stochastic algorithms for WSD: a genetic algorithm (GA) and a simulated annealing algorithm (SA) from the state of the art, and our own ant colony algorithm (ACA). The comparison is made both in terms of the worst case computational complexity and of the empirical performance of the algorithms in terms of F1 scores, execution time and evaluation of the semantic relatedness measure. We find that ACA leads to a shorter execution time as well as better results that surpass the first sense baseline and come close to the results of supervised systems on the coarse-grained all words task from SemEval 2007.
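The article compares three stochastic algorithms; as a purely illustrative sketch of one of them, the code below runs simulated annealing over sense assignments, scoring a configuration by a Lesk-like gloss-overlap relatedness. The toy sense inventory, cooling schedule, and parameters are assumptions and do not reproduce the authors' SA, GA, or ACA configurations.

```python
import math
import random

# Toy sense inventory: each ambiguous word has candidate senses with glosses.
# Words and glosses are invented for illustration only.
SENSES = {
    "bank":     {"bank#finance": "institution money deposit loan",
                 "bank#river":   "sloping land beside river water"},
    "deposit":  {"deposit#money": "money placed in a bank account",
                 "deposit#geo":   "layer of sand or mineral left by water"},
    "interest": {"interest#rate":  "money charged on a loan by a bank",
                 "interest#hobby": "curiosity attention activity"},
}

def relatedness(gloss_a, gloss_b):
    """Lesk-like overlap: number of shared gloss words."""
    return len(set(gloss_a.split()) & set(gloss_b.split()))

def configuration_score(assignment):
    """Sum pairwise relatedness of the glosses of the chosen senses."""
    chosen = list(assignment.items())
    return sum(relatedness(SENSES[w1][s1], SENSES[w2][s2])
               for i, (w1, s1) in enumerate(chosen)
               for (w2, s2) in chosen[i + 1:])

def simulated_annealing_wsd(words, steps=2000, t0=2.0, cooling=0.999):
    """Anneal over sense assignments, accepting worse configurations with a
    temperature-dependent probability."""
    current = {w: random.choice(list(SENSES[w])) for w in words}
    score, temp = configuration_score(current), t0
    for _ in range(steps):
        w = random.choice(words)
        candidate = dict(current, **{w: random.choice(list(SENSES[w]))})
        cand_score = configuration_score(candidate)
        if cand_score >= score or random.random() < math.exp((cand_score - score) / temp):
            current, score = candidate, cand_score
        temp *= cooling
    return current, score

if __name__ == "__main__":
    random.seed(0)
    print(simulated_annealing_wsd(["bank", "deposit", "interest"]))
```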
Joint Conference on Lexical and Computational Semantics | 2013
Didier Schwab; Andon Tchechmedjiev; Jérôme Goulian; Mohammad Nasiruddin; Gilles Sérasset; Hervé Blanchon
International Conference on Computational Linguistics | 2012
Didier Schwab; Jérôme Goulian; Andon Tchechmedjiev; Hervé Blanchon
International Conference on Artificial Intelligence | 2007
Didier Schwab; Mathieu Lafourcade
International Conference on Computational Linguistics | 2012
Andon Tchechmedjiev; Jérôme Goulian; Didier Schwab; Gilles Sérasset
The Journal of Cognitive Science | 2011
Michael Zock; Didier Schwab
Computer and Information Technology | 2006
Didier Schwab; Mathieu Lafourcade