Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Geert Heyman is active.

Publication


Featured research published by Geert Heyman.


Conference on Multimedia Modeling | 2016

Cross-Modal Fashion Search

Susana Zoghbi; Geert Heyman; Juan Carlos Gomez; Marie-Francine Moens

In this demo we focus on cross-modal (visual and textual) e-commerce search within the fashion domain. In particular, we demonstrate two tasks: (1) given a query image (without any accompanying text), we retrieve textual descriptions that correspond to the visual attributes in the visual query; and (2) given a textual query that may express an interest in specific visual characteristics, we retrieve relevant images (without leveraging textual meta-data) that exhibit the required visual attributes. The first task is especially useful for managing image collections: online stores may want to automatically organize and mine predominantly visual items according to their attributes without human input. The second task is useful for users who want to find items with specific visual characteristics when no text describing the target image is available. We use state-of-the-art visual and textual features, as well as a state-of-the-art latent variable model to bridge between textual and visual data: bilingual latent Dirichlet allocation. Unlike traditional search engines, we demonstrate a truly cross-modal system that directly bridges between visual and textual content without relying on pre-annotated meta-data.
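
For readers unfamiliar with how a shared latent space enables this kind of retrieval, here is a minimal sketch (not the demonstrated system): it assumes a topic model such as bilingual LDA has already inferred topic distributions for the visual query and for the candidate texts, and simply ranks candidates by cosine similarity in that shared topic space. All variable names and the toy data are hypothetical.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two topic-distribution vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_texts_for_image(image_topics, text_topics):
    """Rank candidate textual descriptions for a query image.

    image_topics : (K,) topic distribution inferred for the visual query
    text_topics  : (N, K) topic distributions inferred for N candidate texts
    Returns candidate indices, most similar first.
    """
    scores = [cosine(image_topics, t) for t in text_topics]
    return np.argsort(scores)[::-1]

# Toy example with K = 4 topics and 3 candidate descriptions (random stand-ins).
rng = np.random.default_rng(0)
query = rng.dirichlet(np.ones(4))               # hypothetical topic vector for the image
candidates = rng.dirichlet(np.ones(4), size=3)  # hypothetical topic vectors for the texts
print(rank_texts_for_image(query, candidates))
```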


International Journal of Computer and Electrical Engineering | 2016

Fashion meets computer vision and NLP at e-commerce search

Susana Zoghbi; Geert Heyman; Juan Carlos Gomez; Marie-Francine Moens

In this paper, we focus on cross-modal (visual and textual) e-commerce search within the fashion domain. In particular, we investigate two tasks: 1) given a query image, we retrieve textual descriptions that correspond to the visual attributes in the query; and 2) given a textual query that may express an interest in specific visual product characteristics, we retrieve relevant images that exhibit the required visual attributes. To this end, we introduce a new dataset that consists of 53,689 images coupled with textual descriptions. The images contain fashion garments that display a great variety of visual attributes, such as different shapes, colors and textures. Unlike previous datasets, the accompanying natural-language text provides only a rough and noisy description of the item in the image. We extensively analyze this dataset in the context of cross-modal e-commerce search, and investigate two state-of-the-art latent variable models to bridge between textual and visual data: bilingual latent Dirichlet allocation and canonical correlation analysis. We use state-of-the-art visual and textual features and report promising results.
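
As a rough illustration of the second latent variable model mentioned above, the sketch below uses scikit-learn's CCA to learn a shared space from paired visual and textual features and to rank candidate images for a textual query. The feature matrices are random placeholders, not the paper's dataset or features.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical paired training data: one row per (image, description) pair.
rng = np.random.default_rng(0)
n_pairs, d_visual, d_text = 200, 64, 40
train_visual = rng.normal(size=(n_pairs, d_visual))  # stand-in for image features
train_text = rng.normal(size=(n_pairs, d_text))      # stand-in for text features

# Learn a shared latent space from the paired data.
cca = CCA(n_components=10)
cca.fit(train_visual, train_text)

# Project candidate images and a textual query into the shared space.
candidate_images = rng.normal(size=(50, d_visual))
query_text = rng.normal(size=(1, d_text))
img_proj = cca.transform(candidate_images)
# transform() only projects the text side alongside an X argument,
# so one (unused) visual row is passed as a placeholder.
_, txt_proj = cca.transform(train_visual[:1], query_text)

# Rank candidate images by cosine similarity to the projected query.
q = txt_proj[0]
scores = img_proj @ q / (np.linalg.norm(img_proj, axis=1) * np.linalg.norm(q) + 1e-12)
print(np.argsort(scores)[::-1][:5])  # indices of the five best-matching images
```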


BMC Bioinformatics | 2018

A deep learning approach to bilingual lexicon induction in the biomedical domain

Geert Heyman; Ivan Vulić; Marie-Francine Moens

Background: Bilingual lexicon induction (BLI) is an important task in the biomedical domain as translation resources are usually available for general language usage, but are often lacking in domain-specific settings. In this article we consider BLI as a classification problem and train a neural network composed of a combination of recurrent long short-term memory and deep feed-forward networks in order to obtain word-level and character-level representations. Results: The results show that the word-level and character-level representations each improve state-of-the-art results for BLI and biomedical translation mining. The best results are obtained by exploiting the synergy between these word-level and character-level representations in the classification model. We evaluate the models both quantitatively and qualitatively. Conclusions: Translation of domain-specific biomedical terminology benefits from the character-level representations compared to relying solely on word-level representations. It is beneficial to take a deep learning approach and learn character-level representations rather than relying on handcrafted representations that are typically used. Our combined model captures the semantics at the word level while also taking into account that specialized terminology often originates from a common root form (e.g., from Greek or Latin).
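
The sketch below is a minimal PyTorch illustration of the general idea of scoring candidate translation pairs with a classifier that combines word-level embeddings and character-level LSTM encodings. The layer sizes, names, and toy inputs are assumptions made for illustration, not the architecture reported in the article.

```python
import torch
import torch.nn as nn

class BLIClassifier(nn.Module):
    """Illustrative sketch (not the paper's exact model): scores a
    (source word, target word) pair as translation / non-translation by
    combining word-level embeddings with character-level LSTM encodings."""

    def __init__(self, n_src_words, n_tgt_words, n_chars,
                 word_dim=100, char_dim=25, char_hidden=50, ff_hidden=200):
        super().__init__()
        self.src_word_emb = nn.Embedding(n_src_words, word_dim)
        self.tgt_word_emb = nn.Embedding(n_tgt_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # One character-level LSTM per language side.
        self.src_char_lstm = nn.LSTM(char_dim, char_hidden, batch_first=True)
        self.tgt_char_lstm = nn.LSTM(char_dim, char_hidden, batch_first=True)
        # Deep feed-forward classifier on the concatenated representations.
        self.ff = nn.Sequential(
            nn.Linear(2 * word_dim + 2 * char_hidden, ff_hidden),
            nn.ReLU(),
            nn.Linear(ff_hidden, 1),
        )

    def forward(self, src_ids, tgt_ids, src_chars, tgt_chars):
        # Word-level representations.
        w_src = self.src_word_emb(src_ids)           # (B, word_dim)
        w_tgt = self.tgt_word_emb(tgt_ids)           # (B, word_dim)
        # Character-level representations: final hidden state of each LSTM.
        _, (h_src, _) = self.src_char_lstm(self.char_emb(src_chars))
        _, (h_tgt, _) = self.tgt_char_lstm(self.char_emb(tgt_chars))
        feats = torch.cat([w_src, w_tgt, h_src[-1], h_tgt[-1]], dim=-1)
        return torch.sigmoid(self.ff(feats)).squeeze(-1)  # P(translation pair)

# Toy forward pass on a batch of two hypothetical word pairs.
model = BLIClassifier(n_src_words=1000, n_tgt_words=1200, n_chars=60)
src_ids = torch.tensor([3, 17])
tgt_ids = torch.tensor([5, 42])
src_chars = torch.randint(1, 60, (2, 8))    # padded character-id sequences
tgt_chars = torch.randint(1, 60, (2, 10))
print(model(src_ids, tgt_ids, src_chars, tgt_chars))
```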


Data Mining and Knowledge Discovery | 2016

C-BiLDA: extracting cross-lingual topics from non-parallel texts by distinguishing shared from unshared content

Geert Heyman; Ivan Vulić; Marie-Francine Moens

We study the problem of extracting cross-lingual topics from non-parallel multilingual text datasets with partially overlapping thematic content (e.g., aligned Wikipedia articles in two different languages). To this end, we develop a new bilingual probabilistic topic model called comparable bilingual latent Dirichlet allocation (C-BiLDA), which is able to deal with such comparable data, and, unlike the standard bilingual LDA model (BiLDA), does not assume the availability of document pairs with identical topic distributions. We present a full overview of C-BiLDA, and show its utility in the task of cross-lingual knowledge transfer for multi-class document classification on two benchmarking datasets for three language pairs. The proposed model outperforms the baseline LDA model, as well as the standard BiLDA model and two standard low-rank approximation methods (CL-LSI and CL-KCCA) used in previous work on this task.
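
The downstream evaluation described above, cross-lingual knowledge transfer for document classification, can be illustrated with a small sketch: once a cross-lingual topic model such as C-BiLDA places documents from both languages in the same topic space, a classifier trained on labelled documents in one language can be applied directly to the other. Everything below (shapes, labels, the random "topic distributions") is made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-document topic distributions produced by a cross-lingual
# topic model; the data here is random and only stands in for real output.
rng = np.random.default_rng(0)
K, n_train, n_test, n_classes = 20, 300, 100, 3

theta_lang_A = rng.dirichlet(np.ones(K), size=n_train)  # labelled docs, language A
labels_A = rng.integers(0, n_classes, size=n_train)
theta_lang_B = rng.dirichlet(np.ones(K), size=n_test)   # unlabelled docs, language B

# Because both languages share the same topic space, a classifier trained on
# language-A topic features can be applied to language-B documents as-is.
clf = LogisticRegression(max_iter=1000).fit(theta_lang_A, labels_A)
predictions = clf.predict(theta_lang_B)
print(predictions[:10])
```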


Conference of the European Chapter of the Association for Computational Linguistics | 2017

Bilingual lexicon induction by learning to combine word-level and character-level representations

Geert Heyman; Ivan Vulić; Marie-Francine Moens


Archive | 2015

Cross-modal attribute recognition in fashion

Susana Zoghbi; Geert Heyman; Juan Carlos Gomez; Marie-Francine Moens


International Conference on Computational Linguistics | 2018

A Flexible and Easy-to-use Semantic Role Labeling Framework for Different Languages

Quynh Ngoc Thi Do; Artuur Leeuwenberg; Geert Heyman; Marie-Francine Moens


Archive | 2018

Smart Computer-Aided Translation Environment (SCATE): Highlights

Vincent Vandeghinste; Tom Vanallemeersch; Bram Bulté; Liesbeth Augustinus; Frank Van Eynde; Joris Pelemans; Lyan Verwimp; Patrick Wambacq; Geert Heyman; Marie-Francine Moens; Iulianna van der Lek-Ciudin; Frieda Steurs; Ayla Rigouts Terryn; Els Lefever; Arda Tezcan; Lieve Macken; Veronique Hoste; Sven Coppers; Jens Brulmans; Jan Van den Bergh; Kris Luyten; Karin Coninx


Proceedings of the 21st European Symposium on Languages for Special Purposes (LSP) | 2017

Translator's methods of acquiring domain-specific terminology. Information retrieval in terminology using lexical Knowledge Patterns

Iulianna van der Lek-Ciudin; Ayla Rigouts Terryn; Geert Heyman; Els Lefever; Frieda Steurs


Proceedings EAMT 2017 | 2017

SCATE – Smart Computer-Aided Translation Environment – Year 3 (/4)

Vincent Vandeghinste; Tom Vanallemeersch; Liesbeth Augustinus; Frank Van Eynde; Joris Pelemans; Lyan Verwimp; Patrick Wambacq; Geert Heyman; Marie-Francine Moens; Iulianna van der Lek-Ciudin; Frieda Steurs; Ayla Rigouts Terryn; Els Lefever; Arda Tezcan; Lieve Macken; Sven Coppers; Jan Van den Bergh; Kris Luyten; Karin Coninx

Collaboration


Dive into Geert Heyman's collaborations.

Top Co-Authors

Marie-Francine Moens (Katholieke Universiteit Leuven)
Frieda Steurs (Katholieke Universiteit Leuven)
Susana Zoghbi (Katholieke Universiteit Leuven)
Frank Van Eynde (Katholieke Universiteit Leuven)
Ivan Vulić (Katholieke Universiteit Leuven)
Joris Pelemans (Katholieke Universiteit Leuven)