
Publication


Featured research published by Susana Zoghbi.


Conference on Information and Knowledge Management (CIKM) | 2013

I pinned it. Where can I buy one like it?: Automatically linking Pinterest pins to online webshops

Susana Zoghbi; Ivan Vulić; Marie-Francine Moens

The information that users post on social network sites often points towards their interests and hobbies, and it can be used to recommend relevant products to them. In this paper we implement and evaluate several information retrieval models for linking the text of Pinterest pins to Amazon webpages (which we call webshops) and for ranking those pages according to the personal interest of the pinner. The results show that models that combine latent concepts composed of related terms with single words yield the best performance.
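
As an illustration only (not the paper's exact retrieval models), the Python sketch below interpolates a word-level similarity with a latent-concept similarity to rank hypothetical webshop pages against a pin's text; the example texts, the LSA stand-in for latent concepts, and the interpolation weight lam are all assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical pin text and candidate webshop pages.
pin_text = "vintage leather satchel bag brown"
shop_pages = [
    "brown leather satchel handbag for women",
    "stainless steel water bottle 750 ml",
    "vintage style crossbody bag in genuine leather",
]

# Word-level evidence: TF-IDF over single words.
vectorizer = TfidfVectorizer()
shop_tfidf = vectorizer.fit_transform(shop_pages)
pin_tfidf = vectorizer.transform([pin_text])
word_scores = cosine_similarity(pin_tfidf, shop_tfidf)[0]

# Latent-concept evidence: a low-rank projection (LSA used here as a simple
# stand-in for latent concepts built from related terms).
svd = TruncatedSVD(n_components=2, random_state=0)
shop_latent = svd.fit_transform(shop_tfidf)
pin_latent = svd.transform(pin_tfidf)
concept_scores = cosine_similarity(pin_latent, shop_latent)[0]

# Combine both sources of evidence; the weight is a free parameter.
lam = 0.5
final_scores = lam * word_scores + (1 - lam) * concept_scores
for idx in np.argsort(-final_scores):
    print(round(float(final_scores[idx]), 3), shop_pages[idx])
```

The key design point the abstract suggests is the combination itself: neither the exact-word match nor the latent-concept match alone, but a mixture of the two scores.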


Information Sciences | 2016

Latent Dirichlet allocation for linking user-generated content and e-commerce data

Susana Zoghbi; Ivan Vulić; Marie-Francine Moens

Automatic linking of online content improves navigation possibilities for end users. We focus on linking content generated by users to other relevant sites. In particular, we study the problem of linking information between different usages of the same language, e.g., colloquial and formal idioms or the language of consumers versus the language of sellers. The challenge is that the same items are described using very distinct vocabularies. As a case study, we investigate a new task of linking textual Pinterest.com pins (colloquial) to online webshops (formal). Given this task, our key insight is that we can learn associations between formal and informal language by utilizing aligned data and probabilistic modeling. Specifically, we thoroughly evaluate three different modeling paradigms based on probabilistic topic modeling: monolingual latent Dirichlet allocation (LDA), bilingual LDA (BiLDA) and a novel multi-idiomatic LDA model (MiLDA). We compare these to the unigram model with Dirichlet prior. Our results for all three topic models reveal the usefulness of modeling the hidden thematic structure of the data through topics, as opposed to the linking model based solely on the standard unigram. Moreover, our proposed MiLDA model is able to deal with intrinsic multi-idiomatic data by considering the shared vocabulary between the aligned document pairs. The proposed MiLDA obtains the largest stability (less variation with changes in parameters) and highest mean average precision scores in the linking task.
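
For readers who want a concrete starting point, the sketch below implements the simplest of the compared setups, a single monolingual LDA trained on both sides of the aligned data, and links a pin to shops by the similarity of their inferred topic distributions. The toy documents, topic count, and use of gensim are assumptions, and the BiLDA and MiLDA models themselves are not reproduced here.

```python
from gensim import corpora, models
from gensim.matutils import cossim

# Hypothetical colloquial (pin) and formal (webshop) documents.
pins = [["cute", "summer", "dress", "floral"],
        ["leather", "boots", "brown", "vintage"]]
shops = [["floral", "print", "summer", "dress", "women"],
         ["genuine", "leather", "ankle", "boots"],
         ["stainless", "steel", "kitchen", "knife", "set"]]

# Train one LDA model on the concatenation of both "languages".
docs = pins + shops
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=4,
                      passes=20, random_state=0)

# Link a pin to shops by similarity of their inferred topic distributions.
pin_topics = lda.get_document_topics(dictionary.doc2bow(pins[0]),
                                     minimum_probability=0.0)
for i, shop in enumerate(shops):
    shop_topics = lda.get_document_topics(dictionary.doc2bow(shop),
                                          minimum_probability=0.0)
    print(i, round(cossim(pin_topics, shop_topics), 3))
```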


Conference on Information and Knowledge Management (CIKM) | 2013

Are words enough?: a study on text-based representations and retrieval models for linking pins to online shops

Susana Zoghbi; Ivan Vulić; Marie-Francine Moens

User-generated content offers opportunities to learn about people's interests and hobbies. We can leverage this information to help users find interesting shops and to help businesses find interested users. However, this content, as posted on social media sites and blogs, is highly noisy and unstructured. In this work we evaluate different textual representations and retrieval models that aim to make sense of social media data for retail applications. Our task is to link the text of pins (from Pinterest.com) to online shops (formed by clustering Amazon.com's products). Our results show that document representations that combine latent concepts with single words yield the best performance.
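
As a hypothetical illustration of how "shops" could be formed by clustering Amazon products (the paper's actual clustering setup is not specified here), the sketch below groups product texts using TF-IDF features and k-means; the product titles and the number of clusters are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical Amazon product titles.
products = [
    "red cotton summer dress",
    "floral maxi dress with belt",
    "men leather wallet bifold",
    "slim leather card holder",
]

X = TfidfVectorizer().fit_transform(products)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Each cluster of products then acts as one retrievable "shop" document.
for cluster_id in range(2):
    members = [p for p, c in zip(products, kmeans.labels_) if c == cluster_id]
    print(cluster_id, members)
```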


Web Search and Data Mining (WSDM) | 2018

Web Search of Fashion Items with Multimodal Querying

Katrien Laenen; Susana Zoghbi; Marie-Francine Moens

In this paper, we introduce a novel multimodal fashion search paradigm where e-commerce data is searched with a multimodal query composed of both an image and text. In this setting, the query image shows a fashion product that the user likes, and the query text allows the user to change certain product attributes to fit the product to their wishes. Multimodal search gives users the means to clearly express what they are looking for. This is in contrast to current e-commerce search mechanisms, which are cumbersome and often fail to grasp the customer's needs. Multimodal search requires intermodal representations of visual and textual fashion attributes which can be mixed and matched to form the user's desired product, and which have a mechanism to indicate when a visual and a textual fashion attribute represent the same concept. With a neural network, we induce a common, multimodal space for visual and textual fashion attributes in which their inner product measures their semantic similarity. We build a multimodal retrieval model which operates on the obtained intermodal representations and ranks images based on their relevance to a multimodal query. We demonstrate that our model is able to retrieve images that both exhibit the necessary query image attributes and satisfy the query text. Moreover, we show that our model substantially outperforms two state-of-the-art retrieval models adapted to multimodal fashion search.
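
The following sketch shows the general shape of such a common multimodal space: two projections map image and text features into one space where the inner product serves as the similarity score. The feature dimensions, the averaging used to fuse the query image and query text, and the absence of a training objective are simplifications and assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonSpace(nn.Module):
    """Projects image and text features into one shared space."""
    def __init__(self, img_dim=2048, txt_dim=300, common_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, common_dim)  # maps CNN image features
        self.txt_proj = nn.Linear(txt_dim, common_dim)  # maps text embeddings

    def embed_image(self, x):
        return F.normalize(self.img_proj(x), dim=-1)

    def embed_text(self, x):
        return F.normalize(self.txt_proj(x), dim=-1)

model = CommonSpace()

# Hypothetical multimodal query: an image the user likes plus text that
# changes an attribute (e.g. "but in red"); catalog features are placeholders.
query_img = torch.randn(1, 2048)
query_txt = torch.randn(1, 300)
catalog_imgs = torch.randn(500, 2048)

with torch.no_grad():
    # One simple way to fuse the multimodal query: sum and re-normalize.
    q = F.normalize(model.embed_image(query_img) + model.embed_text(query_txt), dim=-1)
    scores = model.embed_image(catalog_imgs) @ q.T   # inner-product similarity
    top5 = torch.topk(scores.squeeze(1), k=5).indices
print(top5)
```

In practice the projections would be trained (for example with a ranking loss over matching image-text pairs) so that attributes expressed visually and textually land close together in the common space.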


Conference on Multimedia Modeling (MMM) | 2016

Cross-Modal Fashion Search

Susana Zoghbi; Geert Heyman; Juan Carlos Gomez; Marie-Francine Moens

In this demo we focus on cross-modal (visual and textual) e-commerce search within the fashion domain. Particularly, we demonstrate two tasks: (1) given a query image (without any accompanying text), we retrieve textual descriptions that correspond to the visual attributes in the visual query; and (2) given a textual query that may express an interest in specific visual characteristics, we retrieve relevant images (without leveraging textual meta-data) that exhibit the required visual attributes. The first task is especially useful for online stores that want to automatically organize and mine predominantly visual items according to their attributes, without human input. The second task is useful for users who want to find items with specific visual characteristics when no text describing the target image is available. We use state-of-the-art visual and textual features, as well as a state-of-the-art latent variable model, bilingual latent Dirichlet allocation, to bridge between textual and visual data. Unlike traditional search engines, we demonstrate a truly cross-modal system, where we can directly bridge between visual and textual content without relying on pre-annotated meta-data.


International Journal of Computer and Electrical Engineering | 2016

Fashion meets computer vision and NLP at e-commerce search

Susana Zoghbi; Geert Heyman; Juan Carlos Gomez; Marie-Francine Moens

In this paper, we focus on cross-modal (visual and textual) e-commerce search within the fashion domain. Particularly, we investigate two tasks: 1) given a query image, we retrieve textual descriptions that correspond to the visual attributes in the query; and 2) given a textual query that may express an interest in specific visual product characteristics, we retrieve relevant images that exhibit the required visual attributes. To this end, we introduce a new dataset that consists of 53,689 images coupled with textual descriptions. The images contain fashion garments that display a great variety of visual attributes, such as different shapes, colors and textures, while the descriptions are written in natural language. Unlike previous datasets, the text provides only a rough and noisy description of the item in the image. We extensively analyze this dataset in the context of cross-modal e-commerce search. We investigate two state-of-the-art latent variable models to bridge between textual and visual data: bilingual latent Dirichlet allocation and canonical correlation analysis. We use state-of-the-art visual and textual features and report promising results.
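
Of the two latent variable models, canonical correlation analysis is straightforward to sketch. The example below learns correlated projections of visual and textual features and ranks images for a textual query by cosine similarity in the shared space; the randomly generated features, dimensionalities, and number of components are placeholders standing in for the paper's real descriptors.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_pairs = 200
visual = rng.normal(size=(n_pairs, 128))    # placeholder image features
textual = rng.normal(size=(n_pairs, 64))    # placeholder description features

# Learn maximally correlated projections of the two modalities.
cca = CCA(n_components=16, max_iter=1000)
cca.fit(visual, textual)
vis_proj, txt_proj = cca.transform(visual, textual)

# Retrieve images for a new textual query: project the query into the shared
# space (a dummy visual input is passed only to obtain the text-side scores).
query_txt = rng.normal(size=(1, 64))
_, query_proj = cca.transform(visual[:1], query_txt)
ranking = np.argsort(-cosine_similarity(query_proj, vis_proj)[0])
print(ranking[:5])
```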


IEEE Latin America Transactions | 2016

What Would They Say? Predicting User's Comments in Pinterest

Juan Carlos Gomez; Tatiana Tommasi; Susana Zoghbi; Marie-Francine Moens

When we refer to an image that attracts our attention, it is natural to mention not only what is literally depicted in the image, but also the sentiments, thoughts and opinions that it evokes in us. In this work we deviate from the standard mainstream tasks of associating tags or keywords with an image, or generating content-based image descriptions, and we introduce the novel task of automatically generating user comments for an image. We present a new dataset collected from the social media site Pinterest, and we propose a strategy based on building joint textual and visual user models, tailored to the specificity of this task. We conduct an extensive experimental analysis of our approach in both qualitative and quantitative terms, which allows us to assess the value of the proposed approach and shows encouraging results against several existing image-to-text methods.


International Conference on Weblogs and Social Media (ICWSM) | 2013

Recognising personality traits using Facebook status updates

Golnoosh Farnadi; Susana Zoghbi; Marie-Francine Moens; Martine De Cock


International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) | 2014

Learning to bridge colloquial and formal language applied to linking and search of E-Commerce data

Ivan Vulić; Susana Zoghbi; Marie-Francine Moens


Knowledge Discovery and Data Mining (KDD) | 2017

Cross-modal search for fashion attributes

Katrien Laenen; Susana Zoghbi; Marie-Francine Moens

Collaboration


Dive into Susana Zoghbi's collaborations.

Top Co-Authors

Marie-Francine Moens (Katholieke Universiteit Leuven)

Geert Heyman (Katholieke Universiteit Leuven)

Ivan Vulić (Katholieke Universiteit Leuven)

Juan Carlos Gomez (Katholieke Universiteit Leuven)

Katrien Laenen (Katholieke Universiteit Leuven)

Yagmur Gizem Cinar (Katholieke Universiteit Leuven)