
Publications


Featured research published by Amaia Salvador.


International Conference on Multimedia Retrieval | 2016

Bags of Local Convolutional Features for Scalable Instance Search

Eva Mohedano; Kevin McGuinness; Noel E. O'Connor; Amaia Salvador; Ferran Marqués; Xavier Giro-i-Nieto

This work proposes a simple instance retrieval pipeline based on encoding the convolutional features of a CNN using the bag-of-words (BoW) aggregation scheme. Assigning each local array of activations in a convolutional layer to a visual word produces an assignment map, a compact representation that relates regions of an image with a visual word. We use the assignment map for fast spatial reranking, obtaining object localizations that are used for query expansion. We demonstrate the suitability of the BoW representation based on local CNN features for instance retrieval, achieving competitive performance on the Oxford and Paris buildings benchmarks. We show that our proposed system for CNN feature aggregation with BoW outperforms state-of-the-art techniques using sum pooling on a subset of the challenging TRECVid INS benchmark.
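The encoding step described above can be sketched in a few lines: each spatial location of a convolutional feature map is assigned to its nearest visual word, yielding the assignment map and an L2-normalized BoW histogram. The shapes, the random codebook, and the function name `bow_encode` are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def bow_encode(conv_features, codebook):
    """conv_features: (H, W, D) activations; codebook: (K, D) visual words."""
    H, W, D = conv_features.shape
    flat = conv_features.reshape(-1, D)                  # one D-dim vector per location
    # Nearest centroid for every spatial location (Euclidean distance).
    dists = np.linalg.norm(flat[:, None, :] - codebook[None, :, :], axis=2)
    assignment_map = dists.argmin(axis=1).reshape(H, W)  # (H, W) map of word indices
    hist = np.bincount(assignment_map.ravel(), minlength=len(codebook)).astype(float)
    hist /= np.linalg.norm(hist) + 1e-12                 # L2-normalized BoW vector
    return assignment_map, hist

rng = np.random.default_rng(0)
amap, bow = bow_encode(rng.normal(size=(4, 4, 8)), rng.normal(size=(5, 8)))
```

The assignment map is what makes the spatial reranking cheap: restricting the histogram to a window of the map scores a candidate region without touching the raw activations again.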


Computer Vision and Pattern Recognition | 2017

Learning Cross-Modal Embeddings for Cooking Recipes and Food Images

Amaia Salvador; Nicholas Hynes; Yusuf Aytar; Javier Marin; Ferda Ofli; Ingmar Weber; Antonio Torralba

In this paper, we introduce Recipe1M, a new large-scale, structured corpus of over one million cooking recipes and 800,000 food images. As the largest publicly available collection of recipe data, Recipe1M affords the ability to train high-capacity models on aligned, multi-modal data. Using these data, we train a neural network to find a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Additionally, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M dataset and of food and cooking in general. Code, data and models are publicly available.
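As a toy illustration of the retrieval task such joint embeddings are evaluated on, the sketch below scores image-to-recipe retrieval by cosine similarity in a shared space and reports recall@k. The embedding matrices and the helper name are hypothetical stand-ins, not taken from the released code.

```python
import numpy as np

def retrieval_recall_at_k(img_emb, rec_emb, k=1):
    """Pair i counts as 'found' if recipe i ranks in the top-k for image i,
    ranking by cosine similarity in the shared embedding space."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    rec = rec_emb / np.linalg.norm(rec_emb, axis=1, keepdims=True)
    sims = img @ rec.T                      # (N, N) cosine similarity matrix
    ranks = (-sims).argsort(axis=1)         # recipe indices, best match first
    hits = sum(i in ranks[i, :k] for i in range(len(img)))
    return hits / len(img)

emb = np.eye(4) + 0.01  # perfectly aligned toy embeddings
```

With perfectly aligned embeddings, recall@1 is 1.0; real systems report recall at several cutoffs and median rank on large candidate pools.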


Computer Vision and Pattern Recognition | 2016

Faster R-CNN Features for Instance Search

Amaia Salvador; Xavier Giro-i-Nieto; Ferran Marqués; Shin'ichi Satoh

Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. This work explores the suitability for instance retrieval of image- and region-wise representations pooled from an object detection CNN such as Faster R-CNN. We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system on the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results.
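The two-stage pipeline described above (coarse filtering on image-level features, then reranking a shortlist with region-level features) might be sketched as follows. All names and shapes are illustrative assumptions, and real RPN proposals are replaced here by precomputed per-image region vectors.

```python
import numpy as np

def retrieve(query_vec, image_vecs, region_vecs, top_n=3):
    """Stage 1: rank all images by image-level cosine similarity.
    Stage 2: rerank the top_n shortlist by each image's best-matching region
    (a stand-in for the spatial reranking over RPN proposals)."""
    q = query_vec / np.linalg.norm(query_vec)
    imgs = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    order = np.argsort(-imgs @ q)              # stage 1: coarse ranking
    shortlist = order[:top_n]
    scores = []
    for i in shortlist:
        regs = region_vecs[i] / np.linalg.norm(region_vecs[i], axis=1, keepdims=True)
        scores.append((regs @ q).max())        # stage 2: best region score
    return shortlist[np.argsort(-np.array(scores))]

rng = np.random.default_rng(1)
query = rng.normal(size=8)
image_vecs = rng.normal(size=(6, 8))
region_vecs = [rng.normal(size=(4, 8)) for _ in range(6)]
reranked = retrieve(query, image_vecs, region_vecs, top_n=3)
```

Only the shortlist pays the per-region cost, which is what makes the filter-then-rerank design scale to large collections.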


ACM Multimedia | 2015

Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction

Victor Campos; Amaia Salvador; Xavier Giro-i-Nieto; Brendan Jou

Visual media are powerful means of expressing emotions and sentiments. The constant generation of new content in social networks highlights the need for automated visual sentiment analysis tools. While Convolutional Neural Networks (CNNs) have established a new state of the art in several vision problems, their application to the task of sentiment analysis is mostly unexplored, and there are few studies regarding how to design CNNs for this purpose. In this work, we study the suitability of fine-tuning a CNN for visual sentiment prediction and explore performance-boosting techniques within this deep learning setting. Finally, we provide a deep-dive analysis of a benchmark, state-of-the-art network architecture to gain insight into design patterns for CNNs on the task of visual sentiment prediction.


Computer Vision and Pattern Recognition | 2015

Cultural Event recognition with visual ConvNets and temporal models

Amaia Salvador; Daniel Manchon-Vizuete; Andrea Calafell; Xavier Giro-i-Nieto; Matthias Zeppelzauer

This paper presents our contribution to the ChaLearn Challenge 2015 on Cultural Event Classification. The challenge in this task is to automatically classify images from 50 different cultural events. Our solution is based on the combination of visual features extracted from convolutional neural networks with temporal information using a hierarchical classifier scheme. We extract visual features from the last three fully connected layers of both CaffeNet (pre-trained with ImageNet) and our fine-tuned version for the ChaLearn challenge. We propose a late fusion strategy that trains a separate low-level SVM on each of the extracted neural codes. The class predictions of the low-level SVMs form the input to a higher-level SVM, which gives the final event scores. We achieve our best result by adding a temporal refinement step into our classification scheme, which is applied directly to the output of each low-level SVM. Our approach penalizes high classification scores based on visual features when their timestamp does not match well with an event-specific temporal distribution learned from the training and validation data. Our system achieved the second best result in the ChaLearn Challenge 2015 on Cultural Event Classification with a mean average precision of 0.767 on the test set.
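The temporal refinement step can be illustrated with a simple sketch: a visual classification score is down-weighted when the photo's timestamp is unlikely under the event's temporal distribution. The Gaussian form and the function name are assumptions made for illustration; the paper learns event-specific distributions from the training and validation data.

```python
import math

def temporal_refine(score, timestamp, event_mean, event_std):
    """Down-weight a visual score by how well the timestamp fits an
    event-specific temporal model (Gaussian assumed here for illustration)."""
    z = (timestamp - event_mean) / event_std
    weight = math.exp(-0.5 * z * z)   # 1.0 at the event's typical date, decays away from it
    return score * weight
```

A confident visual prediction for, say, a winter festival is therefore penalized when the photo was taken in midsummer, while in-season photos keep their full score.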


Multimedia Tools and Applications | 2016

Assessment of crowdsourcing and gamification loss in user-assisted object segmentation

Axel Carlier; Amaia Salvador; Ferran Cabezas; Xavier Giro-i-Nieto; Vincent Charvillat; Oge Marques

There has been a growing interest in applying human computation – particularly crowdsourcing techniques – to assist in the solution of multimedia, image processing, and computer vision problems which are still too difficult to solve using fully automatic algorithms, and yet relatively easy for humans. In this paper we focus on a specific problem – object segmentation within color images – and compare different solutions which combine color image segmentation algorithms with human efforts, either in the form of an explicit interactive segmentation task or through an implicit collection of valuable human traces with a game. We use Click’n’Cut, a friendly, web-based, interactive segmentation tool that allows segmentation tasks to be assigned to many users, and Ask’nSeek, a game with a purpose designed for object detection and segmentation. The two main contributions of this paper are: (i) We use the results of Click’n’Cut campaigns with different groups of users to examine and quantify the crowdsourcing loss incurred when an interactive segmentation task is assigned to paid crowd-workers, comparing their results to the ones obtained when computer vision experts are asked to perform the same tasks. (ii) Since interactive segmentation tasks are inherently tedious and prone to fatigue, we compare the quality of the results obtained with Click’n’Cut with the ones obtained using a (fun, interactive, and potentially less tedious) game designed for the same purpose. We call this contribution the assessment of the gamification loss, since it refers to how much quality of segmentation results may be lost when we switch to a game-based approach to the same task. 
We demonstrate that the crowdsourcing loss is significant when using all the data points from workers, but decreases substantially (becoming comparable to the quality of expert users performing similar tasks) after a modest amount of data analysis and filtering out users whose data are clearly not useful. We also show that, on the other hand, the gamification loss is significantly more severe: the quality of the results drops roughly by half when switching from a focused (yet tedious) task to a more fun and relaxed game environment.
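A minimal sketch of the kind of user filtering mentioned above, assuming binary segmentation masks represented as sets of pixel indices: build a majority-vote mask across workers, then drop workers whose masks agree poorly with it. The Jaccard threshold and the majority rule are illustrative assumptions, not the paper's actual procedure.

```python
from collections import Counter

def jaccard(a, b):
    """Intersection-over-union of two pixel sets."""
    union = len(a | b)
    return len(a & b) / union if union else 1.0

def filter_workers(masks, threshold=0.5):
    """Keep only workers whose mask overlaps the majority-vote mask enough."""
    counts = Counter(p for m in masks for p in m)
    majority = {p for p, c in counts.items() if c > len(masks) / 2}
    return [m for m in masks if jaccard(m, majority) >= threshold]

masks = [{1, 2, 3, 4}, {1, 2, 3}, {9, 10}]   # two consistent workers, one outlier
kept = filter_workers(masks)
```

Even this crude consensus check removes the obviously inconsistent worker while keeping both plausible annotations, which is the spirit of the "modest data analysis" the abstract refers to.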


International Conference on Multimedia Retrieval | 2015

Exploring EEG for Object Detection and Retrieval

Eva Mohedano; Kevin McGuinness; Graham Healy; Noel E. O'Connor; Alan F. Smeaton; Amaia Salvador; Sergi Porta; Xavier Giro-i-Nieto

This paper explores the potential of using Brain-Computer Interfaces (BCI) as a relevance feedback mechanism in content-based image retrieval. Several experiments are performed using rapid serial visual presentation (RSVP) of images at different rates (5 Hz and 10 Hz) with 8 users having different degrees of familiarity with BCI and the dataset. We compare feedback from the BCI and mouse-based interfaces on a subset of TRECVid images, finding that, when users have limited time to annotate the images, both interfaces are comparable in performance. Comparing our best users in a retrieval task, we find that EEG-based relevance feedback can outperform mouse-based feedback.


ACM Multimedia | 2013

Crowdsourced object segmentation with a game

Amaia Salvador; Axel Carlier; Xavier Giro-i-Nieto; Oge Marques; Vincent Charvillat


Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia | 2014

Click'n'Cut: Crowdsourced Interactive Segmentation with Object Candidates

Axel Carlier; Vincent Charvillat; Amaia Salvador; Xavier Giro-i-Nieto; Oge Marques


International Conference on Image Processing | 2015

Quality Control in Crowdsourced Object Segmentation

Ferran Cabezas; Axel Carlier; Vincent Charvillat; Amaia Salvador; Xavier Giro-i-Nieto

Collaboration


Dive into Amaia Salvador's collaborations.

Top Co-Authors

Xavier Giro-i-Nieto
Polytechnic University of Catalonia

Ferran Marqués
Polytechnic University of Catalonia

Jordi Torres
Polytechnic University of Catalonia

Oge Marques
Florida Atlantic University