Tatiana Tommasi
Idiap Research Institute
Publication
Featured research published by Tatiana Tommasi.
computer vision and pattern recognition | 2010
Tatiana Tommasi; Francesco Orabona; Barbara Caputo
Learning object categories from small samples is a challenging problem, where machine learning tools can in general provide very few guarantees. Exploiting prior knowledge may be useful to reproduce the human capability of recognizing objects even from a single view. This paper presents an SVM-based model adaptation algorithm able to select and appropriately weight prior knowledge coming from different categories. The method relies on the solution of a convex optimization problem which guarantees the minimal leave-one-out error on the training set. Experiments on a subset of the Caltech-256 database show that the proposed method outperforms both choosing a single prior model and transferring from all previous experience in a flat, uninformative way.
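The idea of picking a transfer weight by leave-one-out error can be illustrated with a minimal numpy sketch. This is an illustrative assumption, not the paper's SVM formulation: `loo_error`, `select_beta`, the residual-fitting ridge setup, and the candidate grid are all hypothetical, and a closed-form leave-one-out computation (as in the paper) would avoid the explicit loop.

```python
import numpy as np

def loo_error(X, y, beta, prior_scores, lam=1.0):
    """Leave-one-out error when the prediction is a ridge model on the
    residual y - beta * prior_scores, plus the weighted prior (sketch)."""
    n = len(y)
    errs = 0
    for i in range(n):
        mask = np.arange(n) != i
        Xt = X[mask]
        yt = y[mask] - beta * prior_scores[mask]
        # ridge regression fitted on the residual targets
        w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(X.shape[1]), Xt.T @ yt)
        pred = X[i] @ w + beta * prior_scores[i]
        errs += int(np.sign(pred) != y[i])
    return errs / n

def select_beta(X, y, prior_scores, betas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the transfer weight with the lowest leave-one-out error."""
    return min(betas, key=lambda b: loo_error(X, y, b, prior_scores))
```

A perfectly informative prior drives the leave-one-out error to zero at full transfer weight, while an unrelated prior is automatically down-weighted toward zero.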
Pattern Recognition Letters | 2008
Tatiana Tommasi; Francesco Orabona; Barbara Caputo
Automatic annotation of medical images is an increasingly important tool for physicians in their daily activity. Hospitals nowadays produce an increasing amount of data; manual annotation is very costly and prone to human error. This paper proposes a multi-cue approach to automatic medical image annotation. We represent images using global and local features. These cues are then combined using three alternative approaches, all based on the support vector machine algorithm. We tested our methods on the IRMA database, and with two of the three approaches proposed here we participated in the medical image annotation track of the 2007 ImageCLEFmed benchmark evaluation. These algorithms ranked first and fifth, respectively, among all submissions. Experiments using the third approach also confirm the power of cue integration for this task.
international conference on computer vision | 2011
Luo Jie; Tatiana Tommasi; Barbara Caputo
The vast majority of transfer learning methods proposed in the visual recognition domain over the last years address the problem of object category detection, assuming strong control over the priors from which transfer is done. This is a strict condition, as it concretely limits the use of this type of approach in several settings: for instance, it does not in general allow using off-the-shelf models as priors. Moreover, the lack of a multiclass formulation for most existing transfer learning algorithms prevents using them for object categorization problems, where their use might be beneficial, especially when the number of categories grows and it becomes harder to get enough annotated data for training standard learning methods. This paper presents a multiclass transfer learning algorithm that takes advantage of priors built over different features and with different learning methods than the one used for learning the new task. We use the priors as experts, and transfer their outputs to the new incoming samples as additional information. We cast the learning problem within the Multi Kernel Learning framework. The resulting formulation efficiently solves a joint optimization problem that determines from where and how much to transfer, with a principled multiclass formulation. Extensive experiments illustrate the value of this approach.
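The "priors as experts" idea can be sketched by concatenating the outputs of prior models to the raw features and training any multiclass learner on top. This is a simplification, not the paper's Multi Kernel Learning formulation: the function names, the one-vs-rest least-squares learner, and the fixed (unlearned) expert weighting are all assumptions for illustration.

```python
import numpy as np

def expert_features(X, priors):
    """Concatenate raw features with per-class scores produced by prior
    models used as experts; each prior is a callable X -> scores (sketch)."""
    return np.hstack([X] + [p(X) for p in priors])

def fit_multiclass(X, y, n_classes, lam=1e-2):
    """One-vs-rest regularized least-squares classifier, as a simple
    stand-in for the multiclass learner on the augmented representation."""
    Y = -np.ones((len(y), n_classes))
    Y[np.arange(len(y)), y] = 1.0
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def predict(X, W):
    """Assign each sample to the class with the highest score."""
    return np.argmax(X @ W, axis=1)
```

Whether the expert outputs actually help is what the paper's joint optimization decides; here they are simply appended with unit weight.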
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014
Tatiana Tommasi; Francesco Orabona; Barbara Caputo
Learning a visual object category from few samples is a compelling and challenging problem. In several real-world applications collecting many annotated data is costly and not always possible, yet a small training set cannot cover the high intraclass variability typical of visual objects. In this condition, machine learning methods provide very few guarantees. This paper presents a discriminative model adaptation algorithm able to proficiently learn a target object from few examples by relying on other previously learned source categories. The proposed method autonomously chooses from where and how much to transfer information by solving a convex optimization problem which guarantees the minimal leave-one-out error on the available training set. We analyze several properties of the described approach and perform an extensive experimental comparison with other existing transfer solutions, consistently showing the value of our algorithm.
international conference on computer vision | 2013
Tatiana Tommasi; Barbara Caputo
In recent years, several authors have pointed out that state-of-the-art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it range from max-margin classifiers to learning how to modify the features to obtain a more robust representation. The large majority of these works use BOW feature descriptors and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases consistently achieves very strong performance. We build on this result, and present an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.
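The NBNN classifier referenced above computes an image-to-class (rather than image-to-image) distance, which is simple to sketch in numpy. This is the plain baseline NBNN decision rule, not the paper's metric-learning extension; the dictionary-of-arrays descriptor layout is an assumption.

```python
import numpy as np

def nbnn_classify(test_descriptors, class_descriptors):
    """Naive Bayes Nearest Neighbor: pick the class minimizing the sum of
    squared distances from each test descriptor to its nearest descriptor
    in that class's pool (image-to-class distance)."""
    costs = {}
    for c, D in class_descriptors.items():
        # pairwise squared Euclidean distances via broadcasting
        d2 = ((test_descriptors[:, None, :] - D[None, :, :]) ** 2).sum(-1)
        costs[c] = d2.min(axis=1).sum()
    return min(costs, key=costs.get)
```

Because distances are accumulated per class over all local descriptors, no per-image quantization (as in BOW) is needed, which is exactly the property the paper builds on.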
cross language evaluation forum | 2009
Tatiana Tommasi; Barbara Caputo; Petra Welter; Mark Oliver Güld; Thomas Martin Deserno
This paper describes the last round of the medical image annotation task in ImageCLEF 2009. After four years, we defined the task as a survey of all the past experience. Seven groups participated in the challenge, submitting nineteen runs. They were asked to train their algorithms on 12677 images, labelled according to four different settings, and to classify 1733 images in the four annotation frameworks. The aim is to understand how each strategy copes with the increasing number of classes and with class imbalance. A plain classification scheme using support vector machines and local descriptors outperformed the other methods.
IEEE Transactions on Robotics | 2013
Tatiana Tommasi; Francesco Orabona; Claudio Castellini; Barbara Caputo
At the time of this writing, the main means of control for polyarticulated self-powered hand prostheses is surface electromyography (sEMG). In the clinical setting, data collected from two electrodes are used to guide the hand movements selecting among a finite number of postures. Machine learning has been applied in the past to the sEMG signal (not in the clinical setting) with interesting results, which provide more insight into how these data could be used to improve prosthetic functionality. Researchers have mainly concentrated so far on increasing the accuracy of sEMG classification and/or regression, but, in general, finer control implies a longer training period. A desirable characteristic would be to shorten the time needed by a patient to learn how to use the prosthesis. To this aim, we propose here a general method to reuse past experience, in the form of models synthesized from previous subjects, to boost the adaptivity of the prosthesis. Extensive tests on databases recorded from healthy subjects in controlled and noncontrolled conditions reveal that the method significantly improves the results over the baseline nonadaptive case. This promising approach might be employed to pretrain a prosthesis before shipping it to a patient, leading to a shorter training phase.
IEEE Transactions on Autonomous Mental Development | 2011
Claudio Castellini; Tatiana Tommasi; Nicoletta Noceti; Francesca Odone; Barbara Caputo
The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features obtained from camera images. The images must necessarily come from huge visual datasets, in order to circumvent all problems related to changing illumination, point of view, etc. We hereby propose to also consider, in an object model, a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping visual features of an object to the kinematic features of a hand while grasping it. In practice, the function is learned via regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we experimentally evaluate it, showing that a standard object classifier working on both sets of features (visual and motor) has a significantly better recognition rate than that of a visual-only classifier.
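The visual-to-motor mapping can be sketched as a ridge regression from visual features to grasp kinematics, whose predictions are then appended to the visual features before classification. This is a minimal stand-in under assumed names (`fit_affordance_map`, `visuomotor_features`); the paper's actual regression machinery and feature definitions may differ.

```python
import numpy as np

def fit_affordance_map(V, K, lam=1.0):
    """Ridge regression from visual features V (n x d) to grasp kinematic
    features K (n x m), fitted on a grasping database (sketch)."""
    return np.linalg.solve(V.T @ V + lam * np.eye(V.shape[1]), V.T @ K)

def visuomotor_features(V, W):
    """Augment visual features with the predicted motor features, so a
    standard classifier can use both cues even when no hand is observed."""
    return np.hstack([V, V @ W])
```

The key point mirrored here is that at test time only visual features are needed: the motor cue is inferred through the learned map.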
german conference on pattern recognition | 2015
Tatiana Tommasi; Novi Patricia; Barbara Caputo; Tinne Tuytelaars
The presence of a bias in each image data collection has recently attracted a lot of attention in the computer vision community showing the limits in generalization of any learning method trained on a specific dataset. At the same time, with the rapid development of deep learning architectures, the activation values of Convolutional Neural Networks (CNN) are emerging as reliable and robust image descriptors. In this paper we propose to verify the potential of the DeCAF features when facing the dataset bias problem. We conduct a series of analyses looking at how existing datasets differ among each other and verifying the performance of existing debiasing methods under different representations. We learn important lessons on which part of the dataset bias problem can be considered solved and which open questions still need to be tackled.
international conference on computer vision | 2015
Efstratios Gavves; Thomas Mensink; Tatiana Tommasi; Cees G. M. Snoek; Tinne Tuytelaars
How can we reuse existing knowledge, in the form of available datasets, when solving a new and apparently unrelated target task from a set of unlabeled data? In this work we make a first contribution to answer this question in the context of image classification. We frame this quest as an active learning problem and use zero-shot classifiers to guide the learning process by linking the new task to the existing classifiers. By revisiting the dual formulation of adaptive SVM, we reveal two basic conditions to choose greedily only the most relevant samples to be annotated. On this basis we propose an effective active learning algorithm which learns the best possible target classification model with minimum human labeling effort. Extensive experiments on two challenging datasets show the value of our approach compared to the state-of-the-art active learning methodologies, as well as its potential to reuse past datasets with minimal effort for future tasks.
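The flavor of greedy sample selection can be sketched with a generic margin-uncertainty criterion over the guiding scores. Note this is a common active learning heuristic used here purely for illustration; the paper derives its selection conditions from the dual of the adaptive SVM, which this sketch does not reproduce.

```python
import numpy as np

def margin_uncertainty(scores):
    """Per-sample gap between the top two class scores; a small gap
    means the current (e.g. zero-shot) model is uncertain there."""
    s = np.sort(scores, axis=1)
    return s[:, -1] - s[:, -2]

def select_queries(scores, k):
    """Greedily pick the k most uncertain unlabeled samples to annotate."""
    return np.argsort(margin_uncertainty(scores))[:k]
```

Each selected sample would then be labeled by a human and added to the training set before the scores are recomputed.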