Publications

Featured research published by Carolina Galleguillos.


Computer Vision and Pattern Recognition | 2008

Object categorization using co-occurrence, location and appearance

Carolina Galleguillos; Andrew Rabinovich; Serge J. Belongie

In this work we introduce a novel approach to object categorization that incorporates two types of context, co-occurrence and relative location, with local appearance-based features. Our approach, named CoLA (for co-occurrence, location and appearance), uses a conditional random field (CRF) to maximize object label agreement according to both semantic and spatial relevance. We model relative location between objects using simple pairwise features. By vector quantizing this feature space, we learn a small set of prototypical spatial relationships directly from the data. We evaluate our results on two challenging datasets: PASCAL 2007 and MSRC. The results show that combining co-occurrence and spatial context improves accuracy in as many as half of the categories compared to using co-occurrence alone.
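The label agreement idea can be illustrated as a toy scoring problem: each candidate labeling of an image's regions receives a unary appearance score plus a pairwise co-occurrence potential, and the highest-scoring joint labeling wins. Everything below (labels, scores, counts) is hypothetical, and the brute-force search merely stands in for CRF inference:

```python
import itertools
import math

# Hypothetical labels and appearance scores for two image regions.
LABELS = ["cow", "grass", "boat"]
appearance = [  # appearance[r][l]: score of label l for region r
    {"cow": 2.0, "grass": 0.1, "boat": 1.8},
    {"cow": 0.2, "grass": 1.5, "boat": 1.6},
]
# Made-up co-occurrence counts: cows appear with grass, rarely with boats.
cooc = {("cow", "grass"): 50, ("cow", "boat"): 1, ("grass", "boat"): 2}

def pairwise(a, b):
    # Symmetric log co-occurrence potential (add-one smoothing).
    return math.log(1 + cooc.get((a, b), cooc.get((b, a), 0)))

def best_labeling():
    # Brute-force maximization of unary + pairwise score over all labelings.
    def score(assign):
        s = sum(appearance[r][l] for r, l in enumerate(assign))
        for (r1, l1), (r2, l2) in itertools.combinations(enumerate(assign), 2):
            s += pairwise(l1, l2)
        return s
    return max(itertools.product(LABELS, repeat=2), key=score)
```

With these numbers, appearance alone would label the second region "boat" (1.6 vs. 1.5 for "grass"), while the co-occurrence potential tips the joint labeling to ("cow", "grass"), which is the kind of correction context provides.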


Computer Vision and Image Understanding | 2010

Context based object categorization: A critical survey

Carolina Galleguillos; Serge J. Belongie

The goal of object categorization is to locate and identify instances of an object category within an image. Recognizing an object in an image is difficult when images exhibit occlusion, poor quality, noise or background clutter, and this task becomes even more challenging when many objects are present in the same scene. Several models for object categorization use appearance and context information from objects to improve recognition accuracy. Appearance information, based on visual cues, can identify object classes successfully only to a certain extent. Context information, based on the interaction among objects in the scene or global scene statistics, can help successfully disambiguate appearance inputs in recognition tasks. In this work we address the problem of incorporating different types of contextual information for robust object categorization in computer vision. We review different ways of using contextual information in the field of object categorization, considering the most common levels of extraction of context and the different levels of contextual interactions. We also examine common machine learning models that integrate context information into object recognition frameworks and discuss scalability, optimizations and possible future approaches.


European Conference on Computer Vision | 2008

Weakly Supervised Object Localization with Stable Segmentations

Carolina Galleguillos; Boris Babenko; Andrew Rabinovich; Serge J. Belongie

Multiple Instance Learning (MIL) provides a framework for training a discriminative classifier from data with ambiguous labels. This framework is well suited for the task of learning object classifiers from weakly labeled image data, where only the presence of an object in an image is known, but not its location. Some recent work has explored the application of MIL algorithms to the tasks of image categorization and natural scene classification. In this paper we extend these ideas in a framework that uses MIL to recognize and localize objects in images. To achieve this we employ state-of-the-art image descriptors and multiple stable segmentations. These components, combined with a powerful MIL algorithm, form our object recognition system called MILSS. We show highly competitive object categorization results on the Caltech dataset. To evaluate the performance of our algorithm further, we introduce the challenging Landmarks-18 dataset, a collection of photographs of famous landmarks from around the world. The results on this new dataset show the great potential of our proposed algorithm.
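Under the standard MIL assumption, a bag (here, an image) is positive if at least one of its instances (regions) is positive, which matches the weak-label setting exactly: we know the object is present somewhere, but not where. A minimal sketch, with a hand-picked linear scorer and toy features standing in for a trained classifier and real descriptors:

```python
# Minimal MIL sketch: an image is a "bag" of region feature vectors.
# A bag is labeled positive if any instance scores above a threshold.

def instance_score(region, weights):
    # Linear stand-in for a trained instance classifier.
    return sum(x * w for x, w in zip(region, weights))

def bag_label(bag, weights, threshold=0.5):
    # Max-pooling over instances implements the MIL assumption:
    # only the presence of the object matters, not its location.
    return max(instance_score(r, weights) for r in bag) > threshold

weights = [1.0, -0.5]                      # hypothetical learned weights
positive_bag = [[0.1, 0.9], [0.9, 0.2]]    # one region matches the object
negative_bag = [[0.1, 0.9], [0.2, 0.8]]    # no region matches
```

In the paper's setting the instances would be descriptors computed over multiple stable segmentations rather than raw vectors, but the bag-level decision rule is the same.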


Computer Vision and Pattern Recognition | 2010

Multi-class object localization by combining local contextual interactions

Carolina Galleguillos; Brian McFee; Serge J. Belongie; Gert R. G. Lanckriet

Recent work in object localization has shown that the use of contextual cues can greatly improve accuracy over models that use appearance features alone. Although many of these models have successfully explored different types of contextual sources, they only consider one type of contextual interaction (e.g., pixel, region or object level interactions), leaving open questions about the true potential contribution of context. Furthermore, contributions across object classes and over appearance features still remain unknown. In this work, we introduce a novel model for multi-class object localization that incorporates different levels of contextual interactions. We study contextual interactions at pixel, region and object level by using three different sources of context: semantic, boundary support and contextual neighborhoods. Our framework learns a single similarity metric from multiple kernels, combining pixel and region interactions with appearance features, and then uses a conditional random field to incorporate object level interactions. We perform experiments on two challenging image databases: MSRC and PASCAL VOC 2007. Experimental results show that our model outperforms current state-of-the-art contextual frameworks and reveals individual contributions for each contextual interaction level, as well as the importance of each type of feature in object localization.


Computer Vision and Pattern Recognition | 2007

Recognizing Groceries in situ Using in vitro Training Data

Michele Merler; Carolina Galleguillos; Serge J. Belongie

The problem of using pictures of objects captured under ideal imaging conditions (here referred to as in vitro) to recognize objects in natural environments (in situ) is an emerging area of interest in computer vision and pattern recognition. Examples of tasks in this vein include assistive vision systems for the blind and object recognition for mobile robots; the proliferation of image databases on the web is bound to lead to more examples in the near future. Despite its importance, there is still a need for a freely available database to facilitate study of this kind of training/testing dichotomy. In this work one of our contributions is a new multimedia database of 120 grocery products, GroZi-120. For every product, two different recordings are available: in vitro images extracted from the web, and in situ images extracted from camcorder video collected inside a grocery store. As an additional contribution, we present the results of applying three commonly used object recognition/detection algorithms (color histogram matching, SIFT matching, and boosted Haar-like features) to the dataset. Finally, we analyze the successes and failures of these algorithms against product type and imaging conditions, both in terms of recognition rate and localization accuracy, in order to suggest ways forward for further research in this domain.
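Of the three baselines, color histogram matching is the simplest to sketch. The version below uses histogram intersection as the similarity measure; the 4-bin histograms and product names are made up and merely stand in for real in vitro (web) and in situ (in-store) images:

```python
def normalize(hist):
    # Normalize counts so the histogram sums to 1.
    total = sum(hist)
    return [h / total for h in hist]

def intersection(h1, h2):
    # Histogram intersection similarity: 1.0 for identical distributions.
    return sum(min(a, b) for a, b in zip(h1, h2))

def match(query, database):
    # Return the database key whose histogram best matches the query,
    # e.g. matching an in-situ crop against in-vitro product images.
    return max(database, key=lambda k: intersection(query, database[k]))

# Toy 4-bin color histograms for two hypothetical products.
db = {
    "cereal": normalize([8, 1, 1, 0]),
    "soda":   normalize([1, 1, 2, 6]),
}
query = normalize([7, 2, 1, 0])   # noisy in-situ observation of the cereal
```

Color histograms are cheap and somewhat robust to the blur and pose changes of in situ video, which is why they serve as a reasonable baseline alongside SIFT matching and boosted Haar-like features.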


IEEE Transactions on Image Processing | 2011

Contextual Object Localization With Multiple Kernel Nearest Neighbor

Brian McFee; Carolina Galleguillos; Gert R. G. Lanckriet

Recently, many object localization models have shown that incorporating contextual cues can greatly improve accuracy over using appearance features alone. Many of these models have explored different types of contextual sources, but consider only one level of contextual interaction at a time. Thus, what context could truly contribute to object localization by integrating cues from all levels simultaneously remains an open question. Moreover, the relative importance of the different contextual levels and appearance features across different object classes remains to be explored. Here we introduce a novel framework for multiple class object localization that incorporates different levels of contextual interactions. We study contextual interactions at the pixel, region and object level based upon three different sources of context: semantic, boundary support, and contextual neighborhoods. Our framework learns a single similarity metric from multiple kernels, combining pixel and region interactions with appearance features, and then applies a conditional random field to incorporate object level interactions. To effectively integrate different types of feature descriptions, we extend large margin nearest neighbor to a novel algorithm that supports multiple kernels. We perform experiments on three challenging image databases: Graz-02, MSRC and PASCAL VOC 2007. Experimental results show that our model outperforms current state-of-the-art contextual frameworks and reveals individual contributions for each contextual interaction level as well as appearance features, indicating their relative importance for object localization.
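At its simplest, the multiple-kernel idea combines base kernels with nonnegative weights into a single similarity, then performs nearest-neighbor prediction in that combined space. In the sketch below the kernel values and weights are illustrative and fixed by hand; in the paper the weights are learned jointly with the metric by the multiple-kernel extension of large margin nearest neighbor:

```python
# Sketch: nearest-neighbor prediction under a weighted sum of base kernels.

def combined_similarity(i, j, kernels, weights):
    # K(i, j) = sum_m w_m * K_m(i, j), with w_m >= 0.
    return sum(w * K[i][j] for K, w in zip(kernels, weights))

def nearest_neighbor(query, candidates, kernels, weights):
    # The candidate most similar to the query under the combined kernel.
    return max(candidates,
               key=lambda j: combined_similarity(query, j, kernels, weights))

# Two toy 3x3 base kernels (think: appearance vs. region-level context).
K_appearance = [[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.2],
                [0.1, 0.2, 1.0]]
K_context =    [[1.0, 0.1, 0.8],
                [0.1, 1.0, 0.1],
                [0.8, 0.1, 1.0]]
```

Shifting the weight between the two kernels changes which neighbor is retrieved for item 0, which is precisely why learning the weights per task matters.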


Computer Vision and Pattern Recognition | 2011

From region similarity to category discovery

Carolina Galleguillos; Brian McFee; Serge J. Belongie; Gert R. G. Lanckriet

The goal of object category discovery is to automatically identify groups of image regions which belong to some new, previously unseen category. This task is typically performed in a purely unsupervised setting, and as a result, performance depends critically upon accurate assessments of similarity between unlabeled image regions. To improve the accuracy of category discovery, we develop a novel multiple kernel learning algorithm based on structural SVM, which optimizes a similarity space for nearest-neighbor prediction. The optimized space is then used to cluster unlabeled data and identify new categories. Experimental results on the MSRC and PASCAL VOC2007 data sets indicate that using an optimized similarity metric can improve clustering for category discovery. Furthermore, we demonstrate that including both labeled and unlabeled training data when optimizing the similarity metric can improve the overall quality of the system.


International Journal of Computer Vision | 2014

Iterative Category Discovery via Multiple Kernel Metric Learning

Carolina Galleguillos; Brian McFee; Gert R. G. Lanckriet

The goal of an object category discovery system is to annotate a pool of unlabeled image data, where the set of labels is initially unknown to the system, and must therefore be discovered over time by querying a human annotator. The annotated data is then used to train object detectors in a standard supervised learning setting, possibly in conjunction with category discovery itself. Category discovery systems can be evaluated in terms of both accuracy of the resulting object detectors, and the efficiency with which they discover categories and annotate the training data. To improve the accuracy and efficiency of category discovery, we propose an iterative framework which alternates between optimizing nearest neighbor classification for known categories with multiple kernel metric learning, and detecting clusters of unlabeled image regions likely to belong to novel, unknown categories. Experimental results on the MSRC and PASCAL VOC2007 data sets show that the proposed method improves clustering for category discovery, and efficiently annotates image regions belonging to the discovered classes.


International Conference on Computer Vision | 2009

BUBL: An effective region labeling tool using a hexagonal lattice

Carolina Galleguillos; Peter Faymonville; Serge J. Belongie

We propose a data labeling tool that permits accurate labeling of images using less time and effort. Our tool, BUBL, uses a hexagonal grid with a variable size tiling for accurate labeling of object contours. The hexagonal lattice is superimposed by a bubble wrap interface in order to make the labeling task enjoyable. The resulting label mask is represented by a Gaussian kernel density estimator which provides accurate bounding contours, even for objects that include hollow regions. Furthermore, multiple annotations from different users are collected for every image, making it possible to “hint” a partial labeling so the user can finish labeling in less time. We show accuracy results by simulating the application of our labeling tool on the MSRC dataset and on a subset of Caltech-101.
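The kernel-density representation of a label mask can be sketched as follows: a Gaussian KDE is placed over the annotated points (e.g. the centers of marked hexagonal cells), and a pixel belongs to the mask when the density exceeds a threshold. The points, bandwidth, and threshold below are all illustrative, not BUBL's actual parameters:

```python
import math

def kde(points, x, y, bandwidth=1.0):
    # Gaussian kernel density estimate at (x, y) from annotated points.
    return sum(
        math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * bandwidth ** 2))
        for px, py in points
    ) / len(points)

def inside_mask(points, x, y, bandwidth=1.0, threshold=0.1):
    # A pixel is in the object mask when the density is high enough;
    # interior areas with no nearby annotations stay below threshold,
    # which is how hollow regions fall out naturally.
    return kde(points, x, y, bandwidth) >= threshold

hex_points = [(0, 0), (1, 0), (0, 1), (1, 1)]   # hypothetical marked cells
```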


Revista Facultad de Ingeniería, Universidad de Antioquia | 2005

ANÁLISIS DE CONSULTAS A UN BUSCADOR DE LA WEB CHILENA (Analysis of Queries to a Chilean Web Search Engine)

Ricardo A. Baeza-Yates; Carolina Galleguillos

Web search engines are indispensable orientation tools for their users, as reflected by the large number of people who use them daily. The logs that record a search engine's use are a valuable source of information: they tell us about people's expectations, preferences and needs, that is, what they hope to find on the Web. The results of analyzing usage logs have significant commercial value, in particular for electronic commerce and Internet advertising. The goal of this study is to find relationships between the queries of Chilean Web (.cl) users and the various classification areas of its different sites. We also expect to find relationships between the types of searches performed and the sites visited, in order to understand the need that leads users to formulate their queries. The information is based on the logs of the Chilean search engine TodoCL [1] between February and March 2004.

Collaboration


An overview of Carolina Galleguillos's collaborations.

Top Co-Authors

Brian McFee (University of California)

Boris Babenko (University of California)

Eric Wiewiora (University of California)