Publication


Featured research published by Arnau Ramisa.


Computer Vision and Pattern Recognition | 2011

Combining attributes and Fisher vectors for efficient image retrieval

Matthijs Douze; Arnau Ramisa; Cordelia Schmid

Attributes were recently shown to give excellent results for category recognition. In this paper, we demonstrate their performance in the context of image retrieval. First, we show that retrieving images of particular objects based on attribute vectors gives results comparable to the state of the art. Second, we demonstrate that combining attribute and Fisher vectors improves performance for retrieval of particular objects as well as categories. Third, we implement an efficient coding technique for compressing the combined descriptor to very small codes. Experimental results on the Holidays dataset show that our approach significantly outperforms the state of the art, even for a very compact representation of 16 bytes per image. Category-level retrieval is evaluated on the “web-queries” dataset. We show that attribute features combined with Fisher vectors improve the performance and that combined image features can supplement text features.
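
As a rough illustration of the compact-code idea, the sketch below combines an attribute vector and a Fisher vector and compresses the result to a 16-byte binary code. The paper's actual coding technique is not reproduced here; PCA projection followed by sign binarization is a simple stand-in, and all function names are hypothetical.

```python
import numpy as np

def fit_pca(sample, n_components=128):
    """Fit a PCA basis on a training sample of combined descriptors."""
    mean = sample.mean(axis=0)
    _, _, vt = np.linalg.svd(sample - mean, full_matrices=False)
    return mean, vt[:n_components]

def small_code(attributes, fisher, mean, basis):
    """Combine attribute and Fisher vectors and compress them: with a
    128-component basis, sign binarization yields a 16-byte code."""
    def l2(v):
        return v / (np.linalg.norm(v) + 1e-9)
    combined = np.concatenate([l2(attributes), l2(fisher)])
    bits = basis @ (combined - mean) > 0   # one bit per PCA component
    return np.packbits(bits)               # 128 bits -> 16 bytes
```

Retrieval would then rank database images by Hamming distance between these codes.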


Computer Vision and Pattern Recognition | 2012

Single image 3D human pose estimation from noisy observations

Edgar Simo-Serra; Arnau Ramisa; Guillem Alenyà; Carme Torras; Francesc Moreno-Noguer

Markerless 3D human pose detection from a single image is a severely underconstrained problem because different 3D poses can have similar image projections. In order to handle this ambiguity, current approaches rely on prior shape models that can only be correctly adjusted if 2D image features are accurately detected. Unfortunately, although current 2D part detector algorithms have shown promising results, they are not yet accurate enough to guarantee a complete disambiguation of the 3D inferred shape. In this paper, we introduce a novel approach for estimating 3D human pose even when observations are noisy. We propose a stochastic sampling strategy to propagate the noise from the image plane to the shape space. This provides a set of ambiguous 3D shapes, which are virtually indistinguishable from their image projections. Disambiguation is then achieved by imposing kinematic constraints that guarantee the resulting pose resembles a 3D human shape. We validate the method on a variety of situations in which state-of-the-art 2D detectors yield either inaccurate estimations or partly miss some of the body parts.
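
The two-step idea, propagating image-plane noise into shape space by sampling and then filtering with kinematic constraints, might look like the sketch below. The `lift` callable is a hypothetical stand-in for the paper's shape-model fitting, and the bone-length check is a crude stand-in for its kinematic constraints.

```python
import numpy as np

def sample_pose_hypotheses(joints_2d, noise_cov, lift, n_samples=500):
    """Propagate 2D detector noise to shape space by sampling: perturb
    the (J, 2) detections and lift each sample to a 3D pose candidate."""
    rng = np.random.default_rng()
    hypotheses = []
    for _ in range(n_samples):
        noise = rng.multivariate_normal(np.zeros(2), noise_cov,
                                        size=joints_2d.shape[0])
        hypotheses.append(lift(joints_2d + noise))
    return hypotheses

def kinematically_plausible(pose_3d, bones, length_range=(0.15, 0.6)):
    """Accept a pose only if every bone length falls within an
    illustrative human-plausible range (in metres)."""
    lo, hi = length_range
    return all(lo <= np.linalg.norm(pose_3d[i] - pose_3d[j]) <= hi
               for i, j in bones)
```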


International Conference on Robotics and Automation | 2012

Using depth and appearance features for informed robot grasping of highly wrinkled clothes

Arnau Ramisa; Guillem Alenyà; Francesc Moreno-Noguer; Carme Torras

Detecting grasping points is a key problem in cloth manipulation. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration. In this paper, by contrast, we circumvent the need for multiple re-graspings by building a robust detector that identifies the grasping points, generally in one single step, even when clothes are highly wrinkled. In order to handle the large variability a deformed cloth may have, we build a Bag of Features based detector that combines appearance and 3D geometry features. An image is scanned using a sliding window with a linear classifier, and the candidate windows are refined using a non-linear SVM and a “grasp goodness” criterion to select the best grasping point. We demonstrate our approach by detecting collars in deformed polo shirts, using a Kinect camera. Experimental results show good performance of the proposed method, not only in identifying the same trained textile object part under severe deformations and occlusions, but also in identifying the corresponding part in other clothes, exhibiting a degree of generalization.
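
The detection cascade described above, a fast linear scan over sliding windows followed by a non-linear SVM and a grasp-goodness selection, might be structured as in the following sketch; every argument is a hypothetical stand-in for a trained or engineered component.

```python
def detect_grasp_point(windows, bof, linear_score, svm_score, goodness):
    """Two-stage sliding-window grasp-point detection (sketch).

    windows      : iterable of candidate window coordinates
    bof          : window -> bag-of-features descriptor (appearance + 3D)
    linear_score : fast linear classifier used to scan every window
    svm_score    : non-linear SVM used to rescore surviving candidates
    goodness     : "grasp goodness" criterion for the final selection
    """
    # stage 1: cheap linear filter over all sliding windows
    candidates = [w for w in windows if linear_score(bof(w)) > 0]
    # stage 2: refine the candidates with the non-linear SVM
    scored = [(svm_score(bof(w)), w) for w in candidates]
    # stage 3: choose the best grasping point under the goodness criterion
    return max(scored, key=lambda sw: goodness(*sw))[1] if scored else None
```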


Autonomous Robots | 2009

Robust vision-based robot localization using combinations of local feature region detectors

Arnau Ramisa; Adriana Tapus; David Aldavert; Ricardo Toledo; Ramon López de Mántaras

This paper presents a vision-based approach for mobile robot localization. The model of the environment is topological. The new approach characterizes a place using a signature. This signature consists of a constellation of descriptors computed over different types of local affine covariant regions extracted from an omnidirectional image acquired by rotating a standard camera with a pan-tilt unit. This type of representation permits reliable and distinctive environment modelling. Our objectives were to validate the proposed method in indoor environments and, also, to find out whether the combination of complementary local feature region detectors improves localization compared with using a single region detector. Our experimental results show that if false matches are effectively rejected, the combination of different affine covariant region detectors notably increases the performance of the approach by combining the strengths of the individual detectors. In order to reduce the localization time, two strategies are evaluated: re-ranking the map nodes using a global similarity measure, and using a standard perspective field of view of 45°. In order to systematically test topological localization methods, another contribution of this work is a novel method to measure the degradation in localization performance as the robot moves away from the point where the original signature was acquired. This makes it possible to assess the robustness of the proposed signature. For this to be effective, it must be done in several varied environments that test all the situations in which the robot may have to perform localization.
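
A minimal sketch of the matching step, assuming a signature is a list of descriptor arrays (one per region detector) and using Lowe's nearest-neighbour ratio test as a plausible false-match rejection scheme (the paper's exact scheme may differ):

```python
import numpy as np

def match_count(desc_q, desc_n, ratio=0.8):
    """Count descriptor matches that survive the nearest-neighbour
    ratio test, rejecting ambiguous (likely false) matches."""
    matches = 0
    for d in desc_q:
        dists = np.linalg.norm(desc_n - d, axis=1)
        first, second = np.partition(dists, 1)[:2]
        if first < ratio * second:
            matches += 1
    return matches

def localize(query_signature, map_nodes):
    """Pick the map node whose signature best matches the query.
    Each signature holds one descriptor array per region detector,
    so complementary detectors vote jointly."""
    def score(node_signature):
        return sum(match_count(q, n)
                   for q, n in zip(query_signature, node_signature))
    return max(map_nodes, key=lambda node: score(map_nodes[node]))
```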


International Conference on Computer Vision Systems | 2008

A tale of two object recognition methods for mobile robots

Arnau Ramisa; Shrihari Vasudevan; Davide Scaramuzza; Ramon López de Mántaras; Roland Siegwart

Object recognition is a key feature for building robots capable of moving and performing tasks in human environments. However, current object recognition research largely ignores the problems that the mobile robotics context introduces. This work addresses the problem of applying these techniques to mobile robotics in a typical household scenario. We select two state-of-the-art object recognition methods that are suitable for adaptation to mobile robots, and we evaluate them on a challenging dataset of typical household objects that caters to these requirements. The advantages and drawbacks found for each method are highlighted, and some ideas for extending them are proposed. Evaluation is done by comparing the number of detected objects and false positives for the two approaches.


Intelligent Robots and Systems | 2013

FINDDD: A fast 3D descriptor to characterize textiles for robot manipulation

Arnau Ramisa; Guillem Alenyà; Francesc Moreno-Noguer; Carme Torras

Most current depth sensors provide 2.5D range images in which depth values are assigned to a rectangular 2D array. In this paper we take advantage of this structured information to build an efficient shape descriptor which is about two orders of magnitude faster than competing approaches, while showing similar performance in several tasks involving deformable object recognition. Given a 2D patch surrounding a point and its associated depth values, we build the descriptor for that point, based on the cumulative distances between its normals and a discrete set of normal directions. This processing is made very efficient using integral images, even allowing descriptors to be computed for every pixel of a range image in a few seconds. The discriminative power of our descriptor, dubbed FINDDD, is evaluated in three different scenarios: recognition of specific cloth wrinkles, instance recognition from geometry alone, and detection of reliable and informed grasping points.
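
A simplified FINDDD-style descriptor could be sketched as follows; hard assignment of normals to reference directions replaces the paper's cumulative-distance weighting, and the patch and cell layout are assumptions.

```python
import numpy as np

def finddd_like(normals, centers, patch=40, cells=4):
    """Sketch of a FINDDD-style descriptor over a 2.5D range image.

    normals : (H, W, 3) unit surface normals, one per pixel
    centers : (K, 3) discrete set of reference normal directions
    """
    H, W, _ = normals.shape
    K = len(centers)
    # assign every pixel's normal to its closest reference direction
    sim = normals.reshape(-1, 3) @ np.asarray(centers).T  # cosine similarity
    bins = sim.argmax(axis=1).reshape(H, W)
    # one integral image per orientation bin, for O(1) box sums
    integral = np.zeros((K, H + 1, W + 1))
    for k in range(K):
        integral[k, 1:, 1:] = np.cumsum(np.cumsum(bins == k, axis=0), axis=1)

    def box(k, y0, x0, y1, x1):
        ii = integral[k]
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    def describe(y, x):
        """Orientation histograms over a cells x cells grid of the patch
        whose top-left corner is (y, x); normalized to sum to one."""
        step = patch // cells
        d = np.array([box(k, y + i * step, x + j * step,
                          y + (i + 1) * step, x + (j + 1) * step)
                      for i in range(cells) for j in range(cells)
                      for k in range(K)], dtype=float)
        return d / (d.sum() + 1e-9)

    return describe
```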


Journal of Intelligent and Robotic Systems | 2011

Combining Invariant Features and the ALV Homing Method for Autonomous Robot Navigation Based on Panoramas

Arnau Ramisa; Alex Goldhoorn; David Aldavert; Ricardo Toledo; Ramon López de Mántaras

Biologically inspired homing methods, such as the Average Landmark Vector, are an interesting solution for local navigation due to their simplicity. However, they usually require modifying the environment by placing artificial landmarks in order to work reliably. In this paper we combine the Average Landmark Vector with invariant feature points automatically detected in panoramic images to overcome this limitation. The proposed approach was evaluated first in simulation and, given the promising results, also on two datasets of panoramas from real-world environments.
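
The Average Landmark Vector itself is simple to state: the ALV at a location is the mean of the unit vectors pointing at the landmarks, and the difference between the current ALV and the home ALV (in a shared compass frame) approximates the home direction. A minimal sketch, with bearings of matched invariant feature points standing in for artificial landmarks:

```python
import numpy as np

def alv(bearings):
    """Average Landmark Vector: mean of the unit vectors pointing at
    the landmarks (here, bearings of matched feature points in a
    panorama, expressed in a shared compass frame)."""
    return np.mean([(np.cos(b), np.sin(b)) for b in bearings], axis=0)

def home_vector(current_bearings, home_bearings):
    """The difference of ALVs approximates the direction from the
    current location back to the home location."""
    return alv(current_bearings) - alv(home_bearings)
```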


Artificial Intelligence Research and Development: Proceedings of the 14th International Conference of the Catalan Association for Artificial Intelligence | 2011

Determining where to grasp cloth using depth information

Arnau Ramisa; Guillem Alenyà; Francesc Moreno-Noguer; Carme Torras

In this paper we address the problem of finding an initial good grasping point for the robotic manipulation of textile objects lying on a flat surface. Given as input a point cloud of the cloth acquired with a 3D camera, we propose choosing as grasping points those that maximize a new measure of wrinkledness, computed from the distribution of normal directions over local neighborhoods. Real grasping experiments using a robotic arm are performed, showing that the proposed measure leads to promising results.
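
One plausible instantiation of such a wrinkledness measure uses the dispersion of unit normals in a local neighborhood: flat cloth has aligned normals and scores near zero, while wrinkled regions have spread-out normals and score higher. The exact formula in the paper may differ.

```python
import numpy as np

def wrinkledness(normals):
    """Dispersion of unit surface normals in a local neighborhood:
    0 for perfectly flat cloth (all normals aligned), approaching 1
    as the normals spread out over wrinkles.

    normals : (N, 3) unit normals of the points in the neighborhood
    """
    return 1.0 - np.linalg.norm(normals.mean(axis=0))
```

Grasping candidates would then be chosen as the point-cloud neighborhoods maximizing this score.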


International Conference on Robotics and Automation | 2008

Mobile robot localization using panoramic vision and combinations of feature region detectors

Arnau Ramisa; Adriana Tapus; Ramon López de Mántaras; Ricardo Toledo

This paper presents a vision-based approach for mobile robot localization. The environmental model is topological. The new approach uses a constellation of different types of affine covariant regions to characterize a place. This type of representation permits reliable and distinctive environment modeling. The performance of the proposed approach is evaluated using a database of panoramic images from different rooms. Additionally, we compare different combinations of complementary feature region detectors to find the one that achieves the best results. Our experiments show promising results for this new localization method. Additionally, similarly to what happens with single detectors, different combinations exhibit different strengths and weaknesses depending on the situation, suggesting that a context-aware method to combine the different detectors would improve the localization results.


Engineering Applications of Artificial Intelligence | 2014

Learning RGB-D descriptors of garment parts for informed robot grasping

Arnau Ramisa; Guillem Alenyà; Francesc Moreno-Noguer; Carme Torras

Robotic handling of textile objects in household environments is an emerging application that has recently received considerable attention thanks to the development of domestic robots. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration. In this work we propose a vision-based method, built on the Bag of Visual Words approach, that combines appearance and 3D information to detect parts suitable for grasping in clothes, even when they are highly wrinkled. We also contribute a new annotated garment-part dataset that can be used for benchmarking classification, part detection, and segmentation algorithms. The dataset is used to evaluate our approach and several state-of-the-art 3D descriptors for the task of garment part detection. Results indicate that appearance is a reliable source of information, but that augmenting it with 3D information can help the method perform better with new clothing items.
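
A minimal sketch of the descriptor side, assuming precomputed local appearance and depth descriptors and two learned codebooks; the weighted concatenation used for fusion is an assumption:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantize local descriptors against a visual codebook and return
    an L1-normalized bag-of-visual-words histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(axis=1),
                       minlength=len(codebook)).astype(float)
    return hist / (hist.sum() + 1e-9)

def rgbd_descriptor(appearance_desc, depth_desc, cb_app, cb_depth, w=0.5):
    """Fuse appearance and 3D information for one candidate garment-part
    window by weighted concatenation of the two BoVW histograms."""
    return np.concatenate([w * bovw_histogram(appearance_desc, cb_app),
                           (1 - w) * bovw_histogram(depth_desc, cb_depth)])
```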

Collaboration


Dive into Arnau Ramisa's collaborations.

Top Co-Authors

Ramon López de Mántaras (Spanish National Research Council)
Ricardo Toledo (Autonomous University of Barcelona)
David Aldavert (Autonomous University of Barcelona)
Francesc Moreno-Noguer (Spanish National Research Council)
Carme Torras (Spanish National Research Council)
Guillem Alenyà (Spanish National Research Council)
Fei Yan (University of Surrey)