Carme Torras
Spanish National Research Council
Publications
Featured research published by Carme Torras.
Computers & Graphics | 2001
Pablo Jiménez; Federico Thomas; Carme Torras
Many applications in Computer Graphics require fast and robust 3D collision detection algorithms. These algorithms can be grouped into four approaches: space–time volume intersection, swept volume interference, multiple interference detection, and trajectory parameterization. While some approaches are tied to a particular object representation scheme (e.g., space–time volume intersection is particularly suited to a CSG representation), others are not. The multiple interference detection approach has been the most widely used, under a variety of sampling strategies, reducing the collision detection problem to multiple calls to static interference tests. In most cases, these tests boil down to detecting intersections between simple geometric entities, such as spheres, axis-aligned boxes, or polygons and segments. The computational cost of a collision detection algorithm depends not only on the complexity of the basic interference test used, but also on the number of times this test is applied. It is therefore crucial to apply the test only at those instants and places where a collision can actually occur. Several strategies have been developed to this end: (1) finding a lower time bound for the first collision, (2) reducing the pairs of primitives within objects susceptible to interfering, and (3) cutting down the number of object pairs to be considered for interference. These strategies rely on distance computation algorithms, hierarchical object representations, orientation-based pruning criteria, and space partitioning schemes. This paper provides a comprehensive survey of all these techniques from a unified viewpoint, so that well-known algorithms are presented as particular instances of general approaches.
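The multiple-interference-detection approach lends itself to a compact illustration. The following Python sketch (our own toy example, not code from the paper; it assumes spherical primitives and a fixed uniform time sampling) reduces dynamic collision detection to repeated static sphere–sphere tests along the trajectories:

```python
import math

def spheres_intersect(c1, r1, c2, r2):
    """Static interference test: two spheres overlap iff the distance
    between their centres does not exceed the sum of their radii."""
    return math.dist(c1, c2) <= r1 + r2

def first_collision(traj1, traj2, r1, r2, steps=100):
    """Multiple interference detection: sample both trajectories at
    discrete instants t in [0, 1] and return the first t at which the
    static test reports an intersection (None if no sample collides)."""
    for i in range(steps + 1):
        t = i / steps
        if spheres_intersect(traj1(t), r1, traj2(t), r2):
            return t
    return None
```

Uniform sampling is the simplest strategy; the lower-time-bound and pruning strategies surveyed in the paper aim precisely at reducing how often such a static test must be called.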
IEEE Sensors Journal | 2011
Sergi Foix; Guillem Alenyà; Carme Torras
This paper reviews the state of the art in the field of lock-in time-of-flight (ToF) cameras, their advantages, their limitations, the existing calibration methods, and the way they are being used, sometimes in combination with other sensors. Even though lock-in ToF cameras provide neither higher resolution nor a larger ambiguity-free range compared to other range map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight, and reduced power consumption have motivated their increasing use in several research areas, such as computer graphics, machine vision, and robotics.
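As a small illustration of why per-pixel registered depth is convenient, the following Python sketch (our own, not from the paper; `fx`, `fy`, `cx`, `cy` are assumed pinhole intrinsics) back-projects a ToF depth map into a camera-frame point cloud, keeping each pixel index so the registered intensity can be attached directly:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a ToF depth map (row-major list of rows, metres along
    the optical axis) into 3D camera-frame points with a pinhole model.
    Each point keeps its (u, v) pixel so the intensity value registered
    at that pixel can be attached without any resampling."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # invalid / missing measurement
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((u, v, x, y, z))
    return points
```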
Artificial Intelligence | 2001
Pedro Meseguer; Carme Torras
Symmetry often appears in real-world constraint satisfaction problems, but strategies for exploiting it are only beginning to be developed. Here, a framework for exploiting symmetry within depth-first search is proposed, leading to two heuristics for variable selection and a domain pruning procedure. These strategies are then applied to two highly symmetric combinatorial problems, namely the Ramsey problem and the generation of balanced incomplete block designs. Experimental results show that these general-purpose strategies can compete with, and in some cases outperform, previous more ad hoc procedures.
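The value-symmetry pruning idea can be made concrete on graph colouring, where all colours not yet used anywhere are interchangeable. This toy Python sketch (ours, not the paper's procedures) performs depth-first search but expands at most one "fresh" colour per variable, so symmetric subtrees are never revisited:

```python
def color_graph(edges, n_vars, n_colors):
    """Depth-first graph colouring with value-symmetry pruning: besides
    the colours already in use, only ONE unused colour is tried at each
    variable, since all unused colours lead to symmetric subtrees."""
    assign = {}

    def neighbours(v):
        for a, b in edges:
            if a == v:
                yield b
            if b == v:
                yield a

    def dfs(var):
        if var == n_vars:
            return dict(assign)
        used = set(assign.values())
        fresh = [c for c in range(n_colors) if c not in used][:1]
        for col in sorted(used) + fresh:
            if all(assign.get(n) != col for n in neighbours(var)):
                assign[var] = col
                result = dfs(var + 1)
                if result is not None:
                    return result
                del assign[var]
        return None

    return dfs(0)
```

On a triangle with three colours it finds a colouring immediately; with two colours it proves infeasibility after exploring only one representative of each symmetric family, which is sound precisely because unused colours are interchangeable.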
Image and Vision Computing | 1996
Gordon Wells; Christophe Venaille; Carme Torras
Most vision-based robot positioning techniques rely on analytical formulations of the relationship between the robot pose and the projected image coordinates of several geometric features of the observed scene. This usually requires that several simple features such as points, lines or circles be visible in the image, which must either be unoccluded in multiple views or else part of a 3D model. Feature-matching algorithms, camera calibration, models of the camera geometry and object feature relationships are also necessary for pose determination. These steps are often computationally intensive and error-prone, and the complexity of the resulting formulations often limits the number of controllable degrees of freedom. We provide a comparative survey of existing visual robot positioning methods, and present a new technique based on neural learning and global image descriptors which overcomes many of these limitations. A feedforward neural network is used to learn the complex implicit relationship between the pose displacements of a 6-dof robot and the observed variations in global descriptors of the image, such as geometric moments and Fourier descriptors. The trained network may then be used to move the robot from arbitrary initial positions to a desired pose with respect to the observed scene. The method is shown to be capable of positioning an industrial robot with respect to a variety of complex objects with an acceptable precision for an industrial inspection application, and could be useful in other real-world tasks such as grasping, assembly and navigation.
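Global descriptors of the kind mentioned above are cheap to compute. As an illustration (our own minimal code, covering only low-order geometric moments, not the Fourier descriptors), the following computes the sort of inputs such a network could be trained on:

```python
def geometric_moments(image):
    """Global image descriptors: raw moments m00, m10, m01 and the two
    second-order central moments mu20, mu02 of a grey-level image given
    as a list of rows. These vary smoothly with small pose displacements,
    which is what makes them learnable as a pose signal."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            m00 += val
            m10 += x * val
            m01 += y * val
    xc, yc = m10 / m00, m01 / m00          # intensity centroid
    mu20 = mu02 = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            mu20 += (x - xc) ** 2 * val
            mu02 += (y - yc) ** 2 * val
    return m00, xc, yc, mu20, mu02
```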
international conference on robotics and automation | 2011
Guillem Alenyà; Babette Dellen; Carme Torras
Supervision of long-lasting, extensive botanic experiments is a promising robotic application that recent technological advances have made feasible. Plant modelling for this application has strong demands, particularly regarding 3D information gathering and speed. This paper shows that Time-of-Flight (ToF) cameras achieve a good compromise between both demands and provide a suitable complement to color vision. A new method is proposed to segment plant images into their composite surface patches by combining hierarchical color segmentation with quadratic surface fitting using ToF depth data. Experimentation shows that the interpolated depth maps derived from the obtained surfaces fit the original scenes well. Moreover, candidate leaves to be approached by a measuring instrument are ranked, and robot-mounted cameras then move closer to them to validate their suitability for sampling. Some ambiguities arising from leaf overlap or occlusion are cleared up in this way. The work is a proof of concept that dense color data, combined with the sparse depth provided by a ToF camera, yields a good enough 3D approximation for automated plant measuring at the high throughput imposed by the application.
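Quadratic surface fitting to ToF depth samples is a standard least-squares problem. The sketch below (ours; it solves the normal equations with plain Gaussian elimination, not necessarily the solver the authors used) fits z = ax² + by² + cxy + dx + ey + f to a set of (x, y, z) depth points:

```python
def fit_quadratic_surface(points):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to
    depth samples (x, y, z) via the normal equations, solved with
    Gaussian elimination. Returns the coefficient vector [a, b, c, d, e, f]."""
    def basis(x, y):
        return [x * x, y * y, x * y, x, y, 1.0]

    n = 6
    A = [[0.0] * n for _ in range(n)]   # Gram matrix of the basis
    rhs = [0.0] * n
    for x, y, z in points:
        phi = basis(x, y)
        for i in range(n):
            rhs[i] += phi[i] * z
            for j in range(n):
                A[i][j] += phi[i] * phi[j]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * coef[j] for j in range(i + 1, n))
        coef[i] = (rhs[i] - s) / A[i][i]
    return coef
```

Once fitted, evaluating the quadratic over the patch yields the interpolated depth map that is compared against the original scene.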
Machine Learning | 1992
José del R. Millán; Carme Torras
This paper presents a reinforcement connectionist system that finds and learns suitable situation-action rules to generate feasible paths for a point robot in a 2D environment with circular obstacles. The basic reinforcement algorithm is extended with a strategy for discovering stable solution paths. Equipped with this strategy and a powerful codification scheme, the path-finder (i) learns quickly, (ii) deals with continuous-valued inputs and outputs, (iii) exhibits good noise tolerance and generalization capabilities, (iv) copes with dynamic environments, and (v) solves an instance of the path-finding problem with strong performance demands.
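The flavour of reinforcement learning of situation-action rules can be conveyed with a one-weight toy (ours; a deterministic two-point variant of stochastic exploration, not the paper's connectionist architecture): the learner perturbs its action, observes a scalar reinforcement, and moves toward the better-rewarded perturbation.

```python
def train_situation_action(samples, epochs=200, lr=0.1, eps=0.1):
    """Reinforcement-style learning of a linear situation-action rule
    a = w * x. For each situation x the rule's action is perturbed up and
    down by eps; the reinforcement signal is the negative action error,
    and the weight moves toward the better-rewarded perturbation."""
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            r_plus = -abs((w + eps) * x - target)    # reward, perturbed up
            r_minus = -abs((w - eps) * x - target)   # reward, perturbed down
            w += lr * (r_plus - r_minus) / (2 * eps)
    return w
```

Note that only the scalar reinforcement is used, never the target itself: the learner discovers the rule a = 2x purely from "better/worse" feedback.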
computer vision and pattern recognition | 2012
Edgar Simo-Serra; Arnau Ramisa; Guillem Alenyà; Carme Torras; Francesc Moreno-Noguer
Markerless 3D human pose detection from a single image is a severely underconstrained problem, because different 3D poses can have similar image projections. In order to handle this ambiguity, current approaches rely on prior shape models that can only be correctly adjusted if 2D image features are accurately detected. Unfortunately, although current 2D part detector algorithms have shown promising results, they are not yet accurate enough to guarantee a complete disambiguation of the inferred 3D shape. In this paper, we introduce a novel approach for estimating 3D human pose even when observations are noisy. We propose a stochastic sampling strategy to propagate the noise from the image plane to the shape space. This provides a set of ambiguous 3D shapes, which are virtually indistinguishable from their image projections. Disambiguation is then achieved by imposing kinematic constraints that guarantee that the resulting pose resembles a 3D human shape. We validate the method on a variety of situations in which state-of-the-art 2D detectors yield either inaccurate estimations or partly miss some of the body parts.
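A toy version of propagating image-plane noise to shape space can be sketched as follows (our own simplification: orthographic projection and a single limb of known length, unlike the full articulated model in the paper). Noisy copies of the observed 2D endpoint are lifted to the two depth hypotheses the limb length allows, and a kinematic constraint prunes the infeasible half:

```python
import math
import random

def lift_hypotheses(obs, limb_len, sigma=0.05, n_samples=200, seed=0):
    """Sample noisy copies of an observed limb-endpoint projection (u, v),
    lift each sample to the two depths allowed by a known limb length
    under orthographic projection, and keep only hypotheses satisfying a
    kinematic constraint (here: the joint cannot bend backwards, z >= 0)."""
    rng = random.Random(seed)
    u0, v0 = obs
    kept = []
    for _ in range(n_samples):
        u = u0 + rng.gauss(0, sigma)
        v = v0 + rng.gauss(0, sigma)
        d2 = limb_len ** 2 - u * u - v * v
        if d2 < 0:               # sample falls outside the reachable disc
            continue
        z = math.sqrt(d2)
        for cand in ((u, v, z), (u, v, -z)):   # the two ambiguous lifts
            if cand[2] >= 0:                   # kinematic constraint
                kept.append(cand)
    return kept
```

Every surviving hypothesis is consistent with the (noisy) image evidence and with the kinematics, which is the essence of sampling first and disambiguating afterwards.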
computer vision and pattern recognition | 2013
Edgar Simo-Serra; Ariadna Quattoni; Carme Torras; Francesc Moreno-Noguer
We introduce a novel approach to automatically recover 3D human pose from a single image. Most previous work follows a pipelined approach: initially, a set of 2D features such as edges, joints or silhouettes are detected in the image, and then these observations are used to infer the 3D pose. Solving these two problems separately may lead to erroneous 3D poses when the feature detector has performed poorly. In this paper, we address this issue by jointly solving both the 2D detection and the 3D inference problems. For this purpose, we propose a Bayesian framework that integrates a generative model based on latent variables with discriminative 2D part detectors based on HOGs, and performs inference using evolutionary algorithms. Real experimentation demonstrates competitive results and the ability of our methodology to provide accurate 2D and 3D pose estimations even when the 2D detectors are inaccurate.
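Evolutionary inference over a posterior can be sketched in a few lines (our illustration only: a 1D latent variable and an elitist (1+λ) evolution strategy, far simpler than the paper's generative model and detectors):

```python
import random

def map_inference(log_prior, log_lik, x0=0.0, gens=100, lam=20,
                  sigma=1.0, seed=0):
    """Toy evolutionary MAP inference: an elitist (1+lambda) evolution
    strategy climbs the unnormalised log-posterior
    log p(x) + log p(obs | x) by mutating the incumbent and keeping the
    best candidate ever seen, with a geometrically cooled mutation step."""
    rng = random.Random(seed)
    best, best_score = x0, log_prior(x0) + log_lik(x0)
    for _ in range(gens):
        for _ in range(lam):
            cand = best + rng.gauss(0, sigma)
            score = log_prior(cand) + log_lik(cand)
            if score > best_score:        # elitism
                best, best_score = cand, score
        sigma *= 0.95                     # cool the mutation step
    return best
```

With a N(0, 1) prior and a N(2, 0.5²) likelihood, the maximiser of the product is the precision-weighted mean 1.6, which the strategy recovers without any gradient information.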
international conference on robotics and automation | 1988
Federico Thomas; Carme Torras
When a set of constraints is imposed on the degrees of freedom between several rigid bodies, finding the configuration or configurations that satisfy all these constraints is a matter of special interest. The problem is not new and has been discussed not only in kinematics, but also more recently in the design of object-level robot programming languages. In this last domain, several languages have been developed, from different points of view, that are able to partially solve the problem. Using the theory of continuous groups, a method is derived that is more general than those previously proposed, which were based on the symbolic manipulation of chains of matrix products.
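Although the paper's method goes beyond symbolic manipulation of matrix-product chains, the underlying consistency question is easy to state with homogeneous transforms: a cycle of relative-pose constraints is satisfiable only if the composed chain returns to the identity. A minimal planar sketch (ours):

```python
import math

def planar_transform(theta, tx, ty):
    """3x3 homogeneous transform for a planar rotation plus translation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def compose(*transforms):
    """Chain relative-pose constraints by matrix multiplication."""
    M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for T in transforms:
        M = [[sum(M[i][k] * T[k][j] for k in range(3)) for j in range(3)]
             for i in range(3)]
    return M

def closes_loop(transforms, tol=1e-9):
    """A cyclic set of relative-pose constraints is consistent iff the
    composed chain of transforms returns to the identity."""
    M = compose(*transforms)
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    return all(abs(M[i][j] - I[i][j]) < tol
               for i in range(3) for j in range(3))
```

Walking around a unit square (four identical translate-then-turn-90° steps) closes the loop; three such steps do not.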
international conference on robotics and automation | 2012
Arnau Ramisa; Guillem Alenyà; Francesc Moreno-Noguer; Carme Torras
Detecting grasping points is a key problem in cloth manipulation. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration. In this paper, by contrast, we circumvent the need for multiple re-graspings by building a robust detector that identifies the grasping points, generally in a single step, even when clothes are highly wrinkled. To handle the large variability a deformed cloth may exhibit, we build a Bag-of-Features-based detector that combines appearance and 3D geometry features. An image is scanned using a sliding window with a linear classifier, and the candidate windows are refined using a non-linear SVM and a “grasp goodness” criterion to select the best grasping point. We demonstrate our approach by detecting collars in deformed polo shirts using a Kinect camera. Experimental results show good performance of the proposed method, not only in identifying the same trained textile object part under severe deformations and occlusions, but also the corresponding part in other clothes, exhibiting a certain degree of generalization.
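The sliding-window stage can be sketched as follows (our toy version: the "linear classifier" is reduced to summing precomputed per-cell responses, and the non-linear SVM refinement and grasp-goodness re-ranking are omitted):

```python
def detect_grasp_point(score_map, win=3):
    """Sliding-window scan over a rectangular grid of per-cell classifier
    responses: score every win x win window and return the centre of the
    best-scoring one as the proposed grasping point, plus its score.
    In a full pipeline the top windows would then be re-scored by a
    non-linear classifier before committing to a grasp."""
    best, best_score = None, float('-inf')
    rows, cols = len(score_map), len(score_map[0])
    for r in range(rows - win + 1):
        for c in range(cols - win + 1):
            s = sum(score_map[r + i][c + j]
                    for i in range(win) for j in range(win))
            if s > best_score:
                best_score, best = s, (r + win // 2, c + win // 2)
    return best, best_score
```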