Barry Ridge
University of Ljubljana
Publication
Featured research published by Barry Ridge.
international conference on robotics and automation | 2010
Barry Ridge; Danijel Skočaj; Aleš Leonardis
For a developmental robotic system to function successfully in the real world, it is important that it be able to form its own internal representations of affordance classes based on observable regularities in sensory data. Usually successful classifiers are built using labeled training data, but it is not always realistic to assume that labels are available in a developmental robotics setting. There does, however, exist an advantage in this setting that can help circumvent the absence of labels: co-occurrence of correlated data across separate sensory modalities over time. The main contribution of this paper is an online classifier training algorithm based on Kohonen's learning vector quantization (LVQ) that, by taking advantage of this co-occurrence information, does not require labels during training, whether dynamically generated or otherwise. We evaluate the algorithm in experiments involving a robotic arm that interacts with various household objects on a table surface while camera systems extract features for two separate visual modalities. The algorithm is shown to improve its ability to classify the affordances of novel objects over time, coming close to the performance of equivalent fully supervised algorithms.
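The following is a minimal, hypothetical sketch of the co-occurrence idea behind such an algorithm, not the paper's exact method: the winning prototype in one sensory view supplies a pseudo-label for an LVQ1-style update in the other view, so no external labels are needed. The class count, dimensions, and learning rate are assumed for illustration.

```python
import numpy as np

class SelfSupervisedLVQ:
    """Illustrative sketch: the winner in the second (effect) view
    pseudo-labels an LVQ1-style update in the first (feature) view."""

    def __init__(self, n_protos, dim_x, dim_y, n_classes=2, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.wx = rng.normal(size=(n_protos, dim_x))   # feature-view prototypes
        self.wy = rng.normal(size=(n_protos, dim_y))   # effect-view prototypes
        self.labels = np.arange(n_protos) % n_classes  # assumed class assignment
        self.lr = lr

    def _winner(self, protos, v):
        return int(np.argmin(np.linalg.norm(protos - v, axis=1)))

    def update(self, x, y):
        k = self._winner(self.wy, y)             # winner in the effect view
        pseudo = self.labels[k]                  # co-occurrence pseudo-label
        j = self._winner(self.wx, x)
        sign = 1.0 if self.labels[j] == pseudo else -1.0
        self.wx[j] += sign * self.lr * (x - self.wx[j])   # LVQ1 attract/repel
        self.wy[k] += self.lr * (y - self.wy[k])          # plain VQ tracking

    def predict(self, x):
        return self.labels[self._winner(self.wx, x)]
```

In use, `update` would be called once per robot-object interaction with the co-occurring feature and effect vectors; no label ever enters the loop.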
international conference on advanced robotics | 2013
Bojan Nemec; Fares J. Abu-Dakka; Barry Ridge; Ales Ude; Jimmy Alison Jørgensen; Thiusius Rajeeth Savarimuthu; Jerome Jouffroy; Henrik Gordon Petersen; Norbert Krüger
In this paper we propose a new algorithm that can be used for the adaptation of robot trajectories in automated assembly tasks. Initial trajectories and forces are obtained by demonstration and iteratively adapted to specific environment configurations. The algorithm adapts Cartesian-space trajectories to match the forces recorded during the human demonstration. We experimentally demonstrate the effectiveness of our approach on learning the Peg-in-Hole (PiH) task, performing experiments on two different robotic platforms with workpieces of different shapes.
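A hedged sketch of the kind of iterative force-driven adaptation described, under assumed inputs and gains (the paper's actual update rule may differ):

```python
import numpy as np

def adapt_trajectory(positions, measured_forces, demo_forces, gain=0.002):
    """One illustrative iteration: displace each Cartesian waypoint along
    the force-error direction so that executed forces move toward those
    recorded during human demonstration.
    positions, forces: (T, 3) arrays; gain is an assumed tuning constant."""
    force_error = demo_forces - measured_forces
    return positions + gain * force_error

# Iterative scheme: execute the adapted trajectory, re-measure forces,
# and repeat until the force-error norm falls below a chosen threshold.
```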
intelligent robots and systems | 2013
Barry Ridge; Ales Ude
When it comes to learning how to manipulate objects from experience with minimal prior knowledge, robots encounter significant challenges. When the objects are unknown to the robot, the lack of prior object models demands a robust feature descriptor such that the robot can reliably compare objects and the effects of their manipulation. In this paper, using an experimental platform that gathers 3-D data from the Kinect RGB-D sensor, as well as push action trajectories from a tracking system, we address these issues using an action-grounded 3-D feature descriptor. Rather than using pose-invariant visual features, as is often the case with object recognition, we ground the features of objects with respect to their manipulation, that is, by using shape features that describe the surface of an object relative to the push contact point and direction. Using this setup, object push affordance learning trials are performed by a human and both pre-push and post-push object features are gathered, as well as push action trajectories. A self-supervised multi-view online learning algorithm is employed to bootstrap both the discovery of affordance classes in the post-push view, as well as a discriminative model for predicting them in the pre-push view. Experimental results demonstrate the effectiveness of self-supervised class discovery, class prediction and feature relevance determination on a collection of unknown objects.
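To illustrate the grounding step, here is a simplified sketch (assumed cell grid and frame construction, not the paper's exact descriptor): the point cloud is expressed in a frame anchored at the push contact point and aligned with the push direction, then summarized by coarse per-cell statistics.

```python
import numpy as np

def action_grounded_descriptor(points, contact, push_dir, grid=(4, 4, 4)):
    """Sketch: express the object's points in a push-centered frame and
    summarize them with per-cell point densities on a coarse grid.
    Assumes the push direction is not vertical."""
    x = push_dir / np.linalg.norm(push_dir)        # frame x: push direction
    up = np.array([0.0, 0.0, 1.0])
    y = np.cross(up, x); y /= np.linalg.norm(y)
    z = np.cross(x, y)
    R = np.stack([x, y, z])                        # rows are the frame axes
    local = (points - contact) @ R.T               # points in the push frame
    hist, _ = np.histogramdd(local, bins=grid)
    return hist.ravel() / max(len(points), 1)      # normalized cell counts
```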
International Journal of Advanced Robotic Systems | 2015
Barry Ridge; Aleš Leonardis; Ales Ude; Miha Deniša; Danijel Skočaj
Continuous learning of object affordances in a cognitive robot is a challenging problem, the solution to which arguably requires a developmental approach. In this paper, we describe scenarios where robotic systems interact with household objects by pushing them using robot arms while observing the scene with cameras, and which must incrementally learn, without external supervision, both the effect classes that emerge from these interactions as well as a discriminative model for predicting them from object properties. We formalize the scenario as a multi-view learning problem in which data co-occur across two separate views over time, and we present an online learning framework that uses a self-supervised form of learning vector quantization to build the discriminative model. In various experiments, we demonstrate the effectiveness of this approach in comparison with related supervised methods, using data from experiments performed on two different robotic platforms.
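As a complement to the prediction side, the following is an assumed-parameter sketch of how effect classes might emerge without supervision: a growing vector quantizer spawns a new prototype whenever an observed effect lies far from all existing ones.

```python
import numpy as np

class OnlineEffectDiscovery:
    """Illustrative sketch of unsupervised effect-class discovery: each
    prototype stands for a candidate effect class; the novelty threshold
    and learning rate are assumed values, not the paper's."""

    def __init__(self, threshold=1.0, lr=0.1):
        self.protos, self.threshold, self.lr = [], threshold, lr

    def observe(self, effect):
        effect = np.asarray(effect, dtype=float)
        if not self.protos:
            self.protos.append(effect.copy())
            return 0
        d = [np.linalg.norm(p - effect) for p in self.protos]
        k = int(np.argmin(d))
        if d[k] > self.threshold:              # novel effect: new class
            self.protos.append(effect.copy())
            return len(self.protos) - 1
        self.protos[k] += self.lr * (effect - self.protos[k])
        return k                               # index of the matched class
```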
Adaptive Behavior | 2017
Philipp Zech; Simon Haller; Safoura Rezapour Lakani; Barry Ridge; Emre Ugur; Justus H. Piater
J. J. Gibson’s concept of affordance, one of the central pillars of ecological psychology, is a truly remarkable idea that provides a concise theory of animal perception predicated on environmental interaction. It is thus not surprising that this idea has also found its way into robotics research as one of the underlying theories for action perception. The success of the theory in this regard has meant that existing research is both abundant and diffuse by virtue of the pursuit of multiple different paths and techniques with the common goal of enabling robots to learn, perceive, and act upon affordances. Up until now, there has existed no systematic investigation of existing work in this field. Motivated by this circumstance, in this article, we begin by defining a taxonomy for computational models of affordances rooted in a comprehensive analysis of the most prominent theoretical ideas of import in the field. Subsequently, after performing a systematic literature review, we provide a classification of existing research within our proposed taxonomy. Finally, by both quantitatively and qualitatively assessing the data resulting from the classification process, we highlight gaps in the research terrain and outline open questions for the investigation of affordances in robotics that we believe will help inform future work, prioritize research goals, and potentially advance the field toward greater robot autonomy.
international conference on advanced robotics | 2015
Barry Ridge; Emre Ugur; Ales Ude
Recent work in robotics, particularly in the domains of object manipulation and affordance learning, has seen the development of action-grounded features, that is, object features that are defined dynamically with respect to manipulation actions. Rather than using pose-invariant features, as is often the case with object recognition, such features are grounded with respect to the manipulation of the object, for instance, by using shape features that describe the surface of an object relative to the push contact point and direction. In this paper we provide an experimental comparison between action-grounded features and non-grounded features in an object affordance classification setting. Using an experimental platform that gathers 3-D data from the Kinect RGB-D sensor, as well as push action trajectories from an electromagnetic tracking system, we provide experimental results that demonstrate the effectiveness of this action-grounded approach across a range of state-of-the-art classifiers.
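The evaluation protocol can be pictured with a short sketch (the data arrays, classifier choices, and fold count below are assumptions, not the paper's exact setup): both encodings are scored with the same classifiers under cross-validation.

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def compare_feature_sets(X_grounded, X_plain_plus_action, y, folds=5):
    """Sketch: evaluate action-grounded vs. non-grounded-plus-action
    encodings with identical classifiers and report mean accuracies."""
    classifiers = {
        "svm": SVC(),
        "forest": RandomForestClassifier(),
        "knn": KNeighborsClassifier(),
    }
    for enc, X in [("grounded", X_grounded),
                   ("plain+action", X_plain_plus_action)]:
        for name, clf in classifiers.items():
            acc = cross_val_score(clf, X, y, cv=folds).mean()
            print(f"{enc:13s} {name:7s} accuracy = {acc:.3f}")
```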
international conference on advanced robotics | 2017
Timotej Gašpar; Barry Ridge; Robert Bevec; Martin Bem; Igor Kovac; Ales Ude; Ziga Gosar
In an increasingly competitive manufacturing industry it is becoming ever more important to rapidly react to changes in market demands. In order to satisfy these requirements, it is crucial that automated manufacturing processes are flexible and can be adapted to new production requirements quickly. In this paper we present a novel automatically reconfigurable robot workcell that addresses the issues of flexible manufacturing. The proposed workcell is reconfigurable in terms of hardware and software. The hardware elements of the workcell, both those selected off-the-shelf and those developed specifically for the system, allow for fast cell setup and reconfiguration, while the software aims to provide a modular, robot-independent, ROS-based programming environment. While the proposed workcell is being developed in such a way as to address the needs of production-oriented SMEs where batch sizes are relatively small, it will also be of interest to enterprises with larger production lines since it additionally targets high performance in terms of speed, interoperability of robotic elements, and ease of use.
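One way to picture the robot-independent software layer is the following hypothetical interface sketch (class and method names are invented for illustration and are not the workcell's actual API): vendor-specific drivers implement a common contract, so task code survives hardware reconfiguration.

```python
from abc import ABC, abstractmethod

class RobotModule(ABC):
    """Hypothetical robot-independent module contract."""

    @abstractmethod
    def move_to(self, pose):
        """Command a Cartesian target pose."""

    @abstractmethod
    def read_state(self) -> dict:
        """Return a snapshot of joint/tool state."""

class ExampleArmModule(RobotModule):
    def move_to(self, pose):
        # A real driver would forward this to the robot controller,
        # e.g. via a ROS topic or action interface.
        print(f"moving to {pose}")

    def read_state(self):
        return {"joints": [0.0] * 6}
```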
Archive | 2019
Timotej Gašpar; Robert Bevec; Barry Ridge; Ales Ude
Reconfigurable manufacturing systems (RMS) provide the means to deal with changes and uncertainties in highly dynamic production processes. They allow for relatively quick adjustment of the various modules within a production line. To further increase the flexibility of such systems, multiple robots can be used within them. Multi-robot systems provide a higher degree of flexibility and efficiency compared to single-robot systems and can perform tasks that require a high level of dexterity. However, in order to ensure that the robots are able to precisely perform cooperative tasks, the system must be well calibrated. In this paper, we present a novel approach to robot base frame calibration that exploits the kinesthetic guidance feature of collaborative robots. The developed method is well suited to RMS, as it is more time-efficient and intuitive, without sacrificing precision.
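The underlying geometry can be illustrated with a standard least-squares rigid registration (Kabsch/SVD) sketch; this shows the general principle, not necessarily the paper's exact formulation. The corresponding points are assumed to come from kinesthetically guiding each robot's tool to the same physical locations.

```python
import numpy as np

def base_frame_calibration(pts_a, pts_b):
    """Sketch: given corresponding points (N, 3) expressed in the base
    frames of robots A and B, recover R and t with pts_b ≈ R @ pts_a + t
    via the SVD-based Kabsch solution."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```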
ieee-ras international conference on humanoid robots | 2016
Barry Ridge; Ales Ude
Many 3D feature descriptors have been developed over the years to solve problems that require the representation of object shape, e.g., object recognition or pose estimation, but comparatively few have been developed specifically to tackle the problem of object affordance learning, a domain where the interaction between action parameters and sensory features plays a crucial role. In previous work, we introduced a feature descriptor that divided an object point cloud into coarse-grained cells, derived simple features from each of the cells, and grounded those features with respect to a reference frame defined by a pushing action. We also compared this action-grounded descriptor to an equivalent non-action-grounded descriptor coupled with action features in a push affordance classification task and established that the action-grounded encoding can provide improved performance. In this paper, we investigate modifying more well-established 3D shape descriptors based on surface geometry, in particular the Viewpoint Feature Histogram (VFH), such that they are action-grounded in a similar manner; we compare them to volumetric octree-based representations and conclude that having multi-scale representations, in which parts at each scale can be referenced with respect to each other, may be a crucial component in action-grounded affordance learning.
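The grounding modification can be sketched on one simplified VFH-style component (an illustration under assumed inputs, not the full descriptor): angles between surface normals and the viewpoint direction are replaced by angles between normals and the push direction.

```python
import numpy as np

def push_grounded_normal_histogram(normals, push_dir, bins=45):
    """Sketch: bin the angles between per-point surface normals (N, 3)
    and the push direction, tying shape statistics to the action frame
    rather than the viewpoint."""
    d = push_dir / np.linalg.norm(push_dir)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_ang = np.clip(n @ d, -1.0, 1.0)
    hist, _ = np.histogram(np.arccos(cos_ang), bins=bins,
                           range=(0.0, np.pi), density=True)
    return hist
```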
international conference on computer vision systems | 2007
Gregor Berginc; Barry Ridge; Ondrej Vanek; Manuela Hutter; Nick Hawes