Gert Kootstra
Royal Institute of Technology
Publications
Featured research published by Gert Kootstra.
British Machine Vision Conference | 2008
Gert Kootstra; Arco Nederveen; Bart de Boer
Humans are very sensitive to symmetry in visual patterns. Symmetry is detected and recognized very rapidly, and while viewing symmetrical patterns, eye fixations are concentrated along the axis of symmetry or the symmetrical center of the patterns. This suggests that symmetry is a highly salient feature. Existing computational models of saliency, however, have mainly focused on contrast as a measure of saliency and do not take symmetry into account. In this paper, we discuss local symmetry as a measure of saliency. We developed a number of symmetry models and performed an eye-tracking study in which human participants viewed photographic images to test the models. The performance of our symmetry models is compared with the contrast-saliency model of Itti et al. [1]. The results show that the symmetry models match the human data better than the contrast model does. This indicates that symmetry is a salient structural feature for humans, a finding which can be exploited in computer vision.
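The symmetry models evaluated in the paper are more elaborate, but the core intuition, that a pixel is salient when neighborhoods mirrored through it agree, can be sketched as follows. This is a toy illustration, not the paper's operators: the function name, window size, and scoring are all invented here.

```python
import numpy as np

def local_symmetry_saliency(img, radius=2):
    """Toy point-symmetry saliency map (illustrative sketch, not the
    paper's models). A pixel scores high when pairs of neighbours that
    mirror each other through it have similar intensities."""
    h, w = img.shape
    sal = np.zeros_like(img, dtype=float)
    # Half-plane of offsets; (-dy, -dx) is the mirrored partner of (dy, dx).
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if (dy, dx) > (0, 0)]
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            diffs = [abs(img[y + dy, x + dx] - img[y - dy, x - dx])
                     for dy, dx in offsets]
            sal[y, x] = 1.0 / (1.0 + np.mean(diffs))  # 1.0 = perfect mirror match
    return sal

# A ridge symmetric about column 4: saliency should peak on the symmetry axis.
img = np.tile(np.abs(np.arange(9) - 4).astype(float), (9, 1))
sal = local_symmetry_saliency(img)
```

On this pattern the score is exactly 1.0 on the symmetry axis and drops off away from it, matching the human tendency to fixate along the axis.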
Robotics and Autonomous Systems | 2014
Gert Kootstra; Arne Bilberg; Danica Kragic
For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and the data-acquisition system. We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible, having a high resolution, being easy to mount, and being simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k-nearest-neighbor classifier, using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates similar classification performance to the Weiss Robotics tactile sensor, while having additional benefits.
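The classification pipeline described above, dynamic time warping as the distance between pressure time series and a k-nearest-neighbor vote on top, can be sketched in a few lines. The palpation traces below are made up for illustration; the paper's features and data are of course different.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D pressure time series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow stretching either series: match, insertion, or deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, train, k=1):
    """k-NN with DTW distance; train is a list of (series, label) pairs."""
    dists = sorted((dtw_distance(query, s), lbl) for s, lbl in train)
    top = [lbl for _, lbl in dists[:k]]
    return max(set(top), key=top.count)  # majority vote among k nearest

# Hypothetical traces: a soft object relaxes under squeezing, a rigid one holds.
soft = [0.0, 0.5, 0.8, 0.6, 0.4, 0.3]
rigid = [0.0, 0.9, 1.0, 1.0, 1.0, 1.0]
train = [(soft, "soft"), (rigid, "rigid")]
label = knn_classify([0.0, 0.4, 0.9, 0.7, 0.5, 0.3], train)
```

DTW is what makes this robust to objects being squeezed at slightly different speeds: two traces with the same shape but different timing still get a small distance.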
Intelligent Robots and Systems | 2011
Mila Popovic; Gert Kootstra; Jimmy Alison Jørgensen; Danica Kragic; Norbert Krüger
Grasping unknown objects based on real-world visual input is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information, which is a sparse but powerful description of the scene. Based on this representation, we generate edge-based and surface-based grasps. The results show that the method generates successful grasps, that the edge and surface information are complementary, and that the method can deal with more complex scenes. We furthermore present a benchmark for vision-based grasping.
The International Journal of Robotics Research | 2012
Gert Kootstra; Mila Popovic; Jimmy Alison Jørgensen; Kamil Kukliński; Konstantsin Miatliuk; Danica Kragic; Norbert Krüger
Grasping unknown objects based on visual input, where no a priori knowledge about the objects is used, is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information which provides a sparse but powerful description of the scene. Based on this representation, we generate contour-based and surface-based grasps. We test our method in two real-world scenarios, as well as on a vision-based grasping benchmark providing a hybrid scenario using real-world stereo images as input and a simulator for extensive and repetitive evaluation of the grasps. The results show that the proposed method is able to generate successful grasps, and in particular that the contour and surface information are complementary for the task of grasping unknown objects. This allows for dealing with rather complex scenes.
IEEE-RAS International Conference on Humanoid Robots | 2010
Gert Kootstra; Niklas Bergström; Danica Kragic
This paper focuses on the fast and automatic detection and segmentation of unknown objects in unknown environments. Many existing object detection and segmentation methods assume prior knowledge about the object or human interference. However, an autonomous system operating in the real world will often be confronted with previously unseen objects. To solve this problem, we propose a segmentation approach named Automatic Detection And Segmentation (ADAS). For the detection of objects, we use symmetry, one of the Gestalt principles for figure-ground segregation, to detect salient objects in a scene. Starting from the initial seed, the object is segmented by iteratively applying graph cuts. We base the segmentation on both 2D and 3D cues: color, depth, and plane information. Instead of using a standard grid-based representation of the image, we use superpixels. Besides being a more natural representation, superpixels greatly reduce the processing time of the graph cuts and provide more noise-robust color and depth information. The results show that both the object-detection and the object-segmentation methods are successful and outperform existing methods.
International Conference on Robotics and Automation | 2011
Gert Kootstra; Danica Kragic
In many scenarios, a domestic robot will regularly encounter unknown objects. In such cases, top-down knowledge about the object for detection, recognition, and classification cannot be used. To learn about the object, or to be able to grasp it, bottom-up object segmentation is an important competence for the robot. Also when there is top-down knowledge, prior segmentation of the object can improve recognition and classification. In this paper, we focus on the problem of bottom-up detection and segmentation of unknown objects. Gestalt psychology studies the same phenomenon in human vision. We propose the use of a number of Gestalt principles. Our method starts by generating a set of hypotheses about the location of objects using symmetry. These hypotheses are then used to initialize the segmentation process. The main focus of the paper is on the evaluation of the resulting object segments using Gestalt principles to select segments with high figural goodness. The results show that the Gestalt principles can be successfully used for detection and segmentation of unknown objects. The results furthermore indicate that the Gestalt measures for the goodness of a segment correspond well with the objective quality of the segment. We exploit this to improve the overall segmentation performance.
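To make "figural goodness" concrete: one classic cue of this kind is compactness, where blob-like segments score higher than ragged ones. The sketch below uses the isoperimetric ratio as a stand-in goodness measure; it is only a proxy for illustration, not one of the paper's actual Gestalt measures.

```python
import numpy as np

def compactness(mask):
    """Isoperimetric compactness of a boolean segment mask: 1.0 for a
    perfect disc, lower for elongated or ragged shapes. A stand-in for
    a figural-goodness score, not the paper's exact measures."""
    area = mask.sum()
    padded = np.pad(mask, 1)
    # 4-connected boundary length: count mask pixels with a background neighbour.
    boundary = 0
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        shifted = np.roll(padded, (dy, dx), axis=(0, 1))[1:-1, 1:-1]
        boundary += (mask & ~shifted).sum()
    return 4 * np.pi * area / boundary ** 2

# A compact square segment vs. a thin, line-like one.
square = np.zeros((20, 20), dtype=bool); square[5:15, 5:15] = True
ragged = np.zeros((20, 20), dtype=bool); ragged[10, :] = True
```

Ranking candidate segments by a score like this, and keeping only the high-scoring ones, is the spirit of the selection step described in the abstract.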
International Conference on Pattern Recognition | 2010
Gert Kootstra; Niklas Bergström; Danica Kragic
For the interpretation of a visual scene, it is important for a robotic system to pay attention to the objects in the scene and segment them from their background. We focus on the segmentation of previously unseen objects in unknown scenes. The attention model therefore needs to be bottom-up and context-free. In this paper, we propose the use of symmetry, one of the Gestalt principles for figure-ground segregation, to guide the robot's attention. We show that our symmetry-saliency model outperforms the contrast-saliency model proposed in (Itti et al., 1998). The symmetry model performs better in finding the objects of interest and selects a fixation point closer to the center of the object. Moreover, the objects are better segmented from the background when the initial points are selected on the basis of symmetry.
International Conference on Robotics and Automation | 2008
Gert Kootstra; J Ypma; B. de Boer
Object recognition is a challenging problem for artificial systems. This is especially true for objects that are placed in cluttered and uncontrolled environments. To address this problem, we discuss an active approach to object recognition. Instead of passively observing objects, we use a robot to actively explore the objects. This enables the system to learn objects from different viewpoints and to actively select viewpoints for optimal recognition. Active vision furthermore simplifies the segmentation of the object from its background. As the basis for object recognition we use the Scale Invariant Feature Transform (SIFT). SIFT has been a successful method for image representation. However, a known drawback of SIFT is that the computational complexity of the algorithm increases with the number of keypoints. We discuss a growing-when-required (GWR) network for efficient clustering of the keypoints. The results show successful learning of 3D objects in real-world environments. The active approach is successful in separating the object from its cluttered background, and the active selection of viewpoints further increases the performance. Moreover, the GWR network strongly reduces the number of keypoints.
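The growing-when-required idea can be sketched in a heavily simplified form: insert a new node whenever the best-matching node is not "active" enough for the current sample, otherwise move the best match toward the sample. The full GWR network also maintains edges between nodes and per-node firing counters, which are omitted here; the threshold and learning rate below are arbitrary choices for the sketch.

```python
import numpy as np

def grow_when_required(samples, activity_threshold=0.5, lr=0.1):
    """Heavily simplified GWR-style clustering (sketch only; the full
    network also tracks edges and firing counters). Grows a new node when
    the best-matching node is not active enough for the current sample."""
    nodes = [np.array(samples[0], dtype=float)]
    for x in samples[1:]:
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - n) for n in nodes]
        best = int(np.argmin(dists))
        activity = np.exp(-dists[best])     # close match -> activity near 1
        if activity < activity_threshold:
            nodes.append(x.copy())          # grow: sample poorly represented
        else:
            nodes[best] += lr * (x - nodes[best])  # adapt the winner
    return nodes

# Two well-separated clusters of toy 2-D "descriptors" end up as ~2 nodes,
# i.e. many keypoints compressed to a handful of prototypes.
rng = np.random.default_rng(0)
data = [rng.normal(0.0, 0.05, 2) for _ in range(20)] + \
       [rng.normal(5.0, 0.05, 2) for _ in range(20)]
nodes = grow_when_required(data)
```

Applied to SIFT descriptors (128-D instead of 2-D), this is how a network of prototypes can stand in for the raw keypoint set and cut matching cost.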
International Conference on Advanced Robotics | 2011
Gert Kootstra; Arne Bilberg; Danica Kragic
In this paper, we present a novel tactile-array sensor for use in robotic grippers based on flexible piezoresistive rubber. We start by describing the physical principles of piezoresistive materials, and continue by outlining how to build a flexible tactile-sensor array using conductive thread electrodes. A real-time acquisition system scans the data from the array, which is then further processed. We validate the properties of the sensor in an application that classifies a number of household objects while performing a palpation procedure with a robotic gripper. Based on the haptic feedback, we classify various rigid and deformable objects. We represent the array of tactile information as a time series of features and use this as the input for a k-nearest-neighbors classifier. Dynamic time warping is used to calculate the distances between different time series. The results from our novel tactile sensor are compared to results obtained from an experimental setup using a Weiss Robotics tactile sensor with similar characteristics. We conclude by exemplifying how the results of the classification can be used in different robotic applications.
Robotics and Autonomous Systems | 2009
Gert Kootstra; Bart de Boer
Monte-Carlo localization uses particle filtering to estimate the position of the robot. The method is known to suffer from the loss of potential position hypotheses when there is ambiguity in the environment. Since many indoor environments are highly symmetric, this premature convergence is problematic for indoor robot navigation, yet it is rarely studied in the context of particle filters. We introduce a number of so-called niching methods from genetic algorithms and implement them in a particle filter for Monte-Carlo localization. The experiments show a significant improvement in the diversity-maintaining performance of the particle filter.
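One niching scheme from genetic algorithms, "clearing", translates naturally to the resampling step: within each niche (a radius around a good particle) only the best particle keeps its weight, so resampling cannot collapse all particles onto one of several symmetric pose hypotheses. The sketch below is one possible such scheme, not necessarily the variants evaluated in the paper; the niche radius is an invented parameter.

```python
import numpy as np

def clearing_resample(particles, weights, radius=1.0, rng=None):
    """Niching ('clearing') applied before resampling: inside each
    radius-sized niche only the best particle keeps its weight, which
    preserves multiple pose hypotheses under ambiguity. Illustrative
    sketch of one niching scheme, not the paper's exact method."""
    rng = rng or np.random.default_rng()
    order = np.argsort(weights)[::-1]            # visit best particles first
    cleared = np.zeros_like(weights)
    winners = []
    for i in order:
        if all(np.linalg.norm(particles[i] - particles[j]) > radius
               for j in winners):
            winners.append(i)
            cleared[i] = weights[i]              # niche winner keeps its weight
    cleared /= cleared.sum()
    idx = rng.choice(len(particles), size=len(particles), p=cleared)
    return particles[idx]

# Two symmetric pose hypotheses, as in a symmetric corridor: after clearing,
# every resampled particle sits on one of the two niche winners, so neither
# hypothesis is lost.
particles = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 0.0], [10.1, 0.0]])
weights = np.array([0.4, 0.3, 0.2, 0.1])
new = clearing_resample(particles, weights, radius=1.0)
```

Plain multinomial resampling with these weights would, over a few iterations, tend to concentrate all particles around the heavier mode at the origin; clearing keeps the weaker hypothesis alive until observations disambiguate the poses.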