
Publications

Featured research published by Niklas Bergström.


International Conference on Robotics and Automation | 2011

Mind the gap - robotic grasping under incomplete observation

Jeannette Bohg; Matthew Johnson-Roberson; Beatriz León; Javier Felip; Xavi Gratal; Niklas Bergström; Danica Kragic; Antonio Morales

We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned.
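A common way to close such knowledge gaps for tabletop objects is to mirror the observed partial point cloud about an estimated symmetry plane. The sketch below illustrates only that mirroring step; the plane here is given rather than estimated, so it is a simplification of any full shape-prediction pipeline, not the paper's exact method.

```python
import numpy as np

def complete_by_mirroring(points, plane_point, plane_normal):
    """Reflect observed 3D points about a symmetry plane to fill in
    the unobserved back side of an object (a common shape-completion
    heuristic for partially observed tabletop scenes)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to the plane
    mirrored = points - 2.0 * d[:, None] * n
    return np.vstack([points, mirrored])

# Example: the visible front half of a unit sphere (x >= 0),
# completed by mirroring about the x = 0 plane through its center.
rng = np.random.default_rng(0)
p = rng.normal(size=(500, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
front = p[p[:, 0] >= 0]
full = complete_by_mirroring(front, np.zeros(3), np.array([1.0, 0.0, 0.0]))
```

Since the plane passes through the sphere's center, every mirrored point stays on the unit sphere, so the completed cloud covers both halves.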


International Conference on Intelligent Robots and Systems | 2008

Modeling of natural human-robot encounters

Niklas Bergström; Takayuki Kanda; Takahiro Miyashita; Hiroshi Ishiguro; Norihiro Hagita

For a person to feel comfortable when approaching a robot, the robot needs to behave in an expected way. In a preliminary experiment, we observed the behavior of people around a robot that was not aware of them. Based on those observations, people were classified into four groups depending on their interest in the robot. People were tracked with a laser-range-finder-based system, and their positions, directions and velocities were estimated. A second classification based on that information was made, and the relation between the two classifications was mapped. Different actions were created so that the robot could react naturally to different human behaviors. In this paper we evaluate three robot behaviors with respect to how natural they appear: one that actively tries to engage people, one that passively indicates that people have been noticed, and a third that makes random gestures. During an experiment, test subjects were instructed to act according to the interest-based groups, and the robot's performance with regard to naturalness was evaluated. Both first- and third-person evaluations made clear that the active and passive behaviors were considered equally natural, while a robot randomly making gestures was considered much less natural.


International Conference on Intelligent Robots and Systems | 2011

Generating object hypotheses in natural scenes through human-robot interaction

Niklas Bergström; Mårten Björkman; Danica Kragic

We propose a method for interactive modeling of objects and object relations based on real-time segmentation of video sequences. In interaction with a human, the robot can perform multi-object segmentation through principled modeling of physical constraints. The key contribution is an efficient multi-labeling framework, that allows object modeling and disambiguation in natural scenes. Object modeling and labeling is done in a real-time segmentation system, to which hypotheses and constraints denoting relations between objects can be added incrementally. Through instructions such as key presses or spoken words, a scene can be segmented in regions corresponding to multiple physical objects. The approach solves some of the difficult problems related to disambiguation of objects merged due to their direct physical contact. Results show that even a limited set of simple interactions with a human operator can substantially improve segmentation results.
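The incremental, interaction-driven part of such a system can be illustrated with a toy union-find structure over segment hypotheses: each user instruction ("these two regions are one object") merges hypotheses. The class and method names below are illustrative, not the paper's actual framework, and the full method additionally handles splitting and real-time multi-label segmentation.

```python
class SegmentHypotheses:
    """Toy model of interactive object labeling: start from an
    over-segmentation and incrementally apply 'same object'
    instructions (e.g., key presses or spoken words) as merges."""
    def __init__(self, n_segments):
        self.parent = list(range(n_segments))

    def find(self, s):
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]  # path halving
            s = self.parent[s]
        return s

    def same_object(self, a, b):
        """User instruction: segments a and b belong to one object."""
        self.parent[self.find(a)] = self.find(b)

    def labels(self):
        return [self.find(s) for s in range(len(self.parent))]

h = SegmentHypotheses(5)   # five initial segment hypotheses
h.same_object(0, 1)        # two instructions merge the five segments
h.same_object(3, 4)        # into three object hypotheses
```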


IEEE-RAS International Conference on Humanoid Robots | 2010

Fast and Automatic Detection and Segmentation of Unknown Objects

Gert Kootstra; Niklas Bergström; Danica Kragic

This paper focuses on the fast and automatic detection and segmentation of unknown objects in unknown environments. Many existing object detection and segmentation methods assume prior knowledge about the object or human interference. However, an autonomous system operating in the real world will often be confronted with previously unseen objects. To solve this problem, we propose a segmentation approach named Automatic Detection And Segmentation (ADAS). For the detection of objects, we use symmetry, one of the Gestalt principles for figure-ground segregation, to detect salient objects in a scene. From the initial seed, the object is segmented by iteratively applying graph cuts. We base the segmentation on both 2D and 3D cues: color, depth, and plane information. Instead of a standard grid-based representation of the image, we use superpixels. Besides being a more natural representation, superpixels greatly reduce the processing time of the graph cuts and provide more noise-robust color and depth information. The results show that both the object-detection and the object-segmentation methods are successful and outperform existing methods.
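The core graph-cut step can be sketched on a tiny superpixel graph: terminal edges carry unary object/background costs, edges between adjacent superpixels carry smoothness costs, and the minimum s-t cut assigns each superpixel a label. The graph sizes and cost values below are hypothetical, and this plain Edmonds-Karp solver only stands in for the optimized graph-cut libraries such systems actually use.

```python
from collections import deque

def max_flow_mincut(cap, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix; returns the
    set of nodes on the source side of the minimum cut."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    def bfs():  # reachable-from-s search in the residual graph
        parent = [-1] * n; parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u; q.append(v)
        return parent
    while True:
        parent = bfs()
        if parent[t] == -1:
            break
        v, bottleneck = t, float('inf')   # bottleneck on the augmenting path
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck; flow[v][u] -= bottleneck
            v = u
    reach = bfs()
    return {v for v in range(n) if reach[v] != -1}

# Tiny example: nodes 0..3 are superpixels, 4 = source (object seed),
# 5 = sink (background). Unary costs encode color/depth agreement
# with the seed; pairwise edges encode smoothness between neighbors.
cap = [[0] * 6 for _ in range(6)]
unary_obj = [8, 7, 1, 1]   # hypothetical object likelihoods
unary_bg  = [1, 1, 8, 8]   # hypothetical background likelihoods
for i in range(4):
    cap[4][i] = unary_obj[i]
    cap[i][5] = unary_bg[i]
for a, b in [(0, 1), (1, 2), (2, 3)]:   # chain of adjacent superpixels
    cap[a][b] = cap[b][a] = 2
object_side = max_flow_mincut(cap, 4, 5) - {4}
```

With these costs, the cut separates superpixels 0 and 1 (object) from 2 and 3 (background), at a total cut cost of 6.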


International Conference on Pattern Recognition | 2010

Using Symmetry to Select Fixation Points for Segmentation

Gert Kootstra; Niklas Bergström; Danica Kragic

For the interpretation of a visual scene, it is important for a robotic system to pay attention to the objects in the scene and segment them from their background. We focus on the segmentation of previously unseen objects in unknown scenes. The attention model therefore needs to be bottom-up and context-free. In this paper, we propose the use of symmetry, one of the Gestalt principles for figure-ground segregation, to guide the robot's attention. We show that our symmetry-saliency model outperforms the contrast-saliency model proposed by Itti et al. (1998). The symmetry model performs better in finding the objects of interest and selects a fixation point closer to the center of the object. Moreover, the objects are better segmented from the background when the initial points are selected on the basis of symmetry.
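The intuition behind symmetry saliency can be shown with a heavily simplified single-scale, horizontal-only score: a pixel is salient when mirrored gradient magnitudes on its two sides co-occur. This is only a toy illustration of the idea, not the multi-scale isotropic model the paper actually evaluates.

```python
import numpy as np

def symmetry_saliency(img, radius):
    """Toy mirror-symmetry score: for each pixel, sum the products of
    horizontal gradient magnitudes at mirrored offsets. Symmetric edge
    pairs (e.g., the two sides of an object) score high; uniform
    regions score zero because their gradients vanish."""
    g = np.abs(np.gradient(img.astype(float), axis=1))
    h, w = img.shape
    sal = np.zeros((h, w))
    for y in range(h):
        for x in range(radius, w - radius):
            left = g[y, x - radius:x]
            right = g[y, x + 1:x + radius + 1][::-1]  # mirrored offsets
            sal[y, x] = np.sum(left * right)
    return sal

# A bright square centered on column 10: the saliency peak (the
# selected fixation point) lands on the object's symmetry axis.
img = np.zeros((9, 21))
img[:, 8:13] = 1.0
sal = symmetry_saliency(img, radius=3)
fixation = np.unravel_index(sal.argmax(), sal.shape)
```

Contrast saliency, by comparison, peaks at the object's edges; selecting fixations on the symmetry axis is what drives the better segmentations reported above.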


International Conference on Computer Vision Systems | 2009

Integration of Visual Cues for Robotic Grasping

Niklas Bergström; Jeannette Bohg; Danica Kragic

In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We are integrating two methods that are advantageous either in predicting how to grasp an object or where to apply a grasp. The first one reconstructs a wire frame object model through curve matching. Elementary grasping actions can be associated to parts of this model. The second method predicts grasping points in a 2D contour image of an object. By integrating the information from the two approaches, we can generate a sparse set of full grasp configurations that are of a good quality. We demonstrate our approach integrated in a vision system for complex shaped objects as well as in cluttered scenes.
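One simple way to picture the integration step is as a consistency filter: keep only those model-derived grasp configurations whose contact point lies near a grasping point predicted from the 2D contour. The fusion rule, function name, and numbers below are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def fuse_grasp_hypotheses(model_grasps, contour_points, max_dist):
    """Keep wire-frame-derived grasp configurations whose contact
    point lies within max_dist of a contour-predicted grasping point
    (an illustrative fusion rule).
    model_grasps: list of (contact_xy, approach_angle) tuples."""
    kept = []
    for contact, angle in model_grasps:
        d = np.linalg.norm(contour_points - np.asarray(contact), axis=1)
        if d.min() <= max_dist:
            kept.append((contact, angle))
    return kept

grasps = [((0.0, 0.0), 0.0), ((5.0, 5.0), 1.57)]   # hypothetical candidates
contour = np.array([[0.1, -0.1], [4.0, 0.0]])      # predicted grasp points
good = fuse_grasp_hypotheses(grasps, contour, max_dist=0.5)
```

Only the first candidate survives: it sits within 0.5 units of a contour-predicted point, while the second is too far from both.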


International Conference on Computer Vision Systems | 2011

Scene understanding through autonomous interactive perception

Niklas Bergström; Carl Henrik Ek; Mårten Björkman; Danica Kragic

We propose a framework for detecting, extracting and modeling objects in natural scenes from multi-modal data. The framework is iterative, exploiting different hypotheses in a complementary manner. We employ it in realistic scenarios based on visual appearance and depth information. Using a robotic manipulator that interacts with the scene, object hypotheses generated from appearance information are confirmed through pushing. Each generated hypothesis feeds into the subsequent one, continuously refining the predictions about the scene. We show results that demonstrate the synergistic effect of applying multiple hypotheses to real-world scene understanding. The method is efficient and performs in real time.
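The push-based confirmation step rests on a simple observation: if a hypothesized segment really is one rigid object, all of its tracked points should move together when pushed. The check below is a toy translation-only version of that idea; the threshold and function name are assumptions.

```python
import numpy as np

def hypothesis_confirmed(before, after, tol=0.05):
    """Treat a segmentation hypothesis as a single rigid object if,
    after a push, every tracked point in the region moved by (nearly)
    the same displacement. A large spread in displacements suggests
    the region actually covers two objects."""
    disp = after - before
    spread = np.linalg.norm(disp - disp.mean(axis=0), axis=1).max()
    return spread <= tol

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
one_object = pts + np.array([0.3, 0.1])    # the whole region moved together
two_objects = pts.copy()
two_objects[:2] += np.array([0.3, 0.1])    # only half of the region moved
```

In the first case the hypothesis is confirmed; in the second, the split motion pattern would trigger a refined hypothesis with two objects.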


International Conference on Intelligent Robots and Systems | 2011

Representing actions with kernels

Guoliang Luo; Niklas Bergström; Carl Henrik Ek; Danica Kragic

A long-standing research goal is to create robots capable of interacting with humans in dynamic environments. To realize this, a robot needs to understand and interpret the underlying meaning and intentions of a human action through a model of its sensory data. The visual domain provides a rich description of the environment, and data is readily available in most systems through inexpensive cameras. However, such data is very high-dimensional and extremely redundant, making modeling challenging.


Computer Vision and Image Understanding | 2014

Detecting, segmenting and tracking unknown objects using multi-label MRF inference

Mårten Björkman; Niklas Bergström; Danica Kragic

This article presents a unified framework for detecting, segmenting and tracking unknown objects in everyday scenes, allowing for inspection of object hypotheses during interaction over time. A heterogeneous scene representation is proposed, with background regions modeled as a combination of planar surfaces and uniform clutter, and foreground objects as 3D ellipsoids. Recent energy minimization methods based on loopy belief propagation, tree-reweighted message passing and graph cuts are studied for the purpose of multi-object segmentation and benchmarked in terms of segmentation quality, as well as computational speed and how easily the methods can be adapted for parallel processing. One conclusion is that the choice of energy minimization method is less important than the way scenes are modeled. Proximities are more valuable for segmentation than similarity in colors, while the benefit of 3D information is limited. It is also shown through practical experiments that, with implementations on GPUs, multi-object segmentation and tracking using state-of-the-art MRF inference methods is feasible, despite the computational costs typically associated with such methods.
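All of the benchmarked methods minimize an energy of the same general form, unary data terms plus a Potts smoothness prior. As a much simpler stand-in for BP, TRW or graph cuts, iterated conditional modes (ICM) on a small grid makes that energy concrete; the grid size and costs are hypothetical.

```python
import numpy as np

def icm(unary, n_iters=10, lam=1.0):
    """Iterated conditional modes on a 4-connected grid minimizing
    E = sum_p unary[p, l_p] + lam * sum_{pq} [l_p != l_q]
    (Potts pairwise terms). Greedy and local, but the energy has the
    same form as in multi-label MRF segmentation."""
    h, w, L = unary.shape
    labels = unary.argmin(axis=2)          # initialize from data terms
    for _ in range(n_iters):
        for y in range(h):
            for x in range(w):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        costs += lam * (np.arange(L) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels

# Noisy two-label problem: left half prefers label 0, right half
# label 1, plus one flipped interior pixel that the smoothness
# term should correct.
h, w = 6, 8
truth = np.zeros((h, w), dtype=int)
truth[:, w // 2:] = 1
unary = np.zeros((h, w, 2))
unary[..., 0] = np.where(truth == 0, 0.2, 1.0)
unary[..., 1] = np.where(truth == 1, 0.2, 1.0)
unary[2, 2] = [1.0, 0.2]        # noise: this pixel prefers the wrong label
labels = icm(unary, lam=0.5)
```

The noisy pixel is outvoted by its four neighbors and flips back, recovering the true labeling.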


Sensors | 2016

Applying High-Speed Vision Sensing to an Industrial Robot for High-Performance Position Regulation under Uncertainties

Shouren Huang; Niklas Bergström; Yuji Yamakawa; Taku Senoo; Masatoshi Ishikawa

It is traditionally difficult to implement fast and accurate position regulation on an industrial robot in the presence of uncertainties. The uncertain factors can be attributed either to the industrial robot itself (e.g., a mismatch of dynamics, mechanical defects such as backlash, etc.) or to the external environment (e.g., calibration errors, misalignment or perturbations of a workpiece, etc.). This paper proposes a systematic approach to implement high-performance position regulation under uncertainties on a general industrial robot (referred to as the main robot) with minimal or no manual teaching. The method is based on a coarse-to-fine strategy that involves configuring an add-on module for the main robot’s end effector. The add-on module consists of a 1000 Hz vision sensor and a high-speed actuator to compensate for accumulated uncertainties. The main robot only focuses on fast and coarse motion, with its trajectories automatically planned by image information from a static low-cost camera. Fast and accurate peg-and-hole alignment in one dimension was implemented as an application scenario by using a commercial parallel-link robot and an add-on compensation module with one degree of freedom (DoF). Experimental results yielded an almost 100% success rate for fast peg-in-hole manipulation (with regulation accuracy at about 0.1 mm) when the workpiece was randomly placed.
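The division of labor in the coarse-to-fine strategy can be sketched in one dimension: the main robot takes large steps toward a roughly known (miscalibrated) goal, then a fast compensation stage, standing in for the 1000 Hz vision sensor and high-speed actuator, servos out the residual error. All gains, step sizes and tick counts below are illustrative, not the paper's values.

```python
def coarse_to_fine_regulation(target, robot_step=0.5, comp_gain=0.8,
                              coarse_ticks=20, fine_ticks=30):
    """Toy 1-D coarse-to-fine position regulation: a slow main robot
    moves toward a miscalibrated goal in large steps, then a fast
    add-on stage corrects the residual with a proportional law."""
    main_pos = 0.0
    rough_goal = target + 0.3          # calibration error in the planned goal
    for _ in range(coarse_ticks):      # coarse phase: big, imprecise steps
        err = rough_goal - main_pos
        main_pos += max(-robot_step, min(robot_step, err))
    comp = 0.0                         # fine phase: high-rate compensation
    for _ in range(fine_ticks):
        err = target - (main_pos + comp)   # error seen by the vision sensor
        comp += comp_gain * err
    return main_pos + comp

tip = coarse_to_fine_regulation(target=3.0)
```

The coarse phase overshoots to 3.3 because of the injected calibration error; the compensation phase shrinks the residual geometrically (by a factor of 0.2 per tick here), leaving the tip essentially on target.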

Collaboration

Niklas Bergström's top co-authors:

- Danica Kragic (Royal Institute of Technology)
- Mårten Björkman (Royal Institute of Technology)
- Alessandro Pieropan (Royal Institute of Technology)
- Carl Henrik Ek (Royal Institute of Technology)
- Gert Kootstra (Royal Institute of Technology)
- Hedvig Kjellström (Royal Institute of Technology)