Jürgen Leitner
Dalle Molle Institute for Artificial Intelligence Research
Publication
Featured research published by Jürgen Leitner.
Science & Engineering Faculty | 2013
Simon Harding; Jürgen Leitner; Jürgen Schmidhuber
Combining domain knowledge about both image processing and machine learning techniques can expand the abilities of Genetic Programming when used for image processing. We successfully demonstrate our new approach on several different problem domains. We show that the approach is fast, scalable and robust. In addition, by virtue of using off-the-shelf image processing libraries we can generate human-readable programs that incorporate sophisticated domain knowledge.
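A minimal sketch of the idea, assuming numpy stand-ins for off-the-shelf image operations (the primitive set and the simple (1+1) evolutionary loop below are illustrative assumptions, not the paper's actual Genetic Programming implementation):

```python
import random
import numpy as np

# Hypothetical primitive set standing in for off-the-shelf image ops
# (the paper draws on real image-processing libraries; these numpy
# versions are simplified stand-ins).
PRIMITIVES = {
    "blur":     lambda img: (img + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)) / 3.0,
    "invert":   lambda img: 1.0 - img,
    "square":   lambda img: img ** 2,
    "sqrt":     lambda img: np.sqrt(np.clip(img, 0, None)),
    "identity": lambda img: img,
}
OPS = list(PRIMITIVES)

def run_program(genome, img):
    """Apply the evolved pipeline of primitives to an image."""
    out = img
    for op in genome:
        out = PRIMITIVES[op](out)
    return out

def fitness(genome, img, target):
    """Negative mean absolute error against a target map (higher is better)."""
    return -np.abs(run_program(genome, img) - target).mean()

def evolve(img, target, length=4, generations=200, seed=0):
    """(1+1) evolutionary strategy over sequences of primitive ops."""
    rng = random.Random(seed)
    best = [rng.choice(OPS) for _ in range(length)]
    best_fit = fitness(best, img, target)
    for _ in range(generations):
        child = list(best)
        child[rng.randrange(length)] = rng.choice(OPS)  # point mutation
        f = fitness(child, img, target)
        if f >= best_fit:
            best, best_fit = child, f
    return best, best_fit

img = np.linspace(0, 1, 64).reshape(8, 8)
target = 1.0 - img  # toy task: learn to invert the image
genome, fit = evolve(img, target)
```

Because the primitives are named library-style operations, the evolved genome is directly readable as a short image-processing program.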
2009 Advanced Technologies for Enhanced Quality of Life | 2009
Jürgen Leitner
This paper reviews the literature related to multi-robot research with a focus on space applications. It starts by examining definitions of multi-robot systems and some of the associated fields of research. An overview of space applications with multiple robots and cooperating multiple robots is presented. The multi-robot cooperation techniques used in theoretical research as well as experiments are reviewed, and the applicability for space applications is investigated.
Science & Engineering Faculty | 2013
Jürgen Leitner; Simon Harding; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber
We present an easy-to-use, modular framework for performing computer vision related tasks in support of cognitive robotics research on the iCub humanoid robot. The aim of this biologically inspired, bottom-up architecture is to facilitate research towards visual perception and cognition processes, especially their influence on robotic object manipulation and environment interaction. The icVision framework described provides capabilities for detecting objects in the 2D image plane and locating them in 3D space, facilitating the creation of a world model.
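The two-stage structure can be sketched as a detect-then-localize pipeline. This is an illustrative sketch only: the class names are assumptions rather than the icVision API, a brightest-pixel search stands in for the learned 2D filters, and a fixed pinhole stereo model stands in for the learned 3D estimator.

```python
import numpy as np

class Detector:
    """2D stage: find the object in a camera image. Here the brightest
    pixel stands in for a learned detection filter."""
    def detect(self, image):
        idx = np.unravel_index(np.argmax(image), image.shape)
        return idx  # (row, col) in the image plane

class Localizer:
    """3D stage: fuse the two image-plane detections into a world
    position. A simple pinhole stereo model with assumed baseline and
    focal length stands in for the learned estimator."""
    def __init__(self, baseline=0.068, focal=200.0):
        self.baseline, self.focal = baseline, focal

    def localize(self, left_uv, right_uv):
        disparity = max(left_uv[1] - right_uv[1], 1e-6)
        z = self.focal * self.baseline / disparity   # depth from disparity
        x = z * left_uv[1] / self.focal
        y = z * left_uv[0] / self.focal
        return np.array([x, y, z])

# Toy stereo pair: one bright blob, shifted between the two views.
left = np.zeros((48, 64)); left[20, 40] = 1.0
right = np.zeros((48, 64)); right[20, 30] = 1.0
det = Detector()
pos = Localizer().localize(det.detect(left), det.detect(right))
```

Keeping the 2D and 3D stages behind separate interfaces is what lets either module be swapped out independently, which is the modularity the framework is built around.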
International Journal of Advanced Robotic Systems | 2012
Jürgen Leitner; Simon Harding; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber
We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not only faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed at arbitrary positions in the robot's workspace, even while the robot is moving its torso, head and eyes.
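The ANN variant amounts to supervised regression from raw sensor readings to 3D coordinates. A minimal sketch, assuming synthetic data in place of the real iCub recordings (the input bundles stand-in pixel coordinates and joint encoder values; the network size and training loop are illustrative choices, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the robot's sensory data: inputs combine the
# object's pixel coordinates in both cameras with joint encoder readings;
# targets are 3D positions. A random smooth mapping replaces real data.
n, n_in, n_out, n_hidden = 512, 10, 3, 32
X = rng.uniform(-1, 1, (n, n_in))
Y = np.tanh(X @ rng.normal(size=(n_in, n_out)))  # unknown sensor-to-position map

# One-hidden-layer tanh network trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
lr, losses = 0.05, []
for _ in range(2000):
    H = np.tanh(X @ W1)        # hidden activations
    P = H @ W2                 # predicted 3D positions
    err = P - Y
    losses.append((err ** 2).sum(axis=1).mean())  # mean squared position error
    gW2 = H.T @ err * (2 / n)                 # gradient w.r.t. output weights
    gH = (err @ W2.T) * (1 - H ** 2)          # backprop through tanh
    gW1 = X.T @ gH * (2 / n)                  # gradient w.r.t. input weights
    W1 -= lr * gW1
    W2 -= lr * gW2
```

Note that nothing in the loop encodes camera parameters or a kinematic model; the mapping is learned end to end from examples, which is the calibration-free property the abstract highlights.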
international conference on development and learning | 2012
Jürgen Leitner; Pramod Chandrashekhariah; Simon Harding; Mikhail Frank; Gabriele Spina; Alexander Förster; Jochen Triesch; Jürgen Schmidhuber
In this work we introduce a technique for a humanoid robot to autonomously learn the representations of objects within its visual environment. Our approach involves an attention mechanism in association with feature based segmentation that explores the environment and provides object samples for training. These samples are learned for further object identification using Cartesian Genetic Programming (CGP). The learned identification is able to provide robust and fast segmentation of the objects, without using features. We showcase our system and its performance on the iCub humanoid robot.
international conference on informatics in control automation and robotics | 2014
Jürgen Leitner; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber
We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles - other objects detected in the visual stream - while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore we show that this system can be used both in autonomous and tele-operation scenarios.
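The reactive reach-with-avoidance behaviour can be sketched with an artificial potential field: attraction toward the target, repulsion from detected obstacles, re-evaluated every control cycle. This is a generic illustrative stand-in, not the paper's actual controller, and all gains and radii below are arbitrary assumptions.

```python
import numpy as np

def step(pos, target, obstacles, gain=0.2, rep_gain=0.05, rep_radius=0.5):
    """One reactive control step: attract toward the target, repel from
    any detected obstacle within rep_radius. Because it is re-evaluated
    every cycle, moving an obstacle changes the trajectory on the fly."""
    force = gain * (target - pos)
    for obs in obstacles:
        d = pos - obs
        dist = np.linalg.norm(d)
        if 1e-9 < dist < rep_radius:
            # Repulsion grows as the obstacle gets closer, vanishes at the radius.
            force += rep_gain * (d / dist) * (1.0 / dist - 1.0 / rep_radius)
    return pos + force

pos = np.array([0.0, 0.0, 0.0])
target = np.array([1.0, 0.0, 0.2])
obstacles = [np.array([0.5, 0.1, 0.1])]   # sits near the direct path
path = [pos]
for _ in range(100):
    pos = step(pos, target, obstacles)
    path.append(pos)
```

The end effector curves around the obstacle rather than following the straight line, and updating the entries of `obstacles` mid-loop would bend the remaining trajectory accordingly.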
computer vision and pattern recognition | 2017
Fangyi Zhang; Jürgen Leitner; Michael Milford; Peter Corke
This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuomotor policies (modular networks) where each module is trained independently. Benefiting from weighted losses, the fine-tuning method significantly improves the performance of the policies for a robotic planar reaching task.
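The core mechanism, a weighted sum of an end-to-end task loss and per-module losses used to fine-tune independently pretrained modules jointly, can be sketched with toy linear modules. Everything below (module shapes, the linear form, the weights) is an illustrative assumption, not the paper's network or training setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy modular policy: a "perception" module A maps inputs to a scene
# representation, a "control" module B maps that to motor commands.
n, d_in, d_mid, d_out = 256, 8, 4, 2
X = rng.normal(size=(n, d_in))
A_true = rng.normal(size=(d_in, d_mid))
B_true = rng.normal(size=(d_mid, d_out))
S = X @ A_true            # intermediate (per-module) supervision
Y = S @ B_true            # end-to-end targets

# Modules "pretrained" independently and imperfectly: noisy copies.
A = A_true + 0.3 * rng.normal(size=A_true.shape)
B = B_true + 0.3 * rng.normal(size=B_true.shape)

w_task, w_mid = 1.0, 0.1  # weighting between end-to-end and module losses
lr, losses = 0.01, []
for _ in range(500):
    Sh = X @ A
    Yh = Sh @ B
    e_task = Yh - Y
    e_mid = Sh - S
    loss = (w_task * (e_task ** 2).sum(axis=1).mean()
            + w_mid * (e_mid ** 2).sum(axis=1).mean())
    losses.append(loss)
    # Gradients of the weighted combined loss w.r.t. both modules.
    gA = (w_task * X.T @ (e_task @ B.T) + w_mid * X.T @ e_mid) * (2 / n)
    gB = w_task * Sh.T @ e_task * (2 / n)
    A -= lr * gA
    B -= lr * gB
```

The `w_mid` term keeps the perception module anchored to its original supervision while `w_task` lets the end-to-end error flow through both modules at once, which is the trade-off the weighting controls.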
international conference on robotics and automation | 2016
William Chamberlain; Jürgen Leitner; Tom Drummond; Peter Corke
Robotic vision is limited by line of sight and on-board camera capabilities. Robots can acquire video or images from remote cameras, but processing additional data has a computational burden. This paper applies the Distributed Robotic Vision Service, DRVS, to robot path planning using data outside the robot's line of sight. DRVS implements a distributed visual object detection service that distributes the computation to remote camera nodes with processing capabilities. Robots request task-specific object detection from DRVS by specifying a geographic region of interest and object type. The remote camera nodes perform the visual processing and send the high-level object information to the robot. Additionally, DRVS relieves robots of sensor discovery by dynamically distributing object detection requests to remote camera nodes. Tested on two different indoor path-planning tasks, DRVS showed a dramatic reduction in mobile robot compute load and wireless network utilization.
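The request/dispatch pattern described above can be sketched as follows. The class and field names are assumptions for illustration, not the DRVS API, and canned detection lists stand in for each node's on-board vision pipeline.

```python
from dataclasses import dataclass

@dataclass
class DetectionRequest:
    region: tuple      # (x_min, y_min, x_max, y_max) in world coordinates
    object_type: str

@dataclass
class CameraNode:
    name: str
    coverage: tuple    # same (x_min, y_min, x_max, y_max) convention
    detections: list   # (object_type, x, y) tuples this node would report

    def covers(self, region):
        """Does this node's field of view overlap the requested region?"""
        ax0, ay0, ax1, ay1 = self.coverage
        bx0, by0, bx1, by1 = region
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

    def detect(self, req):
        # A real node would run its vision pipeline here; we just
        # filter a canned detection list by type and region.
        x0, y0, x1, y1 = req.region
        return [d for d in self.detections
                if d[0] == req.object_type and x0 <= d[1] <= x1 and y0 <= d[2] <= y1]

def dispatch(request, nodes):
    """Send the request only to nodes whose coverage overlaps the ROI;
    the robot receives high-level object info, never raw images."""
    results = []
    for node in nodes:
        if node.covers(request.region):
            results.extend(node.detect(request))
    return results

nodes = [
    CameraNode("hall-cam", (0, 0, 5, 5), [("person", 2, 3), ("chair", 1, 1)]),
    CameraNode("lab-cam", (5, 0, 10, 5), [("person", 7, 2)]),
]
req = DetectionRequest(region=(0, 0, 4, 4), object_type="person")
found = dispatch(req, nodes)   # only hall-cam overlaps the region
```

The robot never enumerates cameras itself: the dispatcher's coverage test is what relieves it of sensor discovery, and only compact `(type, x, y)` tuples cross the network.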
The International Journal of Robotics Research | 2018
Niko Sünderhauf; Oliver Brock; Walter J. Scheirer; Raia Hadsell; Dieter Fox; Jürgen Leitner; Ben Upcroft; Pieter Abbeel; Wolfram Burgard; Michael Milford; Peter Corke
The application of deep learning in robotics leads to very specific problems and research questions that are typically not addressed by the computer vision and machine learning communities. In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning. We explain the need for better evaluation metrics, highlight the importance and unique challenges of deep robotic learning in simulation, and explore the spectrum between purely data-driven and model-driven approaches. We hope this paper provides a motivating overview of important research directions to overcome the current limitations, and helps to fulfill the promising potential of deep learning in robotics.
international conference on robotics and automation | 2017
Jürgen Leitner; Adam W. Tow; Niko Sünderhauf; Jake E. Dean; Joseph W. Durham; Matthew Cooper; Markus Eich; Christopher Lehnert; Ruben Mangels; Christopher McCool; Peter T. Kujala; Lachlan Nicholson; Trung Pham; James Sergeant; Liao Wu; Fangyi Zhang; Ben Upcroft; Peter Corke
Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic picking: the ACRV Picking Benchmark. Designed to be reproducible, it consists of a set of 42 common objects, a widely available shelf, and exact guidelines for object arrangement using stencils. A well-defined evaluation protocol enables the comparison of complete robotic systems — including perception and manipulation — instead of sub-systems only. Our paper also describes and reports results achieved by an open baseline system based on a Baxter robot.