Publications


Featured research published by Jürgen Leitner.


Science & Engineering Faculty | 2013

Cartesian Genetic Programming for Image Processing

Simon Harding; Jürgen Leitner; Jürgen Schmidhuber

Combining domain knowledge about both image processing and machine learning techniques can expand the abilities of Genetic Programming when used for image processing. We successfully demonstrate our new approach on several different problem domains. We show that the approach is fast, scalable and robust. In addition, by virtue of using off-the-shelf image processing libraries we can generate human-readable programs that incorporate sophisticated domain knowledge.
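
The evolved programs are graphs whose nodes call standard image-processing primitives. As a rough illustration of that representation, here is a minimal, hand-written (not evolved) program over a small OpenCV-based function set; the genome layout and function names are illustrative stand-ins, not the paper's implementation:

```python
# Minimal sketch of a CGP-style image-processing program: each node applies
# an off-the-shelf OpenCV primitive to the output of an earlier node.
# The genome below is hand-picked for illustration, not evolved.
import cv2
import numpy as np

# Node function set: off-the-shelf, human-readable image operations.
FUNCTIONS = {
    "blur":   lambda img: cv2.GaussianBlur(img, (5, 5), 0),
    "edges":  lambda img: cv2.Canny(img, 100, 200),
    "dilate": lambda img: cv2.dilate(img, np.ones((3, 3), np.uint8)),
    "thresh": lambda img: cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)[1],
}

# Genome: list of (function_name, input_index); index 0 is the input image.
genome = [("blur", 0), ("edges", 1), ("dilate", 2)]

def run_program(genome, image):
    """Evaluate the node graph; return the last node's output."""
    outputs = [image]
    for func_name, src in genome:
        outputs.append(FUNCTIONS[func_name](outputs[src]))
    return outputs[-1]

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in input
    result = run_program(genome, img)
    print(result.shape, result.dtype)
```

Because every node is a named library call, the evolved genome can be printed back as a short, human-readable script, which is the property the abstract highlights.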


2009 Advanced Technologies for Enhanced Quality of Life | 2009

Multi-robot Cooperation in Space: A Survey

Jürgen Leitner

This paper reviews the literature on multi-robot research with a focus on space applications. It starts by examining definitions of multi-robot systems and some of the fields of research within them. An overview of space applications with multiple robots and cooperating multiple robots is presented. The multi-robot cooperation techniques used in theoretical research as well as in experiments are reviewed, and their applicability to space applications is investigated.


Science & Engineering Faculty | 2013

An Integrated, Modular Framework for Computer Vision and Cognitive Robotics Research (icVision)

Jürgen Leitner; Simon Harding; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber

We present an easy-to-use, modular framework for performing computer vision related tasks in support of cognitive robotics research on the iCub humanoid robot. The aim of this biologically inspired, bottom-up architecture is to facilitate research towards visual perception and cognition processes, especially their influence on robotic object manipulation and environment interaction. The icVision framework described provides capabilities for detecting objects in the 2D image plane and for locating those objects in 3D space, facilitating the creation of a world model.
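
The modular split the abstract describes, 2D detection feeding 3D localization feeding a world model, can be pictured as two pluggable components behind a shared interface. A minimal sketch follows; all class and method names are hypothetical stand-ins, not the actual icVision API:

```python
# Minimal sketch of a modular detect-then-localize pipeline feeding a world
# model, in the spirit of icVision. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Detection2D:
    label: str
    u: float  # pixel column
    v: float  # pixel row

@dataclass
class WorldModel:
    objects: dict = field(default_factory=dict)  # label -> (x, y, z)

class DetectorModule:
    def detect(self, image) -> list[Detection2D]:
        """Stand-in 2D detector; a real module would run a learned filter."""
        return [Detection2D("red_ball", 120.0, 96.0)]

class LocaliserModule:
    def localise(self, det: Detection2D) -> tuple[float, float, float]:
        """Stand-in 3D localiser; a real module would use stereo geometry."""
        return (0.3, -0.1, 0.2)  # metres in the robot's reference frame

def update_world(image, detector, localiser, world):
    for det in detector.detect(image):
        world.objects[det.label] = localiser.localise(det)

world = WorldModel()
update_world(None, DetectorModule(), LocaliserModule(), world)
print(world.objects)
```

Keeping detection and localization behind separate interfaces is what lets individual modules be swapped out without touching the rest of the pipeline.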


International Journal of Advanced Robotic Systems | 2012

Learning Spatial Object Localization from Vision on a Humanoid Robot

Jürgen Leitner; Simon Harding; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber

We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not just faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed at arbitrary positions in the robot's workspace, even while the robot is moving its torso, head and eyes.
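
At its core, the learned localizer is a regression from stereo pixel coordinates plus joint encoder readings to a 3D position. A minimal sketch of the ANN variant on synthetic data (the paper trains on samples gathered on the real iCub; the feature layout, network size, and data below are illustrative assumptions):

```python
# Minimal sketch of the ANN variant: regress (x, y, z) from stereo pixel
# coordinates and joint encoder readings. Data here is synthetic; the paper
# trains on samples gathered on the real iCub.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
# Inputs: (u, v) in the left image, (u, v) in the right image, plus three
# neck/eye encoder readings -> 7 features in total (an assumed layout).
X = rng.uniform(-1.0, 1.0, size=(n, 7))
# Stand-in ground truth: a smooth function of the inputs.
y = np.stack([
    X[:, 0] - X[:, 2],            # crude disparity as a depth proxy
    X[:, 1] + 0.1 * X[:, 4],
    0.5 * X[:, 0] * X[:, 5],
], axis=1)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])

err = np.abs(model.predict(X[1500:]) - y[1500:]).mean()
print(f"mean absolute error on held-out samples: {err:.3f}")
```

Because the regressor consumes raw pixel and encoder values directly, no explicit camera calibration or kinematic model enters the pipeline, which is the point the abstract stresses.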


international conference on development and learning | 2012

Autonomous learning of robust visual object detection and identification on a humanoid

Jürgen Leitner; Pramod Chandrashekhariah; Simon Harding; Mikhail Frank; Gabriele Spina; Alexander Förster; Jochen Triesch; Jürgen Schmidhuber

In this work we introduce a technique for a humanoid robot to autonomously learn the representations of objects within its visual environment. Our approach involves an attention mechanism in association with feature-based segmentation that explores the environment and provides object samples for training. These samples are then used to learn object identification with Cartesian Genetic Programming (CGP). The learned identification provides robust and fast segmentation of the objects, without relying on features. We showcase our system and its performance on the iCub humanoid robot.
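
The pipeline amounts to a collect-then-train loop: attention proposes image regions, segmentation crops object samples from them, and the samples feed the CGP-based identifier. A minimal sketch with stand-in components (all functions below are placeholders, not the paper's modules):

```python
# Minimal sketch of the autonomous learning loop: an attention mechanism
# proposes image regions, segmentation crops object samples, and the
# collected crops are used to train an identifier. All parts are stand-ins.
import numpy as np

def attention(image):
    """Stand-in saliency: yield candidate regions as (x, y, w, h)."""
    yield (10, 10, 32, 32)

def segment(image, region):
    """Stand-in feature-based segmentation: crop the proposed region."""
    x, y, w, h = region
    return image[y:y + h, x:x + w]

def collect_samples(images):
    samples = []
    for img in images:
        for region in attention(img):
            samples.append(segment(img, region))
    return samples

if __name__ == "__main__":
    imgs = [np.zeros((64, 64), dtype=np.uint8)]       # stand-in camera frames
    samples = collect_samples(imgs)
    print(len(samples), "training samples collected")
    # classifier = train_cgp(samples)  # CGP-based identifier, as in the paper
```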


international conference on informatics in control automation and robotics | 2014

Reactive reaching and grasping on a humanoid: Towards closing the action-perception loop on the iCub

Jürgen Leitner; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber

We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles - other objects detected in the visual stream - while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on-the-fly based on the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore, we show that this system can be used in both autonomous and tele-operation scenarios.
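
The on-the-fly adaptation can be pictured as a reactive loop that attracts the end-effector towards the target while repelling it from obstacles re-detected in every frame. The sketch below is a Cartesian potential-field analogy, not the paper's actual controller; gains and thresholds are arbitrary:

```python
# Minimal sketch of reactive reaching: attract the end-effector towards the
# target while repelling it from obstacles seen in the visual stream. The
# real system controls the iCub's joints; this is a Cartesian analogy.
import numpy as np

def reactive_step(ee, target, obstacles, gain=0.1, repulse=0.02):
    """One velocity update for the end-effector position `ee`."""
    step = gain * (target - ee)                   # attraction to the goal
    for obs in obstacles:
        d = ee - obs
        dist = np.linalg.norm(d)
        if dist < 0.15:                           # only react when close
            step += repulse * d / (dist ** 2 + 1e-6)
    return ee + step

ee = np.array([0.0, 0.0, 0.0])
target = np.array([0.4, 0.1, 0.2])
obstacles = [np.array([0.2, 0.05, 0.1])]          # re-detected every frame

for _ in range(100):
    ee = reactive_step(ee, target, obstacles)     # obstacles may move between steps
print("final end-effector position:", ee.round(3))
```

Because the obstacle list is rebuilt from the visual stream on every iteration, a moved obstacle changes the commanded step immediately, which is the closed action-perception loop the title refers to.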


computer vision and pattern recognition | 2017

Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination

Fangyi Zhang; Jürgen Leitner; Michael Milford; Peter Corke

This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuomotor policies (modular networks) where each module is trained independently. Benefiting from weighted losses, the fine-tuning method significantly improves the performance of the policies for a robotic planar reaching task.
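
The fine-tuning combines each module's loss into a single weighted objective that is minimized end-to-end across module boundaries. A minimal PyTorch sketch of that idea; the weights, network sizes, and data are placeholders, not the paper's values:

```python
# Minimal sketch of end-to-end fine-tuning with weighted losses: the combined
# objective mixes the perception module's loss and the full policy's loss.
# Weights, shapes, and architectures are placeholders, not the paper's values.
import torch
import torch.nn as nn

perception = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))
control = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(
    list(perception.parameters()) + list(control.parameters()), lr=1e-4)

w_perc, w_task = 0.5, 1.0   # loss weights (placeholder values)
mse = nn.MSELoss()

def finetune_step(obs, scene_gt, action_gt):
    """One joint update; both modules were pretrained independently."""
    scene = perception(obs)             # e.g. estimated target pose
    action = control(scene)             # e.g. joint velocity command
    loss = w_perc * mse(scene, scene_gt) + w_task * mse(action, action_gt)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

obs = torch.randn(8, 64)                # synthetic batch
print(finetune_step(obs, torch.randn(8, 3), torch.randn(8, 2)))
```

The weighting keeps the perception module anchored to its original task while the gradient from the task loss flows through it, so joint fine-tuning does not erase what independent training learned.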


international conference on robotics and automation | 2016

A distributed robotic vision service

William Chamberlain; Jürgen Leitner; Tom Drummond; Peter Corke

Robotic vision is limited by line of sight and on-board camera capabilities. Robots can acquire video or images from remote cameras, but processing the additional data carries a computational burden. This paper applies the Distributed Robotic Vision Service, DRVS, to robot path planning using data outside the robot's line of sight. DRVS implements a distributed visual object detection service that distributes the computation to remote camera nodes with processing capabilities. Robots request task-specific object detection from DRVS by specifying a geographic region of interest and an object type. The remote camera nodes perform the visual processing and send the high-level object information to the robot. Additionally, DRVS relieves robots of sensor discovery by dynamically distributing object detection requests to remote camera nodes. Tested on two different indoor path planning tasks, DRVS showed a dramatic reduction in mobile robot compute load and wireless network utilization.
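
The interaction reduces to a small message exchange: the robot posts a region of interest and an object type, and a camera node covering that region replies with high-level detections rather than raw images. The sketch below invents a JSON message format for illustration; the paper does not prescribe this wire format:

```python
# Minimal sketch of a DRVS-style exchange: the robot requests task-specific
# detections for a geographic region, and a remote camera node covering the
# region replies with high-level object information only. Message fields and
# function names here are hypothetical, not the paper's protocol.
import json

def robot_request(region, object_type):
    """Build a detection request for cameras covering `region`."""
    return json.dumps({"region": region, "object_type": object_type})

def camera_node_handle(request_json):
    """Remote node runs detection locally and returns object summaries."""
    req = json.loads(request_json)
    detections = [{"type": req["object_type"], "x": 3.2, "y": 1.1}]  # stand-in
    return json.dumps({"region": req["region"], "objects": detections})

req = robot_request(region={"x": 0, "y": 0, "w": 10, "h": 10},
                    object_type="person")
print(camera_node_handle(req))
```

Because only object summaries cross the network, both the robot's compute load and the wireless traffic shrink, which matches the reductions the abstract reports.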


The International Journal of Robotics Research | 2018

The limits and potentials of deep learning for robotics

Niko Sünderhauf; Oliver Brock; Walter J. Scheirer; Raia Hadsell; Dieter Fox; Jürgen Leitner; Ben Upcroft; Pieter Abbeel; Wolfram Burgard; Michael Milford; Peter Corke

The application of deep learning in robotics leads to very specific problems and research questions that are typically not addressed by the computer vision and machine learning communities. In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning. We explain the need for better evaluation metrics, highlight the importance and unique challenges of deep robotic learning in simulation, and explore the spectrum between purely data-driven and model-driven approaches. We hope this paper provides a motivating overview of important research directions to overcome the current limitations, and helps to fulfil the promising potential of deep learning in robotics.


international conference on robotics and automation | 2017

The ACRV picking benchmark: A robotic shelf picking benchmark to foster reproducible research

Jürgen Leitner; Adam W. Tow; Niko Sünderhauf; Jake E. Dean; Joseph W. Durham; Matthew Cooper; Markus Eich; Christopher Lehnert; Ruben Mangels; Christopher McCool; Peter T. Kujala; Lachlan Nicholson; Trung Pham; James Sergeant; Liao Wu; Fangyi Zhang; Ben Upcroft; Peter Corke

Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic picking: the ACRV Picking Benchmark. Designed to be reproducible, it consists of a set of 42 common objects, a widely available shelf, and exact guidelines for object arrangement using stencils. A well-defined evaluation protocol enables the comparison of complete robotic systems — including perception and manipulation — instead of sub-systems only. Our paper also describes and reports results achieved by an open baseline system based on a Baxter robot.
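
Reproducibility ultimately hinges on every team scoring runs identically against the same object manifest. The scorer below is purely illustrative of that idea; the actual ACRV point scheme is the one defined in the paper's protocol, not reproduced here:

```python
# Purely illustrative scorer for a shelf-picking run: compare the items a
# system reported as picked against the ground-truth shelf manifest. The
# actual ACRV point scheme is defined in the paper, not reproduced here.
def score_run(manifest, picked):
    """Count correct picks and penalise picks of items not on the shelf."""
    manifest = set(manifest)
    correct = sum(1 for item in picked if item in manifest)
    wrong = len(picked) - correct
    return correct - wrong

manifest = ["crayons", "duct_tape", "tennis_balls"]         # stencil-placed
print(score_run(manifest, picked=["crayons", "stapler"]))   # -> 0
```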

Collaboration


Jürgen Leitner's top co-authors and their affiliations.

Top Co-Authors

Peter Corke | Queensland University of Technology
Jürgen Schmidhuber | Dalle Molle Institute for Artificial Intelligence Research
Alexander Förster | Dalle Molle Institute for Artificial Intelligence Research
Simon Harding | Dalle Molle Institute for Artificial Intelligence Research
Mikhail Frank | Dalle Molle Institute for Artificial Intelligence Research
Adam W. Tow | Queensland University of Technology
Michael Milford | Queensland University of Technology
Niko Sünderhauf | Queensland University of Technology
Ben Upcroft | Queensland University of Technology
Fangyi Zhang | Queensland University of Technology