Publications


Featured research published by Mikhail Frank.


Science & Engineering Faculty | 2013

An Integrated, Modular Framework for Computer Vision and Cognitive Robotics Research (icVision)

Jürgen Leitner; Simon Harding; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber

We present an easy-to-use, modular framework for performing computer-vision-related tasks in support of cognitive robotics research on the iCub humanoid robot. The aim of this biologically inspired, bottom-up architecture is to facilitate research into visual perception and cognition processes, especially their influence on robotic object manipulation and environment interaction. The icVision framework described here provides capabilities for detecting objects in the 2D image plane and localising them in 3D space to facilitate the creation of a world model.
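
The abstract describes a modular pipeline: detect objects in the 2D image plane, localise them in 3D, and feed the result into a world model. The sketch below illustrates how such modules might be wired together; the class names, interfaces and the stereo fusion step are illustrative assumptions, not the actual icVision API (which is a C++/YARP system).

```python
# Illustrative sketch of a detect -> localise -> world-model pipeline.
# All class and method names are hypothetical stand-ins for icVision modules.
from dataclasses import dataclass, field


@dataclass
class Detection:
    label: str
    u: float          # image-plane x coordinate (pixels)
    v: float          # image-plane y coordinate (pixels)


@dataclass
class WorldModel:
    objects: dict = field(default_factory=dict)

    def update(self, label, position_3d):
        self.objects[label] = position_3d     # keep the latest 3D estimate per object


class Detector:
    """2D detection module: finds known objects in a single camera image."""
    def detect(self, image) -> list[Detection]:
        raise NotImplementedError              # e.g. a learned per-object filter


class Localiser:
    """3D localisation module: fuses left/right detections into a 3D position."""
    def localise(self, det_left: Detection, det_right: Detection, joints):
        raise NotImplementedError              # e.g. triangulation or a learned mapping


def process_frame(left_img, right_img, joints, detector, localiser, world):
    """One pass of the pipeline: detect in both views, localise, update the model."""
    left = {d.label: d for d in detector.detect(left_img)}
    right = {d.label: d for d in detector.detect(right_img)}
    for label in left.keys() & right.keys():
        world.update(label, localiser.localise(left[label], right[label], joints))
    return world
```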


International Journal of Advanced Robotic Systems | 2012

Learning Spatial Object Localization from Vision on a Humanoid Robot

Jürgen Leitner; Simon Harding; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber

We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robots kinematic model is needed. We find that ANN and GP are not just faster and have lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach is localizing objects robustly, when placed in the robots workspace at arbitrary positions, even while the robot is moving its torso, head and eyes.
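
As a rough illustration of the learned mapping described above (stereo image coordinates plus joint encoder readings in, a 3D position estimate out), the sketch below fits a small feed-forward network on synthetic data. The feature layout, network size and the use of scikit-learn's MLPRegressor are assumptions for illustration; the paper's ANN and GP estimators were trained on real iCub data.

```python
# Minimal sketch: learn (u_left, v_left, u_right, v_right, joint angles) -> (x, y, z).
# Synthetic data stands in for the robot logs used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n = 2000
pixels = rng.uniform(0, 320, size=(n, 4))      # stereo image coordinates (assumed resolution)
joints = rng.uniform(-0.5, 0.5, size=(n, 6))   # head/eye encoder readings (assumed layout)
X = np.hstack([pixels, joints])

# A made-up smooth ground-truth mapping, just so the regressor has something to learn.
W = rng.normal(scale=0.01, size=(X.shape[1], 3))
y = np.tanh(X @ W) + 0.005 * rng.normal(size=(n, 3))   # toy 3D positions in metres

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])

err = np.linalg.norm(model.predict(X[1500:]) - y[1500:], axis=1)
print(f"mean 3D error on held-out samples: {err.mean() * 100:.1f} cm")
```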


IEEE-RAS International Conference on Humanoid Robots | 2011

AutoIncSFA and vision-based developmental learning for humanoid robots

Varun Raj Kompella; Leo Pape; Jonathan Masci; Mikhail Frank; Jürgen Schmidhuber

Humanoids have to deal with novel, unsupervised high-dimensional visual input streams. Our new method AutoIncSFA learns to compactly represent such complex sensory input sequences by very few meaningful features corresponding to high-level spatio-temporal abstractions, such as "a person is approaching me" or "an object was toppled". We explain the advantages of AutoIncSFA over previous related methods, and show that the compact codes greatly facilitate the task of a reinforcement learner driving the humanoid to actively explore its world like a playing baby, maximizing intrinsic curiosity reward signals for reaching states corresponding to previously unpredicted AutoIncSFA features.
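
AutoIncSFA combines an incremental autoencoder with incremental Slow Feature Analysis. The sketch below shows only the core SFA idea in its simplest batch, linear form (whiten the signal, then keep the directions whose temporal derivative has the smallest variance); it is not the incremental, hierarchical variant from the paper.

```python
# Batch linear SFA sketch: find projections of a time series that vary as slowly as possible.
import numpy as np

def linear_sfa(X, n_features=2):
    """X: (T, d) time series. Returns the n_features slowest linear features."""
    Xc = X - X.mean(axis=0)

    # Whiten the centred signal so every direction has unit variance.
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    Z = Xc @ (evecs / np.sqrt(evals))

    # Slow directions = smallest-eigenvalue directions of the derivative covariance.
    dZ = np.diff(Z, axis=0)
    d_evals, d_evecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ d_evecs[:, :n_features]          # eigh sorts eigenvalues ascending

# Toy input: a slow sine mixed with faster, noisier channels.
t = np.linspace(0, 20 * np.pi, 4000)
X = np.column_stack([np.sin(0.05 * t) + 0.1 * np.random.randn(t.size),
                     np.sin(3.0 * t),
                     np.random.randn(t.size)])
slow = linear_sfa(X, n_features=1)
print("variance of the slowest feature's derivative:", np.diff(slow, axis=0).var())
```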


International Conference on Development and Learning | 2012

Autonomous learning of robust visual object detection and identification on a humanoid

Jürgen Leitner; Pramod Chandrashekhariah; Simon Harding; Mikhail Frank; Gabriele Spina; Alexander Förster; Jochen Triesch; Jürgen Schmidhuber

In this work we introduce a technique for a humanoid robot to autonomously learn representations of objects in its visual environment. Our approach combines an attention mechanism with feature-based segmentation that explores the environment and provides object samples for training. These samples are then learned for object identification using Cartesian Genetic Programming (CGP). The learned identification provides robust and fast segmentation of the objects, without using features. We showcase our system and its performance on the iCub humanoid robot.
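
Cartesian Genetic Programming evolves small image-processing programs. The sketch below illustrates only the evaluation side of such a scheme: apply a candidate filter program to an image and score the resulting mask against a labelled object sample. The primitive operations, the chain-style "genome" and the IoU fitness are simplified assumptions, not the actual CGP setup used in the paper.

```python
# Sketch: score a candidate filter program (a chain of primitive image ops)
# against a ground-truth object mask, as a CGP fitness function might.
import numpy as np

# A few primitive operations a genome could compose (illustrative subset only).
PRIMITIVES = {
    "invert":    lambda img: 1.0 - img,
    "square":    lambda img: img ** 2,
    "shift_up":  lambda img: np.roll(img, -1, axis=0),
    "threshold": lambda img: (img > 0.5).astype(float),
}

def run_program(program, image):
    """Apply a linear chain of primitives (a stand-in for a full CGP graph)."""
    out = image
    for op in program:
        out = PRIMITIVES[op](out)
    return out

def fitness(program, image, target_mask):
    """Overlap (IoU) between the program's output mask and the labelled object mask."""
    pred = run_program(program, image) > 0.5
    truth = target_mask > 0.5
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union else 0.0

# Toy example: a bright square on a dark background as the "object sample".
image = np.zeros((64, 64)); image[20:40, 20:40] = 0.9
mask = np.zeros((64, 64));  mask[20:40, 20:40] = 1.0
candidate = ["square", "threshold"]
print("IoU fitness of candidate program:", fitness(candidate, image, mask))
```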


International Conference on Informatics in Control, Automation and Robotics | 2014

Reactive reaching and grasping on a humanoid: Towards closing the action-perception loop on the iCub

Jürgen Leitner; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber

We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles (other objects detected in the visual stream) while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore, we show that this system can be used both in autonomous and tele-operation scenarios.
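
The reactive behaviour described above (reach for the target while steering away from obstacles, re-planning every cycle from fresh visual feedback) can be illustrated with a simple attractor/repeller velocity loop in operational space. The gains, the influence radius and the direct Cartesian-velocity interface are assumptions for illustration; the actual system mediates its commands through the MoBeE framework on the iCub rather than a plain potential field.

```python
# Sketch of a reactive reaching loop: attract the hand towards the target,
# repel it from detected obstacles, and recompute the command every cycle.
import numpy as np

def reach_step(hand, target, obstacles, k_att=1.0, k_rep=1e-4, influence=0.15, v_max=0.5):
    """Return a Cartesian velocity command for one control cycle (potential-field style)."""
    v = k_att * (target - hand)                       # attraction towards the target
    for obs in obstacles:
        diff = hand - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                      # only nearby obstacles repel
            v += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    speed = np.linalg.norm(v)
    if speed > v_max:                                 # bound the commanded speed per cycle
        v *= v_max / speed
    return v

# Toy run: static target, one obstacle near the straight-line path.
hand = np.array([0.0, 0.0, 0.0])
target = np.array([0.3, 0.1, 0.2])
obstacles = [np.array([0.15, 0.02, 0.12])]            # in the real loop these come from vision

dt = 0.05
for _ in range(400):
    hand = hand + dt * reach_step(hand, target, obstacles)
    if np.linalg.norm(target - hand) < 0.005:
        break
print("final hand position:", np.round(hand, 3))
```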


International Conference on Informatics in Control, Automation and Robotics | 2012

The Modular Behavioral Environment for Humanoids and other Robots (MoBeE)

Mikhail Frank; Jürgen Leitner; Marijn F. Stollenga; Simon Harding; Alexander Förster; Jürgen Schmidhuber


Science & Engineering Faculty | 2012

Transferring spatial perception between robots operating in a shared workspace

Jürgen Leitner; Simon Harding; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber


Science & Engineering Faculty | 2012

icVision: A modular vision system for cognitive robotics research

Jürgen Leitner; Simon Harding; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber


Biologically Inspired Cognitive Architectures | 2013

Learning Visual Object Detection and Localisation Using icVision

Jürgen Leitner; Simon Harding; Pramod Chandrashekhariah; Mikhail Frank; Alexander Förster; Jochen Triesch; Jürgen Schmidhuber


BICA | 2012

An Integrated, Modular Framework for Computer Vision and Cognitive Robotics Research (icVision)

Jürgen Leitner; Simon Harding; Mikhail Frank; Alexander Förster; Jürgen Schmidhuber

Collaboration


Dive into Mikhail Frank's collaborations.

Top Co-Authors

Jürgen Schmidhuber
Dalle Molle Institute for Artificial Intelligence Research

Alexander Förster
Dalle Molle Institute for Artificial Intelligence Research

Jürgen Leitner
Dalle Molle Institute for Artificial Intelligence Research

Simon Harding
Dalle Molle Institute for Artificial Intelligence Research

Marijn F. Stollenga
Dalle Molle Institute for Artificial Intelligence Research

Jochen Triesch
Frankfurt Institute for Advanced Studies

Pramod Chandrashekhariah
Frankfurt Institute for Advanced Studies

Alexander Uwe Foerster
Dalle Molle Institute for Artificial Intelligence Research

Jonathan Masci
Dalle Molle Institute for Artificial Intelligence Research