Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Varun Raj Kompella is active.

Publication


Featured research published by Varun Raj Kompella.


Neural Computation | 2012

Incremental slow feature analysis: Adaptive low-complexity slow feature updating from high-dimensional input streams

Varun Raj Kompella; Matthew D. Luciw; Jürgen Schmidhuber

We introduce here an incremental version of slow feature analysis (IncSFA), combining candid covariance-free incremental principal components analysis (CCIPCA) and covariance-free incremental minor components analysis (CIMCA). IncSFA's feature updating complexity is linear with respect to the input dimensionality, while batch SFA's (BSFA) updating complexity is cubic. IncSFA does not need to store, or even compute, any covariance matrices. The drawback to IncSFA is data efficiency: it does not use each data point as effectively as BSFA. But IncSFA allows SFA to be tractably applied, with just a few parameters, directly on high-dimensional input streams (e.g., visual input of an autonomous agent), while BSFA has to resort to hierarchical receptive-field-based architectures when the input dimension is too high. Further, IncSFA's updates have simple Hebbian and anti-Hebbian forms, extending the biological plausibility of SFA. Experimental results show IncSFA learns the same set of features as BSFA and can handle a few cases where BSFA fails.
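The CCIPCA half of the method can be sketched in a few lines. This is a minimal illustrative implementation of the standard candid covariance-free update, not the authors' code; the function name and the simplified (non-amnesic) learning rate are our assumptions:

```python
import numpy as np

def ccipca_update(V, u, n):
    """One CCIPCA step: refine the principal-component estimates in V
    (one per row) from a single sample u, without ever forming a
    covariance matrix. n is the number of samples seen so far (n >= 1)."""
    u = u.astype(float).copy()
    eta = 1.0 / n  # plain averaging; the full method adds an amnesic term
    for i in range(V.shape[0]):
        v = V[i]
        # Hebbian-like update: pull v toward u, weighted by their agreement
        V[i] = (1 - eta) * v + eta * u * (u @ v) / (np.linalg.norm(v) + 1e-12)
        vn = V[i] / (np.linalg.norm(V[i]) + 1e-12)
        u -= (u @ vn) * vn  # deflation: later rows only see the residual
    return V
```

In IncSFA, CCIPCA serves to whiten the input stream; the slow features themselves come from the companion minor-components rule (CIMCA).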


Frontiers in Neurorobotics | 2013

An intrinsic value system for developing multiple invariant representations with incremental slowness learning

Matthew D. Luciw; Varun Raj Kompella; Sohrob Kazerounian; Jürgen Schmidhuber

Curiosity-Driven Modular Incremental Slow Feature Analysis (CD-MISFA) is a recently introduced model of intrinsically motivated invariance learning. Artificial curiosity enables the orderly formation of multiple stable sensory representations to simplify the agent's complex sensory input. We discuss computational properties of the CD-MISFA model itself as well as neurophysiological analogs fulfilling similar functional roles. CD-MISFA combines (1) unsupervised representation learning through the slowness principle, (2) generation of an intrinsic reward signal through the learning progress of the developing features, and (3) balancing of exploration and exploitation to maximize learning progress and quickly learn multiple feature sets for perceptual simplification. Experimental results on synthetic observations and on the iCub robot show that the intrinsic value system is essential for representation learning. Representations are typically explored and learned in order from least to most costly, as predicted by the theory of curiosity.
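An intrinsic reward derived from learning progress can be sketched as the recent decrease in a module's estimation error. The class name and the exponential-smoothing scheme below are our assumptions for illustration, not the CD-MISFA implementation:

```python
class LearningProgressReward:
    """Intrinsic reward as learning progress: the drop in a learning
    module's estimation error relative to its running average.
    Positive reward means the module is still improving."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.avg_error = None

    def reward(self, current_error):
        if self.avg_error is None:          # first observation: no baseline yet
            self.avg_error = current_error
            return 0.0
        progress = self.avg_error - current_error  # positive when error drops
        self.avg_error = (self.smoothing * self.avg_error
                          + (1 - self.smoothing) * current_error)
        return progress
```

A curious agent maximizing this signal naturally abandons modules whose error has plateaued, which is what drives the least-costly-first learning order described above.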


International Joint Conference on Artificial Intelligence | 2011

Incremental slow feature analysis

Varun Raj Kompella; Matthew D. Luciw; Jürgen Schmidhuber

The Slow Feature Analysis (SFA) unsupervised learning framework extracts features representing the underlying causes of the changes within a temporally coherent high-dimensional raw sensory input signal. We develop the first online version of SFA, via a combination of incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, online SFA adapts along with non-stationary environments, which makes it a generally useful unsupervised preprocessor for autonomous learning agents. We compare online SFA to batch SFA in several experiments and show that it indeed learns without a teacher to encode the input stream by informative slow features representing meaningful abstract environmental properties. We extend online SFA to deep networks in hierarchical fashion, and use them to successfully extract abstract object position information from high-dimensional video.
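The batch-SFA baseline against which the online version is compared can be sketched compactly: whiten the signal, then take the directions in which the whitened signal changes most slowly (the minor components of the derivative covariance). This is an illustrative sketch under those standard definitions, not the paper's code:

```python
import numpy as np

def batch_sfa(X, n_features):
    """Minimal batch SFA: X is a (T x D) signal in time order.
    Returns a (D x n_features) matrix mapping centered inputs
    to the slowest-varying linear features."""
    X = X - X.mean(axis=0)
    C = X.T @ X / len(X)
    d, E = np.linalg.eigh(C)
    W = E / np.sqrt(d + 1e-12)      # whitening matrix: Cov(X @ W) = I
    Z = X @ W
    Zdot = np.diff(Z, axis=0)       # temporal derivative approximation
    Cdot = Zdot.T @ Zdot / len(Zdot)
    d2, E2 = np.linalg.eigh(Cdot)   # eigh sorts eigenvalues ascending
    P = E2[:, :n_features]          # minor components = slowest directions
    return W @ P

# Usage: y = (X - X.mean(axis=0)) @ batch_sfa(X, 2)
```

Replacing the two batch eigendecompositions with incremental PCA and incremental MCA updates is exactly the substitution that yields the online version described in the abstract.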


Artificial Intelligence | 2017

Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots

Varun Raj Kompella; Marijn F. Stollenga; Matthew D. Luciw; Jürgen Schmidhuber

In the absence of external guidance, how can a robot learn to map the many raw pixels of high-dimensional visual inputs to useful action sequences? We propose here Continual Curiosity-driven Skill Acquisition (CCSA). CCSA makes robots intrinsically motivated to acquire, store and reuse skills. Previous curiosity-based agents acquired skills by associating intrinsic rewards with world-model improvements, and used reinforcement learning to learn how to get these intrinsic rewards. CCSA also does this, but unlike previous implementations, the world model is a set of compact low-dimensional representations of the streams of high-dimensional visual information, which are learned through incremental slow feature analysis. These representations augment the robot's state space with new information about the environment. We show how this information can have a higher-level (compared to pixels) and useful interpretation, for example, whether or not the robot has grasped a cup in its field of view. After learning a representation, large intrinsic rewards are given to the robot for performing actions that greatly change the feature output, which otherwise tends to change slowly in time. We show empirically what these actions are (e.g., grasping the cup) and how they can be useful as skills. An acquired skill includes both the learned actions and the learned slow feature representation. Skills are stored and reused to generate new observations, enabling continual acquisition of complex skills. We present results of experiments with an iCub humanoid robot that uses CCSA to incrementally acquire skills to topple, grasp and pick-place a cup, driven by its intrinsic motivation from raw pixel vision.


International Conference on Development and Learning | 2012

Autonomous learning of abstractions using Curiosity-Driven Modular Incremental Slow Feature Analysis

Varun Raj Kompella; Matthew D. Luciw; Marijn F. Stollenga; Leo Pape; Jürgen Schmidhuber

To autonomously learn behaviors in complex environments, vision-based agents need to develop useful sensory abstractions from high-dimensional video. We propose a modular, curiosity-driven learning system that autonomously learns multiple abstract representations. The policy to build the library of abstractions is adapted through reinforcement learning, and the corresponding abstractions are learned through incremental slow feature analysis (IncSFA). IncSFA learns each abstraction based on how the inputs change over time, directly from unprocessed visual data. Modularity is induced via a gating system, which also prevents abstraction duplication. The system is driven by a curiosity signal that is based on the learnability of the inputs by the current adaptive module. After learning completes, the result is multiple slow-feature modules serving as distinct behavior-specific abstractions. Experiments with a simulated iCub humanoid robot show how the proposed method effectively learns a set of abstractions from raw, unprocessed video; to our knowledge, this is the first curious learning agent to demonstrate this ability.


IEEE-RAS International Conference on Humanoid Robots | 2011

AutoIncSFA and vision-based developmental learning for humanoid robots

Varun Raj Kompella; Leo Pape; Jonathan Masci; Mikhail Frank; Jürgen Schmidhuber

Humanoids have to deal with novel, unsupervised, high-dimensional visual input streams. Our new method AutoIncSFA learns to compactly represent such complex sensory input sequences by very few meaningful features corresponding to high-level spatio-temporal abstractions, such as "a person is approaching me" or "an object was toppled." We explain the advantages of AutoIncSFA over previous related methods, and show that the compact codes greatly facilitate the task of a reinforcement learner driving the humanoid to actively explore its world like a playing baby, maximizing intrinsic curiosity reward signals for reaching states corresponding to previously unpredicted AutoIncSFA features.


International Symposium on Neural Networks | 2014

Explore to see, learn to perceive, get the actions for free: SKILLABILITY

Varun Raj Kompella; Marijn F. Stollenga; Matthew D. Luciw; Jürgen Schmidhuber

How can a humanoid robot autonomously learn and refine multiple sensorimotor skills as a byproduct of curiosity-driven exploration, upon its high-dimensional unprocessed visual input? We present SKILLABILITY, which makes this possible. It combines the recently introduced Curiosity-Driven Modular Incremental Slow Feature Analysis (Curious Dr. MISFA) with the well-known options framework. Curious Dr. MISFA's objective is to acquire abstractions as quickly as possible. These abstractions map high-dimensional pixel-level vision to a low-dimensional manifold. We find that each learnable abstraction augments the robot's state space (a set of poses) with new information about the environment, for example, whether the robot is grasping a cup. The abstraction is a function on an image, called a slow feature, which can effectively discretize a high-dimensional visual sequence. For example, it maps the sequence of the robot watching its arm as it moves around, grasping randomly, then grasping a cup, and moving around some more while holding the cup, into a step function with two outputs: whether the cup is or is not currently grasped. The new state space includes this grasped/not-grasped information. Each abstraction is coupled with an option. The reward function for the option's policy (learned through Least Squares Policy Iteration) is high for transitions that produce a large change in the step-function-like slow features. This corresponds to finding bottleneck states, which are known to be good subgoals for hierarchical reinforcement learning; in the example, the subgoal corresponds to grasping the cup. The final skill includes both the learned policy and the learned abstraction. SKILLABILITY makes our iCub the first humanoid robot to learn complex skills, such as toppling or grasping an object, from raw high-dimensional video input, driven purely by its intrinsic motivations.
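The reward shaping described in such an option's policy can be sketched in one line; here `phi` is a placeholder for the learned slow-feature abstraction, and the function name is ours:

```python
def feature_change_reward(phi, s, s_next):
    """Intrinsic reward for the transition s -> s_next: the magnitude of
    the change in the slow-feature output. Because the learned feature is
    nearly a step function of the state, this reward concentrates on the
    few bottleneck transitions (e.g., the moment the cup is grasped)."""
    return abs(phi(s_next) - phi(s))
```

A policy maximizing this signal is driven toward the step of the feature, which is why the resulting subgoal coincides with the bottleneck state.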


International Conference on Robotics and Automation | 2011

Detection and avoidance of semi-transparent obstacles using a collective-reward based approach

Varun Raj Kompella; Peter F. Sturm

Most computer- and robot-vision algorithms are designed for opaque objects; non-opaque objects have received far less attention, despite being omnipresent in man-made environments. With the increasing use of such objects, especially those made of glass or plastic, it becomes important to detect this class of objects when building a robot navigation system. Obstacle avoidance is a primary yet challenging task in mobile robot navigation. The main objective of this paper is to present an algorithm to detect and avoid obstacles made of semi-transparent materials, such as plastic or glass. The algorithm makes use of a technique called the collective-reward based approach to detect such objects from single images captured by an uncalibrated camera in a live video stream. Random selection techniques are incorporated to make the algorithm run in real time. A mobile robot then uses the detection results to perform an obstacle-avoidance maneuver. Experiments were conducted on a real robot to test the efficacy of the algorithm.


Neural Computation | 2016

Optimal curiosity-driven modular incremental slow feature analysis

Varun Raj Kompella; Matthew D. Luciw; Marijn F. Stollenga; Juergen Schmidhuber

Consider a self-motivated artificial agent who is exploring a complex environment. Part of the complexity is due to the raw high-dimensional sensory input streams, which the agent needs to make sense of. Such inputs can be compactly encoded through a variety of means; one of these is slow feature analysis (SFA). Slow features encode spatiotemporal regularities, which are information-rich explanatory factors (latent variables) underlying the high-dimensional input streams. In our previous work, we have shown how slow features can be learned incrementally, while the agent explores its world, and modularly, such that different sets of features are learned for different parts of the environment (since a single set of regularities does not explain everything). In what order should the agent explore the different parts of the environment? Following Schmidhuber’s theory of artificial curiosity, the agent should always concentrate on the area where it can learn the easiest-to-learn set of features that it has not already learned. We formalize this learning problem and theoretically show that, using our model, called curiosity-driven modular incremental slow feature analysis, the agent on average will learn slow feature representations in order of increasing learning difficulty, under certain mild conditions. We provide experimental results to support the theoretical analysis.


Simulation of Adaptive Behavior | 2014

An Anti-hebbian Learning Rule to Represent Drive Motivations for Reinforcement Learning

Varun Raj Kompella; Sohrob Kazerounian; Jürgen Schmidhuber

We present a motivational system for an agent undergoing reinforcement learning (RL), which enables it to balance multiple drives, each of which is satiated by different types of stimuli. Inspired by drive reduction theory, it uses Minor Component Analysis (MCA) to model the agent’s internal drive state, and modulates incoming stimuli on the basis of how strongly the stimulus satiates the currently active drive. The agent’s dynamic policy continually changes through least-squares temporal difference updates. It automatically seeks stimuli that first satiate the most active internal drives, then the next most active drives, etc. We prove that our algorithm is stable under certain conditions. Experimental results illustrate its behavior.
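The role of MCA here is to track the direction of least variance in the drive-state data. The paper uses an online anti-Hebbian rule; the batch computation below (function name ours) only illustrates what such a rule converges to:

```python
import numpy as np

def minor_component(X):
    """Return the minor component of data X: the unit direction of least
    variance, i.e., the eigenvector of the sample covariance with the
    smallest eigenvalue. A batch stand-in for the online anti-Hebbian
    MCA update described in the abstract."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(Xc)
    d, E = np.linalg.eigh(C)   # eigenvalues in ascending order
    return E[:, 0]             # eigenvector of the smallest eigenvalue
```

An online rule that estimates this direction sample by sample lets the agent's internal drive model adapt continually as new stimuli arrive.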

Collaboration


Dive into Varun Raj Kompella's collaboration.

Top Co-Authors

Jürgen Schmidhuber, Dalle Molle Institute for Artificial Intelligence Research
Matthew D. Luciw, Dalle Molle Institute for Artificial Intelligence Research
Marijn F. Stollenga, Dalle Molle Institute for Artificial Intelligence Research
Sohrob Kazerounian, Dalle Molle Institute for Artificial Intelligence Research
Jonathan Masci, Dalle Molle Institute for Artificial Intelligence Research
Mikhail Frank, Dalle Molle Institute for Artificial Intelligence Research
Peter F. Sturm, Cincinnati Children's Hospital Medical Center