Publications


Featured research published by Roman Ilin.


IEEE Transactions on Neural Networks | 2008

Beyond Feedforward Models Trained by Backpropagation: A Practical Training Tool for a More Efficient Universal Approximator

Roman Ilin; Robert Kozma; Paul J. Werbos

The cellular simultaneous recurrent neural network (SRN) has been shown to be a more powerful function approximator than the multilayer perceptron (MLP). This means that for some problems the complexity of an MLP would be prohibitively large, while an SRN could realize the desired mapping within acceptable computational constraints. The speed of training of complex recurrent networks is crucial to their successful application. This work improves on previous results by training the network with an extended Kalman filter (EKF). We implemented a generic cellular SRN (CSRN) and applied it to two challenging problems: 2-D maze navigation and a subset of the connectedness problem. The speed of convergence improved by several orders of magnitude over the earlier results in the case of maze navigation, and superior generalization was demonstrated in the case of connectedness. The implications of these improvements are discussed.
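
The EKF approach treats the network weights as the state of a nonlinear system to be estimated, with each training target acting as a noisy measurement of the network output. Below is a minimal sketch of that weight-update step, assuming a generic scalar-output network supplied as a forward function and its weight Jacobian; the function names, noise constants, and single-output restriction are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ekf_train_step(w, P, x, y_target, f, jacobian, R=1e-2, Q=1e-6):
    """One EKF update treating the weight vector w as the state.

    w        : current weights, shape (n,)
    P        : weight covariance estimate, shape (n, n)
    x        : input sample
    y_target : desired scalar output (the "measurement")
    f        : f(w, x) -> scalar network output
    jacobian : jacobian(w, x) -> dy/dw, shape (n,)
    R, Q     : measurement and process noise variances (assumed values)
    """
    H = jacobian(w, x).reshape(1, -1)        # linearize output around w
    y = f(w, x)                              # forward pass
    S = H @ P @ H.T + R                      # innovation variance
    K = (P @ H.T) / S                        # Kalman gain, shape (n, 1)
    w = w + (K * (y_target - y)).ravel()     # correct weights toward target
    P = P - K @ H @ P + Q * np.eye(len(w))   # shrink covariance, add noise
    return w, P

# toy usage with a linear "network" f(w, x) = w . x
f = lambda w, x: w @ x
jac = lambda w, x: x
w, P = np.zeros(3), np.eye(3)
for _ in range(20):
    w, P = ekf_train_step(w, P, np.array([1.0, 0.0, 2.0]), 3.0, f, jac)
```

For a recurrent network the Jacobian comes from backpropagation through time, and the full covariance update dominates the cost, which is why decoupled or reduced-rank EKF variants are common in practice.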


The Open Neuroimaging Journal | 2010

Neurally and mathematically motivated architecture for language and thought.

Leonid I. Perlovsky; Roman Ilin

The neural structures underlying the interaction between thinking and language are unknown. This paper suggests a possible architecture motivated by neural and mathematical considerations. A mathematical requirement of computability imposes significant constraints on possible architectures consistent with brain neural structure and with a wealth of psychological knowledge. How does language interact with cognition? Do we think with words, or is thinking independent of language, with words being just labels for decisions? Why is language learned by the age of 5 or 7, whereas acquiring the knowledge needed to use this language takes a lifetime? This paper discusses hierarchical aspects of language and thought and argues that high-level abstract thinking is impossible without language. We discuss a mathematical technique that can model the joint language-thought architecture while overcoming previously encountered difficulties of computability. This architecture explains a contradiction between the human ability for rational, thoughtful decisions and the irrationality of human thinking revealed by Tversky and Kahneman; a crucial role in this contradiction might be played by language. The proposed model resolves long-standing issues: how the brain learns correct word-object associations, and why animals do not talk and think like people. We propose a role for language emotionality in its interaction with thought. We relate the mathematical model to Humboldt's "firmness" of languages and discuss the possible influence of a language's grammar on its emotionality. Psychological and brain imaging experiments related to the proposed model are discussed, and future theoretical and experimental research is outlined.


International Journal of Natural Computing Research | 2010

Cognitively Inspired Neural Network for Recognition of Situations

Roman Ilin; Leonid I. Perlovsky

The authors present a cognitively inspired mathematical learning framework called Neural Modeling Fields (NMF). They apply it to learning and recognition of situations composed of objects. NMF successfully overcomes the combinatorial complexity of associating subsets of objects with situations and demonstrates fast and reliable convergence. The implications of the current results for building multi-layered intelligent systems are also discussed.
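
NMF sidesteps combinatorial search by letting all candidate models compete for all inputs through soft associations that start deliberately vague and sharpen as the model parameters converge. The following is a minimal sketch of that vague-to-crisp iteration, using Gaussian similarities and a fixed annealing schedule as illustrative stand-ins for the situation-specific similarity measures of the paper.

```python
import numpy as np

def nmf_dynamic_logic(X, n_models, n_iters=50, sigma0=5.0, sigma_min=0.5):
    """Vague-to-crisp model fitting in the style of Neural Modeling
    Fields: soft data-to-model associations under a wide similarity
    kernel, re-estimation of model parameters, then annealing of the
    kernel width. Gaussian models are an illustrative choice.

    X : data matrix, shape (n_samples, n_features)
    """
    n, _ = X.shape
    rng = np.random.default_rng(0)
    means = X[rng.choice(n, n_models, replace=False)]   # initial vague models
    sigma = sigma0
    for _ in range(n_iters):
        # similarity of every sample to every model (unnormalized Gaussian)
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        sim = np.exp(-d2 / (2 * sigma**2))
        # association weights: each sample spread across competing models
        f = sim / sim.sum(axis=1, keepdims=True)
        # re-estimate each model from its weighted share of the data
        means = (f.T @ X) / f.sum(axis=0)[:, None]
        # anneal fuzziness so associations go from vague to crisp
        sigma = max(sigma_min, sigma * 0.9)
    return means, f
```

The annealing of sigma is what separates this dynamic-logic style iteration from plain EM clustering: early vagueness avoids committing to any particular assignment of objects to situations while the models are still poor.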


Neural Networks | 2009

2009 Special Issue: Cross-situational learning of object-word mapping using Neural Modeling Fields

José F. Fontanari; Vadim Tikhanoff; Angelo Cangelosi; Roman Ilin; Leonid I. Perlovsky

The issue of how children learn the meaning of words is fundamental to developmental psychology. Recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have brought this issue to a central place in more applied research fields as well, such as computational linguistics and neural networks. An attractive approach to learning an object-word mapping is so-called cross-situational learning. This learning scenario is based on the intuitive notion that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Here we show how the deterministic Neural Modeling Fields (NMF) categorization mechanism can be used by the learner as an efficient algorithm to infer the correct object-word mapping. To achieve this, we first reduce the original on-line learning problem to a batch learning problem in which the inputs to the NMF mechanism are all possible object-word associations that could be inferred from the cross-situational learning scenario. Since many of those associations are incorrect, they are treated as clutter or noise and discarded automatically by a clutter-detector model included in our NMF implementation. With these two key ingredients, batch learning and clutter detection, the NMF mechanism was able to infer the correct object-word mapping perfectly.
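
As a toy illustration of the batch reduction described above, one can pool every object-word pair that any scene could support and then discard pairs with low co-occurrence support. The threshold rule below is a crude stand-in for the paper's NMF clutter-detector model, and all names and the threshold value are hypothetical.

```python
from collections import Counter
from itertools import product

def cross_situational_batch(scenes, min_support=0.8):
    """Pool all candidate object-word associations across scenes (the
    batch step), then keep only pairs whose co-occurrence rate is high
    enough to be a real mapping (a crude clutter filter).

    scenes : list of (objects, words) tuples, each element a set
    """
    pair_counts = Counter()
    word_counts = Counter()
    for objects, words in scenes:
        word_counts.update(words)
        # batch step: every association this scene could support
        pair_counts.update(product(objects, words))
    # clutter step: object must appear in most of the word's occurrences
    return {
        (obj, word)
        for (obj, word), c in pair_counts.items()
        if c / word_counts[word] >= min_support
    }

# "ball" and "cup" always co-occur with their objects; "DOG" is clutter
scenes = [
    ({"BALL", "DOG"}, {"ball"}),
    ({"BALL", "CUP"}, {"ball", "cup"}),
    ({"CUP", "DOG"}, {"cup"}),
]
print(cross_situational_batch(scenes))  # {('BALL', 'ball'), ('CUP', 'cup')}
```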


IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning | 2007

Efficient Learning in Cellular Simultaneous Recurrent Neural Networks - The Case of Maze Navigation Problem

Roman Ilin; Robert Kozma; Paul J. Werbos

Cellular simultaneous recurrent neural networks (SRNs) show great promise in solving complex function approximation problems. In particular, approximate dynamic programming is an important application area where SRNs have significant potential advantages over other approximation methods. Learning in SRNs, however, has proved to be a notoriously difficult problem, which has prevented their broader use. This paper introduces an extended Kalman filter approach to training SRNs. Using the two-dimensional maze navigation problem as a testbed, we illustrate the operation of the method and demonstrate its benefits in generalization and testing performance.
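
In this testbed the network is trained to approximate the cost-to-go function of dynamic programming over the maze, roughly the number of steps from each free cell to the goal. The sketch below renders that target with a plain Bellman backup; this is a standard formulation given for illustration, not code from the paper, whose exact cost structure may differ.

```python
import numpy as np

def maze_cost_to_go(maze, goal, n_sweeps=100):
    """Bellman backups on a 2-D grid: each free cell's value is one
    step more than the best neighboring value.

    maze : 2-D array, 1 = obstacle, 0 = free cell
    goal : (row, col) of the target cell
    """
    rows, cols = maze.shape
    J = np.full(maze.shape, np.inf)
    J[goal] = 0.0
    for _ in range(n_sweeps):
        for r in range(rows):
            for c in range(cols):
                if maze[r, c] == 1 or (r, c) == goal:
                    continue
                neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                best = min(
                    J[nr, nc]
                    for nr, nc in neighbors
                    if 0 <= nr < rows and 0 <= nc < cols
                )
                J[r, c] = 1.0 + best   # one step plus best neighbor
    return J
```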


International Joint Conference on Neural Networks | 2006

Cellular SRN Trained by Extended Kalman Filter Shows Promise for ADP

Roman Ilin; Robert Kozma; Paul J. Werbos

The cellular simultaneous recurrent neural network has been suggested to be a function approximator more powerful than the MLP, in particular for solving approximate dynamic programming problems. The 2-D maze navigation problem has been considered as a proof-of-concept task. The present work improves on previous results by training the network with an extended Kalman filter (EKF); the original EKF algorithm has been slightly modified. The speed of convergence improved by several orders of magnitude in comparison with the earlier results. The implications of this improvement are discussed.


Advanced Robotics | 2008

Intentional Control for Planetary Rover SRR

Robert Kozma; Terry Huntsberger; Hrand Aghazarian; Edward Tunstel; Roman Ilin; Walter J. Freeman

Intentional behavior is a basic property of intelligence, and it incorporates the cyclic operation of prediction, testing by action, sensing, perceiving, and assimilating the experienced features. Intentional neurodynamic principles are applied to the on-line processing of multisensory inputs and to the generation of dynamic behavior using the SRR (Sample Return Rover) platform at the indoor facility of the Planetary Robotics Laboratory, Jet Propulsion Laboratory. The studied sensory modalities include CMOS camera vision and orientation based on an inertial measurement unit and accelerometer signals. The control architecture employs a biologically inspired dynamic neural network operating on the principle of chaotic neural dynamics, manifesting intentionality in the style of mammalian brains. Learning is based on Hebbian rules coupled with reinforcement. The central issue of this work is to study how the developed control system builds associations between the sensory modalities to achieve robust autonomous action selection. The proposed system builds such associations in a self-organized way and is called Self-Organized Development of Autonomous Adaptive Systems (SODAS). The system operates autonomously, without the need for human intervention, a potentially very beneficial feature in challenging settings such as space exploration of remote planetary environments. The experiments illustrate obstacle avoidance combined with goal-oriented navigation by the SRR robot using SODAS control principles.
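
The abstract does not give the exact learning rule, but a reward-gated Hebbian update of the following general shape conveys what "Hebbian rules coupled with reinforcement" typically means; the specific form, gain, and decay term here are assumptions for illustration only.

```python
import numpy as np

def hebbian_reinforced_update(W, pre, post, reward, lr=0.05, decay=0.01):
    """Reward-gated Hebbian step: co-active pre- and postsynaptic units
    strengthen their connection, scaled (or reversed) by the scalar
    reinforcement signal; a slow decay keeps the weights bounded.

    W      : weight matrix, shape (n_post, n_pre)
    pre    : presynaptic activity, shape (n_pre,)
    post   : postsynaptic activity, shape (n_post,)
    reward : scalar reinforcement in [-1, 1]
    """
    W = W + lr * reward * np.outer(post, pre)   # reward-gated Hebbian term
    W = W - decay * W                           # forgetting term
    return W
```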


IEEE Transactions on Neural Networks | 2012

Unsupervised Learning of Categorical Data With Competing Models

Roman Ilin

This paper considers the unsupervised learning of high-dimensional binary feature vectors representing categorical information. A cognitively inspired framework, referred to as modeling fields theory (MFT), is utilized as the basic methodology. A new MFT-based algorithm, referred to as accelerated maximum a posteriori (MAP), is proposed. Accelerated MAP allows simultaneous learning and selection of the number of models. Its key feature is a steady increase of the regularization penalty, resulting in competition among models. The differences between this approach and other mixture-learning and model-selection methodologies are described. The operation of the algorithm and its parameter selection are discussed. Numerical experiments aimed at finding performance limits are conducted. Performance on real-world data is tested by applying the algorithm to a text categorization problem and to the clustering of Congressional voting data.
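
A minimal sketch of the accelerated-MAP idea, assuming a Bernoulli mixture for the binary feature vectors: an EM-style loop in which a steadily growing penalty is subtracted from each model's effective sample count, so weakly supported models are pruned and the survivors determine the number of models. The penalty schedule, smoothing, and pruning rule are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def accelerated_map(X, k_init=10, n_iters=100, gamma_step=0.5):
    """Bernoulli-mixture learning with a rising penalty gamma that
    forces competition among models: a model whose penalized support
    drops to zero is removed, so the final count is selected online.

    X : binary data matrix, shape (n_samples, n_features)
    """
    rng = np.random.default_rng(0)
    mu = rng.uniform(0.3, 0.7, size=(k_init, X.shape[1]))   # Bernoulli params
    pi = np.full(k_init, 1.0 / k_init)                      # mixture weights
    gamma = 0.0
    for _ in range(n_iters):
        # E-step: responsibilities under the Bernoulli likelihoods
        log_p = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # competition: raise the penalty and prune unsupported models
        gamma += gamma_step
        penalized = np.maximum(r.sum(axis=0) - gamma, 0.0)
        keep = penalized > 0
        if not keep.any():
            break                       # penalty has exhausted all models
        r = r[:, keep]
        r /= r.sum(axis=1, keepdims=True)
        pi = penalized[keep] / penalized[keep].sum()
        # M-step: smoothed Bernoulli parameter updates
        counts = r.sum(axis=0)
        mu = (r.T @ X + 1.0) / (counts[:, None] + 2.0)
    return mu, pi
```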


Neural Networks | 2013

2013 Special Issue: Mirror neurons, language, and embodied cognition

Leonid I. Perlovsky; Roman Ilin

Basic mechanisms of the mind, including cognition and language with their semantic and emotional mechanisms, are modeled using dynamic logic (DL). This cognitively and mathematically motivated model leads to a dual-model hypothesis of language and cognition, which describes their joint emergence from the mirror neuron system.


International Symposium on Neural Networks | 2010

Grounded symbols in the brain

Leonid I. Perlovsky; Roman Ilin

What is the nature of symbols in the human brain-mind? Are they amodal, like logical statements or arithmetic, or are they grounded in perception? We describe a mathematical technique for modeling grounded symbols, based on neural modeling fields (MFT) and dynamic logic (DL), which has overcome the limitations of classical artificial intelligence and connectionist approaches. The paper relates these difficulties to classical logic, discusses why logic could not overcome them, and shows how DL overcomes past limitations. We apply MFT-DL to one aspect of symbolic operations: learning higher-level sign-symbols from lower-level ones. We relate MFT-DL to essential mechanisms of concepts and grounding. Experimental neuroimaging evidence for DL is discussed, as well as future research directions.

Collaboration


Dive into Roman Ilin's collaborations.

Top Co-Authors

Ross Deming
Air Force Research Laboratory

Jun Zhang
University of Michigan

Igor V. Ternovskiy
Air Force Research Laboratory

Paul J. Werbos
National Science Foundation

Alan C. O'Connor
Air Force Research Laboratory