Publications


Featured research published by Christo Panchev.


Robotics and Autonomous Systems | 2004

Towards multimodal neural robot learning

Stefan Wermter; Cornelius Weber; Mark Elshaw; Christo Panchev; Harry R. Erwin; Friedemann Pulvermüller

Learning by multimodal observation of vision and language offers a potentially powerful paradigm for robot learning. Recent experiments have shown that ‘mirror’ neurons are activated when an action is being performed, perceived, or verbally referred to. Different input modalities are processed by distributed cortical neuron ensembles for leg, arm and head actions. In this overview paper we consider this evidence from mirror neurons by integrating motor, vision and language representations in a learning robot.


Neurocomputing | 2004

Spike-timing-dependent synaptic plasticity: from single spikes to spike trains

Christo Panchev; Stefan Wermter

We present a neurobiologically motivated model of a neuron with active dendrites and dynamic synapses, and a training algorithm which builds upon single spike-timing-dependent synaptic plasticity derived from neurophysiological evidence. We show that in the presence of a moderate level of noise, the plasticity rule can be extended from single to multiple pre-synaptic spikes and applied to effectively train a neuron in detecting temporal sequences of spike trains. The trained neuron responds reliably under different regimes and types of noise.
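For readers who want a concrete picture of the pair-based rule this line of work builds on, the sketch below shows a standard exponential-window STDP update in Python. The learning rates, time constants and the simple all-pairs accumulation are illustrative assumptions; they are not the paper's exact multi-spike rule or its active-dendrite dynamics.

```python
import numpy as np

# Minimal pair-based STDP sketch (illustrative constants, not the paper's exact rule).
# A synapse is potentiated when a pre-synaptic spike precedes a post-synaptic one,
# and depressed when the order is reversed, with exponentially decaying windows.

A_PLUS, A_MINUS = 0.01, 0.012     # learning rates for potentiation / depression
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants of the STDP window (ms)

def stdp_weight_change(pre_spikes, post_spikes):
    """Accumulate weight changes over all pre/post spike pairs (times in ms)."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:      # pre before post -> potentiation
                dw += A_PLUS * np.exp(-dt / TAU_PLUS)
            elif dt < 0:    # post before pre -> depression
                dw -= A_MINUS * np.exp(dt / TAU_MINUS)
    return dw

# Example: a pre-synaptic spike train that reliably precedes the post-synaptic spikes
pre = np.array([10.0, 50.0, 90.0])
post = np.array([15.0, 55.0, 95.0])
w = 0.5 + stdp_weight_change(pre, post)   # weight grows for this causal pairing
print(f"updated weight: {w:.4f}")
```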


international conference on artificial neural networks | 2002

Spike-Timing Dependent Competitive Learning of Integrate-and-Fire Neurons with Active Dendrites

Christo Panchev; Stefan Wermter; Huixin Chen

We present a model of an integrate-and-fire neuron with active dendrites and a spike-timing-dependent Hebbian learning rule. The learning algorithm effectively trains the neuron to respond to several types of temporal encoding schemes: temporal code with single spikes, spike bursts and phase coding. The neuron model and learning algorithm are tested on a neural network with a self-organizing map of competitive neurons. The goal of this work is to develop computationally efficient models rather than to approximate real neurons. The approach described in this paper demonstrates the potential advantages of using the processing functionalities of active dendrites as a novel paradigm for computing with networks of artificial spiking neurons.
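As a point of reference for the neuron model named here, the sketch below simulates a plain leaky integrate-and-fire neuron in Python. All parameters are illustrative assumptions, and the active dendrites and competitive self-organizing layer studied in the paper are not reproduced.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (illustrative parameters; the active
# dendrites and competitive SOM layer of the paper are not modelled here).

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Integrate an input current trace; return the membrane trace and spike times."""
    v = v_rest
    v_trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration of the membrane potential
        v += dt * (-(v - v_rest) + i_in) / tau_m
        if v >= v_thresh:          # threshold crossing -> emit a spike and reset
            spikes.append(step * dt)
            v = v_reset
        v_trace.append(v)
    return np.array(v_trace), spikes

# Constant supra-threshold input produces regular firing
v_trace, spike_times = simulate_lif(np.full(200, 1.5))
print("spike times (ms):", spike_times)
```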


Connection Science | 2006

Temporal sequence detection with spiking neurons: towards recognizing robot language instructions

Christo Panchev; Stefan Wermter

We present an approach for the recognition and clustering of spatio-temporal patterns based on networks of spiking neurons with active dendrites and dynamic synapses. We introduce a new model of an integrate-and-fire neuron with active dendrites and dynamic synapses (ADDS) and its synaptic plasticity rule. The neuron employs the dynamics of the synapses and the active properties of the dendrites as an adaptive mechanism for maximizing its response to a specific spatio-temporal distribution of incoming action potentials. The learning algorithm follows recent biological evidence on synaptic plasticity. It goes beyond current computational approaches, which are based only on the relative timing between single pre- and post-synaptic spikes, and implements a functional dependence based on the state of the dendritic and somatic membrane potentials around the pre- and post-synaptic action potentials. The learning algorithm is demonstrated to effectively train the neuron towards a selective response determined by the spatio-temporal pattern of the onsets of input spike trains. The model is used in the implementation of part of a robotic system for natural language instructions. We test the model with a robot whose goal is to recognize and execute language instructions. This work demonstrates the potential of spiking neurons for processing spatio-temporal patterns, and the experiments show that spiking neural networks can be applied to model word-level sequence detectors for robot instructions.
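To illustrate the general idea of selectivity to the onset pattern of input spike trains (not the ADDS model itself), the toy sketch below gives each input channel a fixed synaptic delay and responds only when the input onsets compensate those delays so that the delayed spikes coincide. The delays, coincidence window and threshold are assumptions chosen for the example.

```python
import numpy as np

# Toy temporal-order detector: each input channel carries a synaptic delay, and
# the unit responds only when the onsets of the inputs compensate those delays
# so the delayed spikes arrive together. All constants are illustrative.

DELAYS = np.array([30.0, 20.0, 10.0, 0.0])   # ms, one per input channel
WINDOW = 5.0                                  # coincidence window (ms)
THRESHOLD = 4                                 # coincident inputs required to fire

def responds(onsets):
    """Return True if the delayed onsets fall within one coincidence window."""
    arrival = np.asarray(onsets, dtype=float) + DELAYS
    return (arrival.max() - arrival.min()) <= WINDOW and len(onsets) >= THRESHOLD

print(responds([0.0, 10.0, 20.0, 30.0]))   # matching sequence  -> True
print(responds([30.0, 20.0, 10.0, 0.0]))   # reversed sequence  -> False
```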


international conference on artificial neural networks | 2006

Occlusion, attention and object representations

Neill R. Taylor; Christo Panchev; Matthew Hartley; Stathis Kasderidis; John G. Taylor

Occlusion is currently at the centre of analysis in machine vision. We present an approach that uses attention feedback to an occluded object to obtain its correct recognition. Various simulations are performed using a hierarchical visual attention feedback system based on contrast gain, whose relation to possible hallucinations caused by feedback we also discuss. We then discuss the implications of our results for object representations per se.
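A rough sketch of attention as contrast gain: feedback multiplies the input drive at attended locations before a saturating response. The Gaussian attention field, gain value and nonlinearity below are illustrative assumptions, not the paper's hierarchical model.

```python
import numpy as np

# Sketch of attention as contrast gain: a Gaussian attention field multiplies
# the input drive to units at attended locations before a saturating response.
# The gain value, field width and nonlinearity are illustrative assumptions.

def contrast_gain_response(feature_map, attn_center, gain=2.0, sigma=2.0):
    """Apply a Gaussian attentional gain field to the input drive."""
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    field = np.exp(-((ys - attn_center[0])**2 + (xs - attn_center[1])**2)
                   / (2 * sigma**2))
    drive = feature_map * (1.0 + (gain - 1.0) * field)   # multiplicative gain
    return drive / (1.0 + drive)                          # saturating nonlinearity

# Attend to the centre of a noisy feature map (e.g. an occluded object's location)
fmap = np.random.rand(16, 16)
out = contrast_gain_response(fmap, attn_center=(8, 8))
print(out.shape, out.max())
```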


Hybrid Neural Systems, revised papers from a workshop | 1998

Towards Hybrid Neural Learning Internet Agents

Stefan Wermter; Garen Arevian; Christo Panchev

This chapter explores learning internet agents. In recent years, with the massive increase in the amount of information available on the Internet, a need has arisen to organize and access that data in a meaningful and directed way. Many well-explored techniques from the fields of AI and machine learning have been applied in this context. Special emphasis is placed here on neural network approaches to implementing a learning agent. First, several important approaches are summarized. Then, an approach for neural learning internet agents is presented, one that uses recurrent neural networks to learn to classify a textual stream of information. Experimental results show that a neural network model based on a recurrent plausibility network can act as a scalable, robust and useful news routing agent. The concluding section examines the need for a hybrid integration of various techniques to achieve optimal results in the specified problem domain, in particular exploring the hybrid integration of Preference Moore machines and recurrent networks to extract symbolic knowledge.


Image and Vision Computing | 2009

A hierarchical attention-based neural network architecture, based on human brain guidance, for perception, conceptualisation, action and reasoning

John G. Taylor; Matthew Hartley; Neill R. Taylor; Christo Panchev; Stathis Kasderidis

We present a neural network software architecture, guided by that of the human and, more generally, the primate brain, for the construction of an autonomous cognitive system (which we have named GNOSYS). GNOSYS is designed to attend to stimuli, conceptualise them, learn their predicted reward value and reason about them, so as to attain those stimuli in the environment with the greatest predicted value. We embody this software system in a robot and describe the activities in the various component modules of GNOSYS, as well as the overall results. We briefly compare our system with some others proposed to have cognitive powers, and finish by discussing future developments we propose for our system, as well as expanding on the arguments for and against our approach to creating such a software system.


International Journal of Approximate Reasoning | 2003

Symbolic state transducers and recurrent neural preference machines for text mining

Garen Arevian; Stefan Wermter; Christo Panchev

This paper focuses on symbolic transducers and recurrent neural preference machines to support the task of mining and classifying textual information. These encoding symbolic transducers and learning neural preference machines can be seen as independent agents, each one tackling the same task in a different manner. Systems combining such machines can potentially be more robust as the strengths and weaknesses of the different approaches yield complementary knowledge, wherein each machine models the same information content via different paradigms. An experimental analysis of the performance of these symbolic transducer and neural preference machines is presented. It is demonstrated that each approach can be successfully used for information mining and news classification using the Reuters news corpus. Symbolic transducer machines can be used to manually encode relevant knowledge quickly in a data-driven approach with no training, while trained neural preference machines can give better performance based on additional training.
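The toy Python sketch below illustrates the division of labour described here: a hand-coded keyword transducer and a trained statistical classifier each label a news item, and a simple preference rule combines them. The rules, categories, confidence threshold and the stubbed-out neural classifier are all illustrative assumptions, not the paper's systems.

```python
# Toy sketch of combining a hand-coded symbolic classifier with a trained
# statistical one, in the spirit of transducer/preference-machine hybrids.
# Rules, categories and the confidence threshold are illustrative assumptions.

SYMBOLIC_RULES = {           # keyword -> category, encoded by hand, no training
    "interest rate": "money-fx",
    "crude oil": "crude",
    "wheat": "grain",
}

def symbolic_classify(text):
    """Rule-based labelling: first matching keyword wins, else no opinion."""
    for keyword, category in SYMBOLIC_RULES.items():
        if keyword in text.lower():
            return category
    return None

def neural_classify(text):
    """Stand-in for a trained preference machine: returns (category, confidence).
    A real system would run a recurrent network here; this stub is a placeholder."""
    return "earn", 0.55

def hybrid_classify(text, confidence_threshold=0.8):
    """Prefer the trained model when confident, otherwise fall back to the rules."""
    category, confidence = neural_classify(text)
    if confidence >= confidence_threshold:
        return category
    return symbolic_classify(text) or category

print(hybrid_classify("Crude oil prices rose sharply today"))   # -> "crude"
```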


Cognitive Systems Research | 2002

Hybrid preference machines based on inspiration from neuroscience

Stefan Wermter; Christo Panchev

In the past, a variety of computational problems have been tackled with different connectionist network approaches. However, very little research has been done on a framework which connects neuroscience-inspired models with connectionist models and higher level symbolic processing. In this paper, we outline a preference machine framework which focuses on a hybrid integration of various neural and symbolic techniques in order to address how we may process higher level concepts based on concepts from neuroscience. It is a first hybrid framework which allows a link between spiking neural networks, connectionist preference machines and symbolic finite state machines. Furthermore, we present an example experiment on interpreting a neuroscience-inspired network by using preferences which may be connected to connectionist or symbolic interpretations.


international conference on artificial neural networks | 2007

Robust text classification using a hysteresis-driven extended SRN

Garen Arevian; Christo Panchev

Recurrent Neural Network (RNN) models have been shown to perform well on artificial grammars in sequential classification tasks with long-term time dependencies. However, RNNs have rarely been applied to real-world text classification tasks. This paper presents results on the capabilities of extended two-context-layer SRN models (xRNN) applied to the classification of the Reuters-21578 corpus. The results show that the classifiers handle high levels of noise introduced into the word sequences of titles very robustly, maintaining consistent levels of performance, where noise is defined as the unimportant stopwords found in natural language text. Comparisons are made with SRN and MLP models, as well as with other existing classifiers for the text classification task.
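For intuition, the sketch below runs a forward pass of an Elman-style SRN extended with a second, hysteresis-driven (leaky) context layer over a sequence of word ids. The dimensions, hysteresis constant and random weights are illustrative assumptions rather than the xRNN configuration used in the paper, and no training is shown.

```python
import numpy as np

# Minimal forward pass of an Elman-style SRN extended with a second context
# layer updated by hysteresis (a leaky copy of the hidden state). Dimensions,
# the hysteresis constant and random weights are illustrative assumptions.

rng = np.random.default_rng(0)
VOCAB, HIDDEN, CLASSES = 100, 32, 8
ALPHA = 0.9   # hysteresis: how much of the old slow context is retained

W_in = rng.normal(0, 0.1, (HIDDEN, VOCAB))
W_ctx1 = rng.normal(0, 0.1, (HIDDEN, HIDDEN))   # fast context (previous hidden)
W_ctx2 = rng.normal(0, 0.1, (HIDDEN, HIDDEN))   # slow, hysteresis-driven context
W_out = rng.normal(0, 0.1, (CLASSES, HIDDEN))

def classify(word_ids):
    """Run a word-id sequence through the network and return class scores."""
    ctx1 = np.zeros(HIDDEN)
    ctx2 = np.zeros(HIDDEN)
    for w in word_ids:
        x = np.zeros(VOCAB)
        x[w] = 1.0                               # one-hot word input
        h = np.tanh(W_in @ x + W_ctx1 @ ctx1 + W_ctx2 @ ctx2)
        ctx1 = h                                 # fast context: last hidden state
        ctx2 = ALPHA * ctx2 + (1 - ALPHA) * h    # slow context: leaky integration
    return W_out @ ctx1                          # scores from the final state

print(classify([3, 17, 42, 5]))
```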

Collaboration


Christo Panchev's top co-authors and their affiliations.

Garen Arevian, University of Sunderland
Harry R. Erwin, University of Sunderland
Huixin Chen, University of Sunderland
Kevin Burn, University of Sunderland
Michael P. Oakes, University of Wolverhampton