Publication


Featured research published by Natacha Gueorguieva.


Neurocomputing | 2005

A parallel growing architecture for self-organizing maps with unsupervised learning

Iren Valova; Daniel Szer; Natacha Gueorguieva; Alexandre Buer

Self-organizing maps (SOMs) have become popular for tasks in data visualization, pattern classification, and natural language processing, and can be seen as one of the major concepts in artificial neural networks today. Their general idea is to approximate a high-dimensional and previously unknown input distribution by a lower-dimensional neural network structure, with the goal of modeling the topology of the input space as closely as possible. Classical SOMs read the input values in random but sequential order, one by one, and thus adjust the network structure over space: the network is built up while reading larger and larger parts of the input. In contrast to this approach, we present a SOM that processes the whole input in parallel and organizes itself over time. The main reason for parallel input processing lies in the fact that knowledge can be used to recognize parts of patterns in the input space that have already been learned. This way, networks can be developed that do not reorganize their structure from scratch every time a new set of input vectors is presented, but rather adjust their internal architecture in accordance with previous mappings. One basic application could be modeling of the whole-part relationship through layered architectures.
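As a point of reference for the parallel scheme described above, the classical sequential Kohonen update that it contrasts with can be sketched as follows; the map size, learning rate, and neighborhood width here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kohonen_step(weights, grid, x, lr=0.1, sigma=1.0):
    """One classical (sequential) SOM update for a single input vector x.

    weights: (n_nodes, dim) codebook vectors
    grid:    (n_nodes, 2) fixed 2-D coordinates of each node on the map
    """
    # Best-matching unit: the node whose codebook vector is closest to x.
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighborhood on the map grid, centered on the BMU.
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    # Move every node toward x, weighted by its neighborhood strength.
    return weights + lr * h[:, None] * (x - weights)

# Illustrative usage: a 5x5 map learning 3-D inputs, one vector at a time.
rng = np.random.default_rng(0)
grid = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
weights = rng.random((25, 3))
for x in rng.random((200, 3)):           # inputs read sequentially
    weights = kohonen_step(weights, grid, x)
```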


Applied Intelligence | 2011

Bridging the fuzzy, neural and evolutionary paradigms for automatic target recognition

Iren Valova; Gary Milano; Kevin Bowen; Natacha Gueorguieva

Multilayer perceptron neural networks possess pattern recognition properties that make them well suited for use in automatic target recognition systems. Their application is hindered, however, by the lack of a training algorithm that reliably finds a nearly globally optimal set of weights in a relatively short time. The approach presented here is based on implementing genetic algorithms and fuzzy logic to train the proposed hybrid architecture. Compared to other approaches, it offers three main advantages. First, the neuro-computing technique is capable of fast and adaptive distortion-invariant pattern recognition for rapidly changing targets. Second, genetic algorithms and fuzzy logic offer very sophisticated configuration control, which combines the results of previous computations with the external operating environment. Third, it allows us to significantly improve the reliability of object detection in the input scene with respect to associated distortions at no additional computational cost. This paper examines using genetic algorithms as an efficient way to train a feedforward neural net, whose inputs are provided by a fuzzy front end, for application to automatic target recognition. The system is tested using actual laser detection and range data as training data, and the analysis shows that the proposed system converges much faster on a weight set and achieves a high rate of successful recognition.
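A minimal sketch of the general idea of training feedforward network weights with a genetic algorithm, assuming a toy dataset, a tiny network, and simple truncation selection with Gaussian mutation; it is not the hybrid fuzzy/genetic architecture of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the recognition data; the real system used laser
# detection and range measurements preprocessed by a fuzzy front end.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def forward(w, X, hidden=6):
    """Tiny feedforward net; w is a flat genome holding all weights."""
    W1 = w[:4 * hidden].reshape(4, hidden)
    W2 = w[4 * hidden:].reshape(hidden, 1)
    h = np.tanh(X @ W1)
    return 1 / (1 + np.exp(-(h @ W2)))[:, 0]

def fitness(w):
    """Negative classification error: higher is better."""
    pred = forward(w, X) > 0.5
    return -np.mean(pred != y)

n_genes, pop_size = 4 * 6 + 6, 40
pop = rng.normal(size=(pop_size, n_genes))
for gen in range(100):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]          # truncation selection
    children = parents + 0.1 * rng.normal(size=parents.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(w) for w in pop])]
```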


Procedia Computer Science | 2013

Initialization Issues in Self-organizing Maps

Iren Valova; George Georgiev; Natacha Gueorguieva; Jacob Olson

In this paper we present analysis of and solutions to problems related to the initial positioning of neurons in a classic self-organizing map (SOM) neural network. This means that we are not concerned with the multitude of growing variants, where new neurons are placed where needed. For our work, we consider placing the neurons on a Hilbert curve, as SOMs have a tendency to converge toward self-similar curves. Another point of adjustment in SOMs is the initial number of neurons, which depends on the data set. Our investigations show that initializing the neurons on a self-similar curve such as the Hilbert curve provides quality coverage of the input topology in far fewer epochs than the usual random neuron placement. Quality here is measured by the absence of tangles in the network, which is a one-dimensional SOM trained with the traditional Kohonen algorithm. Tangling in a SOM presents the problem of topologically close neighbors that are actually far apart in the neuron chain of the 1D network, which affects proper clustering and the analysis of cluster labels and classification. We also experiment with, and provide analysis of, the choice of the number of neurons.
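A minimal sketch of the initialization idea, assuming a 2-D input domain scaled to the unit square; the curve order and neuron count are illustrative, and the subsequent training would use ordinary Kohonen updates.

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Convert index d along a Hilbert curve of the given order to (x, y)
    on a 2^order x 2^order grid (standard iterative construction)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Place a 1-D chain of 64 neurons along a Hilbert curve of order 3 (8x8 grid),
# scaled to the unit square, instead of initializing them at random positions.
order, n_neurons = 3, 64
side = (1 << order) - 1
chain = np.array([hilbert_d2xy(order, d) for d in range(n_neurons)]) / side
```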


Neural Computing and Applications | 2007

Modeling of inhibition/excitation firing in olfactory bulb through spiking neurons

Iren Valova; Natacha Gueorguieva; Frank Troescher; Oxana Lapteva

Spiking neural systems are based on biologically inspired neural models of computation: they take into account the precise timing of spike events and are therefore suitable for analyzing dynamical aspects of neuronal signal transmission. These systems have gained increasing interest because they are more sophisticated than the simple neuron models found in artificial neural systems; they are closer to biophysical models of neurons, synapses, and related elements, and their synchronized firing of neuronal assemblies could serve the brain as a code for feature binding and pattern segmentation. The simulations are designed to exemplify certain properties of olfactory bulb (OB) dynamics and are based on an extension of the integrate-and-fire (IF) neuron and the idea of locally coupled excitation and inhibition cells. We introduce the background theory for making an appropriate choice of model parameters. Two forms of connectivity, each offering computational and analytical advantages through either symmetry or statistical properties in the study of OB dynamics, have been used: all-to-all coupling and receptive-field-style coupling. Our simulations showed that the inter-neuron transmission delay controls the size of spatial variations of the input and also smooths the network response. Our extended IF model proves to be a useful basis from which we can study more sophisticated features such as complex pattern formation, and global stability and chaos of OB dynamics.
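For context, the basic leaky integrate-and-fire dynamics that the authors' extended IF model builds on can be sketched as follows; all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Basic leaky integrate-and-fire neuron driven by input current I
    (one value per time step). Returns the membrane trace and spike steps."""
    v, trace, spikes = v_rest, [], []
    for step, i_t in enumerate(I):
        # Euler step of dv/dt = (-(v - v_rest) + i_t) / tau
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(step)
            v = v_reset            # reset the membrane after the spike
        trace.append(v)
    return np.array(trace), spikes

# Illustrative constant drive for 500 steps.
trace, spikes = simulate_lif(np.full(500, 1.5))
```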


Journal of Experimental and Theoretical Artificial Intelligence | 2006

Learning and data clustering with an RBF-based spiking neuron network

Natacha Gueorguieva; Iren Valova; Georgi Georgiev

A spiking neuron is a simplified model of the biological neuron in which the input, output, and internal representation of information are based on the relative timing of individual spikes, and it is thus closely related to the biological network. We extend the learning algorithms with spiking neurons developed by earlier workers. Those algorithms explicitly concerned a single pair of pre- and postsynaptic spikes and cannot be applied to situations involving multiple spikes arriving at the same synapse. The aim of the algorithm presented here is to achieve synaptic plasticity by using the relative timing between single pre- and postsynaptic spikes and thereby to improve performance on large datasets. The learning algorithm is based on spike-timing-dependent synaptic plasticity, which uses exact spike timing to optimize the information stream through the neural network as well as to enforce competition between neurons during unsupervised Hebbian learning. We demonstrate the performance of the proposed spiking neuron model and learning algorithm on clustering and provide a comparative analysis with other state-of-the-art approaches.
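A minimal sketch of a pairwise spike-timing-dependent plasticity update of the kind the learning algorithm relies on; the time constants and learning rates are illustrative assumptions, not the paper's values.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pairwise spike-timing-dependent plasticity update for one synapse.

    t_pre, t_post: spike times (same units as the time constants).
    Pre-before-post potentiates the synapse; post-before-pre depresses it.
    """
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau_plus)
    else:
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))

# Illustrative calls: causal pairing strengthens, anti-causal weakens.
w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)   # pre leads post -> potentiation
w = stdp_update(w, t_pre=30.0, t_post=22.0)   # post leads pre -> depression
```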


Neural Computing and Applications | 2004

An oscillation-driven neural network for the simulation of an olfactory system

Iren Valova; Natacha Gueorguieva; Yukio Kosugi

Understanding the nonlinear dynamics of an olfactory bulb (OB) is essential for the modelling of the brain and nervous system. We have analysed the nature of odour-receptor interactions and the conditions controlling neural oscillations. This analysis is the basis for the proposed biologically plausible three-tiered model of an oscillation-driven neural network (ODNN) with three non-linearities. The layered architecture of the bulb is viewed as a composition of different processing stages performing specific computational tasks. The presented three-tiered model of the olfactory system (TTOS) contains the sensory, olfactory bulb and anterior nucleus tiers. The number of excitatory (mitral/tufted) cells differs from the number of inhibitory (granule) cells, which improves the cognitive ability of the model. The odour molecules are first received at the sensory layer, where receptor neurons spatio-temporally encode them in terms of spiking frequencies. Neurons expressing a specific receptor project to two or more topographically fixed glomeruli in the OB and create a sensory map. Excitatory postsynaptic potentials are formed in the primary dendrite of mitral cells and are encoded in an exclusive way to present them to the coupled non-linear oscillatory model of the next mitral-granule layer. In a noisy background, our model functions as an associative memory, although it operates in oscillatory mode. While feed-forward networks and recurrent networks with symmetric connections always converge to static states, learning and pattern retrieval in an asymmetrically connected neural network based on oscillations are not well studied. We derive the requirements under which a state is stable and test whether a given equilibrium state is stable against noise. The ODNN demonstrates its capability to discriminate odours by using nonlinear dendro-dendritic interactions between neurons. This model allows us to visualise and analyse how the brain is able to encode information from countless molecules with different odour receptors.
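A toy sketch of how mutual excitatory/inhibitory (mitral/granule) coupling can produce oscillations, assuming a simple rate-based pair with sigmoid nonlinearities; the actual ODNN is a spiking, three-tiered model, so every parameter below is an illustrative assumption.

```python
import numpy as np

def mitral_granule_pair(odor_input, steps=2000, dt=0.01,
                        w_mg=2.0, w_gm=2.0, tau_m=1.0, tau_g=1.0):
    """Toy excitatory (mitral) / inhibitory (granule) pair whose mutual
    coupling produces oscillations for sufficiently strong odor input."""
    f = lambda u: 1.0 / (1.0 + np.exp(-u))    # sigmoid firing-rate nonlinearity
    m, g = 0.1, 0.0
    m_trace = np.empty(steps)
    for t in range(steps):
        dm = (-m + f(odor_input - w_gm * g)) / tau_m   # granule inhibits mitral
        dg = (-g + f(w_mg * m)) / tau_g                # mitral excites granule
        m, g = m + dt * dm, g + dt * dg
        m_trace[t] = m
    return m_trace

trace = mitral_granule_pair(odor_input=3.0)
```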


International Conference on Artificial Neural Networks | 2003

Building RBF neural network topology through potential functions

Natacha Gueorguieva; Iren Valova

In this paper we propose a strategy for shaping adaptive radial basis functions through potential functions. DYPOF (DYnamic POtential Functions) is a neural network (NN) designed on the basis of radial basis function (RBF) NNs with a two-stage training procedure. Static (fixed number of RBFs) and dynamic (able to add or delete one or more RBFs) versions of our learning algorithm are introduced. We investigate how the cluster shape changes with the dimension of the input data, the choice of univariate potential function, and the construction of multivariate potential functions. Several data sets are considered to demonstrate the classification performance on the training and testing exemplars and to compare DYPOF with other neural networks.
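A minimal sketch of a generic two-stage RBF classifier (center placement followed by linear output fitting), the family to which DYPOF belongs; the random center selection and fixed Gaussian width used here are illustrative assumptions, not the potential-function procedure of the paper.

```python
import numpy as np

def rbf_design_matrix(X, centers, width=1.0):
    """Gaussian radial-basis (potential-field-like) activations of each
    input with respect to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Stage 1 (illustrative): choose centers, e.g. a random subset of the data.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)
centers = X[rng.choice(len(X), size=10, replace=False)]

# Stage 2: fit the output weights by linear least squares on the RBF features.
Phi = rbf_design_matrix(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = (Phi @ w > 0.5).astype(float)
```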


International Journal of Smart Engineering System Design | 2003

DYPOF: Dynamically Adaptive RBF Neural Network with Potential Functions

Natacha Gueorguieva; Iren Valova

We present a method for data classification which performs recognition based on a set of potential fields synthesized over the domain of the input space by a number of potential function units. We propose DYPOF (DYnamic POtential Functions), a neural network (NN) based on radial basis functions with a two-stage training procedure. A fundamental component in building DYPOF is the potential function entity (PFE), designed to generate a respective decision potential function. The desirable shape of the potential field characterizing the distribution of the training set is synthesized by adjusting the weights as well as the parameter vectors of the cumulative potential functions generated by the PFEs. The automatic adjustment of the minimum necessary number of hidden units, called learning adjustment units (LAU), for a given set of teaching patterns provides the network with the capability of dynamic adaptation and self-organization. We investigate the dependence of our method on these parameters and apply it to several data sets. The results indicate the power of the PFEs in generating classification solutions, for various shapes of teaching patterns, that are robust with respect to noise in the data.
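A minimal sketch of a generic growth heuristic for adding hidden units wherever the current fit is poor, in the spirit of dynamic adaptation; the threshold test, Gaussian units, and refitting used here are assumptions, not the actual LAU adjustment rule of the paper.

```python
import numpy as np

def grow_rbf_units(X, y, err_thresh=0.3, width=1.0):
    """Scan the training patterns and add a new Gaussian unit centered on
    any pattern whose current prediction error exceeds err_thresh,
    refitting the linear output weights as units are added."""
    centers = X[:1].copy()                        # start with a single unit
    for i in range(len(X)):
        Phi = np.exp(-((X[:, None] - centers[None]) ** 2).sum(2) / (2 * width**2))
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        if abs(Phi[i] @ w - y[i]) > err_thresh:   # poorly covered pattern
            centers = np.vstack([centers, X[i]])
    # Final refit with the full set of centers.
    Phi = np.exp(-((X[:, None] - centers[None]) ** 2).sum(2) / (2 * width**2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(80, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)
centers, w = grow_rbf_units(X, y)
```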


IEEE International Conference on Fuzzy Systems | 2016

Fuzzification of principal component analysis for data dimensionality reduction

Natacha Gueorguieva; Iren Valova; George Georgiev

Principal component analysis (PCA) extracts a small number of uncorrelated components from the original high-dimensional data space and is widely used for data analysis. The methodology of classical PCA is based on an orthogonal projection defined in a convex vector space. Thus, the norm between two projected vectors is unavoidably no larger than the norm between the same two objects before applying PCA. Due to this, in cases where PCA cannot capture the data structure, its implementation does not necessarily reflect the real similarity of the data in the higher-dimensional space, making the results unacceptable. In order to avoid this problem for the purposes of high-dimensional data clustering, we propose a new Fuzzy PCA (FPCA) algorithm. The novelty is in the extracted similarity structures of objects in high-dimensional space and the dissimilarities between objects based on their cluster structure and dimension when the algorithm is implemented in conjunction with known fuzzy clustering algorithms such as FCM, GK, or GG. This is done by using the fuzzy membership functions and modifying the classical PCA approach so that the similarity structures are considered during the construction of projections into the lower-dimensional space. The effectiveness of the proposed algorithm is tested on several benchmark data sets. We also evaluate the clustering efficiency by implementing validation measures.
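A minimal sketch of the general idea of letting fuzzy cluster memberships shape a PCA projection, here via a membership-weighted covariance; this is only an illustration under that assumption, not the FPCA algorithm of the paper.

```python
import numpy as np

def membership_weighted_pca(X, u, n_components=2):
    """Project data onto the principal axes of a membership-weighted covariance.

    X: (n, d) data; u: (n,) fuzzy membership degrees in [0, 1] for one cluster
    (e.g. produced by FCM). Memberships rescale each sample's contribution.
    """
    w = u / u.sum()
    mean = w @ X
    Xc = X - mean
    cov = (Xc * w[:, None]).T @ Xc               # membership-weighted covariance
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Xc @ top

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
u = rng.uniform(size=200)                        # stand-in for FCM memberships
Z = membership_weighted_pca(X, u)
```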


Procedia Computer Science | 2013

Biologically Inspired Olfactory Learning Architecture

George Georgiev; Mrinal Gosavi; Iren Valova; Natacha Gueorguieva

Neurons communicate via electrochemical currents, so simulation is typically accomplished by modeling the dynamical nature of the neuron's electrical properties. In this paper we utilize the Hodgkin-Huxley model and briefly compare it to the leaky integrate-and-fire model. The Hodgkin-Huxley model is a conductance-based model in which current flows across the cell membrane due to the charging of the membrane capacitance and the movement of ions across ion channels. The leaky integrate-and-fire model is a widely used example of a formal spiking neuron model: action potentials are generated when the membrane potential crosses a fixed threshold value, and the dynamics of the membrane potential are governed by a 'leaky' current. Conductance-based (Hodgkin-Huxley) models of excitable cells are developed to help understand the underlying mechanisms that contribute to action potential generation, repetitive firing, and oscillatory patterns. These factors contribute to modeling the olfactory bulb's dynamic behaviors, which is why we focus on conductance-based neuronal models in this work. The model consists of input, mitral, and granule layers connected by synapses. A series of simulations accounting for various olfactory activities are run to explain certain effects of the dynamic behavior of the olfactory bulb (OB). These simulation results are verified against documented evidence in published journal papers.
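A minimal sketch of the standard single-compartment Hodgkin-Huxley equations with forward-Euler integration, using the classic squid-axon parameters; the olfactory bulb model in the paper couples such conductance-based units into input, mitral, and granule layers, which is not reproduced here.

```python
import numpy as np

def simulate_hh(I_ext, dt=0.01, T=50.0):
    """Single-compartment Hodgkin-Huxley neuron, forward-Euler integration.
    Standard squid-axon parameters; I_ext is the injected current (uA/cm^2)."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387

    # Voltage-dependent opening/closing rates of the gating variables m, h, n.
    a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
    a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
    b_h = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))
    a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)

    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    steps = int(T / dt)
    trace = np.empty(steps)
    for t in range(steps):
        # Ionic currents through sodium, potassium, and leak channels.
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I_ext - INa - IK - IL) / C
        # Gating-variable kinetics.
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace[t] = V
    return trace

trace = simulate_hh(I_ext=10.0)   # drive strong enough to elicit repetitive firing
```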

Collaboration


Dive into Natacha Gueorguieva's collaboration network.

Top Co-Authors

Iren Valova, University of Massachusetts Dartmouth
George Georgiev, University of Wisconsin-Madison
Georgi Georgiev, University of Wisconsin–Oshkosh
Vyacheslav Glukh, City University of New York
Matthias Kempka, University of Massachusetts Dartmouth
Oxana Lapteva, University of Massachusetts Dartmouth
Alexandre Buer, University of Massachusetts Dartmouth
Austin Krauza, City University of New York
Daniel Szer, University of Massachusetts Dartmouth
David Brady, City University of New York