Iren Valova
University of Massachusetts Amherst
Publications
Featured research published by Iren Valova.
Procedia Computer Science | 2012
George Georgiev; Iren Valova; Natacha Gueorguieva; Leo Lei
The heart is one of the most important organs in the human body, and disorders in its functioning can cause serious problems. Arrhythmias are abnormal heartbeats: heart diseases caused by disorders of the heart's electrical conduction system. They are characterized by very slow (bradycardia) or very fast (tachycardia) heart rhythms that result in inefficient pumping. The state of the heart is generally reflected in the shape of the ECG waveform and in the heart rate. Researchers have proposed various computer-based methodologies for automatic diagnosis; the overall process can generally be subdivided into separate processing modules such as preprocessing, feature extraction/selection, and classification. In this research we focus on filtering the ECG signal to remove high-frequency noise and enhance the QRS complexes, and on feature extraction, i.e., the derivation of a feature vector from the ECG pattern vector. Our feature selection approach is based on orthonormal functions: representing ECG morphology with the coefficients of orthonormal polynomials yields robust estimates of a few descriptive signal parameters. Exposing subtle features of normal and deviating ECG pattern vectors allows their accurate representation. The experimental data includes recordings from the MIT dataset.
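The orthonormal-polynomial feature idea can be sketched as follows. This is a minimal illustration using a Legendre basis and a synthetic Gaussian "beat"; the specific basis family and expansion order used in the paper are not stated here, so those choices are assumptions:

```python
import numpy as np

def orthonormal_features(signal, order=8):
    """Project a (filtered) ECG beat onto unit-norm Legendre polynomials
    and return the expansion coefficients as a compact feature vector."""
    n = len(signal)
    x = np.linspace(-1.0, 1.0, n)
    # One Legendre polynomial per column, normalized to unit energy.
    basis = np.stack(
        [np.polynomial.legendre.Legendre.basis(k)(x) for k in range(order)],
        axis=1,
    )
    basis /= np.linalg.norm(basis, axis=0)
    # Least-squares projection gives robust coefficient estimates even
    # when the sampled basis is only approximately orthogonal.
    coeffs, *_ = np.linalg.lstsq(basis, signal, rcond=None)
    return coeffs

# Toy "QRS-like" beat: a narrow Gaussian spike.
t = np.linspace(-1, 1, 200)
beat = np.exp(-(t / 0.05) ** 2)
features = orthonormal_features(beat)
print(features.shape)  # (8,)
```

A few such coefficients summarize the beat morphology, which is the sense in which the representation is compact and robust.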
Procedia Computer Science | 2012
Iren Valova; Natacha Gueorguieva; George Georgiev
Voltage-gated sodium channels play an important role in action potentials. If enough channels open during a change in the cell's membrane potential, a small but significant number of sodium (Na+) ions move into the cell, reducing its electrochemical gradient and further depolarizing the cell. Voltage-gated Na+ channels thus play a fundamental role in the excitability of nerve and muscle cells. Na+ channels both open and close more quickly than potassium (K+) channels, producing an influx of positive charge (Na+) toward the beginning of the action potential and an efflux (K+) toward the end. The study of K+ channels is essential, as they appear to be more diverse in structure and function than any other type of ion channel: K+ channels shape the action potential, set the membrane potential, and determine firing rates. Some drugs already in clinical use target K+ channels and improve our ability to regulate excitability. In this research, we study the influence of voltage dependence on channel activation and inactivation by simulating different channel subtypes, as well as the effect of different kinetic parameters on membrane excitability.
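The Na+/K+ interplay described above can be sketched with the standard Hodgkin-Huxley formalism. The parameter values below are the textbook squid-axon constants, not the specific channel subtypes or kinetic parameters explored in the paper:

```python
import numpy as np

# Classic Hodgkin-Huxley gating rate functions (mV, ms).
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

def simulate(I=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of an HH point neuron under a constant
    current injection I (uA/cm^2). Returns the voltage trace."""
    gNa, gK, gL = 120.0, 36.0, 0.3      # max conductances (mS/cm^2)
    ENa, EK, EL = 50.0, -77.0, -54.4    # reversal potentials (mV)
    Cm = 1.0                            # membrane capacitance (uF/cm^2)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting values
    trace = []
    for _ in range(int(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)  # fast, inactivating Na current
        IK = gK * n**4 * (V - EK)         # slower delayed-rectifier K current
        IL = gL * (V - EL)
        V += dt * (I - INa - IK - IL) / Cm
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return np.array(trace)

trace = simulate()
print(trace.max())  # spikes overshoot well above 0 mV
```

Varying the rate-function constants is the kind of kinetic-parameter manipulation whose effect on excitability the paper studies.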
Procedia Computer Science | 2011
Iren Valova; Natacha Gueorguieva; George Gueorguiev; Vyacheslav Glukh
Information processing in the brain results from the spread and interaction of electrical and chemical signals within and among neurons. The equations that describe brain dynamics generally do not have analytical solutions. The recent expansion in the use of simulation tools in neuroscience has been encouraged by the rapid growth of quantitative observations that both stimulate and constrain the formulation of new hypotheses of neuronal function. The purpose of this research is to study, simulate, and analyze the influence of Ca concentration on the Na channel. Deviations of Ca from its normal levels cause major clinical problems, as the decreased (or increased) excitability of neurons becomes evident in fatigue, depression, confusion, cardiac arrhythmias, etc. We simulate the sensitivity of the Na channel to the concentration of Ca and its “stabilizing” effect on nerve and muscle excitability. Our research is based on the work of Hodgkin and Huxley, the Moore-Cox model of the Na channel, and the NEURON simulation environment. The latter is a powerful and flexible tool for implementing biologically realistic models of electrical and chemical signalling in neurons, while accommodating the expansions and modifications required by the stated goals.
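One common phenomenological way to capture a Ca effect on the Na channel is a Ca-dependent voltage shift of the steady-state activation curve (the classical surface-charge screening effect, where low extracellular Ca shifts activation leftward and raises excitability). The numbers below are illustrative assumptions, not the Moore-Cox model parameters:

```python
import numpy as np

def m_inf(V, ca_mM=2.0, ca_ref=2.0, shift_per_efold=9.0):
    """Steady-state Na activation with a Ca-dependent voltage shift.
    Lower extracellular Ca shifts the curve to more negative voltages,
    the "destabilizing" direction. All constants are illustrative."""
    dV = shift_per_efold * np.log(ca_mM / ca_ref)  # mV shift vs. reference Ca
    Vh, k = -40.0, 7.0  # half-activation voltage and slope factor (assumed)
    return 1.0 / (1.0 + np.exp(-(V - (Vh + dV)) / k))

# At the resting potential, halving extracellular Ca increases the
# open fraction of activation gates.
low, normal = m_inf(-65.0, ca_mM=1.0), m_inf(-65.0, ca_mM=2.0)
print(low > normal)  # True
```

In this picture, normal Ca "stabilizes" the membrane by keeping the activation curve away from rest, which mirrors the clinical link between hypocalcemia and hyperexcitability.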
Procedia Computer Science | 2011
Aaron Larocque; Iren Valova
Clustering is a way of classifying a multi-dimensional dataset by the similarities of its dimensions. The results of clustering must be analyzed to test the accuracy of the algorithm and its implementation. This analysis is sometimes done through a visual representation of the clustered dataset; however, it is impossible to visually represent a dataset with more than four dimensions. Statistical analysis makes the assessment feasible. The analysis performed on the output calculates the centroid of each cluster and the cluster's relation to that centroid. We have investigated two modes of hierarchical clustering as well as spectral clustering. The standard deviation of each dimension from the centroid, the maximum Euclidean distance from the centroid, and the dimensions that elements of each cluster have in common are also computed. The performed experiments demonstrate, through a synthesis of visual representation and the statistical analysis proposed above, which clustering algorithm produces the most accurate results under particular circumstances.
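The per-cluster statistics named above (centroid, per-dimension standard deviation, maximum Euclidean distance) can be sketched as follows; this is a minimal illustration, not the authors' implementation:

```python
import numpy as np

def cluster_stats(points):
    """Per-cluster statistics for assessing a clustering result when the
    data has too many dimensions to visualize directly."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    per_dim_std = pts.std(axis=0)              # spread in each dimension
    dists = np.linalg.norm(pts - centroid, axis=1)
    return centroid, per_dim_std, dists.max()  # max Euclidean distance

# Toy 5-D cluster: tight in every dimension except the last.
cluster = np.array([[0.0, 0, 0, 0, 0],
                    [0.1, 0, 0, 0, 5],
                    [0.0, 0.1, 0, 0, 10]])
centroid, stds, max_dist = cluster_stats(cluster)
print(stds[-1] > stds[0])  # True: the last dimension dominates the spread
```

Comparing these numbers across clusters is what makes a high-dimensional result checkable without a plot.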
Neurocomputing | 2011
Derek Beaton; Iren Valova; Daniel MacLean
Self-organization is a widely used technique in unsupervised learning and data analysis, largely exemplified by k-means clustering, self-organizing maps (SOM) and adaptive resonance theory. In this paper we present a new algorithm, TurSOM, inspired by Turing's unorganized machines and Kohonen's SOM. Turing's unorganized machines are an early model of neural networks characterized by self-organizing connections, as opposed to the self-organizing neurons of SOM. TurSOM introduces three new mechanisms to facilitate both neuron and connection self-organization: a connection learning rate, connection reorganization, and a neuron responsibility radius. TurSOM is implemented in a 1-dimensional network (i.e., a chain of neurons) to exemplify the theoretical implications of these features. In this paper we demonstrate that TurSOM is superior to the classical SOM algorithm in several ways: (1) speed until convergence; (2) independent clusters; and (3) tangle-free networks.
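For reference, the classical 1-D SOM chain that TurSOM builds on can be sketched as follows. This is the Kohonen baseline only, with self-organizing neurons and fixed connections, not the TurSOM algorithm itself:

```python
import numpy as np

def train_chain_som(data, n_neurons=10, epochs=200, lr=0.5, radius=2.0, seed=0):
    """Minimal 1-D Kohonen SOM: a chain of neurons whose weights move
    toward the input; chain neighbors of the winner are pulled along."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(), size=(n_neurons, data.shape[1]))
    idx = np.arange(n_neurons)
    for t in range(epochs):
        decay = 1.0 - t / epochs  # shrink learning rate and neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            # Gaussian neighborhood along the chain index.
            h = np.exp(-((idx - bmu) ** 2) / (2 * (radius * decay + 1e-9) ** 2))
            weights += (lr * decay) * h[:, None] * (x - weights)
    return weights

# Fit the chain to points on a 2-D line segment; the neurons spread along it.
data = np.linspace(0, 1, 50)[:, None] * np.array([[1.0, 1.0]])
w = train_chain_som(data)
print(w.round(2))
```

TurSOM's contribution is to make the *connections* between these chain neurons learn and reorganize as well, which this baseline does not do.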
Archive | 2010
George Georgiev; Iren Valova; Natacha Gueorguieva
The classification of large-dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data poses a significant computational problem. Decision tree classification is a popular approach to the problem and an efficient form for representing a decision process in hierarchical pattern recognition systems. Decision trees are characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class.
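The sequence-of-decision-rules idea can be illustrated with a toy example; the feature names and class labels below are invented for illustration and are not from the original study:

```python
def classify(sample):
    """A toy hierarchical decision process: each sample passes through a
    sequence of threshold rules before receiving a unique class label."""
    ndvi, brightness = sample["ndvi"], sample["brightness"]
    if ndvi > 0.5:            # first rule: vegetated or not
        return "forest" if brightness < 0.4 else "cropland"
    if brightness > 0.7:      # second rule, among non-vegetated samples
        return "urban"
    return "water"

print(classify({"ndvi": 0.8, "brightness": 0.2}))  # forest
```

The efficiency comes from each sample evaluating only the rules on its own path, rather than every rule in the system.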
Archive | 2010
Iren Valova; Derek Beaton; Daniel MacLean
In this chapter we offer a survey of self-organizing feature maps with emphasis on recent advances and, more specifically, on growing architectures. Several of the methods were developed by the authors and offer a unique combination of theoretical fundamentals and neural network architectures. This survey of dynamic architectures also includes examples of application domains, usage, and resources for learners and researchers alike to pursue their interest in SOMs. The primary reason for pursuing this branch of machine learning is that these techniques are unsupervised, requiring no a priori knowledge or trainer. As such, SOMs lend themselves readily to difficult problem domains in machine learning, such as clustering, pattern identification and recognition, and feature extraction. SOMs utilize competitive neural network learning algorithms introduced by Kohonen in the early 1980s. SOMs maintain the features (in terms of vectors) of the input space the network is observing. This chapter, as work emphasizing dynamic architectures, would be incomplete without presenting the significant achievements in SOMs, including the work of Fritzke and his growing architectures. To exemplify more modern approaches we present state-of-the-art developments in SOMs. These approaches include parallelization (ParaSOM, as developed by the authors), incremental learning (ESOINN), connection reorganization (TurSOM, as developed by the authors), and function space organization (mnSOM). Additionally, we introduce some methods of analyzing SOMs, including methods for measuring the quality of SOMs with respect to input, neighbors, and map size. We also present techniques for posterior recognition, clustering, and input feature significance. In summary, this chapter presents a modern gamut of self-organizing neural networks, together with measurement and analysis techniques.
ieee/aiaa digital avionics systems conference | 2006
Derek Beaton; Iren Valova; Daniel MacLean; John Hammond
A self-organizing map (SOM) is a type of unsupervised artificial neural network (ANN) that can be used in applications of pattern recognition and classification. A SOM is a viable approach to many avionics problem domains, including threat identification (classification), air traffic flow management (pattern recognition), and intelligent systems for vehicle autonomy (classification and pattern recognition). An implementation of a parallelized SOM, entitled ParaSOM, was developed that allows for a more accurate mapping of input and requires far fewer iterations (hundreds, as opposed to tens or hundreds of thousands) than a classical SOM (Kohonen, 1995) and many of its variations, including but not limited to growing cell structures (Fritzke, 1994), growing grid (Fritzke, 1995), hierarchical (Lampinen and Oja, 1992), and growing hierarchical (Dittenbach et al., 2000) SOMs. In a recent advancement to ParaSOM, a genetic algorithm (GA) implementing evolutionary computation was created that quasi-randomly generates (or randomly selects) values for each ParaSOM parameter elected for use during execution, drawn from a lower and upper bound pairing for that parameter. When used in conjunction with a convergence test, the GA identifies parameters of ParaSOM that bring execution and performance as close to optimum as possible, without human interaction. Automated generation of parameters for optimum performance allows for a more accurate use of ParaSOM, and therefore more accurate use in the problem domains discussed. Optimum performance is defined as the highest accuracy of classification with the least number of iterations prior to convergence.
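The bounded parameter search can be sketched as follows. This is a simplified mutate-and-keep-the-best sketch with an invented stand-in fitness function; the actual ParaSOM GA, its parameters, and its convergence test are not reproduced here:

```python
import random

def tune(bounds, fitness, generations=30, pop=12, seed=1):
    """Sample candidate parameter sets inside per-parameter
    [lower, upper] bounds, keep the fittest, and mutate it.
    `fitness` stands in for 'accuracy with fewest iterations'."""
    rng = random.Random(seed)
    sample = lambda: {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
    best = max((sample() for _ in range(pop)), key=fitness)
    for _ in range(generations):
        children = []
        for _ in range(pop):
            # Gaussian mutation, clipped back into each parameter's bounds.
            child = {k: min(max(v + rng.gauss(0, 0.1 * (bounds[k][1] - bounds[k][0])),
                                bounds[k][0]), bounds[k][1])
                     for k, v in best.items()}
            children.append(child)
        best = max(children + [best], key=fitness)  # elitism: never lose the best
    return best

# Stand-in fitness with its optimum at learning_rate=0.3, radius=2.0.
bounds = {"learning_rate": (0.01, 1.0), "radius": (0.5, 5.0)}
fitness = lambda p: -((p["learning_rate"] - 0.3) ** 2 + (p["radius"] - 2.0) ** 2)
best = tune(bounds, fitness)
print(round(best["learning_rate"], 1), round(best["radius"], 1))
```

In the real system the fitness evaluation would run ParaSOM to convergence for each candidate, which is why removing the human from this loop matters.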
Archive | 2007
Iren Valova; Natacha Gueorguieva; George Georgiev
Archive | 2010
Iren Valova; Natacha Gueorguieva; George Georgiev; Vyacheslav Glukh