Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Garen Arevian is active.

Publication


Featured research published by Garen Arevian.


Hybrid Neural Systems, revised papers from a workshop | 1998

Towards Hybrid Neural Learning Internet Agents

Stefan Wermter; Garen Arevian; Christo Panchev

This chapter explores learning internet agents. In recent years, with the massive increase in the amount of information available on the Internet, a need has arisen to organize and access that data in a meaningful and directed way. Many well-explored techniques from the fields of AI and machine learning have been applied in this context. In this paper, special emphasis is placed on neural network approaches to implementing a learning agent. First, various important approaches are summarized. Then, an approach for neural learning internet agents is presented, one that uses recurrent neural networks to learn to classify a textual stream of information. Experimental results show that a neural network model based on a recurrent plausibility network can act as a scalable, robust and useful news-routing agent. The concluding section examines the need for a hybrid integration of various techniques to achieve optimal results in the specified problem domain, in particular exploring the hybrid integration of preference Moore machines and recurrent networks to extract symbolic knowledge.
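The core recurrent idea described above can be sketched in a few lines. The following toy Elman-style forward pass (hand-picked weights and dimensions, purely illustrative, not the paper's plausibility network) shows how a hidden state carries context across a stream of token vectors:

```python
import math

def srn_step(x, h_prev, W_xh, W_hh):
    """One Elman-style recurrent step: the new hidden state mixes the
    current input vector with the previous hidden state (the 'context')."""
    h = []
    for j in range(len(h_prev)):
        s = sum(W_xh[j][i] * x[i] for i in range(len(x)))
        s += sum(W_hh[j][k] * h_prev[k] for k in range(len(h_prev)))
        h.append(math.tanh(s))
    return h

# Toy run: 2-dim inputs, 2-dim hidden state, hand-picked weights.
W_xh = [[0.5, -0.3], [0.1, 0.8]]
W_hh = [[0.2, 0.0], [0.0, 0.2]]
h = [0.0, 0.0]
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:  # a 'stream' of token vectors
    h = srn_step(x, h, W_xh, W_hh)
# h now summarizes the whole sequence; a linear read-out layer would
# turn it into class scores (e.g. news categories for routing).
```

In a trained system the weights would be learned and the read-out layer would produce the routing decision; here everything is fixed just to show the information flow.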


web intelligence | 2007

Recurrent Neural Networks for Robust Real-World Text Classification

Garen Arevian

This paper explores the application of recurrent neural networks to the task of robust text classification of a real-world benchmarking corpus. There are many well-established approaches to text classification, but they fail to address the challenge from a more multidisciplinary viewpoint that spans natural language processing and artificial intelligence. The results demonstrate that these recurrent neural networks can be a viable addition to the many techniques used in web intelligence for tasks such as context-sensitive email classification and web site indexing.


International Journal of Approximate Reasoning | 2003

Symbolic state transducers and recurrent neural preference machines for text mining

Garen Arevian; Stefan Wermter; Christo Panchev

This paper focuses on symbolic transducers and recurrent neural preference machines to support the task of mining and classifying textual information. These encoding symbolic transducers and learning neural preference machines can be seen as independent agents, each one tackling the same task in a different manner. Systems combining such machines can potentially be more robust as the strengths and weaknesses of the different approaches yield complementary knowledge, wherein each machine models the same information content via different paradigms. An experimental analysis of the performance of these symbolic transducer and neural preference machines is presented. It is demonstrated that each approach can be successfully used for information mining and news classification using the Reuters news corpus. Symbolic transducer machines can be used to manually encode relevant knowledge quickly in a data-driven approach with no training, while trained neural preference machines can give better performance based on additional training.
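A Moore machine of the kind described above can be hand-encoded with no training. The sketch below (with invented states and keywords, not the paper's actual encoding) shows the convention that the output lives on the current state, so the state at the end of a title is the running category preference:

```python
# States carry the output (Moore convention): the current state is the
# running preference for a news category. Keywords drive transitions;
# unknown words leave the state unchanged.
TRANSITIONS = {
    ("neutral", "shares"): "markets",
    ("neutral", "wheat"): "commodities",
    ("markets", "wheat"): "commodities",  # later evidence can override
}

def classify(tokens, state="neutral"):
    for tok in tokens:
        state = TRANSITIONS.get((state, tok), state)
    return state

print(classify("bank shares fell".split()))    # -> markets
print(classify("wheat exports rose".split()))  # -> commodities
```

This illustrates the trade-off stated in the abstract: such a machine is quick to encode by hand with no training data, while a trained neural preference machine can improve on it given enough examples.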


international conference on artificial neural networks | 2007

Robust text classification using a hysteresis-driven extended SRN

Garen Arevian; Christo Panchev

Recurrent Neural Network (RNN) models have been shown to perform well on artificial grammars for sequential classification tasks over long-term time-dependencies. However, there is a distinct lack of the application of RNNs to real-world text classification tasks. This paper presents results on the capabilities of extended two-context layer SRN models (xRNN) applied to the classification of the Reuters-21578 corpus. The results show that the introduction of high levels of noise to sequences of words in titles, where noise is defined as the unimportant stopwords found in natural language text, is very robustly handled by the classifiers which maintain consistent levels of performance. Comparisons are made with SRN and MLP models, as well as other existing classifiers for the text classification task.
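The noise condition the abstract describes, inserting unimportant stopwords into title sequences, can be reproduced with a small helper. The stopword list and function name here are illustrative assumptions, not the paper's exact protocol:

```python
import random

STOPWORDS = ["the", "of", "a", "in", "and", "to"]  # illustrative list

def inject_noise(title_tokens, n_noise, seed=0):
    """Insert n_noise stopwords at random positions, mimicking the
    noise condition the classifiers are tested under."""
    rng = random.Random(seed)
    noisy = list(title_tokens)
    for _ in range(n_noise):
        pos = rng.randrange(len(noisy) + 1)
        noisy.insert(pos, rng.choice(STOPWORDS))
    return noisy

clean = "bank raises interest rates".split()
noisy = inject_noise(clean, 3)
# The content words survive in their original order; a robust classifier
# should assign the noisy title the same category as the clean one.
```

The point of the experiment is exactly this property: the informative words are unchanged, so performance drops measure sensitivity to interleaved noise rather than loss of content.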


international symposium on neural networks | 2000

Meaning spotting and robustness of recurrent networks

Stefan Wermter; Christo Panchev; Garen Arevian

This paper describes and evaluates the behavior of preference-based recurrent networks which process text sequences. First, we train a recurrent plausibility network to learn a semantic classification of the Reuters news title corpus. Then we analyze the robustness and incremental learning behavior of these networks in more detail. We demonstrate that these recurrent networks use their recurrent connections to support incremental processing. In particular, we compare the performance of the real title models with reversed title models and even random title models. We find that the recurrent networks can, even under these severe conditions, provide good classification results. We argue that the network exploits previous context held in its recurrent connections and pursues a meaning-spotting strategy, which together support this robust processing.


International Conference on Innovative Techniques and Applications of Artificial Intelligence | 2006

A biologically motivated neural network architecture for the avoidance of catastrophic interference

J. F. Dale Addison; Garen Arevian; John MacIntyre

This paper describes a neural network architecture which has been developed specifically to investigate and alleviate the effects of catastrophic interference. This is the tendency of certain types of feedforward network to forget what they have learned when required to learn a second pattern set which overlaps significantly in content with the first. This work considers a neural network architecture which forms a pattern-separated representation of the inputs and develops an attractor-dynamic representation, which is subsequently associated with the original pattern. The paper then describes an excitatory and inhibitory function which ensures that only the top-firing neurons are retained. The paper considers the biological plausibility of this network and reports a series of experiments designed to evaluate the network's ability to recall patterns after learning a second data set, as well as the time to relearn the original data set.
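The excitatory/inhibitory competition that retains only the top-firing neurons is often approximated as k-winners-take-all. The following is a crude stand-in for that mechanism (a simplification, not the paper's actual function; ties at the threshold may keep more than k units):

```python
def kwta(activations, k):
    """Keep only the k most active units and silence the rest, a crude
    stand-in for the excitatory/inhibitory competition described above."""
    if k >= len(activations):
        return list(activations)
    threshold = sorted(activations, reverse=True)[k - 1]
    return [a if a >= threshold else 0.0 for a in activations]

print(kwta([0.9, 0.1, 0.5, 0.7, 0.2], 2))  # -> [0.9, 0.0, 0.0, 0.7, 0.0]
```

Sparsifying the representation this way reduces overlap between stored patterns, which is the route by which the architecture aims to limit catastrophic interference.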


joint ifsa world congress and nafips international conference | 2001

Modular preference Moore machines in news mining agents

Stefan Wermter; Garen Arevian

This paper focuses on hybrid symbolic neural architectures that support the task of classifying textual information in learning agents. We give an outline of these symbolic and neural preference Moore machines. Furthermore, we demonstrate how they can be used in the context of information mining and news classification. Using the Reuters newswire text data, we demonstrate how hybrid symbolic and neural machines can provide an effective foundation for learning news agents.


national conference on artificial intelligence | 1999

Hybrid neural plausibility networks for news agents

Stefan Wermter; Christo Panchev; Garen Arevian


international conference on artificial neural networks | 1999

Recurrent neural network learning for text routing

Stefan Wermter; Garen Arevian; Christo Panchev


Archive | 2000

Network Analysis in a Neural Learning Internet Agent

Stefan Wermter; Garen Arevian; Christo Panchev

Collaboration


Dive into Garen Arevian's collaboration.

Top Co-Authors

John MacIntyre

University of Sunderland
