
Publication


Featured research published by Jonathan A. Tepper.


Trends in Cognitive Sciences | 2002

Connectionist natural language parsing

Dominic Palmer-Brown; Jonathan A. Tepper; Heather M. Powell

The key developments of two decades of connectionist parsing are reviewed. Connectionist parsers are assessed according to their ability to learn to represent syntactic structures from examples automatically, without being presented with symbolic grammar rules. This review also considers the extent to which connectionist parsers offer computational models of human sentence processing and provide plausible accounts of psycholinguistic data. In considering these issues, special attention is paid to the level of realism, the nature of the modularity, and the type of processing that is to be found in a wide range of parsers.


Connection Science | 2002

A corpus-based connectionist architecture for large-scale natural language parsing

Jonathan A. Tepper; Heather M. Powell; Dominic Palmer-Brown

We describe a deterministic shift-reduce parsing model that combines the advantages of connectionism with those of traditional symbolic models for parsing realistic sub-domains of natural language. It is a modular system that learns to annotate natural language texts with syntactic structure. The parser acquires its linguistic knowledge directly from pre-parsed sentence examples extracted from an annotated corpus. The connectionist modules enable the automatic learning of linguistic constraints and provide a distributed representation of linguistic information that exhibits tolerance to grammatical variation. The inputs and outputs of the connectionist modules represent symbolic information which can be easily manipulated and interpreted and provide the basis for organizing the parse. Performance is evaluated using labelled precision and recall. (For a test set of 4128 words, precision and recall of 75% and 69%, respectively, were achieved.) The work presented represents a significant step towards demonstrating that broad coverage parsing of natural language can be achieved with simple hybrid connectionist architectures which approximate shift-reduce parsing behaviours. Crucially, the model is adaptable to the grammatical framework of the training corpus used and so is not predisposed to a particular grammatical formalism.
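For readers unfamiliar with shift-reduce parsing, the sketch below shows the control loop such a parser approximates; the predict_action scorer is a hypothetical stand-in for the paper's connectionist modules, and the toy reduction rule is illustrative only.

```python
# Minimal shift-reduce skeleton (illustrative only): a stand-in scorer plays
# the role of the connectionist modules, which map symbolic features (stack
# tops and lookahead tags) to a parse action. The rule below is a toy
# heuristic, not the trained network described in the paper.

def predict_action(stack, buffer):
    """Hypothetical stand-in for the connectionist action predictor."""
    # Toy heuristic: reduce determiner+noun pairs to a noun phrase, else shift.
    if len(stack) >= 2 and stack[-2][1] == "DT" and stack[-1][1] == "NN":
        return ("REDUCE", "NP")
    if not buffer:
        return ("REDUCE", "S") if len(stack) > 1 else ("DONE", None)
    return ("SHIFT", None)

def shift_reduce_parse(tagged_words):
    """Parse a list of (word, tag) pairs into a bracketed structure."""
    stack, buffer = [], list(tagged_words)
    while True:
        action, label = predict_action(stack, buffer)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "REDUCE":
            children = [stack.pop(), stack.pop()][::-1]
            stack.append((label, children))
        else:
            return stack

print(shift_reduce_parse([("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")]))
```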


Advances in Econometrics, 19, pp. 71-91 | 2004

Tools for non-linear time series forecasting in economics - an empirical comparison of regime switching vector autoregressive models and recurrent neural networks

Jane M. Binner; Thomas Elger; Birger Nilsson; Jonathan A. Tepper

The purpose of this study is to contrast the forecasting performance of two non-linear models, a regime-switching vector autoregressive model (RS-VAR) and a recurrent neural network (RNN), with that of a linear benchmark VAR model. Our forecasting experiment targets U.K. inflation, using monthly data from 1969 to 2003. The RS-VAR and the RNN perform approximately on par over both monthly and annual forecast horizons. Both non-linear models perform significantly better than the VAR model.
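As a point of reference for the linear benchmark, below is a minimal NumPy sketch of a VAR(1) model estimated by ordinary least squares and used for a one-step-ahead forecast on synthetic data; the study's actual U.K. series, lag orders and model specifications are not reproduced here.

```python
import numpy as np

# Minimal VAR(1) benchmark sketch on synthetic data (illustrative only).
# Coefficients are estimated by least squares, then used for a
# one-step-ahead forecast of the vector series.

rng = np.random.default_rng(0)
T, k = 200, 3                      # 200 observations of 3 synthetic series
A_true = np.array([[0.5, 0.1, 0.0],
                   [0.0, 0.4, 0.2],
                   [0.1, 0.0, 0.3]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=k)

# Stack lagged regressors and solve y_t = A y_{t-1} + e_t by least squares.
X, Y = y[:-1], y[1:]
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = B.T                        # so that forecast = A_hat @ y[-1]

forecast = A_hat @ y[-1]
print("one-step-ahead forecast:", forecast)
```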


Knowledge-Based Systems | 2005

Extracting finite structure from infinite language

T. McQueen; Adrian A. Hopgood; Tony Allen; Jonathan A. Tepper

This paper presents a novel connectionist memory-rule based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map with laterally interconnected neurons. A derivation of functional-equivalence theory is used that allows the model to exploit similarities between the future context of previously memorized sequences and the future context of the current input sequence. This bottom-up learning algorithm binds functionally related neurons together to form states. Results show that the model is able to learn the Reber grammar [A. Cleeremans, D. Servan-Schreiber, J. McClelland, Finite state automata and simple recurrent networks, Neural Computation, 1 (1989) 372-381] perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set.
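The Reber grammar cited above is a small finite-state benchmark; the sketch below generates the kind of positive example strings such a training set contains, using the standard Reber transition table. It is illustrative only, not the paper's data-generation code.

```python
import random

# Reber grammar string generator (the finite-state benchmark from
# Cleeremans et al., 1989). Each entry maps a state to its outgoing
# (symbol, next_state) choices; state 5 is the terminal node.

TRANSITIONS = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}

def generate_reber(rng=random):
    """Return one grammatical Reber string, e.g. 'BTXSE' or 'BPVVE'."""
    state, symbols = 0, ["B"]
    while state != 5:
        symbol, state = rng.choice(TRANSITIONS[state])
        symbols.append(symbol)
    symbols.append("E")
    return "".join(symbols)

random.seed(0)
print([generate_reber() for _ in range(5)])
```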


Archive | 2004

A recurrent self-organizing map for temporal sequence processing

T. McQueen; Adrian A. Hopgood; Jonathan A. Tepper; Tony Allen

We present a novel approach to unsupervised temporal sequence processing in the form of an unsupervised, recurrent neural network based on a self-organizing map (SOM). A standard SOM clusters each input vector irrespective of context, whereas the recurrent SOM presented here clusters each input based on an input vector and a context vector. The latter acts as a recurrent conduit feeding back a 2-D representation of the previous winning neuron. This recurrency allows the network to operate on temporal sequence processing tasks. The network has been applied to the difficult natural language processing problem of position-variant recognition, e.g. recognising a noun phrase regardless of its position within a sentence.
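A minimal NumPy sketch of the recurrent-SOM idea described above: each neuron stores weights for the current input and for a context vector carrying the 2-D grid coordinates of the previous winner. Map size, blending factor and learning rate are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Recurrent-SOM sketch: the winner is chosen by a blended distance over the
# input weights and the context weights, where the context is the 2-D grid
# position of the previous winning neuron.

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 4
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)
w_in = rng.normal(size=(grid_h * grid_w, dim))     # input weights
w_ctx = rng.normal(size=(grid_h * grid_w, 2))      # context (prev-winner) weights
alpha, lr = 0.5, 0.1                               # context blend, learning rate

def step(x, prev_winner_xy):
    """Return the winning neuron index for input x given the previous winner's coords."""
    d = (1 - alpha) * np.sum((w_in - x) ** 2, axis=1) \
        + alpha * np.sum((w_ctx - prev_winner_xy) ** 2, axis=1)
    winner = int(np.argmin(d))
    # Winner-only update (a full SOM would also update its neighbours).
    w_in[winner] += lr * (x - w_in[winner])
    w_ctx[winner] += lr * (prev_winner_xy - w_ctx[winner])
    return winner

prev_xy = np.zeros(2)
for x in rng.normal(size=(10, dim)):               # a toy temporal sequence
    winner = step(x, prev_xy)
    prev_xy = coords[winner]
    print(winner, prev_xy)
```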


European Semantic Web Conference | 2018

Detecting Hate Speech on Twitter Using a Convolution-GRU Based Deep Neural Network

Ziqi Zhang; David Robinson; Jonathan A. Tepper

In recent years, the increasing propagation of hate speech on social media and the urgent need for effective counter-measures have drawn significant investment from governments, companies, and empirical research. Despite a large number of emerging scientific studies to address the problem, a major limitation of existing work is the lack of comparative evaluations, which makes it difficult to assess the contribution of individual works. This paper introduces a new method based on a deep neural network combining convolutional and gated recurrent networks. We conduct an extensive evaluation of the method against several baselines and the state of the art on the largest collection of publicly available Twitter datasets to date. Compared to previously reported results on these datasets, our proposed method is able to capture both word sequence and order information in short texts, and it sets a new benchmark by outperforming them on 6 out of 7 datasets by between 1 and 13% in F1. We also extend the existing dataset collection on this task by creating a new dataset covering different topics.
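A hedged PyTorch sketch of a convolution-plus-GRU text classifier in the spirit of the architecture described (embedding, 1-D convolution, GRU, classifier head); layer sizes, pooling and training details are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Illustrative Conv+GRU classifier: embedding -> 1-D convolution -> max
# pooling -> GRU -> linear classifier over class logits.

class ConvGRUClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, conv_channels=64,
                 gru_hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.gru = nn.GRU(conv_channels, gru_hidden, batch_first=True)
        self.fc = nn.Linear(gru_hidden, num_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids)                  # (batch, seq_len, embed_dim)
        x = self.conv(x.transpose(1, 2))           # (batch, channels, seq_len)
        x = torch.relu(self.pool(x)).transpose(1, 2)
        _, h = self.gru(x)                         # h: (1, batch, gru_hidden)
        return self.fc(h.squeeze(0))               # class logits

model = ConvGRUClassifier()
logits = model(torch.randint(0, 10000, (8, 40)))   # 8 toy tweets of 40 tokens
print(logits.shape)                                # torch.Size([8, 2])
```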


Archive | 1995

Integrating Symbolic and Subsymbolic Architectures for Parsing Arithmetic Expressions and Natural Language Sentences

Jonathan A. Tepper; Heather M. Powell; Dominic Palmer-Brown

Connectionism is a relatively new approach to language processing and has few standard methods for syntax analysis and parsing compared with classical symbolic methods. The interest in connectionism has arisen due to its learning capability, tolerance to noisy input, and ability to generalize from previous examples. Classical rule-based techniques are well understood but tend to be intolerant of minor variations that do not strictly adhere to predefined rules.


Knowledge-Based Systems | 2016

On the importance of sluggish state memory for learning long term dependency

Jonathan A. Tepper; Mahmud S. Shertil; Heather M. Powell

The vanishing gradients problem inherent in Simple Recurrent Networks (SRN) trained with back-propagation has led to a significant shift towards the use of Long Short-Term Memory (LSTM) and Echo State Networks (ESN), which overcome this problem through either second-order error-carousel schemes or different learning algorithms, respectively. This paper re-opens the case for SRN-based approaches by considering a variant, the Multi-recurrent Network (MRN). We show that memory units embedded within its architecture can mitigate the vanishing gradient problem by providing variable sensitivity to recent and more historic information through layer- and self-recurrent links with varied weights, forming a so-called sluggish state-based memory. We demonstrate that an MRN, optimised with noise injection, is able to learn the long-term dependency within a complex grammar induction task, significantly outperforming the SRN, NARX and ESN. Analysis of the internal representations of the networks reveals that the sluggish state-based representations of the MRN are best able to latch on to critical temporal dependencies spanning variable time delays and to maintain distinct and stable representations of all underlying grammar states. Surprisingly, the ESN was unable to fully learn the dependency problem, suggesting that the major shift towards this class of models may be premature.
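A hedged NumPy sketch of the sluggish state-memory idea: several context banks each blend the previous hidden state with their own previous value at a different fixed rate, so some banks respond quickly while others retain a longer history. The bank ratios, sizes and single-step update below are illustrative assumptions, not the trained MRN of the paper.

```python
import numpy as np

# Sluggish state-memory sketch: each context bank mixes the previous hidden
# state with its own previous value using a fixed "sluggishness" ratio; the
# hidden state is then driven by the input and all banks.

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 8
ratios = [0.25, 0.5, 0.75]                     # per-bank self-recurrent ratio
W_in = rng.normal(scale=0.3, size=(n_hidden, n_in))
W_banks = [rng.normal(scale=0.3, size=(n_hidden, n_hidden)) for _ in ratios]

def mrn_step(x, h_prev, banks):
    """One forward step: update the context banks, then the hidden state."""
    new_banks = [r * b + (1 - r) * h_prev for r, b in zip(ratios, banks)]
    pre = W_in @ x + sum(W @ b for W, b in zip(W_banks, new_banks))
    return np.tanh(pre), new_banks

h = np.zeros(n_hidden)
banks = [np.zeros(n_hidden) for _ in ratios]
for x in rng.normal(size=(5, n_in)):           # a toy input sequence
    h, banks = mrn_step(x, h, banks)
print(h.round(3))
```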


International Conference on Innovative Techniques and Applications of Artificial Intelligence | 2004

Extracting Finite Structure from Infinite Language

T. McQueen; Adrian A. Hopgood; Tony Allen; Jonathan A. Tepper

This paper presents a novel connectionist memory-rule based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map [1] with laterally interconnected neurons. A derivation of functional-equivalence theory [2] is used that allows the model to exploit similarities between the future context of previously memorized sequences and the future context of the current input sequence. This bottom-up learning algorithm binds functionally-related neurons together to form states. Results show that the model is able to learn the Reber grammar [3] perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set.


Physica A-statistical Mechanics and Its Applications | 2009

Does money matter in inflation forecasting?

Jane M. Binner; Peter Tino; Jonathan A. Tepper; Richard G. Anderson; Barry E. Jones; Graham Kendall

Collaboration


Dive into Jonathan A. Tepper's collaborations.

Top Co-Authors

Heather M. Powell, Nottingham Trent University
Sin Wee Lee, University of East London
Adrian A. Hopgood, Sheffield Hallam University
T. McQueen, Nottingham Trent University
Tony Allen, Nottingham Trent University
Peter Tino, University of Birmingham