Publication


Featured research published by Tom Downs.


International Journal of Approximate Reasoning | 1990

Probabilistic arithmetic. I. Numerical methods for calculating convolutions and dependency bounds

Robert C. Williamson; Tom Downs

Probabilistic arithmetic involves the calculation of the distribution of arithmetic functions of random variables. This work on probabilistic arithmetic began as an investigation into the possibility of adapting existing numerical procedures (developed for fixed numbers) to handle random variables, by replacing the basic operations of arithmetic with the appropriate convolutions. The general idea is similar to interval arithmetic and fuzzy arithmetic. In this paper we present a new and general numerical method for calculating the appropriate convolutions of a wide range of probability distributions. An important feature of the method is the manner in which the probability distributions are represented. We use lower and upper discrete approximations to the quantile function (the quasi-inverse of the distribution function), which ensures that any representation error is always contained within the lower and upper bounds. This method of representation has advantages over others previously proposed, and it fits in well with the idea of dependency bounds. Stochastic dependencies that arise in the course of a sequence of operations on random variables are the most serious obstacle to the simple application of convolution algorithms in the construction of a general probabilistic arithmetic. We examine this dependency error and show how dependency bounds are a possible means of reducing its effect. Dependency bounds are lower and upper bounds on the distribution of a function of random variables that contain the true distribution even when nothing is known of the dependence of the random variables. They are based on the Fréchet inequalities for the joint distribution of a set of random variables in terms of their marginal distributions. We show how the dependency bounds can be calculated numerically using our numerical representation of probability distributions. Examples of the methods developed are presented, and relationships with other work on the numerical handling of uncertainties are briefly described.
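The dependency bounds described here can be computed directly on a numerical grid. The sketch below, assuming NumPy and simple interpolation of the CDFs (not the paper's quantile-domain representation), evaluates the Fréchet-based lower and upper bounds on the distribution of a sum Z = X + Y; whatever the dependence between X and Y, the true CDF of Z must lie between the two curves.

```python
import numpy as np

def dependency_bounds_sum(xs, Fx, ys, Fy, zs):
    """Pointwise Frechet/Makarov-style bounds on the CDF of Z = X + Y,
    valid for *any* dependence between X and Y (a numerical sketch,
    not the paper's quantile-domain algorithm).

    xs, Fx : grid of x values and the CDF of X on that grid
    ys, Fy : grid of y values and the CDF of Y on that grid
    zs     : points at which to bound F_Z
    """
    lower = np.empty_like(zs, dtype=float)
    upper = np.empty_like(zs, dtype=float)
    for k, z in enumerate(zs):
        # F_Y evaluated at z - x for every x on the grid
        Fy_at = np.interp(z - xs, ys, Fy, left=0.0, right=1.0)
        lower[k] = np.max(np.maximum(Fx + Fy_at - 1.0, 0.0))  # sup_x max(..., 0)
        upper[k] = np.min(np.minimum(Fx + Fy_at, 1.0))        # inf_x min(..., 1)
    return lower, upper

# Example: two U(0,1) variables; the bounds are max(z-1, 0) and min(z, 1).
xs = np.linspace(0.0, 1.0, 201)
Fx = xs.copy()                      # CDF of U(0,1)
zs = np.linspace(0.0, 2.0, 9)
lo, hi = dependency_bounds_sum(xs, Fx, xs, Fx, zs)
for z, a, b in zip(zs, lo, hi):
    print(f"z={z:4.2f}   {a:5.3f} <= F_Z(z) <= {b:5.3f}")
```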


Neural Networks | 1993

Constructive higher-order network that is polynomial time

Nicholas J. Redding; Adam Kowalczyk; Tom Downs

Constructive learning algorithms are important because they address two practical difficulties of learning in artificial neural networks. First, it is not always possible to determine the minimal network consistent with a particular problem. Second, algorithms like backpropagation can require networks that are larger than the minimal architecture for satisfactory convergence. Furthermore, constructive algorithms have the advantage that polynomial-time learning is possible if the learning algorithm chooses the network size in a way that simplifies the learning of the problem under consideration. This article considers the representational ability of feedforward networks (FFNs) in terms of the fan-in required by the hidden units of a network. We define network order to be the maximum fan-in of the hidden units of a network. We prove, in terms of the problems they may represent, that a higher-order network (HON) is at least as powerful as any other FFN architecture when the orders of the networks are the same. Next, we present a detailed theoretical development of a constructive, polynomial-time algorithm that will determine an exact HON realization with minimal order for an arbitrary binary or bipolar mapping problem. This algorithm does not have any parameters that need tuning for good performance. We show how an FFN with sigmoidal hidden units can be determined from the HON realization in polynomial time. Finally, simulation results of the constructive HON algorithm are presented for the two-or-more clumps problem, demonstrating that the algorithm performs well when compared with the Tiling and Upstart algorithms.
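To make the notion of network order concrete, the following sketch (illustrative only, not the paper's constructive algorithm) implements a single higher-order, sigma-pi style unit whose fan-in is the size of its largest product term, and uses a single order-2 term to realize XOR on bipolar inputs, a mapping that no order-1 (linear threshold) unit can represent.

```python
import itertools

def hon_unit(x, weights, bias=0.0):
    """One higher-order (sigma-pi) unit: a threshold applied to a weighted
    sum of products of input components.  `weights` maps a tuple of input
    indices (a monomial) to its weight; the largest such tuple is the
    unit's fan-in, i.e. its 'order' in the paper's sense."""
    s = bias
    for idxs, w in weights.items():
        prod = 1.0
        for i in idxs:
            prod *= x[i]
        s += w * prod
    return 1 if s >= 0 else -1

# XOR on bipolar inputs is an order-2 problem: a single product term
# x0*x1 with a negative weight realizes it exactly.
weights = {(0, 1): -1.0}
for x in itertools.product([-1, 1], repeat=2):
    print(x, '->', hon_unit(x, weights))
```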


IEEE Transactions on Neural Networks | 1992

Using random weights to train multilayer networks of hard-limiting units

Peter L. Bartlett; Tom Downs

A gradient descent algorithm suitable for training multilayer feedforward networks of processing units with hard-limiting output functions is presented. The conventional backpropagation algorithm cannot be applied in this case because the required derivatives are not available. However, if the network weights are random variables with smooth distribution functions, the probability of a hard-limiting unit taking one of its two possible values is a continuously differentiable function. In the paper, this is used to develop an algorithm similar to backpropagation, but for the hard-limiting case. It is shown that the computational framework of this algorithm is similar to standard backpropagation, but there is an additional computational expense involved in the estimation of gradients. Upper bounds on this estimation penalty are given. Two examples are presented which indicate that, when this algorithm is used to train networks of hard-limiting units, its performance is similar to that of conventional backpropagation applied to networks of units with sigmoidal characteristics.
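The core smoothing idea can be stated in a few lines. If the weights feeding a hard-limiting unit are independent Gaussians with means mu and a common standard deviation sigma, the pre-activation w . x is itself Gaussian, so the probability that the unit fires is a Gaussian CDF in mu and is differentiable. The sketch below (a minimal illustration under that Gaussian assumption, not the paper's full training algorithm) computes this firing probability and its exact gradient with respect to the mean weights.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def prob_fire(mu, sigma, x):
    """P(w . x > 0) for w ~ N(mu, sigma^2 I): since w.x is Gaussian with
    mean mu.x and std sigma*||x||, the probability is a smooth (Gaussian
    CDF) function of the mean weights -- this is what restores
    differentiability for hard-limiting units."""
    m = np.dot(mu, x)
    s = sigma * np.linalg.norm(x)
    return 0.5 * (1.0 + erf(m / (s * sqrt(2.0))))

def grad_prob_fire(mu, sigma, x):
    """Gradient of the firing probability w.r.t. the mean weights mu:
    phi(m/s) * x / s, where phi is the standard normal pdf."""
    m = np.dot(mu, x)
    s = sigma * np.linalg.norm(x)
    return exp(-0.5 * (m / s) ** 2) / (s * sqrt(2.0 * pi)) * x

mu = np.array([0.3, -0.2])
x = np.array([1.0, 1.0])
print(prob_fire(mu, 0.5, x), grad_prob_fire(mu, 0.5, x))
```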


Congress on Evolutionary Computation | 2003

An implementation of genetic algorithms as a basis for a trading system on the foreign exchange market

Andrei Hryshko; Tom Downs

Foreign exchange trading has emerged in recent times as a significant activity in many countries. As with most forms of trading, the activity is influenced by many random factors, so the creation of a system that effectively emulates the trading process is very helpful. In this paper, we try to create such a system, with a genetic algorithm engine emulating trader behaviour on the foreign exchange market and searching for the most profitable trading strategy.
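As a rough illustration of the approach, the sketch below evolves the two window lengths of a hypothetical moving-average crossover rule on a synthetic price series; the encoding, operators, and fitness function are all stand-ins for illustration, not the strategy representation used in the paper.

```python
import random

random.seed(0)

# Synthetic price series (random walk); the paper would use real
# foreign-exchange quotes here.
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] + random.gauss(0, 1))

def profit(genome):
    """Fitness: profit of a toy moving-average crossover rule whose two
    window lengths are encoded in the genome (purely illustrative)."""
    fast, slow = genome
    if fast >= slow:
        return -1e9                      # invalid strategy
    cash, pos = 0.0, 0
    for t in range(slow, len(prices)):
        f = sum(prices[t - fast:t]) / fast
        s = sum(prices[t - slow:t]) / slow
        if f > s and pos == 0:           # buy signal
            pos, cash = 1, cash - prices[t]
        elif f < s and pos == 1:         # sell signal
            pos, cash = 0, cash + prices[t]
    return cash + pos * prices[-1]       # mark open position to market

def evolve(pop_size=20, gens=30):
    pop = [(random.randint(2, 20), random.randint(21, 100))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=profit, reverse=True)
        parents = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(p) for p in zip(a, b)]   # uniform crossover
            if random.random() < 0.2:                       # mutation
                child[random.randrange(2)] += random.randint(-3, 3)
            children.append((max(2, child[0]), max(21, child[1])))
        pop = parents + children
    return max(pop, key=profit)

best = evolve()
print('best (fast, slow) windows:', best, 'profit:', round(profit(best), 2))
```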


Monthly Notices of the Royal Astronomical Society | 2005

Applying machine learning to catalogue matching in astrophysics

David Rohde; Michael J. Drinkwater; Marcus Gallagher; Tom Downs; Marianne T. Doyle

We present the results of applying automated machine learning techniques to the problem of matching different object catalogues in astrophysics. In this study, we take two partially matched catalogues where one of the two catalogues has a large positional uncertainty. The two catalogues we used here were taken from the H I Parkes All Sky Survey (HIPASS) and SuperCOSMOS optical survey. Previous work had matched 44 per cent (1887 objects) of HIPASS to the SuperCOSMOS catalogue. A supervised learning algorithm was then applied to construct a model of the matched portion of our catalogue. Validation of the model shows that we achieved a good classification performance (99.12 per cent correct). Applying this model to the unmatched portion of the catalogue found 1209 new matches. This increases the catalogue size from 1887 matched objects to 3096. The combination of these procedures yields a catalogue that is 72 per cent matched.
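The workflow described is: train a supervised model on the matched portion of the catalogue, validate it, then apply it to the unmatched portion. The sketch below follows those three steps using scikit-learn, a random forest, and synthetic stand-in features; the paper does not prescribe this particular classifier or feature set, so treat every concrete choice here as an assumption.

```python
# A minimal sketch of the train-on-matched / apply-to-unmatched workflow,
# with hypothetical features (e.g. positional offset, magnitude difference).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for the matched portion of the catalogue: feature vectors for
# candidate (HIPASS, SuperCOSMOS) pairs plus a match/non-match label.
X_matched = rng.normal(size=(1887, 4))
y_matched = (X_matched[:, 0] + 0.5 * X_matched[:, 1] > 0).astype(int)

# 1. Fit a supervised model on the matched portion.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_matched, y_matched)

# 2. Validate it (the paper reports 99.12 per cent correct on validation).
print('CV accuracy:', cross_val_score(clf, X_matched, y_matched, cv=5).mean())

# 3. Apply the model to the unmatched portion to propose new matches.
X_unmatched = rng.normal(size=(2400, 4))
print('proposed new matches:', clf.predict(X_unmatched).sum())
```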


Neurocomputing | 2003

Boosting the HONG network

Ajantha S. Atukorale; Tom Downs; Ponnuthurai N. Suganthan

This paper gives a brief description of the hierarchical overlapped neural gas (HONG) architecture described in Atukorale and Suganthan (Neurocomputing 35 (2000) 165). The learning algorithm it uses is a mixed unsupervised/supervised method, with most of the learning being unsupervised. The architecture generates multiple classifications for every data pattern presented, and combines them to obtain the final classification. The main objective of this paper is to show how boosting can be used to improve the performance of the HONG classifier.
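Boosting in this setting follows the usual AdaBoost.M1 pattern: reweight the training data after each round so that later base classifiers concentrate on the examples earlier ones got wrong. The sketch below shows that loop with decision stumps standing in for the HONG base classifier (an assumption made to keep the example self-contained, not the paper's setup).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, rounds=20):
    """Generic AdaBoost.M1 loop, labels in {-1, +1}; decision stumps
    stand in for the HONG base classifier."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # example weights
    learners, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()
        if err >= 0.5 or err == 0.0:     # weak-learning condition violated
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # up-weight the mistakes
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def predict(learners, alphas, X):
    votes = sum(a * l.predict(X) for l, a in zip(learners, alphas))
    return np.sign(votes)

# Toy usage on a noisy, roughly linearly separable problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] + X[:, 1] + rng.normal(0, 0.3, 300) > 0, 1, -1)
ls, As = adaboost(X, y)
print('training accuracy:', (predict(ls, As, X) == y).mean())
```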


International Symposium on Neural Networks | 2000

On the performance of the HONG network for pattern classification

Ajantha S. Atukorale; Ponnuthurai N. Suganthan; Tom Downs

A neural network model called the hierarchical overlapped neural gas (HONG) network is introduced and its performance on several datasets is described. In order to obtain improved classification accuracy, the HONG network partitions the input space by projecting the input data onto several different second-layer neural gas networks. This duplication enables the HONG network to generate multiple classifications, in the form of confidence values, for every sample presented; these confidence values are combined to obtain the final classification. Excellent recognition rates for several benchmark datasets are presented.
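One simple way to combine per-network confidence values is to sum the confidence vectors produced by the overlapping second-layer networks and take the argmax, as in the sketch below; this is an illustrative combination rule, and the HONG network's exact scheme may differ.

```python
import numpy as np

def combine_confidences(conf_list):
    """Combine per-network class-confidence vectors into one decision.
    Each element of conf_list is one second-layer network's confidence
    over the classes for a single input pattern; summing and taking the
    argmax is one simple combination rule."""
    total = np.sum(conf_list, axis=0)
    return int(np.argmax(total)), total

# Three overlapping networks score the same pattern over 4 classes.
confs = [np.array([0.1, 0.6, 0.2, 0.1]),
         np.array([0.2, 0.5, 0.2, 0.1]),
         np.array([0.3, 0.2, 0.4, 0.1])]
label, total = combine_confidences(confs)
print('final class:', label, 'combined scores:', total)
```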


Intelligent Data Engineering and Automated Learning | 2004

Boosting the Tree Augmented Naïve Bayes Classifier

Tom Downs; Adelina Tang

The Tree Augmented Naive Bayes (TAN) classifier relaxes the sweeping independence assumptions of the Naive Bayes approach by taking account of conditional probabilities. It does this in a limited sense, by incorporating the conditional probability of each attribute given the class and (at most) one other attribute. The method of boosting has previously proven very effective in improving the performance of Naive Bayes classifiers, and in this paper we investigate its effectiveness when applied to the TAN classifier.
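For reference, a TAN model scores a class c as log P(c) + sum_i log P(x_i | c, x_parent(i)), where each attribute has at most one augmenting parent besides the class. The sketch below evaluates that factorization for a tiny hypothetical model with hand-set probability tables (the tables and tree structure are made up purely for illustration).

```python
import math

def tan_log_posterior(x, prior, cpts, parent):
    """Unnormalised log-posterior under a TAN model:
    log P(c) + sum_i log P(x_i | c, x_parent(i)).
    cpts[i] maps (class, parent value or None, x_i value) -> probability;
    parent[i] is attribute i's single augmenting parent index, or None
    for the root of the attribute tree."""
    scores = {}
    for c, p in prior.items():
        s = math.log(p)
        for i, xi in enumerate(x):
            pv = None if parent[i] is None else x[parent[i]]
            s += math.log(cpts[i][(c, pv, xi)])
        scores[c] = s
    return scores

# Tiny hypothetical model: two binary attributes, attribute 1's parent is 0.
prior = {0: 0.6, 1: 0.4}
parent = [None, 0]
cpts = [
    {(c, None, v): p for c, v, p in
     [(0, 0, 0.8), (0, 1, 0.2), (1, 0, 0.3), (1, 1, 0.7)]},
    {(0, 0, 0): 0.9, (0, 0, 1): 0.1, (0, 1, 0): 0.4, (0, 1, 1): 0.6,
     (1, 0, 0): 0.5, (1, 0, 1): 0.5, (1, 1, 0): 0.2, (1, 1, 1): 0.8},
]
print(tan_log_posterior([1, 1], prior, cpts, parent))
```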


IEEE Transactions on Neural Networks | 1993

Comments on "Optimal training of thresholded linear correlation classifiers" [with reply]

David Lovell; Ah Chung Tsoi; Tom Downs; Thomas H. Hildebrandt

A difficulty with the application of the closed-form training algorithm for the neocognitron proposed by T.H. Hildebrandt (ibid., vol. 2, p. 557-88, Nov. 1991) is reported. In applying this algorithm, the commenters have observed that S-cells frequently fail to respond to features that they have been trained to extract. Results are presented which indicate that this training-vector rejection is an important factor in the overall classification performance of the neocognitron trained using Hildebrandt's procedure. In reply, Hildebrandt explains that the negative results obtained by the commenters are not specific to the proposed algorithm and are easily explained in terms of set theory.


Intelligent Data Engineering and Automated Learning | 2004

Improving support vector solutions by selecting a sequence of training subsets

Tom Downs; Jianxiong Wang

In this paper we demonstrate that it is possible to gradually improve the performance of support vector machine (SVM) classifiers by using a genetic algorithm to select a sequence of training subsets from the available data. Performance improvement is possible because the SVM solution generally lies some distance away from the Bayes optimal in the space of learning parameters. We illustrate performance improvements on a number of benchmark data sets.
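A minimal version of the idea, sketched below under assumed details (bitmask encoding, one-point crossover, validation accuracy as fitness, scikit-learn's SVC), searches over subsets of the training data with a genetic algorithm and keeps the subset whose trained SVM scores best on held-out data.

```python
import random
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

random.seed(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5,
                                            random_state=0)

def fitness(mask):
    """Validation accuracy of an SVM trained only on the selected subset."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if len(set(y_tr[idx])) < 2:
        return 0.0                       # need both classes present
    clf = SVC(kernel='rbf').fit(X_tr[idx], y_tr[idx])
    return clf.score(X_val, y_val)

def ga_select(pop_size=16, gens=15):
    n = len(y_tr)
    pop = [[random.random() < 0.5 for _ in range(n)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.3:            # bit-flip mutation
                j = random.randrange(n)
                child[j] = not child[j]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga_select()
print('subset size:', sum(best),
      'validation accuracy:', round(fitness(best), 3))
```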

Collaboration


Dive into Tom Downs's collaboration.

Top Co-Authors

Andrei Hryshko (University of Queensland)
Ian A. Wood (University of Queensland)
Ponnuthurai N. Suganthan (Nanyang Technological University)
Ah Chung Tsoi (University of Queensland)
David Lovell (Commonwealth Scientific and Industrial Research Organisation)