
Publications

Featured research published by Apostolos Nikolaos Refenes.


Neural Computing and Applications | 1993

Currency exchange rate prediction and neural network design strategies

Apostolos Nikolaos Refenes; Magali E. Azema-Barac; L. Chen; S. A. Karoussos

This paper describes a non-trivial application in forecasting currency exchange rates, and its implementation using a multi-layer perceptron network. We show that with careful network design, the backpropagation learning procedure is an effective way of training neural networks for time series prediction. The choice of squashing function is an important design issue in achieving fast convergence and good generalisation performance. We evaluate the use of symmetric and asymmetric squashing functions in the learning procedure, and show that symmetric functions yield faster convergence and better generalisation performance. We derive analytic results to show the conditions under which symmetric squashing functions yield faster convergence, and to quantify the upper bounds on the convergence improvement. The network is evaluated both for long-term forecasting without feedback (i.e. only the forecast prices are used for the remaining trading days), and for short-term forecasting with hourly feedback. The network learns the training set near perfectly, and shows accurate prediction, making at least 22% profit on the last 60 trading days of 1989.
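The paper's exact architecture and data are not reproduced here; the following is a minimal sketch of the design question it raises, using a one-hidden-layer network trained by backpropagation on a synthetic series, with the squashing function switchable between the asymmetric logistic and the symmetric tanh. The series, window length, learning rate and layer sizes are all assumptions.

```python
# Minimal sketch (not the authors' original network): a one-hidden-layer MLP
# trained by backpropagation for one-step-ahead rate forecasting, with the
# squashing function switchable between asymmetric (logistic) and symmetric
# (tanh).  The synthetic series, window length and learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_windows(series, window=10):
    """Slide a fixed window along the series; predict the next value."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def train_mlp(X, y, hidden=8, epochs=500, lr=0.05, symmetric=True):
    squash = np.tanh if symmetric else (lambda z: 1.0 / (1.0 + np.exp(-z)))
    dsquash = (lambda a: 1.0 - a ** 2) if symmetric else (lambda a: a * (1.0 - a))
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, hidden); b2 = 0.0
    for _ in range(epochs):
        h = squash(X @ W1 + b1)          # hidden activations
        pred = h @ W2 + b2               # linear output unit
        err = pred - y                   # forecast error
        # backpropagate the squared-error gradient
        gW2 = h.T @ err / len(y); gb2 = err.mean()
        dh = np.outer(err, W2) * dsquash(h)
        gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return np.mean((pred - y) ** 2)

# Synthetic "exchange rate": a slow trend plus noise (illustrative only).
rate = 1.5 + 0.1 * np.sin(np.linspace(0, 20, 400)) + rng.normal(0, 0.01, 400)
X, y = make_windows(rate)
print("tanh (symmetric) MSE:     ", train_mlp(X, y, symmetric=True))
print("logistic (asymmetric) MSE:", train_mlp(X, y, symmetric=False))
```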


Hawaii International Conference on System Sciences | 1992

Currency exchange rate forecasting by error backpropagation

Apostolos Nikolaos Refenes; Magali E. Azema-Barac; S.A. Karoussos

The paper describes a neural network system for forecasting time series and its application to a non-trivial task in forecasting currency exchange rates. The architecture consists of a two-layer backpropagation network with a fixed number of inputs modelling a window moving along the time series in fixed steps to capture the regularities in the underlying data. Several network configurations are described and the results are analysed. The effect of varying the window and step size is also discussed, as are the effects of overtraining. The error backpropagation network was trained with currency exchange data for the period 1988-89 on hourly updates. The first 200 trading days were used as the training set and the following three months as the test set. The network is evaluated both for long-term forecasting without feedback (i.e. only the forecast prices are used for the remaining trading days) and for short-term forecasting with hourly feedback. By careful network design and analysis of the training set, the backpropagation learning procedure is an effective way of forecasting time series. The network learns the training set near perfectly and shows accurate prediction, making at least 20% profit on the last 60 trading days of 1989.
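As a rough illustration of the windowing scheme described above (not the authors' code), the sketch below builds input/target pairs by sliding a fixed-size window along an hourly price series in fixed steps; the window size, step, and the hours-per-day figure in the usage comment are assumptions.

```python
# A minimal sketch of the windowing scheme: a fixed-size input window moved
# along the hourly series in fixed steps, each window paired with the next
# price as the training target.  Window and step sizes are assumptions.
import numpy as np

def windowed_dataset(series, window_size=24, step=1):
    """Return (inputs, targets): each input is `window_size` consecutive
    hourly prices; the target is the price immediately after the window."""
    series = np.asarray(series, dtype=float)
    starts = range(0, len(series) - window_size, step)
    X = np.array([series[s:s + window_size] for s in starts])
    y = np.array([series[s + window_size] for s in starts])
    return X, y

# Hypothetical usage: the first 200 trading days of hourly quotes as the
# training set (assuming, say, 8 quotes per trading day):
# X_train, y_train = windowed_dataset(hourly_rates[:200 * 8], window_size=24, step=1)
```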


Microprocessing and Microprogramming | 1991

Histological image understanding by error backpropagation

Apostolos Nikolaos Refenes; Cesare Alippi

The paper describes a non-trivial application from histopathology, and its implementation using a multi-layer perceptron network. We show that with careful network design, the backpropagation learning procedure is an effective way of training neural networks for histological image understanding. The choice of squashing function is an important design issue in achieving fast convergence and good generalisation performance. We evaluate the use of symmetric and asymmetric squashing functions in the learning procedure, and show that symmetric functions yield faster convergence and 100% generalisation performance. We derive analytic results to show the conditions under which symmetric squashing functions yield faster convergence, and to quantify the upper bounds on the convergence improvement.
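The paper's analytic conditions are not reproduced here, but a small numeric illustration of the usual intuition may help: a logistic unit outputs only positive values, so the activations feeding the next layer are far from zero-mean, whereas tanh activations are roughly centred. The inputs and layer sizes below are invented.

```python
# A small numeric illustration (not from the paper) of why symmetric squashing
# can converge faster: logistic units output only positive values, so the
# weights feeding the next layer all see same-signed activations, whereas tanh
# activations are roughly zero-mean.  Inputs and sizes are made up.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))                 # made-up input features
W = rng.normal(0, 0.5, (16, 12))               # random first-layer weights

logistic = 1.0 / (1.0 + np.exp(-(X @ W)))
symmetric = np.tanh(X @ W)

print("mean logistic activation:", logistic.mean())   # ~0.5, all positive
print("mean tanh activation:    ", symmetric.mean())  # ~0, sign varies
```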


Microprocessing and Microprogramming | 1993

Removal of catastrophic noise in hetero-associative training samples

Eran Tuv; Apostolos Nikolaos Refenes

In many applications, sensor failures, recording errors, and source limitations can affect data collection to the extent that a significant proportion of the training set consists of “malicious” training vectors. We present a method for detecting malicious vectors in hetero-associative training samples. We propose a general metric to quantify maliciousness and investigate four methods for dealing with the problem. We present an algorithm which permits the incremental augmentation of the noise-free part of the data set (cf. stepwise clean-up), and show that it is in general superior to other possible techniques. In particular, we show that the algorithm yields faster convergence and better generalisation for small percentages of catastrophic noise in the training sample.
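A hedged sketch of the stepwise clean-up idea described above: start from a small seed of trusted input/output pairs, fit a model on the current clean pool, and admit further vectors only when their prediction error falls below a threshold. The least-squares stand-in for the network, the threshold, and the seed size are illustrative assumptions, not the paper's maliciousness metric.

```python
# Stepwise clean-up sketch for hetero-associative pairs: X is an (N, d) array
# of input vectors and Y an (N, k) array of target vectors.  All specifics
# (linear model, threshold, seed size) are assumptions for illustration.
import numpy as np

def stepwise_cleanup(X, Y, seed_size=20, threshold=0.5, rounds=10):
    clean = list(range(seed_size))                 # indices assumed trustworthy
    candidates = set(range(seed_size, len(X)))
    for _ in range(rounds):
        # least-squares fit on the current clean pool (stand-in for the network)
        W, *_ = np.linalg.lstsq(X[clean], Y[clean], rcond=None)
        errors = np.linalg.norm(X @ W - Y, axis=1)
        admitted = {i for i in candidates if errors[i] < threshold}
        if not admitted:                           # no vector passes: stop growing
            break
        clean.extend(sorted(admitted))
        candidates -= admitted
    return clean          # indices of the incrementally grown noise-free subset
```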


Microprocessing and Microprogramming | 1992

Sound recognition and optimal neural network design

Apostolos Nikolaos Refenes; E. B. Chan

This paper describes a non-trivial application from sound recognition, and its implementation using the constructive learning procedure CLS+. We show that CLS+ produces near-optimal network design and with careful training it achieves generalisation results of 87.3% on non-noisy data, and 81.6% on noisy data. This compares well with conventional approaches such as the Rabsam method, which gives 79.2% and 78.9% respectively, and also with fixed geometry networks.
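CLS+ itself is not reproduced here; the sketch below only illustrates constructive network design in the same spirit: hidden units are added one at a time, the network is retrained, and growth stops when held-out accuracy stops improving. The scikit-learn model, data split, and stopping rule are assumptions.

```python
# A hedged sketch of constructive network design (not the CLS+ algorithm):
# grow the hidden layer one unit at a time and stop when validation accuracy
# plateaus.  Model, split and stopping rule are illustrative assumptions.
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def grow_network(X, y, max_hidden=32, patience=3):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
    best_acc, best_net, stale = 0.0, None, 0
    for hidden in range(1, max_hidden + 1):        # add one hidden unit per step
        net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                            random_state=0).fit(X_tr, y_tr)
        acc = net.score(X_val, y_val)
        if acc > best_acc:
            best_acc, best_net, stale = acc, net, 0
        else:
            stale += 1
            if stale >= patience:                  # generalisation has plateaued
                break
    return best_net, best_acc
```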


Microprocessing and Microprogramming | 1989

PARLE: A language for expressing parallelism and integrating symbolic and numeric computations

Eugeniusz Eberbach; Stephen C. McCabe; Apostolos Nikolaos Refenes

The Parallel Architectures Research Language PARLE, described here, is the higher-level component of the SPAN Kernel System, in which it acts as a compilation target. Support for integrated symbolic and numeric processing is provided through the facility to treat lists and arrays as synonymous data structures known as n-ary lists; by permitting the use of asynchronous processes; and by allowing a high degree of dynamicity in the creation of local processes. In addition, PARLE is a procedural programming language, and hence shares the control-flow attributes of present-day numeric languages. Support for parallelism is provided through simple control mechanisms to specify parallelism both at the statement level and at the processor level. This is achieved by a capability to explicitly define processors and to treat such processor definitions as a means of grouping code and data definitions. To enable inter-processor communication, all processors are treated as residing within a shared address space and communicate using primitives for message-passing.
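PARLE's own syntax is not shown here; the Python analogy below only mirrors the concepts named above: explicitly defined processors that group code and data, run asynchronously, and communicate by message passing within a shared address space. The names and message format are invented for illustration.

```python
# Python analogy only, not PARLE code: "processors" that group code and data,
# run asynchronously, and exchange messages in a shared address space.
import threading, queue

class Processor:
    """Groups code and data, runs as an asynchronous process, owns a mailbox."""
    def __init__(self, name, behaviour):
        self.name, self.behaviour = name, behaviour
        self.mailbox = queue.Queue()
        self.thread = threading.Thread(target=self.behaviour, args=(self,))

    def start(self):
        self.thread.start()

    def send(self, target, message):                 # message-passing primitive
        target.mailbox.put((self.name, message))

def producer_behaviour(proc):
    proc.send(consumer, [1, 2, 3])                   # an "n-ary list" payload

def consumer_behaviour(proc):
    sender, payload = proc.mailbox.get()             # blocks until a message arrives
    print(f"{proc.name} received {payload} from {sender}")

consumer = Processor("consumer", consumer_behaviour)
producer = Processor("producer", producer_behaviour)
consumer.start(); producer.start()                   # both run concurrently
consumer.thread.join(); producer.thread.join()
```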


Future Generation Computer Systems | 1985

Fifth generation and VLSI architectures

Philip C. Treleaven; Apostolos Nikolaos Refenes

Most Western Governments (USA, Japan, EEC, etc.) have now launched national programmes to develop computer systems for use in the 1990s. These so-called Fifth Generation computers are viewed as “knowledge” processing systems which support the symbolic computation underlying Artificial Intelligence applications. The major driving force in Fifth Generation computer design is to efficiently support very high level programming languages (i.e. VHLL architecture). Historically, however, commercial VHLL architectures have been largely unsuccessful. The driving force in computer designs has principally been advances in hardware, which at the present time means architectures to exploit very large scale integration (i.e. VLSI architecture). This paper examines VHLL architectures and VLSI architectures and their probable influences on Fifth Generation computers. Interestingly, the major problem for both architecture classes is parallelism: how to orchestrate a single parallel computation so that it can be distributed across an ensemble of processors.


Microprocessing and Microprogramming | 1991

The design and implementation of VOOM: a parallel virtual Object Oriented machine

A.T. Balou; Apostolos Nikolaos Refenes

The architecture of a highly-parallel general-purpose computer system supporting the Object-Oriented model of computation is presented. The basic features of the architecture are described with the aid of a low-level Virtual Machine, which has served as a basis for hardware implementations. The proposed parallel architecture consists of self-contained computers interconnected through a packet-switching network. Each node is a microcomputer comprising: (a) an execution unit, which provides direct support to the object-oriented style by processing messages; (b) a pre-fetch unit which realizes an early-bind scheme; (c) a memory management unit which is responsible for object management and memory recycling; and (d) a communication unit, which implements inter-node message passing and intra-node scheduling.
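The following is a rough structural sketch, not the VOOM design itself: the four units listed above are modelled as plain Python classes and wired into one node, purely to show how they relate. Every interface and behaviour here is an assumption.

```python
# Structural sketch only (all interfaces invented): the four per-node units
# described in the abstract, wired into a single Node object.
from queue import Queue

class ExecutionUnit:                       # (a) processes messages sent to objects
    def execute(self, obj, method, args):
        return getattr(obj, method)(*args)

class PrefetchUnit:                        # (b) early-binds a message to its target method
    def early_bind(self, obj, selector):
        return selector if hasattr(obj, selector) else None

class MemoryManagementUnit:                # (c) object allocation and recycling
    def __init__(self): self.heap = {}
    def allocate(self, oid, obj): self.heap[oid] = obj
    def recycle(self, oid): self.heap.pop(oid, None)

class CommunicationUnit:                   # (d) inter-node messages and local scheduling
    def __init__(self): self.inbox = Queue()
    def deliver(self, message): self.inbox.put(message)

class Node:                                # one self-contained computer on the network
    def __init__(self):
        self.execution, self.prefetch = ExecutionUnit(), PrefetchUnit()
        self.memory, self.comms = MemoryManagementUnit(), CommunicationUnit()

    def step(self):
        """Take one message (oid, selector, args) from the inbox and run it."""
        oid, selector, args = self.comms.inbox.get()
        obj = self.memory.heap[oid]
        method = self.prefetch.early_bind(obj, selector)
        return self.execution.execute(obj, method, args)
```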


International Parallel and Distributed Processing Symposium | 1990

Optimizing connectionist datasets with ConSTrainer

Apostolos Nikolaos Refenes

In many connectionist applications, training datasets are usually selected randomly from databases. Such datasets are not guaranteed to represent the entire task domain fairly, and there is often a requirement to pre-process the dataset for compression and normalization. ConSTrainer is a window-based toolkit dedicated to the task of collecting and optimizing datasets for training connectionist networks. One of the most important features of ConSTrainer is its support for optimizing and validating training datasets. Two types of optimization features are examined in this paper: malicious training vector detection, and dataset compression. This paper describes ConSTrainer's facilities for optimizing connectionist datasets and demonstrates their utilization in a non-trivial application for diagnostic decision support in histopathology.
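A hedged sketch of the two pre-processing steps named above as they might look on a raw training set: per-feature normalization and compression by dropping near-duplicate vectors. The tolerance and scaling are assumptions; ConSTrainer's own algorithms are not reproduced here.

```python
# Illustrative pre-processing only (not ConSTrainer's algorithms): scale each
# feature into [0, 1] and compress the set by dropping near-duplicate vectors.
import numpy as np

def normalise(X):
    """Scale every feature into [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def compress(X, tol=0.05):
    """Keep a vector only if it is not within `tol` of an already kept one."""
    kept = []                               # indices of representative vectors
    for i, row in enumerate(X):
        if all(np.linalg.norm(row - X[j]) > tol for j in kept):
            kept.append(i)
    return X[kept]
```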


Microprocessing and Microprogramming | 1990

Message passing via singly-buffered channels: an efficient & flexible communications control mechanism

Apostolos Nikolaos Refenes

Message-passing via singly-buffered channels provides an efficient and safe mechanism for controlling the communication and synchronisation between concurrent processes. Message-passing via singly-buffered channels is a symmetric communications mechanism that permits an arbitrary number of processes to be synchronised in one of three complementary ways: a common handshake, a singly-buffered receive action, or a singly-buffered send action. This is a generalisation of the usual approach employed in languages like CSP and Ada, in which communication is asymmetric and restricted to involve only two processes. A formal description of the mechanism is given and a generic implementation strategy is described.
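A minimal sketch of the idea, assuming a capacity-one queue is an adequate stand-in for a singly-buffered channel: a send blocks only while the single buffer slot is full and a receive only while it is empty, so sender and receiver need not rendezvous as in CSP- or Ada-style unbuffered communication.

```python
# Capacity-one queue as a stand-in for a singly-buffered channel (assumption):
# put() blocks only while the slot is occupied, get() only while it is empty.
import queue, threading

channel = queue.Queue(maxsize=1)           # the single buffer slot

def sender():
    for value in range(3):
        channel.put(value)                 # blocks only if the slot is occupied
        print("sent", value)

def receiver():
    for _ in range(3):
        value = channel.get()              # blocks only if the slot is empty
        print("received", value)

t1, t2 = threading.Thread(target=sender), threading.Thread(target=receiver)
t1.start(); t2.start(); t1.join(); t2.join()
```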

Collaboration


Dive into Apostolos Nikolaos Refenes's collaboration.

Top Co-Authors

A.T. Balou (University College London)
Eugene Eberbach (University College London)
E. B. Chan (University College London)
Eran Tuv (University College London)
L. Chen (University College London)
S. A. Karoussos (Queen Mary University of London)