
Publication


Featured research published by Peter Tino.


IEEE Transactions on Neural Networks | 2011

Minimum Complexity Echo State Network

Ali Rodan; Peter Tino

Reservoir computing (RC) refers to a new class of state-space models with a fixed state transition structure (the reservoir) and an adaptable readout from the state space. The reservoir is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be exploited by the reservoir-to-output readout mapping. The field of RC has been growing rapidly with many successful applications. However, RC has been criticized for not being principled enough. Reservoir construction is largely driven by a series of randomized model-building stages, with both researchers and practitioners having to rely on trial and error. To initialize a systematic study of the field, we concentrate on one of the most popular classes of RC methods, namely echo state networks (ESNs), and ask: What is the minimal complexity of reservoir construction for obtaining competitive models, and what is the memory capacity (MC) of such simplified reservoirs? On a number of widely used time series benchmarks of different origin and characteristics, as well as through a theoretical analysis, we show that a simple, deterministically constructed cycle reservoir is comparable to the standard echo state network methodology. The (short-term) memory capacity of linear cyclic reservoirs can be made arbitrarily close to the proved optimal value.
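
To make the construction concrete, here is a minimal NumPy sketch of a simple cycle reservoir with a ridge-regression readout. The reservoir weight r, input scale v, washout length, and the pi-digit sign pattern are illustrative choices in the spirit of the paper, not the authors' reference implementation.

```python
import numpy as np

def cycle_reservoir_esn(u, y, n_res=50, r=0.9, v=0.5, ridge=1e-6, washout=100):
    """Simple-cycle-reservoir ESN sketch: unit i feeds unit i+1 with a single
    shared weight r; all input weights share magnitude v, with a deterministic
    sign pattern taken from the digits of pi. Illustrative, not the authors'
    reference implementation."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    T = len(u)
    pi_digits = [int(d) for d in "14159265358979323846"]
    signs = np.array([1.0 if pi_digits[i % len(pi_digits)] < 5 else -1.0
                      for i in range(n_res)])
    w_in = v * signs
    x = np.zeros(n_res)
    X = np.zeros((T, n_res))
    for t in range(T):
        # Cycle: each unit receives its predecessor's previous activation.
        x = np.tanh(r * np.roll(x, 1) + w_in * u[t])
        X[t] = x
    # Ridge-regression readout, discarding the initial transient.
    Xw, yw = X[washout:], y[washout:]
    W_out = np.linalg.solve(Xw.T @ Xw + ridge * np.eye(n_res), Xw.T @ yw)
    return X @ W_out
```

Because the reservoir is fully deterministic, two runs on the same data give identical models, which is exactly the reproducibility argument the paper makes against randomized reservoir construction.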


IEEE Transactions on Neural Networks | 2001

Financial volatility trading using recurrent neural networks

Peter Tino; Christian Schittenkopf; Georg Dorffner

We simulate daily trading of straddles on financial indexes. The straddles are traded based on predictions of daily volatility differences in the indexes. The main predictive models studied are recurrent neural networks (RNNs). Such applications have often been studied in isolation. However, due to the special character of daily financial time series, it is difficult to make full use of RNN representational power. Recurrent networks either tend to overfit noisy data or behave like finite-memory sources with shallow memory; they hardly beat classical fixed-order Markov models. To overcome data nonstationarity, we use a special technique that combines sophisticated models fitted on a larger data set with a fixed set of simple-minded symbolic predictors using only recent inputs. Finally, we compare our predictors with the GARCH family of econometric models designed to capture time-dependent volatility structure in financial returns. GARCH models have been used to trade volatility. Experimental results show that while GARCH models cannot generate any significantly positive profit, by careful use of recurrent networks or Markov models, market makers can generate a statistically significant excess profit; but then there is no reason to prefer RNNs over the much simpler and more straightforward Markov models. We argue that any report containing RNN results on financial tasks should be accompanied by results achieved by simple finite-memory sources combined with simple techniques to fight nonstationarity in the data.
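
The finite-memory baseline the abstract advocates is easy to state in code. Below is a hedged sketch of a fixed-order Markov predictor over symbolized volatility differences; the up/down symbolization, the order, and the add-one smoothing are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from collections import Counter, defaultdict

def fit_markov(symbols, order=2):
    """Count-based fixed-order Markov model over a symbol sequence."""
    counts = defaultdict(Counter)
    for t in range(order, len(symbols)):
        counts[tuple(symbols[t - order:t])][symbols[t]] += 1
    return counts

def predict_next(counts, context, alphabet=("down", "up")):
    """Next-symbol distribution for a context, with add-one smoothing."""
    c = counts[tuple(context)]
    total = sum(c.values()) + len(alphabet)
    return {s: (c[s] + 1) / total for s in alphabet}

# Symbolize (placeholder) daily volatility differences and predict the
# direction of the next move from the two most recent symbols.
vol_diff = np.random.randn(500)
syms = ["up" if d > 0 else "down" for d in vol_diff]
model = fit_markov(syms, order=2)
print(predict_next(model, syms[-2:]))
```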


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2002

Hierarchical GTM: constructing localized nonlinear projection manifolds in a principled way

Peter Tino; Ian T. Nabney

It has been argued that a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex data sets and that, therefore, a hierarchical visualization system is desirable. In this paper, we extend an existing locally linear hierarchical visualization system, PhiVis, in several directions: 1) we allow for nonlinear projection manifolds, with the Generative Topographic Mapping (GTM) as the basic building block; 2) we introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree, with general training equations derived regardless of the position of the model in the tree; 3) using tools from differential geometry, we derive expressions for local directional curvatures of the projection manifold. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. It enables the user to interactively highlight those data in the ancestor visualization plots that are captured by a child model. We also incorporate into our system a hierarchical, locally selective representation of magnification factors and directional curvatures of the projection manifolds. Such information is important for further refinement of the hierarchical visualization plot, as well as for controlling the amount of regularization imposed on the local models. We demonstrate the principle of the approach on a toy data set and apply our system to two more complex 12- and 18-dimensional data sets.


Machine Learning | 2001

Predicting the Future of Discrete Sequences from Fractal Representations of the Past

Peter Tino; Georg Dorffner

We propose a novel approach for building finite-memory predictive models similar in spirit to variable memory length Markov models (VLMMs). The models are constructed by first transforming the n-block structure of the training sequence into a geometric structure of points in a unit hypercube, such that the longer the common suffix shared by any two n-blocks, the closer their point representations lie. Such a transformation embodies a Markov assumption: n-blocks with long common suffixes are likely to produce similar continuations. Prediction contexts are found by detecting clusters in the geometric n-block representation of the training sequence via vector quantization. We compare our model with both classical (fixed-order) and variable memory length Markov models on five data sets with different memory and stochastic components. Fixed-order Markov models (MMs) fail on three large data sets on which the advantage of allowing variable memory length can be exploited. On these data sets, our predictive models have performance superior or comparable to that of VLMMs, yet their construction is fully automatic, which is shown to be problematic in the case of VLMMs. On one data set, VLMMs are outperformed by the classical MMs. On this set, our models perform significantly better than MMs. On the remaining data set, classical MMs outperform the variable context length strategies.
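
The geometric transformation is essentially a chaos-game (iterated function system) construction. The sketch below, with an illustrative contraction ratio k = 0.5 and a four-symbol alphabet mapped to the corners of the unit square, shows how positions whose histories share a long suffix land close together; prediction contexts would then be extracted by vector quantization, as the abstract describes.

```python
import numpy as np

def chaos_game_points(seq, corners, k=0.5):
    """Map each position of a symbol sequence to a point in the unit
    hypercube via the contraction x_t = k*x_{t-1} + (1-k)*corner(s_t).
    Positions whose histories share a long suffix end up close together.
    Sketch of the geometric n-block representation; parameters are
    illustrative."""
    d = corners.shape[1]
    x = np.full(d, 0.5)                # start at the hypercube centre
    pts = np.zeros((len(seq), d))
    for t, s in enumerate(seq):
        x = k * x + (1 - k) * corners[s]
        pts[t] = x
    return pts

# Four-symbol alphabet mapped to the corners of the unit square.
corners = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
seq = np.random.randint(0, 4, size=1000)   # placeholder sequence
pts = chaos_game_points(seq, corners)
# Prediction contexts would be found by vector quantization of pts
# (e.g. k-means), with next-symbol counts collected per cluster.
```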


IEEE Transactions on Neural Networks | 2014

Learning in the Model Space for Cognitive Fault Diagnosis

Huanhuan Chen; Peter Tino; Ali Rodan; Xin Yao

The emergence of large sensor networks has facilitated the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or unformulated. In this paper, we develop an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a sliding window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network confirm the effectiveness of the proposed framework.
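
As a rough illustration of learning in the model space, the sketch below fits a ridge readout on a fixed, shared reservoir for each sliding window and treats the resulting weight vectors as points in the model space; flagging models far from the centre of the healthy models stands in for the one-class learner. The reservoir parameters, window sizes, and distance-to-centre detector are all assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def window_models(signal, n_res=30, win=200, step=50, ridge=1e-4):
    """For each sliding window, fit a ridge readout (predicting the next
    sample) on a fixed shared cycle reservoir; the readout weight vector
    is the window's point in the model space."""
    signal = np.asarray(signal, float)
    x = np.zeros(n_res)
    S = np.zeros((len(signal), n_res))
    for t, u in enumerate(signal):
        x = np.tanh(0.9 * np.roll(x, 1) + 0.5 * u)   # fixed reservoir
        S[t] = x
    models = []
    for start in range(0, len(signal) - win - 1, step):
        X = S[start:start + win]
        y = signal[start + 1:start + win + 1]
        models.append(np.linalg.solve(X.T @ X + ridge * np.eye(n_res),
                                      X.T @ y))
    return np.array(models)

# One-class stand-in: flag models far from the centre of healthy models.
healthy = window_models(np.sin(np.linspace(0, 60, 2000)))
centre = healthy.mean(axis=0)
dists = np.linalg.norm(healthy - centre, axis=1)
threshold = dists.mean() + 3 * dists.std()   # beyond this radius = faulty
```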


IEEE Transactions on Neural Networks | 2013

Incorporating Privileged Information Through Metric Learning

Shereen Fouad; Peter Tino; Somak Raychaudhury; Petra Schneider

In some pattern analysis problems, there exists expert knowledge in addition to the original data involved in the classification process. The vast majority of existing approaches simply ignore such auxiliary (privileged) knowledge. Recently, a new paradigm, learning using privileged information, was introduced in the framework of SVM+. This approach is formulated for binary classification and, as is typical for many kernel-based methods, can scale unfavorably with the number of training examples. While faster training methods and extensions of SVM+ to multiclass problems are possible, in this paper we present a more direct, novel methodology for incorporating valuable privileged knowledge in the model construction phase, primarily formulated in the framework of generalized matrix learning vector quantization. This is done by changing the global metric in the input space, based on distance relations revealed by the privileged information. Hence, unlike in SVM+, any convenient classifier can be used after such metric modification, bringing more flexibility to the problem of incorporating privileged information during training. Experiments demonstrate that the manipulation of an input space metric based on privileged data improves classification accuracy. Moreover, our methods can achieve competitive performance against the SVM+ formulations.
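
One way to picture the metric modification: learn a global linear map from the input space into a space whose distances mimic those among the privileged features, then hand the transformed data to any off-the-shelf classifier (e.g. kNN). The least-squares map below is a loose sketch of that idea under stated assumptions; the paper's actual formulation is generalized matrix learning vector quantization, not this regression.

```python
import numpy as np

def privileged_metric(X, P, ridge=1e-3):
    """Fit a global linear map A so that A @ x approximates the privileged
    features of x (available only at training time). Distances
    ||A(x1 - x2)|| then loosely reflect privileged-space distance relations.
    Illustrative sketch, not the paper's GMLVQ formulation."""
    d = X.shape[1]
    A = np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ P).T
    return A

def transformed_dist(A, x1, x2):
    """Modified-metric distance usable by any classifier at test time."""
    return np.linalg.norm(A @ (x1 - x2))

# Example: 100 samples, 5 input features, 3 privileged features (random
# placeholders).
X = np.random.randn(100, 5)
P = np.random.randn(100, 3)
A = privileged_metric(X, P)
print(transformed_dist(A, X[0], X[1]))
```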


IEEE Transactions on Evolutionary Computation | 2012

Improving Generalization Performance in Co-Evolutionary Learning

Siang Yew Chong; Peter Tino; Day Chyi Ku; Xin Yao

Recently, the generalization framework in co-evolutionary learning has been theoretically formulated and demonstrated in the context of game-playing. Generalization performance of a strategy (solution) is estimated using a collection of random test strategies (test cases) by taking the average game outcomes, with confidence bounds provided by Chebyshev's theorem. Chebyshev's bounds have the advantage that they hold for any distribution of game outcomes. However, such a distribution-free framework leads to unnecessarily loose confidence bounds. In this paper, we take advantage of the near-Gaussian nature of average game outcomes and provide tighter bounds based on parametric testing. This enables us to use small samples of test strategies to guide and improve the co-evolutionary search. We demonstrate our approach in a series of empirical studies involving the iterated prisoner's dilemma (IPD) and the more complex Othello game in a competitive co-evolutionary learning setting. The new approach is shown to improve on classical co-evolutionary learning in that we obtain increasingly higher generalization performance using relatively small samples of test strategies. This is achieved without the large performance fluctuations typical of the classical approach. The new approach also leads to faster co-evolutionary search, where we can strictly control the conditions (sample sizes) under which the speedup is achieved (not at the cost of weakening precision in the estimates).
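
The gain from replacing the distribution-free bound with a normal approximation is easy to quantify. A small sketch with illustrative numbers (sigma and the confidence level are assumptions, not the paper's values):

```python
import math

def half_widths(sigma, n, delta=0.05):
    """Confidence half-widths for the mean of n game outcomes with
    standard deviation sigma: distribution-free (Chebyshev) versus
    normal approximation (justified by the near-Gaussian averages)."""
    chebyshev = sigma / math.sqrt(n * delta)   # from P(|err|>=e) <= s^2/(n e^2)
    z = 1.96                                   # two-sided 95% normal quantile
    gaussian = z * sigma / math.sqrt(n)
    return chebyshev, gaussian

print(half_widths(sigma=1.0, n=50))   # Chebyshev ~0.63 vs Gaussian ~0.28
```

At the same sample size, the parametric interval is more than twice as tight here, which is why small samples of test strategies suffice to guide the search.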


Knowledge Discovery and Data Mining | 2013

Model-based kernel for efficient time series analysis

Huanhuan Chen; Fengzhen Tang; Peter Tino; Xin Yao

We present novel, efficient, model-based kernels for time series data rooted in the reservoir computing framework. The kernels are implemented by fitting reservoir models sharing the same fixed, deterministically constructed state transition part to individual time series. The proposed kernels can naturally handle time series of different lengths without the need to specify a parametric model class for the time series. Compared with most time series kernels, our kernels are computationally efficient. We show how the model distances used in the kernel can be calculated analytically or efficiently estimated. Experimental results on synthetic and benchmark time series classification tasks confirm the efficiency of the proposed kernel in terms of both generalization accuracy and computational speed. This paper also investigates on-line reservoir kernel construction for extremely long time series.
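
A hedged sketch of the construction, reusing the fitted-models idea from the fault-diagnosis paper above: each series is driven through the same fixed cycle reservoir, summarized by its ridge readout weights, and an RBF kernel is taken between these weight vectors. The reservoir parameters and the plain Euclidean model distance are simplifying assumptions; the paper derives the model distance more carefully.

```python
import numpy as np

def readout_weights(series, n_res=30, ridge=1e-4):
    """Fit a ridge readout on a fixed, shared cycle reservoir so that each
    time series is summarized by one fixed-size weight vector (its model)."""
    series = np.asarray(series, float)
    x = np.zeros(n_res)
    states = []
    for u in series:
        x = np.tanh(0.9 * np.roll(x, 1) + 0.5 * u)
        states.append(x.copy())
    X = np.array(states[:-1])
    y = series[1:]
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

def model_kernel(s, t, gamma=1.0):
    """RBF kernel in the model space; Euclidean model distance is an
    illustrative simplification."""
    ws, wt = readout_weights(s), readout_weights(t)
    return np.exp(-gamma * np.linalg.norm(ws - wt) ** 2)

# Series of different lengths map to same-size weight vectors, so the
# kernel handles them uniformly.
s1, s2 = np.sin(np.linspace(0, 20, 300)), np.random.randn(250)
print(model_kernel(s1, s2))
```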


IEEE Transactions on Computational Intelligence and AI in Games | 2009

Relationship Between Generalization and Diversity in Coevolutionary Learning

Siang Yew Chong; Peter Tino; Xin Yao

Games have long played an important role in the development and understanding of coevolutionary learning systems. In particular, the search process in coevolutionary learning is guided by strategic interactions between solutions in the population, which can be naturally framed as game playing. We study two important issues in coevolutionary learning, generalization performance and diversity, using games. The first is concerned with the coevolutionary learning of strategies with high generalization performance, that is, strategies that perform well against a large number of test strategies (opponents) that may not have been seen during coevolution. The second is concerned with diversity levels in the population, which may lead the search to strategies with poor generalization performance. It is not known whether there is a relationship between generalization and diversity in coevolutionary learning. This paper investigates whether there is such a relationship through a detailed empirical study. We systematically and quantitatively investigate the impact of various diversity maintenance approaches on the generalization performance of coevolutionary learning using case studies. The problem of the iterated prisoner's dilemma (IPD) game is considered. Unlike past studies, we measure both the generalization performance and the diversity level of the population of evolved strategies. Results from our case studies show that the introduction and maintenance of diversity do not necessarily lead to the coevolutionary learning of strategies with high generalization performance. However, if individual strategies can be combined (e.g., using a gating mechanism), there is the potential to exploit diversity in coevolutionary learning to improve generalization performance. Specifically, when the introduction and maintenance of diversity lead to a speciated population during coevolution, where each specialist strategy is capable of outperforming different opponents, the population as a whole can have a significantly higher generalization performance than individual strategies.


Knowledge Discovery and Data Mining | 2004

A generative probabilistic approach to visualizing sets of symbolic sequences

Peter Tino; Ata Kabán; Yi Sun

There is notable interest in extending probabilistic generative modeling principles to accommodate more complex structured data types. In this paper we develop a generative probabilistic model for visualizing sets of discrete symbolic sequences. The model, a constrained mixture of discrete hidden Markov models, is a generalization of density-based visualization methods previously developed for static data sets. We illustrate our approach on sequences representing web-log data and chorales by J.S. Bach.

Collaboration


Dive into Peter Tino's collaborations.

Top Co-Authors

Xin Yao
University of Science and Technology

Yuan Shen
University of Birmingham

Huanhuan Chen
University of Science and Technology of China

Georg Dorffner
Austrian Research Institute for Artificial Intelligence

Zoe Kourtzi
University of Cambridge

Behzad Bordbar
University of Birmingham

Kerstin Bunte
University of Birmingham