

Publications


Featured research published by Pei Ling Lai.


International Journal of Neural Systems | 2000

Kernel and Nonlinear Canonical Correlation Analysis

Pei Ling Lai; Colin Fyfe

We review a neural implementation of the statistical technique of Canonical Correlation Analysis (CCA) and extend it to nonlinear CCA. We then derive the method of kernel-based CCA and compare the two methods on real and artificial data sets before applying both to the blind separation of sources.
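As a concrete illustration of the kernel route, here is a generic regularised kernel CCA sketch (not the paper's exact formulation; the RBF kernel choice, the centring step, and the regulariser kappa are our assumptions):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def center(K):
    """Double-centre a kernel matrix (data centred in feature space)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernel_cca(X, Y, gamma=1.0, kappa=0.1):
    """Kernel canonical correlations via a regularised eigenproblem."""
    Kx = center(rbf_kernel(X, X, gamma))
    Ky = center(rbf_kernel(Y, Y, gamma))
    n = Kx.shape[0]
    # One common regularisation: add kappa*I inside the inverses.
    M = np.linalg.solve(Kx + kappa * np.eye(n), Ky) @ \
        np.linalg.solve(Ky + kappa * np.eye(n), Kx)
    vals = np.linalg.eigvals(M).real
    # Eigenvalues of M are squared canonical correlations rho^2.
    return np.sqrt(np.clip(np.sort(vals)[::-1], 0.0, 1.0))

# A purely nonlinear relation: y = x^2 has near-zero *linear* correlation
# with x on a symmetric grid, yet kernel CCA exposes a strong dependency.
x = np.linspace(-1.0, 1.0, 60)[:, None]
y = x ** 2
rho = kernel_cca(x, y, gamma=2.0, kappa=0.05)
```

This is exactly the situation in which the kernel method pays off: the dependency is invisible to linear CCA but recovered once both streams are mapped into a feature space.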


Neural Networks | 1999

A neural implementation of canonical correlation analysis

Pei Ling Lai; Colin Fyfe

We derive a new method of performing Canonical Correlation Analysis with artificial neural networks. We demonstrate the network's capabilities on artificial data and then compare its effectiveness with that of a standard statistical method on real data. The network succeeds in two situations where standard statistical techniques are not effective: where correlations stretch over three data sets, and where the maximum nonlinear correlation is greater than any linear correlation. The network is also applied to Becker's (Network: Computation in Neural Systems, 1996, 7:7-31) random dot stereogram data and shown to be extremely effective at detecting shift information.
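A minimal online sketch of a constrained Hebbian rule of this flavour (the learning rates, the Lagrange-multiplier update, and the synthetic two-stream data are our simplifications, not necessarily the paper's exact rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two data streams sharing a common source z in their first component.
n = 5000
z = rng.standard_normal(n)
x1 = np.stack([z + 0.2 * rng.standard_normal(n), rng.standard_normal(n)], axis=1)
x2 = np.stack([z + 0.2 * rng.standard_normal(n), rng.standard_normal(n)], axis=1)

w1 = 0.1 * rng.standard_normal(2)   # weights for stream 1
w2 = 0.1 * rng.standard_normal(2)   # weights for stream 2
l1 = l2 = 1.0                       # multipliers enforcing E[y^2] = 1
eta, eta0 = 1e-3, 1e-3

for _ in range(30):                 # epochs over the data
    for i in range(n):
        y1, y2 = w1 @ x1[i], w2 @ x2[i]
        # Hebbian cross-term pulls y1 toward y2; the anti-Hebbian
        # term -l*y scaled by the multiplier keeps the output bounded.
        w1 += eta * x1[i] * (y2 - l1 * y1)
        w2 += eta * x2[i] * (y1 - l2 * y2)
        # Multiplier update drives E[y^2] toward 1.
        l1 += eta0 * (y1 * y1 - 1.0)
        l2 += eta0 * (y2 * y2 - 1.0)

corr = np.corrcoef(x1 @ w1, x2 @ w2)[0, 1]
```

At the fixed point both projections pick out the shared component, so `corr` approaches the true canonical correlation of the two streams.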


International Symposium on Neural Networks | 2007

Reinforcement Learning Reward Functions for Unsupervised Learning

Colin Fyfe; Pei Ling Lai

We extend a reinforcement learning algorithm, REINFORCE [13], which has previously been used to cluster data [10]. By using Gaussian base learners, the method can perform a variety of unsupervised learning tasks such as principal component analysis, exploratory projection pursuit and canonical correlation analysis.


Neurocomputing | 2008

Gaussian processes for canonical correlation analysis

Colin Fyfe; Gayle Leen; Pei Ling Lai

We consider several stochastic process methods for performing canonical correlation analysis (CCA). The first uses a Gaussian process formulation of regression in which we use the current projection of one data set as the target for the other and then repeat with the second projection as the target for adapting the parameters of the first. The second uses a method which relies on probabilistically sphering the data, concatenating the two streams and then performing a probabilistic PCA. The third gets the canonical correlation projections directly without having to calculate the filters first. We also investigate the use of nonlinearity and a method for sparsification of these algorithms.
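The first, alternating scheme can be illustrated with plain least-squares regression standing in for the Gaussian-process regressor (a simplification on our part; with linear regression the alternation is essentially a power iteration that converges to the top canonical pair):

```python
import numpy as np

def alternating_cca(X, Y, iters=50):
    """Top canonical pair by alternating regressions between two views."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    w1 = np.ones(X.shape[1])
    t = X @ w1
    t /= t.std()
    for _ in range(iters):
        # Regress the current projection of X onto Y, then swap roles.
        w2 = np.linalg.lstsq(Y, t, rcond=None)[0]
        s = Y @ w2
        s /= s.std()
        w1 = np.linalg.lstsq(X, s, rcond=None)[0]
        t = X @ w1
        t /= t.std()
    return w1, w2

rng = np.random.default_rng(1)
z = rng.standard_normal(500)
X = np.stack([z, rng.standard_normal(500)], axis=1) + 0.2 * rng.standard_normal((500, 2))
Y = np.stack([z, rng.standard_normal(500)], axis=1) + 0.2 * rng.standard_normal((500, 2))
w1, w2 = alternating_cca(X, Y)
rho = np.corrcoef(X @ w1, Y @ w2)[0, 1]
```

Replacing `lstsq` with a GP regressor gives the nonlinear, probabilistic variant the abstract describes, at the cost of fitting a GP at every alternation.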


Neural Processing Letters | 2000

Simultaneous Identification of Face and Orientation

Pei Ling Lai; Colin Fyfe

We have recently developed an extension of a Principal Component Analysis Artificial Neural Network which we have linked to the statistical technique of Factor Analysis. The learning rule can be shown to be optimal for data sets corrupted by Gaussian noise. We now derive from a new cost function a novel learning rule which is optimal for a standard data set. We compare both rules on a data set composed of 10 faces in a mixture of poses. The first learning rule performs best on the face data. Our conclusion is that the first rule which is optimal for Gaussian noise is more generally useful but that specific rules may be optimal for finding the independent factors underlying specific data sets dependent on the noise in the data set.


International Conference on Pattern Recognition | 2000

Canonical correlation analysis neural networks

Colin Fyfe; Pei Ling Lai

We review a new method of performing canonical correlation analysis (CCA) with artificial neural networks. We have previously (1998, 1999) compared its capabilities with standard statistical methods on simple data sets such as an abstraction of random dot stereograms. In this paper, we show that this original rule is only one of a family of rules which use Hebbian and anti-Hebbian learning to find correlations between data sets. We derive slightly different rules from Becker's information-theoretic criteria and from probabilistic assumptions. We then derive a robust version of this last rule and compare the effectiveness of these rules on a standard data set.


International Conference on Artificial Neural Networks | 2006

The sphere-concatenate method for Gaussian process canonical correlation analysis

Pei Ling Lai; Gayle Leen; Colin Fyfe

We have recently developed several ways of using Gaussian Processes to perform Canonical Correlation Analysis. We review several of these methods and introduce a new way to perform Canonical Correlation Analysis with Gaussian Processes: sphere each data stream separately with probabilistic principal component analysis (PCA), concatenate the sphered data, and re-perform probabilistic PCA. We also investigate the effect of sparsifying this last method and perform a comparative study of these methods.
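A sketch of the sphere-concatenate idea, with plain SVD whitening standing in for probabilistic PCA (our simplification; with exact whitening the joint covariance has eigenvalues 1 plus/minus the canonical correlations, so the top eigenvalue reveals the largest correlation directly):

```python
import numpy as np

def sphere(X):
    """Whiten X so its sample covariance is (approximately) the identity."""
    X = X - X.mean(0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U * np.sqrt(X.shape[0])

rng = np.random.default_rng(2)
n = 500
z = rng.standard_normal(n)
X = np.stack([z, rng.standard_normal(n)], axis=1) + 0.2 * rng.standard_normal((n, 2))
Y = np.stack([z, rng.standard_normal(n)], axis=1) + 0.2 * rng.standard_normal((n, 2))

# Sphere each stream, concatenate, and examine the joint covariance:
# it has block form [[I, R], [R.T, I]], whose eigenvalues are 1 +/- the
# canonical correlations (the singular values of R).
Z = np.hstack([sphere(X), sphere(Y)])
evals = np.linalg.eigvalsh(np.cov(Z.T))
rho = evals[-1] - 1.0
```

The appeal of the method is that both stages are ordinary (probabilistic) PCA, so no coupled eigenproblem between the two streams ever has to be solved explicitly.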


Archive | 2000

Seeking Independence Using Biologically-Inspired ANNs

Pei Ling Lai; Darryl Charles; Colin Fyfe

In this chapter, we make a case for research into the use of biologically-inspired artificial neural networks (BIANNs) in the search for independence. This may seem an odd case to have to make in a book derived from a workshop which followed the Seventh International Conference on Artificial Neural Networks (ICANN99), the most prestigious ANN conference in Europe. However, as this book demonstrates, many of the methods currently being investigated by the neural network community are very different from the biologically-inspired networks which we will advocate.


Neural Processing Letters | 2001

A Family of Canonical Correlation Networks

Pei Ling Lai; Colin Fyfe

We have previously introduced [2, 5] a neural implementation of Canonical Correlation Analysis (CCA). In this paper, we re-derive the learning method from a probabilistic perspective and then show that similar networks can be derived based on the pioneering work of Becker [1] if certain simplifying assumptions are made. Becker has shown that her network is able to find depth information from an abstraction of random dot stereogram data, and so finally we note the similarity of the derived methods to that of Stone [3], which was used with a smooth stereo disparity data set to extract depth information.


Applied Intelligence | 2000

Unsupervised Extraction of Structural Information from High-Dimensional Visual Data

Stephen McGlinchey; Darryl Charles; Pei Ling Lai; Colin Fyfe

We present three unsupervised artificial neural networks for the extraction of structural information from visual data. The ability of each network to represent structured knowledge in a manner easily accessible to human interpretation is illustrated using artificial visual data. These networks are used to collectively demonstrate a variety of unsupervised methods for identifying features in visual data and the structural representation of these features in terms of orientation, temporal and topographical ordering, and stereo disparity.
