Publication


Featured research published by Lizhong Wu.


Neural Computation | 1996

A smoothing regularizer for feedforward and recurrent neural networks

Lizhong Wu; John E. Moody

We derive a smoothing regularizer for dynamic network models by requiring robustness in prediction performance to perturbations of the training data. The regularizer can be viewed as a generalization of the first-order Tikhonov stabilizer to dynamic models. For two-layer networks with recurrent connections described by $Y(t) = f(U X(t) + V Y(t-\tau))$, $\hat{Y}(t) = W Y(t)$, the training criterion with the regularizer is $D = \frac{1}{N} \sum_{t=1}^{N} \| Z(t) - \hat{Y}(t \mid I(t), \Phi) \|^2 + \lambda \, \rho_\tau(\Phi)^2$, where $\Phi = \{U, V, W\}$ is the network parameter set, $Z(t)$ are the targets, $I(t) = \{X(s),\ s = 1, 2, \ldots, t\}$ represents the current and all historical input information, $N$ is the size of the training data set, $\rho_\tau(\Phi)$ is the regularizer, and $\lambda$ is a regularization parameter. The closed-form expression for the regularizer for time-lagged recurrent networks is $\rho_\tau(\Phi) = \frac{\gamma \, \|W\| \, \|U\|}{1 - \gamma \, \|V\|}$, where $\|\cdot\|$ is the Euclidean matrix norm and $\gamma$ is a factor that depends upon the maximal value of the first derivatives of the internal unit activations $f(\cdot)$. Simplifications of the regularizer are obtained for simultaneous recurrent nets ($\tau \to 0$), two-layer feedforward nets, and one-layer linear nets. We have successfully tested this regularizer in a number of case studies and found that it performs better than standard quadratic weight decay.
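
The closed-form penalty depends only on the weight matrices, so it is cheap to evaluate during training. Below is a minimal sketch of the regularized criterion, assuming the closed form quoted above; it is not the authors' code, it reads the "Euclidean matrix norm" as the Frobenius norm, and the function and variable names are illustrative.

```python
import numpy as np

def smoothing_regularizer(U, V, W, gamma=1.0):
    """Closed-form smoothing penalty rho_tau(Phi) for a time-lagged recurrent
    net Y(t) = f(U X(t) + V Y(t - tau)), Yhat(t) = W Y(t).

    gamma bounds the first derivative of the activation f (e.g. 1.0 for tanh);
    the expression is finite only when gamma * ||V|| < 1.
    """
    norm = np.linalg.norm  # Frobenius norm, assumed reading of "Euclidean matrix norm"
    denom = 1.0 - gamma * norm(V)
    if denom <= 0.0:
        raise ValueError("gamma * ||V|| must be < 1 for the regularizer to be finite")
    return gamma * norm(W) * norm(U) / denom

def regularized_criterion(preds, targets, U, V, W, lam=1e-3, gamma=1.0):
    """D = (1/N) * sum_t ||Z(t) - Yhat(t)||^2 + lambda * rho_tau(Phi)^2."""
    mse = np.mean(np.sum((targets - preds) ** 2, axis=-1))
    rho = smoothing_regularizer(U, V, W, gamma)
    return mse + lam * rho ** 2
```

Setting V = 0 in this sketch recovers the two-layer feedforward simplification mentioned in the abstract.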


Proceedings of the IEEE/IAFE 1997 Computational Intelligence for Financial Engineering (CIFEr) | 1997

Optimization of trading systems and portfolios

John E. Moody; Lizhong Wu

We propose to train trading systems and portfolios by optimizing objective functions that directly measure trading and investment performance. Rather than basing a trading system on forecasts or training via a supervised learning algorithm using labelled trading data, we train our systems using recurrent reinforcement learning algorithms. The objective functions that we consider as evaluation functions for reinforcement learning are profit or wealth, economic utility, the Sharpe ratio, and our proposed differential Sharpe ratio. The trading and portfolio management systems require prior decisions as input in order to properly take into account the effects of transaction costs, market impact, and taxes. This temporal dependence on system state requires the use of reinforcement versions of standard recurrent learning algorithms. We present empirical results in controlled experiments that demonstrate the efficacy of some of our methods. We find that maximizing the differential Sharpe ratio yields more consistent results than maximizing profits, and that both methods outperform a trading system based on forecasts that minimize MSE.
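
The differential Sharpe ratio is built for exactly this online setting: an exponentially weighted Sharpe ratio is expanded to first order in the decay rate, giving a per-step reward that can be computed and differentiated incrementally. The following is a minimal sketch of that update, assuming the standard first-order expansion used in this line of work; it is not the authors' code, and class and variable names are illustrative.

```python
class DifferentialSharpe:
    """Incremental (differential) Sharpe ratio as an online reward signal.

    Keeps exponential moving estimates A (mean return) and B (second moment)
    and returns the first-order sensitivity of the exponentially weighted
    Sharpe ratio to the newest trading return.
    """

    def __init__(self, eta=0.01, A0=0.0, B0=1e-6):
        self.eta, self.A, self.B = eta, A0, B0

    def update(self, R):
        """Return the differential Sharpe ratio D_t for trading return R_t."""
        dA = R - self.A
        dB = R * R - self.B
        denom = (self.B - self.A ** 2) ** 1.5
        D = (self.B * dA - 0.5 * self.A * dB) / denom if denom > 0 else 0.0
        # advance the moving estimates only after computing D_t
        self.A += self.eta * dA
        self.B += self.eta * dB
        return D
```

A recurrent reinforcement learning trader can then adjust its parameters by gradient ascent on this per-step quantity rather than waiting for an end-of-horizon Sharpe ratio.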


Proceedings of 1995 Conference on Computational Intelligence for Financial Engineering (CIFEr) | 1995

Price behavior and Hurst exponents of tick-by-tick interbank foreign exchange rates

John E. Moody; Lizhong Wu

Our previous analysis of tick-by-tick interbank foreign exchange (FX) rates has suggested that the market is not efficient on short time scales: the price changes show mean-reverting rather than random-walk behavior (Moody and Wu, 1994). The results of the rescaled range and Hurst exponent analysis presented in the first part of this paper further confirm the mean-reverting behavior of the FX data. The second part of this paper reports on the highly significant correlations between Bid/Ask spreads, volatility, and forecastability found in the FX data. These interactions show that higher volatility results in higher forecast error and increased risk for market makers, and that, to compensate for this increase in risk, market makers widen their Bid/Ask spreads.
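
Rescaled range (R/S) analysis estimates the Hurst exponent H from how the range of mean-adjusted cumulative deviations grows with window length: H near 0.5 indicates a random walk, H below 0.5 anti-persistent (mean-reverting) behavior, and H above 0.5 persistent behavior. The sketch below shows only the classical estimator; the paper works with improved estimates, and the window sizes and names here are illustrative.

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of a 1-D array of returns (price changes)."""
    y = np.cumsum(x - x.mean())   # mean-adjusted cumulative deviations
    R = y.max() - y.min()         # range of the cumulative series
    S = x.std(ddof=0)             # standard deviation of the returns
    return R / S if S > 0 else np.nan

def hurst_exponent(returns, window_sizes=(16, 32, 64, 128, 256, 512)):
    """Estimate H as the slope of log(R/S) versus log(window length)."""
    returns = np.asarray(returns, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        chunks = [returns[i:i + n] for i in range(0, len(returns) - n + 1, n)]
        rs = np.nanmean([rescaled_range(c) for c in chunks]) if chunks else np.nan
        if np.isfinite(rs) and rs > 0:
            log_n.append(np.log(n))
            log_rs.append(np.log(rs))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope
```

An estimated H below 0.5 on tick-by-tick price changes is the kind of evidence the abstract cites for mean reversion rather than a random walk.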


Archive | 1998

Reinforcement Learning for Trading Systems and Portfolios: Immediate vs Future Rewards

John E. Moody; Matthew Saffell; Yuansong Liao; Lizhong Wu

We propose to train trading systems and portfolios by optimizing financial objective functions via reinforcement learning. The performance functions that we consider as value functions are profit or wealth, the Sharpe ratio, and our recently proposed differential Sharpe ratio for online learning. In Moody & Wu (1997), we presented empirical results in controlled experiments that demonstrated the efficacy of some of our methods for optimizing trading systems. Here we extend our previous work to the use of Q-learning, a reinforcement learning technique that uses approximated future rewards to choose actions, and compare its performance to that of our previous systems, which are trained to maximize immediate reward. We also provide new simulation results that demonstrate the presence of predictability in the monthly S&P 500 Stock Index for the 25-year period 1970 through 1994.
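
For contrast with the immediate-reward systems, Q-learning bootstraps on an estimate of discounted future rewards. Below is a minimal tabular sketch of that update, a generic Q-learning step rather than the paper's trading system; the state/action encoding and the choice of reward are assumptions.

```python
import numpy as np

def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update: bootstrap on the best action in the next state.

    Q is an (n_states, n_actions) array; for a trading system the actions might
    encode positions such as short/neutral/long, and the reward could be the
    per-period trading return or the differential Sharpe ratio.
    """
    td_target = reward + gamma * np.max(Q[next_state])        # approximated future reward
    Q[state, action] += alpha * (td_target - Q[state, action])  # temporal-difference step
    return Q
```

An immediate-reward trader instead ascends the gradient of the per-period reward directly, with no bootstrapped future term.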


Journal of Forecasting | 1998

Performance functions and reinforcement learning for trading systems and portfolios

John E. Moody; Lizhong Wu; Yuansong Liao; Matthew Saffell


Proceedings of the IEEE/IAFE 1997 Computational Intelligence for Financial Engineering (CIFEr) | 1997

What is the "true price"? State space models for high frequency FX data

John E. Moody; Lizhong Wu


Archive | 1995

Improved estimates for the rescaled range and Hurst exponents

John Moody; Lizhong Wu


Archive | 1995

Long memory and Hurst exponents of tick-by-tick interbank foreign exchange rates

John Earl Moody; Lizhong Wu


Archive | 2001

Applications of Artificial Neural Networks to Time Series Prediction

Yuansong Liao; John E. Moody; Lizhong Wu


Neural Information Processing Systems | 1996

Multi-effect Decompositions for Financial Data Modeling

Lizhong Wu; John E. Moody
