Publication


Featured research published by K. Y. Michael Wong.


Journal of Physics A | 2000

From shrinking to percolation in an optimization model

J. van Mourik; K. Y. Michael Wong; Désiré Bollé

A model of noise reduction for signal processing and other optimization tasks is introduced. Each noise source puts a symmetric constraint on the space of the signal vector within a tolerance bound. When the number of noise sources increases, sequences of transitions take place, causing the solution space to vanish. We find that the transition from an extended solution space to a shrunk space is retarded because of the symmetry of the constraints, in contrast with the analogous problem of pattern storage. For low tolerance, the solution space vanishes by volume reduction, whereas for high tolerance, the vanishing becomes more and more like percolation.


International Journal of Modern Physics B | 2004

Diversity and adaptation in large population games

K. Y. Michael Wong; S. W. Lim; Peixun Luo

We consider a version of large population games whose players compete for resources using strategies with adaptable preferences. The system efficiency is measured by the variance of the decisions. In the regime where the system can be plagued by the maladaptive behavior of the players, we find that diversity among the players improves the system efficiency, though it slows the convergence to the steady state. Diversity causes a mild spread of resources at the transient state, but reduces the uneven distribution of resources in the steady state.


Journal of the Association for Information Science and Technology | 2003

Relevance data for language models using maximum likelihood

David Bodoff; Bin Wu; K. Y. Michael Wong

We present a preliminary empirical test of a maximum likelihood approach to using relevance data for training information retrieval (IR) parameters. Similar to language models, our method uses explicitly hypothesized distributions for documents and queries, but we add to this an explicitly hypothesized distribution for relevance judgments. The method unifies document-oriented and query-oriented views. Performance is better than the Rocchio heuristic for document and/or query modification. The maximum likelihood methodology also motivates a heuristic estimate of the MLE optimization. The method can be used to test competing hypotheses regarding the processes of authors' term selection, searchers' term selection, and assessors' relevance judgments.
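The language-modeling view that this abstract builds on can be illustrated with a toy query-likelihood ranker: each document gets a maximum-likelihood unigram model, smoothed against the collection, and documents are ranked by the probability they assign to the query. This is a generic sketch of the background method, not the paper's relevance-distribution model; the corpus, the Jelinek-Mercer weight `lam`, and all names are illustrative.

```python
import math
from collections import Counter

# Toy corpus: documents as bags of words (illustrative data)
docs = {
    "d1": "the cat sat on the mat".split(),
    "d2": "the dog chased the cat".split(),
    "d3": "stock markets fell sharply today".split(),
}
query = "cat mat".split()

# Collection statistics for Jelinek-Mercer smoothing
coll = Counter(w for d in docs.values() for w in d)
coll_total = sum(coll.values())
lam = 0.5   # mixing weight between document MLE and collection model

def score(doc, q):
    """log P(query | document): smoothed maximum-likelihood unigram model."""
    tf = Counter(doc)
    return sum(
        math.log(lam * tf[w] / len(doc) + (1 - lam) * coll[w] / coll_total)
        for w in q
    )

ranking = sorted(docs, key=lambda name: score(docs[name], query), reverse=True)
print(ranking)   # d1 first: it contains both query terms
```

The smoothing keeps a document from scoring zero when it misses a query term, which is why d2 (containing "cat" but not "mat") still outranks d3.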


Physical Review E | 2003

Dynamical and stationary properties of on-line learning from finite training sets

Peixun Luo; K. Y. Michael Wong

The dynamical and stationary properties of on-line learning from finite training sets are analyzed by using the cavity method. For large input dimensions, we derive equations for the macroscopic parameters, namely, the student-teacher correlation, the student-student autocorrelation and the learning force fluctuation. This enables us to provide analytical solutions to Adaline learning as a benchmark. Theoretical predictions of training errors in transient and stationary states are obtained by a Monte Carlo sampling procedure. Generalization and training errors are found to agree with simulations. The physical origin of the critical learning rate is presented. Comparison with batch learning is discussed throughout the paper.
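As a rough sketch of the setting analyzed here, the following simulates on-line Adaline (least-mean-squares) learning of a linear teacher from a finite training set, tracking the training error and the student-teacher overlap. The dimensions, learning rate, and the noise-free (realizable) teacher are illustrative assumptions; the cavity-method analysis itself is in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, steps, eta = 50, 100, 30000, 0.5    # input dim, training set size, updates, rate

teacher = rng.standard_normal(N) / np.sqrt(N)   # teacher weight vector (normalized)
X = rng.standard_normal((P, N))                 # finite, fixed training set
y = X @ teacher                                 # realizable (noise-free) targets

w = np.zeros(N)                                 # student weights
for _ in range(steps):
    mu = int(rng.integers(P))                   # on-line: one random example per step
    err = y[mu] - X[mu] @ w                     # Adaline error on the linear output
    w += (eta / N) * err * X[mu]                # gradient step on the squared error

train_err = float(np.mean((y - X @ w) ** 2))    # training error on the finite set
overlap = float(w @ teacher)                    # student-teacher correlation R
print(train_err, overlap)
```

In this realizable case the student converges toward the teacher; pushing `eta` too high instead destabilizes the dynamics, the critical-learning-rate phenomenon the abstract mentions.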


Progress of Theoretical Physics Supplement | 2005

Variational Bayesian Approach to Support Vector Regression

Zhuo Gao; K. Y. Michael Wong

We consider a variational Bayesian approach to support vector regression (SVR). Its main advantage is that one can estimate the leave-one-out error of an SVR analytically, without doing the cross-validation. Comparing our theory with simulations on both an artificial dataset (the sine function) and a benchmark dataset (Boston Housing), we find good agreement. Furthermore, the smoothness of the hyperparameter dependence of the leave-one-out error can be tuned, which is useful for determining the optimal hyperparameters.
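The paper's analytical leave-one-out estimate for SVR is derived there; the underlying idea, obtaining leave-one-out errors in closed form without refitting, can be illustrated in a simpler model. For ridge regression the identity e_i = r_i / (1 - H_ii) is exact, where r_i are the full-fit residuals and H is the hat matrix. A minimal sketch with illustrative data and hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 40, 5, 0.1                          # illustrative sizes and ridge penalty
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Full ridge fit; hat matrix H = X (X'X + lam I)^{-1} X'
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
resid = y - H @ y

# Closed-form LOO residuals, no refitting: e_i = r_i / (1 - H_ii)
loo_analytic = resid / (1 - np.diag(H))

# Brute-force check: refit n times with one point held out
loo_brute = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    beta = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(d),
                           X[mask].T @ y[mask])
    loo_brute[i] = y[i] - X[i] @ beta

print(np.max(np.abs(loo_analytic - loo_brute)))  # agrees up to rounding error
```

The shortcut follows from the Sherman-Morrison formula applied to removing one example; the SVR case in the paper requires the variational Bayesian machinery instead.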


International Conference on Artificial Neural Networks | 2005

Smooth performance landscapes of the variational Bayesian approach

Zhuo Gao; K. Y. Michael Wong

We consider the practical advantage of the Bayesian approach over maximum a posteriori methods in its ability to smoothen the landscape of generalization performance measures in the space of hyperparameters, which is vitally important for determining the optimal hyperparameters. The variational method is used to approximate the intractable distribution. Using the leave-one-out error of support vector regression as an example, we demonstrate a further advantage of this method in the analytical estimation of the leave-one-out error, without doing the cross-validation. Comparing our theory with simulations on both an artificial dataset (the sinc function) and a benchmark dataset (Boston Housing), we find good agreement.


Progress of Theoretical Physics Supplement | 2005

Cooperating Agents in the Minority Game

K. Y. Michael Wong; S. W. Lim; Zhuo Gao

We consider the Minority Game in which agents compete for resources by striving to be in the minority group. The agents adapt to the environment by reinforcement learning of the preferences of the policies they hold. Diversity of preferences of policies among agents is introduced by adding random biases to the cumulative payoffs of their policies. Agent cooperation becomes increasingly important when diversity increases.
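A minimal simulation of the standard Minority Game with diverse initial preferences can make this setup concrete: each agent holds two random strategy tables, plays the one with the higher cumulative virtual payoff, and random initial biases on those payoffs supply the diversity. All parameter values here are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, T, S = 101, 3, 5000, 2       # agents (odd), memory bits, rounds, strategies/agent
states = 2 ** m                     # number of distinct histories

strat = rng.choice([-1, 1], size=(N, S, states))   # fixed random strategy tables
payoff = rng.normal(0.0, 1.0, size=(N, S))          # random initial biases: the diversity
hist = int(rng.integers(states))                    # current history state

A = np.empty(T)
for t in range(T):
    best = payoff.argmax(axis=1)                    # each agent follows its best strategy
    acts = strat[np.arange(N), best, hist]          # chosen actions, +1 or -1
    A[t] = acts.sum()                               # aggregate decision; N odd, never 0
    winner = -1 if A[t] > 0 else 1                  # minority side wins
    payoff += strat[:, :, hist] * winner            # reinforce minority-side strategies
    hist = (hist * 2 + (1 if winner > 0 else 0)) % states   # slide the history window

sigma2 = A[T // 2:].var() / N                       # variance per agent: inefficiency
print(sigma2)
```

The variance of A per agent is the standard efficiency measure; widening the spread of the initial biases is how diversity is dialed up in this family of models.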


Physical Review E | 2005

Effects of diversity on multiagent systems: Minority games

K. Y. Michael Wong; S. W. Lim; Zhuo Gao

We consider a version of large population games whose agents compete for resources using strategies with adaptable preferences. The games can be used to model economic markets, ecosystems, or distributed control. Diversity of initial preferences of strategies is introduced by randomly assigning biases to the strategies of different agents. We find that diversity among the agents reduces their maladaptive behavior. We find interesting scaling relations with diversity for the variance and other parameters such as the convergence time, the fraction of fickle agents, and the variance of wealth, illustrating their dynamical origin. When diversity increases, the scaling dynamics is modified by kinetic sampling and waiting effects. Analyses yield excellent agreement with simulations.


Intelligent Data Engineering and Automated Learning | 2003

Agent-based modeling of efficient markets

S. W. Lim; K. Y. Michael Wong; Peixun Luo

We consider the Minority Game, which models the collective behavior of agents simultaneously and adaptively competing in a market, or distributively performing load-balancing tasks. The variance of the buy-sell decisions is a measure of market inefficiency. When the initial conditions of the strategies picked by the agents are the same, the market is inefficient in the regime of low agent complexity, owing to the maladaptive behavior of the agents. However, the market becomes increasingly efficient when the randomness in the initial conditions increases. Implications for the occurrence of maladaptation, the prediction of market trends, and the search for optimal load balancing are discussed.


Intelligent Data Engineering and Automated Learning | 2003

Dynamics of gradient-based learning and applications to hyperparameter estimation

K. Y. Michael Wong; Peixun Luo; Fuli Li

We analyse the dynamics of gradient-based learning algorithms using the cavity method, considering the cases of batch learning with non-vanishing learning rates and on-line learning. The theory shows excellent agreement with simulations. Applications to efficient and precise estimation of hyperparameters are proposed.

Collaboration


Dive into K. Y. Michael Wong's collaboration.

Top Co-Authors

Peixun Luo

Hong Kong University of Science and Technology

S. W. Lim

Hong Kong University of Science and Technology

Zhuo Gao

Hong Kong University of Science and Technology

Hidetoshi Nishimori

Tokyo Institute of Technology

Bin Wu

Hong Kong University of Science and Technology


S. Li

Hong Kong University of Science and Technology


Y. W. Tong

Hong Kong University of Science and Technology
