Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chi-Sing Leung is active.

Publication


Featured research published by Chi-Sing Leung.


International Conference on Computer Graphics and Interactive Techniques | 2008

Intrinsic colorization

Xiaopei Liu; Liang Wan; Yingge Qu; Tien-Tsin Wong; Stephen Lin; Chi-Sing Leung; Pheng-Ann Heng

In this paper, we present an example-based colorization technique robust to illumination differences between grayscale target and color reference images. To achieve this goal, our method performs color transfer in an illumination-independent domain that is relatively free of shadows and highlights. It first recovers an illumination-independent intrinsic reflectance image of the target scene from multiple color references obtained by web search. The reference images from the web search may be taken from different vantage points, under different illumination conditions, and with different cameras. Grayscale versions of these reference images are then used in decomposing the grayscale target image into its intrinsic reflectance and illumination components. We transfer color from the color reflectance image to the grayscale reflectance image, and obtain the final result by relighting with the illumination component of the target image. We demonstrate via several examples that our method generates results with excellent color consistency.
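
As a sketch, the reflectance/illumination split and the chrominance transfer described above fit in a few lines of NumPy; the intrinsic decomposition itself is assumed to be given (the paper recovers it from web-search references), and all function and variable names here are illustrative, not the paper's.

    import numpy as np

    def colorize_intrinsic(target_gray, target_illum, ref_reflectance_rgb):
        # Intrinsic model: grayscale image = reflectance * illumination,
        # so the target's grayscale reflectance is recovered by division.
        target_reflectance = target_gray / np.maximum(target_illum, 1e-6)

        # Transfer per-pixel chromaticity from the color reference reflectance
        # (assumed already registered to the target) onto the target reflectance.
        ref_luma = ref_reflectance_rgb.mean(axis=2, keepdims=True)
        chroma = ref_reflectance_rgb / np.maximum(ref_luma, 1e-6)
        colored_reflectance = chroma * target_reflectance[..., None]

        # Relight with the target's own illumination to restore shading.
        return np.clip(colored_reflectance * target_illum[..., None], 0.0, 1.0)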


IEEE Transactions on Multimedia | 2007

Discrete Wavelet Transform on Consumer-Level Graphics Hardware

Tien-Tsin Wong; Chi-Sing Leung; Pheng-Ann Heng; Jianqing Wang

Discrete wavelet transform (DWT) has been heavily studied and developed in various scientific and engineering fields. Its multiresolution and locality nature facilitates applications requiring progressiveness and the capture of high-frequency details. However, when dealing with enormous data volumes, its performance may degrade drastically. On the other hand, with the recent advances in consumer-level graphics hardware, personal computers nowadays are usually equipped with a graphics processing unit (GPU) that offers SIMD-based parallel processing power. This paper presents a SIMD algorithm that performs the convolution-based DWT completely on a GPU, which brings a significant performance gain on a normal PC at no extra cost. Although the forward and inverse wavelet transforms are mathematically different, the proposed algorithm unifies them into an almost identical process that can be efficiently implemented on the GPU. Different wavelet kernels and boundary extension schemes can be easily incorporated by simply modifying input parameters. To demonstrate its applicability and performance, we apply it to wavelet-based geometric design, stylized image processing, texture-illuminance decoupling, and JPEG2000 image encoding.
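
The filtering-plus-downsampling structure that the paper maps to fragment programs can be seen in a plain CPU reference; the sketch below is one level of a 1D Haar DWT with periodic extension in NumPy. The GPU texture layout and the unified forward/inverse formulation are omitted, and Haar is only one of the wavelet kernels the method accepts.

    import numpy as np

    def haar_dwt_1d(signal):
        # One analysis level: convolve with low/high-pass kernels, then
        # keep every second sample (downsample by 2).
        n = len(signal)
        lo = (signal + np.roll(signal, -1)) / np.sqrt(2)  # low-pass (average)
        hi = (signal - np.roll(signal, -1)) / np.sqrt(2)  # high-pass (difference)
        return lo[0:n:2], hi[0:n:2]  # approximation and detail coefficients

    def haar_idwt_1d(approx, detail):
        # Inverse: upsample and apply the synthesis kernels.
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + detail) / np.sqrt(2)
        out[1::2] = (approx - detail) / np.sqrt(2)
        return out

    x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
    a, d = haar_dwt_1d(x)
    assert np.allclose(haar_idwt_1d(a, d), x)  # perfect reconstruction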


Neural Networks | 2003

Dual extended Kalman filtering in recurrent neural networks

Chi-Sing Leung; Lai-Wan Chan

In the classical deterministic Elman model, the estimation of the parameters must be very accurate; otherwise, the system performance is very poor. To improve the system performance, we can use a Kalman filtering algorithm to guide the operation of a trained recurrent neural network (RNN). In this case, during training, we need to estimate the state of the hidden layer as well as the weights of the RNN. This paper discusses how to use dual extended Kalman filtering (DEKF) for this dual estimation and how to use the proposed DEKF to remove unimportant weights from a trained RNN. In our approach, one Kalman algorithm is used to estimate the state of the hidden layer, and one recursive least squares (RLS) algorithm is used to estimate the weights. After training, we use the error covariance matrix of the RLS algorithm to remove unimportant weights. Simulations showed that our approach is an effective joint learning-and-pruning method for RNNs under online operation.
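
The dual-estimation loop can be sketched as two alternating filter updates per time step. Everything below (shapes, noise levels, the single-output readout, and the jac_x/grad_w linearization helpers) is an illustrative skeleton rather than the paper's exact equations.

    import numpy as np

    def dual_ekf_step(x, P_x, w, P_w, u, d, f, jac_x, grad_w, lam=0.99, q=1e-4):
        # --- state branch: extended Kalman update of the hidden state x ---
        x_pred = f(x, u, w)                      # predict hidden state
        F = jac_x(x, u, w)                       # df/dx at the current point
        P_x = F @ P_x @ F.T + q * np.eye(len(x)) # propagate state covariance
        y_pred = x_pred[0]                       # output = first state unit (toy choice)
        H = np.eye(len(x))[0:1]
        S = H @ P_x @ H.T + 1e-2
        K = P_x @ H.T / S
        x = x_pred + (K * (d - y_pred)).ravel()
        P_x = P_x - K @ H @ P_x

        # --- weight branch: RLS update of the weights w ---
        g = grad_w(x, u, w)                      # dy/dw (vector of length len(w))
        k = P_w @ g / (lam + g @ P_w @ g)
        w = w + k * (d - y_pred)
        P_w = (P_w - np.outer(k, g @ P_w)) / lam # error covariance, reused for pruning
        return x, P_x, w, P_w

After training, the diagonal of P_w is the error covariance the abstract reuses: weights whose entries indicate little influence on the error are candidates for removal.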


IEEE Transactions on Neural Networks | 1999

On the Kalman filtering method in neural network training and pruning

John Sum; Chi-Sing Leung; Gilbert H. Young; Wing-Kay Kan

When using the extended Kalman filter approach to train and prune a feedforward neural network, one usually encounters the problems of how to set the initial condition and how to use the result obtained to prune the network. In this paper, some cues on setting the initial condition are presented and illustrated with a simple example. Then, based on three assumptions--1) the size of the training set is large enough; 2) the training is able to converge; and 3) the trained network model is close to the actual one--an elegant equation linking the error sensitivity measure (the saliency) and the result obtained via the extended Kalman filter is devised. The validity of the devised equation is then verified with a simulated example.
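
The abstract's equation is not quoted here, but its shape can be sketched with the familiar optimal-brain-damage saliency. Assuming, per the three conditions above, that the inverse of the extended Kalman filter's error covariance P approximates the (scaled) Hessian of the training error at convergence, the cost of pruning weight w_k is approximately

    s_k \approx \tfrac{1}{2}\, w_k^{2}\, \big[P^{-1}\big]_{kk},

so weights that combine small magnitude with small inverse-covariance entries are removed first. The exact scaling constants in the paper may differ.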


IEEE Transactions on Neural Networks | 2001

Two regularizers for recursive least squared algorithms in feedforward multilayered neural networks

Chi-Sing Leung; Ah-Chung Tsoi; Lai-Wan Chan

Recursive least squares (RLS)-based algorithms are a class of fast online training algorithms for feedforward multilayered neural networks (FMNNs). Although the standard RLS algorithm has an implicit weight decay term in its energy function, the influence of this term decreases linearly as the number of learning epochs increases, so the weight decay effect diminishes as training progresses. In this paper, we derive two modified RLS algorithms to tackle this problem. In the first, the true weight decay RLS (TWDRLS) algorithm, we consider a modified energy function in which the weight decay effect remains constant, irrespective of the number of learning epochs. The second version, the input perturbation RLS (IPRLS) algorithm, is derived by requiring robustness of its prediction performance to input perturbations. Simulation results show that both algorithms improve the generalization capability of the trained network.
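
The diminishing-decay point can be made concrete with a generic worked comparison (the notation is ours, not the paper's). After N samples, the standard RLS objective is

    E_N(w) = \sum_{t=1}^{N} \big(d_t - y_t(w)\big)^2 + (w - w_0)^\top P_0^{-1} (w - w_0),

where the prior term induced by the initial error covariance P_0 plays the role of weight decay; since the first sum grows with N while the prior term does not, the decay strength relative to the data term shrinks like 1/N. A TWDRLS-style objective instead keeps the two terms in fixed proportion, e.g.

    E_N(w) = \sum_{t=1}^{N} \big(d_t - y_t(w)\big)^2 + \lambda N\, \|w\|^2,

so the effective decay per sample stays constant no matter how long training runs; the paper's exact formulation may differ in detail.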


Neural Networks | 2001

A pruning method for the recursive least squared algorithm

Chi-Sing Leung; Kwok-wo Wong; Pui-Fai Sum; Lai-Wan Chan

The recursive least squares (RLS) algorithm is an effective online training method for neural networks. However, its combination with weight decay and pruning has not been well studied. This paper elucidates how generalization ability can be improved by selecting an appropriate initial value of the error covariance matrix in the RLS algorithm, and investigates how the pruning of neural networks can benefit from the final value of the error covariance matrix. Our study found that the RLS algorithm is implicitly a weight decay method, in which the weight decay effect is controlled by the initial value of the error covariance matrix, and that the inverse of the error covariance matrix is approximately equal to the Hessian matrix of the network being trained. We propose that neural networks are first trained by the RLS algorithm and then some unimportant weights are removed based on the approximate Hessian matrix. Simulation results show that our approach is an effective training and pruning method for neural networks.
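
A self-contained toy run of the train-then-prune recipe, using a linear model so the RLS recursion is exact; the paper applies the same idea to neural networks with gradients in place of the raw inputs, the names and the number of pruned weights are illustrative, and the Hessian approximation here holds only up to a constant factor.

    import numpy as np

    rng = np.random.default_rng(0)

    n, m = 200, 8
    X = rng.normal(size=(n, m))
    w_true = np.array([1.5, -2.0, 0.0, 0.0, 0.8, 0.0, 0.0, 3.0])
    d = X @ w_true + 0.05 * rng.normal(size=n)

    delta = 1e-2                    # P0 = (1/delta) I; larger delta = stronger implicit decay
    P = (1.0 / delta) * np.eye(m)   # error covariance matrix
    w = np.zeros(m)

    for x_t, d_t in zip(X, d):      # standard RLS recursion
        k = P @ x_t / (1.0 + x_t @ P @ x_t)
        w += k * (d_t - x_t @ w)
        P -= np.outer(k, x_t @ P)

    # After training, P^{-1} approximates the Hessian of the training error,
    # so an OBD-style saliency ranks the weights for pruning.
    H = np.linalg.inv(P)
    saliency = 0.5 * w**2 * np.diag(H)
    prune = np.argsort(saliency)[:3]  # drop the three least important weights
    w[prune] = 0.0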


IEEE Transactions on Neural Networks | 1999

Analysis for a class of winner-take-all model

John Sum; Chi-Sing Leung; Peter Kwong-Shun Tam; Gilbert H. Young; Wing-Kay Kan; Lai-Wan Chan

Recently, we proposed a simple winner-take-all (WTA) neural network circuit. Assuming no external input, we derived an analytic equation for its network response time. In this paper, we further analyze the network response time for a class of winner-take-all circuits involving self-decay and show that the network response time of this class of WTA is the same as that of the simple WTA model.
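
The response-time notion can be imitated numerically with a generic continuous-time WTA that has self-decay and mutual inhibition; this is a stand-in model with arbitrary constants, not the circuit analyzed in the paper.

    import numpy as np

    def wta_response_time(x0, decay=0.5, inhibit=1.0, dt=0.001):
        # Forward-Euler simulation: each unit has a self-decay term and is
        # inhibited by the total activity of the other units. Returns the time
        # at which all but one unit have been driven to zero.
        x = np.array(x0, dtype=float)
        t = 0.0
        while (x > 0).sum() > 1:
            dx = -decay * x - inhibit * (x.sum() - x)
            x = np.clip(x + dt * dx, 0.0, None)
            t += dt
        return t, int(x.argmax())

    t, winner = wta_response_time([0.9, 1.0, 0.8])
    print(round(t, 3), winner)  # the unit with the largest initial value wins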


Pattern Recognition | 2008

Parallelization of cellular neural networks on GPU

Tze-Yui Ho; Ping-Man Lam; Chi-Sing Leung

Recently, cellular neural networks (CNNs) have been demonstrated to be a highly effective paradigm applicable to a wide range of areas. Typically, CNNs are implemented with VLSI circuits, but this unavoidably requires additional hardware. On the other hand, CNNs can also be implemented purely in software; this, however, results in very low performance for large CNN problem sizes. Nowadays, conventional desktop computers are usually equipped with programmable graphics processing units (GPUs) that support parallel data processing. This paper introduces a GPU-based CNN simulator. In detail, we carefully organize the CNN data as 4-channel textures and efficiently implement the CNN computation as fragment programs running in parallel on the GPU. In this way, we create a high-performance yet low-cost CNN simulator. Experimentally, we demonstrate that the resulting GPU-based CNN simulator runs 8-17 times faster than a CPU-based one.
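
For reference, the computation being parallelized is the standard CNN cell dynamics; a plain CPU version is sketched below with the well-known edge-detection templates. The paper's contribution (packing states into 4-channel textures and evaluating the update as fragment programs) is not shown, and the step size and iteration count here are illustrative.

    import numpy as np
    from scipy.ndimage import convolve

    def cnn_step(x, u, A, B, z, dt=0.1):
        # Standard CNN cell dynamics (one forward-Euler step):
        #   x' = -x + A*y + B*u + z, with output y = 0.5(|x+1| - |x-1|).
        y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
        dx = -x + convolve(y, A, mode='constant') + convolve(u, B, mode='constant') + z
        return x + dt * dx

    # Edge-detection templates (a common CNN example).
    A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])
    B = np.array([[-1, -1, -1], [-1, 8.0, -1], [-1, -1, -1]])
    u = (np.random.default_rng(0).random((64, 64)) > 0.5) * 2.0 - 1.0  # +/-1 input image
    x = np.zeros_like(u)
    for _ in range(200):
        x = cnn_step(x, u, A, B, z=-1.0)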


IEEE Transactions on Neural Networks | 2006

Generalized RLS approach to the training of neural networks

Yong Xu; Kwok-Wo Wong; Chi-Sing Leung

Recursive least squares (RLS) is an efficient approach to neural network training. However, in the classical RLS algorithm there is no explicit decay term in the energy function, which leads to unsatisfactory generalization ability in the trained networks. In this paper, we propose a generalized RLS (GRLS) model that includes a general decay term in the energy function for the training of feedforward neural networks. In particular, four different weight decay functions are discussed: the quadratic weight decay, the constant weight decay, and the newly proposed multimodal and quartic weight decays. By using the GRLS approach, not only is the generalization ability of the trained networks significantly improved, but more unnecessary weights can be pruned to obtain a compact network. Furthermore, the computational complexity of GRLS remains the same as that of the standard RLS algorithm. The advantages and tradeoffs of the different decay functions are analyzed and then demonstrated with examples. Simulation results show that our approach meets the design goals: improving the generalization ability of the trained network while obtaining a compact network.
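
The four decay terms differ only in the per-weight penalty g(w) added to the energy function. The forms below follow common usage for the quadratic and quartic cases and a plausible reading of "constant" and "multimodal"; treat them as illustrative rather than the paper's exact expressions.

    import numpy as np

    lam = 1e-3  # regularization strength (illustrative)

    decay = {
        # g(w): penalty added per weight to the training energy
        "quadratic":  lambda w: lam * w**2,                      # classic weight decay
        "constant":   lambda w: lam * (np.abs(w) > 0),           # fixed cost per nonzero weight
        "quartic":    lambda w: lam * w**4,                      # flat near 0, steep for large w
        "multimodal": lambda w: lam * w**2 * (w**2 - 1.0)**2,    # minima at 0 and +/-1
    }

    w = np.linspace(-2.0, 2.0, 5)
    print({name: g(w).round(4) for name, g in decay.items()})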


IEEE Transactions on Multimedia | 2002

The plenoptic illumination function

Tien-Tsin Wong; Chi-Wing Fu; Pheng-Ann Heng; Chi-Sing Leung

Image-based modeling and rendering has been demonstrated to be a cost-effective and efficient approach to virtual reality applications. The computational model on which most image-based techniques rest is the plenoptic function. Since the original formulation of the plenoptic function does not include illumination, most previous image-based virtual reality applications simply assume that the illumination is fixed. We propose a formulation of the plenoptic function, called the plenoptic illumination function, that explicitly specifies the illumination component. Techniques based on this new formulation can be extended to support relighting as well as view interpolation. To relight images under various illumination configurations, we also propose a local illumination model that utilizes the rules of image superposition. We demonstrate how this new formulation can be applied to extend two existing image-based representations, the panorama representation (such as QuickTime VR) and the two-plane parameterization, to support relighting with trivial modifications. The core of this framework is compression, and we therefore show how to exploit two types of data correlation, the intra-pixel and inter-pixel correlations, in order to achieve a manageable storage size.
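
The "rules of image superposition" behind the local illumination model admit a one-line statement; the notation here is generic. Because radiance is additive over light sources, the image of a scene under any linear combination of basis lights L_k equals the same combination of the per-light basis images:

    I\Big(x;\ \sum_{k} c_k L_k\Big) \;=\; \sum_{k} c_k\, I(x; L_k),

so storing one image per basis light suffices to relight the scene to any illumination expressible in that basis. Compression then amounts to exploiting correlation across lighting conditions at a fixed pixel (intra-pixel) and across pixels (inter-pixel).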

Collaboration


Dive into Chi-Sing Leung's collaborations.

Top Co-Authors

John Sum
National Chung Hsing University

Tien-Tsin Wong
The Chinese University of Hong Kong

Ruibin Feng
City University of Hong Kong

Ping-Man Lam
City University of Hong Kong

Yi Xiao
City University of Hong Kong

Hing Cheung So
City University of Hong Kong

Zi-Fa Han
City University of Hong Kong

Peter Wai Ming Tsang
City University of Hong Kong