
Publication


Featured research published by Chern-Loon Lim.


Signal Processing | 2014

Image reconstruction from a complete set of geometric and complex moments

Barmak Honarvar; Raveendran Paramesran; Chern-Loon Lim

An image can be reconstructed from a finite set of its orthogonal moments. Since geometric and complex moment kernels do not satisfy the orthogonality criterion, direct image reconstruction from them is considered difficult. In this paper, we propose a technique to reconstruct an image from either geometric moments (GMs) or complex moments (CMs). We utilize a relationship between GMs and Stirling numbers of the second kind. Then, using the invertibility of the Stirling transform, the original image can be reconstructed from its complete set of either geometric or complex moments. Further, based on previous work on blur effects in the moment domain and using the proposed reconstruction methods, a formulation is presented to obtain an estimate of the original image from the degraded image moments and the blur parameter. The reconstruction performance of the proposed methods on blurred images is presented to validate the theoretical framework.
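
As a point of reference for the moment definition used above, the following minimal Python sketch computes raw geometric moments M_pq by direct summation over pixel coordinates; the image array, coordinate origin and order limit are illustrative, and the Stirling-transform reconstruction itself is not reproduced here.

    import numpy as np

    def geometric_moments(img, max_order):
        # Raw geometric moments M_pq = sum_x sum_y x^p * y^q * f(x, y),
        # computed by direct summation for all orders p + q <= max_order.
        h, w = img.shape
        y, x = np.mgrid[0:h, 0:w]              # pixel coordinate grids
        return {(p, q): np.sum((x ** p) * (y ** q) * img)
                for p in range(max_order + 1)
                for q in range(max_order + 1 - p)}

    # Toy 8x8 gradient image; M_00 is the total intensity.
    img = np.arange(64, dtype=float).reshape(8, 8)
    gm = geometric_moments(img, max_order=3)
    print(gm[(0, 0)], gm[(1, 0)], gm[(0, 1)])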


Information Sciences | 2011

Fast computation of exact Zernike moments using cascaded digital filters

Chern-Loon Lim; Barmak Honarvar; Kim-Han Thung; Raveendran Paramesran

Zernike moments have been extensively used and have received much research attention in a number of fields: object recognition, image reconstruction, image segmentation, edge detection and biomedical imaging. However, computation of these moments is time consuming. We therefore present a fast technique to compute exact Zernike moments using cascaded digital filters. The novelty of the proposed method lies in computing exact geometric moments directly from the digital filter outputs. The mathematical relationship between the digital filter outputs and the exact geometric moments is derived, and these are then used in the formulation of exact Zernike moments. A comparison with other state-of-the-art alternatives shows that the proposed algorithm improves on current computation times and uses less memory.
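
To make the accumulator idea concrete, here is a small Python sketch (not the authors' exact filter structure): a cascade of running-sum filters applied to a 1-D signal yields, at its last output sample, a binomial-weighted combination of the signal, and such combinations can be converted to geometric moments.

    import numpy as np
    from math import comb

    def cascaded_accumulators(signal, stages):
        # Pass a 1-D signal through `stages` cascaded accumulators (running sums)
        # and return the final sample of the last stage.
        out = np.asarray(signal, dtype=float)
        for _ in range(stages):
            out = np.cumsum(out)
        return out[-1]

    # The last output of (p + 1) cascaded accumulators over f[0..N-1] equals
    # sum_k C(N - 1 - k + p, p) * f[k], a binomial-weighted sum of the signal.
    f = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
    N, p = len(f), 2
    lhs = cascaded_accumulators(f, p + 1)
    rhs = sum(comb(N - 1 - k + p, p) * f[k] for k in range(N))
    print(lhs, rhs)    # both 87.0 for this toy signal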


international symposium on intelligent signal processing and communication systems | 2013

Video-based heart rate measurement using short-time Fourier transform

Yong-Poh Yu; Ban-Hoe Kwan; Chern-Loon Lim; Siaw-Lang Wong; P. Raveendran

Non-invasive heart rate measurement is essential in medical and sport sciences. Typically, the first stage of heart rate measurement is obtained from non-invasive sensors. In this paper, data from video sequences of a cycling subject are processed using the short-time Fourier transform (STFT) to indicate the subject's heart rate. The STFT is chosen for its ability to provide accurately localized temporal and frequency information, especially for the rapidly changing heart rate pattern during the exercise routine. Experimental results show that the proposed method provides acceptable results, with a root mean square error below 2.5 beats per minute (BPM) for heart rate variations between 80 BPM and 130 BPM.
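
A minimal Python sketch of the STFT step is given below, assuming a synthetic pulse trace in place of the signal extracted from the video frames; the sampling rate, window length and heart-rate band are assumed values, not those reported in the paper.

    import numpy as np
    from scipy.signal import stft

    fs = 30.0                                    # assumed frame rate (frames per second)
    t = np.arange(0, 60, 1 / fs)
    pulse = np.sin(2 * np.pi * (100 / 60) * t)   # synthetic 100 BPM trace

    f, times, Z = stft(pulse, fs=fs, nperseg=256, noverlap=192)
    band = (f >= 0.7) & (f <= 4.0)               # plausible heart-rate band (42-240 BPM)
    peak = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
    print(np.round(peak * 60))                   # estimated BPM for each STFT frame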


Digital Signal Processing | 2013

The fast recursive computation of Tchebichef moment and its inverse transform based on Z-transform

Barmak Honarvar Shakibaei Asli; Raveendran Paramesran; Chern-Loon Lim

The outputs of cascaded digital filters operating as accumulators are combined with simplified Tchebichef polynomials to form Tchebichef moments (TMs). In this paper, we derive a simplified recurrence relation for computing Tchebichef polynomials based on Z-transform properties. This paves the way for the implementation of a second-order digital filter to accelerate the computation of the Tchebichef polynomials. Some aspects of digital filter design for image reconstruction from TMs are then addressed. The proposed digital filter structure for reconstruction is based on the 2D convolution between the digital filter outputs used in the computation of the TMs and the impulse response of the proposed digital filter. These filters operate as difference operators and accordingly act on the transformed image moment sets to reconstruct the original image. Experimental results show that both proposed algorithms, for computing TMs and inverse Tchebichef moments (ITMs), perform better than existing methods in terms of computation speed.
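
For reference, the sketch below builds an orthonormal discrete Tchebichef basis numerically, via a QR factorization of the monomial matrix, which is a generic construction rather than the paper's Z-transform recurrence, and shows that the full set of moments reconstructs the image; the 8 x 8 image size is illustrative.

    import numpy as np

    def tchebichef_basis(N, max_order):
        # Orthonormal discrete Tchebichef polynomials on {0, ..., N-1}, obtained
        # here from a QR factorization of the Vandermonde (monomial) matrix.
        x = np.arange(N, dtype=float)
        V = np.vander(x, max_order + 1, increasing=True)   # columns 1, x, x^2, ...
        Q, _ = np.linalg.qr(V)
        return Q.T                                         # row n: degree-n polynomial

    def tchebichef_moments(img, max_order):
        # 2-D Tchebichef moments T_pq = sum_x sum_y t_p(x) t_q(y) f(x, y)
        # (a square image is assumed for brevity).
        T = tchebichef_basis(img.shape[0], max_order)
        return T @ img @ T.T

    img = np.random.rand(8, 8)
    T = tchebichef_basis(8, 7)
    M = tchebichef_moments(img, 7)
    print(np.allclose(T.T @ M @ T, img))    # the full moment set reconstructs the image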


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Efficient Hardware Accelerators for the Computation of Tchebichef Moments

Kah-Hyong Chang; Raveendran Paramesran; Barmak Honarvar Shakibaei Asli; Chern-Loon Lim

Extracting moments from high-resolution images in real time may require a large amount of hardware resources, and a direct method may involve a critically high operating frequency. This paper presents two improved digital-filter-based moment accelerators, exemplified by a Tchebichef moments computation engine, that introduce features contributing to an area- and timing-efficient accelerator design. Each accelerator consists of two on-chip units: the digital filter unit and the matrix multiplication unit. Among the features introduced are a data-shifting means, a filter load distribution method, a reduced set of column filters, sectioned left shifters, a double-line buffer, time-multiplexed and pipelined matrix multiplication sections, and multichip-amenable features. A total of 98 frames of test data from high-definition videos and real and synthetic images are used in the functional tests. The single-chip field-programmable gate array implementation results show successful realizations of accelerators capable of moment computations of order (31, 31) at 50 frames of 1920 × 1080 8-bit pixels per second, and of order (63, 63) at 30 frames of 512 × 512 pixels per second. These performances exceed those of existing multichip and multiplatform designs.
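
The two-unit dataflow can be imitated in floating point as follows. This is only a sketch of the structure, not the accelerator's fixed-point datapath: the digital filter unit is modelled by the binomial matrix produced by cascaded accumulators, and the matrix multiplication unit by a conversion matrix computed numerically from an orthonormal Tchebichef basis.

    import numpy as np
    from math import comb

    N = 4                                            # toy image size

    # Digital filter unit: cascaded accumulators reduce each column to the outputs
    # A @ column, where A[p, k] = C(N - 1 - k + p, p).
    A = np.array([[comb(N - 1 - k + p, p) for k in range(N)] for p in range(N)], float)

    # Matrix multiplication unit: a precomputed conversion matrix maps the filter
    # outputs to Tchebichef moments; T is an orthonormal Tchebichef basis.
    x = np.arange(N, dtype=float)
    Q, _ = np.linalg.qr(np.vander(x, N, increasing=True))
    T = Q.T
    C = T @ np.linalg.inv(A)

    img = np.random.rand(N, N)
    filter_out = A @ img @ A.T                       # stage 1: accumulators, both axes
    moments = C @ filter_out @ C.T                   # stage 2: matrix multiplications
    print(np.allclose(moments, T @ img @ T.T))       # matches the direct transform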


international symposium on communications control and signal processing | 2014

Heart rate estimation from facial images using filter bank

Yong-Poh Yu; P. Raveendran; Chern-Loon Lim

This paper introduces a new method to estimate instantaneous heart rates from facial images obtained from a video camera. In this study, facial images of a subject whose heart rate varies from 150 beats per minute (BPM) to 110 BPM over 100 seconds are captured through a video camera and analyzed using a filter bank, which provides the localized temporal information. We test the proposed method on videos with a duration of 20 s and compare it with the short-time Fourier transform, which can also give temporal information. Experimental results show that the proposed method provides acceptable results: the root mean square error is about 2.31 BPM, the average percentage error is 1.42%, and the correlation coefficient is 0.96.
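
A minimal Python sketch of the filter-bank step is shown below, assuming a synthetic trace in place of the signal extracted from the facial images; the band layout, filter order and sampling rate are assumptions, not the settings used in the paper.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 30.0                                          # assumed frame rate
    t = np.arange(0, 20, 1 / fs)                       # a 20 s window, as in the paper
    trace = np.sin(2 * np.pi * (115 / 60) * t) + 0.2 * np.random.randn(t.size)

    bands = [(lo, lo + 10) for lo in range(60, 170, 10)]   # 10-BPM-wide bands
    energies = []
    for lo_bpm, hi_bpm in bands:
        sos = butter(3, [lo_bpm / 60, hi_bpm / 60], btype="band", fs=fs, output="sos")
        energies.append(np.sum(sosfiltfilt(sos, trace) ** 2))

    lo_bpm, hi_bpm = bands[int(np.argmax(energies))]
    print((lo_bpm + hi_bpm) / 2)                       # estimated heart rate in BPM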


Journal of The Franklin Institute-engineering and Applied Mathematics | 2016

Blind image quality assessment for Gaussian blur images using exact Zernike moments and gradient magnitude

Chern-Loon Lim; Raveendran Paramesran; Wissam A. Jassim; Yong-Poh Yu; King Ngi Ngan

Features that capture the human perception of blurring in digital images are useful in constructing a blur image quality metric. In this paper, we show that some of the exact Zernike moments (EZMs), which closely model human quality scores for images of varying degrees of blurriness, can be used to measure these distortions. A theoretical framework is developed to identify these EZMs. Together with the selected EZMs, the gradient magnitude (GM), which measures contrast information, is used as a weight in the formulation of the proposed blur metric. The design of the proposed metric consists of two stages. In the first stage, the EZM differences and the GM dissimilarities between the edge points of the test image and the same image re-blurred are extracted. In the second stage, the means of the weighted EZM features are pooled to produce a quality score using a support vector machine regressor (SVR). We compare the performance of the proposed blur metric with other state-of-the-art full-reference (FR) and no-reference (NR) blur metrics on three benchmark databases. The Pearson's correlation coefficient (CC) and Spearman's rank-order correlation coefficient (SROCC) for the LIVE image database are 0.9659 and 0.9625, respectively. Similarly high correlations with the subjective scores are achieved for the other two databases.
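
The two-stage structure (feature extraction at edge points of the test image and its re-blurred copy, followed by SVR pooling) can be sketched as follows. The exact Zernike moment features are not reproduced here; a gradient-magnitude dissimilarity stands in for them, and the training images and quality scores are hypothetical.

    import numpy as np
    from scipy import ndimage
    from sklearn.svm import SVR

    def blur_features(img, sigma=1.5):
        # Compare gradient magnitudes of the image and a re-blurred copy at
        # strong-edge locations (a stand-in for the paper's EZM/GM features).
        reblur = ndimage.gaussian_filter(img, sigma)
        gm = np.hypot(*np.gradient(img))
        gm_b = np.hypot(*np.gradient(reblur))
        edges = gm > np.percentile(gm, 90)             # edge points of the test image
        dissim = np.abs(gm[edges] - gm_b[edges]) / (gm[edges] + gm_b[edges] + 1e-8)
        return [dissim.mean(), dissim.std()]

    # Hypothetical training set: blurred images with pretend subjective scores.
    rng = np.random.default_rng(0)
    imgs = [ndimage.gaussian_filter(rng.random((64, 64)), s)
            for s in np.linspace(0, 3, 40)]
    scores = np.linspace(100, 0, 40)
    model = SVR(kernel="rbf").fit([blur_features(im) for im in imgs], scores)
    print(model.predict([blur_features(imgs[5])]))     # predicted quality score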


asia international conference on modelling and simulation | 2007

Edge Vector Based Mode Decision for H.264/AVC Intra Prediction

Chern-Loon Lim; Kim-Han Thung; P. Raveendran

H.264/AVC exploits temporal and spatial redundancies to obtain higher coding gain. This comes at the cost of a more complex encoder and higher computation time, which makes it difficult for real-time applications. In this paper, we present a fast intra-prediction mode selection method for H.264/AVC based on local edge information. Prior to the intra-prediction process, the edge vectors of every pixel in a 4x4 block are summed to determine the edge direction. Based on this edge direction, one directional mode is chosen and compared with the DC mode to select the mode with the least rate-distortion cost. Experimental results show that the proposed method reduces the computational complexity while giving visual quality of the reconstructed frame nearly identical to that obtained with the exhaustive prediction method. The proposed method also outperforms Feng Pan's method (2005) in computation time and the number of bits generated.
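
A rough Python sketch of the edge-direction step is given below, using Sobel gradients as the per-pixel edge vectors; the angle thresholds and the mapping to H.264/AVC intra 4x4 modes are illustrative, and the selected mode would still be compared against the DC mode by rate-distortion cost, as described above.

    import numpy as np
    from scipy import ndimage

    def dominant_direction(block):
        # Sum the per-pixel edge vectors of a 4x4 block and map the dominant
        # edge direction to a candidate directional intra 4x4 prediction mode.
        gx = ndimage.sobel(block, axis=1)              # horizontal intensity change
        gy = ndimage.sobel(block, axis=0)              # vertical intensity change
        grad_angle = np.degrees(np.arctan2(gy.sum(), gx.sum()))
        edge_angle = (grad_angle + 90.0) % 180.0       # edge runs perpendicular to gradient
        if edge_angle < 22.5 or edge_angle >= 157.5:
            return "horizontal (mode 1)"
        if 67.5 <= edge_angle < 112.5:
            return "vertical (mode 0)"
        # which diagonal mode (3 or 4) applies depends on the coordinate convention
        return "diagonal (mode 3 or 4)"

    block = np.tile(np.arange(4, dtype=float) * 40.0, (4, 1))  # vertical edge pattern
    print(dominant_direction(block))                           # -> vertical (mode 0)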


IEEE Transactions on Image Processing | 2015

Online Temporally Consistent Indoor Depth Video Enhancement via Static Structure

Lu Sheng; King Ngi Ngan; Chern-Loon Lim; Songnan Li

In this paper, we propose a new method to enhance the quality of a depth video online, using the intermediary of a so-called static structure of the captured scene. The static and dynamic regions of the input depth frame are robustly separated by a layer assignment procedure, in which the dynamic part stays in the front while the static part fits and helps to update this structure via a novel online variational generative model with added spatial refinement. The dynamic content is enhanced spatially, while the static region is substituted by the updated static structure so as to favor long-range spatio-temporal enhancement. The proposed method enforces long-range temporal consistency in the static region while keeping the necessary depth variations in the dynamic content. It can thus produce flicker-free and spatially optimized depth videos with reduced motion blur and depth distortion. Our experimental results show that the proposed method is effective in both static and dynamic indoor scenes and is compatible with depth videos captured by Kinect and time-of-flight cameras. We also demonstrate that excellent performance can be achieved in comparison with existing spatio-temporal approaches. In addition, our enhanced depth videos and static structures can act as effective cues to improve various applications, including depth-aided background subtraction and novel view synthesis, with few visual artifacts.
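
The overall static/dynamic substitution loop can be caricatured in a few lines of Python. This is only a stand-in: a running average replaces the paper's online variational generative model, the threshold is arbitrary, and the depth frames are synthetic.

    import numpy as np

    def enhance_depth_stream(frames, thresh=30.0, alpha=0.05):
        # Keep a running "static structure" depth map, label pixels close to it
        # as static, substitute and update them, and pass dynamic pixels through.
        structure = frames[0].astype(float)
        for frame in frames:
            frame = frame.astype(float)
            static = np.abs(frame - structure) < thresh           # layer assignment
            structure[static] = ((1 - alpha) * structure[static]
                                 + alpha * frame[static])          # update structure
            yield np.where(static, structure, frame)               # enhanced frame

    # Synthetic depth frames in millimetres.
    rng = np.random.default_rng(1)
    frames = [2000 + 10 * rng.standard_normal((48, 64)) for _ in range(5)]
    for enhanced in enhance_depth_stream(frames):
        pass
    print(enhanced.shape)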


Biomedical Optics Express | 2015

Dynamic heart rate estimation using principal component analysis

Yong-Poh Yu; P. Raveendran; Chern-Loon Lim; Ban-Hoe Kwan

In this paper, facial images from various video sequences are used to obtain a heart rate reading. A video camera captures the facial images of eight subjects whose heart rates vary dynamically between 81 and 153 BPM. Principal component analysis (PCA) is used to recover the blood volume pulse (BVP), from which the heart rate is estimated. An important consideration for the accuracy of dynamic heart rate estimation is determining the shortest video duration that achieves it. This duration is chosen as the point at which the six principal components (PCs) are least correlated among themselves; the first PC is then used to obtain the heart rate. The results obtained from the proposed method are compared to readings from a Polar heart rate monitor. Experimental results show that the proposed method estimates dynamic heart rate readings with lower computational requirements than the existing method. The mean absolute error and the standard deviation of the absolute errors between experimental and actual readings are 2.18 BPM and 1.71 BPM, respectively.
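
A compact Python sketch of the PCA step follows, assuming synthetic RGB traces in place of the facial-image signals and using only three channels (and hence three principal components) rather than the six PCs discussed in the paper; the sampling rate and window length are also assumptions.

    import numpy as np
    from sklearn.decomposition import PCA

    fs = 30.0                                               # assumed frame rate
    t = np.arange(0, 15, 1 / fs)
    pulse = np.sin(2 * np.pi * (120 / 60) * t)              # 120 BPM component
    traces = np.column_stack([pulse * g + 0.3 * np.random.randn(t.size)
                              for g in (0.4, 1.0, 0.6)])    # noisy R, G, B traces

    pcs = PCA(n_components=3).fit_transform(traces)         # recover the BVP signal
    spectrum = np.abs(np.fft.rfft(pcs[:, 0]))
    freqs = np.fft.rfftfreq(pcs.shape[0], d=1 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)                  # plausible heart-rate band
    print(round(freqs[band][np.argmax(spectrum[band])] * 60))   # estimated BPM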

Collaboration


Dive into Chern-Loon Lim's collaboration.

Top co-author: King Ngi Ngan (The Chinese University of Hong Kong)