Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Litong Feng is active.

Publication


Featured research published by Litong Feng.


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Motion-Resistant Remote Imaging Photoplethysmography Based on the Optical Properties of Skin

Litong Feng; Lai-Man Po; Xuyuan Xu; Yuming Li; Ruiyi Ma

Remote imaging photoplethysmography (RIPPG) can achieve contactless monitoring of human vital signs. However, robustness to a subject's motion is a challenging problem for RIPPG, especially in facial video-based RIPPG. The RIPPG signal originates from the radiant intensity variation of human skin with blood pulses, and motions can also modulate the radiant intensity of the skin. Based on the optical properties of human skin, we build an optical RIPPG signal model in which the origins of the RIPPG signal and motion artifacts can be clearly described. The region of interest (ROI) of the skin is regarded as a Lambertian radiator and the effect of ROI tracking is analyzed from the perspective of radiometry. By considering a digital color camera as a simple spectrometer, we propose an adaptive color difference operation between the green and red channels to reduce motion artifacts. Based on the spectral characteristics of photoplethysmography signals, we propose an adaptive bandpass filter to remove residual motion artifacts of RIPPG. We also combine ROI selection on the subject's cheeks with speeded-up robust features point tracking to improve the RIPPG signal quality. Experimental results show that the proposed RIPPG achieves greatly improved performance in estimating the heart rates of moving subjects, compared with state-of-the-art facial video-based RIPPG methods.
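The green-red color-difference operation and adaptive bandpass filtering described above can be sketched as follows. The standard-deviation ratio used for the weighting coefficient is a simplification for illustration (the paper derives its weighting from the optical skin model), and the 0.7-4.0 Hz passband is merely a plausible heart-rate range:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def gr_difference(green, red):
    """Color difference between green and red channel traces. The
    std-ratio weight (an illustrative choice) scales the red trace so
    the shared, motion-dominated intensity modulation cancels."""
    alpha = np.std(green) / np.std(red)
    return green - alpha * red

def bandpass(signal, fs, lo=0.7, hi=4.0, order=3):
    """Band-pass around plausible heart-rate frequencies (42-240 bpm)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

# Synthetic demo: a 1.2 Hz pulse buried in a shared 0.3 Hz motion drift.
fs = 30.0
t = np.arange(0, 20, 1 / fs)
motion = 0.5 * np.sin(2 * np.pi * 0.3 * t)
green = 0.05 * np.sin(2 * np.pi * 1.2 * t) + motion
red = motion.copy()
pulse = bandpass(gr_difference(green, red), fs)
peak_hz = np.fft.rfftfreq(len(pulse), 1 / fs)[np.argmax(np.abs(np.fft.rfft(pulse)))]
```

On this synthetic trace the shared motion component cancels in the color difference, and the band-pass leaves the 1.2 Hz (72 bpm) pulse as the dominant spectral peak.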


Journal of Visual Communication and Image Representation | 2016

Integration of image quality and motion cues for face anti-spoofing

Litong Feng; Lai-Man Po; Yuming Li; Xuyuan Xu; Fang Yuan; Terence Chun-Ho Cheung; Kwok-Wai Cheung

Highlights: a multi-cues integration framework is proposed using a hierarchical neural network; bottleneck representations are effective in multi-cues feature fusion; shearlet is utilized to perform face image quality assessment; motion-based face liveness features are automatically learned using autoencoders.

Many trait-specific countermeasures to face spoofing attacks have been developed for the security of face authentication. However, no single face anti-spoofing technique is superior against every kind of spoofing attack in varying scenarios. In order to improve the generalization ability of face anti-spoofing approaches, an extendable multi-cues integration framework for face anti-spoofing using a hierarchical neural network is proposed, which can fuse image quality cues and motion cues for liveness detection. Shearlet is utilized to develop an image quality-based liveness feature. Dense optical flow is utilized to extract motion-based liveness features. A bottleneck feature fusion strategy can integrate different liveness features effectively. The proposed approach was evaluated on three public face anti-spoofing databases. A half total error rate (HTER) of 0% and an equal error rate (EER) of 0% were achieved on both the REPLAY-ATTACK and 3D-MAD databases. An EER of 5.83% was achieved on the CASIA-FASD database.
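The equal error rate (EER) quoted in these results is the operating point where the false-acceptance rate equals the false-rejection rate. A minimal, framework-independent way to estimate it from genuine and impostor score lists (the score values below are made up for illustration):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep every observed score as a threshold; FAR = fraction of
    impostor scores accepted, FRR = fraction of genuine scores rejected.
    Return the mean of the two at the threshold where they are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])
    frr = np.array([np.mean(genuine < t) for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Perfectly separated score sets give EER = 0, as on REPLAY-ATTACK and 3D-MAD.
eer = equal_error_rate(np.array([0.8, 0.9, 0.95]), np.array([0.05, 0.1, 0.2]))
```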


Signal Processing: Image Communication | 2013

Depth map misalignment correction and dilation for DIBR view synthesis

Xuyuan Xu; Lai-Man Po; Ka-Ho Ng; Litong Feng; Kwok-Wai Cheung; Chun-Ho Cheung; Chi-Wang Ting

The quality of views synthesized by Depth Image Based Rendering (DIBR) depends highly on the accuracy of the depth map, especially the alignment of object boundaries with the texture image. In practice, the misalignment of sharp depth map edges is the major cause of the annoying artifacts at the disoccluded regions of the synthesized views. The conventional smoothing-filter approach blurs the depth map to reduce the disoccluded regions. Its drawbacks are the degradation of 3D perception of the reconstructed 3D videos and the destruction of texture in background regions. A conventional edge-preserving filter utilizes the color image information to align depth edges with color edges. Unfortunately, the characteristics of color edges and depth edges are very different, which causes annoying boundary artifacts in the synthesized virtual views. A recent reliability-based approach uses reliable warping information from other views to fill the holes. However, it is not suitable for view synthesis in video-plus-depth-based DIBR applications. In this paper, a new depth map preprocessing approach is proposed. It utilizes the watershed color segmentation method to correct the depth map misalignment, and then the depth map object boundaries are extended to cover the transitional edge regions of the color image. This approach can handle sharp depth map edges lying inside or outside the object boundaries in the 2D sense. The quality of the disoccluded regions of the synthesized views can be significantly improved and unknown depth values can also be estimated. Experimental results show that the proposed method achieves superior performance for view synthesis by DIBR, especially for generating large-baseline virtual views.
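The boundary-extension step can be illustrated with a grey-scale morphological dilation. Treating larger depth values as nearer foreground is an assumption here (true for disparity-style maps), and the radius is arbitrary:

```python
import numpy as np
from scipy.ndimage import grey_dilation

def dilate_depth_boundaries(depth, radius=2):
    """Extend foreground depth object boundaries outward so that sharp
    depth edges cover the transitional (blurred) edge region of the
    color image. Grey dilation propagates the local maximum, i.e. the
    assumed-foreground depth value."""
    return grey_dilation(depth, size=(2 * radius + 1, 2 * radius + 1))

# A foreground block's boundary grows by `radius` pixels in every direction.
depth = np.zeros((9, 9), dtype=np.uint8)
depth[3:6, 3:6] = 200          # near foreground object
dilated = dilate_depth_boundaries(depth, radius=2)
```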


IEEE Transactions on Circuits and Systems for Video Technology | 2016

No-Reference Video Quality Assessment With 3D Shearlet Transform and Convolutional Neural Networks

Yuming Li; Lai-Man Po; Chun-Ho Cheung; Xuyuan Xu; Litong Feng; Fang Yuan; Kwok-Wai Cheung

In this paper, we propose an efficient general-purpose no-reference (NR) video quality assessment (VQA) framework that is based on the 3D shearlet transform and a convolutional neural network (CNN). Taking video blocks as input, simple and efficient primary spatiotemporal features are extracted by the 3D shearlet transform, which are capable of capturing natural scene statistics properties. Then, a CNN and logistic regression are concatenated to exaggerate the discriminative parts of the primary features and predict a perceptual quality score. The resulting algorithm, which we name shearlet- and CNN-based NR VQA (SACONVA), is tested on the well-known VQA databases of the Laboratory for Image & Video Engineering, the Image & Video Processing Laboratory, and CSIQ. The testing results demonstrate that SACONVA performs well in predicting video quality and is competitive with current state-of-the-art full-reference VQA methods and general-purpose NR-VQA algorithms. Furthermore, SACONVA is extended to classify different video distortion types in these three databases and achieves excellent classification accuracy. We also demonstrate that SACONVA can be directly applied in real applications such as blind video denoising.


International Symposium on Circuits and Systems | 2013

Depth-aided exemplar-based hole filling for DIBR view synthesis

Xuyuan Xu; Lai-Man Po; Chun-Ho Cheung; Litong Feng; Ka-Ho Ng; Kwok-Wai Cheung

The quality of views synthesized by Depth-Image-Based Rendering (DIBR) depends highly on hole filling, especially for synthesized views with large disocclusions. Many hole filling methods have been proposed to improve synthesized view quality, and inpainting is the most popular approach to recovering the disocclusions. However, conventional inpainting either blurs the hole regions via diffusion or propagates foreground information into the disocclusion regions, creating annoying artifacts in the synthesized virtual views. This paper proposes a depth-aided exemplar-based inpainting method for recovering large disocclusions. It consists of two processes: warped depth map filling and warped color image filling. Since a depth map can be considered a grey-scale image without texture, it is much easier to fill. Disoccluded regions of the color image are predicted based on the associated filled depth map information. Regions with texture lying around the background have higher priority to be filled than other regions, and disoccluded regions are filled by propagating the background texture through exemplar-based inpainting. Thus, artifacts created by diffusion or by using foreground information for prediction can be eliminated. Experimental results show that texture can be recovered in large disocclusions and the proposed method has better visual quality compared to existing methods.
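A heavily simplified sketch of the depth-aided idea: fill each disoccluded pixel from background rather than foreground. Real exemplar-based inpainting copies whole patches in priority order; the per-row median depth test used here as the background criterion is only illustrative:

```python
import numpy as np

def depth_aided_fill(color, depth, hole):
    """Toy depth-aided filling: each hole pixel copies the nearest
    non-hole pixel on its row whose depth marks it as background
    (here: depth at or below the row median, assuming smaller depth
    values are farther away)."""
    out = color.copy()
    for y in range(hole.shape[0]):
        known = np.where(~hole[y])[0]
        if known.size == 0:
            continue
        bg_level = np.median(depth[y, known])
        bg = [cx for cx in known if depth[y, cx] <= bg_level]
        for x in np.where(hole[y])[0]:
            if bg:
                out[y, x] = color[y, min(bg, key=lambda cx: abs(cx - x))]
    return out

# Disoccluded pixels (cols 2-3) take the background value 10, not 50.
color = np.array([[10, 10, 0, 0, 50, 50]])
depth = np.array([[1, 1, 0, 0, 9, 9]])
hole = np.array([[False, False, True, True, False, False]])
filled = depth_aided_fill(color, depth, hole)
```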


International Conference on Digital Signal Processing | 2014

Motion artifacts suppression for remote imaging photoplethysmography

Litong Feng; Lai-Man Po; Xuyuan Xu; Yuming Li

Remote imaging photoplethysmography (RIPPG) is able to access human vital signs without physical contact. However, most conventional RIPPG approaches are susceptible to motions of the subject or camera, and overcoming motion artifacts presents one of the most challenging problems. Focusing on this problem, the effects of motion artifacts on RIPPG signals were analyzed. To suppress motion artifacts for RIPPG, the region of interest (ROI) is stabilized using face tracking based on feature point tracking, and an adaptive bandpass filter is further used to suppress the residual motion artifacts. With the addition of motion artifacts, the sorting of independent component analysis (ICA) outputs becomes more important; hence, reference sine signals are generated and correlated with the ICA output components, and the component with the largest correlation coefficient is automatically picked as the cardiac pulse wave. Fourteen subjects were enrolled to test the robustness of the proposed RIPPG method under large motion artifacts. Experimental results show that the proposed method obtains much better performance in estimating the pulse rates of moving subjects compared to the state-of-the-art method. The effectiveness of our method in motion artifact suppression was verified by comparison with a commercial oximeter using Bland-Altman analysis and Pearson's correlation. With efficient motion artifact suppression, the RIPPG method has good potential to broaden the applications of vital sign monitoring.
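The reference-sine selection of the ICA output can be sketched as below; the bpm grid, the step size, and the quadrature (sine plus cosine) correlation are illustrative choices rather than the paper's exact procedure:

```python
import numpy as np

def pick_pulse_component(components, fs, bpm_range=(40, 200), step=2):
    """Correlate each ICA output with reference sinusoids across a grid
    of candidate pulse rates and return the index of the component with
    the largest peak correlation. Sine and cosine references are combined
    so the result is insensitive to phase."""
    t = np.arange(components.shape[1]) / fs
    best_score, best_idx = -1.0, 0
    for i, comp in enumerate(components):
        c = (comp - comp.mean()) / (comp.std() + 1e-12)
        for bpm in range(bpm_range[0], bpm_range[1], step):
            ref_sin = np.sin(2 * np.pi * bpm / 60.0 * t)
            ref_cos = np.cos(2 * np.pi * bpm / 60.0 * t)
            score = np.hypot(np.mean(c * ref_sin), np.mean(c * ref_cos))
            if score > best_score:
                best_score, best_idx = score, i
    return best_idx

# Component 1 is a clean 72 bpm pulse; component 0 is noise.
rng = np.random.default_rng(0)
fs = 30.0
t = np.arange(0, 10, 1 / fs)
comps = np.vstack([rng.normal(size=t.size),
                   np.sin(2 * np.pi * 1.2 * t)])
idx = pick_pulse_component(comps, fs)
```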


International Conference on Digital Signal Processing | 2016

No-reference image quality assessment with deep convolutional neural networks

Yuming Li; Lai-Man Po; Litong Feng; Fang Yuan

The state-of-the-art general-purpose no-reference image or video quality assessment (NR-I/VQA) algorithms usually rely on elaborate hand-crafted features which capture Natural Scene Statistics (NSS) properties. However, designing these features is usually not easy. In this paper, we describe a novel general-purpose NR-IQA framework based on deep Convolutional Neural Networks (CNN). Directly taking a raw image as input and outputting the image quality score, this new framework integrates feature learning and regression into one optimization process, which provides an end-to-end solution to the NR-IQA problem and frees us from designing hand-crafted features. This approach achieves excellent performance on the LIVE dataset and is very competitive with other state-of-the-art NR-IQA algorithms.


International Symposium on Circuits and Systems | 2015

Frame adaptive ROI for photoplethysmography signal extraction from fingertip video captured by smartphone

Lai-Man Po; Xuyuan Xu; Litong Feng; Yuming Li; Kwok-Wai Cheung; Chun-Ho Cheung

Photoplethysmography (PPG) has been widely used in clinical applications for monitoring vital signs, especially heart rate by pulse oximeter. Recent research has demonstrated the possibility of using a fingertip-video-based PPG approach to estimate heart rate on smartphones. However, due to the variation of camera sensor characteristics across different smartphones, the conventional fixed region-of-interest (ROI) technique for PPG signal extraction is not reliable. In this paper, a novel frame-adaptive ROI method is proposed to avoid the color saturation or cut-off distortion in the fingertip video capturing process, improving reliability despite the variation and limited dynamic range of camera sensors in different smartphone models. Experimental results demonstrate that the proposed method can produce a good pulsatile waveform and achieve high heart rate estimation accuracy on different smartphone models, as compared with an FDA (U.S. Food and Drug Administration) approved commercial pulse oximeter.
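A minimal sketch of the per-frame adaptive-ROI idea: average only the pixels inside the sensor's usable range, so saturated or cut-off pixels do not corrupt the PPG sample. The clipping thresholds are assumptions, not values from the paper:

```python
import numpy as np

def frame_adaptive_mean(channel, lo=5, hi=250):
    """Per-frame PPG sample from one color channel: average only pixels
    strictly inside (lo, hi), skipping cut-off (dark-clipped) and
    saturated values. Falls back to the plain mean if every pixel in
    the frame is clipped."""
    mask = (channel > lo) & (channel < hi)
    if not mask.any():
        return float(channel.mean())
    return float(channel[mask].mean())

# The saturated (255) and cut-off (0) pixels are excluded from the sample.
frame = np.array([[255, 120],
                  [130, 0]], dtype=np.uint8)
ppg_sample = frame_adaptive_mean(frame)
```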


Conference of the Industrial Electronics Society | 2013

An adaptive background biased depth map hole-filling method for Kinect

Litong Feng; Lai-Man Po; Xuyuan Xu; Ka-Ho Ng; Chun-Ho Cheung; Kwok-Wai Cheung

The launch of Kinect provides a convenient way to access depth information in real time. However, the depth map quality still needs to be enhanced for 3D visual applications. In this paper, an adaptive background-biased depth map hole-filling method is proposed. First, depth holes caused by abnormal reflection are filled by color-similarity in-painting, with a soft decision for color similarity checking performed using the probabilities from random-walks color segmentation. Afterwards, it is assumed that the lost information in the remaining depth holes belongs to the background. The background depth information is extracted by automatic thresholding in the neighborhood of each hole, and depth holes are in-painted with the background information in their local neighborhood. The combination of color-similarity in-painting and background-biased in-painting can fill depth map holes adaptively for the different kinds of depth holes produced by Kinect. The hole-filling results and virtual view synthesis results show that the Kinect depth map quality can be improved significantly by the proposed method.
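The background-biased neighborhood filling can be sketched as follows; the mean-based split is a simple stand-in for the paper's automatic thresholding, and treating smaller depth values as farther-away background is an assumption:

```python
import numpy as np

def fill_hole_with_background(depth, hole, margin=2):
    """Fill each depth hole pixel with the mean of the background depths
    in its local neighborhood. Background pixels are separated from
    foreground by a mean threshold over the non-hole neighbors."""
    out = depth.astype(float).copy()
    for y, x in zip(*np.where(hole)):
        y0, y1 = max(y - margin, 0), min(y + margin + 1, depth.shape[0])
        x0, x1 = max(x - margin, 0), min(x + margin + 1, depth.shape[1])
        nb = depth[y0:y1, x0:x1][~hole[y0:y1, x0:x1]].astype(float)
        if nb.size == 0:
            continue                      # neighborhood is all hole
        background = nb[nb <= nb.mean()]  # bias toward far pixels
        out[y, x] = background.mean()
    return out

# The hole column takes the far (background) depth 10, not the near 90.
depth = np.array([[10, 10, 0, 90],
                  [10, 10, 0, 90]])
hole = depth == 0
filled = fill_hole_with_background(depth, hole)
```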


Multimedia Tools and Applications | 2018

Block-based adaptive ROI for remote photoplethysmography

Lai-Man Po; Litong Feng; Yuming Li; Xuyuan Xu; Terence Chun-Ho Cheung; Kwok-Wai Cheung

Remote photoplethysmography (rPPG) can achieve contactless monitoring of human vital signs, but its signal quality is limited by the remote nature of its operation. In practical applications, improving the rPPG signal quality becomes an essential task. As a remote imaging technique, rPPG utilizes a camera to capture a video of a skin area, especially the facial area, and then focuses on a particular sub-area as the region of interest (ROI). In this paper, we investigate a novel adaptive ROI (AROI) approach for improving the rPPG signal quality. In this approach, block-based spatial-temporal division is performed on a captured face video. Based on these segmented video blocks, the spatial-temporal quality distribution of the rPPG signals is estimated using a signal-to-noise ratio (SNR) feature. Afterwards, AROIs are calculated through mean-shift clustering and adaptive thresholding on the SNR maps. As the AROI can be dynamically adjusted according to the spatial-temporal quality distribution of rPPG signals on the face, the quality of the final recovered rPPG signal is improved. The performance of the proposed AROI approach was evaluated with both still and moving subjects. Compared to conventional ROI methods for rPPG, the proposed AROI obtained higher accuracy in heart rate measurement. Moreover, state-of-the-art motion-resistant rPPG techniques can be effectively enhanced by integrating them with the AROI.
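The block-wise SNR map and adaptive thresholding can be sketched as follows; the band width, the harmonic handling, and the simple max-minus-margin threshold (used here in place of mean-shift clustering) are illustrative assumptions:

```python
import numpy as np

def block_snr_map(traces, fs, hr_hz, band=0.2):
    """SNR (dB) per block: spectral power within +/- band of the pulse
    frequency and its first harmonic, over the remaining power.
    `traces` has shape (n_blocks, n_samples)."""
    freqs = np.fft.rfftfreq(traces.shape[1], 1 / fs)
    spec = np.fft.rfft(traces - traces.mean(axis=1, keepdims=True), axis=1)
    power = np.abs(spec) ** 2
    signal_band = (np.abs(freqs - hr_hz) < band) | (np.abs(freqs - 2 * hr_hz) < band)
    sig = power[:, signal_band].sum(axis=1)
    noise = power[:, ~signal_band].sum(axis=1) + 1e-12
    return 10 * np.log10(sig / noise)

def adaptive_roi(snr_db, margin_db=3.0):
    """Keep blocks within margin_db of the best block."""
    return snr_db >= snr_db.max() - margin_db

# One skin-like block with a clean 1.2 Hz pulse, one noise-only block.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
traces = np.vstack([np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.normal(size=t.size),
                    rng.normal(size=t.size)])
roi = adaptive_roi(block_snr_map(traces, fs, hr_hz=1.2))
```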

Collaboration


Dive into Litong Feng's collaborations.

Top Co-Authors

Lai-Man Po | City University of Hong Kong
Xuyuan Xu | City University of Hong Kong
Yuming Li | City University of Hong Kong
Kwok-Wai Cheung | Chu Hai College of Higher Education
Chun-Ho Cheung | City University of Hong Kong
Fang Yuan | City University of Hong Kong
Ka-Ho Ng | City University of Hong Kong
Chi-Wang Ting | City University of Hong Kong
Mengyang Liu | City University of Hong Kong