Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jie Chen is active.

Publication


Featured research published by Jie Chen.


IEEE Transactions on Image Processing | 2014

A Rain Pixel Recovery Algorithm for Videos With Highly Dynamic Scenes

Jie Chen; Lap-Pui Chau

Rain removal is a very useful and important technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed in recent years, in which photometric, chromatic, and probabilistic properties of rain are exploited to detect and remove the rain effect. Current methods generally work well with light rain and relatively static scenes; when dealing with heavier rainfall in dynamic scenes, they give very poor visual results. The proposed algorithm is based on motion segmentation of the dynamic scene. After applying photometric and chromatic constraints for rain detection, rain removal filters are applied to pixels such that their dynamic properties as well as motion occlusion cues are considered; both spatial and temporal information is then adaptively exploited during rain pixel recovery. Results show that the proposed algorithm performs much better on rainy scenes with large motion than existing algorithms.
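As a rough illustration of the recovery step only, a textbook baseline replaces detected rain pixels with the temporal median of co-located pixels from neighboring frames. This is a minimal sketch under a static-scene assumption, not the paper's motion-segmentation method, and the function name and interface are hypothetical:

```python
import numpy as np

def recover_rain_pixels(frames, rain_mask):
    """Replace detected rain pixels in the middle frame with the temporal
    median of the co-located pixels (static-scene assumption)."""
    mid = len(frames) // 2
    recovered = frames[mid].copy()
    recovered[rain_mask] = np.median(frames, axis=0)[rain_mask]
    return recovered

# Demo: five frames of a static scene, one corrupted pixel in the middle frame.
frames = np.full((5, 8, 8), 0.2)
frames[2, 3, 3] = 1.0                 # bright rain-streak pixel
mask = frames[2] > 0.8                # crude photometric detection
clean = recover_rain_pixels(frames, mask)
print(clean[3, 3])                    # 0.2
```

A plain temporal median like this breaks down exactly in the large-motion scenes the paper targets, which is why the proposed method adds motion segmentation and occlusion cues.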


International Conference on Image Processing | 2013

Human motion capture data recovery via trajectory-based sparse representation

Junhui Hou; Lap-Pui Chau; Ying He; Jie Chen; Nadia Magnenat-Thalmann

Motion capture is widely used in sports, entertainment, and medical applications. An important issue is recovering motion capture data that has been corrupted by noise and missing entries during acquisition. In this paper, we propose a new method to recover corrupted motion capture data through trajectory-based sparse representation. The data is first represented as trajectories with fixed length and high correlation. Then, based on sparse representation theory, the original trajectories are recovered by solving the sparse representation of the incomplete trajectories with the OMP algorithm, using a dictionary learned by K-SVD. Experimental results show that the proposed algorithm achieves much better performance than existing algorithms, especially when significant portions of the data are missing.
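The sparse coding step here relies on Orthogonal Matching Pursuit. A generic, self-contained sketch of OMP (not the authors' code; a random dictionary stands in for the K-SVD-trained one) looks like this:

```python
import numpy as np

def omp(D, y, k):
    """Greedy Orthogonal Matching Pursuit: pick up to k atoms of D that
    best approximate y, refitting coefficients by least squares each step."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# Toy check: a 2-sparse signal over a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, k=2)
# x_hat should place its two nonzeros on atoms 3 and 17 in this easy case.
```

In the paper's setting, columns of the dictionary are learned trajectory atoms, and `y` holds only the observed entries of an incomplete trajectory.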


International Conference on Image Processing | 2013

A novel SVD-based image quality assessment metric

Shuigen Wang; Chenwei Deng; Weisi Lin; Baojun Zhao; Jie Chen

Image distortion can be categorized into two aspects: content-dependent degradation and content-independent degradation. Existing full-reference image quality assessment (IQA) metrics cannot handle these two different impacts well. Singular value decomposition (SVD), a useful mathematical tool, has been used in various image processing applications. In this paper, SVD is employed to separate the structural (content-dependent) and content-independent components. For each portion, we design a specific assessment model tailored to its corresponding distortion properties. The proposed models are then fused to obtain the final quality score. Experimental results on the TID database demonstrate that the proposed metric achieves better performance in comparison with the relevant state-of-the-art quality metrics.
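As a minimal sketch of the separation step only (the paper's actual assessment models are not reproduced here, and the rank cut-off `k` is chosen arbitrarily), a truncated SVD splits an image into a dominant "structural" part and a residual:

```python
import numpy as np

def svd_split(img, k):
    """Split an image into a rank-k 'structural' part (dominant singular
    values) and the residual part, which sum back to the original."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    structural = (U[:, :k] * s[:k]) @ Vt[:k]
    return structural, img - structural

rng = np.random.default_rng(1)
img = rng.random((64, 64))
structural, residual = svd_split(img, k=8)
print(np.allclose(structural + residual, img))  # True
```

The decomposition is lossless by construction; a full metric would then score each component separately and fuse the two scores.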


IEEE Transactions on Image Processing | 2018

Light Field Compression With Disparity-Guided Sparse Coding Based on Structural Key Views

Jie Chen; Junhui Hou; Lap-Pui Chau

Recent imaging technologies are rapidly evolving for sampling richer and more immersive representations of the 3D world. One of the emerging technologies is light field (LF) cameras based on micro-lens arrays. To record the directional information of the light rays, a much larger storage space and transmission bandwidth are required by an LF image as compared with a conventional 2D image of similar spatial dimension. Hence, the compression of LF data becomes a vital part of its application. In this paper, we propose an LF codec with disparity-guided sparse coding over a learned perspective-shifted LF dictionary based on selected structural key views (SC-SKV). The sparse coding is based on a limited number of optimally selected SKVs, yet the entire LF can be recovered from the coding coefficients. By keeping the approximation identical between encoder and decoder, only the residuals of the non-key views, the disparity map, and the SKVs need to be compressed into the bit stream. An optimized SKV selection method is proposed such that most LF spatial information can be preserved. To achieve optimal dictionary efficiency, the LF is divided into several coding regions, over which the reconstruction works individually. Experiments and comparisons have been carried out over a benchmark LF data set, which show that the proposed SC-SKV codec produces convincing compression results in terms of both rate-distortion performance and visual quality compared with the Joint Exploration Model: 37.9% BD-rate reduction and 1.17-dB BD-PSNR improvement are achieved on average, with up to 6-dB improvement for low bit rate scenarios.


IEEE Transactions on Circuits and Systems for Video Technology | 2017

Light Field Compressed Sensing Over a Disparity-Aware Dictionary

Jie Chen; Lap-Pui Chau

Light field (LF) acquisition faces the challenge of extremely bulky data. Available hardware solutions usually compromise the sensor resource between spatial and angular resolutions. In this paper, a compressed sensing framework is proposed for the sampling and reconstruction of a high-resolution LF based on a coded aperture camera. First, an LF dictionary based on perspective shifting is proposed for the sparse representation of the highly correlated LF. Then, two separate methods, i.e., subaperture scan and normalized fluctuation, are proposed to acquire/calculate the scene disparity, which is used during LF reconstruction with the proposed disparity-aware dictionary. Finally, a hardware implementation of the proposed LF acquisition/reconstruction scheme is carried out. Both quantitative and qualitative evaluations show that the proposed methods produce state-of-the-art performance in both reconstruction quality and computation efficiency.
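At its core, a coded-aperture compressed sensing system measures far fewer values than the signal has entries. A minimal sketch of the measurement model (a random binary mask standing in for the actual coded aperture; recovery would use sparse coding over the disparity-aware dictionary described above, which is not shown):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 256, 64                    # signal length vs. number of measurements
Phi = rng.integers(0, 2, size=(m, n)).astype(float)   # binary sensing matrix

x = np.zeros(n)                   # a 5-sparse signal in some basis
idx = rng.choice(n, size=5, replace=False)
x[idx] = rng.standard_normal(5)

y = Phi @ x                       # compressive measurements, m << n
print(y.shape)                    # (64,)
```

The point of the framework is that, because the LF is sparse over the learned dictionary, the full high-resolution LF can be reconstructed from these few coded measurements.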


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Multiscale Dictionary Learning via Cross-Scale Cooperative Learning and Atom Clustering for Visual Signal Processing

Jie Chen; Lap-Pui Chau

For sparse signal representation, the sparsity across the scales is a promising yet underinvestigated direction. In this paper, we aim to design a multiscale sparse representation scheme to explore such potential. A multiscale dictionary (MD) structure is designed. A cross-scale matching pursuit algorithm is proposed for multiscale sparse coding. Two dictionary learning methods, cross-scale cooperative learning and cross-scale atom clustering, are proposed each focusing on one of the two important attributes of an efficient MD: the similarity and uniqueness of corresponding atoms in different scales. We analyze and compare their different advantages in the application of image denoising under different noise levels, where both methods produce state-of-the-art denoising results.


International Conference on Digital Signal Processing | 2014

Dynamic scene rain removal for moving cameras

Cheen-Hau Tan; Jie Chen; Lap-Pui Chau

Rain removal is important for ensuring that applications relying on video input remain robust under rainy conditions. A number of algorithms have thus been proposed to remove the rain effect based on the properties of rain. However, most of these methods are not able to remove rain effectively from scenes taken with moving cameras. We propose a rain removal algorithm that can effectively remove rain from dynamic scenes taken with moving cameras by improving a recent state-of-the-art rain removal method. We do so by first aligning neighboring frames to a target frame before the target frame is de-rained. Experiments show that our proposed method removes rain effectively from moving-camera scenes.


International Conference on Image Processing | 2013

An enhanced window-variant dark channel prior for depth estimation using single foggy image

Jie Chen; Lap-Pui Chau

The dark channel prior is a simple yet efficient way to estimate scene depth information from a single foggy image. However, the prior fails for pixels with low colour saturation. Based on the observation that areas with dramatic colour changes tend to lie at a similar depth, a window-variation mechanism based on neighbourhood scene complexity and colour saturation rate is proposed in this paper to achieve an ideal compromise between depth resolution and precision. The proposed method greatly alleviates the intrinsic drawbacks of the original dark channel prior. Experiments show that the proposed method produces more accurate depth estimates than the original prior in most scenes.
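For reference, the original fixed-window dark channel prior of He et al. can be sketched as follows; this is a generic baseline without the window-variation mechanism proposed here, with `omega` and the window size set to conventional defaults:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, window=15):
    """Dark channel: per-pixel minimum over RGB, then a local min filter."""
    return minimum_filter(img.min(axis=2), size=window)

def transmission(img, atmosphere, omega=0.95, window=15):
    """He et al.-style transmission estimate t = 1 - omega * dark(I / A);
    estimated depth is proportional to -log(t)."""
    return 1.0 - omega * dark_channel(img / atmosphere, window)

# Demo: a uniformly gray (heavily fogged) image yields low transmission.
hazy = np.full((32, 32, 3), 0.6)
A = np.array([0.9, 0.9, 0.9])     # estimated atmospheric light
t = transmission(hazy, A)         # roughly 0.37 everywhere
```

The failure mode motivating the paper is visible here: any low-saturation (grayish) region has a high dark channel regardless of its true depth, so a fixed window misjudges it.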


International Symposium on Circuits and Systems | 2014

A fast adaptive guided filtering algorithm for light field depth interpolation

Jie Chen; Lap-Pui Chau

A light field camera provides 4D information about the light rays, from which scene depth information can be inferred. The disparity/depth maps calculated from light field data are often noisy, with missing and false entries in homogeneous regions or areas where view-dependent effects are present. In this paper we propose an adaptive guided filtering (AGF) algorithm to obtain an optimized output disparity/depth map. A guidance image provides the image contour and texture information; the filter is able to preserve disparity edges, smooth flat regions without being influenced by image texture, and reject data entries with low confidence during coefficient regression. Experiments show that AGF is much faster than other variational or hierarchical optimization algorithms and produces competitive visual results.
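The coefficient regression mentioned above follows the classic guided filter of He et al.; a plain, non-adaptive sketch of it, omitting the confidence-based rejection that the proposed AGF adds:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter: per-window linear regression q = a*guide + b,
    with a and b averaged over all overlapping windows."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)
    mean_I, mean_p = mean(guide), mean(src)
    cov_Ip = mean(guide * src) - mean_I * mean_p
    var_I = mean(guide * guide) - mean_I ** 2
    a = cov_Ip / (var_I + eps)        # flat guide window -> a ~ 0 (smoothing)
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)

# Demo: a clean step-edge guide smooths a noisy depth map but keeps the edge.
img = np.zeros((40, 40)); img[:, 20:] = 1.0
noisy = img + 0.1 * np.random.default_rng(4).standard_normal((40, 40))
smoothed = guided_filter(img, noisy)
```

Because each output pixel is a closed-form box-filtered regression rather than an iterative optimization, the cost is linear in image size, which matches the speed advantage the abstract claims over variational methods.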


International Conference on Digital Signal Processing | 2014

A light field sparse representation structure and its fast coding technique

Jie Chen; Alexander Matyasko; Lap-Pui Chau

The dimensionality of light field data is typically too large for efficient implementation of sparse representation algorithms, such as dictionary training and sparse coding. We propose a framework for creating a light field dictionary using the method of perspective shearing. Such a dictionary has a specially organized structure for different central view patterns and perspective disparities. Based on this dictionary structure, a two-stage sparse coding algorithm is proposed to speed up the reconstruction process by incorporating an interim Winner-Take-All (WTA) hash coding stage into the Orthogonal Matching Pursuit (OMP) algorithm; this stage speeds up the sparse coding process by almost three times while maintaining reconstruction quality. The proposed scheme produces impressive light field reconstruction quality for compressed light field sensing.
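WTA hashing itself is simple to sketch. This is a generic version (the permutations and `k` here are illustrative; integrating the hash stage into OMP is the paper's contribution and is not shown):

```python
import numpy as np

def wta_hash(x, perms, k):
    """Winner-Take-All hash: for each random permutation, record the index
    of the largest among the first k permuted elements of x."""
    return np.array([int(np.argmax(x[p[:k]])) for p in perms])

rng = np.random.default_rng(2)
perms = [rng.permutation(16) for _ in range(8)]
a = rng.random(16)
# WTA codes depend only on rank order, so any monotone rescaling of the
# input yields the identical hash code.
print(np.array_equal(wta_hash(a, perms, 4), wta_hash(2 * a + 1, perms, 4)))  # True
```

This rank-order invariance is what makes WTA codes cheap to compare: candidate atoms can be pre-filtered by hash agreement before the expensive OMP correlation step.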

Collaboration


Dive into Jie Chen's collaborations.

Top Co-Authors

Lap-Pui Chau, Nanyang Technological University
Junhui Hou, Nanyang Technological University
Cheen-Hau Tan, Nanyang Technological University
Yun Ni, Nanyang Technological University
He Li, Nanyang Technological University
Ying He, Nanyang Technological University
Alexander Matyasko, Nanyang Technological University
Hui Liu, Nanyang Technological University
Weisi Lin, Nanyang Technological University