Publication


Featured research published by Huanjing Yue.


IEEE Transactions on Multimedia | 2013

Cloud-Based Image Coding for Mobile Devices—Toward Thousands to One Compression

Huanjing Yue; Xiaoyan Sun; Jingyu Yang; Feng Wu

Current image coding schemes make it hard to utilize external images for compression even if highly correlated images can be found in the cloud. To solve this problem, we propose a method of cloud-based image coding that is different from current image coding even on the ground. It no longer compresses images pixel by pixel and instead tries to describe images and reconstruct them from a large-scale image database via the descriptions. First, we describe an input image based on its down-sampled version and local feature descriptors. The descriptors are used to retrieve highly correlated images in the cloud and identify corresponding patches. The down-sampled image serves as a target to stitch retrieved image patches together. Second, the down-sampled image is compressed using current image coding. The feature vectors of local descriptors are predicted by the corresponding vectors extracted from the decoded down-sampled image. The predicted residual vectors are compressed by transform, quantization, and entropy coding. The experimental results show that the visual quality of reconstructed images is significantly better than that of intra-frame coding in HEVC and JPEG at thousands-to-one compression.
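The descriptor-coding step above predicts each feature vector from the decoded thumbnail and codes only the residual. A minimal sketch of that idea, with a hypothetical 128-D descriptor and an illustrative quantization step (neither value is from the paper):

```python
import numpy as np

def quantize_residual(vec, pred, step=4.0):
    """Predict a descriptor vector from the decoded down-sampled image and
    code only the quantized residual. `step` is an illustrative
    quantization step, not a value from the paper."""
    residual = vec - pred
    symbols = np.round(residual / step).astype(np.int64)  # would go to entropy coding
    reconstruction = pred + symbols * step                # decoder-side result
    return symbols, reconstruction

rng = np.random.default_rng(0)
descriptor = rng.normal(size=128)                          # hypothetical local descriptor
prediction = descriptor + rng.normal(scale=0.5, size=128)  # prediction from the thumbnail
symbols, rec = quantize_residual(descriptor, prediction)
# True: quantization bounds the per-dimension error by step/2 = 2.0.
print(float(np.abs(rec - descriptor).max()) <= 2.0)
```

The better the thumbnail-based prediction, the smaller the residual symbols, which is what makes entropy coding of the residuals cheap.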


IEEE Transactions on Image Processing | 2013

Landmark Image Super-Resolution by Retrieving Web Images

Huanjing Yue; Xiaoyan Sun; Jingyu Yang; Feng Wu

This paper proposes a new super-resolution (SR) scheme for landmark images by retrieving correlated web images. Using correlated web images significantly improves the exemplar-based SR. Given a low-resolution (LR) image, we extract local descriptors from its up-sampled version and bundle the descriptors according to their spatial relationship to retrieve correlated high-resolution (HR) images from the web. Though similar in content, the retrieved images are usually taken with different illumination, focal lengths, and shooting perspectives, resulting in uncertainty for the HR detail approximation. To solve this problem, we first propose aligning these images to the up-sampled LR image through a global registration, which identifies the corresponding regions in these images and reduces mismatches. Second, we propose a structure-aware matching criterion and adaptive block sizes to improve the mapping accuracy between LR and HR patches. Finally, these matched HR patches are blended together by solving an energy minimization problem to recover the desired HR image. Experimental results demonstrate that our SR scheme achieves significant improvement compared with four state-of-the-art schemes in terms of both subjective and objective qualities.
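The final blending step combines matched HR patches into one image. As a crude stand-in for the paper's energy minimization, the sketch below accumulates patches at their positions with per-patch confidence weights and normalizes by the accumulated weight (positions and weights here are toy values):

```python
import numpy as np

def blend_patches(shape, patches, positions, weights):
    """Weighted-average blending of overlapping patches: a simple
    approximation of the energy-minimization blending, not the paper's
    actual solver."""
    acc = np.zeros(shape)
    norm = np.zeros(shape)
    for patch, (y, x), w in zip(patches, positions, weights):
        ph, pw = patch.shape
        acc[y:y + ph, x:x + pw] += w * patch
        norm[y:y + ph, x:x + pw] += w
    return acc / np.maximum(norm, 1e-8)  # avoid division by zero off-patch

# Toy usage: two overlapping 4x4 patches on a 4x6 canvas.
p1, p2 = np.ones((4, 4)), 3 * np.ones((4, 4))
out = blend_patches((4, 6), [p1, p2], [(0, 0), (0, 2)], [1.0, 1.0])
print(out[0, 0], out[0, 3], out[0, 5])  # overlap columns average to 2.0
```

A real energy-minimization formulation would additionally penalize seams between neighboring patches rather than just averaging overlaps.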


IEEE Transactions on Image Processing | 2015

Image Denoising by Exploring External and Internal Correlations

Huanjing Yue; Xiaoyan Sun; Jingyu Yang; Feng Wu

Single image denoising suffers from limited data collection within a noisy image. In this paper, we propose a novel image denoising scheme, which explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches from the noisy and web images, respectively. We then propose reducing noise by a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph-based optimization method to improve patch matching accuracy in external denoising. Internal denoising is performed by frequency truncation on the internal cubes. By combining the internally and externally denoised patches, we obtain a preliminary denoising result. In the second stage, we propose reducing noise by filtering the external and internal cubes, respectively, in the transform domain. In this stage, the preliminary denoising result not only enhances the patch matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements, e.g., it achieves >2 dB gain compared with BM3D at a wide range of noise levels.
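The first-stage internal denoising is frequency truncation on a cube of similar patches. A minimal sketch, using an FFT as a stand-in for the paper's transform and an illustrative `keep` fraction:

```python
import numpy as np

def build_cube(patches):
    """Stack similar patches into a 3D data cube (patch index along axis 0)."""
    return np.stack(patches, axis=0)

def frequency_truncate(cube, keep=0.25):
    """Frequency-truncation sketch: transform the cube and zero all but the
    strongest `keep` fraction of coefficients. The FFT and the 25% figure
    are illustrative choices, not the paper's."""
    coeffs = np.fft.fftn(cube)
    mags = np.abs(coeffs)
    thresh = np.quantile(mags, 1.0 - keep)
    coeffs[mags < thresh] = 0.0
    return np.real(np.fft.ifftn(coeffs))

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # smooth 8x8 patch
patches = [clean + rng.normal(scale=0.2, size=clean.shape) for _ in range(16)]
cube = build_cube(patches)
denoised = frequency_truncate(cube)
err_noisy = np.mean((cube - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_denoised < err_noisy)
```

The truncation works because the coherent content of similar patches concentrates in a few strong coefficients, while noise spreads its energy over all of them.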


Computer Vision and Pattern Recognition (CVPR) | 2014

CID: Combined Image Denoising in Spatial and Frequency Domains Using Web Images

Huanjing Yue; Xiaoyan Sun; Jingyu Yang; Feng Wu

In this paper, we propose a novel two-step scheme to filter heavy noise from images with the assistance of retrieved Web images. There are two key technical contributions in our scheme. First, for every noisy image block, we build two three-dimensional (3D) data cubes by using similar blocks in retrieved Web images and similar nonlocal blocks within the noisy image, respectively. To better use their correlations, we propose different denoising strategies. The denoising in the 3D cube built upon the retrieved images is performed as median filtering in the spatial domain, whereas the denoising in the other 3D cube is performed in the frequency domain. These two denoising results are then combined in the frequency domain to produce a denoised image. Second, to handle heavy noise, we further propose using the denoised image to improve image registration of the retrieved Web images, 3D cube building, and the estimation of filtering parameters in the frequency domain. Afterwards, the proposed denoising is performed on the noisy image again to generate the final result. Our experimental results show that when the noise is high, the proposed scheme is better than BM3D by more than 2 dB in PSNR, and the visual quality improvement is clearly visible.
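The spatial-domain half of the scheme is a per-pixel median across the external cube of retrieved, aligned blocks. A toy sketch of that step; here the retrieved blocks are modeled as noisy copies of one block, which is an assumption of this example, not the paper's setup (its external blocks come from clean but mismatched Web images):

```python
import numpy as np

def external_denoise(cube):
    """Spatial-domain step: per-pixel median across the external cube of
    retrieved, aligned blocks. The median is robust to the occasional
    badly matched block."""
    return np.median(cube, axis=0)

rng = np.random.default_rng(2)
clean = rng.uniform(size=(8, 8))
# Toy external cube: 15 corrupted copies of the same 8x8 block.
cube = clean[None] + rng.normal(scale=0.5, size=(15, 8, 8))
est = external_denoise(cube)
print(np.mean((est - clean) ** 2) < np.mean((cube[0] - clean) ** 2))
```

The other cube would be filtered in the frequency domain, and the two results combined there, per the abstract.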


International Conference on Multimedia and Expo (ICME) | 2012

SIFT-Based Image Compression

Huanjing Yue; Xiaoyan Sun; Feng Wu; Jingyu Yang

This paper proposes a novel image compression scheme based on the local feature descriptor Scale Invariant Feature Transform (SIFT). The SIFT descriptor characterizes an image region invariantly to scale and rotation, and is widely used in image retrieval. By using SIFT descriptors, our compression scheme is able to make use of external image contents to reduce visual redundancy among images. The proposed encoder compresses an input image by SIFT descriptors rather than pixel values. It separates the SIFT descriptors of the image into two groups, a visual description, which is a significantly sub-sampled image with key SIFT descriptors embedded, and a set of differential SIFT descriptors, to reduce the coding bits. The corresponding decoder generates the SIFT descriptors from the visual description and the differential set. The SIFT descriptors are used in our SIFT-based matching to retrieve the candidate predictive patches from a large image dataset. These candidate patches are then integrated into the visual description to produce the final reconstructed image. Our preliminary but promising results demonstrate the effectiveness of the proposed image coding scheme in terms of perceptual quality, and it provides a feasible approach to exploiting the visual correlation among images.


ACM Multimedia | 2012

IMShare: instantly sharing your mobile landmark images by search-based reconstruction

Lican Dai; Huanjing Yue; Xiaoyan Sun; Feng Wu

Instantly sharing captured landmark images is becoming fashionable, much like writing a blog or chatting with friends on a mobile phone. However, real-time transmission of high-resolution images poses a significant challenge to contemporary mobile networks: either long transmission delays or greatly reduced image resolution can lead to a poor user experience. In this paper, we propose a novel mobile-cloud scheme, IMShare, to enable instant sharing of high-resolution images. On the mobile side, high-resolution images are described by their thumbnails and SIFT (Scale-Invariant Feature Transform) descriptors. After compression, the data sent by mobile phones can be reduced to an average of 2.6 kilobytes (KB) per megapixel. On the cloud side, high-resolution images are reproduced from a large-scale image database by retrieving partial-duplicate images via SIFT descriptors and stitching corresponding image patches together under the guidance of the thumbnails. IMShare is the first scheme to demonstrate that not only can visually pleasant images be reconstructed using this mobile-cloud method, but the reconstruction can also be done in seconds using parallel computing. Our user study on a database of half a million images shows that the proposed IMShare significantly outperforms the current method in subjective quality.


IEEE Transactions on Image Processing | 2017

Depth Map Super-Resolution Considering View Synthesis Quality

Jianjun Lei; Lele Li; Huanjing Yue; Feng Wu; Nam Ling; Chunping Hou

Accurate and high-quality depth maps are required in many 3D applications, such as multi-view rendering, 3D reconstruction, and 3DTV. However, the resolution of the captured depth image is much lower than that of its corresponding color image, which affects its application performance. In this paper, we propose a novel depth map super-resolution (SR) method by taking view synthesis quality into account. The proposed approach mainly includes two technical contributions. First, since the captured low-resolution (LR) depth map may be corrupted by noise and occlusion, we propose a credibility-based multi-view depth map fusion strategy, which considers the view synthesis quality and inter-view correlation, to refine the LR depth map. Second, we propose a view-synthesis-quality-based trilateral depth map up-sampling method, which considers depth smoothness, texture similarity, and view synthesis quality in the up-sampling filter. Experimental results demonstrate that the proposed method outperforms state-of-the-art depth SR methods for both super-resolved depth maps and synthesized views. Furthermore, the proposed method is robust to noise and achieves promising results under noise-corruption conditions.
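The trilateral filter weights each LR depth sample by spatial closeness, texture similarity, and a view-synthesis-quality term. A sketch of such a kernel on a toy neighborhood; the Gaussian widths and all sample values are illustrative, not the paper's:

```python
import numpy as np

def trilateral_weights(dist2, color_diff, synth_quality,
                       sigma_s=2.0, sigma_c=0.1):
    """Sketch of a trilateral up-sampling kernel: spatial closeness,
    color (texture) similarity, and a per-neighbor view-synthesis-quality
    term. Kernel widths are illustrative."""
    w = (np.exp(-dist2 / (2.0 * sigma_s ** 2))
         * np.exp(-color_diff ** 2 / (2.0 * sigma_c ** 2))
         * synth_quality)
    return w / w.sum()  # normalize so the weights form a convex combination

# Toy neighborhood of 4 LR depth samples around one HR pixel.
dist2 = np.array([1.0, 1.0, 4.0, 4.0])
color_diff = np.array([0.02, 0.30, 0.02, 0.02])  # 2nd neighbor crosses a texture edge
quality = np.array([1.0, 1.0, 1.0, 0.2])         # 4th neighbor synthesizes views poorly
depth = np.array([10.0, 40.0, 10.0, 40.0])
w = trilateral_weights(dist2, color_diff, quality)
print(round(float(w @ depth), 1))  # dominated by the same-surface, high-quality neighbors
```

Down-weighting neighbors that cross texture edges or synthesize views poorly is what keeps the up-sampled depth sharp at object boundaries.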


International Symposium on Circuits and Systems (ISCAS) | 2013

SIFT-based image super-resolution

Huanjing Yue; Jingyu Yang; Xiaoyan Sun; Feng Wu


Proceedings of SPIE | 2013

Image denoising using cloud images

Huanjing Yue; Xiaoyan Sun; Jingyu Yang; Feng Wu


IEEE Transactions on Image Processing | 2017

Contrast Enhancement Based on Intrinsic Image Decomposition

Huanjing Yue; Jingyu Yang; Xiaoyan Sun; Feng Wu; Chunping Hou

In this paper, we propose to introduce intrinsic image decomposition priors into decomposition models for contrast enhancement. Since image decomposition is a highly ill-posed problem, we introduce constraints on both the reflectance and illumination layers to yield a highly reliable solution. We regularize the reflectance layer to be piecewise constant by introducing a weighted ℓ1 norm constraint on neighboring pixels according to their color similarity, so that the decomposed reflectance is not much affected by the illumination information. The illumination layer is regularized by a piecewise smoothness constraint. The proposed model is effectively solved by the Split Bregman algorithm. Then, by adjusting the illumination layer, we obtain the enhancement result. To avoid potential color artifacts introduced by illumination adjustment and to reduce computational complexity, the proposed decomposition model is applied to the value channel in HSV space. Experimental results demonstrate that the proposed method performs well for a wide variety of images and achieves better or comparable subjective and objective quality compared with state-of-the-art methods.
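The intrinsic decomposition model just described can be written schematically as follows; the data term, the trade-off weights α and β, and the exact form of the neighbor weights are a sketch of the idea, not the paper's precise formulation:

```latex
\min_{R,\,L}\;
  \| I - R \circ L \|_2^2
  \;+\; \alpha \sum_{p} \sum_{q \in \mathcal{N}(p)} w_{pq}\, \lvert R_p - R_q \rvert
  \;+\; \beta\, \| \nabla L \|_2^2
```

Here $I$ is the value channel in HSV space, $R \circ L$ is the pixelwise reflectance-illumination product, the weights $w_{pq}$ decrease with the color difference between neighboring pixels $p$ and $q$ (pushing the reflectance toward piecewise constancy), and the last term enforces piecewise smoothness of the illumination. Such a weighted ℓ1-plus-ℓ2 objective is the kind of problem the Split Bregman algorithm handles by splitting the nonsmooth term into auxiliary variables.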

Collaboration


Dive into Huanjing Yue's collaborations.

Top Co-Authors


Feng Wu

University of Science and Technology of China


Lican Dai

University of Science and Technology of China
