Publication


Featured research published by Jingjing Fu.


International Symposium on Circuits and Systems | 2012

Texture-assisted Kinect depth inpainting

Dan Miao; Jingjing Fu; Yan Lu; Shipeng Li; Chang Wen Chen

The emergence of Kinect makes real-time, low-cost depth capture possible for consumers. It also provides a powerful tool and inspiration for researchers to engage in a new array of technology development. However, the quality of the depth map captured by Kinect is still inadequate for many applications due to holes, noise, and artifacts in the depth information. In this paper, we present a texture-assisted Kinect depth inpainting framework aimed at obtaining improved depth information. In this framework, the relationship between texture and depth is investigated, and the characteristics of depth are also exploited. More specifically, texture edge information is extracted to assist the depth inpainting. Furthermore, filtering and diffusion are designed for hole filling and edge alignment. Experimental results demonstrate that the Kinect depth can be appropriately repaired in both smooth and edge regions. Compared with the original depth, the inpainted depth information enhances the quality of advanced processing such as 3D reconstruction.
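
To make the edge-guided diffusion idea concrete, here is a minimal sketch: zero-valued depth holes are filled by iteratively averaging valid 4-neighbours, and values are never pulled from texture-edge pixels, so filling does not leak across object boundaries. The gradient threshold, iteration cap, and neighbour stencil are illustrative assumptions, not the paper's exact filters.

```python
# Sketch only: edge-guided diffusion for depth hole filling.
# Assumptions (not from the paper): central-difference gradients, threshold 40,
# 4-neighbour averaging; border wraparound from np.roll is ignored for brevity.
import numpy as np

def texture_edges(gray, thresh=40.0):
    """Rough texture-edge mask from central-difference gradient magnitude."""
    g = gray.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]
    gy[1:-1, :] = g[2:, :] - g[:-2, :]
    return np.hypot(gx, gy) > thresh

def inpaint_depth(depth, gray, iters=50):
    """Fill zero-valued depth holes without diffusing across texture edges."""
    edges = texture_edges(gray)
    d = depth.astype(np.float64).copy()
    for _ in range(iters):
        hole = d == 0
        if not hole.any():
            break
        acc = np.zeros_like(d)
        cnt = np.zeros_like(d)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb = np.roll(d, (dy, dx), axis=(0, 1))
            ok = (nb > 0) & ~np.roll(edges, (dy, dx), axis=(0, 1))
            acc += np.where(ok, nb, 0.0)
            cnt += ok
        fill = hole & (cnt > 0)
        d[fill] = (acc / np.maximum(cnt, 1))[fill]
    return d
```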


International Symposium on Circuits and Systems | 2012

Kinect-like depth denoising

Jingjing Fu; Shiqi Wang; Yan Lu; Shipeng Li; Wenjun Zeng

The accuracy and stability of Kinect-like depth data are limited by its generation principle. To serve further applications with high-quality depth, preprocessing of the depth data is essential. In this paper, we analyze the characteristics of Kinect-like depth data by examining its generation principle and propose a spatial-temporal denoising algorithm that takes its special properties into account. Both the intra-frame spatial correlation and the inter-frame temporal correlation are exploited to fill depth holes and suppress depth noise. Moreover, a divisive normalization approach is proposed to assist the noise filtering process. The 3D rendering results of the processed depth demonstrate that the lost depth is recovered in some hole regions and the noise is suppressed while depth features are preserved.
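
A minimal sketch of how the spatial-temporal idea could combine with divisive normalization: holes are filled from the previous denoised frame (temporal correlation), then depth is mapped into the disparity domain, where Kinect-style noise is roughly uniform, and median-filtered there (spatial correlation). The depth-disparity constant F_B and the 3x3 window are assumed values, not the paper's.

```python
# Sketch only: temporal hole filling + divisive normalization + spatial median.
# F_B (focal length x baseline) is an assumed constant, not the paper's value.
import numpy as np
from scipy.ndimage import median_filter

F_B = 35130.0

def denoise_sequence(depth_frames):
    """Denoise a list of 2-D depth arrays; zeros mark invalid pixels."""
    prev, out = None, []
    for d in depth_frames:
        d = d.astype(np.float64)
        if prev is not None:
            d = np.where(d == 0, prev, d)          # temporal hole filling
        disp = np.where(d > 0, F_B / d, 0.0)       # divisive normalization
        disp = median_filter(disp, size=3)         # noise suppression in disparity
        d = np.where(disp > 0, F_B / disp, 0.0)    # back to depth
        prev = d
        out.append(d)
    return out
```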


IEEE Transactions on Multimedia | 2013

Kinect-Like Depth Data Compression

Jingjing Fu; Dan Miao; Weiren Yu; Shiqi Wang; Yan Lu; Shipeng Li

Unlike traditional RGB video, Kinect-like depth is characterized by its large variation range and instability. As a result, traditional video compression algorithms cannot be directly applied to Kinect-like depth compression with acceptable coding efficiency. In this paper, we propose a lossy Kinect-like depth compression framework based on existing codecs, aiming to enhance coding efficiency while preserving the depth features needed by further applications. In the proposed framework, the Kinect-like depth is first reformed by a divisive normalized bilateral filter (DNBL) to suppress the depth noise caused by disparity normalization; block-level depth padding is then applied to compensate for invalid depth regions, in collaboration with mask coding, to eliminate the sharp variation caused by depth measurement failures. Before traditional video coding, the inter-frame correlation of the reformed depth is exploited by the proposed 2D+T prediction, in which a depth volume is built to simulate a 3D volume and generate a pseudo-3D prediction reference for depth uniqueness detection. The unique depth region, called the active region, is fed into the video encoder for traditional intra and inter prediction with residual coding, while the inactive region is skipped during depth coding. The experimental results demonstrate that our compression scheme can save 55%-85% of the bit cost and reduce coding complexity by 20%-65% in comparison with traditional video compression algorithms. The visual quality of the 3D reconstruction is also improved after employing our compression scheme.
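
The block-level padding plus mask coding step can be illustrated as follows: fully invalid blocks are flattened to a smooth pad value and flagged in a one-bit-per-block mask so the decoder can restore them, while partially invalid blocks are filled with their own valid mean. The 16x16 block size and mean-based padding are illustrative assumptions; the DNBL and 2D+T stages are not shown.

```python
# Sketch only: block-level depth padding with a 1-bit-per-block invalidity mask.
# Assumes frame dimensions are multiples of the block size.
import numpy as np

def pad_invalid_blocks(depth, block=16):
    h, w = depth.shape
    d = depth.astype(np.float64).copy()
    mask = np.zeros((h // block, w // block), dtype=bool)
    pad = 0.0                                           # running pad value
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = d[by:by + block, bx:bx + block]
            valid = blk[blk > 0]
            if valid.size == 0:
                mask[by // block, bx // block] = True   # decoder restores via mask
                blk[:] = pad                            # smooth, codec-friendly pad
            else:
                blk[blk == 0] = valid.mean()            # fill partial holes
                pad = valid.mean()
    return d, mask
```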


International Symposium on Circuits and Systems | 2012

Content-aware layered compound video compression

Shiqi Wang; Jingjing Fu; Yan Lu; Shipeng Li; Wen Gao

Compound video compression is crucial for remote control and data assessment. In this paper, we propose a content-aware layered video coding scheme as an attempt to efficiently compress compound video. In this scheme, the compound video is analyzed and processed progressively at three pyramid levels: block, object, and layer. First, the compound video is analyzed by a block type classification technique to assess each block's spatial and temporal properties. Second, natural video objects are detected adaptively in each frame based on the block type. Finally, the compound video content is distributed into different layers, and specifically designed video coding algorithms are employed to compress each layer. Experiments demonstrate that our proposed scheme preserves the advantages of the compression algorithm employed for each layer and outperforms each of them on compound video compression.
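
As a toy version of the block-classification stage, the sketch below labels a block as text/graphics when it contains few distinct intensity levels and as natural picture content otherwise; the layered coder would then route picture blocks to a conventional video codec. The block size and level-count threshold are assumptions for illustration.

```python
# Sketch only: level-count block classification for compound screen content.
import numpy as np

def classify_blocks(gray, block=16, max_levels=12):
    """Return a (H/block, W/block) array of 'text' / 'picture' labels."""
    h, w = gray.shape
    labels = np.empty((h // block, w // block), dtype="<U8")
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = gray[by:by + block, bx:bx + block]
            few_levels = np.unique(blk).size <= max_levels
            labels[by // block, bx // block] = "text" if few_levels else "picture"
    return labels
```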


International Symposium on Circuits and Systems | 2014

High frame rate screen video coding for screen sharing applications

Dan Miao; Jingjing Fu; Yan Lu; Shipeng Li; Chang Wen Chen

In this paper, we propose a high frame rate screen video compression scheme aimed at improving the interactive user experience in screen sharing applications. The proposed scheme performs two-layer coding: base layer coding using a conventional video codec and enhancement layer coding using the proposed open-loop coding scheme. For efficient frame-level layer selection and compression, the content update of each frame is evaluated through global motion detection. A screen frame with a significant content update is fed to the conventional video encoder in the base layer. In contrast, a frame with little update is compressed in the enhancement layer, where duplicate content is indicated by a global motion vector and a skip flag, while updated content is encoded with distinct intra modes chosen according to its local features. The experimental results demonstrate that, for screen video containing interaction, the proposed coding scheme achieves an encoding time of 3.09 ms/frame and a decoding time of 2.33 ms/frame with efficient rate-distortion performance.
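
The frame-level layer selection can be sketched as follows: a single global motion vector is found by exhaustive search over small shifts, and the fraction of pixels that still differ after compensation decides whether the frame goes to the base layer (large update) or the enhancement layer (little update). The search range and the 5% threshold are illustrative assumptions.

```python
# Sketch only: global motion detection + base/enhancement layer selection.
import numpy as np

def global_motion(prev, cur, search=4):
    """Test all shifts in [-search, search]^2; return the best (dy, dx)
    and the fraction of pixels left unexplained by that shift."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.count_nonzero(np.roll(prev, (dy, dx), axis=(0, 1)) != cur)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best, best_err / cur.size

def select_layer(prev, cur, update_thresh=0.05):
    _, update_ratio = global_motion(prev, cur)
    return "base" if update_ratio > update_thresh else "enhancement"
```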


Journal of Visual Communication and Image Representation | 2013

Depth sensor assisted real-time gesture recognition for interactive presentation

Hanjie Wang; Jingjing Fu; Yan Lu; Xilin Chen; Shipeng Li

In this paper, we present a gesture recognition approach that enables real-time manipulation of projection content by detecting and recognizing the speaker's gestures from depth maps captured by a depth sensor. To overcome the limited measurement accuracy of the depth sensor, a robust background subtraction method is proposed for effective human body segmentation, and a distance map is adopted to detect human hands. A Potential Active Region (PAR) is utilized to ensure the generation of valid hand trajectories, avoiding extra computational cost on the recognition of meaningless gestures, and three different detection modes are designed for complexity reduction. The detected hand trajectory is temporally segmented into a series of movements, which are represented as Motion History Images. A set-based soft discriminative model is proposed to recognize gestures from these movements. The proposed approach is evaluated on our dataset and performs efficiently and robustly with 90% accuracy.
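
Motion History Images themselves are a standard representation; a minimal accumulator looks like the sketch below, where freshly moving pixels are set to a maximum timestamp tau and everything else decays, so a single grayscale image summarizes the recent movement. The motion threshold and tau are illustrative values.

```python
# Sketch only: a standard Motion History Image (MHI) update step.
import numpy as np

def update_mhi(mhi, prev_frame, cur_frame, tau=15.0, thresh=10):
    """mhi: float array; frames: grayscale arrays of equal shape."""
    diff = np.abs(cur_frame.astype(np.int32) - prev_frame.astype(np.int32))
    moving = diff > thresh
    return np.where(moving, tau, np.maximum(mhi - 1.0, 0.0))  # set or decay
```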


International Conference on Multimedia and Expo | 2012

Kinect-Like Depth Compression with 2D+T Prediction

Jingjing Fu; Dan Miao; Weiren Yu; Shiqi Wang; Yan Lu; Shipeng Li

Kinect-like depth compression is becoming increasingly important due to the growing demand for Kinect depth data transmission and storage. Considering the temporal inconsistency of Kinect depth introduced by random depth measurement errors, we propose a 2D+T prediction algorithm that fully exploits the temporal depth correlation to enhance Kinect depth compression efficiency. In our 2D+T prediction, each depth block is treated as a subsurface, and its motion trend is detected by comparing it with a reliable 3D reconstruction surface, which is integrated from accumulated depth information stored in a depth volume. The comparison follows an error-tolerant rule derived from the depth error model. The experimental results demonstrate that our algorithm remarkably reduces the bitrate cost and the compression complexity, while the visual quality of 3D reconstructions generated from our reconstructed depth is comparable to that of traditional video compression algorithms.
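
The error-tolerant rule can be sketched as a per-block test: a block is declared inactive (it matches the accumulated surface) when every valid pixel agrees with the reference within a distance-dependent tolerance. The quadratic tolerance tol = k * d^2 mirrors the known growth of Kinect depth error with distance, but the constant k and the blending factor below are assumptions.

```python
# Sketch only: error-tolerant block comparison against an accumulated surface.
import numpy as np

def is_inactive(block, ref_block, k=1e-5):
    """True if the block matches the reference within tol = k * depth^2."""
    valid = (block > 0) & (ref_block > 0)
    if not valid.any():
        return True
    tol = k * ref_block[valid] ** 2
    return bool(np.all(np.abs(block[valid] - ref_block[valid]) <= tol))

def accumulate(ref, block, inactive, alpha=0.9):
    """Blend inactive blocks into the reference surface; reset active ones."""
    return alpha * ref + (1 - alpha) * block if inactive else block.astype(float)
```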


International Conference on Multimedia and Expo | 2013

Layered screen video coding leveraging hardware video codec

Dan Miao; Jingjing Fu; Yan Lu; Shipeng Li; Chang Wen Chen

In this paper, we propose a layered screen video coding scheme built on existing video codecs to leverage a hardware video codec for efficient screen video compression. In this scheme, screen video compression is performed as two-layer coding: base layer coding and enhancement layer coding. The screen video is first analyzed at both the frame and block levels to extract the temporal and spatial information used to select coding content for each layer. Non-skip screen frames are directly compressed by the conventional video codec in the base layer, while the screen contents sensitive to video quality degradation are selected for improved coding in the enhancement layer. For the contents to be enhanced, two intra coding modes are designed to improve the quality of compressed text/graphics content and suppress the artifacts introduced by chroma downsampling. The experimental results demonstrate that the proposed scheme improves screen video quality both objectively and subjectively at low cost in bitrate and computational complexity. Moreover, an average coding gain of 2.95 dB is achieved at high bitrates.
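
One plausible way to pick the contents sensitive to quality degradation is a per-block quality test on the base-layer reconstruction, as sketched below: blocks whose PSNR falls under a threshold are re-coded in the enhancement layer. The PSNR criterion and the 35 dB threshold are my assumptions, not the paper's stated selection rule.

```python
# Sketch only: enhancement-layer block selection by base-layer block PSNR.
import numpy as np

def blocks_to_enhance(orig, base_rec, block=16, psnr_thresh=35.0):
    h, w = orig.shape
    picked = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            o = orig[by:by + block, bx:bx + block].astype(np.float64)
            r = base_rec[by:by + block, bx:bx + block].astype(np.float64)
            mse = np.mean((o - r) ** 2)
            psnr = np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
            if psnr < psnr_thresh:
                picked.append((by // block, bx // block))
    return picked
```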


International Symposium on Circuits and Systems | 2014

An adaptive multi-layer low-latency transmission scheme for H.264-based screen sharing system

Ming Yang; Jingjing Fu; Yan Lu; Jianfei Cai; Chuan Heng Foh

Virtual screen systems are becoming an essential part of the mobile cloud computing platform. However, designing low-latency interactive communication for high-resolution screen content is still challenging due to network dynamics and the unique characteristics of screen content. In this paper, we propose an H.264-based low-latency screen sharing system. To achieve a high play-out frame rate, we decouple the low-latency screen content communication problem into two parts: scalable H.264-based encoding and optimal scalable stream transmission scheduling. By leveraging the unique characteristics of screen content, a multi-layer scalable video encoding scheme is designed to achieve a degree of error resilience while keeping good video coding efficiency. In the transmission scheduling module, an optimal frame skipping policy is proposed to schedule the frames in the buffer to maximize the play-out frame rate. In the performance evaluation, we simulate our system in both a one-hop end-to-end topology and a two-hop proxy-based topology. The simulation results show that the proposed scheme achieves much better performance in frame rate and average delay, especially under low-bandwidth conditions.
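
The frame-skipping idea can be illustrated with a greedy toy scheduler: each queued frame is sent only if it can finish transmission before its play-out deadline at the estimated bandwidth, otherwise it is dropped so later frames stay on schedule. This greedy rule is a simplification; the paper derives an optimal policy.

```python
# Sketch only: greedy frame skipping under per-frame play-out deadlines.

def schedule(frames, bandwidth_bps, frame_interval_s):
    """frames: list of (frame_id, size_bits). Returns ids of frames sent."""
    sent, t = [], 0.0
    for k, (frame_id, size_bits) in enumerate(frames):
        tx_time = size_bits / bandwidth_bps
        deadline = (k + 1) * frame_interval_s      # play-out time of frame k
        if t + tx_time <= deadline:
            sent.append(frame_id)
            t += tx_time                           # link stays busy until t
        # else: skip this frame to keep later frames on schedule
    return sent
```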


ACM Transactions on Multimedia Computing, Communications, and Applications | 2016

A High-Fidelity and Low-Interaction-Delay Screen Sharing System

Dan Miao; Jingjing Fu; Yan Lu; Shipeng Li; Chang Wen Chen

The pervasive computing environment and wide network bandwidth give users more opportunities to share screen content among multiple devices. In this article, we introduce a remote display system that enables screen sharing among multiple devices with high fidelity and responsive interaction. In the developed system, the frame-level screen content is compressed and transmitted to the client side for screen sharing, and the instant control inputs are simultaneously transmitted to the server side for interaction. Even though the screen responds immediately to control messages and updates at a high frame rate on the server side, it is difficult to update the screen content with low delay and a high frame rate on the client side, due to the non-negligible time consumed by whole-frame compression, transmission, and display buffer updating. To address this critical problem, we propose a layered structure for screen coding and rendering that delivers diverse screen content to the client side at an adaptive frame rate. More specifically, interaction content with small-region screen updates is compressed by a blockwise screen codec and rendered at a high frame rate to achieve smooth interaction, while natural video screen content is compressed by a standard video codec and rendered at a regular frame rate for smooth video display. Experimental results with real applications demonstrate that the proposed system successfully reduces transmission bandwidth cost and interaction delay during screen sharing. Especially for user interaction in small regions, the proposed system achieves a higher frame rate than most previous counterparts.
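
The layered routing decision might look like the sketch below: count the dirty blocks between consecutive frames, send small localized updates through the fast blockwise screen codec, and hand large continuous updates (e.g. video playback) to the standard video codec path. The block size and the 20% ratio are illustrative assumptions.

```python
# Sketch only: routing a screen update to the blockwise or video codec path.
import numpy as np

def route_update(prev, cur, block=16, video_ratio=0.2):
    h, w = prev.shape
    total = (h // block) * (w // block)
    dirty = 0
    for by in range(0, h, block):
        for bx in range(0, w, block):
            if not np.array_equal(prev[by:by + block, bx:bx + block],
                                  cur[by:by + block, bx:bx + block]):
                dirty += 1
    return "video_codec" if dirty / total > video_ratio else "blockwise_codec"
```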

Collaboration


Dive into Jingjing Fu's collaboration.

Top Co-Authors

Shiqi Wang

City University of Hong Kong

Debin Zhao

Harbin Institute of Technology

Hanjie Wang

Chinese Academy of Sciences
