Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hongyang Chao is active.

Publication


Featured research published by Hongyang Chao.


International Conference on Computer Vision | 2015

MeshStereo: A Global Stereo Model with Mesh Alignment Regularization for View Interpolation

Chi Zhang; Zhiwei Li; Yanhua Cheng; Rui Cai; Hongyang Chao; Yong Rui

We present a novel global stereo model designed for view interpolation. Unlike existing stereo models, which only output a disparity map, our model outputs a 3D triangular mesh that can be used directly for view interpolation. To this end, we partition the input stereo images into 2D triangles with shared vertices. Lifting the 2D triangulation to 3D naturally generates a corresponding mesh. A technical difficulty is to properly split vertices into multiple copies when they lie on depth-discontinuity boundaries. To deal with this problem, we formulate our objective as a two-layer MRF, with the upper layer modeling the splitting properties of the vertices and the lower layer optimizing a region-based stereo matching. Experiments on the Middlebury and the Herodion datasets demonstrate that our model synthesizes visually coherent new views with high PSNR and outputs high-quality disparity maps that rank first on the new, challenging high-resolution Middlebury 3.0 benchmark.
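
As a rough illustration of the mesh-lifting step described above, the sketch below back-projects shared 2D triangle vertices to 3D using their estimated disparities. The depth conversion z = f * B / d, the focal length, the baseline, and the assumption of a principal point at the origin are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def lift_triangulation(vertices_2d, disparities, focal, baseline):
    """Lift a 2D triangulation to a 3D mesh using per-vertex disparities.

    vertices_2d : (V, 2) pixel coordinates of the shared triangle vertices
    disparities : (V,) per-vertex disparities in pixels
    focal, baseline : focal length (pixels) and stereo baseline (scene units)
    """
    d = np.maximum(disparities, 1e-6)      # guard against zero disparity
    z = focal * baseline / d               # standard stereo depth from disparity
    x = vertices_2d[:, 0] * z / focal      # back-project (principal point at origin)
    y = vertices_2d[:, 1] * z / focal
    return np.stack([x, y, z], axis=1)     # (V, 3) mesh vertices

# Toy usage: one triangle; the triangle index list itself is unchanged by lifting.
verts = np.array([[100.0, 120.0], [140.0, 120.0], [120.0, 160.0]])
disp = np.array([32.0, 30.0, 31.0])
print(lift_triangulation(verts, disp, focal=700.0, baseline=0.1))
```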


European Conference on Computer Vision | 2016

Building Dual-Domain Representations for Compression Artifacts Reduction

Jun Guo; Hongyang Chao

We propose a highly accurate approach to removing artifacts from JPEG-compressed images. Our approach jointly learns a very deep convolutional network in both the DCT and pixel domains. The dual-domain representation makes full use of DCT-domain prior knowledge of JPEG compression, which is usually lacking in traditional network-based approaches. At the same time, it also benefits from the power and efficiency of the deep feed-forward architecture, in comparison to capacity-limited sparse-coding-based approaches. Two simple strategies, i.e., Adam and residual learning, are adopted to train the very deep network and prove effective. Extensive experiments demonstrate the large improvements of our approach over the state of the art.
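
The two training strategies named in the abstract, residual learning and the Adam optimizer, are easy to illustrate. The PyTorch sketch below is a minimal stand-in: a tiny pixel-domain network that predicts the artifact residual. The DCT-domain branch and the actual architecture are omitted, and the layer sizes and data are placeholders.

```python
import torch
import torch.nn as nn

# Minimal residual-learning sketch: the network predicts the compression
# artifact residual, and the restored image is input + residual.
class TinyArtifactNet(nn.Module):
    def __init__(self, channels=1, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # residual learning: add the predicted residual

model = TinyArtifactNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-in data.
compressed = torch.rand(4, 1, 32, 32)   # JPEG-compressed patches (placeholder)
clean = torch.rand(4, 1, 32, 32)        # uncompressed ground truth (placeholder)
optimizer.zero_grad()
loss = loss_fn(model(compressed), clean)
loss.backward()
optimizer.step()
```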


Wireless Communications, Networking and Information Security | 2010

(N, 1) secret sharing approach based on steganography with gray digital images

Jinsuk Baek; Cheonshik Kim; Paul S. Fisher; Hongyang Chao

We present a technique for embedding secret data into an image, called information hiding or steganography. We exploit some simple relationships between the binary representation of a pixel, its Gray-code representation, and a simple exclusive-OR operation over N images available to both the sender and the receiver, called the cover images. We present algorithms for embedding the secret data in the altered last image, N+1, called the stego image, as well as for extracting this data on the receiving side. We present experimental results using two cover images and one stego image and show that the proposed procedure yields a high PSNR value and a histogram almost identical to that of the image before embedding. We also discuss the robustness of this algorithm under attacks such as steganalysis.
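
The building blocks mentioned in the abstract, the Gray-code representation of pixels and an exclusive-OR over the shared cover images, can be sketched as a toy scheme. The embedding rule below (hiding one secret bit per pixel in the Gray-code LSB parity of the N covers plus the stego image) is an invented simplification for illustration, not the paper's actual algorithm.

```python
import numpy as np

def to_gray(x):
    """Binary-reflected Gray code of 8-bit pixel values."""
    return x ^ (x >> 1)

def from_gray(g):
    """Inverse Gray code for 8-bit values."""
    b = g.copy()
    b ^= b >> 1
    b ^= b >> 2
    b ^= b >> 4
    return b

def embed(covers, base, secret_bits):
    """Toy embedding: choose the stego image so that the XOR of the
    Gray-code LSBs of the N covers and the stego equals the secret bit."""
    parity = np.zeros_like(base)
    for c in covers:
        parity ^= to_gray(c) & 1
    g = to_gray(base)
    g = (g & ~np.uint8(1)) | (parity ^ secret_bits)   # set the Gray-code LSB
    return from_gray(g)

def extract(covers, stego):
    """Receiver side: XOR the Gray-code LSBs of all N+1 images."""
    bits = to_gray(stego) & 1
    for c in covers:
        bits ^= to_gray(c) & 1
    return bits

# Toy usage with two cover images and one stego image.
rng = np.random.default_rng(0)
covers = [rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(2)]
base = rng.integers(0, 256, (4, 4), dtype=np.uint8)
secret = rng.integers(0, 2, (4, 4), dtype=np.uint8)
stego = embed(covers, base, secret)
assert np.array_equal(extract(covers, stego), secret)
```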


European Conference on Computer Vision | 2014

As-Rigid-As-Possible Stereo under Second Order Smoothness Priors

Chi Zhang; Zhiwei Li; Rui Cai; Hongyang Chao; Yong Rui

Imposing smoothness priors is a key idea of the top-ranked global stereo models. Recent progress has demonstrated the power of second-order priors, which are usually defined by either explicitly considering three-pixel neighborhoods or implicitly using a so-called 3D label for each pixel. In contrast to traditional first-order priors, which only prefer fronto-parallel surfaces, second-order priors encourage arbitrary collinear structures. However, we can still find defective regions in matching results even under such powerful priors, e.g., large textureless regions. One reason is that most stereo models are non-convex, where pixel-wise smoothness priors, i.e., local constraints, are too flexible to prevent the solution from becoming trapped in bad local minima. On the other hand, long-range spatial constraints, especially segment-based priors, have advantages on this problem; however, segment-based priors are too rigid to handle curved surfaces. We present a mixture model that combines the benefits of these two kinds of priors, whose energy function consists of two terms: 1) a Laplacian operator on the disparity map, which imposes pixel-wise second-order smoothness; and 2) a segment-wise matching cost as a function of a quadratic surface, which encourages “as-rigid-as-possible” smoothness. To solve the problem effectively, we introduce an intermediate term to decouple the two sub-energies, which enables an alternated optimization algorithm that is about an order of magnitude faster than PatchMatch [1]. Our approach is one of the top-ranked models on the Middlebury benchmark at sub-pixel accuracy.
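
The pixel-wise second-order term, a Laplacian operator on the disparity map, can be made concrete with a short NumPy sketch. It shows only that smoothness term in isolation, under the assumption of a simple 4-neighbour discrete Laplacian; it is not the full mixture energy or the segment-wise ARAP term.

```python
import numpy as np

def second_order_smoothness(disparity):
    """Pixel-wise second-order smoothness energy: squared discrete Laplacian
    of the disparity map. Planar (collinear) disparity ramps cost zero."""
    d = disparity
    lap = (np.roll(d, 1, axis=0) + np.roll(d, -1, axis=0)
           + np.roll(d, 1, axis=1) + np.roll(d, -1, axis=1) - 4.0 * d)
    return np.sum(lap[1:-1, 1:-1] ** 2)   # ignore wrap-around borders

# A fronto-parallel plane and a slanted plane both have (near) zero energy,
# unlike a first-order prior, which would penalize the slanted plane.
h, w = 32, 32
ys, xs = np.mgrid[0:h, 0:w]
flat = np.full((h, w), 10.0)
slanted = 5.0 + 0.3 * xs + 0.1 * ys
print(second_order_smoothness(flat), second_order_smoothness(slanted))
```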


International Conference on Acoustics, Speech, and Signal Processing | 2013

An optimally scalable and cost-effective fractional-pixel motion estimation algorithm for HEVC

Huang Li; Yihao Zhang; Hongyang Chao

Fractional-pixel motion compensation is still one of the most time-consuming parts of the upcoming High Efficiency Video Coding (HEVC) standard. In this paper, we propose an optimally scalable and cost-effective fractional-pixel motion estimation (FPME) algorithm that optimally fits different and varying computing-resource constraints. Our main contributions include two aspects. First, we propose an optimally scalable and cost-effective FPME algorithm based on a cost-benefit analysis, presenting an improved fractional-pixel MV prediction method and a new cost-effectiveness priority for each search point in HEVC. Second, we provide a complexity adjustment strategy that enables the FPME to adjust its complexity to match different given time constraints. Experiments show that the proposed algorithm achieves the best R-D performance while optimally adjusting its complexity under any given time constraint. As a side product, the proposed algorithm can also serve as a fast algorithm that reduces computing complexity by 74% with almost no loss in PSNR or bitrate.
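
The complexity-scalable idea, spending only as many fractional-pixel search points as a given budget allows and visiting them in a cost-effectiveness order, can be sketched generically. The candidate ordering, the budget semantics, and the placeholder cost function below are assumptions made for illustration; the paper's actual MV predictor and priority model differ.

```python
# Sketch of budgeted fractional-pixel refinement: candidate offsets around a
# predicted fractional MV are visited in a fixed priority order until the
# search-point budget (the time/complexity constraint) is spent.
HALF_PEL = [(0, 0), (0.5, 0), (-0.5, 0), (0, 0.5), (0, -0.5),
            (0.5, 0.5), (-0.5, -0.5), (0.5, -0.5), (-0.5, 0.5)]
QUARTER_PEL = [(0.25, 0), (-0.25, 0), (0, 0.25), (0, -0.25)]

def budgeted_fpme(predicted_mv, cost, budget):
    """Evaluate fractional offsets around predicted_mv in priority order,
    stopping once `budget` search points have been spent."""
    px, py = predicted_mv
    best_mv, best_cost = predicted_mv, cost(predicted_mv)   # point 1: the predictor
    for dx, dy in (HALF_PEL + QUARTER_PEL)[1:budget]:
        mv = (px + dx, py + dy)
        c = cost(mv)
        if c < best_cost:
            best_mv, best_cost = mv, c
    return best_mv, best_cost

def cost(mv):
    # Placeholder matching cost: distance to a hidden "true" motion vector.
    true_mv = (2.25, -1.5)
    return abs(mv[0] - true_mv[0]) + abs(mv[1] - true_mv[1])

print(budgeted_fpme((2.0, -1.5), cost, budget=5))    # tight budget: coarser result
print(budgeted_fpme((2.0, -1.5), cost, budget=13))   # full candidate set
```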


International Conference on Image Processing | 2013

An optimally complexity scalable multi-mode decision algorithm for HEVC

Yihao Zhang; Shichao Huang; Huang Li; Hongyang Chao

The quad-tree-based coding unit (CU) structure in HEVC provides more motion compensation sizes to improve rate-distortion performance, at the cost of greatly increased computational complexity. Unlike other research on fast algorithms, we develop an optimally complexity-scalable multi-mode decision algorithm (OCSMD) for HEVC. This paper makes two major contributions. The first is a novel feature that describes the relationship between the MV field and the CU depth. The second is a frame-level cost-performance priority prediction model built on this feature, with negligible overhead and no conflict with the standard. Our method can allocate computational resources to the mode decision of all CUs at the frame level under arbitrary complexity constraints while obtaining nearly optimal coding performance. Experimental results show that our algorithm can adjust its complexity under varying computing capacity while achieving near-optimal R-D performance.
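
As a purely hypothetical illustration of a feature relating the MV field to CU depth, the sketch below uses the dispersion of motion vectors inside a CU to decide whether deeper partitions are worth exploring. The feature definition, the threshold, and the decision rule are invented stand-ins, not the OCSMD model from the paper.

```python
import numpy as np

def mv_dispersion(mv_field):
    """Hypothetical feature: dispersion of the motion-vector field inside a CU.
    A homogeneous MV field suggests a shallow CU depth is sufficient."""
    mv = mv_field.reshape(-1, 2)
    return float(np.mean(np.linalg.norm(mv - mv.mean(axis=0), axis=1)))

def should_split(mv_field, threshold=0.75):
    """Toy decision rule: explore deeper CU partitions only for CUs whose
    MV field is inhomogeneous (feature above the threshold)."""
    return mv_dispersion(mv_field) > threshold

# A CU with uniform motion vs. a CU covering two differently moving objects.
uniform = np.tile(np.array([2.0, -1.0]), (4, 4, 1))
mixed = uniform.copy()
mixed[:, 2:] = np.array([-3.0, 4.0])
print(should_split(uniform), should_split(mixed))   # False, True (illustrative)
```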


International Conference on Image Processing | 2010

Semantics-driven portrait cartoon stylization

Ming Yang; Shu Lin; Ping Luo; Liang Lin; Hongyang Chao

This paper proposes an efficient framework for transforming an input human portrait image into an artistic cartoon style. Compared to previous work on non-photorealistic rendering (NPR), our method exploits portrait semantics, based on a semantic grammar model, to enrich and manipulate the cartoon style. The proposed framework consists of two phases: a portrait parsing phase, which localizes and recognizes facial components in a hierarchical manner and further calculates portrait saliency from the facial components; and a cartoon stylizing phase, which abstracts and cartoonizes the portrait according to the parsed semantics and saliency, rendering the regions and structure (edges/boundaries) of the portrait in two layers. In the experiments, we test our method on different types of human portraits: daily photos, identification photos, and studio photos, with satisfactory results; a quantitative evaluation of subjective preference is presented as well.


International Conference on Image Processing | 2006

A High Accurate Predictor Based Fractional Pixel Search for H.264

Hongyang Chao; Jiyuan Lu

In this paper, we propose a new fractional-pixel motion estimation method that effectively extends predictor-based algorithms from integer-pixel search to fractional-pixel search. It generates very precise predictors and uses simple refinement patterns in the subsequent search. According to our experiments, the proposed method is not only faster than other methods but also produces the same PSNR as full fractional pixel search (FFPS) for all block modes of H.264, whereas current fast fractional-pixel search methods adopted by the H.264 JM reference software do not guarantee the same search quality as FFPS across block modes.


IEEE Transactions on Circuits and Systems for Video Technology | 2011

On Combining Fractional-Pixel Interpolation and Motion Estimation: A Cost-Effective Approach

Jiyuan Lu; Peizhao Zhang; Hongyang Chao; Paul S. Fisher

The additional complexity introduced by fractional-pixel motion compensation arises from two aspects: fractional-pixel interpolation (FPI) and fractional-pixel motion estimation (FPME). Unlike current fast algorithms, we exploit the internal link between FPME and FPI, optimizing them jointly rather than attempting to speed them up separately. In this paper, a refinement search order for FPME is proposed to satisfy the criterion of cost/performance efficiency. Several strategies, i.e., FPME skipping, early termination, and search-pattern pruning, are then given to reduce the number of search positions with negligible coding loss. We also propose an FPI algorithm that avoids redundant interpolation and reduces duplicate calculation. Experimental results show that our integrated algorithm significantly improves the overall speed of FPME and FPI: compared with FFPS+XFPI and CBFPS+XFPI, it reduces the computation time by 65% and 32%, respectively. Additionally, our FPI algorithm can be combined with any fast FPME algorithm to greatly reduce the computational time of FPI.
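
The "avoid redundant interpolation and duplicate calculation" idea can be sketched as on-demand, memoized interpolation: a fractional position is interpolated only when the motion search actually visits it, and the result is cached. Bilinear interpolation below stands in for the codec's real interpolation filters, and the quarter-pel cache key is an assumption for illustration.

```python
import numpy as np

class LazyInterpolator:
    """Interpolate fractional-pixel samples on demand and cache the results,
    so positions never visited by FPME are never interpolated and positions
    visited twice are computed only once."""

    def __init__(self, frame):
        self.frame = frame.astype(np.float64)
        self.cache = {}

    def sample(self, x, y):
        key = (round(x * 4), round(y * 4))      # quantize to the quarter-pel grid
        if key not in self.cache:
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            fx, fy = x - x0, y - y0
            f = self.frame
            self.cache[key] = ((1 - fx) * (1 - fy) * f[y0, x0]
                               + fx * (1 - fy) * f[y0, x0 + 1]
                               + (1 - fx) * fy * f[y0 + 1, x0]
                               + fx * fy * f[y0 + 1, x0 + 1])
        return self.cache[key]

frame = np.arange(64, dtype=np.float64).reshape(8, 8)
interp = LazyInterpolator(frame)
interp.sample(2.25, 3.5)      # computed
interp.sample(2.25, 3.5)      # served from the cache, no duplicate calculation
print(len(interp.cache))      # 1
```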


Neurocomputing | 2016

A versatile sparse representation based post-processing method for improving image super-resolution

Jun Yang; Jun Guo; Hongyang Chao

The objective of this work is single image super-resolution (SR), in which the input is specified by a low-resolution image and a consistent higher-resolution image should be returned. We propose a novel post-processing procedure named iterative fine-tuning and approximation (IFA) for mainstream SR methods. Internal image statistics are complemented by iteratively fine-tuning and performing linear subspace approximation on the outputs of existing external SR methods, helping to better reconstruct missing details and reduce unwanted artifacts. The primary concept of our method is that it first explores and enhances internal image information by grouping similar image patches and then finds their sparse or low-rank representations by iteratively learning the bases or primary components, thereby enhancing the primary structures and some details of the image. We evaluate the proposed IFA procedure over two standard benchmark datasets and demonstrate that IFA can yield substantial improvements for most existing methods via tweaking their outputs, achieving state-of-the-art performance.
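
Two ingredients named in the abstract, grouping similar patches and representing each group in a low-dimensional linear subspace, can be sketched with a truncated SVD. The patch size, group size, rank, and the single non-iterative pass below are illustrative choices; the actual IFA procedure iterates fine-tuning and aggregates the refined patches back into the image.

```python
import numpy as np

def extract_patches(img, size=6, stride=3):
    """Collect overlapping patches as rows of a matrix."""
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.array(patches), coords

def low_rank_approx(group, rank=4):
    """Project a group of similar patches onto its leading principal
    directions (truncated SVD), suppressing weak, noise-like components."""
    mean = group.mean(axis=0)
    u, s, vt = np.linalg.svd(group - mean, full_matrices=False)
    s[rank:] = 0.0
    return (u * s) @ vt + mean

# Toy usage on a stand-in for an external SR method's output.
sr_output = np.random.default_rng(0).random((24, 24))
patches, coords = extract_patches(sr_output)
# Group the k patches most similar to the first patch (Euclidean distance).
k = 8
dists = np.linalg.norm(patches - patches[0], axis=1)
group_idx = np.argsort(dists)[:k]
patches[group_idx] = low_rank_approx(patches[group_idx])
# A full IFA-style pass would aggregate all refined patches back into the
# image and iterate the fine-tuning; that bookkeeping is omitted here.
```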

Collaboration


Dive into Hongyang Chao's collaborations.

Top Co-Authors

Paul S. Fisher, Winston-Salem State University
Jun Guo, Sun Yat-sen University
Jiyuan Lu, Sun Yat-sen University
Chi Zhang, Sun Yat-sen University
Huang Li, Sun Yat-sen University
Jun Yang, Sun Yat-sen University
Liang Lin, Sun Yat-sen University
Ming Yang, Sun Yat-sen University
Yihao Zhang, Sun Yat-sen University