
Publication


Featured research published by Yingzhen Yang.


Computer Vision and Pattern Recognition | 2015

Self-tuned deep super resolution

Zhangyang Wang; Yingzhen Yang; Zhaowen Wang; Shiyu Chang; Wei Han; Jianchao Yang; Thomas S. Huang

Deep learning has been successfully applied to image super resolution (SR). In this paper, we propose a deep joint super resolution (DJSR) model to exploit both external and self similarities for SR. A Stacked Denoising Convolutional Auto Encoder (SDCAE) is first pre-trained on external examples with proper data augmentations. It is then fine-tuned with multi-scale self examples from each input, where the reliability of self examples is explicitly taken into account. We also enhance the model performance by sub-model training and selection. The DJSR model is extensively evaluated and compared with state-of-the-art methods, showing noticeable performance improvements, both quantitative and perceptual, on a wide range of images.


IEEE Transactions on Image Processing | 2015

Learning Super-Resolution Jointly From External and Internal Examples

Zhangyang Wang; Yingzhen Yang; Zhaowen Wang; Shiyu Chang; Jianchao Yang; Thomas S. Huang

Single image super-resolution (SR) aims to estimate a high-resolution (HR) image from a low-resolution (LR) input. Image priors are commonly learned to regularize the otherwise seriously ill-posed SR problem, either using external LR-HR pairs or internal similar patterns. We propose joint SR to adaptively combine the advantages of both external and internal SR methods. We define two loss functions, one using sparse coding over external examples and one using epitomic matching over internal examples, as well as a corresponding adaptive weight to automatically balance their contributions according to their reconstruction errors. Extensive SR results demonstrate the effectiveness of the proposed method over existing state-of-the-art methods, which is also verified by our subjective evaluation study.
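The adaptive weighting described above can be illustrated with a minimal sketch: two candidate reconstructions are blended with weights inversely proportional to their reconstruction errors. The function name, the inverse-error rule, and the `eps` smoothing are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def adaptive_fusion(patch_ext, patch_int, err_ext, err_int, eps=1e-8):
    """Blend two candidate HR patches, weighting each inversely to its
    reconstruction error. A lower error yields a higher weight, so the
    more reliable candidate dominates the fused result."""
    w_ext = 1.0 / (err_ext + eps)
    w_int = 1.0 / (err_int + eps)
    return (w_ext * patch_ext + w_int * patch_int) / (w_ext + w_int)
```

With equal errors this reduces to a plain average; as one error shrinks, the fused patch converges to that candidate.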


Computer Vision and Pattern Recognition | 2016

Studying Very Low Resolution Recognition Using Deep Networks

Zhangyang Wang; Shiyu Chang; Yingzhen Yang; Ding Liu; Thomas S. Huang

Visual recognition research often assumes a sufficient resolution of the region of interest (ROI). That assumption is usually violated in practice, inspiring us to explore the Very Low Resolution Recognition (VLRR) problem. Typically, the ROI in a VLRR problem can be smaller than 16×16 pixels, and is challenging to recognize even for human experts. We attempt to solve the VLRR problem using deep learning methods. Taking advantage of techniques primarily in super resolution, domain adaptation and robust regression, we formulate a dedicated deep learning method and demonstrate how these techniques are incorporated step by step. Any extra complexity, when introduced, is fully justified by both analysis and simulation results. The resulting Robust Partially Coupled Networks model achieves feature enhancement and recognition simultaneously. It allows for both the flexibility to combat the LR-HR domain mismatch, and the robustness to outliers. Finally, the effectiveness of the proposed models is evaluated on three different VLRR tasks, including face identification, digit recognition and font recognition, all of which show very impressive performance.


British Machine Vision Conference | 2014

Regularized l1-Graph for Data Clustering

Yingzhen Yang; Zhangyang Wang; Jianchao Yang; Jiawei Han; Thomas S. Huang

l1-Graph has been proven to be effective in data clustering, which partitions the data space by using the sparse representation of the data as the similarity measure. However, the sparse representation is performed for each datum independently without taking into account the geometric structure of the data. Motivated by l1-Graph and manifold learning, we propose Regularized l1-Graph (Rl1-Graph) for data clustering. Compared to l1-Graph, the sparse representations of Rl1-Graph are regularized by the geometric information of the data. In accordance with the manifold assumption, the sparse representations vary smoothly along the geodesics of the data manifold through the graph Laplacian constructed by the sparse codes. Experimental results on various data sets demonstrate the superiority of our algorithm compared to l1-Graph and other competing clustering methods.
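The smoothness penalty that Rl1-Graph adds to the per-datum l1 objective can be sketched as the graph Laplacian regularizer tr(C L C^T). This is a minimal illustration assuming the affinity matrix W is given (e.g. from a kNN graph); the paper builds the Laplacian from the sparse codes themselves.

```python
import numpy as np

def laplacian_regularizer(C, W):
    """Compute tr(C L C^T) for a code matrix C (one column per datum) and
    a symmetric affinity matrix W, where L = D - W is the unnormalized
    graph Laplacian. By the standard identity this equals
    0.5 * sum_ij W_ij * ||c_i - c_j||^2, so small values mean the codes
    vary smoothly over the graph."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    return np.trace(C @ L @ C.T)
```

The penalty vanishes when connected data points share identical codes and grows with the weighted disagreement between neighbors.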


European Conference on Computer Vision | 2016

ℓ0-Sparse Subspace Clustering

Yingzhen Yang; Jiashi Feng; Nebojsa Jojic; Jianchao Yang; Thomas S. Huang

Subspace clustering methods with sparsity prior, such as Sparse Subspace Clustering (SSC) [1], are effective in partitioning data that lie in a union of subspaces. Most of those methods require certain assumptions, e.g. independence or disjointness, on the subspaces. These assumptions are not guaranteed to hold in practice and they limit the application of existing sparse subspace clustering methods. In this paper, we propose ℓ0-induced sparse subspace clustering (ℓ0-SSC). In contrast to the independence or disjointness assumptions required by most existing sparse subspace clustering methods, we prove that subspace-sparse representation, a key element in subspace clustering, can be obtained by ℓ0-SSC for arbitrary distinct underlying subspaces almost surely under a mild i.i.d. assumption on the data generation. We also present a "no free lunch" theorem: obtaining the subspace representation under our general assumptions cannot be much computationally cheaper than solving the corresponding ℓ0 problem of ℓ0-SSC. We develop a novel approximate algorithm named Approximate ℓ0-SSC (Aℓ0-SSC) that employs proximal gradient descent to obtain a sub-optimal solution to the optimization problem of ℓ0-SSC with a theoretical guarantee; the sub-optimal solution is used to build a sparse similarity matrix for clustering. Extensive experimental results on various data sets demonstrate the superiority of Aℓ0-SSC compared to other competing clustering methods.
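For an ℓ0 penalty, the proximal map in the proximal gradient descent of Aℓ0-SSC reduces to hard thresholding. The following is a simplified sketch under stated assumptions: a plain least-squares data term, self-representation excluded by zeroing, and illustrative parameter defaults that are not the paper's.

```python
import numpy as np

def hard_threshold(v, lam, step):
    """Proximal map of the scaled l0 penalty step*lam*||v||_0:
    keep entries whose squared magnitude exceeds 2*step*lam."""
    out = v.copy()
    out[v ** 2 <= 2.0 * step * lam] = 0.0
    return out

def a_l0_ssc_codes(X, lam=0.01, iters=100):
    """For each column x_i of X, approximately minimize
    ||x_i - X c||^2 + lam * ||c||_0 with c_i forced to zero, by gradient
    steps on the smooth term followed by hard thresholding."""
    n = X.shape[1]
    # Step size from the Lipschitz constant of the gradient, 2*||X||_2^2.
    step = 0.5 / (np.linalg.norm(X, 2) ** 2)
    C = np.zeros((n, n))
    for i in range(n):
        c = np.zeros(n)
        for _ in range(iters):
            grad = 2.0 * X.T @ (X @ c - X[:, i])
            c = hard_threshold(c - step * grad, lam, step)
            c[i] = 0.0  # exclude the trivial self-representation
        C[:, i] = c
    return C
```

The resulting codes can then be symmetrized, e.g. W = (|C| + |C|^T) / 2, to form the sparse similarity matrix used for spectral clustering.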




Visual Communications and Image Processing | 2015

Designing a composite dictionary adaptively from joint examples

Zhangyang Wang; Yingzhen Yang; Jianchao Yang; Thomas S. Huang

We study the complementary behaviors of external and internal examples in image restoration, and are motivated to formulate a composite dictionary design framework. The composite dictionary consists of the global part learned from external examples, and the sample-specific part learned from internal examples. The dictionary atoms in both parts are further adaptively weighted to emphasize their model statistics. Experiments demonstrate that the joint utilization of external and internal examples leads to substantial improvements, with successful applications in image denoising and super resolution.
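A rough sketch of reconstructing a signal with a weighted composite dictionary follows. For brevity it uses ridge-regularized least squares rather than the paper's sparse coding, and the per-atom weights are assumed given; names and defaults are illustrative.

```python
import numpy as np

def composite_reconstruct(y, D_ext, D_int, w_ext, w_int, lam=0.1):
    """Reconstruct y over the composite dictionary [D_ext | D_int] whose
    atoms are scaled by per-atom weights, via ridge-regularized least
    squares. Returns the reconstruction and the coefficients."""
    D = np.hstack([D_ext * w_ext, D_int * w_int])
    coef = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    return D @ coef, coef
```

Up-weighting the internal atoms biases the solution toward sample-specific structure, which mirrors the adaptive emphasis described in the abstract.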


International Conference on Acoustics, Speech, and Signal Processing | 2014

EPITOMIC IMAGE COLORIZATION

Yingzhen Yang; Xinqi Chu; Tian-Tsong Ng; Alex Yong Sang Chia; Jianchao Yang; Hailin Jin; Thomas S. Huang

Image colorization adds color to grayscale images. It not only increases the visual appeal of grayscale images, but also enriches the information conveyed by scientific images that lack color information. We develop a new image colorization method, epitomic image colorization, which automatically transfers color from the reference color image to the target grayscale image by a robust feature matching scheme using a new feature representation, namely the heterogeneous feature epitome. As a generative model, heterogeneous feature epitome is a condensed representation of image appearance which is employed for measuring the dissimilarity between reference patches and target patches in a way robust to noise in the reference image. We build a Markov Random Field (MRF) model with the learned heterogeneous feature epitome from the reference image, and inference in the MRF model achieves robust feature matching for transferring color. Our method renders better colorization results than the current state-of-the-art automatic colorization methods in our experiments.


Knowledge and Information Systems | 2016

Large-scale supervised similarity learning in networks

Shiyu Chang; Guo-Jun Qi; Yingzhen Yang; Charu C. Aggarwal; Jiayu Zhou; Meng Wang; Thomas S. Huang

The problem of similarity learning is relevant to many data mining applications, such as recommender systems, classification, and retrieval. This problem is particularly challenging in the context of networks, which contain different aspects such as the topological structure, content, and user supervision. These different aspects need to be combined effectively in order to create a holistic similarity function. In particular, while most similarity learning methods in networks, such as SimRank, utilize the topological structure, the user supervision and content are rarely considered. In this paper, a factorized similarity learning (FSL) is proposed to integrate the link, node content, and user supervision into a uniform framework. This is learned by using matrix factorization, and the final similarities are approximated by the span of low-rank matrices. The proposed framework is further extended to a noise-tolerant version by adopting a hinge loss instead. To facilitate efficient computation on large-scale data, a parallel extension is developed. Experiments are conducted on the DBLP and CoRA data sets. The results show that FSL is robust and efficient and outperforms the state of the art. The code for the learning algorithm used in our experiments is available at http://www.ifp.illinois.edu/~chang87/.
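The low-rank view above, approximating similarities by the span of low-rank factors, can be sketched for the symmetric case via an eigendecomposition. This is a simplified illustration; FSL itself learns the factors jointly from links, node content, and supervision rather than from a single observed matrix.

```python
import numpy as np

def low_rank_similarity(S, rank):
    """Approximate a symmetric (possibly noisy) similarity matrix S by
    U @ U.T where U spans the top `rank` eigendirections. Negative
    eigenvalues are clipped so the factorization stays real-valued."""
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:rank]
    U = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))
    return U @ U.T
```

When S is exactly rank-`rank` and positive semidefinite, the approximation recovers it; otherwise it acts as a denoising projection onto the low-rank span.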


International Journal of Computer Vision | 2018

Subspace Learning by ℓ0-Induced Sparsity

Yingzhen Yang; Jiashi Feng; Nebojsa Jojic; Jianchao Yang; Thomas S. Huang

Subspace clustering methods partition the data that lie in or close to a union of subspaces in accordance with the subspace structure. Such methods with sparsity prior, such as sparse subspace clustering (SSC) (Elhamifar and Vidal in IEEE Trans Pattern Anal Mach Intell 35(11):2765–2781, 2013) with the sparsity induced by the

Collaboration

Top co-authors of Yingzhen Yang:

Jiashi Feng (National University of Singapore)
Shuicheng Yan (National University of Singapore)
Haichao Zhang (Northwestern Polytechnical University)
Jiayu Zhou (Michigan State University)