Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yanduo Zhang is active.

Publication


Featured researches published by Yanduo Zhang.


IEEE Access | 2017

Robust Face Super-Resolution via Locality-Constrained Low-Rank Representation

Tao Lu; Zixiang Xiong; Yanduo Zhang; Bo Wang; Tongwei Lu

Learning-based face super-resolution relies on obtaining accurate a priori knowledge from the training data. Representation-based approaches (e.g., sparse representation-based and neighbor embedding-based schemes) decompose the input images using sophisticated regularization techniques and give reasonably good reconstruction performance. However, in real application scenarios, the input images are often noisy, blurry, or suffer from other unknown degradations. Traditional face super-resolution techniques treat image noise at the pixel level without considering the underlying image structures. To rectify this shortcoming, we propose in this paper a unified framework for representation-based face super-resolution by introducing a locality-constrained low-rank representation (LLR) scheme to reveal the intrinsic structures of input images. The low-rank representation part of LLR clusters an input image into the most accurate subspace from a global dictionary of atoms, while the locality constraint enables recovery of local manifold structures from local patches. In addition, low-rank, sparsity, locality, accuracy, and robustness of the representation coefficients are exploited in LLR via regularization. Experiments on the FEI and CMU face databases and a real surveillance scenario show that LLR outperforms state-of-the-art face super-resolution algorithms (e.g., convolutional neural network-based deep learning) both objectively and subjectively.
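The locality constraint in LLR can be illustrated with a toy coding step: coefficients on dictionary atoms far from the input are penalized more heavily, biasing the code toward nearby atoms. The sketch below is a simplified, hypothetical variant (the function name and the exact penalty are illustrative, not the paper's full LLR formulation, which also enforces low rank):

```python
import numpy as np

def locality_constrained_code(x, D, lam=0.01):
    """Code x over dictionary D (atoms as columns) with a locality
    penalty: coefficients on atoms far from x are discouraged.
    Simplified stand-in for the locality part of LLR coding."""
    dist = np.linalg.norm(D - x[:, None], axis=0)   # locality adaptor
    P = np.diag(dist)
    # closed-form minimizer of ||x - D c||^2 + lam * ||P c||^2
    c = np.linalg.solve(D.T @ D + lam * P.T @ P, D.T @ x)
    return c

# toy check: an input equal to a dictionary atom selects that atom
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 20))
c = locality_constrained_code(D[:, 3], D)
```

Because atom 3 has zero distance to the input, the unique minimizer puts essentially all weight on it, reconstructing the input exactly.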


IEEE Transactions on Image Processing | 2014

MsLRR: A Unified Multiscale Low-Rank Representation for Image Segmentation

Xiaobai Liu; Qian Xu; Jiayi Ma; Hai Jin; Yanduo Zhang

In this paper, we present an efficient multiscale low-rank representation for image segmentation. Our method begins by partitioning the input images into a set of superpixels, followed by seeking the optimal superpixel-pair affinity matrix, both of which are performed at multiple scales of the input images. Since low-level superpixel features are usually corrupted by image noise, we propose to infer a low-rank refined affinity matrix. The inference is guided by two observations on natural images. First, within a single image, local small-size image patterns tend to recur frequently within the same semantic region but may not appear in semantically different regions. We refer to these internal image statistics as the replication prior and justify it quantitatively on real image databases. Second, the affinity matrices at different scales should be solved consistently, which leads to a cross-scale consistency constraint. We formulate these two purposes in one unified formulation and develop an efficient optimization procedure. The proposed representation can be used for both unsupervised and supervised image segmentation tasks. Our experiments on public data sets demonstrate that the presented method can substantially improve segmentation accuracy.
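The low-rank refinement of a noisy affinity matrix can be pictured with singular value thresholding, a standard surrogate for low-rank recovery (a generic sketch, not the paper's multiscale solver):

```python
import numpy as np

def lowrank_refine(A, tau=1.0):
    """Denoise an affinity matrix by soft-thresholding its singular
    values; small (noise-dominated) components are removed."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# two-segment affinity (rank 2) corrupted by additive noise
rng = np.random.default_rng(0)
block = np.kron(np.eye(2), np.ones((5, 5)))
noisy = block + 0.05 * rng.standard_normal((10, 10))
refined = lowrank_refine(noisy, tau=1.0)
```

The noise contributes only small singular values, so thresholding restores the two-segment (rank-2) structure.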


Journal of Zhejiang University Science C | 2017

Efficient vulnerability detection based on an optimized rule-checking static analysis technique

Deng Chen; Yanduo Zhang; Wei Wei; Shi-xun Wang; Rubing Huang; Xiaolin Li; Binbin Qu; Sheng Jiang

Static analysis is an efficient approach to software assurance. It is most effective when performed interactively throughout the software development process, which imposes a high performance requirement. This paper concentrates on rule-based static analysis tools and proposes an optimized rule-checking algorithm. Our technique improves the performance of static analysis tools by filtering vulnerability rules in terms of characteristic objects before checking source files. Since a source file typically contains vulnerabilities for only a small subset of the rules rather than all of them, our approach can achieve better performance. To investigate our technique's feasibility and effectiveness, we implemented it in an open-source static analysis tool called PMD and used it to conduct experiments. Experimental results show that our approach obtains an average performance improvement of 28.7% over the original PMD. Our approach remains effective and precise in detecting vulnerabilities, with no side effects.
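The filtering idea can be sketched in a few lines: each rule declares the characteristic objects (e.g., API or class names) it can fire on, and a source file is checked only against rules whose objects actually occur in it. Rule names and objects below are illustrative, not PMD's actual rule set:

```python
# rule name -> characteristic objects (illustrative, not PMD's rules)
RULES = {
    "CloseResource": {"FileInputStream", "Connection"},
    "SystemPrintln": {"System.out", "System.err"},
    "WeakCrypto":    {"MD5", "DES"},
}

def relevant_rules(source: str):
    """Keep only the rules whose characteristic objects appear in the
    file, so the expensive per-rule check runs on a smaller rule set."""
    return [name for name, objs in RULES.items()
            if any(obj in source for obj in objs)]

src = 'Connection c = pool.get(); System.out.println(c);'
```

For this file, only the first two rules need to be checked; the crypto rule is filtered out before any real analysis runs.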


Visual Communications and Image Processing | 2013

From local representation to global face hallucination: A novel super-resolution method by nonnegative feature transformation

Tao Lu; Ruimin Hu; Zhen Han; Junjun Jiang; Yanduo Zhang

Most global face hallucination methods treat the face as a whole, ignoring the fact that the face is composed of part-based organs; therefore, the results obtained by these methods always lack detailed information. Face hallucination based on nonnegative matrix factorization (NMF) is well suited to enhancing such detail. Usually, however, the NMF basis is learned only from high-resolution (HR) samples, leading to over-smooth output lacking high-frequency details. To solve this problem, we propose a simple but novel face hallucination method using nonnegative feature transformation in a two-step framework. In particular, we learn NMF bases from low-resolution (LR) and HR samples separately, then transform the local representation features of the input into the global representation subspace, keeping the weights in the HR sample space for output. Furthermore, maximum a posteriori (MAP) estimation is used to obtain a better output. Experiments show that the hallucinated faces produced by the proposed method not only contain more high-frequency details but also outperform those of many state-of-the-art algorithms.
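The two-step idea, coding the LR input over an LR basis and then reusing the nonnegative weights with the HR basis, can be sketched as follows (the multiplicative-update coder is a generic NMF inference step and the variable names are illustrative; the MAP refinement is omitted):

```python
import numpy as np

def nmf_code(x, W, iters=2000):
    """Infer nonnegative coefficients h with x ~ W h via the standard
    multiplicative update (x and W must be nonnegative)."""
    h = np.full(W.shape[1], 0.1)
    for _ in range(iters):
        h *= (W.T @ x) / (W.T @ (W @ h) + 1e-12)
    return h

def hallucinate(x_lr, W_lr, W_hr):
    """Code the LR input over the LR basis, keep the weights, and
    reconstruct in the HR sample space."""
    return W_hr @ nmf_code(x_lr, W_lr)

# toy coupled bases: the LR basis is a crude 2x down-sampling of the HR one
rng = np.random.default_rng(1)
W_hr = rng.random((8, 3))
W_lr = W_hr[::2] + W_hr[1::2]
x_lr = W_lr @ np.array([0.5, 1.0, 0.2])       # synthetic LR input
x_hr = hallucinate(x_lr, W_lr, W_hr)
```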


Multimedia Tools and Applications | 2018

Robust and efficient face recognition via low-rank supported extreme learning machine

Tao Lu; Yingjie Guan; Yanduo Zhang; Shenming Qu; Zixiang Xiong

Recently, face recognition algorithms have made great progress in various real-world applications, e.g., authentication and criminal investigation. Deep learning offers an end-to-end paradigm for vision recognition tasks and achieves good performance. However, designing and training complex network architectures is time-consuming and labor-intensive. Moreover, under complex scenarios, illumination changes, noise, or occlusion in images degrade the performance of recognition algorithms. To ameliorate these issues, we propose an efficient three-layered low-rank supported extreme learning machine (LSELM) algorithm for face recognition, which improves recognition performance under complex scenarios with high efficiency. In the first layer, a given probe sample is clustered into a certain training subspace as pre-clustering. In the second layer, within this subspace, a low-rank subspace of the probe sample is recovered by low-rank decomposition as a robust feature that is insensitive to disguise, noise, expression variation, or illumination. Finally, these low-rank discriminative features are encoded to train a feed-forward neural network termed LSELM. Experimental results indicate that the proposed approach is on par with some deep-learning-based face recognition algorithms in recognition performance but has lower time complexity on popular face datasets, e.g., the AR, Extended Yale-B, CMU PIE, and LFW datasets.
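The extreme learning machine component admits a compact sketch: hidden-layer weights are random and fixed, and only the output weights are solved in closed form, which is what makes training fast. This omits the paper's pre-clustering and low-rank feature steps; the data and names are illustrative:

```python
import numpy as np

def train_elm(X, Y, n_hidden=40, seed=0):
    """Basic ELM recipe: random hidden layer plus least-squares output
    weights, with no iterative back-propagation."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # fixed random nonlinear features
    beta = np.linalg.pinv(H) @ Y      # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy demo: two well-separated 2-D classes with one-hot labels
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((20, 2)),
               rng.standard_normal((20, 2)) + 4.0])
Y = np.repeat(np.eye(2), 20, axis=0)
W, b, beta = train_elm(X, Y)
acc = (predict_elm(X, W, b, beta).argmax(axis=1) == Y.argmax(axis=1)).mean()
```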


International Conference on Multimedia and Expo | 2017

DLML: Deep linear mappings learning for face super-resolution with nonlocal-patch

Tao Lu; Lanlan Pan; Junjun Jiang; Yanduo Zhang; Zixiang Xiong

Learning-based face super-resolution approaches rely on a representative dictionary, serving as a self-similarity prior drawn from training samples, to estimate the relationship between low-resolution (LR) and high-resolution (HR) image patches. The most popular approaches learn a mapping function directly from LR patches to HR ones but neglect the multi-layered nature of the image degradation process (resolution down-sampling), in which observed LR images are formed gradually from their HR versions through progressively lower resolutions. In this paper, we present a novel deep linear mappings learning framework for face super-resolution that learns the complex relationship between LR and HR features by alternately updating multi-layered embedding dictionaries and linear mapping matrices rather than mapping directly. Furthermore, in contrast to existing position-based studies that use only local patches as a self-similarity prior, we develop a feature-induced nonlocal dictionary-pair embedding method to support hierarchical learning of multiple linear mappings. With the coarse-to-fine nature of a deep learning architecture, cascaded incremental linear mapping matrices can exploit the complex relationship between LR and HR images. Experimental results demonstrate that this framework outperforms the state of the art (including both general super-resolution approaches and face super-resolution approaches) on the FEI face database.
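In its simplest form, the cascaded-linear-mappings idea reduces to stacking ridge-regression maps, where each layer refits from the current estimate toward the HR targets. A hypothetical sketch, with no embedding dictionaries and illustrative names:

```python
import numpy as np

def learn_cascade(X_lr, X_hr, layers=3, lam=1e-3):
    """Each layer solves a ridge regression from the current features
    toward the HR targets; later layers refine earlier estimates."""
    maps, cur = [], X_lr
    for _ in range(layers):
        M = np.linalg.solve(cur.T @ cur + lam * np.eye(cur.shape[1]),
                            cur.T @ X_hr)
        maps.append(M)
        cur = cur @ M                 # coarse-to-fine refinement
    return maps

def apply_cascade(x, maps):
    for M in maps:
        x = x @ M
    return x

# toy features: 4-D LR features derived linearly from 6-D HR features
rng = np.random.default_rng(0)
X_hr = rng.standard_normal((100, 6))
X_lr = X_hr @ rng.standard_normal((6, 4))
maps = learn_cascade(X_lr, X_hr)
err = np.linalg.norm(X_hr - apply_cascade(X_lr, maps))
```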


International Conference on Parallel and Distributed Systems | 2016

Very Low-Resolution Face Recognition via Semi-Coupled Locality-Constrained Representation

Tao Lu; Wei Yang; Yanduo Zhang; Xiaolin Li; Zixiang Xiong

Recognition tasks on very low-resolution (VLR) images are more challenging than those on high-resolution (HR) images due to the lack of adequate discriminative information. Previous VLR-HR coupled learning schemes limit both the representation and discriminative ability of features. In this work, we propose a semi-coupled locality-constrained representation (SLR) approach to learn discriminative representations and the mapping relationship between VLR and HR features simultaneously. Both VLR and HR local manifold geometries are encoded during representation, while the learned mapping function improves manifold consistency by transforming VLR features into HR ones. Finally, the resolution-robust features are fed into a sparse representation-based classifier (SRC) to predict the face labels. The proposed algorithm gives better performance than many state-of-the-art VLR recognition algorithms.


Software Engineering and Knowledge Engineering | 2015

Mining Universal Specification Based on Probabilistic Model

Deng Chen; Yanduo Zhang; Rongcun Wang; Xun Li; Li Peng; Wei Wei

Class temporal specification is an important kind of program specification, stating that the methods of a class should be called in a particular sequence. Dynamic specification mining is a promising approach to obtaining such specifications automatically. However, existing methods always infer partial specifications; that is, the mined specifications are biased toward the input programs or program execution traces. In this paper, we propose to mine class temporal specifications based on a probabilistic model in an online mode. Since our method can evolve mined specifications persistently, universal specifications can be achieved. To investigate our technique's feasibility and effectiveness, we implemented it in a prototype tool, ISpecMiner, and used the tool to perform experiments. Experimental results show that our method is promising for inferring universal specifications if sufficient traces are provided for mining.
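The online, probabilistic flavor can be sketched with a first-order Markov model over successive method calls, whose counts evolve as more traces arrive (an illustrative toy, not ISpecMiner's actual model):

```python
from collections import defaultdict

class TransitionMiner:
    """Accumulate counts of successive method calls across traces and
    expose the estimated transition probabilities."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, trace):
        # update counts online for each adjacent call pair
        for a, b in zip(trace, trace[1:]):
            self.counts[a][b] += 1

    def prob(self, a, b):
        total = sum(self.counts[a].values())
        return self.counts[a][b] / total if total else 0.0

miner = TransitionMiner()
miner.observe(["open", "read", "close"])
miner.observe(["open", "write", "close"])
```

Feeding more traces keeps refining the probabilities, which is what lets the mined specification evolve toward a universal one.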


International Congress on Image and Signal Processing | 2016

Design of a panorama parking system based on DM6437

Deng Chen; Yanduo Zhang; Wei Wei; Xiaolin Li; Xun Li; Tao Lu; Huabing Zhou; Rui Zhu; Haijiao Xu; Li Peng

Parking a car in a crowded parking lot is nontrivial for many drivers. Car cameras have been used extensively to assist parking; however, due to the limitation of the angle of view (AoV), dead zones still exist. In this paper, we present the design of a panorama parking system based on the DM6437. Our system captures video around a car through four wide-angle cameras mounted on different sides of the car. It then leverages image mosaic techniques to provide a real-time panoramic video of the car's surroundings. With the help of our system, drivers can observe the environment around the car up to three meters away from a top view. Experimental results show that our system satisfies the requirements for practical use and provides strong assurance for parking cars safely.


Software Engineering and Knowledge Engineering | 2015

Extracting More Object Usage Scenarios for API Protocol Mining

Deng Chen; Yanduo Zhang; Rongcun Wang; Binbin Qu; Jianping Ju; Wei Wei

Automatic protocol mining is a promising approach to inferring precise and complete API protocols. However, the effectiveness of the approach largely depends on the quality of the input object usage scenarios, in terms of noise and diversity. This paper aims to extract as many object usage scenarios as possible from object-oriented programs for automatic protocol mining. A large corpus of object usage scenarios helps eliminate noise accurately and is likely to be diverse; therefore, precise and complete protocols may be achieved. Given an object-oriented program p, the number of object usage scenarios that can generally be collected from a run of p is no more than the number of instances used in p. By relying on the inheritance relationships among classes, our technique can extract up to n times more object usage scenarios from p, where n is the average inheritance depth of all object usage scenarios in p. To investigate the effect of our technique on protocol mining, we implemented it in our previous prototype tool, ISpecMiner, and used the tool to mine protocols from several real-world applications. The experimental results show that our technique is promising for achieving complete and precise API protocols. In addition, protocols of classes that have not been used in the programs can also be obtained, which is helpful for program documentation and understanding.
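The inheritance trick can be sketched directly: each recorded (class, call sequence) scenario is also credited to every ancestor class, so supertypes gain usage scenarios without extra program runs (the class names and flat parent map below are illustrative):

```python
def expand_scenarios(scenarios, superclass):
    """Credit each call sequence to its class and all of its ancestors,
    multiplying the usable scenarios by the inheritance depth."""
    expanded = []
    for cls, calls in scenarios:
        c = cls
        while c is not None:          # walk up the inheritance chain
            expanded.append((c, calls))
            c = superclass.get(c)
    return expanded

# illustrative three-level hierarchy and one observed scenario
hierarchy = {"ArrayList": "AbstractList", "AbstractList": "List", "List": None}
raw = [("ArrayList", ("add", "get", "clear"))]
out = expand_scenarios(raw, hierarchy)
```

One observed `ArrayList` scenario yields three mining inputs, including one for `List`, a type the program never instantiated directly.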

Collaboration


Dive into Yanduo Zhang's collaborations.

Top Co-Authors

Tao Lu

Wuhan Institute of Technology

Deng Chen

Wuhan Institute of Technology

Huabing Zhou

Wuhan Institute of Technology

Wei Wei

Wuhan Institute of Technology

Xiaolin Li

Wuhan Institute of Technology

Rongcun Wang

China University of Mining and Technology

Junjun Jiang

China University of Geosciences

Xun Li

Wuhan Institute of Technology
