
Publications


Featured research published by Yan-Ming Zhang.


European Conference on Machine Learning | 2013

Fast kNN graph construction with locality sensitive hashing

Yan-Ming Zhang; Kaizhu Huang; Guanggang Geng; Cheng-Lin Liu

The k-nearest-neighbor (kNN) graph, perhaps the most popular graph in machine learning, plays an essential role in graph-based learning methods. Despite its many elegant properties, the brute-force kNN graph construction method has computational complexity of O(n^2), which is prohibitive for large-scale data sets. In this paper, based on the divide-and-conquer strategy, we propose an efficient algorithm for approximating kNN graphs, which has a time complexity of only O(l(d + log n)n) (d is the dimensionality and l is usually a small number). This is much faster than most existing fast methods. Specifically, we use the locality sensitive hashing technique to divide items into small subsets of equal size, and then build one kNN graph on each subset using the brute-force method. To enhance the approximation quality, we repeat this procedure several times to generate multiple basic approximate graphs, and combine them to yield a high-quality graph. Compared with existing methods, the proposed approach is: (1) much more efficient; (2) applicable to generic similarity measures; (3) easy to parallelize. Finally, on three large-scale benchmark data sets, our method clearly outperforms existing fast methods.
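As an illustration of this divide-and-conquer scheme, here is a minimal Python sketch: random-hyperplane LSH splits the points into buckets, an exact kNN graph is built inside each bucket, and the candidate lists from several hashing rounds are merged. All names and parameters are illustrative; the paper's algorithm additionally enforces equal-size subsets, which this sketch does not.

```python
import math
import random
from collections import defaultdict

def brute_force_knn(points, ids, k):
    """Exact kNN restricted to a small subset of point indices: O(m^2)."""
    graph = {}
    for i in ids:
        dists = sorted((math.dist(points[i], points[j]), j) for j in ids if j != i)
        graph[i] = [j for _, j in dists[:k]]
    return graph

def lsh_knn_graph(points, k=3, rounds=4, bits=2, seed=0):
    """Approximate kNN graph via repeated LSH partitioning.

    Each round hashes points with random hyperplanes (sign of dot product),
    builds an exact kNN graph inside every bucket, and the rounds' candidate
    neighbor sets are merged; finally the k closest candidates are kept."""
    rng = random.Random(seed)
    d = len(points[0])
    candidates = defaultdict(set)
    for _ in range(rounds):
        planes = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(bits)]
        buckets = defaultdict(list)
        for idx, p in enumerate(points):
            key = tuple(sum(a * b for a, b in zip(plane, p)) >= 0
                        for plane in planes)
            buckets[key].append(idx)
        for ids in buckets.values():
            for i, nbrs in brute_force_knn(points, ids, k).items():
                candidates[i].update(nbrs)
    # keep only the k closest merged candidates per node
    graph = {}
    for i, cand in candidates.items():
        ranked = sorted(cand, key=lambda j: math.dist(points[i], points[j]))
        graph[i] = ranked[:k]
    return graph
```

Merging several independent hash rounds is what repairs the buckets' boundary errors: a true neighbor split away in one round is likely co-hashed in another.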


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

Drawing and Recognizing Chinese Characters with Recurrent Neural Network

Xu-Yao Zhang; Fei Yin; Yan-Ming Zhang; Cheng-Lin Liu; Yoshua Bengio

Recent deep-learning-based approaches have achieved great success in handwriting recognition. Chinese characters are among the most widely adopted writing systems in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect of understanding a language; another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework that uses the recurrent neural network (RNN) as both a discriminative model for recognizing Chinese characters and a generative model for drawing (generating) Chinese characters. To recognize Chinese characters, previous methods usually adopt convolutional neural network (CNN) models, which require transforming the online handwriting trajectory into image-like representations. Instead, our RNN-based approach is an end-to-end system that directly deals with the sequential structure and does not require any domain-specific knowledge. With the RNN system (combining LSTM and GRU), state-of-the-art performance can be achieved on the ICDAR-2013 competition database. Furthermore, under the RNN framework, a conditional generative model with character embedding is proposed for automatically drawing recognizable Chinese characters. The generated characters (in vector format) are human-readable and can also be recognized by the discriminative RNN model with high accuracy. Experimental results verify the effectiveness of using RNNs as both generative and discriminative models for the tasks of drawing and recognizing Chinese characters.
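As a toy illustration of modeling a handwriting trajectory directly as a sequence, the sketch below runs a vanilla RNN over (dx, dy, pen_down) steps and turns the final hidden state into class probabilities. This is only a didactic stand-in: the paper's system uses trained LSTM/GRU layers, and every name and weight here is hypothetical.

```python
import math
import random

def rnn_classify(trajectory, W_xh, W_hh, W_hy, hidden=8):
    """Vanilla-RNN forward pass over an online handwriting trajectory
    given as (dx, dy, pen_down) steps; returns softmax class probabilities.
    A sketch of sequence-level modeling, not the paper's LSTM/GRU system."""
    h = [0.0] * hidden
    for step in trajectory:
        # h_t = tanh(W_xh x_t + W_hh h_{t-1}); old h is read before reassignment
        h = [math.tanh(sum(W_xh[i][j] * x for j, x in enumerate(step))
                       + sum(W_hh[i][j] * hj for j, hj in enumerate(h)))
             for i in range(hidden)]
    scores = [sum(W_hy[c][i] * hi for i, hi in enumerate(h))
              for c in range(len(W_hy))]
    m = max(scores)  # stabilized softmax
    exps = [math.exp(s - m) for s in scores]
    return [e / sum(exps) for e in exps]

# usage with random (untrained) weights: 4 hidden units, 3 classes, 3 inputs
rng = random.Random(1)
W_xh = [[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(4)]
W_hh = [[rng.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(4)]
W_hy = [[rng.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(3)]
probs = rnn_classify([(0.1, 0.0, 1.0), (0.0, 0.1, 1.0)], W_xh, W_hh, W_hy, hidden=4)
```

The point of the sketch is the interface: the raw stroke sequence is consumed step by step, with no rendering of the trajectory into an image.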


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Learning locality preserving graph from data

Yan-Ming Zhang; Kaizhu Huang; Xinwen Hou; Cheng-Lin Liu

Machine learning based on graph representation, or manifold learning, has attracted great interest in recent years. As the discrete approximation of the data manifold, the graph plays a crucial role in these learning approaches. In this paper, we propose a novel learning method for graph construction, which is distinct from previous methods in that it solves an optimization problem with the aim of directly preserving the local information of the original data set. We show that the proposed objective has close connections with the popular Laplacian eigenmap problem and is hence well justified. The optimization turns out to be a quadratic programming problem with n(n - 1)/2 variables (n is the number of data points). Exploiting the sparsity of the graph, we further propose a more efficient cutting-plane algorithm to solve the problem, making the method more scalable in practice. In the context of clustering and semi-supervised learning, we demonstrate the advantages of the proposed method through experiments.
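The connection to the Laplacian eigenmap can be made concrete: for a candidate weight matrix W, the locality-preserving quantity is sum_ij W_ij ||x_i - x_j||^2. A minimal sketch of evaluating that objective (illustrative only; the paper learns W via a QP and a cutting-plane solver, which is not reproduced here):

```python
import math

def laplacian_objective(points, W):
    """Evaluate sum_ij W[i][j] * ||x_i - x_j||^2, the quantity that links
    graph construction to the Laplacian eigenmap problem. For a symmetric
    graph on n points there are n*(n-1)/2 free weights, matching the QP size."""
    n = len(points)
    return sum(W[i][j] * math.dist(points[i], points[j]) ** 2
               for i in range(n) for j in range(n))
```

For example, with points (0,0), (1,0), (0,2), a graph weighting the short edge (0,0)-(1,0) scores 2.0, while one weighting the long edge (0,0)-(0,2) scores 8.0, so minimizing this objective favors edges between nearby points.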


International Conference on Data Mining | 2011

Fast and Robust Graph-based Transductive Learning via Minimum Tree Cut

Yan-Ming Zhang; Kaizhu Huang; Cheng-Lin Liu

In this paper, we propose an efficient and robust algorithm for graph-based transductive classification. After approximating a graph with a spanning tree, we develop a linear-time algorithm to label the tree such that the cut size of the tree is minimized. This significantly improves typical graph-based methods, which either have a cubic time complexity (for a dense graph) or O(kn^2) (for a sparse graph with k denoting the node degree). In addition to its great scalability on large data, our proposed algorithm demonstrates high robustness and accuracy. In particular, on a graph with 400,000 nodes (in which 10,000 nodes are labeled) and 10,455,545 edges, our algorithm achieves the highest accuracy of 99.6% but takes less than


IEEE Transactions on Neural Networks | 2015

MTC: A Fast and Robust Graph-Based Transductive Learning Method

Yan-Ming Zhang; Kaizhu Huang; Guanggang Geng; Cheng-Lin Liu


Pattern Recognition | 2016

Adaptive spatial pooling for image classification

Yinglu Liu; Yan-Ming Zhang; Xu-Yao Zhang; Cheng-Lin Liu


Pattern Recognition | 2014

Minimum-risk training for semi-Markov conditional random fields with application to handwritten Chinese/Japanese text recognition

Xiang-Dong Zhou; Yan-Ming Zhang; Feng Tian; Hong-An Wang; Cheng-Lin Liu


European Conference on Machine Learning | 2009

Subspace Regularization: A New Semi-supervised Learning Method

Yan-Ming Zhang; Xinwen Hou; Shiming Xiang; Cheng-Lin Liu


International Symposium on Neural Networks | 2014

Integrating supervised subspace criteria with restricted Boltzmann Machine for feature extraction

Guo-Sen Xie; Xu-Yao Zhang; Yan-Ming Zhang; Cheng-Lin Liu


IEEE Signal Processing Letters | 2014

Combination of Classification and Clustering Results with Label Propagation

Xu-Yao Zhang; Peipei Yang; Yan-Ming Zhang; Kaizhu Huang; Cheng-Lin Liu
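The minimum tree cut idea behind the transductive-learning papers above (label a spanning tree so that as few tree edges as possible connect differently labeled nodes) can be sketched as a linear-time dynamic program. A hypothetical sketch assuming binary labels and an adjacency-list tree; the papers' exact procedure may differ:

```python
def min_tree_cut(adj, labels):
    """Label the unlabeled nodes of a tree to minimize the cut size
    (number of edges whose endpoints get different labels).
    adj: adjacency lists of a tree; labels: dict node -> 0/1 seed labels.
    Runs in O(n) via a bottom-up/top-down tree DP."""
    n = len(adj)
    INF = float("inf")
    # iterative DFS order (parents before children), avoids recursion limits
    order, parent = [], {0: None}
    stack = [0]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    # cost[v][c]: minimum cut inside v's subtree if v takes label c
    cost = [[0, 0] for _ in range(n)]
    for v in reversed(order):
        for c in (0, 1):
            if v in labels and labels[v] != c:
                cost[v][c] = INF  # seed labels are hard constraints
                continue
            cost[v][c] = sum(min(cost[u][c], cost[u][1 - c] + 1)
                             for u in adj[v] if u != parent[v])
    # top-down backtracking of the optimal labeling
    assign = {}
    for v in order:
        if parent[v] is None:
            assign[v] = 0 if cost[v][0] <= cost[v][1] else 1
        else:
            p = assign[parent[v]]
            assign[v] = p if cost[v][p] <= cost[v][1 - p] + 1 else 1 - p
    return assign
```

For instance, on a path 0-1-2-3-4 with node 0 seeded 0 and node 4 seeded 1, the DP places the single cut edge between nodes 3 and 4. Restricting the graph to a spanning tree is exactly what turns the expensive graph labeling into this linear-time pass.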

Collaboration


Dive into Yan-Ming Zhang's collaboration.

Top Co-Authors

Cheng-Lin Liu (Chinese Academy of Sciences)
Kaizhu Huang (Xi'an Jiaotong-Liverpool University)
Xu-Yao Zhang (Chinese Academy of Sciences)
Guanggang Geng (Chinese Academy of Sciences)
Xinwen Hou (Chinese Academy of Sciences)
Fei Yin (Chinese Academy of Sciences)
Feng Tian (Chinese Academy of Sciences)
Guo-Sen Xie (Chinese Academy of Sciences)
Hong-An Wang (Chinese Academy of Sciences)
Jun-Yu Ye (Chinese Academy of Sciences)