Publication


Featured research published by Jufeng Yang.


International Conference on Document Analysis and Recognition | 2009

HMM-Based Online Recognition of Handwritten Chemical Symbols

Yang Zhang; Guangshun Shi; Jufeng Yang

In this paper, we present an online handwriting recognition method for chemical symbols, which are widely used in education and academic communication. The method is based on Hidden Markov Models (HMMs), which are increasingly used to model characters. We build one HMM per symbol and use 11-dimensional local features suited to online handwriting recognition, obtaining top-1 accuracy of 89.5% and top-3 accuracy of 98.7% on a dataset of 5,670 training samples and 2,016 test samples. These initial results are promising and warrant further research in this direction.
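The one-HMM-per-symbol design above can be sketched as follows: score the observation sequence under every symbol's model and return the best-scoring label. This is a minimal pure-Python sketch with discrete observations and toy parameters; the paper itself uses 11-dimensional continuous local features, so everything below is a hypothetical stand-in.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm."""
    n = len(start)
    alpha = [math.log(start[i] * emit[i][obs[0]]) for i in range(n)]
    for o in obs[1:]:
        alpha = [
            math.log(sum(math.exp(alpha[i]) * trans[i][j] for i in range(n))
                     * emit[j][o])
            for j in range(n)
        ]
    return math.log(sum(math.exp(a) for a in alpha))

def classify(obs, models):
    """Score the sequence under every symbol's HMM; return the best label."""
    return max(models, key=lambda sym: forward_log_likelihood(obs, *models[sym]))

# Toy 2-state HMMs over a binary observation alphabet (hypothetical parameters).
models = {
    "H": ([0.5, 0.5], [[0.8, 0.2], [0.2, 0.8]],
          [[0.9, 0.1], [0.8, 0.2]]),   # emits mostly symbol 0
    "O": ([0.5, 0.5], [[0.8, 0.2], [0.2, 0.8]],
          [[0.1, 0.9], [0.2, 0.8]]),   # emits mostly symbol 1
}
```

A sequence dominated by symbol 0 is then attributed to "H", one dominated by symbol 1 to "O"; a top-3 list would simply keep the three best-scoring models instead of the single maximum.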


International Conference on Document Analysis and Recognition | 2009

The Understanding and Structure Analyzing for Online Handwritten Chemical Formulas

Xin Wang; Guangshun Shi; Jufeng Yang

In this paper, we propose a novel approach for understanding and analyzing online handwritten chemical formulas. Using structural characteristics, semantic rules, and, most importantly, grammatical rules, the analysis is divided into three levels: formula level, molecule level, and text level. A formal description of chemical formulas based on the grammatical rules is derived and applied to the analysis, which generates grammar spanning graphs from the analyzed result step by step; these graphs are then used for further structure representation and data retrieval. Our work, an important component of applying mobile-computing research in education, proves effective and promising.
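To give a flavor of the grammar-based formal description, here is a toy molecule-level parser: it reads a flat formula with a tiny grammar (Element := capital letter plus optional lowercase letter; Count := optional digits) and returns element counts. This is a hypothetical simplification of the paper's multi-level grammar; it handles no parentheses or structural layout.

```python
import re

def parse_formula(formula):
    """Parse a flat chemical formula into element counts using a small
    regular grammar: each token is an element symbol followed by an
    optional integer count (default 1)."""
    counts = {}
    for sym, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[sym] = counts.get(sym, 0) + (int(num) if num else 1)
    return counts
```

For example, `parse_formula("H2SO4")` yields the counts for H, S, and O; a full analyzer would additionally build the grammar spanning graph linking such molecule-level results into the formula level.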


International Conference on Multimedia and Expo | 2016

Discovering affective regions in deep convolutional neural networks for visual sentiment prediction

Ming Sun; Jufeng Yang; Kai Wang; Hui Shen

In this paper, we address the problem of automatically recognizing emotions in still images. While most current work focuses on improving whole-image representations using CNNs, we argue that discovering affective regions and supplementing local features boosts performance, inspired by the observation that both global distributions and salient objects carry strong sentiment. We propose an algorithm to discover affective regions within a deep framework: an off-the-shelf tool generates N object proposals from a query image, and these proposals are ranked by their objectness scores. Each proposal's sentiment score is then computed with a pre-trained, fine-tuned CNN model. We combine both scores and select the top K regions from the N candidates; these K regions are regarded as the most affective parts of the input image. Finally, we extract deep features from the whole image and from the selected regions, and the sentiment label is predicted. Experiments show that our method detects affective local regions and achieves state-of-the-art performance on several popular datasets.
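The score-combination and top-K selection step can be sketched in a few lines. The convex mix below is a hypothetical fusion rule (the paper's exact combination may differ), and the proposal scores stand in for outputs of a proposal tool and a sentiment CNN.

```python
def select_affective_regions(proposals, k=2, alpha=0.5):
    """Rank candidate regions by a weighted combination of objectness and
    sentiment scores, and keep the top-k boxes as the 'affective regions'."""
    ranked = sorted(
        proposals,
        key=lambda p: alpha * p["objectness"] + (1 - alpha) * p["sentiment"],
        reverse=True,
    )
    return [p["box"] for p in ranked[:k]]

# Hypothetical proposals with scores from a proposal tool and a CNN.
proposals = [
    {"box": (0, 0, 50, 50),   "objectness": 0.9, "sentiment": 0.2},
    {"box": (10, 10, 80, 80), "objectness": 0.7, "sentiment": 0.9},
    {"box": (5, 5, 30, 30),   "objectness": 0.3, "sentiment": 0.4},
]
```

The selected boxes would then be cropped and fed, together with the whole image, to the deep feature extractor.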


International Joint Conference on Artificial Intelligence | 2017

Joint Image Emotion Classification and Distribution Learning via Deep Convolutional Neural Network

Jufeng Yang; Dongyu She; Ming Sun

Visual sentiment analysis is attracting increasing attention with the growing tendency to express emotions through visual content. Recent CNN-based algorithms have considerably advanced emotion classification, which aims to distinguish among emotional categories and assign a single dominant label to each image. However, the task is inherently ambiguous, since an image usually evokes multiple emotions and its annotation varies from person to person. In this work, we address the problem via label distribution learning and develop a multi-task deep framework that jointly optimizes classification and distribution prediction. While the proposed method is best suited to distribution datasets with annotations from multiple voters, the majority-voting scheme is widely adopted as the ground truth in this area, and few datasets provide multiple affective labels. Hence, we further exploit two weak forms of prior knowledge, expressed as similarity information between labels, to generate an emotion distribution for each category. Experiments conducted on distribution datasets (Emotion6, Flickr LDL, Twitter LDL) and on the largest single-label datasets (Flickr and Instagram) demonstrate that the proposed method outperforms state-of-the-art approaches.
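One way to turn a single dominant label into a soft target, as the abstract describes, is to softmax-normalize a row of label-to-label similarities. The emotion set, similarity scores, and temperature below are all hypothetical illustrations of such prior knowledge, not the paper's actual values.

```python
import math

EMOTIONS = ["joy", "sadness", "anger"]

# Hypothetical label-to-label similarity scores standing in for the
# prior knowledge between emotion categories.
SIMILARITY = {
    "joy":     {"joy": 1.0, "sadness": 0.0, "anger": 0.1},
    "sadness": {"joy": 0.0, "sadness": 1.0, "anger": 0.4},
    "anger":   {"joy": 0.1, "sadness": 0.4, "anger": 1.0},
}

def label_to_distribution(label, labels=EMOTIONS, similarity=SIMILARITY,
                          temperature=0.5):
    """Soften a single dominant label into a distribution by
    softmax-normalizing its similarity row; the dominant label
    keeps the largest probability mass."""
    row = [similarity[label][l] / temperature for l in labels]
    m = max(row)                      # subtract max for numerical stability
    exps = [math.exp(r - m) for r in row]
    z = sum(exps)
    return dict(zip(labels, (e / z for e in exps)))
```

The resulting distribution can serve as the target of the distribution-prediction task while the original hard label feeds the classification task.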


Soft Computing | 2009

The asymptotic optimization of pre-edited ANN classifier

Kai Wang; Jufeng Yang; Guangshun Shi; Qingren Wang

The generalization behavior of an artificial neural network (ANN) classifier as the training sample size grows without bound, namely asymptotic optimization in probability, is discussed in this paper. As an improved ANN model, the pre-edited ANN classifier shows better practical performance than the standard one; however, it has not been widely applied due to the absence of supporting theory. To promote its application in practice, we study the asymptotic optimization of the pre-edited ANN classifier. We first review previous work on asymptotic optimization in probability for non-parametric classifiers, grouping the main methods into four classes: two-step, one-step, generalization, and hypothesis methods. We then adopt a mixed generalization/hypothesis method to prove that the pre-edited ANN is asymptotically optimal in probability. Furthermore, a simulation provides experimental support for the theoretical result.
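For readers unfamiliar with pre-editing, one classic scheme (Wilson-style editing, shown here only as an illustrative stand-in; the paper's exact editing rule may differ) removes every training sample whose nearest neighbor carries a different label before the classifier is trained:

```python
def wilson_edit(samples):
    """Keep only training samples whose nearest neighbour (among the other
    samples, squared Euclidean distance) has the same label."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    kept = []
    for i, (x, y) in enumerate(samples):
        _, ny = min(
            (s for j, s in enumerate(samples) if j != i),
            key=lambda s: dist2(s[0], x),
        )
        if ny == y:          # nearest neighbour agrees -> keep the sample
            kept.append((x, y))
    return kept

# Two clean clusters plus one noisily labelled point near cluster "a".
samples = [
    ((0.0, 0.0), "a"), ((0.1, 0.0), "a"),
    ((5.0, 5.0), "b"), ((5.1, 5.0), "b"),
    ((0.3, 0.0), "b"),   # label noise: a "b" inside the "a" cluster
]
```

Editing removes the noisy point, leaving a cleaner training set for the subsequent ANN; the paper's contribution is proving that such a pre-edited classifier remains asymptotically optimal in probability.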


Soft Computing | 2017

An intelligent character recognition method to filter spam images on cloud

Jun Chen; Hong Zhao; Jufeng Yang; Jian Zhang; Tao Li; Kai Wang

Cloud storage has become an important way of sharing data in recent years. Protecting data owners and filtering harmful content for data recipients are two non-negligible problems in cloud storage. Illegal or unsuitable messages on the cloud have a negative impact on minors, and they are easily converted into images to evade text-based filtering. To detect spam images with embedded harmful messages on the cloud, soft-computing methods are required for intelligent character recognition. HOG, proposed by Dalal and Triggs, has been demonstrated to be one of the best features for intelligent character recognition. When HOG is applied to recognize a whole word, a pre-defined sliding window is typically used to generate candidate character images. However, because character sizes differ, the pre-defined window cannot exactly match each character: variations in scale and translation occur in the character image to be recognized, which greatly affect recognition performance. To address this problem, we propose STRHOG, an extended version of HOG. Experiments on two public datasets and one dataset of our own show encouraging results, and the improved character recognition is helpful for filtering spam images on the cloud. For a fair comparison with other methods, a nearest-neighbor classifier is used for recognition; performance is expected to improve further with better classifiers, such as a fuzzy neural network.
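The scale- and translation-invariance problem the abstract describes can be illustrated with a simple normalization step: crop the character to its tight bounding box and rescale it to a fixed window before extracting HOG-style features. This is a hypothetical simplification of what STRHOG targets, operating on a binary image given as a list of 0/1 rows.

```python
def normalize_character(img, out_size=8):
    """Crop a binary character image to its tight bounding box and rescale
    it to out_size x out_size with nearest-neighbour sampling, removing
    scale and translation variation before feature extraction."""
    rows = [r for r, row in enumerate(img) if any(row)]
    cols = [c for c in range(len(img[0])) if any(row[c] for row in img)]
    top, left = rows[0], cols[0]
    h = rows[-1] - top + 1
    w = cols[-1] - left + 1
    return [
        [img[top + (r * h) // out_size][left + (c * w) // out_size]
         for c in range(out_size)]
        for r in range(out_size)
    ]
```

After normalization, every candidate character occupies the same fixed window, so a single HOG descriptor layout can be applied regardless of the character's original size or position.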


European Conference on Computer Vision | 2016

A Benchmark for Automatic Visual Classification of Clinical Skin Disease Images

Xiaoxiao Sun; Jufeng Yang; Ming Sun; Kai Wang

Skin disease is one of the most common human illnesses. It pervades all cultures, occurs at all ages, and affects between 30% and 70% of individuals, with even higher rates in at-risk populations. However, diagnosing skin diseases by visual inspection is difficult for both doctors and patients, and an intelligent system can be helpful. In this paper, we introduce a benchmark dataset of clinical skin-disease images to address this problem. To the best of our knowledge, this is currently the largest dataset for visual recognition of skin diseases. It contains 6,584 images from 198 classes, varying in scale, color, shape, and structure. We hope this benchmark will encourage further research on visual skin-disease classification. Moreover, since the recent successes of many computer-vision tasks stem from the adoption of Convolutional Neural Networks (CNNs), we also perform extensive analyses on this dataset using state-of-the-art methods, including CNNs.


Pacific Rim International Conference on Artificial Intelligence | 2018

Deep Coordinated Textual and Visual Network for Sentiment-Oriented Cross-Modal Retrieval.

Jiamei Fu; Dongyu She; Xingxu Yao; Yuxiang Zhang; Jufeng Yang

Cross-modal retrieval has attracted increasing attention recently, as it enables people to retrieve desired information efficiently from large amounts of multimedia data. Most cross-modal retrieval methods focus only on aligning the objects in image and text, while sentiment alignment is also essential for applications such as entertainment and advertising. This paper studies the problem of retrieving visual sentiment concepts, with the goal of extracting sentiment-oriented information from social multimedia content, i.e., sentiment-oriented cross-media retrieval. The problem is inherently challenging due to the subjective and ambiguous nature of adjectives like “sad” and “awesome”. We therefore model visual sentiment concepts with adjective-noun pairs, e.g., “sad dog” and “awesome flower”, where associating adjectives with concrete objects makes the concepts more tractable. This paper proposes a deep coordinated textual and visual network with two branches that learns a joint semantic embedding space for images and texts. The visual branch is a convolutional neural network (CNN) pre-trained on a large dataset and optimized with a classification loss. The textual branch is added on the fully connected layer, providing supervision from the textual semantic space. To learn coordinated representations for the two modalities, a multi-task loss function is optimized during end-to-end training. Extensive experiments on a subset of the large-scale VSO dataset show that the proposed model can retrieve sentiment-oriented data and performs favorably against state-of-the-art methods.
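The multi-task objective can be sketched as a classification term plus an alignment term that pulls the image and text embeddings together in the shared space. The cosine-based alignment term and the weighting below are hypothetical choices, not the paper's exact loss.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def multi_task_loss(img_emb, txt_emb, cls_log_prob, weight=0.5):
    """Joint objective: negative log-probability of the true concept
    (classification term) plus a weighted alignment term that is zero
    when the two modal embeddings coincide in direction."""
    cls_loss = -cls_log_prob
    align_loss = 1.0 - cosine(img_emb, txt_emb)
    return cls_loss + weight * align_loss
```

Perfectly aligned embeddings reduce the objective to the classification loss alone, so minimizing it trades off concept accuracy against cross-modal coordination.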


International Joint Conference on Artificial Intelligence | 2018

Text Emotion Distribution Learning via Multi-Task Convolutional Neural Network

Yuxiang Zhang; Jiamei Fu; Dongyu She; Ying Zhang; Senzhang Wang; Jufeng Yang

Emotion analysis of online user-generated text is important for natural language processing and social-media analytics. Most previous emotion-analysis approaches identify users' emotional states by classifying text into one of a finite set of categories, e.g., joy, surprise, anger, and fear. However, emotion analysis is inherently ambiguous, since a single sentence can evoke multiple emotions with different intensities. To address this problem, we introduce emotion distribution learning and propose a multi-task convolutional neural network for text emotion analysis. The end-to-end framework optimizes the distribution-prediction and classification tasks simultaneously, learning robust representations for distribution datasets with annotations from multiple voters. While most work adopts the majority-voting scheme for ground-truth labeling, we also propose a lexicon-based strategy to generate distributions from a single label, which provides prior information for emotion classification. Experiments conducted on five public text datasets (SemEval, Fairy Tales, ISEAR, TEC, and CBET) demonstrate that our proposed method performs favorably against state-of-the-art approaches.
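A lexicon-based distribution for a sentence can be built by counting emotion-lexicon hits per category and normalizing, with additive smoothing so every emotion keeps non-zero mass. The lexicon, label set, and smoothing constant below are toy assumptions, not the paper's actual resources.

```python
LABELS = ["joy", "fear", "anger"]

# Toy emotion lexicon mapping words to the emotions they evoke.
LEXICON = {"happy": ["joy"], "scared": ["fear"], "furious": ["anger"]}

def lexicon_distribution(tokens, lexicon=LEXICON, labels=LABELS,
                         smoothing=0.1):
    """Build an emotion distribution for a tokenized sentence by counting
    lexicon hits per emotion and normalizing the smoothed counts."""
    counts = {l: smoothing for l in labels}
    for t in tokens:
        for l in lexicon.get(t, []):
            counts[l] += 1.0
    z = sum(counts.values())
    return {l: counts[l] / z for l in labels}
```

Such a distribution supplies the soft targets for the distribution-prediction branch when a dataset carries only a single hard label.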


European Conference on Computer Vision | 2018

Sub-GAN: An Unsupervised Generative Model via Subspaces

Jie Liang; Jufeng Yang; Hsin-Ying Lee; Kai Wang; Ming-Hsuan Yang

Recent years have witnessed significant progress in constructing robust generative models that capture informative distributions of natural data. However, it is difficult to fully exploit the distribution of complex data such as images and videos due to the high dimensionality of the ambient space, so effectively guiding the training of generative models is a crucial issue. In this paper, we present a subspace-based generative adversarial network (Sub-GAN) that simultaneously disentangles multiple latent subspaces and generates diverse samples accordingly. Since high-dimensional natural data usually lies on a union of low-dimensional subspaces containing semantically rich structure, Sub-GAN incorporates a novel clusterer that interacts with the generator and discriminator via subspace information. Unlike traditional generative models, Sub-GAN can control the diversity of the generated samples via the multiplicity of the learned subspaces. Moreover, Sub-GAN operates in an unsupervised fashion, exploring not only the visual classes but also latent continuous attributes. We demonstrate that our model discovers meaningful visual attributes that are hard to annotate via strong supervision, e.g., the writing style of digits, and thus avoids the mode-collapse problem. Extensive experimental results show the competitive performance of the proposed method, both in generating diverse images of satisfactory quality and in discovering discriminative latent subspaces.
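The clusterer's basic operation, assigning samples to subspaces, can be illustrated in a toy 2-D setting: each point goes to the 1-D subspace (unit basis vector) with the smallest projection residual. This is only a hypothetical sketch of the kind of subspace signal exchanged with the generator and discriminator, not the paper's architecture.

```python
def assign_to_subspaces(points, bases):
    """Assign each 2-D point to the 1-D subspace (unit basis vector)
    whose projection residual is smallest."""
    def residual(p, b):
        proj = p[0] * b[0] + p[1] * b[1]            # projection coefficient
        dx, dy = p[0] - proj * b[0], p[1] - proj * b[1]
        return dx * dx + dy * dy                    # squared distance to line

    return [min(range(len(bases)), key=lambda i: residual(p, bases[i]))
            for p in points]
```

In the full model these assignments would vary with training, letting the number of discovered subspaces control the diversity of generated samples.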
