Publications


Featured research published by Yongqing Sun.


Multimedia Information Retrieval | 2006

Visual pattern discovery using web images

Yongqing Sun; Satoshi Shimada; Masashi Morimoto

In this paper, a novel approach is proposed for discovering visual patterns associated with semantic concepts using web image resources. This approach can be used to improve performance in image clustering and retrieval, image annotation, and other applications such as object recognition. By exploring the rich information in web images, which represent semantic concepts through both visual content and text, this research attempts to effectively learn intrinsic patterns related to semantic concepts. Because the quality of learning algorithms depends strongly on the selection of positive and negative samples, positive samples are first selected effectively, and negative samples are then determined reliably based on the selected positive samples. Finally, a high-quality visual model associated with a semantic concept is built through an unsupervised learning process. The proposed scheme is completely automatic, needing no human intervention, and is robust and reliable for generic images. Experimental results demonstrate the effectiveness of the proposed approach.


Conference on Multimedia Modeling | 2016

Attribute Discovery for Person Re-Identification

Takayuki Umeda; Yongqing Sun; Go Irie; Kyoko Sudo; Tetsuya Kinebuchi

An incremental attribute discovery method for person re-identification is proposed in this paper. Recent studies have shown the effectiveness of the attribute-based approach. Unfortunately, the approach has difficulty in discriminating people who are similar in terms of the pre-defined semantic attributes. To solve this problem, we automatically discover and learn new attributes that permit successful discrimination through a pair-wise learning process. We evaluate our method on two benchmark datasets and demonstrate that it significantly improves the performance of the person re-identification task.


Multimedia Tools and Applications | 2016

Visual concept detection of web images based on group sparse ensemble learning

Yongqing Sun; Kyoko Sudo; Yukinobu Taniguchi

Due to the huge intra-class variations in visual concept detection, concept learning must collect large-scale training data that covers as wide a variety of samples as possible. This presents great challenges in both collecting and training on such large-scale data. In this paper, we propose a novel web image sampling approach and a novel group sparse ensemble learning approach to tackle these two problems respectively. For data collection, to alleviate manual labeling effort, we propose a web image sampling approach based on dictionary coherence to select coherent positive samples from web images. We measure coherence in terms of how dictionary atoms are shared, because shared atoms represent common features with regard to a given concept and are robust to occlusion and corruption. For efficient training on large-scale data, to exploit the hidden group structures of the data, we propose a novel group sparse ensemble learning approach based on Automatic Group Sparse Coding (AutoGSC). After AutoGSC, we present an algorithm that uses the reconstruction errors of data instances to calculate the ensemble gating function for ensemble construction and fusion. Experiments show that our proposed methods achieve promising results and outperform existing approaches.
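The reconstruction-error gating step can be illustrated with a small sketch; the softmax weighting and the ridge-regression stand-in for sparse coding are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def gating_weights(x, dictionaries, lam=0.1):
    """Toy ensemble gating: weight each group's expert by how well that
    group's dictionary reconstructs the instance x. Ridge regression is
    used here as a simple stand-in for a sparse coder."""
    errors = []
    for D in dictionaries:  # D: (d, k) dictionary for one data group
        # ridge-regularized coding coefficients for x over D
        a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
        errors.append(np.linalg.norm(x - D @ a))
    errors = np.array(errors)
    # lower reconstruction error -> higher gating weight (softmax)
    w = np.exp(-errors)
    return w / w.sum()

rng = np.random.default_rng(0)
dicts = [rng.standard_normal((16, 8)) for _ in range(3)]
x = dicts[0] @ rng.standard_normal(8)  # instance drawn from group 0
w = gating_weights(x, dicts)           # group 0 should dominate
```

The gating weights sum to one, so they can directly fuse the per-group classifier outputs.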


Conference on Multimedia Modeling | 2015

Cross-Domain Concept Detection with Dictionary Coherence by Leveraging Web Images

Yongqing Sun; Kyoko Sudo; Yukinobu Taniguchi

We propose a novel scheme to address video concept learning by leveraging social media, one that unifies the selection of web training data and transfer subspace learning within a single framework. Due to cross-domain incoherence resulting from the mismatch of data distributions, selecting sufficient positive training samples from scattered and diffuse social media resources is a challenging problem when training effective concept detectors. In this paper, given a concept, coherent positive samples are selected from web images for further concept learning based on their degree of image coherence. Then, by exploiting both the selected dataset and video keyframes, we train a robust concept classifier by means of a transfer subspace learning method. Experimental results demonstrate that the proposed approach achieves consistent overall improvement despite cross-domain incoherence.


Advances in Multimedia | 2013

Group Sparse Ensemble Learning for Visual Concept Detection

Yongqing Sun; Kyoko Sudo; Yukinobu Taniguchi

To exploit the hidden group structures of data and thus detect concepts in videos, this paper proposes a novel group sparse ensemble learning approach based on Automatic Group Sparse Coding (AutoGSC). We first adopt AutoGSC to learn both a common dictionary shared across data groups and an individual group-specific dictionary for each group, which helps capture the discriminative information contained in different groups. Next, we represent each data instance as a sparse linear combination over both dictionaries. Finally, we propose an algorithm that uses the reconstruction errors of data instances to calculate the ensemble gating function for ensemble construction and fusion. Experiments on the TRECVid 2008 benchmark show that the proposed ensemble learning approach achieves promising results and outperforms existing approaches.
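The "sparse linear combination of both dictionaries" step might be sketched as follows, using a tiny orthogonal matching pursuit as a stand-in for AutoGSC's sparse coder; the dictionary sizes and the OMP choice are illustrative assumptions, not details from the paper:

```python
import numpy as np

def omp(D, x, n_nonzero=3):
    """Tiny orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then refit coefficients on the
    selected support by least squares."""
    residual = x.copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        a[:] = 0.0
        a[support] = sol
        residual = x - D @ a
    return a

rng = np.random.default_rng(1)
D_common = rng.standard_normal((20, 10))   # atoms shared by all groups
D_group = rng.standard_normal((20, 10))    # group-specific atoms
D = np.hstack([D_common, D_group])         # code over both dictionaries
x = 2.0 * D[:, 3] - 1.5 * D[:, 14]         # one atom from each part
code = omp(D, x, n_nonzero=2)
```

Nonzero coefficients landing in the first or second half of `code` indicate whether an instance is explained by shared or group-specific structure.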


Conference on Multimedia Modeling | 2013

Sampling of Web Images with Dictionary Coherence for Cross-Domain Concept Detection

Yongqing Sun; Kyoko Sudo; Yukinobu Taniguchi; Masashi Morimoto

Due to cross-domain incoherence resulting from the mismatch of data distributions, selecting sufficient positive training samples from scattered and diffuse web resources is a challenging problem when training effective concept detectors. In this paper, we propose a novel sampling approach that selects coherent positive samples from web images for further concept learning, based on each image's degree of coherence with a given concept. We propose to measure coherence in terms of how dictionary atoms are shared, since shared atoms represent common features with regard to a given concept and are robust to occlusion and corruption. Thus, two kinds of dictionaries are learned through online dictionary learning: a concept dictionary learned from key-point features of all positive training samples, and an image dictionary learned from those of web images. The coherence degree is then calculated as the Frobenius norm of the product matrix of the two dictionaries. Experimental results show that the proposed approach achieves consistent overall improvement despite cross-domain incoherence.
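The coherence score described above can be sketched in a few lines; normalizing atoms to unit length and the toy dictionaries below are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def dictionary_coherence(D_concept, D_image):
    """Coherence score between a concept dictionary and an image
    dictionary: Frobenius norm of the product of the two atom matrices
    (atoms are L2-normalized column-wise first)."""
    Dc = D_concept / np.linalg.norm(D_concept, axis=0, keepdims=True)
    Di = D_image / np.linalg.norm(D_image, axis=0, keepdims=True)
    return float(np.linalg.norm(Dc.T @ Di, ord="fro"))

rng = np.random.default_rng(2)
D_concept = rng.standard_normal((64, 12))
aligned = D_concept + 0.1 * rng.standard_normal((64, 12))  # shares atoms
unrelated = rng.standard_normal((64, 12))                  # no shared atoms
# a web image whose dictionary shares atoms with the concept scores higher
s_hi = dictionary_coherence(D_concept, aligned)
s_lo = dictionary_coherence(D_concept, unrelated)
```

Ranking web images by this score and keeping the top-scoring ones is the sampling step the abstract describes.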


ACM Multimedia | 2008

A novel region-based approach to visual concept modeling using web images

Yongqing Sun; Satoshi Shimada; Yukinobu Taniguchi; Akira Kojima


Archive | 2013

TRECVid 2013 Semantic Video Concept Detection by NTT-MD-DUT

Yongqing Sun; Kyoko Sudo; Yukinobu Taniguchi; Haojie Li; Yue Guan; Lijuan Liu


Journal of Information Processing | 2010

Visual Concept Modeling Scheme Using Early Learning of Region-based Semantics for Web Images

Yongqing Sun; Satoshi Shimada; Masashi Morimoto; Yukinobu Taniguchi


TRECVID | 2008

NTTLAB AT TRECVID 2008 BBC Rushes Summarization Task.

Uwe Kowalik; Yousuke Torii; Yongqing Sun; Kota Hidaka; Go Irie; Mitsuhiro Wagatsuma; Yukinobu Taniguchi; Akira Kojima; Hidenobu Nagata

Collaboration


Dive into Yongqing Sun's collaborations.

Top Co-Authors

Yukinobu Taniguchi (Tokyo University of Science)
Satoshi Shimada (Nippon Telegraph and Telephone)
Akira Kojima (Nippon Telegraph and Telephone)
Go Irie (Nippon Telegraph and Telephone)
Kota Hidaka (Nippon Telegraph and Telephone)
Haojie Li (Dalian University of Technology)
Lijuan Liu (Dalian University of Technology)
Yue Guan (Dalian University of Technology)