
Publication


Featured research published by Junbo Guo.


International World Wide Web Conference | 2010

Context-oriented web video tag recommendation

Zhineng Chen; Juan Cao; Yicheng Song; Junbo Guo; Yongdong Zhang; Jintao Li

Tag recommendation is a common way to enrich the textual annotation of multimedia content. However, state-of-the-art recommendation methods are built upon pairwise tag relevance, which hardly captures the context of a web video, i.e., who is doing what, when, and where. In this paper we propose the context-oriented tag recommendation (CtextR) approach, which expands tags for web videos under a context-consistent constraint. Given a web video, CtextR first collects the multi-form WWW resources describing the same event as the video, which yields an informative and consistent context; tag recommendation is then conducted on the obtained context. Experiments on a collection of 80,031 web videos show that CtextR recommends a variety of relevant tags to web videos. Moreover, the enriched tags improve the performance of web video categorization.
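As an illustrative sketch (not the authors' implementation), context-based tag expansion can be as simple as ranking tags by their frequency across the collected context documents while skipping tags the video already carries. The `recommend_tags` helper and the toy documents below are hypothetical:

```python
from collections import Counter

def recommend_tags(video_tags, context_docs, top_k=5):
    """Score candidate tags by frequency in the context documents,
    skipping tags the video is already annotated with."""
    counts = Counter()
    for doc in context_docs:          # each doc: a list of word tokens
        counts.update(doc)
    for t in video_tags:              # drop tags the video already has
        counts.pop(t, None)
    return [tag for tag, _ in counts.most_common(top_k)]

docs = [["concert", "live", "paris", "music"],
        ["paris", "concert", "stage"],
        ["music", "live", "paris"]]
print(recommend_tags(["concert"], docs, top_k=3))
```

Tags that recur across many context documents rank highest, which is a crude proxy for the context-consistency idea in the abstract.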


International Conference on Pattern Recognition | 2008

Invariant visual patterns for video copy detection

Xiao Wu; Yongdong Zhang; Yufeng Wu; Junbo Guo; Jintao Li

Large-scale video copy detection requires compact features that are insensitive to various copy changes. Based on the behavior of local feature trajectories, we discover invariant visual patterns for generating robust features. A Bag-of-Trajectories (BoT) technique is adopted for fast pattern matching. Compared to state-of-the-art schemes, our algorithm is more robust at lower cost.
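A minimal sketch of the bag-matching idea, assuming trajectories have already been quantized against a trajectory codebook (the `bag_of_trajectories` helper, the codebook size, and histogram intersection as the similarity are hypothetical choices, not the paper's actual descriptors or matching details):

```python
def bag_of_trajectories(traj_words, codebook_size):
    """Quantized trajectory labels -> normalized histogram (the 'bag')."""
    hist = [0.0] * codebook_size
    for w in traj_words:
        hist[w] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity of two bags: overlap of their normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = bag_of_trajectories([0, 1, 1, 3], 4)
ref   = bag_of_trajectories([0, 1, 2, 3], 4)
sim = histogram_intersection(query, ref)
print(sim)
```

Comparing fixed-length histograms instead of raw trajectories is what makes this kind of matching fast at scale.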


International Conference on Multimedia and Expo | 2008

Web video recommendation and long tail discovering

Xiao Wu; Yongdong Zhang; Junbo Guo; Jintao Li

Given the countless web videos available online, one problem is how to help users find videos to their taste efficiently. In this paper, to facilitate users' browsing, we propose relevant and exploratory recommendation algorithms that utilize multimodal similarity and a contextual network to organize web videos of various topics. Comparison experiments demonstrate that the proposed approach generates more accurate video relevance, and that our method is more flexible in discovering users' latent interest in long-tail videos.


Conference on Image and Video Retrieval | 2009

VideoMap: an interactive video retrieval system of MCG-ICT-CAS

Juan Cao; Yongdong Zhang; Junbo Guo; Lei Bao; Jintao Li

This paper presents the highlights of our interactive video retrieval system, VideoMap. To enhance efficiency, the system has a map-based display interface that gives the user a global view of the similarity relationships across the whole video collection and provides an active annotation mode to quickly localize potential positive samples. The map also supports feedback in multiple modalities, including visual shots, high-level concepts, and keywords. The system improves retrieval performance by automatically optimizing these feedback strategies.


Conference on Multimedia Modeling | 2013

Stripe Model: An Efficient Method to Detect Multi-form Stripe Structures

Yi Liu; Dongming Zhang; Junbo Guo; Shouxun Lin

We present a general mathematical model for multiple forms of stripes. Based on the model, we propose a scale-space method to detect stripes. The method generates difference-of-Gaussian (DoG) maps by subtracting neighboring Gaussian layers and retains extremal responses in each DoG map by comparing each response to its neighbors. Candidate stripe regions are then formed from connected extremal responses. After that, approximate centerlines of stripes are extracted from the candidate regions using non-maximum suppression, which simultaneously eliminates undesired edge responses. Stripe masks can then be restored from these centerlines using the estimated stripe width. Owing to its ability to extract candidate regions first, the method avoids costly directional computation over all pixels and is therefore very efficient. Experiments show the robustness and efficiency of the proposed method and demonstrate its applicability to different kinds of image-processing applications.
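The DoG step can be illustrated in one dimension: blur a stripe cross-section at two scales, subtract, and keep local extrema of the response. This is a simplified stand-in for the paper's 2-D scale-space pipeline; the function names, sigmas, and kernel radius below are hypothetical:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of given radius."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1-D convolution with border values clamped (replicated)."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += signal[idx] * kv
        out.append(acc)
    return out

def dog_extrema(signal, sigma1=1.0, sigma2=2.0, radius=4):
    """Subtract two Gaussian blurs, keep positions whose |DoG| response
    exceeds both immediate neighbors (local extremal responses)."""
    g1 = convolve(signal, gaussian_kernel(sigma1, radius))
    g2 = convolve(signal, gaussian_kernel(sigma2, radius))
    dog = [a - b for a, b in zip(g1, g2)]
    return [i for i in range(1, len(dog) - 1)
            if abs(dog[i]) > abs(dog[i - 1]) and abs(dog[i]) > abs(dog[i + 1])]

# a bright ridge (stripe cross-section) centered at index 10
profile = [0.0] * 9 + [1.0, 1.0, 1.0] + [0.0] * 9
print(dog_extrema(profile))
```

The narrow blur keeps more energy at the stripe center than the wide one, so the DoG response peaks there; in 2-D, connecting such extremal responses yields the candidate stripe regions described in the abstract.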


Conference on Multimedia Modeling | 2010

Bag of spatio-temporal synonym sets for human action recognition

Lin Pang; Juan Cao; Junbo Guo; Shouxun Lin; Yan Song

Recently, methods based on bags of spatio-temporal local features have received significant attention in human action recognition. However, it remains a big challenge to overcome intra-class variation under changes in viewpoint, geometry, and illumination. In this paper we present Bags of Spatio-temporal Synonym Sets (ST-SynSets) to represent human actions, which can partially bridge the semantic gap between visual appearance and category semantics. First, it re-clusters the original visual words into higher-level ST-SynSets based on the consistency of their distributions across action categories, using the Information Bottleneck clustering method. Second, it adaptively learns a distance metric with both visual and semantic constraints for ST-SynSet projection. Experiments and comparisons with state-of-the-art methods show the effectiveness and robustness of the proposed method for human action recognition, especially under multiple viewpoints and illumination conditions.
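As a simplified stand-in for Information Bottleneck clustering, visual words can be greedily grouped whenever their class-conditional distributions are close under Jensen-Shannon divergence. The `merge_synsets` helper, the threshold, and the toy distributions below are hypothetical, not the paper's procedure:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def merge_synsets(word_dists, threshold=0.05):
    """Greedily group visual words whose distributions over action
    categories are consistent (close in JS divergence)."""
    synsets = []
    for w, dist in word_dists.items():
        for group in synsets:
            if js_divergence(dist, group["dist"]) < threshold:
                group["words"].append(w)
                break
        else:
            synsets.append({"words": [w], "dist": dist})
    return [g["words"] for g in synsets]

dists = {
    "w1": [0.8, 0.1, 0.1],
    "w2": [0.75, 0.15, 0.1],   # close to w1 -> same synset
    "w3": [0.1, 0.1, 0.8],     # different category profile -> own synset
}
print(merge_synsets(dists))
```

Words that fire on the same action categories end up in one synset, so superficially different appearances of the same action share a representation.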


International Conference on Multimedia and Expo | 2008

Object retrieval based on spatially frequent items with informative patches

Ke Gao; Shouxun Lin; Junbo Guo; Dongming Zhang; Yongdong Zhang; Yufeng Wu

The spatial relations of local image patches play an important role in object-based image retrieval. We propose an approach called spatial frequent items that extends the bag-of-words method by introducing spatial relations between patches. Spatial frequent items are defined as frequent pairs of adjacent local image patches in polar coordinates and are discovered using data mining. Based on these frequent configurations, we develop a method to encode patches and their spatial relations for image indexing and retrieval. In addition, to avoid interference from background patches, informative patches are filtered in the preprocessing stage based on their local entropy and self-similarity. Experimental results demonstrate that our method can be 8.6% more effective than state-of-the-art object retrieval methods.
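A minimal sketch of mining such pairs, assuming each patch has already been assigned a visual-word id and an image position. The adjacency radius, angle binning, and support threshold below are hypothetical choices, not the paper's:

```python
import math
from collections import Counter

def spatial_frequent_items(patches, radius=50.0, n_angle_bins=4, min_support=2):
    """patches: list of (word_id, x, y). Count pairs of nearby patch words
    together with a quantized polar angle; keep pairs above min_support."""
    counts = Counter()
    for i, (wi, xi, yi) in enumerate(patches):
        for wj, xj, yj in patches[i + 1:]:
            dx, dy = xj - xi, yj - yi
            if math.hypot(dx, dy) > radius:   # only adjacent patches pair up
                continue
            # quantize the direction between the two patches into angle bins
            angle_bin = int(((math.atan2(dy, dx) + math.pi)
                             / (2 * math.pi)) * n_angle_bins) % n_angle_bins
            counts[(min(wi, wj), max(wi, wj), angle_bin)] += 1
    return {item: c for item, c in counts.items() if c >= min_support}

patches = [(1, 0, 0), (2, 10, 0), (1, 100, 100), (2, 110, 100), (3, 500, 500)]
print(spatial_frequent_items(patches))
```

Each surviving item is a (word, word, direction) configuration that recurs across the image, which can then be used as an index term alongside plain visual words.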


Conference on Information and Knowledge Management | 2018

Rumor Detection with Hierarchical Social Attention Network

Han Guo; Juan Cao; Yazi Zhang; Junbo Guo; Jintao Li

Microblogs have become one of the most popular platforms for news sharing. However, due to their openness and lack of supervision, rumors can also be easily posted and propagated on social networks, causing widespread panic and harm as they spread. In this paper, we detect rumors by leveraging hierarchical representations at different levels together with social contexts. Specifically, we propose a novel hierarchical neural network combined with social information (HSA-BLSTM). We first build a hierarchical bidirectional long short-term memory model for representation learning. The social contexts are then incorporated into the network via an attention mechanism, introducing important semantic information into the framework for more robust rumor detection. Experimental results on two real-world datasets demonstrate that the proposed method outperforms several state-of-the-art methods in both rumor detection and early detection scenarios.
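The attention step can be illustrated with a toy example: derive softmax weights from social-context scores and pool the post representations accordingly. This is only a sketch of the general attention mechanism, not the HSA-BLSTM architecture; all names and values below are hypothetical:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(post_vecs, social_scores):
    """Weight post representations by attention weights derived from
    social-context scores, then pool into a single thread vector."""
    weights = softmax(social_scores)
    dim = len(post_vecs[0])
    return [sum(w * v[d] for w, v in zip(weights, post_vecs)) for d in range(dim)]

posts  = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # toy post representations
scores = [2.0, 0.5, 0.1]                         # e.g. repost / credibility signals
vec = attend(posts, scores)
print(vec)
```

Posts with stronger social signals dominate the pooled vector, which is how contextual cues can steer the representation toward the most telling posts.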


Archive | 2008

Method for obtaining stream media link address

Dongming Zhang; Yongdong Zhang; Jintao Li; Junbo Guo


Conference on Multimedia Modeling | 2013

VTrans: A Distributed Video Transcoding Platform

Zhe Ouyang; Feng Dai; Junbo Guo; Yongdong Zhang

Collaboration


Dive into Junbo Guo's collaborations.

Top Co-Authors

Jintao Li (Chinese Academy of Sciences)
Yongdong Zhang (Chinese Academy of Sciences)
Dongming Zhang (Chinese Academy of Sciences)
Juan Cao (Chinese Academy of Sciences)
Sheng Tang (Chinese Academy of Sciences)
Shouxun Lin (Chinese Academy of Sciences)
Xiao Wu (Chinese Academy of Sciences)
Feng Dai (Chinese Academy of Sciences)
Han Guo (Chinese Academy of Sciences)
Ke Gao (Chinese Academy of Sciences)