Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiaojie Jin is active.

Publication


Featured research published by Xiaojie Jin.


Computer Vision and Pattern Recognition | 2017

Deep Self-Taught Learning for Weakly Supervised Object Localization

Zequn Jie; Yunchao Wei; Xiaojie Jin; Jiashi Feng; Wei Liu

Most existing weakly supervised localization (WSL) approaches learn detectors by finding positive bounding boxes based on features learned with image-level supervision. However, those features lack spatial-location information and usually provide poor-quality positive samples for training a detector. To overcome this issue, we propose a deep self-taught learning approach, in which the detector learns object-level features that are reliable for acquiring tight positive samples and then re-trains itself on them. Consequently, the detector progressively improves its detection ability and localizes more informative positive samples. To implement such self-taught learning, we propose a seed sample acquisition method based on image-to-object transfer and dense subgraph discovery, which finds reliable positive samples for initializing the detector. An online supportive sample harvesting scheme is further proposed to dynamically select the most confident tight positive samples and train the detector in a mutually boosting way. To prevent the detector from being trapped in poor optima due to overfitting, we propose a relative-improvement criterion on the predicted CNN scores to guide the self-taught learning process. Extensive experiments on PASCAL VOC 2007 and 2012 show that our approach outperforms state-of-the-art methods, strongly validating its effectiveness.
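A minimal sketch of the relative-improvement idea described in the abstract, not the authors' code: proposals are re-ranked between two self-taught training rounds by how much their detector scores improve relative to the previous round, and the most-improved ones are taken as new positives. The function and score arrays below are hypothetical.

```python
import numpy as np

def select_positives_by_relative_improvement(scores_prev, scores_curr, top_k=1):
    """Rank proposals by the relative improvement of their predicted scores
    between two training rounds and return the indices of the top_k proposals."""
    eps = 1e-8
    relative_improvement = (scores_curr - scores_prev) / (scores_prev + eps)
    order = np.argsort(-relative_improvement)  # descending by improvement
    return order[:top_k]

# Toy usage: proposal 2 improves the most relative to its previous score.
prev = np.array([0.60, 0.10, 0.05, 0.30])
curr = np.array([0.65, 0.12, 0.40, 0.33])
print(select_positives_by_relative_improvement(prev, curr, top_k=1))  # -> [2]
```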


International Joint Conference on Artificial Intelligence | 2018

Sharing Residual Units Through Collective Tensor Factorization To Improve Deep Neural Networks

Yunpeng Chen; Xiaojie Jin; Bingyi Kang; Jiashi Feng; Shuicheng Yan

Residual units are widely used to alleviate optimization difficulties when building very deep neural networks. However, the performance gain does not compensate well for the increase in model size, indicating low parameter efficiency in these residual units. In this work, we revisit the standard residual function as well as several of its successful variants and propose a unified framework based on tensor Block Term Decomposition (BTD) that explains these apparently different residual functions from a tensor decomposition view. Based on this framework, we propose a novel basic network architecture, the Collective Residual Unit (CRU), which enhances the parameter efficiency of deep residual networks by sharing core factors derived from collective tensor factorization across the involved residual units. This enables efficient knowledge sharing across multiple residual units, reduces the number of model parameters, lowers the risk of over-fitting, and provides better generalization ability. Extensive experimental results show that the proposed CRU network brings outstanding parameter efficiency: it achieves classification performance comparable to ResNet-200 with a model size close to that of ResNet-50 on the ImageNet-1k and Places365-Standard benchmark datasets. Code and trained models are available on GitHub.
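A minimal sketch of the factor-sharing idea, assuming a PyTorch-style bottleneck unit; it is an illustration of sharing one core factor across residual units in a stage, not the paper's released CRU implementation, and all class and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class CollectiveResidualUnit(nn.Module):
    """Bottleneck residual unit whose 3x3 'core' convolution is shared
    with other units in the same stage (the collective factor)."""
    def __init__(self, channels, bottleneck, shared_core):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck, kernel_size=1, bias=False)  # private 1x1
        self.core = shared_core                                                    # shared 3x3 factor
        self.expand = nn.Conv2d(bottleneck, channels, kernel_size=1, bias=False)   # private 1x1
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.reduce(x))
        out = self.relu(self.core(out))
        out = self.expand(out)
        return self.relu(out + x)  # residual connection

bottleneck, channels = 64, 256
shared_core = nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1, bias=False)
stage = nn.Sequential(*[CollectiveResidualUnit(channels, bottleneck, shared_core)
                        for _ in range(3)])  # three units share a single core factor
y = stage(torch.randn(1, channels, 32, 32))
print(y.shape)  # torch.Size([1, 256, 32, 32])
```

Sharing the core factor is what reduces the parameter count: the per-unit cost shrinks to the private 1x1 projections while the expensive 3x3 weights are paid for once per stage.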


European Conference on Computer Vision | 2016

Collaborative Layer-Wise Discriminative Learning in Deep Neural Networks

Xiaojie Jin; Yunpeng Chen; Jian Dong; Jiashi Feng; Shuicheng Yan

Intermediate features at different layers of a deep neural network are known to be discriminative for visual patterns of different complexities. However, most existing works ignore such cross-layer heterogeneity when classifying samples of different complexities. For example, if a training sample has already been correctly classified at a specific layer with high confidence, we argue that it is unnecessary to enforce the remaining layers to classify it correctly; a better strategy is to encourage those layers to focus on other samples.
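A minimal sketch of one way to realize this idea, assuming auxiliary classifier heads at several layers; the weighting scheme and threshold below are my assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def layerwise_weighted_loss(logits_per_layer, targets, confidence_threshold=0.9):
    """Sum per-layer classification losses, dropping a sample from later
    layers' losses once an earlier layer classifies it with high confidence."""
    total = 0.0
    weights = torch.ones(targets.size(0))            # all samples active at the first head
    for logits in logits_per_layer:                  # ordered shallow -> deep
        per_sample = F.cross_entropy(logits, targets, reduction="none")
        total = total + (weights * per_sample).mean()
        probs = F.softmax(logits, dim=1)
        correct_conf = probs[torch.arange(targets.size(0)), targets]
        # Samples already solved confidently stop contributing to deeper losses.
        weights = weights * (correct_conf < confidence_threshold).float()
    return total

logits = [torch.randn(8, 10, requires_grad=True) for _ in range(3)]  # 3 hypothetical heads
targets = torch.randint(0, 10, (8,))
layerwise_weighted_loss(logits, targets).backward()
```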


International Joint Conference on Artificial Intelligence | 2017

Training Group Orthogonal Neural Networks with Privileged Information

Yunpeng Chen; Xiaojie Jin; Jiashi Feng; Shuicheng Yan

Learning rich and diverse representations is critical for the performance of deep convolutional neural networks (CNNs). In this paper, we consider how to use privileged information to promote the inherent diversity of a single CNN model so that it can learn better representations and offer stronger generalization ability. To this end, we propose a novel group orthogonal convolutional neural network (GoCNN) that learns untangled representations within each layer by exploiting the provided privileged information, effectively enhancing representation diversity. We take image classification as an example, where image segmentation annotations are used as privileged information during training. Experiments on two benchmark datasets, ImageNet and PASCAL VOC, clearly demonstrate the strong generalization ability of the proposed GoCNN model. On ImageNet, GoCNN improves the performance of the state-of-the-art ResNet-152 model by an absolute 1.2% while using privileged information for only 10% of the training images, confirming the effectiveness of GoCNN in utilizing available privileged knowledge to train better CNNs.
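A minimal sketch of one reading of the group-orthogonality constraint, not the released GoCNN model: the final convolutional features are split into a "foreground" group and a "background" group, and the privileged segmentation mask is used to penalize each group's activations on the pixels of the other region, pushing the groups toward complementary, untangled features. All names and the loss form are assumptions.

```python
import torch
import torch.nn.functional as F

def group_suppression_loss(features, fg_mask):
    """features: (N, C, H, W) conv features; fg_mask: (N, 1, H', W') with 1 = foreground."""
    c = features.size(1) // 2
    fg_group, bg_group = features[:, :c], features[:, c:]
    mask = F.interpolate(fg_mask, size=features.shape[-2:], mode="nearest")
    fg_leak = (fg_group * (1 - mask)).pow(2).mean()  # foreground group responding on background
    bg_leak = (bg_group * mask).pow(2).mean()        # background group responding on foreground
    return fg_leak + bg_leak

features = torch.randn(2, 64, 14, 14, requires_grad=True)
fg_mask = (torch.rand(2, 1, 224, 224) > 0.5).float()  # hypothetical privileged segmentation mask
group_suppression_loss(features, fg_mask).backward()
```

Because the suppression term only applies to images that carry a mask, the extra supervision can be used for a small fraction of the training set, which matches the 10% setting reported above.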


International Joint Conference on Artificial Intelligence | 2017

Online Robust Low-Rank Tensor Learning

Ping Li; Jiashi Feng; Xiaojie Jin; Luming Zhang; Xianghua Xu; Shuicheng Yan

The rapid increase of multidimensional data (a.k.a. tensors) such as videos brings new challenges for low-rank data modeling approaches: dynamic data size, complex high-order relations, and multiplicity of low-rank structures. Resolving these challenges requires a tensor analysis method that can operate online, which, however, has been absent. In this paper, we propose an Online Robust Low-rank Tensor Modeling (ORLTM) approach to address these challenges. ORLTM dynamically explores the high-order correlations across all tensor modes for low-rank structure modeling. To analyze mixture data from multiple subspaces, ORLTM introduces a new dictionary learning component. ORLTM processes data in a streaming fashion and thus has a low memory cost that is independent of the data size, making it well suited to large-scale tensor data. Empirical studies validate the effectiveness of the proposed method on both synthetic data and a practical task, video background subtraction. In addition, we provide theoretical analysis of computational complexity and memory cost, rigorously demonstrating the efficiency of ORLTM.
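A greatly simplified stand-in, not ORLTM itself, illustrating the streaming property and the dictionary component: each incoming frame is encoded on a fixed-size basis (low-rank background) plus a sparse residual (foreground), and only the basis is kept in memory, so memory use does not grow with the number of frames. The update rule, step size, and threshold are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
pixels, rank, lam, step = 1024, 5, 0.1, 0.01
D = rng.standard_normal((pixels, rank)) * 0.1     # fixed-size dictionary (the only state kept)

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

for _ in range(100):                               # hypothetical frame stream
    frame = rng.standard_normal(pixels)
    code, *_ = np.linalg.lstsq(D, frame, rcond=None)    # encode frame on the current basis
    foreground = soft_threshold(frame - D @ code, lam)  # sparse residual = moving objects
    background = frame - foreground
    residual = background - D @ code
    D += step * np.outer(residual, code)           # small online update of the basis
```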


Neural Information Processing Systems | 2017

Dual Path Networks

Yunpeng Chen; Jianan Li; Huaxin Xiao; Xiaojie Jin; Shuicheng Yan; Jiashi Feng


National Conference on Artificial Intelligence | 2016

Deep learning with S-shaped rectified linear activation units

Xiaojie Jin; Chunyan Xu; Jiashi Feng; Yunchao Wei; Junjun Xiong; Shuicheng Yan


arXiv: Computer Vision and Pattern Recognition | 2016

Training Skinny Deep Neural Networks with Iterative Hard Thresholding Methods

Xiaojie Jin; Xiaotong Yuan; Jiashi Feng; Shuicheng Yan


Neural Information Processing Systems | 2016

Tree-Structured Reinforcement Learning for Sequential Object Localization

Zequn Jie; Xiaodan Liang; Jiashi Feng; Xiaojie Jin; Wen Feng Lu; Shuicheng Yan


International Conference on Computer Vision | 2017

Video Scene Parsing with Predictive Feature Learning

Xiaojie Jin; Xin Li; Huaxin Xiao; Xiaohui Shen; Zhe Lin; Jimei Yang; Yunpeng Chen; Jian Dong; Luoqi Liu; Zequn Jie; Jiashi Feng; Shuicheng Yan

Collaboration


Dive into Xiaojie Jin's collaborations.

Top Co-Authors

Jiashi Feng (National University of Singapore)
Shuicheng Yan (National University of Singapore)
Yunpeng Chen (National University of Singapore)
Zequn Jie (National University of Singapore)
Jian Dong (National University of Singapore)
Luming Zhang (Hefei University of Technology)
Yunchao Wei (Beijing Jiaotong University)