Publication


Featured research published by Weiyue Wang.


European Conference on Computer Vision | 2014

Finding Coherent Motions and Semantic Regions in Crowd Scenes: A Diffusion and Clustering Approach

Weiyue Wang; Weiyao Lin; Yuanzhe Chen; Jianxin Wu; Jingdong Wang; Bin Sheng

This paper addresses the problem of detecting coherent motions in crowd scenes and subsequently constructing semantic regions for activity recognition. We first introduce a coarse-to-fine thermal-diffusion-based approach. It processes input motion fields (e.g., optical flow fields) and produces a coherent motion field, named the thermal energy field. The thermal energy field is able to capture both the motion correlation among particles and the motion trends of individual particles, which are helpful for discovering coherency among them. We further introduce a two-step clustering process to construct stable semantic regions from the extracted time-varying coherent motions. Finally, these semantic regions are used to recognize activities in crowded scenes. Experiments on various videos demonstrate the effectiveness of our approach.
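As a rough illustration of the diffusion idea in this abstract, the sketch below smooths a 2-D motion field with a discrete heat equation, so that neighbouring particles with correlated motion converge toward similar values. This is a toy stand-in for the paper's coarse-to-fine thermal-diffusion process, not its actual formulation; the function name and parameters are invented for illustration.

```python
import numpy as np

def diffuse_motion_field(flow, iters=50, alpha=0.2):
    """Illustrative heat-diffusion smoothing of a motion field.

    `flow` has shape (H, W, 2), e.g. an optical-flow field. Each
    iteration nudges every cell toward the average of its four
    neighbours (a discrete Laplacian with reflecting boundaries),
    which propagates motion information between nearby particles,
    the rough intuition behind a "thermal energy field".
    """
    f = flow.astype(float)
    for _ in range(iters):
        # 4-neighbour shifts; boundary rows/cols replicate themselves
        up    = np.roll(f, -1, axis=0); up[-1]      = f[-1]
        down  = np.roll(f,  1, axis=0); down[0]     = f[0]
        left  = np.roll(f, -1, axis=1); left[:, -1] = f[:, -1]
        right = np.roll(f,  1, axis=1); right[:, 0] = f[:, 0]
        f = f + alpha * (up + down + left + right - 4.0 * f)
    return f
```

With these reflecting boundaries the total motion "mass" is conserved while local variation shrinks, so an isolated moving particle spreads its influence to its neighbourhood.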


IEEE Transactions on Image Processing | 2016

A Diffusion and Clustering-Based Approach for Finding Coherent Motions and Understanding Crowd Scenes

Weiyao Lin; Yang Mi; Weiyue Wang; Jianxin Wu; Jingdong Wang; Tao Mei

This paper addresses the problem of detecting coherent motions in crowd scenes and presents two applications in crowd scene understanding: semantic region detection and recurrent activity mining. It processes input motion fields (e.g., optical flow fields) and produces a coherent motion field named the thermal energy field. The thermal energy field is able to capture both the motion correlation among particles and the motion trends of individual particles, which are helpful for discovering coherency among them. We further introduce a two-step clustering process to construct stable semantic regions from the extracted time-varying coherent motions. These semantic regions can be used to recognize pre-defined activities in crowd scenes. Finally, we introduce a cluster-and-merge process, which automatically discovers recurrent activities in crowd scenes by clustering and merging the extracted coherent motions. Experiments on various videos demonstrate the effectiveness of our approach.
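The cluster-and-merge step can be caricatured as greedily merging coherent motions whose average directions agree. The sketch below does this over per-region mean motion vectors using cosine similarity; it is a toy stand-in for the paper's process, and the function and threshold are invented for illustration.

```python
import numpy as np

def merge_coherent_motions(motions, thresh=0.9):
    """Greedy cluster-and-merge over mean motion vectors.

    Each motion joins the first cluster whose running mean direction
    agrees with it (cosine similarity above `thresh`); otherwise it
    starts a new cluster. Each surviving cluster mean is a candidate
    recurrent activity direction.
    """
    clusters = []                       # list of lists of vectors
    for v in motions:
        v = np.asarray(v, float)
        for c in clusters:
            mean = np.mean(c, axis=0)
            cos = mean @ v / (np.linalg.norm(mean) * np.linalg.norm(v) + 1e-12)
            if cos > thresh:
                c.append(v)
                break
        else:
            clusters.append([v])
    return [np.mean(c, axis=0) for c in clusters]
```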


International Conference on Robotics and Automation | 2017

Self-paced cross-modality transfer learning for efficient road segmentation

Weiyue Wang; Naiyan Wang; Xiaomin Wu; Suya You; Ulrich Neumann

Accurate road segmentation is a prerequisite for autonomous driving. Current state-of-the-art methods are mostly based on convolutional neural networks (CNNs). Nevertheless, their good performance comes at the expense of abundant annotated data and high computational cost. In this work, we address these two issues with a self-paced cross-modality transfer learning framework and an efficient projection CNN. Specifically, with the help of stereo images, we first tackle a relevant but easier task, i.e., free-space detection, with well-developed unsupervised methods. Then, we transfer this useful but noisy knowledge from the depth modality to the single RGB modality with self-paced CNN learning. Finally, we only need to fine-tune the CNN with a few annotated images to get good performance. In addition, we propose an efficient projection CNN, which can improve the fine-grained segmentation results with little additional cost. We test our method on the KITTI road benchmark, where our proposed method surpasses all published methods at a speed of 15 fps.
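The "self-paced" part of the framework can be illustrated with the classic hard-weighting scheme: early in training only easy samples (low loss) contribute, and an age parameter gradually admits harder, noisier ones. This is a minimal sketch of the general self-paced learning idea, not the paper's exact scheme; all names and values are invented for illustration.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weighting: a sample gets weight 1 only if its
    current loss is below the age parameter `lam`. Growing `lam`
    over training gradually admits harder (noisier) samples, which
    is how noisy depth-derived labels can be folded in safely."""
    return (np.asarray(losses, float) < lam).astype(float)

# Toy curriculum over per-sample losses from a hypothetical model.
losses = np.array([0.1, 0.5, 1.2, 2.0])
for lam in (0.6, 1.5, 3.0):            # grow the age parameter
    w = self_paced_weights(losses, lam)
    # the weighted objective only counts currently-admitted samples
    obj = (w * losses).sum() / max(w.sum(), 1.0)
```

At the first stage only the two easy samples train the model; by the last stage every sample is admitted.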


IEEE International Conference on Multimedia Big Data | 2015

Extracting Recurrent Motion Flows from Crowded Scene Videos: A Coherent Motion-Based Approach

Yang Mi; Lihang Liu; Weiyao Lin; Weiyue Wang

In this paper, we propose a new approach that utilizes coherent motion regions to extract and visualize recurrent motion flows in crowded-scene surveillance videos. The proposed approach first extracts coherent motion regions from a crowded scene video. Then a frame-level clustering process is proposed to cluster frames into different recurrent-motion-pattern (RMP) groups according to the coherent-region similarity between frames. By merging similar coherent regions from the same RMP group, we obtain motion flow regions representing the major motion flows in each recurrent motion pattern. Finally, a flow curve extraction process is proposed to extract flow curves from motion flow regions, providing a proper visualization of the recurrent motion patterns. Experimental results demonstrate that our approach can precisely extract recurrent motion flows for various crowded scene videos.
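The frame-level clustering step can be sketched with a simple greedy grouping: frames whose coherent-region masks overlap sufficiently fall into the same RMP group. The similarity measure below (mask IoU) and the greedy assignment are stand-ins chosen for illustration, not the paper's actual definitions.

```python
import numpy as np

def region_similarity(mask_a, mask_b):
    """Jaccard (IoU) overlap between two binary coherent-region
    masks; a simple stand-in for a coherent-region similarity."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def group_frames(masks, thresh=0.5):
    """Greedy frame-level clustering: a frame joins the first group
    whose representative mask it overlaps by more than `thresh`,
    otherwise it starts a new recurrent-motion-pattern group."""
    groups = []                 # (representative_mask, [frame indices])
    for i, m in enumerate(masks):
        for rep, members in groups:
            if region_similarity(rep, m) > thresh:
                members.append(i)
                break
        else:
            groups.append((m, [i]))
    return [members for _, members in groups]
```

Two alternating scene layouts, for instance, produce two RMP groups containing the even and odd frames respectively.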


International Conference on Computer Vision | 2017

Shape Inpainting Using 3D Generative Adversarial Network and Recurrent Convolutional Networks

Weiyue Wang; Qiangui Huang; Suya You; Chao Yang; Ulrich Neumann


Computer Vision and Pattern Recognition | 2018

SGPN: Similarity Group Proposal Network for 3D Point Cloud Instance Segmentation

Weiyue Wang; Ronald Yu; Qiangui Huang; Ulrich Neumann


Computer Vision and Pattern Recognition | 2018

Recurrent Slice Networks for 3D Segmentation of Point Clouds

Qiangui Huang; Weiyue Wang; Ulrich Neumann


arXiv: Computer Vision and Pattern Recognition | 2016

Scene Labeling using Gated Recurrent Units with Explicit Long Range Conditioning

Qiangui Huang; Weiyue Wang; Kevin Zhou; Suya You; Ulrich Neumann


European Conference on Computer Vision | 2018

Depth-aware CNN for RGB-D Segmentation

Weiyue Wang; Ulrich Neumann


arXiv: Computer Vision and Pattern Recognition | 2018

Stochastic Video Long-term Interpolation

Qiangeng Xu; Hanwang Zhang; Weiyue Wang; Peter N. Belhumeur; Ulrich Neumann

Collaboration


Dive into Weiyue Wang's collaborations.

Top Co-Authors

Ulrich Neumann (University of Southern California)

Qiangui Huang (University of Southern California)

Weiyao Lin (Shanghai Jiao Tong University)

Yang Mi (Shanghai Jiao Tong University)

Ronald Yu (University of Southern California)