Weiyue Wang
University of Southern California
Publications
Featured research published by Weiyue Wang.
european conference on computer vision | 2014
Weiyue Wang; Weiyao Lin; Yuanzhe Chen; Jianxin Wu; Jingdong Wang; Bin Sheng
This paper addresses the problem of detecting coherent motions in crowd scenes and subsequently constructing semantic regions for activity recognition. We first introduce a coarse-to-fine thermal-diffusion-based approach. It processes input motion fields (e.g., optical flow fields) and produces a coherent motion field, named the thermal energy field. The thermal energy field is able to capture both the motion correlation among particles and the motion trends of individual particles, which are helpful for discovering coherency among them. We further introduce a two-step clustering process to construct stable semantic regions from the extracted time-varying coherent motions. Finally, these semantic regions are used to recognize activities in crowded scenes. Experiments on various videos demonstrate the effectiveness of our approach.
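The following is a minimal illustrative sketch of the general idea of diffusing an "energy" quantity over an optical flow field so that coherently moving neighbors reinforce each other. The update rule, weighting, and function names are assumptions for illustration, not the paper's actual formulation or code.

```python
# Illustrative sketch: simple iterative diffusion over an optical flow field,
# loosely in the spirit of a thermal-energy field. All constants and the
# coherence weighting below are assumptions, not the authors' method.
import numpy as np

def thermal_energy_field(flow, iterations=50, alpha=0.2):
    """flow: HxWx2 optical flow; returns an HxW 'energy' map after diffusion."""
    # Seed energy with flow magnitude: strongly moving particles act as heat sources.
    energy = np.linalg.norm(flow, axis=2)
    fx, fy = flow[..., 0], flow[..., 1]
    # Neighbors with similar flow directions exchange more "heat".
    coherence = np.abs(fx * np.roll(fx, 1, axis=1) + fy * np.roll(fy, 1, axis=1))
    coherence = coherence / (coherence.max() + 1e-8)
    for _ in range(iterations):
        # 4-neighbour Laplacian (heat-equation style propagation).
        laplacian = (np.roll(energy, 1, axis=0) + np.roll(energy, -1, axis=0)
                     + np.roll(energy, 1, axis=1) + np.roll(energy, -1, axis=1)
                     - 4.0 * energy)
        energy = energy + alpha * coherence * laplacian
    return energy

if __name__ == "__main__":
    flow = np.random.randn(120, 160, 2).astype(np.float32)  # stand-in for real optical flow
    print(thermal_energy_field(flow).shape)
```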
IEEE Transactions on Image Processing | 2016
Weiyao Lin; Yang Mi; Weiyue Wang; Jianxin Wu; Jingdong Wang; Tao Mei
This paper addresses the problem of detecting coherent motions in crowd scenes and presents its two applications in crowd scene understanding: semantic region detection and recurrent activity mining. It processes input motion fields (e.g., optical flow fields) and produces a coherent motion field named the thermal energy field. The thermal energy field is able to capture both the motion correlation among particles and the motion trends of individual particles, which are helpful for discovering coherency among them. We further introduce a two-step clustering process to construct stable semantic regions from the extracted time-varying coherent motions. These semantic regions can be used to recognize pre-defined activities in crowd scenes. Finally, we introduce a cluster-and-merge process, which automatically discovers recurrent activities in crowd scenes by clustering and merging the extracted coherent motions. Experiments on various videos demonstrate the effectiveness of our approach.
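Below is a hedged sketch of what a cluster-and-merge step over extracted coherent motions might look like. It assumes each coherent motion is given as a binary region mask plus a mean flow direction; the similarity measure, threshold, and greedy strategy are illustrative assumptions, not the paper's exact definitions.

```python
# Minimal cluster-and-merge sketch over coherent motions (assumed representation:
# binary region mask + mean flow direction). Not the authors' implementation.
import numpy as np

def similarity(m1, d1, m2, d2):
    """Spatial overlap (IoU) times direction agreement (clamped cosine)."""
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum() + 1e-8
    iou = inter / union
    cos = float(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-8))
    return iou * max(cos, 0.0)

def cluster_and_merge(masks, dirs, thresh=0.3):
    """Greedy agglomeration: repeatedly merge the most similar pair above thresh."""
    masks, dirs = list(masks), list(dirs)
    while True:
        best, pair = thresh, None
        for i in range(len(masks)):
            for j in range(i + 1, len(masks)):
                s = similarity(masks[i], dirs[i], masks[j], dirs[j])
                if s > best:
                    best, pair = s, (i, j)
        if pair is None:
            # Each remaining entry roughly corresponds to one recurrent activity.
            return masks, dirs
        i, j = pair
        merged_mask = np.logical_or(masks[i], masks[j])
        merged_dir = (dirs[i] + dirs[j]) / 2.0
        for k in (j, i):
            masks.pop(k); dirs.pop(k)
        masks.append(merged_mask); dirs.append(merged_dir)
```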
international conference on robotics and automation | 2017
Weiyue Wang; Naiyan Wang; Xiaomin Wu; Suya You; Ulrich Neumann
Accurate road segmentation is a prerequisite for autonomous driving. Current state-of-the-art methods are mostly based on convolutional neural networks (CNNs). Nevertheless, their good performance comes at the expense of abundant annotated data and high computational cost. In this work, we address these two issues with a self-paced cross-modality transfer learning framework and an efficient projection CNN. To be specific, with the help of stereo images, we first tackle a relevant but easier task, i.e., free-space detection, using well-developed unsupervised methods. Then, we transfer this useful but noisy knowledge from the depth modality to the single RGB modality with self-paced CNN learning. Finally, we only need to fine-tune the CNN with a few annotated images to achieve good performance. In addition, we propose an efficient projection CNN, which improves the fine-grained segmentation results with little additional cost. We test our method on the KITTI road benchmark; the proposed method surpasses all published methods at a speed of 15 fps.
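The core of self-paced learning is to train on the "easiest" pseudo-labelled samples first (those with the lowest loss) and gradually admit harder ones. The sketch below illustrates one such selection round; the model, data, and schedule are placeholders, not the paper's networks or training setup.

```python
# Hedged sketch of one self-paced training round: rank samples by their current
# loss under noisy pseudo-labels and update the model only on the easiest ones.
import torch
import torch.nn as nn

def self_paced_round(model, images, pseudo_labels, keep_ratio, optimizer):
    """Train on the easiest keep_ratio fraction of the batch; returns the loss."""
    criterion = nn.CrossEntropyLoss(reduction="none")
    model.eval()
    with torch.no_grad():
        losses = criterion(model(images), pseudo_labels)        # per-sample/pixel loss
        losses = losses.view(losses.size(0), -1).mean(dim=1)    # per-image loss
    k = max(1, int(keep_ratio * images.size(0)))
    easy = torch.argsort(losses)[:k]                            # indices of easiest samples
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images[easy]), pseudo_labels[easy]).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a typical self-paced schedule, keep_ratio starts small and grows over successive rounds, so the model is exposed to progressively noisier pseudo-labels only after it has fit the reliable ones.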
ieee international conference on multimedia big data | 2015
Yang Mi; Lihang Liu; Weiyao Lin; Weiyue Wang
In this paper, we propose a new approach that utilizes coherent motion regions to extract and visualize recurrent motion flows in crowded-scene surveillance videos. The proposed approach first extracts coherent motion regions from a crowded-scene video. Then a frame-level clustering process is proposed to cluster frames into different recurrent-motion-pattern (RMP) groups according to the coherent-region similarity between frames. By merging similar coherent regions from the same RMP group, we obtain motion flow regions representing the major motion flows in each recurrent motion pattern. Finally, a flow-curve extraction process is proposed, which extracts flow curves from motion flow regions to provide a proper visualization of the recurrent motion patterns. Experimental results demonstrate that our approach can precisely extract recurrent motion flows for various crowded-scene videos.
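The sketch below illustrates the frame-level grouping idea: frames whose coherent motion regions overlap strongly land in the same recurrent-motion-pattern (RMP) group. The similarity measure, the greedy assignment, and the threshold are assumptions for illustration, not the paper's exact clustering procedure.

```python
# Illustrative frame-level RMP grouping: frames are compared by the best-match
# IoU between their coherent region masks. Thresholds are assumptions.
import numpy as np

def frame_similarity(regions_a, regions_b):
    """Mean best-match IoU between the coherent regions of two frames."""
    scores = []
    for ra in regions_a:
        best = 0.0
        for rb in regions_b:
            inter = np.logical_and(ra, rb).sum()
            union = np.logical_or(ra, rb).sum() + 1e-8
            best = max(best, inter / union)
        scores.append(best)
    return float(np.mean(scores)) if scores else 0.0

def group_frames(frame_regions, thresh=0.5):
    """Greedy grouping: a frame joins the first existing RMP group it matches."""
    groups = []  # each group is a list of frame indices
    for idx, regions in enumerate(frame_regions):
        placed = False
        for group in groups:
            rep = frame_regions[group[0]]  # compare against the group's first frame
            if frame_similarity(regions, rep) >= thresh:
                group.append(idx)
                placed = True
                break
        if not placed:
            groups.append([idx])
    return groups
```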
international conference on computer vision | 2017
Weiyue Wang; Qiangui Huang; Suya You; Chao Yang; Ulrich Neumann
computer vision and pattern recognition | 2018
Weiyue Wang; Ronald Yu; Qiangui Huang; Ulrich Neumann
computer vision and pattern recognition | 2018
Qiangui Huang; Weiyue Wang; Ulrich Neumann
arXiv: Computer Vision and Pattern Recognition | 2016
Qiangui Huang; Weiyue Wang; Kevin Zhou; Suya You; Ulrich Neumann
european conference on computer vision | 2018
Weiyue Wang; Ulrich Neumann
arXiv: Computer Vision and Pattern Recognition | 2018
Qiangeng Xu; Hanwang Zhang; Weiyue Wang; Peter N. Belhumeur; Ulrich Neumann