Publication


Featured research published by Gangyi Ding.


International Conference on Information System and Artificial Intelligence | 2016

An Online LC-KSVD Based Dictionary Learning for Multi-target Tracking

Shuo Tang; Longfei Zhang; Jia-Li Yan; Xiang-Wei Tan; Gangyi Ding

In this paper, we propose a novel framework for multi-object tracking that addresses two kinds of challenges: how to discriminate between different targets with similar appearance, and how to distinguish a single target undergoing severe appearance variation over time. The proposed framework extracts discriminative appearance information for different objects from the historical recordings of all tracked targets using a label-consistent K-SVD (LC-KSVD) dictionary learning method. We validated the proposed framework on three publicly available video sequences against several state-of-the-art approaches. The experimental results show that our method achieves competitive results, with a 7.7% improvement in MOTP.
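As a rough illustration (not the authors' LC-KSVD implementation), the sparse-coding step at the heart of dictionary-based appearance discrimination can be sketched with orthogonal matching pursuit over a toy dictionary; the dictionary and signal below are stand-ins:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y as a k-sparse
    combination of the columns (atoms) of dictionary D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Toy dictionary with 4 unit-norm atoms in R^3.
D = np.eye(3, 4)
D[:, 3] = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
y = 2 * D[:, 1] + 0.5 * D[:, 3]
x = omp(D, y, k=2)
print(np.nonzero(x)[0])  # support of the recovered sparse code
```

In the tracking setting, the sparse code over a dictionary learned from each target's history serves as a discriminative appearance descriptor.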


Pacific Rim Conference on Multimedia | 2015

Adaptive Multiple Appearances Model Framework for Long-Term Robust Tracking

Shuo Tang; Longfei Zhang; Jiapeng Chi; Zhufan Wang; Gangyi Ding

Tracking an object over the long term is still a great challenge in computer vision, and appearance modeling is one of the keys to building a good tracker. Much research attention focuses on building an appearance model with special features and learning methods, especially online learning. However, one model is not enough to describe all historical appearances of the target during a long-term tracking task, because of viewpoint changes, illumination variation, camera switching, and so on. We propose the Adaptive Multiple Appearances Model (AMAM) framework, which maintains not a single model but a set of appearance models. Different appearance representations of the target are grouped in an unsupervised way and modeled automatically by a Dirichlet Process Mixture Model (DPMM). The tracking result is then selected, by voting and a confidence map, from the candidate targets predicted by trackers built on those appearance models. Experimental results on multiple public datasets demonstrate better performance compared with state-of-the-art methods.


Archive | 2018

An effective method for the abnormal monitoring of stage performance based on visual sensor network

Fuquan Zhang; Gangyi Ding; Lin Xu; Bo Chen; Zuoyong Li

Abnormality monitoring plays a vital role in stage performance, and for real-time performances, detection efficiency and accuracy are particularly important. Traditional monitoring methods based on a sparse-description model do not capture the manifold structure of the performance: the behavior features are sparse, the decomposition is highly volatile, and the recognition accuracy for abnormal behavior is therefore low. We propose an abnormality-monitoring method for stage performance based on a visual sensor network. The overall structure of the monitoring system is analyzed, the hardware structure and software composition of the system are designed, and the method for monitoring abnormal behavior is analyzed in detail. The pipeline combines background subtraction with weighted-threshold segmentation of the target image from the background image, target detection and tracking using a chaotic-search particle swarm optimization algorithm together with mean-shift tracking, and abnormal-behavior detection based on local linear embedding and sparse representation, giving a comprehensive analysis of the local manifold structure of the sample set and improving detection efficiency and accuracy. The experimental results show that the proposed method achieves higher detection efficiency and accuracy, as well as higher robustness.
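The background-subtraction step with a weighted threshold can be sketched as follows; this is a minimal illustration on synthetic frames, since the abstract does not specify the system's actual parameters or threshold weighting:

```python
import numpy as np

def subtract_background(frame, background, weight=2.5):
    """Segment foreground pixels whose deviation from the background
    exceeds a weighted threshold scaled by the difference image's spread."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    threshold = weight * diff.std()
    return diff > threshold

# Synthetic 8x8 background with a bright 2x2 'performer' in the frame.
background = np.full((8, 8), 100.0)
frame = background.copy()
frame[3:5, 3:5] = 200.0
mask = subtract_background(frame, background)
print(mask.sum())  # number of foreground pixels detected
```

The resulting binary mask would then feed the detection-and-tracking stage.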


KSII Transactions on Internet and Information Systems | 2017

Exploring Audience Response in Performing Arts with a Brain-Adaptive Digital Performance System

Shuo Yan; Gangyi Ding; Hongsong Li; Ningxiao Sun; Zheng Guan; Yufeng Wu; Longfei Zhang; Tianyu Huang

Audience response is an important indicator of the quality of the performing arts. Psychophysiological measurements enable researchers to perceive and understand audience response by collecting bio-signals during a live performance. However, how the audience responds, and how the performance is affected by these responses, are key elements that are hard to capture. To address this issue, we designed a brain-computer interactive system called Brain-Adaptive Digital Performance (BADP) for measuring and analyzing audience engagement through an interactive three-dimensional virtual theater. The BADP system monitors audience engagement in real time using electroencephalography (EEG) and tries to improve it by applying content-related performing cues when the engagement level decreases. In this article, we compute an EEG-based engagement level and set thresholds to determine the moments of disengagement and re-engagement. In the experiment, we simulated two types of theater performance, providing participants with a high-fidelity virtual environment through the BADP system, and created content-related performing cues for each performance under three different conditions. The evaluation results show that our algorithm accurately detects engagement status and that the performing cues have a positive impact on regaining audience engagement across performance types. Our findings open new perspectives in audience-based theater performance design.
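The abstract does not give the exact engagement formula; a commonly used EEG engagement index in the literature is the band-power ratio beta / (alpha + theta), and the thresholding below is likewise an assumed stand-in for the paper's cue-triggering logic:

```python
def engagement_index(theta, alpha, beta):
    """Classic EEG engagement index: beta / (alpha + theta).
    Inputs are band powers, e.g. averaged over channels and a time window."""
    return beta / (alpha + theta)

def detect_disengagement(index_series, threshold):
    """Return the time steps where engagement falls below the threshold
    (a simplified stand-in for triggering a performing cue)."""
    return [t for t, v in enumerate(index_series) if v < threshold]

# Toy (theta, alpha, beta) band powers for three time windows.
powers = [(4.0, 5.0, 6.0), (5.0, 6.0, 2.0), (3.0, 4.0, 7.0)]
series = [engagement_index(t, a, b) for t, a, b in powers]
drops = detect_disengagement(series, threshold=0.5)
print(series, drops)  # the middle window triggers a cue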


International Conference on Smart Vehicular Technology, Transportation, Communication and Applications | 2017

Review of Intelligent Computing Application

Yiou Wang; Tianyuan Liu; Fuquan Zhang; Lin Xu; Gangyi Ding; Rui Xiong; Fei Liu

Intelligent computing systems can automatically sense environmental changes through a sensor network, make timely judgments and predictions about the state of the environment, and provide response strategies for different environments, applying technologies such as pattern recognition, time-series prediction, and big-data analytics. Intelligent computing is now used successfully in many areas, including transportation, healthcare, smart homes, performance, and environmental monitoring. In this paper, the concept of intelligent computing is introduced, its applications in different areas are discussed in detail, and its future is analyzed before a brief conclusion is given.


International Conference on Smart Vehicular Technology, Transportation, Communication and Applications | 2017

A Video Coloring Method Based on CNN and Feature Point Tracking

George Guan; Fuquan Zhang; Gangyi Ding; Meng Niu; Lin Xu

Black-and-white films were once the main form of recorded human culture, and colorizing them is a creative endeavor. At present, colorization of black-and-white films is still done by hand, which is expensive and time-consuming. In this paper, a framework based on a CNN and a particle-filter tracking algorithm is proposed that can colorize black-and-white video and attempts to solve the dynamic-frame problem through context correlation. The objective functions of the CNN and the particle-filter tracker are also optimized. The colorization results on video are satisfactory.
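The particle-filter tracking primitive used to propagate color across frames can be sketched as a minimal bootstrap filter on a 1-D feature-point position; the random-walk motion model and Gaussian observation model here are toy assumptions, not the authors' optimized tracker:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         motion_std=1.0, obs_std=2.0):
    """One bootstrap particle filter step: predict, weight, resample."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: Gaussian likelihood of the observed position.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample proportionally to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a point drifting from 0 toward 10 with noisy observations.
particles = rng.normal(0.0, 5.0, size=500)
weights = np.full(500, 1.0 / 500)
for true_pos in np.linspace(0.0, 10.0, 20):
    obs = true_pos + rng.normal(0.0, 2.0)
    particles, weights = particle_filter_step(particles, weights, obs)
estimate = particles.mean()
print(estimate)
```

In a colorization pipeline, such a filter would track feature points so that colors assigned in one frame can follow them through subsequent frames.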


International Conference on Information System and Artificial Intelligence | 2016

Online Tracking Based on Multiple Appearances Model

Shuo Tang; Longfei Zhang; Jia-Li Yan; Xiang-Wei Tan; Gangyi Ding

Tracking a target over the long term is still a big challenge in computer vision. Much recent research focuses on updating the current appearance of the tracking target to build a single online appearance model. However, one appearance model is often not enough to describe historical appearance information, especially for a long-term tracking task. In this paper, we propose an online multiple-appearances model based on the Dirichlet Process Mixture Model (DPMM), which groups different appearance representations of the tracking target dynamically and in an unsupervised way. Since the DPMM's appealing properties come at the cost of inference typically performed by Gibbs sampling, which is too expensive for online use, we propose an online Bayesian learning algorithm that replaces Gibbs sampling and reliably and efficiently learns a DPMM from scratch through sequential approximation in a streaming fashion, adapting to new tracking targets. Experiments on multiple challenging public benchmark datasets demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art.
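The paper's sequential approximation is not reproduced in the abstract; as a loose stand-in for the idea of growing an appearance-model set online, a DP-means-style rule ("join the nearest appearance cluster, or open a new one if all clusters are farther than a penalty lambda") can be sketched as:

```python
import numpy as np

def online_cluster(samples, lam):
    """DP-means-style online clustering: each sample joins the nearest
    cluster, or starts a new one if the nearest mean is farther than lam."""
    means, counts = [], []
    for x in samples:
        if means:
            dists = [np.linalg.norm(x - m) for m in means]
            j = int(np.argmin(dists))
        if not means or dists[j] > lam:
            means.append(np.array(x, dtype=float))  # open a new cluster
            counts.append(1)
        else:
            counts[j] += 1
            means[j] += (x - means[j]) / counts[j]  # running-mean update
    return means, counts

# Toy appearance features: two well-separated groups in R^2.
samples = [np.array(p, dtype=float) for p in
           [(0, 0), (0.5, 0.2), (10, 10), (9.8, 10.3), (0.1, 0.4)]]
means, counts = online_cluster(samples, lam=3.0)
print(len(means), counts)  # two appearance clusters
```

Like the streaming DPMM, the number of clusters is not fixed in advance; it grows as new, dissimilar appearances of the target arrive.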


Chinese Conference on Pattern Recognition | 2016

Online Adaptive Multiple Appearances Model for Long-Term Tracking

Shuo Tang; Longfei Zhang; Xiang-Wei Tan; Jia-Li Yan; Gangyi Ding

Building a good appearance descriptor for the tracking target is a basic challenge for long-term robust tracking. Much recent research focuses on building and updating a single online appearance model by employing special visual features and learning methods. However, one appearance model is not enough to describe the appearance of the target with historical information for a long-term tracking task. In this paper, we propose an online adaptive multiple-appearances model to improve performance. Building a set of appearance models based on the Dirichlet Process Mixture Model (DPMM) allows different appearance representations of the tracking target to be grouped dynamically and in an unsupervised way. Despite the DPMM's appealing properties, it is characterized by computationally intensive inference procedures, often based on Gibbs samplers, which are unsuitable for tracking because of their high time cost. We therefore propose an online Bayesian learning algorithm that reliably and efficiently learns a DPMM from scratch through sequential approximation in a streaming fashion, adapting to new tracking targets. Experiments on multiple challenging public benchmark datasets demonstrate that the proposed tracking algorithm performs 22% better than the state-of-the-art.


Conference on Multimedia Modeling | 2014

Multi-view Action Synchronization in Complex Background

Longfei Zhang; Shuo Tang; Shikha Singhal; Gangyi Ding

This paper addresses temporal synchronization of human actions in multi-view settings. Many researchers have focused on frame-by-frame alignment to synchronize multi-view videos, exploiting features such as interest-point trajectories or 3D human-motion features for detecting individual events. However, since real-world backgrounds are complex and dynamic, traditional image-based features are not well suited for video representation. We explore an approach that uses robust spatio-temporal features and self-similarity matrices to represent actions across views. Multiple sequences are aligned over temporal patches (sliding windows) hierarchically using the Dynamic Time Warping algorithm and scored by meta-action classifiers. Two datasets, the Pump dataset and the Olympic dataset, are used as test cases. Experiments show the effectiveness of the method and its suitability for general video event datasets.
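The alignment primitive is textbook Dynamic Time Warping; a minimal sketch on two 1-D sequences follows (the paper applies DTW hierarchically over sliding windows of self-similarity features, which this toy example does not reproduce):

```python
import numpy as np

def dtw_cost(a, b):
    """Dynamic Time Warping: minimal cumulative alignment cost between
    two sequences, allowing match, insertion, and deletion steps."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same 'action' at two speeds aligns at zero cost; a reversed one does not.
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
other = [3, 2, 1, 0]
print(dtw_cost(slow, fast), dtw_cost(slow, other))
```

This time-elasticity is what lets sequences captured from different views, with different action speeds, be brought into temporal correspondence.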


Conference on Multimedia Modeling | 2013

Multi-camera Egocentric Activity Detection for Personal Assistant

Longfei Zhang; Yue Gao; Wei Tong; Gangyi Ding; Alexander G. Hauptmann

We demonstrate an egocentric human-activity assistant system developed to aid people in performing explicitly encoded motion behaviors, such as operating a home infusion pump in sequence. The system is based on a robust multi-camera egocentric human-behavior detection approach, which detects individual actions in interesting hot regions using spatio-temporal mid-level features built by a spatial bag-of-words method within a temporal sliding window. Using a specific infusion pump as a test case, our goal is to detect individual human actions during the operation of a home medical device to determine whether the patient is correctly performing the required actions.
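The bag-of-words-over-sliding-window construction can be sketched as below, assuming per-frame features have already been quantized to codeword labels (the vocabulary, window size, and step here are illustrative, not the paper's values):

```python
import numpy as np

def bow_histograms(labels, vocab_size, window, step):
    """Bag-of-words over a temporal sliding window: for each window of
    per-frame codeword labels, build a normalized codeword histogram."""
    hists = []
    for start in range(0, len(labels) - window + 1, step):
        h = np.bincount(labels[start:start + window], minlength=vocab_size)
        hists.append(h / h.sum())
    return np.array(hists)

# Per-frame codeword ids for a 10-frame clip, vocabulary of 4 visual words.
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3, 0, 1])
H = bow_histograms(labels, vocab_size=4, window=4, step=2)
print(H.shape)  # four windows, each a four-bin histogram
```

Each histogram is a mid-level descriptor for its window, which an action classifier can then score.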

Collaboration


Dive into Gangyi Ding's collaborations.

Top Co-Authors

Longfei Zhang — Beijing Institute of Technology
Shuo Tang — Beijing Institute of Technology
Fuquan Zhang — Beijing Institute of Technology
Jia-Li Yan — Beijing Institute of Technology
Lin Xu — Fujian Normal University
Xiang-Wei Tan — Beijing Institute of Technology
Yufeng Wu — Beijing Institute of Technology
Bo Chen — Civil Aviation University of China
George Guan — Beijing Institute of Technology