Network


Latest external collaborations at the country level. Click a dot to dive into the details.

Hotspot


Dive into the research topics where Sihao Ding is active.

Publication


Featured research published by Sihao Ding.


Multimedia Tools and Applications | 2017

SurvSurf: human retrieval on large surveillance video data

Sihao Ding; Gang Li; Ying Li; Xinfeng Li; Qiang Zhai; Adam C. Champion; Junda Zhu; Dong Xuan; Yuan F. Zheng

The volume of surveillance videos is increasing rapidly, and humans are the major objects of interest in them. Rapid human retrieval in surveillance videos is therefore desirable and applicable to a broad spectrum of applications. Existing big data processing tools that mainly target textual data cannot be applied directly to timely processing of large video data due to three main challenges: videos are more data-intensive than textual data; visual operations have higher computational complexity than textual operations; and traditional segmentation may damage video data's continuous semantics. In this paper, we design SurvSurf, a human retrieval system for large surveillance video data that exploits the characteristics of these data and of big data processing tools. We propose using the motion information contained in videos for video data segmentation. The basic data unit after segmentation is called an M-clip. M-clips help remove redundant video content and reduce data volumes. We use the MapReduce framework to process M-clips in parallel for human detection and appearance/motion feature extraction. We further accelerate vision algorithms by processing only sub-areas with significant motion vectors rather than entire frames. In addition, we design a distributed data store called V-BigTable to structure the semantic information of M-clips. V-BigTable enables efficient retrieval over a huge number of M-clips. We implement the system on Hadoop and HBase. Experimental results show that our system outperforms basic solutions by one order of magnitude in computational time with satisfactory human retrieval accuracy.
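As a rough illustration of the M-clip idea described above, the sketch below segments a per-frame motion signal into maximal runs of significant motion, so still frames are dropped before any expensive vision processing. The signal, threshold, and function name are hypothetical stand-ins, not the paper's implementation.

```python
def segment_m_clips(motion, threshold=0.5):
    """Return (start, end) frame ranges where motion exceeds the threshold.

    Each returned range is a toy 'M-clip': a maximal run of frames with
    significant motion; frames outside these runs are treated as redundant.
    """
    clips, start = [], None
    for i, m in enumerate(motion):
        if m > threshold and start is None:
            start = i                      # motion begins: open a clip
        elif m <= threshold and start is not None:
            clips.append((start, i))       # motion ends: close the clip
            start = None
    if start is not None:
        clips.append((start, len(motion)))  # clip still open at end of video
    return clips

motion = [0.0, 0.1, 0.9, 0.8, 0.2, 0.0, 0.7, 0.6, 0.6, 0.1]
clips = segment_m_clips(motion)
# clips == [(2, 4), (6, 9)]
```

In the actual system, such clips would then be dispatched to MapReduce workers in parallel for detection and feature extraction.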


Pattern Recognition | 2016

Simultaneous body part and motion identification for human-following robots

Sihao Ding; Qiang Zhai; Ying Li; Junda Zhu; Yuan F. Zheng; Dong Xuan

Human-following robots are important for home, industrial, and battlefield applications. To interact effectively with a human, a robot needs to locate the person's position and understand his/her motion. Vision-based techniques are widely used. However, due to the close distance between human and robot, and the limitation of a camera's field of view, only part of a human body can be observed most of the time. As such, the human motion observed by a robot is inherently ambiguous. Simultaneously identifying the body part being observed and the motion the person is undergoing is a challenging problem that has not been well studied in the past. In this paper, we propose a novel method that solves the body part and motion identification problem in a unified framework. The relative position of an observed part with respect to the whole body and the motion type are treated as continuous and discrete labels, respectively, and the most probable labeling is inferred by structured learning. A fast part-distribution estimation is introduced to reduce the computational cost. The proposed approach is able to identify different body parts without explicitly building models for each single part, and to recognize the motion with only partial body observations. The proposed approach is evaluated using actual videos captured by a human-following robot as well as videos synthesized from the public UCF50 dataset, originally developed for action recognition. The results demonstrate the effectiveness of the approach.

Highlights:
- We propose a method achieving body part and motion identification simultaneously.
- Part and motion are treated as combined continuous and discrete labels.
- We formulate the problem in a structured learning framework.
- The evaluation is done on actual videos and videos from public datasets.
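The unified-labeling idea can be caricatured as a joint search over (body-part, motion-type) pairs rather than two separate decisions. The brute-force scorer below is a hypothetical stand-in for the paper's structured-learning inference; the part names, motion names, and score function are invented for illustration.

```python
def joint_identify(score, positions, motions):
    """Return the (position, motion) pair maximizing a joint score function.

    Deciding the pair jointly lets evidence about the motion disambiguate
    which body part is visible, and vice versa.
    """
    return max(((p, m) for p in positions for m in motions),
               key=lambda pm: score(*pm))

# Toy joint score: pretend 'torso' observations match 'walking' best.
score = lambda p, m: {("torso", "walking"): 0.9}.get((p, m), 0.1)
best = joint_identify(score, ["head", "torso", "legs"], ["walking", "running"])
# best == ("torso", "walking")
```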


international conference on computer communications | 2015

VM-tracking: Visual-motion sensing integration for real-time human tracking

Qiang Zhai; Sihao Ding; Xinfeng Li; Fan Yang; Jin Teng; Junda Zhu; Dong Xuan; Yuan F. Zheng; Wei Zhao

Human tracking in video has many practical applications, such as visually guided navigation and assisted living. In such applications, it is necessary to accurately track multiple humans across multiple cameras, subject to real-time constraints. Despite recent advances in visual tracking research, tracking systems that rely purely on visual information fail to meet the accuracy and real-time requirements at the same time. In this paper, we present a novel accurate and real-time human tracking system called VM-Tracking. The system aggregates information from a motion (M) sensor on the human and integrates it with visual (V) data based on physical locations. The system has two key features, location-based VM fusion and appearance-free tracking, which significantly distinguish it from existing human tracking systems. We have implemented the VM-Tracking system and conducted comprehensive experiments on challenging scenarios.
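A minimal sketch of what location-based VM fusion could look like: each visual detection is matched to the wearable motion sensor whose estimated physical location is nearest, so no appearance model is needed. The coordinates, names, and nearest-neighbor matching rule are assumptions for illustration, not the paper's actual fusion algorithm.

```python
def fuse_by_location(detections, sensor_positions):
    """Assign each visual detection to the nearest motion-sensor estimate.

    detections: {detection_id: (x, y)} from the camera, in world coordinates.
    sensor_positions: {person_id: (x, y)} estimated from wearable M sensors.
    """
    matches = {}
    for det_id, (dx, dy) in detections.items():
        matches[det_id] = min(
            sensor_positions,
            key=lambda s: (sensor_positions[s][0] - dx) ** 2
                        + (sensor_positions[s][1] - dy) ** 2)
    return matches

m = fuse_by_location({"d1": (0.0, 0.0), "d2": (5.0, 5.0)},
                     {"alice": (0.2, 0.1), "bob": (4.8, 5.1)})
# m == {"d1": "alice", "d2": "bob"}
```

Because identities come from the sensors' locations rather than clothing or face features, the matching stays valid when people look alike or change appearance.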


international conference on robotics and automation | 2015

Human feet tracking guided by locomotion model

Ying Li; Sihao Ding; Qiang Zhai; Yuan F. Zheng; Dong Xuan

Following a person is a fundamental requirement for human-robot interaction. In this paper we propose a novel approach for robust human feet tracking that integrates human locomotion into the tracking algorithm. Analyzing the vertical displacement between the two feet, we observe that this displacement during the walking cycle is close to a modulated cosine waveform. Based on this, we propose an adaptive model for the human walking pattern. We divide the motion of the human feet into local motion and global motion. The local motion is modeled by a modified cosine wave that is updated over time; the global motion is estimated from the continuity between successive frames. This model is combined with particle filtering to guide the search for the feet. A 2D Gaussian mask is generated according to the position predicted by the motion model and used to modify the weights of the particles. Experiments are conducted on several human walking videos, and the algorithm is evaluated against the generic particle filtering method. Results show that the feet can be tracked successfully, with significant improvements over the generic particle filtering method.
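A minimal sketch of the gait-model idea, under stated assumptions: the vertical displacement between the two feet over a walking cycle is approximated by a rectified cosine, and the model's prediction reweights particles through a Gaussian term. The parameter values and function names are hypothetical; the paper's adaptive model additionally updates these parameters over time.

```python
import math

def foot_displacement(t, amplitude=1.0, period=1.0, phase=0.0):
    """Predicted vertical displacement between the feet at time t (seconds).

    A rectified cosine: the gap peaks mid-step and closes when the feet pass.
    """
    return amplitude * abs(math.cos(2 * math.pi * t / period + phase))

def gaussian_weight(observed, predicted, sigma=0.2):
    """Reweight a particle by how well its observation matches the model."""
    return math.exp(-((observed - predicted) ** 2) / (2 * sigma ** 2))

# Displacement is maximal at the start of the cycle and zero a quarter-cycle in.
peak = foot_displacement(0.0)      # ~1.0
crossing = foot_displacement(0.25)  # ~0.0 (feet passing each other)
```

In a full tracker, `gaussian_weight` would multiply each particle's likelihood, concentrating the search near the model-predicted foot positions.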


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Sequential Sample Consensus: A Robust Algorithm for Video-Based Face Recognition

Sihao Ding; Ying Li; Junda Zhu; Yuan F. Zheng; Dong Xuan

This paper presents a novel video-based face recognition algorithm that uses a sequential sampling and updating scheme named sequential sample consensus. The proposed algorithm aims to provide a sequential scheme that can be applied to streaming video data. Different from existing approaches, the training video sequences serve as the sample space, and the person's identity in the testing sequence is characterized using an identity probability mass function (PMF) that is sequentially updated. For each testing frame, samples are randomly drawn from the sample space, and the number of samples for each identity is determined by the identity PMF. The testing frame is evaluated against the drawn samples to calculate the weights, and the sample weights are used to update the identity PMF. Benefiting from the sampling procedure, the change in both the numbers and the weights of the samples for each individual leads to quick reaction of the algorithm. The proposed algorithm is robust against misclassification caused by pose variations, and sensitive to identity switching during recognition. The algorithm is evaluated using both public and self-made datasets, and shows better performance than other video-based face recognition approaches.
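The core update loop can be sketched as a Bayesian-style renormalization of the identity PMF by per-identity sample weights. This is a simplified stand-in: the accumulated weights, the small floor constant, and the function name are assumptions for illustration, not the paper's exact update rule.

```python
def update_pmf(pmf, weights):
    """Update the identity PMF from per-identity accumulated sample weights.

    pmf: {identity: probability}, summing to 1.
    weights: {identity: accumulated match weight for the current frame}.
    A small floor keeps every identity recoverable (no probability hits zero).
    """
    updated = {k: pmf[k] * (1e-6 + weights.get(k, 0.0)) for k in pmf}
    total = sum(updated.values())
    return {k: v / total for k, v in updated.items()}

pmf = {"alice": 0.5, "bob": 0.5}
pmf = update_pmf(pmf, {"alice": 0.9, "bob": 0.1})
# the PMF now strongly favors "alice"
```

Repeating this per frame is what makes the scheme both stable under pose-induced misclassifications (one bad frame barely moves the PMF) and responsive to genuine identity switches (consistent evidence shifts it quickly).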


intelligence and security informatics | 2013

Side-view face authentication based on wavelet and random forest with subsets

Sihao Ding; Qiang Zhai; Yuan F. Zheng; Dong Xuan

This paper presents a novel side-view face authentication method based on the discrete wavelet transform and random forests. A subset selection method that increases the number of training samples while allowing each subset to preserve the global information is presented. The authentication method comprises the following steps: profile extraction, wavelet decomposition, subset splitting, and random forest verification. The new method takes advantage of the wavelet's localization property in both the frequency and spatial domains, while maintaining the generalization properties of random forests. The implementation of the proposed method is computationally feasible, and the experimental results show that its performance is satisfactory. Future improvements are discussed in the paper.
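One plausible reading of the subset-splitting step can be sketched as follows: each subset keeps the low-frequency (global) wavelet coefficients and one slice of the detail coefficients, so the number of training samples grows while every subset still carries the global shape information. The coefficient values and splitting rule here are hypothetical, not taken from the paper.

```python
def make_subsets(global_coeffs, detail_coeffs, n_subsets):
    """Split detail coefficients into n_subsets slices, each prefixed with
    the global (approximation) coefficients so no subset loses the coarse
    profile shape."""
    size = len(detail_coeffs) // n_subsets
    return [global_coeffs + detail_coeffs[i * size:(i + 1) * size]
            for i in range(n_subsets)]

# Two 'approximation' coefficients shared by every subset, six 'detail' ones split up:
subsets = make_subsets([9, 8], [1, 2, 3, 4, 5, 6], 3)
# subsets == [[9, 8, 1, 2], [9, 8, 3, 4], [9, 8, 5, 6]]
```

Each such subset would then be a separate training sample for the random forest verifier.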


Pattern Analysis and Applications | 2017

Efficient health-related abnormal behavior detection with visual and inertial sensor integration

Ying Li; Qiang Zhai; Sihao Ding; Fan Yang; Gang Li; Yuan F. Zheng

An increasing number of healthcare issues arise from unsafe abnormal behaviors, such as falling and staggering, of a rapidly aging population. These abnormal behaviors, often accompanied by abrupt movements, can be life-threatening if unnoticed; real-time, accurate detection of this sort of behavior is essential for a timely response. However, it is challenging to achieve generic yet accurate abnormal behavior detection in real time with moderate sensing devices and processing power. This paper presents an innovative system as a solution. It utilizes primarily visual data for detecting various types of abnormal behaviors, owing to the accuracy and generality of computer vision technologies. Unfortunately, the volume of recorded video data is huge, which makes it prohibitive to process all of it in real time. We propose to use an elder-carried mobile device, either a dedicated design or a smartphone, equipped with an inertial sensor to trigger the selection of relevant video data. In this way, the system operates in a trigger-verify fashion, which leads to selective utilization of video data to guarantee both accuracy and efficiency in detection. The system is designed and implemented using inexpensive commercial off-the-shelf sensors and smartphones. Experimental evaluations in real-world settings illustrate our system's promise for real-time, accurate detection of abnormal behaviors.
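The trigger-verify flow can be sketched in a few lines: a cheap inertial threshold flags candidate moments, and only those moments are passed to the expensive visual check. The threshold value, sample data, and classifier are hypothetical placeholders for illustration.

```python
def inertial_trigger(accel_magnitudes, threshold=2.5):
    """Cheap stage: flag samples whose acceleration magnitude looks abrupt."""
    return [i for i, a in enumerate(accel_magnitudes) if a > threshold]

def verify_with_video(trigger_indices, video_classifier):
    """Expensive stage: run the visual check only on triggered windows."""
    return [i for i in trigger_indices if video_classifier(i)]

# Toy accelerometer trace (in g); index 2 is a real fall, index 4 a false alarm.
triggers = inertial_trigger([1.0, 1.1, 3.2, 1.0, 2.9])
confirmed = verify_with_video(triggers, lambda i: i == 2)
# triggers == [2, 4]; confirmed == [2]
```

The point of the design is that the video pipeline runs only on the few seconds surrounding each trigger, instead of on the entire recorded stream.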


national aerospace and electronics conference | 2016

From RGBD image to hologram

Sihao Ding; Ying Li; Siyang Cao; Yuan F. Zheng; Robert L. Ewing

We propose an approach to produce computer generated holograms (CGHs) from RGBD data. Being able to produce CGHs from RGBD images simplifies the recording process and results in more realistic reconstruction. A multilayer wavefront recording plane method is described for fast wave propagation, and it is complemented by a two-step occlusion culling process to preserve the self-occlusion effect. The approach is evaluated on RGBD images of a real-world 3D scene, and the reconstructed images from the CGH demonstrate the effectiveness of the proposed approach.
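A first step of any multilayer method is slicing the scene by depth; the sketch below quantizes an RGBD depth map into discrete layers, each of which would then be propagated separately to the wavefront recording plane. The layer count, depth range, and function name are assumptions for illustration; the wave-propagation and occlusion-culling stages themselves are beyond this sketch.

```python
def quantize_depth(depth_map, n_layers, d_min, d_max):
    """Map each depth value to a layer index in [0, n_layers - 1].

    depth_map: 2D list of depths (same units as d_min/d_max).
    Values at d_max are clamped into the last layer.
    """
    step = (d_max - d_min) / n_layers
    return [[min(int((d - d_min) / step), n_layers - 1) for d in row]
            for row in depth_map]

layers = quantize_depth([[0.0, 0.5], [0.9, 1.0]], 4, 0.0, 1.0)
# layers == [[0, 2], [3, 3]]
```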


Applied Optics | 2016

From image pair to a computer generated hologram for a real-world scene.

Sihao Ding; Siyang Cao; Yuan F. Zheng; Robert L. Ewing

We propose an approach to produce computer generated holograms (CGHs) from image pairs of a real-world scene. The ratio of the three-dimensional (3D) physical size of the object is computed from the image pair to provide the correct depth cue. A multilayer wavefront recording plane method, complemented by a two-stage occlusion culling process, is carried out for wave propagation. Multiple holograms can be generated by propagating the wave toward the desired angles, to cover circular views that are wider than the viewing angle restricted by the wavelength and pitch size of a single hologram. The impact of the imperfect depth information extracted from the image pair on the CGH is examined. The approach is evaluated extensively on image pairs of real-world 3D scenes, and the results demonstrate that the circular-view CGH can be produced from a pair of stereo images using the proposed approach.


advanced video and signal based surveillance | 2013

Robust video-based face recognition by sequential sample consensus

Sihao Ding; Ying Li; Junda Zhu; Yuan F. Zheng; Dong Xuan

This paper presents a novel video-based face recognition algorithm using a sequential sampling and updating scheme, named sequential sample consensus (SSC). Different from existing approaches, the training video sequences serve as the sample space, and the person's identity in the testing sequence is characterized by an identity probability mass function (PMF) that is sequentially updated. For each testing frame, samples are randomly drawn from the sample space, with the number of samples for each identity determined by the identity PMF. The testing frame is evaluated against the drawn samples to calculate the weights, and the sample weights are used to update the identity PMF. The proposed algorithm is robust against misclassification caused by pose variations, and sensitive to identity switching during recognition. The algorithm is evaluated using both public and self-made databases, and shows better performance than other video-based face recognition approaches.

Collaboration


Dive into Sihao Ding's collaboration.

Top Co-Authors:

- Ying Li (Ohio State University)
- Dong Xuan (Ohio State University)
- Junda Zhu (Ohio State University)
- Fan Yang (Ohio State University)
- Gang Li (Ohio State University)
- Robert L. Ewing (Wright-Patterson Air Force Base)