Qiang Zhai
Ohio State University
Publication
Featured research published by Qiang Zhai.
international conference on computer communications | 2013
Xinfeng Li; Jin Teng; Qiang Zhai; Junda Zhu; Dong Xuan; Yuan F. Zheng; Wei Zhao
Human localization is an enabling technology for many mobile applications. As more and more people carry mobile phones, we can now localize a person by localizing his or her mobile phone. However, the presence of human bodies introduces heavy interference to mobile phone signals, which has been one of the major causes of inaccurate wireless localization for humans. In this paper, we propose using video cameras to help estimate the human body's interference on mobile devices' signals. We combine human orientation detection and human/phone/AP relative position estimation to better measure how a human body blocks or reflects wireless signals. We have also developed a signal distortion compensation model. Based on these technologies, we have implemented a human localization system called EV-Human. Real-world experiments show that our EV-Human system can accurately and robustly localize humans.
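As a rough illustration of how orientation-aware compensation might feed into distance estimation, here is a minimal sketch. The log-distance path-loss model and all constants (tx_power_dbm, path_loss_exp, body_loss_db) are illustrative assumptions, not the paper's actual compensation model:

```python
def estimate_distance(rssi_dbm, body_blocks_path,
                      tx_power_dbm=-40.0, path_loss_exp=2.5,
                      body_loss_db=8.0):
    """Estimate phone-AP distance from RSSI, compensating for body blockage.

    If camera-derived human orientation indicates the body sits between the
    phone and the AP, add back a fixed attenuation (body_loss_db) before
    inverting a log-distance path-loss model.
    """
    if body_blocks_path:
        rssi_dbm += body_loss_db  # undo the body's assumed extra attenuation
    # log-distance model: rssi = tx_power - 10 * n * log10(d)
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

With the compensation applied, the same raw RSSI maps to a shorter (and, when the body really is blocking, more accurate) distance than the uncompensated estimate.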
Multimedia Tools and Applications | 2017
Sihao Ding; Gang Li; Ying Li; Xinfeng Li; Qiang Zhai; Adam C. Champion; Junda Zhu; Dong Xuan; Yuan F. Zheng
The volume of surveillance videos is increasing rapidly, where humans are the major objects of interest. Rapid human retrieval in surveillance videos is therefore desirable and applicable to a broad spectrum of applications. Existing big data processing tools that mainly target textual data cannot be applied directly for timely processing of large video data due to three main challenges: videos are more data-intensive than textual data; visual operations have higher computational complexity than textual operations; and traditional segmentation may damage video data’s continuous semantics. In this paper, we design SurvSurf, a human retrieval system on large surveillance video data that exploits characteristics of these data and big data processing tools. We propose using motion information contained in videos for video data segmentation. The basic data unit after segmentation is called M-clip. M-clips help remove redundant video contents and reduce data volumes. We use the MapReduce framework to process M-clips in parallel for human detection and appearance/motion feature extraction. We further accelerate vision algorithms by processing only sub-areas with significant motion vectors rather than entire frames. In addition, we design a distributed data store called V-BigTable to structuralize M-clips’ semantic information. V-BigTable enables efficient retrieval on a huge amount of M-clips. We implement the system on Hadoop and HBase. Experimental results show that our system outperforms basic solutions by one order of magnitude in computational time with satisfactory human retrieval accuracy.
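The motion-based segmentation into M-clips could be sketched as follows; the threshold-on-motion-score formulation and parameter values are assumptions for illustration, not SurvSurf's actual algorithm:

```python
def segment_m_clips(motion_scores, threshold=0.2, min_len=3):
    """Split a video (given per-frame motion scores) into M-clips:
    maximal runs of consecutive frames whose motion exceeds a threshold.

    Frames with little motion are dropped, shrinking the data volume
    before parallel (e.g. MapReduce-style) processing of each clip.
    Returns a list of (first_frame, last_frame) index pairs.
    """
    clips, start = [], None
    for i, s in enumerate(motion_scores):
        if s >= threshold and start is None:
            start = i                      # a clip begins
        elif s < threshold and start is not None:
            if i - start >= min_len:       # keep only clips long enough
                clips.append((start, i - 1))
            start = None
    if start is not None and len(motion_scores) - start >= min_len:
        clips.append((start, len(motion_scores) - 1))
    return clips
```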
Pattern Recognition | 2016
Sihao Ding; Qiang Zhai; Ying Li; Junda Zhu; Yuan F. Zheng; Dong Xuan
Human-following robots are important for home, industrial and battlefield applications. To effectively interact with humans, a robot needs to locate a person's position and understand his/her motion. Vision-based techniques are widely used. However, due to the close distance between human and robot, and the limitation of a camera's field of view, only part of a human body can be observed most of the time. As such, the human motion observed by a robot is inherently ambiguous. Simultaneously identifying the body part being observed and the motion the person is undergoing is a challenging problem that has not been well studied in the past. In this paper, we propose a novel method that solves the body part and motion identification problem in a unified framework. The relative position of an observed part with respect to the whole body and the motion type are treated as continuous and discrete labels, respectively, and the most probable labeling is inferred by structured learning. A fast part-distribution estimation is introduced to reduce the computational cost. The proposed approach is able to identify different body parts without explicitly building models for each single part, and to recognize the motion with only partial body observations. The proposed approach is evaluated using actual videos captured by a human-following robot as well as synthesized videos from the public UCF50 dataset, originally developed for action recognition. The results demonstrate the effectiveness of the approach.
Highlights:
- We propose a method achieving body part and motion identification simultaneously.
- Part and motion are treated as combined continuous and discrete labels.
- We formulate the problem into a structured learning framework.
- The evaluation is done on actual videos and videos from public datasets.
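The joint inference over a discrete motion label and a continuous part position can be illustrated with an exhaustive grid search; this stand-in (the names joint_identify and score_fn are hypothetical) deliberately ignores the paper's structured-learning machinery and fast part-distribution estimation:

```python
def joint_identify(score_fn, motions, part_grid):
    """Exhaustive stand-in for structured inference: jointly pick the
    discrete motion label and the continuous part position (searched over
    a grid) that maximise a combined compatibility score."""
    best = None
    for m in motions:
        for p in part_grid:
            s = score_fn(m, p)
            if best is None or s > best[0]:
                best = (s, m, p)
    return best[1], best[2]
```

A learned model would replace score_fn and avoid the exhaustive search, but the output contract, one discrete label plus one continuous label, is the same.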
international conference on computer communications | 2015
Qiang Zhai; Sihao Ding; Xinfeng Li; Fan Yang; Jin Teng; Junda Zhu; Dong Xuan; Yuan F. Zheng; Wei Zhao
Human tracking in video has many practical applications such as visually guided navigation, assisted living, etc. In such applications, it is necessary to accurately track multiple humans across multiple cameras, subject to real-time constraints. Despite recent advances in visual tracking research, tracking systems that rely purely on visual information fail to meet the accuracy and real-time requirements at the same time. In this paper, we present a novel accurate and real-time human tracking system called VM-Tracking. The system aggregates information from motion (M) sensors carried by humans and integrates it with visual (V) data based on physical locations. The system has two key features, location-based VM fusion and appearance-free tracking, which significantly distinguish it from other existing human tracking systems. We have implemented the VM-Tracking system and conducted comprehensive experiments on challenging scenarios.
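In its simplest form, location-based fusion could be a greedy nearest-location assignment between inertial tracks and visual detections; this sketch (the names fuse_vm and max_dist are illustrative) is not the system's actual fusion algorithm:

```python
def fuse_vm(visual_positions, motion_positions, max_dist=1.5):
    """Greedy location-based fusion: match each motion-sensor track to the
    nearest unclaimed visual detection within max_dist (metres).

    Returns {track_id: detection_index}. No appearance features are used,
    only physical locations -- i.e. appearance-free matching.
    """
    pairs = []
    for tid, (mx, my) in motion_positions.items():
        for di, (vx, vy) in enumerate(visual_positions):
            d = ((mx - vx) ** 2 + (my - vy) ** 2) ** 0.5
            if d <= max_dist:
                pairs.append((d, tid, di))
    pairs.sort()                      # closest pairs claim matches first
    assigned, used = {}, set()
    for d, tid, di in pairs:
        if tid not in assigned and di not in used:
            assigned[tid] = di
            used.add(di)
    return assigned
```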
international conference on robotics and automation | 2015
Ying Li; Sihao Ding; Qiang Zhai; Yuan F. Zheng; Dong Xuan
Following a person is a fundamental requirement for human-robot interaction. In this paper we propose a novel approach for robust human feet tracking that integrates human locomotion into the tracking algorithm. We analyze the vertical displacement between the two feet and observe that, during the walking cycle, this displacement is close to a modulated cosine waveform. Based on this, we propose an adaptive model for the human walking pattern. We divide the motion of the human feet into local motion and global motion. The local motion is modeled by a modified cosine wave that updates over time. The global motion is estimated from the continuity between successive frames. This model is combined with particle filtering to guide the search for the feet. A 2D Gaussian mask is generated according to the position predicted by the motion model and is used to modify the weights of the particles. Experiments are conducted on several human walking videos and the algorithm is evaluated against the generic particle filtering method. Results show that the feet can be tracked successfully, with significant improvements over the generic method.
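The cosine walking model and the Gaussian reweighting of particles can be sketched as follows; the paper uses a 2D Gaussian mask, while this simplified illustration keeps only the vertical axis, and all parameter names and values are assumptions:

```python
import math

def predicted_displacement(t, amplitude, period, phase=0.0):
    """Vertical displacement between the two feet, modeled (per the paper's
    observation) as a modulated cosine over the walking cycle."""
    return amplitude * abs(math.cos(2 * math.pi * t / period + phase))

def reweight(particles, weights, predicted_y, sigma=5.0):
    """Multiply particle weights by a 1-D Gaussian centred on the motion
    model's predicted foot height, then renormalise. Particles near the
    prediction gain weight; outliers are suppressed."""
    new = [w * math.exp(-0.5 * ((p - predicted_y) / sigma) ** 2)
           for p, w in zip(particles, weights)]
    total = sum(new)
    return [w / total for w in new]
```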
intelligence and security informatics | 2013
Sihao Ding; Qiang Zhai; Yuan F. Zheng; Dong Xuan
This paper presents a novel side-view face authentication method based on the discrete wavelet transform and random forests. A subset selection method that increases the number of training samples while allowing each subset to preserve global information is presented. The authentication method comprises the following steps: profile extraction, wavelet decomposition, subset splitting and random forest verification. The new method takes advantage of the wavelet's localization property in both the frequency and spatial domains, while maintaining the generalization properties of random forests. The implementation of the proposed method is computationally feasible and the experimental results show that its performance is satisfactory. Future improvements are discussed in the paper.
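The wavelet-decomposition and subset-splitting steps can be illustrated with a one-level Haar transform and a round-robin split; the interleaving rule below is an assumption standing in for the paper's subset selection method:

```python
def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail). The length of the
    input must be even."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def split_subsets(coeffs, n_subsets):
    """Round-robin split of coefficients into disjoint subsets.

    Interleaving (rather than chunking) lets every subset retain the
    coarse global shape of the profile, in the spirit of the paper's
    subset-selection step, while multiplying the number of training
    samples fed to the random forest.
    """
    return [coeffs[i::n_subsets] for i in range(n_subsets)]
```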
Pattern Analysis and Applications | 2017
Ying Li; Qiang Zhai; Sihao Ding; Fan Yang; Gang Li; Yuan F. Zheng
An increasing number of healthcare issues arise from unsafe abnormal behaviors, such as falling and staggering, of a rapidly aging population. These abnormal behaviors, often accompanied by abrupt movements, can be life-threatening if unnoticed; real-time, accurate detection of this sort of behavior is essential for timely response. However, it is challenging to achieve generic yet accurate abnormal behavior detection in real time with moderate sensing devices and processing power. This paper presents an innovative system as a solution. It primarily utilizes visual data for detecting various types of abnormal behaviors, owing to the accuracy and generality of computer vision technologies. Unfortunately, the volume of recorded video data is huge, making it prohibitive to process all of it in real time. We propose using elder-carried mobile devices, either of dedicated design or smartphones, equipped with inertial sensors to trigger the selection of relevant video data. In this way, the system operates in a trigger-verify fashion, which leads to selective utilization of video data to guarantee both accuracy and efficiency in detection. The system is designed and implemented using inexpensive commercial off-the-shelf sensors and smartphones. Experimental evaluations in real-world settings illustrate our system's promise for real-time, accurate detection of abnormal behaviors.
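The trigger stage of such a trigger-verify pipeline might look like the following sketch, where an accelerometer spike selects short video windows for the (expensive) vision-based verifier; the threshold and window length are illustrative assumptions, not the system's tuned parameters:

```python
def trigger_windows(accel_magnitudes, sample_hz, threshold_g=2.0,
                    window_s=2.0):
    """Flag time windows around accelerometer spikes (possible falls or
    staggers) so that only those video segments are passed to the
    vision-based verification stage.

    Returns merged (start_s, end_s) intervals in seconds.
    """
    half = window_s / 2
    windows = []
    for i, g in enumerate(accel_magnitudes):
        if g >= threshold_g:
            t = i / sample_hz
            if windows and t - half <= windows[-1][1]:
                # overlaps the previous window: extend it
                windows[-1] = (windows[-1][0], t + half)
            else:
                windows.append((max(0.0, t - half), t + half))
    return windows
```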
mobile adhoc and sensor systems | 2017
Qiang Zhai; Fan Yang; Adam C. Champion; Chunyi Peng; Jingchuan Wang; Dong Xuan; Wei Zhao
In this paper, we study vision-based localization for robots. We anticipate that numerous mobile robots will serve or interact with humans in indoor scenarios such as healthcare, entertainment, and public service. Such scenarios entail accurate and scalable indoor visual robot localization, the subject of this work. Most existing vision-based localization approaches suffer from low localization accuracy and scalability issues due to visual environmental features’ limited effective range and detection accuracy. In light of infrastructural cameras’ wide indoor deployment, this paper proposes BRIDGELOC, a novel vision-based indoor robot localization system that integrates both robots’ and infrastructural cameras. BRIDGELOC develops three key technologies: robot and infrastructural camera view bridging, rotation symmetric visual tag design, and continuous localization based on robots’ visual and motion sensing. Our system bridges robots’ and infrastructural cameras’ views to accurately localize robots. We use visual tags with rotation symmetric patterns to extend scalability greatly. Our continuous localization enables robot localization in areas without visual tags and infrastructural camera coverage. We implement our system and build a prototype robot using commercial off-the-shelf hardware. Our real-world evaluation validates BRIDGELOC’s promise for indoor robot localization.
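Continuous localization between absolute fixes is classically done by dead reckoning from the robot's own motion sensing; this generic sketch illustrates the idea and is not BRIDGELOC's actual algorithm:

```python
import math

def dead_reckon(fix, odometry):
    """Continuous localization between absolute fixes: start from the last
    tag/infrastructural-camera fix (x, y, heading in radians) and integrate
    relative motion readings (distance, heading_change) until the next fix
    corrects the accumulated drift."""
    x, y, theta = fix
    for dist, dtheta in odometry:
        theta += dtheta
        x += dist * math.cos(theta)
        y += dist * math.sin(theta)
    return x, y, theta
```

Each new visual-tag or camera observation would reset the fix, bounding the drift that pure integration accumulates.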
wireless algorithms systems and applications | 2012
Adam C. Champion; Xinfeng Li; Qiang Zhai; Jin Teng; Dong Xuan
Thanks to smartphones' mass popularity in our society, our world is surrounded by ubiquitous electronic signals. These signals originate from static objects such as buildings and stores, and from mobile objects such as people and vehicles. Yet it is difficult to readily access electronic information. Current wireless communications focus on reliable transmission from sources to destinations, which entails tedious connection establishment and network configuration. This forms a virtual electronic barrier among people that makes unobtrusive communication difficult. In addition, there is concern about interacting with the electronic world due to the insecurity of such interactions. To safely remove the electronic barrier, we propose Enclave, a delegate wireless device that helps people's smartphones communicate unobtrusively. We realize Enclave using two key supporting technologies, NameCast and PicComm. NameCast uses wireless device names to unobtrusively transmit short messages without connection establishment. PicComm uses the transfer of visual images to securely deliver electronic information to people's smartphones. We implement Enclave on commercial off-the-shelf smartphones. Our experimental evaluation illustrates its potential for smartphone data protection and unobtrusive, secure communication.
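One plausible way to carry short messages in device names, in the spirit of NameCast, is to chunk the payload with a small sequence-number header; the frame format here (a prefix plus a two-digit sequence number) is an invented illustration, not NameCast's actual encoding:

```python
def namecast_encode(message, max_name_len=32, prefix="NC"):
    """Encode a short message as a sequence of wireless device names,
    each carrying a sequence number so receivers can reassemble the
    message without any connection establishment."""
    header = len(prefix) + 2                  # prefix + 2-digit sequence no.
    chunk = max_name_len - header
    parts = [message[i:i + chunk] for i in range(0, len(message), chunk)]
    return ["%s%02d%s" % (prefix, n, p) for n, p in enumerate(parts)]

def namecast_decode(names, prefix="NC"):
    """Reassemble a message from observed device names: filter by prefix,
    order by sequence number, strip the headers, and concatenate."""
    frames = sorted(n for n in names if n.startswith(prefix))
    return "".join(f[len(prefix) + 2:] for f in frames)
```

A receiver only needs to scan nearby device names, so no pairing or network configuration is involved, which is the unobtrusiveness the abstract describes.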
ieee international conference on computer and communications | 2016
Fan Yang; Qiang Zhai; Guoxing Chen; Adam C. Champion; Junda Zhu; Dong Xuan