Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Kaishun Wu is active.

Publication


Featured research published by Kaishun Wu.


IEEE Communications Surveys and Tutorials | 2015

From QoS to QoE: A Tutorial on Video Quality Assessment

Yanjiao Chen; Kaishun Wu; Qian Zhang

Quality of experience (QoE) is the perceptual quality of service (QoS) from the user's perspective. For video services, the relationship between QoE and QoS (such as coding parameters and network statistics) is complicated because users' perceptual video quality is subjective and varies across environments. Traditionally, QoE is obtained from subjective tests, in which human viewers evaluate the quality of test videos in a laboratory environment. To avoid the high cost and offline nature of such tests, objective quality models have been developed to predict QoE from objective QoS parameters, but these remain an indirect way to estimate QoE. With the rising popularity of video streaming over the Internet, data-driven QoE analysis models have recently emerged thanks to the availability of large-scale data. In this paper, we give a comprehensive survey of the evolution of video quality assessment methods, analyzing their characteristics, advantages, and drawbacks. We also introduce QoE-based video applications and, finally, identify future research directions for QoE.
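The data-driven direction the survey describes can be illustrated with a minimal sketch: fit a simple linear model mapping one objective QoS metric to a subjective quality score. The metric (buffering ratio), the sample values, and the single-feature model are all hypothetical simplifications for illustration, not the survey's own method.

```python
# Minimal sketch of a data-driven QoE model: fit a one-variable linear
# predictor mapping a QoS metric (buffering ratio) to a mean opinion score.
# All sample numbers below are hypothetical, for illustration only.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training data: (buffering ratio, mean opinion score 1-5).
buffering = [0.00, 0.02, 0.05, 0.10, 0.20]
mos = [4.8, 4.5, 4.0, 3.2, 2.1]

a, b = fit_linear(buffering, mos)

def predict(x):
    return a * x + b
```

Real data-driven QoE models use many QoS features and richer learners, but the shape is the same: learn the QoS-to-QoE mapping from large-scale measurement data instead of lab tests.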


international conference on computer communications | 2012

HJam: Attachment transmission in WLANs

Kaishun Wu; Haochao Li; Lu Wang; Youwen Yi; Yunhuai Liu; Dihu Chen; Xiaonan Luo; Qian Zhang; Lionel M. Ni

Effective coordination can dramatically reduce radio interference and avoid packet collisions in multistation wireless local area networks (WLANs). Coordination itself, however, consumes communication resources and thus competes with data transmission for the limited wireless radio resources. In traditional approaches, control frames and data packets are transmitted in an alternating manner, which incurs a great deal of coordination overhead. In this paper, we propose a new communication model in which control frames can be attached to the data transmission, so that control messages and data traffic are transmitted simultaneously and channel utilization is improved significantly. We implement the idea in OFDM-based WLANs as a system called hJam, which fully exploits the physical-layer features of OFDM modulation and allows one data packet and a number of control messages to be transmitted together. hJam is implemented on a GNU Radio testbed consisting of eight USRP2 nodes. We also conduct comprehensive simulations, and the experimental results show that hJam can improve WLAN efficiency by up to 200 percent compared with the existing 802.11 family of protocols.
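The attachment idea can be sketched at the subcarrier level: in one OFDM symbol, most subcarriers carry payload bits while a few carry control bits, so coordination rides along with the data instead of occupying separate control frames. The subcarrier count, the reserved slots, and the BPSK mapping below are hypothetical choices for illustration, not hJam's actual PHY design.

```python
# Sketch of attaching control information to a data transmission in OFDM:
# reserve a few subcarriers of each frequency-domain symbol for control
# bits and fill the rest with payload bits. Layout below is hypothetical.

N_SUBCARRIERS = 16
CONTROL_SLOTS = [0, 5, 10, 15]  # hypothetical reserved subcarriers

def build_symbol(data_bits, control_bits):
    """Map data and control bits onto one frequency-domain OFDM symbol
    (BPSK: bit 0 -> -1.0, bit 1 -> +1.0)."""
    assert len(control_bits) == len(CONTROL_SLOTS)
    assert len(data_bits) == N_SUBCARRIERS - len(CONTROL_SLOTS)
    symbol, data_it, ctrl_it = [], iter(data_bits), iter(control_bits)
    for k in range(N_SUBCARRIERS):
        bit = next(ctrl_it) if k in CONTROL_SLOTS else next(data_it)
        symbol.append(1.0 if bit else -1.0)
    return symbol

def split_symbol(symbol):
    """Receiver side: demap control and data bits from their subcarriers."""
    control = [int(symbol[k] > 0) for k in CONTROL_SLOTS]
    data = [int(symbol[k] > 0) for k in range(N_SUBCARRIERS)
            if k not in CONTROL_SLOTS]
    return data, control

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
ctrl = [1, 0, 0, 1]
rx_data, rx_ctrl = split_symbol(build_symbol(data, ctrl))
```

Over an ideal channel both streams come back intact; the point of the sketch is only that one symbol carries data and control simultaneously, which is what removes the alternating control-frame overhead.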


IEEE Transactions on Mobile Computing | 2016

We Can Hear You with Wi-Fi!

Guanhua Wang; Yongpan Zou; Zimu Zhou; Kaishun Wu; Lionel M. Ni

Recent literature advances Wi-Fi signals to “see” people's motions and locations. This paper asks the following question: can Wi-Fi “hear” our talks? We present WiHear, which enables Wi-Fi signals to “hear” talks without deploying any extra devices. To achieve this, WiHear needs to detect and analyze fine-grained radio reflections from mouth movements. WiHear solves this micro-movement detection problem by introducing a Mouth Motion Profile that leverages partial multipath effects and the wavelet packet transform. Since Wi-Fi signals do not require line-of-sight, WiHear can “hear” people's talks within the radio range. Further, WiHear can simultaneously “hear” multiple people's talks by leveraging MIMO technology. We implement WiHear on both the USRP N210 platform and commercial Wi-Fi infrastructure. Results show that, within our pre-defined vocabulary, WiHear achieves a detection accuracy of 91 percent on average for a single individual speaking no more than six words, and up to 74 percent for no more than three people talking simultaneously. Moreover, detection accuracy can be further improved by deploying multiple receivers at different angles.
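The wavelet packet transform the abstract mentions can be sketched in a few lines: unlike the plain discrete wavelet transform, which recursively splits only the approximation branch, a wavelet packet transform splits every node at every level, giving a full set of sub-bands. The Haar filters and the toy signal here are illustrative choices; the abstract does not specify WiHear's wavelet.

```python
# Sketch of a wavelet packet decomposition (the signal-processing step
# behind a "Mouth Motion Profile"): split EVERY node at each level into
# approximation + detail, yielding 2**depth sub-bands. Haar filters are
# used here for simplicity.
import math

def haar_step(signal):
    """One Haar analysis step: (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_packets(signal, depth):
    """Full wavelet packet tree: returns the 2**depth leaf sub-bands."""
    nodes = [signal]
    for _ in range(depth):
        next_nodes = []
        for node in nodes:
            a, d = haar_step(node)
            next_nodes.extend([a, d])
        nodes = next_nodes
    return nodes

sig = [1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0]
bands = wavelet_packets(sig, 2)
```

Because the Haar filters are orthonormal, the signal's energy is preserved across the sub-bands; a per-band energy profile of the reflected signal is the kind of feature a micro-movement classifier can be trained on.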


IEEE Transactions on Parallel and Distributed Systems | 2014

MODLoc: Localizing Multiple Objects in Dynamic Indoor Environment

Xiaonan Guo; Dian Zhang; Kaishun Wu; Lionel M. Ni

Radio frequency (RF) based technologies play an important role in indoor localization, since Received Signal Strength (RSS) can be easily measured by various wireless devices without additional cost. Among these, radio-map-based technologies (also referred to as fingerprinting technologies) are attractive due to their high accuracy and easy deployment. However, these technologies have not been extensively applied in real environments because of two fatal limitations. First, it is hard to localize multiple objects: when the number of target objects is unknown, constructing a radio map of multiple objects is almost impossible. Second, environment changes generate different multipath signals and severely disturb the RSS measurement, making laborious retraining inevitable. Motivated by this, we propose in this paper a novel approach, called line-of-sight (LOS) radio map matching, which reserves only the LOS signal among nodes. It leverages frequency diversity to eliminate multipath behavior, making RSS more reliable than before. We implement our system, MODLoc, based on TelosB sensor nodes as well as commercial 802.11 NICs with Channel State Information (CSI). Extensive experiments show that accuracy does not decrease when localizing multiple targets in a dynamic environment. Our work outperforms traditional methods by about 60 percent. More importantly, no calibration is required in such environments. Furthermore, our approach offers attractive flexibility, making it appropriate for general RF-based localization studies beyond radio-map-based localization.
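The frequency-diversity idea can be sketched as follows: multipath fading is frequency-selective, so deep fades hit different channels differently, while the LOS component is present on all of them. Combining per-channel RSS (here, simply taking the strongest reading) suppresses multipath variation, and the result is matched against a LOS radio map by nearest neighbour. The radio map, the anchor layout, and all RSS values below are hypothetical, and the max-combining rule is a simplification of the paper's actual processing.

```python
# Sketch of LOS radio-map matching with frequency diversity: combine RSS
# measured on several channels to suppress frequency-selective multipath
# fades, then nearest-neighbour match against a LOS radio map.
# All values below are hypothetical.

def los_rss(per_channel_rss):
    """Combine per-channel RSS; the strongest reading is least affected
    by channel-specific multipath fades."""
    return max(per_channel_rss)

def localize(reading, radio_map):
    """Nearest-neighbour match of an RSS vector (one value per anchor)
    against a LOS radio map of {location: rss_vector}."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(radio_map, key=lambda loc: dist(reading, radio_map[loc]))

# Hypothetical LOS radio map: location -> RSS (dBm) from three anchors.
radio_map = {
    "room_A": [-40, -55, -70],
    "room_B": [-60, -42, -58],
    "room_C": [-72, -61, -45],
}

# Per-anchor RSS measured on four channels; multipath causes random dips.
measured = [[-41, -49, -40, -52],
            [-56, -54, -62, -60],
            [-70, -75, -69, -73]]
reading = [los_rss(channels) for channels in measured]
location = localize(reading, radio_map)
```

Because only the LOS component is kept, the map does not need retraining when furniture or people change the multipath profile, which is what removes the recalibration burden.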


IEEE Transactions on Mobile Computing | 2017

GRfid: A Device-Free RFID-Based Gesture Recognition System

Yongpan Zou; Jiang Xiao; Jinsong Han; Kaishun Wu; Yun Li; Lionel M. Ni

Gesture recognition has emerged recently as a promising application in our daily lives. Owing to low cost, prevalent availability, and structural simplicity, RFID is poised to become a popular technology for gesture recognition. However, the performance of existing RFID-based gesture recognition systems is constrained by unfavorable intrusiveness to users, who are required to attach tags to their bodies. To overcome this, we propose GRfid, a novel device-free gesture recognition system based on the phase information output by COTS RFID devices. Our work stems from the key insight that RFID phase information is capable of capturing the spatial features of various gestures with low-cost commodity hardware. In GRfid, after data are collected by the hardware, we process them with a sequence of functional blocks, namely data preprocessing, gesture detection, profile training, and gesture recognition, all of which are well designed to achieve high recognition performance. We have implemented GRfid with a commercial RFID reader and multiple tags, and conducted extensive experiments in different scenarios to evaluate its performance. The results demonstrate that GRfid can achieve average recognition accuracies of 96.5 and 92.8 percent in the identical-position and diverse-positions scenarios, respectively. Moreover, experiment results show that GRfid is robust against environmental interference and tag orientations.


international conference on computer communications and networks | 2015

WiG: WiFi-Based Gesture Recognition System

Wenfeng He; Kaishun Wu; Yongpan Zou; Zhong Ming

Recently, gesture recognition has attracted intense academic and industrial interest due to its various applications in daily life, such as home automation and mobile gaming. Present approaches for gesture recognition, mainly vision-based, sensor-based, and RF-based, all have certain limitations that hinder their practical use in some scenarios. For example, vision-based approaches fail to work well in poor light conditions, and sensor-based ones require users to wear devices. To address these limitations, we propose WiG, a device-free gesture recognition system based solely on Commercial Off-The-Shelf (COTS) WiFi infrastructure and devices. Compared with existing Radio Frequency (RF)-based systems, WiG stands out for its systematic simplicity, extremely low cost, and high practicability. We implemented WiG in an indoor environment and conducted experiments to evaluate its performance in two typical scenarios. The results demonstrate that WiG can achieve an average recognition accuracy of 92 percent in the line-of-sight scenario and 88 percent in the non-line-of-sight scenario.


IEEE Transactions on Wireless Communications | 2014

CSMA/SF: Carrier Sense Multiple Access with Shortest First

Guanhua Wang; Kaishun Wu; Lionel M. Ni


IEEE Transactions on Mobile Computing | 2016

SmartScanner: Know More in Walls with Your Smartphone!

Yongpan Zou; Guanhua Wang; Kaishun Wu; Lionel M. Ni


international conference on network protocols | 2013

Voice over the dins: Improving wireless channel utilization with collision tolerance

Xiaoyu Ji; Jiliang Wang; Kaishun Wu; Ke Yi; Yunhao Liu


Computer Networks | 2015

Understanding viewer engagement of video service in Wi-Fi network

Yanjiao Chen; Qihong Chen; Fan Zhang; Qian Zhang; Kaishun Wu; Ruochen Huang; Liang Zhou

Collaboration


Dive into Kaishun Wu's collaborations.

Top Co-Authors

Qian Zhang
Hong Kong University of Science and Technology

Wei Lou
Hong Kong Polytechnic University

Jun Xu
Shenzhen University

Guanhua Wang
Hong Kong University of Science and Technology

Junmei Yao
Hong Kong Polytechnic University