Shansi Ren
College of William & Mary
Publications
Featured research published by Shansi Ren.
international conference on distributed computing systems | 2006
Shansi Ren; Lei Guo; Xiaodong Zhang
Peer-to-peer (P2P) technology has been successfully applied in Internet telephony or Voice over Internet Protocol (VoIP), such as the Skype system, where P2P is used for both searching clients and relaying voice packets. Selecting one or multiple suitable peers to relay voice packets is a critical factor for the quality, scalability and cost of a VoIP system. In this paper, we first present two sets of intensive Internet measurement results to confirm the benefits gained by peer relays in VoIP, and to investigate the performance of the Skype system. We obtain the following results: (1) many relay peer selections are suboptimal; (2) the waiting time to select a relay node can be quite long; and (3) there are a large number of unnecessary probes, resulting in heavy network traffic that limits the scalability of the VoIP system. Our further analysis shows that two main reasons cause these problems. First, the peer selections do not take Autonomous System (AS) topology into consideration, and second, the complex communication relationships among peers are not well utilized. Motivated by our measurements and analysis, we propose an AS-aware peer-relay protocol called ASAP. Our objective is to significantly improve VoIP quality and system scalability with low overhead. Our intensive evaluation by trace-driven simulation shows ASAP is highly effective and easy to implement on the Internet for building large and scalable VoIP systems.
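The abstract does not give ASAP's selection rule; a minimal hypothetical sketch of the AS-aware idea is to prefer relay candidates whose Autonomous System matches one of the call endpoints (avoiding extra inter-AS hops), breaking ties by probed round-trip time. The function and data shapes below are illustrative assumptions, not the paper's protocol.

```python
# Hypothetical sketch of AS-aware relay selection in the spirit of ASAP.
# candidates: list of (peer_id, peer_as, rtt_ms) tuples; AS numbers of the
# two call endpoints are src_as and dst_as.

def pick_relay(candidates, src_as, dst_as):
    def score(c):
        _, peer_as, rtt = c
        # A relay sharing an AS with an endpoint avoids extra inter-AS hops.
        as_penalty = 0 if peer_as in (src_as, dst_as) else 1
        return (as_penalty, rtt)
    return min(candidates, key=score)[0]

relays = [("p1", 7018, 80.0), ("p2", 3356, 40.0), ("p3", 7018, 55.0)]
best = pick_relay(relays, src_as=7018, dst_as=2914)  # prefers same-AS "p3"
```

Here "p2" has the lowest RTT but sits in a third AS, so the same-AS candidate "p3" wins; this mirrors the abstract's point that pure latency-based selection ignores AS topology.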
IEEE Transactions on Parallel and Distributed Systems | 2007
Shansi Ren; Qun Li; Haining Wang; Xin Chen; Xiaodong Zhang
Object detection quality and network lifetime are two conflicting aspects of a sensor network, but both are critical to many sensor applications such as military surveillance. Partial coverage, where a sensing field is partially sensed by active sensors at any time, is an appropriate approach to balancing the two conflicting design requirements of monitoring applications. Under partial coverage, we develop an analytical framework for object detection in sensor networks, and mathematically analyze average-case object detection quality in random and synchronized sensing scheduling protocols. Our analytical framework facilitates performance evaluation of a sensing schedule, network deployment, and sensing scheduling protocol design. Furthermore, we propose three wave sensing scheduling protocols to achieve bounded worst-case object detection quality. We justify the correctness of our analyses through rigorous proof, and validate the effectiveness of the proposed protocols through extensive simulation experiments.
international conference on computer communications | 2005
Xin Chen; Shansi Ren; Haining Wang; Xiaodong Zhang
While current peer-to-peer (P2P) systems facilitate static file sharing, newly developed applications demand that P2P systems be able to manage dynamically changing files. Maintaining consistency between frequently updated files and their replicas is a fundamental reliability requirement for a P2P system. In this paper, we present SCOPE, a structured P2P system supporting consistency among a large number of replicas. By building a replica-partition-tree (RPT) for each key, SCOPE keeps track of the locations of replicas and then propagates update notifications. Our theoretical analyses and experimental results demonstrate that SCOPE can effectively maintain replica consistency while preventing hot spot and node-failure problems. Its efficiency in maintenance and failure-recovery is particularly attractive to the deployment of large-scale P2P systems.
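The abstract names the replica-partition-tree (RPT) but not its mechanics; a minimal sketch, assuming a recursively halved key space where each node records which partitions hold replicas, shows how an update notification can visit only populated partitions. The class and method names are illustrative, not SCOPE's actual design.

```python
# Hypothetical sketch of a replica-partition-tree (RPT) in the spirit of
# SCOPE: the key space [lo, hi) is split recursively, and notifications
# descend only into partitions that actually contain replicas.

class RPT:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # key-space partition [lo, hi)
        self.left = self.right = None  # children created only when populated
        self.replica = None            # leaf: id of the node holding a replica

    def register(self, key, node_id):
        if self.hi - self.lo == 1:     # single-key leaf partition
            self.replica = node_id
            return
        mid = (self.lo + self.hi) // 2
        if key < mid:
            if self.left is None:
                self.left = RPT(self.lo, mid)
            self.left.register(key, node_id)
        else:
            if self.right is None:
                self.right = RPT(mid, self.hi)
            self.right.register(key, node_id)

    def notify(self, out):
        # Empty partitions have no child object, so they are never visited.
        if self.replica is not None:
            out.append(self.replica)
        for child in (self.left, self.right):
            if child is not None:
                child.notify(out)

tree = RPT(0, 8)
tree.register(1, "nodeA")
tree.register(6, "nodeB")
notified = []
tree.notify(notified)  # reaches exactly the replica holders
```

Because unpopulated subtrees are never materialized, the propagation cost scales with the number of replicas rather than the size of the key space, which is the property the abstract attributes to SCOPE.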
international workshop on quality of service | 2005
Shansi Ren; Qun Li; Haining Wang; Xin Chen; Xiaodong Zhang
Object detection quality and network lifetime are two conflicting aspects of a sensor network, but both are critical to many sensor applications such as military surveillance. Probabilistic coverage is an appropriate approach to balancing the conflicting design requirements of monitoring applications. Under probabilistic coverage, we present an analytical model to analyze object detection quality with respect to different network conditions and sensor scheduling schemes. Our analytical model facilitates performance evaluation of a sensing schedule, network deployment, and sensing scheduling protocol design. Applying the model to real sensor networks, we design a set of sensing scheduling protocols to achieve targeted object detection quality while minimizing power consumption. The correctness of our model and the effectiveness of the proposed protocols are validated through extensive simulation experiments.
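The paper's analytical model is not reproduced in the abstract; as an illustrative stand-in (not the authors' model), one can check a textbook-style detection formula for an idealized random schedule, where each of m covering sensors independently senses in a time slot with duty cycle p, against a Monte Carlo simulation.

```python
# Illustrative sketch only: under an idealized random schedule, an object
# present for k slots in the range of m sensors, each active with duty
# cycle p per slot, is detected with probability 1 - (1 - p)**(m * k).

import random

def detect_prob_mc(m, k, p, trials=20000, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        # One trial: m * k independent sensing opportunities.
        if any(random.random() < p for _ in range(m * k)):
            hits += 1
    return hits / trials

m, k, p = 3, 4, 0.1
analytic = 1 - (1 - p) ** (m * k)
empirical = detect_prob_mc(m, k, p)
```

The simulation converges to the closed form, illustrating how an analytical model of this kind lets one trade duty cycle (power) against detection probability without simulating every deployment.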
Mobile Computing and Communications Review | 2005
Shansi Ren; Qun Li; Haining Wang; Xin Chen; Xiaodong Zhang
Sensor networks are used for a wide range of object tracking applications, such as vehicle tracking in military surveillance and wild animal tracking in habitat monitoring [1]. These applications, by their nature, enforce certain tracking quality and lifetime requirements. These two requirements, however, are two conflicting optimization goals due to the stringent energy constraints of sensor nodes.
IEEE Transactions on Knowledge and Data Engineering | 2007
Xin Chen; Haining Wang; Shansi Ren; Xiaodong Zhang
Effective caching in the domain name system (DNS) is critical to its performance and scalability. Existing DNS only supports weak cache consistency by using the time-to-live (TTL) mechanism, which functions reasonably well in normal situations. However, maintaining strong cache consistency in DNS as an indispensable exception-handling mechanism has become more and more demanding for three important objectives: 1) to quickly respond to and handle exceptions such as sudden and dramatic Internet failures caused by natural and human disasters, 2) to adapt to increasingly frequent changes of Internet Protocol (IP) addresses due to the introduction of dynamic DNS techniques for various stationed and mobile devices on the Internet, and 3) to provide fine-grain controls for content delivery services to balance server load distributions in a timely manner. With agile adaptation to various exceptional Internet dynamics, strong DNS cache consistency improves the availability and reliability of Internet services. In this paper, we first conduct extensive Internet measurements to quantitatively characterize DNS dynamics. Then, we propose a proactive DNS cache update protocol (DNScup), running as middleware in DNS name servers, to provide strong cache consistency for DNS. The core of DNScup is an optimal lease scheme, called dynamic lease, to keep track of the local DNS name servers. We compare dynamic lease with other existing lease schemes through theoretical analysis and trace-driven simulations. Based on the DNS dynamic update protocol, we build a DNScup prototype with minor modifications to the current DNS implementation. Our system prototype demonstrates the effectiveness of DNScup and its easy and incremental deployment on the Internet.
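The abstract does not spell out the dynamic lease rule; a minimal hypothetical sketch of the underlying trade-off is that pushing invalidations pays off only for resolvers whose query rate for a record exceeds that record's update rate, with lease length growing as queries dominate. The function, its scaling rule, and its cap are illustrative assumptions, not DNScup's actual scheme.

```python
# Hypothetical sketch of the lease trade-off behind a dynamic-lease scheme:
# within a lease the server pushes update notifications (cost per update);
# without one, the resolver re-polls per query (cost per query).

def grant_lease(query_rate, update_rate, max_lease=3600.0):
    """Rates in events/second; returns lease duration in seconds (0 = none)."""
    if update_rate <= 0:
        return max_lease          # static record: grant the longest lease
    if query_rate <= update_rate:
        return 0.0                # polling is cheaper than pushing updates
    # Illustrative rule: lease scales with how query-dominated the client is.
    return min(max_lease, (query_rate / update_rate) * 60.0)
```

For example, a busy resolver querying a rarely-updated record gets a long lease, while a record that changes more often than it is asked for gets none; this matches the abstract's framing of dynamic lease as an optimal middle ground between pure TTL polling and blanket notification.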
distributed computing in sensor systems | 2005
Shansi Ren; Qun Li; Haining Wang; Xiaodong Zhang
Many sensor network applications demand tightly-bounded object detection quality. To meet such stringent requirements, we develop three sensing scheduling protocols to guarantee worst-case detection quality in a sensor network while reducing sensing power consumption. Our protocols emulate a line sweeping through all points in the sensing field periodically. Nodes wake up when the sweeping line comes close, and then go to sleep when the line moves forward. In this way, any object can be detected within a certain period. We prove the correctness of the protocols and evaluate their performances by theoretical analyses and simulation.
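The sweeping-line idea above can be sketched concretely: if a virtual line crosses the field at speed v with period T, a node wakes exactly while the line is within its sensing radius, so every point is sensed once per period. The functions below are a simplified one-dimensional illustration under those assumptions, not the paper's three protocols.

```python
# Hypothetical 1-D sketch of wave sensing: a virtual line starts at x = 0
# each period and sweeps at speed v; a node at coordinate x with sensing
# radius r is awake only while the line lies within [x - r, x + r].

def wake_window(x, r, v, period):
    """Return (start, end) of the node's wake interval within one period."""
    start = max(0.0, (x - r) / v)
    end = min(period, (x + r) / v)
    return start, end

def awake(x, r, v, period, t):
    """True if the node at x is awake at absolute time t."""
    start, end = wake_window(x, r, v, period)
    return start <= (t % period) <= end

# A node at x=50 with r=10 and v=5 is awake while the line is in [40, 60],
# i.e. during [8, 12] seconds of each period.
```

Since the line revisits every point once per period, an object anywhere in the field is sensed within at most one period, which is the bounded worst-case detection delay the abstract claims, while each node stays awake only for the short window 2r/v.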
international conference on distributed computing systems | 2006
Xin Chen; Haining Wang; Shansi Ren
Effective caching in the Domain Name System (DNS) is critical to its performance and scalability. Existing DNS only supports weak cache consistency by using the Time-To-Live (TTL) mechanism, which functions reasonably well in normal situations. However, maintaining strong cache consistency in DNS as an indispensable exception-handling mechanism has become more and more demanding for three important objectives: (1) to quickly respond to and handle exceptional incidents, such as sudden and dramatic Internet failures caused by natural and human disasters, (2) to adapt to increasingly frequent changes of IP addresses due to the introduction of dynamic DNS techniques for various stationed and mobile devices on the Internet, and (3) to provide fine-grain controls for content delivery services to balance server load distributions in a timely manner. With agile adaptation to various exceptional Internet dynamics, strong DNS cache consistency improves the availability and reliability of Internet services. In this paper, we propose a proactive DNS cache update protocol, called DNScup, running as middleware in DNS nameservers, to provide strong cache consistency for DNS. The core of DNScup is a dynamic lease technique to keep track of the local DNS nameservers, whose clients need cache coherence to avoid losing service availability. Based on the DNS Dynamic Update protocol, we have built a DNScup prototype with minor modifications to the current DNS implementation. Our trace-driven simulation and system prototype demonstrate the effectiveness of DNScup and its easy and incremental deployment on the Internet.
international conference on distributed computing systems | 2004
Lei Guo; Songqing Chen; Shansi Ren; Xin Chen; Song Jiang
international conference on computer communications | 2010
Shansi Ren; Enhua Tan; Tian Luo; Songqing Chen; Lei Guo; Xiaodong Zhang