Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Songqing Chen is active.

Publication


Featured research published by Songqing Chen.


Internet Measurement Conference | 2005

Measurements, analysis, and modeling of BitTorrent-like systems

Lei Guo; Songqing Chen; Zhen Xiao; Enhua Tan; Xiaoning Ding; Xiaodong Zhang

Existing studies on BitTorrent systems are single-torrent based, while more than 85% of all peers participate in multiple torrents according to our trace analysis. In addition, these studies are not sufficiently insightful and accurate even for single-torrent models, due to some unrealistic assumptions. Our analysis of representative BitTorrent traffic provides several new findings regarding the limitations of BitTorrent systems: (1) Due to the exponentially decreasing peer arrival rate in reality, service availability in such systems becomes poor quickly, after which it is difficult for the file to be located and downloaded. (2) Client performance in BitTorrent-like systems is unstable, and fluctuates widely with the peer population. (3) Existing systems could provide unfair services to peers, where peers with high downloading speed tend to download more and upload less. In this paper, we study these limitations on torrent evolution in realistic environments. Motivated by the analysis and modeling results, we further build a graph-based multi-torrent model to study inter-torrent collaboration. Our model quantitatively provides strong motivation for inter-torrent collaboration instead of directly stimulating seeds to stay longer. We also discuss a system design to show the feasibility of multi-torrent collaboration.
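The effect of an exponentially decreasing peer arrival rate on the peer population can be illustrated with a small numeric sketch. The arrival-rate constants below are hypothetical choices for illustration, not values fitted from the paper's traces.

```python
import math

def peers_online(t, lam0=100.0, tau=5.0, stay=1.0):
    """Expected number of peers online at time t, assuming arrivals follow
    lambda(t) = lam0 * exp(-t / tau) and each peer stays online for `stay`
    time units (all constants hypothetical).  Integrating the arrival rate
    over [t - stay, t] gives the expected population."""
    start = max(0.0, t - stay)
    return lam0 * tau * (math.exp(-start / tau) - math.exp(-t / tau))

# The population peaks shortly after the torrent's birth and then decays
# exponentially, which is why service availability degrades quickly.
early = peers_online(1.0)
late = peers_online(20.0)
```

With these constants the population shortly after birth dwarfs the population at t = 20, mirroring the observed decline in downloading service availability as a torrent ages.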


IEEE Journal on Selected Areas in Communications | 2007

A performance study of BitTorrent-like peer-to-peer systems

Lei Guo; Songqing Chen; Zhen Xiao; Enhua Tan; Xiaoning Ding; Xiaodong Zhang

This paper presents a performance study of BitTorrent-like P2P systems by modeling, based on extensive measurements and trace analysis. Existing studies on BitTorrent systems are single-torrent based and usually assume the process of request arrivals to a torrent is Poisson-like. However, in reality, most BitTorrent peers participate in multiple torrents and file popularity changes over time. Our study of representative BitTorrent traffic provides insights into the evolution of single-torrent systems and several new findings regarding the limitations of BitTorrent systems: (1) Due to the exponentially decreasing peer arrival rate in a torrent, the service availability of the corresponding file becomes poor quickly, and eventually it is hard to locate and download this file. (2) Client performance in BitTorrent-like systems is unstable, and fluctuates significantly with the changes of the number of online peers. (3) Existing systems could provide unfair services to peers, where a peer with a higher downloading speed tends to download more and upload less. Motivated by the analysis and modeling results, we have further proposed a graph-based model to study interactions among multiple torrents. Our model quantitatively demonstrates that inter-torrent collaboration is much more effective than stimulating seeds to serve longer for addressing the service unavailability in BitTorrent systems. An architecture for inter-torrent collaboration under an exchange-based instant incentive mechanism is also discussed and evaluated by simulations.


Principles of Distributed Computing | 2008

The stretched exponential distribution of internet media access patterns

Lei Guo; Enhua Tan; Songqing Chen; Zhen Xiao; Xiaodong Zhang

The widely accepted Zipf-like access pattern of Web workloads is mainly based on Internet measurements made when text-based content dominated Web traffic. However, with the dramatic increase of media traffic on the Internet, the inconsistency between the access patterns of media objects and the Zipf model has been observed in a number of studies. An insightful understanding of media access patterns is essential to guide Internet system design and management, including resource provisioning and performance optimizations. In this paper, we have studied a large variety of media workloads collected from both client and server sides in different media systems with different delivery methods. Through extensive analysis and modeling, we find: (1) the object reference ranks of all these workloads follow the stretched exponential (SE) distribution despite their different media systems and delivery methods; (2) one parameter of this distribution well characterizes the media file sizes, and the other well characterizes the aging of media accesses; (3) some biased measurements may lead to Zipf-like observations on media access patterns; and (4) the deviation of media access patterns from the Zipf model in these workloads increases along with the workload duration. We have further analyzed the effectiveness of media caching with a mathematical model. Compared with Web caching under the Zipf model, media caching under the SE model is far less effective unless the cache size is enormously large. This indicates that many previous studies based on a Zipf-like assumption have potentially overestimated the media caching benefit, while an effective media caching system must be able to scale its storage size to accommodate the increase of media content over a long time. Our study provides an analytical basis for applying a P2P model rather than a client-server model to build large-scale Internet media delivery systems.
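The contrast between Zipf and stretched exponential (SE) popularity, and its consequence for caching, can be made concrete with a toy hit-ratio computation. The distribution parameters below are illustrative assumptions, not fits from the paper's workloads.

```python
import math

def zipf_weights(n, alpha=1.0):
    """Zipf-like popularity: the rank-i object gets weight i^(-alpha)."""
    return [i ** -alpha for i in range(1, n + 1)]

def se_weights(n, a=0.05, c=0.5):
    """Stretched exponential popularity: weight ~ exp(-a * i^c)
    (a and c here are illustrative, not fitted values)."""
    return [math.exp(-a * i ** c) for i in range(1, n + 1)]

def hit_ratio(weights, cache_size):
    """Hit ratio of an ideal cache that holds the `cache_size` most
    popular objects out of the whole population."""
    return sum(weights[:cache_size]) / sum(weights)

n = 10_000
zipf_hr = hit_ratio(zipf_weights(n), 100)  # caching 1% of objects
se_hr = hit_ratio(se_weights(n), 100)
```

Under these parameters a cache holding 1% of the objects captures roughly half the references when popularity is Zipf-like, but only a small fraction under the SE distribution, echoing the paper's conclusion that Zipf-based analyses overestimate media caching benefits.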


Recent Advances in Intrusion Detection | 2009

VirusMeter: Preventing Your Cellphone from Spies

Lei Liu; Guanhua Yan; Xinwen Zhang; Songqing Chen

Due to the rapid advancement of mobile communication technology, mobile devices nowadays can support a variety of data services that are not traditionally available. With the growing popularity of mobile devices in the last few years, attacks targeting them are also surging. Existing mobile malware detection techniques, which are often borrowed from solutions to Internet malware detection, do not perform as effectively due to the limited computing resources on mobile devices. In this paper, we propose VirusMeter, a novel and general malware detection method, to detect anomalous behaviors on mobile devices. The rationale underlying VirusMeter is the fact that mobile devices are usually battery powered and any malicious activity would inevitably consume some battery power. By monitoring power consumption on a mobile device, VirusMeter catches misbehaviors that lead to abnormal power consumption. For this purpose, VirusMeter relies on a concise user-centric power model that characterizes power consumption of common user behaviors. In a real-time mode, VirusMeter can perform fast malware detection with trivial runtime overhead. When the battery is charging (referred to as a battery-charging mode), VirusMeter applies more sophisticated machine learning techniques to further improve the detection accuracy. To demonstrate its feasibility and effectiveness, we have implemented a VirusMeter prototype on Nokia 5500 Sport and used it to evaluate some real cellphone malware, including FlexiSPY and Cabir. Our experimental results show that VirusMeter can effectively detect these malware activities with less than 1.5% additional power consumption in real time.
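A user-centric power model of the flavor described above can be sketched as a per-activity cost table plus a deviation check. The activity names, costs, and threshold below are invented for illustration; the paper builds its model from measurements on the actual handset.

```python
# Hypothetical per-activity power costs in milliwatts (illustrative only).
POWER_MODEL = {"idle": 30.0, "call": 800.0, "sms": 120.0, "data": 400.0}

def predicted_energy(activity_seconds):
    """Predict energy (mW*s) for a monitoring window from the logged
    user activities, using the user-centric power model above."""
    return sum(POWER_MODEL[a] * s for a, s in activity_seconds.items())

def is_anomalous(measured, activity_seconds, tolerance=0.15):
    """Flag the window when measured energy exceeds the prediction by
    more than `tolerance` (hypothetical threshold): the surplus suggests
    power drawn by activity the user did not initiate."""
    return measured > predicted_energy(activity_seconds) * (1.0 + tolerance)

window = {"idle": 50.0, "call": 8.0, "data": 2.0}
# predicted_energy(window) = 30*50 + 800*8 + 400*2 = 8700 mW*s
```

In the real-time mode this comparison is cheap enough to run continuously; the battery-charging mode can then apply heavier machine-learning analysis to the same logs.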


International Conference on Network Protocols | 2007

PSM-throttling: Minimizing Energy Consumption for Bulk Data Communications in WLANs

Enhua Tan; Lei Guo; Songqing Chen; Xiaodong Zhang

While the 802.11 power saving mode (PSM) and its enhancements can reduce power consumption by putting the wireless network interface (WNI) into sleep as much as possible, they either require additional infrastructure support, or may degrade the transmission throughput and cause additional transmission delay. These schemes are not suitable for long and bulk data transmissions with strict QoS requirements on wireless devices. With increasingly abundant bandwidth available on the Internet, we have observed that TCP congestion control is often not a constraint of bulk data transmissions, as bandwidth throttling is widely used in practice. In this paper, instead of further manipulating the trade-off between the power saving and the incurred delay, we effectively explore the power saving potential by considering the bandwidth throttling on streaming/downloading servers. We propose an application-independent protocol, called PSM-throttling. With a quick detection of the TCP flow throughput, a client can identify bandwidth-throttling connections at a low cost. Since the throttling enables us to reshape the TCP traffic into periodic bursts with the same average throughput as the server transmission rate, the client can accurately predict the arriving time of packets and turn on/off the WNI accordingly. PSM-throttling can minimize power consumption on TCP-based bulk traffic by effectively utilizing available Internet bandwidth without degrading the application performance perceived by the user. Furthermore, PSM-throttling is client-centric, and does not need any additional infrastructure support. Our lab-environment and Internet-based evaluation results show that PSM-throttling can effectively improve energy savings (by up to 75%) and/or the QoS for a broad range of TCP-based applications, including streaming, pseudo streaming, and large file downloading, over existing PSM-like methods.
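The core arithmetic of the burst-and-sleep idea can be sketched in a few lines: once traffic is reshaped into periodic bursts, the radio's awake time per period is set by the ratio of the throttled server rate to the wireless link rate. The rates below are example numbers, not measurements from the paper.

```python
def wni_schedule(server_rate, link_rate, period=1.0):
    """With the server throttled to `server_rate` and a wireless link of
    `link_rate` (both bytes/s), traffic reshaped into one burst per
    `period` seconds arrives in a burst lasting burst_time seconds;
    the WNI can sleep for the remainder of each period."""
    burst_time = server_rate * period / link_rate
    return burst_time, 1.0 - burst_time / period

# Example: a 500 kB/s throttled stream over a 5 MB/s link keeps the radio
# awake only a tenth of each period, so most idle-listening is avoidable.
awake, sleep_fraction = wni_schedule(500_000, 5_000_000)
```

The sleep fraction grows as the link rate outpaces the throttled rate, which is why abundant last-hop bandwidth makes this client-centric scheme effective without any infrastructure support.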


Network and Operating System Support for Digital Audio and Video | 2003

Adaptive and lazy segmentation based proxy caching for streaming media delivery

Songqing Chen; Bo Shen; Susie Wee; Xiaodong Zhang

Streaming media objects are often cached in segments. Previous segment-based caching strategies cache segments with constant or exponentially increasing lengths and typically favor caching the beginning segments of media objects. However, these strategies typically do not consider the fact that most accesses are targeted toward a few popular objects. In this paper, we argue that neither the use of a predefined segment length nor the favorable caching of the beginning segments is the best caching strategy for reducing network traffic. We propose an adaptive and lazy segmentation based caching mechanism by delaying the segmentation as late as possible and determining the segment length based on the client access behaviors in real time. In addition, the admission and eviction of segments are carried out adaptively based on an accurate utility function. The proposed method is evaluated by simulations using traces including one from actual enterprise server logs. Simulation results indicate that our proposed method achieves a 30% reduction in network traffic. The utility function of the replacement policy is also evaluated with different variations to show its accuracy.
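The lazy part of the scheme, delaying segmentation until real client behavior is observed, can be sketched as follows. The access threshold and the use of the mean playback duration as the base segment length are simplifying assumptions for illustration, not the paper's exact policy.

```python
class MediaObject:
    """Sketch of lazy segmentation (constants are illustrative): an object
    is cached whole until enough accesses reveal client behavior, then the
    observed average playback duration becomes the base segment length."""
    def __init__(self, length):
        self.length = length          # full object length in seconds
        self.play_durations = []      # observed playback durations
        self.segment_len = None       # None until lazily segmented

    def record_access(self, played_seconds, threshold=3):
        self.play_durations.append(played_seconds)
        # Lazy: delay segmentation until `threshold` accesses are seen.
        if self.segment_len is None and len(self.play_durations) >= threshold:
            self.segment_len = sum(self.play_durations) / len(self.play_durations)

obj = MediaObject(length=600.0)
for played in (100.0, 140.0, 120.0):
    obj.record_access(played)
# After three accesses the base segment length adapts to real behavior:
# (100 + 140 + 120) / 3 = 120 seconds, rather than a predefined constant.
```

Because clients of this object typically stop around the two-minute mark, caching in 120-second segments avoids both over-caching unwatched tails and fragmenting the popular prefix.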


Internet Measurement Conference | 2006

Delving into internet streaming media delivery: a quality and resource utilization perspective

Lei Guo; Enhua Tan; Songqing Chen; Zhen Xiao; Oliver Spatscheck; Xiaodong Zhang

Modern Internet streaming services have utilized various techniques to improve the quality of streaming media delivery. Despite the characterization of media access patterns and user behaviors in many measurement studies, few studies have focused on the streaming techniques themselves, particularly on the quality of streaming experiences they offer end users and on the resources of the media systems that they consume. In order to gain insights into current streaming techniques and thus provide guidance on designing resource-efficient and high-quality streaming media systems, we have collected a large streaming media workload from thousands of broadband home users and business users hosted by a major ISP, and analyzed the most commonly used streaming techniques such as automatic protocol switch, Fast Streaming, MBR encoding, and rate adaptation. Our measurement and analysis results show that with these techniques, current streaming systems tend to over-utilize CPU and bandwidth resources to provide better services to end users, which is not necessarily the best way to improve the quality of streaming media delivery. Motivated by these results, we propose and evaluate a coordination mechanism that effectively takes advantage of both Fast Streaming and rate adaptation to better utilize the server and Internet resources for streaming quality improvement.
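One way such a coordination policy could be sketched: use rate adaptation to pick the highest sustainable encoding rate, then burst only long enough to build a small playout buffer rather than bursting for the whole session. The policy, rates, and buffer target below are assumptions for illustration, not the paper's mechanism.

```python
def choose_delivery(bandwidth, encoding_rates, buffer_target=10.0):
    """Sketch of coordinating MBR rate adaptation with bounded
    Fast-Streaming-style bursting (all thresholds illustrative):
    pick the highest encoding rate the link sustains, then compute how
    long to burst to accumulate `buffer_target` seconds of content."""
    sustainable = [r for r in encoding_rates if r <= bandwidth]
    rate = max(sustainable) if sustainable else min(encoding_rates)
    if bandwidth > rate:
        # Surplus bandwidth (bandwidth - rate) fills the playout buffer;
        # burst until buffer_target seconds of content are queued.
        burst_seconds = buffer_target * rate / (bandwidth - rate)
    else:
        burst_seconds = 0.0
    return rate, burst_seconds

# Example: a 1000 kb/s link with 300/700/1500 kb/s encodings streams at
# 700 kb/s and bursts briefly instead of saturating the link throughout.
rate, burst = choose_delivery(1000.0, [300.0, 700.0, 1500.0])
```

Bounding the burst is what keeps CPU and bandwidth from being over-utilized while still protecting playback against short-term rate dips.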


IEEE Transactions on Parallel and Distributed Systems | 2002

Dynamic cluster resource allocations for jobs with known and unknown memory demands

Li Xiao; Songqing Chen; Xiaodong Zhang

The cluster system we consider for load sharing is a compute farm: a pool of networked server nodes providing high-performance computing for CPU-intensive, memory-intensive, and I/O-active jobs in a batch mode. Existing resource management systems mainly target balancing the CPU load among server nodes. With the rapid advancement of CPU chips, improvements in memory and disk access speeds significantly lag behind advances in CPU speed, increasing the penalty for data movement, such as page faults and I/O operations, relative to normal CPU operations. Aiming at reducing the memory resource contention caused by page faults and I/O activities, we have developed and examined load sharing policies that consider effective usage of global memory in addition to CPU load balancing in clusters. We study two types of application workloads: 1) memory demands are known in advance or are predictable, and 2) memory demands are unknown and change dynamically during execution. Besides using workload traces with known memory demands, we have also instrumented the kernel to collect different types of workload execution traces to capture dynamic memory access patterns. Conducting different groups of trace-driven simulations, we show that our proposed policies can effectively improve overall job execution performance by utilizing both CPU and memory resources well, with known and unknown memory demands.
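A memory-aware placement rule of the kind discussed above can be sketched as a two-tier decision: prefer CPU balance among nodes that can hold the job in memory, and minimize paging otherwise. The exact policy and the node fields below are illustrative assumptions, not the paper's algorithms.

```python
def place_job(nodes, job_mem):
    """Memory-aware load sharing sketch (policy is illustrative): among
    nodes whose free memory covers the job's demand, pick the least
    CPU-loaded one, avoiding page faults; if no node fits the job,
    fall back to the node with the most free memory to minimize paging."""
    fits = [n for n in nodes if n["free_mem"] >= job_mem]
    if fits:
        return min(fits, key=lambda n: n["cpu_load"])["name"]
    return max(nodes, key=lambda n: n["free_mem"])["name"]

nodes = [
    {"name": "n1", "cpu_load": 0.9, "free_mem": 4096},
    {"name": "n2", "cpu_load": 0.2, "free_mem": 1024},
    {"name": "n3", "cpu_load": 0.5, "free_mem": 2048},
]
```

For a 2000 MB job this rule skips the least-loaded node n2 (too little memory) in favor of n3, trading a little CPU balance for the much larger penalty of page faults.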


International World Wide Web Conference | 2005

Analysis of multimedia workloads with implications for internet streaming

Lei Guo; Songqing Chen; Zhen Xiao; Xiaodong Zhang

In this paper, we study the media workload collected from a large number of commercial Web sites hosted by a major ISP and that collected from a large group of home users connected to the Internet via a well-known cable company. Some of our key findings are: (1) Surprisingly, the majority of media contents are still delivered via downloading from Web servers. (2) A substantial percentage of media downloading connections are aborted before completion due to the long waiting time. (3) A hybrid approach, pseudo streaming, is used by clients to imitate real streaming. (4) The mismatch between the downloading rate and the client playback speed in pseudo streaming is common, which either causes frequent playback delays to the clients, or unnecessary traffic to the Internet. (5) Compared with streaming, downloading and pseudo streaming are neither bandwidth efficient nor performance effective. To address this problem, we propose the design of AutoStream, an innovative system that can provide additional previewing and streaming services automatically for media objects hosted on standard Web sites in server farms at the clients' will.
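The two costs of the rate mismatch in finding (4), playback delay when downloading is too slow and wasted traffic when a fast download is aborted, reduce to simple arithmetic. The formulas below are a minimal sketch assuming constant rates (in bytes/s), not the paper's measurement methodology.

```python
def pseudo_streaming_delay(download_rate, playback_rate, duration):
    """Total stall time in pseudo streaming when the download rate lags
    the playback rate: fetching the file takes longer than playing it,
    and the difference shows up as playback delays for the client."""
    fetch_time = duration * playback_rate / download_rate
    return max(0.0, fetch_time - duration)

def wasted_bytes(download_rate, playback_rate, abort_after, duration):
    """When a downloading/pseudo-streaming session is aborted after
    `abort_after` seconds of viewing, bytes fetched ahead of the
    playback point are pure wasted Internet traffic."""
    downloaded = download_rate * abort_after
    consumed = playback_rate * min(abort_after, duration)
    return max(0.0, downloaded - consumed)
```

A true streaming server paces delivery near the playback rate, which is why it avoids both costs at once; downloading and pseudo streaming always pay one or the other.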


International Conference on Information Security | 2008

BotTracer: Execution-Based Bot-Like Malware Detection

Lei Liu; Songqing Chen; Guanhua Yan; Zhao Zhang

Bot-like malware has posed an immense threat to computer security. Bot detection is still a challenging task since bot developers are continuously adopting advanced techniques to make bots more stealthy. A typical bot exhibits three invariant features during its onset: (1) the startup of a bot is automatic without requiring any user actions; (2) a bot must establish a command and control channel with its botmaster; and (3) a bot will perform local or remote attacks sooner or later. These invariants indicate three indispensable phases (startup, preparation, and attack) for a bot attack. In this paper, we propose BotTracer to detect these three phases with the assistance of virtual machine techniques. To validate BotTracer, we implement a prototype of BotTracer based on VMware and Windows XP Professional. The results show that BotTracer has successfully detected all the bots in the experiments without any false negatives.
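The three-phase invariant lends itself to a per-process state machine: flag a process only after all three phases are observed. The event names below are illustrative placeholders, not the BotTracer prototype's API.

```python
# Sketch of the three-phase bot invariant as a per-process state machine.
PHASES = ("startup", "preparation", "attack")

class ProcessMonitor:
    def __init__(self):
        self.observed = set()

    def observe(self, event):
        if event == "auto_start_without_user":        # phase 1: automatic startup
            self.observed.add("startup")
        elif event == "persistent_outbound_channel":  # phase 2: C&C channel
            self.observed.add("preparation")
        elif event in ("scan", "spam", "ddos"):       # phase 3: local/remote attack
            self.observed.add("attack")

    def is_bot_like(self):
        # Flag only when all three indispensable phases are seen, which
        # keeps benign auto-start programs from being reported.
        return all(p in self.observed for p in PHASES)

monitor = ProcessMonitor()
for event in ("auto_start_without_user", "persistent_outbound_channel", "scan"):
    monitor.observe(event)
```

Requiring the conjunction of all three phases, rather than any single suspicious behavior, is what keeps the false-positive rate low for legitimate auto-starting network services.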

Collaboration


Dive into Songqing Chen's collaborations.

Top Co-Authors


Lei Guo

Ohio State University


Yao Liu

Binghamton University


Fei Li

George Mason University


An Wang

George Mason University


Enhua Tan

Ohio State University


Mengbai Xiao

George Mason University


Lei Liu

George Mason University


Wentao Chang

George Mason University
