Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lakshmish Ramaswamy is active.

Publication


Featured research published by Lakshmish Ramaswamy.


IEEE Transactions on Parallel and Distributed Systems | 2005

A distributed approach to node clustering in decentralized peer-to-peer networks

Lakshmish Ramaswamy; Bugra Gedik; Ling Liu

Connectivity-based node clustering has wide-ranging applications in decentralized peer-to-peer (P2P) networks such as P2P file sharing systems, mobile ad-hoc networks, and P2P sensor networks. This paper describes a connectivity-based distributed node clustering scheme (CDC), which presents a scalable and efficient solution for discovering connectivity-based clusters in peer networks. In contrast to centralized graph clustering algorithms, the CDC scheme is completely decentralized and assumes only knowledge of neighboring nodes rather than requiring global knowledge of the network (graph). An important feature of the CDC scheme is its ability to cluster the entire network automatically or to discover clusters around a given set of nodes. To cope with the typical dynamics of P2P networks, we provide mechanisms that allow new nodes to be incorporated into appropriate existing clusters and that gracefully handle the departure of nodes from clusters. These mechanisms make the CDC scheme extensible and adaptable in the sense that the clustering structure of the network adjusts automatically as nodes join or leave the system. We provide detailed experimental evaluations of the CDC scheme, addressing its effectiveness in discovering good-quality clusters and handling node dynamics. We further study the types of topologies that benefit most from connectivity-based distributed clustering algorithms like CDC. Our experiments show that utilizing a message-based connectivity structure can considerably reduce messaging cost and provide better utilization of resources, which in turn improves the quality of service of applications executing over decentralized peer-to-peer networks.
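To illustrate the flavor of connectivity-based clustering using only neighbor knowledge, here is a minimal, single-machine sketch (not the paper's CDC algorithm): hypothetical originator nodes flood weight messages that are split among neighbors and expire after a TTL, and each node joins the originator from which it accumulated the most weight. All names and parameters are illustrative.

```python
from collections import defaultdict

def cluster_by_connectivity(adj, originators, ttl=4, initial_weight=1.0):
    """Simplified, illustrative connectivity-based clustering.

    adj: dict mapping node -> list of neighbor nodes (the only knowledge
         each node is assumed to have, mirroring the decentralized setting).
    originators: nodes that seed clusters.
    Each originator floods weight messages; a message's weight is split
    among a node's neighbors and forwarding stops when the TTL runs out.
    Every node joins the originator from which it received the most weight.
    """
    received = defaultdict(lambda: defaultdict(float))  # node -> originator -> weight

    for origin in originators:
        # each frontier entry: (node, remaining ttl, carried weight)
        frontier = [(node, ttl, initial_weight / max(len(adj[origin]), 1))
                    for node in adj[origin]]
        while frontier:
            node, t, w = frontier.pop()
            received[node][origin] += w
            if t > 1 and adj[node]:
                share = w / len(adj[node])
                frontier.extend((nbr, t - 1, share) for nbr in adj[node])

    # Each node independently picks the originator with maximum accumulated weight.
    return {node: max(weights, key=weights.get) for node, weights in received.items()}


if __name__ == "__main__":
    # Two loosely connected triangles; nodes 0 and 3 seed the clusters.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    print(cluster_by_connectivity(adj, originators=[0, 3]))
```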


International World Wide Web Conference | 2004

Automatic detection of fragments in dynamically generated web pages

Lakshmish Ramaswamy; Arun Iyengar; Ling Liu; Fred Douglis

Dividing web pages into fragments has been shown to provide significant benefits for both content generation and caching. For a web site to use fragment-based content generation, however, good methods are needed for dividing web pages into fragments. Manual fragmentation of web pages is expensive, error prone, and unscalable. This paper proposes a novel scheme to automatically detect and flag fragments that are cost-effective cache units in web sites serving dynamic content. We consider fragments to be interesting if they are shared among multiple documents or if they have distinct lifetime or personalization characteristics. Our approach has three unique features. First, we propose a hierarchical, fragment-aware model of dynamic web pages and a compact and effective data structure for fragment detection. Second, we present an efficient algorithm to detect maximal fragments that are shared among multiple documents. Third, we develop a practical algorithm that effectively detects fragments based on their lifetime and personalization characteristics. We evaluate the proposed scheme through a series of experiments, showing the benefits and costs of the algorithms. We also study the impact of adopting the fragments detected by our system on disk space utilization and network bandwidth consumption.
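As a rough illustration of detecting fragments shared across documents, the following sketch treats every element subtree of a page as a candidate fragment, keys it by its serialized markup, and reports subtrees that appear in multiple documents and are not enclosed in a larger shared subtree. It is a toy stand-in, not the paper's detection algorithm or data structure, and it assumes well-formed XHTML parsable by Python's xml.etree.ElementTree.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def subtree_key(elem):
    # Identity of a fragment = its serialized markup.
    return ET.tostring(elem, encoding="unicode")

def shared_fragments(documents, min_share=2):
    """Toy detector for maximal fragments shared across documents.

    documents: dict doc_id -> well-formed XHTML/XML string.
    A fragment is an element subtree; fragments are equal if their
    serialized markup matches. A shared fragment is kept only if none of
    its enclosing subtrees is itself shared.
    """
    owners = defaultdict(set)      # fragment key -> doc ids containing it
    parents = defaultdict(set)     # fragment key -> keys of its enclosing subtrees

    for doc_id, markup in documents.items():
        root = ET.fromstring(markup)
        stack = [(root, None)]
        while stack:
            elem, parent_key = stack.pop()
            key = subtree_key(elem)
            owners[key].add(doc_id)
            if parent_key is not None:
                parents[key].add(parent_key)
            stack.extend((child, key) for child in elem)

    shared = {k for k, docs in owners.items() if len(docs) >= min_share}
    # Keep only fragments none of whose enclosing subtrees is itself shared.
    return [k for k in shared if not parents[k] & shared]


if __name__ == "__main__":
    docs = {
        "page1": "<html><body><div id='nav'><a>home</a></div><p>one</p></body></html>",
        "page2": "<html><body><div id='nav'><a>home</a></div><p>two</p></body></html>",
    }
    for frag in shared_fragments(docs):
        print(frag)   # the shared navigation div
```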


IEEE Transactions on Knowledge and Data Engineering | 2005

Automatic fragment detection in dynamic Web pages and its impact on caching

Lakshmish Ramaswamy; Arun Iyengar; Ling Liu; Fred Douglis

Constructing Web pages from fragments has been shown to provide significant benefits for both content generation and caching. For a Web site to use fragment-based content generation, however, good methods are needed for fragmenting the Web pages. Manual fragmentation of Web pages is expensive, error prone, and unscalable. This paper proposes a novel scheme to automatically detect and flag fragments that are cost-effective cache units in Web sites serving dynamic content. Our approach analyzes Web pages with respect to their information-sharing behavior, personalization characteristics, and change patterns. We identify fragments that are shared among multiple documents or that have distinct lifetime or personalization characteristics. Our approach has three unique features. First, we propose a framework for fragment detection, which includes a hierarchical, fragment-aware model for dynamic Web pages and a compact and effective data structure for fragment detection. Second, we present an efficient algorithm to detect maximal fragments that are shared among multiple documents. Third, we develop a practical algorithm that effectively detects fragments based on their lifetime and personalization characteristics. The paper presents results from applying the algorithms to real Web sites. We evaluate the proposed scheme through a series of experiments, showing the benefits and costs of the algorithms. We also study the impact of using the fragments detected by our system on key parameters such as disk space utilization, network bandwidth consumption, and load on the origin servers.
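The lifetime side of the detection can be pictured with an equally rough sketch: compare two snapshots of the same page and surface the outermost subtrees whose markup changed, which would be candidates for caching separately from the stable page shell. This is illustrative only; the paper's algorithm works over many versions and a fragment-aware page model, not a two-snapshot diff.

```python
import xml.etree.ElementTree as ET

def changed_fragments(old_markup, new_markup):
    """Toy lifetime-based detector: report the outermost element subtrees
    whose serialized markup differs between two snapshots of a page.
    Subtrees that change between versions have a shorter effective lifetime
    and are candidate fragments to cache separately from the stable shell."""
    old_root, new_root = ET.fromstring(old_markup), ET.fromstring(new_markup)

    def diff(old_elem, new_elem):
        if ET.tostring(old_elem) == ET.tostring(new_elem):
            return []                      # identical subtree: stable fragment
        old_kids, new_kids = list(old_elem), list(new_elem)
        if len(old_kids) != len(new_kids) or not old_kids:
            return [new_elem]              # structure changed: report whole subtree
        changed = []
        for o, n in zip(old_kids, new_kids):
            changed.extend(diff(o, n))
        return changed or [new_elem]       # change was directly on this element

    return [ET.tostring(e, encoding="unicode") for e in diff(old_root, new_root)]


if __name__ == "__main__":
    v1 = "<html><body><div id='hdr'>Site</div><div id='quote'>IBM 95.1</div></body></html>"
    v2 = "<html><body><div id='hdr'>Site</div><div id='quote'>IBM 96.4</div></body></html>"
    print(changed_fragments(v1, v2))   # only the stock-quote div is reported
```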


International Conference on Mobile Data Management | 2009

CAESAR: A Context-Aware, Social Recommender System for Low-End Mobile Devices

Lakshmish Ramaswamy; Deepak P; Ramana V. Polavarapu; Kutila Gunasekera; Dinesh Garg; Karthik Visweswariah; Shivkumar Kalyanaraman

Mobile-enabled social network applications are becoming increasingly popular. Most current social network applications have been designed for high-end mobile devices, and they rely upon features such as GPS, web capabilities, and rich media support. However, a significant fraction of the mobile user base, especially in the developing world, owns low-end devices that are only capable of voice calls and short text messages (SMS). In this context, a natural question is whether one can design meaningful social network-based applications that work well with these simple devices, and if so, what the real challenges are. Toward answering these questions, this paper presents a social network-based recommender system that has been explicitly designed to work even with devices that support only phone calls and SMS. Our design incorporates three features that complement each other to derive highly targeted ads. First, we analyze information such as customers' address books to estimate the level of social affinity among users; this social affinity information is used to identify the recommendations to be sent to an individual user. Second, we combine the social affinity information with the spatio-temporal context of users and their historical responses to further refine the set of recommendations and to decide when a recommendation should be sent. Third, the social affinity computation and the spatio-temporal contextual association are continuously tuned through user feedback. We outline the challenges in building such a system and approaches to deal with them.
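A toy sketch of how the pieces might combine is shown below: address-book overlap gives a social affinity score, which is blended with simple location and time-of-day matches to rank a recommendation. Every field name, weight, and formula here is hypothetical and is only meant to convey the idea of fusing social affinity with spatio-temporal context, not the system's actual scoring.

```python
from datetime import datetime

def social_affinity(address_books, u, v):
    """Toy affinity score: Jaccard overlap of two users' address books,
    boosted if they appear in each other's books. (Illustrative only.)"""
    a, b = address_books.get(u, set()), address_books.get(v, set())
    if not a or not b:
        return 0.0
    overlap = len(a & b) / len(a | b)
    mutual = 0.5 if (v in a and u in b) else 0.0
    return min(1.0, overlap + mutual)

def score_recommendation(rec, user_ctx, address_books, user):
    """Blend affinity to the recommending friend with simple spatial and
    temporal matches; the weights are hypothetical."""
    affinity = social_affinity(address_books, user, rec["recommended_by"])
    location_match = 1.0 if rec["area"] == user_ctx["area"] else 0.0
    hour = user_ctx["time"].hour
    time_match = 1.0 if rec["active_hours"][0] <= hour < rec["active_hours"][1] else 0.0
    return 0.5 * affinity + 0.3 * location_match + 0.2 * time_match

if __name__ == "__main__":
    books = {"alice": {"bob", "carol"}, "bob": {"alice", "dave"}}
    rec = {"recommended_by": "bob", "area": "koramangala", "active_hours": (18, 22)}
    ctx = {"area": "koramangala", "time": datetime(2009, 5, 1, 19, 30)}
    print(round(score_recommendation(rec, ctx, books, "alice"), 3))
```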


International Conference on Distributed Computing Systems | 2005

Cache Clouds: Cooperative Caching of Dynamic Documents in Edge Networks

Lakshmish Ramaswamy; Ling Liu; Arun Iyengar

Caching at the edge of the Internet is becoming a popular technique for improving the scalability and efficiency of delivering dynamic Web content. In this paper, we study the challenges in designing a large-scale cooperative edge cache network, focusing on mechanisms and methodologies for efficient cooperation among caches to improve the overall performance of the edge cache network. This paper makes three original contributions. First, we introduce the concept of cache clouds, which forms the fundamental framework for cooperation among caches in the edge network. Second, we present dynamic hashing-based protocols for document lookups and updates within each cache cloud, which are not only efficient but also effective in dynamically balancing lookup and update loads among the caches in the cloud. Third, we outline a utility-based mechanism for placing dynamic documents within a cache cloud. Our experiments indicate that these techniques can significantly improve the performance of edge cache networks.
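A hashing-based lookup within a cache cloud can be pictured with the consistent-hashing sketch below: document URLs map onto a ring of caches, so every document has one responsible cache and adding or removing a cache remaps only a small slice of the URL space. This is an illustrative simplification, not the paper's dynamic hashing protocol, which additionally balances lookup and update load among the caches.

```python
import bisect
import hashlib

class CacheCloudDirectory:
    """Toy hashing-based lookup directory for a cache cloud (consistent
    hashing with virtual nodes). Illustrative only."""

    def __init__(self, caches, replicas=64):
        self.replicas = replicas
        self.ring = []                     # sorted list of (hash, cache id)
        for cache in caches:
            self.add_cache(cache)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_cache(self, cache):
        # Place several virtual points per cache so load spreads evenly.
        for i in range(self.replicas):
            self.ring.append((self._hash(f"{cache}#{i}"), cache))
        self.ring.sort()

    def remove_cache(self, cache):
        self.ring = [(h, c) for h, c in self.ring if c != cache]

    def responsible_cache(self, url):
        # The first virtual point clockwise from the URL's hash owns it.
        h = self._hash(url)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]


if __name__ == "__main__":
    cloud = CacheCloudDirectory(["edge-cache-1", "edge-cache-2", "edge-cache-3"])
    print(cloud.responsible_cache("http://example.com/stock?sym=IBM"))
```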


International Conference on Peer-to-Peer Computing | 2003

Connectivity based node clustering in decentralized peer-to-peer networks

Lakshmish Ramaswamy; Buğra Gedik; Ling Liu

Connectivity-based node clustering has wide-ranging applications in decentralized peer-to-peer (P2P) networks such as P2P file sharing systems, mobile ad-hoc networks, and P2P sensor networks. We describe a connectivity-based distributed node clustering scheme (CDC), which presents a scalable and efficient solution for discovering connectivity-based clusters in peer networks. In contrast to centralized graph clustering algorithms, the CDC scheme is completely decentralized and assumes only knowledge of neighboring nodes rather than requiring global knowledge of the network (graph). An important feature of the CDC scheme is its ability to cluster the entire network automatically or to discover clusters around a given set of nodes. We provide experimental evaluations of the CDC scheme, addressing its effectiveness in discovering good-quality clusters. Our experiments show that utilizing a message-based connectivity structure can considerably reduce messaging cost and provide better utilization of resources, which in turn improves the quality of service of applications executing over decentralized peer-to-peer networks.


IEEE Transactions on Parallel and Distributed Systems | 2009

Privacy-Aware Collaborative Spam Filtering

Zhenyu Zhong; Lakshmish Ramaswamy

While the concept of collaboration provides a natural defense against massive spam e-mails directed at large numbers of recipients, designing effective collaborative anti-spam systems raises several important research challenges. First and foremost, since e-mails may contain confidential information, any collaborative anti-spam approach has to guarantee strong privacy protection to the participating entities. Second, the continuously evolving nature of spam demands that collaborative techniques be resilient to various kinds of camouflage attacks. Third, the collaboration has to be lightweight, efficient, and scalable. Toward addressing these challenges, this paper presents ALPACAS, a privacy-aware framework for collaborative spam filtering. In designing the ALPACAS framework, we make two unique contributions. The first is a feature-preserving message transformation technique that is highly resilient against the latest kinds of spam attacks. The second is a privacy-preserving protocol that provides enhanced privacy guarantees to the participating entities. Our experimental results, conducted on a real e-mail data set, show that the proposed framework provides a 10-fold improvement in the false negative rate over the Bayesian-based Bogofilter when faced with one of the recent kinds of spam attacks. Further, privacy breaches are extremely rare, demonstrating the strong privacy protection provided by the ALPACAS system.
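One way to picture a feature-preserving transformation is the min-hash style sketch below: messages are reduced to signatures over word shingles, so participants can estimate similarity to known spam without exchanging message text. The shingle length, hash count, and similarity measure are all illustrative assumptions, not the actual ALPACAS transformation or collaboration protocol.

```python
import hashlib

def feature_digest(message, shingle_len=4, num_hashes=20):
    """Toy feature-preserving transform: break a message into word
    shingles, hash each shingle, and keep the smallest hash under each
    seeded hash function (a min-hash style signature). Illustrative only."""
    words = message.lower().split()
    shingles = {" ".join(words[i:i + shingle_len])
                for i in range(max(len(words) - shingle_len + 1, 1))}
    signature = []
    for seed in range(num_hashes):
        signature.append(min(
            int(hashlib.sha1(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles))
    return signature

def similarity(sig_a, sig_b):
    """Fraction of matching signature positions, an estimate of shingle overlap."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

if __name__ == "__main__":
    spam = feature_digest("cheap meds click here for amazing cheap meds today")
    variant = feature_digest("CHEAP meds click here now for amazing cheap meds today")
    legit = feature_digest("the quarterly report is attached please review by friday")
    print(similarity(spam, variant), similarity(spam, legit))
```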


IEEE International Conference on Services Computing | 2008

A Formal Model of Service Delivery

Lakshmish Ramaswamy; Guruduth Banavar

We define a service delivery system as a set of interacting entities that are involved in the delivery of one or more business services. A service operating system manages the processes and resources within a service delivery system. This paper presents our ongoing work on developing a formal model for these concepts, with the goal of clearly and precisely describing the delivery behavior of service systems. The model lays the groundwork for reasoning about the scenarios that occur in service delivery.


International Conference on Big Data | 2013

A distributed vertex-centric approach for pattern matching in massive graphs

Arash Fard; M. Usman Nisar; Lakshmish Ramaswamy; John A. Miller; Matthew Saltz

Graph pattern matching is fundamentally important to many applications, such as analyzing hyperlinks in the World Wide Web, mining associations in online social networks, and substructure search in biochemistry. Most existing graph pattern matching algorithms are highly computation-intensive and do not scale to the extremely large graphs that characterize many emerging applications. In recent years, graph processing frameworks such as Pregel have sought to harness shared-nothing clusters for processing massive graphs through a vertex-centric, Bulk Synchronous Parallel (BSP) programming model. However, developing scalable and efficient BSP-based algorithms for pattern matching is very challenging because this problem does not naturally align with a vertex-centric programming paradigm. This paper presents novel distributed algorithms based on the vertex-centric programming paradigm for a set of pattern matching models, namely graph simulation, dual simulation, and strong simulation. Our algorithms are fine-tuned to address the challenges of pattern matching on massive data graphs. Furthermore, we introduce a new pattern matching model, called strict simulation, which outperforms strong simulation in terms of scalability while preserving its important properties. We investigate potential performance bottlenecks and propose several techniques to mitigate them. This paper also presents an extensive set of experiments involving massive graphs (millions of vertices and billions of edges) to study the effects of various parameters on the scalability and performance of the proposed algorithms. The results demonstrate that our techniques are highly effective in alleviating performance bottlenecks and yield significant scalability benefits.
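The matching condition behind graph simulation can be shown with a small sequential sketch: start from label-equality candidate sets and repeatedly prune data vertices that lack a successor matching some required query edge, until a fixpoint is reached. The paper's contribution is expressing this kind of refinement in a distributed, vertex-centric BSP setting and extending it to dual, strong, and strict simulation; the code below is only a single-machine illustration of plain graph simulation.

```python
def graph_simulation(query_adj, query_labels, data_adj, data_labels):
    """Sequential sketch of graph simulation matching (fixpoint refinement).
    query_adj / data_adj: dict vertex -> list of successor vertices.
    Returns, for every query vertex, the set of data vertices that simulate it.
    """
    # Start from label equality, then iteratively prune.
    sim = {u: {v for v in data_adj if data_labels[v] == query_labels[u]}
           for u in query_adj}

    changed = True
    while changed:
        changed = False
        for u, u_children in query_adj.items():
            for v in list(sim[u]):
                # v must have, for every query edge u -> u', some successor
                # that currently simulates u'.
                if any(not (set(data_adj[v]) & sim[u2]) for u2 in u_children):
                    sim[u].discard(v)
                    changed = True
    return sim


if __name__ == "__main__":
    # Query: a 'paper' vertex that points to a 'dataset' vertex.
    q_adj = {"p": ["d"], "d": []}
    q_lab = {"p": "paper", "d": "dataset"}
    g_adj = {1: [2], 2: [], 3: []}
    g_lab = {1: "paper", 2: "dataset", 3: "paper"}
    print(graph_simulation(q_adj, q_lab, g_adj, g_lab))
    # {'p': {1}, 'd': {2}} -- vertex 3 is pruned: it points to no dataset.
```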


Journal of Network and Computer Applications | 2008

PeerCast: Churn-resilient end system multicast on heterogeneous overlay networks

Jianjun Zhang; Ling Liu; Lakshmish Ramaswamy; Calton Pu

The lack of wide deployment of IP multicast in the Internet has prompted researchers to propose end system multicast, or application-level multicast, as an alternative approach. However, end system multicast, by its very nature, suffers from several performance limitations, including high communication overhead due to duplicate data transfers over the same physical links, uneven load distribution caused by widely varying resource availability at nodes, and the highly failure-prone nature of end hosts. This paper presents a self-configuring, efficient, and churn-resilient end system multicast system called PeerCast. Three unique features distinguish PeerCast from existing approaches to application-level multicasting. First, with the aim of exploiting the network proximity of end-system nodes for efficient multicast subscription management and fast information dissemination, we propose a novel Internet-landmark signature technique to cluster the end hosts of the overlay network. Second, we propose a capacity-aware overlay construction technique to balance the multicast workload among heterogeneous end-system nodes. Third, we develop a dynamic passive replication scheme to provide reliable end system multicast services in an inherently dynamic environment of unreliable peers. We also present a set of experiments showing the feasibility and effectiveness of the proposed mechanisms and techniques.
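A landmark-based clustering of end hosts can be sketched as follows: each host measures round-trip times to a small fixed set of landmark servers and uses the identity of its closest landmarks as a signature, and hosts with the same signature are grouped into the same overlay cluster. This is a simplified, hypothetical stand-in for PeerCast's Internet-landmark signature technique, not its actual construction.

```python
def landmark_signature(rtts, prefix_len=2):
    """Toy landmark signature: order the landmarks by measured RTT and use
    the closest few as this host's signature. Hosts with the same signature
    are assumed to be near each other in the underlying network.

    rtts: dict landmark name -> round-trip time in ms for one end host.
    """
    ordered = sorted(rtts, key=rtts.get)
    return tuple(ordered[:prefix_len])

def cluster_by_signature(hosts):
    """hosts: dict host -> {landmark: rtt}. Returns signature -> list of hosts."""
    clusters = {}
    for host, rtts in hosts.items():
        clusters.setdefault(landmark_signature(rtts), []).append(host)
    return clusters


if __name__ == "__main__":
    hosts = {
        "peer-a": {"L1": 12, "L2": 80, "L3": 150},
        "peer-b": {"L1": 15, "L2": 70, "L3": 160},
        "peer-c": {"L1": 140, "L2": 30, "L3": 20},
    }
    print(cluster_by_signature(hosts))
    # peer-a and peer-b share signature ('L1', 'L2'); peer-c gets ('L3', 'L2').
```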

Collaboration


Dive into Lakshmish Ramaswamy's collaborations.

Top Co-Authors

Ling Liu, Georgia Institute of Technology

Calton Pu, Georgia Institute of Technology

Victor Lawson, Georgia Gwinnett College