Publication


Featured research published by Chad R. Meiners.


IEEE/ACM Transactions on Networking | 2010

TCAM Razor: a systematic approach towards minimizing packet classifiers in TCAMs

Alex X. Liu; Chad R. Meiners; Eric Torng

Packet classification is the core mechanism that enables many networking services on the Internet such as firewall packet filtering and traffic accounting. Using ternary content addressable memories (TCAMs) to perform high-speed packet classification has become the de facto standard in industry. TCAMs classify packets in constant time by comparing a packet with all classification rules of ternary encoding in parallel. Despite their high speed, TCAMs suffer from the well-known range expansion problem. As packet classification rules usually have fields specified as ranges, converting such rules to TCAM-compatible rules may result in an explosive increase in the number of rules. This is not a problem if TCAMs have large capacities. Unfortunately, TCAMs have very limited capacity, and more rules mean more power consumption and more heat generation for TCAMs. Even worse, the number of rules in packet classifiers has been increasing rapidly with the growing number of services deployed on the Internet. In this paper, we consider the following problem: given a packet classifier, how can we generate another semantically equivalent packet classifier that requires the least number of TCAM entries? We propose a systematic approach, TCAM Razor, which is effective, efficient, and practical. In terms of effectiveness, TCAM Razor achieves a total compression ratio of 29.0%, which is significantly better than the previously published best result of 54%. In terms of efficiency, our TCAM Razor prototype runs in seconds, even for large packet classifiers. Finally, in terms of practicality, our TCAM Razor approach can be easily deployed as it does not require any modification to existing packet classification systems, unlike many previous range encoding schemes.
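
The range expansion problem described above is easy to see with a small sketch. The snippet below (an illustration, not code from the paper) performs the standard direct expansion of an integer range on a width-bit field into the minimal set of ternary prefixes a TCAM can store; the classic example is the range [1, 14] on a 4-bit field, which expands into six prefixes.

    # Sketch of direct range expansion: convert an integer range on a
    # width-bit field into the minimal set of covering TCAM prefixes.
    def range_to_prefixes(lo: int, hi: int, width: int):
        prefixes = []
        while lo <= hi:
            # Largest power-of-two block that starts at lo and stays within [lo, hi].
            size = lo & -lo if lo > 0 else 1 << width
            while size > hi - lo + 1:
                size //= 2
            plen = width - (size.bit_length() - 1)      # prefix length in bits
            bits = format(lo, f"0{width}b")
            prefixes.append(bits[:plen] + "*" * (width - plen))
            lo += size
        return prefixes

    print(range_to_prefixes(1, 14, 4))
    # ['0001', '001*', '01**', '10**', '110*', '1110']

A rule with ranges on several fields expands multiplicatively, one TCAM entry per combination of prefixes, which is why the expansion can be explosive.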


IEEE/ACM Transactions on Networking | 2012

Bit weaving: a non-prefix approach to compressing packet classifiers in TCAMs

Chad R. Meiners; Alex X. Liu; Eric Torng

Ternary content addressable memories (TCAMs) have become the de facto standard in industry for fast packet classification. Unfortunately, TCAMs have limitations of small capacity, high power consumption, high heat generation, and high cost. The well-known range expansion problem exacerbates these limitations as each classifier rule typically has to be converted to multiple TCAM rules. One method for coping with these limitations is to use compression schemes to reduce the number of TCAM rules required to represent a classifier. Unfortunately, all existing compression schemes only produce prefix classifiers. Thus, they all miss the compression opportunities created by non-prefix ternary classifiers. In this paper, we propose bit weaving, the first non-prefix compression scheme. Bit weaving is based on the observation that TCAM entries that have the same decision and whose predicates differ by only one bit can be merged into one entry by replacing the bit in question with '*'. Bit weaving consists of two new techniques, bit swapping and bit merging, to first identify and then merge such rules together. The key advantages of bit weaving are that it runs fast, it is effective, and it is composable with other TCAM optimization methods as a pre/post-processing routine. We implemented bit weaving and conducted experiments on both real-world and synthetic packet classifiers. Our experimental results show the following: 1) bit weaving is an effective standalone compression technique (it achieves an average compression ratio of 23.6%); 2) bit weaving finds compression opportunities that other methods miss. Specifically, bit weaving improves the prior TCAM optimization techniques of TCAM Razor and Topological Transformation by an average of 12.8% and 36.5%, respectively.
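
The merging step the abstract describes can be shown in a few lines. The helper below is a minimal illustration (not the paper's implementation): two ternary strings with the same decision that differ in exactly one position merge into one entry with '*' at that position.

    # Minimal bit-merging step: merge two ternary predicates that differ in
    # exactly one position by replacing that position with the don't-care '*'.
    def merge_ternary(a: str, b: str):
        if len(a) != len(b):
            return None
        diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
        if len(diff) != 1:
            return None                      # not mergeable in a single step
        i = diff[0]
        return a[:i] + "*" + a[i + 1:]

    print(merge_ternary("0101*", "0111*"))   # '01*1*'
    print(merge_ternary("0101*", "1111*"))   # None (two positions differ)

Bit swapping, the other technique named in the abstract, reorders bit columns so that such mergeable pairs are exposed.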


International Conference on Computer Communications | 2008

Firewall Compressor: An Algorithm for Minimizing Firewall Policies

Alex X. Liu; Eric Torng; Chad R. Meiners

A firewall is a security guard placed between a private network and the outside Internet that monitors all incoming and outgoing packets. The function of a firewall is to examine every packet and decide whether to accept or discard it based upon the firewall's policy. This policy is specified as a sequence of (possibly conflicting) rules. When a packet comes to a firewall, the firewall searches for the first rule that the packet matches, and executes the decision of that rule. With the explosive growth of Internet-based applications and malicious attacks, the number of rules in firewalls has been increasing rapidly, which consequently degrades network performance and throughput. In this paper, we propose Firewall Compressor, a framework that can significantly reduce the number of rules in a firewall while keeping the semantics of the firewall unchanged. We make three major contributions in this paper. First, we propose an optimal solution using dynamic programming techniques for compressing one-dimensional firewalls. Second, we present a systematic approach to compressing multi-dimensional firewalls. Last, we conducted extensive experiments to evaluate Firewall Compressor. In terms of effectiveness, Firewall Compressor achieves an average compression ratio of 52.3% on real-life rule sets. In terms of efficiency, Firewall Compressor runs in seconds even for a large firewall with thousands of rules. Moreover, the algorithms and techniques proposed in this paper are not limited to firewalls. Rather, they can be applied to other rule-based systems such as packet filters on Internet routers.
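
The one-dimensional case admits a compact dynamic program. The sketch below is a simplified illustration, not necessarily the algorithm from the paper: it assumes a one-dimensional, range-based rule list with first-match semantics, observes that such a list behaves like intervals painted in reverse order (the last rule is the bottom layer), and computes the minimum number of rules needed to reproduce a given sequence of decisions.

    # Hedged sketch (not necessarily the paper's algorithm): minimum number of
    # one-dimensional range rules, under first-match semantics, that realize a
    # target decision sequence. First-match rules act like intervals painted in
    # reverse order, which gives the interval DP below.
    from functools import lru_cache

    def min_rules(decisions):
        # Consecutive equal decisions never need separate rules; collapse them.
        segs = [d for i, d in enumerate(decisions) if i == 0 or d != decisions[i - 1]]

        @lru_cache(maxsize=None)
        def dp(i, j):                        # fewest rules realizing segs[i..j]
            if i > j:
                return 0
            best = dp(i + 1, j) + 1          # give segs[i] its own rule
            for k in range(i + 1, j + 1):    # or reuse segs[i]'s rule for segs[k]
                if segs[k] == segs[i]:
                    best = min(best, dp(i + 1, k - 1) + dp(k, j))
            return best

        return dp(0, len(segs) - 1)

    # Decisions for a one-dimensional field with values 0..5:
    print(min_rules(["accept", "deny", "deny", "accept", "deny", "accept"]))
    # 3  (e.g. deny [1,2]; deny [4,4]; accept [0,5])

The multi-dimensional approach described in the paper is not captured by this sketch.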


International Conference on Computer Communications | 2008

All-Match Based Complete Redundancy Removal for Packet Classifiers in TCAMs

Alex X. Liu; Chad R. Meiners; Yun Zhou

Packet classification is the core mechanism that enables many networking services on the Internet such as firewall packet filtering and traffic accounting. Using Ternary Content Addressable Memories (TCAMs) to perform high-speed packet classification has become the de facto standard in industry. TCAMs classify packets in constant time by comparing a packet with all classification rules of ternary encoding in parallel. Despite their high speed, TCAMs suffer from the well-known interval expansion problem. As packet classification rules usually have fields specified as intervals, converting such rules to TCAM-compatible rules may result in an explosive increase in the number of rules. This is not a problem if TCAMs have large capacities. Unfortunately, TCAMs have very limited capacity, and more rules mean more power consumption and more heat generation for TCAMs. Even worse, the number of rules in packet classifiers has been increasing rapidly with the growing number of services deployed on the Internet. The interval expansion problem of TCAMs can be addressed by removing redundant rules in packet classifiers. This equivalent transformation can significantly reduce the number of TCAM entries needed by a packet classifier. Our experiments on real-life packet classifiers show an average reduction of 58.2% in the number of TCAM entries by removing redundant rules. In this paper, we propose an all-match based complete redundancy removal algorithm. This is the first algorithm that attempts to solve first-match problems from an all-match perspective. We formally prove that our redundancy removal algorithm guarantees no redundant rules in resulting packet classifiers. We conducted extensive experiments on both real-life and synthetic packet classifiers. These experimental results show that our redundancy removal algorithm is both effective in terms of reducing TCAM entries and efficient in terms of running time.
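
On a small example, redundancy removal can be checked by brute force (the all-match based algorithm in the paper reaches the same answer far more efficiently and with a correctness guarantee): a rule is redundant if deleting it changes no packet's first-match decision, and such rules can be removed repeatedly.

    # Brute-force illustration of redundancy removal on a tiny one-field
    # classifier; rules are (lo, hi, decision) and the first match wins.
    def classify(rules, pkt):
        for lo, hi, decision in rules:
            if lo <= pkt <= hi:
                return decision
        return None

    def remove_redundant(rules, domain):
        rules = list(rules)
        changed = True
        while changed:
            changed = False
            for i in range(len(rules)):
                trimmed = rules[:i] + rules[i + 1:]
                if all(classify(trimmed, p) == classify(rules, p) for p in domain):
                    rules = trimmed          # removing rule i changes no decision
                    changed = True
                    break
        return rules

    rules = [
        (0, 7,  "accept"),
        (4, 7,  "accept"),   # redundant: these packets already hit the first rule
        (0, 15, "deny"),
    ]
    print(remove_redundant(rules, range(16)))
    # [(0, 7, 'accept'), (0, 15, 'deny')]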


International Conference on Network Protocols | 2007

TCAM Razor: A Systematic Approach Towards Minimizing Packet Classifiers in TCAMs

Chad R. Meiners; Alex X. Liu; Eric Torng

Packet classification is the core mechanism that enables many networking services on the Internet such as firewall packet filtering and traffic accounting. Using ternary content addressable memories (TCAMs) to perform high-speed packet classification has become the de facto standard in industry. TCAMs classify packets in constant time by comparing a packet with all classification rules of ternary encoding in parallel. Despite their high speed, TCAMs suffer from the well-known range expansion problem. As packet classification rules usually have fields specified as ranges, converting such rules to TCAM-compatible rules may result in an explosive increase in the number of rules. This is not a problem if TCAMs have large capacities. Unfortunately, TCAMs have very limited capacity, and more rules mean more power consumption and more heat generation for TCAMs. Even worse, the number of rules in packet classifiers has been increasing rapidly with the growing number of services deployed on the Internet. To address the range expansion problem of TCAMs, we consider the following problem: given a packet classifier, how can we generate another semantically equivalent packet classifier that requires the least number of TCAM entries? In this paper, we propose a systematic approach, TCAM Razor, which is effective, efficient, and practical. In terms of effectiveness, our TCAM Razor prototype achieves a total compression ratio of 3.9%, which is significantly better than the previously published best result of 54%. In terms of efficiency, our TCAM Razor prototype runs in seconds, even for large packet classifiers. Finally, in terms of practicality, our TCAM Razor approach can be easily deployed as it does not require any modification to existing packet classification systems, unlike many previous range expansion solutions.


Architectures for Networking and Communications Systems | 2011

Split: Optimizing Space, Power, and Throughput for TCAM-Based Classification

Chad R. Meiners; Alex X. Liu; Eric Torng; Jignesh Patel

Using Ternary Content Addressable Memories (TCAMs) to perform high-speed packet classification has become the de facto standard in industry because TCAMs facilitate constant time classification by comparing packet fields against ternary encoded rules in parallel. Despite their high speed, TCAMs have limitations of small capacity, large power consumption, and relatively slow access times. One reason TCAM-based packet classifiers are so large is the multiplicative effect inherent in representing d-dimensional classifiers in TCAMs. To address the multiplicative effect, we propose the TCAM Split architecture, where a d-dimensional classifier is split into k ≥ 2 low-dimensional classifiers, each of which is stored on its own small TCAM. A d-dimensional lookup is split into k low-dimensional, pipelined lookups with one lookup on each chip. Our experimental results with real-life classifiers show that TCAM Split reduces classifier size by 84% using only two small TCAM chips; this increases to 93% if we use five small TCAM chips.
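
A rough feel for the split idea (a simplified sketch, not the construction from the paper): a two-field classification can be carried out as two pipelined one-field lookups, where the first table maps the source field to an intermediate label and the second table maps the (label, destination field) pair to a decision.

    # Simplified sketch of a split, pipelined lookup: a 2-field classification
    # done as two sequential 1-field lookups on two small tables.
    def first_match(table, key):
        for pred, result in table:
            if pred(key):
                return result
        return None

    # Stage 1 (chip 1): source address -> intermediate label.
    stage1 = [
        (lambda src: src.startswith("10.0."), "internal"),
        (lambda src: True,                    "external"),
    ]

    # Stage 2 (chip 2): (label, destination port) -> decision.
    stage2 = [
        (lambda k: k[0] == "internal",                 "accept"),
        (lambda k: k[0] == "external" and k[1] == 443, "accept"),
        (lambda k: True,                               "deny"),
    ]

    def classify(src, dport):
        label = first_match(stage1, src)             # lookup on the first chip
        return first_match(stage2, (label, dport))   # pipelined lookup on the second

    print(classify("10.0.1.5", 22))     # accept
    print(classify("192.0.2.1", 443))   # accept
    print(classify("192.0.2.1", 22))    # deny

Splitting avoids part of the multiplicative effect because each chip only stores a lower-dimensional table.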


Measurement and Modeling of Computer Systems | 2009

Topological transformation approaches to optimizing TCAM-based packet classification systems

Chad R. Meiners; Alex X. Liu; Eric Torng

Several range reencoding schemes have been proposed to mitigate the effect of range expansion and the limitations of small capacity, large power consumption, and high heat generation of TCAM-based packet classification systems. However, they all disregard the semantics of classifiers and therefore miss significant opportunities for space compression. In this paper, we propose new approaches to range reencoding by taking into account classifier semantics. Fundamentally different from prior work, we view reencoding as a topological transformation process from one colored hyperrectangle to another where the color is the decision associated with a given packet. We present two orthogonal, yet composable, reencoding approaches, domain compression and prefix alignment. Our techniques significantly outperform all previous reencoding techniques. In comparison with the state-of-the-art results, our experimental results show that our techniques achieve at least 7 times more space reduction in terms of TCAM space for an encoded classifier and at least 3 times more space reduction in terms of TCAM space for a reencoded classifier and its transformers.
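
Domain compression can be sketched on a toy two-field classifier (an illustration of the idea, not the algorithm from the paper): two values of a field are equivalent if the classifier treats them identically for every value of the other field, so a transformer can map each value to its equivalence class and the reencoded classifier only has to distinguish classes.

    # Toy sketch of domain compression: group values of one field into
    # equivalence classes the classifier never distinguishes, and build the
    # transformer that maps each value to its class.
    def classify(rules, pkt):
        for pred, decision in rules:
            if pred(pkt):
                return decision
        return None

    # A 2-field classifier over small domains: x in 0..7, y in 0..3.
    rules = [
        (lambda p: p[0] <= 3 and p[1] == 0, "accept"),
        (lambda p: p[0] >= 6,               "accept"),
        (lambda p: True,                    "deny"),
    ]
    X, Y = range(8), range(4)

    # Values of x with identical decision columns are equivalent.
    columns = {x: tuple(classify(rules, (x, y)) for y in Y) for x in X}
    classes, transformer = {}, {}
    for x in X:
        transformer[x] = classes.setdefault(columns[x], len(classes))

    print(transformer)
    # {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 2, 7: 2}
    # After reencoding, field x has only 3 distinct values instead of 8, so the
    # reencoded classifier's rules over x are much cheaper to store in a TCAM.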


IEEE/ACM Transactions on Networking | 2011

Topological transformation approaches to TCAM-based packet classification

Chad R. Meiners; Alex X. Liu; Eric Torng

Several range reencoding schemes have been proposed to mitigate the effect of range expansion and the limitations of small capacity, large power consumption, and high heat generation of ternary content addressable memory (TCAM)-based packet classification systems. However, they all disregard the semantics of classifiers and therefore miss significant opportunities for space compression. In this paper, we propose new approaches to range reencoding by taking into account classifier semantics. Fundamentally different from prior work, we view reencoding as a topological transformation process from one colored hyperrectangle to another, where the color is the decision associated with a given packet. Stated another way, we reencode the entire classifier by considering the classifier's decisions rather than reencoding only ranges in the classifier while ignoring the classifier's decisions, as prior work does. We present two orthogonal, yet composable, reencoding approaches: domain compression and prefix alignment. Our techniques significantly outperform all previous reencoding techniques. In comparison to prior art, our experimental results show that our techniques achieve at least five times more space reduction in terms of TCAM space for an encoded classifier and at least three times more space reduction in terms of TCAM space for a reencoded classifier and its transformers. This, in turn, leads to improved throughput and decreased power consumption.


International Conference on Network Protocols | 2009

Bit weaving: A non-prefix approach to compressing packet classifiers in TCAMs

Chad R. Meiners; Alex X. Liu; Eric Torng

Ternary Content Addressable Memories (TCAMs) have become the de facto standard in industry for fast packet classification. Unfortunately, TCAMs have limitations of small capacity, high power consumption, high heat generation, and high cost. The well-known range expansion problem exacerbates these limitations as each classifier rule typically has to be converted to multiple TCAM rules. One method for coping with these limitations is to use compression schemes to reduce the number of TCAM rules required to represent a classifier. Unfortunately, all existing compression schemes only produce prefix classifiers. Thus, they all miss the compression opportunities created by non-prefix ternary classifiers.


IEEE Transactions on Parallel and Distributed Systems | 2011

Compressing Network Access Control Lists

Alex X. Liu; Eric Torng; Chad R. Meiners

An access control list (ACL) provides security for a private network by controlling the flow of incoming and outgoing packets. Specifically, a network policy is created in the form of a sequence of (possibly conflicting) rules. Each packet is compared against this ACL, and the first rule that the packet matches defines the decision for that packet. The size of ACLs has been increasing rapidly due to the explosive growth of Internet-based applications and malicious attacks. This increase in size degrades network performance and increases management complexity. In this paper, we propose ACL Compressor, a framework that can significantly reduce the number of rules in an access control list while maintaining the same semantics. We make three major contributions. First, we propose an optimal solution using dynamic programming techniques for compressing one-dimensional range-based access control lists. Second, we present a systematic approach for compressing multidimensional access control lists. Last, we conducted extensive experiments to evaluate ACL Compressor. In terms of effectiveness, ACL Compressor achieves an average compression ratio of 50.22 percent on real-life rule sets. In terms of efficiency, ACL Compressor runs in seconds, even for large ACLs with thousands of rules.

Collaboration


Dive into Chad R. Meiners's collaborations.

Top Co-Authors

Eric Torng, Michigan State University
Alex X. Liu, Michigan State University
Eric Norige, Michigan State University
Jignesh Patel, Michigan State University
Alok Watve, Michigan State University
Ke Shen, Michigan State University
Sakti Pramanik, Michigan State University