Publication


Featured research published by Anat Bremler-Barr.


IEEE International Conference on Computer Communications (INFOCOM) | 2005

Spoofing prevention method

Anat Bremler-Barr; Hanoch Levy

A new approach for filtering spoofed IP packets, called the spoofing prevention method (SPM), is proposed. The method enables routers closer to the destination of a packet to verify the authenticity of the packet's source address. This stands in contrast to standard ingress filtering, which is effective mostly at routers next to the source and is ineffective otherwise. In the proposed method, a unique temporal key is associated with each ordered pair of source and destination networks (ASs, autonomous systems). Each packet leaving a source network S is tagged with the key K(S, D) associated with (S, D), where D is the destination network. Upon arrival at the destination network, the key is verified and removed. Thus the method verifies the authenticity of packets carrying the address s, which belongs to network S. An efficient implementation of the method, which avoids overloading the routers, is presented. The major benefits of the method are the strong incentive it provides to network operators to implement it and the fact that it lends itself to stepwise deployment, since it benefits networks that deploy it even if it is implemented on only part of the Internet. These two properties, not shared by alternative approaches, make SPM an attractive and viable solution to the packet spoofing problem.
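The tagging-and-verification flow can be sketched as follows. This is a minimal illustration with hypothetical AS numbers, key values, and function names, not the paper's key-distribution protocol:

```python
# Temporal key K(S, D) for each ordered (source AS, destination AS) pair;
# the AS numbers and key value below are hypothetical.
keys = {("AS100", "AS200"): 0xBEEF}

def tag(packet, src_as, dst_as):
    """Edge router of the source network S adds the key K(S, D)."""
    packet["spm_key"] = keys[(src_as, dst_as)]
    return packet

def verify_and_strip(packet, src_as, dst_as):
    """Edge router of the destination network D checks and removes the key."""
    ok = packet.pop("spm_key", None) == keys.get((src_as, dst_as))
    return ok, packet

pkt = tag({"src": "10.0.0.1", "dst": "10.1.0.1"}, "AS100", "AS200")
ok, pkt = verify_and_strip(pkt, "AS100", "AS200")
print(ok)  # genuine packet: True

spoofed = {"src": "10.0.0.1", "dst": "10.1.0.1"}  # attacker lacks K(S, D)
ok, _ = verify_and_strip(spoofed, "AS100", "AS200")
print(ok)  # spoofed packet: False
```

A spoofer outside S cannot produce K(S, D), so packets claiming address s without the key are dropped at D's edge.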


ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT) | 2014

Deep Packet Inspection as a Service

Anat Bremler-Barr; Yotam Harchol; David Hay; Yaron Koral

Middleboxes play a major role in contemporary networks, as forwarding packets is often not enough to meet operator demands, and other functionalities (such as security, QoS/QoE provisioning, and load balancing) are required. Traffic is usually routed through a sequence of such middleboxes, which either reside across the network or in a single, consolidated location. Although middleboxes provide a vast range of different capabilities, there are components that are shared among many of them. A task common to almost all middleboxes that deal with L7 protocols is Deep Packet Inspection (DPI). Today, traffic is inspected from scratch by all the middleboxes on its route. In this paper, we propose to treat DPI as a service to the middleboxes, implying that traffic should be scanned only once, but against the data of all middleboxes that use the service. The DPI service then passes the scan results to the appropriate middleboxes. Having DPI as a service has significant advantages in performance, scalability, robustness, and as a catalyst for innovation in the middlebox domain. Moreover, technologies and solutions for current Software Defined Networks (SDN) (e.g., SIMPLE [41]) make it feasible to implement such a service and route traffic to and from its instances.
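The scan-once, dispatch-per-middlebox idea can be sketched as below. The registrations and patterns are hypothetical, and a naive per-pattern substring check stands in for the single combined automaton a real DPI service would run:

```python
# Hypothetical middlebox registrations: each middlebox hands its pattern set
# to the shared DPI service instead of scanning traffic itself.
registrations = {
    "ids": ["attack", "exploit"],
    "balancer": ["GET /video"],
}

def scan_once(payload):
    """One inspection of the payload; each middlebox receives only its own matches.
    (A real service would scan with one automaton built over all pattern sets.)"""
    return {mb: [pat for pat in pats if pat in payload]
            for mb, pats in registrations.items()}

print(scan_once("GET /video HTTP/1.1 ... attack"))
# {'ids': ['attack'], 'balancer': ['GET /video']}
```

The payload is inspected once, and the per-middlebox result dictionary is what the service forwards along the middlebox chain.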


ACM SIGCOMM | 2016

OpenBox: A Software-Defined Framework for Developing, Deploying, and Managing Network Functions

Anat Bremler-Barr; Yotam Harchol; David Hay

We present OpenBox, a software-defined framework for network-wide development, deployment, and management of network functions (NFs). OpenBox effectively decouples the control plane of NFs from their data plane, similarly to SDN solutions that address only the network's forwarding plane. OpenBox consists of three logical components. First, user-defined OpenBox applications provide NF specifications through the OpenBox northbound API. Second, a logically centralized OpenBox controller is able to merge the logic of multiple NFs, possibly from multiple tenants, and to use a network-wide view to efficiently deploy and scale NFs across the network data plane. Finally, OpenBox instances constitute OpenBox's data plane and are implemented either purely in software or with specific hardware accelerators (e.g., a TCAM). In practice, different NFs carry out similar processing steps on the same packet, and our experiments indeed show a significant improvement in network performance when using OpenBox. Moreover, OpenBox readily supports smart NF placement, NF scaling, and multi-tenancy through its controller.
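The merging of shared processing steps can be illustrated with a toy sketch. The step names are hypothetical, and flat step lists stand in for the processing graphs a controller like OpenBox's actually merges:

```python
# Two NFs expressed as hypothetical step sequences; both begin with the same
# classification step, which a merging controller should execute only once.
firewall = ["classify", "match_acl", "drop_or_pass"]
load_balancer = ["classify", "pick_backend", "rewrite_dst"]

def merge(a, b):
    """Factor out the shared prefix of two step sequences."""
    shared = []
    for x, y in zip(a, b):
        if x != y:
            break
        shared.append(x)
    n = len(shared)
    return shared, a[n:], b[n:]

print(merge(firewall, load_balancer))
# (['classify'], ['match_acl', 'drop_or_pass'], ['pick_backend', 'rewrite_dst'])
```

The shared part runs once per packet; only the NF-specific tails diverge, which is the source of the performance improvement described above.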


IEEE International Conference on Computer Communications (INFOCOM) | 2010

CompactDFA: Generic State Machine Compression for Scalable Pattern Matching

Anat Bremler-Barr; David Hay; Yaron Koral

Pattern matching algorithms lie at the core of all contemporary Intrusion Detection Systems (IDS), making it essential to reduce their memory requirements and increase their speed. This paper focuses on the most popular class of pattern-matching algorithms, the Aho-Corasick-like algorithms, which are based on constructing and traversing a Deterministic Finite Automaton (DFA) representing the patterns. While this approach ensures deterministic time guarantees, modern IDSs need to deal with hundreds of patterns, thus requiring very large DFAs to be stored, which usually do not fit in fast memory. This results in a major bottleneck for the throughput of the IDS, as well as for its power consumption and cost. We propose a novel method to compress DFAs based on the observation that the names of the states are meaningless. While regular DFAs store each transition between two states separately, we use this degree of freedom and encode states in such a way that all transitions to a specific state can be represented by a single prefix that defines a set of current states. Our technique applies to a large class of automata, which can be categorized by simple properties. The problem of pattern matching is then reduced to the well-studied problem of Longest Prefix Matching (LPM), which can be solved either in TCAM, in commercially available IP-lookup chips, or in software. Specifically, we show that with a TCAM our scheme can reach a throughput of 10 Gbps with low power consumption.
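The reduction can be sketched with a toy longest-prefix-match lookup. The state codes and targets below are hypothetical, not the paper's construction; the point is only that once all transitions into a state share a code prefix, the transition function becomes the LPM operation that TCAMs and IP-lookup chips implement:

```python
def lpm(table, code):
    """Return the value of the longest key in `table` that prefixes `code`."""
    best, found = "", None
    for prefix, target in table.items():
        if code.startswith(prefix) and len(prefix) >= len(best):
            best, found = prefix, target
    return found

# Hypothetical encoded transition table for one input symbol: by default go to
# state s0; states whose code starts with "01" go to s3; "011" refines to s7.
transitions = {"": "s0", "01": "s3", "011": "s7"}

print(lpm(transitions, "0110"))  # s7
print(lpm(transitions, "0100"))  # s3
print(lpm(transitions, "1110"))  # s0
```

One LPM entry here replaces what a plain DFA would store as a separate transition per source state, which is the compression the paper exploits.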


IEEE Journal on Selected Areas in Communications | 2004

Improved BGP convergence via ghost flushing

Yehuda Afek; Anat Bremler-Barr; Shemer Schwarz

Labovitz et al. (2001) and Labovitz et al. (2000) observed that it sometimes takes the border gateway protocol (BGP) a substantial amount of time and many messages to converge and stabilize following the failure of a node in the Internet. In this paper, we suggest a minor modification to BGP that eliminates this problem and substantially reduces the convergence time and communication complexity of BGP. Roughly speaking, our modification ensures that bad news (the failure of a node or edge) propagates fast, while good news (the establishment of a new path to a destination) propagates somewhat more slowly. This is achieved by allowing withdrawal messages to propagate with no delay, as fast as the network forwards them, while announcements propagate as they do in standard BGP, with a delay of one minRouteAdver at each node (except for the first wave of announcements). As a by-product of this work, a new stateless mechanism to overcome the counting-to-infinity problem is provided, which compares favorably with other known stateless mechanisms (in RIP and IGRP).
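A back-of-the-envelope sketch shows why the asymmetry helps. The link delay is hypothetical, 30 s is the common BGP minRouteAdver default, and the undelayed first wave of announcements is ignored for simplicity:

```python
LINK_DELAY = 0.05        # seconds per hop, hypothetical
MIN_ROUTE_ADVER = 30.0   # seconds, a typical BGP default

def propagation_time(hops, withdrawal):
    """Withdrawals cross each hop with only the link delay; announcements
    also wait one minRouteAdver per hop (first-wave exception ignored)."""
    per_hop = LINK_DELAY if withdrawal else LINK_DELAY + MIN_ROUTE_ADVER
    return hops * per_hop

print(propagation_time(10, withdrawal=True))   # bad news: well under a second
print(propagation_time(10, withdrawal=False))  # good news: minutes
```

Across a 10-hop path, bad news arrives in a fraction of a second while good news takes on the order of minutes, which is exactly the fast-down, slow-up behavior ghost flushing enforces.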


IEEE Transactions on Computers | 2012

Space-Efficient TCAM-Based Classification Using Gray Coding

Anat Bremler-Barr; Danny Hendler

Ternary content-addressable memories (TCAMs) are increasingly used for high-speed packet classification. TCAMs compare packet headers against all rules in a classification database in parallel and thus provide high throughput unparalleled by software-based solutions. TCAMs are not well-suited, however, for representing rules that contain range fields. Such rules have to be represented by multiple TCAM entries. The resulting range expansion can dramatically reduce TCAM utilization. The majority of real-life database ranges are short. We present a novel algorithm called short range gray encoding (SRGE) for the efficient representation of short range rules. SRGE encodes range borders as binary reflected Gray codes and then represents the resulting range by a minimal set of ternary strings. SRGE is database-independent and does not use extra TCAM bits. For the small number of ranges whose expansion is not significantly reduced by SRGE, we use dependent encoding that exploits the extra bits available on today's TCAMs. Our comparative analysis establishes that this hybrid scheme utilizes TCAM space more efficiently than previously published solutions. The SRGE algorithm has a worst-case expansion ratio of 2W-4, where W is the range-field length. We prove that any TCAM encoding scheme has a worst-case expansion ratio of at least W.
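The property SRGE exploits can be shown in a few lines. The `gray` function is the standard binary-reflected Gray code; the 2-bit range example is our own illustration, not taken from the paper:

```python
def gray(b):
    """Binary-reflected Gray code of a non-negative integer."""
    return b ^ (b >> 1)

# Consecutive integers always differ in exactly one bit in Gray space,
# so the codes of a short range cluster into few ternary strings.
for i in range(1000):
    d = gray(i) ^ gray(i + 1)
    assert d & (d - 1) == 0        # d has exactly one bit set

# 2-bit example: the range [1, 2] is {01, 10} in plain binary and needs two
# TCAM entries, but its Gray codes are {01, 11}, covered by the single
# ternary string "*1" (last bit 1, first bit don't-care).
covered = [v for v in range(4) if format(gray(v), "02b").endswith("1")]
print(covered)   # [1, 2]
```

The ternary string "*1" matches exactly the Gray codes of 1 and 2 and nothing else, so one TCAM entry replaces two.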


IEEE/ACM Transactions on Networking | 2012

Accelerating multipattern matching on compressed HTTP traffic

Anat Bremler-Barr; Yaron Koral

Current security tools using “signature-based” detection do not handle compressed traffic, whose market share is constantly increasing. This paper focuses on compressed HTTP traffic. HTTP uses GZIP compression and requires a decompression phase before string matching can be performed. We present a novel algorithm, the Aho-Corasick-based algorithm for Compressed HTTP (ACCH), that takes advantage of information gathered during the decompression phase to accelerate the commonly used Aho-Corasick pattern-matching algorithm. By analyzing real HTTP traffic and real Web application firewall signatures, we show that up to 84% of the data can be skipped during the scan. Surprisingly, we show that it is faster to perform pattern matching on the compressed data, even with the penalty of decompression, than on regular traffic. As far as we know, this is the first paper to analyze the problem of “on-the-fly” multipattern matching on compressed HTTP traffic and suggest a solution.
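For reference, the baseline that ACCH accelerates is the classic Aho-Corasick automaton; a compact sketch of that well-known algorithm (not of ACCH's skipping logic) looks like:

```python
from collections import deque

def build_ac(patterns):
    """Build an Aho-Corasick automaton: goto table, failure links, outputs."""
    goto, out = [{}], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({})
                out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    fail = [0] * len(goto)
    queue = deque(goto[0].values())          # depth-1 states fail to the root
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]           # inherit matches via failure link
    return goto, fail, out

def search(text, goto, fail, out):
    """Return (start position, pattern) for every match in text."""
    hits, s = [], 0
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits

automaton = build_ac(["he", "she", "his", "hers"])
print(sorted(search("ushers", *automaton)))   # [(1, 'she'), (2, 'he'), (2, 'hers')]
```

ACCH's contribution is to drive this automaton over GZIP's LZ77 back-references so that already-scanned copied bytes need not be rescanned.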


IEEE International Conference on Computer Communications (INFOCOM) | 2009

Layered Interval Codes for TCAM-Based Classification

Anat Bremler-Barr; David Hay; Danny Hendler

Ternary content-addressable memories (TCAMs) are increasingly used for high-speed packet classification. TCAMs compare packet headers against all rules in a classification database in parallel and thus provide high throughput. TCAMs are not well-suited, however, for representing rules that contain range fields, and prior-art algorithms typically represent each such rule by multiple TCAM entries. The resulting range expansion can dramatically reduce TCAM utilization because it introduces a large number of redundant TCAM entries. This redundancy can be mitigated by making use of the extra bits available in each TCAM entry. We present a scheme for constructing efficient representations of range rules, based on the simple observation that sets of disjoint ranges may be encoded much more efficiently than sets of overlapping ranges. Since the ranges in real-world classification databases are, in general, non-disjoint, the algorithms we present split ranges between multiple layers, each of which consists of mutually disjoint ranges. Each layer is then coded independently and assigned its own set of extra bits. Our layering algorithms are based on approximations for specific variants of interval-graph coloring. We evaluate these algorithms by performing extensive comparative analysis on real-life classification databases. Our analysis establishes that our algorithms reduce the number of redundant TCAM entries caused by range rules by more than 60% compared with the best range-encoding prior art.
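The layering step can be illustrated with a simple first-fit sketch (our own simplification, not the paper's variant-specific algorithms): scan ranges by left endpoint and place each in the first layer it does not overlap. For intervals, this left-to-right greedy uses the minimum possible number of layers:

```python
def layer_ranges(ranges):
    """Split possibly-overlapping [lo, hi] ranges into layers of disjoint ranges."""
    layers = []                         # each layer holds mutually disjoint ranges
    for lo, hi in sorted(ranges):
        for layer in layers:
            if layer[-1][1] < lo:       # disjoint from everything in this layer
                layer.append((lo, hi))
                break
        else:
            layers.append([(lo, hi)])   # no compatible layer: open a new one
    return layers

print(layer_ranges([(0, 10), (5, 15), (12, 20)]))
# [[(0, 10), (12, 20)], [(5, 15)]]
```

Each resulting layer contains only disjoint ranges and can therefore be encoded independently with its own extra bits, as described above.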


IEEE Journal on Selected Areas in Communications | 2003

Predicting and bypassing end-to-end Internet service degradations

Anat Bremler-Barr; Edith Cohen; Haim Kaplan; Yishay Mansour

We study the patterns and predictability of Internet end-to-end service degradations, where a degradation is a significant deviation of the round-trip time (RTT) between a client and a server. We use simultaneous RTT measurements collected from several locations to a large representative set of Web sites and study the duration and extent of degradations. We combine these measurements with border gateway protocol cluster information to learn about the location of the cause. We evaluate a number of predictors based upon hidden Markov models and Markov models. Predictors typically exhibit a tradeoff between two types of errors: false positives (incorrect degradation prediction) and false negatives (a degradation is not predicted). The costs of these error types are application-dependent, but we capture the entire spectrum using a precision-versus-recall tradeoff. Using this methodology, we learn what information is most valuable for prediction (recency versus quantity of past measurements). Surprisingly, we also conclude that predictors that utilize history in a very simple way perform as well as more sophisticated ones. One important application of prediction is gateway selection, which is applicable when a local-area network is connected through multiple gateways to one or several Internet service providers. Gateway selection can boost reliability and survivability by selecting for each connection the (hopefully) best gateway. We show that gateway selection using our predictors can reduce degradations to half of those obtained by routing all connections through the single best gateway.
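The "history used in a very simple way" finding can be sketched with a toy predictor and the precision/recall scoring the paper uses. The measurement series below is our own illustration, not the paper's data:

```python
def predict_last(history):
    """Predict a degradation next iff the most recent measurement was degraded."""
    return history[-1]

def precision_recall(series, predictor):
    """Score a predictor over a boolean degraded/not-degraded series."""
    tp = fp = fn = 0
    for t in range(1, len(series)):
        pred, actual = predictor(series[:t]), series[t]
        tp += pred and actual
        fp += pred and not actual
        fn += (not pred) and actual
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical degraded/normal RTT observations.
rtt_degraded = [False, False, True, True, True, False, False, True, True, False]
print(precision_recall(rtt_degraded, predict_last))   # (0.6, 0.6)
```

Sweeping different predictors through `precision_recall` traces out exactly the precision-versus-recall tradeoff the abstract describes.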


IEEE Transactions on Computers | 2013

Vulnerability of Network Mechanisms to Sophisticated DDoS Attacks

Udi Ben-Porat; Anat Bremler-Barr; Hanoch Levy

In recent years, we have experienced a wave of DDoS attacks threatening the welfare of the Internet. These are launched by malicious users whose only incentive is to degrade the performance of other, innocent, users. Traditional systems turn out to be quite vulnerable to these attacks. The objective of this work is to take a first step toward closing this fundamental gap, laying a foundation that can be used in future computer and network designs that take malicious users into account. Our approach is based on proposing a metric that evaluates the vulnerability of a system. We then use our vulnerability metric to evaluate a data structure commonly used in network mechanisms: the hash table. We show that closed hashing is much more vulnerable to DDoS attacks than open hashing, even though the two schemes are considered equivalent by traditional performance evaluation. We also apply the metric to queuing mechanisms common in computer and communications systems. Furthermore, we apply it to the practical case of a hash table whose requests are controlled by a queue, showing that even after the attack has ended, regular users still suffer performance degradation or even total denial of service.
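The closed-versus-open gap can be demonstrated with a toy example (our own illustration, not the paper's model): an attacker inserts N keys that all hash to slot 0. In a closed (linear-probing) table the collision cluster spills over slots 0..N-1, so even a user whose key hashes elsewhere pays extra probes; in an open (chained) table only bucket 0 grows:

```python
SIZE, N = 64, 16
attack_keys = [i * SIZE for i in range(N)]      # all congruent to 0 mod SIZE

# Closed hashing: linear probing.
table = [None] * SIZE
for k in attack_keys:
    i = k % SIZE
    while table[i] is not None:
        i = (i + 1) % SIZE
    table[i] = k

def closed_lookup_cost(key):
    """Probes needed to look up `key` (absent here) in the probed table."""
    i, cost = key % SIZE, 1
    while table[i] is not None and table[i] != key:
        i, cost = (i + 1) % SIZE, cost + 1
    return cost

# Open hashing: chaining.
buckets = [[] for _ in range(SIZE)]
for k in attack_keys:
    buckets[k % SIZE].append(k)

def open_lookup_cost(key):
    """Walk the whole chain for an absent key."""
    return len(buckets[key % SIZE]) + 1

victim = 8                                      # innocent key, hashes to slot 8
print(closed_lookup_cost(victim), open_lookup_cost(victim))   # 9 1
```

The victim never touched bucket 0, yet the closed table charges it for the attacker's cluster, which is the asymmetry the vulnerability metric captures.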

Collaboration


Dive into Anat Bremler-Barr's collaborations.

Top Co-Authors

Yehuda Afek
Interdisciplinary Center Herzliya

David Hay
Hebrew University of Jerusalem

Yotam Harchol
Hebrew University of Jerusalem

Danny Hendler
Ben-Gurion University of the Negev