Publication


Featured research published by Stephen John Tarsa.


international symposium on computers and communications | 2005

Trie-based policy representations for network firewalls

Errin W. Fulp; Stephen John Tarsa

Network firewalls remain the forefront defense for most computer systems. These critical devices filter traffic by comparing arriving packets to a list of rules, or security policy, in a sequential manner. Unfortunately, packet filtering in this fashion can result in significant traffic delays, which is problematic for applications that require strict quality of service (QoS) guarantees. Given this demanding environment, new methods are needed to increase network firewall performance. This paper introduces a new technique for representing a security policy that maintains policy integrity and provides more efficient processing. The policy is represented as an n-ary retrieval tree, also referred to as a trie. The worst-case processing requirement for the policy trie is a fraction of that of a list representation, which only considers rules individually (1/5 the processing for TCP/IP networks). Furthermore, unlike other representations, the n-ary trie developed in this paper can be proven to maintain policy integrity. The creation of policy trie structures is discussed in detail and their performance benefits are described theoretically and validated empirically.
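
The trie construction itself is in the paper; as a minimal sketch of the lookup idea, assuming a five-field TCP/IP rule tuple with "*" wildcards (field names and rules below are illustrative), a policy can be stored one field per trie level so that matching a packet touches only the branches compatible with its header instead of every rule in the list. This simplified lookup prefers specific matches over wildcards; the paper's construction additionally proves equivalence to the original sequential policy.

```python
# Illustrative sketch only (not the paper's implementation): rules are tuples
# of field values with "*" wildcards, stored in a trie keyed one field per
# level; lookup walks the exact-value branch and the wildcard branch per field
# instead of scanning the whole rule list.

FIELDS = ("proto", "src_ip", "src_port", "dst_ip", "dst_port")  # assumed 5-tuple

class TrieNode:
    def __init__(self):
        self.children = {}   # field value -> TrieNode
        self.action = None   # set at the leaf of a complete rule

def insert(root, rule, action):
    node = root
    for value in rule:
        node = node.children.setdefault(value, TrieNode())
    if node.action is None:          # keep the higher-precedence action
        node.action = action

def match(node, packet, depth=0):
    """Depth-first lookup: try the exact value first, then the wildcard branch."""
    if depth == len(FIELDS):
        return node.action
    for key in (packet[depth], "*"):
        child = node.children.get(key)
        if child is not None:
            result = match(child, packet, depth + 1)
            if result is not None:
                return result
    return None

root = TrieNode()
insert(root, ("tcp", "10.0.0.1", "*", "*", "80"), "accept")
insert(root, ("tcp", "*", "*", "*", "*"), "deny")
print(match(root, ("tcp", "10.0.0.1", "1234", "192.168.1.5", "80")))  # accept
print(match(root, ("tcp", "10.0.0.9", "1234", "192.168.1.5", "22")))  # deny
```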


international symposium on computers and communications | 2006

Balancing Trie-Based Policy Representations for Network Firewalls

Stephen John Tarsa; Errin W. Fulp

Firewalls inspect arriving packets according to a security policy. The complexity of these policies can cause significant delays in the processing of packets, resulting in degraded performance, traffic bottlenecks, and ultimately violations of Quality of Service (QoS) constraints. As network capacities continue to increase, improving firewall performance is a major concern. One technique that dramatically reduces required processing is the representation of security policies in software with n-ary tries. This paper describes trie balancing methods that further improve performance by placing more frequently used rules in high-precedence positions, which require fewer tuple comparisons. A proof of sorted trie integrity is presented along with experimental results showing that, on average, sorting reduces the number of comparisons by 27% compared to the original trie and by 83% compared to a list representation. Sorting methods are described in detail and their benefits are demonstrated empirically.
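
As a rough illustration of the integrity constraint behind the sorting (shown here on a flat rule list rather than on trie branches, with made-up rules and hit counts): a rule may be promoted toward the front only past rules it cannot intersect, so first-match semantics are preserved while frequently hit rules need fewer comparisons.

```python
# Illustrative sketch: frequency-based rule sorting under an integrity
# constraint.  A rule is only promoted past rules it does not intersect, so
# the policy's first-match behavior is unchanged.  The paper applies the same
# idea to trie branches; this flat-list version is for exposition.

def intersects(r1, r2):
    """Two rules intersect if some packet could match both (wildcards overlap)."""
    return all(a == b or a == "*" or b == "*" for a, b in zip(r1, r2))

def sort_by_frequency(rules, hits):
    """Stable insertion sort by hit count, blocked by intersecting rules."""
    rules = list(rules)
    for i in range(1, len(rules)):
        j = i
        while j > 0 and hits[rules[j]] > hits[rules[j - 1]] \
                and not intersects(rules[j], rules[j - 1]):
            rules[j - 1], rules[j] = rules[j], rules[j - 1]
            j -= 1
    return rules

rules = [("tcp", "10.0.0.1", "80"),   # rarely hit
         ("udp", "*", "53"),          # very frequently hit, disjoint from rule 1
         ("tcp", "*", "*")]           # frequently hit, intersects rule 1
hits = {rules[0]: 5, rules[1]: 900, rules[2]: 400}
print(sort_by_frequency(rules, hits))
# The UDP rule moves to the front; the broad TCP rule stays behind the specific one.
```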


military communications conference | 2011

Partitioned compressive sensing with neighbor-weighted decoding

H. T. Kung; Stephen John Tarsa

Compressive sensing has gained momentum in recent years as an exciting new theory in signal processing with several useful applications. It states that signals known to have a sparse representation may be encoded and later reconstructed using a small number of measurements, approximately proportional to the signal's sparsity rather than its size. This paper addresses a critical problem that arises when scaling compressive sensing to signals of large length: the time required for decoding becomes prohibitively long, and decoding is not easily parallelized. We describe a method for partitioned compressive sensing, by which we divide a large signal into smaller blocks that may be decoded in parallel. However, since this process requires a significant increase in the number of measurements needed for exact signal reconstruction, we focus on mitigating artifacts that arise due to partitioning in approximately reconstructed signals. Given an error-prone partitioned decoding, we use large-magnitude components that are detected with highest accuracy to influence the decoding of neighboring blocks, and call this approach neighbor-weighted decoding. We show that, for applications with a predefined error threshold, our method can be used in conjunction with partitioned compressive sensing to improve decoding speed, requiring fewer additional measurements than unweighted or locally-weighted decoding.
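
A small numpy sketch of the two-pass idea under simplifying assumptions that the paper does not prescribe (canonical-basis sparsity, a shared Gaussian sensing matrix per block, a basic OMP decoder, and an arbitrary weighting factor): blocks are first decoded independently and in parallel, then re-decoded with atom selection biased toward indices where neighboring blocks recovered large-magnitude components.

```python
import numpy as np

def omp(A, y, k, weights=None):
    """Basic Orthogonal Matching Pursuit; per-column weights bias atom selection."""
    n = A.shape[1]
    w = np.ones(n) if weights is None else weights
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        corr = np.abs(A.T @ residual) * w
        corr[support] = 0.0                      # never reselect an atom
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
block, k, m, blocks = 128, 4, 40, 8              # illustrative sizes
signal = np.zeros(block * blocks)
signal[rng.choice(signal.size, k * blocks, replace=False)] = rng.normal(0, 10, k * blocks)

A = rng.normal(size=(m, block)) / np.sqrt(m)     # shared per-block sensing matrix
ys = [A @ signal[b * block:(b + 1) * block] for b in range(blocks)]

# Pass 1: decode every block independently (this part is trivially parallel).
est = [omp(A, y, k) for y in ys]

# Pass 2: re-decode each block, up-weighting indices where neighbors recovered
# large-magnitude components (the "neighbor-weighted" step).  In this toy
# example the blocks' supports are independent, so the weighting is shown only
# to illustrate the mechanism.
refined = []
for b in range(blocks):
    w = np.ones(block)
    for nb in (b - 1, b + 1):
        if 0 <= nb < blocks:
            w[np.abs(est[nb]) > 5.0] *= 2.0      # arbitrary boost for illustration
    refined.append(omp(A, ys[b], k, weights=w))

err = np.linalg.norm(np.concatenate(refined) - signal) / np.linalg.norm(signal)
print(f"relative reconstruction error: {err:.3f}")
```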


international conference on computer communications and networks | 2011

Achieving High Throughput Ground-to-UAV Transport via Parallel Links

Chit-Kwan Lin; H. T. Kung; Tsung-Han Lin; Stephen John Tarsa; Dario Vlah

Wireless data transfer under high mobility, as found in unmanned aerial vehicle (UAV) applications, is a challenge due to varying channel quality and extended link outages. We present FlowCode, an easily deployable link-layer solution that uses multiple transmitters and receivers to support existing transport protocols such as TCP in these scenarios. By using multiple transmitters and receivers and by exploiting the resulting antenna beam diversity and parallel transmission effects, FlowCode increases throughput and reception range. In emulation, we show that TCP over FlowCode gives greater goodput over a larger portion of the flight path, compared to an enhanced TCP protocol using the standard 802.11 MAC. In the process, we make a strong case for using trace-modulated emulation when developing distributed protocols for complex wireless environments.
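
FlowCode itself is not reproduced here; as a toy model of why parallel transmitters and receivers help a transport like TCP, the following sends each frame over several independent lossy links and de-duplicates by sequence number on the receiving side (the loss probability and link independence are assumptions for illustration only).

```python
import random

# Toy model (not FlowCode): every frame is duplicated over several parallel
# lossy links and the receiving side keeps the first copy of each sequence
# number.  With n independent links of loss p, per-frame loss drops from p to
# p**n, which is the effect that keeps TCP from stalling during fades.

def deliver(frames, n_links, loss_p, rng):
    seen, delivered = set(), []
    for seq in frames:
        if any(rng.random() > loss_p for _ in range(n_links)):  # any copy arrives
            if seq not in seen:
                seen.add(seq)
                delivered.append(seq)
    return delivered

rng = random.Random(1)
frames = list(range(10_000))
for n in (1, 2, 4):
    rate = len(deliver(frames, n, loss_p=0.3, rng=rng)) / len(frames)
    print(f"{n} parallel link(s): delivery rate {rate:.3f}")
# Roughly 0.70, 0.91, and 0.99 under independence.
```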


global communications conference | 2010

Measuring diversity on a low-altitude UAV in a ground-to-air wireless 802.11 mesh network

H. T. Kung; Chit-Kwan Lin; Tsung-Han Lin; Stephen John Tarsa; Dario Vlah

We consider the problem of mitigating a highly varying wireless channel between a transmitting ground node and receivers on a small, low-altitude unmanned aerial vehicle (UAV) in an 802.11 wireless mesh network. One approach is to use multiple transmitter and receiver nodes that exploit the channel's spatial/temporal diversity and that cooperate to improve overall packet reception. We present a series of measurement results from a real-world testbed that characterize the resulting wireless channel. We show that the correlation between receiver nodes on the airplane is poor at small time scales, so receiver diversity can be exploited. Our measurements suggest that using several receiver nodes simultaneously can boost packet delivery rates substantially. Lastly, we show that similar results apply to transmitter selection diversity as well.
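
A hedged sketch of this kind of analysis on synthetic traces (the real data came from the airborne testbed): per-receiver packet-reception traces share a slow fade but lose packets independently at short time scales, so pairwise correlation is modest and combining receivers with an "any copy counts" rule lifts the delivery rate.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20_000                                   # packets in the synthetic trace

# Synthetic reception traces for three on-board receivers: a shared slow fade
# plus independent per-receiver losses, so short-time-scale correlation is low.
fade = (rng.random(T) < 0.8).astype(int)     # channel "up" 80% of the time
recv = np.array([fade & (rng.random(T) < 0.75).astype(int) for _ in range(3)])

# Pairwise correlation between receivers (lower -> more diversity to exploit).
for i in range(3):
    for j in range(i + 1, 3):
        print(f"corr(r{i}, r{j}) = {np.corrcoef(recv[i], recv[j])[0, 1]:.2f}")

# Delivery rate of each receiver alone vs. all three combined.
print("single-receiver delivery:", recv.mean(axis=1).round(3))
print("combined delivery       :", recv.max(axis=0).mean().round(3))
```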


mobile ad hoc networking and computing | 2015

Taming Wireless Fluctuations by Predictive Queuing Using a Sparse-Coding Link-State Model

Stephen John Tarsa; Marcus Z. Comiter; Michael B. Crouse; Bradley McDanel; H. T. Kung

We introduce State-Informed Link-Layer Queuing (SILQ), a system that models, predicts, and avoids packet delivery failures caused by temporary wireless outages in everyday scenarios. By stabilizing connections in adverse link conditions, SILQ boosts throughput and reduces performance variation for network applications, for example by preventing unnecessary TCP timeouts due to dead zones, elevators, and subway tunnels. SILQ makes predictions in real-time by actively probing links, matching measurements to an overcomplete dictionary of patterns learned offline, and classifying the resulting sparse feature vectors to identify those that precede outages. We use a clustering method called sparse coding to build our data-driven link model, and show that it produces more variation-tolerant predictions than traditional loss-rate, location-based, or Markov chain techniques. We present extensive data collection and field-validation of SILQ in airborne, indoor, and urban scenarios of practical interest. We show how offline unsupervised learning discovers link-state patterns that are stable across diverse networks and signal-propagation environments. Using these canonical primitives, we train outage predictors for 802.11 (Wi-Fi) and 3G cellular networks to demonstrate TCP throughput gains of 4x with off-the-shelf mobile devices. SILQ addresses delivery failures solely at the link layer, requires no new hardware, and upholds the end-to-end design principle, to enable easy integration across applications, devices, and networks.
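
A toy, single-link version of the pipeline using off-the-shelf scikit-learn pieces (dictionary learning, OMP sparse coding, a linear SVM) on a synthetic reception trace; the window sizes, outage definition, and all parameters below are illustrative assumptions, not SILQ's.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def make_trace(n, outage_p=0.002, outage_len=40):
    """Synthetic packet-reception trace (1 = delivered) with occasional dead zones."""
    trace = (rng.random(n) < 0.9).astype(float)
    i = 0
    while i < n:
        if rng.random() < outage_p:
            trace[i:i + outage_len] = 0.0
            i += outage_len
        i += 1
    return trace

def windows_and_labels(trace, win=64, horizon=16, gap=10):
    """Label a window 1 if a run of >= gap losses begins within `horizon` packets."""
    X, y = [], []
    for s in range(len(trace) - win - horizon - gap):
        X.append(trace[s:s + win])
        future = trace[s + win:s + win + horizon + gap]
        y.append(int(any(future[t:t + gap].sum() == 0 for t in range(horizon))))
    return np.array(X), np.array(y)

X, y = windows_and_labels(make_trace(20_000))

# Offline: learn an overcomplete dictionary of link-state patterns.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0).fit(X)

# Online: sparse-code each probe window against the dictionary, then classify
# whether an outage is imminent.
codes = sparse_encode(X, dico.components_, algorithm="omp", n_nonzero_coefs=8)
half = len(codes) // 2
clf = LinearSVC(C=1.0).fit(codes[:half], y[:half])
print("holdout accuracy:", round(clf.score(codes[half:], y[half:]), 3))
```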


military communications conference | 2012

Output compression for IC fault detection using compressive sensing

Stephen John Tarsa; H. T. Kung

The process of detecting logical faults in integrated circuits (ICs) due to manufacturing variations is bottlenecked by the I/O cost of scanning in test vectors and offloading test results. Traditionally, the output bottleneck is alleviated by reducing the number of bits in output responses using XOR networks, or by computing signatures from the responses of multiple tests. However, these many-to-one computations reduce test time at the cost of higher detection failure rates and lower test granularity. In this paper, we propose an output compression approach that uses compressive sensing to exploit the redundancy of correlated outputs from closely related tests, and of correlated faulty responses across many circuits. Compressive sensing's simple encoding method makes our approach attractive because it can be implemented on-chip using only a small number of accumulators. Through simulation, we show that our method can reduce the output I/O bottleneck without increasing failure rates, and can reconstruct higher-granularity results off-chip than current compaction approaches.
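
A minimal sketch of the encoding/decoding path under assumed sizes: the error syndrome (observed XOR expected outputs) is sparse when only a few bits are faulty, a handful of signed accumulators compress it on-chip, and a standard OMP solver recovers the flipped bit positions off-chip. The ±1 measurement matrix and the scikit-learn decoder are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

n_outputs = 2_000    # flattened output-response bits across tests (illustrative)
n_faults = 5         # a fault flips only a few of them -> sparse error syndrome
m = 120              # accumulator measurements offloaded instead of n_outputs bits

expected = rng.integers(0, 2, n_outputs)
observed = expected.copy()
fault_positions = rng.choice(n_outputs, n_faults, replace=False)
observed[fault_positions] ^= 1                    # the faulty chip flips a few bits

syndrome = (observed ^ expected).astype(float)    # sparse 0/1 error vector

# On-chip: m signed accumulators each add or subtract every output bit
# (simple hardware); modeled here as a +/-1 matrix-vector product.
Phi = rng.choice([-1.0, 1.0], size=(m, n_outputs))
y = Phi @ syndrome                                # the only data offloaded

# Off-chip: reconstruct the sparse syndrome from the m measurements.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_faults, fit_intercept=False)
omp.fit(Phi, y)
recovered = np.flatnonzero(np.abs(omp.coef_) > 0.5)
print("injected faulty bits :", sorted(fault_positions.tolist()))
print("recovered faulty bits:", sorted(recovered.tolist()))
```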


military communications conference | 2009

FlowCode: Multi-site data exchange over wireless ad-hoc networks using network coding

H. T. Kung; Chit-Kwan Lin; Tsung-Han Lin; Stephen John Tarsa; Dario Vlah

We present FlowCode, a system that exploits network coding at the granularity of traffic flows to facilitate fault-tolerant data exchange in wireless mesh networks. Applications include multi-site data replication in ad-hoc environments such as mesh networks or wireless data centers. By coupling an operand-driven transmission mechanism with a layered network topology, FlowCode allows us to realize the gains of network coding in application systems without a global scheduler. We analyze the resulting gains through modeling and simulation and validate our results on an outdoor testbed of 12 wireless devices. Results indicate that in high loss environments, FlowCode provides the most significant gains from improved fault tolerance over redundant paths.
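
FlowCode's flow-granularity and operand-driven mechanics are not reproduced here; the sketch below shows only the underlying network-coding primitive over GF(2): a relay keeps emitting random combinations of a generation of packets over a lossy hop, and the sink can decode as soon as it has collected a full-rank set of coefficient vectors, which is what provides fault tolerance over redundant paths. The generation size and loss rate are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

k = 8        # packets in one generation of the flow (illustrative)
loss = 0.4   # per-transmission loss on the relayed hop (illustrative)

# The relay keeps emitting fresh random GF(2) combinations of the k packets;
# the sink can decode once its received coefficient vectors reach rank k, no
# matter which particular transmissions were lost along the way.
received, sent = [], 0
while not received or gf2_rank(np.array(received)) < k:
    sent += 1
    coeffs = rng.integers(0, 2, k)
    if coeffs.any() and rng.random() > loss:      # combination survived the hop
        received.append(coeffs)
print(f"decoded a {k}-packet generation after {sent} coded transmissions "
      f"at {loss:.0%} loss")
```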


international conference on ic design and technology | 2014

Workload prediction for adaptive power scaling using deep learning

Stephen John Tarsa; Amit Kumar; H. T. Kung

We apply hierarchical sparse coding, a form of deep learning, to model user-driven workloads based on on-chip hardware performance counters. We then predict periods of low instruction throughput, during which frequency and voltage can be scaled to reclaim power. Using a multi-layer coding structure, our method progressively codes counter values in terms of a few prominent features learned from data, and passes them to a Support Vector Machine (SVM) classifier where they act as signatures for predicting future workload states. We show that prediction accuracy and look-ahead range improve significantly over linear regression modeling, giving more time to adjust power management settings. Our method relies on learning and feature extraction algorithms that can discover and exploit hidden statistical invariances specific to workloads. We argue that, in addition to achieving superior prediction performance, our method is fast enough for practical use. To our knowledge, we are the first to use deep learning at the instruction level for workload prediction and on-chip power adaptation.
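
An end-to-end toy version under strong simplifications: synthetic counter traces, a single dictionary layer standing in for the paper's hierarchical coding, a linear SVM, and a made-up two-point frequency policy. It is meant only to show the shape of the pipeline (counter windows, then sparse codes, then a classifier, then a power setting), not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Synthetic per-interval counter samples alternating between busy and idle
# phases: instructions-per-cycle plus two loosely correlated counters.
T = 12_000
phase = np.repeat((rng.random(T // 200) < 0.6).astype(float), 200)   # 1 = busy
ipc = phase * rng.normal(2.0, 0.2, T) + (1 - phase) * rng.normal(0.4, 0.1, T)
counters = np.stack([ipc,
                     ipc * rng.normal(1.0, 0.05, T),                 # e.g. uops retired
                     (1 - phase) * rng.normal(5.0, 1.0, T)])         # e.g. stall events

WIN, AHEAD = 32, 16
X, y = [], []
for s in range(T - WIN - AHEAD):
    X.append(counters[:, s:s + WIN].ravel())
    y.append(int(ipc[s + WIN:s + WIN + AHEAD].mean() < 1.0))         # low IPC ahead?
X, y = np.array(X), np.array(y)

# One dictionary layer standing in for the paper's multi-layer coding.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0).fit(X)
codes = sparse_encode(X, dico.components_, algorithm="omp", n_nonzero_coefs=6)

half = len(codes) // 2
clf = LinearSVC(C=1.0).fit(codes[:half], y[:half])
pred = clf.predict(codes[half:])
print("look-ahead prediction accuracy:", round(float((pred == y[half:]).mean()), 3))

# Hypothetical policy: drop to a low-power operating point whenever a
# low-throughput period is predicted (operating points are made up).
freq = np.where(pred == 1, 1.2, 3.0)
print("mean frequency under the policy (GHz):", round(float(freq.mean()), 2))
```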


military communications conference | 2013

Hierarchical Sparse Coding for Wireless Link Prediction in an Airborne Scenario

Stephen John Tarsa; H. T. Kung

We build a data-driven hierarchical inference model to predict wireless link quality between a mobile unmanned aerial vehicle (UAV) and ground nodes. Clustering, sparse feature extraction, and non-linear pooling are combined to improve Support Vector Machine (SVM) classification when a limited training set does not comprehensively characterize data variations. Our approach first learns two layers of dictionaries by clustering packet reception data. These dictionaries are used to perform sparse feature extraction, which expresses link state vectors first in terms of a few prominent local patterns, or features, and then in terms of co-occurring features along the flight path. In order to tolerate artifacts like small positional shifts in field-collected data, we pool large-magnitude features among overlapping shifted patches within windows. Together, these techniques transform raw link measurements into stable feature vectors that capture environmental effects driven by radio range limitations, antenna pattern variations, line-of-sight occlusions, etc. Link outage prediction is implemented by an SVM that assigns a common label to feature vectors immediately preceding gaps of successive packet losses; predictions are then fed to an adaptive link-layer protocol that adjusts forward error correction rates, or queues packets during outages to prevent TCP timeout. In our harsh target environment, links are unstable and temporary outages are common, so baseline TCP connections achieve only minimal throughput. However, connections under our predictive protocol temporarily hold packets that would otherwise be lost on unavailable links, and react quickly when the UAV link is restored, increasing overall channel utilization.
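
Two of the concrete steps described above are easy to sketch in isolation, with synthetic data and arbitrary window/gap sizes: labeling the windows that immediately precede a run of consecutive losses, and max-magnitude pooling over overlapping shifted patches to tolerate small positional shifts.

```python
import numpy as np

def outage_labels(trace, win, gap):
    """Label 1 for each window immediately preceding >= `gap` consecutive losses."""
    labels = np.zeros(len(trace) - win, dtype=int)
    for s in range(len(labels)):
        nxt = trace[s + win:s + win + gap]
        if len(nxt) == gap and not nxt.any():
            labels[s] = 1
    return labels

def max_pool_shifted(features, pool):
    """Keep the largest-magnitude value across overlapping shifted patches,
    giving tolerance to small positional shifts in field-collected data."""
    out = np.empty(len(features) - pool + 1)
    for i in range(len(out)):
        patch = features[i:i + pool]
        out[i] = patch[np.argmax(np.abs(patch))]
    return out

rng = np.random.default_rng(0)
trace = (rng.random(300) < 0.85).astype(int)
trace[120:140] = 0                                  # an injected 20-packet outage
labels = outage_labels(trace, win=32, gap=10)
print("positive windows end just before the outage:", np.flatnonzero(labels)[:5])

feats = rng.normal(size=20)
print("pooled features:", max_pool_shifted(feats, pool=4).round(2))
```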
