Publication


Featured research published by Peter L. Reiher.


ACM Special Interest Group on Data Communication | 2004

A taxonomy of DDoS attack and DDoS defense mechanisms

Jelena Mirkovic; Peter L. Reiher

Distributed denial-of-service (DDoS) is a rapidly growing problem. The multitude and variety of both the attacks and the defense approaches are overwhelming. This paper presents two taxonomies for classifying attacks and defenses, and thus provides researchers with a better understanding of the problem and the current solution space. The attack classification criteria were selected to highlight commonalities and important features of attack strategies that define challenges and dictate the design of countermeasures. The defense taxonomy classifies the body of existing DDoS defenses based on their design decisions; it then shows how these decisions dictate the advantages and deficiencies of proposed solutions.
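
To illustrate how a taxonomy like this can be applied, the short Python sketch below records an observed attack along a few common DDoS classification axes; the specific dimensions and values are illustrative assumptions, not the paper's actual taxonomy.

from dataclasses import dataclass

# Hypothetical classification dimensions, for illustration only; the paper's
# taxonomy axes are richer and more precisely defined.
@dataclass
class DDoSAttackRecord:
    exploited_weakness: str   # e.g. "flooding" or "semantic"
    source_address: str       # e.g. "spoofed" or "valid"
    rate_dynamics: str        # e.g. "constant", "increasing", "fluctuating"
    victim_type: str          # e.g. "host", "network", "infrastructure"

attack = DDoSAttackRecord(
    exploited_weakness="flooding",
    source_address="spoofed",
    rate_dynamics="constant",
    victim_type="host",
)
print(attack)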


International Conference on Network Protocols | 2002

Attacking DDoS at the source

Jelena Mirkovic; Gregory Prier; Peter L. Reiher

Distributed denial-of-service (DDoS) attacks present an Internet-wide threat. We propose D-WARD, a DDoS defense system deployed at source-end networks that autonomously detects and stops attacks originating from these networks. Attacks are detected by the constant monitoring of two-way traffic flows between the network and the rest of the Internet and periodic comparison with normal flow models. Mismatching flows are rate-limited in proportion to their aggressiveness. D-WARD offers good service to legitimate traffic even during an attack, while effectively reducing DDoS traffic to a negligible level. A prototype of the system has been built in a Linux router. We show its effectiveness in various attack scenarios, discuss motivations for deployment, and describe associated costs.
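
The detection and response idea described above can be sketched roughly as follows (Python); the flow model, thresholds, and rate-limit formula are assumptions made for illustration, not D-WARD's published algorithms.

# Illustrative source-end flow policing in the spirit of the description above.
class FlowStats:
    def __init__(self):
        self.packets_out = 0   # packets sent toward the destination
        self.packets_in = 0    # packets received back (replies, ACKs)

def aggressiveness(flow: FlowStats) -> float:
    """Ratio of outgoing to incoming packets; a very one-sided flow is suspicious."""
    return flow.packets_out / max(flow.packets_in, 1)

def allowed_rate(flow: FlowStats, base_rate_pps: float, normal_ratio: float = 3.0) -> float:
    """Rate-limit a flow in proportion to how far it deviates from the normal model."""
    score = aggressiveness(flow)
    if score <= normal_ratio:                        # matches the two-way flow model
        return base_rate_pps
    return base_rate_pps * (normal_ratio / score)    # more aggressive, lower allowance

suspect = FlowStats()
suspect.packets_out, suspect.packets_in = 5000, 10
print(allowed_rate(suspect, base_rate_pps=1000.0))   # heavily rate-limited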


Mobile Computing and Communications Review | 1998

Saving portable computer battery power through remote process execution

Alexey Rudenko; Peter L. Reiher; Gerald J. Popek; Geoffrey H. Kuenning

We describe a new approach to power saving and battery life extension on an untethered laptop through wireless remote processing of power-costly tasks. We ran a series of experiments comparing the power consumption of processes run locally with that of the same processes run remotely. We examined the trade-off between communication power expenditures and the power cost of local processing. This paper describes our methodology and the results of our experiments. We suggest ways to further improve this approach and outline a software design to support remote process execution.
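
The underlying trade-off is a simple energy comparison, sketched below in Python; every power and timing figure is invented for the example and none comes from the paper's measurements.

def local_energy_j(cpu_power_w: float, local_runtime_s: float) -> float:
    # Energy to run the task on the laptop's own CPU.
    return cpu_power_w * local_runtime_s

def remote_energy_j(radio_power_w: float, transfer_s: float,
                    idle_power_w: float, remote_runtime_s: float) -> float:
    # Energy to ship data over the wireless link plus energy spent
    # (mostly idle) while the server does the work.
    return radio_power_w * transfer_s + idle_power_w * remote_runtime_s

e_local = local_energy_j(cpu_power_w=8.0, local_runtime_s=120.0)
e_remote = remote_energy_j(radio_power_w=1.5, transfer_s=10.0,
                           idle_power_w=2.0, remote_runtime_s=40.0)
print("run remotely" if e_remote < e_local else "run locally")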


International Conference on Computer Communications | 2002

SAVE: source address validity enforcement protocol

Jun Li; Jelena Mirkovic; Mengqiu Wang; Peter L. Reiher; Lixia Zhang

Forcing all IP packets to carry correct source addresses can greatly help network security, attack tracing, and network problem debugging. However, due to asymmetries in today's Internet routing, routers do not have readily available information to verify the correctness of the source address for each incoming packet. In this paper we describe a new protocol, named SAVE, that can provide routers with the information needed for source address validation. SAVE messages propagate valid source address information from the source location to all destinations, allowing each router along the way to build an incoming table that associates each incoming interface of the router with a set of valid source address blocks. This paper presents the protocol design and evaluates its correctness and performance by simulation experiments. The paper also discusses the issues of protocol security, the effectiveness of partial SAVE deployment, and the handling of unconventional forms of network routing, such as mobile IP and tunneling.
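
A toy version of the per-interface incoming table described in the abstract might look like the following (Python); the table contents and the lookup helper are illustrative assumptions, not the SAVE specification.

import ipaddress

# Toy incoming table: each incoming interface maps to the source address
# blocks considered valid on it. The addresses are invented examples.
incoming_table = {
    "eth0": [ipaddress.ip_network("192.0.2.0/24")],
    "eth1": [ipaddress.ip_network("198.51.100.0/24"),
             ipaddress.ip_network("203.0.113.0/24")],
}

def source_is_valid(interface: str, src: str) -> bool:
    """Check whether a packet's source address is expected on this interface."""
    addr = ipaddress.ip_address(src)
    return any(addr in block for block in incoming_table.get(interface, []))

print(source_is_valid("eth0", "192.0.2.17"))    # True: expected on eth0
print(source_is_valid("eth0", "203.0.113.5"))   # False: likely spoofed or misrouted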


IEEE Transactions on Dependable and Secure Computing | 2005

D-WARD: a source-end defense against flooding denial-of-service attacks

Jelena Mirkovic; Peter L. Reiher

Defenses against flooding distributed denial-of-service (DDoS) attacks commonly respond to the attack by dropping the excess traffic, thus reducing the overload at the victim. The major challenge is differentiating legitimate traffic from attack traffic so that the dropping policies can be applied selectively. We propose D-WARD, a source-end DDoS defense system that achieves autonomous attack detection and surgically accurate response, thanks to its novel traffic profiling techniques, adaptive response, and source-end deployment. The moderate traffic volumes seen near the sources, even during attacks, enable extensive statistics gathering and profiling, facilitating highly selective responses. D-WARD inflicts extremely low collateral damage on legitimate traffic while quickly detecting and severely rate-limiting outgoing attacks. D-WARD has been extensively evaluated in a controlled testbed environment and in real network operation. Results of selected tests are presented in the paper.
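
One way to picture the adaptive response is a rate limit that shrinks quickly while a flow misbehaves and recovers gradually once it complies; the schedule and constants below are assumptions for illustration, not D-WARD's actual rate-adjustment rules.

def adjust_rate_limit(current_limit_pps: float, flow_is_compliant: bool,
                      min_pps: float = 1.0, max_pps: float = 10000.0) -> float:
    if not flow_is_compliant:
        return max(current_limit_pps * 0.5, min_pps)   # fast exponential decrease
    return min(current_limit_pps * 1.1, max_pps)       # gradual recovery

limit = 10000.0
for compliant in [False, False, False, True, True, True]:
    limit = adjust_rate_limit(limit, compliant)
    print(round(limit, 1))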


IEEE Communications Surveys and Tutorials | 2010

Host-to-Host Congestion Control for TCP

Alexander Afanasyev; Neil Tilley; Peter L. Reiher; Leonard Kleinrock

The Transmission Control Protocol (TCP) carries most Internet traffic, so performance of the Internet depends to a great extent on how well TCP works. Performance characteristics of a particular version of TCP are defined by the congestion control algorithm it employs. This paper presents a survey of various congestion control proposals that preserve the original host-to-host idea of TCP: namely, that neither sender nor receiver relies on any explicit notification from the network. The proposed solutions focus on a variety of problems, starting with the basic problem of eliminating the phenomenon of congestion collapse, and also include the problems of effectively using the available network resources in different types of environments (wired, wireless, high-speed, long-delay, etc.). In a shared, highly distributed, and heterogeneous environment such as the Internet, effective network use depends not only on how well a single TCP-based application can utilize the network capacity, but also on how well it cooperates with other applications transmitting data through the same network. Our survey shows that over the last 20 years many host-to-host techniques have been developed that address several problems with different levels of reliability and precision. There have been enhancements allowing senders to detect fast packet losses and route changes. Other techniques have the ability to estimate the loss rate, the bottleneck buffer size, and level of congestion. The survey describes each congestion control alternative, its strengths and its weaknesses. Additionally, techniques that are in common use or available for testing are described.
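
To make the host-to-host idea concrete, the sketch below shows the classic additive-increase/multiplicative-decrease (AIMD) window adjustment that underlies standard TCP congestion control; it is a deliberate simplification, not code from any particular TCP variant covered by the survey.

# Simplified AIMD congestion window logic: the sender infers congestion purely
# from its own observations (packet loss), with no explicit network feedback.
def on_ack(cwnd: float, ssthresh: float, mss: float = 1.0) -> float:
    if cwnd < ssthresh:
        return cwnd + mss              # slow start: roughly doubles per round trip
    return cwnd + mss * mss / cwnd     # congestion avoidance: about one MSS per round trip

def on_loss(cwnd: float) -> tuple[float, float]:
    ssthresh = max(cwnd / 2.0, 2.0)    # multiplicative decrease
    return ssthresh, ssthresh          # new cwnd and new ssthresh

cwnd, ssthresh = 1.0, 64.0
for _ in range(100):                   # 100 acknowledged segments, no losses
    cwnd = on_ack(cwnd, ssthresh)
cwnd, ssthresh = on_loss(cwnd)         # a loss halves the window
print(cwnd, ssthresh)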


Software - Practice and Experience | 1998

Perspectives on optimistically replicated, peer-to-peer filing

Thomas W. Page; Richard Guy; John S. Heidemann; David Ratner; Peter L. Reiher; Ashish Goel; Geoffrey H. Kuenning; Gerald J. Popek

This research proposes and tests an approach to engineering distributed file systems that are aimed at wide-scale, Internet-based use. The premise is that replication is essential to deliver performance and availability, yet the traditional conservative replica consistency algorithms do not scale to this environment. Our Ficus replicated file system uses a single-copy availability, optimistic update policy with reconciliation algorithms that reliably detect concurrent updates and automatically restore the consistency of directory replicas. The system uses the peer-to-peer model in which all machines are architectural equals but still permits configuration in a client-server arrangement where appropriate. Ficus has been used for six years at several geographically scattered installations. This paper details and evaluates the use of optimistic replica consistency, automatic update conflict detection and repair, the peer-to-peer (as opposed to client-server) interaction model, and the stackable file system architecture in the design and construction of Ficus. The paper concludes with a number of lessons learned from the experience of designing, building, measuring, and living with an optimistically replicated file system.
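
Reliable detection of concurrent updates is commonly implemented with version vectors; the generic comparison below (Python) illustrates the idea and is not Ficus's actual reconciliation code.

def compare(vv_a: dict, vv_b: dict) -> str:
    """Compare two per-replica update counters for the same file."""
    keys = set(vv_a) | set(vv_b)
    a_ge_b = all(vv_a.get(k, 0) >= vv_b.get(k, 0) for k in keys)
    b_ge_a = all(vv_b.get(k, 0) >= vv_a.get(k, 0) for k in keys)
    if a_ge_b and b_ge_a:
        return "equal"
    if a_ge_b:
        return "a dominates"   # replica A has seen every update B has
    if b_ge_a:
        return "b dominates"
    return "conflict"          # concurrent updates: needs automatic or manual repair

print(compare({"site1": 3, "site2": 1}, {"site1": 2, "site2": 2}))   # conflict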


Systems, Man and Cybernetics | 2003

Detecting insider threats by monitoring system call activity

Nam T. Nguyen; Peter L. Reiher; Geoffrey H. Kuenning

One approach to detecting insider misbehavior is to monitor system call activity and watch for danger signs or unusual behavior. We describe an experimental system designed to test this approach. We tested the system's ability to detect common insider misbehavior by examining file system and process-related system calls. Our results show that this approach can detect many such activities.
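
A minimal sketch of the monitoring idea follows (Python); the n-gram windows, the toy traces, and the scoring are illustrative assumptions rather than the detection method used in the paper.

def ngrams(trace, n=3):
    # Short sliding windows of consecutive system calls.
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

normal_trace = ["open", "read", "close", "open", "read", "write", "close"]
normal_profile = ngrams(normal_trace)

def anomaly_score(trace, profile, n=3):
    # Fraction of call sequences never seen during normal operation.
    observed = ngrams(trace, n)
    if not observed:
        return 0.0
    return len(observed - profile) / len(observed)

suspect_trace = ["open", "read", "unlink", "unlink", "unlink", "close"]
print(anomaly_score(suspect_trace, normal_profile))   # high score: flag for review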


International Conference on Conceptual Modeling | 1998

Rumor: Mobile Data Access Through Optimistic Peer-to-Peer Replication

Richard Guy; Peter L. Reiher; David Ratner; Michial Gunter; Wilkie Ma; Gerald J. Popek

Rumor is an optimistically replicated file system designed for use in mobile computers. Rumor uses a peer model that allows opportunistic update propagation among any sites replicating files. The paper outlines basic characteristics of replication systems for mobile computers, describes the design and implementation of the Rumor file system, and presents performance data for Rumor. The research described demonstrates the feasibility of using peer optimistic replication to support mobile computing.
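
Opportunistic peer-to-peer propagation can be pictured as a simple anti-entropy exchange whenever two replicas happen to connect; the sketch below (Python) is an illustration of that idea, not Rumor's implementation, and it ignores conflict detection (see the version-vector sketch above).

def reconcile(replica_a: dict, replica_b: dict) -> None:
    """When two peers meet, each adopts any file version newer than its own."""
    for path in set(replica_a) | set(replica_b):
        va = replica_a.get(path, (0, None))   # (version number, contents)
        vb = replica_b.get(path, (0, None))
        newer = va if va[0] >= vb[0] else vb
        replica_a[path] = newer
        replica_b[path] = newer

laptop = {"/notes.txt": (3, "edited on the plane")}
desktop = {"/notes.txt": (2, "older copy"), "/todo.txt": (1, "groceries")}
reconcile(laptop, desktop)
print(laptop == desktop)   # True: both replicas now hold the newest versions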


ACM Symposium on Applied Computing | 1999

The remote processing framework for portable computer power saving

Alexey Rudenko; Peter L. Reiher; Gerald J. Popek; Geoffrey H. Kuenning

Recent research has demonstrated that portable computer users can save battery power by migrating tasks over wireless networks to server machines. Making this technique generally useful requires considerable automation. This paper describes a framework for automatically migrating tasks from a portable computer over a wireless network to a server and migrating the results back. The paper presents the framework's architecture, discusses key issues in creating the framework, and presents performance results that demonstrate that the framework is both useful and power-inexpensive.
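
A framework of this kind ultimately automates a per-task decision; the stub below (Python) shows the shape of such a decision, with all names and inputs invented for illustration.

def should_migrate(predicted_local_j: float, predicted_remote_j: float,
                   server_reachable: bool) -> bool:
    # Migrate only when a server is reachable and migration is predicted to save energy.
    return server_reachable and predicted_remote_j < predicted_local_j

def run_task(task: str, predicted_local_j: float, predicted_remote_j: float,
             server_reachable: bool) -> str:
    if should_migrate(predicted_local_j, predicted_remote_j, server_reachable):
        return f"migrate {task}: send input over wireless, run on server, fetch results"
    return f"run {task} locally"

print(run_task("compile", predicted_local_j=950.0,
               predicted_remote_j=230.0, server_reachable=True))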

Collaboration


Dive into Peter L. Reiher's collaborations.

Top Co-Authors

Jelena Mirkovic (Information Sciences Institute)
Kevin Eustice (University of California)
Jun Li (University of Oregon)
An-I Andy Wang (University of California)
David Ratner (University of California)
Mario Gerla (University of California)