
Publications


Featured research published by Phillip W. Hutto.


Distributed Computing | 1995

Causal memory: definitions, implementation, and programming

Mustaque Ahamad; Gil Neiger; James E. Burns; Prince Kohli; Phillip W. Hutto

The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by defining causal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that are causally related. Because causal memory is weakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.
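
As a concrete illustration of the message-passing implementation described above, here is a minimal sketch of causally ordered update propagation using vector timestamps; the names (Network, CausalMemoryProcess, deliver) are illustrative, not the paper's notation, and the synchronous broadcast stands in for a real asynchronous network.

```python
# A minimal sketch of causal memory over message passing, assuming a
# vector-timestamp scheme in the spirit of the paper's implementation.

class Network:
    """Trivial broadcast fabric; a real system would be asynchronous."""
    def __init__(self):
        self.procs = []

    def broadcast(self, sender, msg):
        for p in self.procs:
            if p.pid != sender:
                p.deliver(sender, msg)

class CausalMemoryProcess:
    def __init__(self, pid, nprocs):
        self.pid = pid
        self.vt = [0] * nprocs      # vector timestamp
        self.store = {}             # local replica: location -> value
        self.pending = []           # received writes not yet causally ready

    def write(self, loc, value, network):
        self.vt[self.pid] += 1
        self.store[loc] = value     # apply locally at once
        network.broadcast(self.pid, (loc, value, list(self.vt)))

    def read(self, loc):
        return self.store.get(loc)  # reads never block or communicate

    def deliver(self, sender, msg):
        self.pending.append((sender, msg))
        self._apply_ready()

    def _ready(self, sender, wvt):
        # next write from sender, and nothing the sender had seen is missing
        return (wvt[sender] == self.vt[sender] + 1 and
                all(wvt[k] <= self.vt[k]
                    for k in range(len(self.vt)) if k != sender))

    def _apply_ready(self):
        progress = True
        while progress:
            progress = False
            for sender, (loc, value, wvt) in list(self.pending):
                if self._ready(sender, wvt):
                    self.store[loc] = value
                    self.vt[sender] = wvt[sender]
                    self.pending.remove((sender, (loc, value, wvt)))
                    progress = True

net = Network()
net.procs = [CausalMemoryProcess(i, 2) for i in range(2)]
p0, p1 = net.procs
p0.write("x", 1, net)
print(p1.read("x"))  # 1: p1 has delivered p0's causally ready write
```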


International Conference on Embedded Networked Sensor Systems | 2003

DFuse: a framework for distributed data fusion

Rajnish Kumar; Matthew Wolenetz; Bikash Agarwalla; JunSuk Shin; Phillip W. Hutto; Arnab Paul

Simple in-network data aggregation (or fusion) techniques for sensor networks have been the focus of several recent research efforts, but they are insufficient to support advanced fusion applications. We extend these techniques to future sensor networks and ask two related questions: (a) what is the appropriate set of data fusion techniques, and (b) how do we dynamically assign aggregation roles to the nodes of a sensor network? We have developed an architectural framework, DFuse, for answering these two questions. It consists of a data fusion API and a distributed algorithm for energy-aware role assignment. The fusion API enables an application to be specified as a coarse-grained dataflow graph, and eases application development and deployment. The role assignment algorithm maps the graph onto the network, and optimally adapts the mapping at run-time using role migration. Experiments on an iPAQ farm show that the fusion API has low overhead, and the role assignment algorithm with role migration significantly increases the network lifetime compared to any static assignment.
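
To make the dataflow-graph style of the fusion API concrete, here is a minimal sketch; FusionPoint, Source, and pull are hypothetical names, not the actual DFuse API, and the averaging function stands in for application-specific fusion code.

```python
# A sketch of specifying an application as a coarse-grained dataflow
# graph whose interior nodes apply a fusion function to their inputs.

import random

class Source:
    def __init__(self, sample):
        self.sample = sample            # callable producing one reading

    def pull(self):
        return self.sample()

class FusionPoint:
    def __init__(self, name, inputs, fuse):
        self.name = name
        self.inputs = inputs            # upstream Sources or FusionPoints
        self.fuse = fuse                # callable: list of items -> item

    def pull(self):
        return self.fuse([src.pull() for src in self.inputs])

# Example graph: two temperature sensors fused by averaging.
s1 = Source(lambda: 20 + random.random())
s2 = Source(lambda: 21 + random.random())
avg = FusionPoint("avg-temp", [s1, s2], lambda xs: sum(xs) / len(xs))
print(avg.pull())
```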


International Conference on Distributed Computing Systems | 1990

Slow memory: weakening consistency to enhance concurrency in distributed shared memories

Phillip W. Hutto; Mustaque Ahamad

The use of weakly consistent memories in distributed shared memory systems to combat unacceptable network delay and to allow such systems to scale is proposed. Proposed memory correctness conditions are surveyed, and how they are related by a weakness hierarchy is demonstrated. Multiversion and messaging interpretations of memory are introduced as means of systematically exploring the space of possible memories. Slow memory is presented as a memory that allows the effects of writes to propagate slowly through the system, eliminating the need for costly consistency maintenance protocols that limit concurrency. Slow memory possesses a valuable locality property and supports a reduction from traditional atomic memory. Thus slow memory is as expressive as atomic memory. This expressiveness is demonstrated by two exclusion algorithms and a solution to M. J. Fischer and A. Michael's (1982) dictionary problem on slow memory.
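
The following toy model illustrates the semantics just sketched, under the informal definition above: a process sees its own writes immediately, other processes see a write only after an explicit propagation step, and writes by one process to one location are delivered FIFO and so are never reordered. All names are illustrative.

```python
# A toy model of slow memory: lazy, per-writer, per-location FIFO
# propagation of writes, with local (possibly stale) reads.

class SlowMemory:
    def __init__(self, nprocs):
        self.n = nprocs
        self.view = {}     # (reader, loc) -> value currently visible
        self.queues = {}   # (reader, writer, loc) -> pending writes, FIFO

    def write(self, pid, loc, value):
        self.view[(pid, loc)] = value          # own writes visible at once
        for r in range(self.n):
            if r != pid:
                self.queues.setdefault((r, pid, loc), []).append(value)

    def propagate(self, reader, writer, loc):
        """Deliver one pending write; the delay models 'slowness'."""
        q = self.queues.get((reader, writer, loc), [])
        if q:
            self.view[(reader, loc)] = q.pop(0)

    def read(self, pid, loc):
        return self.view.get((pid, loc))       # stale, but never reordered

mem = SlowMemory(2)
mem.write(0, "x", 1)
print(mem.read(1, "x"))        # None: the write has not propagated yet
mem.propagate(1, 0, "x")
print(mem.read(1, "x"))        # 1
```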


International Conference on Distributed Computing Systems | 1991

Implementing and programming causal distributed shared memory

Mustaque Ahamad; Phillip W. Hutto; Ranjit John

A simple owner protocol for implementing a causal distributed shared memory (DSM) is presented, and it is argued that this implementation is more efficient than comparable coherent DSM implementations. Moreover, it is shown that writing programs for causal memory is no more difficult than writing programs for atomic shared memory.
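
A minimal sketch of the owner idea only, under the assumption that each location has a distinguished owner through which writes are funneled while reads stay local. This illustrates the flavor of the approach, not the paper's exact protocol, which must also deliver updates in causal order (e.g. with vector timestamps as sketched earlier).

```python
# Owner-style writes: the owner of a location orders writes to it and
# pushes updates to every replica; reads are purely local.

class Replica:
    def __init__(self, pid, owners, replicas):
        self.pid = pid
        self.owners = owners        # loc -> owning pid
        self.replicas = replicas    # pid -> Replica, shared and mutable
        self.store = {}

    def write(self, loc, value):
        # funnel the write through the location's owner
        self.replicas[self.owners[loc]]._own_write(loc, value)

    def _own_write(self, loc, value):
        for r in self.replicas.values():
            r.store[loc] = value    # owner propagates the update

    def read(self, loc):
        return self.store.get(loc)  # reads are purely local

replicas = {}
owners = {"x": 0}
for pid in range(2):
    replicas[pid] = Replica(pid, owners, replicas)
replicas[1].write("x", 7)           # forwarded to owner 0, then propagated
print(replicas[0].read("x"))        # 7
```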


ACM Transactions on Sensor Networks | 2006

Dynamic data fusion for future sensor networks

Rajnish Kumar; Matthew Wolenetz; Brian F. Cooper; Bikash Agarwalla; JunSuk Shin; Phillip W. Hutto; Arnab Paul

DFuse is an architectural framework for dynamic application-specified data fusion in sensor networks. It bridges an important abstraction gap for developing advanced fusion applications, taking into account the dynamic nature of applications and sensor networks. Elements of the DFuse architecture include a fusion API, a distributed role assignment algorithm that dynamically adapts the placement of the application task graph on the network, and an abstraction migration facility that aids such dynamic role assignment. Experimental evaluations show that the API has low overhead, and simulation results show that the role assignment algorithm significantly increases the network lifetime over static placement.
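
To illustrate the role-assignment side (the fusion API was sketched above), here is a hedged sketch of a transmission-cost heuristic with a migration threshold; the cost function and the 0.9 threshold are illustrative assumptions, since the paper evaluates cost functions of its own.

```python
# Energy-aware role assignment: place a fusion role on the node that
# minimizes transmission cost, migrating only when clearly worthwhile.

def transmission_cost(host, producers, consumers, dist):
    """Total distance data travels if `host` runs the fusion role."""
    return (sum(dist[p][host] for p in producers) +
            sum(dist[host][c] for c in consumers))

def assign_role(nodes, producers, consumers, dist):
    """Place the fusion role on the cheapest node."""
    return min(nodes,
               key=lambda h: transmission_cost(h, producers, consumers, dist))

def maybe_migrate(current, nodes, producers, consumers, dist, threshold=0.9):
    """Migrate only when clearly cheaper, so migration cost is amortized."""
    best = assign_role(nodes, producers, consumers, dist)
    if (transmission_cost(best, producers, consumers, dist)
            < threshold * transmission_cost(current, producers, consumers, dist)):
        return best
    return current

# Four nodes on a line; sensors at nodes 0 and 1, the sink at node 3.
dist = {a: {b: abs(a - b) for b in range(4)} for a in range(4)}
print(assign_role(range(4), [0, 1], [3], dist))  # 1, between the endpoints
```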


Pervasive Computing and Communications | 2004

MediaBroker: an architecture for pervasive computing

Martin Modahl; Ilya Bagrak; Matthew Wolenetz; Phillip W. Hutto

MediaBroker is a distributed framework designed to support pervasive computing applications. Specifically, the architecture consists of a transport engine and peripheral clients and addresses issues in scalability, data sharing, data transformation and platform heterogeneity. Key features of MediaBroker are a type-aware data transport that is capable of dynamically transforming data en route from source to sinks; an extensible system for describing types of streaming data; and the interaction between the transformation engine and the type system. Details of the MediaBroker architecture and implementation are presented in this paper. Through experimental study, we show reasonable performance for selected streaming media-intensive applications. For example, relative to baseline TCP performance, MediaBroker incurs under 11% latency overhead and achieves roughly 80% of the TCP throughput when streaming items larger than 100 KB across our infrastructure.
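
The interaction between the type system and the transformation engine can be pictured as path-finding over registered converters: the broker composes converter steps to transform items en route from the source's type to each sink's requested type. The sketch below is a minimal illustration under that assumption; Broker, register_converter, and publish are hypothetical names, not MediaBroker's API.

```python
# Type-aware transport: compose registered converters (found by BFS)
# to transform published items into each sink's requested type.

from collections import deque

class Broker:
    def __init__(self):
        self.converters = {}    # (src_type, dst_type) -> callable
        self.sinks = []         # (requested_type, callback)

    def register_converter(self, src_type, dst_type, fn):
        self.converters[(src_type, dst_type)] = fn

    def subscribe(self, requested_type, callback):
        self.sinks.append((requested_type, callback))

    def _path(self, src_type, dst_type):
        """Breadth-first search for a chain of converters."""
        frontier = deque([(src_type, [])])
        seen = {src_type}
        while frontier:
            t, steps = frontier.popleft()
            if t == dst_type:
                return steps
            for (a, b), fn in self.converters.items():
                if a == t and b not in seen:
                    seen.add(b)
                    frontier.append((b, steps + [fn]))
        return None             # no conversion chain exists

    def publish(self, src_type, item):
        for dst_type, callback in self.sinks:
            steps = self._path(src_type, dst_type)
            if steps is not None:
                out = item
                for fn in steps:
                    out = fn(out)
                callback(out)

b = Broker()
b.register_converter("video/raw", "video/gray", lambda f: "gray(%s)" % f)
b.subscribe("video/gray", print)
b.publish("video/raw", "frame0")    # prints gray(frame0)
```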


IEEE SoutheastCon | 2004

A Methodology to Characterize Kernel Level Rootkit Exploits that Overwrite the System Call Table

John G. Levine; Julian B. Grizzard; Phillip W. Hutto; Henry L. Owen

A cracker who gains access to a computer system will normally install some method, for use at a later time, that allows the cracker to come back onto the system with root privilege. One method that a cracker may use is the installation of a rootkit on the compromised system. A kernel level rootkit will modify the underlying kernel of the installed operating system. The kernel controls everything that happens on a computer. We are developing a standardized methodology to characterize rootkits. The ability to characterize rootkits will provide system administrators, researchers, and security personnel with the information necessary to take the best possible recovery actions. This may also help to detect and fingerprint additional instances of rootkits and prevent further security incidents involving them. We propose new methods for characterizing kernel level rootkits. These methods may also be used in the detection of kernel rootkits.
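
One concrete check in the spirit of this methodology is to compare system call symbol addresses recorded in a trusted System.map against those the running kernel reports, flagging redirected entries. The sketch below assumes Linux-style /boot/System.map and /proc/kallsyms files and the "sys_" naming convention; it is an illustration, not the authors' tool, and modern kernels may report zeroed addresses unless kptr_restrict permits otherwise.

```python
# Flag system calls whose live kernel address differs from a trusted
# baseline, a symptom of a syscall-table-overwriting rootkit.

def load_symbols(path):
    """Parse 'address type name' lines into a name -> address map."""
    syms = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3:
                syms[parts[2]] = int(parts[0], 16)
    return syms

def flag_overwritten(trusted_map="/boot/System.map", live="/proc/kallsyms"):
    trusted = load_symbols(trusted_map)
    running = load_symbols(live)
    for name, addr in trusted.items():
        if name.startswith("sys_") and running.get(name, addr) != addr:
            print("possible hook: %s moved %#x -> %#x"
                  % (name, addr, running[name]))

if __name__ == "__main__":
    flag_overwritten()
```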


Pervasive and Mobile Computing | 2005

MediaBroker: A pervasive computing infrastructure for adaptive transformation and sharing of stream data

Martin Modahl; Ilya Bagrak; Matthew Wolenetz; David J. Lillethun; Bin Liu; James Kim; Phillip W. Hutto; Ramesh Jain

MediaBroker is a distributed framework designed to support pervasive computing applications. Key contributions of MediaBroker are efficient and scalable data transport, data stream registration and discovery, an extensible system for data type description, and type-aware data transport that is capable of dynamically transforming data en route from source to sinks. Specifically, the architecture consists of a transport engine and peripheral clients and addresses issues in scalability, data sharing, data transformation, and platform heterogeneity. Details of the MediaBroker architecture, implementation, and a concrete application example are presented in this article. Experimental study shows reasonable performance for selected streaming media-intensive applications. For example, relative to baseline TCP performance, MediaBroker incurs under 11% latency overhead and achieves roughly 80% of the TCP throughput when streaming items larger than 100 kB across our infrastructure. The EventWeb application demonstrates the utility and graceful scaling of MediaBroker for supporting pervasive computing applications.
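
As a small illustration of the stream registration and discovery contribution listed above (type-aware transformation was sketched earlier), sources register named streams with declared types, and sinks discover them by type; Registry and its methods are hypothetical names, not MediaBroker interfaces.

```python
# Minimal stream registration and type-based discovery.

class Registry:
    def __init__(self):
        self.streams = {}   # stream name -> declared type

    def register(self, name, stream_type):
        self.streams[name] = stream_type

    def discover(self, wanted_type):
        return [n for n, t in self.streams.items() if t == wanted_type]

reg = Registry()
reg.register("lab-camera-1", "video/mjpeg")
reg.register("lab-mic-1", "audio/pcm")
print(reg.discover("video/mjpeg"))  # ['lab-camera-1']
```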


International Workshop on Variable Structure Systems | 2004

State management in Web services

Xiang Song; Namgeun Jeong; Phillip W. Hutto; James M. Rehg

In this paper, we identify a problem for certain applications wishing to use the Web service paradigm to enhance interoperability: rapid, robust state maintenance. While many features are available to support session data, special mechanisms for application state maintenance are less well developed. We discuss three different models to solve the problem and compare the advantages and disadvantages of each. Experimental results show that the appropriate model depends on application requirements. D-Stampede.NET is a platform supporting the development of applications that involve large time-sequenced data communication among heterogeneous clients. We describe our Web service implementation along with our state server solution to the application state management problem. A simple demonstration application is described and measured to validate performance.
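
The state-server model mentioned above can be pictured as stateless Web service handlers sharing application state through a dedicated server keyed by application id; the sketch below is a minimal illustration under that assumption and does not reflect the D-Stampede.NET interfaces.

```python
# A dedicated state server: stateless service invocations read and
# write shared application state keyed by (app_id, key).

import threading

class StateServer:
    def __init__(self):
        self._lock = threading.Lock()
        self._state = {}            # (app_id, key) -> value

    def put(self, app_id, key, value):
        with self._lock:
            self._state[(app_id, key)] = value

    def get(self, app_id, key, default=None):
        with self._lock:
            return self._state.get((app_id, key), default)

# Two independent, stateless service invocations sharing state:
server = StateServer()
server.put("tracker", "last_frame", 42)
print(server.get("tracker", "last_frame"))  # 42
```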


IEEE Computer Society Workshop on Future Trends of Distributed Computing Systems | 2003

A system architecture for distributed control loop applications (extended abstract)

Phillip W. Hutto; Bikash Agarwalla; Matthew Wolenetz

In this position paper we motivate an important emerging class of applications that cooperate across a complex distributed computational fabric containing elements of widely varying capabilities, including physical and virtual sensors, actuators, and high-performance computational clusters and grids. We identify typical requirements of such applications and several novel research challenges they pose. We sketch an evolving architecture developed as part of the MediaBroker project at Georgia Tech that solves a subset of the problems presented.

Collaboration


Phillip W. Hutto's top co-authors and their affiliations:

Matthew Wolenetz, Georgia Institute of Technology College of Computing
Bikash Agarwalla, Georgia Institute of Technology
Mustaque Ahamad, Georgia Institute of Technology
Arnab Paul, Georgia Institute of Technology
Ilya Bagrak, Georgia Institute of Technology
JunSuk Shin, Georgia Institute of Technology
Martin Modahl, Georgia Institute of Technology
Rajnish Kumar, Georgia Institute of Technology
James E. Burns, Georgia Institute of Technology