
Publications


Featured research published by Dan E. Poff.


IEEE International Conference on High Performance Computing, Data and Analytics | 2008

PAM: a novel performance/power aware meta-scheduler for multi-core systems

Mohammad Banikazemi; Dan E. Poff; Bulent Abali

Sharing resources such as caches and main memory bandwidth in multi-core systems calls for a more sophisticated scheduling scheme. PAM is a low-overhead, user-level meta-scheduler that requires no hardware or software changes. It operates by detecting resource congestion and providing guidelines to the standard system scheduler, limiting the assignment of processes to subsets of the available cores. PAM contains a cache model that it uses to predict the impact of new schedules. PAM can be used to improve the system along three dimensions: performance, power, and energy consumption (and any combination of the three). On our prototype, we show that individual benchmarks can improve by up to 33% and overall system performance by as much as 14%.
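The guideline mechanism described above can be illustrated with a small sketch. This is a hypothetical reconstruction, not the authors' code: the threshold, the miss-rate inputs, and both function names are assumptions made purely for illustration.

```python
# Hypothetical sketch of a PAM-style guideline step: when shared-resource
# congestion is detected, advise the scheduler to place new processes only
# on a subset of cores so co-runners stop thrashing the shared cache.

CONGESTION_THRESHOLD = 0.10  # assumed per-core miss-rate threshold


def pick_allowed_cores(miss_rate_per_core, limit):
    """Return the cores with the lowest miss rates, up to `limit`."""
    ranked = sorted(miss_rate_per_core, key=miss_rate_per_core.get)
    return set(ranked[:limit])


def schedule_guideline(miss_rate_per_core):
    """If any core looks congested, restrict new-process placement to
    the less-congested half of the cores; otherwise allow all cores."""
    congested = any(rate > CONGESTION_THRESHOLD
                    for rate in miss_rate_per_core.values())
    if not congested:
        return set(miss_rate_per_core)  # no restriction needed
    return pick_allowed_cores(miss_rate_per_core,
                              max(1, len(miss_rate_per_core) // 2))


rates = {0: 0.02, 1: 0.18, 2: 0.05, 3: 0.22}
print(schedule_guideline(rates))  # cores 0 and 2 carry the lightest load
```

In the real system the resulting core set would be enforced through the OS scheduler's affinity interface rather than returned to the caller.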


IBM Journal of Research and Development | 2001

Algorithms and data structures for compressed-memory machines

Peter A. Franaszek; Philip Heidelberger; Dan E. Poff; John T. Robinson

An overview of a set of algorithms and data structures developed for compressed-memory machines is given. These include 1) very fast compression and decompression algorithms, for relatively small fixed-size lines, that are suitable for hardware implementation; 2) methods for storing variable-size compressed lines in main memory that minimize overheads due to directory size and storage fragmentation, but that are simple enough for implementation as part of a system memory controller; and 3) a number of operating system modifications required to ensure that a compressed-memory machine never runs out of memory as the compression ratio changes dynamically. This research was done to explore the feasibility of computer architectures in which data are decompressed on cache misses and compressed on writebacks. The results led to and were implemented in IBM Memory Expansion Technology (MXT), which for typical systems yields a factor-of-2 expansion in effective memory size with generally minimal effect on performance.
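The core idea of items 1) and 2) can be sketched in a few lines. This is an illustrative software analogue only, assuming `zlib` in place of the hardware algorithm and a plain dictionary in place of the memory controller's directory; the line size and function names are invented for the example.

```python
# Illustrative sketch (not IBM's hardware design): compress fixed-size
# memory lines, keep a directory mapping line number -> variable-size
# compressed form, and decompress on access (the "cache miss" path).
import zlib

LINE_SIZE = 256  # assumed line size in bytes

directory = {}   # line number -> compressed bytes (variable length)


def write_line(n, data):
    """Writeback path: compress the line and record it in the directory."""
    assert len(data) == LINE_SIZE
    directory[n] = zlib.compress(data)


def read_line(n):
    """Miss path: look up the directory and decompress the line."""
    return zlib.decompress(directory[n])


write_line(0, b"A" * LINE_SIZE)   # highly compressible line
write_line(1, bytes(range(256)))  # poorly compressible line
assert read_line(0) == b"A" * LINE_SIZE
print(len(directory[0]), len(directory[1]))  # stored sizes differ widely
```

The wide spread of stored sizes is exactly why the paper's second item, fragmentation-aware placement of variable-size lines, matters.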


IEEE Conference on Mass Storage Systems and Technologies | 2005

Storage-based intrusion detection for storage area networks (SANs)

Mohammad Banikazemi; Dan E. Poff; Bulent Abali

Storage systems are the next frontier for providing protection against intrusion. Because storage systems see changes to persistent data, several types of intrusions can be detected there. Intrusion detection (ID) techniques can be deployed in various storage systems. In this paper, we study how intrusions can be detected at the block storage level and in SAN environments. We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and for recovery of compromised data. In particular, we present two prototype systems. First, we present a real-time intrusion detection system (IDS) integrated within a storage management and virtualization system; in this system, incoming requests for storage blocks are examined for signs of intrusion in real time. We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system. The major advantage of this approach is that it requires no modification or enhancement to the storage system software. In this approach, we use the space- and time-efficient point-in-time copy operation provided by SAN storage devices. We also present performance results showing that the impact of ID on overall storage system performance is negligible. Recovering data in compromised systems is also discussed.
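The real-time path can be sketched as follows. This is a hypothetical toy, not the prototype's logic: the watched-block set, the in-memory "store", and the hashing scheme are all assumptions standing in for the virtualization layer's actual rules.

```python
# Hypothetical sketch of block-level intrusion detection: write requests
# to watched blocks (e.g. blocks holding boot code or system binaries)
# are checked against a known-good baseline before being applied.
import hashlib

WATCHED_BLOCKS = {0}   # assumed: block 0 holds protected content
alerts = []

store = {0: b"trusted bootloader", 7: b"user data"}
baseline = {n: hashlib.sha256(d).digest() for n, d in store.items()}


def handle_write(block, data):
    """Inspect an incoming write request, then apply it."""
    if block in WATCHED_BLOCKS and \
            hashlib.sha256(data).digest() != baseline[block]:
        alerts.append(block)  # flag: a protected block is being altered
    store[block] = data


handle_write(7, b"normal update")      # unwatched block: no alert
handle_write(0, b"malicious payload")  # watched block modified: alert
print(alerts)  # [0]
```

A real deployment would act on the alert (block the write, snapshot, notify) rather than merely record it.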


International Conference on Supercomputing | 2009

Evaluating high performance communication: a power perspective

Jiuxing Liu; Dan E. Poff; Bulent Abali

Recently, high-speed interconnects capable of remote direct memory access (RDMA), such as InfiniBand and iWARP, have gained considerable popularity due to their superb latency and bandwidth. Most existing studies of RDMA have focused mainly on its performance. However, as power management has become essential for high-end systems such as enterprise servers and high performance computing nodes, which are often equipped with RDMA-capable network adapters, it is important to take a fresh look at the benefits of RDMA from the power perspective. In this paper, we provide a detailed empirical study of the power savings of RDMA compared with traditional communication protocols such as TCP/IP. We used two popular RDMA adapters in our evaluations: Mellanox ConnectX InfiniBand HCAs and Chelsio T3 10GE RNICs. To isolate the impact of communication on power consumption, our evaluation focused on micro-benchmarks that perform different communication patterns. We have also studied several factors that may affect the performance and power consumption of RDMA adapters, such as the use of polling versus blocking, CPU speeds, and extra memory copies. We show that using high-speed RDMA adapters can result in a significant amount of power consumption during communication (in one test, system power increased by as much as 50 watts, over 30% of the idle power). We found that RDMA generally has better power efficiency than TCP/IP, especially during communication-intensive phases, for example when large messages are transferred. The power savings of RDMA come from minimizing the interactions between the network adapters and other system components such as the CPUs and the memory: although nearly the same amount of data moves through the network adapters for both RDMA and TCP/IP, RDMA requires far fewer CPU cycles for protocol processing and generates less memory bus traffic, both of which contribute to its power savings. Overall, our research demonstrated that RDMA not only provides high communication performance but also offers excellent power efficiency, making it a desirable choice in environments that have strict power/energy constraints and demand high communication performance.
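The notion of power efficiency used above can be made concrete with a back-of-the-envelope sketch. The numbers below are invented for illustration, not measurements from the paper; only the shape of the comparison (data moved per joule of communication-induced energy) reflects the text.

```python
# Back-of-the-envelope sketch: compare the power efficiency of two
# transports as bytes moved per joule of extra (communication-induced)
# energy. All figures are assumed, not taken from the paper.

def bytes_per_joule(bytes_moved, extra_watts, seconds):
    """Data moved per joule of energy drawn above idle power."""
    return bytes_moved / (extra_watts * seconds)


# Assumed figures for a large-message transfer phase moving 10 GB:
rdma = bytes_per_joule(10e9, extra_watts=40, seconds=8)    # RDMA-style
tcp = bytes_per_joule(10e9, extra_watts=50, seconds=12)    # TCP/IP-style
print(rdma > tcp)  # fewer CPU cycles and less bus traffic pay off
```

With these assumed figures, RDMA moves the same data in less time and with a smaller power increment, so its bytes-per-joule figure is higher.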


Workshop on Storage Security and Survivability | 2005

Storage-based file system integrity checker

Mohammad Banikazemi; Dan E. Poff; Bulent Abali

In this paper we present a storage-based intrusion detection system (IDS) that uses time- and space-efficient point-in-time copies and performs file system integrity checks to detect intrusions. The storage system software is enhanced to keep track of modified blocks so that the file system scan can be performed more efficiently. Furthermore, when an intrusion occurs, a recent undamaged copy of the storage is used to recover the compromised data.
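The dirty-block bookkeeping that makes the scan efficient can be sketched as below. This is an assumed illustration, not the paper's implementation: the block store, the hashes, and the function names are all hypothetical.

```python
# Sketch of the modified-block idea: the storage layer records which
# blocks changed since the last scan, so the integrity check rehashes
# only those blocks instead of the whole file system.
import hashlib

blocks = {n: b"clean-%d" % n for n in range(1000)}
known_good = {n: hashlib.sha256(d).digest() for n, d in blocks.items()}
dirty = set()  # blocks written since the last integrity scan


def block_write(n, data):
    """Write path: apply the write and remember the block as dirty."""
    blocks[n] = data
    dirty.add(n)


def integrity_scan():
    """Rehash only dirty blocks; return those that no longer match."""
    bad = [n for n in dirty
           if hashlib.sha256(blocks[n]).digest() != known_good[n]]
    dirty.clear()
    return bad


block_write(42, b"tampered")
print(integrity_scan())  # rehashes 1 block rather than 1000 -> [42]
```

On detection, the system described in the abstract would then restore the flagged blocks from a recent point-in-time copy.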


International Symposium on Performance Analysis of Systems and Software | 2010

Program behavior characterization in large memory systems

Parijat Dube; Michael Tsao; Dan E. Poff; Li Zhang; Alan Bivens

We introduce models that characterize large-cache performance in terms of various statistics related to the sojourn time of a line in the cache. These statistics themselves depend on cache configuration parameters, and we are currently working to isolate this dependency using LCS data and models. This will help us obtain an explicit relation between cache performance and its configuration parameters, which in turn will help identify an optimal set of configuration parameters during the early design phase of large memory systems.
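The central statistic, the sojourn time of a line, can be computed directly in a toy simulation. The sketch below assumes LRU replacement and a tiny reference trace purely for illustration; it is not the authors' model.

```python
# Illustrative sketch: measure the sojourn time of each cache line,
# i.e. how many references it survives between insertion and eviction,
# under an assumed LRU policy on a toy trace.
from collections import OrderedDict


def sojourn_times(trace, capacity):
    cache = OrderedDict()  # line -> reference index at insertion
    times = []
    for t, line in enumerate(trace):
        if line in cache:
            cache.move_to_end(line)  # LRU hit: refresh recency
        else:
            if len(cache) >= capacity:
                victim, t_in = cache.popitem(last=False)  # evict LRU
                times.append(t - t_in)  # sojourn of the evicted line
            cache[line] = t
    return times


trace = ["a", "b", "c", "a", "d", "e", "b"]
print(sojourn_times(trace, capacity=3))  # [3, 3, 6]
```

Statistics over this distribution (mean, tail) are the kind of quantities the abstract relates back to cache configuration parameters.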


Quantitative Evaluation of Systems | 2011

A Hybrid Approach for Large Cache Performance Studies

David Daly; Parijat Dube; Kaoutar El Maghraoui; Dan E. Poff; Li Zhang

Recent technology trends are leading to the possibility of computer systems having last-level caches significantly larger than those that exist today. Traditionally, cache effectiveness has been modeled through trace-driven simulation tools; however, these tools are not up to the task of modeling very large caches. Because of the limited length of available traces, the tools cannot capture behavior across long enough periods of time to adequately simulate a very large cache. We present mprofiler, a tool that characterizes the memory access pattern of workloads, and a novel hybrid modeling technique that models cache behavior across a much larger time scale than previously possible. Our methodology combines memory access patterns captured by different tools (e.g., mprofiler) at different time scales and develops analytical techniques that allow spanning the required time frame and predicting the performance of very large caches.
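The flavor of such a hybrid approach, fitting an analytical form to short-time-scale simulation data and extrapolating beyond what the trace can cover, can be sketched as below. The power-law form and every data point are assumptions for illustration; the paper's actual analytical technique may differ.

```python
# Hypothetical sketch of a hybrid modeling step: fit a power law
# miss_rate ~ c * size**(-a) to miss rates from small simulated caches,
# then extrapolate to a cache far larger than the trace could cover.
import math

# Assumed (cache size, miss rate) points from short trace-driven runs:
points = [(1, 0.20), (4, 0.10), (16, 0.05)]

# Least-squares fit in log-log space: log m = log c - a * log s
xs = [math.log(s) for s, _ in points]
ys = [math.log(m) for _, m in points]
n = len(points)
a = -(n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
c = math.exp((sum(ys) + a * sum(xs)) / n)


def predicted_miss_rate(size):
    """Extrapolate the fitted curve to an arbitrary cache size."""
    return c * size ** (-a)


print(predicted_miss_rate(256))  # a size no short trace could simulate
```

Because the assumed points lie exactly on a power law, the fit recovers it exactly; real simulation data would fit only approximately.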


Operating Systems Review | 2008

Flipstone: managing storage with fail-in-place and deferred maintenance service models

Mohammad Banikazemi; James Lee Hafner; Wendy Belluomini; KK Rao; Dan E. Poff; Bulent Abali

The cost of managing storage systems has become one of the significant expense items in data centers. In this paper, we discuss the design and implementation of Flipstone, a new storage system with reduced storage management cost. Flipstone provides fail-in-place and deferred maintenance by aggregating a large number of off-the-shelf, inexpensive storage systems. We show that Flipstone significantly improves the total cost of ownership of storage systems by reducing the number of service calls.


Archive | 2004

Performance of Memory Expansion Technology (MXT)

Dan E. Poff; Mohammad Banikazemi; Robert Saccone; Hubertus Franke; Bulent Abali; T. Basil Smith

A novel memory subsystem called Memory Expansion Technology (MXT) has been built for fast hardware compression of main memory contents. This allows a system with memory expansion to present a real memory larger than the physically available memory. This chapter provides an overview of the memory compression architecture, the OS support, and an analysis of the performance impact of memory compression while running multiple benchmarks. Results show that hardware compression of main memory carries a negligible penalty compared to uncompressed memory, and for memory-starved applications it increases performance significantly. We also show that an application's memory contents can usually be compressed by a factor of 2.


IBM Journal of Research and Development | 2001

Memory expansion technology (MXT): software support and performance

Bulent Abali; Hubertus Franke; Dan E. Poff; Robert Saccone; Charles O. Schulz; Lorraine M. Herger; T. B. Smith
