Publication


Featured research published by T. Basil Smith.


Architectural Support for Programming Languages and Operating Systems | 2000

MemorIES3: a programmable, real-time hardware emulation tool for multiprocessor server design

Ashwini K. Nanda; Kwok-Ken Mak; Krishnan Sugarvanam; Ramendra K. Sahoo; Vijayaraghavan Soundararajan; T. Basil Smith

Modern system design often requires multiple levels of simulation for design validation and performance debugging. However, while machines have gotten faster and simulators have become more detailed, simulation speeds have not tracked machine speeds. As a result, it is difficult to simulate realistic problem sizes and hardware configurations for a target machine. Instead, researchers have focused on developing scaling methodologies and running smaller problem sizes and configurations that attempt to represent the behavior of the real problem. Given the increasing size of problems today, it is unclear whether such an approach yields accurate results. Moreover, although commercial workloads are prevalent and important in today's marketplace, many simulation tools are unable to adequately profile such applications, let alone at realistic sizes.

In this paper we present a hardware-based emulation tool that can be used to aid memory system designers. Our focus is on the memory system because the ever-widening gap between processor and memory speeds means that optimizing the memory subsystem is critical for performance. We present the design of the Memory Instrumentation and Emulation System (MemorIES). MemorIES is a programmable tool designed using FPGAs and SDRAMs. It plugs into an SMP bus to perform on-line emulation of several cache configurations, structures and protocols while the system is running real-life workloads in real time, without any slowdown in application execution speed. We demonstrate its usefulness in several case studies and find several important results. First, using traces to perform system evaluation can lead to incorrect results (off by 100% or more in some cases) if the trace size is not sufficiently large. Second, MemorIES is able to detect performance problems by profiling miss behavior over the entire course of a run, rather than relying on a small interval of time. Finally, we observe that previous studies of SPLASH-2 applications using scaled application sizes can result in optimistic miss rates relative to real sizes on real machines, providing potentially misleading data when used for design evaluation.
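MemorIES performs this emulation in hardware at full bus speed, but the trace-length pitfall the abstract describes can be illustrated with a minimal software sketch: a fully associative LRU cache simulator run over a short trace prefix versus the whole run. The trace, cache capacity, and working-set drift below are hypothetical choices for illustration, not parameters from the paper.

```python
from collections import OrderedDict
import random

def lru_miss_rate(trace, capacity):
    """Simulate a fully associative LRU cache; return the miss rate."""
    cache, misses = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)        # hit: refresh recency
        else:
            misses += 1
            cache[addr] = None
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return misses / len(trace)

random.seed(0)
# Synthetic trace whose working set drifts over time, so behavior
# early in the run does not represent the run as a whole.
trace = [random.randrange(t // 100, t // 100 + 500) for t in range(100_000)]

short = lru_miss_rate(trace[:2_000], capacity=256)  # small trace window
full = lru_miss_rate(trace, capacity=256)           # entire run
print(f"short-trace miss rate: {short:.3f}")
print(f"full-trace  miss rate: {full:.3f}")
```

The two printed rates differ, which is the abstract's first finding in miniature: a miss rate measured over a short window need not represent the whole run.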


IEEE International Symposium on Fault-Tolerant Computing | 1993

A case for fault-tolerant memory for transaction processing

Anupam Bhide; Daniel M. Dias; Nagui Halim; T. Basil Smith; Francis Nicholas Parr

For database transaction processing, the authors compare the relative price-performance of storing data in volatile memory (V-mem), fault-tolerant non-volatile memory (FT-mem), and disk. First, they extend Gray's five-minute rule, which compares the relative cost of storing read-only data in volatile memory as against disk, to read-write data. Second, they show that because of additional write overhead, FT-mem has a higher advantage over V-mem than previously thought. Previous studies comparing volatile and non-volatile memories have focused on the response-time advantages of putting log data in non-volatile memory. The authors show that there is a direct reduction in disk I/O, which leads to a much larger savings in cost using an FT-mem buffer. Third, the five-minute rule is a simple model that assumes knowledge of inter-access times for data items. The authors present a more realistic model that assumes an LRU buffer management policy. They combine this with the recovery time constraint and study the resulting price-performance. It is shown that the use of an FT-mem buffer can lead to a significant benefit in terms of overall price-performance.
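The five-minute rule trades the cost of caching a page in memory against the cost of the disk arm capacity needed to re-fetch it. A minimal sketch of the break-even arithmetic follows; the prices and rates are illustrative late-1980s-style numbers chosen here for the example, not figures from this paper.

```python
def break_even_interval_s(disk_price, disk_iops, mem_price_per_mb, page_kb):
    """Break-even re-access interval: keep a page in memory if it is
    re-accessed more often than once per this many seconds."""
    cost_per_iops = disk_price / disk_iops             # $ per (access/second)
    cost_per_page = mem_price_per_mb * page_kb / 1024  # $ to cache one page
    return cost_per_iops / cost_per_page               # seconds

# Illustrative numbers: $15,000 disk at 15 accesses/sec, $5,000/MB RAM, 1 KB pages.
interval = break_even_interval_s(disk_price=15_000, disk_iops=15,
                                 mem_price_per_mb=5_000, page_kb=1)
print(f"break-even interval: {interval:.0f} s (~{interval / 60:.1f} min)")
```

Pages re-accessed more frequently than the printed interval are cheaper to keep in memory; the paper's contribution is extending this comparison to read-write data and to a fault-tolerant memory tier.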


Archive | 2004

Performance of Memory Expansion Technology (MXT)

Dan E. Poff; Mohammad Banikazemi; Robert Saccone; Hubertus Franke; Bulent Abali; T. Basil Smith

A novel memory subsystem called Memory Expansion Technology (MXT) has been built for fast hardware compression of main memory contents. This allows a system with memory expansion to present a real memory larger than the physically available memory. This chapter provides an overview of the memory compression architecture, the OS support, and an analysis of the performance impact of memory compression while running multiple benchmarks. Results show that hardware compression of main memory has a negligible penalty compared to uncompressed memory, and for memory-starved applications it increases performance significantly. We also show that an application's memory contents can usually be compressed by a factor of 2.
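MXT compresses memory in hardware; as a software stand-in, the factor-of-2 claim can be sanity-checked by running a general-purpose compressor over page-sized chunks of data. The use of zlib (DEFLATE), the 4 KB page size, and the sample page contents below are assumptions for illustration, not MXT's actual compression scheme.

```python
import zlib

PAGE_SIZE = 4096  # bytes; a common page granularity, assumed here

def compression_ratio(page: bytes) -> float:
    """Ratio of original to compressed size for one page of data."""
    return len(page) / len(zlib.compress(page))

# Hypothetical page contents: zero-filled and repetitive text-like data.
zero_page = bytes(PAGE_SIZE)
text_page = (b"struct task { int pid; char name[16]; };\n" * 100)[:PAGE_SIZE]

for name, page in [("zero", zero_page), ("text", text_page)]:
    print(f"{name} page compresses {compression_ratio(page):.1f}x")
```

Structured pages like these compress far better than 2x, while already-compressed or random data compresses hardly at all; the factor of 2 in the chapter is an observed average across real application memory.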


Archive | 2001

System and method for managing memory compression transparent to an operating system

Lorraine M. Herger; Mary J. McHugh; Dan E. Poff; Robert Saccone; Charles O. Schulz; T. Basil Smith


Archive | 2000

Method for operating system support for memory compression

Hubertus Franke; Bulent Abali; Lorraine M. Herger; Dan E. Poff; Robert Saccone; T. Basil Smith


Archive | 2000

Dynamic allocation of physical memory space

Peter A. Franaszek; Michel H. T. Hack; Charles O. Schulz; T. Basil Smith; R. Brett Tremaine


Archive | 2003

Very high speed page operations in indirect accessed memory systems

Peter A. Franaszek; Charles O. Schulz; T. Basil Smith; Robert B. Tremaine; Michael E. Wazlowski


Archive | 2000

Method and apparatus for high integrity hardware memory compression

David Har; Kwok-Ken Mak; Charles O. Schulz; T. Basil Smith; R. Brett Tremaine


Archive | 2000

Compressor system memory organization and method for low latency access to uncompressed memory regions

Charles O. Schulz; T. Basil Smith; Robert B. Tremaine; Michael E. Wazlowski


Archive | 1992

Nested frame communication protocol

T. Basil Smith
