
Publication


Featured research published by Peter A. Franaszek.


IBM Journal of Research and Development | 1983

A DC-balanced, partitioned-block, 8B/10B transmission code

Albert X. Widmer; Peter A. Franaszek

This paper describes a byte-oriented binary transmission code and its implementation. This code is particularly well suited for high-speed local area networks and similar data links, where the information format consists of packets, variable in length, from about a dozen up to several hundred 8-bit bytes. The proposed transmission code translates each source byte into a constrained 10-bit binary sequence which has excellent performance parameters near the theoretical limits for 8B/10B codes. The maximum run length is 5 and the maximum digital sum variation is 6. A single error in the encoded bits can, at most, generate an error burst of length 5 in the decoded domain. A very simple implementation of the code has been accomplished by partitioning the coder into 5B/6B and 3B/4B subordinate coders.
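
As a rough illustration of the partitioned structure, the sketch below encodes a byte by splitting it into a 5-bit and a 3-bit field and looking each up in a small disparity-indexed table. The tables cover only a few values and the selection rule is simplified, so this shows the shape of the scheme, not the paper's full code.

```python
# Illustrative sketch of a partitioned 8B/10B encoder. The tables below
# contain only a few example entries (NOT the full Widmer-Franaszek
# tables): each field value maps to a pair of complementary codewords,
# and the encoder picks the one that keeps the running disparity bounded.

# field value -> (codeword for running disparity -1, codeword for +1)
FIVE_SIX = {                      # abbreviated 5B/6B table
    0b00000: ("100111", "011000"),
    0b00001: ("011101", "100010"),
    0b00010: ("101101", "010010"),
}
THREE_FOUR = {                    # abbreviated 3B/4B table
    0b000: ("1011", "0100"),
    0b001: ("1001", "1001"),      # disparity-neutral codeword
    0b010: ("0101", "0101"),      # disparity-neutral codeword
}

def disparity(bits: str) -> int:
    """Number of ones minus number of zeros in a codeword."""
    return 2 * bits.count("1") - len(bits)

def encode_byte(byte: int, rd: int):
    """Encode one byte given running disparity rd (-1 or +1).
    Returns the 10-bit codeword and the updated running disparity."""
    hi, lo = byte >> 3, byte & 0b111          # 5-bit and 3-bit fields
    out = ""
    for table, field in ((FIVE_SIX, hi), (THREE_FOUR, lo)):
        neg, pos = table[field]
        word = neg if rd < 0 else pos         # choose the polarity that
        out += word                           # balances the running sum
        d = disparity(word)
        if d != 0:
            rd = 1 if d > 0 else -1
    return out, rd

code, rd = encode_byte(0b00001_001, rd=-1)
print(code, rd)   # ten encoded bits and the new running disparity
```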


ACM Transactions on Database Systems | 1985

Limitations of concurrency in transaction processing

Peter A. Franaszek; John T. Robinson

Given the pairwise probability of conflict p among transactions in a transaction processing system, together with the total number of concurrent transactions n, the effective level of concurrency E(n, p) is defined as the expected number of the n transactions that can run concurrently and actually do useful work. Using a random graph model of concurrency, we show for three general classes of concurrency control methods, examples of which are (1) standard locking, (2) strict priority scheduling, and (3) optimistic methods, that (1) E(n, p) ⩽ n(1 − p/2)^(n−1), (2) E(n, p) ⩽ (1 − (1 − p)^n)/p, and (3) 1 + ((1 − p)/p)ln(p(n − 1) + 1) ⩽ E(n, p) ⩽ 1 + (1/p)ln(p(n − 1) + 1). Thus, for fixed p, as n → ∞, (1) E → 0 for standard locking methods, (2) E ⩽ 1/p for strict priority scheduling methods, and (3) E → ∞ for optimistic methods. Also found are bounds on E in the case where conflicts are analyzed so as to maximize E. The predictions of the random graph model are confirmed by simulations of an abstract transaction processing system. In practice, though, there is a price to pay for the increased effective level of concurrency of methods (2) and (3): using these methods there is more wasted work (i.e., more steps executed by transactions that are later aborted). In response to this problem, three new concurrency control methods suggested by the random graph model analysis are developed. Two of these, called (a) running priority and (b) older or running priority, are shown by the simulation results to perform better than the previously known methods (1)-(3) for relatively large n or large p, in terms of achieving a high effective level of concurrency at a comparatively small cost in wasted work.
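
The bounds above can be evaluated directly. The snippet below simply plugs numbers into the three formulas as stated (it is not the paper's random-graph derivation or simulation) and makes the contrasting asymptotics visible: the locking bound collapses, strict priority saturates near 1/p, and the optimistic bounds keep growing.

```python
# Numeric evaluation of the three concurrency bounds quoted above.
import math

def bound_locking(n, p):          # (1) standard locking
    return n * (1 - p / 2) ** (n - 1)

def bound_priority(n, p):         # (2) strict priority scheduling
    return (1 - (1 - p) ** n) / p

def bounds_optimistic(n, p):      # (3) optimistic: (lower, upper)
    lower = 1 + ((1 - p) / p) * math.log(p * (n - 1) + 1)
    upper = 1 + (1 / p) * math.log(p * (n - 1) + 1)
    return lower, upper

p = 0.05
for n in (10, 100, 1000):
    lo, hi = bounds_optimistic(n, p)
    print(f"n={n:5d}  locking<={bound_locking(n, p):8.2f}  "
          f"priority<={bound_priority(n, p):6.2f}  "
          f"optimistic in [{lo:6.2f}, {hi:6.2f}]")
```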


ACM Transactions on Database Systems | 1992

Concurrency control for high contention environments

Peter A. Franaszek; John T. Robinson; Alexander Thomasian

Future transaction processing systems may have substantially higher levels of concurrency for reasons that include: (1) increasing disparity between processor speeds and data access latencies, (2) large numbers of processors, and (3) distributed databases. Another influence is the trend toward longer or more complex transactions. A possible consequence is substantially more data contention, which could limit total achievable throughput. In particular, it is known that the usual locking method of concurrency control is not well suited to environments where data contention is a significant factor. Here we consider a number of concurrency control concepts and transaction scheduling techniques that are applicable to high contention environments, and that do not rely on database semantics to reduce contention. These include access invariance and its application to prefetching of data, approximations to essential blocking such as wait depth limited scheduling, and phase dependent control. The performance of various concurrency control methods based on these concepts is studied using detailed simulation models. The results indicate that the new techniques can offer substantial benefits for systems with high levels of data contention.
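
As a toy illustration of wait-depth-limited scheduling, one of the techniques named above, the sketch below restarts a transaction rather than let a waits-for chain grow deeper than a fixed limit. This is a simplified reading of the idea, not the paper's exact policy.

```python
# Toy wait-depth-limited (WDL) scheduling decision. State and names here
# are illustrative; a real lock manager tracks far more than this.

wait_for = {}   # txn -> txn it is currently blocked on

def wait_depth(txn) -> int:
    """Length of the waits-for chain starting at txn."""
    depth = 0
    while txn in wait_for:
        txn = wait_for[txn]
        depth += 1
    return depth

def request_conflicting_lock(requester, holder, limit=1):
    """Return 'wait' if blocking keeps the chain within the depth
    limit, otherwise 'restart' to cap contention instead of queueing."""
    if wait_depth(holder) + 1 > limit:
        return "restart"
    wait_for[requester] = holder
    return "wait"

print(request_conflicting_lock("T1", "T2"))   # wait: chain depth 1
print(request_conflicting_lock("T3", "T1"))   # restart: depth would be 2
```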


Data Compression Conference | 1996

Parallel compression with cooperative dictionary construction

Peter A. Franaszek; John T. Robinson; Joy A. Thomas

It is often desirable to compress or decompress relatively small blocks of data at high bandwidth and low latency (for example, for data fetches across a high speed network). Sequential compression may not satisfy the speed requirement, while simply splitting the block into smaller subblocks for parallel compression yields poor compression performance due to small dictionary sizes. We consider an intermediate approach, where multiple compressors jointly construct a dictionary. The result is parallel speedup, with compression performance similar to the sequential case.
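
The following is a toy, single-threaded emulation of the cooperative-dictionary idea: the block is split into subblocks, one per (emulated) compressor, but all compressors insert the substrings they scan into a single shared dictionary, so each sees far more context than its own small subblock. This illustrates the concept only; it is not the paper's algorithm or encoding format.

```python
# Round-robin emulation of parallel compressors building one shared
# dictionary. Tokens are ('lit', bytes) or ('copy', position, length).
MIN_MATCH = 4

def parallel_compress(block: bytes, n_compressors: int = 4):
    size = (len(block) + n_compressors - 1) // n_compressors
    starts = [i * size for i in range(n_compressors)]
    shared = {}                       # substring -> first position seen
    out = [[] for _ in range(n_compressors)]
    pos = list(starts)
    # One step per compressor per round, so the dictionary is built
    # jointly rather than per-subblock.
    while any(pos[c] < min(starts[c] + size, len(block))
              for c in range(n_compressors)):
        for c in range(n_compressors):
            i, end = pos[c], min(starts[c] + size, len(block))
            if i >= end:
                continue
            key = block[i:i + MIN_MATCH]
            if len(key) == MIN_MATCH and key in shared:
                out[c].append(("copy", shared[key], MIN_MATCH))
            else:
                out[c].append(("lit", block[i:i + MIN_MATCH]))
            shared.setdefault(key, i)
            pos[c] = i + MIN_MATCH
    return out

# Every compressor after the first immediately finds matches in the
# shared dictionary, which separate per-subblock dictionaries would miss.
print(parallel_compress(b"abcdabcdabcdabcdabcdabcdabcdabcd"))
```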


Journal of the ACM | 1974

Some Distribution-Free Aspects of Paging Algorithm Performance

Peter A. Franaszek; T. J. Wagner

The topic of this paper is a probabilistic analysis of demand paging algorithms for storage hierarchies. Two aspects of algorithm performance are studied under the assumption that the sequence of page requests is statistically independent: the page fault probability for a fixed memory size and the variation of performance with memory size. Performance bounds are obtained which are independent of the page request probabilities. It is shown that simple algorithms exist which yield fault probabilities close to optimal with only a modest increase in memory.
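
In the spirit of the model above, the small simulation below draws page requests independently from a fixed distribution (the page count, memory size, and distribution are arbitrary choices) and compares the fault rate of LRU against the policy that is optimal for independent requests: keeping the most probable pages resident.

```python
# Independent-reference-model paging simulation (illustrative parameters).
import random

random.seed(1)
N_PAGES, MEM, N_REQ = 50, 10, 200_000
weights = [1.0 / (i + 1) for i in range(N_PAGES)]       # Zipf-like
requests = random.choices(range(N_PAGES), weights, k=N_REQ)

# Optimal for independent requests: pin the MEM most probable pages.
resident = set(range(MEM))
opt_faults = sum(r not in resident for r in requests)

lru, lru_faults = [], 0
for r in requests:
    if r in lru:
        lru.remove(r)
    else:
        lru_faults += 1
        if len(lru) == MEM:
            lru.pop(0)                                  # evict least recent
    lru.append(r)                                       # most recent at end

print(f"static-optimal fault rate: {opt_faults / N_REQ:.3f}")
print(f"LRU fault rate:            {lru_faults / N_REQ:.3f}")
```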


IBM Journal of Research and Development | 1982

Construction of bounded delay codes for discrete noiseless channels

Peter A. Franaszek

Algorithms are described for constructing synchronous (fixed rate) codes for discrete noiseless channels where the constraints can be modeled by finite state machines. The methods yield two classes of codes with minimum delay or look-ahead.
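
As a small companion example (standard theory, not the paper's construction algorithm), the snippet below computes the Shannon capacity of a constrained channel as log2 of the largest eigenvalue of its state-transition adjacency matrix, which upper-bounds the rate of any synchronous code for that constraint. The matrix encodes a (d=1, k=3) run-length constraint: between 1 and 3 zeros between successive ones.

```python
# Capacity of a finite-state-constrained channel via the adjacency
# matrix of its state machine. States count the zeros emitted since
# the last one; A[i][j] counts symbols taking state i to state j.
import math
import numpy as np

A = np.array([
    [0, 1, 0, 0],   # just emitted a 1: must emit a 0
    [1, 0, 1, 0],   # one 0 so far: emit 1 (back to state 0) or another 0
    [1, 0, 0, 1],   # two 0s: emit 1 or a third 0
    [1, 0, 0, 0],   # three 0s: must emit a 1
])
lam = max(abs(np.linalg.eigvals(A)))
print(f"capacity = log2(lambda_max) = {math.log2(lam):.4f} bits/symbol")
# ~0.5515 here, so a fixed-rate 1/2 code (one data bit per two channel
# bits) is feasible for this constraint.
```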


IBM Journal of Research and Development | 2001

Algorithms and data structures for compressed-memory machines

Peter A. Franaszek; Philip Heidelberger; Dan E. Poff; John T. Robinson

An overview of a set of algorithms and data structures developed for compressed-memory machines is given. These include 1) very fast compression and decompression algorithms, for relatively small fixed-size lines, that are suitable for hardware implementation; 2) methods for storing variable-size compressed lines in main memory that minimize overheads due to directory size and storage fragmentation, but that are simple enough for implementation as part of a system memory controller; 3) a number of operating system modifications required to ensure that a compressed-memory machine never runs out of memory as the compression ratio changes dynamically. This research was done to explore the feasibility of computer architectures in which data are decompressed/compressed on cache misses/writebacks. The results led to and were implemented in IBM Memory Expansion Technology (MXT), which for typical systems yields a factor of 2 expansion in effective memory size with generally minimal effect on performance.
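
One of the storage problems listed above, holding variable-size compressed lines in main memory, can be sketched as chaining fixed-size sectors off a small directory entry. The sector size, entry layout, and names below are illustrative assumptions, not the MXT design.

```python
# Toy allocator for variable-size compressed lines in fixed-size sectors.
SECTOR = 256                      # bytes per physical sector (assumed)

free_sectors = list(range(1024))  # free list of physical sector numbers
directory = {}                    # line address -> (length, [sectors])

def store_line(addr: int, compressed: bytes):
    """Allocate just enough sectors for the compressed line."""
    n = max(1, -(-len(compressed) // SECTOR))   # ceiling division
    sectors = [free_sectors.pop() for _ in range(n)]
    directory[addr] = (len(compressed), sectors)

def free_line(addr: int):
    """Return a line's sectors to the free list as compression changes."""
    length, sectors = directory.pop(addr)
    free_sectors.extend(sectors)

store_line(0x1000, b"x" * 700)    # a line that compressed to 700 bytes
print(directory[0x1000])          # (700, [three sector numbers])
```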


International Conference on Data Engineering | 1990

Access invariance and its use in high contention environments

Peter A. Franaszek; John T. Robinson; Alexander Thomasian

Various factors suggest that data contention may be of increasing significance in transaction processing systems. One approach to this problem is to run transactions twice, the first time without making any changes to the database. Benefits may result either from data prefetching during the first execution or from determining the locks required for purposes of scheduling. Consideration is given to various concurrency control methods based on this notion, and properties required for these methods to be useful are formalized. Performance results based on detailed simulation models suggest that such policies offer potential benefits for some system configurations.
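
A schematic, runnable sketch of the run-twice idea: pass 1 executes the transaction without installing updates and records what it touches; if the transaction is access invariant, that set is exactly the lock set needed, so pass 2 can lock everything up front (in a fixed order, avoiding deadlock) and execute for real. All names here are illustrative, not the paper's notation.

```python
# Run-twice execution with access tracking on the first pass.
db = {"a": 5, "b": 7, "c": 0}

def transfer(read, write):
    """A toy transaction (c = a + b) expressed via read/write callbacks."""
    write("c", read("a") + read("b"))

def execute_twice(txn):
    accessed, scratch = set(), {}
    # Pass 1: reads hit the database, writes go to a discarded buffer.
    txn(lambda k: (accessed.add(k), db[k])[1],
        lambda k, v: (accessed.add(k), scratch.update({k: v})))
    lock_set = sorted(accessed)          # fixed order -> no deadlock
    print("locking:", lock_set)          # acquire_all(lock_set) here
    # Pass 2: re-execute with real writes, locks held throughout.
    txn(lambda k: db[k], lambda k, v: db.update({k: v}))
    # release_all(lock_set) here

execute_twice(transfer)
print(db)   # {'a': 5, 'b': 7, 'c': 12}
```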


IEEE Transactions on Computers | 2001

Cache-memory interfaces in compressed memory systems

Caroline D. Benveniste; Peter A. Franaszek; John T. Robinson

We consider a number of cache/memory hierarchy design issues in systems with compressed random access memories (C-RAMs), in which compression and decompression occur automatically to and from main memory. Using a C-RAM as main memory, the bulk of main memory contents are stored in a compressed format and dynamically decompressed to handle cache misses at the next higher level of memory. This is the general approach adopted in IBM's Memory Expansion Technology (MXT). The design of the main memory directory structures and storage allocation methods in such systems is described elsewhere; here, we focus on issues related to cache-memory interfaces. In particular, if the cache line size (of the cache or caches to which main memory data is transferred) differs from the size of the unit of compression in main memory, bandwidth and latency problems can occur. Another issue is that of guaranteed forward progress, that is, ensuring that modified lines can be written to the compressed main memory so that the system can continue operation even if overall compression deteriorates. We study several approaches for solving these problems, using trace-driven analysis to evaluate alternatives.
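
The line-size mismatch can be made concrete with a small sketch: if the unit of compression (say 1 KB, assumed here) is larger than the cache line (say 256 B), every miss forces the whole unit to be decompressed even though only a quarter of it was requested, costing latency and bandwidth unless adjacent lines are wanted soon.

```python
# Fetching one cache line out of a larger compression unit. Sizes and
# the toy decompressor are illustrative assumptions.
COMP_UNIT = 1024          # bytes decompressed per access
CACHE_LINE = 256          # bytes actually requested per miss

def fetch_line(line_addr: int, decompress_unit):
    """Decompress the enclosing unit, then slice out one cache line."""
    unit_base = line_addr - (line_addr % COMP_UNIT)
    unit = decompress_unit(unit_base)            # cost: whole unit
    off = line_addr - unit_base
    return unit[off:off + CACHE_LINE]            # used: one line

# Toy decompressor: pretend memory holds its own addresses as bytes.
line = fetch_line(0x1300,
                  lambda base: bytes((base + i) & 0xFF
                                     for i in range(COMP_UNIT)))
print(len(line), "bytes returned;", COMP_UNIT, "bytes decompressed")
```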


IBM Journal of Research and Development | 2001

On internal organization in compressed random-access memories

Peter A. Franaszek; John T. Robinson

The design of a compressed random-access memory (C-RAM) is considered. Using a C-RAM at the lowest level of a system's main-memory hierarchy, cache lines are stored in a compressed format and dynamically decompressed to handle cache misses at the next higher level of memory. The requirement that compression/decompression, address translation, and memory management be performed by hardware has implications for the directory structure and storage allocation designs used within the C-RAM. Various new approaches, summarized here, are necessary in these areas in order to have methods that are amenable to hardware implementation. Furthermore, there are numerous design issues for the directory and storage management architectures. We consider a number of these issues, and present the results of evaluations of various approaches using analytic methods and simulations. This research was done as part of a project to explore the feasibility of compressed-memory systems; it forms the basis for the memory organization of IBM Memory Expansion Technology (MXT).
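
One directory-design consideration can be sketched as follows: give each line's directory entry a small fixed number of pointer slots, and let a line that compresses well enough be stored entirely inside its own entry, avoiding any sector access. The field sizes and threshold below are illustrative assumptions, not the actual MXT layout.

```python
# Toy C-RAM directory entry: inline storage for highly compressible
# lines, sector pointers otherwise.
SECTOR = 256
SLOTS = 4                         # pointer slots per directory entry

def make_entry(compressed: bytes, alloc_sector):
    inline_capacity = SLOTS * 4   # reuse the 4 x 4-byte pointer field
    if len(compressed) <= inline_capacity:
        return {"fmt": "inline", "data": compressed}
    n = -(-len(compressed) // SECTOR)            # ceiling division
    if n > SLOTS:
        raise ValueError("line too large for this toy entry format")
    return {"fmt": "sectored", "len": len(compressed),
            "ptrs": [alloc_sector() for _ in range(n)]}

next_free = iter(range(10_000))
print(make_entry(b"\x00" * 12, lambda: next(next_free)))   # fits inline
print(make_entry(b"\x00" * 600, lambda: next(next_free)))  # 3 sectors
```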
