Elliott I. Organick
University of Utah
Publication
Featured research published by Elliott I. Organick.
symposium on operating systems principles | 1971
Richard J. Feiertag; Elliott I. Organick
An I/O system has been implemented in the Multics system that facilitates dynamic switching of I/O devices. This switching is accomplished by providing a general interface for all I/O devices that allows all equivalent operations on different devices to be expressed in the same way. In addition, particular devices are referenced by symbolic names, and the binding of names to devices can be dynamically modified. Available I/O operations range from a set of basic I/O calls that require almost no knowledge of the I/O System or the I/O device being used, to fully general calls that permit one to take full advantage of all features of an I/O device but require considerable knowledge of the I/O System and the device. The I/O System is described, and some popular applications of it, illustrating these features, are presented.
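The abstract's two key ideas, a uniform interface for all devices and dynamically rebindable symbolic names, can be sketched as follows. This is a hypothetical illustration, not the Multics code; the class and method names are invented for the example.

```python
# Hypothetical sketch: a uniform device interface plus symbolic stream
# names whose device bindings can be switched at run time.

class Device:
    def write(self, data):
        raise NotImplementedError

class Printer(Device):
    def __init__(self):
        self.output = []
    def write(self, data):
        self.output.append(data)

class TapeDrive(Device):
    def __init__(self):
        self.blocks = []
    def write(self, data):
        self.blocks.append(data)

class IOSystem:
    """Binds symbolic names to devices; bindings may be modified dynamically."""
    def __init__(self):
        self._bindings = {}
    def attach(self, name, device):
        self._bindings[name] = device      # dynamic (re)binding
    def write(self, name, data):
        self._bindings[name].write(data)   # same call regardless of device

io = IOSystem()
printer, tape = Printer(), TapeDrive()
io.attach("output", printer)
io.write("output", "line 1")
io.attach("output", tape)    # switch devices without changing the caller
io.write("output", "line 2")
```

Because callers address the symbolic name "output" rather than a device, the switch between printer and tape requires no change to the writing code, which is the point of the Multics design.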
IEEE Software | 1984
Elliott I. Organick; Tony M. Carter; Mike P. Maloney; Alan L. Davis; Alan B. Hayes; Dan Klass; Gary Lindstrom; Brent E. Nelson; Kent F. Smith
Although the functional behavior of this IC has been tested in-system, an evaluation of circuit performance should not be long in coming.
international symposium on computer architecture | 1982
M. Castan; Elliott I. Organick
To eliminate the conceptual distance between the hardware instruction set and the user interface, some architects advocate High Level Language (HLL) machines. To obtain simple, fast, and cheap machines, some architects advocate Reduced Instruction Set Computer (RISC) machines. This paper reconciles both views and presents an architecture that has both an HLL user interface and RISC hardware. Each instance of this architecture is a module of an HLL multiprocessor system. Functional programming languages offer a bridge between mathematical models of computation and multiprocessor system environments. We choose the language AFPL (A Functional Programming Language) as the HLL user interface. AFPL's direct execution model, based on a tree-structured internal representation, takes advantage of the parallelism inherent in programs by decomposing them on the fly into tasks which can be performed concurrently.
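The execution model described here, a tree-structured program decomposed on the fly into concurrent tasks, can be illustrated with a small sketch. AFPL itself is not shown; the tuple encoding and operator set below are invented for the example, assuming only that independent subtrees of a functional program may be evaluated in parallel.

```python
# Hypothetical sketch: evaluating a tree-structured functional program
# by spawning a task per independent subtree.
from concurrent.futures import ThreadPoolExecutor

def evaluate(node, pool):
    op, *children = node
    if op == "const":
        return children[0]
    # Independent subtrees become separate, concurrently runnable tasks.
    futures = [pool.submit(evaluate, child, pool) for child in children]
    args = [f.result() for f in futures]
    return {"add": sum, "mul": lambda xs: xs[0] * xs[1]}[op](args)

# (2 * 3) + 4, encoded as a tree
tree = ("add", ("mul", ("const", 2), ("const", 3)), ("const", 4))
with ThreadPoolExecutor() as pool:
    result = evaluate(tree, pool)
```

Since a pure functional program has no side effects, the two subtrees of "add" can be evaluated in either order or simultaneously without changing the result, which is the parallelism the paper's execution model exploits.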
Euromicro Newsletter | 1979
Elliott I. Organick
Abstract In spite of major technological developments over a thirty-year span, the architecture of computer systems has remained surprisingly stable under continued pressures for change. A number of limited architectural responses to such pressures (system improvements) are reviewed in support of this thesis. Recently, however, a maturing appreciation of the potential benefits of functional programming languages as a base for computer applications, juxtaposed with the VLSI opportunity, has resulted in several proposals for innovative architectures. In principle, these proposals offer attractive alternatives to conventional systems, and they may well become practical alternatives as well. The relative advantages of functional languages and functional machine hosts are reviewed, and two of the more interesting proposals are singled out for extended discussion and comparison.
Computer System Organization: The B5700/B6700 Series | 1973
Elliott I. Organick
Systems are often criticized or appraised from three viewpoints: (1) that of the languages available to users and the cost effectiveness of user programs, (2) that of the operating system and what it lets the user do or not do, and (3) that of the hardware. The highly structured B6700 differs from typical machines in both instruction fetch and data accessing costs. On the plus side, fewer memory cycles are typically required for fetching B6700 instructions, and on the minus side, more memory cycles are required for accessing data. Variable-length B6700 instructions frequently do not have any address fields at all as the operand locations are implied to be in some specific spots at or near the top of the stack. This means that B6700 instructions are on the average shorter than their counterparts in von Neumann-type machines. The shorter B6700 instructions, called “syllables,” are packed several per word, hence each memory cycle taken for an instruction fetch usually retrieves more instructions than an instruction fetch on a competitive conventional machine.
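The abstract's claim about address-free "syllables" can be made concrete with a toy stack-machine interpreter. This is an illustrative sketch, not the B6700 instruction set; the opcode names are invented.

```python
# Hypothetical sketch: a zero-address stack machine in the B6700 style.
# Arithmetic syllables carry no address fields because their operands
# are implicitly at the top of the stack.

def run(syllables):
    stack = []
    for syl in syllables:
        op = syl[0]
        if op == "LIT":          # push a literal value
            stack.append(syl[1])
        elif op == "ADD":        # no address field: pop two, push sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":        # likewise operand-less
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# (2 + 3) * 4: only the LIT syllables carry any operand at all
program = [("LIT", 2), ("LIT", 3), ("ADD",), ("LIT", 4), ("MUL",)]
```

Because ADD and MUL name no operands, such syllables can be encoded in a few bits and packed several per word, which is why a single instruction-fetch cycle on this style of machine retrieves more instructions than on a conventional addressed-operand machine.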
Computer System Organization: The B5700/B6700 Series | 1973
Elliott I. Organick
This chapter discusses storage control strategies. De-allocation is close-coupled with program-block or procedure-block exit in the current B6700 implementation. This implies a discipline for recovery of all no-longer-needed storage resources and explicit adjustment to prevent “dangling pointers.” A dangling pointer is any reference to an information object—such as an array, an I/O buffer area, or an activation record—that has been deleted from the address space of the computation. This type of resource management is done at block-exit time and is the responsibility of system routines, calls to which are generated by the compilers at block-exit and procedure-return points in the algorithm. If an activation record being de-allocated contains one or more interrupt queue entries, then one or more such interrupt queue chains would be severed, leaving dangling pointers—possibly in the same stack, when de-allocating a software interrupt queue entry, or in other stacks, when de-allocating an event interrupt queue entry.
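The discipline described here, where compiler-generated exit code both frees the block's storage and unlinks any references into it, can be sketched as follows. The classes and the interrupt-queue representation are invented for illustration; they stand in for the B6700 system routines, not reproduce them.

```python
# Hypothetical sketch of block-exit storage recovery: exiting a block
# frees its activation record and clears queue entries that reference
# it, so no dangling pointers survive.

class ActivationRecord:
    def __init__(self, name):
        self.name = name
        self.objects = {}        # arrays, buffers, etc. owned by the block

class Stack:
    def __init__(self):
        self.records = []
        self.interrupt_queue = []   # entries may point into records
    def enter_block(self, name):
        rec = ActivationRecord(name)
        self.records.append(rec)
        return rec
    def exit_block(self):
        rec = self.records.pop()
        # Stand-in for the system routine called at block exit: unlink
        # queue entries referencing this record rather than leave them
        # dangling after the record's storage is recovered.
        self.interrupt_queue = [e for e in self.interrupt_queue if e is not rec]
        return rec

stack = Stack()
outer = stack.enter_block("outer")
inner = stack.enter_block("inner")
stack.interrupt_queue.append(inner)   # an interrupt entry into "inner"
stack.exit_block()                    # recovers "inner" and its entries
```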
Computer System Organization: The B5700/B6700 Series | 1973
Elliott I. Organick
This chapter discusses the concept of a B6700 job, which consists of a time-invariant algorithm and a time-varying data structure that is the record of execution of that algorithm. The algorithm consists of a set of non-varying code segments that are directly addressable in the virtual memory sense. The record of execution is a multipurpose data structure that at any given time defines: (1) the execution state of the job, including values for all variables—scalar, arrayed, and structured; (2) the addressing environment—virtual address subspace—that a hardware processor serving this job may access, or possibly several overlapping addressing environments, in case it is appropriate that more than one processor be permitted to execute in the job at the same time—multiple activity; (3) the inter-block/inter-procedure/inter-task flow of control history, for example, chain of calls. In its simplest view, the hardware processor functions by maintaining a pair of pointers, an instruction pointer (ip) and an environment pointer (ep), for referencing the accessible portions of the record.
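The closing sentence, a processor maintaining an instruction pointer into invariant code and an environment pointer into the record of execution, can be illustrated with a minimal interpreter. The instruction names and the flat-dictionary environment are simplifying assumptions, not the B6700's actual formats.

```python
# Hypothetical sketch: a processor serving a job via two pointers:
# ip into the time-invariant code, ep into the time-varying record
# of execution that holds the job's variables.

code = ["load x", "load y", "add", "store z"]   # time-invariant algorithm
record = {"x": 5, "y": 7}                       # record of execution

ip = 0          # instruction pointer: next instruction in code
ep = record     # environment pointer: current addressing environment

stack = []
while ip < len(code):
    instr = code[ip]
    if instr.startswith("load"):
        stack.append(ep[instr.split()[1]])      # read via the environment
    elif instr == "add":
        stack.append(stack.pop() + stack.pop())
    elif instr.startswith("store"):
        ep[instr.split()[1]] = stack.pop()      # write via the environment
    ip += 1
```

Note that the code never changes as the job runs; all execution state lives in the record reached through ep, which is what lets several processors (each with its own ip/ep pair) execute in the same job concurrently.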
Computer System Organization: The B5700/B6700 Series | 1973
Elliott I. Organick
This chapter discusses stack structure and stack ownership. A task family and a treelike stack structure that represents the records of execution are depicted in diagrammatic form in the chapter. The statically linked set of activation records defining the accessing environment of any one offspring task extends over two or more separate stacks. The highest display-level portion of the environment for an offspring task depicted in the figure is found in the stack associated with that task. These access regions connect to access regions at lower display levels through as many separate stacks as are required to include the root, or main, stack of the job. Only the first of these stacks, for which the display level is highest, is directly associated with the given task; it is the one that uniquely identifies, or associates with, the virtual processor that executes the task. Not all of each stack owned by an ancestor need be a part of a given task's accessing environment.
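The chain described here, from an offspring task's own stack through ancestor stacks down to the job's main stack, can be sketched with a statically linked structure. The class and field names are invented for illustration.

```python
# Hypothetical sketch: a task's accessing environment extends through
# statically linked stacks from its own (highest display level) down
# to the job's root stack.

class StackSegment:
    def __init__(self, name, display_level, static_link=None):
        self.name = name
        self.display_level = display_level
        self.static_link = static_link   # link toward lower display levels

def environment(task_stack):
    """Collect the chain of stacks forming a task's accessing environment."""
    chain, seg = [], task_stack
    while seg is not None:
        chain.append(seg.name)
        seg = seg.static_link
    return chain

main = StackSegment("main", 0)                          # root of the job
parent = StackSegment("parent-task", 1, static_link=main)
child = StackSegment("child-task", 2, static_link=parent)
```

Only the first segment in a task's chain (its own, at the highest display level) identifies the virtual processor executing it; the rest are shared, in whole or in part, with ancestor tasks.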
Computer System Organization: The B5700/B6700 Series | 1973
Elliott I. Organick
Publisher Summary This chapter discusses the basic data structures for B6700 algorithms. The code for a B6700 algorithm is segmented into blocks, as shown in diagrammatic form in the chapter. Each block-structured language has its own syntax for delimiting such blocks; Algol 60, for example, uses begin, end pairs for program blocks and for procedure blocks. The code for each block is stored as a physically separate segment. Each entry in the segment dictionary serves as a segment pointer, or descriptor. Only segments that are actually a part of the specification of a site of activity, that is, for an active processor, need be present in physically addressable memory. When the flow of control moves from one segment to another in the algorithm, the hardware accesses the segment dictionary to acquire the base address of the desired segment as found in its descriptor. Thereafter, each succeeding instruction in the same segment is accessed as an offset from the base.
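The base-plus-offset fetch through a segment dictionary can be sketched directly. The dictionary layout and function names below are illustrative assumptions, not the B6700 descriptor format.

```python
# Hypothetical sketch: inter-segment control transfer through a segment
# dictionary whose entries (descriptors) give each code segment's base
# address; subsequent instructions are fetched at offsets from the base.

memory = {}
segment_dictionary = {}

def load_segment(seg_id, code, base):
    for i, instr in enumerate(code):
        memory[base + i] = instr
    segment_dictionary[seg_id] = {"base": base, "length": len(code)}

def fetch(seg_id, offset):
    desc = segment_dictionary[seg_id]     # descriptor lookup on transfer
    assert offset < desc["length"]
    return memory[desc["base"] + offset]  # thereafter: base + offset

load_segment("outer-block", ["enter", "call inner", "exit"], base=100)
load_segment("inner-proc", ["enter", "return"], base=200)
```

The dictionary lookup happens only when control crosses a segment boundary; within a segment, fetches are simple offsets, and a segment absent from memory need only be brought in when some active processor's control actually reaches it.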
Computer System Organization: The B5700/B6700 Series | 1973
Elliott I. Organick
Publisher Summary This chapter focuses on tasking. A specific example of a program that gains a second site of activity is illustrated in the chapter. Lines 3, 5, 10, 14, and 16 reflect most of the new syntactical units required to achieve a simple, synchronized, tasking objective. At line 3, a variable, ev1, of type event is declared for use as the basis for synchronization. A matching formal parameter called done is declared in line 5 for the procedure sumit2. To request that sumit2 be executed as a separate but related task, a new syntactical construction is needed. Burroughs Algol, for instance, employs the key word process to distinguish a task call (line 14) from an ordinary procedure call (line 15). Variables of type event are structured. One field in this structure is a binary switch called the “happened” bit, which is set to not happened initially and later set to happened when the cause intrinsic is executed.
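The synchronization pattern described here, an event variable with a "happened" bit set by a cause operation, can be sketched in Python. Burroughs Algol syntax is not reproduced; the Event class and the sumit2/ev1 names echo the chapter's example but the implementation is an invented stand-in built on ordinary threads.

```python
# Hypothetical sketch of event-based task synchronization: an event
# variable carries a "happened" bit, initially not happened, set when
# the cause intrinsic runs; a waiter proceeds only after that.
import threading

class Event:
    def __init__(self):
        self.happened = False            # the "happened" bit
        self._cond = threading.Condition()
    def cause(self):
        with self._cond:
            self.happened = True
            self._cond.notify_all()
    def wait(self):
        with self._cond:
            self._cond.wait_for(lambda: self.happened)

results = []

def sumit2(done, data):                  # runs as a separate task
    results.append(sum(data))
    done.cause()                         # signal completion

ev1 = Event()                            # declared for synchronization
task = threading.Thread(target=sumit2, args=(ev1, [1, 2, 3]))
task.start()                             # task call, not a procedure call
ev1.wait()                               # block until ev1 has happened
task.join()
```

The thread start plays the role of the process keyword: unlike an ordinary procedure call, the caller continues immediately and synchronizes later through the event.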