Publication


Featured research published by Justin R. Rattner.


Architectural Support for Programming Languages and Operating Systems | 1982

Supporting Ada memory management in the iAPX-432

Fred J. Pollack; George W. Cox; Dan W. Hammerstrom; Kevin C. Kahn; Konrad K. Lai; Justin R. Rattner

In this paper, we describe how the memory management mechanisms of the Intel iAPX-432 are used to implement the visibility rules of Ada. At any point in the execution of an Ada program on the 432, the program has a protected address space that corresponds exactly to the program's accessibility at the corresponding point in the program's source. This close match of architecture and language did not occur because the 432 was designed to execute Ada—it was not. Rather, both Ada and the 432 are the result of very similar design goals. To illustrate this point, we compare, in their support for Ada, the memory management mechanisms of the 432 to those of traditional computers. The most notable differences occur in heap-space management and multitasking. With respect to the former, we describe a degree of hardware/software cooperation that is not typical of other systems. In the latter area, we show how Ada's view of sharing is the same as the 432's, but differs totally from the sharing permitted by traditional systems. A description of these differences provides some insight into the problems of implementing an Ada compiler for a traditional architecture.


International Conference on Parallel Architectures and Compilation Techniques | 2005

Multi-core to the masses

Justin R. Rattner

Summary form only given. It is likely that 2005 will be viewed as the year that parallelism came to the masses, with multiple vendors shipping dual/multi-core platforms into the mainstream consumer and enterprise markets. Assuming that this trend will follow Moore's Law scaling, mainstream systems will contain over 10 processing cores by the end of the decade, yielding unprecedented theoretical peak performance. However, it is unclear whether the software community is sufficiently ready for this transition and will be able to unleash these capabilities due to the significant challenges associated with parallel programming. This keynote addresses the motivation for multi-core architectures, their unique characteristics, and potential solutions to the fundamental software challenges, including architectural enhancements for transactional memory, fine-grain message passing, and speculative multi-threading. Finally, we stress the need for a concerted, accelerated effort, starting at the academic level and encompassing the entire platform software ecosystem, to successfully make the multi-core architectural transition.
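The message-passing style the keynote names as one potential solution can be illustrated with a minimal, generic sketch (not taken from the talk): workers communicate through queues instead of sharing mutable state, sidestepping the data races that make parallel programming hard.

```python
# Generic illustration (not from the keynote): message passing between
# worker threads via queues, avoiding shared mutable state entirely.
import threading
import queue

def worker(tasks: queue.Queue, results: queue.Queue) -> None:
    while True:
        item = tasks.get()
        if item is None:          # sentinel: no more work for this worker
            break
        results.put(item * item)  # stand-in for a real computation

tasks: queue.Queue = queue.Queue()
results: queue.Queue = queue.Queue()
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)                  # enqueue work items
for _ in threads:
    tasks.put(None)               # one sentinel per worker
for t in threads:
    t.join()

squares = sorted(results.get() for _ in range(10))
print(squares)
```

Because each value is owned by exactly one thread at a time (it lives either in a queue or in one worker's local variable), no locks around application data are needed; the queues provide the only synchronization.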


Distributed Memory Computing Conference | 1991

The new age of supercomputing

Justin R. Rattner

The solutions to today's foremost scientific challenges require order-of-magnitude increases in computing power. The route to TeraFLOP computing lies in parallel multi-computers that exploit advances in microprocessor technology.


ACM SIGARCH Computer Architecture News | 1980

Object-based computer architecture

Justin R. Rattner; George W. Cox

It is unusual for a talk such as this to be given before the product is introduced, but we wanted an opportunity to focus attention on the conceptual framework of the design before the practical details of its realization are made public. We are grateful to the management of Intel for the opportunity to do so. The talk will be an architectural, not a product, preview and is cleared to discuss only the concepts underlying the architecture. It will specifically not discuss the implementation. The product is, however, real, implemented, and running, and is scheduled to be introduced approximately six months from now. At this point, all that can be said about the implementation is that its goal was to produce an all-VLSI system. This goal was achieved: the system uses several one- or two-component processors and occupies very little physical space.


Architectural Support for Programming Languages and Operating Systems | 1982

Hardware/software cooperation in the iAPX-432

Justin R. Rattner

The Intel iAPX-432 is an object-based microcomputer system with a unified approach to the design and use of its architecture, operating system, and primary programming language. The concrete architecture of the 432 incorporates hardware support for data abstraction, small protection domains, and language-oriented run-time environments. It also uses its object-orientation to provide hardware support for dynamic heap storage management, interprocess communication, and processor dispatching. We begin with an overview of the 432 architecture so readers unfamiliar with its basic concepts will be able to follow the succeeding discussion without the need to consult the references. Following that, we introduce the various forms of hardware/software cooperation and the criteria by which a function or service is selected for migration. This is followed by several of the more interesting examples of hardware/software cooperation in the 432. A comparison of cooperation in the 432 with several contemporary machines and discussions of development issues, past and future, complete the paper.


IEEE Solid-State Circuits Magazine | 2009

The dawn of terascale computing

Justin R. Rattner

The digital revolution, far from abating, continues with even greater intensity in new applications in health, media, social networking, and many other areas of our lives. These applications will require revolutionary improvements in speed and capacity in future microprocessors so that they can process terabytes of information with teraflops of terascale computing power. Tera is not an exaggeration: trillions of hertz and trillions of bytes will be needed. In a terascale world, there will be new processing capabilities for mining and interpreting the world's growing mountain of data, and for doing so with even greater efficiency. Examples of applications are artificial intelligence in smart cars and appliances and virtual reality for modeling, visualization, physics simulation, and medical training. Many other applications are still on the edge of science fiction. In these applications, massive amounts of data must be processed. Three-dimensional (3-D) images in connected visual computing applications like virtual worlds can include hundreds of hours of video, thousands of documents, and tens of thousands of digital photos that require indexing and searching. Terascale computing refers to this massive processing capability with the right mix of memory and input/output (I/O) capabilities for use in everyday devices, from servers to desktops to laptops.


Operating Systems Review | 2011

Research at Intel

Justin R. Rattner

For most people, Intel’s name is synonymous with the microprocessors powering the vast majority of the world’s personal computers, whether they run the Windows, Macintosh, or Linux operating systems. But the Intel of today is no longer a single-minded chip company serving the horizontally-structured PC industry. Over the last decade, we have become a world-class high-tech innovator by contributing to the rapid advancement of computing and communications technologies, starting at the silicon level by extending Moore’s Law with high-K, metal-gate transistors, pioneering high-speed mobile broadband communications with WiFi and later WiMax 4G wireless networking, and delivering complete hardware and software solutions to the digital home with our Smart TV platform, in the digital classroom with a family of Classmate notebooks and slates, and to the home with the Intel Health Guide and Intel Reader. While the traditional PC sectors, including both client and server products, continue to show strong growth, we are actively developing products for new markets including the smartphone, tablet, and automotive segments.


High-Performance Computer Architecture | 2008

Intel’s Tera-scale Computing Project: The first five years, the next five years

Justin R. Rattner

The Intel tera-scale computing research project is an effort to advance computing technology for the next decade. By scaling today's multi-core architectures to 10s and 100s of cores and embracing a shift to parallel programming, the goal is to enable applications and capabilities only dreamed of today. In his keynote, Justin Rattner will talk about the hardware and software research vision for the program. He will address hardware challenges with scaling multi-core architectures to integrate programmable cores and fixed-function accelerators, a flexible cache and memory hierarchy, and high-bandwidth on-die networks to ensure high throughput. On the software front, he will talk about thread-aware execution environments that provide high scalability and energy efficiency across the cores, and parallel programming tools for mere mortal programmers. The talk will also highlight future applications, such as integrated real-time physics and visualization and non-textual media mining, which along with many others benefit from high degrees of concurrency.


Supercomputer '91 Anwendungen, Architekturen, Trends, Seminar | 1991

Supercomputing 1995 and Beyond - The Different Perspectives

Steven J. Wallach; Justin R. Rattner; Carl W. Diem; Kenichi Miura; Craig J. Mundie; Guy L. Steele; Andreas Reuter

The Mannheim Supercomputer Seminar 1991 had one of its highlights in the panel discussion covering "Supercomputing 1995 and Beyond". The above-named representatives of leading supercomputer manufacturers participated in this discussion, as did Prof. Andreas Reuter from the Institut für Parallele und Verteilte Höchstleistungsrechner, University of Stuttgart, as an independent expert and user.


Archive | 1981

Functional Extensibility: Making the World Safe for VLSI

Justin R. Rattner

The greatly improved access to VLSI technology now available to non-specialists has sparked considerable interest in the design of special-function architectures (SFAs) that exploit its unique characteristics and advantages. For example, many SFAs take the form of pipelines or arrays in order to capitalize on the economies of replication that come with VLSI. Applications of these VLSI SFAs have reflected current research interests in areas such as pattern recognition, image analysis, database retrieval, interactive graphics, and speech processing.
