Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where JoAnn M. Paul is active.

Publication


Featured research published by JoAnn M. Paul.


ACM Transactions on Embedded Computing Systems | 2005

Undergraduate embedded system education at Carnegie Mellon

Philip Koopman; Howie Choset; Rajeev Gandhi; Bruce H. Krogh; Diana Marculescu; Priya Narasimhan; JoAnn M. Paul; Ragunathan Rajkumar; Daniel P. Siewiorek; Asim Smailagic; Peter Steenkiste; Donald E. Thomas; Chenxi Wang

Embedded systems encompass a wide range of applications, technologies, and disciplines, necessitating a broad approach to education. We describe embedded system coursework during the first 4 years of university education (the U.S. undergraduate level). Embedded application curriculum areas include: small and single-microcontroller applications, control systems, distributed embedded control, system-on-chip, networking, embedded PCs, critical systems, robotics, computer peripherals, wireless data systems, signal processing, and command and control. Additional cross-cutting skills that are important to embedded system designers include: security, dependability, energy-aware computing, software/systems engineering, real-time computing, and human-computer interaction. We describe lessons learned from teaching courses in many of these areas, as well as general skills taught and approaches used, including a heavy emphasis on course projects to teach system skills.


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2006

Scenario-oriented design for single-chip heterogeneous multiprocessors

JoAnn M. Paul; Donald E. Thomas; Alex Bobrek

Single-chip heterogeneous multiprocessors (SCHMs) are arising to meet the computational demands of portable and handheld devices. These computing systems are not fully custom designs traditionally targeted by the design automation community, general-purpose designs traditionally targeted by the computer architecture community, nor pure embedded designs traditionally targeted by the real-time community. An entirely new design philosophy will be needed for this hybrid class of computing. The programming of the device will be drawn from a narrower set of applications, with execution that persists in the system over a longer period of time than in general-purpose programming. However, the devices will still be programmable, not only at the level of the individual processing element, but across multiple processing elements and even the entire chip. The design of other programmable single-chip computers has enjoyed an era in which design tradeoffs could be captured in simulators such as SimpleScalar and performance could be evaluated against the SPEC benchmarks. Motivated by this, we describe new benchmark-based design strategies for SCHMs, which we refer to as scenario-oriented design. We include an example and results.


Design Automation Conference | 2004

High level cache simulation for heterogeneous multiprocessors

Joshua J. Pieper; Alain Mellan; JoAnn M. Paul; Donald E. Thomas; Faraydon Karim

As multiprocessor systems-on-chip become a reality, performance modeling becomes a challenge. To quickly evaluate many architectures, some type of high-level simulation is required, including high-level cache simulation. We propose to perform this cache simulation by defining a metric that represents memory behavior independently of cache structure and back-annotating it into the original application. While the annotation phase is complex, requiring time comparable to normal address-trace-based simulation, it need only be performed once per application set, which enables simulation to be sped up by a factor of 20 to 50 over trace-based simulation. This is important for embedded systems, as software is often evaluated against many input sets and many architectures. Our results show the technique is accurate to within 20% on miss rate for uniprocessors and was able to reduce the die area of a multiprocessor chip by a projected 14% over a naive design by accurately sizing caches for each processor.
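The abstract does not spell out the metric, so the following is a loose illustration only: reuse distance is one well-known cache-structure-independent characterization of memory behavior, and this hypothetical Python sketch shows how a histogram collected in a single profiling pass could then be queried for many candidate cache sizes without re-running a trace-based simulation.

```python
# Hypothetical sketch (not the paper's method): reuse-distance profiling as a
# cache-structure-independent metric. Collected once per application, the
# histogram can be queried for many candidate cache sizes without
# re-simulating the address trace.
from collections import OrderedDict, Counter

def reuse_distance_histogram(trace, line_size=32):
    """Return a Counter mapping reuse distance -> count; -1 marks cold misses."""
    stack = OrderedDict()          # cache lines in least- to most-recently-used order
    hist = Counter()
    for addr in trace:
        line = addr // line_size
        if line in stack:
            # distance = number of distinct lines touched since the last access
            dist = list(reversed(stack)).index(line)
            hist[dist] += 1
            del stack[line]
        else:
            hist[-1] += 1          # first touch: compulsory miss
        stack[line] = True         # move line to most-recently-used position
    return hist

def estimate_miss_rate(hist, cache_lines):
    """Fully associative LRU estimate: a reference misses iff its reuse
    distance is >= the number of cache lines, or it is a cold miss."""
    total = sum(hist.values())
    misses = hist[-1] + sum(c for d, c in hist.items() if d >= cache_lines)
    return misses / total

# Example: size a per-processor cache from one profiling pass.
trace = [0x1000, 0x1040, 0x1000, 0x2000, 0x1040, 0x1000]
hist = reuse_distance_histogram(trace)
for lines in (1, 2, 4):
    print(lines, "lines ->", estimate_miss_rate(hist, lines))
```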


Design, Automation and Test in Europe | 2004

Modeling shared resource contention using a hybrid simulation/analytical approach

Alex Bobrek; Joshua J. Pieper; Jeffrey E. Nelson; JoAnn M. Paul; Donald E. Thomas

Future systems-on-chip will include multiple heterogeneous processing units, with complex, data-dependent shared-resource access patterns dictating the performance of a design. Currently, the most accurate methods of simulating the interactions between these components operate at the cycle-accurate level, which can be very slow to execute for large systems. Analytical models sacrifice accuracy for speed and cannot cope well with dynamic, data-dependent behavior. We propose a hybrid approach that combines simulation with piecewise evaluation of analytical models, which apply time penalties to simulated regions. Our experimental results show that, for representative heterogeneous multiprocessor applications, simulation time can be decreased by 100 times over cycle-accurate models, while error is reduced by 60% to 80% over traditional analytical models, to within 18% of an ISS simulation.
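The paper's analytical models are not given in the abstract, so the sketch below is only a rough illustration of the hybrid idea, with invented names and an invented penalty formula: coarse simulated regions carry their uncontended latencies, and an analytical stretch is applied per region when shared-bus demand exceeds capacity, instead of resolving every access cycle by cycle.

```python
# Hypothetical sketch of hybrid simulation/analytical contention modeling.
from dataclasses import dataclass

@dataclass
class Region:
    uncontended_time: float   # region latency with no contention (cycles)
    bus_accesses: int         # shared-resource accesses issued in the region

BUS_ACCESS_CYCLES = 4         # assumed cost of one shared-bus transfer
BUS_CAPACITY = 1.0            # fraction of cycles the bus can be busy

def contention_penalty(schedule):
    """Analytical penalty: if bus demand within a group of concurrent regions
    exceeds what the bus can serve in that span, stretch the span by the excess."""
    penalty = 0.0
    for concurrent in schedule:           # regions that overlap in time
        span = max(r.uncontended_time for r in concurrent)
        demand = sum(r.bus_accesses for r in concurrent) * BUS_ACCESS_CYCLES
        penalty += max(0.0, demand - BUS_CAPACITY * span)
    return penalty

def hybrid_runtime(schedule):
    base = sum(max(r.uncontended_time for r in concurrent) for concurrent in schedule)
    return base + contention_penalty(schedule)

# Two processors executing three overlapping region groups.
schedule = [
    (Region(1000, 120), Region(900, 300)),
    (Region(800, 50),   Region(850, 60)),
    (Region(1200, 400), Region(1100, 380)),
]
print("estimated runtime:", hybrid_runtime(schedule))
```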


ACM Transactions on Design Automation of Electronic Systems | 2005

High-level modeling and simulation of single-chip programmable heterogeneous multiprocessors

JoAnn M. Paul; Donald E. Thomas; Andrew S. Cassidy

Heterogeneous multiprocessing is the future of chip design, with the potential for tens to hundreds of programmable elements on single chips within the next several years. These chips will have heterogeneous, programmable hardware elements that lead to different execution times for the same software executing on different resources, as well as a mix of desktop-style and embedded-style software. They will also have a layer of programming across multiple programmable elements, forming the basis of a new kind of programmable system which we refer to as a Programmable Heterogeneous Multiprocessor (PHM). Current modeling approaches use instruction set simulation for performance modeling, but this will become prohibitively slow for these larger designs. The fundamental question is what the next higher level of design will be. The high-level modeling, simulation, and design required for these programmable systems pose unique challenges, representing a break from traditional hardware design. Programmable systems, including layered concurrent software executing via schedulers on concurrent hardware, are not characterizable with traditional component-based hierarchical composition approaches, including discrete event simulation. We describe the foundations of our layered approach to modeling and performance simulation of PHMs, showing an example design space of a network processor explored using our simulation approach.
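MESH itself is not reproduced here; the toy Python sketch below, with hypothetical names and timings, merely illustrates the thread-level abstraction: simulated time advances in thread-sized steps whose duration depends on which processing element runs the thread, rather than in instruction-sized steps.

```python
# Hypothetical illustration of thread-level (not instruction-level) performance
# simulation: each thread carries a per-processor timing estimate, so the same
# software takes different amounts of time on different resources.
threads = {
    "fft":     {"dsp": 2.0, "risc": 9.0},   # assumed ms on each resource
    "parse":   {"dsp": 7.0, "risc": 3.0},
    "encrypt": {"dsp": 5.0, "risc": 4.0},
}

def simulate(mapping):
    """mapping: thread name -> processor name. Returns per-processor finish
    times under a simple run-to-completion model."""
    finish = {}
    for thread, proc in mapping.items():
        finish[proc] = finish.get(proc, 0.0) + threads[thread][proc]
    return finish

mapping = {"fft": "dsp", "parse": "risc", "encrypt": "risc"}
finish = simulate(mapping)
print(finish, "makespan:", max(finish.values()))
```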


IEEE Transactions on Computers | 2005

Power-performance simulation and design strategies for single-chip heterogeneous multiprocessors

Brett H. Meyer; Joshua J. Pieper; JoAnn M. Paul; Jeffrey E. Nelson; Sean M. Pieper; Anthony Rowe

Single-chip heterogeneous multiprocessors (SCHMs) are becoming more commonplace, especially in portable devices where reduced energy consumption is a priority. The use of coordinated collections of processors which are simpler or which execute at lower clock frequencies is widely recognized as a means of reducing power while maintaining latency and throughput. A primary limitation of using this approach to reduce power at the system level has been the time to develop and simulate models of many processors at the instruction set simulator level. High-level models, simulators, and design strategies for SCHMs are required to enable designers to think in terms of collections of cooperating, heterogeneous processors in order to reduce power. Toward this end, this paper has two contributions. The first is to extend a unique, preexisting high-level performance simulator, the Modeling Environment for Software and Hardware (MESH), to include power annotations. MESH can be thought of as a thread-level simulator rather than an instruction-level simulator. Thus, the problem is to understand how power might be calibrated and annotated for program fragments instead of at the instruction level. Program fragments are finer-grained than threads and coarser-grained than instructions. Our experimentation found that compilers produce instruction patterns that allow power to be annotated at this level using a single number over all compiler-generated fragments executing on a processor. Since energy is power multiplied by time, this makes system runtime (i.e., performance) the dominant factor to be dynamically calculated at this level of simulation. The second contribution arises from the observation that high-level modeling is most beneficial when it opens up new possibilities for organizing designs. Thus, we introduce a design strategy, enabled by high-level power-performance simulation, which we refer to as spatial voltage scaling. The strategy both reduces overall system power consumption and improves performance in our example. The design space for this design strategy could not be explored without high-level SCHM power-performance simulation.
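As a back-of-the-envelope illustration of why spreading work over more, slower cores can pay off (assumed numbers, not results from the paper): with dynamic power roughly proportional to CV^2f and energy equal to power multiplied by time, two lower-voltage, lower-frequency cores can cut both power and energy without lengthening the runtime.

```python
# Assumed illustrative numbers only: compare one core at full voltage and
# frequency against two cores at reduced voltage and frequency, using
# dynamic power ~ C * V^2 * f and energy = power * time.
def dynamic_power(c_eff, voltage, freq_hz):
    return c_eff * voltage ** 2 * freq_hz

WORK_CYCLES = 2e9          # cycles of work in the application (assumed)
C_EFF = 1e-9               # effective switched capacitance (assumed)

# Single core at 1.2 V, 1 GHz.
t1 = WORK_CYCLES / 1e9
p1 = dynamic_power(C_EFF, 1.2, 1e9)
e1 = p1 * t1

# Two cores split the work at 0.9 V, 500 MHz each (perfect parallelism assumed).
t2 = (WORK_CYCLES / 2) / 0.5e9
p2 = 2 * dynamic_power(C_EFF, 0.9, 0.5e9)
e2 = p2 * t2

print(f"1 core : {t1:.2f} s, {p1:.2f} W, {e1:.2f} J")
print(f"2 cores: {t2:.2f} s, {p2:.2f} W, {e2:.2f} J")
```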


International Journal of Parallel Programming | 2007

Amdahl's law revisited for single chip systems

JoAnn M. Paul; Brett H. Meyer

Amdahl's Law is based upon two assumptions, boundlessness and homogeneity, and so it can fail when applied to single-chip heterogeneous multiprocessor designs, and even to microarchitecture. We show that a performance increase in one part of the system can negatively impact the overall performance of the system, in direct contradiction to the way Amdahl's Law is usually taught. Fundamental assumptions consistent with Amdahl's Law are heavily ingrained in our computing culture, for research as well as design. This paper points in a new direction. We argue that emphasis should be placed on holistic, system-level views instead of divide-and-conquer approaches. This, in turn, has relevance to the potential impact of custom processors, system-level scheduling strategies, and the way systems are partitioned. We recognize that Amdahl's Law is one of the few fundamental laws of computing. However, its power lies in its simplicity, and if that simplicity is carried over to future systems, we believe it will impede the potential of future computing systems.
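For reference, the classical statement of the law being revisited, in which a fraction f of the runtime is sped up by a factor s under the boundedness and homogeneity assumptions the paper questions:

```latex
% Classical Amdahl's Law: overall speedup when a fraction f of the runtime is
% accelerated by a factor s, assuming the remaining (1 - f) is unaffected and
% resources are effectively unbounded.
\[
  S(s) \;=\; \frac{1}{(1 - f) + \dfrac{f}{s}}, \qquad
  \lim_{s \to \infty} S(s) \;=\; \frac{1}{1 - f}.
\]
% The paper's point is that on a bounded, heterogeneous single chip these
% assumptions can break, so speeding up one part need not speed up, and can
% even slow down, the system as a whole.
```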


Design, Automation and Test in Europe | 2003

Layered, Multi-Threaded, High-Level Performance Design

Andrew S. Cassidy; JoAnn M. Paul; Donald E. Thomas

A primary goal of high-level modeling is to efficiently explore a broad design space, converging on an optimal or near-optimal system architecture before moving to a more detailed design. This paper evaluates a high-level, layered software-on-hardware performance modeling environment called MESH that captures coarse-grained, interacting system elements. The validity of the high-level model is established by comparing its outcome with that of a corresponding low-level, cycle-accurate instruction set simulator. We model a network processor and show that the high- and low-level models agree in classifying design modifications as having good or bad performance impact, converging on the same architecture.


Design Automation Conference | 2003

Schedulers as model-based design elements in programmable heterogeneous multiprocessors

JoAnn M. Paul; Alex Bobrek; Jeffrey E. Nelson; Joshua J. Pieper; Donald E. Thomas

As System-on-a-Chip (SoC) designs become more like Programmable Heterogeneous Multiprocessors (PHMs), the highest levels of design will place emphasis on the custom design of elements that were traditionally associated with systems in the large. We motivate how schedulers that make dynamic, data-dependent decisions at run time will be key design elements in PHM SoCs. Starting from a fundamental model, the role schedulers play in PHMs is developed. Model-based scheduling is introduced as an approach to designing schedulers that optimize a PHM's performance. Due to the complexity of the PHM design space, convergence on an optimal design requires high-level modeling and simulation. In model-based scheduling, high-level models of scheduling decisions become actual design elements that appear in real systems. Experiments for a simple two-processor PHM that performs a mix of image and text compression are included. Results show the effectiveness of model-based scheduling.
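The scheduler models themselves are not detailed in the abstract; a minimal hypothetical sketch of a dynamic, data-dependent scheduler for a two-processor image/text compression mix might dispatch each arriving job to whichever processor a high-level timing model predicts will finish it first:

```python
# Hypothetical sketch of a data-dependent, run-time scheduler for a
# two-processor PHM. The per-(job type, processor) cost table stands in for
# the high-level performance model that drives scheduling decisions.
COST = {  # assumed ms per KB of input on each processor
    ("image", "P0"): 0.8, ("image", "P1"): 2.0,
    ("text",  "P0"): 1.5, ("text",  "P1"): 0.6,
}

def schedule(jobs):
    """jobs: list of (job_type, size_kb). Greedily dispatch each job to the
    processor with the earliest predicted completion time."""
    busy_until = {"P0": 0.0, "P1": 0.0}
    plan = []
    for kind, size in jobs:
        best = min(busy_until,
                   key=lambda p: busy_until[p] + COST[(kind, p)] * size)
        busy_until[best] += COST[(kind, best)] * size
        plan.append((kind, size, best))
    return plan, max(busy_until.values())

jobs = [("image", 200), ("text", 50), ("image", 40), ("text", 300)]
plan, makespan = schedule(jobs)
for kind, size, proc in plan:
    print(f"{kind:<5} {size:>4} KB -> {proc}")
print("predicted makespan:", makespan, "ms")
```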


IEEE Computer | 2007

A New Era of Performance Evaluation

Sean M. Pieper; JoAnn M. Paul; Michael J. Schulte

Long-standing techniques for performance evaluation of computer designs are beginning to fail. Computers increasingly interact with other computers, humans, and the outside world, leading to scenario-oriented computing, an emerging category of design that will enable future consumer devices and usher in a new era of performance evaluation.

Collaboration


Dive into JoAnn M. Paul's collaborations.

Top Co-Authors

Donald E. Thomas, Carnegie Mellon University
Alex Bobrek, Carnegie Mellon University
Joshua J. Pieper, Carnegie Mellon University
Jeffrey E. Nelson, Carnegie Mellon University
Sean M. Pieper, University of Wisconsin-Madison
Simon N. Peffers, Carnegie Mellon University
Andrew S. Cassidy, Carnegie Mellon University
Arne Suppé, Carnegie Mellon University