Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nelson H. Weiderman is active.

Publication


Featured research published by Nelson H. Weiderman.


Real-time Systems | 1992

Hartstone Uniprocessor Benchmark: definitions and experiments for real-time systems

Nelson H. Weiderman; Nick I. Kamenoff

The purpose of this paper is to define a series of requirements and associated experiments, called the Hartstone Uniprocessor Benchmark (HUB), to be used in testing the ability of a uniprocessor system to handle certain types of hard real-time applications. The benchmark model considers the real-time system as a set of periodic, aperiodic (sporadic), and synchronization (server) tasks. The tasks are characterized by their execution times (workloads) and deadlines. Five series of experiments are defined. They are, in order of increasing complexity: PH (Periodic Tasks, Harmonic Frequencies), PN (Periodic Tasks, Nonharmonic Frequencies), AH (Periodic Tasks with Aperiodic Processing), SH (Periodic Tasks with Synchronization), and SA (Periodic Tasks with Aperiodic Processing and Synchronization). The general stopping criterion of the experiments is defined as follows: change one of four task-set parameters (number of tasks, execution time(s), blocking time(s), or deadline(s)) until a given task set is no longer schedulable, i.e., until a deadline is missed. The derivation of the Hartstone experiments from one static scheduling algorithm (Rate Monotonic) and one dynamic scheduling algorithm (Earliest Deadline First) is presented. Because of their high-level, application-oriented view of the underlying hardware and real-time system software, the Hartstone experiments can be used for fast prototyping of real-time applications. Implementations of such benchmarks are useful in evaluating scheduling algorithms, scheduling protocols, and design paradigms, as well as real-time languages, the tasking systems of compilers, real-time operating systems, and hardware configurations.
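The stopping criterion above lends itself to a small analytical sketch. The code below is not from the paper (Hartstone measures behavior on real hardware); it uses the classical utilization tests for Rate Monotonic and Earliest Deadline First scheduling, and scales all workloads until the RM bound is first exceeded, mirroring the HUB idea of varying one parameter until the task set is no longer schedulable:

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient utilization bound for Rate Monotonic
    scheduling; tasks is a list of (execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

def edf_schedulable(tasks):
    """Exact utilization test for EDF when deadlines equal periods."""
    return sum(c / t for c, t in tasks) <= 1.0

def breakdown_factor(tasks, step=0.01):
    """Hartstone-style stopping criterion: scale every workload until
    the RM bound is first exceeded; return the last passing factor."""
    factor = 1.0
    while rm_schedulable([(c * (factor + step), t) for c, t in tasks]):
        factor += step
    return factor

# Three harmonic periodic tasks, PH-series flavor: (C, T) in milliseconds.
tasks = [(1.0, 10.0), (2.0, 20.0), (4.0, 40.0)]
print(rm_schedulable(tasks))              # True: U = 0.30, bound ~0.78
print(edf_schedulable(tasks))             # True: U = 0.30 <= 1.0
print(round(breakdown_factor(tasks), 2))  # workload scale at the RM limit
```

A real Hartstone run executes such task sets on the target and observes missed deadlines directly, rather than applying a closed-form bound.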


ACM SIGAda Ada Letters | 1990

Hartstone: synthetic benchmark requirements for hard real-time applications

Nelson H. Weiderman

The purpose of this paper is to define the operational concept for a series of benchmark requirements to be used to test the ability of a system to handle hard real-time applications. Implementations of such benchmarks would be useful in evaluating scheduling algorithms, protocols, and design paradigms, as well as processors, languages, compilers, and operating systems. Several Ada programs are under development to test standard versions of the benchmark requirements and will be released into the public domain.


Real-Time Systems Symposium | 1991

Hartstone distributed benchmark: requirements and definitions

Nick I. Kamenoff; Nelson H. Weiderman

A series of benchmark requirements, the Hartstone distributed benchmark (HDB), is defined for testing the ability of real-time distributed systems (RTDSs) to handle real-time applications. The HDB experiments deliver figures of merit for the end-to-end scheduling of messages, which integrates the processor scheduling domain and the communication scheduling domain. In the HDB experiments, the entire communication system on each node (hardware and communication software) is represented in the host processor space as a communication server (CS). The CS is viewed in the processor space as an aperiodic server task which executes on behalf of its clients. The HDB experiments are defined for datagram (acknowledged and unacknowledged) services, virtual-circuit services, and integrated protocol services. Experiments for aperiodic activities are defined as well. Dedicated experiments are defined for testing the channel-access protocol, whose serializing nature can lead to priority inversions in the communication media.
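As a rough illustration of the end-to-end view described above (names and numbers here are hypothetical, not taken from the paper), the end-to-end delay of a message can be treated as the sum of the processor-domain response times on each node plus the delays contributed by the communication server:

```python
def end_to_end_ok(sender_ms, cs_queue_ms, transmit_ms, receiver_ms,
                  deadline_ms):
    """A message meets its end-to-end deadline when the processor-domain
    delays (sender and receiver response times) plus the communication-
    domain delays (CS queueing, including any blocking introduced by the
    serializing channel-access protocol, and wire time) fit within it."""
    latency = sender_ms + cs_queue_ms + transmit_ms + receiver_ms
    return latency <= deadline_ms

# 4 ms sender processing, 3 ms CS queueing, 2 ms transmission, and
# 4 ms receiver processing against a 20 ms end-to-end deadline.
print(end_to_end_ok(4, 3, 2, 4, 20))  # True: 13 ms <= 20 ms
```

The HDB experiments measure these delays empirically on the running system; this sketch only shows how the two scheduling domains combine into one end-to-end figure of merit.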


ACM SIGAda Ada Letters | 1988

Timing variation in dual loop benchmark

Neal Altman; Nelson H. Weiderman

Benchmarks that measure time values using a standard system clock often employ a dual loop design. One of the important assumptions of this design is that textually identical loop statements will take the same amount of time to execute. This assumption has been tested on two bare computers with Ada® test programs and has been demonstrated to be inaccurate in these specific test cases.
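The dual loop design under test can be sketched as follows. This is a generic Python illustration rather than the Ada programs the paper ran on bare machines; `busy_work` is a placeholder for the construct being timed:

```python
import time

def busy_work():
    """Placeholder for the operation whose cost is being measured."""
    total = 0
    for i in range(1000):
        total += i
    return total

def dual_loop(n=10_000):
    """Dual-loop timing: time an empty control loop, then a textually
    identical loop that also runs the test code, and subtract. The
    design assumes both loop shells cost the same amount of time --
    precisely the assumption the paper shows can fail."""
    t0 = time.perf_counter()
    for _ in range(n):      # control loop: shell overhead only
        pass
    t1 = time.perf_counter()
    for _ in range(n):      # test loop: shell overhead plus workload
        busy_work()
    t2 = time.perf_counter()
    overhead = t1 - t0
    return ((t2 - t1) - overhead) / n  # seconds per call

print(f"{dual_loop() * 1e9:.0f} ns per busy_work() call")
```

On bare hardware, cache alignment, pipeline state, and optimizer decisions can make the two "identical" loop shells cost different amounts, which is the timing variation the paper documents.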


ACM SIGSOFT Software Engineering Notes | 1998

Notes on the second international workshop on development and evolution of software architectures for product families

Paul C. Clements; Nelson H. Weiderman

In February 1998, the European Architectural Reasoning for Embedded Software (ARES) project sponsored the Second International Workshop on Development and Evolution of Software Architectures for Product Families. The workshop brought together practitioners and academics from Europe and the United States who are working in the area of software product families; that is, the production of related software systems from a common set of core assets. Chief among those assets is a shared software and/or system architecture. This workshop explored problems of architecture creation, description, evaluation, recovery, and architecture-based process in the context of building a product family. This report summarizes the discussions and outcomes of the workshop.


European Software Engineering Conference | 1995

Assessing the Quality of Large, Software-Intensive Systems: A Case Study

Alan W. Brown; David J. Carney; Paul C. Clements; B. Craig Meyers; Dennis B. Smith; Nelson H. Weiderman; William G. Wood

This paper presents a case study in carrying out an audit of a large, software-intensive system. We discuss our experience in structuring the team for obtaining maximum effectiveness under a short deadline. We also discuss the goals of an audit, the methods of gathering and assimilating information, and specific lines of inquiry to be followed. We present observations on our approach in light of our experience and feedback from the customer. In the past decade, as engineers have attempted to build software-intensive systems of a scale not dreamed of heretofore, there have been extraordinary successes and failures. Those projects that have failed have often been spectacular and highly visible [3], particularly those commissioned with public money. Such failures do not happen all at once; like Brooks’ admonition that schedules slip one day at a time [2], failures happen incrementally. The symptoms of a failing project range from the subtle (a customer’s vague feelings of uneasiness) to the ridiculous (the vendor slips the schedule for the eighth time and promises that another 30 million will fix everything). A project that has passed the “failure in progress” stage and gone on to full-fledged meltdown can be spotted by one sure symptom: the funding authority curtails payment and severely slows development. When that happens, the obvious question is asked by every involved party: “What now?” The answer is often an audit. This paper summarizes the experience of an audit undertaken by the Software Engineering Institute (SEI) in the summer of 1994 to examine a large, highly visible development effort exhibiting the meltdown symptom suggested above. The customer was a government agency in the process of procuring a large software-intensive system from a major contractor. The audit team included the authors of this paper, as well as members from other organizations. Members of the team had extensive backgrounds and expertise in software engineering, in large systems development,


International Workshop on Real-Time Ada Issues | 1988

A testbed for investigating real-time Ada issues

M. Borger; Mark H. Klein; Nelson H. Weiderman

The Software Engineering Institute’s Ada Embedded Systems Testbed project is constructing a hardware and software testbed that provides a real-time laboratory environment for conducting experiments using Ada and investigating real-time Ada issues. The intent of the testbed is to provide an experimental environment that facilitates the application of both real-time theory and current (Ada) software technology in solving realistic and challenging problems. For example, the testbed offers an experimental context in which the theorist can, using the most advanced Ada cross-compilation systems, apply the best real-time scheduling algorithms to applications. To date, the testbed has been used for benchmarking; real-time experimentation and prototyping; and designing, coding, and testing a representative real-time Ada application.


TRI-Ada | 1990

Benchmarking for deadline-driven computing

Nelson H. Weiderman; Patrick Donohoe; Ruth Shapiro

Hartstone is a series of timing requirements for testing a system’s ability to handle hard real-time applications. It is specified as a set of processes with well-defined workloads and timing constraints. The name Hartstone derives from HArd Real Time and the fact that the workloads are based on the well-known Whetstone benchmark. This paper describes the results obtained by running Version 1.0 of the Hartstone benchmark, an Ada implementation of one of the requirements, on a number of compiler/target processor combinations. The characteristics and expected behavior of the benchmark are described, actual results are presented and analyzed, and the lessons learned about the compilers and processors, and the benchmark itself, are discussed. Nothing in this paper should be taken as an endorsement of, or an indictment of, a particular product. Users of Ada technology are encouraged to experiment with the Hartstone benchmark relative to their own particular application requirements.


TRI-Ada | 1989

Compiler technology evaluation

D. Smith; Nelson H. Weiderman

The Ada Compiler Evaluation Capability (ACEC) is a U.S. Government-sponsored evaluation suite that was first released in the fall of 1988. It consists of over 1000 runtime performance tests and is under continuing development to address additional evaluation concerns. Currently, the government is developing procedures and guidelines to make the collection and reporting of evaluation data a more systematic and formal process. The Compiler Technology Evaluation Panel will address the current status of evaluation technology and the prospects for improving both compiler technology and the compiler selection process through formal evaluation procedures. The panel will be comprised of four panelists and an “ACEC Procedures/Guidelines” presenter. The four panelists will consist of two Ada compiler developers, a defense contractor, and a government program representative, selected for their knowledge, experience, and different perspectives on Ada compiler evaluation. The presenter will be an AJPO representative who will provide the current status of the intended use and procedures of the ACEC; this presenter will also serve as a panelist during the question-and-answer period. The panel is meant to promote discussion and provide insight into embedded-systems Ada compiler quality and usability, especially as it relates to current and proposed evaluation tests and procedures. To facilitate this discussion, in addition to the ACEC presentation, each of the four panelists has been asked to respond to the following three questions:

1. What is the current overall quality of embedded-system Ada compilers targeted for “bare” processors? Answers should consider: code generation (correctness, time and space efficiency); the runtime system (correctness, time and space efficiency, configurability, and interfacing capability); implementation-dependent features for efficiency and their impact on portability; compiler support for good symbolic information; the ability to interface with other software integration tools; and robustness (freedom from errors) and user interface.

2. What is the current overall quality of the ACEC for evaluating Ada compilers for “bare” processors? Answers should consider: coverage of major evaluation areas; size (number of tests); ease of use and degree of automation; analysis tools and the ability to synthesize results; and soundness of the methodology.

3. Remembering that language conformity is a critical quality parameter, what has been the impact of the ACVC (tests and procedures) on Ada compiler quality? What is and/or will be the impact of the ACEC (tests and procedures) on Ada compiler quality?


TRI-Ada | 1989

Evaluating real-time performance of Ada implementations

Nelson H. Weiderman

Collaboration


Dive into Nelson H. Weiderman's collaboration.

Top Co-Authors

Dennis B. Smith
Software Engineering Institute

Mark Borger
Carnegie Mellon University

Mark H. Klein
Carnegie Mellon University

John Bergey
Software Engineering Institute

Scott R. Tilley
Carnegie Mellon University

Neal Altman
Software Engineering Institute

Patrick Donohoe
Carnegie Mellon University

A. Nico Habermann
Software Engineering Institute

Paul C. Clements
Software Engineering Institute