Publication


Featured research published by Herbert Hecht.


Digest of Papers. Fault-Tolerant Computing: The Twenty-First International Symposium | 1991

A distributed fault tolerant architecture for nuclear reactor and other critical process control applications

Myron Hecht; J. Agron; Herbert Hecht; K. H. (Kane) Kim

A distributed fault tolerant system for process control that is based on an enhancement of the distributed recovery block (DRB) is described. Fault tolerance provisions in the system cover software faults by use of the DRB; hardware faults by means of replication and the DRB; system software faults by means of replication, loose coupling, periodic status messages, and a restart capability; and network faults by means of replication and diverse interconnection paths. Maintainability is enhanced through an automated restart capability and logging function resident on a system supervisor node. The system, called the extended distributed recovery block, or EDRB, has been implemented and integrated into a chemical processing system.
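
The abstract describes the architecture only in prose; as a hedged illustration of the recovery-block pattern that the DRB extends (a primary and an alternate routine guarded by a shared acceptance test), here is a minimal single-node Python sketch. The routine names, the acceptance test, and the numeric bounds are invented for illustration and are not from the paper.

# Minimal recovery-block sketch (hypothetical routines, not the EDRB itself):
# run the primary routine, check its output with an acceptance test, and fall
# back to an independently coded alternate routine if the test fails.

def acceptance_test(level):
    # Hypothetical plausibility check on a computed control output.
    return 0.0 <= level <= 100.0

def primary_routine(sensor_reading):
    # Preferred (more precise) computation.
    return sensor_reading * 1.05

def alternate_routine(sensor_reading):
    # Simpler, independently coded fallback.
    return min(max(sensor_reading, 0.0), 100.0)

def recovery_block(sensor_reading):
    result = primary_routine(sensor_reading)
    if acceptance_test(result):
        return result
    # Primary failed the acceptance test: use the alternate try block.
    result = alternate_routine(sensor_reading)
    if acceptance_test(result):
        return result
    raise RuntimeError("both try blocks failed the acceptance test")

print(recovery_block(42.0))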


IEEE Transactions on Software Engineering | 1986

Software reliability in the system context

Herbert Hecht; Myron Hecht

A systems approach to the analysis and control of software reliability is described that is intended to supplement conventional software reliability models, which focus on program attributes under the control of software professionals. A review of software reliability experience during the operations and maintenance (O&M) phase is presented. This is followed by a description of a basic failure model that supports a unified approach to software and hardware reliability, and of the implications of that model for conventional software reliability approaches. Next, the effect of management activities on reliability is investigated, and an outline of a combined hardware/software reliability model suitable for the planning phase is presented.
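
The abstract does not give the combined hardware/software model itself; as a hedged sketch of the simplest form such a model can take, a series model adds the two failure rates so that either a hardware or a software failure brings the system down. The rates below are made-up placeholders, not figures from the paper.

# Illustrative series model: the system fails if either the hardware or the
# software fails; failure rates (per hour) are hypothetical.
lambda_hw = 2.0e-5   # assumed hardware failure rate
lambda_sw = 5.0e-5   # assumed software failure rate
lambda_sys = lambda_hw + lambda_sw
mtbf_hours = 1.0 / lambda_sys
print(f"system failure rate: {lambda_sys:.1e}/h, MTBF: {mtbf_hours:,.0f} h")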


Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance | 1997

Quantitative reliability and availability assessment for critical systems including software

Myron Hecht; Dong Tang; Herbert Hecht; Robert Brill

In many cases, it is possible to derive a quantitative reliability or availability assessment for systems containing software with the appropriate use of system-level measurement-based modeling and supporting data. This paper demonstrates the system-level measurement-based approach using a simplified safety protection system example. The approach is contrasted with other software reliability prediction methodologies. The treatment of multiple correlated and common-mode failures, systematic failures, and degraded states is also discussed. Finally, a tool called MEADEP, which is now under development, is described. The objective of the tool is to reduce the system-level measurement-based approach to a practical task that can be performed on systems with element failure rates as low as 10^-6 per hour.
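
For readers unfamiliar with measurement-based availability modeling, the following is a hedged two-state sketch (not the MEADEP tool) of how an element failure rate of the order cited above combines with an assumed repair time into a steady-state availability figure; the repair time is an invented assumption.

# Two-state (up/down) availability calculation with hypothetical inputs.
lambda_fail = 1.0e-6        # failures per hour (order of magnitude cited in the abstract)
mttr_hours = 8.0            # assumed mean time to repair
mttf_hours = 1.0 / lambda_fail
availability = mttf_hours / (mttf_hours + mttr_hours)
print(f"steady-state availability: {availability:.8f}")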


Reliability and Maintainability Symposium | 2006

Prognostics for electronic equipment: an economic perspective

Herbert Hecht

Prognostics are not currently in wide use for electronic equipment, whereas they are an established feature for many mechanical components. The paper examines the technical and economic factors that underlie this disparity and concludes that prognostics may be beneficial for electronics where the cost of the instrumentation is low, the prognostic technique provides broad coverage, and the difference between the cost of pre-planned maintenance and unscheduled maintenance is high.
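
As a hedged back-of-the-envelope illustration of the economic argument (not figures from the paper), the sketch below compares the expected savings from converting unscheduled repairs into pre-planned maintenance against the cost of the prognostic instrumentation; every number is an assumption.

# Hypothetical numbers only: expected life-cycle savings from prognostics
# versus the cost of adding the prognostic instrumentation.
failures_over_life = 3              # expected failures covered during the service life
cost_unscheduled = 20_000.0         # cost of an unscheduled removal/repair
cost_preplanned = 5_000.0           # cost of the same repair done as planned maintenance
coverage = 0.7                      # fraction of failures the prognostic actually catches
instrumentation_cost = 10_000.0

savings = failures_over_life * coverage * (cost_unscheduled - cost_preplanned)
print(f"expected savings {savings:,.0f} vs instrumentation cost {instrumentation_cost:,.0f}")
print("beneficial" if savings > instrumentation_cost else "not beneficial")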


Reliability and Maintainability Symposium | 2004

Computer aided software FMEA for unified modeling language based software

Herbert Hecht; Xuegao An; Myron Hecht

Model-based software development, particularly when it utilizes unified modeling language (UML) tools, provides artifacts that make programs more transparent. We use these capabilities to automate major steps in the generation of a software FMEA. Automation not only reduces the labor required but also makes the process repeatable and removes many subjective decisions that can impair the credibility of a software FMEA. The computer-aided software FMEA discussed in this paper can be the central organizing element for the verification and validation (V&V) of embedded software for real-time systems. The adoption of this technique benefits budgets because V&V frequently consumes the majority of the development resources for embedded software. After reviewing prior efforts in establishing a procedure for software FMEA, we describe our approach for two life cycle phases: concept and design/implementation. Then we discuss the application of the computer-aided FMEA to software V&V and identify areas for further research.
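
The paper's tooling is not reproduced here; the following is a hedged Python sketch of the general idea of generating FMEA worksheet rows mechanically from elements exported from a UML model. The element structure, the failure-mode catalogue, and the example component are hypothetical.

# Rough sketch: for every output of every modeled operation, emit one FMEA
# row per generic failure mode; the analyst then fills in the effects.
from dataclasses import dataclass

@dataclass
class Operation:
    component: str
    name: str
    outputs: list

# Generic failure modes applied to every operation output (hypothetical list).
FAILURE_MODES = ["value out of range", "stale value", "no output produced"]

def generate_fmea_rows(operations):
    rows = []
    for op in operations:
        for out in op.outputs:
            for mode in FAILURE_MODES:
                rows.append({
                    "component": op.component,
                    "operation": op.name,
                    "item": out,
                    "failure mode": mode,
                    "local effect": "TBD by analyst",
                })
    return rows

model = [Operation("SpeedController", "computeCommand", ["throttle_cmd"])]
for row in generate_fmea_rows(model):
    print(row)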


International Symposium on Software Reliability Engineering | 1997

An approach to measuring and assessing dependability for critical software systems

Dong Tang; Herbert Hecht

Traditional software testing methods combined with probabilistic models cannot measure and assess dependability for software that requires very high reliability (failure rate < 10^-6 per hour) and availability (> 0.999999). This paper proposes a novel approach, drawing on findings and methods that have been described individually but have never been combined, applied in the late testing phase or early operational phase, to quantify dependability for a category of critical software with such high requirements. The concepts that are integrated are: operational profile, rare conditions, importance sampling, stress testing, and measurement-based dependability evaluation. In the approach, importance sampling is applied on the operational profile to guide the testing of critical operations of the software, thereby accelerating the occurrence of rare conditions which have been shown to be a leading cause of failure in critical systems. The failure rates measured in the testing are then transformed to those that would occur in normal operation by the likelihood ratio function of the importance sampling theory, and finally dependability for the tested software system is evaluated by using measurement-based dependability modeling techniques. When the acceleration factor is large (over 100), which is typical for a category of software of interest, it is possible to quantify a very high reliability or availability in a reasonable test duration. Some feasible methods to implement the approach are discussed based on real data.
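
As a hedged numerical sketch of the likelihood-ratio transformation described above (not the authors' data or tool): rare conditions are over-represented in the test profile, and each observed failure is weighted by the ratio of its operational probability to its test probability. The profiles, test duration, and failure counts below are invented for illustration.

# Weight each failure by p_operational / p_test for the condition under which
# it occurred, then divide by the test duration to estimate the operational
# failure rate.
p_operational = {"nominal": 0.999, "rare": 0.001}   # assumed operational profile
p_test        = {"nominal": 0.5,   "rare": 0.5}     # biased (stressed) test profile

test_hours = 1000.0
failures_observed = {"nominal": 0, "rare": 2}        # failures seen per condition in test

rate = sum(
    count * (p_operational[c] / p_test[c]) for c, count in failures_observed.items()
) / test_hours
print(f"estimated operational failure rate: {rate:.2e} per hour")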


IEEE Aerospace Conference | 2000

Use of importance sampling and related techniques to measure very high reliability software

Myron Hecht; Herbert Hecht

Computer-based control systems have grown more complex over the past two decades. Thus, the software aspects of system reliability are an increasingly important concern. Current methods of software and system reliability prediction, whether measurement based or incorporating reliability growth models, cannot accurately predict failure rates better than 10^-6 per mission hour. This paper describes a new methodology for more accurately predicting failure rates of very high reliability systems. The methodology enhances conventional measurement-based reliability assessment with a method incorporating the results of stress testing called importance sampling. By means of importance sampling in conjunction with a system model, acceleration factors can be associated with stress testing, much as is currently done with elevated-temperature life testing of hardware components.
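
A hedged illustration of the acceleration-factor idea (all numbers invented): if the stressed test profile spends a much larger fraction of time in a failure-prone rare condition than operation does, the test time needed to gather evidence about a very low failure rate shrinks by roughly that ratio.

# Hypothetical acceleration factor from biasing the test profile toward a
# rare, failure-prone condition.
p_rare_operational = 1.0e-4    # fraction of operational time in the rare condition
p_rare_test = 0.05             # fraction of test time in the rare condition
acceleration = p_rare_test / p_rare_operational

target_rate = 1.0e-6           # failure rate (per hour) we want evidence about
hours_unaccelerated = 1.0 / target_rate
hours_accelerated = hours_unaccelerated / acceleration
print(f"acceleration factor: {acceleration:.0f}")
print(f"test hours without acceleration: {hours_unaccelerated:,.0f}")
print(f"test hours with acceleration:    {hours_accelerated:,.0f}")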


High Assurance Systems Engineering | 1997

Toward more effective testing for high assurance systems

Herbert Hecht; Myron Hecht; Dolores R. Wallace

The objective of the paper is to reduce the cost of testing software in high assurance systems. It is at present a very expensive activity and one for which there are no generally accepted guidelines. A part of the problem is that failure mechanisms for software are not as readily understood as those for hardware, and that the experience of any one project does not provide enough data to improve the understanding. A more comprehensive attack on the high cost of software test requires pooling of fault and failure data from many projects, and an initiative by NIST that can furnish the basis for the data collection and analysis is described.


IEEE Aerospace Conference | 2000

Adaptive fault tolerance for spacecraft

M. Hecht; Herbert Hecht; E. Shokri

This paper describes the design and implementation of software infrastructure for real-time fault tolerance for applications on long duration deep space missions. The infrastructure has advanced capabilities for Adaptive Fault Tolerance (AFT), i.e., the ability to change the recovery strategy based on the failure history, available resources, and the operating environment. The AFT technology can accommodate adaptive or fixed recovery strategies. Adaptive fault tolerance allows the recovery strategy to be changed on the basis of the mission phase, failure history, and environment. For example, during a phase when power consumption must be minimized, there would be only one processor in operation. Thus, the recovery strategy would be to restart and retry. On the other hand, if the mission phase were in a time-critical mode (e.g., orbital insertion, encounter, etc.), then, multiple processors would be running, and the recovery strategy would be to switch from a leader copy to a follower copy of the control software. In a fixed recovery strategy, there is a specified redundant resource which is committed when certain failure conditions occur. The most obvious example of a fixed recovery strategy is to switch over to the standby processor in the event of any failure of the active processor.
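
As a hedged sketch (not the paper's AFT infrastructure) of how a recovery strategy might be selected from the mission phase, following the examples in the abstract: restart-and-retry when only one processor is powered, switch to a follower copy in time-critical phases. The phase names and the policy table are hypothetical.

# Map mission phase to recovery strategy; unknown phases fall back to the
# fixed restart-and-retry strategy.
RECOVERY_POLICY = {
    "cruise_low_power": "restart_and_retry",      # single processor powered
    "orbital_insertion": "switch_to_follower",    # time-critical, hot spare running
    "encounter": "switch_to_follower",
}

def select_recovery(mission_phase, default="restart_and_retry"):
    return RECOVERY_POLICY.get(mission_phase, default)

print(select_recovery("orbital_insertion"))   # switch_to_follower
print(select_recovery("cruise_low_power"))    # restart_and_retry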


IEEE Aerospace Conference | 2006

Why prognostics for avionics

Herbert Hecht

Prognostics, by providing early information on potential equipment failures, permit maintenance to be transformed from a purely responsive (and hence largely uncontrollable) activity into one that can be planned and controlled. The ability to plan and control maintenance activities is becoming increasingly important because of the shortage of skilled personnel and the complexity of current avionics products. The benefits of prognostics are well established for mechanical and electromechanical equipment, and this motivates the extension of the technique to the electronics field. But there are very large differences between mechanical and electronic components in failure mechanisms, in the design process, and in the physical dimensions of the parts subject to failure that preclude direct migration of the prognostic techniques. These differences are examined in detail, and a procedure for developing prognostics specifically targeted at solid-state electronics is suggested.
