Publication


Featured research published by Steve Roach.


IEEE Transactions on Software Engineering | 2004

A taxonomy and catalog of runtime software-fault monitoring tools

Nelly Delgado; Ann Q. Gates; Steve Roach

A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present a taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software-fault monitoring approaches. The taxonomy categorizes runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.
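
As a rough illustration of the three elements the taxonomy treats as essential, the following Python sketch wires together a specification predicate, a monitoring mechanism, and an event handler. It is not from the paper; the property, monitor, and handler names are invented for illustration.

```python
# Minimal sketch of the three elements the taxonomy identifies
# (not from the paper; names and the sample property are illustrative).

from typing import Callable, Dict, List

# 1. "Specification": a property expressed as a predicate over program state.
def queue_within_bounds(state: Dict[str, int]) -> bool:
    return 0 <= state["queue_len"] <= state["queue_cap"]

# 2. "Event handler": captures and communicates monitoring results.
def log_violation(prop_name: str, state: Dict[str, int]) -> None:
    print(f"VIOLATION of {prop_name}: state={state}")

# 3. "Monitoring mechanism": oversees execution by checking the properties
#    at designated observation points.
class Monitor:
    def __init__(self, props: List[Callable[[Dict[str, int]], bool]], handler):
        self.props = props
        self.handler = handler

    def observe(self, state: Dict[str, int]) -> None:
        for prop in self.props:
            if not prop(state):
                self.handler(prop.__name__, state)

# The monitored program reports its state after each step.
monitor = Monitor([queue_within_bounds], log_violation)
monitor.observe({"queue_len": 3, "queue_cap": 8})    # compliant
monitor.observe({"queue_len": 11, "queue_cap": 8})   # triggers the handler
```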


Electronic Notes in Theoretical Computer Science | 2001

DynaMICs: Comprehensive support for run-time monitoring

Ann Q. Gates; Steve Roach; Oscar Mondragon; Nelly Delgado

Software engineering strives to enable the economic construction of software systems that behave reliably, predictably, and safely. In other engineering disciplines, safety is assured in part by detailed monitoring of processes. In software, we may achieve some level of confidence in the operation of programs by monitoring their execution. DynaMICs is a software tool that facilitates the collection and use of constraints for software systems. In addition, it supports traceability by mapping constraints to system artifacts. Constraint specifications are stored separately from code; constraint-monitoring code is automatically generated from the specifications and inserted into the program at appropriate places; and constraints are verified at execution time. These constraint checks are triggered by changes made to variable values. We describe the architecture of DynaMICs, discuss alternative verification techniques, and outline research directions for the DynaMICs project.
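
A minimal sketch of the idea of execution-time constraint checks triggered by changes to variable values, in the spirit of DynaMICs but not its actual generated code; the Tank class and the pressure constraint are invented for illustration.

```python
# Sketch (not DynaMICs' generated code): a constraint check fired whenever
# a monitored variable's value changes, with the constraint kept separate
# from the application logic.

class Monitored:
    """Attribute descriptor that re-checks a constraint on every assignment."""
    def __init__(self, name, constraint):
        self.name = name
        self.constraint = constraint  # constraint stored apart from the code

    def __set_name__(self, owner, attr):
        self.attr = "_" + attr

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.attr)

    def __set__(self, obj, value):
        if not self.constraint(value):
            raise ValueError(f"constraint violated for {self.name}: {value!r}")
        setattr(obj, self.attr, value)

class Tank:
    # Constraint: pressure must stay within the rated range.
    pressure = Monitored("pressure", lambda p: 0.0 <= p <= 150.0)

t = Tank()
t.pressure = 95.0      # accepted
# t.pressure = 300.0   # would raise ValueError at execution time
```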


technical symposium on computer science education | 2011

Setting the stage for computing curricula 2013: computer science -- report from the ACM/IEEE-CS joint task force

Mehran Sahami; Mark Guzdial; Andrew D. McGettrick; Steve Roach

Following a roughly 10 year cycle, the Computing Curricula volumes have helped to set international curricular guidelines for undergraduate programs in computing. In the summer of 2010, planning for the next volume in the series, Computer Science 2013, began. This panel seeks to update and engage the SIGCSE community on the Computer Science 2013 effort. The development of curricular guidelines in Computer Science is particularly challenging given the rapid evolution and expansion of the field. Moreover, the growing diversity of topics in Computer Science and the integration of computing with other disciplines create additional challenges and opportunities in defining computing curricula. As a result, it is particularly important to engage the broader computer science education community in a dialog to better understand new opportunities, local needs, and novel successful models of computing curriculum. The last complete Computer Science curricular volume was released in 2001 [3] and followed by a review effort that concluded in 2008 [2]. While the review helped to update some of the knowledge units in the 2001 volume, it was not aimed at producing an entirely new curricular volume and deferred some of the more significant questions that arose at the time. The Computer Science 2013 effort seeks to provide a new volume reflecting the current state of the field and highlighting promising future directions through revisiting and redefining the knowledge units in CS, rethinking the essentials necessary for a CS curriculum, and identifying working exemplars of courses and curricula along these lines.


technical symposium on computer science education | 2012

Computer science curriculum 2013: reviewing the strawman report from the ACM/IEEE-CS task force

Mehran Sahami; Steve Roach; Ernesto Cuadros-Vargas; David Reed

Beginning over 40 years ago with the publication of Curriculum 68, the major professional societies in computing--ACM and IEEE-Computer Society--have sponsored various efforts to establish international curricular guidelines for undergraduate programs in computing. As the field has grown and diversified, so too have the recommendations for curricula. There are now guidelines for Computer Engineering, Information Systems, Information Technology, and Software Engineering in addition to Computer Science. These volumes are updated regularly with the aim of keeping computing curricula modern and relevant. In the Fall of 2010, work on the next volume in the series, Computer Science 2013 (CS2013), began. Considerable work on the new volume has already been completed and a first draft of the CS2013 report (known as the Strawman report) will be complete by the beginning of 2012. This panel seeks to update and engage the SIGCSE community in providing feedback on the Strawman report, which will be available shortly prior to the SIGCSE conference.


technical symposium on computer science education | 2013

ACM/IEEE-CS computer science curriculum 2013: reviewing the ironman report

Mehran Sahami; Steve Roach; Ernesto Cuadros-Vargas; Richard LeBlanc

For over 40 years, the ACM and IEEE-Computer Society have sponsored the creation of international curricular guidelines for undergraduate programs in computing. These Computing Curricula volumes are updated on a roughly 10-year cycle, with the aim of keeping curricula modern and relevant. The next volume in the series, Computer Science 2013 (CS2013), is currently in progress. This panel seeks to update and engage the SIGCSE community in providing feedback on a complete draft of the CS2013 report (called the Ironman report), which will be released shortly before SIGCSE. Since the Ironman report is the penultimate draft of the CS2013 report, this panel is an especially important venue for starting the last round of feedback that will impact the final CS2013 curricular guidelines.


international workshop on model checking software | 2005

Verifying pattern-generated LTL formulas: a case study

Salamah Salamah; Ann Q. Gates; Steve Roach; Oscar Mondragon

The Specification Pattern System (SPS) and the Property Specification (Prospec) tool assist a user in generating formal specifications in Linear Temporal Logic (LTL), as well as other languages, from property patterns and scopes. Patterns are high-level abstractions that provide descriptions of common properties, and scopes describe the extent of program execution over which the property holds. The purpose of the work presented in this paper is to verify that the generated LTL formulas match the natural language descriptions, timelines, and traces of computation that describe the pattern and scope. The LTL formulas were verified using the Spin model checker on test cases developed using boundary value analysis and equivalence class testing strategies. A test case is an LTL formula and a sequence of Boolean valuations. The LTL formulas were those generated from SPS and Prospec. The Boolean valuations of propositions in the LTL formula are generated by a deterministic, single-threaded Promela program that was run using the software model checker Spin. For each pattern, a suite of test cases was developed. The experiments uncovered several errors in both the SPS-generated and the Prospec-generated formulas.
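
To make the test-case notion concrete, here is a small Python sketch, not the paper's Promela/Spin setup, that checks one pattern-generated property (the Response pattern under a Global scope) against a finite sequence of Boolean valuations; the finite-trace semantics used here is a simplification for illustration.

```python
# Sketch of the test-case idea: an LTL-style property checked against a
# sequence of Boolean valuations. Finite-trace semantics is used only as
# an illustration; the paper's experiments ran Promela programs in Spin.

from typing import Dict, List

Trace = List[Dict[str, bool]]

def response_holds(trace: Trace, p: str, q: str) -> bool:
    """Globally (p -> eventually q), evaluated on a finite trace."""
    for i, state in enumerate(trace):
        if state[p] and not any(s[q] for s in trace[i:]):
            return False
    return True

# One test case: the "Response" pattern with propositions p and q.
trace = [
    {"p": False, "q": False},
    {"p": True,  "q": False},
    {"p": False, "q": True},   # the pending p is eventually answered by q
]
print(response_holds(trace, "p", "q"))       # True
print(response_holds(trace[:2], "p", "q"))   # False: p is never answered
```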


Advances in Computers | 2003

Transformation-Oriented Programming: A Development Methodology for High Assurance Software

Victor L. Winter; Steve Roach; Gregory L. Wickstrom

A software development paradigm known as Transformation-Oriented Programming (TOP) is introduced. In TOP, software development consists of constructing a sequence of transformations capable of systematically deriving a software implementation from a given formal specification. As such, TOP falls under the category of formal methods. The general theory and techniques upon which TOP is built are presented. The High Assurance Transformation System (HATS) is described, along with the use of the HATS tool to implement a portion of the functionality of a classloader needed by the Sandia Secure Processor (SSP).
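
The core idea of deriving an implementation by composing transformations can be sketched in a few lines of Python; this toy example is not HATS, and the term representation and rewrite rules are invented for illustration.

```python
# Sketch (not HATS): derive an "implementation" by applying a sequence of
# rewrite transformations to a term that starts out as a toy specification.

# A "specification" term: nested tuples of the form (op, args...).
spec = ("square", ("inc", 4))

def expand_inc(term):
    """Rewrite (inc, x) into explicit addition."""
    if isinstance(term, tuple) and term[0] == "inc":
        return ("add", expand_inc(term[1]), 1)
    if isinstance(term, tuple):
        return (term[0],) + tuple(expand_inc(t) for t in term[1:])
    return term

def expand_square(term):
    """Rewrite (square, x) into explicit multiplication."""
    if isinstance(term, tuple) and term[0] == "square":
        x = expand_square(term[1])
        return ("mul", x, x)
    if isinstance(term, tuple):
        return (term[0],) + tuple(expand_square(t) for t in term[1:])
    return term

# The "development" is the composition of transformations applied to the spec.
transformations = [expand_inc, expand_square]
term = spec
for transform in transformations:
    term = transform(term)
print(term)  # ('mul', ('add', 4, 1), ('add', 4, 1))
```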


automated software engineering | 2005

Automated Procedure Construction for Deductive Synthesis

Steve Roach; Jeffrey Van Baalen

Deductive program synthesis systems based on automated theorem proving offer the promise of software that is correct by construction. However, the difficulty encountered in constructing usable deductive synthesis systems has prevented their widespread use. Amphion is a real-world, domain-independent, completely automated program synthesis system. It is specialized to specific applications through the creation of an operational domain theory and a specialized deductive engine. This paper describes an experiment aimed at making the construction of usable Amphion applications easier. The software system Theory Operationalization for Program Synthesis (TOPS) has a library of decision procedure templates with a theory schema for each procedure. TOPS identifies sets of axioms in the domain theory that are instances of the theory schemas associated with library procedures. For each procedure instance, TOPS uses iterated partial deduction to augment the procedure with the capability to construct ground terms for deductive synthesis. Synthesized procedures are interfaced to a resolution theorem prover. Axioms in the original domain theory that are implied by the synthesized procedures are removed. The inference rules of the theorem prover have been extended so that during deductive synthesis, each procedure is invoked to test conjunctions of literals in the language of the theory of that procedure. When possible, the procedure generates ground terms and binds them to variables in a problem specification. These terms are program fragments. Experiments show that the procedures synthesized by TOPS can reduce theorem proving search at least as much as hand tuning of the deductive synthesis system.
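
As a loose illustration (not TOPS itself) of a procedure that inspects a conjunction of literals and binds specification variables to ground terms, consider the following toy equality procedure; the literal encoding and variable convention are invented for this sketch.

```python
# Toy illustration (not TOPS): a "procedure" that inspects a conjunction of
# literals in its theory's language and, when possible, binds specification
# variables to ground terms (the program fragments).

# Literals are tuples like ("eq", "X", ("f", "a")); strings starting with an
# uppercase letter are variables, everything else is ground.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def is_ground(t):
    if is_var(t):
        return False
    if isinstance(t, tuple):
        return all(is_ground(arg) for arg in t[1:])
    return True

def equality_procedure(literals):
    """Return variable bindings derivable from ground equalities."""
    bindings = {}
    for lit in literals:
        if lit[0] == "eq":
            _, lhs, rhs = lit
            if is_var(lhs) and is_ground(rhs):
                bindings[lhs] = rhs
            elif is_var(rhs) and is_ground(lhs):
                bindings[rhs] = lhs
    return bindings

conjunction = [("eq", "X", ("f", "a")), ("eq", ("g", "b"), "Y")]
print(equality_procedure(conjunction))
# {'X': ('f', 'a'), 'Y': ('g', 'b')}
```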


frontiers in education conference | 2001

The impact of technology on engineering and computer science education in the 21st Century: changing classroom instructional methods

Steve Roach

The college instructor of the future will certainly have more technology available for use in the classroom. The decision to utilize instructional technology should be based on sound principles. Three guiding principles are discussed in this paper: using technology to enable, planning to support technology, and using technology judiciously.


high-assurance systems engineering | 2007

Verification of Automatically Generated Pattern-Based LTL Specifications

Salamah Salamah; Ann Q. Gates; Vladik Kreinovich; Steve Roach

The use of property classifications and patterns, i.e., high-level abstractions that describe common behavior, has been shown to assist practitioners in generating formal specifications that can be used in formal verification techniques. The specification pattern system (SPS) provides descriptions of a collection of patterns. The extent of program execution over which a pattern must hold is described by the notion of scope. SPS provides a manual technique for obtaining formal specifications from a pattern and a scope. The Property Specification tool (Prospec) extends SPS by introducing composite propositions (CPs), a classification for defining sequential and concurrent behavior to represent pattern and scope parameters, and provides tool support to users. This work provides general templates for generating formal specifications in linear temporal logic (LTL) for all pattern, scope, and CP combinations. In addition, the work explains the methodology for the verification of the correctness of these templates.
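
For reference, a few of the widely cited SPS templates for single-proposition parameters look like the following; the paper's general templates additionally cover composite-proposition parameters, which are not shown here. The Python mapping below is illustrative only, and the spelled-out formulas reflect the standard SPS pattern/scope definitions rather than anything specific to this paper.

```python
# A small sample of standard SPS pattern/scope templates (single-proposition
# parameters only), written as LTL strings in Spin-like syntax:
# [] = always, <> = eventually, U = until.

TEMPLATES = {
    ("absence",      "globally"): "[](!P)",
    ("existence",    "globally"): "<>P",
    ("universality", "globally"): "[]P",
    ("response",     "globally"): "[](P -> <>Q)",
    ("absence",      "before R"): "<>R -> (!P U R)",
}

def instantiate(pattern, scope, **props):
    """Substitute concrete proposition names into a template."""
    formula = TEMPLATES[(pattern, scope)]
    for placeholder, prop in props.items():
        formula = formula.replace(placeholder, prop)
    return formula

print(instantiate("response", "globally", P="request", Q="grant"))
# [](request -> <>grant)
```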

Collaboration


Dive into Steve Roach's collaborations.

Top Co-Authors

Ann Q. Gates (University of Texas at El Paso)
Salamah Salamah (University of Texas at El Paso)
Victor L. Winter (University of Nebraska Omaha)
Fares Fraij (University of Texas at El Paso)
Ernesto Cuadros-Vargas (The Catholic University of America)
Omar Ochoa (University of Texas at El Paso)
Oscar Mondragon (University of Texas at El Paso)
Cuauhtemoc Munoz (University of Texas at El Paso)