Publication


Featured research published by Stephen H. Edwards.


Technical Symposium on Computer Science Education | 2004

Using software testing to move students from trial-and-error to reflection-in-action

Stephen H. Edwards

Introductory computer science students rely on a trial-and-error approach to fixing errors and debugging for too long. Moving to a reflection-in-action strategy can help students become more successful. Traditional programming assignments are usually assessed in a way that ignores the skills needed for reflection in action, but software testing promotes the hypothesis-forming and experimental validation that are central to this mode of learning. By changing the way assignments are assessed, so that students are responsible for demonstrating correctness through testing and are then assessed on how well they achieve this goal, it is possible to reinforce desired skills. Automated feedback can also play a valuable role in encouraging students while also showing them where they can improve.
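To make the contrast concrete, here is a minimal JUnit 4 sketch of what reflection-in-action looks like in test form: each test states a hypothesis and a predicted outcome before the code is run. The ChangeMaker exercise and its coinsFor method are hypothetical names for illustration, not from the paper.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Minimal solution under test (hypothetical change-making exercise).
class ChangeMaker {
    static int coinsFor(int cents) {
        int[] coins = {25, 10, 5, 1};
        int total = 0;
        for (int coin : coins) {
            total += cents / coin;
            cents %= coin;
        }
        return total;
    }
}

// Each test records a hypothesis about the code's behavior and a predicted
// result before the program is run, so the suite becomes a series of
// deliberate experiments rather than trial-and-error tinkering.
public class ChangeMakerTest {

    @Test
    public void zeroCentsNeedsNoCoins() {
        // Hypothesis: no change owed means no coins returned.
        assertEquals(0, ChangeMaker.coinsFor(0));
    }

    @Test
    public void ninetyNineCentsIsTheWorstCase() {
        // Hypothesis: 99 = 3 quarters + 2 dimes + 4 pennies = 9 coins.
        assertEquals(9, ChangeMaker.coinsFor(99));
    }
}
```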


Software: Practice and Experience | 2005

Model Variables: Cleanly Supporting Abstraction in Design By Contract

Yoonsik Cheon; Gary T. Leavens; Murali Sitaraman; Stephen H. Edwards

In design by contract (DBC), assertions are typically written using program variables and query methods. The lack of separation between program code and assertions is confusing, because readers do not know what code is intended for use in the program and what code is only intended for specification purposes. This lack of separation also creates a potential runtime performance penalty, even when runtime assertion checks are disabled, due to both the increased memory footprint of the program and the execution of code maintaining the part of the program's state intended for use in specifications. To solve these problems, we present a new way of writing and checking DBC assertions without directly referring to concrete program states, using ‘model’, i.e. specification-only, variables and methods. The use of model variables and methods not only avoids the problems mentioned above, but also allows one to write assertions that are abstract, concise, and independent of representation details, and hence more readable and maintainable. We implemented these features in the runtime assertion checker for the Java Modeling Language (JML), but the approach could also be implemented in other DBC tools.
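A minimal sketch of the idea in JML-style notation, assuming a hypothetical Turnstile class: the model variable count exists only in the specification, and a represents clause ties it to the hidden concrete field, so the assertions never mention the representation.

```java
// Hypothetical class specified with a JML model variable: `count` exists
// only in the specification, and the represents clause maps the private
// concrete field onto it, keeping assertions free of representation detail.
public class Turnstile {
    private int entries; // concrete representation, hidden from clients

    //@ public model instance int count;     // specification-only variable
    //@ private represents count = entries;  // ties model state to the field

    //@ ensures count == \old(count) + 1;
    public void pass() {
        entries++;
    }

    //@ ensures \result == count;
    public /*@ pure @*/ int reading() {
        return entries;
    }
}
```

When runtime assertion checking is disabled, specification-only declarations like count carry no cost in the compiled program, which is the footprint benefit the abstract describes.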


ACM Transactions on Computing Education / ACM Journal of Educational Resources in Computing | 2003

Improving student performance by evaluating how well students test their own programs

Stephen H. Edwards

Students need to learn more software testing skills. This paper presents an approach to teaching software testing in a way that will encourage students to practice testing skills in many classes and give them concrete feedback on their testing performance, without requiring a new course, any new faculty resources, or a significant number of lecture hours in each course where testing will be practiced. The strategy is to give students basic exposure to test-driven development, and then provide an automated tool that will assess student submissions on-demand and provide feedback for improvement. This approach has been demonstrated in an undergraduate programming languages course using a prototype tool. The results have been positive, with students expressing appreciation for the practical benefits of test-driven development on programming assignments. Experimental analysis of student programs shows a 28% reduction in defects per thousand lines of code.
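The on-demand feedback loop can be pictured with a small driver, sketched here using JUnit 4's programmatic runner and reusing the hypothetical ChangeMakerTest from above. This is an illustration under assumed names, not the paper's prototype tool.

```java
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

// Illustration only, not the paper's prototype: run the student's own
// JUnit suite on demand and report the outcome as immediate feedback.
public class SubmissionCheck {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(ChangeMakerTest.class);
        for (Failure failure : result.getFailures()) {
            System.out.println("FAIL: " + failure.getTestHeader());
        }
        int run = result.getRunCount();
        int failed = result.getFailureCount();
        double passRate = run == 0 ? 0.0 : 100.0 * (run - failed) / run;
        System.out.printf("Tests run: %d, failed: %d, pass rate: %.0f%%%n",
                run, failed, passRate);
    }
}
```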


Conference on Object-Oriented Programming Systems, Languages, and Applications | 2003

Rethinking computer science education from a test-first perspective

Stephen H. Edwards

Despite our best efforts and intentions as educators, student programmers continue to struggle in acquiring comprehension and analysis skills. Students believe that once a program runs on sample data it is correct, that most programming errors are reported by the compiler, and that when a program misbehaves, shuffling statements and tweaking expressions to see what happens is the best debugging approach. This paper presents a new vision for computer science education centered around the use of test-driven development in all programming assignments, from the beginning of CS1. A key element of the strategy is comprehensive, automated evaluation of student work, in terms of correctness, the thoroughness and validity of the student's tests, and an automatic coding style assessment performed using industrial-strength tools. By systematically applying the strategy across the curriculum as part of a student's regular programming activities, and by providing rapid, concrete, useful feedback that students find valuable, it is possible to induce a cultural shift in how students behave.
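One way the "validity of the student's tests" could be operationalized, sketched under assumed names: replay each expected value the student asserts against a known-correct reference solution, so a test that disagrees with the reference is flagged as a wrong expectation rather than a program defect. The reference class and the test cases below are hypothetical.

```java
// Hypothetical reference solution for the change-making exercise.
class ReferenceChangeMaker {
    static int coinsFor(int cents) {
        int[] coins = {25, 10, 5, 1};
        int total = 0;
        for (int coin : coins) {
            total += cents / coin;
            cents %= coin;
        }
        return total;
    }
}

// Sketch of a validity check: each expected value a student asserts is
// replayed against the reference; disagreement flags a wrong expectation
// in the test itself rather than a defect in the student's program.
public class ValidityCheck {
    public static void main(String[] args) {
        int[][] cases = { {0, 0}, {99, 9}, {30, 3} }; // {input, student's expected}
        for (int[] c : cases) {
            int reference = ReferenceChangeMaker.coinsFor(c[0]);
            String verdict = (c[1] == reference) ? "valid" : "invalid expectation";
            System.out.printf("coinsFor(%d): student expects %d, reference gives %d -> %s%n",
                    c[0], c[1], reference, verdict);
        }
    }
}
```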


Technical Symposium on Computer Science Education | 2007

Algorithm visualization: a report on the state of the field

Clifford A. Shaffer; Matthew Cooper; Stephen H. Edwards

We present our findings on the state of the field of algorithm visualization, based on extensive search and analysis of links to hundreds of visualizations. We seek to answer questions such as how content is distributed among topics, who created algorithm visualizations and when, the overall quality of available visualizations, and how visualizations are disseminated. We have built a wiki that currently catalogs over 350 algorithm visualizations, contains the beginnings of an annotated bibliography on algorithm visualization literature, and provides information about researchers and projects. Unfortunately, we found that most existing algorithm visualizations are of low quality, and the content coverage is skewed heavily toward easier topics. There are no effective repositories or organized collections of algorithm visualizations currently available. Thus, the field appears in need of improvement in dissemination of materials, informing potential developers about what is needed, and propagating known best practices for creating new visualizations.


International Computing Education Research Workshop | 2009

Comparing effective and ineffective behaviors of student programmers

Stephen H. Edwards; Jason Snyder; Manuel A. Pérez-Quiñones; Anthony Allevato; Dongkwan Kim; Betsy Tretola

This paper reports on a quantitative evaluation of five years of data collected in the first three programming courses at Virginia Tech. The dataset involves a total of 89,879 assignment submissions by 1,101 different students. Assignment results were partitioned into two groups: scores above 80% (A/B) and scores below 80% (C/D/F). To investigate student behaviors that result in differing levels of achievement, all students who consistently received A/B scores and all students who consistently received C/D/F scores were removed from the dataset. A within-subjects comparison of the scores received by the remaining individuals was performed. Further, time and code-size data that is difficult to compare directly between different courses was normalized. This study revealed several significant results. When students received A/B scores, they started earlier and finished earlier than when the same students received C/D/F scores. They also wrote slightly more program code. They did not appear to spend any more time on their work, however. Approximately two-thirds of the A/B scores were received by individuals who started more than a day in advance of the deadline, while approximately two-thirds of the C/D/F scores were received by individuals who started on the last day or later. One possible explanation is that students who start earlier simply have more time to seek assistance when they get stuck.


Software Testing, Verification & Reliability | 2000

Black‐box testing using flowgraphs: an experimental assessment of effectiveness and automation potential

Stephen H. Edwards

A black-box testing strategy based on Zweben et al.'s specification-based test data adequacy criteria is explored. The approach focuses on generating a flowgraph from a component's specification and applying analogues of white-box strategies to it. An experimental assessment of the fault-detecting ability of test sets generated using this approach was performed for three of Zweben et al.'s criteria using mutation analysis. By using precondition, postcondition and invariant checking wrappers around the component under test, fault detection ratios competitive with white-box techniques were achieved. Experience with a prototype test set generator used in the experiment suggests that practical automation may be feasible.
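The checking wrappers can be pictured as a thin layer around the component under test. The BoundedQueue interface and its contract below are assumptions for illustration, not the components from the experiment.

```java
// Hypothetical component interface; the contract enforced below
// (0 <= size <= capacity, enqueue grows size by one) is assumed
// for illustration only.
interface BoundedQueue {
    void enqueue(int x);
    int dequeue();
    int size();
    int capacity();
}

// Sketch of a precondition/postcondition/invariant checking wrapper:
// contract violations surface as exceptions, so a faulty (for example,
// mutated) implementation is caught even when a generated test has no
// assertion of its own.
class CheckedQueue implements BoundedQueue {
    private final BoundedQueue inner;

    CheckedQueue(BoundedQueue inner) {
        this.inner = inner;
    }

    public void enqueue(int x) {
        if (inner.size() >= inner.capacity())   // precondition
            throw new IllegalStateException("precondition violated: queue is full");
        int oldSize = inner.size();
        inner.enqueue(x);
        if (inner.size() != oldSize + 1)        // postcondition
            throw new AssertionError("postcondition violated: size must grow by one");
        checkInvariant();
    }

    public int dequeue() {
        if (inner.size() == 0)                  // precondition
            throw new IllegalStateException("precondition violated: queue is empty");
        int oldSize = inner.size();
        int result = inner.dequeue();
        if (inner.size() != oldSize - 1)        // postcondition
            throw new AssertionError("postcondition violated: size must shrink by one");
        checkInvariant();
        return result;
    }

    public int size() { return inner.size(); }

    public int capacity() { return inner.capacity(); }

    private void checkInvariant() {             // representation invariant
        if (inner.size() < 0 || inner.size() > inner.capacity())
            throw new AssertionError("invariant violated: 0 <= size <= capacity");
    }
}
```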


Conference on Object-Oriented Programming Systems, Languages, and Applications | 2003

Teaching software testing: automatic grading meets test-first coding

Stephen H. Edwards

A new approach to teaching software testing is proposed: students use test-driven development on programming assignments, and an automated grading tool assesses their testing performance and provides feedback. The basics of the approach are presented, along with screenshots of the system and a discussion of the use of industrial tools for grading Java programs.


ACM Sigsoft Software Engineering Notes | 1994

Part II: specifying components in RESOLVE

Stephen H. Edwards; Wayne D. Heym; Timothy J. Long; Murali Sitaraman; Bruce W. Weide

Conceptual modules may export two kinds of things for use in client programs: type families and operation families. We say “families” here because every RESOLVE module is generic, so a client must instantiate a module before using it. Instantiation has two parts: First you bind all of a conceptual module’s formal parameters to actuals which match the formals both in structure and in other specified properties; then you select an implementation for the concept and fix the realization’s (additional) parameters [Part III]. An instance created this way is called a facility. For a typical conceptual module that defines one type family and associated operation families, every instance defines a particular type and some particular operations whose specifications result, in effect, from replacing the formal parameters of the generic specification with the actuals for that instance.
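RESOLVE syntax itself is not shown in this excerpt. As a loose Java analogue (an analogy of mine, not RESOLVE), instantiating a conceptual module into a facility resembles binding the formal type parameter of a generic interface and selecting one realization of it.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Loose Java analogue of a RESOLVE facility (an analogy, not RESOLVE
// syntax): a generic "concept" declares the type and operation families,
// a realization implements it, and instantiation binds the formal type
// parameter and selects the realization, yielding a usable facility.
interface StackConcept<T> {
    void push(T x);
    T pop();
    int depth();
}

class DequeRealization<T> implements StackConcept<T> {
    private final Deque<T> items = new ArrayDeque<>();
    public void push(T x) { items.push(x); }
    public T pop() { return items.pop(); }
    public int depth() { return items.size(); }
}

class FacilityDemo {
    public static void main(String[] args) {
        // "Facility": formal parameter T bound to String, realization chosen.
        StackConcept<String> names = new DequeRealization<>();
        names.push("RESOLVE");
        System.out.println(names.pop() + ", depth now " + names.depth());
    }
}
```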


Integrating Technology into Computer Science Education | 2012

Exploring influences on student adherence to test-driven development

Kevin Buffardi; Stephen H. Edwards

Test-Driven Development (TDD) is a software development process with a test-first approach that shows promise for improving code quality. Our research addresses concerns raised in both academia and industry about a lack of motivation or acceptance in adopting TDD. In a CS2 class, we used an automated testing tool and post-class surveys to observe patterns of behavior in testing as well as changes in attitudes. We found significant positive outcomes for students following TDD. We also identified obstacles deterring students from adhering to TDD and discuss reasons and possible remedies.

Collaboration


An overview of Stephen H. Edwards's collaborations.

Top Co-Authors

Gary T. Leavens

University of Central Florida
