Publications


Featured research published by Jeff Offutt.


Software Testing, Verification & Reliability | 2014

Globalization—references and citations

Jeff Offutt

This issue has three exciting papers that show how test tools can help in areas ranging from model-based testing to embedded software to Web application software. The first, Tool Support for the Test Template Framework, by Cristiá, Albertengo, Frydman, Plüss, and Rodríguez Monetti, describes a new tool to support model-based testing with Z specifications. Their tool, Fastest, is open-source and available online. (Recommended by Paul Strooper.) The second, Model Checking Trampoline OS: A Case Study on Safety Analysis for Automotive Software, by Choi, presents a study of the use of model checking to check safety properties in automotive operating systems. The author was able to find evidence of safety problems in the Trampoline operating system. (Recommended by Jeff Offutt.) The third, Design and Industrial Evaluation of a Tool Supporting Semi-Automated Website Testing, by Mahmud, Cypher, Haber and Lau, presents experience from using a test automation tool, CoTester, in practical situations. They found that the tool was useful for both professional testers and non-professional testers, but most useful for the non-professional testers. (Recommended by Per Runeson.) I wrote about The Globalization of Software Engineering in a previous editorial [1] and followed up with a discussion on language skills to support globalization [2]. Another difference I have noticed is in how the scientific community uses citations and references, and more interestingly, which citations they use. Many of these differences are personal and individual, but some seem cultural. I will first discuss citations in general with some thoughts I share with my PhD students, then talk about some cultural differences from my experience. The first principle is that references must help readers understand the paper. Of course, this is so broad that it is not much help, but it is an important starting point. References have an important role in research papers. They need to explain what the paper is based on (context), indicate what the authors know about the subject, and summarise what the readers should know to understand the paper. Reviewers also use references as a proxy for the care the authors take with their research. Reviewers also expect certain rules to be followed. The most important is ethical: never reference something you have not read. A secondary citation, where we write something like (Parnas [4] ‘as cited by Burdell [15]’), indicates that you read Burdell’s paper, and he referenced Parnas’ paper. This should only be used when absolutely necessary, that is, when the original reference is unavailable. It is also important to list all authors in a reference list; ‘et al.’ is okay in the text, but if you leave the name of a person who reviews your paper out of the references, it will not help your chances of being accepted. Another expectation is that you write the authors’ names as they appeared in the published paper. So if someone changes his or her name, you should not update the old papers. The final note is about grammar. Citations are parenthetical elements, not nouns. That is, ‘as said in [52]’ is grammatically wrong and should be written as ‘as said by Liskov [52].’ This last one may be the most common mistake, made even by established scientists, probably because it is a convenient shortcut. These ideas are basic, and generally taught in high school and early college writing classes.
So our PhD students should already know these rules, and if not, should certainly absorb them before they ‘leave the nest’ and start independent research. A major question about references is: how many? The correct answer is, of course, to use exactly the references the paper needs and no more. But defining what ‘references the paper needs’ is clearly subjective. My general philosophy is that it is better to over-reference than under-reference. We are less likely to confuse readers with too much information than with too little. And as a famous


Software Testing, Verification & Reliability | 2013

The globalization of software engineering

Jeff Offutt

This issue has three intriguing papers. The first, Using Concepts of Content-based Image Retrieval to Implement Graphical Testing Oracles, by Delamaro, de Lourdes dos Santos Nunes and de Oliveira, presents a new way to automate the oracle function for programmes that produce images. (Recommended by Atif Memon.) The second, A Measurement-based Ageing Analysis of the JVM, by Cotroneo, Orlando, Pietrantuono and Russo, presents a practical exploration of the issue of aging software. Software is modeled as ‘aging’ by considering such things as the long-term depletion of resources from the operating system, incremental corruption of data and accumulation of numerical errors. This paper analyses the JVM’s aging characteristics. (Recommended by Michael Lyu.) The third, Regression Verification: Proving the Equivalence of Similar Programs, by Godlin and Strichman, looks at techniques for proving that two programs have equivalent behavior. This would allow for verification of program changes without needing to refer to a formal specification. (Recommended by Wolfgang Grieskamp.) One of the most exciting trends I’ve been fortunate enough to join is globalization. When I was growing up in a small town in eastern Kentucky, the James Bond movies always captivated me. The most exciting thing about the movies was that they let me join another world, and not just New York and Los Angeles, but Europe and Asia. Bond took me on a tour of plush hotels, casinos, expensive restaurants and fast cars in every corner of the world. When I started my career in the late 1980s, most conferences I attended were in North America, and foreign attendees were rare. But the world was changing, and we soon started travelling overseas to conferences and welcoming foreign visitors into the USA. Most of my PhD students and many of my MS students are from overseas. It is now common and normal to collaborate with scientists from all over the world. Globalization brings enormous benefits, but of course, with certain costs (jet lag is perhaps the most mundane). Robert Laughlin, winner of the Nobel Prize in Physics (1998), said it well: ‘The global economy imposes a tax on young people in the form of learning English.’ He went on to say that native English speakers have a different tax; by not learning a new language, they are slower to absorb broader lessons about the emerging global culture. A couple of years later, one of my colleagues exasperatedly said that dealing with cheating in the classroom is part of acculturating our students. The vast majority of classroom cheating is with foreign students, some of whom seem to have trouble accepting that we seriously believe it is wrong. Attitude towards cheating and plagiarism is clearly cultural. The research community also has to acculturate new members. I see this clearly in STVR, which is truly an international journal, and at ICST, one of the most diversely international conferences I attend. No nation or continent dominates either venue, a happy trend that affects us all. I’ve developed the following evolving list of issues that are affected by the globalization of software engineering research:


Software Testing, Verification & Reliability | 2014

Globalization-standards for research quality

Jeff Offutt

This issue features three interesting papers, all of which offer real solutions to real problems, and demonstrate their success on industrial software. The first, ‘A practical model-based statistical approach for generating functional test cases: application in the automotive industry’, by Awedikian and Yannou, presents a new way to generate tests from models. The results include a tool that selects test inputs (the test generation problem), predicts the expected results (the oracle problem), and suggests when testing can stop (the stopping problem). They have demonstrated their approach on automotive software. (Recommended by Hong Zhu.) The second, ‘A novel approach to software quality risk management’, by Bubevski, offers an advance in managing the risk of software. Bubevski’s technique uses Six Sigma and Monte Carlo simulation and has been successfully used on industrial software. (Recommended by Min Xie.) The third, ‘Automatic test case generation from Simulink/Stateflow models using model checking’, by Mohalik, Gadkari, Yeolekar, Shashidhar, and Ramesh, uses model checking to solve the problem of test data generation based on models. This technique has also successfully been used on industrial automotive software. (Recommended by Peter Mueller.) I wrote about The Globalization of Software Engineering in a previous editorial [1] and followed up with a discussion of language skills to support globalization [2] and then uses of references and citations [3]. Another difficult difference that is affected by globalization is the expected standards for research quality. Scientists usually learn about research in graduate school. The process has its roots in the Middle Ages and is based on the ancient apprenticeship model [4]. After finishing our classes, we spend years as an ‘apprentice’ to a ‘master’, the PhD advisor. This advisor is responsible for teaching us the dozens of skills, strategies, and tactics required for a successful research career, including standards for the quality of the research. We also learn about research from other professors, by reading papers and reasoning about how the research was conducted, but our advisor has primary responsibility. In most cases, our advisors learned from their advisors, they from their advisors, and so on, sometimes back centuries. This is why we are so interested in our genealogies [5]. I have friends who can trace their ‘academic roots’ back to luminaries such as Dijkstra, Poisson, Bernoulli, and Euler. The model of research apprenticeships has a rich tradition in countries that have a long history of scientific research. Historically, many of these countries are in Europe and North America. However, part of globalization is that other countries, without a long history of scientific research, are trying to kick-start this process. When knowledge and skills tend to be handed down by word of mouth, this is, not surprisingly, difficult. Thus, the globalization of research results in a large divergence in the standards for research quality. Who teaches new students? Who teaches the teacher? I see the effects of this divergence at STVR. We have a policy of desk-rejecting papers that are out of scope for the journal or that are low quality. We get papers that have no research results, for example, that explain an existing process or concept with an example. We get papers that have insufficient results for a major journal or whose results are not sufficiently original.
And we get papers whose writing is so poor that the reviewers would not be able to understand the paper well enough to fairly assess the results. For these papers, we send polite, regretful rejection letters that are as kind as possible. It is clear that many of these authors are bright enough, are hard working enough, and have sufficient technical strengths to carry out high quality research projects. Unfortunately, they simply have not been adequately prepared. This rarely happens with papers from authors in Europe or North America, which have long traditions of research. Unfortunately, most are from the Indian sub-continent or China. These


Software Testing, Verification & Reliability | 2016

How to revise a research paper

Jeff Offutt

This issue contains three outstanding papers, two that contain strong theory and show promise for immediate practical application, and another that can inform a new generation of researchers. The first, A Lightweight Framework for Dynamic GUI Data Verification Based on Scripts, by Mateo, Ruiz and Pérez, presents a way to integrate verification into a GUI during execution. The runtime verifier reads verification rules from files created by the engineers and checks the state of the GUI for violations while running (recommended by Peter Mueller). The second, Model-Based Security Testing: A Taxonomy and Systematic Classification, by Felderer, Zech, Breu, Büchler and Pretschner, surveys and summarizes 119 papers on model-based security testing. This paper should become the first entry port for anybody doing research in the area (recommended by Bogdan Korel). The third paper, Generating Effective Test Cases Based on Satisfiability Modulo Theory Solvers for Service-Oriented Workflow Applications, by Wang, Xing, Yang, Song and Zhang, addresses the very technically difficult problem of testing service-oriented applications developed with WS-BPEL. Many execution paths in WS-BPEL applications are infeasible. This paper addresses the problem and shows how to generate tests based on finding test paths from embedded constraints (recommended by Bogdan Korel). A well-crafted process for revising a journal submission is crucial for eventual acceptance. Although the initial reaction to the reviews may be negative, it is very important to be proactive and positive. Researchers, even world-renowned ones, will always be criticized, fairly or not. We must be able to respond to criticism in positive ways. ‘The reviewers were blind and close-minded’, a common complaint, may be valid—however, expressing that does not help achieve the goal of publishing a paper. Authors cannot make reviewers or editors smarter. This is yet another situation where we must strive to change the things we can and accept the things we cannot. In this editorial, I walk through the process that I have used to revise journal papers for two and a half decades. I start my revision process with three initial steps. First, I look at the decision. If it is an ‘accept’, ‘minor revision’ or ‘major revision’, I celebrate. I view a major revision as an ‘accept after lots of work’. I put off reading the reviews until later that day or the next. Even a decision of minor revision may contain things that are bothersome. A reaction of ‘how could the reviewer be so blind?’ is common. Several days later, I return to the reviews for a deep, detailed analysis of what they said. Being proactive is essential. If the reviewers misunderstood, how can the author change the writing so that reviewers will understand the second time? If the reviewers were not satisfied, can the work be better motivated? If the reviewers did not believe the work truly solved the problem, can the problem be restated? Like software, no paper is ever perfect. Like testers, the reviewers’ job is to help the authors improve the paper. Recently a co-author and I got reviews asking for a major revision. The reviews asked us to throw the previous empirical study out and start again. (As an editor, I would define that as a reject, but that is another story [1].) The reviews were strange—as if they read the wrong paper. They reflected neither the paper’s goals nor its results. Three reviewers completely misunderstood the paper!
We finally found a key review comment that helped us realize that we had buried our important goal inside a subsection in the experimental design . . . in a formula! Our title, abstract, introduction and research questions all sent the reviewers in the wrong direction. That is an extreme case, but true. And it illustrates the main point of response letters. Take responsibility! After all, authors want a paper accepted, but reviewers do not care. They simply want to


Software Testing, Verification & Reliability | 2014

Globalization-ethics and plagiarism

Jeff Offutt

This issue presents two exciting papers on different topics in software testing. The first, Optimizing Compilation with Preservation of Structural Code Coverage Metrics to Support Software Testing, by Kirner and Hass, presents results on mapping coverage computed at the source level to coverage at the executable level. Their suggestion is to modify compilers to add additional information to the executable version of the software, making it possible to back-calculate source-level coverage from coverage measured on the executable. (Recommended by Jose Maldonado.) The second, A Hitchhiker’s Guide to Statistical Tests for Assessing Randomized Algorithms in Software Engineering, by Arcuri and Briand, presents guidelines on using statistical tests in experiments involving randomized algorithms. The authors make the point that randomized algorithms have different characteristics than other kinds of experimental research, which means different statistical analysis techniques are needed. The paper analyzes several recent publications and recommends how their statistical tests could have been improved. (Recommended by Alexander Pretschner.) I wrote about The Globalization of Software Engineering in a previous editorial [1], followed up with a discussion of language skills to support globalization [2], uses of references and citations [3], and standards for research quality [4]. Another difficult difference that is affected by globalization is plagiarism. All journal editors I have asked agree that we detect more plagiarism than in the past. Part of the increase is simply that we now use technology to increase observability. STVR uses an automatic plagiarism tool that searches thousands (millions?) of documents looking for overlapping text. It reports the percentage of the text in the submitted paper that is identical to text in previously published papers. We can also view the papers that have the most overlap in a tool that highlights the overlapping text. As a result, we detect more plagiarized papers, and detect them sooner. STVR rejects 10 to 20 papers annually for plagiarism. Another reason for the increase is that many authors do not understand plagiarism. As I pointed out last month [4], scientists learn many things from their advisors, including what constitutes plagiarism. This, in turn, is certainly affected by culture. In countries with governments that are inherently corrupt, it is natural for plagiarism to be more common and accepted. In countries without a long tradition of individual ownership of ideas, plagiarism is not a natural concept. And in countries that are extremely competitive, the ‘anything to get ahead’ attitude can sometimes turn to plagiarism. Thus, the increasing globalization of software engineering certainly contributes to an increase in plagiarism. A prevailing problem seems to be ‘what is plagiarism?’ My Merriam-Webster dictionary defines plagiarize as ‘to use the words or ideas of another person as if they were your own words or ideas’ [5]. This seems simple and straightforward, but, as the saying goes, ‘the devil is in the details.’ In the succeeding text, I present several (anonymized) examples. Complete copying: Perhaps the most obvious and egregious case is when an entire paper is copied with only slight changes. I have seen this twice in my tenure at STVR. Most recently, the original paper was published in a fairly obscure outlet in 1978. By happenstance, the reviewing editor had read the original paper and found it, scanned it, and sent it to me.
The new paper contained a few additional paragraphs and a few new references, but all results, figures, formulas, and text were copied verbatim. This was an easy case—the authors knew they were plagiarizing, did it intentionally, and added no value to the original paper. Our response was to inform their department chairs and Dean, and put them on a ‘no submission ever’ list for STVR. If they submit other papers to STVR, they will be desk-rejected without consideration.
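The overlap screening described in this editorial can be illustrated with a small sketch. The journal's actual tool is not named or described beyond reporting a percentage of identical text, so the following is a hypothetical, minimal illustration of the general idea: compare word n-grams ('shingles') between a submission and a previously published paper, and flag high overlap for an editor to inspect. The file names and the 30% threshold are invented for the example.

```python
# Minimal sketch of text-overlap screening, the general idea behind the
# plagiarism checks described above. Hypothetical example, not the journal's tool.
import re


def shingles(text, n=8):
    """Return the set of word n-grams (shingles) in a document."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap_percentage(submission, published, n=8):
    """Percentage of the submission's shingles that also appear in a published paper."""
    sub = shingles(submission, n)
    if not sub:
        return 0.0
    pub = shingles(published, n)
    return 100.0 * len(sub & pub) / len(sub)


if __name__ == "__main__":
    submitted = open("submission.txt").read()        # hypothetical file names
    prior = open("published_1978.txt").read()
    pct = overlap_percentage(submitted, prior)
    if pct > 30.0:                                   # illustrative threshold only
        print(f"Flag for editor review: {pct:.1f}% overlapping text")
```

Production services search large indexed corpora with document fingerprinting rather than pairwise comparison, but the reported overlap percentage is conceptually this ratio.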


Software Testing, Verification & Reliability | 2016

How to write an effective Response to Reviewers letter

Jeff Offutt

This issue presents three new inventions in software testing. A major research topic in software engineering is that of fault localization, that is, finding faulty code in failing software. Probabilistic reasoning in diagnosing causes of program failures, by Junjie Xu, Rong Chen, and Zhenjun Du, invents a new graph to model possible faults probabilistically. (Recommended by Atif Memon.) The second paper, Behaviour abstraction adequacy criteria for API call protocol testing, by Hernan Czemerinski, Victor Braberman, and Sebastian Uchitel, invents new criteria to test whether APIs are used correctly. Specifically, the criteria measure the extent to which software that uses the APIs conforms to the expected protocol such as whether the methods are called in valid orders. (Recommended by Hasan Ural.) The third paper in this issue addresses the difficult oracle problem. Predicting metamorphic relations for testing scientific software: A machine learning approach using graph kernels, by Upulee Kanewala, James M. Bieman, and Asa Ben-Hur, invents a technique to use machine learning to predict metamorphic relations, which are used to create oracles in software for which correct behavior is unknown. (Recommended by TY Chen.) A well-written response letter is critical when resubmitting a paper to a journal. A key is to be proactive. As authors, we’re happy that the journal did not reject the paper, but we always find comments that are annoying, frustrating, or downright insulting. Authors even react to comments that are valid and correct. But as I discussed in my last editorial [1], our goal is not to argue with anonymous reviewers, but to publish a paper. Indeed, authors, reviewers, and editors all want the same thing: to publish good papers. Even the worst reviews can help us write better papers. A response letter is the authors’ chance to assert proactive control over the process. Don’t try to convince reviewers they were wrong—that’s the path to rejection. The goal is much simpler: The response letter should convince the reviewers that you modified the paper perfectly before they even look at the revision. Sadly, many reviewers decide whether to accept or reject a paper in the first five minutes they consider it. Then they spend the rest of their time looking for reasons to support their initial assessment. (Please note that I am definitely not advocating that approach. It is anti-science and destructive but is unfortunately common.) The response letter is the first impression of that crucial first five minutes. I’m suggesting some ways to not blow it. Below is a list of “dos” and “don’ts” for writing effective response letters.
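As an aside on the metamorphic-relations paper recommended above: when no oracle can say whether a single output of a scientific program is correct, a metamorphic relation instead checks a property that must hold between related executions. The sketch below is a minimal, generic illustration; the function under test, the sine identity used as the relation, and the tolerance are assumptions made for the example, not details from the paper.

```python
# Minimal sketch of a metamorphic test: no oracle states what sin(x) "should" be,
# but the relation sin(x) == sin(pi - x) must hold between two related runs.
import math
import random


def function_under_test(x):
    # Stand-in for scientific code with no practical oracle (illustrative only).
    return math.sin(x)


def check_metamorphic_relation(trials=1000, tol=1e-9):
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        source_output = function_under_test(x)
        follow_up_output = function_under_test(math.pi - x)  # follow-up test case
        if abs(source_output - follow_up_output) > tol:
            print(f"Relation violated at x={x}: {source_output} vs {follow_up_output}")
            return False
    return True


if __name__ == "__main__":
    print("relation held" if check_metamorphic_relation() else "possible fault")
```

The paper's machine-learning contribution lies in predicting which relations of this kind apply to a given function; checking a relation, once it is known, remains as simple as this harness.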


Software Testing, Verification & Reliability | 2018

Proper referencing is a matter of scholarship, ethics, and courtesy

Jeff Offutt

This issue contains two papers on testing of special‐purpose software. Systematic testing of actor systems, by Elvira Albert, Puri Arenas, and Miguel Gómez‐Zamalloa, presents ideas for testing concurrent systems. In particular, instead of testing all possible interleavings, the technique prunes the states that are explored to reduce unneeded non‐determinism. (Recommended by Yves Le Traon.) Verifying OSEK/VDX automotive applications: A Spin‐based model checking approach, by Haitao Zhang, Guoqiang Li, Zhuo Cheng, and Jinyun Xue, presents a use of model checking to verify automotive software. (Recommended by Shaoying Liu.) Conferences often have associated workshops, some quite successful. For example, the International Conference on Software Testing, Verification, and Validation hosts several workshops every year. Unfortunately, this habit has encouraged the proliferation of a type of bad scholarship. Consider the following workshop paper:


Software Testing, Verification & Reliability | 2015

How the web resuscitated evolutionary design

Jeff Offutt

This issue contains two exciting papers about test automation. The first, Killing strategies for model-based mutation testing, by Aichernig, Brandl, Jöbstl, Krenn, Schlick, and Tiran, presents techniques and algorithms for automatically generating tests from UML state machines (recommended by Mark Harman). The second, Assessing and generating test sets in terms of behavioural adequacy, by Fraser and Walkinshaw, turns the notion of criteria on its head by defining test criteria in terms of the outputs instead of the inputs or source (recommended by Hong Zhu). Both inventions can improve test automation, and thus enhance our ability to have evolutionary design. One of my favorite oldies, The Design of Everyday Things, discusses evolutionary design. It caused me to consider what this concept means to software design, development, and testing. I want to start with cost. All technological artifacts, hardware and software, come with costs. Not being an economist or systems engineer, I may leave some out, but at least four types of costs help us understand a major trend in software:


Software Testing, Verification & Reliability | 2014

Globalization - logical flow, motivation, and assumptions

Jeff Offutt

This issue presents three fascinating papers on ensuring the reliability and correct behavior of software. The first, Analysis and testing of black-box component-based systems by inferring partial models, by Shahbaz and Groz, tackles the problem of integration testing of component-based software. When components have no specifications, models, or source, testers can only infer proper behavior by trial and error. This paper uses a model learning approach to derive finite state machines that describe observed behavior of the software component. (Recommended by Rob Hierons.) The second, Sound and mechanised compositional verification of input-output conformance, by Sampaio, Nogueira, Mota, and Isobe, uses process algebra to verify conformance of software with the expected behavior. This test theory was applied to test mobile applications. (Recommended by Alexander Pretschner.) The third, Towards the prioritization of system test cases, by Srikanth, Banerjee, Williams, and Osborne, focuses on the problem of test case prioritization. The approach assumes requirements-based tests, assigns a prioritization value to each requirement, and then prioritizes tests that were designed for requirements with a higher priority. (Recommended by Jeff Offutt.) I wrote about The Globalization of Software Engineering in a previous editorial [1], followed up with a discussion of language skills to support globalization [2], uses of references and citations [3], standards for research quality [4], and cheating and plagiarism [5]. Another difficult difference that is affected by globalization is presenting research with logical flow, clear motivation, precision, and without cultural-based assumptions. While it is probably obvious that successful research publications must be based on sound research, hard work, and original ideas, it may be less obvious that presentation is just as important. The ability to clearly present research results can be developed through education, practice, and helpful feedback. This editorial attempts to point out a few issues that are influenced by culture, with the hope of helping authors, teachers, and reviewers to understand and improve. The most obvious cultural aspect of presentation, of course, is language; improving language skills improves the ability to present research results clearly. But telling a coherent story is even more important. Educational systems emphasize different topics, and some spend much more time teaching writing skills than others. Our cultural context also influences our writing. We sometimes make assumptions that are standard in our own culture, but may be different in others. This editorial explores some of these issues from a global perspective. Perhaps the most important aspect of presenting research is to have a logical flow of ideas. Each section must logically flow to the next, each paragraph must logically flow to the next, and each sentence must logically flow from the previous. If not, readers will be confused and not understand the research. Logical flow reflects a structured way of thinking that is influenced by culture, mother language, and scientific training. Outlining allows me to see the logical flow at an abstract level, without being distracted by the details of grammar and sentence structure. I outline sections, paragraphs in each section, and the sentences in each paragraph. I look for ‘data flow’ anomalies in the outline.
Finally, when a paragraph or section does not look right but I’m not sure why, I ‘reverse engineer’ the text into an outline, refactor the outline to create a better logical flow, and apply the new outline to the text. Another issue that is heavily influenced by culture is motivation. Motivation essentially answers ‘why’–why the problem is relevant, why the solution technique was chosen, and why the specific validation technique used was chosen. Traditionally, egalitarian cultures have a strong built-in mechanism to develop the skills to present motivation. That is how people have their ideas accepted and used. Authoritarian cultures, on the other hand, can afford to de-emphasize motivation and expect people to do what they are told because an authority says so.


Software Testing, Verification & Reliability | 2013

A tribute to Mary Jean Harrold

Jeff Offutt

This issue has three research papers. ‘Incremental testing of finite state machines’, by Chaves Pedrosa and Vieira Moura, addresses the scalability problem of designing tests from finite state machines. They use a divide and conquer approach to define combined finite state machines, which allow individual tests to be defined on smaller units and allow test suites to be built incrementally. (Recommended by Byoungju Choi.) ‘A survey of code-based change impact analysis techniques’, by Li, Sun, Leung, and Zhang, surveys 30 papers that empirically analyzed 23 change impact analysis techniques. The paper synthesizes these results into a structure of four research questions and proposes several new research questions. (Recommended by Jane Hayes.) ‘Combining weak and strong mutation for a noninterpretive Java mutation system’, by Kim, Ma, and Kwon, looks at the cost of executing mutants in mutation systems. They propose a new way to combine ‘strong’ and ‘weak’ mutation that keeps most of the strength of strong mutation, while achieving much of the cost savings from weak mutation. They adapted the muJava mutation tool to demonstrate their ideas. (Recommended by Rob Hierons.) Note that because of previous papers co-authored with the authors of this paper, Offutt was not involved with its handling. Software engineering lost one of its best last month. And I lost a good friend and role model. I first met Mary Jean Harrold at the 1988 TAV symposium (now the International Symposium on Software Testing and Analysis) and was impressed in every possible way. We finished our PhDs in the same year: I in August at the Georgia Institute of Technology and she in December at the University of Pittsburgh. I joined Clemson University in August 1988 and was excited when she applied for the following year. She accepted our offer, and we spent the next 3 years learning how to be professors together. Her years of teaching high school math helped her start as a terrific teacher, and she continued to improve every year. She was also an ideal mentor. Even as a new assistant professor, she somehow knew exactly how to motivate her students, had the insight to understand what knowledge and skills they lacked, and had the patience and abilities to teach what they needed. She set exacting standards with a kind and respectful demeanor. Most importantly, she earned their loyalty and love. Her students worked harder than anybody else because they wanted to impress her and they knew she was working even harder. I have tried to emulate her advising style for more than 20 years. We co-authored four papers, working with three students. The memories of working on those papers are still bright because we taught each other much about research, problem formulation, writing, and how to respond to reviews. Our PhD advisors had very different styles, and I was able to absorb much of what her advisor, Dr. Mary Lou Soffa, taught her, and I think she absorbed some of what my advisor, Dr. Rich DeMillo, tried to teach me. Those three pre-tenure years at Clemson were incredibly formative and bonding. We also grew up very close geographically. Mary Jean was born and raised in Huntington, West Virginia, and I was raised about 50 miles west, near Morehead, Kentucky. Even 20 years later, she teased me because I thought she came from a big city. I responded by reminding her that her home state was even poorer than mine. Appalachians are few and far between in academia, and we always felt that bond: a sharing of an unusual culture that few understand.
I firmly believe that Mary Jean Harrold was the best PhD advisor in all of software engineering. Her deft touch shows; I know immediately when I see one of her students give a talk. She was also a wonderful colleague and great scientist. She focused on some of the deepest and most complicated problems in software analysis, testing, and evolution. She did not just focus on research that works on small problems or in the lab but found solutions that were scalable and usable by real engineers.
