Les Hatton
Kingston University
Publication
Featured research published by Les Hatton.
IEEE Computer | 1997
Shari Lawrence Pfleeger; Les Hatton
Practitioners and researchers continue to seek methods and tools for improving software development processes and products. Candidate technologies promise increased productivity, better quality, lower cost, or enhanced customer satisfaction. We must test these methods and tools empirically and rigorously to determine any significant, quantifiable improvement. We tend to consider evaluation only after using the technology, which makes careful, quantitative analysis difficult if not impossible. However, when an evaluation is designed as part of overall project planning, and then carried out as software development progresses, the result can be a rich record of a tool's or technique's effectiveness. In this study, we investigated the effects of using formal methods to develop an air-traffic-control information system.
IEEE Software | 1998
Les Hatton
Is object orientation an imperfect paradigm for reliable coding? Worse, does it focus on the wrong part of the life cycle? The author thinks so and explains why. Given that corrective-maintenance costs already dominate the software life cycle and look set to increase significantly, the author argues that reliability in the form of reducing such costs is the most important software improvement goal. Yet the results are not promising when we review recent corrective-maintenance data for big systems in general and for OO systems, in this case written in C++. The author asserts that any paradigm capable of decomposing a system into large numbers of small components, as frequently occurs in both OO and conventional systems, is fundamentally wrong. Thus, because both paradigms suffer from this flaw, we should expect no particular benefits to accrue from an OO system over a non-OO system. Further, a detailed comparison of OO programming and the human thought processes involved in short- and long-term memory suggests that OO aligns with human thinking limitations indifferently at best. In the case studies described, OO is no more than a different paradigm, and emphatically not a better one, although it is not possible to apportion blame between the OO paradigm itself and its C++ implementation.
IEEE Computer | 2007
Les Hatton
Despite years of computing progress, today's systems experience spectacular and all-too-frequent crashes, while many enormously expensive projects fail to produce anything useful. Of equal importance, and potentially more damaging, are the misleading smaller defects we tend to miss. From time to time, we must remind ourselves that the underlying quality of the software that our results and progress increasingly depend on will likely be flawed, and even more dependent on independent corroboration than the science itself. Many scientific results are corrupted, perhaps fatally so, by undiscovered mistakes in the software used to calculate and present those results.
IEEE Transactions on Software Engineering | 2009
Les Hatton
This paper begins by modeling general software systems using concepts from statistical mechanics which provide a framework for linking microscopic and macroscopic features of any complex system. This analysis provides a way of linking two features of particular interest in software systems: first the microscopic distribution of defects within components and second the macroscopic distribution of component sizes in a typical system. The former has been studied extensively, but the latter much less so. This paper shows that subject to an external constraint that the total number of defects is fixed in an equilibrium system, commonly used defect models for individual components directly imply that the distribution of component sizes in such a system will obey a power-law Pareto distribution. The paper continues by analyzing a large number of mature systems of different total sizes, different implementation languages, and very different application areas, and demonstrates that the component sizes do indeed appear to obey the predicted power-law distribution. Some possible implications of this are explored.
IEEE Computer | 2007
Les Hatton
Research shows considerable overlap among perfective, corrective, and adaptive maintenance tasks in software development projects. A case study involving two recent products provides further empirical evidence of this distribution and sheds light on how well programmers estimate both the type of maintenance necessary and its duration, with some significant surprises.
IEEE Software | 2012
Michiel van Genuchten; Les Hatton
Six Impact columns published over the last three years and a couple of precisely measured products provide the opportunity to calculate a compound annual growth rate.
Information & Software Technology | 2007
Les Hatton
The MISRA (Motor Industry Software Research Association) C standard first appeared in 1998 with the object of restricting the use of ISO C language features with known undefined or otherwise dangerous behaviour in embedded control systems in the motor car industry. The first edition gained significant attention around the world, and in October 2004 a further edition was issued to a wider intended target audience, with the intention of correcting ambiguous wording that undermined the effectiveness of the first edition and also of improving its ability to restrict features with dangerous behaviour. This paper measures how the two versions of this document compare on the same population of software and also determines how well the 2004 version achieved its stated goals. Given its increasing influence, the results raise important concerns, specifically that the false positive rate is still unacceptably high, with the accompanying danger that compliance may make things worse, not better.
IEEE Software | 2011
M.J.I.M. van Genuchten; Les Hatton
Looking back on the first seven Impact columns for IEEE Software, the editors propose a new metric called software mileage, defined as the number of new customers per year per line of code.
IEEE Software | 2012
Les Hatton; M.J.I.M. van Genuchten
An online survey of experienced managers and architects, all authors of previous Impact columns, complements the special issue on studying professional software design. The practitioners' view on the topic discusses who should be involved in early design decisions, the tools used, and typical mistakes.
IEEE Software | 1995
Les Hatton
Static inspection, the removal of obvious faults or inconsistencies prior to product testing, forms an indispensable part of all conventional engineering disciplines except software engineering. This degree of care could apply to software as well. After careful rumination by an impressive number of qualified people who together comprise a standards committee, our programming languages might enter the world in a safe, well-defined and unambiguous manner, suitable for use by programmers who are necessarily less knowledgeable about the language. Then, any violation of this safe, complete and unambiguous language definition could be automatically detected before it caused any damage, and the responsible programmer informed. Unfortunately, this scenario could not be farther from reality. Explicitly recognized programming limitations are often abused because the language allows it, so much of the world's software reaches its users full of inconsistencies: failures waiting to happen. When this is coupled with inaccuracies in capturing the specification, the result is reduced software reliability and safety. As software becomes ever more pervasive, with thousands, sometimes millions, of lines of code controlling automobiles, televisions, fire alarms, medical scanners and aircraft, we can no longer tolerate this lack of quality.