Rudi Lutz
University of Sussex
Publications
Featured research published by Rudi Lutz.
Journal of Systems Architecture | 2001
Rudi Lutz
Many application areas represent the architecture of complex systems by means of hierarchical graphs containing basic entities with directed links between them, and showing the decomposition of systems into a hierarchical nested “module” structure. An interesting question is then: How best should such a complex system be decomposed into a hierarchical tree of nested “modules”? This paper describes an interesting complexity measure (based on an information theoretic minimum description length principle) which can be used to compare two such hierarchical decompositions. This is then used as the fitness function for a genetic algorithm (GA) which successfully explores the space of possible hierarchical decompositions of a system. The paper also describes the novel crossover and mutation operators that are necessary in order to do this, and gives some examples of the system in practice.
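As a rough illustration of the minimum-description-length idea behind the fitness function, the sketch below scores a flat, two-level decomposition by charging more bits to describe cross-module links than intra-module ones, plus a cost for the module tree itself. The coding scheme and its coefficients are illustrative assumptions, not the measure from the paper.

```python
import math

def description_length(edges, assignment):
    """Toy MDL-style score for a flat two-level decomposition.

    edges: iterable of (u, v) directed links
    assignment: dict mapping each node to its module id

    Cross-module edges are assumed to need two full (module, node)
    addresses, while intra-module edges share one module name; the
    tree itself costs bits proportional to the number of modules.
    The exact coding scheme here is illustrative only.
    """
    modules = set(assignment.values())
    intra = sum(1 for u, v in edges if assignment[u] == assignment[v])
    inter = sum(1 for u, v in edges if assignment[u] != assignment[v])
    module_bits = math.log2(max(len(modules), 2))  # bits to name a module
    node_bits = math.log2(max(len(assignment), 2))  # bits to name a node
    cost = intra * (module_bits + 2 * node_bits)
    cost += inter * 2 * (module_bits + node_bits)
    cost += len(modules) * module_bits  # cost of encoding the tree
    return cost

# Lower description length = better decomposition (used as GA fitness).
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "e"), ("e", "d")]
good = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1}  # respects the two clusters
bad = {"a": 0, "b": 1, "c": 0, "d": 1, "e": 0}   # cuts across them
assert description_length(edges, good) < description_length(edges, bad)
```

A GA minimising this score would evolve `assignment` (or, in the paper, full nested trees) with decomposition-aware crossover and mutation.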
International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2007
Pablo Romero; Benedict du Boulay; Richard Cox; Rudi Lutz; Sallyann Bryant
This paper investigates the interplay between high level debugging strategies and low level tactics in the context of a multi-representation software development environment (SDE). It investigates three questions. 1. How do programmers integrate debugging strategies and tactics when working with SDEs? 2. What is the relationship between verbal ability, level of graphical literacy and debugging (task) performance? 3. How do modality and perspective influence debugging strategy and deployment of tactics? The paper extends the work of Katz and Anderson [1988. Debugging: an analysis of bug location strategies. Human-Computer Interaction 3, 359-399] and others in terms of identifying high level debugging strategies, in this case when working with SDEs. It also describes how programmers of different backgrounds and degrees of experience make differential use of the multiple sources of information typically available in a software debugging environment. Individual difference measures considered among the participants were their programming experience and their knowledge of external representation formalisms. The debugging environment enabled the participants, computer science students, to view the execution of a program in steps and provided them with concurrently displayed, adjacent, multiple and linked programming representations. These representations comprised the program code, two visualisations of the program and its output. The two visualisations of the program were available in either a largely textual or a largely graphical format, so as to track, for example, interactions between experience and low level mode-specific tactics.
The results suggest that (i) in addition to deploying debugging strategies similar to those reported in the literature, participants also employed a strategy specific to SDEs, following execution, (ii) verbal ability was not correlated with debugging performance, (iii) knowledge of external representation formalisms was as important as programming experience to succeed in the debugging task, and (iv) participants with greater experience of both programming and external representation formalisms, unlike the less experienced, were able to modify their debugging strategies and tactics effectively when working under different format conditions (i.e. when working with either largely graphical or largely textual visualisations) in order to maintain their high debugging accuracy level.
International Conference on Artificial Intelligence | 2002
Rudi Lutz
In [12] a system was described for finding good hierarchical decompositions of complex systems represented as collections of nodes and links, using a genetic algorithm, with an information theoretic fitness function (representing complexity) derived from a minimum description length principle. This paper describes the application of this approach to the problem of reverse engineering the high-level structure of software systems.
Pattern Recognition | 2005
Bill Keller; Rudi Lutz
This paper describes an evolutionary approach to the problem of inferring stochastic context-free grammars from finite language samples. The approach employs a distributed, steady-state genetic algorithm, with a fitness function incorporating a prior over the space of possible grammars. Our choice of prior is designed to bias learning towards structurally simpler grammars. Solutions to the inference problem are evolved by optimizing the parameters of a covering grammar for a given language sample. Full details are given of our genetic algorithm (GA) and of our fitness function for grammars. We present the results of a number of experiments in learning grammars for a range of formal languages. Finally we compare the grammars induced using the GA-based approach with those found using the inside-outside algorithm. We find that our approach learns grammars that are both compact and fit the corpus data well.
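The core of such a fitness function can be sketched as corpus log-likelihood (computed with the inside algorithm for a grammar in Chomsky normal form) minus a per-rule penalty standing in for the structural-simplicity prior. The grammar encoding and the penalty weight below are illustrative assumptions, not the paper's actual prior.

```python
import math
from collections import defaultdict

def inside_prob(unary, binary, start, s):
    """Inside algorithm for a CNF stochastic CFG.
    unary: {(A, terminal): prob}; binary: {(A, B, C): prob} for A -> B C.
    Returns P(s | grammar)."""
    n = len(s)
    # chart[i][j][A] = probability that A derives s[i:j]
    chart = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, ch in enumerate(s):
        for (A, t), p in unary.items():
            if t == ch:
                chart[i][i + 1][A] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in binary.items():
                    chart[i][j][A] += p * chart[i][k][B] * chart[k][j][C]
    return chart[0][n][start]

def fitness(unary, binary, start, corpus):
    """GA fitness: corpus log-likelihood minus a simplicity penalty
    charging each rule a fixed cost. The per-rule cost (1.0 here)
    is an illustrative stand-in for a proper prior over grammars."""
    n_rules = len(unary) + len(binary)
    ll = sum(math.log(max(inside_prob(unary, binary, start, s), 1e-300))
             for s in corpus)
    return ll - 1.0 * n_rules

# Example grammar for a^n b^n: S -> A T | A B, T -> S B, A -> a, B -> b
unary = {("A", "a"): 1.0, ("B", "b"): 1.0}
binary = {("S", "A", "T"): 0.5, ("S", "A", "B"): 0.5, ("T", "S", "B"): 1.0}
print(inside_prob(unary, binary, "S", "aabb"))  # 0.25
```

A GA would evolve the rule probabilities (and, implicitly, which rules of a covering grammar survive) to maximise this score over the sample.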
Journal of Visual Languages and Computing | 2003
Pablo Romero; Richard Cox; Benedict du Boulay; Rudi Lutz
This document presents an overview of the program visualisations additional to the program code provided by some of the most popular object-oriented programming environments to support tasks involving program comprehension. These representations were compared in terms of the programming aspects they highlight and of their information modality. Those with common characteristics according to these criteria were identified. Finally, a brief analysis of these common representations in terms of Green's Cognitive Dimensions is presented. Two questions arising from this survey are (a) whether representations additional to the code should be redundant and highlight similar information to the main notation or be complementary and highlight different programming aspects and (b) which factors might increase the cognitive difficulty of co-ordinating these additional representations and the program code. More theoretical knowledge about the way these additional representations influence the comprehension of computer programs seems to be needed.
Lecture Notes in Computer Science | 2004
Richard Cox; Pablo Romero; Benedict du Boulay; Rudi Lutz
The ‘graphicacy’ of student programmers was investigated using several cognitive tasks designed to assess knowledge of external representations (ERs) at the perceptual, semantic and output levels of the cognitive system. A large corpus of ERs was used as stimuli. The question ‘How domain-specific is the ER knowledge of programmers?’ was addressed. Results showed that performance for programming-specific ER forms was equal to or slightly better than performance for non-specific ERs on the decision, naming and functional knowledge tasks, but not the categorisation task. Surprisingly, tree and network diagrams were particularly poorly named and categorised. Across the ER tasks, performance was found to be highest for textual ERs, lists, maps and notations (more ubiquitous, ‘everyday’ ER forms). Decision task performance was generally good across ER types, indicating that participants were able to recognise the visual form of a wide range of ERs at a perceptual level. In general, the patterns of performance seem to be consistent with those described for the cognitive processing of visual objects.
IEEE Symposium on Human Centric Computing Languages and Environments | 2003
Pablo Romero; Benedict du Boulay; Rudi Lutz; Richard Cox
The effects of graphical and textual visualisations in a multi-representational debugging environment were investigated in computing students who used a software debugging environment (SDE) that allowed them to view the execution of programs in steps and that provided them with concurrently displayed, adjacent, multiple and linked representations. The experimental results are in agreement with research in the area that suggests that good debugging performance is associated with a balanced use of the available representations. Additionally, these results raise the issue of whether graphical visualisations promote a more judicious representation use than textual ones for program debugging in multi-representational environments.
Behavior Research Methods | 2007
Pablo Romero; Richard Cox; Benedict du Boulay; Rudi Lutz; Sallyann Bryant
This article describes a methodology for the capture and analysis of hybrid data. A case study in the field of reasoning with multiple representations—specifically, in computer programming—is presented to exemplify the use of the methodology. The hybrid data considered comprise computer interaction logs, audio recordings, and data about visual attention focus. The capture of the focus of visual attention data is performed with software. The software employed tracks the user’s visual attention by blurring parts of the stimuli presented on the screen and allowing the participant to see only a small region of it at any one time. These hybrid data are analyzed via a methodology that combines qualitative and quantitative approaches. The article describes the software tool employed and the analytic methodology, and also discusses data capture issues and limitations of the approach.
International Conference on Artificial Intelligence | 2002
Bill Keller; Rudi Lutz
In this paper we investigate the performance of penalized variants of the forwards-backwards algorithm for training Hidden Markov Models. Maximum likelihood estimation of model parameters can result in over-fitting and poor generalization ability. We discuss the use of priors to compute maximum a posteriori estimates and describe a number of experiments in which models are trained under different conditions. Our results show that MAP estimation can alleviate over-fitting and help learn better parameter estimates.
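The effect of a prior on the re-estimation (M-) step can be sketched as follows: expected counts from forward-backward are smoothed with Dirichlet pseudocounts before normalisation, so the MAP estimate never drives unobserved events to probability zero. This is a minimal sketch of Dirichlet-smoothed MAP re-estimation in general; the pseudocount value and the specific priors are illustrative assumptions, not those compared in the paper.

```python
def map_mstep(expected_counts, alpha):
    """MAP re-estimation of a row-stochastic matrix (e.g. HMM
    transition probabilities) from expected counts produced by
    forward-backward, under a symmetric Dirichlet(alpha) prior.

    alpha > 1 adds (alpha - 1) pseudocounts per cell (the mode of
    the Dirichlet posterior), pulling estimates toward uniform and
    keeping rarely seen events from collapsing to zero probability,
    which is the over-fitting failure mode of plain ML estimation.
    alpha = 1 (a flat prior) recovers ordinary maximum likelihood.
    """
    rows = []
    for row in expected_counts:
        smoothed = [c + (alpha - 1.0) for c in row]
        total = sum(smoothed)
        rows.append([c / total for c in smoothed])
    return rows

# State 1 was never observed transitioning back to state 0:
counts = [[9.0, 1.0],
          [0.0, 4.0]]
ml = map_mstep(counts, alpha=1.0)   # ML: the unseen transition gets 0.0
mp = map_mstep(counts, alpha=2.0)   # MAP: it keeps a small probability
```

In a full penalized Baum-Welch loop this step would replace the usual normalisation of expected counts on each iteration.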
Archive | 1998
Bill Keller; Rudi Lutz
A genetic algorithm for inferring stochastic context-free grammars from finite language samples is described. Solutions to the inference problem are found by optimizing the parameters of a covering grammar for a given language sample. We describe a number of experiments in learning grammars for a range of formal languages. The results of these experiments are encouraging and compare very favourably with other approaches to stochastic grammatical inference.