
Publication


Featured research published by Thomas R. G. Green.


Journal of Visual Languages and Computing | 1996

Usability Analysis of Visual Programming Environments: A ‘Cognitive Dimensions’ Framework

Thomas R. G. Green; Marian Petre

The cognitive dimensions framework is a broad-brush evaluation technique for interactive devices and for non-interactive notations. It sets out a small vocabulary of terms designed to capture the cognitively-relevant aspects of structure, and shows how they can be traded off against each other. The purpose of this paper is to propose the framework as an evaluation technique for visual programming environments. We apply it to two commercially available dataflow languages (with further examples from other systems) and conclude that it is effective and insightful; other HCI-based evaluation techniques focus on different aspects and would make good complements. Insofar as the examples we used are representative, current VPLs are successful in achieving a good ‘closeness of match’, but designers need to consider the ‘viscosity’ (resistance to local change) and the ‘secondary notation’ (possibility of conveying extra meaning by choice of layout, colour, etc.).


human factors in computing systems | 1989

Programmable user models for predictive evaluation of interface designs

Richard M. Young; Thomas R. G. Green; Tony J. Simon

A Programmable User Model (PUM) is a psychologically constrained architecture which an interface designer is invited to program to simulate a user performing a range of tasks with a proposed interface. It provides a novel way of conveying psychological considerations to the designer, by involving the designer in the process of making predictions of usability. Development of the idea leads to a complementary perspective, of the PUM as an interpreter for an “instruction language”. The methodology used in this research involves the use of concrete HCI scenarios to assess different approaches to cognitive modelling. The research findings include analyses of the cognitive processes involved in the use of interactive computer systems, and a number of issues to be resolved in future cognitive models.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1984

Comprehension and recall of miniature programs

David J. Gilmore; Thomas R. G. Green

Differences in the comprehensibility of programming notations can arise because their syntax can make them cognitively unwieldy in a generalized way (Mayer, 1976), because all notations are translated into the same “mental language” but some are easier to translate than others (Shneiderman & Mayer, 1979), or because the mental operations demanded by certain tasks are harder in some notations than in others (Green, 1977). The first two hypotheses predict that the relative comprehensibility of two notations will be consistent across all tasks, whereas the mental operations hypothesis suggests that particular notations may be best suited to particular tasks. The present experiment used four notations and 40 non-programmers to test these hypotheses. Two of the notations were procedural and two were declarative, and one of each pair contained cues to declarative or procedural information, respectively. Different types of comprehension question were used (“sequential” and “circumstantial”); a mental operations analysis predicted that procedural languages would be “matched” with sequential questions, and declarative languages with circumstantial questions. Questions were answered first from the printed text, and then from recall. Subjects performed best on “matched pairs” of tasks and languages. Perceptually-based cues improved performance on “unmatched pairs” better than non-perceptual cues when answering from the text, and both types of cues improved performance on “unmatched pairs” in the recall stage. These results support the mental operations explanation. They also show that the mental representation of a program preserves some features of the original notation; a comprehended program is not stored in a uniform “mental language”.


Journal of Verbal Learning and Verbal Behavior | 1979

The Necessity of Syntax Markers: Two Experiments with Artificial Languages

Thomas R. G. Green

Contemporary theories of syntax recognition agree on the “marker hypothesis”: natural languages contain a small number of elements that signal the presence of particular syntactic constructions, and the nature of the human parsing system would make markerless languages virtually unusable. Previous evidence, relying on directly comparing the perceptual complexities of various English constructions, is insufficient to test the marker hypothesis stringently. Complementary evidence from studies of artificial languages is now presented. Experiment I showed that artificial languages with no markers or with useless markers were much harder to learn than languages where markers signaled the class of the next word. Experiment II extended the comparison to markers signaling phrases as well as words, giving strong support to the marker hypothesis. The hypothesis has, if it is correct, notable implications for a wide variety of information-processing tasks that have been described in pattern-learning or grammar-learning terms.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1999

Psychological Evaluation of Two Conditional Constructions Used in Computer Languages

Max E. Sime; Thomas R. G. Green; D. J. Guest

There is a need for empirical evaluation of programming languages for unskilled users, but it is more effective to compare specific features common to many languages than to compare complete languages. This can be done by devising micro-languages stressing the feature of interest, together with a suitable subject matter for the programs. To illustrate the power of this approach two conditional constructions are compared: a nestable construction, like that of Algol 60, and a branch-to-label construction, as used in many simpler languages. The former is easier for unskilled subjects. Possible reasons for this finding are discussed.


Lecture Notes in Computer Science | 2001

Cognitive Dimensions of Notations: Design Tools for Cognitive Technology

Alan F. Blackwell; Carol Britton; Anna L. Cox; Thomas R. G. Green; Corin A. Gurr; Gada F. Kadoda; Maria Kutar; Martin J. Loomes; Chrystopher L. Nehaniv; Marian Petre; Chris Roast; Chris P. Roe; Allan Wong; Richard M. Young

The Cognitive Dimensions of Notations framework has been created to assist the designers of notational systems and information artifacts to evaluate their designs with respect to the impact that they will have on the users of those designs. The framework emphasizes the design choices available to such designers, including characterization of the user's activity, and the inevitable tradeoffs that will occur between potential design options. The resulting framework has been under development for over 10 years, and now has an active community of researchers devoted to it. This paper first introduces Cognitive Dimensions. It then summarizes the current activity, especially the results of a one-day workshop devoted to Cognitive Dimensions in December 2000, and reviews the ways in which it applies to the field of Cognitive Technology.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1994

Creating, comprehending and explaining spreadsheets

David G. Hendry; Thomas R. G. Green

Ten discretionary users were asked to recount their experiences with spreadsheets and to explain how one of their own sheets worked. The transcripts of the interviews are summarized to reveal the key strengths and weaknesses of the spreadsheet model. There are significant discrepancies between these findings and the opinions of experts expressed in the HCI literature, which have tended to emphasize the strengths of spreadsheets and to overlook the weaknesses. In general, the strengths are such as allow quick gratification of immediate needs, while the weaknesses are such as make subsequent debugging and interpretation difficult, suggesting a situated view of spreadsheet usage in which present needs outweigh future needs. We conclude with an attempt to characterize three extreme positions in the design space of information systems: the incremental addition system, the explanation system and the transcription system. The spreadsheet partakes of the first two. We discuss how to improve its explanation facilities.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1977

Scope marking in computer conditionals—a psychological evaluation

Max E. Sime; Thomas R. G. Green; D. J. Guest

In a previous paper the authors reported that it was easier for non-programmers to learn to use nested conditional constructions than jumping, or branch-to-label, constructions; however, as only single situations were studied, the conclusions were necessarily restricted. The present study extends the comparison to the more general case where nesting requires “scope markers” to disambiguate the syntax. The results showed that if the scope markers were simply the begin and end of ALGOL 60 (abbreviated NEST-BE) then the advantage of nesting over jumping was weakened; but if the scope markers carried redundant information about the conditional tested (NEST-INE) performance was excellent, particularly at debugging. It seems necessary to distinguish sequence information in a program, which describes the order in which things are done, from taxon information, which describes the conditions under which a given action is performed. Conventional programming languages obscure the taxon information. The advantage of nesting over jumping, we speculate, is in clarifying the sequence information by redundant re-coding in spatial terms; the added advantage of NEST-INE over NEST-BE is that it clarifies the taxon information. It is because debugging requires taxon information that NEST-INE is so much superior. On this view one would expect that in decision table and production system languages, where the taxon information is explicit but the sequence information is obscured, the reverse phenomena should occur. Because debugging requires sequence information as well as taxon information, a device that clarified the sequence would greatly improve such languages.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1979

When do diagrams make good computer languages?

M. Fitter; Thomas R. G. Green

It is obvious that some diagrammatic notations are better than others, less obvious why. We list some requirements for a good notation with examples and empirical findings. Central requirements are to give the user the useful information (relevance) using a clear perceptual code for the underlying processes (representation); moreover the notation should restrict the writer to “good” structures. Important information in symbolic codes should be redundantly recoded in a perceptual code as well. Unfortunately these principles, especially the last, tend to make extra work if the diagram has to be modified, conflicting with the requirement of revisability unless software aids can be devised. Notation designers cannot turn to behavioural science for detailed guidance, but they could well make more use of empirical evaluations than at present.


ubiquitous computing | 2001

Group and Individual Time Management Tools: What You Get is Not What You Need

Ann Blandford; Thomas R. G. Green

Some studies of diaries and scheduling systems have considered how individuals use diaries with a view to proposing requirements for computerised time management tools. Others have focused on the criteria for success of group scheduling systems. Few have paid attention to how people use a battery of tools as an ensemble. This interview study reports how users exploit paper, personal digital assistants (PDAs) and a group scheduling system for their time management. As with earlier studies, we find many shortcomings of different technologies, but studying the ensemble rather than individual tools points towards a different conclusion: rather than aiming towards producing electronic time management tools that replace existing paper-based tools, we should be aiming to understand the relative strengths and weaknesses of each technology and look towards more seamless integration between tools. In particular, the requirements for scheduling and those for more responsive, fluid time management conflict in ways that demand different kinds of support.

Collaboration


Dive into Thomas R. G. Green's collaboration.

Top Co-Authors

Ann Blandford

University College London

David Benyon

Edinburgh Napier University

Max E. Sime

University of Sheffield

Iain Connell

University College London

D. J. Guest

University of Sheffield