Publication


Featured research published by David Clark.


Journal of Computer Security | 2007

A static analysis for quantifying information flow in a simple imperative language

David Clark; Sebastian Hunt; Pasquale Malacaria

We propose an approach to quantify interference in a simple imperative language that includes a looping construct. In this paper we focus on a particular case of this definition of interference: leakage of information from private variables to public ones via a Trojan Horse attack. We quantify leakage in terms of Shannon's information theory and we motivate our definition by proving a result relating this definition of leakage to the classical notion of programming language interference. The major contribution of the paper is a quantitative static analysis based on this definition for such a language. The analysis uses some non-trivial information theory results, such as Fano's inequality and the L1 inequality, to provide reasonable bounds for conditional statements. While-loops are handled by integrating a qualitative flow-sensitive dependency analysis into the quantitative analysis.
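As a concrete illustration of the leakage measure the abstract refers to (though not of the static analysis itself, which bounds this quantity without running the program), here is a minimal Python sketch; the two-bit Trojan program and the uniform 8-bit secret are invented for the example:

```python
from collections import Counter
from math import log2

def shannon_leakage(program, secrets):
    """Leakage in bits: entropy of the public output distribution
    induced by a uniformly distributed secret (deterministic program,
    no other low inputs)."""
    counts = Counter(program(h) for h in secrets)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical one-line program: a Trojan copies the low 2 bits
# of an 8-bit secret into a public variable.
leak = shannon_leakage(lambda h: h % 4, range(256))
print(leak)  # 2.0 bits: only the two copied bits escape
```

A program that copies the whole secret leaks all 8 bits; a constant program leaks 0.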


Journal of Logic and Computation | 2005

Quantitative Information Flow, Relations and Polymorphic Types

David Clark; Sebastian Hunt; Pasquale Malacaria

This paper uses Shannons information theory to give a quantitative definition of information flow in systems that transform inputs to outputs. For deterministic systems, the definition is shown to specialize to a simpler form when the information source and the known inputs jointly determine all inputs uniquely. For this special case, the definition is related to the classical security condition of non-interference and an equivalence is established between non-interference and independence of random variables. Quantitative information flow for deterministic systems is then presented in relational form. With this presentation, it is shown how relational parametricity can be used to derive upper and lower bounds on information flows through families of functions defined in the second-order lambda calculus.
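The equivalence between non-interference and independence of random variables can be made concrete: for a deterministic system, the mutual information I(H; O) between the high input and the output is zero exactly when the two are independent. A minimal sketch (it marginalises over a uniformly distributed low input rather than conditioning on known inputs as the paper does):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits from a list of equally likely (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

highs, lows = range(4), range(4)
# Non-interfering: output ignores the high input, so H and O are
# independent random variables and I(H; O) = 0.
safe = [(h, l + 1) for h in highs for l in lows]
# Interfering: output reveals part of the high input, so I(H; O) > 0.
leaky = [(h, h + l) for h in highs for l in lows]
print(mutual_information(safe), mutual_information(leaky))
```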


Electronic Notes in Theoretical Computer Science | 2005

Quantified Interference for a While Language

David Clark; Sebastian Hunt; Pasquale Malacaria

We show how information theory can be used to give a quantitative definition of interference between variables in imperative programming languages. In this paper we focus on a particular case of this definition of interference: leakage of information from private variables to public ones in While language programs. The major result of the paper is a quantitative analysis for this language that employs a use-definition graph to calculate bounds on the leakage into each variable.


Scopus | 2011

Google Generation II: web behaviour experiments with the BBC

David Nicholas; Ian Rowlands; David Clark; Peter Williams

Purpose – The purpose of this paper is to report on continuing research into the way the Google Generation behave on the internet, and to compare this with an earlier, highly publicised study by the paper's authors.

Design/methodology/approach – This research used a televised practical experiment and a remote global web test incorporating search, working-memory and multi-tasking experiments.

Findings – The Google Generation appears to behave very differently from older generations. By their own admission they are less confident about their searching prowess, and this is also demonstrated by the fact that they viewed fewer pages, visited fewer domains and undertook fewer searches. Also, tellingly, their search statements were much more the product of cut and paste. The Google Generation also have poorer working memories and are less competent at multi-tasking, both of which may have implications for researching in an online environment.

Originality/value – The paper introduces multi-tasking and cognit...


Psychological Medicine | 2008

Psychological treatment outcomes in routine NHS services: a commentary on Stiles et al. (2007)

David Clark; Christopher G. Fairburn; Simon Wessely

Following the Bristol enquiry into the care of children with congenital heart disease, NHS cardiovascular units now make their surgical survival rates available to the public through a website (www.ccad.org.uk/congenital) with suitable advice about how the data can, and cannot, be interpreted. Sadly, nothing comparable exists for members of the public who are suffering from mental illnesses and wish to know what their chance of recovery is if they take up the offer of treatment X in service Y. This is not simply because NHS mental health services do not make their outcomes available to the public. In many cases, it is because the outcomes are not even monitored. For example, a recent survey of British psychiatrists (Gilbody et al. 2002) found that only 11% routinely used standardized measures to assess clinical change in their patients and a majority (58%) had never used such instruments. Clearly, there is a long way to go.

In the present issue, Stiles et al. (2007) report a welcome exception. For a number of years, this group have been advocating the use of the Clinical Outcomes in Routine Evaluation — Outcome Measure (CORE-OM; Evans et al. 2000) to routinely measure outcomes in patients with common mental problems (especially anxiety, depression and interpersonal difficulties) who are receiving treatment in the NHS. Their strenuous efforts to overcome resistance to routine outcome monitoring are exemplary and they deserve enormous credit for the way in which they have moved the field forward. As a direct result of their work, a substantial number of NHS primary-care counselling services, and other psychological treatment services, now aim to give their patients self-report measures of their clinical state at pre- and post-treatment. While this is a very encouraging development, it is important to realize that the data that have so far been collected are incomplete in key respects and this poses severe limits on their interpretation. In our view, Stiles et al.'s study reported in this issue, and the earlier study (Stiles et al. 2006) with a smaller sample that it replicates, go well beyond these limits and, as a consequence, conclusions are drawn that are not warranted and risk being misinterpreted.

The aim of this second study was to evaluate the effectiveness, as measured by CORE-OM scores, of three different therapies as they are practised in NHS primary-care counselling services. The design, which was essentially the same as that employed in the earlier study (Stiles et al. 2006), is a non-randomized (naturalistic) comparison of patients whose therapy was described by their therapist as falling within the broad categories of cognitive-behaviour therapy (CBT), person-centred therapy (PCT), or psychodynamic and/or psychoanalytic therapy (PDT), or alternatively one of those categories plus no more than one other therapy approach. The data were collected by encouraging therapists to use the CORE-OM with their patients and to anonymously submit the questionnaires to a central database.

No information is given about the proportion of each therapist's caseload that received CORE-OMs and was submitted to the database. However, from the small numbers of cases that were submitted by many therapists (over a 3.5-year data collection period the median number of cases submitted by each therapist was only six), it is clear that not everyone submitted all of their cases. The analyses focused on 5613 submitted patients who had completed CORE-OMs at pre-treatment and post-treatment and whose therapist had completed an End-of-Therapy Form (which identified the type of therapy). We are told this number constitutes 38% of the patients who were submitted to the database. This is because many patients who were submitted to the database only completed a CORE-OM at pre-treatment or at post-treatment, but not on both occasions.

The main findings were as follows: (1) the patients who were included in the analyses showed substantial pre-treatment to post-treatment improvement (uncontrolled effect size = 1.39; reliable and clinically significant improvement rate = 58%) and (2) there were no significant differences between the three treatment categories, which had very similar improvement rates. The authors conclude from these findings: (1) all three treatments are effective, (2) the treatments do not differ in effectiveness, and (3) the treatments as currently delivered in primary care are doing about as well as they do in tightly controlled clinical trials, when such data are available. The authors frankly acknowledge methodological shortcomings in their Discussion (as they did in the previous report). However, they feel that their results are still interpretable. We disagree. Below we list and review the most serious methodological limitations and indicate why we think they seriously compromise the conclusions drawn from this study and the earlier one.


Integrated Formal Methods | 2004

UML to B: Formal Verification of Object-Oriented Models

Kevin Lano; David Clark; Kelly Androutsopoulos

The integration of UML and formal methods such as B and SMV provides a bridge between graphical specification techniques usable by mainstream software engineers, and precise analysis and verification techniques, essential for the development of high integrity and critical systems. In this paper we define a translation from UML class diagrams into B, which is used to verify the consistency of UML models and to verify that expected properties of these models hold.
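A much-simplified, hypothetical sketch of the flavour of such a translation: one class with typed attributes becomes a B machine whose state is a set of live instances plus one total function per attribute. The paper's actual translation also covers associations, operations and the consistency conditions to be verified, none of which appear here:

```python
# Hypothetical, much-simplified rendering of one step of a UML-to-B
# translation: a class becomes a B MACHINE with an instance set and
# one total function per attribute (B's "-->" is total function,
# "<:" is subset).
def class_to_b(name, attrs):
    lines = [f"MACHINE {name}",
             f"SETS {name.upper()}_OBJ",
             f"VARIABLES {name.lower()}s, " + ", ".join(attrs),
             "INVARIANT",
             f"    {name.lower()}s <: {name.upper()}_OBJ"]
    for attr, typ in attrs.items():
        lines.append(f"    & {attr} : {name.lower()}s --> {typ}")
    lines += ["INITIALISATION",
              f"    {name.lower()}s := {{}} || " +
              " || ".join(f"{a} := {{}}" for a in attrs),
              "END"]
    return "\n".join(lines)

print(class_to_b("Account", {"balance": "NAT", "owner": "STRING"}))
```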


Formal Methods | 2013

Fault localization prioritization: Comparing information-theoretic and coverage-based approaches

Shin Yoo; Mark Harman; David Clark

Test case prioritization techniques seek to maximize early fault detection. Fault localization seeks to use test cases already executed to help find the fault location. There is a natural interplay between the two techniques; once a fault is detected, we often switch focus to fault fixing, for which localization may be a first step. In this article we introduce the Fault Localization Prioritization (FLP) problem, which combines prioritization and localization. We evaluate three techniques: a novel FLP technique based on information theory, FLINT (Fault Localization using INformation Theory), that we introduce in this article, a standard Test Case Prioritization (TCP) technique, and a “test similarity technique” used in previous work. Our evaluation uses five different releases of four software systems. The results indicate that FLP and TCP can statistically significantly reduce fault localization costs for 73% and 76% of cases, respectively, and that FLINT significantly outperforms similarity-based localization techniques in 52% of the cases considered in the study.
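The information-theoretic idea behind FLINT can be sketched under a deliberately crude fault model (a test fails iff it covers the single faulty statement; this assumption is for the example only, not the paper's model): prioritize the test whose outcome is expected to shrink the entropy of the fault-location distribution the most.

```python
from math import log2

def entropy(p):
    return -sum(q * log2(q) for q in p.values() if q > 0)

def expected_posterior_entropy(prior, covered):
    """Assumed model: a test fails iff it covers the (single) faulty
    statement. Condition the fault distribution on each outcome and
    average the resulting entropies."""
    p_fail = sum(prior[s] for s in covered)
    out = 0.0
    for outcome_p, keep in ((p_fail, lambda s: s in covered),
                            (1 - p_fail, lambda s: s not in covered)):
        if outcome_p > 0:
            post = {s: prior[s] / outcome_p for s in prior if keep(s)}
            out += outcome_p * entropy(post)
    return out

prior = {s: 0.25 for s in "abcd"}          # 4 equally suspicious statements
tests = {"t1": {"a"}, "t2": {"a", "b"}, "t3": {"a", "b", "c"}}
best = min(tests, key=lambda t: expected_posterior_entropy(prior, tests[t]))
print(best)  # t2: the half-splitting test yields the most expected information
```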


Fundamental Approaches to Software Engineering | 2009

Control Dependence for Extended Finite State Machines

Kelly Androutsopoulos; David Clark; Mark Harman; Zheng Li; Laurence Tratt

Though there have been nearly three decades of work on program slicing, there has been comparatively little work on slicing for state machines. One of the primary challenges that currently presents a barrier to wider application of state machine slicing is the problem of determining control dependence. We survey existing related definitions, introducing a new definition that subsumes one and extends another. We illustrate that by using this new definition our slices respect Weiser slicing's termination behaviour. We prove results that clarify the relationships between our definition and older ones, following this up with examples to motivate the need for these differences.
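For readers unfamiliar with control dependence, the classical CFG-based notion that the paper's new definition extends (extended finite state machines may lack the unique exit node this notion relies on) can be computed from postdominators:

```python
def postdominators(succ, exit_node):
    """Naive fixpoint: pdom(n) = {n} ∪ ⋂ pdom(s) over successors s."""
    nodes = set(succ)
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = {n} | set.intersection(*(pdom[s] for s in succ[n]))
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

def control_dependence(succ, exit_node):
    """Classical (Ferrante-style) definition: n is control dependent
    on u if n postdominates some successor of u but does not
    postdominate u itself."""
    pdom = postdominators(succ, exit_node)
    deps = {u: set() for u in succ}
    for u, vs in succ.items():
        for v in vs:
            deps[u] |= {n for n in pdom[v] if n not in pdom[u]}
    return deps

# Diamond CFG: A branches to B or C, both rejoin at exit D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(control_dependence(cfg, "D"))  # only B and C depend on A's branch
```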


Computer Languages, Systems & Structures | 2002

Information flow for Algol-like languages

David Clark; Chris Hankin; Sebastian Hunt

In this paper we present an approach to information flow analysis for a family of languages. We start with a simple imperative language. We present an information flow analysis using a flow logic. The paper contains detailed correctness proofs for this analysis. We next extend the analysis to a restricted form of Idealised Algol, a call-by-value higher-order extension of the simple imperative language (the key restriction being the lack of recursion). The paper concludes with a discussion of further extensions, including a probabilistic extension of Idealised Algol.
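The dependency idea underlying such an information flow analysis can be sketched for straight-line code: a variable's set of security sources becomes the union of the sources of everything it reads. This toy version ignores the conditionals, correctness proofs and higher-order features the paper treats:

```python
# Hypothetical toy dependency analysis in the spirit of a flow logic:
# program is a list of assignments (target, variables_read), and the
# analysis maps each variable to the set of sources flowing into it.
def flow_analysis(program, initial):
    env = {v: set(srcs) for v, srcs in initial.items()}
    for target, reads in program:
        env[target] = set().union(*(env[r] for r in reads)) if reads else set()
    return env

# t := h; l := t + l  -- the secret h reaches the public l via t.
env = flow_analysis(
    [("t", ["h"]), ("l", ["t", "l"])],
    {"h": {"high"}, "l": {"low"}},
)
print(env["l"])  # {'high', 'low'}: an indirect flow from h into l
```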


Journal of Information Science | 2009

Online use and information seeking behaviour

David Nicholas; David Clark; Ian Rowlands; R M Hamid Jamali

The paper reports on the results of the project ‘Evaluating the usage and impact of e-journals in the UK’. Using deep log analysis techniques, we evaluated the use of the Oxford Journals database in regard to life sciences, economics and history by 10 major UK research institutions. The aim of the study was to investigate researchers’ digital behaviour, and to ascertain whether it varied by subjects and disciplines, or in relation to the institutions. The findings revealed significant subject and institutional differences. Life scientists were the biggest users. Economists made the greatest use of abstracts. Historians proved to be the most active searchers. Research intensive universities were characterized by high volume use and short session times, light sessions, and sessions which utilized few of the search functions available. Open access journals featured strongly in the ranked lists of life sciences and history; and Google was an extremely popular means of accessing journal content, especially so in the case of historians.
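Deep log analysis starts by cutting a user's timestamped requests into sessions; a minimal sketch using an assumed 30-minute inactivity threshold (the study's actual sessionisation rules are not reproduced here):

```python
from datetime import datetime, timedelta

# Hypothetical minimal sessionization: group one user's timestamped
# page requests into sessions separated by >30 minutes of inactivity.
def sessionize(times, gap=timedelta(minutes=30)):
    times = sorted(times)
    sessions, current = [], [times[0]]
    for t in times[1:]:
        if t - current[-1] > gap:
            sessions.append(current)
            current = [t]
        else:
            current.append(t)
    sessions.append(current)
    return sessions

def metrics(sessions):
    """Views per session and session length in minutes, two of the
    measures the study compares across institutions."""
    return [(len(s), (s[-1] - s[0]).seconds // 60) for s in sessions]

ts = [datetime(2009, 1, 5, 9, m) for m in (0, 2, 5)] + \
     [datetime(2009, 1, 5, 14, m) for m in (10, 11)]
print(metrics(sessionize(ts)))  # [(3, 5), (2, 1)]
```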

Collaboration


Dive into David Clark's collaborations.

Top Co-Authors

Mark Harman

University College London


Pasquale Malacaria

Queen Mary University of London


Chris Hankin

Imperial College London


Earl T. Barr

University College London
