
Publication


Featured research published by Chris Hankin.


IEEE Computer Security Foundations Symposium | 2002

Approximate non-interference

A. Di Pierro; Chris Hankin; Herbert Wiklicky

We address the problem of characterising the security of a program against unauthorised information flows. Classical approaches are based on non-interference models which depend ultimately on the notion of process equivalence. In these models confidentiality is an absolute property stating the absence of any illegal information flow. We present a model in which the notion of non-interference is approximated in the sense that it allows for some exactly quantified leakage of information. This is characterised via a notion of process similarity which replaces the indistinguishability of processes by a quantitative measure of their behavioural difference. Such a quantity is related to the number of statistical tests needed to distinguish two behaviours. We also present two semantics-based analyses of approximate non-interference and we show that one is a correct abstraction of the other.
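
A hedged sketch of the central idea: rather than demanding that two processes be indistinguishable, one bounds their behavioural distance by a quantity ε. The notation below (Obs, ∼_ε) is illustrative, not lifted from the paper:

```latex
% Illustrative formulation (notation assumed, not quoted from the paper).
% Classical non-interference demands indistinguishability of behaviours;
% approximate non-interference relaxes equality to an eps-bound:
\[
  p \sim_{\varepsilon} q
  \iff
  \| \mathit{Obs}(p) - \mathit{Obs}(q) \| \le \varepsilon ,
\]
% so eps = 0 recovers exact non-interference, while a larger eps exactly
% quantifies the permitted leakage; eps is related to the number of
% statistical tests an attacker needs to tell p and q apart.
```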


Science of Computer Programming | 1986

Strictness analysis for higher-order functions

Geoffrey L. Burn; Chris Hankin; Samson Abramsky

Abstract interpretation is a compile-time technique which is used to gain information about a program that may then be used to optimise the execution of the program. A particular use of abstract interpretation is the strictness analysis of functional programs. This provides the key to the exploitation of parallelism in the evaluation of programs written in functional languages. In a language that has lazy semantics, the main potential for parallelism arises in the evaluation of operands of strict operators. A function is strict in an argument if its value is undefined whenever the argument is undefined. If we can use strictness analysis to detect which arguments a function is strict in, we then know that these arguments can be safely evaluated in parallel because this will not affect the lazy semantics. Experimental results suggest that this leads to significant speed-ups. Mycroft was the first person to apply abstract interpretation to the strictness analysis of functional programs. His framework only applies to first-order functions on flat domains. Many workers have proposed practical approaches to strictness analysis of higher-order functions over flat base domains, but their work has not been accompanied by extensions to Mycroft's theoretical framework. In this paper we give sound mathematical foundations for this work and discuss some of the practical issues involved. The practical approach is proved correct in relation to the theoretical framework.
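
As a hedged illustration of the strictness test the abstract describes, here is a toy analysis over Mycroft's two-point domain; the names (AbsVal, meet, join, gAbs) are ours, not the paper's:

```haskell
-- Two-point abstract domain: Zero abstracts "definitely undefined",
-- One abstracts "possibly defined". (Minimal sketch, not the paper's
-- higher-order formalism.)
data AbsVal = Zero | One deriving (Eq, Ord, Show)

meet, join :: AbsVal -> AbsVal -> AbsVal
meet = min   -- a strict operator needs all of its arguments
join = max   -- a conditional may take either branch

-- Abstract version of  g x y z = if x then y else y + z
-- The test x is always demanded; y occurs in both branches; z in one.
gAbs :: AbsVal -> AbsVal -> AbsVal -> AbsVal
gAbs x y z = x `meet` (y `join` (y `meet` z))

-- g is strict in an argument iff an undefined value there forces an
-- undefined result: feed Zero in that position, One everywhere else.
main :: IO ()
main = do
  print (gAbs Zero One One == Zero)  -- True:  strict in x
  print (gAbs One Zero One == Zero)  -- True:  strict in y
  print (gAbs One One Zero == Zero)  -- False: not strict in z
```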


Programs as Data Objects, Proceedings of a Workshop | 1985

The theory of strictness analysis for higher order functions

Geoffrey L. Burn; Chris Hankin; Samson Abramsky

Abstract interpretation is a compile-time technique which is used to gain information about a program that may then be used to optimise the execution of the program. A particular use of abstract interpretation is in strictness analysis of functional programs. This provides the key to the exploitation of parallelism in the evaluation of programs written in functional languages. In a language that has lazy semantics, the main potential for parallelism arises in the evaluation of operands of strict operators. A function is strict in an argument if its value is undefined whenever the argument is undefined. If we can use strictness analysis to detect which arguments a function is strict in, we then know that these arguments can be safely evaluated in parallel because this will not affect the lazy semantics. Experimental results suggest that this leads to significant speed-ups.
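
The strictness property both of these abstracts rely on can be stated compactly; this is the standard formulation, assumed rather than quoted from the paper:

```latex
% f is strict in its i-th argument iff an undefined argument in that
% position forces an undefined result, whatever the other arguments are:
\[
  f(x_1, \dots, x_{i-1}, \bot, x_{i+1}, \dots, x_n) = \bot
  \quad \text{for all } x_1, \dots, x_n .
\]
% Under lazy semantics such an argument may be evaluated eagerly (or in
% parallel) without changing the meaning of the program.
```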


International Conference on Concurrency Theory | 2003

Quantitative relations and approximate process equivalences

Alessandra Di Pierro; Chris Hankin; Herbert Wiklicky

We introduce a characterisation of probabilistic transition systems (PTS) in terms of linear operators on some suitably defined vector space representing the set of states. Various notions of process equivalences can then be re-formulated as abstract linear operators related to the concrete PTS semantics via a probabilistic abstract interpretation. These process equivalences can be turned into corresponding approximate notions by identifying processes whose abstract operators “differ” by a given quantity, which can be calculated as the norm of the difference operator. We argue that this number can be given a statistical interpretation in terms of the tests needed to distinguish two behaviours.
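
A sketch of the construction, with notation assumed from the authors' probabilistic abstract interpretation framework rather than quoted from this abstract:

```latex
% A PTS on state space S becomes a linear operator M on the vector
% space spanned by S, with entry M_{st} the probability of a step
% s -> t. An abstraction A maps concrete states to abstract ones; the
% induced abstract operator uses the Moore-Penrose pseudo-inverse:
\[
  M^{\#} = A^{\dagger} M A .
\]
% Two processes with operators M_1, M_2 are then equivalent up to
% epsilon when
\[
  \| M_1^{\#} - M_2^{\#} \| \le \varepsilon ,
\]
% i.e. the norm of the difference operator mentioned in the abstract.
```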


Parallel Computing | 1998

Coordination languages for parallel programming

Farhad Arbab; Paolo Ciancarini; Chris Hankin

A number of interesting models have been proposed and used to support coordination languages and systems. In this introductory paper, we first present a number of important concepts that form a context for classification and comparison of various coordination models and languages, and their applications. Next, we review three models and their associated languages, representing three different approaches to coordination. We illustrate the application of each model and language by using it to solve the classical dining philosophers problem. This paper ends with an overview of the rest of the papers that appear in this special issue.
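
As a hedged, modern sketch of the coordination style the paper surveys, here is a Linda-flavoured tuple space in Haskell; STM stands in for Linda's atomic in/out primitives, and the names (TupleSpace, out, inT) are ours:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM
import Control.Monad (forever, forM_)

-- A tuple space holding ("fork", i) tuples, in the spirit of Linda's
-- shared dataspace: `out` deposits a tuple, `inT` withdraws a matching
-- one, blocking (via STM retry) until it is available.
type TupleSpace = TVar [(String, Int)]

out :: TupleSpace -> (String, Int) -> STM ()
out ts t = modifyTVar' ts (t :)

inT :: TupleSpace -> (String, Int) -> STM ()
inT ts t = do
  tuples <- readTVar ts
  if t `elem` tuples
    then writeTVar ts (remove t tuples)
    else retry
  where remove x (y:ys) | x == y    = ys
                        | otherwise = y : remove x ys
        remove _ []     = []

-- Philosopher i needs forks i and i+1 (mod n). Taking both in a single
-- STM transaction sidesteps the classical deadlock; a faithful Linda
-- solution would instead use, e.g., a "room ticket" tuple.
philosopher :: TupleSpace -> Int -> Int -> IO ()
philosopher ts n i = forever $ do
  atomically $ inT ts ("fork", i) >> inT ts ("fork", (i + 1) `mod` n)
  threadDelay 1000  -- eat
  atomically $ out ts ("fork", i) >> out ts ("fork", (i + 1) `mod` n)

main :: IO ()
main = do
  let n = 5
  ts <- newTVarIO [("fork", i) | i <- [0 .. n - 1]]
  forM_ [0 .. n - 1] $ forkIO . philosopher ts n
  threadDelay 100000  -- let them dine briefly
```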


International Colloquium on Automata, Languages and Programming | 1998

Generalised Flowcharts and Games

Pasquale Malacaria; Chris Hankin

We introduce a generalization of the classical notion of flowchart for languages with higher order and object-oriented features. These general flowcharts are obtained by an abstraction of the game semantics for Idealized Algol and as such rely on a solid mathematical basis. We demonstrate how charts may be used as the basis for data flow analysis.


Journal of Functional Programming | 1991

Fixed points and frontiers: a new perspective

Sebastian Hunt; Chris Hankin

Abstract interpretation is the collective name for a family of semantics-based techniques for compile-time analysis of programs. One of the most costly operations in automating such analyses is the computation of fixed points. The frontiers algorithm is an elegant method, invented by Chris Clack and Simon Peyton Jones, which addresses this issue. In this article we present a new approach to the frontiers algorithm based on the insight that frontiers represent upper and lower subsets of a function's argument domain. This insight leads to a new formulation of the frontiers algorithm for higher-order functions, which is considerably more concise than previous versions. We go on to argue that for many functions, especially in the higher-order case, finding fixed points is an intractable problem unless the sizes of the abstract domains are reduced. We show how the semantic machinery of abstract interpretation allows us to place upper and lower bounds on the values of fixed points in large lattices by working within smaller ones.
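
To make the fixed-point problem concrete, here is a hedged sketch of the naive Kleene iteration that the frontiers algorithm improves on, applied to the abstract version of f x y = if x == 0 then y else f (x - 1) y over the two-point domain (the names are ours, not the article's):

```haskell
import qualified Data.Map as Map

data AbsVal = Zero | One deriving (Eq, Ord, Show)

-- A binary abstract function as a finite table over the two-point
-- domain, ordered pointwise; the bottom table maps everything to Zero.
type Table = Map.Map (AbsVal, AbsVal) AbsVal

domain :: [(AbsVal, AbsVal)]
domain = [(x, y) | x <- [Zero, One], y <- [Zero, One]]

bottom :: Table
bottom = Map.fromList [(p, Zero) | p <- domain]

-- Abstract functional for  f x y = if x == 0 then y else f (x-1) y :
-- the test demands x; the result is y or the recursive call.
step :: Table -> Table
step t = Map.fromList
  [ ((x, y), min x (max y (t Map.! (x, y)))) | (x, y) <- domain ]

-- Kleene iteration from bottom; it terminates because the lattice of
-- tables is finite and `step` is monotone.
lfp :: Eq a => (a -> a) -> a -> a
lfp f x = let x' = f x in if x' == x then x else lfp f x'

main :: IO ()
main = do
  let fAbs = lfp step bottom
  print (fAbs Map.! (One, Zero))  -- Zero: f is strict in y
  print (fAbs Map.! (Zero, One))  -- Zero: f is strict in x
```

The tables here have only four entries; the article's point is that for higher-order functions such tables explode in size, which is what makes the frontiers representation worthwhile.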


International Conference on Functional Programming | 1987

Finding fixed points in finite lattices

Chris Martin; Chris Hankin

Recently there has been much interest in the abstract interpretation of declarative languages. Abstract interpretation is a semantics-based approach to program analysis that uses compile-time evaluation of programs over simplified value domains. This gives information about the run-time properties of programs and provides the basis for significant performance improvements. A particular example of abstract interpretation is strictness analysis, which allows the detection of the parameters in which a function is strict; these parameters may be passed by value without compromising the termination properties of the program.
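
A standard bound underlying the paper's topic (assumed, not quoted): for a monotone f on a finite lattice L, the ascending Kleene chain reaches the least fixed point in at most height-of-L steps:

```latex
% Ascending Kleene chain for monotone f on a finite lattice L:
\[
  \bot \;\sqsubseteq\; f(\bot) \;\sqsubseteq\; f^2(\bot)
  \;\sqsubseteq\; \cdots \;\sqsubseteq\; f^k(\bot) = \mathrm{lfp}(f),
  \qquad k \le \mathrm{height}(L).
\]
% The catch for strictness analysis is the size of L: the abstract
% domain for an n-argument function over the two-point domain has up
% to 2^(2^n) elements, so naive iteration quickly becomes infeasible.
```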


European Symposium on Programming | 1988

A safe approach to parallel combinator reduction

Chris Hankin; Geoffrey L. Burn; Simon L. Peyton Jones

In this paper we present the results of two pieces of work which, when combined, allow us to take a program text in a functional language and produce a parallel implementation of that program. We present techniques for discovering sources of parallelism in a program at compile time, and then show how this parallelism is naturally mapped into a parallel combinator set that we will define. To discover sources of parallelism in a program, we use abstract interpretation. Abstract interpretation is a compile-time technique which is used to gain information about a program that may then be used to optimise the execution of the program. A particular use of abstract interpretation is in strictness analysis of functional programs. In a language that has lazy semantics, the main potential for parallelism arises in the evaluation of operands of strict operators. A function is strict in an argument if its value is undefined whenever the argument is undefined. If we can use strictness analysis to detect which arguments a function is strict in, we then know that these arguments can be safely evaluated in parallel because this will not affect the lazy semantics. Having identified the sources of parallelism at compile time, it is necessary to communicate these to the run-time system. In the second part of the paper we use an extended set of combinators, including some parallel combinators, to achieve this purpose.
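
As a hedged modern analogue of the parallel combinators the paper introduces, here is the same safety argument expressed with GHC's Control.Parallel primitives rather than the paper's combinator set:

```haskell
import Control.Parallel (par, pseq)

-- If strictness analysis shows f demands both arguments, they may be
-- evaluated in parallel without changing lazy semantics: `par` sparks
-- the evaluation of x while `pseq` forces y before applying f.
parApply2 :: (a -> b -> c) -> a -> b -> c
parApply2 f x y = x `par` (y `pseq` f x y)

-- Example: (+) is strict in both operands, so both summands may be
-- sparked safely.
main :: IO ()
main = print (parApply2 (+) (sum [1 .. 1000000 :: Int]) (product [1 .. 20]))
```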


Electronic Notes in Theoretical Computer Science | 2005

Continuous-Time Probabilistic KLAIM

Alessandra Di Pierro; Chris Hankin; Herbert Wiklicky

The design of languages supporting network programming is a necessary step towards the formalisation of distributed and mobile computing. The existence of an abstract semantic framework constitutes the basis for a formal analysis of such systems. The KLAIM paradigm [5] provides such a semantic framework by introducing basic concepts and primitives addressing the key aspects of the coordination of interacting located processes. We extend this basic paradigm with probabilistic constructs with the aim of introducing a semantic basis for a quantitative analysis of networks. A quantitative analysis allows in general for the consideration of more “realistic” situations. For example, a probabilistic analysis allows for establishing the security of a system up to a given tolerance factor expressing how much the system is actually vulnerable. This is in contrast to a qualitative analysis which typically might be used to validate the absolute security of a given system. In a distributed environment quantitative analysis is also of great practical use in the consideration of timing issues which involve the asynchronous communications among processes running with different clocks. In a security setting these issues are relevant e.g. for the analysis and prevention of denial of service attacks, which involve the delaying of time-critical operations [9]. In our probabilistic version of KLAIM, which we call pKLAIM, we introduce probabilities in a number of ways. At the local level, we introduce probabilistic parallel and choice operators. In addition we use probabilistic …
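
A hedged sketch of the continuous-time ingredient, using standard CTMC-style semantics that we assume rather than quote: enabled actions carry rates, delays are exponentially distributed, and competing actions race:

```latex
% Exponential delay with rate lambda > 0:
\[
  \Pr[\text{delay} \le t] = 1 - e^{-\lambda t},
\]
% while a probabilistic choice  sum_i p_i : P_i  requires
\[
  \sum_i p_i = 1, \qquad p_i \ge 0,
\]
% so the local probabilistic parallel/choice operators mentioned above
% weight which process proceeds, while rates govern timing.
```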

Collaboration


Dive into Chris Hankin's collaborations.

Top Co-Authors

Hugh Glaser

University of Southampton


Flemming Nielson

Technical University of Denmark


Hanne Riis Nielson

Technical University of Denmark


Pasquale Malacaria

Queen Mary University of London
