Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bruce Christianson is active.

Publication


Featured research published by Bruce Christianson.


international workshop on security | 1996

Why Isn't Trust Transitive?

Bruce Christianson; William S. Harbison

One of the great strengths of public-key cryptography is its potential to allow the localization of trust. This potential is greatest when cryptography is present to guarantee data integrity rather than secrecy, and where there is no natural hierarchy of trust. Both these conditions are typically fulfilled in the commercial world, where CSCW requires sharing of data and resources across organizational boundaries. One property which trust is frequently assumed or “proved” to have is transitivity (if A trusts B and B trusts C then A trusts C) or some generalization of transitivity such as *-closure. We use the loose term unintentional transitivity of trust to refer to a situation where B can effectively put things into A's set of trust assumptions without A's explicit consent (or sometimes even awareness). Any account of trust which allows such situations to arise clearly poses major obstacles to the effective confinement (localization) of trust. In this position paper, we argue against the need to accept unintentional transitivity of trust. We distinguish the notion of trust from a number of other (transitive) notions with which it is frequently confused, and argue that “proofs” of the unintentional transitivity of trust typically involve unpalatable logical assumptions as well as undesirable consequences.
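As a minimal illustration of the property the paper argues against (a sketch, not code from the paper), the snippet below computes the *-closure of an explicit trust relation, showing how C enters A's trust set without A's consent:

```python
# Minimal sketch: trust as a binary relation, and the transitive (*-) closure
# that "unintentional transitivity" would force upon it.

def transitive_closure(trusts):
    """Return the smallest transitive relation containing `trusts`."""
    closure = set(trusts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# A explicitly trusts only B; B trusts C.
explicit = {("A", "B"), ("B", "C")}

# Under an account of trust that forces *-closure, ("A", "C") appears,
# even though it was never among A's explicit trust assumptions.
print(transitive_closure(explicit))  # contains ('A', 'B'), ('B', 'C'), ('A', 'C')
```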


Journal of Computational and Applied Mathematics | 2000

Automatic differentiation of algorithms

Michael Bartholomew-Biggs; Steven Brown; Bruce Christianson; Laurence Dixon

We introduce the basic notions of automatic differentiation, describe some extensions which are of interest in the context of nonlinear optimization and give some illustrative examples.
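The abstract itself contains no code; as a minimal sketch of one of the basic notions it surveys, here is forward-mode automatic differentiation via dual numbers (illustrative only, not from the paper):

```python
# Forward-mode AD sketch: a dual number v + d*eps with eps**2 == 0,
# where d carries the derivative alongside the value.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(2.0, 1.0)                 # seed dx/dx = 1
y = f(x)
print(y.value, y.deriv)            # 17.0 14.0
```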


Optimization Methods & Software | 1994

Reverse accumulation and attractive fixed points

Bruce Christianson

We apply reverse accumulation to obtain automatic gradients and error estimates of functions which include in their computation a convergent iteration of the form y = Φ(y, u), where y and u are vectors. We suggest an implementation approach which allows this to be done by a fairly routine extension of existing reverse accumulation code. We show how to re-use the computational graph for the fixed point constructor Φ so as to set explicit stopping criteria for the iterations, based on the gradient accuracy required. Our construction allows the gradient vector to be obtained to the same order of accuracy as the objective function values (which is in general the best we can hope to achieve), and the same order of computational cost (which does not explicitly depend upon the number of independent variables). The technique can be applied to functions which contain several iterative constructions, either serially or nested.
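A hedged numerical sketch of the idea (not the paper's implementation): after the forward iteration converges, an adjoint fixed-point iteration with the same contraction rate accumulates the gradient, with its own stopping criterion set by the gradient accuracy required. The function Φ and all constants below are invented for illustration:

```python
import math

def phi(y, u):                  # contractive in y, so the iteration converges
    return 0.5 * math.sin(y) + u

def dphi_dy(y, u):
    return 0.5 * math.cos(y)

def dphi_du(y, u):
    return 1.0

def fixed_point_with_gradient(u, ybar=1.0, tol=1e-12):
    # Forward sweep: iterate the constructor Phi to convergence.
    y = 0.0
    while abs(phi(y, u) - y) > tol:
        y = phi(y, u)
    # Reverse sweep: adjoint fixed-point iteration w := ybar + w * dPhi/dy,
    # stopped according to the gradient accuracy required.
    w = 0.0
    while abs((ybar + w * dphi_dy(y, u)) - w) > tol:
        w = ybar + w * dphi_dy(y, u)
    return y, w * dphi_du(y, u)            # y*, dy/du

y, dydu = fixed_point_with_gradient(u=0.3)
print(dydu)                                # matches the analytic value:
print(1.0 / (1.0 - 0.5 * math.cos(y)))     # dy/du = Phi_u / (1 - Phi_y)
```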


Optimization Methods & Software | 1998

Reverse accumulation and implicit functions

Bruce Christianson

We begin by introducing a simple technique for using reverse accumulation to obtain the first derivatives of target functions which include in their construction the solution of systems of linear or nonlinear equations. In the linear case, solving Ay = b for y corresponds to the adjoint operations b̄ := b̄ + v and Ā := Ā − vyᵀ, where v is the solution of the adjoint equations vᵀA = ȳᵀ. A more sophisticated construction applies in the nonlinear case. We apply these techniques to obtain automatic numerical error estimates for calculated function values. These error estimates include the effects of inaccurate equation solution as well as rounding error. Our basic techniques can be generalized to functions which contain several (linear or nonlinear) implicit functions in their construction, either serially or nested. In the case of scalar-valued target functions that include equation solution as part of their construction, our algorithms involve at most the same order of computational effort as the computation of the target function.
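The adjoint updates for the linear case can be checked numerically. The sketch below (illustrative only, using NumPy rather than anything from the paper) verifies one entry of Ā against a finite difference:

```python
# For y = solve(A, b) with output adjoint ybar: bbar += v, Abar -= outer(v, y),
# where v solves the adjoint equations A^T v = ybar (i.e. v^T A = ybar^T).
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + n * np.eye(n)   # keep A well-conditioned
b = rng.normal(size=n)
ybar = rng.normal(size=n)                     # adjoint of the solve's output

y = np.linalg.solve(A, b)
v = np.linalg.solve(A.T, ybar)                # adjoint equations
bbar = v                                      # bbar := bbar + v
Abar = -np.outer(v, y)                        # Abar := Abar - v y^T

# Finite-difference check of one entry of Abar.
eps = 1e-7
A2 = A.copy(); A2[1, 2] += eps
fd = (ybar @ np.linalg.solve(A2, b) - ybar @ y) / eps
print(Abar[1, 2], fd)                         # should agree to about 1e-6
```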


international workshop on security | 2004

Anonymous authentication

Partha Das Chowdhury; Bruce Christianson; James A. Malcolm

The contribution of this paper is a mechanism which links authentication to audit using weak identities and takes identity out of the trust management envelope. Although our protocol supports weaker versions of anonymity, it is still useful even if anonymity is not required, because of the reduction in trust assumptions which it provides. We illustrate the protocol with an example of authorization in a role based access mechanism.
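The abstract gives no protocol details, so the following is purely an illustrative sketch, not the authors' mechanism: per-transaction pseudonyms act as weak identities, each action is logged under its pseudonym for audit, and the link to the long-term identity stays outside the trust envelope until disclosure is required. All names below are hypothetical:

```python
# Illustrative only: hash-derived pseudonyms as weak identities, with an
# HMAC tag per action so the audit log can be checked later. A real design
# would use per-pseudonym signature keys rather than this symmetric shortcut.
import hashlib, hmac, os

MASTER = os.urandom(32)                  # user's long-term secret, never revealed

def pseudonym(counter: int) -> bytes:
    """Weak identity for one transaction; unlinkable without MASTER."""
    return hashlib.sha256(MASTER + counter.to_bytes(8, "big")).digest()

def authenticate(counter: int, action: bytes):
    """Authenticate `action` under the counter-th pseudonym."""
    pid = pseudonym(counter)
    tag = hmac.new(pid, action, hashlib.sha256).digest()
    return pid, tag                      # (pid, tag) goes into the audit log

# To answer an audit query, the user re-derives pid from MASTER and the
# counter, linking the action to themselves only when disclosure is required.
pid, tag = authenticate(1, b"read:/records/42")
expected = hmac.new(pid, b"read:/records/42", hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```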


international conference on engineering applications of neural networks | 2009

Using the Support Vector Machine as a Classification Method for Software Defect Prediction with Static Code Metrics

David Gray; David Bowes; Neil Davey; Yi Sun; Bruce Christianson

The automated detection of defective modules within software systems could lead to reduced development costs and more reliable software. In this work, the static code metrics for a collection of modules contained within eleven NASA data sets are used with a Support Vector Machine classifier. A rigorous sequence of pre-processing steps was applied to the data prior to classification, including the balancing of both classes (defective or otherwise) and the removal of a large number of repeating instances. The Support Vector Machine in this experiment yields an average accuracy of 70% on previously unseen data.
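A hedged reconstruction of such a pipeline in scikit-learn (the file name, label column, and parameter choices below are assumptions, not the authors' setup):

```python
# Sketch: dedupe, split, scale, then classify with an SVM.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("nasa_mdp_module_metrics.csv")   # hypothetical file
df = df.drop_duplicates()                         # remove repeating instances

X, y = df.drop(columns=["defective"]), df["defective"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight="balanced" stands in for the paper's explicit class balancing.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_train, y_train)
print("accuracy on unseen data:", clf.score(X_test, y_test))
```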


IET Software | 2012

Reflections on the NASA MDP data sets

David Gray; David Bowes; Neil Davey; Yi Sun; Bruce Christianson

Background: The NASA metrics data program (MDP) data sets have been heavily used in software defect prediction research. Aim: To highlight the data quality issues present in these data sets, and the problems that can arise when they are used in a binary classification context. Method: A thorough exploration of all 13 original NASA data sets, followed by various experiments demonstrating the potential impact of duplicate data points when data mining. Conclusions: Firstly, researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Secondly, the bulk of defect prediction experiments based on the NASA MDP data sets may have led to erroneous findings. This is mainly because of repeated/duplicate data points potentially causing substantial amounts of training and testing data to be identical.
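The duplicate-data hazard is easy to demonstrate. The pandas sketch below (hypothetical file name, not the paper's experiments) counts exact duplicates and shows how they leak across a train/test split:

```python
# Duplicate rows that land on both sides of a split make training and
# testing data partially identical, inflating apparent performance.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("nasa_mdp_module_metrics.csv")   # hypothetical file
n_dups = df.duplicated().sum()
print(f"{n_dups} of {len(df)} rows are exact duplicates")

train, test = train_test_split(df, test_size=0.3, random_state=0)
# Test rows that also occur verbatim in the training set (join on all columns):
overlap = test.merge(train.drop_duplicates(), how="inner")
print(f"{len(overlap)} test rows are identical to some training row")
```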


Computational Optimization and Applications | 2006

Optimizing Preventive Maintenance Models

Michael Bartholomew-Biggs; Bruce Christianson; Ming J. Zuo

We deal with the problem of scheduling preventive maintenance (PM) for a system so that, over its operating life, we minimize a performance function which reflects repair and replacement costs as well as the costs of the PM itself. It is assumed that a hazard rate model is known which predicts the frequency of system failure as a function of age. It is also assumed that each PM produces a step reduction in the effective age of the system. We consider some variations and extensions of a PM scheduling approach proposed by Lin et al. [6]. In particular we consider numerical algorithms which may be more appropriate for hazard rate models which are less simple than those used in [6] and we introduce some constraints into the problem in order to avoid the possibility of spurious solutions. We also discuss the use of automatic differentiation (AD) as a convenient tool for computing the gradients and Hessians that are needed by numerical optimization methods. The main contribution of the paper is a new problem formulation which allows the optimal number of occurrences of PM to be determined along with their optimal timings. This formulation involves the global minimization of a non-smooth performance function. In our numerical tests this is done via the algorithm DIRECT proposed by Jones et al. [19]. We show results for a number of examples, involving different hazard rate models, to give an indication of how PM schedules can vary in response to changes in relative costs of maintenance, repair and replacement.
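A toy instance of this model class (assumed Weibull hazard, assumed costs, a fixed number of PMs, and a local optimizer standing in for DIRECT, so not the paper's computation) might look like:

```python
# Weibull hazard; each PM step-reduces the effective age by a fixed fraction;
# optimize the PM intervals for a fixed number of PMs over a finite horizon.
import numpy as np
from scipy.optimize import minimize

BETA, ETA = 2.5, 10.0                 # hazard h(t) = (BETA/ETA)*(t/ETA)**(BETA-1)
C_PM, C_REP, HORIZON, AGE_CUT = 1.0, 5.0, 20.0, 0.6   # assumed costs / age step

def cum_hazard(a0, dt):
    """Expected failures while effective age runs from a0 to a0 + dt."""
    H = lambda a: (a / ETA) ** BETA
    return H(a0 + dt) - H(a0)

def cost(gaps):
    """Total cost for PM intervals `gaps`; the last interval runs to the horizon."""
    gaps = np.abs(gaps)
    age, t, c = 0.0, 0.0, 0.0
    for g in gaps:
        c += C_REP * cum_hazard(age, g) + C_PM
        age = (age + g) * (1.0 - AGE_CUT)       # PM step-reduces effective age
        t += g
    c += C_REP * cum_hazard(age, max(HORIZON - t, 0.0))
    return c

res = minimize(cost, x0=np.full(3, HORIZON / 4), method="Nelder-Mead")
print("PM intervals:", np.abs(res.x), "cost:", res.fun)
```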


financial cryptography | 2010

Multichannel protocols to prevent relay attacks

Frank Stajano; Ford-Long Wong; Bruce Christianson

A number of security systems, from Chip-and-PIN payment cards to contactless subway and train tokens, as well as secure localization systems, are vulnerable to relay attacks. Encrypting the communication between the honest endpoints does not protect against such attacks. The main solution that has been offered to date is distance bounding, in which a tightly timed exchange of challenges and responses persuades the verifier that the prover cannot be further away than a certain distance. This solution, however, still won’t say whether the specific endpoint the verifier is talking to is the intended one or not—it will only tell the verifier whether the real prover is “nearby”. Are there any alternatives? We propose a more general paradigm based on multichannel protocols. Our class of protocols, of which distance bounding can be modelled as a special case, allows a precise answer to be given to the question of whether the unknown device in front of the potential victim is a relaying attacker or the device with which the victim intended to communicate. We discuss several instantiations of our solution and point out the extent to which all these countermeasures rely, often implicitly, on the alertness of an honest human taking part in the protocol.
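As a point of comparison, the distance-bounding special case can be simulated in a few lines (a toy model, not from the paper): the verifier bounds the round-trip time, so a relay's forwarding delay pushes the apparent distance past the acceptance threshold:

```python
# Toy distance-bounding check: a relay adds forwarding delay, which makes the
# round-trip time exceed the bound for the claimed proximity. As the abstract
# notes, passing the check only shows that *some* prover is nearby, not that
# it is the intended one.
SPEED_OF_LIGHT = 3e8                       # m/s
MAX_DISTANCE = 10.0                        # accept provers within 10 m
TIME_BOUND = 2 * MAX_DISTANCE / SPEED_OF_LIGHT

def round_trip_time(distance_m: float, relay_delay_s: float = 0.0) -> float:
    """Simulated RTT: two-way propagation at light speed plus relay overhead."""
    return 2 * distance_m / SPEED_OF_LIGHT + relay_delay_s

def verifier_accepts(distance_m: float, relay_delay_s: float = 0.0) -> bool:
    return round_trip_time(distance_m, relay_delay_s) <= TIME_BOUND

print(verifier_accepts(5.0))          # honest prover 5 m away: True
print(verifier_accepts(5.0, 1e-6))    # relayed, 1 us forwarding delay: False
```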


Microprocessors and Microsystems | 1997

A superscalar architecture to exploit instruction level parallelism

Gordon B. Steven; Bruce Christianson; Roger Collins; Richard D. Potter; Fleur L. Steven

If a high-performance superscalar processor is to realise its full potential, the compiler must re-order or schedule the object code at compile time. This scheduling creates groups of adjacent instructions that are independent and which therefore can be issued and executed in parallel at run time. This paper provides an overview of the Hatfield Superscalar Architecture (HSA), a multiple-instruction-issue architecture developed at the University of Hertfordshire to support the development of high-performance instruction schedulers. The long-term objective of the HSA project is to develop the scheduling technology to realise an order of magnitude performance improvement over traditional RISC designs. The paper also presents results from the first HSA instruction scheduler that currently achieves a speedup of over three compared to a classic RISC processor.
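To make the compile-time transformation concrete, here is an illustrative greedy scheduler (a sketch, not HSA's scheduler) that packs adjacent, pairwise-independent instructions into issue groups using register dependences:

```python
# Greedy grouping of adjacent instructions into issue packets whose members
# are pairwise independent, so each packet can issue in parallel.

def depends(a, b):
    """True if b must wait for a (RAW, WAR or WAW on registers)."""
    return (a["dst"] in b["src"]            # read-after-write
            or b["dst"] in a["src"]         # write-after-read
            or a["dst"] == b["dst"])        # write-after-write

def schedule(instrs, issue_width=4):
    """Pack instructions, in program order, into parallel issue groups."""
    groups = []
    for ins in instrs:
        if (groups and len(groups[-1]) < issue_width
                and not any(depends(g, ins) for g in groups[-1])):
            groups[-1].append(ins)
        else:
            groups.append([ins])
    return groups

code = [
    {"op": "add", "dst": "r1", "src": ["r2", "r3"]},
    {"op": "mul", "dst": "r4", "src": ["r5", "r6"]},   # independent of add
    {"op": "sub", "dst": "r7", "src": ["r1", "r4"]},   # needs r1 and r4
]
for i, g in enumerate(schedule(code)):
    print(f"cycle {i}: {[x['op'] for x in g]}")   # cycle 0: add, mul; cycle 1: sub
```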

Collaboration


Dive into Bruce Christianson's collaboration.

Top Co-Authors

James A. Malcolm
University of Hertfordshire

Matt Blaze
University of Pennsylvania

Michael Roe
University of Cambridge

Hannan Xiao
University of Hertfordshire