David Wallom
University of Oxford
Publication
Featured research published by David Wallom.
Environmental Research Letters | 2016
Daniel Mitchell; Clare Heaviside; Sotiris Vardoulakis; Chris Huntingford; Giacomo Masato; Benoit P. Guillod; Peter C. Frumhoff; Andy Bowery; David Wallom; Myles R. Allen
It has been argued that climate change is the biggest global health threat of the 21st century. The extreme high temperatures of the summer of 2003 were associated with up to seventy thousand excess deaths across Europe. Previous studies have attributed the meteorological event to the human influence on climate, or examined the role of heat waves on human health. Here, for the first time, we explicitly quantify the role of human activity on climate and heat-related mortality in an event attribution framework, analysing both the Europe-wide temperature response in 2003, and localised responses over London and Paris. Using publicly-donated computing, we perform many thousands of climate simulations of a high-resolution regional climate model. This allows generation of a comprehensive statistical description of the 2003 event and the role of human influence within it, using the results as input to a health impact assessment model of human mortality. We find large-scale dynamical modes of atmospheric variability remain largely unchanged under anthropogenic climate change, and hence the direct thermodynamical response is mainly responsible for the increased mortality. In summer 2003, anthropogenic climate change increased the risk of heat-related mortality in Central Paris by ~70% and by ~20% in London, which experienced lower extreme heat. Out of the estimated ~315 and ~735 summer deaths attributed to the heatwave event in Greater London and Central Paris, respectively, 64 (±3) deaths were attributable to anthropogenic climate change in London, and 506 (±51) in Paris. Such an ability to robustly attribute specific damages to anthropogenic drivers of increased extreme heat can inform societal responses to, and responsibilities for, climate change.
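The attribution step described above, from a changed probability of extreme heat to a number of attributable deaths, can be sketched with the standard attributable-fraction formula. This is an illustrative sketch with invented numbers, not the paper's health impact assessment model:

```python
# Hypothetical illustration (numbers invented, not the paper's data): the
# attributable fraction FAR = (p1 - p0) / p1 links the probability of an
# extreme-heat event with (p1) and without (p0) human influence to a share
# of heat-related deaths attributable to anthropogenic climate change.

def attributable_fraction(p1, p0):
    """FAR: fraction of event risk attributable to the anthropogenic signal."""
    return (p1 - p0) / p1

def attributable_deaths(total_heat_deaths, p1, p0):
    """Scale a death toll by the attributable fraction."""
    return total_heat_deaths * attributable_fraction(p1, p0)

# e.g. a risk raised by 70% (p1 = 1.7 * p0) applied to an invented toll
# of 100 heat-related deaths:
p0, p1 = 0.10, 0.17
print(round(attributable_deaths(100, p1, p0), 1))  # → 41.2
```

Note that a 70% risk increase does not mean 70% of deaths are attributable: FAR = 0.7/1.7 ≈ 0.41 here.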
IEEE International Conference on Cloud Computing Technology and Science | 2011
David Wallom; Matteo Turilli; Andrew Martin; Anbang Raun; Gareth Taylor; Nigel Hargreaves; Alan McMoran
Cloud Computing provides an optimal infrastructure for utilising and sharing both computational and data resources under a pay-per-use model, helping organisations manage hardware investment cost-effectively and maximise its utilisation. Cloud Computing also offers transitory access to scalable amounts of computational resources, which is particularly important given the time and financial constraints of many user communities. The growing number of communities adopting large public cloud resources such as Amazon Web Services [1] or Microsoft Azure [2] demonstrates the success, and hence the usefulness, of the Cloud Computing paradigm. Nonetheless, typical public cloud use cases involve non-business-critical applications, particularly where the security of applications and of data deposited within shared public services is a binding requirement. In this paper, a use case is presented illustrating how integrating Trusted Computing technologies into an available cloud infrastructure -- Eucalyptus -- allows the security-critical energy industry to exploit the flexibility and potential economic benefits of the Cloud Computing paradigm for its business-critical applications.
international conference on e science | 2006
David Spence; Neil Geddes; Jens Jensen; Andrew Richards; Matthew Viljoen; Andrew P. Martin; Matthew J. Dovey; Mark Norman; Kang Tang; Anne E. Trefethen; David Wallom; Rob Allan; David Meredith
This paper presents work undertaken to integrate the future UK national Shibboleth infrastructure with the UK's National Grid Service (NGS). Our work, ShibGrid, provides both transparent authentication for portal-based Grid access and a credential transformation service for users of other Grid access methods. The ShibGrid support for portal-based transparent Grid authentication is provided as a set of standards-based drop-in modules which can be used with any project portal, as well as in the NGS project in which they are initially deployed. The ShibGrid architecture requires no changes to the UK national Shibboleth authentication infrastructure or the NGS security infrastructure, and provides access for users both with and without UK e-Science certificates. In addition to presenting the architecture of ShibGrid and its implementation, we place the ShibGrid project within the context of other efforts to integrate Shibboleth with Grids.
Proceedings of the WICSA/ECSA 2012 Companion Volume | 2012
David Wallom; Matteo Turilli; Andrew J. Martin; Anbang Raun; Gareth A. Taylor; Nigel Hargreaves; Alan McMoran
The Journal of Supercomputing | 2014
Xiaoyu Yang; David Wallom; Simon Waddington; Jianwu Wang; Arif Shaon; Brian Matthews; Michael D. Wilson; Yike Guo; Li Guo; Jon Blower; Athanasios V. Vasilakos; Kecheng Liu; Philip Kershaw
Service-oriented architecture (SOA), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents some challenges. Scientific research is increasingly finding it difficult to handle "big data" using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how the above-mentioned informatics techniques can be used to develop appropriate e-Science infrastructure and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our particular contributions include identifying associated research challenges and opportunities, presenting lessons learned, and describing our future vision for applying Cloud computing to e-Science. We believe our findings can help indicate future trends in e-Science and can inform funding and research directions on how to employ computing technologies in scientific research more appropriately. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.
Information Assurance and Security | 2009
Xiao Dong Wang; M. Jones; Jens Jensen; Andrew Richards; David Wallom; Tiejun Ma; Robert Frank; David J. Spence; Steven Young; Claire Devereux; Neil Geddes
The National Grid Service (NGS) provides access to compute and data resources for UK academics. Currently, users are required to have an X.509 certificate from the UK e-Science Certification Authority (CA), or one of its international peers, to access the NGS. The CA must satisfy the requirements of internationally agreed assurance levels, and some users find the processes of obtaining and managing certificates difficult. Shibboleth, an implementation of federated identity-based authentication, has been widely deployed in academic environments in the UK. The SARoNGS project was proposed to integrate the Shibboleth and X.509 based infrastructures and to deliver a production-level service for accessing the NGS in a user-friendly way. This paper describes an architecture by which users are authenticated by the UK Access Management Federation to acquire low-assurance credentials for accessing Grid resources on the NGS. Users can log in to NGS resources via the NGS Portal, using their local institution's authentication system.
IEEE Transactions on Power Systems | 2015
Ramón Granell; Colin J. Axon; David Wallom
There is growing interest in discerning behaviors of electricity users in both the residential and commercial sectors. With the advent of high-resolution time-series power demand data through advanced metering, mining these data can be computationally costly. One of the most popular techniques is clustering but, depending on the algorithm, the resolution of the data can strongly influence the resulting clusters. This paper shows how the temporal resolution of power demand profiles affects the quality of the clustering process, the consistency of cluster membership (profiles exhibiting similar behavior), and the efficiency of the clustering process. This work uses both raw household consumption data and synthetic profiles. The motivation for this work is to improve the clustering of electricity load profiles to help distinguish user types for tariff design and switching, fault and fraud detection, demand-side management, and energy-efficiency measures. The key criterion for mining very large data sets is how little information is needed to obtain a reliable result while maintaining privacy and security.
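The resolution effect studied above can be illustrated with a minimal sketch (pure Python; the centroids, profile, and nearest-centroid assignment rule are invented for illustration and are not the paper's data or algorithm): consecutive readings are averaged to lower the temporal resolution before a profile is assigned to a cluster.

```python
# Hypothetical sketch: downsample a half-hourly demand profile by
# averaging, then compare cluster assignments at each resolution
# using a simple nearest-centroid rule.

def downsample(profile, factor):
    """Average each run of `factor` consecutive readings (coarser resolution)."""
    return [sum(profile[i:i + factor]) / factor
            for i in range(0, len(profile), factor)]

def nearest_centroid(profile, centroids):
    """Index of the closest centroid by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda k: dist(profile, centroids[k]))

# Two invented centroids over 4 half-hours: "morning peak" vs "evening peak".
centroids = [[3.0, 3.0, 1.0, 1.0], [1.0, 1.0, 3.0, 3.0]]
profile = [2.8, 3.1, 0.9, 1.2]                    # a morning-peaked household
print(nearest_centroid(profile, centroids))       # assignment at 30-min resolution
coarse = [downsample(c, 2) for c in centroids]
print(nearest_centroid(downsample(profile, 2), coarse))  # assignment at 60-min resolution
```

Here the assignment happens to survive coarsening, but profiles whose peaks differ only within an averaging window collapse onto the same coarse centroid, which is the membership-consistency effect the paper examines.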
Geophysical Research Letters | 2016
Peter Uhe; Friederike E. L. Otto; Karsten Haustein; G. J. van Oldenborgh; Andrew D. King; David Wallom; Myles R. Allen; Heidi Cullen
The year 2014 broke the record for the warmest yearly average temperature in Europe. Attributing how much this was due to anthropogenic climate change and how much it was due to natural variability is a challenging question but one that is important to address. In this study, we compare four event attribution methods. We look at the risk ratio (RR) associated with anthropogenic climate change for this event, over the whole European region, as well as its spatial distribution. Each method shows a very strong anthropogenic influence on the event over Europe. However, the magnitude of the RR strongly depends on the definition of the event and the method used. Across Europe, attribution over larger regions tended to give greater RR values. This highlights a major source of sensitivity in attribution statements and the need to define the event to analyze on a case-by-case basis.
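The risk ratio at the heart of such attribution statements can be sketched as follows; the ensembles, threshold, and values below are invented for illustration and are not the study's data:

```python
# Illustrative sketch: the risk ratio RR = p1 / p0 compares the probability
# of exceeding an event threshold in a "factual" ensemble (with anthropogenic
# forcing) against a "counterfactual" natural-only ensemble.

def exceedance_prob(values, threshold):
    """Fraction of ensemble members in which the event threshold is exceeded."""
    return sum(v > threshold for v in values) / len(values)

def risk_ratio(factual, counterfactual, threshold):
    """RR associated with anthropogenic forcing for the given event threshold."""
    p1 = exceedance_prob(factual, threshold)
    p0 = exceedance_prob(counterfactual, threshold)
    return p1 / p0 if p0 > 0 else float("inf")

# Tiny invented ensembles of European-mean temperature anomalies (degC):
factual = [1.2, 0.8, 1.5, 0.9, 1.1]
counterfactual = [0.7, 0.9, 1.1, 0.6, 0.8]
print(round(risk_ratio(factual, counterfactual, 1.0), 2))  # → 3.0
```

As the abstract notes, RR is sensitive to the event definition: changing the threshold or the spatial region over which the anomaly is averaged changes both p1 and p0, and hence the attribution statement.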
IEEE PES International Conference and Exhibition on Innovative Smart Grid Technologies | 2011
Gareth A. Taylor; David Wallom; S. Grenard; Angel Yunta Huete
Information and data communications technology will be crucial to the operation of future electricity distribution networks. This will be driven mainly by the need to process and analyze increasing volumes of data produced by smart meters from residential and commercial customers, sensors monitoring the condition of network assets, distributed generation, and responsive loads. Complexity is further introduced because these diverse data streams will be gathered at different rates and analyzed for different purposes, such as near-to-real-time system state estimation and life-cycle condition monitoring analysis. However, the nature of active networks dictates that all relevant information will need to be exploited within the same operational framework. We are developing novel Information and Communications Technology (ICT) and high-performance computing tools and techniques to enable near-to-real-time state estimation across large-scale distribution networks, whilst concurrently supporting, on the same computational infrastructure, condition monitoring of network assets and advanced network restoration solutions. These platforms are promoting and supporting the emergence of new distribution network management systems, with inherent security and intelligent communications, for smart distribution network operation and management. We propose cost-effective, scalable ICT solutions and an initial investigation of realistic distribution network data traffic and management scenarios involving state estimation. Furthermore, we review the prospects for off-line trials of our proposed solutions in three different countries.
IEEE International Conference on High Performance Computing Data and Analytics | 2011
Stefano Salvini; Piotr Lopatka; David Wallom
The HiPerDNO project aims to develop new applications to enhance the operational capabilities of Distribution Network Operators (DNOs). Their delivery requires an advanced computational strategy. This paper describes a High Performance Computing (HPC) platform developed for these applications, flexible enough also to accommodate new ones emerging from the gradual introduction of smart metering in Low Voltage (LV) networks (AMI: Advanced Metering Infrastructure). Security and reliability requirements for both data and computations are very stringent. Our proposed architecture would allow the deployment of computations and data access as services, thus achieving independence from the actual hardware and software technologies deployed, and hardening against malicious as well as accidental corruption. Cost containment and reliance on proven technologies are also of paramount importance to DNOs. We suggest an architecture that fulfills these needs, whose HPC and data storage systems include the following components: the Hadoop Distributed File System, a federation of loosely coupled computational clusters, and the PELICAN computational application framework.