James Southern
Fujitsu
Publications
Featured research published by James Southern.
PLOS Computational Biology | 2013
Gary R. Mirams; Christopher J. Arthurs; Miguel O. Bernabeu; Rafel Bordas; Jonathan Cooper; Alberto Corrias; Yohan Davit; Sara-Jane Dunn; Alexander G. Fletcher; Daniel G. Harvey; Megan E. Marsh; James M. Osborne; Pras Pathmanathan; Joe Pitt-Francis; James Southern; Nejib Zemzemi; David J. Gavaghan
Chaste — Cancer, Heart And Soft Tissue Environment — is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to ‘re-invent the wheel’ with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials.
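The abstract credits much of Chaste's reliability to test-driven development. The fragment below is a minimal, generic sketch of that workflow applied to a simple ODE-stepping component; it deliberately does not use Chaste's actual classes or test framework, and forward_euler_step is a hypothetical helper introduced only for illustration.

#include <cassert>
#include <cmath>
#include <functional>

// Hypothetical component under test: one forward-Euler step for dy/dt = f(t, y).
// (Illustrative only -- not part of Chaste's API.)
double forward_euler_step(const std::function<double(double, double)>& f,
                          double t, double y, double dt)
{
    return y + dt * f(t, y);
}

// The test is written first, in the spirit of test-driven development:
// for dy/dt = -y with y(0) = 1, a single step of size dt should return 1 - dt.
void TestForwardEulerOnExponentialDecay()
{
    const double dt = 0.01;
    const double y1 = forward_euler_step([](double, double y) { return -y; },
                                         0.0, 1.0, dt);
    assert(std::fabs(y1 - (1.0 - dt)) < 1e-12);
}

int main()
{
    TestForwardEulerOnExponentialDecay();
    return 0;
}

In a test-driven workflow the assertion is written before the solver code, and every such test is re-run on each change, which is what keeps a large scientific library re-usable and reliable.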
Progress in Biophysics & Molecular Biology | 2008
James Southern; Joe Pitt-Francis; Jonathan P. Whiteley; Daniel Stokeley; Hiromichi Kobashi; Ross Nobes; Yoshimasa Kadooka; David J. Gavaghan
Recent advances in biotechnology and the availability of ever more powerful computers have led to the formulation of increasingly complex models at all levels of biology. One of the main aims of systems biology is to couple these together to produce integrated models across multiple spatial scales and physical processes. In this review, we formulate a definition of multi-scale in terms of levels of biological organisation and describe the types of model that are found at each level. Key issues that arise in trying to formulate and solve multi-scale and multi-physics models are considered and examples of how these issues have been addressed are given for two of the more mature fields in computational biology: the molecular dynamics of ion channels and cardiac modelling. As even more complex models are developed over the coming few years, it will be necessary to develop new methods to model them (in particular in coupling across the interface between stochastic and deterministic processes) and new techniques will be required to compute their solutions efficiently on massively parallel computers. We outline how we envisage these developments occurring.
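As a toy illustration of the stochastic-deterministic coupling highlighted at the end of this review, the sketch below drives a deterministic ODE with a two-state Markov gate that is updated stochastically at each time step. The model, rate constants and variable names are invented purely for illustration and are not taken from the review.

#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    const double dt = 1e-3;       // time step (arbitrary units)
    const double k_open = 5.0;    // closed -> open rate (invented)
    const double k_close = 2.0;   // open -> closed rate (invented)
    double v = 0.0;               // deterministic state variable
    int gate_open = 0;            // stochastic two-state gate

    for (int step = 0; step < 10000; ++step) {
        // Stochastic layer: flip the gate with probability rate * dt.
        const double p_switch = (gate_open ? k_close : k_open) * dt;
        if (unif(rng) < p_switch) gate_open = 1 - gate_open;

        // Deterministic layer: forward-Euler step for dv/dt = gate - v.
        v += dt * (static_cast<double>(gate_open) - v);
    }
    std::printf("final v = %f\n", v);
    return 0;
}

The interface between the two layers is the single line that samples the gate transition; handling that interface consistently, and at vastly different scales, is the coupling problem the review describes.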
Philosophical Transactions of the Royal Society A | 2009
Rafel Bordas; Bruno Carpentieri; Giorgio Fotia; Fabio Maggio; Ross Nobes; Joe Pitt-Francis; James Southern
Models of cardiac electrophysiology consist of a system of partial differential equations (PDEs) coupled with a system of ordinary differential equations representing cell membrane dynamics. Current software to solve such models does not provide the required computational speed for practical applications. One reason for this is that little use is made of recent developments in adaptive numerical algorithms for solving systems of PDEs. Studies have suggested that a speedup of up to two orders of magnitude is possible by using adaptive methods. The challenge lies in the efficient implementation of adaptive algorithms on massively parallel computers. The finite-element (FE) method is often used in heart simulators as it can encapsulate the complex geometry and small-scale details of the human heart. An alternative is the spectral element (SE) method, a high-order technique that provides the flexibility and accuracy of FE, but with a reduced number of degrees of freedom. The feasibility of implementing a parallel SE algorithm based on fully unstructured all-hexahedra meshes is discussed. A major computational task is solution of the large algebraic system resulting from FE or SE discretization. Choice of linear solver and preconditioner has a substantial effect on efficiency. A fully parallel implementation based on dynamic partitioning that accounts for load balance, communication and data movement costs is required. Each of these methods must be implemented on next-generation supercomputers in order to realize the necessary speedup. The problems that this may cause, and some of the techniques that are beginning to be developed to overcome these issues, are described.
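For reference, the coupled PDE/ODE system referred to here is usually written in the standard bidomain form (conventional notation, not copied from the paper): the transmembrane potential $V_m$, the extracellular potential $\phi_e$ and the vector of cell-state variables $\mathbf{u}$ satisfy

\begin{align}
\chi\left(C_m\frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m,\mathbf{u})\right)
  &= \nabla\cdot\bigl(\sigma_i\,\nabla(V_m+\phi_e)\bigr),\\
0 &= \nabla\cdot\bigl(\sigma_i\,\nabla V_m + (\sigma_i+\sigma_e)\,\nabla\phi_e\bigr),\\
\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t} &= \mathbf{f}(V_m,\mathbf{u}),
\end{align}

where $\sigma_i$ and $\sigma_e$ are the intra- and extracellular conductivity tensors, $\chi$ is the surface-to-volume ratio and $C_m$ is the membrane capacitance per unit area. The stiff, nonlinear ODE system for $\mathbf{u}$ at every mesh node is what makes the fine meshes and small time steps discussed above so expensive.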
Philosophical Transactions of the Royal Society A | 2009
Miguel O. Bernabeu; Rafel Bordas; Pras Pathmanathan; Joe Pitt-Francis; Jonathan Cooper; Alan Garny; David J. Gavaghan; Blanca Rodriguez; James Southern; Jonathan P. Whiteley
Recent work has described the software engineering and computational infrastructure that has been set up as part of the Cancer, Heart and Soft Tissue Environment (Chaste) project. Chaste is an open source software package that currently has heart and cancer modelling functionality. This software has been written using a programming paradigm imported from the commercial sector and has resulted in a code that has been subject to a far more rigorous testing procedure than is usual in this field. In this paper, we explain how new functionality may be incorporated into Chaste. Whiteley has developed a numerical algorithm for solving the bidomain equations that uses the multi-scale (MS) nature of the physiology modelled to enhance computational efficiency. Using a simple geometry in two dimensions and a purpose-built code, this algorithm was reported to give an increase in computational efficiency of more than two orders of magnitude. In this paper, we begin by reviewing numerical methods currently in use for solving the bidomain equations, explaining how these methods may be developed to use the MS algorithm discussed above. We then demonstrate the use of this algorithm within the Chaste framework for solving the monodomain and bidomain equations in a three-dimensional realistic heart geometry. Finally, we discuss how Chaste may be developed to include new physiological functionality—such as modelling a beating heart and fluid flow in the heart—and how new algorithms aimed at increasing the efficiency of the code may be incorporated.
Journal of Computational Science | 2012
James Southern; Gerard J. Gorman; Matthew D. Piggott; Patrick E. Farrell
Simulations in cardiac electrophysiology generally use very fine meshes and small time steps to resolve highly localized wavefronts. This expense motivates the use of mesh adaptivity, which has been demonstrated to reduce the overall computational load. However, even with mesh adaptivity, performing such simulations on a single processor is infeasible. Therefore, the adaptivity algorithm must be parallelised. Rather than modifying the sequential adaptive algorithm, the parallel mesh adaptivity method introduced in this paper focuses on dynamic load balancing in response to the local refinement and coarsening of the mesh. In essence, the mesh partition boundary is perturbed away from mesh regions of high relative error, while also balancing the computational load across processes. The parallel scaling of the method when applied to physiologically realistic heart meshes is shown to be good as long as there are enough mesh nodes to distribute over the available parallel processes. It is shown that the new method is dominated by the cost of the sequential adaptive mesh procedure and that the parallel overhead of inter-process data migration represents only a small fraction of the overall cost.
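As a toy illustration of error-weighted load balancing (not the partition-perturbation algorithm of the paper itself), the sketch below assigns each element a weight derived from a local error indicator and splits a one-dimensional sequence of elements into contiguous partitions of roughly equal total weight; all names are invented for illustration.

#include <cstdio>
#include <vector>

// Split elements (each carrying a weight, e.g. local error times expected cost)
// into nparts contiguous chunks of roughly equal total weight.
std::vector<int> partition_by_weight(const std::vector<double>& w, int nparts)
{
    double total = 0.0;
    for (double wi : w) total += wi;

    std::vector<int> part(w.size());
    const double target = total / nparts;
    double acc = 0.0;
    int p = 0;
    for (std::size_t i = 0; i < w.size(); ++i) {
        if (acc >= target * (p + 1) && p + 1 < nparts) ++p;
        part[i] = p;
        acc += w[i];
    }
    return part;
}

int main()
{
    // Elements near a sharp wavefront carry more weight (finer local mesh).
    const std::vector<double> weight = {1, 1, 1, 8, 9, 10, 9, 8, 1, 1, 1, 1};
    const std::vector<int> part = partition_by_weight(weight, 3);
    for (std::size_t i = 0; i < part.size(); ++i)
        std::printf("element %zu -> process %d\n", i, part[i]);
    return 0;
}

In the paper's setting the same balancing decision is made on a distributed unstructured mesh, where moving an element between processes has a real data-migration cost, which is why the partition boundary is only perturbed rather than recomputed from scratch.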
International Parallel and Distributed Processing Symposium | 2014
Md. Mohsin Ali; James Southern; Peter E. Strazdins; Brendan Harding
A fault-tolerant version of Open Message Passing Interface (Open MPI), based on the draft User Level Failure Mitigation (ULFM) proposal of the MPI Forum's Fault Tolerance Working Group, is used to create fault-tolerant applications. This allows applications and libraries to design their own recovery methods and control them at the user level. However, only a limited amount of research on user level failure recovery (including the implementation and performance evaluation of this prototype) has been carried out. This paper contributes a fault-tolerant implementation of an application solving 2D partial differential equations (PDEs) by means of a sparse grid combination technique which is capable of surviving multiple process failures. Our fault recovery involves reconstructing the faulty communicators without shrinking the global size by re-spawning failed MPI processes on the same physical processors where they were before the failure (for load balancing). It also involves restoring lost data from either exact checkpointed data on disk, approximated data in memory (via an alternate sparse grid combination technique) or a near-exact copy of replicated data in memory. The experimental results show that the faulty communicator reconstruction time is currently large in the draft ULFM, especially for multiple process failures. They also show that the alternate combination technique has the lowest data recovery overhead, except on a system with very low disk write latency for which checkpointing has the lowest overhead. Furthermore, the errors due to the recovery of approximated data are within a factor of 10 in all cases, with the surprising result that the alternate combination technique is more accurate than the near-exact replication method. The implementation details and the analysis of the experimental results contributed in this paper will help application developers to resolve design and implementation issues for fault-tolerant applications based on the Open MPI ULFM interface.
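The sketch below shows the general shape of the ULFM-style recovery described above: detect a failed collective, revoke and shrink the communicator, then re-spawn replacements so that the global size is restored. It assumes the communicator's error handler has been set to MPI_ERRORS_RETURN; the MPIX_* names follow the draft ULFM proposal (declared in mpi-ext.h in the ULFM Open MPI prototype), and the worker binary ./solver is a hypothetical placeholder. This is a sketch of the pattern, not the paper's implementation.

#include <mpi.h>
#include <mpi-ext.h>   // MPIX_* extensions from the ULFM Open MPI prototype (assumed available)

// Rebuild a communicator of the original size after a process failure by
// shrinking out the dead ranks and re-spawning replacements.
MPI_Comm recover_world(MPI_Comm comm, int original_size)
{
    // A collective acts as the failure detector (error handler: MPI_ERRORS_RETURN).
    if (MPI_Barrier(comm) == MPI_SUCCESS) return comm;

    MPIX_Comm_revoke(comm);              // ensure every survivor leaves the collective

    MPI_Comm shrunk;
    MPIX_Comm_shrink(comm, &shrunk);     // communicator containing survivors only

    int nsurvivors;
    MPI_Comm_size(shrunk, &nsurvivors);
    const int nlost = original_size - nsurvivors;

    // Re-spawn the lost processes (hypothetical worker binary "./solver").
    MPI_Comm intercomm, newworld;
    MPI_Comm_spawn("./solver", MPI_ARGV_NULL, nlost, MPI_INFO_NULL,
                   0, shrunk, &intercomm, MPI_ERRCODES_IGNORE);
    MPI_Intercomm_merge(intercomm, 0, &newworld);

    MPI_Comm_free(&intercomm);
    MPI_Comm_free(&shrunk);
    return newworld;                     // caller then restores the lost data
}

The re-spawned processes would obtain the inter-communicator via MPI_Comm_get_parent and perform the matching MPI_Intercomm_merge; pinning them to the original physical processors and restoring their data (from checkpoints, replicas or the alternate combination technique) is handled separately, as described in the abstract.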
International Conference on Computational Science | 2012
Gerard J. Gorman; James Southern; Patrick E. Farrell; Matthew D. Piggott; Georgios Rokos; Paul H. J. Kelly
Mesh smoothing is an important algorithm for the improvement of element quality in unstructured mesh finite element methods. A new optimisation-based mesh smoothing algorithm is presented for anisotropic mesh adaptivity. It is shown that this smoothing kernel is very effective at raising the minimum local quality of the mesh. A number of strategies are employed to reduce the algorithm's cost while maintaining its effectiveness in improving overall mesh quality. The method is parallelised using hybrid OpenMP/MPI programming methods, and graph colouring to identify independent sets. Different approaches are explored to achieve good scaling performance within a shared memory compute node.
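A minimal sketch of the graph-colouring step mentioned above: vertices of the mesh's node adjacency graph are coloured greedily so that each colour class is an independent set, and all vertices of one colour can then be smoothed concurrently without data races. This is a generic greedy algorithm, not the specific colouring scheme used in the paper.

#include <cstdio>
#include <vector>

// Greedy colouring of an adjacency-list graph: adjacent vertices never share a
// colour, so each colour class is an independent set.
std::vector<int> greedy_colour(const std::vector<std::vector<int>>& adj)
{
    const int n = static_cast<int>(adj.size());
    std::vector<int> colour(n, -1);
    for (int v = 0; v < n; ++v) {
        std::vector<bool> used(n, false);
        for (int u : adj[v])
            if (colour[u] >= 0) used[colour[u]] = true;
        int c = 0;
        while (used[c]) ++c;
        colour[v] = c;
    }
    return colour;
}

int main()
{
    // Small mesh-like graph: a square of four nodes plus one diagonal.
    const std::vector<std::vector<int>> adj = {{1, 2, 3}, {0, 2}, {0, 1, 3}, {0, 2}};
    const std::vector<int> colour = greedy_colour(adj);
    for (std::size_t v = 0; v < colour.size(); ++v)
        std::printf("node %zu -> colour %d (smoothed in sweep %d)\n",
                    v, colour[v], colour[v]);
    return 0;
}

In the hybrid OpenMP/MPI setting each colour corresponds to one parallel sweep: threads smooth every vertex of the current colour simultaneously, then synchronise before moving to the next colour.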
IEEE Transactions on Biomedical Engineering | 2009
James Southern; Gernot Plank; Edward J. Vigmond; Jonathan P. Whiteley
The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level, the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time while still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study, the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counterintuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks, it is shown that the coupled method is up to 80% faster than the conventional uncoupled method, and that parallel performance is better for the larger coupled problem.
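To make the comparison concrete: under one standard semi-implicit finite-element discretisation (conventional notation, not taken from the paper, with mass matrix $M$ and stiffness matrices $K_i$, $K_e$ assembled from the conductivities $\sigma_i$, $\sigma_e$), the uncoupled approach advances the solution by solving the elliptic and parabolic parts in sequence,

\begin{align}
(K_i + K_e)\,\boldsymbol{\phi}_e^{\,n} &= -K_i\,\mathbf{V}^{\,n},\\
\left(\frac{\chi C_m}{\Delta t}M + K_i\right)\mathbf{V}^{\,n+1}
  &= \frac{\chi C_m}{\Delta t}M\,\mathbf{V}^{\,n}
     - \chi M\,\mathbf{I}_{\mathrm{ion}}^{\,n}
     - K_i\,\boldsymbol{\phi}_e^{\,n},
\end{align}

whereas the coupled approach solves the single block system

\begin{equation}
\begin{pmatrix}
\dfrac{\chi C_m}{\Delta t}M + K_i & K_i\\[4pt]
K_i & K_i + K_e
\end{pmatrix}
\begin{pmatrix}\mathbf{V}^{\,n+1}\\ \boldsymbol{\phi}_e^{\,n+1}\end{pmatrix}
=
\begin{pmatrix}\dfrac{\chi C_m}{\Delta t}M\,\mathbf{V}^{\,n} - \chi M\,\mathbf{I}_{\mathrm{ion}}^{\,n}\\ \mathbf{0}\end{pmatrix}
\end{equation}

at each time step. The block system has roughly twice as many unknowns, yet the paper finds that solving it directly can be up to 80% faster, and scale better in parallel, than performing the two smaller solves.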
International Conference on Computational Science | 2013
Jay Walter Larson; Markus Hegland; Brendan Harding; Stephen Roberts; Linda Stals; Alistair P. Rendell; Peter E. Strazdins; Md. Mohsin Ali; Christoph Kowitz; Ross Nobes; James Southern; Nicholas Wilson; Michael Li; Yasuyuki Oishi
A key issue confronting petascale and exascale computing is the growth in probability of soft and hard faults with increasing system size. A promising approach to this problem is the use of algorithms that are inherently fault tolerant. We introduce such an algorithm for the solution of partial differential equations, based on the sparse grid approach. Here, the solutions on multiple component grids are efficiently combined to achieve a solution on a full grid. The technique also lends itself to a (modified) MapReduce framework on a cluster of processors, with the map stage corresponding to allocating each component grid for solution over a subset of the processors, and the reduce stage corresponding to their combination. We describe how the sparse grid combination method can be modified to robustly solve partial differential equations in the presence of faults. This is based on a modified combination formula that can accommodate the loss of one or two component grids. We also discuss accuracy issues associated with this formula. We give details of a prototype implementation within a MapReduce framework using the dynamic process features and asynchronous message passing facilities of MPI. Results on a two-dimensional advection problem show that the errors after the loss of one or two sub-grids are within a factor of 3 of the fault-free sparse grid solution. They also indicate that the sparse grid technique with four times the resolution has approximately the same error as a full grid, while having (for a sufficiently high resolution) much lower computation and memory requirements. We finally outline a MapReduce variant capable of responding to faults in ways other than re-scheduling of failed tasks. We discuss the likely software requirements for such a flexible MapReduce framework, the requirements it will impose on users' legacy codes, and the system's runtime behavior.
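For concreteness, the classical two-dimensional combination technique referred to above combines approximations $u_{i,j}$ computed on anisotropic component grids of roughly $2^i \times 2^j$ cells (standard notation, not taken from the paper) as

\begin{equation}
u^{c}_{n} \;=\; \sum_{i+j=n} u_{i,j} \;-\; \sum_{i+j=n-1} u_{i,j},
\end{equation}

which approximates the solution on the full $2^n \times 2^n$ grid at a small fraction of its cost. The fault-tolerant variant described in the paper replaces this fixed formula with modified combination coefficients when one or two of the component solutions $u_{i,j}$ are lost to a failure, so the remaining grids can still be combined into a usable solution.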
SIAM Journal on Scientific Computing | 2015
Brendan Harding; Markus Hegland; Jay Walter Larson; James Southern
This paper continues to develop a fault tolerant extension of the sparse grid combination technique recently proposed in [B. Harding and M. Hegland, ANZIAM J. Electron. Suppl., 54 (2013), pp. C394--C411]. This approach to fault tolerance is novel for two reasons: First, the combination technique adds an additional level of parallelism, and second, it provides algorithm-based fault tolerance so that solutions can still be recovered if failures occur during computation. Previous work indicates how the combination technique may be adapted for a low number of faults. In this paper we develop a generalization of the combination technique for which arbitrary collections of coarse approximations may be combined to obtain an accurate approximation. A general fault tolerant combination technique for large numbers of faults is a natural consequence of this work. Using a renewal model for the time between faults on each node of a high performance computer, we also provide bounds on the expected error for interpolati...