Derek Groen
University College London
Publications
Featured research published by Derek Groen.
Advanced Materials | 2015
James L. Suter; Derek Groen; Peter V. Coveney
The controlled behavior of layered materials in a polymer matrix is centrally important for many engineering and manufacturing applications. A quantitative description is presented of the dynamical process of polymer intercalation into clay tactoids and the ensuing aggregation of polymer-entangled tactoids into larger structures, obtaining various characteristics of these nanocomposites, including clay-layer spacings, out-of-plane clay-sheet bending energies, X-ray diffractograms, and materials properties. This model of clay–polymer interactions is based on a three-level approach, which uses quantum mechanical and atomistic descriptions to derive a coarse-grained yet chemically specific representation that can resolve processes on hitherto inaccessible length and time scales. The approach is applied to study collections of clay mineral tactoids interacting with two synthetic polymers, poly(ethylene glycol) and poly(vinyl alcohol). This approach opens up a route to computing the properties of complex soft materials based on knowledge of their chemical composition, molecular structure, and processing conditions.
Computing in Science and Engineering | 2014
Derek Groen; Stefan J. Zasada; Peter V. Coveney
Multiscale and multiphysics applications are now commonplace, and many researchers focus on combining existing models to construct new multiscale models. This concise review of multiscale applications and their source communities in the EU and US outlines differences and commonalities among approaches and identifies areas in which collaboration between disciplines could be particularly beneficial. Different communities adopt very different approaches to constructing multiscale simulations, yet simulations on a length scale of a few meters and a time scale of a few hours arise in many multiscale research domains; communities could therefore benefit from sharing methods geared towards these scales. The Web extra is the full literature list mentioned in the article.
Physical Review E | 2014
Rupert W. Nash; Hywel B. Carver; Miguel O. Bernabeu; James Hetherington; Derek Groen; Timm Krueger; Peter V. Coveney
Modeling blood flow in larger vessels using lattice-Boltzmann methods comes with a challenging set of constraints: a complex geometry with walls, inlets and outlets at arbitrary orientations with respect to the lattice, intermediate Reynolds (Re) numbers, and unsteady flow. Simple bounce-back is one of the simplest, most computationally efficient, and most commonly used boundary conditions, but many others have been proposed. We implement three other methods applicable to complex geometries [Guo, Zheng, and Shi, Phys. Fluids 14, 2007 (2002); Bouzidi, Firdaouss, and Lallemand, Phys. Fluids 13, 3452 (2001); Junk and Yang, Phys. Rev. E 72, 066701 (2005)] in our open-source application HemeLB. We use these to simulate Poiseuille and Womersley flows in a cylindrical pipe with an arbitrary orientation at physiologically relevant Re (1–300) and Womersley (4–12) numbers, and steady flow in a curved pipe at relevant Dean numbers (100–200), and compare the accuracy to analytical solutions. We find that both the Bouzidi-Firdaouss-Lallemand (BFL) and Guo-Zheng-Shi (GZS) methods give second-order convergence in space, while simple bounce-back degrades to first order. The BFL method appears to perform better than GZS in unsteady flows and is significantly less computationally expensive. The Junk-Yang method shows poor stability at larger Re and so cannot be recommended here. The choice of collision operator (lattice Bhatnagar-Gross-Krook vs multiple relaxation time) and velocity set (D3Q15 vs D3Q19 vs D3Q27) does not significantly affect the accuracy in the problems studied.
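To make the baseline concrete: in simple bounce-back, a population that would stream into a solid node is instead reversed, which places the effective wall between fluid and solid nodes. The following minimal NumPy sketch drives a 2-D channel flow with full-way bounce-back walls; it is illustrative only (D2Q9 rather than the 3-D lattices named above, and not HemeLB code), with grid size, relaxation time and forcing chosen arbitrarily.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite-direction index
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

nx, ny, tau = 64, 33, 0.8                      # illustrative lattice and relaxation time
f = np.ones((9, nx, ny)) * w[:, None, None]    # start at rest, density 1
solid = np.zeros((nx, ny), bool)
solid[:, 0] = solid[:, -1] = True              # channel walls at top and bottom

g = 1e-6  # small body force standing in for a pressure gradient

for step in range(2000):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision with a simple first-order forcing term
    for i in range(9):
        cu = c[i, 0]*ux + c[i, 1]*uy
        feq = w[i]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))
        f[i] += -(f[i] - feq)/tau + 3*w[i]*c[i, 0]*g*rho
    # streaming (periodic in x; walls handle y)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], 0), c[i, 1], 1)
    # full-way bounce-back: reverse all populations sitting on solid nodes,
    # so they stream back into the fluid on the next step
    f[:, solid] = f[opp][:, solid]

print("peak velocity (final step):", ux.max())  # parabolic profile across the channel
```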
Interface Focus | 2013
Derek Groen; Joris Borgdorff; Carles Bona-Casas; James Hetherington; Rupert W. Nash; Stefan J. Zasada; Ilya Saverchenko; Mariusz Mamonski; Krzysztof Kurowski; Miguel O. Bernabeu; Alfons G. Hoekstra; Peter V. Coveney
Multiscale simulations are essential in the biomedical domain to accurately model human physiology. We present a modular approach for designing, constructing and executing multiscale simulations on a wide range of resources, from laptops to petascale supercomputers, including combinations of these. Our work features two multiscale applications, in-stent restenosis and cerebrovascular blood flow, which combine multiple existing single-scale applications to create a multiscale simulation. These applications can be efficiently coupled, deployed and executed on computers up to the largest (peta) scale, incurring a coupling overhead of 1–10% of the total execution time.
Journal of Computational Science | 2014
Joris Borgdorff; Mariusz Mamonski; Bartosz Bosak; Krzysztof Kurowski; M. Ben Belgacem; Bastien Chopard; Derek Groen; Peter V. Coveney; Alfons G. Hoekstra
We present the Multiscale Coupling Library and Environment: MUSCLE 2. This multiscale component-based execution environment has a simple-to-use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes. We demonstrate its local and distributed computing capabilities and compare its performance to MUSCLE 1, file copy, MPI, MPWide, and GridFTP. The local throughput of MPI is about two times higher, so very tightly coupled code should use MPI as a single submodel of MUSCLE 2; the distributed performance of GridFTP is lower, especially for small messages. We test the performance of a canal system model with MUSCLE 2, where it introduces an overhead as small as 5% compared to MPI.
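The submodel-plus-conduit pattern the abstract describes can be sketched generically. The Python example below is not the MUSCLE 2 API; the process layout, the `Pipe` conduit, and the toy models are illustrative stand-ins for two coupled submodels exchanging boundary data once per coupling step.

```python
# Schematic of component-based multiscale coupling: a macro-scale submodel
# and a micro-scale submodel run as separate processes and exchange data
# through an explicit conduit. All names and the toy physics are hypothetical.
from multiprocessing import Process, Pipe

def macro_model(conduit, steps=5):
    state = 1.0
    for t in range(steps):
        conduit.send(state)            # pass a boundary value to the micro model
        correction = conduit.recv()    # receive the micro-scale response
        state = 0.9 * state + correction
        print(f"macro step {t}: state={state:.4f}")

def micro_model(conduit, steps=5):
    for _ in range(steps):
        boundary = conduit.recv()      # boundary condition from the macro model
        conduit.send(0.05 * boundary)  # cheap stand-in for a micro-scale simulation

if __name__ == "__main__":
    a, b = Pipe()
    p = Process(target=micro_model, args=(b,))
    p.start()
    macro_model(a)
    p.join()
```

Whether such a coupling pays off depends on the ratio of exchange cost to compute cost per step, which is exactly what the MPI-vs-MUSCLE 2 throughput comparison in the paper quantifies.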
Journal of Computational Science | 2013
Derek Groen; James Hetherington; Hywel B. Carver; Rupert W. Nash; Miguel O. Bernabeu; Peter V. Coveney
We investigate the performance of the HemeLB lattice-Boltzmann simulator for cerebrovascular blood flow, aimed at providing timely and clinically relevant assistance to neurosurgeons. HemeLB is optimised for sparse geometries, supports interactive use, and scales well to 32,768 cores for problems with ∼81 million lattice sites. We obtain a maximum performance of 29.5 billion site updates per second, with only an 11% slowdown for highly sparse problems (5% fluid fraction). We present steering and visualisation performance measurements and provide a model that allows users to predict performance, and thereby determine how to run simulations with maximum accuracy within given time constraints.
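The quoted figures already support a back-of-envelope version of such a performance model: wall time ≈ sites × time steps / (site updates per second). A small sketch using only the numbers in the abstract (the function name and the choice of one million time steps are illustrative):

```python
# Back-of-envelope site-updates-per-second (SUPS) performance model,
# using the abstract's figures: 29.5 billion SUPS peak on ~81 million
# lattice sites, with an 11% slowdown for highly sparse geometries.
sites = 81e6
peak_sups = 29.5e9
sparse_penalty = 0.89          # 11% slowdown at 5% fluid fraction

def wall_time(timesteps, sups=peak_sups, penalty=1.0):
    """Predicted wall-clock seconds for a run of `timesteps` steps."""
    return timesteps * sites / (sups * penalty)

print(f"{wall_time(1_000_000):.0f} s (dense)")                    # ~2746 s
print(f"{wall_time(1_000_000, penalty=sparse_penalty):.0f} s (sparse)")
```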
Journal of the Royal Society Interface | 2014
Miguel O. Bernabeu; Martin L. Jones; Jens H. Nielsen; Timm Krüger; Rupert W. Nash; Derek Groen; Sebastian Schmieschek; James Hetherington; Holger Gerhardt; Claudio A. Franco; Peter V. Coveney
There is currently limited understanding of the role played by haemodynamic forces in the processes governing vascular development. One of many obstacles to be overcome is measuring those forces, at the required resolution, on vessels only a few micrometres thick. In this paper, we present an in silico method for the computation of the haemodynamic forces experienced by murine retinal vasculature (a widely used vascular development animal model) beyond what is measurable experimentally. Our results show that it is possible to reconstruct high-resolution three-dimensional geometrical models directly from samples of retinal vasculature and that the lattice-Boltzmann algorithm can be used to obtain accurate estimates of the haemodynamics in these domains. We generate flow models from samples obtained at postnatal days (P) 5 and 6. Our simulations show important differences between the flow patterns recovered in both cases, including observations of regression occurring in areas where wall shear stress (WSS) gradients exist. We propose two possible mechanisms to account for the observed increase in velocity and WSS between P5 and P6: (i) the measured reduction in typical vessel diameter between both time points and (ii) the reduction in network density triggered by the pruning process. The methodology developed herein is applicable to other biomedical domains where microvasculature can be imaged but experimental flow measurements are unavailable or difficult to obtain.
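For intuition about why the measured diameter reduction matters, the textbook Poiseuille estimate of wall shear stress, τ = 4μQ/(πR³), shows WSS rising as R⁻³ at fixed flow rate. The sketch below uses this formula with illustrative values; it is not the lattice-Boltzmann computation used in the paper.

```python
# Poiseuille wall-shear-stress estimate tau = 4*mu*Q/(pi*R^3), illustrating
# the strong R^-3 dependence. All numbers are illustrative order-of-magnitude
# values, not data from the paper.
from math import pi

mu = 3.5e-3        # blood viscosity, Pa*s (typical bulk value)
Q = 1.0e-13        # volumetric flow rate, m^3/s (order of magnitude)

def wss(radius_m):
    return 4 * mu * Q / (pi * radius_m**3)

for r_um in (5.0, 4.0):   # hypothetical vessel radii in micrometres
    print(f"R = {r_um} um -> WSS = {wss(r_um * 1e-6):.2f} Pa")
```

A 20% reduction in radius roughly doubles the estimated WSS at fixed flow, consistent in spirit with the P5-to-P6 trend described above.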
IEEE Computer | 2010
Simon Portegies Zwart; Tomoaki Ishiyama; Derek Groen; Keigo Nitadori; Junichiro Makino; Cees de Laat; Stephen L. W. McMillan; Kei Hiraki; Stefan Harfst; Paola Grosso
The computational requirements of simulating a sector of the universe led an international team of researchers to try concurrent processing on two supercomputers half a world apart. Data traveled nearly 27,000 km in 0.277 seconds, crisscrossing two oceans to go from Amsterdam to Tokyo and back.
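The quoted round trip is easy to sanity-check against the signal speed in optical fibre (roughly 200,000 km/s, about two-thirds of c): the measured 0.277 seconds is roughly double the ideal propagation time, with the difference attributable to routing and protocol overhead. A quick check:

```python
# Sanity check of the quoted round trip: ~27,000 km in 0.277 s.
# The fibre signal speed is a standard figure, not from the article.
distance_km = 27_000
measured_s = 0.277
fibre_km_per_s = 2.0e5          # light in glass, roughly 2/3 of c

ideal_s = distance_km / fibre_km_per_s
print(f"effective speed: {distance_km / measured_s:,.0f} km/s")          # ~97,473
print(f"ideal fibre time: {ideal_s:.3f} s, overhead: {measured_s - ideal_s:.3f} s")
```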
PLOS Computational Biology | 2014
James M. Osborne; Miguel O. Bernabeu; Maria Bruna; Ben Calderhead; Jonathan Cooper; Neil Dalchau; Sara-Jane Dunn; Alexander G. Fletcher; Robin Freeman; Derek Groen; Bernhard Knapp; Greg J. McInerny; Gary R. Mirams; Joe Pitt-Francis; Biswa Sengupta; David W. Wright; Christian A. Yates; David J. Gavaghan; Stephen Emmott; Charlotte M. Deane
In order to understand the complexity inherent in nature, mathematical, statistical and computational techniques are increasingly being employed in the life sciences. In particular, the use and development of software tools is becoming vital for investigating scientific hypotheses, and a wide range of scientists are finding that software development plays a more central role in their day-to-day research. In fields such as biology and ecology, there has been a noticeable trend towards the use of quantitative methods, both for making sense of ever-increasing amounts of data [1] and for building or selecting models [2]. As Research Fellows of the “2020 Science” project (http://www.2020science.net), funded jointly by the EPSRC (Engineering and Physical Sciences Research Council) and Microsoft Research, we have firsthand experience of the challenges associated with carrying out multidisciplinary computation-based science [3]–[5]. In this paper we offer a jargon-free guide to best practice when developing and using software for scientific research. While many guides to software development exist, they are often aimed at computer scientists [6] or concentrate on large open-source projects [7]; the present guide is aimed specifically at the vast majority of scientific researchers: those without formal training in computer science. We present our ten simple rules with the aim of enabling scientists to be more effective in undertaking research, and therefore to maximise the impact of this research within the scientific community. While these rules are described individually, collectively they form a single vision for how to approach the practical side of computational science. Our rules are presented in roughly the chronological order in which they should be undertaken, beginning with things that, as a computational scientist, you should do before you even think about writing any code. For each rule, guides on getting started, links to relevant tutorials, and further reading are provided in the supplementary material (Text S1).
Philosophical Transactions of the Royal Society A | 2014
Joris Borgdorff; M. Ben Belgacem; Carles Bona-Casas; Luis Fazendeiro; Derek Groen; Olivier Hoenen; Alexandru E. Mizeranschi; James L. Suter; D. Coster; Peter V. Coveney; Werner Dubitzky; Alfons G. Hoekstra; Pär Strand; Bastien Chopard
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption.