
Publication


Featured research published by Kieran Alden.


Science Signaling | 2012

Differential RET Signaling Pathways Drive Development of the Enteric Lymphoid and Nervous Systems

Amisha Patel; Nicola Harker; Lara Moreira-Santos; Manuela Ferreira; Kieran Alden; Jon Timmis; Katie Foster; Anna Garefalaki; Panayotis Pachnis; Paul S. Andrews; Hideki Enomoto; Jeffrey Milbrandt; Vassilis Pachnis; Mark Coles; Dimitris Kioussis; Henrique Veiga-Fernandes

Cis and trans signaling mechanisms direct different developmental responses to ligands for the receptor tyrosine kinase RET.

RET Signaling in Cis and Trans

Development of the enteric (gastrointestinal) organs requires coordinated growth of tissues from various embryonic layers. Evidence suggests that ligands of the receptor tyrosine kinase RET are used in different tissues to control distinct developmental end points. Lymphoid tissue initiator (LTin) cells are thought to function in the early development of Peyer’s patches (PPs), which are secondary lymphoid organs of the gut important for mucosal immunity. The formation of the enteric nervous system, which innervates the lymphoid tissue, depends on interactions between neural crest cells and stromal cells of the gut wall. RET signaling requires the presence of co-receptors, which bind to ligands, in the same cell (in cis); alternatively, RET co-receptors can be cleaved from cells, raising the possibility of RET signaling in trans, although the physiological relevance of such signaling is uncertain. Patel et al. investigated lymphoid tissue morphogenesis in mice and found that whereas development of the enteric nervous tissue depended on RET signaling in cis, aggregation of LTin cells and development of lymphoid tissue were driven by RET signaling in trans and depended on the local availability of RET co-receptors and ligands.

During the early development of the gastrointestinal tract, signaling through the receptor tyrosine kinase RET is required for initiation of lymphoid organ (Peyer’s patch) formation and for intestinal innervation by enteric neurons. RET signaling occurs through glial cell line–derived neurotrophic factor (GDNF) family receptor α co-receptors present in the same cell (signaling in cis). It is unclear whether RET signaling in trans, which occurs in vitro through co-receptors from other cells, has a biological role.
We showed that the initial aggregation of hematopoietic cells to form lymphoid clusters occurred in a RET-dependent, chemokine-independent manner through adhesion-mediated arrest of lymphoid tissue initiator (LTin) cells. Lymphoid tissue inducer cells were not necessary for this initiation phase. LTin cells responded to all RET ligands in trans, requiring factors from other cells, whereas RET was activated in enteric neurons exclusively by GDNF in cis. Furthermore, genetic and molecular approaches revealed that the versatile RET responses in LTin cells were determined by distinct patterns of expression of the genes encoding RET and its co-receptors. Our study shows that a trans RET response in LTin cells determines the initial phase of enteric lymphoid organ morphogenesis, and suggests that differential co-expression of Ret and Gfra can control the specificity of RET signaling.


PLOS Computational Biology | 2013

Spartan: A Comprehensive Tool for Understanding Uncertainty in Simulations of Biological Systems

Kieran Alden; Mark Read; Jonathan Timmis; Paul S. Andrews; Henrique Veiga-Fernandes; Mark Coles

Integrating computer simulation with conventional wet-lab research has proven to have much potential in furthering the understanding of biological systems. Success requires the relationship between simulation and the real-world system to be established: substantial aspects of the biological system are typically unknown, and the abstract nature of simulation can complicate interpretation of in silico results in terms of the biology. Here we present spartan (Simulation Parameter Analysis R Toolkit ApplicatioN), a package of statistical techniques specifically designed to help researchers understand this relationship and provide novel biological insight. The tools comprising spartan help identify which simulation results can be attributed to the dynamics of the modelled biological system, rather than artefacts of biological uncertainty or parametrisation, or simulation stochasticity. Statistical analyses reveal the influence that pathways and components have on simulation behaviour, offering valuable biological insight into aspects of the system under study. We demonstrate the power of spartan in providing critical insight into aspects of lymphoid tissue development in the small intestine through simulation. Spartan is released under a GPLv2 license, implemented within the open source R statistical environment, and freely available from both the Comprehensive R Archive Network (CRAN) and http://www.cs.york.ac.uk/spartan. The techniques within the package can be applied to traditional ordinary or partial differential equation simulations as well as agent-based implementations. Manuals, comprehensive tutorials, and example simulation data upon which spartan can be applied are available from the website.
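A central statistic in this family of sensitivity analyses is the Vargha-Delaney A measure, a non-parametric effect magnitude used to judge whether two sets of stochastic simulation results genuinely differ. A minimal Python sketch of the measure (illustrative only, not spartan's actual API, which is an R package):

```python
def a_measure(sample_a, sample_b):
    """Vargha-Delaney A: the probability that a value drawn from
    sample_a exceeds one drawn from sample_b, with ties counting
    half. A = 0.5 indicates no difference between the result sets."""
    greater = sum(1 for x in sample_a for y in sample_b if x > y)
    ties = sum(1 for x in sample_a for y in sample_b if x == y)
    return (greater + 0.5 * ties) / (len(sample_a) * len(sample_b))

# Comparing two sets of simulation runs: A near 0.5 suggests any
# difference is within stochastic variation; A near 0 or 1 suggests
# a genuine effect of the perturbed parameter.
baseline = [10.1, 9.8, 10.3, 10.0, 9.9]
perturbed = [11.2, 11.0, 11.5, 10.9, 11.3]
print(a_measure(baseline, perturbed))  # 0.0: every perturbed run is higher
```

The pairwise formulation above is O(m·n) but makes the probabilistic interpretation explicit; rank-based formulations compute the same value.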


Frontiers in Immunology | 2012

Pairing experimentation and computational modeling to understand the role of tissue inducer cells in the development of lymphoid organs

Kieran Alden; Jon Timmis; Paul S. Andrews; Henrique Veiga-Fernandes; Mark Coles

The use of genetic tools, imaging technologies and ex vivo culture systems has provided significant insights into the role of tissue inducer cells and associated signaling pathways in the formation and function of lymphoid organs. Despite advances in experimental technologies, the molecular and cellular processes orchestrating the formation of a complex three-dimensional tissue are difficult to dissect using current approaches. Therefore, a robust set of simulation tools has been developed to model the processes involved in lymphoid tissue development. Specifically, the role of different tissue inducer cell populations in the dynamic formation of Peyer’s patches has been examined. Utilizing approaches from systems engineering, an unbiased model of lymphoid tissue inducer cell function has been developed that permits the emergence of behaviors that are statistically indistinguishable from those observed in vivo. These results provide the confidence to utilize statistical methods to explore how the simulator predicts cellular behavior and outcomes under different physiological conditions. Such methods, known as sensitivity analysis techniques, can provide insight into when a component part of the system (such as a particular cell type, adhesion molecule, or chemokine) begins to influence observed behavior, and can quantify the effect a component part has on the end result: the formation of lymphoid tissue. Through use of such a principled approach in the design, calibration, and analysis of a computer simulation, a robust in silico tool can be developed which can both further the understanding of the biological system being explored and act as a tool for the generation of hypotheses that can be tested using experimental approaches.
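Global sensitivity analyses of this kind typically perturb all parameters simultaneously using stratified sampling. A hypothetical Python sketch of Latin hypercube sampling over a simulation's parameter space (the parameter names are invented for illustration):

```python
import random

def latin_hypercube(n_samples, param_ranges, rng=None):
    """Stratified sampling: each parameter's range is split into
    n_samples equal strata, every stratum is sampled exactly once,
    and stratum order is shuffled independently per parameter, so
    the whole range of each parameter is covered evenly."""
    rng = rng or random.Random(42)
    samples = [{} for _ in range(n_samples)]
    for name, (lo, hi) in param_ranges.items():
        width = (hi - lo) / n_samples
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for sample, stratum in zip(samples, strata):
            sample[name] = lo + (stratum + rng.random()) * width
    return samples

# Hypothetical parameters for a lymphoid tissue simulation
design = latin_hypercube(10, {
    "chemokine_expression": (0.0, 1.0),
    "adhesion_probability": (0.0, 0.5),
})
```

Each sampled parameter set would then be run through the simulator, and the outputs analysed to attribute response variation to individual parameters.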


BMC Bioinformatics | 2010

dConsensus: a tool for displaying domain assignments by multiple structure-based algorithms and for construction of a consensus assignment

Kieran Alden; Stella Veretnik; Philip E. Bourne

Background: Partitioning of a protein into structural components, known as domains, is an important initial step in protein classification and for functional and evolutionary studies. While the systematic assignments of domains by human experts exist (CATH and SCOP), the introduction of high throughput technologies for structure determination threatens to overwhelm expert approaches. A variety of algorithmic methods have been developed to expedite this process, allowing almost instant structural decomposition into domains. The performance of algorithmic methods can approach 85% agreement on the number of domains with the consensus reached by experts. However, each algorithm takes a somewhat different conceptual approach, each with unique strengths and weaknesses. Currently there is no simple way to automatically compare assignments from different structure-based domain assignment methods, thereby providing a comprehensive understanding of possible structure partitioning as well as providing some insight into the tendencies of particular algorithms. Most importantly, a consensus assignment drawn from multiple assignment methods can provide a singular and presumably more accurate view.

Results: We introduce dConsensus (http://pdomains.sdsc.edu/dConsensus), a web resource that displays the results of calculations from multiple algorithmic methods and generates a domain assignment consensus with an associated reliability score. Domain assignments from seven structure-based algorithms - PDP, PUU, DomainParser2, NCBI method, DHcL, DDomains and Dodis - are available for analysis and comparison alongside assignments made by expert methods. The assignments are available for all protein chains in the Protein Data Bank (PDB). A consensus domain assignment is built by either allowing each algorithm to contribute equally (simple approach) or by weighting the contribution of each method by its prior performance and observed tendencies. An analysis of secondary structure around domain and fragment boundaries is also available for display and further analysis.

Conclusion: dConsensus provides a comprehensive assignment of protein domains. For the first time, seven algorithmic methods are brought together with no need to access each method separately via a webserver or local copy of the software. This aggregation permits a consensus domain assignment to be computed. Comparison viewing of the consensus and choice methods provides the user with insights into the fundamental units of protein structure so important to the study of evolutionary and functional relationships.
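The weighted-consensus idea can be sketched in a few lines of Python. The method names below mirror the algorithms listed above, but the weighting scheme and reliability score are illustrative placeholders, not dConsensus's actual scoring:

```python
from collections import defaultdict

def consensus_domains(predictions, weights=None):
    """predictions: dict mapping method name -> predicted domain count.
    weights: optional dict of per-method weights (defaults to equal
    contribution, the 'simple approach').
    Returns (consensus count, reliability score in (0, 1])."""
    if weights is None:
        weights = {method: 1.0 for method in predictions}
    votes = defaultdict(float)
    for method, count in predictions.items():
        votes[count] += weights.get(method, 1.0)
    total = sum(votes.values())
    best = max(votes, key=votes.get)
    return best, votes[best] / total

# Equal-weight consensus over three (hypothetical) assignments
count, reliability = consensus_domains(
    {"PDP": 2, "PUU": 2, "DomainParser2": 3})
```

Weighting each method by its historical agreement with expert assignments would simply mean passing a non-uniform `weights` dict.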


CPT: Pharmacometrics & Systems Pharmacology | 2015

Agent‐Based Modeling in Systems Pharmacology

Jason Cosgrove; James A. Butler; Kieran Alden; Mark Read; Vipin Kumar; Lourdes Cucurull‐Sanchez; Jon Timmis; Mark Coles

Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent‐based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogenous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM‐specific strengths have yielded success in the area of preclinical mechanistic modeling.
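As a flavour of what "describing individual components" means in practice, here is a deliberately minimal agent-based sketch in Python (a contact-activation toy model, not drawn from the tutorial itself): each cell is an explicit object with its own position and state, and population-level behaviour emerges from local interactions rather than being prescribed by a rate equation.

```python
import random

class Cell:
    """A single agent: a grid position plus an activation state."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.active = False

def step(cells, grid_size, contact_radius=1):
    """One time step: every cell takes a random-walk step, then any
    inactive cell adjacent to an active one becomes active."""
    for c in cells:
        c.x = max(0, min(grid_size - 1, c.x + random.choice((-1, 0, 1))))
        c.y = max(0, min(grid_size - 1, c.y + random.choice((-1, 0, 1))))
    newly_active = [
        c for c in cells
        if not c.active and any(
            a.active
            and abs(a.x - c.x) <= contact_radius
            and abs(a.y - c.y) <= contact_radius
            for a in cells)
    ]
    for c in newly_active:
        c.active = True

random.seed(0)
cells = [Cell(random.randrange(20), random.randrange(20)) for _ in range(50)]
cells[0].active = True  # seed a single activated cell
for _ in range(100):
    step(cells, 20)
print(sum(c.active for c in cells), "of", len(cells), "cells activated")
```

An equivalent ordinary differential equation model would track only the total activated fraction; the agent-based version additionally exposes spatial clustering and per-cell histories, the kind of heterogeneity the tutorial's case studies exploit.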


Journal of the Royal Society Interface | 2015

Using argument notation to engineer biological simulations with increased confidence

Kieran Alden; Paul S. Andrews; Fiona Polack; Henrique Veiga-Fernandes; Mark Coles; Jon Timmis

The application of computational and mathematical modelling to explore the mechanics of biological systems is becoming prevalent. To significantly impact biological research, notably in developing novel therapeutics, it is critical that the model adequately represents the captured system. Confidence in adopting in silico approaches can be improved by applying a structured argumentation approach, alongside model development and results analysis. We propose an approach based on argumentation from safety-critical systems engineering, where a system is subjected to a stringent analysis of compliance against identified criteria. We show its use in examining the biological information upon which a model is based, identifying model strengths, highlighting areas requiring additional biological experimentation and providing documentation to support model publication. We demonstrate our use of structured argumentation in the development of a model of lymphoid tissue formation, specifically Peyer’s patches. The argumentation structure is captured using Artoo (www.york.ac.uk/ycil/software/artoo), our Web-based tool for constructing fitness-for-purpose arguments, using a notation based on the safety-critical goal structuring notation. We show how argumentation helps in making the design and structured analysis of a model transparent, capturing the reasoning behind the inclusion or exclusion of each biological feature and recording assumptions, as well as pointing to evidence supporting model-derived conclusions.


Journal of the Royal Society Interface | 2016

Automated multi-objective calibration of biological agent-based simulations

Mark Read; Kieran Alden; Louis M. Rose; Jon Timmis

Computational agent-based simulation (ABS) is increasingly used to complement laboratory techniques in advancing our understanding of biological systems. Calibration, the identification of parameter values that align simulation with biological behaviours, becomes challenging as increasingly complex biological domains are simulated. Complex domains cannot be characterized by single metrics alone, rendering simulation calibration a fundamentally multi-metric optimization problem that typical calibration techniques cannot handle. Yet calibration is an essential activity in simulation-based science; the baseline calibration forms a control for subsequent experimentation and hence is fundamental in the interpretation of results. Here, we develop and showcase a method, built around multi-objective optimization, for calibrating ABSs against complex target behaviours requiring several metrics (termed objectives) to characterize. Multi-objective calibration (MOC) delivers those sets of parameter values representing optimal trade-offs in simulation performance against each metric, in the form of a Pareto front. We use MOC to calibrate a well-understood immunological simulation against both established a priori and previously unestablished target behaviours. Furthermore, we show that simulation-borne conclusions are broadly, but not entirely, robust to adopting baseline parameter values from different extremes of the Pareto front, highlighting the importance of MOC’s identification of numerous calibration solutions. We devise a method for detecting overfitting in a multi-objective context, not previously possible, used to save computational effort by terminating MOC when no improved solutions will be found. MOC can significantly impact biological simulation, adding rigour to and speeding up an otherwise time-consuming calibration process and highlighting inappropriate biological capture by simulations that cannot be well calibrated. As such, it produces more accurate simulations that generate more informative biological predictions.
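The Pareto front at the heart of MOC can be extracted from any set of candidate calibrations by discarding dominated solutions. A minimal Python sketch, assuming every objective is an error metric to be minimised (the candidate labels are invented for illustration):

```python
def pareto_front(solutions):
    """solutions: list of (params, objectives) pairs, where each
    objectives tuple holds error metrics to be minimised.
    Returns the non-dominated subset: solutions that no other
    solution beats on every objective simultaneously."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and better somewhere
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [(params, obj) for params, obj in solutions
            if not any(dominates(other, obj) for _, other in solutions)]

# Four candidate calibrations scored on two error metrics:
# "d" is beaten by "a" on both metrics and drops out.
candidates = [("a", (1, 3)), ("b", (2, 2)), ("c", (3, 1)), ("d", (3, 3))]
front = pareto_front(candidates)
```

In a real MOC run the candidates would come from an evolutionary multi-objective optimiser rather than an enumerated list, but the dominance test is the same.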


Artificial Life | 2014

Novel Approaches to the Visualization and Quantification of Biological Simulations by Emulating Experimental Techniques

James A. Butler; Kieran Alden; Henrique Veiga Fernandes; Jon Timmis; Mark Coles

The use of modeling and simulation as a predictive tool for research in biology is becoming increasingly popular. However, outputs from such simulations are often abstract and presented in a very different manner to equivalent data from the biological domain. Therefore, we have developed a flexible tool-chain for emulating various biological laboratory techniques to produce biologically homomorphic outputs in computer simulations. This includes virtual immunohistochemistry, microscopy, flow cytometry, and gene expression heatmaps. We present a case study in the use of this tool-chain applied to a simulation of pre-natal lymphoid organ development. We find that application of the tool-chain provides additional, biologically relevant data that is inaccessible with pre-existing methodologies for analysis of simulation results. We argue that biological experimental techniques borrowed from the wet-lab are an important additional approach to the analysis of simulations in computational biology, and might furthermore inspire confidence in simulation results from the perspective of experimental biologists.

Introduction & Background

In silico simulations of biological processes, including disease pathology, tissue development, immune responses, and evolutionary processes, have a demonstrated ability to offer new insight into complex biological systems that are difficult to study solely in wet laboratories. Computational models offer qualitative and quantitative insights into complex systems, in which biological behaviors emerge from the interactions of many individual entities, for instance by capturing the dynamics of cells signalling one another both through direct contact and through the secretion and detection of molecules in their local environment.
We have previously demonstrated that agent-based spatially resolved simulations, when properly validated, have the capacity to explore disease intervention strategies such as the administration of drugs or biological therapeutics, or the simulation of surgery by removing or modifying individual model compartments (Read et al., 2013). Agent-based simulations of biology are often much more complex than mathematical models based on differential equations, as they capture emergent phenomena in terms of spatially-resolved individuals rather than at the population level. Such simulations often require a very large parameter space and fine spatial resolution. Therefore, it is extremely important that such simulations can be properly validated and calibrated in order for us to have confidence in their results and predictions. Calibration is typically achieved in the first instance ’by hand’, that is, the parameter space is explored in conjunction with a domain expert, until the simulation outputs correspond to those measured experimentally. Simulations are often calibrated against one experiment, which is unlikely to be adequate if the simulation is then used to explore the properties of the system in other contexts. For this reason, Read et al. (2013) argues for use of multiple calibration points when developing simulations. However, due to the limitations of observation in the biological domain, simulation outputs usually take on a very different form to in vivo or in vitro experiments that model the same process. Many simulations have parameters that are not directly measurable with current technology, which must be inferred from statistical analysis of the simulation results and calibration against primary data from biological experiments. The difficulty of this task may be compounded by the different nature of simulation output to primary data. 
Sensitivity analysis should be performed over the parameter space for every biological simulation, using techniques that measure the effect magnitude of each simulation parameter individually, such as the Vargha-Delaney A-Test (Vargha and Delaney, 2000), and Latin hypercube parameter sampling to explore the effect different parameter values exert on the sensitivity of other parameters (Alden et al., 2013). Yet, we believe that in-depth statistical analysis and parameter-fitting through calibration do not provide enough evidence alone to convince a biologist that a simulation is fit for purpose, nor do they provide access to the model’s full informational content. To ensure that simulations are developed in a principled manner, we advocate use of the CoSMoS (Complex Systems Modeling and Simulation) framework for the development of computational models. We provide a brief description of the process below but direct the reader to Andrews et al. (2010) for a more complete overview. The model should be explicitly stated using a modeling language, such as the Unified Modeling Language (UML), Systems Biology Markup Language (SBML) or the Pi calculus, in terms of biology alone before conversion and abstraction into a platform-independent ’platform model’, which may then be implemented through development of an executable software representation of the platform model. Simulation developers should also ensure the implementation is fully transparent: by providing a formal argumentation structure that explains and justifies all assumptions and abstractions with evidence or exposition, such that these can be considered when translating the simulation result into one grounded in the biological domain. Goal-structuring notation has been shown to provide a means of structuring such arguments in the field of safety-critical software systems (Kelly, 1999).
Where simulations are used as a key tool in making biological predictions, clinical trials being one example, it is clear that the tool should be considered safety-critical. Although these techniques may appeal to the developers of biological simulations, they are not always easily accessible to biologists, nor ideal for validation or predictive purposes. One asset of simulation is its capability of providing high-resolution data which is difficult or impossible to obtain in the wet-lab, while maintaining a level of abstraction that makes the simulation computationally tractable. However, it is for precisely these same reasons that experimental biologists often lack confidence in simulation results, so we argue that model developers need to go further to build confidence in their simulations. Simulation visualizations are commonly too abstract to visually represent a model system as it is conceptualized by biologists. Since simulation outputs do not reflect the format or type of data obtained from wet-laboratory experimental techniques, such as flow cytometry (to measure cell surface expression of specific proteins), histology, or the various approaches to analysing relative gene expression (quantitative polymerase chain reactions, microarrays, deep sequencing, etc.), they must be indirectly compared with the biological domain. In addition to providing transparency in simulation design, a strong argumentation structure, and a comprehensive statistical analysis of the parameter space, we argue modelers must look toward experimental techniques in biology with regard to the simulation outputs chosen. In this paper, we discuss the development of a tool-chain that enables simulations to output data comparable to biological experiments, through the emulation of experimental techniques used with in vitro or in vivo biological model systems. We additionally address the motivations and potential applications driving development of this approach.
The concept of producing a Turing-like test for the validation of biological simulations is discussed at length in Harel (2005), in which a domain expert is presented with both experimental and simulated datasets, and challenged to identify the experimental data, and whether any discernible difference exists. Much like the Turing test in the field of artificial intelligence, this has been considered the standard to which computational simulations in biology should ultimately aspire. The principal barrier to developing such a test as a viable validation tool is negating the significant differences in means of producing, analyzing and presenting data in experimental and computational biology. Therefore, this presents a strong motivation to develop methods for bringing results in computational biology closer to those seen in experimental biology. Essentially, we argue that to better understand the dynamics of a simulation of biological processes, an in silico emulation of experimental biological techniques is required, and furthermore, that a simulation is more likely to yield useful results when analyzed within the context of the biological techniques that will ultimately be used to test simulation predictions. We found that this process can elucidate new perspectives in a pre-existing model, and identify emergent sub-populations of cells not previously known within the simulation. Table 1 presents the three wet-laboratory experimental techniques in biology that we aim to emulate in this case study: flow cytometry, histological imaging, and heat maps of protein expression both spatially and temporally expressed.

Technique | Description | Illustrates
Flow Cytometry | Cells are input via microfluidics and individually scanned by lasers to determine the intensity of their fluorescent antibody-stained surface. | Relative cell surface expression of proteins, cell size and granularity. Can identify different cell populations and is multi-dimensional.
(Immuno-)histology | Sections of tissue are antibody stained and imaged using microscopy. | Detects the presence of proteins and tissue structures. Spatially resolved.
Heat Maps | Used to illustrate gene expression data, typically from microarrays or deep sequencing. | Visualizations of spatiotemporal gene or protein expression (relative).

Table 1: Table of experimental techniques in biology to be emulated and applied to art
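As an illustration of the emulation idea (a sketch, not the paper's actual tool-chain), per-agent marker expression from a simulation can be converted into flow-cytometry-like log-intensity events, with multiplicative noise standing in for staining and detection variability, and then gated. The marker name and noise model here are invented for illustration:

```python
import math
import random

def emulate_flow_cytometry(cells, marker, noise_sd=0.1, rng=None):
    """Convert per-agent expression levels into flow-cytometry-like
    log10 intensity events; multiplicative log-normal noise stands in
    for staining and detection variability. `cells` is a list of
    dicts mapping marker name -> expression level (arbitrary units)."""
    rng = rng or random.Random(1)
    readings = []
    for cell in cells:
        signal = cell.get(marker, 0.0) + 1e-3  # avoid log(0)
        readings.append(math.log10(signal * rng.lognormvariate(0, noise_sd)))
    return readings

def gate(readings, threshold):
    """Fraction of events above a log10 intensity threshold."""
    return sum(r > threshold for r in readings) / len(readings)

# Two simulated populations: high and low expressers of a marker
events = emulate_flow_cytometry(
    [{"marker_x": 100.0}] * 50 + [{"marker_x": 1.0}] * 50, "marker_x")
print(gate(events, 1.0))  # fraction of 'positive' events
```

Plotting such events on log-scaled axes, as a cytometry dot plot would be, lets simulated and wet-lab populations be compared side by side in a format familiar to experimental biologists.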


Natural Computing | 2015

Utilising a simulation platform to understand the effect of domain model assumptions

Kieran Alden; Paul S. Andrews; Henrique Veiga-Fernandes; Jon Timmis; Mark Coles

Computational and mathematical modelling approaches are increasingly being adopted in attempts to further our understanding of complex biological systems. This approach can be subjected to strong criticism as substantial aspects of the biological system being captured are not currently known, meaning assumptions need to be made that could have a critical impact on simulation response. We have utilised the CoSMoS process in the development of an agent-based simulation of the formation of Peyer’s patches (PP), gut-associated lymphoid organs that have a key role in the initiation of adaptive immune responses to infection. Although the use of genetic tools, imaging technologies and ex vivo culture systems has provided significant insight into the cellular components and associated pathways involved in PP development, interesting questions remain that cannot be addressed using these approaches, and as such well justified assumptions have been introduced into our model to counter this. Here we focus not on the development of the model itself, but instead demonstrate how the resultant simulation can be used to assess how these assumptions impact the simulation response. For example, we consider the impact of our assumption that the migration rate of lymphoid tissue cells into the gut remains constant throughout PP development. We demonstrate that an analysis of the assumptions made in the construction of the domain model may either increase confidence in the model as a representation of the biological system it captures, or may suggest areas where further biological experimentation is required.


international conference on artificial immune systems | 2011

Towards argument-driven validation of an in silico model of immune tissue organogenesis

Kieran Alden; Paul S. Andrews; Jon Timmis; Henrique Veiga-Fernandes; Mark Coles

Specialised tissues of the immune system including lymph nodes, tonsils, spleen and Peyer’s Patches have key roles in the initiation of adaptive immune responses to infection. To understand the molecular mechanisms involved in the development of this tissue, mice deficient for key genes in this process have been developed and analysed, leading to a basic model describing tissue formation. Although this approach has provided some key insights into the molecular mechanisms involved, due to the complexity of gene expression patterns it has not been possible to fully understand the process of lymphoid tissue organogenesis.

Collaboration


Dive into Kieran Alden's collaborations.

Top Co-Authors

Henrique Veiga-Fernandes

Instituto de Medicina Molecular


Philip E. Bourne

National Institutes of Health
