Kristin Tøndel
Norwegian University of Life Sciences
Publications
Featured research published by Kristin Tøndel.
Mini-reviews in Medicinal Chemistry | 2012
Heather Wieman; Kristin Tøndel; Endre Anderssen; Finn Drabløs
The current status in rational drug design using homology-based models is discussed, with a focus on template selection, model building, model verification and strategies for drug design based on model structures. A novel approach to the identification of unique binding-site features from homology-based models, Protein Alpha Shape Similarity Analysis (PASSA), is also described.
BMC Systems Biology | 2011
Kristin Tøndel; Ulf G. Indahl; Arne B. Gjuvsland; Jon Olav Vik; Peter Hunter; Stig W. Omholt; Harald Martens
Background: Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of mouse ventricular myocyte function.
Results: Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops.
Conclusions: HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter-to-phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.
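As a rough illustration of the HC-PLSR idea, the sketch below fits a global PLS regression, clusters the samples in its score space, and then fits one local PLS model per cluster. It uses crisp k-means as a stand-in for the fuzzy C-means clustering used in the paper, and all function names and parameter choices are illustrative rather than taken from the authors' implementation.

```python
# Sketch of an HC-PLSR-style metamodel (illustrative, not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

def fit_hc_plsr(X, Y, n_clusters=3, n_components=5):
    """Global PLSR -> clustering in score space -> one local PLSR per cluster."""
    global_pls = PLSRegression(n_components=n_components).fit(X, Y)
    scores = global_pls.transform(X)              # low-dimensional X-scores

    # Crisp k-means as a stand-in for the fuzzy C-means clustering in the paper.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(scores)

    local_models = {}
    for k in range(n_clusters):
        idx = km.labels_ == k
        n_comp_local = max(1, min(n_components, idx.sum() - 1))
        local_models[k] = PLSRegression(n_components=n_comp_local).fit(X[idx], Y[idx])
    return global_pls, km, local_models

def predict_hc_plsr(model, X_new):
    """Route each new sample to its cluster and use that cluster's local model."""
    global_pls, km, local_models = model
    labels = km.predict(global_pls.transform(X_new))
    return np.vstack([local_models[k].predict(x[None, :]) for k, x in zip(labels, X_new)])
```

Prediction routes each new sample through the global score space to a cluster and then to that cluster's local model; a fuzzy variant would instead blend the local predictions with the membership weights.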
Journal of Computer-aided Molecular Design | 2006
Kristin Tøndel; Endre Anderssen; Finn Drabløs
Protein Alpha Shape (PAS) Dock is a new empirical score function suitable for virtual library screening using homology-modelled protein structures. Here, the score function is used in combination with the geometry search method Tabu search. A description of the protein binding site is generated using Gaussian property fields, as in Protein Alpha Shape Similarity Analysis (PASSA). Gaussian property fields are also used to describe the ligand properties. The overlap between the receptor and ligand hydrophilicity and lipophilicity fields is maximised, while steric clashes are minimised. The Gaussian functions introduce a smoothing of the property fields, which makes the score function robust against small structural variations and therefore suitable for use with homology models. It also makes it less critical to include protein flexibility in the docking calculations. We use a fast and simplified version of the score function in the geometry search, while a more detailed version is used for the final prediction of the binding free energies. This two-level scoring makes PAS-Dock computationally efficient and well suited for virtual screening. The PAS-Dock score function is trained on 218 X-ray structures of protein–ligand complexes with experimental binding affinities. The performance of PAS-Dock is compared to two other docking methods, AutoDock and MOE-Dock, with respect to both accuracy and computational efficiency. According to this study, PAS-Dock is more computationally efficient than both AutoDock and MOE-Dock, and gives better predictions of the free energies of binding. PAS-Dock is also more robust against structural variations than AutoDock.
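A toy sketch of scoring by overlap of Gaussian property fields, in the spirit of the description above: each site carries a position and property weights, the field overlap reduces analytically to a sum of pairwise Gaussian terms, and matching hydrophilic and lipophilic fields is rewarded while steric overlap is penalised. The data layout, weights and sigma are assumptions for illustration, not the published PAS-Dock score function.

```python
# Toy Gaussian-field overlap score (illustrative only, not the PAS-Dock function).
import numpy as np

def gaussian_overlap(pos_a, w_a, pos_b, w_b, sigma=1.5):
    """Analytic overlap integral of two weighted sums of isotropic Gaussians."""
    d2 = ((pos_a[:, None, :] - pos_b[None, :, :]) ** 2).sum(-1)
    return float((w_a[:, None] * w_b[None, :] * np.exp(-d2 / (4 * sigma ** 2))).sum())

def pas_like_score(receptor, ligand, w_phil=1.0, w_lipo=1.0, w_clash=2.0):
    """Reward matching hydrophilic/lipophilic fields, penalise steric overlap.
    `receptor` and `ligand` are dicts with 'xyz', 'philic', 'lipo' and 'steric'
    arrays; this data layout is assumed purely for illustration."""
    score = 0.0
    score += w_phil * gaussian_overlap(receptor["xyz"], receptor["philic"],
                                       ligand["xyz"], ligand["philic"])
    score += w_lipo * gaussian_overlap(receptor["xyz"], receptor["lipo"],
                                       ligand["xyz"], ligand["lipo"])
    score -= w_clash * gaussian_overlap(receptor["xyz"], receptor["steric"],
                                        ligand["xyz"], ligand["steric"])
    return score
```

Because every term is a smooth Gaussian, small coordinate errors in a homology model change the score only gradually, which is the robustness argument made in the abstract.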
Frontiers in Physiology | 2011
Jon Olav Vik; Arne B. Gjuvsland; Liren Li; Kristin Tøndel; Steven Niederer; Nicolas Smith; Peter Hunter; Stig W. Omholt
Understanding the causal chain from genotypic to phenotypic variation is a tremendous challenge with huge implications for personalized medicine. Here we argue that linking computational physiology to genetic concepts, methodology, and data provides a new framework for this endeavor. We exemplify this causally cohesive genotype–phenotype (cGP) modeling approach using a detailed mathematical model of a heart cell. In silico genetic variation is mapped to parametric variation, which propagates through the physiological model to generate multivariate phenotypes for the action potential and calcium transient under regular pacing, and ion currents under voltage clamping. The resulting genotype-to-phenotype map is characterized using standard quantitative genetic methods and novel applications of high-dimensional data analysis. These analyses reveal many well-known genetic phenomena like intralocus dominance, interlocus epistasis, and varying degrees of phenotypic correlation. In particular, we observe penetrance features such as the masking/release of genetic variation, so that without any change in the regulatory anatomy of the model, traits may appear monogenic, oligogenic, or polygenic depending on which genotypic variation is actually present in the data. The results suggest that a cGP modeling approach may pave the way for a computational physiological genomics capable of generating biological insight about the genotype–phenotype relation in ways that statistical-genetic approaches cannot.
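The cGP workflow can be caricatured as follows: genotypes are mapped to parameter values, the parameters are pushed through a dynamic model, and phenotypes are read off the simulated trajectories. In the sketch below a trivial placeholder function stands in for the heart-cell model, and the parameter names, allele effects and phenotype definitions are all invented for illustration.

```python
# Schematic cGP pipeline with a stand-in "physiology" (all names/values invented).
import itertools
import numpy as np

BASELINE = {"g_Na": 1.0, "g_K": 1.0, "g_Ca": 1.0}   # hypothetical parameters
ALLELE_EFFECT = {0: 0.8, 1: 1.0, 2: 1.2}            # genotype code -> parameter scaling

def genotype_to_parameters(genotype):
    """Map one genotype (tuple with 0/1/2 per locus) to parameter values."""
    return {p: BASELINE[p] * ALLELE_EFFECT[g] for p, g in zip(BASELINE, genotype)}

def simulate_phenotypes(params):
    """Placeholder for the dynamic heart-cell model; returns scalar phenotypes."""
    t = np.linspace(0.0, 1.0, 200)
    v = params["g_Na"] * np.exp(-t / params["g_K"]) * np.sin(8 * np.pi * t * params["g_Ca"])
    return {"peak": v.max(), "time_to_peak": t[v.argmax()], "mean_level": v.mean()}

# Enumerate all genotypes at three loci to build an in silico genotype-phenotype map.
gp_map = {g: simulate_phenotypes(genotype_to_parameters(g))
          for g in itertools.product([0, 1, 2], repeat=3)}
```

The resulting table can then be summarised with standard quantitative-genetic measures (dominance, epistasis, variance components), which is the step the paper carries out on the full cardiac model.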
Journal of Chemometrics | 2010
Harald Martens; Ingrid Måge; Kristin Tøndel; Julia Isaeva; Martin Høy; Solve Sæbø
Computer experiments are useful for studying a complex system, e.g. a high-dimensional nonlinear mathematical model of a biological or physical system. Based on the simulation results, an empirical "metamodel" may then be developed, emulating the behavior of the model in a way that is faster to compute and easier to understand. In modelometrics, the model phenome of a computer model is recorded, once and for all, by structured simulations according to a factorial design in the model inputs, and with high-dimensional profiling of its simulation outputs. A multivariate metamodel is then developed, by multivariate analysis of the input-output data, akin to how high-dimensional data are analyzed in chemometrics. To reveal strongly nonlinear input-output relationships, the factorial design must probe the design space at many different levels for each of the many input factors. A reduced factorial design method may be required if combinatorial explosion is to be avoided. In the multi-level binary replacement (MBR) design the levels of each input factor are represented as binary numbers, and all the individual binary factor bits are then combined in a fractional factorial (FF) design. The experiment size can thereby be greatly reduced at the price of some binary confounding. The MBR method is here described and then illustrated for the optimization of a nonlinear model of a microbiological growth curve with five design factors, for finding the relevant region in the design space, and subsequently for estimating the optimal design points in that space.
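A hedged sketch of the MBR construction: each factor's level index is written in binary, the individual bits are treated as two-level factors, and a generator-based fractional factorial in those bits yields the reduced design. The particular generators, level counts and run size below are arbitrary choices for illustration, not the designs used in the paper.

```python
# Sketch of a multi-level binary replacement (MBR) design (illustrative generators).
import itertools
import numpy as np

def fractional_factorial_bits(n_base, generators):
    """Two-level design: full factorial in n_base bits, plus extra bits defined
    as the XOR (interaction) of base bits, e.g. generators=[(0, 1), (1, 2)]."""
    base = np.array(list(itertools.product([0, 1], repeat=n_base)))
    extra = [np.bitwise_xor.reduce(base[:, list(g)], axis=1) for g in generators]
    return np.column_stack([base] + extra)

def mbr_design(n_levels_per_factor, n_base, generators):
    """Decode fractional-factorial bit patterns back into multi-level settings."""
    bits_per_factor = [int(np.ceil(np.log2(L))) for L in n_levels_per_factor]
    design_bits = fractional_factorial_bits(n_base, generators)
    assert design_bits.shape[1] == sum(bits_per_factor)
    levels, col = [], 0
    for L, nb in zip(n_levels_per_factor, bits_per_factor):
        chunk = design_bits[:, col:col + nb]
        idx = chunk @ (2 ** np.arange(nb)[::-1])     # binary code -> level index
        levels.append(np.minimum(idx, L - 1))        # clip if L is not a power of 2
        col += nb
    return np.column_stack(levels)

# Example: five factors with 4 levels each need 10 bits; 6 base bits plus 4
# generated bits give 64 runs instead of the 1024 of a full 4^5 factorial.
design = mbr_design([4] * 5, n_base=6,
                    generators=[(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 3, 4, 5)])
print(design.shape)   # (64, 5)
```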
Journal of Chemometrics | 2010
Kristin Tøndel; Arne B. Gjuvsland; Ingrid Måge; Harald Martens
Computer simulations are faster and cheaper than physical experiments. Still, if the system has many factors to be manipulated, experimental designs may be needed in order to make computer experiments more cost-effective. Determining the relevant parameter ranges within which to set up a factorial experimental design is a critical and difficult step in the practical use of any formal statistical experimental planning, be it for screening or optimisation purposes. Here we show how a sparse initial range-finding design based on a reduced multi-factor multi-level design method, the multi-level binary replacement (MBR) design, can reveal the region of relevant system behaviour. The MBR design is presently optimised by generating a number of different confounding patterns and choosing the one giving the highest score with respect to a space-spanning criterion. The usefulness of this optimised MBR (OMBR) design is demonstrated in an example from systems biology: a multivariate metamodel, emulating a deterministic, nonlinear dynamic model of the mammalian circadian clock, is developed based on data from a designed computer experiment. In order to allow the statistical metamodel to represent all aspects of the biologically relevant model behaviour, the relevant parameter ranges have to be spanned. The use of an initial OMBR design for finding the widest possible parameter ranges resulting in a stable limit cycle for the mammalian circadian clock model is demonstrated. The same OMBR design is subsequently applied within the selected, relevant sub-region of the parameter space to develop a functional metamodel based on PLS regression.
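The selection step described above can be sketched as follows: several candidate confounding patterns are generated (here simply by permuting which design bit feeds which factor bit), each resulting design is scored with a space-spanning criterion, and the best-scoring design is kept. A maximin pairwise distance is used below as an assumed stand-in for the criterion in the paper, and the design construction is heavily simplified.

```python
# Sketch of the "optimised MBR" (OMBR) selection step (criterion and sizes assumed).
import itertools
import numpy as np

rng = np.random.default_rng(0)

def candidate_design(perm, n_base=6, n_factors=5, bits_per_factor=2):
    """Build one candidate multi-level design from a permutation of bit columns."""
    base = np.array(list(itertools.product([0, 1], repeat=n_base)))
    extra = np.column_stack([base[:, :3].sum(1) % 2,     # simple generated bits
                             base[:, 1:4].sum(1) % 2,
                             base[:, 2:5].sum(1) % 2,
                             base[:, 3:6].sum(1) % 2])
    bits = np.column_stack([base, extra])[:, perm]       # apply the confounding pattern
    weights = 2 ** np.arange(bits_per_factor)[::-1]
    return np.column_stack([bits[:, i * bits_per_factor:(i + 1) * bits_per_factor] @ weights
                            for i in range(n_factors)])

def maximin_score(design):
    """Smallest pairwise distance between design points (larger is better)."""
    d = np.linalg.norm(design[:, None, :] - design[None, :, :], axis=-1)
    return d[np.triu_indices(len(design), k=1)].min()

candidates = [rng.permutation(10) for _ in range(50)]
best = max(candidates, key=lambda p: maximin_score(candidate_design(p)))
```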
Journal of Chemometrics | 2014
Valeria Tafintseva; Kristin Tøndel; Arcady Ponosov; Harald Martens
The problem of structural ambiguity or "sloppiness" of a mathematical model is here studied by multivariate metamodelling techniques. If a given model is "sloppy", a number of different parameter combinations (a "neutral parameter set") can give more or less the same model behaviour and thus equally good fit to data. This paper presents a way to characterize the structure of such sloppiness. The model used for illustration is a nonlinear dynamic model of reaction kinetics, a simple version of the S-system model. When it was fitted to time series data by various nonlinear curve-fitting methods, an unexpected problem was discovered: for every time series, a large neutral parameter set was observed. Each of these sets was analyzed by principal component analysis and found to have clear, but nonlinear, subspace structure. Neutral parameter sets were found for many different time series data, and the global sloppiness structure of the model was characterized. This structure revealed strong correlations between parameters and, on this basis, allowed the original model to be simplified. A method to reduce the ambiguity in kinetic model parameter estimates by combining several time series is suggested.
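A minimal sketch of this kind of sloppiness analysis: a small S-system-like rate law is fitted to a time series from many random starts, all parameter vectors reaching essentially the same fit are collected as a neutral set, and that set is inspected with PCA. The rate law, bounds and thresholds are illustrative and not taken from the paper.

```python
# Sketch of a neutral-set / sloppiness analysis (toy model, illustrative thresholds).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares
from sklearn.decomposition import PCA

t_obs = np.linspace(0.0, 5.0, 40)

def simulate(p, y0=0.2):
    """Integrate a small S-system-like rate law dy/dt = a*y^g - b*y^h."""
    a, g, b, h = p
    sol = solve_ivp(lambda t, y: a * y**g - b * y**h, (0.0, 5.0), [y0], t_eval=t_obs)
    if not sol.success or sol.y.shape[1] != t_obs.size:
        return np.full_like(t_obs, 1e3)     # penalise diverging parameter sets
    return sol.y[0]

y_obs = simulate([1.2, 0.5, 0.8, 1.5])      # synthetic "data" from a reference set

rng = np.random.default_rng(1)
results = []
for _ in range(100):                        # many random starts of the curve fit
    p0 = rng.uniform(0.1, 2.0, size=4)
    res = least_squares(lambda p: simulate(p) - y_obs, p0, bounds=(0.05, 5.0))
    results.append((res.cost, res.x))

best = min(cost for cost, _ in results)
neutral = np.array([x for cost, x in results if cost <= best + 1e-6])   # neutral set
print(PCA().fit(neutral).explained_variance_ratio_)   # low-dimensional structure?
```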
BMC Systems Biology | 2012
Kristin Tøndel; Ulf G. Indahl; Arne B. Gjuvsland; Stig W. Omholt; Harald Martens
Background: Statistical approaches to describing the behaviour of nonlinear dynamic models, including the complex relationships between input parameters and model outputs (referred to as metamodelling), are gaining more and more acceptance as a means for sensitivity analysis and for reducing computational demand. Understanding such input-output maps is necessary for efficient model construction and validation. Multi-way metamodelling provides the opportunity to retain the block-wise structure of the temporal data typically generated by dynamic models throughout the analysis. Furthermore, a cluster-based approach to regional metamodelling allows description of highly nonlinear input-output relationships, revealing additional patterns of covariation.
Results: By presenting the N-way Hierarchical Cluster-based Partial Least Squares Regression (N-way HC-PLSR) method, we here combine multi-way analysis with regional cluster-based metamodelling, together making a powerful methodology for extensive exploration of the input-output maps of complex dynamic models. We illustrate the potential of N-way HC-PLSR by applying it both to predict model outputs as functions of the input parameters and in the inverse direction (predicting input parameters from the model outputs), to analyse the behaviour of a dynamic model of the mammalian circadian clock. Our results display a more complete cartography of how variation in input parameters is reflected in the temporal behaviour of multiple model outputs than has been previously reported.
Conclusions: Our results indicate that N-way HC-PLSR metamodelling provides a gain in insight into which parameters are related to a specific model output behaviour, as well as into variations in model sensitivity to certain input parameters across the model output space. Moreover, the N-way approach allows a more transparent and detailed exploration of the temporal dimension of complex dynamic models, compared to alternative two-way methods.
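The two metamodelling directions mentioned above can be sketched with ordinary two-way PLS on an unfolded samples-by-(outputs x time) matrix; the paper instead keeps the three-way block structure with N-way PLS and adds cluster-based local models, so the code below only illustrates the data layout and the forward and inverse fits, with a placeholder in place of the circadian clock model.

```python
# Forward and inverse metamodels on an unfolded 3-way output array (illustrative).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_params, n_outputs, n_time = 300, 4, 2, 50

P = rng.uniform(0.5, 2.0, size=(n_samples, n_params))   # sampled input parameters
t = np.linspace(0.0, 10.0, n_time)

# Placeholder "dynamic model": two output trajectories per parameter combination.
Y3 = np.empty((n_samples, n_outputs, n_time))
Y3[:, 0, :] = P[:, [0]] * np.sin(P[:, [1]] * t)
Y3[:, 1, :] = P[:, [2]] * np.exp(-P[:, [3]] * t)

# Unfold samples x outputs x time to samples x (outputs*time); N-way PLS would
# instead keep the block structure, with cluster-based local models on top.
Y2 = Y3.reshape(n_samples, n_outputs * n_time)

forward = PLSRegression(n_components=4).fit(P, Y2)   # parameters -> trajectories
inverse = PLSRegression(n_components=4).fit(Y2, P)   # trajectories -> parameters

P_hat = inverse.predict(Y2)
print(np.corrcoef(P_hat[:, 0], P[:, 0])[0, 1])       # how well is parameter 1 recovered?
```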
Journal of Chemical Information and Computer Sciences | 2004
Kristin Tøndel
A new method has been developed for predicting homology model quality directly from the sequence alignment, using multivariate regression. Hence, the expected quality of future homology models can be estimated using only information about the primary structure. The method has been applied to protein kinases and can easily be extended to other protein families. Homology model quality for a reference set of homology models was verified by comparison to experimental structures, through calculation of root-mean-square deviations (RMSDs) and comparison of inter-residue contact areas. These homology model quality measures were then used as dependent variables in a Partial Least Squares (PLS) regression, with a matrix of alignment score profiles derived from the Point Accepted Mutation (PAM) 250 similarity matrix as independent variables. This resulted in a regression model that can be used to predict the accuracy of future homology models from the sequence alignment. Using this method, one can identify the target-template combinations that are most likely to give homology models of sufficient quality, and hence choose the optimal templates for homology modelling. The method's ability to guide the choice of homology modelling templates was verified by comparing success rates to those obtained using BLAST scores and target-template sequence identities, respectively. The results indicate that the method presented here performs best in choosing the optimal homology modelling templates: the optimal template was chosen in 86% of the cases, compared to 62% using BLAST scores and 57% using sequence identities. The method can also be used to identify regions of the protein structure that are difficult to model, as well as alignment errors, making it a useful tool for ensuring that the best possible homology model is generated.
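Schematically, each target-template alignment becomes a vector of per-position substitution scores, and PLS regression maps these profiles to a quality measure such as RMSD. In the sketch below a crude similarity-group scoring stands in for the PAM250 matrix, and the alignments and RMSD values are made up.

```python
# Sketch of alignment-profile regression (toy scoring scheme and invented data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

GROUPS = ["AVILM", "FYW", "STNQ", "KRH", "DE", "GPC"]   # crude similarity classes

def position_score(a, b):
    """Stand-in for a PAM250 lookup."""
    if a == "-" or b == "-":
        return -3.0
    if a == b:
        return 2.0
    return 1.0 if any(a in g and b in g for g in GROUPS) else -1.0

def alignment_profile(target, template):
    return np.array([position_score(a, b) for a, b in zip(target, template)])

# Hypothetical aligned sequence pairs (equal length) and reference RMSD values.
alignments = [("MKTAYIAK-QR", "MKSAYLAKEQR"),
              ("MKTAYIAKGQR", "MRTGYIVK-QL"),
              ("MKTAYIAKGQR", "MKTAYIAKGQR"),
              ("MQTVFIAKGQR", "LHS-WIGKAQR")]
rmsd = np.array([1.4, 2.8, 0.6, 3.5])

X = np.vstack([alignment_profile(t, s) for t, s in alignments])
model = PLSRegression(n_components=2).fit(X, rmsd)
predicted_quality = model.predict(X)      # expected RMSD for each alignment
```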
Computers in Biology and Medicine | 2014
Øyvind Nordbø; Pablo Lamata; Sander Land; Steven Niederer; Jan Magnus Aronsen; William E. Louch; Ivar Sjaastad; Harald Martens; Arne B. Gjuvsland; Kristin Tøndel; Hans Torp; Maelene Lohezic; Jurgen E. Schneider; Espen W. Remme; Nicolas Smith; Stig W. Omholt; Jon Olav Vik
The mouse is an important model for theoretical-experimental cardiac research, and biophysically based whole-organ models of the mouse heart are now within reach. However, the passive material properties of mouse myocardium have not been studied in much detail. We present an experimental setup and an associated computational pipeline to quantify these stiffness properties. A mouse heart was excised and the left ventricle experimentally inflated from 0 to 1.44 kPa in eleven steps, and the resulting deformation was estimated by echocardiography and speckle tracking. An in silico counterpart to this experiment was built using finite element methods and data on ventricular tissue microstructure from diffusion tensor MRI. This model assumed a hyperelastic, transversely isotropic material law to describe the force-deformation relationship, and was simulated for many parameter scenarios covering the relevant range of parameter space. To identify well-fitting parameter scenarios, we compared experimental and simulated outcomes across the whole range of pressures, based partly on gross phenotypes (volume, elastic energy, and short- and long-axis diameter) and partly on node positions in the geometrical mesh. This identified a narrow region of experimentally compatible values of the material parameters. Estimation turned out to be more precise when based on changes in gross phenotypes than with the prevailing practice of using displacements of the material points. We conclude that the presented experimental setup and computational pipeline constitute a viable method that deserves wider application.
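A toy version of the parameter-identification step: a simple assumed pressure-volume law plays the role of the finite-element simulation, volumes at the eleven inflation pressures are compared against synthetic "measurements", and a grid scan maps out the region of material parameters compatible with the data. The model form and all numbers are invented for illustration; the paper uses a transversely isotropic FE model and richer phenotypes.

```python
# Toy parameter identification from passive inflation (all values illustrative).
import numpy as np

pressures = np.linspace(0.0, 1.44, 11)               # kPa, eleven inflation steps

def cavity_volume(p, a, b, v0=20.0):
    """Illustrative stiffness law: volume rises logarithmically with pressure."""
    return v0 + np.log1p(p / a) / b

# Synthetic "experimental" volumes from a reference parameter pair plus noise.
rng = np.random.default_rng(0)
v_obs = cavity_volume(pressures, a=0.4, b=0.25) + rng.normal(0.0, 0.05, pressures.size)

# Scan the (a, b) parameter space and score each scenario on the gross
# phenotype (volume) across all pressure steps.
a_grid = np.linspace(0.1, 1.0, 60)
b_grid = np.linspace(0.05, 0.6, 60)
rmse = np.array([[np.sqrt(np.mean((cavity_volume(pressures, a, b) - v_obs) ** 2))
                  for b in b_grid] for a in a_grid])

compatible = np.argwhere(rmse < 1.5 * rmse.min())    # narrow well-fitting region
print(len(compatible), "of", rmse.size, "scenarios are experimentally compatible")
```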