Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Georgios S. Stamatakos is active.

Publication


Featured research published by Georgios S. Stamatakos.


Proceedings of the IEEE | 2002

In silico radiation oncology: combining novel simulation algorithms with current visualization techniques

Georgios S. Stamatakos; Dimitra D. Dionysiou; Evangelia I. Zacharaki; Nikolaos A. Mouravliansky; Konstantina S. Nikita; Nikolaos K. Uzunoglu

The concept of in silico radiation oncology is clarified in this paper. A brief literature review points out the principal domains in which experimental, mathematical, and three-dimensional (3-D) computer simulation models of tumor growth and response to radiation therapy have been developed. Two paradigms of 3-D simulation models developed by our research group are concisely presented. The first one refers to the in vitro development and radiation response of a tumor spheroid, whereas the second one refers to the fractionated radiation response of a clinical tumor in vivo based on the patient's imaging data. In each case, the salient points of the corresponding algorithms and the visualization techniques used are described. Specific applications of the models to experimental and clinical cases are described, and the behavior of the models is visualized in two and three dimensions using virtual reality techniques. Good qualitative agreement with experimental and clinical observations strengthens the applicability of the models to real situations. A protocol for further testing and adaptation is outlined. An advanced, integrated, patient-specific decision support and spatio-temporal treatment planning system is therefore expected to emerge after the completion of the necessary experimental tests and clinical evaluation.
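
The paradigm described above couples tumor repopulation with fractionated radiation response. As an illustration of the basic ingredients only, the following Python sketch (not the authors' 3-D discrete algorithm; all parameter values and names are assumptions) alternates linear-quadratic (LQ) cell kill per fraction with exponential repopulation between fractions.

```python
import numpy as np

# Illustrative toy model only (not the authors' 3-D discrete algorithm):
# exponential repopulation between dose fractions combined with
# linear-quadratic (LQ) cell kill, S = exp(-alpha*d - beta*d^2).
# All parameter values and function names below are assumptions.

ALPHA, BETA = 0.3, 0.03        # Gy^-1, Gy^-2 (illustrative radiosensitivity)
DOUBLING_TIME_H = 50.0         # illustrative volume-doubling time in hours

def grow(cells: float, hours: float) -> float:
    """Exponential repopulation between fractions."""
    return cells * 2.0 ** (hours / DOUBLING_TIME_H)

def irradiate(cells: float, dose_gy: float) -> float:
    """Apply the LQ surviving fraction for one fraction of dose_gy."""
    return cells * np.exp(-ALPHA * dose_gy - BETA * dose_gy ** 2)

def fractionated_course(cells: float, dose_gy: float = 2.0,
                        fractions: int = 30, gap_hours: float = 24.0) -> float:
    """Alternate irradiation and regrowth over a full course."""
    for _ in range(fractions):
        cells = irradiate(cells, dose_gy)
        cells = grow(cells, gap_hours)
    return cells

print(f"Surviving cells after the course: {fractionated_course(1e9):.3e}")
```

Changing dose_gy, fractions and gap_hours allows different fractionation schemes to be compared within this toy setting.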


IEEE Transactions on Microwave Theory and Techniques | 2000

Analysis of the interaction between a layered spherical human head model and a finite-length dipole

Konstantina S. Nikita; Georgios S. Stamatakos; Nikolaos K. Uzunoglu; Aggelos Karafotias

The coupling between a finite-length dipole antenna and a three-layer lossy dielectric sphere, representing a simplified model of the human head, is analyzed theoretically in this paper. The proposed technique is based on the theory of Green's functions in conjunction with the method of auxiliary sources (MAS). The Green's function of the three-layer sphere can be calculated as the response of this object to the excitation generated by an elementary dipole of unit dipole moment. The MAS is then applied to model the dipole antenna by distributing a set of auxiliary current sources on a virtual surface lying inside the physical surface of the antenna. By imposing appropriate boundary conditions at a finite number of points on the real surface of the antenna, the unknown auxiliary-source coefficients can be calculated and, hence, the electric field at any point in space can be easily obtained. Numerical results concerning the specific absorption rate inside the head, the total power absorbed by the head, the input impedance, and the radiation pattern of the antenna are presented for homogeneous and layered head models exposed to the near-field radiation of half-wavelength dipoles at 900 and 1710 MHz. The developed method can serve as a reliable platform for the assessment of purely numerical electromagnetic methods. The method can also provide an efficient tool for accurate testing and comparison of different antenna designs, since the generalizations required to treat more complex antenna configurations are straightforward.
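
To illustrate the basic mechanics of the MAS step described above, the following Python sketch solves a heavily simplified 2-D scalar analogue (not the paper's layered-sphere vector formulation): auxiliary sources on a virtual inner circle are given complex amplitudes so that the boundary condition is satisfied at collocation points on the physical surface. The radii, wavenumber and source count are illustrative assumptions; NumPy and SciPy are assumed to be available.

```python
import numpy as np
from scipy.special import hankel2

# Greatly simplified illustration of the method of auxiliary sources (MAS),
# reduced to a 2-D scalar scattering problem (NOT the paper's layered-sphere
# formulation): auxiliary line sources on a virtual circle inside the object
# receive complex amplitudes chosen so that the total field vanishes at
# collocation points on the physical surface. All values are assumptions.

k = 2 * np.pi                 # wavenumber (wavelength = 1, illustrative)
R_PHYS, R_AUX = 1.0, 0.7      # physical and (inner) auxiliary surface radii
N = 64                        # auxiliary sources = collocation points

theta = 2 * np.pi * np.arange(N) / N
boundary = R_PHYS * np.exp(1j * theta)    # collocation points (complex plane)
aux_pts = R_AUX * np.exp(1j * theta)      # auxiliary source locations

def green_2d(obs, src):
    """2-D free-space Green's function, -j/4 * H0^(2)(k*|r_obs - r_src|)."""
    return -0.25j * hankel2(0, k * np.abs(obs - src))

def incident(pts):
    """Unit-amplitude plane wave travelling along +x."""
    return np.exp(-1j * k * pts.real)

# Enforce total field = 0 on the boundary: sum_j c_j G(b_i, s_j) = -E_inc(b_i)
A = green_2d(boundary[:, None], aux_pts[None, :])
coeffs = np.linalg.solve(A, -incident(boundary))

# With the coefficients known, the scattered field follows at any point.
obs = 3.0 + 0.0j
print(abs(np.sum(coeffs * green_2d(obs, aux_pts))))
```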


Genome Medicine | 2014

Enabling multiscale modeling in systems medicine.

Olaf Wolkenhauer; Charles Auffray; Olivier Brass; Jean Clairambault; Andreas Deutsch; Dirk Drasdo; Francesco Luigi Gervasio; Luigi Preziosi; Philip K. Maini; Anna Marciniak-Czochra; Christina Kossow; Lars Kuepfer; Katja Rateitschak; Ignacio Ramis-Conde; Benjamin Ribba; Andreas Schuppert; Rod Smallwood; Georgios S. Stamatakos; Felix Winter; Helen M. Byrne

CITATION: Wolkenhauer, O. et al. 2014. Enabling multiscale modeling in systems medicine. Genome Medicine, 6:21, doi:10.1186/gm538.


Physics in Medicine and Biology | 2004

A spatio-temporal simulation model of the response of solid tumours to radiotherapy in vivo: parametric validation concerning oxygen enhancement ratio and cell cycle duration

Vassilis P. Antipas; Georgios S. Stamatakos; Nikolaos K. Uzunoglu; Dimitra D. Dionysiou; Roger G. Dale

Advanced bio-simulation methods are expected to substantially improve radiotherapy treatment planning. To this end a novel spatio-temporal patient-specific simulation model of the in vivo response of malignant tumours to radiotherapy schemes has recently been developed by our group. This paper discusses recent improvements to the model: an optimized algorithm leading to conformal shrinkage of the tumour as a response to radiotherapy, the introduction of the oxygen enhancement ratio (OER), a realistic initial cell phase distribution and, finally, an advanced imaging-based algorithm simulating the neovascularization field. A parametric study of the influence of the cell cycle duration Tc, the OER and OERbeta (the OER value applied to the beta LQ parameter) on tumour growth, shrinkage and response to irradiation under two different fractionation schemes has been performed. The model has been applied to two glioblastoma multiforme (GBM) cases, one with wild type (wt) and another one with mutated (mt) p53 gene. Furthermore, the model has been applied to a hypothetical GBM tumour with alpha and beta values corresponding to those of generic radiosensitive tumours. According to the model predictions, a whole tumour with shorter Tc tends to repopulate faster, as is to be expected. Furthermore, a higher OER value for the dormant cells leads to a more radioresistant whole tumour. A small variation of the OERbeta value does not seem to play a major role in the tumour response. Accelerated fractionation proved to be superior to the standard scheme for the whole range of the OER values considered. Finally, the tumour with mt p53 was shown to be more radioresistant compared to the tumour with wt p53. Although all simulation predictions agree at least qualitatively with clinical experience and the literature, a long-term clinical adaptation and quantitative validation procedure is in progress.
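
For readers unfamiliar with the OER parameters discussed above, one common way of folding the oxygen enhancement ratio into the linear-quadratic model is as a dose-modifying factor for hypoxic (dormant) cells; the display below is a hedged illustration and not necessarily the exact formulation used in the paper.

```latex
% Hedged illustration: OER as a dose-modifying factor in the LQ model for
% hypoxic (dormant) cells; not necessarily the paper's exact formulation.
S_{\text{hypoxic}}(D) =
\exp\!\left[-\,\alpha\,\frac{D}{\mathrm{OER}}
            \;-\;\beta\left(\frac{D}{\mathrm{OER}_{\beta}}\right)^{2}\right]
```

Because a larger OER divides the effective dose, assigning a higher OER to dormant cells raises their surviving fraction per fraction, consistent with the abstract's observation that the whole tumour then behaves more radioresistantly.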


Interface Focus | 2011

Clinically driven design of multi-scale cancer models: the ContraCancrum project paradigm

Kostas Marias; Dionysia Dionysiou; Sakkalis; Norbert Graf; Rainer M. Bohle; Peter V. Coveney; Shunzhou Wan; Amos Folarin; P Büchler; M Reyes; Gordon J. Clapworthy; Enjie Liu; Jörg Sabczynski; T Bily; A Roniotis; M Tsiknakis; Eleni A. Kolokotroni; S Giatili; Christian Veith; E Messe; H Stenzhorn; Yoo-Jin Kim; Stefan J. Zasada; Ali Nasrat Haidar; Caroline May; S Bauer; T Wang; Yanjun Zhao; M Karasek; R Grewer

The challenge of modelling cancer presents a major opportunity to improve our ability to reduce mortality from malignant neoplasms, improve treatments and meet the demands associated with the individualization of care needs. This is the central motivation behind the ContraCancrum project. By developing integrated multi-scale cancer models, ContraCancrum is expected to contribute to the advancement of in silico oncology through the optimization of cancer treatment in the patient-individualized context by simulating the response to various therapeutic regimens. The aim of the present paper is to describe a novel paradigm for the clinically driven design of multi-scale cancer models by bringing together basic science and information technology modules. In addition, the integration of the multi-scale tumour modelling components has led to novel concepts of personalized clinical decision support in the context of predictive oncology, as is also discussed in the paper. Since clinical adaptation is an indispensable prerequisite, a long-term clinical adaptation procedure of the models has been initiated for two tumour types, namely non-small cell lung cancer and glioblastoma multiforme; its current status is briefly summarized.
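
The integration of basic-science and information-technology modules described above can be pictured as a chain of scale-specific components sharing a common interface. The Python sketch below is a purely architectural illustration (not the ContraCancrum implementation; all class names, fields and numerical rules are assumptions) of how a molecular-scale module could parameterize a tissue-scale one within a single simulation loop.

```python
from dataclasses import dataclass
from typing import List, Protocol

# Hedged architectural sketch (not the actual ContraCancrum software design):
# each scale-specific model exposes the same minimal interface, and an
# integrator chains them so that the state computed at one scale
# parameterizes the next. All names and numerical rules are assumptions.

@dataclass
class PatientState:
    tumour_cells: float          # tissue scale
    proliferation_rate: float    # cellular scale (per day)
    pathway_activity: float      # molecular scale (arbitrary units)

class ScaleModel(Protocol):
    def step(self, state: PatientState, dt_hours: float) -> PatientState: ...

class MolecularModel:
    def step(self, state: PatientState, dt_hours: float) -> PatientState:
        # Toy rule: pathway activity modulates the cellular proliferation rate.
        state.proliferation_rate = 0.01 * (1.0 + state.pathway_activity)
        return state

class TissueModel:
    def step(self, state: PatientState, dt_hours: float) -> PatientState:
        state.tumour_cells *= (1.0 + state.proliferation_rate) ** (dt_hours / 24.0)
        return state

def integrate(models: List[ScaleModel], state: PatientState,
              dt_hours: float, steps: int) -> PatientState:
    """Advance all scale-specific models in lock-step."""
    for _ in range(steps):
        for model in models:
            state = model.step(state, dt_hours)
    return state

final = integrate([MolecularModel(), TissueModel()],
                  PatientState(1e9, 0.01, 0.5), dt_hours=24.0, steps=30)
print(f"Tumour cells after 30 days: {final.tumour_cells:.3e}")
```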


Cancer Informatics | 2006

Applying a 4D multiscale in vivo tumor growth model to the exploration of radiotherapy scheduling: The effects of weekend treatment gaps and p53 gene status on the response of fast growing solid tumors

Dimitra D. Dionysiou; Georgios S. Stamatakos



IEEE Transactions on Biomedical Engineering | 2012

Multiscale Modeling for Image Analysis of Brain Tumor Studies

Stefan Bauer; Christian May; Dimitra D. Dionysiou; Georgios S. Stamatakos; Philippe Büchler; Mauricio Reyes



Computer Methods and Programs in Biomedicine | 2004

Simulating growth dynamics and radiation response of avascular tumour spheroids - model validation in the case of an EMT6/Ro multicellular spheroid

Evangelia I. Zacharaki; Georgios S. Stamatakos; Konstantina S. Nikita; Nikolaos K. Uzunoglu

800 million (DiMasi, 2002). Figure 2 The cost of developing a successful compound is increasing, and the clinical trials pieces are the rapidly increasing components (Windhover’s In Vivo 2003). Some believe it is this mounting cost that is responsible for the decline in the number of new agents being submitted to the FDA. This represents a great challenge to our health care system. No amount of research is going to be effective in curing cancer if the final step of performing the clinical trial is too difficult or expensive to justify the economic returns expected from selling the product. Developing methods to reduce the effort and cost of a clinical trial while maintaining or increasing the validity would be valuable.Background: Epidermal growth factor receptor (EGFR) overexpression is observed in significant proportions of non-small cell lung carcinomas (NSCLC). Furthermore, overactivation of vascular endothelial growth factor (VEGF) leads to increased angiogenesis implicated as an important factor in vascularization of those tumors. Patients and Methods: Using tissue microarray technology, forty-paraffin (n = 40) embedded, histologically confirmed primary NSCLCs were cored and re-embedded into a recipient block. Immunohistochemistry was performed for the determination of EGFR and VEGF protein levels which were evaluated by the performance of computerized image analysis. EGFR gene amplification was studied by chromogenic in situ hybridization based on the use of EGFR gene and chromosome 7 centromeric probes. Results: EGFR overexpression was observed in 23/40 (57.5%) cases and was correlated to the stage of the tumors (p = 0.001), whereas VEGF was overexpressed in 35/40 (87.5%) cases and was correlated to the stage of the tumors (p = 0.005) and to the smoking history of the patients (p = 0.016). Statistical significance was assessed comparing the protein levels of EGFR and VEGF (p = 0.043, k = 0.846). EGFR gene amplification was identified in 2/40 (5%) cases demonstrating no association to its overall protein levels (p = 0.241), whereas chromosome 7 aneuploidy was detected in 7/40 (17.5%) cases correlating to smoking history of the patients (p = 0.013). Conclusions: A significant subset of NSCLC is characterized by EGFR and VEGF simultaneous overexpression and maybe this is the eligible target group for the application of combined anti-EGFR/VEGF targeted therapies at the basis of genetic deregulation (especially gene amplification for EGFR).BRB-ArrayTools is an integrated software system for the comprehensive analysis of DNA microarray experiments. It was developed by professional biostatisticians experienced in the design and analysis of DNA microarray studies and incorporates methods developed by leading statistical laboratories. The software is designed for use by biomedical scientists who wish to have access to state-of-the-art statistical methods for the analysis of gene expression data and to receive training in the statistical analysis of high dimensional data. The software provides the most extensive set of tools available for predictive classifier development and complete cross-validation. It offers extensive links to genomic websites for gene annotation and analysis tools for pathway analysis. 
An archive of over 100 datasets of published microarray data with associated clinical data is provided and BRB-ArrayTools automatically imports data from the Gene Expression Omnibus public archive at the National Center for Biotechnology Information.An algorithm to reduce multi-sample array CGH data from thousands of clones to tens or hundreds of clone regions is introduced. This reduction of the data is performed such that little information is lost, which is possible due to the high dependencies between neighboring clones. The algorithm is explained using a small example. The potential beneficial effects of the algorithm for downstream analysis are illustrated by re-analysis of previously published colorectal cancer data. Using multiple testing corrections suitable for these data, we provide statistical evidence for genomic differences on several clone regions between MSI+ and CIN+ tumors. The algorithm, named CGHregions, is available as an easy-to-use script in R.Microarrays allow researchers to monitor the gene expression patterns for tens of thousands of genes across a wide range of cellular responses, phenotype and conditions. Selecting a small subset of discriminate genes from thousands of genes is important for accurate classification of diseases and phenotypes. Many methods have been proposed to find subsets of genes with maximum relevance and minimum redundancy, which can distinguish accurately between samples with different labels. To find the minimum subset of relevant genes is often referred as biomarker discovery. Two main approaches, filter and wrapper techniques, have been applied to biomarker discovery. In this paper, we conducted a comparative study of different biomarker discovery methods, including six filter methods and three wrapper methods. We then proposed a hybrid approach, FR-Wrapper, for biomarker discovery. The aim of this approach is to find an optimum balance between the precision of the biomarker discovery and the computation cost, by taking advantages of both filter method’s efficiency and wrapper method’s high accuracy. Our hybrid approach applies Fisher’s ratio, a simple method easy to understand and implement, to filter out most of the irrelevant genes, then a wrapper method is employed to reduce the redundancy. The performance of FR-Wrapper approach is evaluated over four widely used microarray datasets. Analysis of experimental results reveals that the hybrid approach can achieve the goal of maximum relevance with minimum redundancy.Mass spectrometry approaches to biomarker discovery in human fluids have received a great deal of attention in recent years. While mass spectrometry instrumentation and analysis approaches have been widely investigated, little attention has been paid to how sample handling can impact the plasma proteome and therefore influence biomarker discovery. We have investigated the effects of two main aspects of sample handling on MALDI-TOF data: repeated freeze-thaw cycles and the effects of long-term storage of plasma at −70°C. Repeated freeze-thaw cycles resulted in a trend towards increasing changes in peak intensity, particularly after two thaws. However, a 4-year difference in long-term storage appears to have minimal effect on protein in plasma as no differences in peak number, mass distribution, or coefficient of variation were found between samples. 
Therefore, limiting freeze/thaw cycles seems more important to maintaining the integrity of the plasma proteome than degradation caused by long-term storage at −70°C.Introduction MR examinations of the brain are the primary method for clinical as well as research assessment of the effects of therapy on brain tumors. In clinical practice, visual comparison is the primary method of assessing changes that indicate tumor response or progression. This is a labor-intensive process involving visual search for changes between examinations on multiple images from multiple image types. Furthermore, some of the changes that may be perceived could be do to acquisition-related changes, rather than changes in the tumor status. One of these changes is the change in the patient position between the two time points. While every effort is made to acquire images in the same plane as prior exams, this is rarely achieved. In this study, we evaluated computerized image registration (A.K.A. image alignment) on accuracy and confi dence. Methods Study selection After IRB approval, we collected a series of 100 sequential MRI examination pairs in patients with primary brain gliomas in which there had been no intervening surgery. Furthermore, we selected those in which the clinical radiologist interpretation indicated either subtle or no change in the tumor. The interval between examinations ranged from 35 days to 375 days, with the median being 75 days. Tumor types included astrocytoma, oligodendroglioma, and mixed oligo-astrocytomas, and tumor grade ranged from 2 to 4 on the World Health Organization scale. Examinations consisted of 3 mm thick contiguous T1, T2, FLAIR, and T1-post contrast images obtained with a 1.5T GE Signa (GE Medical Systems, Waukesha, WI) scanner. The T1-weighted images were spin-echo sequences with TR ranging from 400 ms to 620 ms and TE min full. T2-weighted images were fast spin echo images TR 3500–4000 ms and TEeff of 120 ms. FLAIR images were also fast spin echo with TR 11000 ms, TI 2250 ms and TE 250 ms.We present the implementation of an application using caGrid, which is the service-oriented Grid software infrastructure of the NCI cancer Biomedical Informatics Grid (caBIGTM), to support design and analysis of custom microarray experiments in the study of epigenetic alterations in cancer. The design and execution of these experiments requires synthesis of information from multiple data types and datasets. In our implementation, each data source is implemented as a caGrid Data Service, and analytical resources are wrapped as caGrid Analytical Services. This service-based implementation has several advantages. A backend resource can be modified or upgraded, without needing to change other components in the application. A remote resource can be added easily, since resources are not required to be collected in a centralized infrastructure.Array-Comparative Genomic Hybridization (aCGH) is a powerful high throughput technology for detecting chromosomal copy number aberrations (CNAs) in cancer, aiming at identifying related critical genes from the affected genomic regions. However, advancing from a dataset with thousands of tabular lines to a few candidate genes can be an onerous and time-consuming process. To expedite the aCGH data analysis process, we have developed a user-friendly aCGH data viewer (aCGHViewer) as a conduit between the aCGH data tables and a genome browser. 
The data from a given aCGH analysis are displayed in a genomic view comprised of individual chromosome panels which can be rapidly scanned for interesting features. A chromosome panel containing a feature of interest can be selected to launch a detail window for that single chromosome. Selecting a data point of interest in the detail window launches a query to the UCSC or NCBI genome browser to allow the user to explore the gene content in the chromosomal region. Additionally, aCGHViewer can display aCGH and expression array data concurrently to visually correlate the two. aCGHViewer is a stand alone Java visualization application that should be used in conjunction with separate statistical programs. It operates on all major computer platforms and is freely available at http://falcon.roswellpark.org/aCGHview/.We developed a quality assurance (QA) tool, namely microarray outlier filter (MOF), and have applied it to our microarray datasets for the identification of problematic arrays. Our approach is based on the comparison of the arrays using the correlation coefficient and the number of outlier spots generated on each array to reveal outlier arrays. For a human universal reference (HUR) dataset, which is used as a technical control in our standard hybridization procedure, 3 outlier arrays were identified out of 35 experiments. For a human blood dataset, 12 outlier arrays were identified from 185 experiments. In general, arrays from human blood samples displayed greater variation in their gene expression profiles than arrays from HUR samples. As a result, MOF identified two distinct patterns in the occurrence of outlier arrays. These results demonstrate that this methodology is a valuable QA practice to identify questionable microarray data prior to downstream analysis.Background Cell cycle is an important clue to unravel the mechanism of cancer cells. Recently, expression profiles of cDNA microarray data of Cancer cell cycle are available for the information of dynamic interactions among Cancer cell cycle related genes. Therefore, it is more appealing to construct a dynamic model for gene regulatory network of Cancer cell cycle to gain more insight into the infrastructure of gene regulatory mechanism of cancer cell via microarray data. Results Based on the gene regulatory dynamic model and microarray data, we construct the whole dynamic gene regulatory network of Cancer cell cycle. In this study, we trace back upstream regulatory genes of a target gene to infer the regulatory pathways of the gene network by maximum likelihood estimation method. Finally, based on the dynamic regulatory network, we analyze the regulatory abilities and sensitivities of regulatory genes to clarify their roles in the mechanism of Cancer cell cycle. Conclusions Our study presents a systematically iterative approach to discern and characterize the transcriptional regulatory network in Hela cell cycle from the raw expression profiles. The transcription regulatory network in Hela cell cycle can also be confirmed by some experimental reviews. Based on our study and some literature reviews, we can predict and clarify the E2F target genes in G1/S phase, which are crucial for regulating cell cycle progression and tumorigenesis. From the results of the network construction and literature confirmation, we infer that MCM4, MCM5, CDC6, CDC25A, UNG and E2F2 are E2F target genes in Hela cell cycle.The study of pathway disruption is key to understanding cancer biology. 
Advances in high-throughput technologies have led to the rapid accumulation of genomic data. The explosion in available data has generated opportunities for investigation of concerted changes that disrupt biological functions; this in turn has created a need for computational tools for pathway analysis. In this review, we discuss approaches to the analysis of genomic data and describe the publicly available resources for studying biological pathways.

Summary: Computed tomography (CT) imaging plays an important role in cancer detection and quantitative assessment in clinical trials. High-resolution imaging studies on large cohorts of patients generate vast data sets, which are infeasible to analyze through manual interpretation. In this article we describe a comprehensive architecture for computer-aided detection (CAD) and surveillance of lung nodules in CT images. Central to this architecture are the analytic components: an automated nodule detection system, nodule tracking capabilities, and volume measurement, which are integrated within a data management system that includes mechanisms for receiving and archiving images, a database for storing quantitative nodule measurements, and visualization and reporting tools. We describe two studies to evaluate CAD technology within this architecture, and the potential application in large clinical trials. The first study involves performance assessment of an automated nodule detection system and its ability to increase radiologist sensitivity when used to provide a second opinion. The second study investigates nodule volume measurements on CT made using a semi-automated technique and shows that volumetric analysis yields significantly different tumor response classifications than a 2D diameter approach. These studies demonstrate the potential of automated CAD tools to assist in quantitative image analysis for clinical trials.

We present a computational approach for studying the effect of potential drug combinations on the protein networks associated with tumor cells. The majority of therapeutics are designed to target single proteins, yet most diseased states are characterized by a combination of many interacting genes and proteins. Using the topology of protein-protein interaction networks, our methods can explicitly model the possible synergistic effect of targeting multiple proteins using drug combinations in different cancer types. The methodology can be conceptually split into two distinct stages. First, we integrate protein interaction and gene expression data to develop network representations of different tissue types and cancer types. Second, we model network perturbations to search for target combinations which cause significant damage to a relevant cancer network but only minimal damage to an equivalent normal network. We have developed sets of predicted target and drug combinations for multiple cancer types, which are validated using known cancer and drug associations, and are currently in experimental testing for prostate cancer. Our methods also revealed significant bias in curated interaction data sources towards targets with associations compared with high-throughput data sources from model organisms. The approach developed can potentially be applied to many other diseased cell types.

At the University of Pittsburgh, I teach a graduate-level course, ‘The Practical Analysis of High-Throughput Genomic and Proteomic Data’. Fifty percent of the course grade is based on a paper project involving the re-analysis of published data sets.
The aim of the project is to encourage the comparative evaluation of different approaches to the various analytic tasks of omics-based biomarker studies. The students are empowered by this course to understand, and to see for themselves, that different approaches to normalization, feature selection, and disease prediction modeling (a) exist and (b) differ in their apparent relative performance in helping to generate lists of therapeutic targets or disease prediction models. We also learn about various data standards, mostly from the perspective of data formats, which are critical to re-analysis-based algorithm evaluation studies.

Decades of focused cancer research have demonstrated the oncogenic process to be frustratingly complex. Despite many triumphs in scientific and clinical understanding, we still do not understand the formation of most solid tumors at a basic level. Each newly discovered molecular signature or profile brings to attention several exceptions in the form of mutations or histological subtypes that significantly change the applicability of the new knowledge to clinical practice. This has hampered improvements in detection, diagnosis, and treatment strategies. Most solid tumors arise from a spectrum of genetic, epigenetic, and chromosomal alterations. The volume of such observations from both patient samples and tumor models in the cancer literature is overwhelming. Despite this, the variations in the molecular alterations that can give rise to cancer can be broadly grouped into a handful of traits that cancer cells must acquire for malignant transformation to occur. The original description by Hanahan and Weinberg of the “hallmarks” of cancer remains a seminal description that looks beyond the detailed molecular discoveries governing malignant transformation and integrates them into a conceptual framework underlying all cancers (Hanahan and Weinberg, 2000). This framework simply but insightfully states that molecular alterations can be classified by dysfunction in as many as six different regulatory systems that must be perturbed for a normal cell to become cancerous (Khalil and Hill, 2005). These comprise many diverse and seemingly non-overlapping biological processes: (1) self-sufficiency in growth signals, (2) insensitivity to anti-growth signals, (3) evasion of apoptosis, (4) limitless replicative potential, (5) sustained angiogenesis, and (6) tissue invasion and metastasis. They define genetic instability as an “enabling characteristic” that facilitates the acquisition of other mutations due to defects in the repair of DNA. Although some cancer subtypes are defined by a single genetic alteration leading to a primary defect in one of the above-listed processes, most solid tumors responsible for the largest burden of human illness are heterogeneous lesions characterized by many, if not all, of these defects observable simultaneously. This includes lung, breast, prostate, colon, and central nervous system tumors, among others. In our attempts to understand tumorigenesis by reductionism, much work has gone into the study of the individual biologic processes referred to as the “hallmarks” of cancer. Increased understanding of many of these biologic modules has unfortunately not generated parallel understanding of the root cause of cancers and how best to treat them.
The concept of cancer as a system failure and the potential to use systems biology approaches to understand the disease is generating significant discussion in the literature as investigators grapple with how to do this (Kitano, 2002; Alberghina, Chiaradonna et al. 2004; Spencer, Berryman et al. 2004; Khalil and Hill, 2005; Hornberg, Bruggeman et al. 2006). The mere recognition of cancer as a systems biology disease is a key first step. This hypothesis views the individual defects observable in solid tumors cumulatively as system failures, either at the cellular or multicellular level. A systematic study and understanding of oncogenic network rewiring (Pawson and Warner, 2007) opens the potential to use systems biology approaches to generate testable models of different tumors, an exciting and as yet unexplored realm of cancer biology. System-level understanding, the approach advocated in systems biology, requires a change in our notion of “what to look for” in biology (Kitano, 2002). While an understanding of the roles of individual genes and proteins continues to be important, this focus is superseded by the goal of understanding a system's structure, function, and dynamics. This evolution in thought in the life sciences has produced a profound transformation at the threshold of what is widely regarded as the century of biology (Kafatos and Eisner, 2004). From a collection of narrow, well-defined, almost parochial disciplines, the life sciences are rapidly morphing into domains that span the realm of molecular structure and function through to the application of this knowledge to clinical medicine. The results of teams of individual specialists dedicated to specific biological goals are providing insight into system structures and function not conceivable a decade ago. Identifying all the genes and proteins in an organism is analogous to creating a list of all parts of a complex device or machine, such as an airplane. While such a list provides a catalog of individual components, it alone is not sufficient to understand the complexity underlying the engineered object (Kitano, 2002). One cannot readily build an airplane or understand its functional intricacies from a parts list alone. One needs to understand how these parts are assembled to form the structure of an airplane. The biological analogue is an exhaustive diagram of gene-regulatory and biochemical interactions; even such diagrams would provide only limited knowledge of how changes in one part of the system may affect other parts. To understand how a particular system functions, one must first examine how the individual components interact during operation and under failure. From an engineering perspective, answers to key questions become critical, such as: What is the voltage on each signal line? How are the signals encoded? How is the voltage stabilized against noise and external fluctuations? How do the circuits react when a malfunction occurs in the system? What are the design principles and possible circuit patterns, and how can they be modified to improve system performance? (Kitano, 2002) Why has systems biology received so much recent attention? In short, it is because the key first step of defining system structures has quickly advanced from fantasy to reality in the post-genomic era. The achievement of full genome sequencing projects in many organisms, including Homo sapiens, has defined the “parts list” for growth, development, and normal physiologic function.
The technological development associated with these achievements has spawned the nascent fields of genomics, proteomics, and multiple “-omic” disciplines defined by their systematic, data-driven approaches to biological experimentation. These approaches are increasingly being applied to the question of understanding cancer. The volume of data generated by multiple high-throughput platforms has outpaced the computational and mathematical models needed to integrate this information for advances in true biologic understanding (Figure 1). This will continue to be a bottleneck for the near future.

Figure 1. Estimated cancer research growth and utilization of diverse high-throughput platforms, as measured by the number of Medline references. The apparent decline in 2006 may be explained by references not yet finalized in Medline (as of February 9, 2007).

We review a number of relevant observations in the context of the “hallmarks” of cancer and discuss the issue of data integration in performing systems-level experiments.

Altered expression of COX-2 and overproduction of prostaglandins, particularly prostaglandin E2, are common in malignant tumors. Consequently, non-steroidal anti-inflammatory drugs (NSAIDs) attenuate tumor net growth and tumor-related cachexia, improve appetite, and prolong survival. We have also reported that COX inhibition (indomethacin) interfered with the early onset of tumor endothelial cell growth, tumor cell proliferation, and apoptosis. It is, however, still unclear whether such effects are restricted to metabolic alterations closely related to eicosanoid pathways and corresponding regulators, or whether a whole variety of gene products are involved both upstream and downstream of eicosanoid effects. Therefore, the present experiments were performed using an in vivo intravital chamber technique, in which micro-tumor growth and related angiogenesis were analyzed by microarray to evaluate changes in global RNA expression caused by indomethacin treatment. Indomethacin up-regulated 351 and down-regulated 1852 genes significantly (p < 0.01); 1066 of these genes had unknown biological function. Genes with altered expression occurred on all chromosomes. Our results demonstrate that indomethacin altered the expression of a large number of genes distributed among a variety of processes in carcinogenic progression, involving angiogenesis, apoptosis, cell cycling, cell adhesion, and inflammation, as well as fatty acid metabolism and proteolysis. It remains a challenge to distinguish primary key alterations from secondary adaptive changes in the transcription of genes altered by cyclooxygenase inhibition.

The present paper aims at demonstrating clinically oriented applications of the multiscale four-dimensional in vivo tumor growth simulation model previously developed by our research group. To this end, the effect of weekend radiotherapy treatment gaps and p53 gene status on two virtual glioblastoma tumors differing only in p53 gene status is investigated in silico. Tumor response predictions concerning two rather extreme dose fractionation schedules (daily dose of 4.5 Gy administered in 3 equal fractions), namely HART (Hyperfractionated Accelerated Radiotherapy, weekend-less) 54 Gy and CHART (Continuous HART) 54 Gy, are presented and compared. The model predictions suggest that, for the same p53 status, HART 54 Gy and CHART 54 Gy have almost the same long-term effects on locoregional tumor control.
However, no data have been located in the literature concerning a comparison of HART and CHART radiotherapy schedules for glioblastoma. As non-small cell lung carcinoma (NSCLC) may also be a fast-growing and radiosensitive tumor, a comparison of the model predictions with the outcome of clinical studies concerning the response of NSCLC to HART 54 Gy and CHART 54 Gy is made. The model predictions are in accordance with the corresponding clinical observations, thus strengthening the potential of the model.

Microarray gene expression profiling has been used to distinguish histological subtypes of renal cell carcinoma (RCC), and consequently to identify specific tumor markers. The analytical procedures currently in use find sets of genes whose average differential expression across the two categories differs significantly. In general, each of the markers thus identified does not distinguish tumor from normal with 100% accuracy, although the group as a whole might be able to do so. For the purpose of developing a widely used, economically viable diagnostic signature, however, large groups of genes are not likely to be useful. Here we use two different methods, one a support vector machine variant and the other an exhaustive search, to reanalyze data previously generated in our lab (Lenburg et al. 2003). We identify 158 genes, each having an expression level that is higher (lower) in every tumor sample than in any normal sample, and each having a minimum differential expression across the two categories at a significance of 0.01. The set is highly enriched in cancer-related genes (p = 1.6 × 10⁻¹²), containing 43 genes previously associated with either RCC or other types of cancer. Many of the biomarkers appear to be associated with the central alterations known to be required for cancer transformation. These include the oncogenes JAZF1, AXL, and ABL2; the tumor suppressors RASD1, PTPRO, TFAP2A, and CDKN1C; and genes involved in proteolysis or cell adhesion, such as WASF2 and PAPPA.

The analysis of expression and CGH arrays plays a central role in the study of complex diseases, especially cancer, including finding markers for early diagnosis and prognosis, choosing an optimal therapy, or increasing our understanding of cancer development and metastasis. Asterias (http://www.asterias.info) is an integrated collection of freely accessible web tools for the analysis of gene expression and aCGH data. Most of the tools use parallel computing (via MPI) and run on a server with 60 CPUs for computation; compared to a desktop or server-based but non-parallelized application, parallelization provides speed-ups of up to a factor of 50. Most of our applications allow the user to obtain additional information for user-selected genes (chromosomal location, PubMed IDs, Gene Ontology terms, etc.) by using clickable links in tables and/or figures. Our tools include: normalization of expression and aCGH data (DNMAD); converting between different types of gene/clone and protein identifiers (IDconverter/IDClight); filtering and imputation (preP); finding differentially expressed genes related to patient class and survival data (Pomelo II); searching for models of class prediction (Tnasas); using random forests to search for minimal models for class prediction or for large subsets of genes with predictive capacity (GeneSrF); searching for molecular signatures and predictive genes with survival data (SignS); and detecting regions of genomic DNA gain or loss (ADaCGH).
The capability to send results between different applications, access to additional functional information, and parallelized computation make our suite unique and exploit features only available to web-based applications.

From tumor to tumor, there is great variation in the proportion of cancer cells growing and making daughter cells that ultimately metastasize. The differential growth within a single tumor, however, has not been studied extensively, and this may be helpful in predicting the aggressiveness of a particular cancer type. The problem of estimating tumor growth rates from several populations is studied. The baseline growth rate estimator is based on a family of interacting particle system models which generalize the linear birth process as models of tumor growth. These interacting models incorporate the spatial structure of the tumor in such a way that growth slows down in a crowded system. An approximation-assisted estimation strategy is proposed for cases in which initial values of the rates are known from a previous study. Some alternative estimators are suggested, and the relative dominance of the proposed estimator over the benchmark estimator is investigated. An overriding theme of this article is that the suggested estimation method extends its traditional counterpart to non-normal populations and to more realistic cases.

With state-of-the-art microarray technologies now available for whole-genome CpG island (CGI) methylation profiling, there is a need to develop statistical models that are specifically geared toward the analysis of such data. In this article, we propose a Gamma-Normal-Gamma (GNG) mixture model for describing three groups of CGI loci, hypomethylated, undifferentiated, and hypermethylated, from a single methylation microarray. This model was applied to study the methylation signatures of three breast cancer cell lines: MCF7, T47D, and MDAMB361. Biologically interesting and interpretable results are obtained, which highlight the heterogeneous nature of the three cell lines. This underlies the premise that each microarray slide should be analyzed individually, as opposed to pooling the slides together for a single analysis. Our comparisons with the fitted densities from the Normal-Uniform (NU) mixture model proposed in the literature for gene expression analysis show an improved goodness of fit of the GNG model over the NU model. Although the GNG model was proposed in the context of single-slide methylation analysis, it can be readily adapted to analyze multi-slide methylation data as well as other types of microarray data.

As the founding Editor-in-Chief of Cancer Informatics, my first duty is to thank the Editorial Board for their support and participation and their input on the direction, scope, and focus of the journal, and also, and especially, Tim Hill. Without their considerable generosity, personal effort, and passion in launching the journal, Cancer Informatics would not exist. I also gratefully acknowledge Dr. William Grizzle's critical role as co-editor of this volume and his oversight of the independent review process for two submitted manuscripts (Normelle et al. & Lyons-Weiler et al.).

Molecular stratification of disease based on expression levels of sets of genes can help guide therapeutic decisions if such classifications can be shown to be stable against variations in sample source and data perturbation. Classifications inferred from one set of samples in one lab should be able to consistently stratify a different set of samples in another lab.
We present a method for assessing such stability and apply it to the breast cancer (BCA) datasets of Sorlie et al. 2003 and Ma et al. 2003. We find that, within the now commonly accepted BCA categories identified by Sorlie et al., Luminal A and Basal are robust, but Luminal B and ERBB2+ are not. In particular, 36% of the samples identified as Luminal B and 55% identified as ERBB2+ cannot be assigned an accurate category because the classification is sensitive to data perturbation. We identify a “core cluster” of samples for each category, and from these we determine “patterns” of gene expression that distinguish the core clusters from each other. We find that the best markers for Luminal A and Basal are (ESR1, LIV1, GATA-3) and (CCNE1, LAD1, KRT5), respectively. Pathways enriched in the patterns regulate apoptosis, tissue remodeling, and the immune response. We use a different dataset (Ma et al. 2003) to test the accuracy with which samples can be allocated to the four disease subtypes. We find, as expected, that the classification of samples identified as Luminal A and Basal is robust, but classification into the other two subtypes is not.

The antitumor drug paclitaxel stabilizes microtubules and reduces their dynamicity, promoting mitotic arrest and eventually apoptosis. Upon assembly of the α/β-tubulin heterodimer, GTP becomes bound to both the α- and β-tubulin monomers. During microtubule assembly, the GTP bound to β-tubulin is hydrolyzed to GDP, eventually reaching a steady-state equilibrium between free tubulin dimers and those polymerized into microtubules. Tubulin-binding drugs such as paclitaxel interact with β-tubulin, resulting in the disruption of this equilibrium. In spite of several crystal structures of tubulin, there is little biochemical insight into the mechanism by which anti-tubulin drugs target microtubules and alter their normal behavior. The mechanism of drug action is further complicated because altered β-tubulin isotype expression and/or mutations in tubulin genes may lead to drug resistance, as has been described in the literature. Because of the relationship between β-tubulin isotype expression and mutations within β-tubulin, both of which can lead to resistance, we examined the properties of altered residues within the taxane, colchicine, and Vinca binding sites. The amount of data now available allows us to investigate common patterns that lead to microtubule disruption and may provide a guide to the rational design of novel compounds that can inhibit microtubule dynamics for specific tubulin isotypes or, indeed, resistant cell lines. Because of the vast amount of data published to date, we provide only a broad overview of the mutational results and how these correlate with differences between tubulin isotypes. We also note that clinical studies describe a number of predictive factors for the response to anti-tubulin drugs, and we attempt to develop an understanding of the features within tubulin that may help explain how they may affect both microtubule assembly and stability.

Summary: In recent years, there has been increased interest in using protein mass spectrometry to identify molecular markers that discriminate diseased from healthy individuals. Existing methods are tailored towards classifying observations into nominal categories. Sometimes, however, the outcome of interest may be measured on an ordered scale. Ignoring this natural ordering results in some loss of information.
In this paper, we propose a Bayesian model for the analysis of mass spectrometry data with an ordered outcome. The method provides a unified approach for identifying relevant markers and predicting class membership. This is accomplished by building a stochastic search variable selection method within an ordinal outcome model. We apply the methodology to mass spectrometry data on ovarian cancer cases and healthy individuals. We also utilize wavelet-based techniques to remove noise from the mass spectra prior to analysis. We identify protein markers associated with being healthy, having low-grade ovarian cancer, or being a high-grade case. For comparison, we repeated the analysis using conventional classification procedures and found improved predictive accuracy with our method.

Background: The Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC, http://www.pcabc.upmc.edu) is one of the first major project-based initiatives stemming from the Pennsylvania Cancer Alliance and was funded for four years by the Department of Health of the Commonwealth of Pennsylvania. The objective was to initiate a prototype biorepository and bioinformatics infrastructure with a robust data warehouse by developing (1) a statewide data model for bioinformatics and a repository of serum and tissue samples; (2) a data model for biomarker data storage; and (3) a public-access website for disseminating research results and bioinformatics tools. The members of the Consortium cooperate closely, exploring the opportunity for sharing clinical, genomic, and other bioinformatics data on patient samples in oncology, for the purpose of developing collaborative research programs across cancer research institutions in Pennsylvania. The Consortium's intention was to establish a virtual repository of many clinical specimens residing in various centers across the state, in order to make them available for research. One of our primary goals was to facilitate the identification of cancer-specific biomarkers and encourage collaborative research efforts among the participating centers. Methods: The PCABC has developed unique partnerships so that every region of the state can effectively contribute and participate. It includes over 80 individuals from 14 organizations, and plans to expand to partners outside the state. This has created a network of researchers, clinicians, bioinformaticians, cancer registrars, program directors, and executives from academic and community health systems, as well as external corporate partners, all working together to accomplish a common mission. The various sub-committees have developed a common IRB protocol template, common data elements for standardizing data collections for three organ sites, intellectual property/tech transfer agreements, and material transfer agreements that have been approved by each of the member institutions. This was the foundational work that has led to the development of a centralized data warehouse that has met each of the institutions' IRB/HIPAA standards. Results: Currently, this “virtual biorepository” has over 58,000 annotated samples from 11,467 cancer patients available for research purposes. The clinical annotation of tissue samples is done either manually over the internet or in semi-automated batch modes through mapping of local data elements to PCABC common data elements.
The database currently holds information on 7188 cases (associated with 9278 specimens and 46,666 annotated blocks and blood samples) of prostate cancer, 2736 cases (associated with 3796 specimens and 9336 annotated blocks and blood samples) of breast cancer, and 1543 cases (including 1334 specimens and 2671 annotated blocks and blood samples) of melanoma. These numbers continue to grow, and plans to integrate new tumor sites are in progress. Furthermore, the group has also developed a central web-based tool that allows investigators to share their translational (genomics/proteomics) experiment data on research evaluating potential biomarkers via a central location on the Consortium's web site. Conclusions: The technological achievements and the statewide informatics infrastructure that have been established by the Consortium will enable robust and efficient studies of biomarkers and their relevance to the clinical course of cancer. Studies resulting from the creation of the Consortium may allow for better classification of cancer types, more accurate assessment of disease prognosis, a better ability to identify the most appropriate individuals for clinical trial participation, and better surrogate markers of disease progression and/or response to therapy.

Background Most published literature using SELDI-TOF has used traditional techniques in spectral analysis, such as Fourier transforms and wavelets, for denoising. Most of these publications also compare spectra using their most prominent feature, i.e., peaks or local maxima. Methods The maximum intensity value within each window of differentiable m/z values was used to represent the intensity level in that window. We also calculated the ‘Area under the Curve’ (AUC) spanned by each window. Results Keeping everything else constant, such as pre-processing of the data and the classifier used, the AUC performed much better as a metric of comparison than the peaks in two out of three data sets. In the third data set, both metrics performed equivalently. Conclusions This study shows that the feature used to compare spectra can have an impact on the results of a study attempting to identify biomarkers using SELDI-TOF data.

We use Backward Chaining Rule Induction (BCRI), a novel data mining method for hypothesizing causative mechanisms, to mine lung cancer gene expression array data for mechanisms that could impact survival. Initially, a supervised learning system is used to generate a prediction model in the form of “IF-THEN” style rules. Next, each antecedent (i.e., an IF condition) of a previously discovered rule becomes the outcome class for a subsequent application of supervised rule induction. This step is repeated until a termination condition is satisfied. “Chains” of rules are created by working backward from an initial condition (e.g., survival status). Through this iterative process of “backward chaining,” BCRI searches for rules that describe plausible gene interactions for subsequent validation. Thus, BCRI is a semi-supervised approach that constrains the search through the vast space of plausible causal mechanisms by using a top-level outcome to kick-start the process.
We demonstrate the general BCRI task sequence, how to implement it, the validation process, and how BCRI rules discovered from lung cancer microarray data can be combined with prior knowledge to generate hypotheses about functional genomics.

Background Microarray technology has previously been used to identify genes that are differentially expressed between tumour and normal samples in a single study, as well as in syntheses involving multiple studies. When integrating results from several Affymetrix microarray datasets, previous studies summarized probeset-level data, which may potentially lead to a loss of information available at the probe level. In this paper, we present an approach for integrating results across studies while taking probe-level data into account. Additionally, we follow a new direction in the analysis of microarray expression data, namely focusing on the variation of expression phenotypes in predefined gene sets, such as pathways. This targeted approach can be helpful for revealing information that is not easily visible from the changes in the individual genes. Results We used a recently developed method to integrate Affymetrix expression data across studies. The idea is based on a probe-level test statistic developed for testing for differentially expressed genes in individual studies. We incorporated this test statistic into a classic random-effects model for integrating data across studies. Subsequently, we used a gene set enrichment test to evaluate the significance of enriched biological pathways in the differentially expressed genes identified from the integrative analysis. We compared the statistical and biological significance of the prognostic gene expression signatures and pathways identified in the probe-level model (PLM) with those in the probeset-level model (PSLM). Our integrative analysis of Affymetrix microarray data from 110 prostate cancer samples obtained from three studies reveals thousands of genes significantly correlated with tumour cell differentiation. The bioinformatics analysis, mapping these genes to the publicly available KEGG database, reveals evidence that tumour cell differentiation is significantly associated with many biological pathways. In particular, we observed that by integrating information from the insulin signalling pathway into our prediction model, we achieved better prediction of prostate cancer. Conclusions Our data integration methodology provides an efficient way to identify biologically sound and statistically significant pathways from gene expression data. The significant gene expression phenotypes identified in our study have the potential to characterize complex genetic alterations in prostate cancer.

Early detection of precancerous cells in the cervix and their clinical management is the main purpose of cervical cancer prevention and treatment programs. Cytological findings or testing for high-risk (HR) human papillomavirus (HPV) are inadequately sensitive for use in triage of women at high risk for cervical cancer. The current study is an exploratory study to identify candidate surface-enhanced laser desorption/ionization (SELDI) time-of-flight (TOF) mass spectrometry (MS) protein profiles in plasma that may distinguish cervical intraepithelial neoplasia (CIN 3) from CIN 1 among women infected with HR-HPV. We evaluated the SELDI-TOF-MS plasma protein profiles of 32 HR-HPV-positive women with CIN 3 (cases) and 28 women with CIN 1 (controls).
Case-control status was kept blinded; triplicates of each sample and quality-control plasma samples were randomized and, after robotic sample preparation, were run on WCX2 chips. After alignment of mass/charge (m/z) values, an iterative method was used to develop a classifier on a training data set that had 28 cases and 22 controls. The classifier was then used to classify the subjects in a test data set that had six cases and six controls. The classifier separated the cases from the controls in the test set with 100% sensitivity and 100% specificity, suggesting the possibility of using plasma SELDI protein profiles to identify women who are likely to have CIN 3 lesions.

Genome-wide DNA alterations were evaluated by array CGH, in addition to RNA expression profiling, in colorectal cancer from patients with excellent and poor survival following primary operations. DNA was used for CGH in BAC and cDNA arrays. Global RNA expression was determined by 44K arrays. DNA and RNA from tumor and normal colon were used from cancer patients grouped according to death, survival, or Dukes A, B, C, and D tumor stage. Confirmed DNA alterations in all Dukes stages A–D were judged relevant for carcinogenesis, while changes in Dukes C and D only were regarded as relevant for tumor progression. Copy number gain was more common than loss in tumor tissue (p < 0.01). Major tumor DNA alterations occurred in chromosomes 8, 13, 18, and 20, where short survival included gain in 8q and loss in 8p. Copy number gains related to tumor progression were most common on chromosomes 7, 8, 19, and 20, while corresponding major losses appeared on chromosome 8. Losses at chromosome 18 occurred in all Dukes stages. Normal colon tissue from cancer patients displayed gains in chromosomes 19 and 20. Mathematical vector analysis implicated a number of BAC clones in tumor DNA with genes of potential importance for death or survival. The genomic variation in colorectal cancer cells is tremendous and emphasizes that BAC array CGH is presently more powerful than available statistical models to discriminate DNA sequence information related to outcome. The present results suggest that a majority of DNA alterations observed in colorectal cancer are secondary to tumor progression. Therefore, it would require immense work to distinguish primary from secondary DNA alterations behind colorectal cancer.

Iodine is enriched and stored in the thyroid gland. Due to several factors, the size of the thyroid iodine pool varies both between individuals and within individuals over time. Excess iodine as well as iodine deficiency may promote thyroid cancer. Therefore, knowledge of iodine content and distribution within thyroid cancer tissue is of interest. X-ray fluorescence analysis (XRF) and secondary ion mass spectrometry (SIMS) are two methods that can be used to assess iodine content in thyroid tissue. With both techniques, the choice of sample preparation affects the results. Aldehyde fixatives are required for SIMS analysis, while a freezing method might be satisfactory for XRF analysis. The aims of the present study were primarily to evaluate a simple freezing technique for preserving samples for XRF analysis, and also to use XRF to evaluate the efficacy of using aldehyde fixatives to prepare samples for SIMS analysis. Ten porcine thyroids were sectioned into four pieces that were either frozen or fixed in formaldehyde, glutaraldehyde, or a modified Karnovsky fixative.
The frozen samples were assessed for iodine content with XRF after 1 and 2 months, and the fixed samples were analyzed for iodine content after 1 week. Freezing of untreated tissue yielded no significant iodine loss, whereas fixation with aldehydes yielded an iodine loss of 14–30%, with Karnovsky producing the least loss.
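
The SELDI-TOF comparison summarized above contrasts two ways of summarizing a spectral window: its maximum intensity (a peak-like feature) and the area under the curve (AUC). As a minimal illustrative sketch only (the function names, the 500 Da window width, and the synthetic spectrum are assumptions, not details taken from the cited study), both features can be computed per window as follows:

```python
import numpy as np

def window_features(mz, intensity, window_edges):
    """Compute two per-window summaries of a mass spectrum:
    the maximum intensity (peak-like feature) and the area under
    the curve (AUC, via the trapezoidal rule)."""
    max_int, auc = [], []
    for lo, hi in zip(window_edges[:-1], window_edges[1:]):
        mask = (mz >= lo) & (mz < hi)
        if not mask.any():                      # empty window: record zeros
            max_int.append(0.0)
            auc.append(0.0)
            continue
        max_int.append(float(intensity[mask].max()))
        auc.append(float(np.trapz(intensity[mask], mz[mask])))
    return np.array(max_int), np.array(auc)

# Toy usage with a synthetic spectrum (purely illustrative)
mz = np.linspace(2000, 20000, 5000)
intensity = np.abs(np.random.randn(mz.size)) + 5 * np.exp(-((mz - 8000) / 50) ** 2)
edges = np.arange(2000, 20001, 500)             # assumed 500 Da windows
peaks, aucs = window_features(mz, intensity, edges)
print(peaks[:3], aucs[:3])
```

Either feature vector can then be passed, sample by sample, to whatever pre-processing and classifier a study uses, which is exactly the axis along which the two metrics were compared.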


PLOS ONE | 2011

Exploiting clinical trial data drastically narrows the window of possible solutions to the problem of clinical adaptation of a multiscale cancer model.

Georgios S. Stamatakos; Eleni Ch. Georgiadi; Norbert Graf; Eleni A. Kolokotroni; Dimitra D. Dionysiou

Image-based modeling of tumor growth combines methods from cancer simulation and medical imaging. In this context, we present a novel approach to adapt a healthy brain atlas to MR images of tumor patients. In order to establish correspondence between a healthy atlas and a pathologic patient image, tumor growth modeling in combination with registration algorithms is employed. In a first step, the tumor is grown in the atlas based on a new multiscale, multiphysics model including growth simulation from the cellular level up to the biomechanical level, accounting for cell proliferation and tissue deformations. Large-scale deformations are handled with an Eulerian approach for finite element computations, which can operate directly on the image voxel mesh. Subsequently, dense correspondence between the modified atlas and patient image is established using nonrigid registration. The method offers opportunities in atlas-based segmentation of tumor-bearing brain images as well as for improved patient-specific simulation and prognosis of tumor progression.
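
The atlas-adaptation pipeline above couples a tumor growth simulation with nonrigid registration. The growth component in the paper is a multiscale, biomechanical model; as a deliberately simplified stand-in (a generic single-scale reaction-diffusion formulation, not the model described in the abstract, with illustrative parameter values), the sketch below grows a tumor cell density map on a voxel grid:

```python
import numpy as np

def grow_tumor(c, D=0.1, rho=0.05, steps=100):
    """Simplified proliferation/infiltration step on a 2-D voxel grid:
    dc/dt = D * Laplacian(c) + rho * c * (1 - c)  (Fisher-Kolmogorov form).
    c is a normalized tumor cell density in [0, 1]; D and rho are
    placeholder diffusion and proliferation rates."""
    for _ in range(steps):
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)
        c = np.clip(c + D * lap + rho * c * (1.0 - c), 0.0, 1.0)
    return c

# Seed a small tumor in the middle of a 128x128 "atlas slice"
density = np.zeros((128, 128))
density[60:68, 60:68] = 1.0
density = grow_tumor(density)
print(density.max(), (density > 0.5).sum())
```

In the full pipeline, a density map of this kind (plus the associated tissue deformation) is what makes the atlas anatomically comparable to the patient image before the nonrigid registration step.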


IEEE Journal of Biomedical and Health Informatics | 2014

The Technologically Integrated Oncosimulator: Combining Multiscale Cancer Modeling With Information Technology in the In Silico Oncology Context

Georgios S. Stamatakos; Dimitra D. Dionysiou; Aran Lunzer; Robert G. Belleman; Eleni A. Kolokotroni; Eleni Ch. Georgiadi; Marius Erdt; Juliusz Pukacki; Stefan Rueping; Stavroula Giatili; Alberto d'Onofrio; Stelios Sfakianakis; Kostas Marias; Christine Desmedt; Manolis Tsiknakis; Norbert Graf

The goal of this paper is to provide both the basic scientist and the clinician with an advanced computational tool for performing in silico experiments aiming at supporting the process of biological optimisation of radiation therapy. Improved understanding and description of malignant tumour dynamics is an additional intermediate objective. To this end an advanced three-dimensional (3D) Monte-Carlo simulation model of both the avascular development of multicellular tumour spheroids and their response to radiation therapy is presented. The model is based upon a number of fundamental biological principles such as the transition between the cell cycle phases, the diffusion of oxygen and nutrients and the cell survival probabilities following irradiation. Efficient algorithms describing tumour expansion and shrinkage are proposed and applied. The output of the biosimulation model is introduced into the (3D) visualisation package AVS-Express, which performs the visualisation of both the external surface and the internal structure of the dynamically evolving tumour based on volume or surface rendering techniques. Both the numerical stability and the statistical behaviour of the simulation model have been studied and evaluated for the case of EMT6/Ro spheroids. Predicted histological structure and tumour growth rates have been shown to be in agreement with published experimental data. Furthermore, the underlying structure of the tumour spheroid as well as its response to irradiation satisfactorily agrees with laboratory experience.
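
The abstract above lists cell survival probabilities following irradiation as one ingredient of the Monte-Carlo model. A standard way to express such probabilities is the linear-quadratic (LQ) model; the exact formulation and parameter values used in the simulation are not given here, so the following sketch is only a generic illustration (with placeholder alpha/beta values) of how a per-fraction Monte-Carlo kill decision can be driven by an LQ survival curve:

```python
import math
import random

def survival_probability(dose_gy, alpha=0.3, beta=0.03):
    """Linear-quadratic (LQ) cell survival: S = exp(-(alpha*D + beta*D^2)).
    The alpha and beta values are generic placeholders, not model parameters."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

def irradiate(n_cells, dose_gy, rng=random.Random(0)):
    """Monte-Carlo kill decision: each cell survives the fraction
    with probability S(dose)."""
    s = survival_probability(dose_gy)
    return sum(1 for _ in range(n_cells) if rng.random() < s)

# 30 fractions of 2 Gy applied to a population of 10^5 clonogenic cells
cells = 100_000
for fraction in range(30):
    cells = irradiate(cells, 2.0)
print("surviving clonogenic cells:", cells)
```

A full simulator of the kind described would layer cell-cycle phase, oxygenation, and repopulation between fractions on top of this basic survival sampling.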

Collaboration


Dive into Georgios S. Stamatakos's collaboration.

Top Co-Authors

Dimitra D. Dionysiou
National Technical University of Athens

Nikolaos K. Uzunoglu
National Technical University of Athens

Eleni A. Kolokotroni
National Technical University of Athens

Konstantina S. Nikita
National Technical University of Athens

Eleni Ch. Georgiadi
National Technical University of Athens

Manolis Tsiknakis
Technological Educational Institute of Crete

Christine Desmedt
Université libre de Bruxelles