Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Elisabeth Hansson is active.

Publication


Featured research published by Elisabeth Hansson.


International Journal of Cancer | 2007

EP1-4 subtype, COX and PPARγ receptor expression in colorectal cancer in prediction of disease-specific mortality

Annika Gustafsson; Elisabeth Hansson; Ulf Kressner; Svante Nordgren; Marianne Andersson; Wenhua Wang; Christina Lönnroth; Kent Lundholm

The importance of prostaglandins in tumor growth and progression is well recognized, as are the antineoplastic activities of cyclooxygenase (COX) inhibitors. Variation in treatment response to COX inhibition has raised the question of whether expression of cell-surface and nuclear membrane receptors differs among tumors with different disease progression. The purpose of this study was to evaluate whether EP1–4 subtype, PPARγ receptor and COX‐1/COX‐2 expression in colorectal cancer is related to tumor‐specific mortality. Reverse transcription–polymerase chain reaction and immunohistochemistry were used to demonstrate expression and protein appearance in tumor tissue compared with normal colon tissue. EP1 and EP2 subtype receptor protein was abundant in tumor cells, EP3 occurred occasionally and EP4 was not visible. PPARγ, EP2 and EP4 mRNA levels were significantly higher in normal colon tissue than in tumor tissue, without any distinct relationship to Dukes A–D tumor stage. Multivariate analyses indicated that increased tumor tissue EP2 and COX‐2 expression predicted poor survival (p < 0.001). COX‐1 expression was significantly higher than COX‐2 expression in normal colon tissue. Average COX‐2 mRNA was not increased in tumor tissue compared with normal colon. However, most tumor cells stained positive for COX‐2 protein, which was low or undetectable in normal mucosa cells. COX‐1 protein was preferentially visible in stroma. EP1–4 subtype receptor mRNAs were generally positively correlated with both COX‐1 and COX‐2 in tumor tissue, but not in normal colon. Our results imply that both prostaglandin production (COX‐2) and signaling via EP1–4 subtype receptors, particularly EP2, predict disease‐specific mortality in colorectal cancer.
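The abstract's survival claims rest on comparing mortality between expression-defined patient groups. The paper's actual multivariate model is not reproducible from the abstract, but as a minimal, hedged illustration of how survival might be compared between (hypothetical) high- and low-expression groups, a two-group log-rank test can be sketched in pure Python. The function name and the no-censoring simplification are assumptions for this sketch, not the authors' method:

```python
from itertools import chain

def logrank_statistic(times_a, times_b):
    """Two-group log-rank chi-square statistic.

    times_a, times_b: event (death) times for each group.
    Simplifying assumption for this sketch: every subject experiences
    the event (no censoring), which is NOT true of real survival data.
    """
    times_a, times_b = sorted(times_a), sorted(times_b)
    event_times = sorted(set(chain(times_a, times_b)))
    observed_minus_expected = 0.0
    variance = 0.0
    for t in event_times:
        n_a = sum(1 for x in times_a if x >= t)   # at risk in group A
        n_b = sum(1 for x in times_b if x >= t)   # at risk in group B
        n = n_a + n_b
        d = times_a.count(t) + times_b.count(t)   # deaths at time t
        if n < 2 or d == 0:
            continue
        o_a = times_a.count(t)                    # observed deaths, group A
        e_a = d * n_a / n                         # expected deaths, group A
        v = d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
        observed_minus_expected += o_a - e_a
        variance += v
    return observed_minus_expected ** 2 / variance if variance > 0 else 0.0
```

Compared against the 1-degree-of-freedom chi-square critical value 3.84 (alpha = 0.05), clearly separated groups reject the null hypothesis of equal survival, while identical groups give a statistic of 0.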


Acta Oncologica | 2007

Prostanoid receptor expression in colorectal cancer related to tumor stage, differentiation and progression

Annika Gustafsson; Elisabeth Hansson; Ulf Kressner; Svante Nordgren; Marianne Andersson; Christina Lönnroth; Kent Lundholm

Introduction: Alterations in eicosanoid metabolism are well established in a variety of malignant tumors, particularly colorectal carcinoma. Recent studies in our laboratory have emphasized a role for EP subtype receptors in the progression of colorectal cancer and disease-specific mortality. Therefore, the aim of the present study was to extend our knowledge to include additional receptor expression (DP1, DP2, FP, IP, TP) for prostanoids (PGD2, TXA2, PGF2α, PGI2) in relation to tumor stage, differentiation and progression of colorectal cancer. Material and methods: Total RNA from 62 tumors and adjacent normal colon tissue (n = 48) was extracted. Quantification of receptor expression was performed by real-time PCR and related to the expression of an appropriate housekeeping gene (GAPDH). Tumors were assessed according to Dukes A–D (stage I–IV). Results: DP1, DP2, FP and IP receptor subtypes displayed significantly reduced overall expression in tumor tissue compared to normal colon tissue, while the TP receptor subtype showed significantly higher expression in tumor tissue. Overall expression of the prostanoid receptors in tumor tissue was not related to clinical indices such as tumor stage and tumor cell differentiation, as evaluated by multivariate analyses. Cultured colorectal cancer cell lines with low (HT-29) and high (HCA-7) intrinsic PGE2 production at the confluent state did not express DP1 and IP receptor subtypes, but displayed low expression of DP2, FP and TP receptor subtypes. Conclusion: The results of the present study indicate imbalanced expression of prostanoid receptors in colorectal cancer compared to normal colon tissue, without a clear-cut relationship to disease progression. Therefore, future studies should be performed on defined cells within the tumor tissue compartment to determine whether any prostanoid receptor(s) is useful as a molecular target in the treatment or prevention of colorectal cancer.
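The quantification step described here (receptor expression related to a GAPDH housekeeping gene) is conventionally computed with the delta-Ct transformation of real-time PCR cycle thresholds. The abstract does not give the exact formula, so the following is an assumed, standard sketch; the function and variable names are illustrative, not from the paper:

```python
def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Relative mRNA level of a target gene (e.g. a prostanoid receptor)
    normalised to a housekeeping gene (e.g. GAPDH) via the delta-Ct
    method: expression = 2 ** -(Ct_target - Ct_reference).
    A lower Ct means a more abundant transcript."""
    return 2.0 ** -(ct_target - ct_reference)

def fold_change(ct_target_tumor: float, ct_ref_tumor: float,
                ct_target_normal: float, ct_ref_normal: float) -> float:
    """Delta-delta-Ct fold change of tumor versus normal tissue:
    values below 1 indicate reduced expression in the tumor."""
    return (relative_expression(ct_target_tumor, ct_ref_tumor)
            / relative_expression(ct_target_normal, ct_ref_normal))
```

For example, a target gene appearing 6 cycles after GAPDH in tumor but only 4 cycles after it in normal tissue yields a fold change of 0.25, i.e. four-fold reduced expression in the tumor, matching the direction reported for DP1, DP2, FP and IP.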


Neuroscience | 2013

A new concept affecting restoration of inflammation-reactive astrocytes

Linda Block; Ulrika Björklund; Anna Westerlund; Per Jörneberg; Björn Biber; Elisabeth Hansson

Long-lasting pain may partly be a consequence of ongoing neuroinflammation, in which astrocytes play a significant role. Following noxious stimuli, increased inflammatory receptor activity and changes in Na(+)/K(+)-ATPase activity and actin filament organization occur within the central nervous system. In astrocytes, the Ca(2+) signaling system, Na(+) transporters, cytoskeleton, and release of pro-inflammatory cytokines change during inflammation. The aim of this study was to restore these cell parameters in inflammation-reactive astrocytes. We found that the combination of (1) endomorphin-1, an opioid agonist that stimulates the Gi/o protein of the μ-opioid receptor; (2) naloxone, an opioid antagonist that inhibits the Gs protein of the μ-opioid receptor at ultralow concentrations; and (3) levetiracetam, an anti-epileptic agent that counteracts the release of IL-1β, managed to activate the Gi/o protein and Na(+)/K(+)-ATPase activity, inhibit the Gs protein, and decrease the release of IL-1β. The cell functions of astrocytes in an inflammatory state were thereby virtually restored to their normal non-inflammatory state, which could be of clinical significance and may be useful for the treatment of long-term pain.


Cancer Informatics | 2007

Tumor Genome Wide DNA Alterations Assessed by Array CGH in Patients with Poor and Excellent Survival Following Operation for Colorectal Cancer

Kristina Lagerstedt; Johan Staaf; Göran Jönsson; Elisabeth Hansson; Christina Lönnroth; Ulf Kressner; Lars Lindström; Svante Nordgren; Åke Borg; Kent Lundholm

Background: Although a majority of studies in cancer biomarker discovery claim to use proportional hazards regression (PHREG) to study the ability of a biomarker to predict survival, few studies use the predicted probabilities obtained from the model to test the quality of the model. In this paper, we compared the quality of predictions by a PHREG model to that of a linear discriminant analysis (LDA) in both training and test set settings. Methods: The PHREG and LDA models were built on a 491 colorectal cancer (CRC) patient dataset comprising demographic and clinicopathologic variables, and phenotypic expression of p53 and Bcl-2. Two variable selection methods, stepwise discriminant analysis and backward selection, were used to identify the final models. The endpoint of prediction in these models was five-year post-surgery survival. We also used a linear regression model to examine the effect of bin size in the training set on the accuracy of prediction in the test set. Results: The two variable selection techniques resulted in different models when stage was included in the list of variables available for selection. However, the proportion of survivors and non-survivors correctly identified was identical in both of these models. When stage was excluded from the variable list, the error rate for the LDA model was 42% as compared to an error rate of 34% for the PHREG model. Conclusions: This study suggests that a PHREG model can perform as well as or better than a traditional classifier such as LDA to classify patients into prognostic classes. Also, this study suggests that in the absence of tumor stage as a variable, Bcl-2 expression is a strong prognostic molecular marker of CRC.
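The LDA classifier benchmarked in this abstract assigns each patient to the survivor or non-survivor class by comparing linear discriminant scores. The paper's actual 491-patient model cannot be reconstructed from the abstract, so the following is a minimal one-feature, two-class sketch on made-up data, with class priors and a pooled variance estimated from the training sample; all names and values are illustrative assumptions:

```python
from math import log
from statistics import mean

def fit_lda_1d(x0, x1):
    """Fit a one-feature, two-class LDA: per-class means, a shared
    (pooled) variance, and priors from class frequencies."""
    m0, m1 = mean(x0), mean(x1)
    n0, n1 = len(x0), len(x1)
    pooled_var = (sum((v - m0) ** 2 for v in x0)
                  + sum((v - m1) ** 2 for v in x1)) / (n0 + n1 - 2)
    priors = (n0 / (n0 + n1), n1 / (n0 + n1))
    return (m0, m1), pooled_var, priors

def predict(x, means, var, priors):
    """The class with the larger linear discriminant score wins."""
    scores = [x * m / var - m * m / (2 * var) + log(p)
              for m, p in zip(means, priors)]
    return 0 if scores[0] >= scores[1] else 1

def error_rate(x0, x1, model):
    """Misclassification rate, analogous to the error rates the
    abstract reports for its LDA and PHREG models."""
    wrong = sum(predict(v, *model) != 0 for v in x0)
    wrong += sum(predict(v, *model) != 1 for v in x1)
    return wrong / (len(x0) + len(x1))
```

With equal priors this reduces to a midpoint-of-means decision boundary; on well-separated hypothetical marker levels (e.g. survivors around 1.0, non-survivors around 2.0) the training error is 0, whereas overlapping classes would produce error rates like the 34–42% reported in the paper.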
These include the dynamics of gene-protein interaction networks, the percolation of sub-cellular perturbations across scales and the impact they may have on tumorigenesis in both experiments and clinics. Such innovative ‘systems’ research will greatly benefit from enabling Information Technology that is currently under development, including an online collaborative environment, a Semantic Web based computing platform that hosts data and model repositories as well as high-performance computing access. Here, we present one of the National Cancer Institute’s recently established Integrative Cancer Biology Programs, i.e. the Center for the Development of a Virtual Tumor, CViT, which is charged with building a cancer modeling community, developing the aforementioned enabling technologies and fostering multi-scale cancer modeling and simulation.The issue of wide feature-set variability has recently been raised in the context of expression-based classification using microarray data. This paper addresses this concern by demonstrating the natural manner in which many feature sets of a certain size chosen from a large collection of potential features can be so close to being optimal that they are statistically indistinguishable. Feature-set optimality is inherently related to sample size because it only arises on account of the tendency for diminished classifier accuracy as the number of features grows too large for satisfactory design from the sample data. The paper considers optimal feature sets in the framework of a model in which the features are grouped in such a way that intra-group correlation is substantial whereas inter-group correlation is minimal, the intent being to model the situation in which there are groups of highly correlated co-regulated genes and there is little correlation between the co-regulated groups. This is accomplished by using a block model for the covariance matrix that reflects these conditions. 
Focusing on linear discriminant analysis, we demonstrate how these assumptions can lead to very large numbers of close-to-optimal feature sets.The use of MALDI-TOF mass spectrometry as a means of analyzing the proteome has been evaluated extensively in recent years. One of the limitations of this technique that has impeded the development of robust data analysis algorithms is the variability in the location of protein ion signals along the x-axis. We studied technical variations of MALDI-TOF measurements in the context of proteomics profiling. By acquiring a benchmark data set with five replicates, we estimated 76% to 85% of the total variance is due to phase variation. We devised a lobster plot, so named because of the resemblance to a lobster claw, to help detect the phase variation in replicates. We also investigated a peak alignment algorithm to remove the phase variation. This operation is analogous to the normalization step in microarray data analysis. Only after this critical step can features of biological interest be clearly revealed. With the help of principal component analysis, we demonstrated that after peak alignment, the differences among replicates are reduced. We compared this approach to peak alignment with a model-based calibration approach in which there was known information about peaks in common among all spectra. Finally, we examined the potential value at each point in an analysis pipeline of having a set of methods available that includes parametric, semiparametric and nonparametric methods; among such methods are those that benefit from the use of prior information.Array comparative genomic hybridization (aCGH) is a high-throughput lab technique to measure genome-wide chromosomal copy numbers. Data from aCGH experiments require extensive pre-processing, which consists of three steps: normalization, segmentation and calling. Each of these pre-processing steps yields a different data set: normalized data, segmented data, and called data. 
Publications using aCGH base their findings on data from all stages of the pre-processing. Hence, there is no consensus on which should be used for further down-stream analysis. This consensus is however important for correct reporting of findings, and comparison of results from different studies. We discuss several issues that should be taken into account when deciding on which data are to be used. We express the believe that called data are best used, but would welcome opposing views.We propose a method for biomarker discovery from mass spectrometry data, improving the common peak approach developed by Fushiki et al. (BMC Bioinformatics, 7:358, 2006). The common peak method is a simple way to select the sensible peaks that are shared with many subjects among all detected peaks by combining a standard spectrum alignment and kernel density estimates. The key idea of our proposed method is to apply the common peak approach to each class label separately. Hence, the proposed method gains more informative peaks for predicting class labels, while minor peaks associated with specific subjects are deleted correctly. We used a SELDI-TOF MS data set from laser microdissected cancer tissues for predicting the treatment effects of neoadjuvant therapy using an anticancer drug on breast cancer patients. The AdaBoost algorithm is adopted for pattern recognition, based on the set of candidate peaks selected by the proposed method. The analysis gives good performance in the sense of test errors for classifying the class labels for a given feature vector of selected peak values.Motivation Our goal was to understand why the PLIER algorithm performs so well given its derivation is based on a biologically implausible assumption. 
Results In spite of a non-intuitive assumption regarding the PM and MM errors made as part of the derivation for PLIER, the resulting probe level error function does capture the key characteristics of the ideal error function, assuming MM probes only measure non-specific binding and no signal.In this paper we develop a Bayesian analysis to estimate the disease prevalence, the sensitivity and specificity of three cervical cancer screening tests (cervical cytology, visual inspection with acetic acid and Hybrid Capture II) in the presence of a covariate and in the absence of a gold standard. We use Metropolis-Hastings algorithm to obtain the posterior summaries of interest. The estimated prevalence of cervical lesions was 6.4% (a 95% credible interval [95% CI] was 3.9, 9.3). The sensitivity of cervical cytology (with a result of ≥ ASC-US) was 53.6% (95% CI: 42.1, 65.0) compared with 52.9% (95% CI: 43.5, 62.5) for visual inspection with acetic acid and 90.3% (95% CI: 76.2, 98.7) for Hybrid Capture II (with result of >1 relative light units). The specificity of cervical cytology was 97.0% (95% CI: 95.5, 98.4) and the specificities for visual inspection with acetic acid and Hybrid Capture II were 93.0% (95% CI: 91.0, 94.7) and 88.7% (95% CI: 85.9, 91.4), respectively. The Bayesian model with covariates suggests that the sensitivity and the specificity of the visual inspection with acetic acid tend to increase as the age of the women increases. The Bayesian method proposed here is an useful alternative to estimate measures of performance of diagnostic tests in the presence of covariates and when a gold standard is not available. An advantage of the method is the fact that the number of parameters to be estimated is not limited by the number of observations, as it happens with several frequentist approaches. However, it is important to point out that the Bayesian analysis requires informative priors in order for the parameters to be identifiable. 
The method can be easily extended for the analysis of other medical data sets.The biological interpretation of gene expression microarray results is a daunting challenge. For complex diseases such as cancer, wherein the body of published research is extensive, the incorporation of expert knowledge provides a useful analytical framework. We have previously developed the Exploratory Visual Analysis (EVA) software for exploring data analysis results in the context of annotation information about each gene, as well as biologically relevant groups of genes. We present EVA as a flexible combination of statistics and biological annotation that provides a straightforward visual interface for the interpretation of microarray analyses of gene expression in the most commonly occuring class of brain tumors, glioma. We demonstrate the utility of EVA for the biological interpretation of statistical results by analyzing publicly available gene expression profiles of two important glial tumors. The results of a statistical comparison between 21 malignant, high-grade glioblastoma multiforme (GBM) tumors and 19 indolent, low-grade pilocytic astrocytomas were analyzed using EVA. By using EVA to examine the results of a relatively simple statistical analysis, we were able to identify tumor class-specific gene expression patterns having both statistical and biological significance. Our interactive analysis highlighted the potential importance of genes involved in cell cycle progression, proliferation, signaling, adhesion, migration, motility, and structure, as well as candidate gene loci on a region of Chromosome 7 that has been implicated in glioma. Because EVA does not require statistical or computational expertise and has the flexibility to accommodate any type of statistical analysis, we anticipate EVA will prove a useful addition to the repertoire of computational methods used for microarray data analysis. 
EVA is available at no charge to academic users and can be found at http://www.epistasis.org.Consider a gene expression array study comparing two groups of subjects where the goal is to explore a large number of genes in order to select for further investigation a subset that appear to be differently expressed. There has been much statistical research into the development of formal methods for designating genes as differentially expressed. These procedures control error rates such as the false detection rate or family wise error rate. We contend however that other statistical considerations are also relevant to the task of gene selection. These include the extent of differential expression and the strength of evidence for differential expression at a gene. Using real and simulated data we first demonstrate that a proper exploratory analysis should evaluate these aspects as well as decision rules that control error rates. We propose a new measure called the mp-value that quantifies strength of evidence for differential expression. The mp-values are calculated with a resampling based algorithm taking into account the multiplicity and dependence encountered in microarray data. In contrast to traditional p-values our mp-values do not depend on specification of a decision rule for their definition. They are simply descriptive in nature. We contrast the mp-values with multiple testing p-values in the context of data from a breast cancer prognosis study and from a simulation model.Sound data analysis is critical to the success of modern molecular medicine research that involves collection and interpretation of mass-throughput data. The novel nature and high-dimensionality in such datasets pose a series of nontrivial data analysis problems. 
This technical commentary discusses the problems of over-fitting, error estimation, curse of dimensionality, causal versus predictive modeling, integration of heterogeneous types of data, and lack of standard protocols for data analysis. We attempt to shed light on the nature and causes of these problems and to outline viable methodological approaches to overcome them.The arrival of high-throughput technologies in cancer science and medicine has made the possibility for knowledge generation greater than ever before. However, this has brought with it real challenges as researchers struggle to analyse the avalanche of information available to them. A unique U.K.-based initiative has been established to promote data sharing in cancer science and medicine and to address the technical and cultural issues needed to support this.Searching PubMed for citations related to a specific cancer center or group of authors can be labor-intensive. We have created a tool, PubMed QUEST, to aid in the rapid searching of PubMed for publications of interest. It was designed by taking into account the needs of entire cancer centers as well as individual investigators. The experience of using the tool by our institution’s cancer center administration and investigators has been favorable and we believe it could easily be adapted to other institutions. Use of the tool has identified limitations of automated searches for publications based on an author’s name, especially for common names. These limitations could likely be solved if the PubMed database assigned a unique identifier to each author.In this paper, a model of signaling pathways involving G proteins is investigated. The model incorporates reaction-diffusion mechanisms in which various reactants participate inside and on the extra-cellular surface membrane. 
The messenger molecules may diffuse over the surface of the cell membrane and signal transduction across the cell membrane is mediated by membrane receptor bound proteins which connect the genetically controlled biochemical intra-cellular reactions to the production of the second messenger, leading to desired functional responses. Dynamic and steady-state properties of the model are then investigated through weakly nonlinear stability analysis. Turing-type patterns are shown to form robustly under different delineating conditions on the system parameters. The theoretical predictions are then discussed in the context of some recently reported experimental evidence.Introduction: As an alternative to DNA microarrays, mass spectrometry based analysis of proteomic patterns has shown great potential in cancer diagnosis. The ultimate application of this technique in clinical settings relies on the advancement of the technology itself and the maturity of the computational tools used to analyze the data. A number of computational algorithms constructed on different principles are available for the classification of disease status based on proteomic patterns. Nevertheless, few studies have addressed the difference in the performance of these approaches. In this report, we describe a comparative case study on the classification accuracy of hepatocellular carcinoma based on the serum proteomic pattern generated from a Surface Enhanced Laser Desorption/Ionization (SELDI) mass spectrometer. Methods: Nine supervised classification algorithms are implemented in R software and compared for the classification accuracy. Results: We found that the support vector machine with radial function is preferable as a tool for classification of hepatocellular carcinoma using features in SELDI mass spectra. Among the rest of the methods, random forest and prediction analysis of microarrays have better performance. 
A permutation-based technique reveals that the support vector machine with a radial function seems intrinsically superior in learning from the training data since it has a lower prediction error than others when there is essentially no differential signal. On the other hand, the performance of the random forest and prediction analysis of microarrays rely on their capability of capturing the signals with substantial differentiation between groups. Conclusions: Our finding is similar to a previous study, where classification methods based on the Matrix Assisted Laser Desorption/Ionization (MALDI) mass spectrometry are compared for the prediction accuracy of ovarian cancer. The support vector machine, random forest and prediction analysis of microarrays provide better prediction accuracy for hepatocellular carcinoma using SELDI proteomic data than six other approaches.Summary In our previous study [1], we have compared the performance of a number of widely used discrimination methods for classifying ovarian cancer using Matrix Assisted Laser Desorption Ionization (MALDI) mass spectrometry data on serum samples obtained from Reflectron mode. Our results demonstrate good performance with a random forest classifier. In this follow-up study, to improve the molecular classification power of the MALDI platform for ovarian cancer disease, we expanded the mass range of the MS data by adding data acquired in Linear mode and evaluated the resultant decrease in classification error. A general statistical framework is proposed to obtain unbiased classification error estimates and to analyze the effects of sample size and number of selected m/z features on classification errors. We also emphasize the importance of combining biological knowledge and statistical analysis to obtain both biologically and statistically sound results. Our study shows improvement in classification accuracy upon expanding the mass range of the analysis. 
In order to obtain the best classification accuracies possible, we found that a relatively large training sample size is needed to obviate the sample variations. For the ovarian MS dataset that is the focus of the current study, our results show that approximately 20–40 m/z features are needed to achieve the best classification accuracy from MALDI-MS analysis of sera. Supplementary information can be found at http://bioinformatics.med.yale.edu/proteomics/BioSupp2.html.In vitro experimentation provides a convenient controlled environment for testing biological hypotheses of functional genomics in cancer induction and progression. However, it is necessary to validate resulting gene signatures from these in vitro experiments in human tumor samples (i.e. in vivo). We discuss the several methods for integrating data from these two sources paying particular attention to formulating statistical tests and corresponding null hypotheses. We propose a classification null hypothesis that can be simply modeled via permutation testing. A classification method is proposed based upon the Tissue Similarity Index of Sandberg and Ernberg (PNAS, 2005) that uses the classification null hypothesis. This method is demonstrated using the in vitro signature of Core Serum Response developed by Chang et al. (PLoS Biology, 2004).Multiple studies have reported that surface enhanced laser desorption/ionization time of flight mass spectroscopy (SELDI-TOF-MS) is useful in the early detection of disease based on the analysis of bodily fluids. Use of any multiplex mass spectroscopy based approach as in the analysis of bodily fluids to detect disease must be analyzed with great care due to the susceptibility of multiplex and mass spectroscopy methods to biases introduced via experimental design, patient samples, and/or methodology. Specific biases include those related to experimental design, patients, samples, protein chips, chip reader and spectral analysis. 
Contributions to biases based on patients include demographics (e.g., age, race, ethnicity, sex), homeostasis (e.g., fasting, medications, stress, time of sampling), and site of analysis (hospital, clinic, other). Biases in samples include conditions of sampling (type of sample container, time of processing, time to storage), conditions of storage, (time and temperature of storage), and prior sample manipulation (freeze thaw cycles). Also, there are many potential biases in methodology which can be avoided by careful experimental design including ensuring that cases and controls are analyzed randomly. All the above forms of biases affect any system based on analyzing multiple analytes and especially all mass spectroscopy based methods, not just SELDI-TOF-MS. Also, all current mass spectroscopy systems have relatively low sensitivity compared with immunoassays (e.g., ELISA). There are several problems which may be unique to the SELDI-TOF-MS system marketed by Ciphergen®. Of these, the most important is a relatively low resolution (±0.2%) of the bundled mass spectrometer which may cause problems with analysis of data. Foremost, this low resolution results in difficulties in determining what constitutes a “peak” if a peak matching approach is used in analysis. Also, once peaks are selected, the peaks may represent multiple proteins. In addition, because peaks may vary slightly in location due to instrumental drift, long term identification of the same peaks may prove to be a challenge. Finally, the Ciphergen® system has some “noise” of the baseline which results from the accumulation of charge in the detector system. 
Thus, we must be very aware of the factors that may affect the use of proteomics in the early detection of disease, in determining aggressive subsets of cancers, in risk assessment and in monitoring the effectiveness of novel therapies.Summary: A key challenge in clinical proteomics of cancer is the identification of biomarkers that could allow detection, diagnosis and prognosis of the diseases. Recent advances in mass spectrometry and proteomic instrumentations offer unique chance to rapidly identify these markers. These advances pose considerable challenges, similar to those created by microarray-based investigation, for the discovery of pattern of markers from high-dimensional data, specific to each pathologic state (e.g. normal vs cancer). We propose a three-step strategy to select important markers from high-dimensional mass spectrometry data using surface enhanced laser desorption/ionization (SELDI) technology. The first two steps are the selection of the most discriminating biomarkers with a construction of different classifiers. Finally, we compare and validate their performance and robustness using different supervised classification methods such as Support Vector Machine, Linear Discriminant Analysis, Quadratic Discriminant Analysis, Neural Networks, Classification Trees and Boosting Trees. We show that the proposed method is suitable for analysing high-throughput proteomics data and that the combination of logistic regression and Linear Discriminant Analysis outperform other methods tested.Proteins involved in the regulation of the cell cycle are highly conserved across all eukaryotes, and so a relatively simple eukaryote such as yeast can provide insight into a variety of cell cycle perturbations including those that occur in human cancer. 
To date, the budding yeast Saccharomyces cerevisiae has provided the largest amount of experimental and modeling data on the progression of the cell cycle, making it a logical choice for in-depth studies of this process. Moreover, the advent of methods for collection of high-throughput genome, transcriptome, and proteome data has provided a means to collect and precisely quantify simultaneous cell cycle gene transcript and protein levels, permitting modeling of the cell cycle on the systems level. With the appropriate mathematical framework and sufficient and accurate data on cell cycle components, it should be possible to create a model of the cell cycle that not only effectively describes its operation, but can also predict responses to perturbations such as variation in protein levels and responses to external stimuli including targeted inhibition by drugs. In this review, we summarize existing data on the yeast cell cycle, proteomics technologies for quantifying cell cycle proteins, and the mathematical frameworks that can integrate this data into representative and effective models. Systems level modeling of the cell cycle will require the integration of high-quality data with the appropriate mathematical framework, which can currently be attained through the combination of dynamic modeling based on proteomics data and using yeast as a model organism.Proteomic patterns derived from mass spectrometry have recently been put forth as potential biomarkers for the early diagnosis of cancer. This approach has generated much excitement, particularly as initial results reported on SELDI profiling of serum suggested that near perfect sensitivity and specificity could be achieved in diagnosing ovarian cancer. However, more recent reports have suggested that much of the observed structure could be due to the presence of experimental bias. A rebuttal to the findings of bias, subtitled “Producers and Consumers”, lists several objections. 
In this paper, we attempt to address these objections. While we continue to find evidence of experimental bias, we emphasize that the problems found are associated with experimental design and processing, and can be avoided in future studies.

Microarray technologies have been an increasingly important tool in cancer research in the last decade, and a number of initiatives have sought to stress the importance of the provision and sharing of raw microarray data. Illumina BeadArrays present a particular problem in this regard, as their random construction simultaneously adds value to analysis of the raw data and obstructs the sharing of those data. We present a compression scheme for raw Illumina BeadArray data, designed to ease the burdens of sharing and storing such data, that is implemented in the BeadDataPackR BioConductor package (http://bioconductor.org/packages/release/bioc/html/BeadDataPackR.html). It offers two key advantages over off-the-peg compression tools. First, it uses knowledge of the data formats to achieve greater compression than other approaches; second, it does not need to be decompressed for analysis, as the values held within can be accessed directly.

An important issue in current medical science research is to find the genes that are strongly related to an inherited disease. A particular focus is placed on cancer-gene relations, since some types of cancer are inherited. As biomedical databases have grown rapidly in recent years, an informatics approach to predict such relations from currently available databases should be developed. Our objective is to find implicit associated cancer genes from biomedical databases, including the literature database. Co-occurrence of biological entities has been shown to be a popular and efficient technique in biomedical text mining.
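As a generic illustration of the co-occurrence idea (hypothetical entities and corpus; not the probabilistic model used in the study), pair counts over per-abstract entity sets might be sketched as:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(abstracts):
    """Count how often pairs of entities are mentioned together.

    `abstracts` is a list of sets of entity names (genes, diseases)
    found in each abstract; pair counts are symmetric, keyed by the
    sorted pair.
    """
    counts = Counter()
    for entities in abstracts:
        for a, b in combinations(sorted(entities), 2):
            counts[(a, b)] += 1
    return counts

# Toy corpus: entity sets "extracted" from three hypothetical abstracts.
corpus = [
    {"BRCA1", "breast cancer"},
    {"BRCA1", "breast cancer", "TP53"},
    {"TP53", "colon cancer"},
]
counts = cooccurrence_counts(corpus)
```

Frequently co-occurring pairs would then be candidate associations; the paper's contribution is in how such counts from different sources are combined probabilistically rather than in the counting itself.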
We have applied a new probabilistic model, called the mixture aspect model (MAM) [48], to combine different types of co-occurrences of genes and cancer derived from Medline and OMIM (Online Mendelian Inheritance in Man). We trained the probability parameters of MAM using a learning method based on an EM (Expectation-Maximization) algorithm. We examined the performance of MAM by predicting associated cancer-gene pairs. Through cross-validation, prediction accuracy was shown to be improved by adding gene-gene co-occurrences from Medline to cancer-gene co-occurrences in OMIM. Further experiments showed that MAM found new cancer-gene relations which are unknown in the literature. Supplementary information can be found at http://www.bic.kyotou.ac.jp/pathway/zhusf/CancerInformatics/Supplemental2006.html

Constructing pathways of tumor progression and discovering the biomarkers associated with cancer is critical for understanding the molecular basis of the disease, for establishing novel chemotherapeutic approaches, and in turn for improving the clinical efficiency of drugs. It has recently received a lot of attention from bioinformatics researchers. However, relatively few methods are available for constructing pathways. This article develops novel entropy-kernel-based kernel clustering and fuzzy kernel clustering algorithms to construct tumor progression pathways using CpG island methylation data. The methylation data, which come from tumor tissues diagnosed at different stages, can be used to distinguish epigenotypes and phenotypes that describe the molecular events of different phases. Using kernel and fuzzy kernel k-means, we built tumor progression trees to describe the pathways of tumor progression and find the possible biomarkers associated with cancer. Our results indicate that the proposed algorithms, together with methylation profiles, can predict the tumor progression stages and discover the biomarkers efficiently.
Software is available upon request.

Whole-genome microarray investigations (e.g. differential expression, differential methylation, ChIP-chip) provide opportunities to test millions of features in a genome. Traditional multiple comparison procedures such as familywise error rate (FWER) controlling procedures are too conservative. Although false discovery rate (FDR) procedures have been suggested as having greater power, the control itself is not exact and depends on the proportion of true null hypotheses. Because this proportion is unknown, it has to be estimated accurately (small bias, small variance), preferably using a simple calculation that can be made accessible to the general scientific community. We propose an easy-to-implement method for estimating the proportion of true null hypotheses, and make the R code available. This estimate has relatively small bias and small variance, as demonstrated by comparing it with four existing procedures on simulated and real data. Although presented here in the context of microarrays, the estimate is applicable to many multiple comparison situations.

Summary: Clinical covariates such as age, gender, tumor grade, and smoking history have been extensively used in prediction of disease occurrence and progression. On the other hand, genomic biomarkers selected from microarray measurements may provide an alternative, satisfactory way of disease prediction. Recent studies show that better prediction can be achieved by using both clinical and genomic biomarkers. However, due to the different characteristics of clinical and genomic measurements, combining those covariates in disease prediction is very challenging. We propose a new regularization method, Covariate-Adjusted Threshold Gradient Directed Regularization (Cov-TGDR), for combining different types of covariates in disease prediction. The proposed approach is capable of simultaneous biomarker selection and predictive model building.
It allows different degrees of regularization for different types of covariates. We consider biomedical studies with binary outcomes and right-censored survival outcomes as examples, assuming a logistic model and a Cox model, respectively. Analysis of the breast cancer data and the follicular lymphoma data shows that the proposed approach can have better prediction performance than using clinical or genomic covariates alone.

In this review, we take a survey of bioinformatics databases and quantitative structure-activity relationship studies reported in the published literature. Databases from the most general to specialized cancer-related ones have been included. The most commonly used methods of structure-based analysis of molecules have been reviewed, along with some case studies where they have been used in cancer research. This article is expected to be of use for general bioinformatics researchers interested in cancer and will also provide an update to those who have been actively pursuing this field of research.

Dedication by Dr James Lyons-Weiler, University of Pittsburgh Cancer Institute, Pittsburgh, PA, USA.

We are experiencing a time of great growth in knowledge about human disease. However, translation of that knowledge into clinical practice has not kept pace. Clinical trials are an important part of the drug development process. The cost of conducting clinical trials has become greater because: 1) regulations on how a trial must be conducted have become more complex; 2) proposed therapies must be compared against standard therapies; and 3) if the end point is survival, it may take longer to reach that end point as therapies and non-specific supportive measures become more effective. Moreover, therapies administered prior to or subsequent to the experimental intervention may confound the interpretation of survival as an endpoint.
Finding valid alternative outcome measures that can be observed soon after the therapy is given could reduce the cost of drug trials, and make effective therapies available to the public more quickly. Imaging can assess therapeutic efficacy for cancers and may be a part of the solution to reduce costs and improve the timeliness of clinical trials (Fig 1).

Figure 1: Number of submissions of new molecular entities (NMEs) and biologics license applications (BLAs) to the FDA over the past 10 years. (U.S. Department of Health and Human Services-Food and Drug Administration 2004)

The Challenges of Clinical Trials

Problem 1: Clinical trials are too expensive

Clinical trials are an essential part of the process of documenting the effectiveness of a new therapy. While laboratory experiments attempt to simulate the human situation, validating efficacy and safety in the population of interest remains a necessary step. But the cost of performing a clinical trial large enough to document a treatment effect and monitor for side effects is usually quite high. The FDA estimates that the cost to develop a new drug can be as high as $1.7 billion (Fig 2), with others estimating the median cost at 'only' $800 million (DiMasi, 2002).

Figure 2: The cost of developing a successful compound is increasing, and the clinical trials pieces are the rapidly increasing components (Windhover's In Vivo 2003).

Some believe it is this mounting cost that is responsible for the decline in the number of new agents being submitted to the FDA. This represents a great challenge to our health care system. No amount of research is going to be effective in curing cancer if the final step of performing the clinical trial is too difficult or expensive to justify the economic returns expected from selling the product. Developing methods to reduce the effort and cost of a clinical trial while maintaining or increasing the validity would be valuable.


Cancer Informatics | 2010

Genes with Relevance for Early to Late Progression of Colon Carcinoma Based on Combined Genomic and Transcriptomic Information from the Same Patients

Kristina Lagerstedt; Erik Kristiansson; Christina Lönnroth; Marianne Andersson; Britt-Marie Iresjö; Annika Gustafsson; Elisabeth Hansson; Ulf Kressner; Svante Nordgren; Fredrik Enlund; Kent Lundholm


Neurochemical Research | 2015

Actin Filament Reorganization in Astrocyte Networks is a Key Functional Step in Neuroinflammation Resulting in Persistent Pain: Novel Findings on Network Restoration

Elisabeth Hansson


Background: Epidermal growth factor receptor (EGFR) overexpression is observed in a significant proportion of non-small cell lung carcinomas (NSCLC). Furthermore, overactivation of vascular endothelial growth factor (VEGF) leads to increased angiogenesis, implicated as an important factor in the vascularization of these tumors. Patients and Methods: Using tissue microarray technology, forty (n = 40) paraffin-embedded, histologically confirmed primary NSCLCs were cored and re-embedded into a recipient block. Immunohistochemistry was performed for the determination of EGFR and VEGF protein levels, which were evaluated by computerized image analysis. EGFR gene amplification was studied by chromogenic in situ hybridization based on the use of EGFR gene and chromosome 7 centromeric probes. Results: EGFR overexpression was observed in 23/40 (57.5%) cases and was correlated to the stage of the tumors (p = 0.001), whereas VEGF was overexpressed in 35/40 (87.5%) cases and was correlated to the stage of the tumors (p = 0.005) and to the smoking history of the patients (p = 0.016). Statistical significance was assessed comparing the protein levels of EGFR and VEGF (p = 0.043, k = 0.846).
EGFR gene amplification was identified in 2/40 (5%) cases, demonstrating no association with its overall protein levels (p = 0.241), whereas chromosome 7 aneuploidy was detected in 7/40 (17.5%) cases, correlating with the smoking history of the patients (p = 0.013). Conclusions: A significant subset of NSCLC is characterized by simultaneous EGFR and VEGF overexpression, and this may be the eligible target group for the application of combined anti-EGFR/VEGF targeted therapies on the basis of genetic deregulation (especially gene amplification for EGFR).

BRB-ArrayTools is an integrated software system for the comprehensive analysis of DNA microarray experiments. It was developed by professional biostatisticians experienced in the design and analysis of DNA microarray studies and incorporates methods developed by leading statistical laboratories. The software is designed for use by biomedical scientists who wish to have access to state-of-the-art statistical methods for the analysis of gene expression data and to receive training in the statistical analysis of high-dimensional data. The software provides the most extensive set of tools available for predictive classifier development and complete cross-validation. It offers extensive links to genomic websites for gene annotation and analysis tools for pathway analysis. An archive of over 100 datasets of published microarray data with associated clinical data is provided, and BRB-ArrayTools automatically imports data from the Gene Expression Omnibus public archive at the National Center for Biotechnology Information.

An algorithm to reduce multi-sample array CGH data from thousands of clones to tens or hundreds of clone regions is introduced. This reduction of the data is performed such that little information is lost, which is possible due to the high dependencies between neighboring clones. The algorithm is explained using a small example.
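A toy sketch of the clone-to-region reduction idea (hypothetical tolerance rule on neighboring profiles; not the published CGHregions algorithm):

```python
def collapse_clones(profiles, tol=0.1):
    """Greedily merge neighboring clones whose copy-number profiles differ
    by less than `tol` in every sample, exploiting the dependence between
    adjacent clones. Returns (start_index, end_index, representative_profile)
    tuples, one per region, in genome order.
    """
    regions = []
    start = 0
    for i in range(1, len(profiles) + 1):
        # Close the current region at the end of the data, or when clone i
        # differs from the region's representative profile in some sample.
        if i == len(profiles) or any(
            abs(a - b) >= tol for a, b in zip(profiles[start], profiles[i])
        ):
            regions.append((start, i - 1, profiles[start]))
            start = i
    return regions

# Rows are clones (genome order), columns are two samples.
clones = [
    [0.00, 1.00],  # clone 0
    [0.02, 1.01],  # clone 1: within tol of clone 0 -> same region
    [0.90, 1.00],  # clone 2: clearly different -> new region
]
regions = collapse_clones(clones)
```

Downstream tests can then be run per region instead of per clone, which is the point of the reduction; the real algorithm's merging criterion is more sophisticated than this fixed tolerance.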
The potential beneficial effects of the algorithm for downstream analysis are illustrated by re-analysis of previously published colorectal cancer data. Using multiple testing corrections suitable for these data, we provide statistical evidence for genomic differences in several clone regions between MSI+ and CIN+ tumors. The algorithm, named CGHregions, is available as an easy-to-use script in R.

Microarrays allow researchers to monitor the gene expression patterns of tens of thousands of genes across a wide range of cellular responses, phenotypes and conditions. Selecting a small subset of discriminative genes from thousands of genes is important for accurate classification of diseases and phenotypes. Many methods have been proposed to find subsets of genes with maximum relevance and minimum redundancy that can distinguish accurately between samples with different labels. Finding the minimum subset of relevant genes is often referred to as biomarker discovery. Two main approaches, filter and wrapper techniques, have been applied to biomarker discovery. In this paper, we conducted a comparative study of different biomarker discovery methods, including six filter methods and three wrapper methods. We then proposed a hybrid approach, FR-Wrapper, for biomarker discovery. The aim of this approach is to find an optimum balance between the precision of biomarker discovery and the computation cost, by taking advantage of both the filter method's efficiency and the wrapper method's high accuracy. Our hybrid approach applies Fisher's ratio, a simple method that is easy to understand and implement, to filter out most of the irrelevant genes; a wrapper method is then employed to reduce the redundancy. The performance of the FR-Wrapper approach is evaluated over four widely used microarray datasets.
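The Fisher's-ratio filtering step might be sketched as follows (hypothetical data and ranking cutoff; a generic sketch rather than the paper's FR-Wrapper implementation):

```python
import statistics

def fisher_ratio(values_a, values_b):
    """Fisher's ratio for one gene: squared difference of the class means
    over the sum of the class variances (larger = more discriminative)."""
    ma, mb = statistics.fmean(values_a), statistics.fmean(values_b)
    va, vb = statistics.pvariance(values_a), statistics.pvariance(values_b)
    return (ma - mb) ** 2 / (va + vb)

def filter_genes(expr_a, expr_b, keep=1):
    """Rank genes by Fisher's ratio and keep the top `keep` gene indices.
    `expr_a`/`expr_b`: per-gene lists of expression values in each class."""
    scores = [fisher_ratio(a, b) for a, b in zip(expr_a, expr_b)]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:keep]

# Toy data: gene 0 separates the two classes well; gene 1 does not.
class_a = [[1.0, 1.1, 0.9], [5.0, 4.8, 5.2]]
class_b = [[3.0, 3.1, 2.9], [5.1, 4.9, 5.0]]
top = filter_genes(class_a, class_b, keep=1)
```

The surviving genes would then be passed to a wrapper search to remove redundant ones, which is the expensive step the filter is meant to shrink.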
Analysis of the experimental results reveals that the hybrid approach can achieve the goal of maximum relevance with minimum redundancy.

Mass spectrometry approaches to biomarker discovery in human fluids have received a great deal of attention in recent years. While mass spectrometry instrumentation and analysis approaches have been widely investigated, little attention has been paid to how sample handling can impact the plasma proteome and therefore influence biomarker discovery. We have investigated the effects of two main aspects of sample handling on MALDI-TOF data: repeated freeze-thaw cycles and long-term storage of plasma at −70°C. Repeated freeze-thaw cycles resulted in a trend towards increasing changes in peak intensity, particularly after two thaws. However, a 4-year difference in long-term storage appears to have minimal effect on proteins in plasma, as no differences in peak number, mass distribution, or coefficient of variation were found between samples. Therefore, limiting freeze-thaw cycles seems more important to maintaining the integrity of the plasma proteome than degradation caused by long-term storage at −70°C.

Introduction: MR examinations of the brain are the primary method for clinical as well as research assessment of the effects of therapy on brain tumors. In clinical practice, visual comparison is the primary method of assessing changes that indicate tumor response or progression. This is a labor-intensive process involving visual search for changes between examinations on multiple images from multiple image types. Furthermore, some of the perceived changes could be due to acquisition-related changes rather than changes in tumor status. One of these is the change in patient position between the two time points. While every effort is made to acquire images in the same plane as prior exams, this is rarely achieved. In this study, we evaluated the effect of computerized image registration (a.k.a.
image alignment) on accuracy and confidence.

Methods: Study selection. After IRB approval, we collected a series of 100 sequential MRI examination pairs in patients with primary brain gliomas in which there had been no intervening surgery. Furthermore, we selected those in which the clinical radiologist's interpretation indicated either subtle or no change in the tumor. The interval between examinations ranged from 35 days to 375 days, with a median of 75 days. Tumor types included astrocytoma, oligodendroglioma, and mixed oligo-astrocytoma, and tumor grade ranged from 2 to 4 on the World Health Organization scale. Examinations consisted of 3 mm thick contiguous T1, T2, FLAIR, and T1 post-contrast images obtained with a 1.5T GE Signa (GE Medical Systems, Waukesha, WI) scanner. The T1-weighted images were spin-echo sequences with TR ranging from 400 ms to 620 ms and TE min full. T2-weighted images were fast spin-echo images with TR 3500–4000 ms and TEeff of 120 ms. FLAIR images were also fast spin-echo with TR 11000 ms, TI 2250 ms and TE 250 ms.

We present the implementation of an application using caGrid, the service-oriented Grid software infrastructure of the NCI cancer Biomedical Informatics Grid (caBIG™), to support the design and analysis of custom microarray experiments in the study of epigenetic alterations in cancer. The design and execution of these experiments require synthesis of information from multiple data types and datasets. In our implementation, each data source is implemented as a caGrid Data Service, and analytical resources are wrapped as caGrid Analytical Services. This service-based implementation has several advantages. A backend resource can be modified or upgraded without needing to change other components in the application.
A remote resource can be added easily, since resources are not required to be collected in a centralized infrastructure.

Array Comparative Genomic Hybridization (aCGH) is a powerful high-throughput technology for detecting chromosomal copy number aberrations (CNAs) in cancer, with the aim of identifying related critical genes in the affected genomic regions. However, advancing from a dataset with thousands of tabular lines to a few candidate genes can be an onerous and time-consuming process. To expedite the aCGH data analysis process, we have developed a user-friendly aCGH data viewer (aCGHViewer) as a conduit between the aCGH data tables and a genome browser. The data from a given aCGH analysis are displayed in a genomic view comprised of individual chromosome panels which can be rapidly scanned for interesting features. A chromosome panel containing a feature of interest can be selected to launch a detail window for that single chromosome. Selecting a data point of interest in the detail window launches a query to the UCSC or NCBI genome browser to allow the user to explore the gene content of the chromosomal region. Additionally, aCGHViewer can display aCGH and expression array data concurrently to visually correlate the two. aCGHViewer is a stand-alone Java visualization application that should be used in conjunction with separate statistical programs. It operates on all major computer platforms and is freely available at http://falcon.roswellpark.org/aCGHview/.

We developed a quality assurance (QA) tool, the microarray outlier filter (MOF), and have applied it to our microarray datasets for the identification of problematic arrays. Our approach is based on comparison of the arrays using the correlation coefficient and the number of outlier spots generated on each array to reveal outlier arrays.
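A minimal sketch of correlation-based outlier-array flagging (hypothetical cutoff and toy batch; the actual MOF criteria also count outlier spots per array):

```python
import statistics

def pearson(x, y):
    """Pearson correlation of two equal-length expression vectors."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def flag_outlier_arrays(arrays, cutoff=0.8):
    """Flag arrays whose median correlation with the other arrays in the
    batch falls below `cutoff` (median, so one bad array cannot drag down
    the scores of the good ones)."""
    flagged = []
    for i, x in enumerate(arrays):
        corrs = [pearson(x, y) for j, y in enumerate(arrays) if j != i]
        if statistics.median(corrs) < cutoff:
            flagged.append(i)
    return flagged

# Three concordant arrays and one discordant one (index 3).
batch = [
    [1.0, 2.0, 3.0, 4.0],
    [1.1, 2.1, 2.9, 4.2],
    [0.9, 1.8, 3.2, 3.9],
    [4.0, 1.0, 3.5, 0.5],
]
bad = flag_outlier_arrays(batch)
```

Arrays flagged this way would be inspected or excluded before downstream analysis, which is the QA role the abstract describes.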
For a human universal reference (HUR) dataset, which is used as a technical control in our standard hybridization procedure, 3 outlier arrays were identified out of 35 experiments. For a human blood dataset, 12 outlier arrays were identified from 185 experiments. In general, arrays from human blood samples displayed greater variation in their gene expression profiles than arrays from HUR samples. As a result, MOF identified two distinct patterns in the occurrence of outlier arrays. These results demonstrate that this methodology is a valuable QA practice for identifying questionable microarray data prior to downstream analysis.

Background: The cell cycle is an important clue to unraveling the mechanisms of cancer cells. Recently, expression profiles of cDNA microarray data of the cancer cell cycle have become available, providing information on the dynamic interactions among cancer cell cycle related genes. It is therefore appealing to construct a dynamic model of the gene regulatory network of the cancer cell cycle, to gain more insight into the infrastructure of the gene regulatory mechanism of the cancer cell via microarray data. Results: Based on the gene regulatory dynamic model and microarray data, we construct the whole dynamic gene regulatory network of the cancer cell cycle. In this study, we trace back the upstream regulatory genes of a target gene to infer the regulatory pathways of the gene network by the maximum likelihood estimation method. Finally, based on the dynamic regulatory network, we analyze the regulatory abilities and sensitivities of regulatory genes to clarify their roles in the mechanism of the cancer cell cycle. Conclusions: Our study presents a systematically iterative approach to discern and characterize the transcriptional regulatory network in the HeLa cell cycle from raw expression profiles. The transcriptional regulatory network in the HeLa cell cycle can also be confirmed by experimental reviews.
Based on our study and literature reviews, we can predict and clarify the E2F target genes in G1/S phase, which are crucial for regulating cell cycle progression and tumorigenesis. From the results of the network construction and literature confirmation, we infer that MCM4, MCM5, CDC6, CDC25A, UNG and E2F2 are E2F target genes in the HeLa cell cycle.

The study of pathway disruption is key to understanding cancer biology. Advances in high-throughput technologies have led to the rapid accumulation of genomic data. The explosion in available data has generated opportunities for investigation of concerted changes that disrupt biological functions; this in turn has created a need for computational tools for pathway analysis. In this review, we discuss approaches to the analysis of genomic data and describe the publicly available resources for studying biological pathways.

Summary: Computed tomography (CT) imaging plays an important role in cancer detection and quantitative assessment in clinical trials. High-resolution imaging studies on large cohorts of patients generate vast data sets, which are infeasible to analyze through manual interpretation. In this article we describe a comprehensive architecture for computer-aided detection (CAD) and surveillance of lung nodules in CT images. Central to this architecture are the analytic components: an automated nodule detection system, nodule tracking capabilities and volume measurement, which are integrated within a data management system that includes mechanisms for receiving and archiving images, a database for storing quantitative nodule measurements, and visualization and reporting tools. We describe two studies to evaluate CAD technology within this architecture, and its potential application in large clinical trials. The first study involves performance assessment of an automated nodule detection system and its ability to increase radiologist sensitivity when used to provide a second opinion.
The second study investigates nodule volume measurements on CT made using a semi-automated technique and shows that volumetric analysis yields significantly different tumor response classifications than a 2D diameter approach. These studies demonstrate the potential of automated CAD tools to assist in quantitative image analysis for clinical trials.

We present a computational approach for studying the effect of potential drug combinations on the protein networks associated with tumor cells. The majority of therapeutics are designed to target single proteins, yet most diseased states are characterized by a combination of many interacting genes and proteins. Using the topology of protein-protein interaction networks, our methods can explicitly model the possible synergistic effect of targeting multiple proteins using drug combinations in different cancer types. The methodology can be conceptually split into two distinct stages. First, we integrate protein interaction and gene expression data to develop network representations of different tissue types and cancer types. Second, we model network perturbations to search for target combinations which cause significant damage to a relevant cancer network but only minimal damage to an equivalent normal network. We have developed sets of predicted target and drug combinations for multiple cancer types, which are validated using known cancer and drug associations, and are currently in experimental testing for prostate cancer. Our methods also revealed significant bias in curated interaction data sources towards targets with associations, compared with high-throughput data sources from model organisms. The approach developed can potentially be applied to many other diseased cell types.

At the University of Pittsburgh, I teach a graduate-level course, 'The Practical Analysis of High-Throughput Genomic and Proteomic Data'. 50% of the course grade is based on a paper project involving the re-analysis of published data sets.
The aim of the project is to encourage the comparative evaluation of different approaches to the various analytic tasks of -omic-based biomarker studies. The students are empowered by this course to understand, and to see for themselves, that different approaches to normalization, feature selection, and disease prediction models (a) exist, and (b) differ in their apparent relative performance in helping to generate lists of therapeutic targets or disease prediction models. We also learn about various data standards, mostly from the perspective of data formats, which are critical to re-analysis-based algorithm evaluation studies.

Decades of focused cancer research have demonstrated the oncogenic process to be frustratingly complex. Despite many triumphs in scientific and clinical understanding, we still do not understand the formation of most solid tumors at a basic level. Each newly discovered molecular signature or profile brings to attention several exceptions in the form of mutations or histological subtypes that significantly change the applicability of the new knowledge to clinical practice. This has hampered improvements in detection, diagnosis, and treatment strategies.

Most solid tumors arise from a spectrum of genetic, epigenetic, and chromosomal alterations. The volume of such observations from both patient samples and tumor models in the cancer literature is overwhelming. Despite this, the variations in the molecular alterations that can give rise to cancer can be broadly grouped into a handful of traits that cancer cells must acquire for malignant transformation to occur. The original description by Hanahan and Weinberg of the “hallmarks” of cancer remains a seminal description that looks beyond the detailed molecular discoveries governing malignant transformation, and integrates them into a conceptual framework underlying all cancers (Hanahan and Weinberg, 2000).
This framework simply but insightfully states that molecular alterations can be classified by dysfunction in as many as six different regulatory systems that must be perturbed for a normal cell to become cancerous (Khalil and Hill, 2005). These include many diverse and seemingly non-overlapping biological processes: (1) self-sufficiency in growth signals, (2) insensitivity to anti-growth signals, (3) evasion of apoptosis, (4) limitless replicative potential, (5) sustained angiogenesis, and (6) tissue invasion and metastasis. They define genetic instability as an “enabling characteristic” that facilitates the acquisition of other mutations due to defects in the repair of DNA. Although some cancer subtypes are defined by a single genetic alteration leading to a primary defect in one of the above processes, most solid tumors responsible for the largest burden of human illness are heterogeneous lesions characterized by many if not all of these defects observable simultaneously. This includes lung, breast, prostate, colon, and central nervous system tumors, among others. In our attempts to understand tumorigenesis by reductionism, much work has gone into the study of the individual biological processes referred to as the “hallmarks” of cancer. Increased understanding of many of these biological modules has unfortunately not generated parallel understanding of the root cause of cancers and how best to treat them.

The concept of cancer as a system failure and the potential to use systems biology approaches to understand the disease is generating significant discussion in the literature as investigators grapple with how to do this (Kitano, 2002; Alberghina, Chiaradonna et al. 2004; Spencer, Berryman et al. 2004; Khalil and Hill, 2005; Hornberg, Bruggeman et al. 2006). The mere recognition of cancer as a systems biology disease is a key first step.
This hypothesis views the individual defects observable in solid tumors cumulatively as system failures, either at the cellular or the multicellular level. A systematic study and understanding of oncogenic network rewiring (Pawson and Warner, 2007) opens the potential to use systems biology approaches to generate testable models of different tumors, an exciting and as yet unexplored realm of cancer biology.

System-level understanding, the approach advocated in systems biology, requires a change in our notion of “what to look for” in biology (Kitano, 2002). While understanding the role of individual genes and proteins continues to be important, that focus is superseded by the goal of understanding a system's structure, function and dynamics. This evolution in thought in the life sciences has produced a profound transformation at the threshold of what is widely regarded as the century of biology (Kafatos and Eisner, 2004). From a collection of narrow, well-defined, almost parochial disciplines, the life sciences are rapidly morphing into domains that span the realm of molecular structure and function through to the application of this knowledge to clinical medicine. The results of teams of individual specialists dedicated to specific biological goals are providing insight into system structures and function not conceivable a decade ago.

Identifying all the genes and proteins in an organism is analogous to creating a list of all the parts of a complex device or machine, such as an airplane. While such a list provides a catalog of individual components, it alone is not sufficient to understand the complexity underlying the engineered object (Kitano, 2002). One cannot readily build an airplane or understand its functional intricacies from a parts list alone. One needs to understand how these parts are assembled to form the structure of an airplane.
This is biologically analogous to drawing an exhaustive diagram of gene-regulatory and biochemical interactions; such diagrams would provide only limited knowledge of how changes in one part of the system affect other parts. To understand how a particular system functions, one must first examine how the individual components interact during operation and under failure. From an engineering perspective, answers to key questions become critical: What is the voltage on each signal line? How are the signals encoded? How is the voltage stabilized against noise and external fluctuations? How do the circuits react when a malfunction occurs in the system? What are the design principles and possible circuit patterns, and how can they be modified to improve system performance? (Kitano, 2002)

Why has systems biology received so much recent attention? In short, it is because the key first step of defining system structures has quickly advanced from fantasy to reality in the post-genomic era. The completion of full genome sequencing projects in many organisms, including Homo sapiens, has defined the “parts list” for growth, development, and normal physiologic function. The technological development associated with these achievements has spawned the nascent fields of genomics, proteomics, and multiple “-omic” disciplines defined by their systematic, data-driven approaches to biological experimentation. These approaches are increasingly being applied to the question of understanding cancer.

The volume of data generated by multiple high-throughput platforms has outpaced the computational and mathematical models needed to integrate this information into advances in true biologic understanding (Figure 1). This will continue to be a bottleneck for the near future.

Figure 1. Estimating cancer research growth and utilization of diverse high-throughput platforms as measured by the number of Medline references.
The apparent decline in 2006 could be explained by references not yet finalized in Medline (February 9, 2007).

We review a number of relevant observations in the context of the “hallmarks” of cancer and discuss the issue of data integration in performing systems-level experiments.

Altered expression of COX-2 and overproduction of prostaglandins, particularly prostaglandin E2, are common in malignant tumors. Consequently, non-steroidal anti-inflammatory drugs (NSAIDs) attenuate net tumor growth, reduce tumor-related cachexia, improve appetite, and prolong survival. We have also reported that COX inhibition (indomethacin) interfered with the early onset of tumor endothelial cell growth, tumor cell proliferation and apoptosis. It is, however, still unclear whether such effects are restricted to metabolic alterations closely related to eicosanoid pathways and their regulators, or whether a whole variety of gene products are involved both upstream and downstream of eicosanoid effects. Therefore, the present experiments were performed using an in vivo intravital chamber technique, in which micro-tumor growth and related angiogenesis were analyzed by microarray to evaluate changes in global RNA expression caused by indomethacin treatment. Indomethacin significantly up-regulated 351 and down-regulated 1852 genes (p < 0.01); 1066 of these genes had unknown biological function. Genes with altered expression occurred on all chromosomes. Our results demonstrate that indomethacin altered the expression of a large number of genes distributed among a variety of processes in carcinogenic progression, involving angiogenesis, apoptosis, cell cycling, cell adhesion, and inflammation, as well as fatty acid metabolism and proteolysis.
It remains a challenge to distinguish primary key alterations from secondary adaptive changes in the transcription of genes altered by cyclooxygenase inhibition.

The present paper aims at demonstrating clinically oriented applications of the multiscale, four-dimensional in vivo tumor growth simulation model previously developed by our research group. To this end, the effect of weekend radiotherapy treatment gaps and p53 gene status is investigated in silico on two virtual glioblastoma tumors differing only in p53 status. Tumor response predictions concerning two rather extreme dose fractionation schedules (daily dose of 4.5 Gy administered in 3 equal fractions), namely HART (Hyperfractionated Accelerated Radiotherapy, weekend-less) 54 Gy and CHART (Continuous HART) 54 Gy, are presented and compared. The model predictions suggest that, for the same p53 status, HART 54 Gy and CHART 54 Gy have almost the same long-term effects on locoregional tumor control. However, no data have been located in the literature comparing HART and CHART radiotherapy schedules for glioblastoma. As non-small cell lung carcinoma (NSCLC) may also be a fast-growing and radiosensitive tumor, a comparison of the model predictions with the outcome of clinical studies concerning the response of NSCLC to HART 54 Gy and CHART 54 Gy is made. The model predictions are in accordance with the corresponding clinical observations, thus strengthening the potential of the model.

Microarray gene expression profiling has been used to distinguish histological subtypes of renal cell carcinoma (RCC), and consequently to identify specific tumor markers. The analytical procedures currently in use find sets of genes whose average differential expression across the two categories differs significantly. In general, each of the markers thus identified does not distinguish tumor from normal with 100% accuracy, although the group as a whole might be able to do so.
For the purpose of developing a widely used, economically viable diagnostic signature, however, large groups of genes are not likely to be useful. Here we use two different methods, one a support vector machine variant and the other an exhaustive search, to reanalyze data previously generated in our lab (Lenburg et al. 2003). We identify 158 genes, each having an expression level that is higher (lower) in every tumor sample than in any normal sample, and each having a minimum differential expression across the two categories at a significance of 0.01. The set is highly enriched in cancer-related genes (p = 1.6 × 10−12), containing 43 genes previously associated with either RCC or other types of cancer. Many of the biomarkers appear to be associated with the central alterations known to be required for cancer transformation. These include the oncogenes JAZF1, AXL, and ABL2; the tumor suppressors RASD1, PTPRO, TFAP2A, and CDKN1C; and genes involved in proteolysis or cell adhesion, such as WASF2 and PAPPA.

The analysis of expression and CGH arrays plays a central role in the study of complex diseases, especially cancer, including finding markers for early diagnosis and prognosis, choosing an optimal therapy, and increasing our understanding of cancer development and metastasis. Asterias (http://www.asterias.info) is an integrated collection of freely accessible web tools for the analysis of gene expression and aCGH data. Most of the tools use parallel computing (via MPI) and run on a server with 60 CPUs; compared to a desktop or server-based but non-parallelized application, parallelization provides speed-ups of up to a factor of 50. Most of our applications allow the user to obtain additional information for user-selected genes (chromosomal location, PubMed ids, Gene Ontology terms, etc.) via clickable links in tables and/or figures.
Our tools include: normalization of expression and aCGH data (DNMAD); conversion between different types of gene/clone and protein identifiers (IDconverter/IDClight); filtering and imputation (preP); finding differentially expressed genes related to patient class and survival data (Pomelo II); searching for models of class prediction (Tnasas); using random forests to search for minimal models for class prediction or for large subsets of genes with predictive capacity (GeneSrF); searching for molecular signatures and predictive genes with survival data (SignS); and detecting regions of genomic DNA gain or loss (ADaCGH). The capability to send results between different applications, access to additional functional information, and parallelized computation make our suite unique and exploit features only available to web-based applications.

From tumor to tumor, there is great variation in the proportion of cancer cells growing and making daughter cells that ultimately metastasize. The differential growth within a single tumor, however, has not been studied extensively, and this may be helpful in predicting the aggressiveness of a particular cancer type. The problem of estimating tumor growth rates from several populations is studied. The baseline growth rate estimator is based on a family of interacting particle system models that generalize the linear birth process as models of tumor growth. These interacting models incorporate the spatial structure of the tumor in such a way that growth slows down in a crowded system. An approximation-assisted estimation strategy is proposed for when initial values of the rates are known from a previous study. Some alternative estimators are suggested, and the dominance of the proposed estimator relative to the benchmark estimator is investigated.
An overriding theme of this article is that the suggested estimation method extends its traditional counterpart to non-normal populations and to more realistic cases.

With state-of-the-art microarray technologies now available for whole-genome CpG island (CGI) methylation profiling, there is a need to develop statistical models that are specifically geared toward the analysis of such data. In this article, we propose a Gamma-Normal-Gamma (GNG) mixture model for describing three groups of CGI loci: hypomethylated, undifferentiated, and hypermethylated, from a single methylation microarray. This model was applied to study the methylation signatures of three breast cancer cell lines: MCF7, T47D, and MDAMB361. Biologically interesting and interpretable results were obtained, highlighting the heterogeneous nature of the three cell lines. This underscores the need to analyze each microarray slide individually rather than pooling them for a single analysis. Our comparisons with the fitted densities from the Normal-Uniform (NU) mixture model proposed in the literature for gene expression analysis show an improved goodness of fit of the GNG model over the NU model. Although the GNG model was proposed in the context of single-slide methylation analysis, it can be readily adapted to analyze multi-slide methylation data as well as other types of microarray data.

As the founding Editor-in-Chief of Cancer Informatics, my first duty is to thank the Editorial Board for their support and participation and their input on the direction, scope, and focus of the journal, and also, and especially, Tim Hill. Without their considerable generosity and personal effort and passion in launching the journal, Cancer Informatics would not exist. I also gratefully acknowledge Dr.
William Grizzle’s critical role as co-editor of this volume and his oversight of the independent review process for two submitted manuscripts (Normelle et al. & Lyons-Weiler et al.).

Molecular stratification of disease based on expression levels of sets of genes can help guide therapeutic decisions if such classifications can be shown to be stable against variations in sample source and data perturbation. Classifications inferred from one set of samples in one lab should be able to consistently stratify a different set of samples in another lab. We present a method for assessing such stability and apply it to the breast cancer (BCA) datasets of Sorlie et al. 2003 and Ma et al. 2003. We find that, within the now commonly accepted BCA categories identified by Sorlie et al., Luminal A and Basal are robust, but Luminal B and ERBB2+ are not. In particular, 36% of the samples identified as Luminal B and 55% identified as ERBB2+ cannot be assigned an accurate category because the classification is sensitive to data perturbation. We identify a “core cluster” of samples for each category, and from these we determine “patterns” of gene expression that distinguish the core clusters from each other. We find that the best markers for Luminal A and Basal are (ESR1, LIV1, GATA-3) and (CCNE1, LAD1, KRT5), respectively. Pathways enriched in the patterns regulate apoptosis, tissue remodeling, and the immune response. We use a different dataset (Ma et al. 2003) to test the accuracy with which samples can be allocated to the four disease subtypes. We find, as expected, that the classification of samples identified as Luminal A and Basal is robust, but classification into the other two subtypes is not.

The antitumor drug paclitaxel stabilizes microtubules and reduces their dynamicity, promoting mitotic arrest and eventually apoptosis. Upon assembly of the α/β-tubulin heterodimer, GTP becomes bound to both the α- and β-tubulin monomers.
During microtubule assembly, the GTP bound to β-tubulin is hydrolyzed to GDP, eventually reaching a steady-state equilibrium between free tubulin dimers and those polymerized into microtubules. Tubulin-binding drugs such as paclitaxel interact with β-tubulin, disrupting this equilibrium. In spite of several crystal structures of tubulin, there is little biochemical insight into the mechanism by which anti-tubulin drugs target microtubules and alter their normal behavior. The mechanism of drug action is further complicated because altered β-tubulin isotype expression and/or mutations in tubulin genes may lead to drug resistance, as has been described in the literature. Because of the relationship between β-tubulin isotype expression and mutations within β-tubulin, both of which lead to resistance, we examined the properties of altered residues within the taxane, colchicine and Vinca binding sites. The amount of data now available allows us to investigate common patterns that lead to microtubule disruption and may provide a guide to the rational design of novel compounds that can inhibit microtubule dynamics for specific tubulin isotypes or, indeed, resistant cell lines. Because of the vast amount of data published to date, we provide only a broad overview of the mutational results and how these correlate with differences between tubulin isotypes. We also note that clinical studies describe a number of predictive factors for the response to anti-tubulin drugs, and we attempt to develop an understanding of the features within tubulin that may help explain how they affect both microtubule assembly and stability.

Summary: In recent years, there has been increased interest in using protein mass spectrometry to identify molecular markers that discriminate diseased from healthy individuals. Existing methods are tailored towards classifying observations into nominal categories.
Sometimes, however, the outcome of interest may be measured on an ordered scale. Ignoring this natural ordering results in some loss of information. In this paper, we propose a Bayesian model for the analysis of mass spectrometry data with an ordered outcome. The method provides a unified approach for identifying relevant markers and predicting class membership. This is accomplished by building a stochastic search variable selection method within an ordinal outcome model. We apply the methodology to mass spectrometry data on ovarian cancer cases and healthy individuals. We also utilize wavelet-based techniques to remove noise from the mass spectra prior to analysis. We identify protein markers associated with being healthy, having low-grade ovarian cancer, or being a high-grade case. For comparison, we repeated the analysis using conventional classification procedures and found improved predictive accuracy with our method.

Background: The Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC, http://www.pcabc.upmc.edu) is one of the first major project-based initiatives stemming from the Pennsylvania Cancer Alliance and was funded for four years by the Department of Health of the Commonwealth of Pennsylvania. The objective of this initiative was to build a prototype biorepository and bioinformatics infrastructure with a robust data warehouse by developing (1) a statewide data model for bioinformatics and a repository of serum and tissue samples; (2) a data model for biomarker data storage; and (3) a public-access website for disseminating research results and bioinformatics tools. The members of the Consortium cooperate closely, exploring the opportunity to share clinical, genomic, and other bioinformatics data on patient samples in oncology, for the purpose of developing collaborative research programs across cancer research institutions in Pennsylvania.
The Consortium’s intention was to establish a virtual repository of the many clinical specimens residing in various centers across the state, in order to make them available for research. One of our primary goals was to facilitate the identification of cancer-specific biomarkers and encourage collaborative research efforts among the participating centers. Methods: The PCABC has developed unique partnerships so that every region of the state can effectively contribute and participate. It includes over 80 individuals from 14 organizations, and it plans to expand to partners outside the state. This has created a network of researchers, clinicians, bioinformaticians, cancer registrars, program directors, and executives from academic and community health systems, as well as external corporate partners, all working together to accomplish a common mission. The various sub-committees have developed a common IRB protocol template, common data elements for standardizing data collection for three organ sites, intellectual property/tech transfer agreements, and material transfer agreements that have been approved by each of the member institutions. This foundational work has led to the development of a centralized data warehouse that meets each institution’s IRB/HIPAA standards. Results: Currently, this “virtual biorepository” has over 58,000 annotated samples from 11,467 cancer patients available for research purposes. The clinical annotation of tissue samples is done either manually over the internet or in semi-automated batch mode through mapping of local data elements to PCABC common data elements. The database currently holds information on 7188 cases (associated with 9278 specimens and 46,666 annotated blocks and blood samples) of prostate cancer, 2736 cases (associated with 3796 specimens and 9336 annotated blocks and blood samples) of breast cancer, and 1543 cases (including 1334 specimens and 2671 annotated blocks and blood samples) of melanoma.
These numbers continue to grow, and plans to integrate new tumor sites are in progress. Furthermore, the group has developed a central web-based tool that allows investigators to share their translational (genomics/proteomics) experiment data on research evaluating potential biomarkers via a central location on the Consortium’s web site. Conclusions: The technological achievements and the statewide informatics infrastructure established by the Consortium will enable robust and efficient studies of biomarkers and their relevance to the clinical course of cancer. Studies resulting from the creation of the Consortium may allow for better classification of cancer types, more accurate assessment of disease prognosis, a better ability to identify the most appropriate individuals for clinical trial participation, and better surrogate markers of disease progression and/or response to therapy.

Background: Most published literature using SELDI-TOF has used traditional spectral-analysis techniques, such as Fourier transforms and wavelets, for denoising. Most of these publications also compare spectra using their most prominent feature, i.e., peaks or local maxima. Methods: The maximum intensity value within each window of differentiable m/z values was used to represent the intensity level in that window. We also calculated the ‘Area under the Curve’ (AUC) spanned by each window. Results: Keeping everything else constant, such as the pre-processing of the data and the classifier used, the AUC performed much better as a metric of comparison than the peaks in two out of three data sets. In the third data set, both metrics performed equivalently.
Conclusions: This study shows that the feature used to compare spectra can have an impact on the results of a study attempting to identify biomarkers using SELDI-TOF data.

We use Backward Chaining Rule Induction (BCRI), a novel data mining method for hypothesizing causative mechanisms, to mine lung cancer gene expression array data for mechanisms that could impact survival. Initially, a supervised learning system is used to generate a prediction model in the form of “IF-THEN” style rules. Next, each antecedent (i.e., an IF condition) of a previously discovered rule becomes the outcome class for a subsequent application of supervised rule induction. This step is repeated until a termination condition is satisfied. “Chains” of rules are created by working backward from an initial condition (e.g., survival status). Through this iterative process of “backward chaining,” BCRI searches for rules that describe plausible gene interactions for subsequent validation. Thus, BCRI is a semi-supervised approach that constrains the search through the vast space of plausible causal mechanisms by using a top-level outcome to kick-start the process. We demonstrate the general BCRI task sequence, how to implement it, the validation process, and how BCRI rules discovered from lung cancer microarray data can be combined with prior knowledge to generate hypotheses about functional genomics.

Background: Microarray technology has previously been used to identify genes that are differentially expressed between tumour and normal samples in single studies, as well as in syntheses involving multiple studies. When integrating results from several Affymetrix microarray datasets, previous studies summarized probeset-level data, which may potentially lead to a loss of information available at the probe level. In this paper, we present an approach for integrating results across studies while taking probe-level data into account.
Additionally, we follow a new direction in the analysis of microarray expression data, namely to focus on the variation of expression phenotypes in predefined gene sets, such as pathways. This targeted approach can be helpful for revealing information that is not easily visible from changes in the individual genes. Results: We used a recently developed method to integrate Affymetrix expression data across studies. The idea is based on a probe-level test statistic developed for testing for differentially expressed genes in individual studies. We incorporated this test statistic into a classic random-effects model for integrating data across studies. Subsequently, we used a gene set enrichment test to evaluate the significance of enriched biological pathways in the differentially expressed genes identified from the integrative analysis. We compared the statistical and biological significance of the prognostic gene expression signatures and pathways identified in the probe-level model (PLM) with those in the probeset-level model (PSLM). Our integrative analysis of Affymetrix microarray data from 110 prostate cancer samples obtained from three studies reveals thousands of genes significantly correlated with tumour cell differentiation. The bioinformatics analysis, mapping these genes to the publicly available KEGG database, reveals evidence that tumour cell differentiation is significantly associated with many biological pathways. In particular, we observed that by integrating information from the insulin signalling pathway into our prediction model, we achieved better prediction of prostate cancer. Conclusions: Our data integration methodology provides an efficient way to identify biologically sound and statistically significant pathways from gene expression data.
The significant gene expression phenotypes identified in our study have the potential to characterize complex genetic alterations in prostate cancer.

Early detection of precancerous cells in the cervix and their clinical management is the main purpose of cervical cancer prevention and treatment programs. Cytological findings or testing for high-risk (HR) human papillomavirus (HPV) are inadequately sensitive for use in the triage of women at high risk for cervical cancer. The current study is an exploratory study to identify candidate surface-enhanced laser desorption/ionization (SELDI) time-of-flight (TOF) mass spectrometry (MS) protein profiles in plasma that may distinguish cervical intraepithelial neoplasia CIN 3 from CIN 1 among women infected with HR-HPV. We evaluated the SELDI-TOF-MS plasma protein profiles of 32 HR-HPV-positive women with CIN 3 (cases) and 28 women with CIN 1 (controls). Case-control status was kept blinded; triplicates of each sample and quality-control plasma samples were randomized and, after robotic sample preparation, were run on WCX2 chips. After alignment of mass/charge (m/z) values, an iterative method was used to develop a classifier on a training data set of 28 cases and 22 controls. The classifier developed was used to classify the subjects in a test data set of six cases and six controls. The classifier separated the cases from the controls in the test set with 100% sensitivity and 100% specificity, suggesting the possibility of using plasma SELDI protein profiles to identify women who are likely to have CIN 3 lesions.

Genome-wide DNA alterations were evaluated by array CGH, in addition to RNA expression profiling, in colorectal cancer from patients with excellent and poor survival following primary operations. DNA was used for CGH on BAC and cDNA arrays. Global RNA expression was determined by 44K arrays.
DNA and RNA from tumor and normal colon were used from cancer patients grouped according to death, survival, or Dukes A, B, C and D tumor stage. Confirmed DNA alterations in all of Dukes A–D were judged relevant for carcinogenesis, while changes in Dukes C and D only were regarded as relevant for tumor progression. Copy number gain was more common than loss in tumor tissue (p < 0.01). Major tumor DNA alterations occurred on chromosomes 8, 13, 18 and 20, where short survival included gain in 8q and loss in 8p. Copy number gains related to tumor progression were most common on chromosomes 7, 8, 19 and 20, while the corresponding major losses appeared on chromosome 8. Losses at chromosome 18 occurred in all Dukes stages. Normal colon tissue from cancer patients displayed gains on chromosomes 19 and 20. Mathematical vector analysis implied a number of BAC clones in tumor DNA with genes of potential importance for death or survival. The genomic variation in colorectal cancer cells is tremendous and emphasizes that BAC array CGH is presently more powerful than the available statistical models for discriminating DNA sequence information related to outcome. The present results suggest that a majority of the DNA alterations observed in colorectal cancer are secondary to tumor progression. Therefore, it would require immense work to distinguish primary from secondary DNA alterations behind colorectal cancer.

Iodine is enriched and stored in the thyroid gland. Due to several factors, the size of the thyroid iodine pool varies both between individuals and within individuals over time. Excess iodine as well as iodine deficiency may promote thyroid cancer. Therefore, knowledge of iodine content and distribution within thyroid cancer tissue is of interest. X-ray fluorescence analysis (XRF) and secondary ion mass spectrometry (SIMS) are two methods that can be used to assess the iodine content of thyroid tissue. With both techniques, the choice of sample preparation affects the results.
Aldehyde fixatives are required for SIMS analysis, while a freezing method might be satisfactory for XRF analysis. The aims of the present study were primarily to evaluate a simple freezing technique for preserving samples for XRF analysis, and also to use XRF to evaluate the efficacy of using aldehyde fixatives to prepare samples for SIMS analysis. Ten porcine thyroids were each sectioned into four pieces that were either frozen or fixed in formaldehyde, glutaraldehyde, or a modified Karnovsky fixative. The frozen samples were assessed for iodine content with XRF after 1 and 2 months, and the fixed samples were analyzed for iodine content after 1 week. Freezing of untreated tissue yielded no significant iodine loss, whereas fixation with aldehydes yielded an iodine loss of 14–30%, with the Karnovsky fixative producing the least loss.
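As a back-of-the-envelope illustration of the loss figures quoted above, relative iodine loss can be expressed against a frozen reference sample; the sketch below uses hypothetical XRF readings, since the abstract reports only the resulting percentages.

```python
def percent_iodine_loss(frozen_reading: float, fixed_reading: float) -> float:
    """Relative iodine loss (%) of a fixated sample vs. a frozen reference."""
    if frozen_reading <= 0:
        raise ValueError("frozen reference reading must be positive")
    return 100.0 * (frozen_reading - fixed_reading) / frozen_reading

# Hypothetical XRF intensities (arbitrary units), NOT values from the study:
frozen, karnovsky, formaldehyde = 1.00, 0.86, 0.72
print(round(percent_iodine_loss(frozen, karnovsky), 1))     # 14.0
print(round(percent_iodine_loss(frozen, formaldehyde), 1))  # 28.0
```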


The Clinical Journal of Pain | 2015

Ultralow Dose of Naloxone as an Adjuvant to Intrathecal Morphine Infusion Improves Perceived Quality of Sleep but Fails to Alter Persistent Pain: A Randomized, Double-blind, Controlled Study

Linda Block; Christopher Lundborg; Jan Bjersing; Peter Dahm; Elisabeth Hansson; Björn Biber

Background: Genetic and epigenetic alterations in colorectal cancer are numerous. However, it is difficult to judge whether such changes are primary or secondary to the appearance and progression of tumors. Therefore, the aim of the present study was to identify altered DNA regions with significant covariation to transcription alterations along colon cancer progression. Methods: Tumor and normal colon tissue were obtained at primary operations from 24 randomly selected patients. DNA, RNA and microRNAs were extracted from the same biopsy material in all individuals and analyzed by oligonucleotide array-based comparative genomic hybridization (CGH) and mRNA and microRNA oligo-arrays. Statistical analyses were performed to assess statistical interactions (correlations, co-variations) between DNA copy number changes and significant alterations in gene and microRNA expression using appropriate parametric and non-parametric statistics. Results: Main DNA alterations were located on chromosomes 7, 8, 13 and 20. Tumor DNA copy number gain increased with tumor progression and was significantly related to increased gene expression. Copy number loss was not observed in Dukes A tumors. There was no significant relationship between expressed genes and tumor progression across Dukes A–D tumors, and no relationship between tumor stage and the number of microRNAs with significantly altered expression. Interaction analyses identified overall 41 genes that discriminated early Dukes A plus B tumors from late Dukes C plus D tumors; 28 of these genes retained correlations between genomic and transcriptomic alterations in Dukes C plus D tumors, and 17 in Dukes D. One microRNA (miR-663) showed interactions with DNA alterations in all Dukes A–D tumors. Conclusions: Our modeling confirms that colon cancer progression is related to genomic instability and altered gene expression.
However, early invasive tumor growth seemed related more to transcriptomic alterations, in which changes in microRNA may be an early phenomenon, than to DNA copy number changes.
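The core of the covariation analysis described above, correlating DNA copy number changes with gene expression alterations across tumors, can be sketched per gene with a non-parametric rank correlation. The sketch below is a generic, self-contained illustration (not the study's actual pipeline), with hypothetical gene names and data.

```python
def ranks(values):
    """1-based average ranks, with ties resolved by midrank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a block of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        midrank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = midrank
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-tumor log2 ratios for one gene (NOT data from the study):
copy_number = [0.1, 0.4, 0.6, 0.9, 1.1, 1.3]
expression  = [0.2, 0.5, 0.7, 1.0, 1.2, 1.5]
print(round(spearman_rho(copy_number, expression), 2))  # 1.0 (fully concordant ranks)
```

In practice one would run such a test per gene across all samples and then adjust the resulting p-values for multiple testing, which the study's "appropriate parametric and non-parametric statistics" presumably covered in far greater depth.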


Brain Research | 2013

Anti-inflammatory substances can influence some glial cell types but not others.

Johan Forshammar; Per Jörneberg; Ulrika Björklund; Anna Westerlund; Christopher Lundborg; Björn Biber; Elisabeth Hansson

In recent years, the importance of glial cell activation in the generation and maintenance of long-term pain has been investigated. One novel mechanism underlying long-lasting pain is injury-induced inflammation in the periphery, followed by microglial activation in the dorsal horn of the spinal cord, which results in local neuroinflammation. An increase in neuronal excitability may follow, with intense signaling along the pain tracts to the thalamus and the parietal cortex, along with other cortical regions, for the identification and recognition of the injury. If the local neuroinflammation develops into a pathological state, then the astrocytes become activated. Previous studies in which lipopolysaccharide (LPS) was used to induce inflammation have shown that, in a dysfunctional astrocyte network, the actin cytoskeleton is reorganized from the normally occurring F-actin stress fibers into the more diffusible, disorganized, ring-form globular G-actin. In addition, Ca2+ signaling systems are altered, Na+ and glutamate transporters are downregulated, and pro-inflammatory cytokines, particularly IL-1β, are released in dysfunctional astrocyte networks. In a series of experiments, we have demonstrated that these LPS-induced changes in astrocyte function can be restored by stimulation of Gi/o and inhibition of Gs with a combination of a μ-receptor agonist and an ultralow concentration of a μ-receptor antagonist, and by inhibition of cytokine release, particularly of IL-1β, by the antiepileptic drug levetiracetam. These findings could be of clinical significance and point to a novel treatment for long-term pain.


Journal of Inflammation | 2015

Coupled cell networks are target cells of inflammation, which can spread between different body organs and develop into systemic chronic inflammation

Elisabeth Hansson; Eva Skiöldebrand

Introduction: This randomized, cross-over, double-blind, controlled study of continuous intrathecal morphine administration in patients with severe, long-term pain addresses whether the supplementation of low doses of naloxone in this setting is associated with beneficial clinical effects. Methods: All of the study subjects (n=11) provided informed consent and were recruited from a subset of patients who were already undergoing long-term treatment with continuous intrathecal morphine because of difficult-to-treat pain. The patients were (in a randomized order) also given intrathecal naloxone (40 ng/24 h or 400 ng/24 h). As control, the patients' ordinary dose of morphine without any additions was used. The pain (Numeric Rating Scale, NRS) during activity, perceived quality of sleep, level of activity, and quality of life, as well as the levels of several proinflammatory and anti-inflammatory cytokines in the blood, were assessed. The prestudy pain (NRS during activity) in the study group ranged from 3 to 10. Results: A total of 64% of the subjects reported improved quality of sleep during treatment with naloxone at a dose of 40 ng per 24 hours, as compared with 9% with sham treatment (P=0.024). Although not statistically significant, pain was reduced by 2 NRS steps or more during supplemental treatment with naloxone in 36% of subjects at the 40 ng per 24 hours dose and in 18% of subjects at the 400 ng per 24 hours dose. The corresponding percentage among patients receiving unaltered treatment was 27%. Conclusions: The addition of an ultralow dose of intrathecal naloxone (40 ng/24 h) to intrathecal morphine infusion in patients with severe, persistent pain improved perceived quality of sleep. We were not able to show any statistically significant effects of naloxone on pain relief, level of activity, or quality of life.


Neurochemistry International | 2016

Neuropharmacological effects of Phoneutria nigriventer venom on astrocytes

Catarina Rapôso; Ulrika Björklund; Evanguedes Kalapothakis; Björn Biber; Maria Alice da Cruz-Höfling; Elisabeth Hansson

In rat microglia-enriched cultures expressing Toll-like receptor 4, we studied cytokine release after exposure to 1 ng/ml LPS for 0.5-24 h. Dexamethasone and corticosterone exposure served as controls. We focused on whether naloxone, ouabain, and bupivacaine, all agents with reported anti-inflammatory effects on astrocytes, could affect the release of TNF-α and IL-1β in microglia. Our results show that neither ultralow (10⁻¹² M) nor high (10⁻⁶ M) concentrations of these agents had demonstrable effects on cytokine release in microglia. The results indicate that anti-inflammatory substances exert specific influences on different glial cell types. Astrocytes seem to be functional targets for these anti-inflammatory substances, while microglia respond directly to inflammatory stimuli and are instead sensitive to anti-inflammatory substances such as corticoids. The physiological relevance might be that astrocyte dysfunction influences neuronal signalling both through direct disturbance of astrocyte functions and through disturbed communication within the astrocyte networks. When the signalling between astrocytes is working, microglia produce less pro-inflammatory cytokines.

Collaboration


Dive into Elisabeth Hansson's collaboration.

Top Co-Authors

Christina Lönnroth, Sahlgrenska University Hospital
Kent Lundholm, Sahlgrenska University Hospital
Svante Nordgren, Sahlgrenska University Hospital
Björn Biber, University of Gothenburg
Marianne Andersson, Sahlgrenska University Hospital
Annika Gustafsson, Sahlgrenska University Hospital
Eva Skiöldebrand, Swedish University of Agricultural Sciences