Christopher P. Saunders
South Dakota State University
Publication
Featured research published by Christopher P. Saunders.
BMC Genomics | 2006
Justin M. Balko; Anil Potti; Christopher P. Saunders; Arnold J. Stromberg; Eric B. Haura; Esther P. Black
Background: Increased focus surrounds identifying patients with advanced non-small cell lung cancer (NSCLC) who will benefit from treatment with epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors (TKI). EGFR mutation, gene copy number, coexpression of ErbB proteins and ligands, and epithelial to mesenchymal transition markers all correlate with EGFR TKI sensitivity, and while prediction of sensitivity using any one of the markers does identify responders, individual markers do not encompass all potential responders due to high levels of inter-patient and inter-tumor variability. We hypothesized that a multivariate predictor of EGFR TKI sensitivity based on gene expression data would offer a clinically useful method of accounting for the increased variability inherent in predicting response to EGFR TKI and for elucidation of mechanisms of aberrant EGFR signalling. Furthermore, we anticipated that this methodology would result in improved predictions compared to single parameters alone both in vitro and in vivo. Results: Gene expression data derived from cell lines that demonstrate differential sensitivity to EGFR TKI, such as erlotinib, were used to generate models for a priori prediction of response. The gene expression signature of EGFR TKI sensitivity displays significant biological relevance in lung cancer biology in that pertinent signalling molecules and downstream effector molecules are present in the signature. Diagonal linear discriminant analysis using this gene signature was highly effective in classifying out-of-sample cancer cell lines by sensitivity to EGFR inhibition, and was more accurate than classifying by mutational status alone. Using the same predictor, we classified human lung adenocarcinomas and captured the majority of tumors with high levels of EGFR activation as well as those harbouring activating mutations in the kinase domain. We have demonstrated that predictive models of EGFR TKI sensitivity can classify both out-of-sample cell lines and lung adenocarcinomas. Conclusion: These data suggest that multivariate predictors of response to EGFR TKI have potential for clinical use and likely provide a robust and accurate predictor of EGFR TKI sensitivity that is not achieved with single biomarkers or clinical characteristics in non-small cell lung cancers.
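For orientation, here is a minimal sketch of diagonal linear discriminant analysis (DLDA), the classifier named above: linear discriminant analysis with the within-class covariance restricted to its diagonal. Everything below (data, labels, dimensions) is synthetic illustration, not the paper's gene signature or cell-line data.

```python
# Minimal DLDA sketch: classify by variance-scaled distance to class means,
# assuming a diagonal pooled within-class covariance.
import numpy as np

def dlda_fit(X, y):
    """Per-class feature means and pooled per-feature variances."""
    classes = np.unique(y)
    means = {k: X[y == k].mean(axis=0) for k in classes}
    # Pooled within-class variance, one value per gene (diagonal covariance).
    resid = np.vstack([X[y == k] - means[k] for k in classes])
    var = resid.var(axis=0, ddof=len(classes))
    return classes, means, var

def dlda_predict(X, classes, means, var):
    """Assign each sample to the class minimizing the scaled distance."""
    scores = np.stack(
        [((X - means[k]) ** 2 / var).sum(axis=1) for k in classes], axis=1
    )
    return classes[np.argmin(scores, axis=1)]

# Toy usage: 40 cell lines, 25 genes, two sensitivity classes.
rng = np.random.default_rng(0)
y = np.repeat(["sensitive", "resistant"], 20)
X = rng.normal(size=(40, 25)) + (y == "sensitive")[:, None] * 0.8
params = dlda_fit(X, y)
print(dlda_predict(X[:5], *params))
```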
Physiology & Behavior | 2006
Thomas V. Getchell; Kevin Kwong; Christopher P. Saunders; Arnold J. Stromberg; Marilyn L. Getchell
We have investigated olfactory-mediated pre-ingestive behavior in leptin (ob/ob) and leptin receptor (db/db) mutant mice compared to age- and gender-matched wild-type (wt) mice. Olfactory-mediated behavior was tested using a buried food paradigm 5 times/day at 2-h intervals for 6 days. Mean food-finding times of ob/ob and db/db mice were approximately 10 times shorter than those of wt mice. To test the effect of leptin replacement in ob/ob mice, leptin (1 or 5 μg/g body weight in sterile saline) or carrier was injected i.p. once daily prior to testing. Mean food-finding times in ob/ob mice injected with carrier or with 1 μg/g leptin were similar and were 2-3 times faster than in wt mice. Mean food-finding times in ob/ob mice injected with 5 μg/g leptin tripled compared to carrier-injected ob/ob mice and were of the same order of magnitude as those of wt mice, suggesting functional leptin replacement. A 3-factor repeated-measures ANOVA demonstrated significant differences between the 6 cohorts (P = 0.0001), food-finding times (P ≤ 0.0001), and the cohort-by-day interaction (P ≤ 0.0001). Post hoc tests suggested that the ob/ob + 5 μg/g leptin cohort performed more like the wt cohort in the food-finding test than like the ob/ob or ob/ob + carrier cohort. Potential local sites of leptin production and action were identified with immunohistochemistry and in situ hybridization in epithelial and gland cells of the olfactory and nasal mucosae. Our results strongly suggest that leptin acting through leptin receptors modulates olfactory-mediated pre-ingestive behavior.
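The analysis above is a 3-factor repeated-measures ANOVA; a common computational stand-in is a linear mixed model with a random intercept per animal. The sketch below takes that route; cohort names, group sizes, and response values are invented for illustration and do not reproduce the study's data.

```python
# Hedged sketch of a repeated-measures design fit as a linear mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
cohorts = ["wt", "ob", "ob_carrier", "ob_leptin1", "ob_leptin5", "db"]
rows = []
for c in cohorts:
    for mouse in range(8):                 # 8 mice per cohort (assumed)
        base = rng.normal(60, 10)          # per-mouse baseline (seconds)
        for day in range(1, 7):            # 6 test days, as in the study
            rows.append({"cohort": c, "mouse": f"{c}{mouse}", "day": day,
                         "find_time": base + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random intercept per mouse captures the repeated measurements;
# the cohort-by-day interaction mirrors the effect reported above.
model = smf.mixedlm("find_time ~ C(cohort) * day", df, groups="mouse").fit()
print(model.summary())
```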
Forensic Science International | 2012
Amanda Hepler; Christopher P. Saunders; Linda J. Davis; JoAnn Buscaglia
Score-based approaches for computing forensic likelihood ratios are becoming more prevalent in the forensic literature. When two items of evidential value are entangled via a score function, several nuances arise when attempting to model the score behavior under the competing source-level propositions. Specific assumptions must be made in order to appropriately model the numerator and denominator probability distributions. This process is fairly straightforward for the numerator of the score-based likelihood ratio, entailing the generation of a database of scores obtained by pairing items of evidence from the same source. However, this process presents ambiguities for the denominator database generation: in particular, how best to generate a database of scores between two items of different sources. Many alternatives have appeared in the literature, three of which we will consider in detail. They differ in their approach to generating denominator databases, by pairing (1) the item of known source with randomly selected items from a relevant database; (2) the item of unknown source with randomly generated items from a relevant database; or (3) two randomly generated items. When the two items differ in type, perhaps one having higher information content, these three alternatives can produce very different denominator databases. While each of these alternatives has appeared in the literature, the decision of how to generate the denominator database is often made without calling attention to the subjective nature of this process. In this paper, we compare each of the three methods (and the resulting score-based likelihood ratios), which can be thought of as three distinct interpretations of the denominator proposition. Our goal in performing these comparisons is to illustrate the effect that subtle modifications of these propositions can have on inferences drawn from the evidence evaluation procedure. The study was performed using a data set composed of cursive writing samples from over 400 writers. We found that, when provided with the same two items of evidence, the three methods often lead to differing conclusions (with rates of disagreement ranging from 0.005 to 0.48). Rates of misleading evidence and Tippett plots are both used to characterize the range of behavior of the methods over questioned documents of varying size. The appendix shows that the three score-based likelihood ratios are theoretically very different not only from each other, but also from the likelihood ratio, and as a consequence each displays drastically different behavior.
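To make the three denominator constructions concrete, here is a schematic sketch; the score function and the items are generic placeholders, not the handwriting features used in the paper.

```python
# The three denominator-database constructions compared in the paper,
# expressed with a placeholder score function and synthetic items.
import numpy as np

rng = np.random.default_rng(2)

def score(a, b):
    """Placeholder similarity score between two items (lower = more similar)."""
    return float(np.linalg.norm(a - b))

# Background database: one representative item per alternative source.
background = [rng.normal(size=5) for _ in range(400)]
known = rng.normal(size=5)                        # item of known source
unknown = known + rng.normal(scale=0.3, size=5)   # item of unknown source

# (1) Pair the known-source item with random items from the background.
denom_1 = [score(known, b) for b in background]
# (2) Pair the unknown-source item with random items from the background.
denom_2 = [score(unknown, b) for b in background]
# (3) Pair two randomly drawn background items with each other.
idx = rng.permutation(len(background))
denom_3 = [score(background[i], background[j])
           for i, j in zip(idx[::2], idx[1::2])]

for name, d in [("known-anchored", denom_1), ("unknown-anchored", denom_2),
                ("trace-free", denom_3)]:
    print(name, round(float(np.mean(d)), 3))
```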
Science & Justice | 2016
Danica M. Ommen; Christopher P. Saunders; Cedric Neumann
In the various forensic science disciplines, recent analytical developments paired with modern statistical computational tools have led to the proliferation of ad hoc techniques for quantifying the probative value of forensic evidence. Many legal and scientific scholars agree that the value of evidence should be reported as a likelihood ratio or a Bayes factor. Quantifying the probative value of forensic evidence is subject to many sources of variability and uncertainty. There is currently a debate on how to characterize the reliability of the value of evidence. Some authors have proposed associating a confidence/credible interval with the value of evidence assigned to a collection of forensic evidence. In this paper, we discuss the reasons for our opinion that interval quantifications for the value of evidence should not be used directly in the Bayesian decision-making process to determine the support of the evidence for one of the two competing hypotheses.
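For reference, the quantity under debate is the Bayes factor form of the value of evidence; in standard notation (ours, not necessarily the paper's), it links prior and posterior odds on the competing hypotheses:

```latex
% Standard odds form of Bayes' theorem for evidence E, hypotheses H_p, H_d,
% and background information I.
\[
  \underbrace{\frac{\Pr(H_p \mid E, I)}{\Pr(H_d \mid E, I)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{\Pr(E \mid H_p, I)}{\Pr(E \mid H_d, I)}}_{\text{value of evidence (BF)}}
  \times
  \underbrace{\frac{\Pr(H_p \mid I)}{\Pr(H_d \mid I)}}_{\text{prior odds}}
\]
```

The debate summarized above concerns whether an interval estimate of the middle factor can be used directly in this updating step.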
Journal of Forensic Sciences | 2011
Christopher P. Saunders; Linda J. Davis; JoAnn Buscaglia
The proposition that writing profiles are unique is considered a key premise underlying forensic handwriting comparisons. An empirical study cannot validate this proposition because of the impossibility of observing sample documents written by every individual. The goal of this paper is to illustrate what can be stated about the individuality of writing profiles using a database of handwriting samples and an automated comparison procedure. In this paper, we provide a strategy for bounding the probability of observing two writers with indistinguishable writing profiles (regardless of the comparison methodology used) with a random match probability that can be estimated statistically. We illustrate computation of this bound using a convenience sample of documents and an automated comparison procedure based on Pearson's chi-squared statistic applied to frequency distributions of letter shapes extracted from handwriting samples. We also show how this bound can be used when designing an empirical study of individuality.
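As an illustration of the kind of automated comparison described, the sketch below applies a Pearson chi-squared test of homogeneity to two letter-shape frequency tables; the categories and counts are invented, and this is in the spirit of, not identical to, the paper's procedure.

```python
# Compare two writers' letter-shape frequency vectors with a Pearson
# chi-squared test of homogeneity on the stacked 2 x k table.
import numpy as np
from scipy.stats import chi2

def chi2_compare(counts_a, counts_b):
    """Pearson chi-squared test of homogeneity for two frequency vectors."""
    table = np.vstack([counts_a, counts_b]).astype(float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    stat = ((table - expected) ** 2 / expected).sum()
    dof = table.shape[1] - 1                 # (2 - 1) * (k - 1)
    return stat, chi2.sf(stat, dof)

# Two writers, 6 letter-shape categories.
writer1 = np.array([30, 12, 8, 25, 10, 15])
writer2 = np.array([14, 20, 18, 10, 22, 16])
stat, p = chi2_compare(writer1, writer2)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")   # small p => profiles distinguishable
```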
IEEE Transactions on Sustainable Energy | 2017
Ayush Shakya; Semhar Michael; Christopher P. Saunders; Douglas Armstrong; Prakash Pandey; Santosh Chalise; Reinaldo Tonkoski
Photovoltaic (PV) systems integration is increasingly being used to reduce fuel consumption in diesel-based remote microgrids. However, uncertainty and low correlation of PV power availability with load reduces the benefits of PV integration. These challenges can be handled by introducing reserve. However, this leads to increased operational cost. Solar irradiance forecasting helps to reduce reserve requirement, thereby improving the utilization of PV energy. This paper presents a new solar irradiance forecasting method for remote microgrids based on the Markov switching model. This method uses locally available data to predict one-day-ahead solar irradiance for scheduling energy resources in remote microgrids. The model considers past solar irradiance data, clear sky irradiance, and Fourier basis expansions to create linear models for three regimes or states: high, medium, and low energy regimes for days corresponding to sunny, mildly cloudy, and extremely cloudy days, respectively. The case study for Brookings, SD, USA, discussed in this paper, resulted in an average mean absolute percentage error of 31.8% for five years, from 2001 to 2005, with higher errors during summer months than during winter months.
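A simplified sketch of the regime-specific linear models is given below: one least-squares fit per energy regime on clear-sky irradiance plus Fourier terms. The Markov switching machinery (regime transition probabilities and inference) is omitted, and all data are synthetic.

```python
# Per-regime linear models on Fourier features plus clear-sky irradiance.
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(365)
clear_sky = 600 + 300 * np.sin(2 * np.pi * (days - 80) / 365)   # toy W/m^2

def fourier_features(t, period=365.0, harmonics=2):
    """Intercept plus sin/cos terms of the annual cycle."""
    cols = [np.ones_like(t, dtype=float)]
    for k in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * k * t / period),
                 np.cos(2 * np.pi * k * t / period)]
    return np.column_stack(cols)

# Synthetic observed irradiance and regime labels (high/medium/low energy).
regimes = rng.choice(3, size=365, p=[0.5, 0.3, 0.2])
atten = np.array([1.0, 0.6, 0.25])[regimes]
observed = clear_sky * atten + rng.normal(0, 30, size=365)

X = np.column_stack([fourier_features(days), clear_sky])
models = {}
for r in range(3):
    mask = regimes == r
    beta, *_ = np.linalg.lstsq(X[mask], observed[mask], rcond=None)
    models[r] = beta

# One-day-ahead prediction for day 180 under each regime hypothesis.
print({r: float(X[180] @ b) for r, b in models.items()})
```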
Forensic Science International | 2012
Linda J. Davis; Christopher P. Saunders; Amanda Hepler; JoAnn Buscaglia
The likelihood ratio paradigm has been studied as a means for quantifying the strength of evidence for a variety of forensic evidence types. Although the concept of a likelihood ratio as a comparison of the plausibility of evidence under two propositions (or hypotheses) is straightforward, a number of issues arise when one considers how to go about estimating a likelihood ratio. In this paper, we illustrate one possible approach to estimating a likelihood ratio in comparative handwriting analysis. The novelty of our proposed approach lies in generating simulated writing samples from a collection of writing samples from a known source to form a database for estimating the distribution associated with the numerator of a likelihood ratio. We illustrate this approach using documents collected from 432 writers under controlled conditions.
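The core idea, simulating same-source writing samples to populate the numerator score database, can be sketched as follows; the multinomial profile and the score are placeholders for the handwriting measurements actually used.

```python
# Simulate pseudo-samples from one writer's pooled profile, then score
# simulated pairs to estimate the same-source score distribution.
import numpy as np

rng = np.random.default_rng(4)

# Known-source material: 20 writing samples, each a 6-category count vector.
writer_counts = rng.multinomial(100, [0.3, 0.2, 0.15, 0.15, 0.1, 0.1], size=20)
pooled = writer_counts.sum(axis=0)
probs = pooled / pooled.sum()

def simulate_sample(n_chars=100):
    """Draw a pseudo writing sample from the writer's pooled profile."""
    return rng.multinomial(n_chars, probs)

def score(a, b):
    """Placeholder dissimilarity between two count vectors."""
    pa, pb = a / a.sum(), b / b.sum()
    return float(np.abs(pa - pb).sum())

# Numerator database: scores between pairs of simulated same-source samples.
numerator_scores = [score(simulate_sample(), simulate_sample())
                    for _ in range(2000)]
print("same-source score: mean %.3f, 95th pct %.3f"
      % (np.mean(numerator_scores), np.percentile(numerator_scores, 95)))
```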
Systems Engineering | 2015
Scott L. Rosen; Christopher P. Saunders; Samar K. Guharay
With increasing complexity of real-world systems, especially for continuously evolving scenarios, systems analysts encounter a major challenge: the modeling techniques that capture the detailed system characteristics defining input-output relationships become very complex and require long execution times. In this situation, constructing approximations of the simulation model via metamodeling alleviates long run times and the need for large computational resources; it also provides a means to aggregate a simulation's multiple outputs of interest into a single decision-making metric. The method described here leverages simulation metamodeling to map across the three basic SE metric levels, namely, from measures of performance to measures of effectiveness to a single figure of merit. Using metamodels to map multilevel system measures in this way supports rapid decision making. The results from a case study demonstrate the merit of the method. Several metamodeling techniques are compared, and bootstrap error analysis and the predicted residual sum of squares (PRESS) statistic are discussed as means to evaluate the standard error and the error due to bias.
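A hedged sketch of this workflow: fit a cheap polynomial surrogate to expensive simulation outputs, aggregate the predicted measures into one figure of merit, and bootstrap the fit to gauge error. The "simulation", basis, and weights below are invented stand-ins, not the paper's models.

```python
# Polynomial metamodel over simulation outputs, with bootstrap error.
import numpy as np

rng = np.random.default_rng(5)

def expensive_simulation(x):
    """Stand-in for a long-running simulation: inputs -> two MOPs."""
    return np.array([np.sin(x[0]) + 0.1 * x[1], x[0] * x[1] + 0.05 * x[0]])

# Design points and simulated outputs.
X = rng.uniform(0, 2, size=(60, 2))
Y = np.array([expensive_simulation(x) for x in X])

def basis(X):
    """Quadratic polynomial basis in two inputs."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(basis(X), Y, rcond=None)
w = np.array([0.7, 0.3])                   # assumed MOP-to-FoM weights

def figure_of_merit(x):
    """Aggregate predicted MOPs at x into one decision metric."""
    pred = (basis(np.atleast_2d(x)) @ coef)[0]
    return float(pred @ w)

# Bootstrap the design points to gauge metamodel error at a query point.
query = np.array([1.0, 1.0])
boots = []
for _ in range(200):
    idx = rng.integers(0, len(X), len(X))
    c, *_ = np.linalg.lstsq(basis(X[idx]), Y[idx], rcond=None)
    boots.append(float((basis(np.atleast_2d(query)) @ c)[0] @ w))
print(f"FoM ~ {figure_of_merit(query):.3f} +/- {np.std(boots):.3f}")
```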
The Annals of Applied Statistics | 2011
Christopher P. Saunders; Linda J. Davis; Andrea C. Lamas; John J. H. Miller; Donald T. Gantz
In this study we illustrate a statistical approach to questioned document examination. Specifically, we consider the construction of three classifiers that predict the writer of a sample document based on categorical data. To evaluate these classifiers, we use a data set with a large number of writers and a small number of writing samples per writer. Since the resulting classifiers were found to have near-perfect accuracy under leave-one-out cross-validation, leaving little basis for distinguishing among them, we propose a novel Bayesian-based cross-validation method for evaluating the classifiers.
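For concreteness, the sketch below runs leave-one-out cross-validation, the evaluation scheme whose near-perfect results motivated the paper, around a generic multinomial naive Bayes classifier; the classifier and data are stand-ins, not the paper's three classifiers.

```python
# Leave-one-out CV: hold out each document, train on the rest, predict it.
import numpy as np

rng = np.random.default_rng(6)

# 30 writers, 3 documents each, 8 categorical feature counts per document.
n_writers, docs_per, n_cats = 30, 3, 8
profiles = rng.dirichlet(np.ones(n_cats), size=n_writers)
X = np.vstack([rng.multinomial(200, profiles[w], size=docs_per)
               for w in range(n_writers)])
y = np.repeat(np.arange(n_writers), docs_per)

def predict(train_X, train_y, x):
    """Multinomial naive Bayes with add-one smoothing."""
    best, best_ll = None, -np.inf
    for w in np.unique(train_y):
        counts = train_X[train_y == w].sum(axis=0) + 1.0
        logp = np.log(counts / counts.sum())
        ll = float(x @ logp)
        if ll > best_ll:
            best, best_ll = w, ll
    return best

hits = sum(predict(np.delete(X, i, axis=0), np.delete(y, i), X[i]) == y[i]
           for i in range(len(y)))
print(f"LOOCV accuracy: {hits / len(y):.3f}")
```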
Analytical Chemistry | 2014
Joshua R. Dettman; Alyssa A. Cassabaum; Christopher P. Saunders; Deanna L. Snyder; JoAnn Buscaglia
Copper may be recovered as evidence in high-profile cases such as thefts and improvised explosive device incidents; comparison of copper samples from the crime scene and those associated with the subject of an investigation can provide probative associative evidence and investigative support. A solution-based inductively coupled plasma mass spectrometry method for measuring trace element concentrations in high-purity copper was developed using standard reference materials. The method was evaluated for its ability to use trace element profiles to statistically discriminate between copper samples considering the precision of the measurement and manufacturing processes. The discriminating power was estimated by comparing samples chosen on the basis of the copper refining and production process to represent the within-source (samples expected to be similar) and between-source (samples expected to be different) variability using multivariate parametric- and empirical-based data simulation models with bootstrap resampling. If the false exclusion rate is set to 5%, >90% of the copper samples can be correctly determined to originate from different sources using a parametric-based model and >87% with an empirical-based approach. These results demonstrate the potential utility of the developed method for the comparison of copper samples encountered as forensic evidence.
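The discrimination logic can be sketched as follows: bootstrap mean trace-element profiles, compare within-source and between-source distances, and fix the threshold at a 5% false-exclusion rate. Element concentrations, the distance metric, and the error model are synthetic assumptions, not the measured copper data or the paper's parametric/empirical models.

```python
# Bootstrap within- vs. between-source comparison of trace-element profiles.
import numpy as np

rng = np.random.default_rng(7)
n_elements = 10

def sample_source(mu, n=30):
    """Replicate measurements of one copper source (normal error model)."""
    return rng.normal(mu, 0.05 * np.abs(mu) + 1e-3, size=(n, n_elements))

mu_a = rng.uniform(0.1, 5.0, n_elements)
mu_b = rng.uniform(0.1, 5.0, n_elements)
A, B = sample_source(mu_a), sample_source(mu_b)

def boot_distances(X1, X2, n_boot=2000):
    """Bootstrap Euclidean distances between resampled mean profiles."""
    d = np.empty(n_boot)
    for i in range(n_boot):
        m1 = X1[rng.integers(0, len(X1), len(X1))].mean(axis=0)
        m2 = X2[rng.integers(0, len(X2), len(X2))].mean(axis=0)
        d[i] = np.linalg.norm(m1 - m2)
    return d

within = boot_distances(A, A)                # same source vs. itself
between = boot_distances(A, B)               # different sources
threshold = np.percentile(within, 95)        # 5% false-exclusion rate
power = float(np.mean(between > threshold))  # correct-exclusion rate
print(f"threshold {threshold:.3f}; different-source detection {power:.2%}")
```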