Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David J. John is active.

Publication


Featured research published by David J. John.


Technical Symposium on Computer Science Education | 2006

Bioinformatics and computing curriculum: a new model for interdisciplinary courses

Jacquelyn S. Fetrow; David J. John

An interdisciplinary bioinformatics course has been taught at Wake Forest for three semesters. Undergraduate and graduate students from multiple academic specialties are brought together in a single classroom. In addition to focusing on traditional bioinformatics topics, this course concentrates on interdisciplinary collaboration in the in-class exercises and the research-based course project. A team of faculty from complementary disciplines teaches the course. Productive communication is one key goal of this course.


Technical Symposium on Computer Science Education | 1992

Integration of parallel computation into introductory computer science

David J. John

For the beginning student of computer science, parallel computation appears intuitively to be a simple and natural extension of the classical von Neumann model. The computational power gained from using more than one processor is “obvious”. Only through formal study of parallel computation and use of a parallel system do the power, as well as the problems of synchronization and communication, truly become apparent. Discussions of parallel algorithms and parallel program design should be included in introductory courses. The opportunity to construct and implement parallel algorithms must be a part of the laboratory assignments as well. These experiences will give valuable insight into the power and challenge of parallel computation.
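The laboratory exercise argued for here can be quite small. Below is a minimal sketch in Python (a hypothetical example, not from the paper) that sums an array across several worker processes; the blocking map call is the synchronization point, and the chunking scheme and worker count are illustrative assumptions.

```python
# Hypothetical introductory lab exercise: parallel summation.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker independently sums its slice of the data."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Split the data into one contiguous chunk per worker.
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    chunks[-1].extend(data[n_workers * size:])  # remainder goes to the last worker

    with Pool(n_workers) as pool:
        # The map is the synchronization point: the main process blocks
        # until every worker has returned its partial result.
        total = sum(pool.map(partial_sum, chunks))
    assert total == sum(data)
    print(total)
```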


Proceedings of the 9th Annual Cyber and Information Security Research Conference | 2014

An initial framework for evolving computer configurations as a moving target defense

Brian Lucas; Errin W. Fulp; David J. John; Daniel A. Cañas

An evolutionary process encourages a system to change, and hopefully improve, based on environmental feedback. When applied to a computer system, an evolutionary-inspired process can be used to discover computer configurations that are different and potentially more secure. These configurations can be instantiated at different times to create a Moving Target Defense (MTD), where attackers must contend with a system that constantly changes and improves. This paper describes an initial Python-based framework that creates an evolutionary-inspired MTD for computers. The framework consists of three interacting components. An evolutionary component discovers computer configurations based on previous configurations. Another component vets new configurations by instantiating them using virtual machines. Finally, a third process uses a combination of penetration software and reports from actual attacks to assess the configurations. The framework has been used to provide an MTD for Red Hat® installed Apache™ web servers. Experimental results indicate the servers are on average diverse, functional, and increasingly more secure.
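A minimal sketch of the three interacting components, under loud assumptions: the configuration options, mutation scheme, and toy fitness score below are illustrative stand-ins, whereas the actual framework instantiates real virtual machines and runs penetration software.

```python
# Sketch of the evolve -> vet -> assess loop; all parameter names and
# scores are hypothetical stand-ins, not the paper's implementation.
import random

OPTIONS = {
    "ServerTokens": ["Full", "Prod", "Minimal"],
    "KeepAlive":    ["On", "Off"],
    "Timeout":      [30, 60, 120],
}

def mutate(config):
    """Evolutionary component: derive a child configuration by changing
    one randomly chosen parameter of the parent."""
    child = dict(config)
    key = random.choice(list(OPTIONS))
    child[key] = random.choice(OPTIONS[key])
    return child

def vet(config):
    """Stand-in for the vetting component, which instantiates the
    configuration in a virtual machine and checks it still functions."""
    return True

def assess(config):
    """Stand-in for the assessment component (penetration software and
    attack reports). Toy score: count hardened settings; higher is better."""
    hardened = {"ServerTokens": "Minimal", "KeepAlive": "Off", "Timeout": 30}
    return sum(config[k] == v for k, v in hardened.items())

population = [{k: random.choice(v) for k, v in OPTIONS.items()} for _ in range(8)]
for generation in range(20):
    children = [c for c in (mutate(p) for p in population) if vet(c)]
    # Keep the most secure vetted configurations for the next generation.
    population = sorted(population + children, key=assess, reverse=True)[:8]
print(population[0])
```

The loop mirrors the structure the abstract names: mutate derives new configurations from previous ones, vet filters out non-functional ones, and assess ranks the survivors so the population trends toward more secure configurations.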


Bioinformatics and Bioengineering | 2007

Metropolis-Hastings Algorithm and Continuous Regression for finding Next-State Models of Protein Modification using Information Scores

David J. John; Jacquelyn S. Fetrow; James L. Norris

The reverse engineering of protein modification networks from protein modification time sequence data is an important and challenging problem. We utilize continuous regression-based techniques to generate next-state models. Three statistical diagnostics are applied to test the stationary Markov, the equal time interval influence, and the continuous regression assumptions. The standard Metropolis-Hastings algorithm is modified to use a focused model initialization based on Pearson correlations. Additionally, an infrequent step forcing multiple changes, as opposed to the predominant single change, is incorporated into the standard Metropolis-Hastings algorithm, which improves the opportunity to escape local minima of information criterion scores. Both Bayesian and corrected Akaike information criteria are used in conjunction with the Metropolis-Hastings algorithm. Real protein modification data often involves only a small number of proteins, which allows a direct computation of some models. Our techniques and diagnostics are applied to two sets of time course protein modification data to produce models which score optimally with respect to information criteria. These techniques and diagnostics lend themselves to applications with similar data sets.
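A minimal sketch of the modified Metropolis-Hastings search on toy data; the linear regression form, the use of BIC as the information score, and the 10% multi-change frequency are assumptions made for illustration.

```python
# Sketch: search over parent sets for one target protein's next state.
import numpy as np

rng = np.random.default_rng(0)
T, P = 12, 4                     # time points, proteins (toy sizes)
X = rng.normal(size=(T, P))      # stand-in modification time series

def bic(target, parents):
    """BIC of regressing protein `target` at time t+1 on `parents` at t."""
    y = X[1:, target]
    A = np.column_stack([X[:-1, list(parents)], np.ones(T - 1)])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    k = len(parents) + 1
    return (T - 1) * np.log(resid @ resid / (T - 1)) + k * np.log(T - 1)

def propose(parents, multi_prob=0.1):
    """Usually toggle one candidate parent; infrequently toggle several,
    which improves the chance of escaping local score minima."""
    n_flips = 3 if rng.random() < multi_prob else 1
    new = set(parents)
    for j in rng.choice(P, size=n_flips, replace=False):
        new ^= {int(j)}
    return new

target = 0
parents = {1}                # a focused initialization would use Pearson
score = bic(target, parents) # correlations; a fixed start suffices here
for _ in range(2000):
    cand = propose(parents)
    s = bic(target, cand)
    if s < score or rng.random() < np.exp(score - s):  # lower BIC is better
        parents, score = cand, s
print(sorted(parents), round(score, 2))
```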


ACM Southeast Regional Conference | 2007

Additional limitations of the clustering validation method figure of merit

Amy L. Olex; David J. John; Elizabeth M. Hiltbold; Jacquelyn S. Fetrow

Clustering analysis is an important exploratory tool that aids in the analysis and organization of genomic data. Each biological data set has different characteristics, and deciding which clustering method is appropriate, and how many clusters are optimal, on a dataset-by-dataset basis can be problematic. The Figure of Merit (FOM) is a quantitative clustering validation method designed to aid in these decisions. While FOM is useful, it does have limitations which must be considered when using it. This research shows that the FOM is biased toward Euclidean distance. Performing FOM analysis on clusters created by using Pearson's correlation coefficient as a similarity measure is shown to be non-optimal and mathematically inadvisable. A new, correlation coefficient-biased version of the FOM has been developed, and preliminary results indicate that this new FOM is effectively biased toward clusters generated using the correlation coefficient.
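For reference, the standard (Euclidean) figure of merit follows a leave-one-condition-out scheme: hold out one condition, cluster the genes on the rest, then measure within-cluster deviation on the held-out condition. The sketch below assumes SciPy's kmeans2 for the clustering step and uses random toy data rather than the paper's.

```python
# Sketch of the Euclidean figure of merit for choosing k.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
data = rng.normal(size=(60, 8))          # 60 genes x 8 conditions (toy)

def fom(data, k):
    """Aggregate FOM: leave each condition out, cluster on the rest, and
    score within-cluster deviation on the held-out condition."""
    n_genes, n_cond = data.shape
    total = 0.0
    for e in range(n_cond):
        rest = np.delete(data, e, axis=1)
        _, labels = kmeans2(rest, k, seed=2, minit="++")
        sq = 0.0
        for c in np.unique(labels):
            held_out = data[labels == c, e]
            sq += np.sum((held_out - held_out.mean()) ** 2)
        total += np.sqrt(sq / n_genes)
    return total

for k in (2, 3, 4, 5):
    print(k, round(fom(data, k), 3))     # smaller FOM = better fit
```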


ACM Southeast Regional Conference | 2005

Heuristics for dependency conjectures in proteomic signaling pathways

Edward E. Allen; Jacquelyn S. Fetrow; David J. John; Stan J. Thomas

A key issue in the study of protein signaling networks is understanding the relationships among proteins in the network. Understanding these relationships in the context of a network is one of the major challenges for modern biology [2, 6]. In the laboratory, a time series of protein modification measurements is taken so that relationships among the activations can be conjectured. Laubenbacher and Stigler [5] have developed an algorithm to make conjectures concerning gene expression. Their algorithm treats the relations as variables in polynomials, using techniques based in computational algebra. This paper focuses on heuristics for applying their method to conjecture dependencies between proteins in signal transduction networks.
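The full Laubenbacher-Stigler method interpolates polynomial models over a finite field; the sketch below shows only the discretization step and a simple pairwise consistency heuristic in that spirit. The quantile discretization and the toy data are assumptions.

```python
# Sketch: discretize time series to F_3, then test, per pair, whether
# the observed transitions are consistent with "protein j's next state
# is a function of protein i's current state".
import numpy as np

rng = np.random.default_rng(3)
series = rng.normal(size=(10, 3)).cumsum(axis=0)   # toy time series

def discretize(col, levels=3):
    """Map a real-valued series onto {0, 1, 2} by rank quantiles."""
    ranks = col.argsort().argsort()
    return (ranks * levels // len(col)).astype(int)

states = np.column_stack([discretize(series[:, p]) for p in range(3)])

def consistent(i, j):
    """True if no state of protein i maps to two different next states
    of protein j anywhere in the time course."""
    seen = {}
    for t in range(len(states) - 1):
        s, nxt = states[t, i], states[t + 1, j]
        if seen.setdefault(s, nxt) != nxt:
            return False
    return True

for i in range(3):
    for j in range(3):
        if i != j and consistent(i, j):
            print(f"conjecture: protein {i} may influence protein {j}")
```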


BMC Bioinformatics | 2012

Bayesian probabilistic network modeling from multiple independent replicates

Kristopher L. Patton; David J. John; James L. Norris

Often protein (or gene) time-course data are collected for multiple replicates. Each replicate generally has sparse data, with the number of time points being less than the number of proteins. Usually each replicate is modeled separately. Here, however, all the information in each of the replicates is used to make a composite inference about signal networks. The composite inference comes from combining well structured Bayesian probabilistic modeling with a multi-faceted Markov chain Monte Carlo algorithm. Based on simulations which investigate many different types of network interactions and experimental variabilities, the composite examination uncovers many important relationships within the networks. In particular, when an edge's partial correlation between two proteins is at least moderate, the composite's posterior probability is large.
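The quantity behind the final claim, an edge's partial correlation pooled across replicates, can be computed directly. The sketch below is only that computation, not the paper's Bayesian MCMC over network structures; the data shapes (fewer time points than proteins, three replicates) and the single control variable are illustrative assumptions.

```python
# Sketch: pooled partial correlation for one candidate edge.
import numpy as np

rng = np.random.default_rng(4)
# Three replicates, each with fewer time points (6) than proteins (8).
replicates = [rng.normal(size=(6, 8)) for _ in range(3)]

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the control vector z."""
    def residual(v):
        A = np.column_stack([z, np.ones_like(z)])
        return v - A @ np.linalg.lstsq(A, v, rcond=None)[0]
    rx, ry = residual(x), residual(y)
    return float(np.dot(rx, ry) / (np.linalg.norm(rx) * np.linalg.norm(ry)))

# Pool all replicates for proteins 0 and 1, controlling for protein 2.
x = np.concatenate([r[:, 0] for r in replicates])
y = np.concatenate([r[:, 1] for r in replicates])
z = np.concatenate([r[:, 2] for r in replicates])
print(round(partial_corr(x, y, z), 3))
```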


International Conference on Bioinformatics | 2010

Examining effects of variability on systems biology modeling algorithms

Rachel A. Black; David J. John; Jacquelyn S. Fetrow; James L. Norris

Algorithms that construct protein interaction models are sensitive to variation in the experimentally derived data presented to them. Variation is introduced in the biology, the experiment, the measurement and the algorithm. This paper introduces a methodology for the analysis of the sensitivity of a given modeling algorithm to the time and individual variation in a set of time series data. This paper's generated replicates simulate technical replicates conducted under similar conditions. This procedure can be applied to any interaction modeling algorithm and data set. It is shown that the algorithmic variation introduced by a specific stochastic modeling algorithm, the Continuous Bayesian method, is minimal. Furthermore, it is shown for the Continuous Bayesian method that if replicate sets differ by no more than 5%, then there is high expectation that the resulting models will be highly correlated. If the replicate data differ by more than 20%, then there is small expectation of strong correlation. Specific statistical tests for generated model differences under different perturbations are presented.
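The perturbation-and-correlation procedure can be sketched as follows; a plain correlation network stands in for the Continuous Bayesian method, and the Gaussian noise model scaled to a percentage of the data's spread is an assumption.

```python
# Sketch: perturb a replicate, rebuild the model, and correlate the
# resulting edge scores against the unperturbed model's.
import numpy as np

rng = np.random.default_rng(5)
base = rng.normal(size=(12, 5))                    # toy replicate

def edge_scores(data):
    """Flattened upper triangle of the absolute correlation matrix."""
    c = np.corrcoef(data.T)
    return np.abs(c[np.triu_indices_from(c, k=1)])

def perturbed_model_corr(pct):
    """Correlation between the base model's and a perturbed model's edges."""
    noise = rng.normal(scale=pct * base.std(), size=base.shape)
    return float(np.corrcoef(edge_scores(base), edge_scores(base + noise))[0, 1])

for pct in (0.05, 0.20):
    print(f"{pct:.0%} perturbation -> model correlation "
          f"{perturbed_model_corr(pct):.3f}")
```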


ACM Southeast Regional Conference | 2007

The shuffle index and evaluation of models of signal transduction pathways

Edward E. Allen; Liyang Diao; Jacquelyn S. Fetrow; David J. John; Richard F. Loeser; Leslie B. Poole

The development of algorithms that conjecture proteomic networks from sparse time series laboratory data is an open problem with much current interest. The development of indices that measure how well the conjectured proteomic network matches a literature model is also an open problem. In this paper, we apply a computational algebra algorithm ([1, 2, 3]) to chondrocyte signaling data ([14]). In order to compare our model to the literature, we combine data from protein isoforms or from proteins that have been phosphorylated at different sites by summing the associated data measurements. The algorithm produces an ordered list of network edges. The resulting cotemporal model is compared to a composite next-state model derived from Signal Transduction Knowledge Environment (STKE) sources. A shuffle index is used to determine how these results from the computational algorithm compare to the composite network.
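The abstract does not define the shuffle index precisely; as a hedged illustration, one natural variant counts the adjacent transpositions needed to move every literature-supported edge to the front of the algorithm's ordered edge list. The edges below are hypothetical.

```python
# Sketch of one plausible shuffle-index variant: the number of
# (non-literature, literature) inversions in the ordered edge list.
def shuffle_index(ordered_edges, literature_edges):
    """Smaller means the conjectured ordering agrees better with the
    literature model."""
    swaps = 0
    non_lit_seen = 0
    for edge in ordered_edges:
        if edge in literature_edges:
            swaps += non_lit_seen   # must hop over each earlier non-lit edge
        else:
            non_lit_seen += 1
    return swaps

ordered = [("A", "B"), ("C", "D"), ("B", "C"), ("A", "D")]  # hypothetical
literature = {("A", "B"), ("B", "C")}
print(shuffle_index(ordered, literature))   # -> 1
```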


ACM Southeast Regional Conference | 2006

Reconstructing networks using co-temporal functions

Edward E. Allen; Anthony Pecorella; Jacquelyn S. Fetrow; David J. John; William H. Turkett

Reconstructing networks from time series data is a difficult inverse problem. We apply two methods to this problem using co-temporal functions. Co-temporal functions capture mathematical invariants over time series data. Two modeling techniques for co-temporal networks, one based on algebraic techniques and the other on Bayesian inference, are compared and contrasted on simulated biological network data.
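To make "mathematical invariants over time series data" concrete, the sketch below tests one simple co-temporal relation: that one series is a fixed affine function of another at every time point. The affine family and the tolerance are assumptions, not the paper's function class.

```python
# Sketch: test a candidate co-temporal invariant between two series.
import numpy as np

def cotemporal_affine(x, y, tol=1e-6):
    """Fit y ~ a*x + b once, then require the relation to hold at every
    time point; return (a, b) if it does, else None."""
    a, b = np.polyfit(x, y, 1)
    return (a, b) if np.all(np.abs(y - (a * x + b)) < tol) else None

t = np.linspace(0, 1, 20)
x = np.sin(t)
print(cotemporal_affine(x, 3 * x + 2))   # invariant holds: (3.0, 2.0)
print(cotemporal_affine(x, x ** 2))      # no affine invariant: None
```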

Collaboration


Dive into David J. John's collaborations.

Top Co-Authors

Amy L. Olex

Wake Forest University
