Publication


Featured research published by Timothy L. Andersen.


International Conference on Document Analysis and Recognition | 2005

Text degradations and OCR training

Elisa H. Barney Smith; Timothy L. Andersen

Printing and scanning of text documents introduces degradations to the characters which can be modeled. Interestingly, certain combinations of the parameters that govern the degradations introduced by the printing and scanning process affect characters in such a way that the degraded characters have a similar appearance, while other degradations leave the characters looking very different. It is well known that, generally speaking, a test set that more closely matches a training set is recognized with higher accuracy than one that matches the training set less well. Likewise, classifiers tend to perform better on data sets that have lower variance. This paper explores an analytical method that uses a formal printer/scanner degradation model to identify the similarity between groups of degraded characters. This similarity information is then shown to improve the recognition accuracy of a classifier through model-directed choice of training set data.
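As a rough illustration of the kind of degradation described above, the sketch below blurs a binary glyph and re-thresholds it; the box-blur width and threshold merely stand in for the parameters of a formal printer/scanner model (a hypothetical simplification, not the paper's actual model):

```python
import numpy as np

def degrade(glyph, blur_width=1, threshold=0.5):
    """Simulate print/scan degradation of a binary glyph image:
    blur with a (2*blur_width+1)^2 box kernel, then re-binarize.
    The (blur_width, threshold) pair plays the role of the model
    parameters discussed in the abstract (illustrative only)."""
    k = 2 * blur_width + 1
    padded = np.pad(glyph.astype(float), blur_width, mode="edge")
    blurred = np.zeros(glyph.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + glyph.shape[0], dx:dx + glyph.shape[1]]
    blurred /= k * k
    return (blurred >= threshold).astype(np.uint8)

# A solid 4x4 square glyph; nearby parameter settings can produce
# visually similar degraded glyphs, distant ones very different shapes.
glyph = np.zeros((8, 8), dtype=np.uint8)
glyph[2:6, 2:6] = 1
a = degrade(glyph, blur_width=1, threshold=0.5)
```

Sweeping (blur_width, threshold) over a grid and clustering the resulting glyphs by appearance is the intuition behind grouping degradations for training set selection.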


International Symposium on Neural Networks | 1999

Cross validation and MLP architecture selection

Timothy L. Andersen; Tony R. Martinez

The performance of cross validation (CV) based MLP architecture selection is examined using 14 real-world problem domains. When testing many different network architectures, the results show that CV is only slightly more likely than random selection to choose the optimal network architecture, and that the strategy of simply using the simplest available network architecture performs better than CV in this case. Experimental evidence suggests several reasons for the poor performance of CV. In addition, three general strategies which lead to a significant increase in the performance of CV are proposed. While this paper focuses on using CV to select the optimal MLP architecture, the strategies are also applicable when CV is used to select between several different learning models, whether the models are neural networks, decision trees, or other types of learning algorithms. When using these strategies, the average generalization performance of the network architecture which CV selects is significantly better than the performance of several other well-known machine learning algorithms on the data sets tested.
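The selection procedure examined above can be sketched as a cross-validation loop over candidate hidden-layer sizes. This example uses scikit-learn's MLPClassifier on synthetic data as an assumed stand-in; it is not the paper's implementation, data sets, or candidate pool:

```python
import warnings
import numpy as np
from sklearn.exceptions import ConvergenceWarning
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

warnings.filterwarnings("ignore", category=ConvergenceWarning)

# Synthetic two-class problem (placeholder for a real domain).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Candidate architectures, smallest first; the paper's "simplest
# available architecture" baseline would just pick the first entry.
candidates = [(2,), (8,), (32,)]
scores = {}
for hidden in candidates:
    net = MLPClassifier(hidden_layer_sizes=hidden, max_iter=300, random_state=0)
    scores[hidden] = cross_val_score(net, X, y, cv=5).mean()

# CV-based selection: keep the architecture with the best mean score.
best = max(scores, key=scores.get)
```

The paper's finding is that `best` chosen this way is often no better than a random pick from `candidates`, which motivates its proposed strategies for making CV-based selection more reliable.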


PLOS Computational Biology | 2012

Accessible High-Throughput Virtual Screening Molecular Docking Software for Students and Educators

Reed B. Jacob; Timothy L. Andersen; Owen M. McDougal

We survey low-cost, high-throughput virtual screening (HTVS) computer programs for instructors who wish to demonstrate molecular docking in their courses. Because HTVS programs are a useful adjunct to the time-consuming and expensive wet-bench experiments necessary to discover new drug therapies, the topic of molecular docking is core to the instruction of biochemistry and molecular biology. The availability of HTVS programs, coupled with decreasing costs and advances in computer hardware, has made computational approaches to drug discovery possible within institutional and non-profit budgets. This paper focuses on HTVS programs with graphical user interfaces (GUIs) that use either DOCK or AutoDock for docking predictions. We highlight DockoMatic, PyRx, DockingServer, and MOLA because their utility has been proven by the research community, they are free or affordable, and they operate on a range of computer platforms.


Artificial Life | 2009

Shape homeostasis in virtual embryos

Timothy L. Andersen; Richard D. Newman; Tim Otter

We have constructed a computational platform suitable for examining emergence of shape homeostasis in simple three-dimensional cellular systems. An embryo phenotype results from a developmental process starting with a single cell and its genome. When coupled to an evolutionary search, this platform can evolve embryos with particular stable shapes and high capacity for self-repair, even though repair is not genetically encoded or part of the fitness criteria. With respect to the genome, embryo shape and self-repair are emergent properties that arise from complex interactions among cells and cellular components via signaling and gene regulatory networks, during development or during repair. This report analyzes these networks and the underlying mechanisms that control embryo growth, organization, stability, and robustness to injury.


BMC Research Notes | 2010

DockoMatic - automated ligand creation and docking

Casey W. Bullock; Reed B. Jacob; Owen M. McDougal; Greg Hampikian; Timothy L. Andersen

Background: The application of computational modeling to rationally design drugs and characterize macromolecular receptors has proven increasingly useful due to the accessibility of computing clusters and clouds. AutoDock is a well-known and powerful software program used to model ligand-to-receptor binding interactions. In its current version, AutoDock requires significant amounts of user time to set up and run jobs and to collect results. This paper presents DockoMatic, a user-friendly Graphical User Interface (GUI) application that eases and automates the creation and management of AutoDock jobs for high-throughput screening of ligand-to-receptor interactions.

Results: DockoMatic allows the user to invoke and manage AutoDock jobs on a single computer or cluster, including jobs for evaluating secondary ligand interactions. It also automates the process of collecting, summarizing, and viewing results. In addition, DockoMatic automates the creation of peptide ligand .pdb files from strings of single-letter amino acid abbreviations.

Conclusions: DockoMatic significantly reduces the complexity of managing multiple AutoDock jobs by facilitating ligand and AutoDock job creation and management.
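A minimal sketch of the two automation steps mentioned above: expanding a single-letter amino acid string into residue names (the first step toward a peptide .pdb file) and generating one AutoDock command per ligand. The receptor/ligand labels and file names are hypothetical, and DockoMatic's real job handling is far more involved:

```python
# Standard amino-acid one-letter to three-letter code table.
AA3 = {"A": "ALA", "C": "CYS", "D": "ASP", "E": "GLU", "F": "PHE",
       "G": "GLY", "H": "HIS", "I": "ILE", "K": "LYS", "L": "LEU",
       "M": "MET", "N": "ASN", "P": "PRO", "Q": "GLN", "R": "ARG",
       "S": "SER", "T": "THR", "V": "VAL", "W": "TRP", "Y": "TYR"}

def residue_list(sequence):
    """Expand a single-letter peptide string into residue names,
    the first step toward emitting a ligand .pdb file."""
    return [AA3[c] for c in sequence.upper()]

def autodock_jobs(receptor, ligands):
    """Return one shell command per receptor/ligand pairing.
    The .dpf/.dlg file names and the autodock4 invocation are
    illustrative; a job manager would dispatch and monitor these."""
    return [f"autodock4 -p {receptor}_{lig}.dpf -l {receptor}_{lig}.dlg"
            for lig in ligands]

jobs = autodock_jobs("achbp", ["imi", "pnia"])
```

Batch tools like DockoMatic essentially generate many such parameter/log file pairs, dispatch the jobs, and then parse the resulting logs into a summary.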


Journal of Chemical Information and Modeling | 2013

DockoMatic 2.0: High Throughput Inverse Virtual Screening and Homology Modeling

Casey W. Bullock; Nicolas Cornia; Reed B. Jacob; Andrew Remm; Thomas Peavey; Ken Weekes; Chris Mallory; Julia Thom Oxford; Owen M. McDougal; Timothy L. Andersen

DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly graphical user interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to (1) conduct high throughput inverse virtual screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently set up, start, and manage IVS experiments through the DockoMatic GUI by specifying receptor(s), ligand(s), grid parameter file(s), and docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by DockoMatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. TIM provides an interface that accesses the Basic Local Alignment Search Tool (BLAST) and MODELER programs and guides the user through the steps necessary to easily and efficiently create 3D homology models for biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third-party programs. DockoMatic is a free, comprehensive molecular docking software program for all levels of scientists in both research and education.


Journal of Computational Chemistry | 2011

DockoMatic: Automated Peptide Analog Creation for High Throughput Virtual Screening

Reed B. Jacob; Casey W. Bullock; Timothy L. Andersen; Owen M. McDougal

The purpose of this manuscript is threefold: (1) to describe an update to DockoMatic that allows the user to generate cyclic peptide analog structure files based on Protein Data Bank (PDB) files, (2) to test the accuracy of the peptide analog structure generation utility, and (3) to evaluate the high-throughput capacity of DockoMatic. The DockoMatic graphical user interface works with the Treepack program to create user-defined peptide analogs. To validate this approach, DockoMatic-produced cyclic peptide analogs were tested for three-dimensional structure consistency and binding affinity against four experimentally determined peptide structure files available in the Research Collaboratory for Structural Bioinformatics database. The peptides used to evaluate this new functionality were alpha-conotoxins ImI, PnIA, and their published analogs. Peptide analogs were generated by DockoMatic and tested for their ability to bind to X-ray crystal structure models of the acetylcholine binding protein from Aplysia californica. The results, comprising more than 300 simulations, demonstrate that DockoMatic predicts the binding energy of peptide structures to within 3.5 kcal/mol and the orientation of the bound ligand to within 1.8 Å root mean square deviation of experimental data. Evaluation of high-throughput virtual screening capacity demonstrated that DockoMatic can collect, evaluate, and summarize the output of 10,000 AutoDock jobs in less than 2 hours of computational time, while 100,000 jobs require approximately 15 hours and 1,000,000 jobs are estimated to take up to a week.
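The 1.8 Å figure above is a root mean square deviation between docked and experimental ligand coordinates. A minimal version of that metric, without the optimal superposition step a full structural comparison would typically include, looks like:

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two matched N x 3 sets of
    atomic coordinates (in Å). Assumes atoms are already paired and
    aligned; no rotational/translational superposition is performed."""
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

# Toy two-atom ligand: identical except one atom displaced 2 Å in z.
predicted = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
experimental = [[0.0, 0.0, 0.0], [1.0, 0.0, 2.0]]
```

Here `rmsd(predicted, experimental)` is sqrt((0 + 4) / 2) = sqrt(2) ≈ 1.41 Å, i.e. within the 1.8 Å tolerance reported above.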


Document Recognition and Retrieval | 2005

A Fourier-descriptor-based character recognition engine implemented under the Gamera open-source document-processing framework

Jared Hopkins; Timothy L. Andersen

This paper discusses the implementation of an engine for performing optical character recognition of bi-tonal images using the Gamera framework, an existing open-source framework for building document analysis applications. The OCR engine uses features that are based on the Fourier descriptor to distinguish characters, and is designed to handle character images that contain multiple boundaries. The algorithm works by assigning to each character image a signature that encodes the boundary types present in the image as well as the positional relationships between them. Under this approach, only images having the same signature are comparable. Effectively, a meta-classifier is used that first computes the signature of an input image and then dispatches the image to an underlying neural-network-based classifier trained to distinguish between images having that signature. The performance of the OCR engine is evaluated on a set of sample images taken from the newspaper domain, and compares well with other OCR engines. The source code for this engine and all supporting modules is currently available upon request, and will eventually be made available through an open-source project on the SourceForge website.
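A standard construction of Fourier descriptors for a single closed boundary is sketched below. The paper's engine handles multiple boundaries per character plus positional signatures, so this only illustrates the basic feature, not the full method:

```python
import numpy as np

def fourier_descriptors(boundary, n_coeffs=8):
    """Fourier descriptors of a closed boundary given as (x, y) points.
    The boundary is treated as a complex signal x + iy; the magnitudes
    of its low-frequency DFT coefficients, normalized by the first
    harmonic, are invariant to translation, rotation, scale, and
    starting point (a textbook construction; the paper's exact
    feature set may differ)."""
    pts = np.asarray(boundary, dtype=float)
    z = pts[:, 0] + 1j * pts[:, 1]
    mags = np.abs(np.fft.fft(z))
    # Drop the DC term (translation) and scale by the first harmonic.
    return mags[1:n_coeffs + 1] / mags[1]

# An 8-point square boundary, traversed counter-clockwise.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
fd = fourier_descriptors(square, n_coeffs=4)
```

Because translation only affects the discarded DC coefficient, a shifted copy of the same boundary yields the same descriptor vector, which is what makes the feature useful for character classification.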


International Conference on Document Analysis and Recognition | 2003

Features for neural net based region identification of newspaper documents

Timothy L. Andersen; Wei Zhang

Several features for neural network based document region identification are tested. Specifically, this paper examines features for non-text region identification. The neural network based region identification algorithm is a key component of a document recognition system that segments a document into regions, classifies them into text, graphic, photo, and other region types, and then uses this classification to guide the processing and analysis of the image. The input data are unusually challenging: low quality images of newspaper documents obtained from microfilmed archives. The results compare favorably with other results reported in the literature.


Archive | 1995

Using Evolutionary Computation to Generate Training Set Data for Neural Networks

Dan Ventura; Timothy L. Andersen; Tony R. Martinez

Most neural networks require a set of training examples in order to approximate a problem function. For many real-world problems, however, such a set of examples is unavailable. One such problem, involving feedback optimization of a computer network routing system, has motivated a general method of generating artificial training sets using evolutionary computation. This paper describes the method and demonstrates its utility by presenting promising results from applying it to an artificial problem similar to a real-world network routing optimization problem.
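The general idea, breeding candidate training inputs against an external feedback signal, can be sketched as a minimal elitist evolutionary loop. The fitness function here is a hypothetical placeholder, not the network-routing feedback used in the paper:

```python
import random

def evolve_training_set(fitness, n_examples=10, dims=3, generations=40, seed=0):
    """Minimal (mu + lambda)-style evolutionary loop that breeds a set
    of artificial training inputs, scoring each candidate with an
    external feedback function. A simplified sketch of the idea above,
    not the paper's algorithm."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dims)] for _ in range(n_examples)]
    for _ in range(generations):
        # Mutate every survivor, then keep the fittest individuals
        # from the combined parent + child pool (elitist selection).
        children = [[g + rng.gauss(0, 0.1) for g in parent] for parent in pop]
        pool = pop + children
        pool.sort(key=fitness, reverse=True)
        pop = pool[:n_examples]
    return pop

# Hypothetical feedback signal: prefer inputs near (0.5, 0.5, 0.5).
target = [0.5, 0.5, 0.5]
feedback = lambda x: -sum((a - b) ** 2 for a, b in zip(x, target))
examples = evolve_training_set(feedback)
```

The evolved `examples` (paired with the feedback values observed for them) could then serve as an artificial training set for a network when no labeled examples exist.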

Collaboration


Dive into Timothy L. Andersen's collaborations.

Top Co-Authors

Michael Rimer

Brigham Young University

Thomas Long

Boise State University
