
Publications


Featured research published by Sheridan K. Houghten.


IEEE Transactions on Information Theory | 2003

The extended quadratic residue code is the only (48,24,12) self-dual doubly-even code

Sheridan K. Houghten; Clement W. H. Lam; Larry H. Thiel; J. A. Parker

An extremal self-dual doubly-even binary (n,k,d) code has minimum weight d = 4⌊n/24⌋ + 4. Of such codes with length divisible by 24, the Golay code is the only (24,12,8) code, the extended quadratic residue code is the only known (48,24,12) code, and there is no known (72,36,16) code. One may partition the search for a (48,24,12) self-dual doubly-even code into three cases. A previous search assuming one of the cases found only the extended quadratic residue code. We examine the remaining two cases. Separate searches assuming each of the remaining cases found no codes, and thus the extended quadratic residue code is the only doubly-even self-dual (48,24,12) code.
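
For reference, the extremal bound quoted in the abstract works out as follows at the three lengths divisible by 24 that are mentioned (a worked restatement, added here; not part of the original abstract):

```latex
% The extremal bound d = 4*floor(n/24) + 4 evaluated at n = 24, 48, 72.
\[
\begin{aligned}
  n = 24 &: \; d = 4\lfloor 24/24 \rfloor + 4 = 8  && \text{(the Golay $(24,12,8)$ code)} \\
  n = 48 &: \; d = 4\lfloor 48/24 \rfloor + 4 = 12 && \text{(the extended quadratic residue $(48,24,12)$ code)} \\
  n = 72 &: \; d = 4\lfloor 72/24 \rfloor + 4 = 16 && \text{(no $(72,36,16)$ code is known)}
\end{aligned}
\]
```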


Computational Intelligence in Bioinformatics and Computational Biology | 2009

DNA error correcting codes: No crossover.

Daniel Ashlock; Sheridan K. Houghten

DNA error correcting codes over the edit metric create embeddable markers for sequencing projects that are tolerant of sequencing errors. When a sequence library has multiple sources for its sequences, the use of embedded markers permits tracking of sequence origin. Evolutionary algorithms are currently the best known technique for optimizing DNA error correcting codes. In this study we resolve the question of the utility of the crossover operator used in earlier studies on optimizing DNA error correcting codes. The crossover operator in question is found to be substantially counterproductive: a majority of crossover events produce results that violate the minimum-distance constraints required for error correction. A new algorithm, a form of modified evolution strategy, is tested and is found to locate codes of record size. The table of best known sizes for DNA error correcting codes is updated.
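
To make the minimum-distance issue concrete, here is a small self-contained sketch (hypothetical codewords and a generic codeword-level crossover, not taken from the paper) showing how crossing two valid edit-metric codes can yield a child that violates the distance constraint:

```python
# Hypothetical illustration (not the paper's code): crossing two valid
# edit-metric codes at the codeword level can yield a child code that
# violates the minimum-distance constraint required for error correction.
from itertools import combinations

def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance: substitutions, insertions, deletions each cost 1."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,               # delete cs
                            curr[j - 1] + 1,           # insert ct
                            prev[j - 1] + (cs != ct))) # substitute (or match)
        prev = curr
    return prev[-1]

def min_distance(code):
    """Smallest pairwise edit distance within a code."""
    return min(edit_distance(a, b) for a, b in combinations(code, 2))

# Two toy length-6 DNA codes, each with minimum edit distance well above 3.
parent_a = ["AAAAAA", "CCCCCC", "GGGGGG", "TTTTTT"]
parent_b = ["GGGGGT", "TTTTTC", "AAAAAG", "CCCCCA"]

# One-point crossover at the codeword level: keep the first half of one
# parent and take the second half of the other.
child = parent_a[:2] + parent_b[2:]

print(min_distance(parent_a))  # 6 -> valid
print(min_distance(parent_b))  # 6 -> valid
print(min_distance(child))     # 1 -> violates a minimum distance of 3
```

The child inherits "AAAAAA" from one parent and the near-duplicate "AAAAAG" from the other, which mirrors the abstract's observation that most crossover events break the distance constraint.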


Journal of Combinatorial Designs | 2001

There is no (46, 6, 1) block design

Sheridan K. Houghten; Larry H. Thiel; J. Janssen; Clement W. H. Lam

In this paper we show that a (46, 6, 1) design does not exist. This result was obtained by a computer search. In the incidence matrix of such a design, there must exist a “c4” configuration: 6 rows and 4 columns in which each pair of columns intersects exactly once, in distinct rows. There can also exist a “c5” configuration, with 10 rows and 5 columns, in which each pair of columns intersects exactly once, in distinct rows. Thus the search for (46, 6, 1) designs can be subdivided into two cases, the first assuming there is no “c5” and the second assuming there is a “c5”. After completing the searches for both cases, we found no (46, 6, 1) design.
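
For context (added here, not from the abstract): the standard counting conditions for a 2-(46, 6, 1) design are satisfied, so non-existence cannot be settled by divisibility arguments alone, which is why an exhaustive computer search was needed.

```latex
% Standard necessary conditions for a 2-(v, k, lambda) design with
% (v, k, lambda) = (46, 6, 1): the replication number r and the number of
% blocks b must both be integers, and here they are.
\[
  r = \frac{\lambda (v - 1)}{k - 1} = \frac{45}{5} = 9,
  \qquad
  b = \frac{\lambda \, v (v - 1)}{k (k - 1)} = \frac{46 \cdot 45}{6 \cdot 5} = 69 .
\]
```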


Information Theory Workshop | 2006

Construction of Optimal Edit Metric Codes

Sheridan K. Houghten; Daniel Ashlock; Jessie Lenarz

The edit distance between two strings is the minimal number of substitutions, deletions, or insertions required to transform one string into the other. An error correcting code over the edit metric includes features from deletion-correcting codes as well as the more traditional codes defined using the Hamming distance. Applications of edit metric codes include the creation of robust tags over the DNA alphabet. This paper explores the theory underlying edit metric codes for small alphabets. The size of a sphere about a word is heavily dependent on its block structure, that is, its partition into maximal subwords of a single symbol. This creates a substantial divergence from the theory for the Hamming metric. An optimal code is one with the maximum possible number of codewords for its length and minimum distance. We provide tables of bounds on code sizes for edit codes with short length and small alphabets, describe issues relating to exhaustive searches, and present several heuristics for constructing codes.
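
The definition in the first sentence corresponds to the classic dynamic-programming recurrence; a minimal Python sketch (added for illustration, not taken from the paper) is:

```python
# Minimal sketch: the textbook dynamic-programming computation of the
# edit (Levenshtein) distance defined above, over any alphabet.
def edit_distance(s: str, t: str) -> int:
    m, n = len(s), len(t)
    # prev[j] holds the distance between the current prefix of s and t[:j].
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete s[i-1]
                          curr[j - 1] + 1,     # insert t[j-1]
                          prev[j - 1] + cost)  # substitute or match
        prev = curr
    return prev[n]

if __name__ == "__main__":
    # One substitution plus one deletion separate these two DNA words.
    print(edit_distance("ACGTAC", "AGGTC"))  # -> 2
```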


International Journal of Bio-inspired Computation | 2013

Benchmark datasets for the DNA fragment assembly problem

Guillermo M. Mallén-Fullerton; James Alexander Hughes; Sheridan K. Houghten; Guillermo Fernández-Anaya

Many computational intelligence approaches have been used for the fragment assembly problem. However, comparison and analysis of these approaches is difficult due to the lack of standard benchmarks. Although similar datasets may be used as a starting point, there is not enough information to reproduce the exact overlap matrix for the fragments used by the various approaches, which creates a consistency problem. This paper presents a collection of benchmark datasets covering a wide range of fragment lengths, numbers of fragments, and sequence lengths, along with a description of the method used to produce them. A website has been created to maintain the datasets and the tables of results at http://chac.sis.uia.mx/fragbench/. Researchers are invited to add to the datasets by following the method described, as well as to submit results obtained by their algorithms on the benchmarks.
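
As an illustration of the kind of overlap information the benchmarks must pin down, the sketch below (hypothetical fragments; not the benchmark generator itself) computes a simple suffix-prefix overlap matrix:

```python
# Minimal sketch: an overlap matrix for fragment assembly, where entry (i, j)
# is the length of the longest suffix of fragment i matching a prefix of
# fragment j. The fragments here are made up for illustration.
def overlap(a: str, b: str) -> int:
    best = 0
    for k in range(1, min(len(a), len(b)) + 1):
        if a[-k:] == b[:k]:
            best = k
    return best

def overlap_matrix(fragments):
    n = len(fragments)
    return [[0 if i == j else overlap(fragments[i], fragments[j])
             for j in range(n)]
            for i in range(n)]

if __name__ == "__main__":
    frags = ["ACGTAC", "TACGGA", "GGATTC"]  # hypothetical fragments
    for row in overlap_matrix(frags):
        print(row)
```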


IEEE Symposium on Computational Intelligence in Cyber Security | 2009

Genetic algorithm cryptanalysis of a substitution permutation network

Joseph Alexander Brown; Sheridan K. Houghten; Beatrice M. Ombuki-Berman

We provide a preliminary exploration of the use of Genetic Algorithms (GAs) against a Substitution Permutation Network (SPN) cipher, with the aim of determining how to find weak keys. The size of the selected SPN, created by Stinson [1], makes it a suitable example for demonstrating the methodology and suitability of a GA-based attack. We divide the keys into groups, each of which is analyzed to determine which groups are weaker. Simple genetic operators are examined to show the suitability of GAs when applied to this problem. Results show the potential of GAs to provide automated or computer-assisted breaking of ciphers; the GA broke a subset of the keys using small input texts.


Computational Intelligence in Bioinformatics and Computational Biology | 2005

A Novel Variation Operator for More Rapid Evolution of DNA Error Correcting Codes.

Daniel Ashlock; Sheridan K. Houghten

Error correcting codes over the edit metric have been used as embedded DNA markers in at least one sequencing project. The algorithm used to construct those codes was an evolutionary algorithm with a fitness function of exponential time complexity. Presented here is a substantially faster evolutionary algorithm for locating error correcting codes over the edit metric that exhibits equivalent or only slightly inferior performance on test cases. The new algorithm can produce codes for parameters where the run-time of the earlier algorithm was prohibitive. The new algorithm is a novel type of evolutionary algorithm using a greedy algorithm to implement a variation operator. This variation operator is the sole variation operator used and has unary, binary, and k-ary forms. The unary and binary forms are compared, with the binary form found to be superior. Population size and the rate of introduction of random material by the variation operator are also studied; a high rate of introduction of random material and a small population size are found to be best.


BioSystems | 2012

On the synthesis of DNA error correcting codes.

Daniel Ashlock; Sheridan K. Houghten; Joseph Alexander Brown; John Orth

DNA error correcting codes over the edit metric consist of embeddable markers for sequencing projects that are tolerant of sequencing errors. When a genetic library has multiple sources for its sequences, the use of embedded markers permits tracking of sequence origin. This study compares different methods for synthesizing DNA error correcting codes. A new code-finding technique called the salmon algorithm is introduced and used to improve the sizes of the best known codes in five difficult cases of the problem, including the most studied case: length-six, distance-three codes. An updated table of the best known code sizes, with 36 improved values resulting from three different algorithms, is presented. Mathematical background results for the problem from multiple sources are summarized, and practical details that arise in application, including biological design and decoding, are also discussed.


Computational Intelligence in Bioinformatics and Computational Biology | 2010

Side effect machines for quaternary edit metric decoding

Joseph Alexander Brown; Sheridan K. Houghten; Daniel Ashlock

DNA edit metric codes are used as labels to track the origin of sequence data. This study is the first to treat sophisticated decoders for these error-correcting codes. Side effect machines can provide efficient decoding algorithms for such codes. Two methods for automatically producing decoding algorithms are presented; Side Effect Machines (SEMs), generalizations of finite state automata, are used in both. Single Classifier Machines (SCMs) use a single side effect machine to classify all words within a code. Locking Side Effect Machines (LSEMs) use multiple side effect machines to create a tree-structured iterated classification. This study examines these techniques, provides new decoders for existing codes, and presents ideas for best practices in creating these two types of edit metric decoders. Codes of the form (n, M, d)₄ are used in testing due to their suitability for bioinformatics problems; a group of (12, 54–56, 7)₄ codes is used as an example of the process.
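
As a rough illustration of the side-effect-machine idea (an assumption-laden sketch, not the paper's machines): a finite state machine reads a DNA word and records how often each state is visited, and those counts become a feature vector that a downstream classifier can use.

```python
# Assumption-laden sketch: a side effect machine modeled as a finite state
# machine over the DNA alphabet whose "side effect" is counting how often
# each state is visited while reading a word.
from collections import Counter

ALPHABET = "ACGT"

def run_sem(transitions, word, start=0):
    """transitions[state][symbol index] -> next state; returns visit counts."""
    counts = Counter({start: 1})
    state = start
    for ch in word:
        state = transitions[state][ALPHABET.index(ch)]
        counts[state] += 1
    return [counts[s] for s in range(len(transitions))]

if __name__ == "__main__":
    # A tiny hand-made 3-state machine (hypothetical); rows are states and
    # columns correspond to the input symbols A, C, G, T.
    transitions = [
        [0, 1, 2, 0],
        [1, 2, 0, 1],
        [2, 0, 1, 2],
    ]
    print(run_sem(transitions, "ACGTACGT"))  # feature vector of visit counts
```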


Nature and Biologically Inspired Computing | 2013

Recentering, reanchoring & restarting an evolutionary algorithm

James Alexander Hughes; Sheridan K. Houghten; Daniel Ashlock

Recentering-restarting evolutionary algorithms have been used successfully to evolve epidemic networks. This study develops multiple variations of this algorithm for the purpose of evaluating its use on ordered-gene problems. These variations are called recentering- or reanchoring-restarting evolutionary algorithms. Two different adaptive representations were explored, both of which use generating sets to produce local search operations; the degree of locality is controllable through program parameters. The variations and representations are applied to what may be considered the quintessential ordered-gene problem, the Travelling Salesman Problem. Two sets of experimental analyses were performed. The first used large problem instances to determine how well this algorithm performs in comparison to benchmarks obtained from the DIMACS TSP implementation challenge. The second used many small problem instances to determine whether any one of the recentering/reanchoring-restarting evolutionary algorithms outperforms the others. Variations of the recentering/reanchoring-restarting evolutionary algorithm were comparable to some of the best-performing computational intelligence algorithms. In studying the small problem instances, no significant trend was found to suggest that one variation of the baseline evolutionary algorithms or of the recentering/reanchoring-restarting evolutionary algorithms outperformed the others. This study shows that the new algorithms are very useful tools for improving results produced by other heuristics.
