Publication


Featured research published by Joseph Alexander Brown.


International C* Conference on Computer Science & Software Engineering | 2012

Evolving dungeon crawler levels with relative placement

Valtchan Valtchanov; Joseph Alexander Brown

Procedural Content Generation (PCG) is the process of automating the construction of media types for use in game development, the movie industry, and other creative fields. By approaching the process of media creation as a search for content which is evaluated to express desirable features in a well-defined manner, we are able to apply evolutionary techniques such as genetic programming. This can greatly decrease the effort required to bring a project to completion by allowing artists and developers to focus on guiding the creation process. The specific generation process addressed is that of map creation for dungeon crawler video games. The search method proposed allows artists and developers to guide the generation process by specifying a set of tiles that define the composition of each map, and a fitness function that defines its structure.
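
The division of labour the abstract describes is easy to picture in code. Below is a minimal sketch, assuming a tiny hypothetical tile set and an invented structural fitness; a simple (1+1) mutation loop stands in for the genetic programming system the paper actually uses.

```python
import random

TILES = ["wall", "floor", "door"]          # designer-supplied tile set (assumed)
WIDTH, HEIGHT = 10, 8

def random_map():
    return [[random.choice(TILES) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def fitness(grid):
    # Hypothetical structural fitness: reward floor coverage near 50% and
    # penalise doors that are not adjacent to any floor tile.
    floor = sum(row.count("floor") for row in grid)
    score = -abs(floor - WIDTH * HEIGHT // 2)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            if grid[y][x] == "door":
                neighbours = [grid[ny][nx]
                              for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                              if 0 <= ny < HEIGHT and 0 <= nx < WIDTH]
                if "floor" not in neighbours:
                    score -= 5
    return score

def mutate(grid):
    child = [row[:] for row in grid]
    child[random.randrange(HEIGHT)][random.randrange(WIDTH)] = random.choice(TILES)
    return child

# (1+1)-style loop: keep the mutant only if it scores at least as well.
best = random_map()
for _ in range(2000):
    cand = mutate(best)
    if fitness(cand) >= fitness(best):
        best = cand
```

The split matches the abstract: the designer supplies TILES and fitness, while the search supplies the maps.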


International Journal of Space-Based and Situated Computing | 2013

Performance evaluation of mixed-bias scheduling schemes for wireless mesh networks

Jason B. Ernst; Joseph Alexander Brown

Typically, peripheral nodes in a multi-hop wireless network experience poor performance from starvation, congestion, queue build-up and contention along the path towards internet gateways. We propose three adaptive methods for scheduling based on mixed-bias scheduling, which aim to prioritise mesh routers near the gateways to ensure they can handle both their own traffic and peripheral traffic. We also give an overview of the mixed-bias approach to scheduling. We then evaluate the performance of each technique in comparison with the others and with the IEEE 802.11 distributed coordination function. Each solution is evaluated on average packet delivery ratio and average end-to-end delay. Two experiments were performed to examine the performance. First, we studied the effect of varying the inter-arrival rate of the packets. Second, we examined the effect of changing the number of sources. In all experiments, the proposed approaches perform at least as well as, or better than, IEEE 802.11 DCF.
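
The mixed-bias idea can be sketched as a weighting function over hop distance to the gateway. The power-law form, the exponents, the mixing factor and the node names below are illustrative assumptions, not the schemes or parameters evaluated in the paper.

```python
# Hypothetical mixed-bias weighting: blend a strongly biased and a weakly
# biased component so routers near the gateway get the largest share of
# transmission opportunities without fully starving distant nodes.
def mixed_bias_weight(hops_to_gateway, alpha=2.0, beta=0.5, mix=0.7):
    strong = 1.0 / hops_to_gateway ** alpha   # strongly biased component
    weak = 1.0 / hops_to_gateway ** beta      # weakly biased component
    return mix * strong + (1 - mix) * weak

hops = {"router_a": 1, "router_b": 2, "edge_c": 4}   # made-up topology
weights = {node: mixed_bias_weight(h) for node, h in hops.items()}
total = sum(weights.values())
schedule_share = {n: w / total for n, w in weights.items()}
print(schedule_share)  # routers near the gateway receive the largest share
```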


2009 IEEE Symposium on Computational Intelligence in Cyber Security | 2009

Genetic algorithm cryptanalysis of a substitution permutation network

Joseph Alexander Brown; Sheridan K. Houghten; Beatrice M. Ombuki-Berman

We provide a preliminary exploration of the use of Genetic Algorithms (GA) upon a Substitution Permutation Network (SPN) cipher. The purpose of the exploration is to determine how to find weak keys. The size of the selected SPN created by Stinson [1] gives a sample for showing the methodology and suitability of an attack using GA. We divide the types of keys into groups, each of which is analyzed to determine which groups are weaker. Simple genetic operators are examined to show the suitability of GA when applied to this problem. Results show the potential of GA to provide automated or computer assisted breaking of ciphers. The GA broke a subset of the keys using small input texts.
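
The GA framing can be sketched compactly: fitness scores a candidate key by how much known plaintext it recovers. A toy XOR cipher stands in for the Stinson SPN below, so only the search structure, not the target cipher or the paper's key groupings, matches the study.

```python
import random

KEY_BITS = 16
SECRET = random.getrandbits(KEY_BITS)                 # key the GA must find
PLAIN = [random.getrandbits(KEY_BITS) for _ in range(8)]
CIPHER = [p ^ SECRET for p in PLAIN]                  # toy XOR "cipher"

def fitness(key):
    # Count matching bits across all known plaintext/ciphertext pairs.
    return sum(KEY_BITS - bin((c ^ key) ^ p).count("1")
               for p, c in zip(PLAIN, CIPHER))

def mutate(key):
    return key ^ (1 << random.randrange(KEY_BITS))    # flip one key bit

def crossover(a, b):
    mask = random.getrandbits(KEY_BITS)               # uniform crossover
    return (a & mask) | (b & ~mask)

pop = [random.getrandbits(KEY_BITS) for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)               # elitist selection
    pop = pop[:10] + [mutate(crossover(*random.sample(pop[:10], 2)))
                      for _ in range(20)]
print(pop[0] == SECRET)  # often True: the GA recovers the toy key
```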


BioSystems | 2012

On the synthesis of DNA error correcting codes

Daniel Ashlock; Sheridan K. Houghten; Joseph Alexander Brown; John Orth

DNA error correcting codes over the edit metric consist of embeddable markers for sequencing projects that are tolerant of sequencing errors. When a genetic library has multiple sources for its sequences, the use of embedded markers permits tracking of sequence origin. This study compares different methods for synthesizing DNA error correcting codes. A new code-finding technique called the salmon algorithm is introduced and used to improve the size of best known codes in five difficult cases of the problem, including the most studied case: length six, distance three codes. An updated table of the best known code sizes with 36 improved values, resulting from three different algorithms, is presented. Mathematical background results for the problem from multiple sources are summarized. A discussion of practical details that arise in application, including biological design and decoding, is also given in this study.
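
For concreteness, the sketch below shows what an (n, M, d)4 edit-metric code is: the standard Levenshtein dynamic program plus a greedy, lexicode-style construction. The greedy method is far weaker than the salmon algorithm and the other synthesizers the paper compares; it is included only to make the object of study concrete.

```python
from itertools import product

def edit_distance(a, b):
    # Standard Levenshtein dynamic program, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,           # deletion
                           cur[-1] + 1,           # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def greedy_code(length, distance):
    # Keep a DNA word if it is at least `distance` edits from every word
    # kept so far; the result is a (length, M, distance)4 edit-metric code.
    code = []
    for word in product("ACGT", repeat=length):
        w = "".join(word)
        if all(edit_distance(w, c) >= distance for c in code):
            code.append(w)
    return code

print(len(greedy_code(6, 3)))  # M for a greedily built (6, M, 3)4 code
```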


Computational Intelligence in Bioinformatics and Computational Biology | 2010

Side effect machines for quaternary edit metric decoding

Joseph Alexander Brown; Sheridan K. Houghten; Daniel Ashlock

DNA edit metric codes are used as labels to track the origin of sequence data. This study is the first to treat sophisticated decoders for these error-correcting codes. Side effect machines can provide efficient decoding algorithms for such codes. Two methods for automatically producing decoding algorithms are presented. Side Effect Machines (SEMs), generalizations of finite state automata, are used in both. Single Classifier Machines (SCMs) use a single side effect machine to classify all words within a code. Locking Side Effect Machines (LSEMs) use multiple side effect machines to create a tree structured iterated classification. This study examines these techniques and provides new decoders for existing codes. Ideas on best practices for the creation of these two new types of edit metric decoders are presented. Codes of the form (n,M,d)4 are used in testing due to their suitability for bioinformatics problems. A group of (12, 54–56, 7)4 codes are used as an example of the process.
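
A minimal side effect machine can be sketched as a finite state automaton whose only side effect is counting state visits; the visit-count vector embeds an edit-metric word into Euclidean space. The transition table below is arbitrary rather than an evolved machine, and the codewords are invented.

```python
# Side effect machine sketch over the DNA alphabet.
STATES = 4
ALPHABET = "ACGT"
TRANSITIONS = {(s, ch): (s * 7 + i + 1) % STATES      # arbitrary table
               for s in range(STATES) for i, ch in enumerate(ALPHABET)}

def sem_features(word):
    # Run the automaton and record how often each state is visited.
    counts = [0] * STATES
    state = 0
    for ch in word:
        state = TRANSITIONS[(state, ch)]
        counts[state] += 1
    return counts

def classify(word, codewords):
    # Single Classifier Machine idea: decode to the codeword whose
    # feature vector is nearest in Euclidean distance.
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    feats = sem_features(word)
    return min(codewords, key=lambda c: dist2(feats, sem_features(c)))

print(classify("ACGTTA", ["ACGTAC", "TTGGCC", "GGATCC"]))
```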


Congress on Evolutionary Computation | 2013

Edit metric decoding: Representation strikes back

James Alexander Hughes; Joseph Alexander Brown; Sheridan K. Houghten; Daniel Ashlock

Quaternary error-correcting codes defined over the edit metric may be used as labels to track the origin of sequence data. When used in such applications there are typically additional restrictions that are biologically motivated, such as a required GC content or the avoidance of certain patterns. As a result such codes cannot be expected to have a regular structure, making decoding particularly challenging. Previous work considered the use of side effect machines for decoding edit codes, successfully decoding up to 93.86% of error vectors. In this study the recentering/restarting algorithm is used in combination with side effect machines and an alternative representation based upon transpositions. Using the same data as in the previous work, the rate of successful decoding was significantly improved, with many cases obtaining rates very close to 100%.
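
The recentering/restarting loop itself is compact enough to sketch. All parameters and the toy bit-string problem below are hypothetical stand-ins; the paper applies the loop to the transposition-based decoder representation rather than to bit strings.

```python
import random

def recenter_restart(random_solution, mutate, fitness,
                     pop_size=20, radius=5, epochs=50, gens=30):
    center = random_solution()
    best, best_fit = center, fitness(center)
    for _ in range(epochs):
        # Restart: regrow the population as mutants of the current center.
        pop = [mutate(center, radius) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            pop = pop[: pop_size // 2] + [mutate(p, 1) for p in pop[: pop_size // 2]]
        top = max(pop, key=fitness)
        if fitness(top) > best_fit:
            best, best_fit = top, fitness(top)
            center = best   # recenter the next restart on the improvement
    return best

# Toy usage: maximise the number of ones in a 30-bit string.
rand = lambda: [random.randint(0, 1) for _ in range(30)]
def mut(x, k):
    y = x[:]
    for _ in range(k):
        y[random.randrange(len(y))] ^= 1
    return y
print(sum(recenter_restart(rand, mut, sum)))  # typically 30 (all ones)
```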


Computational Intelligence in Bioinformatics and Computational Biology | 2012

Multiple worlds model for motif discovery

Joseph Alexander Brown

In this study we look at a novel evolutionary technique known as the multiple worlds model for unsupervised classification of sequences via evolved motifs. This evolutionary algorithm models the biological notions of species and species extinction within the evolution in order to provide classifiers where the number of classes is not known a priori. In the multiple worlds model, a number of populations which do not interbreed compete in fitness evaluation. Sequence motifs are small, biologically significant DNA/RNA or amino acid segments. They are represented as strings of symbols and wild cards. The model works well to locate classification motifs for sequences whose classes have statistical deviations, such as a difference in GC content or generation from differing Self-Driving Markov models. The creation of such classifiers will allow biologists to examine large sets of sequences in order to discover significant features.
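
The motif half of the model is straightforward to sketch: a motif is a string with wild cards, and a sequence is assigned to the first world whose motif it contains. The motifs below are invented for illustration, not evolved classifiers from the study.

```python
# A motif is a string over the sequence alphabet plus a wild card '*'
# that matches any symbol.
def matches(motif, seq):
    return any(all(m == "*" or m == c for m, c in zip(motif, seq[i:]))
               for i in range(len(seq) - len(motif) + 1))

WORLD_MOTIFS = ["GC*GC", "AT*TA"]   # one evolved motif per surviving world

def classify(seq):
    for label, motif in enumerate(WORLD_MOTIFS):
        if matches(motif, seq):
            return label
    return None                      # sequence left unclassified

print(classify("TTGCAGCTT"), classify("CCATTTACC"), classify("GGGGGG"))
# -> 0 1 None
```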


Foundations of Computational Intelligence | 2011

Multiple Agent Genetic Networks for Iterated Prisoner's Dilemma

Joseph Alexander Brown

The well-known game Iterated Prisoner's Dilemma (IPD) is examined as a test case for a new genetic search algorithm known as Multiple Agent Genetic Networks (MAGnet). MAGnet facilitates the movement of not just the agents, but also the problem instances which a population of agents is working to solve in parallel. This allows for simultaneous classification of problem instances and search for solutions to those problems. As this is an initial study, the focus is on the ability of MAGnet to classify problem instances for IPD-playing agents. A problem instance of IPD is a single opponent. A good classification method for IPD, called fingerprinting, exists and allows for verification of the comparison. Results found by MAGnet are shown to be logical classifications of the problems based upon player strategy. A subpopulation collapse effect is shown, which allows both difficult problem instances and general solutions to a problem to be located.
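
For readers unfamiliar with IPD, the sketch below shows the standard payoff matrix and a match between two classic strategies; in the MAGnet framing each such fixed opponent is one problem instance. The strategies and round count are illustrative, not the agents evolved in the study.

```python
# Standard IPD payoffs: (my score, opponent's score) per joint move.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return opp_hist[-1] if opp_hist else "C"   # cooperate first, then copy

def always_defect(my_hist, opp_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(ha, hb), strat_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        ha.append(a); hb.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, always_defect))
# -> (99, 104): one exploitation, then mutual defection
```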


Congress on Evolutionary Computation | 2011

Autogeneration of fractal photographic mosaic images

Joseph Alexander Brown; Daniel Ashlock; John Orth; Sheridan K. Houghten

We present a novel method for the creation of photographic mosaic images using fractals generated via evolutionary techniques. A photomosaic is a rendering of an image performed by placing a grid of smaller images that permit the original image to be visible when viewed from a distance. The problem of selecting the smaller images is a computationally intensive one. In this study we use an evolutionary algorithm to create fractal images on demand to generate tiles of the photomosaic. A number of images and tile resolutions are tested yielding acceptable results.
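
The tile-selection core of a photomosaic reduces to a nearest-colour search per grid cell, as in the sketch below. The fixed tile "library" of mean colours is purely illustrative; the paper's contribution is evolving fractal tiles on demand rather than drawing from such a library.

```python
# Hypothetical tile library: tile name -> mean RGB colour.
TILE_LIBRARY = {"flame": (220, 90, 30), "ocean": (20, 60, 200),
                "moss": (40, 160, 60), "ash": (120, 120, 120)}

def nearest_tile(cell_mean):
    # Pick the tile whose mean colour minimises squared RGB distance.
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(TILE_LIBRARY, key=lambda t: dist2(TILE_LIBRARY[t], cell_mean))

# A 2x2 grid of cell mean colours stands in for a downsampled target image.
target_cells = [[(200, 80, 40), (30, 70, 190)],
                [(50, 150, 70), (110, 115, 125)]]
mosaic = [[nearest_tile(cell) for cell in row] for row in target_cells]
print(mosaic)  # [['flame', 'ocean'], ['moss', 'ash']]
```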


Computer Science and Software Engineering | 2009

Edit metric decoding: a new hope

Joseph Alexander Brown; Sheridan K. Houghten; Daniel Ashlock

In this position paper we examine preliminary results of a new type of general error correction decoder for edit metric codes. The Single Classifier Machine decoder uses the concept of Side Effect Machines (SEMs), created via Genetic Algorithms (GAs), to map the edit metric into the Euclidean metric and thereby form a decoder. By not having to measure the edit distance to every codeword, the decoder has a far smaller runtime complexity. Fuzzy versions are also examined, which reduce the number of times the Levenshtein (edit) distance must be calculated. Codes of the form (n, M, d)4 are targeted due to their suitability for bioinformatics problems. A (12, 55, 7)4 code is used as an example of the process.
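
The fuzzy idea, shortlisting candidates cheaply before paying for exact edit distances, can be sketched as follows. The histogram features stand in for an evolved SEM mapping, and the shortlist size k and codewords are made up for illustration.

```python
def edit_distance(a, b):
    # Standard Levenshtein dynamic program (the expensive step to avoid).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def features(word):
    # Stand-in Euclidean embedding: a symbol histogram over ACGT.
    return [word.count(ch) for ch in "ACGT"]

def fuzzy_decode(word, codewords, k=2):
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    f = features(word)
    shortlist = sorted(codewords, key=lambda c: dist2(f, features(c)))[:k]
    # Only k edit-distance computations instead of one per codeword.
    return min(shortlist, key=lambda c: edit_distance(word, c))

print(fuzzy_decode("ACGTTA", ["ACGTAC", "TTGGCC", "GGATCC", "AACGTT"]))
```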
