Publications


Featured research published by Katherine Morrison.


2008 5th International Symposium on Turbo Codes and Related Topics | 2008

Average min-sum decoding of LDPC codes

Nathan Axvig; Deanna Dreher; Katherine Morrison; Eric T. Psota; Lance C. Pérez; Judy L. Walker

Simulations have shown that the outputs of min-sum (MS) decoding generally behave in one of two ways: either the output vector eventually stabilizes at a codeword or it eventually cycles through a finite set of vectors that may include both codewords and non-codewords. The latter behavior has significantly contributed to the difficulty in studying the performance of this decoder. To overcome this problem, a new decoder, average min-sum (AMS), is proposed; this decoder outputs the average of the MS output vectors over a finite set of iterations. Simulations comparing MS, AMS, linear programming (LP) decoding, and maximum likelihood (ML) decoding are presented, illustrating the relative performances of each of these decoders. In general, MS and AMS have comparable word error rates; however, in the simulation of a code with large block length, AMS has a significantly lower bit error rate. Finally, AMS pseudocodewords are introduced and their relationship to graph cover and LP pseudocodewords is explored, with particular focus on the AMS pseudocodewords of regular LDPC codes and cycle codes.
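Schematically, the AMS rule amounts to averaging the per-iteration MS output vectors; a minimal sketch is below, where ms_iteration() is a hypothetical stand-in for one parallel min-sum update and the choice of averaging window is an illustrative assumption, not a detail taken from the paper.

```python
# Minimal sketch of the averaging step in average min-sum (AMS) decoding:
# run the parallel MS decoder for a number of iterations and output the
# average of the per-iteration output vectors. ms_iteration() is a
# hypothetical stand-in for one min-sum update; it is not from the paper.
import numpy as np

def ams_output(ms_iteration, num_iters=200, window=100):
    """Average the MS output vectors over the last `window` iterations."""
    outputs = []
    for t in range(num_iters):
        v = np.asarray(ms_iteration(), dtype=float)  # MS output vector at iteration t
        if t >= num_iters - window:                  # keep only the final window
            outputs.append(v)
    return np.mean(outputs, axis=0)                  # AMS output (in general a pseudocodeword)
```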


IEEE Transactions on Information Theory | 2014

Equivalence for Rank-Metric and Matrix Codes and Automorphism Groups of Gabidulin Codes

Katherine Morrison

For a growing number of applications, such as cellular, peer-to-peer, and sensor networks, efficient error-free transmission of data through a network is essential. Toward this end, Kötter and Kschischang propose the use of subspace codes to provide error correction in the network coding context. The primary construction for subspace codes is the lifting of rank-metric or matrix codes, a process that preserves the structural and distance properties of the underlying code. Thus, to characterize the structure and error-correcting capability of these subspace codes, it is valuable to perform such a characterization of the underlying rank-metric and matrix codes. This paper lays a foundation for this analysis through a framework for classifying rank-metric and matrix codes based on their structure and distance properties. To enable this classification, we extend work by Berger on equivalence for rank-metric codes to define a notion of equivalence for matrix codes, and we characterize the group structure of the collection of maps that preserve such equivalence. We then compare the notions of equivalence for these two related types of codes and show that matrix equivalence is strictly more general than rank-metric equivalence. Finally, we characterize the set of equivalence maps that fix the prominent class of rank-metric codes known as Gabidulin codes. In particular, we give a complete characterization of the rank-metric automorphism group of Gabidulin codes, correcting work by Berger, and give a partial characterization of the matrix-automorphism group of the expanded matrix codes that arise from Gabidulin codes.
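For reference, the rank metric underlying these codes is d_R(A, B) = rank(A - B), with the rank computed over the underlying field. The sketch below illustrates this over GF(2) with a hand-rolled Gaussian elimination; the example matrices are invented for illustration and are not from the paper.

```python
# Toy illustration of the rank metric on matrix codes over GF(2):
# d_R(A, B) = rank(A - B), with all arithmetic done mod 2.
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rank = 0
    rows, cols = M.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move the pivot row into place
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate the column elsewhere
        rank += 1
    return rank

def rank_distance(A, B):
    # Subtraction equals addition in GF(2).
    return gf2_rank((np.array(A) + np.array(B)) % 2)

# Example: two 2x3 binary matrices at rank distance 1.
A = [[1, 0, 1], [0, 1, 0]]
B = [[1, 0, 1], [1, 1, 1]]
print(rank_distance(A, B))   # -> 1
```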


Neural Computation | 2013

Combinatorial neural codes from a mathematical coding theory perspective

Carina Curto; Vladimir Itskov; Katherine Morrison; Zachary Roth; Judy L. Walker

Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure must serve not only error correction but also reflect relationships between stimuli.
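As a concrete notion of error correction for such binary combinatorial codes, one can use nearest-codeword (maximum-likelihood) decoding under Hamming distance. The sketch below illustrates this on a tiny invented code; it is not the paper's implementation.

```python
# Nearest-codeword (Hamming-distance) decoding of a binary combinatorial code.
# The three-codeword toy code below is invented purely for illustration.
def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def decode(received, code):
    """Return the codeword closest to the received binary word (ties -> first found)."""
    return min(code, key=lambda c: hamming(received, c))

code = [(1, 1, 0, 0, 0), (0, 0, 1, 1, 0), (0, 0, 0, 1, 1)]
noisy = (1, 0, 0, 0, 0)            # one bit of (1,1,0,0,0) flipped
print(decode(noisy, code))         # -> (1, 1, 0, 0, 0)
```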


IEEE Transactions on Information Theory | 2009

Analysis of Connections Between Pseudocodewords

Nathan Axvig; Deanna Dreher; Katherine Morrison; Eric T. Psota; Lance C. Pérez; Judy L. Walker

The role of pseudocodewords in causing non-codeword outputs in linear programming decoding, graph cover decoding, and iterative message-passing decoding is investigated. The three main types of pseudocodewords in the literature (linear programming pseudocodewords, graph cover pseudocodewords, and computation tree pseudocodewords) are reviewed and connections between them are explored. Some discrepancies in the literature on minimal and irreducible pseudocodewords are highlighted and clarified, and the minimal degree cover necessary to realize a pseudocodeword is found. Additionally, some conditions for the existence of connected realizations of graph cover pseudocodewords are given. This allows for further analysis of when graph cover pseudocodewords induce computation tree pseudocodewords. Finally, an example is offered that shows that existing theories on the distinction between graph cover pseudocodewords and computation tree pseudocodewords are incomplete.
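For background, one common normalized form of a graph cover pseudocodeword is sketched below; conventions (scaled counts versus fractions) vary across the literature, and this statement is offered only for orientation, not quoted from the paper.

```latex
% A codeword \tilde{c} in a degree-m cover of the Tanner graph of a code C
% induces the (normalized) graph cover pseudocodeword
\[
  \omega(\tilde{c}) = (\omega_1, \dots, \omega_n),
  \qquad
  \omega_i = \frac{1}{m} \sum_{k=1}^{m} \tilde{c}_{i,k},
\]
% where \tilde{c}_{i,k} \in \{0,1\} is the value assigned to the k-th copy of
% variable node i in the cover. Ordinary codewords of C are recovered when all
% m copies of each variable node agree.
```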


arXiv: Neurons and Cognition | 2017

What Makes a Neural Code Convex?

Carina Curto; Elizabeth Gross; Jack Jeffries; Katherine Morrison; Mohamed Omar; Zvi Rosen; Anne Shiu; Nora Youngs

Neural codes allow the brain to represent, process, and store information about the world. Combinatorial codes, comprised of binary patterns of neural activity, encode information via the collective behavior of populations of neurons. A code is called convex if its codewords correspond to regions defined by an arrangement of convex open sets in Euclidean space. Convex codes have been observed experimentally in many brain areas, including sensory cortices and the hippocampus, where neurons exhibit convex receptive fields. What makes a neural code convex? That is, how can we tell from the intrinsic structure of a code if there exists a corresponding arrangement of convex open sets? In this work, we provide a complete characterization of local obstructions to convexity. This motivates us to define max intersection-complete codes, a family guaranteed to have no local obstructions. We then show how our characterization enables one to use free resolutions of Stanley-Reisner ideals in order to detect violations of convexity. Taken together, these results provide a significant advance in understanding the intrinsic combinatorial properties of convex codes.
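A max intersection-complete code, in one common formulation, contains every intersection of two or more of its maximal codewords. The sketch below performs that combinatorial check on an invented toy code; it is an illustration, not the paper's machinery.

```python
# Check whether a combinatorial code (a set of codewords, each a frozenset of
# active neurons) is max intersection-complete: every intersection of two or
# more maximal codewords must itself be a codeword.
from itertools import combinations

def maximal_codewords(code):
    return [c for c in code if not any(c < d for d in code)]

def is_max_intersection_complete(code):
    maxes = maximal_codewords(code)
    for r in range(2, len(maxes) + 1):
        for subset in combinations(maxes, r):
            if frozenset.intersection(*subset) not in code:
                return False
    return True

# Toy example (invented): codewords on neurons {1, 2, 3}.
code = {frozenset({1, 2}), frozenset({2, 3}), frozenset({2})}
print(is_max_intersection_complete(code))   # -> True, since {1,2} ∩ {2,3} = {2} is a codeword
```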


Neural Computation | 2016

Pattern completion in symmetric threshold-linear networks

Carina Curto; Katherine Morrison

Threshold-linear networks are a common class of firing rate models that describe recurrent interactions among neurons. Unlike their linear counterparts, these networks generically possess multiple stable fixed points (steady states), making them viable candidates for memory encoding and retrieval. In this work, we characterize stable fixed points of general threshold-linear networks with constant external drive and discover constraints on the coexistence of fixed points involving different subsets of active neurons. In the case of symmetric networks, we prove the following antichain property: if a set of neurons is the support of a stable fixed point, then no proper subset or superset of that set can support a stable fixed point. Symmetric threshold-linear networks thus appear to be well suited for pattern completion, since the dynamics are guaranteed not to get stuck in a subset or superset of a stored pattern. We also show that for any graph G, we can construct a network whose stable fixed points correspond precisely to the maximal cliques of G. As an application, we design network decoders for place field codes and demonstrate their efficacy for error correction and pattern completion. The proofs of our main results build on the theory of permitted sets in threshold-linear networks, including recently developed connections to classical distance geometry.
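The threshold-linear dynamics referred to here are usually written as dx/dt = -x + [Wx + b]_+. Below is a minimal forward-Euler simulation sketch; the symmetric W, the drive b, and the initial condition are arbitrary illustrative choices, not values from the paper.

```python
# Minimal simulation of a symmetric threshold-linear network,
#     dx/dt = -x + [W x + b]_+,
# using forward Euler. W and b are arbitrary illustrative values.
import numpy as np

def simulate_tln(W, b, x0, dt=0.01, T=50.0):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))   # threshold-linear update
    return x

W = np.array([[ 0.0, -1.5, -1.5],
              [-1.5,  0.0, -1.5],
              [-1.5, -1.5,  0.0]])       # symmetric, mutually inhibitory
b = np.ones(3)

x_final = simulate_tln(W, b, x0=[1.0, 0.1, 0.0])
support = np.nonzero(x_final > 1e-6)[0]  # active neurons at the (approximate) fixed point
print(x_final.round(3), support)         # -> roughly [1. 0. 0.] with support [0]
# Starting near a different single-neuron pattern completes to that pattern instead.
```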


Applied Simulation and Modelling | 2011

Spectral based Methods that Streamline the Search for Failure Scenarios in Large-Scale Distributed Systems

Fern Y. Hunt; Katherine Morrison; Christopher E. Dabrowski

We report our work on the development of analytical and numerical methods that enable the detection of failure scenarios in distributed grid computing, cloud computing, and other large-scale systems. The spectral (i.e., eigenvalue and eigenvector) properties of the matrices associated with a non-homogeneous absorbing Markov chain are used to quickly compute the long-time proportion of tasks completed at a given setting of parameters. This enables the discovery of critical ranges of parameter values where system performance deteriorates and fails.
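For orientation, the standard computation for a homogeneous absorbing Markov chain uses the fundamental matrix N = (I - Q)^{-1} and absorption probabilities B = NR; the paper's method addresses the non-homogeneous case, and the toy chain below is invented for illustration.

```python
# Standard absorption analysis of a (homogeneous) absorbing Markov chain:
# with the transition matrix in canonical form  P = [[Q, R], [0, I]],
# the fundamental matrix N = (I - Q)^{-1} gives expected visit counts and
# B = N R gives absorption probabilities (e.g. "task completed" vs "task failed").
# The paper treats the non-homogeneous case; this toy chain is invented.
import numpy as np

Q = np.array([[0.6, 0.2],         # transitions among transient states
              [0.3, 0.5]])
R = np.array([[0.15, 0.05],       # transient -> absorbing: [completed, failed]
              [0.10, 0.10]])

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visits
B = N @ R                         # absorption probabilities from each transient state
print(B)                          # row i: P(complete), P(fail) starting from transient state i
```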


Archive | 2016

Lessons Learned from a Math Teachers’ Circle

Gulden Karakok; Katherine Morrison; Cathleen Craviotto

In this chapter, we describe our experience running the Northern Colorado Math Teachers’ Circle (NoCOMTC), founded in 2011. The goal of the NoCOMTC is to improve middle school mathematics teachers’ mathematical and pedagogical content knowledge through interactive mathematical problem-solving professional development sessions. Our leadership team is an effective collaboration between university mathematics and mathematics education professors and middle and high school mathematics teachers. In this chapter, we describe our leadership team’s journey from founding the NoCOMTC through four academic years of monthly evening mathematics teachers’ circle sessions and three residential summer immersion workshops. We also discuss our recently initiated student circle program. We focus on aspects that were essential to forming and sustaining our program. In addition, we highlight lessons we have learned while planning and facilitating both mathematical problem-solving sessions and activities designed to help teachers’ implementation of problem solving.


international symposium on information theory and its applications | 2008

Towards universal cover decoding

Nathan Axvig; Deanna Dreher; Katherine Morrison; Eric T. Psota; Lance C. Pérez; Judy L. Walker

In this paper, the non-codeword errors that occur during parallel, iterative decoding with the min-sum decoder are analyzed. Recently, work has been done relating the min-sum decoder to the linear programming (LP) decoder via graph covers. The LP decoder recasts the problem of decoding as an optimization problem whose feasible set is a polytope defined by the parity-check matrix of a code. It has been shown that LP decoding can be realized as a decoder operating on graph covers.
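For background, the LP decoder referred to here is conventionally stated as the following relaxation of ML decoding, minimized over the fundamental polytope of the parity-check matrix; this is a textbook formulation, not quoted from the paper.

```latex
% LP decoding (Feldman et al.): relax ML decoding to a linear program over the
% fundamental polytope \mathcal{P}(H) of the parity-check matrix H.
\[
  \hat{\omega} \;=\; \arg\min_{\omega \in \mathcal{P}(H)} \sum_{i=1}^{n} \gamma_i\, \omega_i,
  \qquad
  \gamma_i \;=\; \log \frac{\Pr(y_i \mid x_i = 0)}{\Pr(y_i \mid x_i = 1)},
\]
% where \mathcal{P}(H) = \bigcap_j \operatorname{conv}(C_j) and C_j is the set of
% binary vectors satisfying the j-th parity check. Vertices of \mathcal{P}(H) are
% the LP pseudocodewords; the integral vertices are exactly the codewords.
```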


Advances in Mathematics of Communications | 2015

Cyclic Orbit Codes and Stabilizer Subfields

Heide Gluesing-Luerssen; Katherine Morrison; Carolyn Troha

Collaboration


Dive into Katherine Morrison's collaborations.

Top Co-Authors

Carina Curto (University of Nebraska–Lincoln)
Judy L. Walker (University of Nebraska–Lincoln)
Christopher E. Dabrowski (National Institute of Standards and Technology)
Deanna Dreher (University of Nebraska–Lincoln)
Eric T. Psota (University of Nebraska–Lincoln)
Fern Y. Hunt (National Institute of Standards and Technology)
Lance C. Pérez (University of Nebraska–Lincoln)
Nathan Axvig (University of Nebraska–Lincoln)
Elizabeth Gross (San Jose State University)