Kevin E. Mark
Washington University in St. Louis
Publication
Featured research published by Kevin E. Mark.
Journal of the Acoustical Society of America | 1992
Michael I. Miller; Kevin E. Mark
Cochlear nerve discharge patterns in response to the synthesized consonant-vowel stimulus /da/ were collected from a population of 223 auditory-nerve fibers from a single cat. For each nerve fiber, discharges were measured from multiple, independent stimulus presentations, and the means and variances of the post-stimulus time histograms and Fourier transforms of response were generated from the ensemble of stimulus presentations. The statistics were not consistent with those predicted by an inhomogeneous Poisson counting process model. Specifically, the synchronized components, as measured by the Fourier transforms of the post-stimulus time histogram responses, have variances as much as a factor of 3 lower than predicted by the Poisson model. To account for the non-Poisson nature of the statistics, the Markov process model of Siebert/Gaumond was adopted. Using the maximum-likelihood and minimum description length algorithms introduced by Miller [J. Acoust. Soc. Am. 77, 1452-1464 (1985)] and Mark and Miller [J. Acoust. Soc. Am. 91, 989-1002 (1992)], estimates of the stimulus and recovery functions were computed for each nerve fiber. Markov point processes were then simulated with the stimulus and recovery functions estimated from these nerve fibers. The simulated Markov processes are shown to have first- and second-order statistics nearly identical to those measured for the population of auditory-nerve fibers, demonstrating the effectiveness of the Markov point process model in accounting for the correlation effects associated with the discharge-history-dependent refractory properties of auditory-nerve response.
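As a loose numerical illustration of the sub-Poisson variance described above (a generic dead-time renewal process with hypothetical rates, not the paper's Markov model), count statistics already fall below the Poisson prediction once an absolute refractory period is introduced:

```python
import random

def simulate_counts(rate, duration, dead_time, trials, seed=0):
    """Spike counts per trial for a renewal process whose inter-spike
    intervals are exponential plus an absolute dead time.
    dead_time = 0.0 recovers an ordinary Poisson process."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(rate) + dead_time
            if t >= duration:
                break
            n += 1
        counts.append(n)
    return counts

def fano_factor(counts):
    """Variance-to-mean ratio of the counts; equal to 1 for an ideal
    Poisson process."""
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    return v / m

poisson_ff = fano_factor(simulate_counts(100.0, 1.0, 0.0, 2000))
refractory_ff = fano_factor(simulate_counts(100.0, 1.0, 0.005, 2000))
# A 5-ms dead time regularizes firing, so the count variance drops
# well below the Poisson (Fano factor = 1) prediction.
```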
human language technology | 1992
Kevin E. Mark; Michael I. Miller; Ulf Grenander; Steve Abney
A new language model incorporating both N-gram and context-free ideas is proposed. This constrained context-free model is specified by a stochastic context-free prior distribution with N-gram frequency constraints. The resulting distribution is a Markov random field. Algorithms for sampling from this distribution and estimating the parameters of the model are presented.
Journal of the Acoustical Society of America | 1992
Kevin E. Mark; Michael I. Miller
Auditory-nerve fiber discharges are modeled as self-exciting point processes with intensity given by the product of a stimulus-related function and a refractory-related function. Previous methods of estimating these two functions, based on the maximum-likelihood principle, have the problem of estimating more parameters than the data can support. A new procedure, based on a Bayes criterion for choosing the complexity of the model in addition to estimating the parameters, solves the over-parametrization problem. This procedure is seen to relate asymptotically to Rissanen's minimum description length (MDL) criterion. A performance comparison of the MDL procedure with previous maximum-likelihood algorithms promotes the adoption of the MDL procedure for simultaneous estimation of the stimulus and recovery properties of auditory-nerve discharge.
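The complexity-selection idea can be sketched generically (a toy two-part MDL score for a piecewise-constant Poisson rate with made-up counts, not the paper's estimator): the data code length shrinks as segments are added, while the parameter code length grows, and the minimum picks the supported complexity.

```python
import math

def poisson_neg_loglik(counts, rate):
    """Negative log-likelihood of i.i.d. Poisson counts with mean `rate`."""
    return -sum(c * math.log(rate) - rate - math.lgamma(c + 1) for c in counts)

def mdl_score(counts, k):
    """Two-part MDL: data code length under a k-segment piecewise-constant
    rate, plus (k/2) * log n nats for the k fitted rate parameters."""
    n = len(counts)
    seg = max(1, n // k)
    nll = 0.0
    for i in range(k):
        chunk = counts[i * seg:(i + 1) * seg] if i < k - 1 else counts[(k - 1) * seg:]
        rate = max(sum(chunk) / len(chunk), 1e-9)
        nll += poisson_neg_loglik(chunk, rate)
    return nll + 0.5 * k * math.log(n)

# Counts whose underlying rate steps up halfway through: the penalty
# steers the fit to 2 segments rather than over-parametrizing.
counts = [1, 2, 1, 1, 2, 1, 8, 9, 7, 8, 9, 8]
best_k = min(range(1, 7), key=lambda k: mdl_score(counts, k))
```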
Journal of Digital Imaging | 2000
Robert E. Hogan; Kevin E. Mark; Indrajit Choudhuri; Lei Wang; Sarang C. Joshi; Michael I. Miller; Richard D. Bucholz
We compared manual and automated segmentations of the hippocampus in patients with mesial temporal sclerosis. This comparison showed good precision of the deformation-based automated segmentations.
Archive | 1996
Kevin E. Mark; Michael I. Miller; Ulf Grenander
Stochastic language models incorporating both n-grams and context-free grammars are proposed. A constrained context-free model specified by a stochastic context-free prior distribution with superimposed n-gram frequency constraints is derived, and the resulting maximum-entropy distribution is shown to induce a Markov random field with neighborhood structure at the leaves determined by the relative n-gram frequencies. A computationally efficient version, the mixed tree/chain graph model, is derived with identical neighborhood structure. In this model, a word-tree derivation is given by a stochastic context-free prior on trees down to the preterminal (part-of-speech) level, and word attachment is made by a nonstationary Markov chain. Using the Penn TreeBank, a comparison of the mixed tree/chain graph model to both the n-gram and context-free models is performed using entropy measures. The mixed tree/chain graph model is shown to reduce the entropy relative to both the bigram and context-free models.
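As a minimal illustration of the kind of per-word entropy measure being compared (a smoothed bigram on a hypothetical toy corpus, not the Penn TreeBank experiments or the tree/chain model itself):

```python
import math
from collections import Counter

corpus = "the dog saw the cat the cat saw the dog".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev, word, alpha=0.1):
    """Add-alpha smoothed bigram probability over the observed vocabulary."""
    vocab = len(unigrams)
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)

# Per-word entropy (bits/word) of the corpus under the bigram model;
# a lower value means fewer bits per word in data compression.
pairs = list(zip(corpus, corpus[1:]))
entropy = -sum(math.log2(bigram_prob(p, w)) for p, w in pairs) / len(pairs)
```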
Journal of Digital Imaging | 2000
R. Edward Hogan; Richard D. Bucholz; Indrajit Choudhuri; Kevin E. Mark; Chris S. Butler; Sarang C. Joshi
Structural hippocampal magnetic resonance (MR) imaging-based analysis is helpful in the diagnosis and treatment of mesial temporal epileptic seizures. Computational anatomic techniques provide a framework for objective assessment of three-dimensional hippocampal structure. We applied a previously validated technique of deformation-based hippocampal segmentations in 20 subjects with documented unilateral mesial temporal sclerosis (MTS) and temporal lobe epilepsy. Using composite images, we then measured shape differences between the smaller, epileptogenic hippocampus and the contralateral hippocampus. Final shape differences were projected on the contralateral “normal” side. We calculated results for the left MTS group (10 patients) and right MTS group (10 patients) separately. Both groups showed similar regions of maximal inward deformation in the affected hippocampus, namely the medial and lateral aspects of the head and the posterior aspect of the tail. These results suggest that there are specific three-dimensional patterns of volume loss in patients with mesial temporal epilepsy.
international symposium on information theory | 1995
Kevin E. Mark; Michael I. Miller; Ulf Grenander
Markov chain (N-gram) source models for natural language were explored by Shannon and have found wide application in speech recognition systems. However, the underlying linear graph structure is inadequate to express the hierarchical structure of language necessary for encoding syntactic information. Context-free language models which generate tree graphs are a natural way of encoding this information, but lack the modeling of interword dependencies. We consider a hybrid tree/chain graph structure which has the advantage of incorporating lexical dependencies in syntactic representations. Two Markov random field probability measures are derived on these tree/chain graphs from the maximum entropy principle.
Auditory Physiology and Perception: Proceedings of the 9th International Symposium on Hearing, Held in Carcans, France, on 9–14 June 1991 | 1992
Michael I. Miller; Tai Lin; Kevin E. Mark; Jing Wang; Walter R. Bosch; Andrew T. Ogielski
This paper explores the statistics of the discharge history dependence in cat cochlear nerve. We demonstrate that self-exciting point process models, which take the intensity of discharge as a stimulus-related and recovery-related function of the time since the previous discharge, adequately account for the non-Poisson second-order statistics of eighth-nerve firing. We also examine the physical mechanisms for this kind of history dependence. The excitatory post-synaptic potential (EPSP) is modelled as a marked, filtered Poisson counting process (PCP), with vesicle release occurring at Poisson times with intensity following the stimulus, and single-vesicle EPSP generation a random marking and filtering of the PCP due to neurotransmitter binding and post-synaptic membrane integration times. The probability of eighth-nerve discharge is the probability that the EPSP process crosses threshold, with the history dependence due to the dependence of the post-synaptic threshold voltage on the time since the previous spike.
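A crude numerical caricature of that mechanism (hypothetical parameters, with a hard post-spike reset standing in for the threshold's spike-history dependence, not the paper's marked filtered PCP): vesicle releases arrive on Poisson times, drive a decaying potential, and a discharge is recorded at each threshold crossing.

```python
import math
import random

def simulate_threshold_crossings(release_rate, tau, amp, theta,
                                 duration, dt=0.001, seed=1):
    """Shot-noise sketch: releases arrive on Poisson times, each adding
    `amp` to a potential that decays with time constant `tau`; a
    discharge is recorded when the potential crosses `theta`, after
    which the potential is reset (a crude form of history dependence)."""
    rng = random.Random(seed)
    decay = math.exp(-dt / tau)
    v, t, spikes = 0.0, 0.0, []
    while t < duration:
        v *= decay
        if rng.random() < release_rate * dt:  # Bernoulli approximation of Poisson arrivals
            v += amp
        if v >= theta:
            spikes.append(t)
            v = 0.0
        t += dt
    return spikes

spikes = simulate_threshold_crossings(release_rate=200.0, tau=0.01,
                                      amp=1.0, theta=2.0, duration=1.0)
```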
workshop on information technologies and systems | 1994
Joseph A. O'Sullivan; Kevin E. Mark; Michael I. Miller
The use of model-based methods for data compression of English dates back at least to Shannon's Markov chain (n-gram) models, in which the probability of the next word given all previous words equals the probability of the next word given the previous n-1 words. A second approach seeks to model the hierarchical nature of language via tree graph structures arising from a context-free language (CFL). Neither the n-gram nor the CFL models approach the data compression predicted by the entropy of English as estimated by Shannon and by Cover and King. This paper presents two models that incorporate the benefits of both the n-gram model and the tree-based models. In either case the neighborhood structure on the syntactic variables is determined by the tree, while the neighborhood structure of the words is determined by the n-gram and the parent syntactic variable (preterminal) in the tree. Having both types of neighbors for the words should yield decreased model entropy and hence fewer bits per word in data compression. To motivate estimation of the model parameters, some results on estimating parameters for random branching processes are reviewed.
Archive | 2003
Kevin J. Frank; Kevin E. Mark