John Wickens Lamb Merrill
Microsoft
Publications
Featured research published by John Wickens Lamb Merrill.
Journal of Cognitive Neuroscience | 1996
Stephen Grossberg; John Wickens Lamb Merrill
The concepts of declarative memory and procedural memory have been used to distinguish two basic types of learning. A neural network model suggests how such memory processes work together as recognition learning, reinforcement learning, and sensorimotor learning take place during adaptive behaviors. To coordinate these processes, the hippocampal formation and cerebellum each contain circuits that learn to adaptively time their outputs. Within the model, hippocampal timing helps to maintain attention on motivationally salient goal objects during variable task-related delays, and cerebellar timing controls the release of conditioned responses. This property is part of the model's description of how cognitive-emotional interactions focus attention on motivationally valued cues, and how this process breaks down due to hippocampal ablation. The model suggests that the hippocampal mechanisms that help to rapidly draw attention to salient cues could prematurely release motor commands were the release of these commands not adaptively timed by the cerebellum. The model hippocampal system modulates cortical recognition learning without actually encoding the representational information that the cortex encodes. These properties avoid the difficulties faced by several models that propose a direct hippocampal role in recognition learning. Learning within the model hippocampal system controls adaptive timing and spatial orientation. Model properties hereby clarify how hippocampal ablations cause amnesic symptoms and difficulties with tasks that combine task delays, novelty detection, and attention toward goal objects amid distractions. When these model recognition, reinforcement, sensorimotor, and timing processes work together, they suggest how the brain can accomplish conditioning of multiple sensory events to delayed rewards, as during serial compound conditioning.
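Since the abstract turns on adaptively timed learning, a minimal sketch of the spectral-timing idea may help: a bank of units with a spectrum of reaction rates covers a range of delays, and reinforcement strengthens whichever units happen to be active at the reward time, so the population output later peaks near the learned delay. All names, rates, and constants below are illustrative assumptions, not the paper's equations.

```python
import numpy as np

# Sketch of spectrally timed learning (illustrative, not the paper's
# equations): each unit responds to stimulus onset with its own rate,
# so different units are maximally active at different delays; reward
# strengthens the weights of the units active at the reward time.

rates = np.linspace(0.5, 8.0, 40)          # one reaction rate per unit
weights = np.zeros_like(rates)             # adaptive gains, start naive
t = np.linspace(0.0, 4.0, 400)             # seconds since stimulus onset

def activation(t):
    # Unit i's response r_i * t * exp(-r_i * t) peaks at t = 1 / r_i.
    return (rates[:, None] * t) * np.exp(-rates[:, None] * t)

reward_delay = 1.5                         # seconds, the interval to learn
for _ in range(20):                        # repeated conditioning trials
    act = activation(np.array([reward_delay]))[:, 0]
    weights += 0.5 * act * (1.0 - weights) # strengthen currently active units

response = weights @ activation(t)         # summed population output
print("learned peak at ~%.2f s" % t[np.argmax(response)])
```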
Neural Networks | 1991
John Wickens Lamb Merrill; Robert F. Port
A model is described for one method of instantiating constraints in neural networks, such as are required to account for nativist assertions in an associationist context. An implementation, inspired by some of the processes of biological development, is presented for the construction of networks with such constraints. This implementation, which generates fractally configured neural networks, is investigated by applying it to a simple generalization problem.
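The abstract does not spell out the generative procedure, so the sketch below is only a hedged stand-in: it obtains self-similar ("fractally configured") connectivity by iterating a Kronecker product of a small base motif, an assumed mechanism rather than the paper's developmental process.

```python
import numpy as np

# Hedged illustration of self-similar connectivity: iterating a
# Kronecker product replicates a small connection motif at every
# scale. The motif and depth are arbitrary, not taken from the paper.

motif = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]])

def fractal_adjacency(motif, depth):
    adj = motif
    for _ in range(depth - 1):
        adj = np.kron(adj, motif)   # copy the motif inside each block
    return adj

adj = fractal_adjacency(motif, depth=3)   # 27x27 self-similar network
print(adj.shape, "connections:", int(adj.sum()))
```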
Journal of Symbolic Logic | 1990
John Wickens Lamb Merrill
In [vDF], van Douwen and Fleissner introduce a number of axioms which hold in models constructed by iteratively forcing MA in a nontrivial extension of the set-theoretic universe. One such model is the Bell-Kunen model, obtained by starting with a model of ZFC + GCH, then forcing "MA + 𝔠 = ω₂" by the standard means, then forcing "MA + 𝔠 = ω₃", and so on. The Bell-Kunen model is the result of an ω₁-sequence of extensions of this form, with direct limits taken at limit ordinals. (See [BK] for a more complete description.) Van Douwen and Fleissner observed that many of the properties of this model could be distilled into a "Definable Forcing Axiom", which states that "If P is a c.c.c. partial order which is definable from a real, then there is a sequence of filters ⟨ℱ_α : α < ω₁⟩ through P such that, if D is any dense subset of P, then all but countably many of the ℱ_α's meet D in a nonempty set." (They call such a sequence ω₁-generic.) Van Douwen and Fleissner ask whether one can eliminate the definability restriction on the c.c.c. order entirely; the resulting axiom ("If P is any c.c.c. partial order of cardinality at most 𝔠, then there is a sequence of filters…") is called the Undefinable Forcing Axiom (UFA).
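Restated formally (the filter-sequence and dense-set symbols, which the text describes but the extraction dropped, are reconstructed here):

```latex
% Hedged restatement of the two axioms as described in the abstract;
% the notation for the filter sequence and dense set is reconstructed.
\textbf{DFA (Definable Forcing Axiom).} If $P$ is a c.c.c.\ partial order
definable from a real, then there is a sequence of filters
$\langle \mathcal{F}_\alpha : \alpha < \omega_1 \rangle$ on $P$ such that
for every dense $D \subseteq P$, all but countably many of the
$\mathcal{F}_\alpha$ meet $D$ in a nonempty set. Such a sequence is
called $\omega_1$-generic.

\textbf{UFA (Undefinable Forcing Axiom).} The same statement for an
arbitrary c.c.c.\ partial order $P$ with $|P| \le \mathfrak{c}$.
```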
Journal of the Acoustical Society of America | 2007
John Wickens Lamb Merrill
This thesis describes the construction of an accurate automatic speech recognition system using Kaldi, an open-source speech recognition toolkit written in C++, together with freely available data. First, the main stages of automatic speech recognition are explained in detail. Second, different training and adaptation techniques are studied in order to improve recognition accuracy. Finally, since the amount of training data is an important factor in achieving sufficient recognition accuracy, its role is also studied.
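As a hedged illustration of the recognition pipeline the thesis describes, the toy sketch below walks through the standard stages: framing and feature extraction, acoustic scoring, and decoding. It deliberately avoids Kaldi's actual recipes and APIs; every array, model, and size here is a placeholder.

```python
import numpy as np

# Toy sketch of the standard ASR stages:
# 1) cut audio into frames and extract spectral features,
# 2) score frames with an acoustic model (here, per-unit Gaussians),
# 3) decode the best unit sequence. Illustrative only; a real system
# such as Kaldi uses MFCC/PLP features, HMM/DNN acoustic models, and
# graph-based (WFST) decoding.

rng = np.random.default_rng(0)
sr, frame, hop = 16000, 400, 160          # 25 ms frames, 10 ms hop
audio = rng.standard_normal(sr)           # stand-in for 1 s of speech

def features(x):
    # Log magnitude spectrum per frame, a crude stand-in for MFCCs.
    n = (len(x) - frame) // hop + 1
    frames = np.stack([x[i*hop : i*hop + frame] for i in range(n)])
    return np.log(np.abs(np.fft.rfft(frames * np.hanning(frame))) + 1e-8)

feats = features(audio)                   # (num_frames, num_bins)

# Acoustic model: one Gaussian per "unit" (placeholder means).
means = rng.standard_normal((3, feats.shape[1]))
loglik = -0.5 * ((feats[:, None, :] - means[None]) ** 2).sum(-1)

# Decoding, reduced to per-frame argmax (real decoders search a graph).
print("best unit per frame:", loglik.argmax(1)[:10], "...")
```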
Journal of the Acoustical Society of America | 1988
Sven Anderson; John Wickens Lamb Merrill; Robert F. Port
Several connectionist networks were trained to classify the English syllables ba, da, ga, pa, ta, ka collected from two male and two female speakers. Using a speech preprocessor, perceptually based spectral patterns were computed [H. Hermansky, Proc. ICASSP 87, 1159–1162 (1987)] every 5 ms. A sequential network having a limited class of recurrent connections [M. Jordan, ICS Tech. Rep., University of California at San Diego (1986)] was employed to categorize the data. During training by back propagation or second‐order back propagation, a linear increase in the certainty of classification over the course of the syllable was required. Performance of the sequential networks was evaluated on both "known" and "unknown" speakers. When tested on novel tokens of a known speaker, the sequential network did very well, as opposed to very poorly on tokens from an unknown speaker. Sequential networks trained with back propagation are capable of integrating cues distributed over time and using them to categorize data. However, t...
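A minimal sketch of the training scheme described above, assuming a Jordan-style network whose previous output is fed back as state and whose per-frame target ramps linearly toward the correct class; the sizes, learning rate, and random "frames" are illustrative stand-ins for the preprocessed speech data.

```python
import numpy as np

# Minimal Jordan-style sequential network (illustrative sizes/rates):
# the previous output is fed back as a state input, and each frame's
# target ramps linearly toward the correct class, matching the
# "linear increase in certainty over the syllable" training scheme.

rng = np.random.default_rng(0)
n_feat, n_class, T = 8, 6, 20             # spectral dims, syllables, frames
W = rng.standard_normal((n_class, n_feat + n_class)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(frames, label, lr=0.1):
    global W
    state = np.zeros(n_class)             # fed-back previous output
    for t, x in enumerate(frames):
        inp = np.concatenate([x, state])
        out = sigmoid(W @ inp)
        target = np.zeros(n_class)
        target[label] = (t + 1) / len(frames)   # linearly rising certainty
        err = target - out
        # Backprop truncated at each time slice (no gradient through state).
        W += lr * np.outer(err * out * (1 - out), inp)
        state = out
    return out

frames = rng.standard_normal((T, n_feat))     # stand-in for one syllable
for _ in range(50):
    out = train_step(frames, label=2)
print("final certainty for class 2: %.2f" % out[2])
```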
Journal of the Acoustical Society of America | 1988
John Wickens Lamb Merrill; Sven Anderson; Robert F. Port
Recognizing speech requires the identification and analysis of temporally distributed cues. A system must either examine a sufficiently long window at a single glance or else internally accumulate stimulus information. Sequential networks follow the second path by storing information internally in state nodes. Feed‐forward networks do not maintain their history internally and thus require that the speech signal be presented in fixed windows. The performances of sequential and feed‐forward networks at recognition of auditorily preprocessed stop‐vowel syllables are compared. Several feed‐forward networks were trained by presenting a whole syllable to the network as a single token and requiring categorization. The sequential networks were more robust within and across speakers than the feed‐forward networks. Unfortunately, the use of the back‐propagation algorithm to train a sequential network requires presentation of a desired output at every time slice. This forces arbitrary choices for specifying target ou...
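For contrast with the sequential sketch above, a toy feed-forward version that presents the whole syllable as a single fixed-length vector: no internal state is kept, but every token must fit the same window. Again, all sizes and data are placeholders.

```python
import numpy as np

# Feed-forward contrast (toy): the whole syllable is flattened into one
# fixed-length vector, so no internal state is needed, but every token
# must be forced into the same fixed window.

rng = np.random.default_rng(0)
n_feat, n_class, T = 8, 6, 20
frames = rng.standard_normal((T, n_feat))     # one whole syllable
V = rng.standard_normal((n_class, T * n_feat)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(50):
    inp = frames.reshape(-1)                  # entire syllable at one glance
    out = sigmoid(V @ inp)
    target = np.eye(n_class)[2]               # one target per whole token
    V += 0.1 * np.outer((target - out) * out * (1 - out), inp)
print("certainty for class 2: %.2f" % out[2])
```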
Archive | 1997
Tandy W. Trower II; Mark Jeffrey Weinberg; John Wickens Lamb Merrill
Archive | 1998
John Wickens Lamb Merrill; Tandy W. Trower II; Mark Jeffrey Weinberg
Cognitive Brain Research | 1992
Stephen Grossberg; John Wickens Lamb Merrill
Journal of the Acoustical Society of America | 1998
John Wickens Lamb Merrill; Tandy W. Trower II; Mark Jeffrey Weinberg