Jan E. Holly
Colby College
Publications
Featured research published by Jan E. Holly.
Journal of Symbolic Logic | 1995
Jan E. Holly
We present a canonical form for definable subsets of algebraically closed valued fields by means of decompositions into sets of a simple form, and do the same for definable subsets of real closed valued fields. Both cases involve discs, forming “Swiss cheeses” in the algebraically closed case, and cuts in the real closed case. As a step in the development, we give a proof of the fact that in “most” valued fields F, if f(x), g(x) ∈ F[x] and v is the valuation map, then the set {x : v(f(x)) ≤ v(g(x))} is a Boolean combination of discs; in fact, it is a finite union of Swiss cheeses. The development also depends on the introduction of “valued trees”, which we define formally.
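For orientation, the objects named in this abstract can be sketched as follows; the conventions (for instance, open versus closed discs) are chosen here only for illustration, and the paper fixes its own.

    D = \{\, x \in F : v(x - a) \ge \gamma \,\} \quad \text{(a disc with center } a \text{ and radius } \gamma\text{)}
    S = D \setminus (D_1 \cup \cdots \cup D_n), \quad D_1, \dots, D_n \subsetneq D \quad \text{(a ``Swiss cheese'': a disc minus finitely many proper sub-discs)}
    \{\, x : v(f(x)) \le v(g(x)) \,\} = S_1 \cup \cdots \cup S_k \quad \text{(the cited fact: a finite union of Swiss cheeses)}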
Neuroscience | 1996
Jan E. Holly; Gin McCollum
Two completely different motions of a subject relative to the earth can induce exactly the same stimuli to the vestibular, somatosensory and visual systems. When this happens, the subject may experience disorientation and misperception of self-motion. We have identified large classes of motions that are perceptually equivalent, i.e. indistinguishable by the subject, under three sets of conditions: no vision, with vision and earth-fixed visual surround, and with vision during possible movement of the visual surround. For each of these sets of conditions, we have developed a classification of all sustained motions according to their perceptual equivalences. The result is a complete list of the possible misperceptions of sustained motion due to equivalence of the forces and other direct stimuli to the sensors under the given conditions. This research expands the range of possible experiments by including all components of linear and angular velocity and acceleration. Many of the predictions in this paper can be tested experimentally. In addition, the equivalence classes developed here predict perceptual phenomena in unusual motion environments that are difficult or impossible to investigate in the laboratory.
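A classical instance of such an equivalence, given here only as an illustration (it is standard and is not taken from the paper's classification): without vision, sustained forward linear acceleration is indistinguishable from a sustained backward pitch tilt, because the gravito-inertial acceleration sensed by the otoliths is the same in both cases. With one sign convention,

    f = g - a, \qquad |f| = \sqrt{g^2 + a^2}, \qquad \theta = \arctan(a/g),

so a subject accelerating forward at a experiences the same specific force f as a stationary subject tilted back by the angle \theta.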
Neuroscience | 1996
Jan E. Holly; Gin McCollum
There have been numerous experimental studies on human perception and misperception of self-motion and orientation relative to the earth, each focusing on one or a few types of motion. We present a formal framework encompassing many types of motion and including all angular and linear components of velocity and acceleration. Using a mathematically rigorous presentation, the framework defines the space of all possible motions, the map from motion to sensor status, the space containing each possible status of the sensors, and the map from sensor status to perceived motion. The shape of the full perceptual map from actual motion to perceived motion is investigated with the framework, using formal theory and a number of published experimental results. Two principles of simple motion perception and four principles of complex motion perception are presented. The framework also distinguishes the roles of physics and the nervous system in the process of self-motion perception for both simple and complex motions. The present rigorous development of the self-motion perception framework allows the scientist to compare and contrast results from many studies with differing types of motion. The six principles formalized here comprise a foundation with which to explain and predict perceptual phenomena, both those observed in the past and those to be encountered in the future. The framework is especially aimed to expand our capacity to investigate complex motions such as those encountered in everyday life or in unusual motion environments.
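In schematic form (the notation here is assumed, not the paper's), the framework composes two maps:

    S : \mathcal{M} \to \Sigma \quad \text{(actual motion to sensor status)}, \qquad P : \Sigma \to \mathcal{M} \quad \text{(sensor status to perceived motion)},

so the full perceptual map studied is the composition P \circ S : \mathcal{M} \to \mathcal{M}, and two motions m_1 \ne m_2 with S(m_1) = S(m_2) are precisely the perceptually equivalent motions discussed above.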
International Journal of Theoretical Physics | 1996
Jan E. Holly
Vestibular research on human perception of self-motion and orientation generally uses the head-based coordinate system standardized by Hixson, Niven, and Correia (1966) for specifying accelerations of the subject. This paper expands the head-based system to include velocities, thereby incorporating both the visual and vestibular systems, and formally defines the resulting concept of a subject-coincident coordinate system. By capturing the organism's vantage point during self-motion, subject-coincident systems give a natural framework for studying the relationship between stimulus, physiology, and perception; however, the essential approach differs from that familiar in traditional physics, so the necessary equations of motion are developed here. In addition, these equations are used to investigate the set of sustained motions, those motions that can be sustained over a period of time. These motions can cause disorientation and misperception of motion because of saturation or adaptation of the human sensory receptors. The results on sustained motions are summarized in a complete categorization of the set of sustained motions.
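The flavor of the equations involved is the standard rotating-frame relation, shown here in generic form (the paper develops the full subject-coincident versions): for a vector X expressed in head-fixed coordinates, with \omega the angular velocity of the head,

    \left(\frac{dX}{dt}\right)_{\text{earth}} = \left(\frac{dX}{dt}\right)_{\text{head}} + \omega \times X,

and the otolith organs sense the gravito-inertial acceleration f = g - a rather than gravity or linear acceleration separately.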
Biological Cybernetics | 2010
Jan E. Holly; Scott J. Wood; Gin McCollum
Human off-vertical axis rotation (OVAR) in the dark typically produces perceived motion about a cone, the amplitude of which changes as a function of frequency. This perception is commonly attributed to the fact that both the OVAR and the conical motion have a gravity vector that rotates about the subject. Little-known, however, is that this rotating-gravity explanation for perceived conical motion is inconsistent with basic observations about self-motion perception: (a) that the perceived vertical moves toward alignment with the gravito-inertial acceleration (GIA) and (b) that perceived translation arises from perceived linear acceleration, as derived from the portion of the GIA not associated with gravity. Mathematically proved in this article is the fact that during OVAR these properties imply mismatched phase of perceived tilt and translation, in contrast to the common perception of matched phases which correspond to conical motion with pivot at the bottom. This result demonstrates that an additional perceptual rule is required to explain perception in OVAR. This study investigates, both analytically and computationally, the phase relationship between tilt and translation at different stimulus rates—slow (45°/s) and fast (180°/s), and the three-dimensional shape of predicted perceived motion, under different sets of hypotheses about self-motion perception. We propose that for human motion perception, there is a phase-linking of tilt and translation movements to construct a perception of one’s overall motion path. Alternative hypotheses to achieve the phase match were tested with three-dimensional computational models, comparing the output with published experimental reports. The best fit with experimental data was the hypothesis that the phase of perceived translation was linked to perceived tilt, while the perceived tilt was determined by the GIA. This hypothesis successfully predicted the bottom-pivot cone commonly reported and a reduced sense of tilt during fast OVAR. Similar considerations apply to the hilltop illusion often reported during horizontal linear oscillation. Known response properties of central neurons are consistent with this ability to phase-link translation with tilt. In addition, the competing “standard” model was mathematically proved to be unable to predict the bottom-pivot cone regardless of the values used for parameters in the model.
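To make the geometry concrete (axes and signs are chosen here for illustration): during constant-rate OVAR at angular velocity \omega about an axis tilted by \alpha from the earth-vertical, gravity expressed in head-fixed coordinates rotates about the subject,

    g_{\text{head}}(t) = g\,(\sin\alpha \cos\omega t,\; \sin\alpha \sin\omega t,\; \cos\alpha),

so the GIA direction traces a cone of half-angle \alpha around the head's rotation axis. Perceived tilt tends toward this rotating GIA, perceived translation derives from the part of the GIA not attributed to gravity, and only if the two end up phase-matched does the perceived path close into the commonly reported bottom-pivot cone.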
Biological Cybernetics | 2006
Jan E. Holly; Sarah E. Pierce; Gin McCollum
Angular and linear accelerations of the head occur throughout everyday life, whether from external forces such as in a vehicle or from volitional head movements. The relative timing of the angular and linear components of motion differs depending on the movement. The inner ear detects the angular and linear components with its semicircular canals and otolith organs, respectively, and secondary neurons in the vestibular nuclei receive input from these vestibular organs. Many secondary neurons receive both angular and linear input. Linear information alone does not distinguish between translational linear acceleration and angular tilt, with its gravity-induced change in the linear acceleration vector. Instead, motions are thought to be distinguished by use of both angular and linear information. However, for combined motions, composed of angular tilt and linear translation, the infinite range of possible relative timing of the angular and linear components gives an infinite set of motions among which to distinguish the various types of movement. The present research focuses on motions consisting of angular tilt and horizontal translation, both sinusoidal, where the relative timing, i.e. phase, of the tilt and translation can take any value in the range −180° to 180°. The results show how hypothetical neurons receiving convergent input can distinguish tilt from translation, and that each of these neurons has a preferred combined motion, to which the neuron responds maximally. Also shown are the values of angular and linear response amplitudes and phases that can cause a neuron to be tilt-only or translation-only. Such neurons turn out to be sufficient for distinguishing between combined motions, with all of the possible relative angular–linear phases. Combinations of other neurons, as well, are shown to distinguish motions. Relative response phases and in-phase firing-rate modulation are the key to identifying specific motions from within this infinite set of combined motions.
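A minimal numerical sketch of such a convergent unit is given below; it is a toy illustration under a small-angle treatment, and every gain, frequency, and amplitude in it is an assumption for the example rather than a value from the paper.

import numpy as np

# Toy convergent-neuron sketch (not the paper's model): the small-angle
# treatment and all stimulus values below are assumptions for illustration.
g = 9.81                          # gravity (m/s^2)
w = 2 * np.pi * 0.5               # stimulus frequency (rad/s)
t = np.linspace(0.0, 4.0, 4001)   # time (s)
theta0 = np.radians(5.0)          # tilt amplitude (rad)
a0 = 0.5                          # translational acceleration amplitude (m/s^2)
phi = np.pi / 2                   # relative angular-linear phase of the motion

tilt = theta0 * np.sin(w * t)            # angular (tilt) component
accel = a0 * np.sin(w * t + phi)         # linear (translation) component
otolith = g * np.sin(tilt) + accel       # interaural specific force: tilt plus translation
canal = theta0 * w * np.cos(w * t)       # angular-velocity signal

# A "translation-only" convergent response: integrate the canal signal to
# re-estimate tilt, then subtract its gravity contribution from the otolith input.
tilt_from_canal = np.cumsum(canal) * (t[1] - t[0])
translation_only = otolith - g * np.sin(tilt_from_canal)

# A "tilt-only" response simply follows the canal-derived tilt estimate.
tilt_only = tilt_from_canal

print(np.allclose(translation_only, accel, atol=1e-2))  # True: the tilt contribution cancels

The same cancellation works for any relative phase phi, which is the point made in the abstract: convergent units of this kind can keep tilt and translation separable across the whole one-parameter family of combined motions.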
Journal of Vestibular Research-equilibrium & Orientation | 2011
Jan E. Holly; Saralin M. Davis; Kelly E. Sullivan
During passive whole-body motion in the dark, the motion perceived by subjects may or may not be veridical. Either way, reflexive eye movements are typically compensatory for the perceived motion. However, studies are discovering that for certain motions, the perceived motion and eye movements are incompatible. The incompatibility has not been explained by basic differences in gain or time constants of decay. This paper uses three-dimensional modeling to investigate gondola centrifugation (with a tilting carriage) and off-vertical axis rotation. The first goal was to determine whether known differences between perceived motions and eye movements are true differences when all three-dimensional combinations of angular and linear components are considered. The second goal was to identify the likely areas of processing in which perceived motions match or differ from eye movements, whether in angular components, linear components and/or dynamics. The results were that perceived motions are more compatible with eye movements in three dimensions than the one-dimensional components indicate, and that they differ more in their linear than their angular components. In addition, while eye movements are consistent with linear filtering processes, perceived motion has dynamics that cannot be explained by basic differences in time constants, filtering, or standard GIF-resolution processes.
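For reference, "linear filtering processes" refers to dynamics of the kind sketched below as a generic one-dimensional first-order high-pass filter; the time constant and the one-dimensional reduction are assumptions for the illustration, not the paper's three-dimensional models.

import numpy as np

# Generic first-order high-pass (decaying) filter with time constant tau;
# used here only to illustrate what "linear filtering with a time constant" means.
def high_pass(signal, dt, tau):
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + (signal[i] - signal[i - 1]) - (dt / tau) * out[i - 1]
    return out

dt, tau = 0.01, 15.0                 # time step and time constant (s)
t = np.arange(0.0, 60.0, dt)
step = np.where(t > 1.0, 90.0, 0.0)  # step in angular velocity (deg/s)
sensed = high_pass(step, dt, tau)    # jumps to ~90 deg/s, then decays exponentially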
Biological Cybernetics | 1999
Jan E. Holly; Gin McCollum; Richard Boyle
Most naturally occurring displacements of the head in space, due to either an external perturbation of the body or a self-generated, volitional head movement, apply both linear and angular forces to the head. The vestibular system detects linear and angular accelerations of the head separately, but the succeeding control of gaze and posture often relies upon the combined processing of linear and angular motion information. Thus, the output of a secondary neuron may reflect the linear, the angular, or both components of the head motion. Although the vestibular system is typically studied in terms of separate responses to linear and angular acceleration of the head, many secondary and higher-order neurons in the vestibular system do, in fact, receive information from both sets of motion sensors. The present paper develops methods to analyze responses of neurons that receive both types of information, and focuses on responses to sinusoidal motions composed of a linear and an angular component. We show that each neuron has a preferred motion, but a single neuron cannot code for a single motion. However, a pair of neurons can code for a motion by the relative phases of firing-rate modulation. In this way, information about motion is enhanced by neurons combining information about linear and angular motion.
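As a schematic of the pair coding (the notation and response forms here are assumed for illustration): for a combined motion whose angular component varies as \sin(\omega t) and whose linear component varies as \sin(\omega t + \phi), two neurons driven respectively by the angular and the linear input modulate as

    r_1(t) = r_{0,1} + G_1 \sin(\omega t + \psi_1), \qquad r_2(t) = r_{0,2} + G_2 \sin(\omega t + \phi + \psi_2).

Neither modulation alone fixes the motion, but if \psi_1 and \psi_2 are fixed response properties of the two neurons, the relative phase of the two firing-rate modulations recovers \phi and thereby identifies the motion within the family.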
Annals of Pure and Applied Logic | 1992
Lou van den Dries; Jan E. Holly
We consider modules as two-sorted structures with scalar variables ranging over the ring. We show that each formula in which all scalar variables are free is equivalent to a formula of a very simple form, uniformly and effectively for all torsion-free modules over gcd domains (i.e., Bezout domains expanded by gcd operations). For the case of Presburger arithmetic with scalar variables the result takes a still simpler form, and we derive in this way the polynomial-time decidability of the sets defined by such formulas.
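A small example of the kind of formula covered (the example is assumed here, not taken from the paper): over \mathbb{Z} viewed as a module over itself, i.e., Presburger arithmetic with a scalar variable s, the formula

    \exists y \,( s \cdot y = x )

has its only scalar variable s free and expresses "s divides x"; the theorem says such formulas reduce, uniformly and effectively, to a very simple form.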
Journal of Symbolic Logic | 1997
Jan E. Holly
Elimination of imaginaries for 1-variable definable equivalence relations is proved for a theory of algebraically closed valued fields with new sorts for the disc spaces. The proof is constructive, and is based upon a new framework for proving elimination of imaginaries, in terms of prototypes which form a canonical family of formulas for defining each set that is definable with parameters. The proof also depends upon the formal development of the tree-like structure of valued fields, in terms of valued trees, and a decomposition of valued trees which is used in the coding of certain sets of discs.
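For orientation, elimination of imaginaries can be stated in one common form (there are several equivalent formulations): a theory eliminates imaginaries when, for every \emptyset-definable equivalence relation E on n-tuples, there is a \emptyset-definable map f such that

    E(a, b) \iff f(a) = f(b),

so each equivalence class acquires a canonical code; here the result is proved for 1-variable definable equivalence relations in the theory expanded by the disc sorts.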