Donald W. Mathis
University of Colorado Boulder
Publications
Featured research published by Donald W. Mathis.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1991
William J. Wolfe; Donald W. Mathis; Cheryl W. Sklair; Michael Magee
The perspective view of three noncollinear points whose image-to-object correspondence is known is studied. Such measurements are known to be ambiguous, yielding as many as four possible solutions to the perspective three-point problem. Although up to four solutions are possible, particular triangle configurations commonly give rise to one, two, three, or four solutions. The results also provide a justification for the common wisdom that there are usually two solutions.
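For context (standard background rather than notation taken from the paper itself): the perspective three-point problem is usually reduced, via the law of cosines, to three quadratic equations in the unknown distances from the camera center to the three points, and it is this system that admits up to four physically valid solutions.

```latex
% Law-of-cosines formulation of the perspective three-point problem.
% s_i        : unknown distance from the camera center to object point P_i
% d_{ij}     : known distance between object points P_i and P_j
% \theta_{ij}: known angle between the viewing rays toward P_i and P_j
\begin{aligned}
s_1^2 + s_2^2 - 2 s_1 s_2 \cos\theta_{12} &= d_{12}^2,\\
s_2^2 + s_3^2 - 2 s_2 s_3 \cos\theta_{23} &= d_{23}^2,\\
s_1^2 + s_3^2 - 2 s_1 s_3 \cos\theta_{13} &= d_{13}^2.
\end{aligned}
```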
IEEE Transactions on Neural Networks | 1991
William J. Wolfe; Donald W. Mathis; Charlie Anderson; Jay Rothman; Michael Gottler; George Brady; R. Walker; Gregory S. Duane; Gita Alaghband
A special class of mutually inhibitory networks is analyzed, and parameters for reliable K-winner performance are presented. The network dynamics are modeled using interactive activation, and results are compared with the sigmoid model. For equal external inputs, network parameters that select the units with the larger initial activations (the network converges to the nearest stable state) are derived. Conversely, for equal initial activations, networks that select the units with larger external inputs (the network converges to the lowest energy stable state) are derived. When initial activations are mixed with external inputs, anomalous behavior results. These discrepancies are analyzed with several examples. Restrictions on initial states are derived which ensure accurate K-winner performance when unequal external inputs are used.
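A minimal simulation sketch of the kind of mutually inhibitory network described above, using interactive-activation dynamics; the parameter values (inhibition weight, decay, activation bounds, step size) are illustrative assumptions, not the reliability conditions derived in the paper.

```python
import numpy as np

def k_winner_interactive_activation(init_act, ext_input, w_inhib=0.2,
                                    decay=0.1, rest=0.0, a_min=-0.2, a_max=1.0,
                                    steps=200, step_size=0.1):
    """Simulate a mutually inhibitory network with interactive-activation dynamics.

    Each unit receives its own external input and inhibition -w_inhib from every
    other unit's positive activation. Units whose net input is positive move
    toward a_max; units with negative net input move toward a_min.
    """
    a = np.array(init_act, dtype=float)
    ext = np.array(ext_input, dtype=float)
    for _ in range(steps):
        # Lateral inhibition from all *other* active units.
        pos = np.clip(a, 0.0, None)
        net = ext - w_inhib * (pos.sum() - pos)
        # Interactive-activation update rule (McClelland & Rumelhart style).
        delta = np.where(net > 0,
                         net * (a_max - a),
                         net * (a - a_min))
        a += step_size * (delta - decay * (a - rest))
        a = np.clip(a, a_min, a_max)
    return a

# Equal external inputs, unequal initial activations: inspect which units
# remain most active at convergence.
acts = k_winner_interactive_activation(
    init_act=[0.3, 0.25, 0.1, 0.05, 0.0],
    ext_input=[0.1, 0.1, 0.1, 0.1, 0.1])
print(np.round(acts, 2))
```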
IEEE Transactions on Neural Networks | 1993
William J. Wolfe; James MacMillan; George Brady; Robert Mathews; Jay Rothman; Donald W. Mathis; Michael D. Orosz; Charlie Anderson; Gila Alaghband
A family of symmetric neural networks that solve a simple version of the assignment problem (AP) is analyzed. The authors analyze the suboptimal performance of these networks and compare the results to optimal answers obtained by linear programming techniques. They then use the interactive activation model to define the network dynamics, a model that is closely related to the Hopfield-Tank model. A systematic analysis of hypercube corner stability and eigenspaces of the connection strength matrix leads to network parameters that give feasible solutions 100% of the time and to a projection algorithm that significantly improves performance. Two formulations of the problem are discussed: (i) nearest corner: encode the assignment numbers as initial activations, and (ii) lowest energy corner: encode the assignment numbers as external inputs.
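As a companion sketch (not the paper's network): the optimal reference that the suboptimal network answers are compared against can be computed directly. Here linear_sum_assignment stands in for the linear-programming step, and the energy function is a generic penalty-style encoding of the row/column constraints with an illustrative penalty weight.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 4x4 assignment "benefit" matrix (rows = agents, cols = tasks).
benefit = np.array([[9., 2., 7., 8.],
                    [6., 4., 3., 7.],
                    [5., 8., 1., 8.],
                    [7., 6., 9., 4.]])

# Optimal baseline: a convenient stand-in for the linear-programming reference.
rows, cols = linear_sum_assignment(-benefit)   # negate to maximize total benefit
optimal_value = benefit[rows, cols].sum()

def assignment_energy(V, benefit, penalty=10.0):
    """Generic Hopfield-style energy for a candidate 0/1 assignment matrix V:
    reward the encoded benefits, penalize violated row/column constraints."""
    reward = np.sum(V * benefit)
    row_violation = np.sum((V.sum(axis=1) - 1.0) ** 2)
    col_violation = np.sum((V.sum(axis=0) - 1.0) ** 2)
    return -reward + penalty * (row_violation + col_violation)

V_opt = np.zeros_like(benefit)
V_opt[rows, cols] = 1.0
print("optimal total benefit:", optimal_value)
print("energy of optimal corner:", assignment_energy(V_opt, benefit))
```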
Intelligent Robots and Computer Vision VII | 1989
William J. Wolfe; Cheryl Weber-Sklair; Donald W. Mathis; Michael Magee
Determining the 3-D location of an object from image-derived features, such as edges and vertices, has been a central problem for the computer vision industry since its inception. This paper reports on the use of four coplanar points (in particular, a rectangle) and three points for determining 3-D object position from a single perspective view. The four-point algorithm of Hung and Yeh is compared to the four-point algorithm of Haralick. Both methods uniquely solve the inverse perspective problem, but in different ways. The use of three points has proven to be more difficult, mainly because of multiple solutions to the inverse perspective problem as pointed out by Fischler and Bolles. This paper also presents computer simulation results that demonstrate the spatial constraints associated with these multiple solutions. These results provide the basis for discarding spurious solutions when some prior knowledge of configuration is available. Finally, the use of vertex-pairs introduced by Thompson and Mundy is analyzed and compared to the other methods.
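A brief sketch of the same inverse-perspective task on four coplanar points, using OpenCV's solvePnP rather than the Hung-Yeh or Haralick algorithms discussed in the paper; the rectangle dimensions, pixel coordinates, and camera intrinsics are made-up illustrative values.

```python
import numpy as np
import cv2

# Four coplanar object points: corners of a hypothetical 20cm x 10cm rectangle
# lying in the Z=0 plane of the object frame (units: meters).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.1, 0.0],
                          [0.0, 0.1, 0.0]], dtype=np.float64)

# Their observed image locations in pixels (illustrative values only).
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [418.0, 300.0],
                         [322.0, 302.0]], dtype=np.float64)

# Assumed pinhole intrinsics: focal length 800 px, principal point at image center.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(4)  # assume no lens distortion

# Recover the object pose (rotation + translation) from the single view.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)
print("rotation matrix:\n", R)
print("translation (m):", tvec.ravel())
```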
Intelligent Robots and Computer Vision IX: Neural, Biological, and 3D Methods | 1991
William J. Wolfe; Donald W. Mathis; C. Anderson; Jay Rothman; Michael Gottler; G. Brady; R. Walker; G. Duane; Gita Alaghband
Mutually inhibitory networks are the fundamental building blocks of many complex systems. Despite their apparent simplicity, they exhibit interesting behavior. We analyze a special class of such networks and provide parameters for reliable K-winner performance. We model the network dynamics using interactive activation and compare our results to the sigmoid model. When the external inputs are all equal, we can derive network parameters that reliably select the units with the larger initial activations, because the network converges to the nearest stable state. Conversely, when the initial activations are all equal, we can derive networks that reliably select the units with larger external inputs, because the network converges to the lowest-energy stable state. But when we mix initial activations with external inputs, we get anomalous behavior. We analyze these discrepancies, giving several examples. We also derive restrictions on initial states which ensure accurate K-winner performance when unequal external inputs are used. Much of this work was motivated by the K-winner networks described by Majani et al. in [1]. They use the sigmoid model and provide parameters for reliable K-winner performance. Their approach is based primarily on choosing an appropriate external input, the same for all units, that depends on K. We extend their work to the interactive activation model and analyze external inputs that are constant but possibly different for each unit more closely. Furthermore, we observe a parametric duality in that changing …
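For reference, the "lowest-energy stable state" language can be read against the standard Hopfield-style energy for a uniformly inhibitory network (generic bookkeeping, not the paper's own derivation): with inhibition strength w > 0, external inputs e_i, and activations a_i,

```latex
% Hopfield-style energy for a mutually inhibitory network with uniform
% inhibition w > 0; e_i is the external input and a_i the activation of unit i.
E(a) \;=\; \frac{w}{2} \sum_{i \neq j} a_i a_j \;-\; \sum_i e_i a_i
```

Stable states are local minima of this energy over the allowed activation range; the abstract's two regimes correspond to convergence driven by the initial activations (nearest stable state) versus by the external-input term (lowest-energy stable state).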
Visual Communications and Image Processing | 1990
William J. Wolfe; Gita Alaghband; Donald W. Mathis
We investigate the simultaneous occurrence of speech, vision and natural language. Several applications are analyzed in order to demonstrate and categorize the many ways that voice signals, images and text can be semantically related. Examples are provided of how connectionist, blackboard, and conceptual dependency approaches apply.
Visual Communications and Image Processing | 1990
William J. Wolfe; Gita Alaghband; Donald W. Mathis; Alan Baxter
In this paper we discuss applications of the Connection Machine to various robot vision problems. In particular, we investigate backpropagation used for terrain typing and as part of a model-based computer vision system. We sketch the mapping of the backpropagation algorithm onto the fine-grained architecture of the CM-2 and discuss the interplay between data-level and control-level parallelism in model-based vision systems.
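This is not the CM-2 implementation itself; the NumPy sketch below shows a single backpropagation step for a small two-layer network, with comments marking the elementwise and matrix operations that map naturally onto a fine-grained data-parallel machine. The layer sizes and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network: 16 inputs, 8 hidden units, 4 outputs.
W1 = rng.normal(scale=0.1, size=(16, 8))
W2 = rng.normal(scale=0.1, size=(8, 4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, W1, W2, lr=0.1):
    """One forward/backward pass; each line is an elementwise or matrix
    operation that distributes naturally across fine-grained processors
    (one processor per unit or per weight)."""
    # Forward pass.
    h = sigmoid(x @ W1)          # hidden activations, computed in parallel per unit
    y = sigmoid(h @ W2)          # output activations
    # Backward pass (squared-error loss).
    delta_out = (y - target) * y * (1.0 - y)        # per-output-unit error terms
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)  # per-hidden-unit error terms
    # Weight updates: one independent update per weight.
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)
    return 0.5 * np.sum((y - target) ** 2)

x = rng.normal(size=16)
target = np.array([0.0, 1.0, 0.0, 1.0])
for step in range(5):
    loss = backprop_step(x, target, W1, W2)
print("loss after a few steps:", loss)
```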
Archive | 1996
Donald W. Mathis; Michael C. Mozer
Neural Information Processing Systems | 1994
Donald W. Mathis; Michael C. Mozer
Visual Communications and Image Processing | 1990
William J. Wolfe; Donald W. Mathis; Michael Magee; William Hoff