Roy W. Dobbins
Johns Hopkins University Applied Physics Laboratory
Publications
Featured research published by Roy W. Dobbins.
Neural network PC tools | 1990
Russell C. Eberhart; Roy W. Dobbins
Publisher Summary This chapter presents a case study of the application of neural network tools to solve a real-world problem: the identification of spikes in a multichannel electroencephalogram (EEG) signal. It discusses why solving this problem is important. The presence of EEG waveforms identified as spikes usually indicates some sort of abnormal brain function. The polarity and amplitude patterns of the spikes often provide information on the location and severity of the abnormality, including whether seizures are focal, that is, focused in one small volume. Neurologists use this information when deciding on corrective measures. The EEG spike detection system is being developed for use in the four-bed epilepsy monitoring unit at The Johns Hopkins Hospital. Versions of the system should also be suitable for use at the many facilities that continuously monitor EEG signals.
computer-based medical systems | 1990
Douglas M. Stetson; Russell C. Eberhart; Roy W. Dobbins; William M. Pugh; Antonio Gino
Specification and prototyping of a medical practice support system (MePSS) are discussed. A structured approach using functional, information, and state transition models is described. The use of a prototype as a means of refining system specifications is outlined. The emphasis is on front-end or analysis tools, because they offer the biggest payoff over the software life cycle. The structured approach allowed the design group to function as a team, putting aside individual biases toward specific implementation approaches.
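As a rough illustration of the state transition modeling mentioned in the abstract, the following minimal sketch (in Python) encodes a hypothetical record-review workflow as an explicit transition table; the states and events are invented for illustration and are not taken from the MePSS specification.

# Hypothetical state transition model for a prototype record-review workflow.
# States and events are illustrative, not taken from the MePSS specification.
TRANSITIONS = {
    ("idle", "open_record"): "reviewing",
    ("reviewing", "order_test"): "awaiting_results",
    ("awaiting_results", "results_in"): "reviewing",
    ("reviewing", "close_record"): "idle",
}

def step(state, event):
    """Return the next state, rejecting events not allowed in the current state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not permitted in state {state!r}")

state = "idle"
for event in ("open_record", "order_test", "results_in", "close_record"):
    state = step(state, event)
    print(event, "->", state)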
Neural Network PC Tools: A Practical Guide | 1990
Russell C. Eberhart; Roy W. Dobbins
Publisher Summary This chapter discusses how to implement a few examples of neural network tools (NNTs) on personal computers. The implementations are done step by step, with explanations along the way. Each NNT architecture (topology, model) has been selected because of its successful track record in solving practical problems on PC-based systems. The PC implementations presented in the chapter are not meant to represent generic versions of any model; they are merely representative samples of a few NNTs that the authors believe to be potentially useful to a wide range of users. The chapter describes the back-propagation model in terms of the architecture of the NNT that implements it. The term architecture, as applied to neural networks, has been used in different ways by different authors. Often, its meaning has been taken to be essentially equivalent to topology, that is, the pattern of nodes and interconnections, together with such other items as directions of data flow and node-activation functions.
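To make the topology terminology concrete, here is a minimal sketch, assuming a three-layer feed-forward network with sigmoid node-activation functions; the layer sizes, weight ranges, and input pattern are illustrative assumptions, not values from the book.

import numpy as np

def sigmoid(x):
    """Sigmoid node-activation function."""
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative topology: 4 input nodes, 3 hidden nodes, 2 output nodes.
rng = np.random.default_rng(0)
w1 = rng.uniform(-0.5, 0.5, (4, 3))  # input-to-hidden weights
w2 = rng.uniform(-0.5, 0.5, (3, 2))  # hidden-to-output weights

def forward(x):
    """Propagate one input pattern forward through the network."""
    out1 = sigmoid(x @ w1)     # hidden-layer activations
    out2 = sigmoid(out1 @ w2)  # output-layer activations
    return out1, out2

_, outputs = forward(np.array([0.1, 0.9, 0.3, 0.7]))
print(outputs)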
Neural Network PC Tools: A Practical Guide | 1990
Russell C. Eberhart; Roy W. Dobbins; Larrie V. Hutton
Publisher Summary This chapter explores a few issues related to measuring how well a neural network tool is doing. The subject has not been treated extensively in the literature; therefore, in a few cases, techniques that have been applied in related areas are adapted to measuring performance. The chapter reviews a number of issues related to measuring neural network tool performance. It discusses the selection of the gold standards against which performance is measured and the role that the decision threshold level can play in determining system performance. The performance measurements discussed in the chapter include the relatively simple measure of percent correct, the average sum-squared error measure, receiver operating characteristic (ROC) curve measurements, measurements based on ROC curve parameters such as recall, precision, sensitivity, and specificity, and the chi-square goodness-of-fit metric. The specific measure chosen depends on the type of system being used and on other, somewhat more loosely defined parameters, such as the level of technical sophistication of the system end user.
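As a brief worked example of several of the simpler measures named above, the following sketch computes percent correct, average sum-squared error, sensitivity, specificity, and precision from a vector of network outputs and binary targets; the output values, targets, and the 0.5 decision threshold are assumptions chosen only for illustration.

import numpy as np

outputs = np.array([0.92, 0.13, 0.78, 0.41, 0.66, 0.08])  # network outputs (illustrative)
targets = np.array([1, 0, 1, 1, 0, 0])                    # gold-standard labels (illustrative)

threshold = 0.5  # decision threshold (assumed; varying it traces out an ROC curve)
predicted = (outputs >= threshold).astype(int)

tp = int(np.sum((predicted == 1) & (targets == 1)))  # true positives
tn = int(np.sum((predicted == 0) & (targets == 0)))  # true negatives
fp = int(np.sum((predicted == 1) & (targets == 0)))  # false positives
fn = int(np.sum((predicted == 0) & (targets == 1)))  # false negatives

percent_correct = 100.0 * (tp + tn) / len(targets)
avg_sum_squared_error = float(np.mean((targets - outputs) ** 2))
sensitivity = tp / (tp + fn)  # also called recall
specificity = tn / (tn + fp)
precision = tp / (tp + fp)

print(percent_correct, avg_sum_squared_error, sensitivity, specificity, precision)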
Neural Network PC Tools: A Practical Guide | 1990
Russell C. Eberhart; Roy W. Dobbins
Publisher Summary Neural network tools (NNTs) can provide solutions for a variety of problems. For a few of these problems, no other way of providing a solution is known. For another subset of problems, other approaches may exist, but using an NNT is by far the easiest or gives the best results. For still another subset, other methods might work and could be implemented with about the same amount of effort. For the last subset, there are clearly better ways to attack the problem than by using NNTs. This chapter presents guidelines for evaluating an NNT for use in a particular situation and for deciding which subset a problem falls into. This evaluation should always be done from a systems point of view. The best candidate problems for neural network analysis are those characterized by fuzzy, imprecise, and imperfect knowledge (data), and/or by the lack of a clearly stated mathematical algorithm for analyzing the data. However, there must be enough data to yield sufficient training and test sets to train an NNT and evaluate its performance effectively.
Neural Network PC Tools: A Practical Guide | 1990
Russell C. Eberhart; Roy W. Dobbins
Publisher Summary The subject of neural networks is broad and deep, covering disciplines ranging from medicine to microelectronics. Neural network tools (NNTs) are derived from the massively parallel biological structures found in brains, and this chapter discusses that derivation. An NNT is an analysis tool modeled after the massively parallel structure of the brain. It simulates a highly interconnected, parallel computational structure with many relatively simple individual processing elements, called neurodes. Individual neurodes are gathered into groups called slabs. Slabs can receive input (input slabs), provide output (output slabs), or be inaccessible to both input and output, with connections only to other slabs (internal slabs). Neural network tools are characterized in three ways: first, by the architecture of the NNT, which is the particular way in which the slabs are interconnected and receive input and output; second, by the transfer function of the slabs, that is, the function that describes the output of a neurode given its input; and third, by the learning paradigm used for training the network. These three characteristics can be thought of as the top-level attributes of an NNT.
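The slab terminology can be captured in a small data structure; the following is a hypothetical rendering in Python, with the field names, the sigmoid transfer function, and the back-propagation label chosen as illustrative assumptions rather than taken from the book.

from dataclasses import dataclass
import math

def sigmoid(net_input):
    """Transfer function: the output of a neurode given its net input (assumed sigmoid)."""
    return 1.0 / (1.0 + math.exp(-net_input))

@dataclass
class Slab:
    """A group of neurodes; kind is 'input', 'internal', or 'output'."""
    name: str
    kind: str
    size: int  # number of neurodes in the slab

# The three top-level attributes of an NNT, expressed as plain data:
architecture = {  # the pattern of slabs and their interconnections
    "slabs": [Slab("in", "input", 4),
              Slab("hidden", "internal", 3),
              Slab("out", "output", 2)],
    "connections": [("in", "hidden"), ("hidden", "out")],
}
transfer_function = sigmoid
learning_paradigm = "back-propagation"  # illustrative choice of training paradigm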
Neural Network PC Tools: A Practical Guide | 1990
Roy W. Dobbins; Russell C. Eberhart
Publisher Summary This chapter discusses neural network development environments for personal computers and describes one such environment, CaseNet. A neural network development environment has most of the features that developers have come to expect in PC software development environments, such as editors, compilers, interpreters, linkers, library managers, debuggers, and spreadsheets. These tools should be integrated into the development environment so that neural network software can be developed easily. In addition, however, neural network development environments should share something with artificial intelligence simulation and modeling packages, which provide languages and tools to model systems, specify dynamics, run experiments, and acquire sensory data. A neural network development environment should incorporate these concepts and adapt available tools to the needs of neural networks. It should be a user-friendly system for specifying and executing network models, with a user interface that supports both novice and advanced users. It should be easy for the novice to become familiar with the system: the software should be menu-driven, with context-sensitive pop-up help screens and other user interface features that make it easy to learn and use.
Neural Network PC Tools: A Practical Guide | 1990
Roy W. Dobbins; Russell C. Eberhart
Publisher Summary This chapter describes software tools to model, specify, and run neural networks on PCs and explains how a network specification is turned into working code. It focuses on low-level programming tools for implementing networks. Neural network software implements neural networks on a hardware platform; the terms artificial neural network and neural network simulation are often used to describe such software. The chapter describes the forward and backward passes of the neural network. In a backward pass, the error signals are propagated backward through the network, starting at the output layer. The error term (or delta) at the output layer, delta2, is computed from the difference between the actual output and the desired target values for each node in the output layer, for each training pattern: delta2 = (target - out2) * out2 * (1 - out2), where out2 is the activation vector at the output layer and target is the target vector (the desired network response).
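A short sketch of this computation, assuming sigmoid activations: the delta2 line follows the formula quoted above, while the hidden-layer delta, weight updates, learning rate, and layer sizes are standard back-propagation steps included as illustrative assumptions to complete the picture.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
w1 = rng.uniform(-0.5, 0.5, (4, 3))  # input-to-hidden weights
w2 = rng.uniform(-0.5, 0.5, (3, 2))  # hidden-to-output weights
eta = 0.25                           # learning rate (assumed)

x = np.array([0.1, 0.9, 0.3, 0.7])   # one training pattern
target = np.array([1.0, 0.0])        # desired network response

# Forward pass.
out1 = sigmoid(x @ w1)     # hidden-layer activation vector
out2 = sigmoid(out1 @ w2)  # output-layer activation vector

# Backward pass: the output-layer delta, exactly as quoted in the text.
delta2 = (target - out2) * out2 * (1 - out2)

# Propagate the error signal back to the hidden layer (standard back-propagation).
delta1 = (delta2 @ w2.T) * out1 * (1 - out1)

# Update the weights.
w2 += eta * np.outer(out1, delta2)
w1 += eta * np.outer(x, delta1)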
Neural Network PC Tools: A Practical Guide | 1990
Russell C. Eberhart; Roy W. Dobbins
Archive | 1996
Russell C. Eberhart; Pat Simpson; Roy W. Dobbins