Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gary L. Bradshaw is active.

Publications


Featured research published by Gary L. Bradshaw.


Machine Learning: An Artificial Intelligence Approach, Volume I | 1983

Rediscovering Chemistry with the Bacon System

Pat Langley; Gary L. Bradshaw; Herbert A. Simon

BACON.4 is a production system that discovers empirical laws. The program represents information at varying levels of description, with higher levels summarizing the levels below them. BACON.4 employs a small set of data-driven heuristics to detect regularities in numeric and nominal data. These heuristics note constancies and trends, causing BACON.4 to formulate hypotheses, to define theoretical terms, and to postulate intrinsic properties. The introduction of intrinsic properties plays an important role in BACON.4’s rediscovery of Ohm’s law for electric circuits and Archimedes’ law of displacement. When augmented with a heuristic for noting common divisors, the system is able to replicate a number of early chemical discoveries, arriving at Proust’s law of definite proportions, Gay-Lussac’s law of combining volumes, Cannizzaro’s determination of the relative atomic weights, and Prout’s hypothesis. The BACON.4 heuristics, including the new technique for finding common divisors, appear to be general mechanisms applicable to discovery in diverse domains.
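
As a rough illustration of the kind of data-driven heuristic described above (and not the original production-system code), the following Python sketch checks two numeric terms for constancy and, depending on whether they trend together or oppositely, defines their ratio or product as a new theoretical term and recurses; the function names, tolerance, and recursion depth are assumptions made for illustration.

```python
# Illustrative sketch of a BACON-style numeric heuristic (not the original
# production-system code; names and tolerances are assumptions).

def nearly_constant(values, tol=0.01):
    """True if all values agree within a relative tolerance."""
    lo, hi = min(values), max(values)
    return hi - lo <= tol * max(abs(hi), abs(lo), 1e-12)

def bacon_step(name_x, xs, name_y, ys, tol=0.01, depth=3):
    """Apply the constancy/trend heuristics, defining ratio or product terms as needed."""
    if nearly_constant(ys, tol):
        return f"{name_y} is constant"
    if depth == 0:
        return "no simple law found"
    xs_increasing = all(b >= a for a, b in zip(xs, xs[1:]))
    ys_increasing = all(b >= a for a, b in zip(ys, ys[1:]))
    if xs_increasing == ys_increasing:
        # terms vary together: consider their ratio as a new theoretical term
        new_name, new_vals = f"{name_y}/{name_x}", [y / x for x, y in zip(xs, ys)]
    else:
        # terms vary oppositely: consider their product instead
        new_name, new_vals = f"{name_y}*{name_x}", [y * x for x, y in zip(xs, ys)]
    return bacon_step(name_x, xs, new_name, new_vals, tol, depth - 1)

# Ohm-style data: voltage over current is constant (a "resistance" term).
print(bacon_step("I", [1.0, 2.0, 3.0], "V", [2.0, 4.0, 6.0]))  # -> "V/I is constant"
```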


Journal of Verbal Learning and Verbal Behavior | 1982

Elaborative encoding as an explanation of levels of processing.

Gary L. Bradshaw; John R. Anderson

Three experiments were performed to study how elaboration of memory structures affects recall accuracy and response latency. The experiments introduce a methodology that can independently manipulate the amount and type of elaboration given to subjects. Using this methodology, it was shown that integrated, highly elaborated memory traces were better recalled than either small unelaborated traces or large, poorly integrated traces. The results have implications for current analyses of levels-of-processing phenomena and support the encoding elaboration model (J. R. Anderson & L. M. Reder, Levels of processing in human memory. Hillsdale, NJ: Erlbaum, 1979).


Synthese | 1981

Scientific discovery as problem solving

Herbert A. Simon; Pat Langley; Gary L. Bradshaw



Archive | 2012

Computational Models of Learning

Gary L. Bradshaw; Pat Langley; Ryszard S. Michalski; S. Ohlsson; L. A. Rendell; Herbert A. Simon; J. G. Wolff; Leonard Bolc

In recent years, machine learning has emerged as a significant area of research in artificial intelligence and cognitive science. Research in the field is intensifying in both theory and implementation, and its results are being put into practice. Machine learning has attracted many young and talented scientists whose bold ideas have greatly broadened knowledge in this rapidly developing field. This interest has produced a growing number of valuable contributions to scientific journals, but such papers are necessarily compact descriptions of research problems. Computational Models of Learning supplements these contributions with a collection of more extensive essays on carefully selected problems in machine learning.


International Conference on Machine Learning | 1987

Learning about speech sounds: The NEXUS Project

Gary L. Bradshaw

Pattern recognition systems of necessity incorporate an approximate-matching process to determine the degree of similarity between an unknown input and all stored references. The matching process serves as an automatic generalization mechanism that permits each reference pattern to act as a set of specific instances. Learning mechanisms do not need to operate by manipulating hypotheses in an abstraction hierarchy, but instead can “seed” instances into the concept space, leaving generalization to the matching algorithm. This strategy represents an attractive alternative to data-driven generalization and discrimination techniques when the abstraction space is very large. NEXUS, a computer speech recognition system, incorporates learning heuristics based on this method that permit the system to identify a set of primitive acoustic concepts from experience with words. The efficacy of the resulting concepts is demonstrated by comparative recognition tests where the recognition error rate is only one-seventh that of traditional architectures.
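
The “seeding” strategy described above can be sketched as follows; this is an illustrative nearest-neighbour reading of the idea, not the NEXUS implementation, and the Euclidean metric, class labels, and toy vectors are assumptions.

```python
# Schematic sketch of learning-by-seeding with approximate matching
# (not the NEXUS implementation; distance metric and representation are assumed).
import math

class SeededRecognizer:
    def __init__(self):
        self.seeds = []  # list of (feature_vector, label) pairs

    def learn(self, vector, label):
        """Learning just seeds the labelled instance into the concept space."""
        self.seeds.append((list(vector), label))

    def recognize(self, vector):
        """Approximate matching: return the label of the closest stored seed."""
        best_ref, best_label = min(self.seeds, key=lambda s: math.dist(vector, s[0]))
        return best_label

# Toy usage with made-up 2-D "acoustic" vectors.
r = SeededRecognizer()
r.learn([0.1, 0.9], "vowel")
r.learn([0.8, 0.2], "fricative")
print(r.recognize([0.2, 0.8]))  # -> "vowel"
```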


Computational models of learning | 1987

Heuristics for empirical discovery

Pat Langley; Herbert A. Simon; Gary L. Bradshaw

In this paper, we review our experiences with the BACON project, which has focused on empirical methods for discovering numeric laws. The six successive versions of BACON have employed a variety of discovery methods, some very simple and others quite sophisticated. We examine methods for discovering a functional relation between two numeric terms, including techniques for detecting monotonic trends, finding constant differences, and hill-climbing through a space of parameter values. We also consider methods for discovering complex laws involving many terms, some of which build on techniques for finding two-variable relations. Finally, we introduce the notions of intrinsic properties and common divisors, and examine methods for inferring intrinsic values from symbolic data. In each case, we describe the various techniques in terms of the search required to discover useful laws.
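
One of the two-variable methods mentioned, finding constant differences, might be sketched as below: for evenly spaced values of one term, constant first differences in the other suggest a linear law, constant second differences a quadratic one. The function and tolerance are illustrative assumptions, not BACON's own code.

```python
# Illustrative constant-differences check for observations taken at evenly
# spaced values of the independent term (a sketch, not BACON's own code).

def differences(values):
    return [b - a for a, b in zip(values, values[1:])]

def degree_by_constant_differences(ys, tol=1e-6, max_order=4):
    """Return the smallest k such that the k-th differences of ys are (nearly)
    constant, suggesting a degree-k polynomial law in the evenly spaced x."""
    current = list(ys)
    for k in range(max_order + 1):
        if not current:
            return None
        if max(current) - min(current) <= tol:
            return k
        current = differences(current)
    return None

# Distances of a uniformly accelerating body at equal time steps:
# second differences are constant, suggesting a quadratic (degree-2) law.
print(degree_by_constant_differences([0.0, 1.0, 4.0, 9.0, 16.0]))  # -> 2
```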


Machine learning: a guide to current research | 1986

Learning by disjunctive spanning

Gary L. Bradshaw

A new concept-learning technique, disjunctive spanning, was developed to identify perceptual patterns. Like the Aq algorithm [227], the technique identifies disjunctive concepts without searching through a hierarchically ordered generalization space. The disjunctive spanning algorithm was implemented in NEXUS, a speech recognition system designed to acquire knowledge about minimal articulatory units in speech. Performance tests show NEXUS to be superior to a traditional speech recognition system when both are evaluated on the same highly confusable vocabulary set.
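
A plausible reading of disjunctive spanning, offered only as an illustrative sketch (the radius threshold and distance metric are assumptions, not the published algorithm): a concept is grown as a set of stored exemplars, and a training instance is added as a new disjunct only when no existing exemplar approximately matches it.

```python
# Sketch of a disjunctive concept grown by "spanning": a new disjunct is
# added only when no stored exemplar approximately matches the instance.
# (Illustrative only; threshold and metric are assumptions.)
import math

def train_disjunctive_concept(instances, radius=0.5):
    disjuncts = []
    for x in instances:
        if not any(math.dist(x, d) <= radius for d in disjuncts):
            disjuncts.append(list(x))  # no existing disjunct covers x: add a new one
    return disjuncts

def covers(disjuncts, x, radius=0.5):
    return any(math.dist(x, d) <= radius for d in disjuncts)

# Two well-separated pattern variants end up as two disjuncts of one concept.
concept = train_disjunctive_concept([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
print(len(concept), covers(concept, [0.05, 0.05]))  # -> 2 True
```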


Journal of the Acoustical Society of America | 1991

Feature detection using a connectionist network

Gary L. Bradshaw; Alan Bell

A feedforward connectionist network trained by backpropagation was used to detect 15 speech features. The network was trained on 240 sentences (40 men and 40 women) and tested on 200 sentences (10 men and 10 women), all part of the MIT Ice Cream database. Network input consisted of a smoothed spectral vector at 15-ms intervals, plus two coefficients of amplitude and spectral change. The network achieves a signal-detection discrimination level (a-prime) of 0.87, compared with 0.76 for a ten-nearest-neighbor system. Nearly identical training and test performance indicates excellent generalization to new speakers and text. Processing costs are dominated by signal processing and network training; detection itself can be done in real time. Performance is much better for broad, frequently occurring features like sonorance than for infrequent features like sibilance, partly because of their low frequency and partly because of other characteristics. [Work supported by USWest.]
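
A minimal sketch of this class of detector, assuming a PyTorch implementation with arbitrary layer sizes and random stand-in data (none of which come from the abstract): a feedforward network with one sigmoid output per feature, trained by backpropagation against binary presence/absence targets.

```python
# Minimal sketch of a feedforward feature detector trained by backpropagation
# (PyTorch used for illustration; layer sizes and data are assumptions).
import torch
from torch import nn

N_INPUT, N_FEATURES = 66, 15   # assumed 64 spectral bins + 2 change coefficients -> 15 features

model = nn.Sequential(
    nn.Linear(N_INPUT, 64),
    nn.Sigmoid(),
    nn.Linear(64, N_FEATURES),
    nn.Sigmoid(),                  # one independent detector output per feature
)
loss_fn = nn.BCELoss()             # binary presence/absence of each feature
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One training step on a fake batch of 32 frames.
frames = torch.rand(32, N_INPUT)
targets = torch.randint(0, 2, (32, N_FEATURES)).float()
optimizer.zero_grad()
loss = loss_fn(model(frames), targets)
loss.backward()                    # backpropagation
optimizer.step()
```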


Journal of the Acoustical Society of America | 1982

A hybrid system approach to recognition of acoustically similar items

Gary L. Bradshaw; Ron Cole; Raj Reddy

Speech recognition systems based on spectral template matching have proven successful in restricted speech-understanding tasks, especially speaker-dependent recognition of isolated words with acoustically distinct vocabularies. Such systems, however, have been of limited utility where vocabulary items are acoustically similar, as in the set B, C, D, E, G, P, T, V, and Z. We will describe a hybrid system which supplements the spectral template with a set of feature measurements extracted from the signal. A learning mechanism automatically extracts a weighting vector that emphasizes informative features as well as informative frames in the template. Recognition performance of the hybrid system is superior to that of the system using either the spectral information alone or the featural information alone. Hybrid recognition architectures are therefore useful in overcoming present limitations of speech recognition systems. [Work supported by NSF.]
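
One way such a weighting vector could be obtained, shown only as an assumption-laden sketch (the Fisher-style variance-ratio weighting is not claimed to be the paper's learning mechanism): weight each dimension by how well it separates the confusable classes, then recognize by weighted distance to each template.

```python
# Sketch of hybrid template matching: a per-dimension weight vector
# emphasizes informative dimensions when comparing an input to each stored
# template. The variance-ratio weighting is an assumption for illustration.
import numpy as np

def learn_weights(examples_by_class, eps=1e-6):
    """Weight each dimension by between-class variance / within-class variance."""
    class_means = np.array([np.mean(x, axis=0) for x in examples_by_class.values()])
    within = np.mean([np.var(x, axis=0) for x in examples_by_class.values()], axis=0)
    between = np.var(class_means, axis=0)
    return between / (within + eps)

def weighted_distance(x, template, weights):
    return np.sum(weights * (x - template) ** 2)

def recognize(x, templates, weights):
    """Return the label of the template with the smallest weighted distance."""
    return min(templates, key=lambda label: weighted_distance(x, templates[label], weights))

# Toy usage: two confusable classes differing mainly in the first dimension.
data = {"B": np.array([[1.0, 5.0], [1.2, 4.8]]), "D": np.array([[3.0, 5.1], [3.1, 4.9]])}
w = learn_weights(data)
templates = {label: x.mean(axis=0) for label, x in data.items()}
print(recognize(np.array([1.1, 5.2]), templates, w))  # -> "B"
```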


Archive | 1987

Scientific Discovery: Computational Explorations of the Creative Processes

Pat Langley; Herbert A. Simon; Gary L. Bradshaw; Jan M. Zytkow

Collaboration


Dive into Gary L. Bradshaw's collaborations.

Top Co-Authors

Herbert A. Simon, Carnegie Mellon University
Jan M. Zytkow, University of North Carolina at Charlotte
Alan Bell, University of Colorado Boulder
John R. Anderson, Carnegie Mellon University
Louis Ceci, University of Colorado Boulder
Louis M. Herman, University of Hawaii at Manoa
Richard Fozzard, University of Colorado Boulder