
Publication


Featured research published by Dennis F. Kibler.


Machine Learning | 1991

Instance-Based Learning Algorithms

David W. Aha; Dennis F. Kibler; Marc K. Albert

Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
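The core of the approach is the nearest-neighbor rule, plus a storage-reduction step that keeps only instances the current concept description misclassifies. The sketch below illustrates both ideas; `ib2_style_filter` is written in the spirit of the paper's storage-reducing variant (IB2), not a reproduction of the published algorithm.

```python
import math

def nearest_neighbor_predict(instances, query):
    """Classify `query` by the label of its closest stored instance.

    `instances` is a list of (feature_vector, label) pairs; this is the
    1-nearest-neighbor rule that instance-based learning builds on.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(instances, key=lambda inst: distance(inst[0], query))
    return label

def ib2_style_filter(stream):
    """Storage reduction: store an incoming instance only when the
    instances kept so far would misclassify it."""
    stored = []
    for features, label in stream:
        if not stored or nearest_neighbor_predict(stored, features) != label:
            stored.append((features, label))
    return stored
```

On easy regions of the instance space this keeps very few instances, which is the storage saving the abstract describes; noisy instances are exactly the ones that tend to be misclassified and therefore stored, which is why the paper adds a significance test on top.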


Machine Learning | 1992

The Utility of Knowledge in Inductive Learning

Michael J. Pazzani; Dennis F. Kibler

In this paper, we demonstrate how different forms of background knowledge can be integrated with an inductive method for generating function-free Horn clause rules. Furthermore, we evaluate, both theoretically and empirically, the effect that these forms of knowledge have on the cost and accuracy of learning. Lastly, we demonstrate that a hybrid explanation-based and inductive learning method can advantageously use an approximate domain theory, even when this theory is incorrect and incomplete.


International Conference on Machine Learning | 1987

Learning Representative Exemplars of Concepts: An Initial Case Study

Dennis F. Kibler; David W. Aha

The goal of our research is to understand the power and appropriateness of exemplar-based representations and their associated acquisition methods. As part of our initial study, we present three methods for forming exemplar representations. The methods are applied to a natural data base of patient diagnoses. These methods are evaluated by the quality and size of their resulting representations as well as the computational cost for forming their representations. In particular we show that small numbers of stored examples yield accuracy rates of about 80%.


Computational Intelligence | 1989

Instance-based prediction of real-valued attributes

Dennis F. Kibler; David W. Aha; Marc K. Albert

Instance-based representations have been applied to numerous classification tasks with some success. Most of these applications involved predicting a symbolic class based on observed attributes. This paper presents an instance-based method for predicting a numeric value based on observed attributes. We prove that, given enough instances, if the numeric values are generated by continuous functions with bounded slope, then the predicted values are accurate approximations of the actual values. We demonstrate the utility of this approach by comparing it with a standard approach for value prediction. The instance-based approach requires neither ad hoc parameters nor background knowledge.
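The simplest instance-based numeric predictor averages the stored values of the k nearest neighbors. The sketch below uses scalar inputs and an unweighted mean for brevity; distance-weighted averaging is a common refinement, and the paper's method and parameters are not reproduced here.

```python
def knn_regress(instances, query, k=3):
    """Predict a numeric value as the mean of the values stored with
    the k nearest instances.

    `instances` is a list of (x, y) pairs with scalar x for simplicity.
    """
    neighbors = sorted(instances, key=lambda p: abs(p[0] - query))[:k]
    return sum(y for _, y in neighbors) / len(neighbors)
```

If the target values come from a continuous function with bounded slope, points close in x have close y values, which is the intuition behind the approximation guarantee stated in the abstract.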


International Conference on Functional Programming | 1981

Parallel interpretation of logic programs

John S. Conery; Dennis F. Kibler

Logic programs offer many opportunities for parallelism. We present an abstract model that exploits the parallelism due to nondeterministic choices in a logic program. A working interpreter based on this model is described, along with variants of the basic model that are capable of exploiting other sources of parallelism. We conclude with a discussion of our plans for experimenting with the various models, plans which we hope will lead eventually to a multi-processor machine.


SIGKDD Explorations | 2000

The UCI KDD archive of large data sets for data mining research and experimentation

Stephen D. Bay; Dennis F. Kibler; Michael J. Pazzani; Padhraic Smyth

Advances in data collection and storage have allowed organizations to create massive, complex and heterogeneous databases, which have stymied traditional methods of data analysis. This has led to the development of new analytical tools that often combine techniques from a variety of fields such as statistics, computer science, and mathematics to extract meaningful knowledge from the data. To support research in this area, UC Irvine has created the UCI Knowledge Discovery in Databases (KDD) Archive (http://kdd.ics.uci.edu), which is a new online archive of large and complex data sets that encompasses a wide variety of data types, analysis tasks, and application areas. This article describes the objectives and philosophy of the UCI KDD Archive. We draw parallels with the development of the UCI Machine Learning Repository and its effect on the Machine Learning community.


New Generation Computing | 1985

AND parallelism and nondeterminism in logic programs

John S. Conery; Dennis F. Kibler

This paper defines an abstract interpreter for logic programs based on a system of asynchronous, independent processors which communicate only by passing messages. Each logic program is automatically partitioned and its pieces distributed to available processors. This approach permits two distinct forms of parallelism. OR parallelism arises from evaluating nondeterministic choices simultaneously. AND parallelism arises when a computation involves independent, but necessary, subcomputations. Algorithms like quicksort, which follow a divide and conquer approach, usually exhibit this form of parallelism. These two forms of parallelism are conjointly achieved by the parallel interpreter.
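OR parallelism, as described above, evaluates the alternative clauses matching a goal simultaneously. The toy sketch below does this with a thread pool over a ground fact base; `match_fact` is a drastically simplified matcher (variables only in the goal, capitalized atoms treated as variables), not the paper's message-passing interpreter.

```python
from concurrent.futures import ThreadPoolExecutor

def match_fact(goal, fact):
    """Matching stripped down to variables-in-goal versus ground facts;
    returns a list of variable bindings (empty on mismatch)."""
    if len(goal) != len(fact):
        return []
    binding = {}
    for g, f in zip(goal, fact):
        if g.isupper():  # treat capitalized atoms as variables
            if binding.setdefault(g, f) != f:
                return []
        elif g != f:
            return []
    return [binding]

def or_parallel_solve(goal, clauses, solve_branch):
    """OR parallelism: each nondeterministic clause choice becomes a
    task, and every successful branch contributes its solutions."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(solve_branch, goal, c) for c in clauses]
        solutions = []
        for f in futures:
            solutions.extend(f.result())
    return solutions
```

AND parallelism would instead run the independent subgoals of a single clause body concurrently, which is the divide-and-conquer case (e.g. quicksort) the abstract mentions.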


Biological Cybernetics | 1983

A Boolean complete neural model of adaptive behavior

Steven Hampson; Dennis F. Kibler

A multi-layered neural assembly is developed which has the capability of learning arbitrary Boolean functions. Though the model neuron is more powerful than those previously considered, assemblies of neurons are needed to detect non-linearly separable patterns. Algorithms for learning at the neuron and assembly level are described. The model permits multiple output systems to share a common memory. Learned evaluation allows sequences of actions to be organized. Computer simulations demonstrate the capabilities of the model.
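The need for assemblies rather than single neurons is clearest on XOR, the classic non-linearly separable pattern: no single threshold unit computes it, but a two-layer assembly does. The units below are hand-wired for illustration, whereas the paper's contribution is algorithms that learn such assemblies.

```python
def threshold_unit(weights, bias, inputs):
    """A model neuron: fires (1) when the weighted input exceeds bias."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > bias else 0

def xor_assembly(x1, x2):
    """XOR via a two-layer assembly: OR and AND in the first layer,
    then 'OR but not AND' in the second."""
    h1 = threshold_unit([1, 1], 0.5, [x1, x2])     # OR
    h2 = threshold_unit([1, 1], 1.5, [x1, x2])     # AND
    return threshold_unit([1, -1], 0.5, [h1, h2])  # OR and not AND
```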


Machine Learning | 1986

Experimental Goal Regression: A Method for Learning Problem-Solving Heuristics

Bruce W. Porter; Dennis F. Kibler

This research examines the process of learning problem solving with minimal requirements for a priori knowledge and teacher involvement. Experience indicates that knowledge about the problem solving task can be used to improve problem solving performance. This research addresses the issues of what knowledge is useful, how it is applied during problem solving, and how it can be acquired. For each operator used in the problem solving domain, knowledge is incrementally learned concerning why it is useful, when it is applicable, and what transformation it performs. The method of experimental goal regression is introduced for improving the learning rate by approximating the results of analytic learning. The ideas are formalized in an algorithm for learning and problem solving and demonstrated with examples from the domains of simultaneous linear equations and symbolic integration.


BMC Bioinformatics | 2005

Using hexamers to predict cis-regulatory motifs in Drosophila

Bob Y. Chan; Dennis F. Kibler

Background: Cis-regulatory modules (CRMs) are short stretches of DNA that help regulate gene expression in higher eukaryotes. They have been found up to 1 megabase away from the genes they regulate and can be located upstream, downstream, and even within their target genes. Due to the difficulty of finding CRMs using biological and computational techniques, even well-studied regulatory systems may contain CRMs that have not yet been discovered.

Results: We present a simple, efficient method (HexDiff) based only on hexamer frequencies of known CRMs and non-CRM sequence to predict novel CRMs in regulatory systems. On a data set of 16 gap and pair-rule genes containing 52 known CRMs, predictions made by HexDiff had a higher correlation with the known CRMs than several existing CRM prediction algorithms: Ahab, Cluster Buster, MSCAN, MCAST, and LWF. After combining the results of the different algorithms, 10 putative CRMs were identified and are strong candidates for future study. The hexamers used by HexDiff to distinguish between CRMs and non-CRM sequence were also analyzed and were shown to be enriched in regulatory elements.

Conclusion: HexDiff provides an efficient and effective means for finding new CRMs based on known CRMs, rather than known binding sites.
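The underlying idea of hexamer-frequency discrimination can be sketched with a log-odds score comparing CRM-derived and background hexamer frequencies over a candidate window. This is a simplified, hypothetical stand-in for HexDiff's scoring, not the published method; the pseudocount `floor` is an assumption.

```python
import math
from collections import Counter

def hexamer_freqs(seqs, k=6):
    """Relative frequencies of all length-k substrings in a set of sequences."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    total = sum(counts.values()) or 1
    return {h: c / total for h, c in counts.items()}

def score_window(window, crm_freqs, bg_freqs, k=6, floor=1e-6):
    """Log-odds of a candidate window under CRM versus background
    hexamer frequencies; positive scores suggest CRM-like sequence."""
    score = 0.0
    for i in range(len(window) - k + 1):
        h = window[i:i + k]
        score += math.log(crm_freqs.get(h, floor) / bg_freqs.get(h, floor))
    return score
```

Sliding `score_window` along a regulatory region and keeping the high-scoring windows gives a minimal version of CRM prediction from known CRMs alone, without any known binding sites.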

Collaboration


Dennis F. Kibler's top co-authors:

Steven Hampson (University of California)
David Ruby (University of California)
Bruce W. Porter (University of Texas at Austin)
David W. Aha (United States Naval Research Laboratory)
Piew Datta (University of California)
Yuh-Jyh Hu (University of California)
Marc K. Albert (University of California)