
Publication


Featured research published by Michael Lebowitz.


Machine Learning | 1987

Experiments with Incremental Concept Formation: UNIMEM

Michael Lebowitz

Learning by observation involves the automatic creation of categories that summarize experience. In this paper we present UNIMEM, an artificial intelligence system that learns by observation. UNIMEM is a robust program that can be run on many domains with real-world problem characteristics such as uncertainty, incompleteness, and large numbers of examples. We give an overview of the program that illustrates several key elements, including the automatic creation of non-disjoint concept hierarchies that are evaluated over time. We then describe several experiments that we have carried out with UNIMEM, including tests on different domains (universities, Congressional voting records, and terrorist events) and an examination of the effect of varying UNIMEM's parameters on the resulting concept hierarchies. Finally we discuss future directions for our work with the program.
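
Below is a minimal Python sketch of the incremental, observation-driven flavor of concept formation the abstract describes. The feature representation, the MIN_SHARED threshold, and the matching rule are all illustrative assumptions; the actual UNIMEM also scores features for confidence and maintains non-disjoint hierarchies, which this toy omits.

```python
# Illustrative sketch only, not Lebowitz's actual algorithm.
MIN_SHARED = 2  # features two instances must share to justify a new concept

class Concept:
    def __init__(self, features):
        self.features = dict(features)   # summary: feature -> value
        self.instances = []

def shared(a, b):
    return {k: v for k, v in a.items() if b.get(k) == v}

def incorporate(memory, pending, instance):
    """Process one observation: file it under a matching concept,
    generalize it with a pending instance, or hold it as pending."""
    for concept in memory:
        if shared(concept.features, instance) == concept.features:
            concept.instances.append(instance)   # instance fits the summary
            return
    for other in pending:
        common = shared(other, instance)
        if len(common) >= MIN_SHARED:            # enough overlap: generalize
            c = Concept(common)
            c.instances = [other, instance]
            memory.append(c)
            pending.remove(other)
            return
    pending.append(instance)

memory, pending = [], []
for uni in [{"location": "urban", "size": "large", "control": "private"},
            {"location": "urban", "size": "large", "control": "public"},
            {"location": "rural", "size": "small", "control": "private"}]:
    incorporate(memory, pending, uni)

for c in memory:
    print(c.features, "covers", len(c.instances), "instances")
# -> {'location': 'urban', 'size': 'large'} covers 2 instances
```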


Cognitive Science | 1983

Generalization From Natural Language Text

Michael Lebowitz

Generalization and memory are part of natural language understanding. This paper describes a model of generalization that is part of a system for language understanding, the Integrated Partial Parser (IPP). The generalization process includes the retrieval of relevant examples from long-term memory so that the concepts to be created can be determined when new stories are read. Generalizations are analyzed as to their critical parts and evaluated in light of later evidence. The knowledge structures used and a number of examples of the system in operation are presented.
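
One idea in this abstract, evaluating generalizations in light of later evidence, lends itself to a small sketch. The counters and the reliability test below are invented for illustration and are not IPP's actual bookkeeping.

```python
# Hypothetical sketch: strengthen or weaken a generalization as new
# stories arrive. Thresholds and structures are assumptions.
class Generalization:
    def __init__(self, features):
        self.features = features          # e.g. {"event": "bombing", ...}
        self.confirmed = 0
        self.contradicted = 0

    def matches(self, story):
        return all(story.get(k) == v for k, v in self.features.items())

    def update(self, story):
        # Only stories sharing some feature bear on this generalization.
        if any(story.get(k) == v for k, v in self.features.items()):
            if self.matches(story):
                self.confirmed += 1
            else:
                self.contradicted += 1

    def reliable(self):
        return self.confirmed >= 3 and self.confirmed > 2 * self.contradicted

g = Generalization({"event": "bombing", "target": "embassy"})
for story in [{"event": "bombing", "target": "embassy"},
              {"event": "bombing", "target": "embassy"},
              {"event": "bombing", "target": "airport"},
              {"event": "bombing", "target": "embassy"}]:
    g.update(story)
print(g.reliable())  # True: mostly confirmed by later stories
```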


Cognitive Science | 1986

Integrated learning: Controlling explanation

Michael Lebowitz

Similarity-based learning, which involves largely structural comparisons of instances, and explanation-based learning, a knowledge-intensive method for analyzing instances to build generalized schemata, are two major inductive learning techniques in use in Artificial Intelligence. In this paper, we propose a combination of the two methods, applying explanation-based techniques during the course of similarity-based learning. For domains lacking detailed explanatory rules, this combination can achieve the power of explanation-based learning without some of the computational problems that can otherwise arise. We show how the ideas of predictability and interest can be particularly valuable in this context. We include an example of the computer program UNIMEM applying explanation to a generalization formed using similarity-based methods.
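
A minimal sketch of the proposed combination, assuming a toy one-step domain theory: form a generalization by similarity, then keep only the features a rule can account for. The rules and features are invented; UNIMEM's predictability and interest machinery is considerably richer.

```python
# Toy rules: feature -> the feature that causally explains it (assumed).
RULES = {
    ("tuition", "high"): ("control", "private"),
    ("selectivity", "high"): ("tuition", "high"),
}

def explainable(feature, generalization):
    """Keep a feature if a rule derives it from another feature in the
    same generalization (a one-step explanation)."""
    cause = RULES.get(feature)
    return cause is not None and cause in generalization

def prune_by_explanation(generalization):
    kept = {f for f in generalization if explainable(f, generalization)}
    return kept, generalization - kept   # unexplained: possible coincidences

g = {("control", "private"), ("tuition", "high"),
     ("selectivity", "high"), ("mascot", "lion")}
kept, dropped = prune_by_explanation(g)
print("kept:", kept)        # tuition and selectivity have explanations
print("dropped:", dropped)  # the coincidental mascot feature, among others
```

A fuller treatment would retain root causes such as ('control', 'private'); the point here is only the explain-then-prune flow.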


Artificial Intelligence | 1983

Memory-based parsing

Michael Lebowitz

Robust text understanding systems can be developed by focusing on the application of memory-based parsing techniques. This paper describes an experiment in extending these techniques as far as possible. Described here are the parsing methods used by the Integrated Partial Parser (IPP), a computer system designed to read and generalize from large numbers of news stories. These methods include top-down, …
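
The top-down, expectation-driven style the abstract begins to describe can be hinted at with a toy: an interesting word triggers an event frame whose open slots become expectations that surrounding words can fill. The lexicon, frames, and word types below are invented for illustration and are far simpler than IPP's.

```python
# Hedged sketch of expectation-driven analysis; all entries are assumed.
LEXICON = {"kidnapped": {"frame": "KIDNAPPING",
                         "expect": {"victim": "PERSON", "actor": "GROUP"}}}
WORD_TYPES = {"gunmen": "GROUP", "businessman": "PERSON"}

def fill(frame, word):
    """Let a word satisfy the first open expectation its type matches."""
    wtype = WORD_TYPES.get(word)
    for slot, needed in list(frame["expect"].items()):
        if wtype == needed:
            frame["slots"][slot] = word
            del frame["expect"][slot]
            return

def parse(words):
    frame, buffer = None, []
    for w in words:
        if w in LEXICON:                        # trigger word: build a frame
            entry = LEXICON[w]
            frame = {"event": entry["frame"], "slots": {},
                     "expect": dict(entry["expect"])}
            for b in buffer:                    # earlier words may fill slots
                fill(frame, b)
        elif frame is not None:
            fill(frame, w)                      # later words meet expectations
        else:
            buffer.append(w)
    return frame

print(parse("gunmen kidnapped a businessman yesterday".split()))
# {'event': 'KIDNAPPING',
#  'slots': {'actor': 'gunmen', 'victim': 'businessman'}, 'expect': {}}
```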


National Conference on Artificial Intelligence | 1983

RESEARCHER: An Overview

Michael Lebowitz

Described in this paper is RESEARCHER, a computer system being developed at Columbia that reads natural language text in the form of patent abstracts and creates a permanent long-term memory based on concepts generalized from these texts, forming an intelligent information system. This paper is intended to give an overview of RESEARCHER. We describe briefly the four main areas dealt with in its design: 1) knowledge representation, where a canonical scheme for representing physical objects has been developed; 2) memory-based text processing; 3) generalization and generalization-based memory organization, which treats concept formation as an integral part of understanding; and 4) generalization-based question answering.


International Conference on Machine Learning | 1988

Deferred commitment in UNIMEM: waiting to learn

Michael Lebowitz

Incremental learning keeps the concepts learned so far from a continuous input stream efficiently available. A problem for such systems is that they are often subject to order effects, or simply need to wait to make decisions during processing. In this paper we describe a deferred commitment version of UNIMEM that often suspends processing of an input until a decision about it can be made more reliably. Specifically, when the definition of a concept is in flux, the program will wait until it has been clarified. This method is designed as a bridge between incremental and non-incremental methods. We illustrate the system's performance on examples from the university and census domains.
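
A sketch of a deferred-commitment loop under invented criteria: an instance is held back when no concept wins by a clear margin, and retried once clearer instances have refined the concept summaries. The margin test and summary update are assumptions, not UNIMEM's actual tests.

```python
# Sketch only: deferral criteria and refinement are assumed.
MARGIN = 0.25   # the best concept must beat the runner-up by this much

def score(summary, inst):
    if not summary:
        return 0.0
    return sum(inst.get(k) == v for k, v in summary.items()) / len(summary)

def try_place(concepts, inst):
    """Return the winning concept, or None if the choice is ambiguous."""
    ranked = sorted(concepts, key=lambda c: score(c["summary"], inst),
                    reverse=True)
    top = score(ranked[0]["summary"], inst)
    runner = score(ranked[1]["summary"], inst) if len(ranked) > 1 else 0.0
    return ranked[0] if top - runner >= MARGIN else None

def process(stream, concepts):
    deferred = []
    for inst in stream:
        c = try_place(concepts, inst)
        if c:
            c["members"].append(inst)
            c["summary"].update(inst)    # crude refinement of the summary
        else:
            deferred.append(inst)        # wait: the decision is unclear
    # Second pass: summaries may have changed, so retry deferred instances.
    return [i for i in deferred if not _retry(concepts, i)]

def _retry(concepts, inst):
    c = try_place(concepts, inst)
    if c:
        c["members"].append(inst)
        return True
    return False

A = {"summary": {"control": "private"}, "members": []}
B = {"summary": {"control": "public"}, "members": []}
left = process([{"size": "small"},                        # ambiguous: deferred
                {"control": "private", "size": "small"}], # clear: refines A
               [A, B])
print(left)  # [] -- the deferred instance could be placed on the retry
```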


Cognitive Science | 1983

Classifying Numeric Information for Generalization

Michael Lebowitz

Learning programs that try to generalize from real-world examples may have to deal with many different kinds of data. Continuous numeric data may cause problems for algorithms that search for identical aspects of examples. This problem can be surmounted by categorizing the numeric data. However, this process has problems of its own. In this paper we look at the need for categorizing numeric data, and several methods for doing so. We concentrate on the use of a heuristic, looking for gaps, that has been implemented in the UNIMEM computer system. An example is presented of this algorithm categorizing data about states of the United States.
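
The gap-seeking heuristic named above can be sketched directly: sort the values and cut wherever the gap between neighbors is unusually large. The specific threshold (a gap exceeding the mean gap) is our assumption rather than UNIMEM's exact test, and the numbers are illustrative.

```python
# Sketch of a gap heuristic; the mean-gap threshold is an assumption.
def gap_categories(values, factor=1.0):
    """Split sorted values wherever a gap exceeds `factor` times the
    average gap, yielding categories of similar magnitude."""
    vals = sorted(values)
    gaps = [b - a for a, b in zip(vals, vals[1:])]
    if not gaps:
        return [vals]
    mean_gap = sum(gaps) / len(gaps)
    categories, current = [], [vals[0]]
    for v, g in zip(vals[1:], gaps):
        if g > factor * mean_gap:
            categories.append(current)   # big gap: start a new category
            current = []
        current.append(v)
    categories.append(current)
    return categories

# E.g. state populations in millions (illustrative numbers):
print(gap_categories([0.6, 0.9, 1.1, 4.5, 5.0, 5.3, 19.5, 21.0]))
# -> [[0.6, 0.9, 1.1], [4.5, 5.0, 5.3], [19.5, 21.0]]
```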


Machine Learning | 1990

The utility of similarity-based learning in a world needing explanation

Michael Lebowitz

A large portion of the research in machine learning has involved a paradigm of comparing many examples and analyzing them in terms of similarities and differences. The assumption is made, usually tacitly, that the resulting generalizations will have applicability to new examples. While such research has been very successful, it is by no means obvious why similarity-based generalizations should be useful, since they may simply reflect coincidences. Proponents of explanation-based learning—a knowledge-intensive method of examining single examples to derive generalizations based on underlying causal models—could contend that their methods are more fundamentally grounded, and that there is no need to look for similarities across examples. In this chapter, we present the issues, and then show why similarity-based methods are important. We include a description of the similarity-based system UNIMEM and present four reasons why robust machine learning must involve the integration of similarity-based and explanation-based methods. We argue that: (1) it may not always be practical or even possible to determine a causal explanation; (2) similarity usually implies causality; (3) similarity-based generalizations can be refined over time; (4) similarity-based and explanation-based methods complement each other in important ways.


Machine Learning: A Guide to Current Research | 1986

Complex learning environments: hierarchies and the use of explanation

Michael Lebowitz

At Columbia, our work on concept learning covers a wide range of topics involving complex learning domains. In this summary, we briefly describe three areas of recent research: generalizing hierarchically structured representations; an experiment in explanation-based learning; and first steps at integrating explanation and similarity-based methods.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 1983

Intelligent information systems

Michael Lebowitz

Natural language processing techniques developed for Artificial Intelligence programs can aid in constructing powerful information retrieval systems in at least two areas. Automatic construction of new concepts allows a large body of information to be organized compactly and in a manner that allows a wide range of queries to be answered. Also, using natural language processing techniques to conceptually analyze the documents being stored in a system greatly expands the effectiveness of queries about given pieces of text. However, only robust conceptual analysis methods are adequate for such systems. This paper will discuss approaches to both concept learning, in the form of Generalization-Based Memory, and powerful, robust text processing achieved by Memory-Based Understanding. These techniques have been implemented in the computer systems IPP, a program that reads, remembers and generalizes from news stories about terrorism, and RESEARCHER, currently in the prototype stage, which operates in a very different domain (technical texts, patent abstracts in particular).
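
A hedged sketch of the Generalization-Based Memory idea as it bears on retrieval: documents are stored under concept nodes formed from shared features, and a query is answered from the most specific concept it fits. The structures are invented and much simpler than those of IPP or RESEARCHER.

```python
# Toy concept-indexed retrieval; all structures are assumptions.
class Node:
    def __init__(self, features):
        self.features = features      # the generalization this node stores
        self.docs = []
        self.children = []

def subsumes(general, specific):
    return all(specific.get(k) == v for k, v in general.items())

def retrieve(node, query):
    """Descend to the most specific concept the query fits."""
    for child in node.children:
        if subsumes(child.features, query):
            return retrieve(child, query)
    return node.docs

root = Node({})
bombing = Node({"event": "bombing"})
embassy = Node({"event": "bombing", "target": "embassy"})
root.children.append(bombing)
bombing.children.append(embassy)
bombing.docs = ["story-17", "story-23"]
embassy.docs = ["story-42"]

print(retrieve(root, {"event": "bombing", "target": "embassy"}))
# ['story-42']
print(retrieve(root, {"event": "bombing", "target": "airport"}))
# ['story-17', 'story-23']  -- falls back to the more general concept
```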
