
Publication


Featured research published by Lev Goldfarb.


Pattern Recognition | 1984

A unified approach to pattern recognition

Lev Goldfarb

Abstract: The paper is an outline of a new approach to pattern recognition developed by the author; a fuller introduction to the approach will appear soon. Within the proposed framework, the two principal approaches to pattern recognition, the vector and the syntactic, are unified.


Pattern Recognition | 1990

On the foundations of intelligent processes—I. an evolving model for pattern learning

Lev Goldfarb

Abstract: A general adaptive model unifying existing models for pattern learning is proposed. The model, in addition to preserving the merits of the geometric and syntactic approaches to pattern recognition, has decisive advantages over them. It can be viewed as a far-reaching generalization of the perceptron, or neural net, models, in which the vector representation and the associated vector operations are replaced by a more general structural representation and the corresponding structural operations. The basis of the model is the concept of a transformation system, which is a generalization of Thue (Post production) systems. Parametric distance functions in transformation systems are introduced; these are generalizations of the weighted Levenshtein (edit) distances to more general structured objects. A learning model for transformation systems, unifying many existing models (including that of neural nets), is proposed. The model also suggests how various propositional object (class) descriptions might be generated from the outputs of the learning processes: these descriptions represent a "translation" of information encoded in the nonpropositional "language" of the corresponding transformation system, which represents the environment, into the chosen logical (propositional) language, whose semantics is then defined by the "translation". In the light of the metric model, intelligence emerges as based on simple arithmetic processes: first, those related to the optimal distance computation, and, second, "counting" and comparing the counts for various "important" features detected at the learning stage (arithmetic predicates).
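
The parametric distance functions introduced here generalize the weighted Levenshtein (edit) distance, so the classical special case is worth seeing concretely. Below is a minimal sketch of a weighted edit distance computed by dynamic programming; the operation weights (ins, dele, sub) and their default values are illustrative assumptions, not values from the paper.

```python
# Weighted Levenshtein (edit) distance: the classical special case that the
# parametric distances over transformation systems generalize. The weights
# below are illustrative assumptions, not values from the paper.

def weighted_edit_distance(a: str, b: str, ins=1.0, dele=1.0, sub=1.5) -> float:
    """Minimum total cost of insertions, deletions, and substitutions
    turning string a into string b, by dynamic programming."""
    m, n = len(a), len(b)
    # d[i][j] = cheapest way to transform a[:i] into b[:j]
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * dele
    for j in range(1, n + 1):
        d[0][j] = j * ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 0.0 if a[i - 1] == b[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + dele,       # delete a[i-1]
                          d[i][j - 1] + ins,        # insert b[j-1]
                          d[i - 1][j - 1] + match)  # substitute or keep
    return d[m][n]

print(weighted_edit_distance("abab", "baba"))  # 2.0: one deletion + one insertion
```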


Pattern Recognition | 1992

What is distance and why do we need the metric model for pattern learning?

Lev Goldfarb

Abstract: The concept of distance, its role in pattern recognition, and some advantages of the new model for pattern learning recently proposed by the author are discussed. The model's universality, its flexibility, and its ability to intrinsically connect the low-level process that selects the primitives for the pattern representation with the higher-level recognition process make it clearly superior to other models proposed so far. The fundamentally new analytical feature of the model, which allows the learning machine to reconfigure itself efficiently, is the introduction of continuity into the classical discrete computational model.
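
The closing claim, that continuity is introduced into the classical discrete computational model, can be made concrete in a toy case. The sketch below is our illustration, not the paper's construction: for two fixed objects, the optimal transformation cost is a minimum over finitely many operation sequences, and hence a continuous, piecewise-linear function of the operation weights.

```python
# Toy illustration of the continuity claim (our assumed example):
# transforming "a" into "b" costs either one substitution or one deletion
# plus one insertion, whichever is cheaper under the current weights.

def d(sub: float, ins: float = 1.0, dele: float = 1.0) -> float:
    return min(sub, ins + dele)

for w in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"substitution weight {w:.1f} -> distance {d(w):.1f}")
# The distance varies smoothly with the weight and flattens once the
# substitution becomes dearer than delete-plus-insert; this continuous
# dependence on the weights is what lets the learning machine reconfigure
# itself by small adjustments rather than by purely discrete search.
```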


Pattern Recognition Letters | 1995

Can a vector space based learning model discover inductive class generalization in a symbolic environment?

Lev Goldfarb; John M. Abela; Virendra C. Bhavsar; Vithal Narasinha Kamat

Abstract: We outline a general framework for inductive learning based on the recently proposed evolving transformation system model. The mathematical foundations of this framework include two basic components: a set of operations (on objects) and the corresponding geometry defined by means of these operations. According to the framework, to perform inductive learning in a symbolic environment, the set of operations (class features) may need to be dynamically updated, and this requires that the geometric component allow for an evolving topology. In symbolic systems, as defined in this paper, the geometric component allows for a dynamic change in topology, whereas finite-dimensional numeric systems (vector spaces) can essentially have only one natural topology. This fact should form the basis of a complete formal proof that, in a symbolic setting, vector-space-based models, e.g., artificial neural networks, cannot capture inductive generalization. Since the presented argument indicates that the symbolic learning process is more powerful than the numeric one, it appears that only the former should properly be called an inductive learning process.
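
To make the evolving-topology point tangible, here is a minimal sketch (our illustration under assumed operations and costs, not the paper's formalism). The distance between two strings is taken as the least-cost rewriting under a given operation set, computed with Dijkstra's algorithm; adding one learned composite operation changes the distance between the same pair of objects, reshaping the geometry of the symbolic space in a way a fixed vector-space norm cannot.

```python
# Distances induced by an evolving operation set (illustrative sketch).
import heapq

def distance(src: str, dst: str, ops, max_len: int = 8) -> float:
    """Least total cost of rewriting src into dst with the given
    operations, found by Dijkstra's algorithm over strings."""
    pq, seen = [(0.0, src)], {}
    while pq:
        cost, s = heapq.heappop(pq)
        if s == dst:
            return cost
        if s in seen and seen[s] <= cost:
            continue
        seen[s] = cost
        for op_cost, apply_op in ops:
            for t in apply_op(s):
                if len(t) <= max_len:
                    heapq.heappush(pq, (cost + op_cost, t))
    return float("inf")

def deletions(s):
    return {s[:i] + s[i + 1:] for i in range(len(s))}

def insertions(s, alphabet="ab"):
    return {s[:i] + c + s[i:] for i in range(len(s) + 1) for c in alphabet}

basic = [(1.0, deletions), (1.0, insertions)]
print(distance("ab", "abababab", basic))          # 6.0: six unit insertions

# The learner adds a composite operation: append the chunk "ab" in one step.
chunk = [(1.0, lambda s: {s + "ab"})]
print(distance("ab", "abababab", basic + chunk))  # 3.0: the geometry changed
```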


ICAPR | 1999

Why Classical Models for Pattern Recognition are Not Pattern Recognition Models

Lev Goldfarb; Jaroslav Hook

In this paper we outline a simple explanation of why, we think, the classical, or vector-space-based (including artificial neural net), models for pattern recognition are fundamentally inadequate as such. This explanation of the inadequacy is based on a radically new understanding of the nature of inductive learning processes, which became possible only after a careful analysis of the axiomatic foundations of a new inductive learning model proposed by the first author in 1990, a model that overcomes the above limitations. The new model, the evolving transformation system, emerged from a 13-year-long attempt to find a mathematical framework that would unify the two main and structurally different approaches to pattern recognition: the vector-space-based and the syntactic. The decisive deficiency of the classical vector-space-based pattern recognition models, as it turns out, relates to the intrinsic inability of the underlying mathematical model, i.e., the normed vector space, to accommodate, during the learning process in a realistic environment, the discovery of the corresponding class distance function under a symbolic pattern representation, which is more general than the numeric one. Typically, such symbolic distance functions have very little to do with the very restricted, "Euclidean", class of distance functions that, due to the underlying algebraic structure of the vector space, are unavoidably associated with this form of pattern representation. In other words, the class of symbolic distance functions is incomparably larger than the class consistent with the vector space structure, and so the discovery and construction of the appropriate class distance function during the learning process simply cannot proceed in the vector space setting.
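
The size gap between the two classes of distance functions can be seen in miniature. The following sketch is our illustration, not the paper's argument: a fixed numeric encoding (here, letter counts) collapses structurally different strings onto the same point, so every distance derived from the vector-space structure vanishes on the pair, while even the plain edit distance separates them.

```python
# Euclidean distance on a fixed numeric encoding vs. a symbolic distance
# (illustrative sketch; the letter-count encoding is our assumption).
from math import dist  # math.dist: Euclidean distance between points

def counts(s: str, alphabet: str = "ab") -> list[int]:
    """Fixed-dimensional numeric encoding: occurrence counts per letter."""
    return [s.count(c) for c in alphabet]

def edit_distance(a: str, b: str) -> int:
    """Plain (unit-cost) Levenshtein distance, rolling-array DP."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return d[n]

x, y = "aaabbb", "ababab"
print(dist(counts(x), counts(y)))  # 0.0: the encoding cannot tell them apart
print(edit_distance(x, y))         # 2: the symbolic structure differs
```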


Pattern Recognition | 1992

Primitive pattern learning

Tony Y. T. Chan; Lev Goldfarb

Abstract: A new approach to the feature detection problem, i.e., learning "useful" primitive features from raw images, is proposed. The "useful" features are defined within the training environment as those that allow the learning agent (learning system) to form object representations sufficient for subsequent object recognition. In other words, the "useful" features detected are precisely the discriminating ones.
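
As a toy illustration of what "useful" means here (our sketch with made-up string classes, not the paper's image experiments), candidate primitives can be short substrings, and a primitive counts as useful exactly when its occurrence counts cleanly separate the two training classes.

```python
# Scoring candidate primitive features by how well they discriminate
# two toy training classes (illustrative sketch, assumed data).

class_a = ["aabb", "aaabbb", "aabbb"]   # runs of a's followed by b's
class_b = ["abab", "ababab", "baba"]    # alternating strings

def feature(s: str, pattern: str) -> int:
    """A primitive feature: non-overlapping occurrence count of a substring."""
    return s.count(pattern)

for p in ["a", "b", "aa", "bb", "ab", "ba"]:
    va = [feature(s, p) for s in class_a]
    vb = [feature(s, p) for s in class_b]
    separated = max(va) < min(vb) or max(vb) < min(va)
    print(f"{p!r}: A={va} B={vb} -> {'useful' if separated else 'not discriminating'}")
# Here 'aa', 'bb', and 'ba' come out useful (each separates the classes),
# while raw letter counts 'a' and 'b' do not.
```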


Digital and Optical Shape Representation and Pattern Recognition | 1988

Hybrid Associative Memories And Metric Data Models

Lev Goldfarb; Raj Verma

An approach to the design of associative memories and pattern recognition systems that efficiently utilizes hybrid architectures is illustrated. By associative memory we mean a database organization that supports retrieval by content and not only by name (or address), as is the case with practically all existing database systems. The approach is based on a general, metric, model for pattern recognition, which was developed to unify in a single model two basic approaches to pattern recognition, the geometric and the structural, while preserving the advantages of each. The metric model offers the designer complete freedom in the choice of both the object representation and the dissimilarity measure, and at the same time provides a single analytical framework for combining several object representations in a very efficient recognition scheme. It is our fervent hope that the paper will attract researchers interested in the development of associative memories or image recognition systems to experiment with various optical dissimilarity measures (between two images), the need for which becomes acute once the possibilities offered by the metric model are realized.
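
Retrieval by content in this sense can be sketched minimally. The example below is our illustration under stand-in choices: a Jaccard dissimilarity on character-bigram sets plays the role an optical dissimilarity measure between images would play, and the memory returns the stored object least dissimilar to the query rather than looking it up by address.

```python
# A content-addressable (associative) recall sketch with a pluggable
# dissimilarity measure (the bigram Jaccard measure is our stand-in).

def bigrams(s: str) -> set[str]:
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dissimilarity(x: str, y: str) -> float:
    """Jaccard distance between bigram sets (stand-in measure)."""
    bx, by = bigrams(x), bigrams(y)
    return 1.0 - len(bx & by) / len(bx | by)

class AssociativeMemory:
    def __init__(self, items):
        self.items = list(items)

    def recall(self, query: str) -> str:
        """Return the stored item nearest to the query by content."""
        return min(self.items, key=lambda item: dissimilarity(query, item))

memory = AssociativeMemory(["pattern", "recognition", "memory", "metric"])
print(memory.recall("recogniton"))  # 'recognition', despite the typo
```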


Archive | 1994

The Unified Learning Paradigm: A Foundation for AI

Lev Goldfarb; Sandeep Nigam


Archive | 2001

What Is a Structural Representation?

Lev Goldfarb; Oleg Golubitsky; Dmitry Korkin


Archive | 1996

Inductive class representation and its central role in pattern recognition

Lev Goldfarb

Collaboration


Dive into Lev Goldfarb's collaborations.

Top Co-Authors

Dmitry Korkin, Worcester Polytechnic Institute
Oleg Golubitsky, University of New Brunswick
Jaroslav Hook, University of New Brunswick
John M. Abela, University of New Brunswick
Raj Verma, University of Toronto
Sandeep Nigam, University of New Brunswick