Network


Bart Kosko's latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bart Kosko is active.

Publication


Featured research published by Bart Kosko.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1986

Fuzzy cognitive maps

Bart Kosko

Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.
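The causal propagation the abstract describes can be sketched as thresholded matrix iteration; the concept set, edge weights, and threshold below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical 3-concept FCM. E[i, j] is the fuzzy causal edge
# from concept i to concept j, a number in [-1, 1].
E = np.array([
    [ 0.0,  0.7, -0.4],   # concept 0 increases 1, decreases 2
    [ 0.0,  0.0,  0.8],
    [-0.5,  0.0,  0.0],
])

def step(state, E, threshold=0.5):
    """One round of causal propagation: weighted sum, then bivalent threshold."""
    return (state @ E > threshold).astype(float)

# Forward-chain from an initial scenario until the state repeats,
# i.e. until the FCM settles into a fixed point or limit cycle.
state = np.array([1.0, 0.0, 0.0])
seen = []
while tuple(state) not in seen:
    seen.append(tuple(state))
    state = step(state, E)
```

Backward chaining runs the same iteration through the transpose of E.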


IEEE Transactions on Systems, Man, and Cybernetics | 1988

Bidirectional associative memories

Bart Kosko

Stability and encoding properties of two-layer nonlinear feedback neural networks are examined. Bidirectionality is introduced in neural nets to produce two-way associative search for stored associations. The bidirectional associative memory (BAM) is the minimal two-layer nonlinear feedback network. The author proves that every n-by-p matrix M is a bidirectionally stable heteroassociative content-addressable memory for both binary/bipolar and continuous neurons. When the BAM neurons are activated, the network quickly evolves to a stable state of two-pattern reverberation, or resonance. The stable reverberation corresponds to a system energy local minimum. Heteroassociative information is encoded in a BAM by summing correlation matrices. The BAM storage capacity for reliable recall is roughly m < min(n, p). It is also shown that it is better on average to use bipolar (-1, 1) coding than binary (0, 1) coding.
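A minimal sketch of BAM encoding and recall along the lines of the abstract; the pattern pairs and dimensions are made up, and the tie-keeps-previous-state convention is a common BAM choice rather than something stated here:

```python
import numpy as np

def sgn(x, prev):
    """Bipolar threshold; ties keep the previous state (a common BAM convention)."""
    return np.where(x > 0, 1, np.where(x < 0, -1, prev))

# Hypothetical bipolar pattern pairs (A_i, B_i) with values in {-1, +1}.
A = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])
B = np.array([[1, 1, -1], [-1, 1, 1]])

# Encode by summing the bipolar correlation (outer-product) matrices.
M = sum(np.outer(a, b) for a, b in zip(A, B))

def recall(a, M, iters=10):
    """Two-way associative search: pass through M, then M.T,
    until the pair (a, b) reverberates unchanged (resonance)."""
    b = sgn(a @ M, np.ones(M.shape[1], dtype=int))
    for _ in range(iters):
        a_new = sgn(b @ M.T, a)
        b_new = sgn(a_new @ M, b)
        if np.array_equal(a_new, a) and np.array_equal(b_new, b):
            break
        a, b = a_new, b_new
    return a, b

a_out, b_out = recall(A[0], M)
```

Presenting a noisy version of A_i typically still settles into the stored pair, which is the content-addressable behavior the abstract proves.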


Applied Optics | 1987

Adaptive bidirectional associative memories.

Bart Kosko

Bidirectionality, forward and backward information flow, is introduced in neural networks to produce two-way associative search for stored stimulus-response associations (A(i),B(i)). Two fields of neurons, F(A) and F(B), are connected by an n x p synaptic matrix M. Passing information through M gives one direction, passing information through its transpose M(T) gives the other. Every matrix is bidirectionally stable for bivalent and for continuous neurons. Paired data (A(i),B(i)) are encoded in M by summing bipolar correlation matrices. The bidirectional associative memory (BAM) behaves as a two-layer hierarchy of symmetrically connected neurons. When the neurons in F(A) and F(B) are activated, the network quickly evolves to a stable state of two-pattern reverberation, or pseudoadaptive resonance, for every connection topology M. The stable reverberation corresponds to a system energy local minimum. An adaptive BAM allows M to rapidly learn associations without supervision. Stable short-term memory reverberations across F(A) and F(B) gradually seep pattern information into the long-term memory connections M, allowing input associations (A(i),B(i)) to dig their own energy wells in the network state space. The BAM correlation encoding scheme is extended to a general Hebbian learning law. Then every BAM adaptively resonates in the sense that all nodes and edges quickly equilibrate in a system energy local minimum. A sampling adaptive BAM results when many more training samples are presented than there are neurons in F(A) and F(B), but presented for brief pulses of learning, not allowing learning to fully or nearly converge. Learning tends to improve with sample size. Sampling adaptive BAMs can learn some simple continuous mappings and can rapidly abstract bivalent associations from several noisy gray-scale samples.


IEEE Transactions on Computers | 1994

Fuzzy systems as universal approximators

Bart Kosko

An additive fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. An additive fuzzy system approximates the function by covering its graph with fuzzy patches in the input-output state space and averaging patches that overlap. The fuzzy system computes a conditional expectation E[Y|X] if we view the fuzzy sets as random sets. Each fuzzy rule defines a fuzzy patch and connects commonsense knowledge with state-space geometry. Neural or statistical clustering systems can approximate the unknown fuzzy patches from training data. These adaptive fuzzy systems approximate a function at two levels. At the local level the neural system approximates and tunes the fuzzy rules. At the global level the rules or patches approximate the function.
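A rough sketch of the patch-covering idea, using a standard additive model with Gaussian if-part sets; the rule count, set width, and target function are all illustrative assumptions:

```python
import numpy as np

def sam(x, centers, then_parts, width=0.2):
    """Standard additive model: fire each rule on x, then average the
    overlapping patches. Output is the convex combination (centroid)
    of the then-part values, weighted by the if-part set values."""
    a = np.exp(-((x - centers) / width) ** 2)   # Gaussian if-part firings
    return a @ then_parts / a.sum()

# Cover the graph of f(x) = x**2 on [-1, 1] with 9 rule patches:
# "If x is near c_j then y is near c_j**2."
centers = np.linspace(-1, 1, 9)
then_parts = centers ** 2

xs = np.linspace(-1, 1, 101)
approx = np.array([sam(x, centers, then_parts) for x in xs])
max_err = np.max(np.abs(approx - xs ** 2))
```

Adding more rules (finer patches) drives the error down, which is the uniform-approximation claim in miniature.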


Information Sciences | 1986

Fuzzy entropy and conditioning

Bart Kosko

A new nonprobabilistic entropy measure is introduced in the context of fuzzy sets or messages. Fuzzy units, or fits, replace bits in a new framework of fuzzy information theory. An appropriate measure of entropy or fuzziness of messages is shown to be a simple ratio of distances: the distances between the fuzzy message and its nearest and farthest nonfuzzy neighbors. Fuzzy conditioning is examined as the degree of subsethood (submessagehood) of one fuzzy set or message in another. This quantity is shown to behave as a conditional probability in many contexts. It is also shown that the entropy of A is the degree to which A ∪ A^c is a subset of A ∩ A^c, an intuitive relationship that cannot occur in probability theory. This theory of subsethood is then shown to solve one of the major problems with Bayes-theorem learning and its variants—the problem of requiring that the space of alternatives be partitioned into disjoint exhaustive hypotheses. Any fuzzy subsets will do. However, a rough inverse relationship holds between the number and fuzziness of partitions and the information gained from experience. All results reduce to fuzzy cardinality.
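Both measures can be sketched in a few lines; the l1 metric and the min/sigma-count conventions below are standard fuzzy-set choices assumed for illustration:

```python
import numpy as np

def fuzzy_entropy(a):
    """Entropy of a fit vector a in [0,1]^n: the ratio of its l1 distances
    to the nearest and farthest nonfuzzy (binary) neighbors."""
    near = np.round(a)            # nearest vertex of the unit hypercube
    far = 1.0 - near              # farthest vertex
    return np.abs(a - near).sum() / np.abs(a - far).sum()

def subsethood(a, b):
    """Degree S(A, B) to which fuzzy set A is a subset of B:
    |A ∩ B| / |A|, with min as intersection and sigma-count as cardinality."""
    return np.minimum(a, b).sum() / a.sum()

A = np.array([0.3, 0.8])
union = np.maximum(A, 1 - A)      # A ∪ A^c
inter = np.minimum(A, 1 - A)      # A ∩ A^c
# The entropy of A equals the degree to which A ∪ A^c is a subset of A ∩ A^c.
assert np.isclose(fuzzy_entropy(A), subsethood(union, inter))
```

The midpoint fit vector (all 0.5) has entropy 1; any binary vector has entropy 0.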


International Journal of General Systems | 1990

Fuzziness vs. probability

Bart Kosko

Fuzziness is explored as an alternative to randomness for describing uncertainty. The new sets-as-points geometric view of fuzzy sets is developed. This view identifies a fuzzy set with a point in a unit hypercube and a nonfuzzy set with a vertex of the cube. Paradoxes of two-valued logic and set theory, such as Russell's paradox, correspond to the midpoint of the fuzzy cube. The fundamental questions of fuzzy theory—How fuzzy is a fuzzy set? How much is one fuzzy set a subset of another?—are answered geometrically with the Fuzzy Entropy Theorem, the Fuzzy Subsethood Theorem, and the Entropy-Subsethood Theorem. A new geometric proof of the Subsethood Theorem is given, a corollary of which is that the apparently probabilistic relative frequency n_A/N turns out to be the deterministic subsethood S(X, A), the degree to which the sample space X is contained in its subset A. So the frequency of successful trials is viewed as the degree to which all trials are successful. Recent Bayesian polemics against fuzzy ...
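The frequency-as-subsethood corollary can be checked on a toy Bernoulli sample; the trial data below are invented for illustration:

```python
import numpy as np

# N trials; A is the (nonfuzzy) set of successes inside the sample space X.
trials = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1], dtype=float)
X = np.ones_like(trials)          # the whole sample space as a fuzzy set

def subsethood(a, b):
    """S(A, B) = |A ∩ B| / |A| with min intersection and sigma-count."""
    return np.minimum(a, b).sum() / a.sum()

# S(X, A), the degree to which X is contained in A, equals the
# relative frequency n_A / N of success.
assert subsethood(X, trials) == trials.sum() / len(trials)
```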


International Journal of Approximate Reasoning | 1988

Hidden patterns in combined and adaptive knowledge networks

Bart Kosko

Uncertain causal knowledge is stored in fuzzy cognitive maps (FCMs). FCMs are fuzzy signed digraphs with feedback. The sign (+ or -) of FCM edges indicates causal increase or causal decrease. The fuzzy degree of causality is indicated by a number in [-1, 1]. FCMs learn by modifying their causal connections in sign and magnitude, structurally analogous to the way in which neural networks learn. An appropriate causal learning law for inductively inferring FCMs from time-series data is the differential Hebbian law, which modifies causal connections by correlating time derivatives of FCM node outputs. The differential Hebbian law contrasts with Hebbian output-correlation learning laws of adaptive neural networks. FCM nodes represent variable phenomena or fuzzy sets. An FCM node nonlinearly transforms weighted summed inputs into numerical output, again in analogy to a model neuron. Unlike expert systems, which are feedforward search trees, FCMs are nonlinear dynamical systems. FCM resonant states are limit cycles, or time-varying patterns. An FCM limit cycle or hidden pattern is an FCM inference. Experts construct FCMs by drawing causal pictures or digraphs. The corresponding connection matrices are used for inferencing. By additively combining augmented connection matrices, any number of FCMs can be naturally combined into a single knowledge network. The credibility w_i in [0, 1] of the ith expert is included in this learning process by multiplying the ith expert's augmented FCM connection matrix by w_i. Combining connection matrices is a simple type of adaptive inference. In general, connection matrices are modified by an unsupervised learning law, such as the differential Hebbian law.
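The credibility-weighted combination of expert FCMs can be sketched as weighted matrix addition; both expert matrices and the weights are invented for illustration, and the final renormalization into [-1, 1] is an assumption rather than something the abstract specifies:

```python
import numpy as np

# Two hypothetical expert FCMs over the same three concepts (in general
# each matrix is first augmented to a common concept set).
F1 = np.array([[0.0, 0.8,  0.0],
               [0.0, 0.0, -0.6],
               [0.5, 0.0,  0.0]])
F2 = np.array([[0.0, 0.4,  0.2],
               [0.0, 0.0, -1.0],
               [0.0, 0.3,  0.0]])

# Credibility weights w_i in [0, 1] scale each expert's connection matrix.
w = np.array([1.0, 0.5])

# Combine by weighted matrix addition, then rescale edges back into [-1, 1].
combined = w[0] * F1 + w[1] * F2
combined /= np.abs(combined).max()
```

The combined matrix then drives the same thresholded propagation as any single-expert FCM.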


IEEE Transactions on Systems, Man, and Cybernetics | 1996

Fuzzy function approximation with ellipsoidal rules

Julie A. Dickerson; Bart Kosko

A fuzzy rule can have the shape of an ellipsoid in the input-output state space of a system. Then an additive fuzzy system approximates a function by covering its graph with ellipsoidal rule patches. It averages rule patches that overlap. The best fuzzy rules cover the extrema or bumps in the function. Neural or statistical clustering systems can approximate the unknown fuzzy rules from training data. Neural systems can then both tune these rules and add rules to improve the function approximation. We use a hybrid neural system that combines unsupervised and supervised learning to find and tune the rules in the form of ellipsoids. Unsupervised competitive learning finds the first-order and second-order statistics of clusters in the training data. The covariance matrix of each cluster gives an ellipsoid centered at the mean vector, or centroid, of the data cluster. The supervised neural system learns with gradient descent. It locally minimizes the mean-squared error of the fuzzy function approximation. In the hybrid system unsupervised learning initializes the gradient descent. The hybrid system tends to give a more accurate function approximation than does the lone unsupervised or supervised system. We found a closed-form model for the optimal rules when only the centroids of the ellipsoids change. We used numerical techniques to find the optimal rules in the general case.
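A rough sketch of the unsupervised stage, with plain k-means standing in for competitive learning; the data, cluster count, and seeding are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical input-output training data (x, f(x)) with noise.
x = rng.uniform(-1, 1, 400)
data = np.column_stack([x, np.sin(np.pi * x) + 0.05 * rng.standard_normal(400)])

# Simple k-means stands in for unsupervised competitive learning.
k = 6
idx = np.argsort(data[:, 0])[np.linspace(0, len(data) - 1, k).astype(int)]
centroids = data[idx]                      # spread initial centroids along x
for _ in range(20):
    labels = np.argmin(((data[:, None] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([
        data[labels == j].mean(0) if (labels == j).any() else centroids[j]
        for j in range(k)
    ])

# Each cluster's covariance matrix defines an ellipsoidal rule patch
# centered at that cluster's centroid in input-output space.
ellipsoids = [np.cov(data[labels == j].T, bias=True)
              for j in range(k) if (labels == j).any()]
```

In the hybrid system these centroids and covariances would then seed gradient descent on the mean-squared approximation error.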


Neural Networks for Computing | 2008

Differential Hebbian learning

Bart Kosko

The differential Hebbian law ė_ij = Ċ_i Ċ_j is examined as an alternative to the traditional Hebbian law ė_ij = C_i C_j for updating edge connection strengths in neural networks. The motivation is that concurrent change, rather than just concurrent activation, more accurately captures the "concomitant variation" that is central to inductively inferred functional relationships. The resulting networks are characterized by a kinetic, rather than potential, energy. Yet we prove that both system energies are given by the same entropy-like functional of connection matrices, Trace(Ė E). We prove that the differential Hebbian is equivalent to stochastic-process correlation (a cross-covariance kernel). We exactly solve the differential Hebbian law, interpret the sequence of edges as a stochastic process, and report that the edge process is a submartingale: the edges are expected to increase with time. The submartingale edges decompose into a martingale or unchanging process and an increasing or novelty process. Hence conditioned averages of edge residuals are encoded in learning even though the network only "experiences" the unconditioned edge residuals.
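The contrast between the two laws can be sketched with discrete-time updates; the learning rate and signals are illustrative, and differences stand in for the time derivatives:

```python
import numpy as np

def hebbian_step(E, C, lr=0.1):
    """Classical Hebb: correlate concurrent activations C_i C_j."""
    return E + lr * np.outer(C, C)

def diff_hebbian_step(E, C_prev, C, lr=0.1):
    """Differential Hebb: correlate concurrent *changes* dC_i dC_j,
    capturing concomitant variation rather than mere co-activation."""
    dC = C - C_prev
    return E + lr * np.outer(dC, dC)

# Two signals that are both 'on' but move in opposite directions:
# Hebb strengthens the edge between them, the differential Hebbian
# law weakens it, since only the changes are anticorrelated.
C_prev = np.array([0.9, 0.2])
C = np.array([1.0, 0.1])
E = np.zeros((2, 2))
assert hebbian_step(E, C)[0, 1] > 0
assert diff_hebbian_step(E, C_prev, C)[0, 1] < 0
```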


ieee virtual reality conference | 1993

Virtual worlds as fuzzy cognitive maps

Julie A. Dickerson; Bart Kosko

Fuzzy cognitive maps (FCMs) can structure virtual worlds. FCMs link causal events, values, goals, and trends in a fuzzy feedback dynamical system. They direct actors in virtual worlds as the actors react to events and to one another. In nested FCMs each causal concept can control its own FCM. This combines levels of fuzzy systems that can choose goals or move objects. Adaptive FCMs change as causal patterns change. They adapt with differential Hebbian learning. FCMs are applied to an undersea virtual world of dolphins.

Collaboration


Bart Kosko's main collaborators.

Top Co-Authors

Osonde Osoba, University of Southern California
Ashok Patel, University of Southern California
Hyun Mun Kim, University of Southern California
Brandon Franzke, University of Southern California
Ian Lee, University of Southern California
Seong-Gon Kong, University of Southern California