Stuart Harvey Rubin
Central Michigan University
Publications
Featured research published by Stuart Harvey Rubin.
Systems, Man, and Cybernetics | 1998
Stuart Harvey Rubin
Computing with words is defined, in this paper, to be a symbolic generalization of fuzzy logic that admits self-reference. It entails the randomization of declarative knowledge, which yields procedural knowledge. Such randomization can occur at two levels. The first, termed weak randomization, is essentially a domain-general pattern-matching operation. The second, termed strong randomization, entails the application of one rule set to the semantics of another, possibly including itself. Strong randomization rests on top of weak randomization and is essentially a heuristic process. It is fully scalable, since it can in theory map out its own needed heuristics for ever more efficient search, including segmentation of the knowledge base. It is proven that strong learning, if it is to be effective, must be knowledge-based. Computing with words does not preclude the use of predicate functions or procedural attachments. Moreover, the paradigm for computing with words does not directly compete with that of fuzzy logic; rather, it augments the utility of fuzzy logic through symbolic randomization. A countably infinite number of domain-specific logics, or knowledge-based methods for randomization, exist.
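As a rough illustration of the two levels described above (not the paper's algorithm; the rule format, function names, and example rules are invented for this sketch), the following Python fragment treats rules as (antecedent, consequent) word tuples. Weak randomization is approximated as a domain-general pattern match that merges rules differing in one slot, and strong randomization as one rule set rewriting the contents of another, treating rules themselves as data.

```python
# Illustrative toy only: "weak" randomization generalizes rules that differ in
# exactly one antecedent slot by introducing a variable; "strong" randomization
# applies meta-rules (word -> word substitutions) to the rules themselves.

def weak_randomize(rules):
    """Merge pairs of rules whose antecedents differ in exactly one slot."""
    merged, used = [], set()
    for i, (a1, c1) in enumerate(rules):
        for j, (a2, c2) in enumerate(rules):
            if j <= i or c1 != c2 or len(a1) != len(a2):
                continue
            diffs = [k for k in range(len(a1)) if a1[k] != a2[k]]
            if len(diffs) == 1:
                general = tuple("?X" if k == diffs[0] else a1[k] for k in range(len(a1)))
                merged.append((general, c1))
                used.update({i, j})
    kept = [r for k, r in enumerate(rules) if k not in used]
    return kept + merged

def strong_randomize(meta_rules, rules):
    """Rewrite one rule set with another, treating rules as data."""
    table = dict(meta_rules)
    return [(tuple(table.get(w, w) for w in a),
             tuple(table.get(w, w) for w in c)) for a, c in rules]

rules = [(("engine", "hot", "noisy"), ("check", "coolant")),
         (("engine", "cold", "noisy"), ("check", "coolant"))]
print(weak_randomize(rules))                          # one rule with a "?X" slot
print(strong_randomize([("engine", "motor")], rules)) # rules rewritten by meta-rules
```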
Systems, Man, and Cybernetics | 2004
Stuart Harvey Rubin; S.N.J. Murthy; Michael H. Smith; Ljiljana Trajkovic
In this paper and the attached video, we present a third-generation expert system named Knowledge Amplification by Structured Expert Randomization (KASER), for which a patent has been filed by the U.S. Navy's SPAWAR Systems Center, San Diego, CA (SSC SD). KASER is a creative expert system capable of deductive, inductive, and mixed derivations. Its qualitative creativity is realized by using a tree-search mechanism. The system achieves creative reasoning by using a declarative representation of knowledge consisting of object trees and inheritance. KASER computes with words and phrases. It possesses a capability for metaphor-based explanations, which is useful in explaining its creative suggestions and serves to augment the capabilities provided by the explanation subsystems of conventional expert systems. KASER also exhibits an accelerated capability to learn; however, this capability depends on the particulars of the selected application domain. For example, application domains such as the game of chess exhibit a high degree of geometric symmetry. Conversely, application domains such as the game of craps played with two dice exhibit no predictable pattern unless the dice are loaded. More generally, we say that domains whose informative content can be compressed to a significant degree without loss (or with relatively little loss) are symmetric. Incompressible domains are said to be asymmetric or random. The measure of symmetry plus the measure of randomness must always sum to unity.
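The closing symmetry/randomness remark can be made concrete with a compression-based estimate. The sketch below uses off-the-shelf lossless compression (zlib) as a stand-in for the notion of compressibility, which is an assumption rather than KASER's actual metric; the two reported measures sum to one by construction.

```python
# A rough compression-based estimate of the symmetry/randomness split described
# above. zlib stands in for the paper's notion of compressibility.
import os
import zlib

def symmetry_measure(data: bytes) -> float:
    """Fraction of the data removable by lossless compression, clamped to [0, 1]."""
    ratio = len(zlib.compress(data, 9)) / max(len(data), 1)
    return max(0.0, 1.0 - ratio)

structured = b"abcabcabc" * 100     # highly patterned, i.e. symmetric
random_bytes = os.urandom(900)      # essentially incompressible, i.e. random

for name, blob in [("structured", structured), ("random", random_bytes)]:
    s = symmetry_measure(blob)
    print(f"{name}: symmetry={s:.2f} randomness={1 - s:.2f}")  # the two sum to 1
```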
Information Sciences | 2007
Stuart Harvey Rubin
In the first part of this paper, traditional computability theory is extended to prove that the attainable density of knowledge is virtually unbounded. That is, the more bits available for storage, the more information can be stored, and the density of information per bit cannot be bounded above. In the second part, the paper explains how machine intelligence becomes possible as a result of the capability for creating, storing, and retrieving virtually unlimited information/knowledge. It follows from this theory that there is no such thing as a valid non-trivial proof, which in turn implies the need for heuristic search/proof techniques. Two examples are presented to show how heuristics can be developed as randomizations of knowledge, establishing the connection with the first part of the paper. Even more intriguing, it is shown that heuristic proof techniques are to formal proof techniques what fuzzy logic is to classical logic.
Systems, Man, and Cybernetics | 2009
Xin Chen; Chengcui Zhang; Shu-Ching Chen; Stuart Harvey Rubin
This paper proposes a human-centered interactive framework for automatically mining and retrieving semantic events in videos. After preprocessing, the object trajectories and event models are fed into the core components of the framework for learning and retrieval. Because trajectories are spatiotemporal in nature, the learning component is designed to analyze time-series data. Human feedback on the retrieval results provides progressive guidance for the retrieval component of the framework. For the user's convenience, the retrieval results are returned as video sequences rather than the trajectories they contain. Thus, the trajectories are not directly labeled by the feedback, as the training algorithm requires. A mapping between semantic video retrieval and multiple instance learning (MIL) is established to solve this problem. The effectiveness of the algorithm is demonstrated by experiments on real-life transportation surveillance videos.
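A minimal sketch of the bag/instance mapping described above: each video is a bag of trajectory feature vectors, user feedback labels the bag rather than the individual trajectories, and, under the standard MIL assumption, a bag scores as well as its best instance. The scoring function and the feature vectors below are invented placeholders, not the paper's learned model.

```python
# Videos as MIL "bags" of trajectory instances; retrieval ranks whole videos by
# their best-matching trajectory, so feedback on a video indirectly supervises
# the trajectories it contains.
from typing import List, Sequence
import math

def instance_score(trajectory: Sequence[float], event_model: Sequence[float]) -> float:
    """Similarity of one trajectory to the query event (negative Euclidean distance)."""
    return -math.dist(trajectory, event_model)

def bag_score(video: List[Sequence[float]], event_model: Sequence[float]) -> float:
    """Score a video by its best-matching trajectory (the MIL 'max' rule)."""
    return max(instance_score(t, event_model) for t in video)

videos = {
    "clip_01": [[0.1, 0.9, 0.4], [0.8, 0.2, 0.5]],
    "clip_02": [[0.7, 0.7, 0.1]],
}
query = [0.75, 0.25, 0.5]
ranked = sorted(videos, key=lambda v: bag_score(videos[v], query), reverse=True)
print(ranked)  # videos returned whole, ordered by their best trajectory match
```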
Systems, Man, and Cybernetics | 1993
Stuart Harvey Rubin
RSCL (random seeded crystal learning) methods address the principal problem met in the design and programming of solutions to complex problems, namely the automation of redundant tasks. Such automation entails the evolution of a rule base; the problem then pertains to cracking the knowledge-acquisition bottleneck. RSCL methods are predicated upon the existence of symmetry in the domain universe. Symmetry facilitates the induction of new knowledge, not merely a statistical interpolation of existing data. Like neural networks, RSCL methods are best matched to parallel platforms. Eventually, so-called expert^n compilers will capture all knowledge applied in the design of software. These translators acquire knowledge from the programmer and assist him or her by assuming an ever-increasing scope of redundant translation tasks.
Information Reuse and Integration | 2004
Stuart Harvey Rubin
In this paper, we first apply traditional computability theory to prove that the randomization problem, as defined herein, is recursively unsolvable. We then extend traditional computability theory to the case of k-limited fine-grained parallel processors (i.e., temporal relativity). Using this modification, we are able to prove the Semantic Randomization Theorem (SRT), which states that the complexity of an arbitrary self-referential functional (i.e., implying representation and knowledge) is unbounded in the limit. Furthermore, it then follows from the unsolvability of the randomization problem that effective knowledge acquisition in the large must be domain-specific and evolutionary. It is suggested that a generalized operant mechanics will be the fixed-point randomization of a domain-general self-referential randomization. In practice, this provides for the definition of knowledge-based systems that can formally apply analogy in the reasoning process as a consequence of semantic randomization.
North American Fuzzy Information Processing Society | 2002
Stuart Harvey Rubin; R.J. Rush; J. Murthy; Michael H. Smith; Ljiljana Trajkovic
This paper describes a shell that has been developed for the purpose of fuzzy qualitative reasoning. The relation among object predicates is defined by object trees that are fully capable of dynamic growth and maintenance. The qualitatively fuzzy inference engine and the developed expert system can then acquire a virtual rule space that is exponentially larger (subject to machine-implementation constants) than the actual, declared rule space, with a decreasing, non-zero likelihood of error. This capability is called knowledge amplification, and the methodology is named KASER, an acronym for Knowledge Amplification by Structured Expert Randomization. It addresses the knowledge-acquisition bottleneck in expert systems. KASER represents an intelligent, creative system that fails softly, learns over a network, and has enormous potential for automated decision making. KASERs compute with words and phrases and possess capabilities for metaphorical explanations.
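The virtual-rule-space idea can be illustrated with a small inheritance sketch: a rule declared on a node of an object tree implicitly covers every descendant of that node, so a handful of declared rules answers queries over a much larger set of objects. The tree and rules below are invented examples and are not taken from KASER.

```python
# A few declared rules cover many objects through inheritance in the object
# tree, giving a virtual rule space larger than the declared one.

object_tree = {                      # child -> parent
    "vehicle": None,
    "car": "vehicle", "truck": "vehicle",
    "sedan": "car", "coupe": "car",
}

declared_rules = {"vehicle": "inspect brakes", "car": "check emissions"}

def ancestors(node):
    """Yield the node, then each ancestor up to the root of the object tree."""
    while node is not None:
        yield node
        node = object_tree[node]

def applicable_rules(node):
    """Collect rules declared on the node or inherited from any ancestor."""
    return [declared_rules[a] for a in ancestors(node) if a in declared_rules]

print(applicable_rules("sedan"))   # ['check emissions', 'inspect brakes']
print(applicable_rules("truck"))   # ['inspect brakes']
```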
Annual Conference on Computers | 1998
Stuart Harvey Rubin
Data mining can benefit from fuzzy techniques to order an otherwise intractable search. This paper develops a fuzzy logic for rule discovery and inference, with application to decision support systems. Data mining traditionally addresses the randomization of numerical data. Not only can such mining operations be readily extended to symbolic data, but two further results then follow. First, symbolic data can take the form of natural language in supervised or unsupervised learning; second, randomization can take the form of rules for use in an expert system. It is argued that the knowledge-acquisition bottleneck can only be cracked if expert systems are bootstrapped using natural language.
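To make the rule-inference side concrete, here is a toy fuzzy inference step over symbolic terms: a rule fires to the degree its antecedent terms are satisfied (min for conjunction), and conclusions are aggregated with max. The membership values and rules are invented for illustration and are not from the paper.

```python
# Toy fuzzy rule inference over symbolic terms: min for AND, max for aggregation.

memberships = {"traffic_heavy": 0.8, "weather_bad": 0.4, "hour_peak": 0.9}

rules = [
    (("traffic_heavy", "hour_peak"), "delay_likely"),
    (("weather_bad",), "delay_likely"),
]

def infer(memberships, rules):
    """Return the aggregated degree of support for each conclusion."""
    support = {}
    for antecedent, conclusion in rules:
        degree = min(memberships.get(term, 0.0) for term in antecedent)  # fuzzy AND
        support[conclusion] = max(support.get(conclusion, 0.0), degree)  # fuzzy OR
    return support

print(infer(memberships, rules))   # {'delay_likely': 0.8}
```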
Systems, Man, and Cybernetics | 2000
Mihaela Ulieru; Oscar Cuzzani; Stuart Harvey Rubin; Marion G. Ceruti
Artificial intelligence, including fuzzy logic, is applied to the diagnosis of glaucoma. Current methods and difficulties are reviewed and a diagnostic and prediction system is proposed. Validation of the developed software will involve data collected from the clinical evaluation of glaucoma patients, glaucoma-suspect patients and normal subjects. The proposed system is expected to lead to a decrease in the difficulty and cost of glaucoma diagnosis as well as a decrease in the associated health risks.
Information Reuse and Integration | 2004
Mei Ling Shyu; Shu-Ching Chen; Min Chen; Stuart Harvey Rubin
Compared with regular documents, the major distinguishing characteristic of Web documents is their dynamic hyperlink structure. Thus, in addition to the terms or keywords used for regular document clustering, Web document clustering can incorporate dynamic information such as hyperlinks and the access patterns extracted from user query logs. In this paper, we extend the concept of document clustering to Web document clustering by introducing an affinity-based similarity measure, which utilizes user access patterns in determining the similarities among Web documents via a probabilistic model. Several comparison experiments are conducted on a real data set, and the results demonstrate that the proposed similarity measure outperforms the cosine coefficient and the Euclidean distance method under different document clustering algorithms.
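The contrast between a term-based measure and an access-pattern measure can be sketched as follows: cosine similarity over term vectors versus a co-access affinity estimated from query-log sessions (the fraction of sessions containing one page that also contain the other). This is a simplified stand-in for the paper's probabilistic model, shown only to make the distinction concrete; the documents and sessions are invented.

```python
# Term-based cosine similarity vs. a session co-access affinity estimate.
import math

def cosine(u, v):
    """Cosine coefficient between two term-frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def coaccess_affinity(sessions, doc_a, doc_b):
    """P(doc_b accessed | doc_a accessed), estimated from session logs."""
    with_a = [s for s in sessions if doc_a in s]
    if not with_a:
        return 0.0
    return sum(doc_b in s for s in with_a) / len(with_a)

term_vectors = {"page1": [1, 0, 2], "page2": [0, 3, 1]}
sessions = [{"page1", "page2"}, {"page1"}, {"page1", "page2"}]

print(cosine(term_vectors["page1"], term_vectors["page2"]))   # ~0.28
print(coaccess_affinity(sessions, "page1", "page2"))          # 2 of 3 sessions -> ~0.67
```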