Robert Levinson
University of California, Santa Cruz
Publications
Featured research published by Robert Levinson.
Computational Intelligence | 1995
Robert Levinson
This paper provides a blueprint for the development of a fully domain‐independent single‐agent and multiagent heuristic search system. It gives a graph‐theoretic representation of search problems based on conceptual graphs and outlines two different learning systems. One, an “informed learner”, makes use of the graph‐theoretic definition of a search problem or game in playing and adapting to a game in the given environment. The other, a “blind learner”, is not given access to the rules of a domain but must discover and then exploit the underlying mathematical structure of a given domain. Relevant work of others is referenced within the context of the blueprint.
International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1994
Gerard Ellis; Robert Levinson; Peter J. Robinson
The Peirce project (named after Charles Sanders Peirce) is an international collaborative project aiming to construct a freely available conceptual graphs workbench to support research in the conceptual graphs community in areas such as natural language processing, enterprise modelling, program specification and verification, management information systems, conceptual information retrieval, medical informatics, and construction of ontologies. Peirce advances the state of the art in conceptual graph implementations and in general complex object classification. At the core of the Peirce system is an abstract data type for partially ordered sets of objects (poset ADT). The poset ADT is used to organize a conceptual graph database. In this paper we give an overview of the innovative methods for complex object classification, and illustrate them with examples from complex object databases with hierarchies of chemical formulas, images and conceptual graph program specifications. We illustrate how conceptual graphs can be used for graphic programming in traditional domains and in organic chemistry, and indicate how Peirce's complex object database supports these activities.
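The poset ADT described above can be sketched minimally as a store of objects ordered by a caller-supplied partial order (for Peirce, conceptual-graph subsumption). The class below is a simplified illustration, not the Peirce implementation; the subset-based toy ordering and the feature names are assumptions for the example.

```python
class Poset:
    """Minimal partially ordered set: stores elements and classifies a
    query against them using a user-supplied 'leq' relation (for
    conceptual graphs, 'leq' would be graph subsumption)."""
    def __init__(self, leq):
        self.leq = leq
        self.elements = []

    def insert(self, x):
        self.elements.append(x)

    def predecessors(self, x):
        # stored elements at least as general as x (y <= x)
        return [y for y in self.elements if self.leq(y, x)]

    def successors(self, x):
        # stored elements at least as specific as x (x <= y)
        return [y for y in self.elements if self.leq(x, y)]

# Toy representation: feature sets ordered by subset inclusion.
db = Poset(leq=lambda a, b: a <= b)   # frozenset subset test
for g in [frozenset({"ring"}),
          frozenset({"ring", "N"}),
          frozenset({"ring", "N", "O"})]:
    db.insert(g)
query = frozenset({"ring", "N"})
print([sorted(p) for p in db.predecessors(query)])  # generalizations of the query
```

A real classifier would exploit the hierarchy to prune comparisons rather than scan every element; this linear scan only shows the interface.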
Journal of Experimental and Theoretical Artificial Intelligence | 1992
Robert Levinson; Brian Beach; Richard Snyder; Tal Dayan; Kirack Sohn
Adaptive-predictive search (APS) is a new method by which systems can improve their search performance through experience. It is believed that the development of such methods is critical, as currently a tremendous amount of computational effort is potentially wasted by not integrating search and partial-search results into the knowledge of a problem-solving system. In the adaptive-predictive search model, pattern formation and associative recall are used to improve or replace search. In this paper the theory, background and motivations behind the model are presented and its application to two-player game-playing programs is discussed. In these programs, the system develops a knowledge base of patterns (boolean features) coupled with weights and, using pattern-oriented evaluation, performs only 1-ply search, yet competes respectably with programs that search more deeply. The learning mechanism is a hybridization of machine learning and artificial intelligence techniques that have been successful in other settings. Specific examples and performance results are taken from the domains of Hexapawn, Tic-Tac-Toe, Pente, Othello, and Chess. [This report is much like 89-22, but focuses on 2-player games and gives new results.]
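The 1-ply, pattern-weighted move selection described above can be sketched as follows. This is a minimal illustration under assumed interfaces (the toy integer "domain" and its two patterns are inventions for the example, not from the paper); APS's actual pattern language and weight learning are far richer.

```python
def evaluate(state, patterns):
    """Score a state as the weight sum of all boolean patterns it satisfies.
    'patterns' is a list of (predicate, learned_weight) pairs."""
    return sum(w for pred, w in patterns if pred(state))

def best_move(state, moves, apply_move, patterns):
    """1-ply search: apply each legal move and pick the successor state
    with the highest pattern-based score -- no deeper lookahead."""
    return max(moves, key=lambda m: evaluate(apply_move(state, m), patterns))

# Toy domain: a state is an integer, a move adds an offset; hypothetical
# patterns reward even values and values above 10.
patterns = [(lambda s: s % 2 == 0, 1.0),
            (lambda s: s > 10, 2.0)]
m = best_move(8, [1, 2, 3, 4], lambda s, mv: s + mv, patterns)
print(m)  # 4: reaches 12, which is both even and > 10
```

In APS the weights are adjusted from game outcomes, so the same 1-ply selector improves with experience.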
Computational Intelligence | 1994
Robert Levinson; Gil Fuchs
Pattern‐weight pairs (PWs) are a new form of search and planning knowledge. PWs are predicates over states coupled with a least upper bound on the distance from any state satisfying that predicate to any goal state. The relationship of PWs to more traditional forms of search knowledge is explored with emphasis on macros and subgoals. It is shown how PWs may be used to overcome some of the difficulties associated with macro‐tables and lead to shorter solution paths without replanning. An algorithm is given for converting a macro‐table to a more powerful PW set. Superiority over the Squeeze algorithm is demonstrated. It is also shown how PWs provide a mechanism for achieving dynamic subgoaling through the combination of knowledge from multiple alternative subgoal sequences. The flexibility and execution time reasoning provided by PWs may have significant use in reactive domains. The main cost associated with PWs is the cost of applying them at execution time. An associative retrieval algorithm is given that expedites this matching‐evaluation process. Empirical results are provided which demonstrate asymptotically improving performance with problem size of the PW technique over macro‐tables.
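The execution-time use of PWs described above can be sketched concretely: each PW pairs a predicate with an upper bound on distance-to-goal, and at each step the agent moves to the successor with the tightest bound, with no replanning. The integer toy domain and its PW set below are assumptions for illustration only.

```python
def pw_bound(state, pws, default=float("inf")):
    """Tightest (least) upper bound on distance-to-goal among all
    pattern-weight pairs whose predicate the state satisfies."""
    bounds = [b for pred, b in pws if pred(state)]
    return min(bounds, default=default)

def pw_step(state, successors, pws):
    """Greedy execution-time step: move to the successor with the
    smallest PW distance bound."""
    return min(successors(state), key=lambda s: pw_bound(s, pws))

# Toy domain: states are integers, the goal is 0, moves are -1 / +1.
# Hypothetical PW set: "|s| <= k" bounds the distance to the goal by k.
pws = [(lambda s, k=k: abs(s) <= k, k) for k in range(6)]
state = 3
while state != 0:
    state = pw_step(state, lambda s: [s - 1, s + 1], pws)
print(state)  # 0
```

The paper's associative retrieval algorithm addresses exactly the cost this sketch ignores: evaluating every PW against every successor at each step.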
International Conference on Machine Learning and Applications | 2010
Foaad Khosmood; Robert Levinson
Style transformation refers to the process by which a piece of text written in a certain style of writing is transformed into another text exhibiting a distinctly different style of writing without significant change to the meaning of individual sentences. In this paper we continue investigation into the linguistic style transformation problem and demonstrate current achievements in transformation on sample texts from a standard authorship attribution corpus. Specifically, we use simple synonym and phrase replacement on the source text to strengthen the stylistic markers of a given target corpus. We validate our results using Java Graphical Authorship Attribution Program (JGAAP). We are able to demonstrate that simple replacements can alter the linguistic style of writing as detected by an independent process.
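The synonym-replacement step described above can be sketched as follows. The tiny synonym table and word counts are hypothetical; the paper's system draws candidates from real lexical resources and targets a full stylistic profile, not just word frequency.

```python
# Hypothetical synonym table; a real system would draw candidates from a
# thesaurus and verify that sentence meaning is preserved.
SYNONYMS = {"big": ["large", "sizable"], "start": ["begin", "commence"]}

def restyle(text, target_counts):
    """Replace each word with whichever synonym (or the word itself) is
    most frequent in the target author's corpus, strengthening the
    target's stylistic markers without changing sentence meaning."""
    out = []
    for word in text.split():
        candidates = [word] + SYNONYMS.get(word, [])
        out.append(max(candidates, key=lambda w: target_counts.get(w, 0)))
    return " ".join(out)

# Assumed word frequencies from a target corpus:
target_counts = {"commence": 7, "begin": 2, "large": 5}
print(restyle("start the big engine", target_counts))
# -> "commence the large engine"
```

Validation in the paper is external: the transformed text is fed to JGAAP to check whether the detected authorship shifts toward the target.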
International Journal of Intelligent Systems | 1991
Robert Levinson
This article is an overview of research on the design and application of a Self-Organizing Pattern Retrieval System. This system is a nontraditional form of database for several reasons: (1) the objects (patterns) stored in the database are complex data structures such as graphs or matrices, as opposed to relations, sets, lists or frames; (2) the system attempts to optimize its internal structure for retrieval efficiency by taking advantage of classificatory information (common patterns) it discovers about the patterns in the database; and (3) the system is representation-independent, allowing users to use whatever representation scheme they desire as long as they provide a small set of I/O routines and comparison functions for the representation. This system has been applied to the organization of molecules and reactions in organic chemistry. The system is able to select valid precursors to molecules based on its knowledge of real-world reactions. It is also able to suggest new chemical reactions based on generalizations from its knowledge base. The system has also been applied to the retrieval, generalization, and classification of chess positions. Its ability to recognize the tactical similarity of positions is compared with that of a chess master. The design objectives of a system for classifying Radio Frequency Interference, required by NASA's SETI (Search for Extraterrestrial Intelligence) project, are also discussed.
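The representation-independence described in point (3) can be sketched as a store parameterized by caller-supplied comparison functions. The class, the Jaccard toy similarity, and the chess-flavored feature sets below are all assumptions for illustration; the real system also reorganizes itself around discovered common patterns.

```python
class PatternDB:
    """Representation-independent pattern store: the caller supplies a
    similarity function and (optionally) a generalization operator, so
    the store never inspects the patterns' internal structure itself."""
    def __init__(self, similarity, generalize=None):
        self.similarity = similarity
        self.generalize = generalize
        self.patterns = []

    def add(self, p):
        self.patterns.append(p)

    def most_similar(self, query):
        return max(self.patterns, key=lambda p: self.similarity(p, query))

# Toy representation: feature sets compared by Jaccard similarity,
# generalized by intersection (their common pattern).
def jaccard(a, b):
    return len(a & b) / len(a | b)

db = PatternDB(similarity=jaccard, generalize=lambda a, b: a & b)
db.add(frozenset({"knight", "fork", "king"}))
db.add(frozenset({"pin", "bishop"}))
print(db.most_similar(frozenset({"knight", "fork"})))
```

Swapping in graph-isomorphism-based comparison functions would let the same store hold molecules or chess positions unchanged, which is the point of the design.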
International Conference on Machine Learning and Cybernetics | 2006
Foaad Khosmood; Robert Levinson
Automatic source attribution refers to the ability of an autonomous process to determine the source of a previously unexamined piece of writing. Statistical methods for source attribution have been the subject of scholarly research for well over a century. The field, however, is still missing a definitive currency of established or agreed-upon classes of features, methods, techniques and nomenclature. This paper represents a continuation of research into the basic attribution problem, as well as work towards an eventual source attribution standard. We augment previous work, which utilized in-common, non-trivial word frequencies, with neural networks on a more standardized data set. We also use two other techniques: phrase-based feature sets evaluated with naive Bayesians and bi-gram feature sets evaluated with the nearest neighbor algorithm. We compare the three and explore methods of combining the techniques in order to achieve better results.
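The bi-gram/nearest-neighbor technique mentioned above can be sketched as follows. This is an assumed minimal version (character bi-grams, cosine similarity, one profile per author); the paper's feature sets, distance measure, and corpus are more elaborate, and the two-author "corpus" here is invented for the example.

```python
from collections import Counter
import math

def bigram_vector(text):
    """Character bi-gram frequency profile of a text."""
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def cosine(a, b):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attribute(text, corpus):
    """Nearest-neighbor attribution: return the author whose corpus
    profile is most similar to the unknown text's bi-gram profile."""
    q = bigram_vector(text)
    return max(corpus, key=lambda author: cosine(q, bigram_vector(corpus[author])))

corpus = {"A": "the cat sat on the mat",
          "B": "quantum flux in plasma physics"}
print(attribute("a cat on a mat", corpus))  # "A"
```

Combining this with the phrase-based naive Bayes and neural-network scores, as the paper explores, amounts to voting or score fusion over the three classifiers.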
Archive | 1994
Robert Levinson
"Calculation is only one side of it. In chess no less important is intuition, inspiration, or, if you prefer, mood. I for example cannot always explain why in one position this move is good, and in another bad. In my games I have sometimes found a combination intuitively, simply feeling that it must be there. Yet I was not able to translate my thought processes into normal human language." (Former World Champion Mikhail Tal)
International Conference on Conceptual Structures | 1993
Robert Levinson
Adaptive predictive search (APS) is a learning system framework which, given little initial domain knowledge, increases its decision-making abilities in complex problem domains. In this paper we give an entirely domain-independent version of APS that we are implementing in the PEIRCE conceptual graphs workbench. By using conceptual graphs as the “base language”, a learning system is capable of refining its own pattern language for evaluating states in whatever domain it finds itself in. In addition to generalizing APS to be domain-independent and CG-based, we describe fundamental principles for the development of AI systems based on the structured pattern approach of APS. It is hoped that this effort will lead the way to a more principled and well-founded approach to the problem of mechanizing machine intelligence.
Computational Intelligence | 1996
Barney Pell; Susan L. Epstein; Robert Levinson
The universality of strategy games suggests that they reflect some basic insights into the nature of human intelligence. Turing cited rudimentary chess reasoning as a hallmark of AI, and some of the earliest significant research was on chess and checkers. Today, computer game playing continues to be a central concern because it addresses the fundamental issues in AI: knowledge representation, search, planning, and learning. Moreover, the multi-agent nature of games makes game playing an excellent forum for addressing core issues such as contingency planning and reasoning about actions and plans of other agents, while competition between programs and with humans provides useful benchmarks and encourages progress. This special issue highlights recent innovative work by a broad spectrum of researchers and practitioners. There are clearly at least three very different approaches to computer game playing: a high-performance determination to play better than any human, a cognitively oriented exploration of learning and behavior, and a mathematical theory of heuristics and game playing. The competition and cooperation among these approaches drive exciting and significant work whose results extend to many other problems with large search spaces. The traditional AI approach to game playing relies upon fast, deep search to look ahead from the current game state to all possible ways to complete the contest. For difficult games, there are so many alternatives that only a fragment of the future possibilities can be considered. Therefore, move selection must also rely upon a human-designed evaluation function to estimate the worth of game states prior to the end of a contest. This technique is classically supplemented by an extensive catalog of expert openings (an opening book) and precomputed solutions to simple endgame positions (an endgame database).
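The look-ahead-plus-evaluation scheme just described is, at its core, depth-limited minimax. The sketch below illustrates the idea with an invented toy game (each player adds 1 or 2 to a running total); real programs add alpha-beta pruning, move ordering, and the opening and endgame tables mentioned above.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Depth-limited look-ahead: search all move sequences to 'depth'
    plies, then fall back on the static evaluation function."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal]
    return max(scores) if maximizing else min(scores)

# Toy game: state is a running total; each ply a player adds 1 or 2.
# Past the depth limit the (hypothetical) evaluation is the total itself.
score = minimax(0, 3, True,
                moves=lambda s: [1, 2],
                apply_move=lambda s, m: s + m,
                evaluate=lambda s: s)
print(score)  # 5
```

Everything game-specific lives in `moves`, `apply_move`, and `evaluate`, which is why so much of the engineering effort in brute-force programs goes into the evaluation function.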
Because this approach tends to rely on raw computer power to compensate for a lack of knowledge or selectivity in search, these methods are often referred to as the “brute-force” approach. By many standards, this brute-force approach has been remarkably successful. Carefully engineered, deep-searching computers now dominate all but a few humans in a number of challenging games, including chess, checkers, and Othello. The surprising success of the engineering approach on these games has prompted researchers in other fields to seek similar search-intensive solutions to their problems, including theorem proving and natural language processing (see Marsland's discussion in (Levinson et al. 1991)). The brute-force approach to games has its limitations, however. First, where this approach is applicable, considerable engineering effort is required to achieve success. This effort manifests itself in highly efficient, special-purpose representations (sometimes with game-specific hardware (Ebeling 1986)), fine-tuned evaluation functions with hand-crafted features