
Publications


Featured research published by Avelino J. Gonzalez.


Expert Systems With Applications | 1992

A case-based reasoning approach to real estate property appraisal

Avelino J. Gonzalez; Raymond Laureano-Ortiz

Case-Based Reasoning (CBR) has emerged as an alternative to rule-based reasoning techniques for the design of expert systems. This paper concentrates on the issues involved in applying CBR techniques to a specific domain: property appraisal. CBR has recently been favored because it more closely resembles the psychological process humans follow when applying their knowledge to problem solving: adapting solutions of similar problems handled in past experience to address present situations. Property appraisal, or valuation, is a domain characterized by a single parameter in its solution: the value of the property being appraised. This makes it different from most other domains to which CBR has been applied, which require the satisfaction of multiple goals related to one another in some type of explanation or plan. Because property appraisal has a single goal, it is particularly important to find the best possible answer for that goal. Consistency is also essential in this domain, where different experts may reach different answers even when given the same data. By modelling the market data approach to appraisal, using adaptations of CBR techniques such as similarity links and critics, and integrating other techniques (e.g., comfort factors), a case-based reasoner for property appraisal is implemented that addresses these issues.
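
A minimal sketch of the retrieve-and-adapt cycle described above, in Python. This is an illustration rather than the paper's implementation: the feature set, similarity weights, and per-unit dollar adjustments are hypothetical placeholders for values a real appraiser would derive from market data.

```python
from dataclasses import dataclass

@dataclass
class Case:
    features: dict   # e.g. {"sqft": 1900, "bedrooms": 3, "age": 10}
    price: float     # known sale price of the comparable property

# Hypothetical weights and per-unit dollar adjustments; a real system
# would derive these from market data rather than hard-coding them.
WEIGHTS = {"sqft": 1.0, "bedrooms": 5.0, "age": 0.5}
ADJUST_PER_UNIT = {"sqft": 80.0, "bedrooms": 7500.0, "age": -400.0}

def similarity(subject: dict, case: Case) -> float:
    """Weighted inverse distance over shared features; higher is closer."""
    dist = sum(w * abs(subject[f] - case.features[f]) for f, w in WEIGHTS.items())
    return 1.0 / (1.0 + dist)

def adapt(subject: dict, case: Case) -> float:
    """Critic-style adaptation: adjust the comparable's sale price for
    each feature difference between it and the subject property."""
    delta = sum(r * (subject[f] - case.features[f])
                for f, r in ADJUST_PER_UNIT.items())
    return case.price + delta

def appraise(subject: dict, library: list[Case], k: int = 3) -> float:
    """Retrieve the k most similar comparables and combine their adapted
    prices, weighting each by similarity (a crude comfort factor)."""
    ranked = sorted(library, key=lambda c: similarity(subject, c), reverse=True)[:k]
    total = sum(similarity(subject, c) for c in ranked)
    return sum(similarity(subject, c) / total * adapt(subject, c) for c in ranked)

library = [Case({"sqft": 1900, "bedrooms": 3, "age": 10}, 210000.0),
           Case({"sqft": 1600, "bedrooms": 2, "age": 30}, 150000.0),
           Case({"sqft": 2100, "bedrooms": 4, "age": 5}, 260000.0)]
print(round(appraise({"sqft": 1850, "bedrooms": 3, "age": 12}, library)))
```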


IEEE Transactions on Systems, Man, and Cybernetics | 2002

A framework for validation of rule-based systems

Rainer Knauf; Avelino J. Gonzalez; Thomas Abel

We describe a complete methodology for the validation of rule-based expert systems. This methodology is presented as a five-step process that has two central themes: 1) to create a minimal set of test inputs that adequately cover the domain represented in the knowledge base; and 2) a Turing Test-like methodology that evaluates the system's responses to the test inputs and compares them to the responses of human experts. The development of a minimal set of test inputs takes into consideration various criteria, both user-defined and domain-specific. These criteria are used to reduce the potentially very large set of test inputs to one that is practical, keeping in mind the nature and purpose of the developed system. The Turing Test-like evaluation methodology makes use of only one panel of experts to both evaluate each set of test cases and compare the results with those of the expert system, as well as with those of the other experts. The hypothesis presented is that much can be learned about the experts themselves by having them anonymously evaluate each other's responses to the same test inputs. Thus, we are better able to determine the validity of an expert system. Depending on its purpose, we introduce various ways to express validity, as well as a technique to use the validity assessment for the refinement of the rule base. Lastly, we describe a partial implementation of the test input minimization process on a small but nontrivial expert system. The effectiveness of the technique was evaluated by seeding errors into the expert system, generating the appropriate set of test inputs, and determining whether the errors could be detected by the suggested methodology.
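
The first theme, reducing an exhaustive set of test inputs to a minimal covering set, can be illustrated with a greedy coverage sketch. The toy rule base and input domains below are hypothetical, and simple rule coverage stands in for the paper's user-defined and domain-specific criteria.

```python
from itertools import product

# Hypothetical toy rule base: each rule is a condition over the inputs;
# "coverage" here means that at least one kept test input fires each rule.
RULES = {
    "r1": lambda t, p: t == "high" and p == "rising",
    "r2": lambda t, p: t == "high" and p == "falling",
    "r3": lambda t, p: t == "low",
}
DOMAINS = {"temp": ["high", "low"], "pressure": ["rising", "falling", "steady"]}

def minimal_test_set(rules, domains):
    """Greedy reduction: walk the exhaustive input space and keep a test
    input only if it covers some rule no earlier input has covered."""
    covered, kept = set(), []
    for t, p in product(domains["temp"], domains["pressure"]):
        fired = {name for name, cond in rules.items() if cond(t, p)}
        if fired - covered:          # this input adds new coverage
            kept.append((t, p))
            covered |= fired
    return kept

print(minimal_test_set(RULES, DOMAINS))
# -> [('high', 'rising'), ('high', 'falling'), ('low', 'rising')]
```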


IEEE Transactions on Systems, Man, and Cybernetics | 2006

Learning tactical human behavior through observation of human performance

Hans Fernlund; Avelino J. Gonzalez; Michael Georgiopoulos; Ronald F. DeMara

It is widely accepted that the difficulty and expense involved in acquiring the knowledge behind tactical behaviors have been one limiting factor in the development of simulated agents representing adversaries and teammates in military and game simulations. Several researchers have addressed this problem with varying degrees of success. The difficulty lies mostly in the fact that tactical knowledge is hard to elicit and represent through interactive sessions between the model developer and the subject matter expert. This paper describes a novel approach that employs genetic programming in conjunction with context-based reasoning to evolve tactical agents based upon automatic observation of a human performing a mission on a simulator. We describe the process used to carry out the learning. A prototype was built to demonstrate feasibility, and it is described herein. The prototype was rigorously and extensively tested. The evolved agents exhibited good fidelity to the observed human performance, as well as the capacity to generalize from it.
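
Context-based reasoning organizes an agent's behavior as a set of contexts, each with an activation condition and an action function; in the paper, the action functions are evolved by genetic programming from observation traces. The sketch below shows only the skeleton, with hand-written placeholder behaviors and a hypothetical state variable.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Context:
    name: str
    is_applicable: Callable[[dict], bool]  # sentinel rule: when to activate
    act: Callable[[dict], str]             # behavior while this context is active

# Ordered by priority; in the paper the act functions are evolved by
# genetic programming from observed human performance, not hand-written.
contexts = [
    Context("evade", lambda s: s["threat_distance"] < 500,
            lambda s: "turn_away_and_accelerate"),
    Context("patrol", lambda s: True,      # default context, always applicable
            lambda s: "follow_waypoint"),
]

def step(state: dict) -> str:
    """Activate the highest-priority applicable context and let it act."""
    active = next(c for c in contexts if c.is_applicable(state))
    return active.act(state)

print(step({"threat_distance": 300}))   # -> turn_away_and_accelerate
print(step({"threat_distance": 2000}))  # -> follow_waypoint
```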


Journal of Experimental and Theoretical Artificial Intelligence | 2000

Validation and verification of intelligent systems - what are they and how are they different?

Avelino J. Gonzalez; Valerie Barr

Researchers and practitioners in the field of expert systems generally agree that to be useful, any fielded intelligent system must be adequately verified and validated. But what does this mean in concrete terms? What exactly is verification? What exactly is validation? How are they different? Many authors have attempted to define these terms and, as a result, several interpretations have surfaced. It is our opinion that there is great confusion as to what these terms mean, how they differ, and how they are implemented. This paper, therefore, has two aims: to clarify the meaning of the terms validation and verification as they apply to intelligent systems, and to describe how several researchers are implementing them. The second part of the paper details some techniques that can be used to perform the verification and validation of systems. Also discussed is the role of testing as part of the above-mentioned processes.


IEEE Transactions on Systems, Man, and Cybernetics | 1999

Validating rule-based systems: a complete methodology

Rainer Knauf; Avelino J. Gonzalez; Klaus P. Jantke

This paper describes a complete methodology for the validation of rule-based expert systems. The methodology is presented as a five-step process that has three central themes: creation of a minimal set of test inputs that adequately cover the domain represented in the knowledge base; a Turing Test-like methodology that evaluates the system's responses to the test inputs and compares them to the responses of human experts; and use of the validation results for system improvement. The development of a minimal set of test inputs takes into consideration various criteria, both user-defined and domain-specific. These criteria are used to reduce the potentially very large exhaustive set of test inputs to one that is practical. The Turing Test-like evaluation methodology makes use of a panel of experts to both evaluate each set of test cases and compare the results with those of the expert system, as well as with those of the other experts in the validation panel. The hypothesis presented is that much can be learned about the experts themselves by having them evaluate each other's responses to the same test inputs anonymously. Thus, by carefully scrutinizing the results of each expert in relation to the other experts, we are better able to judge an evaluator's expertise and, consequently, better determine the validity of an expert system.
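
The anonymous cross-evaluation theme can be sketched as a simple agreement score: every rater, meaning the human experts plus the anonymized expert system, answers the same test cases, and each rater's agreement with the case-by-case majority serves as a crude expertise estimate. This plain majority scheme is illustrative only; the paper's actual validity measures are more elaborate.

```python
from collections import Counter

# Each rater (three human experts plus the anonymized expert system)
# answers the same four test cases.
answers = {
    "expert_1": ["A", "B", "A", "C"],
    "expert_2": ["A", "B", "B", "C"],
    "expert_3": ["A", "A", "A", "C"],
    "system":   ["A", "B", "A", "B"],
}

cases = list(zip(*answers.values()))                 # answers grouped per case
majority = [Counter(c).most_common(1)[0][0] for c in cases]

def agreement(rater: str) -> float:
    """Fraction of test cases on which this rater matches the majority."""
    return sum(a == m for a, m in zip(answers[rater], majority)) / len(majority)

for rater in answers:
    print(rater, round(agreement(rater), 2))
# A system whose agreement is comparable to the experts' is judged valid;
# low-agreement cases point at the parts of the rule base to refine.
```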


IEEE Transactions on Systems, Man, and Cybernetics | 1998

Validation techniques for case-based reasoning systems

Avelino J. Gonzalez; Lingli Xu; Uma M. Gupta

By their nature, case-based reasoning (CBR) systems have a built-in set of test cases in their case library. Effective use of this feature can facilitate the validation process by minimizing the involvement of domain experts. This can reduce the cost of the validation procedure and eliminate the subjectivity introduced by experts. This paper proposes a validation technique that uses the system's own case library to validate a CBR system. Called the case library subset test (CLST) technique, it evaluates the correctness of the retrieval and adaptation functions of the CBR engine with respect to the domain as represented by the case library. It is composed of three phases: 1) the retrieval test, 2) the adaptation test, and 3) the domain coverage test. A complete description of the technique and an application of it to validate an existing CBR system are included in this paper.
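
The retrieval-test phase lends itself to a leave-one-out sketch: each case is withheld in turn, its problem part is submitted as a query against the remaining library, and the answer is compared with the withheld solution. The retrieval, similarity, and acceptance functions below are hypothetical stand-ins for the CBR engine under validation.

```python
def retrieve(query, library, similarity):
    """Stand-in for the CBR engine's retrieval function."""
    return max(library, key=lambda case: similarity(query, case["problem"]))

def retrieval_test(library, similarity, close_enough):
    """Withhold each case in turn; its problem must retrieve a case whose
    solution is acceptably close to the withheld one."""
    failures = []
    for i, held_out in enumerate(library):
        rest = library[:i] + library[i + 1:]
        best = retrieve(held_out["problem"], rest, similarity)
        if not close_enough(best["solution"], held_out["solution"]):
            failures.append((held_out["problem"], best["problem"]))
    return failures        # an empty list means retrieval behaved consistently

# Example with a 1-D numeric domain; the isolated case at x = 10 exposes
# a coverage gap in the library.
lib = [{"problem": x, "solution": 2 * x} for x in (1, 2, 3, 10)]
sim = lambda a, b: -abs(a - b)
ok = lambda got, want: abs(got - want) <= 4
print(retrieval_test(lib, sim, ok))    # -> [(10, 3)]
```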


Neural Networks | 2005

Data-partitioning using the Hilbert space filling curves: Effect on the speed of convergence of Fuzzy ARTMAP for large database problems

José Castro; Michael Georgiopoulos; Ronald F. DeMara; Avelino J. Gonzalez

The Fuzzy ARTMAP algorithm has been proven to be one of the premier neural network architectures for classification problems. One of the properties of Fuzzy ARTMAP, which can be both an asset and a liability, is its capacity to produce new nodes (templates) on demand to represent classification categories. This property allows Fuzzy ARTMAP to adapt automatically to the database without having to specify its network size a priori. On the other hand, it has the undesirable side effect that large databases might produce a large network size (node proliferation) that can dramatically slow down the training speed of the algorithm. To address the slow convergence speed of Fuzzy ARTMAP for large database problems, we propose the use of space-filling curves, specifically the Hilbert space-filling curve (HSFC). Hilbert space-filling curves allow us to divide the problem into smaller sub-problems, each focusing on a dataset smaller than the original. A different Fuzzy ARTMAP network is used to learn each partition of the data. Through this divide-and-conquer approach we avoid the node proliferation problem and consequently speed up Fuzzy ARTMAP's training. Results have been produced for two-class, 16-dimensional Gaussian data and for the Forest database, available at the UCI repository. Our results indicate that the Hilbert space-filling curve approach reduces the time that it takes to train Fuzzy ARTMAP without affecting the generalization performance attained by Fuzzy ARTMAP trained on the original large dataset. Given that the resulting smaller datasets produced by the HSFC approach can be learned independently by different Fuzzy ARTMAP networks, we have also implemented and tested a parallel implementation of this approach on a Beowulf cluster of workstations, which further speeds up Fuzzy ARTMAP's convergence to a solution for large database problems.
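
The partitioning idea can be sketched in two dimensions (the paper handles up to 16-dimensional data, which requires a multidimensional Hilbert index): quantize each point onto a power-of-two grid, compute its Hilbert index with the standard bit-twiddling recurrence, sort by index, and cut the ordering into contiguous chunks. Because the Hilbert curve preserves locality, nearby points land in the same chunk, giving each Fuzzy ARTMAP network a spatially coherent sub-problem. The grid resolution and partition count are arbitrary illustration values.

```python
import numpy as np

def xy2d(n: int, x: int, y: int) -> int:
    """Distance of grid cell (x, y) along the Hilbert curve on an n x n
    grid (n a power of two); standard bit-twiddling formulation."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                    # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_partition(points, grid_bits=8, n_parts=4):
    """Sort 2-D points in [0, 1)^2 by Hilbert index and cut the ordering
    into contiguous chunks; each chunk is a spatially coherent sub-problem
    to be learned by its own Fuzzy ARTMAP network."""
    n = 1 << grid_bits
    grid = np.clip((points * n).astype(int), 0, n - 1)   # quantize to the grid
    keys = np.array([xy2d(n, gx, gy) for gx, gy in grid])
    return np.array_split(points[np.argsort(keys)], n_parts)

rng = np.random.default_rng(0)
parts = hilbert_partition(rng.random((1000, 2)), n_parts=4)
print([len(p) for p in parts])    # -> [250, 250, 250, 250]
```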


Proceedings of SPIE | 1998

Short-term electrical load forecasting using a fuzzy ARTMAP neural network

Stefan E. Skarman; Michael Georgiopoulos; Avelino J. Gonzalez

Accurate electrical load forecasting is a necessary part of resource management for power generating companies. The better the hourly load forecast, the more closely the power generating assets of the company can be configured to minimize cost. Automation of this process is a profitable goal, and neural networks have shown promising results in achieving it. The neural network most often used to solve the electric load forecasting problem is the back-propagation architecture. Although the performance of back-propagation networks has been encouraging, they suffer from slow convergence and from the difficulty of interpreting the answers the architecture provides. A neural network architecture that does not suffer from the above-mentioned drawbacks is the Fuzzy ARTMAP neural network, developed by Carpenter, Grossberg, and their colleagues at Boston University. In this work we applied the Fuzzy ARTMAP neural network to the electric load forecasting problem. We performed numerous experiments to forecast the electrical load. The experiments showed that the Fuzzy ARTMAP architecture yields electrical load forecasts as accurate as those of a back-propagation neural network, with a training time that is a small fraction of that required by the back-propagation network.
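
For readers unfamiliar with the architecture, the core Fuzzy ART operations underlying Fuzzy ARTMAP are compact enough to state directly: complement coding of inputs, the category choice function, the vigilance (match) test, and the weight update. The sketch below implements these standard equations; the supervised map field and the forecasting-specific input encoding are omitted, and the parameter values are illustrative.

```python
import numpy as np

def complement_code(a):
    """Input normalization I = (a, 1 - a), so that |I| is constant."""
    return np.concatenate([a, 1.0 - a])

def choice(I, w, alpha=0.001):
    """Category choice T_j = |I ^ w_j| / (alpha + |w_j|), where ^ is the
    fuzzy AND (component-wise minimum) and |.| the L1 norm."""
    return np.minimum(I, w).sum() / (alpha + w.sum())

def match(I, w):
    """Vigilance test: category j is accepted when |I ^ w_j| / |I| >= rho."""
    return np.minimum(I, w).sum() / I.sum()

def learn(I, w, beta=1.0):
    """Weight update w_j <- beta * (I ^ w_j) + (1 - beta) * w_j
    (beta = 1 is the fast-learning case)."""
    return beta * np.minimum(I, w) + (1.0 - beta) * w

I = complement_code(np.array([0.3, 0.7]))   # a 2-feature input pattern
w = np.ones(4)                              # an uncommitted node's weights
rho = 0.75                                  # vigilance parameter
if match(I, w) >= rho:
    w = learn(I, w)
print(choice(I, w), w)                      # node now encodes the input
```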


Conference on Information and Knowledge Management | 2009

Towards a Context-Based Dialog Management Layer for Expert Systems

Victor Chou Hung; Avelino J. Gonzalez; Ronald F. DeMara

Speech-based conversation agents are computer-based entities that interact with humans via spoken input to help accomplish a task. This paper proposes a method of managing spoken dialog interactions in response to recognizing the human user's goals when accessing an expert system. In particular, a set of goals can co-exist during a single conversation, and each goal may be presented in an asynchronous manner; this stipulation exists to enhance the naturalness of the interaction. Inspired by the Context-Based Reasoning paradigm, the Lifelike dialog system described herein features a goal management system that ultimately controls the behavior of the expert system.
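
Such a goal manager can be sketched as a queue of open goals fed asynchronously by the speech front end, with the dialog layer pursuing one goal at a time while the rest wait. The intent keywords and goal names below are hypothetical, and the keyword matcher stands in for a real language-understanding component; the Lifelike system's context-based machinery is considerably richer.

```python
from collections import deque

INTENTS = {            # toy keyword -> goal mapping (stand-in for real NLU)
    "schedule": "ScheduleMeeting",
    "budget": "AnswerBudgetQuestion",
}

class GoalManager:
    """Keeps every goal heard so far alive; pursues one at a time."""

    def __init__(self):
        self.open_goals = deque()

    def hear(self, utterance: str) -> None:
        """Goals may arrive asynchronously, several per utterance."""
        for keyword, goal in INTENTS.items():
            if keyword in utterance and goal not in self.open_goals:
                self.open_goals.append(goal)

    def next_action(self) -> str:
        if not self.open_goals:
            return "prompt: anything else?"
        return f"pursue: {self.open_goals[0]}"

    def resolve(self, goal: str) -> None:
        self.open_goals.remove(goal)

gm = GoalManager()
gm.hear("can you schedule a call and also check the budget numbers?")
print(gm.next_action())          # -> pursue: ScheduleMeeting
gm.resolve("ScheduleMeeting")
print(gm.next_action())          # -> pursue: AnswerBudgetQuestion
```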


Police Practice and Research | 2003

Tracking dirty proceeds: Exploring data mining technologies as tools to investigate money laundering

R. Cory Watkins; K. Michael Reynolds; Ronald F. DeMara; Michael Georgiopoulos; Avelino J. Gonzalez; Ron Eaglin

Money laundering enforcement operations in the USA and abroad have developed in the past decade from the simple use of informant information to the sophisticated analysis of voluminous, complex financial transaction arrays. Traditional investigative techniques aimed at uncovering patterns consume numerous man-hours. The volume of these records and the complexity of the relationships call for innovative techniques that can aid financial investigators in generating timely, accurate leads. Data mining techniques are well suited to identifying trends and patterns in large data sets, often comprising hundreds or even thousands of complex hidden relationships. This paper explores the use of innovative data mining methodologies that could enhance law enforcement's ability to detect, reduce, and prevent money laundering activities. It provides an overview of the money laundering problem in the USA and overseas, describes the nature and scope of the problem, reviews traditional approaches to financial crime investigation, and discusses various innovative data mining and artificial-intelligence-based solutions that can assist financial investigators.

Collaboration


Dive into Avelino J. Gonzalez's collaborations.

Top Co-Authors

Rainer Knauf
Technische Universität Ilmenau

Ronald F. DeMara
University of Central Florida

Michael Georgiopoulos
University of Central Florida

Harley R. Myler
University of Central Florida

James Hollister
University of Central Florida

Ilka Philippow
University of Central Florida

Victor Chou Hung
University of Central Florida

José Castro
University of Central Florida