H. de Garis
Utah State University
Publications
Featured research published by H. de Garis.
congress on evolutionary computation | 2004
H. de Garis; T. Batty
This paper proposes initial design specifications for a PC-based software system that controls the interconnectivity and neural signaling of an artificial brain consisting of large numbers (e.g. 10,000s) of evolved neural net modules.
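To make the inter-module control idea above concrete, here is a minimal Python sketch of a module registry with a per-tick signaling step. The class names, update rule, and wiring are illustrative assumptions, not the specification proposed in the paper.

```python
# Illustrative sketch only: a toy registry of evolved neural net modules and a
# signaling step that routes outputs between them, loosely following the idea
# of a PC controlling inter-module connectivity.  Names are assumptions.
import numpy as np


class Module:
    """One evolved neural net module: a fixed weight matrix applied per tick."""

    def __init__(self, n_in: int, n_out: int, rng: np.random.Generator):
        # In the papers the weights come from hardware evolution; here they
        # are just random placeholders.
        self.weights = rng.normal(scale=0.5, size=(n_out, n_in))
        self.output = np.zeros(n_out)

    def step(self, inputs: np.ndarray) -> None:
        self.output = np.tanh(self.weights @ inputs)


class Brain:
    """Holds many modules plus a connection table (destination -> sources)."""

    def __init__(self):
        self.modules: dict[str, Module] = {}
        self.connections: dict[str, list[str]] = {}

    def add_module(self, name: str, module: Module, sources: list[str]) -> None:
        self.modules[name] = module
        self.connections[name] = sources

    def tick(self, external: dict[str, np.ndarray]) -> None:
        # One global signaling step: each module reads the outputs of its
        # source modules (or external inputs) as produced on the previous tick.
        previous = {n: m.output.copy() for n, m in self.modules.items()}
        for name, module in self.modules.items():
            parts = [external.get(s, previous.get(s)) for s in self.connections[name]]
            module.step(np.concatenate(parts))


rng = np.random.default_rng(0)
brain = Brain()
brain.add_module("vision", Module(4, 8, rng), sources=["camera"])
brain.add_module("motor", Module(8, 2, rng), sources=["vision"])
for _ in range(25):  # e.g. one simulated second at 25 ticks
    brain.tick({"camera": rng.random(4)})
print(brain.modules["motor"].output)
```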
nasa dod conference on evolvable hardware | 2003
Jonathan Dinerstein; Nelson Dinerstein; H. de Garis
A major problem in artificial brain building is the automatic construction and training of multi-module systems of neural networks. For example, consider the biological human brain, which contains millions of neural nets. If an artificial brain is to have similar complexity, it is unrealistic to require that the training data set for each neural net be specified explicitly by a human, or that the interconnection of evolved nets be performed manually. In this paper we present an original technique to solve this problem. A single large-scale task (too complex to be performed by a single neural net) is automatically split into simpler sub-tasks. A multi-module system of neural nets is then trained so that each sub-task is performed by one net. We present the results of an experiment using this novel technique for pattern recognition.
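A hedged sketch of the general decomposition idea, not the authors' algorithm: a target function too hard for one tiny net is split into slices of the input range, one small net is trained per slice, and a router combines them. The slicing rule and the delta-rule training below are assumptions.

```python
# A task too hard for one small net is split into sub-tasks, and a separate
# tiny net is trained on each piece.  Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2.0 * np.pi, size=2000)
y = np.sign(np.sin(3.0 * X))          # target: not learnable by one linear unit

n_subtasks = 6
edges = np.linspace(0.0, 2.0 * np.pi, n_subtasks + 1)


def train_subnet(x: np.ndarray, t: np.ndarray, epochs: int = 200) -> np.ndarray:
    """Train one tiny net (a single tanh unit with bias) by a delta rule."""
    w = np.zeros(2)
    for _ in range(epochs):
        pred = np.tanh(w[0] * x + w[1])
        err = t - pred
        w[0] += 0.05 * np.mean(err * x)
        w[1] += 0.05 * np.mean(err)
    return w


# One sub-net per slice of the input range.
subnets = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (X >= lo) & (X < hi)
    subnets.append(train_subnet(X[mask], y[mask]))


def predict(x: float) -> float:
    """The multi-module system: route each input to the sub-net owning its slice."""
    i = min(int(x / (2.0 * np.pi) * n_subtasks), n_subtasks - 1)
    w = subnets[i]
    return np.sign(np.tanh(w[0] * x + w[1]))


acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print(f"multi-module accuracy: {acc:.2f}")
```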
international symposium on neural networks | 2005
H. de Garis; Wang Ce; T. Batty
This paper presents a methodology for building artificial brains that is much cheaper than the first author's earlier attempt. What initially cost $500,000 now costs about $3000. The much cheaper approach uses a Celoxica(.com) programmable board containing a Xilinx Virtex II FPGA chip with 3 million programmable logic gates to evolve neural networks at electronic speeds. The genetic algorithm (GA) and the neural network model are programmed in a high-level language called Handel-C, whose code is (silicon) compiled into the chip. The elite circuit is downloaded from the board into the memory of a PC. This process occurs up to several 10,000s of times, once for each neural net circuit module having a unique function. Special software on the PC is used to specify the connections between the modules, according to the designs of human BAs (brain architects). The PC is then used to execute the neural signaling of the artificial brain (A-brain) in real time, defined to be 25 Hz per neuron. At this speed, the PC can handle several 10,000s of modules. We would use our A-brain to control the behaviors of a small, four-wheeled, radio-controlled robot with a CCD camera and gripper. The robot's task is to detect and collect unexploded cluster bomblets and deposit them in some central place. The total price of the PC, Celoxica board, and robot is less than $3000, making it affordable to virtually any research group interested in building artificial brains.
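As a rough software analogue of the evolve-download-assemble cycle described above, the following sketch evolves the weights of one small neural net module with a genetic algorithm and keeps the elite individual. On the Celoxica board this search runs in silicon-compiled Handel-C, so the population size, mutation scheme, and fitness below are illustrative assumptions rather than the actual design.

```python
# Hedged software sketch: a simple GA evolves the weights of a small
# fixed-topology neural net toward a target behaviour; the best ("elite")
# individual is retained each generation.
import numpy as np

rng = np.random.default_rng(2)
inputs = rng.uniform(-1.0, 1.0, size=(64, 3))
targets = np.sin(inputs.sum(axis=1))          # desired module behaviour


def behaviour(weights: np.ndarray) -> np.ndarray:
    """A tiny net: 3 inputs -> 4 hidden tanh units -> 1 output."""
    w1 = weights[:12].reshape(4, 3)
    w2 = weights[12:].reshape(1, 4)
    return (w2 @ np.tanh(w1 @ inputs.T)).ravel()


def fitness(weights: np.ndarray) -> float:
    return -np.mean((behaviour(weights) - targets) ** 2)


pop = rng.normal(size=(40, 16))               # 40 candidate weight vectors
for generation in range(300):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    elite = pop[order[0]]                     # best module circuit so far
    parents = pop[order[: len(pop) // 2]]     # truncation selection
    children = parents + rng.normal(scale=0.1, size=parents.shape)
    pop = np.vstack([elite[None, :], parents[:-1], children])

# The elite weight vector is what would be "downloaded" into the PC's memory
# as one module of the artificial brain.
print("elite fitness:", fitness(elite))
```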
congress on evolutionary computation | 2004
H. de Garis; T. Batty
international symposium on neural networks | 2002
H. de Garis
nasa dod conference on evolvable hardware | 2002
Jonathan Dinerstein; H. de Garis
nasa dod conference on evolvable hardware | 2002
H. de Garis; J. Dinerstein; Ravichandra Sriram
This paper introduces conceptual problems that will arise over the next 10-20 years as electronic circuits reach the nanometer scale, i.e. the size of molecules. Such circuits are impossible to make perfectly, due to the inevitable fabrication faults in chips with an Avogadro number of components. Hence, they need to be constructed so that they are robust to faults. They also need to be (as far as possible) reversible circuits, to avoid the heat dissipation problem that arises if bits of information are routinely erased during computation. They also have to be local if switching times reach femtoseconds, which is possible now with quantum optics. This paper discusses some of the conceptual issues involved in trying to build circuits that satisfy all these criteria, i.e. that are robust, reversible, and local. We propose an evolutionary engineering based model that meets all these criteria, and provide some experimental results to justify it.
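A toy sketch of how "robust and reversible by construction" might be explored in software: candidate circuits use only Toffoli gates (hence every candidate is reversible), and the fitness averages accuracy over the intact circuit and every single-gate-faulted variant. The target function, encoding, and GA settings are assumptions, not the paper's model.

```python
# Toy evolutionary search over reversible (Toffoli-only) circuits, with a
# fitness that rewards accuracy under single-gate faults.
import itertools
import random

N_WIRES = 4              # wires 0-2 carry data, wire 3 is an ancilla set to 1
random.seed(3)


def apply(circuit, bits):
    """Run a list of Toffoli gates (c1, c2, target) over a bit list."""
    bits = list(bits)
    for c1, c2, t in circuit:
        if bits[c1] and bits[c2]:
            bits[t] ^= 1
    return bits


def accuracy(circuit):
    """Fraction of inputs for which wire 2 ends as b2 XOR (b0 AND b1)."""
    cases = list(itertools.product([0, 1], repeat=3))
    hits = 0
    for b0, b1, b2 in cases:
        out = apply(circuit, [b0, b1, b2, 1])
        hits += out[2] == (b2 ^ (b0 & b1))
    return hits / len(cases)


def fitness(circuit):
    # Robustness: average accuracy over the intact circuit and every
    # single-gate-deleted variant of it.
    variants = [circuit] + [circuit[:i] + circuit[i + 1:] for i in range(len(circuit))]
    return sum(accuracy(v) for v in variants) / len(variants)


def random_gate():
    c1, c2, t = random.sample(range(N_WIRES), 3)
    return (c1, c2, t)


def mutate(circuit):
    circuit = list(circuit)
    if circuit and random.random() < 0.3:
        circuit[random.randrange(len(circuit))] = random_gate()
    elif len(circuit) < 8:
        circuit.append(random_gate())
    return circuit


population = [[random_gate() for _ in range(3)] for _ in range(30)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print("best circuit:", best, "fitness:", round(fitness(best), 3))
```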
nasa dod conference on evolvable hardware | 2004
H. de Garis; T. Batty
For nearly a decade, the author has been planning to build artificial brains by evolving neural net circuits at electronic speeds in dedicated evolvable hardware and assembling tens of thousands of such individually evolved circuits into humanly defined artificial brain architectures. However, this approach will only work if the individual neural net modules have high evolvability, i.e. the capacity to evolve desired functionalities, both qualitative and quantitative. This paper introduces a new neural net model with superior evolvability compared to the model implemented in the first-generation brain building machine, the CBM. This model may be implemented in a second-generation brain building machine, BM2.
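As a rough illustration of what "evolvability" means operationally (not the CBM or BM2 neural net models), the sketch below evolves two toy neuron models toward the same target mapping for a fixed search budget and compares the fitness each reaches; the models and the (1+1) hill-climbing search are assumptions.

```python
# Comparing the "evolvability" of two toy neuron models: evolve each toward
# the same target for a fixed budget and report the best fitness reached.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, size=(128, 4))
target = np.tanh(X @ np.array([0.9, -0.6, 0.3, 0.1]))


def threshold_model(w, X):
    return np.sign(X @ w)                      # step-function neuron


def smooth_model(w, X):
    return np.tanh(X @ w)                      # graded-output neuron


def evolvability(model, budget=500):
    """Best fitness reached by a (1+1) evolution strategy within the budget."""
    w = rng.normal(size=4)
    best = -np.mean((model(w, X) - target) ** 2)
    for _ in range(budget):
        cand = w + rng.normal(scale=0.05, size=4)
        fit = -np.mean((model(cand, X) - target) ** 2)
        if fit >= best:
            w, best = cand, fit
    return best


print("threshold model:", round(evolvability(threshold_model), 4))
print("smooth model:   ", round(evolvability(smooth_model), 4))
```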
congress on evolutionary computation | 2004
S.H. Aleti; H. de Garis
This paper introduces TiPo, a new neural net model with superior evolvability. TiPo neural nets can dynamically change their structure with each clock tick, which gives them enhanced capability for computing highly dynamic functions such as curve following, a task valuable for applications such as robot motion control.
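The abstract gives no implementation details for TiPo, so the following is only a minimal illustration of the general idea of a net whose structure changes on every clock tick, scored on how well its output follows a target curve; the per-tick weight schedule and the search method are assumptions.

```python
# A single recurrent unit whose weights differ on every clock tick, tuned by
# (1+1) hill climbing to follow a target curve.  Illustrative only.
import numpy as np

rng = np.random.default_rng(5)
T = 50
target_curve = np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # curve to follow


def follow(schedule):
    """Run the unit for T ticks; schedule[t] gives the structure at tick t."""
    state, outputs = 0.0, []
    for t in range(T):
        w_self, w_bias = schedule[t]
        state = np.tanh(w_self * state + w_bias)
        outputs.append(state)
    return np.array(outputs)


def fitness(schedule):
    return -np.mean((follow(schedule) - target_curve) ** 2)


# Hill climbing over the whole per-tick schedule of weights.
schedule = rng.normal(scale=0.3, size=(T, 2))
best = fitness(schedule)
for _ in range(3000):
    cand = schedule + rng.normal(scale=0.05, size=schedule.shape)
    f = fitness(cand)
    if f >= best:
        schedule, best = cand, f

print("curve-following error:", round(-best, 4))
```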
international symposium on neural networks | 2003
H. de Garis; Ravichandra Sriram; Zijun Zhang