Larry D. Pyeatt
Texas Tech University
Publications
Featured research published by Larry D. Pyeatt.
Adaptive Agents and Multi-Agent Systems | 1999
Larry D. Pyeatt; Adele E. Howe
Two-layer control systems are common in robot architectures. The lower level is designed to provide fast, fine-grained control, while the higher level plans longer-term sequences of actions to achieve some goal. Our approach uses reinforcement learning (RL) for the low level and Partially Observable Markov Decision Process (POMDP) planning for the high level. Because both levels can adapt their behavior within the scope of their tasks, the combination is expected to be robust to sensor and actuator failures and so to enhance overall system reliability. We implemented our architecture for use in the Khepera robot simulator. In a set of experiments, we show that good performance can be difficult to achieve with hand-coded low-level control and that the performance of our RL/POMDP system degrades slowly with increasing sensor and actuator failure.
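To make the layered design concrete, here is a minimal Python sketch of the general pattern the abstract describes: a high-level planner selects among discrete behaviors while a low-level tabular Q-learner executes them. The class names, parameters, and belief representation are illustrative assumptions, not the paper's Khepera implementation.

```python
# Minimal sketch (not the paper's actual implementation) of a two-layer
# controller: a high-level planner chooses an abstract behavior, and a
# low-level tabular Q-learner executes it.  All names are illustrative.
import random
from collections import defaultdict

class LowLevelQLearner:
    """Tabular Q-learning for one low-level behavior (e.g. wall following)."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:   # occasional exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        td_target = r + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (td_target - self.q[(s, a)])

class HighLevelPlanner:
    """Stand-in for the POMDP layer: keeps a belief over discrete world states
    and picks the behavior with the best expected value under that belief."""
    def __init__(self, behavior_values):
        # behavior_values: {behavior_name: {world_state: estimated value}}
        self.behavior_values = behavior_values
        self.belief = {}                     # world_state -> probability

    def choose_behavior(self):
        return max(self.behavior_values,
                   key=lambda b: sum(self.belief.get(s, 0.0) * v
                                     for s, v in self.behavior_values[b].items()))
```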
Anesthesia & Analgesia | 2011
Brett L. Moore; Anthony G. Doufas; Larry D. Pyeatt
Reinforcement learning (RL) is an intelligent systems technique with a history of success in difficult robotic control problems. Similar machine learning techniques, such as artificial neural networks and fuzzy logic, have been successfully applied to clinical control problems. Although RL presents a mathematically robust method of achieving optimal control in systems challenged with noise, nonlinearity, time delay, and uncertainty, no application of RL in clinical anesthesia has been reported.
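As a rough illustration of how such a control problem might be cast for RL, the snippet below frames BIS-guided propofol dosing as a discrete state/action/reward problem. The bin edges and dose levels are invented for the example and are not from the paper or any clinical protocol; BIS 50 is used only as the midpoint of the commonly cited 40-60 hypnosis range.

```python
# Illustrative sketch only: one way to frame BIS-guided propofol dosing as an
# RL problem.  Targets, bin edges, and dose levels are assumptions.
BIS_TARGET = 50                                   # midpoint of the usual 40-60 range
DOSE_LEVELS = [0.0, 50.0, 100.0, 150.0, 200.0]    # hypothetical actions (mcg/kg/min)

def encode_state(bis_measurement: float) -> int:
    """Discretize the BIS error into coarse bins to index a Q-table."""
    error = bis_measurement - BIS_TARGET
    bins = [-20, -10, -5, 5, 10, 20]
    return sum(error > b for b in bins)           # state index in 0..6

def reward(bis_measurement: float) -> float:
    """Penalize deviation from the target; larger deviations cost more."""
    return -abs(bis_measurement - BIS_TARGET)
```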
International Conference of the IEEE Engineering in Medicine and Biology Society | 2009
Brett L. Moore; Larry D. Pyeatt; Anthony G. Doufas
Research has demonstrated the efficacy of closed-loop control of anesthesia using bispectral index (BIS) as the controlled variable, and the recent development of model-based, patient-adaptive systems has considerably improved anesthetic control. To further explore the use of model-based control in anesthesia, we investigated the application of fuzzy control in the delivery of patient-specific propofol-induced hypnosis. In simulated intraoperative patients, the fuzzy controller demonstrated clinically acceptable performance, suggesting that further study is warranted.
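The sketch below shows one generic way a fuzzy controller can map a BIS error to a dose adjustment, using a Sugeno-style weighted average of three rules. The membership functions, rule consequents, and units are assumptions for illustration, not the controller evaluated in the paper.

```python
# Minimal Sugeno-style fuzzy-control sketch, not the paper's controller.
# Membership functions, rule outputs, and units are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_dose_adjustment(bis_error: float) -> float:
    """Map BIS error (measured - target) to a propofol rate adjustment."""
    # Antecedent memberships: patient too deep, on target, too light.
    too_deep  = tri(bis_error, -40, -20, 0)
    on_target = tri(bis_error, -10, 0, 10)
    too_light = tri(bis_error, 0, 20, 40)
    # Rule consequents (hypothetical rate changes, mcg/kg/min).
    rules = [(too_deep, -30.0), (on_target, 0.0), (too_light, +30.0)]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0
```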
Lecture Notes in Computer Science | 1999
Larry D. Pyeatt; Adele E. Howe
Most exact algorithms for solving partially observable Markov decision processes (POMDPs) are based on a form of dynamic programming in which a piecewise-linear and convex representation of the value function is updated at every iteration to more accurately approximate the true value function. However, the process is computationally expensive, thus limiting the practical application of POMDPs in planning. To address this limitation, we present a parallel distributed algorithm based on the Restricted Region method proposed by Cassandra, Littman and Zhang [1]. We compare the performance of the parallel algorithm against a serial implementation of Restricted Region.
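To illustrate the kind of work being parallelized, here is a simplified sketch that distributes value-vector pruning across worker processes. It uses plain pointwise-dominance pruning, a much weaker test than the LP-based Restricted Region method in the paper, and the partitioning scheme is our own choice.

```python
# Simplified sketch of distributing value-vector pruning across workers.
# Pointwise dominance is a stand-in for the paper's LP-based pruning.
from multiprocessing import Pool

def dominated(v, others):
    """True if some other vector is >= v in every belief-state component."""
    return any(all(wi >= vi for wi, vi in zip(w, v)) and w != v for w in others)

def prune(vectors):
    return [v for v in vectors if not dominated(v, vectors)]

def parallel_prune(vectors, workers=4):
    # Each worker prunes one chunk; a final serial pass prunes across chunks.
    chunks = [vectors[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partial = pool.map(prune, chunks)
    return prune([v for part in partial for v in part])

if __name__ == "__main__":
    vecs = [(1.0, 0.0), (0.0, 1.0), (0.4, 0.4), (0.5, 0.5)]
    print(parallel_prune(vecs, workers=2))   # (0.4, 0.4) is dominated
```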
Automated Software Engineering | 1996
Adele E. Howe; Larry D. Pyeatt
Evaluation and debugging of AI systems require coherent views of program performance and behavior. We have developed a family of methods, called Dependency Detection, for analyzing execution traces for small patterns. Unfortunately, these methods provide only a local view of program behavior. The approach described here integrates two methods, dependency detection and CHAID-based analysis, to produce an abstract model of system behavior: a transition diagram of merged states. We present the algorithm and demonstrate it on synthetic examples and data from two AI planning and control systems. The models produced by the algorithm summarize sequences and cycles evident in the synthetic examples and highlight some key aspects of behavior in the two systems. We conclude by identifying some of the inadequacies of the current algorithm and suggesting enhancements.
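A rough sketch of the dependency-detection idea: tabulate how often an outcome event follows a precursor event within a short window of an execution trace, then test the resulting 2x2 contingency table for independence. The window size, the G-test, and the toy trace below are our choices, not details from the paper.

```python
# Sketch of detecting a precursor -> outcome dependency in an execution trace.
import math

def contingency(trace, precursor, outcome, window=3):
    """Count (precursor?, outcome-within-window?) occurrences over the trace."""
    table = {(True, True): 0, (True, False): 0, (False, True): 0, (False, False): 0}
    for i, event in enumerate(trace[:-1]):
        ahead = trace[i + 1:i + 1 + window]
        table[(event == precursor, outcome in ahead)] += 1
    return table

def g_statistic(table):
    """Likelihood-ratio (G) test statistic for independence of the two margins."""
    n = sum(table.values())
    g = 0.0
    for (p, o), observed in table.items():
        row = table[(p, True)] + table[(p, False)]
        col = table[(True, o)] + table[(False, o)]
        expected = row * col / n
        if observed and expected:
            g += 2 * observed * math.log(observed / expected)
    return g

trace = ["plan", "move", "bump", "replan", "move", "bump", "replan", "move"]
print(g_statistic(contingency(trace, "bump", "replan", window=2)))
```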
IEEE Region 10 Conference | 2006
Michael Helm; Daniel E. Cooke; Klaus G. Becker; Larry D. Pyeatt; Nelson Rushton
As control system complexity increases, scalability often becomes a limiting constraint. Distributed systems and multi-agent systems are useful approaches for designing complex systems, but communications for coordination are often a limiting factor as systems scale up. Colonies of social insects achieve synergistic results beneficial to the entire colony even though their individual behaviors can be described by simple hedonistic algorithms and their available communications are very limited. Cooperative actions emerge from simple fixed action patterns in these insects. Complex control systems formed from a multitude of simpler agents or subsystems, with constrained and limited communications channels, may also achieve emergent cooperation. Advantages of such systems include reduced communications complexity, reduced complexity in any single element of the system, and improved robustness.
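A toy illustration of this design principle, not a system from the paper: many simple agents coordinate only through a shared "pheromone" field (stigmergy), so no explicit inter-agent messaging is needed. The grid size, agent count, and evaporation rate are arbitrary.

```python
# Toy stigmergy sketch: agents follow and reinforce a shared pheromone field.
import random

GRID = 20
pheromone = [[0.0] * GRID for _ in range(GRID)]

class SimpleAgent:
    def __init__(self):
        self.x, self.y = random.randrange(GRID), random.randrange(GRID)

    def step(self):
        # Fixed action pattern: move to the strongest neighboring cell
        # (ties broken randomly) and reinforce the cell just entered.
        neighbors = [((self.x + dx) % GRID, (self.y + dy) % GRID)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        best = max(pheromone[nx][ny] for nx, ny in neighbors)
        self.x, self.y = random.choice([(nx, ny) for nx, ny in neighbors
                                        if pheromone[nx][ny] == best])
        pheromone[self.x][self.y] += 1.0

agents = [SimpleAgent() for _ in range(50)]
for _ in range(100):
    for agent in agents:
        agent.step()
    # Evaporation keeps old trails from dominating forever.
    pheromone[:] = [[0.95 * v for v in row] for row in pheromone]
```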
International Conference on Tools with Artificial Intelligence | 2011
Eddy C. Borera; Brett L. Moore; Anthony G. Doufas; Larry D. Pyeatt
Recent studies in the controlled administration of intravenous propofol favor a robust automated delivery control system in lieu of a manual controller. In previous work, a Reinforcement Learning (RL) controller was successfully tested in silico and in human volunteers with promising results. In this paper, an Adaptive Neural Network Filter (ANNF) is introduced in an effort to improve RL control of propofol hypnosis. The modified controller was tested in silico on simulated intraoperative patients, and its performance was compared against previously published results. Results from the experiments show that the new controller outperformed the previous controller in the maintenance of propofol anesthesia, with modest improvement in performance during anesthetic induction.
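The snippet below sketches an online adaptive filter for a noisy BIS stream, using a normalized LMS linear filter as a simple stand-in for the paper's Adaptive Neural Network Filter. The tap count, step size, and simulated signal are assumptions made for the example.

```python
# NLMS adaptive filter sketch: a stand-in for the paper's ANNF, not a copy of it.
import numpy as np

class NLMSFilter:
    """Normalized LMS filter used as an adaptive line enhancer (denoiser)."""
    def __init__(self, taps=8, mu=0.5, eps=1e-6):
        self.w = np.zeros(taps)              # filter weights, adapted online
        self.mu, self.eps = mu, eps

    def update(self, window, desired):
        """window: the previous `taps` noisy samples; desired: current sample."""
        estimate = float(self.w @ window)
        error = desired - estimate
        # Normalized step keeps the update stable regardless of signal scale.
        self.w += self.mu * error * window / (window @ window + self.eps)
        return estimate

# Toy usage: predicting the current sample from past samples attenuates the
# uncorrelated noise, leaving a smoother estimate of the underlying trend.
TAPS = 8
rng = np.random.default_rng(0)
clean = 50 + 10 * np.sin(np.linspace(0, 4 * np.pi, 400))
noisy = clean + rng.normal(0, 5, size=clean.shape)
filt = NLMSFilter(taps=TAPS)
smoothed = [filt.update(noisy[i - TAPS:i], noisy[i]) for i in range(TAPS, len(noisy))]
```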
Archive | 2019
Arisoa S. Randrianasolo; Larry D. Pyeatt
This paper summarizes our approach to predicting the outcome of head-to-head games using a similarity metric and a genetic algorithm. The prediction is performed by simply calculating the distances of any two teams that are set to play each other to an ideal team; the team nearest to the ideal team is predicted to win. The approach uses a genetic algorithm as an optimization tool to improve the accuracy of the predictions. The optimization is performed by adjusting the ideal team's statistical data. Soccer, basketball, and tennis are the sport disciplines used to test the approach described in this paper. We compare our predictions to the predictions made by Microsoft's bing.com. Our findings show that the approach appears to do well on team sports (accuracies above 65%) but is less successful at predicting individual sports (accuracies below 65%). In future work, we plan to do more testing on team sports and to study the effects of the different parameters involved in the genetic algorithm's setup. We also plan to compare our approach to ranking- and point-based predictions.
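An illustrative sketch of this prediction scheme: each team is a vector of (normalized) statistics, the team closer in Euclidean distance to an "ideal team" vector is predicted to win, and a mutation-only genetic-algorithm variant tunes the ideal vector on past results. Feature dimensions, GA settings, and the data format are invented for the example.

```python
# Sketch of similarity-based prediction with a GA-tuned "ideal team" vector.
import math
import random

def predict(ideal, team_a, team_b):
    """Predict the winner as the team nearest to the ideal-team vector."""
    return "A" if math.dist(ideal, team_a) < math.dist(ideal, team_b) else "B"

def accuracy(ideal, matches):
    """matches: list of (team_a_stats, team_b_stats, winner 'A' or 'B')."""
    return sum(predict(ideal, a, b) == w for a, b, w in matches) / len(matches)

def evolve_ideal(matches, dims, generations=200, pop_size=30, sigma=0.1):
    """Mutation-only GA: keep the best half, refill with Gaussian mutations."""
    population = [[random.random() for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda ind: accuracy(ind, matches), reverse=True)
        parents = population[:pop_size // 2]
        children = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=lambda ind: accuracy(ind, matches))
```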
Mexican International Conference on Artificial Intelligence | 2010
Eddy C. Borera; Larry D. Pyeatt; Arisoa S. Randrianasolo; Mahdi Naser-Moghadasic
In recent years, there has been significant interest in developing techniques for finding policies for Partially Observable Markov Decision Problems (POMDPs). This paper introduces a new POMDP filtering technique that is based on Incremental Pruning [1] but relies on the geometry of hyperplane arrangements to compute the optimal policy. The new approach applies notions of linear algebra to transform hyperplanes and treat their intersections as witness points [5]. The main idea behind this technique is that a vector that has the highest value at any of the intersection points must be part of the policy. IPBS is an alternative to using linear programming (LP), which requires powerful and expensive libraries and is subject to numerical instability.
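A two-state toy version makes the intersection-point idea concrete: with two hidden states, each value vector (a, b) is the line a(1-p) + bp over belief p in [0, 1], and the vectors worth keeping are those that attain the upper envelope at an endpoint or at a pairwise intersection point. This is a sketch of the idea only, not the paper's IPBS algorithm for general dimensions.

```python
# 2-state sketch of intersection (witness) points for POMDP value vectors.
from itertools import combinations

def value(vec, p):
    a, b = vec
    return a * (1 - p) + b * p

def witness_points(vectors):
    points = {0.0, 1.0}                          # belief-simplex endpoints
    for (a1, b1), (a2, b2) in combinations(vectors, 2):
        denom = (b2 - a2) - (b1 - a1)            # difference in slopes
        if denom:
            p = (a1 - a2) / denom                # belief where the lines cross
            if 0.0 < p < 1.0:
                points.add(p)
    return sorted(points)

def useful_vectors(vectors):
    keep = set()
    for p in witness_points(vectors):
        keep.add(max(vectors, key=lambda v: value(v, p)))
    return keep

vectors = [(0.0, 1.0), (1.0, 0.0), (0.6, 0.6), (0.1, 0.1)]
print(useful_vectors(vectors))   # (0.1, 0.1) never reaches the upper envelope
```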
International Symposium on Neural Networks | 2009
Tae-Hyung Kim; Larry D. Pyeatt; Donald C. Wunsch
This paper shows that the packet delivery rate can be improved by adopting learning-based hybrid routing strategies when a wired network suffers from severe link disruption. The dynamics of the link disruptions complicate the routing problem; successful and stable routing operations of conventional routing approaches are hindered as the level of disruption increases. The target is to develop a robust and efficient routing approach in a single structure. A robust routing approach means a packet should be delivered to its destination even under severe disruptions, while efficient routing should deliver a packet along the shortest path when there is no disruption. These goals should be achieved with maximum utilization of preexisting network components and with minimal human intervention once installed. Therefore, we chose a popular conventional routing scheme, Link State, and add-ons that can learn the changing network environment. Our approach is to add a learning agent and a simple routing scheme to Link State in order to automatically select the better routing scheme at an arbitrary level of disruption. A Markov Decision Process is employed to model this problem. The simulation results show that robustness and packet delivery rate are increased by up to 35% at an acceptable cost in computational and architectural complexity, even when the Link State approach is close to collapse.
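A conceptual sketch of the hybrid-routing idea: a small learning agent observes the recent link-disruption level and chooses which routing scheme to use for the next packets. The scheme names, state encoding, and reward below are stand-ins, not the paper's MDP model.

```python
# Q-learning selector over two routing schemes; all names are illustrative.
import random
from collections import defaultdict

SCHEMES = ["link_state", "flooding"]          # efficient path vs. robust fallback

class SchemeSelector:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)           # (disruption_bin, scheme) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, disruption_bin):
        if random.random() < self.epsilon:    # occasional exploration
            return random.choice(SCHEMES)
        return max(SCHEMES, key=lambda s: self.q[(disruption_bin, s)])

    def learn(self, state, scheme, delivered, hops, next_state):
        # Reward delivery, penalize path length (flooding delivers but costs more).
        reward = (1.0 if delivered else -1.0) - 0.05 * hops
        best_next = max(self.q[(next_state, s)] for s in SCHEMES)
        target = reward + self.gamma * best_next
        self.q[(state, scheme)] += self.alpha * (target - self.q[(state, scheme)])
```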