Payman Arabshahi
California Institute of Technology
Publications
Featured research published by Payman Arabshahi.
Global Communications Conference | 2001
I. Kassabalidis; Mohamed A. El-Sharkawi; Robert J. Marks; Payman Arabshahi; A.A. Gray
Swarm intelligence, as demonstrated by natural biological swarms, exhibits numerous powerful features that are desirable in many engineering systems, such as communication networks. In addition, new paradigms for designing autonomous and scalable systems may result from analytically understanding and extending the design principles and operations inherent in intelligent biological swarms. A key element of future design paradigms will be emergent intelligence - simple local interactions of autonomous swarm members, with simple primitives, giving rise to complex and intelligent global behavior. Communication network management is becoming increasingly difficult due to growing network size, rapidly changing topology, and complexity. A new class of algorithms, inspired by swarm intelligence, is currently being developed that can potentially solve numerous problems of such networks. These algorithms rely on the interactions of a multitude of simultaneously operating agents. A survey of such algorithms and their performance is presented here.
International Symposium on Neural Networks | 2002
I. Kassabalidis; Mohamed A. El-Sharkawi; Robert J. Marks; Payman Arabshahi; A.A. Gray
Swarm intelligence forms the core of a new class of algorithms inspired by the social behavior of insects that live in swarms. Its attractive features include adaptation, robustness, and a distributed, decentralized nature, rendering swarm-based algorithms well suited for routing in wireless or satellite networks, where it is difficult to implement centralized network control. We propose one such routing algorithm, dubbed adaptive swarm-based distributed routing (adaptive-SDR), which is scalable, robust, and able to handle large amounts of network traffic while minimizing delay and packet loss.
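The core mechanism behind swarm-based routing of this kind can be sketched as a per-node pheromone table: forwarding probability is proportional to pheromone, and successful deliveries reinforce the chosen link while all entries slowly evaporate. The class, names, and constants below are illustrative assumptions, not the adaptive-SDR algorithm itself.

```python
import random

class AntRouter:
    """Minimal sketch of pheromone-table routing at a single node."""

    def __init__(self, neighbors, evaporation=0.1):
        self.neighbors = list(neighbors)
        self.evaporation = evaporation
        self.table = {}  # destination -> {neighbor: pheromone}

    def _row(self, dest):
        if dest not in self.table:
            # start with uniform pheromone on every outgoing link
            self.table[dest] = {n: 1.0 for n in self.neighbors}
        return self.table[dest]

    def next_hop(self, dest, rng=random):
        # sample a neighbor with probability proportional to pheromone
        row = self._row(dest)
        r = rng.uniform(0.0, sum(row.values()))
        acc = 0.0
        for n, p in row.items():
            acc += p
            if r <= acc:
                return n
        return self.neighbors[-1]

    def reinforce(self, dest, neighbor, reward=1.0):
        # evaporate all entries, then deposit pheromone on the used link
        row = self._row(dest)
        for n in row:
            row[n] *= (1.0 - self.evaporation)
        row[neighbor] += reward
```

Repeated reinforcement of a link biases future stochastic forwarding toward it, while evaporation lets the table adapt when topology or traffic changes.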
International Symposium on Neural Networks | 1992
J.J. Choi; Payman Arabshahi; Robert J. Marks; Thomas P. Caudell
The general structure of a neuro-fuzzy controller applicable to many diverse neural systems is presented. As an example, fuzzy control of the backpropagation training technique is considered for multilayer perceptrons, where significant speedup in training was observed. Fuzzy control of the number of classes in an ART 1 classifier is also considered. This can be advantageous in situations where there is prior knowledge of the number of classes into which one wishes to classify the input data.
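The flavor of fuzzy control over a training parameter can be sketched with a small rule base that adjusts the backpropagation learning rate from the recent change in training error. The crisp rules and constants below are illustrative stand-ins, not the paper's actual controller.

```python
def fuzzy_lr_update(lr, prev_error, curr_error,
                    grow=1.1, shrink=0.5, tol=1e-4):
    """Crisp approximation of rules like:
       IF error is decreasing THEN increase learning rate slightly;
       IF error is increasing THEN decrease learning rate sharply."""
    delta = curr_error - prev_error
    if delta < -tol:      # error decreasing: speed up training
        return lr * grow
    if delta > tol:       # error increasing: back off
        return lr * shrink
    return lr             # roughly flat: leave unchanged
```

Calling this once per epoch with the previous and current training errors gives the kind of adaptive step-size control the abstract describes, without hand-tuning a fixed learning rate.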
IEEE Transactions on Neural Networks | 2002
Ryan Mukai; Victor A. Vilnrotter; Payman Arabshahi; Vahraz Jamnejad
The use of radial basis function (RBF) networks and least squares algorithms for acquisition and fine tracking of NASA's 70-m Deep Space Network antennas is described and evaluated. We demonstrate that such a network, trained using the computationally efficient orthogonal least squares algorithm and working in conjunction with an array feed compensation system, can point a 70-m deep space antenna with root mean square (rms) errors of 0.1-0.5 millidegrees (mdeg) under a wide range of signal-to-noise ratios and antenna elevations. This pointing accuracy is significantly better than the 0.8 mdeg benchmark for communications at Ka-band frequencies (32 GHz). Continuous adaptation strategies for the RBF network were also implemented to compensate for antenna aging, thermal gradients, and other factors leading to time-varying changes in the antenna structure, resulting in dramatic improvements in system performance. The systems described here are currently in testing phases at NASA's Goldstone Deep Space Network (DSN) and were evaluated using Ka-band telemetry from the Cassini spacecraft.
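The function-approximation core of such a system can be sketched as an RBF network with Gaussian basis functions and output weights fitted by linear least squares. (The paper's orthogonal least squares algorithm also selects the centers; in this sketch the centers and width are simply given, and all names are illustrative.)

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian activations: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_fit(X, y, centers, width):
    """Fit output weights by ordinary linear least squares."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w
```

Because the basis activations are fixed once the centers are chosen, training reduces to a linear problem, which is what makes least-squares methods attractive for this application.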
IEEE Aerospace Conference | 2005
Kourosh Rahnami; Payman Arabshahi; Andrew Gray
We discuss the implementation of a neural network (NN) based model reference control (MRC) algorithm to improve the transient and steady state behavior of transmission control protocol (TCP) flows and active queue management (AQM) routers in a network setting. Based on a fluid theoretical model of a network, two neural networks are trained to control the traffic flow of a bottleneck router. Results show dramatic improvement of the transient and the steady state behavior of the queuing window length. The results are compared to the traditional RED algorithm and the P and PI controllers of classical control theory.
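The RED algorithm used as the comparison baseline has a simple core: the packet drop probability ramps linearly with the average queue length between two thresholds. A minimal sketch, with illustrative threshold values:

```python
def red_drop_prob(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED drop function: zero below min_th, a linear ramp up to
    max_p at max_th, and drop everything once avg_q reaches max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

This fixed, piecewise-linear mapping is what makes RED's transient behavior hard to tune, and it is the kind of static policy that adaptive controllers (classical or neural) aim to improve on.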
IEEE Aerospace Conference | 2005
Clayton Okino; Clement Lee; Andrew Gray; Payman Arabshahi
In this paper, we present an architecture for a reconfigurable protocol chip for space-based applications. We present a model for examining various stimuli for reconfiguration in space, and identify some approaches to operating on the stimuli. In particular, we examine fault tolerant schemes and reconfiguration based on detection of a link layer framing format.
Joint IFSA World Congress and NAFIPS International Conference | 2001
Kourosh Rahnamai; John Maleyeff; Payman Arabshahi; Tsun-Yee Yan
The authors show that the research on N-version high-reliability software structures can be extended to neural network architectures. In addition, we explore the possibility of applying this structure to a spacecraft tracking problem. One such system is the Automated Spacecraft Monitoring System (ASMS), a beacon-monitoring or detection system. Four neural networks, each trained for various operating environments, are implemented in an N-version structure. The results of the networks are combined to form a composite outcome. The combined outcome is used as part of a hypothesis testing procedure to distinguish between the presence and absence of the beacon signal. The results show that any of a number of composite outcomes outperforms the use of any single neural network. Further, the simple average of network results provides the composite outcome with the best performance.
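The winning composite rule reported above, averaging the soft outputs of the independently trained detectors before thresholding, can be sketched in a few lines. The function name and threshold are illustrative assumptions, not the ASMS implementation.

```python
def composite_detect(scores, threshold=0.5):
    """Average N detector scores (each in [0, 1]) and threshold the mean.

    scores: soft outputs from independently trained detectors on the
    same input, one per version in the N-version structure."""
    mean = sum(scores) / len(scores)
    return mean >= threshold, mean
```

Averaging lets a majority of confident detectors outvote one that was trained for a different operating environment, which is the intuition behind the composite outperforming any single network.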
International Symposium on Neural Networks | 2000
Ryan Mukai; Payman Arabshahi; Victor A. Vilnrotter
The use of radial basis function networks for fine pointing NASA's 70-meter Deep Space Network antennas is described and evaluated. We demonstrate that such a network, working in conjunction with the array feed compensation system, and trained using the computationally efficient orthogonal least-squares algorithm, can point a 70-meter deep space antenna with RMS errors of less than 0.3 millidegree under good signal-to-noise ratio conditions, achieving significantly higher accuracies than the 0.8 millidegree benchmark for communications at Ka-band frequencies of 32 GHz.
Global Communications Conference | 2001
Ryan Mukai; Payman Arabshahi; Tsun-Yee Yan
A method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading optical free space channel, is presented. Adaptive setting of the thresholds, in response to varying channel conditions, results in orders of magnitude improvement in probability of error, as compared to use of a fixed threshold. The adaptive threshold system itself is based on a robust channel identification system that uses average signal strengths to estimate the degree of fade and total attenuation in the channel, and a radial basis function network for estimating pulse spreads, all with excellent accuracy.
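The threshold-optimization idea can be sketched with a standard two-Gaussian detection model: slot statistics are noise-only or signal-plus-noise, the signal mean tracks the estimated fade, and the threshold is placed to minimize total error probability. For equal variances and equal priors the minimizing threshold is the midpoint of the means. This textbook model is a hedged stand-in for the paper's channel-identification-driven system; all parameters are illustrative.

```python
import math

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def optimal_threshold(mu0, mu1, sigma):
    # equal-variance, equal-prior case: midpoint of the two means
    return 0.5 * (mu0 + mu1)

def error_prob(t, mu0, mu1, sigma):
    # total error = false alarm + missed detection, equal priors
    return 0.5 * (Q((t - mu0) / sigma) + Q((mu1 - t) / sigma))
```

As the fade deepens, mu1 drops toward mu0, and recomputing the threshold from the current channel estimate is what yields the large error-probability gains over a fixed threshold.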
IEEE Transactions on Neural Networks | 1996
Payman Arabshahi
There is no such thing as too much of a good thing, at least when it comes to well written and comprehensive graduate level texts in any technical field that has been growing as fast as neural computing. In recent years there has been a proliferation of neural network related textbooks that attempt to give a broad, mathematically rigorous introduction to the theory and applications of the field for an audience of either professional engineers, graduate students, or both. Among the most notable and recent perhaps are the excellent texts by Haykin [1], Zurada [2], Kung [3], and Hertz et al. [4]. Add to this dozens of volumes of paper collections, and expositions from different perspectives (AI, physics, cognitive science, VLSI (very large scale integration), parallel processing, etc.), as well as other textbooks, and you rapidly converge to the global minimum: so many books, so little time! If you are interested in learning about the underlying theory of neural computation, however, then perhaps alongside your perusal of the above texts you should also look at yet another one. Fundamentals of Artificial Neural Networks emphasizes fundamental theoretical aspects of the computational capabilities and learning abilities of artificial neural networks (ANNs). The book is intended for either first year graduate students in electrical or computer science and engineering, or practicing engineers and researchers in the field. It has evolved from a series of lecture notes for two courses on ANNs taught by the author over the past six years at Wayne State University. Apart from the usual prerequisites of mathematical maturity (probability theory, differential equations, linear algebra, multivariate calculus), the reader is assumed to be familiar with system theory, the concept of a "state," as well as Boolean algebra and switching theory basics. The author himself is a well-established researcher in the field, with dozens of papers to his credit.
The book is well organized and presented, and a delight to read. Exercises at the end of each chapter (some 200) complement the text, and range in difficulty from the very basic to the mathematically or numerically challenging. About 700 relevant references are also provided at the end of the book. The book is centered on the idea of viewing ANNs as nonlinear adaptive parallel computational models of varying degrees of complexity. Accordingly, the author starts out in Chapter 1 with an exposition of the computational capabilities of the simplest models, namely linear and polynomial threshold gates. This is built upon basic concepts of switching theory. The discussion is then extended to the capacity and generalization ability of threshold gates via a proof of the function counting theorem. The treatment is coherent and mathematically rigorous.

Chapter 2 picks up from the discussion in Chapter 1 by considering networks of linear threshold gates (LTGs) as well as neuronal units with nonlinear activation functions, and investigates their mapping capabilities. Important theoretical results on bounds on the number of functions realizable by a feedforward network of LTGs, as