Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Robert E. Kalaba is active.

Publication


Featured research published by Robert E. Kalaba.


Physics Today | 1966

Quasilinearization and nonlinear boundary-value problems

Richard Bellman; Robert E. Kalaba

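Quasilinearization replaces a nonlinear boundary-value problem by a sequence of linear ones, linearizing the right-hand side about the current iterate (Newton's method in function space). Below is a minimal sketch on an illustrative test problem, y'' = (3/2)y^2 with y(0) = 4, y(1) = 1, whose exact solution is 4/(1 + x)^2; the test problem and the finite-difference discretization are choices made here for illustration, not taken from the book.

```python
import numpy as np

# Quasilinearization for y'' = f(y) = 1.5*y**2 with y(0) = 4, y(1) = 1.
# Each iterate solves the *linear* boundary-value problem
#   y_{k+1}'' = f(y_k) + f'(y_k) * (y_{k+1} - y_k),
# discretized by central differences on a uniform grid.
# (Illustrative test problem; the exact solution is 4/(1 + x)**2.)

n = 101                      # grid points, including the boundaries
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

f  = lambda y: 1.5 * y**2    # right-hand side
fp = lambda y: 3.0 * y       # its derivative with respect to y

y = 4.0 + (1.0 - 4.0) * x    # initial guess: straight line between the boundary values

for it in range(20):
    # Linear tridiagonal system A y_new = r for the interior points 1..n-2
    m = n - 2
    A = np.zeros((m, m))
    r = np.zeros(m)
    for i in range(m):
        j = i + 1                        # index into the full grid
        A[i, i] = -2.0 / h**2 - fp(y[j])
        if i > 0:
            A[i, i - 1] = 1.0 / h**2
        if i < m - 1:
            A[i, i + 1] = 1.0 / h**2
        r[i] = f(y[j]) - fp(y[j]) * y[j]
    # Known boundary values move to the right-hand side
    r[0]  -= 4.0 / h**2
    r[-1] -= 1.0 / h**2

    y_new = y.copy()
    y_new[1:-1] = np.linalg.solve(A, r)
    if np.max(np.abs(y_new - y)) < 1e-10:
        y = y_new
        break
    y = y_new

print("max error vs exact solution:", np.max(np.abs(y - 4.0 / (1.0 + x)**2)))
```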


Mathematics of Computation | 1968

Numerical inversion of the Laplace transform

Richard Bellman; Robert E. Kalaba; Bernard Shiffman

Abstract: The usual analytic methods of inverting the Laplace transformation are mostly impractical for numerical work. A method suited to numerical computation of the inverse Laplace transform is discussed, and numerical examples are given to illustrate it.
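One way to carry out the inversion numerically, in the spirit of the quadrature approach associated with Bellman and Kalaba (the exact scheme of the paper may differ), is to substitute x = exp(-t), so that F(s) = integral from 0 to 1 of x^(s-1) f(-ln x) dx, evaluate F at s = 1, ..., N, and apply N-point Gauss-Legendre quadrature on [0, 1]; inversion then reduces to a small, badly conditioned linear system for f at the quadrature abscissas.

```python
import numpy as np

# Numerical Laplace inversion via the substitution x = exp(-t):
#   F(s) = integral_0^1 x**(s-1) * f(-ln x) dx.
# Sampling s = 1..N and applying N-point Gauss-Legendre quadrature on [0, 1]
# gives a linear system A g = F with A[i, k] = w_k * x_k**i and g_k = f(-ln x_k).
# (A sketch in the spirit of the quadrature approach; the system is badly
#  conditioned, so keep N small and the transform smooth.)

def laplace_invert(F, N=8):
    nodes, weights = np.polynomial.legendre.leggauss(N)
    xk = 0.5 * (nodes + 1.0)          # shift from [-1, 1] to [0, 1]
    wk = 0.5 * weights
    A = np.array([[wk[k] * xk[k]**i for k in range(N)] for i in range(N)])
    rhs = np.array([F(i + 1.0) for i in range(N)])
    g = np.linalg.solve(A, rhs)       # values of f at t_k = -ln x_k
    tk = -np.log(xk)
    return tk, g

# Example: F(s) = 1/(s+1) has inverse f(t) = exp(-t).
t, fvals = laplace_invert(lambda s: 1.0 / (s + 1.0), N=8)
print(np.max(np.abs(fvals - np.exp(-t))))   # small for this smooth transform
```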


Journal of Optimization Theory and Applications | 1979

A comparison of two methods for determining the weights of belonging to fuzzy sets

A. T. W. Chu; Robert E. Kalaba; K. Spingarn

Saaty has solved a basic problem in fuzzy set theory using an eigenvector method to determine the weights of belonging of each member to the set. In this paper, a weighted least-square method is utilized to obtain the weights. This method has the advantage that it involves the solution of a set of simultaneous linear algebraic equations and is thus conceptually easier to understand than the eigenvector method. Examples are given for estimating the relative wealth of nations and the relative amount of foreign trade of nations. Numerical solutions are obtained using both the eigenvector method and the weighted least-square method, and the results are compared.
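A compact way to see the two methods side by side, under the standard setup in which A is a reciprocal pairwise-comparison matrix with a_ij approximating w_i / w_j: Saaty's method takes the normalized principal eigenvector of A, while the least-squares alternative minimizes the sum of squared residuals (a_ij * w_j - w_i)^2 subject to the weights summing to one, which reduces to a single linear solve. The sketch below uses this common statement of the criterion; the paper's weighting details may differ.

```python
import numpy as np

# Two ways to extract weights w from a reciprocal pairwise-comparison
# matrix A (a_ij is the judged ratio w_i / w_j).

def eigenvector_weights(A):
    """Saaty's method: normalized principal eigenvector of A."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

def least_squares_weights(A):
    """Minimize sum_{i,j} (a_ij * w_j - w_i)^2  subject to  sum_i w_i = 1.

    Stationarity of the Lagrangian gives the bordered linear system
        [2 M^T M  1] [w]   [0]
        [  1^T    0] [lam] = [1],
    where each row of M encodes one residual a_ij * w_j - w_i.
    """
    n = A.shape[0]
    rows = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.zeros(n)
            r[j] += A[i, j]
            r[i] -= 1.0
            rows.append(r)
    M = np.array(rows)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2.0 * M.T @ M
    K[:n, n] = 1.0
    K[n, :n] = 1.0
    rhs = np.zeros(n + 1)
    rhs[n] = 1.0
    return np.linalg.solve(K, rhs)[:n]

# Consistent example: true weights (0.5, 0.3, 0.2); both methods recover them.
w_true = np.array([0.5, 0.3, 0.2])
A = np.outer(w_true, 1.0 / w_true)
print(eigenvector_weights(A), least_squares_weights(A))
```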


Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences | 1992

A New Perspective on Constrained Motion

Firdaus E. Udwadia; Robert E. Kalaba

The explicit general equations of motion for constrained discrete dynamical systems are obtained. These new equations lead to a simple and new fundamental view of Lagrangian mechanics.
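The explicit equation obtained here is commonly written as q_ddot = a + M^(-1/2) (A M^(-1/2))^+ (b - A a), where a = M^(-1) Q is the unconstrained acceleration and the constraints, differentiated to acceleration level, read A(q, q_dot, t) q_ddot = b. A small numerical sketch follows; the pendulum check and the particular numbers are illustrative choices, not taken from the paper.

```python
import numpy as np

# Udwadia-Kalaba form of the constrained equations of motion:
#   q_ddot = a + M^(-1/2) (A M^(-1/2))^+ (b - A a),
# where a = M^{-1} Q is the unconstrained acceleration and the constraints
# (differentiated to acceleration level) read A(q, q_dot, t) q_ddot = b.

def constrained_acceleration(M, Q, A, b):
    a = np.linalg.solve(M, Q)                    # unconstrained acceleration
    # Inverse matrix square root via the eigendecomposition of symmetric M
    vals, vecs = np.linalg.eigh(M)
    M_inv_sqrt = vecs @ np.diag(vals**-0.5) @ vecs.T
    correction = M_inv_sqrt @ np.linalg.pinv(A @ M_inv_sqrt) @ (b - A @ a)
    return a + correction

# Illustrative check: a unit-mass particle under gravity, constrained to the
# circle x^2 + y^2 = L^2 (a pendulum).  Differentiating the constraint twice:
#   x*x_ddot + y*y_ddot = -(x_dot**2 + y_dot**2).
g, L = 9.81, 1.0
q     = np.array([L * np.sin(0.3), -L * np.cos(0.3)])
q_dot = np.array([0.5 * np.cos(0.3), 0.5 * np.sin(0.3)])   # tangential velocity
M = np.eye(2)
Q = np.array([0.0, -g])
A = q.reshape(1, 2)
b = np.array([-(q_dot @ q_dot)])

q_ddot = constrained_acceleration(M, Q, A, b)
print(q_ddot, "constraint residual:", A @ q_ddot - b)
```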


Journal of Mathematical Physics | 1960

Invariant Imbedding and Mathematical Physics. I. Particle Processes

Richard Bellman; Robert E. Kalaba; G. M. Wing

With the use of invariance principles in a systematic fashion, we shall derive not only new analytic formulations of the classical particle processes, those of transport theory, radiative transfer, random walk, multiple scattering, and diffusion theory, but, in addition, new computational algorithms which seem well fitted to the capabilities of digital computers. Whereas the usual methods reduce problems to the solution of systems of linear equations, we shall try to reduce problems to the iteration of nonlinear transformations. Although we have analogous formulations of wave processes, we shall reserve for a second paper in this series a detailed and extensive treatment of this part of mathematical physics.
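A minimal illustration of the imbedding idea on the linear two-flow form that such particle problems often take: rather than solving u' = a u + b v, v' = c u + d v with u(0) = 0 as a boundary-value problem, imbed it in a family of intervals of growing length and follow the reflection-like ratio r = u/v, which satisfies the initial-value Riccati equation r' = b + (a - d) r - c r^2 with r(0) = 0. The constant coefficients below are illustrative, not taken from the paper.

```python
import numpy as np

# Invariant imbedding turns the linear two-point problem
#   u' = a*u + b*v,  v' = c*u + d*v,  with u(0) = 0,
# into an initial-value problem for the ratio r(x) = u(x)/v(x):
#   r' = b + (a - d)*r - c*r**2,   r(0) = 0.
# (Illustrative constant coefficients; a crude fixed-step RK4 integrator.)

a, b, c, d = -1.0, 0.4, 0.4, -1.0     # a simple scattering-like choice
X, n = 2.0, 2000
h = X / n

def rk4(f, y0):
    y = np.array(y0, dtype=float)
    for _ in range(n):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        y = y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return y

# Riccati equation for r = u/v (a single nonlinear ODE, integrated from x = 0)
r_end = rk4(lambda r: np.array([b + (a - d)*r[0] - c*r[0]**2]), [0.0])[0]

# Cross-check against the linear system itself, started with u(0)=0, v(0)=1
u_end, v_end = rk4(lambda y: np.array([a*y[0] + b*y[1], c*y[0] + d*y[1]]),
                   [0.0, 1.0])
print(r_end, u_end / v_end)            # the two values agree
```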


Computers & Mathematics With Applications | 1989

Time-varying linear regression via flexible least squares☆

Robert E. Kalaba; Leigh Tesfatsion

Abstract: Suppose noisy observations obtained on a process are assumed to have been generated by a linear regression model with coefficients which evolve only slowly over time, if at all. Do the estimated time-paths for the coefficients display any systematic time-variation, or is time-constancy a reasonably satisfactory approximation? A "flexible least squares" (FLS) solution is proposed for this problem, consisting of all coefficient sequence estimates which yield vector-minimal sums of squared residual measurement and dynamic errors conditional on the given observations. A procedure with FORTRAN implementation is developed for the exact sequential updating of the FLS estimates as the process length increases and new observations are obtained. Simulation experiments demonstrating the ability of FLS to track linear, quadratic, sinusoidal, and regime shift motions in the true coefficients, despite noisy observations, are reported. An empirical money demand application is also summarized.
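For a fixed trade-off weight mu, one point on the FLS frontier can be computed directly by stacking the measurement residuals y_t - x_t' beta_t and the scaled dynamic residuals sqrt(mu) * (beta_{t+1} - beta_t) into a single ordinary least-squares problem; sweeping mu traces the frontier. The sketch below uses this stacked formulation rather than the paper's exact sequential updating scheme, and the simulated data are purely illustrative.

```python
import numpy as np

# Flexible least squares for y_t = x_t' beta_t + e_t with slowly moving beta_t:
# for a fixed penalty mu, minimize
#   sum_t (y_t - x_t' beta_t)^2 + mu * sum_t ||beta_{t+1} - beta_t||^2
# by stacking everything into one ordinary least-squares problem.
# (The paper develops an exact sequential updating scheme instead; this stacked
#  form is simply the easiest way to compute one point on the FLS frontier.)

def fls(X, y, mu):
    T, k = X.shape
    rows, targets = [], []
    for t in range(T):                      # measurement equations
        r = np.zeros(T * k)
        r[t*k:(t+1)*k] = X[t]
        rows.append(r); targets.append(y[t])
    s = np.sqrt(mu)
    for t in range(T - 1):                  # dynamic (smoothness) equations
        for j in range(k):
            r = np.zeros(T * k)
            r[t*k + j] = -s
            r[(t+1)*k + j] = s
            rows.append(r); targets.append(0.0)
    beta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return beta.reshape(T, k)               # estimated coefficient path

# Illustrative simulation: a single coefficient drifting linearly over time.
rng = np.random.default_rng(0)
T = 100
beta_true = np.linspace(1.0, 3.0, T).reshape(T, 1)
X = rng.normal(size=(T, 1))
y = (X * beta_true).sum(axis=1) + 0.1 * rng.normal(size=T)
beta_hat = fls(X, y, mu=10.0)
print(np.abs(beta_hat - beta_true).mean())  # mean tracking error; FLS follows the drift
```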


Annals of Internal Medicine | 1972

Reduction of Digitalis Toxicity by Computer-Assisted Glycoside Dosage Regimens

Roger W. Jelliffe; June Buell; Robert E. Kalaba

Abstract: Computer-assisted dosage regimens of digitalis leaf, digitoxin, digoxin, and deslanoside (Cedilanid-D®) have reduced the frequency of adverse reactions to such glycosides from 35% to 12% (…)


Archive | 1982

Control, identification, and input optimization

Robert E. Kalaba; Karl Spingarn

Contents:
I. Introduction
1. Introduction
  1.1. Optimal Control
  1.2. System Identification
  1.3. Optimal Inputs
  1.4. Computational Preliminaries
  Exercises
II. Optimal Control and Methods for Numerical Solutions
2. Optimal Control
  2.1. Simplest Problem in the Calculus of Variations (2.1.1. Euler-Lagrange Equations; 2.1.2. Dynamic Programming; 2.1.3. Hamilton-Jacobi Equations)
  2.2. Several Unknown Functions
  2.3. Isoperimetric Problems
  2.4. Differential Equation Auxiliary Conditions
  2.5. Pontryagin's Maximum Principle
  2.6. Equilibrium of a Perfectly Flexible Inhomogeneous Suspended Cable
  2.7. New Approaches to Optimal Control and Filtering
  2.8. Summary of Commonly Used Equations
  Exercises
3. Numerical Solutions for Linear Two-Point Boundary-Value Problems
  3.1. Numerical Solution Methods (3.1.1. Matrix Riccati Equation; 3.1.2. Method of Complementary Functions; 3.1.3. Invariant Imbedding; 3.1.4. Analytical Solution)
  3.2. An Optimal Control Problem for a First-Order System (3.2.1. The Euler-Lagrange Equations; 3.2.2. Pontryagin's Maximum Principle; 3.2.3. Dynamic Programming; 3.2.4. Kalaba's Initial-Value Method; 3.2.5. Analytical Solution; 3.2.6. Numerical Results)
  3.3. An Optimal Control Problem for a Second-Order System (3.3.1. Numerical Methods; 3.3.2. Analytical Solution; 3.3.3. Numerical Results and Discussion)
  Exercises
4. Numerical Solutions for Nonlinear Two-Point Boundary-Value Problems
  4.1. Numerical Solution Methods (4.1.1. Quasilinearization; 4.1.2. Newton-Raphson Method)
  4.2. Examples of Problems Yielding Nonlinear Two-Point Boundary-Value Problems (4.2.1. A First-Order Nonlinear Optimal Control Problem; 4.2.2. Optimization of Functionals Subject to Integral Constraints; 4.2.3. Design of Linear Regulators with Energy Constraints)
  4.3. Examples Using Integral Equation and Imbedding Methods (4.3.1. Integral Equation Method for Buckling Loads; 4.3.2. An Imbedding Method for Buckling Loads; 4.3.3. An Imbedding Method for a Nonlinear Two-Point Boundary-Value Problem; 4.3.4. Post-Buckling Beam Configurations via an Imbedding Method; 4.3.5. A Sequential Method for Nonlinear Filtering)
  Exercises
III. System Identification
5. Gauss-Newton Method for System Identification
  5.1. Least-Squares Estimation (5.1.1. Scalar Least-Squares Estimation; 5.1.2. Linear Least-Squares Estimation)
  5.2. Maximum Likelihood Estimation
  5.3. Cramer-Rao Lower Bound
  5.4. Gauss-Newton Method
  5.5. Examples of the Gauss-Newton Method (5.5.1. First-Order System with Single Unknown Parameter; 5.5.2. First-Order System with Unknown Initial Condition and Single Unknown Parameter; 5.5.3. Second-Order System with Two Unknown Parameters and Vector Measurement; 5.5.4. Second-Order System with Two Unknown Parameters and Scalar Measurement)
  Exercises
6. Quasilinearization Method for System Identification
  6.1. System Identification via Quasilinearization
  6.2. Examples of the Quasilinearization Method (6.2.1. First-Order System with Single Unknown Parameter; 6.2.2. First-Order System with Unknown Initial Condition and Single Unknown Parameter; 6.2.3. Second-Order System with Two Unknown Parameters and Vector Measurement; 6.2.4. Second-Order System with Two Unknown Parameters and Scalar Measurement)
  Exercises
7. Applications of System Identification
  7.1. Blood Glucose Regulation Parameter Estimation (7.1.1. Introduction; 7.1.2. Physiological Experiments; 7.1.3. Computational Methods; 7.1.4. Numerical Results; 7.1.5. Discussion and Conclusions)
  7.2. Fitting of Nonlinear Models of Drug Metabolism to Experimental Data (7.2.1. Introduction; 7.2.2. A Model Employing Michaelis and Menten Kinetics for Metabolism; 7.2.3. An Estimation Problem; 7.2.4. Quasilinearization; 7.2.5. Numerical Results; 7.2.6. Discussion)
  Exercises
IV. Optimal Inputs for System Identification
8. Optimal Inputs
  8.1. Historical Background
  8.2. Linear Optimal Inputs (8.2.1. Optimal Inputs and Sensitivities for Parameter Estimation; 8.2.2. Sensitivity of Parameter Estimates to Observations; 8.2.3. Optimal Inputs for a Second-Order Linear System; 8.2.4. Optimal Inputs Using Mehra's Method; 8.2.5. Comparison of Optimal Inputs for Homogeneous and Nonhomogeneous Boundary Conditions)
  8.3. Nonlinear Optimal Inputs (8.3.1. Optimal Input System Identification for Nonlinear Dynamic Systems; 8.3.2. General Equations for Optimal Inputs for Nonlinear Process Parameter Estimation)
  Exercises
9. Additional Topics for Optimal Inputs
  9.1. An Improved Method for the Numerical Determination of Optimal Inputs (9.1.1. Introduction; 9.1.2. A Nonlinear Example; 9.1.3. Solution via Newton-Raphson Method; 9.1.4. Numerical Results and Discussion)
  9.2. Multiparameter Optimal Inputs (9.2.1. Optimal Inputs for Vector Parameter Estimation; 9.2.2. Example of Optimal Inputs for Two-Parameter Estimation; 9.2.3. Example of Optimal Inputs for a Single-Input, Two-Output System; 9.2.4. Example of Weighted Optimal Inputs)
  9.3. Observability, Controllability, and Identifiability
  9.4. Optimal Inputs for Systems with Process Noise
  9.5. Eigenvalue Problems (9.5.1. Convergence of the Gauss-Seidel Method; 9.5.2. Determining the Eigenvalues of Saaty's Matrices for Fuzzy Sets; 9.5.3. Comparison of Methods for Determining the Weights of Belonging to Fuzzy Sets; 9.5.4. Variational Equations for the Eigenvalues and Eigenvectors of Nonsymmetric Matrices; 9.5.5. Individual Tracking of an Eigenvalue and Eigenvector of a Parametrized Matrix; 9.5.6. A New Differential Equation Method for Finding the Perron Root of a Positive Matrix)
  Exercises
10. Applications of Optimal Inputs
  10.1. Optimal Inputs for Blood Glucose Regulation Parameter Estimation (10.1.1. Formulation Using Bolie Parameters for Solution by Linear or Dynamic Programming; 10.1.2. Formulation Using Bolie Parameters for Solution by Method of Complementary Functions or Riccati Equation Method; 10.1.3. Improved Method Using Bolie and Bergman Parameters for Numerical Determination of the Optimal Inputs)
  10.2. Optimal Inputs for Aircraft Parameter Estimation
  Exercises
V. Computer Programs
11. Computer Programs for the Solution of Boundary-Value and Identification Problems
  11.1. Two-Point Boundary-Value Problems
  11.2. System Identification Problems
References
Author Index
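Part III centers on Gauss-Newton estimation of unknown system parameters from noisy trajectory observations (for instance Section 5.5.1, a first-order system with a single unknown parameter). The sketch below is a toy version of that setting: it uses the closed-form solution x(t) = x0 exp(-a t) and its analytic sensitivity instead of numerically integrated state and sensitivity equations, and the simulated data and noise level are illustrative assumptions.

```python
import numpy as np

# Gauss-Newton estimation of the decay rate a in  x' = -a*x,  x(0) = x0,
# from noisy observations y_i of x(t_i).  For this first-order model the
# solution and its sensitivity are available in closed form:
#   x(t; a) = x0 * exp(-a*t),   dx/da = -t * x0 * exp(-a*t).
# (In general the state and sensitivity equations are integrated numerically.)

rng = np.random.default_rng(1)
x0, a_true = 2.0, 0.7
t = np.linspace(0.0, 5.0, 40)
y = x0 * np.exp(-a_true * t) + 0.02 * rng.normal(size=t.size)   # noisy data

a = 0.2                                   # initial guess
for _ in range(20):
    x = x0 * np.exp(-a * t)
    residual = y - x                      # data minus model prediction
    J = (-t * x).reshape(-1, 1)           # sensitivity dx/da as a column
    step = np.linalg.lstsq(J, residual, rcond=None)[0][0]   # Gauss-Newton step
    a += step
    if abs(step) < 1e-10:
        break

print("estimated a:", a, " true a:", a_true)
```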


Mathematics of Computation | 1963

Polynomial approximation—a new computational technique in dynamic programming: Allocation processes

Richard Bellman; Robert E. Kalaba; Bella Kotkin

In principle, this equation can be solved computationally using the same technique that applies so well to (1.3). In practice (see [1] for a discussion), questions of time and accuracy arise. There are a number of ways of circumventing these difficulties, among which the Lagrange multiplier plays a significant role. In this series of papers, we wish to present a number of applications of a new, simple and quite powerful method, that of polynomial approximation. We shall begin with a discussion of the allocation process posed in the foregoing paragraphs and continue, in subsequent papers, with a treatment of realistic trajectory and
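The idea, concretely: in the dynamic-programming recursion for an allocation process, store each stage's value function not as a table over a fine state grid but as a low-degree polynomial fitted at a handful of sample states, so the maximization can be evaluated at arbitrary arguments. Below is a toy one-dimensional allocation recursion; the return functions, the attrition factors, and the polynomial degree are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Dynamic programming for an allocation process:
#   f_N(x) = max_{0 <= y <= x} [ g(y) + h(x - y) + f_{N-1}(a*y + b*(x - y)) ],
# with f_0 = 0.  Instead of tabulating f_{N-1} on a fine grid, each stage's
# value function is stored as the coefficients of a low-degree polynomial
# fitted at a few sample states, so it can be evaluated anywhere.
# (Return functions g, h, attrition factors a, b, and the degree are
#  illustrative choices.)

g = lambda y: np.sqrt(y)
h = lambda z: 2.0 * np.sqrt(z)
a, b = 0.7, 0.4
x_max, degree = 10.0, 6
samples = np.linspace(0.0, x_max, 25)         # states at which f is fitted
y_grid = np.linspace(0.0, 1.0, 201)           # allocation fractions searched

coef = np.zeros(degree + 1)                   # f_0(x) = 0

for stage in range(1, 6):
    values = np.empty_like(samples)
    for i, x in enumerate(samples):
        y = y_grid * x                        # candidate allocations in [0, x]
        carry = a * y + b * (x - y)           # state passed to the next stage
        total = g(y) + h(x - y) + P.polyval(carry, coef)
        values[i] = total.max()
    coef = P.polyfit(samples, values, degree) # new polynomial value function

print("f_5(x_max) approx:", P.polyval(x_max, coef))
```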


IEEE Transactions on Information Theory | 1957

On the role of dynamic programming in statistical communication theory

Richard Bellman; Robert E. Kalaba

In this paper we wish to show that the fundamental problem of determining the utility of a communication channel in conveying information can be interpreted as a problem within the framework of multistage decision processes of stochastic type, and as such may be treated by means of the theory of dynamic programming. We shall begin by formulating some aspects of the general problem in terms of multistage decision processes, with brief descriptions of stochastic allocation processes and learning processes. Following this, as a simple example of the applicability of the techniques of dynamic programming, we shall discuss in detail a problem posed recently by Kelly, who showed that under certain conditions the rate of transmission, as defined by Shannon, can be obtained from a certain multistage decision process with an economic criterion. Here we shall complete Kelly's analysis in some essential points, using functional equation techniques, and considerably extend his results.
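Kelly's setting can be made concrete in its simplest case: even-money bets guided by side information that is correct with probability p. Because wealth grows multiplicatively, log-wealth grows additively, and the multistage decision problem reduces to choosing a fixed betting fraction f that maximizes the per-stage expected log growth G(f) = p log2(1 + f) + (1 - p) log2(1 - f); its maximum, 1 - H(p), is Shannon's rate for the corresponding binary channel. A small numerical check (p = 0.75 is an arbitrary illustration):

```python
import numpy as np

# Kelly's multistage betting problem, even-money case: each stage, bet a
# fraction f of current wealth on a tip that is correct with probability p.
# Expected log growth per stage (base 2):
#   G(f) = p*log2(1 + f) + (1 - p)*log2(1 - f),
# maximized at f* = 2p - 1, where G(f*) = 1 - H(p), the Shannon rate of the
# corresponding binary symmetric channel.  (p = 0.75 is just an illustration.)

p = 0.75
f = np.linspace(0.0, 0.999, 100000)
G = p * np.log2(1.0 + f) + (1.0 - p) * np.log2(1.0 - f)

f_star_numeric = f[np.argmax(G)]
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # binary entropy of p

print("numeric argmax f*:", f_star_numeric, " closed form 2p-1:", 2*p - 1)
print("numeric max G    :", G.max(),        " closed form 1-H(p):", 1.0 - H)
```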

Collaboration


Dive into Robert E. Kalaba's collaboration network.

Top Co-Authors

Richard Bellman
University of Southern California

H. Kagiwada
University of Southern California

Firdaus E. Udwadia
University of Southern California

S. Ueno
Kyoto Computer Gakuin

June Buell
University of Southern California

R. Sridhar
California Institute of Technology

Nima Rasakhoo
University of Southern California

E. Zagustin
California State University