Publication


Featured research published by Minh Q. Phan.


Journal of Guidance, Control, and Dynamics | 1993

Identification of observer/Kalman filter Markov parameters - Theory and experiments

Jer-Nan Juang; Minh Q. Phan; Lucas G. Horta; Richard W. Longman

This paper discusses an algorithm to compute the Markov parameters of an observer or Kalman filter from experimental input and output data. The Markov parameters can then be used for identification of a state-space representation, with associated Kalman or observer gain, for the purpose of controller design. The algorithm is a nonrecursive matrix version of two recursive algorithms developed in previous works for different purposes, and the relationship between these algorithms is developed. The new matrix formulation here gives insight into the existence and uniqueness of solutions of certain equations and offers bounds on the proper choice of observer order. It is shown that if one uses data containing noise and seeks the fastest possible deterministic observer, the deadbeat observer, one instead obtains the Kalman filter, which is the fastest possible observer in the stochastic environment. The results of the paper are demonstrated in numerical studies and experiments on the Hubble Space Telescope.
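
As a rough illustration of the regression step at the heart of this kind of algorithm, the sketch below sets up the observer-augmented least-squares problem for single-input, single-output data. The function name, the simulated first-order plant, and the chosen observer order are illustrative assumptions; the subsequent recovery of system Markov parameters and the state-space realization are not shown.

    # A minimal sketch of the observer Markov parameter least-squares step, assuming
    # SISO data arrays u and y of equal length and a user-chosen observer order p.
    import numpy as np

    def observer_markov_parameters(u, y, p):
        """Solve y(k) = D*u(k) + sum_{i=1..p} Ybar_i * [u(k-i); y(k-i)] by least squares."""
        N = len(y)
        rows = []
        for k in range(p, N):
            past = []
            for i in range(1, p + 1):
                past.extend([u[k - i], y[k - i]])
            rows.append([u[k]] + past)          # regressor: current input + p past input/output pairs
        V = np.array(rows)                      # data matrix, one regressor per row
        Y = np.array(y[p:])                     # outputs to be explained
        theta, *_ = np.linalg.lstsq(V, Y, rcond=None)
        return theta                            # [D, Ybar_1 (u, y), ..., Ybar_p (u, y)]

    # Example: observer Markov parameters of an assumed first-order system from noisy data.
    rng = np.random.default_rng(0)
    u = rng.standard_normal(500)
    y = np.zeros(500)
    for k in range(1, 500):
        y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
    print(observer_markov_parameters(u, y, p=4))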


IEEE Transactions on Circuits and Systems I: Regular Papers | 2002

Simple learning control made practical by zero-phase filtering: applications to robotics

Haluk Elci; Richard W. Longman; Minh Q. Phan; Jer-Nan Juang; Roberto Ugoletti

Iterative learning control (ILC) applies to control systems that perform the same finite-time tracking command repeatedly. It iteratively adjusts the command from one repetition to the next in order to reduce the tracking error. This creates a two-dimensional (2-D) system, with time step and repetition number as independent variables. The simplest form of ILC uses only one gain times one error in the previous repetition, and can be shown to converge to zero tracking error independent of the system dynamics. Hence, it appears very effective from a mathematical perspective. However, in practice, there are unacceptable learning transients. A zero-phase low-pass filter is introduced here to eliminate the worst transients. The main purpose of this paper is to present experiments on a commercial robot that demonstrate the effectiveness of this approach, improving the tracking accuracy of the robot performing a high-speed maneuver by a factor of 100 in six repetitions. Experiments using a two-gain ILC reach this error level in only three iterations. It is suggested that these two simple ILC laws are the learning-control equivalents of proportional and PD control in classical control system design. Thus, what was an impractical approach becomes practical, easy to apply, and effective.
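
A minimal sketch of a one-gain ILC update with zero-phase low-pass filtering, using scipy.signal.filtfilt for the forward-backward (zero-phase) filtering. The plant model, learning gain, and filter cutoff are illustrative assumptions, not values from the experiments.

    # One-gain ILC with a zero-phase low-pass filter applied to the update.
    import numpy as np
    from scipy import signal

    dt = 0.01
    plant = signal.TransferFunction([1.0], [0.05, 1.0]).to_discrete(dt)   # assumed first-order plant
    t = np.arange(0, 2, dt)
    y_des = np.sin(2 * np.pi * t)                                         # desired trajectory

    b, a = signal.butter(4, 0.2)          # low-pass filter applied forward and backward (zero phase)
    phi = 0.8                             # learning gain
    u = np.zeros_like(t)

    for rep in range(6):
        y = signal.dlsim(plant, u, t)[1].ravel()
        e = y_des - y
        # one-gain ILC: correct u(k) with the error one step ahead, then zero-phase filter the update
        update = phi * np.roll(e, -1)
        update[-1] = 0.0
        u = u + signal.filtfilt(b, a, update)
        print(f"repetition {rep + 1}: RMS error = {np.sqrt(np.mean(e**2)):.4f}")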


Astrodynamics Conference | 1988

A mathematical theory of learning control for linear discrete multivariable systems

Minh Q. Phan; Richard W. Longman

When tracking control systems are used in repetitive operations such as robots in various manufacturing processes, the controller will make the same errors repeatedly. Here consideration is given to learning controllers that look at the tracking errors in each repetition of the process and adjust the control to decrease these errors in the next repetition. A general formalism is developed for learning control of discrete-time (time-varying or time-invariant) linear multivariable systems. Methods of specifying a desired trajectory (such that the trajectory can actually be performed by the discrete system) are discussed, and learning controllers are developed. Stability criteria are obtained which are relatively easy to use to ensure convergence of the learning process, and proper gain settings are discussed in light of measurement noise and system uncertainties.
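
The convergence statement above can be made concrete in the lifted (supervector) view of one discrete-time repetition: stack the trajectory, build the lower-triangular Toeplitz matrix of plant Markov parameters, and require the repetition-to-repetition error transition matrix to have spectral radius below one. The plant values and the simple diagonal learning gain below are illustrative assumptions.

    # Lifted-system convergence check for a simple learning law u_{j+1} = u_j + L e_j.
    import numpy as np

    p = 50                                    # number of time steps in one repetition
    markov = [0.2 * 0.9**i for i in range(p)] # assumed plant Markov parameters (first entry is CB)
    P = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1):
            P[i, j] = markov[i - j]           # lower-triangular Toeplitz convolution matrix

    L = 2.0 * np.eye(p)                       # simple diagonal learning gain
    T = np.eye(p) - P @ L                     # error transition from one repetition to the next
    print("spectral radius:", max(abs(np.linalg.eigvals(T))))   # < 1 means the learning converges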


Journal of Vibration and Acoustics | 1995

Improvement of Observer/Kalman Filter Identification (OKID) by Residual Whitening

Minh Q. Phan; Lucas G. Horta; Jer-Nan Juang; Richard W. Longman

This paper presents a time-domain method to identify a state space model of a linear system and its corresponding observer/Kalman filter from a given set of general input-output data. The identified filter has the properties that its residual is minimized in the least squares sense, orthogonal to the time-shifted versions of itself, and to the given input-output data sequence. The connection between the state space model and a particular auto-regressive moving average description of a linear system is made in terms of the Kalman filter and a deadbeat gain matrix. The procedure first identifies the Markov parameters of an observer system, from which a state space model of the system and the filter gain are computed. The developed procedure is shown to improve results obtained by an existing observer/Kalman filter identification method, which is based on an auto-regressive model without the moving average terms. Numerical and experimental results are presented to illustrate the proposed method.
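
A quick way to check the whitening property described above is to compute the normalized autocorrelation of the identification residual at a few lags; a (near-)white residual gives values close to zero. The sketch below is generic and uses synthetic data; it is not the paper's identification procedure itself.

    # Generic residual-whiteness check via normalized autocorrelation.
    import numpy as np

    def residual_autocorrelation(r, max_lag=10):
        """Normalized autocorrelation of the residual r at lags 1..max_lag."""
        r = r - np.mean(r)
        denom = np.dot(r, r)
        return np.array([np.dot(r[:-k], r[k:]) / denom for k in range(1, max_lag + 1)])

    rng = np.random.default_rng(1)
    residual = rng.standard_normal(1000)          # a white residual should give small values
    print(np.round(residual_autocorrelation(residual), 3))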


Systems, Man and Cybernetics | 1994

Discrete frequency based learning control for precision motion control

H. Elci; Richard W. Longman; Minh Q. Phan; Jer-Nan Juang; R. Ugoletti

This paper concerns MIMO learning control design with well-behaved transients during the learning process. The method allows dynamic and inverse dynamic control laws. The theory gives a unifying understanding of the stability boundary for convergence to zero tracking error, and of a stability condition obtained by using frequency response arguments. The former is easy to satisfy, making learning control converge with little knowledge of the system. The much more restrictive frequency response condition is interpreted as a robustness condition, representing the robustness relative to good transient behavior during learning. This ensures that the amplitudes of the frequency components of the error signal decay in a monotonic and geometric manner with each successive repetition. Noncausal zero-phase filtering is used both to facilitate the generation of learning controllers having this convergence at important frequencies, and to ensure that the learning controllers maintain this property in the presence of unmodeled dynamics. The approach is in discrete time. Experiments are performed on a 7-degree-of-freedom robot, demonstrating the effectiveness of the design process for producing precision motion control.
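
A minimal sketch of the frequency-response condition described above: at each frequency where learning is allowed to act, |1 - phi*G(e^{jw})| must be below one so that the error component at that frequency decays monotonically with repetitions; the zero-phase filter then cuts off learning above the frequency where the condition fails. The plant and gain below are illustrative assumptions.

    # Evaluate the monotonic-decay condition |1 - phi*G| < 1 over frequency.
    import numpy as np
    from scipy import signal

    dt = 0.01
    plant = signal.TransferFunction([1.0], [0.05, 1.0]).to_discrete(dt)
    w, G = signal.dfreqresp(plant, n=200)          # discrete-frequency response of the plant

    phi = 1.0
    magnitude = np.abs(1.0 - phi * G)
    cutoff = w[magnitude < 1.0].max() if np.any(magnitude < 1.0) else 0.0
    print(f"monotonic-decay condition holds up to {cutoff:.2f} rad/sample; "
          "above that, cut off learning with the zero-phase filter")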


Conference on Decision and Control | 1996

Learning control for trajectory tracking using basis functions

Minh Q. Phan; James A. Frueh

This paper proposes an iterative learning method that makes the output of a general linear time-varying system with unknown coefficients track a finite-time reference trajectory. The system learns by repeated trials, each starting from the same initial conditions. Data from multiple trials can be used to identify a model of the system during the finite time interval of interest. A learning controller is then designed from the identified model. If the identification is perfect, the necessary control can be computed directly from the identified model, and there is no need for learning. If the identification is not perfect, the remaining error can be corrected by learning control. By the use of input basis functions, this formulation shows that, for successful learning, one needs to identify only the portion of the system dynamics relevant to the specific trajectory to be tracked.
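
A minimal sketch of the basis-function idea: parameterize the input by a handful of basis-function coefficients, identify the linear map from those coefficients to the output trajectory from repeated trials, and solve for the coefficients that best track the reference within the chosen basis. The plant, basis choice, and trial noise below are illustrative assumptions, and the subsequent learning correction of the residual error is not shown.

    # Identify the map from input basis coefficients to the output trajectory, then solve for tracking.
    import numpy as np
    from scipy import signal

    dt, n = 0.01, 200
    t = np.arange(n) * dt
    plant = signal.TransferFunction([1.0], [0.05, 1.0]).to_discrete(dt)
    basis = np.column_stack(
        [np.sin(2 * np.pi * (k + 1) * t / t[-1]) for k in range(3)]
        + [np.cos(2 * np.pi * (k + 1) * t / t[-1]) for k in range(3)]
    )                                                     # 6 sine/cosine basis functions

    def run_trial(coeffs, rng):
        u = basis @ coeffs
        y = signal.dlsim(plant, u, t)[1].ravel()
        return y + 0.001 * rng.standard_normal(n)         # repeated trials, same initial conditions

    # One trial per basis function identifies the coefficient-to-output map.
    rng = np.random.default_rng(2)
    M = np.column_stack([run_trial(e, rng) for e in np.eye(6)])

    # Coefficients that best track the reference within the identified subspace.
    y_des = np.sin(2 * np.pi * t / t[-1])
    coeffs = np.linalg.lstsq(M, y_des, rcond=None)[0]
    e = y_des - run_trial(coeffs, rng)
    print("RMS tracking error:", np.sqrt(np.mean(e**2)))  # remaining error is left for learning control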


Journal of Guidance, Control, and Dynamics | 1992

Identification of system, observer, and controller from closed-loop experimental data

Jer-Nan Juang; Minh Q. Phan

This paper considers the identification problem of a system operating in a closed loop with an existing feedback controller. The closed-loop system is excited by a known excitation signal, and the resulting time histories of the closed-loop system response and the feedback signal are measured. From the time history data, the algorithm computes the Markov parameters of a closed-loop observer, from which the Markov parameters of the individual open-loop plant, observer, and controller are recovered. A state-space model of the open-loop plant and the gain matrices for the controller and the observer are then realized. The results of the paper are demonstrated by an example using wind tunnel aircraft flutter test data.
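
The sketch below illustrates the closed-loop setting with a SISO shortcut rather than the paper's Markov-parameter recovery: excite the loop with a known signal, identify the closed-loop model from excitation to output by least squares, and, assuming the static feedback gain is known, recover the open-loop frequency response as G = H / (1 - H*K). The plant, gain, and model order are illustrative assumptions; the paper itself does not require the controller to be known.

    # Closed-loop data collection, closed-loop identification, and open-loop recovery (SISO shortcut).
    import numpy as np

    a, b, K = 0.82, 0.18, 0.5                   # assumed plant y[k+1] = a*y[k] + b*u[k], feedback u = r - K*y
    rng = np.random.default_rng(5)
    N = 2000
    r = rng.standard_normal(N)                  # known excitation signal
    y = np.zeros(N)
    for k in range(1, N):
        u_prev = r[k - 1] - K * y[k - 1]        # feedback signal (measured in the experiment)
        y[k] = a * y[k - 1] + b * u_prev

    # Identify the first-order closed-loop model from the known excitation r to y.
    V = np.column_stack([y[:-1], r[:-1]])
    a_cl, b_cl = np.linalg.lstsq(V, y[1:], rcond=None)[0]

    # Recover the open-loop frequency response from the closed-loop one.
    w = np.linspace(0.01, np.pi, 100)
    z = np.exp(1j * w)
    H = b_cl / (z - a_cl)                        # identified closed-loop response
    G_rec = H / (1.0 - H * K)                    # recovered open-loop response
    G_true = b / (z - a)
    print("max recovery error:", np.max(np.abs(G_rec - G_true)))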


Journal of Guidance, Control, and Dynamics | 2000

System Identification in the Presence of Completely Unknown Periodic Disturbances

Neil E. Goodzeit; Minh Q. Phan

A system identification method to extract the disturbance-free dynamics and the disturbance effect correctly despite the presence of unknown periodic disturbances is presented. The disturbance frequencies and waveforms can be completely unknown and arbitrary. Only measurements of the excitation input and the disturbance-contaminated response are used for identification. Initially, the disturbances are modeled implicitly. When the order of an assumed input-output model exceeds a certain minimum value, the disturbance information is completely absorbed in the identified model coefficients. A special interaction matrix explains the mechanism by which information about the system and the disturbances is intertwined and, more importantly, how they can be separated uniquely and exactly for later use in identification and control. From the identified information a feedforward controller can be developed to reject the unwanted disturbances without requiring the measurement of a separate disturbance-correlated signal. The multi-input multi-output formulation is first derived in the deterministic setting for which the system and disturbance identification is exact. Extensions to handle noise-contaminated data are also provided. Experimental results illustrate the method on a flexible structure. A companion paper addresses the problem where the disturbance effect is modeled explicitly.
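
The order effect described above can be illustrated with a toy example: fit input-output (ARX) models of increasing order to data contaminated by an unmeasured sinusoidal disturbance; once the order is high enough to absorb the disturbance, the fit residual collapses. The plant, disturbance, and orders below are illustrative assumptions, and the interaction-matrix separation step is not shown.

    # An unmeasured periodic disturbance is absorbed once the ARX order is high enough.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 2000
    u = rng.standard_normal(N)
    d = 0.5 * np.sin(2 * np.pi * 0.05 * np.arange(N))      # completely unknown periodic disturbance
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + d[k]      # disturbance-contaminated response

    def arx_residual(u, y, p):
        """Least-squares ARX fit of order p; returns the RMS one-step prediction residual."""
        rows = [np.concatenate([y[k - p:k][::-1], u[k - p:k][::-1]]) for k in range(p, N)]
        V = np.array(rows)
        theta = np.linalg.lstsq(V, y[p:], rcond=None)[0]
        return np.sqrt(np.mean((y[p:] - V @ theta) ** 2))

    for p in (1, 2, 3, 5):
        print(f"order {p}: residual RMS = {arx_residual(u, y, p):.2e}")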


The Journal of Chemical Physics | 1999

A Self-Guided Algorithm for Learning Control of Quantum-Mechanical Systems

Minh Q. Phan; Herschel Rabitz

This paper presents a general self-guided algorithm for direct laboratory learning of controls to manipulate quantum-mechanical systems. The primary focus is on an algorithm based on the learning of a linear laboratory input–output map from a sequence of controls and their observed impact on the quantum-mechanical system. This map is then employed in an iterative fashion to sequentially home in on the desired objective. The objective may be a target state at a final time, or a continuously weighted observational trajectory. The self-guided aspects of the algorithm are based on implementing a cost functional that only contains laboratory-accessible information. Through choice of the weights in this functional, the algorithm can automatically stay within the bounds of each local linear map and indicate when a new map is necessary for additional iterative improvement. Finally, these concepts can be generalized to include the possibility of employing nonlinear maps, as well as the use of just the laboratory control instrument settings rather than observation of the control itself. An illustrative simulation of the concepts is presented for the control of a four-level quantum system.
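
In the spirit of the algorithm described above, the sketch below iterates on a learned local linear map for a toy nonlinear response: probe with small control perturbations, fit a linear map from control settings to the observed quantities, then take a step toward the target with a penalty on step size that stands in for the cost-functional weights keeping the update inside the map's region of validity. The toy response function, perturbation size, and penalty are illustrative assumptions, not a quantum-mechanical model.

    # Iterative learning on a fitted local linear input-output map for a toy nonlinear system.
    import numpy as np

    rng = np.random.default_rng(4)

    def observe(c):
        """Toy nonlinear laboratory response to a 3-parameter control setting c."""
        return np.array([np.sin(c[0]) + 0.3 * c[1] ** 2, c[2] * np.cos(c[1])])

    target = np.array([0.8, 0.4])
    c = np.zeros(3)
    for it in range(10):
        # Fit a local linear map from control perturbations to response changes.
        dC = 0.05 * rng.standard_normal((8, 3))
        dY = np.array([observe(c + d) - observe(c) for d in dC])
        M = np.linalg.lstsq(dC, dY, rcond=None)[0].T          # local map: dy ~ M dc

        # Weighted step toward the target; the step-size penalty keeps the update
        # inside the region where the local linear map remains trustworthy.
        err = target - observe(c)
        step = np.linalg.solve(M.T @ M + 0.1 * np.eye(3), M.T @ err)
        c = c + step
        print(f"iteration {it + 1}: |error| = {np.linalg.norm(target - observe(c)):.4f}")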


Journal of Guidance, Control, and Dynamics | 1993

Passive dynamic controllers for nonlinear mechanical systems

Jer-Nan Juang; Shih-Chin Wu; Minh Q. Phan; Richard W. Longman

A methodology for model-independent controller design for controlling the large angular motion of multibody dynamic systems is outlined. The controlled system may consist of rigid and flexible components that undergo large rigid body motion and small elastic deformations. Control forces/torques are applied to drive the system and at the same time suppress the vibrations due to flexibility of the components. The proposed controller consists of passive second-order systems that may be designed with little knowledge of the system parameters, even if the controlled system is nonlinear. Under rather general assumptions, the passive design assures that the closed-loop system has guaranteed stability properties. Unlike positive real controller design, stabilization can be accomplished without direct velocity feedback. In addition, the second-order passive design allows dynamic feedback controllers with considerable freedom to tune for desired system response and to avoid actuator saturation. After developing the basic mathematical formulation of the design methodology, simulation results are presented to illustrate the proposed approach applied to a flexible six-degree-of-freedom manipulator.

Collaboration


Dive into Minh Q. Phan's collaborations.

Top Co-Authors

Richard S. Darling (Engineer Research and Development Center)
Stephen A. Ketcham (Engineer Research and Development Center)
Chung-Wen Chen (North Carolina State University)