
Publication


Featured research published by Mark A. Kramer.


Computers & Chemical Engineering | 1992

Autoassociative neural networks

Mark A. Kramer

Autoassociative neural networks are feedforward nets trained to produce an approximation of the identity mapping between network inputs and outputs using backpropagation or similar learning procedures. The key feature of an autoassociative network is a dimensional bottleneck between input and output. Compression of information by the bottleneck results in the acquisition of a correlation model of the input data, useful for performing a variety of data screening tasks. The network reduces measurement noise by mapping inputs into the space of the correlation model, and the residuals of this mapping can be used to detect sensor failures. Values for missing and faulty sensors can be estimated using the network. A related approach, “robust autoassociative networks,” filters both random noise and gross errors in data resulting from faulty sensors. These networks replace with a single forward pass the conventional multiple-step procedure of data rectification, gross error detection, failure identification and replacement value estimation. Autoassociative networks can be used to preprocess data so that sensor-based calculations can be performed correctly even in the presence of large sensor biases and failures. These techniques are demonstrated on an example involving inferential measurement of concentrations via tray temperatures in a distillation column. Results show that the network approach is more effective than competing linear techniques in this application.
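As an illustration, here is a minimal sketch of a bottleneck autoencoder of this kind in PyTorch; the layer sizes, the synthetic data, and the fault-detection test are illustrative assumptions, not Kramer's original implementation.

```python
# Minimal autoassociative (bottleneck autoencoder) sketch -- illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "plant" data: 5 correlated sensors driven by 2 latent variables.
latent = torch.randn(1000, 2)
mixing = torch.randn(2, 5)
x = latent @ mixing + 0.05 * torch.randn(1000, 5)

# Identity mapping through a dimensional bottleneck (5 -> 2 -> 5).
net = nn.Sequential(
    nn.Linear(5, 8), nn.Tanh(),
    nn.Linear(8, 2),              # bottleneck enforces the correlation model
    nn.Linear(2, 8), nn.Tanh(),
    nn.Linear(8, 5),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), x)   # train output to match input
    loss.backward()
    opt.step()

# Fault detection: bias sensor 3 and inspect the per-sensor residuals.
test = x[:1].clone()
test[0, 3] += 2.0
residual = (net(test) - test).abs()
print("per-sensor residuals:", residual.detach().numpy().round(3))
```

A large residual concentrated on one channel is the signature of a faulty sensor, while uniformly small residuals indicate data consistent with the learned correlation model.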


IEEE Transactions on Neural Networks | 1992

Using radial basis functions to approximate a function and its error bounds

James A. Leonard; Mark A. Kramer; Lyle H. Ungar

A novel network called the validity index network (VI net) is presented. The VI net, derived from radial basis function networks, fits functions and calculates confidence intervals for its predictions, indicating local regions of poor fit and extrapolation.


Computers & Chemical Engineering | 1990

Improvement of the backpropagation algorithm for training neural networks

James A. Leonard; Mark A. Kramer

The application of artificial neural networks (ANNs) to chemical engineering problems, notably malfunction diagnosis, has recently been discussed (Hoskins and Himmelblau, Comput. chem. Engng 12, 881–890, 1988). ANNs “learn”, from examples, a certain set of input-output mappings by optimizing weights on the branches that link the nodes of the ANN. Once the structure of the input-output space is learned, novel input patterns can be classified. The backpropagation (BP) algorithm, using the generalized delta rule (GDR) for gradient calculation (Werbos, Ph.D. Thesis, Harvard University, 1974), has been popularized as a method of training ANNs. This method has the advantage of being readily adaptable to highly parallel hardware architectures. However, most current studies of ANNs are conducted primarily on serial rather than parallel processing machines, on which backpropagation is very inefficient and converges poorly. Some simple improvements, however, can render the algorithm much more robust and efficient.
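As one concrete example of such a refinement (the choice of a momentum term here is an assumption for illustration, not necessarily the paper's specific improvement), a NumPy sketch of a delta-rule update with momentum:

```python
# Gradient descent with a momentum term -- a standard refinement of the
# generalized delta rule; the toy problem and coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
velocity = np.zeros(3)
lr, beta = 0.05, 0.9          # step size and momentum coefficient

for epoch in range(100):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # gradient of mean squared error
    velocity = beta * velocity - lr * grad  # momentum smooths the updates
    w = w + velocity

print("recovered weights:", w.round(3))     # close to [1.0, -2.0, 0.5]
```

The momentum term damps oscillations across steep error-surface ravines, which is one way convergence on serial machines can be made more robust.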


Computers & Chemical Engineering | 1990

Diagnosis using backpropagation neural networks—analysis and criticism

Mark A. Kramer; James A. Leonard

Artificial neural networks based on a feedforward architecture and trained by the backpropagation technique have recently been applied to static fault diagnosis problems. The networks are used to classify measurement vectors into a set of predefined categories that represent the various functional and malfunctional states of the process. While the networks can usually produce decision surfaces that correctly classify the training examples, regions of the input space not occupied by training data are classified arbitrarily. As a result, the networks may not accurately extrapolate from the training data. Although extrapolation is not required under ideal circumstances, in practice the network may be required to extrapolate when undersized training sets are used, when parent distributions of fault classes undergo shifts subsequent to training, and when the input data is corrupted by missing or biased sensors. These situations cause relatively high error rates for the neural classifier. A related problem is that the networks cannot detect when they lack the data for a reliable classification, a serious deficiency in many practical applications. Classifiers based on distance metrics assign regions of the input space according to their proximity to the training data, and thus extrapolation is not arbitrary but based on the most relevant data. Distance-based classifiers perform better under nonideal conditions and are to be preferred to neural network classifiers in diagnostic applications.
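A minimal sketch of the distance-based alternative the abstract argues for, here as a nearest-neighbour classifier that refuses to classify inputs far from all training data; the toy clusters and rejection threshold are illustrative assumptions:

```python
# Nearest-neighbour diagnosis with novelty rejection -- illustrative sketch.
import numpy as np

rng = np.random.default_rng(1)
# Two fault classes as clusters of training measurements.
class_a = rng.normal(loc=[0, 0], scale=0.3, size=(50, 2))
class_b = rng.normal(loc=[3, 3], scale=0.3, size=(50, 2))
train = np.vstack([class_a, class_b])
labels = np.array([0] * 50 + [1] * 50)

def classify(x, reject_distance=1.0):
    """Return the nearest-neighbour label, or None when no training point is near."""
    dists = np.linalg.norm(train - x, axis=1)
    nearest = np.argmin(dists)
    if dists[nearest] > reject_distance:
        return None                      # "insufficient data" instead of a guess
    return labels[nearest]

print(classify(np.array([0.1, -0.2])))   # 0: inside class A
print(classify(np.array([10.0, 10.0])))  # None: extrapolation detected
```

The `None` branch is exactly the capability the abstract says backpropagation classifiers lack: an explicit signal that the input lies outside the data the classifier was built from.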


ACM Transactions on Mathematical Software | 1988

The simultaneous solution and sensitivity analysis of systems described by ordinary differential equations

Jorge R. Leis; Mark A. Kramer

The methodology for the simultaneous solution of ordinary differential equations and the associated first-order parametric sensitivity equations is presented, and a detailed description of its implementation as a modification of a widely disseminated implicit ODE solver is given. The error control strategy ensures that local error criteria are independently satisfied by both the model and sensitivity solutions. The internal logic of this implementation is detailed. Numerical testing of the algorithm is reported; results indicate that greater reliability and improved efficiency are offered over other sensitivity analysis methods.
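The simultaneous approach can be sketched with a modern solver: the sensitivity equations are appended to the model ODEs and integrated together. The model dy/dt = -p*y, the parameter value, and the tolerances below are assumptions for illustration; this uses SciPy rather than the paper's modified implicit solver.

```python
# Simultaneous integration of an ODE and its parametric sensitivity.
import numpy as np
from scipy.integrate import solve_ivp

p, y0 = 1.5, 2.0

def augmented(t, z):
    y, s = z
    return [-p * y,          # model equation: dy/dt = -p*y
            -y - p * s]      # sensitivity equation for s = dy/dp

sol = solve_ivp(augmented, (0.0, 2.0), [y0, 0.0], rtol=1e-8, atol=1e-10)
t = sol.t[-1]
print("numerical dy/dp:", sol.y[1, -1])
print("analytic  dy/dp:", -t * y0 * np.exp(-p * t))   # from y = y0*exp(-p*t)
```

Because both equations share one step-size and error-control loop, the sensitivities are obtained at little extra cost over the model solution alone, which is the efficiency argument of the paper.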


Computers & Chemical Engineering | 1992

A neural network architecture that computes its own reliability

James A. Leonard; Mark A. Kramer; Lyle H. Ungar

Artificial neural networks (ANNs) have been used to construct empirical nonlinear models of process data. Because network models are not based on physical theory and contain nonlinearities, their predictions are suspect when extrapolating beyond the range of the original training data. With multiple correlated inputs, it is difficult to recognize when the network is extrapolating. Furthermore, due to the non-uniform distribution of the training examples and noise over the domain, the network may have local areas of poor fit even when not extrapolating. Standard measures of network performance give no indication of regions of locally poor fit or possible errors due to extrapolation. This paper introduces the “validity index network” (VI-net), an extension of radial basis function networks (RBFNs) that calculates the reliability and confidence of its output and indicates local regions of poor fit and extrapolation. Because RBFNs use a composition of local fits to the data, they are readily adapted to predict local fitting accuracy. The VI-net can also detect novel input patterns in classification problems, provided that the inputs to the classifier are real values. The reliability measures of the VI-net are implemented as additional output nodes of the underlying RBFN. Weights associated with the reliability nodes are given analytically based on training statistics from the fitting of the target function, and thus the reliability measures can be added to a standard RBFN with no additional training effort.
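A minimal NumPy sketch of the underlying idea: an RBFN's Gaussian units provide the local fit, and the summed unnormalized activations serve as a validity signal that drops toward zero under extrapolation. Centre placement, widths, and the toy function are assumptions, not the paper's exact construction.

```python
# RBFN with a simple validity signal -- illustrative sketch of the VI-net idea.
import numpy as np

rng = np.random.default_rng(2)
x_train = rng.uniform(0, 2 * np.pi, size=80)
y_train = np.sin(x_train) + 0.05 * rng.normal(size=80)

centres = np.linspace(0, 2 * np.pi, 12)
width = 0.5

def activations(x):
    return np.exp(-((np.atleast_1d(x)[:, None] - centres) ** 2) / (2 * width**2))

# Fit the output weights by least squares on the hidden-layer activations.
H = activations(x_train)
w, *_ = np.linalg.lstsq(H, y_train, rcond=None)

def predict(x):
    H = activations(x)
    validity = H.sum(axis=1)        # low when x is far from all training data
    return H @ w, validity

for x in (np.pi / 2, 10.0):         # one interpolation, one extrapolation
    y_hat, v = predict(x)
    print(f"x={x:5.2f}  prediction={y_hat[0]:+.3f}  validity={v[0]:.3f}")
```

At x = 10, far outside the training interval, the validity signal is essentially zero even though the network still emits a numerical prediction; that discrepancy is what the extra reliability outputs make visible.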


Computers & Chemical Engineering | 1997

A general framework for preventive maintenance optimization in chemical process operations

Jonathan Samuel Tan; Mark A. Kramer

Chemical process reliability has become increasingly recognized, both for its impact on economics and for the academically challenging problems it poses. In this work, we give an overview of some of the major challenges in formulating and optimizing preventive maintenance, and we propose a general framework for preventive maintenance optimization that combines Monte Carlo simulation with a genetic algorithm. This approach has distinct advantages: when applied to opportunistic maintenance problems, it overcomes demonstrated shortcomings of analytic and Markov techniques in terms of solution accuracy, versatility, and tractability. The framework is easily integrated with general process planning and scheduling, and it provides sensitivity analysis. Furthermore, a genetic algorithm combines well with Monte Carlo simulation to optimize a non-deterministic objective function.
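A compact sketch of the proposed combination, assuming a simple Weibull failure model and a single decision variable (the preventive-maintenance interval); none of the cost figures, failure parameters, or GA settings come from the paper.

```python
# Monte Carlo cost evaluation inside a small genetic algorithm -- illustrative.
import numpy as np

rng = np.random.default_rng(3)
C_PREVENTIVE, C_CORRECTIVE, HORIZON = 1.0, 10.0, 100.0

def expected_cost(interval, n_runs=200):
    """Monte Carlo estimate of total maintenance cost over the horizon."""
    costs = []
    for _ in range(n_runs):
        t, cost = 0.0, 0.0
        while t < HORIZON:
            life = rng.weibull(2.0) * 10.0        # component lifetime draw
            if life < interval:                   # fails before planned service
                t += life
                cost += C_CORRECTIVE
            else:                                 # preventive action comes first
                t += interval
                cost += C_PREVENTIVE
        costs.append(cost)
    return np.mean(costs)

# Genetic algorithm over the (scalar) maintenance interval.
pop = rng.uniform(1.0, 30.0, size=20)
for _ in range(15):
    fitness = np.array([expected_cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]          # keep the cheapest half
    pairs = rng.choice(10, size=(10, 2))
    children = parents[pairs].mean(axis=1) + rng.normal(0, 1.0, 10)  # crossover + mutation
    pop = np.concatenate([parents, np.clip(children, 0.5, 40.0)])

best = pop[np.argmin([expected_cost(ind) for ind in pop])]
print(f"best preventive interval: {best:.2f}")
```

The point of the pairing is that the GA only ever needs fitness values, so the noisy, simulation-based objective poses no difficulty, whereas gradient-based or analytic methods would struggle with it.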


ACM Transactions on Mathematical Software | 1988

Algorithm 658: ODESSA–an ordinary differential equation solver with explicit simultaneous sensitivity analysis

Jorge R. Leis; Mark A. Kramer

ODESSA is a package of FORTRAN routines for simultaneous solution of ordinary differential equations and the associated first-order parametric sensitivity equations, yielding the ODE solution vector y(t) and the first-order sensitivity coefficients with respect to the equation parameters p, ∂y(t)/∂p. ODESSA is a modification of the widely disseminated initial-value solver LSODE, and retains many of the same operational features. Standard program usage and optional capabilities, installation, and verification considerations are addressed herein.


Computers & Chemical Engineering | 1985

Sensitivity analysis of systems of differential and algebraic equations

Jorge R. Leis; Mark A. Kramer

Formulae are derived for parametric sensitivity analysis of mathematical models consisting of sets of differential and algebraic equations. Such equations often arise in dynamic modeling of equilibrium stage processes, and in solution of partial differential equations via the numerical method of lines. These formulae can be used to efficiently produce the model sensitivity coefficients, simultaneously with the solution of the model.
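The derivation behind such formulae is implicit differentiation of the model equations; a sketch in notation of my own choosing (the paper's exact formulation may differ):

```latex
% For a DAE system in the general implicit form
\[
F\bigl(t,\, y(t;p),\, \dot{y}(t;p),\, p\bigr) = 0,
\]
% differentiating with respect to a parameter $p_j$ and writing
% $s_j = \partial y / \partial p_j$ gives a linear DAE for the sensitivities:
\[
\frac{\partial F}{\partial \dot{y}}\,\dot{s}_j
+ \frac{\partial F}{\partial y}\,s_j
+ \frac{\partial F}{\partial p_j} = 0,
\qquad s_j(t_0) = \frac{\partial y_0}{\partial p_j},
\]
% which can be integrated simultaneously with the model equations,
% reusing the model's Jacobian $\partial F / \partial y$.
```

Because the sensitivity system shares the model's Jacobian, solving it alongside the model costs little more than the model solution itself, which is the efficiency claim of the abstract.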


IEEE Intelligent Systems | 1993

Diagnosing dynamic faults using modular neural nets

James A. Leonard; Mark A. Kramer

The use of radial basis function networks (RBFNs) for diagnosis and classification is discussed. Even though RBFNs can be trained quickly compared to backpropagation networks, the training effort is still significant for large-scale diagnosis problems. Rho-Net, an architecture that decomposes the dynamic classification problem in two ways, making such training tractable, is presented. The first decomposition reduces the amount of training data needed for any stage of the training process by constructing separate networks for each fault class. The second decomposition reduces the dimensionality of the input space by incorporating temporal information at the output of the network, instead of as a temporal window at the input of the net. Application of Rho-Nets to chemical process simulation is discussed.
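A rough sketch of the two decompositions, with illustrative stand-ins (kernel-density class scores rather than trained RBFNs, and simple score averaging as the output-side temporal integration; none of this is the paper's exact design):

```python
# Per-class models plus output-side temporal integration -- illustrative sketch.
import numpy as np

rng = np.random.default_rng(4)

class ClassModel:
    """Kernel-density score for a single fault class (one model per class)."""
    def __init__(self, samples, width=0.5):
        self.samples, self.width = samples, width
    def score(self, x):
        d2 = ((self.samples - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * self.width**2)).mean()

# Decomposition 1: each class model sees only its own training data,
# so no single training stage needs the full data set.
models = {
    "normal": ClassModel(rng.normal([0, 0], 0.3, (40, 2))),
    "fault_A": ClassModel(rng.normal([2, 0], 0.3, (40, 2))),
}

# Decomposition 2: dynamic diagnosis by averaging class scores over a
# trajectory of measurements, instead of widening the input with a time window.
trajectory = rng.normal([2, 0], 0.3, (10, 2))        # drifting into fault A
avg = {name: np.mean([m.score(x) for x in trajectory])
       for name, m in models.items()}
print(max(avg, key=avg.get), avg)                     # -> fault_A
```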

Collaboration


Dive into Mark A. Kramer's collaboration.

Top Co-Authors

James A. Leonard
Massachusetts Institute of Technology

Carlos Rojas-Guzmán
Massachusetts Institute of Technology

Lyle H. Ungar
University of Pennsylvania

F. Eric Finch
Massachusetts Institute of Technology

Lloyd P. M. Johnston
University of New South Wales

O.O. Oyeleye
Massachusetts Institute of Technology

A. Gallagher
Massachusetts Institute of Technology