Michael Amrhein
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Michael Amrhein.
Chemometrics and Intelligent Laboratory Systems | 1996
Michael Amrhein; B. Srinivasan; Dominique Bonvin; M.M. Schumacher
The analysis of spectral measurements using standard factor-analytical (FA) techniques requires the rank of the absorbance matrix to be equal to the number of absorbing species, S. However, in many practical reaction networks, this assumption does not hold. This paper examines various scenarios in which 'rank deficiency' can occur. The most important case arises when the number of independent reactions, R, is less than S. In that case, rank analysis can only reveal R, and standard FA techniques will fail. One possibility is then to perform rotation in the reaction-spectra space of dimension R. Another possibility is to augment the rank of the data matrix to S, for which two experimental methods are developed: rank augmentation by (i) multiple process runs and (ii) addition of reactants or products during the reaction. The composite data matrices are of rank S and, hence, suited to factor analysis. The number of necessary runs or additions can be determined from the rank of both the original and the column-mean-centered data matrices. With rank augmentation, it is possible to determine both the number of independent reactions and the number of absorbing species. Furthermore, the influence of data pretreatment, such as mean centering, normalization, auto-scaling, and differentiation with respect to time or wavelength, on the rank is examined.
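As a rough numerical illustration of the rank arguments above (a sketch on synthetic data, not the authors' code; species, reaction, and wavelength counts are arbitrary), the rank of the raw and column-mean-centered absorbance matrices can be compared as follows:

```python
# Sketch only: rank of an absorbance matrix and of its column-mean-centered
# version, as used to diagnose rank deficiency (all data are synthetic).
import numpy as np

def numerical_rank(X, rtol=1e-8):
    """Rank estimated from singular values above a relative tolerance."""
    s = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(s > rtol * s[0]))

rng = np.random.default_rng(0)
S, R, nt, nw = 4, 2, 50, 200              # species, reactions, times, wavelengths
N = rng.standard_normal((R, S))           # stoichiometric matrix
X = np.abs(rng.random((nt, R)))           # extents of reaction over time
c0 = np.abs(rng.random(S))                # initial concentrations
C = c0 + X @ N                            # concentrations: offset + rank-R variation
E = np.abs(rng.random((S, nw)))           # pure-component spectra
D = C @ E                                 # bilinear (Beer-Lambert) absorbance data

print("rank(D)          =", numerical_rank(D))               # R + 1 = 3, not S = 4
print("rank(centered D) =", numerical_rank(D - D.mean(0)))   # R = 2
```

In this construction the raw matrix has rank R + 1 because of the initial conditions, and column mean centering removes that extra direction, leaving rank R; with rank augmentation the composite matrix would reach rank S.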
Computers & Chemical Engineering | 1994
Wolfgang Marquardt; Michael Amrhein
Reference: LA-CONF-1993-011.
Biotechnology Progress | 2009
Michal Dabros; Michael Amrhein; Dominique Bonvin; Ian Marison; Urs von Stockar
Real‐time data reconciliation of concentration estimates of process analytes and biomass in microbial fermentations is investigated. A Fourier‐transform mid‐infrared spectrometer predicting the concentrations of process metabolites is used in parallel with a dielectric spectrometer predicting the biomass concentration during a batch fermentation of the yeast Saccharomyces cerevisiae. Calibration models developed off‐line for both spectrometers suffer from poor predictive capability due to instrumental and process drifts unseen during calibration. To address this problem, the predicted metabolite and biomass concentrations, along with off‐gas analysis and base addition measurements, are reconciled in real‐time based on the closure of mass and elemental balances. A statistical test is used to confirm the integrity of the balances, and a non‐negativity constraint is used to guide the data reconciliation algorithm toward positive concentrations. It is verified experimentally that the proposed approach reduces the standard error of prediction without the need for additional off‐line analysis.
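A minimal sketch of the reconciliation step described above (an illustrative toy, not the published algorithm; the balance matrix, weights, and solver choice are assumptions):

```python
# Sketch only: reconcile drift-corrupted concentration estimates c_hat against
# linear mass/elemental balances A @ c = b with a non-negativity constraint.
import numpy as np
from scipy.optimize import minimize

def reconcile(c_hat, A, b, sigma):
    """Weighted least-squares correction of c_hat that closes the balances
    A @ c = b while keeping all concentrations non-negative."""
    w = 1.0 / np.asarray(sigma) ** 2
    res = minimize(lambda c: np.sum(w * (c - c_hat) ** 2),
                   np.clip(c_hat, 0.0, None),
                   bounds=[(0.0, None)] * len(c_hat),
                   constraints={"type": "eq", "fun": lambda c: A @ c - b})
    return res.x

# Toy example: two analytes whose carbon contents must close a known balance
c_hat = np.array([1.2, -0.1])     # raw on-line estimates (one is negative)
A = np.array([[6.0, 2.0]])        # carbon atoms per mole of each analyte
b = np.array([7.0])               # balance closure target
print(reconcile(c_hat, A, b, sigma=[0.1, 0.05]))
```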
Chemometrics and Intelligent Laboratory Systems | 1999
Michael Amrhein; B. Srinivasan; Dominique Bonvin; M.M. Schumacher
Calibration is the first step in the prediction of concentrations from spectral measurements of chemical reaction systems. It is well known that the species in the calibration set must include those in the new (prediction) set. Typically, the calibration set is constructed from nonreacting mixtures of known concentrations. In this paper, it is proposed instead to use calibration data from reacting mixtures, thereby avoiding the independent variation of possibly highly reactive intermediates. However, for the prediction to be correct, restrictions on the initial and inlet concentrations of the new data set must be imposed. When these restrictions cannot be met, calibration of data in reaction-variant form is proposed. The methodology is illustrated experimentally using an esterification reaction.
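The basic calibration/prediction step referred to above can be pictured with a plain least-squares model on synthetic spectra (the paper's reaction-variant treatment and the restrictions on initial and inlet concentrations are not reproduced here):

```python
# Sketch only: linear calibration of spectra against known concentrations,
# followed by prediction for new mixtures (all data are synthetic).
import numpy as np

rng = np.random.default_rng(1)
E = np.abs(rng.random((3, 120)))          # pure-component spectra, 3 species
C_cal = np.abs(rng.random((20, 3)))       # calibration concentrations
A_cal = C_cal @ E + 1e-4 * rng.standard_normal((20, 120))   # calibration spectra

B = np.linalg.pinv(A_cal) @ C_cal         # least-squares calibration model

C_new = np.abs(rng.random((5, 3)))        # unknown concentrations (for checking)
A_new = C_new @ E                         # spectra of new (reacting) mixtures
print(np.round(A_new @ B - C_new, 4))     # prediction error, small here
```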
Chemical Engineering Science | 1999
Michael Amrhein; B. Srinivasan; Dominique Bonvin
Target factor analysis (TFA) has been used successfully with reaction data to determine, without knowledge of reaction kinetics, the number of independent reactions and the corresponding stoichiometries. This paper analyzes the applicability of TFA techniques to reaction data on the basis of concentration measurements. It is shown that, in some cases, a data matrix that is well suited to existing TFA techniques can be constructed from concentration measurements, material exchange terms, and initial conditions (data pre-treatment). Otherwise, TFA needs to be performed directly on the measured concentration data. In the latter scenario, knowledge of reaction-invariant relationships is required to specify necessary and sufficient conditions for the acceptance of stoichiometric targets. The case of unmeasured species is also investigated.
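A toy version of target testing (illustrative only; the paper's necessary and sufficient acceptance conditions are more involved) checks whether a candidate stoichiometry lies in the row space of reaction-variant concentration data:

```python
# Sketch only: accept a stoichiometric target if it is (numerically) contained
# in the row space of the concentration data matrix.
import numpy as np

def target_residual(D, target, r):
    """Relative residual of `target` after projection onto the first r right
    singular vectors (row space) of the data matrix D."""
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    V = Vt[:r].T
    t = target / np.linalg.norm(target)
    return np.linalg.norm(t - V @ (V.T @ t))

rng = np.random.default_rng(2)
N = np.array([[-1.0, -1.0, 1.0, 0.0],     # true stoichiometries: A + B -> C
              [ 0.0, -1.0, -1.0, 1.0]])   #                       B + C -> D
X = np.abs(rng.random((40, 2)))           # extents of reaction over time
D = X @ N                                 # reaction-variant concentration data

print(target_residual(D, np.array([-1.0, -1.0, 1.0, 0.0]), r=2))  # ~0: accept
print(target_residual(D, np.array([-1.0,  0.0, 1.0, 0.0]), r=2))  # large: reject
```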
Computers & Chemical Engineering | 1996
Michael Amrhein; B. Srinivasan; Dominique Bonvin; M.M. Schumacher
Near-Infrared (NIR) spectroscopic methods for on-line concentration estimation are gaining popularity in chemical production. NIR data, however, are nonlinear in the concentrations, which renders standard factor-analytical (FA) techniques inapplicable for concentration estimation. To infer concentrations on-line from NIR spectra, the following steps are proposed: (i) measure both the NIR and Mid-Infrared (MIR) spectra of unknown reaction mixtures in the laboratory (MIR spectroscopy has the desirable property of linearity with respect to concentrations), (ii) apply FA techniques to the MIR data and infer concentrations, (iii) calibrate NIR against the estimated concentrations using standard nonlinear regression methods, and (iv) use the calibration model in subsequent production runs. Experimental results are presented to illustrate the effectiveness of this method.
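Step (iii) of the procedure can be sketched with any standard nonlinear regressor; kernel ridge regression is used below purely as a placeholder, and all data are synthetic:

```python
# Sketch only: calibrate NIR spectra against concentrations inferred from MIR
# data, then predict concentrations for new NIR spectra.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
nir = rng.random((30, 150))               # NIR spectra of laboratory mixtures
c_mir = rng.random((30, 2))               # concentrations inferred from MIR by FA

model = KernelRidge(kernel="rbf", alpha=1e-3).fit(nir, c_mir)   # step (iii)
c_online = model.predict(rng.random((5, 150)))                  # step (iv)
print(c_online.shape)
```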
Applied Spectroscopy | 2007
Michal Dabros; Michael Amrhein; Paman Gujral; Urs von Stockar
Spectrometers are enjoying increasing popularity in bioprocess monitoring due to their non-invasiveness and in situ sterilizability. Their on-line applicability and high measurement frequency create an interesting opportunity for process control and optimization tasks. However, building and maintaining a robust calibration model for the on-line estimation of key variables of interest (e.g., concentrations of selected metabolites) is time consuming and costly. One of the main drawbacks of using infrared (IR) spectrometers on-line is that IR spectra are compromised by both long-term drifts and short-term sudden shifts due to instrumental effects or process shifts that might be unseen during calibration. The effect of instrumental drifts can normally be reduced by referencing the measurements against a background solution, but this option is difficult to implement for single-beam instruments due to sterility issues. In this work, in order to maintain the robustness of calibration models for single-beam IR and to increase resistance to process and instrumental drifts, planned spikes of small amounts of analytes were injected periodically into the monitored medium. The corresponding measured difference spectra were scaled up and used as reference measurements for updating the calibration model in real time based on dynamic orthogonal projection (DOP). Applying this technique led to a noticeable decrease in the standard error of prediction of metabolite concentrations monitored during an anaerobic fermentation of the yeast Saccharomyces cerevisiae.
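The spiking idea can be sketched as follows (a hypothetical scaling rule applied to synthetic spectra; the dynamic orthogonal projection update that consumes these references is not shown):

```python
# Sketch only: turn a planned analyte spike into a scaled difference spectrum
# that can serve as a reference measurement for model updating.
import numpy as np

def spike_reference(spectrum_before, spectrum_after, delta_c, target_c):
    """Scale the spike difference spectrum from the injected amount delta_c
    up to a reference concentration target_c."""
    return (spectrum_after - spectrum_before) * (target_c / delta_c)

rng = np.random.default_rng(4)
before = rng.random(300)                  # in-situ spectrum just before the spike
analyte = rng.random(300)                 # (unknown) in-situ analyte response
after = before + 0.1 * analyte            # spectrum after a 0.1 g/L spike
ref = spike_reference(before, after, delta_c=0.1, target_c=2.0)
print(np.allclose(ref, 2.0 * analyte))    # True: response scaled to 2.0 g/L
```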
Analytica Chimica Acta | 2009
Paman Gujral; Michael Amrhein; Dominique Bonvin
On-line measurements from first-order instruments such as spectrometers may be compromised by instrumental, process and operational drifts that are not seen during off-line calibration. This can render the calibration model unsuitable for prediction of key components such as analyte concentrations. In this work, infrequently available on-line reference measurements of the analytes of interest are used for drift correction. The drift-correction methods that include drift in the calibration set are referred to as implicit correction methods (ICM), while explicit correction methods (ECM) model the drift based on the reference measurements and make the calibration model orthogonal or invariant to the space spanned by the drift. Under some working assumptions such as linearity between the concentrations and the spectra, necessary and sufficient conditions for correct prediction using ICM and ECM are proposed. These so-called space-inclusion conditions can be checked on-line by monitoring the Q-statistic. Hence, violation of these conditions implies the violation of one or more of the working assumptions, which can be used, e.g., to infer the need for new reference measurements. These conditions are also valid for rank-deficient calibration data, i.e., when the concentrations of the various species are linearly dependent. A constraint on the kernel used in ECM follows from the space-inclusion condition. This kernel does not estimate the drift itself but leads to an unbiased estimate of the drift space. In a noise-free environment, it is shown that ICM and ECM are equivalent. However, in the presence of noise, a Monte Carlo simulation shows that ECM performs slightly better than ICM. A paired t-test indicates that this difference is statistically significant. When applied to experimental fermentation data, ICM and ECM lead to a significant reduction in prediction error for the concentrations of five metabolites predicted from infrared spectra.
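One explicit-correction variant in the spirit of ECM can be sketched as follows (this toy assumes the pure-component spectra are known so that the drift can be isolated by simple reconstruction, which is a simplification of the paper's kernel-based formulation):

```python
# Sketch only: estimate a drift subspace from on-line reference measurements
# and make calibration and prediction orthogonal (invariant) to it.
import numpy as np

def drift_basis(X_ref, C_ref, E, tol=1e-8):
    """Drift directions = reference spectra minus the drift-free spectra
    reconstructed from the reference concentrations (bilinear model C @ E)."""
    U, s, _ = np.linalg.svd((X_ref - C_ref @ E).T, full_matrices=False)
    return U[:, s > tol * s[0]]           # orthonormal basis of the drift space

def orthogonalize(X, V):
    """Project each spectrum (row of X) onto the complement of span(V)."""
    return X - (X @ V) @ V.T

rng = np.random.default_rng(5)
E = np.abs(rng.random((3, 200)))          # pure-component spectra (toy assumption)
C_cal = np.abs(rng.random((25, 3)))
X_cal = C_cal @ E                         # drift-free off-line calibration spectra

d = rng.random(200)                       # one instrumental drift direction
C_ref = np.abs(rng.random((4, 3)))        # infrequent on-line reference analyses
X_ref = C_ref @ E + np.outer([1.0, 2.0, 3.0, 4.0], d)   # drifted on-line spectra

V = drift_basis(X_ref, C_ref, E)
B = np.linalg.pinv(orthogonalize(X_cal, V)) @ C_cal      # drift-invariant model

C_new = np.abs(rng.random((5, 3)))
X_new = C_new @ E + 5.0 * d               # new, strongly drifted spectra
print(np.allclose(orthogonalize(X_new, V) @ B, C_new))   # True in this noise-free toy
```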
Advances in Computing and Communications | 2012
Nirav Bhatt; Michael Amrhein; Balasubrahmanya Srinivasan; Philippe Müllhaupt; Dominique Bonvin
Reaction systems can be represented by first-principles models that describe the evolution of the states (typically concentrations, volume, and temperature) by means of conservation equations of differential nature and constitutive equations of algebraic nature. The resulting models often contain redundant states since the various concentrations are not all linearly independent; indeed, the variability observed in the concentrations is caused by the reactions, the mass transfer between phases, and the inlet and outlet streams. A minimal state representation is a dynamic model that exhibits the same behavior as the original model but has no redundant states. This paper considers the material balance equations associated with an open fluid-fluid reaction system that involves S_l species, R independent reactions, p_l independent inlets, and one outlet in the first fluid phase (e.g., the liquid phase) and S_g species, p_g independent inlets, and one outlet in the second fluid phase (e.g., the gas phase). In addition, p_m species transfer between the two phases. The (S_l + S_g)-dimensional model is transformed to q = R + 2p_m + p_l + p_g + 2 variant states and S_l + S_g - q invariant states. Then, using the concept of accessibility of nonlinear systems, the conditions under which the transformed model is a minimal state representation are derived. It is shown that the minimal number of concentration measurements needed to reconstruct the full state without kinetic information is R + p_m. The simulated chlorination of butanoic acid is used to illustrate the various concepts developed in the paper.
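A much-simplified sketch of the variant/invariant decomposition, for a closed single-phase (batch) system rather than the paper's open fluid-fluid case: reaction invariants are directions in composition space annihilated by the stoichiometric matrix, since d(z^T n)/dt = z^T N^T r V = 0 whenever N z = 0, regardless of the kinetics.

```python
# Sketch only: reaction invariants of a batch system as the null space of the
# stoichiometric matrix N (the paper's open two-phase case adds mass-transfer,
# inlet, and outlet contributions to the variant space).
import numpy as np
from scipy.linalg import null_space

N = np.array([[-1.0, -1.0, 1.0, 0.0],     # A + B -> C
              [ 0.0, -1.0, -1.0, 1.0]])   # B + C -> D
R, S = N.shape

Z = null_space(N)                         # S x (S - R) basis of invariant directions
print("variant states   :", R)            # 2
print("invariant states :", S - R)        # 2
print(np.allclose(N @ Z, 0.0))            # invariants satisfy N @ z = 0
```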
Journal of Chemometrics | 2011
Paman Gujral; Michael Amrhein; Rolf Ergon; Barry M. Wise; Dominique Bonvin
In principal component regression (PCR) and partial least-squares regression (PLSR), the use of unlabeled data, in addition to labeled data, helps stabilize the latent subspaces in the calibration step, typically leading to a lower prediction error. For using unlabeled data in PLSR, a non-sequential approach based on optimal filtering (OF) has been proposed in the literature. In this work, a sequential version of the OF-based PLSR and a PCA-based PLSR (PLSR applied to PCA-preprocessed data) are proposed. It is shown analytically that the sequential version of the OF-based PLSR is equivalent to PCA-based PLSR, which leads to a new interpretation of OF. Simulated and experimental data sets are used to point out the usefulness and pitfalls of using unlabeled data. Unlabeled data can replace labeled data to some extent, thereby leading to an economic benefit. However, in the presence of drift, the use of unlabeled data can result in an increase in prediction error compared to that obtained with a model based on labeled data alone.
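A minimal sketch of PCA-based PLSR with unlabeled data (an illustrative reading using scikit-learn; component counts and data are placeholders):

```python
# Sketch only: stabilize the latent subspace with pooled labeled and unlabeled
# spectra (PCA), then regress the labels on the PCA scores with PLSR.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
X_lab = rng.random((20, 300))             # labeled spectra (reference values known)
y_lab = rng.random((20, 1))
X_unlab = rng.random((200, 300))          # unlabeled spectra (no reference analyses)

pca = PCA(n_components=5).fit(np.vstack([X_lab, X_unlab]))   # uses unlabeled data
pls = PLSRegression(n_components=3).fit(pca.transform(X_lab), y_lab)

X_new = rng.random((4, 300))
print(pls.predict(pca.transform(X_new)).shape)
```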