Robustness of nuclear core activity reconstruction by data assimilation
Bertrand Bouriquet, Jean-Philippe Argaud, Patrick Erhard, Sébastien Massart, Angélique Ponçot, Sophie Ricci, Olivier Thual
October 29, 2018
Abstract
We apply a data assimilation technique, inspired by meteorological applications, to perform an optimal reconstruction of the neutronic activity field in a nuclear core. Both measurements and information coming from a numerical model are used. We first study the robustness of the method when the amount of measured information decreases. We then study the influence of the nature of the instruments and of their spatial repartition on the efficiency of the field reconstruction.
Keywords:
Data assimilation, Schur complement, neutronics, activity reconstruction, nuclear in-core measurements
In this paper, we focus on the efficiency of a neutronic field reconstruction procedure with data assimilation when varying the number and the repartition of the available instruments. The data assimilation technique used for this reconstruction combines, in an optimal and consistent way, information coming either from measurements or from a numerical model.

Data assimilation methods are not commonly used in nuclear core physics [1], contrary to meteorology or oceanography [2, 3, 4]. The procedure proposed here is the same as the one meteorologists use to obtain high-accuracy reconstructed meteorological fields in time and space. This is the case, for example, of the commonly used meteorological re-analysis data set ERA-40 [5], among others [6, 7].

One of the main advantages of data assimilation is that it takes into account every kind of heterogeneous information within the same framework. Moreover, its formalism adapts itself to changes in the instrument configuration. We exploit this last property here to study the quality of the reconstructed activity field as a function of the number of available measurements. A major point in this study is to estimate the robustness of the instrumented system in the framework of a data assimilation reconstruction procedure. Moreover, such a study also informs about the effect of instrumentation design within a nuclear core and about the resilience to instrument removal.

In this paper, we first detail the data assimilation method and how it addresses field reconstruction.

∗ [email protected]
Sciences de l'Univers au CERFACS, URA CERFACS/CNRS No 1875, 42 avenue Gaspard Coriolis, F-31057 Toulouse Cedex 01, France
Électricité de France, 1 avenue du Général de Gaulle, F-92141 Clamart Cedex, France
Université de Toulouse, INPT, UPS, IMFT, Allée Camille Soula, F-31400 Toulouse, France
To evaluate the influence of the number of instruments on the activity field reconstruction, the repeated application of the method faces some huge computational issues. Those difficulties are overcome using a matrix inversion method based on the Schur complement. A detailed presentation of this method is given in Appendix A. We first present the results on a standard case with synthetic measurements and comment on them. To get a better understanding, we then extend the results to other instrument repartitions and other error settings. This allows us to draw some conclusions on the effects of errors and of instrument repartition in activity field reconstruction using data assimilation.

We briefly introduce the data assimilation key points useful to understand its use as applied in [8, 9, 10]. Data assimilation is a wider domain, and these techniques are, for example, at the heart of today's operational meteorological forecasts [11]. It is through advanced data assimilation methods that weather forecasting has been drastically improved during the last 30 years. All the available data, such as satellite measurements, as well as sophisticated numerical models, are used.

The ultimate goal of data assimilation methods is to estimate the inaccessible true value of the system state, x^t, where the t index stands for "true state", in the so-called "control space". The basic idea is to combine information from an a priori on the state of the system (usually called x^b, with b for "background") and measurements (referenced as y^o). The background is usually the result of numerical simulations, but can also be derived from any a priori knowledge. The result of data assimilation is called the analysis, denoted by x^a, and it is an estimation of the true state x^t we want to approximate.

The control and observation spaces are not necessarily the same, and a bridge between them needs to be built.
This is the observation operator H, which transforms values from the space of the background to the space of observations. For our data assimilation purpose we will use its linearisation H around the background. The inverse operation, going from observation increments to background increments, is given by the transpose H^T of H.

Two other ingredients are necessary. The first one is the covariance matrix of observation errors, defined as R = E[(y^o − H(x^t)).(y^o − H(x^t))^T], where E[.] is the mathematical expectation. It can be obtained from the known errors on unbiased measurements, which means E[y^o − H(x^t)] = 0. The second one is the covariance matrix of background errors, defined as B = E[(x^b − x^t).(x^b − x^t)^T]. It represents the error on the a priori state, assuming it to be unbiased, following the no-bias property E[x^b − x^t] = 0. There are many ways to get this a priori state and the background error matrices. However, these matrices are commonly the output of a model together with an evaluation of its accuracy, or the result of expert knowledge.

It can be proved, within this formalism, that the Best Linear Unbiased Estimator (BLUE) x^a, under the linear and static assumptions, is given by the following equation:

    x^a = x^b + K (y^o − H x^b),    (1)

where K is the gain matrix:

    K = B H^T (H B H^T + R)^{−1}.    (2)

Moreover, we can get the analysis error covariance matrix A, characterising the analysis errors x^a − x^t. This matrix can be expressed from K as:

    A = (I − K H) B,    (3)

where I is the identity matrix.

It is worth noting that solving Equation 1 is, if the probability distribution is Gaussian, equivalent to minimising the following function J(x), x^a being the optimal solution:

    J(x) = (x − x^b)^T B^{−1} (x − x^b) + (y^o − H x)^T R^{−1} (y^o − H x).    (4)

This minimisation is known in data assimilation as the 3D-Var methodology [8].
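As an illustration, Equations 1-4 can be exercised on a small toy problem. The sizes, operators and covariances below are illustrative stand-ins, not those of the PWR900 application.

```python
import numpy as np

# Toy BLUE analysis (Equations 1-3). All dimensions and matrices here are
# illustrative; the observation operator is a simple component selection.
rng = np.random.default_rng(0)

n, p = 6, 3                      # control-space and observation-space sizes
x_t = rng.normal(size=n)         # "true" state (inaccessible in practice)

H = np.zeros((p, n))             # linearised observation operator:
H[[0, 1, 2], [0, 2, 4]] = 1.0    # selects components 0, 2 and 4

B = 0.2 * np.eye(n)              # background error covariance
R = 0.1 * np.eye(p)              # observation error covariance

x_b = x_t + rng.multivariate_normal(np.zeros(n), B)      # background
y_o = H @ x_t + rng.multivariate_normal(np.zeros(p), R)  # observations

# Gain matrix, Equation 2: K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
# Analysis, Equation 1
x_a = x_b + K @ (y_o - H @ x_b)
# Analysis error covariance, Equation 3
A = (np.eye(n) - K @ H) @ B
```

On this toy case the analysis of Equation 1 coincides with the minimiser of the 3D-Var cost function of Equation 4, as stated above.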
The framework of this study is the standard configuration of a 900 MWe nuclear Pressurized Water Reactor (PWR900). To perform data assimilation, both a simulation code and data are needed. For the simulation code, the EDF experimental calculation code for nuclear cores, COCAGNE, is used in a standard configuration. The basic features of this model are described in Section 3.1.

To have a good understanding of the instrumentation effect, we want to study various kinds of configurations, even some that do not exist operationally and so cannot be tested experimentally. For that purpose, synthetic data are used, which allows a homogeneous approach throughout the document. Synthetic data are generated from a model simulation, filtered through an instrument model, and noised according to a predefined measurement error density function (usually of Gaussian type).

In the present case, we study the activity field reconstruction. A horizontal slice of a PWR900 core is represented in Figure 1. There is a total of 157 assemblies within this core. Among those assemblies, 50 are instrumented with Mobile Fission Chambers (MFC). The assemblies are divided into 29 vertical levels. Thus, the size of the control x is 4553 (157 × 29) and the size of y^o is 1450 (50 × 29).

The aim of a neutronic code like COCAGNE is to evaluate the neutronic activity field and all associated values within the nuclear core. This field depends on the
Figure 1: The positions of MFC instruments in the nuclear core are localised in assemblies in black within the horizontal slice of the core. The assemblies without instrument are marked in white and the reflector, out of the reactive core, is in gray.

position in the core and on the neutron energy. To do such an evaluation, the population of neutrons is divided into several groups of energy. In the present case, only two groups are taken into account, giving the neutronic flux Φ = (Φ_1, Φ_2) (even if the present code has no limit on the number of groups). The material properties depend on the position in the core, as does the neutronic flux Φ, identified by solving the two-group diffusion equations:

    −div(D_1 grad Φ_1) + (Σ_{a1} + Σ_r) Φ_1 = (1/k) (ν_1 Σ_{f1} Φ_1 + ν_2 Σ_{f2} Φ_2)
    −div(D_2 grad Φ_2) + Σ_{a2} Φ_2 − Σ_r Φ_1 = 0,    (5)

where k is the effective neutron multiplication factor, all the quantities and the derivatives (except k) depend on the position in the core, 1 and 2 are the group indexes, Σ_r is the scattering cross section from group 1 to group 2, and, for each group, Φ is the neutron flux, Σ_a is the absorption cross section, D is the diffusion coefficient, and ν Σ_f is the corrected fission cross section.

The cross sections also depend implicitly on the concentration of boron, a substance added to the water of the primary circuit to control the neutronic fission reaction, through a supplementary feedback model. This model takes into account the temperature of the materials and of the neutron moderator, given by external thermal and thermo-hydraulic models. A detailed description of the core physics and of the numerical solving can be found in reference [12].

The overall numerical resolution consists in searching for the boron concentration such that the eigenvalue k is equal to 1, which means that the nuclear power production is stable and self-sustaining.
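The eigenvalue computation behind this search can be sketched, in a very reduced form, by the classical power iteration for the k-eigenvalue problem L Φ = (1/k) F Φ. The 2 × 2 loss and fission operators below are illustrative stand-ins for the discretised two-group diffusion operators, not COCAGNE data, and the outer boron search that brings k to 1 is omitted.

```python
import numpy as np

# Toy k-effective power iteration on stand-in two-group operators.
# L gathers leakage + absorption + removal; the scattering 1 -> 2 feeds
# group 2. F holds fission production (fast neutrons only). Illustrative.
L = np.array([[0.10,  0.00],
              [-0.02, 0.12]])
F = np.array([[0.005, 0.120],
              [0.000, 0.000]])

phi = np.ones(2)   # initial flux guess
k = 1.0            # initial eigenvalue guess
for _ in range(200):
    src = F @ phi / k                  # fission source at current estimate
    phi_new = np.linalg.solve(L, src)  # "diffusion solve" step
    k *= phi_new.sum() / phi.sum()     # eigenvalue update
    phi = phi_new / np.linalg.norm(phi_new)
```

At convergence, k is the dominant eigenvalue of L^{-1} F; in the real computation the boron concentration (hence the cross sections) would then be adjusted until k = 1.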
This is called the critical boron concentration computation.

The activity in the core is obtained through a combination of the fluxes Φ = (Φ_1, Φ_2), given on the chosen mesh of the core. Using homogeneous materials for each assembly (for example 157 in a PWR900 reactor), and choosing a vertical mesh compatible with the core (usually 29 vertical levels), this results in an activity field of size 157 × 29 = 4553 that covers the whole core.

The observation operator H is the composition of a selection and of a normalisation procedure. The selection procedure extracts, among the values of the model space, the values corresponding to effective measurements. The normalisation procedure is a scaling of the values with respect to the geometry and power of the core. The overall operation is non-linear. However, within a range of values compatible with the assimilation procedure, we can calculate the associated linear operator H. This observation matrix maps the model space (of size 4553) to the observation space (of size 1450).

The B matrix represents the covariance between the spatial errors of the background. In order to obtain it, we estimate it as the product of a correlation matrix C by a normalisation factor.

The correlation matrix C is built using a positive function that defines the correlations between instruments with respect to a pseudo-distance in model space. Positive functions have the property (via Bochner's theorem) of yielding symmetric positive definite matrices when they are used as matrix generators [13, 14]. In the present case, a Second Order Auto-Regressive (SOAR) function is used to prescribe the C matrix. In such a function, the amount of correlation depends on the Euclidean distance between spatial points. The radial and vertical correlation lengths (L_r and L_z respectively, associated with the radial coordinate r and the vertical coordinate z) have different values, which means we are dealing with a global pseudo-Euclidean distance. The function used can be expressed as follows:

    C(r, z) = (1 + r/L_r) (1 + |z|/L_z) exp(−r/L_r − |z|/L_z).    (6)

The matrix obtained by Equation 6 is a correlation matrix. It is then multiplied by a suitable variance coefficient to get the covariance matrix. This coefficient is obtained by a statistical study of the difference between model and measurements in a real case.
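The construction of B from the SOAR correlation of Equation 6 can be sketched as follows; the point coordinates, the correlation lengths and the variance coefficient are all illustrative.

```python
import numpy as np

# Sketch of the SOAR background correlation matrix (Equation 6) on a small
# set of points; coordinates, L_r, L_z and the variance are illustrative.
def soar_correlation(points, L_r, L_z):
    """C(r, z) = (1 + r/L_r)(1 + |z|/L_z) exp(-r/L_r - |z|/L_z)."""
    pts = np.asarray(points, dtype=float)
    dxy = pts[:, None, :2] - pts[None, :, :2]
    r = np.linalg.norm(dxy, axis=-1)               # radial (horizontal) distance
    z = np.abs(pts[:, None, 2] - pts[None, :, 2])  # vertical distance
    return (1 + r / L_r) * (1 + z / L_z) * np.exp(-r / L_r - z / L_z)

rng = np.random.default_rng(2)
points = rng.uniform(0, 10, size=(30, 3))   # (x, y, z) positions
C = soar_correlation(points, L_r=3.0, L_z=2.0)
sigma2 = 0.05                               # background error variance
B = sigma2 * C                              # covariance = variance * correlation
```

The resulting C is symmetric, has unit diagonal, and stays positive semidefinite, as guaranteed by the positive-function construction.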
In our case, the size of the B matrix is related to the size of the model space, so it is (4553 × 4553).

The observation error covariance matrix R is approximated by a diagonal matrix. This means it is assumed that no significant correlation exists between the measurement errors of the MFC. The usual modelling is to take those values as a percentage of the observation. This can be expressed as:

    R_jj = (α (y^o)_j)², ∀j.    (7)

The parameter α is fixed according to the accuracy of the measurement and the representativeness error associated with the instrument. The size of the R matrix is related to the size of the observation space, so it is (1450 × 1450).

To test the robustness, many BLUE calculations need to be done to evaluate the quality of the results under instrument configuration modifications. We want an evaluation of the quality of the reconstruction as a function of the number of instruments, with a statistically significant result. To efficiently perform these numerous computations, a specific method using the Schur complement was developed. The details of this new method are reported in Appendix A.

Here, we are interested in the evaluation of the quality of the analysis x^a as a function of the amount of provided information. To quantify this effect, we compute statistics over 200 scenarios of instrument removal. We compute those statistics under several hypotheses, starting from the complete instrument configuration and then removing instruments two by two until none remains.
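The synthetic noising of the observations and the diagonal R of Equation 7 can be sketched together; the noise-free values and the α = 1% level below are illustrative.

```python
import numpy as np

# Sketch of synthetic observations and of the diagonal R of Equation 7.
# The "true" activity values and alpha are illustrative stand-ins.
rng = np.random.default_rng(4)

alpha = 0.01                                   # relative measurement error
y_true = 1.0 + 0.1 * rng.random(1450)          # noise-free measured activity
y_o = y_true * (1.0 + alpha * rng.normal(size=1450))  # Gaussian relative noise

# R_jj = (alpha * (y_o)_j)^2; all off-diagonal terms are assumed zero:
R = np.diag((alpha * y_o) ** 2)
```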
The calculations are done on the basis of the algorithm and the data assimilation hypotheses described previously.

To quantify the impact of removed instruments on the analysis, we look at the percentage quantity v defined as follows:

    v = 100 (||y^o_ref − H x^b|| − ||y^o_ref − H x^a||) / (||y^o_ref − H x^b|| − ||y^o_ref − H x^a_ref||),    (8)

where x^a_ref corresponds to the analysis when no instrument is removed (this is the best estimation possible with respect to the information available on the system), and where y^o_ref and H are the reference observations and the observation operator used to build x^a_ref, H standing for the observation operator when no instrument is removed. This criterion, which is based on the norm of the innovation vector y^o_ref − H x^b, focuses on measurements. Since ||y^o_ref − H x^a|| is greater than ||y^o_ref − H x^a_ref|| (best estimate) and smaller than ||y^o_ref − H x^b|| (innovation), v is a measure of the quality of the analysis.

Such a definition has several advantages. First of all, the limits of this function are interesting: on the one hand, the limit when no instrument is removed is 100%; on the other hand, the limit when all instruments are removed is 0%. With such a formula we can compare the variation of information on a unique scale. If we obtain some value above the limit of 100%, this means the parametrisation of the data assimilation was not done correctly. Another interest of this formula is that it can be applied directly to experimental data as well as to synthetic data, without any change.

Figure 2 presents the results for the quantity v as a function of the number of removed instruments. As expected, the relative quality of the reconstruction decreases as a function of the number of instruments removed. However, within this decrease, three phases can be seen:
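A minimal sketch of the criterion of Equation 8 follows; the vectors stand in for the reference observations, the background and the two analyses, and their values are illustrative.

```python
import numpy as np

# Quality criterion v of Equation 8, on illustrative vectors.
def quality_v(y_ref, Hx_b, Hx_a, Hx_a_ref):
    num = np.linalg.norm(y_ref - Hx_b) - np.linalg.norm(y_ref - Hx_a)
    den = np.linalg.norm(y_ref - Hx_b) - np.linalg.norm(y_ref - Hx_a_ref)
    return 100.0 * num / den

y_ref    = np.array([1.00, 1.05, 0.95])  # reference observations
Hx_b     = np.array([0.90, 1.15, 1.00])  # background, furthest from y_ref
Hx_a_ref = np.array([0.99, 1.06, 0.95])  # full-network analysis, closest
Hx_a     = np.array([0.96, 1.09, 0.97])  # degraded-network analysis, in between
```

By construction, v = 100% when the analysis equals the full-network reference analysis, and 0% when it is no better than the background.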
Figure 2: The plain curve represents the value v given by Equation 8 as a function of the number of instruments removed. These results come from a typical PWR900 instrument location. The two other lines are linear regressions corresponding to the two decreasing regimes.

1. A first phase of slow decrease until roughly 20 instruments have been removed. This phase is rather clear and can be fitted by a linear regression with a slope of −1.64 (arbitrary units (a.u.) per instrument). The fit is shown as the (green) dashed line in Figure 2.

2. After 20 instruments removed, the decrease becomes steeper. The second linear fit has a slope of −2.28 (a.u. per instrument). This fit is shown as the (blue) dotted line in Figure 2.

3. Beyond 40 instruments removed, we reach a third phase of stagnation, then a brutal decrease towards 0, the limit value imposed by Equation 8.

This characteristic behaviour can be seen in the several cases that we studied. We have also noticed it on real measurements [15, 16]. The transition between the first two decreasing phases is especially strong when the analysis is done using real measurements.

First, we explain why the third phase is marked by a stagnation of the mean value of ||y^o − H x^a|| over the set of removal scenarios taken into account. To understand that effect, we work on both the cases where two instruments are removed (i.e.
48 remain) and the cases where only two instruments remain. In those two cases, at most 1225 scenarios are possible, corresponding to the C(50, 2) combinations. Using the hybrid Schur method, we can calculate all those cases with a rather cheap computing time and obtain good statistics. We plot the distribution of the value of ||y^o − H x^a|| over all the scenarios in Figure 3.
Figure 3: Distribution of the norms ||y^o − H x^a|| for all the cases where only two instruments are suppressed, or only two instruments remain, in the instrument network of a PWR900 reactor.

In Figure 3 we notice a big difference between the two plotted distributions, corresponding to the cases where two instruments are removed (i.e.
48 remain) or two remain. The shift of the mean value between those two extreme cases is logical, as the available information changes dramatically. However, the shape of the distribution also changes vastly: we move from a very sharp distribution when 48 instruments remain to a rather broad one when two instruments remain.

The first interesting point of this shape change is that the instruments do not all have the same influence on the activity field reconstruction. To understand this effect better, let us assume that all the instruments are equivalent. In that case, as the numbers of scenarios present in the two distributions of Figure 3 are the same, only a shift of the mean value should have been seen. However, we observe not only a translation of the distribution but also a broadening. Thus, the instruments are not equivalent within the data assimilation procedure, in terms of the marginal information assigned to each instrument, depending on its location in the core.

The second point of interest is that the broadening of the distribution is asymmetric: the distribution extends towards the higher values of the norm. This effect explains the stagnation of the quantity v when few instruments remain. The source of the stagnation is the discrepancy between the most probable value of the distribution and the mean value, which is higher. The most probable value alone would lead to a decrease without stagnation. However, looking at the mean value (which has more physical meaning), we see that it stagnates due to the asymmetric broadening, which compensates the overall decrease in mean value.

We now investigate the origin of the two slopes in the decrease of information represented in Figure 2. The repartition of the instruments in a standard PWR900 is very complex, as shown in Figure 1. This complexity of the repartition does not make the situation easy to understand. Thus, we want
Thus, we wantto study the case on a simpler repartition of the instrumentation to see if theeffect of two phases decreasing persists. The position of the instruments are presented in Figure 4. The core geome-try and assemblies configuration is the same as a PWR900 one, however theinstrument are located regularly on a Cartesian map. x position y po s i t i on
0 2 4 6 8 10 12 14 0 2 4 6 8 10 12 14
Figure 4: The MFC instruments within the nuclear core are localised in assemblies in black within the horizontal slice of the core. The assemblies without instrument are marked in white and the reflector is in gray.

Within this configuration, only 40 MFC are used, which is a bit fewer than the 50 of the standard PWR configuration presented in Figure 1. With this repartition, we do the same analysis as on the previous one. The evolution of v as a function of the number of removed instruments is plotted in Figure 5.

Figure 5: The plain curve represents the value v given by Equation 8 as a function of the number of instruments removed. These results come from a regular instrument location in a PWR900. The dashed line represents the linear fit (slope −1.81) of the steady part of the curve.

In Figure 5, the quality of the activity reconstruction decreases quasi-linearly as a function of the number of removed instruments, until we reach the stagnation phase. Comparing Figure 5 to Figure 2, it appears that the variation of the decreasing slope is related to the geometrical repartition of the instruments. Such a uniform linear decrease can also be observed in several other rather geometrically regular repartitions, such as one with instruments along diagonal lines. Some of these repartitions have instrument densities close to the one of the standard PWR900, which does not change the overall behaviour.

Looking at the slope of the linear fit, we notice that the one of the regular MFC repartition (Figure 5) is in the range of the slopes obtained with the standard MFC repartition (Figure 2).

Thus, the removal or fault resilience of the instrument set seems to depend on the transition point between the two decreasing steps. In the lower range of instrument density, it is better to have a regular repartition, and in the higher range, it is better to have a complex ad hoc repartition. In a real PWR900 nuclear core, because the set of instruments is fixed at a high density level, these results indicate that the current ad hoc repartition of the instruments is the more robust choice.

We proposed and studied here an original method to test how tolerant the neutronic activity field reconstruction by data assimilation is to information removal. The core of the reconstruction method is based on data assimilation, which is widely used in earth sciences, and allows a very good reconstruction of the activity over the whole nuclear core. A hybrid method for fast matrix inversion, partially based on the Schur complement, allows the execution of the numerous analyses from which the statistical results are derived.
For all these analyses, synthetic data are used, in order to try non-experimental instrument repartitions, but similar results were established using real data. This application to real data proves both the reliability and the quality of the calculation code and of the data assimilation methodology.

Using those advanced calculation methods, it was shown that the slope of the reconstruction quality is mainly governed by the repartition of the instruments. Depending on the chosen repartition, the decrease consists of two or three distinct phases. The ultimate stagnation phase in this decrease is governed both by a statistical effect and by the heterogeneity of the instruments' influence.

The two-phase behaviour in the decreasing quality of the reconstruction, as a function of the number of instruments removed, is understood in terms of a repartition effect, but not quantified. It can, however, be seen as a phase transition between two states of the instrumental configuration. The quantification of this transition is worth studying.
References

[1] Sébastien Massart, Samuel Buis, Patrick Erhard, and Guillaume Gacon. Use of 3DVAR and Kalman filter approaches for neutronic state and parameter estimation in nuclear reactors. Nuclear Science and Engineering, 155(3):409–424, 2007.

[2] David F. Parrish and John C. Derber. The National Meteorological Center's spectral statistical interpolation analysis system. Monthly Weather Review, 120:1747–1763, 1992.

[3] Ricardo Todling and Stephen E. Cohn. Suboptimal schemes for atmospheric data assimilation based on the Kalman filter. Monthly Weather Review, 122:2530–2557, 1994.

[4] Kayo Ide, Philippe Courtier, Michael Ghil, and Andrew C. Lorenc. Unified notation for data assimilation: operational, sequential and variational. Journal of the Meteorological Society of Japan, 75(1B):181–189, 1997.

[5] S. M. Uppala et al. The ERA-40 re-analysis. Quarterly Journal of the Royal Meteorological Society, 131(612, Part B):2961–3012, 2005.

[6] E. Kalnay et al. The NCEP/NCAR 40-year reanalysis project. Bulletin of the American Meteorological Society, 77:437–471, 1996.

[7] G. J. Huffman et al. The Global Precipitation Climatology Project (GPCP) combined precipitation dataset. Bulletin of the American Meteorological Society, 78:5–20, 1997.

[8] Olivier Talagrand. Assimilation of observations, an introduction. Journal of the Meteorological Society of Japan, 75(1B):191–209, 1997.

[9] Eugenia Kalnay. Atmospheric Modeling, Data Assimilation and Predictability. 2003.

[10] François Bouttier and Philippe Courtier. Data assimilation concepts and methods. Meteorological training course lecture series, ECMWF, March 1999.

[11] F. Rabier, H. Järvinen, E. Klinker, J.-F. Mahfouf, and A. Simmons. The ECMWF operational implementation of four-dimensional variational assimilation. Part I: Experimental results with simplified physics. Quarterly Journal of the Royal Meteorological Society, 126:1143–1170, 2000.

[12] James J. Duderstadt and Louis J. Hamilton. Nuclear Reactor Analysis. John Wiley & Sons, 1976.

[13] Georges Matheron. La théorie des variables régionalisées et ses applications. Cahiers du Centre de Morphologie Mathématique de l'ENSMP, Fontainebleau, Fascicule 5, 1970.

[14] Denis Marcotte. Géologie et géostatistique minières (cours), 2008.

[15] Jean-Philippe Argaud, Bertrand Bouriquet, Patrick Erhard, Guillaume Gacon, and Sébastien Massart. Exploitation optimale des mesures neutroniques pour l'évaluation de l'état des cœurs de centrales nucléaires. SFP conference, 9–13 July 2007.

[16] Jean-Philippe Argaud, Bertrand Bouriquet, Patrick Erhard, Sébastien Massart, and Sophie Ricci. Data assimilation in nuclear power plant core. In ECMI Conference, London, 30 June–4 July 2008. IMA, 2008.

[17] Fuzhen Zhang. The Schur Complement and Its Applications. Springer, 2005.
A Appendix: Schur complement method to optimise calculation
Within the BLUE assimilation method, the limiting factor in calculation time is the matrix inversions. In Equation 2, the costly part is the inversion of the term:

    M = H B H^T + R.    (9)

The inversion of such a huge matrix M is costly, and even for a reduced instrument configuration the M matrix is still huge. Thus, the idea is to use the information obtained in the inversion of the full-size matrix to shorten the calculation, so that the smaller-size matrices can be computed in a reasonable time. In this case, we want to calculate the new matrix as a perturbation of the original one. Such a method exists and exploits the Schur complement of the matrix.

We assume we want to suppress some instruments from a given configuration. With respect to Equation 2, we need to calculate a new gain matrix K_n, which requires the inversion of a new matrix M_n. This M_n is determined from the knowledge of the inverse of the matrix M_g calculated over all the instruments. The index g is used to denote the reference matrix M_g we start from, defined according to Equation 9.

All the components of the new matrix K_n can be obtained by suppressing the lines and columns corresponding to the removed instruments in M_g, inverting the result, and then multiplying this matrix by the corresponding H_n and B_n. We notice that, in our case, we get B_n = B_g, as we do not affect the model space.

To make the demonstration easier, but without losing any generality, we can assume that the suppressed instruments correspond to the lower square of M_g. If this is not the case, it is always possible to reorganise the matrix in such a way. Now we put M_g under a convenient form, separating the remaining measurements from the removed ones. Assuming the starting matrix M_g is m × m and that we plan to suppress s measurements, we can write M_g in the following way:

    M_g = ( P_g  Q_g
            R_g  S_g ),    (10)

where:

• P_g corresponds to the remaining measurements, and is a p × p matrix,

• S_g corresponds to the suppressed measurements, and is an s × s matrix,

• Q_g and R_g represent the dependence between the remaining measurements and the suppressed ones.
In the particular case we are dealing with, we have Q_g^T = R_g; however, no further use of this property is made. With such a decomposition, we have the equality m = p + s.

The P_g matrix corresponds to the remaining instruments, so we have the equality:

    P_g = M_n.    (11)

The decomposition given in Equation 10 is the one required to build the Schur complement of this matrix [17]. Under the condition that P_g can be inverted, the Schur complement is the following quantity:

    S_g − R_g P_g^{−1} Q_g,    (12)

noted (M_g/P_g). This notation reads as "Schur complement of M_g by P_g".

Thus, we look for a cheap way to calculate P_g^{−1} knowing M_g^{−1}. For that, we use the Banachiewicz formula [17], which gives the inverse of M_g as a function of the P_g, Q_g, R_g, S_g and (M_g/P_g) matrices:

    M_g^{−1} = ( P_g  Q_g )^{−1}
               ( R_g  S_g )

             = ( P_g^{−1} + P_g^{−1} Q_g (M_g/P_g)^{−1} R_g P_g^{−1}    −P_g^{−1} Q_g (M_g/P_g)^{−1} )
               ( −(M_g/P_g)^{−1} R_g P_g^{−1}                           (M_g/P_g)^{−1}               )    (13)
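As a sketch of how Equation 13 can be exploited: reading M_g^{−1} in the same block form [[X, Y], [Z, W]], Equation 13 identifies W = (M_g/P_g)^{−1}, and a little algebra then gives P_g^{−1} = X − Y W^{−1} Z, so only the small s × s block W has to be inverted to recover the reduced-network matrix. The matrix sizes below are illustrative.

```python
import numpy as np

# Schur-complement shortcut: recover P_g^{-1} (reduced instrument set) from
# the already-computed M_g^{-1}, instead of re-inverting from scratch.
# Sizes and the random symmetric positive definite M_g are illustrative.
rng = np.random.default_rng(3)

m, s = 12, 4                      # total and suppressed instrument counts
p = m - s
G = rng.normal(size=(m, m))
M_g = G @ G.T + m * np.eye(m)     # SPD, like H B H^T + R
M_g_inv = np.linalg.inv(M_g)      # computed once, for the full network

# Blocks of the *inverse*, following the partition of Equation 10:
X = M_g_inv[:p, :p]
Y = M_g_inv[:p, p:]
Z = M_g_inv[p:, :p]
W = M_g_inv[p:, p:]               # equals (M_g/P_g)^{-1} by Equation 13

# Cheap recovery of P_g^{-1}: only the small s x s block W is inverted.
P_g_inv = X - Y @ np.linalg.inv(W) @ Z
```

Since each removal scenario only changes which rows and columns are suppressed, the expensive full-size inversion is paid once and every reduced configuration then costs only an s × s inversion.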