Evaluating uncertainties in electrochemical impedance spectra of solid oxide fuel cells

Luka Žnidarič a,b,∗, Gjorgji Nusev a,b, Bertrand Morel c, Julie Mougin c, Đani Juričić a, Pavle Boškoski a

a Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana, Slovenia
b Jožef Stefan International Postgraduate School, Jamova cesta 39, SI-1000 Ljubljana, Slovenia
c Laboratoire des Technologies Hydrogène, Commissariat à l'énergie atomique et aux énergies alternatives - CEA, CEA/LITEN/DTBH/STHB/LTH, 17 rue des Martyrs, 38054 Grenoble Cedex 9, France
Abstract
Electrochemical impedance spectroscopy (EIS) is a widely used tool for the characterization of fuel cells and electrochemical conversion systems in general. When applied to on-line monitoring in in-field applications, disturbances, drifts and sensor noise may cause severe distortions in the evaluated spectra, especially in the low-frequency part. Failure to account for these random effects can lead to difficulties in interpreting the spectra and to misleading diagnostic reasoning. In the literature, this fact has been largely ignored. In this paper, we propose a computationally efficient approach to quantifying the spectral uncertainty through the uncertainty of the equivalent circuit model (ECM), estimated by means of the variational Bayes (VB) approach. To assess the quality of the VB posterior estimates, we compare the results of the VB approach with those obtained with the Markov chain Monte Carlo (MCMC) algorithm. The MCMC algorithm is expected to return accurate posterior distributions, while the VB approach provides approximate ones. Using simulated and real data, we show that the VB approach generates approximations which, although slightly over-optimistic, are still very close to the more realistic MCMC estimates. A great advantage of the VB method for on-line monitoring is its low computational load, which is several orders of magnitude lighter than that of MCMC. The performance of the VB algorithm is demonstrated on ECM parameter estimation for a 6-cell solid-oxide fuel cell (SOFC) stack. The complete numerical implementation for recreating the results can be found at https://repo.ijs.si/lznidaric/variational-bayes-supplementary-material.

Keywords:
Variational Bayes, Monte Carlo, Solid-oxide fuel cells, Fractional-order systems
1. Introduction
Currently available tools for characterising electrochemical energy systems predominantly rely on Nyquist curves obtained through electrochemical impedance spectroscopy (EIS) [1]. Conceptually simple and well understood, EIS analysis has proven to be a standard tool for characterising the health condition of cells and stacks. Correct evaluation of the EIS characteristic is subject to several requirements.

∗ Corresponding author.
Email address: [email protected] (Luka Žnidarič)
Preprint submitted to Elsevier January 21, 2021

First, the perturbation signal must have a low amplitude in order not to excite the nonlinear modes of the cells. If the amplitude is too small, however, the signal-to-noise ratio decreases, which is reflected in an increased variance of the EIS estimates. Second, the cells should operate in stationary, stable, and repeatable test conditions; this is mandatory for a correct evaluation of the EIS curves. It means that the internal conditions should remain constant during the measurement and that no external perturbations should corrupt the measurements. Under well-controlled laboratory conditions, the perturbations are kept under control, which usually results in smooth Nyquist curves. This is the only way of obtaining data that can be validly compared, with any discrepancy used as an indicator of a fault or degradation. Third, to obtain comparable results, measurements on a cell or stack should be performed at the same operating point.

The above requirements are not easily met in applications outside laboratories. In in-field operation, it is not possible to guarantee complete control over fluctuations in the balance of plant (BoP) and the system environment. For example, drift and noise in flows and temperatures can affect the low-frequency part of the Nyquist curve.
Indeed, non-stationary external conditions, such as temperature or load variations, are unavoidable in real applications and can therefore contribute to non-smooth results. Increased noise in the current and voltage measurements can also contribute to non-smooth EIS estimates. In addition, unsafe operating conditions, such as increased fuel utilization, can cause increased variance in the low-frequency portion of the spectra due to local fuel starvation, which in turn affects the voltage variance.

How to handle the effects of uncontrolled random perturbations on the EIS evaluation and cell characterization is the problem addressed in this paper. A strong motivation for the work stems from several of our own practical implementations as well as from a large body of evidence found in the literature, e.g. [2, 3]. For illustration, Figure 1 shows the evaluated EIS curves obtained from successive measurements performed on a solid-oxide fuel cell (SOFC) stack operating in laboratory conditions and in an in-field application. A combination of the above factors results in a high spread of EIS values at low frequencies. Note that the perturbations do not significantly affect the high-frequency arcs of the EIS curve.

This problem does not appear to have been systematically addressed yet, at least in the SOFC domain. With the increasing need for automated condition monitoring, robust solutions are mandatory to ensure reliable and stable diagnosis. This is important since mitigation actions are undertaken on the basis of the diagnostic results; unreliable EIS evaluations can result in misleading diagnoses and countermeasures that can even worsen the condition of the cell or stack.

An idea of how to deal with this problem in SOFCs was recently proposed in [4, 5]. The authors perform inference on the equivalent circuit models based on a combination of EIS data smoothing and EIS averaging.
Instead of using a single EIS measurement, the data from several consecutive EIS measurements
Figure 1: Evaluated EIS characteristics in laboratory conditions (above) and during an in-field application (below).

are averaged to reduce the spread of the estimates in the low-frequency region. This solution, while simple, requires a certain number of measurements spanning a time window to reliably assess the change in the EIS characteristic.

In this paper, we propose a novel approach, not pursued before, which accounts for the influence of perturbations on the evaluated EIS characteristic and the equivalent circuit model in a statistical way. Properly quantified uncertainties lead to diagnostic solutions that, rather than making a fault statement in clear yes/no categories, suggest the probability that a particular fault is present [6]. This is essential for a cautious and more reliable diagnosis of SOFCs, which can additionally be enhanced with the operator's prior knowledge.

In the area of electrochemical energy conversion systems, there have been only limited attempts to analyze the stochastic nature of the parameters. The parameters of the ECM are usually obtained by constrained nonlinear optimization, cf. [7, 8], resulting in the parameter vector $\theta^{*}$ which minimizes some criterion (loss-of-fit) function $J(\theta)$. The simplest and most straightforward approach to evaluating the uncertainty is to locally approximate the criterion function with a second-order function, $J(\theta) \approx J(\theta^{*}) + (\theta - \theta^{*})^{T} \Sigma\, (\theta - \theta^{*})$. The uncertainty region is then approximated by the ellipsoid defined by the Hessian matrix $\Sigma$. The problem with this approach is that the approximation is rather rough and might suggest a misleading uncertainty region [9]. To obtain a realistic estimate of the uncertainty of the parameters, more elaborate approaches should be applied. Of the available methods, the Markov chain Monte Carlo (MCMC) simulation [10] is the most accurate for fractional system parameter estimation.
The biggest problem with this approach is the overwhelming computational time, which increases rapidly with the number of unknown parameters. For potential applications in the field, a computationally less demanding but still sufficiently accurate method is welcome.

One approach that circumvents the issues imposed by MCMC is the variational Bayes (VB) approach [11]. Unlike the Monte Carlo approach, which results in the "true" posterior distribution, VB provides the closest approximation of the posterior distribution using probability distribution functions from the exponential family as building blocks. Consequently, instead of solving the typically intractable evidence integral in Bayes' theorem, the posterior distribution in VB is found through optimisation. The result contains an inherent bias, but this is a small price to pay considering the impressive computational efficiency of the approach, even in multidimensional cases. VB has been successfully applied in various areas, such as Gaussian process modeling [12, 13], deep generative models [14], compressed sensing [15, 16], hidden Markov models [17, 18], and reinforcement learning and control [19]. VB is also applicable to energy management problems [20–24].

In this paper we analyse the nature and shape of the uncertainty regions of our parameters and compare the results of the computationally feasible VB approach with those of the well-established MCMC. To the best of the authors' knowledge, the first attempt to study the uncertainty of equivalent circuit model (ECM) estimates was made in [25]. That work provided the first report on the marginal probability distribution functions of the ECM parameters under nominal operating conditions. However, the method is practically infeasible for online monitoring, since it takes several hours even on a strong HPC infrastructure; about 25 hours of computational time are needed to obtain quality posterior distributions.
What we suggest below is to approximate the true posterior probability density function with a closed-form distribution that best captures the nature of the true posterior. The benefits of having the uncertainties of the model parameters expressed in closed form are twofold. First, we get a useful indicator of the quality of the selected ECM model structure and of the experimental data leading to the model. Second, it becomes possible to rather easily employ statistical reasoning tools for detecting changes in the posterior of the ECM parameters and thus the deterioration of the system behaviour. This is therefore the first attempt to show the applicability of VB in the field of electrochemistry.

The structure of the paper is as follows. A brief introduction to the MCMC and VB approaches is given in section 2. The performance of the method in terms of computational efficiency and accuracy is first demonstrated on a simulated ECM in section 3. Finally, the VB is applied to the identification of ECM parameters based on data obtained on a SOFC in section 4. The complete numerical implementation for recreating the results can be found at https://repo.ijs.si/lznidaric/variational-bayes-supplementary-material.
2. Methodology
The presented inference approach arises from Bayes' rule (1). Let us assume a set of measured data $x$ generated by a system with parameters $\theta$. Bayes' rule provides the link between the posterior distribution (the updated information about the model parameters), the prior probability of the model parameters $p(\theta)$, and the likelihood $p(x \mid \theta)$ that the observations $x$ are generated by a model with parameters $\theta$:
\[
\underbrace{p(\theta \mid x)}_{\text{Posterior}} = \frac{\overbrace{p(x \mid \theta)}^{\text{Likelihood}}\ \overbrace{p(\theta)}^{\text{Prior}}}{\underbrace{p(x)}_{\text{Evidence}}} \tag{1}
\]
The likelihood can be calculated from the model, and the prior is specified as a design input. The normalisation factor (evidence) is the integral
\[
p(x) = \int_{\theta} p(x \mid \theta)\, p(\theta)\, d\theta. \tag{2}
\]
For general multidimensional distributions the integral (2) becomes intractable. Consequently, obtaining the posterior in (1) becomes infeasible, hence the need for approximate approaches.

2.1. Markov chain Monte Carlo approach

The MCMC algorithm is a ubiquitous method for solving integration problems in various areas, e.g. statistics, physics and econometrics. Central to the approach is the way of taking samples, say a set of $N$ samples $\theta^{(i)} \in \mathbb{R}^m$, $i = 1, \ldots, N$, from a target density $p(\theta)$. Using these $N$ samples, we can approximate $p(\theta)$ by calculating the empirical point-mass function
\[
p_N(\theta) = \frac{1}{N} \sum_{i=1}^{N} \delta_{\theta^{(i)}}(\theta),
\]
where $\theta^{(i)}$ is the $i$-th sample and $\delta_{\theta^{(i)}}$ is its Dirac delta mass.

In our case we used MCMC with No-U-Turn sampling (NUTS) [26], implemented in Python with the PyMC3 [27] library. NUTS is an extension of the well-known Hamiltonian Monte Carlo (HMC) algorithm [28]. HMC tends to be sensitive to the required user inputs. NUTS mostly avoids this, since the algorithm stops automatically when it starts retracing its steps.
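The point-mass approximation above is easy to exercise: given samples from a target density, any expectation becomes a sample average. The sketch below is ours, in plain Python with a Gaussian stand-in for the posterior rather than the paper's PyMC3 setup:

```python
import random

random.seed(0)

# N samples from a stand-in target density p(theta); in the paper these
# would come from an MCMC chain, here a standard normal is used instead.
N = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(N)]

# The empirical point-mass function p_N turns expectations under p(theta)
# into plain averages over the sample set.
def expect(f, samples):
    return sum(f(s) for s in samples) / len(samples)

mean = expect(lambda t: t, samples)        # approximates E[theta] = 0
second = expect(lambda t: t * t, samples)  # approximates E[theta^2] = 1
```

The approximation error shrinks as O(1/sqrt(N)), which is exactly why MCMC needs many samples and why the computational load grows quickly for multidimensional posteriors.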
Additionally, the authors of NUTS developed a method for automatic adaptation of the step size, so that the sampler needs only minimal user-defined parameters on entry. We let the algorithm run for a number of iterations that proved sufficient to guarantee convergence.

2.2. Variational Bayes approach

Using MCMC methods for solving (2) can produce results very close to the true posterior distribution. However, for multidimensional cases the computational load and the sheer number of samples required for a proper estimate of the posterior blow up. One solution to this problem is to find a sufficiently close approximation of the posterior at a significantly lower computational load.

The main idea of VB is to find a candidate distribution $q_\lambda(\theta)$, parameterized by the latent hyperparameters $\lambda \in \mathbb{R}^\nu$, that is a sufficiently close approximation of the true posterior $p(\theta \mid x)$. The distribution $q_\lambda(\theta)$ is usually referred to as the variational distribution. It is typically selected from the mean-field variational family [29], which means that one assumes independence among the latent variables of the variational distribution.

The variational distribution that best fits the true posterior can be obtained by minimizing the Kullback–Leibler (KL) divergence [29], which can be re-arranged as follows:
\[
\begin{aligned}
\mathrm{KL}(q_\lambda(\theta) \,\|\, p(\theta \mid x)) &= E_q\!\left[\log \frac{q_\lambda(\theta)}{p(\theta \mid x)}\right] \\
&= E_q[\log q_\lambda(\theta)] - E_q[\log p(\theta \mid x)] \\
&= E_q[\log q_\lambda(\theta)] - E_q[\log p(x, \theta) - \log p(x)] \\
&= E_q[\log q_\lambda(\theta)] - E_q[\log p(\theta, x)] + \log p(x) \\
&= -\underbrace{\bigl(E_q[\log p(x \mid \theta)\, p(\theta)] - E_q[\log q_\lambda(\theta)]\bigr)}_{\mathcal{L}(\lambda,\, x)} + \log p(x)
\end{aligned} \tag{3}
\]
The second term $\log p(x)$ is constant.
The first term $\mathcal{L}(\lambda, x)$ is known as the evidence lower bound (ELBO); by maximising it, one minimises the KL divergence between the variational distribution and the true posterior. The goal is therefore to solve the following optimization problem:
\[
\lambda^{*} = \operatorname*{arg\,min}_{\lambda \in \Omega} \mathrm{KL}(q_\lambda(\theta) \,\|\, p(\theta \mid x)) = \operatorname*{arg\,max}_{\lambda \in \Omega} \mathcal{L}(\lambda, x), \tag{4}
\]
where $\Omega$ is the set of all possible values of the hyperparameters $\lambda$. The overall idea is schematically presented in Figure 2. It is assumed that the model parameters are random variables; however, their true distribution is almost always unknown. In such a case, the best option is to select an approximate candidate distribution to be used instead. Instead of selecting a candidate distribution ad hoc, we incorporate prior knowledge as much as possible, in particular the support interval, previous empirical observations, etc. The chosen candidate distribution is then likely to be sufficiently close to the true but unknown parameter distribution. Some mismatch is usually present, which causes a bias in the VB solution. However, this is a small price to pay for the substantial gain in computational efficiency. The VB algorithm was implemented in Python using the PyTorch [30] library. For a link to the implementation and an example data set sufficient to recreate a numerical example, see Appendix A.

Figure 2: Optimisation process of finding the closest variational distribution $q_\lambda(\theta)$ over the set of latent variables $\lambda$.

Using VB we convert the problem of finding the posterior distributions from statistical inference into a much simpler task: optimising a cost function, the ELBO, over a variational family parameterised by the vector of hyperparameters $\lambda$.
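The derivation in (3) implies the identity log p(x) = L(λ, x) + KL(q ‖ posterior), so maximising the ELBO is the same as minimising the KL divergence. This can be checked numerically on a toy discrete model; the joint probabilities below are illustrative, not from the paper:

```python
import math

# Toy model: theta takes values {0, 1}; the joint p(x, theta) for the
# observed x is specified directly (illustrative numbers).
p_joint = {0: 0.3, 1: 0.1}
p_x = sum(p_joint.values())                        # evidence p(x)
p_post = {t: p / p_x for t, p in p_joint.items()}  # true posterior

def elbo(q):
    # E_q[log p(x, theta)] - E_q[log q(theta)]
    return sum(q[t] * (math.log(p_joint[t]) - math.log(q[t])) for t in q)

def kl(q):
    # KL(q || true posterior)
    return sum(q[t] * math.log(q[t] / p_post[t]) for t in q)

q = {0: 0.6, 1: 0.4}                               # a candidate distribution
identity_gap = math.log(p_x) - (elbo(q) + kl(q))   # should be ~0
```

The gap is zero for any q, so the candidate that maximises the ELBO is also the one closest to the true posterior in KL, exactly as stated in (4).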
For the optimization part of our algorithm we use the adaptive moment estimation (ADAM) optimizer [31]. ADAM adaptively corrects the learning rate for each element $\lambda^{(s)}$, $s = 1, \ldots, \nu$, of the vector $\lambda$ during the run of the algorithm by calculating the first and second moments of the gradient, denoted $m_t^{(s)}$ and $v_t^{(s)}$ respectively. This is done using exponentially moving averages computed on the gradient $g_t^{(s)}$, $s = 1, \ldots, \nu$:
\[
\begin{aligned}
m_t^{(s)} &= \beta_1\, m_{t-1}^{(s)} + (1 - \beta_1)\, g_t^{(s)} \\
v_t^{(s)} &= \beta_2\, v_{t-1}^{(s)} + (1 - \beta_2)\, \bigl(g_t^{(s)}\bigr)^2,
\end{aligned} \tag{5}
\]
where $t$ denotes the iteration number and $\beta_1, \beta_2$ are the parameters of the moving averages. The default values of the $\beta$ parameters are 0.9 and 0.999, and the initial values of the first and second moments are both set to zero, since it turns out this does not impact the convergence rate significantly. For a set of measured data (impedance in our case) $x$, the gradients can be collected in the vector
\[
g_t = \bigl(g_t^{(1)}, \ldots, g_t^{(\nu)}\bigr)^{T} = \nabla_{\lambda_t} \mathcal{L}(\lambda_t, x). \tag{6}
\]
The first and second moments are only estimated by $m^{(s)}$ and $v^{(s)}$; therefore we want them to satisfy the conditions
\[
E\{m_t^{(s)}\} = E\bigl\{g_t^{(s)}\bigr\}, \qquad E\{v_t^{(s)}\} = E\bigl\{(g_t^{(s)})^2\bigr\}, \tag{7}
\]
which ensure that we are dealing with unbiased estimates. Unrolling the recursion (5), the first moment at step $t$ is
\[
m_t^{(s)} = (1 - \beta_1) \sum_{i=1}^{t} \beta_1^{\,t-i}\, g_i^{(s)}. \tag{8}
\]
Some bias still occurs in this estimate. Applying the expectation operator to equation (8) gives
\[
E\bigl[m_t^{(s)}\bigr] = (1 - \beta_1) \sum_{i=1}^{t} \beta_1^{\,t-i} E\bigl[g_i^{(s)}\bigr] \approx (1 - \beta_1)\Bigl(\sum_{i=1}^{t} \beta_1^{\,t-i}\Bigr) E\bigl[g_t^{(s)}\bigr] = \bigl(1 - \beta_1^{\,t}\bigr) E\bigl[g_t^{(s)}\bigr]. \tag{9}
\]
Bias correction is done automatically by ADAM during the evaluation of $m$ and $v$ as follows:
\[
\hat{m}_t^{(s)} = \frac{m_t^{(s)}}{1 - \beta_1^{\,t}}, \qquad \hat{v}_t^{(s)} = \frac{v_t^{(s)}}{1 - \beta_2^{\,t}}. \tag{10}
\]
The optimisation steps are also adjusted. For each individual parameter the algorithm performs the update
\[
\lambda_t^{(s)} = \lambda_{t-1}^{(s)} - \eta\, \frac{\hat{m}_t^{(s)}}{\sqrt{\hat{v}_t^{(s)}} + \epsilon}, \tag{11}
\]
where $\eta$ is the learning rate and $\epsilon$ a small constant that prevents division by zero.

The use of ADAM is spreading over a wide range of applications and delivers efficient and fast results. It should be noted, however, that, being a heuristic optimization algorithm, it comes with no general guarantee of convergence. Fortunately, Kingma and Ba [31] proved that the algorithm converges globally in convex settings. This result was further refined by Reddi et al. [32]. We used the default settings $\beta_1 = 0.9$ and $\beta_2 = 0.999$, while setting the learning rate and the number of iterations depending on the data set at hand.
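Equations (5)–(11) amount to a few lines of code. The scalar sketch below is ours; the quadratic test loss is a hypothetical stand-in for the negative ELBO, not the paper's objective:

```python
import math

def adam_minimize(grad, lam0, eta=0.05, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=5000):
    """ADAM for a single scalar parameter, following eqs. (5)-(11)."""
    lam, m, v = lam0, 0.0, 0.0                   # moments initialised at zero
    for t in range(1, steps + 1):
        g = grad(lam)
        m = beta1 * m + (1.0 - beta1) * g        # eq. (5), first moment
        v = beta2 * v + (1.0 - beta2) * g * g    # eq. (5), second moment
        m_hat = m / (1.0 - beta1 ** t)           # eq. (10), bias correction
        v_hat = v / (1.0 - beta2 ** t)
        lam -= eta * m_hat / (math.sqrt(v_hat) + eps)   # eq. (11)
    return lam

# Convex test loss J(lam) = (lam - 3)^2 with gradient 2 * (lam - 3);
# the iterate should settle near the minimiser lam = 3.
lam_star = adam_minimize(lambda l: 2.0 * (l - 3.0), lam0=0.0)
```

Note how the bias correction (10) matters early on: with $m_0 = v_0 = 0$, the raw moving averages underestimate the gradient statistics until $\beta^t$ has decayed.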
3. Numerical example
Since we focus on solid oxide fuel cells in this paper, we first demonstrate the performance of the VB algorithm using simulated data. A rather frequently used ECM model structure for SOFCs takes the following form:
\[
Z(\omega) = R_s + \sum_{i=1}^{N} \frac{R_i}{(j\omega)^{\alpha_i} Q_i R_i + 1} + j\omega L, \tag{12}
\]
where $R_s$ stands for the serial resistance, each pole is defined by the corresponding $R_i$ (parallel resistor) and $Q_i$ (constant-phase element) parameter values, the fractional order of the $i$-th pole is denoted $\alpha_i \in (0, 1]$, and $\omega = 2\pi f$, where $f$ is the frequency. The "true" impedance (12) was simulated over a logarithmically spaced frequency interval using the parameters listed in Table 1. For the simulation we used the Grünwald–Letnikov approximation of the fractional derivative [33]. The G–L fractional derivative is susceptible to noise in the data; therefore the simulation was performed using noiseless signals, and measurement noise was added afterwards.
Figure 3: Simulated EIS curve with noisy input and output.
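In the frequency domain, evaluating the model (12) needs only complex arithmetic. A minimal sketch, with parameter values that are illustrative (loosely patterned on Table 1) rather than the fitted ones:

```python
import math

def ecm_impedance(f, Rs, R, Q, alpha, L):
    """Fractional-order ECM of eq. (12):
    Z = Rs + sum_i R_i / ((j w)^alpha_i Q_i R_i + 1) + j w L."""
    jw = 1j * 2.0 * math.pi * f
    Z = Rs + jw * L
    for Ri, Qi, ai in zip(R, Q, alpha):
        Z += Ri / ((jw ** ai) * Qi * Ri + 1.0)
    return Z

# Illustrative parameters (three poles, as in the simulated example).
params = dict(Rs=1.0, R=[1.0, 2.0, 3.0], Q=[0.1, 5.0, 150.0],
              alpha=[0.8, 0.8, 0.8], L=100e-9)

z_low = ecm_impedance(1e-6, **params)   # low-frequency limit -> Rs + sum(R)
z_high = ecm_impedance(1e6, **params)   # arcs vanish; inductance dominates
```

At very low frequency the constant-phase terms vanish and the real part tends to Rs + R1 + R2 + R3; at high frequency the arcs collapse and only Rs and the inductive branch jωL remain, which is the behaviour visible at the two ends of a Nyquist curve.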
The additive measurement noise was applied separately to the voltage $u(t)$ and the current $i(t)$ as $n_1(t)$ and $n_2(t)$, respectively. Both $n_1(t)$ and $n_2(t)$ are zero-mean and uncorrelated. Given the simulated input and output signals, the transfer function was estimated from the Morlet wavelet transforms of the input and output, as described in [34]. The entire process is presented in Figure 3.

The first step in applying the VB approach is the definition of the variational distributions that approximate the true posterior $p(\theta \mid x)$; they are listed in Table 1. In our case the parameter vector $\theta$ carries the additional constraint $\alpha_i \in [0, 1]$. The Beta distribution was therefore chosen as the best fit for the $\alpha_i$, $i \in \{1, 2, 3\}$, parameters, and the Log-normal distribution for the rest. We selected the means and variances of our variational distributions and then determined their parameters using (13) and (14).

Rough estimates for informative variational distributions can be obtained efficiently from the EIS curve in Figure 4 and from previous experience (expert knowledge). In the upper plot of Figure 4 one can read off where the real part of the curve starts, which provides a rough estimate for the parameter $R_s$; selecting a variational distribution centred at that value with a moderate variance is a reasonable choice. The next step is setting the variational distributions for the parameters $R_{1,2,3}$. Just by looking at the axis values, it is apparent that they should be located within the interval $(1, 10)\ \Omega$; the mean values of their distributions were therefore set in the middle of this interval. The variational distributions of the parameters $Q_{1,2,3}$ were roughly estimated from the results of previous experiments and by incorporating expert knowledge. Lastly, the fractional orders $\alpha_{1,2,3}$ are expected to have values in the interval [0.5, 1].

Table 1: Model parameters and variational distributions calculated using equations (13) and (14).

  True parameter                                            Variational distribution
  $R_s$                                                     Lognormal(0.…, 0.…)
  $R_1 = 1\ \Omega$, $R_2 = 2\ \Omega$, $R_3 = 3\ \Omega$   Lognormal(1.…, …)
  $Q_1 = 0.…\ \mathrm{F\,s}^{\alpha}$                       Lognormal(−0.…, …)
  $Q_2 = 5\ \mathrm{F\,s}^{\alpha}$                         Lognormal(1.…, …)
  $Q_3 = 150\ \mathrm{F\,s}^{\alpha}$                       Lognormal(4.…, …)
  $\alpha_1 = 0.…$, $\alpha_2 = 0.…$, $\alpha_3 = 0.…$      Beta(13.…, …)
  $L = 100$ nH

Once we have determined the mean and variance of each variational distribution, we have to transform them into the parameters of the chosen distribution. To obtain the parameters of the lognormal distribution $(\mu_{\ln}, \sigma_{\ln}^2)$ from the mean and variance $(\mu, \sigma^2)$, the following relations can be used:
\[
\mu_{\ln} = \ln\!\left(\frac{\mu^2}{\sqrt{\mu^2 + \sigma^2}}\right), \qquad \sigma_{\ln}^2 = \ln\!\left(1 + \frac{\sigma^2}{\mu^2}\right), \tag{13}
\]
and for the parameters $\alpha$ and $\beta$ of the beta distribution $\mathrm{Beta}(\alpha, \beta)$:
\[
\alpha = \mu\left(\frac{\mu(1 - \mu)}{\sigma^2} - 1\right), \qquad \beta = (1 - \mu)\left(\frac{\mu(1 - \mu)}{\sigma^2} - 1\right). \tag{14}
\]
The optimization was performed with a fixed global learning rate and completed after a moderate number of steps; the ELBO loss curve can be observed in Figure 5. By sampling the resulting posterior distributions and simulating the model (12) it is possible to visualize the obtained results. For this purpose, samples were drawn for each parameter from its respective posterior distribution. Simulating the EIS curves with the sampled sets of parameters gives an estimate of the confidence interval of the results, which can be found in Figure 4, together with the EIS curve obtained from the posterior means of the parameters. The variance of the posterior distributions is effectively presented by the sampled curves and is bounded by the noise variance parameter. The mean EIS curve is a sufficiently good fit to the input data.

The evolution of the optimization process for each parameter is presented in Figure 6, together with the true parameters used for simulating the input data. Each parameter reaches its true value, except $Q_3$. We attribute this to the noise, whose effect appears to be greater in the third arc, as seen in Figure 4.
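Equations (13) and (14) are the standard method-of-moments inversions; a small helper (ours, not taken from the paper's repository) with a round-trip check:

```python
import math

def lognormal_params(mean, var):
    """Invert (mean, variance) to Lognormal(mu_ln, sigma_ln^2), eq. (13)."""
    sigma2_ln = math.log(1.0 + var / mean**2)
    mu_ln = math.log(mean**2 / math.sqrt(mean**2 + var))
    return mu_ln, sigma2_ln

def beta_params(mean, var):
    """Invert (mean, variance) to Beta(alpha, beta), eq. (14)."""
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k

# Round trip: the fitted lognormal must reproduce the requested moments
# via mean = exp(mu_ln + s2/2) and var = (exp(s2) - 1) exp(2 mu_ln + s2).
mu_ln, s2_ln = lognormal_params(2.0, 0.5)
mean_back = math.exp(mu_ln + s2_ln / 2.0)
var_back = (math.exp(s2_ln) - 1.0) * math.exp(2.0 * mu_ln + s2_ln)

# A beta with mean 0.8 and small variance, as one might pick for the alphas.
a, b = beta_params(0.8, 0.01)
```

This is how a practitioner's intuition ("mean around 2 Ω, fairly uncertain") is translated into the distribution parameters actually fed to the optimizer.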
The progress in the confidence of the algorithm can be observed through the evolution of the estimated spread of the parameter distributions. As seen in Figure 6, the spread is wide at the beginning and, as the optimization progresses and explores the search space, it steadily decreases.
Figure 4: Simulated measurement data (above) and results of the VB algorithm (below).
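The confidence band in Figure 4 is obtained by pushing posterior samples through the model. The mechanism can be sketched with a single-arc version of (12) and hypothetical posterior parameters; the numbers below are illustrative, not the fitted ones:

```python
import math
import random
import statistics

random.seed(1)

def ecm_z(f, Rs, R1, Q1, a1):
    """Single-arc version of eq. (12), enough to illustrate the idea."""
    jw = 1j * 2.0 * math.pi * f
    return Rs + R1 / ((jw ** a1) * Q1 * R1 + 1.0)

def draw():
    """One joint draw from hypothetical lognormal posteriors."""
    Rs = random.lognormvariate(0.0, 0.05)   # ~1 Ohm
    R1 = random.lognormvariate(0.7, 0.05)   # ~2 Ohm
    Q1 = random.lognormvariate(0.0, 0.05)
    return Rs, R1, Q1, 0.8                  # alpha kept fixed for brevity

freqs = [10.0 ** (k / 4.0) for k in range(-8, 17)]   # ~1e-2 .. 1e4 Hz
curves = [[ecm_z(f, *draw()) for f in freqs] for _ in range(200)]

# Pointwise spread of the real part across the sampled curves: this is
# what a plotted confidence band summarises.
band = [statistics.stdev(c[i].real for c in curves) for i in range(len(freqs))]
```

With these assumed posteriors the spread is largest at low frequencies, where both Rs and R1 contribute, and shrinks at high frequencies, where only Rs remains.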
Figure 5: ELBO loss.
Figure 6: Progress of parameter estimation during the optimisation process; the true values of the parameters used for simulation are represented with dashed lines. The variance of the estimated distributions (pink) is quite low.

The selected optimal shapes of the posterior distributions are shown in Figure 7.

Table 2: Posterior distributions of parameters.

  $R_s$ ~ Lognormal(1.…, …)       $Q_1$ ~ Lognormal(−0.…, …)      $\alpha_1$ ~ Beta(969.…, …)
  $R_1$ ~ Lognormal(−0.…, …)      $Q_2$ ~ Lognormal(1.…, …)       $\alpha_2$ ~ Beta(1016.…, …)
  $R_2$ ~ Lognormal(0.…, …)       $Q_3$ ~ Lognormal(4.…, …)       $\alpha_3$ ~ Beta(4724.…, …)
  $R_3$ ~ Lognormal(1.…, …)
Figure 7: Posterior distributions of parameters derived from simulated data.

4. Experimental validation

The VB approach was validated using EIS curves measured on anode-supported solid oxide fuel cells installed in an insulated ceramic housing. The active area of a single cell was 80 cm². The SOFC short stack was fed with a mixture of hydrogen and nitrogen at a flow rate of H₂/N₂ = 0.216/0.144 Nl h⁻¹ cm⁻², whereas the air flow rate was 4 Nl h⁻¹ cm⁻². The stack was operated at a nominal current of 32 A (0.4 A cm⁻²) with a fuel utilization of 77.5 %.

The EIS curves were obtained by inducing current excitation (galvanostatic mode) by means of a discrete random binary signal. The excitation current had a DC value of I_DC = 32 A with a peak-to-peak amplitude of 2 A. The amplitude was chosen so that the excitation stays within the linearity region while remaining large enough to maintain a sufficient signal-to-noise ratio.

The current and voltage sensors have a sufficiently wide frequency bandwidth, with a cut-off frequency of 240 kHz. The cell voltages were measured independently using a differential 16-bit NI USB-6215 data acquisition system. The analogue signals were first low-pass filtered at 10.8 kHz and then sampled with a sampling frequency of f_s = 50 kHz.

The measured EIS curve is shown in Figure 8. Note that the noise in the measurements is quite low, thanks to the quality of the data acquisition equipment and the well-controlled laboratory conditions of the experiment. The first step in the identification process is the selection of the model structure, i.e. the number of poles. It is believed that overall five main diffusion processes take place within the cell. However, some of them, for example charge transfer, are very fast, and their dynamics is beyond the capability of the available data acquisition systems, whose bandwidth is limited to about 1 kHz.
Therefore N = 3 turns out to be an appropriate selection that captures the processes falling within the frequency band of the DAQ system. Consequently, the model structure was selected as in (12) with N = 3.

The variational distributions were selected by examining the measured EIS curve. From Figure 8 one can read off the high-frequency intercept of the real component (in mΩ), which serves as the mean value for the $R_s$ parameter. The scale of its distribution was set quite wide to allow the optimization process to explore large intervals in the search for the optimum. The variational distributions of the $R_{1,2,3}$ parameters can be inferred from the real and imaginary values spanned by the EIS curve: their values should lie roughly within the interval [0.1, 3] mΩ. From previous testing we can assume that, with similar R values and visible poles, the $Q_{1,2,3}$ parameters will be at least an order of magnitude apart, so their means were placed accordingly. The scales of their variational distributions were set correspondingly large. Finally, for $\alpha_{1,2,3}$ we can assume from experience that they probably take values between 0.5 and 1. All the variational distributions are listed in Table 3.

Table 3: Variational distributions of parameters.

  $R_s$ ~ Lognormal(−…, …)
  $R_{1,2,3}$ ~ Lognormal(−…, …)
  $\alpha_{1,2,3}$ ~ Beta(12, …)
  $Q_1$ ~ Lognormal(−…, …)
  $Q_2$ ~ Lognormal(3.…, …)
  $Q_3$ ~ Lognormal(6.…, …)
Figure 8: Input measurement data and resulting simulated EIS curves.
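The galvanostatic excitation described above, a discrete random binary signal around the DC operating point, can be sketched as follows (the hold length per level is our assumption; the source does not specify it):

```python
import random

random.seed(2)

def drbs(n, dc=32.0, peak_to_peak=2.0, hold=10):
    """Discrete random binary current signal: the level switches randomly
    between dc - p2p/2 and dc + p2p/2, held for `hold` samples."""
    out = []
    while len(out) < n:
        level = dc + random.choice((-0.5, 0.5)) * peak_to_peak
        out.extend([level] * hold)
    return out[:n]

# 5000 samples of excitation around I_DC = 32 A with 2 A peak-to-peak,
# i.e. the current toggles between 31 A and 33 A.
signal = drbs(5000)
```

The random switching spreads the excitation energy over a broad frequency band, which is what allows a single measurement run to cover the whole spectrum of interest.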
The learning rate was set lower than in the numerical example, since the goal was to allow a steady and efficient estimation, and the optimization finished after a modest number of steps.

The performance of the algorithm can best be observed by looking at the EIS curves obtained from the posterior distributions of the estimated parameters. In the same manner as in the numerical example, we drew samples from the posterior distribution of each parameter to simulate EIS curves. The confidence interval is shown in Figure 8, together with the EIS curve obtained from the posterior means. The results confirm that, despite the low number of iterations, the resulting parameters represent an accurate fit to the measured EIS data.

The evolution of the ELBO can be seen in Figure 9; the plot evidently converges. Compared with Figure 5, the rate of convergence is slower, which can be associated with the much smaller learning rate and the smaller number of iterations used for the run on experimental data. Towards the end of the computation the ELBO value changes only minimally, which indicates that the optimum of the estimation has been found.

The evolution of the parameter estimation is shown in Figure 10. The algorithm steadily converges towards the optimum, while the estimated spread for each parameter continues to narrow; the parameter evolution is very smooth for every parameter. The posterior distributions are shown in Figure 11, where we compare them with the results of the MCMC algorithm. We can see the expected slight overconfidence of VB, while the results are still in line with the MCMC results. It is apparent that the scales (variances) of the posterior distributions are quite low. The posterior parameter distributions are listed in Table 4.

Figure 9: The evolution of the ELBO for experimental data.
Table 4: Posterior distributions of parameters.

  $R_s$ ~ Lognormal(−…, …)
  $R_1$ ~ Lognormal(−…, …)
  $R_2$ ~ Lognormal(−…, …)
  $R_3$ ~ Lognormal(−…, …)
  $Q_1$ ~ Lognormal(2.…, …)
  $Q_2$ ~ Lognormal(6.…, …)
  $Q_3$ ~ Lognormal(3.…, …)
  $\alpha_1$ ~ Beta(1081.…, …)
  $\alpha_2$ ~ Beta(2216.…, …)
  $\alpha_3$ ~ Beta(399.…, …)

The results in Figure 8 show that the ECM identified through the VB approach provides an accurate fit to the measured data. From the ELBO plot in Figure 9 it is evident that the algorithm converges; the same observation holds for the estimated parameters, as shown in Figure 10.

Comparing the posterior distributions obtained with VB and MCMC in Figure 11, we can conclude the following. First, the results show a close match for every parameter, with only a slight mismatch noticeable in $R_s$ and some of the $\alpha$ and $Q$ parameters. Second, the posterior distributions obtained with VB have a lower dispersion (variance) than those obtained through MCMC; as mentioned above, this is common for the VB approach.

One of the reasons for the overconfident results might be the selection of the variational distributions. A possible improvement might be achieved by using more elaborate approximations in terms of a mixture of bounded distributions [36]. That is a topic of further research.

Figure 10: The evolution of parameter estimation for experimental data. The estimated parameter spread at each iteration envelops the mean of the parameter at that iteration.
Figure 11: Posterior distributions of parameters inferred with experimental data.

Conclusion

The evaluation of the uncertainty of EIS measurements is particularly important for practical applications. The proposed VB approach to ECM parameter estimation provides a solution to this problem. The obtained posterior distributions of the ECM parameters provide better insight into the condition of the observed fuel cell. As a result, it becomes possible to employ more powerful statistical decision tools for change detection.

The computational load of the VB method is far lower than that of MCMC, and the implementation is quite straightforward. The power of VB was demonstrated on an SOFC, but the approach can also be applied to more general parameter estimation problems. This opens the possibility of harnessing the stochastic nature of the model parameters with the goal of applying statistical methods for diagnostics and health prognosis of electrochemical energy systems.
Acknowledgements
The authors acknowledge the support from the Slovenian Research Agency through the programme P2-0001 and the project J2-9441. Part of the support was received through the project RUBY (grant agreement No. 875047) within the framework of the Fuel Cells and Hydrogen 2 Joint Undertaking under the European Union's Horizon 2020 research and innovation programme, Hydrogen Europe and Hydrogen Europe Research.
References

[1] A. Lasia, Electrochemical Impedance Spectroscopy and its Applications, Springer-Verlag, New York, 2014.
[2] V. Subotić, N. H. Menzler, V. Lawlor, Q. Fang, S. Pofahl, P. Harter, H. Schroettner, C. Hochenauer, On the origin of degradation in fuel cells and its fast identification by applying unconventional online-monitoring tools, Applied Energy 277 (2020) 115603. doi:10.1016/j.apenergy.2020.115603.
[3] E. A. Adinolfi, M. Gallo, P. Polverino, D. Beretta, S. S. Araya, C. Pianese, ECM-based algorithm for on-board PEMFCs diagnosis, in: W. Zamboni, G. Petrone (Eds.), ELECTRIMACS 2019, Springer International Publishing, Cham, 2020, pp. 103–115.
[4] J. Mougin, B. Morel, A. Ploner, P. Caliandro, J. Van Herle, P. Boškoski, B. Dolenc, M. Gallo, P. Polverino, A. Pohjoranta, A. Nieminen, S. Pofahl, J. Ouweltjes, S. Diethelm, A. Leonardi, F. Galiano, C. Tanzi, Monitoring and diagnostics of SOFC stacks and systems, ECS Transactions 91 (2019) 731–743.
[5] M. Gallo, P. Polverino, J. Mougin, B. Morel, C. Pianese, Coupling electrochemical impedance spectroscopy and model-based aging estimation for solid oxide fuel cell stacks lifetime prediction, Applied Energy 279 (2020) 115718. doi:10.1016/j.apenergy.2020.115718.
[6] A. Rakar, Ð. Juričić, P. Ballé, Transferable belief model in fault diagnosis, Engineering Applications of Artificial Intelligence 12 (1999) 555–567. doi:10.1016/S0952-1976(99)00030-5.
[7] R. Mosbæk, Solid Oxide Fuel Cell Stack Diagnostics, Ph.D. thesis, 2014.
[8] A. Leonide, SOFC Modelling and Parameter Identification by Means of Impedance Spectroscopy, Schriften des Instituts für Werkstoffe der Elektrotechnik, Karlsruher Institut für Technologie, KIT Scientific Publ., 2010. URL: https://books.google.si/books?id=kE3Otpn2DS8C.
[9] Y. Wang, D. M. Blei, Variational Bayes under model misspecification, 2019. arXiv:1905.10859.
[10] P. E. Jacob, S. M. M. Alavi, A. Mahdi, S. J. Payne, D. A. Howey, Bayesian inference in non-Markovian state-space models with applications to battery fractional-order systems, IEEE Transactions on Control Systems Technology 26 (2018) 497–506.
[11] V. Šmídl, A. Quinn, The Variational Bayes Method in Signal Processing, Springer-Verlag, 2006.
[12] T. D. Bui, J. Yan, R. E. Turner, A unifying framework for Gaussian process pseudo-point approximations using power expectation propagation, 2016. arXiv:1605.07066v3.
[13] J. Hensman, A. Matthews, Z. Ghahramani, Scalable variational Gaussian process classification, 2014. arXiv:1411.2005v1.
[14] D. J. Rezende, S. Mohamed, D. Wierstra, Stochastic backpropagation and approximate inference in deep generative models, 2014. arXiv:1401.4082v3.
[15] Z. Yang, L. Xie, C. Zhang, Variational Bayesian algorithm for quantized compressed sensing, 2012. arXiv:1203.4870v2.
[16] V. P. Oikonomou, S. Nikolopoulos, I. Kompatsiaris, A novel compressive sensing scheme under the variational Bayesian framework, IEEE, 2019, pp. 1–5.
[17] C. Gruhl, B. Sick, Variational Bayesian inference for hidden Markov models with multivariate Gaussian output distributions, 2016. arXiv:1605.08618v1.
[18] K. P. Panousis, S. Chatzis, S. Theodoridis, Variational conditional-dependence hidden Markov models for human action recognition, 2020. arXiv:2002.05809v1.
[19] S. Levine, Reinforcement learning and control as probabilistic inference: Tutorial and review, 2018. arXiv:1805.00909v3.
[20] Y. Liu, H. Qin, Z. Zhang, S. Pei, Z. Jiang, Z. Feng, J. Zhou, Probabilistic spatiotemporal wind speed forecasting based on a variational Bayesian deep learning model, Applied Energy 260 (2020) 114259.
[21] Y. Liu, H. Qin, Z. Zhang, S. Pei, C. Wang, X. Yu, Z. Jiang, J. Zhou, Ensemble spatiotemporal forecasting of solar irradiation using variational Bayesian convolutional gate recurrent unit network, Applied Energy 253 (2019) 113596.
[22] W. Choi, H. Kikumoto, R. Choudhary, R. Ooka, Bayesian inference for thermal response test parameter estimation and uncertainty assessment, Applied Energy 209 (2018) 306–321.
[23] W. Choi, K. Menberg, H. Kikumoto, Y. Heo, R. Choudhary, R. Ooka, Bayesian inference of structural error in inverse models of thermal response tests, Applied Energy 228 (2018) 1473–1485.
[24] P. Pasquier, D. Marcotte, Robust identification of volumetric heat capacity and analysis of thermal response tests by Bayesian inference with correlated residuals, Applied Energy 261 (2020) 114394.
[25] B. Dolenc, G. Nusev, P. Boškoski, B. Morel, J. Mougin, Ð. Juričić, Probabilistic deconvolution of solid oxide fuel cell impedance spectra, 2020.
[26] M. D. Hoffman, A. Gelman, The No-U-Turn Sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo, arXiv:1111.4246.
[27] J. Salvatier, T. V. Wiecki, C. Fonnesbeck, Probabilistic programming in Python using PyMC3, PeerJ Computer Science 2 (2016) e55. URL: https://doi.org/10.7717/peerj-cs.55.
[28] M. Betancourt, A conceptual introduction to Hamiltonian Monte Carlo, arXiv:1701.02434.
[29] D. M. Blei, A. Kucukelbir, J. D. McAuliffe, Variational inference: A review for statisticians, 2016. arXiv:1601.00670v9.
[30] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, S. Chintala, PyTorch: An imperative style, high-performance deep learning library, in: H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, R. Garnett (Eds.), Advances in Neural Information Processing Systems 32, Curran Associates, Inc., 2019, pp. 8024–8035. URL: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
[31] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, 2014. arXiv:1412.6980.
[32] S. J. Reddi, S. Kale, S. Kumar, On the convergence of Adam and beyond, 2019. arXiv:1904.09237v1.
[33] C. Monje, Y. Chen, B. Vinagre, D. Xue, V. Feliu, Fractional Order Systems and Control - Fundamentals and Applications, 2010.
[34] P. Boškoski, A. Debenjak, B. M. Boshkoska, Fast Electrochemical Impedance Spectroscopy, Springer International Publishing, 2017.
[35] Y. Wang, D. M. Blei, Frequentist consistency of variational Bayes, Journal of the American Statistical Association 114 (2018) 1147–1161. URL: http://dx.doi.org/10.1080/01621459.2018.1473776.
[36] C. Tenreiro, Boundary kernels for distribution function estimation, REVSTAT Statistical Journal 11 (2013) 169–190.

Appendix A. Supplementary material: numerical implementation
Supplementary material containing the numerical implementation of the VB algorithm can be found at https://repo.ijs.si/lznidaric/variational-bayes-supplementary-material. The Python Jupyter notebook SupplementaryMaterialSVI.ipynb contains the main algorithm. The data set used to recreate the results presented in this article can be found in the NumPy file dataset.npz.
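An .npz archive of this kind is read with NumPy's `load`. A minimal sketch follows; since the key names inside the published dataset.npz are not documented here, the snippet builds a small stand-in archive with hypothetical keys to demonstrate the access pattern.

```python
import numpy as np

# Build a small stand-in archive (the real dataset.npz ships with the
# repository; the key names used here are hypothetical).
freqs = np.logspace(-1, 4, 60)
np.savez("dataset_demo.npz",
         frequencies=freqs,
         impedance=np.full(60, 3.2e-3 + 0j))

# An .npz file behaves like a read-only dict of named arrays.
with np.load("dataset_demo.npz") as data:
    print(sorted(data.files))
    f = data["frequencies"]
    Z = data["impedance"]
```

Replacing the stand-in filename with dataset.npz (and the keys with the ones listed in the notebook) gives the arrays needed to rerun the estimation.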