Assessment of COVID-19 hospitalization forecasts from a simplified SIR model
Tech. report UCL-INMA-2020.05-v1, http://sites.uclouvain.be/absil/2020.05
P.-A. Absil§, Ousmane Diao§, Mouhamadou Diallo¶

July 22, 2020
Abstract
We propose the SH model, a simplified version of the well-known SIR compartmental model of infectious diseases. With optimized parameters and initial conditions, this time-invariant two-parameter two-dimensional model is able to fit COVID-19 hospitalization data over several months with high accuracy (mean absolute percentage error below 15%). Moreover, we observed that, when the model is trained on a suitable two-week period around the hospitalization peak for Belgium, it forecasts the subsequent three-month decrease with mean absolute percentage error below 10%. However, when it is trained in the increase phase, it is less successful at forecasting the subsequent evolution.
Key words:
COVID-19 prediction; COVID-19 forecast; SARS-CoV-2; coronavirus; SIR model; SH model; hidden variable; hospitalization prediction
1 Introduction

The SIR model [KMW27] is a simple compartmental model that is widely used to model infectious diseases [Het00]. Letting S(t), I(t), and R(t) denote the number of susceptible, infectious, and removed (or recovered) individuals at time t, and letting Ṡ(t), İ(t), and Ṙ(t) denote their time derivatives, the SIR model consists in the following three-dimensional continuous-time autonomous dynamical system:

  Ṡ(t) = −(β/N) S(t) I(t),          (1a)
  İ(t) = (β/N) S(t) I(t) − γ I(t),  (1b)
  Ṙ(t) = γ I(t),                    (1c)

where N = S(t) + I(t) + R(t) is the constant total population and β and γ are parameters. The SIR model, and several (sometimes deep) variations thereof, have been applied in several works to model the COVID-19 dynamics (see, e.g., [LGWR20, Atk20, Koz20, Nes20, CNP20, FP20, CFP20]), with known limitations (see [RVHL20, BFG+20, WF20]). Sometimes, an SIR-like model is used to make long-term predictions (see [BD20]). However, at the time of writing this paper, it appears that studies are still rare (see, e.g., [SM20]) where the SIR model parameters and initial conditions are learned on a "train" part of the available data in order to predict a "test" part of the data, making it possible to assess the prediction accuracy of the model.

In this paper, we adapt the SIR model to the situation where (i) S, I, and R are hidden variables but I(t) is observed through a "proxy" H(t) = αI(t), where α is unknown but constant, and (ii) not only β and γ but also the total population N are unknown and have thus to be estimated. In the context of the COVID-19 application, H will stand for the total number of lab-confirmed hospitalized patients. The proposed adapted SIR model, which we term the SH model, is given in (8). It has two state variables (S̄, a scaled version of S, and H) and two parameters (β̄, which lumps together the parameters β, N, and α, and γ).

We leverage the proposed SH model as follows in order to make hospitalization predictions. Given observed values (H_o(t))_{t=t_i,...,t_c}, we estimate the parameters β̄ and γ and the initial conditions S̄(t_i) and H(t_i) of the SH model. Then we simulate the SH model in order to predict (H(t))_{t=t_c+1,...,t_f} for a specified final prediction time t_f. This approach thus combines the areas of parameter estimation (for obvious reasons), data assimilation (for the generation of the initial conditions), and machine learning (for the train-test approach).

∗ The second author is supported by a fellowship awarded by UCLouvain's Conseil de l'action internationale.
§ ICTEAM Institute, UCLouvain, B-1348 Louvain-la-Neuve, Belgium (http://sites.uclouvain.be/absil/).
¶ Molecular Biology Unit/Bacteriology-Virology Lab, CNHU A. Le Dantec / Université Cheikh Anta Diop, Dakar, Sénégal
2 Data

In Section 6, we will use a COVID-19 dataset for Belgium that provides us with the following data for t = t_s, ..., t_e, where t_s is 2020-03-15 and t_e is 2020-07-15:

• H_o(t): number of COVID-19 hospitalized patients on day t (TOTAL_IN column);
• E_o(t): number of COVID-19 patients entering the hospital (number of lab-confirmed hospital intakes) on day t (NEW_IN column);
• L_o(t): number of COVID-19 patients discharged from the hospital on day t (NEW_OUT column).

The subscript o stands for "observed". We will also mention results obtained with a dataset for France where t_s is 2020-03-18 and t_e is 2020-07-17.

In the data for Belgium, there is a mismatch between H_o(t) and H_o(t−1) + E_o(t) − L_o(t) for most t, and H_o(t_s) + Σ_{t=t_s+1}^{t_e} (E_o(t) − L_o(t)) is significantly larger than H_o(t_e). This can be due to the patients who get infected at the hospital (they would be counted in H_o without appearing in E_o) and to the patients who die at the hospital (they would be removed from H_o without appearing in L_o). In order to remedy this mismatch, we redefine L_o(t) by L_o(t) := −H_o(t) + H_o(t−1) + E_o(t).

For the French data, we sum the "rad" (daily number of new home returns) and "dc" (daily number of newly deceased persons) columns to get L_o(t). Since there is no column for E_o, we define E_o(t) = H_o(t) − H_o(t−1) + L_o(t).

Several other COVID-19-related data are available. In particular, the daily number of infected individuals, I_o(t), is also reported by health authorities. However, a visual inspection reveals that the graph of I_o is less smooth than the graph of H_o. A possible reason is that I_o is affected by two technical sources of variation: the fraction of tested persons and the accuracy of the tests. In contrast, the reported number of COVID-19 hospitalized individuals, H_o, is expected to be much more accurate. Moreover, for the authorities, predicting H is more crucial than predicting I. Therefore, as in [Koz20], we focus on H.

The Belgian data are taken from https://epistat.sciensano.be/Data/COVID19BE_HOSP.csv, obtained from https://epistat.wiv-isp.be/covid/. The French data are taken from the file donnees-hospitalieres-covid19-2020-07-17-19h00.csv.
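The redefinition of L_o above can be sketched as follows. The column names TOTAL_IN and NEW_IN follow the Sciensano file; the toy DataFrame is a hypothetical stand-in for the downloaded CSV, with illustrative values only.

```python
import pandas as pd

def redefine_discharges(df: pd.DataFrame) -> pd.Series:
    """Redefine L_o(t) := -H_o(t) + H_o(t-1) + E_o(t), so that the balance
    H_o(t) = H_o(t-1) + E_o(t) - L_o(t) holds exactly for t >= 1.
    Expects the Sciensano column names TOTAL_IN (H_o) and NEW_IN (E_o)."""
    h = df["TOTAL_IN"]
    e = df["NEW_IN"]
    # The first day has no predecessor, so L_o is undefined (NaN) there.
    return -h + h.shift(1) + e

# Toy data standing in for COVID19BE_HOSP.csv.
df = pd.DataFrame({"TOTAL_IN": [100, 130, 150], "NEW_IN": [40, 45, 30]})
l_o = redefine_discharges(df)
# Balance check: H_o(t) == H_o(t-1) + E_o(t) - L_o(t) for t >= 1.
assert all(df["TOTAL_IN"][t] == df["TOTAL_IN"][t - 1] + df["NEW_IN"][t] - l_o[t]
           for t in range(1, len(df)))
```

In practice one would read the CSV directly (e.g., `pd.read_csv(...)` on the URL above) and aggregate over provinces before applying this redefinition.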
3 Models

We assume that, for all t,

  H(t) = αI(t),  (2)

where α is unknown but constant over time. In other words, (2) posits that a constant fraction of the infected people is hospitalized. Equation (2) is reminiscent of [CFP20, (3)], where the number of dead individuals plays the role of H and α is time dependent.

We assume the following observation models with additive noise:

  H_o(t) = H(t) + ε_H(t),  (3a)
  E_o(t) = E(t) + ε_E(t),  (3b)
  L_o(t) = L(t) + ε_L(t).  (3c)

Assuming that the ε noises are independent Gaussian centered random variables confers a maximum likelihood interpretation to some subsequent estimators, but this assumption is very simplistic.

Multiplying (1a) and (1b) by α, and multiplying the numerator and denominator of β/N by α, we obtain

  αṠ(t) = −(β/(Nα)) αS(t) αI(t),           (4)
  αİ(t) = (β/(Nα)) αS(t) αI(t) − γ αI(t).  (5)

Letting

  S̄ := αS,      (6)
  β̄ := β/(Nα),  (7)

and using (2), we obtain the simplified SIR model

  dS̄(t)/dt = −β̄ S̄(t) H(t),          (8a)
  dH(t)/dt = β̄ S̄(t) H(t) − γ H(t),  (8b)

which we term the SH model. (The "S" in this SH model can be interpreted as the number of individuals susceptible of being hospitalized.) The SH model has only two parameters (β̄ and γ), one hidden state variable (S̄), and one observed state variable (H) with observation model (3a).

Note that, in the SH model (8), the number of patients entering the hospital per unit of time is

  E(t) := β̄ S̄(t) H(t)  (9)

and the number of patients leaving the hospital per unit of time is

  L(t) := γ H(t).  (10)
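As a minimal sketch, the SH model (8) can be integrated numerically, e.g., with SciPy's general-purpose ODE solver. The parameter and initial values below are illustrative only; they are not fitted to any dataset.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sh_rhs(t, y, beta_bar, gamma):
    """Right-hand side of the SH model (8):
    dS_bar/dt = -beta_bar * S_bar * H,  dH/dt = beta_bar * S_bar * H - gamma * H."""
    s_bar, h = y
    flow_in = beta_bar * s_bar * h   # E(t), hospital intakes per unit time (9)
    flow_out = gamma * h             # L(t), discharges per unit time (10)
    return [-flow_in, flow_in - flow_out]

# Illustrative values: beta_bar * S_bar(0) > gamma, so H first grows, then peaks.
beta_bar, gamma = 1e-5, 0.08
sol = solve_ivp(sh_rhs, (0.0, 120.0), [2e4, 300.0],
                args=(beta_bar, gamma), dense_output=True)
h_curve = sol.y[1]   # hospitalization trajectory H(t) at the solver's time points
```

Since β̄ S̄(0) = 0.2 exceeds γ = 0.08 here, the trajectory exhibits the familiar epidemic rise and decay of H.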
4 Estimation and prediction method

The goal is now to leverage the SH model (8) in order to predict future values of H based on its past and current observations (H_o(t))_{t=t_s,...,t_c}. To this end, we have to estimate (or "learn") four variables, which we term estimands: the two parameters β̄ and γ and the two initial values S̄(t_i) and H(t_i), where t_i is the chosen initial time for the SH model (8). One possible approach is to minimize some error measure between the simulated values (H(t))_{t=t_i,...,t_c} and the observed values (H_o(t))_{t=t_i,...,t_c} as a function of the four estimands. However, the error measure is not available as a closed-form expression of the four estimands, and this makes this four-variable optimization problem challenging. We now show that it is possible to estimate H(t_i) and γ separately. This leaves us with an optimization problem in the two remaining estimands β̄ and S̄(t_i), making it possible to visualize the objective function by means of a contour plot.

To recap, we have t_s ≤ t_i < t_c < t_e. The provided dataset goes from t_s to t_e. The test set is (H_o(t), E_o(t), L_o(t))_{t ∈ [t_c+1, t_e]}, and this data cannot be used to estimate the variables and simulate the SH model. The SH model is initialized at t_i, and we refer to the data (H_o(t), E_o(t), L_o(t))_{t ∈ [t_i, t_c]} as the train set, though it is legitimate to widen it to t ∈ [t_s, t_c].

4.2 Estimation of H(t_i)

It is reasonable to believe that ε_H in (3a) is small in practice. Hence we simply take H(t_i) := H_o(t_i).

4.3 Estimation of γ

We have L(t) = γH(t); see (10). In view of the observation model (3), we can estimate γ by a ratio of means:

  γ̂_RM = Σ_{t=t_i}^{t_c} L_o(t) / Σ_{t=t_i}^{t_c} H_o(t).
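The ratio-of-means estimator can be sketched in a few lines; the arrays below are synthetic stand-ins for the observed train-window series.

```python
import numpy as np

def estimate_gamma_rm(h_obs, l_obs):
    """Ratio-of-means estimator for gamma over the train window:
    gamma_hat = sum_t L_o(t) / sum_t H_o(t), motivated by L(t) = gamma * H(t) (10)."""
    h_obs = np.asarray(h_obs, dtype=float)
    l_obs = np.asarray(l_obs, dtype=float)
    return l_obs.sum() / h_obs.sum()

# Sanity check on synthetic data: if L_o is exactly gamma * H_o,
# the estimator recovers gamma.
h = np.array([100.0, 200.0, 400.0])
gamma_true = 0.08
l = gamma_true * h
assert abs(estimate_gamma_rm(h, l) - gamma_true) < 1e-12
```

With noisy L_o, the estimator averages the noise out over the window instead of fitting each day's ratio.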
Several other estimators are possible, such as the least squares estimator, or the total least squares estimator, which is the maximum likelihood estimator of γ for the iid Gaussian noise model (3). Note that t_i in the expression of γ̂ can legitimately be replaced by any time between t_s and t_c. Only data in the test set, i.e., occurring after t_c, are unavailable in the variable estimation phase.

4.4 Estimation of β̄ and S̄(t_i)

We now have to estimate the two remaining estimands, namely β̄ and S̄(t_i). We choose the following sum-of-squared-errors objective function:

  φ(β̄, S̄(t_i)) = c_H Σ_{t=t_i}^{t_c} (H(t) − H_o(t))² + c_E Σ_{t=t_i}^{t_c} (E(t) − E_o(t))² + c_L Σ_{t=t_i}^{t_c} (L(t) − L_o(t))²,  (11)

where the c coefficients are parameters, all set to 1 in our experiments unless otherwise stated. In (11), H(t), E(t) as in (9), and L(t) as in (10) are given by the (approximate) solution of the SH model (8) in which (i) H(t_i) and γ take the values estimated as above, and (ii) β̄ and S̄(t_i) take the values specified in the argument of φ. In order to compute the required (approximate) solution of the SH model (8), we use explicit Euler integration with a time step of one day, yielding, for t = t_i, ..., t_c − 1,

  S̄(t+1) = S̄(t) − β̄ S̄(t) H(t),          (12a)
  H(t+1) = H(t) + β̄ S̄(t) H(t) − γ H(t).  (12b)

Now that the objective function φ (also termed "cost function" or "loss function") is defined, we let the estimated (β̄, S̄(t_i)) be the (approximate) minimizer of φ returned by some optimization solver.

4.5 Prediction of H

Recall that the time range between t_i and t_c is the train period and the time range between t_c + 1 and t_e is termed the test period. In order to predict the values of H over the test period, we apply the above procedure to estimate the four estimand variables β̄, γ, S̄(t_i), and H(t_i), and we compute the solution H(t) of (12) for t from t_i to t_e. The prediction is then (H(t))_{t=t_c+1,...,t_e}. The discrepancy between (H(t))_{t=t_c+1,...,t_e} and (H_o(t))_{t=t_c+1,...,t_e} reveals the accuracy of the prediction.

5 Alternative estimation methods

5.1 Separate estimation of β̄ and S̄(t_i)

As an alternative to Section 4.4, we now present a method to estimate β̄ independently. We do not recommend this alternative, but it sheds light on the various forecast accuracies observed in Section 6.

From (8a) and (9), we obtain

  d/dt (E(t)/H(t)) = −β̄ E(t).

Since d/dt (E/H) = (HĖ − EḢ)/H², this yields

  β̄ = Ḣ(t)/(H(t))² − Ė(t)/(E(t) H(t)).

Hence a possible estimator for β̄ is

  β̄̂ = (H_o(t+1) − H_o(t))/(H_o(t))² − (E_o(t+1) − E_o(t))/(E_o(t) H_o(t))  (13)

and, from (9), a possible simple estimator for the remaining estimand is S̄̂(t_i) = E(t_i)/(β̄̂ H(t_i)).

We can now investigate how the ε error terms in the observation model (3) impact β̄̂.
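The finite-difference estimator (13) can be sketched as follows. The two-day series below is synthetic, generated by one explicit Euler step of the SH model with the hypothetical values β̄ = 1e-5, γ = 0.08, S̄ = 2e4, H = 300; on such noise-free data the estimate lands close to the true β̄, up to discretization error.

```python
def estimate_beta_bar(h_obs, e_obs, t):
    """Direct estimator (13):
    beta_hat = (H_o(t+1)-H_o(t))/H_o(t)^2 - (E_o(t+1)-E_o(t))/(E_o(t)*H_o(t))."""
    return ((h_obs[t + 1] - h_obs[t]) / h_obs[t] ** 2
            - (e_obs[t + 1] - e_obs[t]) / (e_obs[t] * h_obs[t]))

# One Euler step: H(1) = 300 + beta*S*H - gamma*H = 300 + 60 - 24 = 336,
# S_bar(1) = 20000 - 60 = 19940, E(t) = beta * S_bar(t) * H(t).
h_obs = [300.0, 336.0]
e_obs = [60.0, 66.9984]
b_hat = estimate_beta_bar(h_obs, e_obs, 0)   # about 1.12e-5, vs true 1e-5
```

The roughly 12% gap between b_hat and the true 1e-5 is the one-day discretization error; with noisy observations the second term of (13) dominates the error, as discussed next.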
We assume throughout that the errors in H_o(t+1) − H_o(t) and E_o(t+1) − E_o(t) are comparable. Except at the very beginning of the outbreak, E_o(t) H_o(t) is considerably smaller than (H_o(t))², and thus the second term of (13) drives the error. Consequently, the estimation of β̄ should be the most accurate when E_o(t) H_o(t) is the largest. This occurs slightly before the peak of H_o(t). This means that the estimation of β̄ should be the most accurate for a train period slightly before the peak. However, this does not mean that this position of the train period gives the most accurate forecasts, as we will see below.

Let us consider the situation where the train period is located before the peak. Then the estimation of β̄ is less accurate, and this impacts S̄̂(t_i). At the initial time t_i, this does not impact the right-hand term of (8b), in view of the definition of S̄̂(t_i). However, an overestimation of β̄ will induce an underestimation of S̄(t_i) and, in view of (8a), a subsequent even stronger underestimation of S̄(t). Hence the first term of (8b) will be underestimated. As a consequence, the peak in H will appear sooner and lower. The case of an underestimation of β̄ leads to the opposite conclusion, namely a peak in H that appears later and higher. In summary, the further before the peak the train period is located, the more inaccurate the position and height of the peak are expected to be.

Finally, let us consider the situation where the train period is located after the peak. Then we can make the same observations as in the previous paragraph, except that predicting the peak is now irrelevant. Moreover, we are in the decrease phase, where the first term of (8b) (which involves β̄ and S̄(t)) is smaller than the second term (which does not involve these quantities). Consequently, the possibly large estimation errors on β̄ and S̄(t) will only slightly affect the forecast of H(t).
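Putting Sections 4.2–4.4 together, the fitting pipeline can be sketched as follows: the explicit Euler recursion (12), the objective (11), and a derivative-free search over (β̄, S̄(t_i)) with scipy.optimize.fmin (the solver used in Section 6), with γ and H(t_i) held fixed. The "observations" below are synthetic, generated by the model itself; all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import fmin

def simulate_sh_euler(beta_bar, gamma, s_bar0, h0, n_days):
    """Explicit Euler integration (12) of the SH model, time step of one day.
    Returns S_bar, H, and the daily flows E (intakes, (9)) and L (discharges, (10))."""
    s_bar = np.empty(n_days)
    h = np.empty(n_days)
    s_bar[0], h[0] = s_bar0, h0
    for t in range(n_days - 1):
        flow = beta_bar * s_bar[t] * h[t]
        s_bar[t + 1] = s_bar[t] - flow              # (12a)
        h[t + 1] = h[t] + flow - gamma * h[t]       # (12b)
    return s_bar, h, beta_bar * s_bar * h, gamma * h

def phi(x, gamma, h0, h_obs, e_obs, l_obs, c=(1.0, 1.0, 1.0)):
    """Sum-of-squared-errors objective (11) in the two remaining estimands
    x = (beta_bar, S_bar(t_i)); gamma and H(t_i) are fixed beforehand."""
    beta_bar, s_bar0 = x
    _, h, e, l = simulate_sh_euler(beta_bar, gamma, s_bar0, h0, len(h_obs))
    return (c[0] * np.sum((h - h_obs) ** 2)
            + c[1] * np.sum((e - e_obs) ** 2)
            + c[2] * np.sum((l - l_obs) ** 2))

# Synthetic noise-free "observations" over a 14-day train window.
beta_true, gamma, s0_true, h0 = 1e-5, 0.08, 2e4, 300.0
_, h_obs, e_obs, l_obs = simulate_sh_euler(beta_true, gamma, s0_true, h0, 14)
# Nelder-Mead search from a deliberately wrong initial guess.
x_hat = fmin(phi, (2e-5, 1e4), args=(gamma, h0, h_obs, e_obs, l_obs), disp=False)
```

On this noise-free data the minimizer approaches (β_true, s0_true); on real data the initial guess matters, which is why the contour plot inspection of Section 6 is useful.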
5.2 Joint estimation of the four estimands

An alternative to Sections 4.2–4.4 is to reconsider (11) as a function of all four estimands:

  φ̃(β̄, S̄(t_i), γ, H(t_i)) = c_H Σ_{t=t_i}^{t_c} (H(t) − H_o(t))² + c_E Σ_{t=t_i}^{t_c} (E(t) − E_o(t))² + c_L Σ_{t=t_i}^{t_c} (L(t) − L_o(t))².  (14)

In (14), H(t), E(t) as in (9), and L(t) as in (10) are given by the solution of the discrete-time SH model (12), where the parameters β̄ and γ and the initial conditions S̄(t_i) and H(t_i) take the values specified in the argument of φ̃. Minimizing φ̃ is a more challenging problem than minimizing φ (11) in view of the larger number of optimization variables. It may be essential to give a good initial guess to the optimization solver, and a natural candidate for this is the values obtained by the procedure described in Sections 4.2–4.4.

In our preliminary experiments, we have found that this alternative does not present a clear advantage in terms of the prediction mean absolute percentage error (MAPE). The results reported in Section 6 are obtained with the sequential prediction approach of Section 4, unless otherwise specified.

6 Experiments

We now apply the method of Section 4 (by default) or a method of Section 5 (when specified) to the data of Section 2, available for Belgium (by default) and France (when specified). The methods are implemented in Python 3 and run with Anaconda 2019.10. The code to reproduce the results is available from https://sites.uclouvain.be/absil/2020.05.

6.1 Fitting the whole dataset

We first check how well the SH model (8) can fit the available data for Belgium. For this experiment, we use the method of Section 5.2 with c_E = c_L = 0 in order to get the best possible fit (in the least squares sense) to the H_o curve. The result is shown in Figure 1. The fitting error is remarkably small (MAPE below 15%).
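The MAPE figures reported here can be computed as follows; the arrays are illustrative stand-ins for the fitted and observed H curves.

```python
import numpy as np

def mape(h_pred, h_obs):
    """Mean absolute percentage error (in %) between a predicted (or fitted)
    series and the observed series."""
    h_pred = np.asarray(h_pred, dtype=float)
    h_obs = np.asarray(h_obs, dtype=float)
    return 100.0 * np.mean(np.abs((h_pred - h_obs) / h_obs))

# Two days, each off by 10% of the observed value: MAPE = 10%.
err = mape([110.0, 90.0], [100.0, 100.0])
```

Note that MAPE is undefined when some H_o(t) = 0, which does not occur over the periods considered here.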
For the French data, the fit is even better in terms of MAPE (about 3%). Note that the parameters of the SH model are constant with respect to time in our experiments. This contrasts with [Koz20], where there are two phases, and with [Nes20], where the infection rate is piecewise constant with several pieces.

We stress that Figure 1 tells us little about the prediction capability of the model. If the fit over some period is bad, then predictions (i.e., forecasts) over that period can only be bad. But if the fit is good (as is the case here), the predictions can still be bad due to their sensitivity with respect to the data preceding the to-be-predicted period. For example, a better fit (in the RMSE sense) than in Figure 1 can be obtained with a polynomial of degree 8; however, its prediction capability is abysmal.

In order to assess the prediction capability of the model, we have to learn the estimand variables over a train period that we make available to the algorithm, use the learned estimand variables in order to predict H over a subsequent test period (whose data is not available to the algorithm), and finally compare the prediction with the data on the test period. This is what we proceed to do in the rest of this Section 6.

Figure 1: Belgium, fitting the SH model to the H_o (total hospitalized) curve. In this experiment, the train set is the whole dataset, hence there is no test (prediction) curve. Reproduce with SHR_16PA_py_BEL_1sttraintstart1_1sttraintstop123_c100.zip.

6.2 Prediction with a train period around the peak

We start with a prediction experiment where the train period is around the peak. According to Section 5.1, this is a promising location. A contour plot of the objective function φ (11) is given in Figure 2. In order to make the minimizer easier to visualize, the plot shows equispaced level curves of log₁₀(φ − 0.99 φ*), where φ* is an approximation of the minimal value of φ. Based on a visual inspection, we choose (1e-5, 1e4) as the initial guess of the optimization solver.
The optimization solver is scipy.optimize.fmin with its default parameters. The middle plot of Figure 2 shows (H_o(t))_{t=t_s,...,t_e} (observed hospitalizations, gray solid line), (H(t))_{t=t_i,...,t_c} (hospitalizations given by the model over the train period, blue dashed line), and (H(t))_{t=t_c+1,...,t_e} (hospitalizations predicted over the test period, red dash-dot line). In order to give a sense of the sensitivity of the results, we superpose the curves obtained for three slightly different train periods. The test MAPEs for the three curves are 27%, 7%, and 8%. The right-hand plot of Figure 2 shows the evolution of S̄(t).

6.3 Various train periods

In Figure 3, we superpose the results obtained with various train periods of 14 days. The figure corroborates the comments of Section 5.1. In particular, if the train period is fully located before the peak, then the predictions are rather inaccurate. Placing the train period around the peak gives excellent prediction results. When the train period is fully located in the decrease phase, the estimation of β̄ and S̄(t_i) is seen to be very sensitive, but this does not much affect the quality of the prediction of H(t).

Figure 2: Belgium, train period around the peak. Left: contour plot of φ (11). Right: fitting and predictions with the SH model. Reproduce with SHR_16PA_py_BEL_1sttraintstart16_1sttraintstop32_c111.zip.
Figure 3: Belgium, various train periods. Reproduce with SHR_16PA_py_BEL_1sttraintstart1_1sttraintstop15_c111.zip.
Figure 4: France, various train periods. Reproduce with SHR_16PA_py_FRA_1sttraintstart1_1sttraintstop15_c111.zip.
Figure 4 is the counterpart of Figure 3 for France. These experiments are also compatible with the comments of Section 5.1. We also considered some departments separately, with similar results.

A disconcerting aspect is the evolution of the estimated γ as a function of the location of the train period. In Figure 3 (Belgium), the estimation of γ is grouped around 0.08 for several train periods. However, in Figure 4, the estimation of γ keeps decreasing, indicating that the daily number of patients leaving the hospital is an increasingly small fraction of the number of patients at the hospital.

7 Conclusion

The experiments in Section 6 have shown that the proposed method has a remarkably good fitting capability over the whole available data, and also a remarkably good predictive value over certain time ranges for the Belgian data. However, there are also time ranges where the prediction is very inaccurate, and the accuracy is also found to be lower for the French data. The predictions returned by the model should thus be taken with much caution. In keeping with this warning, we refrained from displaying predictions beyond the end time of the datasets. The Python code is freely available to make such predictions, but there is no warranty on their accuracy.

Another source of caution is that we cannot rule out the situation where the considered objective function would be multimodal. The optimization solver might thus get stuck in a local nonglobal minimum, yielding a suboptimal fit of the train data and possibly a poorer prediction than what an omniscient solver would achieve. Moreover, even if the objective function is unimodal, the stopping criterion of the solver may trigger before an accurate approximation of the minimum is reached.

If the proposed model is used to guide prevention policies, then further caveats are in order. We have seen that the estimation of β̄ is very sensitive. Hence the proposed model can hardly help assess the impact of prevention measures on β̄.
Without knowing sufficiently accurately the impact of prevention measures on β̄, we may not aptly use the model to predict their impact on the evolution of the hospitalizations.

Yet another caveat is that it may be tempting to deduce from the excellent fit with a constant-parameter model (Figure 1) that the evolution of the prevention measures over the dataset period has had no impact on β̄. But this deduction is flawed. Indeed, in view of the comments made in Section 5.1, the available data could also be very well explained with fairly large jumps in β̄ during the decrease phase.

In spite of all these caveats, the hospitalization forecasts returned by the method, and also the evolution of S̄(t), might be of practical use in the context of various disease outbreaks, e.g., for resource planning. To this end, it will be important to understand which specific features of the COVID-19 outbreak in Belgium made it possible to forecast so accurately the hospitalization decrease several months ahead.

References

[Atk20] Andrew Atkeson. What will be the economic impact of COVID-19 in the US? Rough estimates of disease scenarios. Working Paper 26867, National Bureau of Economic Research, March 2020. doi:10.3386/w26867.

[BD20] Gyan Bhanot and Charles DeLisi. Predictions for Europe for the Covid-19 pandemic from a SIR model. medRxiv, 2020. doi:10.1101/2020.05.26.20114058.

[BFG+20] Jackie Baek, Vivek F. Farias, Andreea Georgescu, Retsef Levi, Tianyi Peng, Deeksha Sinha, Joshua Wilde, and Andrew Zheng. The limits to learning an SIR process: Granular forecasting for Covid-19, 2020. arXiv:2006.06373.

[CFP20] Timoteo Carletti, Duccio Fanelli, and Francesco Piazza. Covid-19: The unreasonable effectiveness of simple models. Chaos, Solitons & Fractals: X, 5:100034, 2020. doi:10.1016/j.csfx.2020.100034.

[CNP20] Giuseppe C. Calafiore, Carlo Novara, and Corrado Possieri. A modified SIR model for the COVID-19 contagion in Italy, 2020. arXiv:2003.14391.

[FP20] Duccio Fanelli and Francesco Piazza. Analysis and forecast of COVID-19 spreading in China, Italy and France. Chaos, Solitons & Fractals, 134:109761, 2020. doi:10.1016/j.chaos.2020.109761.

[Het00] Herbert W. Hethcote. The mathematics of infectious diseases. SIAM Review, 42(4):599–653, 2000. doi:10.1137/S0036144500371907.

[KMW27] William Ogilvy Kermack, A. G. McKendrick, and Gilbert Thomas Walker. A contribution to the mathematical theory of epidemics. Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character, 115(772):700–721, 1927. doi:10.1098/rspa.1927.0118.

[Koz20] Gregory Kozyreff. Hospitalization dynamics during the first COVID-19 pandemic wave: SIR modelling compared to Belgium, France, Italy, Switzerland and New York City data, 2020. arXiv:2007.01411.

[LGWR20] Ying Liu, Albert A. Gayle, Annelies Wilder-Smith, and Joacim Rocklöv. The reproductive number of COVID-19 is higher compared to SARS coronavirus. Journal of Travel Medicine, 27(2), 2020. doi:10.1093/jtm/taaa021.

[Nes20] Yurii Nesterov. Online prediction of COVID19 dynamics. Belgian case study. CORE Discussion Paper 2020/22, UCLouvain, 2020. URL: https://uclouvain.be/en/research-institutes/lidam/core/core-discussion-papers.html.

[RVHL20] Weston C. Roda, Marie B. Varughese, Donglin Han, and Michael Y. Li. Why is it difficult to accurately predict the COVID-19 epidemic? Infectious Disease Modelling, 5:271–281, 2020. doi:10.1016/j.idm.2020.03.001.

[SM20] Sudhansu Sekhar Singh and Dinakrushna Mohapatra. Predictive analysis for COVID-19 spread in India by adaptive compartmental model. medRxiv, 2020. doi:10.1101/2020.07.08.20148619.

[WF20] Meimei Wang and Steffen Flessa. Modelling Covid-19 under uncertainty: what can we expect? The European Journal of Health Economics, 2020. doi:10.1007/s10198-020-01202-y.