Model Fusion to Enhance the Clinical Acceptability of Long-Term Glucose Predictions
Maxime De Bois
Université Paris Saclay, CNRS-LIMSI, Orsay, France
[email protected]

Mounîm A. El Yacoubi
Télécom SudParis, Université Paris Saclay, SAMOVAR, CNRS, Évry, France
mounim.el [email protected]

Mehdi Ammi
Université Paris 8, Dept. of Computer Science, Saint-Denis, France
[email protected]
Abstract—This paper presents the Derivatives Combination Predictor (DCP), a novel model fusion algorithm for making long-term glucose predictions for diabetic people. First, using the history of glucose predictions made by several models, the future glucose variation at a given horizon is predicted. Then, by accumulating the past predicted variations starting from a known glucose value, the fused glucose prediction is computed. A new loss function is introduced to make the DCP model learn to react faster to changes in glucose variations. The algorithm has been tested on 10 in-silico type-1 diabetic children from the T1DMS software. Three initial predictors have been used: a Gaussian process regressor, a feed-forward neural network, and an extreme learning machine model. The DCP and two other fusion algorithms have been evaluated at a prediction horizon of 120 minutes with the root-mean-squared error of the prediction, the root-mean-squared error of the predicted variation, and the continuous glucose-error grid analysis. By making a successful trade-off between prediction accuracy and predicted-variation accuracy, the DCP, alongside its specifically designed loss function, improves the clinical acceptability of the predictions, and therefore the safety of the model for diabetic people.
Index Terms—Glucose Prediction, Model Fusion, Clinical Acceptability, Artificial Neural Network
I. INTRODUCTION
Diabetes, the seventh leading cause of death in 2016, is one of the major diseases of the 21st century [1]. In order to avoid short-term (e.g., exhaustion, coma) or long-term (e.g., blindness, cardiovascular diseases) complications, diabetic people must maintain their blood glucose within acceptable ranges (i.e., between hypoglycemia and hyperglycemia). However, this task is far from easy given the wide variety of factors influencing blood glucose variations (e.g., food intake, medication, physical activity, emotions).

Innovations that aim at helping diabetic people in their daily lives follow several leads. First, monitoring devices such as continuous glucose monitors (e.g., FreeStyle Libre [2]) or medical coaching applications for diabetes (e.g., mySugr [3]) provide diabetic people with useful information such as current and past glucose values or calorie intake history. Moreover, artificial pancreases are starting to be commercialized and have already shown their effectiveness in managing the blood glucose of diabetic patients [4]. Finally, a lot of effort has recently been directed towards building glucose predictive models [5]. From the patient's past glucose values, carbohydrate (CHO) intakes, and insulin infusions, those models try to predict future glucose values.

In the past few years, many different glucose predictive model architectures have been tried out. Among them, Sun et al. proposed a generic predictive model using Long Short-Term Memory (LSTM) and bidirectional LSTM neural networks to predict glucose at prediction horizons (PH) of up to 60 minutes [6]. De Paula et al. studied the use of Gaussian Processes (GP) to predict future glucose values in an automated glucose controller based on reinforcement learning [7]. In their work, Georga et al. analyzed different types of Extreme Learning Machine (ELM) networks for online short-term glucose prediction [8]. Finally, Ben Ali et al. proposed a tuning methodology for the architecture of Feed-Forward Neural Networks (FFNN) that aims at improving the glucose predictions of diabetic patients up to 60 minutes ahead [9].

Nonetheless, to this day, no algorithm stands out from the others, each having its own strengths and weaknesses [10]. Several studies have tried to exploit those specificities by combining the different predictions into a single one. Wang et al. proposed the adaptive-weighted-average framework, which combines glucose predictive models by weighing them based on their past errors [11]. Daskalaki et al. built a hypoglycemia/hyperglycemia event warning system by combining autoregressive and recurrent neural network models [12], [13]. More recently, Jankovic et al. studied a multi-step fusion methodology using ELM models for long-term glucose prediction [14]. Finally, Yu et al. proposed an adaptive-filters-based fusion mechanism for short-term glucose prediction [15].

However, the particular question of long-term prediction remains an open problem. Due to the difficulty of the task, the models often output predictions that are inconsistent over time, with many high-amplitude oscillations. This inconsistency directly impacts the clinical acceptability, measured by the Continuous Glucose-Error Grid Analysis (CG-EGA). To address this issue, we propose a novel model fusion algorithm, the Derivatives Combination Predictor (DCP), that bases its predictions on the prediction of glucose variations.

The paper is organized as follows. First, we introduce our algorithm. Then, we go through the details of the experiments conducted during the study. Finally, before concluding, we provide the reader with an analysis of the results.

II. DERIVATIVES COMBINATION PREDICTOR
A. Presentation of the Model
The goal of the DCP is to make the glucose predictions consistent with each other. In particular, it tries to make the difference between two consecutive predictions as close to the true glucose variation as possible. To do so, the DCP combines the predictions made by different predictors at a given prediction horizon (PH) into a single prediction following a two-step process (see Figure 1).

First, at time t, a model we call the dModel predicts the glucose variation at time t + PH, \hat{\dot{y}}_{t+PH}, from the past history of glucose predictions, \hat{Y}^{Base}_{t+PH}, made by the initial predictors Base.

Then, starting from the most recent glucose value known by the predictor (i.e., the glucose value at time t, when the prediction is made), the fused glucose prediction, \hat{y}^{DCP}_{t+PH}, is computed by accumulating the last PH predicted derivatives (see Equation 1).

\hat{y}^{DCP}_{t+PH} = y_t + \sum_{i=1}^{PH} \hat{\dot{y}}_{t+i}    (1)

Example. We want to fuse two predictors, A and B, that forecast glucose values 120 minutes into the future (PH of 120 minutes). At every time step t, we forecast the glucose derivative (rate of change) 120 minutes into the future by giving the history of glucose predictions made by A and B to the dModel. With a history of 5, we give the dModel the predictions made by A and B for t+120, t+119, ..., t+116. The way the glucose variations are predicted depends on the nature of the dModel, which can be any regression model (e.g., a linear regressor or a neural network). Once the glucose derivatives are predicted up to t+120, we can compute the glucose prediction at t+120 from the last 120 glucose derivative predictions and the current glucose value, using Equation 1.

B. Learning of the dModel
While any supervised regression model can fit into the dModel, we chose to use a FFNN for its flexibility and its ability to model complex non-linear functions. To enhance the accuracy of the glucose predictions computed from the predicted derivatives, we introduce a new loss function to be used inside the FFNN-based dModel: the Derivatives-Biased Mean-Squared Error (MSE_DB).

The MSE_DB (see Equation 2) is quite similar to the MSE loss function. For every sample i, with a small enough value of \sigma, the squared error is scaled by roughly 1 \mp \gamma depending on the value of \hat{\dot{y}}_i / \dot{y}_i. If \hat{\dot{y}}_i / \dot{y}_i > 1, meaning either \hat{\dot{y}}_i > \dot{y}_i > 0 or \hat{\dot{y}}_i < \dot{y}_i < 0, then the loss attributed to this sample is scaled down; if not, it is scaled up.

MSE_{DB}(\dot{y}, \hat{\dot{y}}) = \frac{1}{n} \sum_{i=1}^{n} \left(1 + \gamma \cdot \tanh\left(\frac{1 - \hat{\dot{y}}_i / \dot{y}_i}{\sigma}\right)\right) (\dot{y}_i - \hat{\dot{y}}_i)^2    (2)

Intuitively, using this loss function during training means that we encourage the model to predict derivatives with the same sign as, and higher absolute values than, the true values. In practice, this makes our model react faster to changes in glucose variations and, therefore, makes the predictions more accurate.

III. EXPERIMENTAL RESULTS
A. Experimental Data
In this study, we used the 10 in-silico children from the UVA/Padova Type 1 Diabetes Metabolic Simulator (T1DMS) [16]. T1DMS has been approved by the Food and Drug Administration in the United States as a substitute for clinical testing and is, therefore, extensively used in the glucose prediction literature [5]. Compared to data coming from real patients, which are sensitive and oftentimes impossible to share, simulated data can be reproduced. Therefore, using the simulator makes our results reproducible and available for comparison.

The simulation lasted 28 days and outputted three different time series sampled every minute: glucose value, insulin infusion, and carbohydrate (CHO) intake over time. To account for the diversity of real-life situations, the subjects have been put under the following daily open-loop scenario:
1) Meals:
Every day comprises 3 meals, with each meal's timing and CHO amount sampled from Gaussian distributions. The timing distributions have a variance of 0.5 and means of 7h, 13h, and 20h, respectively. The CHO amount distributions have a variance of 0.5 and means of 40g, 85g, and 60g, respectively. Every meal lasts 15 minutes.
2) Insulin Boluses:
At the start of every meal, an insulin bolus is taken. The value of the bolus is sampled uniformly between 0.7 and 1.3 times the optimal insulin bolus. The optimal insulin bolus is computed by the simulator from the child's personal carbohydrate-to-insulin ratio.
3) Basal Insulin:
The basal insulin, computed by the simulator, is constant and optimal for every child.
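For concreteness, the scenario above can be sketched in a few lines. The seed, the `carb_ratio` default, and the function name are hypothetical stand-ins for the simulator's internals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the daily open-loop scenario: 3 meals whose timing (hours) and
# CHO amount (grams) follow Gaussian distributions of variance 0.5, and a
# bolus drawn uniformly in [0.7, 1.3] times the optimal bolus. `carb_ratio`
# is a hypothetical stand-in for the child's carbohydrate-to-insulin ratio.
MEAL_TIMES_H = [7.0, 13.0, 20.0]   # mean meal times
MEAL_CHO_G = [40.0, 85.0, 60.0]    # mean CHO amounts
STD = np.sqrt(0.5)                 # variance of 0.5 for both distributions

def daily_scenario(carb_ratio=10.0):
    """Sample one day's meals and insulin boluses."""
    day = []
    for t_mean, cho_mean in zip(MEAL_TIMES_H, MEAL_CHO_G):
        t = rng.normal(t_mean, STD)           # meal timing
        cho = rng.normal(cho_mean, STD)       # CHO amount
        optimal_bolus = cho / carb_ratio      # simulator-style optimal bolus
        bolus = rng.uniform(0.7, 1.3) * optimal_bolus
        day.append({"time_h": t, "cho_g": cho, "bolus_u": bolus})
    return day

print(daily_scenario())
```

Repeating this sampling for 28 days would produce one scenario per subject, in the spirit of the simulation described above.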
B. Base Predictors
The predictions of three predictors popular in the context of glucose prediction have been used as the input to the DCP: a Feed-Forward Neural Network (FFNN), a Gaussian Process regressor (GP), and an Extreme Learning Machine network (ELM) [5], [7]-[10], [14]. All the models have been optimized through the tuning of their hyperparameters.
1) Data Preprocessing:
Keeping their sequential nature intact, the time series outputted by the simulator have been grouped by days and then split into train and test subsets. The splitting has been done according to a 4-fold cross-validation used during the training and testing of the models.

Fig. 1: DCP data flow, from the initial glucose predictions made by the Base predictors we want to fuse, to the prediction of the glucose derivatives by the dModel, and the computation of the fused glucose predictions.

Fig. 2: Surface plots of the per-sample MSE (1) and MSE_DB (2) loss functions.

For each testing fold, a third of the training set constitutes the evaluation set, used to tune the models. After data normalization (zero-mean and unit-variance), the time series are fed to the models as histories of the past 60 values (1-hour-long history).
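A minimal sketch of this preprocessing, i.e., sliding 60-value histories plus standardization fitted on the training split; the function names and toy data are illustrative, not the actual pipeline:

```python
import numpy as np

def make_histories(series, hist=60):
    """Turn a minute-sampled multivariate series (T, F) into sliding
    1-hour histories: one flattened row of hist * F values per time step."""
    T, F = series.shape
    return np.stack([series[i:i + hist].ravel() for i in range(T - hist + 1)])

def standardize(train, other):
    """Zero-mean, unit-variance scaling, fitted on the training split only."""
    mu, sd = train.mean(axis=0), train.std(axis=0) + 1e-12
    return (train - mu) / sd, (other - mu) / sd

# toy check on random data standing in for glucose / insulin / CHO signals
rng = np.random.default_rng(1)
series = rng.normal(size=(200, 3))        # 200 minutes, 3 signals
X = make_histories(series, hist=60)
print(X.shape)                            # (141, 180)
X_train_n, X_test_n = standardize(X[:100], X[100:])
```

Fitting the scaler on the training split only avoids leaking test statistics into the models.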
2) GP:
The GP model has been implemented with a dot-product kernel [10]. Whereas the kernel coefficient and the inhomogeneity have been set to fixed values, the noise-controlling hyperparameter has been tuned through grid search.
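For reference, Gaussian process regression with a dot-product kernel has the closed form below; the `sigma0` and `noise` defaults are illustrative, not the paper's tuned values:

```python
import numpy as np

def gp_dot_product(X, y, X_star, sigma0=1.0, noise=1e-6):
    """GP regression posterior mean with a dot-product kernel
    k(x, x') = sigma0^2 + x . x'. `sigma0` (inhomogeneity) and `noise`
    are illustrative defaults standing in for the paper's values."""
    K = sigma0 ** 2 + X @ X.T                  # train/train kernel matrix
    K_star = sigma0 ** 2 + X_star @ X.T        # test/train kernel matrix
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
    return K_star @ alpha                      # posterior mean at X_star

# toy check: a dot-product GP is linear in x, so it recovers a linear map
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5)
pred = gp_dot_product(X, y, X)
print(bool(np.max(np.abs(pred - y)) < 1e-3))   # True
```

A dot-product kernel makes the GP equivalent to Bayesian linear regression, which is why the toy check recovers a linear target almost exactly.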
3) ELM:
Given the inherent simplicity of ELM models in general (no training and close to no tuning needed), we optimized the number of neurons inside the single hidden layer within the [1, 20160] range (20160 being the number of training samples) [17]. To reduce the overfitting of the model to the training set, we added an L2 penalty to the weights of the neurons.
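A minimal ELM sketch consistent with this description: a random, untrained hidden layer followed by an L2-regularized (ridge) linear readout. The hidden size, penalty strength, and toy target are illustrative:

```python
import numpy as np

def elm_fit(X, y, n_hidden=200, l2=1e-2, seed=0):
    """Extreme learning machine: random frozen hidden layer plus a
    ridge (L2-penalized) linear readout solved in closed form."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (frozen)
    b = rng.normal(size=n_hidden)                # random biases (frozen)
    H = np.tanh(X @ W + b)                       # random non-linear features
    beta = np.linalg.solve(H.T @ H + l2 * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy check on a smooth 2-D target
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
pred = elm_predict(elm_fit(X, y), X)
print(float(np.sqrt(np.mean((pred - y) ** 2))))   # small training RMSE
```

Only the readout weights are fitted, which is why ELM training reduces to a single linear solve.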
4) FFNN:
The FFNN model is made of 4 hidden layers of 128, 64, 32, and 16 neurons, respectively. We used the scaled exponential linear unit (SELU) activation function [18]. The model is trained by the Adam optimizer with the mean-squared error (MSE) loss function, the mini-batch size, initial learning rate, and decay being set as hyperparameters. To fight the overfitting of the model, we used several regularization methods, namely an L2 penalty and early stopping.

C. DCP Implementation
The FFNN-based dModel is made of 3 hidden layers of 256, 128, and 64 neurons, respectively (ReLU activation function). The network takes as input the history of the 10 past predictions from the 3 base predictors, making up 30 inputs.

For every train/test split, the network has been trained for 500 epochs with mini-batches of size 2500. To avoid the overfitting of the network to the training set, we used batch normalization layers (applied between the outputs of the neurons and the activation functions) and dropout (rate of 0.5). Furthermore, early stopping has been applied to the root-mean-squared error of the glucose predictions (computed with Equation 1) made on the evaluation set (a third of the training set, not used during training). Finally, the \gamma and \sigma coefficients of the MSE_DB loss function have been optimized through grid search, ending with a value of 0.65 for \gamma.

D. Fusion Models for Comparison
In order to evaluate the performance of the DCP, we implemented two other fusion algorithms: a model we call the Artificial neural network Combination Predictor (ACP), and the Adaptive-Weighted-Average (AWA) fusion algorithm from Wang et al. [11].

1) ACP:
The ACP model is a FFNN with the exact same architecture as the FFNN-based dModel. However, instead of predicting the glucose derivatives, it directly predicts the future glucose values. Therefore, the MSE_DB loss function cannot be used, since it is tailored to the dModel; the traditional MSE loss function has been used in its stead. The purpose of this model is to compare the DCP, a fusion model that predicts the future glucose values through the prediction of the future variations, to a model that directly predicts the future glucose values.
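The derivative-based fusion of Equation 1, and the MSE_DB weighting of Equation 2 that only the dModel can exploit, can be sketched together; gamma = 0.65 follows the grid-searched value reported above, while sigma = 1e-5 is only an illustrative small value:

```python
import numpy as np

def dcp_fuse(y_t, predicted_derivatives):
    """Equation 1: the fused prediction is the last known glucose value
    plus the accumulated PH predicted per-step variations."""
    return y_t + np.sum(predicted_derivatives)

def mse_db(dy_true, dy_pred, gamma=0.65, sigma=1e-5):
    """Equation 2: squared error on the derivatives, scaled by roughly
    1 - gamma when the predicted derivative has the same sign and a larger
    magnitude than the true one (ratio > 1), and by roughly 1 + gamma
    otherwise. gamma follows the text; sigma here is illustrative."""
    ratio = dy_pred / dy_true
    scale = 1.0 + gamma * np.tanh((1.0 - ratio) / sigma)
    return float(np.mean(scale * (dy_true - dy_pred) ** 2))

# the fused value moves away from y_t by exactly the summed derivatives
print(dcp_fuse(120.0, np.array([0.5, 0.5, -0.2])))   # 120.8

# overshooting in the right direction is penalized less than undershooting,
# even though both have the same squared error
print(mse_db(np.array([1.0]), np.array([1.5])) < mse_db(np.array([1.0]), np.array([0.5])))   # True
```

With a small sigma the tanh term saturates, so same-sign overshoots are weighted by roughly 1 - gamma and everything else by roughly 1 + gamma.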
2) AWA:
The idea behind the AWA model is that every base predictor is assigned a weight based on its recent past errors. The weights are dynamically updated with the knowledge of new prediction errors. To strengthen the impact of the most recent errors on the weights, a forgetting factor is used [11]; it has been optimized through grid search.

E. Evaluation Metrics
In this study, we use three complementary metrics to evaluate the models: the Root-Mean-Squared Error of the prediction (RMSE), the Root-Mean-Squared Error of the predicted variations (dRMSE), and the Continuous Glucose-Error Grid Analysis (CG-EGA). Whereas the RMSE and the dRMSE measure the accuracy of the predictions, the CG-EGA measures the clinical acceptability of the models.
1) RMSE:
The RMSE (see Equation 3, with y and \hat{y} being, respectively, the true and predicted glucose values) is a standard metric for evaluating regression models, and in particular glucose predictive models. It provides a measure of the average accuracy of the glucose predictions.

RMSE(y, \hat{y}) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}    (3)
2) dRMSE:
The dRMSE is simply the RMSE applied to the derivatives of the glucose predictions instead of the glucose predictions themselves. It gives a measure of the accuracy of the variations of the predictions compared to the true variations.

dRMSE(y, \hat{y}) = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n-1} (\Delta y_i - \Delta \hat{y}_i)^2}    (4)
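Both metrics are straightforward to compute; a small sketch with toy values:

```python
import numpy as np

def rmse(y, y_hat):
    """Equation 3: average accuracy of the glucose predictions."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def drmse(y, y_hat):
    """Equation 4: RMSE of the first differences, i.e., accuracy of the
    predicted variations rather than of the predictions themselves."""
    return rmse(np.diff(y), np.diff(y_hat))

# toy check: constant +/-2 mg/dL point errors, but badly predicted variations
y = [100.0, 110.0, 125.0, 120.0]
y_hat = [102.0, 108.0, 127.0, 118.0]
print(rmse(y, y_hat))    # 2.0
print(drmse(y, y_hat))   # 4.0
```

The toy values show why the two metrics are complementary: the point errors are small and constant, yet the alternating signs double the error on the variations.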
3) CG-EGA:
The CG-EGA is the most used metric for evaluating glucose predictive models, as it measures the clinical acceptability of the predictions [5]. In particular, it assesses, for every prediction, depending on the glycemia region (hypoglycemia, euglycemia, hyperglycemia), the dangerousness of making such a prediction. This is very useful in the context of diabetes management, since prediction errors can threaten the life of the patient. (The euglycemia region is the region between hypoglycemia and hyperglycemia, i.e., between 70 mg/dL and 180 mg/dL.)

Technically, the CG-EGA is made of two evaluation grids: the Point-Error Grid Analysis (P-EGA) and the Rate-Error Grid Analysis (R-EGA). While the P-EGA determines the clinical acceptability of the predictions themselves, the R-EGA focuses on the rates of change (the difference between two consecutive glucose predictions). The clinical acceptability is described by grades, from A to E, for both grids (Figure 4 provides the reader with a graphical example of the two grids). The overall clinical acceptability of the prediction is assessed by combining the two grids and classifying the prediction as either an accurate prediction (AP), a benign error (BE), or an erroneous prediction (EP). If a prediction and its associated derivative are both classified into the A or B categories, the prediction is an AP.
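The grade-combination step can be sketched as follows; only the AP rule is fully specified above, so the BE/EP split below is a deliberately simplified placeholder rather than the actual region-dependent CG-EGA matrices:

```python
def cg_ega_label(p_grade, r_grade):
    """Combine a P-EGA grade and an R-EGA grade (both 'A'..'E') into a
    CG-EGA category. Only the AP rule (both grades A or B) is fully
    specified in the text; the non-AP branches are a simplified
    placeholder for the full, region-dependent CG-EGA matrices."""
    if p_grade in "AB" and r_grade in "AB":
        return "AP"   # accurate prediction
    if p_grade in "ABC" and r_grade in "ABC":
        return "BE"   # benign error (simplified assumption)
    return "EP"       # erroneous prediction

print(cg_ega_label("A", "B"))   # AP
print(cg_ega_label("B", "D"))   # EP
```

In the real CG-EGA, the BE/EP boundary also depends on whether the true value lies in hypoglycemia, euglycemia, or hyperglycemia.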
4) Complementarity of the Chosen Metrics:
Whereas the RMSE and the P-EGA evaluate the accuracy of the glucose predictions, the dRMSE and the R-EGA focus on the predicted glucose rates of change. We can then say that, generally, improving the RMSE improves the P-EGA, and improving the dRMSE improves the R-EGA.

The CG-EGA is the most important metric, as it determines whether a predictive model is safe to use by diabetic people. While being very self-explanatory (in its simplified representation), it is a very complex metric. This implies that humans can have a hard time comparing models solely using the CG-EGA, as the models might have different strengths and weaknesses. On the other hand, the RMSE and the dRMSE, being single-value metrics, are very simple. This makes the comparison between models fast and straightforward.
F. Results
The results of our study are reported in Table I. In this table, the performances of the models are given in terms of RMSE (in mg/dL), dRMSE (in mg/dL/s), and CG-EGA (in percentage of predictions falling into the different categories).

The difference between DCP 1 and DCP 2 lies in the loss function they use: DCP 1 uses the traditional MSE and DCP 2 uses the MSE_DB.

IV. DISCUSSION
A. DCP Results Analysis
First, with higher AP rates, all the fusion models show a better clinical acceptability than the base predictors. This shows the overall usefulness of model fusion algorithms in general. With the EP rates remaining stable, the improvements come mainly from a shift of BE to AP. Among the fusion models, the improvements are much more significant for the DCP models, with 10.53% (DCP 1) and 11.54% (DCP 2) more AP, demonstrating the superiority of the proposed fusion algorithm.

This improvement is made possible by the increased accuracy of the predicted variations, represented by the dRMSE. Nonetheless, we can notice a loss in the prediction accuracy (RMSE). This is an acceptable trade-off, since the prediction accuracy remains good enough not to threaten the life of the patient (the clinical acceptability being high).

TABLE I: Performances of the models, with mean ± standard deviation across the children population, in terms of RMSE, dRMSE, and CG-EGA (accurate predictions, benign errors, erroneous predictions). Mean RMSE (mg/dL): GP 48.47, ELM 42.30, FFNN 37.79, ACP 36.14, AWA 37.51, DCP 1 52.68, DCP 2 49.37.

Figures 3 and 4, taken from a particular day of one of the patients, illustrate those dynamics. On the one hand, the P-EGA (representing the accuracy of the predictions) of the DCP 2 model is slightly worse than that of the ACP model. On the other hand, the R-EGA of the ACP model is considerably worse than that of the DCP 2 model, the predicted variations of the ACP model being more spread out around the optimal diagonal line.

B. Influence of the MSE_DB Loss Function

Finally, DCP 2, with a lower RMSE and a higher clinical acceptability than DCP 1, shows the importance of using the MSE_DB loss function. The right side of Figure 3 depicts its influence on the training of the dModel: the predicted values are closer to the true values because the model reacts faster to the changes in glucose variations.

V. CONCLUSION
In this work, we proposed a new fusion algorithm, the Derivatives Combination Predictor, which, by predicting future glucose values through the prediction of their variations, improves the clinical acceptability of long-term glucose predictions.

To enhance the accuracy of the predictions, we introduced a new loss function, the MSE_DB, specifically designed for the DCP algorithm.

In conclusion, the DCP fusion algorithm seems promising at addressing the problem of long-term glucose prediction. In future studies, we aim to investigate the use of other models inside the dModel module of the DCP, as well as the use of the algorithm itself inside an end-to-end glucose predictive model.

ACKNOWLEDGMENT
This work is supported by the "IDI 2017" project funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02.

REFERENCES

[1] World Health Organization, Global Report on Diabetes. World Health Organization, 2016.
[2] A. F. Ólafsdóttir, S. Attvall, U. Sandgren, S. Dahlqvist, A. Pivodic, S. Skrtic, E. Theodorsson, and M. Lind, "A clinical trial of the accuracy and treatment experience of the flash glucose monitor FreeStyle Libre in adults with type 1 diabetes," Diabetes Technology & Therapeutics, vol. 19, no. 3, pp. 164-172, 2017.
[3] K. Rose, M. Koenig, and F. Wiesbauer, "Evaluating success for behavioral change in diabetes via mHealth and gamification: mySugr's keys to retention and patient engagement," Diabetes Technology & Therapeutics, vol. 15, p. A114, 2013.
[4] S. K. Garg, S. A. Weinzimer, W. V. Tamborlane, B. A. Buckingham, B. W. Bode, T. S. Bailey, R. L. Brazg, J. Ilany, R. H. Slover, S. M. Anderson et al., "Glucose outcomes with the in-home use of a hybrid closed-loop insulin delivery system in adolescents and adults with type 1 diabetes," Diabetes Technology & Therapeutics, vol. 19, no. 3, pp. 155-163, 2017.
[5] S. Oviedo, J. Vehí, R. Calm, and J. Armengol, "A review of personalized blood glucose prediction strategies for T1DM patients," International Journal for Numerical Methods in Biomedical Engineering, vol. 33, no. 6, p. e2833, 2017.
[6] Q. Sun, M. V. Jankovic, L. Bally, and S. G. Mougiakakou, "Predicting blood glucose with an LSTM and bi-LSTM based deep neural network." IEEE, 2018, pp. 1-5.
[7] M. De Paula, L. O. Ávila, and E. C. Martínez, "Controlling blood glucose variability under uncertainty using reinforcement learning and Gaussian processes," Applied Soft Computing, vol. 35, pp. 310-332, 2015.
[8] E. I. Georga, V. C. Protopappas, D. Polyzos, and D. I. Fotiadis, "Online prediction of glucose concentration in type 1 diabetes using extreme learning machines," in Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE. IEEE, 2015, pp. 3262-3265.
[9] J. B. Ali, T. Hamdi, N. Fnaiech, V. Di Costanzo, F. Fnaiech, and J.-M. Ginoux, "Continuous blood glucose level prediction of type 1 diabetes based on artificial neural network," Biocybernetics and Biomedical Engineering, vol. 38, no. 4, pp. 828-840, 2018.
[10] M. De Bois, M. El Yacoubi, and M. Ammi, "Study of short-term personalized glucose predictive models on type-1 diabetic children," accepted and presented at IJCNN 2019.
[11] Y. Wang, X. Wu, and X. Mo, "A novel adaptive-weighted-average framework for blood glucose prediction," Diabetes Technology & Therapeutics, vol. 15, no. 10, pp. 792-801, 2013.
[12] E. Daskalaki, K. Nørgaard, T. Züger, A. Prountzou, P. Diem, and S. Mougiakakou, "An early warning system for hypoglycemic/hyperglycemic events based on fusion of adaptive prediction models," Journal of Diabetes Science and Technology, vol. 7, no. 3, pp. 689-698, 2013.
[13] R. H. Botwey, E. Daskalaki, P. Diem, and S. G. Mougiakakou, "Multi-model data fusion to improve an early warning system for hypo-/hyperglycemic events," in Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE. IEEE, 2014, pp. 4843-4846.
[14] M. V. Jankovic, S. Mosimann, L. Bally, C. Stettler, and S. Mougiakakou, "Deep prediction model: The case of online adaptive prediction of subcutaneous glucose," in Neural Networks and Applications (NEUREL), 2016 13th Symposium on. IEEE, 2016, pp. 1-5.
Fig. 3: Daily glucose predictions (glucose concentration in mg/dL) of the fusion models (AWA, ACP, DCP) against ground truth for a child during a specific day.

[15] X. Yu, K. Turksoy, M. Rashid, J. Feng, N. Hobbs, I. Hajizadeh, S. Samadi, M. Sevil, C. Lazaro, Z. Maloney et al., "Model-fusion-based online glucose concentration predictions in people with type 1 diabetes,"
Control Engineering Practice, vol. 71, pp. 129-141, 2018.
[16] C. D. Man, F. Micheletto, D. Lv, M. Breton, B. Kovatchev, and C. Cobelli, "The UVA/Padova type 1 diabetes simulator: New features," Journal of Diabetes Science and Technology, vol. 8, no. 1, pp. 26-34, 2014.
[17] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: Theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.
[18] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, "Self-normalizing neural networks," in Advances in Neural Information Processing Systems, 2017, pp. 971-980.
Fig. 4: CG-EGA of (a) the ACP and (b) the DCP 2 models: Point-Error Grid Analysis (true vs. predicted glucose value, in mg/dL) and Rate-Error Grid Analysis (true vs. predicted glucose rate of change, in mg/dL/min), with predictions graded from A to E and classified as AP, BE, or EP.