Probabilistic Prediction of Geomagnetic Storms and the Kp Index
Submitted to the Journal of Space Weather and Space Climate
© The author(s) under the Creative Commons Attribution 4.0 International License (CC BY 4.0)
S. Chakraborty and S. K. Morley
Bradley Department of Electrical & Computer Engineering, Virginia Tech, VA, USA (e-mail: [email protected])
Space Science and Applications (ISR-1), Los Alamos National Laboratory, NM, USA (e-mail: [email protected])
ABSTRACT
Geomagnetic activity is often described using summary indices to summarize the likelihood of space weather impacts, as well as when parameterizing space weather models. The geomagnetic index Kp, in particular, is widely used for these purposes. Current state-of-the-art forecast models provide deterministic Kp predictions using a variety of methods – including empirically-derived functions, physics-based models, and neural networks – but do not provide uncertainty estimates associated with the forecast. This paper presents a methodology to generate a 3-hour-ahead Kp prediction with uncertainty bounds, and from this to provide a probabilistic geomagnetic storm forecast. Specifically, we use a two-layered architecture to separately predict storm (Kp ≥ 5−) and non-storm cases. As solar wind-driven models are limited in their ability to predict the onset of transient-driven activity, we also introduce a model variant using solar X-ray flux to assess whether simple models including proxies for solar activity can improve the predictions of geomagnetic storm activity with lead times longer than the L1-to-Earth propagation time. By comparing the performance of these models we show that including operationally-available information about solar irradiance enhances the ability of predictive models to capture the onset of geomagnetic storms, and that this can be achieved while also enabling probabilistic forecasts.

Key words.
Geomagnetic Storms, Kp Forecasting, Deep Learning, LSTM, Gaussian Process.
1. Introduction
Modern electrical systems and equipment on the Earth, such as navigation, communication, satellite, and power grid systems, can be affected by space weather (e.g., Choi et al., 2011; Qiu et al., 2015; Morley, 2019). The societal impact of space weather is increasing (Schrijver et al., 2015; Eastwood et al., 2017) and operational centers provide a range of predictions for end-users (Bingham et al., 2019), including geomagnetic storm predictions based on the Kp index (Sharpe and Murray, 2017).

Kp is a planetary 3-hour averaged range index that describes the intensity of the magnetic disturbance (Bartels, 1949). Kp runs from 0 (very quiet) to 9 (very disturbed) with 28 discrete values denoted 0, 0+, 1−, 1, 1+, ..., 9−, 9. In addition to being a forecast product in its own right, the Kp index is widely used as an input to other magnetospheric models (e.g., Carbary, 2005) including some aimed at operational use (O'Brien, 2009; Horne et al., 2013).

Storm Level | Kp Range
G0 | Kp < 5−
G1 | 5− ≤ Kp < 6−
G2 | 6− ≤ Kp < 7−
G3 | 7− ≤ Kp < 8−
G4 | 8− ≤ Kp < 9−
G5 | Kp ≥ 9−

Table 1.
Table showing different categories of geomagnetic storm and associated Kp levels. The categorization is based on the intensity of the geomagnetic storm, following the NOAA SWPC scales.

There are three main categories of models for Kp prediction: coupled physics-based models such as the Space Weather Modeling Framework (e.g., Tóth et al., 2005; Haiducek et al., 2017); "traditional" empirical models (Newell et al., 2007; Luo et al., 2017); and machine-learning models (Costello, 1998; Boberg et al., 2000; Wing et al., 2005; Wintoft et al., 2017; Tan et al., 2018; Sexton et al., 2019).

Longer lead-time predictions are typically the domain of empirical (e.g., Luo et al., 2017) and machine-learning models (e.g., Sexton et al., 2019). Since the first neural network-based Kp forecasting model proposed by Costello (1998), many subsequent forecast models (Boberg et al., 2000; Wing et al., 2005; Tan et al., 2018) have implemented different variants of the neural network to improve forecast accuracy. To date, none of these predictions have provided a probabilistic prediction, and very few have attempted to characterize the uncertainty associated with the predicted Kp value (see, e.g., Wintoft et al., 2017). With the development of new machine-learning techniques, recent Kp and storm forecast models achieve much higher accuracy, but few have separately examined model performance under different ranges of geomagnetic storm conditions (see, e.g., Zhelavskaya et al., 2019). Recent work on Kp prediction by Shprits et al. (2019) highlighted the inherent limitation of solar wind-driven models for long lead-time predictions, and noted that "further improvements in long-term modeling should include ... empirical modeling driven by observations of the Sun".

To assess the viability of moving beyond solar wind-driven models using operationally-available data, we also investigate the inclusion of solar X-ray flux data as a model parameter.
Solar X-ray flux was chosen as recent studies have shown that these data can be used to forecast solar flare activity (Winter and Balasubramaniam, 2015) as well as Solar Energetic Particle (SEP) events (Núñez, 2018). While the majority of large geomagnetic storms are caused by CMEs (e.g., Gonzalez et al., 1999), it has been shown that CMEs are correlated with solar flare activity (Zhou et al., 2003; Kay et al., 2003; Wu et al., 2008; Lippiello et al., 2008). Further, solar X-ray flux is operationally available from the NOAA GOES satellites. We therefore include GOES X-ray data in a variant of our predictive model to determine whether its use, as a proxy of solar magnetic activity, allows us to better capture the (often CME-driven) geomagnetic storms.

The primary aim of this paper is to generate a Kp prediction with associated uncertainty bounds and exploit it to provide a probabilistic geomagnetic storm forecast. The secondary objective is to test whether a simple, operationally-available solar irradiance data set can be included in this framework to better capture the effects of solar transients. We use a machine-learning method to forecast Kp, including the associated uncertainty, then exploit the uncertainty bounds in Kp to provide a probabilistic forecast. The paper is organized as follows: Section 2 explains the data analysis and the model development; Section 3 describes the results, shows how we develop a probabilistic storm forecast, and assesses the model performance; finally, we discuss the results in the context of similar work.
2. Data Analysis & Model Architecture
Here we describe the data sets and the model architecture used in this study. Specifically, we present the basic characteristics of the data sets and a brief justification of our choices of model features. In addition, we provide a short introduction to the technical terms and metrics used to evaluate the model performance. Finally, we describe the construction of our predictive models and note some strengths and weaknesses of our approach.
The solar wind energy and magnetospheric coupling are known to be well-described by the solar wind parameters and the state of the magnetosphere (Dungey, 1961; Baker et al., 1981). However, many of the solar wind parameters are correlated with each other and might carry redundant solar wind structure information (e.g., Hundhausen, 1970; Wing et al., 2016). There is a long history of using plasma moments (number density and velocity) and the interplanetary magnetic field to describe the coupling of energy from the solar wind into the magnetosphere (e.g., Baker et al., 1981; Borovsky et al., 1998; Xu and Borovsky, 2015a). More recently-developed coupling functions using solar wind parameters have been shown to have better correlations with geomagnetic indices (e.g., Newell et al., 2007) and clear physical origins (e.g., Borovsky, 2013). It is also clear that including a measure of the present state of the magnetosphere helps predict the evolution of global indices like Kp (Borovsky, 2014; Luo et al., 2017).

Based, in part, on these considerations we use just nine solar wind parameters and the historical Kp as model features (input parameters). The input parameters are chosen based on previous studies (Newell et al., 2007; Xu and Borovsky, 2015b,a). These model features are listed in Table 2 along with the notation we use in this paper and any transformations applied prior to model training. For the solar wind data, we use 20 years of 1-minute resolution values, starting from 1995, obtained from NASA's OMNIWeb service ( https://omniweb.gsfc.nasa.gov/ow_min.html ). The 3-hourly Kp index is obtained from the GFZ German Research Centre for Geosciences at Potsdam.

As solar wind structures are spatial in nature, the measured parameters are auto-correlated. Solar wind data from L1 monitors are point measurements, and hence spatial structure along the Sun-Earth line manifests as a temporal correlation.
Hence, we performed an auto-correlation analysis of all the solar wind parameters, as presented in Figure 1. From the figure, we can conclude that, during solar minimum, most of the solar wind parameters are highly autocorrelated for a longer duration, while during solar maximum the correlation coefficient drops within a few hours. This suggests more transients in the solar wind during solar maximum than solar minimum, consistent with observations (Richardson and Cane, 2012). All parameters selected as model features display auto-correlation, and most parameters decorrelate within 3 hours, with solar wind speed being a notable exception. Indeed, at solar maximum, all parameters except solar wind speed decorrelate in less than three hours. At solar minimum, the auto-correlation for the majority of parameters falls below 0.5 within three hours. We therefore used 3-hourly averaged solar wind data to train our models, which has the added benefit of placing the input data on the same cadence as the predicted output. Note that the goal of this linear analysis (autocorrelation) is to describe the redundant information given in subsequent samples of individual solar wind parameters, rather than identifying time lags in the magnetospheric response to solar wind driving. In addition, we acknowledge that including nonlinear correlations may provide more robust estimates of the correlation scales, and could exhibit different behaviors (e.g., Johnson et al., 2018; Wing et al., 2016).

As the Kp index is quasi-logarithmic and essentially categorical (Mayaud, 1980), we converted the reported Kp values to decimal numbers using k ± 1/3, following Tan et al. (2018).

Features | Symbol | Units | Transformation | Model
Anti-sunward component of IMF | Bx | nT | Box-Cox | All
Transverse component of IMF | BT = √(By² + Bz²) | nT | Box-Cox | All
IMF clock angle | θc = tan⁻¹(By/Bz) | radian | Box-Cox | All
Bulk velocity | v = √(vx² + vy² + vz²) | km s⁻¹ | Box-Cox | All
Number density | n | m⁻³ | Box-Cox | All
Electron temperature | T | K | Box-Cox | All
Dynamic pressure | Pdyn | N m⁻² | Box-Cox | All
Mach number | Ma | | Box-Cox | All
Plasma beta | βp | | Erf | All
Historical Kp | Kp(t−3) | | None | LSTMc, dGPRq+s
X-ray flux ratio | R | | None | LSTMc, dGPRs

Table 2.
Table showing the input features of the model, the transformations used (if any), and which models use each feature. The given transformation is only used prior to fitting the Gaussian process model. Box-Cox → B(x) = sgn(x) × log_e|x| and Erf → erf(x) = sgn(x) × erf⁻¹(a|x|^b).

In this study, we will also test the effectiveness of including a previously-unused set of solar data for Kp prediction. That is, we will introduce our modeling approach using a standard construction with only upstream solar wind measurements to drive the model. We will then train a second model that differs only in the inclusion of X-ray flux in the model features.

As described above, the idea behind using the solar X-ray data is that the upstream solar wind carries little to no information about transients that are moving towards Earth. Advanced notice of transients from in-situ L1 measurements is therefore limited to periods typically less than 1 hour. By including solar data, even a coarse measure such as the X-ray flux, we aim to demonstrate that additional information about the likelihood of transients can be included in the model and improve forecasts with a lead time longer than the L1-to-Earth propagation time. In other words, we treat the X-ray flux data as a proxy for the magnetic complexity of the Sun and anticipate that including these data will allow the model to predict the arrival of CMEs earlier than a model driven purely by solar wind measurements. We obtain the GOES X-ray flux data from NOAA's National Centers for Environmental Information ( https://satdat.ngdc.noaa.gov/sem/goes/data/ ).

The GOES X-ray sensors have two channels that measure solar irradiance in two wavebands, namely, hard (0.05-0.4 nm) and soft (0.1-0.8 nm) X-rays. In this study, we followed Winter and Balasubramaniam (2015) in using a background term and a flux ratio derived from the two GOES wavebands.
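As an illustration of the k ± 1/3 convention noted above, the 28-level Kp notation can be mapped to and from decimal values as follows. This is a minimal sketch; the function names are ours, not from the paper.

```python
# Convert Kp notation (e.g., "5-", "3", "7+") to decimal values using the
# k +/- 1/3 convention: "5-" -> 4.67, "5" -> 5.0, "5+" -> 5.33.
def kp_to_decimal(kp: str) -> float:
    base = int(kp[0])
    if kp.endswith("-"):
        return round(base - 1 / 3, 2)
    if kp.endswith("+"):
        return round(base + 1 / 3, 2)
    return float(base)

def decimal_to_kp(value: float) -> str:
    """Inverse mapping: snap a decimal to the nearest of the 28 Kp levels."""
    thirds = round(value * 3)           # Kp lives on a 1/3-spaced grid
    thirds = max(0, min(27, thirds))    # clamp to the 0 ... 9 range
    base, frac = divmod(thirds, 3)
    return [f"{base}", f"{base}+", f"{base + 1}-"][frac]
```

For example, the storm threshold Kp = 5− corresponds to the decimal value 4.67.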
The X-ray background (B) has been computed as the smoothed minimum flux in a 24-hour window preceding each 1-minute GOES soft X-ray flux observation. In a recent study, the X-ray background parameter was found to describe the solar activity cycle better than the daily F10.7 parameter (Winter and Balasubramaniam, 2014). The X-ray flux ratio (R) has been calculated by taking the ratio of hard X-ray flux over soft X-ray flux, and provides a measure of the temperature of the flare emission (Garcia, 1994). Kay et al. (2003) showed that flares associated with CMEs tended to have lower temperatures, at a given intensity level, than flares without CMEs. Thus both the soft X-ray flux and the flux ratio, R, are important for determining the likelihood of an eruptive event. Further, Michalek (2009) showed a good correlation between the energy of the CME and the peak of the X-ray flare. Finally, recent studies showed that the X-ray flux ratio is a good predictor of extreme solar events (Núñez, 2018; Kahler and Ling, 2018).

As X-rays propagate from the Sun to the Earth at the speed of light, there will, of course, be a time lag between the X-ray observation and the arrival at Earth of any related geomagnetic activity due to associated coronal mass ejections. In this preliminary work to demonstrate the utility of including solar X-ray flux data in a Kp prediction model, we assume a constant time lag that we apply to the derived X-ray products B and R. Figure 2 presents a time-lagged cross-correlation analysis of B and R with other solar wind parameters to highlight any time lags between these two data sets. The correlation analysis shows that the X-ray background (B) parameter is significantly correlated with many solar wind parameters at lags around 48 hours. The correlations between the X-ray flux ratio (R) and the solar wind parameters are smaller and less consistent across solar wind parameters.
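A fixed-lag correlation analysis like the one summarized in Figure 2 can be sketched as below. This is an illustrative sketch, not the authors' code; it assumes the two series are aligned, evenly sampled numpy arrays, with the lag expressed in samples.

```python
import numpy as np

def lagged_correlation(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag), lag in samples.

    A positive lag asks: does x now correlate with y `lag` steps later?
    (For 1-minute data, a 48-hour lag would be lag = 48 * 60.)
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])
```

Scanning `lag` over a range and plotting the result reproduces the shape of a time-lagged cross-correlation curve, whose peak suggests the propagation delay between the two quantities.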
A lack of clear, or strong, linear correlations with solar wind parameters at a given fixed lag does not necessarily indicate that the parameter R is uninformative. Better lag estimates could be obtained using nonlinear analysis (e.g., Johnson et al., 2018; Wing et al., 2016); however, the models used in this study can extract nonlinear relationships. We therefore expect nonlinear relationships in the dataset to be captured by the proposed models.

Transit times of coronal mass ejections can range from less than 20 hours to more than 80 hours (e.g., Schwenn et al., 2005); however, faster CMEs tend to be more geoeffective (Gonzalez et al., 1999; Srivastava and Venkatakrishnan, 2002). Hence we apply a constant time lag of 48 hours to the X-ray flux derived parameters, consistent with a typical travel time from the Sun to Earth of geomagnetic-storm-associated interplanetary CMEs (Srivastava and Venkatakrishnan, 2004). We note that in future work it may be useful to explore the effects of choosing a different fixed lag, or even a variable lag.

Terms & Metrics for Model Evaluation
In this subsection, we define the technical terms and the metrics that we use in the later parts of this study to evaluate and compare the models' performance. Good overviews of model performance metrics and model validation methodologies targeted at space weather have been given by Morley et al. (2018a), Liemohn et al. (2018), and Camporeale (2019).

For binary event analysis, we define correct "yes" events as True Positives (denoted as a). Similarly, we call incorrect "yes" events False Positives (b), incorrect "no" events False Negatives (c), and correct "no" events True Negatives (d).

ROC curve = a graphical diagnostic illustration of a binary classifier.
AUC = area under the ROC curve.
Probability of Detection (PoD) = a / (a + c)
Probability of False Detection (PoFD) = b / (b + d)
False Alarm Ratio (FAR) = b / (a + b)
Bias = (a + b) / (a + c)
Critical Success Index (CSI) = a / (a + b + c)
Matthews correlation coefficient (MCC) = (a × d − b × c) / √((a + b)(a + c)(b + d)(c + d))
Heidke Skill Score (HSS) = 2 × (ad − bc) / [(a + c)(c + d) + (a + b)(b + d)]

Note that a perfect forecast will have HSS, MCC, PoD, Bias, CSI, and R equal to 1 and FAR equal to 0. Unskilled or random forecasts will have HSS, MCC, PoD, CSI, and R of 0, and FAR of 1.

For assessing numerical predictions of the Kp index we use:

Root mean square error (RMSE) (ε) = √[(1/n) Σᵢ (xᵢ − x̂ᵢ)²]
Mean Absolute Error (MAE) (|δ̄|) = (1/n) Σᵢ |xᵢ − x̂ᵢ|
Coefficient of Determination (R²) = 1 − Σᵢ (xᵢ − x̂ᵢ)² / Σᵢ (xᵢ − x̄)²

where xᵢ, x̂ᵢ, x̄ and n are the observations, the predicted output, the mean of the observations, and the total number of observations, respectively. A perfect model will have zero RMSE and MAE, and a coefficient of determination of 1.

Figure 3 presents the distribution of 22 years of Kp values, where the log-scaled frequency of occurrence is shown on the vertical axis and the Kp value is shown on the horizontal axis. From the figure, it is evident that most of the events are distributed in [0, 5−), a relatively small number of events are distributed in [5−, 7−), and there are very few extreme events with Kp ≥
7. Following the NOAA G-scale for geomagnetic storms, we choose Kp ≥ 5− as the threshold for "storm-time", marked with a vertical line in Figure 3. Using this division, storm-time comprises approximately one-twentieth of the data set. This number continues to drop rapidly as Kp increases: the ratio of more extreme events (Kp ≥ 7+) to non-storm events is far smaller still. This effect is known as data imbalance (e.g., Estabrooks et al., 2004) and can lead to significant errors in a model fit to the data without accounting for the imbalance (see, e.g., Shprits et al., 2019).

Both oversampling and undersampling techniques have been used to address data imbalance (Estabrooks et al., 2004; Shprits et al., 2019), and both methods help develop models with better predictive skill (Yap et al., 2014). Choosing a single regression model to predict Kp across quiet and disturbed geomagnetic conditions will likely not provide an optimal forecast unless the data imbalance is addressed. Here we take an approach similar to Tan et al. (2018) and Boberg et al. (2000) and minimize the data imbalance by separating the problem into two categories: storm-time and non-storm. This leads to the first step in our model architecture, where we use a classifier to determine quiet or active conditions, and subsequently use a probabilistic regressor to predict the Kp value.

In this study, we have used a deterministic long short-term memory (LSTM) neural network (Hochreiter and Schmidhuber, 1997) as the classifier. An LSTM is a type of recurrent neural network in which a "memory cell" is used to store information between time steps (see Tan et al., 2018, and references therein). LSTM neural networks are well-suited to time series data analysis, but they require continuous data for training; the IMF and solar wind data, however, contain missing values. To handle the missing data we use the interpolation method described in Tan et al.
(2018, see their section 2.1, and references therein). The method used by Tan et al. (2018) is appropriate for relatively small data gaps (up to 12 samples); for larger data gaps we discarded the samples. As with other types of neural networks, an LSTM can be used as either a regressor or a classifier. The layout of our LSTM classifier is shown schematically in Figure 4. Panel 4a presents a schematic of a single LSTM unit, which consists of input, output, and forget gates, and a memory cell (Ct) (see Hochreiter and Schmidhuber, 1997, and references therein). Panel 4b illustrates the overall architecture of the classifier, where the central layer is comprised of LSTM units. Panel 4c shows the implementation as layers in the Keras model. The "N" in the input/output shape of the model blocks is the number of time samples, which can be varied at run-time; as described later in this section we use a 27-day window at 3-hourly cadence, so N = 216. Increasing the hidden layer beyond roughly 100 LSTM units led to only small improvements, so we chose to retain 100 LSTM units in our hidden layer. To help reduce overfitting and increase the generalizability of the model we included dropout regularization. The input shape of the data is therefore (N × 10 × 3), where N, 10, and 3 are the number of data points, the number of input parameters (see Table 2), and the number of time-lagged samples (the input vector at times t, t − 1, and t − 2), respectively. Note that the input shape for one data point can also be 12 × 3, based on the choice of model (µOMNI or µOMNI+; see the following paragraphs for details).

To ensure that the classifier performance can be generalized beyond the training data, we split the available data into two categories: training/validation and testing. For training and validation we used the intervals 1995–2000 and 2011–2014. To mitigate the effects of data imbalance we used a random downsampling strategy to balance the storm-time and no-storm intervals. After downsampling (from 29200 to 5716 points), we split the data into "training" and "validation" sets and train the classifier, where the validation data is used for tuning the model parameters and comprises 20% of the training set. Data from 2001–2010 was reserved for out-of-sample testing of the predictions (26645 points). The rationale behind using the above-mentioned periods for training/validation and testing is to improve model performance and reduce model bias: both the train and test periods contain data from different solar maxima and minima to capture solar cycle dynamics, and testing with out-of-sample data ensures that the model generalizes well to unseen data.

Following Tan et al. (2018) we employ a two-layer architecture, where we use separate regression models to predict Kp under quiet or active geomagnetic conditions. Unlike Tan et al. (2018), we use probabilistic regressors. The model structure is shown in Figure 5. The prediction made by the classifier is used to determine which regressor is selected. As the primary aim of this work is to produce a probabilistic prediction of Kp, we chose regression models that output a distribution rather than a single value. We used semi-parametric Deep Gaussian Process Regression, commonly known as deepGPR, to build the regressors. DeepGPR (dGPR) is a Gaussian Process model with a neural network architecture. A Gaussian Process is a random process, any finite collection of whose values follows a multivariate normal distribution.
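For reference, the single LSTM unit of Figure 4a updates its memory cell Ct through input, forget, and output gates. A minimal numpy sketch of one forward step is shown below; the weights are random placeholders for illustration (not trained values), and the gate-stacking convention is ours.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM forward step. W, U, b stack the four gates
    (input i, forget f, cell candidate g, output o)."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = W @ x + U @ h_prev + b          # shape (4 * hidden,)
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c_prev + i * g              # memory cell update
    h = o * np.tanh(c)                  # hidden state / output
    return h, c

# Example: 10 input features (Table 2); hidden size 4 for illustration
# (the classifier described in the text uses 100 units).
rng = np.random.default_rng(1)
n_in, n_hid = 10, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(3, n_in)):    # three time-lagged inputs: t-2, t-1, t
    h, c = lstm_step(x, h, c, W, U, b)
```

In a full classifier, the final hidden state h would feed a dense sigmoid output layer giving the storm/non-storm decision.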
Specifically, dGPR (Al-Shedivat et al., 2017) is a Gaussian Process Regression (Rasmussen and Williams, 2006) method that uses a deep LSTM network to optimize the hyperparameters of the Gaussian Process kernels. If the classifier predicts a geomagnetic storm then regressor dGPRs is selected; otherwise regressor dGPRq is used. At each time step, the dGPR is retrained using the interval of preceding data (the training window), and thus our regressors are dynamic non-linear models. Dynamic models do not need to assume a constant functional relationship between the model covariates (e.g., solar wind drivers) and the response (Kp). Static models implicitly combine the effects of any potentially time-varying relationships in the error terms, or average over those effects in the estimation of model coefficients (see, e.g., Osthus et al., 2014). By using a relatively short, local training window, the data is used more efficiently and computational complexity is reduced. For training and validation of the dGPR-based dynamic model, including training window selection, we used the mean squared error (MSE) as our loss function.

Optimizing the hyperparameters of a dGPR is much easier when working with input parameters that are normally distributed. To ensure better behavior of the Gaussian process model we transformed all the substantially non-normally distributed input parameters listed in Table 2 using either a Box-Cox transform or the Complementary Error Function. After the transformation, we check the skewness and kurtosis of the transformed variable to validate the transformation. We found the Box-Cox transformation worked well for all IMF and solar wind parameters except the plasma beta, which we transform using the Complementary Error Function.

The quiet-time and storm-time regressors each use different training windows, the lengths of which were selected to minimize the training error using the mean squared error (MSE) in Kp as the loss function.
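The Gaussian-process layer underlying the dGPR can be illustrated with standard GP regression (Rasmussen and Williams, 2006): given a kernel, the predictive mean and variance follow in closed form. The sketch below uses a fixed RBF kernel with hand-set hyperparameters, rather than the deep-network-optimized kernels of dGPR, so it illustrates only the probabilistic-output step.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)                         # stable inversion
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                                 # predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)       # predictive variance
    return mean, var
```

The predicted variance is what a probabilistic Kp regressor exploits: it widens where training data are sparse or noisy, directly yielding the uncertainty bounds on the forecast.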
Figure 6 shows one example of how the MSE varies with the training window length (in days) for predictions over two months during 1995. It can be clearly seen that a training window of ≈ 27 days is optimal at this time, as this captures recurrent structure in the solar wind such as corotating interaction regions (Richardson and Cane, 2012). While the deep architecture helps to capture the nonlinear trends in the data to provide better accuracy, the Gaussian process mappings are used to provide the error distribution about a mean predicted Kp. The two dGPR regressors differ in the length of the training window used for forecasting: the dGPR module dedicated to non-storm, or quiet, conditions has a 27-day training window, whereas the dGPR module for storm conditions uses a 14-day training window.

One of the difficulties in predicting the "events" – i.e., the storm-time Kp values – is that these are typically driven by solar wind transients, which include interplanetary CMEs and corotating interaction regions (CIRs) (see, e.g., Kilpua et al., 2009; Zhang et al., 2018), with the largest storms driven by CMEs (Borovsky and Denton, 2006). The in-situ solar wind measurements from an L1 monitor do not convey the information required to predict the occurrence of these transients for a 3-hour-ahead prediction of Kp, or for longer prediction horizons. For this reason, we perform a preliminary investigation in which we include information that may encode the likelihood of CME eruption. Following Winter and Balasubramaniam (2014) we use X-ray flux data from the NOAA GOES platforms as a measure of possible solar activity.

To test whether the inclusion of a proxy for solar activity improves our ability to predict storm-time Kp, we constructed two different prediction systems. The first was trained only on OMNI solar wind parameters (µOMNI). The second added the X-ray background (B) and the flux ratio (R) as extra model parameters (µOMNI+). When using the X-ray data we add B and R as model features to the LSTM classifier as well as the storm-time regressor. Note that we do not use the X-ray data for the quiet-time regressor.
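The model comparisons that follow rely on the contingency-table and regression metrics defined in the subsection on terms and metrics. As a compact reference, they can be computed as below (a sketch; the variable names are ours):

```python
import math

def binary_metrics(a, b, c, d):
    """Skill scores from a contingency table: a = hits (true positives),
    b = false alarms, c = misses, d = correct negatives."""
    return {
        "PoD":  a / (a + c),                       # probability of detection
        "PoFD": b / (b + d),                       # probability of false detection
        "FAR":  b / (a + b),                       # false alarm ratio
        "Bias": (a + b) / (a + c),
        "CSI":  a / (a + b + c),                   # critical success index
        "MCC":  (a * d - b * c)
                / math.sqrt((a + b) * (a + c) * (b + d) * (c + d)),
        "HSS":  2 * (a * d - b * c)
                / ((a + c) * (c + d) + (a + b) * (b + d)),
    }

def regression_metrics(obs, pred):
    """RMSE, MAE, and coefficient of determination for numeric predictions."""
    n = len(obs)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    mean = sum(obs) / n
    r2 = 1 - sum((o - p) ** 2 for o, p in zip(obs, pred)) \
           / sum((o - mean) ** 2 for o in obs)
    return rmse, mae, r2
```

As a sanity check, a perfect forecast (no false alarms or misses) gives PoD, CSI, MCC, and HSS of 1 and FAR of 0, matching the definitions above.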
Both models are validated and evaluated against 10 years of Kp observations (2001–2010), in addition to a specific storm-time validation using the 38 intervals listed in Table 3.

Table 3.
List of Storm Events
Start date Start time End date End time Max. K p
19 March 2001 1500 21 March 2001 2300 7 +
31 March 2001 400 1 April 2001 2100 9 -
18 April 2001 100 18 April 2001 1300 7 +
22 April 2001 200 23 April 2001 1500 6 +
17 August 2001 1600 18 August 2001 1600 7
30 September 2001 2300 2 October 2001 0 6 -
21 October 2001 1700 24 October 2001 1100 8 -
28 October 2001 300 29 October 2001 2200 7 -
23 March 2002 1400 25 March 2002 500 6
17 April 2002 1100 19 April 2002 200 7 +
19 April 2002 900 21 April 2002 600 7 -
11 May 2002 1000 12 May 2002 1600 7 -
23 May 2002 1200 24 May 2002 2300 8 +
20 November 2002 1600 22 November 2002 600 6
29 May 2003 2000 30 May 2003 1000 5 +
17 June 2003 1900 19 June 2003 300 6
11 July 2003 1500 12 July 2003 1600 6
17 August 2003 1800 19 August 2003 1100 7 +
20 November 2003 1200 22 November 2003 0 9 -
22 January 2004 300 24 January 2004 0 7
11 February 2004 1000 12 February 2004 0 6 +
22 July 2004 2000 23 July 2004 2000 7
24 July 2004 2100 26 July 2004 1700 6
26 July 2004 2200 30 July 2004 500 7 +
30 August 2004 500 31 August 2004 2100 7
11 November 2004 2200 13 November 2004 1300 5 +
21 January 2005 1800 23 January 2005 500 8
7 May 2005 2000 9 May 2005 1000 8 +
29 May 2005 2200 31 May 2005 800 8 -
12 June 2005 1700 13 June 2005 1900 7 +
31 August 2005 1200 1 September 2005 1200 7
13 April 2006 2000 14 April 2006 2300 7
14 December 2006 2100 16 December 2006 300 8 -
3. Results
In this section, we present model forecasts and a quantitative comparison of predicted Kp for the models with and without the GOES X-ray data. We further describe a simple method to exploit the uncertainty bounds of the predicted Kp to provide a probabilistic geomagnetic storm prediction. Finally, we analyze the performance of the probabilistic Kp prediction models.

Kp Forecast Models
As the first step, we present 3-hour-ahead predicted Kp during two months of 2004. Panels (a) and (b) of Figure 7 show predicted Kp with a 95% confidence interval using models µOMNI and µOMNI+, respectively. The horizontal dashed black line shows the storm versus non-storm demarcation. The root mean square error (ε) and mean absolute error (|δ̄|) for this 2-month interval are given in each panel as annotations. In the figure, we have discretized the Kp values and the 95% confidence interval bounds by rounding them to the nearest valid Kp values (see section 4.2 of Morley et al., 2018b). We have chosen this time interval as an example to showcase the ability of the model to capture the dynamics of both storm-time and quiet-time Kp. Examining Figure 7, it is visually apparent that both models capture the change in Kp during the transitions into, and out of, storm-time. The error metrics given in each panel suggest that models µOMNI and µOMNI+ are comparable in their performance. However, a more detailed analysis is required to allow us to draw firm conclusions and assess the impact of including the X-ray flux data. Specifically, as the intent of including the X-ray flux is to better capture storm-time transients, we need validation methods that allow us to determine the performance as a function of activity level.

We have conducted a thorough head-to-head test of the two Kp forecast models, µOMNI and µOMNI+, using predictions across our validation data set (from January 2001 through December 2010). We also compare the model predictions for the 38 storm-time intervals (listed in Table 3). Summary statistics for the different models are presented in Table 4. When comparing the models across the full validation set, the error metrics are nearly identical between the model variants. The RMSE, MAE and correlation coefficient for both of our models show similar performance to the models of Boberg et al. (2000) and Bala and Reiff (2012).
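The probabilistic storm forecast and the discretization to valid Kp values described above can be sketched as follows, assuming a Gaussian predictive distribution N(µ, σ²) for Kp. The Gaussian form and the 1.96σ interval are our illustrative assumptions, and the function names are ours:

```python
import math

KP_STORM = 4.67  # Kp = 5-, the storm threshold used in this paper

def storm_probability(mu, sigma, threshold=KP_STORM):
    """P(Kp >= threshold) under a Gaussian predictive distribution N(mu, sigma^2)."""
    return 0.5 * math.erfc((threshold - mu) / (sigma * math.sqrt(2.0)))

def kp_interval(mu, sigma):
    """Mean and 95% interval (mu +/- 1.96 sigma), each snapped to the
    nearest valid Kp value on the 1/3-spaced 0..9 grid."""
    snap = lambda v: min(9.0, max(0.0, round(v * 3) / 3.0))
    return snap(mu - 1.96 * sigma), snap(mu), snap(mu + 1.96 * sigma)
```

A prediction centred on the threshold is a 50/50 storm call, while a confident quiet-time prediction yields a near-zero storm probability; this is the sense in which the uncertainty bounds on Kp are converted into a probabilistic storm forecast.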
On the full data set our model does not perform as well as that of Tan et al. (2018). However, in addition to generating a probabilistic forecast, we seek to answer the question of whether including GOES X-ray data provides a meaningful improvement in the prediction of storm-time Kp intervals. Looking at the same metrics for the 38 storm-time intervals, a different picture emerges. The model variant incorporating X-ray flux data (µOMNI+) outperforms the standard model by a substantial margin. The RMSE of µOMNI+ is 0.9 and the MAE is 0.67, showing that typical storm-time Kp predictions are within 1 Kp interval of the observation. Of particular importance is that the correlation coefficient increases for µOMNI+ relative to the performance across the full validation set. Here we note that the correlation coefficient can be considered a measure of potential performance, as it neglects conditional and unconditional bias (Murphy, 1995).

To graphically display the model bias across these two validation sets, Figure 8 plots observed Kp against predicted Kp. Panel (a) shows the comparison across the full 10-year validation set and panel (b) shows the comparison for the 38 storm intervals. Black and blue colors represent predicted Kp using µOMNI and µOMNI+, respectively. The circles give the mean predicted Kp and the vertical lines represent 1-σ error bars associated with that predicted Kp level. As above, the predictions from both models are comparable when we use the full validation set (Figure 8(a)) and do not account for the activity level. At low Kp the predictions show little to no bias; the mean predicted Kp is nearly identical to the observed value during quiet times. At higher levels of geomagnetic activity, we see a clear tendency for the mean predicted Kp to be lower than the observation. That is, high values of Kp tend to be underpredicted by both models.
Comparing the two models shows a slight improvement in the bias at higher Kp values using the µOMNI+ model, but the most visible improvement in this display format is the smaller error bars on the µOMNI+ predictions.

Table 4 presents a summary of overall fit statistics, for both models, on the 2001–2010 data set as well as the storm-time data set. On both data sets there is an improvement in the fit when adding the X-ray flux data to the model. Because the correlations have the observations in common, we test whether the improvement is significant using a test for correlated correlation coefficients (Steiger, 1980; Revelle, 2020), where we use the effective number of samples (n_eff, where n_eff < n) estimated by correcting for the lag-1 autocorrelation (see, e.g., Wilks, 2006). The improvements in r are statistically significant for both the 2001–2010 data set and the storm-time data set. Given the results presented in both Table 4 and Figure 8, we conclude that including X-ray flux information improves Kp prediction accuracy during geomagnetically active intervals.

Forecast Model | Case      | r    | RMSE (ε) | MAE (|δ̄|) | R
µOMNI          | 2001–2010 | 0.82 | 0.78     | –          | –
µOMNI+         | 2001–2010 | 0.83 | 0.77     | –          | –
µOMNI          | Storms    | 0.69 | 1.48     | 1.11       | 0.29
µOMNI+         | Storms    | 0.75 | 0.9      | 0.67       | 0.56
Table 4.
Prediction summary of the µOMNI and µOMNI+ models during 2001–2010 and for the geomagnetic storms listed in Table 3.

However, this analysis only uses the mean prediction and neglects the fact that we have used a probabilistic regressor. So, how can we use the uncertainties provided by the dGPR prediction, and how well do our probabilistic predictions perform?

Here we describe how we exploit the uncertainty in predicted Kp to provide a probabilistic geomagnetic storm forecast, using the SW-driven model µOMNI as an example. Figure 9 illustrates the probabilistic prediction of both the Kp index and of geomagnetic storm occurrence. Figure 9(a) shows a "traffic light" display which gives the probability of Kp exceeding 5−, which we use to delineate storm time, following the NOAA G-scale of geomagnetic storms (see Table 1). The color represents the likelihood of storm activity: green is Pr ≤ 0.3, yellow is 0.3 < Pr ≤ 0.66, and red marks intervals where the probability of geomagnetic storm conditions exceeds 0.66. Note that Pr is the probability of a geomagnetic storm, using the NOAA SWPC definition of Kp ≥ 5− as the threshold for a geomagnetic storm.

Figure 9(b) presents 10 days (21st–31st July 2004) of 3-hour-ahead predictions of Kp, using the µOMNI+ model (cf. Figure 7(b)). The horizontal dashed line is at Kp = 5− and the vertical bar marks the time of the prediction shown in Figure 9(c). Figure 9(c) illustrates how to calculate the probability of a geomagnetic storm from the predicted distribution of Kp. The blue curve gives the output Gaussian probability density function from the dGPR regressor, while the blue and red vertical lines represent the mean prediction and the observed Kp. The vertical dashed black line marks Kp = 5− and the integral of the shaded area is the probability of exceeding the threshold.

In a non-probabilistic model, we would simply have a binary outcome of storm or non-storm. Following this method, we see that the prediction of a probability distribution gives us both uncertainty bounds on our prediction of the actual Kp as well as the probability of exceeding a given threshold in Kp.

To assess the probabilistic storm prediction (i.e., the probability of exceeding a threshold), we examine binary event forecasts. For this we convert each probabilistic prediction into a prediction of "event" or "no event", which requires choosing a probability threshold. As our predicted probability distribution is Gaussian, the mean prediction is also the 50th percentile. Simply ignoring the predicted distribution and using the mean value is therefore equivalent to using a decision threshold of 0.5.
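The two steps just described, integrating the predicted Gaussian above the storm threshold and then applying a decision threshold of 0.5, can be sketched as follows. This is a minimal illustration using the closed-form Gaussian tail (via erfc), not the code used in this study; the example numbers are hypothetical:

```python
import math

KP_STORM = 5.0 - 1.0 / 3.0  # Kp = 5-, the NOAA SWPC storm threshold

def storm_probability(mu, sigma, threshold=KP_STORM):
    """Pr(Kp >= threshold) for a Gaussian predictive distribution N(mu, sigma^2).

    This is the integral of the predictive PDF above the storm threshold,
    i.e. the Gaussian survival function evaluated at the threshold.
    """
    z = (threshold - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

def storm_event(mu, sigma, p_decision=0.5):
    """Binary storm / no-storm call from the predictive distribution.

    A decision threshold of 0.5 reproduces simply thresholding the
    mean prediction at Kp = 5-, since the mean is the median here.
    """
    return storm_probability(mu, sigma) > p_decision

# Hypothetical 3-hour-ahead prediction: mean Kp = 4.3 with sigma = 0.8
p_storm = storm_probability(4.3, 0.8)  # roughly a one-in-three chance of Kp >= 5-
```

Raising or lowering `p_decision` away from 0.5 trades probability of detection against false alarms, which is exactly the sweep used to build the ROC curves discussed below.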
We can similarly convert the observed Kp to a series of events and non-events, and can subsequently determine whether each prediction was a true positive, false positive, false negative or true negative (see section 2.2).

Figure 10 shows Kp predictions using the µOMNI model during a geomagnetic storm at the end of March 2001. One forecast from each of the true positive (TP), false positive (FP), false negative (FN) and true negative (TN) categories is shown by the vertical lines. Figure 10 is in a similar format to Figure 9, except that panel (c) is omitted. We use the simpler model for this graphic to illustrate the main effect that our µOMNI+ model aims to combat. Specifically, the FP and FN cases occur in a specific pattern: at the start of the storm we see a false negative, and at the end of the storm we see a false positive. This is typical of solar wind-driven models for predictions further ahead than the lead time given by the solar wind. In this case, we have a 3-hour lead time on our prediction, so the model has no information with which to capture the sudden onset of the geomagnetic activity. By the next prediction, the model has "caught up" and correctly predicts a very high likelihood of storm time. The inverse is seen, though perhaps less clearly, at the trailing edge of the storm. The model is unable to predict the cessation of storm-level geomagnetic activity, although we do note that the uncertainty bounds include a non-negligible likelihood of Kp < 5−. The model prediction is, therefore, lagging the observation. While this figure shows one example where predicted Kp lags the observations by 3 hours, most of the storm-time predictions lag the observations by at least one 3-hour prediction window.
This implies that the model µOMNI has insufficient information to capture the imminent arrival of a solar wind transient from the L1 data alone, and the prediction is likely to be strongly persistent (giving high weight to the previous value of Kp). By including the X-ray data in the µOMNI+ model as a proxy for the likelihood of CME occurrence, we aim to improve storm-time predictions and combat this "lag" effect.

In the previous sections, we described a method that uses the uncertainty bounds to produce a probabilistic storm forecast. Here we compare the predictions from the two models (µOMNI and µOMNI+) using the different metrics defined in section 2. We begin with the Receiver Operating Characteristic (ROC) curve. The ROC curve is calculated from the probability of detection (PoD) and the probability of false detection (PoFD) over a range of decision thresholds. If we set a decision threshold of Pr = 0, then all predictions lie above it, every time is predicted as an event, and the PoD and PoFD are both 1. Conversely, with a decision threshold of Pr = 1, no predictions lie above it and the PoD and PoFD are both 0. Figure 11 shows the ROC curves for the µOMNI and µOMNI+ models, using different Kp threshold values. The solid lines are the ROC curves from model µOMNI and the dashed lines are the ROC curves from model µOMNI+. Thresholds of Kp = 2, 4 and 6 are shown in red, black and blue, respectively. We also use the area under the ROC curve (abbreviated as AUC) as a summary measure to compare the performance of our models (cf. Bradley, 1997), and the AUC for each ROC curve is given in the figure legend. For the lower Kp threshold values shown (Kp = 2 and 4) the two models perform similarly, but for the highest Kp threshold value (Kp = 6) the ROC curves visibly diverge across a broad range of decision thresholds. The AUC for µOMNI+ is higher than that for µOMNI. This provides qualitative support for the hypothesis that the inclusion of GOES X-ray data has improved the performance of our Kp model for high geomagnetic activity.

As the aim of including the X-ray flux data was to provide information relevant to predicting intervals of higher Kp with a longer lead time, we also test the difference between the AUC for µOMNI+ and µOMNI when Kp ≥ 6. Because the same test data are used for both ROC curves, the two blue curves in Figure 11, we use DeLong's nonparametric test for the area under correlated ROC curves (DeLong et al., 1988; Sun and Xu, 2014) as implemented in the pROC package (Robin et al., 2011). A two-sided test showed that the visual difference between the ROC curves for the two models is statistically significant. This confirms the qualitative analysis presented above and supports the hypothesis that including even simple proxies for solar activity can improve the prediction of geomagnetic activity with lead times greater than the L1-to-Earth transit time.

Figure 12 also explores activity-dependent model performance. Using a decision threshold of Pr = 0.5, it shows a set of binary-event performance metrics as a function of the Kp threshold used to define an "event" (see also Liemohn et al., 2018). In each of the panels, the black markers show the results for µOMNI and the blue markers show the results for µOMNI+. The error bars show the 95% confidence interval estimated using 2000 bootstrap resamplings (Morley et al., 2018b; Morley, 2018). Panel (a) shows the threshold-dependent Heidke Skill Score (HSS), which measures model accuracy relative to an unskilled reference. Panel (b) shows the Matthews Correlation Coefficient, which can be interpreted in a similar way to a Pearson correlation. Panel (c) shows the probability of detection (PoD), also called hit rate or true positive rate. Panel (d) shows the false alarm ratio (FAR), which is the fraction of predicted events that did not occur; the ideal FAR is therefore zero. Panel (e) shows the critical success index (CSI), which can be interpreted as an accuracy measure after removing true negatives from consideration. Finally, panel (f) displays the frequency bias, where an unbiased forecast predicts the correct number of events and non-events and scores 1. As a reminder, the metrics displayed in panels a, b, c and e are positively-oriented, where 1 constitutes a perfect score.
FAR (panel d) is negatively-oriented and a perfect model has an FAR of zero. The metrics shown in panels a–e have an upper bound of 1, marked by the red dashed line. In every measure, the performance of the two models is indistinguishable at low values of Kp, which, as we recall, constitute the vast majority of geomagnetic conditions, but as the threshold for identifying an event increases we clearly see improved performance from the µOMNI+ model. While the confidence intervals substantially overlap for these scores, we note that parameter estimates with overlapping confidence intervals can still be significantly different (Afshartous and Preston, 2010). In other words, while non-overlapping confidence intervals are likely to show that the performance metrics are significantly different, the inverse is not necessarily true. Due to a variety of factors, we cannot assess the significance of the improvement in all performance metrics presented here. Among these factors are that the metrics are correlated with each other, and that we would need to correct for the effect of multiple significance tests. We have instead noted throughout this work where we were able to test for significance, and described the consistent improvement in performance metrics. Although the improvement in these metrics is modest, we again conclude that adding the GOES X-ray flux data improves the model's ability to predict geomagnetically active times.

Finally, we assess the reliability of the probability distributions generated by our dGPR models. In this context, reliability assesses the agreement between the predicted probability and the mean observed frequency. In other words, if the model predicts a 50% chance of exceeding the storm threshold, is that prediction correct 50% of the time?

Figure 13 presents a reliability diagram of the observed probability of a geomagnetic storm (for different Kp threshold levels, i.e., 2, 4 and 6) plotted against the forecast probability of a geomagnetic storm. The top row, panels (a.1) and (a.2), presents the reliability diagrams for models µOMNI and µOMNI+, respectively. In this figure the red, blue and black lines represent the three geomagnetic storm thresholds (Kp = 2, 4 and 6). A perfectly reliable forecast should lie on the x = y line (black dashed line). For smaller chances of geomagnetic storms, both forecast models are reliable in their probabilistic predictions. As the predicted probability increases, so does the tendency for the predicted probability to be higher than the observed probability; that is, the models tend to over-forecast slightly. Comparing the reliability of µOMNI to that of µOMNI+, we see similar results for activity thresholds of Kp = 2 and 4, while the µOMNI+ model predictions for a storm-time threshold of Kp = 6 lie closer to the ideal line. The forecast probability histograms for µOMNI and µOMNI+ display sharpness, with local peaks near probabilities of zero and one. Both models exhibit high sharpness, which can be interpreted as the confidence of the model in its event prediction (Wilks, 2006). Further, both models perform similarly for lower Kp thresholds. The µOMNI+ model, when using an event threshold of Kp ≥ 6, has slightly improved calibration over µOMNI. The addition of the solar X-ray flux data consistently improved performance when assessed in a deterministic setting, and here is shown to improve the calibration of the model at high activity levels without impacting the sharpness of the model. This analysis further supports the performance of our probabilistic model and confirms that the GOES X-ray data adds value to our Kp prediction model.
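The reliability calculation described above, binning forecast probabilities and comparing each bin's mean forecast with the observed storm frequency, can be sketched as below. This is an illustrative reconstruction of the computation behind a reliability diagram such as Figure 13, not the authors' code:

```python
import numpy as np

def reliability_curve(p_forecast, event_observed, n_bins=10):
    """Mean forecast probability vs. observed event frequency per bin.

    p_forecast     : array of predicted storm probabilities in [0, 1]
    event_observed : boolean array, True where Kp met the storm threshold
    Returns (mean_forecast, observed_frequency, counts) per bin; bins
    containing no forecasts are returned as NaN.
    """
    p = np.asarray(p_forecast, dtype=float)
    obs = np.asarray(event_observed, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each forecast to a probability bin (the top edge maps to the last bin)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    mean_fc = np.full(n_bins, np.nan)
    obs_freq = np.full(n_bins, np.nan)
    counts = np.bincount(idx, minlength=n_bins)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            mean_fc[b] = p[sel].mean()
            obs_freq[b] = obs[sel].mean()
    return mean_fc, obs_freq, counts
```

A perfectly reliable forecast gives `obs_freq` equal to `mean_fc` in every populated bin (the x = y line), while `counts` concentrated near probabilities of zero and one is the sharpness seen in the histograms.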
4. Discussion
In this study, we presented a novel, probabilistic, geomagnetic storm forecast model that predicts 3 hours ahead. Our model structure combined an LSTM classifier and dynamically-trained deep Gaussian processes to generate predictions of Kp with an associated probability distribution. To test whether a simple, operationally-available data set could improve predictions of geomagnetic storm times, we trained two variants of our model: the first used only solar wind data and historical values of Kp; the second added X-ray fluxes from the NOAA GOES satellites as a proxy for magnetic complexity at the Sun. Using a variety of model validation methods, we have confirmed that including the X-ray data enhances the performance of the forecast model at times of high geomagnetic activity. Due to the low number of samples (at high Kp levels) for model testing, many measures of model performance suggest an improvement at high activity levels for which statistical significance could not be demonstrated. Significance tests of the improvement in the correlation coefficients and of the change in the ROC AUC show that there is a quantified, statistically-significant improvement in model performance when GOES X-ray flux data are included. In this section, we further analyze the performance metrics and compare them with prior studies.

Although exact comparisons should not be made, as we use different data sets for model validation, we place our results in the context of previous work. In comparison with some earlier models (e.g. Boberg et al., 2000; Wing et al., 2005; Bala and Reiff, 2012) our models typically perform well, with an RMSE of 0.77. The performance, as measured by RMSE, is not as good as the RMSE for the 3 hr-ahead predictions of Zhelavskaya et al. (2019) (RMSE = 0.67) or Tan et al. (2018). For storm-level activity, performance can also be summarized using the F1-score, which combines precision and recall, using the nomenclature of the machine learning literature. Using terminology from the statistical literature, the precision is perhaps better known as the positive predictive value and represents the fraction of predicted positives that were correct. Similarly, the recall is the probability of detection and represents the fraction of observed events that were correctly predicted. The F1-score for the Tan et al. (2018) model was 0.55. Our initial model (µOMNI) gave an F1-score of 0.56, while our model including the solar X-ray flux data (µOMNI+) gave a slightly higher F1-score. These previous studies provide deterministic Kp forecast models, whereas, in this paper, we aim to provide a probabilistic prediction of Kp without compromising the predictive skill of the model. We have further demonstrated that including a simple, operationally-available proxy for the likelihood of solar activity improves the prediction of geomagnetic storms. The inability of Kp prediction models to predict larger storms (Kp ≥ 5) well from L1 solar wind data has previously been discussed in the literature (see, e.g., Zhelavskaya et al., 2019), and this work shows that including solar X-ray flux can directly improve the prediction of high levels of geomagnetic activity. In this work we found that including solar X-ray flux in our model features reduces the overall RMSE by 0.01, from 0.78 to 0.77. At the same time the correlation coefficient increased by a small but statistically significant amount (from 0.82 to 0.83). Importantly, for the storm-time test data the RMSE was reduced by 0.58, from 1.48 to 0.9, and the correlation coefficient increased from 0.69 to 0.75. For details of the results and the significance testing, see Table 4 and section 3.1. Similarly, we note that analyzing the area under the ROC curve shows a significant improvement in the probabilistic predictions of Kp, for high activity levels, when X-ray flux is included (see section 3.3). These comparisons show that inclusion of solar X-ray flux can enhance storm-time forecasting capability without diminishing the performance during less active periods.

Although we present only one sample model architecture, we use it to highlight a straightforward method by which uncertainty bounds can be predicted using machine-learning models while also improving predictions of intervals of high geomagnetic activity. This clear demonstration that the X-ray flux data meaningfully improve our prediction of geomagnetic storms strongly suggests that including solar data sources in future work is a promising way to extend the meaningful forecast horizon for high levels of geomagnetic activity. However, other operationally-available data sources exist that are likely to carry more information about magnetic complexity at the Sun (e.g., solar magnetograms; Arge et al., 2010), and using these may further improve the prediction of both CME- and non-CME-driven geomagnetic activity.
Further work is planned to extend this study, as well as to incorporate other recent advances that will help improve the fidelity of our predictive model.

We also note that Zhelavskaya et al. (2019) explored methods for selecting model features and reducing a large number of candidate features to a smaller selection of the most important. Relative to their work we use a small number of features, all of which were selected based on a physically-motivated interpretation and subject to the constraint of being products generally available operationally. Applying their feature selection methods and further developing the architecture, such as using convolutional neural networks to process and extract CME features from 2-dimensional solar X-ray and magnetogram data, would likely yield immediate improvements in model performance.
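For reference, the contingency-table scores compared throughout this work (PoD, FAR, CSI, frequency bias, HSS, MCC, and the F1-score built from precision and recall) can be computed from the TP/FP/FN/TN counts as sketched below. This is an illustrative implementation following standard forecast-verification definitions (e.g., Wilks, 2006), not the PyForecastTools code used in this study:

```python
import math

def contingency_scores(tp, fp, fn, tn):
    """Common verification scores from 2x2 contingency counts.

    tp, fp, fn, tn: counts of true positives, false positives,
    false negatives and true negatives, respectively.
    """
    n = tp + fp + fn + tn
    pod = tp / (tp + fn)            # probability of detection (recall, hit rate)
    pofd = fp / (fp + tn)           # probability of false detection
    far = fp / (tp + fp)            # false alarm ratio
    csi = tp / (tp + fp + fn)       # critical success index
    bias = (tp + fp) / (tp + fn)    # frequency bias (1 = unbiased)
    precision = tp / (tp + fp)      # positive predictive value
    f1 = 2 * precision * pod / (precision + pod)
    # Heidke skill score: fraction correct relative to chance agreement
    exp_correct = ((tp + fn) * (tp + fp) + (tn + fn) * (tn + fp)) / n
    hss = (tp + tn - exp_correct) / (n - exp_correct)
    # Matthews correlation coefficient
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(pod=pod, pofd=pofd, far=far, csi=csi, bias=bias,
                f1=f1, hss=hss, mcc=mcc)
```

A sweep of these scores over Kp event thresholds (with bootstrap confidence intervals) reproduces the kind of activity-dependent comparison shown in Figure 12.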
5. Conclusion
The two main objectives addressed by this work were to: 1. build a probabilistic Kp forecast model; and 2. test whether the inclusion of operationally-available proxies for solar activity could improve the prediction of geomagnetic storms (using Kp, following the NOAA G-scale).

We presented a two-layer architecture, using an LSTM neural network to predict the likelihood of storm time and deep Gaussian Process Regression models to predict the value of Kp, including uncertainty bounds. We then exploited these uncertainty bounds to provide a probabilistic geomagnetic storm forecast. Our analysis demonstrates that this architecture can be used to build probabilistic space weather forecast models with good performance.

Further, we tested two variants of our model that differed only in the input parameters ("features"). The first used only upstream solar wind data and the second added solar X-ray flux data from the GOES spacecraft. Analysis of the predictions and the errors, both for the values of Kp and for the probability of exceeding a threshold in Kp, showed that the addition of X-ray flux data resulted in significant model performance improvements during geomagnetically active periods. The model using X-ray flux data had a significantly higher correlation coefficient on the storm-time test data (increased from 0.69 to 0.75), with other performance metrics also showing improvements. The RMSE on the storm-time data set decreased from 1.48 to 0.9. This improvement in model performance was also seen across all contingency table-based metrics, with improvements in skill and reductions in false alarms. Similarly, the probabilistic predictions were shown to be significantly better by testing the difference in the area under the ROC curve.
The probabilistic predictions were shown to be well-calibrated and sharp.

Adding solar X-ray flux data to empirical or machine-learned models can add useful information about transient solar activity, improving the 3-hour-ahead prediction of the Kp index at high geomagnetic activity levels. Although including this relatively simple data set increased the accuracy of the forecast, supporting the suggestion that X-ray data are a reasonable proxy for solar magnetic activity, our model still shows lags in predicting large geomagnetic storms. Capturing uncertainty, providing probabilistic predictions and improving our ability to capture transient behavior are all within reach with modern tools and do not require sacrificing model predictive performance. We hope that future work continues to bring together recent advances in feature selection (e.g., Zhelavskaya et al., 2019), model design to accommodate probabilistic prediction, and more complex solar data sources such as solar magnetograms, to provide accurate forecasting of strong geomagnetic activity with longer lead times.

Acknowledgements.
SC thanks the Space Science and Applications group and the Center for Space and Earth Science (CSES) at Los Alamos National Laboratory for organizing and supporting the Los Alamos Space Weather Summer School. Portions of this work by SKM were performed under the auspices of the US Department of Energy and were partially supported by the Laboratory Directed Research and Development (LDRD) program, award number 20190262ER. The authors wish to acknowledge the use of the OMNI solar wind data, available at https://omniweb.gsfc.nasa.gov/ow.html . The Kp index and GOES X-ray datasets were accessed through the GFZ-Potsdam website and the NOAA FTP server https://satdat.ngdc.noaa.gov/sem/goes/data/ , respectively. The majority of the analysis and visualization was completed with the help of free, open-source software tools, notably: Keras (Chollet et al., 2015); matplotlib (Hunter, 2007); IPython (Perez and Granger, 2007); pandas (McKinney, 2010); Spacepy (Morley et al., 2011); PyForecastTools (Morley, 2018); and others (see, e.g., Millman and Aivazis, 2011). The code developed during this work is available at https://github.com/shibaji7/Codebase_Kp_Prediction .

References
Afshartous, D., and R. A. Preston, 2010. Confidence intervals for dependent data: Equating non-overlap with statistical significance.
Computational Statistics & Data Analysis, (10), 2296–2305. 10.1016/j.csda.2010.04.011.
Al-Shedivat, M., A. G. Wilson, Y. Saatchi, Z. Hu, and E. P. Xing, 2017. Learning Scalable Deep Kernels with Recurrent Structure. Journal of Machine Learning Research, (82), 1–37. URL http://jmlr.org/papers/v18/16-498.html .
Arge, C. N., C. J. Henney, J. Koller, C. R. Compeau, S. Young, D. MacKenzie, A. Fay, and J. W. Harvey, 2010. Air Force Data Assimilative Photospheric Flux Transport (ADAPT) Model. Twelfth International Solar Wind Conference, 343–346. URL https://aip.scitation.org/doi/abs/10.1063/1.3395870 .
Baker, D. N., E. W. Hones, J. B. Payne, and W. C. Feldman, 1981. A high time resolution study of interplanetary parameter correlations with AE. Geophysical Research Letters, (2), 179–182. 10.1029/GL008i002p00179, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/GL008i002p00179 .
Bala, R., and P. Reiff, 2012. Improvements in short-term forecasting of geomagnetic activity. Space Weather, (6). URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2012SW000779 .
Bartels, J. R., 1949. The standardized index, Ks and the planetary index, Kp. IATME, (12b).
Bingham, S., S. A. Murray, A. Guerrero, A. Glover, and P. Thorn, 2019. Summary of the plenary sessions at European Space Weather Week 15: space weather users and service providers working together now and in the future. J. Space Weather Space Clim., A32. URL https://doi.org/10.1051/swsc/2019031 .
Boberg, F., P. Wintoft, and H. Lundstedt, 2000. Real time Kp predictions from solar wind data using neural networks. Physics and Chemistry of the Earth, Part C: Solar, Terrestrial & Planetary Science, (4), 275–280. 10.1016/S1464-1917(00)00016-7.
Borovsky, J. E., 2013.
Physical improvements to the solar wind reconnection control function for the Earth's magnetosphere.
Journal of Geophysical Research: Space Physics, (5), 2113–2121. 10.1002/jgra.50110, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/jgra.50110 .
Borovsky, J. E., 2014. Canonical correlation analysis of the combined solar wind and geomagnetic index data sets. Journal of Geophysical Research: Space Physics, (7), 5364–5381. URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2013JA019607 .
Borovsky, J. E., and M. H. Denton, 2006. Differences between CME-driven storms and CIR-driven storms. Journal of Geophysical Research: Space Physics, (A7). URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2005JA011447 .
Borovsky, J. E., M. F. Thomsen, R. C. Elphic, T. E. Cayton, and D. J. McComas, 1998. The transport of plasma sheet material from the distant tail to geosynchronous orbit. Journal of Geophysical Research: Space Physics, (A9), 20,297–20,331. URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/97JA03144 .
Bradley, A. P., 1997. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, (7), 1145–1159. 10.1016/S0031-3203(96)00142-2.
Camporeale, E., 2019. The Challenge of Machine Learning in Space Weather: Nowcasting and Forecasting.
Space Weather, (8), 1166–1207. URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018SW002061 .
Carbary, J. F., 2005. A Kp-based model of auroral boundaries. Space Weather, (10). URL http://doi.wiley.com/10.1029/2005SW000162 .
Choi, H.-S., J. Lee, K.-S. Cho, Y.-S. Kwak, I.-H. Cho, Y.-D. Park, Y.-H. Kim, D. N. Baker, G. D. Reeves, and D.-K. Lee, 2011. Analysis of GEO spacecraft anomalies: Space weather relationships. Space Weather, (6). URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2010SW000597 .
Chollet, F., et al., 2015. Keras. https://keras.io .
Costello, K. A., 1998. Moving the Rice MSFM into a real-time forecast mode using solar wind driven forecast modules. Ph.D. thesis. URL https://scholarship.rice.edu/handle/1911/19251 .
DeLong, E. R., D. M. DeLong, and D. L. Clarke-Pearson, 1988. Comparing the Areas under Two or More Correlated Receiver Operating Characteristic Curves: A Nonparametric Approach. Biometrics, (3), 837–845.
Dungey, J. W., 1961. Interplanetary Magnetic Field and the Auroral Zones. Phys. Rev. Lett., (2), 47–48. 10.1103/PhysRevLett.6.47, URL https://link.aps.org/doi/10.1103/PhysRevLett.6.47 .
Eastwood, J. P., E. Biffis, M. A. Hapgood, L. Green, M. M. Bisi, R. D. Bentley, R. Wicks, L.-A. McKinnell, M. Gibbs, and C. Burnett, 2017. The Economic Impact of Space Weather: Where Do We Stand? Risk Analysis, (2), 206–218. 10.1111/risa.12765, URL https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.12765 .
Estabrooks, A., T. Jo, and N. Japkowicz, 2004. A Multiple Resampling Method for Learning from Imbalanced Data Sets. Computational Intelligence, (1), 18–36. 10.1111/j.0824-7935.2004.t01-1-00228.x, URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.0824-7935.2004.t01-1-00228.x .
Garcia, H. A., 1994. Temperature and emission measure from GOES soft X-ray measurements. Solar Physics, (2), 275–308.
10.1007/BF00681100, URL https://link.springer.com/article/10.1007/BF00681100 .
Gonzalez, W. D., B. T. Tsurutani, and A. L. Clúa de Gonzalez, 1999. Interplanetary origin of geomagnetic storms.
Space Science Reviews, (3), 529–562. 10.1023/A:1005160129098, URL https://doi.org/10.1023/A:1005160129098 .
Haiducek, J. D., D. T. Welling, N. Y. Ganushkina, S. K. Morley, and D. S. Ozturk, 2017. SWMF Global Magnetosphere Simulations of January 2005: Geomagnetic Indices and Cross-Polar Cap Potential.
Space Weather, (12), 1567–1587. URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2017SW001695 .
Hochreiter, S., and J. Schmidhuber, 1997. Long Short-Term Memory. Neural Comput., (8), 1735–1780. 10.1162/neco.1997.9.8.1735, URL https://doi.org/10.1162/neco.1997.9.8.1735 .
Horne, R. B., S. A. Glauert, N. P. Meredith, D. Boscher, V. Maget, D. Heynderickx, and D. Pitchford, 2013. Space weather impacts on satellites and forecasting the Earth's electron radiation belts with SPACECAST. Space Weather, (4), 169–186. 10.1002/swe.20023, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/swe.20023 .
Hundhausen, A. J., 1970. Composition and dynamics of the solar wind plasma. Reviews of Geophysics, (4), 729–811. 10.1029/RG008i004p00729, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/RG008i004p00729 .
Hunter, J. D., 2007. Matplotlib: A 2D graphics environment.
Computing in Science & Engineering, (3), 90–95. 10.1109/MCSE.2007.55, URL https://ieeexplore.ieee.org/document/4160265 .
Johnson, J. R., S. Wing, and E. Camporeale, 2018. Transfer entropy and cumulant-based cost as measures of nonlinear causal relationships in space plasmas: applications to Dst. Annales Geophysicae, (4), 945–952. 10.5194/angeo-36-945-2018.
Kahler, S. W., and A. G. Ling, 2018. Forecasting Solar Energetic Particle (SEP) events with Flare X-ray peak ratios. Journal of Space Weather and Space Climate, A47.
Kay, H. R. M., L. K. Harra, S. A. Matthews, J. L. Culhane, and L. M. Green, 2003. The soft X-ray characteristics of solar flares, both with and without associated CMEs. Astronomy & Astrophysics, (2), 779–784.
Kilpua, E. K. J., J. G. Luhmann, J. Gosling, Y. Li, H. Elliott, et al., 2009. Small Solar Wind Transients and Their Connection to the Large-Scale Coronal Structure. Solar Physics, (1), 327–344. 10.1007/s11207-009-9366-1, URL https://doi.org/10.1007/s11207-009-9366-1 .
Liemohn, M. W., J. P. McCollough, V. K. Jordanova, C. M. Ngwira, S. K. Morley, et al., 2018. Model Evaluation Guidelines for Geomagnetic Index Predictions. Space Weather, (12), 2079–2102. URL http://doi.wiley.com/10.1029/2018SW002067 .
Lippiello, E., L. de Arcangelis, and C. Godano, 2008. Different triggering mechanisms for solar flares and coronal mass ejections. Astronomy & Astrophysics, (2), L29–L32.
Luo, B., S. Liu, and J. Gong, 2017. Two empirical models for short-term forecast of Kp. Space Weather, (3), 503–516. URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2016SW001585 .
Mayaud, P. N., 1980. Derivation, Meaning and Use of Geomagnetic Indices, vol. 22 of Geophysical Monograph. American Geophysical Union. 10.1029/GM022, URL https://agupubs.onlinelibrary.wiley.com/doi/book/10.1029/GM022 .
McKinney, W., 2010. Data Structures for Statistical Computing in Python. In S. van der Walt and J. Millman, eds., Proceedings of the 9th Python in Science Conference, 56–61. 10.25080/Majora-92bf1922-012, URL https://conference.scipy.org/proceedings/scipy2010/mckinney.html.
Michalek, G., 2009. Two types of flare-associated coronal mass ejections.
Astronomy & Astrophysics, (1), 263–268.
Millman, K. J., and M. Aivazis, 2011. Python for Scientists and Engineers. Computing in Science & Engineering, (2), 9–12. 10.1109/MCSE.2011.36, URL https://ieeexplore.ieee.org/document/5725235.
Morley, S., 2018. drsteve/PyForecastTools: PyForecastTools. 10.5281/zenodo.1256921, URL https://doi.org/10.5281/zenodo.1256921.
Morley, S. K., 2019. Challenges and opportunities in magnetospheric space weather prediction. Space Weather, n/a(n/a). 10.1029/2018SW002108, URL https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2018SW002108.
Morley, S. K., T. V. Brito, and D. T. Welling, 2018a. Measures of Model Performance Based On the Log Accuracy Ratio. Space Weather, (1), 69–88. 10.1002/2017SW001669, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2017SW001669.
Morley, S. K., J. Koller, D. T. Welling, B. A. Larsen, M. G. Henderson, and J. T. Niehof, 2011. Spacepy - A Python-based library of tools for the space sciences. In Proceedings of the 9th Python in science conference (SciPy 2010), 67–71. Austin, TX. URL https://conference.scipy.org/proceedings/scipy2010/pdfs/morley.pdf.
Morley, S. K., D. T. Welling, and J. R. Woodroffe, 2018b. Perturbed Input Ensemble Modeling With the Space Weather Modeling Framework. Space Weather, (9), 1330–1347. 10.1029/2018SW002000, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018SW002000.
Murphy, A. H., 1995. The Coefficients of Correlation and Determination as Measures of Performance in Forecast Verification. Weather and Forecasting, 10(4), 681–688. 10.1175/1520-0434(1995)010<0681:TCOCAD>2.0.CO;2, URL https://doi.org/10.1175/1520-0434(1995)010<0681:TCOCAD>2.0.CO;2.
Newell, P. T., T. Sotirelis, K. Liou, C.-I. Meng, and F. J. Rich, 2007. A nearly universal solar wind-magnetosphere coupling function inferred from 10 magnetospheric state variables. Journal of Geophysical Research: Space Physics, (A1).
10.1029/2006JA012015, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2006JA012015.
Núñez, M., 2018. Predicting well-connected SEP events from observations of solar soft X-rays and near-relativistic electrons. Journal of Space Weather and Space Climate, 8, A36.
O'Brien, T. P., 2009. SEAES-GEO: A spacecraft environmental anomalies expert system for geosynchronous orbit. Space Weather, (9). 10.1029/2009SW000473, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2009SW000473.
Osthus, D., P. C. Caragea, D. Higdon, S. K. Morley, G. D. Reeves, and B. P. Weaver, 2014. Dynamic linear models for forecasting of radiation belt electrons and limitations on physical interpretation of predictive models. Space Weather, (6), 426–446. 10.1002/2014SW001057, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2014SW001057.
Perez, F., and B. E. Granger, 2007. IPython: A System for Interactive Scientific Computing. Computing in Science & Engineering, (3), 21–29. 10.1109/MCSE.2007.53, URL https://ieeexplore.ieee.org/document/4160251.
Qiu, Q., J. A. Fleeman, and D. R. Ball, 2015. Geomagnetic Disturbance: A comprehensive approach by American Electric Power to address the impacts.
IEEE Electrification Magazine, (4), 22–33. 10.1109/MELE.2015.2480615, URL https://ieeexplore.ieee.org/abstract/document/7343043.
Rasmussen, C. E., and C. K. I. Williams, 2006. Gaussian Processes for Machine Learning. MIT Press. ISBN 026218253X.
Revelle, W. R., 2020. psych: Procedures for Personality and Psychological Research. R package version 1.9.12.31, URL https://CRAN.R-project.org/package=psych.
Richardson, I. G., and H. V. Cane, 2012. Solar wind drivers of geomagnetic storms during more than four solar cycles.
J. Space Weather Space Clim., 2, A01. 10.1051/swsc/2012001, URL https://doi.org/10.1051/swsc/2012001.
Robin, X., N. Turck, A. Hainard, N. Tiberti, F. Lisacek, J. C. Sanchez, and M. Müller, 2011. pROC: an open-source package for R and S+ to analyze and compare ROC curves. BMC Bioinformatics, 12, 77. 10.1186/1471-2105-12-77, URL https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-12-77.
Schrijver, C. J., K. Kauristie, A. D. Aylward, C. M. Denardini, S. E. Gibson, et al., 2015. Understanding space weather to shield society: A global road map for 2015-2025 commissioned by COSPAR and ILWS. Advances in Space Research, (12), 2745–2807. 10.1016/j.asr.2015.03.023.
Schwenn, R., A. Dal Lago, E. Huttunen, and W. D. Gonzalez, 2005. The association of coronal mass ejections with their effects near the Earth. Annales Geophysicae, 23(3), 1033–1059. 10.5194/angeo-23-1033-2005, URL https://angeo.copernicus.org/articles/23/1033/2005/.
Sexton, E. S., K. Nykyri, and X. Ma, 2019. Kp forecasting with a recurrent neural network. Journal of Space Weather and Space Climate, 9, A19.
Sharpe, M. A., and S. A. Murray, 2017. Verification of Space Weather Forecasts Issued by the Met Office Space Weather Operations Centre. Space Weather, (10), 1383–1395. 10.1002/2017SW001683, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2017SW001683.
Shprits, Y. Y., R. Vasile, and I. S. Zhelavskaya, 2019. Nowcasting and Predicting the Kp Index Using Historical Values and Real-Time Observations. Space Weather, (8), 1219–1229. 10.1029/2018SW002141, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018SW002141.
Srivastava, N., and P. Venkatakrishnan, 2002. Relationship between CME Speed and Geomagnetic Storm Intensity. Geophysical Research Letters, (9). 10.1029/2001GL013597, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2001GL013597.
Srivastava, N., and P. Venkatakrishnan, 2004.
Solar and interplanetary sources of major geomagnetic storms during 1996–2002. Journal of Geophysical Research: Space Physics, (A10). 10.1029/2003JA010175, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2003JA010175.
Steiger, J. H., 1980. Tests for Comparing Elements of a Correlation Matrix. Psychological Bulletin, 87(2), 245–251. URL https://psycnet.apa.org/record/1980-08757-001.
Sun, X., and W. Xu, 2014. Fast Implementation of DeLong's Algorithm for Comparing the Areas Under Correlated Receiver Operating Characteristic Curves. IEEE Signal Processing Letters, (11), 1389–1393. 10.1109/LSP.2014.2337313, URL https://ieeexplore.ieee.org/document/6851192.
Tan, Y., Q. Hu, Z. Wang, and Q. Zhong, 2018. Geomagnetic Index Kp Forecasting With LSTM.
Space Weather, (4), 406–416. 10.1002/2017SW001764, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2017SW001764.
Tóth, G., I. V. Sokolov, T. I. Gombosi, D. R. Chesney, C. R. Clauer, et al., 2005. Space Weather Modeling Framework: A new tool for the space science community. Journal of Geophysical Research: Space Physics, (A12). 10.1029/2005JA011126, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2005JA011126.
Wilks, D. S., 2006. Statistical Methods in the Atmospheric Sciences, 2nd Edition. Academic Press. ISBN 9780123850232.
Wing, S., J. R. Johnson, E. Camporeale, and G. D. Reeves, 2016. Information theoretical approach to discovering solar wind drivers of the outer radiation belt. Journal of Geophysical Research: Space Physics, (10), 9378–9399. 10.1002/2016JA022711, URL http://doi.wiley.com/10.1002/2016JA022711.
Wing, S., J. R. Johnson, J. Jen, C.-I. Meng, D. G. Sibeck, K. Bechtold, J. Freeman, K. Costello, M. Balikhin, and K. Takahashi, 2005. Kp forecast models. Journal of Geophysical Research: Space Physics, (A4). 10.1029/2004JA010500, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2004JA010500.
Winter, L. M., and K. Balasubramaniam, 2015. Using the maximum X-ray flux ratio and X-ray background to predict solar flare class. Space Weather, (5), 286–297. 10.1002/2015SW001170, URL http://doi.wiley.com/10.1002/2015SW001170.
Winter, L. M., and K. S. Balasubramaniam, 2014. Estimate of Solar Maximum using the 1–8 Å Geostationary Operational Environmental Satellites X-ray Measurements. The Astrophysical Journal, 793(2), L45. 10.1088/2041-8205/793/2/L45, URL https://iopscience.iop.org/article/10.1088/2041-8205/793/2/L45/pdf.
Wintoft, P., M. Wik, J. Matzka, and Y. Shprits, 2017. Forecasting Kp from solar wind data: input parameter study using 3-hour averages and 3-hour range values. Journal of Space Weather and Space Climate, 7, A29.
Wu, D. J., H. Q. Feng, and J. K. Chao, 2008.
Energy spectrum of interplanetary magnetic flux ropes and its connection with solar activity. Astronomy & Astrophysics, (1), L9–L12.
Xu, F., and J. E. Borovsky, 2015a. A new four-plasma categorization scheme for the solar wind. Journal of Geophysical Research: Space Physics, (1), 70–100. 10.1002/2014JA020412, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2014JA020412.
Xu, F., and J. E. Borovsky, 2015b. A new four-plasma categorization scheme for the solar wind. Journal of Geophysical Research: Space Physics, (1), 70–100. 10.1002/2014JA020412, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2014JA020412.
Yap, B. W., K. A. Rani, H. A. A. Rahman, S. Fong, Z. Khairudin, and N. N. Abdullah, 2014. An Application of Oversampling, Undersampling, Bagging and Boosting in Handling Imbalanced Datasets. In T. Herawan, M. M. Deris, and J. Abawajy, eds., Proceedings of the First International Conference on Advanced Data and Information Engineering (DaEng-2013), 13–22. Springer Singapore, Singapore. ISBN 978-981-4585-18-7.
Zhang, J., X. Blanco-Cano, N. Nitta, N. Srivastava, and C. H. Mandrini, 2018. Editorial: Earth-affecting Solar Transients. Solar Physics. 10.1007/s11207-018-1302-9, URL https://link.springer.com/article/10.1007/s11207-018-1302-9.
Zhelavskaya, I. S., R. Vasile, Y. Y. Shprits, C. Stolle, and J. Matzka, 2019. Systematic Analysis of Machine Learning and Feature Selection Techniques for Prediction of the Kp Index. Space Weather, (ja). 10.1029/2019SW002271, URL https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019SW002271.
Zhou, G., J. Wang, and Z. Cao, 2003. Correlation between halo coronal mass ejections and solar surface activity. A&A, (3), 1057–1067.

Fig. 1.
Auto-correlation functions of different solar-wind parameters during (a) solar minima and (b) solar maxima. Fig. 2.
Cross-correlation functions of different solar-wind parameters with (a) GOES flux background (B) and (b) ratio (R) of hard (0.05–0.4 nm) and soft (0.1–0.8 nm) X-ray flux data. Fig. 3.
Distribution of Kp. Twenty years (1995–2014) of data have been used in this plot. f(Kp) is the frequency (i.e., the number of events), plotted on a logarithmic scale. The black vertical line is Kp = 5−. Fig. 4.
Schematics showing the architectures of (a) a single LSTM block, (b) the classifier, consisting of one input layer, one LSTM layer of 100 nodes (neurons), a dropout layer, and an output layer, and (c) the classifier model implemented using Keras.
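The forward pass of the single LSTM block in panel (a) can be sketched in plain NumPy. This is an illustrative implementation, not the authors' Keras code; the weight shapes, gate ordering, and function name are assumptions for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward step of a single LSTM block.

    x      : (F,)  input vector at the current time step
    h_prev : (H,)  previous hidden state
    c_prev : (H,)  previous cell state
    W      : (4H, F) input weights, U : (4H, H) recurrent weights,
    b      : (4H,) biases, stacked in gate order [input, forget, cell, output].
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell update
    o = sigmoid(z[3 * H:4 * H])  # output gate
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c
```

In the classifier of panel (b), 100 such blocks would be unrolled over the input window, with the final hidden state passed through dropout and a dense output layer.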
Fig. 5.
Proposed model (µ) architecture: the classifier is deterministic in nature, and the regressors are probabilistic in nature. The threshold for the classifier is Kp = 5−. Here, w_q and w_s are the variable training windows for the two regressors. For details, refer to the text. Fig. 6.
Variation of root mean square error (RMSE, ε) with the length of the training window (τ) in days. Each point on this curve is generated using the average RMSE over two months of data. Fig. 7. 3-hour forecast of Kp using the LSTM classifier and Deep Gaussian Process Regression (Deep GPR) for a solar maximum period (1 July–31 August 2004). Panels: (a) prediction from the model using OMNI solar wind data and (b) prediction from the model using OMNI solar wind data and GOES solar flux data. Blue and black dots in panels (a) and (b) are mean predictions, and red dots show observed Kp. Light blue and black shaded regions in panels (a) and (b), respectively, show the 95% confidence interval. RMSE (ε) and MAE (|δ̄|) are given inside each panel. Fig. 8.
Performance comparison of the µ_OMNI (in black) and µ_OMNI+ (in blue) models, which predict Kp, for (a) 10 years of prediction and (b) 38 storm-time predictions (listed in Table 3). In each panel, the official (Potsdam) Kp is plotted on the x-axis and the model prediction on the y-axis. Perfect predictions would lie on the line with a slope of one. The error bars indicate one standard deviation, and the correlation coefficient and coefficient of determination are given inside each panel. Fig. 9. Example model prediction (µ_OMNI+ model) showing (a) the probability of geomagnetic storms, (b) Kp (22–31 July 2004), and (c) an illustration of the method used to extract the probability of storm occurrence for one prediction, marked by the vertical black line in panel (b). Black dashed lines in panels (b) and (c) represent the threshold Kp = 5−; the red and blue thin lines in panel (c) are the observed Kp and the predicted mean, respectively. Panel (b) is in the same format as Figure 7. The shaded region in panel (c) provides the probability of a geomagnetic storm (Pr[e]). Fig. 10.
Example model predictions using the µ_OMNI model showing True Positive (TP), False Positive (FP), False Negative (FN), and True Negative (TN) predictions, marked by vertical lines. The top and bottom panels show the probability of geomagnetic storms and Kp with uncertainty bounds (shaded regions). Fig. 11.
Receiver operating characteristic (ROC) curves showing the relative performance of the individual storm forecasting models µ_OMNI (solid curves) and µ_OMNI+ (dashed curves) at different storm intensity levels (for Kp ≥ 2, 4, and 6).
Fig. 12. Different performance metrics, (a) HSS, (b) MCC, (c) PoD, (d) FAR, (e) CSI, and (f) Bias, comparing the two variants of the geomagnetic storm forecasting model at different Kp thresholds: µ_OMNI (black circles) and µ_OMNI+ (blue diamonds). Fig. 13.
Reliability diagram showing the observed frequency of geomagnetic storms (for Kp ≥ 2, 4, and 6) plotted against the forecast probability of geomagnetic storms.
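A reliability diagram pairs binned forecast probabilities with the observed event frequency in each bin; a well-calibrated forecast lies on the diagonal. The sketch below is a minimal illustration of that binning (function name and bin count are choices for this example, not the authors' implementation).

```python
import numpy as np

def reliability_curve(p_forecast, occurred, n_bins=10):
    """Mean forecast probability vs. observed event frequency per bin.

    p_forecast : forecast probabilities in [0, 1]
    occurred   : 0/1 indicator of whether the event was observed
    """
    p = np.asarray(p_forecast, float)
    o = np.asarray(occurred, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each forecast to a bin; clip so p == 1.0 falls in the last bin.
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    mean_p, obs_freq = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():  # skip empty bins rather than dividing by zero
            mean_p.append(p[mask].mean())
            obs_freq.append(o[mask].mean())
    return np.array(mean_p), np.array(obs_freq)
```

Plotting `obs_freq` against `mean_p`, with the 1:1 line as a reference, reproduces the format of a reliability diagram such as Figure 13.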