Forecasting the underlying potential governing the time series of a dynamical system
V. N. Livina, G. Lohmann, M. Mudelsee, and T. M. Lenton
National Physical Laboratory, Teddington, UK; University of East Anglia, Norwich, UK; Alfred Wegener Institute for Polar and Marine Research, Bremerhaven, Germany; Climate Risk Analysis, Hannover, Germany; College of Life and Environmental Sciences, University of Exeter, UK
Abstract
We introduce a technique of time series analysis, potential forecasting, which is based on dynamical propagation of the probability density of a time series. We employ the polynomial coefficients of an orthogonal approximation of the empirical probability distribution and extrapolate them in order to forecast the future probability distribution of the data. The method is tested on artificial data, used for hindcasting observed climate data, and then applied to forecast Arctic sea-ice time series. The proposed methodology completes a framework for 'potential analysis' of tipping points, which altogether serves to anticipate, detect and forecast non-linear changes, including bifurcations, using several independent techniques of time series analysis. Although applied here to climatological series, the method is very general and can be used to forecast dynamics in time series of any origin.
Many dynamical systems in general, and geophysical subsystems in particular, lack analytical deterministic descriptions with fully developed physical models, being represented mainly by recorded time series. At the same time, such systems may be of great public interest and societal impact, such as the current climate change with rising temperature records around the globe.

In these circumstances, powerful research tools may be provided by statistical time series analysis (Priestley, 1981; Tong, 1990; Box et al., 2008), which has found entry into the analysis of climate data from a general viewpoint (Mudelsee, 2010) and also from a nonlinear dynamical system viewpoint (Donner and Barbosa, 2008). In particular, system dynamics can be approximated by means of simple generalised stochastic models, where uncertain or unknown variables are represented by stochastic components (Hasselmann, 1976; Chan and Tong, 2001). Of specific relevance to the present paper are time series analysis methods that deal with correlations and scaling of fluctuations (Peng et al., 1994; Podobnik and Stanley, 2008; Laloux et al., 1999; Plerou et al., 1999; Koscielny-Bunde et al., 1996; Fraedrich and Blender, 2003).

In previous papers (Livina and Lenton, 2007; Livina et al., 2010, 2011), we have developed several time series techniques for anticipating and detecting 'tipping points' in trajectories of dynamical systems, with applications in climatology. Modified 'degenerate fingerprinting' (Livina and Lenton, 2007) was introduced for early warning of critical behaviour in time series, allowing one to anticipate an upcoming bifurcation or transition (climate tipping points; Lenton et al., 2008). This was based on 'degenerate fingerprinting' (Held and Kleinen, 2004), where the decay rate in the series is monitored using lag-1 autocorrelation in an autoregressive model (AR1). Modified degenerate fingerprinting employs Detrended Fluctuation Analysis for the same purpose. For noisy time series, we developed the method of potential analysis (Livina et al., 2010, 2011), which derives the number of system states under the assumption of quasi-stationarity of a data subset. This can distinguish a transition, which may be a forced drifting of the record without structural changes in the fluctuations, from a bifurcation, which happens when the underlying system potential (the system states that a climate variable may sample) changes in structure, e.g. instead of two potential wells one or three wells appear. A bifurcation is characterised by structural change in the dynamical system, whereas transitional series preserve the same structure of fluctuations.

If both techniques give indication of dynamical change, this denotes a genuine bifurcation. If modified degenerate fingerprinting indicates a change but potential analysis does not, this means a transition rather than a bifurcation, with no changes in the underlying system potential.

In this paper, we develop the methodology further, so that we are able not only to anticipate and detect, but also to forecast the time series dynamics. The skill of such a forecast will depend on several factors: in particular, whether the upcoming change will be gradual or abrupt, at what rate it will be happening, and how the scaling properties of the stochastic component may change with time. Here we outline the methodology, test it on artificial data and in several hindcast case studies, and provide a forecast of the dynamics of Arctic sea-ice extent in the near future.
We consider a simple stochastic model with a polynomial potential $U$ as an approximation of the system dynamics,

$\dot{x}(t) = -U'(x) + \sigma\eta$,   (1)

where $\dot{x}$ is the time derivative of the system variable $x(t)$ (the time series of an observed variable), $\eta$ is Gaussian white noise of unit variance and $\sigma$ is the noise level. In the case of a double-well potential, $U$ can be approximated by a polynomial of 4th order:

$U(x) = a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x$.

According to the Fokker-Planck equation for the dynamic evolution of the probability density function $p(x, t)$,

$\partial_t p(x,t) = \partial_x [U'(x)\, p(x,t)] + \frac{1}{2}\sigma^2 \partial_x^2 p(x,t)$,   (2)

the stationary solution is given by (Gardiner, 2004)

$p(x) \sim \exp[-2U(x)/\sigma^2]$.   (3)

The potential can thus be reconstructed from the time series of the system as

$U(x) = -\frac{\sigma^2}{2} \ln p_d(x)$,   (4)

which means that the empirical probability density $p_d$ has a number of modes corresponding to the number of wells of the potential.

This simple approximative approach has allowed us to reconstruct the system potential of various climatic records (see Livina et al., 2011). It works with remarkable accuracy for data subsets as short as 400 to 500 data points, demonstrating above a 90% rate of accurate detection, as was shown in an experiment with artificial data. For data subsets longer than 1000 points, it correctly detects the structure of the potential with a rate of 98% (Livina et al., 2011). Potential analysis was introduced in (Livina et al., 2010), an open-access paper published by Copernicus.org; this makes it easily accessible to a broad readership, and more details on the methodology can be found there.

Here we develop the potential method beyond its detection capability, so that we are able to forecast the behaviour of a time series on the basis of its potential. To that effect, we introduce an extrapolation technique that uses the potential structure of the time series with linear extrapolation of the coefficients of the approximating polynomials. To reduce the biases introduced during the various stages of potential analysis (due to the kernel distribution approximation, the subsequent logarithmic transformation, noise estimation and, finally, polynomial fits), we use the empirical probability density rather than its logarithmic transformation, the potential (see Eq. 4). Moreover, unlike in previous work (Livina et al., 2011), we use not the $2N$ potential coefficients (where $N$ is the number of potential wells) but the coefficients of the approximation of the empirical probability density by a finite Chebyshev polynomial series (following the approach of Neagoe, 1990). Chebyshev approximation has the advantage of being near-optimal, and already a 10th-degree approximation in most cases of observed time series provides an accurate fit with low values of the error function.
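As an illustration of the reconstruction in Eq. (4), the following is a minimal Python sketch (not the authors' code): it simulates Eq. (1) for an assumed double-well potential by Euler-Maruyama integration, estimates the empirical probability density with a Gaussian kernel, and recovers the potential up to an additive constant. The potential, step size and noise level are illustrative choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Simulate Eq. (1) with the double-well potential U(x) = x^4/4 - x^2/2,
# so that -U'(x) = -x^3 + x, using Euler-Maruyama integration.
sigma, dt, n = 0.7, 0.01, 50_000
x = np.empty(n)
x[0] = 1.0
for i in range(1, n):
    drift = -x[i - 1] ** 3 + x[i - 1]
    x[i] = x[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Empirical probability density p_d via kernel estimation,
# then Eq. (4): U(x) = -(sigma^2 / 2) * ln p_d(x).
grid = np.linspace(x.min(), x.max(), 256)
p_d = gaussian_kde(x)(grid)
U_rec = -0.5 * sigma ** 2 * np.log(p_d)

# The reconstructed potential shows two wells near x = -1 and x = +1,
# separated by a local maximum near x = 0.
```

With these settings, the reconstructed potential near $x = 0$ exceeds its values near $x = \pm 1$, reproducing the double-well structure; the reconstruction is defined only up to an additive constant.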
To approximate the empirical probability density, we use the Chebyshev polynomials, orthogonal in the interval $[-1, 1]$:

$T_0(x) = 1$, $T_1(x) = x$, ..., $T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$.   (5)

The polynomial $T_n(x)$ has $n$ zeros in the interval $[-1, 1]$, at the points

$x_k = \cos\left( \frac{\pi (k - 1/2)}{n} \right)$, $k = 1, 2, \ldots, n$.

The approximation of a function $f(x)$ can be performed using a truncated (finite $N$) sum of the form

$f(x) \approx \left( \sum_{k=0}^{N-1} c_k T_k(x) \right) - \frac{c_0}{2}$,   (6)

where the $c_k$ are the coefficients obtained by a discrete cosine transform of the vector of nonuniformly spaced samples of the considered function over the sampling grid $x_k$ above. For an arbitrary interval $[a, b]$ it is necessary to transform the variable as

$y = \frac{x - 0.5\,(b + a)}{0.5\,(b - a)}$.

A good example of the above calculations is given in (
Neagoe, 1990). When decomposition (6) is obtained according to the particular time series problem to be analysed, the resulting polynomial is expanded, thus producing the final coefficients

$f(x) \approx \sum_{k=0}^{N-1} \tilde{C}_k x^k$.   (7)

We consider the approximation of the empirical probability density using the Chebyshev polynomials $T_0, \ldots, T_{N-1}$. When the approximating polynomial is derived, the decomposition coefficients $\tilde{C}_k$ (Eq. 7) are linearly extrapolated using a set of preceding values. The span of these, as well as the extent of the extrapolated interval, can be chosen according to the particular time series to be analysed. If the series dynamics is homogeneous and steady, the horizon of the forecast may be as far as several decades of daily data (i.e., thousands of points). If the dynamics is highly nonlinear and abrupt, the horizon of the forecast may be very short, and the forecast probability density function may differ significantly from the observed one at the end of the forecast interval. However, even in this case, although the pdfs may differ, the simulated time series (due to sampling in a close domain and sorting according to the observed data) is realistically similar to the hindcast data (see the example of artificial data in Fig. 3). As a rule of thumb, the horizon of the forecast may be chosen at the scale of several hundred up to several thousand points, which in the case of daily data means up to 10 years of forecast horizon.

The initial pdfs (those providing the sequences of potential coefficients to be extrapolated) are estimated for a fixed time interval, in a sequence of subsets prior to the forecast starting point (in sliding windows). The extrapolation of the parameters provides adequate scaling of the pdf, and no further normalisation is necessary.

Once a new probability density is calculated, we generate a forecast time series using a rejection sampling algorithm (see, for instance, Gilks and Wild, 1992).
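The core of this procedure — Chebyshev approximation of a density, linear extrapolation of the coefficients, and rejection sampling from the extrapolated density — can be sketched in Python as follows. The function and variable names are hypothetical, and the code is a simplified illustration of the pipeline, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def cheb_fit(density_fn, a, b, n):
    """Chebyshev coefficients c_k of a function on [a, b], computed from its
    values at the n Chebyshev nodes (the cosine-transform sum of Eq. 6)."""
    k = np.arange(1, n + 1)
    nodes = np.cos(np.pi * (k - 0.5) / n)            # zeros of T_n on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)        # mapped to [a, b]
    fx = density_fn(x)
    j = np.arange(n)
    # c_j = (2/n) * sum_k f(x_k) cos(pi * j * (k - 1/2) / n)
    return (2.0 / n) * np.cos(np.pi * np.outer(j, k - 0.5) / n) @ fx

def cheb_eval(c, a, b, x):
    """Evaluate sum_k c_k T_k(y) - c_0/2 on [a, b] via the recurrence (5)."""
    y = (np.asarray(x, float) - 0.5 * (b + a)) / (0.5 * (b - a))
    t_prev, t_cur = np.ones_like(y), y
    s = c[0] * t_prev + c[1] * t_cur
    for ck in c[2:]:
        t_prev, t_cur = t_cur, 2.0 * y * t_cur - t_prev
        s += ck * t_cur
    return s - 0.5 * c[0]

def extrapolate(coeff_history, steps):
    """Linearly extrapolate each coefficient from its values in the
    preceding sliding windows (one row of coeff_history per window)."""
    t = np.arange(coeff_history.shape[0])
    fit = np.polyfit(t, coeff_history, 1)            # one line per column
    return fit[0] * (t[-1] + steps) + fit[1]

def rejection_sample(c, a, b, size):
    """Draw from the density represented by Chebyshev coefficients c on
    [a, b] using rejection sampling with a uniform envelope."""
    grid = np.linspace(a, b, 512)
    envelope = cheb_eval(c, a, b, grid).max() * 1.05
    out = np.empty(0)
    while out.size < size:
        x = rng.uniform(a, b, size)
        u = rng.uniform(0.0, envelope, size)
        out = np.concatenate([out, x[u < np.maximum(cheb_eval(c, a, b, x), 0.0)]])
    return out[:size]
```

For instance, fitting Gaussian densities with means 0, 0.1 and 0.2 over three consecutive windows and extrapolating the coefficients one step ahead yields samples centred near 0.3.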
This provides an artificial series with the prescribed distribution, but this may not be enough to obtain a realistic forecast time series, because the ordering of the series (and hence scaling properties such as long-term memory) should be reconstructed according to the initial data. For this purpose, we apply a so-called 'sorting' of the time series, which means arranging its values in the same order as in the initial data (before the forecast started), thus reproducing realistic correlations (the distributions are already very similar due to the extrapolation of the probability density). Sorting is a simple numerical algorithm which uses the ranking of the values of two series, the initial subsample and the forecast subsample. However, this should be done with care, especially in data with seasonality: if there is a seasonal trend, it is important to sort the forecast series according to observed data at the same date of the year, so that the seasonal variability remains adequate. This is achieved by going back along the series with a step equal to the seasonality period (365 for daily data or 12 for monthly data). Since certain years may be anomalous in their fluctuations (due to internal variability in the system), the initial data used for sorting may be an average over several years starting from the same date in a year (for instance, March 1st in several consecutive years). This average is then used to sort the forecast series starting on March 1st and projecting into the future.
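The rank-based sorting step can be expressed compactly. This is a sketch under the assumption that the reference segment (observed data at the corresponding dates, possibly averaged over several years) and the forecast segment have equal length; `sort_like` is a hypothetical helper name, not the authors' code.

```python
import numpy as np

def sort_like(reference, forecast):
    """Rearrange the forecast values so that their rank order matches that
    of the reference segment: the largest forecast value is placed where the
    reference has its largest value, and so on. This transfers the temporal
    ordering (and hence the correlation structure) of the reference onto the
    forecast while keeping the forecast's own distribution intact."""
    ranks = np.argsort(np.argsort(reference))   # rank of each reference value
    return np.sort(forecast)[ranks]
```

For seasonal data, `reference` would be taken from the same calendar dates in preceding years, stepping back by the seasonality period as described above.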
It is necessary to note that minor uncertainties are introduced at various steps of the forecasting algorithm: first when the potential stochastic model is used as an approximation of the dynamics; then when the empirical probability density is approximated by Chebyshev polynomials (minor outliers); furthermore, the polynomial coefficients are linearly extrapolated, which means that the actual dynamics is correctly forecast only in the case of linear evolution of those coefficients.

In many cases of abrupt, highly nonlinear dynamics, the linear extrapolation of the decomposition coefficients may produce an empirical distribution with large deviations, especially in the case of non-stationarity of the data. We attempted bootstrapping of the decomposition coefficients following (Mudelsee, 2010). Based on bootstrapping techniques, it is possible to consider blocks of data in a chosen subset $x$ of length

$L = \mathrm{NINT}\left( W^{1/3} \left( \frac{\sqrt{6}\,\hat{a}}{1 - \hat{a}^2} \right)^{2/3} \right)$,   (8)

where $\mathrm{NINT}(\cdot)$ is the nearest-integer function, $W$ is the window length, and $\hat{a}$ is the lag-1 autocorrelation of the centred subset $x - \bar{x}$, with $\bar{x}$ the mean value of the subset $x$. This block length selector was derived in (Mudelsee, 2010) from (Sherman et al., 1998), who adapted a formula from (Carlstein, 1986). For $\hat{a} \to 0$, $L$ is chosen equal to 1; when the denominator of Eq. (8) tends to 0, $L$ is chosen equal to $N - 1$. We refer the reader to (Mudelsee, 2010) (Chapter 3 therein) and plan to explore what this means for our method in a future study.

Furthermore, an important parameter that affects the skill of the forecast is the extrapolation period: the skill of the forecast drops as this period increases.

Obviously, in the case of abrupt changes, linear extrapolation of the coefficients may prove unsatisfactory, and our method may not be applicable in those cases. The best results are obtained when the data undergo gradual dynamic change and the forecast horizon is within 100 time units (which means about 3 months for daily data and up to 8 years for monthly data).

To assess the skill of the forecast, we used several techniques widely applied in the modelling community for the comparison of observed and modelled data, for instance in hydrology (Nash and Sutcliffe, 1970; Livina et al., 2007):

Daily Root Mean Square: $\mathrm{DRMS} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (x_m^i - x_o^i)^2 }$,

Nash-Sutcliffe efficiency: $\mathrm{NS} = 1 - \sum_{i=1}^{N} (x_m^i - x_o^i)^2 \Big/ \sum_{i=1}^{N} (x_o^i - \bar{x}_o)^2$,

Percent bias: $\%\mathrm{bias} = \left[ \sum_{i=1}^{N} (x_m^i - x_o^i) \Big/ \sum_{i=1}^{N} x_o^i \right] \times 100$,

where $x_o$ and $x_m$ are the observed and modelled series, respectively, $i = 1, \ldots, N$ is the time index, and $\bar{x}_o$ is the mean value of the series $x_o$. It is easy to see that for two identical time series DRMS = 0, NS = 1 and %bias = 0; any deviation from these values indicates a pointwise difference between the modelled and observed time series.

However, it is necessary to mention that these skill estimators perform best when applied to data with a normal distribution and without outliers, which is not the case for most of our datasets; they may therefore introduce additional biases when estimating the accuracy for arbitrary data, and we apply them for general information only.

Tests of artificial and climate data
We considered several simulated time series: an artificial dataset where the potential varies from single-well to double-well and back several times (Fig. 1), double-well-potential data with decreasing noise level (Fig. 2), and artificial data bifurcating from a one-well to a double-well potential (a simulated tipping point, Fig. 3). Our aim is to test whether the proposed methodology is capable of capturing the modelled dynamics of the series and forecasting the record adequately.

We perform a hindcast of these series by choosing a certain point where we start the extrapolation of the empirical probability density; we then compare the modelled series with the actual data at the end of the forecast. Figures 1 and 2 show the datasets in panels (a), the probability density functions and histograms of the data and hindcast in panels (b, c) of each plot, and the samples of modelled data in panels (d).

Figure 3 combines two hindcasts of the series bifurcating from a one-well to a double-well potential, in the intervals shown by arrows in the figure (probability density functions and histograms of observed and modelled data in panels (b, d, e, h) and (c, f, g, i)). The initial potential is $U(x) = x^4 - x^2$; then the term with the 1st power of $x$ grows gradually from zero until the potential reaches the form $U(x) = x^4 - x^2 + 8x$. In this experiment, the forecast is performed for 1530 'days' in total, without any intermediate assimilation of modelled data, and under these conditions the modelled series is very close in statistical properties to the hindcast data (although the probability density is not entirely identical). The result demonstrates that for certain bifurcating systems with gradual dynamics the methodology may be very efficient as a forecasting tool.

Similarly to the artificial data above, we performed hindcast experiments with observed climate data, temperature and sea-ice extent. The results are shown in Figures 4 and 5.

The Central England Temperature record (Manley, 1974; Parker et al., 1992), which is available as a monthly series since 1659 and as a daily series since 1772, is considered here in daily format. It is first deseasonalised (subtracting the average annual cycle of the daily data) and then the hindcast is performed as for the artificial records above.

Sea-ice area data were obtained from 'The Cryosphere Today' project of the University of Illinois. This dataset uses SSM/I and SMMR satellite products and spans 1979 to the present at daily resolution. The fluctuations of the Arctic sea-ice area are not only deseasonalised, but the quadratic trend is also removed, as we pre-processed the sea-ice data in our recent paper (Livina and Lenton, 2013). This was done to study the properties of the fluctuations.

Although the real data have much more complicated variability and dynamics, the method performs as well as in the cases of artificial data analysed above, with the modelled series having the same statistical properties as the initial data.
The skill of the forecast is calculated for multiple subsets along the series using the hindcast approach described above. We show the skill in Figure 6 as boxplots with whiskers (outliers are not shown) and conclude that in some cases the exact statistical properties and correlations are not reproduced well, with acceptable mean values over subsets but rather large standard deviations. This is because our stochastic approach is based on the probability density function, which means that the exact dynamics may vary among series with the same histograms, and skill estimators based on point-wise comparison of time series may deviate from ideal values. However, the patterns of the series are very close and suitable for long-term stochastic predictions, as we show below for the Arctic sea ice.

The stochastic forecast is not expected to be precise at particular values. For example, even sophisticated deterministic climate models still struggle to forecast actual weather at the medium-term scale. Our forecast technique is much lighter computationally, being based on a simple stochastic model; yet it reproduces the pattern of the series and takes into account typical seasonal variability from the past, which produces realistic forecast series. There exist regional climate models that achieve good forecast accuracy, but those usually implement assimilation of observational data, which cannot be done when attempting a longer-timescale forecast.

(Sea-ice data source: http://arctic.atmos.uiuc.edu/cryosphere/timeseries.anom.1979-2008)
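The three skill estimators used here (DRMS, Nash-Sutcliffe efficiency and percent bias) follow directly from their definitions; a minimal Python sketch (the function names are ours):

```python
import numpy as np

def drms(x_m, x_o):
    """Daily Root Mean Square error between modelled and observed series."""
    return np.sqrt(np.mean((x_m - x_o) ** 2))

def nash_sutcliffe(x_m, x_o):
    """Nash-Sutcliffe efficiency; equals 1 for a perfect model."""
    return 1.0 - np.sum((x_m - x_o) ** 2) / np.sum((x_o - np.mean(x_o)) ** 2)

def percent_bias(x_m, x_o):
    """Percent bias; equals 0 for an unbiased model."""
    return 100.0 * np.sum(x_m - x_o) / np.sum(x_o)
```

For two identical series, the three estimators return 0, 1 and 0, respectively, and any deviation from these values measures the pointwise mismatch between modelled and observed data.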
Arctic sea-ice dynamics have been a topic of recent scientific debate, and available estimates of when summer ice cover will disappear range from as early as 2016 to never; the early ice-loss estimate comes from extrapolation of PIOMAS data (Zhang et al., 2010; Wang and Overland, 2009; Boe et al., 2009). There is also an ongoing debate about whether tipping points could occur in the Arctic region (see Serreze, 2011; Tietsche et al., 2011; Wassmann and Lenton, 2012; Lenton, 2012, and references therein). Several studies argue that sea-ice loss is highly reversible and therefore not a tipping point (Tietsche et al., 2011). Others suggest that reversible tipping points can occur and summer sea-ice loss may be one candidate (Lenton et al., 2008). Here we restrict ourselves to the question of when summer ice cover is forecast to disappear in the Arctic.

We propose a stochastic forecast based on the above methodology, without taking into account complex feedbacks (which may reverse the declining dynamics as well as speed it up). Assuming that the present dynamics continues its gradual development, and taking into account what we already know about this time series, we forecast it using potential analysis.

In Fig. 7, we show a long-term forecast of the Arctic sea ice, combining our knowledge about seasonality and the current trends in the data. From the point of forecast (which is chosen in 2008 rather than most recently, in order to assess the accuracy of the forecast over the recently observed data), we combine the last 5-year seasonal average, the extrapolated quadratic trend, and the forecast of the fluctuations as described above. In the inset we show that between 2008 and 2010 the forecast is surprisingly good and very close to the observed data; the later departure is mainly due to the very anomalous summers of the recent few years.

The forecast indicates complete loss of Arctic summer sea-ice area by the 2030s. Obviously, the method provides an estimate of sea-ice loss in the Arctic Ocean only if the system does not experience new feedbacks that are not included in our approach. It is therefore a conservative estimate based solely on extrapolation of the current trends.
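The composition of the long-term forecast described above (a repeating seasonal cycle, an extrapolated quadratic trend, and the stochastically forecast fluctuations) can be sketched as follows; the function name and arguments are illustrative, not the authors' code.

```python
import numpy as np

def compose_forecast(seasonal_cycle, trend_coeffs, fluct_forecast, t_future):
    """Sum the three forecast components at the future time indices t_future:
    - seasonal_cycle: one period of the average seasonal cycle,
      repeated periodically into the future;
    - trend_coeffs: polynomial trend coefficients (highest power first),
      e.g. a quadratic fit extrapolated beyond the observations;
    - fluct_forecast: the stochastically forecast fluctuation series."""
    season = seasonal_cycle[t_future % len(seasonal_cycle)]
    trend = np.polyval(trend_coeffs, t_future)
    return season + trend + fluct_forecast
```

Each component is produced independently (seasonal averaging, trend fitting and the pdf-extrapolation forecast of the deseasonalised, detrended fluctuations) and they are simply added at the end.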
We have developed a forecasting technique based on potential analysis with dynamical propagation of the probability density. As the main idea, we employ approximations of the empirical probability distributions and extrapolate them to the future probability distribution of the data. We conducted experiments with artificial, model and observed climatic data and demonstrated the efficient forecast performance of the new methodology.

As one particular example, we applied the method to the expected sea-ice trend in the Arctic Ocean. The dynamics of Arctic sea ice was forecast, and summer sea ice was shown to disappear in the 2030s if the current trends remain the same. This can be compared with fully dynamical models.

As a logical next step, bootstrap resampling (Section 2.4) can help to construct prediction confidence intervals based on percentiles or standard errors (Mudelsee, 2010).
Acknowledgement.
The idea of the paper was developed during VNL's visit to the Alfred Wegener Institute for Polar Research, Germany (February 2012). VNL and TML were supported by NERC through the project 'Detecting and classifying bifurcations in the climate system' (NE/F005474/1) and by the AXA Research Fund through a postdoctoral fellowship. MM is supported by the European Commission as PI in the Marie Curie Initial Training Network LINC (project number 289447) under the Seventh Framework Programme. The Central England Temperature dataset was provided by the British Atmospheric Data Centre (BADC), which is part of the NERC National Centre for Atmospheric Science (NCAS). The research was carried out on the High Performance Computing Cluster supported by the Research Computing Service at the University of East Anglia.
References
Priestley M. B. (1981), Spectral Analysis and Time Series, Academic Press, London, 890 pp.
Tong H. (1990), Non-linear Time Series, Clarendon Press, Oxford, 564 pp.
Box G. E. P., G. M. Jenkins, and G. C. Reinsel (2008), Time Series Analysis: Forecasting and Control, 4th edition, Wiley.
Mudelsee M. (2010), Climate Time Series Analysis: Classical Statistical and Bootstrap Methods, Springer, 474 pp.
Donner R. V. and S. M. Barbosa (Eds.) (2008), Nonlinear Time Series Analysis in the Geosciences: Applications in Climatology, Geodynamics and Solar-Terrestrial Physics, Springer, Berlin, 390 pp.
Hasselmann K. (1976), Stochastic climate models: Theory, Tellus XXVIII(6), 473–484.
Chan K.-S. and H. Tong (2001), Chaos: A Statistical Perspective, Springer, New York, 300 pp.
Peng C.-K., S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger (1994), Mosaic organization of DNA nucleotides, Phys. Rev. E, 1685–1689.
Podobnik B. and H. E. Stanley (2008), Detrended cross-correlation analysis: A new method for analyzing two non-stationary time series, Phys. Rev. Lett., 084102.
Laloux L., P. Cizeau, J.-P. Bouchaud, and M. Potters (1999), Noise dressing of financial correlation matrices, Phys. Rev. Lett., 1467.
Plerou V., P. Gopikrishnan, B. Rosenow, L. A. N. Amaral, and H. E. Stanley (1999), Universal and nonuniversal properties of cross correlations in financial time series, Phys. Rev. Lett., 1471.
Koscielny-Bunde E., A. Bunde, S. Havlin, and Y. Goldreich (1996), Analysis of daily temperature fluctuations, Physica A, 393–396.
Fraedrich K. and R. Blender (2003), Scaling of atmosphere and ocean temperature correlations in observations and climate models, Phys. Rev. Lett. 90, 108501, doi:10.1103/PhysRevLett.90.108501.
Livina V. N. and T. M. Lenton (2007), A modified method for detecting incipient bifurcations in a dynamical system, Geophys. Res. Lett. 34(3), L03712.
Livina V. N., F. Kwasniok, and T. M. Lenton (2010), Potential analysis reveals changing number of climate states during the last 60 kyr, Climate of the Past, 77–82.
Livina V. N., F. Kwasniok, G. Lohmann, J. W. Kantelhardt, and T. M. Lenton (2011), Changing climate states and stability: from Pliocene to present, Climate Dynamics 37(11–12), 2437–2453, doi:10.1007/s00382-010-0980-2.
Lenton T. M. et al. (2008), Tipping elements in the Earth's climate system, Proc. Natl. Acad. Sci. USA 105(6), 1786–1793, doi:10.1073/pnas.0705414105.
Held H. and T. Kleinen (2004), Detection of climate system bifurcations by degenerate fingerprinting, Geophys. Res. Lett. 31, L23207.
Gardiner C. W. (2004), Handbook of Stochastic Methods, 3rd edition, Springer, 415 pp.
Neagoe V.-E. (1990), Chebyshev nonuniform sampling cascaded with the discrete cosine transform for optimum interpolation, IEEE Transactions on Acoustics, Speech and Signal Processing 38(10), 1812–1815.
Gilks W. R. and P. Wild (1992), Adaptive rejection sampling for Gibbs sampling, Applied Statistics (Journal of the Royal Statistical Society Series C) 41(2), 337–348.
Sherman M., F. M. Speed Jr., and F. M. Speed (1998), Analysis of tidal data via the blockwise bootstrap, Journal of Applied Statistics, 333–340.
Carlstein E. (1986), The use of subseries values for estimating the variance of a general statistic from a stationary sequence, The Annals of Statistics, 1171–1179.
Nash J. E. and J. V. Sutcliffe (1970), River flow forecasting through conceptual models, part I: A discussion of principles, Journal of Hydrology 10(3), 282–290.
Livina V., Z. Kizner, P. Braun, T. Molnar, A. Bunde, and S. Havlin (2007), Temporal scaling comparison of real hydrological data and model runoff records, Journal of Hydrology, 186–198.
Manley G. (1974), Central England temperatures: monthly means 1659 to 1973, Quarterly Journal of the Royal Meteorological Society, 389–405.
Parker D. E., T. P. Legg, and C. K. Folland (1992), A new daily Central England temperature series, 1772–1991, Int. J. Climatol., 317–342.
Livina V. N. and T. M. Lenton (2013), A recent bifurcation in the Arctic sea-ice cover, The Cryosphere 7, 275–286, doi:10.5194/tc-7-275-2013.
Zhang J., M. Steele, and A. Schweiger (2010), Arctic sea ice response to atmospheric forcings with varying levels of anthropogenic warming and climate variability, Geophys. Res. Lett., L20505.
Wang M. Y. and J. E. Overland (2009), A sea ice free summer Arctic within 30 years?, Geophys. Res. Lett.
Boe J. L., A. Hall, and X. Qu (2009), September sea-ice cover in the Arctic Ocean projected to vanish by 2100, Nature Geoscience, 341–343.
Serreze M. (2011), Climate change: Rethinking the sea-ice tipping point, Nature 471, 47–48, doi:10.1038/471047a.
Tietsche S., D. Notz, J. H. Jungclaus, and J. Marotzke (2011), Recovery mechanisms of Arctic summer sea ice, Geophys. Res. Lett., L02707.
Wassmann P. and T. M. Lenton (2012), Arctic tipping points in an Earth system perspective, Ambio 41(1), 1–9, doi:10.1007/s13280-011-0230-9.
Lenton T. M. (2012), Arctic climate tipping points, Ambio 41(1).