Single atom force measurements: mapping potential energy landscapes via electron beam induced single atom dynamics
Ondrej Dyck, Feng Bao, Maxim Ziatdinov, Ali Yousefzadi Nobakht, Seungha Shin, Kody Law, Artem Maksov, Bobby G. Sumpter, Richard Archibald, Stephen Jesse, Sergei V. Kalinin
The Institute for Functional Imaging of Materials, Oak Ridge National Laboratory, Oak Ridge, TN 37831
The Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37831
Department of Mathematics, The University of Tennessee at Chattanooga, Chattanooga, TN 37403
Department of Mechanical, Aerospace, and Biomedical Engineering, The University of Tennessee, Knoxville, TN 37996
Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831
School of Mathematics, University of Manchester, Manchester, UK
Bredesen Center for Interdisciplinary Research and Education, The University of Tennessee, Knoxville, TN 37996
In the last decade, the atomically focused beam of a scanning transmission electron microscope (STEM) was shown to induce a broad set of transformations of material structure, opening pathways for probing atomic-scale reactions and atom-by-atom matter assembly. However, the mechanisms of beam-induced transformations remain largely unknown, due to an extreme mismatch between the energy and time scales of electron passage through solids and those of atomic and molecular motion. Here, we demonstrate that a single dopant Si atom in the graphene lattice can be used as an atomic-scale force sensor, providing information on the random force exerted by the beam on chemically relevant time scales. Using stochastic reconstruction of molecular dynamics simulations, we recover the potential energy landscape of the atom and use it to determine the beam-induced effects in the thermal (i.e. white noise) approximation. We further demonstrate that the moving atom under beam excitation can be used to map potential energy along step edges, providing information about atomic-scale potentials in solids. These studies open the pathway for quantitative studies of beam-induced atomic dynamics, elementary mechanisms of solid-state transformations, and predictive atom-by-atom fabrication.
Controlling and assembling matter on the atomic scale has remained one of the central foci of modern science, dating back to the renowned talk by Richard Feynman in 1960. The scanning tunneling microscopy (STM) based atomic assembly demonstrated by Don Eigler thirty years later was the first realization of this concept and, in synergy with advanced surface science methods, has opened the pathway for single-atom fabrication, for the first time enabling fabrication of solid-state qubits, the enabling element of the quantum computer, with atomic precision. In the last few years, the requirements of the quantum computing race and the growing need for beyond-Moore technologies have made atom-based fabrication one of the central targets for basic and applied research, necessitating both optimization of existing paradigms and development of new ones for manipulating matter atom by atom. Atomic-scale manipulation is inseparable from visualization of matter at the atomic level, and in fact STM was originally introduced as a purely imaging tool. The alternative pathway for atomically resolved imaging is electron-beam-based (scanning) transmission electron microscopy ((S)TEM). Following the introduction of aberration correction, atomically resolved imaging has become routine. Notably, the advancement of STEM has resulted in multiple observations of beam-induced changes in matter on the atomic level, where beam-induced transformations can be visualized before, during, and after the process. For example, many studies of graphene have noted a variety of defect transformations under e-beam irradiation.
Beam-induced transformations between graphene and foreign atomic species have also been studied, as have beam-induced transformations in other material systems.
Following these observations, it was proposed that the synergy of electron beam control and real-time feedback can be used to devise electron-beam-driven assembly of matter. Recent developments along this line of investigation have begun to establish the physics behind in situ beam-induced atomic processes [32, 42-45], as well as demonstrate various atomic-scale manipulation schemes, explore bonding and atomic-scale electronic structure [34, 35, 52-54], and model theoretical structures which may be within the grasp of these emerging capabilities [55, 56].
The progress in this field necessitates understanding of the fundamental mechanisms of beam-induced changes in solids on the atomic level. Notably, the theory of elastic and inelastic scattering in STEM is well developed, driven by the requirements of STEM image simulation and prediction of electron energy-loss spectroscopy (EELS). However, analysis of beam-induced damage in solids represents a considerably more complicated problem, traditionally analyzed on the level of direct elastic scattering on nuclei (knock-on) or two-temperature models. A number of treatments of high-energy processes, starting from the formation of non-equilibrium hot electron and hole carriers, thermalization of the electronic subsystem, and energy transfer to the ionic subsystem, are also available. However, the extreme mismatch between the time and energy scales of the electron beam (40-300 keV, and ~attoseconds for the electron residence time in the atomic volume) and those of chemical transformations (~1 eV) and atomic motion renders the problem extremely complex for forward prediction. Here, we demonstrate that the observation of electron-beam-induced displacements of a single Si impurity atom confined at a defined lattice site in a 2D graphene lattice can be used as a sensitive probe of the energy transfer from the electron beam to the lattice. In this manner, we utilize the impurity as a single-atom force sensor. We develop a stochastic reconstruction method that allows extraction of the free energy landscape from molecular dynamics (MD) simulation of the system and, with known energy landscape, determination of the excitation exerted by the beam in the thermal (i.e. assuming that the force is white-noise-like) approximation. We further demonstrate that the observations of the moving atoms allow reconstruction of the free energy landscape along step edges. Figure 1 shows an example HAADF-STEM image of a single Si substitutional defect in a graphene lattice.
It is well known that the contrast produced by this imaging mode is proportional to the atomic number Z of the atom [63, 64].
The decrease in intensity as the beam is moved away from the atom is due to the diminished intensity of the probe tails. Thus, the atom acts as a delta function convolved with the probe profile. On close inspection of the image, however, we note that the Si atom exhibits a streaky and broken intensity profile. It may be tempting to ascribe this behavior to the (often observed) emission tip noise. Yet, if this were so, such streaky and broken profiles would also be observed on the neighboring C atoms. Furthermore, we observed multiple examples of streaky atoms at adjacent lattice sites (i.e. “dimer” structures), with clear alignment of dark and bright streaks by a single lattice translation vector. Lacking another explanation, and comparing with similar dynamics often observed in scanning tunneling microscopy (STM) experiments, we conclude that the observation is not microscope instability but results from the Si atom moving slightly toward and away from the beam, which acts to modulate the observed intensity. Depending on the exact beam parameters, this beam-induced displacement can be confined to a single lattice site or result in the atom jumping between two adjacent lattice sites (a process which can give rise to the “dimer” or “cut” atoms). Based on this supposition, we can use the line-to-line variations in the image to approximate the dopant atom motion during imaging.
Figure 1: (a) HAADF image of a single Si atom in the graphene lattice. (b) Illustration of atomic instability. Note the presence of the “cut” atom in the lower part of the image. (c) Zoom-in illustrating the presence of the clearly visible streaks due to the wobbling motion of the atom during image acquisition. The scan is acquired from left to right (fast scan direction) and from top to bottom (slow scan direction). Images were artificially colored using the “Green Fire Blue” look-up table in ImageJ.
The unique aspect of STEM imaging is that the measured image can be, to a good approximation, represented as a convolution of the ideal image with the STEM point spread function. Correspondingly, the local image intensity and the position of the maximum along the scan line can be used to infer the position of the atom center with respect to the beam. Here, we analyze a single frame of a raster-scanned image of a single dopant atom in a graphene lattice to extract information on the average residence time that the atom spends at particular locations during imaging. To perform this analysis, it is assumed that the 2D intensity profile of the dopant atom is a radially symmetric Gaussian with a fixed height and width. Next, we fit the intensity profile of each scan line to a noise-floor-thresholded 1D Gaussian which has four fitting parameters: amplitude, the central position of the atom in the x-direction (x_center), width, and noise floor height. An initial fitting is run for each scan line with all four of the parameters free. From this, a max height (H_max), max width (w_max), and noise floor height are determined for the whole image. The line-by-line 1D Gaussian fit is run again, this time with the width and noise floor set to the global averages.
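Because the atom profile is assumed Gaussian, a fitted line amplitude can be inverted back to the atom's offset from the scan line; the inverse is double-valued. A minimal sketch with synthetic values, assuming the convention H_local = H_max·exp(−((y_center − y_local)/w_max)²):

```python
import math

def y_center_candidates(h_local, h_max, w_max, y_local):
    """Invert h_local = h_max * exp(-((y_center - y_local) / w_max)**2).

    The inverse is double-valued: the atom may sit above or below the
    current scan line, so both branches are returned.
    """
    dy = w_max * math.sqrt(math.log(h_max / h_local))
    return y_local - dy, y_local + dy

# Synthetic scan line: atom actually centered at y = 1.2, probe width 0.5
h_max, w_max, y_true, y_local = 1.0, 0.5, 1.2, 1.0
h_local = h_max * math.exp(-((y_true - y_local) / w_max) ** 2)

lo, hi = y_center_candidates(h_local, h_max, w_max, y_local)
print(lo, hi)  # one branch recovers the true center (approx. 0.8 and 1.2)
```

In the analysis, the correct branch would be chosen by continuity with neighboring scan lines, as discussed in the text.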
Because we have assumed a radially symmetric 2D Gaussian for the shape of the observed atom, we can combine information about the line-to-line fitted height (H_local), the overall max amplitude, the atom width, and the scan line position (y_local) to calculate the line-to-line variation in atom position in the y-direction (y_center):

y_center = y_local ± w_max √(ln(H_max / H_local))    (1)

Because the inverse function of the Gaussian is double-valued, some care is needed to make sure that the correct solution for y_center is chosen throughout. The final output of this portion of the analysis is an estimate of the central x, y position of the atom for each scan line, which can be used to yield a map of the likelihood of the atom residing at particular locations. Figure 2:
Extraction of the observed fluctuations of a single atom in an atomic image. (a) Observed atomic contrast, (b) sketch of the fitting procedure, and (c) derived atomic positions. Note the asymmetry of the probability distribution associated with the scanning motion of the beam.
This analysis yields information on the atomic position with respect to the beam while the beam scans across the atom, thus defining beam-induced dynamics at ~pm length scales with ~ms time resolution. However, to extract physical meaning and information on the effects induced by the beam, it is necessary to combine these data with models of the free energy landscape. In the first approximation, the latter can be estimated as the sum of the elastic responses of the individual Si-C bonds, and the force density experienced by the atom can be obtained via elementary arithmetic operations assuming the atomic jumps are uncorrelated. However, this approach is limited, since the carbon subsystem can also react to the Si atom motion on the time scale of the observed vibrations (and below). Furthermore, while STEM data provide information on the atomic motion in the image plane only, in a realistic material motion in the z-direction is also possible. To avoid these uncertainties, here we adopt a method based on the stochastic reconstruction of the free energy landscape from observed trajectories, assuming thermal-like excitation of the atom (see supplemental materials for details). In this approximation, we assume that the atom is excited by a white (i.e. having constant spectral density) uncorrelated force, which is equivalent to an effective temperature or deposition of energy, E, acting on the atom. This effect can in turn be represented as an effective temperature of the atom induced by the beam excitation. We implement this approach in two steps.
As a first step, we perform a molecular dynamics simulation of the Si-C system and reconstruct the free energy landscape from the calculated atomic displacements for a known excitation force. In the second step, we determine the unknown excitation force due to the beam-nucleus interaction from the experimentally observed atomic dynamics. The MD simulations were performed to simulate the Si-C atom response to a known force applied to a single dopant Si atom embedded in a monovacancy in a graphene lattice. For this purpose, a periodic force with an exponential distribution was applied to the Si atom and the potential energy variations were recorded during the simulations (Figure 3). For more details about the MD simulation methodology refer to the supplementary materials. This modeling yields (x, y, z) atomic positions as a function of time.
Figure 3: (a) Silicon atom configuration in the graphene sheet. (b) Probing force distribution applied to the Si atom. (c) e^(−V_z/E_z) and trajectory of z_t. (d) e^(−V_⊥/E_⊥) and trajectory of (x_t, y_t). The energy scale (color) in (c, d) is in eV.
To reconstruct the potential over the (x, y, z) space, we choose the model potential consistent with the site symmetry as V(x, y, z; t) = V_⊥(x, y) + V_z(z), where the following parametric ansätze are employed for the individual terms:

V_⊥(x, y) = a(x² + y²)[1 + b cos(4 tan⁻¹(x/y))],   V_z(z) = cz² + dz⁴,    (2a,b)

where t = (a, b, c, d) are parameters. Note that in the MD simulation all (x, y, z) coordinates are available during the modeling, unlike the STEM experiment, which is sensitive only to (x, y). For the femto-scale dynamics of the MD simulation, on the order of the decorrelation time of the underlying process, it makes sense to utilize dynamical information in the reconstruction. We leverage the sequential nature of the data and the dynamical information by using a sequential Monte Carlo sampling method to process the observed atomic trajectories and perform parameter estimation.
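For concreteness, the model potential of Eq. (2a,b) can be evaluated directly. This is a sketch under the assumption that the exponents lost in typesetting are the r² prefactor in V_⊥ and a quadratic-plus-quartic V_z; the parameter values below are illustrative, not fitted:

```python
import math

def model_potential(x, y, z, a, b, c, d):
    """V(x, y, z) = V_perp(x, y) + V_z(z), the parametric ansatz of Eq. (2a,b).

    V_perp carries a four-fold angular modulation consistent with the
    site symmetry; V_z is an even anharmonic well (assumed exponents).
    """
    v_perp = a * (x**2 + y**2) * (1.0 + b * math.cos(4.0 * math.atan2(x, y)))
    v_z = c * z**2 + d * z**4
    return v_perp + v_z

# The potential vanishes at the lattice site and is inversion-symmetric in-plane
print(model_potential(0.0, 0.0, 0.0, a=1.0, b=0.1, c=1.0, d=0.5))  # 0.0
```

Note that math.atan2(x, y) is used (arguments deliberately in that order) to match the tan⁻¹(x/y) convention of the ansatz.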
First the energy, E, is fit to the quadratic variation of the process. As mentioned above, it is assumed that the potential V has a specific form and is governed by a set of parameters t. It is furthermore assumed that the data arise as exact observations from a discretized first-order Langevin stochastic differential equation (SDE) in this potential, ζ dX_t = −∇V(X_t) dt + √(2ζE) dW_t, where X_t is the state, E = k_B T is the characteristic energy in units of eV, ζ is a fluctuation-dissipation constant with units of mass/time, and W_t is a standard Brownian motion with units of √s. This equation has invariant measure exp(−V/E)/Z, where Z is a normalizing constant. In this context we recover a Bayesian posterior distribution with a simple tractable form, and we can then implement a sequential Monte Carlo sampling framework by leveraging the sequential form of the posterior. We can further reduce the computational expense by introducing pseudo-dynamics on the parameters in the spirit of ref. 69 and implementing a standard particle filter. These results are presented for reconstruction of the posterior potential V given the MD simulation data. The observed atom positions from STEM are separated by much longer times than the time scale of the MD. Due to decorrelation in the stochastic model above, we therefore assume they comprise independent and identically distributed observations from exp(−V/E)/Z, which allows us to reconstruct E. Based on this analysis, we find E = 0.26 eV, corresponding to T = E/k_B ≈ 3017 K. Details about the Bayesian parameter estimation of the molecular potential can be found in the supplementary materials. The characteristic information on the effective parameters of electron-beam-induced atomic excitation can be used to map the free energy landscape experienced by the atom during more complex dynamic processes, e.g. atomic motion between adjacent lattice sites.
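The discretized Langevin model and the quadratic-variation fit for E can be illustrated in one dimension. This is a sketch in dimensionless units with a simple harmonic test potential (an assumption for illustration; the actual analysis uses the 3D ansatz of Eq. 2 and Bayesian estimation):

```python
import math
import random

def simulate_langevin(grad_v, x0, energy, zeta, dt, n_steps, rng):
    """Euler-Maruyama discretization of zeta*dX = -grad V(X) dt + sqrt(2*zeta*E) dW."""
    xs = [x0]
    noise = math.sqrt(2.0 * energy * dt / zeta)
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x - grad_v(x) * dt / zeta + noise * rng.gauss(0.0, 1.0))
    return xs

def fit_energy(xs, zeta, dt):
    """Fit E to the quadratic variation: sum((dX)^2) ~ 2*E*dt*N / zeta."""
    qv = sum((b - a) ** 2 for a, b in zip(xs, xs[1:]))
    return zeta * qv / (2.0 * dt * (len(xs) - 1))

rng = random.Random(0)
# Harmonic test well V = x^2 (so grad V = 2x), with E = 0.26 as in the text
xs = simulate_langevin(lambda x: 2.0 * x, 0.0, energy=0.26, zeta=1.0,
                       dt=1e-3, n_steps=20000, rng=rng)
print(fit_energy(xs, zeta=1.0, dt=1e-3))  # estimate of E, close to 0.26
```

Over a trajectory of N steps the drift contributes only O(dt²) to each squared increment, so the quadratic variation is dominated by the 2E dt/ζ diffusion term, which is what makes this estimator of E consistent.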
As an example of the application of this approach to the analysis of beam-induced transformations, we apply it to reconstruct free energies along a graphene step edge. In this example, a hole in one layer of bilayer graphene was found with two dopant atoms passivating the step edge. The difference in intensity between the two atoms indicates that they are of different atomic species, the brighter one having a higher atomic number. A series of images was acquired, exposing the system to 60 keV electron beam irradiation while simultaneously recording the positions of each atom.
Figure 4: Deep convolutional autoencoder based analysis of impurity trajectories in graphene. (a) Representative image frames of the experimental STEM “movie” with two atomic impurities. (b) Output of the convolutional de-noising autoencoder for the image frames shown in (a). Both the encoder and decoder parts of the autoencoder consisted of 3 convolutional layers, with 16 filters of size 3 px by 3 px in each layer, separated by a max pooling layer (encoder) and an un-pooling layer (decoder). (c) Plot of coordinates of atomic positions for the brighter and darker atoms extracted for each of the 248 movie frames. (d) Average hole contour overlaid onto the image representing a sum of the 248 movie frames. (e) Analysis of impurity atom trajectories along the edge of the hole, represented as a function of the coordinate along the edge (boundary index).
The high level of noise in the experimental series of images (hereafter, “movies”) of impurity motion (Fig. 4a) did not allow us to extract positions of the atomic species in each frame using standard image analysis tools (e.g., median filtering and thresholding). To alleviate this issue, we employed a deep convolutional de-noising autoencoder (CDAE) for processing the raw experimental data (Fig. 4b).
The CDAE [70, 71] is a powerful tool for reconstructing missing or corrupted data from images, including images where signal and noise can barely be differentiated by the human eye. We used a CDAE model in which both the encoder and decoder parts consist of three convolutional layers separated by a max pooling layer (encoder) and an un-pooling layer (decoder). The CDAE was trained on simulated STEM images corrupted with levels of noise comparable to those observed in the experiment. We then applied the trained CDAE to the real STEM “movies”, which allowed us to reconstruct the noisy experimental images (“movie” frames) into images with two well-defined atomic impurities of different intensities (“brighter” and “darker”) on a uniform background (Fig. 4b). Thus, with the help of the CDAE the problem has been reduced to the trivial problem of tracking the motion of two blobs of different intensities across the frames of the STEM “movie”. The extracted positions of the atomic impurities from all the “movie” frames are plotted in Fig. 4c. We then extracted an approximate contour of the hole edge (Fig. 4d), assuming the hole’s boundary does not undergo any drastic reconstructions during imaging, and plotted the impurity trajectory as a function of the coordinate along the edge (boundary index) for both the “brighter” and the “darker” atom (Fig. 4e). The impurity dynamics show the character of a telegraph process, in which there are long periods of stability with rapid “jumps” in between. As mentioned above, the timescale of microseconds is effectively macroscale with respect to the femtosecond dynamics and nanosecond pings, so the observations are considered to be independent and identically distributed (i.i.d.) realizations from the invariant distribution of the underlying femto-scale process.
Here we have the dynamics of two interacting processes, a dark atom S_d(t) and a bright atom S_b(t), which are assumed to evolve on the torus parametrized by s in [0, 1), in units of the circumference of the ring, which is estimated to be between 24 and 28 Angstroms. A non-parametric kernel density estimate is used to reconstruct an approximation of the invariant density p(s_d, s_b) = exp(−U(s_d, s_b))/Z. A Gaussian kernel is used for the non-parametric density reconstruction, with standard deviation ℓ = 0.05. Figure 5 shows the marginal reconstructed densities (on each of s_d and s_b) after the full 248 observations. The joint density, which describes the correlations between the processes, is presented in the supplementary material.
Figure 5: (a) Experimental slow-scanned image of the hole prior to the acquisition of the “movie”. Fourier filtering was applied to enhance the lattice periodicity. Note the clearly visible structure of bilayer graphene around the hole, the atomic structure of graphene in the hole, and the “blobs” formed due to the movement of atoms during imaging. (b) Potential energy landscape experienced by the atoms along the hole edge and schematic graphene configuration, (c) reconstructed marginal probability density of the dark atom, (d) reconstructed marginal probability density of the bright atom, where p ∝ e^(−U). Green crosses mark the origin and arrows show the direction of observations.
To summarize, here we demonstrate an approach to reconstruct the force exerted by the electron beam using a single impurity atom with a known free energy landscape as a force sensor. We approximate the electron beam effect on atomic dynamics via a Gaussian process, which is equivalent to an effective temperature approximation, and determine this parameter from the experimental observation of single-site atomic dynamics. We further use this approach to map the free energy landscape through which the atom travels and relate it to structure.
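The kernel density reconstruction on the torus and the corresponding effective potential U = −log p described above can be sketched as follows (synthetic positions; ℓ = 0.05 as in the text, while the grid size and the number of wrapped kernel images are illustrative choices):

```python
import math

def torus_kde(samples, ell=0.05, n_grid=200, n_wrap=3):
    """Gaussian KDE on the unit circle s in [0, 1): periodicity is enforced
    by summing kernel images shifted by whole periods."""
    norm = 1.0 / (len(samples) * ell * math.sqrt(2.0 * math.pi))
    grid = [i / n_grid for i in range(n_grid)]
    dens = []
    for s in grid:
        total = 0.0
        for x in samples:
            for k in range(-n_wrap, n_wrap + 1):
                total += math.exp(-0.5 * ((s - x + k) / ell) ** 2)
        dens.append(norm * total)
    return grid, dens

def effective_potential(dens, eps=1e-12):
    """U = -log p, defined up to the additive constant log Z."""
    return [-math.log(max(d, eps)) for d in dens]

# Toy trajectory clustered near s = 0.1 and straddling the wrap point s = 0
samples = [0.09, 0.10, 0.11, 0.97, 0.98, 0.02]
grid, dens = torus_kde(samples)
u = effective_potential(dens)
# The density integrates to ~1 over the torus; U is lowest where the atom dwells
print(round(sum(dens) / len(dens), 3))  # → 1.0
```

Wrapping the kernel over a few periods is what makes positions on either side of the s = 0 seam (e.g. 0.98 and 0.02) contribute to the same well, as they should on a closed edge.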
We note that future opportunities include extending this approach to non-Gaussian models of beam-induced atomic dynamics, where the frequency spectrum of the excitation force has non-trivial frequency dispersion. Furthermore, we envision the determination of the excitation parameters as a function of beam position relative to the nucleus, and mapping in the z-direction via a tilt series. Finally, of interest is the evolution of the effective beam parameters describing energy transfer to an atomic nucleus as a function of experimentally controlled parameters such as beam current and energy. Overall, this study establishes an approach for the description of experimentally observed atomic dynamics under the action of an electron beam, and hence is consistent with the description of the multitude of processes reported in the last several years, including crystallization of amorphous materials, elastic-plastic transitions, ferroelectric domain switching, phase transitions [77, 78], vacancy formation and dynamics, creation and inversion of molecular bonds, atomic motion, sculpting, and liquid electrochemistry. This will enable the e-beam to be used as a controllable (or at least understood) probe in beam-induced transformations, providing a new and powerful probe of the atomic world.
Acknowledgements:
Research was performed at the Center for Nanophase Materials Sciences, which is a US Department of Energy Office of Science User Facility. Experimental work was supported by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC for the U.S. Department of Energy (O.D., S.V.K., S.J.). This research was also sponsored by the Applied Mathematics Division of ASCR, DOE; in particular under the ACUMEN project (F.B., K.J.H.L., R.A.). A.M. acknowledges fellowship support from the UT/ORNL Bredesen Center for Interdisciplinary Research and Graduate Education.

References
1. R. P. Feynman, Caltech Engineering and Science (5), 22-36 (1960). 2. D. M. Eigler and E. K. Schweizer, Nature (6266), 524-526 (1990). 3. S. R. Schofield, N. J. Curson, M. Y. Simmons, F. J. Ruess, T. Hallam, L. Oberbeck and R. G. Clark, Phys. Rev. Lett. (13) (2003). 4. M. Fuechsle, J. A. Miwa, S. Mahapatra, H. Ryu, S. Lee, O. Warschkow, L. C. L. Hollenberg, G. Klimeck and M. Y. Simmons, Nat. Nanotechnol. (4), 242-246 (2012). 5. B. Weber, S. Mahapatra, H. Ryu, S. Lee, A. Fuhrer, T. C. G. Reusch, D. L. Thompson, W. C. T. Lee, G. Klimeck, L. C. L. Hollenberg and M. Y. Simmons, Science (6064), 64-67 (2012). 6. S. Jesse, A. Y. Borisevich, J. D. Fowlkes, A. R. Lupini, P. D. Rack, R. R. Unocic, B. G. Sumpter, S. V. Kalinin, A. Belianinov and O. S. Ovchinnikova, ACS Nano (6), 5600-5618 (2016). 7. N. Jiang, E. Zarkadoula, P. Narang, A. Maksov, I. Kravchenko, A. Borisevich, S. Jesse and S. V. Kalinin, MRS Bull. (9), 653-659 (2017). 8. S. V. Kalinin and S. J. Pennycook, MRS Bull. (9), 637-643 (2017). 9. R. Mishra, R. Ishikawa, A. R. Lupini and S. J. Pennycook, MRS Bull. (9), 644-652 (2017). 10. X. Zhao, J. Kotakoski, J. C. Meyer, E. Sutter, P. Sutter, A. V. Krasheninnikov, U. Kaiser and W. Zhou, MRS Bull. (9), 667-676 (2017). 11. A. W. Robertson, C. S. Allen, Y. A. Wu, K. He, J. Olivier, J. Neethling, A. I. Kirkland and J. H. Warner, Nature Communications , 1144 (2012). 12. S. Kurasch, J. Kotakoski, O. Lehtinen, V. Skákalová, J. Smet, C. E. Krill, A. V. Krasheninnikov and U. Kaiser, Nano Lett. (6), 3168-3173 (2012). 13. Ç. Ö. Girit, J. C. Meyer, R. Erni, M. D. Rossell, C. Kisielowski, L. Yang, C.-H. Park, M. F. Crommie, M. L. Cohen, S. G. Louie and A. Zettl, Science (5922), 1705 (2009). 14. J. C. Meyer, C. Kisielowski, R. Erni, M. D. Rossell, M. F. Crommie and A. Zettl, Nano Lett. (11), 3582-3586 (2008). 15. K. Suenaga, H. Wakabayashi, M. Koshino, Y. Sato, K. Urita and S. Iijima, Nature Nanotechnology , 358 (2007). 16. J. H. Warner, E. R. Margine, M. Mukai, A. W.
Robertson, F. Giustino and A. I. Kirkland, Science (6091), 209-212 (2012). 17. O. Lehtinen, S. Kurasch, A. V. Krasheninnikov and U. Kaiser, Nature Communications , 2098 (2013). 18. J. Kotakoski, C. Mangler and J. C. Meyer, Nature Communications , 3991 (2014). 19. J. Kotakoski, A. V. Krasheninnikov, U. Kaiser and J. C. Meyer, Phys. Rev. Lett. (10), 105505 (2011). 20. J. Kotakoski, J. C. Meyer, S. Kurasch, D. Santos-Cottin, U. Kaiser and A. V. Krasheninnikov, Phys. Rev. B (24), 245420 (2011). 21. A. W. Robertson, K. He, A. I. Kirkland and J. H. Warner, Nano Lett. (2), 908-914 (2014). 22. A. W. Robertson, G.-D. Lee, K. He, E. Yoon, A. I. Kirkland and J. H. Warner, Nano Lett. (3), 1634-1642 (2014). 23. A. W. Robertson, G.-D. Lee, K. He, Y. Fan, C. S. Allen, S. Lee, H. Kim, E. Yoon, H. Zheng, A. I. Kirkland and J. H. Warner, Nano Lett. (9), 5950-5955 (2015). 24. X. Wei, M.-S. Wang, Y. Bando and D. Golberg, ACS Nano (4), 2916-2922 (2011). 25. T. Susi, J. Kotakoski, R. Arenal, S. Kurasch, H. Jiang, V. Skakalova, O. Stephan, A. V. Krasheninnikov, E. I. Kauppinen, U. Kaiser and J. C. Meyer, ACS Nano (10), 8837-8846 (2012). 26. R. Zan, U. Bangert, Q. Ramasse and K. S. Novoselov, Nano Lett. (3), 1087-1092 (2011). 27. Q. M. Ramasse, R. Zan, U. Bangert, D. W. Boukhvalov, Y.-W. Son and K. S. Novoselov, ACS Nano (5), 4063-4071 (2012). 28. Z. He, K. He, A. W. Robertson, A. I. Kirkland, D. Kim, J. Ihm, E. Yoon, G.-D. Lee and J. H. Warner, Nano Lett. (7), 3766-3772 (2014). 29. A. W. Robertson, B. Montanari, K. He, J. Kim, C. S. Allen, Y. A. Wu, J. Olivier, J. Neethling, N. Harrison, A. I. Kirkland and J. H. Warner, Nano Lett. (4), 1468-1475 (2013). 30. Z. Yang, L. Yin, J. Lee, W. Ren, H.-M. Cheng, H. Ye, S. T. Pantelides, S. J. Pennycook and M. F. Chisholm, Angew. Chem. (34), 9054-9058 (2014). 31. J. Lee, W. Zhou, S. J. Pennycook, J.-C. Idrobo and S. T. Pantelides, , 1650 (2013). 32. T. Susi, J. Kotakoski, D. Kepaptsoglou, C. Mangler, T. C. Lovejoy, O. L. Krivanek, R. Zan, U.
Bangert, P. Ayala, J. C. Meyer and Q. Ramasse, Phys. Rev. Lett. (11), 115501 (2014). 33. Y.-C. Lin, P.-Y. Teng, C.-H. Yeh, M. Koshino, P.-W. Chiu and K. Suenaga, Nano Lett. (11), 7408-7413 (2015). 34. Q. M. Ramasse, C. R. Seabourne, D.-M. Kepaptsoglou, R. Zan, U. Bangert and A. J. Scott, Nano Lett. (10), 4989-4995 (2013). 35. D. Kepaptsoglou, T. P. Hardcastle, C. R. Seabourne, U. Bangert, R. Zan, J. A. Amani, H. Hofsäss, R. J. Nicholls, R. M. D. Brydson, A. J. Scott and Q. M. Ramasse, ACS Nano (11), 11398-11407 (2015). 36. I. Gonzalez-Martinez, A. Bachmatiuk, V. Bezugly, J. Kunstmann, T. Gemming, Z. Liu, G. Cuniberti and M. Rümmeli, Nanoscale (22), 11340-11362 (2016). 37. Z. W. Xu and A. H. W. Ngan †, Philos. Mag. Lett. (11), 719-728 (2004). 38. J. Nan, Rep. Prog. Phys. (1), 016501 (2016). 39. S. Dai, J. Zhao, L. Xie, Y. Cai, N. Wang and J. Zhu, Nano Lett. (5), 2379-2385 (2012). 40. A. V. Krasheninnikov and K. Nordlund, J. Appl. Phys. (7), 071301 (2010). 41. S. V. Kalinin, A. Borisevich and S. Jesse, Nature (7630), 485-487 (2016). 42. T. Susi, D. Kepaptsoglou, Y.-C. Lin, Q. M. Ramasse, J. C. Meyer, K. Suenaga and J. Kotakoski, 2D Materials (4), 042004 (2017). 43. T. Susi, J. C. Meyer and J. Kotakoski, Ultramicroscopy , 163-172 (2017). 44. J. C. Meyer, F. Eder, S. Kurasch, V. Skakalova, J. Kotakoski, H. J. Park, S. Roth, A. Chuvilin, S. Eyhusen, G. Benner, A. V. Krasheninnikov and U. Kaiser, Phys. Rev. Lett. (19), 196102 (2012). 45. J. Kotakoski, D. Santos-Cottin and A. V. Krasheninnikov, ACS Nano (1), 671-676 (2012). 46. S. Jesse, Q. He, A. R. Lupini, D. N. Leonard, M. P. Oxley, O. Ovchinnikov, R. R. Unocic, A. Tselev, M. Fuentes-Cabrera, B. G. Sumpter, S. J. Pennycook, S. V. Kalinin and A. Y. Borisevich, Small (44), 5895-5900 (2015). 47. J. Lin, O. Cretu, W. Zhou, K. Suenaga, D. Prasai, K. I. Bolotin, N. T. Cuong, M. Otani, S. Okada, A. R. Lupini, J.-C. Idrobo, D. Caudel, A. Burger, N. J. Ghimire, J. Yan, D. G. Mandrus, S. J. Pennycook and S. T. 
Pantelides, Nature Nanotechnology , 436 (2014). 48. S. Jesse, B. M. Hudak, E. Zarkadoula, J. Song, A. Maksov, M. Fuentes-Cabrera, P. Ganesh, I. Kravchenko, P. C. Snijders and A. R. Lupini, arXiv preprint arXiv:1711.05810 (2017). 49. O. Dyck, S. Kim, S. V. Kalinin and S. Jesse, Appl. Phys. Lett. (11), 113104 (2017). 50. O. Dyck, S. Kim, S. V. Kalinin and S. Jesse, arXiv preprint arXiv:1710.10338 (2017). 51. O. Dyck, S. Kim, E. Jimenez-Izal, A. N. Alexandrova, S. V. Kalinin and S. Jesse, ArXiv e-prints (2017). 52. T. P. Hardcastle, C. R. Seabourne, D. M. Kepaptsoglou, T. Susi, R. J. Nicholls, R. M. D. Brydson, A. J. Scott and Q. M. Ramasse, J. Phys.: Condens. Matter (22), 225303 (2017). 53. T. Susi, T. P. Hardcastle, H. Hofsäss, A. Mittelberger, T. J. Pennycook, C. Mangler, R. Drummond-Brydson, A. J. Scott, J. C. Meyer and J. Kotakoski, 2D Materials (2), 021013 (2017). 54. R. J. Nicholls, A. T. Murdock, J. Tsang, J. Britton, T. J. Pennycook, A. Koós, P. D. Nellist, N. Grobert and J. R. Yates, ACS Nano (8), 7145-7150 (2013). 55. D. Nosraty Alamdary, J. Kotakoski and T. Susi, physica status solidi (b) (11), 1700188 (2017). 56. A. Ramasubramaniam and D. Naveh, Phys. Rev. B (7), 075405 (2011). 57. E. J. Kirkland, Advanced computing in electron microscopy . (Springer Science & Business Media, 2010). 58. R. Egerton,
Electron energy-loss spectroscopy in the electron microscope . (Springer Science & Business Media, 2011). 59. R. F. Egerton, P. Li and M. Malac, Micron (6), 399-409 (2004). 60. R. F. Egerton, R. McLeod, F. Wang and M. Malac, Ultramicroscopy (8), 991-997 (2010). 61. R. F. Egerton, F. Wang and P. A. Crozier, Microsc. Microanal. (1), 65-71 (2005). 62. W. J. Weber, E. Zarkadoula, O. H. Pakarinen, R. Sachan, M. F. Chisholm, P. Liu, H. Z. Xue, K. Jin and Y. W. Zhang, Sci Rep (2015). 63. S. J. Pennycook, Ultramicroscopy (1), 58-69 (1989). 64. O. L. Krivanek, M. F. Chisholm, V. Nicolosi, T. J. Pennycook, G. J. Corbin, N. Dellby, M. F. Murfitt, C. S. Own, Z. S. Szilagyi, M. P. Oxley, S. T. Pantelides and S. J. Pennycook, Nature (7288), 571-574 (2010). 65. L. Jones and P. D. Nellist, Microsc. Microanal. (4), 1050-1060 (2013). 66. S. J. Pennycook and P. D. Nellist, Scanning transmission electron microscopy: imaging and analysis . (Springer Science & Business Media, 2011). 67. T. Schlick,
Molecular modeling and simulation: an interdisciplinary guide: an interdisciplinary guide . (Springer Science & Business Media, 2010). 68. P. Del Moral, A. Doucet and A. Jasra, Journal of the Royal Statistical Society: Series B (Statistical Methodology) (3), 411-436 (2006). 69. J. Liu and M. West, in Sequential Monte Carlo Methods in Practice , edited by A. Doucet, N. de Freitas and N. Gordon (Springer New York, New York, NY, 2001), pp. 197-223. 70. P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio and P.-A. Manzagol, J. Mach. Learn. Res. , 3371-3408 (2010). 71. L. Gondara, presented at the 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), 2016 (unpublished). 72. S. Jesse, He., Q, Lupinin, A.R., Leonard, D., Oxley, M.P., Ovchinnikov, O., Unocic, R.R., Tselev, A., Fuentes-Cabrera, M., Sumpter, B.G., Pennycook, S.J., Kalinin, S.V., and Borisevich, A.Y., Small, 1 (2015). 73. I. T. Bae, Y. W. Zhang, W. J. Weber, M. Higuchi and L. A. Giannuzzi, Applied Physics Letters (2) (2007). 74. Y. Zhang, J. Lian, C. M. Wang, W. Jiang, R. C. Ewing and W. J. Weber, Physical Review B (9) (2005). 75. S. Dai, J. Zhao, L. Xie, Y. Cai, N. Wang and J. Zhu, Nano Letters (5), 2379-2385 (2012). 76. J. L. Hart, S. Liu, A. C. Lang, A. Hubert, A. Zukauskas, C. Canalias, R. Beanland, A. M. Rappe, M. Arredondo and M. L. Taheri, Physical Review B (17) (2016). 77. F. Cao, H. Zheng, S. F. Jia, X. S. Bai, L. Li, H. P. Sheng, S. J. Wu, W. Han, M. M. Li, G. Y. Wen, J. Yu and J. B. Wang, J. Phys. Chem. C (38), 22244-22248 (2015). 78. H. M. Zheng, J. B. Rivest, T. A. Miller, B. Sadtler, A. Lindenberg, M. F. Toney, L. W. Wang, C. Kisielowski and A. P. Alivisatos, Science (6039), 206-209 (2011). 79. H. P. Komsa, J. Kotakoski, S. Kurasch, O. Lehtinen, U. Kaiser and A. V. Krasheninnikov, Phys. Rev. Lett. (3) (2012). 80. A. Markevich, S. Kurasch, O. Lehtinen, O. Reimer, X. L. Feng, K. Mullen, A. Turchanin, A. N. Khlobystov, U. Kaiser and E. Besley, Nanoscale (5), 2711-2719 (2016). 81. Z. 
Q. Yang, L. C. Yin, J. Lee, W. C. Ren, H. M. Cheng, H. Q. Ye, S. T. Pantelides, S. J. Pennycook and M. F. Chisholm, Angew. Chem.-Int. Edit. (34), 8908-8912 (2014). 82. R. Ishikawa, R. Mishra, A. R. Lupini, S. D. Findlay, T. Taniguchi, S. T. Pantelides and S. J. Pennycook, Phys. Rev. Lett. (15) (2014). 7 83. A. V. Krasheninnikov and K. Nordlund, Journal of Applied Physics (7) (2010). 84. R. R. Unocic, A. R. Lupini, A. Y. Borisevich, D. A. Cullen, S. V. Kalinin and S. Jesse, Nanoscale (34), 15581-15588 (2016). upplemental materials 1. MD Simulations
Periodic boundary conditions are applied in all directions. To avoid interactions between periodic images in the z direction, a large vacuum layer (50 nm) is included on both sides of the graphene sheet; here, x and y are the in-plane directions and z is the out-of-plane direction. A 12×12 nm graphene sheet containing a single Si atom in a monovacancy was used (Fig. S1). The adaptive intermolecular reactive empirical bond order (AIREBO) potential was used to model the bonded C-C interactions, and the Tersoff potential was employed for the Si-C interactions. The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) was used to carry out the molecular dynamics simulations; the total simulation time and time step size for all simulations were chosen to be 5.0 ns and 0.5 fs, respectively. Figure S1.
Silicon atom configuration in the graphene sheet. The initial structure was relaxed at the beginning of the simulation using an NPT (isothermal-isobaric) ensemble for 0.5 ns, followed by an NVT ensemble at a temperature of 300 K for another 0.5 ns to achieve a steady temperature (fluctuations < 3 K) for the entire structure. After the system reached a steady temperature, a periodic force was applied, with the form
$F = A e^{m}$. Here $F$ is the effective force applied to the Si atom in the $-z$ direction, $m$ is a randomly generated number, and $A$ is a factor defining the force magnitude. The force was applied as a short delta-function pulse with a duration of 100 fs, repeated every 2.5 ps. The time gap between successive force applications is long enough that the effect of the previous pulse is damped out by the system. Fig. S2 shows an example of the applied force distribution.

Figure S2: a) Force distribution applied to the Si atom. b) Potential energy of the Si atom with respect to time.

To calculate the potential, the position and potential energy of the Si atom were exported at every time step during the NVE ensemble. The potential energy variation with respect to time for the Si atom, under the influence of the force shown in Figure S2 a), is illustrated in Figure S2 b).

Bayesian parameter estimation of the molecular potential
A somewhat non-standard, ad hoc statistical model is introduced which simplifies the problem of static parameter estimation for stochastic differential equations (SDEs). In particular, discretization error is ignored and observations are assumed exact, which yields a tractable posterior distribution. A sequential Monte Carlo (SMC) sampler is then implemented for inference from the proposed posterior distribution.

a: Setup
Consider the following statistical model
$$\gamma_n(d\theta) = \prod_{i=1}^{n} p_\delta(x_i \mid x_{i-1}, \theta)\, p(d\theta),$$
where $p$ is a prior on some parameter $\theta \in \mathbb{R}^d$, and $p_\delta$ is a $\delta$ time-step Euler approximation of the following SDE
$$dX_t = -\nabla U(X_t; \theta)\,dt + \sqrt{2D}\,dW_t, \qquad (1)$$
where $D \in \mathbb{R}$ is a diffusion constant, $W_t$ is Brownian motion on $\mathbb{R}^p$, $X_t \in \mathbb{R}^p$, and the drift $-\nabla U(\cdot\,; \theta): \mathbb{R}^{p+d} \to \mathbb{R}^p$ is Lipschitz. That is,
$$p_\delta(x_i \mid x_{i-1}, \theta) \propto \exp\!\left(-\frac{|x_i - x_{i-1} + \nabla U(x_{i-1}; \theta)\,\delta|^2}{4D\delta}\right). \qquad (2)$$
Define the posterior density of $\theta \mid x_1, \dots, x_n$ by $\eta_n(d\theta) = \gamma_n(d\theta)/\gamma_n(1)$. The objective is to sample $\theta^{(i)} \sim \eta_J$ for $i = 1, \dots, N$, and to approximate expectations of bounded functions $\varphi: \mathbb{R}^d \to \mathbb{R}$,
$$\eta_n(\varphi) := \int_{\mathbb{R}^d} \varphi(\theta)\, \eta_n(d\theta),$$
by $\eta_n(\varphi) \approx \frac{1}{N}\sum_{i=1}^{N} \varphi(\theta^{(i)})$. We cannot sample from this distribution directly, but we can obtain a convergent estimator using SMC samplers, as described below.

b: Inferring $\theta$

Suppose for now that $D$ is known. It is well known how to estimate $D$ for this type of model, given $(x_1, \dots, x_J)$, which will be done ahead of time (see section c). Define
$$G_n(\theta) = \gamma_{n+1}(\theta)/\gamma_n(\theta) = p_\delta(x_{n+1} \mid x_n, \theta).$$
Let $M_n$ denote an MCMC kernel such that
$$(\eta_n M_n)(d\theta) := \int_{\mathbb{R}^d} \eta_n(d\theta')\, M_n(\theta', d\theta) = \eta_n(d\theta).$$
Let $\theta_0^{(i)} \sim p$. For $n = 1, \dots, J$, repeat the following steps for $i = 1, \dots, N$:
• Reweight. Define $\tilde{w}_n^i := G_{n-1}(\theta_{n-1}^{(i)})$ and $w_n^i = \tilde{w}_n^i / \sum_{j=1}^{N} \tilde{w}_n^j$.
• Resample. E.g., select $I_n^i \sim \{w_n^1, \dots, w_n^N\}$, and let $\hat{\theta}_n^{(i)} = \theta_{n-1}^{(I_n^i)}$.
• Mutate. Draw $\theta_n^{(i)} \sim M_n(\hat{\theta}_n^{(i)}, \cdot)$.
Define $\eta_n^N(\varphi) := \frac{1}{N}\sum_{i=1}^{N} \varphi(\theta_n^{(i)})$. Under mild conditions it is well known that as
$N \to \infty$, one has $\eta_n^N(\varphi) \to \eta_n(\varphi)$ almost surely. Rates, central limit theorems, and large deviations estimates can also be obtained.

c: Inferring $D$

Observe that $x_{n+1} - x_n \sim N(-\delta \nabla U(x_n; \theta), 2D\delta)$. It is clear then that the drift term is higher order in $\delta$ when computing an approximation of the quadratic variation of the limiting SDE,
$$\hat{Q} := \frac{1}{J+1} \sum_{n=0}^{J} (x_{n+1} - x_n)^2.$$
Indeed, this gives a very good approximation to $2D\delta$, and we define
$\hat{D} := \hat{Q}/(2\delta)$. As $\delta \to 0$, one has $\hat{D} \to D$. With a known temperature, i.e. a known value of $k_B T = E = \zeta D$, we will then be able to estimate the fluctuation-dissipation coefficient $\zeta$ by $\hat{\zeta} = E/\hat{D}$.

d: Filtering with pseudo-dynamics
An alternative approach to parameter inference is to introduce pseudo-dynamics on the parameter,
$$\theta_{n+1} \sim N(\theta_n, C_\theta),$$
and then solve the filtering problem for $\theta_n \mid x_1, \dots, x_n$ using a standard particle filter. We find that this approach can provide a reasonable approximate reconstruction with appropriately chosen $C_\theta \propto \delta$.

e: Inference at the "macroscale"

Here we consider data points separated by hundreds or thousands of decorrelation times. Note that under appropriate assumptions this implies $X_t \to \rho$ in distribution, where
$$\rho(x; \theta) \propto \exp(-U(x; \theta)/D). \qquad (3)$$
Indeed, one has convergence of ergodic averages
$$\frac{1}{T}\int_0^T \varphi(X_t)\,dt \to \mathbb{E}_\rho(\varphi). \qquad (4)$$
In this case the same methodology is applicable, except that equation (2) no longer makes sense: computing the transition density over such a large time step would require marginalization over all the intermediate steps, which is prohibitively expensive. However, in light of equation (4), it is reasonable to assume the data points are independent and identically distributed (i.i.d.) and to replace equation (2) with
$$p(x_i \mid x_{i-1}, \theta) \propto \exp\!\left(-\frac{U(x_i; \theta)}{D}\right).$$
In our case, the "macroscale" is, perhaps ironically, given by milliseconds or even microseconds, i.e. the scale on which experimental observations are made.

f: Other approaches
The other approaches considered here include non-parametric estimation, and a mixed approach consisting of joint non-parametric estimation with nonlinear regression. The non-parametric estimation proceeds by smoothing an empirical distribution through convolution with a Gaussian kernel
$$K(x; \ell) \propto \exp\!\left(-\frac{|x|^2}{2\ell^2}\right).$$
In this case, one constructs an empirical measure based on the data points $\{x_i\}_{i=1}^{N}$ as follows (there is an implicit i.i.d. assumption here),
$$\hat{\rho}_N := \frac{1}{N}\sum_{i=1}^{N} \delta_{x_i},$$
and then aims to reconstruct equation (3) by
$$\hat{\rho}_K(x) := (K \star \hat{\rho}_N)(x) = \int K(x - y)\, \hat{\rho}_N(y)\,dy = \frac{1}{N}\sum_{i=1}^{N} K(x - x_i; \ell).$$
From here, assuming that $K$ is normalized, one has an estimate for $U/D$ as well,
$$\hat{U}_K(x) := -\log \hat{\rho}_K(x). \qquad (5)$$
The second, mixed approach proceeds as follows. The estimate in equation (5) can be fit to a parametric form $U(\cdot\,; \theta)/D$ using nonlinear regression along a set of points $x \in \mathcal{X} := \{x_{\min}, x_{\min} + \Delta, \dots, x_{\max} - \Delta, x_{\max}\}$. That is, one minimizes
$$\Phi(\theta) := \sum_{x \in \mathcal{X}} \left| \hat{U}_K(x) - \frac{U(x; \theta)}{D} - \log\!\left(\int e^{-U(x; \theta)/D}\,dx\right) \right|^2,$$
and takes $\theta^* := \operatorname{argmin}\,\Phi$ as the estimator. The integral is approximated with a quadrature rule. This function could also be taken as a likelihood in a Bayesian framework. Naturally, in light of the discussion in section e, the framework presented in this section makes more sense for the "macroscale" data, i.e. those data which are observed in the experiments. The estimator (5) is used to approximate the effective joint potential for the interacting system of light and dark atoms from experimental observations $\{x_i\}_{i=1}^{N}$. In this case $x = (z_1, z_2)$, where $z_i \in [0,1)$, with $i = 1, 2$ corresponding to the light and dark atom positions, respectively.

g: Results

The long MD trajectory is now used to fit an effective Langevin dynamics. It is assumed that in this small-mass, long-time regime it is reasonable to ignore inertial effects and employ a first-order Langevin equation for fitting. First, the method of section c is employed to estimate the diffusion constant from the quadratic variation. The dynamics are observed to be smooth on a femtosecond timescale, and so a timestep of $\delta = 0.025$ ps is considered.
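The quadratic-variation estimate just described can be sketched in a few lines. The following is an illustrative toy, not the fitting code used here: it assumes a hypothetical one-dimensional harmonic well integrated with an Euler-Maruyama scheme, with the true diffusion constant set (arbitrarily, for the example) to the reported in-plane value of 0.16 Å²/ps and the timestep to 0.025 ps.

```python
import numpy as np

# Illustrative sketch of the quadratic-variation estimator of section c:
# Q_hat = mean squared increment, D_hat = Q_hat / (2 * delta).
rng = np.random.default_rng(0)

def simulate_overdamped(grad_U, D, delta, n_steps, x0=0.0):
    """Euler-Maruyama discretization of dX = -grad_U(X) dt + sqrt(2 D) dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        x[n + 1] = (x[n] - grad_U(x[n]) * delta
                    + rng.normal(0.0, np.sqrt(2.0 * D * delta)))
    return x

def estimate_D(x, delta):
    """D_hat = Q_hat / (2 delta); the drift enters only at higher order in delta."""
    q_hat = np.mean(np.diff(x) ** 2)
    return q_hat / (2.0 * delta)

# Hypothetical harmonic well U(x) = k x^2 / 2 (assumption for the example);
# delta = 0.025 ps as in section g, D_true matching the reported 0.16 A^2/ps.
k, D_true, delta = 1.0, 0.16, 0.025
x = simulate_overdamped(lambda y: k * y, D_true, delta, n_steps=200_000)
D_hat = estimate_D(x, delta)
print(f"true D = {D_true}, estimated D = {D_hat:.3f}")
```

The drift contributes only at $O(\delta^2)$ to the mean squared increment, so the estimator is biased only to higher order in $\delta$, consistent with the argument in section c.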
An additional difficulty arises here due to the deterministic time-dependent forcing in the $z$ direction, denoted by $F(t) = (0, 0, f(t))^T$, as well as the difference in effective diffusion coefficients in the $(x, y)$ and $z$ directions, denoted $D_\perp$ and $D_z$, respectively. The Langevin dynamics now take the form
$$dX_t = -(\nabla U(X_t) + F(t))\,dt + \sqrt{2\mathbf{D}}\,dW_t, \qquad (6)$$
where $X = (x, y, z)^T$ and $\mathbf{D} = \mathrm{diag}\{D_\perp, D_\perp, D_z\}$. Note that equations (6) and (1) are dimensionally balanced in units of Å and ps. In particular, (1) has been multiplied through by $\boldsymbol{\zeta}^{-1}$, the inverse of the fluctuation-dissipation coefficient, which is now an operator $\boldsymbol{\zeta} = \mathrm{diag}\{\zeta_\perp, \zeta_\perp, \zeta_z\}$. Following a procedure similar to the one described in section c, we can estimate $\zeta_\perp = E/D_\perp$ and $\zeta_z = E/D_z$ by substituting in $T = 273$ K, i.e. $E = 0.024$ eV. In particular, the estimates using the method of section c are $D_\perp = 0.16$ Å²/ps and $D_z = 0.55$ Å²/ps. Since $E = 0.024$ eV, this gives $\zeta_\perp = 0.15$ eV·ps/Å² and $\zeta_z = 0.044$ eV·ps/Å². Comparing (1) with (6), one can furthermore observe that $\nabla U(X_t) = \boldsymbol{\zeta}^{-1} \nabla V(X_t)$. It is assumed for simplicity that the dynamics admit an invariant measure, despite the time-dependent periodic forcing $F(t)$. It is furthermore assumed that the potential is separable as
$$U(x, y, z) = U_\perp(x, y) + U_z(z),$$
where the following parametric ansatz is employed for the individual terms:
$$U_\perp(x, y) = a(x^2 + y^2)\left[1 + b\cos\!\left(4\tan^{-1}(x/y)\right)\right], \qquad U_z(z) = cz^2 + dz,$$
and so the invariant measure is proportional to $\exp(-(U_\perp/D_\perp + U_z/D_z))$. The method of section b is used to fit $(a, b, c, d)$. Note that degeneracy may occur if $b$ is allowed to change sign, and ill-posedness will result from $a, c < 0$; so it is assumed that $a = e^{\theta_1}$, $b = e^{\theta_2}$, $c = e^{\theta_3}$, and $d = \theta_4$. A prior $\theta \sim N(0, C_0)$ is used, with $C_0 = \sigma^2 I$ and $\sigma^2 = 2$. The parameters $a\zeta_\perp$ and $c\zeta_z$ have units of eV/Å², $d\zeta_z$ has units of eV/Å, and $b$ is dimensionless. The resulting posterior after $n = 1141$ observations separated by $\delta = 0.025$ ps is presented in Figure S3. The posterior expectations of the parameters are $\mathbb{E}a = 13.7$, $\mathbb{E}b = 0.0125$, $\mathbb{E}c = 0.882$, and
$\mathbb{E}d = 0.479$. It is interesting to observe that $d \approx -\frac{1}{T}\int_0^T f(t)\,dt$. In the end, the posterior mean potential in eV is given by $V_\perp = U_\perp \zeta_\perp$ and $V_z = U_z \zeta_z$, i.e.
$$V_\perp(x, y) = 2.06(x^2 + y^2)\left[1 + 0.0125\cos\!\left(4\tan^{-1}(x/y)\right)\right], \qquad V_z(z) = 0.039z^2 + 0.021z.$$
These are plotted in Figures 3 c) and 3 d) with the observation trajectory overlaid. This potential can now be used together with data obtained from the observed fluctuations of a single atom, as presented in Figure 2, in order to infer the effective temperature. A non-parametric optimization approach similar to the one described in section f is used to fit the curvature at the mode, which yields $E = 0.26$ eV $\rightarrow T = 3017$ K. There is 10% relative error in the reconstruction, in the $L^2$ norm.

Figure S3. Marginal posterior densities of the pushforward of $\theta \mid x_1, \dots, x_n$ to $(a, b, c, d)$. The results are on a log scale and compared to some functional forms.

The joint potentials arising from the reconstructions of the density $p(s_d, s_b) = \exp(-V(s_d, s_b))/Z$, as described in the main text, after 1, 10, 100, and the full 248 observations are shown in Figure S4. The resulting potential is dimensionless. Figure S4.
Reconstructions of the potential in a torus-expanded coordinate system a) after 1, b) after 10, c) after 100, and d) after 248 observations. e) Surface plot of the final reconstruction.
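The kernel-smoothing reconstruction of section f, used for these joint potentials, can be sketched as follows. This is a minimal illustration under stated assumptions: i.i.d. samples are drawn from a known toy density $\rho \propto \exp(-x^2)$, so the recovered dimensionless potential should be close to $x^2$; the sample size and bandwidth $\ell$ are invented for the example, not taken from the experimental data.

```python
import numpy as np

# Illustrative sketch of the non-parametric estimator of section f: convolve
# the empirical measure with a Gaussian kernel K(x; l), then take
# U_hat_K(x) = -log rho_hat_K(x) as the dimensionless potential U/D.
rng = np.random.default_rng(1)

def kde(samples, grid, ell):
    """rho_hat_K(x) = (1/N) sum_i K(x - x_i; ell) with a normalized Gaussian K."""
    diffs = grid[:, None] - samples[None, :]
    kernels = np.exp(-0.5 * (diffs / ell) ** 2) / (ell * np.sqrt(2.0 * np.pi))
    return kernels.mean(axis=1)

def potential_estimate(samples, grid, ell):
    """Equation (5): U_hat_K = -log(K * rho_hat_N)."""
    return -np.log(kde(samples, grid, ell))

# Synthetic i.i.d. draws from rho ∝ exp(-x^2), i.e. U/D = x^2 (variance 1/2);
# bandwidth ell is hand-picked for this toy density.
samples = rng.normal(0.0, np.sqrt(0.5), size=50_000)
grid = np.linspace(-1.5, 1.5, 61)
u_hat = potential_estimate(samples, grid, ell=0.1)
u_hat -= u_hat.min()  # remove the arbitrary additive constant (log-normalization)
print(np.max(np.abs(u_hat - grid ** 2)))  # smoothing/sampling error of the well
```

The residual reflects both the kernel-smoothing bias (the convolution widens the well slightly) and the sampling noise in the tails, which is why the bandwidth must be chosen with the sample size in mind.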
References
1. S. J. Stuart, A. B. Tutein and J. A. Harrison, The Journal of Chemical Physics 112 (14), 6472-6486 (2000).
2. R. Devanathan, T. Diaz de la Rubia and W. J. Weber, J. Nucl. Mater. (1), 47-52 (1998).
3. S. Plimpton, J. Comput. Phys. 117 (1), 1-19 (1995).
4. P. Del Moral, A. Doucet and A. Jasra, J. Roy. Stat. Soc. Ser. B (Stat. Method.) 68 (3), 411-436 (2006).
5. P. Del Moral, in Feynman-Kac Formulae (Springer, 2004), pp. 47-93.
6. J. Liu and M. West, in Sequential Monte Carlo Methods in Practice, edited by A. Doucet, N. de Freitas and N. Gordon (Springer New York, New York, NY, 2001), pp. 197-223.
7. T. Schlick, Molecular Modeling and Simulation: An Interdisciplinary Guide (Springer Science & Business Media, 2010).