RESEARCH ARTICLE

Methods to Quantify Dislocation Behavior with Dark-field X-ray Microscopy Timescans of Single-Crystal Aluminum

Arnulfo Gonzalez* | Marylesa Howard | Sean Breckling | Leora E. Dresselhaus-Marais

Signal Processing and Applied Mathematics, Nevada National Security Site
Physical and Life Sciences, Lawrence Livermore National Laboratory

Correspondence: *Arnulfo Gonzalez. Email: [email protected]
Summary
Crystal defects play a large role in how materials respond to their surroundings, yet there are many uncertainties in how extended defects form, move, and interact deep beneath a material's surface. A newly developed imaging diagnostic, dark-field X-ray microscopy (DFXM), can now visualize the behavior of line defects, known as dislocations, in materials under varying conditions. DFXM visualizes dislocations by imaging the very subtle long-range distortions in the material's crystal lattice, which produce a characteristic adjoined pair of bright and dark regions. Full analysis of how these dislocations evolve can be used to refine material models; however, it requires quantitative characterization of the statistics of their shape, position, and motion. In this paper, we present a semi-automated approach to effectively isolate, track, and quantify the behavior of dislocations as composite objects. This analysis drives the statistical characterization of the defects, including, for example, dislocation velocity and orientation in the crystal, and is demonstrated on DFXM images measuring the evolution of defects at 98% of the melting temperature for single-crystal aluminum, collected at the European Synchrotron Radiation Facility.

KEYWORDS:
Image Processing, Feature Tracking, Dislocations, Materials Science
Dark-field X-ray microscopy (DFXM) is a new imaging technique that was developed over the last decade to image specific populations of distortions in materials' crystal lattices [23]. While a hypothetical "perfect" crystal would produce a DFXM image showing only a single flat field of intensity across the image, DFXM images of real materials reveal bright and dark objects that originate from imperfections (defects) in the crystal's lattice. For a sufficiently large crystal grain, a single unprocessed DFXM image shows individual defects along a given crystal plane (for crystals with a low density of defects). Classical DFXM studies have measured static materials by collecting a stack of images that resolve different distortion fields in a crystal, then reconstructing the full distortion field with post-processing; this has already demonstrated DFXM's ability to resolve important sub-surface defect information that is required to refine material models [11, 22]. Recent work has begun to use raw DFXM images to time-resolve the evolution of defects deep beneath a material's surface [4]. This work has already demonstrated the importance of quantifying the statistics of the defect-related features resolved by DFXM, which requires specialized methods.

A dislocation is a defect of particular importance to materials science, shown schematically in Figure 1(a). A dislocation is defined by its core, the line that truncates a single extra plane of atoms in the 3D lattice. Long-range displacement fields (comprised of strain and rotation) emanate from the dislocation core, spanning hundreds of nanometers to micrometers in some cases, and DFXM images resolve specific components of those subtle lattice distortions [19]. As a result, each dislocation appears as a single asymmetric object corresponding to a joined region of dark and light pixels, acting as a bandpass filter of the displacement gradient field, following a similar relation to the strain map depicted in Figure 1(b).

Recent experiments described in [4] measured time-resolved DFXM scans at the European Synchrotron Radiation Facility to study how populations of individual dislocations change as a function of time in high-temperature single-crystal aluminum. Each experiment collected timescans at a specific (constant) temperature in increments of 500 frames per scan, with the precise frame spacing Δ𝑡 calculated from the timestamps that were recorded for each frame. The first raw image in the timescan is shown in Figure 2 (left), with a zoomed-in region about a few dislocations of interest (right). The dislocations are seen as the joined bright/dark pairs and are the primary focus of the analysis workflow in this paper. While it might be conceivable to manually analyze a small set of dislocations from a single timescan movie, these experiments typically perform hundreds of runs and measure thousands to millions of DFXM images. An automated approach to statistically characterize defect behavior is, therefore, essential for rapid advancement in materials science.

Even with conventional imaging techniques, quantitative characterization of materials using image analysis is often challenging due to inherent limitations of experiments, for example, acquisition modalities and image-to-image variability from instrumental noise.

FIGURE 1
Schematic of a single edge dislocation showing (a) the configuration of the extra truncated plane of atoms that defines a dislocation's core, and (b) a plot showing the long-range strain field that emanates from the dislocation core. We note that the full displacement gradient field that defines DFXM's resolution includes contributions from both the strain and rotation tensors, as described in full in [19].

FIGURE 2
The first frame of a timescan experiment, showing the full set of dislocations in single-crystal aluminum (a) and zoomed in to a region of interest about active dislocations (b).
Statistical computations are further limited by the inability to perfectly replicate irreversible dynamic experiments due to stochastic parameters in the physics. Consequently, many specialized approaches have been developed to analyze imaging data in materials science, including multi-scale feature extraction, segmentation, texture analysis, and machine learning [12, 5, 16, 2, 3, 24]. While processing methods are useful to extract key image features and objects of interest (OOIs) from a temporal sequence of images, motion estimation methods are often necessary to precisely track OOIs and quantify their behavior over time [21]. In DFXM data, each dislocation is comprised of a single bright and dark region pair, the mathematical identification of which is complicated by a fluctuating background intensity profile. To interpret the physics captured by the motion and interactions of individual dislocations resolved in DFXM images, scientists require automated and robust analysis methods that can track the dislocation features in time and space. In this work, we develop an analysis approach to characterize the statistics of dislocation motion and interactions as a function of time, enabling materials science studies to interpret the physics in [4] and in future similar studies.

The novelty of this paper is in the effective combination of image processing and computer vision techniques to achieve semi-autonomous object location and tracking within a time sequence of images, capturing a direct statistical view of defects in the highly variable DFXM images. The complementary discussion of the quantitative physics and its interpretation is presented in [4]. We begin by extracting the five dislocation objects in Section 2 using a stationary wavelet transform (SWT) and binarization to identify the bright regions of each OOI, followed by a fast marching method (FMM) to segment the corresponding dark regions. The positions of dislocations are tracked in time with a Kalman filter approach and labeled with a Munkres assignment in Section 3. Each dislocation's motion and behavior are quantified at each time using four quantities of interest (QOIs): position, velocity, acceleration, and orientation, in Section 4. We define each of the dislocation object positions with centroids, whose (𝑥, 𝑦) positions are rotated approximately 45° to orient them along the directions we define as "glide" and "climb", respectively (corresponding to the physical mechanism of their motion in each direction [9]). The conclusion follows in Section 5. The analysis is completed using the Image Processing Toolbox in Matlab 2018b [18].

In this analysis, we manually restrict the full-frame images to capture a specific dislocation interaction event that is of interest to the material's physics (in this case, a lone dislocation inserts into an existing line of dislocations). The selected region of interest (ROI) corresponds to an area approximately 60 x 60 microns in size, given by 410 × 146 pixels. We analyze only the first 60 frames of the set of 500, as these frames correspond to the most active motion of the OOIs; for the subsequent 440 frames in this ROI, the motion of the OOIs is relatively static. Over our selected ROI, we resolve the 5 OOIs essential to the physics analysis as they rapidly change in size and direction of motion, and merge and diverge over time.
To locate the bright regions corresponding to each dislocation OOI in the images, we apply a discrete 2D SWT. The SWT executes a multilevel image decomposition by applying a series of convolutions with low-pass and high-pass decomposition filters, which are associated with a designated orthogonal or biorthogonal wavelet, to the original image array [17]. Specifically, we use a Daubechies-4 orthogonal wavelet. In this scheme, the original image array is set to an initial approximation coefficients array at a level 𝑗, which is subsequently used to produce the approximation at the next level (𝑗 + 1), as well as the detail coefficient arrays in three spatial orientations: horizontal, vertical, and diagonal. The coefficient arrays, also known as subbands, have dimensions 𝑚 × 𝑛 × 𝑁, where 𝑚, 𝑛 are the original image array dimensions and 𝑁 is equal to the number of levels. As an iterative process, each level produces coarser scales of the image frames. In this case, we applied a 3-level SWT, giving us a wavelet representation of our image consisting of four pixel arrays (i.e., the frame dimensions are preserved).

The SWT is commonly used for noise removal and/or feature extraction, where image arrays are represented by either individual or combined coefficient arrays [10, 13, 27]. For each frame of the image sequence, we specifically used the horizontal detail coefficients array of the 3rd level to substitute for our original image, as these arrays capture the bright regions of the dislocations with significantly greater intensity relative to the background and omit the noise captured by the remaining detail coefficients. For the first frame in the image series, the third level of the horizontal detail coefficients array is given in Figure 3(a).

In order to isolate the bright regions of each OOI in the wavelet-transformed frames (Figure 3(a)), we convert the SWT frames to binary images.
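As a dependency-free sketch of this decomposition, the following Python routine implements an undecimated (à trous) 2D SWT and returns the horizontal detail coefficients at a requested level. A two-tap Haar filter stands in for the paper's Daubechies-4 wavelet to keep the example short, and the row/column convention for "horizontal" detail is an assumption (conventions differ between libraries; `pywt.swt2` provides the db4 decomposition directly).

```python
import numpy as np

def haar_swt_horizontal(img, levels=3):
    """Undecimated (a trous) 2D SWT with a Haar filter, returning the
    horizontal-detail coefficient array at the requested level. The
    filter is dilated (rather than the image decimated) at each level,
    so the frame dimensions are preserved, as in the paper's 3-level SWT."""
    approx = np.asarray(img, dtype=float)
    detail_h = None
    for j in range(levels):
        step = 2 ** j  # filter dilation doubles at each level
        lo = lambda a, ax: (a + np.roll(a, step, axis=ax)) / np.sqrt(2.0)
        hi = lambda a, ax: (a - np.roll(a, step, axis=ax)) / np.sqrt(2.0)
        # horizontal detail: high-pass across columns, low-pass down rows
        # (assumed convention; some libraries swap the axes)
        detail_h = lo(hi(approx, 1), 0)
        approx = lo(lo(approx, 1), 0)
    return detail_h
```

A flat field returns an all-zero detail array, while sharp intensity steps (such as a dislocation's bright/dark pair) produce strong coefficients that persist to coarser levels.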
For each of the SWT-ROI frames, connected objects of 20 pixels or fewer are removed to further reduce image noise. A simple thresholding scheme was applied to binarize the frames. The array median was first subtracted from each of the frames, then the threshold value was set to 3.5 times the standard deviation of the frame (coefficients) array. The thresholding operates on individual wavelet coefficients, setting them to zero when falling below the threshold value:

𝐵_𝐶 = (𝐂 − 𝐶̃) > 3.5 𝜎_𝐶 , (1)

where 𝐵_𝐶 is the binary array, 𝐂 is the individual coefficients array, 𝐶̃ represents the array median, and 𝜎_𝐶 is the standard deviation of the array. The threshold scaling value was determined empirically to be the minimum value that allows us to capture the OOIs. Following the thresholding, we applied a morphological closing operation to fill in gaps that were found in the remaining connected objects, using a disk-shaped structuring element with a radius of 5 pixels. The chosen radius is sufficiently small to preserve the original size and shape of objects present in the frames. The resulting extracted OOIs from Frame 1 are overlaid on the corresponding raw image in Figure 3(b).

FIGURE 3
The 3rd level of the horizontal detail coefficients array in the stationary wavelet transform corresponding to the ROI of the first frame in the sequence (a), and the extracted OOIs resulting from binarization of (a) with morphological closing, plotted on the original frame (b).
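The thresholding and cleanup of Eq. (1) can be sketched in Python as follows. The 3.5σ scale, 20-pixel minimum object size, and 5-pixel disk radius are the values quoted in the text, while the exact order of operations (threshold, remove small objects, close) is one reasonable reading of the description.

```python
import numpy as np
from scipy import ndimage

def binarize_swt_frame(C, k=3.5, min_size=20, close_radius=5):
    """Threshold a wavelet-coefficient array per Eq. (1): keep pixels more
    than k standard deviations above the array median, drop connected
    objects of min_size pixels or fewer, then morphologically close with
    a disk-shaped structuring element."""
    B = (C - np.median(C)) > k * np.std(C)
    # remove small connected objects
    labels, n = ndimage.label(B)
    sizes = ndimage.sum(B, labels, index=np.arange(1, n + 1))
    for lab in np.flatnonzero(sizes <= min_size) + 1:
        B[labels == lab] = False
    # disk structuring element for the closing
    y, x = np.ogrid[-close_radius:close_radius + 1, -close_radius:close_radius + 1]
    disk = x**2 + y**2 <= close_radius**2
    return ndimage.binary_closing(B, structure=disk)
```
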
As displayed in Figure 3(a), the bright regions of each of the five dislocations can be effectively extracted using a 2D SWT. To capture the dark regions corresponding to each OOI, each frame of the timescan is segmented using the FMM. The FMM is an efficient numerical method to track the evolution of contours and can be adapted to segmentation by using image features (e.g., grayscale intensity difference) to solve the Eikonal equation [1, 6]. In particular, we use Matlab's imsegfmm function [18], which requires a weight array 𝐖, a set of seed points, and a set of threshold values to obtain the segmented OOIs from the estimated geodesic distance array.

The weight array 𝐖 consists of weights 𝑤_𝑖𝑗 for each of the pixels in the original ROI image. The value of each weight is inversely proportional to the intensity difference, 𝑑_𝑖𝑗, between each of the image's pixels and the average intensity value of a set of specified seed points (our reference value). That is,

𝑤_𝑖𝑗 = 1 / 𝑑_𝑖𝑗 , (2)

where 1 ≤ 𝑖 ≤ 410, 1 ≤ 𝑗 ≤ 146 (the size of the image ROI) and the maximum value of 𝑤_𝑖𝑗 is 1. In this scheme, relatively small weight values indicate background, while larger values indicate foreground.

The seed points for each frame of the timescan are determined as follows:
1. Extract the bright regions of the dislocations using the SWT, and calculate their centroids [14].
2. Starting at each of the five centroids, search the pixels in the immediate neighborhood (25 x 25) of the bright regions and record their intensity values.
3. For each neighborhood, find the minimum intensity value and set the corresponding pixel as the new seed point.

The south-easternmost OOI (see the right image in Figure 3) neighbors a dark boundary where the intensity values may be smaller than those found in the dislocation's dark region.
To avoid interference from the dark region at the bottom of the image for that particular case, we restrict the search in Step 2 to only the upper-right quadrant, relative to the centroid positions, for each of the five bright regions. It should be noted that in future applications of this method to other classes of OOIs, the search region and size may require modification.

The parameter required to threshold the geodesic distance array returned by the FMM is then input as a vector of values tuned to minimally capture each of the five OOIs. The threshold values are 0.001, 0.005, 0.01, 0.03, and 0.02, where threshold values of 0.01 and 0.02 correspond to intensity differences of approximately 32 and 7 on the RGB scale, respectively. As was performed for the bright-region OOIs, a morphological closing operation was applied using the same disk structuring element with a radius of 5 pixels. In this case, the morphological closing needed to be applied to each OOI individually to avoid erroneous merging. That is, each of the five OOIs was individually mapped to an array of zeros (with the same dimensions as the ROI) and closed; the union of the five arrays is the resulting extraction of the dark regions. The resulting dark regions are shown in Figure 4(a) for the first frame and are overlaid on the original ROI frame in (b).
FIGURE 4
The dark regions for each dislocation object, as extracted from Frame 1 using the FMM (a), and an overlay of the dark-region OOIs on the original ROI image (b).
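The three seed-selection steps above, including the upper-right-quadrant restriction, can be sketched as follows. The (row, column) coordinate convention and the clamping at image borders are assumptions of this sketch.

```python
import numpy as np

def fmm_seeds(img, centroids, half=12):
    """Pick one FMM seed per dislocation: the darkest pixel in the
    upper-right quadrant of a (2*half+1) x (2*half+1) neighborhood
    around each bright-region centroid. half=12 gives the 25 x 25
    search window used in the text; rows above the centroid and
    columns to its right form the upper-right quadrant in image
    coordinates (row indices increase downward)."""
    seeds = []
    for (r, c) in centroids:
        r0, r1 = max(r - half, 0), r + 1             # rows at/above centroid
        c0, c1 = c, min(c + half + 1, img.shape[1])  # columns at/right of centroid
        patch = img[r0:r1, c0:c1]
        dr, dc = np.unravel_index(np.argmin(patch), patch.shape)
        seeds.append((r0 + dr, c0 + dc))
    return seeds
```
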
After the five bright and dark regions were extracted over the entire sequence of frames, they were combined into the full OOIs by adding the frame arrays. Subsequently, a Gaussian filter with a standard deviation of 2 was applied to each frame to smooth sharp corners and remove small artifacts introduced by combining the objects [7]. The preceding steps produce a set of 60 binarized frames, where the OOIs have been isolated from the background. Using the binarized frames, the OOIs are then tracked according to their centroid positions. In Figure 5 we show the OOIs overlaid on Frames 10 (c) and 53 (d), as well as their corresponding unaltered frames ((a) and (b), respectively).
FIGURE 5
The original image ROI for Frames 10 and 53 are given in figures (a) and (b), respectively. The lone dislocation in Frame 10 has inserted into the dislocation boundary (line) by Frame 53. Using the workflow in Section 1, all five active dislocations are identified and segmented for Frames 10 and 53 in images (c) and (d), respectively.
Using the full sequence of binarized frames, the centroid positions of the OOIs are tracked over time by applying a Kalman filter (KF) to each of the 5 dislocations. Mathematically, the KF is an estimator used to make predictions followed by corrections for the states of linear processes, in a manner that minimizes the mean of the squared error [8, 25]. For our data, we apply a KF that assumes linear motion of the OOIs between frames, which is a reasonable approximation in this case, as the sampling rate is sufficiently fast compared to the velocity of each dislocation for this assumption to hold. Applying a kinematic model, the KF iteratively predicts the position and velocity of the dislocations in each frame 𝑖 using the kinematic equations [26, 20]:

𝑟_𝑖 = 𝑟_{𝑖−1} + 𝑣_{𝑖−1} 𝑡 + ½ 𝑎 𝑡² , (3)
𝑣_𝑖 = 𝑣_{𝑖−1} + 𝑎𝑡 , (4)

where acceleration is assumed to be constant, 𝑟 = (𝑥, 𝑦), and 𝑣 = (ẋ, ẏ). The position and velocity values are incorporated into the KF via the state vector 𝑋̄ and are predicted using the state dynamic equation:

𝑋̄_𝑖 = 𝐀 𝑋̄_{𝑖−1} + 𝐁 𝑢_{𝑖−1} + ℇ_𝐗 , (5)

or equivalently,

\begin{bmatrix} x_i \\ y_i \\ \dot{x}_i \\ \dot{y}_i \end{bmatrix} =
\begin{bmatrix} 1 & 0 & dt & 0 \\ 0 & 1 & 0 & dt \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_{i-1} \\ y_{i-1} \\ \dot{x}_{i-1} \\ \dot{y}_{i-1} \end{bmatrix} +
\begin{bmatrix} dt^2/2 \\ dt^2/2 \\ dt \\ dt \end{bmatrix} \cdot a + ℇ_𝐗 , (6)

where 𝐀 is the state transition matrix, 𝐁 is the control input matrix applied to the control vector 𝑢 (here the constant acceleration 𝑎), and ℇ_𝐗 is an error term expressing the variance in the behavior of the system. The time step is set to 𝑑𝑡 = 1, and the velocity and acceleration are set to initial values of zero. After applying these conditions, the resulting state measurement equation can be expressed as:

𝑍̄_𝑖 = 𝐇 𝑋̄_𝑖 + ℇ_𝐙 , (7)

or equivalently,

\begin{bmatrix} x_i \\ y_i \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x_i \\ y_i \\ \dot{x}_i \\ \dot{y}_i \end{bmatrix} +
\begin{bmatrix} \sigma_x \\ \sigma_y \end{bmatrix} , (8)

where 𝑍̄_𝑖 is the measurement in frame 𝑖, and 𝐇 is the observation matrix. ℇ_𝐙 is the measurement noise term, with the standard deviations 𝜎_𝑥 in the horizontal direction and 𝜎_𝑦 in the vertical direction both set to 5. For a complete description of the KF algorithm, see [8, 25].

FIGURE 6
Example of a merger, which occludes the bright region of one of the dislocations. In cases like the one shown here, the KF allows us to continue to track the centroids of the bright and dark regions.

Ideally, the KF would be applied directly to the dislocations as single composite objects (comprised of the bright and dark regions); however, in this experiment the behavior of dislocations causes them to sometimes merge (events we call "mergers") or occlude parts of the bright and dark regions in select frames. In these cases, the feature extraction methods applied as above (specifically, using the bright centroids to inform the FMM) fail because we do not have centroid values for all 5 OOIs over the entire 60 frames. Therefore, for frames in which the bright or dark regions cannot be detected (or are ambiguously detected), we substitute their respective centroid values with KF predictions to complete the object tracking. By using the KF predictions to substitute for measured centroid positions, seed locations and unambiguous labels can be maintained for all dislocations. Given the short duration of the image sequence, we define OOIs as objects that can be tracked for 3 observations, and remove their tracks if they are missing from 3 frames.
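A minimal NumPy implementation of the constant-velocity KF described by Eqs. (3)-(8) is sketched below. The dt = 1 frame step and the measurement standard deviation of 5 pixels follow the text; the process-noise magnitude `q` and the initial covariance are assumed tuning values not specified in the paper.

```python
import numpy as np

class CentroidKF:
    """Constant-velocity Kalman filter for one dislocation centroid.
    State is [x, y, vx, vy], per the state dynamic equation (6)."""
    def __init__(self, x0, y0, dt=1.0, sigma=5.0, q=1e-2):
        self.X = np.array([x0, y0, 0.0, 0.0])  # initial velocity = 0
        self.P = np.eye(4) * 10.0              # assumed initial covariance
        self.A = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # observation matrix
        self.Q = np.eye(4) * q                 # process noise (assumed)
        self.R = np.eye(2) * sigma**2          # measurement noise, sigma = 5 px

    def predict(self):
        """Propagate the state one frame; returns the predicted centroid."""
        self.X = self.A @ self.X
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.X[:2]

    def update(self, z):
        """Correct with a measured centroid z = (x, y)."""
        y = np.asarray(z, float) - self.H @ self.X   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.X = self.X + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.X[:2]
```

During a merger, the `update` step is simply skipped for the occluded region, and the predicted centroid from `predict` is used in its place.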
Once positions are identified for each OOI, the Munkres algorithm [15] is used to uniquely label the individual dislocations in each of the 60 frames. The initial frame is arbitrarily labeled as dislocations 1-5 based on the centroid positions, (𝑥, 𝑦), and the Munkres method labels all subsequent frames by assigning a dislocation based on the shortest distance between the detected OOI and the KF-predicted centroid positions. That is, for a given frame, we take the centroid position for each dislocation and calculate the Euclidean distance between it and every one of the centroid positions predicted by the KF for the composite and/or bright and dark regions in the subsequent frame, assigning the centroid position based on its shortest predicted distance. The algorithm is applied as follows:

𝐷_𝑥 = 𝑥ᵀ_{det.} − 𝑥_{pred.} , (9)
𝐷_𝑦 = 𝑦ᵀ_{det.} − 𝑦_{pred.} , (10)
𝐃 = ( 𝐷_𝑥^{.2} + 𝐷_𝑦^{.2} )^{1/2} , (11)

where the exponent .2 indicates element-wise squaring, and 𝐃 is a 𝑗 × 𝑘 array (where 𝑗, 𝑘 = 5 for the five dislocations) representing the Euclidean distances between the detected and predicted dislocation centroids in each frame. The resulting minimum distance is calculated as

min(𝐃) = [ min_𝑘(𝑑_{1,𝑘}), min_𝑘(𝑑_{2,𝑘}), min_𝑘(𝑑_{3,𝑘}), min_𝑘(𝑑_{4,𝑘}), min_𝑘(𝑑_{5,𝑘}) ] , (12)

where the detected dislocations are then ordered according to argmin(𝐃).

FIGURE 7
Labeled assignment for the dislocations and a visual representation of the glide and climb reference axes (a). The climb and glide positions over time for the 5 dislocations (b).
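The distance matrix of Eqs. (9)-(11) and the label assignment can be sketched with SciPy's Hungarian/Munkres solver. Note that `linear_sum_assignment` replaces the row-wise minimum of Eq. (12) with the globally optimal one-to-one matching, which is the standard Munkres formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_labels(detected, predicted):
    """Match detected centroids to KF-predicted track positions by
    minimizing total Euclidean distance. Returns, for each predicted
    track, the index of its assigned detected centroid."""
    det = np.asarray(detected, float)    # shape (j, 2)
    pred = np.asarray(predicted, float)  # shape (k, 2)
    Dx = det[:, 0][:, None] - pred[:, 0][None, :]  # Eq. (9), outer difference
    Dy = det[:, 1][:, None] - pred[:, 1][None, :]  # Eq. (10)
    D = np.sqrt(Dx**2 + Dy**2)                     # Eq. (11), j x k distances
    rows, cols = linear_sum_assignment(D)          # Munkres assignment
    order = np.empty(len(cols), dtype=int)
    order[cols] = rows                             # detected index per track
    return order
```
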
Following the extraction and tracking of OOIs over the entire sequence of frames, the physical behaviors of interest (position, velocity, and orientation) can be characterized. In this case, we describe the position of each dislocation based on its components along the climb and glide directions, corresponding to the different mechanisms by which the dislocations move in each direction [9]. The relative positions of the dark and bright regions for each dislocation are not consistent in each frame, as may be seen by comparing the dislocation objects in Figure 5(c) and (d). We define the "orientation" of each dislocation as the angle between the horizontal axis and the line connecting the centroids of the corresponding bright and dark regions, assuming positive angles are counter-clockwise.
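Under this definition, a dislocation's orientation can be computed from its bright- and dark-region centroids with a two-argument arctangent. The sign flip on the vertical component (image row indices increase downward) and the bright-to-dark direction of the line are assumptions of this sketch.

```python
import numpy as np

def orientation_deg(bright_xy, dark_xy):
    """CCW angle (degrees) between the horizontal image axis and the
    line from the bright centroid to the dark centroid. The vertical
    component is negated so counter-clockwise angles are positive in
    conventional (y-up) coordinates despite image rows growing downward."""
    dx = dark_xy[0] - bright_xy[0]
    dy = -(dark_xy[1] - bright_xy[1])
    return float(np.degrees(np.arctan2(dy, dx)))
```
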
Each dislocation moves via specific mechanisms that require different energy costs, or activation energies, in specific directions [9]. For this reason, physical analysis of the dislocation's motion requires that it be divided into position and velocity components along two specific directions, defined as the glide and climb directions, based on the orientation and physics of the crystalline sample. As shown by the white arrows in Figure 7(a), the glide and climb directions are equivalent to the x- and y-axes rotated by ≈ 45° (counter-clockwise), respectively. The five dislocations are labeled 1-5 for tracking purposes. We note that each pixel along the horizontal axis maps to 75 nm in the sample, while each pixel along the vertical axis maps to 205 nm in the sample; however, we specify the climb and glide directions based on their position in the sample.

The positions of dislocations 1-5, decomposed into their components along the glide and climb directions, are plotted as a function of time in Figure 7(b). The positions were decomposed into their components along the glide and climb directions using dot-product operators. As Figure 7 shows, the climb motion deviates only slightly for most of the dislocations over time, with the primary changes occurring as interactions between dislocations 1 and 3 from 4 to 8 seconds. The glide motion, however, increases for each of the dislocations, most notably for dislocation 1, up until the insertion process is completed at frame 30 (𝑡 ≈ 8 s). The corresponding velocities along the glide and climb directions, and the absolute velocity, are plotted as a function of time in Figure 8. For dislocations 2 and 3, the maximum (negative) velocity occurs at approximately the same time.

As we previously defined, the orientation of each dislocation is the angle between the line connecting the bright and dark region centroids and the horizontal axis, in the counter-clockwise direction. Figure 9(a) shows the line and corresponding angle for the dislocation orientation in Frame 1 of the sequence.

FIGURE 8
Dislocation velocity for the glide and climb directions (a), and absolute velocity (b), plotted as a function of time for the first 60 frames.

FIGURE 9
Frame 1 is shown with a thick white line connecting the red dots that mark the measured bright and dark centroids, indicating the orientation angle for Dislocation 1 (a). The orientation angles corresponding to all five dislocations are plotted over the entire sequence, providing a figure of merit for the amount of interaction between neighboring dislocations (b).

We track the orientation angle of all five dislocations through time in Figure 9(b), demonstrating the changes to interactions between the distortion fields surrounding each dislocation. When two dislocations are sufficiently close to each other, their surrounding displacement fields can add either constructively or destructively; in our case, the orientation angle captures changes in the interactions between adjacent dislocations. We observe a relatively sharp contrast between the orientation values for each of the dislocations early in the sequence, particularly when comparing the progressions of dislocations 2 and 4. While the strong dislocation interactions (described fully in [4]) cause significant variation in the orientation angles, after dislocation 1 inserts into the array, at approximately 𝑡 = 8 seconds, the orientations remain fairly stable and much more similar to one another.

To resolve correlations between the orientation changes of each pair of dislocations, we show a heatmap of the global Pearson correlation coefficients for dislocation orientation in Figure 10(a). This representation shows relatively weak relationships between the orientations of the dislocations, with the exception of dislocation 4, which anomalously has a negative correlation with each of the other dislocations. The global correlation coefficient values between dislocations 3 and 4 (-0.87), and 4 and 5 (-0.79), qualify as strong correlations.
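The glide/climb decomposition and finite-difference velocities can be sketched as follows. The anisotropic pixel calibration (75 nm horizontally, 205 nm vertically) is applied before the dot-product projection; the exact rotation angle and axis sign conventions are approximations of the ≈45° rotation described above.

```python
import numpy as np

def to_glide_climb(xy_px, theta_deg=45.0, px_x_nm=75.0, px_y_nm=205.0):
    """Convert centroid trajectories from pixel (x, y) to glide/climb
    coordinates: scale by the anisotropic pixel sizes, then project
    onto axes rotated theta_deg counter-clockwise (dot products with
    the rotated unit vectors). Returns (glide, climb) arrays in nm."""
    xy = np.asarray(xy_px, float)
    phys = xy * np.array([px_x_nm, px_y_nm])       # pixels -> nm
    t = np.deg2rad(theta_deg)
    glide_axis = np.array([np.cos(t), np.sin(t)])  # rotated unit vectors
    climb_axis = np.array([-np.sin(t), np.cos(t)])
    return phys @ glide_axis, phys @ climb_axis

def velocity(pos_nm, dt_s):
    """Frame-to-frame velocity (nm/s) by finite differences."""
    return np.diff(pos_nm) / dt_s
```
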
As such, to further test these relationships, we calculate the rolling correlation coefficient values for the stated pairs and plot them in Figure 10(b). Notably, between 3 and 4 seconds, the dislocations are almost perfectly negatively correlated: as the orientation angle of dislocation 3 decreases from approximately 90 to 80 degrees, that of dislocation 4 increases from approximately 70 to 80 degrees.

FIGURE 10
A heatmap displaying the global Pearson correlation coefficients for dislocation orientation (a), and the rolling correlation coefficients for orientation measured between dislocations 3 & 4 and 4 & 5 (b).
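The rolling Pearson correlation used for Figure 10(b) can be sketched directly with NumPy; the window length is an analysis choice not specified in the excerpt.

```python
import numpy as np

def rolling_corr(a, b, window):
    """Rolling Pearson correlation between two orientation-angle series,
    computed over a sliding window of `window` frames."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    out = np.full(len(a) - window + 1, np.nan)
    for i in range(len(out)):
        wa, wb = a[i:i + window], b[i:i + window]
        out[i] = np.corrcoef(wa, wb)[0, 1]
    return out
```
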
Dark-field X-ray microscopy (DFXM) is a novel imaging diagnostic that allows materials scientists to resolve the structural behavior of crystal lattices at the mesoscale. New developments in DFXM have enabled it to visualize the behavior of dislocations over time. Interpreting the resulting image data requires quantified information about defect behavior to supplement physics models and our understanding of behavior in different materials and environments. The approach presented here demonstrates our ability to identify and locate dislocations in the images and to track them over a sequence of images collected over time. Using the information we capture about each dislocation, we demonstrate the progressions of the dislocations in position, velocity, and orientation as the dislocations interact, providing important opportunities to connect DFXM data to the materials science with statistical sampling.

Our approach combines several signal and image processing techniques, and its efficacy is demonstrated by application to a timescan DFXM video data set of single-crystal aluminum, collected at the European Synchrotron Radiation Facility. Beginning with a 2D stationary wavelet transform, we extract the bright regions of each dislocation by representing the original DFXM timescan frames as 3rd-level horizontal detail coefficient arrays. Using the centroid locations of the extracted bright regions, we initiate segmentation of the dislocations' dark regions via a seeded fast marching method. This approach allows us to effectively track dislocations both as composite objects and as distinct bright and dark regions. We highlight the importance of tracking these dislocations as split regions, noting that their orientation is defined according to this treatment. Tracking the dislocations allows us to quantify their motion and interactions. By applying a Kalman filter, we track the position of each dislocation, even in cases when dislocations merge or occlude one another. Following tracking, we can quantify the behavior of the defects using position and velocity along the climb and glide directions, as well as orientation versus time. While the time resolution of the data allowed us to successfully apply Kalman filters based on our assumption of nearly linear motion between frames, we note that in future work we expect to incorporate the underlying physics of dislocation motion with more robust non-linear motion estimation models.
ACKNOWLEDGMENTS

We wish to thank Colin Ophus at Lawrence Berkeley National Laboratory for his insight and helpful discussions about this work. This manuscript has been authored in part by Mission Support and Test Services, LLC, under Contract No. DE-NA0003624 with the U.S. Department of Energy, and supported by the Site-Directed Research and Development Program, U.S. Department of Energy, National Nuclear Security Administration. The United States Government retains, and the publisher, by accepting the article for publication, acknowledges that the United States Government retains, a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The U.S. Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the United States Government. DOE/NV/03624--0796. Contributions from LEDM were performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and the Lawrence Fellowship.
References

[1] Adalsteinsson, D. and J. Sethian, 1995: A fast level set method for propagating interfaces. Journal of Computational Physics, 269–277.
[2] Bunge, H.-J., 1982: Texture Analysis in Materials Science. Butterworth.
[3] DeCost, B. and E. Holm, 2015: A computer vision approach for automated analysis and classification of microstructural image data. Computational Materials Science, 126–133, doi:10.1016/j.commatsci.2015.08.011.
[4] Dresselhaus-Marais, L. E., G. Winther, M. Howard, A. Gonzalez, S. Breckling, C. Yildirim, P. K. Cook, M. Kutsal, L. Zeppeda-Ruiz, A. Samanta, C. Detlefs, J. H. Eggert, H. Simons, and H. F. Poulsen, 2020: In-situ visualization of long-range dislocation interactions in the bulk. Submitted.
[5] Fonseca, J., C. O'Sullivan, and M. Coop, 2009: Image segmentation techniques for granular materials. AIP Conference Proceedings, 223–226, doi:10.1063/1.3179898.
[6] Forcadel, N. and C. Gout, 2008: Generalized fast marching method: Applications to image segmentation. Numerical Algorithms, 189–211, doi:10.1007/s11075-008-9183-x.
[7] Getreuer, P., 2013: A survey of Gaussian convolution algorithms. Image Processing On Line, 286–310, doi:10.5201/ipol.2013.87.
[8] Hargrave, P., 1989: A tutorial introduction to Kalman filtering. IEE Colloquium on Kalman Filters: Introduction, Applications and Future Developments.
[9] Hirth, J. P. and J. Lothe, 1992: Theory of Dislocations. Krieger Pub Co.
[10] Alwan, I., 2012: Color image denoising using stationary wavelet transform and adaptive Wiener filter. Al-Khwarizmi Engineering Journal, 18–26.
[11] Jakobsen, A. C., H. Simons, W. Ludwig, C. Yildirim, H. Leemreize, C. Detlefs, and H. F. Poulsen, 2019: Mapping of individual dislocations with dark-field x-ray microscopy. J. Appl. Cryst., 122–132, doi:10.1107/S1600576718017302.
[12] Jiang, Z. and C. Zhang, 2010: Wavelets-based feature extraction for texture classification. Advanced Materials Research, 1273–1276.
[13] Jumah, A. A., 2013: Denoising of an image using discrete stationary wavelet transform and various thresholding techniques. Journal of Signal and Information Processing, 33–41, doi:10.4236/jsip.2013.41004.
[14] Kaiser, M. and T. Morin, 1993: Algorithms for computing centroids. Computers & Operations Research, 151–161, doi:10.1016/0305-0548(93)90071-P.
[15] Kuhn, H., 1955: The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 83–97.
[16] Kumar, A., P. Mandal, Y. Zhang, and S. Litster, 2015: Image segmentation of nanoscale Zernike phase contrast x-ray computed tomography images. Journal of Applied Physics, doi:10.1063/1.4919835.
[17] Mallat, S., 1989: A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[18] MATLAB, 2018: version 10.10.5 (R2018b). The MathWorks Inc.
[19] Poulsen, H. F., L. E. Dresselhaus-Marais, G. Winther, and C. Detlefs, 2020: Forward modelling of dark-field x-ray microscopy. Submitted.
[20] Sahbani, B., 2016: Kalman filter and iterative-Hungarian algorithm implementation for low complexity point tracking as part of fast multiple object tracking system. IEEE 6th International Conference on System Engineering and Technology (ICSET).
[21] Simmons, J., L. Drummy, C. Bouman, and M. D. Graef, 2019: Statistical Methods for Materials Science: The Data Science of Microstructure Characterization. doi:10.1201/9781315121062.
[22] Simons, H., A. B. Haugen, A. C. Jakobsen, S. Schmidt, F. Stöhr, M. Majkut, C. Detlefs, J. E. Daniels, D. Damjanovic, and H. F. Poulsen, 2018: Long-range symmetry breaking in embedded ferroelectrics. Nature Materials, 814–819.
[23] Simons, H., A. King, W. Ludwig, C. Detlefs, W. Pantleon, S. Schmidt, F. Stöhr, I. Snigireva, A. Snigirev, and H. F. Poulsen, 2015: Dark field x-ray microscopy for multiscale structural characterization. Nat. Commun., 6098, doi:10.1038/ncomms7098.
[24] Wei, J., X. Chu, X. Sun, K. Xu, H. Deng, J. Chen, Z. Wei, and M. Lei, 2019: Machine learning in materials science. InfoMat, 338–358, doi:10.1002/inf2.12028.
[25] Welch, G. and G. Bishop, 2006: An introduction to the Kalman filter.
[26] Weng, Kuo, and Tu, 2006: Video object tracking using adaptive Kalman filter. Journal of Visual Communication and Image Representation, 1190–1208, doi:10.1016/j.jvcir.2006.03.004.
[27] Zhang, Y., S. Wang, Y. Huo, and L. Wu, 2010: Feature extraction of brain MRI by stationary wavelet transform and its applications. Journal of Biological Systems, 18.