PREPRINT

CNN-Based Ultrasound Image Reconstruction for Ultrafast Displacement Tracking

Dimitris Perdios, Student Member, IEEE, Manuel Vonlanthen, Florian Martinez, Member, IEEE, Marcel Arditi, Senior Member, IEEE, and Jean-Philippe Thiran, Senior Member, IEEE
Abstract — Thanks to its capability of acquiring full-view frames at multiple kilohertz, ultrafast ultrasound imaging unlocked the analysis of rapidly changing physical phenomena in the human body, with pioneering applications such as ultrasensitive flow imaging in the cardiovascular system or shear-wave elastography. The accuracy achievable with these motion estimation techniques is strongly contingent upon two contradictory requirements: a high quality of consecutive frames and a high frame rate. Indeed, the image quality can usually be improved by increasing the number of steered ultrafast acquisitions, but at the expense of a reduced frame rate and possible motion artifacts. To achieve accurate motion estimation at uncompromised frame rates and immune to motion artifacts, the proposed approach relies on single ultrafast acquisitions to reconstruct high-quality frames and on only two consecutive frames to obtain 2-D displacement estimates. To this end, we deployed a convolutional neural network-based image reconstruction method combined with a speckle tracking algorithm based on cross-correlation. Numerical and in vivo experiments, conducted in the context of plane-wave imaging, demonstrate that the proposed approach is capable of estimating displacements in regions where the presence of side lobe and grating lobe artifacts prevents any displacement estimation with a state-of-the-art technique that relies on conventional delay-and-sum beamforming. The proposed approach may therefore unlock the full potential of ultrafast ultrasound, in applications such as ultrasensitive cardiovascular motion and flow analysis or shear-wave elastography.
Index Terms — Biomedical imaging, deep learning, diffraction artifacts, displacement estimation, image reconstruction, speckle tracking, ultrafast ultrasound imaging.
I. INTRODUCTION

ULTRAFAST ultrasound (US) imaging allows reconstructing full-view images from single acquisitions by insonifying the entire field of view at once, using unfocused transmit wavefronts such as plane waves (PWs) or diverging
This work was supported in part by the Swiss National Science Foundation under Grant 205320_175974 and Grant 206021_170758. (Dimitris Perdios and Manuel Vonlanthen contributed equally to this work.) (Corresponding author: Dimitris Perdios.)
D. Perdios, M. Vonlanthen, F. Martinez, M. Arditi, and J.-Ph. Thiran are with the Signal Processing Laboratory 5 (LTS5), École polytechnique fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland (email: dimitris.perdios@epfl.ch). J.-Ph. Thiran is also with the Department of Radiology, University Hospital Center (CHUV) and University of Lausanne (UNIL), 1011 Lausanne, Switzerland, and with the Center for Biomedical Imaging (CIBM), 1015 Lausanne, Switzerland.

waves (DWs) [1]. Ultrasound images are then reconstructed from the received echo signals using the well-known delay-and-sum (DAS) algorithm. Ultrafast US imaging thus breaks with the trade-off between field of view and frame rate inherent to conventional transmit-focused line-by-line scanning. This allows imaging large tissue regions at very high frame rates of multiple kilohertz, limited only by the round-trip propagation time of single acoustic waves. High frame rates are imperative for studying and analyzing rapidly changing physical phenomena inside the human body, such as highly complex motions occurring inside the cardiovascular system [2]–[5] or the propagation of shear waves through tissue [6]–[10]. Several breakthrough US imaging modes based on motion estimation within a large field of view rely on ultrafast US imaging, such as shear-wave elastography [6], ultrasensitive flow imaging [3], and functional US neuroimaging [11].

Because of the absence of transmit focusing, images obtained from ultrafast acquisitions are of low quality, suffering heavily from poor lateral resolution and low contrast [4], [7]–[9], [12], [13]. Both effects are related to the point spread function (PSF) of ultrafast US imaging systems, characterized by a broader main lobe (lower lateral resolution) and stronger diffraction artifacts (lower contrast) caused by side lobes (SLs), grating lobes (GLs), and edge waves (EWs), compared with conventional focused-US imaging systems.
Naturally, low-quality images also limit the accuracy of subsequent displacement estimation methods involved in ultrafast US imaging modes [5], [7], [9]. The state-of-the-art solution for increasing the quality of ultrafast US imaging is coherent compounding, where a series of low-quality images, reconstructed from multiple, differently steered, unfocused wavefronts, are coherently summed [7], [12]. In [7], an image quality surpassing state-of-the-art multi-focus imaging was obtained by compounding 71 PW acquisitions, increasing the frame rate by a factor of approximately seven. However, for analyzing motion at very high frame rates, coherent compounding suffers from two considerable disadvantages. Firstly, the increase in image quality is directly linked to the number of compounded acquisitions, which in turn is limited by the minimum frame rate necessary to analyze the underlying physical phenomenon of interest. Secondly, coherent compounding assumes, similarly to line-by-line scanning, that the region of interest is stationary for the duration of an acquisition sequence used to reconstruct a single frame. This assumption does not hold when imaging fast-moving tissue regions or complex flows, for which coherent compounding suffers from strong motion artifacts [13], [14].

The first issue is well exemplified in [7], in which Montaldo et al. demonstrated, in the context of shear-wave elastography, that the quality of estimated elasticity maps is directly linked to the number of compounded acquisitions, which in turn was limited to a maximum of twelve acquisitions to ensure a minimum frame rate of 1 kHz. In particular, displacement estimation in highly heterogeneous tissue regions, where the aforementioned diffraction artifacts were dominant, was a major obstacle.
Issues due to diffraction artifacts hindering accurate displacement estimates were reported for several methods, all of them suffering from the trade-off between image quality and frame rate [7], [9], [15].

The occurrence of severe motion artifacts when compounding multiple acquisitions of rapidly evolving physical phenomena (inter-frame displacement close to the effective wavelength) was discussed in [13], [14], [16], and motion compensation techniques were proposed to tackle this problem. They consist of estimating inter-acquisition displacements, using either conventional Doppler [14], [16] or 1-D correlation methods [13], and compensating for them before compounding all acquisitions to produce a motion-compensated high-quality image. However, these motion compensation techniques can also suffer from strong diffraction artifacts [13], as they are themselves based on displacement estimation from low-quality images, obtained from unfocused wavefronts. It thus remains unclear if such methods could help improve motion estimation in regions plagued by such artifacts.

Consequently, there exists a great need for a robust displacement estimation technique that does not rely on multiple acquisitions to reconstruct consecutive frames. This is of particular interest in extreme conditions, when analyzing rapidly evolving physical phenomena in zones with highly heterogeneous echogenicities.

In [17], we introduced a method for reconstructing high-quality US images from single unfocused acquisitions. It consists of a backprojection-based DAS operation followed by the application of a convolutional neural network (CNN), specifically trained to reduce the diffraction artifacts inherent to the deployed ultrafast US imaging setup. Strong artifact reduction was demonstrated in simulated, in vitro, and in vivo environments. The CNN-based image reconstruction method works strictly on a frame-by-frame basis and relies on the spatial information of each image only.
Hence, it is completely agnostic to the time dimension and thus to any displacement between consecutive frames, making it a perfect fit for combination with state-of-the-art image-based displacement estimation techniques. In a preliminary work [18], we showed that a CNN-based image reconstruction method may preserve the time-coherence of speckle patterns between consecutive frames, which is essential to any image-based displacement estimation technique.

In this work, we propose an approach for estimating 2-D inter-frame displacements at maximum frame rates, by combining our single-shot CNN-based image reconstruction method [17] with a state-of-the-art 2-D speckle tracking algorithm. Although estimating the axial displacement (only) remains the standard in US imaging, 2-D displacement estimation is increasingly gaining attention in both flow and tissue motion applications [5], [19], [20], as it allows the analysis of more complex motion patterns. In elastography, 2-D displacement maps may be of interest to increase the quality and robustness of the estimated elasticity maps [21]. Also, 2-D speckle tracking represents an optimal fit for high-frame-rate displacement estimation since, unlike vector Doppler techniques, it does not rely on multi-angle acquisitions. Moreover, displacement estimation can be performed accurately from two consecutive frames only, whereas Doppler-based techniques usually require multiple consecutive frames to estimate the phase accurately.

Since our aim is to tackle displacement estimation at maximum frame rates, the proposed approach relies only on single unfocused acquisitions to reconstruct consecutive frames and on two consecutive frames only to obtain 2-D displacement estimates. The primary goal of this work was to assess whether the diffraction artifact reduction and speckle restoration capabilities of our CNN-based image reconstruction method [17] could allow accurate estimation of displacements in zones initially shadowed by GL, SL, and EW artifacts.
This work was conducted in the context of PW imaging with a linear transducer array (Section II). The accuracy of the proposed approach was evaluated both in numerical and in in vivo experiments, and was compared with a state-of-the-art coherent plane-wave compounding (CPWC)-based displacement estimation approach (Section III). Results, implications, and limitations of the experiments carried out are analyzed and discussed in Sections IV and V, respectively. Concluding remarks are given in Section VI.
II. MATERIALS AND METHODS
A. Imaging Configurations
We considered a US acquisition system composed of a 9L-D transducer (GE Healthcare, Chicago, IL, USA) and a Vantage 256 system (Verasonics, Kirkland, WA, USA), identical to the one considered in [17]. Relevant imaging configuration parameters are summarized in Table I. The 9L-D is a 192-element linear transducer array with a center frequency of 5.3 MHz and a bandwidth of 75 % (at −6 dB). A typical speed of sound in soft tissue of 1540 m/s was assumed, resulting in an element spacing (i.e., pitch) of ∼0.79 λ at that frequency. Note that, as a result, images reconstructed with this transducer in the context of ultrafast imaging by conventional DAS algorithms will inevitably be contaminated by GL artifacts. All pulse-echo measurements were carried out by transmitting a single-cycle tri-state waveform of 67 % duty cycle centered at 5.208 MHz, with leading and trailing equalization pulses of quarter-cycle durations and opposite polarities. The received echo signals were sampled at 20.833 MHz, guaranteeing a Nyquist sampling rate up to a bandwidth of 200 %. To reconstruct images up to a depth of 60 mm, we considered a maximum pulse repetition frequency (PRF) of 9 kHz.

All image reconstruction methods considered in this study rely on PW acquisitions performed without transmit apodization. Single PW acquisitions with normal incidence were used for the proposed CNN-based image reconstruction method (Section II-B), and steered PW acquisitions were used for CPWC-based
comparison methods (Section II-C). For each transmit-receive event, echo signals were recorded on all transducer elements (i.e., full aperture).

TABLE I
SPECIFICATIONS OF THE IMAGING CONFIGURATIONS CONSIDERED

Parameter            Value
Center frequency     5.3 MHz
Bandwidth            75 %
Aperture             43.93 mm
Element number       192
Pitch                230 µm
Element width (a)    207 µm
Element height       6 mm
Elevation focus      28 mm
Transmit frequency   5.208 MHz
Excitation cycles    1 (b)

(a) Guessed (no official data available).
(b) Single excitation cycle with equalization pulses.
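As a quick numerical sanity check of the configuration above, the grating-lobe condition (pitch exceeding λ/2) and the depth-limited PRF bound can be verified from the Table I values. This is a standalone sketch using standard array-theory relations; the variable names are ours and not part of PyUS.

```python
# Sanity checks on the Table I configuration (values from the text).
c = 1540.0      # assumed speed of sound in soft tissue (m/s)
f_c = 5.3e6     # center frequency (Hz)
pitch = 230e-6  # element spacing (m)
depth = 60e-3   # maximum reconstruction depth (m)

wavelength = c / f_c                       # ~291 um at the center frequency
pitch_in_wavelengths = pitch / wavelength  # ~0.79 lambda

# Grating lobes are unavoidable in ultrafast (unfocused) imaging when the
# pitch exceeds lambda/2, which is the case here.
has_grating_lobes = pitch_in_wavelengths > 0.5

# The round-trip propagation time to the maximum depth bounds the pulse
# repetition frequency; the 9 kHz PRF used in the text respects this bound.
max_prf = c / (2.0 * depth)                # ~12.8 kHz

print(f"pitch = {pitch_in_wavelengths:.2f} lambda, "
      f"grating lobes: {has_grating_lobes}, "
      f"PRF bound = {max_prf / 1e3:.1f} kHz")
```

With these values the pitch comes out at ∼0.79 λ and the propagation-limited PRF at ∼12.8 kHz, consistent with the 9 kHz maximum PRF retained in the text.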
B. CNN-Based Image Reconstruction Method
To obtain high-quality images from single-shot unfocused acquisitions, we relied on our CNN-based image reconstruction method proposed in [17], briefly summarized hereafter.

The method consists of first reconstructing a (vectorized) low-quality estimate x̃ ∈ ℝⁿ from the (vectorized) transducer element measurements y ∈ ℝᵐ, obtained from a single unfocused insonification, by means of a backprojection-based DAS operator D: ℝᵐ → ℝⁿ as x̃ = Dy. The operator D is composed of the adjoint of a linear measurement model (backprojection) and a pixel-wise reweighing operator (image equalization). The measurement model is based on linear acoustics and is derived from the spatial impulse response (SIR) model [22], assuming a far-field approximation both for the transmitter (e.g., ideal wavefront) and the receiver (e.g., narrow transducer element), an ideal Dirac pulse-echo waveform, and neglecting tissue attenuation. Before summation, measurement values were interpolated using a B-spline approximation of order three [23]. Analytic (complex) images, also called in-phase quadrature (IQ) images, were reconstructed on a uniform (Cartesian) grid, with a width spanning the 9L-D aperture (Table I) and a depth from 1 mm to 60 mm. The image grid resolution was chosen to guarantee Nyquist sampling of the radio-frequency (RF) content of US images in both dimensions. The process was implemented with PyUS (https://gitlab.com/pyus/pyus), a graphics processing unit (GPU)-accelerated Python package for US imaging developed in our laboratory.

In a second step, the low-quality estimate x̃ is fed to a CNN f_θ: ℝⁿ → ℝⁿ, with parameters θ, trained to recover a high-quality estimate as x̂ = f_θ(x̃), with strongly reduced diffraction artifacts and well-preserved speckle patterns. The CNN architecture is based on the popular U-Net [24] and on [25], with several improvements such as the use of residual convolutional blocks (RCBs) and additive intrinsic skip connections [17]. It is a residual CNN with multi-scale and multi-channel filtering properties, composed of 2-D convolutional layers (CLs) and rectified linear units (ReLUs) arranged in symmetric downsampling and upsampling paths. As real-time displacement estimation was not a primary goal of this work, we used the best-performing CNN architecture analyzed in [17], with 32 initial expansion channels. The CNN was trained precisely as detailed in [17], namely in a supervised manner using a dataset composed of 30 000 simulated image pairs (i.e., input and ground truth). The well-known Adam optimizer [26] was used to minimize the mean signed logarithmic absolute error (MSLAE) loss, introduced in [17] to account for both the high dynamic range (HDR) and the RF property of US images. A total of 500 000 iterations were performed with a batch size of 2 and a learning rate of 5 × 10⁻⁵. The training dataset is composed of low-quality input images reconstructed from single PW acquisitions with normal incidence. High-quality reference images were reconstructed from the complete set of synthetic aperture (SA) acquisitions using a spatially-oversampled version of the transducer array to ensure the absence of GL artifacts (only possible in a simulation environment). To reconstruct both input and reference images, element raw data were simulated using an in-house 3-D SIR simulator, validated against the well-known Field II simulator [27]. Each numerical phantom was composed of random scatterers with a density that ensured fully-developed speckle patterns throughout the resulting images. The simulated images composing the training dataset are characterized by overlapping ellipsoidal zones of random size, position, and orientation, with mean echogenicities spanning an 80-dB range.

C. Comparative Image Reconstruction Methods
For the CPWC-based comparison methods, acquisitions to reconstruct consecutive frames consisted of sequential transmit-receive events of N_a differently steered PWs, fired at maximum PRF. The PW steering angle spacing was evaluated as [7], [13]

    Δβ = arcsin(λ/L) ≈ 0.39°,   (1)

where λ is the wavelength of the transmit excitation and L is the transducer aperture. We restricted ourselves to odd acquisition numbers; thus the linearly increasing sequence of steering angles can be expressed as

    β_n = n Δβ,  n = −M, −M + 1, …, 0, …, M − 1, M,   (2)

where M = (N_a − 1)/2. We deployed an alternate steering angle sequence (−β_M, β_M, −β_{M−1}, β_{M−1}, …, −β_1, β_1, 0), as proposed in [13].

In particular, we considered single PW acquisitions with normal incidence, used both with the proposed CNN-based image reconstruction method and with DAS beamforming, as well as sequences of 3, 9, 15, and 87 steered PW acquisitions used with DAS beamforming. Comparison DAS-based methods are denoted CPWC-1, CPWC-3, CPWC-9, CPWC-15, and CPWC-87. The parameters for each imaging acquisition sequence considered are summarized in Table II. CPWC-87 was used for reference purposes only, in settings not suffering from inter-acquisition motion artifacts. This reference number of acquisitions was computed following [7] as

    N_a^ref = L/(λF) ≈ 87,   (3)

with an F-number F = 1.7. The other comparison methods, namely CPWC-1 to CPWC-15, were selected to obtain a range of frame rates, namely from 9 kHz to 0.6 kHz, spanning typical values necessary for analyzing rapid events occurring in the human body.

Each PW acquisition was reconstructed using the DAS algorithm detailed in Section II-B. Coherent compounding of images reconstructed from steered acquisitions was realized by simple pixel-wise averaging. Note that as CPWC-1 only relies on single PW acquisitions, it is not a compounding method. Its designation was adopted to simplify the naming convention. Also, images obtained from CPWC-1 are identical to input images of the CNN-based image reconstruction (Section II-B), as the same DAS algorithm was deployed in both cases.

TABLE II
PLANE WAVE IMAGING ACQUISITION SEQUENCES CONSIDERED

Method     N_a     PRF      Frame Rate
CNN        1 (a)   9 kHz    9 kHz
CPWC-1     1 (a)   9 kHz    9 kHz
CPWC-3     3       9 kHz    3 kHz
CPWC-9     9       9 kHz    1 kHz
CPWC-15    15      9 kHz    0.6 kHz
CPWC-87    87      9 kHz    ~0.1 kHz

(a) Single PW with normal incidence.

D. Speckle Tracking Algorithm
The proposed speckle tracking algorithm is a block-matching algorithm based on normalized cross-correlation. It is heavily inspired by both the speckle tracking method described in [28], which won the challenge on synthetic aperture vector flow imaging (SA-VFI) organized during the IEEE International Ultrasonics Symposium (IUS) 2018 [29], and the PIVlab toolbox [30], a popular software for particle image velocimetry (PIV). Speckle tracking is fundamentally linked to PIV. However, instead of tracking particles to visualize flows, speckle tracking estimates displacements by tracking speckle patterns arising from interferences of scatterers separated by sub-resolution distances, assuming that these patterns are highly correlated between consecutive frames.

To estimate the 2-D displacement field between two consecutive frames S1 and S2, both frames were identically subdivided into overlapping interrogation windows. The most probable displacement that occurred between a pair of interrogation windows was obtained by finding the maximum value (peak) of the (2-D) zero-normalized cross-correlation (ZNCC). To achieve sub-pixel precision, we applied a 2-D Gaussian regression around the ZNCC peak, as proposed in [31]. In order to analyze complex displacements, including shear and rotation, this process was deployed in a coarse-to-fine multi-pass algorithm [30]. Between each pass, S2 was deformed (B-spline interpolation) using the estimated displacements to resemble S1 more closely. For the next pass, the displacements between S1 and the deformed S2 were estimated in a similar way. The remaining displacement estimates of each pass were accumulated, resulting in more accurate estimates after a few passes. After each pass, statistical outliers of the estimates were smoothed using the unsupervised smoothing algorithm described in [32].

Speckle tracking was performed on envelope images, obtained by computing the (pixel-wise) modulus of IQ images.
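The ZNCC peak search and the Gaussian sub-pixel regression at the core of the algorithm can be sketched as follows. This is a minimal, single-window illustration with helper names of our own; the actual implementation follows [28], [30], [31] with multi-pass deformation and outlier smoothing on top.

```python
import numpy as np

def zncc_map(win1, win2, max_shift):
    """Zero-normalized cross-correlation (ZNCC) of two interrogation
    windows, evaluated for all integer shifts in [-max_shift, max_shift]^2.
    Direct loop-based evaluation, for clarity only."""
    h, w = win1.shape
    out = np.full((2 * max_shift + 1, 2 * max_shift + 1), -1.0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            y0, y1 = max(0, -dy), min(h, h - dy)
            x0, x1 = max(0, -dx), min(w, w - dx)
            a = win1[y0:y1, x0:x1]
            b = win2[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom > 0.0:
                out[dy + max_shift, dx + max_shift] = (a * b).sum() / denom
    return out

def gaussian_subpixel(c_minus, c_zero, c_plus):
    """Three-point Gaussian regression around a correlation peak [31];
    returns the sub-pixel offset (correlation values must be positive)."""
    num = np.log(c_minus) - np.log(c_plus)
    den = 2.0 * np.log(c_minus) - 4.0 * np.log(c_zero) + 2.0 * np.log(c_plus)
    return num / den

# Toy example: the same speckle-like pattern displaced by (2, 1) pixels.
rng = np.random.default_rng(0)
img = rng.standard_normal((80, 80))
win1 = img[10:70, 10:70]
win2 = img[8:68, 9:69]  # identical pattern shifted by (+2, +1)
c = zncc_map(win1, win2, max_shift=4)
dy, dx = np.unravel_index(np.argmax(c), c.shape)
print(dy - 4, dx - 4)  # integer-pixel estimate: 2 1
```

In the actual algorithm, this search runs per interrogation window, the Gaussian regression is applied along both axes of the ZNCC peak, and the whole procedure is wrapped in the coarse-to-fine multi-pass loop with inter-pass image deformation and outlier smoothing described above.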
Envelope images were downsampled by a factor of two in the axial dimension, resulting in a uniformly spaced spatial grid. While applying normalized cross-correlation-based speckle tracking directly to RF signals may lead to a higher precision than using envelope signals [33], especially when analyzing very small displacements close to the Cramér-Rao lower bound [34], it is also much more prone to faulty displacement estimation because of speckle decorrelation [35, Sec. 14.2.1]. Speckle decorrelation increases when analyzing larger displacements, more complex displacement patterns with strong gradients (e.g., rotation), and tissue deformation [36], [37]. As our method was designed to be a robust displacement estimator over a wide range of displacements and flow patterns, envelope images were preferred for the purpose of speckle tracking. However, the method is easily adapted to work with RF images if the potential increase in precision for small displacements is of interest.

For adapting the speckle tracking parameters to the imaging configurations and displacement ranges considered, we cross-validated a wide range of different interrogation window sizes, numbers of passes, and window overlaps using a dedicated numerical test phantom, namely a rotating cylinder centered at the elevation focus of the transducer, equivalent to the ones deployed in the numerical experiment (Section III-A). Two different angular velocities were considered, resulting in the same inter-frame displacements considered in this work. Consecutive frames were generated by simulating high-quality images using CPWC-87 without rotating the cylinder between successive steered PW acquisitions (only achievable in a simulation environment). Interestingly, the speckle tracking parameters yielding the best overall displacement estimates in our settings were identical to the ones deployed in [28].
Thus, for all experiments conducted in this work, irrespective of the displacement range and frame rate under consideration, we deployed the proposed speckle tracking algorithm with four passes, square interrogation windows of 4 mm, 2.5 mm, 2 mm, and 1.5 mm, and a window overlap of 65 %.

E. Metrics
To evaluate the accuracy of displacement estimates throughout the experiments, we relied on the well-known endpoint error (EPE), a quality metric commonly used in flow estimation techniques [38], [39]. Considering a vector displacement estimate û ∈ ℝ² and its true counterpart u ∈ ℝ², the EPE can be expressed as

    EPE = ‖û − u‖₂,   (4)

where ‖·‖₂ represents the Euclidean norm. We also relied on a normalized version of the EPE, denoted relative endpoint error (REPE), which is expressed as

    REPE = ‖û − u‖₂ / ‖u‖₂.   (5)

III. EXPERIMENTS
We conducted two experiments (numerical and in vivo) to assess the performance of the proposed 2-D displacement estimation approach, which combines our CNN-based image reconstruction method [17] (Section II-B) to reconstruct consecutive frames with single PW acquisitions and the deployed speckle tracking algorithm (Section II-D). In both experiments, we compared the proposed CNN-based displacement estimation method to CPWC-based tracking, which consists of applying the same speckle tracking algorithm to consecutive frames reconstructed using conventional CPWC (Section II-C). For CPWC, a larger number of compounded acquisitions results, in the absence of motion artifacts, in better image quality and consequently in improved displacement estimation, at the cost of a reduced achievable frame rate. Thus, by studying different numbers of compounded acquisitions (Table II), we compared the proposed approach to multiple levels of displacement estimation accuracy.
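The two comparison metrics of Section II-E follow directly from (4) and (5); a minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def epe(u_hat, u):
    """Endpoint error, Eq. (4): Euclidean distance between estimated and
    true displacement vectors (last axis holds the two components)."""
    return np.linalg.norm(u_hat - u, axis=-1)

def repe(u_hat, u):
    """Relative endpoint error, Eq. (5): EPE normalized by the magnitude
    of the true displacement."""
    return epe(u_hat, u) / np.linalg.norm(u, axis=-1)

u_true = np.array([[3.0, 4.0]])   # true displacement (magnitude 5)
u_est = np.array([[3.0, 3.0]])    # estimate off by 1 unit axially
print(epe(u_est, u_true))         # [1.]
print(repe(u_est, u_true))        # [0.2]
```

Applied to whole displacement-estimate maps, averaging these local values yields the global mean (relative) endpoint errors used in the evaluations below.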
A. Numerical Experiment
For the first experiment, we used computer simulations to control the motion pattern, the relative echogenicities of tissue-mimicking structures, and the diffraction artifact levels precisely. The goal was to show the quality of displacement tracking that can be achieved using the proposed method in rapidly moving, highly heterogeneous tissue, where strong diffraction artifacts hinder proper motion analysis with conventional CPWC-based tracking. All simulations were conducted using the same SIR simulator used to generate the training dataset (Section II-B).

We designed a dynamic numerical test phantom composed of scatterers randomly positioned within four cylinders [A, B, C, and D in Fig. 1(a)], embedded in an anechoic background. Each cylinder has a radius of 6.86 mm and a height of 1.0 mm, the latter corresponding to the resolution cell size in elevation evaluated for the imaging configuration considered [17]. Within each of the four zones, an average of ten scatterers per resolution cell was used to ensure fully-developed speckle patterns in the resulting images [40, Sec. 8.4.4]. The cylinders were centered such that cylinder A casts distinct and spatially separable diffraction artifacts onto cylinders B, C, and D. Cylinders B, C, and D were positioned such that they are maximally covered by EW, SL, and GL artifacts, respectively [Fig. 1(b)]. The mean amplitudes of scatterers located within cylinders B, C, and D were chosen to blend in with the amplitudes of the EW, SL, and GL artifacts arising from cylinder A [Fig. 1(b)]. Specifically, the mean amplitudes in cylinders A, B, C, and D were set to 20 dB, −20 dB, −20 dB, and 0 dB with respect to an arbitrary 0 dB reference, respectively. Between successive simulated transmit-receive events, the scatterers were rotated with a constant counter-clockwise angular velocity around the center of the cylinder within which they are positioned. The same angular velocity was used for all cylinders.
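The analytical ground-truth displacement of such a rotating phantom follows from a plain 2-D rotation about the cylinder center; a sketch under the geometry described above (function and variable names are ours):

```python
import numpy as np

def rotational_displacement(points, center, d_theta):
    """Analytical inter-frame displacement of points rotated by the
    counter-clockwise angle d_theta (rad) around `center`.
    A sketch of the ground-truth field; names are ours."""
    c, s = np.cos(d_theta), np.sin(d_theta)
    rel = points - center                        # positions relative to center
    rotated = rel @ np.array([[c, s], [-s, c]])  # row-vector CCW rotation
    return rotated - rel

# Example: the inter-frame angle is set so that the maximum displacement
# is ~600 um at a radius of 6.5 mm (small-angle approximation).
d_theta = 600e-6 / 6.5e-3
pt = np.array([[6.5e-3, 0.0]])
u = rotational_displacement(pt, np.zeros(2), d_theta)
print(np.linalg.norm(u) * 1e6)  # ~600 (um)
```

Evaluating this field at every estimation grid point provides the true displacements against which the local REPE values are computed.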
TABLE III
DISPLACEMENT AND VELOCITY RANGES CONSIDERED FOR THE NUMERICAL EXPERIMENT

                         Large Ranges             Small Ranges
Method    Frame Rate     D. (µm)   V. (cm/s)      D. (µm)   V. (cm/s)
CNN       9 kHz          33–600    29.7–540       3.3–60    2.97–54
CPWC-1    9 kHz          33–600    29.7–540       3.3–60    2.97–54
CPWC-3    3 kHz          33–600    9.9–180        3.3–60    0.99–18
CPWC-9    1 kHz          33–600    3.3–60         3.3–60    0.33–6
CPWC-15   0.6 kHz        33–600    2.0–36         3.3–60    0.20–3.6
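The velocity columns in Table III are simply the displacement ranges multiplied by the frame rate of each method; a quick consistency check (a sketch with values taken from Tables II and III):

```python
# Velocity = inter-frame displacement x frame rate (values from the text).
frame_rates_hz = {"CNN": 9000.0, "CPWC-1": 9000.0, "CPWC-3": 3000.0,
                  "CPWC-9": 1000.0, "CPWC-15": 600.0}
d_max = 600e-6  # largest inter-frame displacement considered (m)

for method, frame_rate in frame_rates_hz.items():
    v_max_cm_s = d_max * frame_rate * 100.0  # m/s -> cm/s
    print(f"{method}: {v_max_cm_s:.1f} cm/s")
# CNN and CPWC-1 reach 540 cm/s (5.4 m/s); CPWC-15 only 36 cm/s.
```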
This experiment was designed to evaluate the accuracy of displacement estimates, obtained using the same speckle tracking algorithm on consecutive frames reconstructed with the different image reconstruction methods considered, within prescribed inter-frame displacement ranges. Inter-frame displacements ranging from 3.3 µm to 600 µm (i.e., from ∼λ/90 to ∼2λ) were analyzed, covering a range from the small displacements that typically occur in shear-wave elastography [7] or acoustic radiation force imaging [41], up to the large displacements that typically occur in external compression-based elastography [41]. Furthermore, when analyzed at a frame rate of 9 kHz, these ranges correspond to velocities up to 5.4 m/s, which are close to the peak velocities inside the cardiovascular system [42]. To this end, two different sets of numerical phantoms were simulated for each image reconstruction method considered and associated frame rate, covering two inter-frame displacement ranges, namely 3.3 µm to 60 µm and 33 µm to 600 µm. The respective angular velocities were determined such that the maximum inter-frame displacement occurred at a radius of 6.5 mm. The resulting border of 0.36 mm was used to avoid speckle tracking border effects in the quality evaluation. It corresponds to the approximate average resolution cell size in the transducer plane. A similar zone was ignored in the center of each cylinder. Displacement ranges and corresponding cross-radial velocity ranges are given in Table III for each image reconstruction method considered.

Inter-frame displacements were estimated using the proposed CNN-based approach, as well as CPWC-1, CPWC-3, CPWC-9, and CPWC-15, at their respective maximum frame rates. For all test configurations considered (i.e., method and displacement range), 50 statistically independent scatterer realizations were simulated, resulting in 50 inter-frame displacement estimate maps for each configuration. The accuracy of each method was measured locally in terms of REPE, by computing (5) for each displacement estimate (grid point) and corresponding true (analytical) value. The mean local REPE was also computed over the 50 independent realizations (at each displacement estimate grid point).

B. In Vivo Experiment
For the second experiment, we applied the proposed approach to in vivo acquisitions, to analyze the natural tissue motion around the carotid artery. The goal of this experiment was to test the robustness and translatability of the results obtained in the numerical experiment to the full complexity of in vivo
imaging. As the natural motion induced by cardiac pulsations is slow, it allowed us to obtain reference inter-frame displacement estimates at maximum PRF. For the methods to be compared, the analysis was performed at a low frame rate, selected to result in inter-frame displacement ranges of interest.

We analyzed the slow-moving tissue between the skin and the carotid artery of a healthy volunteer. In particular, motion within a specific tissue region (Fig. 3) was analyzed at 10 Hz, resulting in inter-frame displacements similar to those studied in the numerical experiment (Section III-A), namely ranging from approximately 5 µm to 125 µm. Therefore, identical speckle tracking settings were used (Section II-D). Speckle tracking was performed on full images, but we restricted our analysis to a specific zone characterized by fully-developed speckle patterns, plagued by diffraction artifacts mainly originating from the highly echogenic carotid walls when imaged using CPWC-1 [Fig. 3(a)].

Fig. 1. B-mode image representations (80-dB range) of a numerical test phantom sample: (a) the 2-D geometry of the deployed numerical phantoms, composed of four cylinders (A, B, C, and D) filled with dense point-scatterers rotating at constant angular velocity around their respective cylinder center; (b) image reconstructed by delay-and-sum (DAS) beamforming of a single plane-wave (PW) acquisition (CPWC-1), simultaneously representing the convolutional neural network (CNN) input image for the proposed method; (c) image reconstructed using CNN-based reconstruction; images reconstructed by coherent plane wave compounding (CPWC) using nine steered PW acquisitions (CPWC-9): (d) small displacement range and (e) large displacement range. The frame rate and displacement range for each image reconstruction method considered are given in Table III.
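The per-frame-pair evaluation against the CPWC-87 reference can be sketched as follows (array names and shapes are our assumptions, not the paper's code):

```python
import numpy as np

def mepe(u_est_map, u_ref_map, roi_mask):
    """Mean endpoint error of one inter-frame displacement-estimate map
    against a reference map, restricted to the region of interest.
    Maps have shape (H, W, 2); the names are our assumptions."""
    epe_map = np.linalg.norm(u_est_map - u_ref_map, axis=-1)
    return float(epe_map[roi_mask].mean())

# Toy frame pair: a uniform 1-unit axial bias over a 4 x 4 grid.
u_ref = np.zeros((4, 4, 2))
u_est = np.zeros((4, 4, 2))
u_est[..., 0] = 1.0
roi = np.ones((4, 4), dtype=bool)
print(mepe(u_est, u_ref, roi))  # 1.0
```

In the experiment, this quantity is evaluated for each of the 29 inter-frame estimate maps and each method compared.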
The mean echogenicity of the analyzed speckle zone was approximately 20 dB lower than the echogenicity of the carotid walls, thus similar to the relative echogenicity between cylinders A and D studied in the numerical experiment.

To obtain reference displacement estimates of the image zone considered, we reconstructed consecutive frames using CPWC-87 (Table II). As compounded acquisitions were performed at a PRF of 9 kHz, inter-frame displacements were negligible. Hence, consecutive frames reconstructed using CPWC-87 were considered free of motion artifacts, and displacement estimates obtained by speckle tracking were considered as reference. We compared displacement estimates obtained using the proposed approach with the ones obtained using CPWC-1 and CPWC-15. For each method being compared, consecutive frames were reconstructed using the relevant subset of steered PW acquisitions acquired for the reference CPWC-87 method (Section II-C). Therefore, CPWC-15 was also free of any motion artifacts. A total of 30 frames were obtained at a frame rate of 10 Hz from acquisitions performed at a PRF of 9 kHz, resulting in 29 inter-frame displacement estimate maps. For each inter-frame displacement estimate map, the accuracy of each method was measured locally in terms of EPE, by computing (4) for each displacement estimate (grid point) and corresponding reference value (CPWC-87). The quality of the displacement estimates for each frame pair was assessed by computing the mean endpoint error (MEPE) obtained within the region of interest.

IV. RESULTS
A. Numerical Experiment
Fig. 2 displays local REPE values, averaged over the 50 independent realizations performed in each configuration considered (Section III-A). To facilitate the analysis, we deemed as invalid any displacement estimate resulting in an averaged local REPE value exceeding 100 %. From each set of valid estimates, we computed two global evaluation metrics, namely the ratio of valid estimates (RVE) and the mean relative endpoint error (MREPE). These global evaluation metrics are reported in Table IV.

Zone A was designed such that it did not suffer from diffraction artifacts and could be used to assess displacement estimation in pure speckle zones. In the large-displacement case [Fig. 2(a)], CPWC-based tracking suffered from increasing motion artifacts with the number of compounded acquisitions when tracking identical inter-frame displacements (i.e., at decreasing frame rates), reaching a stable motion artifact level after nine compounded acquisitions. The proposed method performed best and improved over CPWC-1 both in terms of local and global metrics. In the small-displacement case [Fig. 2(b)], motion artifacts were negligible and all methods performed efficiently. A typical comparison of CPWC with and without motion artifacts is shown in Fig. 1(d) and 1(e) for CPWC-9.

Zone B was designed to suffer from EW artifacts. The proposed method was not capable of restoring speckle patterns shadowed by EW artifacts accurately, resulting in performance metrics only slightly improved compared with CPWC-1. Inaccurate restoration of speckle patterns plagued by EW artifacts can be observed in Fig. 1(c) (e.g., clipped values). These artifacts could only be progressively resolved in the small-displacement case [Fig. 2(b)] with the increase in compounded acquisitions, because of the absence of motion artifacts.

Zone C was designed to suffer from SL artifacts. In the large-displacement case [Fig. 2(a)], the reduction in SL artifacts achieved by compounding several acquisitions was counteracted by the induced motion artifacts, except in zones of pure lateral movement, making proper tracking impossible using CPWC-based tracking. The proposed method was capable of properly estimating displacements, with a quality only slightly worse than in artifact-free zone A. In the small-displacement case [Fig. 2(b)], CPWC-based tracking improved with the increase in compounded acquisitions, as the SL reduction was no longer counteracted by motion artifacts. The proposed method achieved a quality slightly worse than CPWC-15 but significantly better than CPWC-9.

Zone D was designed to suffer from GL artifacts, which increase in strength towards the right edge of the image. In the large-displacement case [Fig. 2(a)], compounding multiple acquisitions reduced GL artifacts. Yet, motion artifacts prevented accurate displacement estimation except in zones of pure lateral movement. The proposed method significantly improved the displacement estimation quality over CPWC-1 and was the only method to allow tracking displacements in this case. In the small-displacement case [Fig. 2(b)], the increase in compounded acquisitions allowed CPWC-based tracking to reduce the effect of GLs and restore the underlying speckle patterns, progressively resulting in an increased RVE and a decreased MREPE. The proposed method performed slightly better than CPWC-15.

Fig. 2. Local relative endpoint error (REPE), averaged over 50 independent realizations, of the 2-D displacement estimates inside each of the numerical phantom zones [A, B, C, and D in Fig. 1(a)], obtained by applying the deployed 2-D speckle tracking algorithm (Section II-D) on two consecutive frames for the two inter-frame displacement ranges considered: (a) large displacement range (from 33 µm to 600 µm); (b) small displacement range (from 3.3 µm to 60 µm). Consecutive frames were reconstructed either by coherent plane wave compounding (CPWC) from 1, 3, 9, and 15 differently steered PWs, or using the proposed convolutional neural network (CNN)-based image reconstruction method from single PWs. The frame rate and displacement range for each image reconstruction method considered are given in Table III. The displayed REPE range is limited to 100 %. Local REPE values were interpolated onto a fine grid for display purposes.

TABLE IV
GLOBAL EVALUATION METRICS OF THE NUMERICAL EXPERIMENT

                        ------------- Large Displacement Range ------------   ------------- Small Displacement Range ------------
Zone  Metric            CPWC-1   CPWC-3   CPWC-9   CPWC-15    CNN             CPWC-1   CPWC-3   CPWC-9   CPWC-15    CNN
A     RVE a (%)         100.00   100.00   100.00    100.00   100.00           100.00   100.00   100.00    100.00   100.00
      MREPE b (%)         4.45     7.24    12.99     12.84     3.62             7.34     6.91     5.25      4.36     5.81
B     RVE (%)            63.00    69.58    63.10     68.96    74.41            57.76    73.79    99.38     99.69    67.42
      MREPE (%)          19.67    29.41    39.53     38.35    19.83            18.79    25.58    26.15     19.08    18.19
C     RVE (%)            85.27    77.59    51.56     65.15   100.00            29.25    81.64   100.00    100.00   100.00
      MREPE (%)          52.82    45.91    41.98     39.24     4.98            39.36    36.28    17.64      8.29     9.61
D     RVE (%)            44.59    44.08    34.81     49.74   100.00            22.14    42.02    82.29     99.69    99.59
      MREPE (%)          36.54    46.38    50.94     41.80     5.51            47.61    45.12    36.41     17.54    15.25

a Ratio of valid estimates (RVE); an estimate was considered valid when its local REPE was below 100 %.
b Mean relative endpoint error (MREPE) evaluated from the set of valid estimates.
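The local and global metrics above (REPE, RVE, MREPE) can be sketched as follows. This is a minimal numpy illustration; the displacement-field layout (arrays of shape (..., 2)) and the exact normalization of the relative endpoint error are assumptions, since the corresponding equations are defined earlier in the paper.

```python
import numpy as np

def repe(est, ref):
    """Local relative endpoint error (%) between estimated and reference
    2-D displacement fields of shape (..., 2)."""
    epe = np.linalg.norm(est - ref, axis=-1)           # endpoint error per grid point
    return 100.0 * epe / np.linalg.norm(ref, axis=-1)  # relative to reference magnitude

def global_metrics(est, ref, threshold=100.0):
    """Ratio of valid estimates (RVE, %) and mean relative endpoint error
    (MREPE, %), the latter computed over the valid estimates only."""
    r = repe(est, ref)
    valid = r < threshold                 # an estimate is valid if its REPE is below 100 %
    rve = 100.0 * np.mean(valid)
    mrepe = np.mean(r[valid]) if np.any(valid) else np.nan
    return rve, mrepe
```

In this sketch, averaging `repe` maps over independent realizations would yield the local values displayed in Fig. 2, while `global_metrics` corresponds to the per-zone values reported in Table IV.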
B. In Vivo Experiment
From the example images and corresponding displacement estimates [Fig. 3(a) to 3(d)], one can observe that CPWC-1 suffers from diffraction artifacts (mainly caused by GLs and SLs arising from the carotid walls), disturbing both the speckle patterns and the resulting displacement estimates. These artifacts were strongly reduced using CPWC-15, leading to speckle patterns similar to the reference ones (CPWC-87) and resulting in accurate displacement estimates. The proposed CNN-based imaging approach also reduced these artifacts, restoring the underlying speckle patterns accurately. This resulted in local displacement estimates with a quality similar to that obtained with CPWC-15.
Fig. 3. Examples of displacement estimates, mean reference displacement magnitude, and mean endpoint error (MEPE), obtained using the displacement estimation methods considered, in a fully-developed speckle zone above the carotid artery: images of a longitudinal view of the carotid artery are shown for (a) CPWC-1 (also CNN input), (b) CPWC-15, (c) CNN, and (d) CPWC-87 (reference); the bottom row shows (e) the mean reference displacement magnitude and (f) the MEPE along the entire in vivo sequence for each method considered. In each B-mode image of the top row, the square region of interest is highlighted and the corresponding magnified inset displays the 2-D displacement estimates. B-mode images are displayed using a dynamic range of 50 dB. An animation of the figure and the corresponding slideshow are provided as supplementary material.
The analysis of the MEPE values over time [Fig. 3(f)] shows that, while CPWC-1 was generally unable to estimate inter-frame motion properly, the proposed method resulted in a high and stable displacement estimation quality, similar to (though slightly worse than) that of CPWC-15. This observation matches the results of the numerical experiments for the small-displacement case (see Section IV-A). Over the entire sequence, a MEPE of 41.8 µm, 7.5 µm, and 11.1 µm was achieved using CPWC-1, CPWC-15, and the proposed method, respectively. As the reference mean displacement over time was 64.8 µm, this corresponds to error percentages of 65 %, 12 %, and 17 % for CPWC-1, CPWC-15, and the proposed method, respectively.
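The endpoint-error evaluation used here can be sketched as follows; this is a minimal numpy illustration in which the displacement-map layout (arrays of shape (..., 2), in µm) is an assumption.

```python
import numpy as np

def mepe(est, ref):
    """Mean endpoint error (same unit as the displacements, e.g. µm)
    over a region of interest, for one inter-frame estimate map."""
    return np.mean(np.linalg.norm(est - ref, axis=-1))

def sequence_error_percentage(est_maps, ref_maps):
    """Sequence-level MEPE divided by the mean reference displacement
    magnitude, expressed in percent."""
    errs = [mepe(e, r) for e, r in zip(est_maps, ref_maps)]
    mags = [np.mean(np.linalg.norm(r, axis=-1)) for r in ref_maps]
    return 100.0 * np.mean(errs) / np.mean(mags)
```

Applied to the 29 inter-frame estimate maps of the sequence, `mepe` yields the per-frame-pair curve of Fig. 3(f), and `sequence_error_percentage` the error percentages quoted above.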
V. DISCUSSION
In this work, we proposed a 2-D motion estimation approach based on single unfocused acquisitions to reconstruct consecutive frames and on pairs of consecutive frames to estimate local displacements. This approach relies on our CNN-based image reconstruction method [17] to reconstruct full-view US frames from single unfocused acquisitions. It consists of first reconstructing low-quality images using a backprojection-inspired DAS algorithm and then feeding them to a CNN, specifically trained to reduce the diffraction artifacts inherent to ultrafast US imaging. Inter-frame displacements are estimated by applying a state-of-the-art 2-D speckle tracking algorithm on consecutive-frame pairs only.
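As an illustration of the kind of correlation-based block matching that underlies such speckle tracking, the sketch below estimates the integer-pixel displacement of a single kernel by exhaustive zero-normalized cross-correlation (ZNCC) between two consecutive frames. The algorithm actually deployed in the paper [28] is a PIV-type implementation with sub-pixel refinement, so this is a simplified stand-in under assumed kernel and search-window sizes, not the authors' implementation.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_block(frame0, frame1, y, x, k=8, s=4):
    """Integer-pixel displacement (dy, dx) of the k-by-k kernel centered
    at (y, x) in frame0, found by exhaustive ZNCC search over a +/- s
    pixel window in frame1."""
    h = k // 2
    tpl = frame0[y - h:y + h, x - h:x + h]
    best, best_d = -np.inf, (0, 0)
    for dy in range(-s, s + 1):
        for dx in range(-s, s + 1):
            cand = frame1[y + dy - h:y + dy + h, x + dx - h:x + dx + h]
            c = zncc(tpl, cand)
            if c > best:                      # keep the best-correlated shift
                best, best_d = c, (dy, dx)
    return best_d
```

Repeating this search on a grid of kernels yields a dense inter-frame displacement map from a single frame pair, which is the output the evaluation metrics operate on.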
A. Performance in Numerical Conditions
An important observation was that the proposed approach could not estimate displacements accurately in zones dominated by EW artifacts (Fig. 2, zone B). This is directly related to the fact that the deployed CNN is not capable of restoring the underlying speckle patterns accurately [Fig. 1(c)]. Slight improvements were observed compared with conventional single-PW imaging (CPWC-1), but far less striking than in zones dominated by SL and GL artifacts (Fig. 2, zones C and D). In [17], we already observed that EW artifacts were the most difficult artifacts to deal with, but also that the restoration quality improved with the increase of the CNN capacity. The latter implies that the reduction of these artifacts might be further improved using a more efficient CNN architecture or training process.

When analyzing large displacements, we observed that compounding multiple acquisitions in an attempt to improve the obtained image quality induces strong motion artifacts, mainly due to destructive interferences caused by axial motion. In the presence of motion artifacts, conventional CPWC-based speckle tracking was generally incapable of providing valid displacement estimates, in particular in zones plagued by strong diffraction artifacts. Consequently, compounding multiple acquisitions decreased the displacement estimation quality compared with single-PW acquisitions (CPWC-1). While motion compensation techniques have been proposed to tackle this issue [16], it remains unclear whether motion-compensated coherent compounding can be deployed in zones plagued by diffraction artifacts (as it is based on inter-acquisition motion estimation), and whether it actually improves the displacement estimation quality in artifact-free zones compared with single unfocused acquisitions. We demonstrated that the proposed single-PW CNN-based approach is capable of providing high-quality displacement estimates in artifact-free zones, as well as in zones plagued by SL and GL artifacts.

In the case of small displacements, increasing the number
of compounded acquisitions progressively increased, as expected, the accuracy of CPWC-based displacement estimation. The proposed CNN-based approach achieved a displacement estimation quality comparable to CPWC-15 in zones suffering from SL and GL artifacts, and comparable to CPWC-9 in artifact-free zones. It can be noted that the relative estimation precision achieved by the proposed approach was generally worse when analyzing small inter-frame displacements than in larger-displacement cases. This was also observed for conventional CPWC-based tracking in artifact-free zones [e.g., compare CPWC-1, zone A in Fig. 2(a) and 2(b)]. This mainly comes from the fact that the estimation error of correlation-based tracking is bounded below by a minimum value (Cramér-Rao lower bound), which, relatively speaking, becomes more significant for smaller displacements [41]. For quantifying very small displacements, applying speckle tracking to RF data instead of envelope data may improve precision [33], [35, Sec. 14.2.1], at the expense of a reduced robustness to speckle decorrelation.
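For context, correlation-based trackers commonly push precision below the integer-pixel level by fitting a Gaussian to the correlation peak, as in [31]. The sketch below shows the standard one-dimensional three-point estimator; it is an illustrative variant and not necessarily the exact scheme used in this work.

```python
import numpy as np

def gaussian_subpixel(c_m, c_0, c_p):
    """Sub-pixel offset of a correlation peak from three samples
    (peak c_0 and its two neighbors c_m, c_p), assuming a
    Gaussian-shaped peak with strictly positive values."""
    lm, l0, lp = np.log(c_m), np.log(c_0), np.log(c_p)
    # Vertex of the parabola fitted to the log-samples (exact for a Gaussian)
    return (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))
```

Applying the estimator once per axis around the integer-pixel correlation maximum refines the displacement to a fraction of a pixel; the residual error of such estimators is what the Cramér-Rao bound discussed above limits.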
B. Performance in Physical Conditions
We demonstrated that the proposed CNN-based approach, which relies on single PW acquisitions, significantly improved over conventional single-PW imaging (CPWC-1). It also achieved an accuracy of inter-frame displacement estimation similar to that of 15 compounded acquisitions (CPWC-15), in conditions where motion artifacts were negligible. Overall, the quantitative evaluations performed in the in vivo experiment were comparable to those of the numerical experiment in the absence of motion artifacts. This not only shows that the proposed method can be applied to in vivo data successfully, even though the CNN used for image reconstruction was trained on simulated data only, it also suggests that the results of the numerical experiments are robust and translatable (to some extent) to experimental conditions.

It should be noted that the experiment was intentionally carried out on a slow-moving tissue zone. This allowed us to obtain reference displacement estimates for evaluation purposes, and to select a frame rate, identical for all methods considered, resulting in inter-frame displacements within the ranges of interest. However, as speckle tracking is agnostic to the underlying frame rate, the results are fully translatable to fast-motion cases, analyzed at higher frame rates with similar inter-frame displacement ranges, provided that the desired frame rate is achievable by the method deployed.
C. Potential, Perspectives, and Limitations
The proposed approach is able to provide high-quality estimates for a wide range of 2-D inter-frame displacements, even in tissue regions dominated by SL and GL artifacts. As it only relies on single unfocused acquisitions to reconstruct consecutive frames, it is immune to motion artifacts. Moreover, it is limited only by the propagation time of acoustic waves, making it especially interesting for the analysis of rapidly changing events at very high frame rates, such as the propagation of shear waves in tissue or complex flow patterns within the cardiovascular system, where displacement estimation techniques based on multi-acquisition image reconstruction methods may not be deployable.

The major limitation is that the current implementation of the proposed approach was not able to provide accurate displacement estimates in regions dominated by EW artifacts, most probably because these artifacts closely resemble speckle patterns. Both the EW behavior and the general performance of the approach might be further improved by augmenting the performance of the CNN used for image reconstruction. For instance, the use of a higher-capacity CNN or a more efficient training process may improve the restoration of tissue structures hidden by EW artifacts. Another way to tackle this limitation would be to use transmit apodization [43]. This technique can significantly reduce EW artifacts, at the cost of limited energy towards the image borders. However, its effectiveness is limited by the apodization capability of the US system, in particular by the transmitter complexity.
If the method is not used at the maximum achievable frame rate, and in the presence of sufficiently stationary motion, the robustness and precision of the displacement estimation could be improved, for instance by averaging multiple displacement estimates or by using ensemble correlation [28].

This study was limited to tracking fully-developed speckle patterns; hence, no insights about tracking tissue structures arising from specular or diffractive scattering should be drawn from it directly. Yet, the carotid-wall movement was observed to be similar to that obtained with conventional methods (see animation of Fig. 3, supplementary material). The training set was also limited to simulated images of fully-developed speckle zones resulting from diffusive scattering, and in [17] we observed that, while reconstructing other tissue structures is generally possible, the performance may be lower than in fully-developed speckle zones. Using a more versatile training set may be considered to widen the applicability of both the reconstruction approach and the displacement tracking method proposed here.

From a more general perspective, this work further validates the potency of the CNN-based image reconstruction method introduced in [17]. Indeed, this method not only provides high-quality images from single unfocused acquisitions, but also preserves the information of underlying physical phenomena that can be further exploited for estimating inter-frame displacements accurately.
VI. CONCLUSION
In this work, we proposed an approach for estimating 2-D inter-frame displacements in the context of ultrafast US imaging. The approach consists of a CNN trained to restore high-quality images from single unfocused acquisitions and a speckle tracking algorithm to estimate inter-frame displacements from two consecutive frames only. Compared with conventional multi-acquisition strategies, this approach is immune to motion artifacts and allows accurate motion estimation at maximum frame rates, even in highly heterogeneous tissues prone to strong diffraction artifacts. Numerical and in vivo results demonstrated that the proposed approach is capable of estimating displacement vector fields from single PW acquisitions accurately, including in zones initially hidden by SL and GL artifacts. The proposed approach may thus unlock the full potential of ultrafast US, with direct applications to imaging modes that depend on accurate motion estimation at maximum frame rates, such as shear-wave elastography or ultrasensitive echocardiography.

ACKNOWLEDGMENT
The authors would like to warmly thank Quentin Ligier for his important contribution to the implementation of the speckle tracking algorithm deployed in this work.

REFERENCES

[1] M. Tanter and M. Fink, "Ultrafast imaging in biomedical ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 61, no. 1, pp. 102–119, 2014.
[2] H. Geyer et al., "Assessment of myocardial mechanics using speckle tracking echocardiography: Fundamentals and clinical applications," J. Am. Soc. Echocardiogr., vol. 23, no. 4, pp. 351–369, 2010.
[3] J. Bercoff et al., "Ultrafast compound Doppler imaging: Providing full blood flow characterization," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 58, no. 1, pp. 134–147, 2011.
[4] M. Cikes, L. Tong, G. R. Sutherland, and J. D'hooge, "Ultrafast cardiac ultrasound imaging," JACC Cardiovasc. Imaging, vol. 7, no. 8, pp. 812–823, 2014.
[5] J.-U. Voigt et al., "Definitions for a common standard for 2D speckle tracking echocardiography: Consensus document of the EACVI/ASE/Industry task force to standardize deformation imaging," Eur. Heart J. Cardiovasc. Imaging, vol. 16, no. 1, pp. 1–11, 2015.
[6] J. Bercoff, M. Tanter, and M. Fink, "Supersonic shear imaging: A new technique for soft tissue elasticity mapping," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 51, no. 4, pp. 396–409, 2004.
[7] G. Montaldo, M. Tanter, J. Bercoff, N. Benech, and M. Fink, "Coherent plane-wave compounding for very high frame rate ultrasonography and transient elastography," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 56, no. 3, pp. 489–506, 2009.
[8] P. Santos et al., "Natural shear wave imaging in the human heart: Normal values, feasibility, and reproducibility," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 66, no. 3, pp. 442–452, 2019.
[9] C. Papadacci, M. Pernot, M. Couade, M. Fink, and M. Tanter, "High-contrast ultrafast imaging of the heart," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 61, no. 2, pp. 288–301, 2014.
[10] M. Pernot, M. Couade, P. Mateo, B. Crozatier, R. Fischmeister, and M. Tanter, "Real-time assessment of myocardial contractility using shear wave imaging," J. Am. Coll. Cardiol., vol. 58, no. 1, pp. 65–72, 2011.
[11] E. Macé, G. Montaldo, I. Cohen, M. Baulac, M. Fink, and M. Tanter, "Functional ultrasound imaging of the brain," Nat. Methods, vol. 8, no. 8, pp. 662–664, 2011.
[12] J. Cheng and J.-Y. Lu, "Extended high-frame rate imaging method with limited-diffraction beams," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 53, no. 5, pp. 880–899, 2006.
[13] B. Denarie et al., "Coherent plane wave compounding for very high frame rate ultrasonography of rapidly moving targets," IEEE Trans. Med. Imaging, vol. 32, no. 7, pp. 1265–1276, 2013.
[14] J. Porée, D. Posada, A. Hodzic, F. Tournoux, G. Cloutier, and D. Garcia, "High-frame-rate echocardiography using coherent compounding with Doppler-based motion-compensation," IEEE Trans. Med. Imaging, vol. 35, no. 7, pp. 1647–1657, 2016.
[15] J. Porée, D. Garcia, B. Chayer, J. Ohayon, and G. Cloutier, "Noninvasive vascular elastography with plane strain incompressibility assumption using ultrafast coherent compound plane wave imaging," IEEE Trans. Med. Imaging, vol. 34, no. 12, pp. 2618–2631, 2015.
[16] P. Joos et al., "High-frame-rate speckle-tracking echocardiography," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 65, no. 5, pp. 720–728, 2018.
[17] D. Perdios, M. Vonlanthen, F. Martinez, M. Arditi, and J.-P. Thiran, "CNN-based image reconstruction method for ultrafast ultrasound imaging," 2020. [Online]. Available: https://arxiv.org/abs/2008.12750
[18] ——, "Deep learning based ultrasound image reconstruction method: A time coherence study," in , 2019, pp. 448–451.
[19] J. A. Jensen, S. I. Nikolov, A. C. H. Yu, and D. Garcia, "Ultrasound vector flow imaging—Part II: Parallel systems," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 63, no. 11, pp. 1722–1732, 2016.
[20] S. Fadnes, S. A. Nyrnes, H. Torp, and L. Lovstakken, "Shunt flow evaluation in congenital heart disease based on two-dimensional speckle tracking," Ultrasound Med. Biol., vol. 40, no. 10, pp. 2379–2391, 2014.
[21] M. Tanter, J. Bercoff, L. Sandrin, and M. Fink, "Ultrafast compound imaging for 2-D motion vector estimation: Application to transient elastography," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 49, no. 10, pp. 1363–1374, 2002.
[22] J. A. Jensen, "A model for the propagation and scattering of ultrasound in tissue," J. Acoust. Soc. Am., vol. 89, no. 1, p. 182, 1991.
[23] P. Thevenaz, T. Blu, and M. Unser, "Interpolation revisited," IEEE Trans. Med. Imaging, vol. 19, no. 7, pp. 739–758, 2000.
[24] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Med. Image Comput. Comput. Interv. – MICCAI 2015, 2015, pp. 234–241.
[25] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, "Deep convolutional neural network for inverse problems in imaging," IEEE Trans. Image Process., vol. 26, no. 9, pp. 4509–4522, 2017.
[26] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," pp. 1–15, 2014. [Online]. Available: https://arxiv.org/abs/1412.6980
[27] J. A. Jensen, "FIELD: A program for simulating ultrasound systems," in , vol. 4, no. Supplement 1, 1996, pp. 351–353.
[28] V. Perrot and D. Garcia, "Back to basics in ultrasound velocimetry: Tracking speckles by using a standard PIV algorithm," in , 2018, pp. 206–212.
[29] J. A. Jensen, H. Liebgott, F. Cervenansky, and C. A. Villagomez Hoyos, "SA-VFI: The IEEE IUS challenge on synthetic aperture vector flow imaging," in , 2018, pp. 1–5.
[30] W. Thielicke and E. J. Stamhuis, "PIVlab – towards user-friendly, affordable and accurate digital particle image velocimetry in MATLAB," J. Open Res. Softw., vol. 2, no. 1, 2014.
[31] H. Nobach and M. Honkanen, "Two-dimensional Gaussian regression for sub-pixel displacement estimation in particle image velocimetry or particle position estimation in particle tracking velocimetry," Exp. Fluids, vol. 38, no. 4, pp. 511–515, 2005.
[32] D. Garcia, "Robust smoothing of gridded data in one and higher dimensions with missing values," Comput. Stat. Data Anal., vol. 54, no. 4, pp. 1167–1178, 2010.
[33] W. F. Walker and G. E. Trahey, "A fundamental limit on the performance of correlation based phase correction and flow estimation techniques," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 41, no. 5, pp. 644–654, 1994.
[34] ——, "A fundamental limit on delay estimation using partially correlated speckle signals," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 42, no. 2, pp. 301–308, 1995.
[35] C. P. Loizou, C. S. Pattichis, and J. D'hooge, Eds., Handbook of Speckle Filtering and Tracking in Cardiovascular Ultrasound Imaging and Video, ser. Healthcare Technologies. Institution of Engineering and Technology, 2018.
[36] L. N. Bohs, B. J. Geiman, M. E. Anderson, S. C. Gebhart, and G. E. Trahey, "Speckle tracking for multi-dimensional flow estimation," Ultrasonics, vol. 38, no. 1-8, pp. 369–375, 2000.
[37] J. Meunier and M. Bertrand, "Ultrasonic texture motion analysis: Theory and simulation," IEEE Trans. Med. Imaging, vol. 14, no. 2, pp. 293–300, 1995.
[38] M. Otte and H. H. Nagel, "Optical flow estimation: Advances and comparisons," in Comput. Vis. — ECCV '94. Springer Berlin Heidelberg, 1994, pp. 49–60.
[39] S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. J. Black, and R. Szeliski, "A database and evaluation methodology for optical flow," Int. J. Comput. Vis., vol. 92, no. 1, pp. 1–31, 2011.
[40] T. L. Szabo, Diagnostic Ultrasound Imaging: Inside Out, 2nd ed., ser. Biomedical Engineering. Academic Press, 2014.
[41] G. F. Pinton, J. J. Dahl, and G. E. Trahey, "Rapid tracking of small displacements with ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 53, no. 6, pp. 1103–1117, 2006.
[42] H. F. Routh, "Doppler ultrasound," IEEE Eng. Med. Biol. Mag., vol. 15, no. 6, pp. 31–40, 1996.
[43] J. Jensen, M. B. Stuart, and J. A. Jensen, "Optimized plane wave imaging for fast and high-quality ultrasound imaging,"