Spatiotemporal tomography based on scattered multiangular signals and its application for resolving evolving clouds using moving platforms
Roi Ronen, Yoav Y. Schechner
Viterbi Faculty of Electrical Engineering
Technion - Israel Institute of Technology
Haifa, Israel
[email protected]@ee.technion.ac.il

Eshkol Eytan
Department of Earth and Planetary Sciences
The Weizmann Institute of Science
Rehovot, Israel
[email protected]
Abstract
We derive computed tomography (CT) of a time-varying volumetric translucent object, using a small number of moving cameras. We particularly focus on passive scattering tomography, which is a non-linear problem. We demonstrate the approach on dynamic clouds, as clouds have a major effect on Earth's climate. State-of-the-art scattering CT assumes a static object. Existing 4D CT methods rely on a linear image formation model and often on significant priors. In this paper, the angular and temporal sampling rates needed for a proper recovery are discussed. If these rates are used, the paper leads to a representation of the time-varying object, which simplifies 4D CT. The task is achieved using gradient-based optimization. We demonstrate this in physics-based simulations and in an experiment that yielded real-world data.
1. Introduction
Computed tomography (CT) is a powerful way to recover the inner structure of three-dimensional (3D) volumetric heterogeneous objects [16]. Being possibly one of the earliest types of computational photography methods, CT has extensive use in many research and operational domains. These include medicine [41], sensing of atmospheric pollution [4], geophysics [49] and fluid dynamics [51, 52]. As a result, CT technologies and novel modalities are increasingly being advanced by the computer vision and graphics communities [18, 38]. CT requires imaging from multiple directions [5, 24]. In nearly all CT approaches, the object is considered static during image acquisition. However, in many cases of interest, the object changes while multi-view images are acquired sequentially [11, 53]. Thus, effort has been invested
Figure 1: [A] Multiple moving sensors scan a time-varying object (cloud) from multiple views. [B] The measurements, acquired at different times, are the input to our method, which aims to recover the 3D volume density of the object at the different times [C].

to generalize 3D CT to four-dimensional (4D) spatiotemporal CT, particularly in the computer vision and graphics communities [42, 52, 53]. This effort has been directed at linear-CT modalities. Linear CT is computationally easier to handle, thus common for decades, mainly in medical imaging [22]. Medical CT often exploited the periodic temporal nature of organ dynamics, to synchronize sequential acquisitions [41]. The generalization in linear CT is mirrored in a generalization of object surface recovery by spatio-temporal computer vision [28, 37, 42]. This paper focuses on a more complicated model: scattering CT. It is important to treat this case for scientific, societal and practical reasons. The climate is strongly affected by interaction with clouds [15] (Fig. 1). To reduce major errors in climate predictions, this interaction requires a much finer understanding of cloud physics than current knowledge. Current knowledge is based on empirical remote sensing data that is analyzed under the assumption that the atmosphere and clouds are made of very broad and uniform layers. This leads to errors in climate understanding. To overcome this problem, 3D scattering CT has been suggested as a way to study clouds [30, 31]. Scattering CT of clouds requires high-resolution multi-view images from space. There are spaceborne and high-altitude systems that may provide such data, such as AirMSPI [10], MAIA [7], HARP [40], AirHARP [36] and the planned CloudCT formation [43].
However, there is a practical problem: these systems are very expensive, so it is not realistic to deploy them in large numbers to simultaneously acquire images of the same clouds from many angles. Therefore, in practice, the platforms are planned to move above the clouds: a sequence of images is taken, in order to span and sample a wide angular breadth, but the cloud evolves meanwhile. Hence there are important reasons to derive 4D scattering CT, particularly of clouds. We pose conditions under which this task can be performed. These relate to temporal sampling and angular breadth, in relation to the correlation time of the evolving object. Then, we generalize prior 3D CT, specifically scattering CT, to spatiotemporal recovery using data taken by moving cameras. We present an optimization-based method to reach this, and then demonstrate this method both in rigorous simulations and on real data.
2. Theoretical Background
Computed tomography (CT) seeks estimation of the 3D volumetric density β of an object. Usually, CT measures the object from multiple directions. Denote those measurements by y. A forward model F(β) expresses the image formation model. Estimation of β is done by minimization of a cost E, which penalizes the discrepancy between y and the forward model,

  β̂ = argmin_β E[y, F(β)].   (1)

Often, acquiring data from multiple angles simultaneously is very difficult. Sensors are expensive and/or power-consuming. Thus, their duplication in large numbers to sample many directions is often prohibitive. Sometimes the sensors are bulky and need cooling, posing difficulty to pack them densely. Therefore, CT measurements are often acquired sequentially. In contrast, the inverse tomographic problem (Eq. 1) expresses the object as time-invariant. However, in many situations this is not the case; the object changes in time. Thus, modeling the inverse problem as Eq. (1) is inconsistent both with the dynamic nature of the object and the sequential nature of the sensing process. This paper focuses on scattering-based CT. Thus, we describe here the relevant forward model. We follow notations and definitions of [4, 30]. Material density at a voxel is denoted by β. In case the main interaction effect is scattering (rather than absorption or emission), as is the case of visible light in clouds, β is the extinction or scattering coefficient of the medium, in units of km^{-1}. Concatenating this coefficient in all voxels creates a vector β_t, per time t. The interaction of radiation with a scattering volumetric object is modeled by 3D radiative transfer, which includes multiple scattering. Define radiative transfer by an operator RT(β_t). There are various algorithms to implement RT(β_t), including Monte-Carlo [33, 35] and the spherical harmonic discrete ordinate method (SHDOM).
We use the latter, as it is considered trustworthy by the scientific community [12] and has open-source online codes [29]. Radiative transfer yields the radiance i(x, ω) at each location x in space and each light propagation direction ω. A camera observes the scene from a specific location, while each of the camera pixels samples a direction ω. Hence, imaging (the forward model) amounts to sampling the output of radiative transfer at the camera locations and pixels' lines of sight, while integrating over the camera exposure time and spectral bands. Camera sampling is denoted by a projection operator P_{x,ω}. To conclude, the forward model for the expected value of a pixel gray level at time t is

  g_{x,ω,t} = F(β_t) ≈ γ_cam P_{x,ω}{RT(β_t)}.   (2)

Here γ_cam expresses camera properties, including the lens aperture area, exposure time, spectral band, quantum efficiency and lens transmissivity. Eq. (2) assumes that the exposure time is sufficiently small, such that within this time, the scene and the camera pose vary insignificantly. Empirical measurements include random noise. The noise mainly originates from the discrete nature of photons and electric charges, which yields a Poisson process. There are additional noise sources, and their parameters can be extracted from the sensor specifications. Denote incorporation of noise into the expected signal by the operator N. Then, a raw measurement is

  y_{x,ω,t} = N{g_{x,ω,t}}.   (3)
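As an illustration, the noise operator N of Eq. (3) can be sketched as Poisson shot noise plus Gaussian readout noise and quantization. The default sensor numbers below (full well of 200,000 photo-electrons, readout standard deviation of 20 electrons, 9-bit readout) are the AirMSPI-like values quoted in Sec. 7; the function itself is an illustrative sketch, not the paper's code.

```python
import numpy as np

def simulate_measurement(g, full_well=200_000, read_std=20.0, bits=9, rng=None):
    """Sketch of N{g}: Poisson shot noise, Gaussian readout noise, quantization.

    `g` is an expected photo-electron image (Eq. 2). Returns quantized gray
    levels in [0, 2**bits - 1]. Parameter names are illustrative.
    """
    rng = np.random.default_rng(rng)
    shot = rng.poisson(g).astype(float)                  # photon shot noise
    noisy = shot + rng.normal(0.0, read_std, np.shape(g))  # readout noise
    noisy = np.clip(noisy, 0.0, full_well)
    levels = 2 ** bits - 1
    return np.round(noisy / full_well * levels)
```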
3. Representing Dynamic Volumetric Objects
In this section, we present an approximate representation of volumetric 3D objects which evolve gradually in time. There is a need and justification for the approximation. The need arises because in our work, there is often insufficient simultaneous data for high-quality 3D tomography at all times. Data is captured sequentially, while the object evolves. Hence, at best, we would recover a good approximation of the evolving object. The approximation can be satisfactory, however, if temporal samples are sufficiently dense, as elaborated in Sec. 4. The object has an evolving state, which is sampled at the time set T = {t_1, t_2, ..., t_{N_state}}. At time t ∈ T, the true object is represented by the instantaneous 3D extinction field β_t. Define a corresponding hidden field β_t^hidden. The instantaneous field is represented as a convex linear combination of the hidden fields:

  β_t = Σ_{t'∈T} w_t(t') β_{t'}^hidden.   (4)

All weights {w_t(t')} satisfy

  0 ≤ w_t(t') ≤ 1 and Σ_{t'∈T} w_t(t') = 1.   (5)

Equation (4) implies a non-negative correlation of the 3D field at t to the 3D field at any time t'. Correlation should decay with the time lag |t − t'|. We set the weights by a normal function

  w_t(t') = s exp(−|t − t'|²/σ²).   (6)

Here s is a normalization factor, set to satisfy Eq. (5). The parameter σ expresses the effective correlation time of the volumetric object. We elaborate on the value of σ in Sec. 4. Two limiting cases are illustrative, however. For σ → ∞, we have w_t(t') → 1/N_state. This means that the object β is effectively static. On the other hand, for σ → 0, we have w_t(t') → δ(t − t'), i.e. a Dirac delta function. This means that the object β varies so fast that at any time t its state is uncorrelated to the state at other times.
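For concreteness, the weights of Eqs. (5)-(6) can be sketched as follows, assuming the Gaussian (normal) kernel in the time lag described in the text; `temporal_weights` is an illustrative name.

```python
import numpy as np

def temporal_weights(t, samples, sigma):
    """Weights w_t(t') of Eq. (6): a Gaussian in the time lag |t - t'|,
    normalized so the weights sum to one, per Eq. (5)."""
    lags = t - np.asarray(samples, dtype=float)
    w = np.exp(-(lags ** 2) / sigma ** 2)
    return w / w.sum()  # the normalization factor s of Eq. (6)
```

The two limiting cases of the text are easy to verify: a very large σ yields nearly uniform weights 1/N_state (an effectively static object), while a very small σ concentrates all weight on the sample nearest to t.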
4. Bandwidth and Object Sampling
In Sec. 3, a 3D volumetric object which gradually evolves is represented using a linear superposition of sampled states. The linear superposition is controlled by a kernel, whose width is σ. Here we elaborate both on the sampling process and how σ emerges from the temporal nature of the object. The relation between a continuously varying object and its discrete temporal samples is governed by the Nyquist sampling theorem. This theorem states that, for an infinite time domain:
• Sampling loses no information if, for any time sample index k, |t_{k+1} − t_k| ≤ (2B)^{-1}, where B is the half-bandwidth of the object's temporal variations.
• Based on these lossless temporal samples, lossless reconstruction of the object is achieved by a linear superposition of the samples. The linear superposition is achieved by temporal convolution of the samples with a sinc kernel. This kernel has infinite temporal length. The kernel's effective half-width, defined by its first zero-crossing, is (2B)^{-1}.

Figure 2: The temporal power spectrum of a simulated cloud. A cutoff frequency, within which 95% of the signal power is contained, is marked in red.

In practice, the temporal domain, number of samples and reconstruction kernels are finite. Moreover, the object's temporal spectrum is often not completely band-limited by B, because some low-energy content has far higher frequencies. Consequently, sampling and reconstruction are lossy, yielding an approximate result. The reconstruction is not performed by a sinc kernel, but using a finite-length kernel, such as the cropped Gaussian w_t(t') of Eq. (6). Correspondingly, in our approximation, σ ∼ (2B)^{-1}. As an example, consider warm convective clouds. They are governed by air turbulence of decameter scale. In these scales [15], the correlation time of content in a voxel is about 20 to 50 seconds. This indicates that 4D spatiotemporal clouds can be recovered well using 4D spatiotemporal samples, if the temporal samples are about 30 seconds apart. Furthermore, this indicates the range of values of σ. Moreover, the entire lifetime of a warm convective cloud is typically measured in minutes. An additional illustration is given by a cloud simulation, which is described in detail in Sec. 7.1. The cloud evolves for about 10 minutes. For each cloud voxel, we calculated the power spectrum using the short-time Fourier transform. This power was then aggregated over all voxels. The total temporal spectrum is plotted in Fig. 2. The spectrum is effectively limited. The cutoff is not sensitive to evolving stages of the clouds. From this cutoff, a temporal sampling period of about 30 seconds or shorter encapsulates most of the energy of the temporal variations. Hence, we set σ ≈ 20 sec for clouds.
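The Fig. 2 analysis can be sketched as follows: compute the temporal power spectrum of a voxel time series and locate the frequency below which 95% of the power is contained. The paper aggregates short-time Fourier transforms over all voxels; a plain FFT on one series is used here for brevity, and the function name is illustrative.

```python
import numpy as np

def cutoff_frequency(signal, dt, energy_frac=0.95):
    """Frequency [Hz] below which `energy_frac` of the temporal power of
    `signal` (sampled every `dt` seconds) is contained."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    cum = np.cumsum(spec) / spec.sum()
    return freqs[np.searchsorted(cum, energy_frac)]
```

A signal dominated by slow variations then yields a low cutoff, and σ ∼ (2B)^{-1} follows from it.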
Figure 3: A static heterogeneous cloud and a single-voxel cloud (extending 20 meters) are recovered from nine viewpoints. The plots are of errors defined in Eq. (8).
5. Tomographic Angular Extent
Section 4 dealt with sampling of an object, as if 4D measurements are done in-situ. However, in CT, we have no direct access to β_t: we only measure projections y_t. As we discuss now, projections must have a wide angular breadth, while object evolution is small.

Consider an extreme case. Let a cloud be temporally constant and reside only in a single voxel, over the ocean. Viewed from space by two cameras simultaneously, cloud recovery here amounts to triangulation. In triangulation, the best cloud-localization resolution is obtained if the angular range between the two cameras is 90°. At small baselines, localization degrades linearly with the decreasing angular extent. When more than two cameras operate, localization behaves more moderately, but with a similar trend. Consider two criteria that had been used in 3D CT [23, 30, 33]. Per time sample t,

  δ_t = (‖β_t^true‖ − ‖β̂_t‖) / ‖β_t^true‖,  ε_t = ‖β_t^true − β̂_t‖ / ‖β_t^true‖   (7)

relate, respectively, to the relative bias of the object mass and the relative recovery error. We generalize them to the whole sample set t ∈ T by averaging:

  δ = (1/N_state) Σ_{t∈T} δ_t,  ε = (1/N_state) Σ_{t∈T} ε_t.   (8)

Fig. 3 plots the measures when CT attempts to recover a single-voxel cloud (extending 20 meters), when 9 cameras surround it from 500 km away. Above a sufficient total angular extent, recovery reaches a limiting excellent quality, but quality is very poor at narrow angle spans.

What happens in extended objects? In linear CT (as in medical X-ray CT), information loss due to limited-angle imaging is known as the missing cone of frequencies [2, 34]. In scattering CT, with the exception of very sparse objects, the missing-cone linear theory does not apply. However (see Fig. 3), there is a marked degradation of quality if the angular extent is narrow. Here, scattering CT is performed on a single state (fixed time) of a cloud simulated as in Sec. 7.1.

So far, this section dealt with static clouds. Clouds are considered nearly static between times t, t' if |t − t'| < σ. The viewing angular extent covered in those times (and in intermediate times) is denoted Θ(t, t'), in radians. So, within a time span ≈ σ, good recovery can be achieved only if 2Θ(t, t')/π is large. If it is low, then spatial (altitude) resolution in CT recovery is lost. Most CT systems cover a wide angular extent, eventually. So, a guarantee for quality is the angular rate. Define a dimensionless figure

  ρ = [2Θ(t, t')/π] · [σ/|t − t'|].   (9)

Good 4D recovery requires ρ ≳ 1, while temporal sampling satisfies the condition of Sec. 4. The more these conditions are violated, the worse 4D CT is expected to perform.
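The criteria of Eqs. (7)-(8) and the angular-rate figure of Eq. (9) are simple to compute; a sketch follows, assuming 1-norms in Eq. (7) (the norm choice is our assumption) and illustrative function names.

```python
import math
import numpy as np

def recovery_errors(beta_true, beta_hat):
    """Per-time criteria of Eq. (7): relative mass bias delta_t and relative
    recovery error epsilon_t (averaging over t then gives Eq. (8))."""
    norm = np.abs(beta_true).sum()
    delta = (norm - np.abs(beta_hat).sum()) / norm
    eps = np.abs(beta_true - beta_hat).sum() / norm
    return delta, eps

def angular_rate_figure(theta_rad, sigma, dt):
    """Dimensionless figure rho of Eq. (9): angular extent covered per
    correlation time. Good 4D recovery requires rho of order 1 or larger."""
    return (2.0 * theta_rad / math.pi) * (sigma / dt)
```

For example, covering a quarter circle (Θ = π/2) in 10 sec with σ = 20 sec gives ρ = 2, satisfying the ρ ≳ 1 condition.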
6. 4D Scattering Tomography
We now generalize Eq. (1) to 4D CT. At any time t ∈ T, the object is viewed simultaneously from a set of viewpoints C_t. The image data captured in viewpoint c ∈ C_t is denoted by the vector y_c. The image data captured simultaneously by all cameras c ∈ C_t is concatenated to a vector y_t. At that time, the modeled medium variables are represented by a vector β_t. At the corresponding time, as described in Sec. 3, there is a hidden representation of the medium, β_t^hidden. The inverse problem is now formulated by

  {β̂_t}_{t∈T} = argmin_{{β_t}_{t∈T}} Σ_{t∈T} E[y_t, F(β_t)].   (10)

We use

  E[y_t, F(β_t)] = (1/2) ‖y_t − F(β_t)‖².   (11)

Eq. (10) can be solved efficiently by gradient-based methods. Towards this, let us approximate the gradient of the cost being minimized in Eq. (10) by

  ∂/∂β_t Σ_{t'∈T} E[y_{t'}, F(β_{t'})] ≈ ∂/∂β_t^hidden Σ_{t'∈T} E[y_{t'}, F(β_{t'})].   (12)

Note that

  ∂/∂β_t^hidden Σ_{t'∈T} E[y_{t'}, F(β_{t'})] = Σ_{t'∈T} (∂E[y_{t'}, F(β_{t'})]/∂β_{t'}) (∂β_{t'}/∂β_t^hidden).   (13)

From Eq. (11),

  ∂E[y_{t'}, F(β_{t'})]/∂β_{t'} = [F(β_{t'}) − y_{t'}] ∂F(β_{t'})/∂β_{t'},   (14)

while from Eq. (4),

  ∂β_{t'}/∂β_t^hidden = w_{t'}(t).   (15)

Define the set of medium density fields at all sampled times B = {β_{t'}}_{t'∈T}. From Eqs. (12,13,14,15), for optimizing the field at time t, the approximate gradient is

  g_t(B) = Σ_{t'∈T} w_{t'}(t) [F(β_{t'}) − y_{t'}] ∂F(β_{t'})/∂β_{t'}.   (16)

A gradient-based approach then performs, per iteration k:

  β_t(k+1) = β_t(k) − α g_t(B_k),   (17)

where α is a step size and B_k = {β_{t'}(k)}_{t'∈T}. The approach in Eqs. (10-17) is not specific to scattering CT.
The formulation can apply generically to other inverse problems where data is acquired sequentially while the object evolves, and the forward model F is known and differentiable. In the case of scattering CT, F is discussed in Sec. 2. Computing the Jacobian ∂F(β_{t'})/∂β_{t'} is then complex. However, there are approximations to the Jacobian of 3D RT, which can be computed efficiently [30, 33], making the recovery tractable. The complexity of solving Eq. (10) is similar to 3D static CT by Eq. (1). We performed optimization using an L-BFGS-B solver [54]. Following [31], prior to the optimization process, the set of voxels to estimate is bounded using space carving [27]. Space carving bounds a 3D shape by back-projecting multi-view images. A voxel is labeled as belonging to the object if the number of back-projected rays that intersect this voxel is greater than a threshold. To adapt this approach to our dynamic framework, the shape was estimated in a coarse spatial grid and using a low threshold for labeling voxels as potentially part of the cloud.
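A minimal sketch of one iteration of Eqs. (16)-(17) follows, for a generic differentiable forward model. Here `forward(beta)` stands for F, `vjp(beta, r)` returns the vector-Jacobian product r ∂F/∂β, and `weights[tp][t]` is w_{t'}(t); all names are illustrative, and the paper actually uses an L-BFGS-B solver rather than a fixed-step update.

```python
import numpy as np

def gradient_step(betas, ys, weights, forward, vjp, alpha):
    """One 4D-CT iteration: the approximate gradient of Eq. (16),
    followed by the gradient-descent update of Eq. (17)."""
    residuals = [forward(b) - y for b, y in zip(betas, ys)]      # F(beta_t') - y_t'
    vjps = [vjp(b, r) for b, r in zip(betas, residuals)]         # r * dF/dbeta
    n = len(betas)
    new_betas = []
    for t in range(n):
        g_t = sum(weights[tp][t] * vjps[tp] for tp in range(n))  # Eq. (16)
        new_betas.append(betas[t] - alpha * g_t)                 # Eq. (17)
    return new_betas
```

Note how the weights couple the time samples: a residual at time t' contributes to the update of β_t in proportion to w_{t'}(t).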
7. Simulations
We test the proposed method on clouds. The atmosphere is a scattering medium which changes continuously. It includes several types of scattering particles, including water droplets, aerosols and molecules. Scattering by water droplets is usually much more dominant and spatiotemporally variable than by aerosols, hence we focus on the former. Scattering by air molecules does not require imaging: it follows Rayleigh theory. Molecular density changes mainly in height and is usually known globally using non-imaging sensors. Thus, the evolving concentration of cloud water droplets is the main unknown we sense and seek.
For realistic complexity, we use a rigorous simulation based on cloud physics, including evolution of the cloud microphysics. Clouds are simulated using the System of Atmospheric Modeling (SAM) [26], which is a non-hydrostatic, anelastic large eddy simulator (LES) [21, 39, 50]. It describes the turbulent atmosphere using equations of momentum, temperature, water mass balance and continuity. This model was coupled to a spectral (bin) microphysical model (HUJI SBM) [14, 25] of the droplets' size. It propagates the evolution of the droplets' size distribution by solving the equations for nucleation, diffusional growth, collision-coalescence and break-up. This is done on a logarithmic grid of 33 bins, starting at 2 µm.

The simulation runs according to the BOMEX case [45] of trade-wind cumulus clouds near Barbados. Humidity and potential temperature profiles are used as initial conditions, while the surface fluxes and large-scale forcing are constant. The mean horizontal background wind is zero. The horizontal boundary condition is cyclic. The domain is 5.12 km long, at 10 m resolution. Vertically, the resolution is 10 m from sea level to 3 km, and then the resolution coarsens to 50 m. The cloud top reaches 2 km. The simulation expresses an hour, of which 30 minutes includes the cloud's lifetime. The temporal resolution is 0.5 sec.

We present results using two different time-varying clouds: Cloud (i) (see Fig. 4) and Cloud (ii) (see Fig. 5). Using the time-varying size distribution of the cloud droplets, Mie theory [6] provides the spatiotemporal extinction field B = {β_{t'}}_{t'∈T} and the scattering phase function. The scene is irradiated by the sun, whose illumination angle changes in time relative to the Earth's coordinates, while cameras overfly the evolving cloud. The solar trajectory in Earth coordinates corresponds to Feb/03/2013 at 13:54:30 - 14:01:00 local time, around 38N 123W.
We tested several types of imaging setups:

Setup A: Three satellites orbit at a 500 km altitude, one after the other, with a fixed orbital arc-length between nearest-neighboring satellites. At mid-time of the simulation, t = (t_1 + t_{N_state})/2, the setup is symmetric around the nadir direction. Each satellite carries a perspective camera. Images are taken at a fixed period, with N_state = 7 acquisition times. This setup is illustrated in Fig. 6.

Baseline: The baseline uses all the accumulated 21 viewpoints of
Setup A. However, all viewpoints here have perspective cameras that simultaneously acquire the cloud. In other words, this baseline is not prone to errors that stem from temporal sampling. The baseline is used for recovery only at time t = (t_1 + t_{N_state})/2.

Figure 4: Cloud (i). Results of recovery by the Baseline and Setup A are compared to the ground-truth by a 3D presentation and scatter plots that use 20% of the data points, randomly selected for display clarity.

Setup B: This setup is similar to Setup A, but it uses only two satellites. Thus, at mid-time of the simulation, the setup spans a narrower angular range.

Setup C: A single camera, similar to the Multi-angle Spectro-Polarimeter Imager (AirMSPI) [10], mounted on an aircraft flying at a fixed azimuth relative to North. Imaging has a pushbroom scan geometry, with fine spatial resolution at the nadir view and a λ = 660 nm wavelength band. AirMSPI scans view angles in a step-and-stare mode [10]. Based on the AirMSPI PODEX campaign [9], we set 21 viewing angles along-track: ten symmetric off-nadir pairs and the nadir view. For example, three sample angles are illustrated in Fig. 7. Scanning a cloud domain in any single view angle takes a short time, during
which the cloud and solar directions are assumed constant. Dynamics are noticeable between view angles.

Figure 5: Cloud (ii). Results of recovery by the Baseline and Setup A are compared to the ground-truth.

Figure 6: Illustration of Setup A.

Figure 7: Illustration of Setup C. A domain is viewed at 21 pushbroom angles, sequentially.

A spherical harmonic discrete ordinate method (SHDOM) code [13] provides the numerical forward model F. Simulated measurements {y_t}_{t∈T} include noise. The noise model follows the AirMSPI sensor parameters [10, 46]. There, the sensor full-well depth is 200,000
Figure 8: Cloud (i). The criteria of Eq. (7) are marked by colored circles, whose saturation decays the farther the sampling time is from (t_1 + t_{N_state})/2. The criteria in Eq. (8) are marked by solid or dashed lines, with corresponding colors. The setting σ = ∞ refers to the solution by the state of the art, i.e. 3D static scattering tomography.

photo-electrons, readout noise has a standard deviation of 20 electrons, and the overall readout is quantized to 9 bits. The rendered and noisy images served as input to 4D tomographic reconstruction. The voxel size in the recovery was set to a fixed horizontal and vertical resolution. For parallelization, optimization ran on a computer cluster, where each computer core was dedicated to rendering a modeled image from a distinct angle. The optimization was initialized by {β_t}_{t∈T} = 1 km^{-1}. Convergence was reached in several dozen iterations. Depending on the number of input images, it took between minutes and a couple of hours to complete, in total. From Sec. 4, we assess that a value σ ∼ 20 sec is natural. Indeed, this is supported numerically in the plots of ε_t, δ_t, ε, δ for Cloud (i) (Fig. 8). Analogous plots for
Cloud (ii) are presented in the Appendix. The 3D tomographic results using Setup A are shown in Figs. 4 and 5, corresponding to Cloud (i) and Cloud (ii). Both illustrate the state at t = (t_1 + t_{N_state})/2. Recovery used σ = 20 sec.
Figure 9: Setup C. Error measures (7) of Cloud (i) at time t = (t_1 + t_{N_state})/2, for different inter-angle acquisition temporal intervals. The setting σ = ∞ refers to the solution by the state of the art, i.e. 3D static scattering tomography.

More results, particularly relating to Setup B, are shown in the Appendix.
Setup C uses a single platform, which is challenging. Results depend significantly on how fast the aircraft flies, i.e. how long it takes to capture the cloud from a variety of angles (up to 21 angles). Fig. 9 compares the results for the recovery at t = (t_1 + t_{N_state})/2 for inter-angle time intervals of 5 sec, 10 sec and 20 sec. As expected, quality (ε) improves with velocity. Moreover, if the camera moves slowly (long time interval between angular samples), results improve by using a longer temporal support, observing the cloud from a wider angular range, despite its dynamics.
8. Experiment: Real World AirMSPI Data
We follow the experimental approach of [30], and use real-world data acquired by JPL's AirMSPI, which flies on board NASA's ER-2. The geometry is exactly as described in Setup C in Sec. 7.2, including location and time. We examine an atmospheric domain in East-North-Up coordinates. The domain is discretized into voxels; because N_state = 21, the total number of unknowns is 10,752,000. The inter-angle time interval in this experiment is around
Figure 10: Recovered 3D extinction field using real data (a). A raw AirMSPI nadir image (b). Corresponding rendered views of a cloud that was estimated using data that had excluded the nadir, either by our 4D CT approach (c) or current static 3D CT (d). Gamma correction was applied to (b,c,d) for display clarity. (e) A scatter plot of rendered vs. raw AirMSPI images at nadir.

20 sec. Based on Fig. 9, we set σ = 60 sec here. We want to focus on dynamic tomography of the evolving cloud, and not on global motion due to wind in the cloud field. Hence, we used the pre-processing approach of [30] to align the cloud images. Additionally, the ground albedo is estimated to be 0.04. The pre-processing and albedo estimation are described in the Appendix.

A recovered volumetric reconstruction for one time instant is displayed in Fig. 10. We have no ground-truth for the cloud content in this case. Hence we check for consistency using cross-validation. For this, we excluded the nadir image (Fig. 10b) from the recovery process. Thus tomography used 20 out of the 21 raw views. Afterward, we placed the recovered cloud in SHDOM physics-based rendering [13], to generate the missing nadir view. The result is then compared to the ground-truth missing view. Fig. 10 compares the result of this process for two solutions: our 4D tomographic solution, and the state-of-the-art, i.e., 3D static scattering tomography.

The same cross-validation process was repeated for the ±54° view angles. Quantitatively, we measure the fitting error using Eq. (11). The results are summarized in Table 1.

                 +54° view   nadir view   −54° view
Ours                0.96        0.38         0.24
Static solution     1.73        0.94         0.61

Table 1: Analysis of empirical data in different view angles. Quantitative fit (11) of our 4D result to the data, as compared to the error of the state-of-the-art static 3D CT.
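The cross-validation scores in Table 1 are the data-fit cost of Eq. (11), evaluated on a held-out view; a one-line sketch with an illustrative name:

```python
import numpy as np

def data_fit(y_raw, y_rendered):
    """Fit of Eq. (11): half the squared L2 discrepancy between a held-out
    raw view and the same view rendered from the recovered cloud."""
    r = np.asarray(y_raw, dtype=float) - np.asarray(y_rendered, dtype=float)
    return 0.5 * float(np.dot(r.ravel(), r.ravel()))
```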
9. Discussion
We derive a framework for 4D CT of dynamic objects that scatter, using moving cameras. The natural temporal evolution of an object indicates the temporal and angular sampling needed for a good reconstruction. Given these conditions, 4D CT recovery can be done, even with a small number of cameras. We believe that our approach can be relevant in additional tomographic setups [19] that rely on radiative transfer. Some elements of this work are generic, beyond scattering CT. Thus, it is worth applying the approach to other tomographic modalities. Our findings can significantly improve various research fields, including bio-medical CT, flow imaging and atmospheric sciences.
Acknowledgment
We are grateful to Aviad Levis and Jesse Loveridge for the pySHDOM code and for being responsive to questions about it. We are grateful to Vadim Holodovsky and Omer Shubi for helping in setting elements of the code. We thank the following people: Ilan Koren, Orit Altaratz and Yael Sde-Chen for useful discussions and good advice; Danny Rosenfeld for pointing out Ref. [15] to us; Johanan Erez, Ina Talmon and Daniel Yagodin for technical support. Yoav Schechner is the Mark and Diane Seiden Chair in Science at the Technion. He is a Landau Fellow, supported by the Taub Foundation. His work was conducted in the Ollendorff Minerva Center. Minerva is funded through the BMBF. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 810370-ERC-CloudCT.
Appendices
We now present several appendices. Appendix A elaborates on pre-processing which is applied to the real-world measurements presented in Sec. 8. This data was collected by the AirMSPI instrument. Appendix B provides an additional example of the bandwidth of the power spectrum of a cloud and more simulation results. Appendix C analyzes the computational complexity of the method.
Figure 11: Geometry of the AirMSPI real-world setup which led to the data presented in Sec. 8. The color represents the locations of the cloud and the AirMSPI instrument at the different time states. The cloud's outer contour and its corresponding center of mass, marked by a circle, are presented per state. The AirMSPI location and velocity are marked by arrows. The arrows point along the AirMSPI flight-direction azimuth relative to the North. Due to the domain size, not all AirMSPI locations are illustrated here. Due to wind, the cloud moves at 57 km/h in an azimuth relative to the North.

A. Pre-processing Real World Data
Sec. 8 presents results using real-world measurements. The data were acquired by the AirMSPI instrument. As explained in Sec. 8, while AirMSPI flies, clouds move due to wide-scale wind at their altitude. The geometry of AirMSPI's path and the cloud drift during the experiment is presented in Fig. 11. In order to eliminate the influence of wide-scale wind, a registration process of the cloud images is done. Moreover, for tomographic recovery, we need an assessment of the Earth surface albedo under the clouds. This section describes how pre-processing estimates the wind and albedo.
Figure 12: Illustration of estimation of cloud altitude using its shadow.
A.1. Wind Estimation
Clouds are segmented from the surface automatically [47]. Cloudy pixels are then used to estimate the cloud center of mass in each image [30]. A registration of these centers of mass can be done by triangulation. However, triangulation of images of a moving object using a translating camera has an inherent ambiguity. This ambiguity can be solved if the cloud height is known. In this work, we assess the altitude of a cloud by its shadow [1, 20, 32]. Let $(x_{\rm cl}, y_{\rm cl}, z_{\rm cl})$ and $(x_{\rm shad}, y_{\rm shad}, 0)$ be a point in a cloud and its corresponding shadow point on the Earth surface, respectively (see Fig. 12). Let
$$r_{\rm shad} = \sqrt{(x_{\rm cl} - x_{\rm shad})^2 + (y_{\rm cl} - y_{\rm shad})^2}.$$
We obtain $x_{\rm cl}, y_{\rm cl}, x_{\rm shad}$ and $y_{\rm shad}$ from the AirMSPI images. Given the solar zenith angle relative to the nadir, $\theta_{\rm sun}$, the altitude $z_{\rm cl}$ satisfies
$$z_{\rm cl} = \frac{r_{\rm shad}}{\tan(\theta_{\rm sun})}. \quad (18)$$
For the example shown in Sec. 8, we estimated the cloud base height; the cloud's vertical extent does not exceed 1000m, which makes our approximation reasonable. We approximate the cloud horizontal velocity by projecting the images from the locations of the camera to the altitude $z_{\rm cl}$. From the center of mass of these images, we assess the velocity. We register the camera locations so the projections of the center of mass of all images intersect at the same point at altitude $z_{\rm cl}$. The images and the registered camera locations are the input for 4D recovery.
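The altitude estimate of Eq. (18) is simple to implement. Below is a minimal sketch (not the authors' code), assuming $\theta_{\rm sun}$ is a zenith angle measured from the nadir; the coordinates and the angle are hypothetical inputs:

```python
import numpy as np

def cloud_altitude(x_cl, y_cl, x_shad, y_shad, theta_sun):
    """Cloud altitude from its shadow, per Eq. (18).

    (x_cl, y_cl), (x_shad, y_shad): horizontal coordinates [m] of a cloud
    point and its corresponding shadow point, read off the images.
    theta_sun: solar zenith angle [rad], measured from the nadir.
    """
    # Horizontal offset between the cloud point and its shadow.
    r_shad = np.hypot(x_cl - x_shad, y_cl - y_shad)
    return r_shad / np.tan(theta_sun)

# Hypothetical example: a 1000 m horizontal shadow offset under a 45 deg
# solar zenith angle implies a cloud altitude of 1000 m.
z_cl = cloud_altitude(0.0, 0.0, 600.0, 800.0, np.deg2rad(45.0))
```

Once $z_{\rm cl}$ is fixed, the triangulation ambiguity of the moving cloud is resolved and the camera locations can be registered as described above.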
A.2. Surface Albedo Estimation

3D radiative transfer calculations require the surface albedo. We use non-cloudy pixels to estimate the albedo. The data apply to the coast of California, 38N 122W, on Feb/03/2013 at 13:30 local time. Let $\mathcal{Y}$ be a set of non-cloudy pixels. We estimate the surface albedo $a$ as
$$\hat{a} = \arg\min_a \sum_{y \in \mathcal{Y}} \| y - \mathcal{F}(\beta_{\rm air}; a) \|^2. \quad (19)$$
Here $\mathcal{F}(\beta_{\rm air}; a)$ is a rendering (forward) model where the surface albedo is set to $a$ and the atmospheric medium contains no clouds. That is, sunlight interacts only with the air and the surface. Scattering by air is assumed to be known [17, 48]. The optimization problem is solved by the Brent minimization method [8], implemented in the SciPy package [44]. For the example shown in Sec. 8, the surface albedo is estimated to be 0.04.
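As an illustration of Eq. (19), the following sketch fits a scalar albedo by Brent's method via SciPy, as done here. The forward model below is a hypothetical linear stand-in for the cloud-free rendering $\mathcal{F}(\beta_{\rm air}; a)$ (the real one is a full 3D radiative-transfer computation), and the pixel radiances are made up:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical stand-in for the cloud-free forward model F(beta_air; a):
# radiance = surface-reflected term + path radiance from air scattering.
def forward_model(a, gain=0.9, path_radiance=0.05):
    return gain * a + path_radiance

y = np.array([0.085, 0.088, 0.087])  # made-up non-cloudy pixel radiances

def cost(a):
    # Data-fit term of Eq. (19), summed over the non-cloudy pixel set.
    return np.sum((y - forward_model(a)) ** 2)

# Brent's derivative-free scalar minimization, as in the paper.
res = minimize_scalar(cost, method='brent')
a_hat = res.x
```

For these made-up numbers, the fit recovers an albedo near 0.04, consistent in magnitude with the value estimated for the real data.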
B. Additional Simulations

B.1. Cloud Temporal Spectrum
Sec. 4 indicates that the temporal power spectrum of a convective cloud at 10m resolution is effectively band-limited. Thus, a temporal sampling period of 25[sec] or shorter is required. We assess this in an additional cloud simulation. We conducted a single-cloud simulation in high resolution, using small changes relative to the simulation described in Sec. 7.1. The simulation parameters and setting are similar. However, the perturbation that initiates the convection and turbulent flow has a smaller horizontal size. This creates a smaller cloud. The resulting cutoff frequency of the temporal power spectrum (Fig. 13) is not sensitive to the evolving stages of the cloud. Here the required temporal sampling period is 35[sec] or shorter. This is more tolerable compared to the temporal sampling period in Sec. 4.
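The cutoff and the implied sampling period can be assessed numerically. The following toy sketch (not the paper's analysis code) estimates the frequency containing 95% of the spectral power of a synthetic voxel time series, and the corresponding Nyquist sampling period:

```python
import numpy as np

def cutoff_frequency(signal, dt, power_fraction=0.95):
    """Lowest frequency below which `power_fraction` of the one-sided
    spectral power of `signal` is contained. dt is the sampling period [s]."""
    sig = np.asarray(signal) - np.mean(signal)   # drop the DC component
    power = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    cum = np.cumsum(power) / np.sum(power)
    return freqs[np.searchsorted(cum, power_fraction)]

# Synthetic series: a slow 0.01 Hz oscillation plus weak noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 600.0, 1.0)                   # 10 minutes at 1 s steps
x = np.sin(2 * np.pi * 0.01 * t) + 0.01 * rng.standard_normal(t.size)
f_c = cutoff_frequency(x, dt=1.0)                # dominant band edge
t_sample = 1.0 / (2.0 * f_c)                     # Nyquist sampling period
```

For this synthetic signal the cutoff is 0.01[Hz], so sampling every 50[sec] or faster suffices; the same computation, applied to simulated cloud voxel series, yields the sampling periods quoted above.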
B.2. Additional Tomography Results

Recall that our method is demonstrated on two simulated clouds, Cloud (i) and Cloud (ii), using several types of imaging setups: Setup A, Setup B and Baseline. Moreover, recall the error criteria of Eqs. (7,8). Fig. 14 shows $\varepsilon_t, \delta_t, \varepsilon, \delta$ for Cloud (ii). It reinforces the assessment that the value of σ discussed in Sec. 4 is natural. Figs. 15 and 16 respectively visualize the results of Cloud (i) and Cloud (ii). The 3D cut-sections of the error $|\beta^{\rm true}_t(\mathbf{x}) - \hat{\beta}_t(\mathbf{x})|$ at $t = (t_1 + t_{N_{\rm state}})/2$ are presented for Setup A, Setup B and Baseline in Figs. 15 and 16 [Top]. Fig. 16 [Bottom] uses scatter plots to compare the ground truth to the results obtained by either the Baseline, Setup A or Setup B.

Figure 13: A cutoff frequency, within which 95% of the signal power is contained, is marked in red.
Figure 14: Cloud (ii). The criteria of Eq. (7) are marked by colored circles, whose saturation decays the farther the sampling time is from $(t_1 + t_{N_{\rm state}})/2$. The criteria in Eq. (8) are marked by solid or dashed lines, with corresponding colors. The setting σ = ∞ refers to the solution by the state of the art, i.e., 3D static scattering tomography.

Figure 15: Cloud (i). 3D cut-sections of the error $|\beta^{\rm true}_t(\mathbf{x}) - \hat{\beta}_t(\mathbf{x})|$ at $t = (t_1 + t_{N_{\rm state}})/2$, for Baseline, Setup A and Setup B.

Figure 16: Cloud (ii) comparison for Baseline, Setup A and Setup B. [Top] 3D cut-sections of the error $|\beta^{\rm true}_t(\mathbf{x}) - \hat{\beta}_t(\mathbf{x})|$ at $t = (t_1 + t_{N_{\rm state}})/2$. [Bottom] Scatter plots that use a randomly selected 20% of the data points, for display clarity.

C. Computational Complexity
The time complexity of solving the 4D CT inverse problem (Eq. 10) is governed by the gradient calculation. Recall the formulation of the approximate gradient,
$$g_t(B) = \sum_{t' \in \mathcal{T}} w_{t'}(t) \left[\mathcal{F}(\beta_{t'}) - y_{t'}\right] \frac{\partial \mathcal{F}(\beta_{t'})}{\partial \beta_{t'}}. \quad (20)$$
Computing the Jacobian $\partial \mathcal{F}(\beta_{t'})/\partial \beta_{t'}$ is complex, thus it is established numerically by a surrogate function that evolves through iterations [30, 33]. Calculating the gradient includes two dominant time-consuming processes that are executed in alternation. The first process calculates the forward model for the $N_{\rm state}$ cloud states $\{\mathcal{F}(\beta_{t'})\}_{t' \in \mathcal{T}}$. The second process sums over the entire set of measurements, which does not depend on the number of cloud states that we seek to recover. A spherical harmonics discrete ordinate method (SHDOM) code is used for computing the numerical forward model $\mathcal{F}(\cdot)$ and the Jacobian. SHDOM iteratively updates the estimation of 3D radiation fields until convergence. Calculating the forward model for the $N_{\rm state}$ cloud states can be done in parallel. Thus, the time complexity is governed by the temporal state for which the SHDOM code takes the longest time to compute the forward model. By calculating the forward model for all cloud states in parallel, the time complexity of the gradient calculation is insensitive to the number of cloud states $N_{\rm state}$. As a numerical example, we used 20 iterations of the L-BFGS-B optimization. Using measurements of Cloud (i) acquired by
Setup A, the run-time of the solution by our method was 501[sec]. The static solution took 301[sec]. In both, the computer was an Intel® Xeon® Gold 6240 CPU @ 2.60GHz with 72 cores. Although our method recovers $N_{\rm state} = 7$ times more voxels, the run-time is less than twice that of the static solution. The time difference is caused by overheads of saving and loading larger data with our method, and non-optimal task division among the cores.

References

[1] Austin Abrams, Kylia Miskell, and Robert Pless. The episolar constraint: Monocular shape from shadow correspondence. In
Proc. IEEE CVPR, pages 1407–1414, 2013. 9
[2] David A Agard, Yasushi Hiraoka, Peter Shaw, and John W Sedat. Fluorescence microscopy in three dimensions. Methods in Cell Biology, 30:353–377, 1989. 4
[3] NASA Federal agency. NASA Earth Data Site. https://worldview.earthdata.nasa.gov/. 9
[4] Amit Aides, Aviad Levis, Vadim Holodovsky, Yoav Y Schechner, Dietrich Althausen, and Adi Vainiger. Distributed sky imaging radiometry and tomography. In Proc. IEEE ICCP, pages 1–12, 2020. 1, 2
[5] Rushil Anirudh, Hyojin Kim, Jayaraman J Thiagarajan, K Aditya Mohan, Kyle Champley, and Timo Bremer. Lose the views: Limited angle CT reconstruction via implicit sinogram completion. In Proc. IEEE CVPR, pages 6343–6352, 2018. 1
[6] Craig F Bohren and Donald R Huffman. Absorption and Scattering of Light by Small Particles. John Wiley & Sons, 2008. 5
[7] Stacey W Boland, David J Diner, John C Pearson, and Kevin A Burke. NASA's Multi-Angle Imager for Aerosols (MAIA) earth venture instrument investigation. AGUFM, 2018:GH41C–1443, 2018. 2
[8] Richard P Brent. Algorithms for Minimization Without Derivatives. Courier, 2013. 10
[9] David J Diner, Michael J Garay, Olga V Kalashnikova, Brian E Rheingans, Sven Geier, Michael A Bull, Veljko M Jovanovic, Feng Xu, Carol J Bruegge, Anthony B Davis, et al. Airborne multiangle spectropolarimetric imager (AirMSPI) observations over California during NASA's polarimeter definition experiment (PODEX). In Polarization Science and Remote Sensing VI, volume 8873, page 88730B. SPIE, 2013. 6
[10] David J Diner, Feng Xu, Michael J Garay, John V Martonchik, Brian E Rheingans, Sven Geier, Anthony B Davis, BR Hancock, Veljko M Jovanovic, Michael A Bull, et al. The Airborne Multiangle Spectropolarimetric Imager (AirMSPI): a new tool for aerosol and cloud remote sensing. Atmos. Meas. Tech., 6(8):2007, 2013. 2, 6
[11] Marie L Eckert, Wolfgang Heidrich, and Nils Thuerey. Coupled fluid density and motion from single views. In Computer Graphics Forum, volume 37, pages 47–58. Wiley Online Library, 2018. 1
[12] K Franklin Evans. The spherical harmonics discrete ordinate method for three-dimensional atmospheric radiative transfer. Journal of the Atmospheric Sciences, 55(3):429–446, 1998. 2
[13] K Franklin Evans and WJ Wiscombe. Improvements to the SHDOM radiative transfer modeling package. In Proc. 13th ARM Sci. Team Meeting, 2003. 6, 8
[14] Jiwen Fan, Mikhail Ovtchinnikov, Jennifer M Comstock, Sally A McFarlane, and Alexander Khain. Ice formation in Arctic mixed-phase clouds: Insights from a 3-D cloud-resolving model with size-resolved aerosol and cloud microphysics. JGR: Atmospheres, 114(D4), 2009. 5
[15] TT Fujita. Mesoscale classifications: their history and their application to forecasting. In Mesoscale Meteorology and Forecasting, pages 18–35. Springer, 1986. 2, 3, 8
[16] Ioannis Gkioulekas, Anat Levin, and Todd Zickler. An evaluation of computational imaging techniques for heterogeneous inverse scattering. In ECCV, pages 685–701. Springer, 2016. 1
[17] Howard R Gordon, James W Brown, and Robert H Evans. Exact Rayleigh scattering calculations for use with the Nimbus-7 coastal zone color scanner. Applied Optics, 27(5):862–871, 1988. 10
[18] James Gregson, Michael Krimerman, Matthias B Hullin, and Wolfgang Heidrich. Stochastic tomography and its applications in 3D imaging of mixing fluids. ACM TOG, 31(4):1–10, 2012. 1
[19] Jörg Gumbel, Linda Megner, Ole Martin Christensen, Nickolay Ivchenko, Donal P Murtagh, Seunghyuk Chang, Joachim Dillner, Terese Ekebrand, Gabriel Giono, Arvid Hammar, et al. The MATS satellite mission: gravity wave studies by mesospheric airglow/aerosol tomography and spectroscopy. Atmospheric Chemistry and Physics, 20(1):431–455, 2020. 8
[20] Michael Hatzitheodorou and John R Kender. An optimal algorithm for the derivation of shape from shadows. In Proc. IEEE CVPR, pages 486–487, 1988. 9
[21] Thijs Heus, Harm JJ Jonker, Harry EA Van den Akker, Eric J Griffith, Michal Koutek, and Frits H Post. A statistical approach to the life cycle analysis of cumulus clouds selected in a virtual reality environment. JGR: Atmospheres, 114(D6), 2009. 5
[22] Harish P Hiriyannaiah. X-ray computed tomography for medical imaging. IEEE Signal Processing Magazine, 14(2):42–59, 1997. 1
[23] Vadim Holodovsky, Yoav Y Schechner, Anat Levin, Aviad Levis, and Amit Aides. In-situ multi-view multi-scattering stochastic tomography. In Proc. IEEE ICCP, pages 1–12, 2016. 4
[24] Anders P Kaestner, Beat Munch, and Pavel Trtik. Spatiotemporal computed tomography of dynamic processes. Optical Engineering, 50(12):123201, 2011. 1
[25] A Khain, A Pokrovsky, M Pinsky, A Seifert, and Vaughan Phillips. Simulation of effects of atmospheric aerosols on deep turbulent convective clouds using a spectral microphysics mixed-phase cumulus cloud model. Part I: Model description and possible applications. JAS, 61(24):2963–2982, 2004. 5
[26] Marat F Khairoutdinov and David A Randall. Cloud resolving modeling of the ARM summer 1997 IOP: Model formulation, results, uncertainties, and sensitivities. JAS, 60(4):607–625, 2003. 5
[27] Kiriakos N Kutulakos and Steven M Seitz. A theory of shape by space carving. IJCV, 38(3):199–218, 2000. 5
[28] Vincent Leroy, Jean-Sébastien Franco, and Edmond Boyer. Multi-view dynamic shape refinement using local temporal integration. In Proc. IEEE ICCV, pages 3094–3103, 2017. 2
[29] Aviad Levis, Jesse Loveridge, and Amit Aides. Pyshdom. 2020. Available online. https://github.com/aviadlevis/pyshdom. 2
[30] Aviad Levis, Yoav Y Schechner, Amit Aides, and Anthony B Davis. Airborne three-dimensional cloud tomography. In Proc. IEEE ICCV, pages 3379–3387, 2015. 2, 4, 5, 7, 8, 9, 12
[31] Aviad Levis, Yoav Y Schechner, and Anthony B Davis. Multiple-scattering microphysics tomography. In Proc. IEEE CVPR, pages 6740–6749, 2017. 2, 5
[32] Gregoris Liasis and Stavros Stavrou. Satellite images analysis for shadow detection and building height estimation. ISPRS Journal of Photogrammetry and Remote Sensing, 119:437–450, 2016. 9
[33] Tamar Loeub, Aviad Levis, Vadim Holodovsky, and Yoav Y Schechner. Monotonicity prior for cloud tomography. In Proc. ECCV, pages 24–29, 2020. 2, 4, 5, 12
[34] Fernando Macias-Garza, Kenneth R Diller, and Alan C Bovik. Missing cone of frequencies and low-pass distortion in three-dimensional microscopic images. Optical Engineering, 27(6):276461, 1988. 4
[35] Bernhard Mayer. Radiative transfer in the cloudy atmosphere. In EPJ Web of Conferences, volume 1, pages 75–99. EDP Sciences, 2009. 2
[36] Brent A McBride, J Vanderlei Martins, Henrique MJ Barbosa, William Birmingham, and Lorraine A Remer. Spatial distribution of cloud droplet size properties from Airborne Hyper-Angular Rainbow Polarimeter (AirHARP) measurements. AMT, 13(4):1777–1796, 2020. 2
[37] Armin Mustafa, Hansung Kim, Jean-Yves Guillemaut, and Adrian Hilton. Temporally coherent 4D reconstruction of complex dynamic scenes. In Proc. IEEE CVPR, pages 4660–4669, 2016. 2
[38] Srinivasa G Narasimhan, Shree K Nayar, Bo Sun, and Sanjeev J Koppal. Structured light in scattering media. In Proc. IEEE ICCV, volume 1, pages 420–427, 2005. 1
[39] RAJ Neggers, HJJ Jonker, and AP Siebesma. Size statistics of cumulus cloud populations in large-eddy simulations. JAS, 60(8):1060–1074, 2003. 5
[40] Tim L Neilsen, Jose-Vanderlei Martins, RA Fernandez Borda, Cameron Weston, Crystal Frazier, Dominik Cieslak, and Kevin Townsend. The Hyper-Angular Rainbow Polarimeter (HARP) CubeSat Observatory and the characterization of cloud properties. In AGU Fall Meeting Abstracts, volume 2015, pages A43A–0237, 2015. 2
[41] Tinsu Pan, Ting-Yim Lee, Eike Rietzel, and George TY Chen. 4D-CT imaging of a volume influenced by respiratory motion on multi-slice CT. Medical Physics, 31(2):333–340, 2004. 1, 2
[42] Yiming Qian, Minglun Gong, and Yee-Hong Yang. Stereo-based 3D reconstruction of dynamic fluid surfaces by global optimization. In Proc. IEEE CVPR, pages 1269–1278, 2017. 1, 2
[43] Klaus Schilling, Yoav Y Schechner, and Ilan Koren. CloudCT - computed tomography of clouds by a small satellite formation. In Proc. IAA Symposium on Small Satellites for Earth Observation, 2019. 2
[44] The SciPy community. SciPy. 2020. Available online. https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize_scalar.html. 10
[45] A Pier Siebesma, Christopher S Bretherton, Andrew Brown, Andreas Chlond, Joan Cuxart, Peter G Duynkerke, Hongli Jiang, Marat Khairoutdinov, David Lewellen, Chin-Hoh Moeng, et al. A large eddy simulation intercomparison study of shallow cumulus convection. JAS, 60(10):1201–1219, 2003. 5
[46] Gerard Van Harten, David J Diner, Brian JS Daugherty, Brian E Rheingans, Michael A Bull, Felix C Seidel, Russell A Chipman, Brian Cairns, Andrzej P Wasilewski, and Kirk D Knobelspiesse. Calibration and validation of Airborne Multiangle Spectropolarimetric Imager (AirMSPI) polarization measurements. Applied Optics, 57(16):4499–4513, 2018. 6
[47] Flavio R Velasco. Thresholding using the ISODATA clustering algorithm. Technical report, Univ. of Maryland College Park, Computer Science Center, 1979. 9
[48] Menghua Wang. A refinement for the Rayleigh radiance computation with variation of the atmospheric pressure. International Journal of Remote Sensing, 26(24):5651–5663, 2005. 10
[49] TE Wright, M Burton, DM Pyle, and T Caltabiano. Scanning tomography of SO2 distribution in a volcanic gas plume. Geophysical Research Letters, 35(17), 2008. 1
[50] Huiwen Xue and Graham Feingold. Large-eddy simulations of trade wind cumuli: Investigation of aerosol indirect effects. JAS, 63(6):1605–1622, 2006. 5
[51] Guangming Zang, Ramzi Idoughi, Ran Tao, Gilles Lubineau, Peter Wonka, and Wolfgang Heidrich. Space-time tomography for continuously deforming objects. ACM Trans. Graph., 37, 2018. 1
[52] Guangming Zang, Ramzi Idoughi, Ran Tao, Gilles Lubineau, Peter Wonka, and Wolfgang Heidrich. Warp-and-project tomography for rapidly deforming objects. ACM TOG, 38(4):1–13, 2019. 1
[53] Guangming Zang, Ramzi Idoughi, Congli Wang, Anthony Bennett, Jianguo Du, Scott Skeen, William L Roberts, Peter Wonka, and Wolfgang Heidrich. TomoFluid: Reconstructing dynamic fluid from sparse view videos. In Proc. IEEE CVPR, pages 1870–1879, 2020. 1
[54] Ciyou Zhu, Richard H Byrd, Peihuang Lu, and Jorge Nocedal. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM TOMS, 23(4):550–560, 1997. 5