Detecting Changes Between Optical Images of Different Spatial and Spectral Resolutions: a Fusion-Based Approach
Vinicius Ferraris, Nicolas Dobigeon, Senior Member, IEEE, Qi Wei, Member, IEEE, and Marie Chabert
Abstract
Change detection is one of the most challenging issues when analyzing remotely sensed images. Comparing several multi-date images acquired through the same kind of sensor is the most common scenario. Conversely, designing robust, flexible and scalable algorithms for change detection becomes even more challenging when the images have been acquired by two different kinds of sensors. This situation arises in case of emergency under critical constraints. This paper presents, to the best of the authors' knowledge, the first strategy to deal with optical images characterized by dissimilar spatial and spectral resolutions. Typical considered scenarios include change detection between panchromatic or multispectral and hyperspectral images. The proposed strategy consists of a 3-step procedure: i) inferring a high spatial and spectral resolution image by fusion of the two observed images, characterized one by a low spatial resolution and the other by a low spectral resolution, ii) predicting two images with respectively the same spatial and spectral resolutions as the observed images by degradation of the fused one and iii) implementing a decision rule on each pair of observed and predicted images characterized by the same spatial and spectral resolutions to identify changes. The performance of the proposed framework is evaluated on real images with simulated realistic changes.
Index Terms
Change detection, image fusion, different resolution, hyperspectral imagery, multispectral imagery.
Part of this work has been supported by Coordenação de Aperfeiçoamento de Ensino Superior (CAPES), Brazil, and by EU FP7 through the ERANETMED JC-WATER Program, MapInvPlnt Project ANR-15-NMED-0002-02. V. Ferraris, N. Dobigeon and M. Chabert are with University of Toulouse, IRIT/INP-ENSEEIHT, France (email: {vinicius.ferraris, nicolas.dobigeon, marie.chabert}@enseeiht.fr). Q. Wei is with the Department of Engineering, University of Cambridge, CB2 1PZ, Cambridge, UK (email: [email protected]).
I. INTRODUCTION
Change detection (CD) is one of the most investigated issues in remote sensing [1]–[4]. As the name suggests, it consists in analyzing two or more multi-date (i.e., acquired at different time instants) images of the same scene to detect potential changes. Applications are diverse, from natural disaster monitoring to long-term tracking of urban and forest growth. Optical images have been the most studied remote sensing data for CD. They are generally well suited to map land-cover types at large scales [5]. Multi-band optical sensors use a spectral window with a particular width, often called spectral resolution, to sample part of the electromagnetic spectrum of the incoming light. The term spectral resolution can also refer to the number of spectral bands, and multi-band images can be classified according to this number [6], [7].
Panchromatic (PAN) images are characterized by a low spectral resolution, sensing part of the electromagnetic spectrum with a single and generally wide spectral window. Conversely, multispectral (MS) and hyperspectral (HS) images have smaller spectral windows, allowing part of the spectrum to be sensed with higher precision. Multi-band optical imaging has become a very common modality of remote sensing, boosted by the advent of new finer spectral sensors [8]. One of the major advantages of multi-band images is the possibility of detecting changes by exploiting not only the spatial but also the spectral information. There is no specific convention regarding the numbers of bands that characterize MS and HS images. Yet, MS images generally consist of about a dozen spectral bands while HS images may have more than a hundred. In complement to this spectral resolution taxonomy, one may describe multi-band images in terms of their spatial resolution, measured by the ground sampling interval (GSI), i.e., the distance, on the ground, between the centers of two adjacent pixels [5], [7], [9]. Informally, it represents the smallest object that can be resolved, up to a specific pixel size. Then, the higher the resolution, the smaller the recognizable details on the ground: a high resolution (HR) image has a smaller GSI and finer details than a low resolution (LR) one, where only coarse features are observable. Each image sensor is designed based on a particular signal-to-noise ratio (SNR). The reflected incoming light must be of sufficient energy to guarantee a sufficient SNR and thus a proper acquisition. To increase the energy level of the arriving signal, either the instantaneous field of view (IFOV) or the spectral window width must be increased. However, these two solutions are mutually exclusive. In other words, optical sensors suffer from an intrinsic energy trade-off that limits the possibility of acquiring images of both high spatial and high spectral resolutions [9], [10]. This trade-off prevents any simultaneous decrease of both the GSI and the spectral window width. Consequently, as an archetypal example, HS images are generally of lower spatial resolution than MS and PAN images. Because of the common assumption of an additive Gaussian noise model for passive optical images, the most common CD techniques designed for single-band optical images are based on image
differencing [1]–[4]. When dealing with multi-band images, classical CD differencing methods have been adapted for such data through spectral change vectors [2], [11], [12] or transform analysis [13], [14]. Besides, most CD techniques assume that the multi-date images have been acquired by sensors of the same type [4] with similar acquisition characteristics in terms of, e.g., angle-of-view, resolutions or noise model [15], [16]. Nevertheless, in some specific scenarios, for instance consecutive to natural disasters, such a constraint may not be ensured, e.g., images compatible with previously acquired ones may not be available in an acceptable timeframe. Such disadvantageous emergency situations yet require fast, flexible and accurate methods able to handle images acquired by sensors of different kinds [17]–[22]. Handling heterogeneous data is a challenging task that must be carefully addressed. However, since CD techniques for optical images generally rely on the assumption of data acquired by similar sensors, suboptimal strategies have been considered to make these techniques applicable when considering optical images of different spatial and spectral resolutions [13], [18]. In particular, interpolation and resampling are classically used to obtain a pair of images with the same spatial and spectral resolutions [18], [23]. However, such a compromise solution remains suboptimal since it considers each image individually, without fully exploiting their joint characteristics and their complementarity. In this paper, we address the problem of unsupervised CD between multi-band optical images with different spatial and spectral resolutions. To the best of the authors' knowledge, this is the first operational framework specifically designed to address this issue.

More precisely, this paper addresses the problem of CD between a pair of optical images acquired over the same scene at different time instants, one with low spatial and high spectral resolutions and one with high spatial and low spectral resolutions. The proposed approach consists in first fusing the two observed images. The result would be a high spatial and high spectral resolution image of the observed scene as if the two observed images were acquired at the same time or, in our case of study, if no change occurred between the two acquisition times. Otherwise, the result does not correspond to a truly observed scene, but it contains the change information. The proposed fusion process explicitly relies on a physically-based sensing model which exploits the characteristics of the two sensors, following the frameworks in [24], [25]. These characteristics are subsequently used to obtain, by degradation of the fusion result, two so-called predicted images with the same resolutions as the observed images, i.e., one with low spatial and high spectral resolutions and one with high spatial and low spectral resolutions. In the absence of any change, these two pairs of predicted and observed images should coincide, apart from residual fusion errors/inaccuracies. Conversely, any change between the two observed images is expected to produce spatial and/or spectral alterations in the fusion result, which will be passed on to the predicted images. Finally, each predicted image can be compared to the corresponding observed image of the same resolution to identify possible changes. Since, for each pair, the images to be compared are of the same resolution, classical CD methods
dedicated to multi-band images can be considered [3], [4]. The final result is composed of two change detection maps with two different spatial resolutions.

The paper is organized as follows. Section II introduces the proposed change detection framework, which is composed of three main steps: fusion, prediction and decision. The first two steps are described in Section III, which introduces the forward model underlying the observation process. Section IV, dedicated to the third step, discusses three CD techniques operating on mono- and/or multi-band images of identical spatial and spectral resolutions. Experimental results are provided in Section V, where a specific simulation protocol is detailed. These results demonstrate the efficiency of the proposed CD framework. Section VI concludes this paper.

II. PROPOSED CHANGE DETECTION FRAMEWORK
Let us denote t_1 and t_2 the times of acquisition of two multi-band optical images over the same scene of interest. Assume that the image acquired at time t_1 is a high spatial resolution PAN or MS (HR-PAN/MS) image denoted as Y_HR^{t_1} ∈ ℝ^{n_λ × n} and the one acquired at time t_2 is a low spatial resolution HS (LR-HS) image denoted as Y_LR^{t_2} ∈ ℝ^{m_λ × m}, where
• n = n_r × n_c is the number of pixels in each band of the HR-PAN/MS image,
• m = m_r × m_c is the number of pixels in each band of the LR-HS image, with m < n,
• n_λ is the number of bands in the HR-PAN/MS image,
• m_λ is the number of bands in the LR-HS image, with n_λ < m_λ.
The main difficulty which prevents any naive implementation of classical CD methods results from the differences in spatial and spectral resolutions of the two observed images, i.e., m ≠ n and n_λ ≠ m_λ. Besides, in digital image processing, it is common to consider the image formation process as a sequence of transformations of the original scene into an output image. The output image of a given sensor is thus a particular limited representation of the original scene, with characteristics imposed by the processing pipeline of that sensor, called the image signal processor (ISP). The original scene cannot be exactly represented because of its continuous nature. Nevertheless, to represent the ISP pipeline as a sequence of transformations, it is usual to consider a very fine digital approximation of the scene representation as the input image. Following this paradigm, the two observed images Y_HR^{t_1} and Y_LR^{t_2} are assumed to be spectrally and spatially degraded versions of two corresponding latent (i.e., unobserved) high resolution hyperspectral (HR-HS) images X^{t_1} and X^{t_2}, respectively,

Y_HR^{t_1} = T_HR[X^{t_1}], Y_LR^{t_2} = T_LR[X^{t_2}]   (1)

where T_HR[·] and T_LR[·] stand for the spectral and spatial degradation operators, respectively, and X^{t_j} ∈ ℝ^{m_λ × n} (j = 1, 2). Note that these two unobserved images X^{t_j} (j = 1, 2) share the same spatial and spectral characteristics and, if they were available, they could be used as inputs of classical CD techniques operating on images of the same resolution.

When the two images Y_HR^{t_1} and Y_LR^{t_2} have been acquired at the same time, i.e., t_1 = t_2, no change is expected and the latent images X^{t_1} and X^{t_2} should represent exactly the same scene, i.e., X^{t_1} = X^{t_2} ≜ X. In such a particular context, recovering an estimate X̂ of the HR-HS latent image X from the two degraded images Y_HR^{t_1} and Y_LR^{t_2} can be cast as a fusion problem, for which efficient methods have been recently proposed [25]–[28]. Thus, in the case of a perfect fusion process, the no-change hypothesis H_0 can be formulated as

H_0 : { Y_HR^{t_1} = Ŷ_HR^{t_1}, Y_LR^{t_2} = Ŷ_LR^{t_2} }   (2)

where

Ŷ_HR^{t_1} ≜ T_HR[X̂], Ŷ_LR^{t_2} ≜ T_LR[X̂]   (3)

are the two predicted HR-PAN/MS and LR-HS images derived from the estimated HR-HS latent image X̂.

When there exists a time interval between acquisitions, i.e., when t_1 ≠ t_2, a change may occur meanwhile. In this case, no common latent image X can be defined since X^{t_1} ≠ X^{t_2}. However, since X^{t_1} and X^{t_2} represent the same area of interest, they are expected to keep a certain level of similarity. Thus, the fusion process does not lead to a common latent image, but to a pseudo-latent image X̂ estimated from the observed image pair Y_HR^{t_1} and Y_LR^{t_2}, which consists of the best joint approximation of the latent images X^{t_1} and X^{t_2}.
Moreover, since X̂ ≠ X^{t_1} and X̂ ≠ X^{t_2}, the forward model (1) does not hold to relate the pseudo-latent image X̂ to the observations Y_HR^{t_1} and Y_LR^{t_2}. More precisely, when changes have occurred between the two time instants t_1 and t_2, the change hypothesis H_1 can be stated as

H_1 : { Y_HR^{t_1} ≠ Ŷ_HR^{t_1}, Y_LR^{t_2} ≠ Ŷ_LR^{t_2} }.   (4)

More precisely, both inequalities in (4) should be understood in a pixel-wise sense, since any change occurring between t_1 and t_2 is expected to affect some spatial locations in the images. As a consequence, the diagnoses in (2) and (4) naturally induce pixel-wise rules to decide between the no-change and change hypotheses H_0 and H_1. This work specifically proposes to derive a CD technique able to operate on the two observed images Y_HR^{t_1} and Y_LR^{t_2}. It mainly consists of the following 3 steps, sketched in Fig. 1:
1) fusion: estimating the HR-HS pseudo-latent image X̂ from Y_HR^{t_1} and Y_LR^{t_2},
2) prediction: reconstructing the two HR-PAN/MS and LR-HS images Ŷ_HR^{t_1} and Ŷ_LR^{t_2} from X̂,
3) decision: deriving HR and LR change maps D̂_HR and D̂_LR associated with the respective pairs of observed and predicted HR-PAN/MS and LR-HS images, namely,

Υ_HR = { Y_HR^{t_1}, Ŷ_HR^{t_1} } and Υ_LR = { Y_LR^{t_2}, Ŷ_LR^{t_2} }.   (5)

An alternate LR (aLR) change map, denoted as D̂_aLR, is also computed by spatially degrading the HR change map D̂_HR with respect to the spatial operator T_LR[·], checking whether at least one of the D̂_HR pixels associated with a given D̂_LR pixel leads to a change decision.

One should highlight the fact that this last decision step only requires implementing CD techniques within two pairs of optical images Υ_HR and Υ_LR of same spatial and spectral resolutions, thus overcoming the initial issue raised by analyzing observed images Y_HR^{t_1} and Y_LR^{t_2} with dissimilar resolutions.

To establish the rationale underlying this framework, one may refer to the two main properties required by any fusion procedure: consistency and synthesis [25]. The former requires the reversibility of the fusion process: the original LR-HS and HR-PAN/MS images can be obtained by proper degradations of the fused HR-HS image. The latter requires that the fused HR-HS image be as similar as possible to the image of the same scene that would be obtained by a sensor operating at the same (high) resolutions. Similarly, the generic framework proposed by Wald et al. for fusion image quality assessment [24] can also be properly stated by assigning the consistency and synthesis properties a greater scope.

Moreover, it is also worth noting that the proposed framework has been explicitly motivated by the specific scenario of detecting changes between LR-HS and HR-PAN/MS optical images. However, it may be applicable to any other CD scenario, provided that the two following assumptions hold: i) a latent image can be estimated from the two observed images and ii) the latent and predicted images can be related through known transformations.

Note finally that the modality-time order is not fixed and, without loss of generality, one may state either t_1 ≤ t_2 or t_2 ≤ t_1. Thus, to lighten the notations, without any ambiguity, the superscripts t_1 and t_2 will be omitted in the sequel of this paper. The three main steps of the proposed framework are described in the following sections.
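To fix ideas, the overall procedure can be outlined in a few lines of Python. This is only an illustrative sketch of the framework of Fig. 1: the callables fuse, spectral_degrade, spatial_degrade and cd_decision are hypothetical placeholders for the operations detailed in Sections III and IV, not the authors' implementation.

```python
def fusion_based_cd(Y_hr, Y_lr, fuse, spectral_degrade, spatial_degrade,
                    cd_decision):
    """Fusion-based CD in three steps: fusion, prediction, decision.

    Y_hr -- (n_lambda, n) observed HR-PAN/MS image (bands x pixels)
    Y_lr -- (m_lambda, m) observed LR-HS image
    The four callables are placeholders for the operations of
    Sections III and IV.
    """
    # 1) Fusion: estimate the HR-HS pseudo-latent image (Section III-B).
    X_hat = fuse(Y_hr, Y_lr)                 # shape (m_lambda, n)
    # 2) Prediction: re-degrade X_hat to the observed resolutions, cf. (12).
    Y_hr_pred = spectral_degrade(X_hat)      # shape (n_lambda, n)
    Y_lr_pred = spatial_degrade(X_hat)       # shape (m_lambda, m)
    # 3) Decision: pixel-wise tests on each same-resolution pair (Section IV).
    D_hr = cd_decision(Y_hr, Y_hr_pred)      # HR change map
    D_lr = cd_decision(Y_lr, Y_lr_pred)      # LR change map
    return D_hr, D_lr
```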
III. FUSION AND PREDICTION STEPS

This section describes the fusion and prediction steps involved in the proposed CD framework. Both intimately rely on the forward model introduced in what follows.
A. Forward model
Fig. 1: Change detection framework.

When dealing with optical images, the sequences of transformations T_HR[·] and T_LR[·] applied by the sensors to the pseudo-latent images X in (1) are generally classified as spectral and spatial degradations. Spatial degradations are related to the spatial characteristics of the sensor, such as the sampling scheme and the optical transfer function. Spectral degradations, on the other hand, are related to the sensitivity to wavelength and the spectral sampling. In this work, following widely admitted assumptions [24], [25], these transformations are considered as linear degradations of the pseudo-latent image. Thus, benefiting from convenient matrix representations, the observed images can be expressed as

Y_HR ≈ LX, Y_LR ≈ XR.   (6)

The degradation resulting from the left multiplication by L ∈ ℝ^{n_λ × m_λ} models the combination of spectral bands for each pixel. This degradation corresponds to a spectral resolution reduction with respect to the pseudo-latent image X, as in [27], [29]. In practice, this degradation models an intrinsic characteristic of the sensor called the spectral response. It can be either learned by cross-calibration or known a priori.
Conversely, the right multiplication by R ∈ ℝ^{n × m} degrades the pseudo-latent image by linear combinations of pixels within a given spectral band, thus reducing the spatial resolution. The right degradation matrix R may model the combination of various transformations which are specific to sensor architectures and take into account external factors such as warp, blurring, translation, decimation, etc. [27], [29], [30]. In this work, only space-invariant blurring and decimation will be considered. Geometrical transformations such as warp and translations can be corrected using image co-registration techniques in pre-processing steps. A space-invariant blur can be modeled by a symmetric convolution kernel, yielding a sparse symmetric Toeplitz matrix B ∈ ℝ^{n × n} [26]. It operates a cyclic convolution on the image bands individually. The decimation operation S ∈ ℝ^{n × m} corresponds to a d = d_r × d_c uniform downsampling operator with m = n/d ones on the block diagonal and zeros elsewhere, such that S^T S = I_m [27]. (The transpose S^T represents an upsampling transformation by zero-interpolation from m to n pixels.) Hence, the spatial degradation operation corresponds to the composition R = BS ∈ ℝ^{n × m}.

The approximation symbol ≈ in (6) stands for any mismodeling effects or acquisition noise, which is generally considered as additive and Gaussian [4], [9], [25]–[28]. The full degradation model can thus be written as

Y_HR = LX + N_HR, Y_LR = XBS + N_LR.   (7)

The additive noise matrices are assumed to be distributed according to matrix normal distributions [31], as follows

N_HR ∼ MN_{n_λ,n}(0_{n_λ × n}, Λ_HR, I_n), N_LR ∼ MN_{m_λ,m}(0_{m_λ × m}, Λ_LR, I_m)

where MN_{r,c}(M, Σ_r, Σ_c) denotes the matrix normal distribution with probability density function

p(X | M, Σ_r, Σ_c) = exp( −(1/2) tr[ Σ_c^{−1} (X − M)^T Σ_r^{−1} (X − M) ] ) / ( (2π)^{rc/2} |Σ_c|^{r/2} |Σ_r|^{c/2} )

in which M ∈ ℝ^{r × c} is the mean matrix, Σ_r ∈ ℝ^{r × r} is the row covariance matrix and Σ_c ∈ ℝ^{c × c} is the column covariance matrix. Note that the row covariance matrices Λ_HR and Λ_LR carry the information of the spectral variance in-between bands. Since the noise is spectrally colored, these matrices are not necessarily diagonal. On the other hand, since the noise is assumed spatially independent, the column covariance matrices correspond to identity matrices, i.e., I_n and I_m. In real applications, since the row covariance matrices are an intrinsic characteristic of the sensor, they are estimated by a prior calibration [29]. In this paper, to reduce the number of unknown parameters, we assume that Λ_HR and Λ_LR are both diagonal. This hypothesis implies that the noise is independent from one band to another and is characterized by a specific variance in each band [27].
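For illustration, the two degradations of (7) can be emulated on an image cube with a few lines of numpy/scipy. This sketch uses an assumed PAN-like spectral response, blur width and decimation factor, not the sensor characteristics of the experiments; the cyclic blur is implemented by convolution rather than by forming the matrices B and S explicitly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_degrade(X, L):
    """Y_HR = L X: apply the spectral response L (n_lambda x m_lambda)
    to an HR-HS cube X of shape (m_lambda, rows, cols)."""
    m_lambda, rows, cols = X.shape
    return (L @ X.reshape(m_lambda, -1)).reshape(-1, rows, cols)

def spatial_degrade(X, sigma=1.0, d=5):
    """Y_LR = X B S: band-wise space-invariant (cyclic) blur B followed
    by a d x d uniform decimation S."""
    blurred = np.stack([gaussian_filter(band, sigma, mode='wrap')
                        for band in X])
    return blurred[:, ::d, ::d]

# Toy usage: a 10-band, 100x100 cube degraded to a PAN image and a LR-HS cube.
rng = np.random.default_rng(0)
X = rng.random((10, 100, 100))
L_pan = np.full((1, 10), 0.1)          # uniform averaging of all bands
Y_hr = spectral_degrade(X, L_pan)      # shape (1, 100, 100)
Y_lr = spatial_degrade(X)              # shape (10, 20, 20)
```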
B. Fusion process
The forward observation model (7) has been exploited in many applications involving optical multi-band images, especially those related to image restoration such as fusion and super-resolution [27], [29]. Whether the objective is to fuse multi-band images of different spatial and spectral resolutions or to increase the resolution of a single one, it consists in compensating the energy trade-off of optical multi-band sensors to get an image of higher spatial and spectral resolutions than the observed image set. One popular approach to conduct fusion consists in solving an inverse problem formulated through the observation model. In the specific context of HS pansharpening (i.e., fusing PAN and HS images), such an approach has proven to provide the most reliable fused product, with a reasonable computational complexity [25]. For these reasons, this is the strategy followed in this work and it is briefly sketched in what follows.

Because of the additive nature and the statistical properties of the noises N_HR and N_LR, both observed images Y_HR and Y_LR are assumed to be distributed according to matrix normal distributions

Y_HR | X ∼ MN_{n_λ,n}(LX, Λ_HR, I_n), Y_LR | X ∼ MN_{m_λ,m}(XBS, Λ_LR, I_m).   (8)

Since the noise can be reasonably assumed sensor-dependent, the observed images can be assumed statistically independent. Consequently, the joint likelihood function of the statistically independent observed data can be written as

p(Y_HR, Y_LR | X) = p(Y_HR | X) p(Y_LR | X)   (9)

and the negative log-likelihood, defined up to an additive constant, is

−log p(Ψ | X) = (1/2) ‖Λ_HR^{−1/2}(Y_HR − LX)‖²_F + (1/2) ‖Λ_LR^{−1/2}(Y_LR − XBS)‖²_F   (10)

where Ψ = {Y_HR, Y_LR} denotes the set of observed images and ‖·‖_F stands for the Frobenius norm. Computing the maximum likelihood estimator X̂_ML of X from the observed image set Ψ consists in minimizing (10). The aforementioned derivation intends to solve a linear inverse problem which can be ill-posed or ill-conditioned, according to the properties of the matrices B, S and L defining the forward model (7). To overcome this issue, additional prior information can be included, setting the estimation problem into the Bayesian formalism [32]. Following a maximum a posteriori (MAP) estimation, recovering the estimated pseudo-latent image X̂ from the linear model (7) consists in minimizing the negative log-posterior

X̂ ∈ argmin_{X ∈ ℝ^{m_λ × n}} { (1/2) ‖Λ_HR^{−1/2}(Y_HR − LX)‖²_F + (1/2) ‖Λ_LR^{−1/2}(Y_LR − XBS)‖²_F + λ φ(X) }   (11)

where φ(·) defines an appropriate regularizer derived from the prior distribution assigned to X and λ is a parameter that tunes the relative importance of the regularization and data terms. Computing the MAP estimator (11) is expected to provide the best approximation X̂, with the minimum distance to the latent images X^{t_1} and X^{t_2} simultaneously. This optimization problem is challenging because of the high dimensionality of the data X. Nevertheless, Wei et al. [27] have proved that its solution can be efficiently computed for various relevant regularization terms φ(X). In this work, a Gaussian prior is considered, since it provides an interesting trade-off between accuracy and computational complexity, as reported in [25].
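As a purely numerical illustration of (11), the sketch below minimizes the criterion by plain gradient descent with a simple Gaussian (Tikhonov-type) regularizer φ(X) = ‖X − X̄‖²_F centered on a coarse interpolation X̄. This is a naive stand-in, under stated assumptions, for the fast Sylvester-equation solver of [27] actually used in this work; the step size, λ and iteration count are arbitrary.

```python
import numpy as np

def fuse_map(Y_hr, Y_lr, L, BS, iLam_hr, iLam_lr, X_bar,
             lam=1e-2, step=1e-3, n_iter=500):
    """Gradient descent on the MAP criterion (11).

    Y_hr: (n_lambda, n)      Y_lr: (m_lambda, m)
    L:    (n_lambda, m_lambda) spectral response
    BS:   (n, m) composed blur/decimation operator
    iLam_hr, iLam_lr: inverse (diagonal) noise row-covariances
    X_bar: (m_lambda, n) prior mean, e.g. an interpolated LR-HS image
    """
    X = X_bar.copy()
    for _ in range(n_iter):
        r_hr = Y_hr - L @ X          # HR data-fit residual
        r_lr = Y_lr - X @ BS         # LR data-fit residual
        grad = (- L.T @ (iLam_hr @ r_hr)    # gradient of the HR term
                - (iLam_lr @ r_lr) @ BS.T   # gradient of the LR term
                + 2.0 * lam * (X - X_bar))  # gradient of lam * phi(X)
        X -= step * grad
    return X
```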
C. Prediction

The prediction step relies on the forward model (7) proposed in Section III-A. As suggested by (3), it merely consists in applying the respective spectral and spatial degradations to the estimated pseudo-latent image X̂, leading to

Ŷ_HR = LX̂, Ŷ_LR = X̂BS.   (12)
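In matrix form, the prediction step thus reduces to two products. The snippet below carries over the (assumed) operator and estimator names from the previous sketches, purely for illustration.

```python
# Prediction (12): re-apply the known degradations to the fused image.
X_hat = fuse_map(Y_hr, Y_lr, L, BS, iLam_hr, iLam_lr, X_bar)
Y_hr_pred = L @ X_hat     # predicted HR-PAN/MS image, compared to Y_hr
Y_lr_pred = X_hat @ BS    # predicted LR-HS image, compared to Y_lr
```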
IV. OPTICAL IMAGE HOMOGENEOUS CHANGE DETECTION

This section presents the third and last step of the proposed CD framework, which consists in implementing decision rules to identify possible changes between the images composing the two pairs Υ_HR = {Y_HR, Ŷ_HR} and Υ_LR = {Y_LR, Ŷ_LR}. As noted in Section II, these CD techniques operate on observed Y_·R and predicted Ŷ_·R images of same spatial and spectral resolutions, with · ∈ {H, L}, as in [2], [3], [33], [34]. Unless explicitly specified, they can be employed whatever the number of bands. As a consequence, Y_·R and Ŷ_·R could refer to either PAN, MS or HS images, and the two resulting CD maps are either of HR or of LR, associated with the pairs Υ_HR and Υ_LR, respectively. To lighten the notations, without loss of generality, in what follows, the pairs Y_·R and Ŷ_·R will be denoted Y_1 ∈ ℝ^{ℓ × η} and Y_2 ∈ ℝ^{ℓ × η}, which can be set as
• {Y_1, Y_2} = Υ_LR to derive the estimated CD binary map D̂_LR at LR,
• {Y_1, Y_2} = Υ_HR to derive the estimated CD binary map D̂_HR at HR and its spatially degraded aLR counterpart D̂_aLR.
For the sake of generality, the numbers of bands and pixels are denoted ℓ and η, respectively. The spectral dimension ℓ depends on the considered image set Υ_HR or Υ_LR, i.e., ℓ = n_λ and ℓ = m_λ for
HR and LR images, respectively. Similarly, the spatial resolution of the CD binary map, generically denoted as D̂ ∈ {0, 1}^η, depends on the considered set of images Υ_HR or Υ_LR, i.e., η = n and η = m for HR and LR images, respectively. Three efficient CD techniques operating on images of same spatial and spectral resolutions are discussed below.

A. Change vector analysis (CVA)
When considering multi-band optical images after atmospheric and geometric pre-calibration, for a pixel at spatial location p = (i_p, j_p), one may consider that

Y_1(p) ∼ N(μ_1, Σ_1), Y_2(p) ∼ N(μ_2, Σ_2)   (13)

where μ_1 ∈ ℝ^ℓ and μ_2 ∈ ℝ^ℓ correspond to the pixel spectral means and Σ_1 ∈ ℝ^{ℓ × ℓ} and Σ_2 ∈ ℝ^{ℓ × ℓ} are the spectral covariance matrices (here obtained using the maximum likelihood estimator). The spectral change vector is defined by the squared Mahalanobis distance between the two pixels, which can be computed from the pixel-wise spectral difference operator ΔY(p) = Y_1(p) − Y_2(p), i.e.,

V_CVA(p) = ‖ΔY(p)‖²_{Σ^{−1}} = ΔY(p)^T Σ^{−1} ΔY(p)   (14)

where Σ = Σ_1 + Σ_2. For a given threshold τ, the pixel-wise statistical test can be formulated as

V_CVA(p) ≷_{H_0}^{H_1} τ   (15)

and the final CD map, denoted D̂_CVA ∈ {0, 1}^η, can be derived as

D̂_CVA(p) = 1 if V_CVA(p) ≥ τ (H_1), 0 otherwise (H_0).   (16)

For a pixel which has not been affected by a change (hypothesis H_0), the spectral difference operator is expected to be statistically described by ΔY(p) ∼ N(0, Σ). As a consequence, the threshold τ can be related to the probability of false alarm (PFA) of the test

P_FA = P[ V_CVA(p) > τ | H_0 ]   (17)

or equivalently,

τ = F^{−1}_{χ²_ℓ}(1 − P_FA)   (18)

where F^{−1}_{χ²_ℓ}(·) is the inverse cumulative distribution function of the χ²_ℓ distribution. Note, in particular, that ℓ = n_λ = 1 when the set of HR images are PAN images.
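A direct transcription of (14)–(18) in Python, assuming the images of a pair are stored as (bands, pixels) arrays; the spectral covariances are the ML estimates computed over all pixels, and the threshold is obtained from the inverse χ²_ℓ cdf.

```python
import numpy as np
from scipy.stats import chi2

def cva_map(Y1, Y2, pfa=1e-2):
    """Pixel-wise CVA test (14)-(18) on (ell, eta) images Y1, Y2."""
    ell = Y1.shape[0]
    dY = Y1 - Y2                                    # spectral differences
    Sigma = np.atleast_2d(np.cov(Y1) + np.cov(Y2))  # Sigma = Sigma1 + Sigma2
    # Squared Mahalanobis distance (14) for every pixel p.
    V = np.einsum('ip,ij,jp->p', dY, np.linalg.inv(Sigma), dY)
    tau = chi2.ppf(1.0 - pfa, df=ell)               # threshold from (18)
    D = (V >= tau).astype(np.uint8)                 # decision rule (16)
    return D, V
```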
B. Spatially regularized change vector analysis
Since CVA in its simplest form, as presented in Section IV-A, is a pixel-wise procedure, it significantly suffers from low robustness with respect to noise. To overcome this limitation, spatial information can be exploited by considering the neighborhood of a pixel to compute the final distance criterion, which is expected to make the change map spatially smoother. Indeed, changed pixels are generally gathered together into regions or clusters, which means that there is a high probability to observe changes in the neighborhood of an identified changed pixel [3]. Let Ω_p^L denote the set of indexes of neighboring spatial locations of a given pixel p, defined by a surrounding regular window of size L centered on p. The spatially smoothed energy map V_sCVA of the spectral difference operator can be derived from its pixel-wise counterpart V_CVA defined by (14) as

V_sCVA(p) = (1 / |Ω_p^L|) Σ_{k ∈ Ω_p^L} ω(k) V_CVA(k)   (19)

where the weights ω(k) implicitly define a spatial smoothing filter. In this work, they are chosen as ω(k) = 1, ∀k ∈ {1, ..., |Ω_p^L|}. Then, a decision rule similar to (16) can be followed to derive the final CD map D̂_sCVA. Note that the choice of the window size L is based on the strong hypothesis of window homogeneity. This choice thus may depend upon the kind of observed scenes.
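With uniform weights, (19) is a plain moving average of the CVA energy map over an L × L window, which can be sketched as follows; the energy map is assumed reshaped to its (rows, cols) spatial support, and the χ²_ℓ quantile is kept as threshold for simplicity.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import chi2

def scva_map(V, ell, L=7, pfa=1e-2):
    """Spatially regularized CVA (19): local mean of the energy map V
    over the L x L window Omega_p^L (weights omega(k) = 1), followed by
    a decision rule similar to (16)."""
    V_s = uniform_filter(V, size=L)        # local mean over Omega_p^L
    tau = chi2.ppf(1.0 - pfa, df=ell)
    return (V_s >= tau).astype(np.uint8)
```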
C. Iteratively-reweighted multivariate alteration detection (IR-MAD)

The multivariate alteration detection (MAD) technique introduced in [13] has been shown to be a robust CD method, being well suited for analyzing a multi-band image pair {Y_1, Y_2} with possibly different intensity levels. Similarly to the CVA and sCVA methods, it exploits an image differencing operator while better concentrating the information related to changes into auxiliary variables. More precisely, the MAD variate is defined as ΔỸ(p) = Ỹ_1(p) − Ỹ_2(p) with

Ỹ_1(p) = UY_1(p), Ỹ_2(p) = VY_2(p)   (20)

where U = [u_ℓ, u_{ℓ−1}, ..., u_1]^T is an ℓ × ℓ matrix composed of the ℓ × 1 vectors u_j identified by canonical correlation analysis and V = [v_ℓ, v_{ℓ−1}, ..., v_1]^T is defined similarly. As in Equation (14), the MAD-based change energy map can then be derived as V_MAD(p) = ‖ΔỸ(p)‖²_{Λ^{−1}}, where Λ is the diagonal covariance matrix of the MAD variates. Finally, the MAD CD map D̂_MAD can be pixel-wisely computed using a decision rule similar to (16), with a threshold τ related to the PFA by (18). In this work, the iteratively reweighted version of MAD (IR-MAD) has been considered to better separate the change pixels from the no-change pixels [14].
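Plain MAD (without the iterative reweighting of [14]) can be sketched with an off-the-shelf canonical correlation analysis; here scikit-learn's CCA is used as a stand-in for the CCA step of [13], and the diagonal of Λ is estimated empirically from the variates.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.cross_decomposition import CCA

def mad_map(Y1, Y2, pfa=1e-2):
    """Plain MAD change map for (ell, eta) images Y1, Y2."""
    ell = Y1.shape[0]
    cca = CCA(n_components=ell)
    U, V = cca.fit_transform(Y1.T, Y2.T)   # canonical variates, (eta, ell)
    dM = U - V                             # MAD variates, cf. (20)
    lam = dM.var(axis=0)                   # diagonal of Lambda
    V_mad = ((dM ** 2) / lam).sum(axis=1)  # ||dM(p)||^2 in the Lambda metric
    tau = chi2.ppf(1.0 - pfa, df=ell)
    return (V_mad >= tau).astype(np.uint8)
```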
V. EXPERIMENTS
This section assesses the performance of the proposed fusion-based CD framework. First, the simulation protocol is described in Section V-A. Then, Section V-B reports qualitative and quantitative results when detecting changes between HS and MS or PAN images.
A. Simulation protocol
Evaluating the performance of CD algorithms requires image pairs with particular characteristics, which makes them rarely freely available. Indeed, CD algorithms require images acquired at two different dates, presenting changes, geometrically and radiometrically pre-corrected and, for the specific problem addressed in this paper, coming from different optical sensor modes. Moreover, these image pairs need to be accompanied by ground-truth information in the form of a validated CD mask.

To overcome this issue, this paper proposes to follow a strategy inspired by the protocol introduced in [24] to assess the performance of pansharpening algorithms. This protocol relies on a unique reference HS image X_ref, also considered as HR. It avoids the need for co-registered and geometrically corrected images by generating a pair of synthetic but realistic HR-PAN/MS and LR-HS images from this reference image and by including changes within a semantic description of this HR-HS image. In this work, this description is derived by spectral unmixing [35] and the full proposed protocol can be summarized as follows:
i) Given an HR-HS reference image X_ref ∈ ℝ^{m_λ × n}, conduct linear unmixing to extract K endmember signatures M^{t_1} ∈ ℝ^{m_λ × K} and the associated abundance matrix A^{t_1} ∈ ℝ^{K × n}.
ii) Define the HR-HS latent image X^{t_1} before change as

X^{t_1} = M^{t_1} A^{t_1}.   (21)

iii) Define a reference HR change mask D_HR by selecting particular regions (i.e., pixels) in the HR-HS latent image X^{t_1} where changes occur. The corresponding LR change mask D_LR is computed according to the spatial degradations relating the two modalities. Both change masks will be considered as the ground truth and will be compared to the estimated CD HR map D̂_HR and LR maps D̂_LR and D̂_aLR, respectively, to evaluate the performance of the proposed CD technique.
iv) According to this reference HR change mask, implement realistic change rules on the reference abundances A^{t_1} associated with pixels affected by changes. Several change rules applied to the reference abundances will be discussed in Section V-A2. Note that these rules may also require the use of additional endmembers that are not initially present in the latent image X^{t_1}. The abundance and endmember matrices after changes are denoted as A^{t_2} and M^{t_2}, respectively.
v) Define the HR-HS latent image X^{t_2} after changes by linear mixing such that

X^{t_2} = M^{t_2} A^{t_2}.   (22)

vi) Generate a simulated observed HR-PAN/MS image Y_HR by applying the spectral degradation T_HR[·] either to the before-change HR-HS latent image X^{t_1} or to the after-change HR-HS latent image X^{t_2}.
vii) Conversely, generate a simulated observed LR-HS image Y_LR by applying the spatial degradation T_LR[·] either to the after-change HR-HS latent image X^{t_2} or to the before-change HR-HS latent image X^{t_1}.

This protocol is illustrated in Fig. 2 and complementary information regarding these steps is provided in the following paragraphs.

Fig. 2: Simulation protocol: two HR-HS latent images X^{t_1} (before changes) and X^{t_2} (after changes) are generated from the reference image. In temporal configuration 1 (black), the observed HR-PAN/MS image Y_HR is a spectrally degraded version of X^{t_1} while the observed LR-HS image Y_LR is a spatially degraded version of X^{t_2}. In temporal configuration 2 (grey dashed lines), the degraded images are generated from the reciprocal HR-HS images.
1) Reference image: The HR-HS reference image used in the simulation protocol is an HS image of the Pavia University, Italy, acquired by the reflective optics system imaging spectrometer (ROSIS) sensor. A pre-correction has been conducted to smooth the atmospheric effects due to water vapor absorption by removing the corresponding spectral bands, yielding the final HR-HS reference image.
2) Generating the HR-HS latent images: unmixing, change mask and change rules: To produce the HR-HS latent image X^{t_1} before change, the reference image X_ref has been linearly unmixed, which provides the endmember matrix M^{t_1} ∈ ℝ^{m_λ × K} and the matrix of abundances A^{t_1} ∈ ℝ^{K × n}, where K is the number of endmembers. This number K can be obtained by investigating the dimension of the signal subspace, for instance by conducting principal component analysis [35]. In this work, the linear unmixing has been conducted by coupling the vertex component analysis (VCA) [36] as an endmember extraction algorithm and the fully constrained least squares (FCLS) algorithm [37] to obtain M^{t_1} and A^{t_1}, respectively.

Given the HR-HS latent image X^{t_1} = M^{t_1} A^{t_1}, the HR change mask D_HR has been produced by selecting spatial regions of the HR-HS image affected by changes. This selection can be made randomly or by using prior knowledge on the scene. In this work, a manual selection is performed. Then, the change rules applied to the abundance matrix A^{t_1} to obtain the changed abundance matrix A^{t_2} are chosen such that they satisfy the standard nonnegativity and sum-to-one constraints

Nonnegativity: a_k^{t_2}(p) ≥ 0, ∀p ∈ {1, ..., n}, ∀k ∈ {1, ..., K},
Sum-to-one: Σ_{k=1}^{K} a_k^{t_2}(p) = 1, ∀p ∈ {1, ..., n}.   (23)

More precisely, three distinct change rules have been considered:
• Zero abundance: find the most present endmember in the selected region, set all corresponding abundances to zero and rescale the abundances associated with the remaining endmembers in order to fulfill (23). This change can be interpreted as a brutal disappearing of the most present endmember.
• Same abundance: choose a pixel abundance vector at a random spatial location and set all abundance vectors inside the region affected by changes to the chosen one. This change consists in filling the change region with the same spectral signature.
• Block abundance: randomly select a region with the same spatial shape as the region affected by changes and replace the original region abundances by the abundances of the second one. This produces a "copy-paste" pattern.
Note that other change rules on the abundance matrix could have been investigated; in particular, some of them could require including additional endmembers in the initial endmember matrix M^{t_1}.
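As an illustration, the first rule can be sketched as follows on the abundance matrix (shapes and names are illustrative assumptions): the locally dominant endmember is suppressed inside the change region and the remaining abundances are rescaled so that the constraints (23) still hold.

```python
import numpy as np

def zero_abundance_change(A, mask):
    """Zero-abundance rule on abundances A (K, n) over the pixels
    selected by the boolean mask (n,)."""
    A2 = A.copy()
    region = A2[:, mask]                 # abundances of the changed pixels
    k_dom = region.sum(axis=1).argmax()  # most present endmember
    region[k_dom, :] = 0.0               # brutal disappearing of it
    # Rescale the remaining abundances to restore the sum-to-one constraint.
    region /= np.maximum(region.sum(axis=0, keepdims=True), 1e-12)
    A2[:, mask] = region
    return A2
```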
The updated abundance matrix A^{t_2} and endmember matrix M^{t_2} define the after-change HR-HS latent image as X^{t_2} = M^{t_2} A^{t_2}. Fig. 3 shows the three change rules applied to one single selected region of the image.

Fig. 3: Change rules applied to the reference image (a): (b) zero abundance, (c) same abundance and (d) block abundance.
3) Generating the observed images: spectral and spatial degradations:
To produce spectrally degraded versions Y_HR of the HR-HS latent image X^{t_j} (j = 1 or j = 2), two particular spectral responses have been used, to assess the performance of the proposed algorithm when analyzing an HR-PAN or an HR-MS image. The former has been obtained by uniformly averaging the first bands of the HR-HS pixel spectra. The latter has been obtained by filtering the HR-HS latent image X^{t_j} with a LANDSAT-like MS spectral response.

To generate a spatially degraded image, the HR-HS latent image X^{t_j} (j = 2 or j = 1) has been blurred by a Gaussian kernel filter and downsampled equally in the vertical and horizontal directions with a factor d = 5. This spatial degradation operator implicitly relates the generated HR change mask D_HR to its LR counterpart D_LR. Each LR pixel contains d × d HR pixels. As D_HR is a binary mask, after the spatial degradation, if at least one of the HR pixels associated with a given LR pixel is considered as a change pixel, then the corresponding pixel in D_LR is also considered as a change pixel (this rule is sketched in code at the end of this subsection).

To illustrate the impact of these spectral and spatial degradations, Fig. 4 shows the HR-PAN and HR-MS images resulting from spectral degradations of the HR-HS reference Pavia University image, together with the LR-HS image resulting from spatial degradation.

Fig. 4: Degraded versions of the reference image in Fig. 3(a): (a) spectrally degraded HR-PAN image, (b) spectrally degraded HR-MS image and (c) spatially degraded LR-HS image.

Note that, as mentioned in Section II, the modality-time order can be arbitrarily fixed and, without loss of generality, one may state either t_1 ≤ t_2 or t_2 ≤ t_1. Thus, there are two distinct temporal configurations to generate the pair of observed HR and LR images:
• Configuration 1: generating the spectrally (resp. spatially) degraded observed image Y_HR (resp. Y_LR) from the before-change (resp. after-change) HR-HS latent image X^{t_1} (resp. X^{t_2}),
• Configuration 2: generating the spectrally (resp. spatially) degraded observed image Y_HR (resp. Y_LR) from the after-change (resp. before-change) HR-HS latent image X^{t_2} (resp. X^{t_1}).
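The rule relating D_HR to D_LR amounts to a block-wise logical OR (i.e., max pooling) over each d × d block; a minimal sketch, assuming the HR mask dimensions are multiples of d:

```python
import numpy as np

def degrade_mask(D_hr, d=5):
    """LR mask: a LR pixel is marked as changed as soon as one of the
    d x d HR pixels it covers is changed."""
    nr, nc = D_hr.shape
    return D_hr.reshape(nr // d, d, nc // d, d).max(axis=(1, 3))
```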
B. Results

The CD framework introduced in Section II has been evaluated following the simulation protocol described in the previous paragraphs. More precisely, several regions have been randomly selected in the before-change HR-HS latent image X^{t_1} as those affected by changes. For each region, one of the three proposed change rules (zero abundance, same abundance or block abundance) has been applied to build the after-change HR-HS latent image X^{t_2}. The observed HR and LR images are generated according to one of the two temporal configurations discussed in Section V-A3. This leads to a set of simulated pairs of HR-PAN/MS and LR-HS images. From each pair, as detailed in Section II, one HR CD map D̂_HR and two LR CD maps D̂_LR and D̂_aLR are produced by the CD framework described in Fig. 1. These HR and LR CD maps are respectively compared to the actual HR mask D_HR and LR mask D_LR to derive the empirical probabilities of false alarm P_FA and detection P_D, represented as empirical receiver operating characteristic (ROC) curves, i.e., P_D = f(P_FA). These ROC curves have been averaged over the Monte Carlo simulations to mitigate the influence of the time order and the influence of the considered change region and rule.

Moreover, as quantitative figures-of-merit, two metrics derived from these ROC curves have been considered: i) the area under the curve (AUC), which is expected to be close to 1 for a good testing rule, and ii) a normalized distance between the no-detection point (defined by P_FA = 1 and P_D = 0) and the intersect of the ROC curve with the diagonal line P_FA = 1 − P_D, which should be close to 1 for a good testing rule.

While implementing the proposed CD framework, the fusion step of Section III-B has been conducted following the method proposed in [27] with the Gaussian regularization, because of its accuracy and computational efficiency. The corresponding regularization parameter λ has been tuned by cross-validation. Regarding the detection step, when considering multi-band images (i.e., MS or HS), the CD techniques detailed in Section IV (i.e., CVA, sCVA, MAD and IR-MAD) have been considered. Conversely, when considering a PAN image, only CVA and sCVA have been considered since MAD and IR-MAD require multi-band images. The sCVA method has been implemented with a window size of L = 7 for MS images and L = 3, 5 and 7 for the PAN image.

In the absence of state-of-the-art CD techniques able to simultaneously handle images with distinct spatial and spectral resolutions, the proposed method has been compared to the crude approach that first consists in spatially (respectively spectrally) degrading the observed HR (respectively LR) image. The classical CD techniques described in Section IV can then be applied to the resulting LR-MS/PAN images since they have the same, unfortunately low, spatial and spectral resolutions. The final result is a so-called worst-case LR CD mask, denoted as D̂_WC in the following.
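Both figures-of-merit can be computed from a change energy map and the corresponding ground-truth mask with standard tools. A sketch using scikit-learn follows; the normalization of the distance by √2 (so that it lies in [0, 1]) is an assumption consistent with the definition above, not a detail stated in the text.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def cd_metrics(V, D_true):
    """AUC and normalized distance for an energy map V and binary mask D_true."""
    pfa, pd, _ = roc_curve(D_true.ravel(), V.ravel())  # empirical (P_FA, P_D)
    area = auc(pfa, pd)
    # Intersect of the ROC curve with the diagonal P_D = 1 - P_FA ...
    i = np.argmin(np.abs(pd - (1.0 - pfa)))
    # ... and its distance to the no-detection point (P_FA, P_D) = (1, 0).
    dist = np.hypot(pfa[i] - 1.0, pd[i]) / np.sqrt(2.0)
    return area, dist
```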
1) Scenario 1: Change detection between HR-MS and LR-HS images: The first simulation scenario considers a set of HR-MS and LR-HS images. The ROC curves are plotted in Fig. 5, with the corresponding performance metrics reported in Table I. These results show that, whatever the implemented CD testing feature (CVA, sCVA, MAD or IR-MAD), the proposed framework offers high precision. In particular, the aLR change map D̂_aLR computed from the estimated HR change map D̂_HR provides significantly better results than those obtained in the worst case and those obtained on the estimated LR change map D̂_LR directly. This can be explained by the intrinsic quality of the estimated HR change map D̂_HR, which roughly provides similar detection performance as the aLR change map D̂_aLR, with the great advantage of being available at a finer spatial resolution.

Fig. 5: Scenario 1: ROC curves computed from (a) CVA, (b) sCVA(7), (c) MAD and (d) IR-MAD.

TABLE I: Scenario 1: detection performance in terms of AUC and normalized distance.

To visually illustrate this finding, Fig. 6 shows the CD maps estimated from a pair of observed HR-MS (a) and LR-HS (b) images containing multiple changes of various sizes, using the classical sCVA(3) CD technique. The actual HR and LR CD masks are reported in Fig. 6(c) and (d), respectively. Figures 6(e) to (h) show the estimated CD maps D̂_HR, D̂_LR, D̂_aLR and D̂_WC, respectively. Once again, these results clearly demonstrate that the HR CD map D̂_HR estimated by the proposed method achieves a better detection rate with a higher precision.

Fig. 6: Scenario 1: (a) observed HR-MS image, (b) observed LR-HS image, (c) actual HR CD mask D_HR, (d) actual LR CD mask D_LR, (e) estimated HR CD map D̂_HR, (f) estimated LR CD map D̂_LR, (g) estimated aLR CD map D̂_aLR and (h) worst-case CD map D̂_WC.
2) Scenario 2: Change detection between HR-PAN and LR-HS images: In the second scenario, the same procedure as in Scenario 1 has been considered, while replacing the observed MS image by a PAN image. The ROC curves are depicted in Fig. 7, with the corresponding metrics in Table II.

Fig. 7: Scenario 2: ROC curves computed from (a) CVA, (b) sCVA(3), (c) sCVA(5) and (d) sCVA(7).

TABLE II: Scenario 2: detection performance in terms of AUC and normalized distance.

As for
Scenario 1, whatever the decision technique (CVA or its spatially regularized counterpart sCVA), the comparison of these curves shows that the HR CD map also leads to higher accuracy, since it is sharper than the LR maps. In particular, it provides a significantly more powerful test than the crude
This paper introduced an unsupervised change detection framework for handling multi-band opticalimages of different modalities, i.e., with different spatial and spectral resolutions. The method wasbased on a -step procedure. The first step performed the fusion of the two different spatial/spectralresolution multi-band optical images to recover a pseudo-latent image of high spatial and spectralresolutions. From this fused image, the second step generated a pair of predicted images with thesame resolutions as the observed multi-band images. Finally, standard CD techniques were applied toeach pair of observed and predicted images with same spatial and spectral resolutions. The relevance September 21, 2016 DRAFT1
of the proposed framework was assessed thanks to an experimental protocol. These experiments demonstrated the accuracy of the recovered high-resolution change detection map.
Future work will include the generalization of the proposed framework to deal with images of other modalities. Indeed, the newly proposed 3-step procedure (fusion, prediction, detection) is expected to be applicable provided that a physically-based direct model can be derived to relate the observed images with a pseudo-latent image.

REFERENCES

[1] A. Singh, "Review article: Digital change detection techniques using remotely-sensed data," Int. J. Remote Sens., vol. 10, no. 6, pp. 989–1003, June 1989.
[2] M. K. Ridd and J. Liu, "A comparison of four algorithms for change detection in an urban environment," Remote Sens. Environment, vol. 63, no. 2, pp. 95–100, 1998.
[3] R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, "Image change detection algorithms: a systematic survey," IEEE Trans. Image Process., vol. 14, no. 3, pp. 294–307, 2005.
[4] F. Bovolo and L. Bruzzone, "The time variable in data fusion: A change detection perspective," IEEE Geosci. Remote Sens. Mag., vol. 3, no. 3, pp. 8–26, Sept. 2015.
[5] M. Dalla Mura, S. Prasad, F. Pacifici, P. Gamba, J. Chanussot, and J. A. Benediktsson, "Challenges and opportunities of multimodality and data fusion in remote sensing," Proc. IEEE, vol. 103, no. 9, pp. 1585–1601, Sept. 2015.
[6] D. Landgrebe, "Hyperspectral image data analysis," IEEE Signal Process. Mag., vol. 19, no. 1, pp. 17–28, 2002.
[7] J. B. Campbell and R. H. Wynne, Introduction to Remote Sensing, 5th ed. New York: Guilford Press, 2011.
[8] C. Collet, J. Chanussot, and K. Chehdi, Multivariate Image Processing: Methods and Applications. Hoboken, NJ: Wiley, 2006.
[9] C. Elachi and J. van Zyl, Introduction to the Physics and Techniques of Remote Sensing, 2nd ed., ser. Wiley Series in Remote Sensing. Hoboken, NJ: Wiley-Interscience, 2006.
[10] J. C. Price, "Spectral band selection for visible-near infrared remote sensing: spectral-spatial resolution tradeoffs," IEEE Trans. Geosci. Remote Sens., vol. 35, no. 5, pp. 1277–1285, 1997.
[11] F. Bovolo and L. Bruzzone, "A theoretical framework for unsupervised change detection based on change vector analysis in the polar domain," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 1, pp. 218–236, Jan. 2007.
[12] F. Bovolo, S. Marchesi, and L. Bruzzone, "A framework for automatic and unsupervised detection of multiple changes in multitemporal images," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 6, pp. 2196–2212, June 2012.
[13] A. A. Nielsen, K. Conradsen, and J. J. Simpson, "Multivariate alteration detection (MAD) and MAF postprocessing in multispectral, bitemporal image data: New approaches to change detection studies," Remote Sens. Environment, vol. 64, no. 1, pp. 1–19, 1998.
[14] A. A. Nielsen, "The regularized iteratively reweighted MAD method for change detection in multi- and hyperspectral data," IEEE Trans. Image Process., vol. 16, no. 2, pp. 463–478, Feb. 2007.
[15] M. J. Canty, A. A. Nielsen, and M. Schmidt, "Automatic radiometric normalization of multitemporal satellite imagery," Remote Sens. Environment, vol. 91, no. 3-4, pp. 441–451, June 2004.
[16] J. Inglada and A. Giros, "On the possibility of automatic multisensor image registration," IEEE Trans. Geosci. Remote Sens., vol. 42, no. 10, pp. 2104–2120, Oct. 2004.
[17] J. Inglada, "Similarity measures for multisensor remote sensing images," in Proc. IEEE Int. Conf. Geosci. Remote Sens. (IGARSS), vol. 1, 2002, pp. 104–106.
[18] V. Alberga, M. Idrissa, V. Lacroix, and J. Inglada, "Performance estimation of similarity measures of multi-sensor images for change detection applications," in Proc. IEEE Int. Workshop Analysis Multitemporal Remote Sensing Images (MultiTemp), Leuven, 2007, pp. 1–5.
[19] G. Mercier, G. Moser, and S. Serpico, "Conditional copula for change detection on heterogeneous SAR data," in Proc. IEEE Int. Conf. Geosci. Remote Sens. (IGARSS), 2007, pp. 2394–2397.
[20] J. Prendes, M. Chabert, F. Pascal, A. Giros, and J.-Y. Tourneret, "A new multivariate statistical model for change detection in images acquired by homogeneous and heterogeneous sensors," IEEE Trans. Image Process., vol. 24, no. 3, pp. 799–812, 2015.
[21] ——, "Performance assessment of a recent change detection method for homogeneous and heterogeneous images," Revue Française de Photogrammétrie et de Télédétection, no. 209, pp. 23–29, 2015.
[22] J. Prendes, M. Chabert, F. Pascal, A. Giros, J.-Y. Tourneret, M. Ressl, and R. Saint Nom, "Change detection for optical and radar images using a Bayesian nonparametric model coupled with a Markov random field," IEEE Trans. Image Process., 2015.
[23] V. Alberga, M. Idrissa, V. Lacroix, and J. Inglada, "Comparison of similarity measures of multi-sensor images for change detection applications," in Proc. IEEE Int. Conf. Geosci. Remote Sens. (IGARSS), 2007, pp. 2358–2361.
[24] L. Wald, T. Ranchin, and M. Mangolini, "Fusion of satellite images of different spatial resolutions: assessing the quality of resulting images," Photogrammetric Engineering and Remote Sensing, vol. 63, no. 6, pp. 691–699, 1997.
[25] L. Loncan, L. B. de Almeida, J. M. Bioucas-Dias, X. Briottet, J. Chanussot, N. Dobigeon, S. Fabre, W. Liao, G. A. Licciardi, M. Simoes, J.-Y. Tourneret, M. A. Veganzones, G. Vivone, Q. Wei, and N. Yokoya, "Hyperspectral pansharpening: A review," IEEE Geosci. Remote Sens. Mag., vol. 3, no. 3, pp. 27–46, Sept. 2015.
[26] Q. Wei, J. Bioucas-Dias, N. Dobigeon, and J.-Y. Tourneret, "Hyperspectral and multispectral image fusion based on a sparse representation," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 7, pp. 3658–3668, 2015.
[27] Q. Wei, N. Dobigeon, and J.-Y. Tourneret, "Fast fusion of multi-band images based on solving a Sylvester equation," IEEE Trans. Image Process., vol. 24, no. 11, pp. 4109–4121, Nov. 2015.
[28] ——, "Bayesian fusion of multi-band images," IEEE J. Sel. Topics Signal Process., vol. 9, no. 6, pp. 1117–1127, Sept. 2015.
[29] N. Yokoya, N. Mayumi, and A. Iwasaki, "Cross-calibration for data fusion of EO-1/Hyperion and Terra/ASTER," IEEE J. Sel. Topics Appl. Earth Observations Remote Sens., vol. 6, no. 2, pp. 419–426, April 2013.
[30] F. Heide, O. Gallo, M. Steinberger, J. Liu, Y.-T. Tsai, W. Heidrich, M. Rouf, K. Egiazarian, D. Pajak, J. Kautz, D. Reddy, and K. Pulli, "FlexISP: A flexible camera image processing framework," ACM Trans. Graphics (Proc. ACM SIGGRAPH Asia 2014), vol. 33, no. 6, 2014.
[31] A. K. Gupta and D. K. Nagar, Matrix Variate Distributions, ser. Monographs and Surveys in Pure and Applied Mathematics. Chapman and Hall, 1999, no. 104.
[32] J. Idier, Ed., Bayesian Approach to Inverse Problems, ser. Digital Signal and Image Processing Series. London: ISTE; Hoboken, NJ: Wiley, 2008.
[33] R. D. Johnson and E. S. Kasischke, "Change vector analysis: A technique for the multispectral monitoring of land cover and condition," Int. J. Remote Sens., vol. 19, no. 3, pp. 411–426, Jan. 1998.
[34] A. D'Addabbo, G. Satalino, G. Pasquariello, and P. Blonda, "Three different unsupervised methods for change detection: an application," in Proc. IEEE Int. Conf. Geosci. Remote Sens. (IGARSS), vol. 3, 2004, pp. 1980–1983.
[35] J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. Nasrabadi, and J. Chanussot, "Hyperspectral remote sensing data analysis and future challenges," IEEE Geosci. Remote Sens. Mag., vol. 1, no. 2, pp. 6–36, June 2013.
[36] J. Nascimento and J. Dias, "Vertex component analysis: a fast algorithm to unmix hyperspectral data," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 4, pp. 898–910, April 2005.
[37] D. C. Heinz and C.-I Chang, "Fully constrained least-squares linear spectral mixture analysis method for material quantification in hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 39, no. 3, pp. 529–545, March 2001.