An Unsupervised Approach to Solving Inverse Problems using Generative Adversarial Networks
Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer
Rushil Anirudh
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
[email protected]

Jayaraman J. Thiagarajan
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
[email protected]

Bhavya Kailkhura
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
[email protected]

Timo Bremer
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
[email protected]
Abstract
Solving inverse problems continues to be a challenge in a wide array of applications, ranging from deblurring and image inpainting to source separation. Most existing techniques solve such inverse problems by either explicitly or implicitly finding the inverse of the model. The former class of techniques requires explicit knowledge of the measurement process, which can be unrealistic, and relies on strong analytical regularizers to constrain the solution space, which often do not generalize well. The latter approaches have had remarkable success, in part due to deep learning, but require a large collection of source-observation pairs, which can be prohibitively expensive. In this paper, we propose an unsupervised technique to solve inverse problems with generative adversarial networks (GANs). Using a pre-trained GAN in the space of source signals, we show that one can reliably recover solutions to underdetermined problems in a ‘blind’ fashion, i.e., without knowledge of the measurement process. We achieve this by making successive estimates of the model and the solution in an iterative fashion. We show promising results in three challenging applications – blind source separation, image deblurring, and recovering an image from its edge map – and perform better than several baselines.
Preprint. Work in progress.

Introduction

A large class of machine learning techniques has been devoted to solving inverse problems that arise in different application domains. Formally, this refers to problems of the form Y = F(X) + n, wherein the goal is to recover the true source X from its noisy observation Y [15]. The mapping F includes a wide variety of corruption functions [21, 13, 22], measurement processes [5, 9, 1] and mixing models. In addition to its broad applicability, the highly under-determined nature of this formulation has made it an important problem among machine learning researchers for over two decades. In its most general form, this is posed as Maximum a Posteriori (MAP) estimation, and a common recurring idea in several existing solutions is to regularize the problem by restricting the space of solutions that X can assume, through appropriate prior models. For example, a sparsity prior with respect to a latent basis has enabled effective recovery of many natural signals from compressed measurements [19]. However, what differentiates the variety of existing solutions is the degree to which they assume knowledge of F – while most approaches assume access to an exact parameterization of F, others make more relaxed assumptions, e.g., that the distribution of F is known [2, 4]. It is important to note, however, that even with a fully known F, the adherence of the observed data to the prior model can be insufficient, thus making the solutions highly non-robust. In order to circumvent the challenge of choosing an appropriate prior for the data at hand, and to dispense with the need to know F in its analytical form, deep neural networks have more recently been employed to directly build a surrogate for the inversion process, X̂ = T(Y), where T is a neural network that implicitly approximates F^{-1}; some recent applications include compressive sensing (CS) reconstruction [5, 9], super-resolution [11], and CT image reconstruction [1].
Such a supervisory approach relies on the ability to collect enough training data in the form of (X, Y) pairs (and works even for a black-box F). Despite the unprecedented success of this approach, in many scenarios it can be impractical to obtain such large sets of training pairs. Another inherent limitation of this approach is that it does not provide an actual estimate of the mapping F, which can be crucial in certain applications, e.g., blind source separation or imaging. Instead, we propose a general unsupervised solution for inverse problems that utilizes pre-trained generative models to form the prior and simple shallow networks to approximate the mapping F, with the goal of effectively reproducing the observed data, Y ≈ F̂(X̂). We show that, through an alternating optimization algorithm, one can effectively recover both the unknown mapping and the true sources solely from a limited set of observations. This is conceptually similar to the problem of jointly learning sparse representations and an associated overcomplete dictionary from data, where the two unknowns are alternately inferred under a sparsity regularizer, e.g., a Laplacian prior [17]. Deep generative models, such as variational autoencoders [8] and Generative Adversarial Networks (GANs) [6], have enabled non-parametric inference of complex data distributions. Consequently, there has been a recent surge in utilizing data-driven generative models to sample from distributions in several traditionally difficult, unsupervised learning problems. In the context of inverse problems, a GAN that models P(X) can naturally act as a strong prior for X, thus eliminating the need to choose a prior that regularizes the problem while also being convenient for optimization.
Furthermore, even with a shallow network, we are able to represent a large class of mappings F (both linear and simple non-linear mappings), thus allowing a robust approximation of the measurement process without the need for an analytical form, or even a prior, on F. Consequently, the system remains reasonably agnostic to the task being solved – for example, the same system can be reused to solve a deblurring task and to reconstruct an image from its edge map. To the best of our knowledge, this is the first work to provide a single, unsupervised solution to different inverse problems. When applied to standard inverse imaging problems, we show that our unsupervised approach performs competitively against baselines with full knowledge of F and significantly outperforms other existing unsupervised approaches. Interestingly, the proposed approach can be applied to more challenging scenarios such as blind source separation, wherein there are multiple sources corresponding to each observation, and there can be multiple observations (each with a different mapping) corresponding to the same set of sources. Even in highly underdetermined scenarios where conventional approaches such as independent component analysis fail completely, i.e., when the number of observations is significantly smaller than the number of sources, we observe that our algorithm recovers the underlying sources with high fidelity.

Proposed Approach

In this section, we describe the proposed unsupervised solution for inverse problems. As described in the previous section, it does not assume knowledge of F or an analytical form for the regularizer. In the blind source separation case, we assume that the number of sources is known a priori, though existing techniques can be used for its estimation [18].

Background:
The goal of several ill-posed inverse problems, such as compressed recovery or source separation, is to estimate the true source signals from an underdetermined system of noisy measurements. In order to solve these problems in a tractable fashion, one usually needs prior knowledge of the structure of the solution in the domain of the signal X. In general, different prior assumptions lead to different forms of regularization, using which several signal/image processing problems can be formulated as optimization problems and effectively solved using existing techniques. Several successful approaches for solving inverse problems make two crucial assumptions: (a) the parameters of the measurement process F are known; and (b) the prior on the structure of the solution can be represented analytically (e.g., low rank, or sparsity w.r.t. a known basis). Although widely used, these assumptions are very restrictive and hard to meet in several scenarios. While the first assumption simplifies the inversion process, it limits application to known corruption models. On the other hand, the latter assumption is targeted at simplifying the optimization process (e.g., ensuring convexity) so that existing techniques can be used effectively. For example, it is widely known that natural images typically lie close to a collection of low-dimensional sub-manifolds; however, a precise, analytical characterization for a given dataset is very challenging. Consequently, analytical approximations of the known low-dimensional structure (such as total variation regularization) are preferred, and convex optimization procedures are employed to solve this regularized alternative instead of the original recovery problem. In order to overcome these challenges, we propose an unsupervised approach for solving inverse problems by (a) employing shallow neural networks to estimate the measurement process and (b) utilizing deep generative models (GANs) as an effective, non-analytical regularizer.
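The classical pipeline sketched in this background can be made concrete with a toy example: a fully known linear operator F and an analytical total-variation regularizer, minimized by (sub)gradient descent. Everything below (the operator, sizes, and step sizes) is an illustrative assumption, not a setting from this paper:

```python
import numpy as np

# Classical recipe from the text: known linear measurement operator F and an
# analytical regularizer (1-D total variation), solved by (sub)gradient descent.
rng = np.random.default_rng(0)
M, K = 64, 32                                  # source dim M, measurement dim K < M
F = rng.standard_normal((K, M)) / np.sqrt(K)   # known measurement operator
x_true = np.repeat(rng.standard_normal(8), 8)  # piecewise-constant source (TV-friendly)
y_obs = F @ x_true + 0.01 * rng.standard_normal(K)

lam, step = 0.1, 0.05
x = np.zeros(M)
for _ in range(500):
    grad_data = F.T @ (F @ x - y_obs)    # gradient of 0.5 * ||y - F x||^2
    d = np.sign(np.diff(x))              # subgradient of lam * ||diff(x)||_1
    grad_tv = np.zeros(M)
    grad_tv[:-1] -= lam * d
    grad_tv[1:] += lam * d
    x -= step * (grad_data + grad_tv)

residual = np.linalg.norm(F @ x - y_obs) / np.linalg.norm(y_obs)
```

Even in this favorable setting, the recipe hinges on knowing F exactly and on the TV prior matching the signal class, which is precisely the restriction the proposed approach removes.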
In the rest of this section, we formalize these ideas and describe the algorithm.

Problem Formulation:
Let us denote a set of N observed measurements as Y^{obs} ∈ R^{N×K}, where each column Y_j^{obs} ∈ R^K denotes an independent observation. For generality, we assume that each observed signal Y_j^{obs} is composed from a set of S source signals X_j ∈ R^{S×M}, wherein we hypothesize that each of the source signals lies on a low-dimensional manifold M ⊆ R^M, and K ≤ M. Note that, in the case of conventional inverse imaging problems such as deblurring, S = 1. The sources and the observations are related through an unknown measurement process F by Y_j^{obs} = F(X_j) + n, where n represents noise in the measurement process. The goal is to recover an estimate of the source signals X̂_j from each of the N observations. For simplicity, here we consider only a single observation for each set of sources, but this can be trivially generalized to multiple observations for scenarios such as blind source separation. Several existing solutions for inverse problems assume F to be known and are formulated as:

\[
\{\hat{X}_j\}_{j=1}^{N} = \arg\min_{\{X_j \in \mathbb{R}^{S \times M}\}_{j=1}^{N}} \; \sum_{j=1}^{N} \big\| Y_j^{obs} - F(X_j) \big\|^2 + \lambda \, R_{\mathcal{M}}(X_j), \tag{1}
\]

where R_M is an analytical regularizer (e.g., an ℓ1 norm or a TV regularizer) and λ is a regularization parameter. The hope in solving such a regularized optimization problem is that the regularization R_M (if modeled precisely) will push the solution to lie on (or near) the true signal manifold M. As mentioned earlier, this approach has serious limitations, which motivate the proposed formulation discussed next. The proposed approach overcomes these modeling limitations by estimating both the unknown mixing function and the prior signal structure from the data itself.
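Before formalizing this, the joint estimation can be caricatured with purely linear pieces: a fixed tall matrix G standing in for a pre-trained generator, a matrix A standing in for the unknown measurement process, and alternating gradient updates on each with the other held fixed. All shapes, learning rates, and iteration counts below are illustrative assumptions:

```python
import numpy as np

# Linear caricature of the proposed joint estimation: G stands in for the
# generator, A for the unknown measurement process, and we alternate gradient
# steps on A and on the latents Z (with clipping, as in projected updates).
rng = np.random.default_rng(2)
T_lat, M, K, N = 4, 20, 10, 30
G = rng.standard_normal((M, T_lat)) / np.sqrt(T_lat)   # stand-in "generator"
A_true = rng.standard_normal((K, M)) / np.sqrt(M)      # unknown measurement process
Z_true = np.clip(rng.standard_normal((T_lat, N)), -1, 1)
Y_obs = A_true @ G @ Z_true                            # only the observations are given

A = rng.standard_normal((K, M)) / np.sqrt(M)           # surrogate init
Z = rng.uniform(-1, 1, (T_lat, N))                     # latent init
lr = 0.01
for _ in range(200):
    for _ in range(5):                                 # update surrogate, latents fixed
        A -= lr * (A @ G @ Z - Y_obs) @ (G @ Z).T
    for _ in range(5):                                 # update latents, surrogate fixed
        Z -= lr * (A @ G).T @ (A @ G @ Z - Y_obs)
        Z = np.clip(Z, -1.0, 1.0)                      # projection step

rel_loss = np.linalg.norm(A @ G @ Z - Y_obs) / np.linalg.norm(Y_obs)
```

The same alternation carries over when G is a GAN generator and the surrogate is a shallow network, with backpropagation supplying the gradients.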
The unknown mixing function is parameterized by a neural network F̂ acting as a surrogate, and the prior signal structure is parameterized by a GAN – G : R^T → R^M, where G denotes the generator of the GAN and T is the number of latent dimensions. The implicit regularization with this "GAN prior" results in the following optimization problem:

\[
\hat{F}^{*}, \{z_j^{*}\}_{j=1}^{N} = \arg\min_{\hat{F},\, \{z_j \in \mathbb{R}^{S \times T}\}_{j=1}^{N}} \; \sum_{j=1}^{N} \big\| Y_j^{obs} - \hat{F}(G(z_j)) \big\|^2, \tag{2}
\]

where X̂_j = G(z_j). Since there are two unknowns in (2), we employ an alternating optimization that allows us to find the optimal F̂* and {z_j*}_{j=1}^N. In particular, since G and F̂ are differentiable, we can evaluate the gradients of the objective in (2) using backpropagation and use existing gradient-based optimizers. In addition to computing gradients with respect to z_j, we also clip z_j to restrict it to the desired range (e.g., [−1, 1]), resulting in a projected gradient descent optimization. With a sufficiently large number of observations N, the measurement process F can be estimated with high fidelity (as demonstrated in our experiments). As the deep generative model is obtained via unsupervised training on the data directly, it can model complex data distributions very precisely. Consequently, this enables us to utilize low-dimensional structural information from data manifolds that cannot otherwise be modeled analytically with conventional regularizers. Furthermore, by providing a surrogate for the measurement process, it dispenses with the limitation of supervised approaches that map directly from observations to the source signals.

Algorithm 1: Proposed Algorithm
Input:
Number of sources S; observations Y^{obs} ∈ R^{N×K}; pre-trained GAN G.
Output: estimated sources {X̂_j ∈ R^{S×M}}_{j=1}^N; surrogate model F̂.

For j = 1, ..., N, initialize z_j^(0) ∈ R^{S×T} randomly from a uniform distribution U(−1, 1).
z_j^* = z_j^(0) ∀ j;  // initial best guess is random
Initialize F̂ as a shallow neural network with random weights Θ^(0).
for t ← 1 to T do
    for t1 ← 1 to T1 do
        Y_j^{est} ← F̂(G(z_j^*); Θ^(t1)), ∀ j;  // G(z_j^*) is the present best guess of the sources
        L = Σ_{j=1}^N ||Y_j^{est} − Y_j^{obs}|| + α L_per;
        Θ^(t1+1) ← Θ^(t1) − λ ∇_Θ(L);  // gradient descent
    end
    Θ^* = Θ^(T1)
    for t2 ← 1 to T2 do
        Y_j^{est} ← F̂(G(z_j^(t2)); Θ^*), ∀ j;  // Θ^* is the present best guess of the measurement process
        L = Σ_{j=1}^N ||Y_j^{est} − Y_j^{obs}|| + α L_per;
        z̃_j^(t2+1) ← z_j^(t2) − λ ∇_z(L), ∀ j;  // gradient descent
        z_j^(t2+1) ← P(z̃_j^(t2+1)), ∀ j;  // projection operation
    end
    z_j^* = z_j^(T2), ∀ j
end
return F̂^*, X̂_j = G(z_j^*), ∀ j.

Other applications like blurring and edge maps can be considered special cases of the problem formulation in (2), with S = 1 and F acting as a linear operator on X.

Algorithm
The alternating optimization is shown in Algorithm 1. We run the inner loops for updating the surrogate and the latent parameters of G for T1 and T2 iterations, respectively. The projection operation, denoted by P, is a clipping operation that restricts the update on z_j to lie within the desired range. In addition to the reconstruction error, we also incorporate a perceptual loss that penalizes unrealistic images. For a given discriminator model D and a generator G, this loss is given by L_per = Σ_{j=1}^N Σ_{i=1}^S log(1 − D(G(z_j^i))), where z_j^i is the i-th column of the latent matrix z_j. Note that this is the same as the generator loss of the pre-trained GAN. An advantage of the algorithm described here is that it depends only on Y^{obs} to perform the updates. As a result, not only does it not require any paired training data, but the procedure also lends itself to task-agnostic inference, wherein the user does not need to specify a priori which task is being solved. This is in contrast to most existing deep learning based solutions, which are optimized to solve a specific task using training data collected in advance.

Experiments

In this section, we demonstrate the effectiveness of our proposed solution using three scenarios that rely on solving highly ill-posed inverse problems – (a) image deblurring, (b) recovering an image from its edge map, and (c) blind source separation with an unknown mixing configuration. In all three scenarios, we assume no knowledge of the measurement process, and attempt to estimate the mapping F and the true source signals jointly.
Consequently, with a sufficient number of observations and under suitable assumptions on the capacity of the shallow network used to approximate F, our inversion system can be directly reused across different inverse problems.

Inverting convolutional operators – blur and edge transforms

In this experiment, we attempt to recover images after they have been filtered with a convolutional kernel such as a blur or an edge kernel. These belong to a large class of inverse problems commonly characterized by convolutional filtering operations. The convolution operation can also be interpreted as matrix multiplication when the convolution kernel is represented in its Toeplitz matrix form, with no requirement that this matrix be full rank. For the deblurring experiment, we used a standard Gaussian blur kernel on face images, with a scale severe enough to ensure that no facial features are discernible in the transformed images. For the edge map experiment, we used a standard edge-detection kernel.

Experimental Setup:
First, we build the prior model by training a DCGAN [14] on the CelebA dataset [12], which consists of 202,599 images; we use a subset for training the GAN and the remaining images to perform our experiments. To train the DCGAN, we used the hyper-parameters suggested in [14]. As described earlier, we model the surrogate F̂ using a 2-layer convolutional network without any non-linearities, since the transformations considered are linear. In order to speed up convergence and avoid overfitting, we used a small set of filters in each of the two layers. We also experimented with non-linearities such as ReLU, but found convergence to be much faster without them for the linear inverse problems considered in this experiment. In general, we observe that a more complex surrogate, with a larger number of parameters in F̂, requires a larger set of observations in order to avoid over-fitting. In this experiment, we found that even a handful of observations, i.e., images transformed using the same linear model, is sufficient to train an effective surrogate. Even though this is an extremely small batch size, we believe the network does not overfit easily because the images used to train the surrogate are updated constantly with the GAN, as outlined in Algorithm 1. In the alternating optimization, we first update F̂ for T1 = 50 iterations, followed by updating the sources {X̂_j} by optimizing over the latent variables of the GAN for T2 = 50 iterations. The optimization procedure typically achieves convergence in about T = 100 epochs. We used the Adam optimizer [7] for both updates, with a higher learning rate for the surrogate and a slightly lower one for the latent parameters of the generative model. In addition, we clip the updates on {z_j} (see Algorithm 1) at each step.

Losses:
As noted in several existing image recovery efforts, such as [22], we also observed that an ℓ1 loss worked better for image recovery than the ℓ2 loss. In addition, following [22], we penalize images that are not "perceptually" meaningful, i.e., we encourage images that reduce the generator loss of the pre-trained GAN. We found the balance between these two losses to be highly sensitive; it is governed by the weight α, and a small value of α proved to be a suitable choice.

Results:
We compare the proposed approach with the following baselines: (a) PGD: use projected gradient descent directly with respect to the loss L = Σ_j ||G(z_j) − Y_j^{obs}|| + L_per, dispensing entirely with the measurement process. This simple baseline works reasonably well for simple transformations, e.g., blur; (b) Wiener Deconv: for the blur operation, we use traditional signal-processing restoration based on Wiener deconvolution; (c) PGD + Known F: this is representative of most existing GAN-based recovery approaches, such as [22, 3, 16], where the exact parameterization of F is known. We observe that the proposed approach significantly outperforms the simpler baselines that do not have access to F, while performing competitively with the cases where it is known. Results for test cases on the CelebA dataset are shown for deblurring in Figure 1a, and for edge map to image recovery in Figure 1b. Furthermore, in both cases, we observe that the proposed approach produces an effective surrogate that reproduces the actual observations.
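The Wiener deconvolution baseline can be sketched in a few lines in the frequency domain, for circular 1-D convolution; the kernel and the noise-to-signal constant K below are illustrative assumptions:

```python
import numpy as np

# Minimal frequency-domain Wiener deconvolution (the classical baseline above),
# for circular 1-D convolution. K plays the role of an assumed noise-to-signal
# ratio; the kernel and sizes are illustrative.
def wiener_deconv(y, h, K=1e-3):
    H = np.fft.fft(h, len(y))
    X_hat = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft(X_hat))

rng = np.random.default_rng(4)
x = rng.standard_normal(64)
h = np.array([0.6, 0.3, 0.1])                                # toy blur kernel
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 64)))  # circular blur
x_hat = wiener_deconv(y, h, K=1e-8)
```

With noisy observations, a larger K suppresses noise amplification at frequencies where the kernel response is weak; the 2-D image case is identical with np.fft.fft2.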
[Figure panels: (a) Inverting the blur operation; (b) Inverting the edge map operation. Rows in each panel: Observed; Estimated "corruption" (proposed); PGD; Wiener Deconv (blur only); Recovered (proposed); PGD + known kernel; Original.]
Figure 1:
Blind inversion of the blur filter and the edge filter using the proposed approach. It should be noted that in both cases the system remains unchanged, i.e., the algorithm does not need to know whether it is being provided with an edge map or a blurred image. As seen in Figure 1a, most facial attributes are removed by the blur, yet we estimate the most likely original image accurately.
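As a quick sanity check of why a 2-layer convolutional network with no non-linearity suffices for these experiments, note that stacking linear convolutions collapses into a single convolution with the composed kernel (each layer is a Toeplitz matrix multiplication). A 1-D sketch with arbitrary kernels:

```python
import numpy as np

# Two linear convolution "layers" applied in sequence equal one convolution
# with the composed kernel, which is why a 2-layer conv net with no
# non-linearity can represent operators like blur and edge filters.
rng = np.random.default_rng(3)
x = rng.standard_normal(32)
k1, k2 = rng.standard_normal(5), rng.standard_normal(3)   # arbitrary kernels

two_layers = np.convolve(np.convolve(x, k1), k2)   # layer-by-layer
one_kernel = np.convolve(x, np.convolve(k1, k2))   # single composed kernel
assert np.allclose(two_layers, one_kernel)
```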
Next, we study blind source separation (BSS), commonly encountered in signal and image processing, where the goal is to recover individual sources when observing only a linear or non-linear mixture of them. We fix S = 3 in our experiments. This problem is severely under-determined when the number of observations is smaller than the number of sources, and it can be over-determined when the number of observations is larger. Furthermore, most traditional techniques assume F to be linear in order to solve BSS, whereas our method generalizes easily even to non-linear mixtures. In addition to the mixture observations, the proposed technique requires S, the number of sources, as input. We model the mixing process using a fully connected 2-layer neural network with 16 units and a ReLU activation in the first layer, followed by N (the number of observations) units in the final layer. Our model estimates the sources without training, i.e., it behaves as an iterative algorithm at test time and can automatically work with any kind of mixing process.

Experimental Setup:
We pre-train a GAN with CNNs on the MNIST dataset [10], with 6 layers in the generator and discriminator; we also rescale the images to lie in the range [−1, 1]. Next, we simulate the mixing process using weights drawn from a random normal distribution M, allowing for negative weights, where the final mixed observation is given by Y_j^{obs} = |M^T X_j|. We train each block of Algorithm 1 for a fixed number of iterations, with T = 100. Our formulation can naturally handle scenarios with multiple observations as well, which is done by generalizing M to have multiple rows. We show results for blind source separation in Figure 2, for two distinct cases – (a) under-determined: a single non-linear mixture from three sources, and (b) over-determined: four distinct non-linear mixtures from three sources.

Results:
We compare the performance of our approach with the following baselines: (a) Naïve Additive Model: we assume a simple unweighted additive model, so that the loss is given by L = Σ_j ||Σ_{i=1}^S G(z_j^i) − Y_j^{obs}|| + α L_per. We find this baseline to be surprisingly strong even for weighted additive mixtures; however, it fails completely with non-linearities, as shown in Figure 2a; (b) Independent Component Analysis (ICA): though ICA fails completely (not shown) in the under-determined case, it fares better when the number of observations is increased. However, as shown in Figure 2b, it is highly inferior to the proposed approach. We see in Figure 2a that our approach is particularly effective in under-determined scenarios, where we have a single observation composed from multiple sources. We clearly observe that sampling from a GAN is an effective way to address the blind source separation problem.
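The non-linear mixing used in these BSS experiments can be simulated in a few lines; the distribution parameters and the uniform stand-ins for generator samples below are illustrative assumptions:

```python
import numpy as np

# Simulating the non-linear mixing described above: random normal mixing
# weights (negative values allowed) followed by an absolute value,
# Y_obs = |M^T X|. The parameters and the uniform stand-ins for generator
# samples G(z) are illustrative assumptions.
rng = np.random.default_rng(5)
S, D, n_obs = 3, 28 * 28, 4                # sources, flattened image dim, observations
X = rng.uniform(-1.0, 1.0, (S, D))         # stand-ins for GAN samples in [-1, 1]
Mix = rng.normal(0.0, 1.0, (S, n_obs))     # mixing matrix M (illustrative N(0, 1))
Y_obs = np.abs(Mix.T @ X)                  # non-linear mixtures, shape (n_obs, D)
```

Feeding such Y_obs to Algorithm 1, together with S, is all the method requires; the procedure never sees M itself.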
Related Work

With their ability to effectively model high-dimensional distributions, GANs [6] have enabled a large number of unsupervised methods for image-related inverse problems. Yeh et al. [22] showed that projected gradient descent (PGD) can be used to fill arbitrary holes in an image in a semantically meaningful manner. More recently, several efforts have pursued solving inverse problems using PGD, for example in compressive sensing [3, 16] and deblurring [2]. Asim et al. [2] proposed an unsupervised technique for deblurring that is closely related to our work, in which two separate GANs – one for the blur kernel and another for the images – are used to solve for the inverse. Ours differs from this technique in that we make fewer assumptions on the functions that can be recovered and require only a single GAN on the images. In addition, it is not evident how their method generalizes to other challenging inverse problems like blind source separation. AmbientGAN [4] is another related technique, which allows one to train a GAN in the original space given only lossy measurements; this makes our case stronger, as it provides a way to train a GAN even without the original images. However, to solve the inverse problem, they still assume the form of the corruption or measurement (for example, a random binary mask or a measurement matrix), whereas we parameterize it as a neural network that is trained. Finally, our work leverages the notion of a "GAN prior" [22, 16, 2, 3] – the idea that optimizing in the latent space of a pre-trained GAN provides a powerful prior for several traditionally hard problems. As GANs become better, we expect new capabilities in solving challenging inverse problems to emerge.
Figure 2: Blind source separation with single and multiple observations on the MNIST dataset. Images are intensity normalized. Legend: Mixture (left), Source 1, Source 2, Source 3. (a) Single observation, three sources: while there are no unique solutions to this problem, our approach finds highly likely solutions. (b) Four observations, three sources.

Conclusion

In this paper, we presented a proof of concept for unsupervised techniques to solve commonly encountered ill-conditioned inverse problems. By leveraging GANs as priors, we are able to recover solutions from blurred images and edge maps, and to separate sources from under-determined non-linear mixtures. A crucial observation is that this approach does not require knowledge of the task being solved. Assuming that the true source was realized from a known distribution (approximated by the GAN), our method is able to identify what corruption/transformation the source signal has gone through, while also identifying the signal itself.

The proposed technique opens up several new possibilities for inverse problem recovery, though some key aspects first require generalization. First, a batch version of the current technique could enable training more complex functions that require many more observations than those considered here. Next, the choice of the surrogate model F̂ dictates the type of functions that can be recovered. As with supervised training, more complex functions require more observations. Further, recent work [20] has shown that the choice of architecture places an implicit prior on the family of functions that can be modeled; for example, using a convolutional neural network may limit learning certain kinds of functions like inpainting masks.

References

[1] R. Anirudh, H. Kim, J. J. Thiagarajan, K. A. Mohan, K. Champley, and T. Bremer. Lose the views: Limited angle CT reconstruction via implicit sinogram completion. arXiv preprint arXiv:1711.10388, 2017.
[2] M. Asim, F. Shamshad, and A. Ahmed. Solving bilinear inverse problems using deep generative priors.
CoRR, abs/1802.04073, 2018.
[3] A. Bora, A. Jalal, E. Price, and A. G. Dimakis. Compressed sensing using generative models. arXiv preprint arXiv:1703.03208, 2017.
[4] A. Bora, E. Price, and A. G. Dimakis. AmbientGAN: Generative models from lossy measurements. In International Conference on Learning Representations, 2018.
[5] J. Chang, C.-L. Li, B. Póczos, B. Kumar, and A. C. Sankaranarayanan. One network to solve them all—solving linear inverse problems using deep projection models. arXiv preprint arXiv:1703.09912, 2017.
[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
[7] D. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2014.
[8] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[9] K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok. ReconNet: Non-iterative reconstruction of images from compressively sensed measurements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 449–458, 2016.
[10] Y. LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
[11] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint, 2016.
[12] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), 2015.
[13] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2536–2544, 2016.
[14] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. International Conference on Learning Representations (ICLR), 2016.
[15] A. Ribes and F. Schmitt. Linear inverse problems in imaging. IEEE Signal Processing Magazine, 25(4), 2008.
[16] V. Shah and C. Hegde. Solving linear inverse problems using GAN priors: An algorithm with provable guarantees. arXiv preprint arXiv:1802.08406, 2018.
[17] J. J. Thiagarajan, K. N. Ramamurthy, and A. Spanias. Optimality and stability of the k-hyperline clustering algorithm. Pattern Recognition Letters, 32(9):1299–1304, 2011.
[18] J. J. Thiagarajan, K. N. Ramamurthy, and A. Spanias. Mixing matrix estimation using discriminative clustering for blind source separation. Digital Signal Processing, 23(1):9–18, 2013.
[19] J. J. Thiagarajan, K. N. Ramamurthy, P. Turaga, and A. Spanias. Image understanding using sparse representations. Synthesis Lectures on Image, Video, and Multimedia Processing, 7(1):1–118, 2014.
[20] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Deep image prior. arXiv preprint arXiv:1711.10925, 2017.
[21] J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems, pages 341–349, 2012.
[22] R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do. Semantic image inpainting with deep generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5485–5493, 2017.

Acknowledgement

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Disclaimer