Fast single image super-resolution based on sigmoid transformation
Abstract—Single image super-resolution aims to generate a high-resolution image from a single low-resolution image, which is of great significance in extensive applications. As the problem is ill-posed, numerous methods have been proposed to reconstruct the missing image details based on exemplars or priors. In this paper, we propose a fast and simple single image super-resolution strategy that utilizes a patch-wise sigmoid transformation as an imposed sharpening regularization term in the reconstruction, achieving remarkable reconstruction performance. Extensive experiments and comparisons with other state-of-the-art approaches demonstrate the superior effectiveness and efficiency of the proposed algorithm.

Index Terms—single image super-resolution, sigmoid transformation, sharpening regularization
I. INTRODUCTION

Super-resolution (SR) has been widely applied in video surveillance [1][2][8], remote sensing [3][4][5][35], medical imaging [6][7] and many other fields; it enhances the image resolution and provides pleasing image details, which is significant for subsequent image processing. Aiming to upscale the image resolution based on a single image [12], single image super-resolution (SISR) must generate severalfold data from limited input data to approach natural images, which is badly ill-posed and often suffers from annoying blurring and artifacts.

To alleviate the ill-posedness of SISR, many algorithms have been proposed that exploit additional information to learn what natural images look like, which improves the reconstruction performance efficiently. Considering the difference between natural image information sources, the algorithms can be roughly subdivided into dictionary-based methods [9][17][18][33][34], self-exemplar-based methods [15][20][32] and prior-based methods [13][14][16]. Dictionary-based methods refer to external high resolution (HR) and corresponding low resolution (LR) image pairs as exemplars to hallucinate the missing details, which requires large scale databases to cover the possible relationships between LR and HR images. As external-exemplar-based methods are time-consuming in the training procedure, self-exemplar-based methods assume the redundancy of patches within the single image and utilize recurring locally similar patches as exemplars to exploit underlying image details. Unlike dictionary-based and exemplar-based methods, prior-based methods utilize priors as constraints to alleviate the ill-posedness, and perform robustly and
L. Wang is with the College of Electronic Science, National University of Defense Technology, Changsha, China ([email protected]). Z. Lin, J. Gao, X. Deng and W. An are also with the College of Electronic Science, National University of Defense Technology, Changsha, China.

efficiently without relying on exemplars. Interpolation methods [22][23][26], the simplest prior-based methods, utilize analytical interpolation formulae such as the bicubic scheme to predict new pixels from local pixels; however, the spline functions may not match natural images with strong discontinuities, leading to ringing artifacts along the edges. The smoothness prior [24][25], another widely used category of image prior, regularizes the first or higher derivatives of the reconstructed image, suppressing additional noise effectively but introducing apparent blurring. Recently, sparsity-deduced priors [14][21][27][28][31] have been extensively investigated, which assume that a local image patch can be sparsely represented by a linear combination of atoms from an over-complete dictionary; however, the sparsity assumption loses texture and other image details. To alleviate the blurring of edges in SISR, edge-based priors [11][13][16] have been introduced, realizing outstanding sharpening and deblurring effects. Fattal [11] proposed an edge-deduced prior based on statistics of edge features, imposing local continuity measures in the upscaled image to match statistics learned from HR and LR image pairs. Sun [13][16] proposed a novel gradient profile prior, utilizing 1-D profiles of gradient magnitudes to describe the gradient structure with a parametric gradient profile model learned from a large set of natural images.
Considering that upscaling causes little degradation in flat regions while the dominant deterioration concentrates in edge and texture regions in the form of blurring, we are motivated to utilize a sharpening operation to alleviate the degradation without relying on external exemplars. In this paper we propose an adaptive patch-wise sigmoid transformation to realize appropriate sharpening of the upscaled image, and then utilize the sharpened image as an imposed regularization in the reconstruction, achieving remarkably effective deblurring and sharpness enhancement. Different from [13][16], which implement the sharpening process in gradient space, our patch-wise method utilizes the sigmoid function to fit the slope of local intensities directly and steepens the slope through a sigmoid transformation, similar to the image enhancement in [29][30]. With this simple but effective process, our method produces state-of-the-art SISR results with superior processing efficiency, and the reconstructed images are sharp and distinct with rare artifacts.

The remainder of this paper is organized as follows. In Section II we first introduce the patch-wise sigmoid transformation. In Section III we present the SISR framework based on the sigmoid transformation. Extensive experiments comparing with state-of-the-art algorithms are conducted in Section IV. Finally, the conclusions are drawn in Section V.

Longguang Wang, Zaiping Lin, Jinyan Gao, Xinpu Deng, Wei An
[Fig. 1 pipeline: extracted patch → normalization (by the normalization constant against the patch minimum and maximum) → sigmoid transformation of the normalized residual into the sharpened residual → re-quantify → sharpened patch]
Fig.1 Overall sigmoid transformation.
II. PATCH-WISE SIGMOID TRANSFORMATION
In this section, the proposed patch-wise sigmoid transformation is presented and analyzed. We first formulate the sigmoid transformation, then perform a parametrical analysis, and finally compare our sharpening operation to other edge enhancement methods.
A. Formulation
In our scenario, the sharpening operation is implemented directly in image space instead of gradient space. Considering the continuity of natural images in a local region, an extracted local image patch can be regarded as a slope, and the sharpening operation is intuitively equivalent to steepening this patch-wise slope. From the sketch of the sigmoid function $f(x) = 1/(1 + \exp(-a(x+b)))$ shown in Fig. 2, we can see that sigmoid functions with varying parameter pairs $a$ and $b$ characterize slopes with ranged steepness and location well. We are therefore motivated to choose the sigmoid function to fit the patch-wise slope, so that slope-based steepening can be realized through a simple parametrical transformation.

Fig.2 Sketch of sigmoid functions with ranged parameter pairs $a$ and $b$.

As shown in
Fig. 2, the value of the sigmoid function $f(x) = 1/(1 + \exp(-a(x+b)))$ ranges from 0 to 1; therefore the intensities within the patch need to be normalized first for the subsequent sigmoid transformation:

$$y_i = \frac{z_i - \min_{j \in patch}\{z_j\}}{\max_{j \in patch}\{z_j\} - \min_{j \in patch}\{z_j\}}, \quad (1)$$

where $z_i$ is the intensity at location $i$, $y_i$ represents the normalized value, and $\min_{j \in patch}\{z_j\}$, $\max_{j \in patch}\{z_j\}$ refer to the minimum and maximum of the patch respectively. To avoid $y_i$ equaling 0 or 1, we utilize a small constant $\varepsilon$, set to 0.01, to fine-tune the maximum and minimum as

$$\min_{j \in patch}\{z_j\} \leftarrow \min_{j \in patch}\{z_j\} - \varepsilon, \qquad \max_{j \in patch}\{z_j\} \leftarrow \max_{j \in patch}\{z_j\} + \varepsilon. \quad (2)$$

With the normalization implemented, the sigmoid function $f_0(x) = 1/(1 + \exp(-a_0(x+b_0)))$ is introduced to fit the normalized residuals $y_i$ and derive the corresponding $x_i$ values:

$$y_i = f_0(x_i) \;\Rightarrow\; x_i = -\frac{\ln(1/y_i - 1)}{a_0} - b_0, \quad (3)$$

and then we transform $x_i$ into the sharpened residuals $y_i'$ with the sharpened sigmoid function $f_1(x) = 1/(1 + \exp(-a_1(x+b_1)))$:

$$y_i' = f_1(x_i) = \frac{1}{1 + (1/y_i - 1)^{a_1/a_0}\, e^{a_1(b_0 - b_1)}}. \quad (4)$$

Note that all intensities within the image patch are fused together in this process, leading to strong robustness to noise and undulation. Finally we re-quantify the sharpened residuals $y_i'$ to derive the sharpened intensities $z_i'$:

$$z_i' = y_i' \times C_n + \min_{j \in patch}\{z_j\}, \quad (5)$$

where $C_n = \max_{j \in patch}\{z_j\} - \min_{j \in patch}\{z_j\}$ represents the normalization constant. As we can see from (4), the patch-wise sigmoid transformation can be regarded as a four-parameter problem, namely the sharpened image patch can be derived once the parameters $\{a_0, b_0, a_1, b_1\}$ are determined.
Observing (4) further, we can rewrite it as

$$y_i' = \frac{1}{1 + (1/y_i - 1)^{K}\, e^{B}} = T(y_i; K, B), \quad \text{where} \quad K = \frac{a_1}{a_0}, \quad B = a_1(b_0 - b_1). \quad (6)$$

The overall sigmoid transformation then degenerates into a two-parameter function

$$z_i' = T(z_i; K, B), \quad (7)$$

where $K$ determines the steepness of the sigmoid function while $B$ determines the location, namely the non-symmetry of the sigmoid function. The overall process of the sigmoid transformation is illustrated in Fig. 1.
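As a concrete illustration, the two-parameter transformation of (6) and (7) applied to a single patch can be sketched in NumPy as follows. This is a minimal sketch under our own naming (`sigmoid_transform_patch`), not the authors' Matlab implementation:

```python
import numpy as np

def sigmoid_transform_patch(patch, K=2.0, B=0.0, eps=0.01):
    """Sharpen one patch with the two-parameter sigmoid transformation.

    Implements Eqs. (1)-(7): normalize intensities to (0, 1), apply
    y' = 1 / (1 + (1/y - 1)**K * exp(B)), then re-quantify. K > 1 steepens
    the patch-wise slope (sharpening); K = 1, B = 0 leaves the patch unchanged.
    """
    z = patch.astype(np.float64)
    lo = z.min() - eps                      # Eq. (2): widen the range slightly
    hi = z.max() + eps                      # so that y never hits 0 or 1
    y = (z - lo) / (hi - lo)                # Eq. (1): normalization
    y_sharp = 1.0 / (1.0 + (1.0 / y - 1.0) ** K * np.exp(B))  # Eqs. (3)-(6)
    return y_sharp * (hi - lo) + lo         # Eq. (5): re-quantify
```

With K = 1 and B = 0 the transformation reduces to the identity, matching the behavior discussed in the parametrical analysis.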
Fig.3 Overlapping strategy for patch-wise sigmoid transformation.
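The overlap-and-average aggregation of Fig. 3 can be sketched as follows. This is a simplified, hedged version: `patch_fn` is a placeholder for any patch-level operator (e.g. the sigmoid transformation), and plain averaging of the overlapped outputs is used in place of the Hanning-window weighting:

```python
import numpy as np

def apply_patchwise(img, patch_fn, n_patch=3, s_patch=1):
    """Apply patch_fn to every overlapping n_patch x n_patch patch and
    average the outputs where patches overlap (mean of overlapped pixels)."""
    H, W = img.shape
    acc = np.zeros((H, W), dtype=np.float64)   # running sum of patch outputs
    cnt = np.zeros((H, W), dtype=np.float64)   # how many patches cover a pixel
    for i in range(0, H - n_patch + 1, s_patch):
        for j in range(0, W - n_patch + 1, s_patch):
            acc[i:i + n_patch, j:j + n_patch] += patch_fn(img[i:i + n_patch, j:j + n_patch])
            cnt[i:i + n_patch, j:j + n_patch] += 1.0
    mask = cnt == 0
    cnt[mask] = 1.0
    out = acc / cnt
    out[mask] = img[mask]                      # uncovered border pixels pass through
    return out
```

With stride 1 every interior pixel is covered by several patches, which is what suppresses block artifacts at patch boundaries.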
Considering that the patch-wise sigmoid transformation may generate block artifacts at the boundaries between adjacent patches, we further utilize an overlapping strategy with a Hanning window during implementation, as shown in Fig. 3, and compute the mean value of the overlapped pixels as the final sharpened result. Consequently, to implement the patch-wise sigmoid transformation with the overlapping strategy, two additional parameters need to be determined: the size of the image patch $n_{patch}$ and the stride $s_{patch}$.

B. Parametrical analysis
As analyzed above, four parameters in total, consisting of $K$, $B$, $n_{patch}$ and $s_{patch}$, need to be determined for the sigmoid transformation. In this part, the parameters are analyzed respectively and determined empirically, while more experimental results are exhibited in Section IV.

• Sharpness $K$
Recalling that parameter $K$ in (6) determines the steepness of the sigmoid function, it directly affects the sharpness and width of the edges. As we can see from left to right by row in Fig. 4, varying values of $K$ lead to ranged sharpening effects: the larger $K$ is, the sharper and narrower the edges are. Moreover, the sigmoid transformation performs as a blurring operation with smoothed and widened edges for $K < 1$, while performing as a sharpening operation with sharper and narrower edges for $K > 1$. Under the condition of $K = 1$, the sigmoid transformation performs no sharpening or blurring, with the slope information well preserved in the results. Considering the requirement of image sharpening together with the computational cost, we empirically set $K = 2$ in this paper.

• Location $B$
Different from $K$, parameter $B$ determines the location of the sigmoid function as shown in Fig. 2, namely it affects the extension of the edges. After the downsampling operation, some edges may not be located at the center of an LR pixel, which is often the case especially when the upscaling factor $n$ is large. As shown in Fig. 4, we can see from top to bottom by column that the edges with varying values of $B$ are displaced along the vertical direction, and the sign of $B$ determines the direction. For $B < 0$, shown in the first row, the edges are displaced along the gradient direction, while they are displaced opposite to the gradient direction for $B > 0$, shown in the last row. Under the condition of $B = 0$, the edges are supposed to be located exactly at the center of the corresponding LR pixel, with no displacement during the sharpening operation. Considering that the sub-pixel locations of edges in the LR image vary between regions and are hard to estimate, especially when $n$ is large, we empirically set $B = 0$ in this paper for simplicity.
Fig.4 Effects with ranged values of $K$ and $B$. The results are derived from the blurred original image with different parameter settings (panels: Original; $K = 0.5, 1, 2$ crossed with $B = -1, 0, 1$).

• Patch size $n_{patch}$ and stride $s_{patch}$
As a patch-wise slope-based sharpening operation, the sigmoid transformation requires the patch size $n_{patch}$ to cover complete slope information without disturbing information. Considering that an over-large patch size may introduce extra disturbing information and an over-small patch size may lose slope information, both leading to a degradation of the sharpening effect and the generation of annoying artifacts, we empirically set $n_{patch} = 3$ in LR space to cover the neighboring information with little loss of slope information. To reduce the block artifacts at the boundaries between patches, the stride $s_{patch}$ is required to cooperate with the patch size $n_{patch}$ to guarantee compatibility between adjacent patches. Considering the computational cost, in this paper we empirically set $s_{patch} = 1$ in LR space for efficiency, namely $s_{patch} = n$ in HR space.

Fig.5 Overall SR framework.

C. Relationship to other edge enhancement methods
As a generic edge-based prior for super-resolution, the proposed sigmoid transformation behaves similarly to other edge enhancement methods. In this part, we discuss the relationship and the inherent differences between the proposed sigmoid transformation and other representative edge enhancement methods.

Fattal [11] proposed an edge statistics prior for image upscaling, namely the edge-frame continuity modulus (EFCM), which is learned to characterize the marginal distribution of the gradients over the whole image. However, the imposed constraint on local continuity measures may not fit varying types of edges and can generate jaggy artifacts; besides, the computational cost is considerable, as reported in [11].

Sun [13][16] established a generic prior called the gradient profile prior, and proposed a gradient field transformation to transfer the image gradient field guided by the prior knowledge learned from large training sets. However, the gradient profile only utilizes the pixel information along the gradient direction, neglecting the local image structure and leading to susceptibility to noise and undulation.

The shock filter is another representative category of edge enhancement approach, commonly designed to enhance edges detected by edge detectors. Recently, the shock filter has been introduced into SR applications [] as an edge enhancement operation. However, as the shock filter is sensitive to noise, the noise tends to be amplified while the edges are enhanced.

Compared with other edge enhancement methods, the proposed sigmoid transformation presents two distinct advantages. Firstly, the sigmoid transformation is patch-wise and slope-based, namely the sharpening operation is based on the local image structure, which is more reasonable and better suppresses noise. Secondly, our method achieves superior efficiency, which is further demonstrated in
Section IV.

III. SISR FRAMEWORK BASED ON SIGMOID TRANSFORMATION

In our scenario, the SISR process is realized through typical iterative reconstruction [21][27], where the patch-wise sigmoid transformation serves as an imposed sharpening regularization cooperating with the reconstruction error. Given an HR image $X$ and a corresponding LR image $Y$, the degradation process can be formulated as

$$Y = DHX + V, \quad (8)$$

where $H$ is the blurring operator, $D$ denotes the decimation matrix and $V$ represents the additive Gaussian noise in $Y$. To realize the restoration of the HR image $X$, we integrate the patch-wise sigmoid transformation as a sharpening regularization term together with the reconstruction error in the cost function as

$$X = \arg\min_X \{\|DHX - Y\|^2 + \lambda\,\|X - Z\|^2\}, \quad (9)$$

where $Z$ serves as a sharpened image of $X$ obtained with the proposed patch-wise sigmoid transformation. To solve the minimization problem in (9), we refer to our previous work in [] and utilize the fast upscaling technique in our SISR framework:

$$X_{t+1} = X_t - \tau\,(U(DHX_t - Y) + \lambda\,(X_t - Z_t)), \quad (10)$$

where $X_t$, $X_{t+1}$ are the estimates of the HR image $X$ in the $t$-th and $(t+1)$-th iterations respectively, $\lambda$ represents the regularization parameter weighting the regularization cost against the reconstruction error, $\tau$ serves as the step size, and $U(\cdot)$ refers to the upscaling technique proposed in [37]. The overall framework is shown in Fig. 5 and further summarized in Algorithm 1.
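The iteration of (10) can be sketched as follows. This is a hedged toy version: nearest-neighbor upscaling stands in for both the bicubic initialization and the upscaling technique $U(\cdot)$ of [37], block averaging stands in for the blur-plus-decimation $DH$ of (8), and `sharpen` is a placeholder for the sigmoid-transformation regularizer:

```python
import numpy as np

def upscale(x, n):
    """Nearest-neighbor upscaling; a stand-in for U(.) of [37]."""
    return np.kron(x, np.ones((n, n)))

def degrade(x, n):
    """Toy DH: average each n x n block, i.e. blur + decimate in one step."""
    H, W = x.shape
    return x.reshape(H // n, n, W // n, n).mean(axis=(1, 3))

def reconstruct(y, n, sharpen, lam=0.2, tau=0.1, iters=30):
    """Iterate Eq. (10): X <- X - tau * (U(DHX - Y) + lam * (X - Z))."""
    x = upscale(y, n)                         # initial HR estimate X_0
    for _ in range(iters):
        err = degrade(x, n) - y               # reconstruction error DHX - Y
        z = sharpen(x)                        # sharpened reference image Z
        x = x - tau * (upscale(err, n) + lam * (x - z))
    return x
```

The data-fidelity term pulls the estimate toward consistency with the LR input, while the regularization term pulls it toward its own sharpened version.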
Algorithm 1: SISR based on sigmoid transformation
Input: LR image $Y$, scaling factor $n$
Initialize: upscale the LR image $Y$ using bicubic interpolation to obtain the initial HR estimate $X_0$
Do:
1) Re-degenerate the HR estimate $X_{t-1}$ and compute the reconstruction error $(DHX_{t-1} - Y)$, then upscale the reconstruction error with the technique of [37];
2) Compute the sharpened image $Z_{t-1}$ from $X_{t-1}$ using the proposed patch-wise sigmoid transformation, then derive the difference between $X_{t-1}$ and $Z_{t-1}$ as the regularization term;
3) Update $X_{t-1}$ to $X_t$ according to (10).
Until: the stopping criteria are satisfied
Output: reconstructed HR image $X$

IV. EXPERIMENTAL RESULTS
Implementation:
As human vision is more sensitive to brightness change, for color images we apply our method only to the brightness channel (Y), with the color channels (UV) upscaled by bicubic interpolation. In our experiments, the HR images are first blurred by a Gaussian kernel with $\sigma = 1.2$ and then downsampled by a factor $n = 3$ to serve as the input LR images. In the implementation of our method, the regularization parameter $\lambda$ and the step size $\tau$ are set to 0.2 and 0.1 respectively, while the maximum number of iterations is set to 30. All the experiments are coded in Matlab R2011 and run on a workstation with a Core i7 @ 3.60 GHz CPU and 16GB RAM.

Metrics:
To evaluate and compare the results quantitatively, the peak signal-to-noise ratio (PSNR) and the mean structural similarity (SSIM) are utilized as metrics, defined as

$$PSNR = 10\log_{10}\!\left(\frac{L^2}{MSE}\right), \quad MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(Y(i,j) - X(i,j)\right)^2, \quad (11)$$

$$SSIM = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}, \quad C_1 = (K_1 L)^2, \; C_2 = (K_2 L)^2,$$

where $\mu_x$, $\mu_y$ are the mean values of images $X$ and $Y$ respectively, $\sigma_x$, $\sigma_y$ are their standard deviations, $\sigma_{xy}$ is their covariance, $C_1$, $C_2$ are two stabilizing constants with $L$ representing the dynamic range of a pixel value, and $K_1$, $K_2$ are generally set to 0.01 and 0.03 respectively.
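For reference, the two metrics can be sketched as follows. The SSIM here is a single-window (global) simplification of the windowed mean SSIM actually used for evaluation; the function names and this simplification are our own:

```python
import numpy as np

def psnr(x, y, L=255.0):
    """PSNR = 10 * log10(L^2 / MSE), L being the dynamic range of a pixel."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    """Single-window SSIM of Eq. (11) with C1 = (K1*L)^2, C2 = (K2*L)^2."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()   # covariance sigma_xy
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```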
In this section, we focus on the sharpness parameter $K$ and the location parameter $B$, and conduct extensive experiments on the Set5 and Set14 datasets to test the effects of our method under various parameter settings.
Fig.6 Effect of parameter $K$ on images PPT and Comic. The images are reconstruction results (3X) with $K = 2, 3, 4$.

Fig. 6 presents the effects of the sharpness parameter $K$ on the reconstruction results. As we can see, larger $K$ leads to sharper and narrower edges; however, over-sharpening artifacts become visible when $K$ is too large. In Fig. 7, the effects of the location parameter $B$ on the reconstruction results are given. Zooming in on the images, the displacement of edges can be observed between results with different values of $B$. Comparing the enlarged regions of Lenna, the widening of the brown region under Lenna's hat as $B$ increases is noticeable.

Fig.7 Effect of parameter $B$ on images Foreman and Lenna. The images are reconstruction results (3X) with $B = -1, 0, 1$.

Fig.8 Average PSNR of different parameter settings.
A quantitative comparison is exhibited in Table I, with the average performance presented in Fig. 8. As the sharpness parameter $K$ increases, the reconstruction performance first improves and then begins to deteriorate due to the over-sharpening effects. Concerning the location parameter $B$, it obviously has a great impact on the performance, with $B = 0$ performing best overall. However, this does not mean that $B$ brings no benefit in fine-tuning the locations of edges, which is validated by some superior results in Table I with $B \neq 0$. As the edges within an image vary remarkably, the fine-tuning should be implemented adaptively; nevertheless, the global selection of $B$ and implementation of fine-tuning in our scenario may not adapt to most edges, commonly leading to a degeneration of the reconstruction performance. As the adaptive selection of $K$ and $B$ is not the focus of this paper, we empirically set $K = 2$ and $B = 0$ for the following experiments and leave this problem for further investigation.

Fig.9 SISR results (3X) on image Baby by ranged methods.

TABLE I
COMPARISON OF PSNR WITH DIFFERENT PARAMETER SETTINGS. THE BEST RESULTS ARE SHOWN IN BOLD.
[Columns: $K = 2, 3, 4$, each with $B = -1, 0, 1$. Rows: Baby, Bird, Butterfly, Head, Woman, Baboon, Barbara, Coastguard, Comic, Flowers, Foreman, Lena, Monarch, Pepper, PPT, Zebra, Bridge, Man, Average. The PSNR values were not recoverable from the extracted text.]
B. Comparison to other methods
In this section, experiments are conducted on the Set5 and Set14 datasets to compare the reconstruction performance with other state-of-the-art approaches, including bicubic interpolation, Kim et al.'s method [14], Zeyde et al.'s method [31], Timofte et al.'s method [18], Yang et al.'s method [28], Dong et al.'s method [36], Sun et al.'s method [16] and Yang et al.'s method [17]. The source codes of approaches [14][31][18][28][17][36] are downloaded from the authors' homepages, and we refer to Yang's implementation work in [19] to reproduce the other approaches [10][13][16], for which no originally released code is available. The parameters recommended by the authors of the comparison methods are used in our experiments. We summarize the parameter settings for our method in
Table II, and the reconstruction results are exhibited in Figs. 9-13, with quantitative results presented in Table III.

TABLE II
PARAMETER SETTINGS FOR THE PROPOSED METHOD.
Sharpness $K$: 2; location $B$: 0; patch size $n_{patch}$: 3 (LR space); stride $s_{patch}$: 1 (LR space).

• Visual quality
From the reconstruction results on images Baby, Butterfly, Head, Comic and Lena with upscaling factor 3 shown in Figs. 9-13, we can see that bicubic interpolation blurs the edge and texture regions remarkably, with a serious loss of image details. Sun's method generates sharper edges; however, the blurring effect is still noticeable. For Zeyde's method, Timofte's method and the two methods of Yang, the blurring effect is further alleviated, but some ringing artifacts around the edges are still visible. Relying on adaptive sparse representation, Dong's method achieves relatively
Fig.10 SISR results (3X) on image Butterfly by ranged methods.

Fig.11 SISR results (3X) on image Comic by ranged methods.

excellent visual quality; however, some fine details such as texture are lost due to the sparsity regularization. Kim's method generates sharp edges and distinct details, achieving superior reconstruction performance. Concerning our proposed method, comparable top visual quality is achieved with even more fine details reconstructed, demonstrating the superiority of the proposed method.

• Quantitative metrics
As we can see in
Table III, the proposed method outperforms the other approaches on all the test images with the highest PSNR and SSIM values. Compared with Sun's method, our method achieves a remarkable improvement, with the PSNR value improved by 1.61 dB on average. Concerning the state-of-the-art methods of Kim and Yang, the proposed method improves the average PSNR value by about 0.7 dB and 0.3 dB respectively, demonstrating its superior reconstruction performance.
Fig.12 SISR results (3X) on image Lena by ranged methods.

Fig.13 SISR results (3X) on image Head by ranged methods.

• Running time
In terms of processing efficiency, the running time of bicubic interpolation is obviously much lower, as no complex calculation is involved. Some dictionary-based methods such as Zeyde's, Timofte's and Yang's also achieve superior efficiency; however, note that the learning processes of these methods are hugely time-consuming. Concerning our proposed method, no additional training process is needed, and the running time is still lower than that of Kim's, Yang's, Yang's, Dong's and Sun's methods, even comparable to
TABLE III
COMPARISON OF PSNR, SSIM AND RUNNING TIME. THE BEST RESULTS ARE SHOWN IN BOLD.
[The bold Proposed-column entries and all running-time entries were not recoverable from the extracted text; "--" marks missing values.]

Image       Metric  Bicubic  Sun[16]  Zeyde[31]  Timofte[18]  Yang[28]  Kim[14]  Yang[17]  Dong[36]  Proposed
Baby        PSNR    30.91    31.47    32.12      32.11        32.11     32.22    32.86     28.32     --
            SSIM    0.848    0.861    0.871      0.873        0.874     0.875    0.886     0.825     --
Bird        PSNR    28.72    29.29    30.06      30.06        30.00     30.29    30.89     26.26     --
            SSIM    0.870    0.882    0.894      0.896        0.895     0.899    0.903     0.815     --
Butterfly   PSNR    21.33    21.95    22.93      22.86        22.96     23.76    23.88     20.23     --
            SSIM    0.742    0.769    0.815      0.810        0.817     0.846    0.835     0.759     --
Head        PSNR    29.43    29.67    29.95      30.00        30.00     30.08    30.25     28.08     --
            SSIM    0.692    0.703    0.714      0.717        0.718     0.718    0.727     0.670     --
Woman       PSNR    25.66    26.24    27.24      27.20        27.22     27.56    28.33     23.46     --
            SSIM    0.840    0.855    0.874      0.875        0.875     0.879    0.892     0.817     --
Baboon      PSNR    20.75    20.89    21.03      21.08        21.09     21.13    21.24     20.37     --
            SSIM    0.423    0.441    0.457      0.464        0.468     0.470    0.490     0.423     --
Barbara     PSNR    24.04    24.27    24.89      24.60        24.63     24.68    24.90     22.86     --
            SSIM    0.678    0.692    0.710      0.712        0.714     0.715    0.729     0.652     --
Coastguard  PSNR    24.60    24.85    25.26      25.21        25.35     25.34    25.54     23.93     --
            SSIM    0.527    0.548    0.569      0.574        0.582     0.577    0.596     0.518     --
Comic       PSNR    20.73    21.05    21.49      21.57        21.62     21.76    22.04     19.72     --
            SSIM    0.615    0.640    0.672      0.677        0.682     0.689    0.707     0.596     --
Flowers     PSNR    24.25    24.62    25.15      25.18        25.21     25.35    25.68     22.96     --
            SSIM    0.728    0.743    0.760      0.762        0.764     0.766    0.774     0.696     --
Foreman     PSNR    28.23    28.81    29.75      29.71        29.63     30.30    30.82     27.11     --
            SSIM    0.863    0.874    0.889      0.830        0.891     0.898    0.902     0.860     --
Lena        PSNR    28.81    29.23    29.84      29.86        29.88     30.14    30.49     26.45     --
            SSIM    0.755    0.766    0.777      0.779        0.780     0.783    0.790     0.726     --
Monarch     PSNR    26.63    27.22    28.07      28.04        28.10     28.70    28.95     25.17     --
            SSIM    0.880    0.890    0.904      0.903        0.905     0.914    0.912     0.866     --
Pepper      PSNR    29.06    29.43    30.03      29.93        29.95     30.16    30.39     26.65     --
            SSIM    0.764    0.772    0.779      0.779        0.779     0.781    0.786     0.732     --
PPT         PSNR    21.17    21.56    22.44      22.30        22.30     22.65    23.13     19.76     --
            SSIM    0.830    0.846    0.870      0.865        0.865     0.873    --        --        --
Zebra       PSNR    23.55    24.20    25.22      25.20        25.17     25.71    26.45     21.31     --
            SSIM    0.704    0.732    0.762      0.766        0.770     0.771    0.804     0.667     --
Bridge      PSNR    23.65    23.94    24.33      24.35        24.38     24.42    24.77     22.63     --
            SSIM    0.585    0.608    0.633      0.639        0.645     0.642    0.672     0.566     --
Man         PSNR    24.82    25.16    25.72      25.73        25.78     25.96    26.26     23.46     --
            SSIM    0.676    0.695    0.718      0.721        0.715     0.725    0.745     0.654     --
Average     PSNR    25.32    25.77    26.42      26.39        26.41     26.68    27.05     22.82     --
            SSIM    0.723    0.740    0.759      0.758        0.763     0.768    0.780     0.703     --
Zeyde's method, demonstrating the superior efficiency of the proposed simple sigmoid transformation.
Fig.14 Plot of the trade-off between accuracy and speed for ranged methods with upscaling factor 3. The results present the mean PSNR values and running times on the test images.
Overall, as shown in Fig. 14, our method realizes dominant reconstruction performance without much loss of efficiency, performing superior to Yang's method, known as a state-of-the-art SISR approach, with respect to PSNR, and competitive with Zeyde's method concerning running time. As our method requires no training process, its superiority in efficiency becomes even more prominent if training time is taken into consideration.
C. Robustness
To further demonstrate the robustness of the proposed method to various Gaussian blurring kernels, upscaling factors and noise intensities, additional experiments are conducted on the Set5 and Set14 datasets in this section, and the quantitative results are presented in Tables IV-VI.

• Blurring kernel
Table IV presents the comparison of reconstruction performance with various blurring kernels under upscaling factor 3. As we can see, the overall reconstruction performance of the various methods degenerates as the blurring kernel width increases. Under the condition of $\sigma = 0.8$, although Kim's method performs best with the highest average PSNR value, our method is still comparable and outperforms the other approaches. Concerning the conditions of $\sigma = 1.2$ and $\sigma = 1.6$, the proposed method outperforms the other approaches on average, while Yang's method, as the most competitive one, also performs competitively when $\sigma = 1.6$. Overall, with the blurring kernel width varying, the proposed method performs as the dominant one on most test images when $\sigma = 1.2$ and $\sigma = 1.6$, and achieves the second best performance after Kim's method when $\sigma = 0.8$, indicating strong robustness to ranged blurring kernels.

TABLE IV
COMPARISON OF RECONSTRUCTION PERFORMANCE WITH VARIOUS $\sigma$. THE BEST RESULTS ARE SHOWN IN BOLD.
[Rows: Baby, Bird, Butterfly, Head, Woman, Baboon, Barbara, Coastguard, Comic, Flowers, Foreman, Lena, Monarch, Pepper, PPT, Zebra, Bridge, Man, Average. Columns: Bicubic, Sun [16], Zeyde [31], Timofte [18], Yang [28], Kim [14], Yang [17], Dong [36], Proposed. The values were not recoverable from the extracted text.]

• Upscaling factor
In Table V, results concerning various upscaling factors are exhibited. Although a common degeneration trend can be observed as the upscaling factor increases from 2 to 4, our method still outperforms the other state-of-the-art approaches on average with the highest PSNR values, while Kim's method and Yang's method perform superior to ours on several images under upscaling factors 2 and 4 respectively. On the whole, our method is independent of exemplars and realizes state-of-the-art or even better performance for ranged upscaling factors, demonstrating superior robustness to the upscaling factor.
TABLE V COMPARISON OF RECONSTRUCTION PERFORMANCE WITH VIROUS π . THE BEST RESULTS ARE SHOWN IN BOLD. π Bicubic Sun [16] Zeyde [31] Timofte [18] Yang [28] Kim [14] Yang [17] Dong [36] Proposed Baby
2 32.28 33.17 33.06 33.06 33.06 33.21 35.45 32.21
3 30.91 31.47 32.12 32.11 32.11 32.22 32.86 28.32
4 29.54 29.80 30.89 30.85 30.66 31.04 30.79 26.16
Bird
2 30.33 31.31 31.31 31.25 31.20 31.49 34.35 30.38
3 28.72 29.29 30.06 30.06 30.00 30.29 30.89 26.26
4 27.18 27.43 28.43 28.51 28.38 28.63 28.37 23.88
Butterfly
2 22.67 23.68 23.94 23.73 23.71 24.10 28.06 24.13
3 21.33 21.95 22.93 22.86 22.96 23.76 23.88 20.23
4 20.09 20.41 21.62 21.58 21.66 22.40 21.56 18.29
Head
2 30.15 30.47 30.52 30.53 30.55 30.56 31.34 30.23
3 29.43 29.67 29.95 30.00 30.00 30.08 30.25 28.08
4 28.72 28.85 29.25 29.29 29.25 29.37 29.20 26.68
Woman
2 27.11 28.03 28.19 28.13 28.07 28.35
4 24.37 24.64 25.84 25.80 25.71 26.24 25.82 21.53
Baboon
2 21.35 21.54 21.67 21.70 21.72 21.76
3 20.75 20.89 21.03 21.08 21.09 21.13 21.24 20.37
4 20.28 20.33 20.49 20.52 20.52 20.54 20.48 19.68
Barbara
2 24.80 25.10 25.26 25.23 25.25 25.33
3 24.04 24.27 24.89 24.60 24.63 24.68 24.90 22.86
4 23.33 23.44 23.79 23.82 23.80 23.98 23.79 21.69
Coastguard
2 25.53 25.94 26.32 26.32 26.40 26.45 28.47 26.51
3 24.60 24.85 25.26 25.21 25.35 25.34 25.54 23.93
4 23.92 24.03 24.30 24.32 24.25 24.38 24.36 22.99
Comic
2 21.84 22.35 22.60 22.61 22.62 22.76 25.13 22.83
3 20.73 21.05 21.49 21.57 21.62 21.76 22.04 19.72
4 19.81 19.94 20.44 20.47 20.47 20.56 20.43 18.27
Flowers
2 25.52 26.11 26.28 26.24 26.25 26.39 28.45 26.16
3 24.25 24.62 25.15 25.18 25.21 25.35 25.68 22.96
4 23.17 23.35 23.96 24.01 23.98 24.16 23.91 21.31
Foreman
2 29.32 30.25 30.40 30.24 30.24 30.48 32.37 30.32
3 28.23 28.81 29.75 29.71 29.63 30.30 30.82 27.11
4 27.22 27.51 28.74 28.67 28.52
Lena
2 29.91 30.48 30.62 30.60 30.58 30.75 32.50 29.73
3 28.81 29.23 29.84 29.86 29.88 30.14 30.49 26.45
4 27.78 27.99 28.76 28.83 28.70
Monarch
2 28.03 29.00 29.18 29.04 29.02 29.38 32.93 29.11
3 26.63 27.22 28.07 28.04 28.10 28.70 28.95 25.14
4 25.35 25.64 26.68 26.71 26.77 27.25 26.62 23.13
Pepper
2 30.02 30.54 30.65 30.57 30.58 30.74 31.49 29.50
3 29.06 29.43 30.03 29.93 29.95 30.16 30.39 26.65
4 28.04 28.23 29.06 28.99 28.79
PPT
2 22.50 23.18 23.61 23.44 23.47 23.73 26.79 23.42
3 21.17 21.56 22.44 22.30 22.30 22.65 23.13 19.76
4 19.98 20.14 21.01 20.86 20.73 21.24 21.00 18.20
Zebra
2 25.13 26.23 26.37 26.27 26.23 26.49 30.82 25.58
3 23.55 24.20 25.22 25.20 25.17 25.71 26.45 21.31
4 21.94 22.22 23.43 23.48 23.43 23.98 23.33 19.27
Bridge
2 24.54 24.97 25.18 25.17 25.20 25.27 27.12 25.18
3 23.65 23.94 24.33 24.35 24.38 24.42 24.77 22.63
4 22.80 22.93 23.39 23.39 23.37 23.38 23.35 21.39
Man
2 25.80 26.29 26.51 26.50 26.49 26.64
4 23.95 24.11 24.79 24.78 24.77 25.00 24.73 21.93
Average
2 26.49 27.15 27.32 27.26 27.26 27.44 29.73 27.01
3 25.35 25.77 26.42 26.39 26.41 26.68 27.05 23.82
4 24.308 24.50 25.27 25.27 25.21 25.55 25.20 22.18

• Noise intensity
Table VI shows the reconstruction performance at various noise intensities. As analyzed in Section II, the patch-wise slope-based implementation of our method provides innate noise suppression, which is demonstrated by its superior performance with the highest average PSNR. Although sparsity-based approaches such as Kim's and Yang's methods have advantages in noise suppression, our method still outperforms them, except under a few conditions where it performs slightly worse.
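The overlapped patch-wise sigmoid transformation discussed above can be sketched roughly as follows: within each overlapped patch, intensities are pushed through a sigmoid, which steepens local slopes (sharpening) while the averaging of overlaps damps isolated noise. This is an illustrative sketch only, not the authors' exact formulation (given in Section II); centering the sigmoid at the patch mean and the default values of the steepness parameter k and patch geometry are assumptions made here.

```python
import numpy as np

def sigmoid_sharpen(img, k=8.0, patch=8, step=4):
    """Overlapped patch-wise sigmoid mapping: intensities (normalized to
    [0, 1]) pass through a sigmoid centered at the patch mean (location
    parameter), steepening local slopes; overlapping patches are averaged.
    Uncovered border pixels, if any, are left at zero in this sketch."""
    h, w = img.shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    x = img / 255.0
    for i in range(0, h - patch + 1, step):
        for j in range(0, w - patch + 1, step):
            p = x[i:i + patch, j:j + patch]
            b = p.mean()                       # per-patch location parameter
            s = 1.0 / (1.0 + np.exp(-k * (p - b)))
            lo, hi = s.min(), s.max()
            if hi - lo > 1e-8:                 # rescale back to the patch's range
                s = (s - lo) / (hi - lo) * (p.max() - p.min()) + p.min()
            else:
                s = p                          # flat patch: identity mapping
            out[i:i + patch, j:j + patch] += s
            weight[i:i + patch, j:j + patch] += 1.0
    weight[weight == 0] = 1.0
    return 255.0 * out / weight
```

Because a flat patch maps to itself while edges are steepened, the transform sharpens structure without amplifying low-amplitude fluctuations, which is consistent with the noise-suppression behavior reported in Table VI.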
TABLE VI
COMPARISON OF RECONSTRUCTION PERFORMANCE WITH VARIOUS NOISE INTENSITIES. THE BEST RESULTS ARE SHOWN IN BOLD.
σ_N Bicubic Sun [16] Zeyde [31] Timofte [18] Yang [28] Kim [14] Yang [17] Dong [36] Proposed
Baby
0 30.91 31.47 32.12 32.11 32.11 32.22 32.86 28.32
2 30.78 31.32 31.93 31.91 31.89 32.02 32.57 28.23
4 30.43 30.90 31.42 31.37 31.30 31.48
Bird
0 28.72 29.29 30.06 30.06 30.00 30.29 30.89 26.26
2 28.64 29.19 29.91 29.91 29.84 30.14 30.67 26.18
4 28.43 28.93 29.61 29.57 29.47 29.77 30.14 25.98
Butterfly
0 21.33 21.95 22.93 22.86 22.96 23.76 23.88 20.23
2 21.32 21.93 22.91 22.83 22.94 23.72 23.83 20.19
4 21.28 21.88 22.85 22.76 22.86 23.62 23.73 20.14
Head
0 29.43 29.67 29.95 30.00 30.00 30.08 30.25 28.08
2 29.33 29.56 29.83 29.86 29.85 29.96 30.07 27.91
4 29.07 29.27 29.47 29.48 29.44
Woman
0 25.66 26.24 27.24 27.20 27.22 27.56 28.33 23.46
2 25.63 26.20 27.19 27.14 27.16 27.50 28.22 23.40
4 25.51 26.05 26.99 26.95 26.93 27.27 27.90 23.30
Baboon
0 20.75 20.89 21.03 21.08 21.09 21.13 21.24 20.37
2 20.74 20.87 21.01 21.06 21.08 21.11 21.22 20.34
4 20.71 20.83 20.97 21.01 21.02 21.06 21.15 20.29
Barbara
0 24.04 24.27 24.89 24.60 24.63 24.68 24.90 22.86
2 24.01 24.24 24.55 24.56 24.59 24.65 24.85 22.83
4 23.94 24.16 24.45 24.46 24.48 24.54 24.71 22.73
Coastguard
0 24.60 24.85 25.26 25.21 25.35 25.34 25.54 23.93
2 24.58 24.83 25.22 25.17 25.31 25.30 25.49 23.88
4 24.48 24.72 25.09 25.03 25.15 25.17 25.31 23.78
Comic
0 20.73 21.05 21.49 21.57 21.62 21.76 22.04 19.72
2 20.72 21.03 21.47 21.54 21.60 21.73 22.01 19.70
4 20.68 20.99 21.42 21.49 21.54 21.67 21.92 19.67
Flowers
0 24.25 24.62 25.15 25.18 25.21 25.35 25.68 22.96
2 24.22 24.59 25.11 25.13 25.17 25.31 25.62 22.93
4 24.14 24.49 25.00 25.01 25.03 25.18 25.45 22.83
Foreman
0 28.23 28.81 29.75 29.71 29.63 30.30 30.82 27.11
2 28.17 28.73 29.65 29.60 29.52 30.17 30.62 27.03
4 27.99 28.51 29.35 29.29 29.17 29.81
Lena
0 28.81 29.23 29.84 29.86 29.88 30.14 30.49 26.45
2 28.73 29.14 29.72 29.73 29.75 30.01 30.32 26.36
4 28.50 28.87 29.39 29.38 29.36 29.65
Monarch
0 26.63 27.22 28.07 28.04 28.10 28.70 28.95 25.14
2 26.58 27.16 27.98 27.95 28.00 28.61 28.84 25.12
4 26.44 26.99 27.76 27.72 27.74 28.35 28.52 24.96
Pepper
0 29.06 29.43 30.03 29.93 29.95 30.16 30.39 26.65
2 28.98 29.33 29.92 29.81 29.81 30.04 30.23 26.57 30.22
4 28.73 29.04 29.56 29.44 29.40 29.67
PPT
0 21.17 21.56 22.44 22.30 22.30 22.65 23.13 19.76
2 21.16 21.56 22.43 22.29 22.29 22.65 23.11 19.75
4 21.13 21.53 22.39 22.25 22.24 22.61 23.06 19.73
Zebra
0 23.55 24.20 25.22 25.20 25.17 25.71 26.45 21.31
2 23.52 24.18 25.18 25.15 25.13 25.66 26.37 21.29
4 23.45 24.09 25.05 25.03 24.99 25.50 26.15 21.22
Bridge
0 23.65 23.94 24.33 24.35 24.38 24.42 24.77 22.63
2 23.62 23.91 24.29 24.31 24.34 24.40 24.71 22.61
4 23.55 23.83 24.19 24.20 24.22 24.29 24.56 22.52
Man
0 24.82 25.16 25.72 25.73 25.78 25.96 26.26 23.46
2 24.78 25.12 25.66 25.66 25.71 25.92 26.18 23.43
4 24.70 25.01 25.54 25.53 25.56 25.77 25.99 23.30
Average
0 25.35 25.77 26.42 26.39 26.41 26.68 27.05 23.82
2 25.31 25.72 26.33 26.31 26.33 26.61 26.94 23.76
4 25.18 25.56 26.14 26.11 26.11 26.39 26.66 23.63
V. CONCLUSIONS AND FUTURE WORK

In this paper, we propose a fast and simple single image super-resolution algorithm based on sigmoid transformation, which applies an overlapped patch-wise sigmoid transformation to realize slope-based sharpening of the image and implements the sharpening operation as an imposed regularization term in the reconstruction. Extensive experiments compared with other state-of-the-art approaches demonstrate the superiority of the proposed algorithm in effectiveness and efficiency. Considering the fast, simple and effective SR processing of the proposed method, which requires no learning or training, it has widespread application value and prospects.

Although the proposed overlapped patch-wise sigmoid transformation realizes state-of-the-art or better reconstruction performance, the parameters involved, including the steepness parameter k and the location parameter b, are determined empirically, which can be further improved. In the future, we will investigate adaptive determination of these parameters to enhance the adaptability of the sigmoid transformation to complex images.

REFERENCES

[1] V. Chandran, C. Fookes, F. Lin, and S. Sridharan, "Investigation into optical flow super-resolution for surveillance applications," in Proc. APRS Workshop on Digital Image Computing, vol. 26, no. 47, pp. 73–78, 2005.
[2] L. Zhang, H. Zhang, H. Shen, and P. Li, "A super-resolution reconstruction algorithm for surveillance images," Signal Processing, vol. 90, no. 3, pp. 848–859, 2010.
[3] A. J. Tatem, H. G. Lewis, P. M. Atkinson, and M. S. Nixon, "Super-resolution target identification from remotely sensed images using a Hopfield neural network," IEEE Trans. Geosci. Remote Sens., vol. 39, no. 4, pp. 781–796, Apr. 2001.
[4] M. W. Thornton, P. M. Atkinson, and D. A. Holland, "Sub-pixel mapping of rural land cover objects from fine spatial resolution satellite sensor imagery using super-resolution pixel-swapping," International Journal of Remote Sensing, vol. 27, no. 3, pp. 473–491, Jan. 2006.
[5] K. Makantasis, K. Karantzalos, A. Doulamis, and N. Doulamis, "Deep supervised learning for hyperspectral data classification through convolutional neural networks," in Proc. IGARSS, Jul. 2015, pp. 4959–4962.
[6] H. Greenspan, G. Oz, N. Kiryati, and S. Peled, "Super-resolution in MRI," in Proc. ISBI, 2002, pp. 943–946.
[7] W. Shi, J. Caballero, C. Ledig, X. Zhuang, W. Bai, K. Bhatia, A. Marvao, T. Dawes, D. O'Regan, and D. Rueckert, "Cardiac image super-resolution with global correspondence using multi-atlas patchmatch," in Proc. MICCAI, Jan. 2013, pp. 9–16.
[8] T. Goto, T. Fukuoka, F. Nagashima, S. Hirano, and M. Sakurai, "Super-resolution system for 4K-HDTV," in Proc. Int. Conf. Pattern Recognition, Jan. 2014, pp. 4453–4458.
[9] K. Zhang, X. Gao, D. Tao, and X. Li, "Multi-scale dictionary for single image super-resolution," in Proc. CVPR, Providence, RI, Jun. 2012, pp. 1114–1121.
[10] M. Irani and S. Peleg, "Improving resolution by image registration," Graphical Models and Image Processing, vol. 53, no. 3, pp. 231–239, 1991.
[11] R. Fattal, "Image upsampling via imposed edge statistics," ACM Trans. Graph., vol. 26, no. 3, p. 95, 2007.
[12] S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: A technical overview," IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21–36, May 2003.
[13] J. Sun et al., "Image super-resolution using gradient profile prior," in Proc. CVPR, 2008, pp. 1–8.
[14] K. I. Kim and Y. Kwon, "Single-image super-resolution using sparse regression and natural image prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 6, pp. 1127–1133, 2010.
[15] G. Freedman and R. Fattal, "Image and video upscaling from local self-examples," ACM Trans. Graph., vol. 30, no. 2, pp. 1–11, 2011.
[16] J. Sun et al., "Gradient profile prior and its applications in image super-resolution and enhancement," IEEE Trans. Image Process., vol. 20, no. 6, pp. 1529–1542, 2011.
[17] C. Yang and M. H. Yang, "Fast direct super-resolution by simple functions," in Proc. ICCV, 2013, pp. 561–568.
[18] R. Timofte, V. De, and L. V. Gool, "Anchored neighborhood regression for fast example-based super-resolution," in Proc. ICCV, 2013, pp. 1920–1927.
[19] C. Yang, C. Ma, and M. H. Yang, "Single-image super-resolution: A benchmark," in Proc. ECCV, 2014, pp. 372–386.
[20] Z. Zhu et al., "Fast single image super-resolution via self-example learning and sparse representation," IEEE Trans. Multimedia.
[21] S. Gu et al., "Convolutional sparse coding for image super-resolution," in Proc. ICCV, 2015, pp. 1823–1831.
[22] X. Gao, K. Zhang, D. Tao, and X. Li, "Image super-resolution with sparse neighbor embedding," IEEE Trans. Image Process., vol. 21, no. 7, pp. 3194–3205, Jul. 2012.
[23] L. Zhang and X. Wu, "An edge-guided image interpolation algorithm via directional filtering and data fusion," IEEE Trans. Image Process., vol. 15, no. 8, pp. 2226–2238, Aug. 2006.
[24] H. A. Aly and E. Dubois, "Image up-sampling using total variation regularization with a new observation model," IEEE Trans. Image Process., vol. 14, no. 10, pp. 1647–1659, 2005.
[25] S. Dai, M. Han, W. Xu, Y. Wu, and Y. Gong, "Soft edge smoothness prior for alpha channel super resolution," in Proc. CVPR, Minneapolis, MN, Jun. 2007, pp. 1–8.
[26] H. S. Hou and H. C. Andrews, "Cubic splines for image interpolation and digital filtering," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-26, no. 6, pp. 508–517, 1978.
[27] W. Liu and S. Li, "Sparse representation with morphologic regularizations for single image super-resolution," Signal Processing, vol. 98, pp. 410–422, 2014.
[28] J. Yang, J. Wright, and T. Huang, "Image super-resolution as sparse representation of raw image patches," IEEE Trans. Image Process., vol. 19, no. 11, pp. 2861–2873, 2010.
[29] Q. He, C. Zhang, and D. C. Liu, "Nonlinear image enhancement by self-adaptive sigmoid function," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 8, no. 11, pp. 319–328, 2015.
[30] E. F. Arriaga-Garcia, R. E. Sanchez-Yanez, and M. G. Garcia-Hernandez, "Image enhancement using bi-histogram equalization with adaptive sigmoid functions," in Proc. CONIELECOMP, 2014, pp. 28–34.
[31] R. Zeyde, M. Elad, and M. Protter, "On single image scale-up using sparse-representations," in Proc. International Conference on Curves and Surfaces, 2010, pp. 711–730.
[32] D. Glasner, S. Bagon, and M. Irani, "Super-resolution from a single image," in Proc. ICCV, 2009, pp. 349–356.
[33] C. Dong, C. C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution," in Proc. ECCV, 2014, pp. 184–199.
[34] L. He, H. Qi, and R. Zaretzki, "Beta process joint dictionary learning for coupled feature spaces with application to single image super-resolution," in Proc. CVPR, 2013, pp. 345–352.
[35] M. T. Merino and J. Nunez, "Super-resolution of remotely sensed images with variable-pixel linear reconstruction," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 5, pp. 1446–1457, May 2007.
[36] W. Dong, L. Zhang, G. Shi, and X. Wu, "Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization," IEEE Trans. Image Process., vol. 20, no. 7, pp. 1838–1857, 2011.
[37] L. Wang, Z. Lin, X. Deng, and W. An, "Multi-frame image super resolution with fast upscaling technique," arXiv preprint.