High-Capacity Reversible Data Hiding in Encrypted Images using Adaptive Encoding
WU You-Qing, MA Wen-Jing, YIN Zhao-Xia
(Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei 230601, China)
(School of Computer Science and Technology, Hefei Normal University, Hefei 230601, China)
Corresponding author: Zhaoxia Yin. E-mail: [email protected]
Abstract: With the popularization of digital information technology, reversible data hiding in encrypted images (RDHEI) has gradually become a research hotspot of privacy protection in cloud storage. As a technology that can embed additional information in the encrypted domain, extract the embedded information correctly, and recover the original image losslessly, RDHEI has received much attention from researchers. To embed sufficient additional information in the encrypted image, a high-capacity RDHEI method using adaptive encoding is proposed in this paper. First, the occurrence frequency of the different prediction errors of the original image is calculated and the corresponding adaptive Huffman coding is generated. Then, the original image is encrypted and the encrypted pixels are marked with different Huffman codewords according to their prediction errors. Finally, additional information is embedded in the room reserved by the marked pixels through bit substitution. Experimental results show that the proposed method outperforms state-of-the-art methods in embedding rate while still extracting the embedded information correctly and recovering the image losslessly.
Keywords: privacy protection; reversible data hiding; encrypted images; adaptive encoding; prediction error
In the past few decades, with the continuous development and improvement of digital media technology, technologies for digital media protection have emerged in an endless stream [1],[2]. As an important information protection technology, information hiding plays a vital role in information security by realizing functions such as copyright protection and covert communication while not excessively affecting the carrier. According to different technical objectives, traditional information hiding can be divided into three categories, namely watermarking [3],[4], steganography [5],[6], and reversible data hiding (RDH) [7],[8]. Watermarking is widely used in security fields such as copyright protection and integrity authentication. In a watermarking system, some information is embedded in the digital work without compromising its usefulness. Steganography aims to embed information in the carrier in an imperceptible way to achieve covert communication. Early RDH was mainly applied in the plaintext domain, where the embedded information can be extracted correctly and the original image can be recovered intact. Existing plaintext-domain RDH methods mainly include lossless compression [9],[10], difference expansion [11],[12], and histogram shifting [7],[13]. These methods have greatly promoted the development of plaintext-domain RDH and improved its performance. However, plaintext-domain RDH cannot protect the information of the original image from being leaked.
With the promotion of cloud storage technology, images are stored and transmitted more and more frequently. To protect personal privacy, a combination of encryption and RDH has been proposed. On the one hand, image encryption ensures that the information of the original image is not exposed; on the other hand, the embedded information can be correctly extracted and the image can be recovered intact.
Therefore, reversible data hiding in encrypted images (RDHEI) [14]-[17] has received widespread attention.
According to the order of image encryption and room reservation, existing RDHEI methods can be divided into two main categories: vacating room after encryption (VRAE) and reserving room before encryption (RRBE). VRAE methods [18]-[21] utilize the image redundancy remaining after encryption to reserve room for embedding information. An RDHEI method that utilizes the spatial correlation of the image is proposed in [18]. By flipping the least significant bits of the pixels in an encrypted image block, additional information can be embedded and extracted. However, when the encrypted image block is too small, the extraction of the embedded information may fail in rough areas of the image. Subsequently, a separable RDHEI method is proposed in [20]. The method reserves room to accommodate the embedded information by compressing the least significant bits of the encrypted image. At the receiving end, the extraction of the embedded information and the recovery of the image can be performed separably. This separable method further broadens the application scenarios of RDHEI. In [21], Huang et al. introduce a new RDHEI framework using block encryption. The pixel correlation is still retained within each block after encryption, so most plaintext-domain RDH methods can be applied to the encrypted image to embed information. This method further proves that preserving the correlation between image pixels is essential for reserving embedding room.
The above-mentioned methods can embed additional information in the encrypted image, but due to the low image redundancy after encryption, the embedding capacity of the VRAE method is limited and there may be a certain bit error rate when recovering the image. Different from the VRAE method, the RRBE method [22]-[25] processes the image before encryption to reserve room.
Literature [22] first proposes an RDHEI method that reserves room before image encryption. This method realizes the correct extraction of the embedded information and the lossless recovery of the image. Subsequently, an RDHEI method using prediction-error histogram shifting is proposed in [23]. According to different application requirements, the embedded information can be extracted from either the encrypted image or the decrypted image. Then, Puteaux et al. predict the most significant bit (MSB) of each pixel to reserve room in [24]. Since the RDHEI method disregards the visual quality of the encrypted image, modifying the MSB of the pixel for prediction has no negative impact. Besides, this method further improves the embedding capacity of the image. Inspired by the MSB prediction method, an RDHEI method based on multi-MSB prediction and Huffman coding is introduced in [25]. In this method, the binary sequences of the original pixel value and the prediction value are compared, and the number of identical bits from MSB to LSB is recorded to generate the Huffman coding. Finally, each pixel is marked with the corresponding Huffman codeword. This method considers not only multi-MSB prediction but also pixel marking. The idea of marking pixels has received attention from researchers recently.
In [26], an RDHEI method that utilizes a parametric binary tree to mark pixels is proposed. In this method, information can be embedded in the unmarked bits of the pixel by binary tree coding, thereby improving the embedding capacity. Next, the parametric binary tree labeling method is extended in [27]. By reducing the impact of image blocks on pixel utilization, the embedding capacity is further improved. Subsequently, Literature [28] proposes a method based on bit-plane compression. In this method, the prediction error corresponding to the original image is calculated and the redundancy between the prediction errors is used to compress the bit planes.
This method not only recovers the original image losslessly but also greatly improves the embedding capacity. Compared with RDHEI methods that operate on the original image directly, a higher embedding capacity can be obtained by processing the prediction error.
The method proposed in [27] utilizes equal-length coding to mark pixels with different prediction errors. This method can obtain a higher embedding capacity, but it does not consider the distribution characteristics of the prediction errors. Based on this observation, we propose a high-capacity adaptive encoding method in this paper to improve the method in [27]. We use adaptively generated Huffman coding to mark pixels with different prediction errors. At the same time, we also utilize the prediction error, which has more redundancy, as the carrier. In the proposed method, the prediction error of the entire image is calculated and the occurrence frequency of each prediction error is obtained. Then the adaptive Huffman coding is generated according to the occurrence frequencies of the prediction errors. Finally, the generated Huffman codewords are used to mark the pixels, and the remaining bits are used to embed additional information. Due to the character of Huffman coding, shorter codewords are used to mark the pixels whose prediction errors occur more frequently, which reserves more room in the image. Compared with the method in [27], the proposed method uses adaptive Huffman coding to mark pixels. By combining the texture characteristics of each image, the proposed method finally obtains a higher embedding capacity.
1 Research framework
The RDHEI method mainly involves three roles, namely the image owner, the information hider, and the image receiver. The image owner has the information of the original image and performs a series of preprocessing and encryption operations on the image to protect its content. The information hider cannot obtain the original image information without the permission of the image owner, but he can embed some necessary additional information in the encrypted image. The image receiver executes information extraction or image recovery according to the type of key he holds.
The texture characteristics of different images differ, and the corresponding prediction-error distributions differ as well. To take advantage of the characteristics of each image, we propose a high-capacity RDHEI method using adaptive encoding in this paper. As shown in Fig.1, the proposed method can be divided into the following steps:
(1) The image owner preprocesses and encrypts the original image. On the one hand, the image owner uses a predictor to calculate the prediction errors of the image and generates the Huffman coding according to the occurrence frequencies of the different prediction errors. On the other hand, the original image is directly encrypted with the image encryption key to obtain the encrypted image. The encrypted pixels are then marked by bit substitution according to the Huffman codewords corresponding to their prediction errors. Finally, the marked encrypted image is obtained;
(2) The information hider embeds additional information into the marked encrypted image. The unmarked bits of the marked encrypted image pixels constitute the reserved room. The information hider can embed the encrypted additional information into the reserved room by bit substitution.
Although the information hider cannot obtain the information of the original image from the marked encrypted image, this does not affect the embedding of the additional information or the generation of the stego image;
(3) The image receiver extracts the embedded information or recovers the original image. After the image receiver obtains the stego image, he performs the information extraction or image recovery operation according to the type of key.
Fig.1 The framework of the proposed RDHEI method
2 High-capacity RDHEI using adaptive encoding
Section 1 introduces the general framework of the high-capacity RDHEI method using adaptive encoding. This section describes the details of the proposed method. In the first stage, the image owner calculates the prediction errors of the image; the specific process is described in Section 2.1. Then, in Section 2.2, the Huffman coding is adaptively generated according to the occurrence frequencies of the prediction errors, and the encrypted image is marked according to the Huffman codewords corresponding to the different prediction errors to obtain the marked encrypted image. In the second stage, after the information hider receives the marked encrypted image, he embeds the encrypted additional information in the unmarked bits of the pixels to obtain the stego image. The detailed operation is introduced in Section 2.3. After receiving the stego image, the image receiver can perform information extraction or image recovery; the details are introduced in Section 2.4.
2.1 Prediction error calculating
For a grayscale image with a size of m × n, the median edge detection (MED) prediction method [29] can be used to calculate the prediction values of the original image. Let x(i,j) (2 ≤ i ≤ m, 2 ≤ j ≤ n) be a pixel value of the original image. As shown in Fig.2, the three pixels to the left, top, and top-left of pixel x(i,j) are selected as reference values to calculate the prediction value p(i,j):

p(i,j) = max(x(i,j-1), x(i-1,j)),           if x(i-1,j-1) ≤ min(x(i,j-1), x(i-1,j))
         min(x(i,j-1), x(i-1,j)),           if x(i-1,j-1) ≥ max(x(i,j-1), x(i-1,j))    (1)
         x(i,j-1) + x(i-1,j) - x(i-1,j-1),  otherwise

When calculating the prediction values of the entire image, the pixels in the first row and the first column are used as reference pixels and are left unchanged. Starting from the pixel in the second row and second column, the prediction value of each pixel is calculated according to formula (1). Then, the pixel value x(i,j) and its prediction value p(i,j) are combined to calculate the prediction error e(i,j) = x(i,j) - p(i,j). The prediction errors of the remaining pixels are calculated in turn until the prediction error of the entire image is obtained.
Fig.2 MED predictor is used for pixel prediction
2.2 Huffman coding and marking
After the operation in Section 2.1, the prediction errors of the image are obtained. To exploit the texture characteristics of the image itself and reserve room for embedding additional information, adaptive Huffman coding is performed on the prediction errors of the image. Since the generated Huffman coding is used to mark pixels, the length of a Huffman codeword cannot exceed eight bits. To meet this restriction, the occurrence frequencies of the prediction errors must be preprocessed. The prediction errors with tiny occurrence frequencies are uniformly treated as the same case, and the sum of their occurrence frequencies is counted. Assuming that every Huffman codeword has 8 bits, the corresponding coding is the case in which the most codewords can be obtained.
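As an illustration, the MED prediction and prediction-error calculation described above can be sketched in Python. This is a minimal sketch; the function and variable names are ours, not the paper's.

```python
import numpy as np

def med_predict(img):
    """Prediction errors under the MED predictor of formula (1).
    The first row and first column are reference pixels and keep error 0."""
    img = img.astype(np.int64)
    m, n = img.shape
    err = np.zeros((m, n), dtype=np.int64)
    for i in range(1, m):
        for j in range(1, n):
            a = img[i, j - 1]      # left neighbour
            b = img[i - 1, j]      # top neighbour
            c = img[i - 1, j - 1]  # top-left neighbour
            if c <= min(a, b):
                p = max(a, b)
            elif c >= max(a, b):
                p = min(a, b)
            else:
                p = a + b - c      # x(i,j-1) + x(i-1,j) - x(i-1,j-1)
            err[i, j] = img[i, j] - p
    return err

# The 4x4 block of Fig.4(a): every prediction error comes out 0,
# matching the zeros shown in Fig.4(b)
block = np.array([[162, 162, 162, 161]] * 4)
err = med_predict(block)
```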
The average occurrence frequency of a codeword in this case is recorded as the partition threshold, which is used to determine the initial range of prediction errors. Fig.3 shows the corresponding Huffman tree when all Huffman codewords are 8 bits long. In this case, the average frequency of each codeword is 1/256, which is recorded as the initial partition threshold. According to the initial partition threshold, the initial prediction-error range is determined and the corresponding Huffman coding is carried out.
Fig.3 A Huffman tree corresponding to a Huffman codeword length of 8
Next, check whether all current Huffman codewords are at most 8 bits long. If not, adjust the partition threshold to further reduce the prediction-error range until the lengths of the Huffman codewords meet the condition; the final partition threshold is thus obtained. Finally, we determine the prediction-error range that can be used to generate an appropriate Huffman coding and obtain a Huffman coding that matches the texture characteristics of the image.
In the process of generating the Huffman coding, pixels are divided into three types: reference pixels, non-embedded pixels, and embeddable pixels. The reference pixels are the pixels in the first row and the first column, which are left unchanged. The remaining pixels are divided into non-embedded pixels and embeddable pixels according to the final partition threshold. The prediction errors of non-embedded pixels exceed the range limited by the final partition threshold; they are regarded as the same case when the Huffman coding is generated and correspond to a single Huffman codeword. The prediction errors of embeddable pixels are within the limited range, and pixels with different prediction errors correspond to different Huffman codewords.
To protect the information of the image from being leaked, image encryption should be performed before marking the pixels with Huffman codewords.
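The threshold-adjustment procedure above can be sketched as follows. This is a simplified illustration; the helper names (`build_huffman`, `adaptive_codes`) and the merged non-embedded symbol `'OTHER'` are our own choices, not the paper's.

```python
import heapq
from collections import Counter

def build_huffman(freqs):
    """Plain Huffman coding over a {symbol: count} dict."""
    if len(freqs) == 1:
        return {next(iter(freqs)): '0'}
    heap = [(c, i, [s]) for i, (s, c) in enumerate(freqs.items())]
    heapq.heapify(heap)
    codes = {s: '' for s in freqs}
    tie = len(heap)  # unique tie-breaker so counts never compare symbol lists
    while len(heap) > 1:
        c1, _, s1 = heapq.heappop(heap)
        c2, _, s2 = heapq.heappop(heap)
        for s in s1:
            codes[s] = '0' + codes[s]
        for s in s2:
            codes[s] = '1' + codes[s]
        tie += 1
        heapq.heappush(heap, (c1 + c2, tie, s1 + s2))
    return codes

def adaptive_codes(errors, max_len=8):
    """Merge prediction errors rarer than the partition threshold into one
    non-embedded class and rebuild the code, tightening the threshold
    until no codeword exceeds max_len bits."""
    counts = Counter(errors)
    total = sum(counts.values())
    thresh = total / 256            # initial partition threshold (1/256)
    while True:
        kept = {e: c for e, c in counts.items() if c >= thresh}
        freqs = dict(kept)
        rest = total - sum(kept.values())
        if rest:
            freqs['OTHER'] = rest   # single codeword for non-embedded pixels
        codes = build_huffman(freqs)
        if max(len(c) for c in codes.values()) <= max_len:
            return codes
        thresh *= 2                 # shrink the embeddable error range

codes = adaptive_codes([0] * 100 + [1] * 30 + [-1] * 20 + [5] * 1)
```

The most frequent error (here 0) receives the shortest codeword, which is exactly the property that lets the method reserve more room than equal-length coding.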
During image encryption, an image H with the same size as the original image is generated according to the image encryption key; h(i,j) is the pixel value in H, where i and j represent the abscissa and ordinate of the corresponding pixel. The original pixel value x(i,j) and the pixel value h(i,j) at the corresponding position are each converted into an eight-bit binary number. The conversion is shown in formula (2), where floor(·) is the floor operation, mod represents the modulo operation, and k is the index of the bit in the binary number:

x_k(i,j) = floor( x(i,j) / 2^(8-k) ) mod 2,  k = 1, 2, ..., 8    (2)

After converting x(i,j) and h(i,j) to eight-bit binary numbers, the encryption is performed as in formula (3), where '⊕' represents the exclusive-or operation:

x'_k(i,j) = x_k(i,j) ⊕ h_k(i,j),  k = 1, 2, ..., 8    (3)

Then the eight-bit binary number after exclusive-or encryption is converted back to decimal according to formula (4), which yields the encrypted pixel value x'(i,j):

x'(i,j) = Σ_{k=1}^{8} x'_k(i,j) × 2^(8-k)    (4)

Finally, the above operations are performed on all pixels of the image in turn to obtain the encrypted image.
After the image is encrypted, the encrypted pixels can be marked with the adaptive Huffman coding. The marking operations differ according to the type of pixel. The reference pixels are not marked and their information is recorded as auxiliary information. Non-embedded pixels are marked by bit substitution according to their common Huffman codeword, and the information of the replaced bits of the non-embedded pixels is also recorded as auxiliary information. Embeddable pixels are marked with different Huffman codewords according to their different prediction errors. The unmarked bits of each embeddable pixel form the reserved room and can be used to embed additional information. Once the pixels are marked, the Huffman coding rules are converted into a binary bitstream and stored in the positions of the original reference pixels by bit substitution.
To facilitate subsequent information embedding or extraction, the auxiliary information is embedded in the reserved room of the current embeddable pixels by bit substitution.
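The bitwise encryption of formulas (2)-(4) amounts to XOR-ing each 8-bit pixel with the matching byte of the key-generated image H. A minimal sketch follows; the seed standing in for the image encryption key is our own choice.

```python
import numpy as np

def xor_encrypt(img, key_seed):
    """XOR each 8-bit pixel with the matching pixel of a pseudo-random
    image H derived from the key: formulas (2)-(4) collapse to a
    bytewise XOR. Running the same function again decrypts."""
    rng = np.random.default_rng(key_seed)
    H = rng.integers(0, 256, size=img.shape, dtype=np.uint8)
    return img ^ H

img = np.array([[162, 161], [160, 159]], dtype=np.uint8)
enc = xor_encrypt(img, key_seed=2023)
dec = xor_encrypt(enc, key_seed=2023)  # XOR with the same H restores the image
```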
162 162 162 161
162 162 162 161
162 162 162 161
162 162 162 161
(a) Original image
162 162 162 161
162   0   0   0
162   0   0   0
162   0   0   0
(b) Prediction error

Fig.4 Diagram of the pixel marking process: (a) Original image, (b) Prediction error, (c) Encrypted image, (d) Encrypted image marked with Huffman coding, (e) Marked encrypted image
Fig.4 shows a part of the Lena image and briefly illustrates the whole process of pixel marking. Fig.4(a) is the original block; the pixels in the first row and the first column are reference pixels. According to the calculation method of the prediction error in Section 2.1, the prediction errors are as shown in Fig.4(b). The prediction errors are then preprocessed and Huffman coding is performed to obtain the Huffman codewords shown in Table 1. Fig.4(c) is the encrypted block obtained by directly encrypting Fig.4(a) with the image encryption key. Then, the encrypted block is marked by bit substitution according to the Huffman codewords shown in Table 1. As shown in Fig.4(d), the bits in red are the Huffman marking bits of the current pixel and the remaining bits represent the reserved room, in which information can be embedded. Since the prediction errors of adjacent pixels often differ only slightly, to prevent image information from being leaked, Fig.4(d) is reversed to obtain the marked encrypted image shown in Fig.4(e).
Table 1 Huffman coding table
Prediction error    Huffman codeword
-5                  [0,1,0,0,1]
-4                  [1,1,0,1]
-3                  [0,1,1,0]
-2                  [0,0,1,0]
-1                  [0,0,0,0]
 0                  [1,0,0]
 1                  [1,1,1]
 2                  [0,0,0,1]
 3                  [0,1,0,1]
 4                  [1,1,0,0]
 5                  [0,1,0,0,0]
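Marking an encrypted pixel with a codeword from Table 1 is a bit substitution on the high bits. A small sketch, assuming the pixel is already encrypted (the function name is ours):

```python
def mark_pixel(enc_pixel, codeword):
    """Replace the top len(codeword) bits of an 8-bit encrypted pixel
    with the Huffman codeword; the remaining low bits are the room
    reserved for additional information."""
    k = len(codeword)
    room_mask = (1 << (8 - k)) - 1          # keeps the 8-k low bits
    return (int(codeword, 2) << (8 - k)) | (enc_pixel & room_mask)

# Prediction error 0 maps to the 3-bit codeword '100' in Table 1,
# leaving 5 low bits of reserved room:
marked = mark_pixel(0b01101011, '100')      # top 3 bits become 100
```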
2.3 Embedding additional information
In the marked encrypted image, assume that the auxiliary information has a total of l_a bits, that there are N_e embeddable pixels, and that the i-th embeddable pixel is marked with an l_i-bit Huffman codeword. Then the i-th embeddable pixel reserves 8 - l_i bits for embedding information. The net embedding capacity EC (in bits) can be calculated from the total amount of reserved room and the length of the auxiliary information:

EC = Σ_{i=1}^{N_e} (8 - l_i) - l_a    (5)

After the information hider obtains the marked encrypted image, the Huffman coding rules stored in the reference pixels can be read out. Then, the positions of the net reserved room can be located. Finally, the additional information is embedded in the net reserved room by bit substitution to generate the stego image. To ensure that the content of the additional information is not leaked, the additional information must be encrypted with the information hiding key; the encryption operation is the same as the image encryption method in Section 2.2.
2.4 Information extraction or image recovery
In the information extraction or image recovery stage, the image receiver extracts the Huffman coding rules from the reference pixels. Combined with the Huffman coding rules, the image receiver can find all embeddable pixels and extract the embedded information. The extracted information consists of two parts: the auxiliary information and the encrypted additional information. Depending on the keys held by the image receiver, information extraction or image recovery proceeds differently. The specific operations can be divided into the following three cases:
(1) Only the image encryption key: When the image receiver only has the image encryption key, the original image can be recovered losslessly. The image receiver restores the extracted auxiliary information to the corresponding positions to obtain the encrypted reference pixels and non-embedded pixels.
With the image encryption key, the original reference pixels and the original non-embedded pixels of the image can be recovered. Then, the prediction errors of the pixels can be obtained through the Huffman coding rules. Finally, the image receiver calculates the prediction value of each pixel according to formula (1) and recovers the embeddable pixels from the prediction values and prediction errors. After the above operations, the encrypted image is recovered to the original image.
(2) Only the information hiding key: When the image receiver only has the information hiding key, the embedded additional information can be extracted correctly. The image receiver uses the information hiding key to decrypt the extracted additional information and thus recovers it.
(3) Both the image encryption key and the information hiding key: When the image receiver has both keys, the additional information can be extracted and recovered correctly and the original image can be recovered losslessly. Information extraction and image recovery do not affect each other, so they are separable.
3 Experimental results and analysis
To verify the feasibility of the proposed method, this section presents a large number of experiments. As shown in Fig.5, five conventional grayscale images are used for experimental comparison. At the same time, to reduce the impact of the texture complexity of the test images on the experiments, we also test the performance of different methods on three datasets: UCID [30], BOSSBase [31], and BOWS-2 [32]. Because the RDHEI method does not care about the image quality in the encrypted domain, the embedding performance becomes the key indicator for measuring a method. The embedding rate (ER), expressed in bpp (bits per pixel), is regarded as the key indicator of embedding performance; the maximum amount of embeddable additional information is used to calculate the ER. Besides, we also use the common indicators Mean Square Error (MSE) and Structural Similarity (SSIM) to test the reversibility of the proposed method.
Fig.5 Five grayscale test images: (a) Peppers, (b) Lena, (c) Lake, (d) Baboon, (e) Tiffany
3.1 Reversibility analysis
To verify that the proposed method can extract the information and recover the image reversibly, this section carries out a reversibility verification. In the information extraction or image recovery stage, the prediction errors of the pixels can be obtained and the original image can be recovered losslessly. The three datasets UCID [30], BOSSBase [31], and BOWS-2 [32] are utilized to verify whether the original images are consistent with the recovered images, that is, to verify the reversibility of image recovery. Table 2 shows the MSE and the SSIM between the original datasets and the recovered datasets. The MSE of the datasets is 0, indicating that each recovered image is identical to the original image. At the same time, the SSIM between the recovered images and the original images is 1, which means that the structures of the two images are the same; that is, the proposed method can reversibly recover the original image. The information extraction of the proposed method is introduced in Section 2.4. In the process of information extraction, the image receiver extracts all the embedded additional information losslessly, indicating that the proposed method also achieves reversible information extraction. In the proposed method, information extraction and image recovery do not affect each other and can be completed independently, which further proves that the information extraction and image recovery of the proposed method are not only reversible but also separable.
Table 2 MSE and SSIM of original datasets and recovered datasets

        UCID    BOSSBase    BOWS-2
MSE     0       0           0
SSIM    1       1           1
3.2 Security analysis
As a high-capacity RDHEI method using adaptive encoding, the proposed method protects not only the original image content but also the additional information embedded in the image. To prove the security of the proposed method, this section analyzes the encryption method and compares the feature maps of the images in different states. As introduced above, the method relies on an image encryption key and an information hiding key for security. For a grayscale image of size m × n, the image encryption key generates a pseudo-random matrix H. After all the pixels in H are converted into 8-bit binary numbers, there are 8 × m × n bits of '0' or '1'. Hence there are 2^(8×m×n) possible image encryption keys, which makes it very difficult to predict the correct one. Similarly, assuming that l bits of additional information are embedded in the marked encrypted image, there are 2^l possible pseudo-random sequences generated by the information hiding key, so it is also difficult to obtain the correct information hiding key. The existence of the image encryption key and the information hiding key makes it difficult for illegal users to obtain the plaintext information of the image and the embedded additional information. The analysis of the pixel characteristics of the images in each state in Fig.6 further confirms the security of the proposed method. Fig.6(a) is the original Lena image and Fig.6(b) is the encrypted image. Then Huffman coding is applied to obtain the marked encrypted image shown in Fig.6(c). After embedding the additional information, the stego image in Fig.6(d) is obtained. Finally, image recovery is performed to obtain the recovered image in Fig.6(e). Fig.6(f) shows the pixel distribution histogram of the original image, and Fig.6(g) is the pixel distribution histogram of the encrypted image processed with the image encryption key; the pixel distribution is flat and it is difficult to obtain meaningful image feature information.
After the encrypted image is marked by Huffman coding, the marked encrypted image is obtained. Fig.6(h) is the pixel distribution histogram of the marked encrypted image. Compared with the pixel distribution histogram of the encrypted image, the overall pixel distribution is still disordered and the original image information cannot be deduced from it, which means that the security of the proposed RDHEI method is guaranteed.
Fig.6 Image pixel feature representation at different stages: (a) Original image, (b) Encrypted image, (c) Marked encrypted image, (d) Stego image, (e) Recovered image, (f) Histogram of the original image, (g) Histogram of the encrypted image, (h) Histogram of the marked encrypted image
3.3 Performance comparison
In the proposed method, the unmarked bits of the embeddable pixels can be used to embed information. Formula (5) in Section 2.3 gives the calculation of the net embedding capacity EC, from which the ER (bpp) of the image can be calculated. The proposed method utilizes adaptive Huffman coding to mark pixels, improving the method in [27] by making full use of the texture characteristics of the image. Table 3 compares the performance of [27] with the proposed method from several aspects. As shown in Table 3, the number of embeddable pixels is increased and the number of auxiliary pixels (reference pixels and non-embedded pixels) that cannot be used to reserve room is reduced in the proposed method.
Compared with the method in [27], the pixel utilization has increased to a certain extent, so the ER is significantly improved in the proposed method. Since the proposed method utilizes adaptive Huffman coding to mark the image, pixels whose prediction errors occur more frequently are marked with shorter Huffman codewords, which makes full use of the characteristics of the image itself and reserves more room.
To illustrate the performance of the proposed method more fully, this section also compares its ER with the methods in [24], [26], [27], and [28]. As shown in Fig.7, the ER of the five test images in Fig.5 is compared first. The method in [24] predicts the MSB to embed information and its ER is close to 1 bpp. A parametric binary tree coding method is utilized to mark image pixels in [26], which improves the embedding performance. Then, the method in [27] uses an improved equal-length binary tree coding to mark more embeddable pixels, which greatly improves the ER. The method in [28] combines the characteristics of the prediction error to compress the bit planes, so the ER is further improved. As shown in Fig.7, the ER of the Baboon image is significantly lower than that of the other images for all methods. This is because the texture of Baboon is more complex and the image redundancy is low. It can be observed in Table 3 that in the proposed method there are 36614 auxiliary pixels in the Baboon image, more than for the other test images. Nevertheless, the experimental results show that the ER of the Baboon image obtained by the proposed method is still higher than that of the other methods. For the other test images, the proposed method also obtains a higher ER than the methods in [24], [26], [27], and [28].
To verify that this performance improvement is not accidental, this section also conducts comparative experiments on the UCID [30], BOSSBase [31], and BOWS-2 [32] datasets.
The average ER of the proposed method on the three datasets is compared with the methods in [24], [26], [27], and [28] to verify its effectiveness. As shown in Fig.8, the proposed method obtains a higher average ER on all three datasets. The experimental results show that, compared with the methods in [24], [26], [27], and [28], the average ER of the proposed method on the three datasets increases by 1.156 bpp, 1.810 bpp, 1.533 bpp, and 1.448 bpp on average, respectively. Compared with the other methods, the proposed method utilizes adaptive Huffman coding to mark pixels with different prediction errors. The shortest average codeword length of the Huffman coding makes full use of the texture characteristics of the image itself, which leads to the higher average ER of the proposed method. The average ER of the proposed method on the UCID [30], BOSSBase [31], and BOWS-2 [32] datasets reaches 3.162 bpp, 3.917 bpp, and 3.775 bpp, respectively.
Table 3
Table 3  The performance comparison with TMM2020 [27]

Test image   Method            Embeddable pixels   Auxiliary pixels   Pixel utilization*   ER (bpp)
Lena         TMM2020 [27]      243107              19037              92.7%                2.645
             Proposed method   246614              15530              94.1%                3.262
Baboon       TMM2020 [27]      155266              106878             59.2%                0.969
             Proposed method   225530              36614              86.0%                1.481
Tiffany      TMM2020 [27]      243463              18681              92.9%                2.652
             Proposed method   245814              16330              93.8%                3.376
Peppers      TMM2020 [27]      235184              26960              89.7%                2.494
             Proposed method   249882              12262              95.3%                2.910
Lake         TMM2020 [27]      209185              52959              79.8%                1.998
             Proposed method   243183              18961              92.8%                2.409

*Pixel utilization = number of embeddable pixels / number of image pixels

Fig.7 Comparison of the ER (bpp) of five test images between our method and state-of-the-art methods
Fig.8 Comparison of the average ER (bpp) of three datasets between our method and state-of-the-art methods

Summary
This paper improves the method in [27] and proposes a high-capacity RDHEI method using adaptive encoding. According to the occurrence frequency of the different prediction errors of the image, the proposed method utilizes adaptive Huffman coding to mark pixels. Compared with the equal-length coding proposed in [27], the proposed adaptive Huffman coding is more flexible, and the coding with the shortest average codeword length can be obtained. By taking advantage of the texture characteristics of the image itself, the proposed method obtains a higher embedding rate and provides sufficient room for embedding additional information. However, the proposed method still has some limitations in the pixel-marking process. In the future, we can try to further increase the number of embeddable pixels to improve the embedding performance.

References:
[1] Shen J, Liao X, Qin Z, Liu X C. Spatial steganalysis of low embedding rate based on convolutional network. Ruan Jian Xue Bao/Journal of Software, 2019 (in Chinese with English abstract). [doi: 10.13328/j.cnki.jos.005980]
[2] Li X R, Ji S L, Wu C M, Liu Z G, Deng S G, Cheng P, Yang M, Kong X W. Survey on deepfakes and detection techniques. Ruan Jian Xue Bao/Journal of Software, 2021,32(2):496-518 (in Chinese with English abstract). [doi: 10.13328/j.cnki.jos.006140]
[3] Thodi D M, Rodríguez J J. Expansion embedding techniques for reversible watermarking. IEEE Transactions on Image Processing, 2007,16(3):721-730. [doi: 10.1109/TIP.2006.891046]
[4] Alattar A M. Reversible watermark using the difference expansion of a generalized integer transform. IEEE Transactions on Image Processing, 2004,13(8):1147-1156. [doi: 10.1109/TIP.2004.828418]
[5] Ye J, Ni J, Yi Y. Deep learning hierarchical representations for image steganalysis. IEEE Transactions on Information Forensics and Security, 2017,12(11):2545-2557. [doi: 10.1109/TIFS.2017.2710946]
[6] Chen J F, Fu Z J, Zhang W M, Cheng X, Sun X M.
Steganalysis based on deep learning: A review. Ruan Jian Xue Bao/Journal of Software, 2021,32(2):551-578 (in Chinese with English abstract). [doi: 10.13328/j.cnki.jos.006135]
[7] Ni Z C, Shi Y Q, Ansari N, Su W. Reversible data hiding. IEEE Transactions on Circuits and Systems for Video Technology, 2006,16(3):354-362. [doi: 10.1109/TCSVT.2006.869964]
[8] Shi Y Q, Li X, Zhang X, Wu H, Ma B. Reversible data hiding: Advances in the past two decades. IEEE Access, 2016,4:3210-3237. [doi: 10.1109/ACCESS.2016.2573308]
[9] Zhang W, Hu X, Li X, Yu N. Recursive histogram modification: Establishing equivalency between reversible data hiding and lossless data compression. IEEE Transactions on Image Processing, 2013,22(7):2775-2785. [doi: 10.1109/TIP.2013.2257814]
[10] Qin C, Hu Y C. Reversible data hiding in VQ index table with lossless coding and adaptive switching mechanism. Signal Processing, 2016,129:48-55. [doi: 10.1016/j.sigpro.2016.05.032]
[11] Tian J. Reversible data embedding using a difference expansion. IEEE Transactions on Circuits and Systems for Video Technology, 2003,13(8):890-896. [doi: 10.1109/TCSVT.2003.815962]
[12] Kim H J, Sachnev V, Shi Y Q, Nam J, Choo H G. A novel difference expansion transform for reversible data embedding. IEEE Transactions on Information Forensics and Security, 2008,3(3):456-465. [doi: 10.1109/TIFS.2008.924600]
[13] Wang J, Chen X, Ni J, Mao N, Shi Y. Multiple histograms-based reversible data hiding: Framework and realization. IEEE Transactions on Circuits and Systems for Video Technology, 2019,30(8):2313-2328. [doi: 10.1109/TCSVT.2019.2915584]
[14] Chen Y C, Hung T H, Hsieh S H, Shiu C W. A new reversible data hiding in encrypted image based on multi-secret sharing and lightweight cryptographic methods. IEEE Transactions on Information Forensics and Security, 2019,14(12):3332-3343. [doi: 10.1109/TIFS.2019.2914557]
[15] Huang F, Huang J, Shi Y Q. New framework for reversible data hiding in encrypted domain.
IEEE Transactions on Information Forensics and Security, 2016,11(12):2777-2789. [doi: 10.1109/TIFS.2016.2598528]
[16] Zhang X, Long J, Wang Z, Cheng H. Lossless and reversible data hiding in encrypted images with public-key cryptography. IEEE Transactions on Circuits and Systems for Video Technology, 2015,26(9):1622-1631. [doi: 10.1109/TCSVT.2015.2433194]
[17] Xiang S J, Luo X R. Reversible data hiding in encrypted image based on homomorphic public key cryptosystem. Ruan Jian Xue Bao/Journal of Software, 2016,27(6):1592− "Break our steganographic system"