Gabor wavelets combined with volumetric fractal dimension applied to texture analysis
Álvaro Gomez Z., João B. Florindo, and Odemir M. Bruno
(Dated: August 26, 2018)

Texture analysis and classification remain one of the biggest challenges for the field of computer vision and pattern recognition. In this regard, Gabor wavelets have proven to be a useful technique to characterize distinctive texture patterns. However, most of the approaches used to extract descriptors from the Gabor magnitude space fail to adequately represent the richness of detail in a single feature vector. In this paper, we propose a new method that enhances the Gabor wavelets process by extracting a fractal signature from the magnitude spaces. Each signature is reduced using a canonical analysis function and concatenated to form the final feature vector. Experiments were conducted on several texture image databases to prove the power and effectiveness of the proposed method. The results show that it outperforms earlier methods, providing a more reliable technique for texture feature extraction.

I. INTRODUCTION
Texture analysis and classification have a huge variety of applications. Although widely studied, the problem remains open for research and is, in fact, one of the biggest challenges in computer vision and pattern recognition. The many methods for texture analysis can be grouped into four classes: (i) structural methods, where textures are described as a set of primitives; (ii) statistical methods, where textures are characterized by non-deterministic measures of distribution; (iii) model-based methods, where textures are described through mathematical and physical modeling; and (iv) spectral methods, based on analysis in the frequency domain, such as Fourier, cosine transform or wavelets. The last class contains one of the best known and most successful texture methods, the Gabor filter, for which a feature extraction enhancement is proposed in this work.

The Gabor filter was proposed by Dennis Gabor in 1946, and extended to 2D and applied to image textures by Daugman [1, 2] in the 1980s. The main motivation of Daugman's work was to model mathematically the receptive fields (the response of sets of neuronal cells) of the cortical cells in the primate brain. Beyond the biological motivation, the Gabor filter performs very well for texture processing and remains one of the best methods for texture analysis. The Gabor texture technique consists of the convolution of an image with several multi-scale and multi-orientation filters. Each convolution creates a transformed space, and feature extraction is performed in each space. Usually, the feature vector is composed by concatenating the energy measure of each convolved image [3]. This way, each convolved image is represented by a single statistical value that is far from adequately representing the rich information present in the Gabor space.
This issue has motivated research in the field and the proposal of this work. One of the simplest Gabor enhancements was proposed in [4–6], which uses other basic statistical descriptors that prove to work better than energy in some situations. Another approach is the use of the GLCM [7] applied over the convolved images to extract simple features, achieving good results. Tou et al. [8, 9] proposed a simple yet powerful method that calculates the covariance matrix of all the convolved images. More recently, the success of the LBP operator [10] in several computer vision fields motivated its adaptation to the Gabor process, yielding the best results found in the literature.

In addition, fractal dimension has been successfully used in texture feature extraction [11, 12]. Fractal descriptors represent the spatial relations between pixel intensities; even small changes between texture patterns produce significant changes in the signature. In this paper, we propose the use of volumetric fractal dimension to extract fractal descriptors of the Gabor convolved images, with canonical analysis to decorrelate the signature descriptors and reduce dimensionality. The introduced approach is validated on several texture image datasets, and the results are analyzed and compared against the best feature extraction methods for the Gabor space found in the literature.

The paper is split into 9 sections. The next section gives a short overview of the Gabor wavelets method. Section 3 presents a brief description of the different methods implemented to compare their performance against the proposed technique. Section 4 explains the volumetric fractal dimension method in detail. Section 5 presents the combination of Gabor wavelets with volumetric fractal dimension. Sections 6, 7 and 8 present the experiments conducted and the results obtained. Finally, Section 9 draws conclusions and future directions.

II. GABOR WAVELETS
Since the discovery and description of the visual cortex cells of mammals, our understanding of how the human brain processes texture has advanced enormously. Daugman [1, 2] showed that simple cells in the visual cortex can be modeled mathematically using Gabor functions. These functions [13] approximate cortex cells using a fixed Gaussian. Later, Daugman proposed a two-dimensional Gabor wavelet [14] for application in image processing, and it has been widely used in the field for its biological and mathematical properties. The 2D Gabor function is a local bandpass filter that achieves optimal localization in both the spatial and frequency domains and allows multi-resolution analysis by generating multiple kernels from a single core function.

The Gabor wavelets are generated by dilating and rotating a single kernel with a set of parameters. Based on this concept, we use the Gabor filter function as the kernel to generate a filter dictionary. The two-dimensional Gabor transform is a complex sine wave with frequency W modulated by a Gaussian function. Its forms in the space domain g(x, y) and the frequency domain G(u, v) are given by Eqs. 1 and 2:

g(x, y) = (1 / (2π σ_x σ_y)) exp[ −(1/2)(x²/σ_x² + y²/σ_y²) + 2πjWx ]   (1)

G(u, v) = exp{ −(1/2)[ (u − W)²/σ_u² + v²/σ_v² ] }   (2)

A self-similar filter dictionary can be obtained by dilating and rotating g(x, y) using the generating function proposed in [15]:

g_mn(x, y) = a^(−m) g(x′, y′)   (3)

where a > 1 and m, n are integer values that specify the scale and orientation respectively, with m = 0, 1, ..., M − 1 and n = 0, 1, ..., N − 1, where M is the total number of scales and N the total number of orientations. The coordinates x′ and y′ are defined by:

x′ = a^(−m) (x cos θ + y sin θ)   (4)

y′ = a^(−m) (−x sin θ + y cos θ)   (5)

where θ = nπ/N, and the scaling factor a^(−m) ensures that the energy is independent of m. The parameters necessary to generate the dictionary could be selected empirically. However, in [15], the authors present a suitable method to compose a filter dictionary that ensures maximum spectrum coverage with the lowest possible redundancy. Based on this approach, we use the following equations to obtain the ideal sigmas:

a = (U_h / U_l)^(1/(M−1))   (6)

σ_u = ((a − 1) U_h) / ((a + 1) √(2 ln 2))   (7)

σ_v = tan(π/(2N)) [ U_h − 2 ln(2σ_u²/U_h) ] / √( 2 ln 2 − (2 ln 2)² σ_u²/U_h² )   (8)

where W = U_h, and U_l and U_h represent the minimum and maximum central frequencies respectively.

III. GABOR DESCRIPTORS
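As a concrete illustration of the dictionary construction of Section II (Eqs. 1–8), the following is a minimal sketch in Python. The function and variable names are our own, the kernel support `size` is an illustrative choice, and the default central frequencies follow the customary Manjunath–Ma values [15]:

```python
import numpy as np

def gabor_filter_bank(M, N, Ul=0.05, Uh=0.4, size=31):
    """Self-similar Gabor dictionary (Eqs. 1-8): M scales, N orientations.

    Ul / Uh are the lower / upper central frequencies; `size` is an
    illustrative kernel support (not specified by the paper).
    """
    a = (Uh / Ul) ** (1.0 / (M - 1))                        # Eq. 6
    ln2 = np.log(2.0)
    su = (a - 1) * Uh / ((a + 1) * np.sqrt(2 * ln2))        # Eq. 7 (sigma_u)
    sv = (np.tan(np.pi / (2 * N))                           # Eq. 8 (sigma_v)
          * (Uh - 2 * np.log(2 * su ** 2 / Uh))
          / np.sqrt(2 * ln2 - (2 * ln2) ** 2 * su ** 2 / Uh ** 2))
    sx, sy = 1 / (2 * np.pi * su), 1 / (2 * np.pi * sv)     # spatial sigmas
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    bank = {}
    for m in range(M):
        s = a ** (-m)                                       # dilation factor
        for n in range(N):
            theta = n * np.pi / N
            xr = s * (xx * np.cos(theta) + yy * np.sin(theta))    # Eq. 4
            yr = s * (-xx * np.sin(theta) + yy * np.cos(theta))   # Eq. 5
            g = (np.exp(-0.5 * (xr ** 2 / sx ** 2 + yr ** 2 / sy ** 2)
                        + 2j * np.pi * Uh * xr)
                 / (2 * np.pi * sx * sy))                          # Eq. 1
            bank[(m, n)] = s * g                                   # Eq. 3
    return bank
```

For four scales and six orientations, `gabor_filter_bank(4, 6)` yields the 24 complex kernels whose convolutions with an image produce the Gabor images discussed next.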
The Gabor wavelet representation of an image is the convolution of the image with the entire filter dictionary. Formally, the convolution of an image I(x, y) with a Gabor wavelet ϕ_{f_u,m,n} (the results are named Gabor images in the rest of the paper) is defined as:

gi_{m,n}(x, y) = I(x, y) ∗ ϕ_{f_u,m,n}(x, y)   (9)

where ϕ_{f_u,m,n} denotes the Gabor wavelet with central frequency f_u, scale m and orientation n. The number of images generated depends on the number of scales and orientations used. For example, four scales and six orientations generate 24 Gabor images. The feature vector F is composed by extracting single or multiple features from each generated image using image descriptors. The general process is shown in Figure 1. A classical and simple approach to obtain the feature vector F is to calculate the energy of each Gabor image:

F = [ e(gi_{0,0}), e(gi_{0,1}), ..., e(gi_{0,n}), e(gi_{1,0}), e(gi_{1,1}), ..., e(gi_{m,n}) ]   (10)

where e = ∫∫ f(x, y)² dx dy [1]. Although largely used in the literature, this approach does not capture the information of the Gabor images efficiently, which has motivated the development of methods to extract that information more effectively. In the following subsections, a brief overview of the most important such methods found in the literature is presented.

FIG. 1. General scheme used to extract features from the convolved images.

The non-orthogonal Gabor filters produce different effects depending on the texture characteristics; there is no ideal combination of parameters that ensures maximum performance. While the work presented in [15] helps reduce the redundancy of the filters, parameters such as the scales, orientations and central frequencies are still determined empirically. Variations of the central frequencies seem to have a low impact on the results; they are fixed to 0.05 and 0.4. The scale and orientation combinations evaluated are 2 x 6, 3 x 4, 3 x 5, 4 x 4, 4 x 6, 5 x 5, 6 x 3 and 6 x 6.

A. Descriptors based on first order statistics
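Before turning to the individual descriptors, the basic pipeline of Eqs. 9–10 (convolution with the dictionary, then one energy value per magnitude image) can be sketched as follows. This is an illustrative implementation assuming scipy; the two hand-made kernels in the demo are stand-ins for a real filter dictionary:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_images(image, kernels):
    """Eq. 9: convolve the image with each filter; keep the magnitude
    of the complex response as the 'Gabor image'."""
    return [np.abs(fftconvolve(image, k, mode='same')) for k in kernels]

def energy_vector(imgs):
    """Eq. 10: collapse each Gabor image into a single energy value."""
    return np.array([np.sum(g ** 2) for g in imgs])

# Toy demo with two hand-made complex kernels standing in for the dictionary.
yy, xx = np.mgrid[-7:8, -7:8].astype(float)
k0 = np.exp(-(xx ** 2 + yy ** 2) / 8.0 + 2j * np.pi * 0.2 * xx)
k1 = np.exp(-(xx ** 2 + yy ** 2) / 8.0 + 2j * np.pi * 0.2 * yy)
img = np.random.default_rng(0).random((64, 64))
F = energy_vector(gabor_images(img, [k0, k1]))
```

All the descriptor families below replace `energy_vector` with richer measurements over the same magnitude images.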
Let f(x, y) be a grayscale image with dimensions x = 0, 1, ..., W − 1 and y = 0, 1, ..., H − 1, where W and H are the image width and height respectively. The possible intensity values that f(x, y) can take are i = 0, 1, ..., G − 1, where G is the number of intensity values. The histogram is a function giving the number of pixels for each possible grayscale intensity value:

h(i) = Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} δ(f(x, y), i)   (11)

where δ(j, i) is the binary function defined by:

δ(j, i) = 1 if j = i; 0 otherwise   (12)

The image histogram has the power to represent a large set of values in a single measure that reflects a specific property of the distribution. To compute descriptors, we use a histogram representation based on a probability density function given by:

p(i) = h(i) / (W H),  i = 0, 1, ..., G − 1   (13)

p(i) is a one-dimensional vector that holds important information, later extracted using distribution measures such as energy, mean, variance, etc. The most common approach to extract features in Gabor wavelet methods is energy-based descriptors. Some recent approaches use other types of descriptors to obtain more useful information from each image. Since each extractor generates a single value per image, the final representation is an (M x N)-dimensional feature vector.

The best first-order statistics found in the literature are used in the experiments: energy (Eq. 14), variance (Eq. 15) and percentil75 (Eq. 16), following their implementation in [34]. According to Figure 1, the extractors are applied directly over the magnitude space.

E = Σ_{i=0}^{G−1} [p(i)]²   (14)

V = Σ_{i=0}^{G−1} (i − u)² p(i)   (15)

P = p_ord(⌈0.75 (G − 1)⌉)   (16)

where p_ord is the ascending sorted vector of p and u = Σ_{i=0}^{G−1} i p(i).

B. Descriptors based on GLCM features
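The first-order descriptors above (Eqs. 11–16) can be sketched as follows. The quantization of the real-valued Gabor magnitudes to G gray levels is an assumption of this sketch (the paper does not state its binning), and Eq. 16 is taken in its literal sorted-vector reading:

```python
import numpy as np

def first_order_descriptors(gi, G=256):
    """Histogram descriptors of one Gabor magnitude image (Eqs. 11-16)."""
    # quantize the magnitudes to G gray levels (an assumption: the paper
    # does not state how the real-valued magnitudes are binned)
    q = np.clip(gi / (gi.max() + 1e-12) * (G - 1), 0, G - 1).astype(int)
    h = np.bincount(q.ravel(), minlength=G)        # Eq. 11: histogram
    p = h / q.size                                 # Eq. 13: probabilities
    i = np.arange(G)
    u = np.sum(i * p)                              # mean gray level
    energy = np.sum(p ** 2)                        # Eq. 14
    variance = np.sum((i - u) ** 2 * p)            # Eq. 15
    perc75 = np.sort(p)[int(np.ceil(0.75 * (G - 1)))]  # Eq. 16, literal reading
    return energy, variance, perc75
```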
Second-order statistics derived from the gray-level co-occurrence matrix (GLCM) are a better representation of how humans perceive texture patterns [5, 6]. The approach has proven very successful in many kinds of texture feature extraction problems. GLCM features capture information regarding higher-frequency components of the texture. The co-occurrence matrix represents the histogram of the number of occurrences of gray-level pair values when a pixel-neighborhood rule is applied. Formally, the GLCM h_{dθ}(i, j) represents the frequency of appearance of two pixels with gray-level values i and j, separated by a distance d at orientation θ, in an image f(x, y):

f(x₁, y₁) = i and f(x₂, y₂) = j   (17)

where

(x₂, y₂) = (x₁, y₁) + (d cos θ, d sin θ)   (18)

For each d and θ, a square matrix is created with dimension equal to the number of grayscale values present in the image; due to the computational cost, only a few values of d and θ are used.

The research presented in [5] shows the finest combination of Gabor filters and gray-level co-occurrence matrix features. According to [5], the following three basic statistical descriptors are the best second-order statistics extracted from the GLCM obtained after processing the Gabor images:

Ent = −Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} p(i, j) log[p(i, j)]   (19)

Con = Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} (i − j)² p(i, j)   (20)

Cor = [ Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} i j p_{dθ}(i, j) − μ_x μ_y ] / (σ_x σ_y)   (21)

C. Descriptors based on covariance matrix features
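A sketch of the GLCM descriptors above (Eqs. 17–21); the gray-level quantization and the distance/orientation defaults are illustrative assumptions, not values fixed by the paper:

```python
import numpy as np

def glcm(img, d=1, theta=0.0, G=8):
    """Normalised co-occurrence matrix h_{d,theta} (Eqs. 17-18)."""
    dx = int(round(d * np.cos(theta)))
    dy = int(round(d * np.sin(theta)))
    q = np.clip(img / (img.max() + 1e-12) * (G - 1), 0, G - 1).astype(int)
    H, W = q.shape
    h = np.zeros((G, G))
    for y in range(max(0, -dy), min(H, H - dy)):
        for x in range(max(0, -dx), min(W, W - dx)):
            h[q[y, x], q[y + dy, x + dx]] += 1     # count the pair (i, j)
    return h / h.sum()

def glcm_features(p):
    """Entropy, contrast and correlation of a normalised GLCM (Eqs. 19-21)."""
    i, j = np.indices(p.shape)
    ent = -np.sum(p * np.log(p + 1e-12))           # Eq. 19
    con = np.sum((i - j) ** 2 * p)                 # Eq. 20
    g = np.arange(p.shape[0])
    px, py = p.sum(1), p.sum(0)                    # marginal distributions
    ux, uy = np.sum(g * px), np.sum(g * py)
    sx = np.sqrt(np.sum((g - ux) ** 2 * px))
    sy = np.sqrt(np.sum((g - uy) ** 2 * py))
    cor = (np.sum(i * j * p) - ux * uy) / (sx * sy + 1e-12)  # Eq. 21
    return ent, con, cor
```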
The covariance matrix is a statistical construction that represents the covariance between values. Applied to images, it reflects important features of heterogeneous images while achieving considerable dimensionality reduction. A covariance matrix can be represented as:

C_R = (1/(n − 1)) Σ_{k=1}^{n} (z_k − u)(z_k − u)^T   (22)

where z_k represents a feature point and u the mean of the n feature points. For fast computation, the integral image technique is used [16], with the tensors P and Q defined over the feature tensor F(x, y, i) by:

P(x′, y′, i) = Σ_{x<x′, y<y′} F(x, y, i)   (23)

Q(x′, y′, i, j) = Σ_{x<x′, y<y′} F(x, y, i) F(x, y, j)   (24)

A set of 24 Gabor images processed with the covariance matrix generates a feature vector of size 300.

D. Descriptors based on local binary pattern features

Some of the latest work on Gabor signatures involves descriptors based on local binary patterns. The original LBP operator [10] labels the pixels of an image by thresholding each 3 x 3 neighborhood f_p (p = 0, 1, 2, ..., 7) with the center value f_c and considering the result as a binary number, according to:

S(f_p − f_c) = 1 if f_p ≥ f_c; 0 if f_p < f_c   (26)

Then, by assigning a binomial factor 2^p to each S(f_p − f_c), the LBP pattern for each pixel is obtained as:

LBP = Σ_{p=0}^{7} S(f_p − f_c) 2^p   (27)

In [17], the LBP operator is applied to each pixel of the Gabor images to generate an LGBP map (Local Gabor Binary Map) G_lgbp(x, y, u, v); the concatenation of the histograms of each Gabor image is used as the feature vector. In [18], a volumetric approach is taken by considering all the Gabor images as a 3D volume and performing the LBP calculation in 3D space. Here, the local binary pattern is applied to the Gabor images according to [17]. A 4-neighbourhood is used to reduce the size of the histogram, since a 4-neighbourhood allows a maximum of 16 possible values on the LBP map R.
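The 4-neighbourhood LBP variant just described can be sketched as follows (the choice and ordering of the four axial neighbours is an illustrative assumption):

```python
import numpy as np

def lbp4_map(img):
    """4-neighbourhood LBP (Eqs. 26-27 restricted to the 4 axial
    neighbours), giving 2**4 = 16 possible codes per pixel."""
    c = img[1:-1, 1:-1]                       # centre pixels f_c
    neigh = (img[:-2, 1:-1], img[1:-1, 2:],   # up, right,
             img[2:, 1:-1], img[1:-1, :-2])   # down, left: the f_p
    code = np.zeros(c.shape, dtype=int)
    for p, f in enumerate(neigh):
        code += (f >= c).astype(int) << p     # S(f_p - f_c) * 2**p
    return code

def lbp_histogram(img):
    """Eq. 29: histogram of the LBP map of one Gabor image."""
    return np.bincount(lbp4_map(img).ravel(), minlength=16)
```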
The final feature vector is composed of the concatenation of the histograms of each Gabor image:

H = [ h_{0,0}, h_{0,1}, ..., h_{0,n}, h_{1,0}, ..., h_{m,n} ]   (28)

where m, n give the number of scales and orientations used in the Gabor process and each histogram bin is:

h_{m,n,i} = Σ_{x,y ∈ R} ( IG_glbp(x, y, u, v) = i )   (29)

IV. THE PROPOSED METHOD

A. Volumetric fractal dimension

The fractal concept was first used by Mandelbrot in his book [19]. It states that natural objects cannot be described using Euclidean geometry, but rather by persistent self-repeating patterns. In recent years the concept has been applied to image analysis [11, 12, 20, 21]. To adapt it to images, it is necessary to use a measure that captures fractal properties of non-fractal objects in discrete environments. For this purpose, the fractal dimension of an image is used to describe how self-repetitive the objects contained in the image are. Under this concept, several types of images can be analyzed. An approach for grayscale images called volumetric fractal dimension (VFD), proposed in [12, 20, 22], has proven to be a very effective fractal descriptor. In [23], the authors successfully demonstrated the power of VFD to describe the Gabor images. In the present work, we take a different approach to reduce and de-correlate the fractal signatures, in order to improve their descriptive power and reduce dimensionality.

Let gi_{m,n}(x, y) be a Gabor image from Eq. 9. The three-dimensional representation necessary to compute the VFD is the surface S(x, y, z) ∈ R³, where (x, y) are the spatial coordinates of the image and z is the gray-level intensity. This surface S is dilated by a sphere of radius r, and the influence volume V(r) of the dilated surface is calculated for each value of r:

V(r) = { p′ ∈ R³ | ∃ p ∈ S : |p − p′| ≤ r }   (30)

where p′ = (x′, y′, z′) is a point of R³ whose distance from p = (x, y, z) is smaller than or equal to r.
As r grows, the spheres start to intersect each other, producing variations in the computed volume. This property makes VFD very sensitive to even small changes in the texture pattern. Each expansion of r generates a single volume measure; therefore, the values that r takes must reflect each possible state of the expansion without redundancy. To reduce the computational cost of the volume computation, we apply an exact three-dimensional Euclidean distance transform (EDT) algorithm [24] over the surface. The EDT computes the distance from every voxel of R³ to its closest voxel p ∈ S. The most suitable way to obtain the set of radii used to expand the surface is to use all the possible Euclidean distances up to a maximum radius:

E = { 1, √2, √3, ..., r_max }   (31)

The fractal dimension can be estimated as:

D = 3 − lim_{r→0} log(V(r)) / log(r)   (32)

The fractal signature (or fractal descriptors) is composed of the logarithm of each volume:

F = [ log V(1), log V(√2), log V(√3), ..., log V(r_max) ]   (33)

The parameters used in the feature extraction are based on previous research presented in [12, 22], where the expansion radius for the volumetric fractal dimension is set to 16. The number of canonical variables used is based on the percentage of representation of the i-th most important canonical variables; the volumetric fractal dimension signature tends to be 99.90% described with only 10 canonical variables. Figure 2 shows the process.

B. Canonical discriminant analysis

Canonical discriminant analysis (CDA) is a dimension reduction technique closely related to principal component analysis. Its purpose is to find linear combinations of quantitative variables that provide maximal separation between classes [25].
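Returning briefly to the fractal signature of Section IV A, Eqs. 30–33 can be sketched with an exact EDT as follows. This is a simplified illustration under stated assumptions: gray levels are quantized, the grid is padded only along the intensity axis, and scipy's `distance_transform_edt` stands in for the EDT of [24]:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def vfd_signature(gi, r_max=16, levels=256):
    """Volumetric fractal descriptors (Eqs. 30-33).

    The image becomes a surface in a 3-D grid; an exact EDT gives every
    voxel's distance to the surface, so V(r) is a count of voxels within
    distance r. Quantisation and z-only padding are simplifications.
    """
    g = np.clip(gi / (gi.max() + 1e-12) * (levels - 1), 0, levels - 1).astype(int)
    H, W = g.shape
    vol = np.ones((H, W, levels + 2 * r_max), dtype=bool)
    vol[np.arange(H)[:, None], np.arange(W)[None, :], g + r_max] = False  # surface S
    d2 = np.round(distance_transform_edt(vol) ** 2).astype(int)  # exact squared EDT
    # Eq. 31: every attainable squared Euclidean distance up to r_max^2
    radii2 = sorted({a * a + b * b + c * c
                     for a in range(r_max + 1)
                     for b in range(r_max + 1)
                     for c in range(r_max + 1)} - {0})
    radii2 = [r2 for r2 in radii2 if r2 <= r_max ** 2]
    # Eq. 33: log of the dilation volume V(r) for each radius
    return np.array([np.log(np.count_nonzero(d2 <= r2)) for r2 in radii2])
```

Working with squared distances keeps the radius comparisons exact, since every attainable EDT distance is the square root of an integer.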
These linear combinations have the power of producing a reduced number of independent features, also called canonical variables. The total dispersion among the feature vectors is defined as:

S = Σ_{i=1}^{N} (ϕ⃗_i − M⃗)(ϕ⃗_i − M⃗)′   (34)

where ϕ⃗_i is the feature vector of sample i,

ϕ⃗_i = F(x_i)   (35)

and M⃗ is the global mean feature vector:

M⃗ = ( Σ_{x=1}^{N} F(x) ) / N   (36)

The matrix S_i, indicating the dispersion of the objects within class i, is defined as:

S_i = Σ_{ϕ⃗ ∈ C_i} (ϕ⃗ − u⃗_i)(ϕ⃗ − u⃗_i)′   (37)

where u⃗_i is the mean feature vector of the objects in class i:

u⃗_i = ( Σ_{ϕ⃗ ∈ C_i} ϕ⃗ ) / N_i   (38)

The intraclass variability S_intra, the combined dispersion within each class, is defined by:

S_intra = Σ_{i=1}^{K} S_i   (39)

The interclass variability S_inter, the dispersion of the classes in terms of their centroids, is defined by:

S_inter = Σ_{i=1}^{K} N_i (u⃗_i − M⃗)(u⃗_i − M⃗)′   (40)

where K is the number of classes and N_i the number of samples in class i. Finally, the total variability is:

S = S_intra + S_inter   (41)

To obtain the principal components we use the approach taken in [26]:

C = S_inter S_intra⁻¹   (42)

The i-th canonical discriminant function is given by:

Z_i = a_{i1} X_1 + a_{i2} X_2 + ... + a_{ip} X_p   (43)

where p is the number of features and the a_{ij} are the sorted eigenvectors of C, with a_1 the most significant eigenvector. This definition yields uncorrelated features Z_i; retaining i < p of them reduces the dimensionality of the dataset.

FIG. 2. (a) An image taken from the Brodatz database; (b) image with expansion r = 2; (c) r = 5; (d) r = 7; (e) r = 9.

C. Proposed Signature

Let gi_{m,n}(x, y) be the convolved image from Eq. 9.
Let e = [1, √2, √3, ..., r_max] be the set of Euclidean distances up to a radius r_max. The volumetric fractal dimension signature of each Gabor image gi_{m,n}(x, y) is defined by:

ω_{m,n}(z) = { VFD(gi_{m,n}(x, y), r) | ∀ r ∈ e }   (44)

where r is a radius from the vector e and ω_{m,n}(z) is a vector that contains the fractal signature of the Gabor image with scale m and orientation n. A canonical analysis function is then applied to de-correlate the signature descriptors, and N principal components are selected:

φ_{m,n}(z) = { λ(ω_{m,n}, N) }   (45)

where λ(ω_{m,n}, N) denotes the N principal components of ω_{m,n}. Finally, the image feature vector F consists of the concatenation of the principal components previously computed:

F = [ φ_{0,0}(1), φ_{0,0}(2), ..., φ_{0,0}(z), φ_{0,1}(1), φ_{0,1}(2), ..., φ_{0,1}(z), ..., φ_{m,n}(z) ]   (46)

V. EVALUATION STRATEGY

Image databases: For experimentation purposes, we used five different image databases. All the related methods and the proposed method are tested on each database. The databases were selected based on how often they are used in the related literature to validate feature extraction methods, and the selection covers different levels of classification difficulty and reported results. The selected databases were:

• Brodatz: Obtained from [27], it contains 111 grayscale textures, each with 640 x 640 pixels. To generate a database with an appropriate number of samples per class, we took 10 non-overlapping random windows of 200 x 200 pixels from each texture; hence, the database used contains 1110 images in 111 classes with 10 images per class.

• KTH-TIPS2: Obtained from [28]; the "2b" version was selected. It contains 11 grayscale textures, each with 108 samples of 200 x 200 pixels.
• Outex texture classification test suites: Obtained from [29]; three suites of the Outex framework were selected (referred to as Outex 5, Outex 14 and Outex 16 in the results).

Classification: With the extracted features, it is possible to perform class separation using a statistical classifier. We chose the naive Bayes classifier [30], a simple probabilistic classifier based on Bayes' theorem. This classifier uses an independent feature model, where the presence or absence of a particular feature of a class is unrelated to the presence or absence of any other feature; in simple terms, it assumes conditional independence among attributes. Despite its over-simplified assumptions, this classifier has worked very well on real-world datasets, even when the attribute-independence hypothesis is violated [31, 32].

Formally, the probability of an observation E = (x₁, x₂, ..., x_n) being of class c is:

p(c | E) = p(E | c) p(c) / p(E)   (47)

E is assigned to the class C = + if:

f_b(E) = p(C = + | E) / p(C = − | E) ≥ 1   (48)

f_b(E) is called a Bayesian classifier. Based on the attribute-independence hypothesis we can write:

p(E | c) = p(x₁, x₂, ..., x_n | c) = Π_{i=1}^{n} p(x_i | c)   (49)

The resulting naive Bayes classifier can be defined as:

f_nb(E) = [ p(C = +) / p(C = −) ] Π_{i=1}^{n} [ p(x_i | C = +) / p(x_i | C = −) ]   (50)

Even though the naive Bayes classifier still does a good job with non-independent features, it is not appropriate for highly correlated features. To address this, we apply the canonical discriminant analysis function over the dataset to remove correlations; this maximizes the separation between classes and reduces the dimensionality of the dataset.

VI. EXPERIMENTAL RESULTS

The results obtained for each image database are presented in this section.
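For reference, the naive Bayes decision rule of Eqs. 47–50 can be sketched as below. Gaussian class-conditional likelihoods are an assumption of this sketch; the paper does not state the likelihood model used for the continuous features:

```python
import numpy as np

class GaussianNaiveBayes:
    """Naive Bayes under the attribute-independence hypothesis (Eqs. 47-50),
    with per-class Gaussian likelihoods (an assumption of this sketch)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.prior = {c: np.mean(y == c) for c in self.classes}        # p(c)
        self.mu = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.var = {c: X[y == c].var(axis=0) + 1e-9 for c in self.classes}
        return self

    def _log_posterior(self, x, c):
        # log p(c) + sum_i log p(x_i | c)  --  Eq. 49 in log space
        ll = -0.5 * (np.log(2 * np.pi * self.var[c])
                     + (x - self.mu[c]) ** 2 / self.var[c])
        return np.log(self.prior[c]) + ll.sum()

    def predict(self, X):
        # arg max over classes of the posterior, equivalent to the
        # ratio tests of Eqs. 48 and 50
        return np.array([max(self.classes, key=lambda c: self._log_posterior(x, c))
                         for x in X])
```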
Each table shows the rate of correct classifications; all the techniques implemented for comparison are run against all the image databases. Table I shows the results obtained for the full Brodatz database, where the best results among the compared methods are obtained by Gabor+LBP and Gabor+Percentil75. For Outex 14, the Gabor+Covariance method obtains 61.23%, while the best overall result reported for Outex 14 is 69%. Finally, Table V shows the results obtained for the Outex 16 classification suite, where the proposed method obtains 77.02% and the Gabor+Covariance method obtains 69%.

VII. CONCLUSIONS

We have presented a novel technique that improves Gabor wavelets for extracting features from texture images. The effectiveness of the method is demonstrated by various experiments; the proposed method obtained the best results on all the image databases used. Texture feature extraction is a difficult task that has been widely addressed, but most of the approaches found in the literature focus only on a short range of texture conditions. The variability of the results of the compared methods shows their weakness when the image datasets present great intra-class variability, a wide range of texture types and variations in the capture conditions. Different image datasets were selected with the purpose of presenting consistent results; however, as shown, most of the related methods only work well on one image dataset. Moreover, the variability of the results of each compared method on a single dataset shows their sensitivity to the Gabor wavelets parameters. In this regard, the proposed method performs consistently in all experiments, showing a clear independence between the two combined methods and a successful conjunction to obtain rich texture features.

ACKNOWLEDGMENTS

A. Gomez Z.
gratefully acknowledges the financial support of FAPESP (The State of Sao Paulo Research Foundation), Proc. 2009/04362. J. B. Florindo gratefully acknowledges the financial support of FAPESP, Proc. 2012/19143-3. O. M. Bruno gratefully acknowledges the financial support of CNPq (National Council for Scientific and Technological Development, Brazil) (Grant

VIII. REFERENCES

[1] J. G. Daugman, Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters, Journal of the Optical Society of America A: Optics, Image Science, and Vision 2 (7) (1985) 1160–1169.
[2] J. Daugman, Two-dimensional spectral analysis of cortical receptive field profiles, Vision Research 20 (10) (1980) 847–856.
[3] O. Rajadell, P. García-Sevilla, F. Pla, Scale analysis of several filter banks for color texture classification, in: Proceedings of the 5th International Symposium on Advances in Visual Computing: Part II, ISVC '09, Springer-Verlag, Berlin, Heidelberg, 2009, pp. 509–518.
[4] P. Bandzi, M. Oravec, J. Pavlovicova, New statistics for texture classification based on Gabor filters, Radioengineering 16 (3) (2007) 133–137.
[5] D. Clausi, H. Deng, Design-based texture feature fusion using Gabor filters and co-occurrence probabilities, IEEE Transactions on Image Processing 14 (7) (2005) 925–936.
[6] F. Shahabi, M. Rahmati, Comparison of Gabor-based features for writer identification of Farsi/Arabic handwriting, in: G. Lorette (Ed.), Tenth International Workshop on Frontiers in Handwriting Recognition, Université de Rennes 1, Suvisoft, La Baule (France), 2006.
[7] R. M. Haralick, K. Shanmugam, I. Dinstein, Textural features for image classification, IEEE Transactions on Systems, Man and Cybernetics SMC-3 (6) (1973) 610–621.
[8] J. Y. Tou, Y. H. Tay, P. Y. Lau, Gabor filters and grey-level co-occurrence matrices in texture classification, in: MMU International Symposium on Information and Communications Technologies, Petaling Jaya, 2007.
[9] J. Y. Tou, Y. H. Tay, P. Y. Lau, Advances in Neuro-Information Processing, Springer-Verlag, Berlin, Heidelberg, 2009, Ch. Gabor Filters as Feature Images for Covariance Matrix on Texture Classification Problem, pp. 745–751.
[10] T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7) (2002) 971–987.
[11] A. R. Backes, O. M. Bruno, Fractal and multi-scale fractal dimension analysis: a comparative study of Bouligand-Minkowski method, arXiv:1201.3153v1.
[12] A. R. Backes, O. M. Bruno, Plant leaf identification using multi-scale fractal dimension, in: International Conference on Image Analysis and Processing, 2009, pp. 143–150.
[13] D. Gabor, Theory of communication, J. Inst. Elect. Eng. 93 (1946) 429–457.
[14] J. Daugman, How iris recognition works, IEEE Transactions on Circuits and Systems for Video Technology 14 (1) (2004) 21–30.
[15] B. Manjunath, W. Ma, Texture features for browsing and retrieval of image data, IEEE Transactions on Pattern Analysis and Machine Intelligence 18 (8) (1996) 837–842.
[16] O. Tuzel, F. Porikli, P. Meer, Region covariance: A fast descriptor for detection and classification, in: A. Leonardis, H. Bischof, A. Pinz (Eds.), Computer Vision - ECCV 2006, Vol. 3952 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2006, pp. 589–600.
[17] W. Zhang, S. Shan, W. Gao, X. Chen, H. Zhang, Local Gabor binary pattern histogram sequence (LGBPHS): a novel non-statistical model for face representation and recognition, in: Tenth IEEE International Conference on Computer Vision (ICCV 2005), Vol. 1, 2005, pp. 786–791.
[18] S. Xie, S. Shan, X. Chen, W. Gao, V-LGBP: Volume based local Gabor binary patterns for face representation and recognition, in: 19th International Conference on Pattern Recognition (ICPR 2008), 2008, pp. 1–4.
[19] B. B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman, New York, 1983.
[20] A. R. Backes, D. Casanova, O. M. Bruno, Plant leaf identification based on volumetric fractal dimension, International Journal of Pattern Recognition and Artificial Intelligence 23 (6) (2009) 1145–1160.
[21] O. M. Bruno, R. de Oliveira Plotze, M. Falvo, M. de Castro, Fractal dimension applied to plant identification, Information Sciences 178 (12) (2008) 2722–2733.
[22] A. R. Backes, J. J. de M. Sá Junior, R. M. Kolb, O. M. Bruno, Plant species identification using multi-scale fractal dimension applied to images of adaxial surface epidermis, in: International Conference on Computer Analysis of Images and Patterns, 2009, pp. 680–688.
[23] A. G. Zuniga, O. M. Bruno, Enhancing Gabor wavelets using volumetric fractal dimension, in: I. Bloch, R. M. Cesar (Eds.), Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Vol. 6419 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2010, pp. 362–369.
[24] R. Fabbri, L. da Fontoura Costa, J. C. Torelli, O. M. Bruno, 2D Euclidean distance transform algorithms: A comparative survey, ACM Computing Surveys 40 (1).
[25] G. J. McLachlan, Discriminant Analysis and Statistical Pattern Recognition (Wiley Series in Probability and Statistics), Wiley-Interscience, 2004.
[26] D. Rossatto, D. Casanova, R. Kolb, O. M. Bruno, Fractal analysis of leaf-texture properties as a tool for taxonomic and identification purposes: a case study with species from neotropical Melastomataceae (Miconieae tribe), Plant Systematics and Evolution 291 (1-2) (2011) 103–116.
[27] P. Brodatz, Textures, a Photographic Album for Artists and Designers, New York: Dover, 1966.
[28] M. Fritz, E. Hayman, B. Caputo, J.-O. Eklundh, The KTH-TIPS and KTH-TIPS2 image databases (Dec. 2012). URL
[29] T. Ojala, T. Mäenpää, M. Pietikäinen, J. Viertola, J. Kyllönen, S. Huovinen, Outex - new framework for empirical evaluation of texture analysis algorithms, in: Proc. 16th International Conference on Pattern Recognition, Quebec, Canada, 2002, 1:701–706.
[30] T. Mitchell, Machine Learning (McGraw-Hill International Edit), 1st Edition, McGraw-Hill Education (ISE Editions), 1997.
[31] L. I. Kuncheva, On the optimality of naive Bayes with dependent binary features, Pattern Recognition Letters 27 (7) (2006) 830–837.
[32] P. Domingos, M. Pazzani, Beyond independence: Conditions for the optimality of the simple Bayesian classifier, in: Machine Learning, Morgan Kaufmann, 1996, pp. 105–112.

TABLE I. Results for the Brodatz image database. Rows: Gabor + Energy, Variance, Percentil75, LBP, Covariance, GLCM, and Enhanced Fractal; columns: scales x orientations 2 x 6, 3 x 4, 3 x 5, 4 x 4, 4 x 6, 5 x 5, 6 x 3 and 6 x 6. Tables II–IV, for the remaining databases, follow the same layout.

TABLE V. Results for the Outex test suite 16 database, with the same rows and columns; the Enhanced Fractal (proposed) method reaches 77.02.