A Novel Bio-Inspired Texture Descriptor based on Biodiversity and Taxonomic Measures
Steve Tsham Mpinda Ataky∗, Alessandro Lameiras Koerich

École de Technologie Supérieure, Université du Québec, 1100 rue Notre-Dame Ouest, H3C 1K3, Montréal, QC, Canada

Abstract
Texture can be defined as a variation of image intensity that forms repetitive patterns, resulting from physical properties of the object's surface, such as roughness, or from differences in light reflection on the surface. Considering that texture forms a complex system of patterns in a non-deterministic way, biodiversity concepts can help characterize it. In this paper, we propose a novel approach capable of quantifying such a complex system of diverse patterns through species diversity and richness, and taxonomic distinctiveness. The proposed approach considers each image channel as a species ecosystem and computes species diversity and richness measures, as well as taxonomic measures, to describe the texture. It takes advantage of the invariance characteristics of ecological patterns to build a descriptor that is invariant to permutation, rotation, and translation. Experimental results on three datasets of natural texture images and two datasets of histopathological images show that the proposed texture descriptor has advantages over several texture descriptors and deep methods.
Keywords:
Texture Classification, Texture Characterization, Species Richness, Taxonomic Distinctiveness, Phylogenetic Indices, Species Abundance.
1. Introduction
Texture is an important descriptor that has been used in several image analysis [1] and computer vision [2] applications, such as agriculture [3], recognition of facial expressions [4], object recognition [5], medical image analysis [6], music genre classification [7], remote sensing [8], material [9] and surface [10] recognition, among others. Texture analysis aims at establishing the neighborhood relationship of the texture elements and their position with respect to the others (connectivity), the number of elements per spatial unit (density), and their regularity (homogeneity).

∗ Corresponding author
Email addresses: [email protected] (Steve Tsham Mpinda Ataky), [email protected] (Alessandro Lameiras Koerich)
Texture descriptors developed to characterize image textures by and large fall into statistical methods and geometric methods [11]. The former aim at discovering to what extent some image properties related to texture are distributed, and then derive numerical texture measures from such distributions. The latter, in turn, generally investigate various sorts of periodicity in an image and characterize a texture by the relative spectral energy at different periodicities.

Several approaches for extracting texture information have been developed in the last three decades, such as the gray-level co-occurrence matrix (GLCM) [12], Haralick descriptors [13], local binary patterns (LBP) [14], wavelet transforms [15], Markov random fields [16], Gabor texture discriminators [17], local phase quantization [18], local tetra patterns [19], binarized statistical image features [20], and fractal models [21]. A review of most of these approaches can be found in Simon and Uma [22] and Liu et al. [23]. Recently, researchers have focused their attention on convolutional neural networks (CNNs) due to their effectiveness in object detection and recognition tasks. However, the shape information extracted by CNNs is of minor importance in texture analysis [24]. Andrearczyk and Whelan [24] developed a simple texture CNN (T-CNN) architecture for analyzing texture images, which pools an energy measure at the last convolution layer and discards the overall shape information analyzed by classic CNNs. Despite the promising results achieved by T-CNN, the trade-off between accuracy and complexity is not so favorable to CNNs. Other CNN architectures have also achieved moderate results on texture classification [25–27].

Even if most of the texture descriptors previously mentioned have proven to be discriminative for texture classification, they do not exploit the color information that may exist in natural and microscopic images. To overcome such a limitation, Qi et al. [28] introduced an approach that encodes cross-channel texture correlation and an extension of LBP that incorporates color information. Nsimba and Levada [29] have also exploited color information for texture classification. They presented a novel approach that computes information-theoretic measures capturing significant textural information from a color image. The experimental results of both approaches are very promising and show the importance of using color information for texture characterization.

In this paper, we introduce a novel bio-inspired texture (BiT) descriptor based on biodiversity measurements (species richness and evenness) and taxonomic distinctiveness, concepts primarily applied in ecology, which treats a texture as an ecosystem from which both biodiversity measurements and taxonomic indices are computed and quantified. Azevedo et al. [30] and de Carvalho Filho et al. [31] have used some taxonomic indices for the diagnosis of glaucoma on retinographs and of lung nodules, respectively. It is worth mentioning that these works employed taxonomic indices for extracting features from specific types of medical images, such as glaucoma and lung nodule images. The bio-inspired texture descriptor proposed in this paper is a general texture descriptor that can be used to characterize texture information on a variety of texture images.
Furthermore, the proposed approach also exploits color information [28, 29]. We represent and characterize the biodiversity of an image through the interaction of a pixel with its neighborhood within a given channel (R, G, or B) as well as on the three-channel overlapped (original) image. Besides, the taxonomic indices and the species diversity and richness measures on which the novel BiT descriptor relies are of underlying use when it comes to defining an all-inclusive behavior (one that takes the whole ecosystem into account) of texture image patterns, which form a non-deterministic complex system.

The main contribution of this paper is a novel bio-inspired texture descriptor that exploits species diversity and richness and taxonomic distinctiveness to extract descriptive features for texture classification. More specifically, the contributions are: (i) modeling each channel of a color image as an ecosystem; (ii) a novel bio-inspired texture (BiT) descriptor, which combines measurements of species diversity and richness, and taxonomic distinctiveness; (iii) the BiT descriptor is invariant to scale, translation, and permutation; (iv) the BiT descriptor is easy to compute and has a low computational complexity; (v) the BiT descriptor is a generic texture descriptor that performs well on different categories of images, such as natural textures and medical images.

The rest of this paper is organized as follows. Section 2 presents the proposed bio-inspired texture descriptor based on biodiversity measurements and taxonomic distinctiveness. Section 3 describes a baseline approach to classify texture images, which is used to assess the performance of the proposed BiT descriptor and to compare it with other classical texture descriptors. Section 4 presents the datasets and the experimental protocol. Experimental results, comparisons with other texture descriptors and deep approaches, and a discussion are presented in Section 5. Finally, the conclusions are stated in the last section.

2. Biodiversity and Taxonomic Distinctiveness

Diversity is a term often used in ecology, and the purpose of diversity indices is to describe the variety of species present in a community or region [32]. A community is defined as a set of species that occurs in a certain place and time. Measurements frequently used in statistical studies, such as mean and variance, measure quantitative variability, while diversity indices describe qualitative variability. Diversity is measured through two variants: (i) species richness, which represents the number of species in a given region; and (ii) relative abundance, which refers to the number of individuals of a given species in a given region [33]. However, diversity cannot be measured only in terms of abundance and species richness; it requires the inclusion of a phylogenetic parameter [34]. Phylogeny is a branch of biology responsible for studying the evolutionary relationships between species to determine possible common ancestors. The combination of species abundance with phylogenetic proximity to generate a diversity index is denoted as taxonomic diversity. Taxonomy is the science that deals with classification (creating new taxa), identification (allocation of lineages within species), and nomenclature.

In biology, a phylogenetic tree combined with phylogenetic diversity indices is used to compare behavior patterns between species in different areas.
Phylogenetic indices (biodiversity and taxonomic indices) can characterize texture due to their potential for characterizing the patterns of a given region/image, even though such patterns form a non-deterministic complex system. The richness of detail obtained with each group of indices is essential for the composition of the descriptor proposed in this paper. We argue that these indices are suitable for describing textures due to their ability to analyze the diversity between species in a region.
We assume that an image is an abstract model of an ecosystem where: (i) the gray levels of pixels in an image correspond to the species in an ecosystem; (ii) the pixels in an image correspond to the individuals in an ecosystem; (iii) the number of different gray levels in an image corresponds to the species richness of an ecosystem; (iv) the number of pixels of a given gray level in a specific region of an image corresponds to the abundance of that species in an ecosystem. Another consideration is that both the patterns in an ecosystem and the patterns in texture images form a non-deterministic system. Figure 1 illustrates an ecosystem with three species: six individuals of the white species, five individuals of the gray species, and five individuals of the black species.

Figure 1: A gray-level image as an abstract model of an ecosystem of three species (three gray levels): white (6 individuals), gray (5 individuals), and black (5 individuals).
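To make the abstraction concrete, the sketch below (an illustration of ours, not the authors' released code) maps a gray-level image to the ecosystem model, returning species richness and the abundance of each species. The 4 × 4 pixel layout is hypothetical; only the counts (6 white, 5 gray, 5 black) come from Figure 1.

import numpy as np

def species_model(image):
    """Ecosystem abstraction: species = distinct gray levels,
    individuals = pixels, abundance = pixel count per gray level."""
    levels, counts = np.unique(image, return_counts=True)
    return {
        "richness": int(levels.size),                        # S
        "abundance": dict(zip(levels.tolist(), counts.tolist())),
        "individuals": int(image.size),                      # N
    }

# A 4x4 stand-in for the ecosystem of Figure 1:
# white (255) x 6, gray (128) x 5, black (0) x 5.
eco = np.array([[255, 128,   0, 255],
                [  0, 255, 128,   0],
                [128, 255,   0, 128],
                [255,   0, 128, 255]], dtype=np.uint8)

m = species_model(eco)
assert m["richness"] == 3 and m["individuals"] == 16
assert m["abundance"] == {0: 5, 128: 5, 255: 6}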
Biodiversity is defined as the variety within and among life forms in an ecosystem or site, and it is measured as a combination of richness and evenness across species [33]. Diversity can be employed to represent variation in several forms, such as genetic, life form, and functional group. It is worth mentioning that highly diverse communities are often a sign of fragmented sites where much of the species richness is contributed by disturbance species [33]. Different objective measures have been brought into existence as a means to empirically measure biodiversity. The fundamental idea of a diversity index is to quantify biological variability, which, in turn, can be used to compare biological entities, composed of direct components, in either space or time [35]. Biodiversity can be expressed or monitored at different scales and spaces: alpha diversity, beta diversity, and gamma diversity. More details concerning these three types of indices can be found in [36].
Diversity measurements rely on three assumptions [32]: (i) all species are equal – richness measurement makes no distinction among species and treats species that are exceptionally abundant in the same way as those that are extremely rare; (ii) all individuals are equal – there is no distinction between the largest and the smallest individual; in practice, however, the smallest animals can often escape, for instance, when sampling with nets. This does not necessarily apply to taxonomic and functional diversity measures; (iii) species abundance is recorded using appropriate and comparable units.

We can translate these assumptions to our abstract model as follows: (i) all gray levels are equal – richness measurement makes no distinction among gray levels and treats gray levels that are exceptionally abundant in the same way as those that are extremely underrepresented; in other words, all gray levels within an image are taken into account for further calculation, regardless of how non-representative some of them are; (ii) all pixel values are equal – there is no distinction between the largest and the smallest pixel value; (iii) gray-level abundance has to be recorded using appropriate and comparable units, such as intensity.

Some alpha diversity measures, including measures of richness, dominance, and evenness [37], are described as follows. They represent the diversity within a particular ecosystem, that is, the richness and evenness of individuals within a community.
Margalef's diversity index (d_Mg) [32, 38] and Menhinick's diversity index (d_Mn) [39] both relate the number of species (S) to the total number of individuals in the sample (N):

d_Mg = (S − 1) / ln N    (1)

d_Mn = S / √N    (2)

where S and N denote the number of gray levels and the total number of pixels in an image, respectively.

Berger-Parker dominance (d_BP) [40] is the ratio between the number of individuals in the most abundant species (N_max) and the total number of individuals in the sample (N):

d_BP = N_max / N    (3)

where N_max denotes the count of the most frequent gray level in an image.

Fisher's alpha diversity metric (d_F) [37, 41] denotes the number of operational taxonomic units, that is, groups of closely related individuals, and it is defined as:

d_F = α ln(1 + N/α)    (4)

where N is the number of pixels in the image, and α is approximately equal to the number of gray levels represented by a single pixel.

Kempton-Taylor index of alpha diversity (d_KT) [42] measures the interquartile slope of the cumulative abundance curve:

d_KT = ( ½ n_{R1} + Σ_{r=R1+1}^{R2−1} n_r + ½ n_{R2} ) / log(R2 / R1)    (5)

where n_r denotes the number of gray levels with abundance r; S is the number of gray levels in the image; R1 and R2 are the 25% and 75% quartiles of the cumulative gray-level curve; n_{R1} is the number of pixels in the class where R1 falls; and n_{R2} is the number of pixels in the class where R2 falls.

McIntosh's evenness measure (e_M) [43] relates the abundances n_i of the S species to the total number of individuals (N):

e_M = √( Σ_{i=1}^{S} n_i² ) / √( (N − S + 1)² + S − 1 )    (6)

where n_i denotes the number of pixels of the i-th gray level (the summation is over all gray levels), N is the total number of pixels, and S is the number of different gray levels in the image.

Shannon-Wiener diversity index (d_SW) [37] is defined from the proportions p_i of the individuals over the S species:

d_SW = − Σ_{i=1}^{S} p_i ln p_i    (7)

where S and p_i represent the number of gray levels and the proportion of pixels that have the i-th gray level, respectively.
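For illustration, the richness- and abundance-based indices above can be computed directly from the gray-level histogram; the sketch below (ours, not the released implementation) covers Equations (1)-(3), (6), and (7). Fisher's alpha (Equation 4) requires solving an implicit equation and the Kempton-Taylor index (Equation 5) requires quartile bookkeeping on the cumulative abundance curve, so both are omitted here.

import numpy as np

def diversity_indices(image):
    """Alpha diversity indices of Eqs. (1)-(3), (6), and (7),
    computed from the gray-level histogram of an image."""
    _, counts = np.unique(image, return_counts=True)
    n = counts.astype(float)      # abundance n_i of each gray level
    N, S = n.sum(), n.size        # total pixels N, richness S
    p = n / N                     # proportions p_i
    return {
        "d_Mg": float((S - 1) / np.log(N)),                   # Eq. (1)
        "d_Mn": float(S / np.sqrt(N)),                        # Eq. (2)
        "d_BP": float(n.max() / N),                           # Eq. (3)
        "e_M": float(np.sqrt((n ** 2).sum())
                     / np.sqrt((N - S + 1) ** 2 + S - 1)),    # Eq. (6)
        "d_SW": float(-(p * np.log(p)).sum()),                # Eq. (7)
    }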
The ecological diversity indices presented above are based on the richness and abundance of the species present in a community. Nevertheless, such indices may be insensitive to taxonomic differences or similarities: with equal species abundances, they measure nothing but species richness. Assemblages with the same species richness may either comprise species that are closely related taxonomically to each other, or species that are more distantly related [44].

Taxonomic indices consider the taxonomic relation between different individuals in an ecosystem. The diversity thereof reflects the average taxonomic distance between any two individuals, randomly chosen from a sample. The distance can represent the length of the path connecting these two individuals along the branches of a phylogenetic tree [44]. Taxonomic diversity and taxonomic distinctiveness define the relationship between two organisms randomly chosen from the existing phylogeny of a community [34, 45], and they are characterized by three key factors: (i) the number of individuals; (ii) the number of species; and (iii) the structure of the species connections, that is, the number of edges. Furthermore, Gibson et al. [45] also proposed a distinctiveness index describing the average taxonomic distance between two randomly chosen individuals through the phylogeny of all species in a sample. This distinctiveness may be represented as taxonomic diversity and taxonomic distinctness [35], which are described as follows.
Taxonomic diversity (∆) [34] includes aspects of taxonomic relatedness and evenness. In other words, it considers the abundance of species (number of pixels of each gray level) and the taxonomic relationship between them, and its value represents the average taxonomic distance between any two individuals (pixels) chosen at random from a sample:

∆ = ( Σ_{i<j} ω_ij x_i x_j ) / ( n(n − 1)/2 )    (8)

where x_i and x_j denote the abundances of the i-th and j-th species (gray levels), n is the total number of individuals (pixels), and ω_ij is the distance between species i and j in the phylogenetic tree.

Figure 3: Construction of a phylogenetic tree for computing the taxonomic indices. In each iteration (step), the image is divided based on species (gray levels); the average species value is used as the threshold at each step.

Figure 3 illustrates the division process performed on a region of an image to assemble a phylogenetic tree (dendrogram), based on the similarity between pixels (gray levels), for computing the taxonomic indices. In this case, a few iterations are needed to divide the original region/image until a single gray level remains at each leaf. The division is carried out based on a threshold, which splits a region into two parts, each containing the pixels of gray levels above (right) and below (left) the threshold, successively. The threshold can be the average gray level of all pixels.

From the original image, in the first iteration (step 1), gray levels 6 and 75 (left) are below the threshold, whereas gray levels 117, 141, and 230 (right) are above the threshold. The second iteration (step 2) splits the left part resulting from step 1, that is, gray levels 6 (left) and 75 (right), into two parts. Since a single gray level remains in each region resulting from step 2, these regions become leaves. The third iteration (step 3) separates the right part resulting from step 1 into two parts: pixels of gray levels 141 and 117 go to the left, while pixels of gray level 230 go to the right. Finally, the fourth iteration (step 4) separates the left part resulting from step 3 into two parts, that is, pixels of gray levels 141 and 117. Figure 4 illustrates the rooted tree, the dendrogram, and the respective species (gray levels) as well as their characteristics.

Figure 4: Example of (a) a rooted tree; (b) a dendrogram; and (c) the respective distance matrix of gray levels computed from the image in Figure 3. Note that (a) and (b) are equivalent. The dendrogram allows computing the phylogenetic indices to infer the phylogenetic relationship between the gray levels existing in the original image. Therefrom, the taxonomic indices are likewise computed.
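To make the construction concrete, the following sketch (our reading of Figures 3 and 4, not the authors' released code) builds the dendrogram by recursive mean-threshold splitting and derives distances ω_ij as the number of edges on the leaf-to-leaf path. The paper does not spell out its exact distance convention, so the edge-count choice is an assumption, as is the direct evaluation of Equation (8) from the abundances.

import numpy as np

def build_dendrogram(levels):
    """Recursively split a set of gray levels at their mean (Figure 3)."""
    levels = sorted(levels)
    if len(levels) == 1:
        return levels[0]                     # leaf: a single gray level
    t = np.mean(levels)                      # threshold = average gray level
    return (build_dendrogram([g for g in levels if g < t]),
            build_dendrogram([g for g in levels if g >= t]))

def leaf_paths(tree, path=()):
    """Map each leaf to its sequence of left/right choices from the root."""
    if not isinstance(tree, tuple):
        return {tree: path}
    out = {}
    for branch, subtree in enumerate(tree):
        out.update(leaf_paths(subtree, path + (branch,)))
    return out

def omega(tree):
    """Pairwise distances: number of edges on the path joining two leaves."""
    paths = leaf_paths(tree)
    dist = {}
    for a, pa in paths.items():
        for b, pb in paths.items():
            k = 0                            # length of the common prefix
            while k < min(len(pa), len(pb)) and pa[k] == pb[k]:
                k += 1
            dist[(a, b)] = (len(pa) - k) + (len(pb) - k)
    return dist

def taxonomic_diversity(abundance, dist):
    """Eq. (8): mean taxonomic distance between two random pixels."""
    sp = sorted(abundance)
    num = sum(dist[(a, b)] * abundance[a] * abundance[b]
              for i, a in enumerate(sp) for b in sp[i + 1:])
    n = sum(abundance.values())
    return num / (n * (n - 1) / 2)

# Gray levels of the worked example in Figure 3:
tree = build_dendrogram([6, 75, 117, 141, 230])
assert tree == ((6, 75), ((117, 141), 230))  # matches steps 1-4 above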
For many applications, a texture descriptor must have some important properties, such as invariance to rotation, scale, and translation. Furthermore, the descriptor should be easy to calculate. The diversity indices based on species richness measure properties directly related to species, such as their relative abundance and evenness. These measurements are invariant to in-plane rotations and scale, because the very essence of a pattern is invariance. The fundamental idea of diversity indices is to quantify biological variability, which, in turn, can be used to compare biological entities, composed of direct components, in either space or time [35]. Biodiversity can be expressed or monitored at different scales and spaces, and it is assumed that all species are equal (richness measurement makes no distinction among species and treats species that are exceptionally abundant in the same way as those that are extremely rare) and that all individuals are equal (there is no distinction between the largest and the smallest individual) [32].

In our abstract model, these assumptions may be expressed as follows: pixels of any gray level are equal, that is, richness measurement makes no distinction among gray levels and treats pixels that are exceptionally abundant in the same way as pixels that are extremely underrepresented; in other words, pixels of all gray levels present in an image are taken into account for further calculation, regardless of how non-representative some are. Likewise, all pixel values are equal, that is, there is no distinction between the largest and the smallest pixel value.

In ecology, a pattern is subject to how form remains invariant to changes in measurement. Some patterns retain the same form after uniformly stretching or shrinking the scale of measurement. Rotational invariance in ecological patterns has been stated by Frank and Bascompte [52] to be the most general way to understand commonly observed patterns. Species abundance distributions provide a transcendent example, in which maximum entropy and neutral models can succeed in some cases because they derive from invariance principles. Likewise, as presented by Daly et al. [53], diversity is invariant to permutations of the species abundance vector. Rousseau et al. [54] emphasize that there is a one-to-one correspondence between abundance vectors and Lorenz curves; consequently, abundance vectors can be partially ordered by the Lorenz order, which is permutation-invariant (rotation) and scale-invariant.

Therefore, the BiT descriptor combines the characteristics of statistical and structural approaches and takes advantage of the invariance of ecological patterns to permutation, rotation, and scale by combining species richness, abundance, and evenness, as well as taxonomic indices.
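As an empirical sanity check of the rotation and reflection invariance just discussed: any rearrangement of pixels leaves the gray-level histogram, and hence the richness- and abundance-based indices, unchanged. The snippet below reuses the hypothetical diversity_indices() helper from the earlier sketch.

import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

ref = diversity_indices(img)
for transformed in (np.rot90(img),        # rotation 90 degrees
                    np.rot90(img, 2),     # rotation 180 degrees
                    np.fliplr(img),       # horizontal reflection
                    np.flipud(img)):      # vertical reflection
    assert diversity_indices(transformed) == ref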
2.5. BiT and other Texture Descriptors

The BiT descriptor shares some characteristics with both the GLCM [12] and LBP [14] descriptors, in the sense that BiT also characterizes textures based on second-order statistical properties, which involves comparing pixels and determining how a pixel at a specific location relates statistically to pixels at different locations.

In ecology, taxonomic indices are approximations of second-order statistics at the species level. These indices are based on group analysis, thus enabling a behavioral exploration of the neighborhood of regions displaced from a reference location. Given a distance measurement between pairs of species (pairs of pixels of different gray levels), a classical approach to the phylogeny problem is to find a tree that predicts the observed set of pairwise distances. This is represented in the matrix of phylogenetic distances, which reduces the phylogeny to a simple table of pairwise distances [44, 51].

Furthermore, the BiT descriptor also shares some characteristics with Gabor filters [17], which explore different varieties of periodicity in an image and attempt to characterize a texture at different periodicities, an analysis confined to the adjacent neighborhoods of individual pixels. These within-neighborhood periodicity properties can be used to recognize texture differences between different regions. Accordingly, phylogenetic trees combined with diversity indices are used in biology to compare behavior patterns between species in different areas and within neighborhoods. In addition, diversity indices based on species richness are of underlying use when it comes to defining the all-inclusive behavior of an ecosystem, which forms a non-deterministic complex system.

3. Case Study

In this section, we present how the proposed bio-inspired texture descriptor can be integrated with image processing and machine learning algorithms for classification tasks. The proposed classification scheme is structured into five stages: image channel splitting, pre-processing, feature extraction, training, and classification. Figure 5 shows an overview of the proposed scheme. Algorithm 1 integrates the first three stages; it receives an RGB image as input and provides a d-dimensional feature vector of BiT descriptors. An implementation of this algorithm is available as a Python module¹. The five stages are described as follows.

Figure 5: General overview of the proposed scheme.

Channel Splitting: Besides the original input RGB image, each image channel (R, G, B) is considered as a separate input. The key reason for splitting the channels is the following: notwithstanding that the features employed by the majority of the descriptors presented in Section 1 have shown discriminative ability when it comes to classifying texture patterns, their performance on natural and microscopic images may be bounded because they are applied to a gray-scale version of the original image, thus not exploiting color information. Here, we intend to provide a color texture image classification approach based, to a great extent, on the ability of the bio-inspired feature descriptor to capture noteworthy textural information from an input color image. Based on the principle that most ecosystems work in a cause-effect relationship, that is, when one resource is added or lost it affects the entire ecosystem, and that some of the most marked temporal/spatial fluctuations in species abundances are linked to this cause-effect relationship [55], we represent and characterize the biodiversity of an input image by a set of local descriptors generated both from the interaction of a pixel with its neighborhood inside a given channel (R, G, or B) and from the three-channel overlapped (original) image.

Pre-Processing: It consists of an unsharp filter to highlight image characteristics and a Crimmins filter to remove speckles [56]. Both filters are applied to each image channel and to the original image to improve their quality for the feature extraction step.

¹https://github.com/stevetmat/BioInspiredFDesc. The Python class BiT(image, b_feat=True, t_feat=True, unsharp_filter=True, crimmins_filter=True, normalization=True) generates a 56-dimensional feature vector.

Feature Extraction: After the pre-processing step, the images undergo feature extraction, which looks for informative and discriminative characteristics within the images. Images are then represented by several measurements organized into feature vectors. From each image, we extract biodiversity measurements (Equations 1 to 7) and taxonomic indices (Equations 8 to 14).

Algorithm 1: FeatureExtractionProcedure
Result: feature descriptor
1. Read an RGB image;
2. Separate the image into channels R, G, B, preserving the original RGB;
3. Convert R, G, B, and RGB to grayscale images;
4. Apply the unsharp filter to R, G, B and the Crimmins filter to RGB;
5. Compute biodiversity measurements and taxonomic indices of R, G, B, and RGB;
6. Concatenate these values into a single vector V;
7. Normalize all values of V;
8. Repeat steps 1 to 7 for all images of the dataset.
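Given the module referenced in the footnote, extracting BiT features for a set of images might look like the sketch below. The BiT constructor arguments follow footnote 1, while the import path, the image-loading details, and the name of the attribute holding the 56-dimensional vector are our own assumptions and may differ in the released code.

import cv2
import numpy as np
from bit import BiT  # hypothetical import path for the authors' module

def extract_bit_features(image_paths):
    """Algorithm 1 over a list of image files: the BiT class is assumed
    to handle channel splitting, filtering, the biodiversity and
    taxonomic measurements, and normalization internally."""
    vectors = []
    for path in image_paths:
        bgr = cv2.imread(path)                      # OpenCV reads BGR
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
        desc = BiT(rgb, b_feat=True, t_feat=True,
                   unsharp_filter=True, crimmins_filter=True,
                   normalization=True)              # signature from footnote 1
        vectors.append(desc.features)               # attribute name assumed
    return np.vstack(vectors)                       # one 56-d row per image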
Classification: The final step of the proposed scheme consists of classifying images into different classes using a shallow approach, where the feature vectors are used to train different classification algorithms, as detailed in Section 4. The results obtained are presented and discussed in Section 5.

4. Experimental Protocol

In this section, we present the datasets used to assess the performance of the proposed BiT descriptor, which include natural texture images and histopathological images (HIs), and the experimental protocol used to evaluate the properties of the BiT descriptor and its performance on classification tasks. We compare the performance of the BiT descriptor with classical texture descriptors such as LBP, GLCM, and Haralick. It is worth mentioning that our contribution relies on the combination of biodiversity measurements and taxonomic indices to build a discriminative descriptor capable of efficiently classifying textures.

We use three texture datasets that have already been employed for evaluating texture descriptors such as LBP, GLCM, and Haralick [22]. The Salzburg dataset contains a collection of 476 color texture images of resolution 128 × 128 pixels. Figure 6(a) shows some samples from the Salzburg dataset.

The Outex_TC_00010_c dataset² has a training set consisting of 20 non-rotated color images of each of the 24 classes (480 in total) of illuminant "inca", the color counterpart of the original Outex_TC_00010 dataset. The test set consists of 3,840 color images at eight orientations (5, 10, 15, 30, 45, 60, 75, and 90 degrees). Figure 6(b) shows some samples from the training set of the Outex dataset.

The KTH-TIPS dataset contains a collection of 810 color texture images of 200 × 200 pixels of resolution. The images were captured at nine scales, under three different illumination directions and three different poses, with 81 images per class. Seventy percent of the images are used for training, while the remaining 30% are used for testing. Figure 6(c) shows some samples from the KTH-TIPS dataset.

Figure 6: Samples from the texture datasets: (a) Salzburg, (b) Outex_TC_00010_c, and (c) KTH-TIPS.

²http://lagis-vi.univ-lille1.fr/datasets/outex.html

HIs were included in the experiments because they are more challenging than pure texture images, since HIs usually have other structures, such as nuclei (shape), and variation of tissues (colors) within the same class.

The CRC dataset [57] comprises HIs of colorectal cancer divided into patches of 150 × 150 pixels, labeled according to the structure they contain. Eight types of structures are labeled: tumor (T), stroma (ST), complex stroma (C), immune or lymphoid cells (L), debris (D), mucosa (M), adipose (AD), and background or empty (E). Each structure in the CRC dataset has a specific textural characteristic, with few shape characteristics, found mostly in the formation of cell nuclei, which have a rounded shape but different coloring due to hematoxylin. The total number of images is 625 per structure type, resulting in 5,000 images. Figure 7 shows samples of each class from the CRC dataset. The experiments were performed with stratified 10-fold cross-validation.

Figure 7: Samples of the CRC dataset: (a) tumor, (b) stroma, (c) complex, (d) lympho, (e) debris, (f) mucosa, (g) adipose, (h) empty.
The BreakHis dataset [58] is composed of 7,909 microscopic images of breast tumor tissue collected from 82 patients using different magnification factors (40×, 100×, 200×, and 400×). The breast tissue extracted from a biopsy usually has some basic structures, such as glands, ducts, and supporting tissue. By comparing a region that has a malignant tumor, a ductal carcinoma for example, with a region that does not, there will be a difference in texture between them. In the region with carcinoma, there will be a large presence of nuclei, identified by the purple color of the reaction of hematoxylin with their proteins. The nuclei and the large number of cells in a reduced region make the apparent texture noisier. In a region without carcinoma, the epithelial tissue is thin and delimits two regions, lumen and stroma, which have textural characteristics different from the excess of epithelial cells. The lumen generally presents itself as a homogeneous and whitish region; the stroma, due to the reaction of eosin, presents a pink and also homogeneous color, with little noise. It is at this point that a texture descriptor can assist in the detection of carcinomas, by characterizing a given texture. Nevertheless, the evaluation of types of malignant tumors, that is, the differentiation between types of carcinoma on a dataset such as BreakHis, would require detecting shape to differentiate papillae from a disorderly cluster of cells, for instance.

The BreakHis dataset contains 2,480 benign and 5,429 malignant samples (700 × 460 pixels, 3-channel RGB, 8-bit depth in each channel, PNG format). We used hold-outs with repetition, where 70% of the samples are used for training and 30% of the samples are used for testing. Figure 8 shows samples from each class of the BreakHis dataset.

Figure 8: Examples of HIs: (a) adenosis, (b) fibroadenoma, (c) phyllodes tumor, (d) tubular adenoma, (e) ductal carcinoma, (f) lobular carcinoma, (g) mucinous carcinoma, (h) papillary carcinoma, where (a) to (d) are benign and (e) to (h) are malignant tumors.

We have carried out three types of experiments to evaluate the proposed BiT descriptor: (i) experiments on texture images to evaluate the invariance of the BiT descriptor to rotation and scale; (ii) experiments on texture images in which the accuracy of classification algorithms trained with BiT descriptors extracted from the images is computed for a comparative analysis with traditional texture descriptors; (iii) experiments on HIs in which sensitivity, specificity, and Kappa scores are computed as quantitative measures, since such measures are frequently used in medical imaging.

The invariance properties of the proposed BiT descriptor are evaluated on different transformations applied to texture images. For each image, we compute the BiT descriptors and compare them to those computed from the transformed images. In this case, the feature values should not change with the transformations.

The BiT descriptor is evaluated by the accuracy achieved on three texture datasets when it is used to extract features and different classification algorithms are trained with such feature vectors. The same classification algorithms are trained with other texture descriptors, and their performance is compared with that achieved with BiT. For a fair comparison with other texture descriptors, we use the same approach described in Section 3 for all texture descriptors. Furthermore, the feature extraction procedure described in Algorithm 1 was also used for all texture descriptors. We have used SVM and k-NN classifiers and four ensemble learning algorithms: a decision-tree ensemble that uses a gradient boosting framework (XGBCB), a histogram-based algorithm for building gradient boosting ensembles of decision trees (HistoB), light gradient boosting decision trees (LightB), and a super learner (SuperL) [59], which involves selecting different base classifiers and evaluating their performance using a resampling technique. SuperL applies stacked generalization through out-of-fold predictions during k-fold cross-validation. The base classifiers used in SuperL are k-NN, decision trees, and ensembles of decision trees such as AdaBoost, bagging, extra trees, and random forest.
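The super learner described above can be approximated with off-the-shelf tooling; the sketch below uses scikit-learn's StackingClassifier as a stand-in for SuperL [59], with the base classifiers listed in the text. The meta-learner and all hyperparameters are unspecified in the paper and are assumptions here.

from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              ExtraTreesClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def make_super_learner(k=10):
    """Stacked generalization: base learners produce out-of-fold
    predictions during k-fold CV, and a meta-learner combines them."""
    base_learners = [
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier()),
        ("ada", AdaBoostClassifier()),
        ("bag", BaggingClassifier()),
        ("extra", ExtraTreesClassifier()),
        ("rf", RandomForestClassifier()),
    ]
    return StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=k)

# Usage: clf = make_super_learner(); clf.fit(X_train, y_train)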
The BiT descriptor is also evaluated by the accuracy, specificity, sensitivity, and Kappa score achieved on two HI datasets. In this case, only the classification algorithm that achieved the best performance with BiT is retained, and its performance is compared with the state of the art on these datasets, which includes CNNs. These experiments are performed using stratified k-fold cross-validation.

5. Results and Discussion

Figure 9 illustrates different transformations of texture images (first row) and HIs (second row). For each image, we have computed some BiT descriptors from each transformation, and the non-normalized feature values are presented in Tables 1 and 2.

Figure 9: Example of a texture image: (a) original image, (b) rotation 90°, (c) rotation 180°, (d) horizontal reflection, (e) vertical reflection, (f) rescaled 50%. Example of a histopathologic image: (g) original image, (h) rotation 90°, (i) rotation 180°, (j) horizontal reflection, (k) vertical reflection, (l) rescaled 50%.

The values of the BiT descriptors presented in Tables 1 and 2 show that: (i) all measurements employed are invariant to rotation and reflection, as shown in Figures 9(a)-(e) and 9(g)-(k), since they present exactly the same values for each texture image or HI; this also corroborates the fact that BiT descriptors capture the all-inclusive behavior of patterns in an image; (ii) the Shannon-Wiener diversity index (d_SW), taxonomic distinctness (∆*), intensive quadratic entropy (e_IQ), and the average distance from the nearest neighbor (d_NN) are invariant to scale, as they provide values of the same order as the other transformations for each of the images. On the other hand, the measures based on richness and abundance show a dependence on scale: by changing the image scale, we somehow affect the proportion of both factors, which affects the resulting values either directly or inversely. In contrast, taxonomic indices rely on the parenthood relationship between species and are not affected by the change in scale, since the phylogenetic relationship depends on the intrinsic properties found in the ecosystem (image).

Table 1: Non-normalized feature values computed from different image transformations applied to the texture image of Figure 9(a). Columns: original, rotation 90°, rotation 180°, horizontal reflection, vertical reflection, rescaling 50%; rows: d_Mg, e_M, d_Mn, d_SW, ∆*, e_IQ, d_NN.

Table 2: Non-normalized feature values computed from different image transformations applied to the HI of Figure 9(g), with the same columns and rows as Table 1.

Table 3 shows the accuracy achieved by monolithic classifiers and ensemble methods on four texture descriptors: LBP, GLCM, Haralick, and BiT. The proposed BiT descriptor provided the best accuracy for most of the classification algorithms, and the best result was achieved with BiT and SuperL (96.34%), which outperformed all texture descriptors.
The differences in accuracy between BiT and the second- and third-best texture descriptors (Haralick+k-NN and GLCM+k-NN) are nearly 5% and 13%, respectively.

Table 3: Accuracy (%) on the test set of the Salzburg dataset (mean ± standard deviation). Columns: XGBCB, HistoB, LightB, SuperL, k-NN, SVM; rows: LBP, GLCM, Haralick, BiT. The best results are in boldface.

A direct comparison of the results presented in Table 3 with other works may not be reasonable owing to differences in the experimental protocols. For example, the subclasses used in the experiment sets are not clearly specified, nor are the samples in the test set.

Table 4 shows the accuracy achieved by monolithic classifiers and ensemble methods on four texture descriptors: LBP, GLCM, Haralick, and BiT. The proposed BiT descriptor provided the best accuracy for all classification algorithms, and the best result was achieved with BiT and SVM (100%), which outperformed all texture descriptors. The differences in accuracy between BiT and the second- and third-best texture descriptors (Haralick+SuperL and GLCM+SuperL) are nearly 7% and 8%, respectively.

Table 4: Accuracy (%) on the test set of the Outex dataset (mean ± standard deviation). Columns: XGBCB, HistoB, LightB, SuperL, k-NN, SVM; rows: LBP, GLCM, Haralick, BiT. Best results are in boldface.

Several works have also used the Outex dataset for texture classification. Although a direct comparison is not possible due to differences in the experimental protocols, Mehta and Egiazarian [60] presented an approach based on dominant rotated LBP, which achieved an accuracy of 96.26% with a k-NN. The approach is rotation invariant; nonetheless, it has the downside of not considering color information and global features. Du et al. [61] presented an approach based on a local spiking pattern. This approach has the advantage of being rotation invariant, impulse-noise resistant, and illumination invariant. Notwithstanding, it is not extended to color textures, and many input parameters are required. They achieved an accuracy of 86.12% with a neural network.

Finally, Table 5 shows the accuracy achieved by monolithic classifiers and ensemble methods on four texture descriptors: LBP, GLCM, Haralick, and BiT. The proposed BiT descriptor provided the best accuracy for four out of six classification algorithms. However, the best result was achieved with BiT and SVM (98.93%), which outperformed all texture descriptors. The differences in accuracy between BiT and the second- and third-best texture descriptors (Haralick+SVM and GLCM+SVM) are nearly 5% and 11%. Nonetheless, the Haralick descriptor presented an accuracy equal to or slightly higher than BiT for the XGBCB and HistoB ensemble methods.

Table 5: Accuracy (%) on the test set of the KTH-TIPS dataset (mean ± standard deviation). Columns: XGBCB, HistoB, LightB, SuperL, k-NN, SVM; rows: LBP, GLCM, Haralick, BiT. Best results are in boldface.

The KTH-TIPS dataset has also been used to evaluate approaches for texture classification. Even if a direct comparison may not be reasonable due to differences in the experimental protocols, Mehta and Egiazarian [60] also evaluated their approach on this dataset and achieved an accuracy of 96.78% with k-NN. Hazgui et al. [62] presented an approach based on genetic programming and the fusion of HOG and LBP features. Such an approach achieved an accuracy of 91.20% with a k-NN; nevertheless, it does not consider color information and global features. Moreover, Nguyen et al.
[63] presented statistical binary patterns, which are rotation and noise invariant. Such an approach reached an accuracy of 97.73%, which is 1.3% lower than the accuracy achieved by BiT+SVM. However, in addition to being resolution sensitive, this method presents a high computational complexity. Despite differences in the experimental protocol, Qi et al. [28] studied the relative variance of texture patterns between different channels through LBP as a feature descriptor, and Shannon entropy to encode the cross-channel texture correlation. They proposed a multi-scale cross-channel LBP (CCLBP), which is rotation invariant. CCLBP first computes the LBP descriptors in each channel and for each scale (a total of 3 scales), then conducts co-occurrence statistics, and the extracted features are concatenated. Such an approach achieved an accuracy of 99.01% for three scales with an SVM, which is 0.17% higher than the accuracy achieved by BiT+SVM. Notwithstanding, scale invariance, for example, is not an advantage provided by this method.

Table 6 shows the accuracy achieved by monolithic classifiers and ensemble methods trained with BiT descriptors on the CRC dataset. Among all classification algorithms, SuperL provided the best results. We have also computed other important metrics used in medical imaging for BiT+SuperL: the specificity, sensitivity, and Kappa achieved on the CRC dataset are 94.43%, 94.47%, and 93.87%, respectively. Table 7 compares the results achieved by BiT+SuperL with the state of the art for the CRC dataset. The proposed descriptor slightly outperforms the accuracy achieved by all other methods. The difference in accuracy to the second-best method (a CNN) is 0.56%, considering an 8-class classification task.

It is worth mentioning that the success of CNNs relies on their ability to leverage massive labeled datasets to learn high-quality representations. They have been widely employed on different image classification tasks due to their discriminative capability. Considering that they learn iteratively, a large amount of data is required to train CNNs to obtain the desired results. Notwithstanding, data availability in a few fields may be scanty, and therefore CNNs become prohibitive in several domains. This is the case in medical imaging. The results achieved by the BiT descriptor on the CRC dataset for HI classification show that the proposed descriptor works well on other types of images, which have structures other than textures, with no need for data augmentation.

Table 6: Accuracy (%) of monolithic classifiers and ensemble methods with the BiT descriptor on the CRC dataset. Columns: XGBCB, HistoB, LightB, SuperL, k-NN, SVM.

Table 7: Accuracy (%) of BiT+SuperL and the state of the art on the CRC dataset. Shallow methods: Ribeiro et al. [64] (marked ∗), Kather et al. [65] (87.40), Sarkar and Acton [66] (73.60), and BiT+SuperL; CNN-based methods: Wang et al. [67] (92.60), Pham [68] (84.00), and Raczkowski et al. [69] (92.40 and 92.20). ∗ Used 2-class classification instead.

Table 8 shows the accuracy achieved by monolithic classifiers and ensemble methods trained with the BiT descriptor on the BreakHis dataset. The SVM classifier achieved the best accuracy for all magnifications, followed by the super learner. Table 9 shows the specificity, sensitivity, and Kappa achieved by BiT and SVM. Table 10 compares the results achieved by BiT+SVM with the state of the art for the BreakHis dataset. The proposed descriptor achieved a considerable accuracy of 97.50% for 40× magnification, which slightly outperforms the accuracy of both shallow and deep methods.
The difference in accuracy between the proposed method and the second-best method (a CNN) is about 0.5% for 40× magnification. Notwithstanding, the best CNN method outperforms BiT for 100×, 200×, and 400× magnifications, with differences of 0.70%, 1.40%, and 2.00%, respectively. Moreover, Table 10 presents the results achieved by Spanhol et al. [58], who also used LBP, GLCM, and other texture descriptors with monolithic classifiers and ensemble methods. For instance, the results achieved by BiT+SVM outperform their GLCM approach by 22.8%, 20.0%, 12.4%, and 13.5% for 40×, 100×, 200×, and 400×, respectively.

Table 8: Accuracy (%) of classification algorithms on the test set of the BreakHis dataset at each magnification (40×, 100×, 200×, 400×): XGBCB 94.55, 94.85, 93.36, 90.51; HistoB 94.25, 95.38, 93.86, 91.45; LightB 94.39, 94.95, 92.61, 90.10; SuperL 96.61, 95.72, 93.57, 93.86; SVM (best for all magnifications; 97.50 at 40×).

Table 9: Specificity, sensitivity, and Kappa for BiT+SVM on the test set of the BreakHis dataset at 40×, 100×, 200×, and 400× magnifications.

Table 10: Accuracy (%) of shallow and deep methods on the BreakHis dataset (40×, 100×, 200×, 400×): Spanhol et al. [58] with LBP, shallow: 75.60, 73.00, 72.90, 71.20; Spanhol et al. [58] with GLCM, shallow: 74.70, 76.80, 83.40, 81.70; Erfankhah et al. [70], shallow: 88.30, 88.30, 87.10, 83.40; BiT+SVM, shallow (best; 97.50 at 40×); Han et al. [72], CNN: 92.80, 93.90, 93.70, 92.90; Bayramoglu et al. [73], CNN: 83.00, 83.10, 84.60, 82.10; Spanhol et al. [74], CNN: 90.00, 88.40, 84.60, 86.10.

Even if CNNs have been outperforming shallow methods in several classification tasks, their advantage on texture images is not so large. CNNs must be trained on large amounts of data, and they often require retraining or fine-tuning of some of their layers to deal with different problems. Besides that, CNNs are complex and usually have thousands of trainable parameters, which require large computational resources for training such models. In contrast, the cost of computing BiT descriptors is relatively low. Furthermore, the proposed BiT descriptor is generic and does not require retraining or hyperparameter configuration, while providing state-of-the-art performance, as shown in the experimental results over different datasets.

6. Conclusions

In this paper, we have presented an important contribution to texture characterization using biodiversity measurements and taxonomic distinctiveness. We have proposed a bio-inspired texture descriptor named BiT, which is based on an abstract modeling of an ecosystem as a gray-level image where image pixels correspond to a community of organisms. We have revisited several biodiversity measurements and taxonomic distinctiveness measures to compute features based on species richness, species abundance, and taxonomic indices. The combination of species richness, species abundance, and taxonomic indices takes advantage of the invariance characteristics of ecological patterns with respect to reflection, rotation, and scale.

These bio-inspired features form a robust and invariant texture descriptor that can be used together with machine learning algorithms to build classification models. Experimental results on texture and HI datasets have shown that the proposed texture descriptor can be used to train different classification algorithms that outperform traditional texture descriptors and achieve very competitive results when compared to deep methods.
Therefore, the proposed texture descriptor is promising, particularly for dealing with texture analysis and characterization problems. The results demonstrate the auspicious performance of the presented bio-inspired texture descriptor.

Considering that the image channels are separated and that the features are extracted using the same measures, it is possible to have redundant and irrelevant features, which may affect the classification performance. This issue opens the door for a feature selection step. Thus, as future work, we intend to integrate into the feature extraction procedure a decision-maker-based multi-objective feature selection to find a solution that makes a trade-off between the number of features and accuracy.

Acknowledgments

This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) under Grant RGPIN 2016-04855 and by the Regroupement Stratégique REPARTI, Fonds de recherche du Québec – Nature et technologies (FRQNT).

References

[1] M. Pietikäinen, T. Mäenpää, J. Viertola, Color texture classification with color histograms and local binary patterns, in: Workshop on Texture Analysis in Machine Vision, volume 1, 2002.
[2] J. R. Bergen, E. H. Adelson, Early vision and texture perception, Nature 333 (1988) 363–364.
[3] J. Hu, D. Li, Q. Duan, Y. Han, G. Chen, X. Si, Fish species classification by color, texture and multi-class support vector machine using computer vision, Computers and Electronics in Agriculture 88 (2012) 133–140.
[4] G. Zhao, M. Pietikäinen, Dynamic texture recognition using local binary patterns with an application to facial expressions, IEEE Trans on Pattern Analysis and Machine Intelligence 29 (2007) 915–928.
[5] F. S. Khan, J. Van De Weijer, M. Vanrell, Top-down color attention for object recognition, in: IEEE 12th Int'l Conf on Computer Vision, 2009, pp. 979–986.
[6] S. H. Ong, X. C. Jin, R. Sinniah, et al., Image analysis of tissue sections, Computers in Biology and Medicine 26 (1996) 269–279.
[7] Y. Costa, L. Oliveira, A. L. Koerich, F. Gouyon, Music genre recognition using Gabor filters and LPQ texture descriptors, in: Iberoamerican Congress on Pattern Recognition, Springer, 2013, pp. 67–74.
[8] J. Li, W. Rich, D. Buhl-Brown, Texture analysis of remote sensing imagery with clustering and Bayesian inference, Int'l J. Image Graphics Signal Processing 7 (2015) 1–10.
[9] W. Li, M. Fritz, Recognizing materials from virtual examples, in: European Conf on Computer Vision, Springer, 2012, pp. 345–358.
[10] D. Vriesman, A. de Souza Britto Jr., A. Zimmer, A. L. Koerich, R. Paludo, Automatic visual inspection of thermoelectric metal pipes, Signal Image Video Process. 13 (2019) 975–983.
[11] M. Tuceryan, A. K. Jain, Texture analysis, in: Handbook of Pattern Recognition and Computer Vision, World Scientific, 1993, pp. 235–276.
[12] R. M. Haralick, K. Shanmugam, I. Dinstein, Textural features for image classification, IEEE Trans on Systems, Man, and Cybernetics (1973) 610–621.
[13] R. M. Haralick, Statistical and structural approaches to texture, Proc. of the IEEE 67 (1979) 786–804.
[14] M. Pietikäinen, A. Hadid, G. Zhao, T. Ahonen, Computer Vision Using Local Binary Patterns, volume 40, Springer Science & Business Media, 2011.
[15] S. Arivazhagan, L. Ganesan, Texture classification using wavelet transform, Pattern Recognition Letters 24 (2003) 1513–1521.
[16] G. R. Cross, A. K. Jain, Markov random field texture models, IEEE Trans on Pattern Analysis and Machine Intelligence (1983) 25–39.
[17] I. Fogel, D. Sagi, Gabor filters as texture discriminator, Biological Cybernetics 61 (1989) 103–113.
[18] V. Ojansivu, J. Heikkilä, Blur insensitive texture classification using local phase quantization, in: Int'l Conf on Image and Signal Processing, 2008, pp. 236–243.
[19] J. Saxena, K. Teckchandani, P. Pandey, M. K. Dutta, C. M. Travieso, J. B. Alonso-Hernández, et al., Palm vein recognition using local tetra patterns, in: 4th Int'l Conf on Bioinspired Intelligence, 2015, pp. 151–156.
[20] J. Kannala, E. Rahtu, BSIF: Binarized statistical image features, in: 21st Int'l Conf on Pattern Recognition, 2012, pp. 1363–1366.
[21] L. M. Kaplan, Extended fractal analysis for texture classification and segmentation, IEEE Trans on Image Processing 8 (1999) 1572–1585.
[22] P. Simon, V. Uma, Review of texture descriptors for texture classification, in: Data Engineering and Intelligent Computing, Springer, 2018, pp. 159–176.
[23] L. Liu, J. Chen, P. Fieguth, G. Zhao, R. Chellappa, M. Pietikäinen, From BoW to CNN: Two decades of texture representation for texture classification, Int'l Journal of Computer Vision 127 (2019) 74–109.
[24] V. Andrearczyk, P. F. Whelan, Using filter banks in convolutional neural networks for texture classification, Pattern Recognition Letters 84 (2016) 63–69.
[25] J. de Matos, A. de Souza Britto Jr., L. E. S. de Oliveira, A. L. Koerich, Texture CNN for histopathological image classification, in: 32nd IEEE Int'l Symp on Computer-Based Medical Systems, 2019, pp. 580–583.
[26] S. Fujieda, K. Takayama, T. Hachisuka, Wavelet convolutional neural networks for texture classification, arXiv preprint arXiv:1707.07394 (2017).
[27] D. Vriesman, A. S. Britto Junior, A. Zimmer, A. L. Koerich, Texture CNN for thermoelectric metal pipe image classification, in: IEEE 31st Int'l Conf on Tools with Artificial Intelligence, 2019, pp. 569–574.
[28] X. Qi, Y. Qiao, C. Li, J. Guo, Exploring cross-channel texture correlation for color texture classification, in: T. Burghardt, D. Damen, W. W. Mayol-Cuevas, M. Mirmehdi (Eds.), British Machine Vision Conf, 2013.
[29] C. B. Nsimba, A. L. Levada, Exploring information theory and Gaussian Markov random fields for color texture classification, in: Int'l Conf on Image Analysis and Recognition, Springer, 2020, pp. 130–143.
[30] L. M. Azevedo, J. D. S. de Almeida, A. C. de Paiva, G. Braz Júnior, R. M. S. Veras, Diagnóstico de glaucoma em retinografias usando índices taxonômicos e aprendizado de máquina, Revista de Sistemas e Computação 10 (2020).
[31] A. O. de Carvalho Filho, A. C. Silva, A. C. de Paiva, R. A. Nunes, M. Gattass, Lung-nodule classification based on computed tomography using taxonomic diversity indexes and an SVM, Journal of Signal Processing Systems 87 (2017) 179–196.
[32] A. E. Magurran, Measuring Biological Diversity, Blackwell Publishing, 2004.
[33] R. Rousseau, P. Van Hecke, D. Nijssen, J. Bogaert, The relationship between diversity profiles, evenness and species richness based on partial ordering, Environmental and Ecological Statistics 6 (1999) 211–223.
[34] K. R. Clarke, R. M. Warwick, A taxonomic distinctness index and its statistical properties, Journal of Applied Ecology 35 (1998) 523–531.
[35] C. Sohier, Measurements of biodiversity, 2019.
[36] L. Jost, Partitioning diversity into independent alpha and beta components, Ecology 88 (2007) 2427–2439.
[37] SDR-IV, Species Diversity and Richness 4, 2020.
[38] H. T. Clifford, W. Stephenson, et al., An Introduction to Numerical Classification, volume 240, Academic Press, New York, 1975.
[39] R. H. Whittaker, Evolution and measurement of species diversity, Taxon 21 (1972) 213–251.
[40] R. M. May, M. Cody, J. M. Diamond, Ecology of species and communities (1975).
[41] R. A. Fisher, A. S. Corbet, C. B. Williams, The relation between the number of species and the number of individuals in a random sample of an animal population, The Journal of Animal Ecology (1943) 42–58.
[42] R. A. Kempton, L. R. Taylor, Models and statistics for species diversity, Nature 262 (1976) 818–820.
[43] C. Heip, P. Engels, Comparing species diversity and evenness indices, Journal of the Marine Biological Association of the United Kingdom 54 (1974) 559–563.
[44] S. I. Rogers, K. R. Clarke, J. D. Reynolds, The taxonomic distinctness of coastal bottom-dwelling fish communities of the north-east Atlantic, Journal of Animal Ecology 68 (1999) 769–782.
[45] R. Gibson, M. Barnes, R. Atkinson, Practical measures of marine biodiversity based on relatedness of species, Oceanography and Marine Biology 39 (2001) 207–231.
[46] J. Izsáki, L. Papp, Application of the quadratic entropy indices for diversity studies of drosophilid assemblages, Environmental and Ecological Statistics 2 (1995) 213–224.
[47] S. Pavoine, S. Ollier, A.-B. Dufour, Is the originality of a species measurable?, Ecology Letters 8 (2005) 579–586.
[48] D. P. Faith, Conservation evaluation and phylogenetic diversity, Biological Conservation 61 (1992) 1–10.
[49] M. Vellend, W. K. Cornwell, K. Magnuson-Ford, A. Ø. Mooers, Measuring phylogenetic biodiversity, Biological Diversity: Frontiers in Measurement and Assessment (2011) 194–207.
[50] C. Ricotta, A parametric diversity measure combining the relative abundances and taxonomic distinctiveness of species, Diversity and Distributions 10 (2004) 143–146.
[51] R. I. Vane-Wright, C. J. Humphries, P. H. Williams, What to protect? Systematics and the agony of choice, Biological Conservation 55 (1991) 235–254.
[52] S. A. Frank, J. Bascompte, Invariance in ecological pattern, F1000Research 8 (2019).
[53] A. J. Daly, J. M. Baetens, B. De Baets, Ecological diversity: measuring the unmeasurable, Mathematics 6 (2018) 119.
[54] R. Rousseau, P. Van Hecke, D. Nijssen, J. Bogaert, The relationship between diversity profiles, evenness and species richness based on partial ordering, Environmental and Ecological Statistics 6 (1999) 211–223.
[55] H. Shimadzu, M. Dornelas, P. A. Henderson, A. E. Magurran, Diversity is maintained by seasonal variation in species abundance, BMC Biology 11 (2013) 98.
[56] T. R. Crimmins, Geometric filter for speckle reduction, Applied Optics 24 (1985) 1438–1443.
[57] J. N. Kather, C.-A. Weis, F. Bianconi, S. M. Melchers, L. R. Schad, T. Gaiser, A. Marx, F. G. Zöllner, Multi-class texture analysis in colorectal cancer histology, Scientific Reports 6 (2016) 27988.
[58] F. A. Spanhol, L. S. Oliveira, C. Petitjean, L. Heutte, A dataset for breast cancer histopathological image classification, IEEE Trans on Biomedical Engineering 63 (2016) 1455–1462.
[59] M. J. van der Laan, E. C. Polley, A. E. Hubbard, Super learner, Statistical Applications in Genetics and Molecular Biology 6 (2007).
[60] R. Mehta, K. Egiazarian, Dominant rotated local binary patterns (DRLBP) for texture classification, Pattern Recognition Letters 71 (2016) 16–22.
[61] S. Du, Y. Yan, Y. Ma, Local spiking pattern and its application to rotation- and illumination-invariant texture classification, Optik 127 (2016) 6583–6589.
[62] M. Hazgui, H. Ghazouani, W. Barhoumi, Genetic programming-based fusion of HOG and LBP features for fully automated texture classification, The Visual Computer 37 (2021) 1–20.
[63] T. P. Nguyen, N.-S. Vu, A. Manzanera, Statistical binary patterns for rotational invariant texture classification, Neurocomputing 173 (2016) 1565–1577.
[64] M. G. Ribeiro, L. A. Neves, M. Z. do Nascimento, G. F. Roberto, A. S. Martins, T. A. A. Tosta, Classification of colorectal cancer based on the association of multidimensional and multiresolution features, Expert Systems with Applications 120 (2019) 262–278.
[65] J. N. Kather, C.-A. Weis, F. Bianconi, S. M. Melchers, L. R. Schad, T. Gaiser, A. Marx, F. G. Zöllner, Multi-class texture analysis in colorectal cancer histology, Scientific Reports 6 (2016) 27988.
[66] R. Sarkar, S. T. Acton, SDL: Saliency-based dictionary learning framework for image similarity, IEEE Trans on Image Processing 27 (2017) 749–763.
[67] C. Wang, J. Shi, Q. Zhang, S. Ying, Histopathological image classification with bilinear convolutional neural networks, in: 39th Annual Int'l Conf of the IEEE Engineering in Medicine and Biology Society, 2017, pp. 4050–4053.
[68] T. D. Pham, Scaling of texture in training autoencoders for classification of histological images of colorectal cancer, in: Int'l Symposium on Neural Networks, Springer, 2017, pp. 524–532.
[69] Ł. Rączkowski, M. Możejko, J. Zambonelli, E. Szczurek, ARA: accurate, reliable and active histopathological image classification framework with Bayesian deep learning, Scientific Reports 9 (2019) 1–12.
[70] H. Erfankhah, M. Yazdi, M. Babaie, H. R. Tizhoosh, Heterogeneity-aware local binary patterns for retrieval of histopathology images, IEEE Access 7 (2019) 18354–18367.
[71] M. Z. Alom, C. Yakopcic, M. S. Nasrin, T. M. Taha, V. K. Asari, Breast cancer classification from histopathological images with inception recurrent residual convolutional neural network, Journal of Digital Imaging 32 (2019) 605–617.
[72] Z. Han, B. Wei, Y. Zheng, Y. Yin, K. Li, S. Li, Breast cancer multi-classification from histopathological images with structured deep learning model, Scientific Reports 7 (2017) 1–10.
[73] N. Bayramoglu, J. Kannala, J. Heikkilä, Deep learning for magnification independent breast cancer histopathology image classification, in: 23rd Int'l Conf on Pattern Recognition, 2016, pp. 2440–2445.
[74] F. A. Spanhol, L. S. Oliveira, C. Petitjean, L. Heutte, Breast cancer histopathological image classification using convolutional neural networks, in: Int'l Joint Conf on Neural Networks, 2016, pp. 2560–2567.

Supplementary Material

All the libraries and implementations will be provided upon the acceptance of the paper in the following online public repository: https://github.com/stevetmat/BioInspiredFDesc