Artificial Neural Network Approach for the Identification of Clove Buds Origin Based on Metabolites Composition
Rustam∗
Industrial and Financial Mathematics Research Group, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Indonesia
[email protected]

Agus Y. Gunawan
Industrial and Financial Mathematics Research Group, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Indonesia

Made T. A. P. Kresnowati
Food and Biomass Processing Technology Research Group, Faculty of Industrial Technology, Institut Teknologi Bandung, Indonesia

July 13, 2020

ABSTRACT
This paper examines the use of an artificial neural network approach in identifying the origin of clove buds based on metabolites composition. Generally, large data sets are critical for accurate identification, and machine learning with large data sets leads to precise identification of origins. For clove buds, however, only small data sets are available, owing to the scarcity of metabolites composition measurements and the high cost of extraction. The results show that backpropagation and resilient propagation with one and two hidden layers identify the clove buds origin accurately. Backpropagation with one hidden layer offers 99.91% and 99.47% accuracy for the training and testing data sets, respectively. Resilient propagation with two hidden layers offers 99.96% and 97.89% accuracy for the training and testing data sets, respectively.
Keywords
Artificial neural networks · Backpropagation · Resilient propagation · Clove buds
There is variation in the flavor and aroma of different plantation commodities. For example, in Indonesia, clove buds from Java have a prominent woody aroma and sour flavor, while those from Bali have a sweet-spicy flavor [1]. Arabica coffee from Gayo has a lower acidity and a strong bitterness, whereas coffee from Toraja has a medium browning, tobacco, or caramel flavor, and is not too acidic or bitter. Kintamani coffee from Bali has a fruity, acidic flavor mixed with a fresh flavor, while coffee from Flores has a variety of flavors ranging from chocolate, spicy, tobacco, strong, citrus, flowers and wood. Coffee from Java has a spicy aroma, and that from Wamena has a fragrant aroma and no pulp [2].

These specific flavors and aromas are attributed to the composition of the commodities' metabolites. Generally, a specific metabolite is responsible for a particular flavor and aroma. For this reason, it is vital to recognize the characteristics of each plantation commodity based on metabolite composition. This study investigates the origin of clove buds. This helps to maintain the flavor of a product that uses clove buds as a mixture. Also, the characteristics of food products can be predicted based on the origin of the clove buds used, owing to the differences in flavor and taste between regions [3].

∗ alternative email: [email protected]

Metabolic profiling is a widely used approach for obtaining information about the metabolites contained in a biological sample; it is a quantitative measurement of metabolites from biological samples [4, 5]. To give meaning to the metabolites data sets, chemometrics techniques were developed. Chemometrics is a chemical sub-discipline that uses mathematics, statistics and formal logic to gain knowledge about chemical systems.
It provides maximum relevant information by analysing metabolites data sets from biological samples [6]. Additionally, it is used for pattern recognition of metabolites data sets in complex chemical systems [3]. Pattern recognition in biological samples identifies the specific metabolites or biomarkers that form a particular flavor and aroma.

Artificial neural networks have been widely used in pattern recognition [7] and in other applications in various fields [8, 9, 10, 11, 12, 13]. However, they have not been fully implemented for clove buds. The small data sets available limit the implementation of artificial neural networks for clove buds. This is attributed to the lack of metabolite composition measurements for clove buds and the high cost of extracting them. Furthermore, some clove buds data have zero metabolite concentration, because the laboratory tools are unable to detect metabolites whose concentrations are very small. This study therefore implements artificial neural networks for pattern recognition in clove buds data sets, where each origin of clove buds has specific metabolites as biomarkers.

This study uses the clove buds data sets obtained by Kresnowati et al. [3], who examined clove buds from four origins in Indonesia: Java, Bali, Manado and Toli-Toli. Each origin has three regions, giving twelve regions in total. In the laboratory, eight experiments were carried out for each region, except for Java with only six experiments. In each experiment, 47 types of metabolites were recorded. As a matrix, the data sets are 94 × 47, where the rows and columns represent the number of experiments and metabolites, respectively.
In total, the clove buds data sets have a wide range of values, spanning several orders of magnitude. Therefore, a logarithmic transformation is used to obtain reliable numerical data. Since some metabolites data have zero concentration, because their concentrations lie below the detection threshold, the logarithmic transformation cannot be applied directly. The metabolite data with zero concentration are not removed, because they act as biological markers; instead, they are replaced with a value one order of magnitude smaller than the smallest concentration available. Before implementing artificial neural networks, a preprocessing stage for the clove buds data sets from [14] is added to normalize the values of the metabolites data. Normalization ensures that each data point has the same influence or contribution in determining its origin. The following normalization formula is used [15]:

\[
z_{kl} = \frac{x_{kl} - \bar{x}}{s}, \tag{1}
\]

where $z_{kl}$ is the normalized value of $x_{kl}$, $\bar{x}$ is the mean, and $s$ is the sample standard deviation,

\[
s = \sqrt{\frac{\sum_{k=1}^{n} (x_{kl} - \bar{x})^2}{n-1}}. \tag{2}
\]

Artificial neural networks are a simplified representation of the human brain that simulates the learning process [16]. Backpropagation and resilient propagation are learning algorithms widely used in artificial neural networks [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. In this study, two different network architectures are used for both backpropagation and resilient propagation: the first consists of two hidden layers and the second of one hidden layer.
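As an illustration, the zero-replacement, log transformation and normalization of Equations (1) and (2) can be sketched as follows. This is a minimal sketch assuming numpy; the function name and the choice to normalize each metabolite column are assumptions for illustration, not the authors' code.

```python
import numpy as np

def preprocess(data):
    """Log-transform and normalize a metabolites matrix (sketch of Eqs. (1)-(2)).

    data: (experiments x metabolites) array of concentrations, possibly with zeros.
    """
    data = data.astype(float).copy()
    # Zero concentrations are kept as biomarkers: replace them with a value
    # one order of magnitude smaller than the smallest nonzero concentration.
    smallest = data[data > 0].min()
    data[data == 0] = smallest / 10.0
    logged = np.log10(data)
    # Eq. (1)-(2): subtract the mean and divide by the sample standard
    # deviation (ddof=1), here applied per metabolite column.
    return (logged - logged.mean(axis=0)) / logged.std(axis=0, ddof=1)
```

After this step each normalized column has zero mean and unit sample standard deviation, so every metabolite contributes comparably to the identification.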
The backpropagation learning algorithm is based on the repeated use of the chain rule to calculate the effect of each weight in the network on the error function $E$ [28]:

\[
\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial o_i}\,\frac{\partial o_i}{\partial net_i}\,\frac{\partial net_i}{\partial w_{ij}}, \tag{3}
\]

where $w_{ij}$ is the weight from the $j$-th neuron to the $i$-th neuron, $o_i$ is the output, and $net_i$ is the weighted sum of the inputs of neuron $i$. Once the partial derivative for each weight is known, the goal of minimizing the error function is achieved by gradient descent [28]:

\[
w_{ij}^{(t+1)} = w_{ij}^{(t)} - \epsilon \frac{\partial E}{\partial w_{ij}}^{(t)}, \tag{4}
\]

where $t$ is the iteration and $0 < \epsilon < 1$ is the learning rate. From Equation (4), choosing a large learning rate (close to 1) allows oscillations, which make the error stay above the specified tolerance value and lessen the identification accuracy. Conversely, if the learning rate $\epsilon$ is too small (close to 0), many steps are needed for the error function $E$ to converge. To avoid this, the backpropagation learning algorithm is extended by adding a momentum parameter $0 < \alpha < 1$, as shown in Equation (5). The momentum parameter also accelerates the convergence of the error function [28]:

\[
\Delta w_{ij}^{(t+1)} = -\epsilon \frac{\partial E}{\partial w_{ij}}^{(t)} + \alpha\, \Delta w_{ij}^{(t-1)}, \tag{5}
\]

where $\alpha$ measures the effect of the previous step on the current one.

To activate neurons in the hidden and output layers, the sigmoid activation function is used. Three essential properties required by backpropagation and resilient propagation are that it is bounded, monotonic and continuously differentiable. It converts the weighted sum of inputs into an output signal for each neuron $i$, as shown by Equation (6) [29]:

\[
O_i = f(I_i) = \frac{1}{1 + e^{-\sigma I_i}}, \tag{6}
\]

where $I_i$ is the weighted input sum of the $i$-th neuron, $\sigma$ the slope parameter of the sigmoid activation function, and $O_i$ the output of the $i$-th neuron. The threshold used on the output layer for the sigmoid activation function is

\[
O_i = \begin{cases} 1 & \text{if } O_i \ge 0.5, \\ 0 & \text{if } O_i < 0.5. \end{cases} \tag{7}
\]

The weighted input sum is given by the following equation [29]:

\[
I_j = \sum_{i=1}^{n} w_{ij} O_i + w_{Bj} O_B. \tag{8}
\]

The sum over $i$ represents the input received from all neurons in the input layer, while $B$ denotes the bias neuron. The weight $w_{ij}$ is associated with the connection from the $i$-th neuron to the $j$-th neuron, while $w_{Bj}$ relates to the connection from the bias neuron to the $j$-th neuron. The weighted sums obtained in the hidden and output layers are activated by substituting the weighted sum from Equation (8) as the exponent in Equation (6).

Riedmiller et al. [28] proposed the resilient propagation learning algorithm as a development of the backpropagation algorithm. The algorithm directly adapts the weight values based on local gradient information. Riedmiller et al. [28] introduced an update value $\Delta_{ij}$ for each weight, which determines the size of the weight update. This adaptive update value evolves during the learning process based on its local sight on the error function $E$, according to the following learning rule [28]:

\[
\Delta_{ij}^{(t)} = \begin{cases}
\eta^{+} \cdot \Delta_{ij}^{(t-1)}, & \text{if } \frac{\partial E}{\partial w_{ij}}^{(t-1)} \cdot \frac{\partial E}{\partial w_{ij}}^{(t)} > 0, \\
\eta^{-} \cdot \Delta_{ij}^{(t-1)}, & \text{if } \frac{\partial E}{\partial w_{ij}}^{(t-1)} \cdot \frac{\partial E}{\partial w_{ij}}^{(t)} < 0, \\
\Delta_{ij}^{(t-1)}, & \text{else},
\end{cases} \tag{9}
\]

where $0 < \eta^{-} < 1 < \eta^{+}$, with $\eta^{-}$ and $\eta^{+}$ the decrease and increase factors, respectively. According to this adaptation rule, every time the partial derivative of the corresponding weight $w_{ij}$ changes its sign, which indicates that the last update was too big and the algorithm has jumped over a local minimum, the update value $\Delta_{ij}$ is decreased by the factor $\eta^{-}$. If the derivative retains its sign, the update value is slightly increased to accelerate convergence in shallow regions [28].

Once the update value for each weight is adapted, the weight update itself follows the rule that if the derivative is positive, the weight is decreased by its update value.
If the derivative is negative, the update value is added:

\[
\Delta w_{ij}^{(t)} = \begin{cases}
-\Delta_{ij}^{(t)}, & \text{if } \frac{\partial E}{\partial w_{ij}}^{(t)} > 0, \\
+\Delta_{ij}^{(t)}, & \text{if } \frac{\partial E}{\partial w_{ij}}^{(t)} < 0, \\
0, & \text{else},
\end{cases} \tag{10}
\]

\[
w_{ij}^{(t+1)} = w_{ij}^{(t)} + \Delta w_{ij}^{(t)}. \tag{11}
\]

However, if the partial derivative changes sign, which means the previous step was too large and the minimum was missed, the previous weight update is reverted:

\[
\Delta w_{ij}^{(t)} = -\Delta w_{ij}^{(t-1)}, \quad \text{if } \frac{\partial E}{\partial w_{ij}}^{(t-1)} \cdot \frac{\partial E}{\partial w_{ij}}^{(t)} < 0. \tag{12}
\]

Due to this 'backtracking' weight step, the derivative is expected to change its sign once again in the following step. To avoid a second adaptation, there should be no update of the adaptive value in the succeeding step. In practice, this is done by setting $\frac{\partial E}{\partial w_{ij}}^{(t-1)} = 0$ in the $\Delta_{ij}$ adaptation rule. The update values and weights are changed every time the whole set of patterns has been presented once to the network (learning by epoch).

The following pseudocode shows the adaptation and learning process of resilient propagation. The minimum (maximum) operator returns the minimum (maximum) of two numbers; the sign operator returns +1 if the argument is positive, -1 if it is negative, and 0 otherwise.

    For each weight and bias {
        if (dE/dw_ij(t-1) * dE/dw_ij(t) > 0) then {
            D_ij(t)  = minimum(D_ij(t-1) * eta+, D_max)
            dw_ij(t) = -sign(dE/dw_ij(t)) * D_ij(t)
            w_ij(t+1) = w_ij(t) + dw_ij(t)
        }
        else if (dE/dw_ij(t-1) * dE/dw_ij(t) < 0) then {
            D_ij(t)  = maximum(D_ij(t-1) * eta-, D_min)
            w_ij(t+1) = w_ij(t) - dw_ij(t-1)
            dE/dw_ij(t) = 0
        }
        else if (dE/dw_ij(t-1) * dE/dw_ij(t) = 0) then {
            dw_ij(t) = -sign(dE/dw_ij(t)) * D_ij(t)
            w_ij(t+1) = w_ij(t) + dw_ij(t)
        }
    }                                                      (13)

Table 1: Backpropagation with two hidden layers

    Network        MSE                  Accuracy (%)         R²
    Architecture   Training   Testing   Training   Testing   Training   Testing
    47-3-5-4       0.10346    0.11357   76.98      73.68     0.81       0.76
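As an illustration, the resilient propagation adaptation rule described above (Equations (9)-(13)) can be sketched in Python. This is a minimal sketch assuming numpy, with hypothetical names, not the authors' implementation; the parameter defaults follow the values recommended in [28].

```python
import numpy as np

def rprop_step(w, grad, prev_grad, delta, prev_dw,
               eta_minus=0.5, eta_plus=1.2, delta_min=1e-6, delta_max=50.0):
    """One resilient propagation update for an array of weights (rule (13)).

    All array arguments share the same shape. Returns (w, delta, dw, grad),
    where grad is zeroed for backtracked entries, as in the adaptation rule.
    """
    sign_change = prev_grad * grad
    dw = np.zeros_like(w)

    pos = sign_change > 0                 # same sign: grow the update value
    delta[pos] = np.minimum(delta[pos] * eta_plus, delta_max)
    dw[pos] = -np.sign(grad[pos]) * delta[pos]
    w[pos] += dw[pos]

    neg = sign_change < 0                 # sign flip: shrink and backtrack
    delta[neg] = np.maximum(delta[neg] * eta_minus, delta_min)
    w[neg] -= prev_dw[neg]                # revert the previous weight step
    grad[neg] = 0.0                       # suppress adaptation next step

    zero = sign_change == 0               # first step or after backtracking
    dw[zero] = -np.sign(grad[zero]) * delta[zero]
    w[zero] += dw[zero]

    return w, delta, dw, grad
```

Note that only the sign of the gradient is used; the step sizes are carried entirely by the per-weight update values, which is what makes the method robust to the scale of the error surface.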
In this study, the percentages of training and testing data sets are 80% and 20%, respectively. The metabolites data sets in matrix form are 94 × 47. Out of 94 rows, 75 were chosen randomly as the training data sets, while the remaining rows were used for testing. The random selection of training data sets was carried out 30 times. Therefore, for each network architecture there are 30 values of the identification accuracy percentage, the coefficient of determination and the mean squared error (MSE), and the average is chosen as the representative of the 30 values. For each network architecture, a learning rate ($\epsilon$) of 0.9, a momentum parameter ($\alpha$) of 0.1 and a maximum of 5000 epochs are used, together with a small error target. In this study, each origin is represented by a binary code: the binary code for the Java origin is 1000, Bali 0100, Manado 0010 and Toli-Toli 0001. The identification accuracy and the MSE are calculated as shown in Equations (14) and (15):

\[
\%\,\text{accuracy} = \frac{a}{k} \times 100, \tag{14}
\]

where $a$ is the number of origins identified correctly and $k$ is the total number of origins. The MSE is calculated by the following equation [29]:

\[
MSE = \frac{1}{m \cdot n} \sum_{p=1}^{m} \sum_{k=1}^{n} (T_{kp} - O_{kp})^2, \tag{15}
\]

where $T_{kp}$ is the desired target, $O_{kp}$ the network output and $p$ the index corresponding to the number of origins. The suitability between the desired target and the network output was evaluated based on the coefficient of determination $R^2$, calculated using the following equation [21]:

\[
R^2 = 1 - \frac{\sum_{k=1}^{n} (T_{kp} - O_{kp})^2}{\sum_{k=1}^{n} (T_{kp} - \bar{T}_{kp})^2}, \tag{16}
\]

where $\bar{T}_{kp}$ is the average desired target.

In this study, backpropagation and resilient propagation were each used with two network architectures, consisting of two and one hidden layers. For one hidden layer, the number of neurons was determined using the formula proposed by Shibata and Ikeda in 2009 [30], namely $N_h = \sqrt{N_i \cdot N_o}$, where $N_h$, $N_i$ and $N_o$ represent the numbers of hidden, input and output neurons, respectively. Based on the Shibata and Ikeda [30] formula, the number of neurons in one hidden layer is $N_h = \sqrt{N_i \cdot N_o} = \sqrt{47 \cdot 4} \approx 13.7$, which in this study was rounded up to 15 neurons. Experiments were conducted to evaluate whether one hidden layer with 15 neurons might lead to a better identification accuracy than two hidden layers. For two hidden layers, the numbers of neurons were varied while kept below 15 in total; experiments were conducted with the following numbers of consecutive neurons: 3-5 (8), 4-6 (10), 5-7 (12) and 6-8 (14).

3.1 Backpropagation with two hidden layers

In this section, the backpropagation learning algorithm with two hidden layers was used. The number of neurons in the hidden layers was varied, with no more than 15 neurons in total. There were four variations of the network architecture: 47-3-5-4, 47-4-6-4, 47-5-7-4 and 47-6-8-4.
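As an illustration, the evaluation metrics of Equations (14)-(16) can be sketched as follows. This is a minimal sketch assuming numpy, with hypothetical function names; it uses the standard form of $R^2$ and thresholds the outputs at 0.5 before comparing binary origin codes, as in Equation (7).

```python
import numpy as np

def identification_accuracy(T, O):
    # Eq. (7)/(14): threshold outputs at 0.5 and count exactly matching codes.
    predicted = (O >= 0.5).astype(int)
    correct = np.all(predicted == T.astype(int), axis=1).sum()
    return 100.0 * correct / len(T)

def mse(T, O):
    # Eq. (15): mean squared error over all experiments and output neurons.
    m, n = T.shape
    return np.sum((T - O) ** 2) / (m * n)

def r_squared(T, O):
    # Eq. (16): coefficient of determination between target and output.
    ss_res = np.sum((T - O) ** 2)
    ss_tot = np.sum((T - T.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Here `T` holds the binary origin codes (Java 1000, Bali 0100, Manado 0010, Toli-Toli 0001) and `O` the raw network outputs, one experiment per row.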
The input layer consists of 47 neurons, based on the number of metabolites. The output layer consists of 4 neurons, according to the number of clove buds origins.

Table 1 shows the network architecture that gives the highest identification accuracy and coefficient of determination for the training and testing data sets; this architecture likewise provides the smallest MSE for both the training and the testing data sets. From Table 1, increasing the number of neurons in the backpropagation network with two hidden layers does not necessarily improve the identification accuracy.

Table 2: Backpropagation with one hidden layer
    Network        MSE                  Accuracy (%)         R²
    Architecture   Training   Testing   Training   Testing   Training   Testing
Table 3: Resilient propagation with two hidden layers

    Network        MSE                  Accuracy (%)         R²
    Architecture   Training   Testing   Training   Testing   Training   Testing
3.2 Backpropagation with one hidden layer

The backpropagation learning algorithm with one hidden layer was implemented to compare its results with those of two hidden layers. The results obtained are shown in Table 2, which shows that the network architecture 47-15-4 identifies the clove buds origin effectively. The identification accuracy is 99.91% and 99.47% for the training and testing data sets, respectively. Besides, the MSE value is also smaller than with two hidden layers.

For the backpropagation algorithm, the results show that one hidden layer is better than two. This is in line with De Villiers and Barnard [32], who stated that network architectures with one hidden layer are on average better than those with two hidden layers. They concluded that networks with two hidden layers are more difficult to train, and established that this behaviour is caused by a local minimum problem: networks with two hidden layers are more prone to local minima during training.
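As an illustration of the backpropagation training described by Equations (3)-(8), a one-hidden-layer network with the sigmoid activation and the momentum update of Equation (5) can be sketched as follows. This is a minimal sketch assuming numpy; the class and variable names are hypothetical, and it is not the authors' implementation.

```python
import numpy as np

def sigmoid(x, slope=1.0):
    # Eq. (6): bounded, monotonic and continuously differentiable.
    return 1.0 / (1.0 + np.exp(-slope * x))

class TinyBackpropNet:
    """One-hidden-layer network trained by gradient descent with momentum."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.9, momentum=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr, self.momentum = lr, momentum
        # Previous weight steps for the momentum term of Eq. (5).
        self.vW1 = np.zeros_like(self.W1); self.vb1 = np.zeros_like(self.b1)
        self.vW2 = np.zeros_like(self.W2); self.vb2 = np.zeros_like(self.b2)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)   # Eq. (8), then Eq. (6)
        self.o = sigmoid(self.h @ self.W2 + self.b2)
        return self.o

    def train_step(self, X, T):
        O = self.forward(X)
        # Chain rule of Eq. (3) for squared error E = 0.5 * sum((T - O)^2).
        delta_o = (O - T) * O * (1.0 - O)
        delta_h = (delta_o @ self.W2.T) * self.h * (1.0 - self.h)
        grads = [X.T @ delta_h, delta_h.sum(0), self.h.T @ delta_o, delta_o.sum(0)]
        params = [self.W1, self.b1, self.W2, self.b2]
        vels = [self.vW1, self.vb1, self.vW2, self.vb2]
        for p, g, v in zip(params, grads, vels):
            v *= self.momentum      # Eq. (5): alpha times the previous step
            v -= self.lr * g        # minus epsilon times dE/dw
            p += v
        return 0.5 * np.sum((T - O) ** 2)
```

Trained outputs would then be thresholded at 0.5 per Equation (7) before being compared with the binary origin codes.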
3.3 Resilient propagation with two hidden layers

The resilient propagation learning algorithm contains several parameters: the upper and lower limits of the update values, and the decrease and increase factors. In this study, the range of the update values is limited by the upper limit $\Delta_{max} = 50$ and the lower limit $\Delta_{min} = 10^{-6}$, with the decrease and increase factors $\eta^{-} = 0.5$ and $\eta^{+} = 1.2$, respectively. The reasons for choosing these values are given in [28]. Similar to Section 3.1, the resilient propagation learning algorithm is applied to network architectures with two hidden layers. The numbers of neurons vary but do not exceed 15 in total. In this section, there are four variations of the network architecture: 47-3-5-4, 47-4-6-4, 47-5-7-4 and 47-6-8-4. The results in Table 3 show the network architecture that gives the highest identification accuracy of the clove buds origin: 99.96% and 97.89% for the training and testing data sets, respectively.

3.4 Resilient propagation with one hidden layer

In this section, the resilient propagation learning algorithm is implemented with one hidden layer. Similar to Section 3.2, the number of neurons in the hidden layer is 15, giving the network architecture 47-15-4. Table 4 shows that the network architecture 47-15-4 identifies the origin of clove buds with an identification accuracy of 99.86% and 94.74% for the training and testing data sets, respectively.

Table 4: Resilient propagation with one hidden layer
    Network        MSE                  Accuracy (%)         R²
    Architecture   Training   Testing   Training   Testing   Training   Testing
The MSE for the training and testing data sets are shown in Figures 5 and 6, respectively.

Figure 1: Identification accuracy percentage of training data sets.
Figure 2: Identification accuracy percentage of testing data sets.

The results of identifying the origins of clove buds have been obtained. In the small data set category, backpropagation with one hidden layer provides accurate identification for the training and testing data sets. Accurate identification of the clove buds origins is likewise obtained using the resilient propagation algorithm with two hidden layers.

The neural network models obtained in this paper can serve as a scientific reference. For instance, they can be used in future studies to identify the origin of other plantation commodities with small metabolites data sets. At the moment, the most common way of determining the origin of a plantation commodity is qualitative, relying on the services of a flavorist to evaluate flavor and taste, since each commodity has a specific flavor and taste depending on its region of origin. Furthermore, identification results for different origins of clove buds data sets have not been reported in the literature, and thus no direct comparison can be presented in this paper.

Figure 3: Determination coefficient of training data sets.
Figure 4: Determination coefficient of testing data sets.
Figure 5: MSE of training data sets.
Figure 6: MSE of testing data sets.
This paper demonstrated the potential and ability of a neural network approach with backpropagation and resilient propagation learning algorithms to identify the clove buds origin based on metabolites composition. The work was divided into two parts. The first was the identification of the clove buds origin using the backpropagation learning algorithm, with two network architectures containing one and two hidden layers, respectively. The results showed that the use of one hidden layer identifies the clove buds origin accurately, specifically 99.91% and 99.47% for the training and testing data sets. The second part was the identification of the clove buds origin using the resilient propagation learning algorithm, again with two network architectures containing one and two hidden layers. The results showed that the use of two hidden layers gives accurate identification of the clove buds origin, namely 99.96% and 97.89% for the training and testing data sets. From these results, it was concluded that for the identification of a small metabolites data set from a plantation commodity, the backpropagation algorithm with one hidden layer or the resilient propagation algorithm with two hidden layers should be used. This paper also confirmed the contribution of artificial neural networks to pattern recognition of metabolites data sets obtained by the metabolic profiling technique.
References

[1] L. Broto. Derivatisasi minyak cengkeh, dalam Cengkeh: Sejarah, budidaya dan industri. 2014 (in Indonesian).
[2] Coffeeland 2010. Jenis kopi arabika terbaik dari berbagai daerah di Indonesia (in Indonesian).
[3] M T A P Kresnowati, R Purwadi, M Zunita, R Sudarman, and A O Putri. Metabolite profiling of four origins Indonesian clove buds using multivariate analysis. Report Research Collaboration PT. HM Sampoerna Tbk. and Institut Teknologi Bandung (confidential report), 2018.
[4] Joachim Kopka, Alisdair Fernie, Wolfram Weckwerth, Yves Gibon, and Mark Stitt. Metabolite profiling in plant biology: platforms and destinations.
Genome Biology, 5(6):109, 2004.
[5] Sastia Prama Putri and Eiichiro Fukusaki. Mass spectrometry-based metabolomics: a practical guide. CRC Press, 2016.
[6] Desiré Luc Massart, BGM Vandeginste, SN Deming, Y Michotte, and L Kaufman. Chemometrics: a textbook. 1988.
[7] Cornelius T. Leondes. Image processing and pattern recognition, 1998.
[8] Benbakreti Samir and Aoued Boukelif. New approach for online Arabic manuscript recognition by deep belief network.
Acta Polytechnica, 58(5), 2018.
[9] A Noriega Ponce, A Aguado Behar, A Ordaz Hernández, and V Rauch Sitar. Neural networks for self-tuning control systems. Acta Polytechnica, 44(1), 2004.
[10] M Chvalina. Demand modelling in telecommunications: comparison of standard statistical methods and approaches based upon artificial intelligence methods including neural networks. Acta Polytechnica, 49(2):48–52, 2009.
[11] Ivo Bukovsky and Michal Kolovratnik. A neural network model for predicting NOx at the Melnik 1 coal-powder power plant.
Acta Polytechnica, 52(3):17–22, 2012.
[12] P Kutilek and S Viteckova. Prediction of lower extremity movement by cyclograms. Acta Polytechnica, 52(1), 2012.
[13] D Novák and D Lehký. Neural network based identification of material model parameters to capture experimental load-deflection curve. Acta Polytechnica, 44(5-6), 2004.
[14] Rustam, Agus Y Gunawan, and Made Tri Ari P Kresnowati. The hard c-means algorithm for clustering Indonesian plantation commodity based on metabolites composition. In Journal of Physics: Conference Series, volume 1315, page 012085. IOP Publishing, 2019.
[15] Tanja Beltramo, Michael Klocke, and Bernd Hitzmann. Prediction of the biogas production using GA and ACO input features selection method for ANN model. Information Processing in Agriculture, 2019.
[16] Laurene Fausett.
Fundamentals of neural networks: architectures, algorithms, and applications. Prentice-Hall, Inc., 1994.
[17] Igor Aizenberg and Claudio Moraga. Multilayer feedforward neural network based on multi-valued neurons (MLMVN) and a backpropagation learning algorithm. Soft Computing, 11(2):169–183, 2007.
[18] Erik M Johansson, Farid U Dowla, and Dennis M Goodman. Backpropagation learning for multilayer feed-forward neural networks using the conjugate gradient method. International Journal of Neural Systems, 2(04):291–301, 1991.
[19] T Todd Pleune and Omesh K Chopra. Using artificial neural networks to predict the fatigue life of carbon and low-alloy steels. Nuclear Engineering and Design, 197(1-2):1–12, 2000.
[20] Klaus LE Kaiser, Stefan P Niculescu, and Gerrit Schüürmann. Feed forward backpropagation neural networks and their use in predicting the acute toxicity of chemicals to the fathead minnow. Water Quality Research Journal, 32(3):637–658, 1997.
[21] Eddy El Tabach, Leyla Adishirinli, Nicolas Gascoin, and Guillaume Fau. Prediction of transient chemistry effect during fuel pyrolysis on the pressure drop through porous material using artificial neural networks. Journal of Analytical and Applied Pyrolysis, 115:143–148, 2015.
[22] RA Chayjan and M Esna-Ashari. Isosteric heat and entropy modeling of pistachio cultivars using neural network approach.
Journal of Food Processing and Preservation, 35(4):524–532, 2011.
[23] Aristoklis D Anastasiadis, George D Magoulas, and Michael N Vrahatis. New globally convergent training scheme based on the resilient propagation algorithm. Neurocomputing, 64:253–270, 2005.
[24] Apurba Kumar Santra, Niladri Chakraborty, and Swarnendu Sen. Prediction of heat transfer due to presence of copper–water nanofluid using resilient-propagation neural network. International Journal of Thermal Sciences, 48(7):1311–1318, 2009.
[25] Lalit M Patnaik and K Rajan. Target detection through image processing and resilient propagation algorithms. Neurocomputing, 35(1-4):123–135, 2000.
[26] Dominik Fisch and Bernhard Sick. Training of radial basis function classifiers with resilient propagation and variational Bayesian inference. In , pages 838–847. IEEE, 2009.
[27] MD Shiblee, B Chandra, and Prem Kumar Kalra. Learning of geometric mean neuron model using resilient propagation algorithm. Expert Systems with Applications, 37(12):7449–7455, 2010.
[28] Martin Riedmiller and Heinrich Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks, volume 1993, pages 586–591. San Francisco, 1993.
[29] Phiroz Bhagat. Pattern recognition in industry. Elsevier, 2005.
[30] Katsunari Shibata and Yusuke Ikeda. Effect of number of hidden neurons on learning in large-scale layered neural networks. In , pages 5008–5013. IEEE, 2009.
[31] Imran Shafi, Jamil Ahmad, Syed Ismail Shah, and Faisal M Kashif. Impact of varying neurons and hidden layers in neural network architecture for a time frequency application. In , pages 188–193. IEEE, 2006.
[32] Jacques De Villiers and Etienne Barnard. Backpropagation neural nets with one and two hidden layers.