A new belief Markov chain model and its application in inventory prediction
Zichang He a, Wen Jiang a,∗

a School of Electronics and Information, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, China
Abstract
The Markov chain model is widely applied in many fields, especially the field of prediction. The classical discrete-time Markov chain (DTMC) is a widely used method for prediction. However, the classical DTMC model has some limitations when the system is complex with uncertain information or when the state space is not discrete. To address this, a new belief Markov chain model is proposed by combining Dempster-Shafer evidence theory with the Markov chain. In our model, uncertain data are allowed to be handled in the form of interval numbers, and the basic probability assignment (BPA) is generated based on the distance between interval numbers. The new belief Markov chain model overcomes the shortcomings of the classical Markov chain and is efficient in dealing with uncertain information. Moreover, an example of inventory prediction and a comparison between our model and the classical DTMC model show the effectiveness and rationality of the proposed model.

∗ Corresponding author at: School of Electronics and Information, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China. Tel: (86-29)88431267. E-mail addresses: [email protected], [email protected]

Preprint submitted to Elsevier, March 7, 2017
Keywords:
Markov chain model; Dempster-Shafer evidence theory; New belief Markov chain; Interval number; Inventory prediction
1. Introduction
A Markov process can be used to model a random system that changes states according to a transition rule that depends only on the current state. The Markov property is the most important property of a Markov process: the conditional probability distribution of the future states of the process depends only upon the present state [23, 32, 34]. A Markov chain is a stochastic process with the Markov property [11]. It is widely used in many applications [1, 2, 19, 43], especially in prediction [40, 48], such as rainfall prediction [21, 39], economic prediction [29, 41] and so on.

The Markov chain is a powerful tool for studying sequential data. It provides a family of models to analyze time series data and predict the variation tendencies of random processes, including the discrete-time Markov chain (DTMC) [46], the continuous-time Markov chain [5, 54], the hidden Markov chain [35, 47], etc. Among these, the DTMC model, having the properties of both simplicity and effectiveness, is widely applied in realistic programs [3, 36]. The classical DTMC does a great job in prediction when the discrete states are easy to distinguish. However, uncertainty and vagueness inevitably exist in the real world. The application of the DTMC model is limited when the states are not discrete or the realistic states are uncertain; for example, the states cannot be determined from the known data, or the collected data may not be crisp. Fuzzy mathematics is a great tool to handle uncertainty [10, 26, 59]. To address this, some modified models have been proposed, such as the fuzzy Markov chain [4, 9] and Markov chains based on fuzzy states [12, 33]. These models introduce a certain membership function between the state distribution and the state description. However, considering the complexity of realistic programs, such a function may be hard to build. Moreover, the evidential Markov chain [51, 52] and some generalized models [14, 60] have also been proposed.

In this paper, a new belief Markov chain is proposed.
The new model uses the basic probability assignment (BPA) to describe the uncertainty of states, as Dempster-Shafer theory [49, 56] is an efficient tool to deal with uncertainty [6, 8, 50, 57]. Dempster-Shafer fusion can also be modelled in Markov fields [7, 45]. The interval number is a simple and efficient tool to handle uncertain data [18, 25]. Considering the properties of interval numbers, the BPA in our model is generated based on interval numbers; thanks to this, uncertain data can be represented in the form of interval numbers. An application in inventory control shows that our proposed model can represent and handle uncertainty effectively. The prediction result also agrees with the practical situation, which supports the correctness of our model.

The rest of this paper is organized as follows. The preliminaries of the basic theories employed are briefly presented in Section 2, and the shortcoming of the DTMC model is illustrated in Section 3. Then our new belief Markov chain model is described in Section 4. Section 5 uses a numerical example of inventory prediction to show the efficiency of our model. Finally, the paper is concluded in Section 6.
2. Preliminaries
In this section, some preliminaries, including the DTMC, Dempster-Shafer theory, the pignistic probability transformation (PPT) and interval numbers, are briefly introduced.
Definition 2.1. Let {X_n : n ≥ 0} be a random sequence defined on the probability space (Ω, F, P). P represents a probability measure, a function from the set F to the field of real numbers R; every event in F is given a probability value between 0 and 1 by the function P. For arbitrary n ∈ N+ and states i_0, i_1, . . ., whenever P{X_n = i_n, X_{n−1} = i_{n−1}, . . . , X_0 = i_0} > 0, if

    P{X_{n+1} = i_{n+1} | X_n = i_n, . . . , X_0 = i_0} = P{X_{n+1} = i_{n+1} | X_n = i_n},   (1)

then the random sequence {X_n : n ≥ 0} is called a Markov chain. Eq.(1) is called the Markov property.

Definition 2.2.
A Markov chain {X_n : n ≥ 0} is homogeneous if it meets the following condition: for arbitrary m, n and states i, j with P{X_n = i} > 0 and P{X_m = i} > 0,

    P{X_{n+1} = j | X_n = i} = P{X_{m+1} = j | X_m = i}.   (2)

Definition 2.3. For a homogeneous Markov chain, the conditional probability

    P_ij(m, m + n) = P{X_{m+n} = j | X_m = i}   (3)

is called the transition probability from the condition that the chain is in state i at moment m to the condition that it is in state j at moment m + n. The matrix composed of the transition probabilities is called the transition probability matrix.

The following are some important properties of Markov chains. Let E denote the state space and P^(k)_ij denote the probability of transferring from state i to state j in k steps. The k-step transition probabilities of a homogeneous Markov chain have the following properties:

    P^(k)_ij ≥ 0,   for all i, j ∈ E, k ≥ 1;   (4)
    Σ_{j∈E} P^(k)_ij = 1,   for all i ∈ E, k ≥ 1;   (5)
    P^(m+k)_ij = Σ_{r∈E} P^(m)_ir · P^(k)_rj,   for all i, j ∈ E and m, k ≥ 1.   (6)

Though evidence theory has some open issues, such as conflict management [16, 58], dependent evidence combination [53] and the determination of the basic probability assignment [28], it has wide applications in fault diagnosis [27], supply chain management [13, 17], decision making [15, 22, 31, 38] and risk evaluation, which matters a lot in reality [20, 24, 30], owing to its efficiency in modelling and fusing uncertain information. The following are some basic concepts of D-S evidence theory.

Definition 2.4.
Let U denote a finite set composed of all the possible values of the random variable X. The elements of U are mutually exclusive, and U is called the frame of discernment. Let 2^U denote the power set of U, each element of which corresponds to a subset of the values of X.

Definition 2.5.
Let U denote the frame of discernment. Given an arbitrary proposition (subset) A of U, a mass function called the basic probability assignment (BPA) is a mapping m: 2^U → [0, 1] satisfying the following conditions:

    Σ_{A⊆U} m(A) = 1   (7)

and

    m(∅) = 0,   (8)

where m(A) reflects the evidence's degree of support for the proposition A. A is called a focal element if m(A) > 0.

Since evidence theory assigns probability to all the subsets of the frame of discernment, the BPA usually assigns mass to multi-element subsets rather than only to singleton subsets; this is how it reflects the uncertainty of the real world. However, decisions are hard to make using a BPA directly. The BPA is therefore usually converted to a probability, and the decision is then made based on that probability. The pignistic probability transformation (PPT) is a classical method to achieve this by averaging the BPA of a multi-element set over its singleton sets.
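The averaging step just described can be sketched as follows (the frame {L, M, H} and the mass values here are illustrative, not taken from the paper; BPAs are represented as dicts keyed by frozensets):

```python
# Sketch of the pignistic probability transformation: each focal element's
# mass is split equally among the singletons it contains.

def pignistic(bpa):
    """bpa: dict mapping frozenset propositions to masses summing to 1."""
    bet = {}
    for prop, mass in bpa.items():
        for x in prop:
            bet[x] = bet.get(x, 0.0) + mass / len(prop)
    return bet

# Illustrative BPA on the frame {L, M, H}:
m = {frozenset({'L'}): 0.2,
     frozenset({'L', 'M'}): 0.3,
     frozenset({'M', 'H'}): 0.5}

bet = pignistic(m)
# BetP(L) = 0.2 + 0.3/2 = 0.35, BetP(M) = 0.3/2 + 0.5/2 = 0.40, BetP(H) = 0.25
assert abs(sum(bet.values()) - 1.0) < 1e-9
```

The transformation always yields a genuine probability distribution over the singletons, which is what makes the final decision step possible.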
Definition 2.6.
Let U denote the frame of discernment and m be a BPA on U. The pignistic probability transformation function BetP: U → [0, 1] is defined as

    BetP(x) = Σ_{x∈A, A⊆U} m(A) / |A|,   (9)

where |A| is the cardinality of the proposition A.

2.4. Interval number

Definition 2.7.
An interval number ã is defined as ã = [a−, a+] = {x | a− ≤ x ≤ a+}, where a− and a+ are the lower limiting value and the upper limiting value, respectively. In particular, the interval number ã degenerates into a real number when a− = a+.

Definition 2.8.
Let A = [a1, a2] and B = [b1, b2] be two interval numbers. The square of the distance between the two interval numbers, D(A, B), is calculated by [55]:

    D(A, B) = ∫_{−1/2}^{1/2} { [ (a1 + a2)/2 + x(a2 − a1) ] − [ (b1 + b2)/2 + x(b2 − b1) ] }² dx
            = [ (a1 + a2)/2 − (b1 + b2)/2 ]² + (1/12) [ (a2 − a1) − (b2 − b1) ]².   (10)
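The closed form of Eq.(10) follows from evaluating the integral (the 1/12 coefficient comes from ∫ x² dx over [−1/2, 1/2]); a quick numerical sanity check, written as a sketch:

```python
# Check of Eq. (10): D(A, B) is the square of the interval distance, obtained
# by integrating the squared difference between the parameterizations
# (a1+a2)/2 + x(a2-a1) of the two intervals over x in [-1/2, 1/2].

def interval_dist(a, b):
    """Closed form of the integral: a midpoint term plus a width term."""
    (a1, a2), (b1, b2) = a, b
    mid = (a1 + a2) / 2 - (b1 + b2) / 2
    width = (a2 - a1) - (b2 - b1)
    return mid ** 2 + width ** 2 / 12

def interval_dist_numeric(a, b, steps=100_000):
    """Midpoint-rule integration, used only to sanity-check the closed form."""
    (a1, a2), (b1, b2) = a, b
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = -0.5 + (k + 0.5) * h
        fa = (a1 + a2) / 2 + x * (a2 - a1)
        fb = (b1 + b2) / 2 + x * (b2 - b1)
        total += (fa - fb) ** 2 * h
    return total

A, B = (0.0, 5.0), (3.0, 6.0)
assert abs(interval_dist(A, B) - interval_dist_numeric(A, B)) < 1e-6
# A degenerate interval behaves like a real number (Definition 2.7):
assert interval_dist((2.0, 2.0), (2.0, 2.0)) == 0.0
```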
3. The shortcoming of DTMC model
In this section, a realistic example of inventory prediction shows the detailed application of the DTMC model and its main shortcoming.
Example 3.1.
Table 1 shows one company's statistics of the inventory demand for Product E15 in 20 consecutive periods. Relying on these data, the 21st period's inventory can be anticipated by applying the traditional DTMC model.

Table 1: Inventory demand for Product E15 in 20 consecutive periods

Time (period)        1    2    3    4    5    6    7    8    9    10
Inventory (package)  143  152  161  139  137  174  142  141  162  180
Time (period)        11   12   13   14   15   16   17   18   19   20
Inventory (package)  164  171  206  193  207  218  229  225  204  200

Generally, the most significant step of prediction is to build a transition probability matrix. Then, according to the original state probability assignment, the probability of transferring to each state in the next period can be calculated; a larger transition probability means the system is more likely to be in that state. Consequently, the state of the next period can be predicted.

First of all, the state space of the DTMC model needs to be determined. Based on the data in Table 1, the inventory demand of one period ranges from 137 to 229. If every integer in this interval were deemed a state, the number of states would be excessive and the state transitions would be hard to count. Considering that the number of states should be rational and convenient for building the transition probability matrix, the states need to be classified; the classified states are shown in Table 2.
Table 2: The results of classified states

Inventory (package)  [100,150)  [150,200)  [200,250)
State                Low (L)    Medium (M) High (H)
Serial number        1          2          3
Let N_ij denote the number of transitions from state i to state j. From Table 1 the following counts are obtained:

    N_11 = 2, N_12 = 3, N_13 = 0;  N_21 = 2, N_22 = 4, N_23 = 2;  N_31 = 0, N_32 = 2, N_33 = 4.

Let E denote the whole state space. Based on the equation

    P_ij = N_ij / Σ_{j∈E} N_ij,   (11)

the transition probability matrix is obtained as

    P = ( 0.400  0.600  0.000
          0.250  0.500  0.250
          0.000  0.333  0.667 ).

The inventory of the 20th period is 200 packages, belonging to the state Medium. Based on the obtained transition probability matrix, the state probability assignment of the next period is

    (L, M, H) = (0.250, 0.500, 0.250).

However, 200 packages lies right at the boundary between Medium and High. If the 20th-period demand changed only slightly, say to 201 packages, it would fall into the state High; the transition from period 19 to period 20 would then become H → H, the matrix would become

    P = ( 0.400  0.600  0.000
          0.250  0.500  0.250
          0.000  0.167  0.833 ),

and the predicted assignment of the next period would jump to

    (L, M, H) = (0.000, 0.167, 0.833).

A negligible change in the data thus produces a drastic change in the prediction, which exposes the main shortcoming of the classical DTMC model: it cannot express the uncertainty of states near the boundaries.
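The DTMC construction of this section can be sketched in a few lines (the state bins follow Table 2; the boundary treatment of the value 200 is exactly the fragile choice the example criticizes):

```python
# Reproducing Example 3.1: classify each period's inventory into L/M/H,
# count transitions, and normalize each row per Eq. (11).

demand = [143, 152, 161, 139, 137, 174, 142, 141, 162, 180,
          164, 171, 206, 193, 207, 218, 229, 225, 204, 200]

def state(x):
    # Table 2 bins; the 20th value 200 sits on the M/H boundary -- following
    # the text, it is treated here as Medium.
    if x < 150:
        return 'L'
    if x <= 200:
        return 'M'
    return 'H'

states = ['L', 'M', 'H']
counts = {i: {j: 0 for j in states} for i in states}
for cur, nxt in zip(demand, demand[1:]):
    counts[state(cur)][state(nxt)] += 1

# Row-normalize the counts to get the transition probability matrix.
P = {i: {j: counts[i][j] / sum(counts[i].values()) for j in states}
     for i in states}

# Distribution of the 21st-period state, given the 20th-period state.
prediction = P[state(demand[-1])]
```

Running this reproduces the first matrix above (rows 0.4/0.6/0, 0.25/0.5/0.25, 0/0.333/0.667) and the prediction (0.25, 0.5, 0.25); changing the last value to 201 reproduces the jump.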
4. The new belief Markov chain model
The integrated process of building the new belief Markov chain model is as follows:

Step 1. Determine the state space based on the previous data. Make sure the number of states is rational, and let all the states form the frame of discernment U.

Step 2. Calculate the BPA of every period on the power set of the frame of discernment, 2^U, based on the distance between interval numbers.

Step 3. Calculate the single-step transition belief assignment P_ij, i, j ∈ 2^U, from the obtained BPAs, and build the transition belief matrix [P_ij]. P_ij represents the belief assignment of transferring from proposition i to proposition j:

    P_ij = Σ_{t=1}^{n−1} ( m_t(i) · m_{t+1}(j) ) / Σ_{k∈2^U} Σ_{t=1}^{n−1} ( m_t(i) · m_{t+1}(k) ),   i, j ∈ 2^U,   (12)

where m_t(i) represents the belief assigned to proposition i at moment t, and n is the length of the Markov chain.

Note 4.1. In the classical DTMC model, transitions take place between basic states, while in the new belief Markov chain model they take place between propositions. In other words, the probability is replaced with the BPA; hence, the original transition probability matrix is replaced with the transition belief matrix.
Step 4.
Let m = [m(i)], i ∈ 2^U, be the BPA of the final period. Then the belief assignment of the next period is obtained by

    m′ = m · [P_ij].   (13)

Step 5. Convert the obtained BPA of the next period, m′, into the state probability assignment [P(i)], i ∈ U, by using the PPT; this gives the final prediction result.

One of the most significant steps in our model is to generate the BPA. The detailed process of generating the BPA is shown in the following.

Figure 1: The flow chart of generating BPA

First of all, the interval
BPA needs to be obtained. The fuzziness and uncertainty existing in realistic situations can be effectively represented in the form of interval numbers, and a crisp number can also be deemed an interval number; e.g., 0.5 can be seen as [0.5, 0.5]. By using Eq.(10), the distances between these interval numbers are calculated, and the similarity of the interval numbers is then obtained from the distance.
Definition 4.1.
Let A = [a1, a2] and B = [b1, b2] be two interval numbers. The similarity of the two interval numbers, S(A, B), is defined as

    S(A, B) = 1 / (1 + D(A, B)),   (14)

where D(A, B) is the distance between the interval numbers A and B. When the interval number A equals B, S(A, B) = 1. According to the definition, the larger the difference between A and B, the smaller the similarity. Finally, the obtained similarities are normalized, and the BPA of the interval number is obtained. The following example shows the process.

Example 4.1.
Let state A range in the interval (0, 5] and state B range in the interval (5, 10]. Given an interval number C = [3, 6] and α = 3, the result of the generated BPA is shown in Table 3.

Table 3: The result of Example 4.1 (columns: States, Distance, Similarity, BPA; rows: A, B).
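The generation procedure can be sketched on the data of Example 4.1. How the coefficient α = 3 enters the computation is not recoverable from the text, so the sketch below uses Eq.(14) exactly as stated and omits α; the resulting masses are therefore indicative only.

```python
# BPA generation: distances of Eq. (10) from the observation to each state's
# interval, similarities via Eq. (14), then normalization. The coefficient
# alpha of Example 4.1 is omitted (its role is unclear in the source).

def interval_dist(a, b):
    """D(A, B) of Eq. (10): the square of the interval distance."""
    mid = (a[0] + a[1]) / 2 - (b[0] + b[1]) / 2
    width = (a[1] - a[0]) - (b[1] - b[0])
    return mid ** 2 + width ** 2 / 12

def generate_bpa(obs, states):
    """obs: an interval (a crisp value x is (x, x)); returns a normalized BPA."""
    sim = {name: 1.0 / (1.0 + interval_dist(obs, iv))
           for name, iv in states.items()}
    total = sum(sim.values())
    return {name: s / total for name, s in sim.items()}

# Example 4.1: states A = (0, 5], B = (5, 10], observation C = [3, 6].
bpa = generate_bpa((3, 6), {'A': (0, 5), 'B': (5, 10)})
assert abs(sum(bpa.values()) - 1.0) < 1e-9
assert bpa['A'] > bpa['B']   # C lies closer to A's interval than to B's
```

With these definitions, D(C, A) = 4 + 1/3 and D(C, B) = 9 + 1/3, so the mass concentrates on A, in line with the intent of the example.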
5. Numerical example
Inventory control is a common realistic problem [37, 42, 44]. In this section, the inventory prediction problem of Section 3 is taken as an example again to show the effectiveness of our model. We assume that some uncertain information exists in the realistic statistics, so some of the inventory demand data are in the form of interval numbers. Table 4 shows the company's statistics of the inventory demand for Product E15 in 20 consecutive periods. Following the steps described in Section 4, the new belief Markov model is applied to do the prediction.
Table 4: Inventory demand for Product E15 in 20 consecutive periods

Time (period)        1    2    3          4    5    6          7    8    9    10
Inventory (package)  143  152  [157,162]  139  137  [165,180]  142  141  162  180
Time (period)        11   12   13         14   15   16         17   18   19   20
Inventory (package)  164  171  [204,209]  193  207  [215,220]  229  225  204  200
As shown in Table 4, the inventory of the whole 20 periods ranges within the interval [137, 229]. All the values can be classified into three basic states, low (L), medium (M) and high (H), which constitute the frame of discernment U = {L, M, H}. The power set of the frame then consists of {L}, {M}, {H}, {L,M}, {M,H}, {L,H}, {L,M,H} and the empty set ∅. To assign each inventory value to its corresponding propositions, the correspondences between propositions and inventory values are determined as in Table 5. Considering the realistic situation, the assessment of the inventory cannot be both low and high, so the proposition {L,H} is excluded, as are {L,M,H} and the empty set ∅; these propositions have no corresponding inventory interval. After the assignment, the distribution of the 20 periods is as shown in Figure 2.

Table 5: Correspondences between propositions and inventory

Series  Proposition  Inventory
1       {L}          [100,150]
2       {L,M}        [135,165]
3       {M}          [150,200]
4       {M,H}        [185,215]
5       {H}          [201,250]
-       {L,H}        -
-       {L,M,H}      -
-       ∅            -

Figure 2: Previous inventory demand
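The construction of the frame and its valid propositions can be sketched as follows (the exclusions follow Table 5):

```python
# Build the frame of discernment and its power set, then keep only the
# propositions that receive an inventory interval in Table 5.
from itertools import combinations

frame = ('L', 'M', 'H')
power_set = [frozenset(c) for r in range(len(frame) + 1)
             for c in combinations(frame, r)]
assert len(power_set) == 2 ** len(frame)   # 8 subsets, including the empty set

# {L,H} is unrealistic, and {L,M,H} and the empty set carry no interval.
excluded = (frozenset(), frozenset({'L', 'H'}), frozenset({'L', 'M', 'H'}))
valid = [s for s in power_set if s not in excluded]
assert len(valid) == 5   # {L}, {M}, {H}, {L,M}, {M,H}
```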
Generally speaking, given an inventory value, its corresponding proposition is certain. In our model, however, the value can be assigned to every proposition with different probabilities. These assignment probabilities are unknown for now, and the BPA cannot be obtained without them, so the following step is to calculate the probabilities of the assignment.

Let us take the inventory of the 6th period, [165, 180], as an example. Based on Eq.(10), the distances between this inventory and all valid propositions can be obtained. Then, by using Eq.(14), the similarities between the inventory and all valid propositions can also be obtained. Lastly, the BPA of the 6th period is obtained by normalizing the similarities. The results are shown in Table 6, whose last column gives the BPA of the 6th period.

Table 6: The BPA of the 6th period
Proposition  Distance  Similarity  BPA
{L}          -         -           0.0324
{L,M}        -         -           -
{M}          -         -           -
{M,H}        -         -           -
{H}          -         -           0.0257

Repeating the above process for all 20 periods, the BPA of each period is obtained, as listed in Table 7.

As shown in Table 7, every BPA in this program is nonzero. Since each BPA sums to 1 over 2^U, Eq.(12) can be rewritten as

    P_ij = Σ_{t=1}^{n−1} ( m_t(i) · m_{t+1}(j) ) / Σ_{k∈2^U} Σ_{t=1}^{n−1} ( m_t(i) · m_{t+1}(k) )
         = Σ_{t=1}^{n−1} ( m_t(i) · m_{t+1}(j) ) / Σ_{t=1}^{n−1} m_t(i),   i, j ∈ 2^U.   (15)

By using Eq.(15), the belief assignment of transferring from one proposition to another can be calculated. For example, the belief assignment of transferring from the proposition {L} to the proposition {L,M} is

    P_{L},{L,M} = Σ_{t=1}^{n−1} ( m_t({L}) · m_{t+1}({L,M}) ) / Σ_{t=1}^{n−1} m_t({L}).

Carrying this out for every pair of propositions yields the 5 × 5 transition belief matrix [P_ij].

The next step is to predict the inventory of the 21st period based on the BPA of the 20th period, m(20), taken from Table 7, and the matrix [P_ij]. By Eq.(13),

    m(21) = m(20) · [P_ij].

Finally, m(21) is converted into a state probability assignment with the PPT of Eq.(9):

    BetP(L) = m({L}) + m({L,M})/2,
    BetP(M) = m({M}) + m({L,M})/2 + m({M,H})/2,
    BetP(H) = m({H}) + m({M,H})/2.

This gives the final prediction (L, M, H). The predicted probability of the state High, 0.4643, is the largest of the three (see Table 8), so the inventory of the 21st period is predicted to be High, which agrees with the practical situation.
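The computation of Eq.(15) and Eq.(13) can be sketched as follows. The three-period BPA sequence below is illustrative, not the (unreproduced) entries of Table 7:

```python
# Build the transition belief matrix via Eq. (15) from a BPA sequence, then
# propagate the final period's BPA one step with Eq. (13).

props = ['L', 'LM', 'M', 'MH', 'H']

def transition_belief_matrix(bpas):
    """Eq. (15); the simplified denominator is valid because each BPA sums to 1."""
    P = {}
    for i in props:
        denom = sum(m[i] for m in bpas[:-1])
        P[i] = {j: sum(bpas[t][i] * bpas[t + 1][j]
                       for t in range(len(bpas) - 1)) / denom
                for j in props}
    return P

def step(m, P):
    """Eq. (13): m' = m . [P_ij]."""
    return {j: sum(m[i] * P[i][j] for i in props) for j in props}

# Illustrative three-period BPA sequence, each summing to 1:
bpas = [{'L': 0.50, 'LM': 0.30, 'M': 0.10, 'MH': 0.05, 'H': 0.05},
        {'L': 0.20, 'LM': 0.40, 'M': 0.30, 'MH': 0.05, 'H': 0.05},
        {'L': 0.05, 'LM': 0.15, 'M': 0.50, 'MH': 0.20, 'H': 0.10}]

P = transition_belief_matrix(bpas)
m_next = step(bpas[-1], P)
assert all(abs(sum(row.values()) - 1.0) < 1e-9 for row in P.values())
assert abs(sum(m_next.values()) - 1.0) < 1e-9   # belief mass is conserved
```

Applying the PPT of Eq.(9) to `m_next` then yields the (L, M, H) prediction, exactly as in the computation above.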
Figure 3: The prediction of the 21st period (predicted probabilities of Low, Medium and High as the 20th-period inventory varies from 193 to 202)

A key difference of our model is that the state is distributed over each possible state with a probability. Figure 4 illustrates the methods of state distribution in the different models.

Figure 4: Comparison of state distribution

The three different models are applied to predict the 21st-period inventory demand while the data of the 20th period fluctuates. As Figure 5 shows, as the 20th-period inventory demand changes, a sudden jump appears in the prediction of the classical Markov chain model.

As mentioned above, the classical DTMC model cannot handle uncertainty: a small change in the data may lead to a drastic change in the prediction result, which is obviously irrational. Increasing the number of states may alleviate the trouble caused by the uncertainty of states to some degree; however, a larger data set would then be needed, and the problem of drastic changes in the prediction remains unavoidable. By contrast, our new belief Markov chain model overcomes these shortcomings effectively. Moreover, uncertain data such as interval numbers are also well handled, which demonstrates our model's ability to deal with uncertain information.
6. Conclusion
In this paper, a new belief Markov chain model is proposed. The shortcomings of the classical DTMC model are successfully overcome in this model. The main advantages of the new model are as follows:

1. Stability. Sudden changes of the prediction result are avoided; that is, the prediction result varies smoothly with slight changes of the data.

2. The ability to handle uncertain information. Uncertainty in either the states or the data can be effectively handled in our model.

3. Flexibility. Different state distribution schemes and basic probability assignment functions lead to different results, and the settings can be adjusted according to the realistic situation, which reveals the flexibility of our model.

A numerical example of inventory prediction is presented in the paper to show the application of our model, and the prediction result and the comparison prove the effectiveness and rationality of our model.
Acknowledgement
The work is partially supported by the National Natural Science Foundation of China (Grant No. 61671384), the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2016JM6018), the Fund of SAST (Program No. SAST2016083), and the Seed Foundation of Innovation and Creation for Graduate Students in Northwestern Polytechnical University (Program No. Z2016122).
References

[1] Alagoz, O., Hsu, H., Schaefer, A.J., Roberts, M.S., 2010. Markov decision processes: a tool for sequential decision making under uncertainty. Medical Decision Making 30, 474–483.
[2] Annett, B., Sabine, T., Helge, B., 2010. Twelve years of succession on sandy substrates in a post-mining landscape: a Markov chain analysis. Ecological Applications 20, 1136–1147.
[3] Arenas, A., Borge-Holthoefer, J., Meloni, S., Moreno, Y., et al., 2010. Discrete-time Markov chain approach to contact-based disease spreading in complex networks. EPL (Europhysics Letters) 89, 38009.
[4] Avrachenkov, K.E., Sanchez, E., 2002. Fuzzy Markov chains and decision-making. Fuzzy Optimization and Decision Making 1, 143–159.
[5] Aziz, A., Sanwal, K., Singhal, V., Brayton, R., 2000. Model-checking continuous-time Markov chains. ACM Transactions on Computational Logic (TOCL) 1, 162–170.
[6] Benavoli, A., Chisci, L., Farina, A., Ristic, B., 2008. Modelling uncertain implication rules in evidence theory, in: Information Fusion, 2008 11th International Conference on, IEEE, pp. 1–7.
[7] Boudaren, M.E.Y., An, L., Pieczynski, W., 2016. Dempster-Shafer fusion of evidential pairwise Markov fields. International Journal of Approximate Reasoning 74, 13–29.
[8] Boujelben, M.A., De Smet, Y., Frikha, A., Chabchoub, H., 2009. Building a binary outranking relation in uncertain, imprecise and multi-experts contexts: The application of evidence theory. International Journal of Approximate Reasoning 50, 1259–1278.
[9] Buckley, J.J., 2005. Fuzzy Markov chains, in: Fuzzy Probabilities: New Approach and Applications, pp. 71–83.
[10] Chou, C.C., 2016. A generalized similarity measure for fuzzy numbers. Journal of Intelligent & Fuzzy Systems 30, 1147–1155.
[11] Darling, D.A., Siegert, A.J.F., 1953. The first passage problem for a continuous Markov process. Annals of Mathematical Statistics 24, 624–639.
[12] De Korvin, A., Kleyle, R., 1998. Expected transition costs based on a Markov model having fuzzy states with an application to policy selection. Stochastic Analysis and Applications 16, 51–64.
[13] Deng, X., Hu, Y., Deng, Y., Mahadevan, S., 2014a. Supplier selection using AHP methodology extended by D numbers. Expert Systems with Applications 41, 156–167.
[14] Deng, X., Liu, Q., Deng, Y., 2015. Newborns prediction based on a belief Markov chain model. Applied Intelligence 43, 1–14.
[15] Deng, X., Lu, X., Chan, F.T.S., Sadiq, R., Mahadevan, S., Deng, Y., 2014b. D-CFPR: D numbers extended consistent fuzzy preference relations. Knowledge-Based Systems 73, 61–68.
[16] Deng, Y., 2015. Generalized evidence theory. Applied Intelligence 43, 530–543.
[17] Deng, Y., Chan, F.T.S., 2011. A new fuzzy Dempster MCDM method and its application in supplier selection. Expert Systems with Applications 38, 9854–9861.
[18] Dou, R., Zong, C., Li, M., 2016. An interactive genetic algorithm with the interval arithmetic based on hesitation and its application to achieve customer collaborative product configuration design. Applied Soft Computing 38, 384–394.
[19] Farahat, A., 2010. Markov stochastic technique to determine galactic cosmic ray sources distribution. Journal of Astrophysics and Astronomy 31, 81–88.
[20] Feng, N., Yu, X., Dou, R., Pan, B., 2015. Managing risk for business processes: A fuzzy based multi-agent system. Journal of Intelligent & Fuzzy Systems 29, 2717–2726.
[21] Fraedrich, K., Muller, K., 1983. On single station forecasting: Sunshine and rainfall Markov chains. Beitr. Phys. Atmos. 56, 208–134.
[22] Fu, C., Yang, J.B., Yang, S.L., 2015. A group evidential reasoning approach based on expert reliability. European Journal of Operational Research 246, 886–893.
[23] Fu, J.C., Koutras, M.V., 1994. Distribution theory of runs: A Markov chain approach. Journal of the American Statistical Association 18, 1050–1058.
[24] Guo, J., 2016. A risk assessment approach for failure mode and effects analysis based on intuitionistic fuzzy sets and evidence theory. Journal of Intelligent & Fuzzy Systems 30, 869–881.
[25] Jiang, C., Han, X., Liu, G., Liu, G., 2008. A nonlinear interval number programming method for uncertain optimization problems. European Journal of Operational Research 188, 1–13.
[26] Jiang, W., Luo, Y., Qin, X., Zhan, J., 2015. An improved method to rank generalized fuzzy numbers with different left heights and right heights. Journal of Intelligent & Fuzzy Systems 28, 2343–2355.
[27] Jiang, W., Wei, B., Xie, C., Zhou, D., 2016a. An evidential sensor fusion method in fault diagnosis. Advances in Mechanical Engineering 8, 1–7.
[28] Jiang, W., Zhan, J., Zhou, D., Li, X., 2016b. A method to determine generalized basic probability assignment in the open world. Mathematical Problems in Engineering 2016, 1–11.
[29] Jin, Y., Xie, Z., Chen, J., Chen, E., 2015. PHEV power distribution fuzzy logic control strategy based on prediction. Journal of Zhejiang University of Technology 43, 97–102.
[30] Kabir, G., Tesfamariam, S., Francisque, A., Sadiq, R., 2015. Evaluating risk of water mains failure using a Bayesian belief network model. European Journal of Operational Research 240, 220–234.
[31] Kang, B., Deng, Y., Sadiq, R., Mahadevan, S., 2012. Evidential cognitive maps. Knowledge-Based Systems 35, 77–86.
[32] Kemeny, J.G., Snell, J.L., 1960. Finite Markov chains. American Mathematical Monthly 67.
[33] Kleyle, R., De Korvin, A., 1997. Transition probabilities for Markov chains having fuzzy states. Stochastic Analysis and Applications 15, 527–546.
[34] Komorowski, T., Szarek, T., 2010. On ergodicity of some Markov processes. Annals of Probability 38, 1401–1443.
[35] Krogh, A., Larsson, B., Von Heijne, G., Sonnhammer, E.L., 2001. Predicting transmembrane protein topology with a hidden Markov model: application to complete genomes. Journal of Molecular Biology 305, 567–580.
[36] Lange, K., 2010. Discrete-time Markov chains, in: Applied Probability. Springer, pp. 151–185.
[37] Levi, R., Pal, M., Roundy, R., Shmoys, D.B., 2005. Approximation Algorithms for Stochastic Inventory Control Models. Springer Berlin Heidelberg.
[38] Li, Y., Chen, J., Ye, F., Liu, D., 2016. The improvement of DS evidence theory and its application in IR/MMW target recognition. Journal of Sensors 2016, 1–15.
[39] Liu, C., Tian, Y.M., Wang, X.H., 2011. Study of rainfall prediction model based on GM(1,1)-Markov chain, in: Water Resource and Environmental Protection (ISWREP), 2011 International Symposium on, pp. 744–747.
[40] Lu, Y., Zhang, M., Yu, T., Qu, M., 2014. Application of Markov prediction method in the decision of insurance company. LEMCS-14.
[41] Mar, J., Antoñanzas, F., Pradas, R., Arrospide, A., 2010. Probabilistic Markov models in economic evaluation of health technologies: a practical guide. Gaceta Sanitaria 24, 209–214.
[42] Memari, A., Rahim, A.R.B.A., Ahmad, R.B., 2014. Production Planning and Inventory Control in Automotive Supply Chain Networks.
[43] Navaei, 2010. Markov chain for multiple hypotheses testing and identification of distributions for one object. Pakistan Journal of Statistics 26, 557–562.
[44] Ozbay, K., Ozguven, E.E., 2007. Stochastic Humanitarian Inventory Control Model for Disaster Planning.
[45] Pieczynski, W., Benboudjema, D., 2006. Multisensor triplet Markov fields and theory of evidence. Image & Vision Computing 24, 61–69.
[46] Privault, N., 2013. Discrete-time Markov chains, in: Understanding Markov Chains. Springer, pp. 77–94.
[47] Rabiner, L.R., Juang, B.H., 1986. An introduction to hidden Markov models. ASSP Magazine, IEEE 3, 4–16.
[48] Samet, H., Mojallal, A., 2014. Enhancement of electric arc furnace reactive power compensation using Grey-Markov prediction method. Generation, Transmission & Distribution, IET 8, 1626–1636.
[49] Shafer, G., 1976. A Mathematical Theory of Evidence. Volume 1. Princeton University Press, Princeton.
[50] Song, Y., Wang, X., Lei, L., Yue, S., 2016. Uncertainty measure for interval-valued belief structures. Measurement 80, 241–250.
[51] Soubaras, H., 2009. An Evidential Measure of Risk in Evidential Markov Chains. Springer Berlin Heidelberg.
[52] Soubaras, H., 2010. On evidential Markov chains. Studies in Fuzziness & Soft Computing 249, 247–264.
[53] Su, X., Mahadevan, S., Han, W., Deng, Y., 2016. Combining dependent bodies of evidence. Applied Intelligence 44, 634–644.
[54] Suchard, M.A., Weiss, R.E., Sinsheimer, J.S., 2001. Bayesian selection of continuous-time Markov chain evolutionary models. Molecular Biology and Evolution 18, 1001–1013.
[55] Tran, L., Duckstein, L., 2002. Comparison of fuzzy numbers using a fuzzy distance measure. Fuzzy Sets and Systems 130, 331–341.
[56] Yager, R.R., Liu, L., 2008. Classic Works of the Dempster-Shafer Theory of Belief Functions. Volume 219. Springer.
[57] Yang, Y., Han, D., 2016. A new distance-based total uncertainty measure in the theory of belief functions. Knowledge-Based Systems 94, 114–123.
[58] Yu, C., Yang, J., Yang, D., Ma, X., Min, H., 2015. An improved conflicting evidence combination approach based on a new supporting probability distance. Expert Systems with Applications 42, 5139–5149.
[59] Zadeh, L.A., 1965. Fuzzy sets. Information and Control 8, 338–353.
[60] Zhou, X., Li, S., Ye, Z., 2013. A novel system anomaly prediction system based on belief Markov model and ensemble classification. Mathematical Problems in Engineering 2013, 831–842.

Table 7: The BPA of 20 periods
Period  m({L})  m({L,M})  m({M})  m({M,H})  m({H})

Table 8: Comparison between prediction and the practical situation

Inventory (20th period)          193        194        195        196        197
Prediction result (probability)  M (0.4577) M (0.4511) M (0.4479) H (0.4454) H (0.4516)
Practical situation              H
Inventory (20th period)          198        199        200        201        202
Prediction result (probability)  H (0.4568) H (0.4609) H (0.4643) H (0.4670) H (0.4692)
Practical situation              H

Figure 5: Comparison of model prediction