New statistical control limits using maximum copula entropy
FALLAH MORTEZANEJAD, S.A., MOHTASHAMI BORZADARAN, G.R., AND SADEGHPOUR GILDEH, B.
Department of Statistics, Ferdowsi University of Mashhad, P. O. Box 1159, Mashhad 91775, Iran; [email protected]; [email protected]; [email protected]
Abstract.
Statistical quality control methods are essential for producing standard output in manufacturing processes. Many classical control procedures rest on a global assumption about the distribution of the process data: the data are supposed to be normal, which is clearly not valid for every process. Control charts built on a false assumption lead to wrong decisions that waste funds. The main question when working with a multivariate data set is therefore how to find a multivariate distribution that preserves the original dependency between the variables. To the best of our knowledge, a copula function guarantees that the resulting function carries this dependence, but a copula alone is not enough when no other fundamental information about the statistical population is available and we have only a data set. We therefore apply the maximum entropy concept to deal with this situation. In this paper, we first find the joint distribution of a data set from a manufacturing process that needs to be controlled while the production process is running. We then derive an elliptical control limit via the maximum copula entropy. Finally, we demonstrate the stated method on a practical example: average run lengths are calculated for several means and shifts to show the ability of the maximum copula entropy, and two real data examples are presented.

1. Introduction
Shannon entropy was first introduced by Shannon [24] in 1948 and has since been used in many different fields. The maximum entropy principle was presented by Jaynes [11] in 1957; Jaynes used a Lagrange function subject to some constraints to find the distribution of maximum entropy. Papers such as [14, 27, 13] studied the maximum entropy concept further, and it has been widely used up to recent years, for example in [4, 8, 29]. The maximum entropy principle is a good way to find the unknown distribution of a univariate data set because it needs no strong presumption about the distribution and works well in ill-posed conditions that do not require large sample sizes. Despite all the benefits of the maximum entropy concept, it can be difficult to define additional constraints for a multivariate data set that preserve the original dependency between the variables of multivariate
Mathematics Subject Classification.
Primary 47A55; Secondary 39B52, 34K20, 39B82.
Key words and phrases.
Control chart, Maximum entropy, Copula function, Spearman's rho, T²-Hotelling statistic.
data, and a specialist needs to preserve it in the resulting distribution function. Papers such as [6, 35, 20, 5, 26, 22, 18] have made a link between the maximum entropy principle and the copula function. Generally, combining both concepts, we can get a copula density function by maximum copula entropy just by adding some simple constraints built from the intended dependency measures, so that the maximum entropy copula has the same dependency as the existing data. Finally, the Sklar theorem [28] easily yields the multivariate distribution function whose dependency is the same as that of the available data.
In this paper, we study manufacturing process data: many processes produce multivariate data sets with unknown distributions that are simply assumed normal, an assumption that is incorrect in general, so technical assistants need to know the actual distribution. The main point is to transfer the original dependency to the resulting density function, and for this we combine the maximum entropy principle and the copula function. As mentioned before, the maximum entropy principle is applied to find the empirical multivariate distribution, and the copula function takes care of the dependency. Our data of interest are bivariate and dependent, so we estimate their distribution by the maximum entropy principle for some simulated dependency measures based on Spearman's rho and the Blest measures. In the next step, we apply the T²-Hotelling statistic, which is commonly used when dealing with a multivariate data set. Afterward, we compute the statistical quality control limits for these kinds of data.
These control limits are reliable because the dependency is taken into account while calculating them. The remainder of this paper proceeds as follows. In section 2, some basic concepts of copulas and dependence measures are given. In section 3, the maximum entropy procedure for finding distributions is explained; we present it for univariate and bivariate distributions, where the purpose of the bivariate case is a functional comparison with the result of the next section. In section 4, we first explain the procedure of finding a bivariate maximum entropy distribution with respect to some intended constraints, and then clarify how to obtain the maximum copula entropy under corresponding constraints; Shannon entropy is used in both. In acquiring the maximum copula entropy, we apply some dependence measures to transfer the dependence of an available data set to the final maximum entropy copula, and we then use the Sklar theorem to get the joint density function of the data set. In section 5, we present the T²-Hotelling statistic, illustrate how to find the statistical control limits for a bivariate data set with its original dependency preserved by the maximum entropy copula, and explain how to compute ARLs. In section 6, we calculate the coefficients of the maximum copula entropy for some instance values of the dependence measures, whose surface plots are shown in several figures; we then estimate the upper control limit for several different means as well as their corresponding
ARLs. In section 7, two different real data examples are discussed in detail. In section 8, we conclude.
2. Copula function definition
Statistical research concerning copula functions has advanced the study of data sets because copulas have many beneficial properties for preserving the dependency of the data. For example, the copula function is applied to generate
random variates from a set with the same dependency whose distribution is unknown. The copula function was introduced by Abe Sklar in 1959 in [28], applying one-dimensional marginal functions to build a multivariate distribution. [9] was the first published paper in statistics using the copula function, and Schweizer and Wolff [23] are also among its pioneers. After that, the concept was widely used in many different papers such as [16, 32, 34]. In this paper, we focus on the combination of the copula function and the entropy principle; articles such as [3, 12, 18, 31] have been published on this in recent years. The idea is to link the copula function and maximum entropy to estimate the unknown copula and then approximate the unknown distributions using the Sklar theorem [28]. Along the way, some dependence measures are added to preserve the original dependency in the data set, but the copula function is required to compute their values, so some pre-estimated statistics have to be defined. In this regard, some primary definitions and a theorem are presented here. The copula description is taken from [19]:
Definition 2.1.
A two-dimensional copula is a function C defined on I², where I = [0, 1], with the following properties:
• for every u, v ∈ I: C(u, 0) = C(0, v) = 0, C(u, 1) = u, C(1, v) = v;
• for all u_1, u_2, v_1, and v_2 in I such that u_1 ≤ u_2 and v_1 ≤ v_2:
C(u_2, v_2) + C(u_1, v_1) − C(u_1, v_2) − C(u_2, v_1) ≥ 0.
In the study of multivariate data, copula functions play a valuable role, and the basic tool for this role is the Sklar theorem.
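As a quick numerical aside (not part of the paper's construction), both properties of Definition 2.1 can be checked on a standard parametric family; here the Farlie–Gumbel–Morgenstern copula with an assumed parameter θ = 0.5:

```python
# Check the two copula properties of Definition 2.1 for the FGM copula
# C(u,v) = uv(1 + theta(1-u)(1-v)); theta = 0.5 is an illustrative choice.

def fgm(u, v, theta=0.5):
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

# Boundary conditions: C(u,0) = C(0,v) = 0, C(u,1) = u, C(1,v) = v.
for t in [0.0, 0.3, 0.7, 1.0]:
    assert abs(fgm(t, 0.0)) < 1e-12 and abs(fgm(0.0, t)) < 1e-12
    assert abs(fgm(t, 1.0) - t) < 1e-12 and abs(fgm(1.0, t) - t) < 1e-12

# 2-increasing condition: the C-volume of every rectangle is non-negative.
grid = [i / 10 for i in range(11)]
for i in range(10):
    for j in range(10):
        u1, u2, v1, v2 = grid[i], grid[i + 1], grid[j], grid[j + 1]
        vol = fgm(u2, v2) - fgm(u2, v1) - fgm(u1, v2) + fgm(u1, v1)
        assert vol >= -1e-12
print("FGM copula passes both checks")
```

The same two checks apply to any candidate copula, including the maximum entropy copulas constructed later.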
Theorem 2.2.
Let H(·, ·) be a joint distribution function for random variables X and Y whose marginal functions are F_X(·) and F_Y(·). Then a copula C(·, ·) exists such that
H(x, y) = C(F_X(x), F_Y(y)), ∀ x, y ∈ R. (2.1)
C(·, ·) is unique if F_X(·) and F_Y(·) are continuous; otherwise, C(·, ·) can be uniquely defined on the joint support set S(X, Y). Conversely, let C(·, ·) be a copula function, and F_X(·) and F_Y(·) be univariate distribution functions. Then H(·, ·) in (2.1) is the corresponding joint distribution function with respect to these margins.

The main key in the copula topic is this theorem, extensively applied in several articles on different issues; it gives a connection between the copula function and the joint distribution function. In the following, some dependence measures are used. The first is Spearman's rho, evaluating concordance and discordance between variables; its definition is based on [15]. Let (X_1, Y_1), (X_2, Y_2), and (X_3, Y_3) be three independent random vectors with common joint distribution function H(·, ·), margins F_X(·) and F_Y(·), and corresponding copula C(·, ·). Spearman's rho, which takes values in [−1, 1], is determined by
ρ = 3 { P((X_1 − X_2)(Y_1 − Y_3) > 0) − P((X_1 − X_2)(Y_1 − Y_3) < 0) }.
The following theorem from [19] applies the copula function to compute Spearman's rho.
Theorem 2.3.
Suppose X and Y are two random variables with copula C(·, ·). Then Spearman's measure is calculated by
ρ = 3 Q(C, Π) = 12 ∫∫_{I²} uv dC(u, v) − 3 = 12 ∫∫_{I²} C(u, v) du dv − 3.

The other measures used here are the Blest rank correlations adapted from [1], called the first, second, and third Blest measures:
ν_1 = 2 − 12 ∫∫_{I²} (1 − u)² v c(u, v) du dv, ν_1 ∈ [−1, 1],
ν_2 = 2 − 12 ∫∫_{I²} u (1 − v)² c(u, v) du dv, ν_2 ∈ [−1, 1],
η = 6 ∫∫_{I²} u² v² c(u, v) du dv − 1/5, η ∈ [0, 1].
As can be seen, the copula density function is required for all the presented scales; but when working with a real data set, the distribution and even the copula function are unknown, so these measures cannot be calculated without the necessary function. The logical recommendation is to use their approximations as pre-estimators based on the available data sample, since the primary aim of this paper is to estimate the copula and the joint distribution function. Paper [6] gives these estimators; before presenting them, some notation is needed. Let n be the sample size, let R_t and S_t be the ranks of X_t and Y_t, and define, for t = 1, …, n,
u_t = (1/(n + 1)) Σ_{i=1}^n 1(X_i ≤ X_t) = R_t/(n + 1),
v_t = (1/(n + 1)) Σ_{i=1}^n 1(Y_i ≤ Y_t) = S_t/(n + 1),
where 1(·) is the indicator function. The pre-estimators of the above dependence measures are
ρ̂ = (12/(n(n² − 1))) Σ_{t=1}^n R_t S_t − 3(n + 1)/(n − 1),
ν̂_1 = 2(n + 1)/(n − 1) − (12/(n(n − 1))) Σ_{t=1}^n (1 − R_t/(n + 1))² S_t,
ν̂_2 = 2(n + 1)/(n − 1) − (12/(n(n − 1))) Σ_{t=1}^n R_t (1 − S_t/(n + 1))²,
η̂ = (6/n) Σ_{t=1}^n (R_t/(n + 1))² (S_t/(n + 1))² − (1/5)(n + 1)/(n − 1).
By the copula and the entropy principle, we can find the unknown distribution of any set, as shown in the following.
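A minimal sketch of the rank-based pre-estimator of Spearman's rho described above; the sample data are hypothetical and ties are ignored for simplicity:

```python
# Rank-based pre-estimator of Spearman's rho:
# rho_hat = 12/(n(n^2-1)) * sum(R_t * S_t) - 3(n+1)/(n-1).

def ranks(xs):
    """Rank of each observation (1 = smallest); ties are not handled here."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho_hat(x, y):
    n = len(x)
    R, S = ranks(x), ranks(y)
    return 12.0 / (n * (n * n - 1)) * sum(r * s for r, s in zip(R, S)) \
        - 3.0 * (n + 1) / (n - 1)

x = [1.2, 3.4, 2.2, 5.0, 4.1]                                  # hypothetical sample
assert abs(spearman_rho_hat(x, x) - 1.0) < 1e-12               # comonotone data
assert abs(spearman_rho_hat(x, [-v for v in x]) + 1.0) < 1e-12 # countermonotone data
```

The two assertions confirm the estimator attains its extreme values ±1 on perfectly concordant and perfectly discordant samples.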
3. Maximum entropy principle
Entropy was introduced by Shannon in [24, 25]. Shannon entropy is very applicable in statistics and broadly applied in many other fields, such as mathematics, physics, computer science, and economics. Jaynes explained the maximum entropy principle in [11] in 1957; it has many advantages, such as being unbiased, being suitable for small sample sizes, and requiring no strong assumptions. The maximum entropy concept is a practical way to find the unknown distribution of a real data set: it yields a distribution compatible with the available information. Here we use the maximum entropy principle to approximate the margins, and also the joint distribution, to compare with the result that comes from the maximum copula entropy. In the following, we describe how to find the univariate maximum entropy distribution according to Shannon, which is useful for getting the marginal functions; then the bivariate case is presented. [24] introduced Shannon entropy for a random variable X in the continuous case as the differential entropy
H_S(f_X) = − ∫_{S_X} log f_X(x) dF_X(x),
where S_X is the univariate support set and f_X(x) is the density function. The next step is to add some constraints; Kagan et al. [14] extended such conditions on the entropy. Some mandatory and optional constraints are imposed on the univariate density function:
∫_{S_X} dF_X(x) = 1,
E(g_i(X)) = m_i, i = 1, …, k,
where k is the number of optional constraints and the m_i, for i = 1, …, k, are known from the available data set, with corresponding functions g_i(X). The first condition guarantees that the result is a valid statistical density. The Lagrange function for this case is
L(f_X, λ_0, …, λ_k) = − ∫_{S_X} log f_X(x) dF_X(x) − λ_0 { ∫_{S_X} dF_X(x) − 1 } − Σ_{i=1}^k λ_i { ∫_{S_X} g_i(x) dF_X(x) − m_i }.
After differentiating and setting to zero, the univariate maximum entropy density is obtained:
f_X(x) = exp(−λ_0 − Σ_{i=1}^k λ_i g_i(x)), x ∈ S_X.
So we have briefly explained how to get the maximum entropy function in the univariate case, which is helpful in what follows when considering the joint distribution function based on the copula. It is worth describing the bivariate case because we would like to compare, functionally, the result of the pure entropy approach with the outcome of the method combined with copulas. The bivariate form of Shannon entropy is
H_S(f_{X,Y}) = − ∫∫_{S(X,Y)} log f_{X,Y}(x, y) dF_{X,Y}(x, y),
for X and Y whose density and distribution functions are f_{X,Y}(x, y) and F_{X,Y}(x, y), respectively, where S(X, Y) is the joint support set. To find the joint maximum
FALLAH ET AL. entropy distribution, some intended constraints are needed as well: (cid:26) (cid:82) (cid:82) S ( X,Y ) dF X,Y ( x, y ) = 1 ,E ( g i ( X, Y )) = m i ( x, y ) , j = 1 , . . . , k (cid:48) , where m i ( x, y )s for j = 1 , . . . , k (cid:48) are some known moments which are calculatedbased on the available data set, g i ( X, Y )s for j = 1 , . . . , k (cid:48) are corresponding func-tions to m i ( · , · )s, k (cid:48) is the number of constraints on moments which does not haveto be equal to k , and dF X,Y ( x, y ) is the full differential of F X,Y ( x, y ). Then themaximum entropy distribution is gotten by applying the Lagrange function madeof Shannon entropy and its corresponding constraints as well: L ( f X,Y , λ , . . . , λ (cid:48) k ) = − (cid:90) S ( X,Y ) log f X,Y ( x, y ) dF X,Y ( x, y ) − λ { (cid:90) S ( X,Y ) dF X,Y ( x, y ) − }− Σ k (cid:48) i =1 λ i { (cid:90) S ( X,Y ) g i ( x, y ) dF X,Y ( x, y ) − m i ( x, y ) } . Then the Lagrange function should be differentiated with respect to f X,Y ( · ) and byusing the Kuhn-Tucker method joint maximum entropy distribution is found out: f X,Y ( x, y ) = exp( − λ − Σ k (cid:48) i =1 λ i g i ( x, y )) , ( x, y ) ∈ S ( X, Y ) . (3.1)In the next section, the copula concept is added to the maximum entropy procedureto make the effect of available dependency on data. Function f X,Y ( x, y ) is not asreliable as the result of the maximum copula entropy.4. Joint distribution function via maximum copula entropy method
In this section, we present a feasible method for finding a multivariate distribution that reflects the dependency between variables. For simplicity of calculations and notation, we discuss the bivariate case. The main question when working with a multivariate data set whose distribution is unknown is how to find a distribution with the same original dependency between the corresponding variables; the maximum entropy principle seems good for this purpose, and the copula function answers the dependency question. Generally, function (3.1) is upgraded by the copula function to preserve the dependency, so we combine these two major concepts to estimate a fitting distribution. We now show how to find the maximum copula entropy. First of all, the copula entropy based on the Shannon definition is
H_S(c) = − ∫∫_{I²} c(u, v) log c(u, v) du dv, where c(u, v) = ∂²C(u, v)/∂u∂v.
The maximum copula entropy has to be found under constraints ensuring that the result is a copula density. These essential constraints, following [6], are, for i = 1, …, r:
∫∫_{I²} c(u, v) du dv = 1,
∫∫_{I²} u^i c(u, v) du dv = 1/(i + 1),
∫∫_{I²} v^i c(u, v) du dv = 1/(i + 1),
where r counts the constraints; the larger the choice of r, the more accurately the result resembles a copula density. We also add equations based on measures of dependence, which are estimated when dealing with a real data set, so that the resulting copula function has the same dependency as the available data. These constraints are related to ρ, ν_1, ν_2, and η. According to [6, 7], some terms have approximately equal Lagrange coefficients; for example, in paper [18], when different coefficients were put on each such constraint in a simulation study, the results were almost the same for some of them.
This comes from the symmetry of the maximum entropy copula. Thus, we merge some of these constraints to reduce the number of Lagrange coefficients. This synchronization is significantly important when dealing with real data, because the reduced number of computations saves time and energy. The merges are natural and explained below. After the reduction, the optional conditions are
∫∫_{I²} uv c(u, v) du dv = (ρ + 3)/12,
∫∫_{I²} u²v c(u, v) du dv = (2ρ − ν_1 + 2)/12, (4.1)
∫∫_{I²} u²v² c(u, v) du dv = (η + 1/5)/6,
where ρ, ν_1, and η are Spearman's rho and Blest measures I and III, respectively; by symmetry, the second condition with u and v interchanged corresponds to Blest measure II. To find the maximum copula entropy, we apply the Lagrange function and the Kuhn–Tucker method as before, and the resulting copula density is
c(u, v) = exp( −1 − λ_0 − Σ_{i=1}^r λ_i (u^i + v^i) − λ_{r+1} uv − λ_{r+2} (u²v + uv²) − λ_{r+4} u²v² ), ∀ u, v ∈ [0, 1]. (4.2)
The values of λ_i for i = 0, …, r + 4 are obtained by substituting c(u, v) into the intended constraints and solving the resulting system of equations. In practice, the dependence measures used in the conditions have to be estimated, because the copula function is required for their exact computation; the pre-estimators were given in section 2. We find copula functions for some estimated dependence measures in section 6. After obtaining the copula density function related to the dependence measures, the joint density function is obtained by the formula
f_{X,Y}(x, y) = c(F_X(x), F_Y(y)) f_X(x) f_Y(y), (4.3)
where f_X(·) and f_Y(·) are the marginal functions obtained by the maximum entropy principle based on Shannon's definition. In this regard, the functions (3.1) and (4.3) can be functionally compared.
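The moment-matching step behind the maximum entropy copula can be sketched numerically. The toy below is not the paper's full system: it uses r = 1 and only the Spearman constraint, so the margins are matched in their first moment only; the target ρ = 0.3 and the choice of SciPy solver are assumptions made for illustration:

```python
# Solve for the Lagrange multipliers of a reduced maximum entropy copula
# density c(u,v) = exp(-1 - l0 - l1(u+v) - l2*uv) by moment matching.
import numpy as np
from scipy.integrate import dblquad
from scipy.optimize import fsolve

rho = 0.3  # hypothetical target value of Spearman's rho

def c(u, v, lam):
    l0, l1, l2 = lam
    return np.exp(-1.0 - l0 - l1 * (u + v) - l2 * u * v)

def moment(g, lam):
    # integrate g(u,v)*c(u,v) over the unit square
    # (dblquad's integrand takes the inner variable first)
    val, _ = dblquad(lambda v, u: g(u, v) * c(u, v, lam),
                     0.0, 1.0, lambda u: 0.0, lambda u: 1.0)
    return val

def system(lam):
    return [moment(lambda u, v: 1.0, lam) - 1.0,                   # total mass
            moment(lambda u, v: u, lam) - 0.5,                     # margin first moment
            moment(lambda u, v: u * v, lam) - (rho + 3.0) / 12.0]  # Spearman constraint

lam = fsolve(system, [-1.0, 0.0, 0.0])
rho_fit = 12.0 * moment(lambda u, v: u * v, lam) - 3.0
assert abs(rho_fit - rho) < 1e-3
print("fitted Lagrange multipliers:", lam)
```

With larger r (the paper uses r = 5) and the Blest constraints of (4.1) added, the same solve yields the full coefficient vector of (4.2); the symmetric form in (u + v) is what makes the merged coefficients possible.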
While (3.1) carries no effect of the dependency, (4.3) fully reflects the dependency in the data, which was our aim. So the joint density function of a dependent data set is obtained via maximum entropy and the copula function. The maximum entropy principle is applied because it is the best choice when not enough information is available, and it can help us find a fitted distribution when the sample size is not large; the copula function keeps the original dependency between the variables of the data set. Thus, the resulting joint density function is suitable for our goal. In the next section, the T²-Hotelling statistic is presented, because in the last step of this paper suitable control limits are designed for smooth shifts, which are unacceptable in manufacturing processes.

5. New control chart using T²-Hotelling

In the previous section, we had some estimated dependence measures, and based on them we obtained the unknown joint density function of a data set. Our goal is to work on data sets obtained from a manufacturing process and to control the process over time. To control the production process, we need the appropriate statistical control limits [
LCL, UCL]. These limits directly depend on the joint density function of the process, generally unknown in practice. Some classical methods exist which impose a strong assumption on the distribution: it is supposed to be normal, uniformly for every data set received from any production process. This assumption is invalid in many procedures, so the resulting control limits are affected by a false distribution; decisions based on such limits can be wrong and waste funds. The purpose of this paper is to use the density function (4.3) to compute suitable control limits. This has two advantages: first, it is not one general distribution for all processes but is obtained separately for each of them; second, the dependency of the data is considered in it, which is its superiority with respect to (3.1). For this aim, we apply the T²-Hotelling statistic to deal with a multivariate data set, together with the joint density function (4.3). First of all, we present the T²-Hotelling statistic for a random vector X with mean vector µ and variance–covariance matrix Σ:
T²_Hotelling = (X − µ)′ Σ⁻¹ (X − µ).
In our case of study, we have
X = (X, Y)′, µ = (µ_X, µ_Y)′, Σ⁻¹ = [ a_11  a_12 ; a_21  a_22 ],
where a_12 = a_21. T²-Hotelling is obviously a positive statistic measuring a distance, so the corresponding LCL is 0; the lower the value, the closer the quality is to the standards. Therefore, we have to solve the following equation to get the
UCL:
P(T²_Hotelling ≤ UCL) ≥ 1 − α, (5.1)
where α is the type I error and negligible. Then we have
1 − α ≤ P((X − µ)′ Σ⁻¹ (X − µ) ≤ UCL)
= P(a_11 (X − µ_X)² + a_22 (Y − µ_Y)² + 2 a_12 (X − µ_X)(Y − µ_Y) ≤ UCL)
= ∫∫_{ {(x,y) : a_11 (x − µ_X)² + a_22 (y − µ_Y)² + 2 a_12 (x − µ_X)(y − µ_Y) ≤ UCL} } f_{X,Y}(x, y) dx dy.
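A Monte Carlo sketch of solving (5.1) for the UCL: in place of draws from the fitted density (4.3), a hypothetical bivariate normal sample is used here purely to illustrate taking the empirical (1 − α) quantile of the T² values; the mean vector and covariance matrix are assumptions:

```python
# Empirical UCL: the (1 - alpha) quantile of simulated T^2 values.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05
mu = np.array([2.0, 1.0])
cov = np.array([[1.0, 0.4], [0.4, 1.0]])

# Stand-in for draws from the fitted joint density (4.3).
x = rng.multivariate_normal(mu, cov, size=100_000)

cov_inv = np.linalg.inv(cov)
d = x - mu
t2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # T^2 for each observation

ucl = np.quantile(t2, 1.0 - alpha)            # empirical solution of (5.1)
coverage = np.mean(t2 <= ucl)
print(f"UCL = {ucl:.3f}, coverage = {coverage:.4f}")
```

For this normal stand-in, T² is chi-square with 2 degrees of freedom, so the UCL lands near the quantile 5.99; with draws from (4.3), the quantile would instead reflect the fitted dependence structure.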
We need the value of UCL satisfying the last equation. These control limits are based on the dependency of the two variables X and Y, which is reflected in f_{X,Y}(·, ·). In statistical quality control it is common to use average run lengths (ARLs) to show the performance of the limits. There are two types. First, ARL₀, based on the type I error α, is the expected number of samples taken from the process until an out-of-control sample is seen under an in-control situation:
ARL₀ = 1/α.
Second, ARL₁, based on the type II error β, is the expected number of samples taken under a shifted, out-of-control condition until one sample is observed outside the control limits:
ARL₁ = 1/(1 − β).
Note that the run-length distributions are geometric. In the next section, we use this method to find the statistical control limits and their
ARLs for a simulation study.
6. Simulation example of a manufacturing process
In many studies involving numerical data, the main question is how to find the distribution of the data set, which can be univariate or multivariate; in almost all research, the distribution of the existing data set is unknown and must be estimated via a statistical method. The entropy concept is well known and used in many different fields of study. The maximum entropy principle is a statistical method for finding the best distribution when dealing with inadequate information, and it also performs acceptably with small sample sizes. Some intended constraints are required, built from available information such as moments, so no strong assumptions are needed, which is another benefit of the method. When dealing with a univariate data set, it is easy to use the entropy procedure, and we are not worried about the loss of dependency between variables. In section 3, we found a joint distribution function according to some constraints, with result (3.1), which is essentially dependence-free. An important question is how the intended conditions have to be defined to keep the original dependency of a multivariate data set, or which kinds of constraints guarantee the original dependency in the resulting distribution function. One way to answer such questions is by using copula functions: we have to find a copula function with the same dependency as the data set. To do this, we use maximum entropy to get a copula function, named the maximum copula entropy, defining some constraints from dependency measures of the data expressed through the copula. The resulting function then has the same dependency as the available data set, and by the Sklar theorem the maximum entropy copula is converted into the joint distribution function.
Thus, the unknown distribution of the practical set can be obtained with the same dependency. In this paper, we introduced a feasible way to find the distribution of an available data set by applying the maximum copula entropy. Afterward, we get statistical control limits by exerting the joint density function, so the control limits are based on the original dependency between the variables, and decisions according to these limits are reliable. In the following, the power of the charts is exhibited by a simulation study for different steps of shifts. Five different groups of dependence measures are determined first in Table 1, and all changes are applied for all groups; the scales are Spearman's rho and Blest I and III, used in the constraints. Coefficients of c(u, v), calculated with respect to the dependence groups, are represented in

Table 1.
Triple measures for five dependency groups.
Dependence group   ρ   ν   η   (Groups 1–5)
Table 2, together with their corresponding surface plots in Figure 1. The Lagrange coefficients are estimated according to the function (4.2) with r = 5. Various dependency values affect the maximum entropy copula differently. We use these copula functions to get the joint density functions of some samples with different means; the surface plots of (4.3) for several options are drawn in Figure 2. Note that if function (3.1) were used, all surfaces would be the same for different dependencies, whereas the maximum entropy copula density (4.3) varies across the dependency groups. Varied dependence measures have a clear effect on the density function, so ignoring them leads to misunderstanding of the production process features. In Table 3,

Table 2.
Coefficients of the maximum copula entropy
Lagrange coefficients   Group 1   Group 2   Group 3   Group 4   Group 5   (λ_0, …, λ_9)

Table 3.
UCL with confidence level 1 − α for some means and different measures of dependence, whose copula coefficients are in Table 2; the type I error is approximately 0.05 in each case.

Dependence groups   µX = 2, µY = 1   µX = 3, µY = 5   µX = 7, µY = 6   (1 − α and UCL for Groups 1–5)

Table 4.
ARL₀ with α = 0.05 for different means.
Different means      Group 1           Group 2          Group 3          Group 4          Group 5
                     mean   variance   mean  variance   mean  variance   mean  variance   mean  variance
µX = 2, µY = 1      31.355   819.878  27.937  710.053  22.462  464.647  25.016  528.356  22.404  418.
µX = 3, µY = 5      31.075   792.859   8.928   50.836  23.537  501.814  25.157  565.758  22.044  423.
µX = 7, µY = 6      41.045  1645.209  27.209  657.729  23.060  434.728  25.161  592.440  23.198  462.

(a) ν = −0.5; (b) ν = −0.18; (c) ρ = 0, ν = 0; (d) ν = 0.18; (e) ν = 0.5.

Figure 1.
Surface plots of copula density functions for the dependence groups of Table 2.

there are three means for X and Y, namely µX = 2, 3, 7 and µY = 1, 5, 6, and the margins of X and Y are estimated via the univariate maximum entropy method presented in section 3. The basic negligible type I error is 0.05. In this regard, the
UCLs satisfy equation (5.1). Tables 4, 5, 6, and 7 contain the means and variances of ARL₀ and ARL₁. Each ARL

(a) µX = 2, µY = 1, group 1; (b) µX = 2, µY = 1, group 2; (c) µX = 3, µY = 5, group 1; (d) µX = 3, µY = 5, group 3; (e) µX = 7, µY = 6, group 4; (f) µX = 7, µY = 6, group 5.

Figure 2.
Surface plots of density functions (4.3) for different means and dependency groups.

is recalculated 1000 times. According to
ARL₀'s definition, the bigger ARL₀, the better the performance of the control limits. The base value of ARL₀ for α = 0.05 is 20, and the efficiency of the maximum copula entropy limits is better than this baseline in Table 4, where almost all values are greater than 20. Conversely to ARL₀, the smaller ARL₁, the better the implementation of the control chart. Some steps of shifts are needed to calculate ARL₁, so we make a mean model of changes for X and Y as
µ′_X = µ_X + δ_X σ_X,
µ′_Y = µ_Y + δ_Y σ_Y.
This paper aims to detect soft shifts that classical methods are unable to discover; the control chart is powerful because it is based on a distribution fitted to the data, whereas the traditional basic distribution is normal and global for all process data. The ARL₁ values are generated directly from the run-length definition, so we do not use β in the calculations; the corresponding β can instead be recovered from the ARL₁ values. For example, when ARL₁ = 4.171 for µX = 2 and µY = 1 in group 1, 1 − β is computed as about 0.24. This β is affected by the α entering equation (5.1), which is applied in the UCL calculations.
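The β recovery used in this example is one line of arithmetic, since ARL₁ = 1/(1 − β):

```python
# Recover the detection probability (1 - beta) from an ARL1 value.
arl1 = 4.171                     # example ARL1 value from Table 5
detection_prob = 1.0 / arl1      # = 1 - beta, since ARL1 = 1/(1 - beta)
beta = 1.0 - detection_prob
print(f"1 - beta = {detection_prob:.3f}, beta = {beta:.3f}")
assert abs(detection_prob - 0.24) < 0.01
```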
ARL₁ values are provided for different margins in Tables 5, 6, and 7 for shift steps 0.1, 0.5, and 1. Their means and variances approximately obey the features of a geometric distribution, and they decrease as the shifts become larger. Since the ARL₁ values are small at an error of 0.05, such control limits prevent wastage of capital and energy, and the accuracy of decisions can be increased by using them. Thus, the control limits of the maximum copula entropy perform well for undesirable soft shifts, which are difficult to detect by traditional methods; as a result, they are also suitable for observing larger changes. While using these control limits, we can be sure that a wide range of changes is easily detectable.
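Since the run length is geometric, the in-control ARL can be checked by direct simulation; here the per-sample signal probability is taken to be p = α = 0.05, so the estimate should be near the base value 20:

```python
# Simulate run lengths: with signal probability p per sample, the run
# length is Geometric(p) with mean 1/p.
import random

random.seed(1)
alpha = 0.05

def run_length(p):
    """Number of samples until the first out-of-control signal."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

runs = [run_length(alpha) for _ in range(200_000)]
arl0 = sum(runs) / len(runs)
assert abs(arl0 - 1.0 / alpha) < 0.5   # mean of Geometric(0.05) is 20
print(f"estimated ARL0 = {arl0:.2f}")
```

Replacing the fixed p with the actual exceedance probability P(T² > UCL) under a shifted mean gives the ARL₁ simulation used for Tables 5–7.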
Table 5.
ARL₁ when µX = 2 and µY = 1.

X's shifts  Y's shifts   Group 1           Group 2          Group 3          Group 4          Group 5
                         mean   variance   mean  variance   mean  variance   mean  variance   mean  variance
δX = 0.1    δY = 0      32.542  1022.793  28.012  698.124  22.836  431.109  24.663  472.750  21.795  383.
            δY = 0.1     ….326   226.981  23.598  459.605  19.152  316.726  20.960  368.750  18.360  289.
            δY = 0.5     ….565     9.099  12.278  111.551  11.118   95.357  11.327   98.064  10.817   85.
            δY = 1       4.288     7.932   6.916   26.947   7.264   25.841   7.563   36.621   7.106   30.
δX = 0.5    δY = 0      31.202   867.716  27.167  680.873  23.965  467.924  23.965  467.924  23.244  427.
            δY = 0.1     ….244   427.704  24.085  487.198  18.944  268.881  20.635  380.476  19.137  301.
            δY = 0.5     ….582     9.782  12.176  111.103  10.887   86.533  11.111   86.724  11.330   98.
            δY = 1       4.211     8.073   6.856   25.671   7.340   32.595   7.251   31.274   7.138   32.
δX = 1      δY = 0      32.854   850.620  27.681  699.654  22.845  441.568  23.85   520.022  22.324  434.
            δY = 0.1     ….130   184.141  23.696  521.899  18.417  295.962  20.810  396.427  18.934  302.
            δY = 0.5     ….401     7.451  11.906  111.007  10.994   83.304  10.751   78.850  10.914   88.
            δY = 1       4.171     6.703   6.863   27.930   7.693   39.513   7.224   32.401   6.850   28.

7. Real data examples
In this paper, we investigate a new method of finding statistical control limits. To do that, we estimate the dependence distribution via the maximum entropy method. The copula function is used to preserve the main dependency in the data. A simulation section shows the performance of the presented method. Finally, we examine real data examples.

7.1. A production process quality.
The first example is taken from [21], whose data set includes eleven different quality variables from a production process, with 30 samples collected over time. Following [30], we focus on the first two quality characteristics. The data, along with further information, are given in Table 8. The quality variables are denoted X and Y,

Table 6. ARL when µ_X = 3 and µ_Y = 5. [Same layout as Table 5; the tabulated values are illegible in this copy.]

Table 7.
ARL when µ_X = 7 and µ_Y = 6. [Same layout as Table 5; the tabulated values are illegible in this copy.]
respectively. Since the first phase of the data is required to obtain control limits, the first twenty samples are assumed to belong to phase one. In this example, we discuss in detail how to find the UCL via the proposed method. The first step is to calculate the marginal distributions of the variables from their means:

∫_{S_X} dF_X(x) = 1,  ∫_{S_X} x dF_X(x) = 0.…,

and

∫_{S_Y} dF_Y(y) = 1,  ∫_{S_Y} y dF_Y(y) = 59.…

(decimal digits are illegible in this copy). The maximum Shannon entropy margins are:

f_X(x) = exp(1.… − … x), x ∈ S_X,
f_Y(y) = exp(−… − … y), y ∈ S_Y.

The next step is to determine the joint density function based on the copula function. To do this, the dependencies have to be calculated via the estimators presented in Section 2, or any other available estimators. Note that the presented estimators are adequate when the sample size is large enough, depending also on the data. The estimated dependency values are:

ρ̂ = 0.…,  η̂ = 0.…,  ν̂₁ = 0.…,  ν̂₂ = 0.… .
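The two steps just performed, estimating rank dependencies and then maximizing entropy under moment constraints, can be sketched numerically. The sketch below uses synthetic data and enforces only uniform first moments together with the Spearman constraint E[UV] = (ρ̂ + 3)/12, a reduced version of the paper's full constraint set; the density is discretized on a grid and the convex dual of the entropy problem is minimized:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(1)

# Rank-based Spearman estimate from a synthetic dependent sample
# (a stand-in for the paper's phase-one data).
x = rng.normal(size=200)
y = 0.7 * x + 0.7 * rng.normal(size=200)
n = len(x)
rx = np.argsort(np.argsort(x)) + 1          # ranks of x
ry = np.argsort(np.argsort(y)) + 1          # ranks of y
rho_hat = 1 - 6 * np.sum((rx - ry) ** 2) / (n * (n**2 - 1))

# Maximum-entropy density on [0,1]^2 under a reduced constraint set:
# E[U] = E[V] = 1/2 and E[UV] = (rho + 3)/12 (the Spearman constraint).
m = 50
g = (np.arange(m) + 0.5) / m                # grid midpoints
U, V = np.meshgrid(g, g)
T = np.stack([U.ravel(), V.ravel(), (U * V).ravel()])   # sufficient statistics
w = 1.0 / m**2                              # grid-cell area
b = np.array([0.5, 0.5, (rho_hat + 3) / 12])            # target moments

def dual(lam):
    # Convex dual of the max-entropy problem: log-partition minus lam . b.
    return logsumexp(lam @ T) + np.log(w) - lam @ b

def grad(lam):
    p = np.exp(lam @ T - logsumexp(lam @ T))   # cell probabilities
    return T @ p - b

lam = minimize(dual, np.zeros(3), jac=grad, method="BFGS").x
p = np.exp(lam @ T - logsumexp(lam @ T))
c = p / w                                   # density values c(u, v) on the grid
moments = T @ p                             # should reproduce b
print(rho_hat, moments)
```

At the optimum the gradient of the dual vanishes, which is exactly the statement that the fitted density reproduces the target moments; the same mechanism extends to the full constraint set (7.1).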
Then, we form the constraints for the calculation of the maximum copula entropy:

∫∫_I c(u,v) du dv = 1,
∫∫_I uⁱ c(u,v) du dv = 1/(i+1), for i = 1, …, 5,
∫∫_I vⁱ c(u,v) du dv = 1/(i+1), for i = 1, …, 5,
∫∫_I uv c(u,v) du dv = (ρ̂ + 3)/12,
∫∫_I u²v c(u,v) du dv = (ρ̂ − ν̂₁ + 2)/12,
∫∫_I uv² c(u,v) du dv = (ρ̂ − ν̂₂ + 2)/12,
∫∫_I u²v² c(u,v) du dv = η̂ + … .   (7.1)

The second and third conditions, and also the fifth and sixth, are merged to reduce the calculations, as explained before. The maximum copula entropy with respect to Shannon entropy is obtained as:

c(u,v) = exp( −….614 + 4921.810 (u + v) − ….657 (u² + v²) + 3137.956 (u³ + v³) + 4030.942 (u⁴ + v⁴) − ….658 (u⁵ + v⁵) − … uv + 14459.561 (u²v + uv²) − … u²v² ), for all u, v ∈ [0, 1]

(some coefficients and exponents are illegible in this copy; the powers shown follow the constraint set (7.1)). Therefore, the joint density function is computed by (4.3): simply multiply c(u,v) by the margins. The final step is to obtain the UCL by solving equation (5.1).
The UCL is 3.… . In Figure 3, we draw the first 20 samples along with the UCL. Five points are above the control limit, so we must build another UCL in the absence of those samples that are not in control. The T-Hotelling values for the second phase are calculated based on the first 20 samples of stage 1. Bold values mark samples that are out of control, but this result is not yet reliable, so we recalculate the UCL in stage 2. To do this, all the calculations must be repeated from the beginning. The marginal distributions are:

f_X(x) = exp(1.… − … x), x ∈ S_X,
f_Y(y) = exp(−… − … y), y ∈ S_Y.

The dependency measures are:

ρ̂ = 0.…,  η̂ = 0.…,  ν̂₁ = 0.…,  ν̂₂ = 0.… .

So, the maximum copula entropy is:

c(u,v) = exp( −….656 + 4126.525 (u + v) − ….327 (u² + v²) − ….043 (u³ + v³) + 11897.127 (u⁴ + v⁴) − ….781 (u⁵ + v⁵) − … uv + 12658.220 (u²v + uv²) − … u²v² ), for all u, v ∈ [0, 1].

The second UCL is 3.…, and the third UCL is 3.… .

Figure 3. Control chart for the first 20 samples in the first stage.

We repeat this procedure to achieve a data set that has the desired quality and controlled conditions. In the last step, we reach a control limit that includes all the samples of phase one, so this control limit is reliable; the final UCL is 2.… . The T-Hotelling statistics of the four stages are given in Table 8; each is based on the samples accepted at the same stage of the calculation. The bold values are the out-of-control samples in each stage. The control limits of the earlier stages are not accurate enough to detect all changes.

7.2. Flood events.
The second example is from Yue [33], with three variables, duration, volume, and peak, for floods from 1919 to 1995, recorded in the Madawaska river basin in the province of Quebec, Canada. Cheng and Mukherjee [2] built a T-Hotelling control chart for the first two variables, assuming the first 70 samples belong to phase one and the rest to phase two. In this example, we construct a proper control limit for the data according to the durations and volumes of the floods, denoted X and Y. The computation steps are briefly discussed here. As mentioned in 7.1, the first step is to obtain the marginal distributions for phase one of the data, i.e., the first 70 samples. The margins are found via the maximum entropy principle with respect to Shannon entropy. The conditions are based on the means, which for phase one are 80.… and ….657 for X and Y, respectively. So, the density functions are:

f_X(x) = exp(−… − … x), x ∈ S_X,
f_Y(y) = exp(−… − … e^(−y)), y ∈ S_Y.
(a) Sample 17 is out of control. (b) Sample 5 is out of control.

Figure 4. Control plots of stages 2 and 3. In each stage, samples 17 and 5 are removed in order to find a suitable UCL.

In this example, we assume that the dependencies between the variables have not changed over 110 years, and the dependency measures over these years are:

ρ̂ = 0.…,  η̂ = 0.…,  ν̂₁ = 0.…,  ν̂₂ = 0.…,

where the means are 76.… and ….258 for the variables, respectively. The copula function according to these measures and the conditions presented in 7.1 is:

c(u,v) = exp( … − … (u + v) − … (u² + v²) + 21517.… (u³ + v³) − … (u⁴ + v⁴) − … (u⁵ + v⁵) + 22505.… uv − … (u²v + uv²) + 20931.… u²v² ), for all u, v ∈ [0, 1].

So, the joint density function of X and Y is obtained by substituting the marginal and copula functions into (4.3). The upper control limit is calculated by solving equation (5.1) for the UCL, which gives 6.… . The stage-two margins are:

f_X(x) = exp(−… − … x), x ∈ S_X,
f_Y(y) = exp(−… − … e^(−y)), y ∈ S_Y.

Since no change in the dependence between the two variables is assumed, the same copula function is used for this step of the calculation. The final UCL is 6.… . The T-Hotelling values of phase two for the two different stages are exhibited in Table 9, along with the Hotelling statistics for phase one. The bold values are detected as out of control according to each stage's UCL.

Table 8.
The samples, along with the corresponding T-Hotelling values of the four different stages, are presented in order to obtain reliable control limits and detect the samples of undesired quality. [Columns: Num, X, Y, and the T-Hotelling statistic for Stages 1–4; the tabulated values are illegible in this copy.]

There is an interesting point in the results. Samples 81 and 84 are flagged as out of control in the first stage, while in the second stage they fall within the control range. Comparing the durations and volumes of these samples with the other in-control data, we find that the second-stage detection is nearly correct.
(a) The out-of-control samples are detected. (b) A new control limit is defined according to the rest of the samples.

Figure 5. Control plots for the second example. In stage 2, samples 2, 3, 4, and 61 are removed in order to get an appropriate UCL.

Table 9.
The flood duration and volume are represented as X and Y, respectively. Two columns give the T-Hotelling values for the two stages of calculation. The first 70 samples belong to phase one, and the rest come from phase two. The bold numbers are out of control. [Columns: Num, X, Y, and the T-Hotelling statistic for Stages 1–2; the tabulated values are illegible in this copy.]
Conclusion

In manufacturing processes, several procedures produce multivariate data sets reflecting the quality of different product specifications. In statistical quality control, the main goal is to monitor such data, but their distribution is unknown, so it is hard to define fitting control limits for the process. There are traditional methods for these situations, but they rest on a strong distributional assumption that makes them weak in performance and far from efficient at detecting small shifts in several processes. In this paper, we find a joint density function and then derive suitable control limits. The fundamental idea is to link the maximum entropy principle with copula functions. The copula is added to preserve the original dependency in the data and transfer it to the estimated distribution. In this regard, we apply the T-Hotelling statistic, commonly used with multivariate data, to combine the information. It is common to approximate the T-Hotelling distribution via the Fisher distribution when the data set is normally distributed, but this is not valid in general. In this paper, we instead estimate the T-Hotelling distribution through the maximum copula entropy. We also add a simulation study and find copula functions based on dependency measures such as Spearman's rho and Blest's measure. We draw some maximum entropy copulas and their corresponding density functions to show the effect of dependency. The goal is to obtain the unknown multivariate distribution of manufacturing process data whose variables are dependent, and then to find statistical quality control limits according to this distribution, because some quality control methods pay no attention to the dependency of variables, which leads to mistakes. We use average run lengths as well to show the ability of our method.
Some ARL values are provided, which display the capability on small changes. Two real data studies are added to show the performance in practice. We explain the details in the first example to show how to calculate the control limits with this new method. Also, in this part, the maximum copula entropy performed well when we compared in-control data with out-of-control data.

References
1. Blest, D. C. (2000). Theory and methods: Rank correlation, an alternative measure. Australian and New Zealand Journal of Statistics, 42(1), 101-111.
2. Cheng, Y., Mukherjee, A. (2014). One Hotelling T² chart based on transformed data for simultaneous monitoring of the frequency and magnitude of an event. In 2014 IEEE International Conference on Industrial Engineering and Engineering Management (pp. 764-768). IEEE.
3. Cerqueti, R., Rotundo, G., Ausloos, M. (2018). Investigating the configurations in cross-shareholding: A joint copula-entropy approach. Entropy, 20(2), 134.
4. Cesari, A., Reißer, S., Bussi, G. (2018). Using the maximum entropy principle to combine simulations and solution experiments. Computation, 6(1), 15.
5. Chen, L., Singh, V. P., Guo, S., Zhou, J., Ye, L. (2014). Copula entropy coupled with artificial neural network for rainfall-runoff simulation. Stochastic Environmental Research and Risk Assessment, 28(7), 1755-1767.
6. Chu, B. (2011). Recovering copulas from limited information and an application to asset allocation. Journal of Banking and Finance, 35(7), 1824-1842.
7. Chu, B., Satchell, S. (2016). Recovering the most entropic copulas from preliminary knowledge of dependence. Econometrics, 4(2), 20.
8. Fallah Mortezanejad, S. A., Borzadaran, G. M., Gildeh, B. S. (2019). An entropic structure in capability indices. Communications in Statistics - Theory and Methods, 1-11.