Inter-class orthogonal main effect plans for asymmetrical experiments
Sunanda Bagchi, Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, Bangalore 560059, India.
Abstract: In this paper we construct 'inter-class orthogonal' main effect plans (MEPs) for asymmetrical experiments. In such a plan, a factor is orthogonal to all others except possibly the ones in its own class. We also define the concept of 'partial orthogonality' between a pair of factors; in many of our plans, partial orthogonality is achieved when (total) orthogonality is not possible due to divisibility or other restrictions. We present a method of obtaining inter-class orthogonal MEPs. Using this method, together with a method of 'cut and paste', we obtain several series of inter-class orthogonal MEPs. Interestingly, some of these happen to be orthogonal MEPs (OMEPs); for example, we construct an OMEP for a three-level experiment on 64 runs. Further, many of the inter-class orthogonal MEPs are 'almost orthogonal', in the sense that each factor is orthogonal to all others except possibly one. In many of the other MEPs, factors are 'orthogonal through another factor', which leads to a simplification in the analysis. Plans of small size (at most 15 runs) are also constructed by ad-hoc methods. Finally, we present a user-friendly computational method for analysing data obtained from any general factorial design.

AMS Subject Classification: 62K10.

Key words and phrases: main effect plans, 'inter-class' orthogonality, orthogonality 'through' another factor.
Introduction

In many industrial experiments, such as screening experiments, the interest often lies only in the main effects of the factors. The wide use of orthogonal main effect plans (OMEPs) for such experiments is due to their orthogonality property, which ensures uncorrelated, and hence most precise, estimation of every main effect contrast of every factor, apart from providing great simplicity in the analysis, as is well known. However, orthogonality requires certain divisibility conditions, and so an OMEP for an asymmetrical experiment often requires a large number of runs [see Dey and Mukerjee (1999) and Hedayat, Sloane and Stufken (1999) for details]. The proportional frequency (PF) plans of Addelman (1962) are OMEPs with possibly unequal replications for one or more factors, and thus require weaker conditions for existence. However, very few unequally replicated PF plans are known, apart from Starks's (1964) plan for a 3^7 experiment on 16 runs. Thus, the problem of the availability of an OMEP with a not-too-large run size remains. In such situations the question arises whether, with a smaller run size, one can find an alternative plan: something not as good as an OMEP, but not too bad either.

Of late, departures from full orthogonality have been investigated in the context of main effect plans (MEPs). In the 'nearly orthogonal' plans of Wang and Wu (1992), factors are allowed to be non-orthogonal to a few of the other factors. Subsequently, other nearly orthogonal MEPs having interesting combinatorial properties have been proposed and studied by Nguyen (1996), Ma, Fang and Liski (2000), Huwang, Wu and Yen (2002) and Xu (2002).

Why do we look for 'near orthogonality'? Why can we not go further and use a fully non-orthogonal plan? If we are willing to use non-orthogonal plans, we have tremendous flexibility.
We could, for instance, run a two-level experiment on 5 runs (instead of 8) [see plan A(4) of Example 2.1] or a three-level experiment on 12 runs (instead of 16) [see plan A(4) in Section 5]. One hurdle to the usability of such plans is the complexity of the data analysis. The reduction in precision is, of course, another problem.

In the present paper our main aim is to provide main effect plans (MEPs) for asymmetrical experiments with small run sizes, deviating 'as little as possible' from desirable properties like orthogonality and/or equal replication, so that the analysis remains relatively simple. Specifically, we construct plans satisfying 'inter-class orthogonality', in which each factor is possibly non-orthogonal to the members of its own class, but orthogonal to the factors of all other classes. In the process we have also obtained a series of orthogonal MEPs, including one for a three-level experiment on 64 runs (see Theorem 3.5). In many of our plans the class size is at most two, so that a factor is orthogonal to all others except possibly one. Among plans of larger class size, many satisfy the property that within-class factors are 'orthogonal through another factor' (in the same class), which again leads to a simplification in the analysis [see Example 2.1 and Theorem 3.3(c)].

We have also defined the concept of 'partial orthogonality' between a pair of factors and derived a sufficient condition for it [see Definition 2.2, Lemma 2.1 and the discussion thereafter]. In many of our plans, partial orthogonality has been achieved between one or more pairs of factors when (full) orthogonality is not possible due to divisibility or other restrictions.

The definitions, along with examples, are presented in Section 2. In Section 3 we construct a few series of inter-class orthogonal MEPs for asymmetrical experiments with factors having at most five levels. Using ad-hoc methods we have also constructed MEPs, on at most 15 runs, in which each factor is non-orthogonal to at most one or two other factors; these are in Section 5.
These plans include saturated plans for several asymmetrical experiments on 8, 12 and 15 runs. In Section 4 we present a user-friendly method of analysis.

We believe that the information presented in Theorem 4.6 and the other results of Section 4 will help the experimenter to get a clear idea of the efficiencies of the BLUEs of the main effects, as well as of the amount of computation involved in the analysis of a non-orthogonal plan. These features may be compared with those of other available plans, such as a 'plan orthogonal through one factor' or an 'inter-class orthogonal plan'. This paper has been posted at arxiv.org/abs/1512.06588.

Definitions and examples

Throughout this paper we shall be concerned with main effect plans, that is, plans aiming at gaining information only about the main effects, assuming interactions to be negligible. In all plans presented henceforth, rows represent factors, while columns represent runs.

Let us consider a main effect plan (MEP) for an experiment with m factors A, B, ··· on n runs. Suppose the factor A has a levels, B has b levels, and so on. Then the plan will be referred to as an m-factor MEP and will be represented by an m × n array ρ(n, m; a, b, ···). Here r_A(i) will denote the number of runs in which factor A is at level i, while the vector r_A = (r_A(1), ···, r_A(a)) will be referred to as the replication vector of the factor A. For two factors A, B, the incidence matrix N_AB is the a × b matrix with (i, j)th entry n_AB(i, j) = the number of runs in which A is at level i and B is at level j. Clearly, N_AA is a diagonal matrix whose diagonal entries are those of r_A in the same order; it will sometimes be denoted by R_A.

Definition 2.1
Let us consider an m-factor MEP ρ on n runs. Suppose the set of factors of ρ can be divided into several classes in such a way that every factor is orthogonal to every factor from a different class. Then ρ is called inter-class orthogonal. An inter-class orthogonal MEP on n runs with m factors divided into p classes, the factors in the i-th class having levels s_{i1}, s_{i2}, ···, will be denoted by ρ(n, m; {s_{11}.s_{12}.···}.{s_{21}.s_{22}.···}.···). A plan with at most m factors in a class may be referred to as an inter-class(m) orthogonal MEP.
Remark 2.1:
Any main effect plan may be looked upon as an inter-class(m) orthogonal MEP for some m. For instance, an OMEP may be viewed as an inter-class(1) orthogonal MEP, while an MEP with p factors, no one of which is orthogonal to any other, is inter-class(p) orthogonal. The plan L′(3.2^8) of Wang and Wu (1992) is an inter-class(8) orthogonal MEP according to the present terminology, as its 8 two-level factors are mutually non-orthogonal. We see that the term inter-class(m) orthogonal does not always display the exact picture, as there may be classes of size much smaller than m, as in the case of L′(3.2^8). The term is informative when the class sizes are close to one another, which is the case for the plans constructed here.

Examples:
We now present two inter-class orthogonal plans along with their graphical representations, in which adjacency represents orthogonality. The interpretation of the dotted lines between factors is explained in Remark 2.4.
Example 2.1: A(1) = ρ(8, { }.{ }): an inter-class orthogonal plan on 8 runs with factors A, B, C, D, E. [The plan array and its orthogonality graph are not reproduced here.]
Example 2.2: A(1) = ρ(12, { . }.{ }): a plan on 12 runs with factors A, B, C, D, E. [The plan array and its orthogonality graph are not reproduced here.]

In this equal-frequency saturated plan, each of the four-level factors B and C forms a generalized group divisible design with the levels of the two-level factor A. Between themselves, B and C form a balanced incomplete block design (BIBD). The relation between the factors D and E is presented in detail after Remark 2.4.

The relation between the factors A and B in the 8-run plan and between D and E in the 12-run plan motivates us to define the concept of partial orthogonality between two factors.

Definition 2.2
We say that the factor A is partially orthogonal (PO) to another factor B if the BLUE of at least one (but not all) of the main effect contrasts of A is orthogonal to the BLUEs of all the main effect contrasts of B.

Remark 2.2:
One can verify that in the MEP presented in (2) of Huwang, Wu and Yen (2002), the three-level factors are partially orthogonal to each other. In fact, the relation between every pair of three-level factors in that plan is just like the relation between A and B of the plan A(1) of Example 2.1 [see Table 2.1 below]. More examples are given in Section 3.

Remark 2.3: If A is PO to B, then B is either PO to A (as in plan A(1) of Example 2.1) or non-orthogonal to A (as in plan A(2) in Section 3). Regarding the analysis, however, what matters is whether A and B are mutually orthogonal or not. Thus, partial orthogonality is a feature of estimation and has no role to play in the testing of hypotheses.

Remark 2.4:
If two factors are partially orthogonal to each other, then in the graphical representation they are joined by dotted lines.
A statement like "A is PO to B" immediately raises the question: which contrasts of A are orthogonal to those of B? We shall now see how the incidence matrix N_AB helps us to find at least a partial answer to this question.

How to check orthogonality of a contrast of A to those of B: We recall the proportional frequency condition of Addelman (1962).
Definition 2.3 [Addelman (1962)]: Consider a main effect plan ρ on n runs. Two factors A and B are said to be orthogonal to each other if the incidence matrix N_AB of A and B satisfies the proportional frequency condition (PFC) stated below:

n_AB(i, j) = r_A(i) r_B(j)/n,  i = 1, 2, ···, a;  j = 1, 2, ···, b.  (2.1)

We now define a PFC between one factor and certain levels of another factor.

Definition 2.4
Consider two factors A and B, with a and b levels respectively, of a main effect plan ρ on n runs.

(a) If a level i of A satisfies

n_AB(i, j) = r_A(i) r_B(j)/n,  j = 1, 2, ···, b,  (2.2)

then we say that the level i of A satisfies the PFC with factor B.

(b) If a pair of levels i and k of A satisfies

n_AB(i, j)/r_A(i) = n_AB(k, j)/r_A(k),  j = 1, 2, ···, b,  (2.3)

then the pair {i, k} of levels of A is said to satisfy the PFC with factor B.

We use the notation α_i for the unknown effect of level i of the factor A, 1 ≤ i ≤ a, and similar notation for the other factors. Further, α̂_i − α̂_j will denote the BLUE of the contrast α_i − α_j, with similar notation for other contrasts. The proof of the following result is by straightforward verification.

Lemma 2.1 (a) If a level i of A satisfies the PFC with B, then the BLUE of the main effect contrast (a − 1)α_i − Σ_{j≠i} α_j is orthogonal to the BLUEs of all main effect contrasts of B.
(b) If the pair of levels {i, k} of A satisfies the PFC with B, then the BLUE of the main effect contrast α_i − α_k is orthogonal to the BLUEs of all main effect contrasts of B.

We now illustrate these results with the help of the plans of Examples 2.1 and 2.2.

Plan A(1) on 8 runs: We note that the factors A and C satisfy the PFC [see equation (2.1)] and hence are mutually orthogonal. Similarly, the pairs (A, D), (A, E), (B, C), (B, D) and (B, E) are also mutually orthogonal. Regarding the pair of factors (A, B), we see that the PFC is not satisfied. However, level 0 of A satisfies the PFC with factor B, as shown in Table 2.1. Therefore, by (a) of Lemma 2.1, the contrast 2α̂_0 − α̂_1 − α̂_2 is orthogonal to both the contrasts of B. By the same argument, the contrast 2β̂_0 − β̂_1 − β̂_2 is orthogonal to both the contrasts of A.

Table 2.1: The incidence matrix N_AB, the replication vectors r_A, r_B and the matrix r_A r_B′/n for the pair (A, B) of the 8-run plan (n = 8). [The numerical entries are not reproduced here.]

Plan A(1) on 12 runs: We note that the pair of three-level factors D and E does not satisfy the PFC. However, levels 0 and 2 of D satisfy the PFC with E, and so, by (b) of Lemma 2.1, the contrast δ̂_0 − δ̂_2 is orthogonal to both the contrasts of E. By the same argument, the corresponding contrast of E is orthogonal to both the contrasts of D. In Table 2.2, r denotes the constant replication number of D.

Table 2.2: The incidence matrix N_DE, the matrix N_DE/r and the replication vectors r_D, r_E for the 12-run plan (n = 12). [The numerical entries are not reproduced here.]

Discussion:
What is the use of partial orthogonality? This may be viewed as a "something is better than nothing" approach. If it is not possible to make A and B mutually orthogonal, we may at least make them partially orthogonal, if possible. However, the issue is more complicated since, to achieve one condition, we may have to sacrifice another. Let us look at the following situations. Consider two factors A and B with a and b levels respectively.

Case 1. ab does not divide n, so there does not exist any plan in which A and B are mutually orthogonal, each with equal frequency. In case a proportional frequency plan exists, that of course is the best option. Suppose such a plan is not known. If we know a plan, say ρ_1, in which A is partially orthogonal to B, then the experimenter would be happy to be able to estimate at least a few among the main effect contrasts of A with maximum precision. However, this may lead to "too small" a precision for the other contrasts of A. Suppose a plan ρ_2 is also available in which A and B are not partially orthogonal, but all the contrasts of A and B are estimated with "reasonably high" precision. Whether the experimenter would prefer ρ_1 or ρ_2 depends on the importance she attaches to each contrast [see Remark 3.3].

Case 2. ab divides n, so orthogonality between A and B is possible. However, in the only available plan (say ρ_1) in which A and B are mutually orthogonal, various other pairs of factors are mutually non-orthogonal. Suppose another plan ρ_2 is also available in which A is only partially orthogonal to B, but several pairs of factors which are mutually non-orthogonal in ρ_1 are orthogonal in ρ_2. Which plan should the experimenter choose? Again, the choice depends on the importance attached to the different contrasts of the different factors [see Remark 5.1].

We hope that in future more nearly orthogonal, inter-class orthogonal and other similar plans will become available, so that experimenters have a wider range of options.
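The proportional frequency checks (2.1), (2.2) and (2.3), together with Lemma 2.1, are easy to explore numerically. Below is a hedged sketch; the two small plans are hypothetical illustrations (they are not the plans of the examples above), and all helper names are our own.

```python
# Hedged sketch (not from the paper): numerical checks of Addelman's
# proportional frequency condition (2.1) and its level-wise (2.2) and
# pair-wise (2.3) versions. A plan is a dict mapping factor names to the
# list of levels over the runs; both plans below are hypothetical.
from fractions import Fraction

def incidence(plan, f1, f2):
    """N_{f1 f2}: entry (i, j) counts runs with f1 at level i and f2 at level j."""
    a, b = max(plan[f1]) + 1, max(plan[f2]) + 1
    N = [[0] * b for _ in range(a)]
    for i, j in zip(plan[f1], plan[f2]):
        N[i][j] += 1
    return N

def replication(plan, f):
    """Replication vector r_f."""
    r = [0] * (max(plan[f]) + 1)
    for lev in plan[f]:
        r[lev] += 1
    return r

def pfc(plan, f1, f2):
    """(2.1): n(i, j) = r_{f1}(i) r_{f2}(j) / n for all i, j (orthogonality)."""
    n, N = len(plan[f1]), incidence(plan, f1, f2)
    r1, r2 = replication(plan, f1), replication(plan, f2)
    return all(Fraction(N[i][j]) == Fraction(r1[i] * r2[j], n)
               for i in range(len(r1)) for j in range(len(r2)))

def level_pfc(plan, f1, i, f2):
    """(2.2): the single level i of f1 satisfies the PFC with f2."""
    n, N = len(plan[f1]), incidence(plan, f1, f2)
    r1, r2 = replication(plan, f1), replication(plan, f2)
    return all(Fraction(N[i][j]) == Fraction(r1[i] * r2[j], n)
               for j in range(len(r2)))

def pair_pfc(plan, f1, i, k, f2):
    """(2.3): the pair {i, k} of levels of f1 satisfies the PFC with f2."""
    N, r1 = incidence(plan, f1, f2), replication(plan, f1)
    return all(Fraction(N[i][j], r1[i]) == Fraction(N[k][j], r1[k])
               for j in range(len(N[0])))

# A fully orthogonal pair of factors: (2.1) holds ...
ortho = {"A": [0, 0, 0, 1, 1, 1], "B": [0, 1, 2, 0, 1, 2]}
# ... and a non-orthogonal pair in which only level 0 of A satisfies (2.2),
# so by Lemma 2.1(a) the contrast 2a_0 - a_1 - a_2 is orthogonal to B.
partial = {"A": [0, 0, 0, 0, 1, 1, 2, 2], "B": [0, 0, 1, 2, 0, 1, 0, 2]}
```

Here Lemma 2.1(a) applies to `partial` with i = 0, while Lemma 2.1(b) applies to every pair of levels of factor A of `ortho`.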
Orthogonality through another factor:
The concept of "orthogonality (between two treatment factors) through a nuisance factor" was introduced by Morgan and Uddin (1996) in the context of nested row-column designs. In Bagchi (2010), "orthogonality through the block factor (OTB)" is studied in detail. This concept can easily be extended to the case where the third factor is also a treatment factor.
Definition 2.5
Consider three factors A, B and C of an MEP. We say that A is orthogonal to B "through" C if the incidence matrices N_AB, N_BC and N_AC satisfy the following condition:

N_AC (R_C)^{-1} N_CB = N_AB.  (2.4)

Example 2.1:
Consider two MEPs, with two- and three-level factors, on 5 runs: A_5(1) = ρ(5; {2 × 3}), with factor rows A, B, C, and A_5(2) = ρ(5, { }), with factor rows A, B, C, D. (2.5) [The two plan arrays are not reproduced here.]

In the plan A_5(1), A is orthogonal to B through C, while in A_5(2) every factor in {A, B, C} is orthogonal to every other through D. For more examples of such plans, see the equations next to (3.14). For the analysis see Theorem 4.5; see also Remark 4.5.

Definition 3.1
Consider an MEP ρ(n, m; a, b, ···). Suppose there exists another MEP ρ_1(a, l; t_1, t_2, ···, t_l), l ≥ 2, such that

Σ_{i=1}^{l} (t_i − 1) ≤ a − 1.  (3.6)

Then we construct a new MEP ρ̃ with n runs by replacing the level u of factor A by the u-th column (run) of ρ_1, for each u, 0 ≤ u ≤ a − 1. We say that the factor A is replaced by a class G_A of l factors related through ρ_1, and ρ_1 will be said to be the replacing array for A. In the same way we can replace two or more factors of a given MEP, through two or more suitable replacing arrays.

We now try to find conditions on the replacing array under which the resultant plan satisfies certain desirable properties.
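The replacement operation of Definition 3.1 is easy to mechanise, and its orthogonality-preserving behaviour (asserted in Lemma 3.1 below) can be confirmed on small cases. The parent plan and replacing array in this sketch are hypothetical, chosen only for illustration.

```python
# Hedged sketch of the replacement construction (Definition 3.1): level u of
# the chosen factor is replaced by the u-th column of the replacing array,
# turning one a-level factor into a class of l new factors. All arrays here
# are hypothetical, not the paper's.
from fractions import Fraction

def replace_factor(plan, factor, replacing):
    """`replacing` is an l x a array; its column u substitutes for level u."""
    new_plan = {f: levels[:] for f, levels in plan.items() if f != factor}
    for k, row in enumerate(replacing):
        new_plan[f"{factor}{k + 1}"] = [row[u] for u in plan[factor]]
    return new_plan

def orthogonal(plan, f1, f2):
    """Proportional frequency condition (2.1) between f1 and f2."""
    n = len(plan[f1])
    a, b = max(plan[f1]) + 1, max(plan[f2]) + 1
    N = [[0] * b for _ in range(a)]
    for i, j in zip(plan[f1], plan[f2]):
        N[i][j] += 1
    r1, r2 = [sum(row) for row in N], [sum(col) for col in zip(*N)]
    return all(Fraction(N[i][j]) == Fraction(r1[i] * r2[j], n)
               for i in range(a) for j in range(b))

# Hypothetical parent plan on 8 runs: A (4 levels) is orthogonal to B (2 levels).
parent = {"A": [0, 1, 2, 3, 0, 1, 2, 3], "B": [0, 0, 0, 0, 1, 1, 1, 1]}
# Replacing array for A: two 2-level factors; satisfies (3.6): (2-1)+(2-1) <= 3.
rho_A = [[0, 0, 1, 1],
         [0, 1, 0, 1]]
derived = replace_factor(parent, "A", rho_A)
```

In `derived`, both new factors A1 and A2 remain orthogonal to B, in line with Lemma 3.1(a).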
Lemma 3.1
Consider a set of factors R = {A, B, ···} of an MEP ρ. Suppose ρ̃ is an MEP obtained from ρ by replacing each factor of R by a group of factors; more precisely, the factor A (respectively B) is replaced by the class of factors G_A (respectively G_B) related through ρ_A (respectively ρ_B). Then the following hold.

(a) If A and B are mutually orthogonal (with equal or unequal frequency) in the original plan ρ, then every factor in the class G_A is orthogonal to every factor in the class G_B in the derived plan ρ̃, generally with unequal frequency.
(b) In ρ̃, two factors of G_A are partially (respectively totally) orthogonal if and only if the corresponding rows of ρ_A are partially (respectively totally) orthogonal.
(c) If ρ and each of the replacing arrays ρ_A etc. are saturated, then so is ρ̃.

Proof:
We shall prove (a); (b) follows by a similar argument and (c) by straightforward counting.

Proof of (a): Fix a factor, say K, of G_A and a factor L of G_B. Let β_s (respectively γ_t) denote the set of runs of ρ_A (respectively ρ_B) in which the level s of K (respectively t of L) appears. Let Ñ_KL, r̃_K, r̃_L denote the incidence matrix of K, L and the replication vectors of K and L respectively in ρ̃. Then the (s, t)th entry of Ñ_KL is given by

ñ_KL(s, t) = Σ_{i ∈ β_s} Σ_{j ∈ γ_t} n_AB(i, j).  (3.7)

From this we obtain that, for a level s of K,

r̃_K(s) = Σ_{i ∈ β_s} Σ_t Σ_{j ∈ γ_t} n_AB(i, j) = Σ_{i ∈ β_s} r_A(i).  (3.8)

Similarly, r̃_L(t) = Σ_{j ∈ γ_t} r_B(j). But N_AB, r_A, r_B satisfy (2.1) by hypothesis. Combining that with the relations above, we see that Ñ_KL, r̃_K, r̃_L also satisfy (2.1). □

We present the well-known definition of an orthogonal array.
Definition 3.2
Let m, n, t ≥ 1 be integers and let s = (s_1, ···, s_m) be a vector of integers ≥ 2. Then an orthogonal array of strength t is an m × n array, with the entries of the i-th row coming from a set of s_i symbols, satisfying the following: all t-tuples of symbols appear equally often as columns in every t × n subarray. Such an array is denoted by OA(n, m, s_1 × ··· × s_m, t). When s_1 = s_2 = ··· = s_m = s, say, the array is represented by OA(n, m, s, t).

Corollary 3.1
Suppose there exists an orthogonal array OA(n, m, s, 2). Suppose further that, for an integer k (< m), there exist arrays ρ_i = ρ(s, l_i; t_{i,1}, ···, t_{i,l_i}) satisfying s − 1 ≥ Σ_{j=1}^{l_i} (t_{i,j} − 1), i = 1, 2, ···, k. Then an inter-class orthogonal MEP ρ(n, l; Π_{i=1}^{k} {t_{i,1} × ··· × t_{i,l_i}}.s^{m−k}) exists. Here l = Σ_{i=1}^{k} l_i + m − k.

Proof:
Let ρ be the orthogonal MEP represented by the given orthogonal array. We replace the i-th factor by a group of factors related through ρ_i, i = 1, ···, k, to form a new MEP ρ′. Clearly ρ′ has l = Σ_{i=1}^{k} l_i + m − k factors. That ρ′ is inter-class orthogonal with the given parameters follows from Lemma 3.1. □

Examples of replacing arrays with desirable properties:
We have seen that to obtain a useful inter-class orthogonal MEP, one needs replacing arrays with desirable properties. We now present a few such arrays. In each plan the factors are named A, B, ···, in that order. The set of s levels of a factor will be denoted by the set of integers modulo s.

Plan with s runs, two factors with p and q levels, p + q = s + 1:

A_s(1) = [ 0 1 ··· p−1 0 0 ··· 0 ; 0 0 ··· 0 1 2 ··· q−1 ].  (3.9)

Remark 3.1: The BLUE of the contrast α_i − α_j of factor A is orthogonal to the BLUEs of the contrasts of B, for i ≠ j, i, j ≥ 1. Similarly, the BLUE of the contrast β_i − β_j of B is orthogonal to the BLUEs of the contrasts of A, for i ≠ j, i, j ≥ 1.

Plans with 4 runs:
A plan, say A_4(1), may be obtained by putting s = 4, p = 2, q = 3 in (3.9). We now present another plan:

A_4(2) = ρ(4; {3 × 2}) = [ 0 0 1 2 ; 0 1 0 1 ].

Remark 3.2: (a) In A_4(2), condition (2.2) is satisfied by level 0 of factor A, so that 2α̂_0 − α̂_1 − α̂_2 is orthogonal to the BLUE of the main effect contrast of B.

Plans with 5 runs:
Two plans, namely A_5(1) = ρ(5; {2 × 3}) and A_5(2) = ρ(5, { }), are presented in Example 2.1. Two other plans, A_5(3) and A_5(4), are obtained from (3.9) by putting s = 5, p = 4, q = 2 and s = 5, p = q = 3, respectively.

Plans with 7 runs:
A plan, say A_7(1), may be obtained by putting s = 7, p = 6, q = 2 in (3.9). Another plan is A_7(2) = ρ(7; { }). [Its array is not reproduced here.] A third plan, say A_7(3) = ρ(7; { }), may be obtained by taking the first two rows and the columns numbered 2, 3, 7, 8, 10, 12 and 13 from the array R in (3.17).

In the next section we use suitable arrays from the list above to replace one or more rows of existing orthogonal arrays and so obtain inter-class orthogonal MEPs. Before that, we compare the two replacing arrays A_4(1) and A_4(2) with regard to the precision of the BLUEs of the main effect contrasts. We first compute the C-matrices (the coefficient matrices); C_AA;Ā denotes the coefficient matrix of the system of reduced normal equations for factor A [see Notation 4.3 and (c) of Corollary 4.1]. It is rather surprising that C_BB;B̄ is the same for both plans:

C_BB;B̄ = [ 1/2  −1/2 ; −1/2  1/2 ].

C_AA;Ā is, however, different in the two plans. For A_4(1) it is

[ 2/3  −1/3  −1/3 ; −1/3  2/3  −1/3 ; −1/3  −1/3  2/3 ],

while for A_4(2) it is

[ 1  −1/2  −1/2 ; −1/2  1/2  0 ; −1/2  0  1/2 ].

Remark 3.3:
We note that both contrasts of the three-level factor A are estimated with the same precision in A_4(1). In A_4(2), however, the contrast 2α̂_0 − α̂_1 − α̂_2 is orthogonal to the BLUE of the contrast of B and hence is estimated with the maximum possible precision (given the replication vector), while the contrast α̂_1 − α̂_2 is estimated with much less precision. Thus, while replacing a four-level factor, the experimenter may choose between A_4(1) and A_4(2), depending on whether or not equal importance is attached to the two contrasts.

Our starting point is an OA(n, m, s, t) (see Definition 3.2).

Theorem 3.1 (a) Whenever an OA(n, m, Π_{i=1}^{m} s_i, 2) exists, an inter-class orthogonal MEP ρ(n, 2m; Π_{i=1}^{m} {(s_i − t_i).(t_i + 1)}) exists. Here t_i is an integer, 1 ≤ t_i ≤ s_i − 2.
(b) These inter-class orthogonal MEPs may be constructed so as to satisfy a partial orthogonality property among the members of the same class, similar to the description in Remark 3.1.

Proof: (a) For every i, 1 ≤ i ≤ m, one can choose a 2 × s_i array, say ρ_i, with p = s_i − t_i symbols in the first row and q = t_i + 1 symbols in the second row. Now ρ_i may be used as the replacing array for the i-th factor of the given OA.
(b) In particular, if ρ_i has the same structure as A_s(1), with s = s_i, p = s_i − t_i and q = t_i + 1, then the members of the i-th class will satisfy the stated partial orthogonality property. □

Theorem 3.2
Suppose s = 3, 4, 5 or 7. Whenever an OA(n, m, s, 2) exists, the following series of inter-class orthogonal MEPs ρ exist. Here p, q, r, t, u are non-negative integers.

ρ = ρ(n; 3^p.{2.2}^q),                     p + q = m,             if s = 3,
    ρ(n; 4^p.{2.3}^q.{2.2.2}^t),           p + q + t = m,         if s = 4,
    ρ(n; 5^p.{4.2}^q.{3.3}^r.{ }^t.{ }^u), p + q + r + t + u = m, if s = 5,
    ρ(n; 7^p.{6.2}^q.{ }^r.{ }^t),         p + q + r + t = m,     if s = 7.  (3.10)

[The classes left as { } correspond to the replacing arrays A_5(1), A_5(2), A_7(2) and A_7(3), whose level patterns are not reproduced here.]

Proof:
Let O be an OA(n, m, s, 2). Keep p (out of the m) factors of O as they are, and replace every other factor by a class of factors related through an appropriate replacing array. This replacing array can be (i) an OA, if one is available (which is the case when s = 4); (ii) one of the replacing arrays shown above; or (iii) a replacing array of a similar type, for instance A_3(1), obtained by putting s = 3 in A_s(1). Corollary 3.1 implies that the MEP thus constructed satisfies the required property. □

Discussion:
1. While applying Theorem 3.2 with s = 4, the experimenter has a choice between the replacing arrays A_4(1) and A_4(2). Remark 3.3 may be useful in making the choice.

2. Comparing an inter-class(2) orthogonal MEP, say ρ_1, constructed in Theorem 3.2 with s = 4, with an existing plan having the same number of runs, we find the following. In the plan ICA(n, l, n − 1 − l) of Huwang, Wu and Yen (2002), a three-level factor is orthogonal to every two-level factor and non-orthogonal to every other three-level factor [see p. 349, line 7 of HWY]. In ρ_1, every three-level factor is orthogonal to every other three-level factor and to all but one of the two-level factors (with which it is partially orthogonal, in case A_4(2) is used).

We shall now present a two-stage construction. In the first stage we start with an existing MEP, fix a subset (say R) of its factors, and obtain a number of MEPs by replacing each factor in R by a class of factors; we may use different replacing arrays for the same factor while constructing these first-stage MEPs. In the next stage we juxtapose the first-stage MEPs in a suitable manner to form an array. In order that the resultant array be a meaningful MEP, the replacing arrays need to satisfy certain conditions, as we shall now see.

Definition 3.3
Consider an MEP ρ. Let F denote the class of all factors of ρ. Suppose ρ_P(1) and ρ_P(2) denote two replacing arrays for a factor P. If these replacing arrays have the same number of factors and the same numbers of levels for the corresponding factors, then they are said to be compatible; further, both of them are said to represent the same class of factors, say G_P.

Let ρ_1 and ρ_2 be two MEPs obtained from ρ by replacing the factors in a certain subset R of F. ρ_1 and ρ_2 are said to be compatible w.r.t. the factor P if the corresponding replacing arrays ρ_P(1) and ρ_P(2) for P are compatible, in which case we say that both ρ_1 and ρ_2 are obtained by replacing P with the same class G_P of factors. If ρ_1 and ρ_2 are compatible w.r.t. each factor in R, then they are said to be compatible w.r.t. R.

The following results are immediate from the definition.
Lemma 3.2
Consider an MEP ρ and a subset R of F. Suppose for every factor P in R of ρ there is a class of replacing arrays ρ_P(i, j), j = 1, 2, ···, J, i = 1, 2, ···, I, such that the replacing arrays ρ_P(i, j), j = 1, 2, ···, J, are mutually compatible for every i = 1, 2, ···, I and every P in R. For each i, let G_P(i) denote the class of factors represented by the arrays ρ_P(i, j), j = 1, 2, ···, J.

Now, for every i = 1, 2, ···, I, j = 1, 2, ···, J, we obtain an array ρ_ij by replacing each factor P of ρ by the class of factors G_P(i) related through ρ_P(i, j). Let

ρ* = ((ρ_ij))_{1 ≤ i ≤ I, 1 ≤ j ≤ J}.  (3.11)

Then ρ* represents an MEP satisfying the following.

(a) ρ* can be viewed as an MEP directly obtained from ρ by replacing every P in R with a class G_P of factors related through the array

ρ*_P = ((ρ_P(i, j)))_{1 ≤ i ≤ I, 1 ≤ j ≤ J}.  (3.12)

(b) If P and Q are mutually orthogonal in the original plan ρ, then every factor in the class G_P(i) is orthogonal to every factor in the class G_Q(i′) in the derived plan ρ*, for i, i′ = 1, 2, ···, I.

(c) Fix a factor P of ρ and fix i ≠ i′, i, i′ = 1, 2, ···, I. Let

ρ_P(i, i′) = [ ρ_P(i, 1) ··· ρ_P(i, J) ; ρ_P(i′, 1) ··· ρ_P(i′, J) ].

Consider the set of factors G_P(i) ∪ G_P(i′) of the derived plan ρ*. Two factors in this class are mutually orthogonal if and only if the corresponding factors in the plan represented by ρ_P(i, i′) are so.

Remark 3.4: In our definition of replacing arrays we have used condition (3.6), so that they are not supersaturated and hence the resultant MEPs are also not supersaturated. However, we now relax this condition a bit: we make use of one or more supersaturated replacing arrays in an intermediate stage, but the final MEP will not be supersaturated.
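A minimal numerical illustration of the juxtaposition ("cut and paste") idea behind Lemma 3.2: two first-stage plans are built from the same parent with compatible replacing arrays for the same factor and then pasted side by side. All arrays here are hypothetical; note how the pasting makes the two new factors orthogonal although they fail to be so within each stage.

```python
# Hedged sketch (hypothetical arrays, not the paper's) of the two-stage
# construction: build first-stage plans with compatible replacing arrays,
# then juxtapose them run-wise.
from fractions import Fraction

def replace_factor(plan, factor, replacing):
    """Definition 3.1: column u of `replacing` substitutes for level u."""
    new_plan = {f: levels[:] for f, levels in plan.items() if f != factor}
    for k, row in enumerate(replacing):
        new_plan[f"{factor}{k + 1}"] = [row[u] for u in plan[factor]]
    return new_plan

def paste(p1, p2):
    """Juxtapose two plans with identical factor names: runs are concatenated."""
    return {f: p1[f] + p2[f] for f in p1}

def orthogonal(plan, f1, f2):
    """Proportional frequency condition (2.1)."""
    n = len(plan[f1])
    a, b = max(plan[f1]) + 1, max(plan[f2]) + 1
    N = [[0] * b for _ in range(a)]
    for i, j in zip(plan[f1], plan[f2]):
        N[i][j] += 1
    r1, r2 = [sum(row) for row in N], [sum(col) for col in zip(*N)]
    return all(Fraction(N[i][j]) == Fraction(r1[i] * r2[j], n)
               for i in range(a) for j in range(b))

parent = {"A": [0, 1, 2, 3, 0, 1, 2, 3], "B": [0, 0, 0, 0, 1, 1, 1, 1]}
R1 = [[0, 0, 1, 1],          # two compatible replacing arrays for A: same
      [0, 0, 1, 1]]          # number of factors, same numbers of levels
R2 = [[0, 0, 1, 1],
      [1, 1, 0, 0]]
stage1 = replace_factor(parent, "A", R1)
stage2 = replace_factor(parent, "A", R2)
combined = paste(stage1, stage2)     # a 16-run plan
```

Within `stage1` the factors A1 and A2 coincide, yet in `combined` they are orthogonal, while both stay orthogonal to B, in the spirit of parts (b) and (c) of Lemma 3.2.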
Lemma 3.3
Consider a set-up just like that in the statement of Lemma 3.2, except for the following: there is a factor P and an i, say i_0, such that ρ_P(i_0, j) is supersaturated, i.e. it does not satisfy (3.6), for every j = 1, ···, J.

Let ρ* and ρ*_P be as in Lemma 3.2, and let G_P denote the class of factors G_P = ∪_{i=1}^{I} G_P(i). Then statements (a) and (b) of Lemma 3.2 hold. Further, the following modified form of statement (c) of the same lemma holds.

(c)′ If ρ*_P [see (3.12)] is not supersaturated, then (i) in ρ* any pair of factors in G_P are mutually orthogonal if and only if the corresponding factors in the plan represented by ρ*_P are so, and (ii) the main effect contrasts of each member of G_P can be estimated.

We now apply the technique of two-stage construction to construct more inter-class orthogonal MEPs with two- or three-level factors. Some of them turn out to be (fully) orthogonal.
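Part (c) of the next theorem involves orthogonality "through" another factor. Condition (2.4) of Definition 2.5 can be checked mechanically, as in the following sketch; the 5-run and 4-run plans used here are hypothetical illustrations.

```python
# Hedged sketch: check condition (2.4), N_AC (R_C)^{-1} N_CB = N_AB, i.e.
# "A is orthogonal to B through C" (Definition 2.5). Exact rational
# arithmetic avoids rounding; the plans below are hypothetical.
from fractions import Fraction

def incidence(plan, f1, f2):
    a, b = max(plan[f1]) + 1, max(plan[f2]) + 1
    N = [[0] * b for _ in range(a)]
    for i, j in zip(plan[f1], plan[f2]):
        N[i][j] += 1
    return N

def orthogonal_through(plan, A, B, C):
    """True iff N_AC (R_C)^{-1} N_CB equals N_AB entrywise."""
    N_AC, N_CB, N_AB = incidence(plan, A, C), incidence(plan, C, B), incidence(plan, A, B)
    r_C = [sum(row) for row in N_CB]          # diagonal of R_C
    return all(sum(Fraction(N_AC[i][k] * N_CB[k][j], r_C[k])
                   for k in range(len(r_C))) == N_AB[i][j]
               for i in range(len(N_AC)) for j in range(len(N_AB[0])))

# Hypothetical 5-run plan in which A is constant on each level of C, so A is
# orthogonal to B through C, although A and B are not mutually orthogonal.
plan = {"A": [0, 0, 1, 1, 0],
        "B": [0, 1, 0, 1, 1],
        "C": [0, 0, 1, 1, 2]}
```

For comparison, a plan in which A and B coincide fails (2.4) for any genuine third factor C, as the test below illustrates.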
Theorem 3.3 (a) The existence of an OA(n, m, s, 2), s = 4, 5 or 7, implies the existence of the following inter-class orthogonal MEPs:

ρ = ρ(2n, m; { }^m.2^m) if s = 4,
    ρ(2n, m; { }^m)     if s = 5,
    ρ(2n, m; { }^m)     if s = 7.  (3.13)

Further, these MEPs satisfy the following properties.

(b) In the case s = 4, the pairs of three-level factors are partially orthogonal to each other; in fact, the relation between a pair of three-level factors is just like that between the factors A and B of the plan A(1) of Example 2.1, so that every contrast is orthogonal to all others except possibly one contrast.

(c) In the cases s = 5 and s = 7, among the four members of the same class, every pair among the last three is mutually orthogonal through the first one.

Proof:
Fix s ∈ {4, 5, 7}. Let R_s (of order 4 × 2s) denote a suitably chosen array, partitioned as R_s = ((R_ij))_{1 ≤ i, j ≤ 2}, where each R_ij is 2 × s. Let O denote the given OA. We first construct four arrays ρ_11, ρ_12, ρ_21 and ρ_22 following the method of Theorem 3.2; in this process we use R_ij as the replacing array for each factor to construct ρ_ij, i, j = 1, 2. Let ρ* = ((ρ_ij))_{1 ≤ i, j ≤ 2}, which is the required MEP. By Lemma 3.2 it follows that ρ* may be viewed as the plan obtained by replacing every factor P by the class G_P of four factors related through the replacing array R_s. The rest of the proof follows from the structures of R_s, s = 4, 5, 7, shown below:

R_4 = ···  (3.14)
R_5 = ···  (3.15)
R_7 = ···  (3.16)  □

[The entries of R_4, R_5 and R_7 are not reproduced here.]

Our next result is based on the elegant plan of Starks (1964), which is quoted below. [See Dey (1985), for instance, for an explicit presentation of the plan and more details.]

Theorem 3.4 (Starks (1964)): An OMEP for a 3^7 experiment on 16 runs exists.

Theorem 3.5 (a) The existence of an OA(n, m, 8, 2) implies the existence of an orthogonal MEP for 7m three-level factors on 2n runs.
(b) The existence of an OA(n, m, 4, 2) implies the existence of an orthogonal MEP for 6m three-level factors on 4n runs.

Proof: (a) Let R be the 7 × 16 array with symbols 0, 1, 2 representing the plan of Theorem 3.4. Partition R as R = [R_1 R_2], where each R_i is of order 7 × 8. We first construct arrays ρ_j from the given OA by using the replacing array R_j for every factor, following the method of Theorem 3.2, j = 1, 2. Then we form the required plan ρ* as ρ* = [ρ_1 ρ_2]. That ρ* satisfies the required property follows from Lemma 3.3.

(b) Let R̃ denote the 6 × 16 array obtained by deleting a row (say the 0th one) from R. Now we partition R̃ as R̃ = ((R̃_ij))_{1 ≤ i, j ≤ 4}, such that R̃_ij is of order 2 × 4 for i = 1, 2 and of order 1 × 4 for i = 3, 4. Let A denote the given OA. We construct the array ρ_ij from A by using the replacing array R̃_ij for every factor, following the method of Theorem 3.2, i, j = 1, 2, 3, 4. Then we form the required plan ρ* as

ρ* = ((ρ_ij))_{1 ≤ i, j ≤ 4}.

Note that this procedure may be viewed as follows. Fix a factor, say P, of A. The intermediate arrays ρ_ij, 1 ≤ j ≤ 4, i = 1, 2, are obtained by replacing P of A by two three-level factors each, while in ρ_ij, 1 ≤ j ≤ 4, i = 3, 4, P is replaced by one three-level factor. This fact, together with the choice of the replacing arrays, implies that the class G_P (the class of factors in ρ* replacing P) is nothing but a class of six three-level factors, related through R̃. The rest follows from Lemma 3.3 and the fact that there are m factors in A. □

Analysis of a general main effect plan.
The crucial component of data analysis of a general factorial experiment is, of course, the computation of the error sum of squares. We proceed towards a user-friendly formula for computing SS_E. The results are not new, but are not available in the form presented here. We denote the factors by F_1, F_2, ..., F_m instead of A, B, ... for the sake of notational simplicity.

We assume an additive, fixed-effects, main-effects model with homoscedastic and uncorrelated errors having constant variance σ². 1_n will denote the n × 1 vector of all-ones and J_{m×n} will denote the m × n matrix of all-ones. Let ρ denote a main effect plan on n runs with factors F_1, F_2, ..., F_m, F_i having a_i levels, i = 1, ..., m. Let the unknown effect of the jth level of the factor F_i be denoted by α_ij and let the a_i × 1 vector α_i denote the vector of unknown effects of F_i, 1 ≤ i ≤ m. Let Y_u denote the yield from the uth run, u = 1, 2, ..., n. Then, assuming that in the uth run the factor F_i is set at level l_i = l_i(u), i = 1, ..., m, and denoting the general effect by µ, Y_u is given by

Y_u = µ + Σ_{i=1}^m α_{i,l_i} + ε_u, u = 1, 2, ..., n.

Viewing the general effect as the (m+1)th factor (F_{m+1}) and therefore writing α_{m+1} = µ, we express the model in matrix form as Y = Xβ, where

X = [X_1 · · · X_{m+1}] and β = [α_1 · · · α_{m+1}]′. (4.17)

Here X_i, the design matrix for F_i, is a 0-1 matrix: the (u, t)th entry of X_i is 1 if in the uth run the factor F_i is set at level t and 0 otherwise, i = 1, 2, ..., m, and X_{m+1} = 1_n. Let T_i denote the vector of raw totals of F_i, i = 1, ..., m + 1. Thus, T_{m+1} is the grand total and will sometimes be denoted by G.

Notation 4.1
For any m × n matrix A, C(A) will denote the column space of A. Further, P_A will denote the projection operator onto the column space of A. In other words, P_A = A(A′A)⁻A′, where B⁻ denotes a g-inverse of B.

Notation 4.2
Let I = {1, 2, ..., m + 1} and let S = {i, j, ...} be a subset of I. For the sake of compactness, we introduce the following notation.
(a) ī = I \ {i}.
(b) X_S = [X_i X_j · · ·].
(c) α_S = [α_i · · · α_j]′.
(d) P_i will denote the projection operator onto the column space of X_i, i ∈ I. Further, P_S will denote the projection operator onto the column space of X_S.

The system of reduced normal equations for a class of factors.

Notation 4.3 (a) Let
S, T, U be three subsets of I such that
(i) S ∩ U = T ∩ U = φ and
(ii) either S = T or S ∩ T = φ.
Let us define the matrix C_{S,T;U} and the vector Q_{S;U} as follows:

C_{S,T;U} = ((C_{ij;U}))_{i∈S, j∈T}, C_{ij;U} = X′_i(I − P_U)X_j, (4.18)

Q_{S;U} = ((Q_{i;U}))_{i∈S}, Q_{i;U} = X′_i(I − P_U)Y. (4.19)

(b) In particular, if S and T are singleton sets, say S = {i} and T = {j}, then we may write C_{ij;U} and Q_{i;U} instead of C_{S,T;U} and Q_{S;U} respectively. Sometimes we may write C_{i;U} instead of C_{ii;U}.
(c) Suppose U = {k, m + 1}. Then we may and do write C_{S,T;k} and Q_{S;k} instead of C_{S,T;U} and Q_{S;U} respectively. [This is because P_U = P_k.]

Lemma 4.1
Suppose I is partitioned into two subsets S and U. Then the reduced normal equation for α̂_S, after eliminating α̂_U, is given by C_{S,S;U} α̂_S = Q_{S;U}, where C_{S,S;U} and Q_{S;U} are as given in (4.18) and (4.19).

Remark 4.1:
In order that every main effect contrast of F_i be estimable, the rank of C_{i;ī} must be a_i − 1. Thus, for a plan ρ with m ≥ 3, one has to check whether Rank(C_{i;ī}) = a_i − 1 for every i = 1, 2, ..., m. In view of Remark 4.1 above, we define a class of MEPs, borrowing a term from the theory of block designs.

Definition 4.1
An m-factor MEP is said to be ‘connected’ if
Rank(C_{i;ī}) = a_i − 1, for every i = 1, 2, ..., m.

Henceforth, the MEP ρ under consideration will be assumed to be connected. We now present a few special cases of Lemma 4.1.

Corollary 4.1 (a) Consider a factor, say F_i. Let ī = I \ {i}. Then the BLUE of the main effect contrast l′α_i (in case it is estimable) of F_i is l′α̂_i, where α̂_i is a solution of C_{i;ī} α̂_i = Q_{i;ī}. Here the expressions for C_{i;ī} and Q_{i;ī} are obtained from (b) of Notation 4.3.
(b) In particular, suppose m = 1. Then the reduced normal equation for α_1 (obtained by eliminating only F_2 = µ) is

(R_1 − r_1(r_1)′/n) α̂_1 = T_1 − r_1 G/n. (4.20)

(c) Suppose m = 2. Then the reduced normal equation for α_1 (obtained by eliminating F_2 and F_3 = µ) is C_1 α̂_1 = Q_1, where

C_1 = R_1 − N_12(R_2)⁻N′_12 and Q_1 = T_1 − N_12(R_2)⁻T_2. (4.21)

Notation 4.4
We now define sums of squares for one or more factors, adjusted for one or more other factors. Fix a subset T of I. For i not in T, we define SS_{i;T}, the sum of squares for F_i, adjusted for the factors F_t, t ∈ T. More generally, for S disjoint from T, we define SS_{S;T}, the sum of squares for the set of factors F_i, i ∈ S, viewed as a single factor, adjusted for the factors F_t, t ∈ T:

SS_{i;T} = Q′_{i;T}(C_{i;T})⁻Q_{i;T} and SS_{S;T} = Q′_{S;T}(C_{S,S;T})⁻Q_{S;T}.

Remark 4.2:
Consider two disjoint sets of factors S and T. We may view all the factors in S combined together as a single factor, say F_S, having design matrix X_S. Similarly, F_T combines all the factors in T. Then SS_{S;T} may be viewed as the sum of squares for F_S adjusted for F_T.

In order to study the relationship between the sums of squares, we need the following results on partitioned matrices.

Lemma 4.2
Consider a matrix W partitioned as [U V]. Let Z = (I − P_V)U. Then P_W − P_V = P_Z.

Corollary 4.2 Let T ⊂ I and i ∈ I \ T. Let D = (I − P_T)X_i. Then P_D = P_{T*} − P_T, where T* = T ∪ {i}.

We need some more notation.
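Lemma 4.2 is easy to sanity-check numerically. The sketch below (a hedged illustration: the small random matrix W is an assumption of this example, not part of the paper) computes each projector as P_A = A(A′A)⁻A′ of Notation 4.1, taking the Moore-Penrose pseudoinverse as the g-inverse, and verifies P_W − P_V = P_Z:

```python
import numpy as np

def proj(A):
    # P_A = A (A'A)^- A' of Notation 4.1; the Moore-Penrose
    # pseudoinverse serves as the g-inverse
    return A @ np.linalg.pinv(A.T @ A) @ A.T

# An arbitrary partitioned matrix W = [U V], for illustration only
rng = np.random.default_rng(0)
U = rng.standard_normal((6, 2))
V = rng.standard_normal((6, 3))
W = np.hstack([U, V])

Z = (np.eye(6) - proj(V)) @ U      # Z = (I - P_V) U
lhs = proj(W) - proj(V)            # P_W - P_V
rhs = proj(Z)                      # P_Z
```

Here lhs and rhs agree to machine precision; the same identity, applied with V = X_T and U = X_i, is exactly Corollary 4.2.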
Notation 4.5 (a) The total sum of squares and the error sum of squares will be denoted by SS_tot and SS_E respectively.
(b) Fix a factor F_i, 1 ≤ i ≤ m. Let T = {i + 1, ..., m + 1} and ī = I \ {i}.
(i) Let SS_{i;all>} = SS_{i;T} and
(ii) SS_{i;all} = SS_{i;ī}.
Thus, SS_{i;all>} is the sum of squares for F_i, adjusted for the factors F_{i+1}, ..., F_{m+1}, while SS_{i;all} denotes the sum of squares for F_i, adjusted for all other factors.

Remark 4.3:
Note that SS_{m;all>} is the so-called unadjusted sum of squares for F_m. We are now in a position to present the computational formulae for the error sum of squares.

Theorem 4.1
Consider a main effect plan with m mutually non-orthogonal factors F_1, F_2, ..., F_m. The error sum of squares (SS_E) may be computed from the total sum of squares (SS_tot) as follows:

SS_E = SS_tot − SS_sub, where (4.22)

SS_sub = Σ_{i=1}^m SS_{i;all>}. (4.23)

Theorem 4.2
The data obtained from a connected main effect plan with m mutually non-orthogonal factors may be analyzed using the following table.

Table 2.1: ANOVA for an m-factor non-orthogonal main effect plan

Source | d.f. | S.S. adjusted for all others | S.S. adjusted for the next ones | F-statistic
F_1 | a_1 − 1 | SS_{1;all} | SS_{1;all>} = SS_{1;2,...,m+1} | [SS_{1;all}/(a_1 − 1)] / [SS_E/e]
F_2 | a_2 − 1 | SS_{2;all} | SS_{2;all>} = SS_{2;3,...,m+1} | [SS_{2;all}/(a_2 − 1)] / [SS_E/e]
... | ... | ... | ... | ...
F_m | a_m − 1 | SS_{m;all} | SS_{m;all>} = SS_{m;m+1} | [SS_{m;all}/(a_m − 1)] / [SS_E/e]
To be subtracted | - | - | SS_sub = sum of all above | -
Error | e | SS_E = SS_tot − SS_sub | - | -
Total | n − 1 | SS_tot = Σ_{u=1}^n Y_u² − G²/n | - | -

Here the error degrees of freedom is e = n − 1 − Σ_{i=1}^m (a_i − 1), as usual.

Extension to a general factorial experiment: Consider a plan for a factorial experiment with k factors. Let E denote a factorial effect (a main effect or a t-factor interaction, 2 ≤ t ≤ k). We list the factorial effects under study as E_1, ..., E_m, where m is the number of factorial effects of interest. Then we treat these E_i's in the same way as the main effects F_i's are treated above. That is, we denote the design matrix and the unknown effects of E_i by X_i and α_i as before; of course, the orders of these would be different when the effects are interactions. Thus, following the same argument, we arrive at the following result.

Theorem 4.3
The error sum of squares of a general factorial experiment can be obtained in the samemanner as described in Theorem 4.1.
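As an illustration, the sweep of Theorem 4.1 fits in a few lines of numpy. In this sketch the tiny 4-run two-factor plan and the data vector are hypothetical, chosen only so the result can be verified by hand; SS_E from the sweep is checked against the residual sum of squares of a direct least-squares fit of model (4.17):

```python
import numpy as np

def proj(A):
    return A @ np.linalg.pinv(A.T @ A) @ A.T   # P_A = A(A'A)^- A'

def adj_SS(Xi, XT, Y):
    # SS_{i;T} = Q'_{i;T} (C_{i;T})^- Q_{i;T}, with
    # C_{i;T} = X_i'(I - P_T)X_i and Q_{i;T} = X_i'(I - P_T)Y
    M = np.eye(Xi.shape[0]) - proj(XT)
    Q, C = Xi.T @ M @ Y, Xi.T @ M @ Xi
    return float(Q @ np.linalg.pinv(C) @ Q)

def error_SS(Xs, Y):
    # Xs = [X_1, ..., X_m, 1_n]; SS_E = SS_tot - sum_i SS_{i;all>},
    # where SS_{i;all>} adjusts F_i for the "next" factors (4.22)-(4.23)
    SS_tot = float(Y @ Y) - Y.sum() ** 2 / len(Y)
    SS_sub = sum(adj_SS(Xs[i], np.hstack(Xs[i + 1:]), Y)
                 for i in range(len(Xs) - 1))
    return SS_tot - SS_sub

# Hypothetical plan: two crossed two-level factors on 4 runs
X1 = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)
X2 = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], float)
Y = np.array([1., 2., 3., 5.])
SSE = error_SS([X1, X2, np.ones((4, 1))], Y)   # 0.25 for these data
```

For any new plan, the sanity check worth running is that SSE coincides with Y′(I − P_X)Y from the full model, where X = [X_1 · · · X_{m+1}].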
Situations when analysis is considerably simpler.
We have seen in Theorem 4.1 that the analysis of a general MEP is rather involved, requiring the computation of adjusted sums of squares for every factor. By contrast, when m = 1, the sum of squares for F_1 is nothing but the so-called unadjusted sum of squares T′_1(R_1)⁻T_1 − G²/n. Moreover, in the situation when there are two factors, say F_1 and F_2, the sum of squares for F_1 (obtained by adjusting for F_2) is SS_1 = Q′_1(C_1)⁻Q_1, where C_1 and Q_1 are as in (4.21). [See (b) and (c) of Corollary 4.1.]

Now we seek the answers to the following questions. Consider a main effect plan for m factors (m ≥ 3). Fix a factor, say F_i. What conditions must the design matrices satisfy so that the sum of squares for F_i adjusted for all others is the same as
(a) the unadjusted sum of squares for F_i?
(b) the sum of squares for F_i adjusted for only one factor (say F_m)? [That is, so far as F_i is concerned, the other factors are virtually absent.]

Theorem 4.4
Fix a factor, say F_i.
(a) A necessary and sufficient condition for SS_{i;all} = SS_{i;m+1} is that the incidence matrices N_ij satisfy the proportional frequency condition stated in (2.1) [see Definition 2.3].
(b) A necessary and sufficient condition for SS_{i;all} = SS_{i;m} is that

N_ij = N_im(R_m)⁻N′_jm, j ≠ i, 1 ≤ i, j ≤ m − 1. (4.24)

The proof relies on two lemmas we present now.

Lemma 4.3
Consider matrices A (m × n), B (m × p) such that C(B) ⊆ C(A). Let C (m × q) be any matrix. Then a necessary and sufficient condition for C(P_B C) = C(P_A C) is that (P_A − P_B)C = 0.

Lemma 4.4
Consider a matrix W partitioned as [U V]. Let Z = (I − P_V)U. Then P_W − P_V = P_Z.

Proof of Theorem 4.4:
Let T = {1, 2, ..., i − 1, i + 1, ..., m} and T* = T ∪ {m + 1}. From Notation 4.5(b), we see that SS_{i;all} = Y′P_U Y and SS_{i;m+1} = Y′P_V Y, where U = (I − P_{T*})X_i and V = (I − P_{m+1})X_i.

Proof of (a):
From the expressions above, a necessary and sufficient condition for SS_{i;all} = SS_{i;m+1} is that P_U = P_V, that is, C(U) = C(V). Take A = X_{m+1}, B = X_{T*}, C = X_i. Then, clearly, C(A) ⊂ C(B), that is, [C(B)]⊥ ⊂ [C(A)]⊥. By Lemma 4.3, a necessary and sufficient condition for C(U) = C(V) is that [(I − P_A) − (I − P_B)]C = 0, which is the same as

(P_{T*} − P_{m+1})X_i = 0. (4.25)

Now, by Lemma 4.4, P_{T*} − P_{m+1} = P_Z, where Z = (I − P_{m+1})X_T. Thus, (4.25) holds ⇔ P_Z X_i = 0 ⇔ X′_i Z = 0 ⇔ X′_i(I − P_{m+1})X_j = 0, j ≠ i, which is the same as the proportional frequency condition.

Proof of (b):
Proceeding along similar lines as in the proof of (a), we find that a necessary and sufficient condition for SS_{i;all} = SS_{i;m} is that

P_W X_i = 0, where W = (I − P_m)X_T. (4.26)

But this condition holds ⇔ X′_i W = 0 ⇔ X′_i(I − P_m)X_j = 0, j ≠ i, 1 ≤ i, j ≤ m − 1. This condition simplifies to the form in the statement. □
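The terminal condition X′_i(I − P_{m+1})X_j = 0 in the proof of (a) is exactly the proportional frequency condition N_ij = r_i r′_j/n, which can be tested directly from the design matrices. A minimal sketch (the two tiny plans are hypothetical illustrations):

```python
import numpy as np

def prop_freq(Xi, Xj):
    # X_i'(I - P_{m+1}) X_j = 0  <=>  N_ij = r_i r_j' / n
    n = Xi.shape[0]
    N = Xi.T @ Xj                                # incidence matrix N_ij
    r_i, r_j = Xi.sum(axis=0), Xj.sum(axis=0)    # replication vectors
    return np.allclose(N, np.outer(r_i, r_j) / n)

# The full 2x2 cross has proportional frequencies;
# dropping one run destroys the property
X1 = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)
X2 = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], float)
```

Here prop_freq(X1, X2) holds, while prop_freq(X1[:3], X2[:3]) fails, matching the fact that deleting a run from an orthogonal array generally breaks orthogonality.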
Remark 4.4 :
The sufficiency part of (a) of Theorem 4.4 is well-known. We now point out that therespective conditions are also necessary for the sum of squares to satisfy these desirable properties.
Properties of a plan orthogonal through a factor.
Let us recall Definition 2.5.
Theorem 4.5
An MEP orthogonal through F_m has the following properties.
(a) For every factor F_i, 1 ≤ i ≤ m − 1, the reduced normal equation for α̂_i is

(R_i − N_im(R_m)⁻(N_im)′) α̂_i = T_i − N_im(R_m)⁻T_m.

(b) The error sum of squares is obtained by subtracting the following from the total sum of squares: add the sums of squares for each F_j adjusted for F_m, 1 ≤ j ≤ m − 1, and then the unadjusted sum of squares for F_m. Symbolically,

SS_E = SS_tot − Σ_{j=1}^{m−1} SS_{j;m} − SS_{m;m+1}.

Proof:
Let L = {1, 2, ..., m − 1}. Then the reduced normal equation for the combined effect of the vector of treatment factors (α̂_L) (after eliminating µ̂ and α̂_m) is C_{LL;m} α̂_L = Q_{L;m}, where

C_{LL;m} = ((C_{ij;m}))_{1≤i,j≤m−1}, C_{ij;m} = X′_i(I − P_m)X_j, (4.27)

Q_{L;m} = ((Q_{i;m}))_{1≤i≤m−1}, Q_{i;m} = X′_i(I − P_m)Y. (4.28)

(a) Since the plan is orthogonal through F_m, C_{ij;m} = 0, i ≠ j, 1 ≤ i, j ≤ m − 1. Thus, the reduced normal equation for α̂_i is C_{ii;m} α̂_i = Q_{i;m}, where C_{ii;m} and Q_{i;m} are as in (4.27) and (4.28). That C_{ii;m} and Q_{i;m} are of the form in the statement of the theorem can be verified easily.
(b) Since the off-diagonal block matrices of C_{LL;m} are null,

SS_{L;m,m+1} = Σ_{i=1}^{m−1} Q′_{i;m}(C_{ii;m})⁻Q_{i;m} = Σ_{i=1}^{m−1} SS_{i;m}.

Now the rest follows from (4.22). □
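Statement (a) gives, for a plan orthogonal through F_m, a closed-form coefficient matrix and right-hand side for each treatment factor. The sketch below (the 6-run arrangement and data vector are hypothetical) computes them and also verifies the identity C_{ii;m} = R_i − N_im(R_m)⁻N′_im used in the proof, by comparing against the projector forms (4.27) and (4.28):

```python
import numpy as np

def rne_through_Fm(Xi, Xm, Y):
    # (R_i - N_im R_m^- N_im') alpha_i = T_i - N_im R_m^- T_m
    Ri = np.diag(Xi.sum(axis=0))                  # replication matrix of F_i
    Rm_inv = np.linalg.pinv(np.diag(Xm.sum(axis=0)))
    Nim = Xi.T @ Xm                               # incidence matrix N_im
    C = Ri - Nim @ Rm_inv @ Nim.T
    Q = Xi.T @ Y - Nim @ Rm_inv @ (Xm.T @ Y)      # T_i - N_im R_m^- T_m
    return C, Q

# Hypothetical: a three-level F_i crossed with a two-level F_m on 6 runs
Xi = np.array([[1,0,0],[0,1,0],[0,0,1],[1,0,0],[0,1,0],[0,0,1]], float)
Xm = np.array([[1,0],[1,0],[1,0],[0,1],[0,1],[0,1]], float)
Y = np.array([1., 2., 3., 4., 5., 6.])
C, Q = rne_through_Fm(Xi, Xm, Y)
```

C and Q agree with X′_i(I − P_m)X_i and X′_i(I − P_m)Y computed from the projector P_m, so only incidence matrices and marginal totals are needed in practice, with no n × n projector ever formed.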
Remark 4.5:
Statement (a) of Theorem 4.5 was observed as early as 1996 by Morgan and Uddin in their Theorem 2.1 [equation (7)]. However, since their paper was essentially concerned with the construction of optimal nested row-column designs, the result was overlooked by many authors (such as Mukherjee, Dey and Chatterjee (2001) and Bagchi (2010)) working on blocked main effect plans.

Remark 4.6: Statement (b) of Theorem 4.5 shows that a plan orthogonal through one factor (F_m) considerably simplifies the computation of the error SS as well as the sums of squares for the treatment factors F_1, F_2, ..., F_{m−1}. Thus, in case F_m happens to be a block factor, the whole analysis is only a little more involved than that of a fully orthogonal plan, as has been noted in Bagchi (2010). However, in the situation when F_m is a treatment factor, the analysis is a little more involved, since SS_{m;all} needs to be computed. Needless to mention, the precision of the BLUEs of the main effect contrasts of F_m (being non-orthogonal to m − 1 other factors) is less than that of the other factors.

Remark 4.7:
Let us recall the plan A(2) [see (2.5)]. If we remove the last column (run) and the last row (factor D), then we get an OA(4,3,2,2), say ρ*. Let C_Q denote the coefficient matrix of the reduced normal equation for factor Q, Q = A, B, C, obtained from the plan ρ*. One may check that C_{Q;Q̄} = C_Q for Q = A, B, C. Thus, even though A(2) is not orthogonal, the main effects of the factors A, B and C are estimated with the same precision as in the orthogonal plan ρ*. Therefore, by adding one more run, we are able to accommodate one more factor (D) without sacrificing the precision of the three existing factors. The main effect of D is, however, estimated with less precision than the others.

Data analysis of an inter-class orthogonal plan.

Notation 4.6
Consider an inter-class orthogonal plan with k classes, the ith class having m_i factors denoted by F_{i,1}, ..., F_{i,m_i}, 1 ≤ i ≤ k. Let F_G denote the general effect.
(a) Let α_ij and X_ij denote respectively the vector of unknown effects and the design matrix of F_{i,j}, 1 ≤ j ≤ m_i, i = 1, ..., k.
(b) Let I_i = {(i,1), ..., (i,m_i)}. For a fixed j, 1 ≤ j ≤ m_i, let T_j = {(i,j+1), ..., (i,m_i)} and j̄ = I_i \ {(i,j)}.
(c) SS_{ij;U} will denote the sum of squares for F_{i,j} adjusted for each F_{i,k}, k ∈ U, where j is not in U.
(d) SS_{ij;all} will denote the sum of squares for F_{i,j} adjusted for all other factors in its own class, i.e. SS_{ij;all} = SS_{ij;j̄}. Further, SS_{ij;all>} will denote the sum of squares for F_{ij} adjusted for all factors next to it in its own class, 1 ≤ j ≤ m_i − 1, while SS_{i m_i;all>} will denote the sum of squares for F_{i,m_i} adjusted for F_G (that is, the unadjusted sum of squares). Thus,

SS_{ij;all>} = SS_{ij;T_j}, 1 ≤ j ≤ m_i − 1, and SS_{i m_i;all>} = SS_{(i,m_i);G}.

(e) The following expression will be referred to as the class total for the ith class:

SS_i^total = Σ_{j=1}^{m_i} SS_{ij;all>}.

Theorem 4.6
Consider an inter-class orthogonal plan as in Notation 4.6. Fix a class, say the ith one, and a factor, say F_{i,j}.
(a) The reduced normal equation for α̂_ij is obtained by eliminating only the other factors in the ith class. More explicitly, the reduced normal equation is as follows:

C_{ij;j̄} α̂_ij = Q_{ij;j̄}, where C_{ij;j̄} = (X_ij)′(I − P_{i,j̄})X_ij and Q_{ij;j̄} = (X_ij)′(I − P_{i,j̄})Y.

Here P_{i,j̄} is the projection operator onto the column space of X_{i,j̄}.
(b) The sum of squares for F_{ij}, adjusted for all other factors, is nothing but the sum of squares adjusted for all other factors in the ith class. Similar statements hold for the sum of squares adjusted for all factors next to F_{ij}. Symbolically,

SS_{(i,j);all} = SS_{ij;all} and SS_{(i,j);all>} = SS_{ij;all>}.

(c) The error sum of squares is obtained by subtracting the class totals for all the k classes from the total sum of squares. Symbolically,

SS_E = SS_tot − Σ_{i=1}^k SS_i^total.

We now note that if all the factors of a class except one are mutually orthogonal through that one, the computation of the class total is considerably simpler. The proof follows from Theorem 4.5.
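Part (c) of Theorem 4.6 says the sweep of Theorem 4.1 may be run class by class, adjusting each factor only within its own class. A minimal sketch (the two-class 4-run plan is a hypothetical illustration, with each class holding a single factor, so the class totals reduce to unadjusted sums of squares):

```python
import numpy as np

def proj(A):
    return A @ np.linalg.pinv(A.T @ A) @ A.T

def adj_SS(Xi, XT, Y):
    # SS_{i;T} = Q'(C)^- Q with C = X_i'(I-P_T)X_i, Q = X_i'(I-P_T)Y
    M = np.eye(Xi.shape[0]) - proj(XT)
    Q, C = Xi.T @ M @ Y, Xi.T @ M @ Xi
    return float(Q @ np.linalg.pinv(C) @ Q)

def interclass_error_SS(classes, Y):
    # classes: list of classes, each a list of design matrices; within the
    # ith class, SS_{ij;all>} adjusts only for the later factors of that
    # class and for the general effect F_G (Theorem 4.6(c))
    n = len(Y)
    ones = np.ones((n, 1))
    SS_tot = float(Y @ Y) - Y.sum() ** 2 / n
    class_totals = [
        sum(adj_SS(cls[j], np.hstack(cls[j + 1:] + [ones]), Y)
            for j in range(len(cls)))
        for cls in classes
    ]
    return SS_tot - sum(class_totals)

# Hypothetical: two mutually orthogonal two-level factors, one per class
X1 = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)
X2 = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], float)
Y = np.array([1., 2., 3., 5.])
SSE = interclass_error_SS([[X1], [X2]], Y)
```

Because the two classes here are mutually orthogonal, SSE matches the full-model residual sum of squares, while no cross-class adjustment was ever computed; that is precisely the simplification the theorem promises.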
Theorem 4.7
Suppose an inter-class orthogonal plan has a class (say the ith one) in which all the factors are orthogonal through F_{i,m_i}. Then the class total for this class can be expressed as follows:

SS_i^total = Σ_{j=1}^{m_i − 1} SS_{ij;m_i} + SS_{i m_i;G}.

In this section, we present MEPs with fifteen or fewer runs obtained by ad-hoc methods. The factors have at most five levels and the class size of each plan is at most three. The plans having class size three are A(1), A(3) and A(4). Further, all plans except A(2) and A(3) are saturated. The graph next to each plan shows the relationship between factors: the edges drawn with continuous lines represent orthogonality, while dotted lines indicate partial orthogonality. The factors are named A, B, ... in the natural order. The equal-frequency plans are indicated by "*".

We begin with a general plan for two p-level factors and one two-level factor on 2p runs. If p = 3, the levels of A form a balanced incomplete block design (BIBD) with those of B.

A_p(1) = ρ(2p, ...) [plan array and orthogonality graph for factors A, B, C omitted]

Now a plan with factors A, B, C and D:

A(1) = [plan array omitted]

Remark 5.1:
For the same experiment an equal-frequency plan is available: plan L(3.) of Wang and Wu (1992). The graphical representations of the two plans are shown below [graphs for factors A, B, C, D omitted]. Regarding the performances, the new plan estimates all but one contrast with equal or more precision. Using the formulae in Theorem 4.6, one may check that the amount of computation is also less here. However, that contrast may be more important to some experimenters, in which case the old plan of Wang and Wu (1992) would be preferable.

We now present plans on 8 runs. We take up the well-known OA on 8 runs and add one more two-level factor (F) with unequal frequency, such that it is orthogonal to all other factors except A.

(a) A(2) = ρ(8, ...) [plan array and orthogonality graph for factors A, B, C, D, E, F omitted]

Our next plan on 8 runs has two three-level factors satisfying partial orthogonality.

(b) A(3) = ρ(8, ...) [plan array and orthogonality graph for factors A, B, C, D, E omitted]

We now present two plans with 4-level factors on 8 runs. Note that on 8 runs a four-level factor can be orthogonal to neither a four-level nor a three-level factor. Using non-orthogonality, we are able to accommodate two four-level factors in one plan, and one four-level and one three-level factor in another plan on 8 runs.

(c) A(4)* = ρ(8, ...) is obtained by putting p = 4 in A_p(1). In this plan, the 4-level factors A and B form a group divisible design (m = n = 2, r = k = 2, λ₁ = 0, λ₂ = 1).

(d) A(5) = ρ(8, ...) [plan array and orthogonality graph for factors A, B, C, D omitted]
A plan on 10 runs: A* = ρ(10, ...) is obtained by putting p = 5 in A_p(1). Here the 5-level factors form a symmetric cyclic PBIBD with r = k = 2, λ₁ = 1, λ₂ = 0.

We shall now present plans on 12 runs. There is no plan in the literature accommodating one or more 4-level factors on 12 runs, so we begin with such plans.

(a) A(1)* = ρ(12, ...) [plan array and orthogonality graph for factors A, B, C, D, E omitted]

In this equal-frequency saturated plan, both the four-level factors B and C form a generalized group divisible design with the levels of the two-level factor A. Between themselves, they form a balanced incomplete block design (BIBD). The relation between the factors D and E is presented in detail after Remark 2.4.

(b) A(2) = ρ(12, ...) [plan array and orthogonality graph for factors A, B, C, D omitted]

Here all the factors except C have equal frequency. The levels of factors A and B form a balanced block design (BBD). D is partially orthogonal to C, as one contrast of D is orthogonal to the contrasts for C; however, C is non-orthogonal to D.

Remark 5.2:
In the plan A(2), the four-level factor D may be replaced by three mutually orthogonal two-level factors to obtain an almost orthogonal MEP for the corresponding asymmetrical experiment.

(c) A(3)* = ρ(12, 7; ...) [plan array and orthogonality graph for factors A, B, C, D, E, F, G omitted]
Remark 5.3:
The plan A(3) is very similar to the plan L′(3.) of Wang and Wu (1992). The difference is that A(3) provides one more two-level factor and one less three-level factor, and so has total d.f. one less than L′(3.). On the other hand, since each three-level factor (say P) in A(3) is non-orthogonal to two, and not three, factors, and the relationship of P with any other three-level factor is the same as that in L′(3.), its contrasts are estimated with greater precision.

(d) A(4) = ρ(12, ...) [plan array and orthogonality graph for factors A, B, C, D, E, F omitted]

This inter-class orthogonal MEP with three classes has accommodated five three-level factors together with a two-level factor. The levels of A form a BBD with those of each of B and C, while the levels of B form a variance-balanced non-binary design with those of C. Both the three-level factors D and E are partially orthogonal to the two-level factor F.

A plan on 15 runs: A = ρ(15, ...) [plan array and orthogonality graph for factors A, B, C, D omitted]