On Orthogonal Projections on the Space of Consistent Pairwise Comparisons Matrices
Waldemar W. Koczkodaj∗ Ryszard Smarzewski† Jacek Szybowski‡

February 18, 2020
Abstract
In this study, the orthogonalization process for different inner products is applied to pairwise comparisons. Properties of consistent approximations of a given inconsistent pairwise comparisons matrix are examined. A method of deriving a priority vector induced by a pairwise comparisons matrix for a given inner product is introduced.

The mathematical elegance of orthogonalization and its universal use in most applied sciences have been the motivating factors for this study. However, the finding that consistent approximations depend on the assumed inner product is of considerable importance in its own right.
Keywords: pairwise comparisons, inconsistency, approximation, inner product, orthogonal basis.

∗ Computer Science, Laurentian University, Sudbury, Ontario P3E 2C6, Canada, [email protected]
† Institute of Mathematics and Cryptology, Cybernetics Faculty, Military University of Technology, Kaliskiego 2, 00-908 Warsaw, Poland, [email protected]
‡ AGH University of Science and Technology, Faculty of Applied Mathematics, al. Mickiewicza 30, 30-059 Kraków, Poland, [email protected]

1 Introduction
The growing number of orthogonalization approaches in [1, 2, 3, 4] attests to the importance of orthogonalization in various computer science applications. Pairwise comparisons allow us to express assessments of many entities (especially of a subjective nature) as one value for use in a decision making process. Pairwise comparisons have been used since the late 13th century, when Llull employed them to conduct a better election process (as stipulated in [5]). However, the origins of pairwise comparisons go back to decisions which must have been made by our ancestors during the Stone Age. Two stones must have been compared to decide which of them was fit for the purpose, be it a hatchet, a gift, or a decoration.

Pairwise comparisons matrices can be transformed by a logarithmic mapping into a linear space, and the set of consistent matrices into its subspace. The structure of a Hilbert space is obtained by using an inner product. Such a space is complete with respect to the norm corresponding to the inner product. In such a space, we may use orthogonal projections as a tool to produce a consistent approximation of a given pairwise comparisons matrix.
Structure of the paper
A gentle introduction to pairwise comparisons is provided in Section 2. Section 3 discusses the problem of approximating an inconsistent PC matrix by a consistent PC matrix using the Frobenius inner product on the space of matrices. Other inner products are discussed in Section 4. In Section 5, the dependence of an optimal priority vector on the choice of an inner product on the space of pairwise comparisons matrices is proved. The Conclusions are self-explanatory.
2 Pairwise comparisons

In this section, we define a pairwise comparisons matrix (for short, PC matrix) and introduce some related notions. Pairwise comparisons are traditionally stored in a PC matrix. It is a square n × n matrix M = [m_ij] with real positive elements m_ij > 0 for every i, j = 1, ..., n, where m_ij represents a relative preference of an entity E_i over E_j as a ratio. An entity could be an object, an attribute of it, an abstract concept, or a stimulus. For most abstract entities, we do not have a well established measure such as a meter or kilogram. "Software safety" or "environmental friendliness" are examples of such entities or attributes used in pairwise comparisons.

When we use a linguistic expression containing "how many times", we process ratios. The linguistic expressions "by how much" and "by how much percent" (or similar) give us a relative difference. Ratios often express subjective preferences of two entities; however, this does not imply that they can be obtained only by division. In fact, equating the ratios with division (e.g., E_i/E_j) is, for pairwise comparisons, in general unacceptable. It is only acceptable when applied to entities with existing units of measure (e.g., distance). When entities are subjective (e.g., reliability and robustness, commonly used in a software development process as product attributes), the division operation has no mathematical meaning, although we can still consider which of them is more (or less) important than the other for a given project. The symbol "/" is then used in the sense of "related to" (not the division of two numbers). Problems with some popular customizations of PCs have been addressed in [8]. We decided not to address them here.

A PC matrix M is called reciprocal if m_ij = 1/m_ji for every i, j = 1, ..., n. In such a case, m_ii = 1 for every i = 1, ...
, n.

We can assume that a PC matrix has positive real entries and is reciprocal without loss of generality, since a non-reciprocal PC matrix can be made reciprocal by the theory presented in [9]. The conversion is done by replacing a_ij with the geometric mean of a_ij and 1/a_ji, that is √(a_ij/a_ji); the reciprocal value is √(a_ji/a_ij).

Thus a PC matrix M is an n × n matrix of the form:

M =
[ 1        m_12     ···  m_1n ]
[ 1/m_12   1        ···  m_2n ]
[ ···      ···      ···  ···  ]
[ 1/m_1n   1/m_2n   ···  1    ].

Sometimes, we write M ∈ PC_n in order to indicate the size of a given PC matrix.

2.1 The Geometric Means Method

The main goal of using a pairwise comparisons matrix is to obtain the so-called priority vector. The coordinates of this vector correspond to the weights of alternatives. If we know the priority vector, we can order the alternatives from the best to the worst one.

In the Geometric Means Method (GMM), introduced in [10], the coordinates of the priority vector are calculated as the geometric means of the elements in the rows of the matrix:

v_i = ( ∏_{j=1}^{n} m_ij )^{1/n}.    (1)

The above vector is also the solution given by the Logarithmic Least Squares Method.

2.2 Inconsistency

One of the fundamental problems in pairwise comparisons is inconsistency. It takes place when we provide, for any reason, all (hence supernumerary) comparisons of n entities, which amounts to n² comparisons, or n(n−1)/2 if the reciprocity is assumed and used to reduce the number of entered comparisons. The sufficient number of comparisons is n−1, as stipulated in [11], but this number is based on some arbitrary selection criteria for the minimal set of entities to compare. In practice, we have a tendency to make all n(n−1)/2 comparisons (when reciprocity is assumed, which is expressed by the property m_ij = 1/m_ji, itself not always without problems). Surprisingly, the reciprocity m_ij = 1/m_ji does not need to take place for subjective assessments, even if both compared entities are well defined.
For example, blind wine testing may result in claiming that x is better than y and y is better than x, or even that x is better than x, which is placed on the main diagonal of a PC matrix M expressing all pairwise comparisons.

The basic concept of inconsistency may be illustrated as follows. If an alternative A is three times better than B, and B is twice better than C, then A should not be evaluated as five times better than C. Unfortunately, it does not follow that the A-to-C comparison should be 3 · 2, hence 6, as common sense may dictate, since all three assessments (3, 5, and 2) may be inaccurate and we do not know which of them is or is not incorrect. Inconsistency is sometimes mistaken for the approximation error, but this is incorrect. For example, the triad T = (3, 5, 2) can be approximated by T_approx = (1, 1, 1) with 0 inconsistency, yet such an approximation is far from optimal by any standard. So, the inconsistency can be 0 while the approximation error is different from 0 and arbitrarily large.

Definition 2.1.
Given n ∈ N, we define

T(n) = { (i, j, k) ∈ {1, ..., n}³ : i < j < k }

as the set of all index triples of permissible triads in the upper triangle of a PC matrix.

Definition 2.2.
A PC matrix M = [m_ij] is called consistent (or transitive) if, for every (i, j, k) ∈ T(n):

m_ik · m_kj = m_ij.    (2)

Equation (2) was proposed a long time ago (in the 1930s) and it is known as the "consistency condition". Every consistent PC matrix is reciprocal; however, the converse is false in general. If the consistency condition does not hold, the PC matrix is inconsistent (or intransitive). In several studies conducted between 1940 and 1961 ([12, 13, 14, 15]), the inconsistency in pairwise comparisons was defined and examined.

Inconsistency in pairwise comparisons occurs due to superfluous input data. As demonstrated in [11], only n−1 pairwise comparisons are really needed to create the entire PC matrix for n entities, while the upper triangle has n(n−1)/2 comparisons. Inconsistencies are not necessarily "wrong", as they can be used to improve the data acquisition. However, there is a real necessity to have a "measure" for it.

Lemma 2.3.
If a PC matrix M = [m_ij]_{i,j=1}^n is consistent, then m_ij = ω_i/ω_j for all i, j = 1, 2, ..., n, where ω_1 > 0 is arbitrary and ω_j = ω_1/m_1j for every j = 2, 3, ..., n.

Proof. By the definition of ω_j and the consistency of M, one gets

m_1j = ω_1 / (ω_1/m_1j) = ω_1/ω_j

and

m_ij = m_{i−1,j} / m_{i−1,i} = (ω_{i−1}/ω_j) / (ω_{i−1}/ω_i) = ω_i/ω_j

whenever 1 < i ≤ n.

It is easy to observe that the set M_n = (M_n, ·) of all consistent PC matrices is a multiplicative subgroup of the group of all n × n PC matrices endowed with the coordinate-wise multiplication A · B = [a_ij · b_ij], where A = [a_ij] and B = [b_ij]. Its representation in R^n consists of all priority vectors υ(M) = (ω_1, ω_2, ..., ω_n), defined uniquely as in Lemma 2.3 up to a multiplicative constant ω_1 > 0. In the following we use priority vectors normalized by the condition ω_1 = 1, unless otherwise stated.

Instead of a PC matrix M = [m_ij] with m_ij ∈ R*_+, the set of positive real numbers considered with multiplication, we can transform the entries of M by a logarithmic function and get a matrix A = [a_ij] = [log m_ij]. Since a PC matrix M is reciprocal, it follows that A is anti-symmetric, i.e.

a_ij = −a_ji for every i, j = 1, 2, ..., n.

Moreover, if M is consistent, then A = log M satisfies the condition of additive consistency:

a_ik + a_kj = a_ij for every (i, j, k) ∈ T(n),

which yields the following well-known representation.

Lemma 2.4.
If an anti-symmetric matrix A = [a_ij]_{i,j=1}^n is additively consistent, then a_ij = σ_i − σ_j for all i, j = 1, 2, ..., n, where σ_1 is arbitrary and σ_j = σ_1 − a_1j for every j = 2, 3, ..., n.

In view of this representation, the set A_n = (A_n, +) of all additively consistent matrices is an additive subgroup of the group of all n × n matrices, whenever it is endowed with the coordinate-wise matrix addition A + B = [a_ij + b_ij] for A = [a_ij] and B = [b_ij]. It is a one-to-one image of the multiplicative group M_n = (M_n, ·) by the group isomorphism A = log M = [log m_ij]. The inverse group isomorphism is clearly given by the formula M = exp A = [exp a_ij]. Moreover, the additive priority vector υ(A) = (σ_1, σ_2, ..., σ_n) of A satisfies υ(A) = log υ(M), where σ_1 = log ω_1 is supposed to be an arbitrary additive constant. In particular, it is said to be normalized if σ_1 = 0. Here and in the following, the matrix functions log M = [log m_ij] and exp A = [exp a_ij] are always understood in the coordinate-wise sense.

3 Approximation with respect to the Frobenius inner product

Numerous heuristics have been proposed for approximating inconsistent pairwise comparisons matrices by consistent pairwise comparisons matrices. The geometric means (GM) of rows is regarded as dominant. Some mathematical evidence to support GM as the method of choice was provided in [16]. [17] shows that orthogonal projections have a limit which is GM (up to a constant). [18] demonstrates that the inconsistency reduction algorithm based on orthogonal projections converges very quickly for practical applications. The proof of inconsistency convergence was outlined in [6] and finalized in [7]. An axiomatization of inconsistency still remains elusive. Its recent mutation in [22] has a deficiency (the monotonicity axiom is incorrectly defined).
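The GM-of-rows heuristic recalled above is easy to state operationally. Below is a minimal sketch; the function name and the sample matrix are illustrative and not taken from this paper.

```python
from math import prod

def gm_priority_vector(M):
    """Geometric Means Method: the i-th weight is the geometric
    mean of the i-th row of a (reciprocal) PC matrix M."""
    n = len(M)
    return [prod(row) ** (1.0 / n) for row in M]

# Illustrative 3x3 reciprocal PC matrix: M[i][j] is the relative
# preference of entity i over entity j.
M = [[1.0, 2.0, 5.0],
     [0.5, 1.0, 3.0],
     [0.2, 1 / 3, 1.0]]

w = gm_priority_vector(M)
# Entities ordered from best to worst by their weights.
ranking = sorted(range(len(w)), key=lambda i: -w[i])
```

Since the weights of a reciprocal matrix are defined up to a multiplicative constant, only the ratios w_i/w_j matter for the induced order.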
3.1 The space of consistent matrices

Let K = R or C. Let M(n, K) be the set of all n × n matrices with entries from the field K, and let C ⊂ M(n, K) be the set of all additively consistent n × n matrices with entries from the field K. We consider M(n, K) as a K-linear space with the addition of matrices and multiplication by numbers from the field K. Clearly, dim_K M(n, K) = n², and the unit matrices

E_ij = [e^{i,j}_{rs}]_{r,s=1}^n,  i, j = 1, 2, ..., n,

form a basis in M(n, K), where e^{i,j}_{rs} is equal to 1 if r = i and s = j, and 0 otherwise.

In the linear space M(n, K) one can define the Frobenius inner product as follows. For all A = [a_ij], B = [b_ij] ∈ M(n, K),

⟨A, B⟩_F = Σ_{i=1}^n Σ_{j=1}^n a_ij b̄_ij.

In this section we recall results from [19].
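As a quick sanity check on this definition, here is a sketch of the Frobenius inner product for the real case (the helper name is ours; over C the entries of B would be conjugated):

```python
def frobenius_inner(A, B):
    # <A, B>_F = sum_{i,j} a_ij * b_ij  (real case; over C the
    # entries of B would be conjugated).
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Two anti-symmetric 3x3 matrices (entries chosen for illustration).
A = [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]
B = [[0, 0, 1], [0, 0, 1], [-1, -1, 0]]

ip = frobenius_inner(A, B)       # here: 2 + 1 + 2 + 1 = 6
norm_sq = frobenius_inner(A, A)  # squared Frobenius norm of A
```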
Theorem 3.1.
The set C is a linear subspace of M(n, K).

Proof. Let A = [a_ij], B = [b_ij] ∈ C, that is, a_ik + a_kj = a_ij and b_ik + b_kj = b_ij for every (i, j, k) ∈ T(n). Let C = [c_ij] = A + B; then

c_ik + c_kj = (a_ik + b_ik) + (a_kj + b_kj) = (a_ik + a_kj) + (b_ik + b_kj) = a_ij + b_ij = c_ij.

Hence, C ∈ C. Let α ∈ K and A ∈ C. It is clear that αA ∈ C.

Theorem 3.2.
The subspace C ⊂ M(n, K) has dimension n − 1 over K.

Proof. By applying the consistency condition, all elements of a matrix A = [a_ij] ∈ C can be generated by the n − 1 elements a_{k,k+1} for k = 1, ..., n − 1, i.e. by the diagonal directly above the main diagonal (see [11]).

Theorem 3.3 ([19, Proposition 1]). The following set of n − 1 matrices constitutes a basis of C: B_k = [b^k_ij], where

b^k_ij =   1, for 1 ≤ i ≤ k < j ≤ n,
          −1, for 1 ≤ j ≤ k < i ≤ n,
           0, otherwise,

and k = 1, ..., n − 1.

Remark.
For the standard (i.e. Frobenius) inner product, an example of the approximation of a 3 × 3 inconsistent matrix as a projection onto C is given in [19].

3.2 Approximation by a consistent matrix

Suppose that we have a PC matrix A ∈ M(n, K) \ C, i.e. A is inconsistent. Our aim is to find a consistent metric projection A_C of A onto the set C = A_n or M_n with respect to the norm ‖·‖ induced by an inner product ⟨·,·⟩, i.e. a nonlinear mapping

A_C : M(n, K) ∋ A ↦ A_approx ∈ C

such that the distance of A to C,

dist(A, C) = inf_{B ∈ C} ‖A − B‖ = ‖A − A_approx‖,

is attained at the matrix B = A_approx.

In the additive case C = A_n, the metric projection coincides with the orthogonal projection A_proj : A ↦ A_approx of M(n, K) onto the (n − 1)-dimensional linear subspace A_n, which is characterized by the well-known orthogonality condition

A − A_approx ⊥ A_n.

This condition enables one to compute the orthogonal projection A_proj much more effectively than its nonlinear multiplicative counterpart M ↦ M_approx ∈ M_n. Therefore, it was proposed in [10, 17] to linearize the process of determining metric projections for practical applications. It was achieved by introducing a new concept of linearized consistent approximations to estimate nonlinear metric projections. For simplicity, in the following the symbol M_approx will also be used to denote these linearized consistent approximations. This should not lead to misunderstanding, since we shall always restrict our attention to the linearized case, unless otherwise stated.

Definition 3.4.
Let M ∈ M(n, K) \ M_n be an inconsistent PC matrix. A consistent approximation M_proj : M ↦ M_approx of M onto M_n is defined in the following way:

1. we construct the matrix A = log M,
2. we find the orthogonal projection A_approx of A onto the (n − 1)-dimensional subspace C = log M_n,
3. we set M_approx = exp(A_approx).

In short, we define M_approx = exp[(log M)_approx].

3.3 Orthogonalization

In order to simplify calculations in the examples below, we would like to have an orthogonal basis for C. We produce such a basis by the Gram-Schmidt process. Namely, let V be an n-dimensional vector space over K with an inner product ⟨·,·⟩ and let B_1, ..., B_n be its basis. We construct an orthogonal basis E_1, ..., E_n as follows:

E_1 = B_1,
E_2 = B_2 − (⟨E_1, B_2⟩ / ⟨E_1, E_1⟩) E_1,
E_3 = B_3 − (⟨E_1, B_3⟩ / ⟨E_1, E_1⟩) E_1 − (⟨E_2, B_3⟩ / ⟨E_2, E_2⟩) E_2,
...
E_n = B_n − Σ_{j=1}^{n−1} (⟨E_j, B_n⟩ / ⟨E_j, E_j⟩) E_j.    (3)

Example 3.5.
Consider an inconsistent PC matrix M in the multiplicative variant:

M =
[ 1        e^2      e^7 ]
[ e^{−2}   1        e^3 ]
[ e^{−7}   e^{−3}   1   ].    (4)

Its priority vector v(M) obtained by (1) is

v(M) = ( e^3, e^{1/3}, e^{−10/3} ).    (5)

Taking natural logarithms, we switch to the additive PC matrix variant and get the following additive PC matrix:

A =
[ 0    2    7 ]
[ −2   0    3 ]
[ −7   −3   0 ].

We need to find A_proj, the projection of A onto C. By Theorem 3.2, we have dim_R C = 2. By Theorem 3.3, we get a basis of the linear space C of consistent matrices:

B_1 =
[ 0    1   1 ]
[ −1   0   0 ]
[ −1   0   0 ]
and
B_2 =
[ 0    0    1 ]
[ 0    0    1 ]
[ −1   −1   0 ].

Evidently, ⟨B_1, B_2⟩_F = 2 ≠ 0. Therefore, we have to apply the Gram-Schmidt process of orthogonalization (3). If E_1, E_2 denotes an orthogonal basis of C, then

E_1 =
[ 0    1   1 ]
[ −1   0   0 ]
[ −1   0   0 ]
and
E_2 =
[ 0      −1/2   1/2 ]
[ 1/2     0     1   ]
[ −1/2   −1     0   ].

Our goal is to find A_proj = ε_1 E_1 + ε_2 E_2, that is, to find coefficients ε_1 and ε_2 such that for every C ∈ C, ⟨A − A_proj, C⟩_F = 0, which is equivalent to solving:

⟨A − ε_1 E_1 − ε_2 E_2, E_1⟩_F = 0,
⟨A − ε_1 E_1 − ε_2 E_2, E_2⟩_F = 0.

Since E_1 and E_2 are orthogonal, we get a system of linear equations:

⟨A, E_1⟩_F − ε_1 ⟨E_1, E_1⟩_F = 0,
⟨A, E_2⟩_F − ε_2 ⟨E_2, E_2⟩_F = 0.

By computing the Frobenius inner products, we get the following equations:

18 − 4 ε_1 = 0,
11 − 3 ε_2 = 0.

By solving the above equations for ε_1, ε_2, we get ε_1 = 9/2 and ε_2 = 11/3. Thus,

A_proj = A_approx,F = (9/2) E_1 + (11/3) E_2 =
[ 0       8/3     19/3 ]
[ −8/3    0       11/3 ]
[ −19/3   −11/3   0    ].

Finally, we get a consistent approximation for
M:

M_approx,F =
[ 1          e^{8/3}     e^{19/3} ]
[ e^{−8/3}   1           e^{11/3} ]
[ e^{−19/3}  e^{−11/3}   1        ] ∈ M_n.

Notice that the priority vector v(M_approx,F) coincides with v(M) given by (5).

4 Other inner products on M(n, K)

The standard (Frobenius) inner product on the linear space M(n, K) may also be defined by:

⟨A, B⟩_F = Tr(B* A).    (6)

The above inner product is exactly the Frobenius inner product defined in the previous section, and it defines the Frobenius norm in the usual way by:

‖A‖²_F = ⟨A, A⟩_F = Σ_{i=1}^n Σ_{j=1}^n |a_ij|².

In [20] the following result is mentioned:
Proposition 4.1.
For every m ∈ N and positive semi-definite matrices X_i, Y_i, i = 1, ..., m, the following function:

⟨A, B⟩_∗ = Tr( Σ_{i=1}^m B* X_i A Y_i )    (7)

defines an inner product in M(n, K).

Proof. All properties of an inner product follow from the following equation:

⟨A, B⟩_∗ = Tr( Σ_{i=1}^m B* X_i A Y_i ) = Tr( B* Σ_{i=1}^m X_i A Y_i ) = ⟨ Σ_{i=1}^m X_i A Y_i, B ⟩_F.

Example 4.2.
Consider four fixed symmetric positive semi-definite matrices X_1, X_2, Y_1, Y_2 in the space M(3, R); their positive semi-definiteness is easily verified by applying Sylvester's criterion from [21], and being symmetric, they are Hermitian. Let

A(A) = A_{X_i, Y_i | i=1,2}(A) = X_1 A Y_1 + X_2 A Y_2

and

⟨A, B⟩_∗ = ⟨A(A), B⟩_F.

By Proposition 4.1, ⟨·,·⟩_∗ is an inner product in M(3, R).

Example 4.3.
Consider the 3 × 3 matrices B_1, B_2 with real entries computed by the formula in Theorem 3.3 (see Example 3.5 for details). Evidently, B = {B_1, B_2} is a basis for C ⊂ M(3, R). By applying the Gram-Schmidt process (3) with the inner product from Example 4.2 to the basis B, we get an orthogonal basis E = {E_1, E_2} for C with respect to ⟨·,·⟩_∗. The above transformations imply that

⟨E_1, B_2⟩_∗ = ⟨A(E_1), B_2⟩_F.

Since ⟨E_1, B_2⟩_∗ = 49 and ⟨E_1, E_1⟩_∗ = 65, by equations (3) we get

E_1 =
[ 0    1   1 ]
[ −1   0   0 ]
[ −1   0   0 ]
and
E_2 = B_2 − (49/65) B_1 = (1/65) ·
[ 0     −49   16 ]
[ 49     0    65 ]
[ −16   −65   0  ].

Example 4.4.
Take the following additive PC matrix:

A =
[ 0    2    7 ]
[ −2   0    3 ]
[ −7   −3   0 ].

This is the PC matrix from Example 3.5. Next, we compute the orthogonal (with respect to the inner product from Example 4.2) projection of A onto the space C. For it, we need to solve a system of linear equations for ε_1 and ε_2:

⟨A, E_1⟩_∗ − ε_1 ⟨E_1, E_1⟩_∗ = 0,
⟨A, E_2⟩_∗ − ε_2 ⟨E_2, E_2⟩_∗ = 0.    (8)

We can also utilize some computations conducted in the previous example. By using the symmetry of the inner product ⟨·,·⟩_∗ together with the values of ⟨A, E_1⟩_∗, ⟨A, E_2⟩_∗ and ⟨E_2, E_2⟩_∗ obtained from A(E_1) and A(E_2), the system (8) yields

ε_1 = ⟨A, E_1⟩_∗ / 65,  ε_2 = ⟨A, E_2⟩_∗ / ⟨E_2, E_2⟩_∗,

and therefore we get:

A_proj,∗ = A_approx,∗ = ε_1 E_1 + ε_2 E_2 =
[ 0                     ε_1 − (49/65) ε_2    ε_1 + (16/65) ε_2 ]
[ −ε_1 + (49/65) ε_2    0                    ε_2               ]
[ −ε_1 − (16/65) ε_2    −ε_2                 0                 ].

Finally, we obtain the following multiplicative PC matrix:

M_approx,∗ = exp(A_approx,∗) =
[ 1                         e^{ε_1 − (49/65) ε_2}   e^{ε_1 + (16/65) ε_2} ]
[ e^{−ε_1 + (49/65) ε_2}    1                       e^{ε_2}               ]
[ e^{−ε_1 − (16/65) ε_2}    e^{−ε_2}                1                     ].

Example 4.5.
Let us repeat the calculations made in Examples 4.2, 4.3 and 4.4 to provide a consistent approximation of the matrix M set in (4) by means of the inner product induced by another quadruple of positive semi-definite matrices X_1, X_2, Y_1, Y_2. This time we obtain

⟨E_1, B_2⟩_∗ = 16 and ⟨E_1, E_1⟩_∗ = 32.

By equations (3), we get

E_1 =
[ 0    1   1 ]
[ −1   0   0 ]
[ −1   0   0 ]
and
E_2 = B_2 − (1/2) B_1 = (1/2) ·
[ 0    −1   1 ]
[ 1     0   2 ]
[ −1   −2   0 ].

Next we calculate the inner products

⟨A, E_1⟩_∗ = 144,  ⟨A, E_2⟩_∗ = 88  and  ⟨E_2, E_2⟩_∗ = 24.

By solving the equations

144 − 32 ε_1 = 0,
88 − 24 ε_2 = 0,

we get ε_1 = 9/2 and ε_2 = 11/3; therefore,

A_proj,∗ = A_approx,∗ = ε_1 E_1 + ε_2 E_2 =
[ 0       8/3     19/3 ]
[ −8/3    0       11/3 ]
[ −19/3   −11/3   0    ].

Finally,

M_approx,∗ =
[ 1          e^{8/3}     e^{19/3} ]
[ e^{−8/3}   1           e^{11/3} ]
[ e^{−19/3}  e^{−11/3}   1        ],

and its priority vector calculated with the use of the GMM is equal to

v(M_approx,∗) = ( e^3, e^{1/3}, e^{−10/3} ) = v(M).

5 The dependence of the priority vector on the inner product

It is worthwhile to stress that in the previous examples we got three approximations of the same matrix M. An important dilemma has surfaced: how to compare different approximations of a given PC matrix obtained by the use of different inner products? The answer to this question is: they are incomparable.

5.1 Inconsistency

The first criterion that we took into consideration was to compare the inconsistency indices of the exponential transformations of the differences A − A_proj. However, this attempt turned out to be incorrect. Let us consider the inconsistency index Kii of a pairwise comparisons matrix M given by the formula:

Kii(M) = max_{(i,j,k) ∈ T(n)} ( 1 − min{ m_ik / (m_ij m_jk), (m_ij m_jk) / m_ik } ).

Theorem 5.1. Let A and B be additive pairwise comparisons matrices such that B is additively consistent. Then

Kii(exp(A − B)) = Kii(exp(A)).

Proof. Take any (i, j, k) ∈ T(n). Since b_ij + b_jk = b_ik, we get

1 − min{ e^{a_ik − b_ik} / (e^{a_ij − b_ij} e^{a_jk − b_jk}), (e^{a_ij − b_ij} e^{a_jk − b_jk}) / e^{a_ik − b_ik} }
= 1 − min{ e^{a_ik} e^{b_ij + b_jk − b_ik} / (e^{a_ij} e^{a_jk}), (e^{a_ij} e^{a_jk}) / (e^{a_ik} e^{b_ij + b_jk − b_ik}) }
= 1 − min{ e^{a_ik} / (e^{a_ij} e^{a_jk}), (e^{a_ij} e^{a_jk}) / e^{a_ik} },

which completes the proof.

From the above theorem it follows that if we take two different consistent approximations B and C of an additive matrix A, they satisfy

Kii(exp(A − B)) = Kii(exp(A)) = Kii(exp(A − C)).

The second attempt to judge whether a consistent approximation A_approx of a PC matrix A is acceptable could be to compare the priority vectors induced by A and A_approx for any inner product. In [10] it has been proved that the elements of a projection matrix A_approx induced by the Frobenius product are given by the ratios w_i/w_j, where the vector w is obtained by the GMM.
As it has been shown in [17], the priority vectors induced by A and A_approx in this case coincide:

Theorem 5.2. Let A be a PC matrix and A_approx = [w_i/w_j], where w = GM(A), i.e.

w_k = ( ∏_{j=1}^n a_kj )^{1/n}.

Then GM(A) = GM(A_approx).

As the following example shows, the priority vectors of a matrix and its consistent approximation may differ if we use other inner products.

Example 5.3. Consider the inconsistent additive PC matrix A from Example 3.5:

A =
[ 0    2    7 ]
[ −2   0    3 ]
[ −7   −3   0 ]

and its corresponding multiplicative PC matrix M = exp(A). Let us take three inner products: the Frobenius product and the inner products ⟨·,·⟩_∗ from Examples 4.2 and 4.5. The approximations A_approx,F, A_approx,∗1 and A_approx,∗2 are given in Examples 3.5, 4.4 and 4.5, respectively. Notice that

GM(exp(A)) = GM(exp(A_approx,F)) = GM(exp(A_approx,∗2)),

but GM(exp(A)) and GM(exp(A_approx,∗1)) are linearly independent. This observation, however, is not surprising. The matrix exp(A_approx,∗1) minimizes the distance from exp(A) to the set of consistent PC matrices according to the inner product ⟨·,·⟩_∗ from Example 4.2, but not according to the Frobenius inner product.

In the following we show that as we change the inner product, we also have to change the formula for a priority vector. This is done by extending Theorem 5.2 to weighted Frobenius inner products. For this purpose, we recall the most general standard definition of an inner product in M(n, K). Let G_1, G_2, ..., G_N be N = n² linearly independent matrices in the space M(n, K). Represent matrices A, B ∈ M(n, K) in a unique manner as

A = Σ_{k=1}^N α_k G_k,  α_k ∈ K,
B = Σ_{k=1}^N β_k G_k,  β_k ∈ K,

and define the inner product by

⟨A, B⟩ = Σ_{i,j=1}^N γ_ij α_i β̄_j,

where Γ = [γ_ij] is a positive definite N × N matrix.
For example, if we choose the identity matrix Γ = I and G_{(i−1)n+j} = E_ij / √̺_ij for a matrix P = [̺_ij] = [̺_i ̺_j] of n² positive weights, then we get the weighted Frobenius norm ‖A‖_F,P = ⟨A, A⟩_F,P^{1/2} induced by the weighted Frobenius inner product

⟨A, B⟩_F,P = Σ_{i,j=1}^n ̺_ij a_ij b_ij.    (9)

By Lemma 2.4, each matrix [b_ij] ∈ A_n satisfies b_ij = σ_i − σ_j, where the additive constant σ_1 is fixed. Hence the squared weighted distance dist²_F,P(A, A_n) of an anti-symmetric real matrix A to the space A_n of all additively consistent real matrices is equal to the minimal value of the quadratic function

f_A(σ) = Σ_{i,j=1}^n ̺_ij (a_ij − σ_i + σ_j)²

of the variable σ = (σ_1, σ_2, ..., σ_n) ∈ R^n with the first coordinate σ_1 fixed. This minimal value is attained at the unique solution σ_2, σ_3, ..., σ_n of the following system of normal equations

Σ_{j=1}^n ̺_j (a_ij − σ_i + σ_j) = 0,  i = 2, 3, ..., n,    (10)

with left-hand sides equal to −(1/(4̺_i)) ∂f_A(σ)/∂σ_i.

From now on we consider only real-valued n × n matrices and, unless otherwise stated, always choose the first coordinate σ_1 of the priority vector σ equal to 0. In view of the following theorem, another reasonable choice for the additive constant σ_1 in (10) would be the weighted arithmetic mean of the first row of the matrix A:

σ_1 = ( Σ_{j=1}^n ̺_j a_1j ) / ( Σ_{j=1}^n ̺_j ).

Theorem 5.4. Let A = [a_ij] be an anti-symmetric real matrix. If P = [̺_i ̺_j] is a matrix of positive weights, then the additively consistent orthogonal approximation A_approx = [σ_i − σ_j] of A onto A_n with respect to the weighted Frobenius norm ‖·‖_F,P is determined by:

σ_i = ( Σ_{j=1}^n ̺_j a_ij ) / ( Σ_{j=1}^n ̺_j ),  i = 1, 2, ..., n.    (11)

Proof. Since the orthogonal projection is determined uniquely, it is sufficient to check that the normal equations (10) are satisfied by the values of σ_i given in (11). For this purpose, denote |̺| = ̺_1 + ··· + ̺_n and note that

Σ_{j=1}^n ̺_j (a_ij − σ_i) = Σ_{j=1}^n ̺_j a_ij − σ_i |̺| = 0

for these values of σ_i.
Moreover, by the anti-symmetry of A we have a_jk = −a_kj, and so

Σ_{j=1}^n ̺_j σ_j = (1/|̺|) Σ_{j=1}^n Σ_{k=1}^n ̺_j ̺_k a_jk = 0,

since the terms of the last double sum cancel in pairs. Hence the normal equations (10) hold, which completes the proof.

Theorem 5.5. Let M = [m_ij] be a PC matrix. If P = [̺_i ̺_j] is a matrix of positive weights, then the consistent approximation

M_approx := exp(log M)_approx = [ω_i / ω_j] ∈ M_n

of M with respect to the weighted Frobenius norm ‖·‖_F,P is determined uniquely by:

ω_i = ( ∏_{j=1}^n (m_ij)^{̺_j} )^{1 / Σ_{j=1}^n ̺_j},  i = 1, 2, ..., n.    (12)

Proof. Apply Theorem 5.4 to the anti-symmetric matrix A = [a_ij] with a_ij = log m_ij in order to show that the elements of the consistent orthogonal projection (log M)_approx = [σ_i − σ_j] of log M onto A_n are determined by:

σ_i = ( Σ_{j=1}^n ̺_j log m_ij ) / ( Σ_{j=1}^n ̺_j ) = log ( ∏_{j=1}^n (m_ij)^{̺_j} )^{1 / Σ_{j=1}^n ̺_j},  i = 1, 2, ..., n.

Hence we get formula (12) from the identity ω_i = exp σ_i, which is a direct consequence of Definition 3.4.

Direct corollaries of Theorems 5.4 and 5.5 are the following generalizations of Theorem 5.2, which state that the approximation of Definition 3.4 is idempotent:

Corollary 5.6. Let A = [a_ij] be an anti-symmetric matrix. If P = [̺_i ̺_j] is a matrix of positive weights, then the additively consistent approximation with respect to the weighted Frobenius norm ‖·‖_F,P is idempotent:

(A_approx)_approx = A_approx.

Corollary 5.7. Let M = [m_ij] be a PC matrix. If P = [̺_i ̺_j] is a matrix of positive weights, then the consistent approximation with respect to the weighted Frobenius norm ‖·‖_F,P is idempotent:

( exp[(log M)_approx] )_approx = exp[(log M)_approx].

This means that in a weighted Frobenius norm the consistent approximation mapping M_proj : M ↦ M_approx from Definition 3.4 is a projection of the set PC_n of all PC matrices onto the multiplicative group M_n = (M_n, ·).
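Formula (12) and the idempotence of Corollary 5.7 can be sketched in a few lines; the weights, the sample matrix and the function name below are illustrative assumptions, not data from the paper.

```python
from math import prod

def weighted_consistent_approx(M, rho):
    """Sketch of Theorem 5.5: omega_i is the rho-weighted geometric
    mean of the i-th row; the approximation is [omega_i / omega_j]."""
    s = sum(rho)
    omega = [prod(m ** r for m, r in zip(row, rho)) ** (1.0 / s)
             for row in M]
    return [[oi / oj for oj in omega] for oi in omega]

M = [[1.0, 4.0, 2.0],
     [0.25, 1.0, 1.0],
     [0.5, 1.0, 1.0]]    # reciprocal but inconsistent: 4 * 1 != 2
rho = [1.0, 2.0, 3.0]    # positive weights; P = [rho_i * rho_j]

M1 = weighted_consistent_approx(M, rho)
M2 = weighted_consistent_approx(M1, rho)  # Corollary 5.7: M2 equals M1
```

Applying the mapping a second time rescales all the omega_i by the same constant, so the ratios omega_i/omega_j, and hence the approximation itself, do not change.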
The squared weighted Frobenius distance dist²_F,P(M, M_n) of a PC matrix M to the space M_n of all multiplicatively consistent matrices [x_i/x_j] is determined by a point x = (x_1, x_2, ..., x_n) with the first coordinate x_1 = 1, at which the minimal value of the function

g_M(x) = Σ_{i,j=1}^n ̺_ij (m_ij − x_i/x_j)²,  x_1 = 1,

is attained. If P = [̺_ij] is a symmetric matrix of positive weights, then this minimal value is attained at a solution x_2, x_3, ..., x_n of the following system of nonlinear normal equations

(2/x_i) Σ_{j=1}^n ̺_j [ (x_j/x_i)(1/m_ij − x_j/x_i) − (x_i/x_j)(m_ij − x_i/x_j) ] = 0,  i = 2, 3, ..., n,    (13)

where the left-hand sides are equal to (1/̺_i) ∂g_M(x)/∂x_i.

It seems unlikely that one can find an explicit solution of this system. However, it can be solved by the locally convergent Newton's method. As a starting point, the priority vector x = (x_1, x_2, ..., x_n) given in Theorem 5.5 should be used. Moreover, further improvement could be made by applying recent results on classical discrete orthogonal polynomials proposed in [23].

The lack of an explicit solution should not be a huge surprise. A similar situation exists in physics with the three-body problem, which has only numerical solutions and a proof that the general case has no analytical solution. Evidently, the numerical solution is sufficient to conquer space.

6 Conclusions

The primary goal of this study was to generalize orthogonal projections for computing approximations of inconsistent PC matrices from the Euclidean space to a Hilbert space of PC matrices endowed with a different inner product. However, a side product of our study seems to be even more important: there is no mathematical reasoning to support the belief that there is only one approximation method for inconsistent PC matrices. It is a matter of an arbitrary choice of the inner product for the orthogonalization projection process.
However, there is a practical reason to use the Frobenius inner product (which generates the GM solution): its computational simplicity.

Acknowledgments

The authors would like to express their appreciation to Tiffany Armstrong (Laurentian University, Computer Science) and Grant O. Duncan (Team Lead, Business Intelligence, Integration and Development, Health Sciences North, Sudbury, Ontario, Canada) for the editorial improvements of our text and their creative comments. The research of the third author was supported by the National Science Centre, Poland, as a part of the project no. 2017/25/B/HS4/01617, and by the Faculty of Applied Mathematics of AGH UST within the statutory tasks subsidized by the Polish Ministry of Science and Higher Education, grant no. 16.16.420.054.