Graph Products Revisited: Tight Approximation Hardness of Induced Matching, Poset Dimension and More
Parinya Chalermsook∗  Bundit Laekhanukit†  Danupon Nanongkai‡

Abstract
Graph product is a fundamental tool with rich applications in both graph theory and theoretical computer science. It is usually studied in the form f(G ∗ H) where G and H are graphs, ∗ is a graph product and f is a graph property. For example, if f is the independence number and ∗ is the disjunctive product, then the product is known to be multiplicative: f(G ∗ H) = f(G)f(H).

In this paper, we study graph products in the following non-standard form: f((G ⊕ H) ∗ J) where G, H and J are graphs, ⊕ and ∗ are two different graph products and f is a graph property. We show that if f is the induced and semi-induced matching number, then for some products ⊕ and ∗, it is subadditive in the sense that f((G ⊕ H) ∗ J) ≤ f(G ∗ J) + f(H ∗ J). Moreover, when f is the poset dimension number, it is almost subadditive.

As applications of this result (we only need J = K₂ here), we obtain tight hardness of approximation for various problems in discrete mathematics and computer science: bipartite induced and semi-induced matching (a.k.a. maximum expanding sequences), poset dimension, maximum feasible subsystem with 0/1 coefficients, unit-demand min-buying and single-minded pricing, donation center location, boxicity, cubicity, threshold dimension and independent packing.

1 Introduction

Graph products generally refer to a way to use two graphs, say G and H, to construct a new graph, say G ∗ H for some product ∗. Studying properties of graphs resulting from a graph product, i.e., f(G ∗ H) for some function f, has been an active research area with countless applications in graph theory and computer science.
For example, the fact that the independence number α of the disjunctive product G ∨ H is multiplicative, i.e., α(G ∨ H) = α(G)α(H), has been used to amplify the hardness of approximating the maximum independent set problem.

In this paper, we study some graph properties when we apply graph products in a non-standard fashion to improve approximation hardness of several problems. We will study graph products in the form (G ⊕ H) ∗ J where G, H and J are any graphs and ∗ and ⊕ denote two different products. This form may look strange at first, but it will be clear later that it in fact captures a "generic" graph transformation technique that has been used a lot in the past (cf. Section 3).

∗ IDSIA, Lugano, Switzerland. Supported by the Swiss National Science Foundation project 200020-122110/1 and by Hasler Foundation Grant 11099. Partially supported by Julia Chuzhoy's CAREER grant CCF-0844872. E-mail: [email protected]
† School of Computer Science, McGill University, Montreal QC, Canada. Supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant No. 28833 and by European Research Council (ERC) Starting Grant 279352. E-mail: [email protected]
‡ Theory and Applications of Algorithms Research Group, University of Vienna, Austria, and Nanyang Technological University, Singapore. E-mail: [email protected]

The products we will study are the tensor product G × H, the extended tensor product G ×e H, the disjunctive product G ∨ H and the lexicographic product G · H. These products produce a graph whose vertex set is V(G) × V(H) = {(u, v) : u ∈ V(G), v ∈ V(H)} with different edge sets. Their exact definitions are not necessary at this point, but if you are impatient, see Section 2.

Properties of a graph G that we are interested in are the induced matching number im(G), the semi-induced matching number sim(G) and the poset dimension number dim(G).
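The multiplicativity of α under the disjunctive product mentioned above is easy to check numerically on tiny graphs. The sketch below (function and variable names are ours, not the paper's) builds G ∨ H by the edge rule given in Section 2 and computes α by brute force:

```python
from itertools import combinations, product

def disjunctive(gn, g_edges, hn, h_edges):
    """G v H: (u,a)(v,b) is an edge iff uv is an edge of G or ab is an edge of H."""
    verts = list(product(range(gn), range(hn)))
    edges = {frozenset((p, q)) for p, q in combinations(verts, 2)
             if frozenset((p[0], q[0])) in g_edges
             or frozenset((p[1], q[1])) in h_edges}
    return verts, edges

def alpha(verts, edges):
    """Independence number by brute force (exponential; tiny graphs only)."""
    for k in range(len(verts), 0, -1):
        for s in combinations(verts, k):
            if not any(frozenset(p) in edges for p in combinations(s, 2)):
                return k
    return 0

p3 = {frozenset((0, 1)), frozenset((1, 2))}      # path u-v-w; alpha = 2
verts, prod_edges = disjunctive(3, p3, 3, p3)
assert alpha(list(range(3)), p3) == 2
assert alpha(verts, prod_edges) == 2 * 2         # alpha(G v G) = alpha(G)^2
```

Repeating the product k times gives α(G^k) = (α(G))^k, which is exactly the amplification used in Section 1.1.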
Informally, an induced matching of an undirected graph G is a matching M of G such that no two edges in M are joined by an edge in G. We let the induced matching number of G, denoted by im(G), be the size of the maximum induced matching. See the formal definition in Section 2. Definitions of posets and other graph properties are not needed in this section and are deferred to Section 2.

Now that we have introduced all necessary notation, we are ready to state our result. We show that for f = im or f = sim and for an appropriate choice of products ⊕ and ∗, f will be subadditive, i.e., f((G ⊕ H) ∗ J) ≤ f(G ∗ J) + f(H ∗ J) for any graphs G, H and J. For f = dim, we will get almost subadditivity instead. The precise statement is as follows.

Theorem 1.1 ((Almost) Subadditivity). For any undirected graphs G, H and J and a height-two poset P⃗,

  im((G ∨ H) × J) ≤ im(G × J) + im(H × J)   (1)
  sim((G ∨ H) × J) ≤ sim(G × J) + sim(H × J)   (2)
  dim((G · H) ×e P⃗) ≤ dim(G ×e P⃗) + χ(G) dim(H ×e P⃗) + dim(P⃗)   (3)

where χ(G) is the chromatic number of G.

Note that Eq. (3) suggests that the poset dimension number is almost subadditive in the sense that if χ(G) and dim(P⃗) are small, then it will be subadditive (with a small multiplicative factor). This will be the case when we use it to prove the hardness of approximation; see Section 5 for more detail.

Organization.
In Subsection 1.1, we give an example of how Theorem 1.1 plays a role in proving hardness of approximation. In Subsection 1.2, we discuss problems whose hardness of approximation can be obtained via our technique. In Section 2, we define the formal terms needed in the rest of the paper. We then prove Theorem 1.1 in Section 3 (for the special case, which gives more intuition) and Section 4 (for the general case). Sections 5 and 6 show the hardness results. Our results are summarized in Fig. 1.
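The quantity im(G) is used on small examples throughout; a brute-force computation (a sketch with ad-hoc names; the formal definition is Definition 2.2 in Section 2) is enough to follow them:

```python
from itertools import combinations

def im(edges):
    """Induced matching number by brute force over edge subsets (tiny graphs only).

    edges: a set of frozenset({u, v})."""
    es, best = list(edges), 0
    for k in range(1, len(es) + 1):
        for sub in combinations(es, k):
            if len({v for e in sub for v in e}) < 2 * k:
                continue                          # endpoints repeat: not a matching
            if any(frozenset((x, y)) in edges     # two matching edges are joined
                   for e1, e2 in combinations(sub, 2) for x in e1 for y in e2):
                continue
            best = k
    return best

E = lambda *ps: {frozenset(p) for p in ps}
assert im(E((0, 1), (1, 2), (2, 3))) == 1   # P4: its two disjoint edges are joined
assert im(E((0, 1), (2, 3))) == 2           # two isolated edges
assert im(E((0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0))) == 2   # C6
```

For instance, the path on four vertices has im = 1: its two disjoint edges are joined by the middle edge.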
We now sketch the proof idea of the n^{1-ε} hardness of the bipartite induced matching problem, where n is the number of vertices, which shows how Theorem 1.1 plays a role in proving the hardness of approximation. The full proof appears in Section 5. We build on the idea of [17] and apply Theorem 1.1 with J = K₂.

For any graph G, let α(G) be the size of the maximum independent set. We use the following connection between independence and induced matching numbers, which was implicitly shown in [17]:

  α(G) ≤ im(G ×e K₂) ≤ im(G × K₂) + α(G).   (4)

To the best of our knowledge, this product has not been considered before. Interestingly, it is mentioned in [27, p. 42] as "not worthy of attention".

Figure 1: Paper Outline. Problems in the third column are defined in Section 2. Other problems are defined in Section 6.

If im(G × K₂) is relatively small, i.e., im(G × K₂) = O(α(G)), then we will already have the hardness of n^{1-ε} using the hardness of approximating the independent set number (e.g., [28]). However, im(G × K₂) could be as large as |V(G)|, and in such a case, we do not get any hardness result (not even NP-hardness). To remedy this, we apply Eq. (1) in Theorem 1.1 repeatedly to show that

  im(G^k × K₂) ≤ im(G^{k-1} × K₂) + im(G × K₂) ≤ … ≤ k · im(G × K₂)

where G^k = G ∨ G ∨ … ∨ G is a k-fold product. Combining this with Eq. (4), we have

  α(G^k) ≤ im(G^k ×e K₂) ≤ k · im(G × K₂) + α(G^k).

It is well known that α(G^k) = (α(G))^k. Thus, after applying the k-fold product, the term im(G × K₂) only grows linearly in terms of k, while the term α(G^k) grows exponentially!
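Eq. (4) can be sanity-checked on tiny graphs. The sketch below (helper names ours) labels the two vertices of K₂ by {1, 2}, as in Section 3:

```python
from itertools import combinations

def alpha(n, edges):
    """Independence number, brute force (tiny graphs only)."""
    for k in range(n, 0, -1):
        for s in combinations(range(n), k):
            if not any(frozenset(p) in edges for p in combinations(s, 2)):
                return k
    return 0

def im(edges):
    """Induced matching number, brute force over edge subsets."""
    es, best = list(edges), 0
    for k in range(1, len(es) + 1):
        for sub in combinations(es, k):
            if len({v for e in sub for v in e}) < 2 * k:
                continue                          # not a matching
            if any(frozenset((x, y)) in edges
                   for e1, e2 in combinations(sub, 2) for x in e1 for y in e2):
                continue                          # two matching edges are joined
            best = k
    return best

def times_k2(n, edges, extended=False):
    """G x K2 (or G x_e K2 when extended): edge (u,1)-(v,2) iff uv in E(G) (or u = v)."""
    return {frozenset(((u, 1), (v, 2))) for u in range(n) for v in range(n)
            if frozenset((u, v)) in edges or (extended and u == v)}

p3 = {frozenset((0, 1)), frozenset((1, 2))}
p4 = p3 | {frozenset((2, 3))}
for n, g in ((3, p3), (4, p4)):
    a = alpha(n, g)
    # Eq. (4): alpha(G) <= im(G x_e K2) <= im(G x K2) + alpha(G)
    assert a <= im(times_k2(n, g, extended=True)) <= im(times_k2(n, g)) + a
```

The lower bound is the interesting direction: an independent set S of G turns into the induced matching {(v, 1)(v, 2) : v ∈ S} of G ×e K₂.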
So, by choosing a large enough k, the induced matching number and the independence number coincide, i.e., im(G^k ×e K₂) ≈ α(G^k). Now, any hardness of approximating the independence number immediately implies roughly the same hardness of approximating the induced matching number. Note that one can check from the definition of ×e in Section 2 that G^k ×e K₂ is a bipartite graph, so we get the hardness of the bipartite induced matching problem as desired.

The almost subadditivity properties shown in Theorem 1.1 are useful in proving many other hardness of approximation results, as listed in the following theorem.
Theorem 1.2.
For any ε > 0, unless ZPP = NP, there is no n^{1-ε}-approximation algorithm, where n is the number of vertices in the input graph, for the following problems: bipartite induced and semi-induced matching (a.k.a. maximum expanding sequence), poset dimension, bipartite independent packing, donation center location, maximum feasible subsystem with 0/1 coefficients, boxicity, cubicity and threshold dimension.

Additionally, there is no d^{1/2-ε}-approximation algorithm for the induced and semi-induced matching problem on d-regular bipartite graphs.

Moreover, unless NP ⊆ ZPTIME(n^{poly log n}), there is no log^{1-ε} m-approximation algorithm and no k^{1/2-ε}-approximation algorithm for the single-minded and unit-demand pricing problems, where m is the number of consumers and k is the maximum consumer's set size.

These results are tight except for the hardness of k^{1/2-ε} and d^{1/2-ε} of the pricing problems and the induced-matching problem on d-regular graphs, respectively.

Remark on Stronger Results.
We note that for problems having n^{1-ε}-hardness stated in Theorem 1.2, we can actually prove a slightly stronger result: for any γ > 0, unless NP ⊆ ZPTIME(n^{poly log n}), there is no n/2^{(log n)^{3/4+γ}}-approximation algorithm. This is achieved by applying the result of Khot and Ponnuswami [33]. For the sake of presentation, we will prove only n^{1-ε}-hardness using the result of Håstad [28].

Now, we are ready to discuss our hardness results. We provide the formal definitions of the first three problems in Section 2 and provide the definitions of the remaining problems in Section 6.

Bipartite Induced Matching.
One immediate application is the tight n^{1-ε} hardness of approximating the induced matching problem on bipartite graphs, improving upon the previous best hardness of n^{1/3-ε} [17]. Our result also implies the tight hardness of the independent packing of graphs [12]. A similar technique can also be used to show that the induced matching problem on d-regular bipartite graphs is hard to approximate to within a factor of d^{1/2-ε}, improving upon the APX-hardness of [16, 50]. (This result is not tight as the best known upper bound is O(d) [24].)

The notion of induced matching has naturally arisen in discrete mathematics and computer science. It is, for example, studied as the "risk-free" marriage problem in [45] and is a subtask of finding a strong edge coloring. This problem and its variations also have connections to various problems such as storyline extraction [35] and network scheduling, gathering and testing (e.g., [18, 45, 32, 38, 7]). The problem was shown to be NP-complete in [45, 11] and was later shown to be hard to approximate to within a factor of n^{1/3-ε} unless NP = ZPP by [17]. We have sketched the proof of the tight hardness of n^{1-ε} in Section 1.1, and more detail can be found in Section 5.

Bipartite Semi-induced Matching (a.k.a. Maximum Expanding Sequence).
The same technique used in proving the hardness of the bipartite induced matching problem can be extended (with some additional work) to an interesting variation which captures a few other problems. This variation was introduced independently by Briest and Krysta [10] as the maximum expanding sequence problem and by Elbassioni et al. [17] as the bipartite semi-induced matching problem. There it was used as an intermediate problem that captures the hardness of some important algorithmic pricing problems and the maximum feasible subsystem problem, which we will see shortly.
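To make the notion concrete, here is a small checker (names ours; the formal definition and this exact check procedure appear in Section 2) for whether a matching is semi-induced with respect to a vertex order σ:

```python
def is_semi_induced(M, sigma, edges):
    """Check that matching M is sigma-semi-induced: orient each matching edge
    (u_i, v_i) with sigma(u_i) < sigma(v_i), sort edges by sigma(u_i); then no
    edge u_i u_j or u_i v_j of the graph may exist for i < j.

    M: list of vertex pairs; sigma: dict vertex -> rank; edges: frozenset pairs."""
    orient = sorted((sorted(e, key=lambda x: sigma[x]) for e in M),
                    key=lambda e: sigma[e[0]])
    return not any(frozenset((orient[i][0], y)) in edges
                   for i in range(len(orient))
                   for j in range(i + 1, len(orient))
                   for y in orient[j])

E = lambda *ps: {frozenset(p) for p in ps}
sigma = {v: v for v in range(4)}
# In the path 0-1-2-3, {01, 23} is NOT an induced matching (edge 12 joins them),
# yet it IS sigma-semi-induced for the natural order:
assert is_semi_induced([(0, 1), (2, 3)], sigma, E((0, 1), (1, 2), (2, 3)))
# Adding the edge 02 creates a forbidden u_i u_j pair:
assert not is_semi_induced([(0, 1), (2, 3)], sigma, E((0, 1), (1, 2), (2, 3), (0, 2)))
```

The example shows why sim(G) ≥ im(G): semi-inducedness only forbids edges leaving the lower endpoints, so every induced matching is semi-induced but not vice versa.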
Poset Dimension.
Another immediate application of Theorem 1.1 is the tight n^{1-ε} hardness of approximating the poset dimension, improving upon the hardness of n^{1/2-ε} of Hegde and Jain [29]. The notion of poset dimension has long been a central subject of study in discrete mathematics (e.g., [48]) and has connections with many other notions, e.g., transitive-closure spanners [39] as well as the boxicity and the threshold dimension of graphs [1]. A variant called the fractional dimension is shown to have a connection to some classical scheduling problems (e.g., [4]). We note that our technique also implies the tight hardness of approximating the fractional dimension of posets.

The computational complexity of the poset dimension problem was one of the twelve outstanding open problems in Garey and Johnson's treatise on NP-completeness [23]. It was independently shown by Yannakakis [49] and Lawler and Vornberger [36] that the problem is NP-complete. More recently, Hegde and Jain showed that the problem is hard to approximate to within a factor of n^{1/2-ε} unless NP = ZPP. Here we resolve the approximability of this problem using graph products.

We note that our result actually implies the tight hardness of approximating the dimension of adjacency posets. This notion, along with the incidence poset, concerns the dimension of posets arising from graphs. They have been extensively studied due to their connections with graph planarity and chromatic number (e.g., [20, 43, 44]).
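The dimension notion used throughout is defined via embeddings into R^d under the dominance order (formally, Definition 2.5 in Section 2): uv is an order relation exactly when ϕ(u) is dominated by ϕ(v). A minimal sketch of this predicate (names ours):

```python
from itertools import permutations

def dominates(p, q):
    """p < q in R^d: p[i] <= q[i] everywhere and p[j] < q[j] somewhere."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def realizes(phi, arcs):
    """phi: vertex -> point in R^d realizes the poset iff, for all distinct
    u, v, the arc uv exists exactly when phi[u] < phi[v] (dominance order)."""
    return all(((u, v) in arcs) == dominates(phi[u], phi[v])
               for u, v in permutations(phi, 2))

chain = {('a', 'b'), ('b', 'c'), ('a', 'c')}            # a < b < c (transitive)
assert realizes({'a': (0,), 'b': (1,), 'c': (2,)}, chain)   # a chain has dimension 1
antichain = set()                                        # two incomparable elements
assert not realizes({'a': (0,), 'b': (1,)}, antichain)   # one dimension forces a < b
assert realizes({'a': (0, 1), 'b': (1, 0)}, antichain)   # two dimensions suffice
```

A two-element antichain already needs two dimensions, the smallest example where dim exceeds the trivial one-dimensional case of a chain.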
Unit-demand Min-buying (
Udp-Min ) and Single-minded (
Smp ) Pricing.
A result that is not so immediate from Theorem 1.1 is the hardness of approximating the two combinatorial pricing problems, called
Udp-Min and
Smp. The tight hardness of these two problems was recently proved by Chalermsook et al. [13]. Here we give alternate proofs of the results in [13] by employing the tight hardness of an intermediate problem, the maximum expanding sequence problem, thus confirming the role of expanding sequences in the hardness of pricing problems suggested in [10]. Both
Udp-Min and
Smp are among the most basic pricing problems in the literature and have received a lot of attention (e.g., [10, 25, 40, 41, 6, 13]). Briest and Krysta showed the hardness of log^ε m, assuming the (rather non-standard) hardness of the bounded-degree bipartite independent set problem. To prove the hardness of Udp-Min and
Smp, they introduced the maximum expanding sequence problem and showed that it can be reduced to
Udp-Min and
Smp. Thus, by proving the hardness of the maximum expanding sequence problem, they obtain the hardness results for these pricing problems. As mentioned in [10], this "indicates that expanding sequences are a common source of hardness for quite different combinatorial pricing problems". Chalermsook et al. [13] recently showed the tight hardness of log^{1-ε} m of these problems, assuming a standard assumption (i.e., NP ⊄ DTIME(n^{poly log n})), by avoiding the maximum expanding sequence problem and proving the hardness of Udp-Min and
Smp directly. In this paper, we revisit Briest and Krysta's original proposal to prove the hardness of these problems via the maximum expanding sequence problem. We show a hardness of approximation result for a special case of the maximum expanding sequence problem via graph products, which then implies the hardness of Udp-Min and
Smp. Our results confirm that the maximum expanding sequence problem is indeed the main source of hardness for both
Udp-Min and
Smp . Maximum Feasible Subsystem with 0/1 Coefficients.
In the maximum feasible subsystem(
Mrfs) problem, we are given a system of m linear inequalities ℓ_i ≤ a_i^T x ≤ μ_i, where a_i ∈ {0, 1}^n and ℓ_i, μ_i ∈ R_+. The goal is to find a non-negative solution x ∈ R^n_+ that maximizes the number of constraints satisfied. When the coefficients are not necessarily 0/1, the m^{1-ε}-hardness of Mrfs was proved by Guruswami and Raghavendra [26]. Elbassioni et al. [17] showed that even in the 0/1-coefficient case, the problem has a hardness of m^{1/3-ε}. They actually showed a gap-preserving reduction from the semi-induced matching problem to 0/1-Mrfs. This means that our hardness of the semi-induced matching problem immediately implies the tight hardness of m^{1-ε} for any ε for Mrfs. This hardness result holds even when we allow a violation of the upper bounds by at most an O(n) factor. We also show the tight hardness of log^{1-ε}(max_i ℓ_i), matching an upper bound in [17].

Boxicity, Cubicity and Threshold Dimension of Graphs.
The notion of boxicity arose from the study of intersection graphs. It was introduced by Roberts [21] and studied extensively in discrete mathematics. It also has connections to important graph-theoretic measures such as treewidth [15], genus [20, 2], crossing number [3] and the maximum degree of graphs [14]. In [1], reductions were shown from the poset dimension problem to these parameters (boxicity, cubicity and threshold dimension). Combining this with our hardness of approximating poset dimension, we get the tight n^{1-ε}-hardness for all these problems.

Our special case is different from that in [10]. (If we use their special case, we only obtain the hardness of log^{1/2-ε} m.) In particular, our special case requires that the input must be in some specific form of a graph product.

Indeed, Guruswami and Raghavendra proved the hardness of the Max 3LIN problem, which can be seen as a special case of Mrfs.

Donation Center Location.
In this problem, we are given a set of agents and a set of centers, where agents have preferences over centers and centers have capacities. The goal is to open a subset of centers and to assign a maximum-sized subset of agents to their most-preferred opened centers, while respecting the capacity constraints.

Huang and Svitkina [30] introduced this problem and showed an n^{1/2-ε} approximation hardness by a reduction from the maximum independent set problem. We show a straightforward reduction from the semi-induced matching problem, hence giving the tight n^{1-ε}-hardness. This hardness result holds even when all agents have the same preference over centers and each center has unit capacity.

In this section, we define the graph products and graph properties we will use. For any directed or undirected graph G, we use V(G) and E(G) to denote its vertex and edge sets, respectively. Note that if G is directed, then it is possible that, for some u, v ∈ V(G), uv ∈ E(G) but vu ∉ E(G). This is not the case when G is undirected. When a graph is directed, we shall put an arrow above it, as in G⃗, to emphasize that G⃗ is a directed graph.

Definition 2.1 (Graph Products). A graph product is a binary operation that constructs from two graphs G and H a graph with vertex set V(G) × V(H) = {(u, v) : u ∈ V(G), v ∈ V(H)}, where the edge set is determined by the adjacency of vertices of G and H.

The graph products we study include the tensor product G × H, the extended tensor product G ×e H, the disjunctive product G ∨ H and the lexicographic product G · H.
The edge sets of these products are as follows.

  (tensor) E(G × H) = {(u, a)(v, b) : uv ∈ E(G) and ab ∈ E(H)}
  (extended tensor) E(G ×e H) = {(u, a)(v, b) : (uv ∈ E(G) or u = v) and ab ∈ E(H)}
  (disjunctive) E(G ∨ H) = {(u, a)(v, b) : uv ∈ E(G) or ab ∈ E(H)}
  (lexicographic) E(G · H) = {(u, a)(v, b) : uv ∈ E(G) or (u = v and ab ∈ E(H))}

Definition 2.2 (Induced Matching Number, im(G)). Let G = (V, E) be any undirected graph. An induced matching of G is a set of edges M ⊆ E(G) such that M is a matching and no two edges in M are joined by an edge in G, i.e., for any edges uu′, vv′ ∈ M, G has none of the edges in {uv, uv′, u′v, u′v′}. The induced matching number of G, denoted by im(G), is the cardinality of the maximum-cardinality induced matching of G.

Now, we shall define a variant of an induced matching called a semi-induced matching; this notion is with respect to a total order. For any finite set S, a total order σ on S is a bijection σ : S → [|S|]. The total order σ gives an ordering of the elements of S, as we may order the elements x_i ∈ S so that σ(x₁) < σ(x₂) < … < σ(x_n), where x_i is such that σ(x_i) = i, for all i.

Definition 2.3 (Semi-induced Matching, sim(G)). Given any graph G = (V, E) and any total order σ, we say that a matching M is a σ-semi-induced matching if, for any pair of edges uu′, vv′ ∈ M such that σ(u) < σ(u′) and σ(u) < σ(v) < σ(v′), there are no edges uv′ and uv in E.
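The four edge-set rules above translate directly into code. The sketch below (names ours) also checks the containments that follow immediately from the definitions: E(G × H) ⊆ E(G ×e H) ⊆ E(G ∨ H) and E(G · H) ⊆ E(G ∨ H).

```python
from itertools import combinations, product

def graph_product(kind, gn, ge, hn, he):
    """Edge set of G * H on V(G) x V(H) for the four products of Definition 2.1.

    Graphs are given as a vertex count and a set of frozenset({u, v}) edges."""
    adj = lambda u, v, edges: frozenset((u, v)) in edges
    rule = {
        'tensor':      lambda u, a, v, b: adj(u, v, ge) and adj(a, b, he),
        'ext_tensor':  lambda u, a, v, b: (adj(u, v, ge) or u == v) and adj(a, b, he),
        'disjunctive': lambda u, a, v, b: adj(u, v, ge) or adj(a, b, he),
        'lex':         lambda u, a, v, b: adj(u, v, ge) or (u == v and adj(a, b, he)),
    }[kind]
    verts = list(product(range(gn), range(hn)))
    return {frozenset((p, q)) for p, q in combinations(verts, 2)
            if rule(p[0], p[1], q[0], q[1])}

p3 = {frozenset((0, 1)), frozenset((1, 2))}   # path on 3 vertices
k2 = {frozenset((0, 1))}                      # single edge
E = {k: graph_product(k, 3, p3, 2, k2)
     for k in ('tensor', 'ext_tensor', 'disjunctive', 'lex')}
assert E['tensor'] <= E['ext_tensor'] <= E['disjunctive']
assert E['lex'] <= E['disjunctive']
```

All four products share the vertex set V(G) × V(H); only the adjacency rule changes, which is why a single constructor parameterized by the rule suffices.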
We can check if a matching M is a σ-semi-induced matching as follows. First, we order the edges in M as u₁v₁, u₂v₂, …, u_q v_q where, for any i, σ(u_i) < σ(v_i) and σ(u₁) < σ(u₂) < … < σ(u_q). Now, M is a σ-semi-induced matching if and only if, for any i < j, G has no edge in {u_i u_j, u_i v_j}.

For any graph G, we define sim_σ(G) to be the size of a maximum σ-semi-induced matching, and we define sim(G) = max_σ sim_σ(G). In the semi-induced matching problem, we are given a graph G, and the goal is to compute sim(G).

Definition 2.4 (Partially Ordered Set (Poset)). A directed graph P⃗ is a partially ordered set (poset) if it is directed, acyclic and transitive (i.e., uv, vw ∈ E(P⃗) ⇒ uw ∈ E(P⃗)).

An important class of posets is the height-two posets. Given a poset P⃗, we say that a vertex is minimal (resp., maximal) if it has zero in-degree (resp., out-degree) in P⃗. A poset is a height-two poset if every vertex is minimal or maximal (some vertex might be both).

A poset can be defined by an ordering of d-dimensional points. This leads to the notion of the poset dimension number. For any d-dimensional points p, q ∈ R^d, we say that p < q if for any 1 ≤ i ≤ d, p[i] ≤ q[i], and there exists j such that p[j] < q[j]; otherwise, we say that p ≮ q.

Definition 2.5 (Poset Dimension Number, dim(P⃗)). Let P⃗ be any poset. We say that a mapping ϕ : V(P⃗) → R^d realizes the poset P⃗ if for any distinct vertices u, v ∈ V(P⃗), uv ∈ E(P⃗) if and only if ϕ(u) < ϕ(v). The dimension of the poset P⃗, denoted by dim(P⃗), is the smallest integer d such that there is a mapping ϕ : V(P⃗) → R^d that realizes P⃗.

Why Height-two Posets?
Recall that our main theorem only applies to height-two posets. There is a good reason for this. Note that it is not always the case that the product G ×e P⃗ between an undirected graph G and a poset P⃗ (that we use in Theorem 1.1) will result in a poset (an example is when G and P⃗ are a path and a directed path of three vertices, respectively); so, the term dim(G ×e P⃗) does not necessarily make sense. However, if P⃗ is a height-two poset, then G ×e P⃗ is always a poset (in fact, a height-two one). (We prove this fact in Lemma 4.8 in Section 4.) For this reason, Theorem 1.1 is stated only for a height-two poset P⃗.

Our definition follows [31]. This concept is also sometimes called the order dimension or the Dushnik–Miller dimension. It is usually defined using the notion of a realizer. This is however equivalent to our "embedding into R^d" definition, and we use the embedding definition as it will be easier to use in our proofs.

In this section, we will focus on the special case of Theorem 1.1 where we consider the products (G ⊕ H) ∗ J when J = K₂ only. This is partly because the proof is more intuitive (and easier to illustrate by pictures) in this special case. Moreover, this case itself is sufficient for our purpose in proving hardness results. We will also use B[G ⊕ H] instead of (G ⊕ H) ∗ K₂ to simplify the notation (more on this in Section 3.1). Proofs of the general cases can be found in Section 4. They are relatively short, perhaps easier to verify, and can be read without understanding any material in this section. However, since the proofs in the general case are less intuitive, some readers might find the intuition in this section helpful.

Why K₂? We first give a motivation for studying the graph products in this specific (and rather peculiar) form. First, notice that both G × K₂ and G ×e K₂ are bipartite. To see this, let V(K₂) = {1, 2}; then the vertices of G × K₂ and G ×e K₂ are of the form (v, i) where v ∈ V(G) and i ∈ {1, 2} (e.g., Fig. 2 and 4). Thus, we can partition the vertices of these graphs into V₁ ∪ V₂ where V_i = {(v, i) : v ∈ V(G)}. There is no edge between vertices in the same partition in either graph, so both are bipartite.

Figure 2: Example of graphs G, H, B[G], B[H] and B[G ∨ H], as well as super vertices V_{(x,i)}, the set of edges E_G and the induced matching M_G (defined in Section 3.2). Bold edges are in M_G. Solid edges (in blue, including bold edges) are edges assigned to E_G, and dashed edges (in gray) are edges assigned to E_H. Observe that if we view each V_{(x,i)} as a vertex (by unifying the vertices in it) and consider only edges in E_G, then the graph looks exactly like B[G]. Moreover, the induced matching M_G becomes an induced matching of two edges in this graph of super vertices. This is the main fact we use to prove Eq. (5).

So, we may think of the products G × K₂ and G ×e K₂ as "transformations" of the graph G into a bipartite graph. To emphasize this point of view and simplify the notation later on, we will write B[G] and B_e[G] instead of G × K₂ and G ×e K₂ in this and the next sections.

An intuitive way to think of B[G] and B_e[G] is to imagine the following construction. We construct B[G] by making two copies of each vertex v ∈ V(G) to get vertices (v, 1) and (v, 2) in B[G]. Then, for each edge uv ∈ E(G), we add an edge between (u, 1) and (v, 2). To construct B_e[G], we simply add an edge connecting the two copies of each vertex v ∈ V(G); i.e., we add an edge between (v, 1) and (v, 2) for all v ∈ V(G).

This transformation might now sound familiar to many readers. The graph B[G] is actually known as the bipartite double cover and has been used repeatedly as a natural way to transform any graph into a bipartite graph; for example, one can use this transformation to reduce the problem of computing cycle covers to the maximum bipartite matching problem.

In the context of posets, we will abuse the notation and define B[G] = G × K⃗₂ and B_e[G] = G ×e K⃗₂. Edges in these graphs always point from vertices in V₁ to vertices in V₂. The graph B[G] also has its own name in this case: an adjacency poset [20].

Proof of the Special Case of Eq. (1). In this section, we aim to prove the following special case of Eq. (1):

  im(B[G ∨ H]) ≤ im(B[G]) + im(B[H]).   (5)

Let V₁ and V₂ be the two partitions of the vertices in B[G ∨ H] and let M be an induced matching in B[G ∨ H]. Recall that each edge in B[G ∨ H] is of the form (u, a, 1)(v, b, 2) where u, v ∈ V(G) and a, b ∈ V(H), and it appears in B[G ∨ H] if and only if at least one of the following conditions holds: (1) uv ∈ E(G) or (2) ab ∈ E(H). Our strategy is to consider edges satisfying each condition separately.

Figure 3: Example of graphs G, H, B[G], B[H] and B[G ∨ H], as well as super vertices V_{(x,i)}. Solid edges (in blue) are edges assigned to E_G, and dashed edges (in gray) are edges assigned to E_H. (E_G and E_H are defined in Section 3.2.)
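The special case (5) can be verified mechanically on tiny graphs. In the sketch below (names ours), B[G] is built edge by edge as described above, and im is computed as a maximum independent set in the conflict graph whose vertices are the edges and whose conflicts are "shares an endpoint or is joined by an edge":

```python
from itertools import combinations, product

def disjunctive(gn, ge, hn, he):
    """Edge set of G v H on {0..gn-1} x {0..hn-1}."""
    vs = list(product(range(gn), range(hn)))
    return {frozenset((p, q)) for p, q in combinations(vs, 2)
            if frozenset((p[0], q[0])) in ge or frozenset((p[1], q[1])) in he}

def bcover(edges):
    """B[G]: every edge uv of G yields the edges (u,1)-(v,2) and (v,1)-(u,2)."""
    out = set()
    for e in edges:
        u, v = tuple(e)
        out.add(frozenset(((u, 1), (v, 2))))
        out.add(frozenset(((v, 1), (u, 2))))
    return out

def im(edges):
    """Induced matching number = maximum independent set in the conflict graph
    of edges (conflict = sharing an endpoint or being joined by an edge)."""
    es = list(edges)
    conf = {i: set() for i in range(len(es))}
    for i, j in combinations(range(len(es)), 2):
        if es[i] & es[j] or any(frozenset((x, y)) in edges
                                for x in es[i] for y in es[j]):
            conf[i].add(j)
            conf[j].add(i)
    def mis(cand):
        if not cand:
            return 0
        v = min(cand)
        rest = cand - {v}
        return max(mis(rest), 1 + mis(rest - conf[v]))
    return mis(set(range(len(es))))

k2 = {frozenset((0, 1))}                      # a single edge
p3 = {frozenset((0, 1)), frozenset((1, 2))}   # a path on three vertices
assert im(bcover(k2)) == 2                    # B[K2] is two disjoint edges
for (gn, ge), (hn, he) in (((2, k2), (2, k2)), ((3, p3), (2, k2))):
    lhs = im(bcover(disjunctive(gn, ge, hn, he)))
    assert lhs <= im(bcover(ge)) + im(bcover(he))   # Eq. (5)
```

On G = H = K₂ the product G ∨ H is K₄, and im(B[K₄]) = 2 ≤ 2 + 2, consistent with Eq. (5).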
2) : uv ∈ E ( G ) } and E H = { ( u, a, v, b,
2) : ab ∈ E ( H ) } . For example, in Fig. 2(b), E G consists of solid edges (inblue) and E H consists of dashed edges (in gray). Note that some edges, e.g., edge ( u, a, v, b,
2) inFig. 2(b), are in both E G and E H . We also partition the induced matching M into M = M G ∪M H where M G = M ∩ E G and M H = M ∩ E H . Obviously, |M| ≤ |M G | + |M H | . Our goal is to showthat |M G | ≤ im (B[ G ]) and |M H | ≤ im (B[ H ]). We will only show the former claim because thelatter can be argued similarly.To prove this claim, we partition vertices in V and V according to which vertices in G they“inherit” from. That is, for any vertex u ∈ V ( G ), we let V u = { ( u, a,
1) : a ∈ V ( H ) } and V u = { ( u, a,
2) : a ∈ V ( H ) } (e.g., see Fig. 2(b)).We can think of each set V ui as a “super vertex” corresponding to a vertex ( u, i ) in B[ G ] inthe sense that if we unify all vertices in V ui into one vertex, for all u ∈ V ( G ) and i ∈ V ( K ), andremove duplicate edges, then we will get the graph B[ G ]. In fact, we can show more than this.We can show that if we look at M G in the graph of super vertices, then we will get an inducedmatching of B[ G ] having the same size as M G ! For example, in Fig. 2(b) the induced matching M G in B[ G ∨ H ] consisting of bold edges becomes a set of two edges { V u V v , V v V w } in the graphof super vertices, which is still an induced matching.The key idea in proving this fact is an observation that for any pair of super vertices V u and V v ,either there is no edge between any pair of vertices in V u and V v , or there will be edges betweenall pairs of vertices in V u and V v . For example, in Fig. 2(b), there is no edge between any pairof vertices x ∈ V u and y ∈ V w while there is an edge between every pair of vertices x ∈ V u and y ∈ V v . Using this observation, we can easily prove the two lemmas below. The first lemma saysthat M G becomes a matching in the graph of super vertices, and the second one says that thismatching is, in fact, an induced matching.Before proceeding to the proofs, recall that we write the edge set of B[ G ∨ H ] as E (B[ G ∨ H ]) = E G ∪ E H , where E G = { ( u, a, v, b,
2) : uv ∈ E ( G ) } and E H = { ( u, a, v, b,
2) : ab ∈ E ( H ) } . Lemma 3.1.
For any u ∈ V(G) and i ∈ {1, 2}, V_{(u,i)} contains an endpoint of at most one edge in M_G.

Proof. For the sake of contradiction, assume that there is a vertex u ∈ V(G) such that V_{(u,1)} contains two endpoints of two edges in M_G, say (u, a, 1)(v, b, 2) and (u, a′, 1)(v′, b′, 2). (The case of V_{(u,2)} is proved analogously.) Since (u, a, 1)(v, b, 2) is in E_G (recall that M_G = M ∩ E_G), we have that uv ∈ E(G) and thus (u, 1)(v, 2) is in E(B[G]). This fact then implies that there is an edge between (u, a′, 1) and (v, b, 2) in E_G as well, contradicting the fact that M_G (and thus M) is an induced matching.

Figure 4: Graphs G, H, B_e[G], B_e[H] and B_e[G · H], as well as super vertices V_{(x,i)} and mappings ϕ_G and ϕ_H that realize B_e[G] and B_e[H], respectively. Note that directions of edges are omitted in the pictures; they always point from left to right.

Example.
Here we illustrate the proof of Lemma 3.1. Consider Fig. 3(b) and let us say that M_G contains the edges (u, a, 1)(v, b, 2) and (u, b, 1)(v, a, 2), which means that V_{(u,1)} contains endpoints of two edges in M_G. Having the first edge in E_G means that uv ∈ E(G) and thus (u, 1)(v, 2) is in E(B[G]) (as witnessed in Fig. 3(a)). But then it means that the edge (u, a, 1)(v, a, 2) must be in E_G as well, making M_G (and thus M) not an induced matching.

Lemma 3.2.
For any u, u′, v, v′ ∈ V(G), if M_G contains an edge between a pair of vertices in V_{(u,1)} and V_{(v,2)} and an edge between another pair of vertices in V_{(u′,1)} and V_{(v′,2)}, then there must be no edge between vertices in V_{(u,1)} and V_{(v′,2)} in E_G.

Proof. Assume for a contradiction that M_G contains edges (u, a, 1)(v, b, 2) and (u′, a′, 1)(v′, b′, 2), and that there is an edge (u, c, 1)(v′, d, 2) in E_G. Since the edge (u, c, 1)(v′, d, 2) is in E_G, we have uv′ ∈ E(G) and thus (u, 1)(v′, 2) ∈ E(B[G]). This implies that (u, a, 1)(v′, b′, 2) is in E_G, which contradicts the fact that M is an induced matching in B[G ∨ H].

Example.
Here we illustrate the proof of Lemma 3.2. Consider Fig. 3(b), and let us say that the matching MG contains (v,a,1)(u,a,2) and (u,a,1)(w,a,2), and that there is an edge (v,b,1)(w,b,2) in EG; this prevents MG from being an induced matching in the graph of super vertices. Having the last edge in EG implies that (v,1)(w,2) ∈ E(B[G]), which in turn implies that (v,a,1)(w,a,2) ∈ EG, making MG (and thus M) not an induced matching in B[G ∨ H].

Note on Proving the General Version (Section 4).
To prove the general version, i.e., Eq.(1), we may define EG and EH analogously to the proof in this section. We may then define super vertices in a similar way and prove lemmas that are similar in spirit to Lemmas 3.1 and 3.2. However, we choose an alternative way which seems more suitable for proving the general version: decomposing the graph into products of some well-structured graphs and showing the associativity property of the graph products we use.

Figure 5: (a) The mapping ϕ = ϕG (cf. Section 3) introduces “undesirable edges” such as the bold edge in this picture. (b) The mapping ϕ = ϕG ϕH (cf. Section 3) removes “desirable edges” such as the bold edge in this picture.

The Special Case of Eq.(3). We now prove the special case of Eq.(3): dim(Be[G·H]) ≤ dim(Be[G]) + χ(G) dim(Be[H]). Throughout this section, we will think of Be[G], for any undirected graph G, as a poset G ×e K⃗2. Thus, edges in Be[G·H] are directed edges of the form (u,a,1)(v,b,2) for some u,v ∈ V(G) and a,b ∈ V(H). Let dG = dim(Be[G]), dH = dim(Be[H]), and let ϕG : V(Be[G]) → R^dG and ϕH : V(Be[H]) → R^dH be mappings that realize the posets Be[G] and Be[H], respectively. This means that, for example, (u,1)(v,2) ∈ E(Be[G]) if and only if ϕG(u,1) < ϕG(v,2). We may assume that ϕG and ϕH are non-negative (by adding appropriate positive numbers). See an example in Fig. 4.

Our strategy is to use ϕG and ϕH to define a mapping ϕ : V(Be[G·H]) → R^(dG + χ(G)dH) that realizes Be[G·H]. Again, this means that we want ϕ such that, for any vertices (u,a,i) and (v,b,j), (u,a,i)(v,b,j) ∈ E(Be[G·H]) if and only if ϕ(u,a,i) < ϕ(v,b,j). To simplify our discussion, we will focus on the case where i = 1 and j = 2 (the cases where i = j are easy to deal with).

Proof Idea.
Before we show the construction of ϕ, let us show a few failed attempts to illustrate the intuition behind the construction (readers may feel free to skip this part and go directly to the definition of ϕ below). We use Fig. 4 as an example; Fig. 5 and 7 might also be helpful as visual aids.

The first attempt is to use ϕ(u,a,i) = ϕG(u,i) to realize Be[G·H]. This obviously fails, simply because we did not use ϕH at all: in Fig. 4 (also see Fig. 5(a)), we have ϕ(u,a,1) < ϕ(u,c,2) but (u,a,1)(u,c,2) ∉ E(Be[G·H]). In other words, ϕ “introduces” some “undesirable edges” – edges (u′,a′,1)(v′,b′,2) that are not in Be[G·H] even though ϕ(u′,a′,1) < ϕ(v′,b′,2).

The second attempt is to use ϕ(u,a,i) = ϕG(u,i)ϕH(a,i), which is a “concatenation” of ϕG(u,i) and ϕH(a,i) (thus, the dimension of ϕ is dG + dH). It can be shown that no undesirable edge is introduced by this ϕ. However, ϕ might “remove” some “desirable edges” – edges (u′,a′,1)(v′,b′,2) that are in Be[G·H] even though ϕ(u′,a′,1) ≮ ϕ(v′,b′,2). In Fig. 4 (also see Fig. 5(b)), ϕ(u,a,1) ≮ ϕ(v,c,2) but (u,a,1)(v,c,2) ∈ E(Be[G·H]).

Note that Eq.(3) implies only that dim(Be[G·H]) ≤ dim(Be[G]) + χ(G) dim(Be[H]) + 1, so we are proving a slightly stronger statement for this special case.

ϕ(x,r,1) = ϕG(x,1) ϕH(r,1) (0,0,0)   and   ϕ(x,r,2) = ϕG(x,2) ϕH(r,2) (∞,∞,∞)
ϕ(v,r,1) = ϕG(v,1) (0,0,0) ϕH(r,1)   and   ϕ(v,r,2) = ϕG(v,2) (∞,∞,∞) ϕH(r,2)

Figure 6: Example of ϕ. Note that x is a node in {u,w}.
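The notion of a mapping “realizing” a poset (an edge if and only if strict componentwise dominance of the vectors) can be checked mechanically. Below is a minimal sketch on the standard example S2 — a height-two poset with minimal elements a1, a2 and maximal elements b1, b2, where ai < bj iff i ≠ j; the concrete poset and vectors are our own toy example, not data from the paper.

```python
from itertools import product

def dominates(p, q):
    """q strictly dominates p in every coordinate."""
    return all(a < b for a, b in zip(p, q))

def realizes(phi, vertices, edges):
    """phi realizes the poset iff: (u,v) is an edge <=> phi[u] < phi[v] componentwise."""
    edge_set = set(edges)
    for u, v in product(vertices, repeat=2):
        if u == v:
            continue
        if ((u, v) in edge_set) != dominates(phi[u], phi[v]):
            return False
    return True

# The standard example S2: a1 < b2 and a2 < b1; all other pairs incomparable.
vertices = ["a1", "a2", "b1", "b2"]
edges = [("a1", "b2"), ("a2", "b1")]
# A 2-dimensional realizer (coordinates = ranks in two linear extensions).
phi = {"a1": (0, 2), "a2": (2, 0), "b1": (3, 1), "b2": (1, 3)}
print(realizes(phi, vertices, edges))  # True
```

This also makes the meaning of dim concrete: dim(S2) = 2 here, since the poset cannot be realized in R^1 (a single coordinate would linearly order all four elements).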
Figure 7: ϕ (cf. Section 3) which realizes Be[G·H].

We thus need a more clever way to combine ϕG with ϕH. A crucial observation is that if we concatenate them only at vertices that are independent in G, then we will not remove any desirable edges. For example, in Fig. 4(a), vertices u and w are independent in G. So, for any x ∈ {u,w} and r ∈ V(H), we let ϕ be as in Fig. 6. In other words, every vertex starts with ϕG (i.e., the first (gray) boxes). Then we “attach” ϕH to ϕG at vertices of the form (x,r,i), where x ∈ {u,w}, while attaching “trivial” vectors ((0,0,0) or (∞,∞,∞)) at (v,r,i) (i.e., the second (blue) boxes in Fig. 6). We then do the opposite: we attach ϕH to vertices (v,r,i) while attaching trivial vectors to the other vertices (i.e., the last (green) boxes). It can be checked (e.g., in Fig. 7) that ϕ does realize Be[G·H].

To summarize, the general idea of constructing ϕ is to keep attaching ϕH to ϕG, where each attachment must be done only on vertices that are independent in G. The dimension of ϕ depends on how many times we attach ϕH. A natural way to minimize the number of attachments is to use the χ(G) color classes, since each color class contains independent vertices. This is why the dimension becomes dG + χ(G)dH.

Constructing ϕ. Let C : V(G) → [k] be an optimal coloring of G, where k = χ(G). The coordinates in R^(dG + k·dH) are viewed as k+1 blocks. In the first block B0, we have dG = dim(Be[G]) coordinates, and in the k consecutive blocks B1, ..., Bk, we have dH = dim(Be[H]) coordinates per block. We will define the coordinates of each vertex in Be[G·H] by describing the coordinates in each block. For each point x ∈ R^(dG + k·dH) and each block Bj, we refer to the coordinates in block Bj of x as x|Bj.

For each vertex in Be[G·H] of the form (u,a,1):
ϕ((u,a,1))|B0 = ϕG((u,1)),  ϕ((u,a,1))|BC(u) = ϕH((a,1)),  and ϕ((u,a,1))|Bℓ = (0, ..., 0) otherwise.

For each vertex in Be[G·H] of the form (u,a,2):
ϕ((u,a,2))|B0 = ϕG((u,2)),  ϕ((u,a,2))|BC(u) = ϕH((a,2)),  and ϕ((u,a,2))|Bℓ = (∞, ..., ∞) otherwise.

We finish the proof of Eq.(6) by the following lemma, which can be proved by case analysis.

Lemma 3.3. ϕ realizes Be[G·H].

Proof. We will use the following properties:

P1.
For any (u,1), (v,2) ∈ V(Be[G]), (u,1)(v,2) ∈ E(Be[G]) ⟺ ϕG((u,1)) < ϕG((v,2)).
P2. For any (a,1), (b,2) ∈ V(Be[H]), (a,1)(b,2) ∈ E(Be[H]) ⟺ ϕH((a,1)) < ϕH((b,2)).
P3. For any uv ∈ E(G), C(u) ≠ C(v).

We argue that for any vertices (u,a,1), (v,b,2) ∈ V(Be[G·H]), we have (u,a,1)(v,b,2) ∈ E(Be[G·H]) if and only if ϕ((u,a,1)) < ϕ((v,b,2)).

Case 1: u ≠ v and uv ∈ E(G). We will show that ϕ((u,a,1)) < ϕ((v,b,2)). Note that (u,a,1)(v,b,2) ∈ E(Be[G·H]) by construction, and C(u) ≠ C(v) by Property P3. Next, consider the blocks B0, BC(u) and BC(v). We have
ϕ((u,a,1))|B0 = ϕG((u,1)) < ϕG((v,2)) = ϕ((v,b,2))|B0
ϕ((u,a,1))|BC(u) = ϕH((a,1)) < ∞ = ϕ((v,b,2))|BC(u)
ϕ((u,a,1))|BC(v) = 0 < ϕH((b,2)) = ϕ((v,b,2))|BC(v)
The first line is because of Property P1 and the fact that (u,1)(v,2) ∈ E(Be[G]); in all remaining blocks Bℓ we trivially have 0 ≤ ∞. This proves the claim.

Case 2: u ≠ v and uv ∉ E(G). We will show that ϕ((u,a,1)) ≮ ϕ((v,b,2)). Note that (u,a,1)(v,b,2) ∉ E(Be[G·H]) by construction. Consider the block B0. Because of Property P1 and (u,1)(v,2) ∉ E(Be[G]), we have ϕ((u,a,1))|B0 = ϕG((u,1)) ≮ ϕG((v,2)) = ϕ((v,b,2))|B0. Thus, ϕ((u,a,1)) ≮ ϕ((v,b,2)).

Case 3: u = v, a ≠ b and ab ∈ E(H). We will show that ϕ((u,a,1)) < ϕ((v,b,2)). Note that (u,a,1)(v,b,2) ∈ E(Be[G·H]) by construction, and C(u) = C(v) since u = v. Consider each block. We have
ϕ((u,a,1))|B0 = ϕG((u,1)) < ϕG((v,2)) = ϕ((v,b,2))|B0 (6)
ϕ((u,a,1))|BC(u) = ϕH((a,1)) < ϕH((b,2)) = ϕ((v,b,2))|BC(v) (7)
ϕ((u,a,1))|Bℓ = 0 ≤ ∞ = ϕ((v,b,2))|Bℓ (8)
Eq.(7) follows because (a,1)(b,2) ∈ E(Be[H]) and Property P2, and Eq.(8) follows from the settings of the other blocks Bℓ. This proves the claim.

Case 4: u = v, a ≠ b and ab ∉ E(H). We will show that ϕ((u,a,1)) ≮ ϕ((v,b,2)). Note that (u,a,1)(v,b,2) ∉ E(Be[G·H]) by construction. Consider the block BC(u). By Property P2, ϕ((u,a,1))|BC(u) = ϕH((a,1)) ≮ ϕH((b,2)) = ϕ((v,b,2))|BC(v), thus proving the claim.

Case 5: u = v and a = b. We will show that ϕ((u,a,1)) < ϕ((v,b,2)). By the definition of ×e, (u,a,1)(v,b,2) ∈ E(Be[G·H]). Next, consider each block.
ϕ((u,a,1))|B0 = ϕG((u,1)) < ϕG((v,2)) = ϕ((v,b,2))|B0 (9)
ϕ((u,a,1))|BC(u) = ϕH((a,1)) < ϕH((b,2)) = ϕ((v,b,2))|BC(v) (10)
ϕ((u,a,1))|Bℓ = 0 ≤ ∞ = ϕ((v,b,2))|Bℓ (11)
Eq.(10) follows from Property P2, and Eq.(11) follows from the settings of the other blocks Bℓ. This proves the claim.

In this section, we prove the main theorem. We shall restate the main theorem here.
Theorem 1.1. (restated)
For any undirected graphs G, H and J and a height-two poset P⃗,
im((G ∨ H) × J) ≤ im(G × J) + im(H × J) (1)
sim((G ∨ H) × J) ≤ sim(G × J) + sim(H × J) (2)
dim((G · H) ×e P⃗) ≤ dim(G ×e P⃗) + χ(G) dim(H ×e P⃗) + dim(P⃗) (3)
where χ(G) is the chromatic number of G.

Proving Eq.(1). In this section, we prove that im((G ∨ H) × J) ≤ im(G × J) + im(H × J). We first observe that we can decompose the edges of G ∨ H into two sets:
E(G ∨ H) = {(u,a)(v,b) | uv ∈ E(G) or ab ∈ E(H)} = E1 ∪ E2 (12)
where E1 = {(u,a)(v,b) ∈ E(G ∨ H) : uv ∈ E(G)} and E2 = {(u,a)(v,b) ∈ E(G ∨ H) : ab ∈ E(H)}. For any i ∈ {1,2}, define a subgraph Gi of G ∨ H to be Gi = (V(G ∨ H), Ei). Note that E((G ∨ H) × J) ⊆ E(G1 × J) ∪ E(G2 × J).

Claim 4.1. im((G ∨ H) × J) ≤ im(G1 × J) + im(G2 × J).
Let M be any induced matching in the graph (G ∨ H) × J. Let M1 = M ∩ E(G1 × J) and M2 = M ∩ E(G2 × J). By Eq.(12), M = M1 ∪ M2. Observe that M1 and M2 are induced matchings of G1 × J and G2 × J, respectively, since they are induced matchings of (G ∨ H) × J, which is a supergraph of G1 × J and G2 × J. The claim follows.

Now we try to write G1 and G2 as products of two other graphs. For any set X of vertices, we denote by KX a complete graph whose vertex set is V(X).

Lemma 4.2. im(G1 × J) = im((KH ×e G) × J) and im(G2 × J) = im((KG ×e H) × J).

Proof. The lemma simply follows from the fact that KG ×e H is exactly the same as G2 and KH ×e G is isomorphic to G1. To see this, we simply observe that E(KG ×e H) = {(u,a)(v,b) : ab ∈ E(H) ∧ u,v ∈ V(G)}, which is exactly the same as E2, and E(KH ×e G) = {(a,u)(b,v) : uv ∈ E(G) ∧ a,b ∈ V(H)}, which is almost the same as E1 except that vertices are in V(H) × V(G) instead of V(G) × V(H).

The simple lemma above allows us to rewrite the inequality in Claim 4.1 as
im((G ∨ H) × J) ≤ im((KH ×e G) × J) + im((KG ×e H) × J) (13)
Now we need the following associativity property.

Lemma 4.3.
For any graphs X, Y and Z, (X ×e Y) × Z = X ×e (Y × Z).

Proof. The following equalities simply follow from the definitions of × and ×e (cf. Definition 2.1).
E((X ×e Y) × Z) = {(x,y,z)(x′,y′,z′) : (x,y)(x′,y′) ∈ E(X ×e Y) and zz′ ∈ E(Z)}
= {(x,y,z)(x′,y′,z′) : (xx′ ∈ E(X) or x = x′) and yy′ ∈ E(Y) and zz′ ∈ E(Z)}
= E(X ×e (Y × Z))

This allows us to rewrite Eq.(13) as
im((G ∨ H) × J) ≤ im(KH ×e (G × J)) + im(KG ×e (H × J)) (14)
We finish our proof with the following lemma, which says that the product of any graph X with a complete graph KL does not increase the induced matching number of X. We note that in fact equality can be achieved, but since this is not important to the proof, we only show the upper bound.

Lemma 4.4.
For any graph X and any set L of vertices, im(KL ×e X) ≤ im(X).

Proof. Let M be any induced matching in KL ×e X. We construct a set of edges MX ⊆ E(X) by projection: for each edge (i,x)(j,y) ∈ M, we add the edge xy to MX. To prove the lemma, it suffices to show that MX is an induced matching in the graph X.

Assume for a contradiction that MX is not an induced matching, i.e., there exist xy, x′y′ ∈ MX such that either (1) x = x′ (making MX not a matching) or (2) xx′ ∈ E(X) (making MX not an induced matching). (We note that (1) also includes the case where multiple edges are created, i.e., x = x′ and y = y′.) We shall use the following simple facts.
xy ∈ MX ⟹ ∃ i,j : (i,x)(j,y) ∈ M (15)
xy ∈ E(X) ⟹ ∀ i,j : (i,x)(j,y) ∈ E(KL ×e X) (16)
By Eq.(15), the assumption that xy, x′y′ ∈ MX implies that edges (i,x)(j,y) and (i′,x′)(j′,y′) belong to M for some i, j, i′, j′ ∈ V(KL).

Case 1: If x = x′, then xy′ is also in MX ⊆ E(X), and thus Eq.(16) implies that (i,x)(j′,y′) ∈ E(KL ×e X). This contradicts the fact that M is an induced matching.

Case 2: If xx′ ∈ E(X), then Eq.(16) implies that (i,x)(i′,x′) ∈ E(KL ×e X), which again contradicts the fact that M is an induced matching.

Using Lemma 4.4, we can rewrite Eq.(14) as im((G ∨ H) × J) ≤ im(G × J) + im(H × J), as desired.

Proving Eq.(2). In this section, we prove the subadditivity property of the semi-induced matching number. The proof closely follows the case of the induced matching number. We prove the following subadditivity theorem for semi-induced matching, which is equivalent to Eq.(2).
Theorem 4.5.
For any graphs G and H and any total order σ on V((G ∨ H) × J), there exist bijections σ1 on V(G × J) and σ2 on V(H × J) such that
simσ((G ∨ H) × J) ≤ simσ1(G × J) + simσ2(H × J).

The rest of this subsection is devoted to proving the above theorem. We first decompose the edge set E(G ∨ H) into E1 ∪ E2, where E1 = {(u,a)(v,b) : uv ∈ E(G) ∧ a,b ∈ V(H)} and E2 = {(u,a)(v,b) : ab ∈ E(H) ∧ u,v ∈ V(G)}. For any i ∈ {1,2}, define a subgraph Gi of G ∨ H to be Gi = (V(G ∨ H), Ei).

Claim 4.6.
For any bijection σ : V((G ∨ H) × J) → [|V((G ∨ H) × J)|],
simσ((G ∨ H) × J) ≤ simσ(G1 × J) + simσ(G2 × J).

Proof. Let M be any σ-semi-induced matching in (G ∨ H) × J. Let M1 = M ∩ E(G1 × J) and M2 = M ∩ E(G2 × J). It is clear that M = M1 ∪ M2, and M1 and M2 are σ-semi-induced matchings.

Next, we write G1 and G2 as G1 ≅ KH ×e G and G2 = KG ×e H as in Lemma 4.2. So we have that simσ(G1 × J) = simσ′((KH ×e G) × J) for some σ′, and that simσ(G2 × J) = simσ((KG ×e H) × J) (we can use σ in the second equality since G2 × J = (KG ×e H) × J, but we need a different mapping σ′ in the first equality since G1 × J is only isomorphic to (KH ×e G) × J). Then, by applying the associativity in Lemma 4.3, we have that
simσ((G ∨ H) × J) ≤ simσ′(KH ×e (G × J)) + simσ(KG ×e (H × J)). (17)
The following lemma will finish the proof.

Lemma 4.7.
For any graph X, set L, and total order τ on V(KL ×e X), there exists a total order τ′ on V(X) such that simτ(KL ×e X) ≤ simτ′(X).

Proof. Let M be any τ-semi-induced matching in KL ×e X. We construct a set of edges MX ⊆ E(X) by adding to MX the edge xy for each edge (i,x)(j,y) ∈ M. To prove the lemma, it suffices to define a total order τ′ on the vertices V(X) such that MX is a τ′-semi-induced matching in the graph X. We will use Eq.(15) and Eq.(16); we recall them here:
xy ∈ MX ⟹ ∃ i,j : (i,x)(j,y) ∈ M (15)
xy ∈ E(X) ⟹ ∀ i,j : (i,x)(j,y) ∈ E(KL ×e X) (16)
Before defining τ′, we first argue that MX is a matching in X. Suppose otherwise, i.e., xy, xy′ ∈ MX for some x, y and y′. Then, from Eq.(15), we have edges (i,x)(j,y) and (i′,x)(j′,y′) in M for some i, j, i′, j′ ∈ V(KL). This means that xy, xy′ ∈ E(X), and therefore, from Eq.(16), we must also have edges (i,x)(j′,y′) and (i′,x)(j,y). This contradicts the fact that M is a τ-semi-induced matching; i.e., no matter how we define τ, this case cannot happen.

Now we are ready to define the total order τ′ on the vertices V(X). Since each vertex of X appears at most once in MX, for each vertex x ∈ V(X) that appears in MX, we define τ′(x) = τ(i,x), where (i,x) is the corresponding vertex that appears in M. Now it is easy to check that MX is a τ′-semi-induced matching.

From this lemma, we conclude that there exists a total order σ1 on V(G × J) such that simσ′(KH ×e (G × J)) ≤ simσ1(G × J), and there exists a total order σ2 on V(H × J) such that simσ(KG ×e (H × J)) ≤ simσ2(H × J). Theorem 4.5 then follows by combining these inequalities with Eq.(17).

Proving Eq.(3).
In this section, we show that dim((G · H) ×e P⃗) ≤ dim(G ×e P⃗) + χ(G) dim(H ×e P⃗) + dim(P⃗). Definitions related to posets and dimension can be found in Section 2. We first note that the extended tensor product of an undirected graph G and a height-two poset P⃗ is still a poset (in fact, it is a height-two poset). So the quantity dim(G ×e P⃗) is well-defined. This fact is formalized and proved in the following lemma.

Lemma 4.8.
For any graph A and height-two poset P⃗, A ×e P⃗ is a height-two poset.

Proof. Consider any vertex (a,p) ∈ V(A) × V(P⃗). Observe that if p is a minimal element in P⃗, then (a,p) is also a minimal element in A ×e P⃗; otherwise, if there were a vertex (a′,p′) ∈ V(A) × V(P⃗) such that (a′,p′)(a,p) ∈ E(A ×e P⃗), then p′p ∈ E(P⃗), which contradicts the fact that p is minimal in P⃗. A similar argument shows that if p is a maximal element in P⃗, then (a,p) is also maximal in A ×e P⃗. Since every vertex (a,p) ∈ V(A) × V(P⃗) is either a minimal or a maximal element (or both) in A ×e P⃗, the graph product A ×e P⃗ is a height-two poset.

Our proof of Eq.(3) has two steps. In the first step (Lemma 4.9), we write the poset (G · H) ×e P⃗ as the intersection of two other posets P⃗1 and P⃗2, where dim((G · H) ×e P⃗) ≤ dim(P⃗1) + dim(P⃗2). In the second step, we bound the dimensions of P⃗1 and P⃗2.

Step 1: Decomposition of poset.
This step is summarized in the following lemma.
Lemma 4.9.
Consider any undirected graph A and a height-two poset P⃗. Denote by U and V the sets of minimal and maximal elements of P⃗, respectively. (Since P⃗ is of height two, U ∪ V = V(P⃗).) Then E(A ×e P⃗) can be written as
E(A ×e P⃗) = E(A ×e K⃗U,V) ∩ E(KA ×e P⃗)
where K⃗U,V is the complete height-two poset with U and V as the sets of minimal and maximal elements, respectively, i.e., E(K⃗U,V) = {uv : u ∈ U, v ∈ V}.

Proof. The lemma follows from the simple logical implications shown in Fig. 8. The second equality is because pp′ ∈ E(P⃗) implies that p ∈ U and p′ ∈ V, and because the statement “a ≠ a′ or a = a′” is always true.

E(A ×e P⃗) = {(a,p)(a′,p′) : (aa′ ∈ E(A) or a = a′) and pp′ ∈ E(P⃗)}
= {(a,p)(a′,p′) : (aa′ ∈ E(A) or a = a′) and p ∈ U and p′ ∈ V and (a ≠ a′ or a = a′) and pp′ ∈ E(P⃗)}
= {(a,p)(a′,p′) : (aa′ ∈ E(A) or a = a′) and p ∈ U and p′ ∈ V} ∩ {(a,p)(a′,p′) : (a ≠ a′ or a = a′) and pp′ ∈ E(P⃗)}
= E(A ×e K⃗U,V) ∩ E(KA ×e P⃗).

Figure 8: Decomposition of Poset.
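The edge-set identity of Lemma 4.9 can be sanity-checked by brute force on a tiny instance. Below is a sketch; the particular graph A (a three-vertex path) and height-two poset P⃗ are hypothetical examples of ours, not from the paper.

```python
from itertools import product

def ext_tensor(VA, EA, VP, EP):
    """Extended tensor product A x_e P (P directed): ((a,p),(a',p')) is an
    edge iff (aa' in E(A) or a = a') and (p,p') in E(P)."""
    return {((a, p), (a2, p2))
            for (a, a2) in product(VA, repeat=2)
            for (p, p2) in EP
            if (frozenset((a, a2)) in EA) or a == a2}

# A small undirected graph A (path u-v-w) and a height-two poset P
# with minimal elements U = {p, q} and maximal elements V = {r, s}.
VA = ["u", "v", "w"]
EA = {frozenset(e) for e in [("u", "v"), ("v", "w")]}
VP, EP = ["p", "q", "r", "s"], {("p", "r"), ("q", "r"), ("q", "s")}
U, V = {"p", "q"}, {"r", "s"}

K_UV = {(x, y) for x in U for y in V}                        # complete height-two poset
K_A = {frozenset((a, b)) for a in VA for b in VA if a != b}  # complete graph on V(A)

lhs = ext_tensor(VA, EA, VP, EP)
rhs = ext_tensor(VA, EA, VP, K_UV) & ext_tensor(VA, K_A, VP, EP)
print(lhs == rhs)  # True
```

The check works for exactly the reason given in the proof: E(P⃗) ⊆ E(K⃗U,V), and the "complete graph" factor makes its adjacency condition vacuous.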
Let U and V be as in the above lemma. This allows us to write
E((G · H) ×e P⃗) = E((G · H) ×e K⃗U,V) ∩ E(KG·H ×e P⃗). (18)
Note that both (G · H) ×e K⃗U,V and KG·H ×e P⃗ are height-two posets (by Lemma 4.8). Moreover, they have the same vertex set, which is V(G) × V(H) × V(P⃗). We next apply the following lemma, which relates graph intersection to poset dimension.

Lemma 4.10.
Let P⃗1 and P⃗2 be any height-two posets on the same vertex set V′. Let P⃗3 = (V′, E(P⃗1) ∩ E(P⃗2)). Then P⃗3 is a height-two poset. Moreover, dim(P⃗3) ≤ dim(P⃗1) + dim(P⃗2).

Proof. The proof that P⃗3 is a height-two poset is essentially the same as the proof of Lemma 4.8. Consider any vertex p ∈ V′. Observe that if p is a minimal element in P⃗1, then it is also a minimal element in P⃗3; otherwise, if there were a vertex p′ ∈ V′ such that p′p ∈ E(P⃗3), then p′p ∈ E(P⃗1), which contradicts the fact that p is minimal in P⃗1. A similar argument shows that if p is a maximal element in P⃗1, then it is also maximal in P⃗3. Since every vertex in P⃗3 is either minimal or maximal (or both), P⃗3 is a height-two poset.

We now argue that dim(P⃗3) ≤ dim(P⃗1) + dim(P⃗2). Let di = dim(P⃗i) and let ϕi : V′ → R^di be a mapping that realizes the poset P⃗i. We define ϕ′ : V′ → R^(d1+d2) as the concatenation of ϕ1 and ϕ2; that is, for any p ∈ V′, we let ϕ′(p) = (ϕ1(p), ϕ2(p)). We finish the proof by showing that ϕ′ realizes P⃗3, using the following simple logical implications.
pp′ ∈ E(P⃗1) ∩ E(P⃗2) ⟺ pp′ ∈ E(P⃗1) and pp′ ∈ E(P⃗2) ⟺ ϕ1(p) < ϕ1(p′) and ϕ2(p) < ϕ2(p′) ⟺ ϕ′(p) < ϕ′(p′).

Using the above lemma and Eq.(18), we get
dim((G · H) ×e P⃗) ≤ dim((G · H) ×e K⃗U,V) + dim(KG·H ×e P⃗). (19)

Step 2: Bounding the dimensions.
Our next step is to bound the dimension numbers of (G · H) ×e K⃗U,V and KG·H ×e P⃗ separately. The nice thing is that these graph products are no longer in the general form – one of the graphs in each product is “complete”. This allows us to bound the dimension numbers of these graphs as in the next two lemmas.

Lemma 4.11. For any set L of vertices and any height-two poset P⃗, dim(KL ×e P⃗) ≤ dim(P⃗).

Proof. Let d = dim(P⃗) and let ϕ : V(P⃗) → R^d be a mapping that realizes P⃗. We define ϕ′ : V(KL) × V(P⃗) → R^d as ϕ′(v,p) = ϕ(p). We complete the proof with the fact that ϕ′ realizes KL ×e P⃗, proved as follows.
(v,p)(v′,p′) ∈ E(KL ×e P⃗) ⟺ (vv′ ∈ E(KL) or v = v′) and pp′ ∈ E(P⃗) ⟺ pp′ ∈ E(P⃗) ⟺ ϕ(p) < ϕ(p′) ⟺ ϕ′(v,p) < ϕ′(v′,p′).

We note that equality can be attained in Lemma 4.11, but this is not important to us. The same holds for the next lemma.
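The realizer in Lemma 4.11 simply ignores the KL coordinate. A brute-force check on a tiny instance (the poset and its realizer below are our own hypothetical example):

```python
from itertools import product

def dominates(p, q):
    """Strict componentwise dominance: p < q in every coordinate."""
    return all(a < b for a, b in zip(p, q))

# A height-two poset P: minimal elements p, q below a single maximal r,
# realized in R^2 (a hypothetical example).
VP = ["p", "q", "r"]
EP = {("p", "r"), ("q", "r")}
phi = {"p": (0, 1), "q": (1, 0), "r": (2, 2)}

# K_L x_e P for a complete graph K_L: ((i,p),(j,p')) is an edge iff
# (ij in E(K_L) or i = j) and (p,p') in E(P) -- i.e., iff (p,p') in E(P).
L = [1, 2, 3]
verts = [(i, p) for i in L for p in VP]
edges = {(x, y) for x in verts for y in verts if (x[1], y[1]) in EP}

# Lemma 4.11's realizer: drop the K_L coordinate.
phi2 = {(i, p): phi[p] for (i, p) in verts}
ok = all((((x, y) in edges) == dominates(phi2[x], phi2[y]))
         for x, y in product(verts, repeat=2) if x != y)
print(ok)  # True: phi2 realizes K_L x_e P with the same dimension as P
```

Note that the dimension of the product does not grow with |L|: the nine-vertex poset above is still realized in R^2.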
Lemma 4.12.
For any undirected graph A and sets U and V of vertices, dim(A ×e K⃗U,V) ≤ dim(A ×e K⃗2), where K⃗U,V is as in Lemma 4.9.

Proof. Recall that the sets of minimal and maximal elements of K⃗U,V are U and V, respectively. Let the minimal and maximal elements of K⃗2 be u and v, respectively. Let d = dim(A ×e K⃗2) and let ϕ : V(A) × V(K⃗2) → R^d be a mapping that realizes A ×e K⃗2. Define a function r : V(K⃗U,V) → V(K⃗2) as r(i) = u for all i ∈ U and r(i) = v otherwise. Now define a mapping ϕ′ : V(A) × V(K⃗U,V) → R^d as ϕ′(a,i) = ϕ(a, r(i)). We finish the lemma by showing that ϕ′ realizes the poset A ×e K⃗U,V. Observe that for any i and i′ in V(K⃗U,V), we have ii′ ∈ E(K⃗U,V) ⟺ r(i)r(i′) ∈ E(K⃗2). Thus,
(a,i)(a′,i′) ∈ E(A ×e K⃗U,V) ⟺ (aa′ ∈ E(A) or a = a′) and ii′ ∈ E(K⃗U,V)
⟺ (aa′ ∈ E(A) or a = a′) and r(i)r(i′) ∈ E(K⃗2)
⟺ (a, r(i))(a′, r(i′)) ∈ E(A ×e K⃗2)
⟺ ϕ(a, r(i)) < ϕ(a′, r(i′))
⟺ ϕ′(a,i) < ϕ′(a′,i′)

Applying Lemmas 4.11 and 4.12 to Eq.(19), we get
dim((G · H) ×e P⃗) ≤ dim((G · H) ×e K⃗2) + dim(P⃗).
Finally, we apply Eq.(6), proved in Section 3.3, to get the desired inequality:
dim((G · H) ×e P⃗) ≤ dim(G ×e P⃗) + χ(G) dim(H ×e P⃗) + dim(P⃗).

Hardness from Graph Products
In this section, we show applications of the subadditivity inequalities presented in Theorem 1.1 in proving the tight hardness of approximating the induced matching number, the semi-induced matching number, and the poset dimension number. Moreover, we prove a hardness of d^(1/2−ε) for approximating the induced matching number of d-regular bipartite graphs.

In this section, we prove that the induced and semi-induced matching problems in bipartite graphs are hard to approximate to within a factor of n^(1−ε). Recall that we use B[G] = G × K2 and Be[G] = G ×e K2.

We have already sketched the proof of the hardness of the induced matching problem in Section 1.1 and will give more detail here. We can actually say something stronger than just the hardness of these problems. In fact, it is hard to distinguish between the case where an input graph G has large im(G) and small sim(G), as stated in Theorem 5.1 below.

Theorem 5.1.
Given any bipartite graph G and ε > 0, unless NP ⊆ ZPP, no polynomial-time algorithm can distinguish between the following two cases:
• (Yes-Instance) im(G) ≥ |V(G)|^(1−ε).
• (No-Instance) sim(G) ≤ |V(G)|^ε.

Note that sim(G) ≥ im(G); thus, Theorem 5.1 implies that no polynomial-time algorithm can distinguish between the cases where im(G) ≥ |V(G)|^(1−ε) and im(G) ≤ |V(G)|^ε, as well as the cases where sim(G) ≥ |V(G)|^(1−ε) and sim(G) ≤ |V(G)|^ε. Theorem 5.1 thus implies the hardness of both the induced and semi-induced matching problems in bipartite graphs.

Proof of Theorem 5.1.
Our proof is based on a reduction from the maximum independent setproblem. As discussed earlier, we start from the result of [28] instead of [33], to keep the parameterssimple.
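The amplification behind this reduction rests on the folklore multiplicativity α(G ∨ H) = α(G)α(H) of the disjunctive product (Lemma 5.4 below). A brute-force sanity check on a tiny instance — a toy example of ours, not part of the actual reduction:

```python
from itertools import combinations

def alpha(vertices, edges):
    """Independence number by brute force (fine only for tiny graphs)."""
    for size in range(len(vertices), 0, -1):
        for S in combinations(vertices, size):
            if not any(frozenset((u, v)) in edges for u, v in combinations(S, 2)):
                return size
    return 0

def disjunctive(VG, EG, VH, EH):
    """Disjunctive product G v H: (u,a)(v,b) is an edge iff uv in E(G) or ab in E(H)."""
    V = [(u, a) for u in VG for a in VH]
    E = {frozenset((x, y)) for x, y in combinations(V, 2)
         if frozenset((x[0], y[0])) in EG or frozenset((x[1], y[1])) in EH}
    return V, E

# G = H = path on three vertices, alpha = 2.
VG = ["u", "v", "w"]
EG = {frozenset(e) for e in [("u", "v"), ("v", "w")]}
V, E = disjunctive(VG, EG, VG, EG)
print(alpha(V, E), alpha(VG, EG) ** 2)  # both are 4: alpha is multiplicative
```

Taking the k-fold power G^k = G ∨ ... ∨ G therefore raises the gap between the Yes- and No-Instances of Theorem 5.2 to the k-th power, which is exactly how the reduction below uses it.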
Theorem 5.2 ([28]). Let ε > 0 be any constant. Given a graph G, unless NP = ZPP, no polynomial-time algorithm can distinguish between the following two cases:
• (Yes-Instance) α(G) ≥ |V(G)|^(1−ε).
• (No-Instance) α(G) ≤ |V(G)|^ε.

We start from a graph G given by Theorem 5.2 and return as an output the graph Be[G^k], where k = 1/ε and G^k = G ∨ G ∨ ... ∨ G (there are k copies of G). By construction, the number of vertices in Be[G^k] is n = 2|V(G)|^k.

If G is a Yes-Instance, then we know that α(G) ≥ |V(G)|^(1−ε). We will use the following lemma, essentially due to [17]; since it is not explicitly stated in [17], we shall provide the proof for completeness.

Lemma 5.3 (Implicit in [17]). For any graph G, α(G) ≤ im(Be[G]).

Proof.
First, we show the lower bound on im(Be[G]). Let I ⊆ V(G) be an independent set in G. The set of edges M = {(u,1)(u,2) : u ∈ I} corresponding to I is an induced matching in Be[G]: if there were u, u′ ∈ I with u ≠ u′ such that (u,1)(u′,2) ∈ E(Be[G]), then uu′ ∈ E(G), contradicting the fact that I is an independent set. Thus, Be[G] has an induced matching of size at least α(G).

We recall the following standard fact in graph theory.

Lemma 5.4 (Folklore; see e.g. [47]). For any graphs G and H, α(G ∨ H) = α(G)α(H). In particular, α(G^k) = (α(G))^k.

It follows from the above lemmas that im(Be[G^k]) ≥ α(G^k) ≥ (α(G))^k ≥ |V(G)|^(k(1−ε)) = Ω(n^(1−ε)).

Now, if G is a No-Instance, we can invoke the next lemma, implicitly used in 5.3.
Lemma 5.5.
For any graph G, we have sim(Be[G]) ≤ sim(B[G]) + α(G). Proof.
Let σ be any total order on V(Be[G]). Consider any σ-semi-induced matching M in Be[G]. We may write M = M1 ∪ M2, where M1 consists of the edges that are also present in B[G] and M2 = M \ M1. It can be seen that |M1| ≤ simσ(B[G]) and |M2| ≤ α(G). The former inequality is because E(B[G]) ⊆ E(Be[G]). The latter inequality is because we can define an independent set in G by choosing the vertices corresponding to edges in M2 (since every edge in M2 is of the form (v,1)(v,2) for some v ∈ V(G)). Since this is true for all σ, the lemma follows.

Corollary 5.6 (Immediate from Theorem 1.1). For any integer k and graph G, sim(B[G^k]) ≤ k · sim(B[G]).

By applying Lemma 5.5, we have that sim(Be[G^k]) ≤ sim(B[G^k]) + α(G)^k, and by invoking Corollary 5.6, we have sim(Be[G^k]) ≤ k · sim(B[G]) + α(G)^k. Then we plug in k = 1/ε and α(G) ≤ |V(G)|^ε and conclude that sim(Be[G^k]) ≤ O(|V(G)|) ≤ n^ε when G is a No-Instance. This completes the proof of Theorem 5.1.

To get a better hardness result, we start the reduction from Theorem 1 in [33], using the value k = log^γ |V(G)| instead of 1/ε (where γ is as in Theorem 1 in [33]). This gives a hardness of n/2^(log^(3/4+γ) n) under the assumption that NP ⊄ BPTIME(n^(poly log n)).

d-Regular Graphs. Here we show the hardness result for the induced matching problem on d-regular bipartite graphs. For this, we need an instance G of the maximum independent set problem such that G is d-regular. The following hardness result follows from Trevisan’s construction in [46] on the hardness of the maximum independent set problem on bounded-degree graphs. As it is not guaranteed that an instance G obtained from Trevisan’s construction is regular, we have to slightly modify the construction in the same way as in [13] and [5].

Theorem 5.7 ([46], modified from Theorem 4 in [13]). Let λ : N → N be any function. Assuming that NP ⊄ ZPTIME(n^(O(λ(n)))), there is no polynomial-time algorithm that can solve the following problem. For any constant ε > 0 and any integer q, given a graph G of size q^(O(λ(q))) such that all vertices have degree Δ = 2^(O(λ(q))), the goal is to distinguish between the following two cases:
• (Yes-Instance) α(G) ≥ |V(G)|/Δ^ε.
• (No-Instance) α(G) ≤ |V(G)|/Δ^(1−ε).
The function λ allows us to specify the degree of the vertices we want. For instance, if we use λ(q) = c for some constant c, then we get the hardness of the constant-degree maximum independent set problem, and the hardness assumption is NP ⊈ ZPP. But if we choose λ(q) = O(log log q), then we get hardness for the logarithmic-degree maximum independent set problem, with the hardness assumption NP ⊈ ZPTIME(n^{O(log log n)}). We will need the following lemma, which will also be used later to prove the hardness of pricing problems.

Lemma 5.8.
Let ∆: N → N. Assuming that NP ⊈ ZPTIME(n^{O(log ∆(n))}), there is no polynomial-time algorithm that can solve the following problem: For any constant ε > 0 and integer q, given a ∆(q)-regular graph G on q^{O(log ∆(q))} vertices and an empty graph H on ∆(q) vertices, the goal is to distinguish between the following cases:
• (Yes-Instance) im(B_e[G ∨ H]) ≥ |V(G)|(∆(q))^{1−ε}.
• (No-Instance) sim(B_e[G ∨ H]) ≤ |V(G)|(∆(q))^ε.
Note that we use ∆(q) = c for some constant c in the proof of Theorem 5.9, while we choose ∆(q) = polylog q when proving the hardness of pricing problems in the next section. From the lemma, the hardness of the induced matching problem on d-regular graphs follows immediately.

Proof of Lemma 5.8.
Let ε > 0 and let G be a ∆(q)-regular graph obtained from Theorem 5.7 (choosing λ(q) = O(log ∆(q)) so that we get a graph G of degree ∆(q) and |V(G)| = q^{O(log ∆(q))}). We output the graph B_e[G ∨ H], where H = K̄_{∆(q)} is an empty graph on ∆(q) vertices. This finishes the construction. Notice that the number of vertices in B_e[G ∨ H] is n = 2|V(G)||V(H)| = 2|V(G)|∆(q). In the Yes-Instance, by Lemmas 5.3 and 5.4, we have that im(B_e[G ∨ H]) ≥ α(G ∨ H) ≥ α(G)α(H) ≥ |V(G)|∆(q)^{1−ε}, because α(G) ≥ |V(G)|/∆(q)^ε in the Yes-Instance.

In the No-Instance, we have sim(B_e[G ∨ H]) ≤ α(G ∨ H) + sim(B[G ∨ H]) (by Lemma 5.5). The first term is at most α(G)α(H) ≤ |V(G)|∆(q)^ε (by Lemma 5.4). The second term is at most, by Eq. (2) in Theorem 1.1, sim(B[G]) + sim(B[H]) ≤ |V(G)| + ∆(q) ≤ 2|V(G)|, for our choice of ∆(q). Therefore, in the No-Instance, the value of the solution is at most O(|V(G)|∆(q)^ε). Since λ(q) = O(log ∆(q)), we get the complexity assumption NP ⊈ ZPTIME(n^{O(log ∆(n))}).

Theorem 5.9.
Let d ≤ polylog n be any sufficiently large number. For any constant ε > 0, unless NP ⊆ ZPTIME(n^{polylog n}), it is hard to approximate the induced matching problem on d-regular graphs to within a factor of d^{1/2−ε}.

Proof. From Lemma 5.8, observe that the degree of each vertex in B_e[G ∨ H] is d = ∆² + 1: each vertex (v,a,1) ∈ B_e[G ∨ H] is connected to (v,a,2) and to the vertices (u,b,2) for all u ∈ V(G) with uv ∈ E(G) and all b ∈ V(H). The gap between the Yes-Instance and the No-Instance is ∆^{1−2ε} ≥ d^{1/2−O(ε)}. We use ∆ ≤ O(polylog n), so the running time of the reduction is n^{O(log log n)}.

We now prove the n^{1−ε}-hardness of approximating poset dimension. Note that here we use B[G] = G × K⃗_2 and B_e[G] = G ×_e K⃗_2. We denote by G^k = G · G · ... · G the k-fold lexicographic product, where G appears k times.

Construction.
We will need the following hardness result for the graph coloring problem, due to Feige and Kilian [19]. (In fact, there is a stronger hardness result by Khot and Ponnuswami [33], but we use the result of Feige and Kilian to keep the presentation simple.)

Theorem 5.10 ([19]). Let ε > 0 be any constant. Given a graph G, unless NP = ZPP, no polynomial-time algorithm can distinguish between the following two cases:
• (Yes-Instance) χ(G) ≤ |V(G)|^ε.
• (No-Instance) χ(G) ≥ |V(G)|^{1−ε}.

Our reduction starts from the instance G given by Theorem 5.10. Then we output B[G^k], where k = 1/ε. The construction size is n = 2|V(G)|^k.

Analysis.
We need the following lemma, which is similar in spirit to Lemma 5.3. Since we state the lemma in our language, we provide the proof for completeness.
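Since the analysis below works with the k-fold lexicographic power G^k, here is a minimal sketch of a single lexicographic product, together with the product coloring behind the upper bound χ(G·H) ≤ χ(G)χ(H) of Lemma 5.13 (the code and function names are ours):

```python
def lex_product(V1, E1, V2, E2):
    """G1 · G2: (u1,u2) ~ (v1,v2) iff u1 ~ v1 in G1, or u1 == v1 and u2 ~ v2 in G2."""
    V = [(u1, u2) for u1 in V1 for u2 in V2]
    E = {(a, b) for a in V for b in V
         if (a[0], b[0]) in E1 or (a[0] == b[0] and (a[1], b[1]) in E2)}
    return V, E

# A path on 3 vertices (chi = 2), with edges stored symmetrically.
P, PE = [0, 1, 2], {(0, 1), (1, 0), (1, 2), (2, 1)}
c = {0: 0, 1: 1, 2: 0}                                 # a proper 2-coloring of the path
V, E = lex_product(P, PE, P, PE)
color = {(u1, u2): (c[u1], c[u2]) for (u1, u2) in V}   # the product coloring
assert all(color[a] != color[b] for (a, b) in E)       # proper, using chi(G)*chi(H) colors
```

Adjacent product vertices either differ in their first coordinates (then the first color differs) or share them and are adjacent in the second factor (then the second color differs), which is exactly why the product coloring is proper.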
Lemma 5.11 (Implicit in [29]). For any graph G, χ(G) ≤ dim(B[G]) ≤ dim(B_e[G]) + χ(G).

Proof. Recall that B[G] is almost identical to B_e[G], except that B[G] has no edges of the form ((u,1),(u,2)) for u ∈ V(G). We say that a mapping ψ: V(B_e[G]) → R hits a vertex u ∈ V(G) if ψ((u,1)) > ψ((u,2)) while ψ((v,1)) ≤ ψ((w,2)) for every edge vw ∈ E(G). In other words, ψ is a linear order that "reverses" the direction of the edge ((u,1),(u,2)) of B_e[G].

The following claim was proved by Hegde and Jain in [29]. We restate it here in our terminology and provide the proof for completeness.

Claim 5.12 ([29]). Let X ⊆ V(G). There is a mapping ψ that hits all vertices in X if and only if X is an independent set in G.

Proof. One direction is easy to see. Suppose X ⊆ V(G) contains u, v such that uv ∈ E(G). A function ψ hitting {u, v} would give ψ((u,1)) > ψ((u,2)) ≥ ψ((v,1)) > ψ((v,2)) ≥ ψ((u,1)), a contradiction. Conversely, suppose that X is an independent set. We define ψ by processing the vertices of X in an arbitrary order: when a vertex u ∈ X is considered, we assign ψ((u,2)) and then ψ((u,1)) the next two available values, so that ψ((u,1)) > ψ((u,2)); the vertices u′ outside X, whose values of ψ are still undefined, then receive values placing (u′,1) below and (u′,2) above all assigned values. Now notice that the only way to violate the hitting property of ψ is to have ψ((u,1)) > ψ((v,2)) for some edge uv ∈ E(G) with u, v ∈ X, which is impossible because X is an independent set.

Now we prove the two inequalities of the lemma. The lower bound follows from Claim 5.12. Let ˜φ: V(B[G]) → R^d be a mapping that realizes B[G]. For each coordinate q, define ψ_q((u,1)) = ˜φ((u,1))[q] and ψ_q((u,2)) = ˜φ((u,2))[q]; i.e., ψ_q is the projection of ˜φ to the q-th coordinate. Observe that, for each vertex u ∈ V(G), there must be some ψ_q that hits u, since (u,1) and (u,2) are incomparable in B[G]. We argue that there is a valid coloring of G using at most d colors. To see this, construct a coloring as follows: to each vertex u ∈ V(G), assign the color q, where q is the first coordinate such that ψ_q hits u. Claim 5.12 guarantees that each color class is an independent set. Thus, the coloring is valid and χ(G) ≤ d.

To prove the upper bound, let φ be a function that realizes the poset B_e[G]. Then, for any distinct vertices u, v of G, we have φ((u,1)) ≤ φ((v,2)) componentwise if and only if uv ∈ E(G).
We need to extend φ into ˜φ such that (i) each vertex u ∈ V(G) is hit by some coordinate of ˜φ, and (ii) for any two vertices of the form (u,i) and (v,i), where u ≠ v and i ∈ {1,2}, there are coordinates q, q′ such that ˜φ((u,i))[q] < ˜φ((v,i))[q] and ˜φ((u,i))[q′] > ˜φ((v,i))[q′]. We only need two more coordinates to satisfy (ii). To deal with (i), it suffices to find a collection of functions ψ_j such that each vertex u ∈ V(G) is hit by some ψ_j; then ˜φ can be defined by concatenating φ with all the mappings ψ_j. Each such ψ_j can be obtained from Claim 5.12 by defining ψ_j, for each color class j of an optimal coloring of G, to be a map that hits all vertices in class j.

We will also need the following lemma, which bounds the chromatic number of the k-fold lexicographic product of a graph.

Lemma 5.13 ([37, 22, 34] and [42, Cor. 3.4.5]). For any graph G and any number k, (χ(G)/log|V(G)|)^k ≤ χ(G^k) ≤ (χ(G))^k.

We are now ready to analyze the gap between the
Yes-Instance and
No-Instance. Suppose that G is a No-Instance. Then χ(G) ≥ |V(G)|^{1−ε}. By Lemmas 5.11 and 5.13, and for sufficiently large |V(G)| (so that log|V(G)| ≤ |V(G)|^ε), we have that
dim(B[G^k]) ≥ χ(G^k) ≥ (χ(G)/log|V(G)|)^k ≥ (|V(G)|^{1−ε}/log|V(G)|)^k ≥ n^{1−ε}/(2 log|V(G)|)^k ≥ n^{1−O(ε)}.
For the
Yes-Instance, we have that dim(B[G^k]) ≤ dim(B_e[G^k]) + χ(G^k). By Lemma 5.13, the term χ(G^k) can be upper bounded by χ(G)^k ≤ |V(G)|^{εk} = |V(G)| ≤ n^ε, because χ(G) ≤ |V(G)|^ε in the Yes-Instance. We use the following claim to bound the term dim(B_e[G^k]).

Claim 5.14.
For any graph G and integer k, dim(B_e[G^k]) ≤ k·χ(G)·dim(B_e[G]) + k.

Proof.
Note that dim(K⃗_2) = 1. By Theorem 1.1, we have that
dim(B_e[G^k]) ≤ dim(B_e[G^{k−1}]) + χ(G)·dim(B_e[G]) + 1
≤ dim(B_e[G^{k−2}]) + 2·χ(G)·dim(B_e[G]) + 2
...
≤ dim(B_e[G]) + (k−1)·χ(G)·dim(B_e[G]) + (k−1)
≤ k·χ(G)·dim(B_e[G]) + k.

By Claim 5.14 and the fact that dim(B_e[G]) ≤ |V(G)|, we have
dim(B_e[G^k]) ≤ k·χ(G)·dim(B_e[G]) + k ≤ k·|V(G)|^ε·|V(G)| + k.
This implies that dim(B_e[G^k]) ≤ O(|V(G)|^{1+ε}) ≤ n^{2ε}. Therefore, dim(B[G^k]) ≤ n^{O(ε)}, implying the gap of n^{1−O(ε)}.

In this section, we present all other applications discussed in the introduction.
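The order ψ constructed in the proof of Claim 5.12 above can be sketched explicitly. The interleaving below is one of several that work and is our choice, not necessarily the one in [29]: pairs of the independent set X get consecutive reversed values, all other (v,1)'s go below them and all other (v,2)'s above.

```python
def hitting_order(V, E, X):
    """Build psi with psi[(x,1)] > psi[(x,2)] for x in X, while
    psi[(v,1)] <= psi[(w,2)] for every edge vw (the hitting condition of Claim 5.12)."""
    psi = {}
    for i, x in enumerate(sorted(X)):
        psi[(x, 2)] = 2 * i + 1      # each pair of X is reversed: (x,1) lands above (x,2)
        psi[(x, 1)] = 2 * i + 2
    lo, hi = 0, 2 * len(X) + 1
    for v in V:
        if v not in X:
            psi[(v, 1)] = lo; lo -= 1   # below every assigned value
            psi[(v, 2)] = hi; hi += 1   # above every assigned value
    return psi

# Path 0-1-2-3-4; X = {0, 2, 4} is independent.
V, E = range(5), {(0, 1), (1, 2), (2, 3), (3, 4)}
E = E | {(w, v) for (v, w) in E}                      # store edges symmetrically
psi = hitting_order(V, E, {0, 2, 4})
assert all(psi[(x, 1)] > psi[(x, 2)] for x in (0, 2, 4))   # every vertex of X is hit
assert all(psi[(v, 1)] <= psi[(w, 2)] for (v, w) in E)     # no edge condition violated
```

Edges with both endpoints in X would be the only way to break the second condition, which is exactly where independence of X enters.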
The following reduction follows the ideas implicit in Theorem 3.5 in [17]. For completeness, we include the proof in Appendix A.
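To illustrate the flavor of the reduction (full details in Appendix A), the sketch below builds a 0/1 system with geometrically scaled windows, so that a solution derived from an induced matching satisfies the matched constraints. The scaling (βn)^{2u+1}, (βn)^{2u+2} and all names are our illustrative choices:

```python
def mrfs_instance(m, n, edges, beta):
    """0/1 matrix A with geometrically scaled windows [l_u, mu_u], one per left vertex.
    'edges' holds pairs (u, w) encoding the edges ((u,1),(w,2)) of the bipartite graph."""
    A = [[1 if (u, w) in edges else 0 for w in range(n)] for u in range(m)]
    l = [(beta * n) ** (2 * u + 1) for u in range(m)]
    mu = [(beta * n) ** (2 * u + 2) for u in range(m)]
    return A, l, mu

# A small bipartite graph in which {(0,0), (2,2)} is an induced matching.
m = n = 3
edges = {(0, 0), (1, 0), (1, 1), (2, 2)}
A, l, mu = mrfs_instance(m, n, edges, beta=2)
x = [0] * n
for (u, w) in [(0, 0), (2, 2)]:   # set x_w = l_u along the induced matching
    x[w] = l[u]
sat = [i for i in range(m) if l[i] <= sum(A[i][w] * x[w] for w in range(n)) <= mu[i]]
assert set(sat) >= {0, 2}         # every matched constraint is satisfied
```

Because the matching is induced, no other positive variable participates in a matched constraint, which is why setting x_w to the lower bound of its partner suffices.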
Theorem 6.1 ([17]). Consider an instance G = (V₁ ∪ V₂, E) of the bipartite semi-induced matching problem. There is a polynomial-time reduction that, for any 0 < β ≤ |V(G)|, outputs an instance A = (A, ℓ, µ) of Mrfs satisfying the following properties:
• (Size) Matrix A is an m-by-n matrix, where m = |V₁|, n = |V₂| and L = max_{i∈[m]} ℓ_i ≤ (βn)^{O(m)}.
• (Yes-Instance) There is a solution x ∈ R₊^n that satisfies at least im(G) constraints in A.
• (No-Instance) There is no solution x ∈ R₊^n that "β-satisfies" more than sim(G) constraints in A; i.e., |{i : ℓ_i ≤ a_i^T x ≤ βµ_i}| ≤ sim(G) for all x.

Now, we prove the hardness of approximating
Mrfs, which holds even in the following bi-criteria setting. For any instance A, we denote by OPT(A) the maximum number of constraints that can be satisfied, i.e., OPT(A) = max_x |{i : ℓ_i ≤ a_i^T x ≤ µ_i}|. For any 0 < α, β ≤ m, we say that an algorithm is an (α, β)-approximation algorithm if, for any instance A of Mrfs, the algorithm returns a solution x that β-satisfies at least OPT(A)/α constraints; i.e., |{i : ℓ_i ≤ a_i^T x ≤ βµ_i}| ≥ OPT(A)/α. The non-bi-criteria setting (defined in Section 1.2) is when β = 1.

Corollary 6.2.
Let ε > 0 be any constant. There is no polynomial-time (m^{1−ε}, m+n)-approximation algorithm for Mrfs unless NP ⊆ ZPP. Moreover, when considering an approximation factor in terms of L, finding a (log^{1−ε} L, m+n)-approximation algorithm cannot be done in polynomial time, unless NP ⊆ ZPP.

Proof.
We start from the graph G given by Theorem 5.1 and invoke Theorem 6.1 on G. In the Yes-Instance, where im(G) ≥ |V(G)|^{1−ε}, there is a solution x that satisfies im(G) ≥ |V(G)|^{1−ε} ≥ m^{1−ε} constraints. In the No-Instance, where sim(G) ≤ |V(G)|^ε, there is no solution that β-satisfies more than sim(G) ≤ |V(G)|^ε ≤ m^{O(ε)} constraints, for any 0 < β ≤ |V(G)| = m + n. Thus, even when we allow the solution to (m+n)-satisfy the constraints, there is still an m^{1−O(ε)} gap. Theorem 6.1 guarantees that L ≤ (βn)^{O(m)}, so log L ≤ O(m log(mn)) ≤ m^{1+ε}, and the hardness of m^{1−O(ε)} can also be written as log^{1−O(ε)} L.

We remark that the hardness factor can be improved to m/log^{1/γ} m. Our bounds are nearly tight, since it is trivial to get an m-approximation and since [17] gave an O(log(nL))-approximation algorithm.

In this section, we revisit
Udp-Min and
Smp and give an alternative proof of the hardness results in [13]. As discussed in the introduction, our proof illustrates the insight that the maximum expanding sequence problem, which is equivalent to the bipartite semi-induced matching problem (see Appendix A.2), is the main source of hardness of these pricing problems. We start by defining the pricing problems we consider. In Unit-Demand Min-Buying Pricing (
Udp-Min), we have a collection of items I = [n] and a set C of consumers, where each consumer c ∈ C is associated with a set S_c ⊆ [n] and a budget B_c. Once the price function p: I → R₊ is fixed, each consumer c buys the cheapest item in S_c if the price of that item is at most B_c; otherwise, the consumer buys nothing. Our goal is to set the prices p so that the profit is maximized.

In Single-Minded Pricing (Smp), the setting is the same except that now each consumer c buys the whole set S_c of its items if ∑_{i∈S_c} p(i) ≤ B_c; otherwise, the consumer c buys nothing. Again, the goal is to set the prices p so that the profit is maximized.

For any instance P of Udp-Min or Smp, we denote by
OPT(P) the revenue that can be obtained by an optimal price function. Our contribution lies in proving the following theorem, which connects the bipartite semi-induced matching problem to the pricing problems. The proof of this theorem borrows many ideas from [10, 13] and is included in Appendix A.

Theorem 6.3. There are reductions with a running time of |V(G)|^{O(|V(H)|)} that transform an input graph G′ = B_e[G ∨ H] into an instance (C, I) of Udp-Min or Smp such that
im(G′) ≤ OPT(C, I) ≤ 2·sim(G′) + O(|V(G)|(1 + |E(H)|)).
Furthermore, the numbers of consumers and items are |C| = |V(H)|^{O(|V(H)|)}|V(G)| and |I| = |V(G)||V(H)|, respectively, and each consumer c ∈ C satisfies |S_c| ≤ O(∆²).

Note that the running time and the number of consumers for
Udp-Min can be slightly improved with a more careful analysis, as follows: the running time can be made 2^{O(|V(H)|)}·poly(|V(G)|), and the number of consumers is 2^{O(|V(H)|)}|V(G)|.

Applying the subadditivity property (Theorem 1.1), the hardness of the induced and semi-induced matching problems (Lemma 5.8) and Theorem 6.3, we get the following result, which gives an alternative proof of the result in [13].

Theorem 6.4.
For any constant ε > 0, both Smp and
Udp-Min are hard to approximate to within factors of log^{1−ε}|C| and (max_{c∈C}|S_c|)^{1/2−ε}, where C is the set of consumers, unless NP ⊆ ZPTIME(n^{O(polylog n)}).

Proof. First, we take a ∆(q)-regular graph G on q^{O(log ∆(q))} vertices and an empty graph H on ∆(q) vertices, as stated in Lemma 5.8. Thus, assuming that NP ⊈ ZPTIME(n^{O(log ∆(n))}), there is no polynomial-time algorithm that distinguishes between the case that im(B_e[G ∨ H]) ≥ |V(G)|(∆(q))^{1−ε} and the case that sim(B_e[G ∨ H]) ≤ |V(G)|(∆(q))^ε, for a function ∆ whose value will be specified later. Then we apply Theorem 6.3 on B_e[G ∨ H] to obtain an instance (C, I) of Udp-Min. This means that in the
Yes-Instance, the optimal revenue from (C, I) is at least im(G′) ≥ |V(G)|(∆(q))^{1−ε}. Additionally, the optimal revenue in the No-Instance is at most 2·sim(G′) + O(|V(G)|(1 + |E(H)|)), which is O(|V(G)|(∆(q))^ε) because sim(G′) ≤ |V(G)|(∆(q))^ε (by Lemma 5.8), and the term |V(G)|(1 + |E(H)|) is at most O(|V(G)|) ≤ |V(G)|∆(q)^ε since H is an empty graph. This implies a gap of (∆(q))^{1−O(ε)} between the two cases.

Now, we choose ∆(q) = log^b q, where b = O(1/ε). So |V(H)| = ∆(q) = log^b q and |V(G)| = q^{O(log log q)}. By Theorem 6.3, the number of consumers is bounded by |C| ≤ |V(H)|^{O(|V(H)|)}|V(G)| ≤ 2^{O(log^{b+1} q)}, so we obtain a gap of (∆(q))^{1−O(ε)} ≥ log^{1−O(ε)}|C|, as desired. Lemma 5.8 holds under the assumption that NP ⊈ ZPTIME(n^{O(log ∆(n))}) = ZPTIME(n^{O(log log n)}), and the running time of the reduction in Theorem 6.3 is |V(G)|^{O(|V(H)|)} = q^{polylog q}, so the hardness assumption we need is NP ⊈ ZPTIME(n^{polylog n}).

Now, to get the hardness in terms of k^{1/2−ε} = (max_{c∈C}|S_c|)^{1/2−ε}, notice that k ≤ O(∆(q)²). Therefore, the hardness in terms of k is (∆(q))^{1−O(ε)} = k^{1/2−O(ε)}. This holds for any ∆(q) ≤ polylog q.

The Donation Center Location (Dcl) problem is defined as follows. The input consists of a directed bipartite graph G = (A ∪ L, E) with edges directed from the set A of agents to the set L of donation centers. Each center ℓ ∈ L has a capacity c_ℓ ∈ Z₊ that represents the maximum number of clients that can be served, and each vertex a ∈ A has a strictly ordered preference ranking of its neighbors in L.
We are interested in choosing a subset L′ ⊆ L of centers to open and an assignment of a subset A′ ⊆ A of agents to the open centers such that (1) the number of agents assigned to any center ℓ is at most c_ℓ, and (2) each a ∈ A′ is assigned to its highest-ranked neighbor in L′. Our goal is to maximize the number of satisfied agents. We therefore write an instance of Dcl as a triple P = (G = (A ∪ L, E), {c_ℓ}_{ℓ∈L}, {⪯_a}_{a∈A}), where the relation ⪯_a represents the preference ranking of agent a ∈ A. Denote by OPT(P) the optimal value of the instance P. The following theorem makes a connection between Dcl and the semi-induced matching problem.
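Before the formal statement, the reduction of Theorem 6.5 below can be sketched in a few lines: left vertices become unit-capacity centers, right vertices become agents, and one global preference orders the centers (the integer encoding, with lower index preferred, is our choice):

```python
def dcl_from_bipartite(n, edges):
    """Theorem 6.5 sketch: a unit-capacity center l(u) per left vertex u and an
    agent a(v) per right vertex v; every agent prefers lower-indexed centers."""
    centers = list(range(n))
    agents = {v: sorted(u for u in range(n) if (u, v) in edges)  # a(v)'s neighbors, best first
              for v in range(n)}
    return centers, agents

def satisfied(open_centers, assignment, agents):
    """An assigned agent is satisfied iff it got its highest-ranked open neighbor."""
    return all(c == min(u for u in agents[v] if u in open_centers)
               for v, c in assignment.items())

# Bipartite graph in which {(0,0), (2,2)} is an induced matching.
centers, agents = dcl_from_bipartite(3, {(0, 0), (1, 0), (1, 1), (2, 2)})
# Open the matched centers and assign the matched agents, as in the proof below:
assert satisfied(open_centers={0, 2}, assignment={0: 0, 2: 2}, agents=agents)
```

Because the matching is induced, no assigned agent has an open neighbor it prefers, which is exactly the lower-bound direction of the theorem.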
Theorem 6.5.
Let G′ be a bipartite graph. There is a polynomial-time reduction that transforms G′ into an instance P = (G, {c_ℓ}, {⪯_a}) of Dcl such that
im(G′) ≤ OPT(P) ≤ sim(G′).
Moreover, |V(G)| = |V(G′)|, c_ℓ = 1 for all ℓ ∈ L, and ⪯_a = ⪯* for all a ∈ A, where ⪯* is some global preference.

Proof. Given a bipartite graph G′ = (V₁ ∪ V₂, E) where V₁ = {(u,1) : u ∈ [n]} and V₂ = {(u,2) : u ∈ [n]}, we create an instance of Dcl as follows. Each vertex (u,1) represents a center ℓ(u), and each vertex (v,2) represents an agent a(v). The capacity of each center ℓ(u) is c_{ℓ(u)} = 1, and each agent uses a global preference ⪯* that satisfies ℓ(u) ⪯* ℓ(u′) if and only if u > u′.

First, we prove the lower bound OPT(P) ≥ im(G′). Given any induced matching M in G′, we argue that there is a solution of value |M| to the Dcl instance. For each edge ((u,1),(v,2)) ∈ M, we open the center ℓ(u) and assign the agent a(v) to ℓ(u). Now, we only need to argue that all agents matched in M are satisfied. For each edge ((u,1),(v,2)) ∈ M, assume that (v,2) prefers some other currently open center ℓ(u′) to ℓ(u). This means that there must be an edge ((u′,1),(v,2)) ∈ E, contradicting the fact that M is an induced matching.

To prove the upper bound, given any solution L′ ⊆ L, A′ ⊆ A and assignment φ: A′ → L′, we show that we can construct a σ-semi-induced matching M in G′ such that |M| = |A′|. Consider the following set:
M = {((u,1),(v,2)) : ℓ(u) ∈ L′ and φ(a(v)) = ℓ(u)}.
The set M is indeed a matching because each agent is assigned only once, and each center has unit capacity. It suffices to show that M is σ-semi-induced for the total order σ defined by σ(u) < σ(u′) if and only if u < u′. Assume that it is not. Then there must be an edge ((u′,1),(v,2)) ∈ E with φ(a(v)) = ℓ(u) and u > u′. This means that the agent a(v) prefers ℓ(u′) to ℓ(u), but a(v) was assigned to ℓ(u) instead. This contradicts the fact that the solution is feasible.

Corollary 6.6.
Let ε > 0 be any constant. Unless NP ⊆ ZPP, it is hard to approximate
Dcl to within a factor of n^{1−ε}, where n is the number of vertices in the input graph.

Proof. First, we take a graph G′ as in Theorem 5.1 and invoke Theorem 6.5 on G′ to obtain an instance P = (G = (A ∪ L, E), {c_ℓ}, {⪯_a}) of Dcl. In the
Yes-Instance, where im(G′) ≥ |V(G′)|^{1−ε}, we have that OPT(P) ≥ |V(G)|^{1−ε}. In the No-Instance, where sim(G′) ≤ |V(G′)|^ε, we have OPT(P) ≤ |V(G)|^ε. Therefore, we obtain a gap of |V(G)|^{1−2ε}, as desired.

We start by giving the definitions of the problems and related notions in graph theory.

Definition 6.7 (Intersection Graph). Given a graph G = (V, E), we say that a set system {S_v}_{v∈V(G)} is a set system representation of G if, for all u, v ∈ V(G), uv ∈ E iff S_u ∩ S_v ≠ ∅.

It is well-known that any graph G can be represented by a set system: for each vertex u ∈ V(G), we define the set S_u to contain the edges incident to u. We are interested in set system representations where each set corresponds to a geometric object.

Definition 6.8 (Boxicity and Cubicity). We say that the boxicity of a graph G is at most d if G can be represented by a set system {S_v}_{v∈V(G)} such that each set S_v is a d-dimensional axis-parallel hyper-rectangle in R^d. Similarly, we say that the cubicity of G is at most d if each set S_v is a unit cube in R^d.

In other words, the boxicity of G, denoted by box(G), is the minimum dimension d such that we can represent each node v ∈ V(G) as a d-dimensional rectangle in the geometric setting. It is known that the boxicity is at most one for interval graphs and at most two for planar graphs.

Definition 6.9 (Threshold Dimension). A graph G is a threshold graph if there is a real number η and a weight function w: V(G) → R such that uv ∈ E(G) ⇔ w(u) + w(v) ≥ η. For any graph G, the threshold dimension of G is the minimum k such that there exist threshold graphs G₁, ..., G_k where E(G) = ⋃_{i=1}^{k} E(G_i).

Adiga et al. [1] show that the problems of approximating boxicity, cubicity, and threshold dimension are at least as hard as poset dimension (within a constant factor).
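Definition 6.9 is easy to exercise computationally; the following minimal sketch (our toy weights and threshold) realizes the star K_{1,3} as a threshold graph:

```python
def threshold_graph(weights, eta):
    """Edges uv with w(u) + w(v) >= eta (Definition 6.9)."""
    V = list(weights)
    return {(u, v) for u in V for v in V if u < v and weights[u] + weights[v] >= eta}

# A star K_{1,3} is a threshold graph: give the hub a large weight.
w = {"hub": 10, "a": 1, "b": 1, "c": 1}
E = threshold_graph(w, eta=11)
assert E == {("a", "hub"), ("b", "hub"), ("c", "hub")}
```

The threshold dimension of a graph then asks for the fewest such graphs whose edge sets union to E(G).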
We get the tight hardness of these problems by combining the reductions in [1] with our hardness of poset dimension. We provide an outline of their reductions here and refer the reader to [1] for the complete proofs. First, they show that there is a polynomial-time algorithm that transforms any poset P⃗ into a graph G_{P⃗} such that box(G_{P⃗}) ≤ dim(P⃗) ≤ 2·box(G_{P⃗}). This implies the hardness of approximating the boxicity of graphs. Since cubicity is known to be within a logarithmic factor of boxicity, approximating cubicity is also as hard as boxicity (up to a factor of log n). They also construct a graph G′_{P⃗} such that the threshold dimension of G′_{P⃗} is the same as the dimension of the poset P⃗, hence implying the hardness of approximating threshold dimension.

We have shown that simple techniques based on graph products are powerful tools for proving hardness of approximation. While some of these results are tight, some problems are still open. In particular, it remains to close the gap between d^{1/2−ε} and O(d) for the semi-induced matching problem on d-regular graphs. Also, there is a gap between the O(k)-approximation (see [6]) and the k^{1/2−ε}-hardness for the k-hypergraph vertex pricing problem.

It is also interesting to further investigate the power of our techniques in proving hardness of approximation, or even other types of lower bounds. A potential starting point is to look at problems which share common structure with those studied in this paper. For example, Udp-Min is a special case of the multi-user Stackelberg network pricing problem, so graph products can be used to prove the hardness of this problem.
However, the approximability of the single-user version is still wide open, as there is a large gap between the (2−ε)-hardness and the O(log n)-approximation algorithm. In fact, the proof of the (2−ε)-hardness can be viewed as a reduction from the independent set problem, but we found no graph product techniques that can be used to boost the hardness further. (See, e.g., [8, 9] for details.)

References

[1] Abhijin Adiga, Diptendu Bhowmick, and L. Sunil Chandran. The hardness of approximating the boxicity, cubicity and threshold dimension of a graph. Discrete Appl. Math., 158(16):1719–1726, 2010.

[2] Abhijin Adiga, Diptendu Bhowmick, and L. Sunil Chandran. Boxicity and poset dimension.
SIAM J. Discrete Math. , 25(4):1687–1698, 2011.[3] Abhijin Adiga, L. Sunil Chandran, and Rogers Mathew. Cubicity, degeneracy, and crossingnumber. In
FSTTCS , pages 176–190, 2011.[4] Christoph Amb¨uhl, Monaldo Mastrolilli, Nikolaus Mutsanas, and Ola Svensson. Precedenceconstraint scheduling and connections to dimension theory of partial orders.
Bulletin of theEATCS , 95:37–58, 2008.[5] Matthew Andrews, Julia Chuzhoy, Venkatesan Guruswami, Sanjeev Khanna, Kunal Talwar,and Lisa Zhang. Inapproximability of edge-disjoint paths and low congestion routing on undi-rected graphs.
Combinatorica , 30(5):485–520, 2010.[6] Maria-Florina Balcan and Avrim Blum. Approximation algorithms and online mechanisms foritem pricing.
Theor. Comput. , 3(1):179–195, 2007.[7] Vincenzo Bonifaci, Peter Korteweg, Alberto Marchetti-Spaccamela, and Leen Stougie. Mini-mizing flow time in the wireless gathering problem.
ACM Transactions on Algorithms , 7(3):33,2011.[8] Patrick Briest, Parinya Chalermsook, Sanjeev Khanna, Bundit Laekhanukit, and DanuponNanongkai. Improved hardness of approximation for stackelberg shortest-path pricing. In
WINE , pages 444–454, 2010.[9] Patrick Briest, Martin Hoefer, and Piotr Krysta. Stackelberg network pricing games.
Algo-rithmica , 62(3-4):733–753, 2012.[10] Patrick Briest and Piotr Krysta. Buying cheap is expensive: Approximability of combinatorialpricing problems.
SIAM J. Comput. , 40(6):1554–1586, 2011.[11] Kathie Cameron. Induced matchings.
Discrete Appl. Math. , 24(1-3):97–102, 1989.[12] Kathie Cameron and Pavol Hell. Independent packings in structured graphs.
Math. Program. ,105(2-3):201–213, 2006.[13] Parinya Chalermsook, Julia Chuzhoy, Sampath Kannan, and Sanjeev Khanna. Improvedhardness results for profit maximization pricing problems with unlimited supply. In
APPROX-RANDOM , pages 73–84, 2012.[14] L. Sunil Chandran, Mathew C. Francis, and Naveen Sivadasan. Boxicity and maximum degree.
J. Comb. Theory, Ser. B , 98(2):443–445, 2008.[15] L. Sunil Chandran and Naveen Sivadasan. Boxicity and treewidth.
J. Comb. Theory, Ser. B ,97(5):733–744, 2007.[16] William Duckworth, David Manlove, and Michele Zito. On the approximability of the maxi-mum induced matching problem.
J. Discrete Algorithms, 3(1):79–91, 2005.

[17] Khaled M. Elbassioni, Rajiv Raman, Saurabh Ray, and René Sitters. On the approximability of the maximum feasible subsystem problem with 0/1-coefficients. In
SODA , pages 1210–1219,2009.[18] S. Even, O. Goldreich, S. Moran, and P. Tong. On the np-completeness of certain networktesting problems.
Networks , 14(1):1–24, 1984.[19] Uriel Feige and Joe Kilian. Zero knowledge and the chromatic number.
J. Comput. Syst. Sci. ,57(2):187–199, 1998.[20] Stefan Felsner, Ching Man Li, and William T. Trotter. Adjacency posets of planar graphs.
Discrete Math. , 310(5):1097–1104, 2010.[21] F.S.Roberts. On the boxicity and cubicity of a graph.
Recent Progresses in Combinatorics ,pages 301–310, 1969.[22] Guogang Gao and Xuding Zhu. Star-extremal graphs and the lexicographic product.
DiscreteMath. , 152(1-3):147–156, 1996.[23] M. R. Garey and David S. Johnson.
Computers and Intractability: A Guide to the Theory ofNP-Completeness . W. H. Freeman, 1979.[24] Zvi Gotthilf and Moshe Lewenstein. Tighter approximations for maximum induced matchingsin regular graphs. In
WAOA , pages 270–281, 2005.[25] Venkatesan Guruswami, Jason D. Hartline, Anna R. Karlin, David Kempe, Claire Kenyon,and Frank McSherry. On profit-maximizing envy-free pricing. In
SODA , pages 1164–1173.SIAM, 2005.[26] Venkatesan Guruswami and Prasad Raghavendra. Hardness of solving sparse overdeterminedlinear systems: A 3-query pcp over integers.
TOCT , 1(2), 2009.[27] Richard Hammack, Wilfried Imrich, and Sandi Klavˇzar.
Handbook of Product Graphs. Discrete Math. Appl. (Boca Raton). CRC Press, Boca Raton, FL, second edition, 2011.

[28] Johan Håstad. Clique is hard to approximate within n^{1−ε}. In FOCS, pages 627–636, 1996.

[29] Rajneesh Hegde and Kamal Jain. The hardness of approximating poset dimension.
Electron.Notes Discrete Math. , 29:435–443, 2007.[30] Chien-Chung Huang and Zoya Svitkina. Donation center location problem. In
FSTTCS , pages227–238, 2009.[31] David S. Johnson. The NP-completeness column: An ongoing guide.
J. Algorithms , 2(4):393–405, 1981.[32] Changhee Joo, Gaurav Sharma, Ness B. Shroff, and Ravi R. Mazumdar. On the complexity ofscheduling in wireless networks.
EURASIP J. Wireless Comm. and Networking , 2010, 2010.[33] Subhash Khot and Ashok Kumar Ponnuswami. Better inapproximability results for maxclique,chromatic number and min-3lin-deletion. In
ICALP (1) , pages 226–237, 2006.[34] Sandi Klavzar and Hong-Gwa Yeh. On the fractional chromatic number, the chromatic number,and graph products.
Discrete Math., 247(1-3):235–242, March 2002.

[35] Ravi Kumar, Uma Mahadevan, and D. Sivakumar. A graph-theoretic approach to extract storylines from search results. In
KDD , pages 216–225, 2004.[36] Eugene L. Lawler and Oliver Vornberger. The partial order dimension problem is NP-complete.Manuscript, 1981.[37] Nathan Linial and Umesh V. Vazirani. Graph products and chromatic numbers. In
FOCS ,pages 124–128, 1989.[38] Nikola Milosavljevic. On complexity of wireless gathering problems on unit-disk graphs. In
ADHOC-NOW , pages 308–321, 2011.[39] Sofya Raskhodnikova. Transitive-closure spanners: A survey. In
Property Testing , pages 167–196, 2010.[40] Paat Rusmevichientong. A non-parametric approach to multi-product pricing: Theory andapplication.
Ph. D. thesis, Stanford University , 2003.[41] Paat Rusmevichientong, Benjamin Van Roy, and Peter W. Glynn. A nonparametric approachto multiproduct pricing.
Oper. Res. , 54:82–98, January 2006.[42] E.R. Scheinerman and D.H. Ullman.
Fractional graph theory: a rational approach to the theoryof graphs . Wiley-Intersci. Ser. Discrete Math. Optim. Wiley, 1997.[43] Walter Schnyder. Planar graphs and poset dimension.
Order , 5:323–343, 1989.[44] Walter Schnyder. Embedding planar graphs on the grid. In
SODA , pages 138–148, 1990.[45] Larry J. Stockmeyer and Vijay V. Vazirani. Np-completeness of some generalizations of themaximum matching problem.
Inf. Process. Lett. , 15(1):14–19, 1982.[46] Luca Trevisan. Non-approximability results for optimization problems on bounded degreeinstances. In
STOC , pages 453–461, 2001.[47] Luca Trevisan. CS294: PCP and Hardness of Approximation, Lecture 5, 2006.[48] W.T. Trotter.
Combinatorics and Partially Ordered Sets: Dimension Theory . Johns HopkinsStudies in the Mathematical Sciences. Johns Hopkins University Press, 2001.[49] Mihalis Yannakakis. The Complexity of the Partial Order Dimension Problem.
SIAM J. Algebra Discr., 3(3):351–358, 1982.

[50] Michele Zito. Induced matchings in regular graphs and trees. In WG, pages 89–100, 1999.

Appendix

A Omitted Proofs from Section 6

A.1 Proof of Theorem 6.1
Theorem 6.1 (restated).
Consider an instance G = (V₁ ∪ V₂, E) of the bipartite semi-induced matching problem. Let m = |V₁| and n = |V₂|. There is a polynomial-time reduction that, for any β > 0, outputs an instance A = (A, ℓ, µ) of Mrfs satisfying the following properties:
• (Size) Matrix A is an m-by-n matrix and L = max_{i∈[m]} ℓ_i = (βn)^{O(m)}.
• (Yes-Instance) There is a solution x ∈ R₊^n that satisfies at least im(G) constraints in A.
• (No-Instance) There is no solution x ∈ R₊^n that "β-satisfies" more than sim(G) constraints in A; i.e., |{i : ℓ_i ≤ a_i^T x ≤ βµ_i}| ≤ sim(G) for all x.

Proof. For the sake of presentation, we represent the set of vertices of G as V₁ = {(u,1) : u ∈ [m]} and V₂ = {(u,2) : u ∈ [n]}. We define a linear system consisting of the n variables {x_w}_{w∈[n]} and the following m constraints:

∀u ∈ [m]:  (βn)^{2u−1} ≤ ∑_{w : ((u,1),(w,2)) ∈ E(G)} x_w ≤ (βn)^{2u}.   (20)

Formally, for each (u, w) ∈ [m] × [n], define a_{u,w} = 1 if ((u,1),(w,2)) ∈ E(G) and a_{u,w} = 0 otherwise. Then we create a constraint u for each vertex (u,1) ∈ V₁ as ℓ_u ≤ ∑_{(w,2)∈V₂} a_{u,w} x_w ≤ µ_u, where ℓ_u = (βn)^{2u−1} and µ_u = (βn)^{2u}. By the construction, the number of constraints is m = |V₁|, and the number of variables is n = |V₂|. Also, notice that L = max_{(u,1)∈V₁} ℓ_u = (βn)^{O(m)}. This proves the first property.

Let M = {((u_i,1),(w_i,2)) : i = 1,...,r} be an induced matching of size r in G. We can define the following solution for the linear system A: for each i = 1,...,r, we set x_{w_i} = ℓ_{u_i}, and x_{w′} = 0 for all other w′ ∈ [n]. It suffices to show that constraint u_i is satisfied for all i ∈ [r]. To see this, consider any constraint u_i, where i ∈ [r]. Only variables x_{w_j} with ((u_i,1),(w_j,2)) ∈ E(G) participate in this constraint, and the only participating variable with positive value is x_{w_i} = ℓ_{u_i}; otherwise, it would contradict the fact that M is an induced matching. This proves the second property.

Now, to prove the third property, assume that we have a solution x that β-satisfies r constraints, for some 0 < β ≤ |V(G)|; i.e., there exists a subset V* ⊆ V₁, denoted by V* = {(u₁,1), ..., (u_r,1)}, such that
∀(u_i,1) ∈ V*:  (βn)^{2u_i−1} ≤ ∑_{w : ((u_i,1),(w,2)) ∈ E(G)} x_w ≤ β(βn)^{2u_i}.
We note the following claim.
Claim A.1. For any $(u_i,1) \in V^*$, there exists $w_i \in [n]$ such that $x_{w_i} \ge \ell_{u_i}/n$ and $((u_i,1),(w_i,2)) \in E(G)$.

Proof. Consider the constraint $u_i$, which involves variables $x_{w'}$ for all $w' \in [n]$ such that $((u_i,1),(w',2)) \in E(G)$. Since there are at most $n$ such variables and $\sum_{w' : ((u_i,1),(w',2)) \in E(G)} x_{w'} \ge \ell_{u_i}$, one of the variables $x_{w'}$ must have a value of at least $\ell_{u_i}/n$. Thus, $w_i = w'$ is the desired index, proving the claim.

Next, we define a matching $M = \{((u_i,1),(w_i,2))\}_{i=1}^r$, where $w_i$ is as in the above claim. It is not difficult to check that $M$ is a matching: for any $u_i > u_j$, note that $x_{w_i} \ge \ell_{u_i}/n = (\beta n)^{u_i-1}/n > \beta (\beta n)^{u_j} = \beta \mu_{u_j} \ge x_{w_j}$; thus, $w_i \ne w_j$.

We define a total order $\sigma$ on $V(G)$ by $\sigma((v,1)) = v$ for all $(v,1) \in V_1$ and $\sigma((w,2)) = m + w$ for all $(w,2) \in V_2$. We claim that $M$ is a $\sigma$-semi-induced matching. To see this, assume that it is not. Then, by the definition of $\sigma$-semi-induced matching and the fact that $G$ is bipartite, there must be some edge $((u_i,1),(w_j,2)) \in E(G)$ for some $i$ and $j$ such that $u_i < u_j$. Observe that

$$\sum_{w' : ((u_i,1),(w',2)) \in E(G)} x_{w'} \;\ge\; x_{w_j} \;\ge\; \ell_{u_j}/n \;=\; (\beta n)^{u_j - 1}/n \;>\; \beta (\beta n)^{u_i} \;=\; \beta \mu_{u_i}.$$

This means that constraint $u_i$ is violated by more than a factor of $\beta$, contradicting the assumption that $x$ $\beta$-satisfies the constraints corresponding to vertices in $V^*$. This proves the third property and completes the proof of Theorem 6.1.

A.2 Equivalence between semi-induced matching and maximum expanding sequence
In this section, we show that the maximum expanding sequence problem is in fact equivalent to the semi-induced matching problem.
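As a concrete warm-up for the equivalence (a minimal sketch of our own, not code from the paper; all identifiers are hypothetical, and the formal definitions and reduction follow below), the next snippet checks the expanding-sequence condition and builds the bipartite graph and total order used in the proof:

```python
# Sketch (not from the paper): expanding sequences over a set system (S, U),
# and the bipartite graph / total order built from it in the reduction below.

def is_expanding(sets, phi):
    """Check that S_{phi(1)}, ..., S_{phi(l)} is an expanding sequence:
    indices strictly increase and each chosen set contributes at least one
    ground element not covered by the earlier chosen sets."""
    covered = set()
    prev = -1
    for idx in phi:
        if idx <= prev:
            return False                   # indices must strictly increase
        if set(sets[idx]) <= covered:      # S_{phi(j)} adds nothing new
            return False
        covered |= set(sets[idx])
        prev = idx
    return True

def graph_from_instance(sets, universe):
    """Build the bipartite graph and order of the reduction: set i becomes a
    left vertex, element j a right vertex, with an edge iff j is in S_i;
    sigma lists all left vertices before all right vertices."""
    edges = {(i, j) for i, s in enumerate(sets) for j in s}
    sigma = {('L', i): i for i in range(len(sets))}
    sigma.update({('R', j): len(sets) + j for j in range(len(universe))})
    return edges, sigma

# Tiny example: S_0 = {0}, S_1 = {0, 1}. Choosing both sets in order is
# expanding, because S_1 contributes the new element 1.
sets = [{0}, {0, 1}]
assert is_expanding(sets, [0, 1])
assert not is_expanding(sets, [1, 0])      # indices must increase
```

The `('L', i)` / `('R', j)` vertex names are our own bookkeeping for the pairs $(i,1)$ and $(j,2)$ used in the proof.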
Maximum Expanding Sequence.
We are given an ordered collection of sets $\mathcal{S} = \{S_1, \ldots, S_m\}$ over the ground elements $U$. An expanding sequence $\varphi = (\varphi(1), \ldots, \varphi(\ell))$ of length $\ell$ is a selection of sets $S_{\varphi(1)}, \ldots, S_{\varphi(\ell)}$ such that, for all $j$ with $2 \le j \le \ell$, we have $\varphi(j-1) < \varphi(j)$ and $S_{\varphi(j)} \not\subseteq \bigcup_{j' < j} S_{\varphi(j')}$. The goal is to find an expanding sequence of maximum length.

Lemma A.2. Let $(\mathcal{S}, U)$ be an instance of the maximum expanding sequence problem. Then there is a polynomial-time reduction that constructs an instance $(G, \sigma)$ of the $\sigma$-semi-induced matching problem such that $sim_\sigma(G) = OPT(\mathcal{S}, U)$. Conversely, given an instance $(G, \sigma)$ of the semi-induced matching problem, we can construct $(\mathcal{S}, U)$ such that $OPT(\mathcal{S}, U) = sim_\sigma(G)$.

Proof. We only prove one direction of the reduction; it will be clear from the description that the converse also holds. Given an instance $(\mathcal{S}, U)$ of the expanding sequence problem, we construct the bipartite graph $G = (V_1 \cup V_2, E)$ where $V_1 = \{(i,1) : i \in [|\mathcal{S}|]\}$ and $V_2 = \{(i,2) : i \in [|U|]\}$. Each $S_i \in \mathcal{S}$ corresponds to $(i,1) \in V_1$ and each element $j \in U$ corresponds to vertex $(j,2)$ in $V_2$. The set $S_i$ contains $j$ if and only if $((i,1),(j,2)) \in E$, and finally the total order $\sigma$ is defined as follows:

• $\sigma((i,1)) = i$ for all $0 \le i \le |\mathcal{S}| - 1$, and

• $\sigma((j,2)) = |\mathcal{S}| + j$ for all $0 \le j \le |U| - 1$.

In words, $\sigma$ puts the vertices in $V_1$ before those in $V_2$, and the vertices in $V_1$ are ordered according to their corresponding sets.

We now claim that expanding sequences in $(\mathcal{S}, U)$ are equivalent to $\sigma$-semi-induced matchings in $G$. For any expanding sequence $S_{\varphi(1)}, \ldots, S_{\varphi(\ell)}$, we define the $\sigma$-semi-induced matching as follows: for each $j = 1, \ldots, \ell$, we have an edge $((\varphi(j),1),(\psi(j),2))$, where $\psi(j)$ is defined as an arbitrary element in $S_{\varphi(j)} \setminus \big(\bigcup_{j' < j} S_{\varphi(j')}\big)$, which is nonempty by the definition of an expanding sequence.

The vertex set of $G'$ is partitioned into $V_1 = \{(u,a,1) : u \in V(G), a \in V(H)\}$ and $V_2 = \{(u,a,2) : u \in V(G), a \in V(H)\}$. For convenience, we may think of vertices in $V(G)$ and $V(H)$ as integers (so that their ordering and arithmetic can be done naturally). For each vertex $(u,a,1) \in V_1$, we have an item $I(u,a)$. So, $\mathcal{I} = \{I(u,a) : u \in V(G), a \in V(H)\}$, and hence $|\mathcal{I}| = |V(G)||V(H)|$. For each vertex $(u,a,1) \in V_1$, we have $n^a$ consumers $C(u,a) = \{c(u,a,r)\}_{r=1}^{n^a}$, and each such consumer in this set has budget $B_{c(u,a,r)} = 1/n^a$ and an associated set $S_{c(u,a,r)} = \{I(v,b) : ((u,a,1),(v,b,2)) \in E\}$. The final set of consumers is $\mathcal{C} = \bigcup_{u,a} C(u,a)$. This completes the construction. The instance $(\mathcal{C}, \mathcal{I})$ here will be used as both the Udp-Min and Smp instances.

We first show that the optimal revenue we receive from the above instance is at least the size of the maximum induced matching, for both Udp-Min and Smp. Let $OPT_{UDP}(\mathcal{C}, \mathcal{I})$ and $OPT_{SMP}(\mathcal{C}, \mathcal{I})$ denote the optimal values on instance $(\mathcal{C}, \mathcal{I})$ of Udp-Min and Smp, respectively.

Lemma A.3. The following hold for Udp-Min and Smp:

• $OPT_{UDP}(\mathcal{C}, \mathcal{I}) \ge im(G')$

• $OPT_{SMP}(\mathcal{C}, \mathcal{I}) \ge im(G')$

Proof. Let $M$ be an induced matching of cardinality $K$. First, for each $((u,a,1),(v,b,2)) \in M$, we set the price of $I(v,b)$ to $p(I(v,b)) = 1/n^a$. Next, we set the prices of the other items: for Udp-Min, we set their prices to $\infty$; for Smp, we set their prices to $0$. This price function is well-defined because $M$ is a matching.

Now, we argue that the revenue that can be made from the price function $p$ is $K$ for both Udp-Min and Smp. It suffices to show that, for each pair $(u,a)$ such that $(u,a,1)$ is matched in $M$, each consumer in the set $C(u,a)$ pays the price of $1/n^a$.
Consider any edge $((u,a,1),(v,b,2)) \in M$. For Udp-Min, any consumer $c \in C(u,a)$ has exactly one item $I(v,b) \in S_c$ with finite price $1/n^a$; otherwise, $M$ would not be an induced matching. Similarly, for Smp, any consumer $c \in C(u,a)$ has exactly one item $I(v,b) \in S_c$ with non-zero price $1/n^a$. Therefore, the total profit made from consumers in $C(u,a)$ is exactly one for both Udp-Min and Smp, and the lemma follows.

The next lemma proves the upper bound on the revenue.

Lemma A.4. The following hold for Udp-Min and Smp:

• $OPT_{UDP}(\mathcal{C}, \mathcal{I}) \le 2\, sim(G') + 2|V(G)|(|E(H)| + 1)$

• $OPT_{SMP}(\mathcal{C}, \mathcal{I}) \le 2\, sim(G') + O(|V(G)||E(H)|)$

Proof. Let $p^*$ be any optimal price function for either Udp-Min or Smp, and let $K$ be the revenue that we obtain from $p^*$. Our goal is to show that we can find a semi-induced matching of cardinality $K/2 - |V(G)|(|E(H)| + 1)$ in $G'$.

First, observe that, for any $u \in V(G)$ and $a \in V(H)$, all consumers in $C(u,a)$ pay exactly the same price since they desire the same set of items and have the same budget. So, we will treat these consumers as a bundle and refer to the index $(u,a)$ as a representative of all consumers in $C(u,a)$.

We need the following notion of a tight index, which intuitively captures the consumers who spend a sufficiently large fraction of their budgets in the solution $p^*$.

Definition A.5 (Tight Index). We say that an index $(u,a)$ is tight if consumers in $C(u,a)$ pay between $1/n^{a+1}$ and $1/n^a$ (i.e., between a $1/n$ and $1$ proportion of their budgets).

Claim A.6. The number of tight indices is at least $K/2$.

Proof. Assume for a contradiction that the number of tight indices is less than $K/2$. Consider a non-tight index $(u,a)$ such that consumers in $C(u,a)$ can afford to buy items:

• For Udp-Min, $\min_{(v,b) : ((u,a,1),(v,b,2)) \in E(G')} p^*(I(v,b)) \in [0, 1/n^{a+1})$.
• For Smp, $\sum_{(v,b) : ((u,a,1),(v,b,2)) \in E(G')} p^*(I(v,b)) \in [0, 1/n^{a+1})$.

We call these indices feasible non-tight indices. Observe that we earn a profit of strictly larger than $K/2$ from the feasible non-tight indices, for both Udp-Min and Smp: the price function $p^*$ yields a revenue of $K$ which comes only from consumers with tight or feasible non-tight indices, and we get strictly less than $K/2$ from the tight indices. Now consider the price function $p' = 2p^*$. By using the price function $p'$, the revenue that we earn from the feasible non-tight indices will be doubled. So, we would have revenue strictly more than $K$ for both Udp-Min and Smp. This contradicts the optimality of $p^*$.

Our goal is to "recover" a large semi-induced matching of $G'$ from the tight indices. We will show that the set of "recoverable indices", which is large, is exactly the following set of canonical tight indices. Note that, in both Udp-Min and Smp, for any tight index $(u,a)$, there must be an item $I(v,b)$ such that $((u,a,1),(v,b,2)) \in E(G')$ (i.e., an item that consumers in $C(u,a)$ want to buy) and $1/n^{a+2} \le p^*(I(v,b)) \le 1/n^a$ (i.e., $I(v,b)$ is expensive but affordable by consumers in $C(u,a)$). We say that $(u,a)$ is canonical if and only if $I(u,a)$ is the only such item. To be precise, a canonical tight index is defined as below.

Definition A.7 (Canonical Tight Index). We say that a tight index $(u,a)$ is canonical if for any $(v,b)$ such that $((u,a,1),(v,b,2)) \in E(G')$, we have that $1/n^{a+2} \le p^*(I(v,b)) \le 1/n^a$ if and only if $u = v$ and $a = b$.

First, we show that the number of canonical tight indices is large.

Claim A.8. There are at least $K/2 - |V(G)|(|E(H)| + 1)$ canonical tight indices.

Proof. Let $(u,a)$ be any non-canonical tight index. Recall that since $(u,a)$ is tight, there must be an item $I(v,b)$ such that $((u,a,1),(v,b,2)) \in E(G')$ and $1/n^{a+2} \le p^*(I(v,b)) \le 1/n^a$.
Since $(u,a)$ is non-canonical, it is not the case that both $u = v$ and $a = b$. Consequently, since $((u,a,1),(v,b,2)) \in E(G')$, either $uv \in E(G)$ or $ab \in E(H)$. We say that $(u,a)$ is $G$-non-canonical if $uv \in E(G)$ and $H$-non-canonical if $ab \in E(H)$. Note that every non-canonical tight index must be either $G$-non-canonical or $H$-non-canonical (or both).

First, we claim that the number of $G$-non-canonical tight indices is at most $|V(G)|$. In particular, we claim that for any $u \in V(G)$, there is at most one $G$-non-canonical tight index of the form $(u,a)$. Assume for a contradiction that there are two $G$-non-canonical tight indices $(u,a)$ and $(u,a')$ for some $u \in V(G)$ and $a, a' \in V(H)$ such that $a < a'$.

For the case of Udp-Min, observe that since $(u,a')$ is $G$-non-canonical and tight, there exists an index $(v,b')$ such that (1) $uv \in E(G)$ and (2) $1/n^{a'+2} \le p^*(I(v,b')) \le 1/n^{a'}$. The first property implies that $((u,a,1),(v,b',2)) \in E(G')$. Consequently, by the second property, consumers in $C(u,a)$ pay at most $1/n^{a'} < 1/n^{a+1}$, contradicting the assumption that index $(u,a)$ is tight.

For the case of Smp, observe that since $(u,a)$ is $G$-non-canonical and tight, there exists an index $(v,b)$ such that (1) $uv \in E(G)$ and (2) $1/n^{a+2} \le p^*(I(v,b)) \le 1/n^a$. The first property implies that $((u,a',1),(v,b,2)) \in E(G')$. Consequently, by the second property, consumers in $C(u,a')$ pay at least $1/n^{a+2} > 1/n^{a'}$, contradicting the assumption that index $(u,a')$ is tight.

Secondly, we claim that the number of $H$-non-canonical tight indices is at most $|V(G)||E(H)|$. To see this, recall that for any $(u,a)$ that is $H$-non-canonical and tight, there exists an index $(v,b)$ such that (1) $ab \in E(H)$ and (2) $1/n^{a+2} \le p^*(I(v,b)) \le 1/n^a$.
Observe that, by the first condition, any $H$-non-canonical tight index $(u,a)$ must be in the following set: $\Phi(u) = \bigcup_{a,b : ab \in E(H)} \{(u,a)\}$. Obviously, $|\Phi(u)| \le |E(H)|$. It follows that the number of $H$-non-canonical tight indices is at most $\sum_{u \in V(G)} |\Phi(u)| \le |V(G)||E(H)|$, as claimed.

Now, we have already shown that there are at most $|V(G)|(|E(H)| + 1)$ non-canonical tight indices; since the number of tight indices is at least $K/2$ by Claim A.6, the number of canonical tight indices is at least $K/2 - |V(G)|(|E(H)| + 1)$, as desired.

We finish the proof by showing that we can recover a large semi-induced matching from the canonical tight indices.

Claim A.9. $sim(G')$ is at least the number of canonical tight indices. In other words, $sim(G') \ge K/2 - |V(G)|(|E(H)| + 1)$.

Proof. Let $(u_1,a_1), (u_2,a_2), \ldots, (u_t,a_t)$, for some $t$, be the canonical tight indices, ordered in such a way that $a_1 \le a_2 \le \ldots \le a_t$. Let $M = \{((u_i,a_i,1),(u_i,a_i,2))\}_{i=1,\ldots,t}$.

For the case of Udp-Min, let $\sigma$ be any total ordering of the nodes in $G'$ such that $\sigma(u_1,a_1,1) < \sigma(u_2,a_2,1) < \ldots < \sigma(u_t,a_t,1)$ and $\sigma(u_i,a_i,2) > \sigma(u_t,a_t,1)$ for all $i$. We claim that $M$ is a $\sigma$-semi-induced matching in $G'$. In particular, we show that for any $i < j$, $((u_i,a_i,1),(u_j,a_j,2)) \notin E(G')$. To see this, note that since $(u_j,a_j)$ is canonical and tight, $1/n^{a_j+2} \le p^*(I(u_j,a_j)) \le 1/n^{a_j}$.

Consider two possible cases: either (1) $a_i < a_j$ or (2) $a_i = a_j$. In the first case, we have that $p^*(I(u_j,a_j)) \le 1/n^{a_j} < 1/n^{a_i+1}$. Thus, if $((u_i,a_i,1),(u_j,a_j,2)) \in E(G')$, then consumers in $C(u_i,a_i)$ would pay strictly less than $1/n^{a_i+1}$, contradicting the fact that $(u_i,a_i)$ is tight. In the second case, we have that $1/n^{a_i+2} \le p^*(I(u_j,a_j)) \le 1/n^{a_i}$. Thus, if $((u_i,a_i,1),(u_j,a_j,2)) \in E(G')$, then $(u_i,a_i)$ is not a canonical index, a contradiction.
Since both cases lead to a contradiction, we have that $((u_i,a_i,1),(u_j,a_j,2)) \notin E(G')$, and thus $M$ is a $\sigma$-semi-induced matching, as claimed.

For the case of Smp, let $\sigma'$ be any total ordering such that $\sigma'(u_1,a_1,1) > \sigma'(u_2,a_2,1) > \ldots > \sigma'(u_t,a_t,1)$ and $\sigma'(u_i,a_i,2) > \sigma'(u_1,a_1,1)$ for all $i$. We claim that $M$ is a $\sigma'$-semi-induced matching in $G'$. In particular, we show that for any $i > j$, $((u_i,a_i,1),(u_j,a_j,2)) \notin E(G')$. To see this, note again that since $(u_j,a_j)$ is canonical and tight, $1/n^{a_j+2} \le p^*(I(u_j,a_j)) \le 1/n^{a_j}$.

Consider two possible cases: either (1) $a_i > a_j$ or (2) $a_i = a_j$. In the first case, we have that $p^*(I(u_j,a_j)) \ge 1/n^{a_j+2} > 1/n^{a_i}$. Thus, if $((u_i,a_i,1),(u_j,a_j,2)) \in E(G')$, then consumers in $C(u_i,a_i)$ would pay strictly more than $1/n^{a_i}$, contradicting the fact that $(u_i,a_i)$ is tight. In the second case, we have that $1/n^{a_i+2} \le p^*(I(u_j,a_j)) \le 1/n^{a_i}$. Thus, if $((u_i,a_i,1),(u_j,a_j,2)) \in E(G')$, then $(u_i,a_i)$ is not a canonical index, a contradiction. Since both cases lead to a contradiction, we have that $((u_i,a_i,1),(u_j,a_j,2)) \notin E(G')$, and thus $M$ is a $\sigma'$-semi-induced matching.
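The arguments above repeatedly verify the same condition: no edge may run from an earlier matched left vertex (under the total order) to the matched partner of a later one. The following sketch (our own illustration, assuming exactly this convention; it is not code from the paper) makes the check concrete for a bipartite graph:

```python
# Sketch (our own): the semi-induced check used in the proofs above.
# edges: set of (left, right) pairs; matching: list of matched (left, right)
# pairs, assumed to already be a matching; sigma: dict mapping left endpoints
# to their positions in the total order. A violation is an edge (x_i, y_j)
# between matched pairs with sigma[x_i] < sigma[x_j].

def is_semi_induced(edges, matching, sigma):
    for xi, _ in matching:
        for xj, yj in matching:
            if sigma[xi] < sigma[xj] and (xi, yj) in edges:
                return False  # forward edge between matched pairs
    return True

# Whether a matching is semi-induced depends on the order: with edges
# a-1, a-2, b-2 and matching {(a, 1), (b, 2)}, putting a before b creates a
# forward edge (a, 2), while putting b before a does not.
edges = {("a", 1), ("a", 2), ("b", 2)}
matching = [("a", 1), ("b", 2)]
assert not is_semi_induced(edges, matching, {"a": 0, "b": 1})
assert is_semi_induced(edges, matching, {"a": 1, "b": 0})
```

Under this convention, an induced matching forbids such edges in both directions, so every induced matching is semi-induced for any order; this matches the inequality $im(G) \le sim(G)$ implicit in the yes/no-instance bounds above.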