Recognition of unipolar and generalised split graphs
Colin McDiarmid [email protected]
Nikola Yolov [email protected]
November 30, 2017
Abstract
A graph is unipolar if it can be partitioned into a clique and a disjoint union of cliques, and a graph is a generalised split graph if it or its complement is unipolar. A unipolar partition of a graph can be used to find efficiently the clique number, the stability number, the chromatic number, and to solve other problems that are hard for general graphs. We present the first O(n²)-time algorithm for recognition of n-vertex unipolar and generalised split graphs, improving on previous O(n³)-time algorithms.

A graph is unipolar if for some k ≥ 0 its vertex set can be partitioned into k + 1 cliques C₀, C₁, . . . , C_k so that there are no edges between C_i and C_j for 1 ≤ i < j ≤ k. A graph G is a generalised split graph if either G or its complement Ḡ is unipolar. All generalised split graphs are perfect; and Prömel and Steger [PS92] show that almost all perfect graphs are generalised split graphs. Perfect graphs can be recognised in polynomial time [CLV03] and [CCL+05]. Unipolar graphs can also be recognised in polynomial time: the known algorithms are [TC85] with O(n³) running time, [CH12] with O(nm) time and [EW14] with O(nm + nm′) time, where n and m are respectively the number of vertices and edges of the input graph, and m′ is the number of edges added after a triangulation of the input graph. Note that almost all unipolar graphs and almost all generalised split graphs have (1 + o(1))n²/4 edges. Since G is a generalised split graph iff G or Ḡ is unipolar, each of the mentioned algorithms above recognises generalised split graphs in O(n³) time. The algorithm in this paper has running time O(n²). This leads to polynomial-time algorithms for the problems mentioned above (stable set, clique, colouring and so on) which have O(n^2.5) expected running time for a random perfect graph R_n, and an exponentially small probability of exceeding this time bound. Here we assume that R_n is sampled uniformly from the perfect graphs on vertex set [n] = {1, 2, . . . , n}.

We use V(G), E(G), v(G) and e(G) to denote V, E, |V| and |E| for a graph G = (V, E).
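As a concrete illustration of these definitions, the following minimal Python sketch (with hypothetical helper names, not part of the algorithm developed below) checks whether a proposed partition is a unipolar representation of a small example graph:

```python
from itertools import combinations

def is_clique(adj, S):
    """Every two distinct vertices of S are adjacent."""
    return all(v in adj[u] for u, v in combinations(S, 2))

def is_unipolar_representation(adj, V0, V1_parts):
    """V0 must be a clique, each part of V1 a clique, and there must be
    no edges between different parts of V1."""
    if not is_clique(adj, V0):
        return False
    if not all(is_clique(adj, P) for P in V1_parts):
        return False
    return not any(v in adj[u]
                   for P, Q in combinations(V1_parts, 2)
                   for u in P for v in Q)

# Central clique {0, 1}; side cliques {2, 3} and {4}.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 4}, 2: {0, 1, 3}, 3: {0, 2}, 4: {0, 1}}
print(is_unipolar_representation(adj, {0, 1}, [{2, 3}, {4}]))  # True
```

This brute-force check takes quadratically many adjacency tests per pair of parts and is only meant to make the definition concrete.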
We let N(v) denote the neighbourhood of a vertex v, and let N⁺(v) denote N(v) ∪ {v}, also called the closed neighbourhood of v. If G = (V, E) and S ⊆ V, then G[S] denotes the subgraph induced by S. Let GS⁺ be the set of all unipolar graphs and let GS be the set of all generalised split graphs.

Fix G = (V, E) ∈ GS⁺. If V₀, V₁ ⊆ V with V₀ ∩ V₁ = ∅ and V₀ ∪ V₁ = V are such that V₀ is a clique and G[V₁] is a disjoint union of cliques, then the ordered pair (V₀, V₁) will be called a unipolar representation of G, or just a representation of G. For each unipolar representation R = (V₀, V₁) we call V₀ the central clique of R, and we call the maximal cliques of G[V₁] the side cliques of R. A graph is unipolar iff it has a unipolar representation.

Definition 1.1.
Let R = (V₀, V₁) be a unipolar representation of a graph G. A partition B of V(G) is a block decomposition of G with respect to R if the intersection of each part of B with V₁ is either a side clique or ∅.

Assume that G is an input graph throughout. The algorithm for recognising unipolar graphs has three stages. In the first stage a sufficiently large maximal independent set is found. The second stage constructs a partition B of V(G), such that B is a block decomposition for some unipolar representation if G ∈ GS⁺. The third stage generates a 2-CNF formula which is satisfiable iff B is a block decomposition for some unipolar representation. The formula is constructed in such a way that a satisfying assignment of the variables corresponds to a representation of G, and the algorithm returns either a representation of G, or reports G ∉ GS⁺.

We describe the third stage first (in §
2) as it is short and includes a natural transformation to 2-SAT. In §3 we describe the first stage, and in §4 the second. Finally, in §
5 we briefly discuss random perfect graphs and algorithms for them, using the algorithm described above.
The most commonly used data type for this algorithm is the set. We assume that the operation A ∩ B takes O(min(|A|, |B|)) time, A ∪ B takes O(|A| + |B|) time, A \ B takes O(|A|) time and a ∈ A takes O(1) time. These properties can be achieved by using hash tables to implement sets. Functions will always be of the form f : [m] → A for some m, where [m] = {1, 2, . . . , m}. Therefore functions can be implemented with simple arrays, hence the lookup and assignment operations are assumed to require O(1) time.

Let x₁, . . . , x_n be n boolean variables. A 2-clause is an expression of the form y₁ ∨ y₂, where each y_j is a variable, x_i, or the negation of a variable, ¬x_i. There are 4n² possible 2-clauses. The problem of deciding whether or not a formula of the form ψ = ∃x₁∃x₂ . . . ∃x_n (c₁ ∧ c₂ ∧ . . . ∧ c_m), where each c_j is a 2-clause, is satisfiable is called 2-SAT. The problem 2-SAT is solvable in O(n + m) time [EIS76, APT79], where n is the number of variables and m is the number of clauses in the input formula.

Let G = (V, E) be a graph with vertex set V = [n]. In this subsection we show how to test if a partition of V is a block decomposition for some unipolar representation, in which case we must have G ∈ GS⁺. Let B be the partition of V we want to test. From each block of B, we seek to pick out some vertices to form the central clique V₀ of a representation, with the remaining vertices in the blocks forming the side cliques. Suppose that |B| = m, and B is represented by a surjective function f : V → [m], so that B = {f⁻¹[i] : i ∈ [m]}. Let {x_v : v ∈ V} be boolean variables. We use the procedure verify to construct a formula ψ(x₁, . . . , x_n), so that each satisfying assignment of {x_v} corresponds to a representation of G.

Procedure verify(G, f):
    ψ := ∃x₁∃x₂ . . .
    ∃x_n (
    for {u, v} ∈ V^(2) do
        if uv ∈ E then
            if f(u) = f(v) then do nothing
            else ψ := ψ ∧ (x_u ∨ x_v)
            end if
        else
            if f(u) = f(v) then ψ := ψ ∧ (x_u ∨ x_v) ∧ (¬x_u ∨ ¬x_v)
            else ψ := ψ ∧ (¬x_u ∨ ¬x_v)
            end if
        end if
    end for
    return ψ )

There is an exception: the first time a clause is added to ψ it should be added without the preceding sign for conjunction. The following lemma is easy to check.

Lemma 2.1.
The formula ψ is satisfiable iff B is a block decomposition for some representation. Indeed an assignment Φ : {x_v : v ∈ V} → {0, 1} satisfies ψ if and only if R = (V₀, V₁) is a representation of G and B is a block decomposition of G with respect to R, where V_i = {v ∈ V : Φ(x_v) = 1 − i}.

Proof. Suppose Φ is a satisfying assignment and let V₀, V₁ be as above. If u and v are both in V₀, then uv ∈ E, since otherwise ψ contains the clause ¬x_u ∨ ¬x_v. If u and v are both in V₁, then either uv ∈ E and f(u) = f(v), or uv ∉ E and f(u) ≠ f(v), because in the other two cases ψ contains the clause x_u ∨ x_v. This means that the vertices in V₁ are grouped into cliques by their value of f. For the other direction, it is sufficient to verify that each generated clause is satisfied, which is a routine check.

At most a constant number of operations are performed per pair {u, v}, so O(n²) time is spent preparing ψ. The formula ψ can have at most 2 clauses per pair {u, v}, so the length of ψ is also O(n²), and since 2-SAT can be solved in linear time, the total time for this step is O(n²).

Let α(G) be the maximum size of an independent set in a graph G. Let G ∈ GS⁺ and let R be a unipolar representation of G. Observe that for any representation R of G, the number s(R, G) of side cliques satisfies s(R, G) ≤ α(G) ≤ s(R, G) + 1. We deduce that for every two representations R₁ and R₂ of G we have |s(R₁, G) − s(R₂, G)| ≤ 1. If G is K_n or its complement, then the number s(R, G) depends on R. However, this is not necessarily the case for all graphs, see Figure 1. It can be shown that the number of n-vertex unipolar graphs with a unique representation is (1 − e^(−Θ(n)))|GS⁺_n|, and that the number of n-vertex unipolar graphs G with a unique representation R such that s(R, G) = α(G) is (1 − O(e^(−n^δ)))|GS⁺_n| for a constant δ > 0.

It is well known that calculating α(G) for a general graph is NP-hard.
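The clause construction of the procedure verify above can be sketched in Python as follows (a minimal illustration with hypothetical names; it returns the clause list rather than a quantified formula, and checks satisfiability by brute force instead of with a linear-time 2-SAT algorithm):

```python
from itertools import combinations, product

def verify_clauses(adj, f):
    """Build the 2-SAT clauses generated by verify: the literal (v, True)
    stands for x_v (v goes to the central clique), (v, False) for its
    negation."""
    clauses = []
    for u, v in combinations(sorted(f), 2):
        if v in adj[u]:                    # uv is an edge
            if f[u] != f[v]:               # different blocks: one must be central
                clauses.append([(u, True), (v, True)])
        else:                              # uv is a non-edge
            if f[u] == f[v]:               # same block: exactly one central
                clauses.append([(u, True), (v, True)])
                clauses.append([(u, False), (v, False)])
            else:                          # different blocks: not both central
                clauses.append([(u, False), (v, False)])
    return clauses

def satisfiable(variables, clauses):
    """Brute-force satisfiability check, adequate only for tiny examples."""
    for bits in product([False, True], repeat=len(variables)):
        phi = dict(zip(variables, bits))
        if all(any(phi[v] == want for v, want in c) for c in clauses):
            return True
    return False

# Blocks {0, 2, 3} and {1, 4} for a small unipolar graph:
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 4}, 2: {0, 1, 3}, 3: {0, 2}, 4: {0, 1}}
f = {0: 0, 2: 0, 3: 0, 1: 1, 4: 1}
print(satisfiable(sorted(f), verify_clauses(adj, f)))  # True
```

Here the satisfying assignment x₀ = x₁ = True corresponds, as in Lemma 2.1, to the representation with central clique {0, 1}.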
For G ∈ GS⁺ let s(G) = max_R s(R, G), where the maximum is over all representations R of G. For G ∉ GS⁺ set s(G) = 0. In this section we see how to find a maximal independent set I, such that if G ∈ GS⁺, then |I| ≥ s(G) (≥ α(G) − 1).

Figure 1: For the graph on the left, s(R, G) = α(G) for all representations R; for the graph on the right, α(G) = s(R, G) + 1 for all representations R.

The idea is to start with G and with I = ∅; and as long as the remaining graph has two non-adjacent vertices, say v₁ and v₂, pick r = 1 or 2 of these vertices to add to I, and delete from G the closed neighbourhood of the added vertices. We do this in such a way that a given representation R of G yields a representation with r fewer side cliques, or (only when r = 2) with one fewer side clique and the central clique removed.

Procedure indep(G):
    I := ∅, U := V(G)
    while U ≠ ∅ do
        if G[U] is complete then
            pick an arbitrary u ∈ U
            return I ∪ {u}
        else
            pick u₁, u₂ ∈ U, so that u₁u₂ ∉ E(G)
            U₁ := (N⁺(u₁) \ N⁺(u₂)) ∩ U
            U₂ := (N⁺(u₂) \ N⁺(u₁)) ∩ U
            if G[U₁] is complete then
                if G[U₂] is complete then
                    I := I ∪ {u₁, u₂}
                    U := U \ (N⁺(u₁) ∪ N⁺(u₂))
                else
                    I := I ∪ {u₁}
                    U := U \ N⁺(u₁)
                end if
            else
                if G[U₂] is complete then
                    I := I ∪ {u₂}
                    U := U \ N⁺(u₂)
                else
                    I := I ∪ {u₁, u₂}
                    U := U \ (N⁺(u₁) ∪ N⁺(u₂))
                end if
            end if
        end if
    end while
    return I

Observe that the main body of the indep(G) procedure is a while loop. An alternative way of seeing the algorithm is that instead of the loop there is a recursive call to indep(G[U]) at the end of the iteration, and the procedure returns the union of the vertices found during this iteration and the recursively retrieved set. A recursive interpretation is clearer to work with for inductive proofs.

Lemma 3.1.
Procedure indep(G) always returns a maximal independent set I.

Proof. This is easy to see, since each vertex deleted from U is adjacent to a vertex put in I.

Lemma 3.2. If indep(G) returns I, then |I| ≥ s(G).

Proof. If G ∉ GS⁺, then the statement holds, because s(G) = 0. From now on assume that G ∈ GS⁺. We argue by induction on v(G). It is trivial to see that the lemma holds for v(G) = 1. Let v(G) > 1. If G is complete, then |I| = 1 = s(G). Fix an arbitrary unipolar representation R of G. We show that |I| ≥ s(R, G). If G is not complete, then the procedure selects two non-adjacent vertices u and v. The vertices u and v are either in different side cliques, or one of them is in the central clique and the other is in a side clique.

We start with the case when u and v are contained in side cliques. After inspecting their neighbourhoods, the algorithm removes from U either one or both of them along with their neighbourhood. Suppose that it removes r of them, where r is 1 or 2. Let G′ and R′ be the graph and the representation induced by the remaining vertices. By the induction hypothesis, if I′ is the recursively retrieved set, then |I′| ≥ s(R′, G′). Let I be the independent set returned at the end of the algorithm, so that |I′| + r = |I|. Both u and v see all the vertices in their corresponding side cliques, and see no vertices from different side cliques, so after removing r of them with their neighbours, the number of side cliques in the representation decreases by precisely r, and hence s(R, G) = s(R′, G′) + r. Now |I| = |I′| + r ≥ s(R′, G′) + r = s(R, G).

Now w.l.o.g. assume that u belongs to a side clique and v belongs to the central clique. Then N⁺(u) contains the side clique of u and perhaps parts of the central clique. Therefore, N⁺(u) \ N⁺(v) is a subset of the side clique of u, and hence it is a clique.
If N⁺(v) \ N⁺(u) is not a clique, then the algorithm continues recursively with G[V \ N⁺(u)]; and using the same arguments as above with r = 1, we guarantee correct behaviour. Now assume that N⁺(v) \ N⁺(u) is a clique. Then N⁺(v) \ N⁺(u) can intersect at most one side clique, because the vertices in different side cliques are not adjacent. In this case N⁺(v) ∪ N⁺(u) completely covers the side clique of u, completely covers the central clique, and it may intersect one additional side clique. Hence s(R, G) = s(R′, G′) + 1 or s(R, G) = s(R′, G′) + 2, where G′ and R′ are the induced graph and representation after the removal of N⁺(v) ∪ N⁺(u). If I′ is the recursively obtained independent set, from the induction hypothesis we deduce that |I| = |I′| + 2 ≥ s(R′, G′) + 2 ≥ s(R, G).

In this form the algorithm takes more than O(n²) time, because checking whether an induced subgraph is complete is slow. However, we can maintain a set of vertices, C, which we have seen to induce a complete graph. We will create an efficient procedure to check if a subgraph is complete, and to return some additional information to be used for future calls if the subgraph is not complete.

Procedure antiedge(G, U, C):
    C′ := C
    for v ∈ U \ C do
        if C′ \ N⁺(v) = ∅ then
            C′ := C′ ∪ {v}
        else
            let u ∈ C′ \ N⁺(v)
            return (uv, C′)
        end if
    end for
    return (False, C′)

The following lemma summarises the behaviour of antiedge.

Lemma 3.3.
Let C ⊆ U and suppose that G[C] is complete. If G[U] is complete, then antiedge(G, U, C) returns (False, U); if not, then it returns (uv, C′) such that

1. uv ∈ C′ × (U \ C′) − E(G), i.e. u ∈ C′, v ∈ U \ C′, uv ∉ E(G),
2. C ⊆ C′ ⊆ U,
3. G[C′] is complete.

Proof. Easy checking.
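A direct Python rendering of antiedge is straightforward (hypothetical names; sets as Python sets, with None playing the role of False in the pseudocode):

```python
def antiedge(adj, U, C):
    """Sketch of antiedge: grow the known clique C while scanning U \\ C;
    on the first vertex v missing some u of the current clique, return
    the non-edge (u, v). The non-edge is None iff G[U] is complete."""
    Cp = set(C)
    for v in U - Cp:
        missing = Cp - (adj[v] | {v})      # C' \ N+(v)
        if missing:
            return ((next(iter(missing)), v), Cp)
        Cp.add(v)
    return (None, Cp)

# A triangle plus an isolated vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
print(antiedge(adj, {0, 1, 2}, set()))   # (None, {0, 1, 2})
e, C = antiedge(adj, {0, 1, 3}, {0})     # reports a non-edge ending at 3
```

The second call illustrates Lemma 3.3: the returned pair crosses from the grown clique C′ to U \ C′ and is a non-edge of the graph.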
Procedure indep(G):
    I := ∅; U := V(G)
    e := False; C := ∅
    (e, C) := antiedge(G, U, C)
    while U ≠ ∅ do
        if e = False then
            pick an arbitrary u ∈ U
            return I ∪ {u}
        else
            assume that e = u₁u₂; u₁, u₂ ∈ U
            U₁ := (N⁺(u₁) \ N⁺(u₂)) ∩ U
            U₂ := (N⁺(u₂) \ N⁺(u₁)) ∩ U
            (e₁, C₁) := antiedge(G, U₁, C ∩ U₁)
            (e₂, C₂) := antiedge(G, U₂, C ∩ U₂)
            if e₁ = False then
                if e₂ = False then
                    I := I ∪ {u₁, u₂}
                    U := U \ (N⁺(u₁) ∪ N⁺(u₂))
                    (e, C) := antiedge(G, U, ∅)
                else
                    I := I ∪ {u₁}
                    U := U \ N⁺(u₁)
                    C := C₂; e := e₂
                end if
            else
                if e₂ = False then
                    I := I ∪ {u₂}
                    U := U \ N⁺(u₂)
                    C := C₁; e := e₁
                else
                    I := I ∪ {u₁, u₂}
                    U := U \ (N⁺(u₁) ∪ N⁺(u₂))
                    (e, C) := antiedge(G, U, ∅)
                end if
            end if
        end if
    end while
    return I

Lemma 3.4.
Let U and C be the sets stored in the respective variables at the beginning of an iteration of the main loop of the modified indep, and let U′ and C′ be the sets stored at the beginning of the next iteration, if the algorithm does not terminate meanwhile. The following loop invariants hold:

(I1) G[C] is complete and C ⊆ U,
(I2) U′ \ C′ ⊆ U \ C.

A loop invariant is a condition which is true at the beginning of each iteration of a loop.
Proof.
Observe that the initial values of U and C, which are V and ∅ respectively, guarantee by Lemma 3.3 that the values after the call to antiedge satisfy condition (I1). Therefore (I1) holds for the first iteration. Concerning future iterations, observe that (I1) guarantees the precondition of Lemma 3.3, which in turn guarantees (I1) for the next iteration. We deduce that (I1) does indeed give a loop invariant. By proving this we have proved that the preconditions of Lemma 3.3 are always met; and so we can use Lemma 3.3 throughout.

If e = False, then there is no next iteration, hence condition (I2) is automatically correct. Now assume that e = u₁u₂. Depending on e₁ and e₂ there are two cases for how many vertices are excluded.

Case 1: one vertex is excluded. W.l.o.g. assume that u₁ is excluded, so U′ = U \ N⁺(u₁), C ∩ U₂ ⊆ C′ and C ⊆ N⁺(u₁) ∪ N⁺(u₂). Then

U′ \ C′ ⊆ U′ \ (C ∩ U₂)
       = (U \ N⁺(u₁)) \ (C ∩ (N⁺(u₂) \ N⁺(u₁)) ∩ U)
       = U \ [N⁺(u₁) ∪ (C ∩ (N⁺(u₂) \ N⁺(u₁)))]
       = U \ [N⁺(u₁) ∪ C] ⊆ U \ C.

Case 2: two vertices are excluded. Now U′ = U \ (N⁺(u₁) ∪ N⁺(u₂)), and U′ \ C′ ⊆ U′ ⊆ U \ C. We have shown that condition (I2) holds at the start of the next iteration, and so it gives a loop invariant as claimed.

A vertex v is absorbed if it is processed during the loop of antiedge and then appended to the result set, C′.

Corollary 3.5.
A vertex can be absorbed at most once.

Proof.
Let
U, C, U′ and C′ be as before. Observe that if, during the iteration, vertex v is absorbed in a call to antiedge, then v ∈ U \ C and v ∉ U′ \ C′. The corollary now follows from the second invariant in Lemma 3.4.

Lemma 3.6.
The procedure indep(G) using antiedge takes O(n²) time.

Proof. The total running time of each iteration of the main loop of indep(G), besides calling antiedge, is O(n). The set U decreases by at least one vertex on each iteration, so the time spent outside of antiedge is O(n²).

From Corollary 3.5 at most n vertices are absorbed, and O(n) steps are performed each time, so in total O(n²) time is spent in all calls to antiedge for absorbing vertices.

Assume that v is processed in antiedge for the first time, but it is not absorbed and it is tested against a set C₁. Since v is not absorbed, we may assume that antiedge has returned the pair of vertices vu. At least one of u and v is removed (along with its neighbourhood) from U and moved to I. If v is removed from U, then no more time can be spent on it in antiedge, hence the total time spent on v in antiedge is O(n). Now assume that u is removed. We have that C₁ ⊆ N⁺(u) and each vertex in N⁺(u) is removed from U. Hence, if v is processed again in antiedge, it will be tested against a set C₂ with C₁ ∩ C₂ = ∅, and therefore |C₁| + |C₂| = |C₁ ∪ C₂| = O(n). As we saw before, if v is absorbed or removed from U, then it cannot be processed again in antiedge; and thus the running time spent on v is again O(n). If v is not removed from U, then C₂ is removed from U. Hence, if v is processed again in antiedge, it is tested against a set C₃ with |C₁| + |C₂| + |C₃| = |C₁ ∪ C₂ ∪ C₃| = O(n), and so on. Thus, we see that over all these tests, each vertex is tested at most once for adjacency to v, and so the total time spent on v is O(n).

In this subsection we present a short algorithm for creating a partition of V(G) using an independent set I, and then checking if this partition is a block decomposition using the procedure verify from Section 2.
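Before turning to that, the first stage just analysed can be sketched compactly in Python (hypothetical names; for brevity this sketch re-runs antiedge from scratch each iteration instead of threading the (e, C) pair between iterations, so it illustrates the branching logic of indep rather than the O(n²) bound):

```python
def antiedge(adj, U, C):
    """Return (non-edge, grown clique); the non-edge is None iff G[U] is complete."""
    Cp = set(C)
    for v in U - Cp:
        missing = Cp - (adj[v] | {v})
        if missing:
            return ((next(iter(missing)), v), Cp)
        Cp.add(v)
    return (None, Cp)

def nplus(adj, u):
    """Closed neighbourhood N+(u)."""
    return adj[u] | {u}

def indep(adj):
    """First stage sketch: a maximal independent set I with |I| >= s(G)
    whenever G is unipolar (Lemmas 3.1 and 3.2)."""
    I, U = set(), set(adj)
    while U:
        e, _ = antiedge(adj, U, set())
        if e is None:                          # G[U] is complete
            I.add(next(iter(U)))
            return I
        u1, u2 = e
        U1 = (nplus(adj, u1) - nplus(adj, u2)) & U
        U2 = (nplus(adj, u2) - nplus(adj, u1)) & U
        e1, _ = antiedge(adj, U1, set())
        e2, _ = antiedge(adj, U2, set())
        if (e1 is None) == (e2 is None):       # both complete, or neither: take both
            I |= {u1, u2}
            U -= nplus(adj, u1) | nplus(adj, u2)
        elif e1 is None:                       # only U1 is complete: take u1
            I.add(u1)
            U -= nplus(adj, u1)
        else:                                  # only U2 is complete: take u2
            I.add(u2)
            U -= nplus(adj, u2)
    return I

# The unipolar graph with central clique {0, 1} and side cliques {2, 3}, {4}.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 4}, 2: {0, 1, 3}, 3: {0, 2}, 4: {0, 1}}
print(sorted(indep(adj)))
```

On this example s(G) = 2 and the returned set has at least two vertices; the charging argument of Lemma 3.6 is exactly what reusing the (e, C) state between iterations buys over this simplified variant.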
Procedure test(G, I):
    U := V(G); t := 0; f := ∅
    for i ∈ I do
        for v ∈ N⁺(i) ∩ U do f(v) := t end for
        U := U \ N⁺(i)
        t := t + 1
    end for
    for v ∈ U do f(v) := t end for
    return verify(G, f)

Lemma 4.1.
Suppose that I ⊆ V(G) is an independent set with |I| ≥ s(G) − 1 and V₀ ∩ I = ∅ for some unipolar representation R = (V₀, V₁) of G. Then test(G, I) returns True.

Proof.
On each step of the main loop a vertex i ∈ I is selected. Since V₀ ∩ I = ∅, the vertex i is a part of some side clique, say C. Now C ∩ N⁺(j) = ∅ for each j ∈ I \ {i}, so C ⊆ U. Also C ⊆ N⁺(i), and hence C ⊆ N⁺(i) ∩ U. Vertex i does not see vertices from other side cliques, so N⁺(i) ∩ U is correctly marked as a separate block. Since |I| ≥ s(G) −
1, at most one side clique is not represented in I. If there is an unrepresented side clique, say S, then none of the previously created blocks can claim any vertex from it, and hence S ⊆ U. We have shown that when the main loop ends, either U ∩ V₁ = ∅ or U ∩ V₁ is a side clique; so U is correctly marked as a separate block. The set U also contains all remaining vertices, so f is a partition of V into blocks, and hence verify(G, f) will return
True.

By Lemmas 3.1 and 3.2, indep(G) returns a maximal independent set I of size at least s(G). Thus, Lemma 4.1 suggests a naive algorithm for recognition of GS⁺ – try test(G, I \ {i}) for each i ∈ I and return True if any attempt succeeds. The proposed algorithm is correct, since |I ∩ V₀| ≤
1. The running time is O(|I|n²) = O(n³), while we aim for O(n²). However, with relatively little effort we can localise I ∩ V₀ to at most 2 candidates from I.

Procedure blocks(G, I):
    C := I
    for v ∈ V(G) do
        if |N⁺(v) ∩ I| = 2 then C := C ∩ N⁺(v) end if
    end for
    if |C| = 1 then
        return test(G, I \ C)
    else if |C| = 2 then
        assume that C = {c₁, c₂}
        return test(G, I \ {c₁}) ∨ test(G, I \ {c₂})
    else
        return test(G, I)
    end if

Procedure recognise(G):
    return blocks(G, indep(G))

Lemma 4.2.
The procedure recognise(G) returns True iff G ∈ GS⁺.

Proof. First assume that G ∈ GS⁺ and let R = (V₀, V₁) be an arbitrary representation of G. Let I = indep(G). By Lemma 3.2, |I| ≥ s(G) ≥ s(R, G). Since V₀ is a clique and I is an independent set, we have |V₀ ∩ I| ≤ 1.

Case 1: V₀ ∩ I = ∅. Observe that blocks returns test(G, I′), where I′ is either I or I \ {v} for some v ∈ I, hence |I′| ≥ |I| − 1 ≥ s(G) − 1. Also I′ ⊆ I ⊆ V₁, so test(G, I′) = True from Lemma 4.1.

Case 2: V₀ ∩ I = {c}. Blocks starts by calculating the set C, where C = I if there is no v ∈ V with |N⁺(v) ∩ I| = 2, and otherwise C = ⋂{N⁺(v) ∩ I : v ∈ V(G), |N⁺(v) ∩ I| = 2}. Assume that |N⁺(v) ∩ I| = 2 for some v ∈ V. If v ∈ V₀, then c ∈ N⁺(v), because V₀ is a clique. If v ∈ V₁, then N⁺(v) can contain at most one vertex from I ∩ V₁ and at most one vertex from I ∩ V₀ = {c}, and since |N⁺(v) ∩ I| = 2, we have c ∈ N⁺(v). So for each v ∈ V with |N⁺(v) ∩ I| = 2 we have c ∈ N⁺(v) ∩ I, and hence c belongs to the intersection. If no v ∈ V exists with |N⁺(v) ∩ I| = 2, then C = I, but c ∈ I, so again c ∈ C. We deduce that if V₀ ∩ I = {c}, then c ∈ C and |C| >
0. If |C| = 1 or |C| = 2, then test(G, I \ {i}) is run individually for each vertex i ∈ C; but c ∈ C and test(G, I \ {c}) = True by Lemma 4.1. If |C| >
2, then there is no v ∈ V with |N⁺(v) ∩ I| = 2. Either |I| = s(R, G) or |I| = s(R, G) + 1, so either all side cliques are represented by vertices of I, or at most one is not represented, say S. We can handle both cases simultaneously by saying that S = ∅ in the former case. We have that I is a maximal independent set, but no vertex of I \ {c} can see a vertex of S, because they belong to different side cliques, so c is connected to all vertices of S and therefore {c} ∪ S is a clique. Let T = N(c) ∩ (V₁ \ S). Then |N⁺(v) ∩ I| = 2 for each v ∈ T, but no such vertex exists by assumption, so T = ∅. Now N(c) ∩ V₁ = S, and G[V₁] is a union of disjoint cliques, so G[V₁ ∪ {c}] is also a union of disjoint cliques. Hence R′ = (V₀ \ {c}, V₁ ∪ {c}) is a representation of G, so from Lemma 4.1 test(G, I) =
True.

Conversely, if
G ∉ GS⁺, then there is no representation for G, hence test cannot generate a block decomposition of G, and therefore test will return False.

Lemma 4.3. recognise(G) takes O(n²) time.

Proof. The procedure test loops over a subset of V and intersects two subsets of V, so the time for each step is bounded by O(n), and since the number of steps is O(n), O(n²) time is spent in the loop. Then it performs one more operation in O(n) time, so the total time spent for preparation is O(n²). Then test calls verify, which takes O(n²) time, so the total running time of test is O(n²). While building C, blocks handles O(n) sets of size O(n), so it spends O(n²) time in the first stage. Depending on the size of C, blocks calls test once or twice, but in both cases it takes O(n²) time, so the total running time of blocks is O(n²). The total time spent for recognition is the time spent for blocks plus the time spent for indep, and since both are O(n²), the total running time for recognition is O(n²).

Grötschel, Lovász, and Schrijver [GLS84] show that the stable set problem, the clique problem, the colouring problem, the clique covering problem and their weighted versions are computable in polynomial time for perfect graphs. The algorithms rely on the Lovász sandwich theorem, which states that for every graph G we have ω(G) ≤ ϑ(Ḡ) ≤ χ(G), where ϑ(G) is the Lovász number. The Lovász number can be approximated via the ellipsoid method in polynomial time, and for perfect graphs we know that ω(G) = χ(G), hence ϑ(Ḡ) is an integer and its precise value can be found. Therefore χ(G) and ω(G) can be found in polynomial time for perfect graphs, though these are NP-hard problems for general graphs. Further, α(G) and χ̄(G) (the clique covering number) can be computed from the complement of G (which is perfect).
The weighted versions of these parameters can be found in a similar way using the weighted version of the Lovász number, ϑ_w(G). These results tell us more about computational complexity than algorithm design in practice. On the other hand, the problems above are much more easily solvable for generalised split graphs. We know that the vast majority of the n-vertex perfect graphs are generalised split graphs [PS92]. One can first test if the input perfect graph is a generalised split graph using the algorithm in this paper, and if so, apply a more efficient solution.

Eschen and Wang [EW14] show that, given a generalised split graph G with n vertices together with a unipolar representation of G or Ḡ, we can efficiently solve each of the following four problems: find a maximum clique, find a maximum independent set, find a minimum colouring, and find a minimum clique cover. It is sufficient to show that this is the case when G is unipolar, as otherwise we can solve the complementary problem in the complement of G.

Finding a maximum size stable set and a minimum clique cover in a unipolar graph is equivalent to determining whether there exists a vertex in the central clique such that no side clique is contained in its neighbourhood, which is trivial and can be done very efficiently. Suppose there are k side cliques. If there is such a vertex v, then a maximum size stable set (of size k + 1) consists of v and, from each side clique, a vertex not adjacent to v, and a minimum size clique cover is formed by the central clique and the k side cliques. If not, then a maximum size stable set (of size k) consists of a vertex from each side clique, and a minimum clique cover is formed by extending the k side cliques to maximal cliques (which then cover the central clique).

Let us focus on finding a maximum clique and a minimum colouring of a unipolar graph G with a representation R. If R contains k side cliques, C₁, . . . ,
C_k, then ω(G) = χ(G) = max_{1≤i≤k} ω(G[C₀ ∪ C_i]) = max_{1≤i≤k} χ(G[C₀ ∪ C_i]), where C₀ is the central clique. Therefore, in order to find a maximum clique or a minimum colouring, it is sufficient to solve the corresponding problem in each of the co-bipartite graphs induced by the central clique and a side clique. The vertices outside a clique in a co-bipartite graph form a cover in the complementary bipartite graph, and the vertices coloured with the same colour in a proper colouring of a co-bipartite graph form a matching in the complementary bipartite graph. By König's theorem it is easy to find a minimum cover using a given maximum matching, and therefore finding a maximum clique and a minimum colouring in a co-bipartite graph is equivalent to finding a maximum matching in the complementary bipartite graph. For colourings, we explicitly find a minimum colouring in each co-bipartite graph G[C₀ ∪ C_i], and such colourings can be fitted together using no more colours, since C₀ is a clique cutset. Assume that the complementary bipartite graph of G[C₀ ∪ C_i] contains n_i vertices and m_i edges, so each n_i ≤ n and Σ_i m_i ≤ |C₀|(n − |C₀|) ≤ n²/
4. We could use the Hopcroft–Karp algorithm for maximum matching, which runs in O((|E| + |V|)√|V|) time, to obtain the time bound Σ_i O((m_i + n_i)√n_i) = O((n² + Σ_i m_i)√n) = O(n^2.5).

The approach of Eschen and Wang [EW14] is very similar, and they give more details, but unfortunately there is a mistake in their analysis, and a corrected version of their analysis yields O(n^2.5/log n) time, instead of the smaller bound claimed there. In order to see the mistake, consider the case when the input graph is a split graph with an equitable partition.

Given a random perfect graph R_n, we run our recognition algorithm in time O(n²). If we have a generalised split graph, with a representation, we solve each of our four optimisation problems in time O(n^2.5); if not, which happens with probability e^(−Ω(n)), we run the methods from [GLS84]. This simple idea yields a polynomial-time algorithm for each problem with low expected running time, and indeed the probability that the time bound is exceeded is exponentially small.

References

[APT79] Bengt Aspvall, Michael F. Plass, and Robert Endre Tarjan. A linear-time algorithm for testing the truth of certain quantified boolean formulas.
Inf. Process. Lett., 8(3):121–123, 1979.

[CCL+
05] Maria Chudnovsky, Gérard Cornuéjols, Xinming Liu, Paul Seymour, and Kristina Vušković. Recognizing Berge graphs.
Combinatorica, 25(2):143–186, 2005.

[CH12] Ross Churchley and Jing Huang. Solving partition problems with colour-bipartitions.
Graphs and Combinatorics, pages 1–12, 2012.

[CLV03] Gérard Cornuéjols, Xinming Liu, and Kristina Vušković. A polynomial algorithm for recognizing perfect graphs. In
Foundations of Computer Science, 2003. Proceedings. 44th Annual IEEE Symposium on, pages 20–27. IEEE, 2003.

[EIS76] S. Even, A. Itai, and A. Shamir. On the complexity of timetable and multicommodity flow problems.
SIAM Journal on Computing, 5(4):691–703, 1976.

[EW14] Elaine M. Eschen and Xiaoqiang Wang. Algorithms for unipolar and generalized split graphs.
Discrete Appl. Math., 162:195–201, January 2014.

[GLS84] Martin Grötschel, László Lovász, and Alexander Schrijver. Polynomial algorithms for perfect graphs.
North-Holland Mathematics Studies, 88:325–356, 1984.

[MY16] Colin McDiarmid and Nikola Yolov. Random perfect graphs.
In preparation, 2016+.

[PS92] Hans Jürgen Prömel and Angelika Steger. Almost all Berge graphs are perfect.
Combinatorics, Probability and Computing, 1(1):53–79, 1992.

[TC85] R. I. Tyshkevich and A. A. Chernyak. Algorithms for the canonical decomposition of a graph and recognizing polarity.