Hopf-algebraic deformations of products and Wick polynomials
Kurusch Ebrahimi-Fard, Frédéric Patras, Nikolas Tapia, Lorenzo Zambotti
Abstract.
We present an approach to cumulant–moment relations and Wick polynomials based on extensive use of convolution products of linear functionals on a coalgebra. This allows us, in particular, to understand the construction of Wick polynomials as the result of a Hopf algebra deformation under the action of linear automorphisms induced by multivariate moments associated to an arbitrary family of random variables with moments of all orders. We also generalize the notion of deformed product in order to discuss how these ideas appear in the recent theory of regularity structures.
Keywords: cumulant–moment relations; Wick polynomials; Hopf algebras; convolution products; regularity structures.
MSC Classification: (Primary) 16T05; 16T15; 60C05; (Secondary) 16T30; 60H30.

1. Introduction
Chaos expansions and Wick products have notoriously been thought of as key steps in the renormalization process in perturbative quantum field theory (QFT). The technical reason for this is that they allow one to remove contributions to amplitudes (say, probability transitions between two physical states) that come from so-called diagonal terms, from which divergences in the calculation of those amplitudes may originate. Rota and Wallstrom [19] addressed these issues from a strictly combinatorial point of view using, in particular, the structure of the lattice of set partitions. These are the same techniques that are currently used intensively in the approach by Peccati and Taqqu in the context of Wiener chaos and related phenomena. We refer to their book [16] for a detailed study and the classical results on the subject, as well as for a comprehensive bibliography and historical survey. Recently, the interest in the fine structure of cumulants and Wick products for non-Gaussian variables has been revived, since they both play important roles in M. Hairer's theory of regularity structures [10]. See, for instance, references [6, 9]. The progress in these works relies essentially on describing the underlying algebraic structures in a transparent way. Indeed, the combinatorial complexity of the corresponding renormalization process requires the introduction of group-theoretical methods such as, for instance, renormalization group actions and comodule Hopf algebra structures [3].
Date: August 27, 2018.
Another reference of interest on generalized Wick polynomials, in view of the forthcoming developments, is the recent paper [13]. Starting from these remarks, in this article we shall discuss algebraic constructions related to moment–cumulant relations as well as Wick products, using Hopf algebra techniques. A key observation, which seems to be new in spite of being elementary and powerful, relates to the interpretation of multivariate moments of a family of random variables as a linear form on a suitable Hopf algebra. It turns out that the operation of convolution with this linear form encodes much of the theory of Wick products and polynomials. This approach enlightens the classical theory, as various structure theorems in the theory of chaos expansions follow immediately from elementary Hopf algebraic constructions, and are therefore given a group-theoretical meaning by the latter. Our methods should be compared with the combinatorial approach in [16]. Our approach has been partially motivated by similarities with methods that have been developed for bosonic and fermionic Fock spaces by C. Brouder et al. [1, 2] to deal with interacting fields and non-trivial vacua in perturbative QFT. This is not surprising since, whereas the combinatorics of Gaussian families is reflected in the computation of averages of creation and annihilation operators over the vacuum in QFT, combinatorial properties of non-Gaussian families correspond instead to averages over non-trivial vacua. The main idea of this paper is that the coproduct of a bialgebra allows one to deform the product, and that this permits to encode interesting constructions such as generalized Wick polynomials. In the last sections of this paper, we show how the above ideas can be used in more general contexts, which include regularity structures. Regarding the latter, we mention that these ideas have been used and greatly expanded in a series of recent papers [10, 3, 6] on renormalization of regularity structures.
These papers handle products of random distributions which can be ill-defined and need to be renormalized. The procedure is rather delicate since the renormalization, which we rather call deformation in this paper, must preserve other algebraic and analytical structures. Without explaining in detail the rather complex constructions appearing in [10, 3, 6], we describe how one can formalize this deformed (renormalized) product of distributions by means of a comodule structure.

1.1. Generalized Wick polynomials.
The main results of the first part of this paper (Theorems 5.1 and 5.3) are multivariate generalizations of the following statements for a single real-valued random variable $X$ with finite moments of all orders. We denote by $H := \mathbb{R}[x]$ the algebra of polynomials in the variable $x$, endowed with the standard product
$$x^n \cdot x^m := x^{n+m}, \qquad (1)$$
for $n, m \geq 0$.
We equip $H$ with the cocommutative coproduct $\Delta : H \to H \otimes H$ defined by
$$\Delta x^n := \sum_{k=0}^{n} \binom{n}{k}\, x^{n-k} \otimes x^k. \qquad (2)$$
Product (1) and coproduct (2) together define a connected graded commutative and cocommutative bialgebra, and therefore a Hopf algebra structure on $H$. On the dual space $H^*$ a dual product $\alpha \star \beta \in H^*$ can be defined in terms of (2):
$$(\alpha \star \beta)(x^n) := (\alpha \otimes \beta)\,\Delta x^n, \qquad (3)$$
for $\alpha, \beta \in H^*$. This product is commutative and associative, and the space $G(H) := \{\lambda \in H^* : \lambda(1) = 1\}$ forms a group for this multiplication law. We define the functional $\mu \in H^*$ given by $\mu(x^n) := \mu_n = \mathbb{E}(X^n)$. Then $\mu \in G(H)$ and therefore its inverse $\mu^{-1}$ in $G(H)$ is well defined.

Theorem 1.1 (Wick polynomials). We define $W := \mu^{-1} \star \mathrm{id} : H \to H$, i.e., the linear operator such that
$$W(x^n) = (\mu^{-1} \otimes \mathrm{id})\,\Delta x^n = \sum_{k=0}^{n} \binom{n}{k}\, \mu^{-1}(x^{n-k})\, x^k. \qquad (4)$$
Then
⋄ $W : H \to H$ is the only linear operator such that
$$W(1) = 1, \qquad \frac{d}{dx} \circ W = W \circ \frac{d}{dx}, \qquad \mu(W(x^n)) = 0, \qquad (5)$$
for all $n \geq 1$.
⋄ $W : H \to H$ is the only linear operator such that for all $n \geq 0$
$$x^n = (\mu \otimes W)\,\Delta x^n = \sum_{k=0}^{n} \binom{n}{k}\, \mu(x^{n-k})\, W(x^k).$$

We call $W(x^n) \in H$ the Wick polynomial of degree $n$ associated to the law of $X$. If $X$ is a standard Gaussian random variable then the recurrence (5) shows that $W(x^n)$ is the Hermite polynomial $H_n$. Therefore (4) gives an explicit formula for such generalized Wick polynomials in terms of the inverse $\mu^{-1}$ of the linear functional $\mu$ in the group $G(H)$. The Wick polynomial map $W$ permits to define a deformation of the Hopf algebra $H$.

Theorem 1.2.
The linear operator $W : H \to H$ has a composition inverse $W^{-1} : H \to H$ given by $W^{-1} = \mu \star \mathrm{id}$. If we define for $n, m \geq 0$ the product
$$x^n \cdot_\mu x^m := W\big(W^{-1}(x^n) \cdot W^{-1}(x^m)\big),$$
and define similarly a twisted coproduct $\Delta_\mu$, then $H$ endowed with $\cdot_\mu$, $\Delta_\mu$ and $\varepsilon_\mu := \mu$ is a bicommutative Hopf algebra. The map $W$ becomes an isomorphism of Hopf algebras. In particular
$$W(x^{n_1 + \cdots + n_k}) = W(x^{n_1}) \cdot_\mu W(x^{n_2}) \cdot_\mu \cdots \cdot_\mu W(x^{n_k}),$$
for all $n_1, \ldots, n_k \in \mathbb{N}$.

We recall that in the case of a single random variable $X$ with finite moments of all orders, the sequence $(\kappa_n)_{n \geq 1}$ of cumulants of $X$ is defined by the following formal power series relation between exponential generating functions:
$$\exp\Big( \sum_{n \geq 1} \frac{t^n}{n!}\, \kappa_n \Big) = \sum_{n \geq 0} \frac{t^n}{n!}\, \mu_n, \qquad (6)$$
where $t$ is a formal variable and $\mu_n = \mathbb{E}(X^n)$ is the $n$th-order moment of $X$. Note that $\mu_0 = 1$ and $\kappa_0 = 0$. Equation (6) is equivalent to the classical recursion
$$\mu_n = \sum_{m=1}^{n} \binom{n-1}{m-1}\, \kappa_m\, \mu_{n-m}. \qquad (7)$$
In fact, equation (6) together with (7) provide the definition of the classical Bell polynomials, which, in turn, are closely related to the Faà di Bruno formula [17]. We will show multivariate generalizations of the following formulae, which express Hopf algebraically the moment–cumulant relations.

Theorem 1.3.
Setting $\mu, \kappa \in H^*$, $\mu(x^n) := \mu_n$ and $\kappa(x^n) := \kappa_n$, $n \geq 0$, we have the relations
$$\mu = \exp^\star(\kappa) := \varepsilon + \sum_{n \geq 1} \frac{1}{n!}\, \kappa^{\star n}, \qquad (8)$$
$$\kappa = \log^\star(\mu) := \sum_{n \geq 1} \frac{(-1)^{n-1}}{n}\, (\mu - \varepsilon)^{\star n}, \qquad (9)$$
where $\varepsilon(x^k) := 1_{(k=0)}$.

Note that the $n$-fold convolution product $\kappa^{\star n} = \kappa \star \cdots \star \kappa$ ($n$ times) is well-defined, as the convolution product defined in (3) is associative. The above formulae (8) and (9) are Hopf-algebraic interpretations of the classical Leonov–Shiryaev relations [12], see (11) and (12) below.

1.2. Deformation of products.
Our Theorem 1.2 above introduces the idea of a deformed product $\cdot_\mu$ in a polynomial algebra. This idea is used in a very important way in the recent theory of regularity structures [10, 3, 6], which is based on products of random distributions, i.e. of generalized functions on $\mathbb{R}^d$. Such products are in fact ill-defined and need to be renormalized; this operation corresponds algebraically to a deformation of the standard pointwise product, and is achieved through a comodule structure which extends the coproduct (2) to a much larger class of generalized monomials. In the last sections of this paper we extend the notion of a deformed product to more general comodules and we discuss one important and instructive example, the space of decorated rooted trees endowed with the extraction–contraction operator. This setting is relevant for branched rough paths [8], and constitutes a first step towards the more complex framework of regularity structures [3]. We hope that this discussion may help the algebraically minded reader become more familiar with a theory which combines probability, analysis and algebra in a very deep and innovative way.

1.3. Organisation of the paper.
In Section 2 we briefly review classical multivariate moment–cumulant relations. Section 3 provides an interpretation of these relations in a Hopf-algebraic context. In Section 4 we extend the previous approach to generalized Wick polynomials. Section 5 is devoted to Hopf algebra deformations, which are applied to Wick polynomials in Section 6. In Section 7 still another interpretation of Wick polynomials in terms of a suitable comodule structure is introduced. Section 8 explains the deformation of the pointwise product on functions. Section 9 addresses the problem of extending our results to Hopf algebras of non-planar decorated rooted trees. It prepares for Section 10, which briefly outlines the idea of applying the Hopf algebra approach to cumulants and Wick products in the context of regularity structures.

Apart from the basic definitions in the theory of coalgebras and Hopf algebras, for which we refer the reader to P. Cartier's Primer on Hopf algebras [5], this article aims at being a self-contained reference on cumulants and Wick products both for probabilists and for algebraists interested in probability. We have therefore detailed proofs and constructions, even those that may seem obvious to experts from one of these two fields. For convenience, and in view of applications to scalar real-valued random variables, we fix the field of real numbers $\mathbb{R}$ as ground field. Notice however that the algebraic results and constructions in the article depend only on the ground field being of characteristic zero.

Acknowledgements: The second author acknowledges support from the CARMA grant ANR-12-BS01-0017, "Combinatoire Algébrique, Résurgence, Moules et Applications". The third author was partially supported by the CONICYT/Doctorado Nacional/2013-21130733 doctoral scholarship and acknowledges support from the "Fondation Sciences Mathématiques de Paris". The fourth author acknowledges support of the ANR project ANR-15-CE40-0020-01 grant LSD.

2. Joint cumulants and moments
We start by briefly reviewing classical multivariate moment–cumulant relations.

2.1. Cumulants.
If we have a finite family of random variables $(X_a, a \in S)$ such that $X_a$ has finite moments of all orders for every $a \in S$, then the analogue of the exponential formula (6) holds:
$$\exp\Big( \sum_{\mathrm{n} \in \mathbb{N}^S} \frac{t^{\mathrm{n}}}{\mathrm{n}!}\, \kappa_{\mathrm{n}} \Big) = \sum_{\mathrm{n} \in \mathbb{N}^S} \frac{t^{\mathrm{n}}}{\mathrm{n}!}\, \mu_{\mathrm{n}}, \qquad (10)$$
where $\mu_{\mathrm{n}} := \mathbb{E}(X^{\mathrm{n}})$. Here we use multivariable notation, i.e., with $\mathbb{N} := \{0, 1, 2, 3, \ldots\}$ and $(t_a, a \in S)$ commuting variables, we define for $\mathrm{n} = (\mathrm{n}_a, a \in S) \in \mathbb{N}^S$
$$t^{\mathrm{n}} := \prod_{a \in S} (t_a)^{\mathrm{n}_a}, \qquad X^{\mathrm{n}} := \prod_{a \in S} (X_a)^{\mathrm{n}_a}, \qquad \mathrm{n}! := \prod_{a \in S} (\mathrm{n}_a)!,$$
and we use the conventions $(t_a)^0 := 1$, $(X_a)^0 := 1$. This defines in a unique way the family $(\kappa_{\mathrm{n}}, \mathrm{n} \in \mathbb{N}^S)$ of joint cumulants of $(X_a, a \in S)$ once the family of corresponding joint moments $(\mu_{\mathrm{n}}, \mathrm{n} \in \mathbb{N}^S)$ is given. When it is necessary to specify the dependence of $\kappa_{\mathrm{n}}$ on $(X_a, a \in S)$ we shall write $\kappa_{\mathrm{n}}(X)$, and similarly for $\mu_{\mathrm{n}}$.

Identifying a subset $B \subseteq S$ with its indicator function $1_B \in \{0,1\}^S$, we can use the notation $\kappa_B$ and $\mu_B$ for the corresponding joint cumulants and moments. The families $(\kappa_B, B \subseteq S)$ and $(\mu_B, B \subseteq S)$ satisfy the so-called Leonov–Shiryaev relations [12, 20]:
$$\mu_B = \sum_{\pi \in \mathcal{P}(B)} \prod_{C \in \pi} \kappa_C, \qquad (11)$$
$$\kappa_B = \sum_{\pi \in \mathcal{P}(B)} (|\pi| - 1)!\, (-1)^{|\pi| - 1} \prod_{C \in \pi} \mu_C, \qquad (12)$$
where we write $\mathcal{P}(B)$ for the set of all set partitions of $B$, namely, all collections $\pi$ of subsets (blocks) of $B$ such that $\cup_{C \in \pi} C = B$ and the elements of $\pi$ are pairwise disjoint; moreover $|\pi|$ denotes the number of blocks of $\pi$, which is finite since $B$ is finite. Formulae (11) and (12) have been intensively studied from a combinatorial perspective, see, e.g., [16, Chapter 2]. Regarding the properties of cumulants we refer the reader to [20]. Formula (11) has in fact been adopted, for instance, in [9] as a recursive definition for $(\kappa_B, B \subseteq S)$. This approach does indeed determine the cumulants uniquely by induction over the cardinality $|B|$ of the finite set $B$.
This follows from the right-hand side containing $\kappa_B$, which is what we want to define, as well as $\kappa_C$ for some $C$ with $|C| < |B|$, which have already been defined in lower order. Although this recursive approach seems less general than the one via exponential generating functions as in (10), since it forces us to consider only $\mathrm{n} \in \{0,1\}^S$, it turns out that they are equivalent. Indeed, replacing $(X_a, a \in S)$ with $(Y_b, b \in S \times \mathbb{N}^*)$, where $Y_b := X_a$ for $b = (a, k) \in S \times \mathbb{N}^*$, we have for $\mathrm{n} \in \mathbb{N}^S$
$$\kappa_{\mathrm{n}}(X) = \kappa_B(Y), \qquad B = \{(a, k) : a \in S,\ 1 \leq k \leq \mathrm{n}(a)\}.$$
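The relations (11) and (12) are straightforward to implement by brute force. The following minimal sketch (our own Python code, not part of the paper; all function names are ours) enumerates set partitions and converts between joint moments and joint cumulants:

```python
from math import factorial, prod

def partitions(s):
    """Generate all set partitions of the list s, each as a list of blocks."""
    if len(s) <= 1:
        yield [s]
        return
    first, rest = s[0], s[1:]
    for p in partitions(rest):
        for i in range(len(p)):                  # put `first` into an existing block
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p                      # or open a new singleton block

def moment(B, kappa):
    """First Leonov-Shiryaev relation (11): mu_B as a sum over partitions
    of products of cumulants of the blocks."""
    return sum(prod(kappa(C) for C in p) for p in partitions(B))

def cumulant(B, mu):
    """Second Leonov-Shiryaev relation (12): kappa_B with coefficient
    (|pi| - 1)! (-1)^(|pi| - 1) in front of each partition pi."""
    return sum(factorial(len(p) - 1) * (-1) ** (len(p) - 1) * prod(mu(C) for C in p)
               for p in partitions(B))
```

As an illustration, take $(Y_1, Y_2, Y_3)$ all equal to a Poisson random variable with parameter 2, so that every joint cumulant equals 2 while $\mathbb{E}(X) = 2$, $\mathbb{E}(X^2) = 6$, $\mathbb{E}(X^3) = 22$; then `cumulant([1, 2, 3], lambda C: {1: 2, 2: 6, 3: 22}[len(C)])` returns 2, and `moment([1, 2, 3], lambda C: 2)` recovers 22.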
In this paper we show that the Leonov–Shiryaev relations (11)-(12) have an elegant Hopf-algebraic interpretation which also extends to Wick polynomials. Notice that a different algebraic interpretation of (11)-(12) has been given in terms of Möbius calculus [16, 20]. Moreover, the idea of writing moment–cumulant relations in terms of convolution products is closely related to Rota's umbral calculus [11, 18].

3. From cumulants to Hopf algebras
In this section we explain how classical moment–cumulant relations can be encoded using Hopf algebra techniques. These results may be folklore among followers of Rota's combinatorial approach to probability, and, as we have already alluded to, the literature actually contains various other algebraic descriptions of moment–cumulant relations (via generating series as well as more sophisticated approaches in terms of umbral calculus, tensor algebras and set partitions). Our approach is the one best suited to our later applications, i.e., the Hopf algebraic study of Wick products. Since these ideas do not seem to be well known to probabilists, we believe that they deserve a detailed presentation.

3.1. Moment–cumulant relations via multisets.
Throughout the paper we consider a fixed collection of real-valued random variables $X = \{X_a\}_{a \in \mathcal{A}}$ defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ for an index set $\mathcal{A}$. We suppose that $X_a$ has finite moments of all orders for every $a \in \mathcal{A}$. We do not assume that $\mathcal{A}$ is finite, but moments and cumulants will be defined only for finite subfamilies. We extend the setting of (10), where $S$ was a finite set, by defining $M(\mathcal{A}) \subset \mathbb{N}^{\mathcal{A}}$ as the set of all finitely supported functions $B : \mathcal{A} \to \mathbb{N}$. In the case of $B \in M(\mathcal{A}) \cap \{0,1\}^{\mathcal{A}}$, we have that $B$ is the indicator function of a finite set $S(B)$, namely the support of $B$. For a general $C \in M(\mathcal{A})$, we can identify the finite set $S(C)$ given by the support of $C$, and then $C(a) \geq 1$ is the multiplicity of $a \in S(C)$ in $C$ viewed as a multiset.

This multiset context is motivated by the following natural definition for $B \in M(\mathcal{A})$:
$$X^\emptyset := 1, \qquad X^B := \prod_{a \in \mathcal{A},\, B(a) > 0} (X_a)^{B(a)}. \qquad (13)$$
For all $B \in M(\mathcal{A})$ we also set
$$|B| := \sum_{a \in \mathcal{A}} B(a) < +\infty.$$
The set $M(\mathcal{A})$ is a poset for the partial order defined by $B \leq B'$ if and only if $B(a) \leq B'(a)$ for all $a$ in $\mathcal{A}$. Moreover, it is a commutative monoid for the product
$$(A \cdot B)(a) := A(a) + B(a), \qquad a \in \mathcal{A}, \qquad (14)$$
for $A, B$ in $M(\mathcal{A})$, i.e., the map $(A, B) \mapsto A \cdot B$ is associative and commutative. The set $M(\mathcal{A})$ is actually the free commutative monoid generated by the indicator functions of the one-element sets $\{a\}$, $a \in \mathcal{A}$ (with neutral element the indicator function of the empty set).

Definition 3.1.
We call $H$ the vector space freely generated by the set $M(\mathcal{A})$.

As a vector space, $H$ is isomorphic to the algebra of polynomials over the set of (commuting) variables $x_a$, $a \in \mathcal{A}$ (the isomorphism given by mapping $B \in M(\mathcal{A})$ to the monomial $\prod_{a \in \mathcal{A}} x_a^{B(a)}$). Moreover the product (14) is motivated, using the notation (13), by
$$X^{A \cdot B} = X^A X^B, \qquad A, B \in M(\mathcal{A}),$$
and is therefore the multivariate analogue of (1). For $n \geq 1$ and $B, B_1, \ldots, B_n \in M(\mathcal{A})$ we set
$$\binom{B}{B_1 \ldots B_n} := 1_{(B_1 \cdot B_2 \cdots B_n = B)} \prod_{a \in \mathcal{A}} \frac{B(a)!}{B_1(a)! \cdots B_n(a)!},$$
where $B_1 \cdot B_2 \cdots B_n$ is the product of $B_1, \ldots, B_n$ in $M(\mathcal{A})$ using the multiplication law defined in (14). Note that for a given $B \in M(\mathcal{A})$, there exist only finitely many $B_1, \ldots, B_n \in M(\mathcal{A})$ such that $B_1 \cdot B_2 \cdots B_n = B$.

Definition 3.2. For every $B \in M(\mathcal{A})$, we define the cumulant $\mathbb{E}_c(X^B)$ inductively over $|B|$ by $\mathbb{E}_c(X^\emptyset) := 0$ and else
$$\mathbb{E}\big(X^B\big) = \sum_{n=1}^{|B|} \frac{1}{n!} \sum_{B_1, \ldots, B_n \in M(\mathcal{A}) \setminus \{\emptyset\}} \binom{B}{B_1 \ldots B_n} \prod_{i=1}^{n} \mathbb{E}_c\big(X^{B_i}\big). \qquad (15)$$

Remark 3.3. If $B \in M(\mathcal{A}) \cap \{0,1\}^{\mathcal{A}}$, then (15) reduces to the first Leonov–Shiryaev relation (11), since on the right-hand side of (15) the $B_1, \ldots, B_n \in M(\mathcal{A})$ are also in $\{0,1\}^{\mathcal{A}}$ and in particular the binomial coefficient (when non-zero) is equal to 1.

As we will show in (17) below, expression (15) is equivalent to the usual formal power series definition of cumulants (whose exponential generating series is the logarithm of the exponential generating series of moments). As for (11), expression (15) does indeed determine the cumulants uniquely by induction over $|B|$. This is because the right-hand side only involves $\mathbb{E}_c(X^B)$, which is what we want to define, as well as $\mathbb{E}_c(X^{\bar{B}})$ for some $\bar{B}$ with $|\bar{B}| < |B|$, which is already defined by the inductive hypothesis.

3.2. Exponential generating functions.
Define two linear functionals on $H$:
$$\mu : H \to \mathbb{R}, \quad \mu(A) := \mathbb{E}(X^A), \qquad \kappa : H \to \mathbb{R}, \quad \kappa(A) := \mathbb{E}_c(X^A), \qquad (16)$$
where $A \in M(\mathcal{A})$, $\mu(\emptyset) := 1$ and $\kappa(\emptyset) := 0$. Let us fix a finite subset $S = \{a_1, \ldots, a_p\} \subset \mathcal{A}$. For $B \in M(S)$ we set
$$t^B := \prod_{i=1}^{p} (t_i)^{B(a_i)},$$
where the $t_i$ are commuting variables. Then we define for $B \in M(\mathcal{A})$ the factorial
$$B! := \prod_{a \in \mathcal{A}} (B(a))!,$$
and the exponential generating function of $\lambda \in H^*$ (seen as a formal power series in the variables $t_i$)
$$\varphi_\lambda(t, S) := \sum_{B \in M(S)} \frac{t^B}{B!}\, \lambda(B).$$
Then from Definition 3.2 we get the usual exponential relation between the exponential moment and cumulant generating functions of $\mu$ and $\kappa$, analogous to (6) and (10):
$$\varphi_\mu(t, S) = \sum_{B \in M(S)} \frac{t^B}{B!}\, \mu(B) = \sum_{n \geq 0} \frac{1}{n!} \sum_{B \in M(S)} \sum_{B_1, \ldots, B_n \in M(S)} \frac{1}{B!} \binom{B}{B_1 \ldots B_n} \prod_{i=1}^{n} \big( t^{B_i} \kappa(B_i) \big)$$
$$= \sum_{n \geq 0} \frac{1}{n!} \sum_{B_1, \ldots, B_n \in M(S)} \prod_{i=1}^{n} \Big( \frac{t^{B_i}}{B_i!}\, \kappa(B_i) \Big) = \sum_{n \geq 0} \frac{1}{n!} \Big( \sum_{B \in M(S)} \frac{t^B}{B!}\, \kappa(B) \Big)^n = \exp(\varphi_\kappa(t, S)). \qquad (17)$$
From (17) we obtain another recursive relation between moments and cumulants. Let us set $1_{(a)}(b) := 1_{(a=b)}$ for $a, b \in \mathcal{A}$. Then we have
$$\mu(A \cdot 1_{(a)}) = \sum_{B_1, B_2 \in M(\mathcal{A})} \binom{A}{B_1 B_2}\, \kappa(B_1 \cdot 1_{(a)})\, \mu(B_2). \qquad (18)$$
This recursion is the multivariate analogue of the one in (7).

3.3. Moment–cumulant relations and Hopf algebras.
We now endow the space $H$ from Definition 3.1 with the commutative and associative product $\cdot : H \otimes H \to H$ induced by the monoid structure of $M(\mathcal{A})$ defined in (14). The unit element is the null function $\emptyset$ (we will also write abusively $\emptyset$ for the unit map, the embedding of $\mathbb{R}$ into $H$: $\lambda \mapsto \lambda\,\emptyset$). We also define a coproduct $\Delta : H \to H \otimes H$ on $H$ by
$$\Delta A := \sum_{B_1, B_2 \in M(\mathcal{A})} \binom{A}{B_1 B_2}\, [B_1 \otimes B_2], \qquad (19)$$
recall (2). The counit $\varepsilon : H \to \mathbb{R}$ is defined by $\varepsilon(A) = 1_{(A = \emptyset)}$ and turns $H$ into a coassociative counital coalgebra. Coassociativity $(\Delta \otimes \mathrm{id})\Delta = (\mathrm{id} \otimes \Delta)\Delta$ follows from the associativity of the monoid $M(\mathcal{A})$:
$$(\Delta \otimes \mathrm{id})\Delta A = \sum_{(B_1 \cdot B_2) \cdot B_3 = A} \binom{A}{B_1 B_2 B_3}\, [B_1 \otimes B_2 \otimes B_3] = \sum_{B_1 \cdot (B_2 \cdot B_3) = A} \binom{A}{B_1 B_2 B_3}\, [B_1 \otimes B_2 \otimes B_3] = (\mathrm{id} \otimes \Delta)\Delta A.$$

Proposition 3.4. $H$ is a commutative and cocommutative bialgebra and, since $H$ is graded by $|A|$ as well as connected, it is a Hopf algebra.

Proof. Indeed, we already noticed that $H$ is isomorphic as a vector space to the polynomial algebra, denoted $P$, generated by commuting variables $x_a$, $a \in \mathcal{A}$. The latter is uniquely equipped with a bialgebra and Hopf algebra structure by requiring the $x_a$ to be primitive elements, that is, by defining a coproduct map $\Delta_P : P \to P \otimes P$ such that it is an algebra map and $\Delta_P(x_a) = x_a \otimes 1 + 1 \otimes x_a$. Recall that since $P$ is a polynomial algebra, these two conditions define $\Delta_P$ uniquely. The antipode is the algebra endomorphism of $P$ induced by $S(x_a) := -x_a$. We let the reader check that the natural isomorphism between $H$ and $P$ is an isomorphism of algebras and maps $\Delta$ to $\Delta_P$. The proposition follows. □

Recall that the dual of a coalgebra is an algebra, which is associative (resp. unital, commutative) if the coalgebra is coassociative (resp. counital, cocommutative). In particular, the dual $H^*$ of $H$ is equipped with an associative and commutative unital product written $\star$, defined for all $f, g \in H^*$ and $A \in H$ by:
$$(f \star g)(A) := (f \otimes g)\,\Delta A. \qquad (20)$$
The unit of this product is the augmentation map $\varepsilon$. For later use, we also mention that the associative product defined in (20) extends to linear endomorphisms $f, g$ of $H$, as well as to the product of a linear form on $H$ with a linear endomorphism of $H$, by the same defining formula. We denote $\Delta^0 := \mathrm{id} : H \to H$, $\Delta^1 := \Delta : H \to H \otimes H$, and for $n \geq 2$
$$\Delta^n := (\Delta \otimes \mathrm{id})\Delta^{n-1} : H \to H^{\otimes(n+1)}.$$

Proposition 3.5.
We have
$$\mu = \exp^\star(\kappa) = \varepsilon + \sum_{n \geq 1} \frac{1}{n!}\, \kappa^{\star n}. \qquad (21)$$
Proof.
By the definitions of $\Delta$ and $\Delta^n$ we find that
$$\kappa^{\otimes n}\,\Delta^{n-1} A = \sum_{B_1, \ldots, B_n \in M(\mathcal{A})} \binom{A}{B_1 \ldots B_n} \prod_{i=1}^{n} \kappa(B_i),$$
and, by (15), this yields the result. □

Remark 3.6. Formula (21) is the Hopf-algebraic analogue of the first Leonov–Shiryaev relation (11).

Since $\kappa(\emptyset) = 0$, $\kappa^{\otimes n}\Delta^{n-1}(A)$ vanishes whenever $n > |A|$. Similarly, under the same assumption, $(\mu - \varepsilon)^{\star n}(A) = 0$. It follows that one can handle formal series identities such as $\log^\star(\exp^\star(\kappa)) = \kappa$ or $\exp^\star(\log^\star(\mu)) = \mu$ without facing convergence issues. In particular:

Proposition 3.7.
We have
$$\kappa = \log^\star(\mu) = \sum_{n \geq 1} \frac{(-1)^{n-1}}{n}\, (\mu - \varepsilon)^{\star n}. \qquad (22)$$
From Proposition 3.7 we obtain the formula
$$\mathbb{E}_c(X^B) = \sum_{n \geq 1} \frac{(-1)^{n-1}}{n} \sum_{B_1, \ldots, B_n \in M(\mathcal{A}) \setminus \{\emptyset\}} \binom{B}{B_1 \ldots B_n} \prod_{i=1}^{n} \mathbb{E}(X^{B_i}), \qquad (23)$$
which may be considered the inverse of (15).

Remark 3.8.
Formula (22) is the Hopf-algebraic analogue of the second Leonov–Shiryaev relation (12). Moreover, for $B \in M(\mathcal{A}) \cap \{0,1\}^{\mathcal{A}}$, (23) also reduces to the second Leonov–Shiryaev relation (12), since on the right-hand side of (23) the $B_1, \ldots, B_n \in M(\mathcal{A})$ are also in $\{0,1\}^{\mathcal{A}}$.
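In the univariate setting of the Introduction, a linear functional on $H = \mathbb{R}[x]$ is determined by its values on $1, x, x^2, \ldots$, and the dual product (3) becomes a binomial convolution, so formula (21) can be checked numerically. A minimal sketch (our own code, not from the paper; the truncation order and names are ours):

```python
from math import comb, factorial

N = 7  # truncation: a functional f is stored as [f(1), f(x), ..., f(x^{N-1})]

def star(f, g):
    """Dual product (3): (f * g)(x^n) = sum_k binom(n, k) f(x^{n-k}) g(x^k)."""
    return [sum(comb(n, k) * f[n - k] * g[k] for k in range(n + 1)) for n in range(N)]

def exp_star(kappa):
    """Formula (21): mu = exp*(kappa) = eps + sum_{n>=1} kappa^{*n} / n!.
    Since kappa(1) = 0, kappa^{*n} vanishes on x^m for m < n, so the sum is finite."""
    eps = [1] + [0] * (N - 1)
    mu, power = eps[:], eps[:]
    for n in range(1, N):
        power = star(power, kappa)  # power = kappa^{*n}
        mu = [m + p / factorial(n) for m, p in zip(mu, power)]
    return mu
```

Applied to the cumulant functional of a standard Gaussian, $\kappa(x^2) = 1$ and $\kappa(x^n) = 0$ otherwise, `exp_star` returns the Gaussian moment sequence $1, 0, 1, 0, 3, 0, 15$.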
If one prefers to work in the combinatorial frame-work of the Leonov–Shiryaev formulae (11)-(12) rather than with (15)-(23),then one may consider the linear span J of M ( A ) ∩ { , } A (namely of allfinite subsets of A , or of their indicator functions).Then J is a linear subspace of H , which is not a sub-algebra of H for theproduct · defined in (14). The coproduct ∆ defined in (19) coacts howevernicely on J since for all finite subsets A of A ∆ A = X B · B = A B ⊗ B ∈ J ⊗ J. Moreover the restriction of ε to J defines a counit for ( J, ∆). Therefore J is a sub-coalgebra of H . With a slight abuse of notation we still write ⋆ forthe dual product on J ∗ ( f ⋆ g )( A ) := ( f ⊗ g )∆ A, for A ∈ J and f, g ∈ J ∗ . If we denote as before µ : J → R A µ ( A ) := E ( X A ) κ : J → R A κ ( A ) := E c ( X A ) , with A ∈ M ( A ) ∩{ , } A , µ ( ∅ ) := 1 and κ ( ∅ ) := 0, then the Leonov–Shiryaevrelations (11)-(12) can be rewritten in J ∗ as, respectively, µ = exp ⋆ ( κ ) = ε + X n ≥ n ! κ ⋆n and κ = log ⋆ ( µ ) = X n ≥ ( − n − n ( µ − ε ) ⋆n . Wick Products
The theory of Wick products, as well as the related notion of chaos decomposition, plays an important role in various fields of applied probability. Both have deep structural features in relation to the fine structure of the algebra of square integrable functions associated to one or several random variables. The aim of this section and the following ones is to revisit the theory on Hopf algebraic grounds. The basic observation is that the formula for the Wick product is closely related to the recursive definition of the antipode in a connected graded Hopf algebra. This approach seems to be new, also from the point of view of concurrent approaches such as umbral calculus [11, 18] or set partition combinatorics à la Rota–Wallstrom [19].

4.1. Wick polynomials.
We are going to use extensively the notion of Wick polynomials for a collection of (not necessarily Gaussian) random variables, which is defined as follows.
Definition 4.1.
Given a collection $X = \{X_a\}_{a \in \mathcal{A}}$ of random variables with finite moments of all orders, for any $A \in M(\mathcal{A})$ the Wick polynomial $:\!X^A\!:$ is a random variable defined recursively by setting $:\!X^\emptyset\!: \,= 1$ and postulating that
$$X^A = \sum_{B_1, B_2 \in M(\mathcal{A})} \binom{A}{B_1 B_2}\, \mathbb{E}(X^{B_1})\; :\!X^{B_2}\!:\,. \qquad (24)$$

As for cumulants, (24) is sufficient to define $:\!X^A\!:$ by recursion over $|A|$. Indeed, the term with $B_2 = A$ (and $B_1 = \emptyset$) is precisely the quantity we want to define, and all other terms only involve Wick polynomials $:\!X^B\!:$ for $B \in M(\mathcal{A})$ with $|B| < |A|$. It is now clear that formula (24) can be lifted to $H$ as
$$A = \sum_{B_1, B_2 \in M(\mathcal{A})} \binom{A}{B_1 B_2}\, \mathbb{E}(X^{B_1})\; :\!B_2\!:\,, \qquad (25)$$
and written in Hopf algebraic terms as follows:
$$A = (\mu \star W)(A) = (\mu \otimes W)\,\Delta A, \qquad (26)$$
for $A \in M(\mathcal{A})$. We have set $W : H \to H$, $W(A) := \,:\!A\!:$, and call $W$ the Wick product map (see Theorem 5.3 for a justification of the terminology). Notice that it depends on the joint distribution of the $X_a$'s. Formula (26) is the Hopf algebraic analogue of the definition of the Wick polynomial $:\!X^B\!:$ used in references [9, 13]. Moreover, introducing the algebra map $\mathrm{ev} : A \mapsto X^A$ from $H$ to the algebra of random variables generated by $(X_a, a \in \mathcal{A})$, one gets by a recursion over $|A|$ that $\mathrm{ev}(:\!A\!:) = \,:\!X^A\!:$ (for that reason, from now on we will, slightly abusively, call both $:\!A\!:$ and the random variable $:\!X^A\!:$ the Wick polynomial associated to $A$).

4.2. A Hopf algebraic construction. We now want to present a closed Hopf algebraic formula for the Wick polynomials introduced in Definition 4.1. We define the set $G(H) := \{\lambda \in H^* : \lambda(\emptyset) = 1\}$. Then it is well known that $G(H)$ is a group for the $\star$-product. Indeed, any $\lambda \in G(H)$ has an inverse $\lambda^{-1}$ in $G(H)$ given by the Neumann series
$$\lambda^{-1} = \sum_{n \geq 0} (\varepsilon - \lambda)^{\star n}. \qquad (27)$$
As usual, this infinite sum defines an element of $H^*$ since, evaluated on any $A \in H$, it reduces to a finite number of terms.

Theorem 4.2.
Let $\mu \in G(H)$ be given by $\mu(A) = \mathbb{E}(X^A)$. Then for all $A \in M(\mathcal{A})$
$$:\!A\!: \,= W(A) = (\mu^{-1} \star \mathrm{id})(A) = (\mu^{-1} \otimes \mathrm{id})\,\Delta A. \qquad (28)$$

Proof.
The identity follows from (26) and from the associativity of the $\star$-product. □

From (27) and (28) we obtain:
Proposition 4.3.
Wick polynomials have the explicit expansion
$$:\!A\!: \,= A + \sum_{n \geq 1} (-1)^n \sum_{B \in M(\mathcal{A})} \sum_{B_1, \ldots, B_n \in M(\mathcal{A}) \setminus \{\emptyset\}} \binom{A}{B_1 \ldots B_n B}\, \mu(B_1) \cdots \mu(B_n)\, B.$$
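In the truncated univariate representation used earlier, the Neumann series (27) and formula (28) give a concrete algorithm for Wick polynomials. A sketch (our own code; the truncation order and names are ours, and `star` is again the binomial convolution implementing (3)):

```python
from math import comb

N = 8  # functionals are stored as value lists [f(1), f(x), ..., f(x^{N-1})]

def star(f, g):
    """Dual product (3): (f * g)(x^n) = sum_k binom(n, k) f(x^{n-k}) g(x^k)."""
    return [sum(comb(n, k) * f[n - k] * g[k] for k in range(n + 1)) for n in range(N)]

def star_inverse(mu):
    """Neumann series (27): mu^{-1} = sum_{n>=0} (eps - mu)^{*n}.
    Since (eps - mu)(1) = 0, only finitely many terms contribute below degree N."""
    eps = [1] + [0] * (N - 1)
    a = [e - m for e, m in zip(eps, mu)]
    inv, power = eps[:], eps[:]
    for _ in range(1, N):
        power = star(power, a)
        inv = [x + p for x, p in zip(inv, power)]
    return inv

def wick(n, mu):
    """Formula (28): W(x^n) = (mu^{-1} (x) id) Delta x^n,
    returned as the coefficient list [c_0, ..., c_n] in the basis x^k."""
    mu_inv = star_inverse(mu)
    return [comb(n, k) * mu_inv[n - k] for k in range(n + 1)]
```

With the moments $1, 0, 1, 0, 3, 0, 15, 0$ of a standard Gaussian one recovers the Hermite polynomials, e.g. `wick(4, mu)` gives $[3, 0, -6, 0, 1]$, i.e. $x^4 - 6x^2 + 3$.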
5. Hopf algebra deformations

The group $G(H) = \{\lambda \in H^* : \lambda(\emptyset) = 1\}$ equipped with the $\star$ product acts canonically on $H$ by means of the map $\phi_\lambda : H \to H$,
$$\phi_\lambda(A) := (\lambda \otimes \mathrm{id})\,\Delta A, \qquad (29)$$
for $\lambda \in G(H)$ and $A \in H$. In other words, $\phi_\lambda = \lambda \star \mathrm{id} = \mathrm{id} \star \lambda$, the latter identity following from the cocommutativity of $\Delta$. This is a group action since one checks easily, using the coassociativity of $\Delta$, that $\phi_{\lambda_1 \star \lambda_2} = \phi_{\lambda_1} \circ \phi_{\lambda_2}$, so that in particular $(\phi_\lambda)^{-1} = \phi_{\lambda^{-1}}$. Being invertible, the maps $\phi_\lambda$ allow one to define deformations of the product $\cdot$ defined in (14), as well as of the coproduct $\Delta$ defined in (19) and of the counit. Namely, we define $\cdot_\lambda : H \otimes H \to H$, $\Delta_\lambda : H \to H \otimes H$ and $\varepsilon_\lambda$ by
$$A \cdot_\lambda B := \phi_\lambda^{-1}\big(\phi_\lambda(A) \cdot \phi_\lambda(B)\big), \qquad \Delta_\lambda A := (\phi_\lambda^{-1} \otimes \phi_\lambda^{-1})\,\Delta\,\phi_\lambda A, \qquad \varepsilon_\lambda(A) := \varepsilon \circ \phi_\lambda(A) = \lambda(A). \qquad (30)$$
Although $\varepsilon_\lambda = \lambda$, we find it useful to introduce the notation $\varepsilon_\lambda$ to feature the new role of $\lambda$ as a counit. Notice that, as $\lambda(\emptyset) = 1$, we have $\phi_\lambda(\emptyset) = \emptyset$ and $A \cdot_\lambda \emptyset = A$. Dually,
$$(\varepsilon_\lambda \otimes \mathrm{id}) \circ \Delta_\lambda(A) = (\varepsilon \otimes \phi_\lambda^{-1})\,\Delta\,\phi_\lambda(A) = A.$$
Theorem 5.1.
For any $\lambda \in G(H)$, the quintuple $(H, \cdot_\lambda, \emptyset, \Delta_\lambda, \varepsilon_\lambda)$ defines a Hopf algebra. The map $\phi_\lambda^{-1} : (H, \cdot, \emptyset, \Delta, \varepsilon) \to (H, \cdot_\lambda, \emptyset, \Delta_\lambda, \varepsilon_\lambda)$ is an isomorphism of Hopf algebras.

Proof. Although the theorem follows directly from the properties of conjugacy, we detail the proof. Associativity of $\cdot_\lambda$ and coassociativity of $\Delta_\lambda$ follow directly. First,
$$(A \cdot_\lambda B) \cdot_\lambda C = \phi_\lambda^{-1}\big(\phi_\lambda(A) \cdot \phi_\lambda(B) \cdot \phi_\lambda(C)\big) = A \cdot_\lambda (B \cdot_\lambda C),$$
which shows associativity. Coassociativity is simple to see as well:
$$(\Delta_\lambda \otimes \mathrm{id})\Delta_\lambda A = (\phi_\lambda^{-1} \otimes \phi_\lambda^{-1} \otimes \phi_\lambda^{-1})(\Delta \otimes \mathrm{id})\Delta\,\phi_\lambda A = (\phi_\lambda^{-1} \otimes \phi_\lambda^{-1} \otimes \phi_\lambda^{-1})(\mathrm{id} \otimes \Delta)\Delta\,\phi_\lambda A = (\mathrm{id} \otimes \Delta_\lambda)\Delta_\lambda A.$$
We now check the compatibility relation between $\cdot_\lambda$ and $\Delta_\lambda$:
$$(\Delta_\lambda A) \cdot_\lambda (\Delta_\lambda B) = (\phi_\lambda^{-1} \otimes \phi_\lambda^{-1})\Big( \big((\phi_\lambda \otimes \phi_\lambda)\Delta_\lambda A\big) \cdot \big((\phi_\lambda \otimes \phi_\lambda)\Delta_\lambda B\big) \Big) = (\phi_\lambda^{-1} \otimes \phi_\lambda^{-1})\big( (\Delta\,\phi_\lambda A) \cdot (\Delta\,\phi_\lambda B) \big)$$
$$= (\phi_\lambda^{-1} \otimes \phi_\lambda^{-1})\,\Delta(\phi_\lambda A \cdot \phi_\lambda B) = \Delta_\lambda(A \cdot_\lambda B).$$
Finally, we check that $\phi_\lambda^{-1} : (H, \cdot, \emptyset, \Delta, \varepsilon) \to (H, \cdot_\lambda, \emptyset, \Delta_\lambda, \varepsilon_\lambda)$ is a bialgebra morphism:
$$\phi_\lambda^{-1}(A \cdot B) = \phi_\lambda^{-1}(A) \cdot_\lambda \phi_\lambda^{-1}(B), \qquad (\phi_\lambda^{-1} \otimes \phi_\lambda^{-1})\,\Delta A = \Delta_\lambda\,\phi_\lambda^{-1} A.$$
We have proved up to now that $\phi_\lambda^{-1}$ is an isomorphism of bialgebras. Since $(H, \cdot_\lambda, \emptyset, \Delta_\lambda, \varepsilon_\lambda)$ is a graded connected bialgebra, it has an antipode. Since moreover the antipode of a Hopf algebra is unique, we obtain that $\phi_\lambda^{-1}$ preserves the antipode as well. □

Remark 5.2.
The construction (30) and Theorem 5.1 also work if we replace $\phi_\lambda$ by any invertible linear map $\phi : H \to H$ such that $\phi(\emptyset) = \emptyset$. Indeed, in the above considerations we have never used the formula (29) which defines $\phi_\lambda$.

In the particular case $\lambda = \mu$, where $\mu$ is the moment functional defined in (16), we obtain by Theorem 4.2 and Theorem 5.1:

Theorem 5.3.
The Wick product map $W(A) = \,:\!A\!:$ is equal to $\phi_{\mu^{-1}} = \phi_\mu^{-1}$. Therefore $W : (H, \cdot, \emptyset, \Delta, \varepsilon) \to (H, \cdot_\mu, \emptyset, \Delta_\mu, \varepsilon_\mu)$ is a Hopf algebra isomorphism; in particular
$$:\!A_1 \cdot A_2\!: \,=\, :\!A_1\!: \,\cdot_\mu\, :\!A_2\!:\,,$$
for $A_1, A_2 \in H$. More generally, we obtain for any $A_1, \ldots, A_n \in H$ that
$$:\!A_1 \cdots A_n\!: \,=\, :\!A_1\!: \,\cdot_\mu \cdots \cdot_\mu\, :\!A_n\!:\,. \qquad (31)$$

We notice at last an interesting additional result expressing abstractly compatibility relations between the two Hopf algebra structures on $H$ (see also Proposition 7.3 below). We recall that a linear space $M$ is a left comodule over the coalgebra $(H, \Delta, \varepsilon)$ if there is a linear map $\rho : M \to H \otimes M$ such that
$$(\Delta \otimes \mathrm{id}_M)\,\rho = (\mathrm{id}_H \otimes \rho)\,\rho, \qquad (\varepsilon \otimes \mathrm{id}_M)\,\rho = \mathrm{id}_M. \qquad (32)$$
A left comodule endomorphism of $M$ is then a linear map $f : M \to M$ such that
$$\rho \circ f = (\mathrm{id}_H \otimes f)\,\rho.$$
In particular the coalgebra $(H, \Delta, \varepsilon)$ is a left comodule over itself, with $\rho = \Delta$.

Proposition 5.4.
If we consider H as a left comodule over itself, then φ_λ is a left comodule morphism for every linear λ : H → R, namely

∆ φ_λ = (id ⊗ φ_λ)∆.   (33)

In particular the Wick product map W is a left comodule endomorphism of (H, ∆, ε).

Proof. We have

∆ φ_λ = (λ ⊗ id ⊗ id)(id ⊗ ∆)∆
= (λ ⊗ id ⊗ id)(∆ ⊗ id)∆
= (id ⊗ λ ⊗ id)(∆ ⊗ id)∆
= (id ⊗ λ ⊗ id)(id ⊗ ∆)∆
= (id ⊗ φ_λ)∆,

where we have used, in this order, coassociativity, cocommutativity and then coassociativity again. □

6. Wick products as Hopf algebra deformations
Let a ∈ A. We now define the functional ζ_a : H → R given by

ζ_a(A) := 𝟙(A = {a}),

for every A ∈ M(A). Then we define the operator ∂_a : H → H as ∂_a := ζ_a ⋆ id = φ_{ζ_a} in the notation (29), namely

∂_a A = (ζ_a ⊗ id)∆A.

It is simple to see that ∂_a acts as a formal partial derivation with respect to a, namely it satisfies, for A, B ∈ M(A) and a, b ∈ A,

∂_a {b} = 𝟙(a = b) ∅,    ∂_a(A · B) = ∂_a(A) · B + A · ∂_a(B),

since ζ_a satisfies ζ_a(∅) = 0 and ζ_a(A · B) = ζ_a(A)ε(B) + ε(A)ζ_a(B), namely ζ_a is an infinitesimal character. Recall that the product A · B has been defined in (14) and that x_b is identified with {b} ∈ H.

Then the following result is a reformulation in our setting of [13, Proposition 3.4].

Theorem 6.1.
The family of polynomials (:A:, A ∈ M(A)) is the only collection such that :∅: = ∅ and, for all non-empty A ∈ M(A) and a ∈ A,

∂_a :A: = :∂_a A:    and    µ(:A:) = 0.   (34)

Proof.
Since µ ∈ G(H), equation (28) implies

µ(:A:) = (µ^{-1} ⋆ µ)(A) = ε(A) = 𝟙(A = ∅).

Using (33) for λ = ζ_a we obtain

∆ ∂_a = (id ⊗ ∂_a)∆.

We conclude from (28) that

∂_a :A: = (ζ_a ⋆ µ^{-1} ⋆ id)(A) = (µ^{-1} ⋆ ζ_a ⋆ id)(A) = :∂_a A:

by the associativity and commutativity of ⋆. Therefore :A: satisfies (34). The converse follows from the fact that (34) defines a unique family by recurrence. □

6.1. Back to simple subsets.
As in Subsection 3.4, we can restrict the whole discussion to Wick polynomials associated to finite sets B ∈ M(A) ∩ {0,1}^A and their linear span J. Indeed, if A ∈ J then :A: = W(A) also belongs to J and is defined by the recursion

A = ∑_{B_1 · B_2 = A} E(X_{B_1}) :B_2:.
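This recursion can be checked numerically. The sketch below is a minimal illustration under stated assumptions, not code from the paper: it solves :A: = A − ∑_{∅≠B⊆A} E(X_B) :A\B: for three jointly Gaussian centered variables; the covariance values and all function names are our own illustrative choices.

```python
from itertools import combinations

def nonempty_subsets(A):
    A = sorted(A)
    return [frozenset(c) for r in range(1, len(A) + 1) for c in combinations(A, r)]

# Joint moments of three centered jointly Gaussian variables (sizes 0..3 suffice
# here): E(X_emptyset) = 1, odd moments vanish, E(X_{i,j}) = cov(i, j).
cov = {frozenset({0, 1}): 0.5, frozenset({0, 2}): 0.2, frozenset({1, 2}): 0.3}

def moment(B):
    if len(B) == 0:
        return 1.0
    if len(B) == 2:
        return cov[frozenset(B)]
    return 0.0  # odd moments of centered Gaussian variables vanish

def wick(A):
    """Solve A = sum_{B1.B2 = A} E(X_{B1}) :B2: for :A:, i.e.
    :A: = A - sum over nonempty B <= A of E(X_B) :A\\B:.
    Polynomials are dicts mapping square-free monomials (frozensets) to coefficients."""
    A = frozenset(A)
    out = {A: 1.0}
    for B in nonempty_subsets(A):
        m = moment(B)
        if m:
            for mon, c in wick(A - B).items():
                out[mon] = out.get(mon, 0.0) - m * c
    return out
```

For A = {0, 1, 2} this returns x_0 x_1 x_2 − 0.5 x_2 − 0.2 x_1 − 0.3 x_0, the classical third-order Wick product of correlated centered Gaussians; note that the output indeed stays in the span J of square-free monomials.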
As in Theorem 4.2, we have W = µ^{-1} ⋆ id and id = µ ⋆ W, and, as in Proposition 4.3,

:A: = A + ∑_{n≥1} (−1)^n ∑_{B ∈ M(A)} ∑_{B_1,...,B_n ∈ M(A)\{∅}} 𝟙(B · B_1 ··· B_n = A) µ(B_1) ··· µ(B_n) B

for all A ∈ M(A) ∩ {0,1}^A. However, as we have seen in Section 5 above, it is more interesting to work on the bialgebra H than on the coalgebra J; see in particular Theorem 5.3.

7. On the inverse of unital functionals
As we have seen in Theorem 4.2, the element µ^{-1} ∈ G(H) plays an important role in the Hopf algebraic representation (28) of Wick products. From (27) we obtain a general way to compute µ^{-1}. Let us consider now a linear functional λ : H → R which is also a unital algebra morphism (or character), namely such that λ(∅) = 1 and λ(A · B) = λ(A)λ(B) for all A, B ∈ H. Then there is a simple way to compute its inverse, namely as λ^{-1} = λ ∘ S, where S : H → H is the antipode, i.e. the only linear map such that

S ⋆ id = id ⋆ S = ∅ ε,

where ∅ is the unit and ε the counit of H. However, the functional µ we are interested in is not a character (moments are notoriously not multiplicative in general) and this simple representation is not available.

The aim of this section is to provide an alternative antipode formula, allowing to write µ^{-1} as µ̂ ∘ Ŝ, where Ŝ : Ĥ → Ĥ is the antipode of another Hopf algebra Ĥ, µ̂ : Ĥ → R is a suitable linear functional, and H is endowed with a comodule structure over Ĥ; see (36) for the precise formulation. This way of representing µ^{-1} by means of a comodule structure is directly inspired by [3, 6], see Section 10.

Definition 7.1.
Let Ĥ be the free commutative unital algebra (the algebra of polynomials) generated by M(A). We denote by • the product in Ĥ and we define the coproduct ∆̂ : Ĥ → Ĥ ⊗ Ĥ given by

∆̂(ιA) = (ι ⊗ ι)∆A    and    ∆̂(A_1 • A_2 • ··· • A_n) = (∆̂A_1) • (∆̂A_2) • ··· • (∆̂A_n),

where ι : H → Ĥ is the canonical injection (which we will omit whenever this does not cause confusion). The unit of Ĥ is ∅ and the counit is defined by

ε̂(A_1 • A_2 • ··· • A_n) = ε(A_1) ··· ε(A_n).

Since Ĥ is a polynomial algebra, ∆̂ is well defined by specifying its action on the elements of M(A) and requiring it to be multiplicative. It turns the space Ĥ into a connected graded Hopf algebra, where the grading is

|A_1 • A_2 • ··· • A_n| := |A_1| + |A_2| + ··· + |A_n|.

The antipode Ŝ : Ĥ → Ĥ of Ĥ can be computed by recurrence with the classical formula

Ŝ A = − A − ∑_{B_1, B_2 ∈ M(A)\{∅}} \binom{A}{B_1 B_2} [Ŝ B_1] • B_2,

where we dropped the injection ι for notational convenience. A closed formula for Ŝ follows:

Ŝ A = − A + ∑_{n≥2} (−1)^n ∑_{B_1,...,B_n ≠ ∅} \binom{A}{B_1 ... B_n} B_1 • B_2 • ··· • B_n.   (35)

We denote by C(Ĥ) the set of characters on Ĥ. This is a group for the ⋆̂ convolution, dual to ∆̂.

Proposition 7.2.
The restriction map R : C(Ĥ) → G(H), R λ̂ := λ̂|_H, defines a group isomorphism.

Proof. The map is clearly bijective, since a character on Ĥ is uniquely determined by its values on H, and every λ ∈ G(H) gives rise in this way to a λ̂ ∈ C(Ĥ) such that R λ̂ = λ.

It remains to show that R is a group morphism. This follows from

R(α̂ ⋆̂ β̂)(A) = (α̂ ⊗ β̂)∆̂A = (α̂|_H ⊗ β̂|_H)∆A = ((R α̂) ⋆ (R β̂))(A),

where α̂, β̂ ∈ C(Ĥ) and A ∈ H. □

For all λ ∈ G(H) we write λ̂ for the only character on Ĥ which is mapped to λ by the isomorphism R. By the previous proposition we obtain, in particular, that (λ̂)^{-1}|_H = λ^{-1} for all λ ∈ G(H). Since λ̂ is a character on Ĥ, we have (λ̂)^{-1} = λ̂ ∘ Ŝ. Therefore

λ^{-1} = (λ̂ ∘ Ŝ)|_H.   (36)

This formula can be used specifically to compute the inverse µ^{-1} of the functional µ in (28).

7.1. A comodule structure.
The above considerations suggest that we can introduce the following additional structure: if we define δ : H → Ĥ ⊗ H, δ := (ι ⊗ id)∆, where ι : H → Ĥ is the canonical injection of Definition 7.1, then H is turned into a left comodule over Ĥ, namely we have

(∆̂ ⊗ id_H)δ = (id_Ĥ ⊗ δ)δ,    id_H = (ε̂ ⊗ id_H)δ,   (37)

see (32) above. Note that (37) is in fact just the coassociativity and counitality of ∆ on H in disguise.

Then we can rewrite the Hopf algebraic representation (28) of Wick polynomials as follows:

:A: = ((µ̂ ∘ Ŝ) ⊗ id) δA,   (38)

for A ∈ H, where µ̂ is the •-multiplicative extension of µ from H to Ĥ. Expanding this formula by means of the closed formula for Ŝ, one recovers, by different means, Proposition 4.3.

From Proposition 5.4 above, we obtain

Proposition 7.3.
We define the action of C(Ĥ) on H by

ψ_λ̂ : H → H,    ψ_λ̂(A) = (λ̂ ⊗ id) δA,    λ̂ ∈ C(Ĥ),

for A ∈ H. Then ψ_λ̂ is a comodule morphism for all λ̂ ∈ C(Ĥ), namely

δ ∘ ψ_λ̂ = (id_Ĥ ⊗ ψ_λ̂) δ.

8. Deformation of pointwise multiplication
We show now that the ideas of the previous sections can be generalized and used to define deformations of other products. The main example for us is the pointwise product of functions f : R^d → R, and we explain in the next sections how these ideas appear in regularity structures.

Let us consider a fixed family T = (τ_i, i ∈ I). We denote by 𝒯 the free commutative monoid on T, with commutative product ⊙ and neutral element ∅ ∈ 𝒯 \ T. We define also (C, ⊙, ∅) as the unital free commutative algebra generated by T; then C is the vector space freely generated by 𝒯. Elements of 𝒯 are commutative monomials in the elements of T, i.e. a generic element of 𝒯 is of the form

τ = τ_{i_1} ··· τ_{i_k},

where τ_{i_1}, ..., τ_{i_k} ∈ T and juxtaposition denotes their commutative product in 𝒯. The empty set ∅ plays the role of the unit. Elements of the free commutative algebra C are simply linear combinations of these monomials. In this context the ⊙ product is the bilinear extension to C of the product in the monoid 𝒯.

We denote now by 𝒞 := C(R^d) the space of continuous functions f : R^d → R, for a fixed d ≥
1. We endow 𝒞 with the associative commutative product · given by the pointwise multiplication, and consider the spaces

𝒞^T := { Π : T → 𝒞 },    𝒞^𝒯 := { Γ : 𝒯 → 𝒞 },

of functions from T, respectively 𝒯, to 𝒞. Any function Γ ∈ 𝒞^𝒯 can be uniquely extended to a linear map Γ : C → 𝒞. One can think of 𝒞^𝒯 as a space of 𝒯-indexed functions: this is typically what happens in perturbative expansions indexed by combinatorial objects (sequences, in usual Taylor expansions or in Lyons' classical theory of geometric rough paths, or more complex objects such as trees or forests, as in Gubinelli's theory of non-geometric rough paths or in Hairer's theory of regularity structures, for example).

When deforming perturbative expansions parametrized by combinatorial objects, it is useful and often necessary to keep track of the combinatorial indices, since the deformation of the various terms of the expansion will depend in practice both on the terms themselves and on their indices. To implement this idea, we will, for Π ∈ 𝒞^T and Γ ∈ 𝒞^𝒯, use the physicists' notation

⟨Π, τ⟩ ∈ 𝒞, τ ∈ T,    ⟨Γ, α⟩ ∈ 𝒞, α ∈ 𝒯,

for the evaluation of Π and Γ on τ, respectively α (so that ⟨Π, τ⟩ and ⟨Γ, α⟩ are simply elements of 𝒞). We use instead, for a given Γ ∈ 𝒞^𝒯, the classical functional notation

Γ(α) := (⟨Γ, α⟩, α) ∈ 𝒞 × 𝒯,    α ∈ 𝒯,

to denote a copy of the function ⟨Γ, α⟩ indexed by α.

Definition 8.1.
For every Γ ∈ 𝒞^𝒯 we denote by C_Γ the vector space freely generated by (Γ(α), α ∈ 𝒯).

By definition, C_Γ is isomorphic to C as a vector space (through the projection map Γ(α) ↦ α). We also have an evaluation map ev : C_Γ → 𝒞,

ev( ∑_{i=1}^n c_i Γ(α_i) ) := ∑_{i=1}^n c_i ⟨Γ, α_i⟩ ∈ 𝒞.   (39)

Concretely, the aim of Definition 8.1 is to use deformations of the algebraic structure of C, in particular its product, and the isomorphism between C and C_Γ, to transfer the deformations of C to C_Γ and ultimately to the associated functions in 𝒞.

We insist on the fact that, in general, the way such a function (or products thereof) will be deformed will depend on its index. A key point to keep in mind is indeed that the vector space C_Γ is not isomorphic, in general, to the linear span of (⟨Γ, α⟩, α ∈ 𝒯) in 𝒞. To take a trivial example, we might choose ⟨Γ, α⟩ = 0 ∈ 𝒞 for some (or all) α ∈ 𝒯, while Γ(α) is always a non-zero element of C_Γ for all α ∈ 𝒯. In practice, C_Γ is isomorphic to the linear span of (⟨Γ, α⟩ : α ∈ 𝒯) in 𝒞 if and only if the family (⟨Γ, α⟩ : α ∈ 𝒯) is linearly independent in 𝒞, that is, if and only if the evaluation map ev is injective.

Definition 8.2.
We define the commutative and associative product M_Γ on C_Γ as the only linear map M_Γ : C_Γ ⊗ C_Γ → C_Γ such that

M_Γ(Γ(α) ⊗ Γ(β)) := Γ(α ⊙ β),    ∀ α, β ∈ 𝒯.

Then (C_Γ, M_Γ, Γ(∅)) is a commutative unital algebra. In other terms, we have extended the canonical isomorphism from C to C_Γ to an isomorphism of algebras (C, ⊙) → (C_Γ, M_Γ).

Definition 8.3. If Γ ∈ 𝒞^𝒯 is a unital algebra morphism from (𝒯, ⊙) to (𝒞, ·), namely if

⟨Γ, ∅⟩ = 1,    ⟨Γ, τ_1 ⊙ τ_2⟩ = ⟨Γ, τ_1⟩ · ⟨Γ, τ_2⟩,   (40)
Since C coincides with the free algebra generated by T, for each map Π ∈ 𝒞^T there is a unique character RΠ ∈ 𝒞^𝒯, called the canonical lift of Π, with

⟨Π, τ⟩ = ⟨RΠ, τ⟩,    ∀ τ ∈ T.

In particular, for every Π ∈ 𝒞^T, M_{RΠ} is mapped by the evaluation map (39) to the canonical pointwise product on 𝒞.

8.1. Constructing deformations.
We now want to define deformations of the products of Definition 8.2, taking inspiration from Section 5. We suppose that C is a left comodule over a Hopf algebra (Ĉ, •, ∅, ∆̂, ε̂), with coaction δ : C → Ĉ ⊗ C satisfying the analogue of (37). We stress that the coaction δ is not supposed to be multiplicative with respect to the ⊙ product on C.

We say that λ : Ĉ → R is unital if it is a linear functional such that λ(∅) = 1. Then we define, as in the previous section,

ψ_λ : C → C,    ψ_λ := (λ ⊗ id) δ,   (41)

for every unital λ : Ĉ → R. It is easy to see that ψ_λ ∘ ψ_{λ′} = ψ_{λ′ ⋆̂ λ}, where ⋆̂ is the convolution product with respect to the coproduct ∆̂ : Ĉ → Ĉ ⊗ Ĉ. We then define the product ⊙_λ on C as

α ⊙_λ β := ψ_λ^{-1}[ (ψ_λ α) ⊙ (ψ_λ β) ],    α, β ∈ C,   (42)

where ψ_λ^{-1} = ψ_{λ^{-1}} and λ^{-1} : Ĉ → R is the inverse of λ with respect to the ⋆̂ convolution product. It is easy to see that the product ⊙_λ is associative and commutative, arguing as in the proof of Theorem 5.1. The product ⊙_λ is in general different from ⊙.

Definition 8.5.
By analogy with the Hopf algebraic interpretation of Wick products, the map ψ_λ^{-1} = ψ_{λ^{-1}} is called the generalized Wick λ-product map.

We can now define deformations of the product M_Γ on C_Γ.

Definition 8.6.
For every Γ ∈ 𝒞^𝒯 and every unital λ : Ĉ → R we can define a product M_Γ^λ on C_Γ by

M_Γ^λ(Γ(α) ⊗ Γ(β)) := Γ(α ⊙_λ β),    ∀ α, β ∈ C,   (43)

such that Γ : (C, ⊙_λ) → (C_Γ, M_Γ^λ) is an algebra isomorphism.

We say that M_Γ^λ is a λ-deformation of M_Γ; if λ is the counit ε̂ of Ĉ, the counitality property (37) of δ implies that ψ_ε̂ is the identity map on C, hence M_Γ^ε̂ coincides with M_Γ. In particular we have

Definition 8.7.
For every Π ∈ 𝒞^T we can define a λ-deformation ·_λ := M_{RΠ}^λ of the canonical product M_{RΠ} on C_{RΠ}, such that

Π(τ_1) ·_λ ··· ·_λ Π(τ_n) := RΠ(τ_1 ⊙_λ ··· ⊙_λ τ_n),   (44)

where τ_1, ..., τ_n ∈ T.

We stress again that, unlike the pointwise multiplication ·, the λ-deformation ·_λ = M_{RΠ}^λ is not defined on 𝒞 but rather, for every fixed Π ∈ 𝒞^T, on C_{RΠ}. As stated below Definition 8.6, the deformation M_{RΠ}^ε̂ coincides with the canonical product M_{RΠ} on C_{RΠ}.

Lemma 8.8.
For every unital λ : Ĉ → R and Γ ∈ 𝒞^𝒯 the map

Γ^λ : C → C_Γ,    Γ^λ := Γ ∘ ψ_λ^{-1},   (45)

defines an algebra isomorphism from (C, ⊙) to (C_Γ, M_Γ^λ).

Proof. By (43),

M_Γ^λ(Γ^λ(α) ⊗ Γ^λ(β)) = Γ^λ(α ⊙ β),    ∀ α, β ∈ C,   (46)

and the claim follows. □

In particular for Γ = RΠ we obtain by (44)

Γ^λ(τ_1) ·_λ ··· ·_λ Γ^λ(τ_n) = Γ^λ(τ_1 ⊙ ··· ⊙ τ_n),   (47)

which is reminiscent of (31).

Example 8.9.
In the setting of the previous sections, we can consider T = A with 𝒞 = C(R^A), so that in this case, up to a canonical isomorphism, C = H and Ĉ = Ĥ. Then the most natural choice of Π ∈ Hom(A, C(R^A)) is given by ⟨Π, a⟩ := t_a, where t_a : R^A → R is the evaluation of the a-component, and RΠ : H → 𝒞 is

⟨RΠ, x_{a_1} ··· x_{a_n}⟩ := t_{a_1} ··· t_{a_n},    a_1, ..., a_n ∈ A.

Then (47) is the analogue of (31) in this context, while (44) defines a deformation ·_λ of the pointwise product of t_{a_1}, ..., t_{a_n}. This point of view will be generalized in the next section.

We conclude this section with a remark. The above construction allows to construct families of deformed products on a vector space C_Γ. In the general case, the outcome of the product between Π(τ_1) and Π(τ_2) is not a function in 𝒞 but an element of C_Γ, namely a formal linear combination of functions in 𝒞 indexed by elements of 𝒯. One may prefer a genuine function in 𝒞 as an outcome.

If the family (⟨Γ, α⟩ : α ∈ 𝒯) is linearly independent in 𝒞, then we can canonically embed C_Γ in 𝒞 by means of the evaluation map (39), and the product

ev ∘ M_Γ^λ(ev^{-1}(·) ⊗ ev^{-1}(·)) : ev(C_Γ) ⊗ ev(C_Γ) → ev(C_Γ)

is then a commutative and associative product on the linear subspace of 𝒞 spanned by (⟨Π, τ⟩, τ ∈ 𝒯).
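To make Example 8.9 concrete in the simplest case of a single letter a with t_a(y) = y, the following sketch (our own illustration; the names `psi`, `deformed`, `Pi` are hypothetical, and the moments are those of a standard Gaussian) computes the deformed product (42) on C = R[x], with the binomial coproduct as coaction, and evaluates the result pointwise through Π:

```python
from math import comb

moments = [1, 0, 1, 0, 3, 0, 15]   # E[g^n] for g ~ N(0,1), n = 0..6 (illustrative)
mu = lambda n: moments[n]

def psi(lam, p):
    """psi_lambda = (lambda (x) id) delta on p = {degree: coeff}, where the
    coaction is the binomial coproduct delta(x^n) = sum_k C(n,k) x^k (x) x^{n-k}."""
    out = {}
    for n, c in p.items():
        for k in range(n + 1):
            out[n - k] = out.get(n - k, 0) + c * comb(n, k) * lam(k)
    return out

def mu_inverse(N):
    """Convolution inverse of mu up to degree N, by the usual recursion."""
    inv = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        inv[n] = -sum(comb(n, k) * inv[k] * mu(n - k) for k in range(n))
    return inv

def deformed(p, q):
    """p ⊙_mu q = psi_mu^{-1}((psi_mu p) ⊙ (psi_mu q)), cf. formula (42)."""
    pq = {}
    for n, c in psi(mu, p).items():
        for m, d in psi(mu, q).items():
            pq[n + m] = pq.get(n + m, 0) + c * d
    inv = mu_inverse(max(pq))
    return psi(lambda n: inv[n], pq)

def Pi(p, y):
    """The evaluation <Pi, x^n> = t^n at the point y, extended linearly."""
    return sum(c * y ** n for n, c in p.items())

x = {1: 1.0}
sq = deformed(x, x)      # x ⊙_mu x  =  x^2 - 1  (the Wick square)
cube = deformed(sq, x)   # (x ⊙_mu x) ⊙_mu x  =  x^3 - 3x
```

At y = 2 these evaluate to 3 and 2, i.e. the Hermite polynomials He_2 and He_3: the deformed pointwise product of t with itself is the Wick square t² − 1, as expected from the Gaussian case of the preceding sections.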
In the general case, one can still compute ev ∘ M_Γ^λ, whose values indeed belong to 𝒞, and obtain a linear map

ev ∘ M_Γ^λ : C_Γ ⊗ C_Γ → 𝒞.

In this case, ev ∘ M_Γ^λ is a weaker notion than a genuine product. We refer to the discussion on "multiplication" in regularity structures at the beginning of [10, Section 4].

9. Wick products of trees
We now discuss the main example we have in mind of the general construction in Section 8, namely rooted trees, which generalize classical monomials as we show below. With the application to rough paths in mind [8], we denote by 𝒯 the set of all non-planar non-empty rooted trees with edges (not nodes) decorated with letters from a finite alphabet {1, ..., d}. We stress that all trees in 𝒯 have at least one node, the root.

The set 𝒯 is a commutative monoid under the associative and commutative tree product ⊙ given by the identification of the roots: two trees are multiplied by gluing them at their roots, cf. (48) (the tree diagrams are omitted in this version); see also [3, Definition 4.7]. The rooted tree with a single node and no edge is the neutral element for this product. The set of monomials in d commuting variables X_1, ..., X_d can be embedded in 𝒯 as follows: every primitive monomial X_i is identified with the planted tree with a single edge decorated by i, and the product of monomials with the tree product. In this way every monomial is identified with a decorated corolla; for instance, X_i X_j X_k is mapped to the corolla with three edges decorated by i, j and k, cf. (49). See the discussion around Lemma 9.3 below for more on this identification.

We denote by 𝒯_p ⊂ 𝒯 the set of all non-planar planted rooted trees. We recall that a rooted tree is planted if its root belongs to a single edge, called the trunk.

We also denote by F the set of non-planar rooted forests with edges (not nodes) decorated with letters from the finite alphabet {1, ..., d}, such that every non-empty connected component has at least one edge. On this space we define the product • given by the disjoint union, with neutral element the empty forest ∅.

We perform the identification of the single-node rooted tree in 𝒯 with the empty forest ∅ ∈ F, cf. (50). Then we obtain canonical embeddings

𝒯_p ↪ 𝒯 ↪ F   (51)

and moreover

⋄ (𝒯, ⊙) is the free commutative monoid on 𝒯_p,
⋄ (F, •) is the free commutative monoid on 𝒯.

In both cases the single-node tree ∅ is the neutral element.
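The tree product ⊙ is easy to realize on a computer. The nested-tuple encoding below is our own illustrative choice, not taken from the paper: a tree is represented by the multiset of its planted branches, each branch a pair (edge decoration, subtree); the single-node tree is the empty tuple, and ⊙ glues trees at their roots by merging branch multisets.

```python
def tree_product(*trees):
    """The tree product ⊙ of (48): identify the roots, i.e. merge the branch
    multisets; sorting gives a canonical (commutative) normal form."""
    branches = []
    for t in trees:
        branches.extend(t)
    return tuple(sorted(branches))

def corolla(*decorations):
    """The embedding (49): the monomial X_i X_j ... becomes the corolla whose
    edges carry the decorations i, j, ..."""
    return tuple(sorted((d, ()) for d in decorations))

root = ()                              # the single-node tree, neutral for ⊙
planted = (('i', corolla('j', 'k')),)  # a planted tree whose trunk is decorated i
```

On this encoding one checks directly that ⊙ is commutative and associative, that the single-node tree is neutral, and that a product of corollas is again a corolla — the monoid embedding of monomials described above.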
We denote by

⋄ V the vector space freely generated by 𝒯_p,
⋄ C the vector space freely generated by 𝒯,
⋄ Ĉ the vector space freely generated by F.

Then we have

⋄ (C, ⊙) is the free commutative unital algebra generated by 𝒯_p,
⋄ (Ĉ, •) is the free commutative unital algebra generated by 𝒯,

and again in both cases the element ∅ is the neutral element. Finally, by (50) and (51) we also have canonical embeddings

V ↪ C ↪ Ĉ.   (52)

On Ĉ we also define the coproduct ∆̂, given by the extraction-contraction operator of arbitrary subforests [4]:

∆̂ τ = ∑_{σ ⊆ τ} σ ⊗ τ/σ,    τ ∈ F,   (53)

where a subforest σ ∈ F of τ is determined by a (possibly empty) subset of the set of edges of τ, and τ/σ is the tree obtained by contracting each connected component of σ to a single node. We recall that by (50) the empty forest and the tree reduced to a single node are identified and denoted ∅. (The displayed example of (53), the eight-term expansion of a three-edge tree, is omitted in this version.) If ε̂ : Ĉ → R is the linear functional such that ε̂(σ) = 𝟙(σ = ∅) for σ ∈ F, then (Ĉ, •, ∅, ∆̂, ε̂) is a Hopf algebra [4].

Note that, unlike (Ĥ, ∆̂) in Section 7, (Ĉ, ∆̂) is not cocommutative; moreover the canonical embedding C ↪ Ĉ in (52) is not an algebra morphism from (C, ⊙) to (Ĉ, •). We could also endow C with a coproduct ∆_C (the extraction-contraction operator of a subtree at the root, which plays an important role in [3] and is isomorphic to the classical Butcher–Connes–Kreimer coproduct), but we do not need this for what comes next.

We now go back to the construction of Section 8. With the embedding C ↪ Ĉ, the coaction

δ : C → Ĉ ⊗ C,    δ(τ) := ∆̂ τ,

makes C a left comodule over Ĉ by an analogue of Proposition 7.3.
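Anticipating Lemma 9.3 below: on a corolla every subset of edges is connected (all edges meet at the root), so the extraction-contraction coproduct (53) reduces to the sub-multiset coproduct of H. The sketch below (illustrative names, not the paper's code) verifies this on a small example.

```python
from math import comb
from collections import Counter
from itertools import combinations, product

def hat_delta_corolla(edges):
    """Extraction-contraction (53) on a corolla, given as a sorted tuple of edge
    decorations: extract a subset of edges (always a connected subforest here),
    contract it to the root, and keep the complementary corolla on the right."""
    out = Counter()
    idx = range(len(edges))
    for r in range(len(edges) + 1):
        for sub in combinations(idx, r):
            left = tuple(sorted(edges[i] for i in sub))
            right = tuple(sorted(edges[i] for i in idx if i not in sub))
            out[(left, right)] += 1
    return dict(out)

def delta_H(A):
    """The coproduct of H on the multiset A: sum over sub-multisets B of A,
    weighted by the product of binomial coefficients, tensor the complement."""
    cnt = Counter(A)
    letters = sorted(cnt)
    out = Counter()
    for picks in product(*(range(cnt[l] + 1) for l in letters)):
        coeff, left, right = 1, [], []
        for l, p in zip(letters, picks):
            coeff *= comb(cnt[l], p)
            left += [l] * p
            right += [l] * (cnt[l] - p)
        out[(tuple(left), tuple(right))] += coeff
    return dict(out)
```

For the corolla with edges decorated (i, i, j) both computations agree: the 2³ = 8 edge subsets collect into the same weighted tensor, e.g. the term (i) ⊗ (i, j) appears with coefficient 2 = C(2,1).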
Then we can define ψ_λ : C → C as in (41), for λ : Ĉ → R unital, and a deformed product ⊙_λ on C as in (42), which is in general truly different from ⊙. For Π ∈ 𝒞^T and Γ := RΠ ∈ 𝒞^𝒯 as in Definition 8.3, the map Γ^λ = (RΠ) ∘ ψ_{λ^{-1}} defines by Lemma 8.8 an algebra isomorphism from (C, ⊙) to (C_Γ, M_Γ^λ), so that in particular we have the analogue of (47).

This idea is very important in regularity structures, where the pointwise product of explicit (random) distributions is ill-defined, while a suitable deformed product is well-defined as a (random) distribution. The above construction allows to recover a precise algebraic structure of such deformed pointwise products, in the same spirit as Theorem 5.3. See Section 10 below for a discussion.

We show now how these ideas can be implemented concretely, that is, how a character λ can be constructed in practice in some interesting situations, generalizing the construction of Wick polynomials in the previous sections of this article.

Let us now consider a 𝒞^T-valued random variable X such that

⋄ ⟨X, ∅⟩ = 1,
⋄ X is stationary, i.e. ⟨X, τ⟩(· + x) has the same law as ⟨X, τ⟩ for all x ∈ R^d and τ ∈ T,
⋄ ⟨X, τ⟩(0) has finite moments of all orders for all τ ∈ T.

Then we can define

µ : C → R,    µ(τ) := E(⟨RX, τ⟩(0)).   (54)

There is a unique extension of µ to a linear µ̂ : Ĉ → R which is a character of (Ĉ, •), and we denote by µ̂^{-1} : Ĉ → R its inverse with respect to the ⋆̂ convolution product.

Theorem 9.1.
Let X be as above. The map λ = µ̂^{-1} is the unique character on (Ĉ, •) such that

E(⟨RX, ψ_λ τ⟩(0)) = 0,    ∀ τ ∈ 𝒯 \ {∅}.

Proof.
We note first that for every character λ on (Ĉ, •) we have

E(⟨RX, ψ_λ τ⟩(x)) = (λ ⊗ µ)∆̂ τ = (λ ⋆̂ µ) τ,    ∀ τ ∈ 𝒯.

In particular,

E(⟨RX, ψ_{µ̂^{-1}} τ⟩(x)) = (µ̂^{-1} ⊗ µ̂)∆̂ τ = 0,    ∀ τ ∈ 𝒯 \ {∅}.

On the other hand, since λ and µ̂ are characters on (Ĉ, •), if for all τ ∈ 𝒯

(λ ⋆̂ µ̂) τ = 𝟙(τ = ∅),

then the same formula holds by multiplicativity for all τ ∈ F, and we obtain that λ = µ̂^{-1}. □

Remark 9.2.
By stationarity, the function RX ∘ ψ_{µ̂^{-1}} ∈ 𝒞^𝒯 has the additional property

E(⟨RX, ψ_{µ̂^{-1}} τ⟩(x)) = 0,    ∀ τ ∈ 𝒯 \ {∅}, x ∈ R^d.

In other words, RX ∘ ψ_{µ̂^{-1}} : C → 𝒞 gives a centered deformed product. This important example is the exact analogue in this context of the BPHZ renormalization in regularity structures, see [3, Theorem 6.17].

We now show that the construction on decorated rooted trees generalizes in a very precise sense the Wick products of Section 4. We use the identification between monomials in d commuting variables X_1, ..., X_d and corollas decorated with letters from {1, ..., d} that we have explained in (49). Choosing A := {1, ..., d} we obtain a canonical embedding H ↪ C, where H is defined in Definition 3.1; we call Cor the image of H in C by this embedding. Then a simple verification shows that

Lemma 9.3.
The embedding H ↪ C is a Hopf algebra isomorphism between (H, ·, ∅, ∆, ε) and (Cor, ⊙, ∅, ∆̂, ε̂), where ∆̂ is defined in (53).

We obtain that every deformation ⊙_λ, for a unital λ : Ĉ → R, defines a product on Cor which is isomorphic to the deformed product defined in (30) by restricting λ to a map from Cor to R.

10. Connection with regularity structures
It would go beyond the scope of this work to introduce and explain the algebraic and combinatorial aspects of the seminal theory of regularity structures [10]; we want at least to explain how the concept of renormalization, which plays such a prominent role there, is intimately related to the deformation of the standard pointwise product described in the previous sections. These ideas can also be found in the theory of rough paths [14, 15, 7, 8], which has largely inspired the theory of regularity structures.

We denote by D′(R^d) the classical space of distributions, or generalized functions, on R^d. The recent papers [3, 6] introduce a Hopf algebra Ĥ together with a linear space H, which is moreover a left comodule over Ĥ with coaction δ : H → Ĥ ⊗ H. This framework is then used to describe in a compact way a number of complicated algebraic operations, related to the concept of renormalization. The space H in [3] is an expanded version of the linear span of decorated rooted trees V defined in Section 9 above; more precisely, it is the vector space freely generated by a more complicated set of decorated rooted trees, which is aimed at representing monomials of generalized Taylor expansions. The space Ĥ in [3] is a Hopf algebra of decorated forests with a condition of negative homogeneity.

In [3, 6], the linear space H codes random distributions, which depend on a regularisation parameter ǫ >
0. As one removes the regularisation by letting ǫ →
0, these random distributions do not converge in general. More precisely, we have (random) linear functions Π_ǫ : H → D′(R^d) which are well defined for all ǫ >
0, but for which there is in general no limit as ǫ →
0. In fact, we even have Π_ǫ : H → C(R^d), and Π_ǫ is constructed in a multiplicative way as in Definition 8.3 above. Indeed, although H is not an algebra, it is endowed with a partial product, i.e., some but not all pairs of its elements are supposed to be multiplied. We try to make this idea more precise in the next

Definition 10.1. A partial product on H is a pair (M, S), where S ⊆ H ⊗ H is a linear space and M : S → H is a linear function.

Therefore, if τ and σ are elements of H, their product M(τ ⊗ σ) is well defined if and only if τ ⊗ σ ∈ S. For example, in regularity structures one has an element Ξ ∈ H such that Π_ǫ Ξ = ξ_ǫ := ρ_ǫ ∗ ξ, where ξ is a white noise on R^d (a random distribution in D′(R^d)) and (ρ_ǫ)_{ǫ>0} is a family of mollifiers. Although (ξ_ǫ)² is well defined as a pointwise product in C(R^d), it does not converge as ǫ → 0 in D′(R^d), and indeed we do not expect to multiply ξ by itself in D′(R^d). We express this by imposing that Ξ ⊗ Ξ ∉ S.

The divergences that arise in this context are due to ill-defined products; this is already clear in the example of Ξ ⊗ Ξ and (ξ_ǫ)². Another more subtle example is the following: we consider ξ_ǫ := ρ_ǫ ∗ ξ again, and a (possibly random) function f_ǫ : R^d → R which, as ǫ →
0, tends to a non-smooth function f. Then the pointwise product f_ǫ · ξ_ǫ does not converge in general, since the product f · ξ is ill-defined in D′(R^d). However, a proper deformation of this pointwise product may still be well defined in the limit.

Let (τ_i, i ∈ I) ⊂ H be a family that freely generates H as a linear space. We can now give the following

Definition 10.2.
Let Π : (τ_i, i ∈ I) → D′(R^d) be a map and (M, S) a partial product on H. We define C_Π as the vector space freely generated by the symbols (Π(τ_i), i ∈ I), as in Definition 8.1, and Π : H → C_Π the unique linear extension of τ_i ↦ Π(τ_i) ∈ C_Π. Then we define a partial product on C_Π as follows:

⋄ S_Π ⊆ C_Π ⊗ C_Π,    S_Π := { Π(τ) ⊗ Π(σ) : τ ⊗ σ ∈ S },
⋄ M_Π : S_Π → C_Π,    M_Π(Π(τ) ⊗ Π(σ)) := Π(M(τ ⊗ σ)).

We are clearly inspired by the construction of the previous sections, realizing that we can work on distributions rather than on continuous functions. We stress that this definition allows to define partial products of distributions in a very general setting.

However, the construction of interesting maps Π : (τ_i, i ∈ I) → D′(R^d) may not be simple. The method which is successfully used in a large class of applications in [10, 3, 6] is the following. We start from a Π_ǫ : (τ_i, i ∈ I) → C(R^d) which is multiplicative in the sense that

⟨Π_ǫ, M(τ ⊗ σ)⟩ = ⟨Π_ǫ, τ⟩ · ⟨Π_ǫ, σ⟩,    ∀ τ ⊗ σ ∈ S,

where · is the standard pointwise product in C(R^d). In order to obtain a convergent limit as ǫ →
0, we try to deform this pointwise product, using the comodule structure of H over Ĥ. For every unital, multiplicative linear λ : Ĥ → R we define ψ_λ : H → H as in (41) and then we set, as in (45),

Π_ǫ^λ(τ) := Π_ǫ(ψ_λ^{-1} τ),    τ ∈ H.

Then we can define the deformed partial product on C_{Π_ǫ}:

M_{Π_ǫ}^λ(Π_ǫ^λ(τ) ⊗ Π_ǫ^λ(σ)) := Π_ǫ^λ(M(τ ⊗ σ)),    τ ⊗ σ ∈ S.

If λ = λ_ǫ is chosen in such a way that Π_ǫ^{λ_ǫ} converges to a well defined map Π̂ : (τ_i, i ∈ I) → D′(R^d), then we can define on C_{Π̂} the partial product

M_{Π̂}(Π̂(τ) ⊗ Π̂(σ)) := Π̂(M(τ ⊗ σ)),    τ ⊗ σ ∈ S,

which is the analogue of (47) in this setting. We note that in general neither Π_ǫ nor λ_ǫ converges; indeed, λ_ǫ diverges exactly in a way that compensates the divergence of Π_ǫ, so that Π_ǫ^{λ_ǫ} converges.

The fact that the above construction can indeed be implemented in a large number of interesting situations is the result of [3, 6]. Those papers consider random maps Π_ǫ with suitable properties which resemble those of X in Theorem 9.1, namely Π_ǫ is supposed to be stationary and to possess finite moments of all orders. Then, as in Theorem 9.1, it is possible to choose a specific element λ_ǫ : Ĥ → R which yields a centered family of functions Π_ǫ ∘ ψ_{λ_ǫ^{-1}}, see [3, Theorem 6.17]. Under very general conditions, this special choice produces a family converging as ǫ → 0 to a centered version of the original (non-converging) one. The specific functional λ_ǫ^{-1} is equal to µ_ǫ ∘ A, where µ_ǫ : Ĥ → R is an expectation with respect to Π_ǫ as in (54), and A is a twisted antipode; the functional µ_ǫ ∘ A plays the role which is played by µ̂^{-1} in Theorem 9.1.

Remark 10.3.
We stress that the centered family Π_ǫ ∘ ψ_{λ_ǫ^{-1}} cannot in general be reduced to the Wick polynomials of Theorem 4.2. This is because the coaction δ : H → Ĥ ⊗ H in this context is significantly more complex than (19) and (53).

References

[1] C. Brouder, B. Fauser, A. Frabetti, and R. Oeckl. Quantum field theory and Hopf algebra cohomology.
J. Phys. A, 37(22):5895–5927, 2004.
[2] C. Brouder and F. Patras. One-particle irreducibility with initial correlations. In "Combinatorics and Physics", Contemporary Mathematics, 539:1–25, 2011.
[3] Y. Bruned, M. Hairer, and L. Zambotti. Algebraic renormalisation of regularity structures. https://arxiv.org/abs/1610.08468, 2016.
[4] D. Calaque, K. Ebrahimi-Fard, and D. Manchon. Two interacting Hopf algebras of trees: a Hopf-algebraic approach to composition and substitution of B-series. Adv. in Appl. Math., 47(2):282–308, 2011.
[5] P. Cartier. A primer of Hopf algebras. Frontiers in Number Theory, Physics, and Geometry, 2:537–615, 2007. Springer, Berlin, Heidelberg.
[6] A. Chandra and M. Hairer. An analytic BPHZ theorem for regularity structures. https://arxiv.org/abs/1612.08138, 2016.
[7] M. Gubinelli. Controlling rough paths. Journal of Functional Analysis, 216(1):86–140, 2004.
[8] M. Gubinelli. Ramification of rough paths. Journal of Differential Equations, 248(4):693–721, 2010.
[9] M. Hairer and H. Shen. A central limit theorem for the KPZ equation. Ann. Probab., 45(6B):4167–4221, 2017.
[10] M. Hairer. A theory of regularity structures. Invent. Math., 198(2):269–504, 2014.
[11] S. A. Joni and G.-C. Rota. Coalgebras and bialgebras in combinatorics. Stud. Appl. Math., 61(2):93–139, 1979.
[12] V. P. Leonov and A. N. Sirjaev. On a method of semi-invariants. Theor. Probability Appl., 4:319–329, 1959.
[13] J. Lukkarinen and M. Marcozzi. Wick polynomials and time-evolution of cumulants. J. Math. Phys., 57(8):083301, 2016.
[14] T. J. Lyons. Differential equations driven by rough signals. Rev. Mat. Iberoamericana, 14(2):215–310, 1998.
[15] T. J. Lyons, M. Caruana, and T. Lévy. Differential equations driven by rough paths, volume 1908 of Lecture Notes in Mathematics. Springer, Berlin, 2007. Lectures from the 34th Summer School on Probability Theory held in Saint-Flour, July 6–24, 2004.
[16] G. Peccati and M. S. Taqqu. Wiener chaos: moments, cumulants and diagrams, volume 1 of Bocconi & Springer Series. Springer, Milan; Bocconi University Press, Milan, 2011.
[17] J. Riordan. Combinatorial identities. Robert E. Krieger Publishing Co., Huntington, N.Y., 1979. Reprint of the 1968 original.
[18] G.-C. Rota and J. Shen. On the combinatorics of cumulants. J. Combin. Theory Ser. A, 91(1-2):283–304, 2000.
[19] G.-C. Rota and T. C. Wallstrom. Stochastic integrals: a combinatorial approach. Ann. Probab., 25(3):1257–1283, 1997.
[20] T. P. Speed. Cumulants and partition lattices. Austral. J. Statist., 25(2):378–388, 1983.
Dept. of Mathematical Sciences, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway. On leave from UHA, Mulhouse, France.
E-mail address: [email protected]
URL: https://folk.ntnu.no/kurusche/

Univ. Côte d'Azur, CNRS, UMR 7351, Parc Valrose, 06108 Nice Cedex 02, France.
E-mail address: [email protected]
URL: ∼patras

Departamento de Ingeniería Matemática, Universidad de Chile, Santiago, Chile.
E-mail address: [email protected]
URL: ∼ntapia

Laboratoire de Probabilités, Statistique et Modélisation (UMR 8001), Sorbonne Université, Paris, France
E-mail address: [email protected]