On Deterministic Markov Processes: Expandability and Related Topics
Alexander Schnurr ∗ July 1, 2018
Abstract
We analyze the class of universal Markov processes on R^d which do not depend on randomness. For this class, as well as for several subclasses, we prove criteria for whether a function f: [0,∞[ → R^d can be a path of a process in the respective class. This is useful in particular in the construction of (counter-)examples. The semimartingale property is characterized in terms of the jumps of a one-dimensional deterministic Markov process. We emphasize the differences between the time homogeneous and the time inhomogeneous case, and we show that a deterministic Markov process is in general more complicated than a Hunt process plus 'jump structure'. MSC 2010:
Keywords:
Markov semimartingale, deterministic process, Itô process, Feller process, symbol, expandability
1 Introduction

Deterministic processes arise naturally in several parts of the theory of stochastic processes. Examples include the space dependent drift being one part of a Feller process, or the seasonal component of a time series in continuous time. Furthermore it is known that every PII (i.e. a càdlàg process with independent increments) can be written as the sum of a PII semimartingale and a deterministic process (cf. [12] Theorem II.5.1). When starting our investigation of deterministic universal Markov processes we had the following three questions in mind:

I) Is there a simple example of a Hunt semimartingale which is not an Itô process?

II) Can we characterize every (one-dimensional) deterministic Markov process?

III) Given a function f: [0,∞[ → R^d, can we directly say whether there exists a deterministic process of a certain class having this function as a path?

∗ Lehrstuhl IV, Fakultät für Mathematik, Technische Universität Dortmund, D-44227 Dortmund, Germany, [email protected], phone: +49-231-755-3099, fax: +49-231-755-3064.

In the class under consideration, even for d = 1, paths do not have to be monotone any more, or even of finite variation. This leads to new questions like: when is a deterministic Markov process a semimartingale? The criterion is simple, while the proof is surprisingly difficult; a new technique had to be developed in order to prove it.

Here and in the following we mean by a deterministic process a stochastic process (X, P^x)_{x∈R^d} = (X^x)_{x∈R^d} which does not depend on ω, i.e. there exists a function f: R^d × [0,∞[ → R^d such that

X^x_t(ω) = f(x, t)

for every ω ∈ Ω. Since it is always assumed that P^x(X_0 = x) = 1, we write X^x for the simple process (X, P^x) and call it a path of the process. Since deterministic processes are adapted to every possible filtration, we do not mention the filtration further, but we assume that a fixed stochastic basis (Ω, F, (F_t)_{t≥0}, P) is always in the background.
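Concretely, such a process can be encoded as a flow f(x, t) = X^x_t; together with time homogeneity, the Markov property then amounts to the semigroup identity f(x, s + t) = f(f(x, s), t). A minimal sketch; the linear drift used here is a hypothetical stand-in, not an example from the text:

```python
# A deterministic Markov process "is" a flow f(x, t) = X_t^x; by time
# homogeneity it must satisfy the semigroup identity
#     f(x, s + t) == f(f(x, s), t)   for all x and s, t >= 0.

def f(x: float, t: float) -> float:
    """Hypothetical example: deterministic drift with speed 2."""
    return x + 2.0 * t

def satisfies_semigroup(flow, xs, ss, ts, tol=1e-9) -> bool:
    """Check the semigroup identity on a finite grid of sample points."""
    return all(abs(flow(x, s + t) - flow(flow(x, s), t)) <= tol
               for x in xs for s in ss for t in ts)

print(satisfies_semigroup(f, xs=[-1.0, 0.0, 2.5], ss=[0.0, 0.5, 1.0], ts=[0.0, 0.25, 3.0]))
# -> True
```

Of course such a grid check can only falsify, never prove, the identity; it is merely a convenient sanity check when experimenting with candidate flows.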
In constructing examples it is an advantage of deterministic processes that one does not have to care about the filtration.

We are concerned with Markov processes in the sense of Blumenthal and Getoor (cf. [3]), which are sometimes called universal Markov processes (cf. [1], [11]) or Markov families. This is due to the fact that the examples we have in mind are related to concepts like the semigroup of the process, the generator or the martingale problem. With little effort most of the results can be transferred to the case of simple Markov processes (with only one starting point). The Markov processes X = (X_t)_{t≥0} we are treating here are allowed to start in every point of the respective state space and furthermore, time homogeneity is present, i.e., writing for s, t ≥ 0, x, y, z ∈ R^d and a Borel set B in R^d

P^x_{s,t}(z, B) := P^x(X_t ∈ B | X_s = z),

we have

P_{t−s}(z, B) := P^x_{s,t}(z, B) = P^y_{s+h,t+h}(z, B), h ≥ 0. (1.1)

For a deterministic Markov process, time homogeneity can be characterized as follows: if there exist s, t ≥ 0 and x, y ∈ R^d such that X^x_s = X^y_t, we obtain

X^x_{s+h} = X^y_{t+h} (1.2)

for every h ≥
0. The class of deterministic Markov processes serves as a rich source of counterexamples (cf. [20] and the present paper). Furthermore they can be used as (parts of the) mean processes in stochastic volatility models, or as interesting mathematical objects in their own right. Since the definitions and notations for some of the classes of processes we are treating are not unified, let us first fix some terminology: a Markov process in the above sense, i.e. satisfying (1.1), is called
Hunt process if it is quasi-left continuous (cf. Definition I.2.25 of [12]) with respect to every P^x (x ∈ R^d). For a deterministic process this is equivalent to ordinary left continuity of the paths. A Markov process (X, P^x)_{x∈R^d} is a semimartingale, if X^x is one for every x ∈ R^d. A Markov semimartingale X is called Itô process (cf. [5] and [4]) if it has characteristics (
B, C, ν) of the form:

B^(j)_t(ω) = ∫_0^t ℓ^(j)(X_s(ω)) ds,  j = 1, ..., d,
C^jk_t(ω) = ∫_0^t Q^jk(X_s(ω)) ds,  j, k = 1, ..., d,
ν(ω; ds, dy) = N(X_s(ω), dy) ds,

where ℓ^(j), Q^jk: R^d → R are measurable functions, Q(x) = (Q^jk(x))_{1≤j,k≤d} is a positive semidefinite matrix for every x ∈ R^d, and N(x, ·) is a Borel transition kernel on R^d × B(R^d \ {0}).

The following diagram gives an overview of the interdependence of the classes of processes we are treating here:

rich Feller ⊂ Itô ⊂ Hunt semimartingale ⊂ Markov semimartingale ⊂ semimartingale
     ∩                     ∩                         ∩
   Feller    ⊂           Hunt           ⊂          Markov

In the present paper we are not too much concerned with Feller processes (only in Section 2, where we recall the definition). This is because we are interested in processes which exhibit jumps. A deterministic jump is a fixed time of discontinuity, which Feller processes do not have. We start with a simple construction principle which we proposed in [20] and which will be generalized in Section 3 below:
Example 1.1. Let R ⊆ R and let Φ: R → R be bijective and such that Φ(0) = 0. In this case a Markov process on R is given by

X^x_t(ω) := Φ(t + Φ^{-1}(x)), for every ω ∈ Ω, (1.3)

i.e. by shifting the function Φ to the left and to the right. By inverting the function x ↦ Φ(t + Φ^{-1}(x)) we know where a path being at time t in z ∈ R has started at time zero: x = Φ(Φ^{-1}(z) − t). The function Φ will be called a generating path of the process X, since it contains all the information of the process on R (cf. Definition 3.3 below). Obviously the restriction Φ(0) = 0 is not needed, and any shifted generating path t ↦ Φ(t − s) (s ∈ R) would have served as well. In general one needs infinitely many generating paths in order to describe a deterministic Markov process on R^d.

Let us have a look at a first example which emphasizes how rude a process can be if we do not assume any regularity of the paths, even though Φ is bijective on R, i.e. the process is given by a single generating path.

Example 1.2. Let Φ(t) := (t + 1)·1_{R\Q}(t) + t·1_Q(t) and use the construction principle described in Example 1.1: we obtain the deterministic Markov process on R

X^x_t = t + x + 1  if x ∈ Q and x + t ∉ Q,
        t + x      if x ∈ Q and x + t ∈ Q,
        t + x      if x ∉ Q and x + t ∉ Q,
        t + x − 1  if x ∉ Q and x + t ∈ Q.

For this process every path t ↦ X^x_t is discontinuous in every point. The image of every path is dense in [x − 1, ∞[.

In the deterministic world homogeneity in space can be written as follows:

for every t ≥ 0 and x, z ∈ R^d we have X^{x+z}_t = X^x_t + z. (1.4)

The process above is not homogeneous in space since

X^{−√2}_{√2} + √2 = √2 − 1 ≠ √2 + 1 = X^0_{√2},

but it is a natural question whether there are Markov processes which are homogeneous in space and time, but which are not Lévy processes. It turns out that the axiom of choice is needed in order to construct such processes.
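The case distinction of Example 1.2 can be checked mechanically if one restricts attention to numbers of the form a + b√2 with rational a, b (such a number is rational iff b = 0). This exact representation, and its encoding as pairs of Fractions, are assumptions of the sketch, not part of the paper:

```python
from fractions import Fraction as F

# Encode a + b*sqrt(2) as the pair (a, b) with rational a, b;
# such a number is rational iff b == 0.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def is_rational(u):
    return u[1] == 0

def phi(t):
    # Phi of Example 1.2: t on the rationals, t + 1 on the irrationals
    return t if is_rational(t) else (t[0] + 1, t[1])

def phi_inv(x):
    return x if is_rational(x) else (x[0] - 1, x[1])

def X(x, t):
    # construction principle (1.3): X_t^x = Phi(t + Phi^{-1}(x))
    return phi(add(t, phi_inv(x)))

zero, rt2, neg_rt2 = (F(0), F(0)), (F(0), F(1)), (F(0), F(-1))

# failure of space homogeneity: X^{-sqrt2}_{sqrt2} + sqrt2 != X^0_{sqrt2}
print(add(X(neg_rt2, rt2), rt2))  # encodes sqrt(2) - 1
print(X(zero, rt2))               # encodes sqrt(2) + 1
```

The two printed pairs differ, reproducing the inequality used in the text to rule out space homogeneity.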
We have not found a proof of the following result in the literature, but somehow it belongs to the folklore of our subject. The proof is not too difficult and hence left to the reader (compare in this context [10]).

Proposition 1.3.
Let X be a deterministic Markov process. X is homogeneous in time and space iff t ↦ X^0_t is a Q-homomorphism on the Q-vector space R and the other paths are given by X^x_t = X^0_t + x.

Deterministic Lévy processes form a one-dimensional subspace of the infinite-dimensional Q-vector space of deterministic Markov processes which are homogeneous in space and time.

Example 1.4. In order to get a concrete example of a deterministic Markov process which is homogeneous in space and time but not a Lévy process, let B = {1} ∪ {b_i : i ∈ I} be (an uncountable) algebraic Q-basis of R and set Φ(1) = 1, Φ(b_i) = 2b_i for every i ∈ I. The Φ defined as such (extended by Q-linearity) is bijective, since for every y ∈ R written as q_0 + q_1 b_1 + ... + q_n b_n we obtain q_0 + (q_1/2) b_1 + ... + (q_n/2) b_n as its pre-image under Φ.

In order to avoid pathological examples as above we will assume in the following that the processes we encounter are càdlàg.

Let (X^x)_{x∈R^d} be a deterministic Markov process and let x ∈ R^d. Immediately by (1.2) we obtain: if there exist t_1 < t_2 such that X^x_{t_1} = X^x_{t_2}, then X^x_{t_1+h} = X^x_{t_2+h} for every h ≥ 0. Hence, in order to describe X^x and every path starting in the range R of X^x one could use Φ(t) := X^x_t. This shows that we can be a bit more general by allowing generating paths which do become periodic in a certain sense. This is analyzed in detail in Section 3, where we solve our initial question III). While in the Hunt case a countable number of generating paths was sufficient, we show that in general an uncountable number of generating paths is needed in the presence of jumps. The case of 'well behaved' paths leads to a different perspective on the classification theorem for Hunt processes (cf. [20] Theorem 2.11).

Let us give a brief outline on how the paper is organized: in the subsequent section we treat the question of expandability, i.e. given a path, does there exist an element of a certain class of processes containing this path.
Section 3 deals with the general structure of such a process, while in Section 4 we prove a criterion for when a deterministic Markov process is a semimartingale. Section 5 is devoted to the more general case of time inhomogeneous processes. Some examples complementing those given in the text are collected in Section 6. The so-called space-time Markov process is included here. Our main results are Theorem 2.4 and the closely related Theorems 4.3 and 4.4.

Most of the notation we are using is more or less standard. Vectors are column vectors and ′ denotes a transposed vector or matrix. The j-th entry of the vector v is v^(j). For a set A ⊆ R and n ∈ N we write A + n := {r ∈ R : r = a + n for an a ∈ A}. Note that we prefer to write ]s, t[ for an open interval rather than (s, t), and use the same convention for semi-open intervals. In the context of semimartingales we follow mainly [12], in the context of Feller processes [16] and [11].

2 Expandability

In the construction of counterexamples one is often concerned with only one path rather than with a universal process, i.e. one considers a single starting point x and constructs a path which meets certain requirements. However, one is still interested in whether this path can really be a part of a process in the desired class. It is important for us to consider universal processes, since the definition of e.g. a Feller process does not make sense otherwise. In this section we answer the question whether a given right-continuous function f: [0,∞[ → R^d is a path of a deterministic Markov process or of one of its subclasses. By time homogeneity one automatically knows how the process has to behave starting from any point within the range of f. Outside of that range one can usually set X^x_t = x for every t ≥
0. The only exception to this rule is the class of Feller processes (see below).
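As a sketch of this recipe, take the finally constant path f(t) = min(t, 1) (a hypothetical choice): inside the range of f the process is f shifted via the first hitting time, outside the range it never moves.

```python
def f(t: float) -> float:
    """A right-continuous path: strictly increasing on [0, 1[, constant afterwards."""
    return min(t, 1.0)

def f_hit(y: float) -> float:
    """First time the path visits y in [0, 1]; for this particular f it is y itself."""
    return y

def X(x: float, t: float) -> float:
    """Candidate expansion: shift f if x lies in the range f([0, oo[) = [0, 1];
    otherwise the process sits still in x."""
    if 0.0 <= x <= 1.0:
        return f(t + f_hit(x))
    return x

print([X(0.0, t) for t in (0.0, 0.5, 2.0)])  # starting in f(0): [0.0, 0.5, 1.0]
print(X(-3.0, 7.0))                          # outside the range: -3.0
```

Note that the path starting in f(0) = 0 reproduces f itself, which is exactly the expandability requirement formalized next.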
Definition 2.1.
A right-continuous function f: [0,∞[ → R^d is expandable to an element of a certain class of processes C if there exists a deterministic process X ∈ C such that X^{f(0)}_t = f(t) for every t ≥
0. We write C-expandable for short.

By the same reasoning as in the proof of Theorem 2.7 of [20] we obtain that a path of a one-dimensional deterministic Markov process cannot move continuously to a point it has visited before, i.e. either the function becomes constant at a time t_1 or it jumps to a point it has visited before. The two cases are illustrated below.

[Two figures: a path which becomes constant, and a path which jumps back to a previously visited point.]

Therefore, the key is the following kind of periodicity.
Definition 2.2.
Let f: [0,∞[ → R^d be right-continuous. We define

t_1 := inf{t ≥ 0 : there exists s < t such that f(s) = f(t)}

and

t_0 := inf{s ≥ 0 : there exists t > s such that f(s) = f(t)}.

(a) f is called jump-periodic if the following holds: t_0 < t_1 and for every u ≥ t_1 we have f(u) = f(u − (t_1 − t_0)).
(b) f is called finally constant, if there exists a t_1 ≥ 0 such that f is injective on [0, t_1[ and constant on [t_1, ∞[. (Hence t_0 = t_1.)
(c) Let f be jump-periodic. The function f^←: f([0,∞[) → [0, t_1[ given by f^←(y) := inf{t ≥ 0 : f(t) = y} is called the generalized inverse of f.

Remark 2.3. (a) In the finally constant case one could speak of a period of length zero, in the jump-periodic case of a period of length t_1 − t_0. If a path never visits a point it has visited before, we obtain a period of length infinity.
(b) Not every periodic function is Markov-expandable (e.g. t ↦ sin(t)). However, for a jump-periodic function f the shifted function t ↦ f(t + t_1) is periodic.
(c) By the right-continuity of the function all infima of the above definition are minima, except for inf ∅ = ∞.
(d) The notions finally constant and jump-periodic are transferred easily to generating paths, i.e. functions on intervals I ⊆ R (see Definition 3.3 below). Obviously this only makes sense if the right endpoint of I is ∞.
(e) The notion jump-periodic might be misleading in two or more dimensions, because there does not have to be a jump in order to reach a point the process has visited before. An example of this kind is given by f(t) := (sin(t), cos(t))′.

Theorem 2.4.
A function f: [0,∞[ → R^d is Markov-expandable iff it is injective or finally constant or jump-periodic.

Proof. Let f be Markov-expandable, i.e. there exists a deterministic Markov process such that X^{f(0)}_t = f(t). If this path is not injective, there exist points s < t such that f(s) = f(t). Now let t_0 ≤ t_1 < ∞ be defined as above. If t_0 = t_1, the path is constant on [t_1, t_1 + ε[ for some ε > 0 and hence constant on [t_1, ∞[. Otherwise we obtain f(t_0 + h) = f(t_1 + h) for every h ≥ 0. Every u ≥ t_1 can be written as u = t_1 + h with h ≥ 0. Since u − (t_1 − t_0) = t_0 + h, we obtain f(u) = f(u − (t_1 − t_0)).

Now let one of the three properties be fulfilled.

Case 1: t_0 = t_1 = ∞. In this case the construction described in 1.1 works, i.e. take f as generating path on f([0,∞[). Outside of the range of f we set X^x_t = x for every t ≥ 0.

Case 2: t_0 = t_1 < ∞. The function becomes constant at t_1, i.e. f(t) = c for t ≥ t_1. In this case there is only one point which is visited more than once. The construction works as in Case 1, the only difference being that f is inverted only on [0, t_1[. Paths hitting c become constant. In particular we have X^c_t = c for every t ≥
0. Time homogeneity is then clear and the Markov property is trivially fulfilled, since X^x_{t_1} = X^y_{t_2} for x ≠ y happens only in the case X^x_{t_1} = c = X^y_{t_2}, and in this case we have X^x_{t_1+h} = c = X^y_{t_2+h} for every h ≥ 0.

Case 3: t_0 < t_1 < ∞. Set

X^x_t := f(t + f^←(x)) if x ∈ f([0,∞[), and X^x_t := x else.

Now let z, w ∈ R^d, t, h ≥ 0 and x ∈ f([0,∞[) be such that X^x_t = z. For t < t_1 we know where the path being at time t in z ∈ f([0,∞[) has started at time zero: x = f(f^←(z) − t) if t ≤ f^←(z). Otherwise (i.e. f^←(z) < t < t_1) either f^←(z) ≤ t_0 and there is no x such that X^x_t = z, or t_0 < f^←(z) < t < t_1 and we have x = f(f^←(z) − t + t_1 − t_0). The Markov property is clear, since by definition paths which coincide at some point of time coincide from then on. By jump-periodicity we can subtract n·(t_1 − t_0) for a suitable n ∈ N 'inside' of f and can therefore assume w.l.o.g. t < t_1. We obtain:

P^x_{t,t+h}(z, {w}) = 1
⇔ f((t + h) + f^←(x)) = w
⇔ f((t + h) + f^←(f(f^←(z) − t))) = w
⇔ f(h + f^←(z)) = w
⇔ P_{0,h}(z, {w}) = 1.

This yields the homogeneity in time. Finally we have X^{f(0)}_t = f(t) since f^←(f(0)) = 0.

Virtually all examples of homogeneous diffusions (with jumps) in the sense of [12] in the literature are Markov processes. As a first application of the theorem we prove the existence of a homogeneous diffusion which is not Markovian.

Example 2.5.
We define the following deterministic process: denote for t ∈ R by {t} := t − ⌊t⌋ the sawtooth function and define

f(t) := (6{t}, 0)′ · 1_{[0,1/6[}({t}) + (2 − 6{t}, 6{t} − 1)′ · 1_{[1/6,2/6[}({t}) + (0, 3 − 6{t})′ · 1_{[2/6,3/6[}({t})
      + (3 − 6{t}, 0)′ · 1_{[3/6,4/6[}({t}) + (6{t} − 5, 4 − 6{t})′ · 1_{[4/6,5/6[}({t}) + (0, 6{t} − 6)′ · 1_{[5/6,1[}({t}).

We define the process X by

X^x_t := f(t + f^←(x)) if x ∈ f([0,∞[), and X^x_t := x else.

[Figure: the trajectory of the process.] The trajectory runs through the closed curve (0,0)′ → (1,0)′ → (0,1)′ → (0,0)′ → (−1,0)′ → (0,−1)′ → (0,0)′, where the second and fifth legs are edges of the diamond |y^(1)| + |y^(2)| = 1. Every path satisfies

X_t = x + ∫_0^t ℓ(X_s) ds (2.1)

for every starting point x ∈ R^2, where ℓ: R^2 → R^2 is given by

ℓ(y) = (6 sign(y^(1)), 0)′            if y^(2) = 0 and |y^(1)| ≤ 1,
       (0, −6 sign(y^(2)))′           if y^(1) = 0 and |y^(2)| ≤ 1,
       (−6 sign(y^(1)), 6 sign(y^(2)))′ if |y^(1)| + |y^(2)| = 1,
       (0, 0)′                         else.

Therefore the process is a homogeneous diffusion, for which ℓ does not depend on the starting point, but by Theorem 2.4 it is not a Markov process.

Remark 2.6. The integral equation (2.1) has infinitely many solutions. Out of these only 2 solutions are Markovian, namely the one where the process reaching (0,0)′ always goes 'up' and the one which always goes 'down' when reaching this point.

Corollary 2.7.
A function f: [0,∞[ → R^d is Hunt-expandable iff it is Markov-expandable and continuous.

The order structure of R gives rise to the following characterization in the one-dimensional case. Compare in this context [20] Theorem 2.7.

Corollary 2.8.
A function f: [0,∞[ → R is Hunt-expandable iff it is continuous and there exists a t_1 ∈ [0,∞] such that t ↦ f(t) is strictly monotone (increasing or decreasing) on [0, t_1[ and constant on [t_1, ∞[.

Since t_1 ∈ [0,∞], the 'pure types' of paths which are only constant or strictly monotone are included in this corollary.

Proposition 2.9.
A function f: [0,∞[ → R^d is Itô-expandable iff it is Hunt-expandable and in addition each component of the vector f − f(0) is absolutely continuous w.r.t. Lebesgue measure.

Proof. For a deterministic Hunt semimartingale the second and third characteristics vanish. Therefore the first characteristic can be written as B^x_t = X^x_t − X^x_0 on the image of f, where X^x_t is just f shifted to the left. The absolute continuity is inherited from f. Everywhere else we set X^x_t = x for every t ≥
0, hence B^x_t = 0, which is as well absolutely continuous. The density b̃ we obtain is deterministic and hence optional w.r.t. any filtration (compare in this context [5] Theorem 6.25 and the introduction to Section 7 of that article).

An important subclass of Hunt processes are Feller processes. If the semigroup (T_t)_{t≥0} of operators, which is defined on the bounded Borel measurable functions by

T_t u(x) = E^x u(X_t) = ∫_Ω u(X_t(ω)) P^x(dω) = ∫_{R^d} u(y) P_t(x, dy),

satisfies the following conditions

(F1) T_t: C_∞(R^d) → C_∞(R^d) for every t ≥ 0,
(F
2) lim_{t↓0} ‖T_t u − u‖_∞ = 0 for every u ∈ C_∞(R^d),

we call the process (and the semigroup) Feller. The following proposition only holds in the one-dimensional case.
Proposition 2.10.
A function f: [0,∞[ → R is Feller-expandable iff it is Hunt-expandable.

Proof. The 'only if'-part is clear. Now let f be Hunt-expandable. W.l.o.g. let f be increasing up to time t_1 (cf. Corollary 2.8). Set

Φ(t) := f(t) if t ≥ 0, and Φ(t) := f(0) + t if t < 0,

as well as

X^x_t := Φ(t + Φ^←(x)) if x ∈ Φ(]−∞,∞[), and X^x_t := x else.

The image of Φ is either ]−∞,∞[, or there exists an a ∈ R such that the image is ]−∞, a[ or ]−∞, a]; in the latter cases we set X^x_t := x for every t ≥
0. It is simple to show that (F1) and (F2) are fulfilled by this process. Compare in this context Theorem 3.5 of [20].

In the general case we have the following:
Proposition 2.11.
If a function f: [0,∞[ → R^d is continuous and finally constant or jump-periodic, it is Feller-expandable.

Proof.
The property (F2) is trivially fulfilled. The whole information is contained in the restriction f|[0, t_1]. Since the image f([0, t_1]) is compact, vanishing at infinity is not a problem. In fact the only problems which could occur are by destroying continuity. However, since f([0, t_1]) is closed, those problems could only show up 'within' the curve. Now assume that there were a sequence (x_n)_{n∈N} ⊆ f([0, t_1]), times t_n with f(t_n) = x_n, and an h > 0 such that

X_{t_n} = x_n → x (n → ∞) and (X_{t_n+h})_{n∈N} is divergent.

We have x ∈ f([0, t_1]). By compactness we have that there are two points y ≠ z and subsequences (n(k))_{k∈N} and (n(j))_{j∈N} such that

X_{t_{n(k)}+h} → y ≠ z ← X_{t_{n(j)}+h}.

Since the t_{n(k)} and t_{n(j)} are both restricted to the compact interval [0, t_1], there exist convergent sub-subsequences such that

t_{n(k(l))} → t and X_{t_{n(k(l))}+h} → y and X_{t_{n(k(l))}} → x,
t_{n(j(i))} → s and X_{t_{n(j(i))}+h} → z and X_{t_{n(j(i))}} → x.

By continuity of f we obtain X_t = x and X_{t+h} = y, while X_s = x and X_{s+h} = z. This contradicts time homogeneity.

We complement the previous two propositions by an example of a two-dimensional Hunt-expandable function which is not Feller-expandable.

Example 2.12. The following function f: [0,∞[ → R^2 is Hunt-expandable but not Feller-expandable:

f(t) := Σ_{n=0}^∞ [ ( (1 − 2^{−n}, 0)′ + (2^{−(n+1)}, (−1)^n)′ (t − 2n) ) · 1_{[2n, 2n+1[}(t)
                  + ( (1 − 2^{−(n+1)}, (−1)^n)′ + (0, (−1)^{n+1})′ (t − (2n+1)) ) · 1_{[2n+1, 2n+2[}(t) ].

For every u ∈ C_∞(R^2) which coincides with the projection on the second component on [0, 1] × [−1,
1] we obtain

x_n := X_{2n} = (1 − (1/2)^n, 0)′ → (1, 0)′ as n → ∞, but u(X_{2n+1}) = 1 if n is even and −1 if n is odd.

Hence T_1 u(x_n) = u(X^{x_n}_1) = u(X_{2n+1}) does not converge, although (x_n) does; therefore T_1 u is not continuous and (F1) fails.

The generator (A, D(A)) is the strong right-hand side derivative of the semigroup (T_t)_{t≥0} at zero. A Feller process is called rich if the domain D(A) of this closed operator contains the test functions C_c^∞(R^d). In [21] it was shown that every rich Feller process is an Itô process. By Proposition 3.8 of [20] and the above reasoning we obtain the following:

Proposition 2.13.
A function f: [0,∞[ → R is rich-Feller-expandable iff it is continuously differentiable on ]0,∞[, right-differentiable at zero, and if there exists a t_1 ∈ [0,∞] such that ∂_t f(t) ≠ 0 on [0, t_1[ and ∂_t f(t) = 0 on ]t_1, ∞[.

Rich Feller processes are interesting because, by a classical result which is due to Courrège [6], one knows that their generator A is a pseudo-differential operator. The symbol of this operator contains a lot of information about the corresponding process (cf. [18], [19]).

Remark 2.14. For a function f: [0,∞[ → R^d to be Lévy-expandable it is necessary and sufficient to be of the form t ↦ f(0) + c·t with c ∈ R^d. This follows directly from the Lévy–Itô decomposition (cf. e.g. [15] Theorem I.42).

3 The general structure of a deterministic Markov process

Our starting point is the following result for Hunt processes (see [20] Section 2). We will see in the following that the generalization to deterministic processes with jumps is not straightforward and a new approach is needed. We will work in one dimension, since even in the Hunt case there is no possibility to generalize the result to higher dimensions.
Theorem 3.1.
A family of functions t ↦ X^x_t, each mapping [0,∞[ into R, is a deterministic Hunt process if and only if there exists a decomposition of R into disjoint ordered intervals (J_j)_{j∈Z}, where Z ⊆ {−n, ..., 0, ..., m} with n, m ∈ N ∪ {∞}, such that on every interval J_j the functions t ↦ X^x_t (for x ∈ J_j) are either all constant, or there exists a continuous generating path Φ_j: I_j → J_j for J_j which is surjective and either strictly monotonically increasing or strictly monotonically decreasing, and such that

X^x_t = Φ_j(t + Φ_j^{−1}(x)) for x ∈ J_j and t ∈ [0,∞[ ∩ (I_j − Φ_j^{−1}(x)),

where the I_j are intervals containing zero.

With every interval J_j we associate the type ⊕, ⊖ resp. ⊙, if Φ_j is increasing, decreasing resp. the process is constant on the interval. This allows us to describe the process from an abstract point of view as a sequence like ...|⊙|⊕|⊙|⊖|... . This sequence is called the structure of the process. To emphasize that the upper endpoint of the lower interval belongs to the lower (resp. higher) interval we write ⊙]⊕ (resp. ⊙[⊕). The behavior of the paths of the deterministic Hunt process is totally described by the decomposition (J_j)_{j∈Z} and the sequence of generating paths. Compare in this context Remark 2.13 of [20]. In particular we described there which structures are forbidden and by which conventions one gets a unique representation.

The theorem above can be used to construct deterministic Markov processes with jumps. A first idea is to just add a 'jump structure' to a given Hunt process, i.e. a sequence (a_n, b_n) with a_n ∈ R, b_n ∈ R, meaning that if

lim_{s↑t} X_s = a_n, then X_t = b_n.

Starting from b_n the process moves on as the underlying Hunt process. In addition we could allow the process to jump from a_n to b_n^+ resp. b_n^−, depending on whether the path was increasing before reaching a_n or decreasing.
Let us remark that this does not contradict the Markov property, since the process does not actually reach a_n. For a (one-dimensional) Hunt process there exist only the two possibilities to reach a point: from above or from below. Using such a jump structure one can construct several interesting examples of Markov processes.

It would be nice if every deterministic Markov process could be described by the above construction. However, this is not the case. The following example shows that we do not even necessarily have an interval of monotonicity before a jump:

Example 3.2. Let

X_t := (1 − t)·1_A(t) + (t − 1)·1_B(t) + 2·1_{{1}}(t) + (1 + t)·1_C(t) + (3 − t)·1_D(t) + 3·1_{[2,∞[}(t),

where

A := ∪_{n ∈ 2N} [1 − 2^{−n}, 1 − 2^{−(n+1)}[,
B := ∪_{n ∈ 2N+1} [1 − 2^{−n}, 1 − 2^{−(n+1)}[,
C := ∪_{n ∈ 2N+1} [1 + 2^{−(n+1)}, 1 + 2^{−n}[,
D := ∪_{n ∈ 2N} [1 + 2^{−(n+1)}, 1 + 2^{−n}[.

[Figure: sketch of this path.] A point a_n can be approached in different ways (which are difficult to describe, since there is no interval of monotonicity), resulting in different 'landing points' b_n^k (k = 1, 2), or even in infinitely many b_n^k (k ∈ N) for one a_n.

Since it is not enough to use the generating paths of a Hunt process plus a jump structure, we have to generalize the notion of generating paths itself, in order to capture the behavior of non-Hunt paths:

Definition 3.3.
Let X be a deterministic Markov process and let I be an interval of the form ]a, ∞[ respectively [a, ∞[ (with a ∈ [−∞, ∞[ respectively a ∈ ]−∞, ∞[). A surjective function Φ: I → J is called a generating path of X on J ⊆ R^d, if it is injective or jump-periodic or finally constant and (1.3) holds for every x ∈ J and t ∈ [0,∞[ ∩ (I − Φ^{−1}(x)).

Remark 3.4. Let us emphasize why generating paths are useful: think of a one-dimensional deterministic Lévy process which is increasing. Using a function f(y) = cy, defined on [0,∞[ as in the previous section, we can only describe the process for x ∈ [0,∞[ by shifting this function to the left. In order to describe the whole process, we would need a sequence of such functions like f_n(y) = −n + cy (describing the process on [−n, ∞[). In contrast to this, one generating path Φ(y) := cy defined on ]−∞, ∞[ describes the whole process by shifting it to the left and right.

Even with the examples 3.2, 6.6 and 6.7 in mind one could still hope that a countable number of generating paths is sufficient in order to describe the whole process. The following example shows that this is not the case, even if the process is a semimartingale with bounded increasing paths.

Example 3.5. Let C be the Cantor set. It is well known that

C = { x ∈ [0,
1] : there exists a representation x = Σ_{j∈N} a_j (1/3)^j with a_j ∈ {0, 2} }  (⋆)

and that every y ∈ [0,
1] can be written as y = Σ_{j∈N} b_j (1/3)^j with b_j ∈ {0, 1, 2}. These representations are unique up to 0- respectively 2-periods:

(1/3)^j = 2 (1/3)^{j+1} + 2 (1/3)^{j+2} + ...

Let C̃ ⊆ C be the set of all points in the Cantor set with a unique representation. Obviously C̃ is uncountable, since C \ C̃ is countable. For every x ∈ C̃ infinitely many a_j's are 0 and infinitely many a_j's are 2. C̃ will be the set of our starting points. Now we define for every x ∈ C̃ a path X^x such that the paths have no common points: let x ∈ C̃, let (q_n)_{n∈N} ⊆ Q^*_+ be an enumeration of Q^*_+ = Q ∩ ]0, ∞[ and let (h^x_n)_{n∈N} ⊆ N be an enumeration of the indices j in the representation (⋆) such that a_j = 0. Set

X^x_t := x + Σ_{n∈N} (1/3)^{h^x_n} · 1_{[q_n, ∞[}(t).

Since the convergence of the series is uniform, the function t ↦ X^x_t is càdlàg, and by definition the function is strictly increasing (the function jumps upwards in every q ∈ Q^*_+). In particular the process is a semimartingale which is bounded, since X^x_t reaches only values in [0, 1]. Moreover, t ↦ X^x_t takes only values y which can be represented in the following way:

y = Σ_{j∈N} c_j (1/3)^j such that c_j = 2 if a_j = 2 and c_j ∈ {0, 1} if a_j = 0.

Since the representation (⋆) is unique (up to the excluded case of periods), we obtain that the paths are disjoint. In every point z which is not reached by one of the paths we set X^z_t = z for every t ≥ 0.

Remark 3.6. A countable number of generating paths is sufficient, if the process is well behaved in the following sense: the image f^x([0,∞[) of every path can be written as the union of intervals I^x_i such that there exist t_0 < t_1 with f^x([t_0, t_1[) = I^x_i, where t_0 is a jump time and t_1 is a jump time or the time where the path becomes constant, and there is no jump time between t_0 and t_1. Though this definition of well behaved seems to be rather restrictive, it is met by every example in this paper except 3.5 (and those of Section 1 which are not càdlàg).
Since the paths have to be continuous between the times t_0 and t_1 defined as above, we obtain by time homogeneity that we have for x ≠ y and indices i, j

I^x_i ∩ I^y_j = ∅ or I^x_i ⊆ I^y_j or I^y_j ⊆ I^x_i.

This allows us to take a system S of disjoint intervals such that every I^x_i is either contained in S or is a subset of an element of S. This system is countable, since every union of disjoint intervals of R is. On every such interval we get a generating path as in the proof of Theorem 2.11 of [20].

4 The semimartingale property

In the present section we characterize the semimartingale property of a one-dimensional deterministic Markov process and show that such a result cannot hold in dimensions d ≥
2. The following classical result is adapted from [12] Proposition I.4.28:
Proposition 4.1.
Let X be a deterministic process. X is a semimartingale iff t ↦ X^x_t is càdlàg and of finite variation on compact intervals for every x ∈ R^d.

Every one-dimensional deterministic
Hunt process is a semimartingale. This is not the case for Markov processes with jumps, as the following example illustrates, in which we use the construction principle of the previous section:
Example 4.2. Let X be the deterministic Feller process with structure ⊕|⊙|⊖. Let the ⊙-domain be {0}, that is X^0_t = 0, and let the generating paths be

  Φ⊖ : ]−∞, 1[ → ]0, ∞[ given by Φ⊖(t) = 1 − t,
  Φ⊕ : ]−∞, 1[ → ]−∞, 0[ given by Φ⊕(t) = t − 1.

These define the process on ]0, ∞[ respectively on ]−∞, 0[. Now consider the path starting in x = 1 and add a jump structure such that the resulting path jumps so often from y to −y (y ∈ ]−1, 1], y ≠ 0) that the jumps are not summable. If the only accumulation point of jump times is 1, this results in a càdlàg Markov process which is not a semimartingale, because the path starting in 1 is of infinite variation on [0, 1].

Theorem 4.3.
A one-dimensional deterministic càdlàg Markov process X is a semimartingale if and only if the jumps of every path t ↦ X^x_t (x ∈ ℝ) are locally absolutely summable.

This result is wrong for dimensions d ≥ 2. Just take the function g : [0, ∞[ → ℝ² given by g(t) = (f(t), t)′, where f is a continuous function of infinite variation on compacts. The drift in the second variable ensures that the path is injective. Theorem 2.4 shows that this function is Markov expandable (setting X^x_t = x for every t ≥ 0 for every x outside the range of g). However, since one path is of infinite variation, the process, though it does not have jumps at all, is still no semimartingale.

In order to prove this theorem we need the following result, which belongs to one-dimensional analysis. Even in detailed accounts on the interplay between properties of real valued functions like [13], we have not found it. Therefore, we provide a proof.

Theorem 4.4.
A function h : [0, T] → ℝ which is càdlàg and injective is of finite variation if and only if its jumps are absolutely summable.

While the 'only if' part of the theorem is well known (injectivity is not even needed for this direction), the 'if' part is more involved. We need the following two lemmas to prove this direction. The proof of the first one is elementary, while the second one can be found e.g. in [14], Corollary VIII.3.1.
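Theorem 4.4 can be probed numerically. The sketch below is an editorial illustration, not part of the paper: it compares two càdlàg paths built from a small strictly increasing drift plus jumps at t_n = 1 − 2^{−n}. With absolutely summable jump sizes the variation along a partition straddling the jump times stays bounded by drift plus jump mass; with jump sizes (−1)^n/n it grows like the harmonic series.

```python
def path(t, sizes, drift=1e-3):
    """Cadlag path: a small increasing drift plus jumps of the given
    sizes at the times t_n = 1 - 2**-n."""
    value = drift * t
    for n, jump in enumerate(sizes, start=1):
        if t >= 1 - 2.0 ** (-n):
            value += jump
    return value

def variation(f, pts):
    """Variation of f along the partition pts."""
    return sum(abs(f(b) - f(a)) for a, b in zip(pts, pts[1:]))

def grid(levels, eps=1e-12):
    pts = [0.0]
    for n in range(1, levels + 1):
        t_n = 1 - 2.0 ** (-n)
        pts += [t_n - eps, t_n]     # straddle every jump time
    return pts

N = 30
summable  = [(-1) ** n * 2.0 ** (-n) for n in range(1, N + 1)]
divergent = [(-1) ** n / n for n in range(1, N + 1)]

v_sum = variation(lambda t: path(t, summable), grid(N))
v_div = variation(lambda t: path(t, divergent), grid(N))

assert v_sum < 1.2    # jump mass sums to ~1, drift adds ~1e-3
assert v_div > 3.0    # grows like the harmonic series
```

The second path has non-summable jumps, so its variation keeps growing as the partition resolves more jump times — the numerical face of the 'only if' direction.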
Lemma 4.5.
Let a < c and let f : [a, c[ → ℝ be a continuous function such that f(a) = f(c−). Furthermore let b ∈ [a, c[ with f(a) ≠ f(b). Let g : [a, c[ → ℝ be a càdlàg function which is pure jump, that is,

  g(x) = Σ_{a ≤ s ≤ x} (g(s) − g(s−)).

If Σ_{a ≤ s ≤ c} |Δg(s)| < (1/2)·|f(a) − f(b)|, then f + g is not injective.

Lemma 4.6.
Let a < b < c. A function f : [a, c] → ℝ is of finite variation on [a, c] if and only if it is of finite variation on [a, b] and on [b, c].

Proof of Theorem 4.4. Let h : [0, T] → ℝ be an injective càdlàg function of bounded variation. Divide this function into two increasing càdlàg functions, h = h⁺ − h⁻, such that h⁺ and h⁻ do not jump at the same time. We obtain

  Σ_{0 ≤ s ≤ T, |Δh(s)| > 0} |Δh(s)| = Σ_{0 ≤ s ≤ T, |Δh⁺(s)| > 0} |Δh⁺(s)| + Σ_{0 ≤ s ≤ T, |Δh⁻(s)| > 0} |Δh⁻(s)|
   ≤ (h⁺(T) − h⁺(0)) + (h⁻(T) − h⁻(0)) < ∞.

Now suppose the theorem was false. Then we could find an injective càdlàg function h : [0, T] → ℝ, with absolutely summable jumps, which is of infinite variation. Divide the càdlàg function h into its continuous part f and its pure jump part g. The function g is of finite variation, since the jumps of h are absolutely summable on [0, T]. Therefore, f has to be of infinite variation on [0, T], because the functions of finite variation form a vector space. Hence, there exists a sequence of partitions π_n := (0 = t^n_0 < t^n_1 < ... < t^n_{k(n)} = T) of [0, T] such that

  Σ_{t^n_j, t^n_{j+1} ∈ π_n} |f(t^n_{j+1}) − f(t^n_j)| → ∞ for n → ∞.  (4.1)

Since the jumps of h are absolutely summable, there exists a constant C_g > 0 with Σ_{|Δg(s)| > 0} |Δg(s)| = C_g < ∞, and since f is continuous, there exists a constant C_f > 0 such that |f| is bounded by C_f. By (4.1) we obtain that there exists an N ∈ ℕ such that

  Σ_{t^N_j, t^N_{j+1} ∈ π_N} |f(t^N_{j+1}) − f(t^N_j)| > C_g + C_f.

This N is fixed from now on and we write t_j := t^N_j (j = 1, ..., k) in order to simplify notation. W.l.o.g. we assume that f(0) = 0 and f ≥ 0.
If these properties are not met, we can shift the function (up/down) and analyze the positive and negative part separately. Furthermore we assume w.l.o.g. that for every j we have f(t_j) − f(t_{j−1}) ≠ 0 and

  f(t_j) − f(t_{j−1}) < 0 ⟺ f(t_{j+1}) − f(t_j) > 0,

that is, there are no 'useless points' in the partition π_N. The idea is now to rearrange f (and hence the original function h) in such a way that Lemma 4.5 is applicable.

Step 1: introduce new points into the partition. We set

  s^l_1 := t_0,  s^r_1 := t_1,  (4.2)
  s^l_2 := min{ t > s^r_1 : f(t) = f(s^r_1) },
  s^r_2 := min{ t_k ∈ π_N : t_k > s^l_2 },
  s^l_3 := min{ t > s^r_2 : f(t) = f(s^r_2) },
  ...

Stop this procedure when reaching the last point T = t_k or when the value f(s^r_j) is zero. In the latter case start the procedure again in s^l_{j+1} := s^r_j to build up the next 'hill'. We call the function f restricted to the set [s^l_1, s^r_1[ ∪ ... ∪ [s^l_j, s^r_j[ a hill, though it is not the function which looks like a hill, but the traverse line through the points (s^l_1, f(s^l_1)), ..., (s^l_j, f(s^l_j)). We continue with the next hill until finally reaching

  T_new := T if f(t_k) < f(t_{k−1}), and T_new := t_{k−1} else.

We stop in T_new, since otherwise, starting from T, line (4.2) is not defined (we do not need the next, but the next but one point). We left several gaps between the intervals. Now we treat every interval [s^r_j, s^l_{j+1}] in the same way as above, that is,

  s^l_{1j} := s^r_j,
  s^r_{1j} := min{ t_k ∈ π_N : t_k > s^l_{1j} },
  s^l_{2j} := min{ t > s^r_{1j} : f(t) = f(s^r_{1j}) },
  s^r_{2j} := min{ t_k ∈ π_N : t_k > s^l_{2j} },
  s^l_{3j} := min{ t > s^r_{2j} : f(t) = f(s^r_{2j}) },
  ...

until we reach s^l_{j+1}. Of course, it can happen that s^r_{ij} = s^l_{j+1}. In this case a third hierarchical level is not needed on the interval [s^r_j, s^l_{j+1}]. If there are intervals in the second hierarchical level where we have to introduce new points s^r_{ij} and s^l_{i(j+1)}, then we treat the interval [s^r_{ij}, s^l_{i(j+1)}] again as above.
We continue with this procedure until the union of all intervals (of all hierarchical levels) covers [0, T_new]. Note that the number of additional points is finite. It is simple to prove that an upper bound is given by k², but one can prove better estimates.

Step 2: the last hill remains unfinished. Let s^l_M be the maximum of the interval endpoints in the first hierarchical level with the property f(s^l_M) = 0. If f(T_new) ≠ 0 we do not have the property f(s^l_M) = f(T_new). In order to bring Lemma 4.5 into account, we introduce a new intermediate point:

  s^m_M := inf{ t > s^l_M : f(t) = f(T_new) }.

This gives us two new intervals: one with no intermediate points and the property f(s^l_M)
Proof of Theorem 4.3. The 'only if' part is clear by the above theorem. Now let the jumps of X be locally absolutely summable. Fix x ∈ ℝ and T > 0. By Theorem 2.4 we know what the paths of a Markov process look like. We exclude the trivial case X^x_t ≡ x. By Lemma 4.6 we can and will assume that the path h(t) := X^x_t is injective on [0, T]: if it is not, there exist t_1 < t_2 ≤ T with h(t_1) = h(t_2) (cf. Remark 2.2), and we then separate the interval into

  [0, t_1 − d], [t_1 − d, t_1], [t_1, t_1 + d], [t_1 + d, t_1 + 2d], ..., [t_1 + nd, T],

where d := (t_2 − t_1)/2. We divide by 2, because h(t_1) = h(t_2). The result follows from Theorem 4.4.

5 The Time Inhomogeneous Case

For most of our results time homogeneity is a key ingredient, and the results (or analogues of the results) do not hold in the time inhomogeneous case. However, complementing the results of Section 2 we have the following:
Proposition 5.1.
Every function f : [0, ∞[ → ℝ^d is expandable to a space homogeneous Markov process (which is in general not time homogeneous).

Proof. Just set X^x_t := x + (f(t) − f(0)).

The proposition says in particular that every function is expandable to a time inhomogeneous Markov process. The requirements on f (cf. Theorem 2.4) are not needed. To find a criterion for a time inhomogeneous Markov process to be a semimartingale (cf. Section 4) which is better than Proposition 4.1 is hopeless: just take Φ to be any function of unbounded variation plus the standard construction principle. Note that we can not use the space-time Markov process (X_t, t) in order to establish an analogous result, since our result only holds in one dimension. This is interesting from a philosophical point of view: in the literature one sometimes gets the impression that time inhomogeneous Markov processes are not of any interest, since one can always use the space-time Markov process in order to transfer results. Here we can see that this is not the case. Compare in this context Example 6.8.

Remark. The difference between the time homogeneous and the time inhomogeneous case can be emphasized by having a look at the following situation: two continuous paths X^x and X^y with x < y hit each other for the first time at t_0. By the Markov property they have to 'stick together' afterwards. In the time homogeneous case this is only possible if X^x is monotone increasing on [0, t_0[ and X^y is monotone decreasing on [0, t_0[. Furthermore, X^x and X^y are constant on [t_0, ∞[. Every path X^z with x ≤ z ≤ y is known, and if for any w ∈ ]−∞, x[ ∪ ]y, ∞[ there exists a t_1 > 0 such that X^w_{t_1} ∈ [x, y], we know how the path behaves on [t_1, ∞[. In the time inhomogeneous case we do not know anything about the behavior of X^x (resp. X^y) on ]t_0, ∞[ or about any path X^w with w ∈ ]−∞, x[ ∪ ]y, ∞[. The only thing we can say is that for every continuous path X^z (x ≤ z ≤ y) we have X^z = X^y on [t_0, ∞[.
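Proposition 5.1 and its one-line proof are easy to check in code. The sketch below is an editorial illustration (the concrete f is an arbitrary choice, not from the paper): it verifies that X^x_t := x + (f(t) − f(0)) is space homogeneous and satisfies the two-step flow property of a time inhomogeneous Markov evolution, even for a discontinuous f.

```python
import math

def f(t):
    # an arbitrary (discontinuous) function; no regularity is needed,
    # since the requirements of Theorem 2.4 drop out in this setting
    return math.sin(3 * t) + (t * t if t < 1 else -t)

def X(x, s, t):
    """Position at time t of the path sitting in x at time s:
    the expansion X^x_t = x + (f(t) - f(s))."""
    return x + f(t) - f(s)

# space homogeneity: the increment X^x_t - x does not depend on x
assert abs((X(0.0, 0.0, 2.5) - 0.0) - (X(7.0, 0.0, 2.5) - 7.0)) < 1e-12

# flow property: restarting the path mid-way changes nothing
x, s, t = 1.5, 0.7, 2.2
y = X(x, 0.0, s)
assert abs(X(y, s, t) - X(x, 0.0, t)) < 1e-12
```

The flow property holds because the transition kernels are deterministic shifts: going from time 0 to s and then from s to t composes the increments f(s) − f(0) and f(t) − f(s), which telescope.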
[Figure: two paths meeting at time t_0 — left panel: the time homogeneous case, right panel: the time inhomogeneous case]

6 Further Examples

In this final section we have collected several examples which complement the results of the previous sections.
Example 6.1. Let h : ℝ → [0, 1] be the Cantor function (cf. [7] and [8], Section 8.4) and define g : [0, 1] → [0, 1] by g(y) := (1/2)·(h(y) + y). For x ∈ ℝ let Φ(x) := g(x − [x]) + [x] with g defined above and x ↦ [x] denoting the floor function. The stochastic process X = (X_t)_{t≥0} defined by

  X^x_t := Φ(t + Φ^{−1}(x))

is called the Cantor process. In [20] we have seen that this process is Feller, but not rich, and that it is not an Itô process, but a Hunt semimartingale.
Example 6.2. Define X^x_t := x·exp(t). The structure of this process is ⊖|⊙|⊕, with the generating paths t ↦ exp(t) respectively t ↦ −exp(t) on ]0, ∞[ respectively ]−∞, 0[.

Example 6.3. A deterministic Lévy process does not admit a killing, and the only way a killing can arise in the context of deterministic Hunt processes is by a state-space dependent drift which reaches infinity in finite time. If jumps are taken into account, we have two new ways of killing a process: first, we could start e.g. with a Lévy process X^x_t = x + ct (c ≠ 0; take c > 0 for concreteness) and add a jump structure such as

  (a_n, b_n) = ( n + c·2^{−n}, n + 1 ),  n ∈ ℕ_0,

under which the process, having drifted to a_n, jumps instantly to b_n. This results in a deterministic Hunt process which reaches infinity in finite time if we start in

  ⋃_{n∈ℕ_0} [ n, n + c·2^{−n} [ ,

since the drift time spent before the n-th jump is 2^{−n}. The second way is by adding an 'instant killing', i.e. if the process reaches a certain set, it is sent directly to a cemetery state Δ. Deterministic processes which are strictly monotone increasing can be used as a time change in order to kill a given arbitrary process.
Example 6.4. Let X^x_t := x + t·(−x) = x·(1 − t). All paths are zero at time one, but the behavior afterwards depends on the starting point x. This is a process which is not Markovian, but still a semimartingale and even a homogeneous diffusion in the sense of Definition III.2.18 of [12] with respect to every starting point. Unlike in Example 2.5, the differential characteristic ℓ depends on the starting point here.

Example 6.5. We show that the sum of two simple time homogeneous Markov processes starting in zero does not have to be again a time homogeneous Markov process (no matter how the filtrations are chosen): let

  X_t := t·1_{[0,1[}(t) + 1_{[1,∞[}(t) and Y_t := −t·1_{[0,2[}(t) − 2·1_{[2,∞[}(t).

Their sum is

  Z_t = X_t + Y_t = (1 − t)·1_{[1,2[}(t) − 1_{[2,∞[}(t),

contradicting time homogeneity.

Example 6.6. The following path reaches zero (in the sense of left limits) infinitely often, but in two different ways, which can not be described by means of monotonicity. Let

  X_t := Σ_{n∈2ℕ} ( ((n+1) − t)·1_{A+n}(t) + (t − (n+1))·1_{B+n}(t) + (t − (n+2))·1_{A+(n+1)}(t) + ((n+2) − t)·1_{B+(n+1)}(t) )

with A and B as in Example 3.2.

[Figure: the path of X]

Example 6.7. For n ∈ ℕ let

  X^{−n}_t := (t − n)·1_{[0, 1−2^{−n}[}(t) + Σ_{j=n}^{∞} ((t − 1 − n·2^{−j})/2)·1_{[1−2^{−j}, 1−2^{−j−1}[}(t) + n·1_{[1,∞[}(t).

This results in infinitely many paths with lim_{t↑1} X^{−n}_t = 0 but X^{−n}_1 = n.

Example 6.8. Here we consider the space-time Markov process. For this purpose let X be a one-dimensional deterministic Markov process which is not homogeneous in time, i.e. there exist s_1 < t_1 and s_2 < t_2 such that t_1 − s_1 = t_2 − s_2 and X^x_{s_1} = X^y_{s_2} but X^x_{t_1} ≠ X^y_{t_2} for starting points x and y. It is a well-known fact that the corresponding space-time process (X_t, t)′ is homogeneous in time. This is due to the simple fact that the new paths do not coincide in s_1 and s_2. This does not mean that the structure of the process becomes simpler.
In the context of universal Markov processes it means that we get a whole new dimension of possible starting points. For these new starting points we get the paths by using the injective functions Φ^x(t) := (X^x_t, t) (x ∈ ℝ) as generating paths. This results in a process Y on ℝ × [0, ∞[ and makes things more complicated, e.g. in the context of generators: assume that the paths of X are continuously differentiable and write f^x(t) := X^x_t. The symbol of the generator of Y is

  p(y, ξ) = −i·ξ_1·∂₊f^x((Φ^x)^{−1}(y)) − i·ξ_2,

where x is a starting point in the original state space whose space-time path passes through y, i.e. y = Φ^x(t) for some t ≥ 0. This might be the case for different x. However, for these different starting points the right-hand side derivatives in (Φ^x)^{−1}(y) do coincide by the Markov property. Therefore the above expression is well defined. If we considered the original process instead, we would have a family of generators with the time dependent symbol

  p_t(x, η) = −i·η·∂₊f^x(t),

which appears more natural to us. This shows that it is sometimes worthwhile to consider the original process and that time inhomogeneous Markov processes have some value in their own right.

Acknowledgements
Part of this work has been done while the author was visiting K. Bogdan (Wroclaw University of Technology) and Y. Xiao (Michigan State University). He would like to thank both professors and their institutes for the kind hospitality and for interesting discussions. Financial support by the German Science Foundation (DFG) for the project SCHN1231/1-1 is gratefully acknowledged.
References

[1] Bauer, H. Wahrscheinlichkeitstheorie. de Gruyter, Berlin 2002.
[2] Berg, C. and Forst, G. Potential Theory on Locally Compact Abelian Groups. Springer, Berlin 1975.
[3] Blumenthal, R. M. and Getoor, R. K. Markov Processes and Potential Theory. Academic Press, New York 1968.
[4] Cinlar, E. and Jacod, J. Representation of Semimartingale Markov Processes in Terms of Wiener Processes and Poisson Random Measures. Seminar on Stochastic Processes, p. 159–242, 1981.
[5] Cinlar, E., Jacod, J., Protter, P. and Sharpe, M. J. Semimartingales and Markov Processes. Z. Wahrscheinlichkeitstheorie verw. Gebiete (1980): 161–219.
[6] Courrège, P. Sur la forme intégro-différentielle des opérateurs de C^∞_k dans C satisfaisant au principe du maximum. Sém. Théorie du Potentiel. 1965/66, Exposé 2, 38 pp.
[7] Dovgoshey, O., Martio, O., Ryazanov, V. and Vuorinen, M. The Cantor function. Expo. Math. (2006): 1–37.
[8] Elstrodt, J. Maß- und Integrationstheorie. Springer, Berlin 2002.
[9] Ethier, S. N. and Kurtz, T. G. Markov Processes – Characterization and Convergence. Wiley, New York 1986.
[10] Hamel, G. Eine Basis aller Zahlen und die unstetigen Lösungen der Funktionalgleichung: f(x + y) = f(x) + f(y). Math. Annalen (1905): 459–462.
[11] Jacob, N. Pseudo-Differential Operators and Markov Processes III. Markov Processes and Applications. Imperial College Press, London 2005.
[12] Jacod, J. and Shiryaev, A. Limit Theorems for Stochastic Processes. Springer, Berlin 1987.
[13] Muntean, I. Relations Among Some Classes of Real Functions on a Compact Interval. Studia Univ. Babes-Bolyai, Mathematica (1987): 60–70.
[14] Natanson, I. P. Theorie der Funktionen einer reellen Veränderlichen. Akademie-Verlag, Berlin 1969.
[15] Protter, P. Stochastic Integration and Differential Equations. Second edition, version 2.1, Springer, Berlin 2005.
[16] Revuz, D. and Yor, M. Continuous Martingales and Brownian Motion. Third edition, Springer, Berlin 1999.
[17] Sato, K. Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge 1999.
[18] Schilling, R. L. Growth and Hölder conditions for the sample paths of Feller processes. Probab. Theory Rel. Fields (1998): 565–611.
[19] Schilling, R. L. and Schnurr, A. The Symbol Associated with the Solution of a Stochastic Differential Equation. Electron. J. Probab. (2010): 1369–1393.
[20] Schnurr, A. A Classification of Deterministic Hunt Processes with Some Applications. Markov Proc. Related Fields (2011): 259–276.
[21] Schnurr, A. The Symbol of a Markov Semimartingale. PhD thesis, TU Dresden, 2009.
[22] Yor, M. Un exemple de processus qui n'est pas une semi-martingale. Astérisque 52–53.