
A. Nikeghbali / The general theory of stochastic processes∗

∗ This is an original survey paper.

Contents

1 Introduction
Acknowledgements
2 Basic notions of the general theory
  2.1 Stopping times
  2.2 Progressive, Optional and Predictable σ-fields
  2.3 Classification of stopping times
  2.4 Début theorems
3 Section theorems
4 Projection theorems
  4.1 The optional and predictable projections
  4.2 Increasing processes and projections
  4.3 Random measures on (R_+ × Ω) and the dual projections
5 The Doob-Meyer decomposition and multiplicative decompositions
6 Multiplicative decompositions
7 Some hidden martingales
8 General random times, their associated σ-fields and Azéma's supermartingales
  8.1 Arbitrary random times and some associated sigma fields
  8.2 Azéma's supermartingales and dual projections associated with random times
    8.2.1 The case of honest times
    8.2.2 The case of pseudo-stopping times
  8.3 Honest times and Strong Brownian Filtrations
9 The enlargements of filtrations
  9.1 Initial enlargements of filtrations
  9.2 Progressive enlargements of filtrations
    9.2.1 A description of predictable and optional processes in (G_t^ρ) and (F_t^L)
    9.2.2 The decomposition formula before ρ
    9.2.3 The decomposition formula for honest times
  9.3 The (H) hypothesis
  9.4 Concluding remarks on enlargements of filtrations
References

1. Introduction

P.A. Meyer and C. Dellacherie have created the so-called general theory of stochastic processes, which consists of a number of fundamental operations on either real valued stochastic processes indexed by [0, ∞), or random measures on [0, ∞), relative to a given filtered probability space (Ω, F, (F_t)_{t≥0}, P), where (F_t) is a right continuous filtration of (F, P) complete sub-σ-fields of F. This theory was gradually created from results which originated from the study of Markov processes, and martingales and additive functionals associated with them. A guiding principle for Meyer and Dellacherie was to understand to what extent the Markov property could be avoided; in fact, they were able to get rid of the Markov property in a radical way. At this point, we would like to emphasize that, perhaps to the astonishment of some readers, stochastic calculus was not thought of as a basic "elementary" tool in 1972, when C. Dellacherie's little book appeared. Thus it seemed interesting to view some important facts of the general theory in relation with stochastic calculus.

The present essay falls into two parts: the first part, consisting of Sections 2 to 5, is a review of the General Theory of Stochastic Processes and is fairly well known. The second part is a review of more recent results, and is much less so. Throughout this essay we try to illustrate the results with examples as much as possible. More precisely, the plan of the essay is as follows:

• in Section 2, we recall the basic notions of the theory: stopping times, the optional and predictable σ-fields and processes, etc.;
• in Section 3, we present the fundamental Section theorems;
• in Section 4, we present the fundamental Projection theorems;
• in Section 5, we recall the Doob-Meyer decomposition of semimartingales;
• in Section 6, we present a small theory of multiplicative decompositions of nonnegative local submartingales;
• in Section 7, we highlight the role of certain "hidden" martingales in the general theory of stochastic processes;
• in Section 8, we illustrate the theory with the study of arbitrary random times;
• in Section 9, we study how the basic operations depend on the underlying filtration, which leads us to an introduction to the theory of enlargement of filtrations.

Acknowledgements

I would like to thank an anonymous referee for his comments and suggestions which helped to improve the present text.

2. Basic notions of the general theory

Throughout this essay, we assume we are given a filtered probability space (Ω, F, (F_t)_{t≥0}, P) that satisfies the usual conditions, that is, (F_t)

is a right continuous filtration of (F, P) complete sub-σ-fields of F. A stochastic process is said to be càdlàg if it almost surely has sample paths which are right continuous with left limits. A stochastic process is said to be càglàd if it almost surely has sample paths which are left continuous with right limits.

2.1. Stopping times

Definition 2.1. A stopping time is a mapping T : Ω → R_+ such that {T ≤ t} ∈ F_t for all t ≥ 0.

To a given stopping time T, we associate the σ-field F_T defined by:

F_T = {A ∈ F : A ∩ {T ≤ t} ∈ F_t for all t ≥ 0}.

We can also associate with T the σ-field F_{T−}, generated by F_0 and sets of the form A ∩ {T > t}, with A ∈ F_t and t ≥ 0.

We recap here without proof some of the classical properties of stopping times.

Proposition 2.2. Let T be a stopping time. Then T is measurable with respect to F_{T−}, and F_{T−} ⊂ F_T.

Proposition 2.3. Let T be a stopping time. If A ∈ F_T, then

T_A(ω) = T(ω) if ω ∈ A;  T_A(ω) = +∞ if ω ∉ A

is also a stopping time.

Proposition 2.4 ([26], Theorem 53, p. 187). Let S and T be two stopping times.
1. For every A ∈ F_S, the set A ∩ {S ≤ T} ∈ F_T.
2. For every A ∈ F_S, the set A ∩ {S < T} ∈ F_{T−}.

Proposition 2.5 ([26], Theorem 56, p. 189). Let S and T be two stopping times such that S ≤ T. Then F_S ⊂ F_T.

One of the most used properties of stopping times is the optional stopping theorem.

Theorem 2.6 ([69], Theorem 3.2, p. 69). Let (M_t) be an (F_t) uniformly integrable martingale and let T be a stopping time. Then one has:

E[M_∞ | F_T] = M_T,   (2.1)

and hence:

E[M_∞] = E[M_T].   (2.2)

One can naturally ask whether there exist some other random times (i.e. nonnegative random variables) such that (2.1) or (2.2) hold. We will answer these questions in subsequent sections.

2.2. Progressive, Optional and Predictable σ-fields

Now we shall define the three fundamental σ-algebras we always deal with in the theory of stochastic processes.

Definition 2.7. A process X = (X_t)_{t≥0} is called (F_t) progressive if for every t ≥ 0, the restriction of (t, ω) → X_t(ω) to [0, t] × Ω is B[0, t] ⊗ F_t measurable. A set A ⊂ R_+ × Ω is called progressive if the process 1_A(t, ω) is progressive. The set of all progressive sets is a σ-algebra called the progressive σ-algebra, which we will denote M.

Proposition 2.8 ([69], Proposition 4.9, p. 44). If X is an (F_t) progressive process and T is an (F_t) stopping time, then X_T 1_{T<∞} is F_T measurable.

The optional σ-algebra O is generated by the càdlàg (F_t) adapted processes, and the predictable σ-algebra P is generated by the left continuous (F_t) adapted processes. Every càdlàg adapted process X can be approximated uniformly, within a given ε > 0, by optional step processes: set T_0 ≡ 0, Z_0 ≡ X_0, and inductively

T_{n+1} ≡ inf{t > T_n : |X_t − Z_n| > ε},  with Z_{n+1} ≡ X_{T_{n+1}} if T_{n+1} < ∞.

Since X has left limits, T_n ↑ ∞. Now set:

Y ≡ Σ_{n≥0} Z_n 1_{[T_n, T_{n+1}[}.

Then |X − Y| ≤ ε, and this completes the proof.

Remark 2.17. We also have: O = σ{[0, T[ : T is a stopping time}.

Remark 2.18. It is useful to note that for a random time T, [T, ∞[ is in the optional σ-field if and only if T is a stopping time. A similar result holds for the predictable σ-algebra (see [26], [39] or [69]).

Proposition 2.19 ([26], Theorem 67, p. 200). The predictable σ-algebra is generated by either of the following collections of random sets:
1. A × {0}, where A ∈ F_0, and [0, T], where T is a stopping time;
2. A × {0}, where A ∈ F_0, and A × (s, t], where s < t, A ∈ F_s.

Now we give an easy result which is often used in martingale theory.

Proposition 2.20. Let X = (X_t)_{t≥0} be an optional process. Then:
1. the jump process ∆X ≡ X − X_− is optional;
2. X_− is predictable;
3. if moreover X is predictable, then ∆X is predictable.
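Before moving on, the optional stopping identity (2.2) lends itself to a quick numerical sanity check. The following sketch (a symmetric simple random walk standing in for the uniformly integrable martingale, with illustrative parameters a = 3, b = 5 chosen here) estimates E[M_T] for the stopping time T = inf{n : M_n ∈ {−a, b}}, together with the gambler's ruin probability P(M_T = b) = a/(a + b):

```python
import random

random.seed(0)

def stopped_value(a=3, b=5):
    """Symmetric simple random walk M started at 0, run until the stopping
    time T = inf{n : M_n = -a or M_n = b}; returns M_T."""
    m = 0
    while -a < m < b:
        m += random.choice((-1, 1))
    return m

n = 20000
samples = [stopped_value() for _ in range(n)]
mean_mt = sum(samples) / n                    # optional stopping: E[M_T] = E[M_0] = 0
p_hit_b = sum(s == 5 for s in samples) / n    # gambler's ruin: a/(a+b) = 3/8
```

Random times that are not stopping times need not satisfy (2.2); which random times do is precisely the kind of question taken up in Section 8.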


2.3. Classification of stopping times

We shall now give finer results about stopping times. The notions developed here are very useful in the study of discontinuous semimartingales (see [39] for example). The proofs of the results presented here can be found in [24] or [26]. We first introduce the concept of predictable stopping times.

Definition 2.21. A predictable time is a mapping T : Ω → R_+ such that the stochastic interval [0, T[ is predictable.

Every predictable time is a stopping time since [T, ∞[ ∈ P ⊂ O. Moreover, as [T] = [0, T] \ [0, T[, we deduce that [T] ∈ P. We also have the following characterization of predictable times:

Proposition 2.22 ([26], Theorem 71, p. 204). A stopping time T is predictable if there exists a sequence of stopping times (T_n) satisfying the following conditions:
1. (T_n) is increasing with limit T;
2. T_n < T for all n on the set {T > 0}.
The sequence (T_n) is called an announcing sequence for T.

Now we enumerate some important properties of predictable stopping times, which can be found in [24], p. 54, or [26], p. 205.

Theorem 2.23. Let S be a predictable stopping time and T any stopping time. For all A ∈ F_{S−}, the set A ∩ {S ≤ T} ∈ F_{T−}. In particular, the sets {S ≤ T} and {S = T} are in F_{T−}.

Proposition 2.24. Let S and T be two predictable stopping times. Then the stopping times S ∧ T and S ∨ T are also predictable.

Proposition 2.25. Let T be a predictable stopping time and A ∈ F_{T−}. Then the time T_A is also predictable.

Proposition 2.26. Let (T_n) be an increasing sequence of predictable stopping times and T = lim_n T_n. Then T is predictable.

We recall that a random set A is called evanescent if the set {ω : ∃ t ∈ R_+ with (t, ω) ∈ A} is P-null.

Definition 2.27. Let T be a stopping time.
1. We say that T is accessible if there exists a sequence (T_n) of predictable stopping times such that:
[T] ⊂ (∪_n [T_n]) up to an evanescent set,
or in other words:
P[{T < ∞} \ ∪_n {ω : T_n(ω) = T(ω) < ∞}] = 0.


2. We say that T is totally inaccessible if for all predictable stopping times S we have:
[T] ∩ [S] = ∅ up to an evanescent set,
or in other words:
P[{ω : T(ω) = S(ω) < ∞}] = 0.

Remark 2.28. It is obvious that predictable stopping times are accessible, and that stopping times which are both accessible and totally inaccessible are almost surely infinite.

Remark 2.29. There exist stopping times which are accessible but not predictable.

Theorem 2.30 ([26], Theorem 81, p. 215). Let T be a stopping time. There exists a unique (up to a P-null set) partition of the set {T < ∞} into two sets A and B which belong to F_{T−}, such that T_A is accessible and T_B is totally inaccessible. The stopping time T_A is called the accessible part of T, while T_B is called the totally inaccessible part of T.

Now let us examine a special case where the accessible times are predictable. For this, we need to define the concept of quasi-left continuous filtrations.

Definition 2.31. The filtration (F_t) is quasi-left continuous if F_T = F_{T−} for all predictable stopping times T.

Theorem 2.32 ([26], Theorem 83, p. 217). The following assertions are equivalent:
1. the accessible stopping times are predictable;
2. the filtration (F_t) is quasi-left continuous;
3. the filtration (F_t) does not have any discontinuity time: ⋁_n F_{T_n} = F_{lim T_n} for all increasing sequences of stopping times (T_n).

Definition 2.33. A càdlàg process X is called quasi-left continuous if ∆X_T = 0 a.s. on the set {T < ∞} for every predictable time T.

Definition 2.34. A random set A is called thin if it is of the form A = ∪[T_n], where (T_n) is a sequence of stopping times; if moreover the sequence (T_n) satisfies [T_n] ∩ [T_m] = ∅ for all n ≠ m, it is called an exhausting sequence for A.

Proposition 2.35. Let X be a càdlàg adapted process. The following are equivalent:
1. X is quasi-left continuous;
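The canonical example of a predictable time is the hitting time of a level by a continuous adapted process, announced by the hitting times of slightly lower levels. A minimal sketch on a discrete grid (the path X_t = t² and the level a = 4 are arbitrary illustrative choices, and the grid approximates the continuous-time infima):

```python
import math

def hitting_time(times, path, level):
    """Grid approximation of T = inf{t : X_t >= level}; math.inf if never hit."""
    for t, x in zip(times, path):
        if x >= level:
            return t
    return math.inf

dt = 1e-4
times = [k * dt for k in range(40001)]   # t in [0, 4]
path = [t * t for t in times]            # a continuous increasing path

a = 4.0
T = hitting_time(times, path, a)         # close to 2.0 here
# T_n = inf{t : X_t >= a - 1/n} increases to T, with T_n < T on {T > 0}:
# an announcing sequence in the sense of Proposition 2.22.
announcing = [hitting_time(times, path, a - 1.0 / k) for k in range(1, 6)]
```

By contrast, the jump times of a Poisson process admit no such announcing sequence: they are totally inaccessible.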


2. there exists a sequence of totally inaccessible stopping times that exhausts the jumps of X;
3. for any increasing sequence of stopping times (T_n) with limit T, we have lim X_{T_n} = X_T a.s. on the set {T < ∞}.

2.4. Début theorems

In this section, we give a fundamental result for realizations of stopping times: the début theorem. Its proof is difficult and uses the same hard theory (capacity theory) as the section theorems, which we shall state in the next section.

Definition 2.36. Let A be a subset of R_+ × Ω. The début of A is the function D_A defined as:
D_A(ω) = inf{t ∈ R_+ : (t, ω) ∈ A},
with D_A(ω) = ∞ if this set is empty.

It is a nice and difficult result that when the set A is progressive, D_A is a stopping time ([24], [26]):

Theorem 2.37 ([24], Theorem 23, p. 51). Let A be a progressive set; then D_A is a stopping time.

Conversely, every stopping time is the début of a progressive (in fact optional) set: indeed, it suffices to take A = [T, ∞[ or A = [T].

The proof of the début theorem is an easy consequence of the following difficult result from measure theory:

Theorem 2.38. If (E, E) is a locally compact space with a countable basis, equipped with its Borel σ-field, and (Ω, F, P) is a complete probability space, then for every set A ∈ E ⊗ F, the projection π(A) of A onto Ω belongs to F.

Proof of the début theorem. We apply Theorem 2.38 to the set A_t = A ∩ ([0, t[ × Ω), which belongs to B([0, t[) ⊗ F_t. As a result, {D_A < t} = π(A_t) belongs to F_t, and since the filtration is right continuous, D_A is a stopping time.

We can define the n-début of a set A by
D_A^n(ω) = inf{t ∈ R_+ : [0, t] ∩ A contains at least n points};
we can also define the ∞-début of A by:
D_A^∞(ω) = inf{t ∈ R_+ : [0, t] ∩ A contains infinitely many points}.

Theorem 2.39. The n-début of a progressive set A is a stopping time for n = 1, 2, . . . , ∞.

Proof. The proof is easy once we know that D_A^1 is a stopping time. Indeed, by induction on n, D_A^{n+1} is a stopping time, as the début of the progressive set A_n = A ∩ ]D_A^n, ∞[. D_A^∞ is also a stopping time, as the début of the progressive set ∩_n A_n.
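On a discrete time grid, the n-début is simply the n-th time the (sampled) set is visited. The toy sketch below, with a hand-chosen indicator (grid points stand in for the points of A), illustrates that D_A = D_A^1 ≤ D_A^2 ≤ ⋯:

```python
import math

def n_debuts(times, indicator):
    """Times at which [0, t] ∩ A gains a new (grid) point: the sequence of
    n-débuts of the sampled set A, in increasing order."""
    return [t for t, in_a in zip(times, indicator) if in_a]

times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
in_a  = [False, False, True, False, True, True, False]

D = n_debuts(times, in_a)
D1 = D[0] if D else math.inf      # the début of A: here 1.0
D2 = D[1] if len(D) > 1 else math.inf
```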


It is also possible to show that the penetration time T of a progressive set A, defined by:
T(ω) = inf{t ∈ R_+ : [0, t] ∩ A contains uncountably many points},
is a stopping time.

We can naturally wonder whether the début of a predictable set is a predictable stopping time. A moment of reflection shows that the answer is negative: every stopping time T is the début of the predictable set ]T, ∞[, without being predictable itself. However, we have:

Proposition 2.40. Let D_A be the début of a predictable set A. If [D_A] ⊂ A, then D_A is a predictable stopping time.

Proof. If [D_A] ⊂ A, then [D_A] = A ∩ [0, D_A] is predictable, since A is predictable and D_A is a stopping time. Hence D_A is predictable.

One can deduce from this that:

Proposition 2.41. Let A be a predictable set which is closed for the right topology¹. Then its début D_A is a predictable stopping time.

¹ We recall that the right topology on the real line is the topology whose basis is given by the intervals [s, t[.

Now we are going to link the above mentioned notions to the jumps of some stochastic processes. We will follow [39], Chapter I.

Lemma 2.42. Any thin random set admits an exhausting sequence of stopping times.

Proposition 2.43. If X is a càdlàg adapted process, the random set U ≡ {∆X ≠ 0} is thin; an exhausting sequence (T_n) for this set is called a sequence that exhausts the jumps of X. Moreover, if X is predictable, the stopping times (T_n) can be chosen predictable.

Proof. Let U_n ≡ {(t, ω) : |X_t(ω) − X_{t−}(ω)| > 2^{−n}}, for n an integer, and set V_0 = U_0 and V_n = U_n \ U_{n−1}. The sets V_n are optional (resp. predictable if X is predictable) and are disjoint. Now let us define the stopping times

D_n^1 = inf{t : (t, ω) ∈ V_n},
D_n^{k+1} = inf{t > D_n^k : (t, ω) ∈ V_n},

so that D_n^j represents the j-th jump of X whose size in absolute value is between 2^{−n} and 2^{−n+1}. Since X is càdlàg, V_n does not have any accumulation point, and the stopping times (D_n^k)_{(k,n)∈N²} enumerate all the points in V_n. Moreover, from Proposition 2.41, the stopping times D_n^k are predictable if X is predictable. To complete the proof, it suffices to reindex the doubly indexed family (D_n^k) into a simply indexed sequence (T_n).
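The size-classification in this proof is easy to mimic on a path given by a finite list of jumps. In the hypothetical example below (nonzero jump sizes chosen at random for illustration), each jump time is routed to the class V_n corresponding to an absolute jump size in (2^{−n}, 2^{−n+1}], and the doubly indexed family is then reindexed into a single sequence:

```python
# A piecewise-constant càdlàg path described by its (time, jump size) pairs.
jumps = [(0.4, 0.9), (1.1, -0.3), (1.7, 0.05), (2.2, -0.6), (2.9, 0.2)]

def exhaust_jumps(jumps):
    """Sort jump times into disjoint classes V_n, where V_n collects jumps of
    absolute size in (2^{-n}, 2^{-n+1}]; within a class, times are listed in
    increasing order, like the stopping times D_n^1 < D_n^2 < ... in the proof.
    Assumes all jump sizes are nonzero."""
    classes = {}
    for t, dx in jumps:
        n = 0
        while abs(dx) <= 2.0 ** (-n):   # smallest n with |dx| > 2^{-n}
            n += 1
        classes.setdefault(n, []).append(t)
    return classes

V = exhaust_jumps(jumps)
# Reindexing the doubly indexed family into a single exhausting sequence (T_k):
T = sorted(t for ts in V.values() for t in ts)
```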

In fact, we have the following characterization of predictable processes:

Proposition 2.44. If X is a càdlàg adapted process, then X is predictable if and only if the following two conditions are satisfied:
1. for all totally inaccessible stopping times T, ∆X_T = 0, a.s. on {T < ∞};
2. for every predictable stopping time T, X_T 1_{T<∞} is F_{T−} measurable.

Z_t^{g_μ} = P[g_μ > t | F_t] = (1/(2^{μ−1} Γ(μ))) ∫_{R_t/√(1−t)}^∞ dy y^{2μ−1} exp(−y²/2).   (8.3)

(−ν)

Now, following Borodin and Salminen ([21], p. 70-71), if for −ν > 0, P0 denotes the law of a Bessel process of parameter −ν, starting from 0, then the law of Ly ≡ sup {t : Rt = y}, is given by: P0

(−ν)

(Ly ∈ dt) =

y −2ν 2−ν Γ (−ν) t−ν+1

exp −

y2 2t

dt.

Now, from the time reversal property for Bessel processes ([21] p.70, or [69]), we have: (−ν) P H0 ∈ dt = P0 (LRt ∈ dt) ; consequently, from (8.3), we have (recall µ = −ν): g Zt µ R2

R2µ =1− µ t 2 Γ (µ)

∞ 1−t

du

t exp − 2u

u1+µ

,

and the desired result is obtained by a straightforward change of variables in the above integral.

Remark 8.17. The previous proof can be applied mutatis mutandis to obtain:

P[g_μ(T) > t | F_t] = (1/(2^{μ−1} Γ(μ))) ∫_{R_t/√(T−t)}^∞ dy y^{2μ−1} exp(−y²/2),

and

A_t^{g_μ(T)} = (1/(2^μ Γ(1+μ))) ∫_0^{t∧T} dL_u/(T−u)^μ.


Remark 8.18. It can easily be deduced from Proposition 8.16 that the dual predictable projection A_t^{g_μ} of 1_{(g_μ ≤ t)} is:

A_t^{g_μ} = (1/(2^μ Γ(1+μ))) ∫_0^{t∧1} dL_u/(1−u)^μ.

Indeed, it is a consequence of Itô's formula applied to Z_t^{g_μ} and the fact that N_t ≡ R_t^{2μ} − L_t is a martingale and (dL_t) is carried by {t : R_t = 0}.
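As a consistency check on the constant in (8.3): since ∫_0^∞ y^{2μ−1} exp(−y²/2) dy = 2^{μ−1} Γ(μ), the right-hand side of (8.3) equals 1 when R_t = 0, as a conditional probability should. A small numerical sketch (the quadrature bound, step count and sample values of μ are arbitrary choices):

```python
import math

def tail_integral(lo, mu, hi=12.0, n=60000):
    """Midpoint rule for the integral of y^(2*mu - 1) * exp(-y^2 / 2) over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n):
        y = lo + (k + 0.5) * h
        total += y ** (2 * mu - 1) * math.exp(-y * y / 2)
    return total * h

# With lower limit 0 (i.e. R_t = 0), the expression in (8.3) should be 1.
z_at_zero = {
    mu: tail_integral(0.0, mu) / (2 ** (mu - 1) * math.gamma(mu))
    for mu in (0.5, 0.75, 1.0)
}
```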

When μ = 1/2, R_t can be viewed as |B_t|, the absolute value of a standard Brownian Motion. Thus we recover, as a particular case of our framework, the celebrated example of the last zero before 1 of a standard Brownian Motion (see [42], p. 124, or [81] for more references).

Corollary 8.19. Let (B_t) denote a standard Brownian Motion and let

g ≡ sup{t ≤ 1 : B_t = 0}.

Then:

P[g > t | F_t] = √(2/π) ∫_{|B_t|/√(1−t)}^∞ dy exp(−y²/2),

and

A_t^g = √(2/π) ∫_0^{t∧1} dL_u/√(1−u).

Proof. It suffices to take μ = 1/2 in Proposition 8.16.

Corollary 8.20. The variable

(1/(2^μ Γ(1+μ))) ∫_0^1 dL_u/(1−u)^μ

is exponentially distributed with expectation 1; consequently, its law is independent of μ.

Proof. The random time g_μ is honest by definition (it is the end of a predictable set). It also avoids stopping times, since A_t^{g_μ} is continuous (this can also be seen as a consequence of the strong Markov property for R and the fact that 0 is instantaneously reflecting). Thus the result of the corollary is a consequence of Remark 8.18 following Proposition 8.16 and Lemma 8.15.

Given an honest time, it is in general not easy to compute its associated supermartingale Z^L. Hence it is important (in view of the theory of progressive enlargements of filtrations) to have at one's disposal characterizations of Azéma's supermartingales which also provide a way to compute them explicitly. We will give two results in this direction, borrowed from [63] and [60].
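The unconditional law of the time g of Corollary 8.19 is Lévy's arcsine law, P(g ≤ t) = (2/π) arcsin √t, which can be checked by Monte Carlo with a rescaled simple random walk approximating (B_t)_{t≤1} (sample sizes, step count and seed below are arbitrary choices):

```python
import random

random.seed(1)

def last_zero(n_steps=600):
    """Last zero before time 1 of a simple random walk on the grid {k/n_steps}:
    a discrete approximation of the Brownian time g of Corollary 8.19."""
    pos, last = 0, 0.0
    for k in range(1, n_steps + 1):
        pos += random.choice((-1, 1))
        if pos == 0:
            last = k / n_steps
    return last

n = 4000
samples = [last_zero() for _ in range(n)]
# Lévy's arcsine law: P(g <= t) = (2/pi) * asin(sqrt(t))
emp_half = sum(s <= 0.5 for s in samples) / n        # theory: 1/2
emp_quarter = sum(s <= 0.25 for s in samples) / n    # theory: 1/3
```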


Let (N_t)_{t≥0} be a continuous local martingale such that N_0 = 1 and lim_{t→∞} N_t = 0. Let S_t = sup_{s≤t} N_s. We consider:

g = sup{t ≥ 0 : N_t = S_∞} = sup{t ≥ 0 : S_t − N_t = 0}.   (8.4)

Proposition 8.21 ([63]). Consider the supermartingale Z_t ≡ P(g > t | F_t).
1. In our setting, the formula
Z_t = N_t/S_t,  t ≥ 0,
holds.
2. The Doob-Meyer additive decomposition of (Z_t) is:
Z_t = E[log S_∞ | F_t] − log(S_t).   (8.5)

The above proposition gives a large family of examples. In fact, quite remarkably, every supermartingale associated with an honest time is of this form. More precisely:

Theorem 8.22 ([63]). Let L be an honest time. Then, under the conditions (CA), there exists a continuous and nonnegative local martingale (N_t)_{t≥0}, with N_0 = 1 and lim_{t→∞} N_t = 0, such that:

Z_t = P(L > t | F_t) = N_t/S_t.

We shall now outline a nontrivial consequence of Theorem 8.22. In [7], the authors are interested in giving explicit examples of dual predictable projections of processes of the form 1_{L≤t}, where L is an honest time. Indeed, these dual projections are natural examples of increasing injective processes (see [7] for more details and references). With Theorem 8.22, we have a complete characterization of such projections:

Corollary 8.23. Assume that assumption (C) holds, and let (C_t) be an increasing process. Then C is the dual predictable projection of 1_{g≤t} for some honest time g that avoids stopping times if and only if there exists a continuous local martingale N in the class C_0 such that C_t = log S_t.

Now let us give some examples.

Example 8.24. Let N_t ≡ B_t,


where (B_t)_{t≥0} is a Brownian Motion starting at 1, stopped at T_0 = inf{t : B_t = 0}. Let S_t ≡ sup_{s≤t} B_s and let
g = sup{t : B_t = S_t}.
Then:

P(g > t | F_t) = B_t/S_t.

Example 8.25. Let N_t ≡ exp(2νB_t − 2ν²t), where (B_t) is a standard Brownian Motion and ν > 0. We have:
S_t = exp(2ν sup_{s≤t}(B_s − νs)),
and
g = sup{t : B_t − νt = sup_{s≥0}(B_s − νs)}.
Consequently,

P(g > t | F_t) = exp(2ν((B_t − νt) − sup_{s≤t}(B_s − νs))).

Example 8.26. Now we consider (R_t), a transient diffusion with values in [0, ∞), which has {0} as entrance boundary. Let s be a scale function for R, which we can choose such that:
s(0) = −∞ and s(∞) = 0.
Then, under the law P_x, x > 0, the local martingale N_t = s(R_t)/s(x), t ≥ 0, satisfies the required conditions of Proposition 8.21, and we have:

P_x(g > t | F_t) = s(R_t)/s(I_t),

where
g = sup{t : R_t = I_t},  with I_t = inf_{s≤t} R_s.
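Example 8.24 is a good target for simulation: there, by Doob's maximal identity, P(S_∞ ≥ a) = 1/a for a ≥ 1, equivalently 1/S_∞ is uniform, which is what makes Z_t = B_t/S_t a supermartingale with Z_0 = 1. An integer analogue is a symmetric walk started at 1 and absorbed at 0 (the cap on the maximum below is a hypothetical truncation that only shortens rare long excursions; it does not affect the probabilities tested):

```python
import random

random.seed(2)

def running_max_before_zero(start=1, cap=128):
    """Overall maximum of a symmetric simple random walk started at `start`
    and stopped at its first visit to 0 (integer analogue of Example 8.24);
    walks whose maximum reaches `cap` are truncated there."""
    pos, smax = start, start
    while pos > 0 and smax < cap:
        pos += random.choice((-1, 1))
        smax = max(smax, pos)
    return smax

n = 20000
samples = [running_max_before_zero() for _ in range(n)]
p_ge_2 = sum(s >= 2 for s in samples) / n   # maximal identity: P(S >= 2) = 1/2
p_ge_4 = sum(s >= 4 for s in samples) / n   # P(S >= 4) = 1/4
```

The exact values 1/2 and 1/4 here are the gambler's ruin probabilities P(hit a before 0 | start at 1) = 1/a, the discrete form of the maximal identity.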

Theorem 8.22 is a multiplicative characterization; now we shall give an additive one.


Theorem 8.27 ([60]). Again, we assume that the conditions (CA) hold. Let (X_t) be a submartingale of the class (ΣcD) satisfying lim_{t→∞} X_t = 1, and let
L = sup{t : X_t = 0}.
Then (X_t) is related to the Azéma supermartingale associated with L in the following way:

X_t = 1 − Z_t^L = P(L ≤ t | F_t).

Consequently, if (Z_t) is a nonnegative supermartingale with Z_0 = 1, then Z may be represented as P(L > t | F_t), for some honest time L which avoids stopping times, if and only if X_t ≡ 1 − Z_t is a submartingale of the class (Σ), with the limit condition lim_{t→∞} X_t = 1.

Now we give some fundamental examples:

Example 8.28. First, consider (B_t), the standard Brownian Motion, and let T_1 = inf{t ≥ 0 : B_t = 1}. Let σ = sup{t < T_1 : B_t = 0}. Then B^+_{t∧T_1} satisfies the conditions of Theorem 8.27, and hence:

P(σ ≤ t | F_t) = B^+_{t∧T_1} = ∫_0^{t∧T_1} 1_{B_u>0} dB_u + (1/2) ℓ_{t∧T_1},

where (ℓ_t) is the local time of B at 0. This example plays an important role in the celebrated Williams path decomposition for the standard Brownian Motion on [0, T_1]. One can also consider T_{±1} = inf{t ≥ 0 : |B_t| = 1} and τ = sup{t < T_{±1} : |B_t| = 0}. Then |B_{t∧T_{±1}}| satisfies the conditions of Theorem 8.27, and hence:

P(τ ≤ t | F_t) = |B_{t∧T_{±1}}| = ∫_0^{t∧T_{±1}} sgn(B_u) dB_u + ℓ_{t∧T_{±1}}.

Example 8.29. Let (Y_t) be a real continuous recurrent diffusion process, with Y_0 = 0. Then, from the general theory of diffusion processes, there exists a unique continuous and strictly increasing function s, with s(0) = 0, lim_{x→+∞} s(x) = +∞, lim_{x→−∞} s(x) = −∞, such that s(Y_t) is a continuous local martingale. Let T_1 ≡ inf{t ≥ 0 : Y_t = 1}. Now, if we define
X_t ≡ s^+(Y_{t∧T_1})/s(1),
we easily note that X is a local submartingale of the class (Σ_c) which satisfies the hypotheses of Theorem 8.27. Consequently, if we set
σ = sup{t < T_1 : Y_t = 0},


we have:

P(σ ≤ t | F_t) = s^+(Y_{t∧T_1})/s(1).

Example 8.30. Now let (M_t) be a positive local martingale such that M_0 = x, x > 0, and lim_{t→∞} M_t = 0. Then Tanaka's formula shows us that (M_t/y) ∧ 1, for 0 ≤ y ≤ x, is a local submartingale of the class (Σ_c) satisfying the assumptions of Theorem 8.27, and hence, with
g = sup{t : M_t = y},
we have:

P(g > t | F_t) = (M_t/y) ∧ 1 = 1 + (1/y) ∫_0^t 1_{(M_u<y)} dM_u − (1/(2y)) L_t^y,

where L_t^y is the local time of M at y.

Example 8.31. Consider again the transient diffusion (R_t) of Example 8.26. Then, for x > 0, the local martingale M_t = −s(R_t) satisfies the conditions of the previous example, and for 0 ≤ x ≤ y we have:

P_x(g_y > t | F_t) = (s(R_t)/s(y)) ∧ 1 = 1 + (1/s(y)) ∫_0^t 1_{(R_u>y)} d(s(R_u)) + (1/(2s(y))) L_t^{s(y)},

where L_t^{s(y)} is the local time of s(R) at s(y), and where
g_y = sup{t : R_t = y}.

This last formula was the key point for deriving the distribution of g_y in [67], Theorem 6.1, p. 326.

8.2.2. The case of pseudo-stopping times

In this paragraph, we give some characteristic properties and some examples of pseudo-stopping times. We do not assume here that condition (A) holds, but we assume that P[ρ = ∞] = 0.

Theorem 8.32 ([59]). The following properties are equivalent:
1. ρ is an (F_t) pseudo-stopping time, i.e. (8.1) is satisfied;
2. A_∞^ρ ≡ 1, a.s.


Remark 8.33. We shall give a more complete version of Theorem 8.32 in the section on progressive expansions of filtrations.

Proof. We have:

E[M_ρ] = E[∫_0^∞ M_s dA_s^ρ] = E[M_∞ A_∞^ρ].

Hence,

E[M_ρ] = E[M_∞] ⟺ E[M_∞ (A_∞^ρ − 1)] = 0,

and the announced equivalence now follows easily.

Remark 8.34. More generally, the approach adopted in the proof can be used to solve the equation E[M_ρ] = E[M_∞], where the random time ρ is fixed and where the unknowns are martingales in H¹. For more details and resolutions of such equations, see [64].

Corollary 8.35. Under the assumptions of Theorem 8.32, Z_t^ρ = 1 − A_t^ρ is a decreasing process. Furthermore, if ρ avoids stopping times, then (Z_t^ρ) is continuous.

Proof. This follows from the fact that μ_t^ρ = E[A_∞^ρ | F_t] = 1.

Remark 8.36. In fact, we shall see in the next section that, under condition (C), ρ is a pseudo-stopping time if and only if (Z_t^ρ) is a predictable decreasing process.

For honest times, Azéma proved that A_∞^L follows the standard exponential law. For pseudo-stopping times, we have:

Proposition 8.37 ([59]). For simplicity, we shall write (Z_u) instead of (Z_u^ρ). Under condition (A), for all bounded (F_t) martingales (M_t) and all bounded Borel measurable functions f, one has:

E[M_ρ f(Z_ρ)] = E[M_0] ∫_0^1 f(x) dx = E[M_ρ] ∫_0^1 f(x) dx.

Consequently, Z_ρ follows the uniform law on (0, 1).


Proof. Under our assumptions, we have:

E[M_ρ f(Z_ρ)] = E[∫_0^∞ M_u f(Z_u) dA_u^ρ]
= E[∫_0^∞ M_u f(1 − A_u^ρ) dA_u^ρ]
= E[M_∞ ∫_0^∞ f(1 − A_u^ρ) dA_u^ρ]
= E[M_∞ ∫_0^1 f(1 − x) dx]
= E[M_∞ ∫_0^1 f(x) dx].

Now we give a systematic construction for pseudo-stopping times, generalizing D. Williams's example. We assume we are given an honest time L such that the conditions (CA) hold (i.e. condition (A) holds with respect to L), and we set:

ρ = sup{t < L : Z_t^L = inf_{u≤L} Z_u^L}.

Then the following holds:

Proposition 8.38 ([59]). (i) I_L ≡ inf_{u≤L} Z_u^L is uniformly distributed on [0, 1];
(ii) the supermartingale Z_t^ρ = P[ρ > t | F_t] associated with ρ is given by:
Z_t^ρ = inf_{u≤t} Z_u^L.
As a consequence, ρ is an (F_t) pseudo-stopping time.

Proof. For simplicity, we write Z_t for Z_t^L.
(i) Let
T_b = inf{t : Z_t ≤ b},  0 < b < 1;
then
P[I_L ≤ b] = P[T_b < L] = E[Z_{T_b}] = b.
(ii) Note that for every (F_t) stopping time T, we have {T < ρ} = {T′ < L}, where
T′ = inf{t > T : Z_t ≤ inf_{s≤T} Z_s}.
Consequently, we have:

E[Z_T^ρ] = P[T < ρ] = P[T′ < L] = E[Z_{T′}] = E[inf_{u≤T} Z_u],

which yields:

E[Z_T^ρ 1_{T<∞}] = E[(inf_{u≤T} Z_u) 1_{T<∞}].

The initially enlarged filtration F_t^{σ(A_∞)} ≡ ∩_{ε>0} (F_{t+ε} ∨ σ(A_∞)) satisfies the usual assumptions. We first need the conditional laws of A_∞, which were obtained under conditions (CA) in [6] and, in a more general setting and by different methods, in [61].

Proposition 9.8 ([61], [6]). Let G be a bounded Borel function, and define:
M_t^G ≡ E(G(A_∞) | F_t).
Then:
M_t^G = F(A_t) − (F(A_t) − G(A_t))(1 − Z_t),
where
F(x) = exp(x) ∫_x^∞ dy exp(−y) G(y).
Moreover, M_t^G has the following stochastic integral representation:

M_t^G = E[G(A_∞)] + ∫_0^t (F − G)(A_u) dμ_u.

Now define, for any bounded Borel function G,
λ_t(G) ≡ M_t^G = F(A_t) − (F(A_t) − G(A_t))(1 − Z_t).
From Proposition 9.8, we also have:

λ_t(G) = E[G(A_∞)] + ∫_0^t (F − G)(A_s) dμ_s ≡ E[G(A_∞)] + ∫_0^t λ̇_s(G) dμ_s.

Hence we have:
λ_t(G) = ∫ λ_t(dx) G(x),


with
λ_t(dx) = (1 − Z_t) δ_{A_t}(dx) + Z_t exp(A_t) 1_{(A_t,∞)}(x) exp(−x) dx,
where δ_{A_t} denotes the Dirac mass at A_t. Similarly, we have:
λ̇_t(G) = ∫ λ̇_t(dx) G(x),
with:
λ̇_t(dx) = −δ_{A_t}(dx) + exp(A_t) 1_{(A_t,∞)}(x) exp(−x) dx.
It then follows that:

λ̇_t(dx) = λ_t(dx) ρ(x, t),   (9.1)

with

ρ(x, t) = (1/Z_t) 1_{x>A_t} − (1/(1−Z_t)) 1_{x=A_t}.   (9.2)

Now we can state our result about initial expansion with A_∞, which was first obtained by Jeulin ([42]); the proof we shall present is borrowed from [60].

Theorem 9.9. Let L be an honest time. We assume, as usual, that the conditions (CA) hold. Then every (F_t) local martingale M is an (F_t^{σ(A_∞)}) semimartingale and decomposes as:

M_t = M̃_t + ∫_0^t 1_{L>s} d⟨M, μ⟩_s/Z_s − ∫_0^t 1_{L≤s} d⟨M, μ⟩_s/(1 − Z_s),   (9.3)

where (M̃_t)_{t≥0} denotes an (F_t^{σ(A_∞)}) local martingale.

Proof. We can first assume that M is an L² martingale; the general case follows by localization. Let Λ_s be an F_s measurable set, and take t > s. Then, for any bounded test function G, we have:

E(1_{Λ_s} G(A_∞)(M_t − M_s)) = E(1_{Λ_s} (λ_t(G) M_t − λ_s(G) M_s))   (9.4)
= E(1_{Λ_s} (⟨λ(G), M⟩_t − ⟨λ(G), M⟩_s))   (9.5)
= E(1_{Λ_s} ∫_s^t λ̇_u(G) d⟨M, μ⟩_u)   (9.6)
= E(1_{Λ_s} ∫_s^t ∫ λ_u(dx) ρ(x, u) G(x) d⟨M, μ⟩_u)
= E(1_{Λ_s} ∫_s^t d⟨M, μ⟩_u ρ(A_∞, u)).   (9.7)

But from (9.2), we have:

ρ(A_∞, t) = (1/Z_t) 1_{A_∞>A_t} − (1/(1−Z_t)) 1_{A_∞=A_t}.


It now suffices to notice that (A_t) is constant after L, and that L is the first time when A_∞ = A_t; in other words (see for example [28], p. 134):
1_{A_∞>A_t} = 1_{L>t},  and  1_{A_∞=A_t} = 1_{L≤t}.

Let us emphasize again that the method we have used here applies to many other situations where the theorems of Jacod do not apply. Each time the different relationships we have just mentioned between the quantities λ_t(G), λ̇_t(G), λ_t(dx), λ̇_t(dx), ρ(x, t) hold, the above method and decomposition formula apply. Moreover, the condition (C) can be dropped: it is enough to have a stochastic integral representation for λ_t(G) (see [63] for a discussion). In the case of enlargement with A_∞, everything is nice, since every (F_t) local martingale M is an (F_t^{σ(A_∞)}) semimartingale. Sometimes an integrability condition is needed, as is shown by the following example.

Example 9.10 ([81], p. 34). Let Z = ∫_0^∞ ϕ(s) dB_s, for some ϕ ∈ L²(R_+, ds). Recall that
G_t = ∩_{ε>0} (F_{t+ε} ∨ σ{Z}).
We wish to address the following question: is (B_t) a (G_t) semimartingale? The above method applies step by step: it is easy to compute λ_t(dx) since, conditionally on F_t, Z is gaussian, with mean m_t = ∫_0^t ϕ(s) dB_s and variance σ_t² = ∫_t^∞ ϕ²(s) ds. Consequently, the absolute continuity requirement (9.1) is satisfied, with:
ρ(x, s) = ϕ(s) (x − m_s)/σ_s².
But here, the arguments in the proof of Theorem 9.9 (replace M with B) do not always work, since the quantities involved there (equations (9.4) to (9.7)) might be infinite; hence we have to impose an integrability condition. For example, if we assume that
∫_0^t |ϕ(s)|/σ_s ds < ∞,
then (B_t) is a (G_t) semimartingale, with canonical decomposition:

B_t = B_0 + B̃_t + ∫_0^t ds (ϕ(s)/σ_s²) ∫_s^∞ ϕ(u) dB_u,

where (B̃_t) is a (G_t) Brownian Motion. As a particular case, we may take Z = B_{t_0}, for some fixed t_0. The above formula then becomes:

B_t = B_0 + B̃_t + ∫_0^{t∧t_0} ds (B_{t_0} − B_s)/(t_0 − s),


where $(\widetilde{B}_t)$ is a $(\mathcal{G}_t)$ Brownian Motion. In particular, $(\widetilde{B}_t)$ is independent of $\mathcal{G}_0 = \sigma\{B_{t_0}\}$, so that conditionally on $B_{t_0} = y$, or equivalently, when $(B_t,\ t \leq t_0)$ is considered under the bridge law $\mathbb{P}^{t_0}_{x,y}$, its canonical decomposition is:
$$B_t = x + \widetilde{B}_t + \int_0^t ds\, \frac{y - B_s}{t_0 - s},$$
where $(\widetilde{B}_t,\ t \leq t_0)$ is now a $(\mathbb{P}^{t_0}_{x,y};\ (\mathcal{F}_t))$ Brownian Motion.

Example 9.11. For more examples of initial enlargements using this method, see the forthcoming book [50].

9.2. Progressive enlargements of filtrations

The theory of progressive enlargements of filtrations was originally motivated by a paper of Millar [57] on random times and decomposition theorems. It was first independently developed by Barlow [13] and Yor [77], and further developed by Jeulin and Yor [43] and Jeulin [41, 42]. For further developments and details, the reader can also refer to [45], which is written in French, or to [81, 50] or [68], chapter VI, for an English text. Let $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ be a filtered probability space satisfying the usual assumptions, and for simplicity (and because it is always the case in practical examples), we shall assume that:
$$\mathcal{F} = \mathcal{F}_\infty = \bigvee_{t \geq 0} \mathcal{F}_t.$$

Again, we will have to distinguish two cases: the case of arbitrary random times and that of honest times. Let $\rho$ be a random time. We enlarge the initial filtration $(\mathcal{F}_t)$ with the process $(\rho \wedge t)_{t \geq 0}$, so that the new enlarged filtration $(\mathcal{F}_t^\rho)_{t \geq 0}$ is the smallest filtration (satisfying the usual assumptions) containing $(\mathcal{F}_t)$ and making $\rho$ a stopping time (i.e. $\mathcal{F}_t^\rho = \mathcal{K}_{t+}^o$, where $\mathcal{K}_t^o = \mathcal{F}_t \vee \sigma(\rho \wedge t)$). Sometimes it is more convenient to introduce the larger filtration
$$\mathcal{G}_t^\rho = \{A \in \mathcal{F}_\infty :\ \exists A_t \in \mathcal{F}_t,\ A \cap \{\rho > t\} = A_t \cap \{\rho > t\}\},$$
which coincides with $\mathcal{F}_t^\rho$ before $\rho$ and which is constant after $\rho$ and equal to $\mathcal{F}_\infty$ ([28], p. 186). In the case of an honest time L, one can show that in fact (see [41]):
$$\mathcal{F}_t^L = \{A \in \mathcal{F}_\infty :\ \exists A_t, B_t \in \mathcal{F}_t,\ A = (A_t \cap \{L > t\}) \cup (B_t \cap \{L \leq t\})\}.$$
In the sequel, we shall only consider the filtrations $(\mathcal{G}_t^\rho)$ and $(\mathcal{F}_t^L)$: the first one when we study arbitrary random times and the second one when we consider the special case of honest times.
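Before turning to the general theory, Example 9.10's particular case $Z = B_{t_0}$ lends itself to a quick numerical sanity check. The sketch below (not part of the survey; an Euler discretization with arbitrary path counts and step sizes) reconstructs $\widetilde{B}_t = B_t - \int_0^t (B_{t_0} - B_s)/(t_0 - s)\, ds$ on $[0, t_0/2]$ and verifies two consequences of the theory: $\widetilde{B}$ has the variance of a Brownian motion and is decorrelated from $B_{t_0}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, t0 = 5_000, 500, 1.0
dt = t0 / n_steps

# Simulate Brownian paths B on [0, t0].
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
B_t0 = B[:, -1]

# Reconstruct the candidate (G_t) Brownian motion on [0, t0/2] (we stop at
# t0/2 to stay away from the integrable singularity at t0):
#   B~_t = B_t - int_0^t (B_{t0} - B_s) / (t0 - s) ds   (left-point Euler sum)
half = n_steps // 2
s = np.arange(half) * dt
drift = np.cumsum((B_t0[:, None] - B[:, :half]) / (t0 - s), axis=1) * dt
B_tilde = B[:, 1 : half + 1] - drift

# If the decomposition is right, Var(B~_{1/2}) should be close to 1/2 and
# B~_{1/2} should be decorrelated from B_{t0} (the theory gives independence).
print(round(float(B_tilde[:, -1].var()), 2))
print(round(float(np.corrcoef(B_tilde[:, -1], B_t0)[0, 1]), 2))
```

The simulation only checks the two weakest consequences; the theorem asserts the much stronger statement that $\widetilde{B}$ is an exact $(\mathcal{G}_t)$ Brownian Motion independent of $\mathcal{G}_0$.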

9.2.1. A description of predictable and optional processes in $(\mathcal{G}_t^\rho)$ and $(\mathcal{F}_t^L)$

All the results we shall mention in what follows can be found in [43] (or in [42, 28]) and are particularly useful in mathematical finance ([30], [40]).

Proposition 9.12. Let $\rho$ be an arbitrary random time. The following hold:

1. If H is a $(\mathcal{G}_t^\rho)$ predictable process, then there exists an $(\mathcal{F}_t)$ predictable process J such that $H_t 1_{t \leq \rho} = J_t 1_{t \leq \rho}$.
2. If T is a $(\mathcal{G}_t^\rho)$ stopping time, then there exists an $(\mathcal{F}_t)$ stopping time S such that $T \wedge \rho = S \wedge \rho$.
3. Let $\xi \in L^1$. Then a càdlàg version of the martingale $\xi_t = E[\xi \mid \mathcal{G}_t^\rho]$ is given by:
$$\xi_t = \frac{1}{Z_t^\rho}\, E\left[\xi 1_{t < \rho} \mid \mathcal{F}_t\right] 1_{t < \rho} + \xi\, 1_{\rho \leq t}.$$

9.2.2. The decomposition formula before $\rho$

Every $(\mathcal{F}_t)$ local martingale $(M_t)$, stopped at $\rho$, is a semimartingale in the enlarged filtration $(\mathcal{G}_t^\rho)$, with decomposition:
$$M_{t \wedge \rho} = \widetilde{M}_t + \int_0^{t \wedge \rho} \frac{d\langle M, \mu^\rho\rangle_s}{Z_{s-}^\rho}, \tag{9.8}$$
where $(\widetilde{M}_t)$ is a $(\mathcal{G}_t^\rho)$ local martingale.

We shall now give two applications of this decomposition. The first one is a refinement of Theorem 8.32, which brings a new insight to pseudo-stopping times:

Theorem 9.20. The following four properties are equivalent:

1. $\rho$ is an $(\mathcal{F}_t)$ pseudo-stopping time, i.e. (8.1) is satisfied;
2. $\mu_t^\rho \equiv 1$, a.s.;
3. $A_\infty^\rho \equiv 1$, a.s.;


4. every $(\mathcal{F}_t)$ local martingale $(M_t)$ satisfies: $(M_{t \wedge \rho})_{t \geq 0}$ is a local $(\mathcal{G}_t^\rho)$ martingale.

If, furthermore, all $(\mathcal{F}_t)$ martingales are continuous, then each of the preceding properties is equivalent to

5. $(Z_t^\rho)_{t \geq 0}$ is a decreasing $(\mathcal{F}_t)$ predictable process.

Proof. (1) ⇒ (2): For every square integrable $(\mathcal{F}_t)$ martingale $(M_t)$, we have
$$E[M_\rho] = E\left[\int_0^\infty M_s\, dA_s^\rho\right] = E[M_\infty A_\infty^\rho] = E[M_\infty \mu_\infty^\rho].$$
Since $E[M_\rho] = E[M_0] = E[M_\infty]$, we have
$$E[M_\infty] = E[M_\infty A_\infty^\rho] = E[M_\infty \mu_\infty^\rho].$$
Consequently, $\mu_\infty^\rho \equiv 1$ a.s., hence $\mu_t^\rho \equiv 1$ a.s., which is equivalent to $A_\infty^\rho \equiv 1$ a.s. Hence, 2. and 3. are equivalent.

(2) ⇒ (4): This is a consequence of the decomposition formula (9.8).

(4) ⇒ (1): It suffices to consider any $H^1$ martingale $(M_t)$, which, assuming (4), satisfies: $(M_{t \wedge \rho})_{t \geq 0}$ is a martingale in the enlarged filtration $(\mathcal{G}_t^\rho)$. Then, as a consequence of the optional stopping theorem applied in $(\mathcal{G}_t^\rho)$ at time $\rho$, we get
$$E[M_\rho] = E[M_0],$$
hence $\rho$ is a pseudo-stopping time.

Finally, in the case where all $(\mathcal{F}_t)$ martingales are continuous, we show:

a) (2) ⇒ (5): If $\rho$ is a pseudo-stopping time, then $Z_t^\rho$ decomposes as
$$Z_t^\rho = 1 - A_t^\rho.$$
As all $(\mathcal{F}_t)$ martingales are continuous, optional processes are in fact predictable, and so $(Z_t^\rho)$ is a predictable decreasing process.

b) (5) ⇒ (2): Conversely, if $(Z_t^\rho)$ is a predictable decreasing process, then from the uniqueness in the Doob-Meyer decomposition, the martingale part $\mu_t^\rho$ is constant, i.e. $\mu_t^\rho \equiv 1$, a.s. Thus, 2. is satisfied.

Now, we apply the progressive enlargement techniques to the study of the Burkholder-Davis-Gundy inequalities. More precisely, what remains of the Burkholder-Davis-Gundy inequalities when stopping times T are replaced by arbitrary random times $\rho$? The question of probabilistic inequalities at an arbitrary random time has been studied in depth by M. Yor (see [79], [81, 50] for details and references). For example, taking the special case of Brownian motion, it can easily be shown that there cannot exist a constant C such that:
$$E[|B_\rho|] \leq C\, E[\sqrt{\rho}]$$


for any random time $\rho$. For if it were the case, we could take $\rho = 1_A$, for $A \in \mathcal{F}_\infty$, and we would obtain:
$$E[|B_1| 1_A] \leq C\, E[1_A],$$
which is equivalent to $|B_1| \leq C$ a.s., which is absurd. Hence it is not obvious that the "strict" BDG inequalities might hold for local martingales stopped at random times other than stopping times. However, we have the following positive result:

Theorem 9.21 ([66]). Let $p > 0$. There exist two universal constants $c_p$ and $C_p$, depending only on p, such that for any $(\mathcal{F}_t)$ local martingale $(M_t)$, with $M_0 = 0$, and any $(\mathcal{F}_t)$ pseudo-stopping time $\rho$, we have
$$c_p\, E\left[\langle M\rangle_\rho^{p/2}\right] \leq E\left[\left(M_\rho^*\right)^p\right] \leq C_p\, E\left[\langle M\rangle_\rho^{p/2}\right].$$
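The counterexample $\rho = 1_A$ above can be made concrete numerically (a Monte Carlo sketch, not from the survey): with $A = \{B_1 > K\}$, one has $B_\rho = B_1$ on A, $B_\rho = B_0 = 0$ off A, and $\sqrt{\rho} = 1_A$, so the smallest admissible constant is $E[B_1 \mid B_1 > K]$, which grows without bound in K.

```python
import numpy as np

rng = np.random.default_rng(1)
B1 = rng.normal(size=1_000_000)   # terminal values B_1 of many Brownian paths

# For rho = 1_A, the candidate bound E[|B_rho|] <= C E[sqrt(rho)] reads
# E[|B_1| 1_A] <= C P(A).  Taking A = {B_1 > K}, the smallest admissible C
# is the conditional mean E[B_1 | B_1 > K], which diverges as K grows:
for K in (0.0, 1.0, 2.0, 3.0):
    tail = B1[B1 > K]
    print(f"K={K}: smallest admissible C ~ {tail.mean():.2f}")
```

Since no single C can dominate all these conditional means, no universal constant exists for arbitrary random times, in contrast with the pseudo-stopping time case of Theorem 9.21.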

Proof. It suffices, with the previous Theorem, to notice that in the enlarged filtration $(\mathcal{G}_t^\rho)$, $(M_{t \wedge \rho})$ is a martingale and $\rho$ is a stopping time in this filtration; then, we apply the classical BDG inequalities.

Remark 9.22. The constants $c_p$ and $C_p$ are the same as those obtained for martingales in the classical framework; in particular, the asymptotics are the same (see [17]).

Remark 9.23. It would be possible to prove the above Theorem using only the definition of pseudo-stopping times (as random times for which the optional stopping theorem holds), but the proof is much longer.

9.2.3. The decomposition formula for honest times

One of the remarkable features of honest times (discovered by Barlow [13]) is the fact that the pair of filtrations $(\mathcal{F}_t, \mathcal{F}_t^L)$ satisfies the (H′) hypothesis, and every $(\mathcal{F}_t)$ local martingale X is an $(\mathcal{F}_t^L)$ semimartingale. More precisely:

Theorem 9.24. An $(\mathcal{F}_t)$ local martingale $(M_t)$ is a semimartingale in the larger filtration $(\mathcal{F}_t^L)$ and decomposes as:
$$M_t = \widetilde{M}_t + \int_0^{t \wedge L} \frac{d\langle M, Z^L\rangle_s}{Z_{s-}^L} - \int_L^t \frac{d\langle M, Z^L\rangle_s}{1 - Z_{s-}^L}, \tag{9.9}$$
where $(\widetilde{M}_t)_{t \geq 0}$ denotes an $(\mathcal{F}_t^L, \mathbb{P})$ local martingale.

Remark 9.25. There are non-honest times $\rho$ such that the pair $(\mathcal{F}_t, \mathcal{G}_t^\rho)$ satisfies the (H′) hypothesis: for example, the pseudo-stopping times of Proposition 8.38 enjoy this remarkable property (see [60]) (for other examples see [42]).
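A numerical aside (not from the survey; the discretization choices are illustrative) shows why the finite-variation terms in (9.9) are genuinely needed. Take the honest time $L = \operatorname{argmax}_{t \leq 1} B_t$: then $B_L = \sup_{t \leq 1} B_t$, and $E[B_L] = \sqrt{2/\pi} \approx 0.80 \neq 0 = E[B_0]$, so the optional stopping theorem fails at L and $(B_{t \wedge L})$ cannot be a martingale in the enlarged filtration without a drift correction.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 10_000, 500
dt = 1.0 / n_steps

# Brownian paths on [0, 1]; L = argmax is honest (on {L <= t} it coincides
# with the F_t-measurable argmax over [0, t]) but is not a stopping time.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
B_at_L = B.max(axis=1)   # B_L = sup_{t<=1} B_t by definition of L

# E[B_L] should be near sqrt(2/pi) ~ 0.80 (slightly below, since the
# discrete-time maximum undershoots the continuous one), far from E[B_0] = 0.
print(round(float(B_at_L.mean()), 2))
```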

Proof. We shall give a proof under the conditions (CA), which are general enough for most applications. In this special case, it is a consequence


of Theorem 9.9. Indeed, we saw in the course of the proof of Theorem 9.9 that (for ease of notation we drop the upper index L):
$$1_{\{A_\infty > A_t\}} = 1_{\{L > t\}}, \qquad 1_{\{A_\infty = A_t\}} = 1_{\{L \leq t\}}.$$
Thus, by definition of $\mathcal{F}_t^L$, we have:
$$\mathcal{F}_t^L \subset \mathcal{F}_t^{\sigma(A_\infty)}.$$

Now, let M be an L² bounded $(\mathcal{F}_t)$ martingale; the general case follows by localization. From Theorem 9.9,
$$M_t = \widetilde{M}_t + \int_0^t 1_{\{L > s\}}\, \frac{d\langle M, \mu\rangle_s}{Z_s} - \int_0^t 1_{\{L \leq s\}}\, \frac{d\langle M, \mu\rangle_s}{1 - Z_s},$$
where $(\widetilde{M}_t)_{t \geq 0}$ denotes an $\mathcal{F}_t^{\sigma(A_\infty)}$ L² martingale. Thus $\widetilde{M}_t$, which is equal to:
$$M_t - \left(\int_0^t 1_{\{L > s\}}\, \frac{d\langle M, \mu\rangle_s}{Z_s} - \int_0^t 1_{\{L \leq s\}}\, \frac{d\langle M, \mu\rangle_s}{1 - Z_s}\right),$$
is $(\mathcal{F}_t^L)$ adapted, and hence it is an L² bounded $(\mathcal{F}_t^L)$ martingale.

There are many applications of progressive enlargements of filtrations with honest times, but we do not have the space here to give them. At the end of this section, we shall give a list of applications and references. Nevertheless, we mention an extension of the BDG inequalities obtained by Yor:

Proposition 9.26 (Yor [81], p. 57). Assume that $(\mathcal{F}_t)$ is the filtration of a standard Brownian Motion, and let L be an honest time. Then we have:
$$E[|B_L|] \leq C\, E\left[\Phi_L \sqrt{L}\right], \quad \text{with } \Phi_L = \left(1 + \log \frac{1}{I_L}\right)^{1/2},$$
where $I_L = \inf_{u \leq L} Z_u^L$.
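The quantities appearing in Proposition 9.26 can be estimated by simulation (a rough sketch, not from the survey) for the honest time $L = \operatorname{argmax}_{t \leq 1} B_t$. For this L, the reflection principle gives the Azéma supermartingale $Z_u^L = P(L > u \mid \mathcal{F}_u) = 2\left(1 - \Phi\left((S_u - B_u)/\sqrt{1-u}\right)\right)$, where S is the running maximum; this explicit formula, our derivation rather than the survey's, is what makes $I_L$ computable below.

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(3)
n_paths, n_steps = 2_000, 400
dt = 1.0 / n_steps
t = np.arange(n_steps + 1) * dt

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
S = np.maximum.accumulate(B, axis=1)          # running maximum S_u
L_idx = B.argmax(axis=1)                      # grid index of L = argmax
L = t[L_idx]
B_at_L = B[np.arange(n_paths), L_idx]         # B_L = sup_{u<=1} B_u >= 0

# Z_u^L = 2 * (1 - Phi(x)) = erfc(x / sqrt(2)), x = (S_u - B_u)/sqrt(1 - u),
# evaluated on the grid points u < 1.
x = (S[:, :-1] - B[:, :-1]) / np.sqrt(1.0 - t[:-1])
Z = np.vectorize(erfc)(x / np.sqrt(2.0))

# I_L = inf_{u <= L} Z_u^L and Phi_L = (1 + log(1/I_L))^{1/2}.
mask = np.arange(n_steps) <= np.minimum(L_idx, n_steps - 1)[:, None]
I_L = np.where(mask, Z, np.inf).min(axis=1)
Phi_L = np.sqrt(1.0 + np.log(1.0 / np.clip(I_L, 1e-300, 1.0)))

lhs = B_at_L.mean()                   # estimate of E[|B_L|]
rhs = (Phi_L * np.sqrt(L)).mean()     # estimate of E[Phi_L * sqrt(L)]
print(f"E[|B_L|] ~ {lhs:.2f}, E[Phi_L sqrt(L)] ~ {rhs:.2f}")
```

On this example the ratio of the two expectations stays of order one, consistent with the proposition; the simulation of course says nothing about the universal constant C.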
