Stochastic Calculus for Finance, Volume I and II
by Yan Zeng
Last updated: August 20, 2007

This is a solution manual for the two-volume textbook Stochastic Calculus for Finance, by Steven Shreve. If you have any comments or find any typos/errors, please email me at yz44@cornell.edu.

The current version omits the following problems. Volume I: 1.5, 3.3, 3.4, 5.7; Volume II: 3.9, 7.1, 7.2, 7.5–7.9, 10.8, 10.9, 10.10.

Acknowledgment: I thank Hua Li (a graduate student at Brown University) for reading through this solution manual and communicating to me several mistakes/typos.

Stochastic Calculus for Finance I: The Binomial Asset Pricing Model

1. The Binomial No-Arbitrage Pricing Model

1.1.

Proof. If we get the up state, then X1 = X1(H) = ∆0 uS0 + (1 + r)(X0 − ∆0 S0); if we get the down state, then X1 = X1(T) = ∆0 dS0 + (1 + r)(X0 − ∆0 S0). If X1 has a positive probability of being strictly positive, then we must have either X1(H) > 0 or X1(T) > 0.

(i) If X1(H) > 0, then ∆0 uS0 + (1 + r)(X0 − ∆0 S0) > 0. Plugging in X0 = 0, we get u∆0 > (1 + r)∆0. By the condition d < 1 + r < u, we conclude ∆0 > 0. In this case, X1(T) = ∆0 dS0 + (1 + r)(X0 − ∆0 S0) = ∆0 S0[d − (1 + r)] < 0.

(ii) If X1(T) > 0, then we can similarly deduce ∆0 < 0 and hence X1(H) < 0.

So we cannot have X1 strictly positive with positive probability unless X1 is strictly negative with positive probability as well, regardless of the choice of the number ∆0.

Remark: Here the condition X0 = 0 is not essential, as long as a proper definition of arbitrage for arbitrary X0 can be given. Indeed, for the one-period binomial model, we can define arbitrage as a trading strategy such that P(X1 ≥ X0(1 + r)) = 1 and P(X1 > X0(1 + r)) > 0. First, this is a generalization of the case X0 = 0; second, it is “proper” because it compares the result of an arbitrary investment involving the money and stock markets with that of a safe investment involving only the money market. This can also be seen by regarding X0 as borrowed from the money market account. Then at time 1, we have to pay back X0(1 + r) to the money market account. In summary, arbitrage is a trading strategy that beats the “safe” investment.

Accordingly, we revise the proof of Exercise 1.1 as follows. If X1 has a positive probability of being strictly larger than X0(1 + r), then either X1(H) > X0(1 + r) or X1(T) > X0(1 + r). The first case yields ∆0 S0(u − 1 − r) > 0, i.e. ∆0 > 0. So X1(T) = (1 + r)X0 + ∆0 S0(d − 1 − r) < (1 + r)X0. The second case can be similarly analyzed.
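The scan over ∆0 described here can also be carried out numerically. A minimal sketch, assuming the Figure 1.2.2 parameters u = 2, d = 1/2, r = 1/4, S0 = 4 (any parameters with d < 1 + r < u behave the same way):

```python
# One-period model: starting from X0 = 0, no stock position Delta_0 yields a
# portfolio that is nonnegative in both states and strictly positive in one.
u, d, r, S0 = 2.0, 0.5, 0.25, 4.0
assert d < 1 + r < u

def wealth(delta0, X0=0.0):
    """Time-one wealth (up state, down state) of a stock/money-market portfolio."""
    up = delta0 * u * S0 + (1 + r) * (X0 - delta0 * S0)
    down = delta0 * d * S0 + (1 + r) * (X0 - delta0 * S0)
    return up, down

for k in range(-400, 401):
    delta0 = k / 100.0
    up, down = wealth(delta0)
    # an arbitrage would be: never lose, win with positive probability
    assert not (min(up, down) >= -1e-12 and max(up, down) > 1e-12)
print("no arbitrage for any scanned Delta_0")
```

As in the proof, a positive ∆0 wins in the up state but loses in the down state, and vice versa.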
Hence we cannot have X1 strictly greater than X0(1 + r) with positive probability unless X1 is strictly smaller than X0(1 + r) with positive probability as well. Finally, we comment that the above formulation of arbitrage is equivalent to the one in the textbook. For details, see Shreve [7], Exercise 5.7.

1.2.


Proof. X1(u) = ∆0 × 8 + Γ0 × 3 − (5/4)(4∆0 + 1.20Γ0) = 3∆0 + 1.5Γ0, and X1(d) = ∆0 × 2 − (5/4)(4∆0 + 1.20Γ0) = −3∆0 − 1.5Γ0. That is, X1(u) = −X1(d). So if there is a positive probability that X1 is positive, then there is a positive probability that X1 is negative.

Remark: The relation X1(u) = −X1(d) above is not a coincidence. In general, let V1 denote the payoff of the derivative security at time 1. Suppose X̄0 and ∆̄0 are chosen in such a way that V1 can be replicated: (1 + r)(X̄0 − ∆̄0 S0) + ∆̄0 S1 = V1. Using the notation of the problem, suppose an agent begins with 0 wealth and at time zero buys ∆0 shares of stock and Γ0 options. He then puts his cash position −∆0 S0 − Γ0 X̄0 in a money market account. At time one, the value of the agent's portfolio of stock, option and money market assets is

X1 = ∆0 S1 + Γ0 V1 − (1 + r)(∆0 S0 + Γ0 X̄0).

Plugging in the expression for V1 and sorting out terms, we have

X1 = S0 (∆0 + ∆̄0 Γ0)(S1/S0 − (1 + r)).

Since d < 1 + r < u, X1(u) and X1(d) have opposite signs. So if the price of the option at time zero is X̄0, then there will be no arbitrage.

1.3.
Proof. V0 = (1/(1+r))[((1+r−d)/(u−d))S1(H) + ((u−1−r)/(u−d))S1(T)] = (S0/(1+r))[((1+r−d)/(u−d))u + ((u−1−r)/(u−d))d] = S0. This is not surprising, since this is exactly the cost of replicating S1.

Remark: This illustrates an important point. The “fair price” of a stock cannot be determined by risk-neutral pricing, as seen below. Suppose S1(H) and S1(T) are given; we could have two current prices, S0 and S0′. Correspondingly, we can get u, d and u′, d′. Because they are determined by S0 and S0′, respectively, it is not surprising that the risk-neutral pricing formula holds in both cases. That is,

S0 = [((1+r−d)/(u−d))S1(H) + ((u−1−r)/(u−d))S1(T)]/(1+r),

S0′ = [((1+r−d′)/(u′−d′))S1(H) + ((u′−1−r)/(u′−d′))S1(T)]/(1+r).

Essentially, this is because risk-neutral pricing relies on the fact that the fair price equals the replication cost. The stock, as a replicating component, cannot determine its own “fair” price via the risk-neutral pricing formula.

1.4.

Proof.

Xn+1(T) = ∆n dSn + (1 + r)(Xn − ∆n Sn)
= ∆n Sn(d − 1 − r) + (1 + r)Vn
= [(Vn+1(H) − Vn+1(T))/(u − d)](d − 1 − r) + (1 + r)·[p̃Vn+1(H) + q̃Vn+1(T)]/(1 + r)
= p̃(Vn+1(T) − Vn+1(H)) + p̃Vn+1(H) + q̃Vn+1(T)
= p̃Vn+1(T) + q̃Vn+1(T)
= Vn+1(T).

1.6.

Proof. The bank’s trader should set up a replicating portfolio whose payoff is the opposite of the option’s payoff. More precisely, we solve the equation (1 + r)(X0 − ∆0 S0 ) + ∆0 S1 = −(S1 − K)+ .
Then X0 = −1.20 and ∆0 = −1/2. This means the trader should short sell half a share of stock, put the income 2 into a money market account, and then transfer 1.20 into a separate money market account. At time one, the portfolio consisting of a short position in stock and 0.8(1 + r) in the money market account will cancel out with the option's payoff. Therefore we end up with 1.20(1 + r) in the separate money market account.

Remark: This problem illustrates why we are interested in hedging a long position. In case the stock price goes down at time one, the option will expire without any payoff. The initial money 1.20 we paid at time zero will be wasted. By hedging, we convert the option back into liquid assets (cash and stock) which guarantee a sure payoff at time one. Also, cf. page 7, paragraph 2. As to why we hedge a short position (as a writer), see Wilmott [8], pages 11-13.
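The pair (X0, ∆0) above comes from solving two linear equations, one per coin toss. A sketch of that computation, assuming the data of this exercise (S0 = 4, u = 2, d = 1/2, r = 1/4, K = 5):

```python
# Solve for (X0, Delta0) replicating the *negative* of a call payoff -(S1 - K)^+
# in the one-period model of Exercise 1.6.
S0, u, d, r, K = 4.0, 2.0, 0.5, 0.25, 5.0
Su, Sd = u * S0, d * S0
target_u, target_d = -max(Su - K, 0.0), -max(Sd - K, 0.0)

# (1+r)(X0 - D*S0) + D*S1 = target in both states => two linear equations.
delta0 = (target_u - target_d) / (Su - Sd)
X0 = (target_u - delta0 * (Su - (1 + r) * S0)) / (1 + r)
print(X0, delta0)  # X0 = -1.2, delta0 = -0.5
```

The negative ∆0 is precisely the short position of half a share described above.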

1.7.

Proof. The idea is the same as in Problem 1.6. The bank's trader only needs to set up the reverse of the replicating trading strategy described in Example 1.2.4. More precisely, he should short sell 0.1733 share of stock, invest the income 0.6933 into a money market account, and transfer 1.376 into a separate money market account. The portfolio consisting of a short position in stock and 0.6933 − 1.376 in the money market account will replicate the opposite of the option's payoff. After they cancel out, we end up with 1.376(1 + r)^3 in the separate money market account.

1.8. (i)

Proof. vn(s, y) = (2/5)[vn+1(2s, y + 2s) + vn+1(s/2, y + s/2)].

(ii)

Proof. 1.696.

(iii)

Proof. δn(s, y) = [vn+1(us, y + us) − vn+1(ds, y + ds)]/[(u − d)s].
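The recursion in (i) is straightforward to implement. The sketch below uses the surrounding model parameters (S0 = 4, u = 2, d = 1/2, r = 1/4) and, purely for illustration, a sample payoff f(x) = (x − 4)+ applied to the average; the payoff of Exercise 1.8 itself should be substituted to reproduce the value in (ii). The backward recursion is cross-checked against brute-force path enumeration:

```python
from functools import lru_cache
from itertools import product

# Backward induction v_n(s, y) for a path-dependent (Asian) payoff, where y
# carries the running sum of stock prices. Model parameters follow the
# surrounding exercises; the payoff f is a sample choice for illustration,
# not necessarily that of Exercise 1.8.
u, d, r, N, S0 = 2.0, 0.5, 0.25, 3, 4.0
p = q = 0.5                      # risk-neutral probabilities

def f(avg):
    return max(avg - 4.0, 0.0)   # sample Asian-call payoff on the average

@lru_cache(maxsize=None)
def v(n, s, y):
    if n == N:
        return f(y / (N + 1))
    return (p * v(n + 1, u * s, y + u * s) + q * v(n + 1, d * s, y + d * s)) / (1 + r)

v0 = v(0, S0, S0)

# cross-check: enumerate all 2^N coin-toss paths
total = 0.0
for path in product([u, d], repeat=N):
    s, y = S0, S0
    for move in path:
        s *= move
        y += s
    total += f(y / (N + 1)) * 0.5 ** N
assert abs(v0 - total / (1 + r) ** N) < 1e-12
print(v0)  # 1.216 for this sample payoff
```

The state (s, y) makes the path-dependent payoff Markov, which is exactly why the two-variable recursion works.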

1.9. (i)

Proof. Similar to Theorem 1.2.2, but replace r, u and d everywhere with rn, un and dn. More precisely, set p̃n = (1 + rn − dn)/(un − dn) and q̃n = 1 − p̃n. Then

Vn = [p̃n Vn+1(H) + q̃n Vn+1(T)]/(1 + rn).

(ii)

Proof. ∆n = [Vn+1(H) − Vn+1(T)]/[Sn+1(H) − Sn+1(T)] = [Vn+1(H) − Vn+1(T)]/[(un − dn)Sn].

(iii)

Proof. un = Sn+1(H)/Sn = (Sn + 10)/Sn = 1 + 10/Sn and dn = Sn+1(T)/Sn = (Sn − 10)/Sn = 1 − 10/Sn. So the risk-neutral probabilities at time n are p̃n = (1 − dn)/(un − dn) = 1/2 and q̃n = 1/2. Risk-neutral pricing implies the price of this call at time zero is 9.375.
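Part (iii) can be reproduced directly: with r = 0 the risk-neutral probabilities are 1/2 at every node, so SN is a symmetric ±10 random walk. A sketch, assuming the standard data for this exercise (S0 = 80, a call with strike K = 80 expiring at N = 5):

```python
from math import comb

# Exercise 1.9(iii): S_{n+1} = S_n +/- 10 with r = 0, so p~ = q~ = 1/2 at
# every node. Data assumed: S0 = 80, call strike K = 80, expiry N = 5.
S0, K, N = 80, 80, 5
price = sum(comb(N, k) * 0.5 ** N * max(S0 + 10 * k - 10 * (N - k) - K, 0)
            for k in range(N + 1))
print(price)  # -> 9.375
```

With k up-moves, S5 = 30 + 20k, so only k = 3, 4, 5 contribute: (10·10 + 5·30 + 1·50)/32 = 9.375.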

2. Probability Theory on Coin Toss Space

2.1. (i)

Proof. P(A^c) + P(A) = Σ_{ω∈A^c} P(ω) + Σ_{ω∈A} P(ω) = Σ_{ω∈Ω} P(ω) = 1.

(ii)

Proof. By induction, it suffices to work on the case N = 2. When A1 and A2 are disjoint, P(A1 ∪ A2) = Σ_{ω∈A1∪A2} P(ω) = Σ_{ω∈A1} P(ω) + Σ_{ω∈A2} P(ω) = P(A1) + P(A2). When A1 and A2 are arbitrary, using the result when they are disjoint, we have P(A1 ∪ A2) = P((A1 − A2) ∪ A2) = P(A1 − A2) + P(A2) ≤ P(A1) + P(A2).

2.2. (i)

Proof. P(S3 = 32) = p^3 = 1/8, P(S3 = 8) = 3p^2 q = 3/8, P(S3 = 2) = 3pq^2 = 3/8, and P(S3 = 0.5) = q^3 = 1/8.

(ii)

Proof. E[S1] = 8P(S1 = 8) + 2P(S1 = 2) = 8p + 2q = 5, E[S2] = 16p^2 + 4 · 2pq + 1 · q^2 = 6.25, and E[S3] = 32 · (1/8) + 8 · (3/8) + 2 · (3/8) + 0.5 · (1/8) = 7.8125. So the average rates of growth of the stock price under P are, respectively: r0 = 5/4 − 1 = 0.25, r1 = 6.25/5 − 1 = 0.25 and r2 = 7.8125/6.25 − 1 = 0.25.

(iii)

Proof. P(S3 = 32) = (2/3)^3 = 8/27, P(S3 = 8) = 3 · (2/3)^2 · (1/3) = 4/9, P(S3 = 2) = 3 · (2/3) · (1/3)^2 = 2/9, and P(S3 = 0.5) = 1/27. Accordingly, E[S1] = 6, E[S2] = 9 and E[S3] = 13.5. So the average rates of growth of the stock price under P are, respectively: r0 = 6/4 − 1 = 0.5, r1 = 9/6 − 1 = 0.5, and r2 = 13.5/9 − 1 = 0.5.
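The growth-rate computations in (ii) and (iii) can be checked by summing over the binomial distribution of Sn. A sketch with the model data S0 = 4, u = 2, d = 1/2:

```python
from math import comb

# Exercise 2.2: average growth rates of E[S_n] under the actual measure
# p = 2/3 and under the risk-neutral measure p~ = 1/2.
S0, u, d = 4.0, 2.0, 0.5

def expected_prices(p, n_max=3):
    """E[S_n], n = 0..n_max, summing over the binomial number of up-moves."""
    return [sum(comb(n, k) * p ** k * (1 - p) ** (n - k) * S0 * u ** k * d ** (n - k)
                for k in range(n + 1)) for n in range(n_max + 1)]

actual = expected_prices(2 / 3)     # 4, 6, 9, 13.5  -> growth rate 0.5
neutral = expected_prices(1 / 2)    # 4, 5, 6.25, 7.8125 -> growth rate 0.25
rates_actual = [actual[n + 1] / actual[n] - 1 for n in range(3)]
rates_neutral = [neutral[n + 1] / neutral[n] - 1 for n in range(3)]
print(rates_actual, rates_neutral)
```

Under the risk-neutral measure the growth rate equals the interest rate r = 0.25, as it must.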

2.3.

Proof. Apply the conditional Jensen's inequality.

2.4. (i)

Proof. En[Mn+1] = Mn + En[Xn+1] = Mn + E[Xn+1] = Mn.

(ii)

Proof. En[Sn+1/Sn] = En[e^{σXn+1} · 2/(e^σ + e^{−σ})] = (2/(e^σ + e^{−σ})) E[e^{σXn+1}] = 1.

2.5. (i)

Proof. 2In = 2Σ_{j=0}^{n−1} Mj(Mj+1 − Mj) = 2Σ_{j=0}^{n−1} MjMj+1 − 2Σ_{j=0}^{n−1} Mj^2 = Mn^2 + 2Σ_{j=0}^{n−1} MjMj+1 − Σ_{j=0}^{n−1} Mj+1^2 − Σ_{j=0}^{n−1} Mj^2 = Mn^2 − Σ_{j=0}^{n−1} (Mj+1 − Mj)^2 = Mn^2 − Σ_{j=0}^{n−1} Xj+1^2 = Mn^2 − n.

(ii)

Proof. En[f(In+1)] = En[f(In + Mn(Mn+1 − Mn))] = En[f(In + MnXn+1)] = (1/2)[f(In + Mn) + f(In − Mn)] = g(In), where g(x) = (1/2)[f(x + √(2x + n)) + f(x − √(2x + n))], since |Mn| = √(2In + n).

2.6.


Proof. En[In+1 − In] = En[∆n(Mn+1 − Mn)] = ∆n En[Mn+1 − Mn] = 0.

2.7.

Proof. We denote by Xn the result of the n-th coin toss, where Head is represented by X = 1 and Tail is represented by X = −1. We also suppose P(X = 1) = P(X = −1) = 1/2. Define S1 = X1 and Sn+1 = Sn + bn(X1, · · · , Xn)Xn+1, where bn(·) is a bounded function on {−1, 1}^n, to be determined later on. Clearly (Sn)n≥1 is an adapted stochastic process, and we can show it is a martingale. Indeed, En[Sn+1 − Sn] = bn(X1, · · · , Xn)En[Xn+1] = 0. For any arbitrary function f, En[f(Sn+1)] = (1/2)[f(Sn + bn(X1, · · · , Xn)) + f(Sn − bn(X1, · · · , Xn))]. Then intuitively, En[f(Sn+1)] cannot be solely dependent upon Sn when the bn's are properly chosen. Therefore, in general, (Sn)n≥1 cannot be a Markov process.

Remark: If Xn is regarded as the gain/loss of the n-th bet in a gambling game, then Sn would be the wealth at time n. bn is therefore the wager for the (n+1)-th bet and is devised according to past gambling results.

2.8. (i)

Proof. Note Mn = En[MN] and M′n = En[M′N]; since MN = M′N, we get Mn = M′n.

(ii)

Proof. In the proof of Theorem 1.2.2, we proved by induction that Xn = Vn, where Xn is defined by (1.2.14) of Chapter 1. In other words, the sequence (Vn)0≤n≤N can be realized as the value process of a portfolio which consists of stock and money market accounts. Since (Xn/(1+r)^n)0≤n≤N is a martingale under P̃ (Theorem 2.4.5), (Vn/(1+r)^n)0≤n≤N is a martingale under P̃.

(iii)

Proof. Vn/(1+r)^n = Ẽn[VN/(1+r)^N], so V0, V1/(1+r), · · · , VN−1/(1+r)^{N−1}, VN/(1+r)^N is a martingale under P̃.

(iv)

Proof. Combine (ii) and (iii), then use (i).

2.9. (i)

Proof. u0 = S1(H)/S0 = 2, d0 = S1(T)/S0 = 1/2, u1(H) = S2(HH)/S1(H) = 1.5, d1(H) = S2(HT)/S1(H) = 1, u1(T) = S2(TH)/S1(T) = 4, and d1(T) = S2(TT)/S1(T) = 1. So

p̃0 = (1 + r0 − d0)/(u0 − d0) = 1/2, q̃0 = 1/2, p̃1(H) = (1 + r1(H) − d1(H))/(u1(H) − d1(H)) = 1/2, q̃1(H) = 1/2, p̃1(T) = (1 + r1(T) − d1(T))/(u1(T) − d1(T)) = 1/6, and q̃1(T) = 5/6.

Therefore P̃(HH) = p̃0 p̃1(H) = 1/4, P̃(HT) = p̃0 q̃1(H) = 1/4, P̃(TH) = q̃0 p̃1(T) = 1/12, and P̃(TT) = q̃0 q̃1(T) = 5/12.

The proofs of Theorem 2.4.4, Theorem 2.4.5 and Theorem 2.4.7 still work for the random interest rate model, with proper modifications (i.e. P̃ would be constructed according to the conditional probabilities P̃(ωn+1 = H|ω1, · · · , ωn) := p̃n and P̃(ωn+1 = T|ω1, · · · , ωn) := q̃n; cf. notes on page 39). So the time-zero value of an option that pays off V2 at time two is given by the risk-neutral pricing formula V0 = Ẽ[V2/((1 + r0)(1 + r1))].

(ii)

Proof. V2(HH) = 5, V2(HT) = 1, V2(TH) = 1 and V2(TT) = 0. So V1(H) = [p̃1(H)V2(HH) + q̃1(H)V2(HT)]/(1 + r1(H)) = 2.4, V1(T) = [p̃1(T)V2(TH) + q̃1(T)V2(TT)]/(1 + r1(T)) = 1/9, and V0 = [p̃0V1(H) + q̃0V1(T)]/(1 + r0) ≈ 1.
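The numbers in (ii) can be verified with exact rational arithmetic (the data below are those of this exercise):

```python
from fractions import Fraction as F

# Exercise 2.9: two-period model with random interest rates
# r0 = 1/4, r1(H) = 1/4, r1(T) = 1/2, and risk-neutral probabilities
# derived above; option payoff V2 as in the text.
r0, r1H, r1T = F(1, 4), F(1, 4), F(1, 2)
p0, q0 = F(1, 2), F(1, 2)        # (1 + r0 - d0)/(u0 - d0) with u0 = 2, d0 = 1/2
p1H, q1H = F(1, 2), F(1, 2)      # u1(H) = 3/2, d1(H) = 1
p1T, q1T = F(1, 6), F(5, 6)      # u1(T) = 4,   d1(T) = 1
V2 = {"HH": F(5), "HT": F(1), "TH": F(1), "TT": F(0)}

V1H = (p1H * V2["HH"] + q1H * V2["HT"]) / (1 + r1H)
V1T = (p1T * V2["TH"] + q1T * V2["TT"]) / (1 + r1T)
V0 = (p0 * V1H + q0 * V1T) / (1 + r0)
print(V1H, V1T, V0)  # 12/5, 1/9, 226/225
```

So the “≈ 1” above is exactly 226/225 ≈ 1.0044.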


(iii)

Proof. ∆0 = [V1(H) − V1(T)]/[S1(H) − S1(T)] = (2.4 − 1/9)/(8 − 2) = 0.4 − 1/54 ≈ 0.3815.

(iv)

Proof. ∆1(H) = [V2(HH) − V2(HT)]/[S2(HH) − S2(HT)] = (5 − 1)/(12 − 8) = 1.

2.10. (i)

Proof. Ẽn[Xn+1/(1+r)^{n+1}] = Ẽn[∆nYn+1Sn/(1+r)^{n+1} + (1 + r)(Xn − ∆nSn)/(1+r)^{n+1}] = [∆nSn/(1+r)^{n+1}]Ẽn[Yn+1] + (Xn − ∆nSn)/(1+r)^n = [∆nSn/(1+r)^{n+1}](up̃ + dq̃) + (Xn − ∆nSn)/(1+r)^n = (∆nSn + Xn − ∆nSn)/(1+r)^n = Xn/(1+r)^n.

(ii)

Proof. From (2.8.2), we have

∆n uSn + (1 + r)(Xn − ∆nSn) = Xn+1(H)
∆n dSn + (1 + r)(Xn − ∆nSn) = Xn+1(T).

So ∆n = [Xn+1(H) − Xn+1(T)]/(uSn − dSn) and Xn = Ẽn[Xn+1/(1 + r)]. To make the portfolio replicate the payoff at time N, we must have XN = VN. So Xn = Ẽn[XN/(1+r)^{N−n}] = Ẽn[VN/(1+r)^{N−n}]. Since (Xn)0≤n≤N is the value process of the unique replicating portfolio (uniqueness is guaranteed by the uniqueness of the solution to the above linear equations), the no-arbitrage price of VN at time n is Vn = Xn = Ẽn[VN/(1+r)^{N−n}].

(iii)

Proof.

Ẽn[Sn+1/(1+r)^{n+1}] = [1/(1+r)^{n+1}]Ẽn[(1 − An+1)Yn+1Sn]
= [Sn/(1+r)^{n+1}][p̃(1 − An+1(H))u + q̃(1 − An+1(T))d]
< [Sn/(1+r)^{n+1}][p̃u + q̃d]
= Sn/(1+r)^n.

If An+1 is a constant a, then Ẽn[Sn+1/(1+r)^{n+1}] = [Sn/(1+r)^{n+1}](1 − a)(p̃u + q̃d) = [Sn/(1+r)^n](1 − a). So Ẽn[Sn+1/((1+r)^{n+1}(1 − a)^{n+1})] = Sn/((1+r)^n(1 − a)^n).

2.11. (i)

Proof. FN + PN = SN − K + (K − SN)^+ = (SN − K)^+ = CN.

(ii)

Proof. Cn = Ẽn[CN/(1+r)^{N−n}] = Ẽn[FN/(1+r)^{N−n}] + Ẽn[PN/(1+r)^{N−n}] = Fn + Pn.

(iii)

Proof. F0 = Ẽ[FN/(1+r)^N] = [1/(1+r)^N]Ẽ[SN − K] = S0 − K/(1+r)^N.

(iv)

Proof. At time zero, the trader has F0 − S0 in the money market account and one share of stock. At time N, the trader has a wealth of (F0 − S0)(1 + r)^N + SN = −K + SN = FN.

(v)

Proof. By (ii), C0 = F0 + P0. Since F0 = S0 − S0(1+r)^N/(1+r)^N = 0, C0 = P0.

(vi)

Proof. By (ii), Cn = Pn if and only if Fn = 0. Note Fn = Ẽn[(SN − K)/(1+r)^{N−n}] = Sn − S0(1+r)^N/(1+r)^{N−n} = Sn − S0(1 + r)^n. So Fn is not necessarily zero and Cn = Pn is not necessarily true for n ≥ 1.
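Put-call parity Cn = Fn + Pn and the special choice K = S0(1+r)^N can be checked numerically in a small binomial model (a sketch; the model parameters are illustrative):

```python
from math import comb

# Exercise 2.11: check C_0 = F_0 + P_0 with K = S0 (1+r)^N, so F_0 = 0 and
# hence C_0 = P_0.
S0, u, d, r, N = 4.0, 2.0, 0.5, 0.25, 2
K = S0 * (1 + r) ** N
p = q = 0.5                      # risk-neutral probabilities

def price(payoff):
    """Time-zero risk-neutral price of a European payoff(S_N)."""
    return sum(comb(N, k) * p ** k * q ** (N - k) * payoff(S0 * u ** k * d ** (N - k))
               for k in range(N + 1)) / (1 + r) ** N

C0 = price(lambda s: max(s - K, 0.0))
P0 = price(lambda s: max(K - s, 0.0))
F0 = price(lambda s: s - K)
assert abs(C0 - (F0 + P0)) < 1e-12 and abs(F0) < 1e-12
print(C0, P0)  # equal, as (v) asserts
```

For n ≥ 1 the forward value Sn − S0(1+r)^n is generally nonzero, so the equality Cn = Pn fails after time zero, as (vi) notes.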

2.12.

Proof. First, the no-arbitrage price of the chooser option at time m must be max(C, P), where

C = Ẽm[(SN − K)^+/(1+r)^{N−m}], and P = Ẽm[(K − SN)^+/(1+r)^{N−m}].

That is, C is the no-arbitrage price of a call option at time m and P is the no-arbitrage price of a put option at time m. Both of them have maturity date N and strike price K. Suppose the market is liquid; then the chooser option is equivalent to receiving a payoff of max(C, P) at time m. Therefore, its current no-arbitrage price should be Ẽ[max(C, P)/(1+r)^m].

By put-call parity, C = Sm − K/(1+r)^{N−m} + P. So max(C, P) = P + (Sm − K/(1+r)^{N−m})^+. Therefore, the time-zero price of a chooser option is

Ẽ[P/(1+r)^m] + Ẽ[(Sm − K/(1+r)^{N−m})^+/(1+r)^m] = Ẽ[(K − SN)^+/(1+r)^N] + Ẽ[(Sm − K/(1+r)^{N−m})^+/(1+r)^m].

The first term stands for the time-zero price of a put, expiring at time N and having strike price K, and the second term stands for the time-zero price of a call, expiring at time m and having strike price K/(1+r)^{N−m}.

If we feel unconvinced by the above argument that the chooser option's no-arbitrage price is Ẽ[max(C, P)/(1+r)^m], due to the economic argument involved (like “the chooser option is equivalent to receiving a payoff of max(C, P) at time m”), then we have the following mathematically rigorous argument. First, we can construct a portfolio ∆0, · · · , ∆m−1 whose payoff at time m is max(C, P). Fix ω; if C(ω) > P(ω), we can construct a portfolio ∆′m, · · · , ∆′N−1 whose payoff at time N is (SN − K)^+; if C(ω) < P(ω), we can construct a portfolio ∆′′m, · · · , ∆′′N−1 whose payoff at time N is (K − SN)^+. By defining (m ≤ k ≤ N − 1)

∆k(ω) = ∆′k(ω) if C(ω) > P(ω), and ∆k(ω) = ∆′′k(ω) if C(ω) < P(ω),

we get a portfolio (∆n)0≤n≤N−1 whose payoff is the same as that of the chooser option. So the no-arbitrage price process of the chooser option must be equal to the value process of the replicating portfolio. In particular, V0 = X0 = Ẽ[Xm/(1+r)^m] = Ẽ[max(C, P)/(1+r)^m].

2.13. (i)

Proof. Note that under both the actual probability P and the risk-neutral probability P̃, the coin tosses ωn are i.i.d. So without loss of generality, we work on P. For any function g, En[g(Sn+1, Yn+1)] = En[g((Sn+1/Sn)Sn, Yn + (Sn+1/Sn)Sn)] = pg(uSn, Yn + uSn) + qg(dSn, Yn + dSn), which is a function of (Sn, Yn). So (Sn, Yn)0≤n≤N is Markov under P.

(ii)

Proof. Set vN(s, y) = f(y/(N + 1)). Then vN(SN, YN) = f(Σ_{n=0}^{N} Sn/(N + 1)) = VN. Suppose vn+1 is given. Then Vn = Ẽn[Vn+1/(1 + r)] = Ẽn[vn+1(Sn+1, Yn+1)/(1 + r)] = [1/(1 + r)][p̃vn+1(uSn, Yn + uSn) + q̃vn+1(dSn, Yn + dSn)] = vn(Sn, Yn), where

vn(s, y) = [p̃vn+1(us, y + us) + q̃vn+1(ds, y + ds)]/(1 + r).

2.14. (i)

Proof. For n ≤ M, (Sn, Yn) = (Sn, 0). Since the coin tosses ωn are i.i.d. under P, (Sn, Yn)0≤n≤M is Markov under P. More precisely, for any function h, En[h(Sn+1)] = ph(uSn) + qh(dSn), for n = 0, 1, · · · , M − 1. For any function g of two variables, we have EM[g(SM+1, YM+1)] = EM[g(SM+1, SM+1)] = pg(uSM, uSM) + qg(dSM, dSM). And for n ≥ M + 1, En[g(Sn+1, Yn+1)] = En[g((Sn+1/Sn)Sn, Yn + (Sn+1/Sn)Sn)] = pg(uSn, Yn + uSn) + qg(dSn, Yn + dSn), so (Sn, Yn)0≤n≤N is Markov under P.

(ii)

Proof. Set vN(s, y) = f(y/(N − M)). Then vN(SN, YN) = f(Σ_{k=M+1}^{N} Sk/(N − M)) = VN. Suppose vn+1 is already given.

a) If n > M, then Ẽn[vn+1(Sn+1, Yn+1)] = p̃vn+1(uSn, Yn + uSn) + q̃vn+1(dSn, Yn + dSn). So vn(s, y) = p̃vn+1(us, y + us) + q̃vn+1(ds, y + ds).

b) If n = M, then ẼM[vM+1(SM+1, YM+1)] = p̃vM+1(uSM, uSM) + q̃vM+1(dSM, dSM). So vM(s) = p̃vM+1(us, us) + q̃vM+1(ds, ds).

c) If n < M, then Ẽn[vn+1(Sn+1)] = p̃vn+1(uSn) + q̃vn+1(dSn). So vn(s) = p̃vn+1(us) + q̃vn+1(ds).

3. State Prices

3.1.

Proof. Note Z̃(ω) := P(ω)/P̃(ω) = 1/Z(ω). Apply Theorem 3.1.1 with P, P̃, Z replaced by P̃, P, Z̃, and we get the

analogues of properties (i)-(iii) of Theorem 3.1.1.

3.2. (i)

Proof. P̃(Ω) = Σ_{ω∈Ω} P̃(ω) = Σ_{ω∈Ω} Z(ω)P(ω) = E[Z] = 1.

(ii)

Proof. Ẽ[Y] = Σ_{ω∈Ω} Y(ω)P̃(ω) = Σ_{ω∈Ω} Y(ω)Z(ω)P(ω) = E[YZ].

(iii)

Proof. P̃(A) = Σ_{ω∈A} Z(ω)P(ω). Since P(A) = 0, P(ω) = 0 for any ω ∈ A. So P̃(A) = 0.

(iv)

Proof. If P̃(A) = Σ_{ω∈A} Z(ω)P(ω) = 0, by P(Z > 0) = 1, we conclude P(ω) = 0 for any ω ∈ A. So P(A) = Σ_{ω∈A} P(ω) = 0.

(v)

Proof. P(A) = 1 ⟺ P(A^c) = 0 ⟺ P̃(A^c) = 0 ⟺ P̃(A) = 1.

(vi)

Proof. Pick ω0 such that P(ω0) > 0, and define Z(ω) = 1/P(ω0) if ω = ω0, and Z(ω) = 0 if ω ≠ ω0. Then P(Z ≥ 0) = 1 and E[Z] = [1/P(ω0)] · P(ω0) = 1. Clearly

P̃(Ω \ {ω0}) = E[Z 1_{Ω\{ω0}}] = Σ_{ω≠ω0} Z(ω)P(ω) = 0.

But P(Ω \ {ω0}) = 1 − P(ω0) > 0 if P(ω0) < 1. Hence in the case 0 < P(ω0) < 1, P and P̃ are not equivalent. If P(ω0) = 1, then E[Z] = 1 if and only if Z(ω0) = 1. In this case P̃(ω0) = Z(ω0)P(ω0) = 1, and P and P̃ have to be equivalent.

In summary, if we can find ω0 such that 0 < P(ω0) < 1, then Z as constructed above would induce a probability P̃ that is not equivalent to P.

3.5. (i)

Proof. Z(HH) = 9/16, Z(HT) = 9/8, Z(TH) = 3/8 and Z(TT) = 15/4.

(ii)

Proof. Z1(H) = E1[Z2](H) = Z2(HH)P(ω2 = H|ω1 = H) + Z2(HT)P(ω2 = T|ω1 = H) = 3/4, and Z1(T) = E1[Z2](T) = Z2(TH)P(ω2 = H|ω1 = T) + Z2(TT)P(ω2 = T|ω1 = T) = 3/2.

(iii)

Proof.

V1(H) = [Z2(HH)V2(HH)P(ω2 = H|ω1 = H) + Z2(HT)V2(HT)P(ω2 = T|ω1 = H)]/[Z1(H)(1 + r1(H))] = 2.4,

V1(T) = [Z2(TH)V2(TH)P(ω2 = H|ω1 = T) + Z2(TT)V2(TT)P(ω2 = T|ω1 = T)]/[Z1(T)(1 + r1(T))] = 1/9,

and

V0 = [Z2(HH)V2(HH)/((1 + 1/4)(1 + 1/4))]P(HH) + [Z2(HT)V2(HT)/((1 + 1/4)(1 + 1/4))]P(HT) + [Z2(TH)V2(TH)/((1 + 1/4)(1 + 1/2))]P(TH) + 0 ≈ 1.
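The state-price computation can be replayed in exact arithmetic, assuming the data of Exercise 2.9 and the actual measure p = 2/3:

```python
from fractions import Fraction as F

# Exercise 3.5: pricing under the actual measure P via the Radon-Nikodym
# derivative Z = P~/P. The state prices Z2(w)/((1+r0)(1+r1)) reproduce the
# risk-neutral price of Exercise 2.9.
P = {"HH": F(4, 9), "HT": F(2, 9), "TH": F(2, 9), "TT": F(1, 9)}
Pt = {"HH": F(1, 4), "HT": F(1, 4), "TH": F(1, 12), "TT": F(5, 12)}
Z2 = {w: Pt[w] / P[w] for w in P}
assert Z2["HH"] == F(9, 16) and Z2["TT"] == F(15, 4)

disc = {"HH": (1 + F(1, 4)) * (1 + F(1, 4)), "HT": (1 + F(1, 4)) * (1 + F(1, 4)),
        "TH": (1 + F(1, 4)) * (1 + F(1, 2)), "TT": (1 + F(1, 4)) * (1 + F(1, 2))}
V2 = {"HH": F(5), "HT": F(1), "TH": F(1), "TT": F(0)}

V0 = sum(Z2[w] * V2[w] / disc[w] * P[w] for w in P)
print(V0)  # 226/225, matching the risk-neutral price in Exercise 2.9
```

This makes concrete the point of Chapter 3: weighting by Z converts actual-measure expectations into risk-neutral prices.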

3.6.

Proof. U′(x) = 1/x, so I(x) = 1/x. (3.3.26) gives E[(Z/(1+r)^N) · (1+r)^N/(λZ)] = X0, so λ = 1/X0. By (3.3.25), we have XN = (1+r)^N/(λZ) = X0(1+r)^N/Z. Hence

Xn = Ẽn[XN/(1+r)^{N−n}] = Ẽn[X0(1+r)^n/Z] = X0(1+r)^n · (1/Zn)En[Z · (1/Z)] = X0(1+r)^n/Zn = X0/ξn,

where ξn := Zn/(1+r)^n and the second-to-last “=” comes from Lemma 3.2.6.

3.7.

Proof. U′(x) = x^{p−1}, and so I(x) = x^{1/(p−1)}. By (3.3.26), we have

E[(Z/(1+r)^N)(λZ/(1+r)^N)^{1/(p−1)}] = X0.

Solving for λ, we get λ^{1/(p−1)} = X0(1+r)^{Np/(p−1)}/E[Z^{p/(p−1)}], i.e.

λ = X0^{p−1}(1+r)^{Np}/(E[Z^{p/(p−1)}])^{p−1}.

So by (3.3.25),

XN = (λZ/(1+r)^N)^{1/(p−1)} = λ^{1/(p−1)} Z^{1/(p−1)}/(1+r)^{N/(p−1)} = [X0(1+r)^{Np/(p−1)}/E[Z^{p/(p−1)}]] · Z^{1/(p−1)}/(1+r)^{N/(p−1)} = (1+r)^N X0 Z^{1/(p−1)}/E[Z^{p/(p−1)}].

3.8. (i)

Proof. (d/dx)(U(x) − yx) = U′(x) − y, so x = I(y) is an extreme point of U(x) − yx. Because (d²/dx²)(U(x) − yx) = U′′(x) ≤ 0 (U is concave), x = I(y) is a maximum point. Therefore U(x) − yx ≤ U(I(y)) − yI(y) for every x.

(ii)

Proof. Following the hint of the problem, we have

E[U(XN)] − E[XN λZ/(1+r)^N] ≤ E[U(I(λZ/(1+r)^N))] − E[(λZ/(1+r)^N)I(λZ/(1+r)^N)],

i.e. E[U(XN)] − λX0 ≤ E[U(X*N)] − E[(λZ/(1+r)^N)X*N] = E[U(X*N)] − λX0. So E[U(XN)] ≤ E[U(X*N)].

3.9. (i)

Proof. Xn = Ẽn[XN/(1+r)^{N−n}]. So if XN ≥ 0, then Xn ≥ 0 for all n.

(ii)

Proof. a) If 0 ≤ x < γ and 0 < y ≤ 1/γ, then U(x) − yx = −yx ≤ 0 and U(I(y)) − yI(y) = U(γ) − yγ = 1 − yγ ≥ 0. So U(x) − yx ≤ U(I(y)) − yI(y).

b) If 0 ≤ x < γ and y > 1/γ, then U(x) − yx = −yx ≤ 0 and U(I(y)) − yI(y) = U(0) − y · 0 = 0. So U(x) − yx ≤ U(I(y)) − yI(y).

c) If x ≥ γ and 0 < y ≤ 1/γ, then U(x) − yx = 1 − yx and U(I(y)) − yI(y) = U(γ) − yγ = 1 − yγ ≥ 1 − yx. So U(x) − yx ≤ U(I(y)) − yI(y).

d) If x ≥ γ and y > 1/γ, then U(x) − yx = 1 − yx < 0 and U(I(y)) − yI(y) = U(0) − y · 0 = 0. So U(x) − yx ≤ U(I(y)) − yI(y).

(iii)

Proof. Using (ii) and setting x = XN, y = λZ/(1+r)^N, where XN is a random variable satisfying Ẽ[XN/(1+r)^N] = X0, we have

E[U(XN)] − E[(λZ/(1+r)^N)XN] ≤ E[U(X*N)] − E[(λZ/(1+r)^N)X*N].

That is, E[U(XN)] − λX0 ≤ E[U(X*N)] − λX0. So E[U(XN)] ≤ E[U(X*N)].

(iv)

Proof. Plugging pm and ξm into (3.6.4), we have

X0 = Σ_{m=1}^{2^N} pm ξm I(λξm) = Σ_{m=1}^{2^N} pm ξm γ 1_{λξm ≤ 1/γ}.

So X0/γ = Σ_{m=1}^{2^N} pm ξm 1_{λξm ≤ 1/γ}. Suppose there is a solution λ to (3.6.4). Note γ > 0; we can then conclude {m : λξm ≤ 1/γ} ≠ ∅. Let K = max{m : λξm ≤ 1/γ}; then λξK ≤ 1/γ < λξK+1. So ξK < ξK+1 and Σ_{m=1}^{K} pm ξm = X0/γ. (Note, however, that K could be 2^N; in this case, ξK+1 is interpreted as ∞. Also, note we are looking for a positive solution λ > 0.) Conversely, suppose there exists some K so that ξK < ξK+1 and Σ_{m=1}^{K} ξm pm = X0/γ. Then we can find λ > 0 such that ξK < 1/(λγ) < ξK+1. For such λ, we have

E[(Z/(1+r)^N)I(λZ/(1+r)^N)] = Σ_{m=1}^{2^N} pm ξm 1_{λξm ≤ 1/γ} γ = Σ_{m=1}^{K} pm ξm γ = X0.

Hence (3.6.4) has a solution.

(v)

Proof. X*N(ω^m) = I(λξm) = γ 1_{λξm ≤ 1/γ} = γ if m ≤ K, and 0 if m ≥ K + 1.

4. American Derivative Securities

Before proceeding to the exercise problems, we first give a brief summary of pricing American derivative securities as presented in the textbook. We shall use the notation of the book.

From the buyer's perspective: At time n, if the derivative security has not been exercised, then the buyer can choose a policy τ with τ ∈ Sn. The valuation formula for cash flow (Theorem 2.4.8) gives a fair price for the derivative security exercised according to τ:

Vn(τ) = Σ_{k=n}^{N} Ẽn[1_{τ=k} Gk/(1+r)^{k−n}] = Ẽn[1_{τ≤N} Gτ/(1+r)^{τ−n}].

The buyer wants to consider all possible τ's, so that he can find the least upper bound of the security's value, which will be the maximum price of the derivative security acceptable to him. This is the price given by Definition 4.4.1: Vn = max_{τ∈Sn} Ẽn[1_{τ≤N} Gτ/(1+r)^{τ−n}].

From the seller's perspective: A price process (Vn)0≤n≤N is acceptable to him if and only if at time n, he can construct a portfolio at cost Vn so that (i) Vn ≥ Gn and (ii) he needs no further investment into the portfolio as time goes by. Formally, the seller can find (∆n)0≤n≤N and (Cn)0≤n≤N so that Cn ≥ 0 and Vn+1 = ∆nSn+1 + (1 + r)(Vn − Cn − ∆nSn). Since (Sn/(1+r)^n)0≤n≤N is a martingale under the risk-neutral measure P̃, we conclude

Ẽn[Vn+1/(1+r)^{n+1} − Vn/(1+r)^n] = −Cn/(1+r)^n ≤ 0,

i.e. (Vn/(1+r)^n)0≤n≤N is a supermartingale. This inspires us to check whether the converse is also true. This is exactly the content of Theorem 4.4.4: (Vn)0≤n≤N is the value process of a portfolio that needs no further investment if and only if (Vn/(1+r)^n)0≤n≤N is a supermartingale under P̃ (note this is independent of the requirement Vn ≥ Gn). In summary, a price process (Vn)0≤n≤N is acceptable to the seller if and only if (i) Vn ≥ Gn; (ii) (Vn/(1+r)^n)0≤n≤N is a supermartingale under P̃.

Theorem 4.4.2 shows the buyer's upper bound is the seller's lower bound. So it gives the price acceptable to both. Theorem 4.4.3 gives a specific algorithm for calculating the price, Theorem 4.4.4 establishes the one-to-one correspondence between super-replication and the supermartingale property, and finally, Theorem 4.4.5 shows how to decide on the optimal exercise policy.

4.1. (i)

Proof. V2^P(HH) = 0, V2^P(HT) = V2^P(TH) = 0.8, V2^P(TT) = 3, V1^P(H) = 0.32, V1^P(T) = 2, V0^P = 0.928.

(ii)

Proof. V0^C = 5.

(iii)

Proof. gS(s) = |4 − s|. We apply Theorem 4.4.3 and have V2^S(HH) = 12.8, V2^S(HT) = V2^S(TH) = 2.4, V2^S(TT) = 3, V1^S(H) = 6.08, V1^S(T) = 2.16 and V0^S = 3.296.

(iv)


Proof. First, we note the simple inequality max(a1, b1) + max(a2, b2) ≥ max(a1 + a2, b1 + b2), where “>” holds if and only if b1 > a1, b2 < a2 or b1 < a1, b2 > a2. By induction, we can show

Vn^S = max{gS(Sn), [p̃Vn+1^S(H) + q̃Vn+1^S(T)]/(1+r)}
≤ max{gP(Sn) + gC(Sn), [p̃Vn+1^P(H) + q̃Vn+1^P(T)]/(1+r) + [p̃Vn+1^C(H) + q̃Vn+1^C(T)]/(1+r)}
≤ max{gP(Sn), [p̃Vn+1^P(H) + q̃Vn+1^P(T)]/(1+r)} + max{gC(Sn), [p̃Vn+1^C(H) + q̃Vn+1^C(T)]/(1+r)}
= Vn^P + Vn^C.

As to when “<” holds, by the simple inequality above, the strict inequality appears when gP(Sn) < [p̃Vn+1^P(H) + q̃Vn+1^P(T)]/(1+r) and gC(Sn) > [p̃Vn+1^C(H) + q̃Vn+1^C(T)]/(1+r), or gP(Sn) > [p̃Vn+1^P(H) + q̃Vn+1^P(T)]/(1+r) and gC(Sn) < [p̃Vn+1^C(H) + q̃Vn+1^C(T)]/(1+r).

4.2.

Proof. For this problem, we need Figure 4.2.1, Figure 4.4.1 and Figure 4.4.2. Then

∆1(H) = [V2(HH) − V2(HT)]/[S2(HH) − S2(HT)] = −1/12, ∆1(T) = [V2(TH) − V2(TT)]/[S2(TH) − S2(TT)] = −1,

and ∆0 = [V1(H) − V1(T)]/[S1(H) − S1(T)] ≈ −0.433.
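These deltas, and the put prices they are computed from, can be reproduced by the American algorithm. A sketch assuming the model S0 = 4, u = 2, d = 1/2, r = 1/4 with an American put struck at K = 5 (which yields the value 1.36 used below):

```python
# American put via V_n = max(G_n, discounted risk-neutral expectation).
S0, u, d, r, K, N = 4.0, 2.0, 0.5, 0.25, 5.0, 2
p = q = 0.5                       # risk-neutral probabilities

def put(n, s):
    intrinsic = max(K - s, 0.0)
    if n == N:
        return intrinsic
    cont = (p * put(n + 1, u * s) + q * put(n + 1, d * s)) / (1 + r)
    return max(intrinsic, cont)

V0, V1H, V1T = put(0, S0), put(1, u * S0), put(1, d * S0)
delta0 = (V1H - V1T) / (u * S0 - d * S0)
delta1H = (put(2, u * u * S0) - put(2, u * d * S0)) / (u * u * S0 - u * d * S0)
print(V0, V1H, V1T, delta0, delta1H)  # 1.36, 0.4, 3.0, -0.4333..., -1/12
```

Note V1(T) = 3 is the intrinsic value: at that node early exercise is optimal, which is where the hedging narrative below exercises the put.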

The optimal exercise time is τ = inf{n : Vn = Gn}. So τ(HH) = ∞, τ(HT) = 2, τ(TH) = τ(TT) = 1. Therefore, the agent borrows 1.36 at time zero and buys the put. At the same time, to hedge the long position, he needs to borrow again and buy 0.433 shares of stock at time zero.

At time one, if the result of the coin toss is tail and the stock price goes down to 2, the value of the portfolio is X1(T) = (1 + r)(−1.36 − 0.433S0) + 0.433S1(T) = (1 + 1/4)(−1.36 − 0.433 × 4) + 0.433 × 2 = −3. The agent should exercise the put at time one and get 3 to pay off his debt.

At time one, if the result of the coin toss is head and the stock price goes up to 8, the value of the portfolio is X1(H) = (1 + r)(−1.36 − 0.433S0) + 0.433S1(H) = −0.4. The agent should borrow to buy 1/12 shares of stock. At time two, if the result of the coin toss is head and the stock price goes up to 16, the value of the portfolio is X2(HH) = (1 + r)(X1(H) − (1/12)S1(H)) + (1/12)S2(HH) = 0, and the agent should let the put expire. If at time two the result of the coin toss is tail and the stock price goes down to 4, the value of the portfolio is X2(HT) = (1 + r)(X1(H) − (1/12)S1(H)) + (1/12)S2(HT) = −1. The agent should exercise the put to get 1. This will pay off his debt.

4.3.

Proof. We need Figure 1.2.2 for this problem, and calculate the intrinsic value process and price process of the put as follows.

For the intrinsic value process, G0 = 0, G1(T) = 1, G2(TH) = 2/3, G2(TT) = 5/3, G3(THT) = 1, G3(TTH) = 1.75, G3(TTT) = 2.125. All the other outcomes of G are negative.


For the price process, V0 = 0.4, V1(T) = 1, V2(TH) = 2/3, V2(TT) = 5/3, V3(THT) = 1, V3(TTH) = 1.75, V3(TTT) = 2.125. All the other outcomes of V are zero. Therefore the time-zero price of the derivative security is 0.4 and the optimal exercise time satisfies

τ(ω) = ∞ if ω1 = H, and τ(ω) = 1 if ω1 = T.

4.4.

Proof. 1.36 is the cost of super-replicating the American derivative security. It enables us to construct a portfolio sufficient to pay off the derivative security, no matter when the derivative security is exercised. So to hedge our short position after selling the put, there is no need to charge the insider more than 1.36.

4.5.

Proof. The stopping times in S0 are (1) τ ≡ 0; (2) τ ≡ 1; (3) τ(HT) = τ(HH) = 1, τ(TH), τ(TT) ∈ {2, ∞} (4 different ones); (4) τ(HT), τ(HH) ∈ {2, ∞}, τ(TH) = τ(TT) = 1 (4 different ones); (5) τ(HT), τ(HH), τ(TH), τ(TT) ∈ {2, ∞} (16 different ones). When the option is out of the money, the following stopping times do not exercise: (i) τ ≡ 0; (ii) τ(HT) ∈ {2, ∞}, τ(HH) = ∞, τ(TH), τ(TT) ∈ {2, ∞} (8 different ones); (iii) τ(HT) ∈ {2, ∞}, τ(HH) = ∞, τ(TH) = τ(TT) = 1 (2 different ones).

For (i), Ẽ[1_{τ≤2}(4/5)^τ Gτ] = G0 = 1. For (ii), Ẽ[1_{τ≤2}(4/5)^τ Gτ] ≤ Ẽ[1_{τ*≤2}(4/5)^{τ*} Gτ*], where τ*(HT) = 2, τ*(HH) = ∞, τ*(TH) = τ*(TT) = 2. So Ẽ[1_{τ*≤2}(4/5)^{τ*} Gτ*] = (1/4)[(4/5)^2 · 1 + (4/5)^2(1 + 4)] = 0.96. For (iii), Ẽ[1_{τ≤2}(4/5)^τ Gτ] has the biggest value when τ satisfies τ(HT) = 2, τ(HH) = ∞, τ(TH) = τ(TT) = 1. This value is 1.36.

4.6. (i)

Proof. The value of the put at time N, if it has not been exercised at previous times, is K − SN. Hence VN−1 = max{K − SN−1, ẼN−1[VN/(1+r)]} = max{K − SN−1, K/(1+r) − SN−1} = K − SN−1. The second equality comes from the fact that the discounted stock price process is a martingale under the risk-neutral probability. By induction, we can show Vn = K − Sn (0 ≤ n ≤ N). So by Theorem 4.4.5, the optimal exercise policy is to sell the stock at time zero and the value of this derivative security is K − S0.

Remark: We cheated a little bit by using the American algorithm and Theorem 4.4.5, since they are developed for the case where τ is allowed to be ∞. But intuitively, results in this chapter should still hold for the case τ ≤ N, provided we replace “max{Gn, 0}” with “Gn”.

(ii)

Proof. This is because at time N, if we have to exercise the put and K − SN < 0, we can exercise the European call to offset the negative payoff. In effect, throughout the portfolio's lifetime, the portfolio has intrinsic value greater than that of an American put struck at K with expiration time N. So we must have V0^{AP} ≤ V0 + V0^{EC} ≤ K − S0 + V0^{EC}.

(iii)

Proof. Let V0^{EP} denote the time-zero value of a European put with strike K and expiration time N. Then

V0^{AP} ≥ V0^{EP} = V0^{EC} − Ẽ[(SN − K)/(1+r)^N] = V0^{EC} − S0 + K/(1+r)^N.

4.7.

Proof. VN = SN − K, VN−1 = max{SN−1 − K, ẼN−1[VN/(1+r)]} = max{SN−1 − K, SN−1 − K/(1+r)} = SN−1 − K/(1+r). By induction, we can prove Vn = Sn − K/(1+r)^{N−n} (0 ≤ n ≤ N) and Vn > Gn for 0 ≤ n ≤ N − 1. So the time-zero value is S0 − K/(1+r)^N and the optimal exercise time is N.
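The conclusion that early exercise is never optimal for Gn = Sn − K can be checked node by node. A sketch with illustrative parameters:

```python
# Exercise 4.7: with G_n = S_n - K (no positive part), waiting always beats
# exercising, so V_n = S_n - K/(1+r)^(N-n). Parameters are illustrative.
S0, u, d, r, K, N = 4.0, 2.0, 0.5, 0.25, 4.0, 3
p = q = 0.5                       # risk-neutral probabilities

def value(n, s):
    if n == N:
        return s - K
    cont = (p * value(n + 1, u * s) + q * value(n + 1, d * s)) / (1 + r)
    assert cont > s - K           # continuation strictly dominates exercise
    return max(s - K, cont)

V0 = value(0, S0)
assert abs(V0 - (S0 - K / (1 + r) ** N)) < 1e-12
print(V0)  # S0 - K/(1+r)^3 = 1.952
```

The in-code assertion is exactly Vn > Gn for n < N: continuation exceeds Sn − K by K(1 − (1+r)^{−(N−n)}) > 0.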

5. Random Walk

5.1. (i)

Proof. E[α^{τ2}] = E[α^{(τ2−τ1)+τ1}] = E[α^{τ2−τ1}]E[α^{τ1}] = (E[α^{τ1}])^2.

(ii)

Proof. If we define M_n^{(m)} = M_{n+τm} − M_{τm} (m = 1, 2, · · ·), then the (M^{(m)})_m, as random functions, are i.i.d. with distributions the same as that of M. So τ_{m+1} − τm = inf{n : M_n^{(m)} = 1} are i.i.d. with distributions the same as that of τ1. Therefore E[α^{τm}] = E[α^{(τm−τm−1)+(τm−1−τm−2)+···+τ1}] = (E[α^{τ1}])^m.

(iii)

Proof. Yes, since the argument of (ii) still works for the asymmetric random walk.

5.2. (i)

Proof. f′(σ) = pe^σ − qe^{−σ}, so f′(σ) > 0 if and only if σ > (1/2)(ln q − ln p). Since (1/2)(ln q − ln p) < 0, f(σ) > f(0) = 1 for all σ > 0.

(ii)

Proof. En[Sn+1/Sn] = En[e^{σXn+1} · (1/f(σ))] = pe^σ(1/f(σ)) + qe^{−σ}(1/f(σ)) = 1.

(iii)

Proof. By the optional stopping theorem, E[S_{n∧τ1}] = E[S0] = 1. Note S_{n∧τ1} = e^{σM_{n∧τ1}}(1/f(σ))^{n∧τ1} ≤ e^{σ·1}; so by the bounded convergence theorem, E[1_{τ1<∞} e^{σM_{τ1}}(1/f(σ))^{τ1}] = 1.

5.3. (i)

Proof. ... f(σ) > 1 for all σ > σ0.

(ii)

Proof. As in Exercise 5.2, Sn = e^{σMn}(1/f(σ))^n is a martingale, and 1 = E[S0] = E[S_{n∧τ1}] = E[e^{σM_{n∧τ1}}(1/f(σ))^{τ1∧n}]. Suppose σ > σ0; then by the bounded convergence theorem,

1 = E[ lim eσMn∧τ1 ( n→∞ 1 n∧τ1 1 τ1 ) ] = E[1{τ1 K} ] = P (ST > K). Moreover, by Girsanov’s Theorem, Wt = Wt + in Theorem 5.4.1.) (iii) Proof. ST = xeσWT +(r− 2 σ
1 2 1 2

t (−σ)du 0

= Wt − σt is a P -Brownian motion (set Θ = −σ

)T

= xeσWT +(r+ 2 σ
1 2

1

2

)T

. So WT √ > −d+ (T, x) T = N (d+ (T, x)).

P (ST > K) = P (xeσWT +(r+ 2 σ

)T

> K) = P


5.4. First, a few typos. In the SDE for S, "σ(t)dW(t)" → "σ(t)S(t)dW(t)". In the first equation for c(0, S(0)), E → Ẽ. In the second equation for c(0, S(0)), the variables for BSM should be

BSM(T, S(0); K, (1/T)∫_0^T r(t)dt, √((1/T)∫_0^T σ²(t)dt)).

(i)

Proof. d ln S_t = dS_t/S_t − (1/(2S_t²))dS_t dS_t = r_t dt + σ_t dW̃_t − (1/2)σ_t² dt. So S_T = S_0 exp{∫_0^T (r_t − (1/2)σ_t²)dt + ∫_0^T σ_t dW̃_t}. Let X = ∫_0^T (r_t − (1/2)σ_t²)dt + ∫_0^T σ_t dW̃_t. The first term in the expression of X is a number, and the second term is a Gaussian random variable N(0, ∫_0^T σ_t² dt), since both r and σ are deterministic. Therefore S_T = S_0 e^X, with X ∼ N(∫_0^T (r_t − (1/2)σ_t²)dt, ∫_0^T σ_t² dt).

(ii)

Proof. For the standard BSM model with constant volatility Σ and interest rate R, under the risk-neutral measure we have S_T = S_0 e^Y, where Y = (R − (1/2)Σ²)T + ΣW̃_T ∼ N((R − (1/2)Σ²)T, Σ²T), and Ẽ[(S_0 e^Y − K)^+] = e^{RT} BSM(T, S_0; K, R, Σ). Note R = (1/T)(E[Y] + (1/2)Var(Y)) and Σ = √(Var(Y)/T), so we can get

Ẽ[(S_0 e^Y − K)^+] = e^{E[Y] + (1/2)Var(Y)} BSM(T, S_0; K, (1/T)(E[Y] + (1/2)Var(Y)), √(Var(Y)/T)).

So for the model in this problem,

c(0, S_0) = e^{−∫_0^T r_t dt} Ẽ[(S_0 e^X − K)^+]
= e^{−∫_0^T r_t dt} e^{E[X] + (1/2)Var(X)} BSM(T, S_0; K, (1/T)(E[X] + (1/2)Var(X)), √(Var(X)/T))
= BSM(T, S_0; K, (1/T)∫_0^T r_t dt, √((1/T)∫_0^T σ_t² dt)).

5.5. (i)

Proof. Let f(x) = 1/x; then f'(x) = −1/x² and f''(x) = 2/x³. Note dZ_t = −Z_t Θ_t dW_t, so

d(1/Z_t) = f'(Z_t)dZ_t + (1/2)f''(Z_t)dZ_t dZ_t = −(1/Z_t²)(−Z_t)Θ_t dW_t + (1/Z_t³)Z_t²Θ_t² dt = (Θ_t/Z_t)dW_t + (Θ_t²/Z_t)dt.

(ii)

Proof. By Lemma 5.2.2, for s, t ≥ 0 with s < t, M̃_s = Ẽ[M̃_t|F_s] = E[Z_t M̃_t/Z_s | F_s]. That is, E[Z_t M̃_t|F_s] = Z_s M̃_s. So M = Z M̃ is a P-martingale.

(iii)


Proof.

dM̃_t = d(M_t · (1/Z_t)) = (1/Z_t)dM_t + M_t d(1/Z_t) + dM_t d(1/Z_t) = (Γ_t/Z_t)dW_t + (M_tΘ_t/Z_t)dW_t + (M_tΘ_t²/Z_t)dt + (Γ_tΘ_t/Z_t)dt.

(iv)

Proof. In part (iii), we have

dM̃_t = (Γ_t/Z_t)(dW_t + Θ_t dt) + (M_tΘ_t/Z_t)(dW_t + Θ_t dt).

Let Γ̃_t = (Γ_t + M_tΘ_t)/Z_t; then dM̃_t = Γ̃_t dW̃_t. This proves Corollary 5.3.2.

5.6.

Proof. By Theorem 4.6.5, it suffices to show W̃_i(t) is an F_t-martingale under P̃ and [W̃_i, W̃_j](t) = tδ_ij (i, j = 1, 2). Indeed, for i = 1, 2, W̃_i(t) is an F_t-martingale under P̃ if and only if W̃_i(t)Z_t is an F_t-martingale under P, since

Ẽ[W̃_i(t)|F_s] = E[W̃_i(t)Z_t/Z_s | F_s].

By Itô's product formula, we have

d(W̃_i(t)Z_t) = W̃_i(t)dZ_t + Z_t dW̃_i(t) + dZ_t dW̃_i(t)
= W̃_i(t)(−Z_t)Θ(t)·dW(t) + Z_t(dW_i(t) + Θ_i(t)dt) + (−Z_tΘ(t)·dW(t))(dW_i(t) + Θ_i(t)dt)
= W̃_i(t)(−Z_t)Σ_{j=1}^d Θ_j(t)dW_j(t) + Z_t(dW_i(t) + Θ_i(t)dt) − Z_tΘ_i(t)dt
= W̃_i(t)(−Z_t)Σ_{j=1}^d Θ_j(t)dW_j(t) + Z_t dW_i(t).

This shows W̃_i(t)Z_t is an F_t-martingale under P. So W̃_i(t) is an F_t-martingale under P̃. Moreover,

[W̃_i, W̃_j](t) = [W_i + ∫_0^· Θ_i(s)ds, W_j + ∫_0^· Θ_j(s)ds](t) = [W_i, W_j](t) = tδ_ij.

Combined, this proves the two-dimensional Girsanov Theorem.

5.7. (i)

Proof. Let a be any strictly positive number. We define X_2(t) = (a + X_1(t))D(t)^{−1}. Then

P(X_2(T) ≥ X_2(0)/D(T)) = P(a + X_1(T) ≥ a) = P(X_1(T) ≥ 0) = 1,

and P(X_2(T) > X_2(0)/D(T)) = P(X_1(T) > 0) > 0. Since a is arbitrary, we have proved the claim of this problem.

Remark: The intuition is that we invest the positive starting fund a into the money market account, and construct portfolio X_1 from zero cost. Their sum should be able to beat the return of the money market account.

(ii)

Proof. We define X_1(t) = X_2(t)D(t) − X_2(0). Then X_1(0) = 0,

P(X_1(T) ≥ 0) = P(X_2(T) ≥ X_2(0)/D(T)) = 1, P(X_1(T) > 0) = P(X_2(T) > X_2(0)/D(T)) > 0.

5.8. The basic idea is that for any positive P̃-martingale M, dM_t = M_t · (1/M_t)dM_t. By the Martingale Representation Theorem, dM_t = Γ_t dW̃_t for some adapted process Γ_t. So dM_t = M_t(Γ_t/M_t)dW̃_t, i.e. any positive martingale must be the exponential of an integral w.r.t. Brownian motion. Taking into account the discounting factor and applying Itô's product rule, we can show every strictly positive asset is a generalized geometric Brownian motion.

(i)

Proof. V_t D_t = Ẽ[e^{−∫_0^T R_u du}V_T|F_t] = Ẽ[D_T V_T|F_t]. So (D_t V_t)_{t≥0} is a P̃-martingale. By the Martingale Representation Theorem, there exists an adapted process Γ̃_t, 0 ≤ t ≤ T, such that D_t V_t = V_0 + ∫_0^t Γ̃_s dW̃_s, or equivalently, V_t = D_t^{−1}(V_0 + ∫_0^t Γ̃_s dW̃_s). Differentiating both sides of the equation, we get dV_t = R_t D_t^{−1}(V_0 + ∫_0^t Γ̃_s dW̃_s)dt + D_t^{−1}Γ̃_t dW̃_t, i.e.

dV_t = R_t V_t dt + (Γ̃_t/D_t)dW̃_t.

(ii)

Proof. We prove the following more general lemma.

Lemma 1. Let X be an almost surely positive random variable (i.e. X > 0 a.s.) defined on the probability space (Ω, G, P). Let F be a sub-σ-algebra of G; then Y = E[X|F] > 0 a.s.

Proof. By the property of conditional expectation, Y ≥ 0 a.s. Let A = {Y = 0}; we shall show P(A) = 0. Indeed, note A ∈ F, so

0 = E[Y 1_A] = E[E[X|F]1_A] = E[X1_A] = E[X1_{A∩{X≥1}}] + Σ_{n=1}^∞ E[X1_{A∩{1/n > X ≥ 1/(n+1)}}] ≥ P(A∩{X ≥ 1}) + Σ_{n=1}^∞ (1/(n+1))P(A∩{1/n > X ≥ 1/(n+1)}).

So P(A∩{X ≥ 1}) = 0 and P(A∩{1/n > X ≥ 1/(n+1)}) = 0 for all n ≥ 1. This in turn implies P(A) = P(A∩{X > 0}) = P(A∩{X ≥ 1}) + Σ_{n=1}^∞ P(A∩{1/n > X ≥ 1/(n+1)}) = 0.

By the above lemma, it is clear that for each t ∈ [0, T], V_t = Ẽ[e^{−∫_t^T R_u du}V_T|F_t] > 0 a.s. Moreover, by a classical result of martingale theory (Revuz and Yor [4], Chapter II, Proposition (3.4)), we have the following stronger result: for a.s. ω, V_t(ω) > 0 for all t ∈ [0, T].

(iii)

Proof. By (ii), V > 0 a.s., so

dV_t = V_t (1/V_t)dV_t = V_t (1/V_t)(R_t V_t dt + (Γ̃_t/D_t)dW̃_t) = R_t V_t dt + σ_t V_t dW̃_t, where σ_t = Γ̃_t/(V_t D_t).

This shows V follows a generalized geometric Brownian motion.

5.9.

Proof. c(0, T, x, K) = xN(d_+) − Ke^{−rT}N(d_−) with d_± = (1/(σ√T))(ln(x/K) + (r ± (1/2)σ²)T). Let f(y) = (1/√(2π))e^{−y²/2}; then f'(y) = −yf(y) and ∂d_±/∂K = −1/(σ√T K), so

c_K(0, T, x, K) = xf(d_+)(∂d_+/∂K) − e^{−rT}N(d_−) − Ke^{−rT}f(d_−)(∂d_−/∂K)
= −xf(d_+)/(σ√T K) − e^{−rT}N(d_−) + e^{−rT}f(d_−)/(σ√T),

and

c_KK(0, T, x, K) = −(x/(σ√T))[f'(d_+)(∂d_+/∂K)(1/K) − f(d_+)/K²] − e^{−rT}f(d_−)(∂d_−/∂K) + (e^{−rT}/(σ√T))f'(d_−)(∂d_−/∂K)
= (x/(σ√T K²))f(d_+)[1 − d_+/(σ√T)] + (e^{−rT}/(σ√T K))f(d_−)[1 + d_−/(σ√T)]
= (e^{−rT}/(Kσ²T))f(d_−)d_+ − (x/(K²σ²T))f(d_+)d_−,

where the last equality uses the identity xf(d_+) = Ke^{−rT}f(d_−) and d_+ − d_− = σ√T.
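A finite-difference check of these closed forms (illustrative parameters, not from the text):

```python
import math

def Phi(z):   # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def f(z):     # standard normal density, f(y) in the text
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def call(T, x, K, r, s):
    dp = (math.log(x / K) + (r + 0.5 * s**2) * T) / (s * math.sqrt(T))
    dm = dp - s * math.sqrt(T)
    return x * Phi(dp) - K * math.exp(-r * T) * Phi(dm)

T, x, K, r, s = 1.0, 100.0, 90.0, 0.05, 0.25   # illustrative values
dp = (math.log(x / K) + (r + 0.5 * s**2) * T) / (s * math.sqrt(T))
dm = dp - s * math.sqrt(T)

# Closed forms derived above (c_KK in the final form of the solution).
cK  = -x * f(dp) / (s * math.sqrt(T) * K) - math.exp(-r*T) * Phi(dm) \
      + math.exp(-r*T) * f(dm) / (s * math.sqrt(T))
cKK = math.exp(-r*T) * f(dm) * dp / (K * s**2 * T) \
      - x * f(dp) * dm / (K**2 * s**2 * T)

h = 1e-3   # central finite differences in K
fd1 = (call(T, x, K + h, r, s) - call(T, x, K - h, r, s)) / (2 * h)
fd2 = (call(T, x, K + h, r, s) - 2 * call(T, x, K, r, s)
       + call(T, x, K - h, r, s)) / h**2
```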

5.10. (i)

Proof. At time t_0, the value of the chooser option is

V(t_0) = max{C(t_0), P(t_0)} = max{C(t_0), C(t_0) − F(t_0)} = C(t_0) + max{0, −F(t_0)} = C(t_0) + (e^{−r(T−t_0)}K − S(t_0))^+.

(ii)

Proof. By the risk-neutral pricing formula,

V(0) = Ẽ[e^{−rt_0}V(t_0)] = Ẽ[e^{−rt_0}C(t_0) + (e^{−rT}K − e^{−rt_0}S(t_0))^+] = C(0) + Ẽ[e^{−rt_0}(e^{−r(T−t_0)}K − S(t_0))^+].

The first term is the value of a call expiring at time T with strike price K, and the second term is the value of a put expiring at time t_0 with strike price e^{−r(T−t_0)}K.

5.11.

Proof. We first make an analysis which leads to the hint, then we give a formal proof.

(Analysis) If we want to construct a portfolio X that exactly replicates the cash flow, we must find a solution to the backward SDE

dX_t = Δ_t dS_t + R_t(X_t − Δ_t S_t)dt − C_t dt, X_T = 0.

Multiply D_t on both sides of the first equation and apply Itô's product rule; we get d(D_t X_t) = Δ_t d(D_t S_t) − C_t D_t dt. Integrating from 0 to T, we have D_T X_T − D_0 X_0 = ∫_0^T Δ_t d(D_t S_t) − ∫_0^T C_t D_t dt. By the terminal condition, we get X_0 = D_0^{−1}(∫_0^T C_t D_t dt − ∫_0^T Δ_t d(D_t S_t)). X_0 is the theoretical, no-arbitrage price of the cash flow, provided we can find a trading strategy Δ that solves the BSDE. Note the SDE for S gives d(D_t S_t) = (D_t S_t)σ_t(θ_t dt + dW_t), where θ_t = (α_t − R_t)/σ_t. Take the proper change of measure so that W̃_t = ∫_0^t θ_s ds + W_t is a Brownian motion under the new measure P̃; we get

∫_0^T C_t D_t dt = D_0 X_0 + ∫_0^T Δ_t d(D_t S_t) = D_0 X_0 + ∫_0^T Δ_t (D_t S_t)σ_t dW̃_t.

This says the random variable ∫_0^T C_t D_t dt has a stochastic integral representation D_0 X_0 + ∫_0^T Δ_t D_t S_t σ_t dW̃_t. This inspires us to consider the martingale generated by ∫_0^T C_t D_t dt, so that we can apply the Martingale Representation Theorem and get a formula for Δ by comparison of the integrands.

(Formal proof) Let M_T = ∫_0^T C_t D_t dt, and M_t = Ẽ[M_T|F_t]. Then by the Martingale Representation Theorem, we can find an adapted process Γ̃_t, so that M_t = M_0 + ∫_0^t Γ̃_u dW̃_u. If we set Δ_t = Γ̃_t/(D_t S_t σ_t), we can check that

X_t = D_t^{−1}(D_0 X_0 + ∫_0^t Δ_u d(D_u S_u) − ∫_0^t C_u D_u du), with X_0 = M_0 = Ẽ[∫_0^T C_t D_t dt],

solves the SDE

dX_t = Δ_t dS_t + R_t(X_t − Δ_t S_t)dt − C_t dt, X_T = 0.

Indeed, it is easy to see that X satisfies the first equation. To check the terminal condition, we note

X_T D_T = D_0 X_0 + ∫_0^T Δ_t D_t S_t σ_t dW̃_t − ∫_0^T C_t D_t dt = M_0 + ∫_0^T Γ̃_t dW̃_t − M_T = 0.

So X_T = 0. Thus, we have found a trading strategy Δ so that the corresponding portfolio X replicates the cash flow and has zero terminal value. So X_0 = Ẽ[∫_0^T C_t D_t dt] is the no-arbitrage price of the cash flow at time zero.

Remark: As shown in the analysis, d(D_t X_t) = Δ_t d(D_t S_t) − C_t D_t dt. Integrating from t to T, we get 0 − D_t X_t = ∫_t^T Δ_u d(D_u S_u) − ∫_t^T C_u D_u du. Taking conditional expectations w.r.t. F_t on both sides, we get −D_t X_t = −Ẽ[∫_t^T C_u D_u du|F_t]. So X_t = D_t^{−1}Ẽ[∫_t^T C_u D_u du|F_t]. This is the no-arbitrage price of the cash flow at time t, and we have justified formula (5.6.10) in the textbook.

5.12. (i)

Proof. dB̃_i(t) = dB_i(t) + γ_i(t)dt = Σ_{j=1}^d (σ_ij(t)/σ_i(t))dW_j(t) + Σ_{j=1}^d (σ_ij(t)/σ_i(t))Θ_j(t)dt = Σ_{j=1}^d (σ_ij(t)/σ_i(t))dW̃_j(t). So B̃_i is a martingale under P̃. Since dB̃_i(t)dB̃_i(t) = Σ_{j=1}^d (σ_ij(t)²/σ_i(t)²)dt = dt, by Lévy's Theorem, B̃_i is a Brownian motion under P̃.

(ii)

Proof.

dS_i(t) = R(t)S_i(t)dt + σ_i(t)S_i(t)dB_i(t) + (α_i(t) − R(t))S_i(t)dt
= R(t)S_i(t)dt + σ_i(t)S_i(t)dB̃_i(t) + (α_i(t) − R(t))S_i(t)dt − σ_i(t)S_i(t)γ_i(t)dt
= R(t)S_i(t)dt + σ_i(t)S_i(t)dB̃_i(t) + Σ_{j=1}^d σ_ij(t)Θ_j(t)S_i(t)dt − S_i(t)Σ_{j=1}^d σ_ij(t)Θ_j(t)dt
= R(t)S_i(t)dt + σ_i(t)S_i(t)dB̃_i(t).

(iii)

Proof. dB̃_i(t)dB̃_k(t) = (dB_i(t) + γ_i(t)dt)(dB_k(t) + γ_k(t)dt) = dB_i(t)dB_k(t) = ρ_ik(t)dt.

(iv)

Proof. By Itô's product rule and the martingale property,

E[B_i(t)B_k(t)] = E[∫_0^t B_i(s)dB_k(s)] + E[∫_0^t B_k(s)dB_i(s)] + E[∫_0^t dB_i(s)dB_k(s)] = E[∫_0^t ρ_ik(s)ds] = ∫_0^t ρ_ik(s)ds.

Similarly, by part (iii), we can show Ẽ[B̃_i(t)B̃_k(t)] = ∫_0^t ρ_ik(s)ds.

(v)

Proof. By Itô's product formula,

E[B_1(t)B_2(t)] = E[∫_0^t sign(W_1(u))du] = ∫_0^t [P(W_1(u) ≥ 0) − P(W_1(u) < 0)]du = 0.

Meanwhile,

Ẽ[B̃_1(t)B̃_2(t)] = Ẽ[∫_0^t sign(W_1(u))du]
= ∫_0^t [P̃(W_1(u) ≥ 0) − P̃(W_1(u) < 0)]du
= ∫_0^t [P̃(W̃_1(u) ≥ u) − P̃(W̃_1(u) < u)]du
= ∫_0^t 2[(1/2) − P̃(W̃_1(u) < u)]du
< 0,

for any t > 0. So Ẽ[B̃_1(t)B̃_2(t)] ≠ E[B_1(t)B_2(t)] for all t > 0.

5.13. (i)

Proof. Ẽ[W_1(t)] = Ẽ[W̃_1(t)] = 0 and Ẽ[W_2(t)] = Ẽ[W̃_2(t) − ∫_0^t W̃_1(u)du] = 0, for all t ∈ [0, T].

(ii)

Proof.

Cov[W_1(T), W_2(T)] = Ẽ[W_1(T)W_2(T)]
= Ẽ[∫_0^T W_1(t)dW_2(t) + ∫_0^T W_2(t)dW_1(t)]
= Ẽ[∫_0^T W_1(t)(dW̃_2(t) − W_1(t)dt)] + Ẽ[∫_0^T W_2(t)dW̃_1(t)]
= −Ẽ[∫_0^T W_1(t)² dt]
= −∫_0^T t dt
= −(1/2)T².
5.14. Equation (5.9.6) can be transformed into

d(e^{−rt}X_t) = Δ_t[d(e^{−rt}S_t) − ae^{−rt}dt] = Δ_t e^{−rt}[dS_t − rS_t dt − a dt].

So, to make the discounted portfolio value e^{−rt}X_t a martingale, we are motivated to change the measure in such a way that S_t − r∫_0^t S_u du − at is a martingale under the new measure. To do this, we note the SDE for S is dS_t = α_t S_t dt + σS_t dW_t. Hence

dS_t − rS_t dt − a dt = [(α_t − r)S_t − a]dt + σS_t dW_t = σS_t[((α_t − r)S_t − a)/(σS_t) dt + dW_t].

Set θ_t = ((α_t − r)S_t − a)/(σS_t) and W̃_t = ∫_0^t θ_s ds + W_t; we can find an equivalent probability measure P̃, under which S satisfies the SDE dS_t = rS_t dt + σS_t dW̃_t + a dt and W̃_t is a BM. This is the rationale for formula (5.9.7).

This is a good place to pause and think about the meaning of "martingale measure." What is to be a martingale? The new measure P̃ should be such that the discounted value process of the replicating portfolio is a martingale, not the discounted price process of the underlying. First, we want D_t X_t to be a martingale under P̃ because we suppose that X is able to replicate the derivative payoff at terminal time, X_T = V_T. In order to avoid arbitrage, we must have X_t = V_t for any t ∈ [0, T]. The difficulty is how to calculate X_t, and the magic is brought by the martingale measure in the following line of reasoning: V_t = X_t = D_t^{−1}Ẽ[D_T X_T|F_t] = D_t^{−1}Ẽ[D_T V_T|F_t]. You can think of the martingale measure as a calculational convenience. That is all about the martingale measure! "Risk neutral" is just a perception, referring to the actual effect of constructing a hedging portfolio!

Second, we note that when the portfolio is self-financing, the discounted price process of the underlying is a martingale under P̃, as in the classical Black-Scholes-Merton model without dividends or cost of carry. This is not a coincidence. Indeed, we have in this case the relation d(D_t X_t) = Δ_t d(D_t S_t). So D_t X_t being a martingale under P̃ is more or less equivalent to D_t S_t being a martingale under P̃. However, when the underlying pays dividends, or there is cost of carry, d(D_t X_t) = Δ_t d(D_t S_t) no longer holds, as shown in formula (5.9.6). The portfolio is no longer self-financing, but self-financing with consumption. What we still want to retain is the martingale property of D_t X_t, not that of D_t S_t. This is how we choose the martingale measure in the above paragraph.

Let V_T be a payoff at time T; then for the martingale M_t = Ẽ[e^{−rT}V_T|F_t], by the Martingale Representation Theorem, we can find an adapted process Γ̃_t, so that M_t = M_0 + ∫_0^t Γ̃_s dW̃_s. If we let Δ_t = Γ̃_t e^{rt}/(σS_t), then the value of the corresponding portfolio X satisfies d(e^{−rt}X_t) = Γ̃_t dW̃_t. So by setting X_0 = M_0 = Ẽ[e^{−rT}V_T], we must have e^{−rt}X_t = M_t for all t ∈ [0, T]. In particular, X_T = V_T. Thus the portfolio perfectly hedges V_T. This justifies the risk-neutral pricing of European-type contingent claims in the model where cost of carry exists. Also note the risk-neutral measure is different from the one in the case of no cost of carry.

Another perspective on perfect replication is the following. We need to solve the backward SDE

dX_t = Δ_t dS_t − aΔ_t dt + r(X_t − Δ_t S_t)dt, X_T = V_T

for two unknowns, X and Δ. To do so, we find a probability measure P̃, under which e^{−rt}X_t is a martingale; then e^{−rt}X_t = Ẽ[e^{−rT}V_T|F_t] := M_t. The Martingale Representation Theorem gives M_t = M_0 + ∫_0^t Γ̃_u dW̃_u for some adapted process Γ̃. This would give us a theoretical representation of Δ by comparison of integrands, hence a perfect replication of V_T.

(i)

Proof. As indicated in the above analysis, if we have (5.9.7) under P̃, then d(e^{−rt}X_t) = Δ_t[d(e^{−rt}S_t) − ae^{−rt}dt] = Δ_t e^{−rt}σS_t dW̃_t. So (e^{−rt}X_t)_{t≥0}, where X is given by (5.9.6), is a P̃-martingale.

(ii)

Proof. By Itô's formula, dY_t = Y_t[σdW̃_t + (r − (1/2)σ²)dt] + (1/2)Y_tσ²dt = Y_t(σdW̃_t + r dt). So d(e^{−rt}Y_t) = σe^{−rt}Y_t dW̃_t and e^{−rt}Y_t is a P̃-martingale. Moreover, if S_t = S_0 Y_t + Y_t∫_0^t (a/Y_s)ds, then

dS_t = S_0 dY_t + (∫_0^t (a/Y_s)ds)dY_t + a dt = (S_0 + ∫_0^t (a/Y_s)ds)Y_t(σdW̃_t + r dt) + a dt = S_t(σdW̃_t + r dt) + a dt.

This shows S satisfies (5.9.7).

Remark: To obtain this formula for S, we first set U_t = e^{−rt}S_t to remove the rS_t dt term. The SDE for U is dU_t = σU_t dW̃_t + ae^{−rt}dt. Just like solving a linear ODE, to remove U in the dW̃_t term, we consider V_t = U_t e^{−σW̃_t}. Itô's product formula yields

dV_t = e^{−σW̃_t}dU_t + U_t e^{−σW̃_t}(−σdW̃_t + (1/2)σ²dt) + dU_t · e^{−σW̃_t}(−σdW̃_t + (1/2)σ²dt) = e^{−σW̃_t}ae^{−rt}dt − (1/2)σ²V_t dt.

Note V appears only in the dt term, so multiplying by the integrating factor e^{σ²t/2} on both sides of the equation, we get

d(e^{σ²t/2}V_t) = ae^{−rt−σW̃_t+σ²t/2}dt.

Set Y_t = e^{σW̃_t+(r−σ²/2)t}; we have d(S_t/Y_t) = a dt/Y_t, so S_t = Y_t(S_0 + ∫_0^t a ds/Y_s).

(iii)

Proof.

Ẽ[S_T|F_t] = S_0Ẽ[Y_T|F_t] + Ẽ[Y_T∫_0^t (a/Y_s)ds + ∫_t^T (aY_T/Y_s)ds | F_t]
= S_0Ẽ[Y_T|F_t] + ∫_0^t (a/Y_s)ds · Ẽ[Y_T|F_t] + a∫_t^T Ẽ[Y_T/Y_s]ds
= S_0Y_tẼ[Y_{T−t}] + ∫_0^t (a/Y_s)ds · Y_tẼ[Y_{T−t}] + a∫_t^T Ẽ[Y_{T−s}]ds
= (S_0 + ∫_0^t (a/Y_s)ds)Y_t e^{r(T−t)} + a∫_t^T e^{r(T−s)}ds
= (S_0 + ∫_0^t (a/Y_s)ds)Y_t e^{r(T−t)} − (a/r)(1 − e^{r(T−t)}).

In particular, Ẽ[S_T] = S_0e^{rT} − (a/r)(1 − e^{rT}).

(iv)

Proof.

dẼ[S_T|F_t] = ae^{r(T−t)}dt + (S_0 + ∫_0^t a ds/Y_s)(e^{r(T−t)}dY_t − rY_te^{r(T−t)}dt) − ae^{r(T−t)}dt = (S_0 + ∫_0^t a ds/Y_s)e^{r(T−t)}σY_t dW̃_t.

So Ẽ[S_T|F_t] is a P̃-martingale. As we have argued at the beginning of the solution, risk-neutral pricing is valid even in the presence of cost of carry. So by an argument similar to that of §5.6.2, the process Ẽ[S_T|F_t] is the futures price process for the commodity.

(v)

Proof. We solve the equation Ẽ[e^{−r(T−t)}(S_T − K)|F_t] = 0 for K, and get K = Ẽ[S_T|F_t]. So For_S(t, T) = Fut_S(t, T).

(vi)

Proof. We follow the hint. First, we solve the SDE

dX_t = dS_t − a dt + r(X_t − S_t)dt, X_0 = 0.

By our analysis in part (i), d(e^{−rt}X_t) = d(e^{−rt}S_t) − ae^{−rt}dt. Integrating from 0 to t on both sides, we get X_t = S_t − S_0e^{rt} + (a/r)(1 − e^{rt}) = S_t − S_0e^{rt} − (a/r)(e^{rt} − 1). In particular, X_T = S_T − S_0e^{rT} − (a/r)(e^{rT} − 1). Meanwhile,

For_S(t, T) = Fut_S(t, T) = Ẽ[S_T|F_t] = (S_0 + ∫_0^t a ds/Y_s)Y_te^{r(T−t)} − (a/r)(1 − e^{r(T−t)}).

So For_S(0, T) = S_0e^{rT} − (a/r)(1 − e^{rT}) and hence X_T = S_T − For_S(0, T). After the agent delivers the commodity, whose value is S_T, and receives the forward price For_S(0, T), the portfolio has exactly zero value.
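A Monte Carlo sketch (illustrative parameters) of the cost-of-carry forward price derived in (iii): under the risk-neutral measure dS = rS dt + σS dW̃ + a dt, so For_S(0, T) = Ẽ[S_T] = S_0e^{rT} − (a/r)(1 − e^{rT}).

```python
import math, random

# Euler-scheme Monte Carlo of dS = r S dt + sigma S dW~ + a dt under the
# risk-neutral measure; compare E~[S_T] with the closed form.
# All parameter values are illustrative assumptions.
random.seed(1)
S0, r, sigma, a, T = 100.0, 0.05, 0.2, 2.0, 1.0
n_paths, n_steps = 4000, 100
dt = T / n_steps
total = 0.0
for _ in range(n_paths):
    S = S0
    for _ in range(n_steps):
        S += r * S * dt + sigma * S * math.sqrt(dt) * random.gauss(0, 1) + a * dt
    total += S
mc_forward = total / n_paths
closed = S0 * math.exp(r * T) - (a / r) * (1 - math.exp(r * T))  # ~ 107.18
```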

6. Connections with Partial Differential Equations

6.1. (i)

Proof. Z_t = 1 is obvious. Note the form of Z is similar to that of a geometric Brownian motion. So by Itô's formula, it is easy to obtain dZ_u = b_u Z_u du + σ_u Z_u dW_u, u ≥ t.

(ii)

Proof. If X_u = Y_u Z_u (u ≥ t), then X_t = Y_t Z_t = x · 1 = x and

dX_u = Y_u dZ_u + Z_u dY_u + dY_u dZ_u
= Y_u(b_u Z_u du + σ_u Z_u dW_u) + Z_u[((a_u − σ_u γ_u)/Z_u)du + (γ_u/Z_u)dW_u] + σ_u Z_u (γ_u/Z_u)du
= [Y_u b_u Z_u + (a_u − σ_u γ_u) + σ_u γ_u]du + (σ_u Z_u Y_u + γ_u)dW_u
= (b_u X_u + a_u)du + (σ_u X_u + γ_u)dW_u.
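The linear SDE solved here, dX_u = (b_u X_u + a_u)du + (σ_u X_u + γ_u)dW_u, implies (for constant coefficients) the mean ODE m'(u) = b m(u) + a, so E[X_T] = (x + a/b)e^{bT} − a/b. A quick Euler-scheme check of this consequence (constant coefficients and all parameter values are illustrative assumptions):

```python
import math, random

# Euler scheme for dX = (b X + a) du + (sigma X + gamma) dW with constant
# coefficients; the sample mean of X_T should match the mean ODE solution
# E[X_T] = (x + a/b) e^{bT} - a/b. Illustrative parameter choices.
random.seed(2)
x, a, b, sigma, gamma, T = 1.0, 0.3, 0.5, 0.2, 0.1, 1.0
n_paths, n_steps = 5000, 100
dt = T / n_steps
total = 0.0
for _ in range(n_paths):
    X = x
    for _ in range(n_steps):
        dW = math.sqrt(dt) * random.gauss(0, 1)
        X += (b * X + a) * dt + (sigma * X + gamma) * dW
    total += X
mean_mc = total / n_paths
mean_exact = (x + a / b) * math.exp(b * T) - a / b   # ~ 2.038
```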

Remark: To see how to find the above solution, we manipulate equation (6.2.4) as follows. First, to remove the term b_u X_u du, we multiply both sides of (6.2.4) by the integrating factor e^{−∫_t^u b_v dv}. Then

d(X_u e^{−∫_t^u b_v dv}) = e^{−∫_t^u b_v dv}(a_u du + (γ_u + σ_u X_u)dW_u).

Let X̄_u = e^{−∫_t^u b_v dv}X_u, ā_u = e^{−∫_t^u b_v dv}a_u and γ̄_u = e^{−∫_t^u b_v dv}γ_u; then X̄ satisfies the SDE

dX̄_u = ā_u du + (γ̄_u + σ_u X̄_u)dW_u = (ā_u du + γ̄_u dW_u) + σ_u X̄_u dW_u.

To deal with the term σ_u X̄_u dW_u, we consider X̂_u = X̄_u e^{−∫_t^u σ_v dW_v}. Then

dX̂_u = e^{−∫_t^u σ_v dW_v}[(ā_u du + γ̄_u dW_u) + σ_u X̄_u dW_u] + X̄_u e^{−∫_t^u σ_v dW_v}(−σ_u dW_u + (1/2)σ_u² du) + (γ̄_u + σ_u X̄_u)(−σ_u)e^{−∫_t^u σ_v dW_v}du
= â_u du + γ̂_u dW_u + σ_u X̂_u dW_u − σ_u X̂_u dW_u + (1/2)X̂_u σ_u² du − σ_u(γ̂_u + σ_u X̂_u)du
= (â_u − σ_u γ̂_u − (1/2)X̂_u σ_u²)du + γ̂_u dW_u,

where â_u = ā_u e^{−∫_t^u σ_v dW_v} and γ̂_u = γ̄_u e^{−∫_t^u σ_v dW_v}. Finally, using the integrating factor e^{(1/2)∫_t^u σ_v² dv}, we have

d(X̂_u e^{(1/2)∫_t^u σ_v² dv}) = e^{(1/2)∫_t^u σ_v² dv}(dX̂_u + X̂_u · (1/2)σ_u² du) = e^{(1/2)∫_t^u σ_v² dv}[(â_u − σ_u γ̂_u)du + γ̂_u dW_u].

Writing everything back into the original X, a and γ, we get

d(X_u e^{−∫_t^u b_v dv − ∫_t^u σ_v dW_v + (1/2)∫_t^u σ_v² dv}) = e^{(1/2)∫_t^u σ_v² dv − ∫_t^u σ_v dW_v − ∫_t^u b_v dv}[(a_u − σ_u γ_u)du + γ_u dW_u],

i.e.

d(X_u/Z_u) = (1/Z_u)[(a_u − σ_u γ_u)du + γ_u dW_u] = dY_u.

This inspired us to try X_u = Y_u Z_u.

6.2. (i)


Proof. The portfolio is self-financing, so for any t ≤ T_1, we have

dX_t = Δ_1(t)df(t, R_t, T_1) + Δ_2(t)df(t, R_t, T_2) + R_t(X_t − Δ_1(t)f(t, R_t, T_1) − Δ_2(t)f(t, R_t, T_2))dt,

and

d(D_t X_t) = −R_t D_t X_t dt + D_t dX_t
= D_t[Δ_1(t)df(t, R_t, T_1) + Δ_2(t)df(t, R_t, T_2) − R_t(Δ_1(t)f(t, R_t, T_1) + Δ_2(t)f(t, R_t, T_2))dt]
= D_t[Δ_1(t)(f_t(t, R_t, T_1)dt + f_r(t, R_t, T_1)dR_t + (1/2)f_rr(t, R_t, T_1)γ²(t, R_t)dt)
+ Δ_2(t)(f_t(t, R_t, T_2)dt + f_r(t, R_t, T_2)dR_t + (1/2)f_rr(t, R_t, T_2)γ²(t, R_t)dt)
− R_t(Δ_1(t)f(t, R_t, T_1) + Δ_2(t)f(t, R_t, T_2))dt]
= Δ_1(t)D_t[−R_t f(t, R_t, T_1) + f_t(t, R_t, T_1) + α(t, R_t)f_r(t, R_t, T_1) + (1/2)γ²(t, R_t)f_rr(t, R_t, T_1)]dt
+ Δ_2(t)D_t[−R_t f(t, R_t, T_2) + f_t(t, R_t, T_2) + α(t, R_t)f_r(t, R_t, T_2) + (1/2)γ²(t, R_t)f_rr(t, R_t, T_2)]dt
+ D_tγ(t, R_t)[Δ_1(t)f_r(t, R_t, T_1) + Δ_2(t)f_r(t, R_t, T_2)]dW_t
= Δ_1(t)D_t[α(t, R_t) − β(t, R_t, T_1)]f_r(t, R_t, T_1)dt + Δ_2(t)D_t[α(t, R_t) − β(t, R_t, T_2)]f_r(t, R_t, T_2)dt
+ D_tγ(t, R_t)[Δ_1(t)f_r(t, R_t, T_1) + Δ_2(t)f_r(t, R_t, T_2)]dW_t.

(ii)

Proof. Let Δ_1(t) = S_t f_r(t, R_t, T_2) and Δ_2(t) = −S_t f_r(t, R_t, T_1); then

d(D_t X_t) = D_t S_t[β(t, R_t, T_2) − β(t, R_t, T_1)]f_r(t, R_t, T_1)f_r(t, R_t, T_2)dt = D_t|[β(t, R_t, T_1) − β(t, R_t, T_2)]f_r(t, R_t, T_1)f_r(t, R_t, T_2)|dt.

Integrating from 0 to T on both sides of the above equation, we get

D_T X_T − D_0 X_0 = ∫_0^T D_t|[β(t, R_t, T_1) − β(t, R_t, T_2)]f_r(t, R_t, T_1)f_r(t, R_t, T_2)|dt.

If β(t, R_t, T_1) ≠ β(t, R_t, T_2) for some t ∈ [0, T], then under the assumption that f_r(t, r, T) ≠ 0 for all values of r and 0 ≤ t ≤ T, D_T X_T − D_0 X_0 > 0. To avoid arbitrage (see, for example, Exercise 5.7), we must have for a.s. ω, β(t, R_t, T_1) = β(t, R_t, T_2), ∀t ∈ [0, T]. This implies β(t, r, T) does not depend on T.

(iii)

Proof. In (6.9.4), let Δ_1(t) = Δ(t), T_1 = T and Δ_2(t) = 0; we get

d(D_t X_t) = Δ(t)D_t[−R_t f(t, R_t, T) + f_t(t, R_t, T) + α(t, R_t)f_r(t, R_t, T) + (1/2)γ²(t, R_t)f_rr(t, R_t, T)]dt + D_tγ(t, R_t)Δ(t)f_r(t, R_t, T)dW_t.

This is formula (6.9.5).

If f_r(t, r, T) = 0, then d(D_t X_t) = Δ(t)D_t[−R_t f(t, R_t, T) + f_t(t, R_t, T) + (1/2)γ²(t, R_t)f_rr(t, R_t, T)]dt. We choose Δ(t) = sign(−R_t f(t, R_t, T) + f_t(t, R_t, T) + (1/2)γ²(t, R_t)f_rr(t, R_t, T)). To avoid arbitrage in this case, we must have f_t(t, R_t, T) + (1/2)γ²(t, R_t)f_rr(t, R_t, T) = R_t f(t, R_t, T), or equivalently, for any r in the range of R_t, f_t(t, r, T) + (1/2)γ²(t, r)f_rr(t, r, T) = rf(t, r, T).

6.3.

Proof. We note

(d/ds)[e^{−∫_0^s b_v dv}C(s, T)] = e^{−∫_0^s b_v dv}[C'(s, T) − b_s C(s, T)] = e^{−∫_0^s b_v dv}[b_s C(s, T) − 1 − b_s C(s, T)] = −e^{−∫_0^s b_v dv}.

So integrating on both sides of the equation from t to T, we obtain

e^{−∫_0^T b_v dv}C(T, T) − e^{−∫_0^t b_v dv}C(t, T) = −∫_t^T e^{−∫_0^s b_v dv}ds.

Since C(T, T) = 0, we have C(t, T) = e^{∫_0^t b_v dv}∫_t^T e^{−∫_0^s b_v dv}ds = ∫_t^T e^{−∫_t^s b_v dv}ds. Finally, by A'(s, T) = −a(s)C(s, T) + (1/2)σ²(s)C²(s, T), we get

A(T, T) − A(t, T) = −∫_t^T a(s)C(s, T)ds + (1/2)∫_t^T σ²(s)C²(s, T)ds.

Since A(T, T) = 0, we have A(t, T) = ∫_t^T (a(s)C(s, T) − (1/2)σ²(s)C²(s, T))ds.

6.4. (i)

Proof. By the definition of ϕ, we have ϕ(t) = e^{(1/2)σ²∫_t^T C(u,T)du} and

ϕ'(t) = e^{(1/2)σ²∫_t^T C(u,T)du} · (1/2)σ²(−1)C(t, T) = −(1/2)ϕ(t)σ²C(t, T).

So C(t, T) = −2ϕ'(t)/(σ²ϕ(t)). Differentiating both sides of the equation ϕ'(t) = −(1/2)ϕ(t)σ²C(t, T), we get

ϕ''(t) = −(1/2)σ²[ϕ'(t)C(t, T) + ϕ(t)C'(t, T)] = −(1/2)σ²[−(1/2)ϕ(t)σ²C²(t, T) + ϕ(t)C'(t, T)] = (1/4)σ⁴ϕ(t)C²(t, T) − (1/2)σ²ϕ(t)C'(t, T).

So C'(t, T) = [(1/4)σ⁴ϕ(t)C²(t, T) − ϕ''(t)] / ((1/2)ϕ(t)σ²) = (1/2)σ²C²(t, T) − 2ϕ''(t)/(σ²ϕ(t)).

(ii)

Proof. Plugging formulas (6.9.8) and (6.9.9) into (6.5.14), we get

(1/2)σ²C²(t, T) − 2ϕ''(t)/(σ²ϕ(t)) = b(−1)·(−2ϕ'(t)/(σ²ϕ(t)))·(−1) + (1/2)σ²C²(t, T) − 1 = −2bϕ'(t)/(σ²ϕ(t)) + (1/2)σ²C²(t, T) − 1,

i.e. ϕ''(t) − bϕ'(t) − (1/2)σ²ϕ(t) = 0.

(iii)

Proof. The characteristic equation of ϕ''(t) − bϕ'(t) − (1/2)σ²ϕ(t) = 0 is λ² − bλ − (1/2)σ² = 0, which gives two roots (1/2)(b ± √(b² + 2σ²)) = (1/2)b ± γ with γ = (1/2)√(b² + 2σ²). Therefore by the standard theory of ordinary differential equations, a general solution of ϕ is ϕ(t) = e^{(1/2)bt}(a_1e^{γt} + a_2e^{−γt}) for some constants a_1 and a_2. It is then easy to see that we can choose appropriate constants c_1 and c_2 so that

ϕ(t) = (c_1/((1/2)b + γ))e^{−((1/2)b+γ)(T−t)} − (c_2/((1/2)b − γ))e^{−((1/2)b−γ)(T−t)}.

(iv)

Proof. From part (iii), it is easy to see ϕ'(t) = c_1e^{−((1/2)b+γ)(T−t)} − c_2e^{−((1/2)b−γ)(T−t)}. In particular,

0 = C(T, T) = −2ϕ'(T)/(σ²ϕ(T)) = −2(c_1 − c_2)/(σ²ϕ(T)).

So c_1 = c_2.

(v)

Proof. We first recall the definitions and properties of sinh and cosh:

sinh z = (e^z − e^{−z})/2, cosh z = (e^z + e^{−z})/2, (sinh z)' = cosh z, and (cosh z)' = sinh z.

Therefore, with c_1 = c_2,

ϕ(t) = c_1e^{−(1/2)b(T−t)}[e^{−γ(T−t)}/((1/2)b + γ) − e^{γ(T−t)}/((1/2)b − γ)]
= (c_1e^{−(1/2)b(T−t)}/((1/4)b² − γ²))[((1/2)b − γ)e^{−γ(T−t)} − ((1/2)b + γ)e^{γ(T−t)}]
= (2c_1/σ²)e^{−(1/2)b(T−t)}[((1/2)b + γ)e^{γ(T−t)} − ((1/2)b − γ)e^{−γ(T−t)}]
= (2c_1/σ²)e^{−(1/2)b(T−t)}[b sinh(γ(T − t)) + 2γ cosh(γ(T − t))],

since (1/4)b² − γ² = −σ²/2, and

ϕ'(t) = c_1e^{−(1/2)b(T−t)}(e^{−γ(T−t)} − e^{γ(T−t)}) = −2c_1e^{−(1/2)b(T−t)} sinh(γ(T − t)).

This implies

C(t, T) = −2ϕ'(t)/(σ²ϕ(t)) = sinh(γ(T − t))/(γ cosh(γ(T − t)) + (1/2)b sinh(γ(T − t))).

(vi)

Proof. By (6.5.15) and (6.9.8), A'(t, T) = 2aϕ'(t)/(σ²ϕ(t)). Hence

A(T, T) − A(t, T) = ∫_t^T (2aϕ'(s)/(σ²ϕ(s)))ds = (2a/σ²)ln(ϕ(T)/ϕ(t)),

and

A(t, T) = −(2a/σ²)ln(ϕ(T)/ϕ(t)) = −(2a/σ²)ln[γe^{(1/2)b(T−t)} / (γ cosh(γ(T − t)) + (1/2)b sinh(γ(T − t)))].

6.5. (i)

Proof. Since g(t, X_1(t), X_2(t)) = E[h(X_1(T), X_2(T))|F_t] and e^{−rt}f(t, X_1(t), X_2(t)) = E[e^{−rT}h(X_1(T), X_2(T))|F_t], an iterated conditioning argument shows g(t, X_1(t), X_2(t)) and e^{−rt}f(t, X_1(t), X_2(t)) are both martingales.

(ii) and (iii)

Proof. We note

dg(t, X_1(t), X_2(t)) = g_t dt + g_{x_1}dX_1(t) + g_{x_2}dX_2(t) + (1/2)g_{x_1x_1}dX_1(t)dX_1(t) + (1/2)g_{x_2x_2}dX_2(t)dX_2(t) + g_{x_1x_2}dX_1(t)dX_2(t)
= [g_t + g_{x_1}β_1 + g_{x_2}β_2 + (1/2)g_{x_1x_1}(γ_{11}² + γ_{12}² + 2ργ_{11}γ_{12}) + g_{x_1x_2}(γ_{11}γ_{21} + ργ_{11}γ_{22} + ργ_{12}γ_{21} + γ_{12}γ_{22}) + (1/2)g_{x_2x_2}(γ_{21}² + γ_{22}² + 2ργ_{21}γ_{22})]dt + martingale part.

So we must have

g_t + g_{x_1}β_1 + g_{x_2}β_2 + (1/2)g_{x_1x_1}(γ_{11}² + γ_{12}² + 2ργ_{11}γ_{12}) + g_{x_1x_2}(γ_{11}γ_{21} + ργ_{11}γ_{22} + ργ_{12}γ_{21} + γ_{12}γ_{22}) + (1/2)g_{x_2x_2}(γ_{21}² + γ_{22}² + 2ργ_{21}γ_{22}) = 0.

Taking ρ = 0 gives part (ii) as a special case. The PDE for f can be similarly obtained.

6.6. (i)

Proof. Multiplying e^{(1/2)bt} on both sides of (6.9.15), we get

d(e^{(1/2)bt}X_j(t)) = e^{(1/2)bt}[X_j(t)(1/2)b dt + (−(1/2)bX_j(t)dt + (1/2)σdW_j(t))] = e^{(1/2)bt}(1/2)σdW_j(t).

So e^{(1/2)bt}X_j(t) − X_j(0) = (1/2)σ∫_0^t e^{(1/2)bu}dW_j(u) and X_j(t) = e^{−(1/2)bt}(X_j(0) + (1/2)σ∫_0^t e^{(1/2)bu}dW_j(u)). By Theorem 4.4.9, X_j(t) is normally distributed with mean X_j(0)e^{−(1/2)bt} and variance (e^{−bt}/4)σ²∫_0^t e^{bu}du = (σ²/(4b))(1 − e^{−bt}).

(ii)

Proof. If R(t) = Σ_{j=1}^d X_j²(t), then

dR(t) = Σ_{j=1}^d (2X_j(t)dX_j(t) + dX_j(t)dX_j(t))
= Σ_{j=1}^d [2X_j(t)(−(1/2)bX_j(t)dt + (1/2)σdW_j(t)) + (1/4)σ²dt]
= ((d/4)σ² − bR(t))dt + σ√R(t) Σ_{j=1}^d (X_j(t)/√R(t))dW_j(t).

Let B(t) = Σ_{j=1}^d ∫_0^t (X_j(s)/√R(s))dW_j(s); then B is a local martingale with dB(t)dB(t) = (Σ_{j=1}^d X_j²(t)/R(t))dt = dt. So by Lévy's Theorem, B is a Brownian motion. Therefore dR(t) = (a − bR(t))dt + σ√R(t)dB(t) (a := (d/4)σ²), and R is a CIR interest rate process.

(iii)

Proof. By (6.9.16), X_j(t) depends on W_j only and is normally distributed with mean e^{−(1/2)bt}X_j(0) and variance (σ²/(4b))[1 − e^{−bt}]. So X_1(t), ···, X_d(t) are i.i.d. normal with the same mean μ(t) and variance v(t).

(iv)

Proof.

E[e^{uX_j(t)²}] = ∫_{−∞}^∞ e^{ux²} (e^{−(x−μ(t))²/(2v(t))}/√(2πv(t))) dx
= ∫_{−∞}^∞ (1/√(2πv(t))) e^{−[(1−2uv(t))x² − 2μ(t)x + μ²(t)]/(2v(t))} dx
= ∫_{−∞}^∞ (1/√(2πv(t))) e^{−(x − μ(t)/(1−2uv(t)))²/(2v(t)/(1−2uv(t)))} dx · e^{−[μ²(t) − μ²(t)/(1−2uv(t))]/(2v(t))}
= (1/√(1 − 2uv(t))) e^{uμ²(t)/(1−2uv(t))}.

(v)

Proof. By R(t) = Σ_{j=1}^d X_j²(t) and the fact that X_1(t), ···, X_d(t) are i.i.d.,

E[e^{uR(t)}] = (E[e^{uX_1(t)²}])^d = (1 − 2uv(t))^{−d/2} e^{duμ²(t)/(1−2uv(t))} = (1 − 2uv(t))^{−2a/σ²} e^{uR(0)e^{−bt}/(1−2uv(t))}.
6.7. (i)

Proof. e^{−rt}c(t, S_t, V_t) = Ẽ[e^{−rT}(S_T − K)^+|F_t] is a martingale by an iterated conditioning argument. Since

d(e^{−rt}c(t, S_t, V_t)) = e^{−rt}[−rc(t, S_t, V_t) + c_t(t, S_t, V_t) + c_s(t, S_t, V_t)rS_t + c_v(t, S_t, V_t)(a − bV_t) + (1/2)c_ss(t, S_t, V_t)V_tS_t² + (1/2)c_vv(t, S_t, V_t)σ²V_t + c_sv(t, S_t, V_t)σV_tS_tρ]dt + martingale part,

we conclude rc = c_t + rsc_s + c_v(a − bv) + (1/2)c_ss vs² + (1/2)c_vv σ²v + c_sv σsvρ. This is equation (6.9.26).

(ii)

Proof. Suppose c(t, s, v) = sf(t, log s, v) − e^{−r(T−t)}Kg(t, log s, v); then

c_t = sf_t(t, log s, v) − re^{−r(T−t)}Kg(t, log s, v) − e^{−r(T−t)}Kg_t(t, log s, v),
c_s = f(t, log s, v) + f_s(t, log s, v) − e^{−r(T−t)}Kg_s(t, log s, v)(1/s),
c_v = sf_v(t, log s, v) − e^{−r(T−t)}Kg_v(t, log s, v),
c_ss = f_s(t, log s, v)(1/s) + f_ss(t, log s, v)(1/s) − e^{−r(T−t)}Kg_ss(t, log s, v)(1/s²) + e^{−r(T−t)}Kg_s(t, log s, v)(1/s²),
c_sv = f_v(t, log s, v) + f_sv(t, log s, v) − e^{−r(T−t)}(K/s)g_sv(t, log s, v),
c_vv = sf_vv(t, log s, v) − e^{−r(T−t)}Kg_vv(t, log s, v).

So

c_t + rsc_s + (a − bv)c_v + (1/2)s²vc_ss + ρσsvc_sv + (1/2)σ²vc_vv
= sf_t − re^{−r(T−t)}Kg − e^{−r(T−t)}Kg_t + rsf + rsf_s − rKe^{−r(T−t)}g_s + (a − bv)(sf_v − e^{−r(T−t)}Kg_v)
+ (1/2)s²v[(1/s)f_s + (1/s)f_ss − e^{−r(T−t)}(K/s²)g_ss + e^{−r(T−t)}(K/s²)g_s] + ρσsv[f_v + f_sv − e^{−r(T−t)}(K/s)g_sv] + (1/2)σ²v(sf_vv − e^{−r(T−t)}Kg_vv)
= s[f_t + (r + (1/2)v)f_s + (a − bv + ρσv)f_v + (1/2)vf_ss + ρσvf_sv + (1/2)σ²vf_vv]
− Ke^{−r(T−t)}[g_t + (r − (1/2)v)g_s + (a − bv)g_v + (1/2)vg_ss + ρσvg_sv + (1/2)σ²vg_vv] + rsf − re^{−r(T−t)}Kg
= rc.

That is, c satisfies the PDE (6.9.26).

(iii)

Proof. First, by the Markov property, f(t, X_t, V_t) = Ẽ[1_{{X_T ≥ log K}}|F_t]. So f(T, X_T, V_T) = 1_{{X_T ≥ log K}}, which implies f(T, x, v) = 1_{{x ≥ log K}} for all x ∈ R, v ≥ 0. Second, f(t, X_t, V_t) is a martingale, so by differentiating f and setting the dt term to zero, we obtain the PDE (6.9.32) for f. Indeed,

df(t, X_t, V_t) = [f_t(t, X_t, V_t) + f_x(t, X_t, V_t)(r + (1/2)V_t) + f_v(t, X_t, V_t)(a − bV_t + ρσV_t) + (1/2)f_xx(t, X_t, V_t)V_t + (1/2)f_vv(t, X_t, V_t)σ²V_t + f_xv(t, X_t, V_t)σV_tρ]dt + martingale part.

So we must have f_t + (r + (1/2)v)f_x + (a − bv + ρσv)f_v + (1/2)f_xx v + (1/2)f_vv σ²v + σvρf_xv = 0. This is (6.9.32).

(iv)

Proof. Similar to (iii).

(v)

Proof. c(T, s, v) = sf(T, log s, v) − Kg(T, log s, v) = s·1_{{log s ≥ log K}} − K·1_{{log s ≥ log K}} = (s − K)1_{{s ≥ K}} = (s − K)^+.

6.8.

Proof. We follow the hint. Suppose h is smooth and compactly supported; then it is legitimate to exchange integration and differentiation:

g_t(t, x) = (∂/∂t)∫_0^∞ h(y)p(t, T, x, y)dy = ∫_0^∞ h(y)p_t(t, T, x, y)dy,
g_x(t, x) = ∫_0^∞ h(y)p_x(t, T, x, y)dy,
g_xx(t, x) = ∫_0^∞ h(y)p_xx(t, T, x, y)dy.

So (6.9.45) implies

∫_0^∞ h(y)[p_t(t, T, x, y) + β(t, x)p_x(t, T, x, y) + (1/2)γ²(t, x)p_xx(t, T, x, y)]dy = 0.

By the arbitrariness of h, and assuming β, p_t, p_x, γ, p_xx are all continuous, we have

p_t(t, T, x, y) + β(t, x)p_x(t, T, x, y) + (1/2)γ²(t, x)p_xx(t, T, x, y) = 0.

This is (6.9.43).

6.9.
Proof. We first note

dh_b(X_u) = h_b'(X_u)dX_u + (1/2)h_b''(X_u)dX_u dX_u = [h_b'(X_u)β(u, X_u) + (1/2)γ^2(u, X_u)h_b''(X_u)]du + h_b'(X_u)γ(u, X_u)dW_u.

Integrating both sides of the equation, we have

h_b(X_T) − h_b(X_t) = ∫_t^T [h_b'(X_u)β(u, X_u) + (1/2)γ^2(u, X_u)h_b''(X_u)]du + martingale part.

Taking expectations on both sides, we get

E^{t,x}[h_b(X_T) − h_b(X_t)] = ∫_{−∞}^∞ h_b(y)p(t, T, x, y)dy − h_b(x)
= ∫_t^T E^{t,x}[h_b'(X_u)β(u, X_u) + (1/2)γ^2(u, X_u)h_b''(X_u)]du
= ∫_t^T ∫_{−∞}^∞ [h_b'(y)β(u, y) + (1/2)γ^2(u, y)h_b''(y)]p(t, u, x, y)dy du.

Since h_b vanishes outside (0, b), the integration range can be changed from (−∞, ∞) to (0, b), which gives (6.9.48). By the integration-by-parts formula,

∫_0^b β(u, y)p(t, u, x, y)h_b'(y)dy = h_b(y)β(u, y)p(t, u, x, y)|_0^b − ∫_0^b h_b(y)(∂/∂y)[β(u, y)p(t, u, x, y)]dy = −∫_0^b h_b(y)(∂/∂y)[β(u, y)p(t, u, x, y)]dy,

and

∫_0^b γ^2(u, y)p(t, u, x, y)h_b''(y)dy = −∫_0^b (∂/∂y)[γ^2(u, y)p(t, u, x, y)]h_b'(y)dy = ∫_0^b (∂^2/∂y^2)[γ^2(u, y)p(t, u, x, y)]h_b(y)dy.

Plugging these formulas into (6.9.48), we get (6.9.49). Differentiating w.r.t. T on both sides of (6.9.49), we have

∫_0^b h_b(y)(∂/∂T)p(t, T, x, y)dy = −∫_0^b (∂/∂y)[β(T, y)p(t, T, x, y)]h_b(y)dy + (1/2)∫_0^b (∂^2/∂y^2)[γ^2(T, y)p(t, T, x, y)]h_b(y)dy,

that is,

∫_0^b h_b(y)[(∂/∂T)p(t, T, x, y) + (∂/∂y)(β(T, y)p(t, T, x, y)) − (1/2)(∂^2/∂y^2)(γ^2(T, y)p(t, T, x, y))]dy = 0.

This is (6.9.50). By (6.9.50) and the arbitrariness of h_b, we conclude that for any y ∈ (0, ∞),

(∂/∂T)p(t, T, x, y) + (∂/∂y)[β(T, y)p(t, T, x, y)] − (1/2)(∂^2/∂y^2)[γ^2(T, y)p(t, T, x, y)] = 0.
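The forward equation just derived can be checked the same way on the constant-coefficient Gaussian density, now differentiating in the forward variables (T, y); again a sketch under the assumption of constant β and γ:

```python
import sympy as sp

x, y, beta = sp.symbols('x y beta', real=True)
gamma, tau = sp.symbols('gamma tau', positive=True)  # tau stands for T - t

# Same Gaussian transition density as in the backward-equation check.
p = sp.exp(-(y - x - beta * tau) ** 2 / (2 * gamma ** 2 * tau)) / sp.sqrt(2 * sp.pi * gamma ** 2 * tau)

# Forward (Fokker-Planck) equation: p_T + d/dy(beta p) - (1/2) d^2/dy^2(gamma^2 p) = 0.
lhs = sp.diff(p, tau) + sp.diff(beta * p, y) - sp.Rational(1, 2) * sp.diff(gamma ** 2 * p, y, 2)
assert sp.simplify(lhs) == 0
```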

6.10. Proof. Under the assumption that lim_{y→∞}(y − K)ryp(0, T, x, y) = 0, we have

∫_K^∞ (y − K)(∂/∂y)[ryp(0, T, x, y)]dy = (y − K)ryp(0, T, x, y)|_K^∞ − ∫_K^∞ ryp(0, T, x, y)dy = −∫_K^∞ ryp(0, T, x, y)dy.

If we further assume (6.9.57) and (6.9.58), then using the integration-by-parts formula twice, we have

(1/2)∫_K^∞ (y − K)(∂^2/∂y^2)[σ^2(T, y)y^2 p(0, T, x, y)]dy
= (1/2)(y − K)(∂/∂y)[σ^2(T, y)y^2 p(0, T, x, y)]|_K^∞ − (1/2)∫_K^∞ (∂/∂y)[σ^2(T, y)y^2 p(0, T, x, y)]dy
= −(1/2)σ^2(T, y)y^2 p(0, T, x, y)|_K^∞
= (1/2)σ^2(T, K)K^2 p(0, T, x, K).

Therefore

c_T(0, T, x, K)
= −rc(0, T, x, K) + e^{−rT}∫_K^∞ (y − K)p_T(0, T, x, y)dy
= −re^{−rT}∫_K^∞ (y − K)p(0, T, x, y)dy + e^{−rT}∫_K^∞ (y − K)p_T(0, T, x, y)dy
= −re^{−rT}∫_K^∞ (y − K)p(0, T, x, y)dy − e^{−rT}∫_K^∞ (y − K)(∂/∂y)[ryp(0, T, x, y)]dy + e^{−rT}∫_K^∞ (y − K)(1/2)(∂^2/∂y^2)[σ^2(T, y)y^2 p(0, T, x, y)]dy
= −re^{−rT}∫_K^∞ (y − K)p(0, T, x, y)dy + e^{−rT}∫_K^∞ ryp(0, T, x, y)dy + (1/2)e^{−rT}σ^2(T, K)K^2 p(0, T, x, K)
= re^{−rT}K∫_K^∞ p(0, T, x, y)dy + (1/2)e^{−rT}σ^2(T, K)K^2 p(0, T, x, K)
= −rKc_K(0, T, x, K) + (1/2)σ^2(T, K)K^2 c_{KK}(0, T, x, K).
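In the constant-volatility case p is the Black-Scholes lognormal density, so the relation c_T = −rKc_K + (1/2)σ^2K^2c_{KK} can be checked against the Black-Scholes call price by finite differences. A sketch; the parameter values are assumptions chosen for illustration:

```python
import math

def norm_cdf(z):
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def bs_call(T, K, S0=100.0, r=0.05, sigma=0.2):
    """Black-Scholes call price c(0, T, S0, K)."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

T, K, r, sigma, h = 1.0, 100.0, 0.05, 0.2, 1e-3
cT = (bs_call(T + h, K) - bs_call(T - h, K)) / (2 * h)    # central difference in maturity
cK = (bs_call(T, K + h) - bs_call(T, K - h)) / (2 * h)    # central difference in strike
cKK = (bs_call(T, K + h) - 2 * bs_call(T, K) + bs_call(T, K - h)) / h ** 2
assert abs(cT - (-r * K * cK + 0.5 * sigma ** 2 * K ** 2 * cKK)) < 1e-3
```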

7. Exotic Options

7.1. (i) Proof. Since δ_±(τ, s) = (1/(σ√τ))[log s + (r ± (1/2)σ^2)τ] = (log s)/(σ√τ) + ((r ± (1/2)σ^2)/σ)√τ and ∂τ/∂t = −1, we have

(∂/∂t)δ_±(τ, s) = (log s)/(2στ^{3/2}) − (r ± (1/2)σ^2)/(2σ√τ) = −(1/(2τ))·(1/(σ√τ))[−log s + (r ± (1/2)σ^2)τ] = −(1/(2τ))δ_±(τ, 1/s).

(ii) Proof.

(∂/∂x)δ_±(τ, x/c) = (∂/∂x)[(1/(σ√τ))(log(x/c) + (r ± (1/2)σ^2)τ)] = 1/(xσ√τ),
(∂/∂x)δ_±(τ, c/x) = (∂/∂x)[(1/(σ√τ))(log(c/x) + (r ± (1/2)σ^2)τ)] = −1/(xσ√τ).

(iii) Proof.

N'(δ_±(τ, s)) = (1/√(2π))e^{−δ_±^2(τ, s)/2} = (1/√(2π))exp{−[(log s + rτ)^2 ± σ^2τ(log s + rτ) + (1/4)σ^4τ^2]/(2σ^2τ)}.

Therefore

N'(δ_+(τ, s))/N'(δ_−(τ, s)) = e^{−2σ^2τ(log s + rτ)/(2σ^2τ)} = e^{−(log s + rτ)} = e^{−rτ}/s,

and e^{−rτ}N'(δ_−(τ, s)) = sN'(δ_+(τ, s)).

(iv) Proof.

N'(δ_±(τ, s))/N'(δ_±(τ, s^{−1})) = exp{−[(log s + rτ)^2 − (log s^{−1} + rτ)^2 ± σ^2τ(log s − log s^{−1})]/(2σ^2τ)} = e^{−(4rτ log s ± 2σ^2τ log s)/(2σ^2τ)} = e^{−(2r/σ^2 ± 1)log s} = s^{−(2r/σ^2 ± 1)}.

So N'(δ_±(τ, s^{−1})) = s^{(2r/σ^2 ± 1)}N'(δ_±(τ, s)).

(v) Proof. δ_+(τ, s) − δ_−(τ, s) = (1/(σ√τ))[log s + (r + (1/2)σ^2)τ] − (1/(σ√τ))[log s + (r − (1/2)σ^2)τ] = σ^2τ/(σ√τ) = σ√τ.

(vi) Proof. δ_±(τ, s) − δ_±(τ, s^{−1}) = (1/(σ√τ))[log s + (r ± (1/2)σ^2)τ] − (1/(σ√τ))[log s^{−1} + (r ± (1/2)σ^2)τ] = (2 log s)/(σ√τ).

(vii) Proof. N'(y) = (1/√(2π))e^{−y^2/2}, so N''(y) = (1/√(2π))e^{−y^2/2}(−y) = −yN'(y).
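The identities in (iii)-(vi) are easy to confirm numerically; a sketch with assumed parameter values (here N' denotes the standard normal density):

```python
import math

def delta(tau, s, r=0.05, sigma=0.2, sign=+1):
    """delta_{+/-}(tau, s) = [log s + (r +/- sigma^2/2) tau] / (sigma sqrt(tau))."""
    return (math.log(s) + (r + sign * 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))

def nprime(y):  # standard normal density N'(y)
    return math.exp(-y ** 2 / 2) / math.sqrt(2 * math.pi)

tau, s, r, sigma = 0.7, 1.3, 0.05, 0.2
dp, dm = delta(tau, s, sign=+1), delta(tau, s, sign=-1)
# (iii): e^{-r tau} N'(delta_-) = s N'(delta_+)
assert abs(math.exp(-r * tau) * nprime(dm) - s * nprime(dp)) < 1e-12
# (iv): N'(delta_{+/-}(tau, 1/s)) = s^{2r/sigma^2 +/- 1} N'(delta_{+/-}(tau, s))
for sign in (+1, -1):
    lhs = nprime(delta(tau, 1 / s, sign=sign))
    rhs = s ** (2 * r / sigma ** 2 + sign) * nprime(delta(tau, s, sign=sign))
    assert abs(lhs - rhs) < 1e-12
# (v): delta_+ - delta_- = sigma sqrt(tau)
assert abs(dp - dm - sigma * math.sqrt(tau)) < 1e-12
# (vi): delta_{+/-}(tau, s) - delta_{+/-}(tau, 1/s) = 2 log s / (sigma sqrt(tau))
assert abs(dp - delta(tau, 1 / s, sign=+1) - 2 * math.log(s) / (sigma * math.sqrt(tau))) < 1e-12
```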

To be continued ...

7.3. Proof. We note S_T = S_0e^{σŴ_T} = S_te^{σ(Ŵ_T − Ŵ_t)}, that Ŵ_T − Ŵ_t = (W_T − W_t) + α(T − t) and sup_{t≤u≤T}(Ŵ_u − Ŵ_t) are independent of F_t, and that

Y_T = S_0e^{σM̂_T}
= S_0e^{σ sup_{t≤u≤T} Ŵ_u} 1_{{M̂_t ≤ sup_{t≤u≤T} Ŵ_u}} + S_0e^{σM̂_t} 1_{{M̂_t > sup_{t≤u≤T} Ŵ_u}}
= S_te^{σ sup_{t≤u≤T}(Ŵ_u − Ŵ_t)} 1_{{Y_t/S_t ≤ e^{σ sup_{t≤u≤T}(Ŵ_u − Ŵ_t)}}} + Y_t 1_{{Y_t/S_t > e^{σ sup_{t≤u≤T}(Ŵ_u − Ŵ_t)}}}.

So E[f(S_T, Y_T)|F_t] = E[f(x·S_{T−t}/S_0, x·(Y_{T−t}/S_0)1_{{y/x ≤ Y_{T−t}/S_0}} + y·1_{{y/x > Y_{T−t}/S_0}})]|_{x=S_t, y=Y_t}. Therefore E[f(S_T, Y_T)|F_t] is a Borel function of (S_t, Y_t).

7.4. Proof. By the Cauchy-Schwarz inequality and the monotonicity of Y, we have

|Σ_{j=1}^m (Y_{t_j} − Y_{t_{j−1}})(S_{t_j} − S_{t_{j−1}})| ≤ Σ_{j=1}^m |Y_{t_j} − Y_{t_{j−1}}||S_{t_j} − S_{t_{j−1}}|
≤ √(Σ_{j=1}^m (Y_{t_j} − Y_{t_{j−1}})^2)·√(Σ_{j=1}^m (S_{t_j} − S_{t_{j−1}})^2)
≤ √(max_{1≤j≤m}|Y_{t_j} − Y_{t_{j−1}}|·(Y_T − Y_0))·√(Σ_{j=1}^m (S_{t_j} − S_{t_{j−1}})^2).

If we increase the number of partition points to infinity and let the length of the longest subinterval max_{1≤j≤m}|t_j − t_{j−1}| approach zero, then Σ_{j=1}^m (S_{t_j} − S_{t_{j−1}})^2 → [S]_T − [S]_0 < ∞ and max_{1≤j≤m}|Y_{t_j} − Y_{t_{j−1}}| → 0 a.s. by the continuity of Y. This implies Σ_{j=1}^m (Y_{t_j} − Y_{t_{j−1}})(S_{t_j} − S_{t_{j−1}}) → 0.

8. American Derivative Securities

8.1. Proof. v_L'(L+) = (K − L)(−2r/σ^2)(x/L)^{−2r/σ^2 − 1}(1/L)|_{x=L} = −(2r/(σ^2L))(K − L). So v_L'(L+) = v_L'(L−) if and only if −(2r/(σ^2L))(K − L) = −1. Solving for L, we get L = 2rK/(2r + σ^2).

8.2.
Proof. By the calculation in Section 8.3.3, we can see that v_2(x) ≥ (K_2 − x)^+ ≥ (K_1 − x)^+, that rv_2(x) − rxv_2'(x) − (1/2)σ^2x^2v_2''(x) ≥ 0 for all x ≥ 0, and that for 0 ≤ x < L_{1*} < L_{2*},

rv_2(x) − rxv_2'(x) − (1/2)σ^2x^2v_2''(x) = rK_2 > rK_1 > 0.

So the linear complementarity conditions for v_2 imply v_2(x) = (K_2 − x)^+ = K_2 − x > K_1 − x = (K_1 − x)^+ on [0, L_{1*}]. Hence v_2(x) does not satisfy the third linear complementarity condition for v_1: for each x ≥ 0, equality holds in either (8.8.1) or (8.8.2) or both.

8.3. (i)
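Returning to Exercise 8.1, the smooth-pasting boundary L = 2rK/(2r + σ^2) can be checked numerically. This is a sketch; the parameter values are assumptions for illustration:

```python
r, sigma, K = 0.05, 0.3, 100.0
L = 2 * r * K / (2 * r + sigma ** 2)  # candidate exercise boundary from Exercise 8.1

def v(x):
    """Perpetual-put value function v_L of (8.3.13): intrinsic below L, power decay above."""
    return K - x if x <= L else (K - L) * (x / L) ** (-2 * r / sigma ** 2)

h = 1e-6
left = (v(L) - v(L - h)) / h    # slope from the left: exactly -1
right = (v(L + h) - v(L)) / h   # slope from the right
assert abs(left + 1.0) < 1e-6
assert abs(right + 1.0) < 1e-4  # smooth pasting: v_L'(L+) = v_L'(L-) = -1
```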

Proof. Suppose x takes its values in a domain bounded away from 0. By the general theory of linear differential equations, if we can find two linearly independent solutions v_1(x), v_2(x) of (8.8.4), then any solution of (8.8.4) can be represented in the form C_1v_1 + C_2v_2 where C_1 and C_2 are constants. So it suffices to find two linearly independent special solutions of (8.8.4). Assuming v(x) = x^p for some constant p to be determined, (8.8.4) yields x^p(r − pr − (1/2)σ^2p(p − 1)) = 0. Solving the quadratic equation 0 = r − pr − (1/2)σ^2p(p − 1) = (−(1/2)σ^2p − r)(p − 1), we get p = 1 or p = −2r/σ^2. So a general solution of (8.8.4) has the form C_1x + C_2x^{−2r/σ^2}.

(ii) Proof. Assume there is an interval [x_1, x_2] with 0 < x_1 < x_2 < ∞, such that v(x), not identically zero, satisfies (8.3.19) with equality on [x_1, x_2] and satisfies (8.3.18) with equality for x at and immediately to the left of x_1 and for x at and immediately to the right of x_2. Then we can find some C_1 and C_2 so that v(x) = C_1x + C_2x^{−2r/σ^2} on [x_1, x_2]. If for some x_0 ∈ [x_1, x_2] we had v(x_0) = v'(x_0) = 0, then by the uniqueness of the solution of (8.8.4) we would conclude v ≡ 0, a contradiction. So such an x_0 cannot exist. This implies 0 < x_1 < x_2 < K (if K ≤ x_2, then v(x_2) = (K − x_2)^+ = 0 and v'(x_2) = the right derivative of (K − x)^+ at x_2, which is 0).¹ Thus we have four equations for C_1 and C_2:

C_1x_1 + C_2x_1^{−2r/σ^2} = K − x_1
C_1x_2 + C_2x_2^{−2r/σ^2} = K − x_2
C_1 − (2r/σ^2)C_2x_1^{−2r/σ^2 − 1} = −1
C_1 − (2r/σ^2)C_2x_2^{−2r/σ^2 − 1} = −1.

Since x_1 ≠ x_2, the last two equations imply C_2 = 0. Plugging C_2 = 0 into the first two equations, we have C_1 = (K − x_1)/x_1 = (K − x_2)/x_2; plugging C_2 = 0 into the last two equations, we have C_1 = −1. Combined, we would have x_1 = x_2. Contradiction. Therefore our initial assumption is incorrect, and the only solution v that satisfies the specified conditions in the problem is the zero solution.

(iii) Proof. If in a right neighborhood of 0, v satisfies (8.3.19) with equality, then part (i) implies v(x) = C_1x + C_2x^{−2r/σ^2} for some constants C_1 and C_2 (finiteness of v near 0 forces C_2 = 0). Then v(0) = lim_{x↓0} v(x) = 0 < (K − 0)^+, i.e. (8.3.18) would be violated. So we must have rv − rxv' − (1/2)σ^2x^2v'' > 0 in a right neighborhood of 0. According to (8.3.20), v(x) = (K − x)^+ near 0, so v(0) = K. We have thus concluded simultaneously that v cannot satisfy (8.3.19) with equality near 0 and that v(0) = K, starting from first principles (8.3.18)-(8.3.20).

(iv) Proof. This is already shown in our solution of part (iii): near 0, v cannot satisfy (8.3.19) with equality.

(v) Proof. If v(x) = (K − x)^+ for all x ≥ 0, then v cannot have a continuous derivative as stated in the problem. This is a contradiction.

(vi)
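Part (i)'s claim that x and x^{−2r/σ^2} solve (8.8.4), i.e. rv − rxv' − (1/2)σ^2x^2v'' = 0, can be verified symbolically; a sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
r, sigma = sp.symbols('r sigma', positive=True)

# Both candidate solutions of (8.8.4) annihilate the operator r*v - r*x*v' - (1/2) sigma^2 x^2 v''.
for v in (x, x ** (-2 * r / sigma ** 2)):
    ode = r * v - r * x * sp.diff(v, x) - sp.Rational(1, 2) * sigma ** 2 * x ** 2 * sp.diff(v, x, 2)
    assert sp.simplify(ode) == 0
```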
¹ Note we have interpreted the condition "v(x) satisfies (8.3.18) with equality for x at and immediately to the right of x_2" as "v(x_2) = (K − x_2)^+ and v'(x_2) = the right derivative of (K − x)^+ at x_2." This is weaker than "v(x) = (K − x) in a right neighborhood of x_2."


Proof. By the result of part (i), we can start with v(x) = (K − x)^+ on [0, x_1] and v(x) = C_1x + C_2x^{−2r/σ^2} on [x_1, ∞). By the assumption of the problem, both v and v' are continuous. Since (K − x)^+ is not differentiable at K, we must have x_1 ≤ K. This gives us the equations

K − x_1 = (K − x_1)^+ = C_1x_1 + C_2x_1^{−2r/σ^2}
−1 = C_1 − (2r/σ^2)C_2x_1^{−2r/σ^2 − 1}.

Because v is assumed to be bounded, we must have C_1 = 0, and the above equations then have only two unknowns: C_2 and x_1. Solving them for C_2 and x_1, we are done.

8.4. (i) Proof. This is already shown in part (i) of Exercise 8.3.

(ii) Proof. We solve for A, B the equations

AL^{−2r/σ^2} + BL = K − L
−(2r/σ^2)AL^{−2r/σ^2 − 1} + B = −1,

and we obtain A = σ^2KL^{2r/σ^2}/(σ^2 + 2r) and B = 2rK/(L(σ^2 + 2r)) − 1.

(iii) Proof. By (8.8.5), B > 0. So for x ≥ K, f(x) ≥ BK > 0 = (K − x)^+. If L ≤ x < K,

f(x) − (K − x) = (σ^2KL^{2r/σ^2}/(σ^2 + 2r))x^{−2r/σ^2} + 2rKx/(L(σ^2 + 2r)) − K
= (KL^{2r/σ^2}x^{−2r/σ^2}/(σ^2 + 2r))[σ^2 + 2r(x/L)^{2r/σ^2 + 1} − (σ^2 + 2r)(x/L)^{2r/σ^2}].

Let g(θ) = σ^2 + 2rθ^{2r/σ^2 + 1} − (σ^2 + 2r)θ^{2r/σ^2} for θ ≥ 1. Then g(1) = 0 and g'(θ) = 2r(2r/σ^2 + 1)θ^{2r/σ^2} − (σ^2 + 2r)(2r/σ^2)θ^{2r/σ^2 − 1} = (2r/σ^2)(σ^2 + 2r)θ^{2r/σ^2 − 1}(θ − 1) ≥ 0. So g(θ) ≥ 0 for any θ ≥ 1. This shows f(x) ≥ (K − x)^+ for L ≤ x < K. Combined, we get f(x) ≥ (K − x)^+ for all x ≥ L.
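The constants A and B obtained in (ii), and the inequality f(x) ≥ (K − x)^+ from (iii), can be checked numerically; a sketch with assumed values of r, σ, K and L:

```python
r, sigma, K, L = 0.05, 0.3, 100.0, 40.0  # any L in (0, K) works for the check
p = 2 * r / sigma ** 2

A = sigma ** 2 * K * L ** p / (sigma ** 2 + 2 * r)
B = 2 * r * K / (L * (sigma ** 2 + 2 * r)) - 1

# (A, B) solves the value-matching and smooth-pasting equations at x = L
assert abs(A * L ** (-p) + B * L - (K - L)) < 1e-9
assert abs(-p * A * L ** (-p - 1) + B - (-1)) < 1e-9

# f(x) = A x^{-p} + B x dominates the intrinsic value on [L, K] (equivalently g(theta) >= 0)
f = lambda x: A * x ** (-p) + B * x
assert all(f(x) >= (K - x) - 1e-9 for x in [L + i * (K - L) / 100 for i in range(101)])
```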

(iv) Proof. Since lim_{x→∞} v(x) = lim_{x→∞} f(x) = ∞ and lim_{x→∞} v_{L*}(x) = lim_{x→∞}(K − L_*)(x/L_*)^{−2r/σ^2} = 0, v(x) and v_{L*}(x) are different. By part (iii), v(x) ≥ (K − x)^+, so v satisfies (8.3.18). For x ≥ L, rv − rxv' − (1/2)σ^2x^2v'' = rf − rxf' − (1/2)σ^2x^2f'' = 0. For 0 ≤ x ≤ L, rv − rxv' − (1/2)σ^2x^2v'' = r(K − x) + rx = rK. Combined, rv − rxv' − (1/2)σ^2x^2v'' ≥ 0 for x ≥ 0, so v satisfies (8.3.19). Along the way, we have also shown that v satisfies (8.3.20). In summary, v satisfies the linear complementarity conditions (8.3.18)-(8.3.20), but v is not the function v_{L*} given by (8.3.13).

(v) Proof. By part (ii), B = 0 if and only if 2rK/(L(σ^2 + 2r)) − 1 = 0, i.e. L = 2rK/(2r + σ^2). In this case, v(x) = Ax^{−2r/σ^2} = (σ^2K/(σ^2 + 2r))(x/L)^{−2r/σ^2} = (K − L)(x/L)^{−2r/σ^2} = v_{L*}(x) on the interval [L, ∞).

8.5. The difficulty of the dividend-paying case is that from Lemma 8.3.4 we can only obtain E[e^{−(r−a)τ_L}], not E[e^{−rτ_L}]. So we have to start from Theorem 8.3.2.

(i)

Proof. By (8.8.9), S_t = S_0e^{σW_t + (r − a − (1/2)σ^2)t}. Assume S_0 = x; then S_t = L if and only if −W_t − (1/σ)(r − a − (1/2)σ^2)t = (1/σ)log(x/L). By Theorem 8.3.2,

E[e^{−rτ_L}] = e^{−(1/σ)log(x/L)·[(1/σ)(r − a − (1/2)σ^2) + √((1/σ^2)(r − a − (1/2)σ^2)^2 + 2r)]}.

If we set γ = (1/σ^2)(r − a − (1/2)σ^2) + (1/σ)√((1/σ^2)(r − a − (1/2)σ^2)^2 + 2r), we can write E[e^{−rτ_L}] as e^{−γ log(x/L)} = (x/L)^{−γ}. So the risk-neutral expected discounted payoff of this strategy is

v_L(x) = K − x for 0 ≤ x ≤ L, and v_L(x) = (K − L)(x/L)^{−γ} for x > L.

(ii) Proof. (∂/∂L)v_L(x) = −(x/L)^{−γ}(1 − γ(K − L)/L). Setting (∂/∂L)v_L(x) = 0 and solving for L, we get L_* = γK/(γ + 1).

(iii) Proof. By Itô's formula, we have

d[e^{−rt}v_{L*}(S_t)] = e^{−rt}[−rv_{L*}(S_t) + v_{L*}'(S_t)(r − a)S_t + (1/2)v_{L*}''(S_t)σ^2S_t^2]dt + e^{−rt}v_{L*}'(S_t)σS_t dW_t.

If x > L_*,

−rv_{L*}(x) + v_{L*}'(x)(r − a)x + (1/2)v_{L*}''(x)σ^2x^2
= −r(K − L_*)(x/L_*)^{−γ} + (r − a)x(K − L_*)(−γ)x^{−γ−1}/L_*^{−γ} + (1/2)σ^2x^2(−γ)(−γ − 1)(K − L_*)x^{−γ−2}/L_*^{−γ}
= (K − L_*)(x/L_*)^{−γ}[−r − (r − a)γ + (1/2)σ^2γ(γ + 1)].

By the definition of γ, if we set u = r − a − (1/2)σ^2, then γ = u/σ^2 + (1/σ)√(u^2/σ^2 + 2r) and (1/2)σ^2γ^2 = u^2/σ^2 + (u/σ)√(u^2/σ^2 + 2r) + r, so

r + (r − a)γ − (1/2)σ^2γ(γ + 1) = r + γ(r − a − (1/2)σ^2) − (1/2)σ^2γ^2 = r + γu − (1/2)σ^2γ^2
= r + u^2/σ^2 + (u/σ)√(u^2/σ^2 + 2r) − [u^2/σ^2 + (u/σ)√(u^2/σ^2 + 2r) + r] = 0.

Hence the dt term vanishes for x > L_*. If x < L_*, then −rv_{L*}(x) + v_{L*}'(x)(r − a)x + (1/2)v_{L*}''(x)σ^2x^2 = −r(K − x) + (−1)(r − a)x = −rK + ax. Combined,

d[e^{−rt}v_{L*}(S_t)] = −e^{−rt}1_{{S_t < L_*}}(rK − aS_t)dt + e^{−rt}v_{L*}'(S_t)σS_t dW_t.

It remains to check that rK − aL_* ≥ 0, so that the dt term is nonpositive for S_t < L_* and e^{−rt}v_{L*}(S_t) is a supermartingale, while the process stopped at τ_{L*} is a martingale. Indeed, since r + γ(r − a) = [r + γu − (1/2)σ^2γ^2] + (1/2)σ^2γ^2 + (1/2)σ^2γ = (1/2)σ^2γ(γ + 1), we have

rK − aL_* = K[r(γ + 1) − aγ]/(γ + 1) = K[r + γ(r − a)]/(γ + 1) = (1/2)σ^2γK ≥ 0,

because γ ≥ 0. So rK − aL_* ≥ 0 must be true.

(iv) Proof. The proof is similar to that of Corollary 8.3.6. Note the only properties used in the proof of Corollary 8.3.6 are that e^{−rt}v_{L*}(S_t) is a supermartingale, that e^{−r(t∧τ_{L*})}v_{L*}(S_{t∧τ_{L*}}) is a martingale, and that v_{L*}(x) ≥ (K − x)^+. Part (iii) already proved the supermartingale-martingale property, so it suffices to show v_{L*}(x) ≥ (K − x)^+ in our problem. Indeed, since γ ≥ 0, L_* = γK/(γ + 1) < K. For x ≥ K > L_*, v_{L*}(x) > 0 = (K − x)^+; for 0 ≤ x < L_*, v_{L*}(x) = K − x = (K − x)^+; finally, for L_* ≤ x ≤ K,

(d/dx)[v_{L*}(x) − (K − x)] = −γ(K − L_*)x^{−γ−1}/L_*^{−γ} + 1 ≥ −γ(K − L_*)L_*^{−γ−1}/L_*^{−γ} + 1 = −γ(K − γK/(γ + 1))·((γ + 1)/(γK)) + 1 = 0,

and (v_{L*}(x) − (K − x))|_{x=L_*} = 0. So for L_* ≤ x ≤ K, v_{L*}(x) − (K − x) ≥ 0. Combined, we have v_{L*}(x) ≥ (K − x)^+ ≥ 0 for all x ≥ 0.

8.6. Proof. By Lemma 8.5.1, X_t = e^{−rt}(S_t − K)^+ is a submartingale. For any τ ∈ Γ_{0,T}, Theorem 8.8.1 implies E[e^{−rT}(S_T − K)^+] ≥ E[e^{−r(τ∧T)}(S_{τ∧T} − K)^+] ≥ E[e^{−rτ}(S_τ − K)^+ 1_{{τ≤T}}].

Take logarithms on both sides and plug in the expression of A(t, T); we get

log B(0, T) = −Y_1(0)C_1(0, T) − Y_2(0)C_2(0, T) + ∫_0^T [(1/2)(C_1^2(s, T) + C_2^2(s, T)) − δ_0(s)]ds.

Taking the derivative w.r.t. T, we have

(∂/∂T)log B(0, T) = −Y_1(0)(∂/∂T)C_1(0, T) − Y_2(0)(∂/∂T)C_2(0, T) + (1/2)C_1^2(T, T) + (1/2)C_2^2(T, T) − δ_0(T).

So

δ_0(T) = −Y_1(0)(∂/∂T)C_1(0, T) − Y_2(0)(∂/∂T)C_2(0, T) − (∂/∂T)log B(0, T)
= −Y_1(0)[δ_1e^{−λ_1T} − (λ_{21}δ_2/λ_2)e^{−λ_2T}] − Y_2(0)δ_2e^{−λ_2T} − (∂/∂T)log B(0, T), if λ_1 ≠ λ_2;
= −Y_1(0)[δ_1e^{−λ_1T} − λ_{21}δ_2Te^{−λ_2T}] − Y_2(0)δ_2e^{−λ_2T} − (∂/∂T)log B(0, T), if λ_1 = λ_2.

10.4. (i) Proof.

dX̃_t = dX_t + Ke^{−Kt}(∫_0^t e^{Ku}Θ(u)du)dt − Θ(t)dt = −KX_t dt + ΣdB_t + Ke^{−Kt}(∫_0^t e^{Ku}Θ(u)du)dt = −KX̃_t dt + ΣdB_t.

(ii) Proof.

W_t = CΣB_t = [ 1/σ_1, 0 ; −ρ/(σ_1√(1−ρ^2)), 1/(σ_2√(1−ρ^2)) ][ σ_1, 0 ; 0, σ_2 ]B_t = [ 1, 0 ; −ρ/√(1−ρ^2), 1/√(1−ρ^2) ]B_t.

So W is a martingale with

⟨W^1⟩_t = ⟨B^1⟩_t = t,
⟨W^2⟩_t = ⟨−(ρ/√(1−ρ^2))B^1 + (1/√(1−ρ^2))B^2⟩_t = (ρ^2t + t − 2ρ·ρt)/(1 − ρ^2) = t,
⟨W^1, W^2⟩_t = ⟨B^1, −(ρ/√(1−ρ^2))B^1 + (1/√(1−ρ^2))B^2⟩_t = −ρt/√(1−ρ^2) + ρt/√(1−ρ^2) = 0.

Therefore W is a two-dimensional BM. Moreover, dY_t = CdX̃_t = −CKX̃_t dt + CΣdB_t = −CKC^{−1}Y_t dt + dW_t = −ΛY_t dt + dW_t, where

Λ = CKC^{−1} = [ 1/σ_1, 0 ; −ρ/(σ_1√(1−ρ^2)), 1/(σ_2√(1−ρ^2)) ][ λ_1, 0 ; −1, λ_2 ][ σ_1, 0 ; ρσ_2, σ_2√(1−ρ^2) ] = [ λ_1, 0 ; (ρσ_2(λ_2 − λ_1) − σ_1)/(σ_2√(1−ρ^2)), λ_2 ].

(iii) Proof.

X_t = X̃_t + e^{−Kt}∫_0^t e^{Ku}Θ(u)du = C^{−1}Y_t + e^{−Kt}∫_0^t e^{Ku}Θ(u)du
= [ σ_1, 0 ; ρσ_2, σ_2√(1−ρ^2) ](Y_1(t), Y_2(t))' + e^{−Kt}∫_0^t e^{Ku}Θ(u)du
= (σ_1Y_1(t), ρσ_2Y_1(t) + σ_2√(1−ρ^2)Y_2(t))' + e^{−Kt}∫_0^t e^{Ku}Θ(u)du.

So R_t = X_2(t) = ρσ_2Y_1(t) + σ_2√(1−ρ^2)Y_2(t) + δ_0(t), where δ_0(t) is the second coordinate of e^{−Kt}∫_0^t e^{Ku}Θ(u)du and can be derived explicitly by Lemma 10.2.3. Then δ_1 = ρσ_2 and δ_2 = σ_2√(1−ρ^2).

10.5.

Proof. We note C(t, T) and A(t, T) depend only on T − t, so C(t, t + τ̄) = C(0, τ̄) and A(t, t + τ̄) = A(0, τ̄) are constants when τ̄ is fixed. Hence

L̄_t = −(1/τ̄)log B(t, t + τ̄) = (1/τ̄)[C(t, t + τ̄)R(t) + A(t, t + τ̄)] = (1/τ̄)[C(0, τ̄)R(t) + A(0, τ̄)],

and therefore L̄(t_2) − L̄(t_1) = (1/τ̄)C(0, τ̄)[R(t_2) − R(t_1)]. Since L̄(t_2) − L̄(t_1) is a nonrandom linear function of R(t_2) − R(t_1), it is easy to verify that their correlation is 1.

10.6. (i) Proof. If δ_2 = 0, then dR_t = δ_1dY_1(t) = δ_1(−λ_1Y_1(t)dt + dW_1(t)) = δ_1(δ_0/δ_1 − R_t/δ_1)λ_1dt + δ_1dW_1(t) = (δ_0λ_1 − λ_1R_t)dt + δ_1dW_1(t). So a = δ_0λ_1 and b = λ_1.

(ii) Proof.

dR_t = δ_1dY_1(t) + δ_2dY_2(t)
= −δ_1λ_1Y_1(t)dt + δ_1dW_1(t) − δ_2λ_{21}Y_1(t)dt − δ_2λ_2Y_2(t)dt + δ_2dW_2(t)
= −Y_1(t)(δ_1λ_1 + δ_2λ_{21})dt − δ_2λ_2Y_2(t)dt + δ_1dW_1(t) + δ_2dW_2(t)
= −Y_1(t)λ_2δ_1dt − δ_2λ_2Y_2(t)dt + δ_1dW_1(t) + δ_2dW_2(t)
= −λ_2(Y_1(t)δ_1 + Y_2(t)δ_2)dt + δ_1dW_1(t) + δ_2dW_2(t)
= −λ_2(R_t − δ_0)dt + √(δ_1^2 + δ_2^2)[(δ_1/√(δ_1^2 + δ_2^2))dW_1(t) + (δ_2/√(δ_1^2 + δ_2^2))dW_2(t)].

So a = λ_2δ_0, b = λ_2, σ = √(δ_1^2 + δ_2^2), and B̄_t = (δ_1/√(δ_1^2 + δ_2^2))W_1(t) + (δ_2/√(δ_1^2 + δ_2^2))W_2(t) is a Brownian motion.

10.7. (i) Proof. We use the canonical form of the model as in formulas (10.2.4)-(10.2.6). By (10.2.20),

dB(t, T) = df(t, Y_1(t), Y_2(t)) = d[e^{−Y_1(t)C_1(T−t) − Y_2(t)C_2(T−t) − A(T−t)}]
= (dt term) + B(t, T)[−C_1(T − t)dW_1(t) − C_2(T − t)dW_2(t)]
= (dt term) + B(t, T)(−C_1(T − t), −C_2(T − t))(dW_1(t), dW_2(t))'.

So the volatility vector of B(t, T) under P̃ is (−C_1(T − t), −C_2(T − t)). By (9.2.5), W_j^T(t) = ∫_0^t C_j(T − u)du + W_j(t) (j = 1, 2) form a two-dimensional P^T-BM.

(ii) Proof. Under the T-forward measure, the numeraire is B(t, T). By risk-neutral pricing, at time zero the risk-neutral price V_0 of the option satisfies

V_0/B(0, T) = E^T[(1/B(T, T))(e^{−C_1(T̄−T)Y_1(T) − C_2(T̄−T)Y_2(T) − A(T̄−T)} − K)^+].

Note B(T, T) = 1; we get (10.7.19).

(iii) Proof. We can rewrite (10.2.4) and (10.2.5) as

dY_1(t) = −λ_1Y_1(t)dt + dW_1^T(t) − C_1(T − t)dt
dY_2(t) = −λ_{21}Y_1(t)dt − λ_2Y_2(t)dt + dW_2^T(t) − C_2(T − t)dt.

Then

Y_1(t) = Y_1(0)e^{−λ_1t} + ∫_0^t e^{λ_1(s−t)}dW_1^T(s) − ∫_0^t C_1(T − s)e^{λ_1(s−t)}ds,
Y_2(t) = Y_2(0)e^{−λ_2t} − λ_{21}∫_0^t Y_1(s)e^{λ_2(s−t)}ds + ∫_0^t e^{λ_2(s−t)}dW_2^T(s) − ∫_0^t C_2(T − s)e^{λ_2(s−t)}ds.

So (Y_1, Y_2) is jointly Gaussian and X is therefore Gaussian.

(iv) Proof. First, we recall the Black-Scholes formula for call options: if dS_t = μS_tdt + σS_tdW_t, then

E[e^{−μT}(S_0e^{σW_T + (μ − (1/2)σ^2)T} − K)^+] = S_0N(d_+) − Ke^{−μT}N(d_−)

with d_± = (1/(σ√T))(log(S_0/K) + (μ ± (1/2)σ^2)T). Let T = 1, S_0 = 1 and ξ = σW_1 + (μ − (1/2)σ^2); then ξ ~ N(μ − (1/2)σ^2, σ^2) and

E[(e^ξ − K)^+] = e^μN(d_+) − KN(d_−),

where d_± = (1/σ)(−log K + (μ ± (1/2)σ^2)) (different from the problem; check!). Since under P^T, X ~ N(μ − (1/2)σ^2, σ^2), we have

B(0, T)E^T[(e^X − K)^+] = B(0, T)(e^μN(d_+) − KN(d_−)).
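The lognormal call formula used here, E[(e^ξ − K)^+] = e^μN(d_+) − KN(d_−) with d_± = (1/σ)(−log K + (μ ± σ^2/2)), can be confirmed by direct numerical integration against the N(μ − σ^2/2, σ^2) density; a sketch with assumed parameter values:

```python
import math

def norm_cdf(z):
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

mu, sigma, K = 0.03, 0.25, 1.1
m, v = mu - 0.5 * sigma ** 2, sigma  # xi ~ N(m, v^2)

# Left side: E[(e^xi - K)^+] by midpoint-rule integration against the normal density
total, n, lo, hi = 0.0, 20000, m - 10 * v, m + 10 * v
h = (hi - lo) / n
for i in range(n):
    x = lo + (i + 0.5) * h
    total += max(math.exp(x) - K, 0.0) * math.exp(-(x - m) ** 2 / (2 * v ** 2)) / (v * math.sqrt(2 * math.pi)) * h

# Right side: e^mu N(d+) - K N(d-)
dp = (-math.log(K) + mu + 0.5 * sigma ** 2) / sigma
dm = (-math.log(K) + mu - 0.5 * sigma ** 2) / sigma
rhs = math.exp(mu) * norm_cdf(dp) - K * norm_cdf(dm)
assert abs(total - rhs) < 1e-6
```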

10.11. Proof. On each payment date T_j, the payoff of this swap contract is δ(K − L(T_{j−1}, T_{j−1})). Its no-arbitrage price at time 0 is δ(KB(0, T_j) − B(0, T_j)L(0, T_{j−1})) by Theorem 10.4. So the value of the swap is

Σ_{j=1}^{n+1} δ[KB(0, T_j) − B(0, T_j)L(0, T_{j−1})] = δK Σ_{j=1}^{n+1} B(0, T_j) − δ Σ_{j=1}^{n+1} B(0, T_j)L(0, T_{j−1}).

10.12. Proof. Since L(T, T) = (1 − B(T, T + δ))/(δB(T, T + δ)) ∈ F_T, we have

E[D(T + δ)L(T, T)] = E[E[D(T + δ)L(T, T)|F_T]]
= E[((1 − B(T, T + δ))/(δB(T, T + δ)))E[D(T + δ)|F_T]]
= E[((1 − B(T, T + δ))/(δB(T, T + δ)))D(T)B(T, T + δ)]
= E[(D(T) − D(T)B(T, T + δ))/δ]
= (B(0, T) − B(0, T + δ))/δ
= B(0, T + δ)L(0, T).

11. Introduction to Jump Processes

11.1. (i) Proof. First, M_t^2 = N_t^2 − 2λtN_t + λ^2t^2, so E[M_t^2] < ∞. f(x) = x^2 is a convex function, so by conditional Jensen's inequality, E[f(M_t)|F_s] ≥ f(E[M_t|F_s]) = f(M_s) for all s ≤ t. So M_t^2 is a submartingale.

(ii) Proof. We note M has independent and stationary increments. So for all s ≤ t, E[M_t^2 − M_s^2|F_s] = E[(M_t − M_s)^2|F_s] + E[(M_t − M_s)·2M_s|F_s] = E[M_{t−s}^2] + 2M_sE[M_{t−s}] = Var(N_{t−s}) + 0 = λ(t − s). That is, E[M_t^2 − λt|F_s] = M_s^2 − λs.
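The identity E[M_t^2] = λt (equivalently, that M_t^2 − λt is a martingale started at 0) can be checked by simulation; a sketch (the sample size and parameter values are arbitrary choices):

```python
import random

random.seed(0)
lam, t, n = 2.0, 3.0, 200_000

def poisson_sample(rate_t):
    """Sample N_t by counting unit-rate exponential interarrival times up to rate_t."""
    count, total = 0, random.expovariate(1.0)
    while total <= rate_t:
        count += 1
        total += random.expovariate(1.0)
    return count

# Compensated Poisson process M_t = N_t - lam*t
ms = [poisson_sample(lam * t) - lam * t for _ in range(n)]
mean = sum(ms) / n
second_moment = sum(m * m for m in ms) / n
assert abs(mean) < 0.05                       # E[M_t] = 0
assert abs(second_moment - lam * t) < 0.15    # E[M_t^2] = Var(N_t) = lam*t = 6
```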

11.2. Proof. P(N_{s+t} = k|N_s = k) = P(N_{s+t} − N_s = 0|N_s = k) = P(N_t = 0) = e^{−λt} = 1 − λt + O(t^2). Similarly, we have P(N_{s+t} = k + 1|N_s = k) = P(N_t = 1) = ((λt)^1/1!)e^{−λt} = λt(1 − λt + O(t^2)) = λt + O(t^2), and P(N_{s+t} ≥ k + 2|N_s = k) = P(N_t ≥ 2) = Σ_{j=2}^∞ ((λt)^j/j!)e^{−λt} = O(t^2).

11.3. Proof. For any t ≤ u, we have

E[S_u/S_t | F_t] = E[(σ + 1)^{N_u − N_t}e^{−λσ(u−t)}|F_t]
= e^{−λσ(u−t)}E[(σ + 1)^{N_{u−t}}]
= e^{−λσ(u−t)}E[e^{N_{u−t}log(σ+1)}]
= e^{−λσ(u−t)}e^{λ(u−t)(e^{log(σ+1)} − 1)}   (by (11.3.4))
= e^{−λσ(u−t)}e^{λσ(u−t)}
= 1.

So S_t = E[S_u|F_t] and S is a martingale.

11.4. Proof. The problem is ambiguous in that the relation between N_1 and N_2 is not clearly stated. According to page 524, paragraph 2, we would guess the condition should be that N_1 and N_2 are independent. Suppose N_1 and N_2 are independent. Define M_1(t) = N_1(t) − λ_1t and M_2(t) = N_2(t) − λ_2t. Then by independence E[M_1(t)M_2(t)] = E[M_1(t)]E[M_2(t)] = 0. Meanwhile, by Itô's product formula, M_1(t)M_2(t) = ∫_0^t M_1(s−)dM_2(s) + ∫_0^t M_2(s−)dM_1(s) + [M_1, M_2]_t. Both ∫_0^t M_1(s−)dM_2(s) and ∫_0^t M_2(s−)dM_1(s) are martingales. So taking expectations on both sides, we get 0 = 0 + E{[M_1, M_2]_t}.
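The martingale property in 11.3 rests on the identity E[(σ + 1)^{N_t}] = e^{λσt} from (11.3.4); this can be confirmed by summing the Poisson pmf directly. A sketch with assumed parameter values:

```python
import math

lam, sigma, t = 1.5, 0.4, 2.0

# E[(sigma+1)^{N_t}] = sum_k (sigma+1)^k e^{-lam t} (lam t)^k / k!, accumulated iteratively
term = math.exp(-lam * t)  # k = 0 term
mgf = 0.0
for k in range(200):
    mgf += term
    term *= (sigma + 1) * lam * t / (k + 1)

# (11.3.4) gives E[(sigma+1)^{N_t}] = exp(lam*sigma*t), so
# S_t = S_0 (sigma+1)^{N_t} e^{-lam*sigma*t} has constant expectation S_0.
assert abs(mgf * math.exp(-lam * sigma * t) - 1.0) < 1e-9
```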
