
Perpetual convertible bonds with credit risk
Christoph Kühn

Kees van Schaik

Abstract
A convertible bond is a security that the holder can convert into a specified
number of underlying shares. We enrich the standard model by introducing default
risk of the issuer: once default has occurred, payments stop immediately. In
the context of a reduced-form model with infinite time horizon driven by a Brownian
motion, analytical formulae for the no-arbitrage price of this American contingent
claim are obtained and characterized in terms of solutions of free boundary problems.
It turns out that default risk changes the structure of the optimal stopping
strategy essentially. In particular, the continuation region may become a disconnected
subset of the state space.
Keywords: convertible bonds, exchangeable bonds, default risk, optimal stopping problems,
free-boundary problems, smooth fit.
Mathematics Subject Classification (2000): 60G40, 60J50, 60G44, 91B28.
1 Introduction
The market for convertible bonds has been growing rapidly during the last years, and the
corresponding optimal stopping problems have attracted much attention in the literature
on mathematical finance. One has to distinguish between reduced-form models, where the
stock price process of the issuing firm is exogenously given by some stochastic process,
and structural models, where the starting point is the firm value, which splits into the
total equity value and the total debt value. Within a firm value model the pricing problem
is treated in Sîrbu, Pikovsky and Shreve [15] and Sîrbu and Shreve [16]. In contrast to
the earlier articles of Brennan and Schwartz [4] and Ingersoll [11, 12], the articles
[15, 16] include the case where an early conversion of the bond can be optimal, which
necessitates addressing a nontrivial free-boundary problem. In the context of a
reduced-form model, Bielecki, Crépey, Jeanblanc and Rutkowski [2] quite recently made
a comprehensive analysis of interesting features of convertible bonds. In particular
they model the interplay between equity risk and credit



Frankfurt MathFinance Institute, Johann Wolfgang Goethe-Universität, Robert-Mayer-Str. 10,
D-60054 Frankfurt a.M., Germany, e-mail: {ckuehn, schaik}@math.uni-frankfurt.de
Acknowledgements. We would like to thank Andreas Kyprianou for valuable discussions and comments.
risk, cf. also Remark 1.2 (iii). This is done for the nonperpetual case, so the pricing
problem finally has to be solved by numerical methods.
In this article we work with reduced-form models, in which such a contract without a
recall option for the issuer can be expressed as a standard American contingent claim
(see also Davis and Lischka [5] for a detailed introduction and a precise description of the
contract). The special feature of the current article is that we enrich the standard
Black--Scholes model by introducing default risk of the issuer. Once default has occurred,
payments stop immediately. The main purpose is to obtain analytical formulae for the
no-arbitrage price of a perpetual convertible bond under different default intensities
through characterizations in terms of free boundary problems. It turns out that default
risk changes the structural behavior of the solution essentially. Roughly speaking, in
models without default, bonds are converted only when the stock price is high, cf. [4],
[9], [11], [12], [15], and [16]. The rationale behind this is that for low stock prices
the holder prefers collecting the prespecified coupon payments, whereas for higher stock
prices the dividends paid out exclusively to stockholders become more attractive, which
may cause the bondholder to convert. We model the default intensity of the issuer as a
nonincreasing function of the current stock price. In this setting a low stock price may
also cause the holder to convert the bond (even if the yield is low) in order to get rid
of the high risk that the issuer defaults, which would make the contract worthless.
The paper is organized as follows. In Subsection 1.1 we introduce the stochastic model.
Stopping times depending on the default state of the issuer are reduced to stopping times
not using this information. We do this in a mathematical framework differing from
the standard one in credit risk modeling, which is based on the progressive enlargement of
the filtration without the default event, cf. e.g. Chapter 5 in [3]. We think this provides
some interesting additional insights, but the resulting payoff process (1.4) is of course the
same. Subsection 1.3 provides some general properties of the value function of convertible
bonds with varying default intensities. In Section 2 we consider the simplifying case that
there are two different default intensities depending on the current stock price. In Section 3
we consider the case that the default intensity is a power function of the current stock
price (with negative exponent). In Section 4 the results of Sections 2 and 3 are illustrated
by some plots. Parts of the unavoidable technical proofs are deferred to the appendix.
1.1 The model
Consider the following Black--Scholes market. We have a filtered probability space
$(\Omega, \mathcal{F}, \mathbb{F} = (\mathcal{F}_t)_{t \in \mathbb{R}_{\geq 0} \cup \{+\infty\}}, P)$, where the filtration $\mathbb{F}$ satisfies the usual conditions and
$\mathcal{F} = \mathcal{F}_\infty = \sigma(\mathcal{F}_t, t \in \mathbb{R}_{\geq 0})$. The riskless asset $B$ is given by $B_t = e^{rt}$ for all $t \geq 0$, where
$r > 0$ is the interest rate. The process $S$ models the risky stock paying dividends at rate
$\delta S_t$, where $\delta \in (0, r)$. $S$ is given by the formula
$$S_t = \exp\left(\sigma W_t + (r - \delta - \sigma^2/2)t\right), \quad t \geq 0,$$
where $\sigma > 0$ is the volatility and $W$ a standard Brownian motion under the unique
equivalent martingale measure $P \sim \widetilde{P}$. This means that the discounted cum-dividend
price process $\left(e^{-rt} S_t + \int_0^t e^{-ru} \delta S_u \, du\right)_{t \geq 0}$ is a $P$-martingale. For
each $s > 0$, let the measure $P_s$ be the translation of $P$ such that $P_s(S_0 = s) = 1$. $\mathbb{F}$ is the
natural filtration generated by $W$.
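The martingale property of the discounted cum-dividend price can be illustrated by simulation. The following sketch draws exact GBM increments under the pricing measure and checks by Monte Carlo that the mean of $e^{-rT}S_T + \int_0^T \delta e^{-ru}S_u\,du$ is close to $S_0$; all numeric values ($r$, $\delta$, $\sigma$, grid and path counts) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper)
r, delta, sigma = 0.05, 0.02, 0.3
s0, T, n_steps, n_paths = 1.0, 1.0, 200, 10000
dt = T / n_steps

rng = np.random.default_rng(0)
# Exact GBM increments: S_t = s0 * exp(sigma*W_t + (r - delta - sigma^2/2) t)
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)
t = dt * np.arange(1, n_steps + 1)
S = s0 * np.exp(sigma * W + (r - delta - 0.5 * sigma**2) * t)

# Discounted terminal price plus accumulated discounted dividends
disc_S_T = np.exp(-r * T) * S[:, -1]
div_integral = (delta * np.exp(-r * t) * S).sum(axis=1) * dt  # Riemann sum
mc_mean = (disc_S_T + div_integral).mean()
print(mc_mean)  # should be close to s0
```

The cum-dividend process is a martingale under $P$, so the Monte Carlo mean should agree with $S_0$ up to sampling and discretization error.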
In this market we consider a perpetual convertible bond, that is, an American contingent
claim with infinite horizon which gives the holder the right to convert the contract
at a (stopping) time of his choosing into a predetermined number $\gamma \in \mathbb{R}_{>0}$ of stocks, while
receiving coupon payments at rate $c > 0$ up to this (possibly never occurring) time. If
default occurs before the conversion time of the holder, the contract is terminated and the
holder is left with only the coupon payments he has collected up to default. For simplicity
(and as it would not be an interesting feature in combination with default risk) we do not
allow for recalling, i.e. the issuer may not terminate the contract.
To include default in the mathematical model we extend the probability space above
to $\mathcal{F} \otimes \mathcal{B}(\mathbb{R}_{>0})$, containing a random variable $e \in \mathbb{R}_{>0}$ which is, both under $P$ and under $\widetilde{P}$,
independent of $S$ and exponentially distributed with parameter 1. We allow the default
intensity of the issuer to depend on the current value of the stock, namely it is given by
the process $(\chi(S_t))_{t \geq 0}$ for some suitable non-negative Borel-measurable function $\chi$. That
is to say, defining the process $\varphi$ by
$$\varphi_t = \int_0^t \chi(S_u) \, du, \quad t \geq 0, \qquad (1.1)$$
the time of default is defined as
$$\varphi^{-1}(\omega, e) := \inf\{t \geq 0 \,|\, \varphi_t(\omega) \geq e\},$$
which is the generalized left-continuous inverse of $\varphi$ (with the usual convention that
$\inf \emptyset = \infty$). Note that this corresponds to the first jump time of a Cox process with
intensity process $(\chi(S_t))_{t \geq 0}$. Throughout this article we will only consider non-negative
intensity functions $\chi : \mathbb{R}_{>0} \to \mathbb{R}_{\geq 0}$ for which (1.1) defines a finitely valued non-decreasing
continuous process.
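On a discretized path, the generalized left-continuous inverse can be implemented directly with a cumulative sum and a sorted search. A minimal sketch with the piecewise constant intensity $\chi(s) = p\,1_{\{s \leq \bar{s}\}}$ used later in Section 2 (all numeric values are illustrative):

```python
import numpy as np

# Illustrative parameters (not from the paper)
r, delta, sigma = 0.05, 0.02, 0.3
p, s_bar = 0.5, 1.0          # chi(s) = p * 1_{s <= s_bar}
s0, T, n_steps = 1.0, 10.0, 5000
dt = T / n_steps

rng = np.random.default_rng(1)
dW = rng.standard_normal(n_steps) * np.sqrt(dt)
t = dt * np.arange(1, n_steps + 1)
S = s0 * np.exp(sigma * np.cumsum(dW) + (r - delta - 0.5 * sigma**2) * t)

# phi_t = int_0^t chi(S_u) du, a nondecreasing continuous process, cf. (1.1)
chi = p * (S <= s_bar)
phi = np.cumsum(chi) * dt

# Default time phi^{-1}(e) = inf{t >= 0 : phi_t >= e} with e ~ Exp(1)
e = rng.exponential(1.0)
idx = np.searchsorted(phi, e)            # first grid index with phi >= e
tau_default = t[idx] if idx < n_steps else np.inf

# Structural sanity checks: phi is nondecreasing, and default can only
# occur after the path has spent time at or below s_bar
assert np.all(np.diff(phi) >= 0)
if np.isfinite(tau_default):
    assert (S[:idx + 1] <= s_bar).any()
```

If $e$ exceeds the total accumulated intensity $\varphi_T$ on the simulated horizon, the convention $\inf \emptyset = \infty$ applies and no default is recorded.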
The payoff process $X$ corresponding to such a defaultable convertible bond is thus given by
$$X_t(\omega, e) := 1_{\{\varphi_t(\omega) < e\}} \left( e^{-rt} \gamma S_t(\omega) + \int_0^t c e^{-ru} \, du \right) + 1_{\{\varphi_t(\omega) \geq e\}} \int_0^{\varphi^{-1}(\omega, e)} c e^{-ru} \, du$$
for all $t \geq 0$ and $X_\infty(\omega, e) := \int_0^{\varphi^{-1}(\omega, e)} c e^{-ru} \, du$.
Definition 1.1. A stopping time w.r.t. the enlarged information is an
$\mathcal{F} \otimes \mathcal{B}(\mathbb{R}_{>0})$-$\mathcal{B}(\mathbb{R}_{\geq 0} \cup \{+\infty\})$-measurable mapping $\tau : \Omega \times \mathbb{R}_{>0} \to \mathbb{R}_{\geq 0} \cup \{+\infty\}$ with
$\{\omega \in \Omega \,|\, \tau(\omega, u) \leq t\} \in \mathcal{F}_t$ for all $t \in \mathbb{R}_{\geq 0}$, $u \in \mathbb{R}_{>0}$, such that for all $\omega \in \Omega$, $u \in \mathbb{R}_{>0}$ the implication
$$\tau(\omega, u) < \varphi^{-1}(\omega, u) \implies \forall u' > \varphi_{\tau(\omega, u)}(\omega) : \tau(\omega, u') = \tau(\omega, u) \qquad (1.2)$$
holds. The set of these stopping times is denoted by $\widetilde{\mathcal{T}}$.
Remarks 1.2. (i) The lhs of (1.2) means that there is pre-default stopping. As the
default event should be non-predictable, we assume that this stopping takes place
irrespective of when exactly default occurs after $\tau(\omega, u)$, i.e. for all $u'$ with
$\varphi^{-1}(\omega, u') > \tau(\omega, u)$ we should have $\tau(\omega, u') = \tau(\omega, u)$.
(ii) By augmenting the model with the default event, the market becomes incomplete. On
the enlarged probability space the set of martingale measures is no longer a singleton.
The measure $P$ introduced above is the so-called minimal martingale measure
of Föllmer and Schweizer [8]. This measure has the nice property that it respects
orthogonality in the sense that the "untradable" random variable $e$ remains independent
of $S$ and possesses the same distribution as under $\widetilde{P}$.
(iii) In our model, default of the issuer is not identified with default of the firm. This
includes so-called exchangeable bonds, where the issuer is not the firm itself but
typically one of its major shareholders. Thus the default intensity $\chi(S_t)$ does not
enter into the no-arbitrage drift condition. Note that this differs e.g. from the model
in [2]. An exchangeable bond may be converted into existing shares and not into new
shares. This destroys the advantages a firm value model possesses in comparison to
a reduced-form model.
Since $X$ stays constant after default and by the non-predictability of $e$ from
Definition 1.1, it is enough to consider $\mathbb{F}$-stopping times and average over $e$.
Proposition 1.3. Let $\mathcal{T}_{a,b}$ denote the set of $[a, b]$-valued $\mathbb{F}$-stopping times. We have for
all $s \in \mathbb{R}_{>0}$
$$\sup_{\tau \in \widetilde{\mathcal{T}}} E_s[X_\tau] = \sup_{\tau \in \mathcal{T}_{0,\infty}} E_s[L_\tau], \qquad (1.3)$$
where the $\mathbb{F}$-adapted continuous process $(L_t)_{t \in \mathbb{R}_{\geq 0} \cup \{+\infty\}}$ is given by
$$L_t := e^{-rt - \varphi_t} \gamma S_t + \int_0^t c e^{-ru - \varphi_u} \, du, \quad t \in \mathbb{R}_{\geq 0}, \qquad (1.4)$$
and $L_\infty := \int_0^\infty c e^{-ru - \varphi_u} \, du$.
Remark 1.4. The proof is based on representation (1.8), which says that any stopping
time w.r.t. the enlarged information can be expressed by $\mathbb{F}$-stopping times. This is a
result analogous to Dellacherie, Maisonneuve, and Meyer [6], page 186, for the standard
mathematical framework based on the progressive enlargement of the filtration without the
default event, cf. Chapter 5 in [3].
Proof. Step 1. Given a $\sigma \in \mathcal{T}_{0,\infty}$ we obviously have that $\tau(\omega, e) := \sigma(\omega)$, $\forall e \in \mathbb{R}_{>0}$, is an
element of $\widetilde{\mathcal{T}}$ and we can calculate
$$E_s\left[X_{\tau(\omega,e)}(\omega, e)\right] = E_s\left[1_{\{\varphi_{\sigma(\omega)}(\omega) < e\}} \left( e^{-r\sigma(\omega)} \gamma S_{\sigma(\omega)}(\omega) + \int_0^{\sigma(\omega)} c e^{-ru} \, du \right) + 1_{\{\varphi_{\sigma(\omega)}(\omega) \geq e\}} \int_0^{\varphi^{-1}(\omega,e)} c e^{-ru} \, du\right]$$
$$= E_s\left[e^{-\varphi_{\sigma(\omega)}(\omega)} \left( e^{-r\sigma(\omega)} \gamma S_{\sigma(\omega)}(\omega) + \int_0^{\sigma(\omega)} c e^{-ru} \, du \right) + \int_0^{\varphi_{\sigma(\omega)}(\omega)} e^{-\xi} \int_0^{\varphi^{-1}(\omega,\xi)} c e^{-ru} \, du \, d\xi\right], \qquad (1.5)$$
where the second equality uses that $e$ is independent of $\mathcal{F}$ and exponentially distributed
with parameter 1. By interchanging the order of integration and using that $u < \varphi^{-1}(\omega, \xi) \Leftrightarrow \varphi_u(\omega) < \xi$ we obtain for any $\omega \in \Omega$
$$\int_0^{\varphi_{\sigma(\omega)}(\omega)} e^{-\xi} \int_0^{\varphi^{-1}(\omega,\xi)} c e^{-ru} \, du \, d\xi = \int_0^{\sigma(\omega)} c e^{-ru} \int_{\varphi_u(\omega)}^{\varphi_{\sigma(\omega)}(\omega)} e^{-\xi} \, d\xi \, du$$
$$= \int_0^{\sigma(\omega)} c e^{-ru - \varphi_u(\omega)} \, du - e^{-\varphi_{\sigma(\omega)}(\omega)} \int_0^{\sigma(\omega)} c e^{-ru} \, du.$$
Thus the rhs of (1.5) coincides with $E_s\left[L_{\sigma(\omega)}(\omega)\right]$, which implies that $\sup_{\tau \in \widetilde{\mathcal{T}}} E_s[X_\tau] \geq \sup_{\tau \in \mathcal{T}_{0,\infty}} E_s[L_\tau]$.
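The interchange of integration above can be sanity-checked numerically in the deterministic special case $\varphi_u = a u$ (constant intensity $a$, an illustrative choice), where $\varphi^{-1}(\xi) = \xi/a$ and both sides of the identity are computable:

```python
import numpy as np

# Illustrative parameters (not from the paper)
r, c, a, t = 0.05, 0.04, 0.5, 2.0   # constant intensity: phi_u = a*u

# lhs: int_0^{phi_t} e^{-xi} ( int_0^{phi^{-1}(xi)} c e^{-ru} du ) dxi,
# with the inner integral evaluated in closed form
xi = np.linspace(0.0, a * t, 20001)
inner = c * (1.0 - np.exp(-r * xi / a)) / r     # int_0^{xi/a} c e^{-ru} du
y = np.exp(-xi) * inner
h = xi[1] - xi[0]
lhs = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoidal rule

# rhs: int_0^t c e^{-ru - phi_u} du - e^{-phi_t} int_0^t c e^{-ru} du,
# both integrals in closed form for phi_u = a*u
rhs = (c * (1.0 - np.exp(-(r + a) * t)) / (r + a)
       - np.exp(-a * t) * c * (1.0 - np.exp(-r * t)) / r)

print(abs(lhs - rhs))  # should be tiny (trapezoidal error only)
```

The two sides agree up to quadrature error, consistent with the Fubini argument in Step 1.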
Step 2. To establish the opposite direction, take a $\tau \in \widetilde{\mathcal{T}}$ and let
$$\sigma(\omega) := \inf\{t \in \mathbb{Q}_{>0} \,|\, \tau(\omega, u) \leq t \text{ for some } u \in \mathbb{Q}_{>0} \text{ with } \varphi_t(\omega) < u\}, \qquad (1.6)$$
(recall that $\inf \emptyset = \infty$) and
$$\sigma(\omega, e) := \tau(\omega, e) \vee \varphi^{-1}(\omega, e). \qquad (1.7)$$
Let us show that
$$\sigma \in \mathcal{T}_{0,\infty} \quad \text{and} \quad \tau(\omega, e) = \begin{cases} \sigma(\omega) & \text{for } \varphi_{\sigma(\omega)}(\omega) < e \\ \sigma(\omega, e) & \text{for } \varphi_{\sigma(\omega)}(\omega) \geq e. \end{cases} \qquad (1.8)$$
First, note that for every $t > 0$ we have
$$\{\omega \in \Omega \,|\, \sigma(\omega) < t\} = \bigcup_{s \in \mathbb{Q} \cap (0,t)} \underbrace{\bigcup_{u \in \mathbb{Q}_{>0}} \{\omega \in \Omega \,|\, \tau(\omega, u) \leq s \text{ and } \varphi_s(\omega) < u\}}_{\in \mathcal{F}_s \subset \mathcal{F}_t} \in \mathcal{F}_t.$$
Thus, by the usual conditions of $\mathbb{F}$, we have indeed $\sigma \in \mathcal{T}_{0,\infty}$. That for any $e \in \mathbb{R}_{>0}$,
$\sigma(\cdot, e) \in \mathcal{T}_{0,\infty}$ with $\sigma(\cdot, e) \geq \varphi^{-1}(\cdot, e)$ is obvious.
Let $(\omega, e) \in \Omega \times \mathbb{R}_{>0}$ with $e > \varphi_{\sigma(\omega)}(\omega)$. Let us show that
$$\tau(\omega, e) = \sigma(\omega). \qquad (1.9)$$
First suppose that $\sigma(\omega) = \infty$, so that $e > \varphi_\infty(\omega)$ and $\varphi^{-1}(\omega, e) = \infty$. From (1.6)
we see that this means $\tau(\omega, u) = \infty$, $\forall u \in \mathbb{Q}_{>0} \cap (\varphi_\infty(\omega), \infty)$. If it were the case that
$\tau(\omega, e) < \infty$, then by (1.2) we would have $\tau(\omega, u) = \tau(\omega, e)$, $\forall u \in (\varphi_{\tau(\omega,e)}(\omega), \infty)$, but
combining this with the previous sentence we would arrive at $\tau(\omega, e) = \infty$. Thus (1.9)
holds for $\sigma(\omega) = \infty$.
Now suppose that $\sigma(\omega) < \infty$. By definition of the infimum and the continuity of the
paths of $\varphi$ there is a sequence $(t_n, u_n)_{n \in \mathbb{N}} \subset \mathbb{Q}^2_{>0}$ with $t_n \downarrow \sigma(\omega)$, $\sigma(\omega) \leq t_n < \varphi^{-1}(\omega, e)$,
$\varphi_{t_n}(\omega) < u_n$ and $\tau(\omega, u_n) \leq t_n$ for all $n \in \mathbb{N}$. For any $n \in \mathbb{N}$ it follows from $\varphi_{t_n}(\omega) < u_n$
and $\tau(\omega, u_n) \leq t_n$ that $\tau(\omega, u_n) < \varphi^{-1}(\omega, u_n)$, and from $\tau(\omega, u_n) \leq t_n$ and $t_n < \varphi^{-1}(\omega, e)$
that $e > \varphi_{\tau(\omega, u_n)}(\omega)$. Combining these with (1.2) gives
$$\tau(\omega, u_n) = \tau(\omega, e), \quad \forall n \in \mathbb{N}, \qquad (1.10)$$
and since $\tau(\omega, u_n) \leq t_n \downarrow \sigma(\omega)$ it follows that
$$\tau(\omega, e) \leq \sigma(\omega).$$
To establish the reversed inequality and thus (1.9) it is, on account of (1.10), enough to
show $\sigma(\omega) \leq \tau(\omega, u_n)$, $\forall n \in \mathbb{N}$. If this were not true we would have an $s \in (\tau(\omega, u_n), \sigma(\omega)) \cap \mathbb{Q}$ for some $n \in \mathbb{N}$. Using this with $\sigma(\omega) \leq t_n$ and $\varphi_{t_n}(\omega) < u_n$ it would follow that
$\tau(\omega, u_n) \leq s$ and $\varphi_s(\omega) \leq \varphi_{\sigma(\omega)}(\omega) \leq \varphi_{t_n}(\omega) < u_n$, which would by (1.6) result in
$\sigma(\omega) \leq s$ and thus a contradiction.
Finally, let $(\omega, e) \in \Omega \times \mathbb{R}_{>0}$ with $e \leq \varphi_{\sigma(\omega)}(\omega)$. We need to show that $\tau(\omega, e) \geq \varphi^{-1}(\omega, e)$. Assume that $\tau(\omega, e) < \varphi^{-1}(\omega, e)$, so that we could find an $s \in \mathbb{Q}$ with
$$\varphi_{\tau(\omega,e)}(\omega) < \varphi_s(\omega) < e \leq \varphi_{\sigma(\omega)}(\omega).$$
By the first and second inequality, together with (1.2), we would have that $s$ is in the set
on the rhs of (1.6) and thus $\sigma(\omega) \leq s$. But this contradicts the last two inequalities.
Thus we have established (1.8).
From (1.8) we see that if either $\varphi_{\tau(\omega,e)}(\omega) < e$ or $\varphi_{\sigma(\omega)}(\omega) < e$, then $\tau(\omega, e) = \sigma(\omega)$.
By this property it follows directly from the definition of $X$ that
$$X_{\tau(\omega,e)}(\omega, e) = X_{\sigma(\omega)}(\omega, e).$$
The same calculation as in Step 1 shows that $E_s\left[X_{\tau(\omega,e)}(\omega, e)\right] = E_s\left[L_{\sigma(\omega)}(\omega)\right]$ and the
statement of the proposition follows.
We conclude with some notation.
Definition 1.5. (i) By $v : \mathbb{R}_{>0} \to \mathbb{R}_{>0}$ we denote the value given by the rhs of (1.3) as
a function of the starting price of the stock $S$.
(ii) The infinitesimal generator of $S$ we denote by $\mathcal{L}$, that is
$$\mathcal{L} := \frac{\sigma^2}{2} s^2 \frac{\partial^2}{\partial s^2} + (r - \delta) s \frac{\partial}{\partial s}.$$
(iii) For any interval $I \subset \mathbb{R}_{>0}$ we denote by $\tau(I)$ the first exit time of $I$, that is
$\tau(I) := \inf\{t \geq 0 \,|\, S_t \notin I\}$.
1.2 Constant default intensity
If the intensity function $\chi$ in (1.1) is constant, problem (1.3) can be reduced to the
case without default and a higher discount rate, as the following proposition shows.
Its proof follows directly from Proposition 1.3 and [9], Theorem 4.1(i), and is therefore
omitted.
Proposition 1.6. Let $\chi(s) = q$ for some $q \in \mathbb{R}_{\geq 0}$. We denote the associated value function
by $\hat{v}_q$, that is
$$\hat{v}_q(s) := \sup_{\tau \in \mathcal{T}_{0,\infty}} E_s\left[ e^{-(r+q)\tau} \gamma S_\tau + \int_0^\tau c e^{-(r+q)u} \, du \right]. \qquad (1.11)$$
Let $\beta_1^q < 0 < 1 < \beta_2^q$ be the solutions of $\sigma^2 \beta(\beta - 1)/2 + (r - \delta)\beta - (r + q) = 0$, so that
$$\beta_1^q \beta_2^q = \frac{-2(r+q)}{\sigma^2} \quad \text{and} \quad (\beta_2^q - 1)(1 - \beta_1^q) = \frac{2(\delta + q)}{\sigma^2}. \qquad (1.12)$$
We have that the optimal stopping time in (1.11) is given by $\tau(0, \hat{s}_q)$, where
$$\hat{s}_q = \frac{\beta_2^q \, c}{\gamma (r+q)(\beta_2^q - 1)},$$
and furthermore
$$\hat{v}_q(s) = \begin{cases} \dfrac{\gamma \hat{s}_q^{\,1 - \beta_2^q}}{\beta_2^q} \, s^{\beta_2^q} + c/(r+q) & \text{on } (0, \hat{s}_q) \\ \gamma s & \text{on } [\hat{s}_q, \infty). \end{cases}$$
Note that $q \mapsto \hat{s}_q$ is continuous and strictly decreasing with limits $\hat{s}_0$ and $0$ for $q \downarrow 0$ and
$q \to \infty$ respectively, and that
$$\hat{s}_q > \frac{c}{\gamma(\delta + q)}. \qquad (1.13)$$
Finally, we have that the pair $(\hat{v}_q|_{(0,\hat{s}_q)}, \hat{s}_q)$ is the unique solution to the free boundary
problem in unknowns $(f, b) \in C^2(0, b) \times \mathbb{R}_{>0}$
$$\begin{cases} (\mathcal{L} - (r+q))f(s) + c = 0 & \text{on } (0, b) \\ f(b-) = \gamma b, \quad f'(b-) = \gamma \\ f(0+) \in \mathbb{R}_{>0}. \end{cases} \qquad (1.14)$$
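The quantities of Proposition 1.6 are fully explicit and easy to evaluate. The sketch below (market parameters are illustrative) computes $\beta_1^q$, $\beta_2^q$, $\hat{s}_q$ and $\hat{v}_q$ and checks the identities (1.12), the smooth-fit conditions at $\hat{s}_q$, the monotonicity of $q \mapsto \hat{s}_q$ and the bound (1.13):

```python
import math

# Illustrative parameters (not from the paper)
r, delta, sigma, c, gamma = 0.05, 0.02, 0.3, 0.04, 1.0

def betas(q):
    # Roots of sigma^2 beta(beta-1)/2 + (r - delta) beta - (r + q) = 0
    A, B, C0 = sigma**2 / 2, (r - delta) - sigma**2 / 2, -(r + q)
    disc = math.sqrt(B * B - 4 * A * C0)
    return (-B - disc) / (2 * A), (-B + disc) / (2 * A)  # beta1 < 0 < 1 < beta2

def s_hat(q):
    _, b2 = betas(q)
    return b2 * c / (gamma * (r + q) * (b2 - 1))

def v_hat(q, s):
    _, b2 = betas(q)
    sq = s_hat(q)
    if s >= sq:
        return gamma * s
    return gamma * sq**(1 - b2) / b2 * s**b2 + c / (r + q)

q = 0.1
b1, b2 = betas(q)
# Identities (1.12)
assert abs(b1 * b2 + 2 * (r + q) / sigma**2) < 1e-10
assert abs((b2 - 1) * (1 - b1) - 2 * (delta + q) / sigma**2) < 1e-10
# Smooth fit at s_hat(q): value and first derivative both paste onto gamma*s
sq, eps = s_hat(q), 1e-6
assert abs(v_hat(q, sq * (1 - eps)) - gamma * sq) < 1e-4
slope = (v_hat(q, sq - eps) - v_hat(q, sq - 2 * eps)) / eps
assert abs(slope - gamma) < 1e-3
# s_hat is strictly decreasing in q, with lower bound (1.13)
assert s_hat(0.0) > s_hat(0.01) > s_hat(0.1)
assert s_hat(q) > c / (gamma * (delta + q))
```

The one-sided finite difference confirms the smooth-pasting condition $\hat{v}_q'(\hat{s}_q-) = \gamma$ built into the formula.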
Remark 1.7. A common approach to finding analytical expressions for the value function
and the optimal strategy of optimal stopping problems is to guess candidate expressions by
constructing and solving an appropriate free boundary problem, which has a function and
boundary point(s) as solution, and to verify the correctness of the guess by showing that
the corresponding candidate value process
(i) dominates the payoff process,
(ii) is a supermartingale,
(iii) is a martingale when stopped at the first time it hits the payoff process
(cf. Lemma A.1). Uniqueness of solutions of the free boundary problem follows implicitly
from this.
In the upcoming sections we will work with free boundary problems that allow only for
a semi-explicit characterization of their solution sets. The resulting expressions are explicit
enough to be useful, but showing by direct means that a solution indeed exists does not
always seem easy (as for the free boundary problems involving two boundary points used in
Theorem 2.2 (ii) and Theorem 3.3 (ii)). We resolve this issue by proving in
Subsection 1.3 that $v$ satisfies a set of properties rich enough to conclude that $v$
and the associated optimal exercise level(s) indeed form a solution to the free boundary
problem under consideration, thus implicitly yielding existence of solutions.
1.3 Some results for general intensity functions
The following theorem states some properties of $v$, mainly for use in the examples we
consider in the upcoming sections. Note that the sign of the function $\lambda$ defined below
corresponds to the sign of the drift rate in the Itô decomposition of $L$ and will be used
throughout for determining the shape of stopping and continuation regions, using (ii) and
(iv) of Theorem 1.9.
Remark 1.8. As $\lim_{t \to \infty} L_t$ exists a.s. and $\tau \in \mathcal{T}_{0,\infty}$ may take the value $+\infty$, the standard
theory of optimal stopping on a compact time interval can directly be translated to our
setting. In particular, as $L$ has continuous paths and is of class (D), we already know that
the $[0, \infty]$-valued stopping time $\inf\{t \geq 0 \,|\, U_t = L_t\}$ is optimal, where $U$ denotes the Snell
envelope of $L$, cf. the proof of Theorem 1.9 (i).
Theorem 1.9. Let the function $\lambda : \mathbb{R}_{>0} \to \mathbb{R}$ be given by $\lambda(s) = c - \gamma(\delta + \chi(s))s$. We
have the following.
(i) $v$ is a continuous function with $\gamma s \leq v(s) \leq \hat{v}_0(s)$ on $\mathbb{R}_{>0}$. The optimal stopping
time is attained and given by $\tau^* := \tau(C)$, where $C = \{s \in \mathbb{R}_{>0} \,|\, v(s) > \gamma s\}$ is
the continuation region. Let $S = \mathbb{R}_{>0} \setminus C$ be the stopping region. We have $C \subset (0, \hat{s}_0)$. Furthermore, suppose that $(\chi_n)_{n \in \mathbb{N}}$ is a sequence of intensity functions, with
associated value functions denoted by $v_n$, converging to $\chi$ in the max-norm. Then
$v_n$ converges to $v$ in the max-norm.
(ii) Let $I \subset \mathbb{R}_{>0}$ be some interval. If $\lambda \leq 0$ on $I$ and $\partial I \subset S$, then $\bar{I} \subset S$. If $\lambda > 0$ on
$I$, then $I \subset C$.
Now suppose that $\chi$ is càdlàg or càglàd and that its set of discontinuities, denoted by
$D_\chi$, is finite. Suppose furthermore that $\partial C$ is finite, i.e. that $C$ is a finite union of open
intervals (from (ii) we see that a sufficient condition for this is that $\lambda$ changes its sign at
most finitely often). Under these assumptions the following holds.
(iii) Set $N_v := (C \cap D_\chi) \cup \partial C$. We have that $v \in C^2(\mathbb{R}_{>0} \setminus N_v) \cap C^1(\mathbb{R}_{>0})$ and $v$ satisfies
$$(\mathcal{L} - (r + \chi(s)))v(s) + c \begin{cases} = 0 & \text{on } C \setminus D_\chi \\ \leq 0 & \text{on } \mathbb{R}_{>0} \setminus N_v. \end{cases}$$
(iv) Let $s_0 \in \mathbb{R}_{>0}$. Suppose that there exists $\epsilon > 0$ such that $\lambda \in C^1(s_0, s_0 + \epsilon)$ and
that either $\lambda(s_0+) > 0$ or both $\lambda(s_0+) = 0$ and $\lambda'(s_0+) > 0$. Then $s_0 \in C$. The
same holds if $\lambda \in C^1(s_0 - \epsilon, s_0)$ and either $\lambda(s_0-) > 0$ or both $\lambda(s_0-) = 0$ and
$\lambda'(s_0-) < 0$.
Proof. Ad (i). The lower and upper bound for $v$ are obvious. Since $(e^{-(r-\delta)t} S_t)_{t \geq 0}$ is
a martingale and $\delta > 0$, it follows that $L$ is of class (D), i.e. that the family $\{L_\tau \,|\, \tau \in \mathcal{T}_{0,\infty}\}$
is uniformly integrable. It follows that the Snell envelope $U$ of $L$ is well defined and of
class (D), cf. e.g. [13], Theorem 3.2. For any $t \geq 0$ we have
$$U_t = \operatorname*{ess\,sup}_{\tau \in \mathcal{T}_{t,\infty}} E_s[L_\tau \,|\, \mathcal{F}_t]$$
$$= \int_0^t c e^{-ru - \varphi_u} \, du + e^{-rt - \varphi_t} \operatorname*{ess\,sup}_{\tau \in \mathcal{T}_{t,\infty}} E_s\left[ e^{-r(\tau - t) - (\varphi_\tau - \varphi_t)} \gamma S_\tau + \int_t^\tau c e^{-r(u-t) - (\varphi_u - \varphi_t)} \, du \,\Big|\, \mathcal{F}_t \right]$$
$$= \int_0^t c e^{-ru - \varphi_u} \, du + e^{-rt - \varphi_t} v(S_t). \qquad (1.15)$$
The above calculation is at least intuitively clear by the Markov property; for a rigorous
justification we refer to Theorem 3.4 in [7]. Although the authors work with a payoff of the
form $g(X_t)$ for a suitable function $g$ and a Markov process $X$, it also covers this case if we
regard $L$ as a function of the Markov process $(t, S_t, \varphi_t, \int_0^t e^{-ru - \varphi_u} \, du)_{t \geq 0}$. Namely,
the resulting four-dimensional value function has the form of the rhs of equation (1.15).

For sets $A \subset \mathbb{R}_{>0}$, $\partial A$ denotes the boundary of $A$ in $\mathbb{R}_{>0}$, i.e. if $A = (a, b)$ with $a \in \mathbb{R}_{\geq 0}$ and
$b \in \mathbb{R}_{>0} \cup \{+\infty\}$ then $\partial A = \{a, b\} \cap \mathbb{R}_{>0}$. Furthermore the closure of $A$ in $\mathbb{R}_{>0}$ is denoted by
$\bar{A}$, i.e. $\bar{A} = A \cup \partial A$.
Continuity of $v$ follows from Proposition 4.7 in [7]. From the general theory of optimal
stopping, see e.g. Theorem 5.5 in [13], together with (1.15) it follows that the optimal
stopping time in $v$ is attained and given by $\inf\{t \geq 0 \,|\, U_t = L_t\} = \tau(C)$.
Let $\chi_n$ tend to $\chi$ in the max-norm as $n \to \infty$ and denote by $\epsilon_n$ the max-norm of $\chi - \chi_n$.
Since $\gamma s \leq v(s) \leq \hat{v}_0(s)$ we have $v_n(s) = v(s) = \gamma s$ on $[\hat{s}_0, \infty)$, and we may restrict the set
of stopping times over which is maximized in $v$ and $v_n$ to those that are bounded above
by $\tau(0, \hat{s}_0)$, on account of $\tau(C) \leq \tau(0, \hat{s}_0)$. Using this we find by some easy calculations
that $|v(s) - v_n(s)| \leq \gamma \hat{s}_0 C(\epsilon_n) + \int_0^\infty c e^{-ru}(1 - e^{-\epsilon_n u}) \, du$ for any $s \in (0, \hat{s}_0)$, where $C(\epsilon_n)$
is the maximum value the function $x \mapsto e^{-rx}(1 - e^{-\epsilon_n x})$ attains on $(0, \hat{s}_0]$, yielding the
result.
Ad (ii). An application of Itô's formula yields
$$L_t = \gamma s + \int_0^t e^{-ru - \varphi_u} \gamma \sigma S_u \, dW_u + \int_0^t e^{-ru - \varphi_u} \lambda(S_u) \, du. \qquad (1.16)$$
Let $s_0 \in I$. First let $\lambda \leq 0$ on $I$ and $\partial I \subset S$. By (1.15), using that $v(s) = \gamma s$ on $\partial I$, we
find that we may write
$$v(s_0) = \sup_{\tau \in \mathcal{T}_{0,\infty}} E_{s_0}\left[ L^{\tau(I)}_\tau \right]. \qquad (1.17)$$
Since $\lambda \leq 0$ on $I$, (1.16) shows that $L^{\tau(I)}$ is a local supermartingale. Since $L$ is of class
(D), it follows by Doob's optional sampling that the supremum in (1.17) is attained by
$\tau = 0$ and thus indeed $v(s_0) = \gamma s_0$.
Next let $\lambda > 0$ on $I$. Note that this implies that $I$ is bounded from above, since $\lambda \leq 0$ on
$[c/(\delta \gamma), \infty)$. It follows that the local martingale part of $L^{\tau(I)}$ in (1.16) is a true martingale.
This allows us to take any $t > 0$ and use again Doob's optional sampling together with $\lambda > 0$
on $I$ and $P_{s_0}(\tau(I) > 0) = 1$ to deduce that $v(s_0) \geq E_{s_0}[L_{t \wedge \tau(I)}] > \gamma s_0$.
Ad (iii). Step 1. Note that $C \setminus D_\chi$ is open in $\mathbb{R}_{>0}$ by continuity of $v$ and since $D_\chi$ is
finite. Let us show that on this set, $v$ is a $C^2$-function satisfying $(\mathcal{L} - (r + \chi(s)))v(s) + c = 0$.
For this, take some environment $I = (a, b) \subset C \setminus D_\chi$ with $a > 0$, $b < \infty$. By the
assumptions on $\chi$ and since $I \cap D_\chi = \emptyset$ we have $\chi \in C^0(I)$. First consider the homogeneous
boundary value problem
$$\begin{cases} (\mathcal{L} - (r + \chi(s)))f(s) = 0 & \text{on } I \\ f = 0 & \text{on } \partial I \end{cases} \qquad (1.18)$$
and let us show that it only has the trivial solution. Let $f \in C^2(I)$ be any solution and
consider the continuous process $Z$ given by $Z_t = \exp(-r(t \wedge \tau(I)) - \varphi_{t \wedge \tau(I)}) f(S_{t \wedge \tau(I)})$ for
all $t \geq 0$. Itô's formula shows that $Z$ is a local martingale. Clearly, $Z$ is also a bounded
process, so that Doob's optional sampling shows that indeed $f(s) = E_s[Z_0] = E_s[Z_{\tau(I)}]$ on
$I$, the rhs vanishing on account of $f = 0$ on $\partial I$.
By the Fredholm alternative, the fact that (1.18) is only solved by the trivial solution
implies that the boundary value problem
$$\begin{cases} (\mathcal{L} - (r + \chi(s)))f(s) + c = 0 & \text{on } I \\ f = v & \text{on } \partial I \end{cases}$$
has a solution $f \in C^2(I)$. An application of Lemma A.1 (i) yields for all $s \in I$, using that
$P_s(\tau(I) < \infty) = 1$,
$$f(s) = E_s\left[ e^{-r\tau(I) - \varphi_{\tau(I)}} v(S_{\tau(I)}) + \int_0^{\tau(I)} c e^{-ru - \varphi_u} \, du \right].$$
Since $I \subset C$, (1.15) shows that for any $s \in I$, $v(s)$ can be written as the rhs of the
above formula. Thus $v = f$ on $I$, yielding the assertion.
Step 2. Let us show that $v \in C^2(\mathbb{R}_{>0} \setminus N_v) \cap C^1(\mathbb{R}_{>0})$. Recall that $N_v = (C \cap D_\chi) \cup \partial C$
is by assumption finite; let $a \in N_v$ and $\epsilon > 0$ so that with $I := (a - \epsilon, a + \epsilon)$ we have
$I \cap N_v = \{a\}$. Since $v \in C^2(C \setminus D_\chi)$ (by Step 1) and $v(s) = \gamma s$ on $S = \mathbb{R}_{>0} \setminus C$, the
assertion follows if we show that $v \in C^1(I)$. In particular, since we already have
$$v \in C^2(a - \epsilon, a] \cup C^2[a, a + \epsilon), \qquad (1.19)$$
we know that $v'(a-)$ and $v'(a+)$ both exist and it remains to show that they must coincide.
To see (1.19), by construction of $I$ both $(a - \epsilon, a)$ and $(a, a + \epsilon)$ are subsets of either $S$
or $C \setminus D_\chi$. Since $v(s) = \gamma s$ on $S$ there is nothing to show for that case, while on subsets
of $C \setminus D_\chi$ we have from Step 1 that $v$ satisfies $(\mathcal{L} - (r + \chi(s)))v(s) + c = 0$, so that
by a standard result from the theory of ODEs (cf. e.g. [10], Ch. II, Theorem 1.1 and
Theorem 3.1) it follows from the fact that $\chi(a\pm)$ exists and is finite (by our assumption
on $\chi$) that also the corresponding one-sided limits $v'(a\pm)$ and $v''(a\pm)$ exist and are finite.
So let us show that $v'(a-)$ and $v'(a+)$ are equal. Recall that the Snell envelope $U$
can be expressed as (1.15). On account of (1.19) we may apply the change-of-variables
formula (A.1) from the proof of Lemma A.1 (cf. the remarks preceding that formula) and
we obtain
we obtain
U

τ(I)
t
= U
0
+ M
t
+

t∧τ(I)
0
1
{S
u
=a}
e
−ru−ϕ
u
[(L −(r + χ(S
u
))) v(S
u
) + c] du
+

t∧τ(I)
0
1
{S
u
=a}

e
−ru−ϕ
u
(v

(a+) −v

(a−)) dL
a
u
, t ≥ 0,
where M is given by
M
t
=
σ
2

t∧τ(I)
0
e
−ru−ϕ
u
(v

(S
u
+) + v

(S

u
−)) S
u
dW
u
, t ≥ 0.
Note that M is a true martingale on account of the boundedness of I and of v

on I.
Let us start $S$ at $a$. First consider $a \in C \cap D_\chi$. By construction we have $I \subset C$, so that
$(\mathcal{L} - (r + \chi(s)))v(s) + c = 0$ on $I \setminus \{a\}$ by Step 1. This means that in the above decomposition
of $U^{\tau(I)}$ the drift part consists solely of the integral with respect to local time. Thus if
we had $v'(a-) \neq v'(a+)$, then $U^{\tau(I)}$ would be a strict super- or submartingale. But this
is impossible since $U^{\tau(I)}$ is a martingale, which follows directly from the well known fact
that $U^{\tau^*}$ is a martingale (see e.g. [13], Corollary 5.3) and $\tau(I) \leq \tau^*$ on account of $I \subset C$.
Next consider $a \in \partial C \subset S$. Since $v(a) = \gamma a$ while $v(s) \geq \gamma s$ on $I$ we have $v'(a-) \leq v'(a+)$. So if $v'(a-) = v'(a+)$ did not hold, then $v'(a+) - v'(a-) > 0$. But the above
decomposition of $U$ shows that this would mean that the process $Z$ given by $Z_t = \int_0^t 1_{\{S_u = a\}} \, dU^{\tau(I)}_u$ for all $t \geq 0$ is a strict submartingale. However, this is impossible since
it is well known that $U$ is a supermartingale, see e.g. [13], Lemma 3.3.
Step 3. Let us show that $(\mathcal{L} - (r + \chi(s)))v(s) + c \leq 0$ on $\mathbb{R}_{>0} \setminus N_v$ (which is an open set
since $N_v$ is finite). Taking the result from Step 1 into account, it is enough to show that
$(\mathcal{L} - (r + \chi(s)))v(s) + c \leq 0$ on the interior of $S$ (denoted by $\mathrm{inn}(S)$). This however is clear.
Namely, since $v(s) = \gamma s$ on $S$ we have for any $s \in \mathrm{inn}(S)$ that $(\mathcal{L} - (r + \chi(s)))v(s) + c = \lambda(s)$. Suppose that we had $\lambda(s) > 0$. By the assumptions on $\chi$ we would have $\lambda > 0$ on
either $(s - \epsilon, s)$ or $(s, s + \epsilon)$ for some $\epsilon > 0$, but this means by (ii) that $s \in \bar{C}$, which
contradicts $s \in \mathrm{inn}(S)$.
Ad (iv). Set $I := (s_0, s_0 + \epsilon)$. Let us assume that $s_0 \in S$ and derive a contradiction. We
have either Case 1: $\lambda(s_0+) > 0$, or Case 2: $\lambda(s_0+) = 0$ and $\lambda'(s_0+) > 0$. Since $\lambda \in C^1(I)$
(and thus also $\chi \in C^1(I)$) we may assume w.l.o.g. that $\lambda > 0$ on $I$, which means that
$I \subset C$ (cf. (ii)) and that $(\mathcal{L} - (r + \chi(s)))v(s) + c = 0$ holds on $I$ (cf. (iii)), with, since we
assumed $s_0 \in S$, $v(s_0+) = \gamma s_0$ and $v'(s_0+) = \gamma$.
For Case 1, taking the limit for $s \downarrow s_0$ in $(\mathcal{L} - (r + \chi(s)))v(s) + c = 0$ we find,
using $v(s_0+) = \gamma s_0$, $v'(s_0+) = \gamma$ and $\lambda(s_0+) > 0$, that $v''(s_0+) < 0$. But again using
$v(s_0+) = \gamma s_0$ and $v'(s_0+) = \gamma$ this would imply $v(s) < \gamma s$ on $(s_0, s_0 + \epsilon')$ for some $\epsilon' > 0$,
yielding the required contradiction.
For Case 2, taking the same limit as in Case 1 this time yields $v''(s_0+) = 0$ on account
of $\lambda(s_0+) = 0$. On $I$, differentiating the equation $(\mathcal{L} - (r + \chi(s)))v(s) + c = 0$ once (which
is possible since $\chi \in C^1(I)$) we find
$$\frac{\sigma^2}{2} s^2 v'''(s) = (\delta - r - \sigma^2) s v''(s) + (\chi(s) + \delta) v'(s) + \chi'(s) v(s).$$
Furthermore $\lambda'(s) = -(\delta + \chi(s))\gamma - \chi'(s)\gamma s$, so we may take the limit for $s \downarrow s_0$ in the
above equation and use $v(s_0+) = \gamma s_0$, $v'(s_0+) = \gamma$, $v''(s_0+) = 0$ and $\lambda'(s_0+) > 0$ to
derive that $v'''(s_0+) < 0$. But this would again imply $v(s) < \gamma s$ on $(s_0, s_0 + \epsilon')$ for some
$\epsilon' > 0$ and yield a contradiction.
Remark 1.10. Theorem 1.9 (iii) shows that $v$ is $C^1$ across the points in $\partial C$ and $C \cap D_\chi$.
For $\partial C$ this is just the usual smooth pasting condition at the boundary between continuation
and stopping region. For $s_0 \in C \cap D_\chi$ we may use the differential equation that governs $v$
around $s_0$ to compute $v''(s_0+) - v''(s_0-) = 2v(s_0)(\chi(s_0+) - \chi(s_0-))/(\sigma^2 s_0^2)$. That is to say
that a jump of $\chi$ at $s_0$ causes a jump of $v''$ in the same direction, but it does not affect
the continuity of $v'$.
2 Piecewise constant intensity function
In this section we address in more detail the case that $\chi$ is given by $\chi(s) = 1_{\{s \leq \bar{s}\}} \, p$
for parameters $p, \bar{s} > 0$. The process $\varphi$ from (1.1) is now given by
$$\varphi_t = \int_0^t p \, 1_{\{S_u \leq \bar{s}\}} \, du$$
for all $t \geq 0$, and we denote the associated value function by $v_p$, that is
$$v_p(s) = \sup_{\tau \in \mathcal{T}_{0,\infty}} E_s[L_\tau] = \sup_{\tau \in \mathcal{T}_{0,\infty}} E_s\left[ e^{-r\tau - \varphi_\tau} \gamma S_\tau + \int_0^\tau c e^{-ru - \varphi_u} \, du \right],$$
with continuation and stopping regions denoted by $C_p$ and $S_p$, respectively.
Throughout we will make repeated use of the functions $\hat{v}_q$ and associated optimal
stopping levels $\hat{s}_q$ which were discussed in Proposition 1.6. Note that for any $p > 0$, from
Theorem 1.9 (i) and (iii) and $\chi(s) \leq p$ we know that $v_p$ is a non-decreasing $C^1(\mathbb{R}_{>0})$-function
with $\hat{v}_p \leq v_p \leq \hat{v}_0$.
The drift rate $\lambda$ of $L$ takes the form $\lambda(s) = c - \gamma s (\delta + p \, 1_{\{s \leq \bar{s}\}})$. If $\lambda(\bar{s}+) \leq 0$, i.e.
$\bar{s} \geq c/(\gamma\delta)$, $S_p$ has the same structure as the optimal stopping region of $\hat{v}_p$, see Theorem 2.1
below.
On the other hand, if $\lambda(\bar{s}+) > 0$, i.e. $\bar{s} \in (0, c/(\gamma\delta))$, $\lambda$ is strictly positive on
$(0, c/(\gamma(\delta + p))) \cup (\bar{s}, c/(\gamma\delta))$, which for $p$ large enough causes $S_p$ to be the union of
two disjoint intervals, one contained in $(c/(\gamma(\delta + p)), \bar{s})$ and one contained in $(c/(\gamma\delta), \hat{s}_0)$.
See Theorem 2.2 below, resp. Figures 1 and 2 in Section 4.
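The sign structure of $\lambda$ in the second case is easy to tabulate. A sketch with $\bar{s} < c/(\gamma\delta)$ (all parameter values are illustrative), verifying that $\lambda > 0$ exactly on the two disjoint intervals named above:

```python
import numpy as np

# Illustrative parameters with s_bar < c/(gamma*delta) (not from the paper)
r, delta, sigma = 0.05, 0.02, 0.3
c, gamma, p, s_bar = 0.04, 1.0, 0.3, 1.0

def lam(s):
    # Drift rate of L: lambda(s) = c - gamma*s*(delta + p*1_{s <= s_bar})
    return c - gamma * s * (delta + p * (s <= s_bar))

z1 = c / (gamma * (delta + p))   # zero of lambda on {s <= s_bar}
z2 = c / (gamma * delta)         # zero of lambda on {s > s_bar}

# lambda > 0 exactly on (0, z1) union (s_bar, z2)
grid = np.linspace(1e-3, 3.0, 3000)
pos = grid[lam(grid) > 0]
assert pos.min() < z1 and pos.max() < z2
assert np.all((pos < z1) | ((pos > s_bar) & (pos < z2)))
```

With these values $z_1 = 0.125$ and $z_2 = 2$, so $\lambda$ changes sign twice, which is the mechanism behind a disconnected continuation region.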
Theorem 2.1. Suppose that $\bar{s} \geq c/(\gamma\delta)$. For $q \in \mathbb{R}_{\geq 0}$, $b \in \mathbb{R}_{>0}$ let
$$c_1^q(b) := \frac{(\beta_2^q - 1)\gamma b - \beta_2^q \, c/(r+q)}{(\beta_2^q - \beta_1^q) \, b^{\beta_1^q}} \quad \text{and} \quad c_2^q(b) := \frac{(\beta_1^q - 1)\gamma b - \beta_1^q \, c/(r+q)}{(\beta_1^q - \beta_2^q) \, b^{\beta_2^q}},$$
where $\beta_1^q$, $\beta_2^q$ are defined, like $\hat{s}_q$ and $\hat{v}_q$, in Proposition 1.6. We have
(i) If $\hat{s}_p \leq \bar{s}$, then $v_p = \hat{v}_p$.
(ii) If $\hat{s}_p > \bar{s}$, then $S_p = [b_p, \infty)$, where $b_p \in (\hat{s}_p, \hat{s}_0)$ is the unique solution on $(\bar{s}, \infty)$ of
the following equation in $b$:
$$(\beta_1^0 - \beta_2^p) c_1^0(b) \bar{s}^{\beta_1^0} + (\beta_2^0 - \beta_2^p) c_2^0(b) \bar{s}^{\beta_2^0} - \frac{\beta_2^p \, c \, p}{r(r+p)} = 0 \qquad (2.1)$$
and
$$v_p(s) = \begin{cases} \left( c_1^0(b_p) \bar{s}^{\beta_1^0} + c_2^0(b_p) \bar{s}^{\beta_2^0} + \dfrac{cp}{r(r+p)} \right) (s/\bar{s})^{\beta_2^p} + \dfrac{c}{r+p} & \text{on } (0, \bar{s}) \\ c_1^0(b_p) s^{\beta_1^0} + c_2^0(b_p) s^{\beta_2^0} + c/r & \text{on } [\bar{s}, b_p) \\ \gamma s & \text{on } [b_p, \infty). \end{cases} \qquad (2.2)$$
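For concrete parameters, $b_p$ can be computed from (2.1) by a one-dimensional root search, after which (2.2) is fully explicit. The sketch below (parameter values are illustrative choices satisfying $\hat{s}_p > \bar{s} \geq c/(\gamma\delta)$) bisects the left-hand side of (2.1), which is strictly decreasing on $(\bar{s}, \infty)$ by the proof below, and then checks value matching and smooth fit at $b_p$ and the $C^1$-pasting at $\bar{s}$:

```python
import math

# Illustrative parameters with s_hat_p > s_bar >= c/(gamma*delta)
r, delta, sigma = 0.05, 0.02, 0.3
c, gamma, p, s_bar = 0.04, 1.0, 0.01, 2.2

def betas(q):
    # Roots of sigma^2 beta(beta-1)/2 + (r - delta) beta - (r + q) = 0
    A, B = sigma**2 / 2, (r - delta) - sigma**2 / 2
    disc = math.sqrt(B * B + 4 * A * (r + q))
    return (-B - disc) / (2 * A), (-B + disc) / (2 * A)

def s_hat(q):
    _, b2 = betas(q)
    return b2 * c / (gamma * (r + q) * (b2 - 1))

b1_0, b2_0 = betas(0.0)
b1_p, b2_p = betas(p)

def c1(b):  # c_1^0(b)
    return ((b2_0 - 1) * gamma * b - b2_0 * c / r) / ((b2_0 - b1_0) * b**b1_0)

def c2(b):  # c_2^0(b)
    return ((b1_0 - 1) * gamma * b - b1_0 * c / r) / ((b1_0 - b2_0) * b**b2_0)

def G(b):   # lhs of (2.1)
    return ((b1_0 - b2_p) * c1(b) * s_bar**b1_0
            + (b2_0 - b2_p) * c2(b) * s_bar**b2_0
            - b2_p * c * p / (r * (r + p)))

# G is strictly decreasing on (s_bar, inf); bisect on [s_bar, s_hat_0]
lo, hi = s_bar * 1.001, s_hat(0.0)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if G(mid) > 0 else (lo, mid)
b_p = 0.5 * (lo + hi)

# Theorem 2.1 (ii): s_hat_p < b_p < s_hat_0
assert s_hat(p) < b_p < s_hat(0.0)

# Value matching and smooth fit at b_p for the middle piece of (2.2)
f = lambda s: c1(b_p) * s**b1_0 + c2(b_p) * s**b2_0 + c / r
df = lambda s: c1(b_p) * b1_0 * s**(b1_0 - 1) + c2(b_p) * b2_0 * s**(b2_0 - 1)
assert abs(f(b_p) - gamma * b_p) < 1e-8
assert abs(df(b_p) - gamma) < 1e-8

# C^1-pasting at s_bar between the first and second pieces of (2.2)
A0 = c1(b_p) * s_bar**b1_0 + c2(b_p) * s_bar**b2_0 + c * p / (r * (r + p))
left_deriv = A0 * b2_p / s_bar   # d/ds of A0*(s/s_bar)^{beta_2^p} at s_bar
assert abs(left_deriv - df(s_bar)) < 1e-6
```

Value matching and smooth fit at $b$ are built into the definitions of $c_1^0$ and $c_2^0$; equation (2.1) is exactly the remaining derivative-pasting condition at $\bar{s}$, which is why the last assertion holds at the root.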
Proof. Ad (i). Recall from Proposition 1.6 that the free boundary system (1.14) has a
unique solution $(f^*, b^*)$, with $b^* = \hat{s}_p$ and $f^* = \hat{v}_p|_{(0,b^*)}$, and that by extending $f^*$ by
setting $f^*(s) = \gamma s$ on $[b^*, \infty)$ we get $f^* \in C^2(\mathbb{R}_{>0} \setminus \{b^*\}) \cap C^1(\mathbb{R}_{>0})$ and $f^* = \hat{v}_p$. Let us
show that $v_p = f^*$.
By assumption we have $b^* = \hat{s}_p \leq \bar{s}$ and thus $\chi(s) = p$ on $(0, b^*)$, which yields by
(1.14)
$$(\mathcal{L} - (r + \chi(s))) f^*(s) + c = 0 \quad \text{on } (0, b^*), \qquad (2.3)$$
while on $(b^*, \infty)$ a direct calculation with $f^*(s) = \gamma s$ and $\hat{s}_p > c/(\gamma(\delta + p))$ (cf. (1.13))
gives
$$(\mathcal{L} - (r + \chi(s))) f^*(s) + c = \lambda(s) \leq 0 \quad \text{on } (b^*, \infty). \qquad (2.4)$$
Applying Lemma A.1 (i), thereby using (2.3) and $f^*(s) = \gamma s$ on $[b^*, \infty)$, and Lemma A.1 (ii),
thereby using (2.3), (2.4) and $f^*(s) = \hat{v}_p(s) \geq \gamma s$ on $\mathbb{R}_{>0}$, we find that $f^*(s) = \sup_{\tau \in \mathcal{T}_{0,\infty}} E_s[L_\tau]$
on $\mathbb{R}_{>0}$. Thus indeed $v_p = f^*$.
Ad (ii). From $\hat{v}_p \leq v_p \leq \hat{v}_0$, $\hat{s}_p > \bar{s}$ and $\lambda$ being negative on $(\bar{s}, \infty)$ it follows with
Theorem 1.9 (ii) that $S_p = [b_p, \infty)$ for some $b_p \in [\hat{s}_p, \hat{s}_0]$.
Step 1. From Theorem 1.9 (iii) it follows that the pair $(v_p|_{(0,b_p)}, b_p)$ solves the following
free boundary problem in unknowns $(f, b) \in C^2((0, b) \setminus \{\bar{s}\}) \cap C^1(0, b) \times (\bar{s}, \infty)$:
$$\begin{cases} f(0+) \in \mathbb{R}_{\geq 0} \\ (\mathcal{L} - (r + p)) f(s) + c = 0 & \text{on } (0, \bar{s}) \\ f(\bar{s}-) = f(\bar{s}+), \quad f'(\bar{s}-) = f'(\bar{s}+) \\ (\mathcal{L} - r) f(s) + c = 0 & \text{on } (\bar{s}, b) \\ f(b-) = \gamma b, \quad f'(b-) = \gamma. \end{cases}$$
Let us show that this system has in fact a unique solution $(f^*, b^*)$, with $b^*$ equal to
the unique solution to (2.1) and $f^*$ given by the first two lines in the rhs of (2.2). Clearly,
for any $b > \bar{s}$, $f_b$ solves both differential equations in the above system iff
$$f_b(s) = \begin{cases} C_1 s^{\beta_1^p} + C_2 s^{\beta_2^p} + c/(r+p) & \text{on } (0, \bar{s}) \\ C_3 s^{\beta_1^0} + C_4 s^{\beta_2^0} + c/r & \text{on } (\bar{s}, b) \end{cases}$$
for constants $C_1, \ldots, C_4$. Since $\beta_1^p < 0 < \beta_2^p$, we have $f_b(0+) \in \mathbb{R}_{\geq 0}$ iff $C_1 = 0$. Furthermore
some straightforward calculations show that the four boundary conditions at $s = \bar{s}$ and
$s = b$ translate into explicit expressions for $C_2 = C_2(b), \ldots, C_4 = C_4(b)$ in terms of $b$ and
the requirement that $b$ solves (2.1). Using the identities (1.12), differentiating the lhs of
(2.1) with respect to $b$ yields the expression
$$\frac{2(\delta\gamma - c b^{-1})}{\sigma^2 (\beta_2^0 - \beta_1^0)} \left[ (\beta_1^0 - \beta_2^p) \left( \frac{\bar{s}}{b} \right)^{\beta_1^0} - (\beta_2^0 - \beta_2^p) \left( \frac{\bar{s}}{b} \right)^{\beta_2^0} \right],$$
and on account of $\beta_2^p > 0$, $\beta_1^0 < 0 < 1 < \beta_2^0$ and $\bar{s} \geq c/(\delta\gamma)$ this quantity is strictly
negative for $b \in (\bar{s}, \infty)$; thus it follows that (2.1) can have at most one solution on $(\bar{s}, \infty)$.
So $(f^*, b^*)$ is indeed uniquely determined with $b^* = b_p$. Plugging $b^* = b_p$ into the above
expressions for $C_2(b), \ldots, C_4(b)$ shows that $f^* = f_{b_p}$ is indeed given by the formulae in the
rhs of (2.2).
Step 2. We noted above already that b
p
∈ [ˆs
p
, ˆs
0
]. It remains to show that b
p
< ˆs
0
and b
p
> ˆs
p
. To see the former, note that from Proposition 1.6 and Step 1 we have

that the pairs (ˆv
0
|
(¯s,ˆs
0
)
, ˆs
0
) and (v
p
|
(¯s,b
p
)
, b
p
) both solve the following system in unknowns
(f, b) ∈ C
2
(¯s, b) ×(¯s, ∞)

(L −r)f(s) + c = 0 on (¯s, b)
f(b−) = γb, f

(b−) = γ.
If we fix some b ∈ (¯s, ∞), the corresponding f in the above system is obviously uniquely
determined. This means that if we had b
p
= ˆs
0

=: b

, then also v
p
= ˆv
0
on (¯s, b

), but
this is clearly impossible since for any s ∈ (¯s, b

), there is a positive P
s
-probability that S
spends a Lebesgue positive amount of time in (0, ¯s) before reaching the optimal stopping
level b

, implying v
p
(s) < ˆv
0
(s). Thus b
p
< ˆs
0
.
To see b
p
> ˆs
p

, suppose that we had b
p
= ˆs
p
=: b

. From Step 1 we know that
v
p
satisfies (L − r)v
p
(s) + c = 0 on (¯s, b

) with v
p
(b

−) = γb

and v

p
(b

−) = γ while
from Proposition 1.6 we know that ˆv
p
satisfies (L − (r + p))ˆv
p
(s) + c = 0 on (0, b


) with
ˆv
p
(b

−) = γb

and ˆv

p
(b

−) = γ. Taking the limit for s ↑ b

in both differential equations
and making use of the mentioned boundary conditions at s = b

−, it readily follows on
account of the different potentials that v

p
(b

−) < ˆv

p
(b

). This however means, taking

into account that by the boundary conditions v
p
and ˆv
p
and their derivatives coincide at
s = b

−, that ˆv
p
> v
p
on (b

− , b

) for some  > 0, thus yielding a contradiction to
ˆv
p
≤ v
p
.
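The monotonicity argument in Step 1 rests on the displayed closed form for the derivative of the lhs of (2.1). A quick numerical cross-check, not from the paper (illustrative parameters; $\beta^q_{1,2}$ assumed to be the roots of $(\sigma^2/2)\beta(\beta-1) + (r-\delta)\beta - (r+q) = 0$ as in Proposition 1.6):

```python
import math

sigma, r, delta, gamma, c = 0.2, 0.1, 0.05, 1.0, 0.5
p, s_bar = 0.01, 10.0  # satisfies s_bar >= c/(delta*gamma); illustrative values

def betas(q):
    A, B = sigma**2 / 2, r - delta - sigma**2 / 2
    d = math.sqrt(B * B + 4 * A * (r + q))
    return (-B - d) / (2 * A), (-B + d) / (2 * A)

b1_0, b2_0 = betas(0.0)
b1_p, b2_p = betas(p)

def lhs21(b):
    """Lhs of (2.1), written out with the definitions of c^0_1 and c^0_2."""
    c1 = ((b2_0 - 1) * gamma * b - b2_0 * c / r) / ((b2_0 - b1_0) * b ** b1_0)
    c2 = ((b1_0 - 1) * gamma * b - b1_0 * c / r) / ((b1_0 - b2_0) * b ** b2_0)
    return ((b1_0 - b2_p) * c1 * s_bar ** b1_0
            + (b2_0 - b2_p) * c2 * s_bar ** b2_0
            - b2_p * c * p / (r * (r + p)))

def dlhs21(b):
    """Closed-form derivative of lhs21 displayed in the proof of Theorem 2.1."""
    pref = 2 * (delta * gamma - c / b) / (sigma**2 * (b2_0 - b1_0))
    return pref * ((b1_0 - b2_p) * (s_bar / b) ** b1_0
                   - (b2_0 - b2_p) * (s_bar / b) ** b2_0)
```

A central finite difference of `lhs21` should agree with `dlhs21`, and the latter should be strictly negative on $(\bar s, \infty)$.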
Theorem 2.2. Suppose that $\bar s \in (0, c/(\delta\gamma))$. There exists a unique $\bar p \in (0, \infty)$ with $\hat s_{\bar p} \in (0, \bar s)$ such that the following holds.

(i) For $p \in (0, \bar p)$ we have $S_p = [b_p, \infty)$, where $b_p \in (c/(\delta\gamma) \vee \hat s_p, \hat s_0)$ is the unique solution of equation (2.1) on $(c/(\delta\gamma), \infty)$ and $v_p$ is given by the rhs of (2.2).

(ii) For $p \in [\bar p, \infty)$ we have $S_p = [\hat s_p, a_p] \cup [b_p, \infty)$, with $a_{\bar p} = \hat s_{\bar p}$, where the pair $(a_p, b_p)$ is the unique solution of the following system of equations in unknowns $(a, b) \in [\hat s_p, \bar s) \times (c/(\delta\gamma), \hat s_0)$
\[
c^p_1(a)\bar s^{\beta^p_1} + c^p_2(a)\bar s^{\beta^p_2} + c/(r+p) = c^0_1(b)\bar s^{\beta^0_1} + c^0_2(b)\bar s^{\beta^0_2} + c/r \tag{2.5}
\]
\[
\beta^p_1 c^p_1(a)\bar s^{\beta^p_1} + \beta^p_2 c^p_2(a)\bar s^{\beta^p_2} = \beta^0_1 c^0_1(b)\bar s^{\beta^0_1} + \beta^0_2 c^0_2(b)\bar s^{\beta^0_2} \tag{2.6}
\]
and
\[
v_p(s) = \begin{cases}
\hat v_p(s) & \text{on } (0, \hat s_p) \\
\gamma s & \text{on } [\hat s_p, a_p] \\
c^p_1(a_p)\, s^{\beta^p_1} + c^p_2(a_p)\, s^{\beta^p_2} + c/(r+p) & \text{on } (a_p, \bar s] \\
c^0_1(b_p)\, s^{\beta^0_1} + c^0_2(b_p)\, s^{\beta^0_2} + c/r & \text{on } (\bar s, b_p) \\
\gamma s & \text{on } [b_p, \infty).
\end{cases} \tag{2.7}
\]
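The system (2.5)-(2.6) is precisely $C^1$ pasting at $\bar s$ of the two middle branches of (2.7): (2.5) equates their values and (2.6) their first derivatives (multiplied by $\bar s$). A sketch, not from the paper, with illustrative parameter values and $c^q_{1,2}$ as in Theorem 2.1 (here $\beta^q_{1,2}$ are assumed to be the roots of $(\sigma^2/2)\beta(\beta-1) + (r-\delta)\beta - (r+q) = 0$):

```python
import math

sigma, r, delta, gamma, c = 0.2, 0.1, 0.05, 1.0, 0.5
p, s_bar = 0.4, 2.5  # s_bar < c/(delta*gamma) = 10, the regime of Theorem 2.2

def betas(q):
    A, B = sigma**2 / 2, r - delta - sigma**2 / 2
    d = math.sqrt(B * B + 4 * A * (r + q))
    return (-B - d) / (2 * A), (-B + d) / (2 * A)

def cq(q, x):
    """(c^q_1(x), c^q_2(x)) from Theorem 2.1 (smooth fit against gamma*s at x)."""
    b1, b2 = betas(q)
    c1 = ((b2 - 1) * gamma * x - b2 * c / (r + q)) / ((b2 - b1) * x ** b1)
    c2 = ((b1 - 1) * gamma * x - b1 * c / (r + q)) / ((b1 - b2) * x ** b2)
    return c1, c2

def branch(q, x, s):
    """c^q_1(x) s^{beta^q_1} + c^q_2(x) s^{beta^q_2} + c/(r+q)."""
    b1, b2 = betas(q)
    c1, c2 = cq(q, x)
    return c1 * s ** b1 + c2 * s ** b2 + c / (r + q)

def residuals(a, b):
    """Residuals of equations (2.5) and (2.6) for a candidate pair (a, b)."""
    b1p, b2p = betas(p)
    b10, b20 = betas(0.0)
    c1p, c2p = cq(p, a)
    c10, c20 = cq(0.0, b)
    r25 = (c1p * s_bar ** b1p + c2p * s_bar ** b2p + c / (r + p)
           - (c10 * s_bar ** b10 + c20 * s_bar ** b20 + c / r))
    r26 = (b1p * c1p * s_bar ** b1p + b2p * c2p * s_bar ** b2p
           - (b10 * c10 * s_bar ** b10 + b20 * c20 * s_bar ** b20))
    return r25, r26
```

For any candidate $(a, b)$ the residuals coincide with the value and (scaled) slope mismatches of the two branches at $\bar s$, and each branch pastes smoothly against $\gamma s$ at its own boundary point by construction.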
Proof. Let us prove the assertion for
\[
\bar p := \inf\{p > 0 \mid S_p \cap (0, \bar s) \neq \emptyset\}. \tag{2.8}
\]
Obviously, there can be at most one $\bar p$ for which (i) and (ii) both hold. Since $v_p \ge \hat v_p$ and $\hat s_0 > c/(\delta\gamma) > \bar s$ (by assumption and (1.13)) we have $\bar p > 0$ and $\hat s_{\bar p} \in (0, \bar s)$. Obviously $\bar p < \infty$.

Ad (i). Since $S_p \cap (0, \bar s) = \emptyset$ and $\lambda > 0$ on $(\bar s, c/(\delta\gamma))$ we have by Theorem 1.9 (ii) & (iv) that $S_p \cap (0, c/(\delta\gamma)] = \emptyset$. Furthermore, since $\lambda < 0$ on $(c/(\delta\gamma), \infty)$ we get by again Theorem 1.9 (ii) and $\hat v_p \le v_p \le \hat v_0$ that $S_p = [b_p, \infty)$ for some $b_p \in [\hat s_p, \hat s_0]$ with $b_p > c/(\delta\gamma)$. For the remaining statements of (i) the same proof as for Theorem 2.1 (ii) applies.
Ad (ii). Step 1. Take some $s_0 \in S_p \cap (0, \bar s)$, which is non-empty by the assumption (note that, due to the continuity of $p \mapsto v_p$ in the max-norm (cf. Theorem 1.9 (i)), the infimum in (2.8) is attained). From $v_p(s_0) = \gamma s_0$ and $s_0 \le \bar s$, it follows that
\[
v_p(s) = \hat v_p(s), \quad \forall s \le s_0. \tag{2.9}
\]
Namely, starting $S$ at $s \in (0, s_0]$ the process never enters the default-free region $(\bar s, \infty)$ when optimally stopped at $S_p$. Thus the optimal payoff of $v_p(s_0)$ coincides with the corresponding payoff of the potentially smaller claim when default occurs everywhere with rate $p$. Now, (2.9) means that $S_p \cap (0, s_0] = [\hat s_p, s_0]$. Since $s_0$ is an arbitrary element of $S_p \cap (0, \bar s)$, this implies in particular that $S_p \cap (0, \bar s)$ is an interval. Since $\lambda(\bar s+) > 0$ we have by Theorem 1.9 (iv) that $\bar s \notin S_p$, which also means that we have $S_p \cap (0, \bar s) = S_p \cap (0, \bar s]$, the latter being closed in $\mathbb{R}_{>0}$. In conclusion we arrive at $S_p \cap (0, \bar s) = [\hat s_p, a_p]$ for some $a_p \in [\hat s_p, \bar s)$.

Furthermore, since $\lambda > 0$ on $(\bar s, c/(\delta\gamma))$, $\lambda < 0$ on $(c/(\delta\gamma), \infty)$ and $v_p \le \hat v_0$, we get from Theorem 1.9 (ii) & (iv) that $S_p \cap [\bar s, \infty) = [b_p, \infty)$ for some $b_p \in (c/(\delta\gamma), \hat s_0]$. The same argument as in Step 2 in the proof of Theorem 2.1 (ii) shows that $b_p < \hat s_0$. We have thus established the shape of $S_p$ as in the statement.
Step 2. Let us show that $a_{\bar p} = \hat s_{\bar p}$, i.e. that $S_{\bar p} \cap (0, \bar s)$ consists of a single point. From (i) we know that for $p \in (0, \bar p)$ we have $v_p(s) = C_p s^{\beta^p_2} + c/(r+p)$ on $(0, \bar s)$. By the continuity of $p \mapsto \beta^p_2$ and of $p \mapsto v_p$ in max-norm (cf. Theorem 1.9 (i)), $C_p$ has a limit value $C_{\bar p}$ as $p \to \bar p$ and we can write $v_{\bar p}(s) = C_{\bar p} s^{\beta^{\bar p}_2} + c/(r+\bar p)$ on $(0, \bar s)$. Clearly, $C_{\bar p}$ has to be strictly positive. Since $\beta^{\bar p}_2 > 1$, this formula shows that $v_{\bar p}$ can touch $s \mapsto \gamma s$ in only a single point.
Step 3. Finally, let us establish the stated formulae for $(a_p, b_p)$ and $v_p|_{(a_p, b_p)}$. Consider the following free boundary problem in unknowns $(f, a, b) \in C^2((a, \bar s) \cup (\bar s, b)) \cap C^1(a, b) \times [\hat s_p, \bar s) \times (c/(\delta\gamma), \hat s_0)$
\[
\begin{cases}
f(a+) = \gamma a,\; f'(a+) = \gamma \\
(L - (r + p))f(s) + c = 0 \quad\text{on } (a, \bar s) \\
f(\bar s-) = f(\bar s+),\; f'(\bar s-) = f'(\bar s+) \\
(L - r)f(s) + c = 0 \quad\text{on } (\bar s, b) \\
f(b-) = \gamma b,\; f'(b-) = \gamma.
\end{cases} \tag{2.10}
\]
From Theorem 1.9 (iii) (and Step 1 for the intervals which contain $a_p$ and $b_p$) we know that the triplet $(v_p|_{(a_p, b_p)}, a_p, b_p)$ solves this system. Let $(f, a, b)$ be any solution to this system. By considering the initial value problems consisting of the first and last two lines of the system resp., it is straightforward to check that we have
\[
f|_{(a, \bar s)}(s) = c^p_1(a)s^{\beta^p_1} + c^p_2(a)s^{\beta^p_2} + c/(r+p) \quad\text{and}\quad f|_{(\bar s, b)}(s) = c^0_1(b)s^{\beta^0_1} + c^0_2(b)s^{\beta^0_2} + c/r \tag{2.11}
\]
(recall that $c^\cdot_{1,2}$ are defined in Theorem 2.1). As is readily checked, the remaining pasting conditions at $s = \bar s$, i.e. the third line of (2.10), are satisfied iff $(a, b)$ satisfies the system of equations (2.5)-(2.6). Thus $(a_p, b_p)$ indeed satisfies (2.5)-(2.6) and, taking into account that $v_p = \hat v_p$ on $(0, \hat s_p]$ (by Step 1), $v_p$ is indeed given by (2.7).

It remains to show that (2.5)-(2.6) has at most one solution. For this, let $(a, b) \in [\hat s_p, \bar s) \times (c/(\delta\gamma), \hat s_0)$ be any solution. Defining the function $f$ on $(a, b)$ by (2.11), with the understanding that $f(\bar s) := f|_{(a, \bar s)}(\bar s-) = f|_{(\bar s, b)}(\bar s+)$, we have that $(f, a, b)$ is a solution to system (2.10). Let us first show that
\[
f(s) > \gamma s \quad\text{on } (a, b). \tag{2.12}
\]
From $a \ge \hat s_p$, $\delta < r$ and $c/(\gamma(\delta + p)) < \hat s_p$ (cf. (1.13)) it follows that $c^p_1(a) \ge 0$ and $c^p_2(a) > 0$; using this with $\beta^p_1 < 0 < 1 < \beta^p_2$, a straightforward calculation shows that $f''|_{(a, \bar s)} > 0$, so that with the boundary conditions in $s = a+$ from system (2.10) we see that $f|_{(a, \bar s)}(s) > \gamma s$. For $f|_{(\bar s, b)}$, using system (2.10) we can take the limit for $s \uparrow b$ in $(L - r)f_b(s) + c = 0$ and use the boundary conditions at $s = b-$ together with $b > c/(\delta\gamma)$ to see that $f''|_{(\bar s, b)}(b-) > 0$. Furthermore, on account of $\beta^0_1 < 0 < 1 < \beta^0_2$, a simple computation shows that $f''|_{(\bar s, b)}$ might have at most one zero. From this structure of $f''|_{(\bar s, b)}$, the boundary conditions at $s = b-$ and $f|_{(\bar s, b)}(\bar s+) = f|_{(a, \bar s)}(\bar s-) > \gamma\bar s$, it readily follows that $f|_{(\bar s, b)}(s) > \gamma s$, and thus (2.12) indeed holds.

Now, extend $f$ to a $C^2((\hat s_p, \infty) \setminus \{a, \bar s, b\}) \cap C^1(\hat s_p, \infty)$-function by setting $f(s) = \gamma s$ on $[\hat s_p, a] \cup [b, \infty)$. Using that $f$ satisfies $(L - (r + \chi(s)))f(s) + c = 0$ on $(a, \bar s) \cup (\bar s, b)$ (cf. system (2.10)), that $(L - (r + \chi(s)))f(s) + c = \lambda(s) \le 0$ on $(\hat s_p, a) \cup (b, \infty) \subset (c/(\gamma(\delta + p)), \bar s) \cup (c/(\delta\gamma), \infty)$ and that $f(s) \ge \gamma s$ on $[\hat s_p, \infty)$ (cf. (2.12)), we get from Lemma A.1 (i) & (ii) that
\[
f(s) = \sup_{\tau \in \mathcal T_{0,\infty}} \mathbb{E}_s\left[ L^{\tau(\hat s_p, \infty)}_\tau \right] \quad\text{on } [\hat s_p, \infty).
\]
A second solution $(a_2, b_2)$ of (2.5)-(2.6) would by the same means as above allow to construct an associated solution function $f_2$ on $[\hat s_p, \infty)$ that also equals the rhs of the above formula. Thus $f_2 = f$, and since $f(s) > \gamma s$ iff $s \in (a, b)$ and $f_2(s) > \gamma s$ iff $s \in (a_2, b_2)$ (cf. (2.12)), this indeed implies $(a_2, b_2) = (a, b)$, as required.
3 Power based intensity function

In this section we look at an intensity function of the form $\chi(s) = s^{-\alpha}$ for $\alpha > 0$ and we denote the associated value function by $v_\alpha$. This means that we get $\varphi_t = \int_0^t S_u^{-\alpha}\, du$, $t \ge 0$, and
\[
v_\alpha(s) = \sup_{\tau \in \mathcal T_{0,\infty}} \mathbb{E}_s[L_\tau] = \sup_{\tau \in \mathcal T_{0,\infty}} \mathbb{E}_s\left[ e^{-r\tau - \varphi_\tau} \gamma S_\tau + \int_0^\tau c\, e^{-ru - \varphi_u}\, du \right], \tag{3.1}
\]
and we denote by $C_\alpha$ and $S_\alpha$ the associated continuation and stopping regions resp. The drift rate is now given by $\lambda(s) = c - \gamma s(\delta + \chi(s)) = c - \delta\gamma s - \gamma s^{1-\alpha}$. It turns out that $v_\alpha$ behaves quite differently depending on whether $\alpha < 1$, $\alpha = 1$ or $\alpha > 1$; see Theorem 3.3 below and Figures 3 & 4 in Section 4.
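To see the different shapes of the drift rate concretely, $\lambda$ can be inspected numerically. The sketch below is not from the paper; it uses the parameter values that also appear in Figures 3 and 4 of Section 4.

```python
import math

r, delta, gamma, c = 0.1, 0.05, 1.0, 1.25  # parameters of Figures 3 and 4

def lam(s, alpha):
    """Drift rate lambda(s) = c - delta*gamma*s - gamma*s**(1-alpha)."""
    return c - delta * gamma * s - gamma * s ** (1.0 - alpha)

def bisect(g, lo, hi, tol=1e-10):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# alpha < 1: lambda decreases from lambda(0+) = c, single zero s_r
s_r_small = bisect(lambda s: lam(s, 0.2), 1e-6, 50.0)

# alpha > 1: lambda has a strict maximum at s_0 = (delta/(alpha-1))**(-1/alpha)
alpha = 5.0
s0 = (delta / (alpha - 1)) ** (-1.0 / alpha)
s_l = bisect(lambda s: lam(s, alpha), 1e-3, s0)   # lower zero
s_r = bisect(lambda s: lam(s, alpha), s0, 1e3)    # upper zero
```

For $\alpha = 0.2$ the drift rate changes sign once, while for $\alpha = 5$ it is negative near $0$, positive on an intermediate interval $(s_l, s_r)$ and negative again for large $s$, which is what produces the disconnected continuation region.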
Proposition 3.1. For any $\alpha > 0$, $v_\alpha$ is a non-decreasing $C^1(\mathbb{R}_{>0})$-function with $\gamma s \le v_\alpha(s) \le \hat v_0(s)$ on $\mathbb{R}_{>0}$ and $v_\alpha(0+) = 0$.

Proof. That $\gamma s \le v_\alpha(s) \le \hat v_0(s)$ on $\mathbb{R}_{>0}$ and $v_\alpha \in C^1(\mathbb{R}_{>0})$ are immediate from Theorem 1.9 (i) & (iii). That $v_\alpha$ is non-decreasing is obvious by writing (3.1) as
\[
v_\alpha(s) = \sup_{\tau \in \mathcal T_{0,\infty}} \mathbb{E}_1\left[ e^{-r\tau - s^{-\alpha}\varphi_\tau} \gamma s S_\tau + \int_0^\tau c\, e^{-ru - s^{-\alpha}\varphi_u}\, du \right].
\]
Using this expression and that $(e^{-rt} S_t)_{t \ge 0}$ is a class (D) supermartingale, we find by Doob's optional sampling theorem
\[
v_\alpha(s) \le \gamma s + \mathbb{E}_1\left[ \int_0^\infty c \exp\left( -ru - s^{-\alpha}\varphi_u \right) du \right],
\]
and thus by dominated convergence it follows that $v_\alpha(0+) = 0$.
Investigating where $\lambda$ is positive (if anywhere) requires a few calculations and is done next.

Proposition 3.2. Assume (for convenience) that $\delta < 1$. We have the following cases.

(i) Let $\alpha \in (0, 1)$. Then $\lambda$ is strictly decreasing with $\lambda(0+) = c$. We denote its zero by $s_r$.

(ii) Let $\alpha = 1$. Then $\lambda$ is strictly decreasing with $\lambda(0+) = c - \gamma$. For $c > \gamma$ we denote its zero by $s_r$.

(iii) Let $\alpha > 1$. Then $\lambda$ attains a strict maximum in $s_0 := (\delta/(\alpha-1))^{-1/\alpha}$. The set $J := \{\alpha > 1 \mid \lambda(s_0) > 0\}$ satisfies
\[
J = \begin{cases}
\emptyset & \text{if } c/\gamma \le \delta \\
(\alpha_r, \infty) & \text{if } c/\gamma \in (\delta, 1] \\
(1, \alpha_l) \cup (\alpha_r, \infty) & \text{if } c/\gamma \in (1, \delta + 1) \\
(1, \infty) & \text{if } c/\gamma \ge \delta + 1,
\end{cases}
\]
where (if applicable) $\alpha_l \in (1, \delta+1)$ and $\alpha_r \in (\delta+1, \infty)$ are the zeros of
\[
\Psi(\alpha) = \alpha \left( \frac{\delta}{\alpha - 1} \right)^{(\alpha-1)/\alpha} - \frac{c}{\gamma}.
\]
If $\alpha \in J$, $\lambda$ has two zeros $s_l < s_0 < s_r$.

Proof. Cases (i) and (ii) are obvious. Also case (iii) is easily checked. Note that $\lambda(s_0) > 0$ iff $\Psi(\alpha) < 0$, so $J = \{\alpha > 1 \mid \Psi(\alpha) < 0\}$. Taking into account the easily verified facts $\Psi(1+) = 1 - c/\gamma$, $\Psi$ is increasing on $(1, \delta+1)$, $\Psi$ is decreasing on $(\delta+1, \infty)$ and $\Psi(\infty) = \delta - c/\gamma$, together with $\delta < 1$, the characterization of $J$ follows.
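The characterization of $J$ can be traced numerically. The sketch below is not from the paper; it fixes $c/\gamma \in (1, \delta+1)$, locates the two zeros $\alpha_l$, $\alpha_r$ of $\Psi$, and cross-checks the sign relation between $\Psi$ and $\lambda(s_0)$ directly from the definition of $\lambda$.

```python
import math

delta = 0.05
ratio = 1.02  # c/gamma, chosen in (1, delta + 1)

def Psi(alpha):
    """Psi(alpha) = alpha * (delta/(alpha-1))**((alpha-1)/alpha) - c/gamma."""
    return alpha * (delta / (alpha - 1)) ** ((alpha - 1) / alpha) - ratio

def lam_at_max(alpha):
    """lambda evaluated at its maximizer s_0, with gamma = 1 and c = ratio."""
    s0 = (delta / (alpha - 1)) ** (-1.0 / alpha)
    return ratio - delta * s0 - s0 ** (1.0 - alpha)

def bisect(g, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

alpha_l = bisect(Psi, 1.0 + 1e-9, 1.0 + delta)  # zero in (1, delta+1)
alpha_r = bisect(Psi, 1.0 + delta, 100.0)       # zero in (delta+1, infinity)
```

The test below also checks that $\Psi$ attains its maximum $\delta + 1 - c/\gamma$ at $\alpha = \delta + 1$, as used in the proof.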
Finally we turn to obtaining semi-explicit formulae for $v_\alpha$ and the optimal exercise level(s):

Theorem 3.3. Assume $\delta < 1$. Let $I_\nu$ and $K_\nu$ denote the modified Bessel functions of the first and second kind resp., of order $\nu$. Set
\[
\nu = \frac{2}{\alpha\sigma} \sqrt{2r + \left( \frac{\sigma}{2} - \frac{r - \delta}{\sigma} \right)^2},
\]
let the functions $\phi_1, \phi_2 : \mathbb{R}_{>0} \to \mathbb{R}$ be defined by
\[
\phi_1(s) = s^{1/2 - (r-\delta)/\sigma^2}\, I_\nu\!\left( \frac{2\sqrt 2}{\alpha\sigma}\, s^{-\alpha/2} \right)
\quad\text{and}\quad
\phi_2(s) = s^{1/2 - (r-\delta)/\sigma^2}\, K_\nu\!\left( \frac{2\sqrt 2}{\alpha\sigma}\, s^{-\alpha/2} \right)
\]
and the functions $c_1, c_2 : \mathbb{R}_{>0} \to \mathbb{R}$ by
\[
c_1(b) = \frac{2\gamma}{\alpha}\, b^{2(r-\delta)/\sigma^2} \left( b\phi_2'(b) - \phi_2(b) \right)
\quad\text{and}\quad
c_2(b) = \frac{2\gamma}{\alpha}\, b^{2(r-\delta)/\sigma^2} \left( \phi_1(b) - b\phi_1'(b) \right).
\]
Furthermore, define $F : \mathbb{R}_{>0} \to \mathbb{R}$ by
\[
F(b) = -\frac{2}{\sigma^2} \int_0^b \xi^{(r-\delta)/\sigma^2 - 3/2}\, K_\nu\!\left( \frac{2\sqrt 2}{\alpha\sigma}\, \xi^{-\alpha/2} \right) \lambda(\xi)\, d\xi. \tag{3.2}
\]
We have the following.

(i) Suppose that $\alpha \in (0, 1)$ or both $\alpha = 1$ and $c > \gamma$. Then $S_\alpha = [b_\alpha, \infty)$, where $b_\alpha$ is the unique solution of $F(b) = 0$ on $(s_r, \infty)$ and
\[
v_\alpha(s) = \begin{cases}
\dfrac{4c}{\alpha\sigma^2}\, \phi_1(s) \displaystyle\int_0^s \phi_2(\xi)\, \xi^{2(r-\delta)/\sigma^2 - 2}\, d\xi + \phi_2(s) \left( c_2(b_\alpha) + \dfrac{4c}{\alpha\sigma^2} \displaystyle\int_s^{b_\alpha} \phi_1(\xi)\, \xi^{2(r-\delta)/\sigma^2 - 2}\, d\xi \right) & \text{on } (0, b_\alpha) \\
\gamma s & \text{on } [b_\alpha, \infty).
\end{cases} \tag{3.3}
\]
In particular, $v_\alpha(0+) = 0$ and $v_\alpha'(0+) = \infty$ for $\alpha < 1$ while $v_\alpha'(0+) = c$ for $\alpha = 1$.

(ii) Suppose that $\alpha = 1$ and $c \le \gamma$. Then $S_\alpha = \mathbb{R}_{>0}$ and $v_\alpha(s) = \gamma s$ on $\mathbb{R}_{>0}$.

(iii) Suppose that $\alpha > 1$. If $\alpha \notin J$ then $S_\alpha = \mathbb{R}_{>0}$ and $v_\alpha(s) = \gamma s$ on $\mathbb{R}_{>0}$. Let $\alpha \in J$. Then $S_\alpha = (0, a_\alpha] \cup [b_\alpha, \infty)$, where the pair $(a_\alpha, b_\alpha)$ is the unique solution of the following system of equations in unknowns $(a, b) \in (0, s_l) \times (s_r, \infty)$
\[
c_1(b)\phi_1(a) + c_2(b)\phi_2(a) + \frac{4c}{\alpha\sigma^2} \int_a^b \frac{\phi_1(\xi)\phi_2(a) - \phi_1(a)\phi_2(\xi)}{\xi^{2 - 2(r-\delta)/\sigma^2}}\, d\xi = \gamma a \tag{3.4}
\]
\[
c_1(b)\phi_1'(a) + c_2(b)\phi_2'(a) + \frac{4c}{\alpha\sigma^2} \int_a^b \frac{\phi_1(\xi)\phi_2'(a) - \phi_1'(a)\phi_2(\xi)}{\xi^{2 - 2(r-\delta)/\sigma^2}}\, d\xi = \gamma \tag{3.5}
\]
and
\[
v_\alpha(s) = \begin{cases}
\gamma s & \text{on } (0, a_\alpha] \\
\phi_1(s) \left( c_1(b_\alpha) - \dfrac{4c}{\alpha\sigma^2} \displaystyle\int_s^{b_\alpha} \phi_2(\xi)\, \xi^{2(r-\delta)/\sigma^2 - 2}\, d\xi \right) + \phi_2(s) \left( c_2(b_\alpha) + \dfrac{4c}{\alpha\sigma^2} \displaystyle\int_s^{b_\alpha} \phi_1(\xi)\, \xi^{2(r-\delta)/\sigma^2 - 2}\, d\xi \right) & \text{on } (a_\alpha, b_\alpha) \\
\gamma s & \text{on } [b_\alpha, \infty).
\end{cases} \tag{3.6}
\]
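The functions $\phi_1$, $\phi_2$ are solutions of the homogeneous equation $(L - (r + s^{-\alpha}))\phi = 0$ (this is the content of Lemma A.2 (i), which lies outside this excerpt). The sketch below, which is not from the paper, checks this for $\phi_2$: it implements $I_\nu$ by its power series, obtains $K_\nu$ through the connection formula $K_\nu = \pi(I_{-\nu} - I_\nu)/(2\sin(\nu\pi))$ (valid for non-integer $\nu$), and evaluates the ODE residual by central differences for the parameter values of Figure 4.

```python
import math

sigma, r, delta, alpha = 0.2, 0.1, 0.05, 5.0  # parameters of Figure 4

nu = (2.0 / (alpha * sigma)) * math.sqrt(2 * r + (sigma / 2 - (r - delta) / sigma) ** 2)
kappa = 0.5 - (r - delta) / sigma ** 2

def bessel_i(v, z, terms=60):
    """Modified Bessel function I_v(z) by its power series."""
    return sum((z / 2) ** (2 * k + v) / (math.factorial(k) * math.gamma(k + v + 1))
               for k in range(terms))

def bessel_k(v, z):
    """K_v(z) = pi * (I_{-v}(z) - I_v(z)) / (2 sin(v*pi)), for v not an integer."""
    return math.pi * (bessel_i(-v, z) - bessel_i(v, z)) / (2 * math.sin(v * math.pi))

def phi2(s):
    """phi_2(s) = s^{1/2-(r-delta)/sigma^2} K_nu((2*sqrt(2)/(alpha*sigma)) s^{-alpha/2})."""
    z = (2 * math.sqrt(2) / (alpha * sigma)) * s ** (-alpha / 2)
    return s ** kappa * bessel_k(nu, z)

def ode_residual(f, s, h=3e-4):
    """(sigma^2/2) s^2 f'' + (r-delta) s f' - (r + s^{-alpha}) f, via central differences."""
    d1 = (f(s + h) - f(s - h)) / (2 * h)
    d2 = (f(s + h) - 2 * f(s) + f(s - h)) / h ** 2
    return 0.5 * sigma**2 * s * s * d2 + (r - delta) * s * d1 - (r + s ** (-alpha)) * f(s)
```

The same check applies to $\phi_1$ with $K_\nu$ replaced by $I_\nu$; a production implementation would use a library routine for the Bessel functions instead of this hand-rolled series.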
Proof. First suppose that either both $\alpha = 1$ and $c \le \gamma$ or $\alpha \in (1, \infty) \cap J^c$. These are exactly the cases for which $\lambda$ is everywhere non-positive, cf. Proposition 3.2. It follows from Theorem 1.9 (ii), thereby using $[\hat s_0, \infty) \subset S_\alpha$, that indeed $S_\alpha = \mathbb{R}_{>0}$ and thus $v_\alpha(s) = \gamma s$ on $\mathbb{R}_{>0}$. It remains to consider $\alpha$ for which $\lambda$ is not everywhere non-positive; in particular we can assume for the sequel that $s_r$ is well defined, and so is $s_l$ if $\alpha > 1$, cf. Proposition 3.2.

The remainder consists of three steps; in the first two we study two free boundary problems by analytical means and in the last one we use these to deduce the statements.

Step 1. Consider the free boundary system in unknowns $(f, b) \in C^2(0, b) \times (s_r, \infty)$:
\[
\begin{cases}
(L - (r + \chi(s)))f(s) + c = 0 \quad\text{on } (0, b) \\
f(b-) = \gamma b,\; f'(b-) = \gamma \\
f(0+) = 0.
\end{cases} \tag{3.7}
\]
Let us show that it has a (unique) solution pair $(f^*, b^*)$ iff $F(s_r) < 0$, and that if $F(s_r) < 0$ holds, $b^*$ is the unique solution of $F(b) = 0$ on $(s_r, \infty)$, $f^*$ is given by the first two lines of (3.3) with $b_\alpha$ replaced by $b^*$, and $f^{*\prime}(0+)$ equals $\infty$, $c$, $0$ for $\alpha < 1$, $\alpha = 1$, $\alpha > 1$, resp.

First, for any $b \in (s_r, \infty)$ we have from Lemma A.2 (i) and the general theory on ODEs that the initial value problem consisting of the first two lines of system (3.7), so without the condition $f(0+) = 0$, admits a unique solution $f = f_b$, with
\[
f_b(s) = \phi_1(s) \left( c_1(b) - \frac{4c}{\alpha\sigma^2} \int_s^b \frac{\phi_2(\xi)}{\xi^{2 - 2(r-\delta)/\sigma^2}}\, d\xi \right) + \phi_2(s) \left( c_2(b) + \frac{4c}{\alpha\sigma^2} \int_s^b \frac{\phi_1(\xi)}{\xi^{2 - 2(r-\delta)/\sigma^2}}\, d\xi \right) \tag{3.8}
\]
for $s \in (0, b)$. So in order to find the solutions to system (3.7) we need to find those $b \in (s_r, \infty)$ for which $f_b(0+) = 0$. Using $\phi_1(0+) = \infty$, $\phi_2(0+) = 0$ and (A.4) from Lemma A.2 (ii) & (iii), we see that this holds iff the first of the two bracketed terms in (3.8) vanishes as $s \downarrow 0$, which using the formula for $c_1$ boils down to
\[
\gamma b^{2(r-\delta)/\sigma^2} \left( b\phi_2'(b) - \phi_2(b) \right) - \frac{2c}{\sigma^2} \int_0^b \xi^{2(r-\delta)/\sigma^2 - 2}\, \phi_2(\xi)\, d\xi = 0. \tag{3.9}
\]
On account of $\phi_2(0+) = \phi_2'(0+) = 0$ by Lemma A.2 (ii), the above lhs vanishes as $b \downarrow 0$. Furthermore, differentiating this lhs, thereby using that by definition $(L - (r + \chi(s)))\phi_2(s) = 0$ (Lemma A.2 (i)), gives the quantity $-2 b^{2(r-\delta)/\sigma^2 - 2}\, \phi_2(b)\, \lambda(b)/\sigma^2$. So (3.9) may be written as $F(b) = 0$ with $F$ given by (3.2), and furthermore this derivation shows, together with $\phi_2 > 0$ (Lemma A.2 (i)) and $\lambda < 0$ on $(s_r, \infty)$ (cf. Proposition 3.2), that $F$ is strictly increasing on $(s_r, \infty)$; thus it has a (unique) root on this interval iff $F(s_r) < 0$. If $F(s_r) < 0$ holds and $b^*$ denotes this unique root, the pair $(f^*, b^*)$ is thus the unique solution to system (3.7), where $f^* = f_{b^*}$ takes the required form by adjusting (3.8) for $b = b^*$. A straightforward computation with $\phi_2'(0+) = 0$ and (A.5) from Lemma A.2 (ii) & (iii) yields that $f^{*\prime}(0+)$ equals $\infty$, $c$, $0$ for $\alpha < 1$, $\alpha = 1$, $\alpha > 1$, resp.
Step 2. Suppose in this step that $\alpha > 1$, so that $s_l$ is well defined. Consider the free boundary problem in unknowns $(f, a, b) \in C^2(a, b) \times (0, s_l) \times (s_r, \infty)$:
\[
\begin{cases}
(L - (r + \chi(s)))f(s) + c = 0 \quad\text{on } (a, b) \\
f(b-) = \gamma b,\; f'(b-) = \gamma \\
f(a+) = \gamma a,\; f'(a+) = \gamma.
\end{cases} \tag{3.10}
\]
Let us show that the set of solutions consists of all pairs $(a, b)$ that satisfy (3.4)-(3.5), with associated solution function $f$ given by the first two lines of the rhs of (3.6) with $a_\alpha$ and $b_\alpha$ replaced by $a$ and $b$ resp., and that for any solution we have $f(s) > \gamma s$ on $(a, b)$.

By the same arguments as in Step 1, for any $(a, b) \in (0, s_l) \times (s_r, \infty)$ there exists a unique $C^2(a, b)$-function that satisfies the initial value problem consisting of the first two lines of system (3.10), namely $f_b(s)$ as given by (3.8), considered for $s \in (a, b)$. Using this formula for $f_b$ shows readily that $f_b(a+) = \gamma a$ and $f_b'(a+) = \gamma$ hold iff $(a, b)$ satisfies (3.4)-(3.5).

Let now $(f, a, b)$ be any solution to the system. For proving that $f(s) > \gamma s$ on $(a, b)$ we only need the system itself and the sign of $\lambda$ from Proposition 3.2. First consider $f$ on $[s_r, b)$. Taking the limit $s \uparrow b$ in $(L - (r + \chi(s)))f(s) + c = 0$ and using the boundary conditions in $s = b-$, we find that $\sigma^2 b^2 f''(b-)/2 = -\lambda(b) > 0$. Thus, by again the boundary conditions at $s = b-$,
\[
f'(s) \uparrow \gamma \quad\text{and}\quad f(s) - \gamma s \downarrow 0 \quad\text{as } s \uparrow b. \tag{3.11}
\]
Now suppose that $s_0 \in [s_r, b)$ exists with $f(s_0) = \gamma s_0$ and let it w.l.o.g. be the largest such point, so that $f(s) > \gamma s$ on $(s_0, b)$. It follows that there has to be a point in $(s_0, b)$ where $f'$ is larger than $\gamma$, since otherwise $f(s_0) = \gamma s_0$ is impossible on account of (3.11). Let $s_1 \in (s_0, b)$ be the largest point where $f'$ attains the value $\gamma$, so that $f''(s_1) \le 0$, $f'(s_1) = \gamma$ and $f(s_1) > \gamma s_1$. But if we plug the latter two into $(L - (r + \chi(s_1)))f(s_1) + c = 0$, it follows that $\sigma^2 s_1^2 f''(s_1)/2 \ge -\lambda(s_1) > 0$, and thus a contradiction is obtained.

The same idea, but "reflected", can be used to show that $f(s) > \gamma s$ on $(a, s_l]$. So it remains to show that $f(s) > \gamma s$ also holds on $(s_l, s_r)$. Suppose that this assertion does not hold and let, using $f(s_r) > \gamma s_r$, $s_2$ be the largest solution of $f(s) = \gamma s$ on $(s_l, s_r)$, so that $f(s_2) = \gamma s_2$ and $f'(s_2) \ge \gamma$. Plugging this in $(L - (r + \chi(s_2)))f(s_2) + c = 0$ and using that $\lambda(s_2) > 0$, it follows that $f''(s_2) < 0$, so that
\[
f'(s) \downarrow f'(s_2) \ge \gamma \quad\text{and}\quad f(s) - \gamma s \uparrow 0 \quad\text{as } s \uparrow s_2. \tag{3.12}
\]
It follows that there has to be a point in $(s_l, s_2)$ where $f'$ is smaller than $\gamma$, since otherwise (3.12) would imply that $f(s_l) < \gamma s_l$ and we know already that $f(s_l) > \gamma s_l$. In particular, taking again (3.12) into account, $f'$ has to have a largest point $s_3 \in (s_l, s_2)$ where it equals $\gamma$, i.e. $f''(s_3) \ge 0$, $f'(s_3) = \gamma$ and $f(s_3) \le \gamma s_3$. But as before we can plug the latter two into $(L - (r + \chi(s_3)))f(s_3) + c = 0$ and use $\lambda(s_3) > 0$ to derive that $f''(s_3) < 0$ and obtain a contradiction.
Step 3. Ad (i). Let $\alpha \in (0, 1)$ or both $\alpha = 1$ and $c > \gamma$. Using the behaviour of $\lambda$ from Proposition 3.2, it follows from Theorem 1.9 (i), (ii) & (iv) that $S_\alpha = [b_\alpha, \infty)$ for some $b_\alpha \in (s_r, \hat s_0]$. Next, applying Theorem 1.9 (iii) and using $v_\alpha(0+) = 0$ (by Proposition 3.1), it follows that the pair $(v_\alpha|_{(0, b_\alpha)}, b_\alpha)$ solves the free boundary system (3.7). As seen in Step 1, this system has a unique solution pair $(f^*, b^*)$. Thus we may identify $b_\alpha = b^*$ and $v_\alpha|_{(0, b_\alpha)} = f^*$, and using the properties of $(f^*, b^*)$ derived in Step 1, the results follow.

Ad (iii). Let $\alpha > 1$ and $\alpha \in J$. Using the behaviour of $\lambda$ from Proposition 3.2, it follows from Theorem 1.9 (i), (ii) & (iv) that there are two possibilities: either $S_\alpha = [b_\alpha, \infty)$ or $S_\alpha = (0, a_\alpha] \cup [b_\alpha, \infty)$ for some $a_\alpha \in (0, s_l)$ and $b_\alpha \in (s_r, \hat s_0]$. Let us show that the former cannot hold. Namely, if this were the case, it would by the same means as in the previous paragraph follow that $(v_\alpha|_{(0, b_\alpha)}, b_\alpha)$ is the unique solution to system (3.7). But the result from Step 1 would now imply that $v_\alpha'(0+) = 0$, and since $v_\alpha(0+) = 0$ (cf. Proposition 3.1) this would contradict $v_\alpha(s) \ge \gamma s$ on $\mathbb{R}_{>0}$.

So indeed $S_\alpha = (0, a_\alpha] \cup [b_\alpha, \infty)$. From Theorem 1.9 (iii) we have that the triplet $(v_\alpha|_{(a_\alpha, b_\alpha)}, a_\alpha, b_\alpha)$ solves the system (3.10) from Step 2. It follows by Step 2 that $(a_\alpha, b_\alpha)$ solves equations (3.4)-(3.5) and that $v_\alpha$ is given by (3.6). So it remains to show that (3.4)-(3.5) has at most one solution. For this, let $(a, b)$ be any solution to those equations and let, by Step 2, $f \in C^2(a, b)$ be the function for which the triplet $(f, a, b)$ solves system (3.10). We extend $f$ to a $C^2(\mathbb{R}_{>0} \setminus \{a, b\}) \cap C^1(\mathbb{R}_{>0})$-function by setting $f(s) = \gamma s$ on $(0, a] \cup [b, \infty)$. Applying Lemma A.1 (i), thereby using that $f(s) = \gamma s$ on $(0, a] \cup [b, \infty)$ and $(L - (r + \chi(s)))f(s) + c = 0$ on $(a, b)$, and Lemma A.1 (ii), thereby using that $f(s) \ge \gamma s$ on $\mathbb{R}_{>0}$ (cf. Step 2) and $(L - (r + \chi(s)))f(s) + c = \lambda(s) \le 0$ on $(0, a) \cup (b, \infty) \subset (0, s_l) \cup (s_r, \infty)$ (cf. Proposition 3.2), it follows that $f(s) = \sup_{\tau \in \mathcal T_{0,\infty}} \mathbb{E}_s[L_\tau] = v_\alpha(s)$ on $\mathbb{R}_{>0}$. Since, using Step 2, $f(s) > \gamma s$ iff $s \in (a, b)$ and $v_\alpha(s) > \gamma s$ iff $s \in (a_\alpha, b_\alpha)$, this indeed implies $(a, b) = (a_\alpha, b_\alpha)$.
4 Some plots
[Figure 1: plot of the value function; the marked points on the $s$-axis are $c/(\delta\gamma)$ and $b_p$.]

Figure 1: Situation as in Theorem 2.2 (i), with $\sigma = 0.2$, $r = 0.1$, $\delta = 0.05$, $\gamma = 1$, $c = 0.5$, $\bar s = 2.5$ and $p = 0.06$. The solid line is $v_p$, the three dotted lines are (from the bottom up) $s \mapsto \gamma s$, $\hat v_p$ and $\hat v_0$ resp. Note that $\hat v_0$ is the value in the standard case without default.
[Figure 2: plot of the value function; the marked points on the $s$-axis are $\hat s_p$, $a_p$, $c/(\delta\gamma)$ and $b_p$.]

Figure 2: Situation as in Theorem 2.2 (ii), with the same parameters as in Figure 1, except with $p = 0.4$. The solid line is $v_p$, the two dotted lines are (from the bottom up) $s \mapsto \gamma s$ and $\hat v_0$ resp. Note that $\hat v_0$ is the value in the standard case without default.
[Figure 3: plot of the value function; the marked points on the $s$-axis are $s_r$ and $b_\alpha$.]

Figure 3: Situation as in Theorem 3.3 (i), with $\sigma = 0.2$, $r = 0.1$, $\delta = 0.05$, $\gamma = 1$, $c = 1.25$ and $\alpha = 0.2$. The solid line is $v_\alpha$, the dotted one is $s \mapsto \gamma s$.
[Figure 4: plot of the value function; the marked points on the $s$-axis are $a_\alpha$, $s_0$, $s_r$ and $b_\alpha$.]

Figure 4: Situation as in Theorem 3.3 (iii), with the same parameters as in Figure 3, except with $\alpha = 5$. The solid line is $v_\alpha$, the dotted one is $s \mapsto \gamma s$.
A Appendix

Lemma A.1. Let $I \subset \mathbb{R}_{>0}$ be an interval and $N_f \subset I$ be a finite set. Let $f$ be a continuous function on $\bar I$ such that $f \in C^2(I \setminus N_f) \cap C^1(I)$ and the limits $f''(a\pm)$ exist and are finite for all $a \in N_f$. We have the following.

(i) Suppose that $f$ is bounded and satisfies $(L - (r + \chi(s)))f(s) + c = 0$ on $I \setminus N_f$. Then
\[
f(s) = \mathbb{E}_s\left[ 1_{\{\tau(I) < \infty\}}\, e^{-r\tau(I) - \varphi_{\tau(I)}} f(S_{\tau(I)}) + \int_0^{\tau(I)} c\, e^{-ru - \varphi_u}\, du \right], \quad \forall s \in I.
\]

(ii) Suppose that $f$ satisfies
\[
\begin{cases}
(L - (r + \chi(s)))f(s) + c \le 0 & \text{on } I \setminus N_f \\
f(s) \ge \gamma s & \text{on } \bar I.
\end{cases}
\]
Then
\[
f(s) \ge \sup_{\tau \in \mathcal T_{0,\infty}} \mathbb{E}_s\left[ L^{\tau(I)}_\tau \right], \quad \forall s \in I.
\]
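Behind Lemma A.1 is the fact that for $f$ solving the ODE, the process $e^{-rt - \varphi_t} f(S_t) + \int_0^t c\, e^{-ru - \varphi_u}\, du$ has zero drift, the drift being $(L - (r + \chi))f + c$ times the discount factor. A minimal sketch (not from the paper) checking this generator identity for a constant intensity $q$, where $f(s) = C s^{\beta^q_2} + c/(r+q)$ with $\beta^q_2$ the positive root of $(\sigma^2/2)\beta(\beta-1) + (r-\delta)\beta - (r+q) = 0$ and $C$ an arbitrary constant:

```python
import math

sigma, r, delta, q, c = 0.2, 0.1, 0.05, 0.3, 0.5
C = 0.7  # arbitrary; any C solves the homogeneous part of the ODE

A, B = sigma**2 / 2, r - delta - sigma**2 / 2
beta2 = (-B + math.sqrt(B * B + 4 * A * (r + q))) / (2 * A)

def f(s):
    return C * s ** beta2 + c / (r + q)

def generator_term(s, h=1e-3):
    """(L - (r+q)) f(s) + c with L f = (sigma^2/2) s^2 f'' + (r-delta) s f'."""
    d1 = (f(s + h) - f(s - h)) / (2 * h)
    d2 = (f(s + h) - 2 * f(s) + f(s - h)) / h ** 2
    return A * s * s * d2 + (r - delta) * s * d1 - (r + q) * f(s) + c
```

The residual vanishes (up to finite-difference error) at every point, which is what makes the discounted process driftless and yields the representation in part (i) via optional sampling.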
Proof. First consider a function that satisfies the weaker requirement $h \in C^2(I \setminus D) \cap C(I)$ for some finite set $D$, with existing and finite limits $h'(a\pm)$ for all $a \in D$. Applying the