STOCHASTIC PROCESSES
- this is a theoretical course
- applied examples
- books: Sacerdote, Kolokoltsov; Schilling, Partzsch; Karlin and Taylor (towards applications)
- exercises to do during the semester, a written part in June, and lectures with exercises
Brownian motion
In 1828, a botanist (Robert Brown) observed the movement of pollen grains suspended in water ⇒ the goal is to understand how they move. He observed:
- irregular movement; translations and rotations
- the motion never stops
- each particle moves independently from the others
- the motion is more active the less viscous the fluid is
In 1905, Einstein managed to put these observations together ⇒ he understood that atoms are moving and bombarding the macroparticles, which therefore move according to the motion of the atoms.
❗ We can have many different microscopic behaviors that give rise to the same macroscopic behavior
microscopic ➡️ macroscopic

White noise
when a signal has all possible frequencies:
∫ f(x) e^{-iωx} dx = f̂(ω)

Properties of Brownian motion
- the movement starts at x = 0
- the particle changes position only at discrete times kΔt, with Δt fixed and k ∈ ℕ
- each movement is of size Δx (a fixed unit), to the left or to the right, each with probability 1/2
- the displacement does not depend on any previous position nor on the present position, but only on the time t = kΔt
- as Δt → 0 we also have Δx → 0, due to the fact that the motion is continuous
We consider X_T, the position of the particle at time T, with N = ⌊T/Δt⌋ (the integer part of T/Δt), so that T = NΔt
- The left/right movements are independent and identically distributed
- N is the number of position changes
Introducing i.i.d. Bernoulli random variables {E_k}_k such that P(E_k = 1) = P(E_k = 0) = 1/2,
the number of movements toward the right is
S_N = ∑_{i=1}^{N} E_i
and the number of movements toward the left is given by N − S_N.
So we get
X_T = S_N Δx − (N − S_N) Δx = (2S_N − N) Δx = ∑_{i=1}^{N} (2E_i − 1) Δx.
Considering T = NΔt we get E[X_T] = 0 and Var(X_T) = N (Δx)²; keeping (Δx)²/Δt fixed (say Δx = √Δt), the central limit theorem gives X_T → N(0, T) in distribution as Δt → 0.
We have a broken line (the polygonal path of the walk), and by adding points we get a new, finer broken line;
as the points become dense in the limit, we end up with a path that has no point of differentiability!
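A minimal simulation sketch of this random-walk construction (assuming numpy; the sample sizes and seed are arbitrary, and Δx = √Δt so that the variance of X_T stays fixed as Δt → 0):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 1_000, 5_000
dt = T / n_steps
dx = np.sqrt(dt)            # keep dx**2 / dt = 1 fixed, so Var(X_T) = T

# E_k i.i.d. Bernoulli(1/2); jumps (2*E_k - 1)*dx = +dx or -dx
E = rng.integers(0, 2, size=(n_paths, n_steps))
X_T = ((2 * E - 1) * dx).sum(axis=1)   # position at time T for each path

print("mean of X_T:", X_T.mean())      # ~ 0
print("var  of X_T:", X_T.var())       # ~ T = 1  (CLT: X_T -> N(0, T))
```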
4 Let B(t) be a 1-dimensional Brownian motion on [0,1] and decompose B(1) as
B(1) = B(1/2) + (B(1) − B(1/2))
where
B(1) − B(1/2) ∼ B(1/2) (equality in distribution)
and we can observe that B(1/2) ⟂ (B(1) − B(1/2)) (they are independent); in particular both are normal with zero mean and variance 1/2
5 Let A and B be independent N(0,σ²) random variables.
Then A+B and A−B are two independent N(0,2σ²) random variables
Proof. Cov(A+B, A−B) = E[(A+B)(A−B)] = E[A² − B²] = E[A²] − E[B²] = 0
so they are uncorrelated; since (A+B, A−B) is a Gaussian vector, zero covariance implies independence ■
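A quick Monte Carlo sanity check of this fact (a sketch assuming numpy; σ and the sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 2.0, 200_000

A = rng.normal(0.0, sigma, size=n)
B = rng.normal(0.0, sigma, size=n)
S, D = A + B, A - B          # both should be N(0, 2*sigma**2)

print("Var(A+B):", S.var(), " expected:", 2 * sigma**2)
print("Var(A-B):", D.var(), " expected:", 2 * sigma**2)
print("Cov(A+B, A-B):", np.cov(S, D)[0, 1], " expected: 0")
```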
9 B(1/2) − ½B(1) and ½B(1) are independent
Proof. Write B(1/2) = (B(1/2) − ½B(1)) + ½B(1); both terms are Gaussian and
Cov(B(1/2) − ½B(1), B(1)) = Cov(B(1/2), B(1)) − ½Var(B(1)) = 1/2 − 1/2 = 0,
so the two terms are uncorrelated and hence independent ■
the COVARIANCE MATRIX OF Y
STANDARD GAUSSIAN RANDOM VECTORS
A vector X: Ω → ℝᵐ is a STANDARD GAUSSIAN vector if it has independent components that are all Gaussian with zero mean and variance 1 (N(0,1)).
A vector of random variables Y: Ω → ℝᵐ is MULTIVARIATE GAUSSIAN if there exist
- a standard Gaussian vector X in ℝᵐ,
- an m×m matrix A,
- an m-dimensional vector b
such that Y = AX + b
The Box–Muller method exploits the passage between Cartesian coordinates (x,y) and polar coordinates (r,θ): since we are able to generate (r,θ) as suitable random variables, we obtain (x,y) as two independent standard Gaussians.
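A minimal sketch of the Box–Muller recipe (assuming numpy; the function name box_muller is just illustrative):

```python
import numpy as np

def box_muller(n, rng=np.random.default_rng()):
    """Turn pairs of Uniform(0,1) samples into pairs of independent N(0,1) samples."""
    u1 = rng.uniform(size=n)
    u2 = rng.uniform(size=n)
    r = np.sqrt(-2.0 * np.log(u1))      # radius: R^2 has an exponential law
    theta = 2.0 * np.pi * u2            # angle: uniform on [0, 2*pi)
    x = r * np.cos(theta)
    y = r * np.sin(theta)
    return x, y                          # x, y independent N(0,1)

x, y = box_muller(100_000)
print(x.mean(), x.var(), np.cov(x, y)[0, 1])   # ~0, ~1, ~0
```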
! If we apply a linear transformation to a standard Gaussian vector, we get a multivariate Gaussian vector which is (in general) no longer standard.
! For a multivariate Gaussian vector we have
- cov(Y) = E[(Y−EY)(Y−EY)ᵀ] = AAᵀ
(this equation can be proved), and from cov(Y) = AAᵀ we can deduce that cov(Y) must be symmetric and positive semidefinite
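Going the other way, this suggests how to sample a Gaussian vector with a prescribed covariance: factor the covariance as AAᵀ (e.g. with a Cholesky factorization) and set Y = AX + b. A sketch assuming numpy; the particular covariance and b below are just examples:

```python
import numpy as np

rng = np.random.default_rng(2)

Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])          # target covariance (symmetric, pos. def.)
b = np.array([1.0, -1.0])               # target mean
A = np.linalg.cholesky(Sigma)           # Sigma = A @ A.T

X = rng.standard_normal(size=(2, 100_000))   # columns are standard Gaussian vectors
Y = A @ X + b[:, None]                       # Y = AX + b, so cov(Y) = A A^T = Sigma

print(np.cov(Y))                             # ~ Sigma
print(Y.mean(axis=1))                        # ~ b
```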
* If AAᵀ = I then we say that A is ORTHOGONAL.
LEMMA
If Θ is an m×m orthogonal matrix and X is an m-dimensional standard gaussian vector then ΘX is an m-dimensional standard gaussian vector.
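A quick numerical illustration of the lemma (a sketch assuming numpy; Θ is taken from the QR factorization of a random matrix, which makes it orthogonal):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 3

Theta, _ = np.linalg.qr(rng.standard_normal((m, m)))   # orthogonal m x m matrix
X = rng.standard_normal(size=(m, 200_000))             # standard Gaussian vectors

Y = Theta @ X
print(np.round(np.cov(Y), 3))   # ~ identity: Theta X is again standard Gaussian
```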
Let (B_t), t ≥ 0, be a Brownian motion and fix some α > 0. Then (W_t), W_t := B_{t+α} − B_α, is again a b.m.
3) Markov property of Brownian motion
P[B_s ∈ A | ℱ_t] = P[B_s ∈ A | B_t] with t ≤ s,
where ℱ_t = σ(B_u : u ≤ t); that is equivalent to saying
E[f(B_{t+h}) | ℱ_t] = E[f(B_{t+h}) | B_t] for all h ≥ 0,
for every bounded measurable function f
4) Independence of increments
B_t − B_s is independent of B_s for t ≥ s ≥ 0; equivalently, the covariance matrix of (B_s, B_t − B_s) is diagonal.
Recall: if (X_1,...,X_m) ∼ N(0, ξ) and we split the vector into two blocks, then, conditioned on (X_1 = x_1,...,X_k = x_k), the second block (X_{k+1},...,X_m) is
N(μ_{2|1}, Σ_{2|1}), where μ_{2|1} = Σ_{21} Σ_{11}⁻¹ x_1 and Σ_{2|1} = Σ_{22} − Σ_{21} Σ_{11}⁻¹ Σ_{12},
and in our case (with t ≥ s ≥ 0) the vector (B_s, B_t) has covariance matrix
ξ = ( s  s
      s  t )
so
Σ_{2|1} = Σ_{22} − Σ_{21} Σ_{11}⁻¹ Σ_{12} = t − s · s⁻¹ · s = t − s,
i.e. conditioned on B_s = x, B_t ∼ N(x, t − s); in particular B_t − B_s is independent of B_s.
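A small numerical check of this computation (a sketch assuming numpy; the times s, t are arbitrary): the increment B_t − B_s is uncorrelated with B_s, and the conditional variance Σ_{2|1} equals t − s.

```python
import numpy as np

rng = np.random.default_rng(4)
s, t, n = 0.5, 2.0, 300_000

B_s = rng.normal(0.0, np.sqrt(s), size=n)            # B_s ~ N(0, s)
B_t = B_s + rng.normal(0.0, np.sqrt(t - s), size=n)  # independent increment N(0, t-s)

print("Cov(B_s, B_t):      ", np.cov(B_s, B_t)[0, 1], " expected:", s)
print("Cov(B_s, B_t - B_s):", np.cov(B_s, B_t - B_s)[0, 1], " expected: 0")
print("Sigma_{2|1} = t - s*(1/s)*s =", t - s * (1.0 / s) * s, "= t - s")
```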
5) Time inversion (time reversal)
Let (B_t), t ≥ 0, be a Brownian motion and fix some α > 0 (a time horizon). Then W_t := B_{α−t} − B_α,
with t ∈ [0, α], is again a b.m. on [0, α]
6) Scaling property
For all c > 0 and t > 0 we have B_{ct} ∼ c^{1/2} B_t (equality in distribution)
and in general it is too small for applications, so we can enlarge it using
(i.e. it is a stopping time with respect to)
LEMMA
Let (X_t)_{t≥0} be a d-dimensional stochastic process with right-continuous sample paths and let U ⊂ ℝᵈ be an open set.
The first hitting time τ_U := inf{t ≥ 0 : X_t ∈ U} satisfies
{τ_U < t} = ⋃_{s < t, s ∈ ℚ⁺} {X_s ∈ U} ∈ ℱ_t,
and therefore τ_U is a stopping time with respect to the right-continuous filtration (ℱ_{t+})
EXAMPLE
For a 1-dimensional Brownian motion consider the FIRST PASSAGE TIME τ_b, the entry time into the closed set {b}.
Observe that
sup_{t≥0} B_t ≥ sup_{n∈ℕ} B_n = sup_{n∈ℕ} (X_1 + ⋯ + X_n),
where the random variables X_k := B_k − B_{k−1} are i.i.d. standard normal random variables.
Then, by the scaling property, sup_{t≥0} B_t ∼ c^{1/2} sup_{t≥0} B_t for every c > 0, so sup_{t≥0} B_t ∈ {0, +∞} almost surely.
By the i.i.d. property of the X_k, the event {sup_n (X_1 + ⋯ + X_n) = +∞} has probability 0 or 1, and P(sup_{t≥0} B_t > 0) ≥ P(B_1 > 0) = 1/2.
Since this probability is positive, we conclude that sup_{t≥0} B_t = +∞ almost surely.
The same argument applies to the minimum and we get inf_{t≥0} B_t = −∞ almost surely; hence τ_b < ∞ a.s. for every b ∈ ℝ.
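A rough simulation of the first passage time into {b} (a sketch assuming numpy; the level b, step Δt and horizon t_max are arbitrary, and the discretized path only approximates the true hitting time):

```python
import numpy as np

rng = np.random.default_rng(5)
b, dt, t_max, n_paths = 1.0, 1e-3, 20.0, 1_000
n_steps = int(t_max / dt)

hit_times = np.full(n_paths, np.inf)
for i in range(n_paths):
    B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n_steps))   # discretized BM path
    idx = np.argmax(B >= b)                 # first index where B >= b (0 if never)
    if B[idx] >= b:
        hit_times[i] = (idx + 1) * dt

print("fraction of paths that hit b by t_max:", np.isfinite(hit_times).mean())
# as t_max grows this fraction tends to 1: tau_b < infinity almost surely
```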
As t → t₀, p_{x₀,t₀}(x,t) → δ(x − x₀), the Dirac delta concentrated at x₀ (informally: δ(x − x₀) = 0 for x ≠ x₀, with ∫ f(x) δ(x − x₀) dx = f(x₀)).
Now we show that in dimension 1 (where the standard BM satisfies ∂p/∂t = ½ ∂²p/∂x²) the BM with σ² = 2 satisfies ∂p/∂t = ∂²p/∂x².
We need the FOURIER TRANSFORM
φ(λ,t) = (1/√(2π)) ∫_{−∞}^{+∞} p_{x₀,t₀}(x,t) e^{−iλx} dx
and the INVERSE FOURIER TRANSFORM will be
p_{x₀,t₀}(x,t) = (1/√(2π)) ∫_{−∞}^{+∞} φ(λ,t) e^{iλx} dλ
To prove it, first let us suppose that p is a solution of ∂p/∂t = ∂²p/∂x² and apply the Fourier transform:
∂φ/∂t(λ,t) = (1/√(2π)) ∫_{−∞}^{+∞} e^{−iλx} ∂p/∂t dx = (1/√(2π)) ∫_{−∞}^{+∞} e^{−iλx} ∂²p/∂x² dx
= (1/√(2π)) [e^{−iλx} ∂p/∂x]_{−∞}^{+∞} + (iλ/√(2π)) ∫_{−∞}^{+∞} e^{−iλx} ∂p/∂x dx
= (iλ/√(2π)) [e^{−iλx} p]_{−∞}^{+∞} + ((iλ)²/√(2π)) ∫_{−∞}^{+∞} e^{−iλx} p dx
= −λ² φ(λ,t)
↑ we integrate by parts twice; the boundary terms vanish because p and ∂p/∂x decay at ±∞
Then we have ∂φ/∂t(λ,t) = −λ² φ(λ,t), that is an
ORDINARY DIFFERENTIAL EQUATION (in t, for each fixed λ)
⇒ φ(λ,t) = φ(λ,t₀) e^{−λ²(t−t₀)}
and moreover lim_{t→t₀} φ(λ,t) = lim_{t→t₀} (1/√(2π)) ∫_{−∞}^{+∞} p(x,t) e^{−iλx} dx = (1/√(2π)) ∫_{−∞}^{+∞} e^{−iλx} δ(x−x₀) dx = (1/√(2π)) e^{−iλx₀}
so φ(λ,t₀) = (1/√(2π)) e^{−iλx₀} ⇒ φ(λ,t) = (1/√(2π)) e^{−iλx₀ − λ²(t−t₀)}
Now apply the inverse Fourier transform and we get
p_{x₀,t₀}(x,t) = (1/√(2π)) ∫_{−∞}^{+∞} e^{iλx} φ(λ,t) dλ
= (1/(2π)) ∫_{−∞}^{+∞} e^{−[λ²(t−t₀) − iλ(x−x₀)]} dλ
= (1/(2π)) e^{−(x−x₀)²/(4(t−t₀))} ∫_{−∞}^{+∞} e^{−(t−t₀)(λ − i(x−x₀)/(2(t−t₀)))²} dλ
= (1/(2π)) e^{−(x−x₀)²/(4(t−t₀))} √(π/(t−t₀))
= (1/√(4π(t−t₀))) e^{−(x−x₀)²/(4(t−t₀))},
the density of a N(x₀, 2(t−t₀)), as expected for the BM with σ² = 2.
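A symbolic check that this kernel indeed solves ∂p/∂t = ∂²p/∂x² and has total mass 1 (a sketch assuming sympy; τ stands for t − t₀):

```python
import sympy as sp

x, x0 = sp.symbols("x x0", real=True)
tau = sp.symbols("tau", positive=True)          # tau = t - t0 > 0

# transition density obtained above (sigma^2 = 2)
p = sp.exp(-(x - x0)**2 / (4 * tau)) / sp.sqrt(4 * sp.pi * tau)

# heat equation dp/dt = d^2 p / dx^2 (t enters only through tau, so d/dt = d/dtau)
print(sp.simplify(sp.diff(p, tau) - sp.diff(p, x, 2)))   # -> 0
# total mass is 1 for every tau > 0
print(sp.integrate(p, (x, -sp.oo, sp.oo)))                # -> 1
```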