The truncation error is composed of two parts:
- local truncation error: it results from the application of a single step of the method:

$$\tau_{i+1} = \frac{y(t_{i+1}) - \left[ y_i + h \cdot \Phi(t_i, y_i, h, f) \right]}{h}$$

which compares the approximation obtained by applying the Taylor Method at step $i$ with the exact value of the function $y(t_{i+1})$
- propagated truncation error: it results from the propagation of the local truncation
errors in the following steps
In order to study the error we then focus on the local truncation error. If a method has a local truncation error of order $O(h^p)$ we say that the method has order $p$. The key point is to study the local truncation error, since it is also the one that can be precisely measured and gives insight into what kind of error the method produces.
For the Euler Method the local truncation error essentially corresponds to the remaining
terms in the Taylor expansion that are dropped:
$$\tau_{i+1} = \frac{h}{2}\, y''(t_i) + \dots + \frac{h^{n-1}}{n!}\, y^{(n)}(t_i) + O(h^n) = \frac{h}{2}\, y''(\zeta) \approx \frac{h}{2}\, f'(t_i, y_i), \qquad \zeta \in (t_i, t_{i+1})$$

so the local truncation error is $O(h)$.
Based on this observation it can be proved that overall the truncation error is proportional to the step size. Therefore the global error can be reduced by decreasing the step size. This method will provide error-free predictions if the solution is linear, because in that case we have $y'' = 0 \Rightarrow \tau_{i+1} = 0$.
Intuitively, since the value of the error is the difference between the tangent line at the point $t_{i+1}$ and the real function $y(t_{i+1})$:

$$y_{i+1} - y(t_{i+1}) = y_i + h \cdot f(t_i, y_i) - y(t_{i+1})$$

the smaller we take the step, the smaller this difference will be!
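As a quick check of the claim that the global error is proportional to the step size, here is a minimal Python sketch (the test problem $y' = -y$, $y(0) = 1$ and the helper name `euler` are illustrative choices, not part of the notes): halving $h$ roughly halves the error at $t = 1$.

```python
import math

def euler(f, t0, y0, t_end, h):
    """Explicit Euler: advance y' = f(t, y) from t0 to t_end with fixed step h."""
    n = round((t_end - t0) / h)   # number of steps (h is assumed to divide the interval)
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

f = lambda t, y: -y               # test problem, exact solution y(t) = exp(-t)
exact = math.exp(-1.0)

for h in [0.1, 0.05, 0.025]:
    err = abs(euler(f, 0.0, 1.0, 1.0, h) - exact)
    print(f"h = {h:<5}  global error at t = 1: {err:.6f}")
# Halving h roughly halves the global error, consistent with order 1 accuracy.
```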
Moreover we should consider the stability of the method:
a numerical solution is said to be unstable if the errors grow exponentially for a problem
which has a bounded solution. The stability of a particular application can depend on three
factors:
- the differential equation itself
- the numerical method
- the size of the considered step
An insight into the step size required to obtain stability can be gained by studying a model problem:

$$y'(t) = \lambda y(t), \qquad y(0) = y_0$$
Now suppose we use the Euler Method to numerically solve the problem:

$$y_{i+1} = y_i + h \lambda y_i, \qquad y_0 \text{ given by the initial condition}$$

This leads to: $y_{i+1} = y_i \cdot (1 + h\lambda)$
Now, $y_i$ is obtained according to the same formula, just with the indices shifted back by 1:

$$y_{i+1} = (1 + h\lambda)(1 + h\lambda)\, y_{i-1} = (1 + h\lambda)^2\, y_{i-1}$$

I can then repeat this substitution until I reach $y_0$:

$$y_{i+1} = (1 + h\lambda)^{i+1}\, y_0$$
How does this compare to the exact solution?
Since the exact solution is $y(t) = y_0 e^{\lambda t}$ with $\lambda < 0$, which implies

$$\lim_{t \to \infty} y(t) = 0$$

the numerical solution must have the same behaviour, that is

$$\lim_{i \to \infty} y_{i+1} = 0 \iff |1 + h\lambda| < 1$$

This means that for the Euler Method we can identify the stability region as the unit circle in the complex $h\lambda$ plane with center in $(-1, 0)$.
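For real $\lambda < 0$ this condition can be unfolded explicitly (a small worked step, not spelled out in the notes):

$$|1 + h\lambda| < 1 \iff -1 < 1 + h\lambda < 1 \iff -2 < h\lambda < 0 \iff 0 < h < \frac{2}{|\lambda|}$$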
Let’s consider an example about stability, approximating the following problem with the Euler Method:

$$y'(t) = -y(t), \qquad t \in [0, 10], \qquad y_0 = 1$$

Considering that $|1 + h\lambda| = |1 - h| < 1$, we can notice how changing the value of $h$ greatly affects the final results.
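A minimal Python sketch of this experiment (the specific step sizes 0.5, 1.25 and 2.5 are illustrative choices): for $\lambda = -1$ the stability bound is $h < 2$, so only the last step size should blow up.

```python
def euler_solve(f, t0, y0, t_end, h):
    """Explicit Euler on y' = f(t, y); returns the approximation at t_end."""
    n = round((t_end - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

f = lambda t, y: -y   # model problem with lambda = -1, exact solution decays to 0

for h in [0.5, 1.25, 2.5]:
    print(f"h = {h}: y(10) ≈ {euler_solve(f, 0.0, 1.0, 10.0, h):.4g}")
# h = 0.5 decays monotonically and h = 1.25 decays while oscillating (|1 - h| < 1);
# h = 2.5 produces values that grow in magnitude (|1 - h| = 1.5 > 1): the method is unstable.
```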
We say that stability of the error is a necessary and sufficient condition for the solution to be asymptotically correct.
One important remark about the Euler Method is that the derivative at the beginning of the interval is assumed to apply across the entire interval. This derivative (the tangent line) that we are using to approximate the function over the whole interval somehow approximates the derivative of $y$.
There are two ways to improve the method and get a better approximation of the derivative of $y$ in the interval:
- Heun Method
- Midpoint Method
Note that both have local truncation error of order $O(h^2)$.
Heun Method:
Method to improve the estimate of the slope of the derivative of $y$: two derivatives are determined, one at the beginning and one at the end of the interval. The two derivatives are then averaged together to obtain an improved estimate of the slope over the entire interval:

1: $f(t_i, y_i)$

2: $f(t_{i+1}, y^0_{i+1})$

average: $\dfrac{f(t_i, y_i) + f(t_{i+1}, y^0_{i+1})}{2}$
So, starting from the Euler Method, we write:

$$y^0_{i+1} = y_i + h\, f(t_i, y_i)$$

Here the value $y^0_{i+1}$ is not the final approximation value at $t_{i+1}$, but it has to be intended as a prediction that helps to compute a new estimate for the slope at the end of the interval:

$$y'_{i+1} = f(t_{i+1}, y^0_{i+1})$$
therefore the average slope for the interval is obtained by

$$\frac{f(t_i, y_i) + f(t_{i+1}, y^0_{i+1})}{2}$$

Finally we derive the Heun Method:

$$y_{i+1} = y_i + h \cdot \frac{f(t_i, y_i) + f(t_{i+1}, y^0_{i+1})}{2}$$
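A minimal Python sketch of one Heun step in this predictor-corrector form (the function name `heun_step` and the test problem are illustrative):

```python
def heun_step(f, t, y, h):
    """One step of the Heun Method for y' = f(t, y)."""
    slope_start = f(t, y)                   # slope at the beginning of the interval
    y_pred = y + h * slope_start            # Euler predictor y^0_{i+1}
    slope_end = f(t + h, y_pred)            # slope at the end of the interval
    return y + h * (slope_start + slope_end) / 2   # corrector: average of the two slopes

# Example on y' = -y, y(0) = 1, one step of size h = 0.1:
print(heun_step(lambda t, y: -y, 0.0, 1.0, 0.1))   # ≈ 0.905 (exact e^{-0.1} ≈ 0.9048)
```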
Midpoint Method:
Method to improve the estimate of the slope of the derivative of $y$: instead of using an average between the slopes at the extremes of the interval, here we use the Euler Method to predict a new value at the midpoint of the interval. This way we use Euler to just move half a step $h/2$: we don’t go directly from $t_i$ to $t_{i+1}$ but to $t_{i+1/2}$. To do this we estimate the slope:

$$f(t_{i+1/2}, y_{i+1/2})$$

The predicted value $y_{i+1/2}$ is then used to calculate the slope at the midpoint, assumed to represent a valid approximation of the average slope for the entire interval. This slope is then used to extrapolate linearly from $t_i$ to $t_{i+1}$.
Formally, we do one step of the Euler Method with step $h/2$:

$$y_{i+1/2} = y_i + \frac{h}{2} \cdot f(t_i, y_i)$$

so the new slope is given by:

$$y'_{i+1/2} = f(t_{i+1/2}, y_{i+1/2})$$

and finally the Midpoint Method is given by:

$$y_{i+1} = y_i + h\, f(t_{i+1/2}, y_{i+1/2})$$
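A matching Python sketch of one Midpoint step (again with an illustrative test problem):

```python
def midpoint_step(f, t, y, h):
    """One step of the Midpoint Method for y' = f(t, y)."""
    y_half = y + (h / 2) * f(t, y)        # half an Euler step to reach the midpoint
    slope_mid = f(t + h / 2, y_half)      # slope evaluated at the midpoint
    return y + h * slope_mid              # extrapolate linearly over the whole step

print(midpoint_step(lambda t, y: -y, 0.0, 1.0, 0.1))   # ≈ 0.905 (exact e^{-0.1} ≈ 0.9048)
```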
The Heun and the Midpoint Method both belong to a larger class of methods, called the Runge-Kutta Methods. The aim of a Runge-Kutta Method is to create an increment function that emulates the one of the Taylor Method, without the need to compute derivatives, while at the same time emulating the function pretty closely.
Their general form is:

$$y_{i+1} = y_i + h \cdot \Phi$$

where the increment function $\Phi$ is

$$\Phi = a_1 k_1 + a_2 k_2 + \dots + a_n k_n$$
and the $a$’s are constants and the $k$’s are functions:
- $k_1 = f(t_i, y_i)$
- $k_2 = f(t_i + p_1 h,\; y_i + q_{11} k_1 h)$
- $k_3 = f(t_i + p_2 h,\; y_i + q_{21} k_1 h + q_{22} k_2 h)$
- ...
- $k_n = f(t_i + p_{n-1} h,\; y_i + q_{n-1,1} k_1 h + q_{n-1,2} k_2 h + \dots + q_{n-1,n-1} k_{n-1} h)$
Note that here the increment function $\Phi$ is not a Taylor expansion, but instead a combination of coefficients with the functions $k_1, \dots, k_n$, which are evaluations of $f$ at those points.
The choice of the coefficients $p_1, \dots, p_{n-1}$ and $q_{11}, \dots, q_{n-1,n-1}$ (together with the $a$’s) determines the method.
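The structure above can be written as a short generic routine. The following Python sketch (the function name `rk_step` and the list-based encoding of the coefficients are illustrative assumptions) evaluates the $k$’s in sequence and then combines them with the $a$’s:

```python
def rk_step(f, t, y, h, a, p, q):
    """One step of a generic explicit Runge-Kutta method in the notation above.

    a = [a1, ..., an], p = [p1, ..., p_{n-1}], q = [[q11], [q21, q22], ...].
    """
    k = [f(t, y)]                                         # k1
    for j in range(1, len(a)):
        y_stage = y + h * sum(q[j - 1][m] * k[m] for m in range(j))
        k.append(f(t + p[j - 1] * h, y_stage))            # k_{j+1}
    return y + h * sum(ai * ki for ai, ki in zip(a, k))

# The Heun Method as a two-stage instance: a = [1/2, 1/2], p = [1], q = [[1]]
print(rk_step(lambda t, y: -y, 0.0, 1.0, 0.1, [0.5, 0.5], [1.0], [[1.0]]))   # ≈ 0.905
```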
The two-stage Runge-Kutta Method is given by:

$$y_{i+1} = y_i + h\, (a_1 k_1 + a_2 k_2)$$

with $a_1$, $a_2$, $p_1$ and $q_{11}$ determined by imposing that $\Phi = a_1 k_1 + a_2 k_2$ approximates the increment function of an order 2 Taylor method. Here $a_1 k_1 + a_2 k_2$ is tuned so that the local truncation error tends to be the same.
This leads to 3 equations and 4 unknowns:

$$a_1 + a_2 = 1$$
$$a_2 p_1 = \frac{1}{2}$$
$$a_2 q_{11} = \frac{1}{2}$$

We assume a value of one of the unknowns to determine the other three. Since here for the “free variable” we can choose an infinite number of values, there are an infinite number of different two-stage Runge-Kutta Methods.
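A small Python sketch of this one-parameter family (the parametrization by $a_2$ and the function name are illustrative): choosing $a_2 = 1/2$ reproduces the Heun step, while $a_2 = 1$ reproduces the Midpoint step.

```python
def rk2_step(f, t, y, h, a2):
    """One step of a two-stage Runge-Kutta method, parametrized by the free variable a2.

    The constraints a1 + a2 = 1, a2 * p1 = 1/2, a2 * q11 = 1/2 fix the other coefficients.
    """
    a1 = 1.0 - a2
    p1 = q11 = 1.0 / (2.0 * a2)
    k1 = f(t, y)
    k2 = f(t + p1 * h, y + q11 * k1 * h)
    return y + h * (a1 * k1 + a2 * k2)

f = lambda t, y: -y
print(rk2_step(f, 0.0, 1.0, 0.1, a2=0.5))   # Heun step
print(rk2_step(f, 0.0, 1.0, 0.1, a2=1.0))   # Midpoint step (same value here, since f is linear)
```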
For example, assuming $a_2 = \frac{1}{2}$, we get that $a_1 = \frac{1}{2}$, $p_1 = 1$ and $q_{11} = 1$, which gives us the Heun Method:

$$y_{i+1} = y_i + h \cdot \left( \frac{1}{2} k_1 + \frac{1}{2} k_2 \right)$$
with:
- $k_1 = f(t_i, y_i)$: slope at the beginning of the interval
- $k_2 = f(t_i + h,\; y_i + k_1 h)$: slope at the end of the interval
One of the most used methods is a four-stage one:

$$y_{i+1} = y_i + \frac{1}{6} \left( k_1 + 2 k_2 + 2 k_3 + k_4 \right) \cdot h$$

with:
- $k_1 = f(t_i, y_i)$: slope at the beginning of the interval
- $k_2 = f(t_i + \frac{1}{2} h,\; y_i + \frac{1}{2} k_1 h)$: slope at the midpoint of the interval, using $k_1$
- $k_3 = f(t_i + \frac{1}{2} h,\; y_i + \frac{1}{2} k_2 h)$: slope at the midpoint of the interval, using $k_2$
- $k_4 = f(t_i + h,\; y_i + k_3 h)$: slope at the end of the interval, using $k_3$
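A minimal Python sketch of this classical four-stage step, commonly known as RK4 (the test problem is again an illustrative choice):

```python
def rk4_step(f, t, y, h):
    """One step of the classical four-stage Runge-Kutta method (RK4) for y' = f(t, y)."""
    k1 = f(t, y)                          # slope at the beginning of the interval
    k2 = f(t + h / 2, y + h / 2 * k1)     # slope at the midpoint, using k1
    k3 = f(t + h / 2, y + h / 2 * k2)     # slope at the midpoint, using k2
    k4 = f(t + h, y + h * k3)             # slope at the end of the interval, using k3
    return y + (k1 + 2 * k2 + 2 * k3 + k4) * h / 6

print(rk4_step(lambda t, y: -y, 0.0, 1.0, 0.1))   # ≈ 0.9048375 (exact e^{-0.1} ≈ 0.9048374)
```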
Note that each method is just a matter of a different set of $k$’s, which corresponds to a set of evaluations of the slope at different points of the interval. Remember that all we are doing is just trying to find a better approximation of the slope, which in the end gives a better approximation of the unknown function. For methods with more than four stages, the added number of stages doesn’t pay off as much as before: you just spend more computational effort to get essentially no benefit.