Moment Estimation
The idea is to use the observations to estimate the parameters of the underlying distribution (whose family is assumed to be known) by equating its theoretical moments to the empirical ones.
Example 1
Observations X1, X2, ..., Xm come from a normal distribution N(μ, σ²)
The parameters of the density are estimated by matching the empirical moments:
→ μ̂ = X̄ = (1/m) Σi Xi
→ σ̂² = (1/(m-1)) Σi (Xi - X̄)²   ← denominator m-1 (instead of m) to avoid bias
Example 2
The observations come from a gamma distribution.
Only positive values are possible, so we want the density to respect this condition.
The mean is α/β
The variance is α/β²
Matching X̄ = α/β and S² = α/β², the estimators are:
β̂ = X̄ / S²    α̂ = X̄² / S²
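As an illustration, a minimal sketch of these moment estimators in Python (the synthetic data and the true parameter values are assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations from a gamma distribution (shape alpha, rate beta);
# the true values below are assumptions for the demo.
alpha_true, beta_true = 2.0, 3.0
x = rng.gamma(shape=alpha_true, scale=1.0 / beta_true, size=10_000)

# Method of moments: match the empirical mean and variance to
# E[X] = alpha/beta and Var[X] = alpha/beta^2.
x_bar = x.mean()
s2 = x.var(ddof=1)           # unbiased sample variance (denominator m-1)

beta_hat = x_bar / s2        # beta^ = mean / variance
alpha_hat = x_bar**2 / s2    # alpha^ = mean^2 / variance

print(f"alpha_hat = {alpha_hat:.3f}, beta_hat = {beta_hat:.3f}")
```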
DIFFERENTIAL APPROACH
Let us consider a function y(t).
An ODE is F(t, y(t), y'(t), y''(t), ..., y(m)(t)) = 0
The aim is to find the solution y(t) that satisfies the equation ∀t ∈ I; such a solution is called an integral of the equation.
PARTIAL DIFFERENTIAL EQUATION
Consider a function y(t, x1, ..., xm)
- F(t, y, ∂y/∂t, ∂y/∂x1, ..., ∂y/∂xm, ∂²y/∂t², ∂²y/∂x1², ..., ∂²y/∂t∂x1, ..., ∂²y/∂xm∂xn) = 0
∂y/∂t, ∂y/∂xi are the partial derivatives with respect to one of the independent variables
∂²y/∂t², ∂²y/∂xi² are the second partial derivatives
∂²y/∂t∂xi, ∂²y/∂xi∂xk are the mixed partial derivatives
SCHWARZ THEOREM
Mixed partial derivatives are equal regardless of the order of differentiation (provided they are continuous):
∂²y/∂t∂xn = ∂²y/∂xn∂t
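A quick symbolic check of the theorem, sketched with sympy on an arbitrary smooth function (the test function is an assumption for the demo):

```python
import sympy as sp

t, x1 = sp.symbols("t x1")
y = sp.sin(t * x1) + t**2 * sp.exp(x1)   # any smooth function works

# The two mixed partial derivatives coincide (Schwarz theorem).
print(sp.simplify(sp.diff(y, t, x1) - sp.diff(y, x1, t)))   # -> 0
```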
Let us consider a simple ODE y'(t) = f(t)
The solution is y(t) = ∫ f(τ) dτ + k
∴ there are infinitely many solutions, one for each value of the constant k
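For instance, a numerical sketch with scipy (f(t) = cos t is an assumed test function): each initial condition y(0) = k selects one member of the infinite family y = sin t + k.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'(t) = f(t) with f(t) = cos(t); the exact solutions are y = sin(t) + k.
f = lambda t, y: [np.cos(t)]

for k in (0.0, 1.0, 2.0):
    # Each initial condition y(0) = k fixes the integration constant.
    sol = solve_ivp(f, (0.0, 2 * np.pi), [k], rtol=1e-8)
    print(f"k = {k}: y(2*pi) = {sol.y[0, -1]:.4f}")   # ~ sin(2*pi) + k = k
```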
Optimization Approach
Optimization means finding the maxima and minima of a function over a given set.
If the set coincides with the domain, it is unconstrained optimization; otherwise, it is constrained optimization.
z = f(x, y) - Unconstrained
By the extension of Fermat's theorem, maxima and minima in the interior are found among the stationary points, i.e. those with null gradient:
∇f = (∂f/∂x, ∂f/∂y) = (0, 0)
Then consider the Hessian matrix:
H = | ∂²z/∂x²   ∂²z/∂x∂y |
    | ∂²z/∂y∂x  ∂²z/∂y²  |

det H > 0, 1st element > 0 → min
det H > 0, 1st element < 0 → max
det H < 0 → saddle point
det H = 0 → no information
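A minimal symbolic sketch of this classification (the example function z = x³ - 3x + y² is an assumption chosen for illustration):

```python
import sympy as sp

x, y = sp.symbols("x y")
z = x**3 - 3 * x + y**2        # example function (an assumption)

# Stationary points: null gradient.
grad = [sp.diff(z, v) for v in (x, y)]
points = sp.solve(grad, (x, y), dict=True)

# Classify each stationary point with the Hessian determinant rule.
H = sp.hessian(z, (x, y))
for p in points:
    Hp = H.subs(p)
    det, h11 = Hp.det(), Hp[0, 0]
    if det > 0:
        kind = "min" if h11 > 0 else "max"
    elif det < 0:
        kind = "saddle point"
    else:
        kind = "no information"
    print(p, "->", kind)
```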
→ z = f(x, y) - Constrained
- Study the unconstrained optimal points within the domain.
- Study the border of the domain:
- The border is a closed curve expressed as a function g(x, y) = 0
Two Possible Approaches
- Parameterization of the border
- Lagrange multipliers
- The border g(x, y) = 0 can be written parametrically as x = x(t), y = y(t), t ∈ I
- Substituting into f reduces the problem to optimizing the single-variable function h(t) = f(x(t), y(t))
- Λ(x, y, λ) = f(x, y) + λ g(x, y)
The max and min are those with null gradient:
∂Λ/∂x = ∂f(x, y)/∂x + λ ∂g(x, y)/∂x = 0
∂Λ/∂y = ∂f(x, y)/∂y + λ ∂g(x, y)/∂y = 0
∂Λ/∂λ = g(x, y) = 0
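A symbolic sketch of these multiplier conditions (the objective f = xy and the unit-circle constraint are assumptions chosen for illustration):

```python
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)

f = x * y                    # objective (an assumption for the demo)
g = x**2 + y**2 - 1          # constraint g(x, y) = 0: the unit circle

# Lagrangian and its stationarity conditions (the three equations above).
Lag = f + lam * g
eqs = [sp.diff(Lag, v) for v in (x, y, lam)]

for s in sp.solve(eqs, (x, y, lam), dict=True):
    print(s, " f =", f.subs(s))
```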
Lumped parameter model of the circulation
The state variables are the average pressures Pi
- Resistance Ri: viscous hydraulic resistance which opposes the advancement of the blood
- Compliance Ci: capacitative effect due to the elastic behaviour of the walls
- Inertance Li: inertial effect linked to the motion of the blood in the vessels
- Viscous resistance of the wall Rvi: dissipative effect linked to the viscous behaviour of the wall.
All these parameters are calculated starting from the rheological properties of the blood (ρ, μ) and geometrical and mechanical properties of the vessel (ri, hi, l, ε)
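As a hedged sketch, these are the classical expressions from Poiseuille flow and thin-walled elastic tube theory; both the formulas and the rough aortic numbers below are assumptions, and the course may use different ones:

```python
import math

def vessel_parameters(rho, mu, r, h, l, E):
    """Classical lumped-parameter estimates for a cylindrical vessel segment
    (Poiseuille flow + thin-walled elastic tube; assumed formulas)."""
    R = 8 * mu * l / (math.pi * r**4)          # viscous resistance
    L = rho * l / (math.pi * r**2)             # inertance
    C = 3 * math.pi * r**3 * l / (2 * E * h)   # compliance
    return R, L, C

# Rough aortic values in SI units (illustrative assumptions):
# blood rho ~ 1060 kg/m^3, mu ~ 4e-3 Pa*s, r = 1.2 cm, h = 2 mm, l = 10 cm.
R, L, C = vessel_parameters(rho=1060, mu=4e-3, r=0.012, h=0.002, l=0.1, E=4e5)
print(f"R = {R:.3g} Pa*s/m^3, L = {L:.3g} Pa*s^2/m^3, C = {C:.3g} m^3/Pa")
```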
Vessel model with electrical analogy
The two equations that describe the fluid dynamics are
C dP/dt = QIn - QOut
Pi - Pj = L dQ/dt + RQ
We can replace P with V (electrical voltage) and Q with electrical current I
C dV/dt = IIn - IOut
VIn - VOut = L dI/dt + RI
Resistance
V = RI
Inductance (=inertance)
V = L dI/dt
Capacitance (=compliance)
I = C dV/dt
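To make the analogy concrete, here is a minimal sketch of a two-element windkessel (one compliance C in parallel with one resistance R) driven by a pulsatile inflow; all numerical values are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-element windkessel: C dP/dt = Q_in(t) - P/R (values are illustrative).
R = 1.0    # peripheral resistance, mmHg*s/mL
C = 1.5    # compliance, mL/mmHg

def q_in(t):
    # Half-sine ejection during systole (0.3 s), zero flow in diastole
    # (cardiac period 0.8 s).
    tc = t % 0.8
    return 400.0 * np.sin(np.pi * tc / 0.3) if tc < 0.3 else 0.0

def dp_dt(t, p):
    return [(q_in(t) - p[0] / R) / C]

sol = solve_ivp(dp_dt, (0.0, 8.0), [80.0], max_step=1e-3)
last_beat = sol.t > 7.2
print(f"P over last beat: {sol.y[0][last_beat].min():.0f}"
      f"-{sol.y[0][last_beat].max():.0f} mmHg")
```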
Work loop circuit for in vitro studies of the cardiovascular system
Let us consider a circuit that mimics the physiological or pathological conditions of the human circulation
To have a reliable description of the circulation, it is fundamental that the parameters of the afterload circuit fit those of the physiological or pathological conditions to study.
The circuit is modeled through lumped parameters (LP).
Each parameter also affects the optimal value of the others; thus all of them have to be estimated from the physical circuit to ensure its effectiveness.
To estimate them, we can use an ODE-based approach.
dPC/dt = (1/C) Q - (1/(RP C)) PC
Combined, they give a 1st order ODE
dQ/dt ≈ (Qi - Qi-1) / Δt
d²Q/dt² ≈ (Qi - 2Qi-1 + Qi-2) / Δt²
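A small numerical sketch of these backward differences on a sampled signal (the sinusoidal test signal and the time step are assumptions for the demo):

```python
import numpy as np

# Finite differences on a sampled signal Q_i = Q(t_i).
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
Q = np.sin(2 * np.pi * t)

dQ = (Q[1:] - Q[:-1]) / dt                    # (Q_i - Q_{i-1}) / dt
d2Q = (Q[2:] - 2 * Q[1:-1] + Q[:-2]) / dt**2  # (Q_i - 2Q_{i-1} + Q_{i-2}) / dt^2

# Compare against the exact derivatives at an interior sample.
i = 100
print(dQ[i], 2 * np.pi * np.cos(2 * np.pi * t[i]))
print(d2Q[i], -(2 * np.pi)**2 * np.sin(2 * np.pi * t[i + 1]))
```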
Comparison of Two Models and Cross-Validation
When you compare two models, the prediction error should be evaluated using a given metric, such as the MSE. In general, the best model is the one that minimizes the prediction error:
MSE = (1/I) Σi (xi - x̂i)²    where x̂i is the model prediction for observation i
At the same time, the interpretability and explainability of the models should be considered too: we could decide to accept a slightly higher error if the model is less complex and more explainable → we look for a trade-off between error and complexity.
A cross-validation approach provides a powerful way to validate a model in order to avoid overfitting, which occurs when we give too much flexibility to the model and it starts following the local variability of the single observations.
Cross-validation consists of dividing the original dataset into a training set and a test set: the training set is used to develop the model, while the test set is used to compute a predictive error metric (like the MSE). The process can be iterated until the result is satisfactory.
The best model is the one that performs best during cross-validation.
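A compact cross-validation sketch with scikit-learn, comparing polynomial models of increasing flexibility (the synthetic data and the degrees tried are assumptions); the high-degree models illustrate overfitting:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 50)).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0.0, 0.2, 50)

# 5-fold cross-validation: the CV MSE penalizes both underfitting (degree 1)
# and overfitting (high degrees), exposing the error/complexity trade-off.
for degree in (1, 3, 9, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    mse = -cross_val_score(model, x, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"degree {degree:2d}: CV MSE = {mse:.4f}")
```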
Machine Learning Approach
Types of Machine Learning Techniques
Machine learning basically uses three types of techniques:
- Supervised Learning: Trains a model on known input and output data so that it can predict future outputs.
- Unsupervised Learning: Finds hidden patterns or intrinsic structures in input data.
- Reinforcement Learning: lies between supervised and unsupervised learning; the model learns from reward signals rather than from labeled examples.
Supervised Learning
Supervised learning builds a machine learning model that makes evidence-based predictions in the presence of uncertainty. A supervised learning algorithm takes a known set of input data and known responses to the data (output) and trains a model to generate reasonable predictions for the response to new data.
Supervised learning uses classification and regression techniques to develop predictive models.
In classification problems, the learning algorithm learns a function to map inputs to outputs when the output value is a discrete class label (e.g., malignant or benign).
Regression problems are concerned with mapping inputs to outputs where the output is a continuous real number.
To sum up:
- Classification → Sorting items into categories → discrete output
- Regression → Identifying the value of the output variable based on the value of the input item → continuous output
So learning is supervised when the model has to be trained on labeled data:
- Provide the machine learning algorithm with categorized or labeled input and output data pairs to learn from.
- Feed the machine new, unlabeled information to see if it tags the input data appropriately. If not, continue refining the algorithm. A minimal sketch of this workflow is given below.
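A minimal supervised-classification sketch with scikit-learn on its built-in malignant/benign dataset, matching the example above (the model choice and the train/test split are assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labeled pairs: tumour features with malignant/benign class labels.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train on the labeled data, then check predictions on unseen inputs;
# a poor score would send us back to refine the model.
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```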