THE TRANSFER FUNCTION METHOD
The single parameter p is a priori:
- Uniquely or globally identifiable if and only if the system has one and only one solution
- Nonuniquely or locally identifiable if and only if the system has more than one solution, but a finite number of solutions
- Nonidentifiable if and only if the system has an infinite number of solutions
The model is a priori:
- Uniquely or globally identifiable if all of its parameters are uniquely identifiable
- Nonuniquely or locally identifiable if all of its parameters are identifiable, either uniquely or nonuniquely, and at least one is nonuniquely identifiable
- Nonidentifiable if at least one of its parameters is nonidentifiable
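The three cases can be illustrated by counting the solutions of toy exhaustive summaries symbolically. This is a hedged sketch: the equations below are invented for illustration and do not come from any specific compartmental model.

```python
import sympy as sp

p1, p2 = sp.symbols("p1 p2", positive=True)

# Uniquely identifiable: the system has one and only one solution.
unique = sp.solve([sp.Eq(p1 + 2, 5)], [p1], dict=True)

# Nonuniquely (locally) identifiable: more than one, but finitely many
# solutions. Here p1 + p2 = 5 and p1*p2 = 6 give {2, 3} in either order.
local = sp.solve([sp.Eq(p1 + p2, 5), sp.Eq(p1 * p2, 6)], [p1, p2], dict=True)

# Nonidentifiable: infinitely many solutions. Only the product p1*p2 is
# observed, so p2 remains a free parameter in the solution.
nonident = sp.solve([sp.Eq(p1 * p2, 6)], [p1, p2], dict=True)

print(len(unique), len(local))
print(nonident)
```

The first two calls return one and two solutions respectively, while the last returns a parametric family (p1 expressed in terms of the free parameter p2), i.e. a continuum of solutions.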
GENERALIZATION OF THE METHOD
In the general case, a linear dynamic model is described mathematically by:

x'(t, p) = A(p) x(t, p) + B(p) u(t),  x(0) = x0
y(t, p) = C(p) x(t, p)

The first equation is the mass balance equation, while the second is the measurement equation. The transfer function method is based on the analysis of the m×r transfer function matrix:

H(s, p) = C(p) (sI − A(p))⁻¹ B(p)
where r is the number of model inputs, m is the number of model outputs, and I is the identity matrix. Each element Hij of H is the Laplace transform of the response in the measurement variable at port i, yi(t, p), to a unit impulse at port j, uj(t) = δ(t). Thus each element Hij(s, p) reflects an experiment performed on the system between input port j and output port i.
The transfer function approach makes reference to the coefficients of the numerator and denominator polynomials of each of the m×r elements Hij(s, p), 1 ≤ i ≤ m, 1 ≤ j ≤ r: the numerator coefficients β1ij(p), …, βnij(p) and the denominator coefficients α1ij(p), …, αnij(p), where the superscripts ij identify the input–output pair and the subscript indexes the coefficient.
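The observational parameters can be obtained symbolically by computing H(s, p) = C(sI − A)⁻¹B and reading off the polynomial coefficients. A minimal sketch, assuming a hypothetical two-compartment model with input into and measurement of compartment 1:

```python
import sympy as sp

s = sp.symbols("s")
k01, k21, k12 = sp.symbols("k01 k21 k12", positive=True)

# Assumed two-compartment model: elimination k01 from compartment 1,
# exchange rates k21 (1 -> 2) and k12 (2 -> 1).
A = sp.Matrix([[-(k01 + k21), k12],
               [k21,          -k12]])
B = sp.Matrix([1, 0])          # unit impulse into compartment 1
C = sp.Matrix([[1, 0]])        # measurement of compartment 1

H = sp.cancel((C * (s * sp.eye(2) - A).inv() * B)[0, 0])
num, den = sp.fraction(H)

# Observational parameters: coefficients of the numerator and
# denominator polynomials in s.
beta = sp.Poly(num, s).all_coeffs()
alpha = sp.Poly(den, s).all_coeffs()
print(beta)
print(alpha)
```

For this model one obtains the numerator s + k12 and the denominator s² + (k01 + k21 + k12)s + k01·k12, so the exhaustive summary is {k12, k01 + k21 + k12, k01·k12}.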
The next figure shows the general expression of what we did before: we write a system of equations linking the observational parameters to functions of the unknown model parameters.
Exhaustive summary of the model:
This system of nonlinear algebraic equations needs to be solved for the unknown parameter vector p in order to define the identifiability properties of the model. These are assessed on the basis of all the coefficients of all the transfer functions, e.g. β1^11, where the superscripts denote the input and output ports under consideration. Solving this system verifies the identifiability of the model.
CONDITION FOR A-PRIORI IDENTIFIABILITY
The model is a priori identifiable if and only if the rank of the Jacobian matrix J(p) is equal to the number of model parameters, P. SEE EXAMPLES 1 AND 2 IN THE NOTEBOOK.
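The rank condition can be checked symbolically. As a sketch, assume the exhaustive summary {k12, k01 + k21 + k12, k01·k12} of a hypothetical two-compartment model with P = 3 parameters:

```python
import sympy as sp

k01, k21, k12 = sp.symbols("k01 k21 k12", positive=True)
p = [k01, k21, k12]

# Assumed exhaustive summary: observational parameters as functions of
# the unknown model parameters.
phi = sp.Matrix([k12, k01 + k21 + k12, k01 * k12])

# Jacobian of the observational parameters with respect to p.
J = phi.jacobian(p)
print(J.rank())
```

Here the rank equals P = 3, so by the stated condition the model is a priori identifiable.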
THE TAYLOR SERIES EXPANSION METHOD
It is a method based on the analysis of the coefficients of the Taylor series expansion around t0 of the output variable y. Let us consider the i-th measurement variable yi. The Taylor series expansion around t0 of yi is:

yi(t, p) = yi(t0, p) + ẏi(t0, p)(t − t0) + ÿi(t0, p)(t − t0)²/2! + …
The derivatives of the output are measurable and are thus able to provide information on the parameters to be estimated. The derivatives of the output are the observational parameters.
The model is UNIQUELY IDENTIFIABLE if and only if the system of equations has a unique solution in p (the unknown model parameters). If one is not able to prove that the system has a unique solution, due to the complexity of the system itself, nothing can be said about the identifiability properties of the model; in particular, it does not necessarily mean that the model is nonidentifiable.
The method only provides a necessary and sufficient condition for a priori unique identifiability. It should also be noted that one does not know how many coefficients of the power series expansion are needed to study the a priori identifiability of a given model and experiment.
In other words, theory does not provide the value of k, i.e. the order of the derivatives, at which one can stop.
The difference from the transfer function method lies in the information provided: in this case we cannot determine whether the model is globally identifiable or nonidentifiable; we can only say that it is not a priori uniquely identifiable.
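The Taylor series method can be sketched on an assumed one-compartment bolus model, x' = −kx, x(0) = D, y = x/V, whose output is y(t) = (D/V)·e^(−kt). The derivatives of y at t0 = 0 are the observational parameters, and with the dose D known both parameters are recovered:

```python
import sympy as sp

t, k, V, D = sp.symbols("t k V D", positive=True)

# Assumed one-compartment bolus model output.
y = (D / V) * sp.exp(-k * t)

# Observational parameters: Taylor coefficients of y at t0 = 0.
c0 = y.subs(t, 0)                # y(0)  = D/V
c1 = sp.diff(y, t).subs(t, 0)    # y'(0) = -k*D/V

# With the dose D known, both parameters follow from c0 and c1.
k_hat = sp.simplify(-c1 / c0)    # recovers k
V_hat = sp.simplify(D / c0)      # recovers V
print(k_hat, V_hat)
```

Since the system {c0 = D/V, c1 = −kD/V} has a unique solution in (k, V), this model is a priori uniquely identifiable from the first two Taylor coefficients.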
CAYLEY - HAMILTON THEOREM
The power series expansion method also holds for linear models. In this case, it is possible to determine the order of derivatives at which one can stop: k = 2n − 1, where n is the number of compartments, i.e. there is no independent information in the derivatives of order higher than 2n − 1.

EXERCISE
Assume that the material is injected at time 0 as a bolus of dose D, i.e. u(t) = D·δ(t), into plasma (compartment 1), and its concentration is measured.
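The 2n − 1 bound can be verified symbolically on the assumed one-compartment case (n = 1), where derivatives up to order 2n − 1 = 1 suffice and the order-2n derivative is already determined by the lower ones:

```python
import sympy as sp

t, k, V, D = sp.symbols("t k V D", positive=True)
y = (D / V) * sp.exp(-k * t)     # one-compartment model, n = 1

# Taylor coefficients (derivatives at t0 = 0) up to order 2n = 2.
c0 = y.subs(t, 0)
c1 = sp.diff(y, t, 1).subs(t, 0)
c2 = sp.diff(y, t, 2).subs(t, 0)

# The derivative of order 2n carries no independent information:
# here c2 = c1**2 / c0, a function of the lower-order coefficients.
dependent = sp.simplify(c2 - c1**2 / c0)
print(dependent)
```

The difference simplifies to zero, confirming that derivatives of order higher than 2n − 1 add no independent information for this model.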
PARAMETER ESTIMATION
This is the last step of the modelling methodology and comes after a priori identifiability. A model – whether an input/output or a system one – has now been formulated. The model contains a set of unknown parameters to which we would like to assign numerical values from the data of an experiment. For a structural model (a system model) we assume that we have checked its a priori identifiability. However, this is not necessary for an input/output model (a data-driven model) because, by definition, its unknown parameters are the observational ones. The experimental data are available (unlike a priori identifiability, which requires only the I/O of the ideal case, here we need the real data), e.g. they have been obtained after a qualitative experiment design phase. In mathematical terms, the ingredients we have are:
- The model output: the measurement equation, described in mathematical form because this is the model.
- The discrete-time noisy output measurements: zi = y(ti) + vi, i = 1, …, N. These are the real data, noisy and discrete, unlike the continuous function of the model output. Here vi is the measurement error of the i-th measurement and ti is the time at which that measurement is taken (not the same t as before). Each measurement is the sum of what the model should provide and a measurement error added to the true model output (in a real experiment an error is always present).
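The two ingredients can be simulated numerically. A minimal sketch, assuming the one-compartment output y(t) = (D/V)·e^(−kt) with invented "true" parameter values used only to generate synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" values, only for generating synthetic data.
D, V, k = 100.0, 5.0, 0.3
t_i = np.array([1.0, 2.0, 4.0, 8.0, 16.0])    # sampling schedule

y_i = (D / V) * np.exp(-k * t_i)              # noise-free model output at t_i
v_i = rng.normal(0.0, 0.5, size=t_i.size)     # measurement errors v_i
z_i = y_i + v_i                               # z_i = y(t_i) + v_i

print(z_i)
```

The parameter estimation problem is then to recover (V, k) from the noisy samples z_i alone.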
The problem is to assign a numerical value to p from the data zi.
EXAMPLE
The parameters characterizing the model need to be adjusted until a set of values for them is obtained which provides the 'best fit' to the data. Best fit means that, when we arrive at this set of numerical values of p, the model output is as close as possible to the real noisy measurement data.
Regression analysis is the most widely used method to ‘adjust’ the parameters characterizing a particular function to obtain the ‘best fit’ scenario to a set of data.
PARAMETER ESTIMATION
We have a physiological system and we want to use a given model to describe it. Suppose that we have performed the a priori identifiability analysis on the model, that we have a set of unknown parameters, and that the designed experiment is perfectly matched to those unknown parameters, so that parameter estimation can yield a unique value for each of them. The input and output have been designed, and when we perform the parameter estimation we have measured the data, i.e. we have all the discrete-time noisy measurements. We try to bring the model output, called the prediction, as close as possible to the real data. This is performed by an estimation algorithm, which simulates the model and provides a model prediction. The algorithm then compares the model prediction with the real data and checks whether the predictions fit the data; if the fit is not good enough, the numerical values are changed and a new prediction is computed. The procedure ends when the model prediction is as close as possible to the real data. Each different set of numerical values gives a different prediction.

LINEAR AND NONLINEAR PARAMETERS
For a model linear in the parameters, exact solutions are available based on linear regression theory. For a model nonlinear in the parameters, only approximate solutions are available based on the linear regression theory. An example of a model linear in the parameters is the polynomial function.
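For a model linear in the parameters, the least-squares solution is obtained exactly by linear algebra. A sketch with a polynomial and invented coefficients (noise-free data, for clarity):

```python
import numpy as np

# "Data" generated from assumed coefficients p = [1.0, 2.0, 0.5] of
# the polynomial y(t) = p0 + p1*t + p2*t**2.
t = np.linspace(0.0, 5.0, 20)
z = 1.0 + 2.0 * t + 0.5 * t**2

# Because y is linear in p, the fit reduces to solving a linear
# least-squares problem with the design matrix X.
X = np.column_stack([np.ones_like(t), t, t**2])
p_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
print(p_hat)
```

The exact coefficients are recovered in one step; a model nonlinear in the parameters (e.g. an exponential in k) would instead require an iterative scheme.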
We have to remember that linearity of the dynamics does not mean linearity in the parameters, because the solution of the model output can contain a parameter, such as k, with a nonlinear dependence in the output.

REGRESSION AND RESIDUALS

Regression: finding a set of parameter values that define a function, e.g. y(t) = p·t, which provides the best fit to a set of data. An example is a straight line. To address the problem, observe that different values of p will generate different straight lines (we can have different predictions). How do we find the particular value of p that provides the best fit? At each time ti where there is an output measurement zi, there is a corresponding prediction yi given by y(ti) (the prediction provided by the model). This means that, if we take the black triangles to be the noisy output measurements over time, each measurement has a corresponding model output: the real measurements are taken at exact time points ti, and at each ti there is a model prediction yi, i.e. y computed at ti. We can therefore compare the discrete noisy measurement zi with the corresponding value the model produces at that specific time point. We can associate to each triangle a point on the line, which is the corresponding model output computed at ti. If we compute the difference between the point on the line and the real data, we can measure the error between the observed and the predicted data for each ti. This error is called the residual. Going back to the parameter estimation scheme, we obtain a residual each time we perform the estimation with different parameters.
Residual: the difference between the experimental output measurement and the calculated value once a value for p is chosen, i.e. the error between the observed and the predicted value at each sample time ti.
RESIDUAL SUM OF SQUARES (RSS)
The residual sum of squares, RSS, can be considered a measure of how good the fit is to the given set of data:

RSS = Σi (zi − yi)², i = 1, …, N

where N is the number of observations.
For different numerical values of p, one will obtain a different RSS. Therefore, RSS itself can be considered a function of the parameters characterizing the function chosen to describe the set of data, RSS(p). To perform a regression analysis we minimize RSS with respect to the parameter values characterizing the function to be fitted to the data, i.e. we find the set of parameter values for y(t) which minimizes RSS. This process is called LEAST SQUARES.
The lower the RSS value, the better the fit.
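The RSS and its minimization can be sketched on the straight-line model y(t) = p·t with invented data; for a model linear in p the least-squares minimizer has a closed form:

```python
import numpy as np

# Hypothetical data for the straight-line model y(t) = p*t.
t = np.array([1.0, 2.0, 3.0, 4.0])
z = np.array([2.1, 3.9, 6.2, 7.8])

def rss(p):
    residuals = z - p * t            # residual at each sample time t_i
    return float(np.sum(residuals**2))

# For y(t) = p*t the least-squares minimizer is p = (t.z)/(t.t).
p_best = float(np.dot(t, z) / np.dot(t, t))
print(p_best, rss(p_best))
```

Evaluating rss at values around p_best confirms that it is the minimum: any perturbation of p increases the RSS.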
DEGREES OF FREEDOM
Suppose a function y(t) described by P parameters is to be fitted to a set of N data points; the degrees of freedom are defined as the number N − P.
The degrees of freedom are important since in order to solve the regression problem, i.e. to find the best fit, we need to have enough data points (N) relative to the number of parameters (P).