
We underestimate the true variance (professor's explanation of $\hat\sigma^2$). We want to estimate $\sigma_0^2$ using the method of moments (MM), based on:

$$\hat\sigma^2 = \frac{1}{n}\sum_{t=1}^{n}\hat u_t^2$$

$\hat\sigma^2$ is biased (not unbiased) because:

$$E[\hat\sigma^2] = \frac{1}{n}\sum_{t=1}^{n}(1-h_t)\,\sigma_0^2 = \frac{n-k}{n}\,\sigma_0^2 \neq \sigma_0^2$$

where $h_t$ is the $t$-th diagonal element of the projection matrix $P_X$, and $\sum_{t=1}^{n} h_t = k$.

Unless $k$ is equal to 0, this quantity is different from $\sigma_0^2$: the MM estimator underestimates the true variance of the disturbances. However, there exists a very simple transformation that makes it unbiased (OLS):

$$E[\hat u^\top \hat u] = E\left[(M_X u)^\top (M_X u)\right] = E[u^\top M_X u] = E\left[\mathrm{tr}(M_X u u^\top)\right] = \sigma_0^2\,\mathrm{tr}(M_X) = \sigma_0^2 (n-k)$$
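The trace step in this derivation can be checked numerically. The following sketch, with made-up data and numpy assumed, verifies that $\mathrm{tr}(M_X) = n - k$ for a full-rank regressor matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3
X = rng.standard_normal((n, k))  # made-up full-rank regressor matrix

# Annihilator matrix M_X = I - X (X'X)^{-1} X'
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

# Its trace equals n - k, which is what drives E[u' M_X u] = sigma_0^2 (n - k)
print(np.trace(M))  # close to 97
```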

$\hat u^\top \hat u$ is just the sum of squared residuals. Since the expected value of $\hat u^\top \hat u$ is equal to $\sigma_0^2 (n-k)$, we can define:

$$s^2 \equiv \frac{1}{n-k}\sum_{t=1}^{n}\hat u_t^2 = \frac{\hat u^\top \hat u}{n-k}$$

The square root of this estimate, $s$, is called the standard error of the regression, or the regression standard error. We call this estimator $s^2$, and its expectation is going to be:

$$E[s^2] = \frac{E[\hat u^\top \hat u]}{n-k} = \frac{1}{n-k}\,\sigma_0^2 (n-k) = \sigma_0^2$$

This is the OLS estimator and it is unbiased. To summarize:

• MM: $\hat\sigma^2 = \frac{1}{n}\sum_{t=1}^{n}\hat u_t^2$; $E[\hat\sigma^2] \neq \sigma_0^2$. It is biased but consistent;

• OLS: $s^2 = \frac{1}{n-k}\sum_{t=1}^{n}\hat u_t^2$; $E[s^2] = \sigma_0^2$. It is unbiased and consistent.

For $n \to \infty$ the two estimators are asymptotically equivalent. To summarize, although the two estimators lead to the same estimated coefficients, the variance estimator is not the same.
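A quick Monte Carlo sketch (made-up DGP, numpy assumed) illustrates the bias just described: the MM estimator averages to about $\frac{n-k}{n}\sigma_0^2$, while $s^2$ averages to $\sigma_0^2$:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, sigma0_sq, reps = 50, 5, 4.0, 20000

X = rng.standard_normal((n, k))             # fixed made-up regressors
P = X @ np.linalg.solve(X.T @ X, X.T)       # projection matrix P_X

mm, ols = [], []
for _ in range(reps):
    u = np.sqrt(sigma0_sq) * rng.standard_normal(n)
    u_hat = u - P @ u                       # OLS residuals (beta drops out)
    ssr = u_hat @ u_hat                     # sum of squared residuals
    mm.append(ssr / n)                      # MM estimator: divide by n
    ols.append(ssr / (n - k))               # OLS estimator: divide by n - k

print(np.mean(mm))   # close to (n-k)/n * 4 = 3.6, i.e. biased downward
print(np.mean(ols))  # close to 4, i.e. unbiased
```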

Variance estimator

If we replace $\sigma_0^2$ by $s^2$ in the expression $\mathrm{Var}[\hat\beta] = \sigma_0^2 (X^\top X)^{-1}$, we obtain an unbiased estimate of $\mathrm{Var}[\hat\beta]$:

$$\widehat{\mathrm{Var}}[\hat\beta] = s^2 (X^\top X)^{-1} = \begin{bmatrix} \mathrm{Var}(\hat\beta_1) & \mathrm{Cov}(\hat\beta_1,\hat\beta_2) & \cdots \\ \mathrm{Cov}(\hat\beta_2,\hat\beta_1) & \mathrm{Var}(\hat\beta_2) & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}$$
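As a sketch with invented data (numpy assumed), this is how the covariance estimate and the coefficient standard errors are computed in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 3
# Invented design: a constant plus two random regressors
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
beta0 = np.array([1.0, 2.0, -0.5])
y = X @ beta0 + rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # OLS coefficients
u_hat = y - X @ beta_hat                       # residuals
s2 = u_hat @ u_hat / (n - k)                   # unbiased variance estimate s^2
cov = s2 * np.linalg.inv(X.T @ X)              # estimated Var(beta_hat)
se = np.sqrt(np.diag(cov))                     # coefficient standard errors

print(se)
```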

This is the usual estimate of the covariance matrix of the OLS parameter estimates under the assumption of IID disturbances. In gretl, the standard error of a coefficient (the smaller, the better) is the square root of the corresponding value (here 0.008) in the covariance matrix of the regression coefficients. To use some numbers, we have:

$$s^2 = \frac{\hat u^\top \hat u}{n-k} = \frac{93.08271}{100-3} = 0.959615$$

The result is the estimated variance of the disturbances. Its square root $s$ is the standard error of regression:

$$s = \sqrt{0.959615} = 0.9796$$

Overspecification and underspecification

Imagine that you have the following true DGP: $y = X\beta_0 + u$. You estimate, as the estimated model:

$$y = X\beta + Z\gamma + u$$

You are introducing $Z$ in the list of regressors, but it is not in the true model ($\gamma$ is a coefficient vector and $Z$ is a matrix). We have an irrelevant variable. In this case OLS remains unbiased, but we lose efficiency because the variance becomes larger (exam question). Including irrelevant information plays no role in terms of unbiasedness of the estimator, but it affects its efficiency. Formally, suppose we estimate:

$$y = X\beta + Z\gamma + u, \qquad u \sim \mathrm{IID}(0, \sigma_0^2 I)$$

when the data are actually generated by:

$$y = X\beta_0 + u, \qquad u \sim \mathrm{IID}(0, \sigma_0^2 I)$$

Suppose now that we run the linear regression (the overspecified model). By the FWL Theorem, the $\hat\beta$ estimates are the same as those from the regression:

$$M_Z y = M_Z X \beta + \text{residuals}$$

We get:

$$\hat\beta = \beta_0 + (X^\top M_Z X)^{-1} X^\top M_Z u$$
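The FWL claim lends itself to a direct numerical check. The sketch below (invented data, numpy assumed) verifies that the coefficients on $X$ from the full regression of $y$ on $[X, Z]$ equal those from regressing $M_Z y$ on $M_Z X$:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = rng.standard_normal((n, 2))   # invented regressors of interest
Z = rng.standard_normal((n, 3))   # invented extra regressors
y = X @ np.array([1.0, -2.0]) + Z @ np.array([0.5, 0.0, 1.0]) \
    + rng.standard_normal(n)

# Full regression of y on [X, Z]: the first two coefficients belong to X
W = np.hstack([X, Z])
coef_full = np.linalg.lstsq(W, y, rcond=None)[0][:2]

# Partialled-out regression via the annihilator M_Z
M_Z = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)
coef_fwl = np.linalg.lstsq(M_Z @ X, M_Z @ y, rcond=None)[0]

print(np.allclose(coef_full, coef_fwl))  # True
```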

Everything depends on the relation between $X$ and $Z$. The conditional expectation of the second term on the right-hand side is zero ($u$ has zero expected value because it is independent from $X$ and $Z$). Moreover, we have an effect on the variance:

$$\mathrm{Var}[\hat\beta] = \sigma_0^2 (X^\top M_Z X)^{-1} \geq \sigma_0^2 (X^\top X)^{-1}$$

The inequality almost always holds strictly; the only exception is when the regression of $X$ on $Z$ has no explanatory power at all. The opposite case is the one with a true DGP of:

$$y = X\beta_0 + Z\gamma_0 + u$$

and an estimated model:

$$y = X\beta + u$$

We are missing a relevant variable. As a result, there is no way for the OLS estimator to estimate $\beta$ in the appropriate way, because we are missing relevant information. The OLS estimator becomes biased and inconsistent. Formally, suppose that we estimate the model:

$$y = X\beta + u, \qquad u \sim \mathrm{IID}(0, \sigma_0^2 I)$$

which yields the estimator $\tilde\beta$, but that the DGP is:

$$y = X\beta_0 + Z\gamma_0 + u, \qquad u \sim \mathrm{IID}(0, \sigma_0^2 I)$$

We find that:

$$\tilde\beta = \beta_0 + (X^\top X)^{-1} X^\top Z \gamma_0 + (X^\top X)^{-1} X^\top u$$

The second term on the right-hand side is equal to zero only when $\gamma_0 = 0$ or $X^\top Z = 0$. When you are missing a relevant variable, the estimated coefficient is biased and inconsistent.
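Both cases can be seen in a small Monte Carlo sketch (invented DGP, numpy assumed): including an irrelevant regressor keeps $\hat\beta$ unbiased but inflates its variance, while omitting a relevant regressor correlated with $x$ biases the estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 60, 5000
beta0, gamma0 = 1.0, 2.0

over, correct, under = [], [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    z = 0.8 * x + rng.standard_normal(n)     # z correlated with x
    u = rng.standard_normal(n)

    y1 = beta0 * x + u                       # DGP where z is irrelevant
    W = np.column_stack([x, z])
    over.append(np.linalg.lstsq(W, y1, rcond=None)[0][0])  # z included anyway
    correct.append((x @ y1) / (x @ x))                     # correct model

    y2 = beta0 * x + gamma0 * z + u          # DGP where z is relevant
    under.append((x @ y2) / (x @ x))         # z wrongly omitted

print(np.mean(over), np.var(over))      # mean near 1: unbiased, variance larger
print(np.mean(correct), np.var(correct))
print(np.mean(under))                   # near 1 + 2*0.8 = 2.6: badly biased
```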

General recap

From the model $y = X\beta + u$ we know that:

• OLS estimator: $\hat\beta = (X^\top X)^{-1} X^\top y$. Its variance is $\mathrm{Var}(\hat\beta) = \sigma^2 (X^\top X)^{-1}$ iff $E[uu^\top] = \sigma^2 I$, so we must have both homoskedasticity and no serial correlation. Otherwise, if we have $E[uu^\top] = \Omega$, the variance of the OLS estimator is the sandwich covariance matrix, which is a sort of general formula: $\mathrm{Var}(\hat\beta) = (X^\top X)^{-1} X^\top \Omega X (X^\top X)^{-1}$. This formula simplifies if we have homoskedasticity and no serial correlation;

• Fitted values: $\hat y = X\hat\beta$;

• Residuals: $\hat u = y - X\hat\beta$.
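The recap can be sketched numerically (invented data, numpy assumed): fitted values and residuals are the projection $P_X y$ and annihilation $M_X y$ of $y$, and they decompose $y$ into two orthogonal pieces:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 40, 3
X = rng.standard_normal((n, k))   # invented regressors
y = rng.standard_normal(n)        # invented regressand

P = X @ np.linalg.solve(X.T @ X, X.T)  # projection onto the span of X
M = np.eye(n) - P                      # annihilator M_X

y_fit = P @ y                          # fitted values, X beta_hat
u_hat = M @ y                          # residuals, y - X beta_hat

print(np.allclose(y_fit + u_hat, y))   # True: the decomposition is exact
print(abs(y_fit @ u_hat))              # essentially zero: orthogonal pieces
```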

Now we do some applications of this in gretl.

Measure of goodness of fit

There are three versions of $R^2$, useful to have a criterion to select the best model (the higher the $R^2$, the better the model):

• Coefficient of determination (or uncentered $R^2$): $R_u^2 = \dfrac{\lVert P_X y\rVert^2}{\lVert y\rVert^2} = 1 - \dfrac{\lVert M_X y\rVert^2}{\lVert y\rVert^2} = \cos^2\theta$, from the Pythagorean theorem and remembering that $\theta$ is the angle between the vectors $y$ and $P_X y$. This $R_u^2$ runs from zero to one. It is invariant under non-singular linear transformations of the regressors, and it is invariant to changes in the scale of $y$. This coefficient uses the distance from zero for $y$;

• Centered $R^2$: $R_c^2 = 1 - \dfrac{\sum_{t=1}^{n}\hat u_t^2}{\sum_{t=1}^{n}(y_t - \bar y)^2}$. This coefficient uses the distance from the mean of $y$. The advantage of $R_c^2$ is that it is invariant to changes in the mean of the regressand: by adding a large enough constant to all the $y_t$, we could always make $R_u^2$ become arbitrarily close to 1, at least if the regression included a constant. Another, possibly undesirable, feature of both $R_u^2$ and $R_c^2$ as measures of goodness of fit is that both increase whenever more regressors are added;

• Adjusted $\bar R^2$: $\bar R^2 = 1 - \dfrac{\frac{1}{n-k}\sum_{t=1}^{n}\hat u_t^2}{\frac{1}{n-1}\sum_{t=1}^{n}(y_t - \bar y)^2}$. The numerator is the $s^2$ estimator and the denominator is the corresponding variance estimator around the mean (the sample variance of $y$). This adjusted version can take negative values (in extreme cases, if a model has very little explanatory power), which is not true for the others.
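The three versions above can be computed side by side. A sketch with invented data (numpy assumed); with a nonzero mean for $y$ and more than one regressor, the ordering $\bar R^2 < R_c^2 < R_u^2$ emerges as the text suggests:

```python
import numpy as np

rng = np.random.default_rng(9)
n, k = 80, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
y = X @ np.array([2.0, 1.0, -1.0]) + rng.standard_normal(n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
u_hat = y - X @ beta_hat
ssr = u_hat @ u_hat                                 # sum of squared residuals

r2_unc = 1 - ssr / (y @ y)                          # uncentered: distance from zero
tss = np.sum((y - y.mean()) ** 2)
r2_cen = 1 - ssr / tss                              # centered: distance from the mean
r2_adj = 1 - (ssr / (n - k)) / (tss / (n - 1))      # adjusted for degrees of freedom

print(r2_unc, r2_cen, r2_adj)
```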

This is the end of chapter 4.

Hypothesis testing in linear regression models

After estimating a model and computing the covariance matrices, we want to test some hypotheses. We will see two different aspects of this topic: hypothesis testing here, and confidence intervals in the following chapter, which are exactly the same thing seen from a different point of view.

Basic ideas

The basic idea is a model containing just the constant. Suppose that we wish to test the hypothesis that the expectation is equal to some value that we specify. A suitable model for this test is:

$$y_t = \beta + u_t, \qquad u_t \sim \mathrm{IID}(0, \sigma^2)$$

This model is equivalent to computing the mean of a sample and testing some hypothesis on the mean of the sample. $y_t$ is an observation on the dependent variable, $\beta$ is the expectation of each of the $y_t$ and the only parameter of the regression function, and $\sigma^2$ is the variance of the disturbance $u_t$. Let $\beta_0$ be the specified value of the expectation, so that we can express the hypothesis to be tested as $H_0: \beta = \beta_0$.

Basic ideas (part 2)

The least-squares estimator of $\beta$ is just the sample mean. If we denote it by $\hat\beta$, then it follows that, for a sample of size $n$:

$$\hat\beta = \frac{1}{n}\sum_{t=1}^{n} y_t, \qquad \mathrm{Var}(\hat\beta) = \frac{\sigma^2}{n}$$

These formulas can either be obtained from first principles or as special cases of the general results for OLS estimation. In this case, the regressor matrix is just $\iota$, an $n$-vector of 1s. Thus, for the model $y = \iota\beta + u$ with $u \sim \mathrm{IID}(0, \sigma^2 I)$, the standard formulas:

$$\hat\beta = (\iota^\top \iota)^{-1}\iota^\top y, \qquad \mathrm{Var}(\hat\beta) = \sigma^2 (\iota^\top \iota)^{-1}$$

yield the two formulas given above, since $\iota^\top \iota = n$ and $\iota^\top y = \sum_{t=1}^{n} y_t$. The hypothesis to be tested is called, for historical reasons, the null hypothesis ($H_0$). The key part of hypothesis testing is that, in order to test $H_0$, we need a test statistic, which is a random variable that has a known distribution when the null hypothesis is true (important) and some other distribution when the null hypothesis is false. If the value of this test statistic is one that might frequently be encountered by chance under the null hypothesis, then the test provides no evidence against the null. On the other hand, if the value of the test statistic is an extreme one that would rarely occur by chance under the null, then the test does provide evidence against it.
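The constant-only special case can be checked directly. A sketch with invented data (numpy assumed), confirming that regressing $y$ on a column of 1s reproduces the sample mean and that $(\iota^\top\iota)^{-1} = 1/n$:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 25
y = 3.0 + rng.standard_normal(n)           # invented sample with mean near 3

iota = np.ones((n, 1))                     # regressor matrix: an n-vector of 1s
beta_hat = np.linalg.lstsq(iota, y, rcond=None)[0][0]

print(np.isclose(beta_hat, y.mean()))      # True: OLS on a constant = sample mean
print(np.linalg.inv(iota.T @ iota)[0, 0])  # 1/n = 0.04, so Var(beta_hat) = sigma^2/n
```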
