
As time t passes, the smoothed statistic ExpSmooth_t becomes the weighted average of a greater and greater number of the past observations y_{t−n}, and the weights assigned to previous observations are proportional to the terms of the geometric progression {1, (1−α), (1−α)², (1−α)³, …}. A geometric progression is the discrete version of an exponential function, so this is where the name "exponential smoothing" comes from.
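A minimal numerical sketch of the recursion behind this statement, assuming the standard updating formula ExpSmooth_t = α·y_t + (1−α)·ExpSmooth_{t−1}; the function and variable names are illustrative, not from the slides.

```python
import numpy as np

# Single exponential smoothing via the recursion
#   s[t] = alpha * y[t] + (1 - alpha) * s[t-1].
# Unrolled, s[T] is a weighted sum of past y's with geometric weights
# alpha * (1 - alpha)**n on the observation n periods back.

def exp_smooth(y, alpha, s0=None):
    """Return the smoothed series; s0 is the (arbitrary) starting value."""
    s = np.empty_like(y, dtype=float)
    s[0] = y[0] if s0 is None else s0   # starting value is one of the arbitrary choices
    for t in range(1, len(y)):
        s[t] = alpha * y[t] + (1 - alpha) * s[t - 1]
    return s

rng = np.random.default_rng(0)
y = rng.normal(size=200).cumsum()       # an artificial series
s = exp_smooth(y, alpha=0.3)

# Implied weights on y[T], y[T-1], ... are geometric: alpha * (1 - alpha)**n
print(0.3 * (1 - 0.3) ** np.arange(5))  # 0.3, 0.21, 0.147, 0.1029, 0.07203
```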

• Note that all exponential smoothing algorithms face two problems in application (with many arbitrary solutions):

(1) how to determine parameters such as α?

(2) how to determine starting values for y_0?


• The forecast from exponential smoothing is defined by simply extrapolating ExpSmooth_T flatly into the future.

• This is inappropriate for trending data, which are the rule in economics. For trending data, a number of (arbitrary) suggestions exist (among these, double exponential smoothing).

• The smoothing approach is not applied so often nowadays, though it is still present in many computer programs.

• Single exponential smoothing corresponds to the model-based forecast from an ARIMA(0,1,1) model, double exponential smoothing to an ARIMA(0,2,2).

• Holt-Winters methods are generalizations of exponential smoothing which cover both trending and seasonal time series. Basically, HW methods entail more than one smoothing parameter.

• Another ad hoc method of potential interest is Brockwell-Davis's small-trend method, designed for variables with a not-too-large trend component. The basic idea is that the average of the observations over one year delivers a crude trend indicator for that year. That trend can then be subtracted from the original observations, and the resulting series is averaged by month (quarter) over the whole sample. The main drawbacks of the BD method are the assumed time-constancy of the seasonal cycle and the lack of further modelling of the stationary components.

• In general, smoothing approaches are of some interest when the number of observations is not enough to permit more formal time-series approaches (see ARIMA).

Smoothing vs filtering

Though smoothing can be justified as being optimal under certain model assumptions, such methods are not really based on the validity of a model. Strictly speaking, smoothing is not forecasting.

Systems analysis distinguishes among several methods that are used for recovering ‘true underlying’ variables from noisy observations. Filters are distinguished from smoothers by the property that filters use past observations only to recover data, while smoothers use a whole time range to recover observations within the same time range.

ARIMA models

• We have observed data about a variable; let the data speak for themselves!

• Idea: at least for forecasting, we do not need the model which actually generated the observations: “all models are wrong, but some are useful”.

• Therefore, the Box-Jenkins approach fits models to observed data with the purpose of short-term out-of-sample prediction of those data, following a loop procedure.

• It is a simple way to describe data patterns, and it is competitive w.r.t. other approaches: the work-horse of the forecasting industry and a favorite benchmark.

Box-Jenkins’s ARIMA loop

1. Plot of the series

2. Unit root test

3. Correlogram

4. Model identification

5. Estimation

6. Residual diagnostics (e.g. residuals correlogram)

7. If all is OK, forecast; otherwise restart from steps 3-4

8. Use of the forecast (a minimal code sketch of this loop follows below)
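A minimal sketch of steps 2-7 in Python, assuming the statsmodels library and an artificial series; the chosen orders, lag lengths and the 5% threshold are illustrative, not prescriptions from the slides.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
# Artificial random-walk data standing in for an observed series.
y = pd.Series(np.cumsum(rng.normal(size=300)))

# 2. Unit root test: failing to reject a unit root suggests differencing once.
adf_stat, pval, *_ = adfuller(y)
d = 1 if pval > 0.05 else 0

# 3-4. Correlogram of the (differenced) series guides identification, e.g.
#      plot_acf / plot_pacf from statsmodels.graphics.tsaplots.

# 5. Estimate a candidate model.
model = ARIMA(y, order=(1, d, 1)).fit()

# 6. Residual diagnostics: Ljung-Box on the residuals.
print(acorr_ljungbox(model.resid, lags=[12]))  # large p-value -> white-noise-like

# 7. If diagnostics pass, forecast; otherwise revise the orders and re-estimate.
print(model.forecast(steps=8))
```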

Linear/Nonlinear ARIMA

• If the stochastic process {y_t}, t = 1, 2, …, T, has a stationary Gaussian distribution (or when the DGP's non-linear features are too weak to be exploited by different models), we just need to capture its first two moments, e.g. means and covariances. In this context, the optimal forecast is a linear combination of past values of the data with constant weights, which is proxied by simple models such as the popular AR(1) model y_t = a + b·y_{t−1} + ε_t.

• If {y_t} is not Gaussian (e.g. asymmetric), the optimal forecast will not be linear, which leads to non-linear time series models, e.g. Threshold AR (TAR) models or Markov Switching models. Both posit 2 (or more) different regimes.

TAR/MTAR/LSTAR models

Example of a TAR(1) model with 2 regimes:

y_t = a⁺·I_t + b⁺·y_{t−1}·I_t + a⁻·(1 − I_t) + b⁻·y_{t−1}·(1 − I_t) + ε_t

where I_t is the Heaviside indicator such that:

I_t = 1 if y_{t−1} ≥ τ
I_t = 0 if y_{t−1} < τ

and τ is the threshold delimiting the switch, searched over the trimmed distribution of y_t.

Variations of the model above interact I_t with only some of the coefficients, the intercept and/or the error variance.

The model above is a Self-Exciting TAR; a more general TAR can allow exogenous forces rather than y_{t−1} to drive the switch. (A sketch of the grid-search estimation follows below.)
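A minimal sketch, assuming the SETAR(1) above is estimated by least squares with a grid search for τ over the trimmed empirical distribution; the simulated parameters and the 15%-85% trimming are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a two-regime SETAR(1) with true threshold tau = 0.
T, tau = 500, 0.0
y = np.zeros(T)
for t in range(1, T):
    if y[t - 1] >= tau:
        y[t] = 0.5 + 0.4 * y[t - 1] + rng.normal()
    else:
        y[t] = -0.5 + 0.8 * y[t - 1] + rng.normal()

def ssr_given_tau(y, tau):
    """OLS sum of squared residuals for a SETAR(1) with threshold tau."""
    y0, y1 = y[1:], y[:-1]
    I = (y1 >= tau).astype(float)
    X = np.column_stack([I, y1 * I, 1 - I, y1 * (1 - I)])
    beta, *_ = np.linalg.lstsq(X, y0, rcond=None)
    resid = y0 - X @ beta
    return resid @ resid

# Search tau over the trimmed (15%-85%) empirical distribution of y.
grid = np.quantile(y, np.linspace(0.15, 0.85, 71))
tau_hat = grid[int(np.argmin([ssr_given_tau(y, g) for g in grid]))]
print("estimated threshold:", round(tau_hat, 3))
```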

If the switch is driven by changes, ∆y_{t−1} ≥ τ (or < τ), instead of levels, y_{t−1} ≥ τ (or < τ), the model is called a Momentum TAR (MTAR).

TAR/MTAR models are characterized by sharp (i.e. 0/1) indicator functions. As an alternative, one could assume a Smooth Transition AR (STAR) model.

The Logistic STAR assumes:

I_t = 1 / [1 + exp(−γ·(y_{t−1} − τ))], with γ > 0

[Figure: transition paths of I_t for several values of the parameter γ, with τ = 0; the amplitude of the TAR's sharp 0/1 switch is approximated when γ = 25 (i.e. high).]
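A small sketch reproducing such transition paths under the logistic function above, assuming τ = 0 and a few illustrative values of γ (matplotlib is used only for display).

```python
import numpy as np
import matplotlib.pyplot as plt

tau = 0.0
x = np.linspace(-3, 3, 400)            # values of y[t-1]
for gamma in (1, 5, 25):
    I = 1.0 / (1.0 + np.exp(-gamma * (x - tau)))
    plt.plot(x, I, label=f"gamma = {gamma}")
plt.xlabel("y[t-1]")
plt.ylabel("I[t]")
plt.legend()
plt.show()   # gamma = 25 is already close to the sharp TAR 0/1 switch
```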

Markov switching models

• It is similar in concept to the transition models above – as both entail different regimes – but in Markov switching models the switches occur based on an unobserved latent variable (while in TAR they are based on past values of the observed data).

• I_t is modeled as unobserved and as following a two-state Markov process.

• Two fundamental references:

Granger-Terasvirta (1993), Modelling Nonlinear Economic Relationships, Oxford UP

Hamilton (1994), Time Series Analysis, Princeton UP

Further extensions

• Again, in many data sets stability – e.g. DGP stationarity of the first two moments – appears to be clearly violated.

• Reasons:

1. Local (stochastic) trends, i.e. growth trends

2. Volatility changes

3. Level shifts

• In all these cases, the stationary ARMA model must be extended with further modeling of the trend (case 1 and partly case 3 above) and of the components (e.g. the variance, case 2 above).

Modeling trends & components

• Again, the plot/correlogram/unit root test is a way to detect deterministic and stochastic trends in the data (remember the “benefits of over-differencing” for case 3 above; see Clements-Hendry).

• Deterministic trend plus noise: y_t = α + β·t + ε_t

• Simplest stochastic trend: y_t = α + y_{t−1} + ε_t (a small simulation contrasting the two models follows below)

• If a forecasting model incorrectly incorporates deterministic trends, but the series is highly persistent, spuriously large weight can be placed on the estimated trend component, and long-run forecasts can suffer greatly.

• The caveat of the homogeneity over time of a series: is it equally reliable throughout its length?
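A minimal simulation contrasting the forecast-error variances implied by the two trend models above, assuming known parameters and unit-variance Gaussian shocks; it is an illustration, not part of the original slides.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, h = 2000, 40

# h-step-ahead forecast errors under each trend model, with known
# parameters and unit-variance Gaussian shocks (alpha, beta drop out).
eps = rng.normal(size=(n_paths, h))

err_det = eps[:, -1]        # deterministic trend: only the shock at T+h matters
err_sto = eps.sum(axis=1)   # random walk: the h shocks accumulate

print("h-step error variance, deterministic trend:", err_det.var().round(2))  # ~1
print("h-step error variance, stochastic trend   :", err_sto.var().round(2))  # ~h = 40
```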

• In the more general unobserved components model (UCM), y_t is represented by the sum of several stochastic components, which are modeled and combined. The estimation of UCMs uses the ML approach through the Kalman filter; see Harvey (1990), Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge UP.

• In some economic applications (especially in finance) interest is in forecasting conditional higher moments (e.g. the error variance) instead of focusing on the prediction of conditional means. Here, Engle's (1982) ARCH and stochastic volatility models can be helpful.

• The random coefficient model y_t = a + b_t·y_{t−1} + ε_t is more general, where b_t = c·b_{t−1} + u_t; the parameter c drives the evolution of the coefficient series. If c = 1 the model contains a stochastic unit root. (A tiny simulation follows below.)
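A tiny simulation of the random coefficient model above; the values a = 0, c = 0.95 and the small coefficient shocks are assumptions for illustration, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(6)
T, a, c = 300, 0.0, 0.95

b = np.zeros(T)
y = np.zeros(T)
b[0] = 0.5
for t in range(1, T):
    b[t] = c * b[t - 1] + 0.05 * rng.standard_normal()  # evolving AR coefficient
    y[t] = a + b[t] * y[t - 1] + rng.standard_normal()

# With c = 1, b_t itself would be a random walk: a stochastic unit root.
print("range of b_t:", b.min().round(2), b.max().round(2))
```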

Modeling volatility clustering

• Many financial time series, such as returns on common stocks, are nearly unpredictable. Neither ARMA models nor non-linear time-series models allow reliable and possibly profitable predictions. However, these series depart in two respects from the usual white noise that would be generated by a Gaussian stochastic process.

• Firstly, the unconditional distribution is severely leptokurtic, with more unusually large and small observations than would be implied by the Gaussian law: fat tails.

• Secondly, calm and volatile episodes are observed, such that at least the variance appears to be predictable: volatility clustering.

• Engle's (1982) ARCH models are able to match both features. The relevant GARCH generalization is due to Bollerslev (1986), a brilliant PhD student of Engle's.

ARCH-GARCH: basics

• There are two equations: (1) the conditional mean equation, e.g. modeled as a stationary ARMA; (2) the conditional variance (volatility) equation.

• Eq. (1) for an AR(1) model is: y_t = µ + φ·y_{t−1} + ε_t

• Depending on the form of eq. (2) we have either ARCH or GARCH models.

• ARCH(1) conditional variance equation: h_t = ω + α·ε²_{t−1}, with h_t = E(ε²_t | ε_{t−1}) not stochastic.

• GARCH(1,1) model's conditional variance equation: h_t = ω + α·ε²_{t−1} + β·h_{t−1}, where again h_t = E(ε²_t | ε_{t−1}).

• Parameters are estimated by ML, and it must be that ω > 0, α, β ≥ 0, and α + β < 1. The three parameters are the weights of: 1) the unconditional long-run variance, 2) the news of “yesterday”, and 3) the previous forecast. (A simulation sketch of the GARCH(1,1) recursion follows below.)
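A minimal numpy sketch simulating the GARCH(1,1) recursion above, with illustrative parameter values satisfying the constraints (ω > 0, α, β ≥ 0, α + β < 1).

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1000
omega, alpha, beta = 0.1, 0.1, 0.8       # alpha + beta < 1

eps = np.zeros(T)
h = np.zeros(T)
h[0] = omega / (1 - alpha - beta)        # start at the unconditional variance
for t in range(1, T):
    h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()

# Both stylized facts emerge: fat tails and volatility clustering.
kurt = ((eps - eps.mean()) ** 4).mean() / eps.var() ** 2
print("excess kurtosis:", round(kurt - 3, 2))   # > 0: fat tails
```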

• A practical difficulty with ARCH(p) models is that, with p large, estimation will often lead to violations of the non-negativity constraints on the α parameters. For this reason, the generalized GARCH(p,q) model has been proposed:

h_t = ω + α(L)·ε²_t + β(L)·h_t, where α(L) = Σ_{i=1..p} α_i·L^i and β(L) = Σ_{i=1..q} β_i·L^i

• Interestingly, the GARCH(1,1) model is so often empirically adequate that it has achieved something of a canonical status.

• In conditional mean models, ignored ARCH/GARCH effects imply model misspecification: heteroskedasticity results in inappropriate (too small) s.e.; ignoring ARCH leads to over-parameterized models.

• Escapes: (1) test the squared residuals for ARCH to identify the orders p and q, plus portmanteau statistics; (2) the specific ARCH test due to Engle (1982); GARCH identification is more complex. (A hand-rolled version of test (2) follows below.)
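A hand-rolled sketch of Engle's (1982) LM test in its usual T·R² form: regress the squared residuals on p of their own lags and compare against a χ²(p). The lag order p = 4 and the iid example series are arbitrary illustrations.

```python
import numpy as np
from scipy import stats

def arch_lm_test(eps, p=4):
    """Engle's LM test: regress e^2 on p of its lags; T*R^2 ~ chi2(p) under H0."""
    e2 = eps ** 2
    Y = e2[p:]
    X = np.column_stack([np.ones(len(Y))] +
                        [e2[p - i:-i] for i in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    r2 = 1 - resid.var() / Y.var()      # R^2 of the auxiliary regression
    lm = len(Y) * r2
    return lm, stats.chi2.sf(lm, df=p)  # (statistic, p-value)

# Example with iid noise: the test should not reject H0 (no ARCH).
rng = np.random.default_rng(5)
print(arch_lm_test(rng.standard_normal(500)))
```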

ARCH-GARCH: interpretation

• The efficient-markets theory predicts that past financial data are not useful to forecast y. Then y_t = µ + ε_t.

• However, it is often not true that ε_t is iid(0, σ²).

• Remember White: Var(ε_t) can be estimated by the squared residuals e²_t. Visual inspection, the correlogram and the Ljung-Box statistic of e²_t tend to reject H_0 of uncorrelated variance.

• AR(1) assumption for the squared errors: ε²_t = ω + α·ε²_{t−1} + v_t, where v_t is the noise and h_t = ω + α·ε²_{t−1} is the signal, i.e. the predictable part of ε²_t; decomposing ε²_t = h_t + v_t → ARCH(1).

• ARMA(1,1) assumption: ε²_t = ω + b·ε²_{t−1} + c·v_{t−1} + v_t, in which the predictable part is h_t = ω + b·ε²_{t−1} + c·v_{t−1} = ω + b·ε²_{t−1} + c·(ε²_{t−1} − h_{t−1}) = ω + (b+c)·ε²_{t−1} − c·h_{t−1}; therefore h_t = ω + α·ε²_{t−1} + β·h_{t−1} [where α = b + c and β = −c], as in the GARCH(1,1) above. Note that α + β = b.

ARCH-GARCH: concluding remarks

• Since Granger-Morgenstern's (1970) “Predictability of Stock Market Prices” book, it has been largely acknowledged that there is not much forecastability in stock-market first moments (i.e. means). Further, if people were enjoying good forecasting results in the stock market, they would not be showing them. Therefore, the published literature is – at least – a biased sample of the evidence!

• In the absence of good return forecasts, price protection depends upon risk assessment, and risks are measured by the variance of asset returns. Fortunately, return volatility (i.e. second moments) is by far more predictable than first moments, as the seminal ARCH/GARCH results show.

• For this reason, there are many ARCH variants, including stochastic volatility models (of challenging estimation) which add a volatility shock to the deterministic ARCH process.

Indicators

Three types of indicators have been used for many

years to help forecast the national economy. They

are the Leading, Coincident, and Lagging indicator

series. The terms refer to the nature of their

relationship to the business cycle. Main aim: pick

the turning points in economic activity.

As no single indicator series can encompass the

economy's diverse activity, it is necessary to track

many different series to determine the direction of

the economy.

Using these data series, the NBER has developed composite indexes that capture and smooth the data contained in several of them; the three major ones are the Composite Indexes of Leading, Coincident, and Lagging indicators. Components of the Leading Indicators index include such things as average weekly hours worked, claims for unemployment insurance, manufacturers' new orders, stock prices, orders for plant and equipment, an index of consumer expectations, the real M2 money supply, etc.

The Composite Index of Leading Indicators is useful for understanding the business cycle and is primarily intended to identify changes in the direction of the economy.

The indexes predict by assuming that past trends and relationships will continue into the future. In practice, the timing and strength of each indicator's relationship with the reference series change over time: a stable relation between the lead and the led (target) variable cannot be taken for granted (e.g. fluctuations in housing-market booms & busts).

Despite this, in practical attempts to forecast the future, these indexes are among the most important tools available to most organizations, including national governments. Experience and good econometric practice (stability tests, rolling or TVP model estimation, quick re-estimation with new data) can limit the drawbacks.

Can Google queries help predict economic activity?

Google Trends provides daily and weekly reports on the

volume of queries related to various industries.

Choi&Varian (2009) assume that such queries may be

correlated with the current level of economic activity in

given industries, and thus helpful in predicting the

subsequent data releases.

They show that Google Trends data help predict the present (a coincident indicator for nowcasting).

For example, the volume of queries on a particular brand of automobile during the second week in June may be helpful in predicting the June sales report for that brand, when it is released in July.

Basic literature

General:

Choi&Varian (2009), quoted above, downloadable at:
http://google.com/googleblogs/pdfs/google_predicting_the_present.pdf
(with an R procedure in the appendix)

Applications:

Ginsberg et al. (2009) [Detecting Influenza Epidemics Using Search Engine Query Data, Nature, No. 457] introduce an approach which makes it possible to use search queries to detect influenza epidemics in areas with a large population of web-search users.

D'Amuri&Marcucci (2009) Google-predict the unemployment rate, downloadable at:
http://www.lavoce.info/articoli/-lavoro/pagina1001407.html
http://www.iser.essex.ac.uk/publications/working-papers/iser/2009-32.pdf

Econometric systems of equations

• Main tool in macroeconometric forecasting.

• They try to model behaviors: consolidate

existing empirical and theoretical

knowledge on how the economy works.

• Policy multipliers: deliver policy advice.

• Help to explain their own failures.

• Relevant role of adjustments in modifying

their forecasts.

FOCUS: Forecasts and monetary policy

Goal of monetary policy: to safeguard the purchasing power of money.

Given a “normal” inflation rate fixed at 2% for the Euro area, the ECB intervenes when (?) inflation goes beyond it.

Instrument of monetary policy: the official discount rate, or the bank refinancing rate (r_s).

Transmission mechanism of a monetary policy tightening:

(1) increase in r_s

(2) increase in the whole structure of interest rates

(3) slowdown of growth due to a reduction in demand

(4) reduction of the inflation rate

Time elapsed from (1) to (4): 12-18 months

The Taylor rule (reaction function)

int_t − infl_t = ρ + α·(infl_t − 2%) + β·gap_t

where: int_t is the interest rate, infl_t is the inflation rate, and gap_t is the % difference between actual and potential output; all variables are measured at time t; ρ (the equilibrium real rate), α > 0 and β > 0 are three parameters.

Taylor's (1993) “rule” recommends how the central bank should conduct monetary policy, i.e. how to steer int_t. In practice, it has been shown that the Fed's conduct has over time been fairly in line with this “rule”.

int_t − infl_t = ρ + α·(infl_t − 2%) + β·gap_t

If infl_t = 2% (the desired value) and gap_t = 0 (full use of productive capacity), then int_t − infl_t = ρ (the real rate compatible with full employment). Excess inflation (above the 2% target) and/or excess demand (gap_t > 0) predict a restrictive monetary policy. Conversely, when inflation is below 2% and/or gap_t < 0, the “rule” predicts an expansionary monetary policy.

α and β are the weights in case of conflicting objectives (for example, excess inflation and gap_t < 0, i.e. “stagflation”). (A small numerical sketch of the rule follows below.)
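A small numerical sketch of the reaction function above; the values ρ = 2 and α = β = 0.5 follow Taylor (1993) and are used here as an illustrative assumption.

```python
def taylor_rate(infl, gap, rho=2.0, alpha=0.5, beta=0.5, target=2.0):
    """Nominal rate implied by int - infl = rho + alpha*(infl - target) + beta*gap."""
    return infl + rho + alpha * (infl - target) + beta * gap

print(taylor_rate(infl=2.0, gap=0.0))   # 4.0: real rate = rho at target and full capacity
print(taylor_rate(infl=4.0, gap=1.0))   # 7.5: excess inflation and demand -> tightening
print(taylor_rate(infl=1.0, gap=-2.0))  # 1.5: low inflation and slack -> easing
```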

int_t − infl_t = ρ + α·(infl_t − 2%) + β·gap_t

Problem: if int_t reacts to infl_t and gap_t, the central bank will always arrive too late (remember the propagation lags of the monetary policy transmission mechanism). Therefore, a better-working “rule” for stabilizing the economy is the “forward-looking” one:

int_t − E_t(infl_{t+i}) = ρ + α·[E_t(infl_{t+i}) − 2%] + β·E_t(gap_{t+i})

where E_t(infl_{t+i}) and E_t(gap_{t+i}) are the inflation and the gap that, at time t, are expected for t+i (i being the time needed for monetary policy to become effective).

The central bank must therefore set int_t on the basis of what it forecasts will happen to inflation and the business cycle i months ahead. Forecasting these quantities means extrapolating their systematic tendencies from the present and the past.

As said before, the alternative approaches are:

(a) expert judgment;

(b) forecasts from an econometric model.

Unlike (b), the forecast-formation process under (a) is not statistically formalized and validated a priori.

Forecasting with econometric models:

[Diagram: STATISTICAL DATA → (specification, estimation and testing) → ECONOMETRIC MODEL, a characterization of the links between present and past (conditional on the past) → ECONOMETRIC FORECAST, what we expect (conditional on the present and the past), in UNIVARIATE or MULTIVARIATE form.]

