




Today: acquire a financial asset. Future: sell it, or hold it and collect a high return.

Buy (today) at a low price with the (future) prospect of selling at a higher price.

In 1933 Alfred Cowles III published the article "Can stock market forecasters forecast?", in which he showed that forecasters cannot accurately anticipate the behavior of an efficient market. In the short run, quotations in such markets follow a random walk (the best prediction for tomorrow is what happened today). The only exception is when one has inside information (and the law forbids insider trading...).

In practice: given the observed impossibility of everybody getting rich, forecasting financial quotations is very hard, unless one uses techniques that are effective and unknown to most. Otherwise, the workings of arbitrage "wipe" all the information out of the historical data.
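The random-walk claim can be checked on simulated data. A minimal numpy sketch (all names and parameters are illustrative): on a simulated random walk, the naive forecast "tomorrow = today" beats forecasting with the historical mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a driftless random walk: y_t = y_{t-1} + e_t
e = rng.normal(size=500)
y = np.cumsum(e)

# One-step-ahead forecasts over the second half of the sample:
# "naive" uses today's value, "mean" uses the historical average so far.
naive_err = [y[t + 1] - y[t] for t in range(250, 499)]
mean_err = [y[t + 1] - y[: t + 1].mean() for t in range(250, 499)]

mse_naive = float(np.mean(np.square(naive_err)))
mse_mean = float(np.mean(np.square(mean_err)))
print(mse_naive, mse_mean)  # the naive forecast has much lower MSE
```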



Today: formulate the budget law. Future: economic policy measures in 2010.

Choose today the spending that will take place in the future, on the basis of what one expects to collect in the future (tax revenues). Stability Pact.

Two alternative (complementary?) ways to obtain forecasts:

1. Qualitative approach

2. Quantitative approach

• Note: the adjective refers to the tool used, not to the content of the forecast.

Qualitative forecasting methods

• Purely subjective (guessing forecast): based on intuition, insight, and luck.

• Consensus methods: many experts from different disciplines give opinions and judgments, plus a final synthesis.

• Surveys of the sales force, of the customers, of the general public.

Guessing & rules of thumb

• Frequently used when no quantitative data are available. Based on familiarity with the issues and problems involved.

• Main problem: it is impossible to recognize a good forecast until it has come to pass (no formal validation).

• Good guesses (by chance? Who knows) are often reported, bad ones ignored...

• Additional problem: guesses are usually optimistic (overconfidence, insufficient consideration of distributional information – i.e. risk – about outcomes) due to the inside view. Escape: the outside view, i.e. use distributional information on previous cases similar to the one being forecast.

Consensus & experts' judgments

• Put all the experts in a room and let them argue it out. Again, it is difficult to predict which oracle will be right next time...

• Individuals have different group-interaction, authority and persuasion skills.

• Solution: the Delphi technique, based on 4 steps:

1) anonymous predictions

2) summary and feedback to the experts

3) ask them to defend/modify their original opinions

4) repetition until stability (not consensus)



Surveys

• Members of the sales force represent different products, segments and regions.

• Temptation to bias the reports (to beat the forecast and get the rewards). Structured questionnaires help avoid biased answers. Standardization: same order, same questions.

• Open/closed-ended questions. Dichotomous to continuous answers. Response rate & brevity.

• Types: phone, mail, face-to-face/online interviews (feedback).

• Sample selection is critical to the validity of the information about the population being studied (e.g. consumers, households, and businesses). Two different approaches:

1. Nonprobability sampling approach: convenience samples of whoever is available. Beware the lack of representativeness.

2. Probability sampling approach: the chance of being sampled is known. Types: equal probability of selection (e.g. simple random sampling); probability proportional to size (higher chance of selection for larger elements); stratified random sampling (first subpopulations, then separate random samples), used when no size data are available.
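As a concrete illustration of the probability-sampling ideas above, here is a minimal pure-Python sketch of stratified random sampling; the population, the strata and their sizes are hypothetical.

```python
import random

random.seed(42)

# Hypothetical population of 1,000 households split into two strata.
population = [{"id": i, "stratum": "urban" if i < 700 else "rural"}
              for i in range(1000)]

def stratified_sample(pop, frac):
    """Draw a separate simple random sample of size frac*len(stratum)
    from each stratum, then pool the subsamples."""
    strata = {}
    for unit in pop:
        strata.setdefault(unit["stratum"], []).append(unit)
    sample = []
    for units in strata.values():
        k = round(frac * len(units))
        sample.extend(random.sample(units, k))
    return sample

s = stratified_sample(population, 0.10)
urban_share = sum(u["stratum"] == "urban" for u in s) / len(s)
print(len(s), urban_share)  # 100 0.7 — strata proportions are preserved
```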

• Pros

- efficient

- its reliability may be assessed

- only questions of interest (low costs)

• Cons

- can be informative about future events, but relies on plans being realized

- a change of climate changes plans (the less costly ones first...)

- depends on subjective quality (e.g. honesty)

- nonresponses

- vague questions

Quantitative forecasting methods

• Extrapolation of past data through statistical/mathematical techniques, with or without economic theory.

• Environment stability assumption: the forces that created the past will continue to operate in the future.

• If so, extrapolation is appropriate (exploratory data analysis to identify data features and appropriate modeling). Stationarity vs persistence.

• Developmental inertia: some items are more easily changed than others.

• Clothing style contains little inertia (i.e. it is difficult to forecast with past data), while energy consumption has a lot of inertia.

• Problem: historical data are noisy because social-system interactions are complex.

• The model tries to shape noisy data by accounting for the most relevant statistical and/or economic factors.

• However, omitted (minor) determinants leave residual noise (the model's error term).

• For this reason, model-based forecasts explain only the systematic part of future outcomes.

• Modeling starts from an initial set of specification hypotheses (assumptions).

• Specification errors affect forecast ability.

Why forecasts sometimes fail

• Definition: a larger-than-"usual" deviation from the actual outcome.

• Even the best models are unable to capture unexpected stochastic shocks (arising from political crises, wars, natural disasters, etc.) because there are no early-warning signs for such idiosyncratic episodes.

• Usual deviations can be represented by intervals surrounding point forecasts.

• Hence the relevance of assessing a model's forecasting performance.

Quantitative techniques' details

• Statistical methods for historical data:

- moving average and exponential smoothing

- univariate ARIMA (linear/nonlinear) modeling of trends

- using indicators

• Exploiting also economic theory (causal forces):

- multivariate single-equation linear regression. Cons: strong correlations between predictors create unstable forecasts; neglected endogeneity.

- multivariate systems of equations: reduced-form (e.g. VARs) or structural economic models.

Moving average

Used to create a series of averages of different subsamples. In our context, a moving average is not a single number but a set of numbers, each of which is the average of the corresponding subsample.

Rolling, if the size of the subsample (window) is kept fixed. Recursive, if each value is the average of all previous data.

The moving average smooths out short-term fluctuations and highlights longer-term trends or cycles. It is a type of convolution, and so it is also similar to the low-pass filter used in signal processing (e.g. the HP filter).

Unweighted mean of the previous w data points:

$$\text{MovAvg}_{t,w} = \frac{y_t + y_{t-1} + \dots + y_{t-w+1}}{w}$$

Weighted mean (weights decrease arithmetically):

$$\text{WMovAvg}_{t,w} = \frac{w\,y_t + (w-1)\,y_{t-1} + \dots + y_{t-w+1}}{w + (w-1) + \dots + 2 + 1}$$

The (normalized) weights sum to 1; more recent periods weigh more.
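The two formulas can be sketched directly in Python (the series and window below are illustrative):

```python
# Hypothetical series; window w = 3.
y = [10.0, 12.0, 11.0, 13.0, 15.0, 14.0]

def mov_avg(series, w):
    """Rolling unweighted mean of the last w observations."""
    return [sum(series[t - w + 1 : t + 1]) / w
            for t in range(w - 1, len(series))]

def w_mov_avg(series, w):
    """Arithmetically decreasing weights: w for the most recent
    observation, down to 1 for the oldest in the window."""
    denom = w * (w + 1) / 2              # w + (w-1) + ... + 1
    out = []
    for t in range(w - 1, len(series)):
        window = series[t - w + 1 : t + 1]            # oldest ... newest
        num = sum((i + 1) * x for i, x in enumerate(window))
        out.append(num / denom)
    return out

print(mov_avg(y, 3))    # [11.0, 12.0, 13.0, 14.0]
print(w_mov_avg(y, 3))
```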

Exponential smoothing

Exponential smoothing is a form of weighted moving average in which the new forecast is a weighted sum of the actual data in the current period and the weighted forecast of that variable for the same period:

$$\text{ExpSmooth}_{\alpha,0} = y_0$$

$$\text{ExpSmooth}_{\alpha,t} = \alpha\, y_t + (1-\alpha)\,\text{ExpSmooth}_{\alpha,t-1}$$

or, equivalently:

$$\text{ExpSmooth}_{\alpha,t} = \text{ExpSmooth}_{\alpha,t-1} + \alpha\,(y_t - \text{ExpSmooth}_{\alpha,t-1})$$

where α is the smoothing factor, with 0 < α < 1.

As time passes, ExpSmooth becomes an average of a growing number of past observations y (see below).
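A minimal implementation of the recursion (the data and the value of α are illustrative):

```python
def exp_smooth(y, alpha):
    """ExpSmooth_0 = y_0; ExpSmooth_t = alpha*y_t + (1-alpha)*ExpSmooth_{t-1}."""
    s = [y[0]]                      # initialize with the first observation
    for yt in y[1:]:
        s.append(alpha * yt + (1 - alpha) * s[-1])
    return s

smoothed = exp_smooth([10.0, 12.0, 11.0, 13.0], 0.5)
print(smoothed)  # [10.0, 11.0, 11.0, 12.0]
```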


Why "exponential"?

The initial value:

$$\text{ExpSmooth}_{\alpha,0} = y_0$$

In general:

$$\text{ExpSmooth}_{\alpha,t} = \alpha\, y_t + (1-\alpha)\,\text{ExpSmooth}_{\alpha,t-1}$$

The first estimate:

$$\text{ExpSmooth}_{\alpha,1} = \alpha\, y_1 + (1-\alpha)\, y_0$$

The lagged general estimates are:

$$\text{ExpSmooth}_{\alpha,t-1} = \alpha\, y_{t-1} + (1-\alpha)\,\text{ExpSmooth}_{\alpha,t-2}$$

$$\text{ExpSmooth}_{\alpha,t-2} = \alpha\, y_{t-2} + (1-\alpha)\,\text{ExpSmooth}_{\alpha,t-3}$$

Then, substituting the lags in t:

$$\text{ExpSmooth}_{\alpha,t} = \alpha\, y_t + \alpha(1-\alpha)\, y_{t-1} + (1-\alpha)^2\,\text{ExpSmooth}_{\alpha,t-2}$$

$$\text{ExpSmooth}_{\alpha,t} = \alpha\, y_t + \alpha(1-\alpha)\, y_{t-1} + \alpha(1-\alpha)^2\, y_{t-2} + (1-\alpha)^3\,\text{ExpSmooth}_{\alpha,t-3}$$

$$\text{ExpSmooth}_{\alpha,t} = \alpha\,[\,y_t + (1-\alpha)\, y_{t-1} + (1-\alpha)^2\, y_{t-2}\,] + (1-\alpha)^3\,\text{ExpSmooth}_{\alpha,t-3}$$

Iterating back to the start of the sample:

$$\text{ExpSmooth}_{\alpha,t} = \alpha\,[\,y_t + (1-\alpha)\, y_{t-1} + (1-\alpha)^2\, y_{t-2} + \dots + (1-\alpha)^{t-1}\, y_1\,] + (1-\alpha)^t\, y_0$$

As time t passes, the smoothed statistic ExpSmooth becomes the weighted average of a greater and greater number of the past observations y, and the weights assigned to previous observations are proportional to the terms of the geometric progression {1, (1−α), (1−α)², (1−α)³, …}. A geometric progression is the discrete version of an exponential function, so this is where the name "exponential smoothing" comes from.
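The equivalence between the recursion and its exponentially weighted expansion can be checked numerically; a small sketch with illustrative data:

```python
def exp_smooth(y, alpha):
    """Recursive form: s_t = alpha*y_t + (1-alpha)*s_{t-1}, s_0 = y_0."""
    s = y[0]
    for yt in y[1:]:
        s = alpha * yt + (1 - alpha) * s
    return s

def expanded(y, alpha):
    """Expanded form: alpha * sum_{i=0}^{t-1} (1-alpha)^i y_{t-i}
    plus the residual weight (1-alpha)^t on the initial value y_0."""
    t = len(y) - 1
    total = sum(alpha * (1 - alpha) ** i * y[t - i] for i in range(t))
    return total + (1 - alpha) ** t * y[0]

y = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
a, b = exp_smooth(y, 0.3), expanded(y, 0.3)
print(a, b)  # identical up to floating-point rounding
```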

• Note that all exponential smoothing algorithms face two problems in application (with many arbitrary solutions):

(1) how to determine the parameters, such as α?

(2) how to determine the starting values for y?



• The forecast from exponential smoothing is defined by simply extrapolating ExpSmooth_T flatly into the future.

• This is inappropriate for trending data, which are the rule in economics. For trending data, a number of (arbitrary) suggestions have been proposed (among these, double exponential smoothing).

• The smoothing approach is not so often applied nowadays, though it is still present in many computer programs.

• Single exponential smoothing corresponds to a model-based forecast for the ARIMA(0,1,1) model, double exponential smoothing to an ARIMA(0,2,2).

• Holt-Winters methods are generalizations of exponential smoothing which cover both trending and seasonal time series. Basically, HW methods entail more than one smoothing parameter.

• Another ad hoc method of potential interest is Brockwell-Davis's small-trend method, designed for variables with a not-too-large trend component. The basic idea is that the average of the observations over one year delivers a crude indicator of that year's trend. Then, that trend can be subtracted from the original observations, and the resulting series is averaged by month (quarter) over the whole sample. The main drawback of the BD method is the assumption of time-constancy of the seasonal cycle, and no further modelling of the stationary components.

• In general, smoothing approaches are of some interest when the number of observations is not enough to permit sensible, more formal time-series approaches (see ARIMA).

Smoothing vs filtering

Though smoothing can be justified as optimal under certain model assumptions, such methods are not really based on the validity of a model. Literally, smoothing is not forecasting.

Systems analysis distinguishes among several methods used for recovering the 'true underlying' variables from noisy observations. Filters are distinguished from smoothers by the property that filters use only past observations to recover data, while smoothers use the whole time range to recover observations within that range.

ARIMA models

• We have observed data on a variable; let the data speak for themselves!

• Idea: at least for forecasting, we do not need the model which actually generated the observations: "all models are wrong, but some are useful".

• Therefore, the Box-Jenkins approach fits models to observed data with the purpose of short-term out-of-sample prediction of those data, following an iterative loop.

• It is a simple way to describe data patterns; competitive w.r.t. other approaches: the work-horse of the forecasting industry and the favorite benchmark.

Box-Jenkins’s ARIMA loop

1. Plot of the series

2. Unit root test

3. Correlogram

4. Model identification

5. Estimation

6. Residual diagnostics (e.g. residuals correlogram)

7. If all is ok, forecast; otherwise restart from step 3-4

8. Use of the forecasts

Linear/Nonlinear ARIMA

• If the stochastic process {y_t}, t = 1, 2, …, T, has a stationary Gaussian distribution (or when the DGP's non-linear features are too weak to be exploited by different models), we just need to capture its first two moments, e.g. means and covariances. In this context, the optimal forecast is a linear combination of past values of the data with constant weights, which is proxied by simple models such as the popular AR(1) model $y_t = a + b\,y_{t-1} + \varepsilon_t$.

• If {y_t} is not Gaussian (e.g. asymmetric), the optimal forecast will not be linear, which induces the use of non-linear time series models, e.g. Threshold AR (TAR) models or Markov Switching models. Both posit 2 (or more) different regimes.
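As a sketch of the linear case, the AR(1) model above can be estimated by OLS on a simulated series and used for a one-step-ahead forecast (the true coefficients and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a stationary AR(1): y_t = a + b*y_{t-1} + e_t
a_true, b_true, n = 1.0, 0.6, 2000
y = np.empty(n)
y[0] = a_true / (1 - b_true)          # start at the unconditional mean
for t in range(1, n):
    y[t] = a_true + b_true * y[t - 1] + rng.normal()

# OLS of y_t on a constant and y_{t-1}
X = np.column_stack([np.ones(n - 1), y[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]

forecast = a_hat + b_hat * y[-1]      # one-step-ahead point forecast
print(a_hat, b_hat, forecast)
```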


Example of a TAR(1) model with 2 regimes:

$$y_t = a^{+} I_t + b^{+} y_{t-1} I_t + a^{-} (1 - I_t) + b^{-} y_{t-1} (1 - I_t) + \varepsilon_t$$

where I_t is the Heaviside indicator such that:

$$I_t = 1 \ \text{ if } y_{t-1} \ge \tau, \qquad I_t = 0 \ \text{ if } y_{t-1} < \tau$$

and τ is the threshold delimiting the switch, searched over the trimmed distribution of y.

Variations of the model above interact I_t with only some of the coefficients, the intercept and/or the error term.

The model above is a Self-Exciting TAR; a more general TAR can allow exogenous forces, rather than y_{t-1}, to drive the switch.
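The regime selection of the 2-regime TAR(1) above can be sketched as a one-step point-forecast function (the coefficient values below are hypothetical):

```python
def tar1_forecast(y_prev, tau, a_hi, b_hi, a_lo, b_lo):
    """One-step point forecast of a 2-regime TAR(1):
    the regime is chosen by the Heaviside indicator I = 1{y_prev >= tau}."""
    I = 1.0 if y_prev >= tau else 0.0
    return (a_hi + b_hi * y_prev) * I + (a_lo + b_lo * y_prev) * (1.0 - I)

# Hypothetical coefficients; tau = 0 splits the two regimes.
print(tar1_forecast(2.0, 0.0, 0.1, 0.5, -0.1, 0.9))   # upper regime: 1.1
print(tar1_forecast(-1.0, 0.0, 0.1, 0.5, -0.1, 0.9))  # lower regime: -1.0
```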


If the switch is driven by changes, $\Delta y_{t-1} \ge (<)\ \tau$, instead of levels, $y_{t-1} \ge (<)\ \tau$, the model is called Momentum TAR (MTAR).

TAR/MTAR models are characterized by sharp (i.e. 0/1) indicator functions. As an alternative, one could assume a Smooth Transition AR (STAR) model.

The Logistic STAR assumes:

$$I_t = \frac{1}{1 + \exp[-\gamma\,(y_{t-1} - \tau)]} \qquad \text{with } \gamma > 0$$
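The logistic transition function can be sketched in a few lines; for a large γ (e.g. 25) the transition is nearly as sharp as the TAR's 0/1 indicator:

```python
import math

def lstar_indicator(y_prev, gamma, tau):
    """Logistic STAR transition: I = 1 / (1 + exp(-gamma*(y_prev - tau)))."""
    return 1.0 / (1.0 + math.exp(-gamma * (y_prev - tau)))

# At the threshold the weight is exactly 0.5 for any gamma.
print(lstar_indicator(0.0, 1.0, 0.0))      # 0.5
# A large gamma approximates the TAR's sharp 0/1 switch.
print(lstar_indicator(0.5, 25.0, 0.0))     # close to 1
print(lstar_indicator(-0.5, 25.0, 0.0))    # close to 0
```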

[Figure: a number of transition paths of the logistic indicator for different values of the parameter γ, with τ = 0. The amplitude of the TAR's sharp switch is proxied when γ = 25 (i.e. high).]

Markov switching models

• Similar in concept to transition models – as both entail different regimes – but in Markov switching models the switches occur based on an unobserved latent variable (while in TAR models they depend on past values of the observed data).

• I_t is modeled as unobserved and following a two-state Markov process.

• Two fundamental references:

Granger-Terasvirta (1993), Modelling Nonlinear Economic Relationships, Oxford UP

Hamilton (1994), Time Series Analysis, Princeton UP


Further extensions

• In many data sets, stability – e.g. DGP stationarity of the first two moments – appears to be clearly violated.

• Reasons:

1. Local (stochastic) trends, i.e. growth trends

2. Volatility changes

3. Level shifts

• In all these cases, the stationary ARMA model must be extended to further modeling of the trend (case 1 and partly case 3 above) and of the components (e.g. the variance, case 2 above).

Modeling trends & components

• Again, plot/correlogram/unit root tests as a way to detect deterministic and stochastic trends in the data (remember the "benefits of over-differencing" for case 3 above; see Clements-Hendry).

• Deterministic trend plus noise: $y_t = \alpha + \beta t + \varepsilon_t$

• Simplest stochastic trend: $y_t = \alpha + y_{t-1} + \varepsilon_t$

• If a forecasting model incorrectly incorporates deterministic trends, but the series is highly persistent, spuriously large weight can be placed on the estimated trend component, and long-run forecasts can suffer greatly.

• The caveat of the homogeneity over time of a series:


is it equally reliable throughout its length?

• In the more general unobserved components model (UCM), y is represented by the sum of several stochastic components, which are modeled and combined. The estimation of UCMs uses the ML approach through the Kalman filter; see Harvey (1990), Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge UP.

• In some economic applications (especially in finance), interest is in forecasting conditional higher moments (e.g. the error variance) instead of focusing on the prediction of conditional means. Here, Engle's (1982) ARCH and stochastic volatility models can be helpful.

• The random coefficient model $y_t = a + b_t\, y_{t-1} + \varepsilon_t$, where $b_t = c\, b_{t-1} + u_t$, is more general; the parameter c drives the evolution of the coefficient series. If c = 1 the model contains a stochastic unit root.




Teaching material for the course "Econometria per la politica economica" taught by prof. Roberto Golinelli, covering: economic forecasting and planning; surveys and quantitative forecasting methods; ARIMA models; ARCH-GARCH models.

Degree programme: Laurea magistrale in Politica, amministrazione e organizzazione
University: Bologna - Unibo
Academic year: 2011-2012

The contents of this page are personal reworkings by the publisher Atreyu of material from the lectures of Econometria per la politica economica and from independent study of reference books; they should not be taken as official material of the University of Bologna or of prof. Roberto Golinelli.
