STEPS TO FOLLOW: IDENTIFYING THE FAILURE; EXPLAINING IT; REMEDYING IT.
First, the failure can be identified by looking at the performance of the regulation in detecting undesirable behaviour, in developing responses and intervention tools to deal with errant behaviour, and in whether it produces self-defeating outcomes. In general, there may be a failure when the outcomes produced by a given regulatory system differ from the hypothetical outcomes that would have been produced by doing nothing or by implementing some other regime of control. A failure can be the consequence of under- or over-regulation.
Under-regulation is often linked to a lack of information-gathering on the risks and risk creators that bear on the achievement of objectives. Moreover, such deficiencies in regulatory detection may relate to the inclusiveness of the rules and standards that regulators are applying. Under-inclusiveness means that conduct that should be controlled is allowed to escape constraint. Over-regulation, by contrast, is associated with stringent and prescriptive regulation (over-formalism) or with over-inclusive rules that reduce the scope for innovation and research and excessively restrict behaviour that should not be subjected to control. More specific “response failings” follow. One of them may be associated with the choice of regulatory instruments. At the heart of enforcement failings may lie a more general problem: a failure to maintain reputation, since reputation and credibility are critical in establishing and sustaining a regulator’s ability to act autonomously. As for assessment and modification failings, these affect a regulator’s ability to achieve desired outcomes because the regulator may be unable to cope with new challenges (for instance, owing to the absence of data-gathering and feedback systems, or to obstacles created by tight legislation, among other factors).
Turning to process failures, regulators tend to fail procedurally when they do not develop and follow procedures that satisfy stakeholders’ appetites for openness and transparency, or where the regime does not provide for accountability of an acceptably representative nature. Moreover, in decentred regulatory regimes the issue of regulatory failure becomes more complex, because outcomes may be collectively generated or there may be doubt as to the locus of responsibility for dealing with a problem. In some circumstances, political interference in technical sectors constitutes a regulatory failing: regulators might be asked by their political masters to do the impossible; those focusing on politicians’ behaviour would point to the problem of governments changing their minds over time; and others would note the risk-avoiding behaviour of regulatory agencies that focus on realizing popular and convenient outcomes rather than those that are important, difficult, and potentially unpopular.
After having identified the failure, the government must explain it. At the broadest level, regulatory failure
can be explained by insufficient resources and by epistemological limitations. In fact, information is costly
and the capacity of the organization/system to process all available information within time and other
constraints is inherently limited. Therefore, regulation can go wrong because of uncertainty and ambiguity
of knowledge, such that the likelihood that the regulatory strategies manage to achieve their intended effect
in all cases is very low. As a result, the regulators may not be able to calculate which steps they have to take
in order best to serve the public interest, and this may impede their endeavours despite their good intentions.
Information asymmetries can thus generate several kinds of drift: coalitional drift (governments changing
preferences over time); agency drift (agencies not following their statutory objectives); industry drift
(industry not following regulatory requirements). Given the limits of our knowledge and understanding, one
key strategy therefore is not to rely on grand schemes, but rather to employ incremental ‘trial-and-error’
approaches towards regulatory change. At the more specific levels, we can consider the rhetorical and the analytical ones. Looking first at the rhetorical level, one high-level approach draws on the work of the economist Albert Hirschman, who notes three rhetorical strategies/positions that are commonly employed to resist “progressive policy interventions” (like proposals for new types of regulation) or to dispute the effectiveness of existing provisions. They are:
▪ Futility: according to Hirschman, this position urges that, regardless of regulatory effort, no change to the existing problem will occur, given its complexity; for example, people will not change their behaviour, regardless of regulatory intervention.
▪ Jeopardy: this argument arises when, despite the worthwhile character of a particular regulatory instrument, its deployment would put wider achievements at risk and/or lead to a chain of undesirable side-effects; the potential benefits of regulatory activity could then be outweighed by its costs (such as a shift in deviance towards more dangerous kinds of activity) or by the wider loss of other achievements (the so-called ‘slippery slope’ argument).
▪ Perversity: this position identifies interventions that achieve the exact opposite of their intended outcomes.
These explanations also suggest that we are faced with often contradictory advice on how to deal with
regulatory failure. Three general recipes follow:
▪ Coordination: more coordination is useful to centralize information, to maintain control, and to impose a more uniform regulatory process using common methodologies. In fact, problems of over- and under-regulation are often associated with failings in coordination.
▪ Organizational reform and learning: because of the bounded rationality encountered in decision-making, regulatory reform will not be conducted on the basis of exhaustive and comprehensive analysis; instead, reform proposals will be based on limited searches, meaning that learning and evaluating are important when facing a failure.
▪ Clumsy solutions/hybrids: an approach that mixes elements from various “pure” strategies in order to compensate for the side-effects that arise when reforming existing regimes.
Looking across these three widely advocated solutions to regulatory failure suggests that any remedy is
associated with inherent trade-offs, side-effects, and limitations.
REGULATING RISKS | Risk is usually defined as the probability of a particular event occurring and the consequent severity of the impact of that event; this is why, unlike uncertainty, risk is quantifiable. Nevertheless, living in this advanced modernity requires specific expertise to identify, recognize, and measure these, often global, risks. In addition, contemporary risks differ in quality from previous generations of risks in so far as the former may unintentionally generate greater (transboundary) unanticipated risks. In fact, “risk means the anticipation of catastrophe”. Regulation can be seen as the control of risk across all dimensions of regulation, namely standard-setting, information-gathering, and behaviour-modification. Risk regulation helps us to deal with uncertain popular responses to anticipated or realized risks and with the issues presented by their communication (examples: concerns about the safety of large technical installations such as nuclear reactors; the fears associated with mad cow disease and Covid-19; worries about the safety of genetically modified foods), leading to the application of the precautionary principle within countries, within the EU, and in global trade.
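A minimal numeric sketch of this standard definition (the notation p and S and the figures are ours, for illustration only): if p is the probability of the event and S the severity of its impact, the risk can be expressed as
R = p × S,
the expected severity; for example, an event with p = 0.001 per year and an impact valued at 10,000 gives R = 0.001 × 10,000 = 10 per year, which is what makes risk, unlike uncertainty, quantifiable.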
Due to the inherent complexity of organizational processes, which rest on different technologies, differentiated approaches should be adopted. Similarly, issues arise concerning the relative importance of addressing high-impact but low-probability risks versus low-impact but high-probability ones: how risks are perceived and responded to often has more to do with subjective matters (i.e., fears and anxieties, moral panics) than with any form of objective risk profiling. That is why particular risks are regulated heavily while others are tolerated in a much more reactive way.
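A hypothetical comparison (figures assumed, for illustration only) makes the point: a low-probability, high-impact risk with p = 0.000001 and S = 1,000,000 (e.g., a catastrophic failure at a large installation) and a high-probability, low-impact risk with p = 0.1 and S = 10 yield exactly the same expected loss,
p × S = 1,
yet the former typically attracts far heavier regulation than the latter, reflecting perception rather than objective risk profiling.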
DEFINING AND ASSESSING RISKS | It is necessary to differentiate between types of risk:
▪ Probabilistic risks, whose probability is based on available statistics concerning past incidents; they are objective, because they are seen as scientifically assessable by experts.
▪ Unpredictable risks, which cannot be quantified a priori; among these are non-repeating risks, where probabilities cannot be estimated and only non-expert perceptions are available, such that subjective assessments must be made.
▪ Voluntarily undertaken risks (e.g., from taking oral contraceptives or diet drinks) and societally imposed risks (e.g., from nuclear power stations), where citizens have little choice as to exposure.
▪ Discrete risks, whose associated events are of a precise and bounded nature.
▪ Pervasive risks, which arise as a by-product of the functioning of society (e.g., polluted air, water, and soil).
▪ Reversible risks, whose effects can be undone by appropriate procedures.
▪ Non-reversible risks.
▪ Risks of different natures: natural, physical, biological, social-communicative.
Even if there is no one best way, some guidelines may be set out, and a few broad and varying approaches can be identified.
➢ Technical perspectives
This perspective looks to the relative frequencies of events that are amenable to objective observation (e.g., numbers of deaths), assessing probabilities (the numerical result of the risk) through the extrapolation of statistics on past events (the objective evidence). Technical approaches, in general, thus seek to anticipate physical harms, average events over time and space, and use relative frequencies to specify probabilities.
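A minimal sketch of this frequency-based reasoning (notation assumed): if past records show n harmful events (e.g., deaths) over N comparable exposures, the probability is extrapolated as
p ≈ n / N,
and the technical assessment then combines this estimated probability with the average severity of the observed events to quantify the risk.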
It is important to specify that this perspective has been used to assess not merely the quantum of risks but
also their social acceptability. This latter application has, however, been much criticized by social scientists
on the grounds that what persons perceive as undesirable depends on their values and preferences and that
technical strategies tend to undervalue objectives such as equity, fairness, public participation, and
resilience. Objectors have also contended that judgements are involved in selecting, defining, and structuring
the ‘risk problem’ and that these influence subsequent conclusions. Thus, these approaches are seen as
potentially biased, requiring not just the existence of data, but also a belief in the possibility of technical
analysis in dealing with risks. Such criticisms have eroded not only the idea of objectivity in risk assessment
but also the presumed difference between expert and lay public views of risk—the critics of technical
approaches hold that both technical and lay assessments of risks involve human interpretation, judgement,
and subjectivity. Nevertheless, even if other approaches to risk may be applied, it can be argued that technical assessments still have a role to play, given the contribution they can make.