
…following sets of categories: ∅, {NUC}, {EUR}, {US}, {NUC, EUR}, {NUC, US}, {EUR, US}, and {NUC, EUR, US}. These sets of categories form a lattice under the operation ⊆ (as in figure 1.13). Each classification level and category set together form a security level.

As before, we say that subjects have clearance at (or are cleared into, or are in) a security

level and that objects are at the level of (or are in) a security level. For example, William

may be cleared into the level (SECRET, {EUR}) and George into the level (TOP SECRET, {NUC, US}). A document may be classified as (CONFIDENTIAL, {EUR}).


Figure 1.13: Example

Security levels change access. Because categories are based on a "need to know," someone with access to the category set {NUC, US} presumably has no need to access items in the category EUR. Hence, read access should be denied, even if the security clearance of the subject is higher than the security classification of the object. But if the desired object is in any of the security levels ∅, {NUC}, {US}, or {NUC, US} and the subject's security clearance is no less than the document's security classification, access should be granted because the subject is cleared into the same category set as the object.

Definition 29. The security level (L, C) dominates the security level (L′, C′) if and only if L′ ≤ L and C′ ⊆ C.
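To make the dominance relation of Definition 29 concrete, here is a minimal Python sketch (the SecurityLevel class and the numeric ordering of classifications are illustrative assumptions, not part of the notes): a security level is a pair of a classification and a category set, and dominates implements L′ ≤ L and C′ ⊆ C.

```python
from dataclasses import dataclass
from typing import FrozenSet

# Illustrative total order on classifications (an assumption for this sketch).
CLEARANCE = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class SecurityLevel:
    classification: str         # L
    categories: FrozenSet[str]  # C

    def dominates(self, other: "SecurityLevel") -> bool:
        # (L, C) dominates (L', C') iff L' <= L and C' is a subset of C.
        return (CLEARANCE[other.classification] <= CLEARANCE[self.classification]
                and other.categories <= self.categories)

# William is cleared into (SECRET, {EUR}); a document is (CONFIDENTIAL, {EUR}).
william = SecurityLevel("SECRET", frozenset({"EUR"}))
document = SecurityLevel("CONFIDENTIAL", frozenset({"EUR"}))
print(william.dominates(document))  # True: same category set, higher clearance

# George is cleared into (TOP SECRET, {NUC, US}) but lacks the EUR category.
george = SecurityLevel("TOP SECRET", frozenset({"NUC", "US"}))
print(george.dominates(document))   # False: read denied despite the higher clearance
```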

Bell-LaPadula problems

• Numerous categories (and security levels) are needed to represent commercial applications.

• Creation of categories and security levels is usually decentralized.

• Problem of information aggregation:

– many individually innocuous pieces of information can be public, but their aggregation could reveal sensitive confidential information.

1.6.2 ORCON

Definition 30. An originator controlled access control (ORCON) bases access on the creator

of an object (or the information it contains).

The goal of this control is to allow the originator of the file (or of the information it contains)

to control the dissemination of the information. The owner of the file has no control over who

may access the file.

ORCON is a policy in which a subject can give another subject rights to an object only with

the approval of the creator of that object.

In practice, a single author does not control dissemination; instead, the organization on whose

behalf the document was created does. Hence, objects will be marked as ORCON on behalf of

the relevant organization.
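A minimal sketch of the ORCON idea, using hypothetical structures (OrconObject, grant_read, and the subject names are not from the notes): a subject holding access may pass rights on only with the approval of the originator that marked the object ORCON.

```python
# ORCON sketch with hypothetical structures: the originator that marked the
# object ORCON, not its current owner or holder, must approve every new grant.

class OrconObject:
    def __init__(self, name, originator):
        self.name = name
        self.originator = originator   # organization on whose behalf the object was created
        self.readers = {originator}    # subjects currently allowed to read

    def grant_read(self, granter, grantee, originator_approves):
        # A holder of the right may pass it on only with the originator's approval.
        if granter not in self.readers:
            raise PermissionError(f"{granter} holds no rights to pass on")
        if not originator_approves:
            raise PermissionError(f"grant denied: {self.originator} did not approve")
        self.readers.add(grantee)

doc = OrconObject("strategy.txt", originator="AgencyA")
doc.grant_read("AgencyA", "Bob", originator_approves=True)      # allowed
try:
    doc.grant_read("Bob", "Clare", originator_approves=False)   # Bob alone cannot re-disseminate
except PermissionError as err:
    print(err)
```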

1.6.3 Integrity policies

Integrity involves aspects and principles of operation that are less relevant to military security:

• separation of duty: several different people must be involved to complete a critical

function;

• separation of function: a single person cannot complete complementary roles within a

critical process;

• auditing: recoverability and accountability require maintaining an audit trail.

Requirements for the integrity policies:

• users will not write their own programs;

• programmers will develop and test programs on a non-production system;

• a special process must be followed to install a program from the development system onto

the production system;

• this must be controlled and audited;

• managers and auditors must have access to both the system state and the system logs.

GOALS:

• If two or more steps are required to perform a critical function at least two people should

perform the steps (separation of duties).

• Developers do not develop new programs on production systems (separation of functions).

• Developers do not process production data on production systems (separation of functions).

• Commercial systems emphasize recovery and accountability (auditing).

• Auditing involves analyzing systems to determine what actions took place and who was

involved (auditing).

Biba’s Model

This model is the dual of the Bell-LaPadula Model.

Principle. No read down, no write up.

Integrity labels are not security (confidentiality) labels.

• Confidentiality labels limit the flow of information.

• Integrity labels limit the modification of the information.

Biba’s model emphasizes that the integrity of the process relies on both the integrity of the

program and the integrity of the access control file. The former requires that the program be

properly protected so that only authorized personnel can alter it. The system managers must

determine who the “authorized personnel” are. Among the considerations here are the principle

of separation of duty and the principle of least privilege.
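A minimal sketch of the Biba checks under the stated principle (the integrity levels and helper names are illustrative assumptions): reads are allowed only from objects at the same or higher integrity level, and writes only to objects at the same or lower integrity level.

```python
# Biba "no read down, no write up" sketch (integrity levels are illustrative).

INTEGRITY = {"low": 0, "medium": 1, "high": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    # No read down: the object's integrity must be at least the subject's.
    return INTEGRITY[object_level] >= INTEGRITY[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # No write up: the subject's integrity must be at least the object's.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

print(can_read("medium", "high"))   # True: reading up cannot lower the subject's integrity
print(can_read("medium", "low"))    # False: no read down
print(can_write("medium", "low"))   # True: writing down is allowed
print(can_write("medium", "high"))  # False: no write up
```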


Clark-Wilson Model

• Authentication: the identity of all users must be properly authenticated.

• Audit: modifications should be logged to record every program executed and by whom,

in a way that cannot be undone.

• Well-formed transactions: users manipulate data only in constrained ways. Only

legitimate accesses are allowed.

• Separation of duty: the system associates with each user a valid set of programs they

can run. Prevents unauthorized modifications, thus preserving integrity and consistency

with the real world.

Example. When a company receives an invoice, the purchasing office requires several steps to

pay for it. First, someone must have requested a service and determined the account that would

pay for the service. Next, someone must validate the invoice (was the service being billed for

actually performed?). The account authorized to pay for the service must be debited, and the

check must be written and signed. If one person performs all these steps, that person could

easily pay phony invoices; however, if at least two different people perform these steps, both

must conspire to defraud the company. Requiring more than one person to handle this process

is an example of the principle of separation of duty.

The Clark-Wilson model defines data subject to its integrity controls as constrained data

items or CDIs. Data not subject to integrity controls are called unconstrained data items, or

UDIs. For example, in a bank, the balances of accounts are CDIs since their integrity is crucial to

the operation of the bank, whereas the gifts selected by the account holders when their accounts

were opened would be UDIs because their integrity is not crucial to the operation of the bank.

The set of CDIs and the set of UDIs partition the set of all data in the system being modeled.

A set of integrity constraints (similar in spirit to the consistency constraints discussed above)

constrain the values of the CDIs. In the bank example, the consistency constraint presented

earlier would also be an integrity constraint.

The model also defines two sets of procedures. Integrity verification procedures, or IVPs, test

that the CDIs conform to the integrity constraints at the time the IVPs are run. In this case,

the system is said to be in a valid state. Transformation procedures, or TPs, change the state of

the data in the system from one valid state to another; TPs implement well-formed transactions.

Certification rules and enforcement rules:

C1 All IVPs must ensure that CDIs are in a valid state when the IVP is run.

C2 All TPs must be certified valid.

C3 Assignment of TPs to users must satisfy separation of duty.

C4 The operation of TPs must be logged.

C5 TPs executing on UDIs must result in valid CDIs.

E1 Only certified TPs can manipulate CDIs.

E2 Users must only access CDIs by means of TPs for which they are authorized.

E3 The identity of each user attempting to execute a TP must be authenticated.

E4 Only the agent permitted to certify entities can change the list of entities associated with

other entities.
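The enforcement rules can be illustrated with a short Python sketch (the sets certified_tps and allowed_triples and the example users are hypothetical, not from the notes): it rejects executions that would violate E1, E2, or E3 and logs every successful TP run, as required by C4.

```python
# Clark-Wilson enforcement sketch: hypothetical certified TPs, access triples,
# and users. It enforces E1, E2, E3 and logs each successful execution (C4).

certified_tps = {"validate_invoice", "post_payment"}             # certified valid TPs (C2/E1)
allowed_triples = {("alice", "validate_invoice", "invoice_db"),  # user-TP-CDI authorizations (C3/E2)
                   ("bob", "post_payment", "accounts_db")}
authenticated_users = {"alice", "bob"}                           # identities verified (E3)
audit_log = []                                                   # append-only log (C4)

def run_tp(user, tp, cdi):
    if user not in authenticated_users:
        raise PermissionError("E3: user not authenticated")
    if tp not in certified_tps:
        raise PermissionError("E1: TP is not certified")
    if (user, tp, cdi) not in allowed_triples:
        raise PermissionError("E2: user not authorized for this TP/CDI")
    audit_log.append((user, tp, cdi))  # C4: record who ran which TP on which CDI
    print(f"{user} ran {tp} on {cdi}")

run_tp("alice", "validate_invoice", "invoice_db")    # allowed
try:
    run_tp("alice", "post_payment", "accounts_db")   # blocked: separation of duty
except PermissionError as err:
    print(err)
```

Note that separation of duty (C3) is reflected only in how the allowed triples are assigned; the sketch checks the enforcement rules, not the certification rules themselves.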

1.7 Secure interoperation

It concerns the composition of secure systems (systems with identical or compatible security attributes or policies) and communication when objects and subjects have an assigned security level (multilevel systems).

Example. Two offices:

• admin system

• sales system

Figure 1.14: Example of connection between two systems.

Systems are individually secure.

Is it safe to allow file sharing between Personnel and Sales systems? Clare is not

authorized to access Bob’s files, but Clare may access Bob’s files via the Sales system.

We need to reconfigure connections/systems to close this circuitous access route.
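A minimal sketch of why composition can break security, assuming hypothetical entities and arcs patterned on the Clare/Bob example (none of the names come from the notes): each system is a set of permitted accesses, the composition is their union, and the transitive closure reveals the circuitous route.

```python
# Hypothetical configuration patterned on the example: within the Personnel
# system, Clare is not allowed to reach Bob's files, but composing the two
# individually secure systems opens a circuitous route.

personnel = {("Bob", "Personnel"), ("Personnel", "BobFiles")}  # Clare -> BobFiles not permitted here
sales = {("Clare", "Sales"), ("Sales", "Personnel")}

def transitive_closure(edges):
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

composed = transitive_closure(personnel | sales)
# Security principle: an access denied within an individual system must stay denied.
print(("Clare", "BobFiles") in composed)  # True -> the composition is not secure
```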

Interoperation principles:

• autonomy: any access permitted within an individual system must also be permitted under

secure interoperation;

• security: any access not permitted within an individual system must also be denied under

secure interoperation.

Definition 31 (access configuration). A collection of constraints between entities (subjects,

objects) specifying access permissions.

• Variables V = {S, O};

• Domain D = {a, b, c};

• C_S1(a, b) = T (a can access b’s files);

• C_S1(a, c) = F (a cannot access c’s files);

• C_S1(a, b) = {w} (a has permission to write to b’s files).

S1

Example. If we assume default deny, we only need to write the "true" (permitted) arrows.

We can reconfigure the systems individually (reducing autonomy to increase security) in the previous example so as to remove the transitive access through B (making it secure).

How? By reducing the users' rights.

Figure 1.15: Example of reducing rights

Example. As we can see in the example in figure 1.16, combining the two systems (figure 1.17) produces some contradictions. In these cases, we can choose to work with the "red" arrows when combining the systems and assume that all the missing arrows are permitted accesses.

Figure 1.16: Two starting systems

Figure 1.17: The two systems combined.

Security by reducing interoperation

It is about removing security violations while maintaining the maximum amount of information exchange (i.e., reducing interoperation while maintaining autonomy).

Maximal secure reconfiguration

• remove the minimum number of arcs;

• reduce the number of arcs so as to maintain the maximum number of connections (a brute-force sketch follows below).
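A brute-force sketch of maximal secure reconfiguration, reusing the hypothetical configuration from the earlier sketch (all names and arcs are illustrative): only cross-system arcs are candidates for removal, so autonomy inside each system is preserved while the fewest possible interoperation links are cut. The exhaustive search over arc subsets also hints at why optimal reconfiguration becomes expensive for large configurations.

```python
# Brute-force sketch of security by reducing interoperation: only cross-system
# arcs may be removed, and we remove as few of them as possible while closing
# every forbidden transitive access.

from itertools import combinations

internal = {("Clare", "Sales"), ("Bob", "Personnel"), ("Personnel", "BobFiles")}
cross = {("Sales", "Personnel")}     # interoperation links between the two systems
forbidden = {("Clare", "BobFiles")}  # denied within the Personnel system

def reachable(edge_set):
    closure = set(edge_set)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def maximal_secure_reconfiguration(internal_arcs, cross_arcs, forbidden_set):
    for k in range(len(cross_arcs) + 1):                   # remove as few arcs as possible
        for removed in combinations(sorted(cross_arcs), k):
            remaining = internal_arcs | (cross_arcs - set(removed))
            if not (reachable(remaining) & forbidden_set):
                return remaining, set(removed)
    return internal_arcs, set(cross_arcs)

remaining, removed = maximal_secure_reconfiguration(internal, cross, forbidden)
print("removed arcs:", removed)   # {('Sales', 'Personnel')}: the circuitous route is closed
```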

In real life, the systems' connections are client-server or peer-to-peer.


1.7.1 Multilevel security

Multilevel security considers the assurance risk when composing multilevel secure systems evaluated under security evaluation criteria.

Note:

• Analyzing the security of interoperating and individually secure systems can be done in

polynomial time.

• Given a non-secure network configuration, re-configuring the connections in an optimal way (to minimize the impact on interoperability) is NP-hard.

MLS systems are assigned different levels of assurance based on evaluation criteria.

(worst) D < C1 < C2 < B1 < B2 < B3 < A1 (best)

Evaluated systems must meet minimum risk requirements.

System stores                  Minimum assurance
top secret + unclassified      B3
top secret + secret            B2
secret + unclassified          B1

