
Extract from the DSA, Article 27(2)-(3) (translated):

"The parameters shall include, at a minimum: (a) the criteria which are most significant in determining the information suggested to the recipient of the service; (b) the reasons for the relative importance of those parameters.

3. Where several options are available pursuant to paragraph 1 for recommender systems that determine the relative order of information presented to recipients of the service, providers of online platforms shall also make available a functionality that allows the recipient of the service to select and to modify at any time their preferred option. That functionality shall be directly and easily accessible from the specific section of the online platform's interface where the information is being prioritised."

Article 27 DSA: recommender systems. How does the recommendation system work? Online platforms must set out, in plain and intelligible language, the main parameters used in their recommender systems. You have the right to know how you get recommendations. Platforms must also give the recipients of the service a way to modify and influence those parameters.

You're watching YouTube and a small ad appears. Do you think this commercial was relevant to you? A prompt comes up and asks: was this piece of information relevant to you, yes or no? You're providing the system with feedback. And when you access YouTube's terms and conditions, you should be able to gain some knowledge of how they make those recommendations.
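As a rough illustration of that feedback loop, here is a minimal sketch in Python (the parameter names, weights and step size are all invented, not anything YouTube actually uses): a yes/no relevance answer nudges the weight of one recommender parameter, and an Article 27(3)-style function lets the recipient pick a preferred option.

```python
# Minimal sketch (all parameter names and weights are invented):
# explicit "was this relevant?" feedback adjusts recommender parameters.

weights = {"watch_history": 0.5, "location": 0.2, "trending": 0.3}

def record_feedback(parameter: str, relevant: bool, step: float = 0.05) -> None:
    """Move one parameter's weight up or down based on explicit feedback."""
    delta = step if relevant else -step
    weights[parameter] = max(0.0, weights[parameter] + delta)

def set_preferred_option(user_choice: dict) -> None:
    """Art. 27(3)-style functionality: the user selects/modifies their option."""
    weights.update(user_choice)

record_feedback("watch_history", relevant=False)  # "No, this wasn't relevant"
set_preferred_option({"trending": 0.0})           # user opts out of one signal
print(weights)
```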

There are additional requirements that apply to very large online platforms (VLOPs) and very large online search engines (VLOSEs).

RISK ASSESSMENTS

One of the most important rules is the risk assessment. Do you remember where we found risk assessments before? The AI Act ranks technologies on the basis of the kind of risk that they generate. Here, the DSA says that providers shall diligently identify, analyse and assess any systemic risks, and this must be done every year. The assessment must be specific to the service and proportionate to the systemic risks.

If you run a very big platform, you are supposed to make an assessment of the type of risk that you are generating. These assessments must be done on an annual basis.

Why doesn't this requirement apply to smaller entities/corporations/platforms? They would have more difficulty meeting the costs and finding the human resources to do it. To be sure, smaller platforms may run very risky types of business, but if you put the same kind of obligations on everybody's shoulders, you are actually deterring the expansion of the market: you will have fewer players, because only a few of them would be able to meet the costs. Younger companies may also have more difficulty assessing what kind of risk they are generating.

It's the same logic as the Good Samaritan paradox: I want to make this place safe, but I don't want to deter people from taking action, so I will try both to preserve the market and to encourage platforms to behave in a safe way.

ARTICLE TO READ

Violation of Article 8 of the European Convention on Human Rights: the SyRI case. The focus of this case is how the government processed people's data: the system was designed to detect potential fraud in social welfare programmes and to identify the individuals involved, and it processed a lot of personal information as well, some of it very sensitive.

Welfare schemes, tax benefits: things that have to do with public expenditure. And why did they claim that this was wrong? The legislation did not make it clear that the processing was lawful as far as the treatment of the data was concerned. At some point the court speaks about consent, just to rule out that consent was a reliable legal basis for it: the government can make an assessment of whether I'm eligible and then it can do background checks, so specific consent may not be needed.

What was the problem with SyRI? Why was the system considered potentially unlawful? I'm enjoying some benefits: I am not paying taxes, or I am being given benefits, and at some point they detect that I may not be eligible. One of the problems is that the system lacks transparency. It's a privacy concern: I am giving away my privacy in order to get a benefit. But the problem can't just be the fact that I am sharing my details with you, because you need to know something about me in order to give me some benefits; so it's about how my privacy is handled, not merely that data is shared.

Article 8 of the Convention speaks about privacy, about private life; that is the right at issue.

For how long can I be considered a potential criminal? Investigations can last for 2 years, and my information can be exploited for 20 months by several agencies: there are several state agencies that share information about me, and I'm not being notified. Does this affect my private life? Yes, it does: not because they have my information, but because I gave them my information in order to enjoy some benefits, and the problem is that they are using my information in a controversial way.

The Dutch court says this is illegal (there was a lack of transparency). Where was transparency lacking? The legislation itself was generic: it was not specific enough about how they would use your data and how they would process it. There needs to be a legal basis for using your data to detect whether you cheated. It was not clear which kinds of data were used, how, and what the purpose of the use was (there was no clear legal basis) -> the court found that the violation of privacy was excessive. The aim of preventing people from cheating was not, by itself, enough to justify the legislation.

What happens if everybody knows all the details of the algorithm? If everybody knows how the system works, they will be able to game the system so as to avoid being identified. So the government's argument was that the legislation could not be too specific, because being too specific meant helping people anticipate what they could and could not do, and how they could keep cheating without the government knowing they were cheating.

The court says that there needs to be the possibility for people to understand what the consequences of their actions are, and the government was not giving them this possibility. They started implementing this system and they would target specific strata of the population.

The problem is that the algorithm identified fragile people as tax or welfare fraudsters and took away their tax benefits or unemployment benefits. So people fell under the poverty line, and some even lost their children.

Was the algorithm based on an AI tool? It wasn't: there was no deep learning, it was not really an artificial intelligence system. It generated connections: if you behave in a certain way, and if you belong to a certain social stratum, then it's likely that you are committing fraud. It generated a sort of social scoring system: it ranked people according to their social and economic status. The risk profiles that the system generated were dangerously close to social scoring.
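To see why "connections, not deep learning" still drifts toward social scoring, here is a minimal sketch in Python (the attributes, weights and threshold are entirely invented, not the actual SyRI model): there is no learning, just hand-picked socio-economic attributes combined into a score that flags people.

```python
# Minimal sketch (hypothetical rules and weights, NOT the real SyRI system):
# a rule-based risk indicator that ranks people by socio-economic attributes.

RULES = [
    # (attribute, value that raises the score, weight) -- all invented
    ("receives_housing_benefit", True, 2.0),
    ("neighbourhood_income_band", "low", 3.0),
    ("irregular_employment_history", True, 1.5),
]

def risk_score(person: dict) -> float:
    """Sum the weights of every rule the person matches -- no learning."""
    return sum(w for attr, val, w in RULES if person.get(attr) == val)

def flag_for_investigation(person: dict, threshold: float = 4.0) -> bool:
    """People above the threshold are flagged as potential fraud risks."""
    return risk_score(person) >= threshold

resident = {"receives_housing_benefit": True,
            "neighbourhood_income_band": "low",
            "irregular_employment_history": False}
print(risk_score(resident), flag_for_investigation(resident))  # 5.0 True
```

Because the score is built directly out of social and economic status, the "risk profile" is in effect a ranking of social groups, which is the court's worry.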

Social scoring also exists in Italy, in connection with the download of apps that monitor users' behaviour (e.g. if you use public transport frequently, you get free tickets, because the goal is to reduce car use in cities). According to some, this is possible only if it is designed as an optional mechanism; others argue that it cannot be lawful, because it is a system that nudges people into downloading the apps precisely in order to obtain the free tickets.

One of the main problems is the rational connection between what you are doing and the reward you are getting. There needs to be some connection between getting a free ticket for something and behaving in a certain way.

The Chinese system is not simply opaque, but multidimensional. It really depends on what the purpose of having such a scheme is.

Another point has to do with bias. One of the additional problems is that these types of scoring systems are very selective: they generate an idea of the "good citizen". You have problems of economic and social bias, and you have problems with the compulsory component.

When they implemented SyRI, they decided to focus on people with a specific social status and to make them behave in a certain way. Digital technologies have this capacity to nudge people into doing something.

There is a mechanism that identifies potential threats and fraud: the evidence should have been disclosed, but those affected were unable to get the evidence that would have helped them. These mechanisms sometimes run the risk of shifting the burden of proof. Software specifics are often protected by intellectual property clauses, and when a social scoring system is in place, it's usually the case that those who are affected don't really have the resources to challenge the system: if you don't show me the specifics of the algorithm, I can't win. The right to a fair trial overrides intellectual property issues -> so if you implement artificial intelligence systems, you need to make sure that they comply with fair trial requirements, otherwise you can't use them.

13.05

EDPB Pay or Consent

Consent and behavioural advertising: in order to do behavioural advertising, the platform needs to have data about us. So when FB said "you can either receive advertisements or not", they were not really saying "I'm not going to gather your data"; they were just saying "I'm not going to send you advertisements".

Behavioural advertising involves a two-step process (see the sketch after this list):

1. I learn about you, so I ask your consent to track what you are doing on the web (what you are looking for, for how long, what your preferences are), in order to profile people and consequently advertise what FB thinks we will like. I need those data points in order to profile you; only then can I really send you targeted advertisements.

2. FB said "I'm not going to send you any advertisements", so FB never gave you a chance to decide whether you would like them not to gather your data and process it; it simply said "I'm not going to show you something that you may want to purchase".
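A minimal sketch in Python (all function names and data are invented) of the structure just described: the tracking/profiling pipeline and the ad-delivery step are separable, so an "ads-free" toggle that only switches off delivery leaves the data gathering untouched, which is exactly the EDPB's point.

```python
# Minimal sketch (all names invented): collection/profiling is one pipeline,
# ad delivery another. Disabling only deliver_ads() does not stop tracking.

from collections import Counter

def track(events: list[dict]) -> Counter:
    """Step 1a: collect browsing events (what you look for, preferences)."""
    return Counter(e["topic"] for e in events)

def profile(interests: Counter) -> list[str]:
    """Step 1b: reduce the raw data to an advertising profile."""
    return [topic for topic, _ in interests.most_common(3)]

def deliver_ads(user_profile: list[str]) -> list[str]:
    """Step 2: the only step the 'no ads' option actually switches off."""
    return [f"ad about {topic}" for topic in user_profile]

events = [{"topic": "running shoes"}, {"topic": "running shoes"},
          {"topic": "holidays"}]
user_profile = profile(track(events))  # happens regardless of the toggle
ads_enabled = False                    # the "or not" option
print(deliver_ads(user_profile) if ads_enabled else "no ads shown")
```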

Marketing is developed around our specific persona: FB shows us something that it thinks will match our persona.

The authority said that the way they do this is not always lawful, because it tends to present a radical alternative (yes or no) and it is not proportionate.

The opinion's suggestion was to provide users with a third alternative.

PROBLEMS of not having a third option

Radical alternative (either/or, "aut-aut"): it is unlawful (the lack of a third alternative is unlawful), because the consent isn't freely given, but also because you are nudging people into sharing data if they don't want to pay. When they think of a third option, they are thinking of an option through which you still share your data, but in a more proportionate way. FB was asking people to relinquish
