Fundamentals of IT Law: AI Liability and Responsible Entities

Slides from the Università Politecnica delle Marche course on Fundamentals of IT Law. The PDF explores legal liability for Artificial Intelligence, discussing the 'black box' problem and the challenges posed by the AI system distribution chain, and identifying back-end and front-end operators and their responsibilities. This university material is useful for law students.

Fundamentals of IT Law
Prof. Roberto Ruoppo
Università Politecnica delle Marche, Facoltà di Economia "Giorgio Fuà"
www.univpm.it
Artificial Intelligence Liability
Why is it important to analyze this issue?
Which entity is liable for a harmful effect deriving from a specific output produced by an AI system?
Would it be the system itself? The provider? Or the deployer?
Is there an autonomous personality for AI systems? A positive answer would be linked to the degree of autonomy with which these systems are able to operate, as well as to their adaptiveness.

Which entity is liable for a harmful effect deriving from a specific output produced by an AI system?
Following the majority approach, an autonomous personality cannot be attributed to AI systems, since this category applies only to natural or juridical persons.
EU institutions have regulated this topic as well:
- in order to establish a complete legal framework;
- to achieve predictability, avoiding the differences and uncertainties that can arise from diverging national legal rules;
- to improve European citizens' confidence in services and products using AI, fostering economic development and investment in the AI field;
- to strike a balance between consumers' needs (monetary compensation in case of a violation) and enterprises' needs, in particular those of small and medium-sized enterprises (being able to know the requirements to be complied with in Europe).
Which aims are pursued through AI liability regulation?
- Deterrent effect: enterprises have stronger reasons to comply with the lawfulness requirements established for AI systems, since their violation could give rise to a presumption of fault;
- Preventive effect: avoiding violations so as not to incur compensation duties.
Which are the most controversial aspects of AI system liability?
Given the structure of a tort (behavior, causal link, subjective requirement, damage), which obstacles does the damaged party face when bringing a claim before a judge?
It is not always easy for the damaged user (when a violation of a fundamental right occurs, such as life, physical integrity or patrimonial rights) to identify the cause of the damage: this is the so-called black box effect, stemming from the difficulty of knowing the inputs used to develop the system.
The most important challenges for a uniform set of rules on AI liability:
- It is difficult to allocate liability along the distribution chain of an AI system, since several operators are involved in its deployment: the subject creating the system, the subject developing the algorithm, and the subject benefiting from its application;
- What is the problem from a procedural point of view?
In the distribution chain of AI systems there are often back-end and front-end operators:
i) back-end operator: the natural or legal person who defines the features of the technology, provides data and essential support services, and exercises a degree of control over the risk connected with the operation of the AI system;
ii) front-end operator: the natural or legal person who exercises a degree of control over a risk connected with the functioning of the AI system and benefits from its operation.

AI Liability Regime Introduction

Which way to introduce an AI liability regime?
A dedicated legal rule (a Regulation or a Directive) is expected soon, complemented by the Directive on liability for defective products (Dir. 85/374/EEC).
These rules will represent the pillars of a common liability framework for AI systems, requiring coordination and alignment.
An amendment of the 1985 Directive is needed: AI systems must be qualified as products.

European Parliament Resolution on AI Civil Liability

European Parliament Resolution of 20 October 2020 with recommendations on a civil liability regime for artificial intelligence (regulation proposal):
- No autonomous legal personality for AI systems;
- Liability is addressed to those entitled to exercise control over AI systems;
- Different liability regimes are provided for different AI systems.

Resolution Details: Insurance, Limitation Periods, and Damage Entity

- Insurance duties;
- Limitation periods: civil liability claims concerning harm to life, health or physical integrity are subject to a special limitation period of 30 years from the date on which the harm occurred; 10 years in the case of harm to property or immaterial harm resulting in a verifiable economic loss;
- Amount of damages: EUR 2 million in the event of death or harm to health; EUR 1 million in the event of significant immaterial harm or damage caused to property.

Resolution Details: Liability Types for AI Systems

- For high-risk systems, strict liability is provided, meaning that it is not necessary to prove the fault of the provider or of the deployer;
- For non-high-risk systems, fault-based liability is provided.

European Commission Directive Proposal on AI Liability

European Commission Directive proposal of 28 September 2022:
- No strict liability is provided for high-risk AI systems;
- Only certain presumptions are introduced, concerning the causal link and the fault requirement:
- Jurisdictional authorities may order providers and developers to disclose evidence; when this order is not complied with, a presumption applies;
- A rebuttable presumption is provided for the causal link between the behavior and the output.
