AI; Personality; Liability; Sustainability.
This paper seeks to reconstruct the regulatory framework on artificial intelligence and data processing, examining how European sources qualify personality rights, shape civil liability profiles, and frame digital transformation in relation to sustainability. It highlights the underlying issues and the limits of technology with regard to personality rights.
1. Introduction
2. Regulatory Framework
2.1. General Data Protection Regulation (GDPR)
2.2. Data Governance Act (DGA)
2.3. Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law
2.4. AI Act
2.5. European Commission Guidelines on Prohibited AI Practices under the AI Act
2.6. European Work Programme for 2025
3. The Interaction between AI and Individual Rights
4. The Prohibition of Certain AI Practices and Incompatibility with Union Values
5. AI Distortions and New Profiles of Civil Liability
6. Digital Transformation and Sustainability: The Role of AI
7. Technological Limits and the Mandatory Mathematical Value. The Dictatorship of Calculation and Hallucinations
8. Recent Italian legislation on AI
9. Concluding Remarks
Bibliography
1. Introduction
In the era of artificial intelligence, modern users, immersed in an “ecosystem of algorithms and customizations,” often passively accept the conveniences offered by AI—which collects and analyses data on their behalf, shaping behaviours and decisions along predefined paths that appear free (but are not)—without questioning the risks of dependency and manipulation [1].
The only antidote to prevent technology from becoming an invisible tyrant, turning a promise of convenience into a trap for individual freedom, is the critical awareness that stems from knowledge, study, and diligent inquiry.
The Greeks taught an essential truth: whether it be democracy or tyranny, its author assumes ownership of the choice, signs its deeds, subscribes to its modalities, and openly supports its reasons, thereby earning the honours of history.
Cicero, in the famous trial against Verres, recounted that the tyrant Dionysius I of Syracuse had designated a latomía as a prison for political prisoners: this cave, similar to a long and high corridor in the rock adjacent to the Theatre, enabled the tyrant to eavesdrop on the seditious speeches of the prisoners, thanks to the excellent acoustics of its walls.
Not coincidentally, Caravaggio—during his reparative journey in Sicily—gave it the name “Ear of Dionysius” because its outer arch resembles an auricle; he is believed to have placed the cave in the background of his painting “The Burial of Saint Lucy”.
Thus, oscillating between the metaphor of historical reality—represented by the Ear of Dionysius—and the metaphor of philosophical reality—bequeathed to civilization by Plato’s Cave—literature skillfully captures the patterns of tyranny and offers us, as readers and citizens of the global polis, an interpretative grid for contemporary reality that is readily comparable to the political and psychological condition experienced by humanity centuries ago. This transports us, in other words, from Syracuse and Athens of the 5th century BC to our 20th century, and from there to the contemporary world.
In an abrupt yet effective temporal leap, we arrive at 1932, at Aldous Huxley’s prophetic work “Brave New World”, which foresaw humanity’s ruin through entertainment transformed into a tool of social control more effective and efficient than coercion and violence. In this dystopian novel, the author describes a new social project in which it is deemed compliant with social norms to be highly sociable, to care for one’s body, and to be good consumers of products. On the contrary, it is unacceptable and absolutely dangerous, both for oneself and for others, to spend time in solitude or not to be interconnected with others, in the claustrophobic physical and logistical proximity typical of the global village. Emotional relationships are disapproved of, and are replaced by seemingly friendly ties that guarantee perfect and strict standardization.
To that single human being (“the individual” in the manner of Kierkegaard) who opts out of the village, the author assigns the fate of a disconnected, asocial, maladjusted existence—potentially dangerous or at the very least strange, alien, estranged.
In this scenario, culture—which has or should have the task of leading people “out of Plato’s Cave, not as a group but one by one”—ends up sacrificed, abandoned to the absence of discernment, relegated to the background as an unsettling and unwelcome guest.
Better, it would seem, a reality where everyone is happy in the manner of puppets, dragged like automatons towards some unspecified place where they are recruited to carry out unclear tasks in an enterprise with obscure purposes, where that vocation for “poiein” in the Aristotelian sense—to act, to think, and to create poetry, which humanizes—is sacrificed, and where cheerfulness is artificial, forced, and at times disturbing.
One must invoke Kafka and the great theatre of Oklahoma to read intellectual degradation as the antechamber of slavery: a modern slavery in which new virtual communities display tribal and violent attitudes, standardize behaviours, and cloud minds with the promise of a connected and thus happy life.
Far from the dynamics of an authentic democratic Polis, the global village mirrors and synthesizes a digital tyranny, with providers (Facebook, Google, etc.) who, like new sovereigns, authorize, determine, or deny the existence of others, thereby determining or permitting the possibility of a digital identity: indeed, contemporary existence increasingly coincides with being online, and this presence online at times resembles the condition of perpetual subjugation to a tyrant who owes no explanations and shares no strategies, but fosters prejudice or formalizes categorizations based on error.
After a thorough and analytical reconstruction of the relevant legislation, this paper aims to address the impact of AI on human rights and sustainability, including its implications for civil liability. This analysis also takes into account the recent approval (on September 17, 2025) by the Italian Parliament of a law that supplements the rules of the AI Act, intervening "ad adiuvandum" in sectors the government deems particularly sensitive, such as national governance, healthcare, justice, and employment.
The measure (Law No. 132 of 2025) will come into force on October 10, 2025. From that date, the government will have 12 months to adopt legislative decrees aligning national law with the AI Act, including its criminal law aspects, and 24 months from the entry into force of each delegated decree to adopt any necessary corrective amendments.
Certainly, the introduction of this new regulation will pave the way for deeper considerations regarding whether AI will elevate the relative value of exclusively "human" skills or render them obsolete, with the understanding that greater control must be exercised over technological infrastructure and critical data.
Europe, grappling with the widespread diffusion of artificial intelligence and the need to comply with the regulatory constraints imposed by the AI Act, is building—not always without difficulty—its digital sovereignty. The hope is that this effort will provide a secure and reliable foundation to succeed in the present and future challenges of a society and economy increasingly dependent on AI capabilities.
2. Regulatory Framework
The inseparable link between artificial intelligence and the related data has led the European legislator to regard data protection not only as a safeguard for citizens, but also as an essential precondition for the ethical and responsible development of intelligent technologies [2].
The close connection between data processing and AI highlights how the European regulatory framework has anticipated a rapid technological evolution, laying the groundwork for a future in which innovation and the guarantees of privacy rights can progress in parallel [3].
The inherent absence of national boundaries in digital relations, and in particular in the use of artificial intelligence systems, therefore requires a supranational approach to regulation. This is necessary in order to establish uniform rules and requirements that facilitate regulatory compliance by companies operating in the sector, engaged in the development and deployment of AI systems, and to set minimum standards of protection for users' rights, regardless of their nationality or of the place of establishment of the supplier or of the enterprise using such technological systems.
The significance of these issues prompted the European Parliament and the Council of the Union to address the matter of “data processing” as early as 2016, with particular attention to its regulation.
2.1. General Data Protection Regulation (GDPR)
This led to the General Data Protection Regulation (GDPR), applicable since May 2018, which represents a pioneering normative reference for the regulation of artificial intelligence. The GDPR was conceived with the aim of harmonizing privacy laws within the European Union, ensuring that citizens have greater control over their personal data and imposing strict obligations on organizations in terms of transparency, security, and accountability [4].
The GDPR arose from specific needs, as indicated by the European Commission itself, including legal certainty, harmonization, and simplification of the rules concerning the transfer of personal data from the EU to other parts of the world.
It draws on the recommendations made by the Article 29 Working Party (WP29) in October 2017 regarding new technologies—from Artificial Intelligence to Machine Learning to the Internet of Things—within a framework that also includes privacy by design, privacy by default, and pseudonymization.
These topics demanded urgent responses to the challenges posed by technological developments, which have significantly affected key aspects of the GDPR precisely in relation to technological innovation and new models of economic growth, while taking into account EU citizens' increasingly felt need for personal data protection.
The GDPR defines “personal data” as any “information relating to an identified or identifiable natural person (‘data subject’).” A natural person is deemed “identifiable” if they can be identified, directly or indirectly, by reference to an identifier such as a name, an identification number, location data, an online identifier, or to one or more factors specific to their physical, physiological, genetic, mental, economic, cultural or social identity.
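Operationally, the breadth of this definition means that screening for personal data must look beyond names and other direct identifiers. The following minimal sketch (with entirely hypothetical field names) illustrates how a record may qualify as personal data through indirect identifiers alone.

```python
# Illustrative only: field names are hypothetical, and real identifiability
# assessments must also consider combinations of seemingly neutral attributes.
DIRECT_IDENTIFIERS = {"name", "id_number", "email"}
INDIRECT_IDENTIFIERS = {"location", "online_identifier", "genetic_profile"}

def contains_personal_data(record: dict) -> bool:
    """Flag records that identify a natural person directly or indirectly."""
    return bool(set(record) & (DIRECT_IDENTIFIERS | INDIRECT_IDENTIFIERS))

print(contains_personal_data({"location": "41.90,12.49", "age": 34}))   # True
print(contains_personal_data({"age": 34, "favourite_colour": "blue"}))  # False (in isolation)
```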
Alongside data subjects stand those required to comply with privacy legislation—companies, public entities, and individuals that must access, process, store, manage or transfer the personal data of EU citizens—who must therefore apply the rules set out in the GDPR.
Data may not be collected for just any initiative, but may only be collected and used for specific purposes explicitly stated in the consent, which must be based on clear and easily understandable information (whether written or oral) provided by the data controller. This is to ensure that the data subject has a comprehensive understanding of the purposes and methods of use, is able to give informed consent, and can exercise their rights in relation to the designated controller.
The key figures under the GDPR concerning personal data are: the controller; the processor (an optional figure), to be appointed through a contract (or other legal act compliant with national law) governing the duration of the processing, its nature and purpose, the type of personal data and categories of data subjects, and the obligations and rights of the controller; the sub-processor, who may be appointed by the processor with the controller’s prior written authorization; the authorized person, i.e. the individual authorized to process personal data; and the Data Protection Officer (DPO), designated by the controller or the processor to oversee data protection compliance on their behalf.
One of the cornerstones of the GDPR is the principle of accountability, which binds both the controller and the processor (including the chain of contractors and sub-contractors processing personal data on its behalf) to responsibility before the Supervisory Authority and the ordinary courts.
This requires the adoption of proactive measures aimed at assessing the risk impact of data loss or unauthorized access, keeping a record of processing activities, preventing personal data breaches, appointing a data processor, and demonstrating the adequacy of the security measures implemented.
The regulation introduces new rights such as the right to erasure (right to be forgotten), the right to restriction of processing, and the right to data portability.
In terms of sanctions, violations entail pecuniary penalties in addition to compensation obligations: for less serious infringements, the higher of EUR 10 million or 2% of the total worldwide annual turnover of the preceding financial year; for more serious infringements, the higher of EUR 20 million or 4% of that turnover.
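Purely by way of illustration, this two-tier cap can be expressed as a simple computation; the thresholds below follow the figures just cited, while the function name and the example turnover are invented for the sketch.

```python
def gdpr_fine_cap(annual_worldwide_turnover_eur: float, serious: bool) -> float:
    """Maximum administrative fine under the GDPR's two-tier regime.

    Less serious infringements: the higher of EUR 10 million or 2% of turnover.
    More serious infringements: the higher of EUR 20 million or 4% of turnover.
    """
    if serious:
        return max(20_000_000, 0.04 * annual_worldwide_turnover_eur)
    return max(10_000_000, 0.02 * annual_worldwide_turnover_eur)

# A company with EUR 2 billion in turnover: 4% (EUR 80 million) exceeds EUR 20 million.
print(gdpr_fine_cap(2_000_000_000, serious=True))  # 80000000.0
```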
There remains concern, however, regarding the areas of discretion left to individual Member States to regulate certain aspects not falling within the EU’s competence under the principle of conferral.
This situation may give rise to conflicts between different national supervisory authorities responsible for interpreting and concretely applying the GDPR provisions at national level.
In May 2025, a proposal for reform of the GDPR was put forward. Among the main changes proposed are the introduction of new definitions, such as that of “small mid-cap companies,” and the simplification of administrative obligations, including the abolition of the requirement to maintain records of processing activities for companies with fewer than 750 employees, except where there are high risks for data subjects’ rights. Furthermore, the proposal envisages a broader use of codes of conduct and certifications for SMEs and small mid-cap companies, with the aim of improving compliance with data protection legislation.
2.2. Data Governance Act (DGA)
Regulation (EU) 2022/868, known as the Data Governance Act (DGA), entered into force on 23 June 2022 and has been fully applicable since 24 September 2023.
It forms part of the European data strategy, which aims to create a single market for data. The DGA responds to the need for a coordinated approach to data governance in order to prevent fragmented national regulations [5].
Specifically, the DGA seeks to promote data sharing and to enhance trust in data sharing, to strengthen mechanisms to increase data availability, and to overcome technical barriers to data reuse. It supports the creation and development of European Common Data Spaces in strategic sectors, involving both private and public actors, in areas such as health, environment, energy, agriculture, mobility, finance, manufacturing, public administration, and skills [6].
The regulation transparently manages the risks associated with sensitive research data while improving access to such data for the benefit of society. Through its detailed provisions on secure data intermediation, metadata transparency, and notifications to stakeholders, the DGA ensures that access to sensitive research data is as open as possible, while complying with legal, ethical, and public interest constraints.
The DGA establishes a structured approach to risk management in improving access to sensitive data, including research data, personal data, and other categories of data derived from public funding. The DGA introduces data intermediation services that must comply with strict governance and transparency requirements, ensuring that sensitive data is handled securely and in line with legal rights and ethical principles. It promotes technical and organizational safeguards for the reuse of sensitive data, including personal data and data protected by intellectual property rights, such as the anonymization or pseudonymization of data, controlled access through secure data spaces, and aggregated or de-identified datasets for forms of limited access.
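As a minimal sketch of one such safeguard, the snippet below pseudonymizes a direct identifier with a keyed hash; the record fields and the key are hypothetical, and a real deployment would rely on vetted tooling and proper key management by the data holder.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical; keep in a key management system

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with an HMAC-SHA256 digest.

    Unlike plain hashing, the keyed variant prevents re-identification by
    anyone who does not hold the key, which remains with the data holder.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "diagnosis": "J45"}      # hypothetical research record
shared = {"subject_id": pseudonymize(record["name"]),  # identifier removed before reuse
          "diagnosis": record["diagnosis"]}
print(shared)
```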
The DGA establishes a regulatory framework for “data altruism”, allowing individuals and organizations to voluntarily share their data for the public good, with explicit and informed consent, ensuring the protection of data subjects’ rights and transparent risk management. Data intermediaries and providers are required to meet certification and monitoring standards, ensuring compliance with the GDPR, cybersecurity requirements, and other relevant regulatory frameworks.
The Data Act governs access to digital data and the modalities for its sharing, including in the public interest, and provides the consumer-user with multilayered protection that is preparatory to data circulation in compliance with individual rights [7].
As has been observed, the GDPR has the merit of having highlighted that data protection does not constitute an absolute value, but must be considered “in light of its social function and balanced against other fundamental rights” [8], including, in particular, the freedom to conduct a business.
The reference to the social function sheds light on the general interest, positioning the Data Governance Act as an instrument designed to promote the altruistic use of data, addressing a social need that emerged strongly during the pandemic experience. The latter highlighted the necessity of data reuse in the general interest, for example, to promote scientific research and support health policies [9].
More generally, through this regulation, the reuse of protected data held by public bodies and data intermediation services are also conceived by the European legislator as serving the pursuit of social objectives, through the establishment of favourable rules.
On 7 October 2024, the Rules for adapting national legislation to the provisions of Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 were published.
2.3. Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law
The Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law was adopted by the Council of Europe on 17 May 2024 in Strasbourg.
The participation of non-state actors and of non-member states of the Council of Europe contributes to affirming its universal vocation and to making it the first binding international legal instrument aimed at introducing requirements and obligations for the development and use of artificial intelligence systems, representing the first attempt at global governance of the phenomenon, in line with the aspirations of the United Nations.
The Convention stands out as the first international treaty that legally compels states to prevent and mitigate risks related to the use of artificial intelligence in people’s lives.
Following a long preparatory process [10], the Convention consists substantively of a preamble and eight chapters, comprising a total of 36 articles.
The Preamble of the Convention makes clear from the outset that its primary objective is to ensure that activities connected to the AI systems’ life cycle fully respect the principles of democracy, human rights, and the rule of law. The members of the Council of Europe declare their awareness that AI systems have the potential to promote human prosperity, social well-being, sustainable development, the empowerment of all women, and gender equality, while at the same time fostering progress and innovation. In this context, it was recognized that it was necessary to establish a legal framework applicable at global level, capable of defining rules to regulate activities throughout the life cycle of AI systems, with the aim of safeguarding values and harnessing the benefits of AI to promote responsible innovation [11].
The Preamble also highlights the possibility of the Convention being supplemented by additional protocols establishing specific objectives to address emerging challenges, encouraging the assessment of risks associated with such technologies, including those related to human health, the environment, and socio-economic aspects such as labor and employment.
Against this legal backdrop, the Convention lays down a series of rules designed to regulate AI use with a view to enhancing the protection of human rights. In this regard, the Convention defines artificial intelligence as a machine-based system that, for explicit or implicit objectives, generates from the inputs it receives outputs such as predictions, content, recommendations or decisions influencing physical or virtual environments.
From a teleological perspective, the Convention aims to ensure that States Parties adopt measures for the continuous assessment and management of risks associated with AI technologies through a preventive approach (Article 1 of the Convention).
Article 3 defines the scope of application, providing that the Convention shall primarily apply to activities carried out by public authorities or by private entities acting on behalf of the State, while leaving room for regulating the activities of independent private actors in a manner consistent with the objectives of the Convention. The provision also establishes certain exceptions, such as the explicit exclusion of matters related to national defence: this means that activities connected with the development and use of AI systems for military or national security purposes fall outside the scope of the Convention, leaving such responsibilities to individual States.
Furthermore, Article 3 excludes research and development activities of AI systems that are still experimental (not yet in use), unless such activities directly and negatively impact fundamental rights or democracy. In this way, the Convention seeks to balance the need to protect fundamental rights with the requirements of national security and state sovereignty.
At a general level, the Convention creates legal obligations requiring all States Parties to take measures to ensure that AI activities are consistent with their obligations to protect human rights (Article 4).
In particular, Article 5 provides that each Party shall adopt measures to ensure that AI systems do not undermine the integrity and effectiveness of democratic institutions and processes, by establishing (or maintaining) mechanisms such as fair participation in public debate and the ability of individuals to form opinions freely.
Additionally, the Convention addresses aspects of AI linked to transparency and privacy. Article 8 is of particular significance as it introduces the principle of transparency and oversight [12]. It provides that automated decisions based on AI systems must be clear and subject to appropriate oversight. States are required to ensure that citizens have access to information about the functioning of such systems, enabling them to challenge decisions where necessary. The use of AI must support, and not compromise, democratic processes: accordingly, AI applications must not influence or alter the will of voters, restrict freedom of expression, or reduce political participation.
Automated decisions, especially those with political consequences, must therefore be subject to public oversight and transparency. AI technologies should promote informed participation and civic responsibility, avoid misinformation, and ensure fair access to information.
Pursuant to Article 14 of the Convention, each State Party shall adopt or maintain measures enabling individuals to assert breaches of the Convention. This means that AI systems must be designed in a way that ensures a clear understanding of the algorithmic decision-making process so that individuals can challenge decisions affecting their rights before competent authorities (Article 14(2)(c)).
It is evident that the Convention represents significant progress towards a responsible and inclusive regulation of artificial intelligence, placing the protection of human rights, democracy, and the rule of law at the centre of the debate [13].
One of the main innovations of the Convention is its ability to set minimum binding standards for Member States in an attempt to prevent the misuse of AI. This is particularly relevant in the context of rapid technological development, where AI developers and users must operate within a clear and transparent legal framework without violating national or international regulations. AI technologies, particularly algorithms processing large quantities of sensitive data, must be managed with care and prudence in order to protect individuals’ fundamental rights.
In conclusion, the Convention establishes a broad range of obligations on States Parties aimed at regulating the development of new technologies in accordance with international human rights law.
These obligations, although partly overlapping with those set out in Regulation (EU) 2024/1689, have a much broader scope in the context of the Convention, as they potentially bind not only EU Member States but also all Council of Europe member states and non-member states that consent to be bound by it. From this perspective, the Convention appears better suited to addressing the “global” challenge posed by technological progress, particularly that of balancing innovation in new technologies with the protection of human rights, democracy, and the rule of law, in an effort to ensure that technological innovation is not only compatible with but instrumental to human development.
2.4. AI Act
Within this context lies Regulation (EU) 2024/1689 of 13 June 2024 (AI Act)—comprising 13 chapters—which lays down the rules for the development and placing on the market (or, more precisely, putting into service) of artificial intelligence tools. This legislative text, which constitutes a binding legal act for the Member States, provides a legal definition of artificial intelligence while clarifying the related legal elements upon which the placing into service of AI models must be based [14]. As the very first legal framework on AI, the AI Act addresses the risks associated with AI and positions Europe to play a leading role at the global level [15]. Its objective is to promote trustworthy AI in Europe through a clear set of risk-based rules for AI developers and deployers regarding its specific uses [16].
This regulation forms part of a broader package of policy measures supporting the development of trustworthy AI, which also includes the AI innovation package, the launch of AI factories, and the coordinated plan on AI [17].
The AI Act sets out a framework of prohibitions, obligations, and requirements concerning AI systems as defined therein, including a sanctions and institutional apparatus. Its legal basis is identified in Articles 16 and 114 of the Treaty on the Functioning of the European Union (TFEU).
The first Recital of the AI Act states that its purpose is to improve the functioning of the internal market by establishing a uniform legal framework, in particular as regards the development, placing on the market, putting into service, and use of artificial intelligence (AI) systems in the Union, in accordance with the values of the Union. It further specifies that the AI Act aims to “promote the uptake of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety and fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection, protect against harmful effects of AI systems in the Union, and promote innovation”.
The first Recital concludes with a prohibition directed at Member States, stating that the Regulation “ensures the free cross-border movement of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, placing on the market and use of AI systems, except where expressly authorized by [this] Regulation”.
To facilitate the transition to the new legal framework, the Commission launched the AI Pact, a voluntary initiative aimed at supporting future implementation, engaging with stakeholders, and inviting AI providers and operators from Europe and beyond to anticipate and comply with the core obligations of the AI Act.
In July 2024, just days after the publication of the AI Act in the Official Journal of the European Union, the Italian Strategy for Artificial Intelligence 2024–2026 was also made available [18].
This document, prepared by a Committee of Experts appointed by the Government, is intended to assist the latter in defining national legislation and policies on AI. After analyzing the global context and Italy’s positioning, it identifies strategic actions grouped into four macro-areas: Research, Public Administration, Enterprises, and Education. The strategy also proposes a monitoring system for its implementation and an analysis of the regulatory context setting out the framework within which it will be deployed.
2.5. European Commission Guidelines on Prohibited AI Practices under the AI Act
On 4 February 2025, the European Commission adopted the content of draft Communication C(2025) 884 final, which sets out the European Commission’s Guidelines on prohibited artificial intelligence practices under Regulation (EU) 2024/1689 (AI Act or AIA or the Regulation).
The Guidelines, adopted pursuant to Article 96(1)(b) of the AI Act, provide interpretative guidance on the prohibited AI practices set out in Article 5 of the Regulation, complementing other European legislation, in particular Regulation (EU) 2016/679 (GDPR), anti-discrimination law, consumer protection law, and digital market law. The Guidelines clarify that compliance with the AI Act does not exempt operators from compliance with other applicable legal frameworks; the assessment of lawfulness must therefore always be systemic and multi-level.
In practical terms, the Guidelines aim to provide a detailed interpretation of the prohibited practices under Article 5 AI Act and of the legal elements required to establish them, with the purpose of supporting operators (providers, deployers, supervisory authorities) in identifying unlawful practices, thereby avoiding risks stemming from potential terminological ambiguities and preventing or countering their occurrence.
Structurally, the document opens with a description of the regulatory background and objectives of the Guidelines, followed by a detailed illustration of the prohibited practices under Article 5 AI Act, an analysis of their legal basis, and their material and personal scope. It also addresses exclusions from the scope of the AI Act, such as national security, research activities, non-professional personal use, and open-source software [19].
In particular, the Guidelines examine the interaction between prohibitions and obligations for high-risk AI systems [20], supplementing the text with concrete examples and specific case studies.
The Guidelines clarify that the prohibitions apply not only to the placing on the market and putting into service of AI systems but also to their use, interpreted broadly: thus, any use, even after commercialization, may constitute a prohibited practice, regardless of any contractual limitations imposed by the provider. Liability extends to both providers and deployers, with responsibilities apportioned proportionally to their role and the degree of control exercised over the system.
The Guidelines take a systematic reading of the AI Act, which adopts a risk-based approach, distinguishing between: AI systems with unacceptable risk (prohibited); high-risk AI systems (regulated); limited-risk AI systems (transparency obligations); and minimal-risk AI systems (not regulated, except by voluntary codes of conduct).
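That four-tier logic can be rendered schematically as follows; the tier descriptions track the Regulation, while the triage function itself is a deliberately simplified illustration, not a compliance tool (the Annex III reference denotes the AI Act's list of high-risk use cases).

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5 practices)"
    HIGH = "regulated (conformity assessment, documentation, human oversight)"
    LIMITED = "transparency obligations (e.g., disclosing interaction with AI)"
    MINIMAL = "not regulated, except by voluntary codes of conduct"

def triage(prohibited_practice: bool, annex_iii_use_case: bool,
           interacts_with_humans: bool) -> RiskTier:
    """Simplified triage mirroring the AI Act's risk pyramid (illustrative only)."""
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if annex_iii_use_case:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage(False, True, True).value)  # regulated (conformity assessment, ...)
```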
Article 5 AIA—the central focus of the Guidelines—prohibits specific AI practices deemed incompatible with the fundamental values of the Union, albeit with various exceptions. These include: specific subliminal, manipulative or deceptive techniques; exploitation of vulnerabilities related to age, disability, or socio-economic situation, resulting in distorted and harmful effects; social scoring leading to unjustified unfavourable treatment; predictive risk assessments of criminal behaviour based solely on profiling or personality traits; indiscriminate collection of facial images for facial recognition databases; emotion recognition in the workplace or education; biometric categorization for inferring sensitive data; real-time remote biometric identification in public spaces by law enforcement authorities [21].
Specifically, with regard to subliminal techniques and deliberately manipulative or deceptive techniques (Article 5(1)(a) AIA), the Guidelines clarify that the prohibition applies to AI systems capable of materially distorting individual behaviour to the point of causing—or making it reasonably likely to cause—significant harm. The rationale of this provision is to protect individual autonomy and the capacity for free and informed decision-making: AI must not reduce individuals to mere objects of covert strategies, but must respect their dignity and freedom of choice.
The Guidelines define subliminal techniques as those operating below the threshold of awareness, bypassing an individual’s rational defences—such as subliminal visual, auditory, or tactile messages. Manipulative practices are understood as those exploiting cognitive biases or psychological vulnerabilities, even without specific intent to harm, including sensory manipulation practices.
Deceptive techniques are those presenting false or misleading information, thereby compromising decision-making autonomy [22].
The European legislator intended to provide enhanced protection to persons particularly exposed to the risk of manipulation and exploitation, such as minors, the elderly, persons with disabilities, or disadvantaged social groups. Regarding the exploitation of vulnerabilities (Article 5(1)(b) AIA), the Guidelines state that the prohibition applies to AI systems exploiting vulnerabilities linked to age, disability, or specific socio-economic situations, resulting in materially distorted behaviour and significant harm. Not all forms of vulnerability are relevant, but only those arising from structural and objective conditions. Examples include AI systems inducing children into dangerous behaviour through reward mechanisms, exploiting the cognitive abilities of the elderly, or pushing economically vulnerable individuals towards harmful financial choices.
Again, harm must be significant, and the assessment must be sensitive to context and the foreseeability of consequences.
Undoubtedly, the Guidelines represent a key tool for the implementation of the AI Act, both in terms of legal certainty and the effective protection of fundamental rights in the European digital ecosystem. The Commission repeatedly stresses the need for a systematic interpretation in coordination with other relevant European legislation and often invites case-by-case assessments based on objective and scientific criteria. The principle of proportionality appears to guide the interpretation of prohibitions, which must be applied in a way that avoids chilling effects on innovation while ensuring the effective protection of fundamental rights.
However, questions may arise as to the practical adequacy of the Guidelines: although they represent significant work in reconstructing the legal framework and case law, their drafting style appears primarily addressed to a specialised legal audience, given the technical legal language and frequent normative references. While the document demonstrates the European Union’s commitment to shaping an advanced regulatory framework capable of supporting competitiveness and integration of the single digital market, it may offer limited practical assistance to operators seeking to understand, from the outset, how to comply with the obligations arising under the AI Act.
In conclusion, the Guidelines serve not only as a tool for guidance for economic operators and national authorities but also as evidence of the Union’s ambition to establish a governance model that may serve as a reference for other legal systems, in line with its role as a global regulatory laboratory.
2.6. European Work Programme for 2025
On 12 February 2025, the European Commission announced the publication of its Work Programme for 2025.
In paragraph 4 of the Programme, it is noted that the Commission carefully examined all proposals not yet adopted by the European Parliament and the Council at the beginning of its mandate, assessing whether they should be maintained, amended, or withdrawn in light of the policy priorities announced for the new mandate and the prospects for future adoption.
In carrying out this assessment, the Commission added that it had carefully considered the views expressed by the European Parliament and the Council. As a result of this evaluation, the Commission intends to withdraw 37 proposals still pending agreement, listed in Annex IV of the Programme along with explanations for their withdrawal. These explanations, it specifies, have been provided to allow the European Parliament and the Council to express their views before the Commission takes its final decision on whether to proceed with the withdrawals. The remaining pending proposals are listed in Annex III of the Programme.
Upon reviewing the annexes of the Programme, in particular Annex IV entitled Withdrawals, at point 29 one finds the mention of proposal COM(2017)10 final – 2017/0003 (COD) for a Regulation of the European Parliament and of the Council on the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications) [23].
This directive safeguards the confidentiality of communications, ensuring that electronic communications are private and protected from unauthorised interception; it governs the processing of personal data, setting rules for the collection, use, and storage of personal data generated during electronic communications, such as traffic data (calling and called numbers, call duration) and location data; it lays down rules for online advertising, regulating the use of cookies and other tracking technologies for advertising purposes, requiring user consent; it promotes the free movement of data, ensuring that the protection of personal data does not hinder the free circulation of data within the European Union, provided that data protection rules are respected.
The ePrivacy Regulation proposal aimed to replace Directive 2002/58/EC to update the legal framework in light of new technologies and the challenges of the digital single market. Its main objectives included:
simplifying and harmonizing rules, as the regulation aimed to create a single, harmonized legal framework across the EU, replacing the various national laws that transposed the directive; enhancing data protection, strengthening the protection of personal data in the field of electronic communications, particularly regarding communication confidentiality and online user privacy; promoting innovation, seeking to create a favourable environment for digital sector innovation while ensuring that new technologies are developed in compliance with fundamental individual rights [24].
The Regulation also accounted for new technologies and business models, such as over-the-top (OTT) communications and instant messaging services. In summary, the ePrivacy Regulation was an attempt to modernise and strengthen personal data protection in electronic communications, taking into account the challenges posed by new technologies and the digital single market.
The reason given for the intention to withdraw the ePrivacy Regulation proposal is that it appears outdated in light of recent legislative developments, both with regard to the technological context and the legal framework.
At point 32 of Annex IV, reference is made to the more recent proposal COM(2022)496 final – 2022/0303 (COD): Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). The reason cited for the intention to withdraw the AI Liability Directive proposal is: “No foreseeable agreement — the Commission will assess whether another proposal should be tabled or another approach should be chosen”.
Whereas the withdrawal announcement for the ePrivacy Regulation proposal is justified by considerations of obsolescence given the changed technological and legislative context, in the case of the much more recent AI Liability Directive proposal, the Commission appears convinced of the need to change the regulatory approach underlying that proposal. Indeed, the AI Liability Directive proposal had been published together with proposal COM(2022)495 for a new directive on defective products liability, intended to replace Directive 85/374/EEC, in response to the need to review that directive’s regime in light of technological developments, with particular reference to AI systems.
The latter proposal, in contrast, successfully completed the legislative process, resulting—after modifications and approval by the Parliament and Council—in the adoption of the so-called new Product Liability Directive (new PLD), which will be addressed further below.
3. The Interaction between AI and Individual Rights
However, the role of the GDPR is not limited to data protection; it constitutes a cornerstone for the entire system governing emerging technologies, whose functioning is based on the management and quality of information as a means to ensure the development of artificial intelligence that is sustainable and respectful of the fundamental rights enshrined in the various constitutional charters of the EU Member States.
While there is no doubt that artificial intelligence impacts all sectors of the economy and society, the effects it produces on individuals involved in its use—especially regarding the potential infringement of certain fundamental rights—remain largely unexplored.
One of the most problematic corollaries of the rise of the digital society, through digital intermediaries that deploy algorithms capable of constantly analyzing the human “presence” on the internet, has been the blurring of the distinction between the public and private spheres. This redefines spaces and individual relationships, expanding opportunities for expression and the development of individual personality, but ultimately creates an ambiguous space, suspended between the private dimension—where privacy protection is central—and the public significance of one’s “presence” online.
Drawing on the pages of Pietro Rescigno (1998) [25] on intermediate social formations, it has been suggested that the digital sphere has assumed the features of an intermediate organization between the individual and the State. This social dimension could complement and integrate the concept of human personality.
From this perspective, the internet appears as an infrastructure generating a data flow on which AI feeds. It enables broad participation, serves multiple educational, recreational, and entertainment functions, and can, in theory, be instrumental in fostering the development of human personality [26].
However, the non-human nature of AI—which operates through mathematical formulas capable of acquiring statistical data and processing it without understanding its significance—lacks the experiential element that enables one datum to be distinguished from another. Consequently, changing the dataset provided to the algorithm also changes the outcome.
For example, if an AI is trained to recognise images and identify those containing plant species, but the dataset is incorrect, the AI will obviously be unable to distinguish an orchid from a water lily.
Moreover, AI feeds on statistical data and therefore also absorbs biases, misinformation, and the conformism typical of the society in which it was developed.
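A toy experiment makes the point concrete: the same trivial classifier, given the same input, returns different answers depending solely on the labels in its dataset, with no understanding involved. Everything below (features, labels, sample) is invented for the illustration.

```python
# A 1-nearest-neighbour "classifier": its output is entirely a function of the
# labelled examples supplied, so corrupting the labels corrupts the outcome.
def predict(training_set, features):
    nearest = min(training_set,
                  key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], features)))
    return nearest[1]

# Hypothetical two-feature encodings of flower images.
clean = [((0.9, 0.1), "orchid"), ((0.2, 0.95), "water lily")]
corrupted = [((0.9, 0.1), "water lily"), ((0.2, 0.95), "orchid")]  # labels swapped

sample = (0.85, 0.15)  # plainly orchid-like
print(predict(clean, sample))      # orchid
print(predict(corrupted, sample))  # water lily: same input, different dataset
```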
Numerous distortions are attributable to the uncontrolled use of AI, mainly due to the technological features under examination, thereby compromising fundamental legal positions. As demonstrated by disputes both within and outside Europe, the use of AI carries the risk of violating a variety of rights and freedoms, such as freedom of expression, freedom of association, intellectual property, and the minimum safeguards required in administrative proceedings and criminal trials [27].
The weak binding force of certain supranational rules continues to generate unresolved issues, particularly in the field of individual rights, due to the lack of obligations requiring States to balance the benefits of AI technology use against the risks posed to users.
Credit for highlighting the absence of uniform prescriptions for AI system developers—and the associated significant risk to the exercise and enjoyment of fundamental rights, which are left without effective remedies against violations committed by either state authorities or private entities—belongs to the Advisory Body appointed by the UN Secretary-General, in its final report of September 2024 entitled Governing AI for Humanity [28].
As noted above, the adoption of the Framework Convention aims primarily at safeguarding users’ rights by extending existing human rights obligations to the field of artificial intelligence. The commitment of the contracting parties to ensure that the obligation to protect fundamental rights is fulfilled both by state bodies and by private entities (such as companies that develop or use AI technologies) gives further strength to the doctrinal view that has long advocated the horizontal application of human rights protection rules [29].
The protective measures for users’ rights are set out in Chapters 2 to 6 (Articles 4–22), partly reflecting the same requirements provided for by the EU Regulation on high-risk systems. Among the general principles guiding AI systems’ activities, particular importance is attached to respect for human dignity and individual autonomy (Article 7), the obligation of transparency and human oversight (Article 8), the principle of accountability (Article 9), the prohibition of discrimination (Article 10), and personal data protection (Article 11).
Regarding the relationship between the EU Regulation and the Council of Europe’s Framework Convention—specifically whether the Convention extends to all AI systems, and not only those classified as high-risk under the AI Act—the Convention’s rules will have an innovative effect. This consists not only in extending human rights protection obligations beyond the EU’s borders but also in enhancing the guarantees for user protection [30].
Turning to the legitimacy of certain AI models, such as facial recognition tools using biometric data, the European Court of Human Rights (ECtHR) has intervened to assess their compliance with the requirements of legality under the Convention [31]. The Court had already addressed the potential interference of biometric data processing with the right to respect for private life under Article 8 of the European Convention on Human Rights [32].
The case at hand is exemplary: it originated from an application by a Russian citizen who, after being arrested following an unlawful protest, complained of a violation of Article 8 of the Convention due to the authorities’ improper use of facial recognition systems capable of identifying people in public spaces using biometric data. The defence of the respondent State relied on the existence of a legal basis justifying the AI tool’s use and the legitimacy of the aim pursued—namely safeguarding public security [33].
On that occasion, the Strasbourg judges expressly stated that measures limiting the right to respect for private life, including personal data protection, must comply with a strict principle of legality in the context of AI system use. It is not sufficient, as was undisputed in that case, that there be a public interest reason justifying the tool’s use. The legal framework must prescribe the conditions, purposes, and requirements for data retention and use, as well as procedures for the immediate destruction of data after use.
Similar to the AI Act, the Court reaffirmed that the use of facial recognition biometric systems does not per se constitute unlawful interference with the right to respect for private life, but its legitimacy depends on certain safeguards, primarily the existence of a legal basis detailing the conditions of use and data destruction procedures—a condition deemed lacking in the case examined [34].
Article 5(1)(h) AI Act, although it lists biometric identification systems among prohibited practices, exceptionally allows their use where certain conditions are met—conditions that echo the legitimacy requirements for interferences with the Article 8 right under the Strasbourg Court’s case law.
It is clear that, in the coming years, interpreters will have to address new forms of unlawful use, as the era of artificial intelligence gives rise to deep fakes multiplying the opportunities for abusive use of individuals’ images, especially of well-known persons. The reproduction of a person’s distinctive features using AI represents the new frontier of right of publicity protection in the United States [35].
It is therefore necessary to review the subsystem of rules protecting notoriety, ensuring that it does not end up merely defending a fortress built on purely patrimonial logics.
Certainly, through the obligations set out in Articles 4 et seq., the Council of Europe seeks to ensure that AI systems are developed, trained, and used in a manner consistent with fundamental rights—as derived from common constitutional traditions, the European Convention on Human Rights, and the Charter of Fundamental Rights of the European Union—the protection of which is one of the defining features of the European legal tradition and the cultural identity of the entire continent [36].
4. The Prohibition of Certain AI Practices and Incompatibility with Union Values
As previously noted, the core subject of the Guidelines concerns the prohibition of specific AI practices deemed incompatible with the fundamental values of the Union. The analysis of Article 5 of the AI Act provided the opportunity to examine particular examples of practices which, if implemented within AI systems, violate fundamental values protected by the very architecture of AI systems—specifically the value of trustworthiness and, in parallel, the objective of developing human-centric technology.
The first prohibited AI practice under Article 5(1)(a) concerns the placing on the market, putting into service, or use of AI systems that employ subliminal techniques or other manipulative or deceptive techniques (with deceptive techniques also being manipulative) with the aim or effect of materially distorting the behaviour of a person or group of persons, impairing their ability to make an informed decision and inducing them to take a decision they would not otherwise have taken, in a way that causes significant harm to that person or to another person or group. Subliminal techniques act below the threshold of awareness and are inherently manipulative [37].
Recital 29 of Regulation 2024/1689 states: “AI-based manipulation techniques can be used to persuade people to adopt undesirable behaviour or to deceive them into making decisions in a way that subverts or undermines their autonomy, decision-making processes and free choice. The placing on the market, putting into service or use of certain AI systems with the aim or effect of materially distorting human behaviour, with the risk of causing significant harm, in particular sufficiently serious adverse effects on physical or psychological health or on financial interests, are particularly dangerous and should therefore be prohibited. Such AI systems use subliminal components such as audio, graphic and visual stimuli that people are unable to perceive as they go beyond human perception or other manipulative or deceptive techniques that subvert or undermine the autonomy, decision-making process or free choice of a person without that person being aware of such techniques or, if aware, without being able to control or resist them or avoid the deception (…).”
The Guidelines cite various examples of subliminal techniques under Article 5(a), including subliminal visual, auditory, or tactile messages, embedded images, and temporal manipulation techniques [38].
The second category of prohibited practices, under Article 5(1)(b), concerns the placing on the market, putting into service, or use of AI systems that exploit a person’s vulnerability (or that of a group of persons) due to age (children and adolescents), disability, or a specific social or economic situation (such as elderly persons affected by illness, isolation or poverty), with the aim or effect of materially distorting behaviour and causing significant harm.
These practices are prohibited because they exploit the vulnerabilities of individuals or groups. They reflect techniques already prohibited, for example, under EU law on unfair commercial practices, particularly those classified as aggressive practices, which already recognize the concept of the vulnerable consumer.
The third type of prohibited AI practice under Article 5(1)(c) concerns the placing on the market, putting into service, or use of AI systems for the evaluation or classification of individuals or groups over a certain period based on their social behaviour or known, inferred, or predicted personal characteristics or personality traits, where the resulting negative social score leads to either:
i) unfavourable treatment in contexts unrelated to those in which the data were originally generated or collected;
ii) unfavourable or disproportionate treatment relative to the social behaviour or its severity.
The prohibition of social scoring aims to prevent AI systems from producing unjustifiably or disproportionately negative consequences for individuals or groups, especially when data originates from unrelated contexts [39]. The Commission notes that the prohibition targets certain social scoring practices that could unjustly harm individuals within a framework of social control and surveillance, although it recognizes that some scoring systems can enhance security, efficiency, and service quality. The main risk addressed by the AI Act is that of systemic discrimination and social exclusion, in breach of equality and non-discrimination principles under the Charter of Fundamental Rights.
The Guidelines detail conditions for the prohibition to apply, including: the use of AI systems for social evaluation based on behaviour or personal traits; the creation of an explicit or implicit social score; negative consequences in contexts not directly related to the evaluated behaviour, or with unjustified or disproportionate effects.
They also provide interpretative guidance on “evaluation” and “classification,” distinguishing between judgments involving an element of assessment and mere categorization, and on “social behaviour” and “personal or personality characteristics”.
In a predictive crime risk context, the Guidelines confirm that systems estimating the likelihood of criminal offences based solely on profiling or personality traits, without objective and verifiable elements, are prohibited to avoid automated predictive profiling that threatens the presumption of innocence and the right to a fair trial [40].
Regarding the mass collection of images for facial recognition databases (e.g., through scraping of internet content or CCTV footage), the prohibition aims to prevent mass surveillance and the unauthorized collection of highly sensitive data. All databases created through indiscriminate scraping techniques fall within the scope of this prohibition, even if they are not used exclusively for facial recognition [41].
The National Anti-Corruption Authority (ANAC), in an opinion dated 30 January 2025, reaffirmed that technical solutions aimed at preventing search engine indexing or scraping of public administration transparency portals are not permissible, including measures intended to prevent web scraping or the training of generative AI [42].
Web scraping is a widely used technique for the automated extraction of data from websites. Through specialized software, called web scrapers, it is possible to navigate between web pages, identify information of interest and organize it in structured formats, such as databases, spreadsheets or JSON files. This technology is particularly useful for various purposes, including market analysis, price monitoring, data journalism and data collection for statistical or academic purposes. Web scraping can be used by companies to optimize commercial strategies, by researchers to analyse trends and even by public institutions to collect data useful for planning public policies.
A concept often associated with web scraping is that of data indexing, a process by which search engines and other digital platforms catalog and organize information on the web. Indexing allows users to quickly find content of interest to them through targeted queries, thus improving the accessibility and usability of online information
[43].
Although both techniques have in common the activity of data collection, web scraping and indexing operate in different ways and for different purposes.
While web scraping is a selective and targeted process of extracting specific data from web pages, indexing plays a broader role: organizing and classifying digital content to facilitate its retrieval by users. Search engines, such as Google or Bing, use automatic crawlers to continuously scan the web and update their databases, providing users with relevant results in response to their searches. The sketch below makes the distinction concrete.
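To illustrate what a selective scraper does in practice, the following minimal sketch may be considered. It is purely illustrative: the target site, page path, CSS selectors, and user-agent string are hypothetical placeholders, and the example assumes the widely used requests and beautifulsoup4 libraries. It also checks robots.txt first, the voluntary convention through which websites signal whether automated access is welcome—a technical measure of the kind at issue in the debate on preventing scraping recalled above.

```python
# Minimal web-scraping sketch (illustrative only). The site, page path,
# CSS selectors, and user-agent are hypothetical placeholders.
import urllib.robotparser

import requests                # third-party: pip install requests
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

BASE = "https://example.org"   # hypothetical target site
PAGE = BASE + "/listings"      # hypothetical page holding the data of interest

# 1. Honour robots.txt, the convention by which sites signal whether
#    crawlers and scrapers are welcome.
robots = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
robots.read()
if not robots.can_fetch("example-scraper", PAGE):
    raise SystemExit("robots.txt disallows automated access to this page")

# 2. Fetch and parse the single page of interest.
html = requests.get(PAGE, headers={"User-Agent": "example-scraper"}, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# 3. Extract only the targeted fields (selective, unlike an indexer's
#    broad crawl) and organize them in a structured format.
records = [
    {
        "title": item.select_one("h2").get_text(strip=True),
        "price": item.select_one(".price").get_text(strip=True),
    }
    for item in soup.select("div.listing")
]
print(records)
```

An indexer, by contrast, would follow links across the entire site, store a representation of every page, and serve that content back through search queries, rather than extracting a handful of predefined fields.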
The Guidelines further highlight that Article 5(1)(e) restricts scraping prohibitions to facial data, not other biometric data such as voice samples. While this is consistent with the aim of preventing mass surveillance and privacy breaches, the interpretative stance leaves open whether alternative readings might also be legitimate.
Regarding emotion recognition systems in work or educational contexts, the Commission recalls privacy and anti-discrimination risks, as well as threats to dignity and personality development, allowing exceptions only for health or security purposes with strict limitations
[44].
On biometric categorization, the prohibition targets AI systems assigning individuals to sensitive categories (e.g., race, political opinions, religion, sexual orientation) based on biometric data, unless strictly necessary for the lawful labelling or filtering of datasets, bias-free training, or law enforcement purposes.
For remote biometric identification (RBI) in real time in public spaces for law enforcement purposes, the AI Act prohibits its use except under narrowly defined conditions (e.g., locating specific victims, preventing concrete threats, locating suspects of serious crimes) and subject to procedural safeguards in Article 5(2). RBI includes AI systems tracking individuals remotely in public spaces via database comparisons (e.g., Schengen Information System). By contrast, systems for access control or verifying identity against pre-stored data (e.g., ID documents, smartphone data) are excluded.
Finally, the Guidelines interpret “public space” broadly as any physical space accessible to the public, irrespective of ownership or management, excluding virtual spaces, prisons, borders, and certain port and airport zones.
5. AI Bias and Emerging Issues of Civil Liability
Once the regulatory framework within which the application of new technologies acquires legal significance has been precisely defined, the objective of the European legislator is to ensure full protection of the principle of individual self-determination against distortions that may result from the improper use of AI systems.
This topic, of great importance and complexity, falls within an established legal debate that, in the Italian legal system, has long sought to balance contractual autonomy with the need to establish appropriate limits and safeguards. In particular, civil law scholars have extensively examined the scope of freedom in behavioural and contractual choices
[45]. Today, this reflection is enriched by new debates linked to the use of artificial intelligence in decision-making processes.
Since the European Parliament Resolution of 16 February 2017 “with recommendations to the Commission on Civil Law Rules on Robotics”, the European Union has sought to combine the promotion of technological innovation with the need to ensure the safety and reliability of digital products and services and to provide effective protection for the rights and fundamental freedoms of those subject to decisions made by intelligent systems, following a human-centric approach
[46].
Thus, issues relating to the safety expected of digital products, robots and technological systems, on the one hand, and civil liability, on the other, have been addressed as central concerns, with the understanding that the two cannot be dealt with in isolation.
The use of new forms of intelligent technology, more or less automated, applied to the production of goods and provision of services, may cause harm of a completely different nature from that traditionally addressed by case law, giving rise to new civil liability issues
[47].
In addition to threats to fundamental rights and freedoms—such as health, safety, privacy, personal data protection, integrity, dignity, and self-determination—the diverse landscape of so-called algorithmic harm also encompasses situations where a properly programmed IT system infringes the right of individuals not to be discriminated against and to have fair and equal access to goods and services.
This includes scoring mechanisms, widely used in digital practice to select job applicants, assess creditworthiness for bank loans
[48], or determine eligibility for insurance policies. The new risks may arise in both traditional contractual relationships—such as sales, employment, professional services, banking, insurance, and financial intermediation contracts—and in smart contracts and contracts for the supply of digital content or services
[49].
The regulatory approach is based on the “safety (ex ante protection) - liability (ex post protection)” model, with the dual aim of ensuring that AI use does not lead to a reduction in safety or accountability compared to traditional standards.
Regarding ex ante protection, Regulation (EU) 2024/1689 sets out harmonized rules on artificial intelligence, amending previous regulations
[50]. Article 1 specifies that the Regulation seeks to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection for health, safety, and fundamental rights under the Charter of Fundamental Rights, including democracy, the rule of law, and environmental protection, and fostering technological innovation.
The risk management model adopted is notable: risks are graded from minimal and limited, through high, up to unacceptable. In cases of unacceptable risk to safety, livelihoods, health, fundamental rights, the environment, democracy, or the rule of law, the Regulation imposes bans with limited exceptions, preventing the entry of such digital systems into the internal market
[51].
These risks are connected to the type of activity, but importantly, they do not correspond to traditional market values; they relate to the framework of values and fundamental rights recognized by the EU.
From a procedural standpoint, providers of high-risk AI systems must undergo a conformity assessment before placing the system on the EU market or putting it into service. They must ensure compliance with mandatory requirements for trustworthy AI, including data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness. This assessment must be repeated if substantial modifications to the system or its purpose occur.
Ongoing market surveillance is entrusted to supervisory authorities, who must conduct external audits and provide channels for providers to report incidents or serious breaches affecting fundamental rights.
For low-risk intelligent systems (e.g., video games or spam filters), identified by exclusion, there are no obligations applicable to high-risk systems; instead, codes of conduct are encouraged, and only minimum transparency requirements apply.
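Purely as a schematic illustration of the tiered logic just described—not a restatement of the Regulation’s text—the mapping from risk level to regulatory consequence can be sketched as follows. The tier labels and the one-line summaries are interpretive simplifications.

```python
# Schematic sketch of the AI Act's risk-based approach as summarized above.
# Tier names and consequences are interpretive simplifications, not
# normative language.
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Art. 5)
    HIGH = "high"                  # conformity assessment and surveillance
    LIMITED = "limited"            # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"            # e.g., video games, spam filters

CONSEQUENCE = {
    Risk.UNACCEPTABLE: "ban on placing on the market, putting into service, or use",
    Risk.HIGH: "ex ante conformity assessment, repeated after substantial "
               "modification, plus ongoing market surveillance",
    Risk.LIMITED: "minimum transparency requirements",
    Risk.MINIMAL: "no mandatory obligations; voluntary codes of conduct encouraged",
}

for tier in Risk:
    print(f"{tier.value:>12}: {CONSEQUENCE[tier]}")
```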
Regarding ex post protection, the European Commission has established civil liability rules to apply if these risks materialize and cause harm
[52]. The new Directive (EU) 2024/2853 maintains the strict liability of producers for damage caused by unsafe products. This approach is considered the only suitable means to fairly allocate the risks inherent in modern technological production and to balance innovation with the protection of those harmed by new products.
Among the most significant innovations, the Directive extends the scope of application, revising definitions of product and defect and the regulation of damage. Legal scholarship had already noted the limitations of the earlier framework in connecting these concepts to certain smart devices
[53].
It is clear that the weight of product liability depends on the definition of product. Accordingly, the European legislator has updated this concept, acknowledging that new digital goods—comprising intangible and logical-informatic components—often consist of an inseparable combination of physical items (hardware, sensors) and intangible elements (software, algorithms, data), as well as services (e.g., data collection, processing, connectivity services).
Digital products may undergo changes during use, through software updates or extensions deemed essential to their operation, provided either by the producer or, increasingly, by third parties on its behalf. The Directive seeks to accommodate such variables and intangible assets (e.g., software, firmware, AI systems) within the legal framework
[54].
Where updates substantially alter functionality or introduce new features, they are classified as related services—digital services integrated into or interconnected with the product, whose absence prevents the product from performing one or more functions.
Despite the Directive’s scope, gaps remain—for example, whether an algorithm can be classified as a product component, and whether its designer can be held liable independently of the software developer
[55]. The question hinges on the legal characterization of the algorithm (mere design, provision of ideas, or intellectual creation). It is argued here that the algorithm’s designer should bear liability, both contractually towards the client and in tort towards third parties harmed by a technological system empowered to learn and direct its own behaviours.
6. Digital Transformation and Sustainability: The Role of AI
Sustainability, defined as the ability to meet present needs without compromising the ability of future generations to meet their own, has gained significant importance in corporate programmes.
Companies increasingly recognize the need to balance economic growth with environmental protection and social responsibility. However, the path towards sustainable digital transformation is fraught with challenges, including resource management, ethical considerations, and the need for innovative solutions aligned with sustainability goals.
Digital transformation corresponds to the integration of digital technology across all areas of business, radically changing how organizations operate and deliver value to customers. This transformation is not merely about adopting new technologies; it requires a cultural shift that embraces innovation and agility. Industry literature emphasizes the importance of aligning digital strategies with organizational objectives to achieve successful transformation, highlighting that a clear vision and leadership commitment are essential to addressing the complexities of digital change.
Sustainability has become a critical consideration in the context of digital transformation, as organizations increasingly recognize the need to balance economic growth with environmental and social responsibilities.
Specifically, the intersection between technology and sustainability has become increasingly pivotal, particularly in the field of software development. Generative AI can promote low-energy software development practices aimed at reducing the environmental impact associated with traditional coding methods. By leveraging advanced algorithms capable of automating code generation and optimization, generative AI represents a promising solution for improving energy efficiency in software applications. This technology encompasses various applications, including natural language processing, image generation, and predictive modelling.
Recent studies have highlighted the transformative potential of generative AI in automating processes, enhancing creativity, and fostering innovation. For example, applications in content creation, product design, and data analysis have demonstrated how generative AI can streamline operations and reduce time-to-market for new products
[56]. However, the implementation of generative AI also raises ethical concerns, including issues relating to data privacy, algorithmic bias, and potential misuse.
The literature identifies several frameworks for integrating sustainability into digital transformation strategies, emphasizing the importance of stakeholder engagement and the adoption of circular economy principles. Research indicates that organizations prioritizing sustainability in their digital initiatives can achieve competitive advantages, such as enhanced brand reputation, customer loyalty, and operational efficiency.
Nevertheless, the challenge remains of effectively measuring and managing sustainability outcomes in the context of rapidly evolving digital technologies, as well as addressing energy waste in AI training. As is well known, the digital services we use daily raise energy consumption concerns related to data centres, which consume large quantities of water to cool servers—volumes difficult to reconcile with sound energy policies aimed at protecting the planet.
Looking specifically at the Italian case, where the public water network loses approximately half of its resources and drought remains a persistent issue, digital sustainability is no longer a peripheral concern. Technology emissions must therefore be studied not in isolation, but weighed against the economic and environmental savings they enable.
Moreover, the fate of each machine, whose disposal is a critical issue, will have a significant impact on the future of new technologies. One potential solution could come from the correct application of the "Do No Significant Harm" principle, introduced in the European Union Taxonomy Regulation (EU Regulation 2020/852) and applied in various financing instruments, starting with Next Generation EU. The corollary of this principle is that any project wishing to access funding from the National Recovery and Resilience Plan and the European Union Recovery and Resilience Fund must demonstrate, at the proposal stage, that it will not have significant environmental impacts in six key areas: climate change, sustainable use of water resources, circular economy, pollution prevention, biodiversity protection, and climate change adaptation.
In the case of artificial intelligence, especially generative models, the enormous amounts of data, computation, and energy required—consider that training a single model can emit tons of CO2—raise concerns that AI may run counter to sustainability. However, this should not be interpreted as a call to slow down innovation, but rather to integrate sustainability into development models from the very beginning—not as a superficial element or a "green stamp" to be displayed on a company website, but as a structural part of the technological process.
This requires a shift in mindset, starting with the design phase: focusing on models that consume less energy without compromising performance, adopting green software engineering practices, reducing planned hardware obsolescence, and rethinking cloud architectures with an eco-friendly approach.
7. Technological Limits and the Mandatory Mathematical Value. The Dictatorship of Calculation and Hallucinations
Every advanced technology, no matter how sophisticated, is intrinsically subject to structural limits determined, on the one hand, by the quality and quantity of the data on which it relies, and on the other, by the mathematical logic underpinning information processing
[57].
A particularly critical aspect of modern analysis and prediction systems is the tendency to generate responses which, while appearing coherent and plausible, may be marred by systematic errors or distorted interpretations of reality. This occurs when the system, instead of recognizing (or admitting) the absence of reliable data or the uncertainty of available knowledge, nevertheless produces a result, attributing to it an apparent degree of certainty that may mislead the user.
This issue becomes especially evident in contexts where the reliability and accuracy of information are central, raising key concerns from both technological and legal standpoints. The need to provide an answer cannot overlook its verifiability and accuracy, lest it lead to erroneous decisions with significant impacts. In certain fields of study, the risk of processing inaccurate or decontextualized information calls for the adoption of effective mitigation strategies, such as strengthening control mechanisms, ensuring transparency in decision-making processes, and promoting user accountability.
Technological progress therefore compels a critical reflection on its use, emphasizing the importance of complementary and superior tools for verification and validation. Only through a rigorous, regulated methodological approach can the innovative potential of new technologies be fully harnessed while avoiding distortions that would compromise their reliability and effectiveness
[58].
In this context, a phenomenon of particular significance in automated data processing is that of so-called hallucinations
[59]—the generation of inaccurate, misleading or baseless information. The term, borrowed from neuroscientific language, describes the tendency of certain advanced systems to produce responses that, while structured and logically coherent in appearance, do not correspond to factual reality. The causes of this phenomenon are manifold: beyond the quality and completeness of the reference data, the probabilistic logic underpinning advanced models plays a significant role.
These models do not reason in a human sense; instead, they generate results based on statistical patterns which, in some cases, may lead to statements that are formally correct yet lacking in concrete validation.
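A toy numerical sketch can make this mechanism tangible. The prompt and the probability distribution below are invented for illustration and stand in for no specific model; the point is only that a probabilistic sampler always returns some fluent continuation, with nothing in the mechanism itself checking the statement against factual reality.

```python
# Toy illustration of probabilistic text generation. The prompt and the
# continuation probabilities are invented for demonstration purposes.
import random

prompt = "The court ruled in case X that"

# Hypothetical distribution over plausible-sounding continuations.
continuations = {
    "the claim was admissible": 0.40,
    "the appeal was dismissed": 0.35,
    "damages were awarded": 0.25,
}

def sample(probs: dict) -> str:
    """Pick a continuation with probability proportional to its weight."""
    options, weights = zip(*probs.items())
    return random.choices(options, weights=weights, k=1)[0]

# Every run yields a well-formed, confident-sounding sentence, yet no step
# of the mechanism verifies it against reality: this is the structural
# root of so-called hallucinations.
print(prompt, sample(continuations))
```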
The implications of hallucinations are particularly significant in contexts where precision and information reliability are critical factors. In fields such as law, science, and public administration, the risk of basing decisions on erroneous information demands a cautious approach founded on appropriate verification and control tools.
This challenge calls not only for continual technological improvement in system design but also for greater critical awareness on the part of users. The ability to correctly interpret and validate system outputs, combined with the adoption of rigorous verification protocols, is essential to ensure the safe and effective use of emerging technologies.
Therefore, if mathematical constraints on the development of artificial intelligence are real, they should not be seen as obstacles to technological progress but rather as elements that favour an evolution more consistent with a human-centric approach. Such awareness would help guide research towards applications that enhance the human role, ensuring that AI is used as a complementary tool rather than a substitute for human capabilities.
This issue raises fundamental questions of transparency and accountability. Deep neural networks, essential to modern machine learning, often function as black boxes—meaning that the internal dynamics of the algorithms are not fully accessible or comprehensible to users or regulators. To address these concerns, it is essential to develop AI policies that include thorough ethical assessments and even independent audits.
Indeed, recognizing the scope of the matter, the Regulation clarifies: “In order to take account of existing agreements and specific requirements for future cooperation with foreign partners with whom information and evidence are exchanged, this Regulation should not apply to public authorities of a third country and to international organizations acting within the framework of cooperation or international agreements concluded at Union or national level for cooperation between judicial or law enforcement authorities and the Union or its Member States, provided that the relevant third country or international organizations provide adequate safeguards for the protection of the fundamental rights and freedoms of individuals”.
8. Recent Italian Legislation on AI
The new Italian law on artificial intelligence aims to be the first comprehensive national legislative document on AI, coordinated with European regulations. Consisting of four chapters and 26 articles, the original bill was approved by the Council of Ministers on April 23, 2024, even before the final approval (and publication in the EU Official Journal) of Regulation 2024/1689 (AI Act).
This timing led to several misalignments, both terminological and substantive, between the Italian draft and the AI Act, many of which were resolved during the parliamentary process.
The first four articles are programmatic provisions, some of which are merely repetitive of the AI Act (e.g., the definitions in Article 2), while others introduce innovative—but potentially ineffective—provisions, such as Article 1, paragraph 4. This article requires parental consent for minors under the age of fourteen not only for the processing of their data (or the signing of related contracts) but also for mere access to artificial intelligence technologies.
Given the widespread use of AI technologies in consumer products, this provision may remain a theoretical point rather than a practical rule. Articles 16 and 24, however, grant the Government of the Republic broad—and perhaps excessive—powers to define rules on artificial intelligence. The law addresses three specific areas: national security and defence, healthcare, and copyright.
In terms of national security, Article 6, paragraph 1, excludes from the application of the law any research, testing, development, adoption, or use of AI systems and models carried out for national security purposes by the Department of Security Intelligence (DIS), the External Security and Information Agency (AISE), the Internal Security and Information Agency (AISI), the National Cybersecurity Agency (ACN) for cybersecurity protection, the Armed Forces for national defence, and the Police Forces to prevent and combat specific crimes.
This exclusion aligns with the AI Act, which reserves competence in national security matters for Member States, reflecting the strategic importance of AI innovation in international relations. Despite the general exclusion, Article 6, paragraph 1, last subparagraph, referencing Article 3, paragraph 4, appropriately establishes that such activities "must not prejudice the democratic conduct of institutional and political life" nor "the freedom of democratic debate from unlawful interference, by whomsoever caused, while protecting state sovereignty." For all AI uses related to national security, Article 6, paragraph 4, refers to a regulation that will define how the principles and rules set out in the new law apply. It is worth noting that the original provision (Article 6, paragraph 2) requiring AI systems for public use to be installed on national servers has been eliminated from the final approved text.
However, the principle of national localization remains valid under Article 5, which requires public bodies to prioritize solution providers that guarantee the localization and processing of strategic data within national data centres. This trend aligns with the development of EU data regulation, as seen in Article 4, paragraph 1, of Regulation 2018/1807 on the circulation of non-personal data, which prohibits data localization measures while safeguarding public security concerns in compliance with the principle of proportionality. Likewise, Regulations 2022/868 (Data Governance Act), 2023/2854 (Data Act), and 2025/327 (European Health Data Space) allow for restrictions on cross-border data flows for public interest reasons, potentially including data localization constraints.
Articles 7, 8, 9, and 10 address AI applications in healthcare. Article 7 establishes shared principles for AI use in healthcare, including non-discrimination, information transparency, social inclusion, the indispensability of human decision-making, and system accuracy and security.
However, better coordination with the AI Act would have been beneficial, and more specific data quality requirements—such as a preference for synthetic and anonymized data—would improve the framework. Article 8 facilitates the processing of personal and non-personal data for the development of AI systems in the healthcare sector. Paragraph 1 declares data processing for scientific research and experimentation in AI development to be of "significant public interest." While useful, this provision has been criticized for its lack of specificity, failing to clarify processing operations, data sources, and the qualifications of authorized researchers.
The principle of "significant public interest" is aligned with Regulation 2025/327 on the Common European Health Data Space, but such processing still requires prior approval from relevant ethics committees and notification to the Data Protection Authority, in accordance with the provisions of Articles 24, 25, 32, and 35 of the GDPR. Paragraph 2 of Article 8 legitimizes the secondary use of personal data, as long as it lacks direct identifying elements, for research and experimentation, removing the requirement for prior consent from the data subject. Key concerns with this provision include the nature of the information (with a simple general notice on the data controller's website being considered sufficient), the absence of data minimization principles, the lack of prohibitions on secondary data use, and the absence of an opt-out right, as required by EU Regulation 2025/327 on the Common European Health Data Space.
Chapter IV (Article 25) addresses copyright, establishing that works can be protected "even when created with the aid of artificial intelligence tools, provided the human contribution is creative, significant, and demonstrable." While there is friction between the principles underlying the AI Act and the new Italian law, this measure appears more symbolic than a substantive innovation within the national regulatory framework. As such, it may create coordination challenges with European regulations.
9. Concluding Remarks
In conclusion, the Union legislator initially provided a teleological definition of artificial intelligence, describing it as a set of evolving technologies capable of generating significant economic, environmental, and social benefits, with a transversal impact on industrial and civil sectors.
Subsequently, it outlined the limits and counter-limits to its use, taking into account the application context, methods of use, and level of technological development. The objective is to ensure a balance between progress and protection, preventing harm to public interests and the fundamental rights enshrined in Union law.
It is clear that the regulatory framework must evolve in accordance with supranational values—as enshrined in Article 2 of the Treaty on European Union—and fundamental freedoms. It should therefore serve as an instrument for individuals, with the ultimate aim of improving human well-being.
It was thus necessary to establish a uniform regulatory framework for AI systems classified as high-risk, acknowledging that potential harm arising from their use may manifest in both material and immaterial forms.
Indeed, as the Regulation itself clearly states in its preamble, artificial intelligence, alongside its various beneficial effects, has the potential to be misused and provide powerful new tools for “practices of manipulation, exploitation and social control.”
This establishes a clear hierarchy of priorities among the needs arising from AI’s dissemination, reaffirming that technological development cannot be a legitimate aim in itself, but must always serve as a tool for affirming the rights and freedoms of individuals, which constitute the cornerstones of the European legal framework.
The new regulatory instrument will contribute to disseminating these values beyond the borders of the continent, given that the legal text is intended to bind non-European countries that not only participated in the negotiation of the convention text but have already signed it, thereby assuming the provisional obligations arising under Article 18 of the Vienna Convention on the Law of Treaties. At the same time, the Framework Convention may serve as a paradigm for the future development of a multilateral treaty instrument independent of the work of the Council of Europe, thus influencing the global governance of artificial intelligence.
Focusing on the European judicial space and the influences the treaty will exert, one can observe the emergence of an osmotic relationship between the Council of Europe’s Framework Convention and the Union’s AI Act. Beginning with the inevitable influence on the interpretation of the provisions of the European Convention on Human Rights, it is foreseeable that the impetus for the development of human rights generated by the Framework Convention will also affect the concrete application of the legitimacy requirements set for high-risk systems by the AI Act, which represent a specific application in the AI field of the guarantees established by the Charter of Fundamental Rights of the European Union.
The current articulation of fundamental rights that the Framework Convention will help define, consistently with today’s social and technological context, will have significant practical consequences not only directly within the domestic legal orders of the States Parties—as will occur in Italy, for example, through the obligations arising under Article 117 of the Italian Constitution—but also indirectly, as the Convention adopted by the Council of Europe will impact the current regulation of artificial intelligence, for example by shaping the interpretation of the AI Act through Article 52 of the Charter of Nice.
In light of what has been highlighted, it emerges that the recently introduced conventional framework represents the affirmation of a method and cultural approach that retains its validity even in the context of technological innovation, where the central role must be played by human rights, the protection of democracy, and the safeguarding of the rule of law. These are fundamental values of a legal tradition that does not lose its identity in the face of the challenge of regulating new digital phenomena and is destined to shape the practical regulation of AI systems both within Europe’s borders and—this is the hope—at the international level.
This complex definitional approach seeks to promote the spread of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection for health, safety, and the legal positions protected by the Charter of Fundamental Rights of the European Union.
Thus, it declares the intention to protect individuals from the potentially harmful effects of AI systems, which must first be identified and then confined within a well-defined perimeter through rigorous controls. This will enable the creation of an innovative technological infrastructure that ensures both stability and regulatory coherence, so that digital evolution does not result in disorderly transformation but follows a gradual, structured, and sustainable development path.
To countries less concerned with setting detailed regulatory frameworks, old Europe may appear overly attached to ethical rules, restrictions, and values. Yet this vision reassures practitioners that the European path is the most just and appropriate to safeguard the individual, who remains the primary concern.
BIBLIOGRAPHY
Alexy, Robert, Concetto e validità del diritto, Roma, 2022
Bekker, Sonja, “Fundamental Rights in Digital Welfare States: The Case of SyRI in the Netherlands”, in O. Spijkers et al. (eds.), Netherlands Yearbook of International Law 2019, Netherlands Yearbook of International Law 50, 2021, p. 289 ss.
Betti, Emilio, Teoria generale del negozio giuridico, Napoli, 1950
Bravo, Fabio, “Intermediazione di dati personali e servizi di data sharing dal GDPR al Data Governance Act”, in Cont. e impr. Europa, 2021, p. 200 ss.
Capilli, Giovanna, “I criteri di interpretazione della responsabilità”, in G. Alpa (a cura di), Diritto e intelligenza artificiale. Profili generali, soggetti, contratti, responsabilità civile, diritto bancario e finanziario, processo civile, Pisa, 2020, p. 457 ss.
Carnelutti, Francesco, Introduzione allo studio del diritto, Napoli, 2016
Casolari, Federico / Buttaboni, Carlotta / Floridi, Luciano, The EU Data Act in Context: A legal assessment, in https://ssrn.com/abstract=4584781 (1.10.2025)
Castronovo, Carlo, Eclissi del diritto civile, Milano, 2015
Cevolani, Nicolò, “La nuova disciplina europea della responsabilità per danno da prodotti difettosi (Dir. 2024/2853/UE)”, in Le Nuove Leggi Civili Commentate, 2, 1 marzo 2025, p. 439
Cistaro, Mariangela, “Vietato impedire l’indicizzazione della sezione “Amministrazione trasparente” per prevenire il web scraping e l’addestramento dell’IA generativa”, in Azienditalia, 4, 1 aprile 2025, p. 501 ss.
Colacurci, Marco, “Riconoscimento facciale e rischi per i diritti fondamentali alla luce delle dinamiche di relazione tra poteri pubblici, imprese e cittadini”, in Sistema penale, 2022, p. 23 ss.
Colapietro, Carlo, “Gli algoritmi tra trasparenza e protezione dei dati personali”, in Federalismi.it, 2023, n. 5, p. 151 ss.
Cole, Mark D., “AI Regulation and Governance on a Global Scale: An Overview of International, Regional and National Instruments”, in Journal of AI Law and Regulation, 2024, p. 126 ss.
D’Alfonso, Guido, “Tecnologie digitali “emergenti” e nuovi rischi. Scenari normativi europei tra incertezza scientifica e principio di precauzione”, in Le dimensioni giuridiche del principio di precauzione, Napoli, 2023, pp. 121-133
D’Alfonso, Guido, “Il diritto alla spiegazione della decisione automatizzata nelle prospettive dell’U.E. Dall’opacità alla trasparenza dell’algoritmo”, in M.ª D. Cervilla Garzón y A. Blandino Garrido (a cura di), Temas Actuales de Derecho Privado III, Pamplona, 2024, p. 111 ss.
D’Orazio, Federico, “Il credit scoring e l’art. 22 del GDPR al vaglio della Corte di giustizia”, in La Nuova Giurisprudenza Civile Commentata, n. 2, 1° marzo 2024, p. 410
Ferri, Luigi, L’autonomia privata, Milano, 1959
Finocchiaro, Giusella, Intelligenza artificiale. Quali regole?, Bologna, 2024
Frosini, Vittorio, La struttura del diritto, Napoli, 2022 (1962)
Gambini, Marialuisa, “Responsabilità civile e controlli del trattamento algoritmico”, in P. Perlingieri, S. Giova e I. Prisco (a cura di), Il trattamento algoritmico dei dati tra etica, diritto e economia, Napoli, 2020, p. 313 ss.
Geiregat, Simon, “The Data Act: Start of a New Era of Data Ownership?”, in https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4214704 (1.10.2025), p. 3 ss.
Gentili, Aurelio, Il diritto come discorso, Milano, 2013
Grieco, Cristina, “Intelligenza artificiale e diritti umani nel diritto internazionale e dell’Unione europea. Alla ricerca di un delicato equilibrio”, in Ord. internaz. e dir. um., 2022, p. 782 ss.
Grossi, Paolo, L’invenzione del diritto, Laterza, Roma-Bari, 2017
Irti, Natalino, Un diritto incalcolabile, Torino, 2016
Li, Zihao, “The Dark Side of ChatGPT: Legal and Ethical Challenges from Stochastic Parrots and Hallucination”, in arXiv, 2023
Ling, Maurice H.T., “ChatGPT (Feb 13 Version) is a Chinese Room”, in arXiv, 2023, in https://arxiv.org/abs/2304.12411 (1.10.2025)
Lipari, Nicolò, Il diritto civile tra legge e giudizio, Milano, 2017
Luo, Junliang / Li, Teng / Wu, Dan / Jenkin, Michael / Liu, Sun / Dudek, Gregory, “Hallucination Detection and Hallucination Mitigation: An Investigation”, in arXiv, 2024, in https://arxiv.org/abs/2401.08358 (1.10.2025)
Mauro, Mario, “L’inizio di applicazione del regolamento (UE) 2023/988 relativo alla sicurezza generale dei prodotti (GPSR)”, in Persona e Mercato, n. 3, 2024, p. 1142
Montinaro, Roberta, “Responsabilità da prodotto difettoso. Tecnologie digitali tra soft law e hard law”, in Persona e mercato, n. 4, 2020, p. 350
Morace Pinelli, Andrea, “Data Act (reg. UE 2023/2854): la circolazione dei dati nell’interesse privato e generale”, in La Nuova Giurisprudenza Civile Commentata, n. 1, 2025, p. 218
Nardocci, Costanza, “Il riconoscimento facciale sul “banco” degli imputati. Riflessioni a partire, e oltre, Corte EDU Glukhin c. Russia”, in BioLaw Journal – Rivista di BioDiritto, 2024, p. 279 ss.
Nigro, Mario, “Formazioni sociali, poteri privati e libertà del terzo”, in Politica del diritto, 5-6/1975, p. 581 ss.
Pajno, Alessandro, “Prefazione. La costruzione dell’infosfera e le conseguenze sul diritto”, in F. Donati e A. Perrucci, Intelligenza artificiale e diritto: una rivoluzione? Diritti fondamentali, dati personali e regolazione, I, Bologna, 2022, p. 9 ss.
Palombino, Fulvio Maria, “La dimensione ‘orizzontale’ della Convenzione europea dei diritti dell’uomo”, in Rass. dir. civ., 2021, p. 219 ss.
Passagnoli, Giovanni, “Ragionamento giuridico e tutele nell’intelligenza artificiale”, in Persona e mercato, n. 3, 2019, p. 79 ss.
Petruso, Rosario, “La nuova direttiva UE sulla responsabilità per danno da prodotti difettosi: una prima lettura”, in Riv. dir. econ. trasp. amb., 2024, vol. XXII, p. 575
Pizzetti, Franco, Privacy e il diritto europeo alla protezione dei dati personali, tomo I (Dalla dir. 95/46 al nuovo regolamento europeo) e tomo II (Il regolamento europeo 2016/679), Torino, 2016
Poletti, Dianora, “GDPR tra novità e discontinuità - Le condizioni di liceità del trattamento di dati personali”, in Giur. it., 2019, p. 2785
Pollicino, Oreste, “Regolare l’intelligenza artificiale: la lunga via dei diritti fondamentali”, in F. Donati, G. Finocchiaro e F. Paolucci, La disciplina dell’intelligenza artificiale, Milano, 2025, p. 3 ss.
Rescigno, Pietro, “Le formazioni sociali intermedie”, in AA.VV., Dalla Costituente alla Costituzione, Roma, 1998, p. 231
Resta, Giorgio, “Cosa c’è di ‘europeo’ nella Proposta di Regolamento UE sull’intelligenza artificiale?”, in Dir. inf. informatica, 2022, p. 53 ss.
Resta, Giorgio, “La regolazione digitale nell’Unione europea. Pubblico, privato, collettivo nel sistema europeo di governo dei dati”, in Riv. trim. dir. pubbl., 2022, p. 971
Rossi, Emilio, Le formazioni sociali nella Costituzione italiana, Padova, 1989
Ruffolo, Ugo, “La responsabilità da produzione e gestione dell’intelligenza artificiale self learning”, in Id. (a cura di), XXVI Lezioni di diritto dell’intelligenza artificiale, Torino, 2021, p. 132 s.
Ruffolo, Ugo / Amidei, Andrea, “Intelligenza Artificiale e diritti della persona: le frontiere del ‘transumanesimo’”, in Giur. it., 2019, p. 1658
Ruoppo, Roberto, “Il regolamento UE in materia di Intelligenza Artificiale (c.d. AI Act): un quadro normativo uniforme per la tutela dei diritti fondamentali”, in Rass. dir. civ., 2024, p. 996 ss.
Ruoppo, Roberto, “La convenzione quadro del Consiglio d’Europa sull’intelligenza artificiale e il suo contributo allo sviluppo dei diritti fondamentali”, in Persona e mercato, 1, 2025, p. 189
Salanitro, Ugo, “Intelligenza artificiale e responsabilità: la strategia della Commissione Europea”, in Riv. dir. civ., 2020, p. 1247
Scognamiglio, Claudio, “Responsabilità civile ed intelligenza artificiale: quali soluzioni per quali problemi?”, in Resp. civ. prev., 2023, p. 1073 ss.
Sica, Salvatore / D’Antonio, Virgilio / Riccio, Giovanni Maria (a cura di), La nuova disciplina europea della privacy, Milano, 2016
Simoncini, Andrea, “L’algoritmo incostituzionale: intelligenza artificiale e il futuro delle libertà”, in BioLaw Journal – Rivista di BioDiritto, 2019, n. 1, p. 63 ss.
Spano, Robert, “The Rule of Law as the Lodestar of the European Convention on Human Rights: The Strasbourg Court and the Independence of the Judiciary”, in European Law Journal, 2021, p. 211-227
Stanzione, Pasquale, “Conclusioni”, in Morace Pinelli (a cura di), La circolazione dei dati personali. Persona, contratto e mercato, Pacini, 2023, p. 159 ss.
Trincado Castán, Carlos, “The Legal Concept of Artificial Intelligence: The Debate Surrounding the Definition of AI System in the AI Act”, in BioLaw Journ. – Rivista di BioDiritto, 2024, p. 305 ss.
Zaccaria, Giuseppe, La comprensione del diritto, Roma-Bari, 2019
Zaccaroni, Giovanni, “Intelligenza artificiale e principi democratici: riflessioni a margine dell’emersione di un quadro normativo europeo”, in Quaderni AISDUE, 2024, p. 1-37
Zeno-Zencovich, Vincenzo, “Dati, grandi dati, dati granulari e la nuova epistemologia del giurista”, in MediaLaws, 2, 2018
Zorzi Galgano, Nadia, “Il Regolamento UE 2024/1689 del 13 giugno 2024 sul c.d. alto rischio inaccettabile: le pratiche inerenti sistemi di intelligenza artificiale vietate dal legislatore europeo”, in Contratto e Impresa, n. 1, 1 gennaio 2025, p. 46
[1] A. Simoncini, “L’algoritmo incostituzionale: intelligenza artificiale e il futuro delle libertà”, in BioLaw Journal – Rivista di BioDiritto, 2019, n. 1, p. 63 ss.; A. Pajno, “Prefazione. La costruzione dell’infosfera e le conseguenze sul diritto”, in F. Donati e A. Perrucci, Intelligenza artificiale e diritto: una rivoluzione? Diritti fondamentali, dati personali e regolazione, I, Bologna, 2022, p. 9 ss.; C. Colapietro, “Gli algoritmi tra trasparenza e protezione dei dati personali”, in Federalismi.it, 2023, n. 5, p. 151 ss.; F. D’Orazio, “Il credit scoring e l’art. 22 del GDPR al vaglio della Corte di giustizia”, in La Nuova Giurisprudenza Civile Commentata, n. 2, 1° marzo 2024, p. 410.
[2] U. Salanitro, “Intelligenza artificiale e responsabilità: la strategia della Commissione Europea”, in Riv. dir. civ., 2020, p. 1247; G. D’Alfonso, “Il diritto alla spiegazione della decisione automatizzata nelle prospettive dell’U.E. Dall’opacità alla trasparenza dell’algoritmo”, in M.ª D. Cervilla Garzón y A. Blandino Garrido (a cura di), Temas Actuales de Derecho Privado III, Pamplona, 2024, p. 111 ss.; O. Pollicino, “Regolare l’intelligenza artificiale: la lunga via dei diritti fondamentali”, in F. Donati, G. Finocchiaro e F. Paolucci, La disciplina dell’intelligenza artificiale, Milano, 2025, p. 3 ss.
[3] G. Zaccaroni, “Intelligenza artificiale e principi democratici: riflessioni a margine dell’emersione di un quadro normativo europeo”, in Quaderni AISDUE, 2024, p. 22.
[4] G. Passagnoli, “Ragionamento giuridico e tutele nell’intelligenza artificiale”, in Persona e mercato, n. 3, 2019, p. 79 ss.; D. Poletti, “GDPR tra novità e discontinuità - Le condizioni di liceità del trattamento di dati personali”, in Giur. it., 2019, p. 2785; R. Spano, “The Rule of Law as the Lodestar of the European Convention on Human Rights: The Strasbourg Court and the Independence of the Judiciary”, in European Law Journal, 2021, p. 211-227; G. Finocchiaro, Intelligenza artificiale. Quali regole?, Bologna, 2024, p. 15 ss.
[5] CiTiP Working Paper 2022 – White Paper on the Data Act Proposal, 26 October 2022, edited by Ducuing, Margoni and Schirru, KU Leuven Centre for IT & IP Law, p. 10 ss.; Hennemann, Ebner and Karsten, Part I (Art. 1-13, 35), in Hennemann, Karsten, Wienroeder, Lienemann and Ebner (eds.), The Data Act Proposal. Literature Review and Critical Analysis, University of Passau Institute for Law and the Digital Society Research Paper Series No. 23-01, 2023, p. 4 ss.; F. Casolari, C. Buttaboni and L. Floridi, The EU Data Act in Context: A legal assessment, in https://ssrn.com/abstract=4584781 (1.10.2025); S. Geiregat, The Data Act: Start of a New Era of Data Ownership?, in https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4214704 (1.10.2025), p. 3 ss.
[6] V. Zeno-Zencovich, “Dati, grandi dati, dati granulari e la nuova epistemologia del giurista”, in MediaLaws, 2, 2018; F. Bravo, “Intermediazione di dati personali e servizi di data sharing dal GDPR al Data Governance Act”, in Cont. e impr. Europa, 2021, p. 200 ss.
[7] A. Morace Pinelli, “Data Act (reg. UE 2023/2854): la circolazione dei dati nell’interesse privato e generale”, in La Nuova Giurisprudenza Civile Commentata, n. 1, 2025, p. 218. The author emphasizes that "Observing, as the Data Act does, the circulatory phenomenon from the consumer user's side, compressing to a certain extent the economic initiative of the data owners, is explained by the need to achieve that difficult balance between protection of the person and promotion of the market, which constitute the fundamental values pursued by the legal system in the matter of data. However, a point of balance must be reached between the two values in play".
[8] P. Stanzione, “Conclusioni”, in Morace Pinelli (a cura di), La circolazione dei dati personali. Persona, contratto e mercato, Pacini, 2023, p. 159 ss.
[9] See G. Resta, “La regolazione digitale nell’Unione europea. Pubblico, privato, collettivo nel sistema europeo di governo dei dati”, in Riv. trim. dir. pubbl., 2022, p. 971.
[10] The negotiations for the Convention started in September 2022, under the auspices of the Committee on AI (CAI), established by the Council of Europe in Strasbourg. In addition to the European Commission on behalf of the EU, several member states of the Council of Europe, the Holy See, the United States, Canada, Mexico, Japan, Israel, Australia, Argentina, Peru, Uruguay and Costa Rica participated in the negotiations. Furthermore, several civil society stakeholders, for example from academia and industry, were involved in the negotiations, confirming the multidisciplinary and inclusive approach, which distinguishes the negotiations within the Council of Europe. This best reflects the global nature of the challenges and opportunities related to AI, recognizing that effective regulation of the sector can only be pursued through international/universal cooperation also extended to civil society.
[11] C. Trincado Castán, “The Legal Concept of Artificial Intelligence: The Debate Surrounding the Definition of AI System in the AI Act”, in BioLaw Journ. – Rivista di BioDiritto, 2024, p. 305 ss.
[12] The Framework Convention is not intended to introduce obligations for instruments other than those already covered by the requirements set out in Articles 8 et seq. of the AI Act for high-risk systems: a reading of the conventional text, in light of the authentic interpretation provided by the explanatory report, shows that it prescribes requirements and standards for those same instruments that would likely be subsumed within the category of high-risk systems under the taxonomy of the AI Act.
[13] According to Article 19, the Convention establishes that States Parties shall promote public consultations and informed debates on issues related to artificial intelligence, ensuring the involvement of civil society and NGOs in decision-making processes, thus underlining the importance of transparency and democratic participation. In order to promote international cooperation, a Conference of the Parties is also established, a supranational body responsible for monitoring the implementation of the Convention, facilitating international cooperation and promoting the exchange of information between Member States. With specific reference to the monitoring activities of the Convention, Article 26 establishes the obligation to establish independent bodies with the necessary human and financial resources to monitor the implementation of the provisions of the Convention.
[14] R. Ruoppo, “Il regolamento UE in materia di Intelligenza Artificiale (c.d. AI Act): un quadro normativo uniforme per la tutela dei diritti fondamentali”, in Rass. dir. civ., 2024, p. 996 ss.
[15] The measure specifically addresses the issue of risk: while most AI systems do not pose risks and can help solve many societal challenges, some AI systems create risks that need to be addressed to avoid undesirable outcomes. In this sense, 4 levels of risk are defined for AI systems: minimal risk, limited risk, high risk and unacceptable risk. The first is found in the vast majority of AI systems currently used in the EU, such as AI-enabled video games or spam filters. The second concerns the risks associated with the need for transparency regarding the use of AI. The AI Act introduces specific disclosure obligations to ensure that humans are informed when necessary to preserve trust, such as when using AI systems such as chatbots where humans should be aware that they are interacting with a machine so they can make an informed decision. Therefore, some AI-generated content should be clearly and prominently labelled, namely deep fakes and texts published for the purpose of informing the public on matters of public interest. The high risk concerns, for example, AI safety components in critical infrastructure (e.g. transport), the failure of which could put citizens' lives and health at risk; AI solutions used in educational institutions, which can determine someone's access to education and the course of their professional life (e.g. exam scores); AI-based product safety components (e.g. AI application in robot-assisted surgery); AI tools for employment, worker management and access to self-employment (e.g. CV screening software for recruitment); some AI use cases used to provide access to essential public and private services (e.g. credit scoring denying citizens the possibility of getting a loan); AI systems used for remote biometric identification, emotion recognition and biometric categorization (e.g. AI system to retroactively identify a shoplifter); AI use cases in law enforcement that may interfere with fundamental rights of individuals (e.g. assessment of reliability of evidence); AI use cases in migration, asylum and border control management (e.g. automated examination of visa applications); AI solutions used in the administration of justice and democratic processes (e.g. AI solutions to prepare court judgments).
Finally, unacceptable risk scenarios include all AI systems considered to pose a clear threat to the safety, livelihoods and rights of individuals, namely, AI-based malicious manipulation and deception; AI-based malicious exploitation of vulnerabilities; social scoring; assessing or predicting individual crime risk; non-targeted scraping of internet or CCTV material to create or expand facial recognition databases; emotion recognition in the workplace and educational institutions; biometric categorization to infer certain protected characteristics; real-time remote biometric identification for law enforcement purposes in publicly accessible spaces.
[16] It is interesting to note that, compared with the terms and definitions of the previous versions, in the English version of the final text of the Regulation the word “user” has been replaced with “deployer”, and that in the Italian version this word has been left untranslated. Its definition is as follows: “‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity”.
[17] The process that led to the approval of the AI Act lasted just over three years. After the publication of the European Commission proposal on 21.4.2021, the Council of the European Union approved on 21.5.2024 the text of the Position of the European Parliament, voted by the EP on 13.3.2024 and subsequently subjected to the corrigendum procedure provided for by art. 241 of the Rules of Procedure of the European Parliament (with the publication of the amended text dated 19.4.2024).
[18] The strategic document aims to promote a national AI ecosystem that is both internationally competitive and respectful of European values and regulations.
[19] The restrictive interpretation provided by the Guidelines regarding the exemptions provided for by art. 2 AIA—i.e., national security, defence and military purposes, international judicial cooperation, pre-market research and development activities, non-professional personal use, and so-called open source systems—is noteworthy. In particular, it is emphasized that the exclusion for national security or defence purposes applies only if the use is exclusive; otherwise, in the case of "dual use", the system falls within the scope of the AI Act. As regards experimentation, research activity is excluded only until the placing on the market or putting into service, while testing in real-world conditions is governed by specific regimes and does not fall within the scope of this exemption.
[20] Chapter II (Article 5 AIA), entitled Prohibited Artificial Intelligence Practices, contains a single article that lists a series of AI systems, also identified with reference to use cases, subject to prohibitions on placing on the market, putting into service and/or use. “Placing on the market” is defined as “the first making available of an AI system or a general-purpose AI model on the Union market”; “putting into service” is defined as “the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose”. “Use”, by contrast, is not defined.
[21] Garante per la protezione dei dati personali, 22 February 2024, doc. web nos. 9995680, 9995701, 9995741, 9995762, 9995785. With reference to the processing of biometric data in the workplace, the Privacy Guarantor has adopted a particularly restrictive approach, absolutely prohibiting the use of facial recognition systems for attendance control.
[22] The behavioural distortion must be such that it can induce, beyond the actual realization of the undesired effect, decisions that the individual would not otherwise take, by virtue of techniques capable of impairing the ability to adopt an informed, conscious and free decision; such distortion must be plausibly linked to significant harm, which may be physical, psychological, economic or social. The harm must then be assessed on a case-by-case basis, taking into account its severity, duration and reversibility and the vulnerability of the subjects involved. Mere lawful, transparent persuasion, on the other hand, is excluded from the prohibition, as are practices that are not reasonably likely to cause significant harm.
[23] U. Ruffolo and A. Amidei, “Intelligenza Artificiale e diritti della persona: le frontiere del ‘transumanesimo’”, in Giur. it., 2019, p. 1658; S. Sica, V. D’Antonio and G.M. Riccio (a cura di), La nuova disciplina europea della privacy, Milano, 2016; F. Pizzetti, Privacy e il diritto europeo alla protezione dei dati personali, tomo I (Dalla dir. 95/46 al nuovo regolamento europeo) e tomo II (Il regolamento europeo 2016/679), Torino, 2016.
[24] Very interesting in this regard is the Declaration on the processing of personal data in the context of the COVID-19 epidemic, adopted on 19 March 2020 by the European Data Protection Board. Addressing data protection in the workplace or with reference to the use of location data from mobile devices, the document, even before entering in medias res, underlines that “even in these exceptional moments [...] it is necessary [...] to take into account a series of considerations to ensure the lawfulness of the processing of personal data and, in any case, it must be remembered that any measure adopted in this context must respect the general principles of law and cannot be irrevocable. The emergency is a legal condition that can legitimize limitations of freedoms, provided that such limitations are proportionate and confined to the emergency period”.
[25] P. Rescigno, “Le formazioni sociali intermedie”, in AA.VV., Dalla Costituente alla Costituzione, Roma, 1998, p. 231. See also E. Rossi, Le formazioni sociali nella Costituzione italiana, Padova, 1989; M. Nigro, “Formazioni sociali, poteri privati e libertà del terzo”, in Politica del diritto, 5-6/1975, p. 581 ss.
[26] G. Finocchiaro, Intelligenza artificiale. Quali regole?, Bologna, 2024, p. 18.
[27] The Hague District Court, Nederlands Juristen Comité voor de Mensenrechten et al. v. The State of the Netherlands, 5 February 2020, C/09/550982. S. Bekker, “Fundamental Rights in Digital Welfare States: The Case of SyRI in the Netherlands”, in O. Spijkers et al. (eds.), Netherlands Yearbook of International Law 2019, Netherlands Yearbook of International Law 50, 2021, p. 289 ss.
[28] UN High-Level Advisory Board on Artificial Intelligence, Governing AI for Humanity, final report, September 2024, in https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf (1.10.2025). M.D. Cole, “AI Regulation and Governance on a Global Scale: An Overview of International, Regional and National Instruments”, in Journal of AI Law and Regulation, 2024, p. 126 ss.
[29] F.M. Palombino, “La dimensione ‘orizzontale’ della Convenzione europea dei diritti dell’uomo”, in Rass. dir. civ., 2021, p. 219 ss.
[30] R. Ruoppo, “Il regolamento UE in materia di Intelligenza Artificiale (c.d. AI Act): un quadro normativo uniforme per la tutela dei diritti fondamentali”, in Rass. dir. civ., 2024, p. 996 ss.
[31] C. Grieco, “Intelligenza artificiale e diritti umani nel diritto internazionale e dell’Unione europea. Alla ricerca di un delicato equilibrio”, in Ord. internaz. e dir. um., 2022, p. 782 ss.
[32] ECtHR, Gaughran v. the United Kingdom, 13 February 2020, app. no. 45245/15.
[33] M. Colacurci, “Riconoscimento facciale e rischi per i diritti fondamentali alla luce delle dinamiche di relazione tra poteri pubblici, imprese e cittadini”, in Sistema penale, 2022, p. 23 ss.; C. Nardocci, “Il riconoscimento facciale sul ‘banco’ degli imputati. Riflessioni a partire, e oltre, Corte EDU Glukhin c. Russia”, in BioLaw Journal – Rivista di BioDiritto, 2024, p. 279 ss.
[34] R. Ruoppo, “La convenzione quadro del Consiglio d’Europa sull’intelligenza artificiale e il suo contributo allo sviluppo dei diritti fondamentali”, in Persona e mercato, 1, 2025, p. 189.
[35] See, in this regard, the case Kyland Young v. NeoCortext Inc. (2023), in which the plaintiff, known for having taken part in numerous reality shows, complained of the violation of his right of publicity arising from the unauthorized reproduction of his image through software equipped with artificial intelligence systems.
[36] G. Zaccaroni, “Intelligenza artificiale e principi democratici: riflessioni a margine dell’emersione di un quadro normativo europeo”, in Quaderni AISDUE, 2024, pp. 1-37.
[37] N. Zorzi Galgano, “Il Regolamento UE 2024/1689 del 13 giugno 2024 sul c.d. alto rischio inaccettabile: le pratiche inerenti sistemi di intelligenza artificiale vietate dal legislatore europeo”, in Contratto e Impresa, n. 1, 1 January 2025, p. 45.
[38] EU Commission, Approval of the content of the draft Communication from the Commission – Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act), Brussels, 4.2.2025, C(2025) 884 final.
[39] It is noted that “With reference to the social score, one aspect must be clarified. Strictly speaking, it can manifest itself in two antithetical forms. It can be a negative social score, such as the one clearly identified in letter c), par. 1, subsection 1, art. 5, which is the subject of a ban for the reasons already highlighted. But it can also be a positive social score, where points (or prizes, recognitions or favourable measures) are awarded to those who conform to a specific positive social model promoted by the State as a rule. As far as our country is concerned, one may think, for example, of the support measures for large families intended to counter zero population growth. In this case, there is no reason to deny their admissibility even through AI systems, in pursuit of a given model that the national State, within the framework of the values of the European Union, wishes to promote and pursue. The Guidelines also report an example of positive social scoring in relation to a specific social model: the hypothesis of an artificial intelligence system applied to an online shopping platform that offers privileges to users with a solid purchase history and a low rate of product returns, such as a faster return request procedure or access to refunds without returning the product. As clarified by the European Commission, this example would not fall under the prohibition pursuant to art. 5, par. 1, subparagraph 1, letter c), since the advantages would be justified and proportionate to reward positive behaviour and other users would still have access to the standard return procedure”. N. Zorzi Galgano, “Il Regolamento UE 2024/1689 del 13 giugno 2024 sul c.d. alto rischio inaccettabile: le pratiche inerenti sistemi di intelligenza artificiale vietate dal legislatore europeo”, cit., p. 46.
[40] N. Zorzi Galgano, op. ult. cit.: “The exclusion from the ban in the terms specified rests on two profiles. First of all, profiling by the AI system to assess the risk that a natural person will commit a crime, or to predict a crime, must amount to no more than an aid to an assessment that cannot be taken away from humans and must necessarily remain a human assessment. From another perspective, the assessment that a crime may be committed, or the prediction of a crime, cannot be detached from an evaluation conducted and based on the actual existence of evidence, or means of proof which, in a State governed by the rule of law, cannot today be ignored”.
[41] The prohibition does not cover systems aimed at identifying specific individuals or predetermined groups of people, for example with a view to tracing a criminal or a group of victims. Likewise, the prohibition does not cover systems that scrape other biometric data, such as voice samples; systems not based on AI technologies; and systems incapable of recognizing people (because, for example, they are used to train generative AI models).
[42] M. Cistaro, “Vietato impedire l’indicizzazione della sezione ‘Amministrazione trasparente’ per prevenire il web scraping e l’addestramento dell’IA generativa”, in Azienditalia, 4, 1 April 2025, p. 501 ss.
[43] Let us recall the Clearview AI case, brought to the attention of the Italian Data Protection Authority, which launched an investigation into the phenomenon (Provision of the GPDP, 21 December 2023, web doc. no. 9972593) that concluded with the publication of the Information Note “on web scraping for the purposes of training generative artificial intelligence and possible counteractions to protect personal data” (Provision of the GPDP, 20 May 2024, web doc. no. 10020316).
[44] Given that the conditions referred to in letter f) are cumulative, the Guidelines focus on the notions that trigger the applicability of the prohibition. After casting doubt on the effectiveness and accuracy of such systems, they specify that the rule does not concern emotion recognition systems tout court, but only systems capable of drawing ‘inferences’ about the emotions of a natural person (as well as of ‘identifying’ them, pursuant to Recital 44 AIA); in the reconstruction carried out therein, the definition in letter f) is to be read in a perspective similar to that adopted by the other rules applicable to emotion recognition systems (i.e. art. 50 AIA and Annex III, no. 1, letter c)).
[45] E. Betti, Teoria generale del negozio giuridico, Napoli, 1950; L. Ferri, L’autonomia privata, Milano, 1959; A. Gentili, Il diritto come discorso, Milano, 2013; C. Castronovo, Eclissi del diritto civile, Milano, 2015; F. Carnelutti, Introduzione allo studio del diritto, Napoli, 2016; N. Irti, Un diritto incalcolabile, Torino, 2016; P. Grossi, L’invenzione del diritto, Roma-Bari, 2017; N. Lipari, Il diritto civile tra legge e giudizio, Milano, 2017; G. Zaccaria, La comprensione del diritto, Roma-Bari, 2019; V. Frosini, La struttura del diritto, Napoli, 2022 (1962).
[46] G. Passagnoli, “Ragionamento giuridico e tutele nell’intelligenza artificiale”, in Persona e mercato, n. 3, 2019, p. 79 ss.
[47] G. Capilli, “I criteri di interpretazione della responsabilità”, in G. Alpa (a cura di), Diritto e intelligenza artificiale. Profili generali, soggetti, contratti, responsabilità civile, diritto bancario e finanziario, processo civile, Pisa, 2020, p. 457 ss.; M. Gambini, “Responsabilità civile e controlli del trattamento algoritmico”, in P. Perlingieri, S. Giova, I. Prisco (a cura di), Il trattamento algoritmico dei dati tra etica, diritto e economia, Napoli, 2020, p. 313 ss.
[48] In addition to the Schufa case of 2023, the Court of Justice of the European Union, in its judgment of 27 February 2025 in case C-203/22, ruled on the automated assessment of creditworthiness, with particular reference to the data subject’s right to an explanation of the logic underlying the decision whether or not to grant credit, which enables him or her to understand and contest the automated decision.
[49] U. Ruffolo, “La responsabilità da produzione e gestione dell’intelligenza artificiale self learning”, in Id. (a cura di), XXVI Lezioni di diritto dell’intelligenza artificiale, Torino, 2021, p. 132 s.
[50] The reference is to Regulations (EC) 300/2008, (EU) 167/2013, (EU) 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144, and to Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828.
The Regulation entered into force on 1 August 2024 and applies from 2 August 2026, so as to give companies time to comply with the new discipline, except for: the prohibitions on unacceptable-risk systems, which apply from 2 February 2025; the codes of practice, which must be ready by 2 May 2025; and the rules on general-purpose AI models, including governance, which apply from 2 August 2025.
[51] G. Resta, “Cosa c’è di ‘europeo’ nella Proposta di Regolamento UE sull’intelligenza artificiale?”, in Dir. inf. e informatica, 2022, p. 53 ss., § 3.
[52] M. Mauro, “L’inizio di applicazione del regolamento (UE) 2023/988 relativo alla sicurezza generale dei prodotti (GPSR)”, in Persona e mercato, n. 3, 2024, p. 1142.
[53] R. Montinaro, “Responsabilità da prodotto difettoso. Tecnologie digitali tra soft law e hard law”, in Persona e mercato, n. 4, 2020, p. 350; R. Petruso, “La nuova direttiva UE sulla responsabilità per danno da prodotti difettosi: una prima lettura”, in Riv. dir. econ. trasp. amb., vol. XXII, 2024, p. 575.
[54] C. Scognamiglio, “Responsabilità civile ed intelligenza artificiale: quali soluzioni per quali problemi?”, in Resp. civ. prev., 2023, p. 1073 ss.
[55] C. Scognamiglio, op. ult. cit., p. 1074.
[56] N. Cevolani, “La nuova disciplina europea della responsabilità per danno da prodotti difettosi (Dir. 2024/2853/UE)”, in Le Nuove Leggi Civili Commentate, 2, 1 March 2025, p. 439.
[57] G. D’Alfonso, “Tecnologie digitali ‘emergenti’ e nuovi rischi. Scenari normativi europei tra incertezza scientifica e principio di precauzione”, in Le dimensioni giuridiche del principio di precauzione, Napoli, 2023, pp. 121-133.
[58] R. Alexy, Concetto e validità del diritto, Roma, 2022.
[59] J. Luo, T. Li, D. Wu, M. Jenkin, S. Liu, G. Dudek, “Hallucination Detection and Hallucination Mitigation: An Investigation”, in arXiv, 2024, in https://arxiv.org/abs/2401.08358; Z. Li, “The Dark Side of ChatGPT: Legal and Ethical Challenges from Stochastic Parrots and Hallucination”, in arXiv, 2023; M.H.T. Ling, “ChatGPT (Feb 13 Version) is a Chinese Room”, in arXiv, 2023, in https://arxiv.org/abs/2304.12411.