Italy and the EU AI Act: A Double Layer of Obligations

1 April 2026


Italy’s Law No. 132/2025, in force since October 2025, sits alongside the EU AI Act rather than replacing it. The Italian law defers to the EU framework on definitions and risk classifications but adds binding sector-specific obligations in areas Brussels left to Member States. For businesses with Italian operations, both layers require separate analysis.

The EU Risk Classification — Four Categories

The Act operates on a four-tier framework. Where a client’s AI systems sit within it determines the entire compliance burden.

At the top, a category of systems is prohibited outright, banned since February 2025. This covers social scoring by public authorities, systems using subliminal techniques to manipulate behaviour, tools that exploit vulnerabilities of specific groups, and most real-time remote biometric identification in public spaces.

Below that, high-risk systems – those deployed in healthcare, critical infrastructure management, employment, education, and access to essential private and public services – face the most demanding compliance requirements. These cover risk management systems, data governance, human oversight mechanisms, technical documentation, and transparency obligations toward deployers. Full compliance is required by 2 August 2026, though the recently proposed Digital Omnibus may delay certain deadlines until 2027 or 2028.

Fines for non-compliance vary with the nature of the violation: up to €15 million or 3% of global annual turnover (whichever is higher) for high-risk breaches, and up to €35 million or 7% for infringements involving prohibited practices.

Limited-risk systems carry lighter but still mandatory transparency obligations under Article 50 of the Act, also applying from 2 August 2026. Three main categories are caught. Operators of chatbots and AI-driven virtual assistants must ensure users know they are engaging with a machine, unless the context makes this self-evident. Systems generating or manipulating images, audio, or video in ways that could mislead – deepfakes – must carry disclosure of artificial origin, subject to a narrower carve-out for clearly artistic or satirical content. AI-generated text published on matters of public interest must similarly be labelled as machine-produced, unless a human has reviewed and taken editorial responsibility for it.

At the base, minimal-risk systems – such as spam filters, inventory tools, recommendation engines, and AI-enabled consumer applications – face no specific Act obligations. One baseline requirement cuts across all categories regardless of risk level, however: providers and deployers must ensure adequate AI literacy among staff who work with AI systems. That obligation has been in force since February 2025 and applies universally.

What Italy Adds

Three areas carry immediate obligations under national law, independent of the August 2026 deadline.

In employment, employers must specifically inform workers when AI is used in processes affecting them, such as recruitment screening, performance monitoring, and disciplinary procedures. In healthcare, human clinical oversight is mandatory and patients must be informed of AI involvement in their care. In professional services, practitioners must disclose AI use to clients and ensure professional judgement prevails over AI output. Each of these obligations is already in force.

The most significant national addition is criminal and corporate liability. Italy’s law establishes fines of up to €1,549,000 and, in serious cases, disqualifying measures under Legislative Decree No. 231/2001, including suspension of licences and exclusion from public contracts for up to two years. Decree 231 liability attaches to the legal entity. For foreign businesses with Italian subsidiaries, an AI compliance failure is a corporate governance issue, not merely a regulatory fine.

An Incomplete Framework

Italy’s enforcement architecture is not yet fully in place. Implementing decrees due by October 2026 will confer formal sanctioning powers on AgID and the National Cybersecurity Agency and establish civil redress mechanisms for AI-related harm. The European Commission is also expected to publish guidance on Article 50 transparency obligations in the second quarter of 2026, directly relevant for clients operating customer-facing chatbots or content generation tools in the Italian market.

The Immediate Priorities

Our Tech team can assist clients across all four risk tiers. We can work with you to ensure no AI systems fall within the prohibited category; identify the relevant risk classification and perform the required conformity assessments before 2 August 2026; implement Article 50 transparency disclosures for limited-risk systems by the same date; verify that AI literacy obligations already in force are being met across the relevant workforce; review sector-specific obligations under Italian law that are already operative; and update Decree 231 organisational models to incorporate AI-specific compliance protocols before the implementing decrees activate enforcement powers later this year.


Our Legal Tech team would be pleased to assist you in assessing your obligations under both frameworks and ensuring your Italian operations are fully compliant ahead of the August 2026 deadline. Please do not hesitate to get in touch.

AUTHORS

Julia Holden

Partner
