Regulation (EU) 2024/1689 (AI Act) is the first comprehensive legislative framework worldwide governing artificial intelligence, imposing strict obligations on entities that place AI systems on the market, put them into service or use them.
The AI Act entered into force on 1 August 2024 and will apply in full from 2 August 2026, although some provisions already apply (in part since 2 February 2025 and in part since 2 August 2025), while others will apply only from 2 August 2027.
Its objectives, outlined in Recital 1, are ambitious: to establish a uniform legal framework fostering human-centric and trustworthy AI, ensuring the protection of fundamental rights and the free cross-border circulation of AI-based goods and services, while preventing Member States from introducing divergent national restrictions.
The scope of the AI Act is very broad: obligations are imposed not only on providers of AI systems, but also on deployers (professional users), authorised representatives, importers and distributors. Nor is its application limited to entities established within the EU: it also extends to non-EU entities that place AI systems on the EU market or whose AI systems produce outputs used within the EU.
Under the AI Act, an AI system is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
The definition of artificial intelligence provided by the AI Act is intentionally very broad and includes all rapidly evolving technologies which, based on the data they receive and with varying levels of autonomy and adaptability, learn to generate outputs.
The approach adopted by the European legislator is a risk-based approach, meaning that the type and content of regulatory requirements are calibrated according to the level of risk that such systems may generate.
AI systems are therefore classified according to their level of risk (i.e. the probability of harm occurring and the severity of that harm) into four categories: unacceptable risk, high risk, limited risk and minimal risk.
The first category comprises unacceptable-risk systems, which are prohibited because of their impact on fundamental rights and individual freedoms and their resulting incompatibility with EU values. These include, for example, systems that exploit the vulnerabilities of individuals or groups, social scoring systems, 'real-time' remote biometric identification systems in publicly accessible spaces and biometric categorisation systems inferring sensitive attributes. The ban on the use of such systems has been in force since 2 February 2025.
The second category includes high-risk systems, which may produce adverse effects on safety and fundamental rights and which constitute the core of the regulatory framework.
High-risk systems are divided into two types: AI systems intended to be used as safety components of products (or which are themselves products) covered by the EU harmonisation legislation listed in Annex I and subject to third-party conformity assessment; and AI systems falling within the areas listed in Annex III, such as biometrics, education and vocational training, employment, access to essential services, law enforcement and the administration of justice.
For high-risk systems, the AI Act provides for obligations not only on the provider, but also on the deployer, i.e. the professional user. The deployer must, for example, ensure appropriate human oversight, monitor the functioning of the system, record events and incidents, notify serious incidents, and provide information to end users.
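Purely by way of illustration – the AI Act does not prescribe any particular technical implementation, and all names in the sketch below are hypothetical – a deployer's event-recording and serious-incident duties could be supported by something as simple as this:

```python
import datetime
import json

# Hypothetical sketch of a deployer-side event log supporting the
# record-keeping and serious-incident duties described above.
# Class and method names are illustrative, not prescribed by the AI Act.

class AISystemEventLog:
    def __init__(self, system_name: str):
        self.system_name = system_name
        self.events: list[dict] = []

    def record(self, description: str, serious: bool = False) -> None:
        """Record an operational event; escalate serious incidents."""
        event = {
            "system": self.system_name,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "description": description,
            "serious_incident": serious,
        }
        self.events.append(event)
        if serious:
            self._escalate(event)

    def _escalate(self, event: dict) -> None:
        # Placeholder: in practice this would feed the internal process
        # leading to notification of the provider and the competent
        # market surveillance authority.
        print("SERIOUS INCIDENT - escalate:", json.dumps(event))


log = AISystemEventLog("customer-service-chatbot")
log.record("Chatbot gave an off-policy product recommendation")
log.record("Output contributed to harm to a customer", serious=True)
```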
The third category concerns limited-risk systems, such as chatbots, virtual assistants and certain content generation systems. These systems do not present high risks but still involve more than minimal risks because they can influence user decisions and therefore require specific transparency obligations, to ensure that users are aware they are interacting with an AI system.
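To make the transparency point concrete, here is a minimal, hypothetical sketch of how such a disclosure might be wired into a chatbot's first reply; the function names are assumptions for illustration, not an established API:

```python
# Minimal sketch of the transparency obligation for a limited-risk chatbot:
# make the AI disclosure part of the first reply. generate_answer is a
# hypothetical stand-in for the actual model call.

AI_DISCLOSURE = "Please note: you are chatting with an AI-powered virtual assistant."

def generate_answer(user_message: str) -> str:
    # Stand-in for the real model call.
    return f"Thanks for your message about '{user_message}'. How can I help?"

def first_reply(user_message: str) -> str:
    return f"{AI_DISCLOSURE}\n{generate_answer(user_message)}"

print(first_reply("your returns policy"))
```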
The last category concerns minimal-risk systems, for which no specific obligations are provided.
Alongside these, the AI Act also regulates general-purpose AI models, including generative AI systems, which can be used for a variety of tasks, either directly or through integration into other systems, and which may pose a systemic risk, namely a specific risk stemming from their high-impact capabilities and broad reach.
Most AI systems currently used by fashion and luxury companies fall within the minimal-risk category, for which no specific obligations apply, or the limited-risk category, for which only transparency obligations apply – such as chatbots and virtual assistants for customer service, and visual or textual content generation software for marketing and communication campaigns. However, in some cases they may fall within the high-risk category – for example, when customers' biometric data are used to develop tailor-made models.
To ensure compliance with this regulatory framework, the first step is to map all AI systems in use. Once it has been verified that they fall within the AI Act's definition of an AI system and are therefore in scope, each system should be classified within one of the risk categories outlined above, and the appropriate compliance measures then defined.
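As a purely illustrative sketch of such a mapping exercise – the four risk tiers come from the AI Act, but the example systems and the tiers suggested for them are assumptions requiring case-by-case legal assessment – an inventory might start from a structure like this:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative only: the four risk tiers come from the AI Act, but the
# example systems and their suggested tiers are assumptions that would
# require case-by-case legal assessment.

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited (e.g. social scoring)
    HIGH = "high"                   # Annex I / Annex III systems
    LIMITED = "limited"             # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"             # no specific obligations

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    tier: RiskTier

# Step 1: map the systems in use; Step 2: classify each one.
inventory = [
    AISystemRecord("customer-chatbot", "customer service", RiskTier.LIMITED),
    AISystemRecord("campaign-image-gen", "marketing visuals", RiskTier.MINIMAL),
    AISystemRecord("biometric-fit-scanner", "made-to-measure sizing", RiskTier.HIGH),
]

# Step 3: derive the compliance measures for each tier.
for record in inventory:
    if record.tier is RiskTier.HIGH:
        print(f"{record.name}: full high-risk obligations (oversight, logging, ...)")
    elif record.tier is RiskTier.LIMITED:
        print(f"{record.name}: transparency obligations (disclose AI interaction)")
    else:
        print(f"{record.name}: no specific AI Act obligations")
```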