Entry into force of the first obligations under the AI Act
The AI Act came into force on 1 August 2024, and companies must comply with most of its obligations from 2 August 2026. However, the first obligations apply as early as 2 February 2025 and concern, on the one hand, promoting AI literacy and, on the other, banning certain AI practices.
Purpose and scope
With the AI Act, the European Union aims to create a uniform legal framework for, in particular, the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with the Union’s values. The Act seeks to promote the uptake of human-centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety and fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union.
The AI Act applies to providers that place AI systems on the market or put them into service, as well as to deployers who use these systems. If, as an employer, you use AI systems – for example, in the recruitment process – you will mainly have to take into account the obligations for deployers. In our previous newsflash we gave an overview of the most important guidelines and action points.
AI Literacy
The first obligation to come into effect from 2 February 2025 is to ensure a sufficient level of AI literacy. The goal is to ensure that all individuals involved in AI systems within the company have the necessary skills and knowledge to make informed decisions and use the AI systems responsibly.
The measures to ensure a sufficient level of AI literacy must be determined taking into account:
- the technical knowledge, experience, education and training of the persons concerned;
- the context in which the AI systems are to be used; and
- the persons or groups of persons on whom the AI systems are to be used.
The AI Act does not specify what measures an employer must take to achieve a ‘sufficient’ level of AI literacy. This makes it difficult to demonstrate compliance with this obligation, but it also offers the opportunity to determine for yourself what is ‘sufficient’ for your company and employees. It is, however, part of the tasks of the AI Office – a body established within the European Commission as the centre of AI expertise – to provide additional guidance on this matter.
In the meantime, companies that use AI systems are well advised to organise training courses on AI literacy. Implementing a detailed Responsible AI Use Policy also contributes to meeting the AI literacy obligation.
In addition, not all employees need to achieve the same level of AI literacy. It is not a ‘one-size-fits-all’ obligation but requires a tailor-made approach. Everyone who comes into contact with AI is expected to understand the basic principles and to be able to deal with AI (systems) responsibly and critically. Compliance with this obligation is also an ongoing and dynamic process.
Prohibited AI practices
From 2 February 2025, the AI Act prohibits a number of unacceptable practices in the field of AI. These are practices that are contrary to European fundamental norms and values, such as a violation of the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union.
For example, the following AI practices shall be prohibited:
- AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques. This includes systems that push people to make decisions they would not otherwise have made, which can lead to significant harm;
- AI systems that exploit the vulnerabilities of a natural person or a specific group of persons due to their age or disability, in order to materially distort their behaviour, which can result in significant harm;
- AI systems that evaluate or classify people based on their social behaviour or known, inferred or predicted personal or personality characteristics (‘social scoring’), and that lead to detrimental or unfavourable treatment;
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
- AI systems that infer the emotions of a natural person in the workplace or in education institutions, except where the AI system is intended to be put in place or placed on the market for medical or safety reasons;
- …
Companies that develop or use prohibited AI practices are subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover, whichever is higher. When fines are imposed on SMEs and start-ups, their interests and economic viability are taken into account, and a lower fine may be imposed.
AI policy
The AI Office should encourage and facilitate the drawing up of codes of practice, taking into account international approaches. In an AI policy, the employer can draw up clear guidelines for the use of AI within the company. The policy can specify which AI systems may be used, by whom, and to what extent AI systems may be used in the roles of certain employees. It can also set out how staff are to remain sufficiently AI-literate.
Action point
From 2 February 2025, the first obligations of the AI Act come into force. Therefore, map out which AI systems are used within your company. Classify these AI systems and stop using any that present an unacceptable risk.
Then map out the current level of AI literacy in the company and assess which additional measures are needed (e.g. training, internal regulations).
Our Data & Privacy Team can always assist you in drawing up an AI policy, organising training courses and with questions about HR, privacy and AI within your company.
We are pleased to announce our partnership with Umaniq, a pioneer in Data & AI Governance, Risk and Compliance. Together, we offer tailored training programmes to strengthen AI literacy within your organisation.
Curious to learn more? Find all the details via this link.