
Artificial Intelligence Act update: AI Literacy and Definition of AI Systems
10 March 2025
The first parts of the EU AI Act came into force on 2 February 2025. Organisations that use or develop artificial intelligence are required to ensure their employees are "AI Literate". The Dutch Data Protection Authority and the European Artificial Intelligence Office have published guidance on AI Literacy. In addition, the European Commission has published guidelines on defining AI Systems.
AI Literacy
AI Literacy Guidance by the Dutch DPA
The Dutch DPA has provided guidance by publishing a Multiannual Action Plan aimed at promoting AI literacy within organisations. It advocates a strategic, long-term approach, emphasising that there is no one-size-fits-all solution, as the context and area of deployment must be considered. AI literacy covers the technical, ethical, societal, and practical aspects of AI systems.
The required level of AI literacy for employees increases with the risk level of the AI system. The necessary knowledge, skills, and understanding depend on the employee's role, the deployment context, financial resources, and organisational possibilities. To implement the Multiannual Action Plan, organisations must establish management-level plans, allocate budgets, define organisational and ownership responsibility, and periodically assess progress on AI literacy. The Action Plan sets out the following steps:
- Map the AI systems within the organisation, including their associated risks, possibilities, and societal effects.
- Identify key AI personnel and their respective roles.
- Assess employees' baseline knowledge on technical, social, ethical and practical levels via surveys or interviews. These results will establish a benchmark for tracking AI literacy progress.
- Define AI literacy goals and priorities based on the risk levels of the systems in use.
- Customise the necessary knowledge and tools for each employee to ensure responsible AI system use:
  - Employees working directly with AI systems should have sufficient knowledge of the risks and functionality.
  - Other employees need general awareness of AI deployment and its purpose.
- Clearly outline the intended objectives and the allocated responsibilities to ensure alignment across the organisation.
- Develop and execute strategies and actions, such as awareness training on ethical, technical and legal aspects of AI systems, or offering specialised training for employees who actively work with AI systems.
- Appoint an AI officer to track AI literacy within the organisation.
- Periodically analyse reports to determine if the AI literacy targets are being met and evaluate the effectiveness of the training.
- Conduct evaluations of employees' AI literacy levels, feedback mechanisms and residual risks.
By following the structured approach outlined in the Multiannual Action Plan, organisations can enhance their employees' AI literacy in a tailored manner and ensure responsible use of AI systems. This strategic, long-term approach will also help organisations navigate the complexities of AI deployment.
AI Office - AI Literacy Practices
The European Union AI Office has published a living repository of AI Literacy Practices, which are intended as examples of how organisations can implement the AI Literacy requirement.
This repository outlines the AI literacy initiatives of various organisations, focusing on key areas such as the technical knowledge and required training, the context in which AI is used, and the impact and monitoring of AI literacy initiatives. It also addresses the challenges and issues the organisations face and their future plans to improve AI literacy.
The responses from organisations emphasise the diverse types of training they offer, ranging from awareness sessions to technical training. These programmes are tailored to the different uses of AI within the organisations and cater to the varying levels of AI knowledge among employees. Most of these organisations also provide basic training to establish a baseline of employee skills, usually linked to the business activities in which the AI tools are used. Challenges identified include the rapid pace of technological change and the integration of new tools. To assess the effectiveness of AI literacy initiatives, organisations employ key performance indicators and feedback mechanisms.
Although the repository does not include a one-size-fits-all template for implementing an AI literacy programme, the examples can serve as inspiration and a starting point for developing such programmes.
Guidance on the definition of AI Systems by the European Commission
The definition of AI systems is complex and encompasses a wide range of elements. The European Commission recently published guidance on how these various elements can be interpreted.
Article 3(1) AI Act defines AI systems as:
"'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;"
This definition covers both pre-deployment and post-deployment phases, and includes seven elements, which are not required to be present continuously throughout both phases. The elements are as follows:
1. Machine-based system
AI systems rely on both hardware and software components to function. These components are essential for functions such as model training, data processing, and decision-making. The term covers a wide range of computational systems, including quantum computing and biological systems, provided they offer computational capacity.
2. Varying autonomy
Varying autonomy means that AI systems can operate with some degree of independence from direct or indirect human involvement and can function without human intervention. Autonomy and inference (explained below) are closely linked, as the ability to generate outputs such as predictions and decisions is key to autonomy. Systems designed to work under complete manual control are excluded from this definition. AI systems capable of operating with limited or no human intervention require additional risk mitigation and human oversight measures.
3. Adaptiveness
According to Recital 12 AI Act, 'adaptiveness' refers to a system's self-learning capabilities, which enable it to change its behaviour while in use. As a result, the adapted system may produce different outputs from the same inputs compared to its previous behaviour. However, adaptiveness after deployment is not a mandatory condition for a system to qualify as an AI system.
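The notion of adaptiveness can be made concrete with a purely illustrative sketch: a toy spam filter that keeps learning from user feedback after deployment, so the same message is scored differently once the system has adapted. The scenario and all names below are hypothetical and are not drawn from the Commission's guidance.

```python
class AdaptiveSpamFilter:
    """Toy example of an adaptive system: it changes behaviour while in use."""

    def __init__(self) -> None:
        self.spam_counts: dict[str, int] = {}

    def score(self, message: str) -> int:
        # Score = how many times the message's words were reported as spam.
        return sum(self.spam_counts.get(w, 0) for w in message.lower().split())

    def report_spam(self, message: str) -> None:
        # User feedback updates the system *after deployment* (self-learning).
        for w in message.lower().split():
            self.spam_counts[w] = self.spam_counts.get(w, 0) + 1


f = AdaptiveSpamFilter()
msg = "win a free prize"
before = f.score(msg)               # 0: nothing learned yet
f.report_spam("free prize inside")  # the system adapts while in use
after = f.score(msg)                # 2: same input, different output after learning
```

This is precisely the Recital 12 behaviour: the same input ("win a free prize") yields a different output after deployment, without any developer changing the code.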
4. Explicit or implicit objectives
AI systems are designed with objectives – either explicitly stated or implicitly derived. Explicit objectives are stated goals, which are clearly encoded. Implicit objectives arise from training data or the system's interaction with its environment. The objectives are inherent to the AI system itself, relating to the goals of the tasks it is designed to perform and the outcomes it aims to achieve.
5. Input-based output generation
The fifth element is the system's capability to infer, from the input it receives, how to generate outputs. This capability distinguishes AI systems from simpler traditional software. Inference refers to producing outputs such as predictions and recommendations based on inputs and is a key characteristic of AI systems. Techniques enabling inference include machine learning approaches – supervised, unsupervised, self-supervised, reinforcement, and deep learning – as well as logic-based approaches, which rely on knowledge encoded by human experts. A system must be capable of inference to qualify as an AI system, although the techniques used may vary between the building and use phases. This capability allows the system to derive outputs autonomously and adaptively, supporting complex tasks and decision-making processes.
Systems that operate based solely on human-defined rules – such as mathematical optimisation systems, basic data processing systems, classic heuristics systems and simple prediction systems – fall outside of the scope of the AI Act. These systems lack the capability to autonomously analyse patterns or adjust outputs.
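The distinction between human-defined rules and inference can be illustrated with a deliberately simplified sketch. The fare scenario, names and numbers below are hypothetical and serve only to contrast a fixed, developer-encoded formula (outside the definition) with a model whose parameters are inferred from example data (the hallmark of an AI system).

```python
def rule_based_price(distance_km: float) -> float:
    """Classic human-defined rule: every parameter is fixed by a developer.
    Nothing is learned from data, so no inference takes place."""
    BASE_FARE = 2.50  # encoded by a human
    PER_KM = 1.20     # encoded by a human
    return BASE_FARE + PER_KM * distance_km


def learned_price_model(history: list[tuple[float, float]]):
    """Minimal supervised learning: fit price = a + b * distance by ordinary
    least squares. The parameters a and b are *inferred* from example trips,
    not encoded by a human expert."""
    n = len(history)
    xs = [d for d, _ in history]
    ys = [p for _, p in history]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in history) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    a = mean_y - b * mean_x
    return lambda distance_km: a + b * distance_km


# The rule-based function always applies the same fixed formula...
print(rule_based_price(10.0))

# ...while the learned model derives its own formula from past trips.
predict = learned_price_model([(1.0, 3.7), (5.0, 8.5), (10.0, 14.5)])
print(round(predict(10.0), 1))
```

In the first function the output-generating rule is fully specified by a human, which is the kind of simple prediction system the guidance places outside the AI Act's scope; in the second, the rule itself is derived from data.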
6. Produces predictions, content, recommendations or decisions
The capability to produce predictions, content, recommendations or decisions further distinguishes AI systems from traditional software. AI systems can generate four types of output: predictions, which estimate outcomes with minimal human involvement; content, which involves generating new material; recommendations, which suggest actions or decisions based on user data; and decisions, which are automated choices made without human intervention.
7. Influences environments
The last element covers 'active' systems, which influence both tangible objects (such as a self-driving car) and virtual environments (such as the car's integrated navigation system).
Please do not hesitate to contact Thomas de Weerd, Jurre Reus, or Lucy de Graaf if you would like more information on the AI Act's implications for your organisation.