Under the EU Artificial Intelligence (AI) Act, from 2 February 2025 all organisations must ensure that their staff are AI literate, whether they participate in the AI value chain as providers or as deployers (users) of AI systems.
Beyond the general provisions in Article 4 on AI literacy, the AI Act does not specify the measures organisations must take to develop AI literacy among their staff and others involved in operating or using AI systems on their behalf.
Practical compliance may be facilitated by using the guidelines issued by the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) in early February 2025. Given that the AI Act applies across the EU, these guidelines may serve as a useful resource for all organisations.
To support AI literacy, we have compiled a list of practical tasks based on the key provisions of the guidance.
- AI literacy involves acquiring the skills and knowledge necessary to use AI systems responsibly. Literacy enables organisations to harness AI's potential while minimising associated risks.
- Developing AI literacy is a long-term process, requiring management commitment, sufficient budgetary resources, clear organisational responsibility and continuous monitoring of progress.
- AI literacy development is also recommended when using algorithms that do not qualify as AI systems.
- The appropriate level of AI literacy depends on an organisation's circumstances and resources. The greater the risks posed by the AI systems used, the higher the level of AI literacy required of staff.
- Organisations should maintain an internal inventory of their AI systems, assessing their impact on data subjects and society. This should include associated risks, prioritisation of those risks, and relevant organisational policies, plans, and measures. Additionally, organisations should document which staff members use AI systems, how those systems are used, and which groups are affected by that use.
- Organisations should assess their staff’s current knowledge and determine what knowledge and tools are needed to achieve an appropriate level of AI literacy.
- Employees who do not directly use AI systems do not need in-depth knowledge of their functionality. For transparency, however, they should know which AI systems the organisation uses and the purpose of each system. Conversely, staff who directly operate AI systems should understand how these systems work and the risks involved. It is essential that employees grasp the social, ethical, legal, and practical aspects of AI usage, interpret AI-generated outputs correctly, and recognise potential biases or inaccuracies. HR professionals using AI for profiling should be well-versed in its risks and correct usage. Specialised training should be provided for staff involved in AI-related decision-making and procurement.
- Deployers of high-risk AI systems must ensure that those responsible for following the instructions for use and for carrying out human oversight possess the necessary expertise.
- The learning process should be continuously monitored, and responsibility for specific educational roles should be assigned, for example by appointing an AI officer and creating an AI literacy document for the organisation.
- Organisations should conduct regular assessments (e.g. internal surveys, periodic reports, and internal or external audits) to measure progress against set objectives. Based on these findings, organisations can define new goals and actions to enhance AI literacy, adapt to technological advancements, and mitigate risks.
For advice on navigating these regulations and to ensure your AI systems meet required standards, contact your CMS client partner or these CMS experts.