Under Luxembourg law, employees are expected to discharge their functions with diligence and loyalty, ensuring they comply with all contractual and statutory obligations.
When employees work with Artificial Intelligence (“AI”) tools or systems, this diligence and loyalty obligation extends to properly handling data, respecting confidentiality requirements, and avoiding conduct that could expose their employer to liability.
In practice, an employee who uses AI systems for work-related tasks must exercise caution to prevent harm or loss to the employer or third parties. For example, entering sensitive data into AI systems without the employer's authorisation or without sufficient data security measures could breach confidentiality obligations and result in disciplinary sanctions, including dismissal.
Additionally, the use of generative AI tools raises specific concerns regarding intellectual property rights. Employers and employees should be aware that content generated by AI may infringe third-party copyrights and that the ownership of AI-generated works remains a legally uncertain area under Luxembourg and EU law.
What are the employers’ main responsibilities and best practices?
AI literacy - Article 4 of the EU AI Act:
Since 2 February 2025, employers deploying AI systems must ensure a sufficient level of AI literacy among the employees who use them, i.e. the skills, knowledge and understanding required to use those systems in an informed manner. Employers should therefore provide appropriate training, clear guidelines and policies to employees using AI tools, helping them understand the legal, technical and ethical limitations of AI use. For further information on AI literacy, please refer to the dedicated CMS article on ten key points of AI literacy.
From an employment law perspective, it is therefore key to set clear AI compliance policies outlining employee responsibilities, with disciplinary actions for violations. Employment contracts and/or internal regulations should specify permitted AI uses, confidentiality, intellectual property rights, and consequences for breaches. Indeed, in the absence of such policies or training, it will be difficult for the employer to prove the employee’s fault in case of a confidentiality breach. As a reminder, dismissals must be based on real and serious grounds and must be analysed on a case-by-case basis.
Moreover, the employer could also be held liable for damages resulting from the employee’s improper use of AI, particularly where no adequate training or guidelines were provided.
High-risk AI Systems - Article 6(2) and Annex III of the EU AI Act:
AI systems intended to be used for recruitment and employee management purposes are considered “High-risk”.
Employers wishing to deploy such high-risk AI systems must therefore comply with all the requirements of Article 26 of the EU AI Act, in particular implementing adequate security measures, ensuring strict human oversight and maintaining robust log retention policies.
Moreover, before putting into service or using a high-risk AI system at the workplace, employers must inform the employees' representatives and the affected employees that they will be subject to the use of the high-risk AI system. This information shall be provided to the employees' representatives or the joint works council (comité mixte d'entreprise) in accordance with Luxembourg labour law.
Prohibited AI practices – Article 5 of the EU AI Act:
The AI Act prohibits certain AI systems that present an "unacceptable risk". This applies inter alia to systems designed to analyse employees' emotions at work. Other examples of prohibited AI systems in the workplace include social scoring systems that evaluate employees based on their behaviour or personality traits, as well as real-time biometric surveillance systems used for monitoring purposes without a proper legal basis.
GDPR key considerations:
The use of AI systems in the workplace must also comply with the EU General Data Protection Regulation (“GDPR”), in particular where AI systems process employee or applicant personal data.
Employers must ensure that any processing has a valid legal basis, complies with the principles of purpose limitation, data minimisation and transparency, and does not result in excessive or disproportionate monitoring of employees. Particular caution is required where AI systems process sensitive data, generate profiles, or support automated decision‑making affecting employees.
Where AI systems are used for recruitment, performance evaluation, monitoring or other HR‑related purposes, employers should assess whether a Data Protection Impact Assessment (“DPIA”) is required, given the likely high risks to employees’ rights and freedoms.
Last but not least, employers should ensure that appropriate contractual safeguards are in place with AI tool providers (including data processing agreements), that access and retention rules are clearly defined, and that international data transfers are adequately handled.
The legal framework governing AI in the workplace is evolving rapidly and requires careful navigation at the intersection of employment law and data protection. Should you have questions or be considering the deployment of AI tools within your organisation, your dedicated CMS team would be pleased to support you with tailored and practical advice.