AI in recruitment: key considerations for employers in Bulgaria and EU
Artificial intelligence is increasingly shaping recruitment, transforming how organisations assess and select candidates. AI tools offer a data-driven, efficient approach to finding talent, promising to streamline the hiring process, reduce human bias, and help companies identify suitable candidates quickly and accurately.
At the same time, the growing use of AI in recruitment raises challenges around algorithmic bias, transparency, and the protection of personal data, all of which demand careful consideration and underline the importance of regulatory compliance. With the EU AI Act (Regulation (EU) 2024/1689) setting stricter standards, organisations in Bulgaria and the EU must balance the benefits of technological advancement with fairness, transparency, and respect for candidate rights.
As AI becomes integral to various business processes, including recruitment, organisations in Bulgaria and the EU must navigate a complex regulatory landscape. Two key regulations apply to the use of AI systems in hiring: the AI Act (Regulation (EU) 2024/1689) and the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679).
AI systems intended for recruiting or selecting candidates are classified as “high-risk” under the AI Act. Their use must therefore meet strict compliance requirements designed to safeguard fairness, ensure transparency, and protect fundamental rights.
Who is responsible?
Under the AI Act, organisations using AI systems (e.g. employers and recruitment agencies) are generally considered “deployers”. They may, however, be reclassified as “providers” and subject to more stringent obligations if they:
- put their name or trade mark on a high-risk AI system that has already been placed on the market or put into service (unless a contract clearly assigns these responsibilities elsewhere);
- make a substantial modification to the system; or
- change the intended purpose of the AI system.
Key compliance obligations for deployers
Deployers of high-risk AI systems for recruitment must comply with a set of obligations, which align with the GDPR’s requirements for automated decision-making and profiling:
1. Transparency and notification obligations:
- inform candidates and employees that they are subject to a high-risk AI system before deployment;
- provide clear information about the system’s purpose, capabilities, and limitations;
- ensure compliance with GDPR transparency requirements, including the rights to information and access, and the right to contest automated decisions.
2. Human oversight obligations:
- assign trained personnel with sufficient authority to oversee the AI system;
- ensure they can interpret outputs, and intervene or suspend the system if necessary;
- ensure oversight is active and informed, rather than formalistic.
3. Data quality and bias mitigation obligations:
- ensure input data is relevant and sufficiently representative, and mitigate bias at the deployment stage, which may necessitate internal audits or validation procedures;
- conducting bias audits is advisable to ensure the data does not lead to discriminatory outcomes;
- align with GDPR principles of data minimisation and accuracy.
4. Technical and organisational measures obligations:
- use the system only in accordance with the provider’s instructions (a breach could result in the deployer being reclassified as a provider);
- implement safeguards to prevent misuse or unintended consequences;
- suspend operation if the system poses a risk or malfunctions, and inform the provider, distributor and relevant market surveillance authority.
5. Recordkeeping and monitoring obligations:
- maintain logs of system operations for at least six months (if logs are under the deployer’s control);
- monitor performance continuously to detect anomalies or risks and inform providers;
- cooperate with market surveillance authorities and provide documentation upon request.
6. Impact assessment obligations:
- before first use, perform a fundamental rights impact assessment, which may form part of the data protection impact assessment. The deployer may rely on a previously conducted fundamental rights impact assessment or an existing impact assessment carried out by the provider.
7. AI literacy and training obligations:
- take measures to ensure a sufficient level of AI literacy among staff and other persons involved in the operation and use of AI systems; all staff involved in AI operations must receive adequate training.
Respecting candidate rights under GDPR
Deployers must uphold candidates’ rights under the GDPR, particularly those in Article 22, which protects individuals against being subject to decisions based solely on automated processing. Where such processing is permitted, candidates must be guaranteed:
- the right to human intervention;
- the right to express their point of view; and
- the right to contest a decision.
These safeguards are especially important in recruitment, where AI-driven decisions can significantly affect individuals’ careers and livelihoods.
High-risk AI systems have the potential to transform recruitment by streamlining processes and supporting better decisions. Yet this potential can only be realised if organisations apply AI technology responsibly in line with the GDPR and AI Act. Compliance not only protects candidate rights and reduces legal risks, but also builds trust in AI-powered hiring.
For more information on the use of AI in recruitment, contact your CMS client partner or these CMS experts: Eva Petrova, Senior Associate and Anna Tanova, Counsel.