What are the data protection implications that should be factored into an AI recruitment project? Transparency and accountability are critical features of the General Data Protection Regulation (GDPR) and are likely to be highly relevant to AI. This article looks at one way in which artificial intelligence can be used by HR.
It is tempting to point out that the H in HR stands for human, and that this might therefore seem an area unsuitable for machines. Dig a little deeper, however, and you can see how automated decision-making has the potential not only to create efficiencies but also to enhance the recruitment process.
Automated decision-making has the potential to be objective and to remove unconscious bias in areas such as recruitment and shortlisting. In recent years, as knowledge and understanding of overt discriminatory practices have improved, the focus has shifted to eradicating unconscious bias in the minds of recruiters.
Put simply, unconscious bias is where we make decisions using mental shortcuts to identify with people who are like us or who share our values. Unconscious thoughts can lead to negative decisions by applying stereotypical views and attitudes that affect our understanding, actions and judgments without our realising it, for example assuming that a parent may not wish to travel for work and discarding their application on that basis.
AI models at the recruitment stage can be used to produce decisions free from such unconscious bias. For large employers looking to make efficiencies, this type of talent acquisition software can assist by scanning, reading and evaluating a large number of applications very quickly.
Like any emerging technology, AI can have unintended consequences. One of these is the scope for input bias to creep into an AI system. A recent UK Parliament Select Committee Report on Artificial Intelligence considered the possibility that the data fed into an AI system could itself be biased, as well as the scope for the algorithms to produce biased decisions. The report refers to AI used in the American criminal justice system to assess risk in sentencing, explaining that this system ‘commonly overestimated the recidivism risk of black defendants and underestimated that of white defendants.’ Employers therefore need to be alive to these issues, ask questions and carry out appropriate diligence before introducing AI into their operations.
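The mechanics of input bias can be illustrated with a deliberately simplified sketch. In the hypothetical example below (all data, feature names and the scoring approach are invented for illustration), a screening model is "trained" on past hiring outcomes in which candidates with a career break, a common proxy for parenthood, were hired less often. The model faithfully reproduces that historical pattern: biased data in, biased scores out.

```python
# Hypothetical illustration of input bias: a model fitted to biased
# historical hiring decisions simply learns and repeats the bias.
# All data and feature names here are invented for illustration.
from collections import defaultdict

# Historical outcomes as (feature, was_hired) pairs. Past recruiters
# hired candidates with a career break less often, irrespective of skill.
history = [
    ("career_break", False), ("career_break", False), ("career_break", True),
    ("no_break", True), ("no_break", True), ("no_break", False),
]

# "Training": estimate P(hired | feature) by simple frequency counts.
counts = defaultdict(lambda: [0, 0])  # feature -> [times hired, total seen]
for feature, hired in history:
    counts[feature][1] += 1
    if hired:
        counts[feature][0] += 1

def score(feature):
    """Return the model's hiring score: the historical hire rate."""
    hired, total = counts[feature]
    return hired / total

# Candidates with a career break are ranked lower purely because past
# recruiters treated them that way, not because of their ability.
print(score("career_break"))  # lower score, inherited from biased data
print(score("no_break"))      # higher score
```

A real recruitment model would be far more complex, but the failure mode is the same: unless the training data is examined and corrected, the system automates the very bias it was meant to remove.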
Decisions made using automated decision-making have a variety of data protection implications, which have been amplified by the introduction of the GDPR. Some of the key issues employers should consider before adopting AI in their processes are discussed below:
None of these steps should stop employers from embarking on the introduction of AI. In the new GDPR world we live in, we should all be baking privacy by design into workplace systems. Automated decision-making simply adds an additional GDPR layer to the mix.