AI and...HR
Bandwidth: Enabling AI-driven success
AI’s not going to take over the world – or even your business – in the short term. But in many businesses it’s already doing repetitive, data-driven tasks, as well as some larger, complex ones.
And it’s already shown that in some cases it can achieve better results than humans.
So people are thinking strategically about what this new capacity and opportunity means for the shape of the business.
It’s clear that many roles will have to adapt and change. And that will be a challenge for HR functions.
Workforce planning now involves asking: which jobs are going to evolve, which jobs do we need to create, and what skills are needed for them – and can we get people across into those?
We’re already seeing businesses looking at underlying skills rather than job experience, and searching for people with the capacity and willingness to engage and adapt.
Extensive training for digital skills will be needed, as will the ability to match skills with particular roles.
And we’re seeing businesses modify their long-term recruitment pipelines – by reevaluating how they engage with education providers and redesigning their graduate programmes.
You can use AI to assess your workforce, and evaluate what someone is suited to, how best to develop them and what their progress in the organisation might be.
Used this way, AI starts to facilitate career planning, performance management and people management.
But it also starts to be controversial if people think it’s deciding their future.
Leaving aside how AI involvement suits the ethics or culture of a particular organisation, most businesses will prefer to be able to show that decisions are based on real human understanding.
Certainly, if you’re in an employment tribunal where someone is unhappy with an outcome that’s been reached or a decision that’s been imposed on them, you’ll want to show that the company has maintained a personal relationship with them, and that you can demonstrate and justify what’s happening.
The law gives a pretty high level of protection to employees, and any sort of decision-making without a human element is potentially problematic.
So it’s important to understand the difference between what can be done – what’s technically possible – and what should be done in light of the regulatory framework. A simple ‘computer says no’ is unlikely to go down well at a tribunal.
AI technology is increasingly being harnessed by employers in the context of biometric recognition systems, including to enhance security, monitor staff attendance and manage work schedules.
Whilst there are clear benefits to be had, these systems have been subject to significant regulatory scrutiny. In part, this is because biometric data, such as fingerprints and facial recognition data, is classified as special category personal data when used in this context.
Accordingly, the use of this technology is governed by enhanced privacy rules and requirements, and the associated processing of personal data is regarded as highly intrusive, by default.
This doesn’t mean that biometric recognition systems cannot, or should not, be used but it does reinforce the need to be fair and proportionate when deciding how, and the extent to which, they should be rolled out within an organisation.
It is also critical for businesses to ensure (in advance) that their handling of this data is lawful under the GDPR, and to recognise, document and mitigate the risks. In some cases, obtaining prior consent from employees may be advisable, and an alternative method of recognition may need to be offered.
Staff engagement is essential to meet privacy requirements in relation to transparency, but also to establish trust and encourage uptake. If staff are consulted before roll-out, employers can address concerns as they arise. In practice, this can help to mitigate the risk of claims and enforcement action further down the line.
To explore the impact of AI and biometrics in more detail, do reach out to me and the team here at CMS – we’d be happy to help.
Businesses are rapidly incorporating AI into many of their key operations.
But in some areas, we’re already seeing complaints that people have been treated unfairly because of AI.
You might think that using AI in decision-making would help reduce bias. But AI is only as good as what it learns from and how it’s trained.
If it learns from biased or inaccurate sources, its output may reflect that.
There are thorny issues to grapple with in terms of automated decision-making from a data protection perspective – and in terms of non-discriminatory decision-making from an Equality Act perspective.
These could be decisions about recruitment and retention, customer journeys or any form of application or verification procedure.
There will certainly be a risk of discrimination claims if your implementation of AI has unduly negative effects on any particular group with protected characteristics.
At CMS, we’re helping clients integrate AI into their policies and procedures to maximise the benefit of new technology while managing the regulatory and litigation risk.
If you’d like to find out more, please do get in touch with us.