AI and…risk
Bandwidth: Enabling AI-driven success
As AI becomes more deeply embedded in businesses, the opportunities for innovation and growth are immense. But so too are the cybersecurity risks that come with it.
For many of our clients, AI is shifting rapidly from being an experimental tool to a business-critical function – powering internal operations, customer interfaces and data-driven services. And that means the resilience of AI systems is becoming vital.
In the past, if an AI tool failed, it may have caused inconvenience. But as AI increasingly runs key systems — saving time, reducing costs, and enabling scale — downtime becomes far more disruptive.
That’s when cyber risk starts overlapping with disaster recovery planning. If your AI systems are outsourced, managing risk becomes a contractual issue. But the fundamental concern is the same: when your system isn’t working, you can’t easily switch to another.
And if automation has led to a leaner workforce, you may not have a pool of trained staff to aid you with recovery. There can also be tensions between innovation and assurance.
In our experience, some cutting-edge AI providers are brilliant at the technology, but may lack the mature security infrastructure that companies want for their business-critical systems.
So I think we’re likely to see some new partnerships – with AI innovators on one side and enterprise-grade security specialists on the other.
At CMS, we’re supporting our clients as they factor this rapidly changing landscape into their risk management strategies.
From negotiating robust contractual protections to enabling resilient AI deployment, we’re providing advice and assistance every step of the way – always with the aim of helping our clients harness the power of AI responsibly and confidently.
If you’d like to talk about how we can help you, feel free to reach out to me or any of my colleagues.
AI is growing at astonishing speed – and so is the number of standards around the world that relate to the way we use it and develop it.
The UK’s AI Standards Hub now lists over 500 standards that have been published or are being developed by various standards bodies.
Some of them relate to specific areas like transport or health. But others, like the NIST AI Risk Management Framework from the US, have much more general application.
Any business aiming to make the most of the incredible opportunities of AI needs a framework in which to operate.
And for a lot of businesses, working with established standards is the most practical and pragmatic way to put such a framework in place.
The International Organization for Standardization, or ISO as it is commonly known, has published several standards relating to AI, including ISO 42001 – the world’s first management system standard for AI.
But it’s another – ISO 23894 – that’s most directly relevant to businesses seeking to create effective risk management frameworks.
ISO 23894 builds on ISO’s existing risk management standard (ISO 31000) to provide guidance on how businesses that develop or use AI can manage AI-specific risks.
ISO 23894 also helps businesses integrate risk management into their AI-related activities and business functions – covering topics from the purely technical to wider issues like stakeholder impacts and cultural change.
As AI systems can introduce new or emerging risks for an organisation, or change the likelihood of existing risks, ISO 23894 helps businesses get to grips with such issues.
Standards like ISO 23894 and ISO 42001 can also streamline contract negotiations between customers and vendors in AI procurement.
By providing a common language and a shared set of expectations, they help to clarify the requirements for both parties, reducing the chances of misunderstandings and disputes.
And using standards as compliance requirements within contracts can ensure that a vendor’s AI solution meets a purchaser’s predefined criteria – helping to manage many of the risks of AI procurement.
We’ve guided many clients through the process of adopting and using AI standards – as well as ensuring their frameworks reflect the evolving regulatory situation in the UK, Europe and elsewhere.
And as someone who’s been the co-convenor of an ISO Working Group developing a risk management standard, I know first-hand how these standards are crafted to function in a wide variety of jurisdictions and situations, and to reflect diverse cultural perspectives. They’re genuinely international.
So if you’d like to know more about AI standards and the practicalities of working with them, please do give me or one of my colleagues a call.
AI models rely on vast volumes of information, and developers may be under pressure to gather this quickly. However, indiscriminate data collection is a risky activity. Besides potentially breaching laws relating to privacy, IP and equality, it can introduce flaws into the training data that stop the AI system functioning properly.
Imbalanced training data, which is not representative of the relevant population, can result in statistical bias, meaning that the technology may make unfair decisions about certain groups. Societal bias can also arise where prejudices are baked into the training data. These may be reproduced, and even amplified, by the AI model, and can result in discriminatory outcomes.
Discrimination can even be a product of the way that the AI system is designed and implemented, for example if bias influences the way that the training data is labelled, if designers, developers and stakeholders hold certain prejudices, or if the system, once in final form, is presented in a way that is not accessible to all users.
Awareness of the risks is the first step. Linked with this is making users aware that AI output should be checked before it is used to make important decisions.
We have all heard stories about AI malfunctioning, and in reality, despite best efforts, the technology may not be perfect. In practice, where issues arise, there is often a fix, but it is always better to find out at the earliest possible stage. In that respect, effective testing, oversight and monitoring are key.
There’s more to discuss here, so if you would like to find out more, feel free to reach out to me or the team at CMS.
Where you have people engaged in checking and validating the outputs of your AI, you need to ensure that they can cope with the pipeline and don’t become bottlenecks or fail to perform properly. If AI guardrails are too restrictive, they can clog up the whole system.
And if that happens, you’ll probably not realise the transformative potential of deploying AI, and you may also see business units that have begun to use AI revert to their previous ways of working.
We have helped clients to develop guardrails that balance resource availability; regulatory, legal and other parameters; and organisational risk appetite.
Our clients also find that this is an evolution – guardrails that are appropriate today may not be fit for purpose tomorrow, as the AI develops, new tools become available, the use case changes, or the regulatory landscape shifts.
Continuous learning and adaptation are therefore necessary to make sure that your guardrails remain effective. And if your business units’ use of AI is going to be governed by checks and balances, whether in the short term or the longer term, your teams are more likely to accept that if the checks are seen as enabling the adoption and use of AI in a way that is safe and aligned with the organisation’s risk profile.
Learn more about this and other AI topics in our Bandwidth AI series.
Protecting the information in your organisation is a fundamental issue for most businesses, requiring vigilance over access controls to ensure that information is only available to those who need it.
The introduction of AI technology in your business brings this issue to the fore, given AI's power to locate, summarise, and present information in ways that can make it easier for individuals to work around your controls.
For example, you wouldn’t want people outside, say, the HR team, to access data related to your employees and use AI to summarise what their colleagues are getting paid. You also wouldn’t want someone to use AI to locate and summarise your trade secrets so that they could sell them to a competitor.
The key point here is to apply the same level of diligence to any AI technology you deploy as you would for the introduction of any technology in your business.
For more on this topic, take a closer look at our Bandwidth AI series.
To understand how we can support you with this, feel free to reach out to any of our experts here at CMS.