Introducing AI: Select the right tech partners
Bandwidth: Enabling AI-driven success
There are now quite a lot of AI providers. But there are only a few providers of so-called foundation models, and much of the other AI on the market is, in effect, built on top of those models. So in certain respects, your choice of AI is still fairly limited.
In many ways it’s still an immature market. What’s more, whilst some of our clients are already at the sophisticated end, many don’t yet have the deep expertise needed to differentiate between the models on offer.
So a lot of the benchmarking you might do in a traditional procurement exercise just isn’t widely available. But that doesn’t mean there’s nothing you can do.
There are still levels of understanding you can arrive at, and – with suitable guidance and direction – pertinent questions you can ask. For example, we’ve helped clients adapt their procurement tools and governance to ensure they’re more fit for purpose for AI.
The good news is that, over time, the market will mature. AI skills and know-how will become more widespread, and people’s understanding of what they’re buying will get clearer and clearer.
We take a closer look at this and more AI topics in our Bandwidth AI series.
The positions that AI providers take in relation to certain terms and conditions in their contracts have really changed in the last few years.
A key change was when Microsoft announced in 2023 that it was extending its copyright commitment in relation to its AI service.
Broadly speaking, Microsoft said that customers using its AI service to generate output could use the material that was generated without worrying about copyright claims.
If customers were challenged on copyright grounds, Microsoft would assume responsibility for the potential legal risks involved.
Although there were some conditions attached, that commitment gave customers – and potential customers – a lot of comfort.
Increasingly, there’s more comfort available in other areas too. For example, we’ve seen changes around data privacy: providers have started to offer tiered pricing for their models.
If you pay a higher price for an enterprise version of an AI model, you usually get greater control over the use of your data, and the provider will not use your data for AI training or development.
Changes like these are very much a response to the market. Usually the issue is not whether risk exists, but how it is shared between the provider and customer in areas such as copyright, data privacy and confidentiality.
Providers are much more ready to discuss sharing risk, rather than adopting a “take it or leave it” position. And that has made it easier for many businesses to bring AI within their risk tolerance.
From the earliest stages of a project involving AI, you’re going to be thinking about issues like liability and intellectual property. But what happens when you get to the contract?
How do you ensure that your contractual agreement with an AI provider adequately addresses the issues you’ve identified?
Here at CMS we work with a range of clients to help them agree contracts with AI providers, and it’s fair to say that we see clients take different approaches to how they negotiate the contract terms.
Some businesses will look to update their contractual templates to address some of the key risks that arise when procuring AI technology. These provisions often look to cover both the issues that arise with solutions incorporating AI and how a provider may use AI in delivering services to the business.
On the other hand, we also see clients take a more bespoke approach to the procurement of AI technology – particularly where the provider is required to develop an AI solution for a particular use case. In these situations the contract tends to be much more tailored to the specific solution.
Regardless of the approach you take, one key question will usually be whether you are comfortable with the provider using your data to improve its solution. This raises important intellectual property, data protection and regulatory issues for most clients, so it needs to be carefully considered.
In addition, you need to consider how you plan to use (and potentially rely on) output from an AI system. If your provider is building a solution, you need to consider whether the contract is sufficiently clear about the provider’s responsibility for the development and training of the model, as well as for the output it creates. For established solutions, you may also wish to consider what contractual commitments the provider will give concerning the development and testing of the model and the output it may create.
In either case, the contractual provisions need to dovetail with your broader governance process for procuring technology. This includes ensuring that you have a process to assess the risk associated with the solution, as this should help you determine whether you can use your standard provisions or require something more bespoke. In some cases, it also makes sense for you to have a checklist of issues to consider when you are looking to acquire an AI solution.
Explore this topic further in our Bandwidth AI series. There’s much more to unpack here, so if you’d like to know how we can help you with this, feel free to reach out to me or any of my colleagues at CMS.