AI shopping agents: How will UK Consumer Law apply?
AI shopping agents — which can find and even purchase products for users — are starting to reshape e-commerce. Big tech companies and retailers are experimenting with AI in this context, with different approaches being taken.
Walmart has partnered with OpenAI and Google Gemini, allowing customers to browse and buy Walmart items directly within ChatGPT and Gemini. Amazon does not allow this, but its customers can use "Buy for Me" to purchase products from third-party websites without ever leaving Amazon. These features are currently live for US customers only — but it's surely only a matter of time before UK consumers encounter them too.
And it's accelerating. With the rise of OpenClaw (and its partnership with OpenAI), we can see a future in which a truly personal agent interacts with other agents to complete an entire shopping journey autonomously on behalf of a user. No human intervention needed.
This raises important questions from a consumer protection perspective. Does UK consumer law apply to AI shopping agents? If so, how? And what do these laws mean for how AI shopping agents can operate in the UK?
The answers to these questions are uncertain and untested: while AI as a topic is firmly on the radar of UK regulators (with plenty of papers and studies having been published), there is no UK legislation that explicitly regulates AI, and no regulatory guidance addressing AI shopping agents. The position may also change in the future: given the UK's approach of vertical regulation of AI, perhaps the newly minted Digital Markets, Competition and Consumers Act and other consumer protection legislation will need to be updated sooner than expected.
What Role Is AI Playing in the Customer Journey?
Much like "AI" itself — often used as an all-encompassing tag for sophisticated technology — the term "agentic" covers a broad collection of different technologies and use cases.
There is no settled definition of an AI agent. Broadly, it's an AI system that can autonomously pursue goals over multiple steps, making decisions about what actions to take without needing human input at each stage. Some would say AI agents are systems that can observe, understand, plan, and do — with the emphasis firmly on that last element.
That "do" is where things get interesting. In the shopping context, we can usefully separate agents into two categories: those that assist with the shopping journey, and those that assist and then execute a transaction. The distinction matters.
Assist
The best examples here are the "answer engines" — ChatGPT, Gemini, and similar platforms. In their agent or deep research modes, they take the user's requirements (and potentially existing knowledge about the user) and search for products to suggest. The transaction itself is then carried out off-platform by the human, though we're starting to see instances where transactions can be completed within the platform.
The key point: the human is still executing the transaction.
Execute
This is where the agent actually completes the transaction on behalf of the user. In the summer of 2025, this was possible for some agent models in the UK, but guardrails have since been introduced to prevent it. With the rise of OpenClaw, execution may become the predominant future model, with users having their own personal agent that interacts with other agents (potentially "assist" agents, as mentioned above) to navigate and complete the entire shopping journey.
So what does UK consumer law have to say about all this?
What UK Consumer Protection Laws Apply?
UK consumer protection laws are largely principle-based, meaning they can be applied flexibly to any scenario where a business is involved in the sale or promotion of products to consumers.
Unfair Commercial Practices — The DMCC Act 2024
The UK Digital Markets, Competition and Consumers Act 2024 (DMCC Act) prohibits unfair commercial practices. The definition of "commercial practice" is broadly drafted — it includes any act or omission by a trader relating to the promotion or supply of a product to a consumer. That's wide enough to catch AI agent providers in relation to how they design and operate their agents.
How might an AI agent provider breach the unfair commercial practices rules? Consider these scenarios:
- Failing to provide required information. When an AI agent presents products to consumers with pricing information — say, "I found a CoffeeMaster 5000 espresso machine for £200 — shall I buy it for you?" — certain information must accompany it. This includes the main product characteristics, the total price, the seller's identity, and the existence of any withdrawal or cancellation right. If the agent doesn't surface all of this, the provider may be in breach of the DMCC Act, regardless of whether there is any consumer harm.
- Misleading information or omissions. If an AI agent hallucinates, misinterprets information from a third-party retailer, or gets the consumer's instructions wrong — and this causes the average consumer to take a transactional decision they wouldn't otherwise have taken (say, making or authorising a purchase based on inaccurate information or without being told that a third-party seller has paid the AI shopping agent to promote their product) — that would constitute a misleading commercial practice.
- Dark patterns. Harmful online choice architecture practices — also known as "dark patterns" — can breach the DMCC Act. For example, an AI agent provider could display a false countdown timer alongside a request for the consumer to authorise a purchase before the timer runs out, or false scarcity or popularity messaging to encourage a purchase. These features are commonplace in e-commerce today, and AI agent providers could use them too — especially if they are commercially motivated (for example, by taking a percentage cut of sales from retailers). The selling tactics we see today could even take a different form with AI agents. Could we be told in the future that "78,000 other people with your criteria have been shown this product, only 3 left in stock!"? Or, where an agent is executing the transaction and working with other agents, could there be practices that are invisible to consumers but designed to trick the consumers' agents?
AI shopping agent practices which breach the DMCC Act may also result in a poor customer experience and a loss of consumer trust, which may ultimately make or break a provider’s success.
Pre-Contractual Information — Consumer Contracts Regulations 2013
Also relevant are the UK Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013 (“CCRs”), which set requirements for distance contracts (including contracts entered into online). The CCRs require, among other things, that:
- Certain information (including the main characteristics of the product and the total price) is given to the consumer in a clear and prominent manner, directly before the consumer places their order.
- When placing the order, the consumer explicitly acknowledges that the order implies an obligation to pay, with any order button labelled in a way that communicates this (for example, "Buy now").
These requirements apply to the trader entering into the contract with the consumer (for example, the third-party retailer) and not directly to the AI agent provider. But there's a possible unfair commercial practices risk if an AI agent provider designs an experience that prevents compliance with these requirements, given that the third-party trader has no influence over the design of the AI agent.
AI agent providers who are assisting consumers in the purchase would be wise to surface the required information (as provided by the relevant third-party seller) to the consumer; ask the consumer to confirm they wish the agent to purchase the product; and make it clear that the consumer will be required to pay when the agent proceeds with the purchase (with a clearly labelled payment button, if the consumer must press a button to confirm the purchase).
The Really Interesting Question: What About Executing Agents?
Here's where it gets genuinely uncertain. Applying the CCRs arguably doesn't make sense where the consumer gives the agent free rein to enter into purchases based on high-level criteria only — for example: "Buy me a blue men's t-shirt in a size XL, from any store, for between £30 and £40. Deliver to my home address and use my American Express credit card" — without the consumer giving final approval of the specific product selected by the agent, or of other important details such as delivery speed. This may seem fanciful, but providing agents with a prepaid debit card for purchases is becoming widespread within the OpenClaw community.
This issue isn't unique to AI shopping. It could happen with human agents: for example, a consumer could entrust a personal assistant or family member with buying something online using the consumer's credit card. But AI could make this model mainstream rather than niche.
AI agent providers will naturally want to enable this kind of frictionless purchasing — and many consumers will want it too, at least for certain product types (such as lower-value, everyday, consumable items). The problem is that the current regulatory framework is designed around a human actively reviewing and placing an order. If it becomes commonplace for UK consumers to use AI shopping agents to execute transactions based on high-level instructions alone — without final approval of the specific product — how will regulators like the Competition and Markets Authority (CMA) respond? If an AI agent is acting as the consumer's personal agent, it could potentially be considered acceptable for the consumer not to receive all the information required under consumer law before purchase, provided that the relevant retailer or marketplace supplies that information to the agent.
What Happens When Purchases Through AI Shopping Agents Go Wrong?
In practice, one of the biggest risks when consumers shop using AI agents will be that the consumer receives something they wouldn't have ordered if the agent had provided correct and comprehensive information about the product. This risk will be mitigated by the fact that, in most cases, consumers will be able to return the item and get a refund. However, that does not remove the legal risk providers face if their agents provide incorrect or incomplete product information: regulators could still take enforcement action, and AI agent providers could potentially find themselves on the receiving end of damages claims from consumers.
Could an AI Agent Provider Just Exclude Liability to Consumers in Relation to Online Sales?
The contractual terms between an AI agent provider and a consumer governing the use of the AI agent will also need to comply with consumer protection law, including the unfair terms provisions of the Consumer Rights Act 2015. Under those provisions, a blanket exclusion of the provider's liability would be unfair and unenforceable. The provider would need to ensure that all other contractual terms (including any limitation on its liability) are fair (unless an exception applies) and transparent. AI agent providers therefore need to consider carefully their potential liability, and what mitigations could be adopted to prevent such liability from arising in the first place.
What Comes Next?
AI shopping agents promise consumers genuine convenience — a hassle-free way to get what they want, with the AI doing the heavy lifting. But regulators will challenge providers if consumer protection laws are breached.
In time, we anticipate legal developments in this area. These could take the form of specific regulatory guidance, enforcement action, the exercise of regulatory powers (for example, certain AI shopping agent providers could be designated as having strategic market status by the CMA and subjected to conduct requirements), and/or legislative changes. Any such developments would provide much-needed clarity about how UK consumer protection law applies to AI shopping agents.
For now, the space is wide open. AI providers will be focused on exploring, testing, and developing their shopping agents and establishing their foothold. Retailers will be focused on how AI agents can maximise sales. But when doing so, businesses would be well advised to build with existing consumer law in mind. Adherence now may yield long-term rewards: it will help reduce regulatory concerns and potentially head off restrictive new rules, guidance, and/or enforcement action later. As with many consumer trends, the ultimate winners will be those who gain or retain the trust of the consumer — and compliance with the law forms a key part of this.