Yann LeCun in conversation with the All-Party Parliamentary Group on Artificial Intelligence (AI APPG)
On 26 January 2026, CMS representatives joined the AI APPG in conversation with Yann LeCun and other experts. Below, you will find a summary of the key insights from this thought-provoking session.
CMS continues to serve on the group’s Advisory Board. Please do reach out to your usual CMS contact with any questions or for additional insights.
The topic
The topic of the January 2026 AI APPG evidence meeting was “horizon scanning” of emerging AI frontiers, including agentic AI systems, artificial general intelligence (AGI)-oriented models, post-large language model (LLM) architectures and quantum–AI interfaces. Parliamentarians asked the evidence givers for their views, including on the question “Do you think the UK should have an AI bill?”.
The evidence givers
| Evidence giver | Role |
| --- | --- |
| Yann LeCun | One of the Godfathers of deep learning |
| Viv Kendon | Professor of Quantum Technology |
| Mihaela van der Schaar | Professor of Machine Learning |
| Bob De Caux | Chief AI Officer at an enterprise software organisation |
Bob De Caux
Bob described AI agents as software systems built around an LLM, in which multiple AI agents can work together.
In Bob’s view, a key development area for agentic AI is tool selection: industrial environments can have thousands of tools, and the agent has to select the right one using a protocol such as the Model Context Protocol (MCP). Bob suggested that AI agent capability has to be paired with tool governance, and spoke of the need for determinism by design to prevent an agent from guessing its way into causing harm. He also highlighted the auditability of agents, especially around hand-offs between agents and humans.
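For illustration only, here is a minimal Python sketch of what pairing tool selection with tool governance could look like. This is a toy under our own assumptions, not anything presented at the session, and every identifier in it is hypothetical:

```python
# Minimal sketch (not from the session): pairing an agent's tool selection
# with a governance allow-list. All names here are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Tool:
    name: str
    description: str
    risk_tier: str  # e.g. "low", "medium", "high"


# Hypothetical tool catalogue; in practice this could come from an
# MCP server's tool listing.
CATALOGUE = [
    Tool("read_inventory", "Read-only stock query", "low"),
    Tool("update_price", "Change a product price", "medium"),
    Tool("shutdown_line", "Stop a production line", "high"),
]

# Governance policy: risk tiers the agent may call without a human approver.
AUTONOMOUS_TIERS = {"low"}


def permitted_tools(catalogue: list[Tool]) -> list[Tool]:
    """Filter the catalogue before the agent ever sees it, so the agent
    cannot guess its way into a harmful call."""
    return [t for t in catalogue if t.risk_tier in AUTONOMOUS_TIERS]


def select_tool(task: str, tools: list[Tool]) -> Tool | None:
    """Toy deterministic selector (keyword match). A real agent would let
    the LLM choose, but only from the governed list."""
    for tool in tools:
        if task.lower() in tool.description.lower():
            return tool
    return None


if __name__ == "__main__":
    allowed = permitted_tools(CATALOGUE)
    print([t.name for t in allowed])      # ['read_inventory']
    print(select_tool("stock", allowed))  # the read-only tool
```

The design choice in the sketch mirrors Bob’s point: governance is applied before selection, so harmful tools are never candidates, rather than relying on the agent to decline them.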
Agents will improve fastest when tested in simulated environments, and they need identity and sponsorship, complete audit trails, change-control tools and data governance. Bob’s recommendations included: make accountability explicit, require incident reporting, and treat agent adoption as change management, with practical tools to help enterprises.
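Bob’s point about audit trails around hand-offs can be sketched just as simply. The record below is a hypothetical illustration of logging an agent-to-human hand-off with explicit identity and sponsorship; the field names are our assumptions, not any standard:

```python
# Hypothetical audit record for an agent-to-human hand-off. Field names
# are illustrative assumptions, not any standard.

import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class HandoffRecord:
    agent_id: str   # identity: which agent acted
    sponsor: str    # sponsorship: the accountable human or team
    action: str     # what the agent did or proposed
    handed_to: str  # the human (or agent) receiving control
    reason: str     # why control was handed over
    timestamp: str


def log_handoff(record: HandoffRecord, path: str = "agent_audit.jsonl") -> None:
    """Append-only JSON Lines log, so hand-offs leave a complete,
    reviewable trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_handoff(HandoffRecord(
    agent_id="pricing-agent-01",
    sponsor="ops-team@example.com",
    action="proposed a 12% price change",
    handed_to="human-reviewer",
    reason="change exceeds autonomous approval threshold",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```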
Bob was asked by parliamentarians about AI agents crossing organisational boundaries and how to regulate that.
Viv Kendon
The UK has a national quantum strategy whose first mission is a 10-year plan for quantum computing. The aim is that by 2035 there will be accessible quantum computers supporting applications well beyond classical capability across multiple sectors. Viv discussed how AI may use this future computing resource. In her view, quantum computing today is roughly where classical computing was in the 1980s, and quantum machine learning is still at the science stage. There is scant evidence that quantum provides a step-change advantage in machine learning, but there are some grounds to think we may be able to speed up training and achieve fewer errors.
Future quantum computers will not process large amounts of data, so compression algorithms will be important; at present we do not have long-term quantum data storage. If the mission is met, quantum computing becomes cost-effective for supercomputing and will provide accelerators for specific tasks, though not necessarily for AI tasks. Viv stressed the need to fund science and engineering so that the UK remains agile: it is not enough to buy turnkey solutions, because you need people who know how to use them.
Yann LeCun
AI technology is going to change over the next few years, so it is dangerous to extrapolate from today’s systems: future AI tools will be very different, accelerating science and medicine and automating manufacturing. At present, AI models manipulate and generate language but deal with the real world (sensor data, industrial processes) in a piecemeal way; there are ways to predict protein structures, for example, but no general approach. Future AI systems will understand the real world while having the same level of generality as LLMs. Most of our digital diet will be mediated by AI systems.
Much of the information we consume is already picked by AI; in future this will be even more the case, and AI tools will live in wearable devices. There is a need for diversity of AI systems for the same reason we need a high diversity of news sources, since they will all be biased. This will require open platforms that are open source and open weight, so anyone can fine-tune them. There is therefore a need to federate open-source AI platforms, as these will eventually become a repository of all human knowledge.
Parliamentarians asked Yann about giving access to training data in addition to making models open source.
Yann explained that it is difficult for a single company to access all the data it needs to train a model for all the languages of the world (e.g. the 29 languages of India). India is collecting this data itself, and each country could do the same. One possible future is one in which each region collects its own data and contributes to training a global model.
Parliamentarians asked Yann about privacy.
AI assistants will become your best friend and will know everything about you. Most will have to run in the cloud, so privacy rules apply, but these rules differ around the world; this will be an issue.
Yann was asked how to regulate AI.
AI should not be used to regulate AI. Wherever AI is deployed there should be some regulation, but do not regulate research and development: that stops innovators from exchanging data, kills open source and creates regulatory capture by the main players. Instead, create incentives for companies to open-source their platforms.
Mihaela van der Schaar
Mihaela spoke about how to use AI tools to improve NHS functions and make clinical trials smarter; in her view, the UK can realistically become world-leading in this area. Advances will come not from increases in model size but from agentic AI. The burden of decision-making in the NHS still resides with clinicians. Simulation models and digital twins, such as a digital twin of a patient, will be powerful tools for NHS efficiency.
Questions
Are we already interacting with agents without knowing an agent is there? Within organisations, that is happening a lot.
Agents can talk to each other, and humans may not know they are talking to agents; there is an opportunity for agents to derail each other. How do we audit them?
Yann – We are still very far from systems that can match human intelligence; we are fooled into thinking they are intelligent because they manipulate language. But language is simple, whereas the real world is complex: language is sequences of discrete symbols, while the real world is messy sensor data. We do not need to worry about superintelligence now; it is like worrying about turbojets in 1915.
An MP asked how parliamentarians can avoid being hoodwinked and how to protect the human race: could AI agents design communication protocols that humans cannot understand? Viv explained that this is not new and may be dealt with by having standards for communication protocols. Mihaela pointed out that agents may be safer when they come from different companies.
Does the UK need an AI bill?
Yann – if the bill is to accelerate progress, such as on infrastructure, then yes. If it is to regulate the technology itself, then it would be harmful.
Bob – no; it would be better to have a sector-based approach.
Viv – too early, because we need to explore how to set standards and carry out verification; the National Physical Laboratory is looking into the question of regulation for quantum. We want a better feel for the different pieces that need regulation.
Mihaela – regulators need to fully understand the technology; do not copy the USA and do not copy the EU AI Act, so that we enable the right capabilities to emerge.
A member of the House of Lords commented that there have been two debates in the Lords in the last three weeks about superintelligence.
Yann – it will be difficult to predict the behaviour of multi-agent systems, and we are far from them. He suggested that future AI systems will not be the LLMs we know today, nor agentic AI models, as those are too difficult to control. He expects a lot of change over the next few years.