Publication 26 Jan 2026 · United Kingdom

How AI uses recursion to conquer long-context prompts

2 min read


Recently, machine learning engineers have developed recursive language models (RLMs) – a new way of using large language models (LLMs) so they can complete long-context tasks. Long-context tasks are those where the prompt is extremely long, for example because it contains medical images, lengthy legal documents, database contents or large source code repositories.

How does recursion work?

Machine learning engineers from MIT’s Computer Science & Artificial Intelligence Laboratory (CSAIL) used recursion to break the task of processing a huge prompt into smaller tasks. Recursion is a technique in computer science whereby a process calls itself on smaller versions of a problem until each piece is simple enough to solve directly.
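As a minimal illustration of recursion, the sketch below splits a long string into chunks of at most `limit` characters by repeatedly halving it – the same divide-and-conquer idea an RLM applies to a long prompt. The function name and chunking rule are illustrative, not taken from the CSAIL work.

```python
# Minimal illustration of recursion: a function that calls itself
# on smaller inputs until the input is small enough to handle directly.

def split_into_chunks(text: str, limit: int) -> list[str]:
    # Base case: the text already fits within the limit,
    # so return it as a single chunk.
    if len(text) <= limit:
        return [text]
    # Recursive case: cut the text in half and apply the same
    # function to each half.
    mid = len(text) // 2
    return split_into_chunks(text[:mid], limit) + split_into_chunks(text[mid:], limit)
```

Calling `split_into_chunks("abcdefgh", 3)` halves the string twice, yielding `["ab", "cd", "ef", "gh"]` – every chunk fits the limit and, joined back together, the chunks reproduce the original text.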

A classic LLM is wrapped in code that enables the recursion. The wrapper code receives the prompt first, before the LLM, and breaks it into smaller pieces; the classic LLM together with this wrapper is called a recursive language model. The wrapper can use the classic LLM to peek into the huge prompt – without reading the whole prompt – and decide how best to break it into smaller pieces. It can also use the classic LLM to decide in what order to work on the pieces and how to assemble the results.
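The control flow of such a wrapper might look roughly like the following sketch. Here `query_llm` is a hypothetical placeholder for a call to a classic LLM (stubbed out so the example runs); in the CSAIL work the model itself has more freedom over how to split and recombine the prompt than this fixed halving strategy.

```python
# Hedged sketch of a recursive language model wrapper.
# `query_llm` is a hypothetical stand-in for a real LLM API call.

def query_llm(prompt: str) -> str:
    # Placeholder: a real implementation would send `prompt` to an LLM
    # and return its response.
    return f"summary({len(prompt)} chars)"

def recursive_answer(prompt: str, limit: int = 1000) -> str:
    # Base case: the prompt is short enough for a single LLM call.
    if len(prompt) <= limit:
        return query_llm(prompt)
    # Recursive case: break the prompt into halves, answer each piece
    # with the same function, then ask the LLM to combine the results.
    mid = len(prompt) // 2
    left = recursive_answer(prompt[:mid], limit)
    right = recursive_answer(prompt[mid:], limit)
    return query_llm("Combine these partial results:\n" + left + "\n" + right)
```

A short prompt is answered in one call, while a prompt longer than the limit is recursively decomposed and the partial answers are stitched together by a final combining call.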

What engineers discovered – and why it matters

The machine learning engineers observed that RLMs tend to use the same types of strategies to decompose prompts and to stitch partial outputs into a final output. These observations are striking because the strategies are determined by the classic LLM itself, rather than by a human or by explicit training.

RLMs are likely to be extremely useful in fields such as law, medical image analysis and others where prompt sizes are very large.