Singapore Issues World-First Model AI Governance Framework for Agentic AI at Davos
On 22 January 2026, Singapore’s Ministry of Digital Development and Information published the Model AI Governance Framework for Agentic AI (the “Framework”). Developed by the Infocomm Media Development Authority (“IMDA”), the Framework provides a structured overview of the risks of Agentic AI (“Agents”) and emerging best practices for managing those risks.
The Framework is the first to be published globally and serves as a best-practice guide. Given the rapid adoption of Agentic AI, it is useful for any organisation considering the deployment of Agents. By providing actionable steps organisations can take to minimise risk, IMDA hopes to build a trusted and safe Agent ecosystem.
Introduction to Agentic AI
Agents are AI systems that plan and act with some degree of independence over multiple steps (e.g. searching the web or creating files) to achieve a user-defined goal. What separates Agents from other AI systems is their ability to complete more complex tasks by planning steps, using tools to take actions and interact with other systems, and communicating with tools and other Agents through protocols.
These features allow the creation of agentic systems – systems where multiple Agents are set up to work together. In these systems, Agents can specialise in selected functions and work in parallel with each other, often improving performance.
The Framework proposes four key measures to manage risk.
Assess the Risks and Bind Agents to Individuals
Prior to deployment, organisations should identify the potential risks and impacts of Agent use and limit the scope of the Agent’s involvement accordingly. Limitations should be placed either on the capabilities of the Agent – i.e. the tools and systems available to it – or on its application. Organisations should consider factors – such as whether the Agent would have access to sensitive data or external systems – to determine if the use-case is suitable for Agent deployment.
In agentic systems, identity management is also important to track individual Agent behaviour and establish who holds accountability for each Agent. It is advised that in such systems, each Agent is tied to a supervising human user or department who authorises the Agent’s actions.
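The identity-management idea above – each Agent has a registered identity, a capability limit, and an accountable supervisor – can be sketched in code. This is a hypothetical illustration only; the names (`AgentIdentity`, `AgentRegistry`, `allowed_tools`) are our assumptions, not terms from the Framework:

```python
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str
    supervisor: str          # accountable human user or department
    allowed_tools: set[str]  # capability limits set before deployment

class AgentRegistry:
    """Tracks each Agent's identity and attributes every action to it."""

    def __init__(self):
        self._agents: dict[str, AgentIdentity] = {}
        self.audit_log: list[tuple[str, str, str]] = []

    def register(self, identity: AgentIdentity) -> None:
        self._agents[identity.agent_id] = identity

    def record_action(self, agent_id: str, tool: str, detail: str) -> str:
        # Unregistered Agents raise KeyError: no identity, no action.
        agent = self._agents[agent_id]
        if tool not in agent.allowed_tools:
            raise PermissionError(f"{agent_id} is not authorised to use {tool}")
        self.audit_log.append((agent_id, tool, detail))
        return agent.supervisor  # who is accountable for this action
```

An audit log keyed by agent identity is one simple way to make individual Agent behaviour in a multi-Agent system traceable back to a responsible person or team.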
Make Humans Meaningfully Accountable
The Framework stresses that humans should remain accountable for the behaviours and actions of Agents. To account for the increasing complexity of processes and systems, organisations should allocate clear responsibilities within and outside the organisation across the Agent’s value chain and lifecycle. This includes defining roles and responsibilities in internal AI governance policies, ensuring users are provided sufficient information to hold the organisation accountable, and clarifying the distribution of risk and obligations when working with external parties (such as model developers).
The design of the Agent deployment should also allow for human oversight; Agents should seek human approval at defined checkpoints and before taking specified significant actions. The level of human involvement required should scale with the complexity of the task. Automated systems that flag anomalous Agent behaviour can also be employed to assist in human oversight.
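A human-approval checkpoint of this kind can be sketched as a simple gate that escalates to a reviewer when a step crosses a risk threshold. The risk levels and the `approve` callback below are illustrative assumptions, not mechanisms prescribed by the Framework:

```python
# Illustrative risk tiers; a real deployment would define these per use-case.
LOW, MEDIUM, HIGH = 0, 1, 2

def execute_step(action: str, risk_level: int, approve) -> str:
    """Run one Agent step, pausing for human approval above a risk threshold.

    `approve` is a callback standing in for a real review queue: it receives
    the proposed action and returns True (approved) or False (rejected).
    """
    if risk_level >= MEDIUM:  # defined checkpoint: escalate to a human
        if not approve(action):
            return f"blocked: {action}"
    return f"executed: {action}"
```

The threshold makes the required level of human involvement scale with risk: low-risk steps proceed autonomously, while riskier steps wait for explicit sign-off.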
Implement Technical Controls and Processes
In addition to the software and LLM controls common in AI systems, further technical controls should be considered to account for the new agentic components – such as planning and reasoning, and tools – and the increased security concerns arising from the larger attack surface and new protocols. Organisations should conduct comprehensive testing both before and after deployment. Before deployment, Agents should be stress-tested across a wide range of scenarios and evaluated on overall task execution, policy compliance, tool calling, and robustness. During deployment, continuous monitoring and testing allow organisations to detect and resolve issues in real time, minimising risk and impact. Testing should continue throughout the lifecycle of the Agent. Organisations are also advised to introduce Agents into production gradually, to control risk exposure.
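The gradual-introduction advice above resembles a staged rollout: route only a fraction of traffic to the Agent, widen exposure while monitored error rates stay acceptable, and roll back when they do not. The fractions and the 2% threshold below are illustrative assumptions, not values from the Framework:

```python
import random

def should_use_agent(rollout_fraction: float, rng=random.random) -> bool:
    """Route only a fraction of requests to the Agent during early deployment."""
    return rng() < rollout_fraction

def next_rollout_fraction(current: float, error_rate: float,
                          max_error_rate: float = 0.02) -> float:
    """Adjust exposure based on monitored behaviour in production."""
    if error_rate > max_error_rate:
        return current / 2           # roll back on anomalous behaviour
    return min(current * 2, 1.0)     # otherwise double, capped at full traffic
```

Tying the rollout fraction to live monitoring data is one way to keep risk exposure bounded while still gathering the in-production evidence that pre-deployment testing cannot provide.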
Enable End-User Responsibility
Organisations should provide sufficient information to end-users to promote trust and enable responsible use. Transparency is especially important where Agents act on behalf of the organisation (e.g. customer service or sales Agents). In these cases, the limitations of the Agent and the responsibilities of the end-users must be communicated clearly. End-users should also have avenues to reach the human contact points responsible for the Agent where they are dissatisfied with its output.
Where Agents are used to assist the end-user as part of their workflow (e.g. coding assistants), further education and training should be provided. Users should be taught how to use the Agents responsibly and be familiar with the capabilities, common failure points, and potential risks and impacts of the Agent.
Conclusion
The adoption and use of emerging technologies, such as Agents, tends to develop quickly, with ever-evolving implementations and practices. Notably, the Framework is described as a “living document”, reflecting this evolving landscape. Nevertheless, in the face of risk and uncertainty, the Framework provides a good starting point for organisations looking to capitalise on Agentic AI and will play a critical role in helping them navigate the complexities of deploying Agentic AI systems.
This article was co-written by Leong Tzi An (Zaine).