RICS introduces mandatory AI standard for surveyors: what insurers and their clients need to know
The Royal Institution of Chartered Surveyors (RICS) has published its first global professional standard on the responsible use of artificial intelligence in surveying practice, effective from 9 March 2026. The standard imposes mandatory requirements on RICS members and regulated firms, with resultant considerations for professional indemnity insurers and brokers.
The new RICS professional standard applies globally to all RICS members and regulated firms. It addresses the growing integration of AI across valuation, construction, infrastructure and land services, setting out both mandatory requirements and recommended best practice. Overall, RICS has sought to take a balanced approach, recognising that AI offers significant potential for the profession whilst acknowledging that, if not managed appropriately, it carries high levels of professional and commercial risk.
Purpose and scope
RICS acknowledges that AI systems are being adopted across the built and natural environments, with some uses specific to areas such as valuation or construction, and others designed for more generic business purposes. The standard is supportive of their use, but recognises that if not managed appropriately, AI carries high levels of professional and commercial risk to individuals and firms. This is something we have seen across all professions, not just surveying. Accordingly, RICS has placed at the core of its requirements the importance of the skill and experience of the professional surveyor, alongside a need to guard against complacency when using this technology.
The standard sets requirements that provide a basis for upskilling the profession; establish a baseline of practice management aimed at minimising the risk of harm; enable informed decisions on AI procurement and reliance on outputs; support good communication with clients and stakeholders; and provide a framework for the responsible development of AI systems. It applies to AI system outputs that have a material impact[1] on the delivery of surveying services, meaning, in general, outputs capable of influencing how the service is delivered. If a member or firm determines that their use of AI will have a material impact on service delivery, they must record that determination and their reasoning in writing.
Key features of the new standard
Amongst the new features, the key elements with potential implications for professional negligence claims are:
Baseline knowledge requirements
Members who use AI systems to deliver surveying services must develop and maintain sufficient knowledge to support responsible use. As a minimum, this includes understanding the different types of AI systems and their limitations, the risk of erroneous outputs, the inherent risk of bias in AI systems, and data usage and associated risks.
System governance and risk registers
Before using an AI system that will have a material impact on service delivery, firms must carry out and record in writing an assessment of whether AI is the most appropriate tool, having considered available alternatives, environmental and stakeholder impact, data risks, and the risk of erroneous or biased outputs. Firms must also maintain a written register of AI systems used, including the purpose, date of first use, and the date for next review.
Professional judgement and accountability
Members must apply professional judgement to decide on the reliability of the output of any AI system used that will have a “material impact” on service delivery. This written decision must detail any relevant assumptions made, key areas of concern regarding reliability, whether anything could be done to lessen each concern, and a concluding statement on whether the output can reasonably be used for its intended purpose. The written decision must be prepared by, or under the supervision of, an appropriately qualified and named surveyor who accepts responsibility for its use.
Where AI is used to automate outputs or produce high volumes, firms must undertake randomised dip samples at regular intervals to scrutinise and assure the quality of outputs.
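As a purely illustrative sketch of what a randomised dip sample might look like in practice (the standard does not prescribe a sampling rate, method or tooling, and the function and parameters below are hypothetical):

```python
import random

def dip_sample(outputs, sample_rate=0.05, seed=None):
    """Select a random subset of AI-generated outputs for manual review.

    `outputs` is a list of output records; `sample_rate` is the fraction
    to review. Both are illustrative choices only: the RICS standard
    requires randomised dip sampling at regular intervals but does not
    prescribe a rate or mechanism.
    """
    rng = random.Random(seed)  # seeded for a reproducible audit trail
    k = max(1, round(len(outputs) * sample_rate))
    return rng.sample(outputs, k)

# Example: flag 5% of a batch of 200 automated valuations for scrutiny
batch = [f"valuation-{i}" for i in range(200)]
to_review = dip_sample(batch, sample_rate=0.05, seed=42)
print(len(to_review))  # 10 outputs selected for review
```

A fixed seed is used here only so that the selection can be reproduced and evidenced later; firms would need to decide for themselves what sampling rate, interval and record-keeping satisfy the standard.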
Transparency and client communication
Members must make clear to clients, in writing and in advance, when and for what purpose AI is to be used. Terms of engagement and service agreements must detail: when AI will be involved; the parts of the process in which AI will be involved; the extent of professional indemnity cover for AI use (if available); internal processes to contest AI use; processes for clients to seek redress if negatively affected; and how a client can opt out of the use of AI, if at all.
Looking ahead
The standard creates a detailed framework of mandatory requirements that will inform the assessment of professional negligence claims against surveyors. RICS has stated that in regulatory or disciplinary proceedings, it will take relevant professional standards into account when deciding whether a member or firm acted appropriately and with reasonable competence. It is also likely that during any legal proceedings a judge, adjudicator or equivalent will take RICS professional standards into account.
This standard, therefore, marks a significant step in the regulatory landscape for surveyors. It places the professional surveyor's skill and experience at the core of AI use, whilst requiring robust governance, transparency and accountability. It will provide clear benchmarks against which to assess conduct.
Surveyors and their firms should ensure that their policies, procedures and training are fully aligned with the standard's requirements. Insurers and brokers will want to consider how the standard affects risk assessment, policy coverage and claims investigation.
More broadly, as the profession gains experience operating under the new framework, market practice will evolve. We may see emerging norms around benchmarking reliability decisions, expectations for dip‑sampling in high‑volume workflows, and the role of technical specialists in advising on AI procurement. Future iterations of RICS guidance are also likely as the landscape continues to develop.
We will continue to monitor developments in this area and provide further updates as guidance and market practice evolve.
[1] “Whether an output has a material impact on the delivery of a service depends on whether the output is capable of influencing the delivery of the service and, if it is, the nature of the influence it exerts over the delivery of that service, based on the facts and circumstances of output use.” (RICS Professional Standard: Responsible use of artificial intelligence in surveying practice, p. 5)