Posted by Karen Garthwaite, head of professional development at Legal Futures Associate ILFM

Garthwaite: Firms need to consider how AI use affects their compliance obligations
It was a significant moment for the legal landscape last month when the Solicitors Regulation Authority (SRA) authorised Garfield.law, the UK’s first AI-driven law firm.
This “landmark moment”, as SRA chief executive Paul Philip described it, brings exciting opportunities but also introduces complex compliance considerations for traditional practices.
Garfield.law specialises in small-claims debt recovery using an enterprise-grade large language model that can reportedly “handle the entirety of a small-claim track debt claim”.
The SRA’s authorisation came with specific conditions: the system must not propose case law (a precaution against AI “hallucinations”) and cannot operate autonomously, requiring client approval at each step.
This regulatory approach provides important insights into how the SRA may handle future AI implementation across the legal sector.
Compliance implications for traditional firms
Traditional practices must now evaluate how AI integration affects their compliance obligations.
Firms should proactively update their risk assessment frameworks to identify, measure and mitigate AI-specific risks, particularly around data protection, confidentiality and maintaining appropriate human oversight.
The SRA’s close scrutiny of Garfield.law suggests heightened attention to AI systems in legal practice. Compliance officers should anticipate more detailed questions during regulatory inspections about any AI tools deployed, their role in service delivery, and safeguards implemented to prevent misconduct or errors.
Rule 3.2 of the SRA Code of Conduct for Solicitors, RELs and RFLs requires solicitors to ensure that the service they provide is competent and delivered in a timely manner. With AI becoming integrated into legal practice, this now extends to understanding AI capabilities and limitations.
Compliance teams should consider how to demonstrate appropriate technical competence when deploying these technologies.
Financial implications
There are also significant financial considerations regarding both implementation and ongoing compliance, including:

- Initial investment in AI systems and staff training;
- Costs associated with continuous monitoring and quality control;
- Potential savings in routine legal work and documentation; and
- Compliance infrastructure investments to meet new regulatory expectations.
AI-driven efficiencies may also impact fee structures, potentially challenging existing billing models.
Compliance officers will need to ensure transparent communication about AI use in service delivery and how this affects client charges, particularly under the SRA transparency rules.
Practical steps for compliance teams
So what can risk and compliance teams do to future-proof against the threats – and opportunities – that AI will introduce?
- Create an AI governance framework: Develop a comprehensive approach to AI governance that addresses accountability, transparency, and risk management.
- Update policies and procedures: Review and modify existing compliance documentation to incorporate AI-specific considerations.
- Staff training: Implement training programmes focusing on ethical AI use, recognising AI limitations, and proper oversight mechanisms.
- Client communication: Review client-care documentation to appropriately disclose AI use in legal service delivery.
- Insurance coverage: Ensure professional indemnity insurance adequately covers risks associated with AI-assisted legal services.
The authorisation of Garfield.law signals the beginning of a new compliance era for the legal profession. While many traditional firms may not immediately deploy similar systems, the regulatory precedent established means compliance teams should begin preparations now.