AI governance is now a firm-level risk. Unauthorised, informal use is already happening in most practices. This "shadow use" shows up as a quick summary, a drafted email, a rewritten paragraph, or a tool embedded in a vendor platform that is switched on by default. That is where the exposure sits: client material leaving controlled systems, uncertain retention and training settings, inconsistent verification, and no reliable record of what the input was, what was generated and what was checked.
Meanwhile, regulation in this area has moved from principle to enforcement and is built on the assumption that AI use is already embedded in operational workflows and that firms can evidence control.
The EU AI Act (Regulation (EU) 2024/1689) is now in force and introduces a risk-based regime: certain AI practices are prohibited, "high-risk" systems carry extensive governance obligations, and general-purpose AI models come with transparency and documentation expectations.
In Ireland, the Law Society issued guidance on 11 November 2025 on the ethical and responsible use of generative AI by solicitors, anchoring it to core duties including confidentiality, competence, supervision and verification.
The practical implication is direct: if your firm is asked how AI is used, supervised and evidenced, you need to be able to answer with documents and file-based proof.
We created the Praxis AI Compliance First Pack v1.0 (January 2026) as an audit-ready AI starter kit that small and mid-sized firms can implement immediately.
It includes six templates:
- Internal AI Use Policy
- AI Matter Risk Checklist
- AI Output Verification File Note
- Technology Supplier Due Diligence Checklist
- Incident Response One Pager (Data, Cyber, AI)
- Vendor Risk Register
This is the evidence layer most firms are missing: clear internal rules, a matter-level risk gate, a verification record you can put on the file, vendor discipline and a same-day incident playbook.
What firms underestimate is how quickly a problem becomes operational once something goes wrong.
Under the GDPR, a controller must notify the supervisory authority of a notifiable personal data breach without undue delay and, where feasible, within 72 hours of becoming aware of it.
If your AI use is informal, unmanaged and undocumented, reconstructing what happened inside that window will be extremely difficult.
Implementation takes one hour
We designed this pack for immediate adoption: implement the policy firm-wide, integrate the matter checklist into file opening, store the verification note in your matter templates, circulate the incident one-pager and assign ownership of the vendor register.
By implementing these templates, your firm turns policy into a defensible operating framework. Staff get clear boundaries and documentation, enabling them to use AI efficiently where appropriate and to capture the benefits without leaving the practice exposed. With that structure in place, you can be confident you are meeting your governance obligations and materially reducing the risk of an avoidable incident.