AI Is Here. The Risk Register Is Empty: Why Unacknowledged AI Use May Be the Real Threat Inside Your Firm

AI didn’t arrive at your firm with fanfare. It didn’t need board approval or a procurement process. It came in quietly, wrapped into tools you already use: Microsoft Word, Gmail, Zoom, Adobe Acrobat. It began suggesting subject lines, cleaning up your writing, summarising transcripts. It arrived ambiently, already embedded in platforms you rely on every day.

And for most firms, it arrived without governance.

No update to the risk register. No internal briefing. No client communication. No staff training. Just new features, gradually adopted in a system built for a pre-AI world.

This is the real compliance blind spot of the moment. Not that firms are using AI, but that they’re using it without acknowledgement.

The Real Risk Isn’t the Technology – It’s the Governance Vacuum

AI tools are no longer experimental. They are live and influencing how legal services are delivered, sometimes without the person using them fully realising it.

Your associate uses Microsoft Copilot to draft a summary of a client memo.

Your legal secretary relies on Adobe Firefly to polish a presentation.

Your litigation team lets Zoom auto-generate a transcript and summary.

Someone pastes sensitive instructions into ChatGPT “just to get started.”

These are not hypothetical edge cases. They are daily operational realities. And yet most firms haven’t formally addressed them: not in their risk assessments, not in their client terms, not in their internal playbooks. This isn’t a failure of technology but a failure of governance design.
And it’s one that exposes firms to legal, regulatory, reputational and ethical risk.

When Policy Lags Behind Practice

Most firms’ internal documentation assumes a workflow that’s already obsolete.
Your risk register may still state:

That legal drafting is performed by qualified staff.

That client data is processed manually or with supervised automation.

That no third-party tool is used without formal approval or a data protection impact assessment (DPIA).

But here’s the uncomfortable truth: AI tools have quietly reshaped all of that, and the resulting policy/practice mismatch is now itself a form of risk.

Let’s take a closer look at where the exposures really lie.

Client Expectation vs Operational Reality

Legal services are defined not just by outcomes, but by how those outcomes are delivered.

If your firm is now using AI assisted tools to:

Draft correspondence,

Summarise discovery material,

Translate or restructure contracts

…then you are materially altering the service model your clients are paying for. Have you told them?

A client may reasonably expect that legal work is being done by a human, under supervision, and fully within the firm’s perimeter. If AI tools – especially cloud-based ones – are being used without disclosure, that expectation is breached. This is about expectation management as much as risk management.

Confidentiality and the Upload Trap

Every time a staff member pastes a paragraph into an AI assistant, they create a potential leak vector. Even with “enterprise-grade” tools, unless your systems are configured properly, you may be:

Logging sensitive data in prompt histories,

Exposing material to unintended sub-processors,

Feeding information into model training without realising it.

A recent study found that over 40% of employees in regulated industries have used AI tools on sensitive material without checking the data policies of the provider. This is where “innocent convenience” becomes latent liability.

Outdated DPAs and Unacknowledged Sub-processors

If you haven’t revisited your vendor contracts since 2022, they almost certainly don’t reflect AI architecture changes.

Your Microsoft 365 data processing agreement (DPA) may now include Copilot-specific sub-processors.

Your cloud platforms may have added generative capabilities with new usage terms.

Your legal tech vendors may be layering in LLMs from third-party APIs without direct mention.

The result? You could be in breach of your own contractual obligations or, worse, exposing client data to sub-processors you didn’t even know were involved.

Supervision, Delegation and Training

Traditional legal delegation involves:

A human doing the task,

A senior reviewing it,

A file note recording the process.

AI breaks that flow. When an associate uses Copilot or Notion AI to create a first draft, who is supervising that content? Is the reviewer aware of what was human-written and what was machine-generated? If errors or hallucinations creep in – and they will – who owns the outcome?

Training is the missing link. Most firms have provided none: no AI use policy, no risk awareness sessions, no clarity on what’s permitted or prohibited. And so decisions are being made ad hoc, by individual team members, in high-pressure moments, with tools designed to feel authoritative. Again, this is ultimately a governance exposure.

The Regulatory Environment Is Shifting Fast

Three years ago, AI policy was a whiteboard exercise. Today it’s on the regulatory radar. The EU AI Act introduces new categories of risk, documentation duties and obligations for both providers and professional users of AI systems.

The Law Society of Ireland has already issued guidance stressing the need for competence, caution and disclosure in the use of generative tools.

The Data Protection Commission has stated that the use of AI does not override GDPR obligations – particularly around lawful processing, transparency and data minimisation.

If you’re waiting for clarity before you act, you may already be behind.

What Smart Firms Are Doing

The most resilient firms aren’t banning AI; they are governing it.

They are:

Auditing their toolsets to identify where AI is already present (e.g. Microsoft, Zoom, Adobe, Google).

Updating risk registers to reflect real-world behaviour, not legacy assumptions.

Refreshing DPAs and vendor assessments to account for new sub-processors and AI terms.

Inserting basic AI clauses into client engagement letters and terms of business, clarifying that some tools may be AI-assisted, with appropriate safeguards.

Running internal briefings to demystify the tools and explain what’s permitted, what’s not, and what needs escalation.

They are not waiting for a problem. They are building credibility now.

The Real Risk Is the Silence

The assumption that your tools haven’t changed.
The hope that clients won’t ask.
The default to business as usual when the landscape has already shifted.

Because when – not if – a regulatory question, client concern or audit shines a light on AI usage, your defence will be preparedness, and preparedness begins with acknowledgement. AI adoption isn’t the risk. Unacknowledged AI use is.
