Ten Decisions Every Irish Law Firm Must Make About AI Now

AI is already in your firm. It is there not because you adopted it, but because Microsoft, Google, Adobe, your case management provider and your staff did.

Until recently, firms could shrug this off as a passing experiment. That margin of denial is now gone. On 11 November, the Law Society published Guidelines for the use of generative artificial intelligence by the legal profession in Ireland, setting out how GenAI interacts with core professional obligations and the EU AI Act. In summary: yes, there are suitable tasks – admin, summaries and checklists, for example – but firms must not use LLMs for legal advice, document review or citing legislation or case law, and they must treat confidentiality, competence and independence as non-negotiable.

The question for partners is no longer “Should we have an AI policy?” It’s “What exactly are we prepared to allow, forbid and stand over?”

This piece proposes ten concrete decisions to make now. Get them right and you have a living AI policy aligned with Law Society guidance, your risk register and your client obligations.

Decision 1: What is our stance on AI?

The Law Society guidance starts from fundamentals: explain what GenAI is, set out its limits and frame everything through the Solicitors’ Guide to Professional Conduct. Your policy should do the same in your own words. You don’t need a manifesto. You need a clear, partner-owned position that can sit on an intranet page, in an induction pack and, where needed, in tenders.

Example:

“Our firm uses certain AI tools to support internal drafting, formatting and research. We do not use AI tools to replace legal analysis, to provide uncited legal advice or to review documents unsupervised. Partners remain responsible for all work product.”

This single paragraph becomes the anchor for every later decision: tools, training, supervision and client communication.

Decision 2: Where is AI in bounds and out of bounds?

The Law Society guidance draws a bright line between low-risk support tasks and high-risk legal work.

In bounds: simple administrative tasks, summarising documents you already have, generating checklists and templates.

Out of bounds: giving legal advice, reviewing documents for accuracy, citing legislation or case law or relying on GenAI for real-time information.

Your policy should translate that into firm-specific rules by work type, not by tool.

For example:

Clearly allowed (with care):

  • First-draft email wording.
  • Plain English summaries of your own documents.
  • Draft agendas, checklists, timelines.
  • Brainstorming alternative phrasings or argument structures.

Clearly prohibited:

  • “What’s the law on X in Ireland?” style prompts.
  • Asking AI to “review this contract and tell me the risks”.
  • Generating caselaw or statutory references and pasting them in unchecked.
  • Any unsupervised use in court documents or advices.

Grey zone (partner sign off required):

  • Using AI to help structure a complex opinion.
  • Using AI within a specialist legal AI product (see Decision 7).

The key is that anyone in the firm can quickly answer: “Is this kind of task allowed with AI here?”

Decision 3: Which tools are actually approved?

Right now, your people are probably using whatever is nearest – free web chatbots, mobile apps, browser plug-ins.

The Law Society guidelines do not endorse or approve any particular legal AI products. They explicitly focus on general GenAI tools and stress that risk profiles differ by technology. That means you must decide which tools are in bounds.

At minimum, your policy should:

Name the approved tools.
For many firms, that will be:

  • Microsoft 365 Copilot.
  • AI features inside your DMS or practice management system.
  • Any other enterprise tool that your IT / vendor due diligence process has signed off.

Ban consumer and shadow IT.

  • No use of personal ChatGPT / Gemini accounts for client or firm material.
  • No unvetted browser extensions or plug-ins.

Tie tools to tasks.

  • E.g. “For document summarisation, use Copilot in Word – do not paste text into external websites.”

This is where your vendor due diligence and DPAs intersect with AI – every AI feature still creates a data processor relationship (Decision 7).

Decision 4: What must never go into an AI tool?

The Law Society guidance highlights confidentiality, GDPR and privilege as key risk areas. Firms are urged to define permitted uses, safeguards and accountability to ensure those obligations are respected.

In practice, that means drawing a hard perimeter around certain data:

Absolutely prohibited inputs:

  • Privileged communications.
  • Sensitive personal data (health, criminal, immigration, children).
  • Non-public financials, deal terms, pricing.
  • Identifying details that are not strictly necessary for the task.

Conditionally allowed (within approved enterprise tools only):

  • Anonymised client scenarios for internal training / know how.
  • Redacted document excerpts where you are confident of the vendor’s security and data use terms.

A one-page matrix of “green / amber / red” data categories, with examples, is a good place to start.

Decision 5: How will we supervise AI assisted work?

Across the Law Society’s AI resources and international guidance, the message is consistent: AI does not dilute your professional duty. You remain responsible for the work product, regardless of tools used. Your AI policy should make supervision explicit, not assumed.

Consider rules like:

Any AI generated draft going outside the firm must be:

  • Reviewed line by line by the fee earner.
  • Checked for hallucinated facts, citations and invented caselaw.
  • Brought within the firm’s usual supervision structure, e.g. junior → associate → partner.

For litigation and court documents:

  • All authorities must be checked in primary sources.
  • Any AI assisted drafting must still comply with court rules and practice directions.

Decision 6: What will we record about AI use?

The new guidance explicitly encourages firms to set out accountability and safeguards, including how AI fits into GDPR compliance, client confidentiality and privilege. That moves AI from interesting tool into the world of governance and record keeping.

You don’t need to log every prompt. But you should decide:

At matter level:

  • Will you note significant AI use in the file? E.g. “First draft generated with Copilot and redrafted by [name]”.
  • For certain work types, e.g. regulatory investigations, do you want a clearer audit trail?

At system level:

  • Do your enterprise tools already log AI interactions? If so, who can access logs if there is a complaint or claim?
  • Is AI use reflected anywhere in your GDPR Article 30 records or DPIAs?

At risk register level:

  • The EU AI Act and Law Society AI resources both emphasise embedding AI risk into existing governance and risk registers, rather than treating it as a side project.

The aim is simple: if a regulator, insurer or client later asks, “How did you use AI on this?”, you can give a calm and well documented answer.

Decision 7: How will we treat legal AI vendors?

The Law Society’s new guidelines deliberately do not assess or approve specialist legal AI tools. They focus on fundamentals and signal that further, more specific guidance will follow as use cases evolve. That leaves firms to make their own call.

Treat legal AI products as you would any other critical outsourcing arrangement – but with extra questions:

What exactly is the AI doing?

  • Contract clause extraction? Drafting? Prediction? Summarisation?
  • Is it using your documents to train models, or only for transient processing?

Where is the data going?

  • Jurisdictions, sub-processors, retention periods.
  • Can you get your data back in a usable format?

How does this map to Law Society guidance?

If a vendor can’t explain in plain language how its AI works, what data it touches and how it supports your professional obligations, it’s not ready for your client files.

Decision 8: What will we tell clients and courts?

The guidance places the onus on firms and solicitors to maintain ethical and professional standards when using GenAI. That includes how transparent you are with the people who rely on your work.

You have three broad options:

Silent but defensible:

  • You treat AI as an internal tool akin to a spell checker.
  • You maintain full supervision and quality control.

Baseline disclosure:

Standard wording in engagement letters along the lines of:

“We may use secure, vetted software tools – including certain AI enabled tools – to support our internal drafting, research and document management. These tools do not replace legal analysis or partner supervision.”

Client specific agreements:

For institutional or public sector clients you may need explicit clauses about AI use, data residency and audit rights.

For courts, the safest assumption is that any AI assisted submission must be fully checked and owned as if it were drafted entirely by you.

Decision 9: How will we build AI literacy across the firm?

The EU AI Act (Article 4) requires providers and deployers of AI systems to ensure an adequate level of AI literacy among their staff.

Your policy should answer these training questions:

Who needs what?

  • Partners: risk, governance, client messaging.
  • Fee earners: permitted uses, supervision, confidentiality, practical do/don’t examples.
  • Support staff: admin use cases, data handling rules.

What’s the minimum baseline?

  • A short compulsory session on the firm’s AI policy and Law Society guidance.
  • A simple one-pager: “Five safe ways to use AI” and “Three things you must never do.”

AI literacy is not about turning every solicitor into a tech specialist; it is about ensuring nobody can say “I didn’t realise that wasn’t allowed.”

Decision 10: Who owns this and how often will we revisit it?

The Law Society explicitly acknowledges that GenAI is moving fast and that further guidance and updates will follow. Your policy cannot be a one-off.

Decide now:

Ownership:

  • One responsible partner (or a small tech/risk committee) with a clear mandate.
  • A simple reporting line into the partnership meeting or management board.

Review cycle:

At least annually, and additionally:

  • When the Law Society issues new AI-related guidance.
  • When you adopt a new AI-enabled core system.
  • After any incident, complaint or near miss involving AI.

Change management:

  • How will updates be communicated? (Email plus short briefing, not just a PDF upload.)
  • How will you check that people have actually read and understood changes?

Think of this as the AI chapter in your existing risk and governance framework rather than a parallel universe.

The Law Society’s guidance is a timely and welcome intervention, offering clarity where many firms have been operating in a grey zone. It draws the outer boundaries and reminds everyone that AI does not suspend your duties. But the hard work is local:

  • Turning those principles into ten clear decisions.
  • Writing them down in plain language.
  • Training people to live them.
  • Updating them as the tools and rules evolve.

If your firm can do that, you will move AI from quiet, unmanaged use in every corner of the firm to a governed capability you can defend to clients, regulators and insurers. AI isn’t the risk here – another year of silent, unmanaged AI use across your firm is. A clear, Law Society-aligned AI policy is no longer optional; it is a baseline requirement for professional practice.
