The EU AI Act’s Practical Implications for Irish Firms: Q1 2026 Actions

AI is entering every workflow in much the same way the internet did: often under the radar, with uneven take-up across firms, until suddenly it is everywhere.

Shadow AI, meaning tools that are adopted or used informally, is the quietest way a firm can take on its biggest risk. It could be a browser plug-in, a free chatbot, a quick document summarisation or contract review. All of these uses operate outside procurement, outside IT management and outside any defensible supervision. In that environment, client data can be copied into systems you haven’t assessed, stored under terms you haven’t read and processed in ways you can’t later reconstruct. Privilege hygiene frays, GDPR exposure increases and the firm’s ability to demonstrate competence and control is called into question.

The EU AI Act is built to shut down that casual, unmanaged posture by pushing organisations toward inventory, governance, human oversight and evidence. The express purpose is that AI use becomes auditable and accountable rather than invisible and improvisational. This risk-based regulatory framework classifies AI systems by the level of harm they can cause and attaches obligations accordingly, ranging from outright prohibitions for certain unacceptable practices, to heavy compliance duties for high-risk systems, to targeted transparency rules for AI that interacts with people or generates content. Lighter requirements apply to lower-risk tools, but even these now sit inside a regulatory framework rather than a vacuum.

The Act entered into force on 1 August 2024 but applies in phases. The general provisions, including AI literacy expectations, applied from 2 February 2025. Key governance provisions and the rules for general-purpose AI (GPAI) kicked in on 2 August 2025, and from 2 August 2026 the Act becomes broadly applicable across the regime. The EU’s AI Act Service Desk notes a full roll-out foreseen by 2 August 2027.

Because that timetable is already running, Irish firms should treat 2026 readiness as a systems and evidence project: inventory every AI use case, including free tools built into other software; tighten procurement so no one introduces shadow AI; define permitted uses and red lines by practice area; and build auditable supervision around training records, review checklists, escalation paths and incident logs. The AI Act adds an accountability layer that clients, insurers and regulators will increasingly expect you to demonstrate in a concrete, reviewable form.

The Act’s supply-chain framing means that a law firm can occupy any of the following roles, depending on what it deploys:

Deployers: you use an AI tool in your practice (drafting, summarising, due diligence, intake bots).

Providers: you put an AI system “into service” under your name. For example, a client-facing chatbot embedded on your website, an AI contract review product sold to SMEs, or an internal system you materially modify and roll out across offices.

Importers/distributors: you resell or package third-party tools, especially under your own brand.

That role shift changes the question from whether AI is allowed to exactly which obligations attach to your position in the chain.

It’s worth noting that over 2025 there were competing signals around implementation dates, with some voices pushing for delays. The Commission at points insisted the schedule would proceed, while proposals and media reporting raised the possibility of pushing parts of the high-risk regime out.

If you run a law firm, behave as though the published timetable is real and build your AI programme now: put governance in place, lock down procurement, train people to competence and start generating evidence of oversight.

The AI Act is, at its core, a risk regime. That is something that should feel familiar to Irish solicitors who already practise inside risk frameworks every day. The AI Act simply applies that same discipline to software that can influence decisions, shape behaviour and produce outputs with the persuasive sheen of authority. In broad terms, it draws four lines and it helps to picture them in concrete practice terms.

At one end sit prohibited or “unacceptable risk” practices: systems designed to manipulate behaviour in ways that cause harm, for example social-scoring-style profiling and certain forms of workplace emotion inference. These are things most solicitors are not building, but they can still arrive quietly through third-party HR, marketing or analytics tools.

Next are “high risk” systems, where the law expects a full compliance architecture – documented risk management, strong data governance, technical documentation, logging, testing and meaningful human oversight. This would apply, for example, to AI used in recruitment and performance management, or to systems that materially influence access to essential services: any area where bias, opacity and error can translate into real-world disadvantage.

Separate again are transparency triggers: uses where people interact with AI, for example a client-facing intake chatbot on your website, or receive AI-generated content that could be mistaken for human output. All of these require clear signalling and careful handling of what is being generated and why.

Everything else sits in a lighter-touch category: your drafting assistants, summarisation tools and extraction workflows. The danger with uses in this category is that they might quietly become the basis for advice. For solicitors, the dividing line is ultimately evidential – can you show that AI in your practice is bounded to appropriate tasks, supervised by competent humans and controlled with the same discipline you would demand of any other high-consequence system handling client rights, client money or client trust?

Particularly important for Irish solicitors is that the law defines legal advice broadly as advice on applying law “to any particular circumstances” of a person.

And that is where AI risk becomes practice risk. If a tool generates outputs that look like advice, a firm’s exposure becomes a matter of competence and standard of care, supervision and delegation, transparency to clients, confidentiality and privilege hygiene, and complaints and professional discipline. And that’s before even considering data protection.

This is exactly why the recently published Law Society of Ireland guidance is so welcome. It highlights tasks where AI can usefully assist, such as summaries, checklists and a degree of administrative support, and warns against relying on LLMs for legal advice, document review or citations without robust verification and controls.

What Changes Inside A Firm: Five Shifts the AI Act Accelerates

1) AI literacy becomes an enforceable expectation.

The AI Act’s first phase explicitly includes AI literacy obligations. This essentially means that your staff must understand enough to use AI responsibly and recognise risk.

In a firm context that means you need:

  • a baseline training module for all staff – what AI is, what it isn’t, hallucinations, confidentiality, verification
  • role-specific training
  • recorded completions and refreshers – the audit trail

2) Procurement becomes compliance

The fastest way to fail the next two years is not malicious AI. It’s ad hoc tool adoption.

Again, this is something that can happen very easily in a firm that has not implemented proper AI management. It could be something as simple as a solicitor installing an AI browser plug-in, a team buying AI minutes on a corporate card, or someone uploading documents to a free-tier product that trains on inputs. Any of these situations means that you now have:

  • client confidentiality risk
  • GDPR exposure
  • professional conduct risk
  • and depending on the tool and use case, AI Act obligations you didn’t realise you triggered

This is why firms need to treat AI with the same degree of oversight as cloud services, document management and practice management systems. As with those systems, usage needs to be controlled, approved and logged.

3) Use of AI becomes a disclosure question

Even if you never touch a high-risk use case, transparency expectations are tightening. The Act includes transparency obligations for certain categories of AI systems and AI-generated content, with key provisions applying from 2 August 2026.

For solicitors, the client expectation curve may move faster than the statute. Some clients will soon start to ask whether their data was used to train anything, whether their document was drafted with AI and, if so, who checked it and whether you can evidence the checks.

You will also need to be ready for questionnaires from institutional clients, insurer questions and potential disputes where tool use becomes relevant to standard-of-care arguments.

4) Internal AI systems can drift into regulated territory

Most law firm AI usage is likely to be low-to-medium risk: summarising, drafting, extracting and translating. But beware of unmanaged drift into other areas of the firm.

If you start using AI in employment decisions around hiring or performance management, that’s a high-risk zone under the Annex III categories.

If you deploy AI to assess creditworthiness or access to essential services, that’s another regulated zone.

Your legal tech stack includes HR tech, marketing tech and client onboarding tech. Map them all.

5) The firm becomes responsible for human oversight as a system

The Act repeatedly pulls toward operational discipline: risk management, documentation, oversight, robustness and record keeping. This aligns with the Law Society’s framing that AI can support professional work, but professional judgment cannot be delegated to a model.

In practice, then, human oversight means defined review steps, checklists for verification, documented boundaries on permissible uses, escalation paths for uncertainty and a quality-control loop that learns from use.

The Landmines

The first wave of the AI Act bans certain “unacceptable risk” practices effective from 2 February 2025.

Some are obviously irrelevant to most firms. But a few can creep in via third-party systems. These include:

Emotion recognition in workplaces (with narrow exceptions). If HR tools claim to infer employee emotions from voice or video, that’s a serious red flag.

Manipulative “dark pattern” AI designed to materially distort behaviour. This is relevant in the context of client-facing web funnels, intake forms or marketing automation.

Social-scoring-style profiling. Not something you would build yourself, but you need to be aware of vendors baking it into risk scoring for client desirability or debt-recovery targeting.

Next Steps?

Irish firms need to treat AI readiness as a practice discipline rather than a tech experiment. The aim in Q1 2026 is to create a compact programme that would withstand a client questionnaire, an insurer’s underwriting call or a regulatory inspection, all without disrupting the work of your practice. The goal is not perfection but defensibility: knowing where AI is used, controlling how it enters the firm, setting clear boundaries on permitted tasks and being able to evidence supervision and verification in a form that stands up under scrutiny.

Step 1: Build an AI inventory

List:

  • every AI tool used
  • where it’s used in workflows
  • what data it touches
  • what decisions it influences
  • whether it’s client facing
  • whether outputs go into advice, pleadings, affidavits, contracts

The AI Act treats governance as infrastructure and the Commission explicitly frames implementation as a key priority.
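For firms that want the inventory to be more than a spreadsheet, even a minimal machine-readable register helps. Below is a sketch of what one record might look like; the field names, the hypothetical “DocSummariser” tool and the structure are illustrative assumptions, not anything prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in the firm's AI inventory. All field names are illustrative."""
    tool_name: str               # vendor product, plug-in or built-in feature
    workflow: str                # where it is used (intake, drafting, due diligence...)
    data_touched: list[str]      # categories of data the tool processes
    decisions_influenced: str    # what the output feeds into
    client_facing: bool          # does a client ever interact with it directly?
    feeds_advice: bool           # do outputs go into advice, pleadings, affidavits, contracts?
    approved: bool = False       # has it passed the firm's procurement review?

# Example entry for a hypothetical summarisation tool
register = [
    AIToolRecord(
        tool_name="DocSummariser (hypothetical)",
        workflow="litigation document review",
        data_touched=["client correspondence", "discovery documents"],
        decisions_influenced="issues lists reviewed by a solicitor",
        client_facing=False,
        feeds_advice=True,
    ),
]
```

A structured register like this also makes the later steps cheaper: classification, approvals and client questionnaires can all be answered from the same source.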

Step 2: Classify use cases

Vendors will tell you their tool is compliant. That’s often meaningless without your context. For full oversight you need to classify your use cases. For example:

  • drafting assistance (low to medium risk – but high professional duty)
  • summarisation (risk of omissions, needs verification)
  • client intake (privacy, bias, transparency)
  • HR decision support (potential high risk)
  • automated profiling for marketing (risk of manipulation)

Then attach controls per category.
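One way to make that operational is a simple mapping from use-case category to required controls that fails closed on anything unclassified. A minimal sketch, with illustrative category names and controls rather than AI Act classifications:

```python
# Illustrative mapping from internal use-case categories to required controls.
# The category names and controls are the firm's own labels, not AI Act terms.
CONTROLS_BY_CATEGORY = {
    "drafting_assistance": ["solicitor review before use", "no unapproved tools"],
    "summarisation": ["verify against source document", "flag possible omissions"],
    "client_intake": ["AI disclosure to the client", "privacy notice", "bias review"],
    "hr_decision_support": ["escalate: potential Annex III high-risk", "named human decision-maker"],
    "marketing_profiling": ["legal sign-off", "check against prohibited-practice list"],
}

def required_controls(category: str) -> list[str]:
    """Return the controls attached to a use-case category, failing closed."""
    if category not in CONTROLS_BY_CATEGORY:
        # Unknown use cases are blocked, not silently allowed.
        raise ValueError(f"Unclassified AI use case: {category!r} - route to governance review")
    return CONTROLS_BY_CATEGORY[category]
```

The design point is the failure mode: a use case nobody classified should stop and go to governance review, not default to permitted.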

Step 3: Create permitted use patterns and make them easy to follow

Policy fails when it’s abstract. What works is a set of approved patterns. For example:

  • “Summarise this document, do not quote case law, provide issues list only.”
  • “Generate a first draft” – a solicitor then reviews; nothing goes to a client without review.
  • No client confidential data in non-approved tools.
  • No AI generated citations without manual checking against authoritative sources.

This mirrors the Law Society guidance’s pragmatic approach: suitable tasks, yes; but legal advice, citations and document review require heightened caution and verification.
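Where tooling allows, some of those red lines can even be encoded as simple pre-use checks rather than living only in a policy PDF. The sketch below is illustrative: the keyword triggers and rule wording are assumptions, and a crude check like this supplements, never replaces, solicitor review:

```python
# Illustrative red-line checks run before an AI output is relied on.
# The triggers are crude keyword rules for demonstration only.
def red_line_violations(prompt: str, tool_approved: bool, contains_client_data: bool) -> list[str]:
    """Return red-line issues for a proposed AI use; an empty list means within bounds."""
    issues = []
    if contains_client_data and not tool_approved:
        issues.append("client-confidential data in a non-approved tool")
    if "cite" in prompt.lower() or "case law" in prompt.lower():
        issues.append("citations requested - manual check against authoritative sources required")
    return issues

# Usage: the approved summarisation pattern passes cleanly (prints []).
print(red_line_violations("Summarise this document and provide an issues list only",
                          tool_approved=True, contains_client_data=True))
```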

Step 4: Build an evidence trail

If an incident happens or an allegation is made, you will need to be in a position to evidence your controls. To that end, you should retain:

  • training completion records
  • tool approvals and DPIA-style assessments where relevant
  • versioned prompts/templates (where used)
  • review checklists
  • incident logs

The AI Act is heavily focused on documentation and accountability, and that focus will only intensify as obligations mature through to 2027.
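The incident log in particular is only useful if entries are structured and timestamped so they can be produced on request. A minimal sketch of what one entry might capture, with illustrative field names and a hypothetical tool:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncidentEntry:
    """One entry in the firm's AI incident log; the fields are illustrative."""
    tool_name: str
    matter_ref: str      # internal matter reference only - never client data itself
    description: str     # what happened
    caught_by: str       # which control detected it (checklist step, reviewer, client)
    remediation: str     # what was done, and any policy change triggered
    logged_at: str = ""  # filled automatically if not supplied

    def __post_init__(self):
        if not self.logged_at:
            self.logged_at = datetime.now(timezone.utc).isoformat()

# Append-only JSON lines keep the log easy to produce on request.
entry = AIIncidentEntry(
    tool_name="DraftAssist (hypothetical)",
    matter_ref="M-2026-0142",
    description="Generated citation did not exist; caught at verification step",
    caught_by="citation verification checklist",
    remediation="Output discarded; refresher training scheduled for the team",
)
print(json.dumps(asdict(entry)))
```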

Step 5: Decide your client facing stance now

Firms take very different positions on this but overall, there are three main camps:

  • AI is internal only, never client facing.
  • AI can be client facing but clearly framed and supervised.
  • AI first delivery as a product line.

Whatever you choose, make it deliberate. If you deploy client facing AI, you edge closer to “provider” territory. If you keep it internal, you remain a deployer but you still have professional and data obligations, and potentially AI Act transparency duties depending on the system.

What Happens Next: Codes, Guidance and Enforcement

A key part of the AI Act’s practical impact will come from codes of practice and guidelines, from how enforcement bodies interpret “provider” vs “deployer” and from how market surveillance takes shape across Member States.

On the general purpose AI side the Commission has already positioned the GPAI Code of Practice as a voluntary route to demonstrating compliance and reducing uncertainty for model providers. The practical knock-on for solicitors is that your vendors’ willingness to align with that code will shape what you can safely procure and how much assurance you can credibly give clients. Alongside that, expect increasingly pointed interpretive guidance from the EU’s AI governance bodies on questions that matter to firms in practice, such as what counts as “putting into service,” where the provider/deployer line is drawn for client-facing tools and how transparency duties should be implemented in real user journeys.

Finally, watch the enforcement texture. This is where the Act stops being a policy document and becomes an operational reality for firms and their vendors. From 2 August 2026, the AI Act is broadly applicable across the regime, and market surveillance authorities (and for general purpose AI, the EU’s AI Office) can interrogate how systems have been classified, what controls exist in practice and whether the documentation matches the way tools are actually deployed. In practical terms, that means requests for the sort of material most organisations have never had to assemble in one place: technical and compliance documentation, records of oversight, logging, risk assessments and (for certain systems) visibility into data governance and testing assumptions.

The penalty regime is designed to ensure this is not optional. Member States must provide enforcement measures that are explicitly “effective, proportionate and dissuasive,” with the framework setting upper ceilings for serious infringements alongside the possibility of non-monetary measures such as warnings, restrictions and corrective actions that can be just as commercially damaging as a fine.

The firms that win this phase will be the ones that can answer:

  1. What AI do we use?
  2. Where does it touch client data?
  3. What tasks is it permitted to do?
  4. Who oversees it?
  5. How do we verify outputs?
  6. What happens when it goes wrong?

Download our free AI Compliance First Pack here to get started today.
