In 2026, legal tech stops being an internal operational choice and becomes part of your client service. This isn’t a sudden client obsession with tools. It’s the reality that technology is now entangled with professional obligations, data protection, cyber risk and the credibility of the work product.
The shift is already visible in the guidance ecosystem. The Law Society of Ireland’s November 2025 guidance on generative AI reframes GenAI as something that must be used (if at all) through the lens of existing duties: confidentiality, competence, supervision, and professional judgment. At EU level, the AI Act has moved from headline to timetable. Prohibitions and AI literacy have applied since February 2025 with major compliance obligations staged through 2026 and beyond.
Meanwhile, in-house teams are adopting GenAI at speed and many are increasingly uneasy about not knowing what their external firms are doing with it. The result is a new baseline where clients will expect your firm’s technology to be not just useful but explainable, governable and auditable.
This is what that looks like in practice.
1) No surprises about AI
Your clients want to know whether their confidential information is entering systems they did not approve, whether outputs are being verified and who is accountable.
By 2026 many clients will treat AI use like any other material aspect of service delivery, meaning they will ask about it upfront and expect a straight answer. In the US, the ACC has pointed to a transparency gap, with in house counsel reporting limited clarity on how law firms are using AI. Here, that expectation is being reinforced from two directions – the Law Society’s guidance on the responsible use of generative AI and the EU AI Act’s phased compliance timetable – which together make not knowing how AI is being used an increasingly unacceptable position for any serious firm or practice.
What clients will expect you to have ready
- A plain English AI use position; it can be just one page. It should state where AI is permitted, where it is not and what safeguards apply.
- A matter level approach – if GenAI is used for any step that could affect substance, risk or confidentiality, the firm should be able to explain the controls and, where appropriate, discuss it with the client as part of instructions.
- A clear stance on public tools versus enterprise tools, and on whether prompts or client data are used for model training.
The bar here is not perfection. It is governance and candour.
2) Evidence of AI competence, not just enthusiasm
The AI Act’s early obligations already include AI literacy requirements, and the broader framework ramps up again in 2026. For clients, this won’t feel like abstract regulation. It will surface as a practical question: can your people use these tools safely, and can your firm prove it?
Clients will increasingly assume that unmanaged AI use is already happening somewhere in the firm, often inside familiar products, whether people realise it or not. The expectation in 2026 is that you have moved from informal experimentation to deliberate competence.
What clients will expect you to be able to demonstrate
- Staff training that is role based (partners, associates, support, trainees) and updated.
- A supervision model: what is checked, by whom and when.
- A documented workflow for verification, especially for anything that resembles legal research, citations or factual assertions. These are all areas explicitly cautioned about in the Law Society guidance.
This is also where “AI incidents” become reputational events. Clients will ask what your firm does when AI output is wrong, biased or leaked and they will expect a defined response.
3) Secure collaboration as the default, not an add on
Clients in 2026 will measure firms against the digital experience they get everywhere else: banking, tax, procurement, HR, insurance. Email attachments and version chaos will feel archaic and they will increasingly be framed as risk, not tradition.
So the expectation is not simply a portal. It is a secure collaboration layer with auditability: controlled, permission based access, clear versioning, activity logs and predictable ways to share drafts, approvals and key documents.
This expectation is also indirectly reinforced by the wider cyber landscape. Ireland’s NCSC has been explicit that NIS2 has not yet been transposed into Irish law (and that implementation timing remains uncertain), but the direction of travel is unambiguous: supply chain security, governance and incident readiness are becoming the norm.
What clients will increasingly expect from your collaboration setup
- Secure portals or deal rooms for matters where sensitivity justifies it.
- Granular permissions: different access for directors, finance, HR, in house counsel, external stakeholders.
- An audit trail that makes it easy to evidence who saw what, when and what changed.
This is not about buying “more tech”. It is about reducing avoidable exposure.
4) A credible story on confidentiality and data handling
In 2026, sophisticated clients will ask questions that used to appear only in bank panels or Big Tech questionnaires. They are asking these questions because their own organisations are under pressure to map data flows and manage vendors.
Expect questions like:
- Where is client data stored and in what jurisdiction?
- Who has administrative access?
- What sub processors exist down the chain?
- What is retained, for how long and how is it deleted?
- Does any tool use prompts or documents to train models?
The Law Society guidance anchors this to familiar duties: client confidentiality remains the fixed point regardless of the tool.
What clients will expect to see
- A short, usable data map for core platforms.
- Vendor due diligence evidence for key systems.
- A standard response pack for security and privacy questionnaires so you’re not reinventing the wheel for each client.
5) Faster work, but with provable quality control
Clients do want speed. They also want fewer errors, fewer surprises and fewer “we’ll revert shortly” cycles. The tension in 2026 is that clients will assume your firm has access to AI enabled efficiency but they will not accept AI enabled sloppiness.
This is where the internal discipline matters: quality assurance, cite checking, and human responsibility for every deliverable. The market has seen enough public cautionary tales about unverified AI output to make this a live concern rather than a theoretical one.
The emerging client expectation
- If you use automation it should improve quality as well as speed.
- Verification is part of the workflow, not an optional extra.
- The firm can articulate how it prevents errors, hallucinations and missing context.
Speed is valued, but reliability is prized.
6) Predictable pricing that reflects efficiency, not just hours
AI forces a basic question that clients are becoming more comfortable asking directly: if technology reduces time spent, why are fees rising?
Some clients will still pay premium rates for premium judgment. But they will increasingly scrutinise whether they are funding inefficiency, duplicated effort or junior-heavy process work that could be systematised.
This is just basic procurement logic reinforced by industry reporting that frames AI adoption as a competitive gap tied to customer experience and cost structures.
What clients will expect in 2026
- More matters priced with scope clarity and predictable ranges.
- A willingness to use fixed fees, phased fees or subscriptions where the work is repeatable.
- A narrative of value: what is human expertise, what is process, what is automated and how that benefits the client.
Efficiency that only benefits the firm will become harder to defend.
7) An audit ready technology posture
By 2026 clients will expect that your firm can withstand scrutiny from regulators, from professional bodies, from insurers and from the client’s own compliance team.
The AI Act timetable matters here because it normalises the idea that organisations must be able to show their working when AI is involved. Major requirements apply in stages through 2026. Here in Ireland the Law Society guidance is a big signal that the profession is moving toward explicit expectations of responsible GenAI use.
What an audit ready posture looks like
- A simple AI and tech governance file: policies, training, vendor reviews, incident plan and a clear owner.
- A record of what tools are in use – including AI features inside mainstream products.
- A repeatable process for assessing new tools before rollout.
The 2026 client questionnaire you should prepare for now
Build your answers to these before the first serious client asks:
- What AI enabled tools do you use and for what tasks?
- Do you ever input client confidential information into GenAI tools and if so under what controls?
- Are prompts, outputs or documents retained? Can they be deleted?
- What is your verification process for AI assisted work?
- Who in the firm is accountable for AI governance?
- What training have staff received and how is competence maintained?
- What are your core systems for collaboration, document control and audit trails?
- What is your incident response plan for a cyber event involving client data?
- What is your vendor due diligence process and how do you manage sub processors?
- How do you ensure clients share in efficiency gains through pricing or turnaround?
If you can answer those cleanly, you are already ahead of most of the market.
The real change: clients now judge firms by their systems
In 2026, your clients will not just buy your advice. They will buy your delivery capability, your security posture, your collaboration experience, your governance maturity, your transparency on AI and your ability to prove that all of it is under control.