AI adoption is exposing, in some firms, an underlying issue that has quietly existed for decades: workflows that have traditionally functioned through professional instinct, informal shortcuts and the competence of individuals. That model held when the tools were slow and the practice could absorb the inefficiency as just the way things were done. AI changes the game. It increases throughput, reduces friction and makes it possible to produce persuasive legal text at scale. But if your underlying workflow is loose, undocumented and dependent on tacit knowledge, AI doesn’t modernise it; it amplifies its weaknesses. It accelerates the wrong steps as efficiently as the right ones, and it does so with the appearance of polish. This is why the EU AI Act is best understood not as a technology regulation but as a systems and accountability regime. It forces organisations to make their internal operations legible: who did what, using which systems, touching which data, under whose authority, with what oversight and with what evidence that the output was checked. The firms that treat this as a workflow programme will build an operating model where technology can be safely layered on, because the work itself has been engineered to be predictable, auditable and resilient under scrutiny.
The most common mistake now is trying to bolt new capabilities onto old pathways. Legacy workflows in many firms were never designed as systems; they evolved as a set of adaptations: a partner’s preferred way of drafting, a secretary’s informal checklist, a trainee’s personal method of managing deadlines, a WhatsApp message here, a quick copy-paste there and an assumption that someone will catch any mistakes. Legal tech layered on top of a system like this is merely ornamentation. Something that looks good in a demo will likely collapse under real volume because it doesn’t solve the underlying constraint: work that is not standardised cannot be reliably automated, work that cannot be reconstructed cannot be defensibly delegated and work that has no defined decision points cannot be safely accelerated. In practice, this is why AI pilots so often stall after the initial enthusiasm. The tool works, but the firm’s workflow cannot accommodate it without raising risk: confidentiality risk, privilege risk, negligence risk and now, explicitly, governance risk as the AI Act phases in obligations that make evidence essential.
Workflow readiness, then, is the prerequisite that makes legal technology actually usable. It begins with a shift in what you treat as the unit of improvement. Not the app. Not the feature. The workflow. A workflow is a sequence of real actions with defined boundaries: what data enters, where it goes, who is authorised to use it, what outputs are produced, what checks occur and what escalation happens when something looks wrong or uncertain. Once that is defined, technology becomes straightforward. You can decide where AI is permitted (low-consequence drafting support, summarisation, extraction), where it is restricted (anything that resembles advice, anything filed or served, anything that materially affects rights) and where it is simply not allowed without controlled oversight. The AI Act’s logic is built around exactly this: classifications, roles, transparency in specified interactions and an expectation that organisations can demonstrate oversight and accountability. In other words, the Act does not merely ask whether you used AI; it asks whether you can show you used it within a controlled system.
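To make that concrete, here is a minimal sketch of how such a workflow boundary might be recorded in code. It is illustrative only: the names (WorkflowStep, AITier and the example step) are hypothetical, not drawn from any particular firm’s system or from the Act itself.

    from dataclasses import dataclass
    from enum import Enum

    class AITier(Enum):
        PERMITTED = "permitted"    # low-consequence drafting support, summarisation, extraction
        RESTRICTED = "restricted"  # advice-like work, filings, anything materially affecting rights
        PROHIBITED = "prohibited"  # not allowed without controlled oversight

    @dataclass
    class WorkflowStep:
        name: str                    # the real action being performed
        data_in: list[str]           # what data enters
        systems: list[str]           # which systems touch it
        authorised_roles: list[str]  # who is authorised to act
        ai_tier: AITier              # where AI sits at this step
        checks: list[str]            # what verification occurs
        escalation: str              # what happens when something looks wrong

    # Example: a single, bounded step in a due-diligence workflow.
    summarise_leases = WorkflowStep(
        name="Summarise lease portfolio",
        data_in=["client lease PDFs"],
        systems=["document management system", "approved AI tool"],
        authorised_roles=["associate", "trainee under supervision"],
        ai_tier=AITier.PERMITTED,
        checks=["associate review against source documents"],
        escalation="flag to supervising partner if any term is ambiguous",
    )

The point is not the code but the discipline: once a step is written down this way, the question of whether AI is allowed at that step has a single, inspectable answer.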
The blockage created by old workflows is economic. Firms that retain highly bespoke, partner-specific ways of working cannot scale quality without scaling cost. They cannot standardise supervision without constant friction. They cannot implement AI safely because the variability in inputs and outputs is too high. This is what happens when systems are absent. Technology doesn’t fix that; it makes it visible, faster. And once visible, it becomes a liability.
A higher-level way to think about this is that legal practice is moving from an artisanal production model, where expertise is expressed through individual craft, to an engineered service model, where expertise is embedded in repeatable processes. This does not reduce the importance of judgment; it ensures that professional judgment is applied where it is genuinely needed rather than being consumed by preventable variability: missing information, inconsistent templates, untracked changes, undocumented decisions, unchecked outputs. AI can be extraordinary in an engineered environment because it can operate inside clear constraints. It can draft within defined parameters, summarise within agreed formats, extract within consistent structures and accelerate work without changing its meaning. But in a non-engineered environment AI becomes a force for drift: it shifts the shape of work, introduces untraceable decisions and produces outputs that are difficult to attribute and harder to defend. In that situation a firm ends up with more text, more speed and less control, which is the worst combination under a risk-based regime.
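As a sketch of what operating inside clear constraints can mean in practice, the following assumes a hypothetical extract_fn standing in for whatever model call a firm has approved; the structure around it, not the model, is what prevents drift.

    def run_constrained_extraction(document_text, allowed_fields, extract_fn):
        """Run an AI extraction only within an agreed structure.

        extract_fn is a placeholder for any approved model call; it is
        expected to return a dict mapping field names to values.
        """
        result = extract_fn(document_text)
        unexpected = set(result) - set(allowed_fields)
        if unexpected:
            # The output drifted outside the agreed structure: stop and
            # escalate rather than silently accepting unreviewed content.
            raise ValueError(f"Output outside agreed structure: {sorted(unexpected)}")
        return {field: result.get(field) for field in allowed_fields}

Everything the model produces either fits the agreed structure or is rejected and escalated; drift is caught at the boundary rather than discovered in a filed document.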
The best move firms can make now (Q1-Q2 2026) is to start making the firm’s workflows legible, bounded and auditable, because that is what turns AI from a risk multiplier into a productivity engine. Inventory the workflows where AI is already creeping in. Define approved patterns of use that are easier to follow than improvisation. Put gates at high-consequence moments such as client advice, court outputs and undertakings: anything that touches rights or carries external reliance. Tighten procurement so tools cannot enter the firm faster than governance can follow. And build evidence as a by-product of the workflow rather than an after-the-fact scramble. Once that is done, legal technology stops being a series of pilots and becomes an operating upgrade. The EU AI Act is simply the clearest signal yet that the era of casual, invisible AI use in professional practice is ending. Workflow readiness is now the foundation of defensible, scalable practice.
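A final sketch, again with hypothetical names, of what evidence as a by-product can look like: a gate that records who reviewed an AI-assisted output before it leaves the firm, writing the record as a side effect of the check itself.

    import datetime
    import hashlib
    import json

    def gate_check(output, matter_id, reviewer, approved, log_path="ai_audit_log.jsonl"):
        """Record the review of an AI-assisted output at a high-consequence gate."""
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "matter_id": matter_id,
            "reviewer": reviewer,
            "approved": approved,
            # Hash rather than store the text, so the log proves what was
            # checked without duplicating confidential content.
            "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        }
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        if not approved:
            raise RuntimeError("Output blocked at gate; escalate per workflow policy.")
        return output

Because the record is written whether or not the output passes, the evidence the Act anticipates accumulates as the work happens rather than being reconstructed afterwards.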