Roads? Where we’re going we don’t need roads.

Text from my editorial in IIC.

In courtrooms across the globe, a quiet crisis is brewing. As the volume of artificial intelligence copyright litigation increases, judges are being asked to decide the fate of technologies that often operate in dimensions the human mind struggles to visualise accurately. From the High Court in London to the Munich Regional Court, the central question is no longer just “Who owns this?” but “How does this actually work?”

The answer to that question is rarely simple. AI is a complex technology that involves concepts such as neural networks, high-dimensional vector spaces, latent diffusion, and probabilistic token generation. Yet, the fate of these cases, and by extension, the future of the creative and AI industries, often rests on how well a non-specialist judge grasps these concepts. This is by no means a new problem, but we are being presented with a new gap between the “black box” of AI and the black letter of the law. This is why we need the figure of the “tech lawyer” to rise again.

Writers, artists, and legacy media conglomerates are suing AI companies and tech giants, alleging that the ingestion of their works to train these models constitutes copyright infringement. The defendants often argue that studying data to learn patterns is no different from a human student reading a library of books. On the surface, it sounds like a standard copyright dispute. But beneath the surface lies a labyrinth of technical nuance. Does a model “memorise” a book, or does it learn the statistical probability of which words follow others? When an image generator creates a picture in the style of a famous artist, is it collaging existing pixels, or is it hallucinating a new image from mathematical noise based on a learnt understanding of aesthetic concepts?
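The distinction between copying text and learning statistics can be made concrete with a toy next-word model. The sketch below is illustrative only (a real LLM learns billions of parameters, not a lookup table, and the corpus here is invented): after training, the object holds nothing but transition counts, and generation samples from those counts rather than retrieving stored text.

```python
from collections import defaultdict, Counter
import random

def train_bigram(corpus):
    """Count, for each word, which words follow it and how often.
    The resulting table holds statistics about the text, not the text itself."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    """Sample a continuation by repeatedly picking a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # dead end: no observed successor
            break
        nxt_words, weights = zip(*followers.items())
        out.append(rng.choices(nxt_words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the log"
model = train_bigram(corpus)
print(generate(model, "the"))
```

The trained `model` contains only frequencies; whether those frequencies still amount to a “copy” when they pin down the source closely is precisely the question the courts are being asked to answer.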

These are not semantic differences; they are the entire point. If a judge believes an AI model is a high-tech Xerox machine, infringement is obvious. If they understand it as a statistical engine that learns abstract concepts, the legal landscape shifts entirely. The margin between these two understandings is where a case is won or lost.

It is no insult to the judiciary to suggest that it is facing a steep learning curve. Judges are experts in law, not machine learning. In recent months, we have seen instances where this gap has led to widely diverging decisions. There is a real risk that we could see the establishment of legal precedents based on flawed technological metaphors. In copyright law, metaphors are powerful. Is an AI training set like a “mixtape”? Is it like “Google Books”? Is it a “compression algorithm”? If a lawyer successfully convinces a judge that a large language model (LLM) is simply a compressed database of the internet, a ruling against the AI company becomes likely. If the defendants successfully argue it is a “synthetic brain” learning from observation, the outcome swings the other way.

The problem is that traditional lawyers often struggle to make these arguments effectively because they rely on the analogies provided to them by outside experts, often without fully grasping the mechanics themselves. This is where tech lawyers become indispensable.

The term “tech lawyer” might conjure images of a solicitor who is simply good with Excel or knows how to use the latest software or trendy app. However, in the context of high-stakes AI litigation, it means something far more specific and rare. These are academics and practitioners who possess a dual fluency: they can parse a complex dataset as easily as a contract, and they understand the architecture of a neural network as well as the architecture of a legal argument. This dual expertise is shifting the power dynamic in the courtroom. In the past, technical explanations were the sole domain of the expert witness, usually a computer scientist brought in to explain the technology. But expert witnesses, while brilliant, often struggle to speak in legalese. They may explain the technology accurately but fail to connect it to the dispositive legal element.

Tech-savvy lawyers and academics can bridge this divide. Instead of parroting expert advice, they can curate it, pairing accurate technical explanation with the legal points it supports. They know which parts of the “transformer architecture” are legally relevant and which are engineering trivia. They can explain why the concept of “overfitting” in a model matters for a copyright claim, using language that resonates with judicial logic.
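The overfitting point can be illustrated with a minimal, hypothetical sketch (not a real model): when the “training set” is a single sentence, every word has exactly one observed successor, so deterministic generation regurgitates the training text verbatim — the statistical model has, in effect, memorised the work.

```python
from collections import defaultdict, Counter

def train(corpus):
    """Build a table of observed next-word counts."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def greedy(counts, start, length=20):
    """Always pick the most frequent follower -- deterministic output."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

tiny = "it was a bright cold day in april"  # a one-sentence "training set"
model = train(tiny)
print(greedy(model, "it"))  # prints: it was a bright cold day in april
```

An overfit model is one whose statistics trace the training data this closely, which is why the concept bears directly on whether a protected work can be extracted from a model.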

This kind of translation is an art form. It requires a deep enough understanding of the tech to know which simplifications are permissible and which are misleading. A lawyer who doesn’t truly understand the technology often risks using a bad analogy that falls apart at the first examination.

The rise of the tech lawyer is not just a matter of professional development; it is a necessity for justice. If courts get these decisions wrong, the consequences are severe. A ruling based on a misunderstanding of how AI “reads” data could effectively hinder the training of large models, slowing down the development of the technology in that jurisdiction. Conversely, a ruling that fails to appreciate how easily these models can replicate the heart of a copyrighted work even without exact copying could devastate the livelihoods of human creators, leaving them without recourse against machines that mimic their life’s work.

We have already seen early skirmishes where technical details determined the outcome. In cases involving code-generating AI, the defence often hinges on whether the model is reproducing specific, distinct chunks of code or merely functional logic. A judge who cannot distinguish between “expressive code” and “functional syntax” is ill-equipped to make that call. More recently, we have had two very different decisions in the UK and in Germany regarding AI training. In the Getty Images case before the High Court of England and Wales, the judge found that the models do not store the training data, and so the model weights were not infringing reproductions. In the GEMA case in Munich, the judge found that models are capable of memorising data, and therefore could infringe copyright. How the experts explained the details of the technology in each case had a significant bearing on the final result.

The demand for this dual competency is reshaping legal education and hiring. Law firms are no longer just looking for the best debaters; they are recruiting computer science graduates and former software engineers. Law schools are beginning to offer courses not just on “Law and Technology” (which often focuses on policy) but on the actual mechanics of emerging tech. However, the supply is nowhere near meeting the demand. For now, the tech lawyer remains rare, a high-value asset capable of translating the binary code of the machine into the moral code of the law. As AI copyright litigation moves from preliminary hearings to landmark rulings, the voices that will matter most are those that can speak both languages. The fate of our digital future may well depend on the lawyer who can look a judge in the eye and explain, with perfect clarity, exactly what happens inside the machine.

As an academic who has been enamoured with technology for many years, I have found the role of translator to be a rewarding aspect of my writing. We need more people who are confident in reading technical papers and understanding AI models at a deep level. For fellow tech law academics, this moment is both an opportunity and a responsibility. For years, those who wrote about the interplay of law and technology were sometimes dismissed as niche enthusiasts, safely relegated to the “cyberlaw” corner of the curriculum. We were often seen as the nerds of the profession, the geeky tech enthusiasts easily ignored. AI copyright litigation has made it abundantly clear that the mainstream of IP can no longer treat technology as someone else’s problem. Copyright law is being shaped by technical assumptions, whether courts acknowledge it or not. Academics who can write, teach and testify across that boundary have a chance to influence not just one case, but the conceptual vocabulary with which an entire generation of lawyers will think about AI.

