Dr. Barry Scannell

IT’S HERE! The Getty Images v Stability AI judgment from the English High Court has finally landed at a whopping 205 pages. I’m STILL reading it. Wait until you see our massive deep dive analysis! But from my initial scan – it could reshape how legal systems understand AI. This is VERY important – Mrs Justice Joanna Smith concluded that the Stable Diffusion models “do not store or reproduce any [of Getty’s] copyright works and have never done so.” That may sound like a narrow point, but to my mind it strikes at the heart of how regulators across Europe, including the European Data Protection Board, have been analysing AI systems.

The judgment dismisses Getty Images’ claim of secondary infringement on the basis that the model does not contain or reproduce the works it was trained on. In paragraph 758(viii), Justice Smith was clear that while an infringing copy can be an article under the Copyright, Designs and Patents Act, a trained AI model is not one. It is not a library of images or text. It is a network of statistical weights and parameters that describe relationships, not the data itself.

That technical conclusion has implications far beyond copyright. It potentially collides with the EDPB’s recent opinion on personal data in AI models, which said that a model can only be considered anonymous if “the likelihood, either direct or probabilistic, of extracting personal data” is insignificant.
Justice Smith’s ruling suggests there is nothing to recover because the data isn’t there. The EDPB opinion said “… information from the training data set, including personal data, may still remain ‘absorbed’ into the parameters of the model, namely represented through mathematical objects … [they] may still retain the original information of those data…”

This reflects two competing worldviews about what AI models actually are. On one reading, the EDPB treats a model as if it functions like an abstract mathematical database, holding and potentially revealing personal data. The English High Court has recognised that models do not operate that way. An AI model is just a bunch of numbers. Without an AI system built around it, can it produce anything at all?

I find it striking that much of Europe’s regulatory discussion about AI assumes that data somehow sits dormant inside a model, waiting to be extracted. That assumption underpins recent privacy guidance, transparency obligations, and even proposed audit mechanisms. Training involves learning patterns, not storing content. The distinction matters, both legally and scientifically.
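The “patterns, not content” point can be made concrete with a toy example. A model fitted to training data retains only a handful of parameters describing the learned relationship, not the data points themselves. This is a minimal sketch using ordinary least-squares regression, not a claim about how Stable Diffusion or any production model works:

```python
# Toy illustration: "training" distils many data points into a few parameters.
# After fitting, the model consists of exactly two floats; the training set
# itself is not stored anywhere in the model.

def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b to a list of (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b  # the entire "model": two parameters

# A thousand training examples drawn from the relationship y = 2x + 1.
training_data = [(x, 2 * x + 1) for x in range(1000)]
model = fit_line(training_data)
print(model)  # approximately (2.0, 1.0): the learned pattern, nothing more
```

The fitted model captures the relationship in the data and can reproduce outputs consistent with it, but the thousand original examples are gone: only the statistical summary survives. The legal question is whether that distinction holds at the scale of billions of parameters, where memorisation of individual examples becomes harder to rule out.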

Our legal systems are struggling to keep pace with the technical realities of machine learning. Both copyright and data protection frameworks rest on old notions of copying and storage. When those notions meet systems that learn rather than store, legal coherence breaks down.

This decision will reverberate across regulatory debates for years to come. It challenges lawmakers, regulators, and technologists to speak the same language at last.
