
Doctor Strangellama: or How I Stopped Worrying and Learned to Love AI Slop.
If you were paying attention to music news last month, you may have come across a curious story, an unnerving one even, depending on your priors. An AI-generated artist called IngaRose climbed to the top of the iTunes music charts after going viral in TikTok videos. Having listened to it, I’m perhaps as baffled as anyone else as to how this blandly mediocre, auto-tuned artist became popular. A remarkable thing about the project is that it is reasonably open about what it is: in its Instagram bio, the artist describes the songs as having human-written lyrics, with stems and arrangement refined using Suno, and the human behind the project appears to be a songwriter called Ingrid using IngaRose as a sort of synthetic calling card (though those details are disputed). Still, the broader point stands. We are currently witnessing the rise of so-called AI slop everywhere we look, and one could even argue that the images I generate for my own blog posts fall under that category.
Why is this happening? Has everyone gone mad and lost their taste all of a sudden? And should the law get involved at all?
The rise of AI slop
All of a sudden AI-generated content is everywhere. Viral videos on almost every platform, images and memes flooding Facebook, those adorable cats playing instruments in the middle of the night, the deeply strange “Italian brainrot” universe of tralalero tralala and bombardino crocodilo (I’m not going to link to that one because, well, you’ll find out); the fruit “Love Island” parody, I could go on. I have my own guilty favourites: the viral Arsenal song that has somehow become popular with the players themselves, and the wonderfully absurd “The Shape Store“, which is genuinely innovative in its own right. And don’t get me started on the masterpiece that is the Street Fighter-styled battle of the philosophers, where Nietzsche gets his arse handed to him by an amazing cast of thinkers.
But what is AI slop, really? The term itself is doing a lot of work. In its narrow sense it refers to low-effort, mass-produced AI-generated content designed to harvest engagement: the Shrimp Jesus images on Facebook, the endlessly recycled motivational TikTok narrations, the algorithmically-optimised YouTube channels pumping out fake history. In a broader sense it has expanded to cover almost any AI-generated content at all, which is where I start to get uncomfortable with the label. AI slop becomes a pejorative term that, by meaning everything, ends up meaning nothing.
One thing I notice is that while the term started as an insult with negative implications, it is starting to become almost a badge of honour among some creators.
Why is this happening?
Is there a lot of AI-generated crap out there? Sure, but that is the norm. I am reminded of Sturgeon’s Law: when confronted with the claim that most science fiction is crap, sci-fi author Theodore Sturgeon replied that “ninety percent of everything is crap”. This has always been true of music, film, television, literature and journalism. We tend to forget that formulaic, lowest-common-denominator content has always been the norm, and that most of what gets produced in any given medium is mediocre at best. AI has not invented bad culture; it is just another medium with which to produce crap, at a speed and scale that makes the underlying mediocrity much more visible.
There is also a slightly ageist assumption built into a lot of the discourse, namely that AI slop is mostly consumed by clueless Boomers, in the same way they are supposed to be uniquely susceptible to misinformation. The data does not really support this. The viral AI content on TikTok is mostly being shared and enjoyed by people under thirty, and the audiences for Suno-generated playlists skew young rather than old. Slop is genuinely cross-generational, and I suspect the urge to attribute it to an out-group is partly a way of avoiding a more uncomfortable thought: that most people, most of the time, are not looking for capital-A Art. They want background, mood, vibes, something to scroll past. And synthetic content is extraordinarily good at filling that role. I’m Latin American, and I grew up with an ever-present sea of bad content: telenovelas, cheap romantic ballads, El Chavo del Ocho… but I digress.
Any legal issues?
Yes, there is a bit of law in this blog post. When it comes to AI slop, there are two main legal questions. The first is the familiar copyright authorship problem, which I have written about so many times on this blog that I will spare you another round (more on that forthcoming, stay tuned).
The other question is that of transparency, in other words, should there be a legal requirement to signpost AI-generated content? The EU has already answered this in the affirmative. Article 50 of the AI Act imposes a set of transparency obligations on both providers and deployers of generative AI systems. Providers must ensure that outputs are marked in a machine-readable format so they can be detected as artificially generated. Deployers must label deepfakes and certain AI-generated text intended to inform the public on matters of public interest, in a clear and distinguishable manner at the latest at the time of first exposure. The Commission published its draft Guidelines on Article 50 on 8 May 2026, with a consultation running until 3 June, and there is also a Code of Practice on Transparency of AI-Generated Content sitting underneath it which is rapidly becoming the de facto compliance benchmark.
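Purely as an illustration of what “machine-readable marking” can mean in practice, here is a toy Python sketch of a provenance manifest that binds a hash of some content to an AI-generated claim. To be clear, this is my own invention for explanatory purposes: it is not the C2PA Content Credentials standard (which is far richer and cryptographically signed), nor anything the AI Act itself prescribes, and the `Suno` label is just a hypothetical example.

```python
import hashlib
import json


def make_provenance_manifest(content: bytes, generator: str) -> str:
    """Build a minimal machine-readable marker for a piece of content.

    The manifest ties a SHA-256 hash of the content to a declaration
    that it was AI-generated, plus the name of the generating system.
    A real scheme would also sign the manifest so the claim can be
    verified; this sketch omits that entirely.
    """
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "generator": generator,
    }
    return json.dumps(manifest, sort_keys=True)


# Hypothetical usage: marking some synthetic audio bytes.
song_stems = b"synthetic audio bytes go here"
print(make_provenance_manifest(song_stems, "Suno (hypothetical)"))
```

The point of the exercise is that the marking travels with a *fingerprint* of the content rather than with the file name, so a platform could in principle check whether uploaded audio matches a declared AI-generated manifest. Whether platforms actually do that is, as discussed below, another matter.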
The framework is actually quite interesting. The deepfake labelling obligation under Article 50(4) does not apply where the content is evidently artistic, creative, satirical or fictional, nor where there has been substantive human review and editorial responsibility. So an AI-assisted song with human-written lyrics and a producer behind it, of the IngaRose variety, would not obviously be caught, at least not as a deepfake. The machine-readable marking obligation on the provider side is broader and would apply to Suno’s outputs regardless. Whether platforms like Spotify or TikTok will then surface that marking in any way that is meaningful to listeners is another question entirely, and one that the AI Act does not really answer.
My own view is that some form of transparency requirement is desirable, but I am wary of expecting it to do too much work. People who currently enjoy IngaRose tracks aren’t going to stop enjoying them because a small label appears under the title. The transparency intervention only really bites where there is genuine deception about authenticity (a synthetic singer being passed off as a real human artist on a platform like Spotify, or a deepfake video presented as journalism), and we already have a reasonably good consumer protection toolkit for those cases. The harder problem is the volume one, and I do not think labels solve that.
Concluding
I’m not particularly fond of AI slop, but then I’m not particularly fond of slop, period. Most of what is called AI slop nowadays is indeed low-quality stuff, but we are also starting to see some genuinely good things being made with AI tools by people who treat them as tools. I think there has been a degree of over-compensation in the discourse, where any AI-generated content gets tarred with the same brush, when the reality is that some of it is rather good. The interesting cultural and legal work in the coming years will be in drawing the line between the two, and I am not at all convinced that either critics or regulators have got that line right yet.
In the meantime, I’m going to sit here nervously awaiting the end of the Premier League season while listening to the Arsenal song… COYG!