Over the last few weeks I have witnessed a number of interesting discussions breaking out on social media. A couple of weeks ago a US-based academic admitted using AI in some of his writing, which prompted a response from a prominent AI researcher. I’m not interested in commenting on the particulars of the dispute, and I’m not naming the people involved, but if you were paying attention to AI Twitter and Bluesky recently, you will probably know which online drama I’m talking about. Around the same time, a social media poll of legal academics found that 45% of respondents are now using AI to write either a lot or a little of their articles, which prompted another round of outrage in many circles. Another legal academic wrote that, anecdotally, a lot of people had admitted to using AI to write their abstracts, an admission that was also met with angry reactions, and similar confessions from anonymous legal writers have generated quite a stir.

The scale of the issue

In my experience as a journal editor, I have seen evidence of growing AI use in writing. I don’t want to reveal too much, but the journal uses a screening service that identifies potential plagiarism and AI use in submissions, and I have noticed a marked increase in articles flagged as having used AI in some form. I won’t give specific figures, but the increase is substantial, and I have heard from other editors that the same thing is happening everywhere. It’s not only the clear use of AI that is at stake; there are indications that AI may be boosting submission rates across the board, with an article in Scholastica calculating an increase of about 50% this year.

If this is happening in the sacrosanct hallways of academia, the last bastion of the written word, imagine what is going on elsewhere. Well, we’re starting to get an idea. A recent survey conducted by the UK’s Higher Education Policy Institute (HEPI) found that 95% of students were using AI, and 12% admitted to having used AI-generated text in their assessments, an increase on previous surveys. Since this is self-reported, the true figure is likely much higher; I can say with a high degree of confidence that I expect the actual figure to exceed 50%.

In the wider public, AI use is also growing. Research from last year shows that 31% of people in the US interact with AI at least several times a day, while two thirds of teens report using chatbots regularly, and around half of adults under 50 use AI about once a day or more. Another report found that around 40% of all work-related ChatGPT messages in July 2025 were for writing tasks, more than research, programming, or analysis.

It is easy to see how all of the above may be worrying to many people. For thousands of years writing has been the pinnacle of human thinking; it is how we transmit ideas, communicate with each other, explore deep philosophical questions, tell stories, and shape society. The suggestion that a machine can replicate even a fraction of this capacity attacks deeply held beliefs about what it means to be human. The idea that we can offload this skill to an algorithm is distasteful, a herald of the end of civilisation as we know it.

But nobody should be surprised that any use of AI in writing is met with immediate opposition from professional writers, academics, and the media. For the writing professional, writing is a fundamental skill that is inherently human, but it is also how the top thinkers shape opinion. And for those who write constantly, either for work or as a hobby, writing is fun; it’s pleasant to shape sentences to convey ideas and emotions.

Why is there an increase in AI use for writing?

The first instinct will be to blame lazy people: offer your average slacker an opportunity to do something with minimum effort, and they will take it most of the time; AI is a lazy person’s dream. But blaming lazy people is, well, lazy. They have always been around; AI is just another way to take the easy road.

The other main reason is that not everyone enjoys writing. I know, shocking but true. In my experience as a teacher, when presented with an alternative to writing an essay, many students will choose the alternative. Not everyone is a good writer, many people have learning difficulties, many of us are non-native speakers, others never had a good education, or they were never required to learn to write well. Moreover, writing is hard work. Even for those of us who enjoy it, the process of organising thoughts into coherent sentences, finding the right words, restructuring a paragraph that isn’t quite working, is time-consuming and mentally draining. And for those who don’t enjoy it, it can feel like an insurmountable mountain. There is also the problem of confidence; many people have something worth saying but are held back by the fear that they will say it badly, that their grammar will betray them, or that they will be judged for how they express themselves rather than for what they are trying to express.

Then there is the fact that, as a whole, we are collectively writing less and less outside of everyday messaging. Young people in the UK are writing less for fun, a figure that tends to predict whether they go on to write later in life. Anecdotally, I kept diaries and have been writing since I was a kid, and I believe that early habit is what developed my life-long love for writing. Another indication is that fewer people are reading recreationally: one report found that reading rates in the US have fallen 40% in the last two decades. The relevance here is that there is a high correlation between reading and writing; if people aren’t reading for fun, they definitely aren’t writing for fun.

Another reason for wider adoption is one that I’ve rarely seen mentioned, but it’s very important: the models are getting better all the time. While AI has a specific voice that many find annoying, there’s no doubt that for most everyday writing tasks it serves the purpose. I have been feeding my essay questions to AI each assessment period, and last year so-called reasoning models started producing essays that would pass my class, with a few tweaks to the references. As a researcher, I have found LLMs extremely useful for some tasks, particularly changing citations and cases to the OSCOLA format, or finding whether there have been new developments in an area I’m writing about, which they do surprisingly well. So pretending that chatbots are not useful will not get us anywhere.

And finally, AI use for writing is growing because over the years we have created a number of unintended incentives to use it. We demand written essays from students, a skill that they will likely never use in practice. In academia, we have tied people’s employment and promotion to the “publish or perish” paradigm, so publishing at any cost is encouraged, and this acts as an incentive to use AI. And every job has its share of mindless admin that can now be completed with an LLM.

So we can’t possibly be surprised that people are using AI to write: a large number of people don’t like writing, don’t enjoy it, or are not good at it. If you present members of the general public with a tool that can write for them, they will take it, and we can’t expect otherwise.

The near future

We are starting to see more discussion about this in the wider discourse because many people are now encountering AI writing in every sector, and this is going to have an immediate negative effect in all walks of life.

In academia we’ve been hit with an unending barrage of AI-generated assignments, and the education sector is in a bit of a panic, not knowing what to do exactly. At first there were clear rules that AI use in assignments was personation, in other words cheating, since the student didn’t write the essay. This was either ignored or abandoned as the scale of the problem became clear: it was practically impossible to police AI use. My own practice has been to take a pragmatic approach: I read everything in front of me assuming it was written by a human, and mark it accordingly. A bad essay is a bad essay regardless of how it was made, and I just don’t have the time or resources to chase every suspected AI use. Hallucinated references are often a big giveaway, and I penalise those heavily.

The legal sector has also been hit, with hundreds of examples of hallucinated cases being brought to the attention of courts around the world. While still difficult to police, enforcement is easier here: submitting hallucinated material to a court should be, and is being, punished. But we have to assume that for every case that is caught, thousands get through.

In publishing we are also starting to see an increase in submissions of AI slop, and the Internet and social media are being inundated with AI-generated text. Our own journal policy is to run a screening system that identifies possible AI use, but a flag is not in itself a reason to reject an article, because often the AI will have been used for copy-editing or translation. I edit a journal where the authors are mostly non-native speakers, so this is an area where AI can actually help people publish their ideas. So I read each paper with the same assumption I apply to assessments: I assume it was written by a human, and act accordingly.

But we should be prepared: things will only get worse as models improve and detection becomes more difficult. We will soon be presented with a “Dead Internet” scenario, where people send emails that are read and summarised by an AI, admin forms are AI-generated and then processed by a machine, and students submit AI-generated essays that are read and marked by an AI.

You ain’t seen nothing yet.

Solutions

So, this is a nightmare scenario, right? For the most part, in the short term, the answer is yes. But I’m a glass-half-full kind of guy, so I think that we’ll find ways around the AI slop hellscape.

The first thing I’m going to propose may not be popular, but I don’t see how we can move forward otherwise: we will have to adopt a presumption of humanity. Let me explain. Many in the writing professions are extremely confident that we can identify AI writing most of the time, and while this is probably still the case, this is the worst the technology is ever going to be, and detection will only get more difficult. Almost two years ago I wrote a guide on how to identify AI-generated text, and within a few months those giveaways had already disappeared. You have probably seen videos and read essays telling you the latest ways to identify AI text, but those too are already outdated. I’m sure we’ll find a few other hallmarks, but eventually these will be removed as well. So if AI detection becomes impossible, we will have to assume humanity just to operate normally. As I mentioned, this is serving me relatively well in editing and marking: I assume that if something has someone’s name or signature on it, they wrote it, and they should assume all of the consequences of that text.

For the same reason, I don’t think that any sort of legislative solution will work. The technology is too far ahead to expect any sort of ban. We could probably try to enact legislation that sets the obligation for LLM developers to clearly identify when an AI has been used to generate text, but this would only open the door for models that have been trained in countries without such restrictions to become popular. And then there will probably be AI humanisers that will get rid of such identifiers.

A solution that appears to be emerging in many writing circles is to loudly attack anyone who is using AI text, and to try to build consensus in the writing professions to oppose any sort of AI use. Writers are now at the stage artists were at back in 2022: AI is just about to get good enough to threaten people’s jobs. So there is a bit of a siege mentality emerging, where the first instinct is to punish and ostracise anyone who breaks the code. I’m highly skeptical of this approach, as it is likely to lead to witch-hunts, false accusations, purity spirals, and other nasty online behaviour that is unlikely to fix the problem.

Eventually, I think that we will find some balance. Perhaps there will be a new metadata standard in which you have to provide evidence of how much time you spent on a document (although this too could be faked). In academia, assessment methods will change, the reliance on the written essay will be over. We will also hopefully start to look at some longstanding workflows and remove useless paperwork, perhaps moving to simpler methods. And maybe, just maybe, one day academia will move away from the tyranny of the “publish or perish” regime.

There is also possibly going to be a move away from written media and into video and audio, although these can also be subject to AI generation. I think that we may see a return to in-person events, we will give more prominence to public lectures, and instead of a paper, we will listen to more presentations, which will become more valued as we look for human connection in a sea of AI-generated abundance.

But we also have to understand the reasons why people are adopting AI in such large numbers, and we have to make sure that we don’t demonise every such use. I have mentioned that not everyone enjoys writing or is good at it, and we should recognise that as a valid reason to use AI for some tasks.

And finally, we should try to meet people where they are. A couple of months ago I received an email from a former student thanking me for something I had written that they found useful. It was long, florid, and clearly written by an AI, but I found it endearing and touching because I assumed that the sentiment behind it was real. Am I deluding myself? Maybe, but I can’t choose to live my life assuming the worst of people.

Concluding

I have to admit that I’m sort of apprehensive about publishing this blog post, the torches are out in some circles about all types of AI use, and I don’t know if my pragmatic approach will be misinterpreted. But I think that this is a very important conversation to be had as things are probably going to get much worse. We should be discussing the effects of living in a world where writing can be done easily by everyone, whether a loss of writing skills will be detrimental to society as a whole, and whether this will lead to a loss in cognitive functions, as many fear. But we cannot ignore that this will continue to happen, and that it may even accelerate despite protestations from the writing professions.

I will admit that my feelings about this development are ambivalent. I love writing. I have always loved both reading and writing, and I am someone who has the privilege of doing it for a living. I have always considered writing one of the best ways to shape my own thoughts; I will often teach myself about a subject by writing about it, because if I can explain it in written form, I think that I can understand it. This skill has allowed me to learn about all sorts of technologies, and it has also allowed me to write for fun, which is what this blog is all about. I can’t claim to be a good writer, but I can’t overemphasise just how important writing is to me. So I have always recommended writing to my students as a good skill to pick up beyond their obligatory assignments. Learning to write is a skill for life.

But I am also aware that not everyone enjoys it, and many try to avoid it at all costs. Plagiarism, cheating, paying for essays, ghost-writing, employing assistants to write drafts: all of these are practices that predate AI.

Personally, I will continue to write until they prise my keyboard from my cold dead hands. I suspect that for most people who enjoy it as much as I do this will also be the case. Humans love to create, and that will never go away.

Note: I’m keenly aware that this blog post (and past ones) will probably be put through AI detectors as soon as it is published. As a non-native speaker, I’m not ashamed to admit that I often use AI for copy-editing, though not always, as I find that LLMs tend to erase my voice. This blog post is 100% written by me; I only asked Claude for help editing a tricky paragraph that I was not happy with.

But I love writing too much to give it away to a machine, my prolific writing output before AI can attest to that.


4 Comments

Anonymous · March 22, 2026 at 2:36 pm

Brilliant piece. Particularly like the acknowledgment of the incentive structure and that for many people AI is a gift because for them writing is not easy, it’s a pain. The latest Hardfork episode on this from a journalistic perspective is very interesting. Not least on the idea that GPT-2 was a better writer due to less stringent post-training, but also on the distinction of writing stages: ideation, structure, writing, editing etc. Like you I use AI a lot as my editor, but also for research, for fine-tuning certain paragraphs and for finding examples / checking facts – but rarely for the ideation, structure or first draft. This seems to work for me. But like you, I enjoy writing, so this may not work for everyone. P.S. I am on the same page on AI detection. Our company looked into this (admittedly a few years back) and the results were not great, particularly regarding false positives for non-native speakers.
[Stephan Geering]

    Andres Guadamuz · March 22, 2026 at 6:14 pm

    Thanks Stephan!

Anonymous · March 22, 2026 at 6:24 pm

I’m a very old man who learnt to write with stylus and nib and still remembers the city streets soiled with horse poop.
Throughout such a long life I had to part with many old things I had grown accustomed to and to welcome (not always with good grace) a series of new contraptions meant to replace them: stylus & nib -> fountain pen -> ball-point -> typewriter -> word processor

-> ChatGPT, just to name a few. And honestly I cannot even complain about that, for as a researcher I’ve put a couple of stones on the cairn myself.

The basic tenet I learnt from that is “if you cannot fight it, strive to accept it” — not that there’s much wisdom in there: the choice is very limited and tearing your hair out or becoming a hermit wouldn’t help much. Yet the notion agrees well with the ‘presumption of humanity’ you propose: AI is still a tool, though a pretty sophisticated one, and the human being who uses it should be held responsible for all the consequences of its use, misuse or abuse — you wouldn’t charge a gun with murder.

Still the main point seems to be more about the possible unfair advantage some will (and do, and did) take by using AI to generate outcomes they lack the knowledge or the skills to produce themselves.
To complicate matters, the day the AI-produced outcomes will be totally indistinguishable from human-written ones seems to be right around the corner, which promises to cross off any hope of legally regulating this kind of abuse.
So what? “If you cannot fight it, strive to accept it”: I’m afraid there will be no other way than reclassify the AI as a sort of cerebral prosthesis and legalise its use as such.
After all we already have the paralympics, right?

March Copyright Reads 2026 – Open Research · March 27, 2026 at 9:26 am

[…] Why are people adopting AI to write? […]

