Utopia or dystopia?

A few years ago, a student came to see me during office hours to talk about their assignment. They were upset because their essay had received a low mark—not a fail, but much lower than expected. I knew from seminars that they were exceptionally bright and conscientious, participated in discussions, and had completed all the reading. Yet their essay was quite poor, hence the frustration. I tried to guide them to some materials on how to write legal essays, but they said, “It’s easy for you because you can write, but I can’t; it’s really difficult!”

Setting aside the pedagogical debate about whether we should rely on essays to assess students, the conversation stuck with me as evidence that not everyone can write, and that sometimes we shouldn’t expect people to. Whether because of disability, an underprivileged background, or writing in a second language, we should never take writing skills for granted. Plagiarism and essay mills exist in response to this fact.

Why am I discussing this in the context of AI? Because I believe this helps explain the explosion in popularity of AI language tools such as ChatGPT. For people who can’t write, large language models (LLMs) have become a very useful tool, capable of checking grammar immediately or helping to write an important email or letter with just a few instructions. They are similarly useful for people who want to write code with little experience or those learning a new programming language. In short, people find this technology extremely useful. The same goes for image generators; I personally have found numerous uses for them, mainly to create more personalised slides and generate images for this blog and social media. Moreover, it has become a hobby for me; I enjoy playing around with prompts to see what I can produce.

If you only read Twitter, these assertions might come as a surprise. There’s a dismissive quality to some of the criticism: LLMs are over-hyped, they’re inaccurate, they’re not intelligent, and they’re just glorified autocorrect. While some of the criticism is warranted, I feel that it misses the point of why people are using these tools. True, there are some who appear to over-hype the technology, and others who believe that we are just a few months away from sentient machines and Artificial General Intelligence (AGI), but in my experience, these are the minority. Not everyone thinks they’re interacting with an intelligent or sophisticated agent; spending any time with Bard or ChatGPT will make it evident that you’re dealing with a limited language model.

The truth is that for many of us, generative AI tools such as language and image models are just useful applications, and they’re here to stay. This may sound like a bold statement, given the many hurdles that will have to be overcome. We are likely to see years of litigation as well as further regulatory action. We’re also going to have to grapple with real problems presented by the use of generative AI, from IP to privacy concerns. We’ll have to address various licensing issues, opt-outs for creators, technical tools to manage risk, and many other challenges.

But for those who are now using these tools daily, I don’t think there’s any going back. We’ve gotten a taste of the future, and the technology may only improve and become more embedded in every aspect of our daily lives. I don’t think these are merely glorified auto-complete apps, but I also don’t believe that we’re facing sentient machines. We’re experiencing a new generation of tools that I like to equate to the invention of other communication tools such as the printing press, recorded music, radio, TV, film, and photography. Reading lets you experience other people’s thoughts, while you don’t have to be a musician to listen to music. Photography allows you to see the world through other people’s eyes using technology. I think that we may start thinking about generative AI in the same way, enabling people who can’t write or draw to express their thoughts using the accumulated knowledge of the world.

Yes, we will have to sort out the legal details, just as we did with other technologies. But we’ll get there.

I wrote the above and asked GPT-4 to clean up the grammar, which it did; they’re still my words, but cleaner. I also asked it to write a conclusion (it’s pretty good at summarising), and it produced this:

“In conclusion, despite the challenges and the criticism, generative AI tools like ChatGPT have already made a significant impact on how people write, create, and communicate. As the technology continues to improve and becomes more integrated into our daily lives, it is likely that these tools will remain indispensable for many users. Rather than dismissing their potential, we should focus on addressing the concerns and maximizing the benefits they can bring to a diverse range of individuals. The future is here, and it’s time to embrace the possibilities that AI-assisted language and image generation have to offer.”

Not how I would have concluded the blog post, but it will do. It’s only missing the one-liner.

I’ll reserve those for myself.


2 Comments

Gilbert · April 16, 2023 at 10:11 am

Somewhere in “The Complete Murphy’s Law” you run into a small pearl of wisdom: “Any argument carried far enough will end up in semantics”.
Deciding whether LLMs are really intelligent (or even ‘sentient’) would require us first to define exactly what those words mean, which is definitely harder than it looks.
Even today’s vastly perfectible LLMs would easily pass the Turing test, were it not for that obsessively repeated “As an AI language model, I…” put there to exorcize our Frankenstein complex.
But the Turing test, sound as it looked when Turing proposed it, now seems questionable due to its implicit subjectivity: “which” human evaluator should the machine convince of its humanity — a psychologist, a politician, a cleaning lady, a smart child, an AI specialist? These contraptions have become so good at sounding human that they would easily dupe most of humankind.
And if you can pretend to play violin well enough to persuade a musician that you’re a true violinist, then you ARE a violinist, no matter how you do it.

That being said, I concur that despite their almost incredible skills, today’s LLMs are just tools.
A legitimate question, though, is how much longer they will STAY just tools.

Privacy Concerns in Tech: TikTok Bans and AI Legal Guardians – AI Lawyer Talking Tech · April 17, 2023 at 3:42 pm

[…] Artificial intelligence is neither hype nor utopia […]
