If you’ve spent any time online in the last few days you may have seen pictures from DALL·E, the AI tool by OpenAI that takes text prompts and converts them into pictures. While its developers have been cagey about offering a fully working demo to the public, some researchers have made a more limited version, called DALL·E mini, available to test, prompting a flood of amusing and often bizarre pictures. Similarly, you may have come across a conversation between Blake Lemoine, a Google engineer, and LaMDA, a chatbot powered by Google’s own large language model. The leaked transcript of some of their conversations is eye-opening, and it prompted Lemoine to make the case that LaMDA is sentient.
There has been quite a lot of pushback against the idea of sentience, and while I am not qualified to weigh in on that subject, one cannot help but be impressed by some of the advances in artificial intelligence. The question of AI consciousness is beside the point, at least for now. What is interesting from the above examples is that while we were distracted by the pandemic and other global catastrophes, AI got good. Really good. It’s not only GPT-3 and DALL·E, or face recognition; with everyday tools such as translation, and the code-writing marvel that is Copilot, we are witnessing a truly transformative time, one in which AI has the capacity to become more and more involved in our lives.
I know what you’re thinking. Surely we’re prepared! AI ethics is one of the hottest academic topics in technology! All AI providers have in-house AI ethicists! And isn’t the EU about to deploy a shiny new AI Act? I’d like to think of myself as an optimist, but this is too little, too late. The AI Act is designed with some specific types of AI in mind, and one could argue that it is reactive, not proactive. As for the army of AI ethicists… no disrespect, but tech companies mostly employ them for ethics-washing; their input seems to have little to no effect on the final deployment of AI tools.
However, the most pressing issue with our preparation for the coming age of AI is that we are thinking about it in the wrong terms, and I count myself among those who have been missing the bigger picture, at least until recently. Most debates about AI tend to concentrate on the potential harms. Artificial intelligence liability. AI as a tool for oppression in the shape of facial recognition. AI decision-making that removes human agency and entrenches discriminatory practices. Accidents caused by self-driving cars. AI trained on data that is offensive and built on stereotypes.
But while these topics are important, AI will bring about challenges at the wider societal level that are not necessarily evil or detrimental; they just are. What happens to academia when anyone can write a decent essay from a few prompts? What happens to art when everything you can describe can be easily painted by an AI? What happens to music when an AI composer can churn out thousands of passable tunes? What happens to programmers when AI can code faster and better than any one person? What happens to journalism when AI can produce quick copy from a few key prompts? What happens to photography when you can create images of people who do not exist?
All of these technologies are already here; they’re just not widespread. We have to start thinking about content creation in different terms. We’re reaching a point at which AI outputs are passable, sometimes even beautiful, and while one could argue that there’s no competition with human creativity, that is beside the point: human creators will be competing with unlimited, practically free content. It doesn’t matter that the picture is not perfect. It is free, and you made it by typing a couple of prompts into an AI that is itself probably free because it is harvesting your data to further train the model.
These changes will happen, are happening, have already happened. We just haven’t realised that the world has changed, and we haven’t even started having the conversations about what this new AI-enabled world will mean.
In his Culture novels, Iain M. Banks envisioned a future where humans co-exist with benign AI, with all menial tasks performed by machines, allowing humans to live in a techno-utopia. Interestingly, though, Banks never really made the point that this was a future that was desired and desirable; it just was. In many of the novels, the Culture, this techno-utopian society, is portrayed as hypocritical, often relying on a shadowy organisation called Special Circumstances (SC) when things go sour. The Culture is ambiguous, its heroes flawed, its accomplishments suspect; its anarchic ethics often fall in favour of the status quo. Humans could be seen as utterly free, or as pets kept around by the AI overlords for their own amusement.
While we’re very far away from the world of Culture ships and megastructures, perhaps we can take the novels as a warning that AI could change us in ways we do not foresee. The techno-utopia could be the best thing ever to happen to us, or it could be a jail, or even worse, a zoo. What is clear is that change is inevitable, so we should perhaps start thinking about how we will organise society around it.
For now, I’ll be off playing with DALL·E, I wonder what happens if I ask for “cave painting llamas”? Only one way to find out…