Cyberpunk Llamas


If you’ve spent any time online in the last few days, you may have seen pictures from DALL·E, the AI tool by OpenAI that takes text prompts and converts them into pictures. While OpenAI has been cagey about offering a fully working demo to the public, some researchers have made a more limited version available to test, called DALL·E mini, prompting a flood of amusing and often bizarre pictures. Similarly, you may have come across a conversation between Blake Lemoine, a Google engineer, and LaMDA, Google’s large language model chatbot. The leaked transcript of some of these conversations is eye-opening, and it prompted Lemoine to make the case that LaMDA is sentient.

There has been quite a lot of pushback against the idea of sentience, and while I am not qualified to weigh in on that subject, one cannot help but be impressed by some of the advances in artificial intelligence. The question of AI consciousness is beside the point, at least for now. What is interesting from the above examples is that while we were distracted by the pandemic and other global catastrophes, AI got good. Really good. It’s not only GPT-3 and DALL·E, or facial recognition, but also everyday tools such as machine translation and the code-writing marvel that is Copilot. We are witnessing a truly transformative moment, with AI that has the capacity to become more and more involved in our lives.

I know what you’re thinking. Surely we’re prepared! AI ethics is one of the hottest academic topics in technology! All AI providers have in-house AI ethicists! And isn’t the EU about to deploy a shiny new AI Act? I’d like to think of myself as an optimist, but this is too little, too late. The AI Act is designed with some specific types of AI in mind, and one could argue that it is reactive, not proactive. As to the army of AI ethicists… no disrespect, but tech companies mostly employ them for ethics-washing; their input seems to have little to no effect on the final deployment of AI tools.

However, the most pressing issue with our preparation for the coming age of AI is that we are thinking about it in the wrong terms, and I count myself among those who have been missing the bigger picture, at least until recently. Most of the debates about AI tend to concentrate on the potential harms. Artificial intelligence liability. AI as a tool for oppression in the shape of facial recognition. AI decision-making that removes human agency and entrenches discriminatory practices. Accidents involving self-driving cars. AI trained on data that is offensive and based on stereotypes.

But while these topics are important, AI will bring about challenges at the wider societal level that are not necessarily evil or detrimental; they just are. What happens to academia when anyone can write a decent essay based on a few prompts? What happens to art when anything you can describe can be easily painted by an AI? What happens to music when an AI composer can churn out thousands of passable tunes? What happens to programmers when AI can code faster and better than any one person? What happens to journalism when AI can produce quick copy from a few key prompts? What happens to photography when you can create images of people who do not exist?

All of these technologies are already here; they’re just not yet widespread. We have to start thinking about content creation in different terms. We’re reaching a point at which AI outputs are passable, sometimes even beautiful, and while one could argue that there’s no competition with human creativity, that is beside the point: human creators will be competing with an unlimited supply of cheap or free content. It doesn’t matter that the picture isn’t perfect; it is free, and you made it by typing a couple of prompts into an AI that is probably free because it is harvesting your data for further training.

These changes will happen, are happening, have already happened. We just haven’t realised that the world has changed, and we haven’t even started the conversations about what this new AI-enabled world will mean.

In his Culture novels, Iain M. Banks envisioned a future where humans co-exist with benign AI that performs all menial tasks, allowing humans to live in a techno-utopia. Interestingly, though, Banks never really made the point that this was a future that was desired and desirable; it just was. In many of the novels, the Culture, this techno-utopian society, is described as hypocritical, often needing to rely on a shadowy organisation called Special Circumstances (SC) when things go sour. The Culture is ambiguous, its heroes flawed, its accomplishments suspect; its anarchic ethics often fall in favour of the status quo. Humans could be seen as utterly free, or as pets kept around by the AI overlords for their own amusement.

While we’re still very far from the world of Culture ships and megastructures, perhaps we can take it as a warning that AI could change us in ways we do not foresee. The techno-utopia could be the best thing ever to happen to us, or it could be a jail, or even worse, a zoo. What is clear is that change is inevitable, so we should perhaps start thinking about how we will organise society around it.

For now, I’ll be off playing with DALL·E. I wonder what happens if I ask for “cave painting llamas”? Only one way to find out…


1 Comment


Grant Castillou · June 12, 2022 at 8:28 pm

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
