Back in 2022, when ChatGPT arrived, I was part of the first wave of users. Delighted but also a little uncertain what to do with it, I asked the system to generate all kinds of random things. A song about George Floyd in the style of Bob Dylan. A menu for a vegetarian dinner party. A briefing paper about alternative shipping technologies.
The quality of what it produced was variable, but it made clear something that is even more apparent now than it was then: this technology wasn’t just a toy. Its arrival is an inflection point in human history. Over the coming years and decades, AI will transform every aspect of our lives.
But we are also at an inflection point for those of us who make our living with words, and indeed anybody in the creative arts. Whether you’re a writer, an actor, a singer, a film-maker, a painter or a photographer, a machine can now do what you do, instantly and for a fraction of the cost. Perhaps it can’t do it quite as well as you can just yet, but like the Tyrannosaurus rex in the rear vision mirror in the original Jurassic Park, it’s gaining on you, and fast.
Faced with the idea of machines that can do everything that human beings can do, some have just given up. Lee Sedol, the Go grandmaster who was defeated by DeepMind’s AlphaGo system in 2016, subsequently retired, declaring AlphaGo was “an entity that couldn’t be beaten”, and that his “entire world was collapsing”.
Others have asserted the innate superiority of art made by humans, effectively circling the wagons around the idea that there is something in the things we make that cannot be replicated by technology. In the words of Nick Cave:
Songs arise out of suffering … the complex, internal human struggle of creation … [but] algorithms don’t feel. Data doesn’t suffer … What makes a great song great is not its close resemblance to a recognisable work. Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past.
It’s an appealing position, and one I’d like to believe – but sadly, I don’t. Because not only does it commit us to a hopelessly simplistic – and, frankly, reactionary – binary, in which the human is intrinsically good, and the artificial is intrinsically bad, it also means the category of creation we’re defending is extremely small. Do we really want to limit the work that we value to those towering works of art wrought out of profound feeling? What about costume design and illustration and book reviews and all the other things people make? Don’t they matter?
Perhaps a better place to begin a defence of human creativity might be in the process of creation itself. Because when we make something, the end product isn’t the only thing that matters. In fact it may not even be the thing that matters most. There is also value in the act of making, in the craft and care of it. This value doesn’t inhere in the things we make, but in the creative labour of making them. The interplay between our minds and our bodies and the thing we are making is what brings something new – some understanding or presence – into the world. But the act of making changes us as well. Sometimes that is joyous; at other times it is frustrating or even painful. Nonetheless it enriches us in ways that simply prompting a machine to generate something for us never will.
What’s happening here isn’t about unleashing our imaginations, it’s about outsourcing them. Generative AI strips out part of what makes us human and hands it over to a company so it can sell us a product that claims to do the same thing. In other words, the real purpose of these systems isn’t liberation, but profit. Forget the glib marketing slogans about increasing productivity or unleashing our potential. These systems aren’t designed to benefit us as individuals or as a society. They’re designed to maximise the ability of tech corporations to extract value by strip-mining the industries they disrupt.
This reality is particularly stark in the creative industries. Because the ability of AI systems to magic up stories and images and videos didn’t come out of nowhere. In order to be able to make these things, AIs have to be trained on massive amounts of data. These datasets are generated from publicly available information: books, articles, Wikipedia entries and so on in the case of text; videos and images in the case of visual data.
Exactly what these works are is already highly contentious. Some of them, such as Wikipedia entries and out-of-copyright books, are in the public domain. But much – and possibly most – of this material is not. How could ChatGPT write a song about George Floyd in the style of Bob Dylan without access to Dylan’s songs? The answer is that it couldn’t. It could only imitate Dylan because his lyrics formed part of the dataset used to train it.
Between the secretiveness of these companies and the fact that the systems themselves are effectively black boxes, the inner processes of which are opaque even to their creators, it’s difficult to know exactly what has been ingested by any individual AI. What we do know for sure is that vast amounts of copyright material have already been fed into these systems, and are still being fed into them as we speak, all without permission or payment.
But AI doesn’t just incrementally erode the rights of authors and other creators. These technologies are designed to replace creative workers altogether. The writer and artist James Bridle has compared this process to the enclosure of the commons, but whichever way you cut it, what we are witnessing isn’t just “systematic theft on a mass scale”, it’s the wilful and deliberate destruction of entire industries and the transfer of their value to shareholders in Silicon Valley.
This unconstrained rapaciousness isn’t new. Despite ad campaigns promising care and connection, the tech industry’s entire model depends upon extraction and exploitation. From publishing to transport, tech companies have inserted themselves into traditional industries and “disrupted” them by sidestepping regulation, riding roughshod over hard-won rights, or simply fencing off things that were formerly part of the public sphere. In the same way that Google hoovered up creative works to build its digital libraries, filesharing technologies devastated the music industry, and Uber’s model depends on paying its drivers less than taxi companies pay theirs, AI maximises its profit by refusing to pay the creators of the material it relies on.
Meanwhile the human, environmental and social costs of these technologies are kept carefully out of sight.
Interestingly the sense of powerlessness and paralysis many of us feel in the face of the social and cultural transformation unleashed by AI resembles our failure to respond to climate change. I don’t think that’s a coincidence. With both there is a profound mismatch between the scale of what is taking place and our capacity to conceptualise it. We find it difficult to imagine fundamental change, and when faced with it, tend to either panic or just shut down.
But it’s also because, as with climate change, we have been tricked into thinking there are no alternatives: that the economic systems we inhabit are natural, and that arguing with them makes about as much sense as arguing with the wind.
In fact the opposite is true. Companies like Meta and Alphabet and, more recently, OpenAI, have only achieved their extraordinary wealth and power because of very specific regulatory and economic conditions. These arrangements can be altered. That is within the power of government, and we should be insisting upon it. There are currently cases before the courts in a number of jurisdictions that seek to frame the massive expropriation of the work of artists and writers by AI companies as a breach of copyright. The outcome of these cases isn’t yet clear, but even if creators lose, that fight isn’t over. The use of our work to train AIs must be brought under the protection of the copyright system.
And we shouldn’t stop there. We should insist upon payment for the work that has been used, payment for all future use and an end to the tech industry practice of taking first and seeking forgiveness later. Their use of copyright material without permission wasn’t accidental. They did it on purpose because they thought they could get away with it. The time has come for them to stop getting away with it.
For that to happen we need regulatory structures that ensure transparency about which datasets are being used to train these systems and what those datasets contain. We need systems of audit to ensure copyright and other forms of intellectual property are not being violated, and meaningful sanctions when they are. And we need to insist upon international agreements that protect the rights of artists and other creators instead of facilitating the profits of corporations.
But most of all, we need to be thinking hard about why what we do as human beings, and as creators and artists in particular, matters. Because it isn’t enough to fret about what is being lost, or to fight a rearguard action against these technologies. We have to begin to articulate positive arguments for the value of what we do, and of creativity more broadly, and to think about what form that might take in a world where AI is a pervasive reality.
This is an edited version of the Australian Society of Authors 2024 Colin Simpson Memorial Keynote lecture, titled ‘Creative Futures: Imagining a place for creativity in a world of artificial intelligence’