Updated: Jan 28, 2021
Why is it safe to assume that the last bastion of human creativity will be the artist, after the driver, doctor, and scientist have been replaced? Because the artist is an agent who must continually step outside the system and understand its implications, contradictions, limitations, and unspoken dynamics. Whereas science manipulates and predicts physical phenomena, I propose that art works by manipulating and predicting other minds. Art is therefore considerably more challenging than science, and I expect science to be automated before art.

An artwork is an entry to a game. By making the artwork, the artist submits it to a game, in which it can fail or succeed. This is a reinforcement learning view of art. The rules of the game depend on the formal-compositional conventions for producing works in that aesthetic category, e.g. impressionism, or haiku. An artwork entered in one category may fail in another; typically the context determines which game is being played. In any case, it is a strange human behaviour that people will go, over and over again, to quiet spaces to look at two- and three-dimensional sensory stimuli that remain unchanged for years on end.
The invention of a new artistic movement is a paradigm shift: it produces an entirely new kind of game, defined by rules and goals (success criteria) by which entries to that game will be judged. The role of the artist is both to play pre-existing games well and to invent new games to play. But then the question is: what makes a new artistic game good, and why? Warhol and Pop Art invented a new game that contrasted with Abstract Expressionism. In this sense it was as if everyone was happily playing tennis, and then someone came along and invented a new game called football. An artwork also has a meta-role: it must manipulate the viewer into evaluating the work according to the rules of the game the artist wishes it to be evaluated by. In this sense, an artwork has been called an 'Attentional Engine'.
So I think art is the process whereby agents invent games of the following form. Some artist A makes something according to some felt-out, unclear generative rules G, and some viewer V (possibly the same person) evaluates the thing made according to some felt-out, unclear evaluative criteria C. What makes a game a (good) artistic game, rather than say a (good) board game, a (good) science game, or a (good) politics game, is that success is judged by how well A scores according to the evaluative criteria C by using G. Unlike in other games, the evaluative criteria C are about the mind of V. They are not about objective things in the world, like whether G allowed you to get to the moon, whether G allowed A to get balls into a net, or whether G allowed A to win an election. To play the art game (without cheating), A implicitly has G and C in mind and acts to produce work according to them. It is not an art game if A has no C. Consider some possible criteria:
The artwork communicates accurately to V some properties of the scene depicted by A.
The artwork is maximally ambiguous about what object it represents, whilst being strongly perceived as a natural rather than a man-made form.
The artwork is shocking.
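The (A, G, C, V) framework above can be sketched in code. What follows is a minimal toy sketch, not a serious proposal: the generative rules G are a sampling function, the criteria C score an artwork against the viewer V's state of mind, and the artist A searches under G for work that scores well under C. All names and the toy "surprise" criterion are illustrative inventions of the sketch.

```python
import random

def make_art_game(generate, evaluate):
    """An 'art game' pairs generative rules G with evaluative criteria C.

    generate: G, maps a random seed to an artwork (any object).
    evaluate: C, scores (artwork, viewer_state) -- crucially, a property
              of the viewer's mind, not of the objective world.
    """
    def play(viewer_state, attempts=100):
        # The artist A proposes works under G and keeps the one that
        # scores best under C.
        best, best_score = None, float("-inf")
        for _ in range(attempts):
            work = generate(random.random())
            score = evaluate(work, viewer_state)
            if score > best_score:
                best, best_score = work, score
        return best, best_score
    return play

# Toy instance: G draws a grey level in [0, 1]; C rewards surprise,
# i.e. distance from the grey level the viewer expects to see.
generate = lambda seed: seed
evaluate = lambda work, viewer: abs(work - viewer["expected"])

play = make_art_game(generate, evaluate)
work, score = play({"expected": 0.5}, attempts=1000)
```

The point of the sketch is only the shape of the game: A succeeds or fails by C, and C lives in V's head. Everything hard, inventing G and C rather than being handed them, is exactly what the rest of this essay is about.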
I will try to automate the artist here, and will slowly fail less horrifically, starting from the most basic algorithms and progressing towards an agent with the potential for artificial emotions. Aaron Hertzmann rightly believes that computers currently do not make art. Why? Because, following Dutton, he believes art can only be created by agents capable of social relationships, and so far computers are not capable of social relationships (with humans) because they are not capable of a sufficient level of agency: it would be futile for us to fall in love with a computer or give it a gift, even to the extent that a cat lover gives their cat a gift at Christmas, because a computer can reciprocate our love far less than a cat can. I agree that this is true of all existing 'art software' today. I agree when he writes, "All of this leads to the inevitable conclusion that AI-based artwork is still artwork made by a human. Our current "AI" software is just software, despite the fancy branding, and there is a long precedent of art made with software." But the question "do computers make art?" is quite different from the question "can computers make art?", and Hertzmann sometimes conflates the two, advertising his NeurIPS 2020 lecture under the title "Can Computers Create Art?" but publishing the paper "Computers Do Not Make Art, People Do". The claim that computers cannot make art is clearly the much stronger one, and it is the one I disagree with. If a computer does ever make art, then whether it was "ultimately the result of a human writing software, and then experimenting with and improving the software algorithms, parameters, and training data until they get results they like" becomes irrelevant. We believe a computer can and does play the board game Go at a level better than the greatest human Go player, and it does not detract from the computer's ability to play Go that it was created by humans. Hertzmann is right that no computer currently does art.
A computer currently does art only to the extent that a computer could be said to play Go if it needed constant intervention by humans during the game to tell it about the quality of its proposed actions. The human creator is currently far too involved in the creative loop of any so-called art-making software; the critical decisions that make or break the art's value appear always to be made by the human. Hertzmann is right that in these circumstances "Assigning authorship of their art to software is perverse, dismissing the value of the artists' own hard work and creativity." Moving to Hertzmann's stronger claim that computers cannot be artists unless they have social relationships, he writes: "This means computers cannot be credited as artists until they have some kind of personhood, just as people do not give gifts to their coffeemakers or marry their cars. If there is ever such a thing as human-level AI, with thoughts, feelings, and moral status comparable to ours, then it would be able to create art." He then makes the very strong claim: "Even if we could someday develop an algorithm that autonomously produces an endless stream of artworks that are original, beautiful, surprising, provocative, expressive, and culturally relevant, as long as we understand the software as just executing the instructions it has been given, it will continue to be a dumb machine, and not an artist." This makes me think: yes, nobody thinks to reward AlphaGo itself; nobody would give AlphaGo, the software, the credit, because the achievement is not its own but its authors'. Yet none of the authors could beat the world champion without AlphaGo. So who is the player of Go, and who should be given credit? AlphaGo does not deserve the credit for its own creation. Lee Sedol, I think, does deserve some of the credit for his own creation. Why? Because he is a meta-learning agent: he had to work very hard to learn and master Go, and AlphaGo did not.
Lee Sedol in many ways programmed himself with far greater competence than AlphaGo showed in modifying its own algorithm. I disagree with Hertzmann that if such a machine did exist we would not call it an artist. We would; we would call it the greatest artist in the world. We would still probably give credit to its creators for its invention, but we might say that it only does art, just as we say of AlphaGo that it only plays Go.
I believe that to automate the artist it would be sufficient to produce an algorithm that could create new games to play, and play them. But what kind of game? Football isn't art (although some people might disagree). The relevant games are those that manipulate our minds in new and interesting ways, mainly our perceptual and limbic systems, and mainly games which do not require much running around or physical exertion on our part. When machines understand us well enough to invent games for us and manipulate us in novel ways, then they will have become artists.
Figure 1. An image produced by an algorithm that makes random smooth concave shapes using Bezier curves, according to a size distribution, and then applies random shading to them. The aim was to produce forms more human-like than those I had previously produced, along the lines of Harold Cohen's AARON program from the 70s and 80s. The images ended up looking rather Miró-like. There is no self-evaluation here by the algorithm: it is neither pleased nor displeased by what it sees. This lack of self-criticism characterises the majority of generative art algorithms.
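The kind of algorithm behind Figure 1 can be sketched roughly as follows. This is not the actual code used; it is a minimal stand-in showing one way to get random smooth closed blobs from cubic Bezier segments: scatter anchor points on a jittered circle whose radius is drawn from a size distribution, join consecutive anchors with cubic segments whose control points are pulled towards the centre (which bows the curve inward, giving concave stretches), and assign each shape a flat random grey shade.

```python
import math
import random

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate one cubic Bezier segment at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3*u*u*t * p1[0] + 3*u*t*t * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3*u*u*t * p1[1] + 3*u*t*t * p2[1] + t**3 * p3[1]
    return (x, y)

def random_blob(n_anchors=6, size=1.0, samples_per_seg=20):
    """A closed smooth shape: anchors on a jittered circle of radius ~size,
    joined by cubic Bezier segments. Control points pulled towards the
    centre make the segments bow inward (concave stretches)."""
    angles = sorted(random.uniform(0, 2 * math.pi) for _ in range(n_anchors))
    anchors = [(size * (1 + random.uniform(-0.3, 0.3)) * math.cos(a),
                size * (1 + random.uniform(-0.3, 0.3)) * math.sin(a))
               for a in angles]
    path = []
    for i in range(n_anchors):
        p0, p3 = anchors[i], anchors[(i + 1) % n_anchors]
        p1 = (0.7 * p0[0], 0.7 * p0[1])   # control points scaled towards origin
        p2 = (0.7 * p3[0], 0.7 * p3[1])
        for s in range(samples_per_seg):
            path.append(bezier_point(p0, p1, p2, p3, s / samples_per_seg))
    return path

shade = random.random()          # flat random grey shade for this shape
outline = random_blob(size=1.0)  # polyline approximating the closed curve
```

Rendering the polyline and filling it with the shade is left to whatever drawing library is at hand. The important point for the argument is what is absent: nowhere does this program look at its output and judge it.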
I am interested in more than generative art. Generative art is art created by an algorithm (or recipe). Many algorithms for producing art are not self-critical; they do not judge how good or bad what they have done is. And even when they do, the criteria by which they judge themselves may have been specified by a human at a rather superficial or task-specific level. For the algorithm itself to be an autonomous artistic agent it must be capable of being self-critical, i.e. of evaluating itself by interesting criteria that it constructs for a reason we can feel is a good reason. In some sense, this must be a criterion which helps us to understand something that we did not understand before. So an artist is a meta-learning agent, who creates their own games (G, C) and evaluates the games themselves at some level, based on some neural manipulative property (which currently we do not completely understand).
My interest in the creative machine began with child development. How and why do children invent new games for themselves to play, without explicit external rewards? For example, when I was a child sitting in the back of my parents' car, I would close one eye, hold out my finger, and move it so that it never touched the lines in the middle of the road. It was a fun game which required timing and coordination. I invented the success criteria and defined the constraints. In the machine learning community, this kind of process is called intrinsic motivation: very general internal rewards produced in the brain are given to a reinforcement learning agent (in the same brain) in order to shape what it does. The question is how exactly these rewards should be produced such that games like the one above, and more interesting games, arise from their maximization. What is the minimal set of low-dimensional reward functions required to produce new and interesting games?
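One common formalisation of such an internal reward, which I sketch here only as an illustration of the idea (others in the literature use learning progress or compression progress instead), is curiosity as prediction error: the agent rewards itself for encountering outcomes it cannot yet predict, and the reward fades as its predictor improves, pushing it on to new, not-yet-mastered games. The class and names below are hypothetical.

```python
class CuriousAgent:
    """Minimal intrinsic-motivation sketch: reward = own prediction error.

    The agent keeps a simple running prediction of the outcome in each
    state. Surprising states pay well; mastered states pay nothing,
    so maximising this reward drives the agent towards novelty.
    """

    def __init__(self, n_states, lr=0.5):
        self.lr = lr
        self.pred = [0.0] * n_states   # predicted outcome per state

    def intrinsic_reward(self, state, outcome):
        error = abs(outcome - self.pred[state])
        # Update the predictor towards the observed outcome; as the
        # prediction improves, the reward for this state decays.
        self.pred[state] += self.lr * (outcome - self.pred[state])
        return error

agent = CuriousAgent(n_states=2)
# State 0 always yields outcome 1.0: the reward halves on every visit
# as the state is learned, so the agent would eventually move on.
rewards = [agent.intrinsic_reward(0, 1.0) for _ in range(5)]
# rewards == [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Whether a signal this low-dimensional can give rise to games as structured as the finger-and-road-lines game, let alone to the invention of new (G, C) pairs, is exactly the open question posed above.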
We will get to these difficult questions in due course, but first I want to describe my journey, which started when my family and I were recovering from Coronavirus in Norfolk in March 2020. I felt incapable of my usual work in machine learning and artificial general intelligence, and started to explore generative art, which I wished to do in a principled, minimal way, from the bottom up.