
AARON by Harold Cohen: The Doodle and Human-Like Abstract Drawing

Updated: Feb 2, 2021

This post continues my thinking about generative procedures G which an artificial artist could use to produce marks. All the images I'd produced so far were very 'computery' and inhuman, and I became interested in producing drawings that looked more human, more like drawings made by the human hand. I'd left off with a particular way of generating a drawing made of lines: generate a random initial (x, y) position and (dx, dy) velocity, then use a local-view controller, mapping (pos, vel, last_vel, i) → (dx, dy), to move the pen like a little robot. If a line crosses another, apply a deflection (dX, dY) that is a function of the angle of the crossed line and one's own heading. I realized I wanted a more general space of possible images, with the properties present in Vera Molnar's images, for example.
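For concreteness, here is a minimal sketch of that procedure in Python; the names, constants, and the exact deflection rule are my own illustrative choices, not the original code:

```python
import math, random

def ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def crosses(p1, p2, p3, p4):
    """True if segment p1-p2 intersects segment p3-p4 (orientation test)."""
    return ccw(p1, p3, p4) != ccw(p2, p3, p4) and ccw(p1, p2, p3) != ccw(p1, p2, p4)

def doodle(steps=400):
    x, y = random.random(), random.random()          # random initial position
    angle = random.uniform(0, 2 * math.pi)           # heading of the velocity
    speed = 0.01
    path = [(x, y)]
    for _ in range(steps):
        nxt = (x + speed * math.cos(angle), y + speed * math.sin(angle))
        for a, b in zip(path[:-2], path[1:-1]):      # earlier segments only
            if crosses(path[-1], nxt, a, b):
                crossed = math.atan2(b[1] - a[1], b[0] - a[0])
                angle += 0.5 * (crossed - angle)     # deflect as a function of
                break                                # the crossed line's angle
        path.append(nxt)
        x, y = nxt
    return path
```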


My first idea was to use a CPPN to generate smoothly changing objects. I make a random CPPN and feed it a sampled 3D grid of coordinates. The outputs are interpreted as the start and end coordinates of a line, and I then plot those lines. The resulting images look like this.
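A rough sketch of the setup; the network size, transfer functions, and grid resolution are my own assumptions. A random two-layer network maps each point of a sampled 3D grid to the endpoints (x1, y1, x2, y2) of a line:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 4))

def cppn(coords):                        # coords: (N, 3) grid points
    h = np.tanh(coords @ W1)             # swapping tanh for sin/abs/gauss
    return np.tanh(h @ W2)               # changes the regularity of the lines

grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 8)] * 3), -1).reshape(-1, 3)
lines = cppn(grid)                       # each row: one line's start and end
```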

The patterns produced are very regular: each line is highly similar to, and has a clear relation with, its predecessor and successor. They show regular changes of angle and length, continuous rather than discontinuous changes between lines. Can this be changed by altering the transfer functions in the random neural network? There appear to be about one or two folds.


I then became interested in producing smooth objects with Bezier curve algorithms, producing things like this...
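One way to build such a shape (my own construction, not necessarily the one used here) is to scatter points around a circle and join them with cubic Bezier segments whose handles share a tangent at each join, giving a smooth closed outline:

```python
import numpy as np

def smooth_closed_shape(n=7, samples=30, rng=np.random.default_rng()):
    ang = np.sort(rng.uniform(0, 2 * np.pi, n))
    pts = np.c_[np.cos(ang), np.sin(ang)] * rng.uniform(0.5, 1.0, (n, 1))
    curve = []
    for i in range(n):
        p0, p3 = pts[i], pts[(i + 1) % n]
        # Catmull-Rom-style tangents from each point's two neighbours
        t0 = (pts[(i + 1) % n] - pts[i - 1]) / 6.0
        t3 = (pts[(i + 2) % n] - pts[i]) / 6.0
        p1, p2 = p0 + t0, p3 - t3
        for t in np.linspace(0, 1, samples, endpoint=False):
            curve.append((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                         + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
    return np.array(curve)               # closed, tangent-continuous outline
```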

I tried to mutate the smooth objects and make them more complex, e.g.

and..

and...

and I mutated the smooth shapes in place...

These had some aspects of humanness. It is very interesting, for example, how we make up smooth closed shapes ourselves, and how we manage to close them up smoothly at the end. But I was still unsatisfied with the humanness of the drawings...


At this point, let us cast our minds back to 1979, the year I emigrated to England with my parents from Sri Lanka (aged 4) and the year Mrs Thatcher became prime minister. Harold Cohen, an abstract artist, was developing his drawing program AARON. 1979, I think, was the peak of AARON's beauty; after that the images produced became quite ugly. These early images have a simplicity and humanness unsurpassed by any drawing algorithm I've seen so far. How did AARON work? It was a complicated image-conditional production system, a complex set of if/then rules which acted in a hierarchy to make a drawing. It had about 300 productions, with 30 productions being responsible for the drawing of a single line. An overall view of the production system is shown below...



  • It was a production system, a hierarchically organized set of condition-action rules. The higher levels did global image composition in general terms, and the lower levels operated the pen itself.

  • It operated on computational-geometry-level representations that contained hard-coded visual concepts such as closed objects, figure-ground distinctions, etc.

  • It used line drawing actions that were carefully designed to make the lines seem very human looking.

  • There was no learning and no evaluation (G only, no C).


Some examples of drawings by AARON are shown below...

The productions for producing closed forms, shading, open objects, etc. were hand-coded by Harold Cohen. The productions for composing these objects relative to each other were also hand-coded, to promote suitable non-overlapping placement. If attempts to place an object without overlapping another failed, it was erased and tried again, i.e. there was planning involved. How can such naturalistic drawings be produced without hard-coding them? A toy sketch of the condition-action structure is shown below.
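This is only an illustration of the flavour of a production system, not AARON's actual rules: productions are checked top-down, the first whose condition matches the drawing state fires, higher rules plan, and lower rules would drive the pen.

```python
productions = [
    (lambda s: not s["planned"],
     lambda s: s.update(planned=True, goals=["closed-form", "shading"])),
    (lambda s: s["goals"][:1] == ["closed-form"],
     lambda s: s.update(goals=s["goals"][1:], strokes=s["strokes"] + ["outline"])),
    (lambda s: s["goals"][:1] == ["shading"],
     lambda s: s.update(goals=s["goals"][1:], strokes=s["strokes"] + ["hatching"])),
]

state = {"planned": False, "goals": [], "strokes": []}
while True:
    action = next((a for cond, a in productions if cond(state)), None)
    if action is None:                   # no production fires: drawing done
        break
    action(state)
print(state["strokes"])                  # -> ['outline', 'hatching']
```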


While not really even trying to answer the above question, I wanted to experiment with using planning in the placement of random smooth shapes, partly to see how easy such a production system would be to code up; perhaps one could eventually do stochastic search over such productions. I checked for intersections/overlap and rejected any object whose placement would result in overlap.
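A minimal sketch of that rejection loop, with my own parameter choices; the heavy-tailed (Cauchy) sizes from the later experiment are folded in:

```python
import numpy as np

def place_shapes(n=20, tries=500, rng=np.random.default_rng()):
    placed = []                                   # (x, y, radius) triples
    for _ in range(tries):
        x, y = rng.uniform(0, 1, 2)
        r = min(abs(rng.standard_cauchy()) * 0.02 + 0.01, 0.2)
        # keep the proposal only if it clears every shape placed so far
        if all((x - px) ** 2 + (y - py) ** 2 > (r + pr) ** 2
               for px, py, pr in placed):
            placed.append((x, y, r))
        if len(placed) == n:
            break
    return placed
```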

I then experimented with ways to cast shadows.

Trying a larger canvas and Cauchy-distributed shape sizes.

We can compare this to early AARON and Miró.

I then made a minimal composition machine: an attempt to use an LSTM for composition, with trajectories produced by its internal dynamics. However, these were too noisy and could not be detected well. A much more deterministic object-positioning algorithm is needed, one which accounts for the sizes and positions of other objects more carefully than here. The setup was as follows (a minimal code sketch appears after the observations below).


Input: list of previous object positions (x, y).

Output: the next object's relative position (dx, dy) from the last.

Machine: e.g. an LSTM that reads the previous positions in sequence.

  • The trajectories produced by the LSTMs do not form very interesting spatial patterns.

  • It’s hard to get this method to produce balanced trajectories that are well composed.

  • There is no attempt at any particular composition anyway.
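Concretely, the setup sketched above might look like this, assuming PyTorch and a randomly initialized, untrained network:

```python
import torch

lstm = torch.nn.LSTM(input_size=2, hidden_size=32, batch_first=True)
head = torch.nn.Linear(32, 2)

def next_offset(positions):                  # positions: list of (x, y)
    seq = torch.tensor(positions, dtype=torch.float32).unsqueeze(0)
    out, _ = lstm(seq)                       # (1, T, 32) hidden states
    return head(out[:, -1]).squeeze(0)       # (dx, dy) from the last object

with torch.no_grad():
    print(next_offset([(0.1, 0.2), (0.4, 0.5), (0.7, 0.3)]))
```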

One idea I wanted to explore was the dynamics of the human hand and arm, and how much they constrain the kinds of drawing we produce. The dynamics of the body are always talked about with respect to Jackson Pollock. Let's try to simulate a drawing made by a randomly writing arm model. We use a model of the human arm: elbow, wrist, and fingers. Random but fixed oscillations and phases applied to these should produce a Fourier-like encoding of curves, which, if properly constrained, should be able to generate human-like gestures. See e.g. http://bencaine.me/maddux/tutorial.html or https://pypi.org/project/pyarm/


What is required to produce closed objects randomly from the movements of such an arm? One might expect the right kind of noise to arise from Gaussian noise applied to the torques at each joint, or from some systematic wobble at the finger joints, for example... So if you take a robot arm and move it to random points with some Gaussian noise, you get this. It's quite different from the random scribblings of my children, but there is something human about it. A simplified sketch of the setup is below.
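In this simplified model, a planar two-link arm sweeps its joints toward random targets with Gaussian noise on the joint angles standing in for tremor; I interpolate in joint space rather than doing full inverse kinematics, and all lengths and noise scales are illustrative guesses.

```python
import numpy as np

L_UPPER, L_FORE = 0.30, 0.25            # upper-arm and forearm lengths (m)
rng = np.random.default_rng()

def pen_tip(shoulder, elbow):
    """Forward kinematics: joint angles -> pen position at the fingertip."""
    x = L_UPPER * np.cos(shoulder) + L_FORE * np.cos(shoulder + elbow)
    y = L_UPPER * np.sin(shoulder) + L_FORE * np.sin(shoulder + elbow)
    return x, y

path, q = [], np.array([0.5, 0.5])      # current (shoulder, elbow) angles
for _ in range(30):                     # thirty random reaches
    target = rng.uniform(-np.pi / 2, np.pi / 2, 2)
    for t in np.linspace(0, 1, 40):
        noisy = q + t * (target - q) + rng.normal(0, 0.01, 2)  # joint tremor
        path.append(pen_tip(*noisy))
    q = target
```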

A video of the arm is shown below...

Here is another one working in another way... producing much finer finger movements.

Below we see a scribble made by the arm using inverse-kinematics movement to random points plus random joint-angle noise, limited to an A3 sheet of paper (in scale with the arm).

Here are some drawings which look disturbingly Spirograph-like... It seems to me that anything Spirograph-like is the kiss of death in terms of generative aesthetics.

At this point, I wanted to leave the human arm and think about the drawing dynamics that can result from the use of recurrent neural networks. My first step was to investigate the dynamics of randomly initialized RNNs and LSTMs, the initial thought being that these could eventually be used to control humanoid arms to draw.


The first experiment simply feeds random inputs into a randomly initialized LSTM and interprets the outputs as the x,y coordinates of a pen. The result is predictably random.

Then I try feeding the output of the LSTM back into itself and interpreting the outputs, and we see that most of the time the LSTM eventually settles into a point attractor. Each shape is the trajectory taken by one random initialization of the LSTM. When feeding the current position (in the range [-1, 1]) as input and allowing the LSTM to control the dX and dY of positions with some noise, either it settles to zero, goes to an extreme, or oscillates. A sketch of this closed loop is below.
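The closed loop, sketched with PyTorch; step sizes and noise levels are my own choices. A random, untrained LSTM cell receives the pen's current position, and its output is read as a (dX, dY) step:

```python
import torch

cell = torch.nn.LSTMCell(2, 32)
head = torch.nn.Linear(32, 2)

pos = torch.zeros(1, 2)                      # pen starts at the origin
h, c = torch.zeros(1, 32), torch.zeros(1, 32)
traj = []
with torch.no_grad():
    for _ in range(500):
        h, c = cell(pos.clamp(-1, 1), (h, c))          # position in [-1, 1]
        step = 0.05 * torch.tanh(head(h))              # LSTM-controlled dX, dY
        pos = pos + step + 0.01 * torch.randn(1, 2)    # plus a little noise
        traj.append(pos.squeeze(0).tolist())
# Plotting traj typically shows decay to a point, escape to an extreme,
# or an oscillation, depending on the random initialization.
```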

This is basically just a trajectory through the phase space of the random LSTM. We can make this phase space clearer in various ways...

We see that random LSTMs have different dynamics. These can be plotted on the pen plotter; here with a glitch, when I made them overlap each other.

The next step for me was to evolve neural networks for drawing. I started by using a genetic algorithm to evolve the weights of a convolutional neural network to shade in a circle.

Here we see samples from the evolutionary trajectory on the way to filling in the circle.

So far the network only receives the picture itself, not the current pen position. So it can't really know what line it will make when it decides to go somewhere. When this is fixed, performance improves. A simplified sketch of the evolutionary loop is below.
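This is a much-simplified stand-in: a tiny dense network replaces the CNN, (1+1)-style hill climbing replaces a full genetic algorithm, and all constants are my own. The genome is the flattened weight vector; fitness is the fraction of the target circle covered by the pen path.

```python
import numpy as np

rng = np.random.default_rng(0)
G = 28                                        # canvas is G x G pixels
yy, xx = np.mgrid[0:G, 0:G]
circle = (xx - G / 2) ** 2 + (yy - G / 2) ** 2 < (G / 3) ** 2

N_IN = G * G + 2                              # picture pixels + pen position
N_W = N_IN * 8 + 8 * 2                        # total genome length

def fitness(w):
    W1 = w[:N_IN * 8].reshape(N_IN, 8)
    W2 = w[N_IN * 8:].reshape(8, 2)
    canvas = np.zeros((G, G))
    px, py = G // 2, G // 2                   # pen starts at the centre
    for _ in range(200):
        inp = np.concatenate([canvas.ravel(), [px / G, py / G]])
        dx, dy = np.tanh(np.tanh(inp @ W1) @ W2) * 3
        px = int(np.clip(px + dx, 0, G - 1))
        py = int(np.clip(py + dy, 0, G - 1))
        canvas[py, px] = 1                    # drop ink at the new position
    return (canvas * circle).sum() / circle.sum()

best = rng.normal(0, 0.1, N_W)
best_fit = fitness(best)
for _ in range(300):                          # mutate, keep if no worse
    child = best + rng.normal(0, 0.05, N_W)
    f = fitness(child)
    if f >= best_fit:
        best, best_fit = child, f
```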

Here the circles are shown so you can see what it was trying to shade in.

You can make life harder for the neural network by moving the circles around a bit.

The real advance, however, came when I decided to evolve LSTMs to draw circles. This resulted in really natural, human-like squiggles being evolved, which I was very happy with.

I then moved to using the decoder of Sketch-RNN, which, as well as an RNN, has a Gaussian Mixture Model (GMM) to interpret the output of the RNN before drawing. I was not going to train it, just evolve it.
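A sketch of the GMM output head, simplified relative to the real Sketch-RNN decoder (which also models pen-up/pen-down states): the raw output vector is split into mixture weights, means, standard deviations, and correlations, and the next pen offset is sampled from the chosen bivariate Gaussian. The shapes and the layout of `raw` are my own assumptions.

```python
import numpy as np

K = 8                                        # number of mixture components
rng = np.random.default_rng()

def sample_offset(raw):                      # raw: RNN output, length 6*K
    pi, mx, my, sx, sy, rho = raw.reshape(6, K)
    pi = np.exp(pi) / np.exp(pi).sum()       # softmax over components
    sx, sy = np.exp(sx), np.exp(sy)          # standard deviations > 0
    rho = np.tanh(rho)                       # correlations in (-1, 1)
    k = rng.choice(K, p=pi)                  # pick one component
    cov = [[sx[k] ** 2,             rho[k] * sx[k] * sy[k]],
           [rho[k] * sx[k] * sy[k], sy[k] ** 2]]
    return rng.multivariate_normal([mx[k], my[k]], cov)  # (dx, dy)

print(sample_offset(rng.normal(size=6 * K)))
```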

I was indeed able to evolve very nicely filled-in circles, but I didn't like them as much as the LSTM version.

Other drawing tasks suggest themselves for evolving RNN-GMMs or LSTMs:

  1. Colour in a circle

  2. Colour in more complex objects, e.g. N random rectangles, etc.

  3. Make a maximally long non-overlapping path

  4. Continue a line segment

  5. Connect N points (circles)

  6. Draw a circle around a point

  7. Draw N-sided objects

  8. Draw N non-overlapping objects

  9. Draw N parallel lines

In other news, archaeological digs in Baluchistan have unearthed an early form of art criticism.













