
Minimal Generative Algorithms for Drawing Lines

Updated: Feb 1, 2021

What generative rules (G) can the artificial artist use to produce images? During my recovery from coronavirus in April 2020, confined to my bedroom by the sea in North Norfolk and unable to motivate myself for any other research, I decided to explore minimal images made of lines. I was interested in the aesthetic of generative art (e.g. Vera Molnar, Georg Nees, et al.). For simplicity I decided to start with images made of n straight lines, initially with no interactions between lines. In the class explored here, each image was generated by drawing n sets of 4 samples from a random distribution, each set defining the Euclidean coordinates of the start and end points of one line.

A: 100 uniform random lines; B: 1000 uniform random lines; C: 2500 binomial lines; D: 2500 chi-squared lines; E: 2500 hypergeometric lines; F: 2500 Laplace-distributed lines.
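A minimal sketch of this class of generator (numpy and matplotlib are my own choices; the post does not name its tools): each line is four samples from the chosen distribution, interpreted as its endpoint coordinates, as in the panels above.

```python
# Minimal sketch of the non-interacting case: each line is four samples
# (x0, y0, x1, y1) from a chosen distribution. Library choices are assumptions.
import numpy as np
import matplotlib.pyplot as plt

def random_lines(n, sampler):
    """Return an (n, 4) array of endpoint coordinates drawn from `sampler`."""
    return sampler(size=(n, 4))

rng = np.random.default_rng(0)
lines = random_lines(1000, rng.uniform)      # cf. panel B
# lines = random_lines(2500, rng.laplace)    # cf. panel F (rescale to the canvas)

fig, ax = plt.subplots(figsize=(5, 5))
for x0, y0, x1, y1 in lines:
    ax.plot([x0, x1], [y0, y1], color="black", linewidth=0.3)
ax.set_axis_off()
plt.show()
```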


I came to the following conclusions based entirely on my own artistic intuition and with no justification in reason. By far the most pleasing and interesting image, with the greatest visual complexity and the one that most inspires my imagination, is the random uniform distribution in A; I experience the greatest visual interest when looking at that image. One of the most regular images is that produced by the binomial distribution, but in many ways it is too regular and does not bear inspection for long periods of time. Most of the other distributions are boring and ugly to me, though I am not sure why; this applies most clearly to the exponential distributions that resemble D.


Consistent with previous posts, a possible explanation is that A produces the greatest ambiguity in my visual system: I can read much more into that picture than into the other images. One way to test this might be to show these distributions to an ImageNet classifier and examine the entropy of the resulting class probabilities.
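A hedged sketch of that test, assuming a torchvision pretrained classifier (the post does not name one): render each distribution to an image and compare the entropies of the class probabilities.

```python
# Probe sketch: feed a rendered line image to a pretrained ImageNet classifier
# and measure the entropy of its class probabilities. torchvision/ResNet-18 is
# an assumed choice, not one named in the post.
import torch
from PIL import Image
from torchvision import models, transforms as T

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def class_entropy(path):
    """Entropy (in nats) of the softmax over the 1000 ImageNet classes."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1).squeeze(0)
    return float(-(probs * probs.log()).sum())

# e.g. compare class_entropy("uniform_lines.png") with class_entropy("binomial_lines.png")
```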


The next step in complexity is to allow sequentially drawn lines to interact. How can two lines interact when they meet? A quick brainstorm produced the following possibilities:

  1. One line can rotate in some relation to the other line, e.g. the special case of bouncing away.

  2. One line can undergo some translation (with or without rotation) relative to the other, e.g. one line follows another in parallel.

  3. One line can undergo some more general transformation which is a function of itself and the line it just hit. The function determining the new line's properties is now a function of both lines, and the determination is made only at the exact time of intersection, based on properties measurable there.

  4. The lines interact to produce a shape before one line continues through the other, e.g. one curves over the other as in a circuit diagram, or a pattern is produced.

  5. One line moves to a different random place entirely unrelated to the first line's characteristics (a special case of 1 in which random numbers are involved).

  6. One line disappears (is killed).

  7. A new line is produced which has properties of both 'parental' lines, while the original parental lines continue through each other. In essence a line can replicate from the point of intersection, producing a new line. This results in a combinatorial explosion of lines.

  8. Both lines, or one of them, can be cancelled out (removed). We have not really allowed removals before.

Independently of the behaviour upon interaction, the initial positions of the lines can be drawn from any of the distributions explored above. Let us remind ourselves of the baseline case of non-interacting lines from a uniform distribution.

The next step is to add an angle increment to the second line once it meets an existing line. If I add one radian to the second line's angle then I get a picture like the one below.

Due to a rather sloppy implementation, the current line sometimes thinks it has interacted with another line when it has only interacted with itself. This is because intersection is defined by the brightness of the current pixel, and the line may not yet have moved out of the pixel it is drawing, so it believes it has hit another line when it has only hit itself. This can cause a line to rotate around itself. See the same algorithm extended to more lines: later lines do not travel as far as the initial lines.
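A sketch of this pixel-walking rule as I understand it (canvas size, step length, and names are my own): the line advances one step at a time, and when the pixel it lands on is already inked it adds a fixed increment to its angle, which, as described, also fires when the line has not yet left its own previous pixel.

```python
# Sketch of the pixel-walking rule: advance a line one step at a time; if the
# current pixel is already inked, add a fixed increment to the angle. Because
# only pixel brightness is tested, a line can also "hit" itself, as described.
import math
import numpy as np

H = W = 512                                   # assumed canvas size
canvas = np.ones((H, W))                      # 1.0 = white paper, 0.0 = ink

def draw_walker(x, y, angle, steps=800, turn=1.0):
    for _ in range(steps):
        px, py = int(math.floor(x)), int(math.floor(y))
        if not (0 <= px < W and 0 <= py < H):
            break                             # walked off the paper
        if canvas[py, px] < 0.5:              # pixel already inked: "intersection"
            angle += turn                     # 1.0, 0.5, or math.pi / 2 as below
        canvas[py, px] = 0.0
        x += math.cos(angle)
        y += math.sin(angle)

rng = np.random.default_rng(1)
for _ in range(200):                          # uniformly initialised lines
    draw_walker(rng.uniform(0, W), rng.uniform(0, H), rng.uniform(0, 2 * math.pi))
```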

A smaller change in angle of only 0.5 radians results in a smoother pattern. Conversely to the first kind of sloppiness, other lines are not detected with 100% accuracy either, and are sometimes crossed without being noticed. Here, the angle of turning upon intersection is half that in the earlier generation.

A special case arises if we turn by a right angle (pi/2 radians); see below.

In all the previous images the change in angle of the current line has been a function only of its own angle, not of the angle of the line it met. Let us change this assumption and make the angle change a function of both line angles. How can the other line's angle be determined? We keep a separate grid which stores line angles explicitly, so that other_angle = line_angle_grid[int(math.ceil(x))][int(math.ceil(y))]. Let us consider the following rule first: angle = other_angle + pi/2, i.e. the current line will leave at an angle perpendicular to the line it just met.
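A sketch of this angle-grid variant (again with my own canvas conventions; as noted in the conclusions below, floor/ceil/round choices change the patterns): alongside the canvas, a grid records the angle of whichever line last inked each pixel, so the walker can leave perpendicular to the line it meets.

```python
# Sketch of the angle-grid variant: a second grid stores the angle of the line
# that last inked each pixel, so the new direction can depend on other_angle.
import math
import numpy as np

H = W = 512
canvas = np.ones((H, W))
line_angle_grid = np.zeros((H, W))

def draw_walker(x, y, angle, steps=800):
    for _ in range(steps):
        px, py = int(math.ceil(x)), int(math.ceil(y))
        if not (0 <= px < W and 0 <= py < H):
            break
        if canvas[py, px] < 0.5:                      # met an existing line
            other_angle = line_angle_grid[py, px]
            angle = other_angle + math.pi / 2         # leave perpendicular to it
            # the rules explored below are one-line swaps here, e.g.
            # angle = (angle + other_angle) / 2.0     # "average of both roads"
        canvas[py, px] = 0.0
        line_angle_grid[py, px] = angle
        x += math.cos(angle)
        y += math.sin(angle)
```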

Variants of the "city" style can be produced, e.g. with lines obeying angle = other_angle / ((math.pi*2) + math.pi/2.0 + np.random.uniform(-0.1, 0.1)), where some randomness is introduced.


With this process, information is transmitted over long distances, because one line's information is reflected (quite literally) in the other line: angle = other_angle + math.pi/2. This process seems to resemble the way roads are produced in cities, with some differences: a road here is not trying to go anywhere, as normal roads do, i.e. it is not trying to get from A to B whilst doing sensible things on the way; each road lacks intentionality. But that is another issue. In the previous case dA = F(B); now let us truly make dA = F(A, B), e.g. angle = (angle + other_angle) / 2.0, so the new road will travel in the average of the angles of both roads.

This process results in very organic-looking, thread-like behaviour, with the lines being somewhat attracted to each other on meeting. Adding more lines in this process makes it look like a microscopic view of fibers.

The next step was to mix the non-uniform distributions for initializing lines with some of the above rules.


Spider's webs can be made by moving right along a radial thread, then starting a new thread perpendicular to it and keeping it perpendicular as you turn around the web, moving out a little at each radial thread.


I made the following conclusions:

  • The pixel and aliasing issues have very significant effects on the patterns produced, e.g. whether floor, ceil, or round is used to determine which pixel to store the current position in.

  • The most organic form, which was quite an outlier, was produced by the rule angle = ( angle + other_angle ) / 2

  • Global structure is lacking

  • A line does not have very much ‘line intentionality’; it’s not really trying to get anywhere.

  • In general we have only used random uniform initialization of line positions. We could try all the other random distributions of line initial positions.

There are very few visual concepts being implemented here explicitly. The only concept is dA = F(A,B), i.e. the change in direction of a line is a function of the angles of the two intersecting lines. In a drawing, other concepts from computational geometry could be used.

  1. Lines need not change direction only at intersections with other lines, but as a function of other nearby lines within a local view.

  2. The start point of lines can be a function of existing lines within a local view.

  3. The end point of lines can be a function of existing lines within a local view.

All these notions share the concept of a local, line-centric view, as would be achieved with a plotting robot. So I felt my next step was to construct such a local view, as if a robot plotter were moving across the paper with a camera looking directly down at what it was passing over. I am interested in eventually implementing these line-drawing algorithms in a drawing robot with a local camera view of what is being drawn; such a robot will be influenced by things it sees other than what it has drawn itself, e.g. shadows or other patterns falling on the paper it is drawing on. In this way the robot's situated and embodied nature can be expected to make the drawings more "natural".

I implemented, with slicing and a rotation, a local view that tracks the pencil. Currently there is less geometric information available in the local view than in the angle grid; e.g. the angle of the other line is not known from the local view. What is easily calculable is the brightness of the local view, and a neural network can move the pencil as a function of it. For example, here the pencil-holding robot rotates left when the brightness of the local view is above some threshold. There is some noise.
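A sketch of the local view and the brightness-threshold rule (scipy's rotate and all constants are my assumptions): a window is sliced out of the canvas around the pencil tip, rotated into the pencil's frame, and its mean brightness decides whether to rotate left.

```python
# Sketch of the pencil-centric local view: slice a window around the pencil tip,
# rotate it so the heading points "up", and turn left when its mean brightness
# exceeds a threshold. scipy and all constants are assumptions.
import math
import numpy as np
from scipy.ndimage import rotate

H = W = 512
canvas = np.ones((H, W))
VIEW = 16                                           # half-width of the window

def local_view(x, y, angle):
    px, py = int(round(x)), int(round(y))
    patch = canvas[max(py - VIEW, 0):py + VIEW, max(px - VIEW, 0):px + VIEW]
    # rotate the patch so the view is pencil-centric rather than page-centric
    return rotate(patch, -math.degrees(angle), reshape=False, mode="nearest")

def step(x, y, angle, turn=0.2, threshold=0.95, noise=0.05):
    if local_view(x, y, angle).mean() > threshold:  # bright view: rotate left
        angle += turn
    angle += np.random.uniform(-noise, noise)       # set noise=0 for the clean case
    px, py = int(round(x)), int(round(y))
    if 0 <= px < W and 0 <= py < H:
        canvas[py, px] = 0.0                        # ink the current position
    return x + math.cos(angle), y + math.sin(angle), angle
```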

Here we see the same thing without any noise.


The details of how turns are made as a function of local view brightness can be tweaked ad infinitum.

Here the little robot tries to avoid itself by moving away from lines in its local visual field.

and after running for longer with a slightly different amount of rotation...

If the turning is made sharper we get...

If we turn towards the bright hemisphere of the local view we get this...

Let us add some learning. The rule by which the robot moves based on its local view can be evolved (or learned) according to some local or global criterion: for example, we might want a pattern that produces very few line crossings but which quickly stops a ball from passing through the net made by the pattern, or we might evolve the agent to maximize line crossings... A small neural network controls the agent and we evolve its parameters.
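A sketch of that learning loop under my own assumed architecture: the flattened local view feeds a single-layer network whose output is the turn, and a simple evolution strategy tunes the weights against whichever fitness (crossings maximised or minimised) is chosen.

```python
# Sketch of evolving a tiny controller: the flattened local view drives a
# one-layer network whose output is the turn angle; a simple (1+lambda)-style
# loop evolves the weights. Architecture and fitness are assumptions.
import numpy as np

VIEW = 16
N_IN = (2 * VIEW) ** 2                      # flattened local-view pixels

def policy(view, weights):
    # assumes a full (2*VIEW) x (2*VIEW) local view
    w, b = weights[:N_IN], weights[N_IN]
    return float(np.tanh(view.ravel() @ w + b))          # turn in [-1, 1] rad

def evolve(fitness, generations=50, pop=32, sigma=0.1):
    best = np.zeros(N_IN + 1)
    for _ in range(generations):
        candidates = [best + sigma * np.random.randn(N_IN + 1) for _ in range(pop)]
        scores = [fitness(c) for c in candidates]
        best = candidates[int(np.argmax(scores))]
    return best

# fitness(weights) would run the drawing loop with `policy` steering the pencil
# and return, e.g., the number of line crossings (or its negative).
```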

At this point I tried to think of measures of aesthetic niceness. We have such a rich visual contextual experience: we know what things look like, their shapes, their typical arrangements relative to each other, and so on. How can all this be captured in an equation? Q1: How can we obtain the richness of visual experience from which to derive aesthetic niceness? A1: What about something like ImageNet? Q2: If you show good paintings vs. bad paintings to an ImageNet classifier, is there any distributional property of its logits that will tell you which is which? You could even show hierarchical image patches of the painting to the classifier and look for a function of the logits over those patches that might tell you what is nice. Q3: If you show it an abstract image, e.g. a Jackson Pollock, what distribution of logits will it produce, and what about a Rothko, or a regular square grid? Q4: If you evolve a drawing, a drawing being a set of, say, 2000 lines, with the fitness of the drawing being some function of the logits of the classifier on observing it, then what drawing will make the classifier maximally certain that it is seeing ALL objects? And what will this drawing look like? In short, I am thinking of ways in which a pre-trained visual object discriminator could be used to tell what is and what is not an 'interesting' abstract image. These investigations were continued briefly here.
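One hedged way to make Q2 and Q4 concrete, reusing the classifier sketch from earlier (the patch size, the negative-entropy score, and the averaging are all my assumptions, not choices made in the post):

```python
# Hypothetical fitness for a drawing, in the spirit of Q2-Q4: tile the rendered
# drawing into patches, classify each with the pretrained model and preprocess
# from the earlier sketch, and score the drawing by how "certain" the model is
# on average (negative entropy of the class probabilities).
import torch

def patch_certainty(image, model, preprocess, patch=224):
    scores = []
    w, h = image.size                              # PIL image: (width, height)
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            crop = preprocess(image.crop((left, top, left + patch, top + patch)))
            with torch.no_grad():
                probs = torch.softmax(model(crop.unsqueeze(0)), dim=1).squeeze(0)
            scores.append(float((probs * probs.log()).sum()))   # negative entropy
    return sum(scores) / max(len(scores), 1)

# An evolved set of 2000 lines could then use patch_certainty(rendering, ...)
# as its fitness, as one possible reading of "some function of the logits".
```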




















