AI and art

Some artists have been using AI techniques for decades; a particularly well-known example is Harold Cohen, who made work using a system he called AARON. I have been using a form of AI in my work since at least the mid-1980s. I have not, however, been using machine learning techniques of the type that are currently getting an enormous amount of attention in popular media. My programs use state engines and state transition tables to create agent-based systems for image generation. This type of AI technique has been exploited in an enormous number of contexts, not least in games and digital effects. The ‘animals’ in my Smallworld programs are like game ‘bots’. The main reason for using these simple techniques is that I am fascinated by the compositions that can be created with them.

Combined with this has been a desire to explore, question and celebrate the way that we can respond to automatic phenomena as if they were caused intentionally. The automatic agents in my programs (which I have referred to variously as animals, creatures, bees and so on) are intended not only to generate shapes and forms with aesthetic effects related to organically generated forms such as plants and other living organisms, but also to present behavioural characteristics similar to those of actual animals. The intention is that we might read more into them than is actually there, and know that we are doing so. To me, a key element in experiencing the work is thinking about the way that the apparently organic cause of the compositions, or the impression that the ‘animals’ are intelligent, is an illusion constructed by us. We imagine it as we try to make some kind of sense of what we are looking at.
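As an illustration of the kind of mechanism described above, here is a minimal sketch of a table-driven agent. The states, events and Python form are all invented for illustration; the original Smallworld programs were written in C.

```python
# Hypothetical sketch of an agent driven by a state transition table.
# The states and events are invented; a real species definition would
# also carry parameters such as speed and 'urgency' for each event.

TRANSITIONS = {
    # (current state, event) -> next state
    ("wandering", "sees_prey"):     "chasing",
    ("wandering", "sees_predator"): "fleeing",
    ("chasing",   "lost_prey"):     "wandering",
    ("chasing",   "sees_predator"): "fleeing",
    ("fleeing",   "safe"):          "wandering",
}

class Agent:
    def __init__(self, state="wandering"):
        self.state = state

    def handle(self, event):
        # Look up the (state, event) pair in the table;
        # stay in the current state if there is no entry.
        self.state = TRANSITIONS.get((self.state, event), self.state)

a = Agent()
a.handle("sees_prey")      # wandering -> chasing
a.handle("sees_predator")  # chasing -> fleeing
```

The appeal of this representation is that an entire species’ ‘personality’ sits in one small table, so a slight edit to one entry can change every interaction that follows.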

Early Smallworld species design interface including state transition table

The current controversy in the media around programs that generate images, texts, music and so on using deep learning techniques seems rather surprising. It is as if artists had never used AI techniques before. Speculation appears to be fed by the idea that the programs are the artists, rather than tools used by artists. The fact that the programs have been created by humans can evoke age-old visceral fears and ethical doubts about the hubristic consequences of creating devices capable of artificial, human-like activity. The potential for making autonomous artificial humanoids is also alluded to. One example that seeks to draw attention to and encourage discussion of this issue is the Ai-Da project, which does so by using current technology to present a new riff on the rich history of constructing art-making humanoid machines.

The latest achievements will, I hope, lead to some original discussions rather than simple repetition of previous arguments. To achieve more depth in such discussions, it would be valuable to acknowledge that these new developments are not happening in a historical vacuum. There are precedents which should inform the discussion and lead to more satisfying conclusions.

Later Smallworld species design interface with urgency for each event

My experience of using “Good Old-Fashioned AI” (GOFAI) in my work suggests that the use of the techniques currently being promoted in the creative arts should not be too great a problem; in fact, it may open up possibilities. The surge of interest seems to be due to the apparent ease with which the programs can be used, as well as their effectiveness at generating products that could plausibly be the product of human creativity alone. Personally, I have found most of the work that I have seen produced using these currently celebrated techniques superficial or aesthetically repulsive. It reminds me of something Harold Cohen said about the way artists need to embrace “difficult to use” technology. Expertise developed through practice with these new tools will be needed to produce anything of real worth. There are still issues to be resolved around what AI technology might be used for, including imitating human activities and creating human facsimiles more convincing, and hence potentially more misleading, than game bots. It would surely be instructive, when addressing these questions, to consider what has gone before.

Sydney exhibition and PhD

I am very pleased to have some of my work in the exhibition Prisms of Influence: Echoes from The Colour in the Code at the Mosman Gallery in Sydney, Australia. The exhibition runs in parallel to Ernest Edmonds: The Colour in the Code, a retrospective exhibition of Ernest Edmonds’ work at Mosman. Ernest was the director of research for my PhD, and Susan Tebby, who also has work in the show, was my supervisor.

My work in the exhibition is a video that, through recordings of interactions, shows the development of the interface I created to enable people to explore the generative properties of the Smallworld algorithms I had developed at UKC. The development of versions of the Smallworld suite that people could access at exhibitions served as a case study for the focus of my PhD. The goal of my research was to find out just what artists and audiences are offered by a medium that may demand active participation in the realisation of the work rather than, as is more often the case, engagement in viewing and interpretation of existing material.

A pdf of the PhD thesis, which includes the conclusions of the research, is available to download here.

Leicester digital art pioneers

Exhibition and Zoom Event

Pursuing Limited Resources (2018) and Depth Cued Smallworld Images (1989)

In April 2022, the above pieces were shown in an exhibition organised by Sean Clark at the start of a project to collect and display the contributions that people and institutions in Leicester and Leicestershire have made to the development of computational art.

More about the project and exhibition can be found here

A Zoom Event with the four artists exhibited is scheduled for 7.00pm London time on 28th April 2022. Register here

Smallworld phase 2, videos

Here are several real-time recordings [1] of the Smallworld suite of programs as it was developed in the 1980s while I was doing PhD research into interactive computer art at LUTCHI – the Loughborough University of Technology Computer-Human Interface Research Unit.

As Artist in Residence at the University of Kent at Canterbury, during Phase 1 of the Smallworld project, I had become fascinated with the way that a slight change in one behavioural characteristic of a Smallworld ‘animal’ can have a significant effect on the resulting interactions with other animals, and hence on the shape generated by their mapped trails. I had explored this by producing still images. My intention during the PhD research was to develop a way to enable people to interact with Smallworld to gain a deeper understanding of the processes that were generating the images, rather than only being able to look at the final result.

Smallworld Clip 1

Exhibited in the “Art Science and Industry” exhibition at the Consort Gallery, Imperial College, London, in 1986. In the first part, ‘animals’ of two species – first 25 blue and then 5 red – are ‘planted’, then the program is run and a shape is generated as the paths of the ‘animals’ are recorded. Three more versions are run with the same starting positions, but with the ‘speed’ characteristic of the red species altered each time. Altering the relative speed of predator to prey leads to different interactions and therefore different shapes. The trails all lie in the same plane.

The final part shows animals moving in a three-dimensional space. The shape is the result of one red ‘animal’ chasing 26 of the blue species and ‘eating’ most of them, hence their trails come to an end. The red predator started in the centre of the cube-like arrangement of blue prey. To indicate the passage of time, the trails change colour over their length: the red’s trail gradually changes from red through orange to yellow, and the blue species’ trails change from blue to a paler blue. The shape is rotated to reveal its three-dimensionality.
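The chase-and-flee process described in this clip can be sketched roughly as follows. This is a hypothetical Python reconstruction, not the original C code: the movement rules, speeds and ‘catch’ distance are all invented, and the sketch works in 2D for brevity.

```python
# Illustrative predator-prey trail generator (invented rules, 2D).
# Each step, the predator heads for the nearest living prey and each
# prey flees directly away; every position is appended to a trail,
# and an eaten prey's trail simply ends.
import math

def step_towards(p, target, speed):
    dx, dy = target[0] - p[0], target[1] - p[1]
    d = math.hypot(dx, dy) or 1.0   # avoid division by zero
    return (p[0] + speed * dx / d, p[1] + speed * dy / d)

def run(prey, predator, pred_speed, prey_speed, steps=100, catch=0.5):
    trails = {i: [p] for i, p in enumerate(prey)}
    pred_trail = [predator]
    alive = set(trails)
    for _ in range(steps):
        if not alive:
            break
        nearest = min(alive, key=lambda i: math.dist(predator, trails[i][-1]))
        predator = step_towards(predator, trails[nearest][-1], pred_speed)
        pred_trail.append(predator)
        for i in list(alive):
            pos = trails[i][-1]
            if math.dist(pos, predator) < catch:
                alive.discard(i)     # 'eaten': this trail ends here
            else:
                # flee: step towards the reflection of the predator
                away = (2 * pos[0] - predator[0], 2 * pos[1] - predator[1])
                trails[i].append(step_towards(pos, away, prey_speed))
    return pred_trail, trails

pred_trail, trails = run([(0, 0), (5, 5)], (10, 10),
                         pred_speed=1.0, prey_speed=0.5, steps=50)
```

Re-running with a different `pred_speed` while keeping the same starting positions is the knob turned in the clip: the interactions, and hence the recorded shape, come out differently each time.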

Smallworld Clip 2

I was very interested in what might happen if the changed shapes were played as an animation rather than being displayed as still images next to each other as had been done previously.

In the first part of the video the six images on the left were generated by running the Smallworld behavioural program and saving an image of the end result each time. Each image was generated by plotting the trails of ‘animals’ starting from the same starting positions, but with the ‘speed’ characteristic of one species altered each time. In the larger window to the right, the same still images are repeatedly displayed as frames of an animation.

When I first saw this animation I was struck by the similarity to organic movement. So, I made the next part of the video where the animation is shown at a slightly larger scale and the animating shape is translated as if it is an organism moving itself.

In the last section of the video, the animals are grouped into four populations. Depending on the changes in the parameters that govern their behaviour, the populations of animals have differing levels of contact. In the first frame the two populations on the left have some contact but do not contact the two populations on the right. At a critical point, as the parameters change, the two populations on the left no longer connect with each other but instead contact the populations on the right.

Smallworld Clip 3

This version of the Smallworld suite was exhibited at the “Fearful Symmetries” Art Exhibition of the World Science Fiction Convention held in Brighton, UK in 1987. This was the first exhibition of an interactive version of Smallworld. Visitors to the show were able to explore some of the possibilities of generating and viewing shapes. Sometimes the data that a visitor’s interactions had created was saved, and they could come back later, reload the data, and look at the shape again or show it to other people. One of these shapes features in another video (See Smallworld Clip 5).

The first sequence in the video shows the way visitors had access to the suite.

Entering ‘1’ on the keyboard would run the ‘plant’ program of the Smallworld suite, which at this stage in the interface development had a pop-up menu to select the species of ‘animal’ to locate. Species were classed by colour. The user had control over the x,y location (across the screen) of the starting points of creatures but the z plane that they were introduced to (how deep into the space relative to the user) was predetermined.

Entering ‘2’ would run the ‘sworld’ program. This program worked out step by step how the individual ‘animals’ would respond to each other. Once a given number of interactions had been completed the suite would wait for the user to enter ‘3’.

Entering ‘3’ would run the ‘zoomwind’ program, so called as the user could view the shape that had been generated using keys to ‘zoom’ in or out and ‘wind’ (i.e. rotate) the shape to view it.
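The three-stage structure described above – plant, then simulate, then view – might be sketched like this. Only the program names come from the text; the data layout and Python form are assumptions, and the behaviour and drawing steps are reduced to stubs.

```python
# Hypothetical skeleton of the three programs in the suite, sharing
# one 'world' data structure. Names follow the text; everything else
# is invented for illustration (the originals were C / IRIS GL).

def plant(world, species, x, y, z_fixed=0.0):
    # '1': place an 'animal' of the chosen species. The user picked
    # x and y; at this stage of the interface z was predetermined.
    world.append({"species": species, "pos": (x, y, z_fixed), "trail": []})

def sworld(world, interactions=200):
    # '2': step the behavioural simulation a fixed number of times,
    # recording each animal's position (the behaviour update itself
    # is omitted here).
    for _ in range(interactions):
        for animal in world:
            animal["trail"].append(animal["pos"])

def zoomwind(world, zoom=1.0, angle=0.0):
    # '3': view the generated shape with zoom and 'wind' (rotation);
    # here we just report what would be drawn.
    points = sum(len(a["trail"]) for a in world)
    return f"drawing {points} trail points at zoom {zoom:.1f}, rotation {angle:.0f} deg"

world = []
plant(world, "blue", 1.0, 2.0)
plant(world, "red", 3.0, 4.0)
sworld(world, interactions=10)
msg = zoomwind(world)
```

The point of the split was that the generated data persisted between stages, which is what made it possible to save a visitor’s shape and reload it later.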

Smallworld Clip 4

The ‘track’ program was an experiment in getting the viewpoint to move along the tracks of the ‘animals’ generated using Smallworld. This is a recording made in 1987 as an example of it being applied. The motivation was to try and show the events in the environment as if one was in the position of one of the creatures. Would it feel different if the trail were made by different species? Would following the path of a predator feel instinctively different from following the path of its prey?

As I did not want to create a ‘flying down a tube’ graphic the paths were represented by six parallel lines running parallel to the path and the viewpoint moved along between them. Getting the view to change in an appropriate manner proved hard to resolve. The ambiguity of distance travelled on some long stretches of path was rather interesting as I was effectively handing over the ‘camerawork’ to one of the ‘animals’. Problems with overcoming the ‘gimbal lock’ phenomenon and other visual aspects that were unsatisfactory led the development of Smallworld to take a different route but the potential is still there.

Smallworld Clip 5

An anonymous visitor to the exhibition in Brighton (See Smallworld Clip 3) made the choices that led to the generation of a shape that they named ‘Sirior’. I particularly liked the way that the green lines share something with a landscape seen from an aircraft in some views. Then when the whole object fits on screen it looks like some kind of jellyfish-like entity.

The video records a phenomenon that I found to be interesting when controlling the viewing of these shapes: When parts of the shape in view are cut off by the frame of the window the control of the moving viewpoint feels like piloting a craft through a space. As soon as the shape is completely visible and no part of it crosses the boundaries of the frame the sensation is like manipulating an object in front of a static viewpoint. The controls do not change, only the perception.

Smallworld Clip 6

Smallworld was exhibited at the 1988 Exhibition “Art and Computers” at the Cleveland Gallery in Middlesbrough, UK. The “Art and Computers” exhibition toured and Smallworld was also shown in Utrecht in The Netherlands, at the First International Symposium on Electronic Art (FISEA).

In this version of the interface, extra items had been added to the pop-up menu, including a neater way of running the different programs in the suite. In the clip, after a shape has been generated, it is viewed first using the “Depth” program which displayed the shape in a depth-cued mode. Then the same data is displayed using the “Fire” program, which took the data and displayed the trails section by section as frames in an animation. The animation looped so that the viewer could control the view to look at different events being repeated.

I can’t remember why I called the program Fire. It was a significant development in the Smallworld programs though, as it showed the interactions of the ‘animals’ much more clearly and appeared to reveal more of their ‘motivation’ than the static trail objects had.
A company from Bristol (called Red Dot, I seem to remember) had made a stop-frame video of this kind of movement for me in 1985 with an early version of Smallworld, but it was too expensive for me to get a copy made, so there’s no record of it. It was good to get back to this stage again though.
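The section-by-section replay that ‘Fire’ performed might be approximated as follows. This is a guess at the general idea in Python, with an invented sliding ‘window’ of recent trail positions; the original worked on saved Smallworld trail data.

```python
# Illustrative sketch: replay stored trails as looping animation
# frames, showing only a short recent section of each trail per frame.
# The window size and data shapes are invented for this example.

def fire_frames(trails, window=5):
    """Yield one 'frame' per time step: for each trail, the slice of
    positions from (t - window) up to t."""
    longest = max(len(t) for t in trails)
    for t in range(longest):
        yield [trail[max(0, t - window):t + 1] for trail in trails]

# Two toy trails of different lengths (a shorter trail is an
# 'animal' whose story ended early).
trails = [[(i, 0) for i in range(10)], [(0, i) for i in range(7)]]
frames = list(fire_frames(trails))   # one frame per step of the longest trail
```

Looping over `frames` (e.g. with `itertools.cycle`) gives the repeating playback described above, so a viewer can watch the same events recur while moving the viewpoint.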

Smallworld Clip 7

This clip shows examples of the ‘Fire’ program being used to replay the movements of Smallworld ‘animals’.

It took such a long time to calculate the interactions between a population of ‘animals’ that the sequences are necessarily short. Current processing power, of course, allows particle systems and behavioural systems with enormous numbers of agents, but back in the 1980s, even with extremely expensive high-end kit like a Silicon Graphics IRIS, the calculations took a lot of time.

The low number of agents was not a problem, though, as I was not actually looking to make particle systems with massive numbers of ‘animals’ but small populations, where the individual ‘stories’ of each animal could be followed by repeating the whole complex sequence of events and seeing how they fitted into the bigger picture.

I was also interested in the shapes created by the fleeing ‘prey’ and intercepting predators particularly evident in the examples in the middle sequence. For the last part of this clip each ‘animal’ is represented by an object which is itself a small copy of a shape generated using Smallworld. One of the reasons that I chose not to represent the ‘animals’ using graphic objects with solid rendered surfaces or polygons was that I wanted a logical consistency to the compositions.

Smallworld Clip 8

The development of the interface led, at the end of the period at Loughborough where I studied for my PhD, to this version, which overcame the challenge of enabling users to ‘plant’ individual animals at a given location in 3D space.

Visitors to the “Smallworld Vistas” exhibition of the work held in the University’s Pilkington Library in 1989 were able to book an appointment to access Smallworld in one of the LUTCHI Labs.

Users selected the current species from colour swatches at the bottom left of the screen. They could then point to a particular location in the 3D reference cube of space which appeared when they moved their cursor to the upper part of the screen. The 3D cursor had ‘shadows’ (a 2D cursor on each plane of the cube) to help the user follow where it was pointing. If they held the middle mouse button down the 3D cursor would move in ‘z’. If they did not hold it down the cursor moved in ‘x’ and ‘y’. Clicking the left mouse button would ‘plant’ an ‘animal’. Rotation and scaling were controlled with keyboard buttons.

They could then select one of the programs in the suite from the ‘PLANT’, ‘DEPTH’ and ‘FIRE’ on-screen buttons below. To the right of these buttons was one providing contextual help and an introduction.


The work was developed and displayed on Silicon Graphics IRIS workstations using C and IRIS GL.
More at

[1] The videos were recorded directly onto videotape (U-Matic) as the program was running. They were later copied to VHS (when U-Matic was phased out) and even later digitized from the VHS tapes.


Grid Explosion, 2 Predators, 98 Prey, Photograph from Screen 1985

I began developing the Smallworld suite of programs whilst Artist in Residence in the computing laboratory of The University of Kent at Canterbury from 1984 to 1985.

Since then, I have continued to use versions of the algorithm to produce work.

Smallworld uses algorithms based upon observations of animal and human social behaviour, including conflict and collaboration, and other interactive phenomena to generate computer graphic forms to interact with, animate and print.

More about my work can be seen at my older site: