Some artists have been using AI techniques for decades, a particularly well-known one being Harold Cohen, who made work using a system that he called AARON. I have been using a form of AI in my work since at least the mid-1980s. I have not, however, been using machine learning techniques of the type that are currently getting an enormous amount of attention in popular media. My programs use state engines and state transition tables to create agent-based systems for image generation. This type of AI technique has been exploited in an enormous number of contexts, not least in games and digital effects. The ‘animals’ in my Smallworld programs are like game ‘bots’.

The main reason for using these simple techniques is that I am fascinated by the compositions that can be created using them. Combined with this has been a desire to explore, question and celebrate the way that we can respond to automatic phenomena as if they were caused intentionally. The automatic agents in my programs (which I have referred to variously as animals, creatures, bees and so on) are intended not only to generate shapes and forms that have aesthetic effects related to organically generated forms like plants and other living organisms, but also to present behavioural characteristics similar to those of actual animals. The intention is that we might read more into them than is actually there, and know that we are doing so. To me a key element in experiencing the work is thinking about the way that the apparently organic cause of the appearance of the compositions, or the impression that the ‘animals’ are intelligent, is an illusion constructed by us. We imagine it as we try to make some kind of sense of what we are looking at.
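The state-engine approach mentioned above can be sketched very simply. This is an illustrative sketch only, not the actual Smallworld code: the states, events and transition table here are invented for the example.

```python
# Illustrative sketch (not the actual Smallworld code): an agent whose
# behaviour is driven by a state transition table. States and events
# are invented for illustration.

# transition table: (current state, event) -> next state
TRANSITIONS = {
    ("wander", "sees_prey"): "chase",
    ("chase", "prey_caught"): "wander",
    ("chase", "prey_lost"): "wander",
    ("wander", "sees_predator"): "flee",
    ("flee", "predator_gone"): "wander",
}

class Agent:
    def __init__(self):
        self.state = "wander"
        self.trail = []          # positions visited, later mapped to an image

    def step(self, event, position):
        # look up the next state; stay in the same state if the event
        # is irrelevant to the current one
        self.state = TRANSITIONS.get((self.state, event), self.state)
        self.trail.append(position)

agent = Agent()
agent.step("sees_prey", (0, 0))   # the agent switches from wandering to chasing
```

The trail list is the key output: the image is not drawn directly, it is the record of where the agents have been.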
The current controversy caused in the media by the promotion of programs that generate images, texts, music, etc. using deep learning techniques seems rather surprising. It is as if artists have never used AI techniques before. Speculation appears to be fed by the idea that the programs are the artists, rather than being tools used by artists. The fact that the programs have been created by humans can evoke age-old visceral fears and ethical doubts, envisaging hubristic consequences of the creation of devices capable of artificial human-like activity. The potential of making autonomous artificial humanoids is also being alluded to. One example that seeks to draw attention to and encourage discussion of this issue is the Ai-Da project, which does so by using current technology to present a new riff on the rich history of constructing art-making humanoid machines.
The latest achievements will, I hope, lead to some original discussions rather than simple repetition of previous arguments. To achieve more depth in such discussions, it would be valuable to acknowledge that these new developments are not happening in an historical vacuum. There are precedents which should inform the discussion so as to lead to more satisfying conclusions.
My experience of using “Good Old-Fashioned AI” (GOFAI) in my work suggests that the use of the techniques currently being promoted in the creative arts should not be too great a problem; in fact, it may open up possibilities. The surge of interest seems to be due to the apparent ease with which the programs can be used, as well as their effectiveness at generating products that could plausibly be the product of human creativity alone. Personally, I have found most of the work that I have seen produced using these currently celebrated techniques superficial or aesthetically repulsive. It reminds me of something Harold Cohen said about the way artists need to embrace “difficult to use” technology. Thus, expertise developed through practice in using these new tools will be needed to produce anything of real worth. There are still issues to be resolved around what AI technology might be used for, including imitating human activities and the creation of human facsimiles more convincing, and hence potentially more misleading, than game bots. It would surely be instructive when addressing these questions to consider what has gone before.
My work in the exhibition is a video that, through recordings of interactions, shows the development of the interface I created to enable people to explore the generative properties of the Smallworld algorithms that I had developed at UKC. The development of versions of the Smallworld suite that people could access at exhibitions served as a case study in the focus of my PhD. The goal of my research was to find out just what artists and audiences are offered by a medium that may demand active participation in the realisation of the work rather than, as is more often the case, engagement in viewing and interpretation of existing material.
A pdf of the PhD thesis, which includes the conclusions of the research, is available to download here.
From what I have experienced, the technology that I and others have used to make work that is intended to be seen on screen and sometimes interacted with can and does become outdated. The machines can simply break and no longer be manufactured, or they can fall victim to what I would call a ‘classic’ issue: when operating systems are updated, some programs no longer work.
For quite some time now, I have believed what had been a working hypothesis when I started programming in the 1970s – that there is a real similarity between the algorithms used to make these works and musical compositions or plays. At The Slade, I was in the company of artists in the Systems Group and other people influenced by them. Their work, and before that, whilst at Bristol Polytechnic, the work of artists like Yoko Ono, Sol LeWitt and others, had convinced me that works of art could consist of instructions, or rely upon instructions to generate them. Some of my contemporaries at Bristol had been influenced by the work of Kenneth Martin and together, encouraged by our tutors, we explored the ideas of the Systems Group. There was a real fascination at the time with the implications of minimalism and with work created without making decisions during execution. Instructions, like musical compositions, can of course be reinterpreted. One way, therefore, of dealing with the apparent impermanence of the work is to treat it as a composition to be performed. Performance may consist of making an artefact.
Using the basic principles of Smallworld is like taking a musical composition and playing it, possibly rearranging it and so on. I have taken various elements of the Smallworld suite and re-used them, in much the same way that musical composers, writers, poets and users of other media continue to explore how similar elements and proposals can reveal new aspects if differently combined. It is this kind of practice that led me to believe that working as I do sits very easily within established art practice.
It is worth remembering that when I was at Art College many artists had an ambiguous relationship with, if not complete antagonism towards, the commercial art market. Artists were endeavouring, with varying degrees of success, to make art that could not be turned into a commodity. As a student this had also made an impression on my willingness to make ephemeral work alongside work that might last.
In April 2022, the above pieces were exhibited in an exhibition organised by Sean Clark at the start of a project to collect and display the contributions that people and institutions in Leicester and Leicestershire have made to the development of computational art.
More about the project and exhibition can be found here.
A Zoom Event with the four artists exhibited is scheduled for 7.00pm London time on 28th April 2022. Register here
Here are several realtime recordings [1] of the Smallworld suite of programs as it was developed in the 1980s while doing PhD research into interactive computer art at LUTCHI – the Loughborough University of Technology Computer-Human Interface Research Unit.
As Artist in Residence at the University of Kent at Canterbury, during Phase 1 of the Smallworld project, I had become fascinated with the way that a slight change in one characteristic of a Smallworld ‘animal’ behaviour can have a significant effect on the resulting interactions with other animals and hence the shape generated by their mapped trails. I had explored this by producing still images. My intention during the PhD research was to develop a way to enable people to interact with Smallworld to gain a deeper understanding of the processes that were generating the images rather than only being able to look at the final result.
Exhibited in the “Art Science and Industry” exhibition at the Consort Gallery, Imperial College, London, in 1986. In the first part, ‘animals’ of two species – first 25 blue and then 5 red – are ‘planted’, and then the program is run and a shape is generated as the paths of the ‘animals’ are recorded. Three more versions are run with the same starting positions, but the ‘speed’ characteristic of the red species is altered each time. Altering the relative speed of predator to prey leads to different results and therefore different shapes. The trails were in the same plane.
The final part shows animals moving in a three-dimensional space. The shape is the result of one red ‘animal’ chasing 26 of the blue species and ‘eating’ most of them, hence their trails come to an end. The red predator started in the centre of the cube-like arrangement of blue prey. To indicate passage of time the trails change colour over their length. The red’s trail gradually changes from red through orange to yellow and the blue species trail changes from blue to a paler blue. The shape is rotated to reveal its three-dimensionality.
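The predator-prey rule described above can be sketched in outline. This is a simplified sketch, not the original program: the chase and flee rules, the catch distance and the parameter values here are my own illustrative choices, but it shows how a single ‘speed’ value changes how the trails end.

```python
# Simplified sketch of a 3D chase: one predator pursues several prey,
# each agent moving at its own 'speed'. Changing the predator's speed
# changes the trails, and so the shape they map out.
# (Hypothetical code, not the original Smallworld program.)
import math

def toward(src, dst, speed):
    # move src a step of length `speed` toward dst
    d = [b - a for a, b in zip(src, dst)]
    n = math.sqrt(sum(c * c for c in d)) or 1.0
    return tuple(a + speed * c / n for a, c in zip(src, d))

def run(prey, predator, pred_speed, prey_speed=1.0, steps=100, catch=0.5):
    trails = {i: [p] for i, p in enumerate(prey)}   # one trail per prey
    pred_trail = [predator]
    alive = set(trails)
    for _ in range(steps):
        if not alive:
            break
        # the predator chases the nearest living prey
        target = min(alive, key=lambda i: math.dist(predator, trails[i][-1]))
        predator = toward(predator, trails[target][-1], pred_speed)
        pred_trail.append(predator)
        for i in list(alive):
            pos = trails[i][-1]
            if math.dist(predator, pos) < catch:
                alive.discard(i)                    # 'eaten': the trail ends
            else:
                # prey flees directly away from the predator
                away = tuple(2 * a - b for a, b in zip(pos, predator))
                trails[i].append(toward(pos, away, prey_speed))
    return trails, pred_trail
```

With the predator faster than the prey, the prey trails come to an end when they are caught; with equal speeds the chase never resolves and every trail runs the full length.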
I was very interested in what might happen if the changed shapes were played as an animation rather than being displayed as still images next to each other as had been done previously.
In the first part of the video the six images on the left were generated by running the Smallworld behavioural program and saving an image of the end result each time. Each image was generated by plotting the trails of ‘animals’ starting from the same starting positions but with the ‘speed’ characteristic of one species altered each time. In the larger window to the right the same still images are repeatedly displayed as frames of an animation.
When I first saw this animation I was struck by the similarity to organic movement. So, I made the next part of the video where the animation is shown at a slightly larger scale and the animating shape is translated as if it is an organism moving itself.
In the last section of the video, the animals are grouped into four populations. Depending on the changes in the parameters that govern their behaviour, the populations of animals have differing levels of contact. In the first frame the two populations on the left have some contact but do not contact the two populations on the right. At a critical point, as the parameters change, the two populations on the left do not connect with each other but instead contact the populations on the right.
This version of the Smallworld suite was exhibited at the “Fearful Symmetries” Art Exhibition of the World Science Fiction Convention held in Brighton, UK in 1987. This was the first exhibition of an interactive version of Smallworld. Visitors to the show were able to explore some of the possibilities of generating and viewing shapes. Sometimes the data that the visitor’s interactions had created were saved, and they could come back later, reload the data, look at the shape again and show it to other people. One of these shapes features in another video (See Smallworld Clip 5).
The first sequence in the video shows the way visitors had access to the suite.
Entering ‘1’ on the keyboard would run the ‘plant’ program of the Smallworld suite, which at this stage in the interface development had a pop-up menu to select the species of ‘animal’ to locate. Species were classed by colour. The user had control over the x,y location (across the screen) of the starting points of creatures but the z plane that they were introduced to (how deep into the space relative to the user) was predetermined.
Entering ‘2’ would run the ‘sworld’ program. This program worked out step by step how the individual ‘animals’ would respond to each other. Once a given number of interactions had been completed the suite would wait for the user to enter ‘3’.
Entering ‘3’ would run the ‘zoomwind’ program, so called as the user could view the shape that had been generated using keys to ‘zoom’ in or out and ‘wind’ (i.e. rotate) the shape to view it.
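The three-key structure described above can be summarised as a simple dispatch from keys to programs. This is a minimal modern analogue with hypothetical function names; the original suite consisted of separate programs launched in sequence, not Python functions.

```python
# Minimal analogue of the 1/2/3 keyboard interface described above.
# The function names and return strings are illustrative, not original.
def plant():
    return "place 'animals' by species colour"

def sworld():
    return "compute the interactions step by step"

def zoomwind():
    return "zoom and rotate the generated shape"

MENU = {"1": plant, "2": sworld, "3": zoomwind}

def dispatch(key):
    # run the program bound to the key, or do nothing for other keys
    action = MENU.get(key)
    return action() if action else None
```

The point of the structure is the fixed sequence: plant first, then run the behavioural simulation, then view the result.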
The ‘track’ program was an experiment in getting the viewpoint to move along the tracks of the ‘animals’ generated using Smallworld. This is a recording made in 1987 as an example of it being applied. The motivation was to try to show the events in the environment as if one were in the position of one of the creatures. Would it feel different if the trail were made by a different species? Would following the path of a predator feel instinctively different from following the path of its prey?
As I did not want to create a ‘flying down a tube’ graphic, the paths were represented by six lines running parallel to the path, and the viewpoint moved along between them. Getting the view to change in an appropriate manner proved hard to resolve. The ambiguity of distance travelled on some long stretches of path was rather interesting, as I was effectively handing over the ‘camerawork’ to one of the ‘animals’. Problems with overcoming the ‘gimbal lock’ phenomenon and other visual aspects that were unsatisfactory led the development of Smallworld to take a different route, but the potential is still there.
An anonymous visitor to the exhibition in Brighton (See Smallworld Clip 3) made the choices that led to the generation of a shape that they named ‘Sirior’. I particularly liked the way that the green lines share something with a landscape seen from an aircraft in some views. Then when the whole object fits on screen it looks like some kind of jellyfish-like entity.
The video records a phenomenon that I found to be interesting when controlling the viewing of these shapes: When parts of the shape in view are cut off by the frame of the window the control of the moving viewpoint feels like piloting a craft through a space. As soon as the shape is completely visible and no part of it crosses the boundaries of the frame the sensation is like manipulating an object in front of a static viewpoint. The controls do not change, only the perception.
Smallworld was exhibited at the 1988 Exhibition “Art and Computers” at the Cleveland Gallery in Middlesbrough, UK. The “Art and Computers” exhibition toured and Smallworld was also shown in Utrecht in The Netherlands, at the First International Symposium on Electronic Art (FISEA).
In this version of the interface, extra items had been added to the pop-up menu, including a neater way of running the different programs in the suite. In the clip, after a shape has been generated, it is viewed first using the “Depth” program which displayed the shape in a depth-cued mode. Then the same data is displayed using the “Fire” program, which took the data and displayed the trails section by section as frames in an animation. The animation looped so that the viewer could control the view to look at different events being repeated.
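The idea behind displaying the trail data section by section, as the “Fire” program did, can be sketched as a generator over recorded trails. This is hypothetical illustrative code, not the original IRIS GL program: the `window` parameter is my own name for the length of the moving section.

```python
# Illustrative sketch of the 'Fire' idea: rather than drawing whole
# static trails, emit the recorded trail data section by section as
# animation frames, so repeated events can be watched in a loop.
# (Hypothetical code, not the original IRIS GL program.)
def fire_frames(trails, window=5):
    # trails: a list of point lists, one per 'animal'.
    # Yields, per time step, only the most recent `window` points of
    # each trail - a moving 'spark' of movement instead of a full trail.
    longest = max(len(t) for t in trails)
    for t_step in range(longest):
        frame = []
        for trail in trails:
            start = max(0, t_step - window + 1)
            frame.append(trail[start:t_step + 1])
        yield frame
```

Looping over the yielded frames replays the whole sequence of interactions, which is what makes the agents’ apparent ‘motivation’ so much more legible than a static trail object.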
I can’t remember why I called the program Fire; it was a significant development in the Smallworld programs though, as it showed the interactions of the ‘animals’ much more clearly and appeared to reveal more of their ‘motivation’ than the static trail objects had. A company from Bristol (called Red Dot, I seem to remember) had made a stop-frame video of this kind of movement for me in 1985 with an early version of Smallworld, but it was too expensive for me to get a copy made, so there is no record of it. It was good to get back to this stage again though.
This clip shows examples of the ‘Fire’ program being used to replay the movements of Smallworld ‘animals’.
It took such a long time to calculate the interactions between a population of ‘animals’ that the sequences are necessarily short. Current processing power of course allows particle systems and behavioural systems to have enormous numbers of agents, but back in the 1980s, even with extremely expensive high-end kit like a Silicon Graphics IRIS, the calculations took a lot of time.
The low number of agents was not a problem though as I was not actually looking to make particle systems with massive numbers of ‘animals’ but small populations where the individual ‘stories’ of each animal could be followed by repeating the whole complex sequence of events and seeing how they fitted into the bigger picture.
I was also interested in the shapes created by the fleeing ‘prey’ and intercepting predators particularly evident in the examples in the middle sequence. For the last part of this clip each ‘animal’ is represented by an object which is itself a small copy of a shape generated using Smallworld. One of the reasons that I chose not to represent the ‘animals’ using graphic objects with solid rendered surfaces or polygons was that I wanted a logical consistency to the compositions.
The development of the interface led, at the end of the period at Loughborough where I studied for my PhD, to this version, which overcame the challenge of enabling users to ‘plant’ individual animals at a given location in 3D space.
Visitors to the “Smallworld Vistas” exhibition of the work held in the University’s Pilkington Library in 1989 were able to book an appointment to access Smallworld in one of the LUTCHI Labs.
Users selected the current species from colour swatches at the bottom left of the screen. They could then point to a particular location in the 3D reference cube of space which appeared when they moved their cursor to the upper part of the screen. The 3D cursor had ‘shadows’ (a 2D cursor on each plane of the cube) to help the user follow where it was pointing. If they held the middle mouse button down the 3D cursor would move in ‘z’. If they did not hold it down the cursor moved in ‘x’ and ‘y’. Clicking the left mouse button would ‘plant’ an ‘animal’. Rotation and scaling were controlled with keyboard buttons.
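The planting interaction described above can be sketched as a simple mapping from mouse state to cursor movement. This is an illustrative sketch only; the function names and coordinate conventions are my own, not those of the original program.

```python
# Illustrative sketch of the 3D planting cursor described above:
# without the middle button the cursor moves in x/y; holding it
# remaps vertical mouse motion to depth (z).
def move_cursor(cursor, dx, dy, middle_down):
    x, y, z = cursor
    if middle_down:
        return (x, y, z + dy)       # vertical motion becomes depth
    return (x + dx, y + dy, z)

def shadows(cursor):
    # 2D 'shadow' cursors, one on each plane of the reference cube,
    # to help the user judge where the 3D cursor is pointing
    x, y, z = cursor
    return {"xy": (x, y), "xz": (x, z), "yz": (y, z)}
```

The shadows matter because a single 2D screen position is ambiguous in 3D; projecting the cursor onto each face of the cube resolves that ambiguity for the user.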
They could then select one of the programs in the suite from the ‘PLANT’, ‘DEPTH’ and ‘FIRE’ on-screen buttons below. To the right of these buttons was one providing contextual help and an introduction.
The work was developed and displayed on Silicon Graphics IRIS workstations using C and IRIS GL. More at stephenbell.org.uk
1) The videos were recorded directly onto video tape (U-Matic) as the program was running. They were later copied to VHS (when U-Matic was phased out) and even later digitized from the VHS tapes.
The algorithms that I have been exploiting often have the potential to generate an enormous number of images.
When I was at The Slade, in a conversation with Malcolm Hughes I asked him how one can decide how to deal with a proposition that has innumerable iterations or generated patterns. His suggestion was to select archetypal instances that demonstrate fundamental aspects of the system and its results.
In the many years since that conversation, I have followed Malcolm’s advice.
For quite a while now I have been making some images and animations by moving the graphic objects generated by my programs so that they appear to be sliced through, letting us see their interiors. It is another way that the shapes can be explored, and it leads to a visual effect that has something in common with the animation of images of the interior of the human body generated by medical scanners. We see inside something that would normally be hidden, and what we can see changes as we move our viewpoint through it. Some of the outside of the shape can also be seen, so it is also a bit like seeing a log of wood after it has been sliced through, or a salami, or in fact anything that has been sliced through. I once made some animations based upon microscopic views of slices through granite gathered from the Antarctic.
One way to understand how these images are made is to think about how the patterns on the surface of wood are created. Wood is made up of many cells joined together. If we cut through it along their length, we get one kind of pattern and sawing across the cells we get different patterns. They all reveal something of how the cells became arranged as the tree grew.
The forms in my work are made by trails or paths through space that are represented by shapes arranged along the routes. These markers can be considered as being a bit like the cells in wood. The algorithm that determines how to display the forms is made to ‘clip’ anything that gets within a certain distance of the viewpoint of the ‘virtual camera’. So, when the graphic display of the work is calculated, it effectively slices through the forms, revealing the inside of the shapes, and this creates the patterns that can be seen. It is as if the shapes have been sawn through at a certain distance from the viewer. Because this view is continually recalculated as the program runs, the patterns change whenever the viewpoint or the object moves, or the distance at which the clipping takes place is changed.
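The clipping rule described above reduces to a single distance test per point. A minimal sketch, assuming a camera looking along the z axis (real graphics pipelines clip polygons against a near plane, but the point-wise version shows the slicing effect):

```python
# Simplified sketch of the near-plane clipping effect described above:
# any geometry closer to the camera than `near` is discarded, so moving
# the camera, or changing `near`, slices through the forms.
def clip_near(points, camera_z, near):
    # keep only points at or beyond the near distance in front of the camera
    return [p for p in points if (p[2] - camera_z) >= near]

shape = [(0, 0, z) for z in range(10)]        # a trail running into depth
visible = clip_near(shape, camera_z=0, near=4)
# everything nearer than z = 4 has been 'sliced' away
```

Moving `camera_z` forward, or increasing `near`, sweeps the cutting plane through the shape, which is exactly what produces the changing cross-section patterns.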
Usually, where computer graphics is used to create images, in games or movies, this kind of clipping would be considered a ‘glitch’. Having a graphic object representing, say, a human actor sliced through, revealing how it is simply an illusion created using geometrical data and clever rendering techniques, is not desired, as it interrupts the audience’s willing suspension of disbelief. The effect is not always avoided, though, as it can be exploited, for example to simulate the creation of an object via an advanced technology or magic.
These kinds of patterns can also be seen when 3D printers build up an object layer by layer.
[first published March 22 2022, edited July 25 2022]
I began developing the Smallworld suite of programs whilst Artist in Residence in the computing laboratory of The University of Kent at Canterbury from 1984 to 1985.
Since then I have continued to use versions of the algorithm to produce work.
Smallworld uses algorithms based upon observations of animal and human social behaviour, including conflict and collaboration, and other interactive phenomena to generate computer graphic forms to interact with, animate and print.