Smallworld phase 2, videos

Here are several real-time recordings 1 of the Smallworld suite of programs as it was developed in the 1980s during my PhD research into interactive computer art at LUTCHI – the Loughborough University of Technology Computer-Human Interface Research Unit.

As Artist in Residence at the University of Kent at Canterbury, during Phase 1 of the Smallworld project, I had become fascinated with the way that a slight change in one characteristic of a Smallworld ‘animal’s’ behaviour can have a significant effect on the resulting interactions with other animals, and hence on the shape generated by their mapped trails. I had explored this by producing still images. My intention during the PhD research was to develop a way to enable people to interact with Smallworld to gain a deeper understanding of the processes that were generating the images, rather than only being able to look at the final result.

Smallworld Clip 1

Exhibited in the “Art Science and Industry” exhibition at the Consort Gallery, Imperial College, London, in 1986. In the first part, ‘animals’ of two species – first 25 blue and then 5 red – are ‘planted’, the program is run, and a shape is generated as the paths of the ‘animals’ are recorded. Three more versions are run with the same starting positions but with the ‘speed’ characteristic of the red species altered each time. Altering the relative speed of predator to prey leads to different results and therefore different shapes. The trails were all in the same plane.
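
As a rough illustration of the kind of rule at work, here is a minimal sketch in C of a single chase/flee step with a per-species ‘speed’ characteristic. It is my own simplification rather than the original Smallworld code, and everything in it (the Animal structure, the step_animals function, the starting positions and speed values) is hypothetical.

    /* Sketch of a predator/prey step (not the original Smallworld code).
     * Each red 'animal' heads towards the nearest live blue one and each
     * blue one heads away from the nearest red; the 'speed' field is the
     * characteristic whose small changes reshape the recorded trails.   */
    #include <math.h>
    #include <stdio.h>

    typedef struct {
        float x, y, z;   /* current position                                  */
        float speed;     /* distance moved per step (varied between the runs) */
        int   species;   /* 0 = blue (prey), 1 = red (predator)               */
        int   alive;     /* prey stop leaving a trail once 'eaten'            */
    } Animal;

    /* Advance every animal one step towards (predator) or away from (prey)
     * the nearest live member of the other species.                        */
    void step_animals(Animal *a, int n)
    {
        for (int i = 0; i < n; i++) {
            if (!a[i].alive) continue;
            int nearest = -1;
            float best = 1e30f;
            for (int j = 0; j < n; j++) {
                if (j == i || !a[j].alive || a[j].species == a[i].species)
                    continue;
                float dx = a[j].x - a[i].x, dy = a[j].y - a[i].y, dz = a[j].z - a[i].z;
                float d = dx * dx + dy * dy + dz * dz;
                if (d < best) { best = d; nearest = j; }
            }
            if (nearest < 0) continue;
            float dx = a[nearest].x - a[i].x;
            float dy = a[nearest].y - a[i].y;
            float dz = a[nearest].z - a[i].z;
            float len = sqrtf(dx * dx + dy * dy + dz * dz);
            if (len < 1e-6f) continue;
            float dir = (a[i].species == 1) ? 1.0f : -1.0f;   /* chase or flee */
            a[i].x += dir * a[i].speed * dx / len;
            a[i].y += dir * a[i].speed * dy / len;
            a[i].z += dir * a[i].speed * dz / len;
            if (a[i].species == 1 && len < a[i].speed)        /* caught: trail ends */
                a[nearest].alive = 0;
        }
    }

    int main(void)
    {
        /* A hypothetical tiny population: one red predator among four blue prey,
         * all in the same plane (z = 0), as in the first part of the clip.      */
        Animal pop[5] = {
            { -1, -1, 0, 0.10f, 0, 1 }, { 1, -1, 0, 0.10f, 0, 1 },
            { -1,  1, 0, 0.10f, 0, 1 }, { 1,  1, 0, 0.10f, 0, 1 },
            {  0,  0, 0, 0.15f, 1, 1 },   /* vary this speed to change the shape */
        };
        for (int t = 0; t < 200; t++) {
            step_animals(pop, 5);
            for (int i = 0; i < 5; i++)   /* print the recorded trail points */
                if (pop[i].alive)
                    printf("%d %d %.3f %.3f %.3f\n", t, i, pop[i].x, pop[i].y, pop[i].z);
        }
        return 0;
    }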

The final part shows animals moving in a three-dimensional space. The shape is the result of one red ‘animal’ chasing 26 of the blue species and ‘eating’ most of them, hence their trails come to an end. The red predator started in the centre of the cube-like arrangement of blue prey. To indicate the passage of time, the trails change colour over their length: the red predator’s trail gradually changes from red through orange to yellow, and the blue species’ trails change from blue to a paler blue. The shape is rotated to reveal its three-dimensionality.
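
The colour grading can be thought of as a simple interpolation along the length of each trail. The sketch below is my own illustration of the idea, not the original code; the Colour type, the trail_colour function and the particular colour values are hypothetical.

    /* Sketch (not the original code): grade a trail's colour along its length,
     * so older points keep the base colour and newer points shade towards a
     * lighter one: red through orange to yellow for the predator, blue to a
     * paler blue for the prey.                                              */
    #include <stdio.h>

    typedef struct { float r, g, b; } Colour;

    static float lerp(float a, float b, float t) { return a + (b - a) * t; }

    /* t runs from 0.0 at the oldest point of the trail to 1.0 at the newest. */
    Colour trail_colour(int species, float t)
    {
        Colour from, to;
        if (species == 1) {             /* red predator: red -> orange -> yellow */
            from = (Colour){ 1.0f, 0.0f, 0.0f };
            to   = (Colour){ 1.0f, 1.0f, 0.0f };
        } else {                        /* blue prey: blue -> paler blue         */
            from = (Colour){ 0.0f, 0.0f, 1.0f };
            to   = (Colour){ 0.6f, 0.6f, 1.0f };
        }
        return (Colour){ lerp(from.r, to.r, t),
                         lerp(from.g, to.g, t),
                         lerp(from.b, to.b, t) };
    }

    int main(void)
    {
        for (int i = 0; i <= 4; i++) {   /* five samples along the predator's trail */
            Colour c = trail_colour(1, i / 4.0f);
            printf("%.2f %.2f %.2f\n", c.r, c.g, c.b);
        }
        return 0;
    }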

Smallworld Clip 2

I was very interested in what might happen if the changed shapes were played as an animation rather than being displayed as still images next to each other as had been done previously.

In the first part of the video the six images on the left were generated by running the Smallworld behavioural program and saving an image of the end result each time. Each image was generated by plotting the trails of ‘animals’ starting from the same starting positions but with the ‘speed’ characteristic of one species altered each time. In the larger window to the right the same still images are repeatedly displayed as frames of an animation.
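
The workflow amounts to repeating the behavioural run with one parameter changed each time, saving an image of each end result, then cycling the saved images as frames. The sketch below is my own illustration of that process, not the original programs; the speed values and file names are made up.

    /* Sketch of the workflow behind the six images (an illustration, not the
     * original programs): the behavioural run is repeated with the 'speed' of
     * one species changed each time, an image of each end result is saved, and
     * the saved images are then cycled repeatedly as animation frames.        */
    #include <stdio.h>

    #define RUNS 6

    int main(void)
    {
        float red_speed[RUNS] = { 0.10f, 0.12f, 0.14f, 0.16f, 0.18f, 0.20f };  /* made-up values */
        char  frame_file[RUNS][32];

        for (int run = 0; run < RUNS; run++) {
            /* ...here the behavioural program would be run with red_speed[run]
             * and an image of the end result saved; we only record a filename. */
            sprintf(frame_file[run], "smallworld_%02d.img", run);
            printf("run %d: speed %.2f -> %s\n", run, red_speed[run], frame_file[run]);
        }

        /* Replay: cycle the still images repeatedly as frames of an animation. */
        for (int frame = 0; frame < 3 * RUNS; frame++)
            printf("display %s\n", frame_file[frame % RUNS]);
        return 0;
    }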

When I first saw this animation I was struck by the similarity to organic movement. So I made the next part of the video, where the animation is shown at a slightly larger scale and the animating shape is translated as if it were an organism moving itself.

In the last section of the video, the animals are grouped into four populations. Depending on the changes in the parameters that govern their behaviour, the populations of animals have differing levels of contact. In the first frame the two populations on the left have some contact with each other but do not contact the two populations on the right. At a critical point, as the parameters change, the two populations on the left no longer connect with each other but instead contact the populations on the right.

Smallworld Clip 3

This version of the Smallworld suite was exhibited at the “Fearful Symmetries” Art Exhibition of the World Science Fiction Convention held in Brighton, UK, in 1987. This was the first exhibition of an interactive version of Smallworld. Visitors to the show were able to explore some of the possibilities of generating and viewing shapes. Sometimes the data that a visitor’s interactions had created were saved so that they could come back later, reload the data, look at the shape again, and show it to other people. One of these shapes features in another video (see Smallworld Clip 5).

The first sequence in the video shows the way visitors had access to the suite.

Entering ‘1’ on the keyboard would run the ‘plant’ program of the Smallworld suite, which at this stage in the interface development had a pop-up menu to select the species of ‘animal’ to locate. Species were classed by colour. The user had control over the x,y location (across the screen) of the starting points of creatures, but the z plane into which they were introduced (how deep into the space relative to the user) was predetermined.

Entering ‘2’ would run the ‘sworld’ program. This program worked out step by step how the individual ‘animals’ would respond to each other. Once a given number of interactions had been completed the suite would wait for the user to enter ‘3’.

Entering ‘3’ would run the ‘zoomwind’ program, so called as the user could view the shape that had been generated using keys to ‘zoom’ in or out and ‘wind’ (i.e. rotate) the shape to view it.
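
The viewing stage amounts to a small amount of state driven by key presses. The following C sketch is my reconstruction of that idea rather than the original ‘zoomwind’ program; the particular key bindings and step sizes are invented.

    /* Sketch of the kind of key handling 'zoomwind' implies (a reconstruction,
     * not the original program): keys adjust a zoom factor and two 'wind'
     * (rotation) angles, which would then be applied to the view transform
     * before the trails are redrawn.                                        */
    #include <stdio.h>

    typedef struct {
        float zoom;          /* scale factor applied to the whole shape */
        float rot_x, rot_y;  /* 'wind' angles, in degrees               */
    } View;

    /* Update the view in response to one key press; return 0 to leave the viewer. */
    int handle_key(View *v, int key)
    {
        switch (key) {
        case 'i': v->zoom  *= 1.1f; break;   /* zoom in    */
        case 'o': v->zoom  /= 1.1f; break;   /* zoom out   */
        case 'a': v->rot_y -= 5.0f; break;   /* wind left  */
        case 'd': v->rot_y += 5.0f; break;   /* wind right */
        case 'w': v->rot_x -= 5.0f; break;   /* wind up    */
        case 's': v->rot_x += 5.0f; break;   /* wind down  */
        case 'q': return 0;                  /* quit       */
        }
        return 1;
    }

    int main(void)
    {
        View view = { 1.0f, 0.0f, 0.0f };
        int ch;
        /* Read keys from standard input; in the real suite the resulting zoom
         * and rotation would feed the graphics transform before each redraw. */
        while ((ch = getchar()) != EOF && handle_key(&view, ch))
            printf("zoom %.2f  rotation %.1f %.1f\n", view.zoom, view.rot_x, view.rot_y);
        return 0;
    }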

Smallworld Clip 4

The ‘track’ program was an experiment in getting the viewpoint to move along the tracks of the ‘animals’ generated using Smallworld. This recording, made in 1987, is an example of it being applied. The motivation was to try to show the events in the environment as if one were in the position of one of the creatures. Would it feel different if the trail had been made by a different species? Would following the path of a predator feel instinctively different from following the path of its prey?

As I did not want to create a ‘flying down a tube’ graphic, each path was represented by six lines running parallel to it, and the viewpoint moved along between them. Getting the view to change in an appropriate manner proved hard to resolve. The ambiguity of distance travelled on some long stretches of path was rather interesting, as I was effectively handing over the ‘camerawork’ to one of the ‘animals’. Problems with overcoming the ‘gimbal lock’ phenomenon, and other visual aspects that were unsatisfactory, led the development of Smallworld to take a different route, but the potential is still there.
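
The core of the problem is turning the direction between successive trail points into a viewing orientation. Here is a sketch of that step in C, my own reconstruction rather than the original ‘track’ program; the path_orientation function and the sample trail are hypothetical. It also shows where gimbal lock appears.

    /* Sketch of orienting a viewpoint along a recorded trail (a reconstruction,
     * not the original 'track' program). Yaw and pitch are derived from the
     * direction between successive trail points; when that direction is nearly
     * vertical the yaw is undefined, which is the 'gimbal lock' problem noted
     * above.                                                                  */
    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;

    static const float RAD_TO_DEG = 57.29578f;

    /* Yaw (about the vertical axis) and pitch (elevation), in degrees, for a
     * viewpoint travelling from point a towards point b.                     */
    void path_orientation(Vec3 a, Vec3 b, float *yaw, float *pitch)
    {
        float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        float horiz = sqrtf(dx * dx + dz * dz);
        *pitch = atan2f(dy, horiz) * RAD_TO_DEG;
        /* When horiz is close to zero the path points straight up or down and
         * any yaw is equally 'correct', so successive frames can swing wildly. */
        *yaw = atan2f(dx, dz) * RAD_TO_DEG;
    }

    int main(void)
    {
        /* A made-up trail; the last segment is nearly vertical. */
        Vec3 trail[] = { {0, 0, 0}, {1, 0, 1}, {2, 1, 1}, {2, 3, 1.01f} };
        for (int i = 0; i + 1 < 4; i++) {
            float yaw, pitch;
            path_orientation(trail[i], trail[i + 1], &yaw, &pitch);
            printf("segment %d: yaw %.1f  pitch %.1f\n", i, yaw, pitch);
        }
        return 0;
    }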

Smallworld Clip 5

An anonymous visitor to the exhibition in Brighton (see Smallworld Clip 3) made the choices that led to the generation of a shape that they named ‘Sirior’. I particularly liked the way that, in some views, the green lines share something with a landscape seen from an aircraft. Then, when the whole object fits on screen, it looks like some kind of jellyfish-like entity.

The video records a phenomenon that I found interesting when controlling the viewing of these shapes: when parts of the shape in view are cut off by the frame of the window, the control of the moving viewpoint feels like piloting a craft through a space. As soon as the shape is completely visible and no part of it crosses the boundaries of the frame, the sensation is like manipulating an object in front of a static viewpoint. The controls do not change, only the perception.

Smallworld Clip 6

Smallworld was exhibited at the 1988 exhibition “Art and Computers” at the Cleveland Gallery in Middlesbrough, UK. The “Art and Computers” exhibition toured, and Smallworld was also shown in Utrecht in The Netherlands, at the First International Symposium on Electronic Art (FISEA).

In this version of the interface, extra items had been added to the pop-up menu, including a neater way of running the different programs in the suite. In the clip, after a shape has been generated, it is viewed first using the “Depth” program, which displayed the shape in a depth-cued mode. Then the same data is displayed using the “Fire” program, which took the data and displayed the trails section by section as frames in an animation. The animation looped so that the viewer could control the view to look at different events being repeated.
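
The replay can be pictured as a window sliding over the stored trail points, wrapping round so the sequence repeats. The sketch below is my own illustration of that idea, not the original ‘Fire’ program; the trail data and window length are invented.

    /* Sketch of the frame windowing the 'Fire' replay implies (an illustration,
     * not the original program): each animation frame shows only a short recent
     * section of the stored trail, and the frame counter wraps so that the
     * replay loops for the viewer.                                             */
    #include <stdio.h>

    #define TRAIL_LEN 12   /* points recorded along one trail (small for the demo) */
    #define WINDOW     4   /* length of the trail section visible in one frame     */

    int main(void)
    {
        float trail[TRAIL_LEN];                 /* one made-up trail, one value per step */
        for (int i = 0; i < TRAIL_LEN; i++)
            trail[i] = i * 0.5f;

        for (int frame = 0; frame < 2 * TRAIL_LEN; frame++) {   /* loops twice here */
            int end   = frame % TRAIL_LEN;                       /* wrap around      */
            int start = end - WINDOW + 1;
            if (start < 0) start = 0;
            printf("frame %2d:", frame);
            for (int i = start; i <= end; i++)                   /* the visible section */
                printf(" %.1f", trail[i]);
            printf("\n");
        }
        return 0;
    }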

I can’t remember why I called the program Fire. It was a significant development in the Smallworld programs, though, as it showed the interactions of the ‘animals’ much more clearly and appeared to reveal more of their ‘motivation’ than the static trail objects had.
A company from Bristol (called Red Dot, I seem to remember) had made a stop-frame video of this kind of movement for me in 1985 with an early version of Smallworld, but it was too expensive for me to get a copy made, so there’s no record of it. It was good to get back to this stage again though.

Smallworld Clip 7

This clip shows examples of the ‘Fire’ program being used to replay the movements of Smallworld ‘animals’.

It took such a long time to calculate the interactions between a population of ‘animals’ that the sequences are necessarily short: at each step every ‘animal’ has to take account of the others, so the work grows rapidly as the population grows. Current processing power of course allows particle systems and behavioural systems to have enormous numbers of agents, but back in the 1980s, even with extremely expensive high-end kit like a Silicon Graphics IRIS, the calculations took a lot of time.

The low number of agents was not a problem though, as I was not actually looking to make particle systems with massive numbers of ‘animals’, but small populations where the individual ‘stories’ of each animal could be followed by repeating the whole complex sequence of events and seeing how they fitted into the bigger picture.

I was also interested in the shapes created by the fleeing ‘prey’ and intercepting predators, particularly evident in the examples in the middle sequence. For the last part of this clip each ‘animal’ is represented by an object which is itself a small copy of a shape generated using Smallworld. One of the reasons that I chose not to represent the ‘animals’ using graphic objects with solid rendered surfaces or polygons was that I wanted a logical consistency to the compositions.

Smallworld Clip 8

The development of the interface led, at the end of the period at Loughborough where I studied for my PhD, to this version, which overcame the challenge of enabling users to ‘plant’ individual animals at a given location in 3D space.

Visitors to the “Smallworld Vistas” exhibition of the work held in the University’s Pilkington Library in 1989 were able to book an appointment to access Smallworld in one of the LUTCHI Labs.

Users selected the current species from colour swatches at the bottom left of the screen. They could then point to a particular location in the 3D reference cube of space, which appeared when they moved their cursor to the upper part of the screen. The 3D cursor had ‘shadows’ (a 2D cursor on each plane of the cube) to help the user follow where it was pointing. If they held the middle mouse button down, the 3D cursor would move in ‘z’. If they did not hold it down, the cursor moved in ‘x’ and ‘y’. Clicking the left mouse button would ‘plant’ an ‘animal’. Rotation and scaling were controlled with keyboard keys.
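
A minimal sketch of that interaction in C, assuming a very simplified event model (the names plant_input, Cursor and Planted are my own, not the original program’s): mouse movement drives the 3D cursor, the middle button redirects movement into ‘z’, and a left click plants an ‘animal’ of the current species at the cursor position.

    /* Sketch of the 'plant' interaction described above (a reconstruction, not
     * the original program): mouse movement drives a 3D cursor, holding the
     * middle button moves it in 'z' instead of 'x'/'y', and a left click plants
     * an 'animal' of the currently selected species at the cursor position.    */
    #include <stdio.h>

    #define MAX_ANIMALS 100

    typedef struct { float x, y, z; } Cursor;
    typedef struct { float x, y, z; int species; } Planted;

    static Cursor  cursor;
    static Planted planted[MAX_ANIMALS];
    static int     n_planted;

    /* dx, dy: mouse movement; middle_down: middle button held; left_click: plant. */
    void plant_input(float dx, float dy, int middle_down, int left_click, int species)
    {
        if (middle_down) {
            cursor.z += dy;          /* middle button held: move the cursor in z */
        } else {
            cursor.x += dx;          /* otherwise: move in the x/y plane         */
            cursor.y += dy;
        }
        if (left_click && n_planted < MAX_ANIMALS)
            planted[n_planted++] = (Planted){ cursor.x, cursor.y, cursor.z, species };
    }

    int main(void)
    {
        plant_input(0.2f, 0.1f, 0, 0, 0);   /* drag in x and y        */
        plant_input(0.0f, 0.3f, 1, 0, 0);   /* push the cursor into z */
        plant_input(0.0f, 0.0f, 0, 1, 2);   /* plant species 2 here   */
        printf("planted %d animal(s) at %.2f %.2f %.2f\n",
               n_planted, planted[0].x, planted[0].y, planted[0].z);
        return 0;
    }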

They could then select one of the programs in the suite from the ‘PLANT’, ‘DEPTH’ and ‘FIRE’ on-screen buttons below. To the right of these buttons was one providing contextual help and an introduction.

Footnotes

The work was developed and displayed on Silicon Graphics IRIS workstations using C and IRIS GL.
More at stephenbell.org.uk

1) The videos were recorded directly onto video tape (U-Matic) as the program was running. They were later copied to VHS (when U-Matic was phased out) and even later digitized from the VHS tapes.