Stepping Stones in the Mist
Copyright © Paul Brown 2000
All Rights Reserved
This essay was written for Creative Evolutionary Systems
edited by Peter Bentley and Dave Corne to be published by Morgan Kaufmann
in late 2000 or early 2001. It is based on the presentation I made at
the First Iteration Conference in Melbourne, December 2-6, 1999.
http://www.csse.monash.edu.au/~iterate/
Summary
On my approach as an artist - a disclaimer
Major Influences
Historical work - 1960’s and ‘70’s
Early computer work
Recent work
Current & future directions
Acknowledgments
References
Summary
This essay is an idiosyncratic and non-rigorous account of my work
as an artist who has been involved in the field now known as
Artificial Life for over 30 years. To give the reader some
context I begin with a few opinions that define my position
within the visual arts (which is far from the current mainstream)
and then go on to describe early influences from the 1960’s
and 70’s that have framed my involvement in the field of
computational arts. This includes some examples of my work from
this period. The latter part of the essay describes my working
methodology and includes examples of my more recent work and ends
with some speculations about where I may go in the future.
The title is a metaphor for my self view as an artist, and
individual. A long time ago I stepped off the bank of a misty
river or lake and onto a line of stepping stones. Now, many
years later, the stepping stones are shrouded in the mist. Those
behind me are dimmed by the mists of memory and those in front
are hidden by the mists of uncertainty. The one in front of me
is quite clear (as is the one behind) but then they quickly fade
as they progress. I have no idea what lies on the further bank,
or indeed if such a shore even exists! Memories of the bank I
left are now long eroded.
I only really know where I am at this moment or, perhaps, where I
have just been.
Thanks to my longstanding interest in computational systems as a
medium for the visual arts I have been relegated to the fringes
of the arts mainstream for most of my career. The role of
outsider is one that I enjoy and I was somewhat disturbed when
the global art mafia appeared to be acknowledging the
computer-based arts in the early 90’s. In retrospect I had
no need to fear. The mainstream’s adoption of this area of
work was, and is, extremely parochial, one dimensional and, dare
I say, paranoid.
Work that uses computer-aided tools (productivity enhancers based
on traditional tools and methods) has been adopted and, for a
brief time at least, became exceptionally fashionable. However
the concept of the computational metamedium (to quote Alan
Kay’s term KAY84) as a unique new paradigm for the arts
quickly fell prey to the ...”no skills please - we’re
postmodernists” kind of rhetoric that the international
contemporary arts scene use to defend their position whenever it
is threatened.
The paradigm shift that is foreshadowed by the computational arts
is, quite correctly, perceived by the holders of the status quo
as a significant threat to their jurisdiction. They are building
barricades of rhetoric to shore up the crumbling foundations of
their glass menageries. As I have commented elsewhere a
revolution is in process in the arts although, given the extreme
conservatism of the discipline, its cliquish nature and
its lack of any kind of quantitative foundation, I have
little expectation of a resolution in the near future. When I
was 20 I expected that the revolution would be over long before
the time I was 30. Now, at 52, I will be surprised if
it’s resolved in my lifetime. I nevertheless remain
optimistic that it will be resolved, and in my favour!
I hold dear to many unfashionable concepts in this brave new
postmodern world. A favourite is my belief that the culture
vultures of the arts mainstream have completely confused
postmodernism with their own brand of ultra-conservative late
modernist rhetoric.
Here, however, are some more productive opinions:
- the artistic mind is a “butterfly” mind
that can fly from flower to flower, from source to source, with
little respect for logic or scholarship. The result is a grand
synthesis formed at a meta- or pre-conscious level. From this it
follows that:
- the visual arts are beyond language, beyond conscious
processing, at least when they are created.
note 1: The studio/production basis for the visual arts often
means that they sit uneasily within scholastic institutions like
Universities where their studio component is often undermined in
favour of theory. This is partly due to economic rationalism
(studio is one-to-one and expensive, theory is one-to-many and
cheap) and partly due to the academic pressures of the university
tenure and promotion policies (which favour theory).
From this we can expect that:
- theory does not (necessarily)
inform creation although creation, of necessity, informs
theory.
In believing that living art (art that is still in the process of
being created) is by definition beyond the linguistic mechanisms
that critical theory demands I nevertheless acknowledge that dead
art (like Dada which is a complete corpus and cannot now be
modified) is susceptible to theoretical deconstruction and
analysis. Perhaps Dada can be totally described by language
though I suspect that Gödel’s concept of incompleteness
will apply here as it does in any defined and rigorous domain. If
we want to learn about Dada it will probably help us if we look
at some Dadaist artworks although I’m not convinced this is
absolutely necessary. If we want to learn about a living art
process then we are obliged to look at and/or interact with the
work. It is the only portal for understanding that we possess.
I also suspect that there may be analytical tools, like Charles
Sanders Peirce’s semiotics or George
Spencer-Brown’s Laws of Form which may have more
success in their application to the living arts. My suspicions
are based on the intuition that these tools may share, or overlap
with, the metalinguistic domain of visual artistic creativity.
- the medium informs the work and: skill with the medium
determines the quality of the work. This is a very unpopular
point of view at present and considered a legacy of high
modernism’s ... “truth to the medium”. I
challenge all critics to write a poem in a language with which
they are not familiar that a native speaker of that language
would consider as acceptable. It doesn’t have to be good,
just acceptable. Random “cut-ups” and computer
translations don’t count.
I was recommended to read Anton Ehrenzweig’s “Hidden
Order of Art” soon after it was first published in 1967.
As a young art student it meant little to me and it wasn’t
until I had become interested in system or procedural art in the
early 70’s that it made much sense. Ehrenzweig was a
psychoanalyst who has been credited, by Anthony Storr, with being
responsible for the major revision of the discipline that
redirected it from the mainly pathological focus of Freud to a
more creative and celebratory emphasis. I suspect that
Ehrenzweig may himself (had he still been alive) have credited
Marion Milner with this transition.
His first book “The psycho-analysis of artistic vision and
hearing : an introduction to a theory of unconscious
perception” is a flamboyantly unstructured and almost
unreadable cry of “eureka”. “Hidden
Order”, had he lived to complete it, would have been his
masterwork. It was published soon after his death in an
unfinished format, unindexed and after a flurry of interest
disappeared from view before reaching a second edition. After a
long hiatus it reappeared and is still in print from University
of California Press.
Ehrenzweig overcame the significant problems that had undermined
Freud’s analysis of the overt content of the artwork by
analysing instead its structure and, in particular, the structure
of the creative process itself. This was of course a period when
Abstract Expressionism - random or “subconscious”
mark making - was the dominant model for the visual arts.
Ehrenzweig proposes three major stages in the creative process:
an initial rejection of unwanted or repressed material followed
by; an “oceanic” engagement and synthesis of the
material and; a final reintegration or “reification”
of the material at a conscious level.
I read “Hidden Order” at one sitting and remember my
excitement and agitation when I had finished it. Within a couple
of hours I had devised a procedure which I thought might be capable
of “testing” Ehrenzweig’s hypothesis (fig. 1).
It involved replacing Ehrenzweig’s initial rejection stage
with a system for positioning tiles according to the output of a
random number generator - a dice! This appealed to me because I
was suspicious of all references to a subconscious and then, as
now, was extremely sceptical of the concept of art as self
expression (at least in the emotional sense of the phrase).
Figure 1:
A recent reconstruction of the first
image I made using the technique devised after I had read Anton
Ehrenzweig’s “Hidden Order of Art”. It used
octagonal tiles and the four orientations were dictated by
flicking the pages of the book and using the last digit of the
page number modulo 3 - I didn’t have a dice at that time!
The black squares indicate holes.
My conscious motives (as I remember them) were involved with
issues that obsessed me at the time like removing myself from the
work and objectifying the art making process. Issues that had,
in that period of history, led to Minimalism, Conceptual Art and
Art Language. In retrospect my achievement was the establishment
of a personal methodology for creative production that has
governed my work ever since.
Around that same time I was recommended George
Spencer-Brown’s “Laws of Form”. I found
its clinical notation intimidating and, in consequence, I
read a teach-yourself format book on symbolic logic. On
returning to the “Laws” I discovered my concern had
been unnecessary. Spencer-Brown leads his readers step-by-step
through his calculus and towards his conclusions. More recently
it has been described to me as a “boundary grammar”
since it deals with the ideas of distinction and of crossing.
Although I remain convinced that “Laws of Form” is
one of the most important books I have ever read I am not aware
of any direct influence it has had on my work. I remain
surprised about this and often carry a copy around with me so
I’m ready for the breakthrough when it occurs!
Spencer-Brown’s clinical and precise methodology has
certainly influenced me. Ironically symbolic logic which I
perceived at the time as merely a spin-off or perhaps more
correctly a portal to the “Laws” has had a profound
and on-going influence. This is probably because of my immersion
in computational methods and their foundation in logic and formal
languages.
Another book which was a major influence at the time was Charles
Biederman’s classic “Art as the Evolution of Visual
Knowledge” (BIE48).
Since 1968 I have been fascinated by the structure of a
classical Chinese text called the “I Ching or Book of
Changes” (WIL23). It is a two state system of broken (yin)
and unbroken (yang) lines that influenced Leibniz who developed
the European version of binary notation (LEI66). The basic
“word” is a three bit trigram. The eight trigrams are
multiplexed together to form six bit hexagrams which index the 64
chapters of the book. Via changing lines (bits that can flip
from yin to yang) any chapter can change to any other and so the
combinatorial permutations of the book total 4096. The book is
believed to have first appeared around 1800 BCE. It is possible
to interpret the book as a symbolic cosmology which derives the
three dimensional cartesian universe by repeated subdivisions
(the trigrams) and then populates it with agents (the hexagrams).
This process is echoed in the opening stanzas of the “Tao
Te Ching”:
- One gives birth to two - the yin and yang
- Two gives birth to three - the trigrams
- Three gives birth to the myriad creatures - the hexagrams
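The arithmetic of this structure is easy to check. A short Python sketch, encoding yin as 0 and yang as 1 as the text suggests:

```python
# Trigrams are 3-bit words (yin = 0, yang = 1); two trigrams
# stacked make a 6-bit hexagram, indexing the 64 chapters.
trigrams = [format(n, "03b") for n in range(2 ** 3)]
hexagrams = [upper + lower for upper in trigrams for lower in trigrams]

print(len(trigrams))        # 8 trigrams
print(len(hexagrams))       # 64 hexagrams

# Via changing lines any hexagram can transform into any other,
# so the book's combinatorial permutations total 64 * 64.
print(len(hexagrams) ** 2)  # 4096
```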
In the 1970’s I also became aware of the work of the System
Art Group. Several members taught at the Slade School of Art
where I was to study from ’77 to ’79. None of them, at that
time, used computers and although I cannot think of any direct
influence their work has had on me it was certainly reassuring to
meet others with a similar mindset!
In 1970 I read Martin Gardner’s column “Mathematical
Games” in “Scientific American” (GAR70)
where he described John Horton Conway’s “Game of
Life”. For several months I persevered with large sheets of
graph paper laid out over the floor of my home. Pencil, paper
and eraser were too limited and I had to wait four years until I
found my “ideal” tool - the digital computer.
However this initiated my fascination with cellular automata
(CAs) that continues to this day.
The final influence I will mention from that brief but formative
period between 1968 and 1972 is the exhibition Cybernetic
Serendipity which was curated by Jasia Reichardt and held in 1968
at the Institute of Contemporary Art at its then new
premises on the Mall (REI69). It was the first historical review
of artists using computers. I was fascinated and returned to
London in order to spend a second day at the show. Although I was
attracted to the idea of working with computers my attempts to
get involved didn’t come to anything. Then the
Polytechnics were formed and it was possible for me to enrol, in
1974, as a mature fine arts student at Liverpool Polytechnic (now
John Moores University) and then spend most of my time in the
Mathematics Dept. learning Fortran (on an ICL 1903a) and in
Engineering discovering PAL3 Assembler on their DEC PDP 8.
Despite this, in 1977, I was awarded a first class honours
degree in fine art.
As a young art student at Manchester College of Art in 1965 I had
the choice of being pigeon-holed as either a painter, sculptor or
printmaker. I survived three years of boredom before dropping
out to co-found the lightshow Nova Express which toured the North
and Midlands of England for several years. In addition to the
leading bands of the day like Pink Floyd, The Nice and Canned
Heat, we also played with contemporary music, dance and
performance groups like “Musica Elettronica Viva” and
“Meredith Monk and The House”.
About that time John “Hoppy” Hopkins introduced me to
video and, in collaboration with the musician and composer
Michael Trim (who had one of the first AKS digital audio
synthesisers) and an engineer (whose name I am sorry to admit I
have forgotten) we made several primitive video synthesisers.
These worked on principles of feedback and utilised both
electronic and mechanical components (like rippled glass
filters). Although most of our work was intended to be played
live we did make a few tapes and one “Mandala” was
included in the UK’s first major retrospective of video art
- The Video Show at the Serpentine Gallery in 1974.
My years in the lightshow and video
were a revelation. Contrary to my training, which had demanded
“meaning”, “significance” and “context” I
discovered that a simple feedback circuit or some oil and water together with a
few dyes and some heat could produce large scale immersive experiences that
were, to me at least, a lot more attractive and interesting than the stuff on
the walls and floors of the trendy galleries. A significant insight was my
redundancy as the creator of these works. I could show someone else (who
didn’t need to be an “artist”) how to do it. Or, possibly, I
could build a machine to do it!
Being of a logical disposition I
realised I needed to find a formal method of codifying and creating work of this
kind as a systematic procedure. That’s why the experiment with tiles that
followed my reading of Ehrenzweig was such a revelation. But my interest, some
might say obsession, with tiling systems has a longer history. Back in 1967,
whilst still an undergraduate student at Manchester I had produced a series of
tile drawings (fig. 2) that were probably the first pieces of work that I ever
made which I considered to be significant in the sense of
“originality” or of being unique to me as their creator. I equate
this to the idea of “personal signature” in the emotive or
self-expressive arts.
Figure 2a:
a simple square tile has
its corners labelled with the ordinal symbols 1-4 in clockwise
order.
Figure 2b:
a two by two arrangement of the
tile. The peripheral vertices are now labelled with the symbols 1-4 in a
clockwise zigzag pattern. The internal shared central vertex repeats this
pattern in an anticlockwise sense. Note the emergence of new symbol
relationships at the adjacent edge vertices.
Figure 2c:
a simplification of figure 2b
showing only the peripheral vertices and the zigzag
pattern.
Figure 2d:
the tile from fig. 2c can now be
arranged in a similar fashion to that in fig. 2b. Note how this restores the
clockwise order of the symbols in the peripheral vertices. Also that the inner
shared vertex repeats this order but in an anticlockwise sense. Compare the
relationship of the adjacent edge vertices with those in fig. 2b.
Note also that fig. 2a is a
simplification of this arrangement. This demonstrates that applying the same
arrangement procedure twice returns the peripheral and central vertex labels to
the same state.
Figure 2e:
here we see the full expansion
without simplification. It is clear that this process can continue indefinitely.
Every second expansion it will restore the initial conditions at the peripheral
and centre vertices whilst producing an expanding set of codes at the ever new
intermediate vertices.
It would be ten years before I
became acquainted with the work of Benoit Mandelbrot and “self
similarity” and even longer before I first heard the term
“emergence” used in the sense that we understand it today. However
I’m now aware that back in 1967 I glimpsed these concepts in creating and
studying these drawings.
Soon after I made these drawings I
showed them to my lecturers who dismissed them as inconsequential and irrelevant
and then suggested that I reconsider my career in the visual arts. Not long
after this I dropped out and began working with the lightshow.
Six years later I enrolled in the
College of Art at Liverpool Polytechnic with the express intention of learning
about computers. After a few months learning FORTRAN and the graphics package
Gino-F I began to develop a tile-based image generating system. Although I
initially used a random number generator to drive the system I soon became
dissatisfied with the simple equation of randomness with intuition. I recalled
my earlier interest in “The Game of Life” and began to devise both
deterministic and probabilistic CAs to create the input data for the
system.
At Liverpool the painters were
unsympathetic to my work and I transferred to sculpture. This department
included several members of the 60’s Kinetics Group and they were very
supportive and helped me develop a small digital electronics lab. I kept my
head down and took my lecturers’ advice when they suggested I make some 3-D stuff
for my final assessment.
Then in 1977 I began two years of
postgraduate studio work at the Slade School of Art at University College
London. The Computing and Experimental Department had been formed in 1974 by
Malcolm Hughes (then head of postgraduate) and Chris Briscoe (who had begun
working with computers as an undergraduate student at Portsmouth College of Art)
together with an alumni endowment which helped procure a Data General Nova II
minicomputer.
The late A-life pioneer Julian Sullivan
was also on the staff and the late Edward Ihnatowicz, who had created the early
adaptive robots “SAM - Sound Activated Mobile” and “The
Senster” was a regular visitor. Harold Cohen, who was then working on the
early version of his drawing automaton “Aaron” visited whenever he
was in the country. Darrel Viner, who had been working with the computer
graphics pioneer Dr. John Vince at Middlesex Polytechnic since 1972, was also
around. The place was a magnet for artists working with computers and
generative systems. Many of them were involved with automata or other
procedural or rule-based systems and we were all fascinated by the area that
would later be called “Artificial Life” or A-life.
I also joined the Computer Arts Society
which had been founded in 1968 at Event One at the Royal College of Art.
Meetings were held at the late John Lansdown’s offices in Bloomsbury
Square. John became a friend and mentor who, after seeing some of my work
invited me to look into the dynamic generation of unique foliage drawings for
use on CAD architectural plans and perspectives. This project introduced me to
Mandelbrot’s work on fractal, iterative and non-linear systems - the area
that has now been dubbed “Chaos Theory” (MAN77). John published a
brief overview of this work-in-progress in his “Not Only Computing - Also
Art” column in 1978 (LAN78).
It’s interesting to note that by
the mid to late ‘70’s the Chaos field was well populated by
scientists (who were nevertheless often working “underground” to
protect their career status) and by artists. By contrast the nascent A-life
field was then almost exclusively the domain of artists.
The system that I developed then
refined first at Liverpool Poly. then at the Slade School consisted of three
components: a cellular automaton; a graphics interpreter and; a back-end
display generator (fig. 3).
Figure 3:
Schematic of the CA graphic
system
Most often I have worked with simple
CAs that are identical or similar to Conway’s “Life”. The
automaton is based in a regular rectangular matrix where each cell of the matrix
can have one of only two states. In general these states can be symbolised as
“empty” and “occupied”. Such a matrix can be
represented by a one bit array where 0 = empty and 1 = occupied. This array is
the current time slice. The next time slice is calculated by applying a set of
rules to each cell in the matrix. These rules examine the immediate neighbours
of the cell (fig. 4).
Figure 4:
Rule 1 - If the cell is occupied and it
has 2 or 3 neighbours occupied then it will remain occupied in the next
timeslice
Rule 2 - if the cell is empty and has 3
neighbours occupied then it will become occupied in the next
timeslice
Rule 3 - otherwise the cell will be empty
in the next timeslice
When the rules have been applied to all
the cells in the matrix the next timeslice replaces the current timeslice and
the process is repeated.
If any readers are not familiar with
Conway’s “Game of Life” then Poundstone offers an excellent
introduction (POU87).
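For readers who prefer code to prose, rules 1-3 can be sketched in a few lines of Python. The grid encoding (0 = empty, 1 = occupied) follows the text, the edges wrap in the toroidal fashion described below, and the function and variable names are my own:

```python
def life_step(grid):
    """Return the next timeslice of a wrapped (toroidal) Life grid."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight immediate neighbours, wrapping at the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            if grid[r][c] == 1:
                nxt[r][c] = 1 if n in (2, 3) else 0  # rules 1 and 3
            else:
                nxt[r][c] = 1 if n == 3 else 0       # rules 2 and 3
    return nxt

# A "blinker" oscillates with period two:
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
```

Applying life_step twice to the blinker returns the original pattern.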
Conway’s rules can be implemented
in a simple look-up table:
Future timeslice cell state, given the no. of neighbours occupied:

  Neighbours occupied:       0  1  2  3  4  5  6  7  8
  Cell currently occupied:   0  0  1  1  0  0  0  0  0
  Cell currently empty:      0  0  0  1  0  0  0  0  0
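The same look-up table can be written directly in code (a sketch; the names are mine):

```python
# Future cell state indexed by [current state][number of occupied neighbours].
# Row 0: cell currently empty; row 1: cell currently occupied.
LOOKUP = [
    [0, 0, 0, 1, 0, 0, 0, 0, 0],  # empty: born only with 3 neighbours
    [0, 0, 1, 1, 0, 0, 0, 0, 0],  # occupied: survives with 2 or 3
]

def next_state(current, neighbours):
    """Conway's rules as a single table look-up."""
    return LOOKUP[current][neighbours]
```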
Since each cell has 8 neighbours the
neighbourhood family consists of 256 members and we can also illustrate
Conway’s rules using a series of state diagrams (fig. 5).
Figure 5 a:
a diagram showing all the
possible neighbourhood states of the Life CA.
As in each of these state
diagrams the top row and leftmost column outside the square are four-bit nybbles
that address the columns and rows of the diagram.
Figure 5 b:
a diagram showing those
neighbourhood states that allow a cell that is occupied to remain occupied in
the next time slice. This corresponds to rule 1.
Figure 5 c:
a diagram showing those
neighbourhood states that enable a cell that is empty to become occupied in the
next time slice. This corresponds to rule 2.
Note that rule 3 is implied by the
empty spaces in both b and c.
Edge conditions (where a cell does not
have the requisite neighbours) are often dealt with by wrapping the array so the
bottom edge is considered adjacent to the top and the left edge adjacent to the
right. The finite rectangular array represented becomes equivalent to the
continuous surface of a torus or doughnut. This arrangement is often referred
to as “wraparound”.
For every timeslice the CA produces a
single bit 2-D array of data as output. This array is the input to the graphics
interpreter. The graphic interpreter has a fairly simple task: it controls the
way that the symbolic input array maps to an actual graphic
layout.
The mapping is one-to-one. For each
cell in the 2-D bit matrix there will be a corresponding tile in an equivalent
2-D graphic matrix. For single bit input the mapping is simple (fig.
6).
Figure 6 left:
Single bit mapping to two different tiles.
Figure 6 right:
Single bit mapping to the same tile rotated.
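A minimal sketch of this one-to-one interpretation, using the rotated-tile mapping of the right-hand example of figure 6 (the tile representation is a placeholder of my own):

```python
# Map each cell of the CA's output array to a tile, one-to-one.
# Here 0 selects a tile at 0 degrees and 1 the same tile rotated.
TILE_FOR_BIT = {0: ("tile", 0), 1: ("tile", 180)}

def interpret(bit_array):
    """Turn a 2-D bit array into an equivalent 2-D matrix of tiles."""
    return [[TILE_FOR_BIT[bit] for bit in row] for row in bit_array]
```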
Most of my recent work uses multiple
bit arrays. Although I have worked a little with CAs that operate on multiple
bits I have in general preferred to derive a multiple bit solution by
integrating single bit arrays over time. For example we can consider an
identical cell in two timeslices Tn and
Tn+1:
  Tn   Tn+1   Anthropomorphic Interpretation
  0    0      remains empty (“dead”)
  0    1      becomes occupied (“birth”)
  1    0      becomes empty (“death”)
  1    1      remains occupied (“survival”)
Two-bit mapping is illustrated in
figure 7 and three-bit in figure 8.
Figure 7:
A two-bit state is mapped onto a
family of four tiles
Figure 8:
A three-bit state is mapped onto a
family of eight tiles
Time integration imposes an important
constraint. For example the three-bit code 011 can only change to 110 or 111 in
the next frame. The code shifts left one bit and then the CA provides the least
significant bit.
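This time-integration constraint behaves like a shift register, and can be sketched as (the function name is mine):

```python
def integrate(code, ca_bit, width=3):
    """Shift the code left one bit; the CA supplies the new LSB."""
    return ((code << 1) | ca_bit) & ((1 << width) - 1)

# The three-bit code 011 can only become 110 or 111:
print(format(integrate(0b011, 0), "03b"))  # 110
print(format(integrate(0b011, 1), "03b"))  # 111
```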
Although in my early work I often
mapped bit states onto a set of different tiles (figure 6 left) I have more
recently chosen to map onto families derived from rotations and mirror rotations
of a single tile (figure 6 right and figures 7 and 8).
The tiles I use have patterns on them
which, in the final piece, dominate the visual appearance of the work. Many
viewers are, in fact, surprised to discover the underlying tile matrix and the
relative simplicity of the elements that make up often complex images. The idea
of complexity emerging from simplicity or, to use the older homily, “the
whole is greater than the sum of the parts”, has been a guiding concept
behind my work for longer than I can remember. I find myself equally attracted
to holism and reductionism and constantly oscillate between these two
extremes.
By patterning the tiles it’s
possible to explore ambiguities like those illustrated in fig.
9.
Figure 9:
In the top diagram the area tagged A is most likely
to be read as the positive or foreground space whereas
in the bottom one it is more likely to be
read as the negative or background space.
The negative space has been hatched in
both cases. In this way it is possible for the work to explore and/or reveal
features like: boundary; closure; inside; outside; negative; positive;
foreground; background; inversion and; crossing. Readers who are familiar with
Spencer-Brown’s “Laws of Form” may now share my astonishment
that it has had so little direct influence on my work despite the fact that so
many similar guiding concepts are shared!
It’s also necessary for me at
this point to issue another disclaimer! In exploring concepts or features like
these I am not trying to understand them in the way that a cognitive scientist
might attempt to do. Nor am I trying to create models or simulations that help
us understand creative behaviour or perception. As an artist I am simply
exploiting such concepts, exploring them and mining them. This is not to say
that I don’t, at least at some level, understand them but that such
understanding is not perceived by me as being of any particular relevance to the
production process of my work. It is not intended to be an illustration of such
concepts; however, it’s quite legitimate for me or for others to interpret
the work in relationship to these concepts.
In 1986 I gave up my PC-AT and the use
of mainframe and minicomputers for an Apple Macintosh and this has been my
preferred platform ever since. Around 1988 I became aware of a software
application called VideoWorks that was subsequently renamed Director and has
since become the de-facto standard for multimedia authoring worldwide. Director
V7 is a powerful object-oriented fourth generation application generator that
allows relatively naive users to create sophisticated products for CD-ROM and
the Web. For more advanced users it supports Lingo, a well featured object
oriented programming language. Since the mid 90’s it has been possible to
create and support products on both the Mac and PC systems.
All of my recent time-based works have
been produced using Director. These include: Infinite Permutations V1 - a one
bit system, 1993/94; Infinite Permutations V2 - a two bit system, 1994/95 and;
SAND LINES - another two bit system completed in 1998. During 1999 and 2000 I
have been working on a new piece, provisionally titled Chromos, which will be a
three bit system. Samples of these works are included in the images/timebase
section of my website.
Several of these works utilise
pre-computed animation tables where, for example, one member of a tile family is
inbetweened to each other member (fig. 10).
Figure 10:
Page 1 of the animation table for a
new work Chromos where family member 1 inbetweens via 10 stages to each of the
eight members of the family (including itself on line 1)
More of Chromos can be seen on my website.
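The essay does not say how the inbetween stages are drawn, but one minimal sketch of such a pre-computed animation table, assuming tiles are simple grey-scale pixel arrays and a linear cross-fade, is:

```python
def inbetween(tile_a, tile_b, stages=10):
    """Pre-compute `stages` frames fading tile_a into tile_b."""
    frames = []
    for s in range(stages):
        t = s / (stages - 1)  # blend factor from 0.0 to 1.0
        frames.append([[(1 - t) * a + t * b
                        for a, b in zip(row_a, row_b)]
                       for row_a, row_b in zip(tile_a, tile_b)])
    return frames

# One row of the table: family member 1 inbetweened to each member
# of an eight-tile family (including itself):
# table = [inbetween(family[0], member) for member in family]
```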
These drawings of animation tables are
so interesting in themselves that I’m planning to exhibit the work in an
installation which will include a large format projection of the time-based
component together with wall-hanging and book format working drawings and other
related material.
In all of these time-based works I
process the family of tiles using the filters and image processing facilities of
Adobe Photoshop in order to both colour and texture their appearance. Although
this is sometimes merely decorative or playful it can be used more significantly
to anchor analogical or structural references that are otherwise contained in or
implied by the work. For example this mechanism can be used to both increase
and decrease ambiguity or to lock into a graphic metaphor. In Infinite
Permutations V1 the colour has been specifically chosen to make it difficult for
the viewer to resolve the foreground/background or negative/positive conflict
inherent in the pictorial representation of the work. In SAND LINES the image
processing has been selected to consolidate the stone/sand identification and
create a conflict with the animation which does things that the materials
represented could not.
In 1992 as professor of art &
technology at Mississippi State University I was able to use an Iris ink-jet
printer for the first time and was amazed at its resolution, surface
integrity and colour fidelity. More recently Iris together with third party
suppliers have introduced a variety of archival inks that can print on acid-free
watercolour papers. Since 1995 I have been making limited edition prints of
images that begin as timeslices of large format tile automata like those used in
the time based pieces. I often get lost in an oceanic orgy of image processing
whilst producing these images and occasionally even undermine or disguise the CA
foundation of the work. (See the images/prints section
of this website for
examples).
At the time of writing I have just
begun a year as artist-in-residence at the Centre for Computational Neuroscience
and Robotics and the School of Cognitive and Computing Sciences at the
University of Sussex. For some time now I have been following work in
evolutionary computational methods and have become convinced they have an
important contribution to make to my future work. The thick mists shrouding
these particular stepping stones make it difficult for me to predict just how
these processes may relate to my practice. I’m also concerned that
prediction often prejudices the outcomes of a project and want to keep an open
mind.
However I can foresee one way that this
technology could be integrated into my current system and it’s likely that
this will be the focus of my work for the coming months.
It would involve the addition of a
pre-evolved or a dynamically evolving “observer”. This should be
able to analyse the bitmap array, the symbolic graphic array and/or
the “actual” raster graphic display data for each frame or
timeslice. It would then be capable of dynamically modifying the generating
CA’s rules, the graphic interpreter’s mapping rules and/or
the display generator’s image processing filters.
This would be a learning system that
would be capable of evolving certain kinds of graphic behaviours (fig.
12).
figure 12:
Proposed schematic for an evolving
time-based system. Compare this with figure 3.
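The observer loop proposed above can be sketched in code. The following is a minimal illustration only, assuming a simple one-dimensional (Wolfram-style) CA in place of the author’s tile automata, and a crude density measure in place of a real evolved observer; all names (`score`, `mutate`, `evolve`) are hypothetical.

```python
import random

WIDTH, STEPS = 64, 32

def step(cells, rule):
    """Apply an 8-bit elementary CA rule to one row (wrap-around edges)."""
    out = []
    for i in range(len(cells)):
        left = cells[i - 1]
        right = cells[(i + 1) % len(cells)]
        idx = (left << 2) | (cells[i] << 1) | right
        out.append((rule >> idx) & 1)
    return out

def run_ca(rule, seed_row):
    """Generate the full stack of timeslices for one rule."""
    cells = list(seed_row)
    frames = [cells]
    for _ in range(STEPS):
        cells = step(cells, rule)
        frames.append(cells)
    return frames

def score(frames):
    """Stand-in 'observer': reward output that is neither dead nor saturated
    by scoring how close the overall cell density is to 50%."""
    total = sum(sum(f) for f in frames)
    density = total / (len(frames) * len(frames[0]))
    return 1.0 - abs(density - 0.5) * 2  # 1.0 at exactly 50% density

def mutate(rule):
    """Dynamically modify the generating CA's rule: flip one bit of its table."""
    return rule ^ (1 << random.randrange(8))

def evolve(generations=50):
    """Hill-climb: keep a candidate rule whenever the observer prefers it."""
    random.seed(1)
    seed_row = [random.randint(0, 1) for _ in range(WIDTH)]
    best_rule = random.randrange(256)
    best_score = score(run_ca(best_rule, seed_row))
    for _ in range(generations):
        candidate = mutate(best_rule)
        s = score(run_ca(candidate, seed_row))
        if s >= best_score:
            best_rule, best_score = candidate, s
    return best_rule, best_score
```

In the full system the scalar `score` would be replaced by a learned judgement, and mutation could equally target the graphic interpreter’s mapping rules or the display filters rather than the CA rule alone.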
I indicated in my opening paragraphs
that the far shore of my life’s work is completely invisible or may not
exist! However, as I have implied above, I do have some long term
ambitions. The main one is to contribute to the development of autonomous
creative behaviour. I look forward to automata that can create artworks that a
peer group of either humans or other machines will accept as legitimate creative
activity.
The singular and remarkable success of
an artist like Harold Cohen lies in his achievement in externalising his own personal
creative drawing behaviour. Aaron, the automaton he has created, produces 100%
genuine Harold Cohen drawings. I believe that we will eventually be able to
create automata that will make artworks that do not bear the signature of the
creator of the system; such a system will be capable of evolving its
own personal style.
I leave the reader to ponder the
problem of just what the words “own” and “personal”
might mean in this context.
My own feeling is that the ever-growing
realm popularly referred to as cyberspace is an ecosystem and that
creatures must evolve to exploit that space. Since a primary fitness
characteristic would be to hide from humans (who would almost certainly try to
destroy them) they may already exist! Such entities would have access to a
vast repository of information from both real-time sources and from archives. I
find it inconceivable that creatures with such a broad bandwidth of input
“sensations” will not develop behaviours analogous to
human art making.
I would like to thank Dr. Phil
Husbands, joint coordinator of the Centre for Computational Neuroscience and
Robotics and Richard Coates, Dean of the School of Cognitive and Computing
Sciences at the University of Sussex for the invitation to join their program
for a year as artist-in-residence.
In particular I must thank Gavin Sade
of the Communication Design program of the Academy of the Arts, Queensland
University of Technology who recently turned my hacked Lingo code into a series
of elegant modular objects.
I am especially grateful to the
Australian Commonwealth Government and the Australia Council, its arts funding
and advisory body, for their award of a New Media Arts Fellowship which
will fund my work throughout 2000 and 2001.
This essay is based on a presentation I
made at First Iteration, a conference arranged by Jon McCormack and Alan Dorin
at Monash University, Melbourne, December 1-3, 1999 (FIR99).
BIE48 Biederman, C, “Art as the
Evolution of Visual Knowledge”, Red Wing, Minnesota,
1948.
EHR65 Ehrenzweig, A, “The
psycho-analysis of artistic vision and hearing : an introduction to a theory of
unconscious perception”, 2nd ed, Braziller, 1965.
EHR68 Ehrenzweig, A, “The hidden
order of art : a study in the psychology of artistic imagination”,
Weidenfeld, 1967.
FIR99 First Iteration Conference,
http://www.csse.monash.edu.au/~iterate/
GAR70 Gardner, M, “Mathematical
Recreations”, Scientific American 223(4), October, 1970, pp
120-123
KAY84 Kay, Alan, “Computer
Software”, Scientific American, September 1984 V 251 #3 pp 41-47.
LAN78 Lansdown, John, “Only God
can make a tree” in “Not only computing - also art”, Computer
Bulletin, British Computer Society, September 1978.
LEI66 Leibniz, GW, "De Arte
Combinatoria", 1666
MAN77 Mandelbrot, BB, “Fractals :
Form, Chance and Dimension”, Freeman, 1977
POU87 Poundstone, W, “The
recursive universe: cosmic complexity and the limits of scientific
knowledge”, O.U.P. 1987
REI69 Reichardt, J (ed), Institute of
Contemporary Arts, “Cybernetic serendipity : the computer and the arts”,
special issue of Studio International, Studio International,
1969.
SPE69 Spencer-Brown, G, “Laws of
Form”, George Allen and Unwin, 1969.
WIL23 Wilhelm, R (Ed., Trans. German),
Baynes, CF (Trans. English), The I ching, or, Book of changes / The Richard
Wilhelm translation (1923) rendered into English by Cary F. Baynes, Princeton
University Press, 3rd ed 1967.