Beyond Art

Copyright © Paul Brown 1989
All Rights Reserved


This essay was written in 1989 and an edited version appeared as: Metamedia and CyberSpace - advanced computers and the future of art in Hayward, P, Ed., Culture, Technology and Creativity in the Late 20th Century. Arts Council of Great Britain/John Libbey Press, October 1990. The original essay was eventually published in Pickover, C A, Ed., Visions of the Future - Art, Technology and Computing in the Twenty-First Century, Science Reviews, Northwood UK. pp 193-204, in 1992.

This version was edited and minor revisions were made in 2000


Abstract

Speculation about the future of information technology and its implications for the creative visual arts helps us identify the unique features of this new tool/medium. Computer aided art in its essential form is not concerned with the production of artefact but instead with communication and interaction.


The Last Decade

For better or for worse the USA has won the Cold War and over the last few months the countries of Eastern Europe have begun to establish western-style democracies. By the time this essay reaches print I hope that we will have found renewed optimism and can look forward to a peaceful planet in the not too distant future. Perhaps now, at last, there is hope that the billions of dollars invested annually around the world to maintain the Cold War can be redirected into solving the pressing ecological problems of pollution, population and poverty that face humankind and threaten our future.

Donald Michie (1) has suggested that these problems are too complex for humans to understand and solve, that our only hope is to develop artificial intelligence systems that can grasp the totality of the problem and so suggest viable paths of action. A dilemma here is that in order to create that technology we need a level of industrialisation that will, in the short term, increase pollution: by committing ourselves to this particular solution we also guarantee its need.

This essay speculates on the development of a super-intelligent technology and, in particular, upon the implications of this technology for the creative visual arts. Not only are artists discovering a new and, seemingly, infinitely flexible medium or meta-medium (2) and, possibly a new role for art but, in my opinion, they should also become directly involved in developing this new technology (3).

Let me repeat, this essay is speculative and it probably owes more to science fiction than to science fact. It will find many critics.


What Does Fast Mean?

A modern supercomputer can sustain about one thousand million instructions per second (or 1000 MIPS). These instructions could be, for example, simple integer sums - the addition of two whole numbers. If humans wished only to read this many numbers, and could read one each second, it would take them about 32 years of full-time work, with no breaks for eating or sleeping, to complete the task.
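The arithmetic behind that comparison is easy to verify. A minimal sketch, using only the figures quoted above (1000 MIPS, and a human reading one number per second):

```python
# One second of 1000-MIPS machine time versus a human reader
# working through the same count of numbers at one per second.
instructions_per_second = 1_000_000_000   # 1000 MIPS
seconds_per_year = 60 * 60 * 24 * 365     # non-leap year, no breaks

human_years = instructions_per_second / seconds_per_year
print(human_years)  # roughly 31.7 - the "32 years" quoted above
```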

In making such a simplistic comparison, that one second of supercomputer time equates to 32 years of human time, we must retain perspective. The internal processing speed of the brain appears to be a good deal faster than that of the supercomputer, as we shall see below. And this comparison makes no allowance for human psychophysical processes, like intuition, about which we still know very little. Nevertheless a grand-master chess player can still beat powerful chess-playing computers by making inspired decisions during the complexities of the end-game.

Imagine that we have those 1000 million numbers before us. We know that most are in the range 0 to 100 but one, just one, is much larger, say 30,000. Our task is to find this renegade value as fast as possible. We can convert those numbers to colours, so there is a direct relationship between colour and value, then display them as pixels on a computer graphics screen. If we use a resolution of 1000 rows of 1000 pixels each we can see 1,000,000 numbers simultaneously. 1000 screens are required for all the numbers, so if we play these back at 25 frames per second we get to see all the numbers in just 40 seconds. The renegade number will be apparent on the first or second viewing and, after a period of cueing, we should be able to isolate the unique frame containing the number; its row and column number will then identify it absolutely. I suspect that we could find that rogue number in this way in 5 or 10 minutes at the most.

It’s worth commenting that, in facilitating that perception, we have traded speed against precision. We can find the rogue number quickly but we have no idea of its precise value. We only know that it is significantly different from its neighbours and, with well-selected colour attributes (via a look-up-table), we should also be able to estimate the magnitude and the sign of that difference.
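The search described above is easy to sketch in code. The following fragment uses NumPy and data I invented for illustration - a scaled-down ten-frame "movie" rather than 1000 full screens - and locates the planted renegade value by frame, row and column, just as a viewer scanning coloured frames would:

```python
import numpy as np

# Hypothetical data: values in 0..100 with one "renegade" value of
# 30,000 hidden somewhere. The frame/row/column layout mirrors the
# visualisation described in the text, at a much smaller scale.
rng = np.random.default_rng(0)
frames = rng.integers(0, 101, size=(10, 100, 100))  # 10 frames of 100x100 "pixels"
frames[7, 42, 13] = 30_000                          # plant the rogue value

# A look-up-table display trades precision for speed: a viewer would only
# see that one pixel's colour differs wildly. Locating it programmatically:
frame, row, col = np.unravel_index(frames.argmax(), frames.shape)
print(frame, row, col)  # → 7 42 13
```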

Here’s the power of computer visualisation at work. Scientists associated with projects at the National Center for Supercomputer Applications (4) have reported finding errors in their algorithms as a direct result of using visualisation-based analysis tools similar to the process outlined above.

The human brain has evolved elegant processing wetware. Vision is one of its masterpieces. Lou Doctor (5) has suggested that we can increase the speed of modern supercomputers by a factor of 100 before we begin to compare with the processing power of the human visual cortex.

During this Last Decade of the 20th century we can expect to see increases in processing speed that surpass even that factor. A researcher in Queensland (6) has outlined a reversible gate (gates, which implement logical functions like AND, OR and NOT, are the fundamental building blocks of digital computer systems) that operates on quantum or sub-atomic levels. It could theoretically use single photons as bit information carriers, be very small and be able to steal energy from its environment in the form of spare heat. As such it may be considered potentially non-polluting; however, as I have argued elsewhere (7), the major source of pollution within a computing device is a function of the quality and quantity of the information it processes and eventually outputs into the environment.

Researchers at IBM have taken these sub-atomic concepts further and produced a practical communication device that operates on quantum levels and uses the uncertainty principle to ensure security (8). What is particularly interesting about this system is that it is the first computational device to exceed the capabilities of the Turing Machine, a theoretical model proposed in 1936 by the English visionary mathematician Alan Turing.

Another route for the computer of the future is the bio-chip. I have spoken with one researcher who, referring to Drexler’s work at MIT (9), speculates that we should be able to produce supercomputer systems the size of large organic molecules by the year 2010. Like modern pharmaceuticals they will be capable of being targeted at specific psychophysical sites and functions. Once introduced into the body they will be capable of tackling illness: restoring immune deficiencies and attacking damage like carcinogenic tumours. They will be able to act as an adjunct to memory and, perhaps most exciting of all, they could conceivably aid with DNA replication. The implication here is that such systems may be able to stop, and perhaps even reverse, the aging process.

Parallel processing, where several interlinked processors work together on the same problem, is now a viable technology. Massive parallelism, and in particular the attempt to emulate human wetware by the use of neural networks, shows great promise. The value of even simple systems is well demonstrated by the neural net demo running on the NeXT Computer, which shows a seal learning to balance a stick on its nose by trial and error.

There are many other interesting and exciting examples of the new worlds being opened up by modern information processing (10). Within the industry the rate of change of change itself is increasing and even hardened professionals are expressing surprise and occasional concern at the pace.

Although with parallel computing, neural networks, quantum and bio-chips the measurement of performance in MIPS is less indicative, there seems to be a general expectation that sometime during this Last Decade we will achieve 1,000,000 MIPS - the ability to process one million million numbers per second (11). Using my earlier metaphor, a human would need 32,000 years merely to scan that many numbers.

32,000 years ago humans were hunter-gatherers. The foundations of the kind of social order that led to civilisation would not appear for over 20,000 years. Some of the earliest evidence of human art-making comes from this period, but the elegant paintings of hunting scenes on the cave walls at Lascaux in the south of France would have to wait another 15,000 years.

Bearing in mind my reservations above it’s nevertheless remarkable that we could soon have an information technology that will have the potential of condensing these phenomenal human time scales into just one second.

If we now apply computer graphics visualisation to this vast quantity of numbers, animating one million numbers per frame, we require 40,000 seconds, 666 minutes (a portent here that would amuse Mary Shelley (12)) or just over 11 hours. It’s a long movie but still a big improvement on 32,000 years. Nevertheless visualisation is here being pushed to its limits and it’s important that we improve human-machine communication in order to handle the ultra-high bandwidths that will be associated with the coming generations of supercomputers.
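The figures above scale directly from the earlier example; as a quick sanity check (a sketch in Python, using only the numbers from the text):

```python
# Scaling the earlier visualisation arithmetic to one million million numbers.
numbers = 10**12          # one million million values
per_frame = 1000 * 1000   # a 1000 x 1000 pixel frame
fps = 25                  # playback rate

frames = numbers // per_frame   # 1,000,000 frames
seconds = frames / fps          # 40,000 seconds
minutes = seconds / 60          # ~666 minutes
hours = seconds / 3600          # just over 11 hours
print(frames, seconds, minutes, hours)
```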

An important development during the 1980s has been more and more sophisticated long-distance interprocessor communication. A key to the development of Fourth Generation computing and to the success of personal computers was the networks that now cover the planet and extend into space like a fine global system of nerves. Though high-speed nets link inter-departmental and occasionally inter-institutional sites, the major national and international data highways are painfully slow. Steps are being taken to ameliorate this bottleneck by using more efficient and sometimes novel data compression methods (13) and higher-bandwidth optical transmission technology.

Networks open a system up to attack from without and have also brought problems, as anyone who has had to download reams of junk email at 300 baud in order to extract an important message will know to their cost. But of especial interest are the diseases that have infected the internet. As far back as 1975 science fiction author John Brunner suggested a data processing construct that would travel along networks changing or destroying their node contents. Brunner called it a Worm (14). The first real worms, together with their streetwise cousins the viruses, appeared in the 1980s. They have caused, and indeed are still causing, major damage. The concept of computer security has been modified and what is virtually a new technology has evolved - information system immunology.

The personal computer on your desk is not a closed system. Connected to a modem it becomes one processing node in a supersystem - the internet - that includes just about every other computer in use anywhere that humans have been, or sent their remotes.

Currently it’s unlikely that this supersystem has any concept of itself. Developments in artificial intelligence, neural networks and, in particular, the refined analysis of human cognitive activity are likely to trigger self-perception in such a system. Spencer-Brown (15) implies that any system beyond a certain complexity has to develop self-perception in order to resolve space/time paradoxes. A system that has no awareness of self, for example, is unlikely to discover the Uncertainty Principle.


Gateway to the Brain

Epileptics suffer violent and seemingly chaotic electrical brainstorms. Outwardly they lose control of their body and suffer distressing and often damaging muscular fits. One treatment for this condition involves severing the corpus callosum. Studies of patients who have had this treatment have given insight into aspects of brain functioning. In particular, it is relatively easy to induce sensory paradox using material that would appear trivial to a ‘normal’ person.

This is particularly the case when material mixes visual information handled by the right lobe of the brain with linguistic material which is processed by the left. This is because the corpus callosum is the highway that allows the two lobes of the brain to communicate with each other.

Studies of others who have suffered brain damage have confirmed that, under normal operation, specific parts of the brain are dedicated to particular functions. Nevertheless it has been demonstrated that if one part of the brain is damaged another area can often learn to do the work associated with the lost tissue.

Over the last two years I have been involved in helping physicists at the Swinburne Centre for Applied Neurosciences (16) to develop low-cost visualisation tools. One of their experimental rigs uses a modified bicycle helmet to hold 64 electrodes in place on the scalp. Electrical activity is sampled several hundred times each second at each of these sites and stored in digital form. Postprocessing reveals detail and demonstrates dynamic links between brain structure and cognitive processing.

It would appear that humans have evolved a twin processor. Each is capable of maintaining a basic level of independent functionality. However under normal circumstances each is dedicated to particular functions and, communicating with each other along the corpus callosum, a very high level of activity can be maintained. The corpus callosum is like a bus structure that allows rapid and high-bandwidth exchange between the two processors. It may well be that all high-level brain activity finds itself commuting along this route.

If we wish to bypass the sensory inputs (touch, sight, smell, taste and ESP) to establish high-bandwidth, direct communication with the brain, it would appear that the corpus callosum, rich in interbrain information traffic, may provide an excellent gateway.

Current theory suggests that we first monitor the electro-chemical activity across the corpus callosum and have an artificial intelligence (AI), possibly in the form of a neural network, try to figure out what’s going on. Once it thinks it has, it will be allowed to send some signals back. We will have begun to build a direct link between brain and computer. Not too long after this I would expect to see the first artificial super-intelligences (SIs) emerging.

It has been suggested that we should have computing systems with the physical capacity of the human brain within fifteen years. Connect something like that up to the corpus callosum and it shouldn’t be too long before we see a true artificial cognitive system emerging. Its links to the internet will give it a source of information and knowledge that will reinforce an ultra-fast learning curve. Very soon after it has evolved, machine intelligence will eclipse human faculties and the new SIs will begin to inhabit the Net. Many who are far more qualified than I have suggested that by facilitating this process we are, in fact, creating our evolutionary successors.

Graphics, the mainstay of current computer-human-interface (CHI) development, may seem redundant once direct communication is developed. I doubt it and suspect that visualisation will still provide a semiologically rich gateway, possibly via direct stimulation of the visual cortex. From about the age of six, when the child learns metric vision - the ability to estimate measure by just looking - sight begins to dominate other senses in the learning and communication process. The potential here - of directly controlling and manipulating vision, of creating virtual spaces (whole new and coherent universes of interest) is one of the most exciting challenges the artist has ever faced.


Art and Beyond

The implications for humanity of a self-aware superintelligent internet are mind-boggling, as are the implications for the visual arts. Most artists to date have used computers as tools, using prepackaged software that emulates traditional techniques for artefact production like painting, drawing, photo-retouching and so forth. Oil paint is a simple thing and its lack of intelligence makes it easy to simulate. Despite their limitations these graphic arts systems have proved of value: they are non-toxic, or significantly less toxic than traditional media; they can significantly enhance productivity; and, despite the often strong signature of the particular system in use, they have proved the viability of this new meta-medium in handling a diversity of styles and methods, ranging from the formal and often geometrical languages of structuralism to the free association of surrealism and abstract expressionism.

A few artists have pioneered new ground and are helping to define the unique attributes of this information-transaction-based meta-medium. Two immediate potentials seem to offer themselves. The first is involved with establishing an interaction or intercourse between a human and an AI. The second is interaction between two, or more, humans mediated by an AI. Several artists including the Melbourne-based Simon Veitch have begun to investigate the former. Others, most notably Myron Krueger, have pioneered the latter.

Veitch (17) is an artist who has developed an interactive system he calls 3-Dis. Using two or more small monochrome video cameras, volumes of space as small as a few cubic centimetres or as large as a whole room can be identified and tagged. Up to 96 of these volumes can be monitored simultaneously. When the contents of a volume change, the computer can convert this into a command that triggers any number of events - for example, the control of a MIDI channel on a sound synthesiser or a remote surveillance logger.
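Without access to Veitch's implementation, the core idea can be sketched as frame differencing over tagged regions of the camera image; everything below (function names, threshold, the synthetic frames) is my own illustration, not a description of 3-Dis itself:

```python
import numpy as np

def region_changed(prev_frame, frame, region, threshold=10.0):
    """Return True if the mean pixel difference inside `region` exceeds threshold."""
    (r0, r1), (c0, c1) = region
    diff = np.abs(frame[r0:r1, c0:c1].astype(float) -
                  prev_frame[r0:r1, c0:c1].astype(float))
    return diff.mean() > threshold

# Two synthetic 100x100 greyscale frames; something "enters" the tagged volume.
prev = np.zeros((100, 100), dtype=np.uint8)
cur = prev.copy()
cur[20:40, 20:40] = 255             # change inside the watched region
region = ((20, 40), (20, 40))       # one of up to 96 tagged volumes
if region_changed(prev, cur, region):
    print("trigger event")          # e.g. send a MIDI note, log an entry
```

A real system would run such a check per region on every incoming frame pair, mapping each region to its own output command.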

The essential simplicity of the concept belies its application and usefulness. Although there are similarities with the work of Canadian artist David Rokeby (18) there are subtle and essential differences, not least the ability to independently track the behaviour of a large number of people or events simultaneously. Now that the system has been proven in arts events (where groups of dancers created their own soundtracks) and installations (where fountains followed visitors about as they strolled around gardens at the 1988 Brisbane World Expo), the security people are getting interested and promise Veitch a new source of support.

Krueger's (19) installation at SIGGRAPH 85 in San Francisco used a single monochrome camera and clever edge-detection software to allow individuals and groups to interact with projected images of themselves together with computer-generated artefacts like little green gremlins and b-spline curves. What was particularly evocative about this exhibit was the way it encouraged people to play together. On several occasions I discovered salespeople in suits interacting with the most unlikely partners - students in sawn-off jeans amongst them - in a completely uninhibited, joyful and humorous way.

Encounter group therapists have been trying to create this degree of relaxed and intimate behaviour for decades and yet here a relatively modest computer program acted as a catalyst for close spontaneous human interaction. Nevertheless, four years later at SIGGRAPH 89 critics and gallery curators on the panel Computer Art - an Oxymoron (20) still felt confident to reiterate their belief that computer art is cold, intimidating and heartless.

Those speakers, like many in the art mainstream, were expressing their problems in getting a suitable handle on this kind of work. Their complaints mainly concern the lack of tangibility of the artwork - that it can’t be framed, revered or monetarised. Computer art is not concerned with the production of artefact. On the contrary, it is exactly the inverse of those attributes the mainstream miss that defines this area’s uniqueness and potential. Computer art, like Dada and many of the works of the Art & Language and other Conceptual groups, is essentially an ephemeral and virtual artform concerned with communication and interaction.

It is my opinion that practitioners shouldn’t waste their time trying to convince the arts mainstream of the value of their work. Our involvement in SIGGRAPH (1990 will mark the 10th anniversary of the SIGGRAPH Art Show), Ars Electronica, FISEA and other events constitutes the evolution of an international and interdisciplinary Salon des Refusés. Putting energy into consolidating this movement is infinitely more valuable than wasting time trying to convert the high-priests and culture-vultures of the establishment who, in any case, are mortified by the threat that this new and egalitarian artform poses to their quasi-religious and usurious value systems.

They will come over in the end never fear: just as soon as they find out how to get an edge. Look at what they did to the Dadaist’s dream of undermining the academy - the establishment now drinks champagne from Duchamp's Urinal.


Twenty First Century Alchemists

If today’s artists can achieve so much with the limited computer technology that’s currently available, we can look forward to a renaissance as they contribute to the development of the brain interface and a fully interactive internet with its resident SIs. By entering the internet creative artists bring knowledge of a host of human experiences to the system, the expression of both intellect and emotion and, not least, the value of the celebration of existence. They will also bring a streetwise consciousness - an ability to survive, both within and from, whatever is at hand. And, most often, their aims are benevolent - a quality they would do well to pass on to the new intelligence.

A tightly coupled human-machine symbiosis should lead to a close creative collaboration between human and machine. Eventually it’s likely that we will see pure machine art - the product of what is essentially an alien intelligence - for the first time in human history.

The potentials offered by interaction with these artificial and, once they pursue an independent evolutionary path, alien intelligences will open up exciting new possibilities for the creative artist. Already two young Australian artists take their inspiration from this area.

The CyberDada events of Troy Innocent and Dale Nason are an anarchistic, acid-house mix of free-form sound, scratch video, analog and digital screen displays, slides and live performance. Their equipment looks like it was built with a sledgehammer - a sound deck featuring a raw and mangled cassette transport linked (functionally?) to a caseless Apple II, all hooked together with recycled wires and bits of dead circuit board. Although both are young and their work still immature, it has attracted the attention of the alternative arts network, and the two recently received financial support to complete a videographic version of their CyberDada Manifesto (fig 1), along with facility support from leading professional digital video studios like The Video Paint Brush Company.

These two are amongst the first generation of kids who have had computers around for most of their lives and who are not, in the least, fazed by either their existence or their potential. Although both are used to microprocessors, until recently they had little opportunity to learn about the more mainstream activities in computer science, graphics and CHI, and they have been surprised to discover that many of the concepts that excite them are actually being developed ‘for real’ by scientists and engineers. Their main source of inspiration has been science fiction and, in particular, the work of the cyberpunk writers (21).

William Gibson is the author of the canonical texts of cyberpunk - the CyberSpace trilogy: Neuromancer, Count Zero and Mona Lisa Overdrive. CyberSpace is the virtual space within the matrix - the internet - reached by jacking in via a socket or a ‘trode net. When Gibson wrote Neuromancer (in 1982 on a manual typewriter) he was unaware that NASA and others were working on the real thing. He’s also concerned about the attention his work has brought him from the technical world ... “it never occurred to me that it would be possible for anyone to read these books and ignore the levels of irony”. (22)


Brave New Worlds To Go

The goals of science don’t seem to have changed much in millennia. The Chinese Taoist alchemists of the Han Dynasty sought the elixir of youth over 2000 years ago. They and their colleagues down the ages have also attempted the creation of life, homunculus and succubus, from inanimate matter. Now with biotechnology and quantum communications there seems to be some reasonable expectation that we may be getting close. (Yet another claim that alchemists of all ages have made - particularly to their patrons.)

Whilst some, taking Mary Shelley's lead, are concerned about the religious, moral and ethical implications of such speculation, others rejoice in the fact that we may at last be able to create an intelligence that is capable of understanding the fragilities of the tenuous ecosystem of planet Earth and may be able to help us remedy our past errors. My interest in getting artists involved in the process of developing these new intelligences stems from the whole set of values they will bring, particularly those that may help ensure that the new technology is benevolent.

Many questions must wait to be answered. By plugging into the internet will humans bring the requisite self-awareness to the system, or will we instead need to create an autonomous self-awareness? Will the Net be dependent on, or independent of, its human creators?

Such questions aside, the potential for the creative arts is promising. We may, at last, have broken the stranglehold of the gilded frame and bypassed the parasitic high-priests and culture-vultures to establish an egalitarian art of and for and by the people. Not the constrained and hierarchical social realism of totalitarianism but a heterarchical and streetwise cyber-graffiti, an art from the grassroots of democracy that, like urban spraycan walls, will impinge on and possibly integrate all our diverse consciousnesses.


References

  1. Donald Michie and Rory Johnston, “The Creative Computer”, Viking, London 1984

  2. Alan Kay has suggested the term Meta-medium to describe the flexibility of digital simulation. Not only can we program computers to emulate old media - like paint or typography - but we can, and are, discovering new and unique media potentials that this technology offers.

  3. Paul Brown, “Art at the Computer Human Interface”, Artlink, Vol 9 No 4 Summer pp. 64-65 Adelaide 1989-90. (363 The Esplanade, Henley Beach, South Australia 5022).

  4. Matthew Arrott and Stephen Fangmeier, speaking at SIGGRAPH 88, pointed out that their simulation of the interaction of the solar wind with the ionosphere of Venus showed peculiar visual artefacts that they later identified as errors in calculation. They speculated that without visualisation such errors could have been overlooked.

  5. Lou Doctor (President of Raster Technologies) speaking on the videotape “Visualisation: State of the Art”, ACM/SIGGRAPH Video Review Vol 30, New York 1988.

  6. “Reversible computers take no energy to run”, New Scientist, 10 June 1989 p 14. A report on the work of GJ Milburn at the University of Queensland.

  7. Paul Brown, “Art and the Information Revolution”, Leonardo Supplemental Issue, Computer Art in Context - the SIGGRAPH 89 Art Show Catalogue pp. 63-65, The International Society for Art Science and Technology, Pergamon Oxford 1989.

  8. “Quantum communication thwarts eavesdroppers”, New Scientist, 9 December 1989 No 1694 p. 13. A report on the work of Charles Bennett and John Smolin at IBM’s Thomas J Watson Research Lab.

  9. K Eric Drexler, “Engines of Creation”, New York, Anchor Doubleday 1986

  10. Heinz R Pagels, “The Dreams of Reason”, New York, Simon & Schuster 1988 (Bantam 1989).

  11. “Data Parallel Computers From MasPar Top Out At 30,000 Mips And 1,250 Mflops”, Klein Newsletter Vol 11 No 23 p. 3 Dec 8 1989. MasPar offers this 16,384 processor system for just US$25 per Mip.

  12. Mary Shelley, “Frankenstein”. This 19th century author was one of many commentators who have been critical of humankind’s efforts to create life. The current debate over the scientific use of human embryos demonstrates many similar concerns. As yet the critics have not directed their attention to the issues raised by digital simulation.

  13. Michael Barnsley, the author of “Fractals Everywhere”, Academic Press, 1988, has developed one of the more novel data compression methods. A raster graphics image is scanned to extract its fractal initiators which are then transmitted. The receiver then reconstructs the image. Barnsley claims that an image can be reduced to less than 1% of the normal storage, ie it can be transmitted at least 100 times faster.

  14. John Brunner, “The Shockwave Rider”, JM Dent & Sons, London, 1975. Brunner, along with the more poetic sci-fi author Samuel Delany, has been a major influence on the development of the streetwise cyberpunk style (see 21 below).

  15. G Spencer-Brown, “The Laws of Form”, George Allen and Unwin London 1969.

  16. SCAN - the Swinburne Centre for Applied Neurosciences, PO Box 218, Hawthorn, Australia 3122. Director Dr Richard Silberstein and his colleagues are currently preparing to publish their initial results on monitoring cognitive activity. The speculative conclusions that I draw above have been inspired by their scientific rigour but owe a generous debt to my interest in sci-fi.

  17. Simon Veitch may be contacted via: Perceptive Systems p/l, PO Box 1008, St Kilda South VIC 3182, Australia.

  18. David Rokeby is featured in “ACM/SIGGRAPH 88 Art Show: Catalog of Interactive Installations and Videotape”, ACM/SIGGRAPH New York 1988. See also “Experiential Computer Art”, ACM/SIGGRAPH 89 Course Notes No 7. ACM/SIGGRAPH New York 1989.

  19. Myron Krueger is featured in the previous reference. A pioneer artist of virtual spaces, his book “Artificial Reality”, Addison Wesley, 1983/91, introduced this term.

  20. “Computer Art - an Oxymoron - Views from the Mainstream”, A Panel at ACM/SIGGRAPH 89. (Unpublished)

  21. Bruce Sterling (editor), “Mirrorshades - the cyberpunk anthology”, Paladin/Grafton London 1988. A good introduction to the work of the cyberpunk sci-fi authors by one of their members. Essential reading includes: the CyberSpace trilogy by William Gibson - “Neuromancer” (Gollancz London 1984), “Count Zero” (Gollancz London 1986) and “Mona Lisa Overdrive” (Gollancz London 1988); Rudy Rucker’s “Wetware” (Avon 1988); and Greg Bear’s “Blood Music” (Arbor House 1985).

  22. William Gibson quoted by Richard Guilliatt in “SF and the tales of a new romancer”, Melbourne Sunday Herald, 17 December 1989.

