Towards an Unknown State:
Interaction, Evolution, and Emergence in Recent Art
Breeding an “Evolutionary”
by Dan Collins
…we can no longer accept causal explanations. We must examine phenomena as products of a game of chance, of a play of coincidences…
--Vilém Flusser, from “Next Love in the Electronic Age” (1991)
Learning is not a process of accumulation of representations of the environment; it is a continuous process of transformation of behavior…
--Humberto Maturana (1980)
Art is not the most precious manifestation of life. Art has not the celestial and universal value that people like to attribute to it. Life is far more interesting.
--Tristan Tzara, “Lecture on Dada” (1922)
INTRODUCTION
In August 2000, researchers at Brandeis University made headlines when they announced the development of a computerized system that could automatically generate a set of tiny robots—very nearly without human intervention. “Robots Beget More Robots?” asked the New York Times on its front page. Dubbed the Golem project (Genetically Organized Lifelike Electro Mechanics) by its creators, the system marked the first time that robots had been robotically designed and robotically fabricated. While machines making machines is interesting in and of itself, the project went one step further: the robot offspring were “bred” for particular tasks. Computer scientists Jordan Pollack and Hod Lipson had developed a set of artificial life algorithms—instruction sets—that allowed them to “evolve” a collection of “physical locomoting machines” capable of goal-oriented behavior. (footnote the rest of the story)
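To give a feel for what such an “artificial life algorithm” does, here is a minimal sketch of the generic evolve, evaluate, select cycle, in Python. The genome encoding and fitness function below are invented for illustration; they are not the Golem project's actual methods.

```python
import random

# Illustrative evolutionary loop, NOT the actual Golem algorithm.
# Each "genome" is a list of numbers standing in for a machine design;
# the fitness function is an invented stand-in for the simulated
# locomotion test used to "breed" designs for a task.

def fitness(genome):
    # Hypothetical score: reward genomes whose parts cooperate.
    return sum(genome) - abs(genome[0] - genome[-1])

def mutate(genome, rate=0.1):
    # Randomly perturb some genes (variation).
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                    # selection
    offspring = [mutate(random.choice(survivors))  # reproduction with mutation
                 for _ in range(10)]
    population = survivors + offspring             # next generation

print("best fitness:", fitness(max(population, key=fitness)))
```

The point of the sketch is only the shape of the process: the designer specifies the test, not the designs, and the designs that emerge are discovered rather than drawn.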
The Golem project is just one example of a whole new category of computer-based creative research that seeks to mimic--somewhat ironically, given its dependence on machines--the evolutionary processes normally associated with the natural world. The new research is characterized by high levels of interactivity; dynamic change over time, including evolution, mutation, and emergent behaviors; and a more fluid and active involvement on the part of the user.
The research suggests a new and evolving role for artists and designers working in computer-aided design and interactive media, as well as an expanded definition of the user/audience. Instead of exerting total control over process and product, in which the artist has a “direct” relationship to every aspect of the creative process, the task of the artist/scientist becomes one of interacting effectively with machine-based systems in ways that supercharge the investigative process. Other individuals—research collaborators and audience members alike—are called upon to “interact” with the work and further the dialogue.
How can we begin to understand the wealth of processes and the shift in philosophical perspective such an approach to artmaking represents? The unpredictable nature of the outcomes provides an ideational basis for art making that is less deterministic, less bound to inherited style and method, less totalizing in its aesthetic vision (footnote to historical precedents), and, perhaps, less about the ego of the individual artist. In addition to the mastery of materials and the harnessing of imaginative powers that we expect of the professional artist, our new breed of artist—call her an “evolutionary”—is equally adept at developing new algorithms, envisioning useful and beautiful interfaces, and managing and collaborating with machines exhibiting non-deterministic and emergent behaviors. Like a horticulturalist who optimizes growing conditions for particular species but is alert to the potential beauty of mutations in evolutionary strains, the evolutionary works to prepare and optimize the conditions for conceptually engaging and aesthetic outcomes. To do this, this new breed of artist must have a fuller understanding of interactivity, a healthy appreciation of evolutionary theory, and a gift for setting emergent behavior into motion.
We begin this admittedly utopian vision of the artist by trying to understand and extend what we mean by the concept of “interaction.”
------
Most computer-based experiences claiming “interactivity” are a sham. Ask any twelve-year-old who has exhausted the “choices” in their state-of-the-art “interactive” computer game. The choices offered are not significant choices. Most games, even games of great complexity, are finite and depend upon a user accessing predefined routines stored in computer memory.
The “game” is not limited to the arcade, of course: the links and rollovers that clog the margins of our computer screens—and increasingly our television sets (footnote TV Guide Interactive)—add to the illusion of infinite possibility. Ironically, as we become acculturated to a “full menu of choices,” the options become less distinct, less meaningful. Try it. You can “Build Your Lexus” on the Lexus website—just click on any of the Lexus 2002 product line and “interactively” select exterior color, interior fabrics, and accessories.1 Never mind that nearly identical options can be found at Saturn, Ford, GM, and Daimler-Benz.2
The promise of “interactive TV” has also been receiving a lot of attention of late—particularly in Europe, where a full range of interactive programming has been available for several years.
http://www.itvt.com/sky.html (see text in “notes”)
While interactive TV represents an impressive wedding of the broadcast model (one-to-many, plus passive viewing) with the experience of the Internet (one-to-many, many-to-many, and many-to-one, plus active participation), we are still a long way from high-level interactive experiences, such as generating new content on the fly in collaboration with another person or a machine. (see notes for diagrams of network topology)
Stephen Wilson writes in “Information Arts” (p. 344):
The inclusion of choice structures does not automatically indicate a new respect for the user’s autonomy, intelligence, or call out significant psychic participation. In fact, some analysts suggest that much interactive media is really a cynical manipulation of the user, who is seduced by a semblance of choice.3
Wilson argues further that the “nature of the interactive structure is critical” and requires a “deeper involvement by viewers.”
But what is an “interactive structure”? Is there a consensus on what constitutes interactivity? Is there a definition, a set of rules, or a handbook for designing effective interactive experiences? What distinguishes systems that provide a sense of user autonomy and control? Can the new sciences of “computational evolution” and “emergence” help us to transform computer-based systems from simply attractive data management and navigation tools into collaborative partners for creation, research, and learning?
Educational technologist Ellen Wagner defines interaction as "… reciprocal events that require at least two objects and two actions. Interactions occur when these objects and events mutually influence one another." (Wagner, 1994).
High levels of “interactivity” are achieved in human/machine couplings that enable reciprocal and mutually transforming activity. Interactivity—particularly the type that harnesses emergent forms of behavior—requires that both the user and the machine be engaged in open-ended cycles of productive feedback and exchange. Beyond simply providing an on/off switch or a menu of options leading to “canned” content, in an ideal state, users should be able to interact with the system in ways that produce new information.
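As a toy illustration of this kind of reciprocal, mutually transforming loop, consider two agents, a “machine” and a “user,” each of which updates its behavior in response to the other's last action. The scenario and update rules below are invented for this sketch; they are not drawn from Wagner or from any particular system.

```python
import random

# Toy sketch of mutual influence: each side nudges its behavior
# toward the other's observed behavior, turn by turn.
# All starting values and learning rates are invented for illustration.

machine_bias = 0.5   # probability the machine suggests option A
user_bias = 0.2      # probability the user picks option A unprompted

for turn in range(10):
    suggestion = random.random() < machine_bias
    # The suggestion raises the chance the user picks A (capped at 1.0).
    choice = random.random() < min(1.0, user_bias + (0.3 if suggestion else 0.0))
    # Reciprocity: the machine adapts to the user's choices,
    # and the user habituates to the machine's suggestions.
    machine_bias += 0.10 * ((1.0 if choice else 0.0) - machine_bias)
    user_bias += 0.05 * ((1.0 if suggestion else 0.0) - user_bias)
    print(f"turn {turn}: suggestion={suggestion}, choice={choice}, "
          f"machine_bias={machine_bias:.2f}, user_bias={user_bias:.2f}")
```

Even in this trivial form, neither agent's behavior at the end of the run is stored anywhere in advance; it is produced by the exchange itself, which is the distinction the passage above draws against “canned” content.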
While the demand for "interactivity" is a relatively recent phenomenon in the arts, the culture at large has long been obsessed with the idea of machines that learn—and can in turn interact with a user. From media spectacles such as Deep Blue's defeat of World Chess Champion Garry Kasparov in May of 1997 to quieter revolutions in teaching autistic children (see NY Times article), computers that master the behaviors of their users are beginning to find a place in contemporary society.
There is more than a hint of narcissism in our desire to be personally reflected in the machines we make. Our desire for “interaction” can be understood as a kind of “Pygmalion complex”(*) in which the world is animated according to our own designs and desires. This has both negative and positive aspects—negative in the sense of a self-absorption in which we see ourselves reflected in the world around us; positive in the sense that we have within us the energy to transform and give life to inanimate material through our powers of invention. In any event, there is a trend away from "dumb" tools and toward "intelligent" machines that respond and learn by interacting with their owner/operators. While our cooking appliances and VCRs are already "programmable" to reflect individual tastes, the idea of agents and wizards that know our “personal tastes and preferences” represents a rapidly growing trend. (See “Interface Culture,” pp. 189-190.) (In contrast to the “push” of direct marketing and junk mail, we need the “pull” of just-in-time delivery of information and goods on an “as needed” basis.)
Few art schools provide courses for producing, let alone interpreting or critiquing, "interactive” or “emergent” artworks. Though the borderline between the fine arts and other cultural practices (such as science, technology, and entertainment) is becoming increasingly blurred, it is clear that the development of "interactive art" is largely dependent on "non-art" traditions. Interactive and emergent art practices, at least from a technical perspective, have more in common with computer gaming, combat simulation, and medical diagnostics than with mainstream art history or criticism. Theorizing this territory is less a matter of mining, say, the Art Index, and more a matter of conducting systematic research into areas such as communications theory, human-computer interaction, educational technology, and cognitive science. With this in mind, it may be helpful to briefly review how other disciplines are looking at the issues surrounding interaction.
Historical and Theoretical Context
A brief look at the history of communication theory shows an evolution from "one-way" systems of communication to "multi-directional" systems. C. E. Shannon, the inventor of “information theory,” developed a mathematical theory of communication in the 1940s (Shannon, 1948) that revolutionized the way we think about information transfer. He introduced the term “bit”—short for “binary digit,” a coinage he credited to his colleague John W. Tukey—as a fundamental particle, an irreducible unit of measure that could be used to represent virtually any kind of information, be it smoke signals, music videos, or satellite images. Initially, Shannon posited a highly linear engineering model of information transfer involving the one-way transmission of information from a source to a destination using a transmitter, a signal, and a receiver. Later theorists built upon Shannon's model to include the concepts of interactivity and feedback.
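For readers who want the formula behind the “bit”: Shannon's 1948 paper measures the average information content of a source, in bits, as the entropy

H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i

where p_i is the probability of the i-th possible message. A fair coin toss (two outcomes, each with probability 1/2), for example, carries exactly one bit.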
"The feedback loop is perhaps the simplest representation of the relationships between input and output elements in a system. One element or agent (the 'regulator' or control) sends information into the system, other agents act based upon their reception/perception of this information, and the results of these actions go back to the first agent. It then modifies its subsequent information output based on this response, to promote more of this action (positive feedback), or less or different action (negative feedback).
System components (agents or subsystems) are usually both regulators and regulated, and feedback loops are often multiple and intersecting (Clayton, 1996; Batra, 1990)." (Morgan, 1999)
Feedback is essential for a system to maintain itself over the course of time. Negative feedback leads to adaptive, or goal-seeking, behavior such as sustaining the same level, temperature, concentration, speed, or direction in a given system. In some cases the goal is self-determined and is preserved in the face of evolution: the system has produced its own purpose (to maintain, for example, the composition of the air or the oceans in the ecosystem or the concentration of glucose in the blood). In other cases humankind has determined the goals of the machines (automats and servomechanisms). In a negative loop every variation toward the positive triggers a correction toward the negative, and vice versa. There is tight control; the system oscillates around an ideal equilibrium that it never attains. A thermostat or a water tank equipped with a float is a simple example of regulation by negative feedback.
http://pespmc1.vub.ac.be/FEEDBACK.html
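To make the thermostat example concrete, here is a minimal negative-feedback loop in Python. The setpoint, controller gain, and heat-loss constants are illustrative assumptions, not values from the text above.

```python
# Minimal sketch of regulation by negative feedback: a "thermostat"
# whose correction always opposes the deviation from its goal.
# All constants below are illustrative assumptions.

SETPOINT = 20.0   # desired temperature (degrees C)
GAIN = 0.5        # strength of the corrective response
LEAK = 0.1        # fraction of the indoor/outdoor gap lost per step
AMBIENT = 10.0    # outside temperature (degrees C)

temperature = 15.0
for step in range(20):
    error = SETPOINT - temperature   # variation from the goal
    heating = GAIN * error           # the correction opposes it
    temperature += heating - LEAK * (temperature - AMBIENT)
    print(f"step {step:2d}: temperature = {temperature:.2f}")
```

Run long enough, the loop settles near (but, because of the constant leak, never exactly at) its goal, which matches the description above of a system hovering around an equilibrium it never attains.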
While the concept of feedback provides a concise description of how inputs and outputs interact in a closed system, the science of cybernetics uses the circularity of feedback mechanisms as a key to understanding organization, communication, and control in systems of all kinds.
The term “cybernetics” was coined in 1947 by the mathematician Norbert Wiener (see http://www.well.com/user/mmcadams/wiener.html), who used it to name a discipline apart from, but touching upon, such established disciplines as electrical engineering, mathematics, biology, neurophysiology, anthropology, and psychology. Wiener and his colleagues, Arturo Rosenblueth and Julian Bigelow, needed a new word to refer to their new concept; they adapted a Greek word meaning "steersman" to invoke the rich interaction of goals, predictions, actions, feedback and response in systems of all kinds (the term "governor" derives from the same root) [Wiener 1948]. Early applications in the control of physical systems (aiming artillery, designing electrical circuits and maneuvering simple robots) clarified the fundamental roles of these concepts in engineering; but the relevance to social systems and the softer sciences was also clear from the start. <http://pangaro.com/published/cyber-macmillan.html>
Cybernetics grew out of two strands: Shannon's information theory, which, as mentioned above, was designed to optimize the transmission of information through communication channels, and the feedback concept used in engineering control systems. As cybernetics has evolved, it has placed increasing emphasis on how observers construct models of the systems with which they interact, and on how those systems maintain, adapt, and self-organize (*). Such circularity or self-reference makes it possible to build precise, scientific models of purposeful activity, that is, behavior that is oriented toward a goal or preferred condition. In that sense, cybernetics proposes a revolution with respect to the linear, mechanistic models of traditional Newtonian science. In classical science, every process is determined solely by its cause, that is, a factor residing in the past. While classical science is based on understanding cause/effect relationships, cybernetic science seeks to understand the behavior of living organisms in some future, unknown state--a state of being that does not as yet exist and, therefore, cannot be said to have a relationship to a definable “cause.”
Cybernetics has discovered that teleonomy (or finality) and causality can be reconciled by using non-linear, circular mechanisms, where the cause equals the effect. The simplest example of such a circular mechanism is feedback. The simplest application of negative feedback for self-maintenance is homeostasis. The non-linear interaction between the homeostatic or goal-directed system and its environment results in a relation of control of the system over the perturbations coming from the environment.
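One compact way to formalize this circularity (a schematic of my own, not a formula from the cybernetics literature quoted above) is as a fixed point of the coupled loop between system and environment:

x_{t+1} = f(x_t, u_t, d_t), \qquad u_t = g(x_t), \qquad x^{*} = f\left(x^{*}, g(x^{*}), d\right)

Here x_t is the system's state, d_t a perturbation arriving from the environment, and u_t = g(x_t) the feedback action. At the homeostatic equilibrium x^{*}, "cause equals effect" in exactly the sense described above: the state produces the action that reproduces the state, holding the perturbation d in check.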