Data Physics – An environment for Adaptive Agents

or

The Blind Watchmaker with the Invisible Hand

Dissertation project September 2000

Cefn Hoile

MSc Evolutionary and Adaptive Systems

“We must…not be discouraged by the difficulty of interpreting life by the ordinary laws of physics. For that is just what is to be expected from the knowledge we have gained of the structure of living matter.” [Schrodinger 1944]

Abstract

The internet and business intranets represent a forum for virtual interactions between users, their software and the computational resources of the network. The promise of the multi-agent system paradigm was to allow users to assign goals to autonomous programs which could engage in virtual interactions to achieve these goals on their behalf. However, research in this area has been dominated by the top-down, centralised and explicit programming, planning and representation techniques of classical AI. This has led to design difficulties. The complex dynamics of an open network are difficult for human designers to anticipate, and the solutions they build are fragile, over-engineered and inextensible. It is often unclear in what way the agent paradigm has assisted in the delivery of a solution.

This artificial life project is a novel synthesis of ideas from computer science, agoric systems, ecology, ethology, biochemistry, thermodynamics, economics, and sociology leading to a template for a bottom-up, fully distributed, adaptable and scaleable ‘physics’ providing a basis for the open-ended evolution of digital organisms.

It is argued that an open-ended approach to agent evolution can only be achieved by imposing a physics on the agent world, determining the interactions of low level data components, rather than imposing biological abstractions which enforce high-level structure. Within the “Data Physics”, structures can maintain themselves only through meeting users’ requirements, but are otherwise free to explore and exploit a broad range of the strategies seen in natural multi-agent systems in order to achieve survival.

In this approach, top-down, centralised or explicit agent specification and planning are rejected. Instead, local interactions of data objects imply locally defined fitness functions over data complexes. This guides a distributed selection algorithm which credits agents for participation in service delivery to the network user.

It is argued that the features of this architecture, in concert with a human society of users and developers, could allow the open-ended niche specialisation and adaptation required to optimise the productivity of the computational resources which are now available on the network.

An implementation of such an architecture is presented, experiments detailed, and future research challenges identified.


“A major problem in making effective use of computers is dealing with complexity” [Miller and Drexler 1988a]

“The sequence of instructions which tells the computer how to solve a particular problem is called a program. The program tells the machine what to do, step by step, including all decisions which are to be made. It is apparent from this that the computer does not plan for itself, but that all planning must be done in advance. The growth of the computer industry created the need for trained personnel who do nothing but prepare the programs, or sequences of instructions, which direct the computer. The preparation of the list of instructions to the computer is called programming, and the personnel who perform this function are called programmers.”[1] [Bartee 1977]

“Anything produced by a computer must (barring hardware faults) have been generated by the computational principles built/programmed into it.” [Boden 1996a]

“How can computers be programmed so that problem-solving capabilities are built up by specifying ‘what is to be done’ rather than ‘how to do it’?” [Holland 1975]

“Virtual life is out there, waiting for us to provide environments in which it may evolve.” [Ray 1996]

Introduction

The computer has radically changed the way in which we interact with each other, economically, socially and intellectually. Object-oriented design, optimised compilation algorithms and advances in semiconductor technology have all contributed to this success. The common element in all these advances is the participation of the human mind in the design process. However, the human design process represents a bottleneck in the further development of the computing paradigm.

Conventional design is carried out by first agreeing a specification, which encapsulates the objectives which the design must meet and the range of conditions under which the design should function. If the design process is successful, an explicit and human-comprehensible solution results. Comprehensibility is applauded, since it reassures investors, users and future designers that the system will indeed perform as promised. However, as computational systems have become more complex, the limitations of this process of rational human design have been exposed [Hoile and Tateson 2000] [Maes 1991] [Miller and Drexler 1988a].

Furthermore, there may in fact be advantages to an increase in complexity. The eventual demise of Moore’s Law demands the proper co-ordination of multiple processors to meet our expanding computational needs. However, the exploitation of parallel computers is far from straightforward: “[g]etting messages between processors and making sure that processors are fully occupied most of the time is far from easy… and difficult to automate” [Bossomaier and Green 1998].

The multiplicity, responsiveness, originality and complexity of solutions required to shape a productive network ecosystem are beyond the scope of human design. Artificial limitations may be imposed to minimise complexity and maintain control, but the solutions which could exploit that complexity will then remain unexplored.

When the dynamics of a system are beyond human comprehension, top-down control of that system’s behaviour will certainly be beyond the reach of human design. The lesson from nature is that its own agent ‘designs’ – organisms and collectives – thrive in the context of complex ecosystems and economies, and do not suffer from the limitations of a designer’s comprehension. In spite of the anthropomorphisms of commentators (for example ‘Mother Nature’, the ‘Blind Watchmaker’ or the ‘Invisible Hand’), structures can adapt to exploit the dynamics of the system without any explicit comprehension of those dynamics. Within such systems, well-adapted, self-maintaining, embodied and embedded complexes made up from primitive elements can sustain themselves and adapt to prevailing conditions.

It is proposed to identify the “enabling infrastructure” [Mitleton-Kelly 2000] which can support this phenomenon within a digital environment. The study of natural optimising systems will inform the design of a ‘Data Physics’, an environment in which complexes of data – agents – may maintain themselves through the satisfaction of human-specified ends.
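As a purely illustrative sketch, and not the implementation presented in later sections, the intuition can be caricatured in a few lines of Python: data components carry a survival resource which is credited when the complex they participate in satisfies a user request, and which otherwise decays, so that only complexes meeting human-specified ends persist. All names and parameters below (DataComponent, Complex, DECAY, REWARD) are hypothetical choices made for exposition only.

import random

# Illustrative caricature only: complexes of data components persist while
# they earn credit by satisfying user requests; otherwise they decay away.

DECAY = 0.1      # resource lost by every component at each time step
REWARD = 1.0     # resource credited to a complex that serves a user request

class DataComponent:
    def __init__(self, resource=1.0):
        self.resource = resource

class Complex:
    """A self-maintaining aggregate of data components."""
    def __init__(self, components):
        self.components = components

    def serves(self, request):
        # Stand-in for 'meeting a user's requirements'; here a coin flip.
        return random.random() < 0.5

    def step(self, request):
        # Local credit assignment: reward is shared among participating
        # components, while every component pays a maintenance (decay) cost.
        if self.components and self.serves(request):
            share = REWARD / len(self.components)
            for c in self.components:
                c.resource += share
        for c in self.components:
            c.resource -= DECAY
        # Components that exhaust their resource dissolve back into the pool.
        self.components = [c for c in self.components if c.resource > 0]
        return bool(self.components)   # the complex survives only while populated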

“[I]mprovements in machinery…have been made by the ingenuity of…those who are called philosophers or men of speculation, whose trade it is not to do any thing but to observe every thing and who, upon that account are often capable of combining together the powers of the most distant and dissimilar objects.” [Smith 1776]

Methodology

This paper draws on observations from many disciplines to propose an architecture which could solve some of the design problems of multi-agent systems (MAS), avoiding some of the simplifying assumptions which devalue the results of other MAS experiments. It is hoped that a synthesis of features identified within these diverse areas of study can provide a network environment for an ecosystem of adaptive agents or virtual organisms, which collaborate to deliver services to users who specify (following Holland [1975]) ‘what is to be done’ rather than ‘how to do it’.

There is a long history of exchange between evolution and the social sciences, and between theories of computation and of self-replication. Malthus’ 1798 “An Essay on the Principle of Population as it Affects the Future Improvement of Society” sought to explain why population did not increase in geometric progression as a simple mathematical treatment might suggest. Amongst other influences, Malthus proposed a limitation of food stocks which would check this exponential growth. This treatment of resource constraints on reproductive populations inspired Darwin’s principle of natural selection. Darwinism in turn inspired social Darwinism and sociobiology [Cohen 1994] and strongly informed the development of Hayek’s concept of “spontaneous order” [Hodgson 1994]. Two common themes have contributed to the fruitful exchange between these domains: firstly, the exploration of co-operative and competitive strategies under resource constraints; secondly, niche specialisation or the division of labour. These two themes will be treated in later sections.

Computation and self-replication also have a long history together. John von Neumann, who is credited with the conceptual design of the modern stored-program computer, proposed that the secret of self-replication was to employ a representation at the heart of the process. To avoid an infinite regress of Russian dolls, each of which must have the tiny seed of its future offspring, he proposed that the representation would play two roles in the life cycle of the self-replicating creature. It could be ‘decoded’[2] to generate the body of the offspring, and it could be duplicated to allow a copy of the information to be passed on, creating a new self-sufficient creature whose duplicate copy allows it to reproduce indefinitely in the same way. Von Neumann presented an implementation of this concept using a cellular automaton model, in which the replication of the creature was achieved using local interactions between the states of cells in a plane, according to a hand-designed rule table [McMullin 2000]. The ‘software’, or genotype, of the artificial creature took the form of a long string of cells, decoded by a ‘hardware’ reading head which constructed the offspring by responding to the information encoded therein – a distinction which still exists in ‘von Neumann’ computers today. This specification is all the more remarkable considering that “von Neumann’s 1949 lectures predated the discovery and elucidation by Watson and Crick in 1953 of the self-replicating structure and genetic role of DNA” [Koza 1994].
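The dual role of the representation can be caricatured in a short Python sketch; this is a loose analogy rather than von Neumann’s cellular automaton, and the data structures and ‘decode’ rule are hypothetical simplifications. The description is first interpreted to construct the offspring’s ‘body’, and then copied verbatim into the offspring so that it can reproduce in turn.

# Loose analogy of the dual role of von Neumann's representation: the
# description (genotype) is (1) decoded to construct the offspring's body,
# and (2) copied unchanged into the offspring so it can reproduce in turn.

def decode(description):
    """Interpret the description to construct a 'body' (phenotype)."""
    return [part.upper() for part in description]

def reproduce(description):
    body = decode(description)        # role 1: decoded into a body
    genotype = list(description)      # role 2: duplicated verbatim
    return {"body": body, "genotype": genotype}

ancestor = ["head", "tail", "reader"]
child = reproduce(ancestor)
grandchild = reproduce(child["genotype"])   # the copy enables indefinite reproduction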

Although the intersections between these disciplines are apparent, many commentators (and research supervisors) warn of the dangers of combining too many features in artificial life experiments, a viewpoint well captured by the following criticism of biological models:

“It may seem natural to think that, to understand a complex system, one must construct a model incorporating everything one knows about the system. There are two snags…The first is that one finishes up with a model so complicated that no-one can understand it: the point of a model is to simplify, not to confuse. The second is that if one constructs a sufficiently complex model one can make it do anything one likes by fiddling with the parameters: a model which can predict anything predicts nothing.” [Maynard-Smith and Szathmary 1999]

It is of the nature of the proposed artificial system that many different aspects are integrated. These criticisms must therefore be addressed head-on.

It should be stressed first of all that the proposed system is not a model of a biological, social or economic system, but an actual computational architecture with dynamics which (it is proposed) can be exploited to optimise the design of multi-agent systems with unprecedented flexibility. Since it draws on metaphors from natural systems, it may indeed shed light on the dynamics of systems which share those features, but its primary role is not to model such systems.

Secondly, it is worth reaffirming the objectives which led to an examination of natural systems in the first place. We wish to provide the conditions for solutions to arise which are presently beyond our understanding. In an ideal world, we will be able to identify the ‘slick tricks’ which agents discover to satisfy our requirements. However, if we restrict their behaviour to strategies we already understand, we have defeated the object of the exercise.

Finally, generality or universality should not be confused with complexity. Although it will be claimed that a very large range of interactions and behaviours can be implemented within the proposed architecture, this is achieved through the minimalism of its specification rather than through its complexity.

In spite of the rejection of system simplicity as a guiding principle, it is important for the design specification of the architecture itself to satisfy the following criteria.

“…axioms should not be too numerous, their system is to be as simple and transparent as possible, and each axiom should have an immediate intuitive meaning by which its appropriateness can be judged directly.” [von Neumann and Morgenstern 1990]

Section 1 will detail the proposed benefits and recognised difficulties of multi-agent system design. Subsequent sections will establish the means by which natural optimising systems achieve similar benefits whilst avoiding such difficulties. This discussion will lead to the specification of a set of axioms or ‘design principles’ which will preserve the desirable properties of each domain, following Pfeifer [1996]. These design principles (DPs) will be explicitly stated and argued for. The approach is effectively that of reverse engineering.

“The general idea [of Reverse Engineering] is to start with the product…and then to work through the design process in the opposite direction and reveal design ideas that were used to produce a particular product….Stages in reverse engineering are system level analysis, …subsystem analysis… and finally component analysis where physical principles of component are identified.” [Dautenhahn 2000]

Design ideas in nature do not precede the systems which exploit them. They are abstractions which we derive from our observation of the system. Nevertheless, our treatment of natural systems will follow Dautenhahn’s prescription of a three level analysis.

‘Section 2 - System Level Analysis’ examines whole economies and ecosystems, discusses their dynamics, and characterises the way these dynamics arise from interactions between their constituent subsystems. This section addresses the highest level of organisation, and draws mainly on economic perspectives on resource allocation.

‘Section 3 - Subsystem Level Analysis’ examines the multiple levels of modularity (for example cell, organism, colony, department, firm, corporation) which can contribute to our understanding of global behaviour at the subsystem level. Economics, ethology, evolutionary theory, behavioural ecology, symbiotic theory, game theory and thermodynamics are exploited to provide an understanding of the emergence of niches, and the correlated differentiation and adaptation of subsystems.