Complexity in Computer Systems: A Risky Business

A talk in Dublin, September 10th 2009

Ian O. Angell, London School of Economics

The message I want you to take away from this keynote today is one that many technology specialists don’t want to hear. It is that no one is immune to the hazards implicit in computerisation – from the novice user to the most sophisticated computer professional and hacker. Indeed, the most sophisticated users are most at risk, because they just don’t see the problems coming.

Let me start by asking a question that would never occur to most technologists: “why do we use technology at all?” The reason is that technology imposes structure on our actions, which gives us a tenuous handle on uncertainty. We swap hopelessness for the optimism of a plan of action. Uncertainty is thereby transformed into Risk – a heady mix of hazard and opportunity.

Today I want to show you that the prevalent so-called ‘scientific’ notion of risk is flawed. We are NOT managing risk, but uncertainty. Risk is something we produce to help us deal with uncertainty. Risk is an output, not some ‘thing’ there to be captured. Many so-called ‘risk managers’ simply don’t understand this. Think of Lehman Brothers, or Northern Rock, or the Icelandic banks! Imagine – Enron ran a “management of risk” consultancy! Risk depends on the structures we impose on uncertainty. Different structures, different risks. Unfortunately the structures we introduce to produce risk themselves introduce new uncertainties.

Technology cannot deal with the singularities that emerge spontaneously when it mixes with human activities. You must look beyond the functionality of technology, beyond the good intentions of the designers, towards the observable consequential risks that occur when computers are integrated into human activity systems.

Computers can deal with objective well-structured problems at amazing speeds. They deal with detail, but they cannot cope with subjective subtlety, ambiguity, complexity and paradox. Computerisation is a prisoner of societal and personal consequences, which cannot be controlled, no matter what the management regime. A good technology platform, although necessary, is not sufficient for success. Success (and failure!) is determined by unique political, social, organisational, and particularly personal factors, not just the functional.

We may know the price of digital technology; however, the costs accrue from here to eternity. We have no way of calculating total costs, because we simply do not comprehend the consequences of using the technology. Even if the technology is simple, it becomes highly complex the moment anybody touches it.

Systems misbehave! Grand schemes may solve the problem as intended, only to create worse problems. People bandy the word ‘system’ around without the slightest understanding of what it means. Everywhere the term ‘system’ is used when referring to mere ‘installations.’ But a system is what that installation becomes, and what it will go on becoming – NOT what it was intended to be. Installations are linear, but systems are intrinsically non-linear. In a linear world, the consequences of an action stop with a reaction; in the non-linear world of practice, unforeseen consequences – so-called systemic risks, both hazards and opportunities – feed back as a result of the inevitable complexity of interactions implicit in all large systems.
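
To make the linear/non-linear distinction concrete, here is a minimal Python sketch. It is a toy illustration with invented numbers, not anything from the talk: a linear rule whose consequences stay proportional to their causes, next to the logistic map, the textbook example of a simple feedback rule whose behaviour becomes effectively unpredictable.

```python
# Toy illustration: a linear process versus a process with feedback.
# The linear rule is boringly predictable; the non-linear logistic map,
# despite being a one-line rule, is effectively unpredictable in practice.

def linear(x, a=1.02):
    # reaction proportional to action: the consequences stop here
    return a * x

def logistic(x, r=3.9):
    # the output feeds back into the next input: systemic feedback
    return r * x * (1 - x)

x_lin = 0.5
x_a, x_b = 0.500000, 0.500001   # two almost identical starting points
for _ in range(50):
    x_lin = linear(x_lin)
    x_a, x_b = logistic(x_a), logistic(x_b)

print(f"linear after 50 steps:          {x_lin:.4f}")  # smooth, foreseeable growth
print(f"feedback, starting at 0.500000: {x_a:.4f}")
print(f"feedback, starting at 0.500001: {x_b:.4f}")    # the tiny difference has exploded
```

Run it and the linear process lands exactly where a forecast says it should, while the two feedback runs, which started a millionth apart, end up nowhere near each other.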

When information systems are mixed with human activity systems, the results can be perverse. When you use the tools of any technology to solve a particular problem, you may or may not succeed, but what is certain is that completely unexpected phenomena will happen. Leave that job to computers, and the complexity increases to a point where the utility of computers turns into reliance, reliance becomes dependence, and the Law of Diminishing Returns precipitates a galloping descent into nightmare. Welcome to the Twilight Zone, and the Management of Information Systems!

Preposterous claims are made in the early days of every technology. In its pioneering days, electricity was claimed to have a therapeutic effect. Small electric shocks were thought to cure consumption, dysentery, cancer, blindness, worms, and impotence. Only after the nonsense stops is the technology used propitiously to its full potential. Become complacent, and computers will really screw you up, along with the surrounding human activity systems.

In the good old days, before Sarbanes-Oxley, those in positions of authority in a firm could be highly ‘imaginative’ with the company’s books and resources, because they had limited responsibility for their actions, and little or no liability. Nowadays, when the CEO signs off the accounts, he has sleepless nights – get it wrong … and a jail cell beckons. His personal freedom is now at risk. This focuses the collective mind of every Board of Directors.

Who knows what incriminating documents will pop out of the computer in the present financial crisis? So never put anything on a computer that you wouldn’t want the whole world to see – because they will see it. Potential employers are scanning the web to check out job applicants. HR departments are uncovering incriminating evidence posted long ago on social networks. The ‘cool’ guy who shares his innermost secrets online, or his risqué photographs, or his humorous anecdotes; or who vents his spleen about ex-colleagues, or brags about drug taking or other illegal, antisocial or inappropriate behaviour, is … an idiot with tunnel vision. Disloyal comments about past employers, or the release of their confidential material, the use of inappropriate screen names, sloppy or lewd vocabulary, even a poor writing style, can all lead to that rejection slip.

The naïve still think that computer applications are groundbreaking, fundamentally changing the nature of work, signalling the advent of the Information Revolution. Unfortunately, “Every revolution evaporates and leaves behind only the slime of a new bureaucracy.” The real problem is not computerised bureaucracy per se, but its expansion into inappropriate areas. Quite reasonably, firms aspire to bureaucracy, as it is the most effective way of controlling ‘normal’ problematic situations, thereby mitigating risk. However, in ‘singular’ situations, the unilateral imposition of the predetermined rules of a bureaucracy (both explicit and implicit) can lead to chaos. The American Psychological Association says that omnipresent computer screens distract, and cause confusion and errors of judgement: they call it the ‘Glass-Cockpit Syndrome.’ This syndrome has two effects: end users rely entirely on the system without exercising any judgement; and the ensuing information overload results in them ignoring many pieces of (sometimes very important) information.

Normally, computerised bureaucracy is highly efficient and effective; however, we cannot know whether a situation is ‘normal’ until after its consequences have all played out. The inappropriate use of bureaucracy can be mitigated, but only by using discretion. Unfortunately, bureaucracy often signals the end of discretion. Discretion is frowned upon, since bureaucracy, in its self-referential certainty, claims that everything is normal, and thus under control. However, in the inevitable ‘abnormal’ singular situations, a lack of discretion leads to a vicious circle of hazard.

Bureaucracies are self-referential systems; they operate on their own terms. In a bureaucracy, all decisions and choices have been made well in advance, with each situation seen as ‘normal,’ along with a self-referential insistence that there are no ‘abnormal’ situations. But there are always unique ‘singular’ situations that don’t fit these rules. And yet we tend to accept what we are told by a computer screen, rather than admit the contradictory evidence of our own eyes.

Behind many computer applications is the malignant belief that human thought is mere calculation; that we are merely biological analogue computers. There is a sinister hidden agenda stemming from the dominance of two ideas: that a number can be a meaningful representation of human experience; and that arithmetic operations on such representations, implemented on a computer, can produce ‘rational decisions’ about, and can control, the human condition. The resulting brave new world will not be one of ordered, constrained and controlled lives, but a rule-based bureaucratic shambles.

What the control freaks don’t appreciate is that their actions actually trigger an ‘uncertainty principle.’ There’s no point in measuring something that will change the moment it is measured, and because it is being measured – like the nonsense of British healthcare targets: surgeons actually push dying patients out of the operating theatre and into the corridor, so that ‘death in surgery’ figures are kept low. Pointing mathematical instruments at the complexity in the non-linear world of practice can only create instability, where the only certainty is uncertainty. Take the ‘personalisation technology’ of profiling, now being used everywhere to predict, for example, the tastes of consumers by tracking what they buy, watch, or listen to. These software packages may be mathematically sophisticated, but their implied assumptions are extremely naïve in the ways of the world. Do they really imagine that we can control our complex world by computerising the arbitrary use of numbers, statistics and measurement; with profiling, with systems analysis, opinion polls, market research, socio-economic classifications, performance measures, efficiency audits, and cost-benefit analyses?
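
To see how much such profiling quietly assumes, here is a deliberately naive sketch in Python. The customers, categories and purchase counts are invented for illustration, and commercial recommender systems are far more elaborate; but the underlying move, similarity in category space standing in for taste, is the same.

```python
# A deliberately naive profiling sketch (names and data invented).
# Each customer is reduced to counts over fixed categories: the
# "similarity treated as sameness" step.
from math import sqrt

purchases = {
    "alice": {"jazz": 4, "thrillers": 1},
    "bob":   {"jazz": 3, "gardening": 2},
    "carol": {"thrillers": 5},
}

def cosine(p, q):
    # similarity between two category-count profiles
    shared = set(p) & set(q)
    dot = sum(p[k] * q[k] for k in shared)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def recommend_like(target):
    # "people like you" = whoever sits nearest in category space
    others = [name for name in purchases if name != target]
    return max(others, key=lambda name: cosine(purchases[target], purchases[name]))

print(recommend_like("alice"))   # bob, because both bought "jazz"
```

Alice is matched to Bob because both of them bought something labelled ‘jazz’; whether their tastes have anything in common beyond that label is precisely what the arithmetic cannot know.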

The basis of decision-taking has shifted to a numerical justification. And the lust for numerical solutions is spreading. Everywhere I see forecasting techniques that are merely an assignment of numbers to the future. Strategy becomes a matter of controlling the future by labelling it with numbers, rather than by continually re-evaluating the uncertain situation, and by depending on people. Searching for the right numerical label to represent the future is no different to numerology and astrology, and the personality tests found in women’s magazines – and putting such data on a computer just amplifies the error. Enter the Peter Principle: individuals and computers are promoted to the level of their incompetence, but then computers magnify and amplify the incompetence of all around.

Underneath all numerical methods is a belief in atomism, in category. However, category is not truth, but merely a choice that says it is OK to treat similarity as though it is sameness, and then to assume that all comparisons between such data-choices are absolute facts. As time moves on, or as the perspective or the environment changes, so does category. We are trapped, because categories are our way of differentiating meaning. However, as category changes, so does meaning. All data is context-sensitive. There are no such things as absolute facts. Nuances of detail, as well as deliberate, accidental and arbitrary actions, feed back to continuously modify, and amplify, elements, processes and sub-systems. Squeezing complex human experience into the straitjacket of categories needed for computerisation is forcing square pegs into round holes. This sheds a debris of detail that conspires with the context to form a critical mass, and a subsequent explosion, from the “capricious division and fragmentation” in the flux of events that is life.
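
Here is a tiny, invented example of that straitjacket: once a schema fixes its categories, every awkward case must either be squeezed into the nearest box or silently defaulted, and what gets thrown away is exactly that debris of detail.

```python
# Invented illustration: a fixed category schema forces a choice --
# squeeze the awkward case into the nearest box, or discard the detail.
from enum import Enum

class MaritalStatus(Enum):
    SINGLE = "single"
    MARRIED = "married"
    DIVORCED = "divorced"

def categorise(description: str) -> MaritalStatus:
    # the "similarity as sameness" step: a messy description is
    # collapsed onto whichever label it superficially resembles
    text = description.lower()
    for status in MaritalStatus:
        if status.value in text:
            return status
    # everything else (separated, widowed, "it's complicated")
    # is the shed detail; here it is simply forced into a default box
    return MaritalStatus.SINGLE

print(categorise("married, but separated for ten years"))  # MaritalStatus.MARRIED
print(categorise("widowed"))                                # MaritalStatus.SINGLE
```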

By now I can hear many of you mumbling: what has this got to do with security? Everything!!! You must realise that the only thing systems have in common is that they all fail. Failure is inevitable, and the fault “lies not in our stars, but in ourselves.” That fault is in the way we humans categorise what we observe in the world. You’d better get used to it. To show you just what I mean, I’ll introduce you to a relevant bit of theory. So prepare yourself! Now comes the really heavy bit: the Fallacy of Residual Category.

When we categorise, when we computerise, we separate each thing, each data object, from everything else (its complement – its residual category), so that we can model, and thereby control, the world with science and technology. We restrict access to that thing. However, according to Niklas Luhmann: “The concept of the environment should not be misunderstood as a kind of residual category. Instead, relationship to the environment is constitutive in system formation.” What he is saying is that there are tacit properties of the ‘whole,’ latencies, which are not apparent in any component part. Any one component is structurally coupled to everything else; however, by bringing that one part to the fore and observing it as a stand-alone unit, those couplings are cut by the distinctions implied in the separation, and the latency vanishes from view, but the potential for disruption remains.

Let me illustrate this by resorting to a diagram: the Borromean rings, taken from the coat of arms of the Borromeo family. The rings are three interlinked circles that, because of their topology, are inseparable as a triple. The special characteristic of these rings lies in the fact that no two of them are linked; as a result, if you distinguish one of the rings, then the structural coupling is lost, and the residual category of the two remaining rings falls apart. Thus the structural couplings/latencies are made to disappear, the two parts (the thing and its residual category) no longer comprise the original “whole,” and we have introduced a paradox.
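
For anyone who wants the topological property stated precisely, the standard knot-theoretic shorthand is given below (the notation is standard usage, not the speaker’s): every pairwise linking number vanishes, yet the triple is non-trivially linked, a fact detected only by a genuinely three-component invariant such as Milnor’s triple linking number.

```latex
\[
  \operatorname{lk}(A,B) \;=\; \operatorname{lk}(B,C) \;=\; \operatorname{lk}(C,A) \;=\; 0,
  \qquad\text{yet}\qquad
  \bar{\mu}_{123}(A \cup B \cup C) \;=\; \pm 1 .
\]
```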

For in the original whole, the two artificially separated parts continue to operate (and interrelate) as the unobservable whole. Because of this asymmetry (between the world as it is, and as it is observed), observation is conditional, but those conditions are necessarily unobservable, unappreciable, uncertain. Those truncated structural couplings, so casually discarded by observation (by data collection), stay on as uncertainties, and thus as risks to the observer in any further observations, and they can reassert themselves in the most inconvenient ways. And so we are back with uncertainty and risk. Hence we are confronted by a most inconvenient fact: by introducing the structure that turns uncertainty into risk, we have distinguished the world into categories, and in doing so we have introduced paradox, and yet more (and far more subtle) uncertainties, which lead to future risks, as yet unstructured. Such is the human condition! There is no escape.

Initially the paradoxes are aligned, and don’t cause trouble – which is how the descriptive pattern was perceived in the first place. But as the system gets bigger, and as time moves on, that alignment will falter. The paradoxes will ultimately conspire to upset the system. All computer systems are flawed, and can only have a temporary utility. This has got nothing to do with the quality of design. Being smug in your expertise with computer installations won’t help you, because the flaw has nothing to do with computers, and everything to do with systems, and the fallacy implicit in all data. The act of imposing a computerised tidiness on an untidy world can lead to chaos. A blind belief in perfecting computer installations is folly.