Infosphere: Tilting the balance.
(Internet Evolution and Nanotechnology).
S. Osokine.
Server Architect, Infolio.
30 Mar 2001.
Contents.
1. The Big Crash
2. “Good help is hard to come by”
3. “Good hacker is a silent hacker”
4. Balancing the “Infosphere”
5. Infosphere. The Limits to Growth
6. Gnutella and Distributed ‘Broadcast-Route’ Protocols
7. From Infosphere to the Physical World
8. What are the ‘Network Immune System’ Components?
9. How to Grow the Nanonet without Nanomachines?
10. References
1. The Big Crash.
“On January 15, 1990, AT&T’s long-distance telephone switching system crashed.”
This is the first sentence of Bruce Sterling’s excellent book The Hacker Crackdown [1]. The book tells a riveting story, describing exactly how a large part of the continental US lost its phone service for nine hours, why that crash happened, and what happened afterwards. In a sense, some social effects of that crash – like the aftershocks of a really big earthquake – can be felt even now, more than eleven years after the event.
It took a whole book (and considerable literary talent!) to tell the full story of that crash. Here, however, we will concentrate on just a few technical aspects of the story and use them as ‘anchors’ to illustrate some practical ties between several seemingly unrelated fields of knowledge – such as nanotechnology, the Gnutella music-copying software, and the methods used to protect computer networks against attacks.
As far as network-wide crashes go, there was nothing really special about that particular one. The huge computers used to route long-distance phone calls went out of service, and every attempt to put them back on-line failed immediately. Similar scenarios had been observed on the Internet long before that crash – the Morris Worm was already ancient history in 1990.
What made this crash special was the fact that it was arguably the first case in which pure information (in the form of software) disrupted something as essential and taken for granted as phone service on a continental scale. If we define ‘magic’ as a direct way of causing material effects by purely informational means [2], then January 15, 1990 is a prime candidate for going down in history as the Birthday of Magic.
Several aspects of what was going on are important in the context of this article.
First, the original assumption of the AT&T technicians was that the network went down because of a sophisticated hacker attack. This assumption led to the ‘hacker crackdown’ eloquently described by Sterling, even though it eventually turned out that a bug in AT&T’s own software had caused the crash. Naturally, the knee-jerk reaction cost several people some time behind bars on unrelated charges, but this is not what is really interesting here.
What is interesting is the fact that a very simple bug was able to cause such major disruptions throughout the whole system. Even if hackers had really set out to disrupt the phone service, they would have run into major problems along the way and might well have failed, whereas a small piece of garbled data achieved the same result with no effort at all.
If we compare hacker activity to viral and bacterial diseases, which multiply foreign DNA in the organism, this crash was closer to cancer, where an error in a native cell’s DNA can cause significant problems for the whole organism.
Second, the only reason a full nation-wide phone system collapse was avoided was that some switches still ran the ‘obsolete’ code, which was in the process of being gradually replaced by the new (and buggy!) software. The switches with the old software acted as ‘naturally immune hosts’ – the fraction of a population that, through sheer luck, cannot be affected by a particular disease.
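A toy simulation makes the point about ‘naturally immune hosts’ concrete. The model below is a deliberate simplification and not the actual AT&T failure mechanism: a failing switch sends a malformed message to a few random peers, and only switches running the (hypothetical) buggy software can be brought down and pass the failure on.

import random

# Toy model (not the actual AT&T failure mechanism): a failing switch
# sends a malformed message to a few random peers; only switches running
# the new, buggy software crash and spread the failure, while switches
# still running the old code act as 'naturally immune hosts'.
def simulate_cascade(n_switches=100, immune_fraction=0.3, fanout=4, seed=1):
    random.seed(seed)
    immune = [random.random() < immune_fraction for _ in range(n_switches)]
    immune[0] = False                   # the failure starts at a buggy switch
    down = set()
    frontier = [0]                      # switch 0 fails first
    while frontier:
        next_frontier = []
        for s in frontier:
            if immune[s] or s in down:
                continue                # old code (or already down): no spread
            down.add(s)
            next_frontier.extend(random.randrange(n_switches)
                                 for _ in range(fanout))
        frontier = next_frontier
    return len(down)

for fraction in (0.0, 0.1, 0.3, 0.5):
    print(f"immune fraction {fraction:.1f}: "
          f"{simulate_cascade(immune_fraction=fraction)} switches down")

The larger the immune fraction, the fewer switches the cascade reaches before dying out – which is roughly what the surviving ‘obsolete’ switches did for the phone network.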
Third, during the crash and immediately afterwards it was impossible to tell the difference between malicious activity and a ‘normal’ bug in the code. The effects were exactly the same and had to be fought in the same way regardless of the underlying cause. The response mounted by the technicians was largely independent of what they thought had caused the crash. The phone system was down, it had to be brought back up, and that is exactly what they did after nine hours of frantic effort.
The old software was installed throughout the system, effectively erasing the defective DNA in the phone network ‘organism’. If someone had told the technicians at that point that what they did closely resembled the immune system’s response to foreign proteins, and that they themselves were essentially doing the job done in the organism by immune system cells, they would probably have been surprised. But that is exactly what they did. The phone system had every reason to be proud of its immune system, which was able to identify the threat, come up with a cure and distribute it throughout the organism in mere hours.
The problem is, the AT&T phone system was very lucky.
2. “Good help is hard to come by”.
Very few organisms can boast an immune system whose T- and B-cells have the intellectual level of Bell Labs engineers. In fact, most of today’s corporate networks do not have engineers of that caliber. The AT&T long-distance network was a Very Important Network, and it got all the VIP (oops, VIN) treatment that it deserved.
However, the Internet and the number of computers attached to it grow exponentially, while the educational system and the human reproduction rate place certain limits on the number of IT personnel. In short, the total IQ of all the engineers in the world, shared between all the computers in the world, approaches zero very fast when counted on an ‘IQ units per computer’ basis.
Of course, some measures are being taken to counteract this process. The Computer Emergency Response Team (CERT), created shortly after the Morris Worm, collects the best and latest knowledge about security threats and distributes it among network users. The problem is, for every computer someone still has to regularly read the CERT bulletins and make sure that all the latest countermeasures listed there (software upgrades, patches, configuration settings, etc.) are applied to that particular machine.
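The bookkeeping involved is conceptually simple, which is exactly why it is a candidate for automation. The sketch below only illustrates the idea of comparing what is installed on a machine against an advisory list; the advisory identifiers, version numbers and data structures are invented for the example, since real CERT bulletins are free-form text rather than neat machine-readable records.

# A minimal sketch, assuming hypothetical machine-readable advisories.
installed = {
    "sendmail": "8.9.3",
    "bind":     "8.2.2",
    "wu-ftpd":  "2.6.0",
}

# (advisory id, package, first fixed version) - all entries are invented
advisories = [
    ("CA-HYPOTHETICAL-01", "sendmail", "8.11.0"),
    ("CA-HYPOTHETICAL-02", "bind",     "8.2.3"),
]

def version_tuple(version):
    return tuple(int(part) for part in version.split("."))

for advisory_id, package, fixed_in in advisories:
    current = installed.get(package)
    if current and version_tuple(current) < version_tuple(fixed_in):
        print(f"{package} {current} is vulnerable: see {advisory_id}, "
              f"upgrade to {fixed_in} or later")

The hard part, of course, is not this comparison but getting a machine-readable advisory feed and a reliable inventory onto every machine in the first place.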
Not surprisingly, even that level of ‘health care’ is unavailable for many (arguably most) computers on the Internet. Even if every corporation somehow managed to hire knowledgeable IT staff – something that is neither easy nor cheap to do these days – that would not be the end of the story. The proliferation of home networks, permanent connections, DSL and cable puts an amazing number of networked computers into the hands of people who have no idea what CERT is and would not recognize a patch even if it hit them on the head.
Naturally, this is not the fault of these people – this is a network-wide problem that has to be solved at the network infrastructure level with automatic instruments. For example, virus-protection software does an amazing job of constantly updating the virus databases on millions of machines.
Unfortunately, in order to use it, the user has to know what anti-virus software is in the first place, which is also far from certain these days. To make a long story short, any hacker with an idea of going out on the net and compromising some machines can do it easily – there are no omnipotent ‘security forces’ that would protect a computer without skilled human help, and that help is difficult to come by.
From the security standpoint, this situation is bound to deteriorate rapidly. The number of connected computers grows very fast. PDAs, cell phones, wireless pagers – all these devices have a CPU that runs some code and is thus vulnerable to a potential attack (or to self-inflicted damage, which can be just as bad from a practical standpoint, as the Big Crash story shows).
As the number of such devices increases, the hope of somehow keeping all of them out of trouble disappears pretty fast. Fortunately, it is not necessary to do so.
3. “Good hacker is a silent hacker”.
Many of the computers on the Internet are compromised – that is, many computers are routinely used by people who are not supposed to use them. Even more (arguably most) computers contain bugs in their code that can lead to very serious problems under certain conditions.
The Internet has been in this state for years – one might say that it is the natural state of affairs for the network, and the exponential Internet growth won’t make the situation better (or different, depending on one’s viewpoint).
Every time the ‘hacker’ or ‘vulnerability’ problem is raised in the public’s mind by the media, it is usually the result of some hacker or bug causing an effect that is ‘reportable’. Someone might steal money from a bank, deface a Web site or jam a major Web service with phony requests, and it becomes news instantly. At the same time hundreds of hackers and thousands of bugs are just sitting silently, minding their own business, not interfering with anyone’s activity – and no one notices them.
The Internet as a whole might be regarded as an organism that is concerned with its own survival. Many years of Internet ‘evolution’ led to the creation of certain ‘standard practices’. Some of these practices are codified, some are passed from one sysadmin to another, and some probably exist only on the subconscious level – a system of that complexity is bound to have many well-hidden ways of controlling itself in order to survive.
Pretty much as an organism does not really care about any one of its cells, the existence of the Internet does not depend on the existence or well-being of any particular computer, so its evolutionarily developed ‘rules’ tend to be lenient towards the ‘silent hackers’. Everyone knows that these hackers are out there, that some computers are compromised – but so what? If Windows (or some other OS) has dangerous bugs or security flaws, that is not a good enough reason to ban it from the network, as long as the existence of these bugs does not threaten the network as a whole.
Now, a deliberate destructive attack or a serious bug (like the one that caused the Big Crash) is quite another matter. Every time this happens, all the Internet resources are mobilized to fight the threat in a cooperative effort that can literally span the globe. A new virus can make whole countries unavailable on the net, and something has to be done about it – fast. After the cure is found, the network returns to normal, and an occasional attempt by some uninformed user to double-click the “anna_kournakova.jpg” mail is no longer a cause for global concern.
Looking at the whole Internet as an entity, such behavior is strikingly similar to the ‘primary’ and ‘secondary’ immune responses in an organism. A new virus can make an organism sick for weeks until the proper countermeasures (antibodies) enter the ‘production state’. This is the ‘primary’ immune response. If, however, the organism survives this particular sickness, then every time it encounters the same virus, the ‘secondary’ immune response is fast and decisive. The existing antibodies attack the virus before it has had a chance to replicate, and the resulting virus destruction is normally not even noticeable to the attacked organism. Sure, some cells might be destroyed in the process, but who cares about individual cells?
4. Balancing the “Infosphere”.
This whole process of the Internet protecting itself against attacks can be viewed as a continuous struggle between ‘good’ and ‘bad’ information for access to the Internet’s storage space and CPU power.
‘Bad’ information (data and code) can appear on individual hosts in the form of Trojans and viruses, can be downloaded when a host password is compromised, and so on. In many cases the ‘bad’ information also physically removes the ‘good’ information from the host – for example, self-monitoring tools can be replaced by their ‘bad’ versions that hide the attack from detection. Destructive viruses can wipe out the vital data of whole companies.
‘Good’ information tries to fight the ‘bad’. ‘Good’ code can scan files for virus signatures, detect suspicious access patterns and so on, deleting the ‘bad’ information when it is found.
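At its simplest, such ‘good’ code is nothing more exotic than a loop over the file system looking for known byte patterns. The sketch below is a toy illustration of signature scanning only; the signatures are invented placeholders, and real scanners rely on far more sophisticated matching and behavioral checks.

import os

# Invented signatures - placeholders for the example, not real malware markers.
SIGNATURES = {
    b"EXAMPLE_TROJAN_MARKER": "Example.Trojan.A",
    b"EXAMPLE_WORM_PAYLOAD":  "Example.Worm.B",
}

def scan_file(path):
    try:
        with open(path, "rb") as f:
            data = f.read()
    except OSError:
        return []                      # unreadable file: skip it
    return [name for signature, name in SIGNATURES.items() if signature in data]

def scan_tree(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            for name in scan_file(path):
                print(f"{path}: matches signature for {name}")

scan_tree(".")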
In any case, barring the final victory of one side (which seems unlikely), a certain balance is usually achieved between the ‘good’ and the ‘bad’ information in the “infosphere” – the space where information of different kinds coexists. Pretty much like predators and herbivores coexisting in the ecosphere, attacking programs of different sorts, security programs and application programs can coexist in the same infosphere at the same time. It is difficult to call this coexistence peaceful, but the same applies to ecological coexistence.
On one hand, the analogy with predators and herbivores might seem superficial, but on the other hand hackers and security people really do need each other. Hackers justify the existence of security and validate the approaches it uses; security provides the challenges for the hackers.
Pretty much like the real ecosphere, the infosphere is controlled by a huge number of very subtle mechanisms. From the opinion of a hacker’s reference group, to legislative efforts on the national level, to new ways to deflect denial-of-service attacks, to the latest bugs in some OS or programming language code – all these factors control the info-balance.
Many of these controlling factors are social in nature – roughly speaking, behind many advances of the ‘good’ or the ‘bad’ code there might be a real live person who has consciously or unconsciously helped to move the ‘front line’ to one side or the other. This is natural, but despite the advances in automatic attack and automatic defense capabilities, today humans largely control the battle in a very ‘hands-on’ fashion. Every computer is still considered to deserve individual treatment, and every computer normally has a real live human being who cares (or at least is supposed to care) about its state and tries to affect it.
This cannot continue forever.
5. Infosphere. The Limits to Growth.
There are two factors limiting direct, ‘hands-on’ human intervention into the infosphere balance.
The first one is the sheer size of the infosphere as it keeps growing. A virus-protection company can maintain a Web site with an automatic virus database update feature for only so many clients. After the number of clients reaches a certain limit, the Web site will collapse regardless of its capacity – a central distribution solution just does not scale.
Right now the virus-protection company might be happy to expand its client-handling capacity, but only because there is a human being behind every request. This human being can pay some money to offset the cost of maintaining the Web site, or be a target for advertising – but what if the download requests for the virus database start to arrive from faceless networking entities? What if there are several thousand computers for every human being on the face of the Earth, and all of them want the newest virus data? These faceless entities will put the same load on the central servers as real live people at their displays, but they won’t see any advertising and won’t increase the revenues. When this happens, the cost and complexity of maintaining the Web site become prohibitive pretty fast – and several thousand computers for every human is not even a very large number [3].
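A back-of-the-envelope calculation shows how quickly the central site becomes hopeless. All the figures below are assumptions picked for illustration (a rough 2001 world population, ‘several thousand’ devices per person, a one-megabyte daily update), not measurements of any real service.

humans              = 6e9      # rough world population, circa 2001 (assumed)
computers_per_human = 3000     # "several thousand computers per human"
update_size_bytes   = 1e6      # one virus-database update, ~1 MB (assumed)
updates_per_day     = 1

clients       = humans * computers_per_human
bytes_per_day = clients * update_size_bytes * updates_per_day
bytes_per_sec = bytes_per_day / 86400

print(f"clients:          {clients:.1e}")
print(f"outbound traffic: {bytes_per_sec / 1e12:.0f} TB/s from one central site")

With these assumed numbers the single site would have to sustain on the order of two hundred terabytes per second of outbound traffic – not a capacity-planning problem, but a sign that the architecture itself is wrong.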
So clearly some distributed solution is necessary to deliver the data to the ‘network edges’ – the computers should be able to get the data from their peers, not from a central location. Of course, this approach opens brand-new doors for the ‘bad code’ to subvert these distributed data transfers in order to avoid detection, or in order to cause something resembling ‘autoimmune diseases’, in which legitimate software would be attacked and removed by the security mechanisms. This can be partially prevented by sophisticated encryption and authentication technologies, but some new ways for the ‘bad code’ to propagate itself will certainly be created in the process, if only because of the bugs in the encryption and authentication software.
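The reason peer-to-peer delivery scales is that coverage grows roughly exponentially with each round of forwarding, so the time to reach the whole network grows only logarithmically with its size. The sketch below simulates one simple ‘gossip’ scheme under assumed parameters (uniformly random peer choice, fixed fan-out); it illustrates the scaling behavior and is not the mechanism of any particular product.

import math
import random

# Every node that already holds the new data forwards it to a few
# randomly chosen peers each round, until every node has a copy.
def gossip_rounds(n_nodes=100_000, fanout=3, seed=1):
    random.seed(seed)
    has_data = {0}                       # one seed node starts with the update
    rounds = 0
    while len(has_data) < n_nodes:
        reached = set()
        for _node in has_data:
            reached.update(random.randrange(n_nodes) for _ in range(fanout))
        has_data |= reached
        rounds += 1
    return rounds

n = 100_000
print(f"{n} nodes reached in {gossip_rounds(n)} rounds "
      f"(for comparison, ln({n}) is about {math.log(n):.1f})")

The number of rounds grows like the logarithm of the network size, while the load on any single node stays constant per round – exactly the property that a single central Web site lacks.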