SIMULATION OF ADAPTIVE NETWORK RECONFIGURATION UNDER OVERWHELMING DENIAL OF SERVICE ATTACK (OUTPACE)

by

JOHN WILLIAM LEFEVERS, B. A.

North Carolina State University, Raleigh, NC, 2003

A Thesis

Submitted to the Faculty of the Graduate School of the

University of Colorado at Colorado Springs

in Partial Fulfillment of the Requirements

for the Degree of

Master of Engineering

Department of Computer Science

2010

© Copyright by John W. Lefevers 2010

All Rights Reserved

This thesis for the Master of Engineering degree by

John William Lefevers

has been approved for the

Department of Computer Science

by

______

Advisor: Dr. C. Edward Chow

______

Dr. Jugal K. Kalita

______

Dr. Xiaobo Zhou

______

Date

Simulation of Adaptive Network Reconfiguration

Under Overwhelming Denial of Service Attack (OUTPACE)

by

John William Lefevers

(Master of Engineering, Information Assurance)

Thesis directed by Associate Dean Professor C. Edward Chow

Department of Computer Science

Abstract

As Distributed Denial of Service (DDOS) attacks have grown more sophisticated, attackers have crafted attacks that appear to the target to be normal traffic. Operators can no longer rely on firewall systems or router filters to block these attacks. Instead, the network must mitigate the immense traffic flow based on behaviors such as the size and timing of requests in order to separate good traffic from bad. This thesis, hereafter referred to as OUTPACE, simulates a combination of network effects: Border Gateway Protocol (BGP) sinkholes, Domain Name System (DNS) record changes, and time-sensitive reconfiguration of available servers to “age out” servers that are being attacked. This allows most legitimate clients to be serviced while most attack traffic is dropped before reaching the Internet backbone. Simulation results show significant promise in mitigating 'unfilterable' DDOS attacks and reducing the total impact of an attack on Internet Service Providers (ISPs).

This work is dedicated to all the hackers who came before, from Aristotle to Tesla and beyond, and especially to my contemporaries, for whom enough is never quite enough.
It's also dedicated to my ex-wife Alyssa. She saw the worst of this, and made a great sacrifice for my pursuits. I hope you find your white picket fence.
Finally, this is dedicated to my parents, one of whom gave me an insatiable thirst for learning, the other of whom gave me a self-critical perspective that tends to yield perfectionism.

The two are a dangerous combination.

Contents

Chapter 1

Introduction

1.1 Web-Browser System Fundamentals

1.1.1 Core Web Technologies

1.1.2 BGP

1.1.3 DNS

1.1.4 High Availability Systems

1.2 DDOS Research

1.2.1 Origin of Denial of Service Attacks

1.2.2 Protocol-Conforming Attacks and Advanced DDOS

1.3 DDOS Defense Techniques

1.3.1 Taxonomy of DDOS Defense Strategies

1.3.1.1 Classification by Detection Strategy

1.3.1.2 Classification by Response Strategy

1.3.1.3 Classification by Degree of Cooperation

1.3.1.4 Classification by Location on the Network

1.4 Survey and Classification of Relevant Network Defense Literature

Chapter 2

The State of Adaptive Network Defense

Chapter 3

The OUTPACE Network Simulation

3.1 OUTPACE Overview

3.2 Choosing a Simulator

3.3 OUTPACE Design Considerations

Chapter 4

Performance Evaluation and Simulation Results

4.1 Comparing OUTPACE to the Baseline Simulation

4.2 Impact of Too Large a Bottleneck

4.3 Impact of Attacker/Client Ratio on Performance

4.4 Impact of Number of Servers on Performance

4.5 Impact of Number of Clients on Performance

4.6 Impact of Throttling on Performance

4.7 Putting It All Together: Comparative Performance

Chapter 5

Lessons Learned and Future Directions

5.1 Limitations of the OUTPACE Design

5.2 Future Research

Chapter 6

Conclusions

Bibliography

Appendix A: Source Code and Tools

A.1 Installation How-to

A.2 Execution How-to

A.3 Helpful Scripts and Tools

A.4 Demo Script Results

Chapter 1

Introduction

This thesis outlines a method by which Domain Name System (DNS) servers, dynamic allocation of Internet Protocol (IP) addresses, Border Gateway Protocol (BGP) sinkholes, and careful timing can allow significant amounts of client traffic to reach a website that is under a massive, asymmetric denial of service attack whose traffic conforms to the connections expected by that server. This document outlines the modeling and simulation work performed as a proof of concept, the design choices involved, and the results. The appendices include the network simulator source code as well as data samples demonstrating the correctness of the design.

1.1 Web-Browser System Fundamentals

In order to understand the causes and effects of denial of service attacks on ISPs, one must understand the core technologies used to deliver content from the web server to the client's browser, where it is displayed on the screen. In the following sections, we will outline the web server-to-browser connection, the protocol ISPs use to route data across the backbone (BGP), the name system (DNS) which links domain names to Internet Protocol (IP) addresses, and the most common way of pooling web servers to distribute load. These technologies form the prerequisites for understanding OUTPACE.

1.1.1 Core Web Technologies

The core technologies involved in rendering web content to the screen include web servers, which listen for and respond to requests from web browsers. By convention, a web server offers an index page (index.html), which is returned when a browser asks for the site without naming a specific page; more sophisticated web applications may use a different type of index page, but some initial entry page is required for the site to be navigable. Web browsers connect to a well-known Transmission Control Protocol (TCP) port at the address of the web server. Once the handshake is complete, the browser sends a request for a particular page (index or otherwise); the page it receives in turn names all the files required to render it properly. The browser makes serial requests for all items referred to within a page. Once they are all received, the browser displays them according to the page layout and waits for the next request (when the user clicks a link). Common protocols are required throughout the network (TCP/IP), as well as an agreed-upon port (HTTP on port 80, HTTPS on port 443, etc.) and a service standard (HTTP/1.1 per RFC 2616). Note that web requests are generally fairly small (on the order of 10 KB) and responses somewhat larger (on the order of 150 KB). Large files can be transferred along with smaller ones, but HTTP is a poor choice for bulk transfer: typical clients do not resume broken downloads and do not reconnect automatically after a timeout.
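
To make the request/response cycle concrete, the following minimal sketch performs the browser side of a single HTTP/1.1 page fetch in Python. It is an illustration only, not part of the OUTPACE simulator; the host example.com and the page /index.html are placeholder values.

    import socket

    # Browser side of a single HTTP/1.1 page fetch (illustration only;
    # "example.com" and "/index.html" are placeholders).
    host, port = "example.com", 80

    # The OS resolver performs the DNS lookup described in Section 1.1.3;
    # TCP's three-way handshake completes inside create_connection().
    sock = socket.create_connection((host, port))

    # One request for one page. A real browser would parse the returned
    # HTML and issue further serial requests for every image, stylesheet,
    # and script the page refers to.
    request = ("GET /index.html HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               "Connection: close\r\n"
               "\r\n")
    sock.sendall(request.encode("ascii"))

    # Read the response: status line, headers, then the page body.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk
    sock.close()

    print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"

Note that the request itself is only a few hundred bytes while the response carries the full page, which is the size asymmetry described above.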

1.1.2 BGP

Behind the scenes, the Internet is a collection of individual service providers with agreements to route traffic on each other's behalf in order to allow all interconnected networks to reach all destinations. The protocol by which ISPs advertise which parts of the network they control or can reach is called the Border Gateway Protocol. ISPs which have access to the backbone and need to route data to their peer ISPs each register as an Autonomous System (AS) in order to use BGP to route traffic to (and through) other ASes. BGP allows an ISP to tell its peers which network addresses it can reach and which ones it is responsible for, and helps them determine the optimal route for traffic. This creates the 'highway' upon which Internet traffic rides to get from a browser on one ISP's network to a web server on another ISP's network and back again. BGP was not designed with security in mind, but it offers some lesser-known features which can be used to intentionally drop traffic at extremely high speeds instead of routing it. This aspect of BGP, called sinkholing (or blackholing), is one of the cornerstones of the OUTPACE design.
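
The sketch below models the forwarding-table effect of a sinkhole in Python. It is a conceptual illustration of longest-prefix-match routing, not BGP itself, and the addresses and AS number are documentation-range placeholders. Announcing a host-specific /32 route with a null next hop causes traffic to the victim address to be discarded cheaply at the router, while the rest of the prefix continues to route normally.

    import ipaddress

    # Conceptual model of a router's forwarding table (not real BGP).
    # The /32 "sinkhole" route is more specific than the normal /24 route,
    # so longest-prefix match sends the victim's traffic to a null next hop.
    routes = {
        ipaddress.ip_network("203.0.113.0/24"): "peer-AS64500",  # normal path
        ipaddress.ip_network("203.0.113.10/32"): "Null0",        # sinkhole
    }

    def next_hop(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        matches = [net for net in routes if addr in net]
        if not matches:
            return "unreachable"
        best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
        return routes[best]

    print(next_hop("203.0.113.10"))  # Null0: dropped at the router
    print(next_hop("203.0.113.20"))  # peer-AS64500: still routed normally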

1.1.3 DNS

Since computers don't address each other the way users do (i.e., by street addresses, telephone numbers, and social security numbers), there has to be a system which translates human-readable addresses (domain names in the form “www.google.com”) into computer-usable addresses (IP addresses in the form “1.2.3.4”). This system is called the Domain Name System, and it maps both names to IP addresses and IP addresses to names. DNS is a separate protocol from web traffic, runs on a different port, and requires servers only at the ISP level. A user's ISP keeps a nameserver for all subscribers to use which tracks which names map to which addresses. Service providers advertise a hostname to the DNS system, and when a client types that name into a browser, the browser asks its ISP's local nameserver (i.e., a caching DNS server). If the local nameserver doesn't know the name, it asks a hierarchy of other nameservers for an authoritative answer. Once it gets the answer, it passes the result back to the browser and stores a local copy to save time when the next user asks for that domain name.

Caching nameservers don't have to advertise any names, but they do have to know the root-level nameservers, which in turn know which DNS server is authoritative for a given top-level domain. In the case of “www.google.com”, the local nameserver asks a root server about “.com”, asks the “.com” server about “google.com”, and is handed the address of the nameserver for “google.com”. Once it has this address, it asks the “google.com” nameserver which IP address corresponds to “www.google.com”. Note that the “.com” domain server knows which server is authoritative for “google.com”, but it doesn't have to know anything about “www.google.com”. This is an interesting distinction: the tree structure of DNS means that any one server only has to know the root server addresses plus the sub-domains for which it is responsible. Caching servers which do not advertise any addresses simply keep a cache of the most recent authoritative reply for each query they've been given. At some interval, they decide a record is 'stale' and re-query the authoritative servers in case “www.google.com” has changed addresses.

Most important to OUTPACE is the fact that, except for the top-level domain name (“yahoo.com”), any server name (such as mail.yahoo.com) can be repointed at any interval to a new server. Changes replicate through the system fairly quickly, so no single server failure has to cripple an entire domain or business.
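
The caching-and-expiry behavior that OUTPACE relies on can be sketched in a few lines of Python. This is a toy model, not a real resolver: the zone data, the names, and the 30-second time-to-live (TTL) are made-up values. The point is that once the TTL lapses, the next query is answered fresh from the authority, which is the window in which a defender can repoint a name at a new server.

    import time

    # Toy model of DNS caching with TTL expiry (illustration only; the
    # zone data, names, and 30-second TTL below are made-up values).
    AUTHORITATIVE = {"www.example.com": "198.51.100.7"}  # pretend zone data
    TTL = 30  # seconds before a cached record is considered "stale"

    cache = {}  # name -> (address, expiry time)

    def authoritative_lookup(name: str) -> str:
        # Stands in for the real walk: root server -> ".com" server ->
        # the zone's own nameserver.
        return AUTHORITATIVE[name]

    def resolve(name: str) -> str:
        entry = cache.get(name)
        if entry and entry[1] > time.time():
            return entry[0]                    # fresh cached answer
        addr = authoritative_lookup(name)      # stale or missing: re-query
        cache[name] = (addr, time.time() + TTL)
        return addr

    print(resolve("www.example.com"))  # cache miss: queries the authority
    print(resolve("www.example.com"))  # cache hit until the TTL lapses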

1.1.4 High Availability Systems

Given enough business, a single web server can be overwhelmed by the number of requests sent to it or the amount of traffic it has to respond to. High availability systems are a type of server implementation which allows a pool of servers to answer the requests sent to a single address. For high-load applications, or given limited computing resources, this system and network configuration is very advantageous. An example of this approach is the Linux Virtual Server (LVS) system. In this system, all requests are sent to one address (called the coordinator), which hands connections off to whichever server is able to handle them. Some systems distribute load evenly among all servers, some send each connection to whichever server is least taxed, and others weight the traffic based on the capability of each server's hardware. Note that while this system overcomes the limitations of physical server hardware, it does nothing to mitigate an overloaded network pipe or the equipment along the network path that can be overtaxed (i.e., firewalls and routers). OUTPACE borrows from the LVS design in redirecting traffic from one server to another, but uses DNS as the coordinator.
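
The three scheduling policies just described can be sketched as follows. This is an illustrative Python model, not LVS code; the server names, weights, and connection counts are invented for the example.

    import itertools

    # Toy connection dispatcher in the spirit of LVS (illustration only;
    # server names, weights, and counts are invented).
    servers = {"web1": {"weight": 1, "active": 0},
               "web2": {"weight": 2, "active": 0},   # beefier hardware
               "web3": {"weight": 1, "active": 0}}

    rr = itertools.cycle(servers)  # fixed rotation over the pool

    def round_robin() -> str:
        return next(rr)            # even distribution, ignores load

    def least_connections() -> str:
        return min(servers, key=lambda s: servers[s]["active"])

    def weighted_least_connections() -> str:
        # Favor servers with spare capacity relative to their weight.
        return min(servers, key=lambda s: servers[s]["active"] / servers[s]["weight"])

    for _ in range(4):             # hand off four incoming connections
        choice = weighted_least_connections()
        servers[choice]["active"] += 1
        print("dispatch ->", choice)

OUTPACE performs the equivalent hand-off one level earlier, by changing which server address DNS returns.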

1.2 DDOS Research

Given an understanding of the basic fabric of the Internet as seen by a web browser, we must now understand the threat most likely to take down an ISP or hosting service: Distributed Denial of Service (DDOS) attacks. These attacks are best described as the coordination of a large number of agents, under the control of one malicious user, which simultaneously direct huge amounts of traffic towards a target server in hopes of disabling that server or the network infrastructure that supports it.

1.2.1 Origin of Denial of Service Attacks

Distributed Denial of Service attacks first emerged in the summer of 1999, with a tool called Trinoo. This tool began the proliferation of botnets (virus-infected computers which act on behalf of the virus writer, not the system owner), which led to more and more sophisticated attacks. Initially, DDOS was limited to single-payload attacks, but Trinoo was quickly followed by more sophisticated and tricky implementations of both the attacker element (a bot) and the payload element (the traffic generated). Examples include the Tribal Flood Network (TFN) and Stacheldraht. [KES00]

The key aspect these systems have in common is their organization, which is intended to hide where the control is coming from. Botnets are controlled by a small number of “masters” which direct “handlers” to use infected client (“bot”) machines to generate traffic against a target. Those creating and controlling a DDOS network (“botnet”) are unlikely to be caught because they have no direct contact with the bots. Bots are programmed to check in with coordinators in Internet Relay Chat (IRC) channels and wait for orders. Masters need only log in to the IRC channel (or log into a handler) and issue orders, which the handlers pass on to the bots, which in turn attack the target host. In the early days of DDOS, relatively few infected clients could take down almost any Internet resource [GIB00].

In the early 1990s, attackers used brute-force approaches from a small number of machines to commit denial of service attacks. As defenders realized they could filter out the source addresses of these machines, attackers resorted to spoofing their IP addresses so that they appeared to be different hosts each time they attacked. On early Windows machines this was trivial due to an incomplete implementation of the TCP/IP stack. Around 2000, the trend shifted towards traffic amplifiers, which generate large amounts of traffic in response to small requests. Attackers spoofed their source addresses to be that of the victim when they made their requests, so that large flows of seemingly requested data would take down the victim's network. Then came the era of DDOS, in which botnets proliferated by viruses deliver the payload. Botnets need not bother spoofing their source addresses, for two reasons: first, the individual host a bot lives on is of no value because of the sheer number of bots; second, compromise of any one host in the botnet will reveal nothing useful about who is controlling the botnet or where they can be found. Most recently, attacks have included asymmetric payload reflector attacks, where the attack payload comes from a legitimate source that believes it is fulfilling a legitimate request. This approach protects the attacker because the defender can only learn where the attack came from by examining the logs of a third party which does not want to be associated with the attack.