Technical Report: Sizing Tool for 802.11 Wireless LANs

Project Team: Dr. Varsha Apte (IIT Bombay) & Gautam K. Deora (Tata Infotech Ltd.)

Project Duration: August 29, 2003 – November 29, 2003

Introduction

The networking world is rapidly moving towards 802.11 LANs as the preferred solution for “wire-free” connectivity, and tools and methods are therefore needed to help network designers appropriately size an organization’s wireless network. Although current WLAN deployments focus primarily on coverage, we expect that as the use of WLANs grows, capacity will also become a major design concern. An analogy can be drawn from cellular deployments, where coverage was the initial focus of service providers; with widespread adoption, capacity analysis is now a major design issue in all cellular deployments.

The focus of this project with IIT Bombay was to develop sizing methodologies for 802.11 wireless LANs, given a “high-level” description of an organization’s workload characteristics and the performance requirements of the users on the LAN. The primary focus was capacity sizing. (The focus was not on the “physical” sizing of the LAN, i.e. answering the question of where the access points should be placed.) The major goals set at the beginning of the project were to develop a methodology that would:

·  Accept as input, a high-level description of the workload profile generated by the users of the LAN.

·  Use analytical models that account for protocol overheads, and stochastic models that account for random phenomena (collisions, etc.), to predict the throughputs and delays of the LAN, and

·  Based on the models, derive sizing recommendations such as the number of access points, how many stations each access point can support, the mode in which they should operate, and settings for any configurable parameters (such as the 802.11 fragmentation threshold).

Literature Review

A significant amount of time was spent in understanding the 802.11 protocol and its functionality. A comprehensive overview of the protocol is provided in [1]. Many of the values for the physical parameters used in the simulation were obtained from the IEEE 802.11 standards [2]. Reference [3] was used to refresh the concepts of queueing theory and traffic modeling.

The framework of the analytical model presented in this report builds on the model proposed in [4], which provides a simple but nevertheless accurate analytical model for computing 802.11 Distributed Coordination Function (DCF) throughputs. The model assumes a finite number of terminals and ideal channel conditions. The analysis applies to both packet transmission schemes employed by DCF, namely the basic access and the RTS/CTS access mechanisms. The model concentrates on the “saturation throughput” as the fundamental performance figure, defined as the limit reached by the system throughput as the offered load increases. The saturation throughput (S) represents the maximum load that the system can carry in stable conditions. It is further assumed that in saturation conditions each station has a packet immediately available for transmission after the completion of each successful transmission. Moreover, since all packets are “consecutive,” each packet must wait for a random backoff time before transmission.

Proposed Model

The adjacent figure shows the conceptual network and specifies the various parameters used in the model.

There are N users in the network, with λ being the packet arrival rate at each user. However, at any given time, not all N users may be active and transmitting/receiving data on the network. The channel bit rate and the arrival rate λ are assumed to be the same for all users.

If p = probability of any station (user) being active, then the average number of active users (m) at any given time is given as

m = N·p (assuming p is the same for all users)

Also, if S(m) = saturation throughput with m active, saturated users on the network, the network service rate is given by μ(m) = S(m)/t_d, where t_d = mean duration of a payload transmission.

The model presented here aims at finding the optimum number of users N that can be supported by the network to obtain the maximum network throughput. The algorithm is presented below.

Algorithm for the analytical model:

Given: N, λ

Algorithm:

1.  Assume an initial value of p.

2.  m = N·p

3.  Calculate S(m) using the model in [4]; then μ(m) = S(m)/t_d.

4.  p_new = λ/(μ(m)/N). Here p_new is the station utilization, which is the same as the probability of a station being active: if λ is the rate at which packets arrive and μ is the rate at which the network services the packets, then the probability of a station being active is λ/μ.

5.  p = p_new.

6.  Repeat steps 2–5 till the values of p converge and stabilize.

It should be noted that the model is based on an iterative algorithm. There are actually two levels of iteration: one for the convergence of p, and another for the convergence of S(m). The algorithm for calculating S(m) is based on the model presented in [4].

It is also worth mentioning that the simple successive-substitution iteration method exhibited convergence problems for both S and p. Using the secant method for the iteration solved the convergence problem for the saturation throughput S. For p, however, convergence problems persisted even with the secant method, with the value of p (a probability measure) becoming negative at some iteration levels. To solve this, the iteration loop was performed on m (rather than p).
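The secant iteration mentioned above can be sketched as follows. This is a generic secant root-finder (the function and parameter names are illustrative, not from the report); to solve a fixed-point equation g(x) = x, one applies it to f(x) = g(x) − x.

```python
def secant(f, x0, x1, tol=1e-9, max_iter=100):
    """Find a root of f using the secant method, starting from the
    two initial guesses x0 and x1."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1 - f0) < 1e-15:   # flat secant: cannot make progress
            break
        # Next iterate: intersection of the secant line with the x-axis.
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1
```

Unlike plain successive substitution, the secant update uses the slope between the last two iterates, which is what restores convergence for the S computation.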

The final algorithm incorporated a number of other enhancements. The arrival rate λ was calculated from the offered load on the network: based on the bit rate and offered load, λ = offered_load_per_user × bitrate / payload.

Furthermore, it is important to note that when the arrival rate λ is less than the per-user network service rate μ(m)/N, the total network throughput depends on the arrival rate λ. However, when λ > μ(m)/N, the effective network throughput is determined by the service rate per user.

We notice that the algorithm requires the computation of the saturation throughput S(m), which is defined only for an integer number m of active users. However, with m = N·p, the value of m recalculated in each iteration can be a non-integer, so m would need to be rounded to the nearest integer. Such rounding is undesirable, however, since, for example, S(2.4) and S(2.01) would both evaluate to S(2). To make the model more accurate, a weighting (linear interpolation) process is followed, whereby the value of S is calculated as follows:

S(m) = (m_l + 1 − m)·S(m_l) + (m − m_l)·S(m_l + 1), where m_l = floor(m).

The above weighting methodology is intuitive when one considers an example, say S(2.3): m = 2.3 implies that 70% of the time there are 2 active users and 30% of the time there are 3 active users. Therefore S(2.3) ≈ 0.7·S(2) + 0.3·S(3) = (3 − 2.3)·S(2) + (2.3 − 2)·S(3), where m_l = floor(2.3) = 2.
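As a sketch, the interpolation step can be written as follows; the function names are illustrative, with S_int standing for the integer-argument saturation-throughput routine of [4].

```python
import math

def interpolated_S(S_int, m):
    """Linearly interpolate the saturation throughput between the two
    nearest integer user counts:
    S(m) = (m_l + 1 - m)*S(m_l) + (m - m_l)*S(m_l + 1)."""
    ml = math.floor(m)
    if m == ml:               # already an integer number of users
        return S_int(ml)
    return (ml + 1 - m) * S_int(ml) + (m - ml) * S_int(ml + 1)
```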

The final algorithm, based on the extensions and improvements discussed above, is presented below.

Modified algorithm

Given: N, Offered Load, bitrate, payload

Algorithm:

1.  Calculate λ = offered_load × bitrate / payload.

2.  Assume an initial value of m.

3.  Calculate S(m) from [4] using the secant iteration method.

4.  Calculate the service rate μ(m) = S(m)/t_d.

5.  p = λ/(μ(m)/N). Here p is the station utilization, which is the same as the probability of a station being active: if λ is the rate at which packets arrive and μ is the rate at which the network services the packets, then the probability of a station being active is λ/μ.

6.  m = N·p

7.  Repeat steps 3–6 using the secant iteration method till the values of m converge and stabilize.

8.  if λ < μ(m)/N,

normalized throughput = λ·N·payload/bitrate

else

normalized throughput = (μ(m)/N)·N·payload/bitrate

The above algorithm is repeated for various values of N. For each value of N, we get a normalized throughput. The value of N which gives the highest normalized throughput is the sizing recommendation for the given network.

For N less than the recommended value, the network is under-utilized; for N greater than the recommended value, the delays experienced by the packets keep increasing.
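The modified algorithm and the sweep over N can be sketched as follows. This is a minimal illustration, not the project's actual implementation: saturation_throughput is a placeholder for the model of [4], and for brevity the fixed-point loop uses plain successive substitution with a clamped utilization rather than the secant method the report describes.

```python
def size_network(N, offered_load, bitrate, payload, t_d,
                 saturation_throughput, max_iter=200, tol=1e-6):
    """One run of the modified algorithm for a fixed user count N.

    saturation_throughput(m) stands in for the Bianchi model of [4]:
    it takes a (possibly non-integer) number of saturated users m and
    returns the normalized saturation throughput S(m)."""
    lam = offered_load * bitrate / payload   # step 1: per-user arrival rate
    m = N / 2.0                              # step 2: initial guess for m
    for _ in range(max_iter):
        S = saturation_throughput(m)         # step 3
        mu = S / t_d                         # step 4: network service rate
        p = min(lam / (mu / N), 1.0)         # step 5: utilization, clamped
        m_new = N * p                        # step 6
        if abs(m_new - m) < tol:             # step 7: convergence check
            m = m_new
            break
        m = m_new
    mu = saturation_throughput(m) / t_d
    if lam < mu / N:                         # step 8: lightly loaded branch
        return lam * N * payload / bitrate
    return (mu / N) * N * payload / bitrate  # saturated branch

def recommend_N(N_range, **kwargs):
    """Sweep N and return the value with the highest normalized
    throughput, i.e. the sizing recommendation."""
    return max(N_range, key=lambda N: size_network(N, **kwargs))
```

With t_d taken as payload/bitrate, the saturated branch reduces to S(m) itself, so the normalized throughput stays in [0, 1] by construction.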

Framework for deciding the Fragmentation Threshold:

Large packets handed down from the Logical Link Control (LLC) layer to the Medium Access Control (MAC) layer may require fragmentation to increase transmission reliability. This is particularly true when the bit error rate experienced on the wireless channel is high.

Fragmenting the LLC packet into smaller chunks of data when channel conditions are poor ensures that the overall transmission overhead is lower than the overhead associated with retransmitting large data packets. Conversely, fragmenting the LLC packet when the channel is good adds unnecessary header overhead to the packets. Thus, depending on the channel conditions (bit error rate), one has to decide whether to perform fragmentation. In the 802.11 protocols, the LLC packet sizes are compared to the configurable Fragmentation_Threshold parameter. The appropriate Fragmentation_Threshold value needs to be identified to ensure that minimum overhead is introduced in the network for a given channel condition.

The framework to determine the fragmentation threshold for a given channel condition is presented below:

Given:

·  Channel bit error rate (e),

·  Allowable number of bit errors by the CRC error coding scheme (c)

Algorithm:

1.  Packet length L is given as L = min(D, F) + H

Where, D = Size of data packets,

F = Fragmentation Threshold,

H = Size of header

The packet is fragmented only if the data size D is greater than fragmentation threshold F.

Also, in case of fragmentation, the length of the last packet, Llast is given as

Llast = [D – floor(D/F)·F] + H, where floor(D/F) represents the total number of fragmented packets, excluding the last packet.

2.  If the CRC error-coding scheme allows for c bit errors, the probability of an L-bit packet being in error (assuming independent bit errors at rate e) is

p[packet error] = P{number of bit errors k > c} = 1 − Σ_{k=0}^{c} C(L, k)·e^k·(1 − e)^(L−k)

3.  The expected number of transmissions (including retransmissions) for each packet is 1/(1 − p), and 1/(1 − plast) for the last fragment.

4.  Therefore, the overhead for the D-bit data packet is given as

W = [ceil(D/F) − 1] × H/(1 − p) + H/(1 − plast)

The first term in the above expression represents the overhead of sending the header bits for the first ceil(D/F) − 1 fragments (equal to floor(D/F) when D is not a multiple of F), accounting for the expected number of transmissions and retransmissions, 1/(1 − p). The second term represents the overhead of transmitting and retransmitting the last fragment.
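Steps 1–4 can be sketched as follows, under the assumption (made explicit here, not in the report) that bit errors are independent, so the packet error probability is a binomial tail; function names are illustrative.

```python
import math

def packet_error_prob(L, e, c):
    """P[packet error] = P{number of bit errors k > c} for an L-bit
    packet with independent bit error rate e: one minus the binomial
    probability of at most c errors."""
    ok = sum(math.comb(L, k) * e**k * (1 - e)**(L - k)
             for k in range(c + 1))
    return 1.0 - ok

def fragmentation_overhead(D, F, H, e, c):
    """Expected header overhead W (in bits) for a D-bit data unit
    fragmented at threshold F with per-fragment header H."""
    L = min(D, F) + H                        # length of a full fragment
    p = packet_error_prob(L, e, c)
    if D <= F:                               # no fragmentation needed
        return H / (1 - p)
    # Last-fragment length; ceil(D/F) - 1 full fragments precede it
    # (this matches the report's floor(D/F) when F does not divide D).
    L_last = (D - (math.ceil(D / F) - 1) * F) + H
    p_last = packet_error_prob(L_last, e, c)
    return (math.ceil(D / F) - 1) * H / (1 - p) + H / (1 - p_last)
```

Evaluating fragmentation_overhead over a range of F values for a given e then yields the threshold with minimum expected overhead.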

Framework for deciding the RTS threshold:

The Request to Send (RTS)/Clear to Send (CTS) scheme in the 802.11 protocols was designed to combat the hidden-terminal problem, which occurs when pairs of mobile stations are unable to hear each other. Since the RTS/CTS scheme reduces the length of the frames involved in the contention process, it is very effective in terms of system performance, especially when large packets are considered.

Stations can choose to never use RTS/CTS, to use RTS/CTS whenever the packet size exceeds the configurable RTS_threshold parameter, or to always use RTS/CTS.

The framework presented in [4] can be used to quantify the threshold value for the packet size over which it is convenient to switch to the RTS/CTS mechanism. The framework is developed by comparing the saturation throughputs (S) for the basic scheme and the RTS/CTS scheme. The packet size at which the saturation throughput for the RTS/CTS scheme becomes greater than the saturation throughput for the basic scheme is the value at which the RTS_threshold parameter should be set.
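The threshold search amounts to a crossing-point scan, sketched below; S_basic and S_rts are placeholders for the saturation-throughput computations of [4] for the two access schemes.

```python
def rts_threshold(S_basic, S_rts, sizes):
    """Smallest packet size at which RTS/CTS outperforms basic access.

    S_basic and S_rts map a payload size to the saturation throughput
    of the respective access scheme. Returns None if RTS/CTS never
    wins over the given range of sizes."""
    for size in sizes:
        if S_rts(size) > S_basic(size):
            return size
    return None
```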

Results:

The following figures provide a graphical depiction of the analytical and simulation results. The analytical results are based on the model proposed in this document while the simulation results were obtained using the 802.11 simulation tool developed by Dr. Bianchi [4].

Figure 1: Normalized throughput versus the total number of users for different values of offered load per user. The results were obtained with a bit rate of 1 Mbps, maximum number of backoff stages m = 3, and initial window length W = 32.

The total load offered to the system was increased linearly with the number of users. Analyzing the results shown in Figure 1, we note that the normalized carried throughput follows the total offered load up to a point, beyond which it starts to decrease. This trend is expected: for a lightly loaded network (lower λ), the normalized carried throughput depends on the rate at which packets enter the system rather than on the service rate. Since the network is lightly loaded, the service rates are higher than the arrival rates, and hence λ dictates the network throughput. Beyond a point, however, the arrival rate λ increases to an extent where packets arrive faster than they can be serviced. Under such conditions, the normalized carried throughput depends on the network service rate and not on λ. Beyond this threshold point, packets start to queue for transmission since the service rates are less than λ, and the normalized carried throughput therefore closely follows the saturation throughput described in [4].

We also observe that as the offered load per user increases, the total number of users that can be sustained decreases, as expected.

The results predicted by the analytical model were compared with the results predicted by Bianchi’s simulation tool [4] for different traffic loads and data rates. As can be noted from the table below, there is reasonable agreement between the values, especially at higher traffic loads.

Data rate | W, m values    | Offered load/user | # of users (Analytical) | # of users (Simulation)
1 Mbps    | W = 32, m = 3  | 1 %               | 54                      | 67
          |                | 2 %               | 31                      | 34
          |                | 3 %               | 22                      | 23
          |                | 5 %               | 15                      | 15
          |                | 10 %              | 8                       | 8
          | W = 128, m = 3 | 1 %               | 70                      | 77
          |                | 2 %               | 38                      | 39
          |                | 5 %               | 17                      | 17
          |                | 10 %              | 9                       | 9
5.5 Mbps  | W = 32, m = 3  | 1 %               | 53                      | 63
          |                | 2 %               | 31                      | 32
          |                | 3 %               | 22                      | 22
          |                | 10 %              | 8                       | 8
          | W = 128, m = 3 | 1 %               | 67                      | 71
          |                | 2 %               | 36                      | 36
          |                | 3 %               | 25                      | 25
          |                | 10 %              | 11                      | 10
11 Mbps   | W = 32, m = 3  | 1 %               | 51                      | 58
          |                | 2 %               | 29                      | 30
          |                | 3 %               | 21                      | 21
          |                | 10 %              | 7                       | 7

Results for Fragmentation Threshold