
CHAPTER 9 

CONCLUSIONS AND RECOMMENDATIONS

9.1  Conclusion

The contribution of this research is the development and simulation of load-balancing algorithms for wide-area networks of replicated web servers. Based on the algorithms developed and the simulation results obtained, the following conclusions can be drawn.

(1)  The LBA-I and LBA-I-1 algorithms are simple and feasible. Because they generate no overhead and do not depend on any message period, they give the best performance when the transmission delay dominates the request response time.

(2)  The LBA-II algorithm is simple and feasible. However, it generates heavy overhead due to the communication among the LBAs, which increases the transmission delay and the request response time. The LBA-II-1 algorithm reduces this overhead and the transmission delay, and thereby improves the performance. When the web queuing delay is the dominating factor of the response time, LBA-II-1 performs better than the LBA-I and LBA-I-1 algorithms.

(3)  The LBA-III algorithm is not feasible because it generates the heaviest overhead, required for the communication among the web servers and the LBAs. It has the worst performance of all the load-balancing algorithms developed in this research, and its performance depends on the period at which the path information is probed.

(4)  The LBA-IV-1, LBA-IV-2, and LBA-IV-3 algorithms are not feasible when the transmission delay dominates the request response time, because of the heavy overhead required for the communication among the web servers and the LBA; this overhead increases the transmission delay and the request response time. Their performance depends on the period at which the web server load information is supplied. The LBA-IV-1(2), LBA-IV-2(2), LBA-IV-3(2), LBA-IV-1(3), LBA-IV-2(3), and LBA-IV-3(3) algorithms can reduce the starting errors, but still perform poorly when the transmission delay is the dominating factor of the response time. When the web queuing delay dominates, however, these algorithms perform better.

(5)  The LBA-IV-1(E), LBA-IV-2(E), and LBA-IV-3(E) algorithms reduce the overhead required for the communication among the web servers and the LBA. However, because the pattern of client requests is highly variable, the collected web server performance information quickly becomes obsolete and does not improve the prediction of which web server resources are likely to be available. These algorithms are therefore not feasible, and their performance depends on the precision with which future web server performance is predicted.

(6)  The LBA-IV-1(Tc), LBA-IV-2(Tc), and LBA-IV-3(Tc) algorithms significantly reduce the overhead. They are stable and feasible, and perform well when the threshold varies within a range.
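The overhead reduction in conclusion (6) comes from reporting load only on significant change. The sketch below is illustrative, not the thesis implementation: the class name, the exact reporting rule, and the numeric values are assumptions chosen to show why a threshold Tc bounds the message traffic between a web server and its LBA.

```python
# Hypothetical sketch of a threshold-triggered load-update policy in the
# spirit of the LBA-IV(Tc) algorithms: a web server sends a load update to
# the LBA only when its load has drifted by more than Tc since the last
# report, so small fluctuations generate no messages at all.

class ThresholdReporter:
    def __init__(self, tc):
        self.tc = tc                 # threshold Tc on the load change
        self.last_reported = None    # load value last sent to the LBA
        self.messages_sent = 0       # overhead counter

    def observe(self, load):
        """Return the load update to send to the LBA, or None to stay silent."""
        if self.last_reported is None or abs(load - self.last_reported) > self.tc:
            self.last_reported = load
            self.messages_sent += 1
            return load
        return None

reporter = ThresholdReporter(tc=0.2)
updates = [reporter.observe(load) for load in [0.10, 0.15, 0.40, 0.45, 0.90]]
# Only the first observation and drifts larger than Tc trigger a message,
# so three messages cover five load samples.
```

Raising Tc trades reporting overhead against staleness of the load information at the LBA, which is consistent with the observed dependence of performance on the threshold range.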

The following conclusions are related to the network design issues:

(7)  At the same frequency of client requests, reducing the document size reduces the transmission delay.

(8)  Load-balancing performance can be optimized by choosing the right ratio of web servers to LBAs.

(9)  Load-balancing performance can be optimized by choosing the right ratio of web server processing power to path bandwidth.

(10)  Path bandwidth is more important than web server processing power to load-balancing performance.

(11)  The locations of web servers affect the load-balancing algorithm performance.
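One way to read conclusions (7), (9), and (10) is through a simple response-time decomposition: a transmission term that scales with document size over path bandwidth, plus a queuing term at the server. The M/M/1 queuing term and every parameter value below are assumptions chosen only to show the shape of the trade-off, not figures from the simulations.

```python
# Toy response-time model: serialization of the document over the path,
# plus the mean M/M/1 time in system at the web server. Rates are in
# requests per second; bandwidth in bits per second.

def response_time(doc_bits, bandwidth_bps, service_rate, arrival_rate):
    assert arrival_rate < service_rate, "server queue must be stable"
    transmission = doc_bits / bandwidth_bps          # conclusion (7): scales with size
    queuing = 1.0 / (service_rate - arrival_rate)    # server-side delay
    return transmission + queuing

base = response_time(1e6, 1e6, service_rate=20.0, arrival_rate=10.0)
more_bw = response_time(1e6, 2e6, service_rate=20.0, arrival_rate=10.0)      # double bandwidth
faster_cpu = response_time(1e6, 1e6, service_rate=40.0, arrival_rate=10.0)   # double server speed
smaller_doc = response_time(5e5, 1e6, service_rate=20.0, arrival_rate=10.0)  # halve document size
```

In this transmission-dominated setting, doubling the path bandwidth or halving the document size cuts the response time far more than doubling the server speed does, which is the shape of conclusions (7) and (10); when the queuing term dominates instead, the comparison reverses, matching conclusion (9)'s point about balancing the two resources.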

9.2  Recommendations

The following are recommendations for further work.

(1)  The load-balancing algorithms were simulated on one real network, the New-J network, and on two abstract networks, r30 and r50. More real and larger networks should be used to test the developed load-balancing algorithms.

(2)  The developed load-balancing algorithms were tested by simulation. The algorithms with good performance should be tested and evaluated by experiments on real networks.


Appendix A RANDOM R50 NETWORK TOPOLOGY

from-node  to-node  distance (km)  bandwidth (Mb)
N00 N01 560 2.0
N00 N03 340 8.0
N00 N23 960 1.5
N00 N30 640 1.0
N00 N40 600 10.0
N00 N42 800 5.0
N00 N43 400 1.9
N01 N00 560 1.9
N01 N12 940 3.0
N01 N21 120 5.0
N01 N25 340 6.0
N01 N26 540 6.0
N01 N33 440 2.0
N01 N34 600 8.0
N01 N40 380 2.0
N02 N04 200 12.0
N02 N07 360 1.6
N02 N09 720 3.0
N02 N12 500 8.0
N02 N15 320 1.7
N02 N22 500 12.0
N02 N27 560 5.0
N02 N42 780 3.0
N03 N00 340 1.4
N03 N08 220 1.3
N03 N09 620 3.0
N03 N12 540 2.0
N03 N18 200 1.7
N03 N20 320 1.9
N03 N29 700 1.6
N03 N41 640 8.0
N03 N49 240 7.0
N04 N02 200 12.0
N04 N07 160 1.9
N04 N10 440 10.0
N04 N12 560 1.3
N04 N22 460 8.0
N04 N23 640 2.0
N04 N33 460 1.6
N04 N41 560 1.5
N04 N46 360 4.0
N04 N48 540 12.0
N05 N10 920 3.0
N05 N17 380 1.4
N05 N19 420 1.4
N05 N21 120 5.0
N05 N24 880 2.0
N05 N40 460 12.0
N05 N42 560 1.4
N05 N44 500 9.0
N05 N45 360 8.0
N06 N09 720 5.0
N06 N15 260 3.0
N06 N24 280 1.8
N06 N36 440 1.8
N06 N42 800 2.0
N07 N02 360 4.0
N07 N04 160 2.0
N07 N38 340 10.0
N07 N45 220 1.9
N07 N46 460 1.7
N08 N03 220 1.6
N08 N10 440 11.0
N08 N12 340 3.0
N08 N18 200 9.0
N08 N22 380 7.0
N08 N24 100 1.0
N08 N35 160 3.0
N08 N41 600 5.0
N08 N46 300 9.0
N09 N02 720 7.0
N09 N03 620 6.0
N09 N06 720 11.0
N09 N20 940 10.0
N09 N22 580 11.0
N09 N36 280 11.0
N09 N38 460 7.0
N09 N40 240 2.0
N09 N45 280 1.4
N10 N04 440 9.0
N10 N05 920 10.0
N10 N08 440 4.0
N10 N15 200 5.0
N10 N17 800 1.5
N10 N22 800 1.7
N10 N29 1040 1.0
N10 N34 680 7.0
N10 N39 480 1.7
N10 N42 1080 12.0
N10 N43 160 9.0
N10 N45 700 5.0
N10 N46 720 2.0
N11 N12 740 7.0
N11 N14 140 4.0
N11 N19 940 1.8
N11 N20 1080 1.9
N11 N35 940 1.9
N11 N39 820 3.0
N12 N01 940 10.0
N12 N02 500 2.0
N12 N03 540 1.4
N12 N04 560 1.6
N12 N08 340 2.0
N12 N11 740 1.9
N12 N25 620 5.0
N12 N26 440 11.0
N12 N28 500 1.8
N12 N38 380 7.0
N12 N41 560 1.4
N13 N15 400 7.0
N13 N17 480 2.0
N13 N25 440 6.0
N13 N27 640 5.0
N13 N28 360 1.3
N13 N41 540 11.0
N13 N42 600 10.0
N13 N46 420 1.8
N14 N11 140 1.4
N14 N15 940 1.8
N14 N30 400 1.3
N14 N34 360 7.0
N14 N38 400 11.0
N15 N02 320 2.0
N15 N06 260 1.7
N15 N10 200 1.6
N15 N13 400 8.0