ES154 Communications and Media 2001-2002
Lecture 13 / UNIVERSITY OF WARWICK
School of Engineering / D.D.Udrea

The Internet: History, Protocols, Services, Client-Server model

Aims and Objectives

-Give a brief history of the Internet

-Explain the concept of Internet protocols

-Explain the concept of the Client-Server model

-Give examples of Internet services

Internet Defined

The Internet (“interconnected networks”) is a global system of networked computers which can exchange information. The Internet is not centrally controlled by any one organisation, nor is it operated for profit. The computers connected to the Internet are called hosts. They are independent of each other and they communicate using a standard suite of protocols called Transmission Control Protocol/Internet Protocol (TCP/IP). The information exchange is based on a type of data transmission through the network called packet switching.

Internet Services

Many of the host computers on the Internet provide services to other computers connected to the Internet.

Computers that provide these services are called servers. Computers that benefit from these services are called clients. There are many services available over the Internet; some of the most popular are listed below, and a minimal client-server sketch follows the list:

-E-mail: Enables people to send private messages, as well as files, to one or more other people.

-WWW: The World Wide Web is a distributed system of inter-linked pages that include text, pictures, sound and other information.

-File transfer: Enables people to download any type of file from public file servers.

-Remote login: Enables people to connect to a remote computer and use its software.

-Voice and video conferencing: Enables two or more people to hear and see each other and share other applications.

-Online chat: Enables one or more people to send real-time messages and read each other’s messages.

-Newsgroups: Enable ongoing group discussions using a system of newsgroup servers to store messages to any of the newsgroups, which are identified by topic.
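
To make the client-server model concrete, here is a minimal sketch in Python (an illustration only, not part of the course software); the port number and the message are arbitrary choices, and both roles run on one machine for simplicity:

    import socket
    import threading
    import time

    def run_server(port=5000):
        # The server waits on a known port and answers each request.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("localhost", port))
            srv.listen(1)
            conn, _addr = srv.accept()       # block until a client connects
            with conn:
                request = conn.recv(1024)    # read the client's request
                conn.sendall(b"Re: " + request)  # reply: the "service"

    def run_client(port=5000):
        # The client initiates the connection and consumes the service.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect(("localhost", port))
            cli.sendall(b"hello")
            print(cli.recv(1024))            # prints b'Re: hello'

    threading.Thread(target=run_server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening
    run_client()

Every service listed above follows this pattern: a server process waits at a known address for requests, and clients connect to it when they need the service.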

Packet switching vs. Circuit switching

There are two ways of transmitting data through a network: circuit switching and packet switching.

Circuit switching is a type of communications in which a dedicated channel (or circuit) is established physically and doesn’t change for the duration of a transmission. The most common circuit-switching network is the telephone system, which links together wire segments to create a single unbroken line for each telephone call.

Packet switching is the other common communications method: it divides messages into packets and sends each packet individually. The packets may take different routes and may arrive out of order. On reaching their destination, the packets are put back into the correct order by a packet assembler. The Internet is based on a packet-switching protocol suite, TCP/IP.

Circuit-switching systems are ideal for communications that require data to be transmitted in real-time. Packet-switching networks are more efficient if some amount of delay is acceptable.
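
The following Python fragment is a toy illustration of the packet-switching idea (a sketch only, not a real protocol): a message is split into numbered packets, the packets are shuffled to mimic out-of-order arrival over different routes, and the receiver reassembles them by sequence number.

    import random

    MESSAGE = b"Packets may take different routes through the network."
    PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for the demo)

    # Sender: divide the message into (sequence number, payload) packets.
    packets = [(seq, MESSAGE[i:i + PACKET_SIZE])
               for seq, i in enumerate(range(0, len(MESSAGE), PACKET_SIZE))]

    # Network: each packet travels independently and may arrive out of order.
    random.shuffle(packets)

    # Receiver (the "packet assembler"): sort by sequence number and rejoin.
    reassembled = b"".join(payload for _seq, payload in sorted(packets))
    assert reassembled == MESSAGE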

Controlling devices, instruments and virtual instruments on the Internet

With the continued expansion of the Internet, people are starting to realise that the medium can be successfully used not only as a data communication mechanism, but also as a platform for controlling processes regardless of geographical location.

Moreover, users should be able to access information not only from a computer, but also from other systems, such as industrial machines, home security systems and measurement instruments, thanks to the platform independence that Internet technology brings.

This leads to four categories of possible applications: remote monitoring, remote control, collaboration and distributed computing.
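
As a sketch of the remote-monitoring category, the Python fragment below polls a hypothetical measurement instrument that publishes its latest reading over HTTP. The address and interface are invented for illustration; a real instrument would define its own.

    import time
    import urllib.request

    INSTRUMENT_URL = "http://192.168.0.50/temperature"  # hypothetical address

    def poll_instrument(url=INSTRUMENT_URL, interval=5.0, samples=3):
        # Fetch the instrument's current reading a few times, pausing
        # between polls; any HTTP client on any platform could do this.
        for _ in range(samples):
            with urllib.request.urlopen(url) as response:
                print("latest reading:", response.read().decode())
            time.sleep(interval)

    # poll_instrument()  # would run against a real instrument at that URL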

History of the Internet (for reference only – not for exam)

The idea of connecting computers together was first established in the 1960s, when data needed to be transferred between several computers at universities and research institutions in the United States [Travis, Chapter 2]. The idea was to connect the computers together in such a way that if a connection was broken, an alternative route could be found so that the network was still usable. This network proved to be very successful and in the 1970s it was expanded to include five supercomputers, each of which in turn was connected to a series of smaller departmental networks. As a result, the term “interconnected network”, or Internet, was born. For almost two decades the Internet was mainly used by universities and government agencies but, during the 1990s, more companies began to connect their internal LANs to the Internet as a way to share information between branch offices and other organisations. As more users became connected, higher bandwidths were needed to maintain the same rate of data transmission.

As more and more users wanted to take advantage of the Internet, people wanted to use it on their home computers. Since the average home user could not afford a network connection to the Internet, modems were introduced to allow users to connect via an Internet Service Provider (ISP). Although modems are commonplace among home users today, it should be remembered that phone lines were never meant for connecting computers to the Internet and they still present significant bandwidth limitations.

It is important to remember that during its first decades, the Internet was a text-based network in which users had to enter commands at prompts (similar to the DOS or UNIX prompts) in order to connect to other computers or to share data. It was the physicist Tim Berners-Lee at CERN who developed the World Wide Web and its first graphical user interface, so that particle-physics data could be displayed and shared with greater efficiency and user-friendliness.

What makes the Internet so unique today is that there is no central controlling agency or organisation that determines its course. Because it is literally a set of “interconnected networks”, virtually any organisation or person can connect their own network (or just a computer, or a device) into this huge network, thereby expanding the Internet a little more. With the introduction of mobile phones using WAP (Wireless Application Protocol, see later) technology and new web appliances, the Internet is set to expand at a continually increasing rate.

Internet Timeline (for reference only – not for exam)
1961 / Leonard Kleinrock (MIT) publishes the first paper on packet switching theory, which forms the basis for the creation of the Internet.
1964 / The Rand Corporation, funded by the US Air Force, produces a report on distributed computer networks, which describes the advantages of distribution compared with the centralised mainframe systems typical of the day. One of the most important features was the continued functioning of the system regardless of whichever part of it might be destroyed in an enemy attack.
1965 / Ted Nelson invents the term Hypertext to describe links to other texts embedded in a text. He later designs a worldwide hypertext network called Xanadu, but it is not until 25 years later that Tim Berners-Lee makes hypertext available to all network users with his World Wide Web project.
1967 / The United States Department of Defense Advanced Research Projects Agency (DARPA) launches the ARPANET project to design a distributed computer network.
1969 / The ARPANET network is set up, creating the basis for the Internet. The backbone of the ARPANET consisted of packet-switching computers, called IMPs (Interface Message Processors), connected by, for the time, superfast 56 Kbit/s lines. Conventional computers with appropriate communications software were then connected to these IMP nodes.
There were four IMP nodes: University of California at Los Angeles (UCLA), Stanford Research Institute (SRI), University of California, Santa Barbara (UCSB) and the University of Utah. All the computers used different operating systems and they were able to talk to each other across the network with equal status.
1972 / ARPANET becomes international, with nodes in Europe at the University College London (UCL) and the Royal Radar Establishment in Norway. The total number of nodes was 23, mostly being American universities, military establishments and large companies. In Europe, however, network research did not really evolve into true network services, partly as a result of the policies adopted by telecom monopolies.
1972 / The predecessors of Internet electronic mail are introduced into the ARPANET.
During the 1970s, research on the ARPANET is carried out in the USA. This produces TCP/IP, the basis of the Internet.
1970s / Internetworking Protocols, gateways, other networks, services
1976 / UUCP (Unix to Unix CoPy)
1978 / USENET started using UUCP. Newsgroups emerged from this development.
1979 / Packet Radio Network (PRNET) experiment starts with ARPA funding. Most communications take place between mobile vans.
1980 / The TCP/IP data transmission protocol is adopted as the US Department of Defense's (DoD) official network standard.
1980 / CSNET connects the computer science departments of US universities.
1981 / BITNET forms the first network available to the broad academic user community and students.
1981 / Software for e-mailing, mailing lists and discussion, such as LISTSERV and RELAY (the original prototype for Internet Relay Chat, IRC), is developed. KERMIT makes it possible to transfer files reliably and to establish data-terminal connections with almost any mainframe or microcomputer via a slow serial cable. The Smartmodem 300 modem comes onto the market.
1982 / EARN, European counterpart of BITNET
1982 / EUnet (European UNIX Network) is created by EUUG to provide E-mail and USENET services. Original connections between the Netherlands, Denmark, Sweden, and UK
1982 / The Exterior Gateway Protocol (EGP) specification is used for gateways between networks of different architectures.
1983 / ARPANET adopts TCP/IP and the open system architecture and becomes the backbone of the Internet.
1983 / The Domain Name System is developed, so that everyone no longer needs a hosts.txt file to serve as an address book. A name is much easier to remember than a numeric address such as 123.456.789.10.
1984 / SUN releases its Network File System (NFS), which allows workstations to use hard disks on servers via a network as they would local disks.
1985 / The National Science Foundation in the USA funds the setting up of the 56 Kbit/s NSFNET network, which uses Fuzzball computers to link US supercomputer centres together as part of the Internet.
1986 / NNTP (Network News Transfer Protocol), which is more interactive than UUCP (Unix to Unix CoPy), is developed for distributing news to Internet newsgroups, allowing news to be transmitted any number of times.
1986 / The Internet Engineering Task Force (IETF) and Internet Research Task Force (IRTF) are set up as collaborative forums for standardising Internet protocols and for research.
1986 / Diginet, a 64 Kbit/s voice-and-data network, is launched as a predecessor to ISDN.
1987 / BITNET spans over 1000 mainframes and the Internet over 10,000.
1988 / Robert Morris' Internet Worm program spreads through the Internet, automatically copying itself. 10% of hosts are affected. As a consequence, people begin to pay more attention to Internet security.
1988 / The basic NSFNET network is updated to use T1 (1.5 Mbit/s) lines.
1988 / IRC (Internet Relay Chat) is launched for Unix.
1990 / Tim Berners-Lee's proposal, "WorldWideWeb: Proposal for a HyperText Project" is approved by CERN. Prototype versions of a graphical WWW browser-editor for the NeXT computer and of a line-mode interface that works on all terminals are produced, i.e. the Web is born.
1990 / The NIC.FUNET.FI FTP archive for freely distributable files is set up. One model for this was the ARPANET's Network Information Center, the NIC.DDN.MIL server, which distributed Internet documents, but the service was later extended to become one of the world's biggest archives of public-domain software, partly through the popularity of Linux.
1991 / CERN releases the World Wide Web globally and Minnesota University its rival system, Gopher. Gopher was initially more popular because of its simplicity. It was basically a distributed hierarchical menu system for browsing files, and could be used without converting text documents to HTML format. WWW browsers were also able to call up gopher and ftp servers, if required, thus being able to access other network services from the same interface.
1991 / In the USA, the NSFNET brings in T3 (44.736 Mbit/s) links on the Internet backbone.
1991 / NSF policies restricting commercial use are lifted and the Commercial Internet eXchange Association, Inc is formed.
1992 / Over 26 Web servers are in use and over a million computers are connected to the Internet.
1992 / The Internet Society (ISOC) and RIPE NCC are founded.
1992 / The first MBONE transmissions: sound in March and video in November.
1992 / Jean Polly writes her article "Surfing the Internet: an Introduction", thus coining the phrase 'surfing the Net'.
1993 / Computers start surfing the Web automatically to collect information for search engines. Use of the WWW in the USA expands by 341,634%, thanks to the easy-to-use graphics-based NCSA Mosaic browser, and the Internet's takeover of the world begins in earnest.
1994 / The three-dimensional modelling language, VRML, is created.
1994 / One of the Internet's biggest and best-known subject indexes comes on-line.
1994 / A standard is agreed for 28.8 kbit/s V.34 modems.
1994 / Netscape Navigator is launched and becomes the most popular web browser.
1995 / SUN Microsystems releases the platform-independent Java language and the HotJava browser written using it. Applets written in Java can add animations and various interactive features to Web pages.
1995 / Use of the WWW overtakes FTP in the NSFNET statistics, i.e. it is now the world's most frequently used Internet application.
1995 / Microsoft Internet Explorer and other Microsoft Internet applications are launched with Windows 95.
1996 / SUN releases its JavaStation network computer, which runs Java programs straight from the network. The aim is to produce an intelligent Internet terminal that does not require local software maintenance. Enthusiasts for network computers include, for example, the makers of major terminal-operated database environments such as IBM and Oracle, since they could replace millions of ageing mainframe terminals in situations where there is no need for an expensively maintained and complicated microcomputer. There are also plans to use network computers to create a simple Internet user interface for domestic televisions and other appliances.
1997 / The World Wide Web Consortium publishes version 4.0 of the HTML language used to create web pages. This includes multimedia features, UNICODE support for displaying the world's various languages, and features that help people with disabilities use the Net.
1997 / The Internet2 project is announced in the US to develop, within two years, new Internet services for the research community, such as interactive TV, videoconferencing and remote presence for teaching and research. For this collaboration, the research community began to construct new Internet connections, which initially ran at 622 Mbit/s, increasing to 2.4 Gbit/s at the beginning of 1999.
1998 / The World Wide Web Consortium releases the specifications for XML (Extensible Markup Language) version 1.0, which will make it easy to expand future web pages.
2000 / Web size estimates surpass 1 billion indexable pages and 100 million hosts.
next / The Resource Description Framework (RDF) integrates a variety of applications from library catalogs and world-wide directories to syndication and aggregation of news, software, and content to personal collections of music, photos, and events using XML as an interchange syntax. The RDF specifications provide a lightweight ontology system to support the exchange of knowledge on the Web.
next / SVG (Scalable Vector Graphics): at last, graphics which can be rendered optimally on devices of all sizes.
next / The user-interface world is rapidly becoming competent at voice input and output, and W3C has standards in that area coming along.
next / XML Signature will let you digitally sign XML documents.

Internet Statistics

How many people are online worldwide

DATE / NUMBER / % OF WORLD POPULATION / SOURCE
August 2001 / 513.41 million / 8.46 / Nua Ltd
August 2000 / 368.54 million / 6.07 / Nua Ltd
August 1999 / 195.19 million / 4.64 / Nua Ltd
September 1998 / 147 million / 3.6 / Nua Ltd
November 1997 / 76 million / 1.81 / Reuters
December 1996 / 36 million / 0.88 / IDC
December 1995 / 16 million / 0.39 / IDC

Regional breakdown (August 2001, Nua Ltd):

REGION / TOTAL
World Total / 513.41 million
Africa / 4.15 million
Asia/Pacific / 143.99 million
Europe / 154.63 million
Middle East / 4.65 million
Canada & USA / 180.68 million
Latin America / 25.33 million
Commercial activities vs. income (billion USD):

ACTIVITY / 1996 / 1997 / 1998 / 2001 / 2002
Internet Access / 1.21 / 1.21 / 5.53 / n/a / n/a
Web Hosting & Security / 0.17 / 0.17 / 0.99 / n/a / n/a
Electronic Commerce / 0.01 / 0.01 / 0.24 / > 40.00 / est. 300.00

Note: The Internet is distributed by nature. This is its strongest feature, since no single entity is in control, and its pieces run themselves, co-operating to form the network of networks that is the Internet. However, because no single entity is in control, nobody knows everything about the Internet. Measuring it is especially hard because some parts choose to limit access to themselves to various degrees. So, instead of measurement, we have various forms of surveying and estimation. (John Quarterman of Matrix ID Science/Matrix NetSystems Inc.)

TCP/IP

TCP/IP is the protocol suite which defines the Internet.

It is independent of the lower layers, e.g. the ‘physical layer’. As a result, the TCP/IP protocol suite creates a logical network view and can work over Ethernet, wireless radio, or a modem. The relative independence of this protocol suite also allows platform independence, since all major operating systems support TCP/IP (e.g. Windows, MacOS, Unix).

IP defines its own packet format, which carries IP addresses rather than hardware addresses; this packet is called an IP datagram. How can the datagram be transmitted across a physical network that does not understand its format? The entire IP datagram is placed inside the data area of a hardware frame, and the hardware does not examine or change the content of that data area. This technique is called encapsulation.
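
A much-simplified sketch of encapsulation in Python (the header layouts are toy versions, not the real IP or Ethernet formats): the IP datagram is built first, then placed unchanged into the data area of a hardware frame.

    import struct

    def make_ip_datagram(src_ip, dst_ip, payload):
        # Toy "IP header": two 4-byte addresses in network byte order.
        header = struct.pack("!4s4s",
                             bytes(int(x) for x in src_ip.split(".")),
                             bytes(int(x) for x in dst_ip.split(".")))
        return header + payload

    def make_frame(src_mac, dst_mac, data):
        # Toy hardware frame: two 6-byte MAC addresses, then the data area.
        # The hardware treats `data` as opaque bytes; the IP datagram
        # inside is neither examined nor changed.
        return dst_mac + src_mac + data

    datagram = make_ip_datagram("192.168.0.1", "192.168.0.2", b"hello")
    frame = make_frame(b"\xaa" * 6, b"\xbb" * 6, datagram)
    assert frame[12:] == datagram  # the datagram survives intact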