Computer Security in Business

Chapter Contents:

Introduction

Section 1: Information Storage and Retrieval in Business

Section 1.1: A Brief History of the Computerization of Business

1.2 Business Data and Information: The New Definition of Business Records

1.3 The Cost of Computer Attacks on Business

1.4 Connectivity

Introduction:

These two brief items barely illustrate the information security risks that businesses face. Such crimes as wire fraud, identity theft, political terrorism, and industrial espionage, as well as non-criminal risks brought on by forces such as weather, fire, flood, and equipment breakdown cost American businesses billions of dollars every year.

Our lives have been dramatically changed by the evolution of computer technology and the increasing reliance placed on it to accomplish key business activities. As computing capabilities have increased at an exponential rate, business has been able to capitalize on these advances to deliver products and services better, cheaper and faster. One needs only to look back thirty to forty years to understand the difference modern computing capabilities make.

In 1960, taking a transcontinental flight from New York to Los Angeles involved no fewer than 11 manual processes:

  1. Passenger visited a travel agent to plan the trip;
  2. Travel agent researched and provided alternatives to the passenger;
  3. Travel agent called the airline to inquire whether a seat was available on a particular flight;
  4. Upon hearing that a seat was available, the travel agent asked and waited for pricing information to be provided by the airline;
  5. The passenger paid for the ticket using cash (yes, cash was a predominant form of payment then) or check;
  6. Travel agent wrote the ticket using a standard ticket form;
  7. Travel agent sent the collected fare to the airline;
  8. On the day of the flight, the passenger showed up at the airport and used the paper ticket for the flight, receiving a boarding pass and seat assignment at check-in;
  9. The passenger checked-in any baggage and received a hand-written claim check;
  10. The airport collected all of the tickets used that day and sent them to the airline’s headquarters, where a large department of clerical workers sorted the tickets, recorded information about them (e.g., price, destination) and reported this information to management;
  11. At the destination, if the passenger’s bag was lost, airline personnel telephoned and telexed (a type of telegram device) all stations where the airline operated to search for the missing bag.

All of this does not even take into consideration what was going on at the airline's headquarters, where there was a huge room with clerks taking calls from travel agents, checking availability in their seat inventory, completing update cards, and others updating flight lists to remove the seat(s) from inventory.

Today, it is quite common for a passenger to use computers to accomplish all of these same tasks. Using the Internet or other computerized systems, a passenger can:

1.  Research the best available fares on a preferred airline or on any airline;

2.  Select particular flights for the trip, as well as specific seat assignments and meal preferences;

3.  Reserve the flight selections and pay for the trip using a credit card;

4.  On the day of the flight, print a boarding pass at home or office;

5.  At the airport, check themselves in for the flight, check in baggage and receive a baggage receipt that is attached to the passenger’s flight record and carries an electronic tag that allows it to be tracked on the airline’s premises at any time;

6.  At the destination airport, if the passenger's bag was not delivered, an airline employee has only to access the baggage records to find the missing bag and determine when it will arrive.[1]

What’s more, the passenger's data (along with that of all others) is constantly available to the airline. This allows the airline to continuously monitor and change its inventory of seats for flights[2], and even calculate the profitability of the flight (direct revenues – direct cost) as soon as the plane backs away from the gate.
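The profitability calculation described above is simple arithmetic. A minimal sketch (the function and field names here are illustrative, not an actual airline system's schema):

```python
# Illustrative sketch: computing a flight's profitability as soon as the
# aircraft backs away from the gate. All names are hypothetical.

def flight_profitability(fares_collected, direct_costs):
    """Direct revenues minus direct costs for a single flight."""
    return sum(fares_collected) - sum(direct_costs)

# Example: three fares against fuel, crew, and landing-fee costs.
fares = [450.00, 320.00, 510.00]           # direct revenues per ticket
costs = [600.00, 400.00, 150.00]           # fuel, crew, landing fees
print(flight_profitability(fares, costs))  # 130.0
```

The point is not the arithmetic itself but that the data feeding it is now available electronically the moment the flight departs.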

What does this all mean from a security standpoint? It is the very innovations and advances in computing technologies that have heightened the risks that businesses face. Security is about safeguarding assets. Information security is about safeguarding data and information assets. In 1960, there was limited accessibility to the airline's data; information was largely controlled in the hands of people. The rise of computing technology and the resulting business innovation was enabled by the electronic storage of data that was no longer under human physical control. To make matters worse, the airline of today must control not only its own data but also that of consumers, such as credit card information, demographic data and the itineraries of previous trips.

Without data, we have no reason to maintain a system of information security. Business, along with other significant areas of our society such as health care, government and education, has been able to become more efficient and more effective using advancing technologies for data storage and manipulation. As these technologies become more complex and more reliant on data assets beyond human physical control, the need for effective security measures becomes more critical.

Unfortunately, this advance in technology also gives rise to more capable adversaries that prey on companies (as well as consumers), committing crimes such as theft, industrial espionage, and malicious damage in the name of just causes. It is because of them that the information security business generates billions of dollars of revenue each year, and why a discussion of the risks businesses face and the strategies they use to safeguard their assets is essential.

Section 1: Information Storage and Retrieval in Business

Section 1.1: A Brief History of the Computerization of Business

The early years

The advent of the modern computer is a relatively recent occurrence. Figure 1 shows a timeline of important dates in the development of the computer. Mathematicians and engineers have been designing tools that enable man to compute data faster as far back as the 1600s. The first significant modern use of a computing machine dates to 1890, with the invention of the punch card and tabulating machines by Herman Hollerith. The 1880 U.S. census had required an army of clerks working for seven years to compile all the data. Immigration was on the rise in the late 19th century. Without improvement, it was deemed impossible to tabulate the data from the 1890 census before having to begin collecting data for the 1900 census. Enter Hollerith.

Working for the Census Bureau, Hollerith, a newly graduated engineer from Columbia University, invented a system of punch cards to record the census data, and tabulators to accumulate results automatically. Using these machines, each census worker was able to process an average of 7,000 cards a day, each card representing one person. Hollerith left government service in 1894 to start a commercial company to exploit the use of his devices.

In the early part of the 20th century, various mechanical tabulating devices were developed, among them being the printing tabulator (1920), alphabetic tabulator and the multiplier (1932) and the comparative classifier (1937). Security over data at this time was effectively maintained by locking up the machines, tabulating cards and manual reports at night, and securing sensitive information in company safes.

The war years

These technologies were finding their way into businesses in the U.S. and Europe. However, they did not match the vision of a general-purpose computing machine conceived by the English mathematician Charles Babbage in the 1830s. In fact, it was not until the Second World War that a general computing machine was developed.


Figure 1

Demand for mass calculations was extreme on both sides of the war. Complicated military hardware was being produced; secret codes needed to be broken. Computational devices were needed to accurately aim large guns on naval ships and fly precision bombing runs. Both sides were using complicated codes and coding devices to transmit messages, codes beyond the capabilities of human decryption. Large-scale general computing machines, the size of large apartments, were developed on both sides of the Atlantic. The U.S. Navy commissioned a group at Harvard to develop the Mark I. In 1943, this machine could compute in less than a second the trajectory of a ship's gun, a calculation that had previously taken more than twenty hours by hand. Similarly, the U.S. Army commissioned the University of Pennsylvania, which in 1945 completed ENIAC. This machine could prepare in a matter of seconds a table of firing trajectories for field artillery that had been taking 176 humans and two mechanical computing machines three months to complete. While the Mark I and ENIAC accomplished similar tasks, they achieved their results using markedly different technology.

The Mark I relied predominantly on mechanical relays that acted on the input data; while it produced results much faster than manual calculation, the ENIAC achieved far greater speed by using only electronic valves for its logical computations. These valves, known as vacuum tubes, operated roughly 1,000 times faster than the Mark I's mechanical relays.

In Britain at this same time, the military was much more concerned with decrypting German secret codes quickly enough for the decoded messages to be useful to the Allies. In 1943, the first Colossus machine was built at Bletchley Park, the center of the British decryption efforts. This machine differed from the American machines in that it was the first large-scale computing machine to base its computing on logic, not merely high-speed calculation of mathematical data. It was the Colossus that enabled much of the German war code to be decrypted. It also proved that computers could manipulate alphabetic data as well as numerical data.

The post-war years

In the thirty years after the war until 1975, huge strides were made in adapting and extending computer capabilities into the business world. This period saw such landmark events as the development of the first commercial computer - UNIVAC - in 1951, two generations of IBM computers (1410 and 360/370), and the CDC 6600, developed in 1964 by Seymour Cray, which was the fastest computer at that time. Cray later went on to found Cray Research, which produced the Cray supercomputer. By the late 1960's and early 1970's it was common to find large and medium-sized businesses using one or more computers.

Much of the development of these machines was made possible by the development of components we take for granted today. Such things as magnetic tape, ferrite core memory ("core"), stored programs and the transistor enabled computers to be built that were faster, smaller and more powerful with each successive generation. Then, in 1960, the U.S. Department of Defense began investing in research projects that would transform computing forever.

Businesses had the same problem that every other user of computers had in the post-war years, including the federal government: while computers were useful where they were located, they could not be accessed by anyone outside of the location, nor could computers in two different locations share information. This meant that data had to be transported from remote sites to the computing sites for processing, and the results transported back. It also meant that far fewer computers were used throughout a business, due to the risks associated with data transportation and the inherent delays caused by these distances. The status of the "home office" took on even more importance as computers were most often located only at company headquarters.

The Advanced Research Projects Agency (ARPA) of the Defense Department established ARPAnet in 1969, the first large-scale computer network, allowing remote computers to share data. ARPAnet was the forerunner of the Internet. By developing and using methods such as network nodes, packet switching, and multiplexing, the original ARPAnet was able to link multiple computing centers together to work on advanced defense projects. However, in spite of restrictions to use the network only for research purposes, more and more people realized that they could use this network to communicate with others on the network through electronic mail, or e-mail. Soon other networks were developed around the country, including USEnet (University of North Carolina and Duke University), BITnet ("Because It's Time" - CUNY and Yale), and others in the private sector.

Additional network technologies were developed, such as TCP (1973), IP (1974), URLs, HTTP and HTML, that are still in use today and power the Internet, making it possible for dissimilar computers to communicate with one another, for messages to be routed to their proper destinations and for pages of information to be displayed in common formats. Once commercial networks were established, businesses could tap into the power of sharing information across their organizations, allowing computing to become much more pervasive in companies.
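The "common formats" these standards define can be seen in miniature in a few lines of code. A minimal sketch (constructing the messages only, with a placeholder address; no network access is attempted) using Python's standard library:

```python
# Sketch: two of the standards mentioned above - URLs and HTTP - in miniature.
from urllib.parse import urlparse

# A URL identifies the scheme, host, and resource path in one common format.
url = urlparse("http://www.example.com/flights/status?flight=101")

# An HTTP request is plain structured text that any web server can interpret,
# which is what lets dissimilar computers communicate.
request = (
    f"GET {url.path}?{url.query} HTTP/1.1\r\n"
    f"Host: {url.netloc}\r\n"
    "\r\n"
)
print(request)
```

Because both sides agree on these text formats, the kind of hardware or operating system at either end is irrelevant, which is precisely what made the Internet usable for business.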

In the past thirty years, advanced developments in the computer industry have been snapped up by businesses as fast as they can be brought to market. Such early innovations as the minicomputer, the microcomputer (PC) and large-scale packaged business applications, as well as more recent developments such as client-server computing and Internet-based applications, have largely been driven by commercial enterprises' demands for faster, more powerful and cheaper computing. It is precisely these new technologies, implemented in innovative ways, that make the need for enhanced information security an ever more important requirement today.

1.2 Business Data and Information: The New Definition of Business Records

As evidenced by the press release, businesses around the world face expensive attacks on their computer systems, many of which come from within the organization. In 2003, 77 percent of respondents to the 2003 CSI/FBI Computer Crime and Security Survey named disgruntled employees as a likely source of attack on their computer systems.[3]