Pikeville High School Academic Team
The Big List of Computers and Electronics Terms
Introduction
A computer is a machine that stores a set of instructions (a program) for
processing data and then executes those instructions when requested. Computers
have many applications for businesses, researchers, and students.
A computer system consists of hardware (the physical components of the system)
and software (the instructions for the machine to follow). The hardware
consists of a unit that reads instructions and determines how to execute them
(called the central processing unit, or CPU), a unit where data can be stored
for later retrieval (called the memory), and devices for input (where the
computer obtains information from the outside world) and output (where the
computer presents information to the outside world). A keyboard is an example
of an input device; printers and CRT screens (called monitors) are output
devices. Some devices (such as disk drives and cassette tape units) can be used
for both input and output. Computers can transmit and receive data from other
computers by sending signals over telephone lines. A device called a modem can
be connected to a computer to transform signals from the computer into signals
that can be transmitted over the telephone line.
Modern digital computers are made of electronic components. The first
electronic computer, built in 1946, was a huge machine consisting of vacuum
tubes. Transistors replaced the vacuum tubes in computers built during the
1950s, making it possible to build computers that were smaller, more reliable,
and less expensive. The development of integrated circuits, consisting of many
electronic components on a single silicon chip, has made it possible to make
computers even smaller and less expensive. This section contains descriptions
of many of the electronic components that make up computers.
Computers can be classified into categories based on size and capacity.
Supercomputers are very fast computers with large memories used at research
laboratories. Mainframe computers are the large computers typically used in
businesses and other organizations. A single mainframe computer can be used
simultaneously by many different people working at terminals. Minicomputers are
smaller than mainframe computers but can still support multiple users.
Microcomputers, first built in the mid-1970s, are computers where the entire
CPU is contained on a single integrated circuit (called a microprocessor).
Microcomputers are small enough and inexpensive enough for individuals to be
able to afford them. The development of more powerful microprocessor chips has
made it possible to steadily increase the speed and memory capacity of
microcomputers.
Computers can execute many different types of software (programs). To begin
with, a computer must have a basic software system called the operating system
that allows it to start operation and then read in other software. Application
software packages perform tasks such as word processing, database management,
spreadsheets, graphics, interactive learning, and communication. Some software
is designed for a single purpose; other software can be used for a much wider
variety of purposes. General-purpose packages provide more flexibility, but
they can be more complicated to learn because you need to specify exactly what
you want done.
As computers became more common, increased effort was directed towards making
them more user-friendly. An important step was the development of a Graphical
User Interface (GUI), where the user can see menus of choices on the screen,
often in the form of icons (pictures), and can point to items with a mouse.
Macintosh computers and the program Microsoft Windows provide a GUI for their
users. These computers allow the display of graphics mixed with text, and the
appearance of the text itself can be altered by changing its size or style.
These capabilities led to the development of desktop publishing systems in the
mid-1980s, in which the computer is used for the page layout process.
The graphics capabilities of microcomputers have greatly expanded, allowing
users to see high-resolution color pictures on the screen and explore other
types of visual information, such as maps. Also, computers have developed
improved sound capabilities. For example, it is possible to obtain a multimedia
encyclopedia, which contains text, pictures, and sound. These capabilities
require computers with large storage capacity, so CD-ROMs have become an
increasingly common way of storing computer information.
Many computer programs exist for specialized purposes, such as performing
accounting for a business, performing statistical calculations, designing
electronic circuits, forming models of the economy, making graphs, or creating
architectural designs. Computers also can be used as an interactive teaching
tool: presenting information on the screen, asking questions, reading in the
answers from a student, informing the student whether the answer was correct,
and then moving on to new information at the pace the student can handle. The
most advanced type of computer programs fit within the category of artificial
intelligence, which consists of programs where a computer simulates human
thinking. For example, artificial intelligence includes the study of ways to
make a computer understand human languages, such as English. However, this is a
very difficult problem that has not been solved.
You may write your own computer programs if you learn a computer programming
language such as BASIC, Pascal, or LISP. To use a programming language, you
must have a compiler or interpreter for that language.
Programming languages generally include the following features (illustrated in
the short sketch after this list):
--provisions for operations on data, such as arithmetic calculations or
manipulations of strings of characters.
--the use of variables to represent data.
--iteration, making it possible for a set of instructions to be executed
repeatedly, either a fixed number of times or else indefinitely until a
particular condition is met.
--arrays, collections of data items that share a single name and are
identified by subscripts.
--the ability to write subprograms (subroutines or procedures), making it
possible to build complicated programs from separate modules (without which
long programs would be difficult for people to manage).
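The sketch below shows each of these features in a few lines. It is written in
Python, a language not covered in this list, purely for illustration; BASIC and
Pascal provide the same features with different notation.

    # A minimal sketch (for illustration only) of the features listed above.

    def average(values):            # a subprogram (procedure/function)
        # Return the average of a list of numbers.
        total = 0
        for v in values:            # iteration over every element
            total = total + v       # arithmetic on data held in variables
        return total / len(values)

    scores = [88, 92, 75, 90]       # an array of data items with one name
    print("First score:", scores[0])     # an element identified by a subscript
    print("Average:", average(scores))   # calling the subprogram

    greeting = "Hello" + ", " + "world"  # manipulation of character strings
    print(greeting)

    n = 1
    while n * n < 50:               # iteration until a condition is met
        n = n + 1
    print("Smallest n with n*n >= 50 is", n)
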
When writing a computer program, it is best to plan the program carefully.
Develop a strategy for solving the problem. (A clearly specified procedure for
solving a particular problem is called an algorithm.) Then proceed to write the
program. A large program should be designed as a collection of smaller modules
because it is easier for a person to understand the program in that form. Some
languages, such as Pascal, are designed so that programs consist of subparts
(called procedures).
Computer programs often have errors (called bugs), and debugging a program is
an important part of programming. Bugs can be either syntax errors (meaning that
the program does not follow the rules that determine which statements are legal
in that language) or logic errors (meaning that the program does not accomplish
what it is intended to accomplish).
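As a small illustration (again in Python, used here only as an example
language), the commented-out line below contains a syntax error, while the
function that follows contains a logic error: it runs without complaint but
computes the wrong answer.

    # Syntax error: this statement breaks the language's rules, so it is
    # rejected before the program ever runs (shown here as a comment):
    #     print("total is" total)        <- missing comma between the items

    # Logic error: this function runs, but it does not do what was intended.
    def average(values):
        total = 0
        for v in values:
            total = total + v
        return total / (len(values) - 1)   # bug: should divide by len(values)

    print(average([10, 20, 30]))   # prints 30.0 instead of the intended 20.0
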
The section that follows includes brief descriptions of several computer
programming languages, as well as more detailed descriptions of two common
languages: BASIC and Pascal.
Algorithm An algorithm is a sequence of instructions that tell how to solve a
particular problem. An algorithm must be specified exactly, so there can be no
doubt about what to do next, and it must have a finite number of steps. A
computer program is an algorithm written in a language that a computer can
understand, but the same algorithm could be written in several different
languages. An algorithm can also be a set of instructions for a person to
follow. (See Flowchart.) A set of instructions is not an algorithm if it does
not have a definite stopping place, or if the instructions are too vague to be
followed clearly. The stopping place may be at variable points in the general
procedure, but something in the procedure must determine precisely where the
stopping place is for a particular case. If you study the game of tic-tac-toe
long enough, you will be able to develop an algorithm that tells you how to
play an unbeatable game. However, some problems are so complicated that there
is no algorithm to solve them.
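Euclid's method for finding the greatest common divisor of two whole numbers is
a classic example of an algorithm: every step is specified exactly, and the
procedure always reaches a definite stopping place. A minimal sketch in Python
(used only for illustration):

    def gcd(a, b):
        # Euclid's algorithm: the greatest common divisor of a and b.
        while b != 0:          # guaranteed to stop: b gets smaller each time
            a, b = b, a % b    # replace (a, b) with (b, remainder of a / b)
        return a

    print(gcd(252, 105))       # prints 21
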
Ampere An ampere (or amp, for short) is the unit for measuring electric
current. A current of 1 ampere means that 6.25 x 10^18 electrons are flowing by
a point each second. A group of 6.25 x 10^18 electrons has a charge of 1
coulomb, so 1 ampere = 1 coulomb per second.
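As a worked example (a small Python calculation, included only for
illustration), a current of 2 amperes flowing for 3 seconds delivers
2 x 3 = 6 coulombs of charge, or about 3.75 x 10^19 electrons.

    ELECTRONS_PER_COULOMB = 6.25e18      # from the definition above

    current = 2.0                        # amperes (coulombs per second)
    time = 3.0                           # seconds

    charge = current * time                       # 6.0 coulombs
    electrons = charge * ELECTRONS_PER_COULOMB    # 3.75e+19 electrons
    print(charge, "coulombs =", electrons, "electrons")
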
Analog computer An analog computer is a computer in which information is
stored in a form that can vary smoothly between certain limits rather than
having discrete values. (By way of contrast, See Digital computer.) A slide
rule is an example of an analog computer, because it represents numbers as
distances along a scale. All modern, programmable computers are digital. Analog
computer circuits are used in certain kinds of automatic machinery, such as
automotive cruise controls and guided missiles. Also, a fundamental analog
computer circuit called the operational amplifier is used extensively in audio,
radio, and TV equipment.
Apple Apple is one of the largest personal computer manufacturers. The Apple
II, introduced in 1977, was one of the earliest popular microcomputers. A wide
range of software was written for the Apple II and its successors. In 1984,
Apple introduced the Macintosh, the first widely used computer with a graphical
user interface. The company, located in Cupertino, California, was founded by
Steve Jobs and Steve Wozniak, who began work in a garage.
Array An array is a collection of data that is given one name. An array is
arranged so that each item in the array can be located when needed. An array is
made up of a group of elements, which may be either numbers or character
strings. Each element can be identified by a set of numbers known as
subscripts, which indicate the element's position within the array.
The dimension of an array is the number of subscripts needed to locate a
particular element. For example, it takes two subscripts to identify an element
in a two-dimensional array.
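The sketch below (in Python, where a two-dimensional array can be represented
as a list of lists, shown only for illustration) uses two subscripts, a row
number and a column number, to pick out one element; note that Python counts
rows and columns from 0.

    # A two-dimensional array of test scores: 3 rows (students), 4 columns (tests).
    scores = [
        [88, 92, 75, 90],
        [79, 85, 93, 88],
        [91, 70, 84, 77],
    ]

    # Two subscripts locate an element: row 1, column 2 (counting from 0).
    print(scores[1][2])          # prints 93

    # A one-dimensional array needs only a single subscript.
    names = ["Ada", "Blaise", "Charles"]
    print(names[0])              # prints Ada
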
Artificial intelligence Artificial intelligence (AI) is the branch of computer
science that deals with using computers to simulate human thinking. Artificial
intelligence is concerned with building computer programs that can solve
problems creatively, rather than simply working through the steps of a solution
designed by the programmer. For example, consider computer game playing.
Some games, such as tic-tac-toe, are so simple that the programmer can specify
in advance a procedure that guarantees that the computer will play a perfect
game. With a game such as chess, however, no such procedure is known; the
computer must use, instead, a heuristic, that is, a procedure for discovering
and evaluating good moves. One possible heuristic for chess would be for the
computer to identify every possible move from a given position, and then
evaluate the moves by calculating, for each one, all the possible ways the game
could proceed. Chess is so complicated that this would take an impossibly long
time (on the order of millions of years with present-day computers). A better
strategy would be to take shortcuts. Calculating only five or six moves into
the future is sufficient to eliminate most of the possible moves as not worth
pursuing. The rest can be evaluated on the basis of general principles about
board positions. In fact, an ideal heuristic chess-playing machine would be
able to modify its own strategy on the basis of experience; like a human chess
player, it would realize that its opponent is also following a heuristic and
try to predict the opponent's behavior.
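The look-ahead idea described above can be sketched as a depth-limited search
(often called minimax). Chess itself is far too large for a short example, so
the Python sketch below (for illustration only) uses a toy game instead:
players alternately take 1, 2, or 3 counters from a pile, and whoever takes the
last counter wins. The depth cutoff, which simply calls an unresolved position
even, stands in for the "general principles" used to evaluate positions that
are not searched to the end.

    def best_score(pile, depth, my_turn):
        # Score a position by looking ahead at most `depth` moves.
        if pile == 0:
            # The player who just moved took the last counter and won.
            return -1 if my_turn else +1
        if depth == 0:
            return 0           # too deep to search: call it even (crude heuristic)
        scores = [best_score(pile - take, depth - 1, not my_turn)
                  for take in (1, 2, 3) if take <= pile]
        # On my turn choose the best outcome; assume the opponent does likewise.
        return max(scores) if my_turn else min(scores)

    def best_move(pile, depth=6):
        # Choose the move whose look-ahead score is highest.
        moves = [take for take in (1, 2, 3) if take <= pile]
        return max(moves, key=lambda take: best_score(pile - take, depth - 1, False))

    print(best_move(5))    # prints 1: leaving a pile of 4 loses for the opponent
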
One of the main problems of AI is how to represent knowledge in the computer in
a form such that it can be used rather than merely reproduced. In fact, some
workers define AI as the construction of computer programs that utilize a
knowledge base. A computer that tells you the call number of a library book is
not displaying artificial intelligence; it is merely echoing back what was put
into it. Artificial intelligence would come into play if the computer used its
knowledge base to make generalizations about the library's holdings or
construct bibliographies on selected subjects. Computer vision and robotics are
important areas of AI. Although it is easy to take the image from a TV camera
and store it in a computer's memory, it is hard to devise ways to make the
computer recognize the objects it "sees." Likewise, there are many unsolved
problems associated with getting computers to move about in three-dimensional
space -- to walk, for instance, and to find and grasp objects -- even though
human beings do these things naturally.
Another unsolved problem is natural language processing -- getting computers to
understand speech, or at least typewritten input, in a language such as
English. In the late 1950s it was expected that computers would soon be
programmed to accept natural-language input, translate Russian into English,
and the like. But human languages have proved to be more complex than was
expected, and progress has been slow. The English-speaking computers of Star
Wars and 2001 are still some years away.
The important philosophical question remains: Do computers really think?
Artificial intelligence theorist Alan Turing proposed a criterion that has
since become known as the Turing test: a computer is thinking if a human being,
connected to it by teletype, cannot tell whether he is communicating with a
machine or with another person. In response, Terry Rankin has pointed out that
it makes little sense to build a machine whose purpose is to deceive human
beings. Increasing numbers of AI workers are taking the position that computers
are not artificial minds, but merely tools to assist the human mind, and that
this is true no matter how closely they can be made to imitate human behavior.
ASC The ASC function in many versions of BASIC calculates the ASCII code
number associated with a given character. (See ASCII.) For example, ASC("A") is
65 because the ASCII code of the character "A" is 65 (expressed in decimal).
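For comparison, Python's built-in ord function performs the same conversion,
and chr performs the reverse:

    print(ord("A"))    # prints 65, the ASCII code for "A"   (BASIC: ASC("A"))
    print(chr(65))     # prints A, converting a code back    (BASIC: CHR$(65))
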
ASCII ASCII is a standard code for representing characters as binary numbers,
used on most microcomputers, computer terminals, and printers. ASCII stands for