Enumeration and classification
Read the passages that follow and analyze them. See a sample analysis on pp. 152-153 in the textbook.
There are two kinds of car buyers. The first kind simply wants a car in order to be able to go where it is necessary to go. Such buyers, who want a car for transportation, are likely to choose a car for its price, safety, and comfort. The second type wants a car that presents a certain image. This sort of buyer wants to seem daring, fashionable, sporty, or perhaps sexy. These people may be willing to pay a lot of money for a car which is not especially practical, comfortable, or safe.
(Effective Paragraph: From the Paragraph Up. Kenkyusha. p. 27)
Japan is divided into four kinds of geopolitical areas, each of which may be considered a kind of prefecture. The one which comes to mind first is the ken. There are 43 ken, ranging from Okinawa in the south to Aomori in the northeast. Next, there are the fu. There are two of these prefectural areas—Kyoto and Osaka. Then there is one prefecture, Hokkaido, called a dou, probably because of its special administrative history in the days when it was considered a frontier area. Finally, there is a to, a label given only to the area including the modern capital, Tokyo, and its environs. Together, these 47 todoufuken make up the total land area of Japan.
(Effective Paragraph: From the Paragraph Up. Kenkyusha. p. 38)
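As a quick arithmetic check of the enumeration above, a throwaway Python sketch (the variable name is mine; the counts are those given in the passage):

# Prefectural categories and how many of each, as listed in the passage
todoufuken = {"to": 1, "dou": 1, "fu": 2, "ken": 43}
print(sum(todoufuken.values()))   # 47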
The study of memory is crucial to the understanding of how the human mind works. Scientists who study memory divide it into two types. Of course, scientists study a lot of other factors in addition to memory. One type of memory is short-term memory. We use short-term memory for pieces of information that we need for a short time only. For example, when we make a phone call, we look at the phone number and remember it just long enough to dial it. The other type of memory is long-term memory. Long-term memory is stored for permanent use. It is believed that when we learn something, it goes from short-term memory into long-term memory.
(Kenji Kitao. 1993. From Paragraphs to Essays. Eichosha. p. 48)
As you know, English is a language with an unusually rich vocabulary, more words than any other language. Where did all these words come from? English words can roughly be divided into four categories. The first category is words from Old English, which developed from a language spoken by a Germanic tribe that invaded England in the 5th century. These are basic words in English such as man, word, speak, and good. The second category includes words from French. William the Conqueror invaded from France in 1066, and, as a result, English adopted many words from French. The third category comes from Latin and Greek. Scientists and other scholars who needed a word for a new invention or idea often used a word based on Latin or Greek words. The fourth category is words from other languages. At various times, when English speakers have come into contact with speakers of other languages, they have borrowed words. Words have been borrowed from Arabic, Japanese, Spanish, Italian, Russian, and many other languages. These categories indicate the great variety of sources that English words have come from.
(ALC 1 sample lecture)
Numbers can be classified in many ways. The simplest class comprises the natural numbers (1, 2, 3, . . . ), which with the addition of zero are known as the whole numbers. The natural numbers, their negative equivalents, and zero constitute the integers. Rational numbers can be defined as numbers expressible as ratios of integers, or equivalently, as terminating or repeating decimals (4 = 4/1; 3 1/7 = 22/7 = 3.142857 . . . ; 2 1/2 = 2.5; 1/3 = 0.333 . . . ). Irrational numbers cannot be expressed as a ratio of integers or as decimals that terminate or repeat; examples of them are √2 = 1.4142 . . . , π = 3.14159 . . . , and e = 2.71828 . . . . Taken together, the rational and irrational numbers encompass the real number system.
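The rational examples in that paragraph can be verified mechanically; here is a minimal Python sketch using the standard fractions module (the specific numbers are the ones quoted above):

from fractions import Fraction

# 3 1/7 equals 22/7, whose decimal expansion repeats: 3.142857142857...
print(3 + Fraction(1, 7) == Fraction(22, 7))   # True
print(float(Fraction(22, 7)))                  # 3.142857142857143
# 2 1/2 = 5/2 terminates as a decimal
print(float(2 + Fraction(1, 2)))               # 2.5
# 1/3 repeats: 0.333...
print(float(Fraction(1, 3)))                   # 0.3333333333333333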
Numbers that are the even roots of negative numbers (√(-3), ⁴√(-16), ⁶√(-5), etc.) are not members of the real-number system; such numbers are called imaginary. The basic unit of the imaginary-number system is the irreducible form √(-1), which is represented by i. Numbers that have a real and an imaginary component are known as complex numbers. Complex numbers are represented by the form a + bi; a is the real part and bi the imaginary part. Thus, the reals (for which b = 0) or the imaginaries (a = 0), taken by themselves, are subsets of the complex-number system. Within the system of complex numbers it is always possible to find a root of any polynomial of degree n > 0, but this is not true if one is restricted to the real numbers.
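A brief Python illustration of that closing claim: x² + 1 has no real root, but among the complex numbers its roots are ±i (numpy is assumed to be available; the polynomial is my own example):

import numpy as np

# x**2 + 1 = 0 has no solution among the real numbers...
coeffs = [1, 0, 1]        # coefficients of x**2 + 0*x + 1
print(np.roots(coeffs))   # the complex roots i and -i
# ...and the imaginary unit itself is just a square root of -1:
print(1j ** 2)            # (-1+0j)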
The Irish mathematician Sir William Rowan Hamilton (1805–65), by dropping the commutative axiom of multiplication (ab = ba), developed a system of hypercomplex numbers that he called quaternions, which have a basis of four elements (1, i, j, k).
(based on Encyclopedia Britannica)
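The non-commutativity Hamilton gave up can be seen in a few lines; the sketch below hand-codes quaternion multiplication from the rules i² = j² = k² = ijk = −1 (this helper is my own illustration, not a library API):

def qmul(q, r):
    # Quaternions stored as (w, x, y, z), meaning w + x*i + y*j + z*k
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(qmul(i, j))   # (0, 0, 0, 1)  = k
print(qmul(j, i))   # (0, 0, 0, -1) = -k, so ij ≠ ji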
We have discussed several kinds of forces—including weight, tension, friction, fluid resistance, and the normal force—and we will encounter others as we continue our study of physics. But just how many kinds of forces are there? Our current understanding is that all forces are expressions of just four distinct classes of fundamental forces, or interactions between particles. Two are familiar in everyday experience. The other two involve interactions between subatomic particles that we cannot observe with the unaided senses.
Of the two familiar classes, gravitational interactions were the first to be studied in detail. The weight of a body results from the Earth’s gravitational attraction acting on it. The sun’s gravitational attraction for the Earth keeps the Earth in its nearly circular orbit around the sun. Newton recognized that the motions of the planets around the sun and the free fall of objects on Earth both result from gravitational forces. In Chapter 12 we will study gravitational interactions in greater detail, and we will analyze their vital role in the motions of planets and satellites.
The second familiar class of forces, electromagnetic interactions, includes electric and magnetic forces. If you run a comb through your hair, you can then use it to pick up bits of paper or fluff; this interaction is the result of electric charge on the comb. All atoms contain positive and negative electric charge, so atoms and molecules can exert electric forces on each other. Contact forces, including the normal force, friction, and fluid resistance, are the combination of all such forces exerted on the atoms of a body by atoms in its surroundings. Magnetic forces occur in interactions between magnets or between a magnet and a piece of iron. These may seem to form a different category, but magnetic interactions are actually the result of electric charges in motion. In an electromagnet an electric current in a coil of wire causes magnetic interactions. We will study electric and magnetic interactions in detail in the second half of this book.
These two interactions differ enormously in their strength. The electrical repulsion between two protons at a given distance is stronger than their gravitational attraction by a factor of the order of 10³⁵. Gravitational forces play no significant role in atomic or molecular structure. But in bodies of astronomical size, positive and negative charge are usually present in nearly equal amounts, and the resulting electrical interactions nearly cancel out. Gravitational interactions are thus the dominant influence in the motion of planets and in the internal structure of stars.
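The size of that factor is easy to check; a back-of-the-envelope Python calculation with standard values of the constants (the numerical values below are my assumptions, not taken from the passage) gives a ratio in the 10³⁵–10³⁶ range, confirming how lopsided the comparison is:

# Coulomb repulsion vs. gravitational attraction between two protons;
# the separation r cancels because both forces fall off as 1/r**2.
k   = 8.988e9      # Coulomb constant, N*m^2/C^2
G   = 6.674e-11    # gravitational constant, N*m^2/kg^2
e   = 1.602e-19    # proton charge, C
m_p = 1.673e-27    # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)
print(f"{ratio:.2e}")   # about 1.2e36: electrical repulsion utterly dominates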
The other two classes of interactions are less familiar. One, the strong interaction, is responsible for holding the nucleus of an atom together. Nuclei contain electrically neutral neutrons and positively charged protons. The charged protons repel each other, and a nucleus could not be stable if it were not for the presence of an attractive force of a different kind that counteracts the repulsive electrical interactions. In this context the strong interaction is also called the nuclear force. It has much shorter range than electrical interactions, but within its range it is much stronger. The strong interaction is also responsible for the creation of unstable particles in high-energy particle collisions.
Finally, there is the weak interaction. It plays no direct role in the behavior of ordinary matter, but it is of vital importance in interactions among fundamental particles. The weak interaction is responsible for a common form of radioactivity called beta decay, in which a neutron in a radioactive nucleus is transformed into a proton while ejecting an electron and an essentially massless particle called an antineutrino. The weak interaction between the antineutrino and ordinary matter is so feeble that an antineutrino could easily penetrate a wall of lead a million kilometers thick!
During the past several decades a unified theory of the electromagnetic and weak interactions has been developed. We now speak of the electroweak interaction, and in a sense this reduces the number of classes of interactions from four to three. Similar attempts have been made to understand strong, electromagnetic, and weak interactions on the basis of a single unified theory called a grand unified theory (GUT), and the first tentative steps have been taken toward a possible unification of all interactions into a theory of everything (TOE). Such theories are still speculative, and there are many unanswered questions in this very active field of current research.
(Young and Freedman. 2004. Sears and Zemansky’s University Physics. eleventh edition. pp. 188-189)
Early nuclear physicists used alpha and beta particles from naturally occurring radioactive elements for their experiments, but they were restricted in energy to the few MeV that are available in such random decays. Present-day particle accelerators can produce precisely controlled beams of particles, from electrons and positrons up to heavy ions, with a wide range of energies. These beams have three main uses. First, high-energy particles can collide to produce new particles, just as a collision of an electron and a positron can produce photons. Second, a high-energy particle has a short de Broglie wavelength and so can probe the small-scale interior structure of other particles, just as electron microscopes can give better resolution than optical microscopes. Third, they can be used to produce nuclear reactions of scientific or medical use.
(Young and Freedman. 2004. Sears and Zemansky’s University Physics. eleventh edition. p. 1674)
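A rough sense of the second use above: a minimal Python sketch (the constants and sample beam energies are my own illustrative choices) of the de Broglie wavelength λ = h/p, taking p ≈ E/c for a highly relativistic particle:

h  = 6.626e-34   # Planck's constant, J*s
c  = 2.998e8     # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

def de_broglie(E_eV):
    # For an ultrarelativistic particle, momentum p is approximately E/c
    p = (E_eV * eV) / c
    return h / p

print(de_broglie(1e9))    # ~1.2e-15 m at 1 GeV: roughly the size of a nucleus
print(de_broglie(1e12))   # ~1.2e-18 m at 1 TeV: fine enough to probe inside nucleons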
When the year 1905 began, Albert Einstein was an unknown 25-year-old clerk in the Swiss patent office. By the end of that amazing year he had published three papers of extraordinary importance. One was an analysis of Brownian motion; a second (for which he was awarded the Nobel Prize) was on the photoelectric effect. In the third, Einstein introduced his special theory of relativity, proposing drastic revisions in the Newtonian concepts of space and time.
The special theory of relativity has made wide-ranging changes in our understanding of nature, but Einstein based it on just two simple postulates. One states that laws of physics are the same in all inertial frames of reference; the other states that the speed of light in vacuum is the same in all inertial frames. These innocent-sounding propositions have far-reaching implications. Here are three: (1) Events that are simultaneous for one observer may not be simultaneous for another. (2) When two observers moving relative to each other measure a time interval or a length, they may not get the same results. (3) For the conservation principles for momentum and energy to be valid in all inertial systems, Newton’s second law and the equations for momentum and kinetic energy have to be revised.
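Implication (2) can be made quantitative with the Lorentz factor γ = 1/√(1 − v²/c²), the standard measure of how much a moving clock is measured to run slow; a small Python sketch (the speeds are arbitrary illustrations):

import math

c = 2.998e8   # speed of light, m/s

def gamma(v):
    # Lorentz factor: time intervals on a clock moving at speed v are
    # measured to be stretched by this factor
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

print(gamma(0.1 * c))    # ~1.005: barely noticeable at 10% of c
print(gamma(0.9 * c))    # ~2.29: a 1 s tick is measured as about 2.3 s
print(gamma(0.99 * c))   # ~7.1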
Relativity has important consequences in all areas of physics, including electromagnetism, atomic and nuclear physics, and high-energy physics. Although the results derived in this chapter may run counter to your intuition, the theory is in solid agreement with experimental observations.
(Young and Freedman. 2004. Sears and Zemansky’s University Physics. eleventh edition. p. 1403)
There are three principal techniques that are used to construct tables of indefinite integrals, and they should be learned by anyone who desires a good working knowledge of calculus. They are (1) integration by substitution (to be described in the next section), a method based on the chain rule; (2) integration by parts, a method based on the formula for differentiating a product of functions (to be described in Section 5.9); and (3) integration by partial fractions, an algebraic technique which is discussed at the end of Chapter 6. These techniques not only explain how tables of indefinite integrals are constructed, but also tell us how certain formulas are converted to the basic forms listed in the tables.
(Apostol. 1967. Calculus. volume I. second edition. p. 211)
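One quick way to see each technique on its textbook case is to hand a symbolic example of each type to a computer algebra system; the sympy sketch below is illustrative only (sympy's internal algorithms need not be these three methods, but each integral is the standard example for the technique named in the comments):

import sympy as sp

x = sp.symbols('x')

# (1) substitution: u = x**2 turns the integrand into cos(u)
print(sp.integrate(2*x*sp.cos(x**2), x))   # = sin(x**2)
# (2) integration by parts with u = x, dv = exp(x) dx
print(sp.integrate(x*sp.exp(x), x))        # = (x - 1)*exp(x)
# (3) partial fractions: 1/(x**2 - 1) = (1/2)*(1/(x - 1) - 1/(x + 1))
print(sp.integrate(1/(x**2 - 1), x))       # = log(x - 1)/2 - log(x + 1)/2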
In theory, the convergence or divergence of a particular series ∑ aₙ is decided by examining its partial sums sₙ to see whether or not they tend to a finite limit as n → ∞. In some special cases, such as the geometric series, the sums defining sₙ may be simplified to the point where it becomes a simple matter to determine their behavior for large n. However, in the majority of cases there is no nice formula for simplifying sₙ and the convergence or divergence may be rather difficult to establish in a straightforward manner. Early investigators in the subject, notably Cauchy and his contemporaries, realized this difficulty and they developed a number of “convergence tests” that bypassed the need for an explicit knowledge of the partial sums. A few of the simplest and most useful of these tests will be discussed in this chapter, but first we want to make some general remarks about the nature of these tests.
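A numerical sketch of that point (the two series are my own illustrative choices): the geometric series ∑ (1/2)ⁿ has partial sums with a simple closed form and visibly settles toward its limit of 1, while the harmonic series ∑ 1/n admits no such simplification and its partial sums creep upward without bound.

def partial_sums(term, N):
    # Return the partial sums s_n = term(1) + ... + term(n) for n = 1..N
    sums, s = [], 0.0
    for n in range(1, N + 1):
        s += term(n)
        sums.append(s)
    return sums

geometric = partial_sums(lambda n: 0.5 ** n, 20)
harmonic  = partial_sums(lambda n: 1.0 / n, 20)

print(geometric[-1])   # 0.99999904...: essentially at the limit after 20 terms
print(harmonic[-1])    # ~3.5977: still climbing; the harmonic series diverges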