The Renaissance began with the reintroduction of Aristotle’s thought into the West, especially by Thomas Aquinas. The resulting focus on the natural world and on reason as the means to understand it revolutionized European thought and led to major mathematical and scientific discoveries, including advances in algebra. Philosophers, mathematicians, and scientists differed in their view of the nature of reason, but by the end of the Renaissance, they had accepted its crucial role in the acquisition of knowledge. The result was the next major period of Western history, the Age of Reason. Although there is not uniform agreement on the exact dates of this era, we will consider the seventeenth and eighteenth centuries, beginning especially in the latter half of the seventeenth.

Let us begin with the key development, mathematically speaking, that provided the transition from the Renaissance to the Age of Reason: the integration of algebra and geometry into analytic geometry, independently and almost simultaneously by René Descartes (1596-1650) and Pierre de Fermat (1601-1665). Historically, geometry and algebra had developed as separate branches of mathematics. In the third century B.C., the ancient Greek mathematician, Apollonius, associated curves with equations in his monumental work on the conic sections, but he did not recognize that equations could be represented by curves. This was the crucial insight of Descartes and Fermat that led to analytic geometry.

In the sixteenth century, the Frenchman, François Viète (1540-1603), contributed to abstract mathematical thinking by representing coefficients in equations by consonants and variables by vowels. He then inaugurated the study of the theory of equations by establishing relationships among the roots of types of equations rather than by merely solving specific ones. Descartes used his insights into analytic geometry to do extensive work in the theory of equations, including his well-known rule of signs. He modified Viète's approach to notation by using letters toward the beginning of the alphabet to represent constants and ones toward the end to represent unknowns, a practice we still follow today.

Fermat used his achievements in analytic geometry to take steps toward the development of calculus. In a treatise entitled Method of Finding Maxima and Minima, he essentially took the derivative of a polynomial function and set it to zero to find its relative extrema. By extending his technique, he calculated the slope of a curve at any point on it. He did not have the explicit concept of a limit, but otherwise his method is the one we use today. He also approximated the area under a curve by means of circumscribed rectangles. Although he left no record that he recognized the inverse relationship between differentiation and integration, his work strongly hinted at the Fundamental Theorem of Calculus.
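Fermat's procedure amounts, in modern terms, to applying the power rule to a polynomial and locating a zero of the result. The sketch below is an illustrative rendering in modern Python, not Fermat's own notation; the helper names and the bisection search are our additions. The example is his classic problem of dividing a segment (here of length 6) so that the product of the two pieces is greatest.

```python
def poly_eval(coeffs, x):
    # Evaluate a polynomial given ascending-order coefficients.
    return sum(c * x**k for k, c in enumerate(coeffs))

def poly_derivative(coeffs):
    # Power rule: d/dx of sum(c_k x^k) is sum(k * c_k * x^(k-1)).
    return [k * c for k, c in enumerate(coeffs)][1:]

def critical_point(coeffs, lo, hi, tol=1e-12):
    # Find a zero of the derivative by bisection on [lo, hi],
    # assuming the derivative changes sign exactly once there.
    d = poly_derivative(coeffs)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if poly_eval(d, lo) * poly_eval(d, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Fermat's segment problem: maximize x*(6 - x), i.e. 6x - x^2,
# written with ascending coefficients [0, 6, -1].
x_star = critical_point([0, 6, -1], 0, 6)   # close to 3
```

Setting the derivative 6 - 2x equal to zero gives x = 3, the midpoint of the segment, just as Fermat's method predicts.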

The one person most responsible for launching the Age of Reason mathematically and scientifically was Isaac Newton (1642-1727). His revolutionary discoveries dramatically demonstrated the power of reason to grasp the nature of reality. Although he was reluctant to publish, at the urging of the astronomer, Edmond Halley, he composed Mathematical Principles of Natural Philosophy (also referred to as the Principia, part of its name in Latin). It appeared in 1687 and is considered to be the most important scientific work ever written.

During the years 1665 and 1666, Newton made three major discoveries: calculus, the law of gravitation, and the nature of color. He developed differentiation systematically, and he inverted the process to find an area. Isaac Barrow (1630-1677), Newton's teacher at Cambridge, evidently understood the possibility of such an approach, but Newton was the first person to formulate a general method of solving the area problem and thereby to grasp the Fundamental Theorem of Calculus. He also gave the first correct statement of the binomial series for rational exponents, and he came to realize that infinite series were more than tools of approximation: they provided a means of representing functions. He nearly succeeded in defining the concept of a limit, although a satisfactory definition was not formulated until the nineteenth century.
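Newton's binomial series generalizes the binomial theorem to non-integer exponents: (1 + x)^a = sum of C(a, k) x^k, where C(a, k) = a(a-1)...(a-k+1)/k!, convergent for |x| < 1. A brief Python sketch (the function name is ours) shows the series representing the square-root function, in the spirit of Newton's use of series to represent functions:

```python
import math

def binomial_series(alpha, x, terms=30):
    # Partial sum of (1 + x)^alpha = sum_k C(alpha, k) * x^k, where
    # C(alpha, k) = alpha*(alpha-1)*...*(alpha-k+1)/k! is the
    # generalized binomial coefficient; converges for |x| < 1.
    total, coeff = 0.0, 1.0
    for k in range(terms):
        total += coeff * x**k
        coeff *= (alpha - k) / (k + 1)   # build C(alpha, k+1) from C(alpha, k)
    return total

approx = binomial_series(0.5, 0.1)   # the series for sqrt(1.1)
```

With alpha = 1/2 and x = 0.1, thirty terms agree with math.sqrt(1.1) to machine precision.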

In the Principia, Newton discussed his work in calculus and applied it to his three laws of motion and the law of gravitation to obtain profoundly important results, including the derivation of Kepler’s three laws and the deduction of the path of motion of one body around a second, fixed body. By studying the motion of bodies in resisting media such as air and water, he launched the field of hydrodynamics. His analysis of the solar system included the calculation of the mass of the sun, the average density of the earth, the variation of the earth’s gravitational attraction across its surface, the precession of the equinoxes, and the cause of the tides.

The ancient Greeks believed that different principles explained motion near the earth versus motion beyond the moon. Newton’s law of gravitation united Kepler’s work on the movement of the planets and Galileo’s discoveries about bodies falling to the earth to provide strong evidence that nature is uniform in its behavior.

Newton’s work on the nature of light and color was also of profound importance. The Atomist philosophers in ancient Greece held that the concept of color is subjective and cannot be quantified. By decomposing white light into its spectrum, Newton changed the understanding of color fundamentally by showing that it is based on wavelengths, which, of course, are quantifiable. The combined impact of his achievements fostered an intellectual revolution in Europe.

Gottfried Wilhelm Leibniz (1646-1716) formulated his ideas concerning calculus with no knowledge of Newton’s work. He also developed the techniques of differentiation and integration, and he understood the inverse relationship between the two. His methods of solving separable and homogeneous first order differential equations still appear in modern textbooks. In addition to his work in calculus, he introduced the term “function” and used it essentially as we do today. Another of his contributions was the introduction of the notion of determinants in the context of systems of equations.

Leibniz was quite adept at devising excellent mathematical symbolism. He is responsible for the notation we employ to denote the differential and the integral, and he was the first mathematician to use the dot consistently for multiplication. Other innovations of his include the colon for division and the similarity and congruence symbols.

In 1642, Blaise Pascal (1623-1662) invented the first mechanical calculating machine, one that would add and subtract. In 1671, Leibniz improved on Pascal’s achievement by designing a device that would also multiply and divide. Their work launched advances in the automation of computation that continue to this day.

Newton’s work on calculus preceded that of Leibniz by about ten years, but the latter published his first. A bitter dispute arose between the two concerning who should receive priority. Today it is recognized that they made their discoveries independently and deserve equal credit, just as with Descartes and Fermat in the case of analytic geometry. The dispute had a very negative effect on British mathematicians, however, because it isolated them from their counterparts on the European continent.

The development of calculus unleashed a torrent of mathematical and scientific activity, including the contributions of the Bernoulli family across multiple generations. The most prominent members were Jacob (also known as Jacques) Bernoulli (1654-1705) and his brother Johann (Jean) (1667-1748), both of whom were influenced by Leibniz. Jacob determined the equation of the catenary, studied the curve r^2 = a^2 cos 2θ (named the lemniscate of Bernoulli) and extensively investigated the logarithmic spiral. He was so fascinated by this last curve that he requested that it be inscribed on his tombstone. He also published the polar coordinate system essentially in its modern form although it had been used earlier by Newton, and he suggested the term “integral” to Leibniz, who adopted it.
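In its standard polar form the lemniscate is r^2 = a^2 cos 2θ, which corresponds to the Cartesian equation (x^2 + y^2)^2 = a^2 (x^2 - y^2). A small Python check (the function name is our own) confirms that points generated from the polar form satisfy the Cartesian one:

```python
import math

def lemniscate_point(a, theta):
    # Polar form of the lemniscate of Bernoulli: r^2 = a^2 * cos(2*theta),
    # valid only where cos(2*theta) >= 0; convert to Cartesian coordinates.
    r = math.sqrt(a * a * math.cos(2 * theta))
    return r * math.cos(theta), r * math.sin(theta)

# Each such point satisfies (x^2 + y^2)^2 = a^2 * (x^2 - y^2),
# since x^2 + y^2 = r^2 and x^2 - y^2 = r^2 * cos(2*theta).
a = 2.0
for theta in (0.0, 0.3, -0.7):
    x, y = lemniscate_point(a, theta)
    lhs = (x * x + y * y) ** 2
    rhs = a * a * (x * x - y * y)
```

The identity holds because substituting r^2 = a^2 cos 2θ makes both sides equal to a^4 cos^2 2θ.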

Fermat and Pascal founded the theory of probability when they corresponded on a problem related to a game of dice. Jacob Bernoulli wrote the first significant work on probability, including a general theory of permutations and combinations. Using mathematical induction, he provided the first rigorous proof of the binomial theorem for positive integer exponents, and he considered the expansion of (1 + 1/n)^n. He determined that the limit of this expression as n approaches infinity lies between 2 and 3, and he proposed finding it in order to calculate continuously compounded interest. Another of his major contributions to probability was the law of large numbers.
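The connection Bernoulli saw between (1 + 1/n)^n and compound interest can be made concrete: compounding m times per year at rate r yields a factor (1 + r/m)^(mt), and as m grows this approaches e^(rt). A short Python illustration (function names are ours):

```python
import math

def compound(principal, rate, years, periods_per_year):
    # Interest compounded m times per year for t years.
    m = periods_per_year
    return principal * (1 + rate / m) ** (m * years)

def continuous(principal, rate, years):
    # The limiting case as m -> infinity: continuous compounding.
    return principal * math.exp(rate * years)

# (1 + 1/n)^n increases toward e = 2.71828... as n grows.
print((1 + 1/1) ** 1)          # 2.0
print((1 + 1/100) ** 100)      # about 2.7048
print((1 + 1/10**6) ** 10**6)  # about 2.71828, close to e
```

At 5% for two years, compounding a hundred times a year already agrees with continuous compounding to within a fraction of a cent per hundred units of principal.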

Both Jacob and Johann Bernoulli advanced differential equations by solving y’ + P(x) y = Q(x) y^n for any real number n, known today as Bernoulli’s equation. The latter’s solution was based on the substitution z = y^(1-n), an approach that appears in modern textbooks. The younger brother is also credited with developing the calculus of exponential functions. He studied ones of the form y = b^x, as well as the more general y = x^x. Another of his contributions was the introduction of the partial fractions integration technique.
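The substitution z = y^(1-n) works because it converts the nonlinear equation into a linear one; written out in modern notation:

```latex
\[
z = y^{1-n} \quad\Longrightarrow\quad z' = (1-n)\,y^{-n}\,y'.
\]
Multiplying $y' + P(x)\,y = Q(x)\,y^{n}$ through by $(1-n)\,y^{-n}$ gives
\[
(1-n)\,y^{-n}\,y' + (1-n)\,P(x)\,y^{1-n} = (1-n)\,Q(x),
\]
that is,
\[
z' + (1-n)\,P(x)\,z = (1-n)\,Q(x),
\]
a linear first order equation in $z$, solvable by standard methods.
```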

In 1692, Johann Bernoulli tutored a French marquis, G. F. A. de L'Hôpital (1661-1704), in calculus, and they signed an agreement in which Bernoulli granted L'Hôpital use of his mathematical discoveries in exchange for a regular salary. In 1696, L'Hôpital published the first textbook on differential calculus, and it included one of Bernoulli's major results, the technique that we call L'Hôpital's Rule. The book was well written and enjoyed considerable success throughout the eighteenth century, as did another textbook by L'Hôpital on analytic geometry.
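The rule resolves limits of indeterminate form 0/0 by differentiating numerator and denominator; a standard modern example, applying it twice:

```latex
\[
\lim_{x \to 0} \frac{1 - \cos x}{x^{2}}
  = \lim_{x \to 0} \frac{\sin x}{2x}
  = \lim_{x \to 0} \frac{\cos x}{2}
  = \frac{1}{2},
\]
```

where the first quotient of derivatives is again of the form 0/0, so the rule is applied a second time.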

When we hear the name of Abraham de Moivre (1667-1754), we naturally think of his famous theorem and the calculation of the nth roots of a real or complex number. However, he did significant work in probability as well, addressing actuarial problems and questions related to life annuities. As we have seen, Jacob Bernoulli was also interested in the mathematics of finance. An early contributor to this area was the Dutch mathematician, Jan De Witt (1629-1672), who wrote a tract entitled A Treatise on Life Annuities. In addition, he completed what is often considered to be the first textbook on analytic geometry, in which he used the focus-directrix ratio to define conic sections. Indeed, he introduced the term “directrix” into the vocabulary of mathematics.
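De Moivre's theorem, (cos t + i sin t)^n = cos nt + i sin nt, yields all n nth roots of a complex number z = r(cos t + i sin t) as r^(1/n)(cos((t + 2πk)/n) + i sin((t + 2πk)/n)) for k = 0, ..., n-1. A compact Python rendering (the function name is ours):

```python
import cmath
import math

def nth_roots(z, n):
    # De Moivre: the n-th roots of z = r*(cos t + i sin t) are
    # r**(1/n) * (cos((t + 2*pi*k)/n) + i*sin((t + 2*pi*k)/n)),
    # for k = 0, 1, ..., n-1.
    r, t = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (t + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(8, 3)   # cube roots of 8: 2 and two complex roots
```

Each of the three roots cubes back to 8, and one of them is the real root 2.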

Newton’s extensive use of infinite series stimulated the Englishman, Brook Taylor (1685-1731), to study the series that bears his name. Taylor was also quite interested in the concept of perspective and wrote two books on the subject. Another contributor to the topic of series was the Scottish mathematician, Colin Maclaurin (1698-1746), whose series, of course, is a special case of Taylor’s. Maclaurin also obtained results on conic sections, and he published the method known as Cramer’s Rule before Gabriel Cramer (1704-1752) himself.
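A Maclaurin series is a Taylor series centered at zero; for example, sin x = x - x^3/3! + x^5/5! - .... A brief Python sketch (the function name is ours) shows how a few terms already represent the function accurately near the center:

```python
import math

def maclaurin_sin(x, terms=10):
    # Maclaurin series (Taylor series centered at 0) for sin x:
    # sin x = x - x^3/3! + x^5/5! - x^7/7! + ...
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))
```

With ten terms the partial sum agrees with math.sin at x = 1 to machine precision, since the next omitted term is on the order of 1/21!.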

A student of Johann Bernoulli, Leonhard Euler (1707-1783) became one of the dominant mathematicians of the eighteenth century. Like Leibniz, he was quite adept at devising fruitful mathematical notations. He was the first to use the letters e and i to represent the base of the natural logarithm system and the imaginary unit respectively, and he popularized the Greek letter pi to stand for the ratio of the circumference of a circle to its diameter. It is to Euler that we owe the practice of using lower case letters for the sides of a triangle opposite angles denoted by the corresponding upper case letters, as well as the Greek letter sigma to indicate a summation. His abbreviations, sin, cos, tang, cot, sec, and cosec, for the six trigonometric ratios are almost identical to those we use today. Perhaps his most important contribution of all to mathematical symbolism was the use of f(x) to represent a function of x.

Far more important than Euler’s introduction of functional notation, however, was his injection of the concept of a function into the heart of mathematics. In addition to representing algebraic expressions as functions, he defined the concept of radian measure of angles, gave a strictly analytic presentation of trigonometric functions, treated logarithms as exponents in the manner we do today, and extended the idea of a function to ones that are piecewise defined. He clearly formulated what is now known as Euler’s formula, e^(ix) = cos x + i sin x, which is used, for example, to represent the solution of certain linear differential equations with constant coefficients. Zero, one, e, pi, and i are considered to be five of the most important numbers in mathematics. If x is assigned the value of pi in Euler’s formula, all five appear in the resulting equation, for it can be written as e^(i pi) + 1 = 0.
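Euler's formula can be checked numerically with Python's complex arithmetic; the calculation below confirms that e^(iπ) + 1 vanishes up to floating-point rounding:

```python
import cmath
import math

# Euler's formula: e^(ix) = cos x + i sin x, checked at a sample point.
x = 0.75
assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-15

# At x = pi the formula ties together 0, 1, e, pi, and i:
value = cmath.exp(1j * cmath.pi) + 1
print(abs(value))   # zero up to rounding error, about 1e-16
```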

Many of the methods taught today in differential equations courses originated with Euler. He was primarily responsible for the concept of an integrating factor, the distinctions between homogeneous and nonhomogeneous equations and general and particular solutions, and the systematic methods for solving linear equations of higher order with constant coefficients. He contributed to the solving of exact equations by showing that the continuous mixed partials of a function of two variables are equal, and he also advanced work with functions of several variables in another respect by using iterated integration to evaluate multiple integrals.
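The idea behind the integrating factor, in its modern form, is that multiplying a linear first order equation by a suitable function collapses its left side into an exact derivative:

```latex
\[
y' + P(x)\,y = Q(x), \qquad \mu(x) = e^{\int P(x)\,dx}.
\]
Since $\mu' = P\,\mu$, multiplying through by $\mu$ gives
\[
\mu\,y' + P\,\mu\,y = (\mu\,y)' = \mu\,Q,
\]
so that
\[
y = \frac{1}{\mu(x)}\left(\int \mu(x)\,Q(x)\,dx + C\right).
\]
```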

Euler’s work with infinite series was not rigorous, but he used the concept to obtain important results, for example, in number theory. He developed a proof of the infinitude of the prime numbers based on the divergence of the harmonic series, and he showed that the series whose terms are the reciprocals of the perfect squares converges to pi^2/6. This last result was a problem of particular interest to Jacob Bernoulli. Another of Euler’s contributions to number theory was his phi function, which gives the number of positive integers not exceeding a given positive integer that are relatively prime to it, with one counted as relatively prime to every integer.
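The phi function can be computed from Euler's product formula, phi(n) = n · ∏(1 - 1/p) over the distinct primes p dividing n. A short Python sketch (the function name is our own):

```python
def euler_phi(n):
    # Euler's totient: count integers k in 1..n with gcd(k, n) == 1,
    # via the product formula phi(n) = n * prod(1 - 1/p) taken over
    # the distinct prime divisors p of n.
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                       # a remaining prime factor > sqrt(n)
        result -= result // m
    return result
```

For example, phi(12) = 12 · (1/2) · (2/3) = 4, since 1, 5, 7, and 11 are the integers up to 12 relatively prime to it.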