# The Most Influential People in History

# MATHEMATICS SECTION

# INTRODUCTION and CONCLUSION

The choice of the 46 most influential mathematicians was based on the following findings:

“Mathematics is a unique aspect of human thought, and its history differs in essence from all other histories.

As time goes on, nearly every field of human endeavor is marked by changes which can be considered as corrections and/or extensions. Thus, the changes in the evolving history of political and military events are always chaotic; there is no way to predict the rise of a Genghis Khan, for example, or the consequences of the short-lived Mongol Empire. Other changes are a matter of fashion and subjective opinion. The cave-paintings of 25,000 years ago are generally considered great art, and while art has continuously—even chaotically—changed in the subsequent millennia, there are elements of greatness in all the fashions. Similarly, each society considered its own ways natural and rational, and found the ways of other societies to be odd, laughable, or repulsive.

But only among the sciences is there true progress; only there is the record one of continuous advance toward ever greater heights.

And yet, among most branches of science, the process of progress is one of both correction and extension. Aristotle, one of the greatest minds ever to contemplate physical laws, was wrong in his views on falling bodies and had to be corrected by Galileo in the 1590s. Galen, the greatest of ancient physicians, was not allowed to study human cadavers and was quite wrong in his anatomical and physiological conclusions. He had to be corrected by Vesalius in 1543 and Harvey in 1628. Even Newton, the greatest of all scientists, was wrong in his view of the nature of light, of the achromaticity of lenses, and missed the existence of spectral lines. His masterpiece, the laws of motion and the theory of universal gravitation, had to be modified by Einstein in 1916.

Now we can see what makes mathematics unique. Only in mathematics is there no significant correction—only extension. Once the Greeks had developed the deductive method, they were correct in what they did, correct for all time. Euclid was incomplete and his work has been extended enormously, but it has not had to be corrected. His theorems are, every one of them, valid to this day.

Ptolemy may have developed an erroneous picture of the planetary system, but the system of trigonometry he worked out to help him with his calculations remains correct forever.

Each great mathematician adds to what came previously, but nothing needs to be uprooted. Consequently, when we read a book like A History of Mathematics, we get the picture of a mounting structure, ever taller and broader and more beautiful and magnificent and with a foundation, moreover, that is as untainted and as functional now as it was when Thales worked out the first geometrical theorems nearly 26 centuries ago.” page ix, foreword by Isaac Asimov in Carl B. Boyer, A History of Mathematics.

___________________________________________________________________

Given the way mathematics builds over time, choosing the most influential in this field rests on a stronger base than in any of the other subject areas.

1. The pool of candidates includes any great scientist because all used mathematics as a tool to solve problems. However, three individuals were consistently noted as the top three mathematicians: the Greek, Archimedes; Carl Friedrich Gauss, a German; and Isaac Newton, an Englishman, so they made the list.

“Carl Friedrich Gauss, who, with Archimedes and Newton, ranks as one of the greatest mathematicians of all time…” page 697, Encyclopaedia Britannica, Macropaedia, Volume 19, 1993, 15th Edition.

“…he (Archimedes) is ranked with Newton and Gauss as one of the three greatest in that field (mathematics).” page 105, Morris Kline, Mathematical Thought From Ancient to Modern Times, 1972.

“Archimedes, Newton and Gauss, these three, are in a class by themselves among the great mathematicians, and it is not for ordinary mortals to attempt to range them in order of merit. All three started tidal waves in both pure and applied mathematics: Archimedes esteemed his pure mathematics more highly than its applications; Newton appears to have found the chief justification for his mathematical inventions in the scientific uses to which he put them, while Gauss declared that it was all one to him whether he worked on the pure or the applied side.” page 218, E. T. Bell, Men of Mathematics, 1937.

2. The Greek, Euclid is included because his Elements has proven to be the best text on geometry for more than 2,000 years. More than this, the Elements is the most influential math textbook in history. Several other Greeks: Thales of Miletus, Pythagoras of Samos, Apollonius of Perga, and Diophantus made significant contributions, especially in geometry, and so were added.

“During the first century or so of the Hellenistic (Greek) Age, three mathematicians stood head and shoulders above all others of the time, as well as above most of their predecessors and successors. These men were Euclid, Archimedes, and Apollonius; it is their works that lead to the designation of the period from about 300 to 200 B.C.E. as the “Golden Age” of Greek mathematics.” page 140, Carl B. Boyer, A History of Mathematics, 2nd Edition, 1991.

3. Another Greek, Aristotle, developed a system of logic, called syllogistic logic, which was the first deductive system in the history of logic.(1) It seems Aristotle was the first to engage in the systematic, sustained investigation of inference in its own right.(2) He served as the cornerstone in the development of logic and, as Aristotle said, “logic was a tool used by all the sciences,” so he made the list.

4. Four Indians, Brahmagupta, Aryabhata, Bhāskara I, and Bhāskara II were included because they played an important role in developing the idea of a place-value system using the principle of Base 10 for expressing quantities. This system makes any quantity, from ten to one billion and beyond, far easier and more economical to write. Secondly, the Indians began developing what we now call the Hindu-Arabic numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) generally used worldwide today. As an example, compare the quantity three thousand four hundred eighty-eight in the Roman method, MMMCCCCLXXXVIII, with the current Hindu-Arabic style, 3,488. The Babylonians (2000–539 B.C.E.) and the Mayans (c. 300–c. 900) also used place-value, but their discoveries were not adopted by other civilizations.
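The economy of place-value notation can be made concrete with a short sketch (illustrative code of my own, comparing the old additive Roman style used in the example above with a Base-10 positional expansion):

```python
def to_additive_roman(n):
    """Convert a positive integer to old-style, purely additive Roman numerals."""
    symbols = [(1000, "M"), (500, "D"), (100, "C"),
               (50, "L"), (10, "X"), (5, "V"), (1, "I")]
    out = []
    for value, letter in symbols:
        count, n = divmod(n, value)   # how many of this symbol, then the remainder
        out.append(letter * count)
    return "".join(out)

def place_value_expansion(n):
    """Write n as a sum of digit-times-power-of-ten terms."""
    digits = str(n)
    terms = [f"{d}x10^{len(digits) - 1 - i}"
             for i, d in enumerate(digits) if d != "0"]
    return " + ".join(terms)

print(to_additive_roman(3488))       # MMMCCCCLXXXVIII
print(place_value_expansion(3488))   # 3x10^3 + 4x10^2 + 8x10^1 + 8x10^0
```

Every quantity, however large, reuses the same ten digits in the positional system, while the additive Roman style must repeat symbols and invent new ones as numbers grow.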

5. Several Arabs were included, most notably al-Khwarizmi, but also al-Biruni, and Omar Khayyam, because they played a critical role in translating Greek math texts, thus preserving the invaluable insights and systematic organization of geometry attained by the Greeks. Al-Khwarizmi adopted the Hindu-Arabic numerals and place-value notation in his writings and produced valuable work in algebra and trigonometry. Europeans, over several hundred years, adopted the simplified number symbols and notation system with Fibonacci, an Italian, playing a key role. As a way of confirming the pivotal role of Arab mathematicians, the word “algebra” comes from the Arabic and the word “algorithm” comes from a Latin form (algorismi) of al-Khwarizmi.(3)

“By 850 most of the classic Greek texts in mathematics, astronomy, and medicine had been translated into Arabic.”(4)

“David Rivault’s Latin translation (1615) of Archimedes’ complete works, was enormously influential in the work of Rene Descartes and Pierre de Fermat. Without the background of the rediscovered ancient mathematicians, amongst whom Archimedes was paramount, the development of mathematics in Europe between 1550–1650 is inconceivable.”(5)

6. The following Europeans were all considered great mathematicians and so made the grade: François Viète; Rene Descartes; Pierre de Fermat; Blaise Pascal; Gottfried Leibniz; Jakob, Johann, and Daniel Bernoulli; Leonhard Euler; Joseph-Louis Lagrange; Joseph Fourier; Augustin Cauchy; Évariste Galois; Bernhard Riemann; Georg Cantor; Henri Poincaré; David Hilbert; John von Neumann; Kurt Gödel; Bertrand Russell; and Alfred Whitehead.(6) Of particular note are Euler and Lagrange, who were the key figures in eighteenth-century mathematics. For Morris Kline, a prominent mathematics historian, Euler “should be ranked with Archimedes, Newton, and Gauss.”(7) On the other hand, Carl Boyer, another well-known math historian, states, “Lagrange generally is regarded as the keenest mathematician of the eighteenth century, only Euler being a close rival, and there are aspects of his work that are not easily described in an elementary historical survey.”(8)

7. I include three great women mathematicians, two from Europe, Sophie Germain and Emmy Noether, and one from Russia, Sofia Kovalevskaya. The decision to add three women rests on the enormous obstacles women faced, beginning with family and societal barriers that discouraged studying mathematics beyond the elementary level until the twentieth century. The difficulties continued at every stage, from gaining admittance to a college to finding a job as a mathematician. Also, these women serve as influential role models for other women in a way that no male could.

8. I include Ptolemy of Alexandria, Nicolaus Copernicus, Johannes Kepler, and Galileo Galilei because some of the most important advances in the sciences came from individuals who explored the movement of the Earth, its moon, and the planets, which yielded groundbreaking results in advancing the field of mathematics. All four are usually not thought of as great mathematicians, but all used math to help explain the results of their discoveries. All put forth theories about how the universe works that were widely accepted, and used the language of mathematics–the fundamental tool for explaining the physical universe–to explain their ideas.

“So if the worth of the arts were measured by the matter with which they deal, this art—which some call astronomy, others astrology, and many of the ancients the consummation of mathematics—would be by far the most outstanding. This art which is as it were the head of all the liberal arts…leans upon all the other branches of mathematics. Arithmetic, geometry, optics, geodesy, mechanics, and whatever others, all offer themselves in its service.” page 510, Copernicus, On the Revolution of the Heavenly Spheres (1543), translated by Charles Glenn Wallis.

9. I include two mathematicians from Asia, Zhu Shijie from China, who flourished from 1280 to 1303, and Seki Kowa from Japan (c. 1650–c. 1710). I also include Matteo Ricci, an Italian Jesuit missionary, who arrived in China in the early 1600s, introducing Western science for the first time to China. No one from Africa, South America, North America or Australia made the list, though two Europeans settled in the United States–John von Neumann and Kurt Gödel. For Africa, the Greeks–Euclid, Apollonius, Ptolemy, and Diophantus–lived in Alexandria, Egypt, and Archimedes, another Greek, spent some time in Egypt, but no one from the indigenous cultures of the continent was included. This does not mean that no one from these continents was capable of producing something noteworthy in mathematics. On the contrary, a great deal of mathematical writing took place, but the results either never qualified as the best in the field at any given time or went unrecognized. In the case of ancient Egypt, Babylonia and the Mayans, no individuals are recorded for recognition. Overall, the circumstances of history produced these results. One question this raises: were the advances in mathematics made by Western, Arab, Indian, and Asian mathematicians important for advancing humanity and man’s betterment on Earth?

Some final facts that affected the choices:

“… I have ignored several civilizations such as the Chinese, Japanese, and Mayan because their work had no material impact on the main line of mathematical thought.” page viii, Morris Kline, Mathematical Thought From Ancient to Modern Times, 1972.

“As a consequence of the exponential growth of science most of ... mathematics has developed since the 15th century…” page 561, Encyclopedia Britannica, Macro, v. 23, 1993, "The History of Mathematics."

“... it is a historical fact that from the 15th century to the late 20th century new developments in mathematics have been largely concentrated in Europe and North America.” page 561, Encyclopedia Britannica, Macro, v. 23, 1993, "The History of Mathematics."

“... the history of mathematics in Japan ... did not really begin until the end of the 16th century ...” page 3, Joseph Needham, Science and Civilization in China, volume 3.

Footnotes:

(1) Encyclopaedia Britannica, Macropaedia, Volume 23, 1993, 15th Edition, p. 263.

(2) Encyclopaedia Britannica, Macropaedia, Volume 23, 1993, 15th Edition, p. 262.

(3) Ibid., p. 272.

(4) Will Durant, The Age of Faith – The Story of Civilization: Volume 4 (New York, 1950), p. 240.

(5) Encyclopaedia Britannica, Macropaedia, Volume 13, 1993, 15th Edition p. 873.

(6) Eric Temple Bell, Men of Mathematics (New York, 1937), p. vii (Descartes), p. viii (Fermat, Pascal, Newton, Leibniz), p. ix (Bernoulli family, Euler, Lagrange), p. x (Cauchy), p. xii (Riemann); p. xiii (Cantor); and William Dunham, Journey Through Genius – The Great Theorems of Mathematics (New York, 1990), p. 157 (Descartes, Pascal), p.158 (Fermat), p. 184 (Leibniz), p. 191 (Bernoulli family), p. 207 (Euler), p. 250 (Cauchy), p. 57 (Riemann), p. 267 (Cantor), p. 282 (Godel), p. 286 (Russell); Judy Pearsall and Bill Trumble (editors), The Oxford Encyclopedic English Dictionary (New York, 1996), p. 974 (von Neumann), p. 1649 (Whitehead); Stephen Hawking (editor), God Created the Integers – The Mathematical Breakthroughs That Changed History (Philadelphia, 2007), p. 519 (Fourier), p. 1131 (Cantor).

(7) Morris Kline, Mathematical Thought From Ancient to Modern Times (New York, 1972), p. 401.

(8) Boyer, p. 490.

# CONCLUSION

A basic dichotomy in the history of mathematics is the two main streams of development. One stream started with the Greeks, continued with the Indians, passed to the Arabs, and reached its zenith with the Europeans. The other major stream was that of the Chinese, which passed knowledge to Korea and Japan. By the early 1600s, however, the picture became blurred: the Jesuits arrived in China, bringing the cutting-edge advances of Western mathematics, which eventually reached the rest of Asia. Thus mathematical knowledge was available everywhere, though, as stated at the beginning of this section, the major developments came out of Europe and North America into the late twentieth century.

The first important theme is that the Greeks were the first to use rigorous deductive proofs, and in the process made the most advances in geometry, culminating in Euclid’s Elements. The method of mathematical proof is in a sense the fundamental basis of mathematics and what defines the subject itself. The Chinese never reached that level of rigorous proof for their geometry or algebra, though they did develop many of the same theorems and techniques of the Greeks and Arabs, such as the Pythagorean theorem for finding the sides of a right triangle and Pascal’s triangle for finding the coefficients in a binomial expansion.(1) In the case of the Pythagorean theorem, the Greeks had a proof that the sides of every right triangle were such that the sum of the squares of the two shorter sides equaled the square of the longest side (hypotenuse). In mathematical symbols: x² + y² = z², where x and y are the lengths of the shorter sides and z is the length of the longest side. The Chinese, in contrast, knew the relationship between the three sides of right-angled triangles but did not have the certainty of a proof that the theorem (relationship) was true for all right-angled triangles. An inability to create mathematical proofs meant they could never build a body of rigorous mathematical knowledge like that contained in Euclid’s Elements or Newton’s Principia Mathematica.
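The relationship is easy to check numerically. The sketch below (my own illustrative code) tests the equation for given sides and also uses Euclid’s classical formula, which generates a Pythagorean triple (m² − n², 2mn, m² + n²) from any integers m > n > 0:

```python
def is_right_triangle(x, y, z):
    """Check whether legs x, y and hypotenuse z satisfy x^2 + y^2 = z^2."""
    return x * x + y * y == z * z

def euclid_triple(m, n):
    """Euclid's formula: integers m > n > 0 yield a Pythagorean triple."""
    return (m * m - n * n, 2 * m * n, m * m + n * n)

print(is_right_triangle(3, 4, 5))   # True
print(euclid_triple(2, 1))          # (3, 4, 5)
print(euclid_triple(3, 2))          # (5, 12, 13)
```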

The second major idea is how the Hindu-Arabic numerals, combined with the Base-10 positional system, became universally accepted across the world, used by virtually all people to express numerical quantities and show mathematical calculations. The Hindu-Arabic symbols 1, 2, 3, 4, 5, 6, 7, 8, 9 and zero were first developed by the Indians around the 500s, and somewhat later a positional system was added. This system passed to the Arabs and was fully expressed by al-Khwarizmi. In 1202 Fibonacci, an Italian merchant and mathematician, published Liber abbaci in Europe, which was instrumental in disseminating the nine number symbols together with a zero in a Base-10 positional notation system. Another world-changing form of standardization was the introduction of the metric system in France in 1792 by Lagrange, eventually adopted across the world.

The third major idea is Galileo’s use of the inductive method of logic, also referred to as the modern scientific method, which marked a fundamental turning point in man’s investigation of nature. It represented the beginning of the scientific revolution by combining mathematics with the experimental method. Again, the Chinese made discoveries and developed methods for solving many problems in mechanics, but the Europeans developed more exact, rigorous methods in explaining their observations. Such great thinkers of the West as Pythagoras, Archimedes, Euclid, Copernicus, Kepler, Galileo, and Newton used mathematics to discover the fundamental laws of nature–rules that would be true in every instance. It was no longer the case that one speculated on results. Now the use of mathematics was required to clear out the errors and build solid, mathematically sound, logically deductive or inductive conclusions. A world of quantity was substituted for the world of quality.

In the field of mathematics proper, the discovery of the calculus by Newton and Leibniz in the late 1600s was the great leap that clearly raised the West above the East. Up to this time, mathematics in China and the West differed little. Newton and Leibniz represented the culminating figures in European mathematics as they transformed the field itself, bringing mathematical entities nearer to physics, subjecting them to motion. In 1550, European mathematics had been hardly more advanced than the Arabic inheritance of Indian and Chinese discoveries. But there followed an astounding range of things basically new—the elaboration of a satisfactory algebraic notation at last by Viète (1580) and Recorde (1557), the full appreciation of what decimals were capable of by Stevin (1585), the invention of logarithms by Napier (1614) and the slide rule by Gunter (1620), the establishment of coordinate and analytic geometry by Descartes (1637), the first adding machine (Pascal, 1642), and the achievement of the infinitesimal calculus by Newton (1665) and Leibniz (1684).(2) Algebra and axiomatic-deductive geometry had basically evolved separately, the former among the Indians, Arabs, and Chinese, and the latter among the Greeks and their successors; now, the marriage of the two, the application of algebraic methods to the geometric field, was the greatest single step ever made in the progress of the exact sciences.(3)

Again, let us look at the situation in China. The Chinese should have been interested in mechanics for ships and in hydrostatics for their vast canal system (like the Dutch), in the manner of Archimedes and Bernoulli; in ballistics for guns (after all, they had possessed gunpowder four centuries before Europe); and in pumps for mines. If they were not, could not the answer be sought in the fact that little or no private profit was to be gained from any of these things in Chinese society, dominated by its imperial bureaucracy? Their techniques and industries were all essentially “traditional,” the product of many centuries of slow growth under bureaucratic oppression or, at best, tutelage, not the creations of enterprising merchant-venturers with big profits in sight.

The Chinese might have made the great leap in combining their vast body of astronomical observations with mathematical rigor to arrive at the same conclusions as Kepler on the elliptical orbits of the planets or Newton’s law of universal gravitation, but Chinese mathematicians “went down into a kind of tomb” after the 1300s until the 1600s, and the synthesis never occurred. As the sinologist Joseph Needham astutely points out, “Interest in nature was not enough, controlled experimentation was not enough, empirical induction was not enough, eclipse-prediction and calendar-calculation were not enough–all of these the Chinese had. Apparently a mercantile culture (Europe) alone was able to do what agrarian bureaucratic civilization (China) could not—bring to fusion point the formerly separated disciplines of mathematics and nature-knowledge.”(4)

Another critically important theme is that the geometric perspective begun by the Greeks slowly changed over time to an arithmetic outlook, culminating in the 1700s with Euler, who set mathematics on a definite arithmetic path that has continued to the present. The geometric outlook emphasized the use of geometric figures, diagrams, and proofs without the use of numbers (see the example below from Euclid’s Elements). The arithmetic perspective used algebraic expressions (i.e., letters to represent numbers shown in equations) to explain ideas. Lagrange, coming soon after Euler, made the point in the introduction to his masterpiece, Mecanique Analytique, that “No diagrams will be found in this work.” Lagrange used general algebraic formulas to express his ideas (see the sample near the end of this conclusion from Lagrange’s Mecanique Analytique).

Proposition 47 in Euclid’s Elements, the famous Pythagorean Theorem

In right-angled triangles, the square on the side opposite the right angle equals the sum of the squares on the sides containing the right angle.

Let ABC be a right-angled triangle having the angle BAC right.

I say that the square on BC equals the sum of the squares on BA and AC.

Describe the square BDEC on BC, and the squares GB and HC on BA and AC. Draw AL through A parallel to either BD or CE, and join AD and FC.


Since each of the angles BAC and BAG is right, it follows that with a straight line BA, and at the point A on it, the two straight lines AC and AG not lying on the same side make the adjacent angles equal to two right angles, therefore CA is in a straight line with AG.

For the same reason BA is also in a straight line with AH.

Since the angle DBC equals the angle FBA, for each is right, add the angle ABC to each, therefore the whole angle DBA equals the whole angle FBC.

Since DB equals BC, and FB equals BA, the two sides AB and BD equal the two sides FB and BC respectively, and the angle ABD equals the angle FBC, therefore the base AD equals the base FC, and the triangle ABD equals the triangle FBC.

Now the parallelogram BL is double the triangle ABD, for they have the same base BD and are in the same parallels BD and AL. And the square GB is double the triangle FBC, for they again have the same base FB and are in the same parallels FB and GC.

Therefore the parallelogram BL also equals the square GB.

Similarly, if AE and BK are joined, the parallelogram CL can also be proved equal to the square HC. Therefore the whole square BDEC equals the sum of the two squares GB and HC.

And the square BDEC is described on BC, and the squares GB and HC on BA and AC.

Therefore the square on BC equals the sum of the squares on BA and AC.

Therefore in right-angled triangles the square on the side opposite the right angle equals the sum of the squares on the sides containing the right angle.

Q.E.D.

Sample Page from Lagrange’s Mecanique Analytique

The next major development was non-Euclidean geometry, worked out by Gauss, Bolyai, Lobachevsky, and Riemann. From their work two new non-Euclidean geometries were discovered, and each was found to be just as valid and consistent as Euclidean geometry. It also soon became clear that it is impossible to tell which, if any, of the three geometries is the most accurate as a mathematical representation of the real world. Thus, mathematicians were forced to abandon the cherished concept of a single correct geometry and to replace it with the concept of equally consistent and valid alternative geometries. They were also forced to realize that mathematical systems are not merely natural phenomena waiting to be discovered; instead, mathematicians create such systems by selecting consistent axioms and postulates and studying the theorems that can be derived from them.

Einstein used this new perspective to explain his general theory of relativity in mathematical terms. In addition, Einstein added time as a fourth dimension to the field of geometry or, more broadly speaking, to man’s concept of the universe.

Lastly came the “group” concept, which emerged gradually from investigations in algebra and number theory by Galois and Lagrange, among others. The extraordinary progress of algebra, analysis, geometry, mechanics, and theoretical physics is due to the idea of a group and its associated set of invariants.(5)
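The group concept can be made concrete with a small check of its defining axioms (an illustrative sketch of my own; the function names are invented for this example):

```python
from itertools import product

def is_group(elements, op):
    """Check the group axioms for a finite set under a binary operation:
    closure, associativity, an identity element, and inverses."""
    elements = list(elements)
    # Closure: every product must land back in the set.
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # Associativity: (a*b)*c == a*(b*c) for all triples.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # Identity: some e with e*a == a*e == a for every a.
    identity = next((e for e in elements
                     if all(op(e, a) == a and op(a, e) == a for a in elements)),
                    None)
    if identity is None:
        return False
    # Inverses: every a has some b with a*b == identity.
    return all(any(op(a, b) == identity for b in elements) for a in elements)

# The nonzero residues modulo a prime form a group under multiplication...
print(is_group(range(1, 7), lambda a, b: (a * b) % 7))   # True
# ...but modulo a composite they do not (2 * 3 = 0 mod 6 breaks closure).
print(is_group(range(1, 6), lambda a, b: (a * b) % 6))   # False
```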

Footnotes:

(1) Joseph Needham, Science and Civilization in China, Volume 3 - Mathematics and the Sciences of the Heavens and the Earth (Cambridge, 1959), p. 91.

(2) Needham, p. 155.

(3) Ibid., p. 156.

(4) Ibid., p. 167-168.

(5) Isabella Bashmakova and Galina Smirnova, translated from the Russian by Abe Shenitzer, The Beginnings & Evolution of Algebra (Washington, D.C., 2000), p. 126.

# Math Problems for the 21st Century

As a closing note, the Clay Mathematics Institute announced on May 24, 2000, seven Millennium Prize Problems. A correct solution to any of the problems results in a U.S. $1 million prize awarded by the institute to the discoverer(s). Two of the problems come from people chosen for this book as the most influential, Poincaré and Riemann.

The seven problems are:

1. The Riemann Hypothesis

This is the only problem that remains unsolved from Hilbert’s list in 1900 (see the Hilbert biography for a discussion of the whole list), excluding a few that are too imprecise to have a definite answer. Mathematicians across the world agree that this obscure-looking question about the possible solutions to a particular equation is the most significant unsolved problem in mathematics.

The Riemann Hypothesis asserts that the Riemann zeta function has its zeros only at the negative even integers (the “obvious,” or trivial, zeros) and at complex numbers with real part ½; that is, all the non-obvious zeros of the zeta function are complex numbers with real part ½.

The problem was posed by the German mathematician Bernhard Riemann in 1859 as part of an attempt to answer one of the oldest questions in mathematics: What, if any, is the pattern of the prime numbers among all counting numbers? Around 300 B.C.E., the famous Greek mathematician Euclid proved that the primes continue forever; that is, there are infinitely many of them. Moreover, by inspection, the primes seem to “thin out” and become less common the higher up you go through the counting numbers. But can you conclude any more than that? The answer is yes.

Euclid also proved that every number bigger than 1 (i.e., every positive counting number bigger than 1) is either itself a prime or else can be written as the product of prime numbers in a way that is unique apart from the order in which the primes are written. For example,

21 = 3 x 7,

260 = 2 x 2 x 5 x 13.

The expressions to the right of the equals signs are the “prime decompositions” of the numbers 21 and 260, respectively. Thus, we can express Euclid’s result by saying that every counting number bigger than 1 is either prime or else has a unique (up to changing the order) prime decomposition.

This fact, called the fundamental theorem of arithmetic, tells us that the primes are like the chemist’s atoms—the fundamental building blocks out of which all numbers are constructed. Just as an understanding of the unique molecular structure of a substance can tell us a lot about its properties, knowing the unique prime decomposition of a number can tell us a lot about its mathematical properties.
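The fundamental theorem can be illustrated with a short trial-division routine (a sketch of my own, not part of the source text):

```python
def prime_decomposition(n):
    """Return the prime decomposition of an integer n > 1, smallest factor first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)       # whatever remains is itself prime
    return factors

print(prime_decomposition(21))    # [3, 7]
print(prime_decomposition(260))   # [2, 2, 5, 13]
```

These match the worked examples above: 21 = 3 × 7 and 260 = 2 × 2 × 5 × 13.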

Finally, one of the deepest observations about the pattern of the primes was made by Gauss, Riemann’s Ph.D. advisor. In 1791, when he was just 14 years old, Gauss noticed that the prime density DN = π(N)/N, where π(N) is the number of primes up to N, is approximately equal to 1/ln(N), where ln(N) is the natural logarithm of N. As far as Gauss could tell, the bigger N got, the better this approximation became. He conjectured that this was not just an accident, and that by making N sufficiently large, the density DN could be made as close as you want to 1/ln(N). Gauss was never able to prove his conjecture. This was finally achieved—using some very powerful math—in 1896 by the Frenchman Jacques Hadamard and the Belgian Charles de la Vallée Poussin, working independently. Their result is known today as the Prime Number Theorem.

There are at least two amazing aspects of this result. First, it demonstrates that, despite the seemingly random way that the primes crop up, there is a systematic pattern to the way they thin out. The pattern is not apparent if you look at an arbitrary finite stretch of the numbers. No matter how far out along the numbers you go, you can find clusters of several primes close together as well as stretches as long as you like in which there are no primes at all. Nevertheless, when you step back and look at the entire sequence of counting numbers, you see that there is a very definite pattern: The larger N becomes, the closer the density DN gets to 1/ln (N).
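Gauss’s observation is easy to reproduce. The sketch below (my own illustrative code) counts primes with the sieve of Eratosthenes and compares the density π(N)/N with 1/ln(N):

```python
import math

def prime_count(N):
    """Count the primes up to and including N using the sieve of Eratosthenes."""
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, N + 1, p):
                sieve[multiple] = False
    return sum(sieve)

# As N grows, the density of primes thins out roughly like 1/ln(N).
for N in (10**3, 10**4, 10**5, 10**6):
    density = prime_count(N) / N
    print(f"N={N:>8}  pi(N)/N={density:.5f}  1/ln(N)={1 / math.log(N):.5f}")
```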

The second and far more important feature of the Prime Number Theorem is the nature of the pattern of the primes that it uncovers. The counting numbers are discrete objects, invented by our ancestors some 8,000 years ago as a basis for trading. The natural logarithm function was invented by sophisticated mathematicians a few hundred years ago. It is not discrete; rather, its definition depends upon a detailed analysis of infinite processes, and forms part of the discipline sometimes called advanced calculus and sometimes called real analysis. One of several equivalent definitions of ln(x) is as the inverse to the exponential function e^x.

If you wanted to represent the prime numbers on a graph, the most obvious way would be to mark a point at each prime number on the x-axis, as shown in Figure 1 below.

[Figures 1 and 2 are not reproduced here: Figure 1 marks each prime as a point on the x-axis; Figure 2 shows the smooth curve of ln(x).]

The graph of the function ln(x), on the other hand, is a smooth, continuous curve, as shown in Figure 2 below. The question is this: Why is there a connection between the irregularly spaced points on the x-axis in Figure 1 and the smooth curve shown in Figure 2? How is it that the function ln(x) can tell us something about the pattern of the primes?

To sum up, a proof of the Riemann Hypothesis would add to our understanding of the prime numbers and the way they are distributed. This would do far more than satisfy the curiosity of mathematicians. Besides having implications in mathematics well beyond the patterns of the primes, it would have ramifications in physics and modern communications technology, specifically Internet security.

# The Riemann Hypothesis and the World Wide Web

Every time you use an ATM at your bank or carry out a business transaction on the Internet, you are depending on the mathematical theory of prime numbers to keep your transaction secure. This is how it works.

From the moment people started to send messages to one another, the following issue arose: How can you prevent an unauthorized person who gets hold of the message from understanding what it says? The answer is you encode the message (the technical term is “encrypt”) so that only the intended receiver can access the original contents. Julius Caesar used a very simple system to encrypt the messages he sent to his generals commanding the Roman legions across Europe. He simply replaced each letter of the alphabet in each word by another, according to a fixed scheme, such as “replace each letter by the previous one in the alphabet” (with Z replacing A and A replacing B). A message encrypted this way might look completely unreadable, but to today’s cryptanalyst, Caesar’s system is easily broken.
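Caesar's scheme is simple enough to write down in a few lines. The sketch below (mine, not from the source) implements the letter-shifting rule just described, with a shift of -1 standing for "replace each letter by the previous one":

```python
def caesar(text, shift):
    """Encrypt text by shifting each letter a fixed amount through the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # Wrap around the alphabet, so with shift -1, A becomes Z.
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)

# The scheme described above: replace each letter by the previous one.
print(caesar("ATTACK AT DAWN", -1))  # -> "ZSSZBJ ZS CZVM"
```

Decryption is just the same function with the opposite shift, which is precisely why a modern cryptanalyst breaks such a cipher in moments: there are only 25 shifts to try.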

Today, with the computer power available to any would-be codebreaker, it is extremely difficult to design a secure encryption system. (Much of the drive to develop computer technology during World War II came from the desire of each side to break the enemy’s codes.) If there is any kind of “recognizable” pattern to the encrypted text, a sophisticated statistical analysis using a powerful computer can usually crack the code. So, your encryption system needs to be sufficiently robust to resist computer attack.

These days, encryption systems invariably consist of two components: an encryption procedure and a “key.” The former is typically a computer program or, in the most widely used systems, a specially designed computer chip; the key is usually a secretly chosen number. To encrypt a message, the system requires not only the message but also the chosen key. The encryption program codes the message so that the encrypted text can be decoded only with the key. Since the security depends on the key, the same encryption program may be used by many people for a long period of time, and this means that a great deal of time and effort can be put into its design. An obvious analogy is that manufacturers of safes and locks design one type of lock which may be sold to millions of users, who rely upon the uniqueness of their own key—be it a physical key or a secret number combination—to provide security. Just as an enemy may know how your lock is designed and yet be unable to break into your safe, so too the enemy may know what encryption system you are using without being able to break your coded messages.

In early key systems, the message sender and receiver agreed beforehand on some secret key, which they then used to send each other messages. As long as they kept this key secret, the system would (it was hoped) remain secure. An obvious drawback with such an approach was that the sender and receiver had to agree in advance on the key they would use, and since they would clearly not want to transmit that key over any interceptable communication channel, they would have to meet ahead of time and choose the key (or perhaps employ a trusted courier to communicate it). Such a system is obviously unsuitable in many situations. In particular, it won’t work in, say, international banking or commerce, where it is often necessary to send secure messages across the world to someone the sender has never met.

In 1975, two mathematicians, Whitfield Diffie and Martin Hellman, proposed an entirely new type of encryption system: public key cryptography, in which the encryption method requires not one but two keys—one for encryption and the other for decryption. Such a system is used like this. A new user, say Nicole, obtains the standard program (or special computer chip) used by all members of the communication network concerned. She then generates two keys. One of these, her deciphering key, she keeps secret. The other key, the one used for encrypting messages sent to her by anyone else in the network, she publishes in a directory of the network users. To send a message to a network user, all that has to be done is to look up that user’s public encryption key, encrypt the message using that key, and send it. To decode the message it is of no help knowing (as anyone can) the encryption key. You need the decryption key, which only the intended receiver knows.

Several specific methods were developed to implement Diffie and Hellman’s general scheme. The one that gained most support, and which remains to this day the industry standard, was designed by Ronald Rivest, Adi Shamir, and Leonard Adleman, of the Massachusetts Institute of Technology. It is known by their initials as the RSA system, and is marketed by a commercial data security company, RSA Data Security, Inc., based in Redwood City, California. The secret decryption key used in the RSA method consists (essentially) of two large prime numbers (each having, say, 100 digits) chosen by the user. (The choice of the two primes is made using a computer, not chosen from any published list of primes, to which an enemy might have access. Modern computers can find large primes with ease.) The public encryption key is the product of these two primes. The system’s security depends upon the fact that there is no known quick method of factoring large numbers. This means that it is practically impossible to recover the decryption key (the two primes) from the public encryption key (their product). Message encryption corresponds to multiplication of two large primes (an easy task); decryption corresponds to the opposite process of factoring (which is hard). (This is not exactly how the system works. Some moderately sophisticated mathematics is involved.)
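The arithmetic core of RSA can be sketched with toy numbers. The primes below are absurdly small (real keys use primes of roughly 100 digits, plus the further mathematical machinery the text alludes to), but they show how the public product and the secret primes divide the labor:

```python
# Toy RSA: illustration only, not a secure implementation.
p, q = 61, 53                 # the two secret primes
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # 3120; computable only if you know p and q
e = 17                        # public exponent, chosen coprime to phi
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

message = 65                  # a message, encoded as a number smaller than n
ciphertext = pow(message, e, n)   # anyone can encrypt with the public (e, n)
recovered = pow(ciphertext, d, n) # only the holder of d can decrypt
assert recovered == message
```

An eavesdropper who sees n = 3233 can factor it instantly at this size; with a 200-digit n, recovering p and q, and hence d, is what no known method can do quickly.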

At present, the largest numbers that can be factored on a powerful computer in less than a few days have around 90 to 100 digits, so using a key obtained by multiplying together two 100-digit primes, i.e., a number with 200 digits, should make the RSA system extremely secure. But there is a danger. The methods mathematicians use to factor large numbers are not simply trial-and-error searches such as you might use if I asked you to find the prime factors of 221. Doing it that way is fine for fairly small numbers, but it could take a powerful computer over a year to factor a 60-digit number. Instead, mathematicians use some highly sophisticated techniques to find prime factors. The methods they have developed are clever and powerful, and are getting steadily more so. Those methods make use of much that we know about prime numbers, and every time there is an advance in our knowledge of the primes, there is a possibility that it will lead to a new method for factoring numbers.
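Trial division, the naive method the text contrasts with the sophisticated ones, can be sketched as follows; it factors 221 instantly but becomes hopeless as the digits mount:

```python
def trial_division(n):
    """Factor n by testing divisors up to sqrt(n).
    Fine for small n; hopeless for the 200-digit numbers in RSA keys."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(trial_division(221))  # -> [13, 17]
```

The loop runs roughly sqrt(n) times, so every two extra digits in n multiply the work by ten, which is why serious factoring algorithms take an entirely different route.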

Since the Riemann Hypothesis tells us so much about the primes, a proof of that conjecture might well lead to a major breakthrough in factoring techniques. Not because we will then know the hypothesis is true. Suspecting that it is true, mathematicians have been investigating its consequences for years. Indeed, some factoring methods work on the assumption that it’s true. Rather, the fear among the encryption community is that the methods used to prove the hypothesis will involve new insights into the pattern of the primes that will lead to better factoring methods.

Clearly, then, with Internet security and large parts of contemporary mathematics hanging in the balance, far more is at stake in the Riemann Problem than the $1 million Millennium Prize.

2. Yang-Mills Theory and the Mass Gap Hypothesis

The Yang-Mills equations come from quantum physics. They were formulated almost fifty years ago by the physicists Chen-Ning Yang and Robert Mills to describe all of the forces of nature other than gravity. They do an excellent job. The predictions culled from these equations describe particles that have been observed at laboratories around the world. But while the Yang-Mills theory works in practical terms, it has not yet been worked out as a mathematical theory. The second Millennium Problem asks, in part, for that missing mathematical development of the theory, starting from axioms.

The mathematics needs to meet a number of conditions that have been observed in the laboratory. In particular, it should establish (mathematically) the “Mass Gap Hypothesis,” which concerns supposed solutions to the Yang-Mills equations. This hypothesis is accepted by most physicists, and provides an explanation of why electrons have mass. Proof of the Mass Gap Hypothesis is regarded as a good test of a mathematical development of the Yang-Mills theory. It would help the physicists as well. They cannot explain why electrons have mass either; they simply observe that they do.

3. The P Versus NP Problem

This is the only Millennium Problem that is about computers. Many people will find this surprising, since it is widely assumed that most mathematics is done on computers these days. However, this is not true. Most numerical calculations are done on computers, but numerical calculation is only a very small part of mathematics, and not a typical part at that.

Although the electronic computer came out of mathematics—the final pieces of the math were worked out in the 1930s, a few years before the first computers were built—the world of computing has hitherto generated only two mathematical problems that would merit inclusion among the world’s most important. Both problems concern computing as a conceptual process rather than any specific computing devices, although this does not prevent them from having important implications for real computing. Hilbert included one of them as number 10 on his 1900 list. That problem—which asks for a proof that certain equations cannot be solved by a computer—was solved in 1970.

The other problem is more recent. This is a question about how efficiently computers can solve problems. Computer scientists divide computational tasks into two main categories:

(1) Tasks of type P can be tackled effectively on a computer.

(2) Tasks of type E could take millions of years to complete.

Unfortunately, most of the big computational tasks that arise in industry and commerce fall into a third category, NP, which seems to be intermediate between P and E. But is it? Could NP be just a disguised version of P? Most experts believe that NP and P are not the same (i.e., that computational tasks of type NP are not all of type P). But after thirty years of effort, no one has been able to prove whether or not NP and P are the same. A positive solution would have significant implications for industry, for commerce, and for electronic communications, including the World Wide Web.
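The asymmetry that makes NP-type tasks hard to classify can be illustrated with the subset-sum problem: checking a proposed answer takes a single pass, while the obvious way to find one examines up to 2^n subsets. (An illustrative sketch of my own; the example numbers are invented.)

```python
from itertools import combinations

def verify(numbers, subset, target):
    """Checking a proposed solution is fast: membership plus one addition."""
    return all(x in numbers for x in subset) and sum(subset) == target

def search(numbers, target):
    """Finding a solution by brute force may examine up to 2^n subsets."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922]
print(search(nums, 4592))  # slow to find...
print(verify(nums, (267, 961, 1598, 1766), 4592))  # ...but instant to check
```

Ten numbers means at most 1,023 subsets, which is trivial; a few hundred numbers would already outlast the universe at a subset per nanosecond. Whether some clever shortcut always avoids that explosion is exactly the P versus NP question.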

4. The Navier-Stokes Equations

The Navier-Stokes equations describe the motion of fluids and gases—such as water around a pier or air over a plane wing. They are of a kind that mathematicians call partial differential equations. College students in science and engineering routinely learn how to solve partial differential equations, and the Navier-Stokes equations look just like the kinds of equations given as exercises in a college calculus textbook. But appearances can be deceptive. To date, no one has a clue how to find a formula that solves these particular equations—or even whether such a formula exists.

This failure has not prevented marine engineers from designing efficient boats or aeronautical engineers from building better aircraft. Although there is no general formula that solves the equations (say, in the way that the quadratic formula solves all quadratic equations), the engineers who design high-performance boats and aircraft can use computers to solve particular instances of the equations in an approximate way. Like the Yang-Mills Problem, the Navier-Stokes Problem is another case where mathematics wants to catch up with what others, in this case engineers, are already doing.

5. The Poincaré Conjecture

This problem was first raised in 1904 by the French mathematician Henri Poincaré. It starts with a seemingly simple question: How can you distinguish an apple from a doughnut? This does not seem like a question that would lead to a $1 million math problem. But what makes it hard is that Poincaré wanted a mathematical answer that could be used in more general situations. That rules out the more obvious solutions, such as simply taking a bite of each.

Here is how Poincaré himself answered the question. If you stretch a rubber band around the surface of an apple, you can shrink it down to a point by moving slowly, without tearing it and without allowing it to leave the surface. On the other hand, if you imagine that the same rubber band has somehow been stretched in the appropriate direction around a doughnut, then there is no way of shrinking it to a point without breaking either the rubber band or the doughnut. Surprisingly, when you ask whether the same shrinking band idea distinguishes between four-dimensional analogues of apples and doughnuts—which is what Poincaré was really after—no one has been able to provide an answer. The Poincaré conjecture says that the rubber band idea does identify four-dimensional apples.

This problem lies at the heart of topology, one of the most fascinating branches of present-day mathematics. Besides its inherent and sometimes quirky fascination—for instance, it tells you the deep and fundamental ways in which a doughnut is the same as a coffee cup—topology has applications in many areas of mathematics, and advances in the subject have implications for the design and manufacture of silicon chips and other electronic devices, in transportation, in understanding the brain, and even in the movie industry.

In 2003, the Poincaré Conjecture was solved by the Russian mathematician Grigori Perelman. Perelman's proof tells us that every three-manifold is built from a set of standard pieces, each with one of eight well-understood geometries. Amazingly, he refused the prize money, telling the Interfax news agency that his contribution was no greater than that of Richard Hamilton, who had carried out earlier work on the problem.

6. The Birch and Swinnerton-Dyer Conjecture

This problem is in the same general area of mathematics as the Riemann Hypothesis. Since the time of the ancient Greeks, mathematicians have wrestled with the problem of describing all solutions in whole numbers x, y, z to algebraic equations like

x^2 + y^2 = z^2

For this particular equation, Euclid gave the complete solution—that is to say, he found a formula that produces all the solutions. In 1994, Andrew Wiles proved that for any exponent n greater than 2, the equation

x^n + y^n = z^n

has no nonzero whole-number solutions. (This was the result known as Fermat’s Last Theorem. See a detailed discussion of Wiles’s solution under the Fermat biography.) But for more complicated equations it becomes extremely difficult to discover whether there are any solutions, or what they are. The Birch and Swinnerton-Dyer Conjecture provides information about the possible solutions to some of those difficult cases.
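Euclid's complete solution to the first equation can be stated as a formula: for whole numbers m > n > 0, the triple (m^2 - n^2, 2mn, m^2 + n^2) always satisfies x^2 + y^2 = z^2, and every primitive solution arises this way for suitable m and n. A quick sketch:

```python
def euclid_triple(m, n):
    """Euclid's parametrization: for m > n > 0 this always yields
    a whole-number solution of x^2 + y^2 = z^2."""
    return m * m - n * n, 2 * m * n, m * m + n * n

# Generate a few triples and confirm each one is a genuine solution.
for m in range(2, 5):
    for n in range(1, m):
        x, y, z = euclid_triple(m, n)
        assert x * x + y * y == z * z
        print(x, y, z)
```

The first pair m = 2, n = 1 gives the familiar 3, 4, 5 triangle; m = 3, n = 2 gives 5, 12, 13; and so on without end.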

As with the Riemann Hypothesis, to which it is related, a solution to this problem will add to our overall understanding of the prime numbers. Whether it would have comparable implications outside of mathematics is not clear. Proving the Birch and Swinnerton-Dyer Conjecture might turn out to be important only to mathematicians.

On the other hand, it would be foolish to classify this or any mathematical problem as being “of no practical use.” Admittedly, the mathematicians who work on the abstract problems of “pure mathematics” are usually motivated more by curiosity than by any practical consequences. But again and again, discoveries in pure mathematics have turned out to have important practical applications.

On top of that, the methods mathematicians develop to solve one problem often turn out to have applications to quite different problems. This was definitely the case with Andrew Wiles’s proof of Fermat’s Last Theorem. Similarly, a proof of the Birch and Swinnerton-Dyer Conjecture would almost certainly involve new ideas that will later be found to have other uses.

7. The Hodge Conjecture

This is another “missing piece” question about topology. The general question is about how complicated mathematical objects can be built up from simpler ones. Of all the Millennium Problems, this is perhaps the one the layperson will have the most trouble understanding—not because the underlying intuitions are any more obscure than for the other problems, or because it is believed to be harder than any of the other six, but because the Hodge Conjecture is a highly technical one, having to do with the techniques mathematicians use to classify certain kinds of abstract objects. It arises deep within the subject, at a high level of abstraction, and the only way to reach it is by way of those layers of increasing abstraction. This is why I have put this problem last, following Keith Devlin’s order in his book on the Millennium Problems.

The path to the conjecture began in the first half of the twentieth century, when mathematicians discovered powerful ways to investigate the shapes of complicated objects. The basic idea was to ask to what extent you can approximate the shape of a given object by gluing together simple geometric building blocks of increasing dimension. This technique turned out to be so useful that it was generalized in many different ways, eventually leading to powerful tools that enable mathematicians to catalogue many different kinds of objects. Unfortunately, the generalization obscured the geometric origins of the procedure, and the mathematicians had to add pieces that did not have any geometric interpretations at all. The Hodge conjecture asserts that for one important class of objects (called projective algebraic varieties), the pieces called Hodge cycles are, nevertheless, combinations of geometric pieces (called algebraic cycles).

Key References:

1. The Millennium Problems – The Seven Greatest Unsolved Mathematical Puzzles of Our Time by Keith Devlin, 2002.

2. For precise descriptions of the seven problems, together with the official rules for the competition, consult the Clay Mathematics Institute website at www.claymath.org. (The website also features a twenty-minute streaming video on the Millennium Problems, presented by Keith Devlin and produced by the Clay Institute.)

3. On the Poincaré Conjecture – Perfect Rigor: A Genius and the Mathematical Breakthrough of the Century by Masha Gessen, 2009.