The Sciences
Volume 39, Number 1
January/February 1999

NOTA BENE

by Hans Christian von Baeyer


In April 1837 the English chemist John Dalton, a peace-loving Quaker, erupted with uncharacteristic ferocity over the remarks of a visitor. To an outsider, Dalton's reaction might have seemed silly and overblown: at issue was no weighty scientific matter, just the conventions of notation for the new atomic theory of chemistry. At age seventy, Dalton was universally acclaimed as the architect of that theory, and no doubt he felt his scientific priority had earned him the right to his own notational preference. He had represented atoms as small circles, each kind distinguished by a unique mark, and molecular combinations of atoms as juxtapositions of two or more circles. Younger chemists, though, were beginning to adopt the modern alphabetic notation, as in C + O₂ → CO₂. Dalton vehemently objected, and on the occasion of his argument with the visitor he worked himself into such a frenzy that he suffered a stroke.

Mark Tansey, Close Reading, 1990

Dalton ultimately recovered—but why would anyone get so exercised over mere nomenclature? After all, the substance of Dalton's revolutionary theory—which postulates that elements are made up of atoms, and that compounds are made up of molecules—was not at issue. Was his fury just a symptom of an old man's sentimental attachment to the inventions of his youth? Or was there more to it? Can notation really matter that much? Of course it can. The way in which the building blocks of a body of thought are designated profoundly affects the development of that discipline. Compare, for instance, the quasi-phonetic alphabetic symbols of English with the ideograms of Chinese: the abstract English symbols have readily lent themselves to the Western philosophical preoccupation with essences, whereas the more representational Chinese symbols have suited a more pragmatic philosophy of existence. Or reflect on the overwhelming superiority of Arabic over Roman numerals for doing the sums and products of arithmetic. Music attained its present sophistication even as musical notation evolved to its contemporary state. Letters, numbers and musical notes are atoms of human expression, and their forms help shape the world of ideas as surely as the forms of physical atoms help shape the world of matter.

As it happens, Dalton's obsession with notation was part of a much broader tradition in European culture, which reaches back to the very dawn of Western civilization: mnemotechnics. Its purpose was to reinforce memory; not surprisingly, its popularity began to wane after the invention of printing. According to Marcus Tullius Cicero, the Roman statesman and orator, mnemotechnics—"the art of memory," in the more felicitous phrase of the English historian Frances A. Yates—was invented by the Greek poet Simonides of Ceos around 500 B.C.

The basic principle is simple. Suppose you must learn a complex sequence of ideas by heart. You start by recalling a specific place, such as a familiar house, street or public building. As you mentally wander through the place along a fixed route, you attach the concepts to be remembered, one by one, to the objects you encounter. You might tack the first idea, for instance, above the front door, place the second idea on a table in the entrance hall, tuck the third into a vase on the table, and so on, until the entire sequence has been distributed throughout the house. (In some versions of the procedure, you might even represent the idea itself by a little sign or symbol called a nota, a word that links mnemotechnics with notation.) When the time comes to recite your piece, you imagine yourself walking along the prescribed path, and you retrieve the ideas, one by one, without gaps and in perfect order.
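
To make the procedure concrete, here is a minimal sketch of the method of loci as a small program; the language, the route and the ideas are all illustrative, not part of the historical record.

```python
# A minimal sketch of the method of loci as a data structure.
# The route, the ideas, and all names here are illustrative.

# The "route" is a fixed, ordered walk through a familiar place.
route = ["front door", "table in the entrance hall", "vase on the table"]

# The ideas to be memorized, in the order they must be recited.
ideas = ["first idea", "second idea", "third idea"]

# Attaching each idea to the next object along the route builds
# the memory palace: an ordered association of places and ideas.
memory_palace = list(zip(route, ideas))

# Reciting means walking the same route again and reading the
# ideas back, one by one, without gaps and in perfect order.
for place, idea in memory_palace:
    print(f"At the {place}: {idea}")
```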

In her 1966 book The Art of Memory, Yates reports that the system was highly effective, and that it remained in common use throughout the Middle Ages and the Renaissance. It was perfected and advocated by several distinguished writers, including Aristotle, Saint Thomas Aquinas, Petrarch, Giordano Bruno, Francis Bacon and numerous lesser scholars. The psychological underpinnings of the art of memory were stated cogently by Aristotle. "The soul never thinks without a mental picture," he wrote, and "no one could ever learn or understand anything, if he had not the faculty of perception." Simply put, things are easier to understand and remember than ideas. To memorize ideas, associate them with things.

Dalton's vehement defense of his notation can be seen as a testimonial to the truth of Aristotle's insight. In the nineteenth century, two independent and quite different concepts of the atom coexisted. On one hand there was the physical atom: a hard little kernel, a miniature billiard ball, as envisioned 2,300 years earlier by the Greek philosophers Leucippus and Democritus. Many nineteenth-century scholars agreed that it was useful to think of the world as made up of those little kernels.

The second conception of the atom was Dalton's chemical atom. It was also a least unit of matter, but it was not regarded as a material thing. The chemical atom was, in the main, a convenience for succinctly summarizing a huge body of chemical knowledge—more like a share of stock in a railroad than a physical object. The reality of a share of stock resides not in the physical piece of paper that certifies its existence, but in the hypothetical fraction of a railroad company's assets that the share represents. Until the beginning of this century it was not unreasonable to believe in physical atoms and chemical atoms as useful scientific concepts—and to deny that they were related. Dalton, however, was a hundred years ahead of his time. For him, physical and chemical atoms were the same thing, and they were both perfectly real—as, of course, they are for us. Dalton represented each atom by its own little circle, and thought of each atom as a miniature ball. In his system, combining carbon and oxygen to make carbon dioxide might be written
● + ○ + ○ → ○●○
(a reaction that, as I noted earlier, is now written C + O₂ → CO₂). But Victorian scientists, perhaps swayed by the high cost of printing such hieroglyphics, lampooned Dalton's primitive but straightforward idea: "Atoms are round bits of wood invented by Mr. Dalton," one of them scoffed in 1887.

In short, for Dalton the little circles were pictographs of real objects, albeit objects whose outlines were not to be seen individually until 1981—almost two centuries after Dalton imagined them. To represent them by letters such as C for carbon and O for oxygen was to deny their physical reality, and to reduce them to accounting tricks. Worse, it deprived people of a mental picture to attach to atoms, and thus violated the ancient principles of the art of memory. Of the new symbols Dalton wrote: "A young student in chemistry might as soon learn Hebrew as make himself acquainted with them. . . . [They] equally perplex the adepts of science, discourage the learner . . . [and] cloud the beauty and simplicity of the Atomic Theory."

Dalton was not the first physical scientist to insist on a particular symbolic notation. One of the most famous controversies in the history of science was the quarrel between Newton and the German philosopher Gottfried Wilhelm Leibniz over who deserved credit for inventing the differential calculus. (Modern scholars agree that Newton, in fact, deserves priority, but that's beside the point here.) Newton and Leibniz not only argued over who invented the calculus, but they each created a different system for its notation. To the idea of a function, say x(t)—which specifies the changing position of a body as time passes—Newton added the derivative, which he denoted by a dot over the x. The dot by itself meant nothing, but ẋ, the derivative of x with respect to t, signified the result of a limiting process. The derivative is a triumph of human thought, and even in terse mathematical language it requires a paragraph to be defined.

Leibniz denoted the derivative of x by dx/dt. That more complicated symbol looks like a fraction, but it is definitely not a fraction. It is, however, approximately equal to a fraction, and therein lies its power. For a student first encountering the calculus, Leibniz's symbol is far more evocative than ẋ. It recalls the geometric meaning of the derivative, which is a slope, and which for a straight line is defined by the very fraction suggested by dx/dt. As applied in physics, Leibniz's "fraction" displays the correct units of measurement in a way that a dot cannot. Speed, for instance, is given by dx/dt and is measured in meters per second (assuming x is in meters and t is in seconds). Even in the electronic age, Leibniz's notation is superior, because a digital computer often evaluates derivatives approximately: as fractions. For all those reasons Leibniz's notation won out over Newton's and is in virtually universal use today. And Leibniz's success in inventing a superior notation was no accident. He knew the literature of the art of memory very well, and left extensive discussions of it in both published and unpublished form.
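
Leibniz's "fraction" even anticipates the way a computer estimates a derivative: replace the infinitesimal quotient dx/dt with a small but finite one. Here is a minimal sketch of that idea in Python (the test function and step size are illustrative):

```python
def derivative(x, t, dt=1e-6):
    """Estimate dx/dt at time t as the finite fraction
    (x(t + dt) - x(t)) / dt, just as Leibniz's symbol suggests."""
    return (x(t + dt) - x(t)) / dt

# Example: for x(t) = t**2 the exact derivative is 2t, so at
# t = 3.0 the estimate should be very close to 6.0.
print(derivative(lambda t: t**2, 3.0))
```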

Twentieth-century physics supplies numerous further illustrations of the need for effective notation. Without it, for instance, Einstein could not have formulated his general theory of relativity. To describe the geometric relation between each pair of the four space-time dimensions, he needed 4 × 4, or sixteen, independent quantities, called the metric coefficients, which are determined by solving sixteen equations. Some parts of those equations include 4⁴, or 256, separate terms. To keep control over such a forest of variables, clear notation is more than a question of elegance or printing costs—it is a necessity. Whereas Newton was able to keep the definition of a derivative in mind, and could therefore get away with his primitive notation, no one can cut through the dense tangle of variables of general relativity and gain an overview without effective notational support.

Einstein managed to denote the sixteen metric coefficients by a single symbol gμν, known as the metric tensor. With that tremendous act of streamlining it became possible to attach a physical meaning to the symbol, and Einstein proposed that it was nothing more or less than the familiar gravitational field. The sixteen interrelated equations specifying gμν became a single one: Einstein's celebrated field equation.
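
For reference, in modern notation (and omitting the cosmological term) that single equation reads:

```latex
R_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} R = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

The left side is built entirely from gμν and its derivatives and encodes the curvature of space-time; the right side describes the energy and momentum of the matter that does the curving.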

A telling indication of the essential role of notation is the space Einstein devoted to the topic in his historic 1916 paper, "The Foundation of the General Theory of Relativity." The paper runs some fifty-three printed pages, of which the first thirty-one are dedicated to an exposition of tensors and the way they are manipulated. Only after that lengthy preamble, which introduced a new language to most physicists of the time, could Einstein proceed to the new physics of the paper, his "theory of the gravitational field." To be sure, tensors are more than mere notational devices, and they carry both mathematical and physical meaning. But as signs, tensors also manage to make a nearly intractable subject accessible to thought and discussion—even if, on first encounter, they look for all the world like Hebrew.

The most spectacularly successful notation in modern physics was introduced by the late Richard Feynman. During and just after the Second World War, he and several other theorists, both American and Japanese, combined quantum mechanics with special relativity to create a complete, consistent theory of how electrons interact with one another, with atoms and with light. Known as QED, for quantum electrodynamics, the theory is powerful, accurate and beautiful, but its equations are exceedingly complex. Some calculations in QED have required years of effort.

To minimize errors and save time, Feynman invented an ingenious graphic shorthand for the calculations, made up of little dots, lines and wiggles that can be thought of as plots of all the ways two or more electrons can interact. The drawings, now known universally as Feynman diagrams, look simple, but they stand for something very complicated. Feynman actually introduced each line, dot and wiggle not to represent a particle or the path of a particle, but to stand in for a complicated mathematical expression. Hence Feynman diagrams were initially a kind of stenography, but they ultimately made computations much faster and more reliable. Doing QED before Feynman was a bit like doing arithmetic with Roman numerals.
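
To give a taste of what a single wiggle compresses, here is the standard textbook correspondence for QED (in the usual Feynman-gauge conventions, which are not taken from this essay): an internal photon line carrying momentum q, and each vertex where it meets an electron line, stand for the expressions

```latex
\text{photon line:}\quad \frac{-i\, g_{\mu\nu}}{q^{2} + i\epsilon}
\qquad\qquad
\text{vertex:}\quad -i e\, \gamma^{\mu}
```

Multiplying the factors for every line and vertex in a drawing, integrating over the internal momenta, and summing over all the drawings reproduces the calculation the diagrams abbreviate.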

For postwar physicists such as my colleagues and me, Feynman diagrams have become a lingua franca. When we discuss atoms and nuclei in front of a blackboard or over paper napkins in a restaurant, we don't scribble numbers or letters; we make squiggly line drawings that resemble stick men. So with time, Feynman diagrams have taken on a physical meaning well beyond their role as shorthand symbols.

For example, the concept of a virtual particle—a particle that pops up for an instant out of nowhere, living ever so briefly on energy borrowed from the vacuum—was not at first a part of QED. It was suggested by parts of Feynman diagrams. Similarly, Feynman's peculiar suggestion that antiparticles are just ordinary particles traveling backward in time was dictated not by mathematics, but by his diagrams, which made the idea plausible. Finally, Feynman diagrams led to a better understanding of how forces are generated between particles. In QED, forces must be mediated by an exchange of virtual particles. That seems intuitively plausible when the forces are repulsive: imagine that two particles are like a pair of skaters who throw a ball back and forth; they are forced apart by both the recoil and the impact of each exchange. It took Feynman diagrams, however, to show that attractive forces are generated in the same way, since such a process has no counterpart in the everyday world.

Thus Feynman's notation revealed the messages hidden in the formalism of QED, and required physicists to modify their ideas. It has taken us from the experimental data it was designed to describe into new and unfamiliar realms, just as Dalton's circles transported him from the laboratory into the unseen but vividly imagined world of the atom.

And what of the future? What revolutions in notation can one anticipate, and what new understanding will they bring about? The answers await another Dalton, a Leibniz or a Feynman. But I would suggest at least two subject areas in which notational innovation would be welcome. The first is superstring theory. When bits of string replace particles as the putative fundamental building blocks of matter, the mathematics of their interactions becomes even more complex than it is in QED. Feynman diagrams have been adapted to superstring theory, but a more radical revision of that visual language is needed, possibly one that exploits the ability of modern computers to generate evocative three-dimensional color images.

Here is just one puzzle that such a revision might help clarify. In superstring theory the strings mediate all the forces of nature, including gravity, just as virtual particles mediate the electromagnetic force in QED. Hence, in a sense the gravitational field itself is made up of strings. But according to general relativity, the gravitational field is space-time. How is it, then, that strings can make up space-time and still be imagined as existing in space-time? None of the superstring experts I have asked have been able to clear up that muddle for me. Presumably—if the theory survives at all—the answers to such questions will become obvious, as the equations of the theory are digested down to a more popular level. A suggestive notation could help that process along.

Quantum mechanics itself presents another opportunity for better notation. The theory admits of two ways atomic systems can develop over time: the first is via smooth, predictable evolution, according to well-established rules; the second, interspersed with the first, is via abrupt, random changes called quantum jumps or leaps. The two processes might be represented respectively by the run and rise of the steps of a staircase, and both are well understood. Most atomic processes physicists have described so far deal with only two or three steps up and down the stairs.
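
In standard notation (textbook conventions, not the essay's own), the smooth "run" is Schrödinger evolution and the abrupt "rise" is a jump to a measurement outcome k:

```latex
% Smooth, predictable evolution (the "run"):
i\hbar\, \frac{\partial}{\partial t}\, \lvert \psi(t) \rangle
    = \hat{H}\, \lvert \psi(t) \rangle

% Abrupt, random quantum jump to outcome k (the "rise"):
\lvert \psi \rangle \;\longrightarrow\;
    \frac{\hat{P}_{k}\, \lvert \psi \rangle}
         {\bigl\lVert \hat{P}_{k}\, \lvert \psi \rangle \bigr\rVert}
\quad \text{with probability } \bigl\lVert \hat{P}_{k}\, \lvert \psi \rangle \bigr\rVert^{2}
```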

But in recent years much effort has gone into an attempt to understand the puzzling interface between the quantum world of the atom and our own everyday, or macroscopic, experience. Macroscopic events differ from both the smooth "horizontal" and the jarring "vertical" parts of a quantum process; they are more like long passages across billions of steps, which, viewed from a distance, look like a smooth ramp. Such long sequences are called histories, and they were not part of the original formalism of quantum mechanics. Their mathematical description is a controversial and much-debated topic of current research. Here, then, is a second place for evocative notation. Such notation might suggest better ways to describe histories, and to understand how they relate to the world as we experience it.

However useful, even essential, notational transparency is to chemists and linguists, mathematicians and physicists, there is, of course, a broader reason for insisting on its importance. The reason is simply that science is a public undertaking and, in fine, the agent of public knowledge. As theorists continue their headlong rush into abstractions too difficult for nonscientists to follow, it is incumbent upon them to continue in the tradition of Dalton, Leibniz and Feynman by providing the things and images, the notae, that can help ordinary people appreciate their arcane business.

Hans Christian von Baeyer is chancellor professor of physics at the College of William and Mary in Williamsburg, Virginia. His newest book, Maxwell's Demon: Why Warmth Disperses and Time Passes, was published last year.