Steven Weinberg: “Against Philosophy” (from “Dreams of a Final Theory”).

Physicists get so much help from subjective and often vague aesthetic judgments that it might be expected that we would be helped also by philosophy, out of which after all our science evolved. Can philosophy give us any guidance toward a final theory? The value today of philosophy to physics seems to me to be something like the value of early nation-states to their peoples. It is only a small exaggeration to say that, until the introduction of the post office, the chief service of nation-states was to protect their peoples from other nation-states. The insights of philosophers have occasionally benefited physicists, but generally in a negative fashion—by protecting them from the preconceptions of other philosophers. I do not want to draw the lesson here that physics is best done without preconceptions. At any one moment there are so many things that might be done, so many accepted principles that might be challenged, that without some guidance from our preconceptions one could do nothing at all. It is just that philosophical principles have not generally provided us with the right preconceptions. In our hunt for the final theory, physicists are more like hounds than hawks; we have become good at sniffing around on the ground for traces of the beauty we expect in the laws of nature, but we do not seem to be able to see the path to the truth from the heights of philosophy. Physicists do of course carry around with them a working philosophy. For most of us, it is a rough-and-ready realism, a belief in the objective reality of the ingredients of our scientific theories. But this has been learned through the experience of scientific research and rarely from the teachings of philosophers. This is not to deny all value to philosophy, much of which has nothing to do with science. I do not even mean to deny all value to the philosophy of science, which at its best seems to me a pleasing gloss on the history and discoveries of science. But we should not expect it to provide today's scientists with any useful guidance about how to go about their work or about what they are likely to find. I should acknowledge that this is understood by many of the philosophers themselves. After surveying three decades of professional writings in the philosophy of science, the philosopher George Gale concludes that "these almost arcane discussions, verging on the scholastic, could have interested only the smallest number of practicing scientists." Wittgenstein remarked that "nothing seems to me less likely than that a scientist or mathematician who reads me should be seriously influenced in the way he works." This is not merely a matter of the scientist's intellectual laziness. It is agonizing to have to interrupt one's work to learn a new discipline, but scientists do it when we have to. At various times I have managed to take time off from what I was doing to learn all sorts of things I needed to know, from differential topology to Microsoft DOS. It is just that a knowledge of philosophy does not seem to be of use to physicists—always with the exception that the work of some philosophers helps us to avoid the errors of other philosophers. It is only fair to admit my limitations and biases in making this judgment. After a few years' infatuation with philosophy as an undergraduate I became disenchanted. The insights of the philosophers I studied seemed murky and inconsequential compared with the dazzling successes of physics and mathematics. 
From time to time since then I have tried to read current work on the philosophy of science. Some of it I found to be written in a jargon so impenetrable that I can only think that it aimed at impressing those who confound obscurity with profundity. Some of it was good reading and even witty, like the writings of Wittgenstein and Paul Feyerabend. But only rarely did it seem to me to have anything to do with the work of science as I knew it. According to Feyerabend, the notion of scientific explanation developed by some philosophers of science is so narrow that it is impossible to speak of one theory being explained by another, a view that would leave my generation of particle physicists with nothing to do. It may seem to the reader (especially if the reader is a professional philosopher) that a scientist who is as out of tune with the philosophy of science as I am should tiptoe gracefully past the subject and leave it to experts. I know how philosophers feel about attempts by scientists at amateur philosophy. But I do not aim here to play the role of a philosopher, but rather that of a specimen, an unregenerate working scientist who finds no help in professional philosophy. I am not alone in this; I know of no one who has participated actively in the advance of physics in the postwar period whose research has been significantly helped by the work of philosophers. I raised in the previous chapter the problem of what Wigner calls the "unreasonable effectiveness" of mathematics; here I want to take up another equally puzzling phenomenon, the unreasonable ineffectiveness of philosophy. Even where philosophical doctrines have in the past been useful to scientists, they have generally lingered on too long, becoming of more harm than ever they were of use. Take, for example, the venerable doctrine of "mechanism," the idea that nature operates through pushes and pulls of material particles or fluids. In the ancient world no doctrine could have been more progressive. Ever since the pre-Socratic philosophers Democritus and Leucippus began to speculate about atoms, the idea that natural phenomena have mechanical causes has stood in opposition to popular beliefs in gods and demons. The Hellenistic cult leader Epicurus brought a mechanical worldview into his creed specifically as an antidote to belief in the Olympian gods. When Rene Descartes set out in the 1630s on his great attempt to understand the world in rational terms, it was natural that he should describe physical forces like gravitation in a mechanical way, in terms of vortices in a material fluid filling all space. The "mechanical philosophy" of Descartes had a powerful influence on Newton, not because it was right (Descartes did not seem to have the modern idea of testing theories quantitatively) but because it provided an example of the sort of mechanical theory that could make sense out of nature. Mechanism reached its zenith in the nineteenth century, with the brilliant explanation of chemistry and heat in terms of atoms. And even today mechanism seems to many to be simply the logical opposite to superstition. In the history of human thought the mechanical worldview has played a heroic role.
That is just the trouble. In science as in politics or economics we are in great danger from heroic ideas that have outlived their usefulness. The heroic past of mechanism gave it such prestige that the followers of Descartes had trouble accepting Newton's theory of the solar system. How could a good Cartesian, believing that all natural phenomena could be reduced to the impact of material bodies or fluids on one another, accept Newton's view that the sun exerts a force on the earth across 93 million miles of empty space? It was not until well into the eighteenth century that Continental philosophers began to feel comfortable with the idea of action at a distance. In the end Newton's ideas did prevail on the Continent as well as in Britain, in Holland, Italy, France, and Germany (in that order) from 1720 on. To be sure, this was partly due to the influence of philosophers like Voltaire and Kant. But here again the service of philosophy was a negative one; it helped only to free science from the constraints of philosophy itself. Even after the triumph of Newtonianism, the mechanical tradition continued to flourish in physics. The theories of electric and magnetic fields developed in the nineteenth century by Michael Faraday and James Clerk Maxwell were couched in a mechanical framework, in terms of tensions within a pervasive physical medium, often called the ether. Nineteenth-century physicists were not behaving foolishly—all physicists need some sort of tentative worldview to make progress, and the mechanical worldview seemed as good a candidate as any. But it survived too long. The final turn away from mechanism in electromagnetic theory should have come in 1905, when Einstein's special theory of relativity in effect banished the ether and replaced it with empty space as the medium that carries electromagnetic impulses. But even then the mechanical worldview lingered on among an older generation of physicists, like the fictional Professor Victor Jakob in Russell McCormmach's poignant novel, Night Thoughts of a Classical Physicist, who were unable to absorb the new ideas. Mechanism had also been propagated beyond the boundaries of science and survived there to give later trouble to scientists. In the nineteenth century the heroic tradition of mechanism was incorporated, unhappily, into the dialectical materialism of Marx and Engels and their followers. Lenin, in exile in 1908, wrote a turgid book about materialism, and, although for him it was mostly a device by which to attack other revolutionaries, odds and ends of his commentary were made holy writ by his followers, and for a while dialectical materialism stood in the way of the acceptance of general relativity in the Soviet Union. As late as 1961 the distinguished Russian physicist Vladimir Fock felt compelled to defend himself from the charge that he had strayed from philosophical orthodoxy. The preface to his treatise "The Theory of Space, Time, and Gravitation" contains the remarkable statement, "The philosophical side of our views on the theory of space, time and gravitation was formed under the influence of the philosophy of dialectical materialism, in particular, under the influence of Lenin's materialism and empirical criticism." Nothing in the history of science is ever simple. Although after Einstein there was no place in serious physics research for the old naive mechanical worldview, some elements of this view were retained in the physics of the first half of the twentieth century. 
On one hand, there were material particles, like the electrons, protons, and neutrons that make up ordinary matter. On the other, there were fields, such as the electric and magnetic and gravitational fields, which are produced by particles and exert forces on particles. Then in 1929 physics began to turn toward a more unified worldview. Werner Heisenberg and Wolfgang Pauli described both particles and forces as manifestations of a deeper level of reality, the level of the quantum fields. Quantum mechanics had several years earlier been applied to the electric and magnetic fields and had been used to justify Einstein's idea of particles of light, the photons. Now Heisenberg and Pauli were supposing that not only photons but all particles are bundles of the energy in various fields. In this quantum field theory electrons are bundles of the energy of the electron field; neutrinos are bundles of the energy of the neutrino field; and so on. Despite this stunning synthesis, much of the work on photons and electrons in the 1930s and 1940s was set in the context of the old dualistic quantum electrodynamics, in which photons were seen as bundles of energy of the electromagnetic field but electrons were merely particles of matter. As far as electrons and photons are concerned this gives the same results as quantum field theory. But by the time that I was a graduate student in the 1950s quantum field theory had become almost universally accepted as the proper framework for fundamental physics. In the physicist's recipe for the world the list of ingredients no longer included particles, but only a few kinds of fields. From this story we may draw the moral that it is foolhardy to assume that one knows even the terms in which a future final theory will be formulated. Richard Feynman once complained that journalists ask about future theories in terms of the ultimate particle of matter or the final unification of all the forces, although in fact we have no idea whether these are the right questions. It seems unlikely that the old naive mechanical worldview will be resurrected or that we will have to return to a dualism of particles and fields, but even quantum field theory is not secure. There are difficulties in bringing gravitation into the framework of quantum field theory. In the effort to overcome these difficulties there has recently emerged a candidate for a final theory in which quantum fields are themselves just low-energy manifestations of glitches in space-time known as strings. We are not likely to know the right questions until we are close to knowing the answers. Although naive mechanism seems safely dead, physics continues to be troubled by other metaphysical presuppositions, particularly those having to do with space and time. Duration in time is the only thing we can measure (however imperfectly) by thought alone, with no input from our senses, so it is natural to imagine that we can learn something about the dimension of time by pure reason. Kant taught that space and time are not part of external reality but are rather preexisting structures in our minds that allow us to relate objects and events. To a Kantian the most shocking thing about Einstein's theories was that they demoted space and time to the status of ordinary aspects of the physical universe, aspects that could be affected by motion (in special relativity) or gravitation (in general relativity). 
Even now, almost a century after the advent of special relativity, some physicists still think that there are things that can be said about space and time on the basis of pure thought. This intransigent metaphysics comes to the surface especially in discussions of the origin of the universe. According to the standard big-bang theory the universe came into existence in a moment of infinite temperature and density some ten to fifteen billion years ago. Again and again when I have given a talk about the big-bang theory someone in the audience during the question period has argued that the idea of a beginning is absurd; whatever moment we say saw the beginning of the big bang, there must have been a moment before that one. I have tried to explain that this is not necessarily so. It is true for instance that in our ordinary experience however cold it gets it is always possible for it to get colder, but there is such a thing as absolute zero; we cannot reach temperatures below absolute zero not because we are not sufficiently clever but because temperatures below absolute zero simply have no meaning. Stephen Hawking has offered what may be a better analogy; it makes sense to ask what is north of Austin or Cambridge or any other city, but it makes no sense to ask what is north of the North Pole. Saint Augustine famously wrestled with this problem in his Confessions and came to the conclusion that it is wrong to ask what there was before God created the universe, because God, who is outside time, created time along with the universe. The same view was held by Moses Maimonides. I should acknowledge here that in fact we do not know if the universe did begin at a definite time in the past. Andrei Linde and other cosmologists have recently presented plausible theories that describe our present expanding universe as just a small bubble in an infinitely old megauniverse, in which such bubbles are eternally appearing and breeding new bubbles. I am not trying here to argue that the universe undoubtedly has some finite age, only that it is not possible to say on the basis of pure thought that it does not. Here again, we do not even know that we are asking the right questions. In the latest version of string theories space and time arise as derived quantities, which do not appear in the fundamental equations of the theory. In these theories space and time have only an approximate significance; it makes no sense to talk about any time closer to the big bang than about a million trillion trillion trillionth of a second. In our ordinary lives we can barely notice a time interval of a hundredth of a second, so the intuitive certainties about the nature of time and space that we derive from our everyday experience are not really of much value in trying to frame a theory of the origin of the universe. It is not in metaphysics that modern physics meets its greatest troubles, but in epistemology, the study of the nature and sources of knowledge. The epistemological doctrine of positivism (or in some versions logical positivism) demands not only that science must ultimately test its theories against observation (which is hardly in doubt) but that every aspect of our theories must at every point refer to observable quantities. That is, although physical theories may involve aspects that have not yet been studied observationally and would be too expensive to study this year or next year, it would be inadmissible for our theories to deal with elements that could not in principle ever be observed.
A great deal is at stake here, because positivism if valid would allow us to discover valuable clues about the ingredients of the final theory by using thought experiments to find out what sorts of things can in principle be observed. The figure most often associated with the introduction of positivism into physics is Ernst Mach, physicist and philosopher of fin-de-siècle Vienna, for whom positivism served largely as an antidote to the metaphysics of Immanuel Kant. Einstein's 1905 paper on special relativity shows the obvious influence of Mach; it is full of observers measuring distances and times with rulers, clocks, and rays of light. Positivism helped to free Einstein from the notion that there is an absolute sense to a statement that two events are simultaneous; he found that no measurement could provide a criterion for simultaneity that would give the same result for all observers. This concern with what can actually be observed is the essence of positivism. Einstein acknowledged his debt to Mach; in a letter to him a few years later, he called himself "your devoted student." After the First World War, positivism was further developed by Rudolf Carnap and the members of the Vienna Circle of philosophers, who aimed at a reconstruction of science along philosophically satisfactory lines, and did succeed in clearing away much metaphysical rubbish. Positivism also played an important part in the birth of modern quantum mechanics. Heisenberg's great first paper on quantum mechanics in 1925 starts with the observation that "it is well known that the formal rules which are used in [the 1913 quantum theory of Bohr] for calculating observable quantities such as the energy of the hydrogen atom may be seriously criticized on the grounds that they contain, as basic elements, relationships between quantities that are apparently unobservable in principle, e.g., position and speed of revolution of the electron." In the spirit of positivism, Heisenberg admitted into his version of quantum mechanics only observables, such as the rates at which an atom might spontaneously make a transition from one state to another by emitting a quantum of radiation. The uncertainty principle, which is one of the foundations of the probabilistic interpretation of quantum mechanics, is based on Heisenberg's positivistic analysis of the limitations we encounter when we set out to observe a particle's position and momentum. Despite its value to Einstein and Heisenberg, positivism has done as much harm as good. But, unlike the mechanical worldview, positivism has preserved its heroic aura, so that it survives to do damage in the future. George Gale even blames positivism for much of the current estrangement between physicists and philosophers. Positivism was at the heart of the opposition to the atomic theory at the turn of the twentieth century. The nineteenth century had seen a wonderful refinement of the old idea of Democritus and Leucippus that all matter is composed of atoms, and the atomic theory had been used by John Dalton and Amedeo Avogadro and their successors to make sense out of the rules of chemistry, the properties of gases, and the nature of heat. Atomic theory had become part of the ordinary language of physics and chemistry. Yet the positivist followers of Mach regarded this as a departure from the proper procedure of science because these atoms could not be observed with any technique that was then imaginable.
The positivists decreed that scientists should concern themselves with reporting the results of observation, as for instance that it takes 2 volumes of hydrogen to combine with 1 volume of oxygen to make water vapor, but they should not concern themselves with speculations about metaphysical ideas that this is because the water molecule consists of two atoms of hydrogen and one atom of oxygen, because they could not observe these atoms or molecules. Mach himself never made his peace with the existence of atoms. As late as 1910, after atomism had been accepted by nearly everyone else, Mach wrote in a running debate with Planck that, "if belief in the reality of atoms is so crucial, then I renounce the physical way of thinking. I will not be a professional physicist, and I hand back my scientific reputation." The resistance to atomism had a particularly unfortunate effect in retarding the acceptance of statistical mechanics, the reductionist theory that interprets heat in terms of the statistical distribution of the energies of the parts of any system. The development of this theory in the work of Maxwell, Boltzmann, Gibbs, and others was one of the triumphs of nineteenth-century science, and in rejecting it the positivists were making the worst sort of mistake a scientist can make: not recognizing success when it happens. Positivism did harm in other ways that are less well known. There is a famous experiment performed in 1897 by J. J. Thomson, which is generally regarded as the discovery of the electron. (Thomson was Maxwell's and Rayleigh's successor as Cavendish Professor at the University of Cambridge.) For some years physicists had puzzled over the mysterious phenomenon of cathode rays, rays that are emitted when a metal plate in a glass vacuum tube is connected to the negative terminal of a powerful electric battery, and that show their presence through a luminous spot where they strike the far end of the glass tube. The picture tubes in modern television sets are nothing but cathode ray tubes in which the intensity of the rays is controlled by the signals sent out by television stations. When cathode rays were first discovered in the nineteenth century no one at first knew what they were. Then Thomson measured the way the cathode rays are bent by electric and magnetic fields as they pass through the vacuum tube. It turned out that the amount of bending of these rays was consistent with the hypothesis that they are made up of particles that carry a definite quantity of electric charge and a definite quantity of mass, always in the same ratio of mass to charge. Because the mass of these particles turned out to be so much smaller than the masses of atoms, Thomson leapt to the conclusion that these particles are the fundamental constituents of atoms and the carriers of electric charge in all currents of electricity, in wires and atoms as well as in cathode-ray tubes. For this, Thomson regarded himself, and has become universally regarded by historians, as the discoverer of a new form of matter, a particle for which he picked up a name that was already current in the theory of electrolysis: the electron. Yet the same experiment was done in Berlin at just about the same time by Walter Kaufmann. The main difference between Kaufmann's experiment and Thomson's was that Kaufmann's was better. It yielded a result for the ratio of the electron's charge and mass that today we know was more accurate than Thomson's.
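A sketch of the crossed-fields analysis that lies behind such a charge-to-mass measurement may be useful here; the details and symbols are the usual textbook reconstruction, not Weinberg's. With an electric field E and a magnetic field B applied across the tube and adjusted so that the beam is undeflected, the electric and magnetic forces on a particle of charge e and mass m balance, which fixes the beam velocity; switching off the electric field then bends the beam into an arc of radius r, and the two measurements together determine the charge-to-mass ratio even though neither e nor m is observed separately:

\[
eE = evB \;\Rightarrow\; v = \frac{E}{B}, \qquad
r = \frac{mv}{eB} \;\Rightarrow\; \frac{e}{m} = \frac{E}{B^{2}r}.
\]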
Yet Kaufmann is never listed as a discoverer of the electron, because he did not think that he had discovered a new particle. Thomson was working in an English tradition going back to Newton, Dalton, and Prout—a tradition of speculation about atoms and their constituents. But Kaufmann was a positivist; he did not believe that it was the business of physicists to speculate about things that they could not observe. So Kaufmann did not report that he had discovered a new kind of particle, but only that whatever it is that is flowing in a cathode ray, it carries a certain ratio of electric charge to mass. The moral of this story is not merely that positivism was bad for Kaufmann's career. Thomson, guided by his belief that he had discovered a fundamental particle, went on and did other experiments to explore its properties. He found evidence of particles with the same ratio of mass to charge emitted in radioactivity and from heated metals, and he carried out an early measurement of the electric charge of the electron. This measurement, together with his earlier measurement of the ratio of charge to mass, provided a value for the mass of the electron. It is the sum of all these experiments that really validates Thomson's claim to be the discoverer of the electron, but he would probably never have done them if he had not been willing to take seriously the idea of a particle that at that time could not be directly observed. In retrospect the positivism of Kaufmann and the opponents of atomism seems not only obstructive but also naive. What after all does it mean to observe anything? In a narrow sense, Kaufmann did not even observe the deflection of cathode rays in a given magnetic field; he measured the position of a luminous spot on the downstream side of the vacuum tube when wires were wound a certain number of times around a piece of iron near the tube and connected to a certain electric battery and used accepted theory to interpret this in terms of ray trajectories and magnetic fields. Very strictly speaking, he did not even do that: he experienced certain visual and tactile sensations that he interpreted in terms of luminous spots and wires and batteries. It has become a commonplace among historians of science that observation can never be freed of theory. The final surrender of the anti-atomists is usually taken to be a statement by the chemist Wilhelm Ostwald in the 1908 edition of his Outlines of General Chemistry: "I am now convinced that we have recently become possessed of experimental evidence of the discrete or grained nature of matter, which the atomic hypothesis sought in vain for hundreds and thousands of years." The experimental evidence that Ostwald quoted consisted of measurements of molecular impacts in the so-called Brownian motion of tiny particles suspended in liquids, together with Thomson's measurement of the charge of the electron. But if one understands how theory-laden are all experimental data, it becomes apparent that all the successes of the atomic theory in chemistry and statistical mechanics already in the nineteenth century had constituted an observation of atoms. Heisenberg himself records that Einstein had second thoughts about the positivism of his initial approach to relativity. In a lecture in 1974 Heisenberg recalled a conversation he had with Einstein in Berlin in early 1926:
I pointed out [to Einstein] that we cannot, in fact, observe such a path [of an electron in an atom]; what we actually record are frequencies of the light radiated by the atom, intensities and transition probabilities, but no actual path. And since it is but rational to introduce into a theory only such quantities as can be directly observed, the concept of electron paths ought not, in fact, to figure in the theory. To my astonishment, Einstein was not at all satisfied with this argument. He thought that every theory in fact contains unobservable quantities. The principle of employing only observable quantities simply cannot be consistently carried out. And when I objected that in this I had merely been applying the type of philosophy that he, too, had made the basis of his special theory of relativity, he answered simply: "Perhaps I did use such philosophy earlier, and also wrote it, but it is nonsense all the same."
Even earlier, in a Paris lecture in 1922, Einstein referred to Mach as "un bon mécanicien" but a "déplorable philosophe." Despite the victory of atomism and the defection of Einstein, the theme of positivism has continued to be heard from time to time in the physics of the twentieth century. The positivist concentration on observables like particle positions and momenta has stood in the way of a "realist" interpretation of quantum mechanics, in which the wave function is the representation of physical reality. Positivism also played a part in obscuring the problem of infinities. As we have seen, Oppenheimer in 1930 noticed that the theory of photons and electrons known as quantum electrodynamics led to an absurd result, that the emission and absorption of photons by an electron in an atom would give the atom an infinite energy. The problem of infinities worried theorists throughout the 1930s and 1940s and led to a general supposition that quantum electrodynamics simply becomes inapplicable for electrons and photons of very high energy. Much of this angst over quantum electrodynamics was tinged with a positivist sense of guilt: some theorists feared that in speaking of the values of the electric and magnetic fields at a point in space occupied by an electron they were committing the sin of introducing elements into physics that in principle cannot be observed. This was true, but worrying about it only retarded the discovery of the real solution to the problem of infinities, that the infinities cancel when one is careful about the definition of the mass and charge of the electron. Positivism also played a key role in a reaction against quantum field theory led by Geoffrey Chew in the 1960s at Berkeley. For Chew the central object of concern in physics was the S-matrix, the table that gives the probabilities for all possible outcomes of all possible particle collisions. The S-matrix summarizes everything that is actually observable about reactions involving any number of particles. S-matrix theory goes back to work of Heisenberg and John Wheeler in the 1930s and 1940s (the "S" stands for Streuung, which is German for "scattering"), but Chew and his coworkers were using new ideas about how to calculate the S-matrix without introducing any unobservable elements like quantum fields. In the end this program failed, partly because it was simply too hard to calculate the S-matrix in this way, but above all because the path to progress in understanding the weak and strong nuclear forces turned out to lie in the quantum field theories that Chew was trying to abandon. The most dramatic abandonment of the principles of positivism has been in the development of our present theory of quarks. In the early 1960s Murray Gell-Mann and George Zweig independently tried to reduce the tremendous complexity of the zoo of particles then known. They proposed that almost all these particles are composed of a few simple (and even more elementary) particles that Gell-Mann named quarks. This idea at first did not seem at all outside the mainstream of the way that physicists were accustomed to think; it was after all one more step in a tradition that had started with Leucippus and Democritus, of trying to explain complicated structures in terms of simpler, smaller constituents.
The quark picture was applied in the 1960s to a great variety of physical problems having to do with the properties of the neutrons and protons and mesons and all the other particles that were supposed to be made up out of quarks, and generally it worked quite well. Yet the best efforts of experimental physicists in the 1960s and the early 1970s proved inadequate to dislodge quarks from the particles that were supposed to contain them. This seemed crazy. Ever since Thomson pulled electrons out of atoms in a cathode ray tube, it had always been possible to break up any composite system like a molecule or an atom or a nucleus into the individual particles of which it is composed. Why then should it be impossible to isolate free quarks? The quark picture began to make sense with the advent in the early 1970s of quantum chromodynamics, our modern theory of strong nuclear forces, which forbids any process in which a free quark might be isolated. The breakthrough came in 1973, when independent calculations by David Gross and Frank Wilczek at Princeton and David Politzer at Harvard showed that certain kinds of quantum field theory have a peculiar property known as "asymptotic freedom," that the forces in these theories decrease at high energies. Just such a decrease in force had been observed in experiments on high-energy scattering going back to 1967, but this was the first time that any theory could be shown to have forces that behave in this way. This success led to one of these quantum field theories, the theory of quarks and gluons known as quantum chromodynamics, rapidly being accepted as the correct theory of the strong nuclear forces. Originally it was assumed that gluons had not been observed to be produced in elementary particle collisions because they are heavy, and there had not been enough energy available in these collisions to produce the large gluon masses. Soon after the discovery of asymptotic freedom a few theorists proposed instead that the gluons are massless, like photons. If this were true, then the reason that gluons and presumably also quarks are not observed would have to be that exchange of the massless gluons between quarks or gluons produces long-range forces that make it impossible in principle to pull either quarks or gluons apart from each other. It is now believed that if you try, for instance, to pull apart a meson (a particle composed of a quark and an antiquark) the force needed increases as the quark and antiquark are pulled farther apart, until eventually you have to put so much energy into the effort that there is enough energy available to create a new quark-antiquark pair. An antiquark then pops out of the vacuum and joins itself to the original quark, while a quark pops out of the vacuum and joins itself to the original antiquark, so that instead of having a free quark and antiquark you simply have two quark-antiquark pairs—that is, two mesons. The metaphor has often been used that this is like trying to pull apart two ends of a piece of string: you can pull and pull, and eventually, if you put enough energy into the effort, the string breaks, but you do not find yourself with two isolated ends of the original piece of string; what you have are two pieces of string, each of which has two ends. The idea that quarks and gluons can in principle never be observed in isolation has become part of the accepted wisdom of modern elementary particle physics, but it does not stop us from describing neutrons and protons and mesons as composed of quarks.
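To put rough numbers behind "asymptotic freedom" and the string metaphor, here is a sketch in the standard leading-order form; the formulas and symbols are conventional ones, not taken from the essay. With n_f quark flavors and a characteristic scale \Lambda of a few hundred MeV, the strong coupling shrinks logarithmically as the momentum transfer Q grows, while at large separations r the energy stored between a quark and an antiquark is commonly modeled as growing linearly with a "string tension" \sigma of roughly 1 GeV per femtometer, so that pulling them apart eventually supplies enough energy to create a new quark-antiquark pair:

\[
\alpha_s(Q^{2}) \simeq \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^{2}/\Lambda^{2}\right)}, \qquad
V(r) \simeq \sigma\, r \quad (\sigma \approx 1\ \mathrm{GeV/fm}).
\]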
I cannot imagine anything that Ernst Mach would like less. The quark theory was only one step in a continuing process of reformulation of physical theory in terms that are more and more fundamental and at the same time farther and farther from everyday experience. How can we hope to make a theory based on observables when no aspect of our experience—perhaps not even space and time—appears at the most fundamental level of our theories? It seems to me unlikely that the positivist attitude will be of much help in the future. Metaphysics and epistemology have at least been intended to play a constructive role in science. In recent years science has come under attack from unfriendly commentators joined under the banner of relativism. The philosophical relativists deny the claim of science to the discovery of objective truth; they see it as merely another social phenomenon, not fundamentally different from a fertility cult or a potlatch. Philosophical relativism stems in part from the discovery by philosophers and historians of science that there is a large subjective element in the process by which scientific ideas become accepted. We have seen here the role that aesthetic judgments play in the acceptance of new physical theories. This much is an old story to scientists (though philosophers and historians sometimes write as if we were utterly naive about this). In his celebrated book The Structure of Scientific Revolutions Thomas Kuhn went a step further and argued that in scientific revolutions the standards (or "paradigms") by which scientists judge theories change, so that the new theories simply cannot be judged by the prerevolutionary standards. There is much in Kuhn's book that fits my own experience in science. But in the last chapter Kuhn tentatively attacked the view that science makes progress toward objective truths: "We may, to be more precise, have to relinquish the notion, explicit or implicit, that changes of paradigm carry scientists and those who learn from them closer and closer to the truth." Kuhn's book lately seems to have become read (or at least quoted) as a manifesto for a general attack on the presumed objectivity of science. There has as well been a growing tendency starting with the work of Robert Merton in the 1930s for sociologists and anthropologists to treat the enterprise of science (or at least, sciences other than sociology and anthropology) by the same methods that are used to study other social phenomena. Science is of course a social phenomenon, with its own reward system, its revealing snobberies, its interesting patterns of alliance and authority. For instance, Sharon Traweek has spent years with elementary particle experimentalists at both the Stanford Linear Accelerator Center and the KEK Laboratory in Japan and has described what she had seen from the perspective of an anthropologist. This kind of big science is a natural topic for anthropologists and sociologists, because scientists belong to an anarchic tradition that prizes individual initiative, and yet they find in today's experiments that they have to work together in teams of hundreds. As a theorist I have not worked in such a team, but many of her observations seem to me to have the ring of truth, as for instance:
The physicists see themselves as an elite whose membership is determined solely by scientific merit. The assumption is that everyone has a fair start. This is underscored by the rigorously informal dress code, the similarity of their offices, and the "first naming" practices in the community. Competitive individualism is considered both just and effective: the hierarchy is seen as a meritocracy which produces fine physics. American physicists, however, emphasize that science is not democratic: decisions about scientific purposes should not be made by majority rule within the community, nor should there be equal access to a lab's resources. On both these issues, most Japanese physicists assume the opposite. In the course of such studies, sociologists and anthropologists have discovered that even the process of change in scientific theory is a social one. A recent book on peer review remarks that "scientific truths are, at bottom, widely quoted social agreements about what is 'real', arrived at through a distinctively 'scientific process' of negotiation." Close observation of scientists at work at the Salk Institute led the French philosopher Bruno Latour and the English sociologist Steve Woolgar to comment, "The negotiations as to what counts as a proof or what constitutes a good assay are no more or less disorderly than any argument between lawyers and politicians." It seems to have been an easy step from these useful historical and sociological observations to the radical position that the content of the scientific theories that become accepted is what it is because of the social and historical setting in which the theories are negotiated. (The elaboration of this position is sometimes known as the strong program in the sociology of science.) This attack on the objectivity of scientific knowledge is made explicit and even brought into the title of a book by Andrew Pickering: Constructing Quarks. In his final chapter, he comes to the conclusion: "And, given their extensive training in sophisticated mathematical techniques, the preponderance of mathematics in particle physicists' accounts of reality is no more hard to explain than the fondness of ethnic groups for their native language. On the view advocated in this chapter, there is no obligation upon anyone framing a view of the world to take account of what twentieth-century science has to say." Pickering describes in detail a great change of emphasis in high-energy experimental physics that took place in the late 1960s and early 1970s. Instead of a commonsense (Pickering's term) approach of concentrating on the most conspicuous phenomena in collisions of high-energy particles (i.e., the fragmentation of the particles into great numbers of other particles going mostly in the original direction of the particle beams), experimentalists instead began to do experiments suggested by theorists, experiments that focused on rare events, such as those in which some high-energy particle emerges from the collision at a large angle to the incoming beam direction. There certainly was a change of emphasis in high-energy physics, pretty much as described by Pickering, but it was driven by the necessities of the historical mission of physics. A proton consists of three quarks, together with a cloud of continually appearing and disappearing gluons and quark-antiquark pairs. In most collisions between protons the energy of the initial particles goes into a general disruption of these clouds of particles, like a collision of two garbage trucks. 
These may be the most conspicuous collisions, but they are too complicated to allow us to calculate what should happen according to our current theory of quarks and gluons and so they are useless for testing that theory. Every once in a while, however, a quark or gluon in one of the two protons hits a quark or gluon in the other proton head on, and their energy becomes available to eject these quarks or gluons at high energy from the debris of the collision, a process whose rate we do know how to calculate. Or the collision may create new particles, like the W and Z particles that carry the weak nuclear force, which need to be studied to learn more about the unification of the weak and electromagnetic forces. It is these rare events that today's experiments are designed to detect. Yet Pickering, who as far as I can tell understands this theoretical background very well, still describes this change of emphasis in high-energy physics in terms suggestive of a mere change of fashion, like the shift from impressionism to cubism, or from short skirts to long. It is simply a logical fallacy to go from the observation that science is a social process to the conclusion that the final product, our scientific theories, is what it is because of the social and historical forces acting in this process. A party of mountain climbers may argue over the best path to the peak, and these arguments may be conditioned by the history and social structure of the expedition, but in the end either they find a good path to the peak or they do not, and when they get there they know it. (No one would give a book about mountain climbing the title Constructing Everest.) I cannot prove that science is like this, but everything in my experience as a scientist convinces me that it is. The "negotiations" over changes in scientific theory go on and on, with scientists changing their minds again and again in response to calculations and experiments, until finally one view or another bears an unmistakable mark of objective success. It certainly feels to me that we are discovering something real in physics, something that is what it is without any regard to the social or historical conditions that allowed us to discover it. Where then does this radical attack on the objectivity of scientific knowledge come from? One source I think is the old bugbear of positivism, this time applied to the study of science itself. If one refuses to talk about anything that is not directly observed, then quantum field theories or principles of symmetry or more generally laws of nature cannot be taken seriously. What philosophers and sociologists and anthropologists can study is the actual behavior of real scientists, and this behavior never follows any simple description in terms of rules of inference. But scientists have the direct experience of scientific theories as desired yet elusive goals, and they become convinced of the reality of these theories. There may be another motivation for the attack on the realism and objectivity of science, one that is less high-minded. Imagine if you will an anthropologist who studies the cargo cult on a Pacific island. The islanders believe that they can bring back the cargo aircraft that made them prosperous during World War II by building wooden structures that imitate radar and radio antennas. 
It is only human nature that this anthropologist and other sociologists and anthropologists in similar circumstances would feel a frisson of superiority, because they know as their subjects do not that there is no objective reality to these beliefs—no cargo-laden C-47 will ever be attracted by the wooden radars. Would it be surprising if, when anthropologists and sociologists turned their attention to studying the work of scientists, they tried to recapture that delicious sense of superiority by denying the objective reality of the scientists' discoveries? Relativism is only one aspect of a wider, radical, attack on science itself. Feyerabend called for a formal separation of science and society like the separation of church and state, reasoning that "science is just one of the many ideologies that propel society and it should be treated as such." The philosopher Sandra Harding calls modern science (and especially physics) "not only sexist but also racist, classist, and culturally coercive" and argues, "Physics and chemistry, mathematics and logic, bear the fingerprints of their distinctive cultural creators no less than do anthropology and history." Theodore Roszak urges that we change "the fundamental sensibility of scientific thought . . . even if we must drastically revise the professional character of science and its place in our culture."

These radical critics of science seem to be having little or no effect on the scientists themselves. I do not know of any working scientist who takes them seriously. The danger they present to science comes from their possible influence on those who have not shared in the work of science but on whom we depend, especially on those in charge of funding science and on new generations of potential scientists. Recently the minister in charge of government spending on civil science in Britain was quoted by Nature as speaking with approval of a book by Bryan Appleyard that has as its theme that science is inimical to the human spirit. I suspect that Gerald Holton is close to the truth in seeing the radical attack on science as one symptom of a broader hostility to Western civilization that has bedeviled Western intellectuals from Oswald Spengler on. Modern science is an obvious target for this hostility; great art and literature have sprung from many of the world's civilizations, but ever since Galileo scientific research has been overwhelmingly dominated by the West. This hostility seems to me to be tragically misdirected. Even the most frightening Western applications of science such as nuclear weapons represent just one more example of mankind's timeless efforts to destroy itself with whatever weapons it can devise. Balancing this against the benign applications of science and its role in liberating the human spirit, I think that modern science, along with democracy and contrapuntal music, is something that the West has given the world in which we should take special pride. In the end this issue will disappear. Modern scientific methods and knowledge have rapidly diffused to non-Western countries like Japan and India and indeed are spreading throughout the world. We can look forward to the day when science can no longer be identified with the West but is seen as the shared possession of humankind.
