Minkowski on mathematicians and physicists

December 19, 2012

Minkowski concluded this section with a remark that clarifies his understanding of the basic motivations behind Einstein's contribution to the latest developments in electrodynamics: mathematicians, Minkowski said, accustomed as they are to discussing many-dimensional manifolds and non-Euclidean geometries, will have no serious difficulty in adapting their concept of time to the new one implied by the application of the Lorentz transformation; the task of making physical sense of the essence of these transformations, on the other hand, had been addressed by Einstein in the introduction to his 1905 relativity article.


Time is Relative

October 7, 2012

http://www.ram.org/ramblings/ramblings.html


I recently had an argument about why time is relative. My friend found it hard to conceive how time could not be absolute. When I tried to explain it, I found that it was hard for me to see WHY time really is relative. All I could do was state experimental results that indicate it is the case. This shows that thinking you can understand a deep concept at a very shallow, philosophical level doesn't really mean much, but at least it's better than no understanding at all.

Newton's laws put an end to the idea of absolute rest. Two people playing ping pong on a moving train would measure the distance the ball travels between consecutive bounces on the table as less than a person standing along the track watching the ball bounce. Both measurements are equally valid, since there is no absolute standard of rest.

Maxwell's equations predicted that light waves travel at a fixed speed. This is not an extra assumption but a consequence of the equations themselves: they predicted "there would be wavelike disturbances in the combined electromagnetic field, and that these would travel at a fixed speed, like ripples on a pond."
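To see where that fixed speed comes from (a gloss I am adding; the post itself stops at the quotation): in empty space Maxwell's equations combine into a wave equation whose propagation speed is set by two measured constants of electricity and magnetism,

\[ \nabla^2 \vec{E} = \mu_0 \varepsilon_0 \, \frac{\partial^2 \vec{E}}{\partial t^2}, \qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.0 \times 10^8 \ \text{m/s}. \]

Nothing in the constants μ₀ and ε₀ refers to the motion of any observer, which is why the equations appear to single out one particular speed.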

But Newton's theory got rid of the idea of absolute rest, and if we believe that there's no ambiguity about time, then if light is supposed to travel at a fixed speed, we need to say what that fixed speed is measured relative to. It was suggested that there was a substance called "ether" that was present everywhere (even in "empty space") and that light waves travelled through this ether at a fixed speed. However, different observers, moving relative to the ether, would see light coming at them at different speeds.

The way to verify this is to measure the speed of light as the earth moves through the ether on its orbit around the sun. Light speed measured in the direction of earth's orbit (when we are moving towards the source of light) should be higher than the speed of light measured at right angles to the direction of earth's motion (when we are not moving towards the source of light). This experiment, the Michelson-Morley experiment, was performed in 1887, and to the experimenters' great surprise it showed that the speed of light was exactly the same in both cases!
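For a sense of scale, here is a back-of-the-envelope sketch (my own, using standard textbook numbers, not figures from the original post) of the effect the ether theory predicted the interferometer should see:

    # Expected ether-drift signal in a Michelson-Morley-type interferometer.
    # Illustrative numbers: v is earth's orbital speed, L the effective arm length.
    import math

    c = 3.0e8    # speed of light, m/s
    v = 3.0e4    # earth's orbital speed, m/s
    L = 11.0     # effective arm length, m (folded by multiple reflections)

    beta2 = (v / c) ** 2

    # Round trip along the motion: L/(c - v) + L/(c + v)
    t_parallel = (2 * L / c) / (1 - beta2)
    # Round trip across the motion: 2L / sqrt(c^2 - v^2)
    t_perpendicular = (2 * L / c) / math.sqrt(1 - beta2)

    dt = t_parallel - t_perpendicular    # approximately (L/c) * (v/c)^2
    wavelength = 5.9e-7                  # yellow light, m
    # Rotating the apparatus 90 degrees swaps the arms, doubling the signal.
    fringe_shift = 2 * c * dt / wavelength

    print(f"time difference:       {dt:.2e} s")          # ~3.7e-16 s
    print(f"expected fringe shift: {fringe_shift:.2f}")  # ~0.4 fringe; none was seen

A shift of about 0.4 of a fringe was well within what Michelson and Morley could resolve, which is why the null result carried so much weight.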

Einstein then pointed out that the whole idea of an ether was unnecessary as long as one was willing to abandon the idea of absolute time. He postulated that the laws of science should be the same for all freely moving observers, regardless of their speed. This was true for Newton's laws of motion, but it now also applied to Maxwell's theory and the speed of light: all observers should measure the same speed of light, no matter how fast they are moving.

One of the remarkable consequences of this theory is how it has changed our thinking regarding space and time. In Newton's theory, if a pulse of light is sent from one place to another, different observers would agree on the time the journey took (since time is absolute) but would not agree on how far the light travelled (since space is not absolute). Since the speed of light is just the distance travelled divided by the time taken, different observers would measure different speeds for the light. But this clearly violates what Maxwell's equations say and what has been shown by experiment. In relativity, all observers MUST agree on how fast light travels, but they don't have to agree on the distance, and consequently they must disagree over the time it has taken! (The time taken is the distance, which they don't agree on, divided by the speed of light, which they do agree on.) That is, observers must have their own measure of time, as recorded by a clock carried with them, and even identical clocks carried by different observers would not necessarily agree.
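The standard "light clock" derivation (my addition; the post does not include it) shows exactly where the disagreement about time comes from. A clock ticks once each time light crosses between two mirrors separated by a distance d. Someone riding with the clock measures a tick of Δt = 2d/c. Someone who sees the clock move past at speed v sees the light trace a longer, diagonal path, but must measure the same speed c for it, so the tick they observe, Δt′, satisfies

\[ c\,\frac{\Delta t'}{2} = \sqrt{d^2 + \left( v\,\frac{\Delta t'}{2} \right)^2} \quad\Longrightarrow\quad \Delta t' = \frac{\Delta t}{\sqrt{1 - v^2/c^2}}. \]

The moving clock is seen to run slow by the factor √(1 − v²/c²), and the argument uses nothing but the constancy of c and Pythagoras. Nor is it special to light clocks: any clock that disagreed with a light clock in one frame but not another would violate the principle of relativity.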

From all this we can make the statement that time MUST be relative, because if it weren’t, it would contradict experimental results. But this doesn’t, at least to me, shed light on WHY this is the case. This might be a question more metaphysical than physical (i.e., time is relative just as Newton’s laws are the way they are). I shall attempt to address my question of “WHY” in greater detail now. (Though ultimately I don’t have a good answer I think—anyone?)

This theory (special relativity) and its more general form (which takes gravitational effects into account) make several other important predictions which have withstood empirical tests. One of the predictions of general relativity is that time should appear to run slower near a massive body like the earth. This is because the energy of light is proportional to its frequency (waves/second). As light travels upward in earth's gravitational field, it loses energy, and therefore its frequency goes down. (The length of time between one wave crest and the next goes up.) To someone high up, it would appear that everything down below was taking longer to happen. Apparently, this prediction can be tested by using a pair of very accurate clocks, one at the bottom, nearer to the earth, and the other at the top. This was done in 1962, and the clock at the bottom was found to run slower, in exact agreement with general relativity! This is of practical importance if you wish to trust navigational signals from satellites: if you ignored relativistic effects, your position could be off by several miles!
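To put rough numbers on the satellite remark (my own estimate from standard constants; the post gives no figures), compare a clock in a GPS-style orbit with one on the ground. To first order, the fractional rate difference is the gravitational term GM/c² · (1/R − 1/r) plus the (negative) velocity term −v²/2c²:

    # Rough relativistic rate offset of a GPS-style satellite clock vs. a ground clock.
    # First-order (weak-field, circular-orbit) estimate with standard constants.
    GM = 3.986e14     # earth's gravitational parameter, m^3/s^2
    c = 2.998e8       # speed of light, m/s
    R = 6.371e6       # earth's radius, m
    r = 2.657e7       # GPS orbital radius, m

    grav = GM / c**2 * (1 / R - 1 / r)   # higher clocks run faster
    v2 = GM / r                          # v^2 for a circular orbit
    vel = -v2 / (2 * c**2)               # moving clocks run slower

    total = grav + vel
    # Prints ~38,500 ns/day, close to the commonly quoted +38,000 ns/day.
    print(f"offset: {total * 86400 * 1e9:.0f} ns/day")
    # Uncorrected, the ranging error grows at roughly this rate: ~12 km/day.
    print(f"range error if ignored: {total * 86400 * c / 1000:.0f} km/day")

Even this crude estimate makes the point: the error accumulates at kilometres per day, so "several miles" is reached within hours.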

Now, my question is: what is the basis for this phenomenon? Why should the clocks show different times? Why should it be related to light and its frequency? This is saying that time slows down in an intense gravitational field. Why is this the case?

Similarly, consider the twin paradox, where one twin flies off into space at nearly the speed of light and comes back having aged less than the twin who stayed on earth. It's not a paradox if time is relative, but what is the physiological basis for this phenomenon (do the cells actually undergo division more slowly)?

We must accept that time isn't independent of space; together they form a 4-dimensional construct called space-time. General relativity shows that gravity is really not a force, but rather a consequence of the fact that space-time is curved by the distribution of mass and energy. Bodies like the earth don't follow a curved orbit due to a force called gravity; they follow the nearest thing to a straight path in curved space, called a geodesic. On the surface of the earth, a geodesic is a great circle (the equator, for example). In general relativity, bodies always follow straight lines in four-dimensional space-time, but they appear to us to move along curved paths in three-dimensional space. (Hawking says this is like watching an airplane over hilly ground: although it follows a straight line in 3D space, its shadow follows a curved path on the ground.)

I imagine this as a huge rubber sheet with bodies like the sun, the earth, etc. lying on the sheet, each causing a depression (a "gravity well"). An object travelling across this sheet might just skirt the edge of a well, causing it to follow a curved path as though it felt a gravitational pull. An object might instead drop into the well and follow an elliptical path around its walls. Finally, an object might, due to friction, decay in its orbit and eventually fall onto the larger object at the bottom of the well.

A postulate of general relativity is that if two events are close together, there is an interval between them which can be calculated from some function of their coordinates. We know, according to the mathematics of the theory, that if we choose a region of space-time where gravitation is the same throughout, we can obtain very nearly a Euclidean space. A second postulate states that a body travels on a geodesic in space-time unless non-gravitational forces act on it. The third postulate says that light travels on a geodesic such that the interval between any two parts of it is zero.
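In modern notation (my gloss; popularizations of this era state the postulates in words), the three postulates read

\[ ds^2 = \sum_{\mu,\nu} g_{\mu\nu}\, dx^\mu\, dx^\nu, \qquad \delta \int ds = 0 \ \ \text{(free bodies)}, \qquad ds = 0 \ \ \text{(light)}. \]

The first says the interval between neighbouring events is computed from their coordinate separations; the second says free bodies follow geodesics, paths that extremize the summed interval; the third is the statement above that light moves along paths of zero interval. In a small region where gravitation is uniform, the coefficients g can be treated as constants, which is the "very nearly Euclidean" remark.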

In the general theory, it is only neighbouring events that have a definite interval, and this is INDEPENDENT of the route pursued. The interval between distant events depends on the route pursued, and it can be calculated by dividing the route into small enough parts (each part has constant gravitational effects, so we can calculate the interval between the two neighbouring events) and adding up the intervals for all the parts. If the interval is spacelike, a body cannot travel from one event to the other; for a journey, therefore, the interval has to be timelike. The interval between neighbouring events, when it is timelike, is the time between them for observers who travel from one event to the other. And so the TOTAL interval between two events will be judged by people who travel from one to the other by what their clocks show to be the time they have taken on the journey. The slower they travel, the longer they will think they have been on the journey. This is not a platitude: if you leave DC at 6 a.m. and must arrive in NY at 10 a.m., then the more slowly you travel, the longer you will take according to your watch; but if you travel at the speed of light (third postulate), going all across the solar system before reaching your destination, your watch would say that you had taken no time at all! For any circuitous route that lets you arrive on time by travelling fast, the longer the route, the less time you will take. The diminution of time is continual as you approach the speed of light.
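A quick numerical check of that claim (my own sketch, with invented numbers): two travellers connect the same pair of events, four hours apart, one crawling directly and one looping far afield at high speed. The time shown on a moving clock is τ = ∫ √(1 − v²/c²) dt:

    # Proper time along two routes between the same pair of events.
    # Both journeys span 4 hours of coordinate time; each moves at constant speed.
    import math

    c = 3.0e8            # speed of light, m/s
    T = 4 * 3600.0       # coordinate time between the two events, s

    def proper_time(v, t):
        """Time elapsed on a clock moving at constant speed v over coordinate time t."""
        return t * math.sqrt(1 - (v / c) ** 2)

    slow_direct = proper_time(30.0, T)        # highway speed, straight there
    fast_detour = proper_time(0.9999 * c, T)  # near light speed, looping far afield

    print(f"slow, direct route: {slow_direct / 3600:.6f} h")  # indistinguishable from 4 h
    print(f"fast, long detour:  {fast_detour / 3600:.4f} h")  # ~0.0566 h, about 3.4 minutes

The slow traveller's watch agrees with the four hours almost exactly; the fast traveller's watch records a few minutes, and in the limit v → c it records nothing at all.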

A body left to itself travels so that the time between two events, as measured by its own clocks, is the longest; if it had travelled by any other route from one event to the other, the time would be shorter. This is saying that bodies left to themselves make their journeys as slowly as they can; Russell refers to this as a law of cosmic laziness. Mathematically, they travel in geodesics, in which the total interval between any two events on the journey is GREATER than by any alternative route. (The fact that it is greater and not less is because the sort of interval we are considering is more analogous to time than to distance.) So, if someone flies off into space and comes back to earth after a while, the time between departure and return will be less by their clocks than by the clocks on earth, since the earth, during its journey around the sun, takes the route along which any bit of the journey is measured as longer than along any alternative route.

"Space and time are now dynamic quantities: when a body moves, or a force acts, it affects the curvature of space and time—and in turn the curvature of space-time affects the way in which bodies move and forces act. Space and time not only affect, but are also affected by, everything that happens in the universe." Hofstadter would call this a Strange Loop.


References

  • A Brief History of Time by Stephen Hawking
  • Einstein's Law of Gravitation by Bertrand Russell
  • The Two Masses by Isaac Asimov



A Brief History and Philosophy of Physics

March 4, 2009


by Alan J. Slavin, Department of Physics, Trent University, August 1994

Introduction

This brief history and philosophy of physics has been written to give physics students some appreciation of where their discipline has come from, and of the philosophical principles underpinning it. It is hoped that this will provide students with a sense of physics as a living, evolving discipline, and of their place in its evolution. Physics, indeed all of science, is not a static agglomeration of proven facts and inviolable theories. While there are many theories which are so well tried that they are generally accepted as being correct, all scientific theories are still open to attack from some new, reproducible experiment which disagrees with them. The history below bears this out.

Furthermore, while science per se may be value-less, neither good nor bad, the teaching or application of science always has values attached. If it is taught by a scientist without any mention of the need to use the learning responsibly, then students may assume that scientists need not be concerned about the application of science. If this happens, then the scientist ultimately abdicates to the politician or manufacturer the decision on the use of her or his own work, even though it is unlikely that either the politician or manufacturer will understand as well as the scientist the effects of its use. By this, I am not suggesting that scientists are the only ones qualified to decide on the application of science; scientists can also be blind to the potential in what they do. It is hoped that this paper will contribute to the ability of students to ask the necessary questions regarding the science they and others participate in, both now and throughout their lives.

This summary is designed to outline the general development of the main branches of physics as we know them today. It is presented here as occurring in a fairly linear fashion, and discusses only the principal figures in each area. However, it must always be remembered that there were a great many more people working on these problems than are mentioned here, with many of them being unaware of the work of the others. As a result, many of these areas progressed in a more-or-less "random walk" between theory and experiment until about the last two hundred years, when improved communications made it much easier to keep up to date with developments world-wide.

Given the fact that half the world's population is female, there is a notable absence of women in this history. This is largely because women have been systematically excluded from science over the centuries until very recently, with few exceptions. Even when women did make major contributions as part of a larger team in relatively recent times, as was the case of the women "computers" in astronomy at Harvard College Observatory in the late 1800s, usually only the male team leader gained recognition [Rossiter]. One can only mourn the loss to the discipline from the exclusion of other Marie Curies, and work towards encouraging the participation of many more women in the future.
Earliest Beginnings, and the Greeks

People have always been acutely aware of the regularities in nature: the sun rises every day; the moon appears at the same place in the sky roughly every twenty-seven days, about the same as a woman's menstrual cycle; the seasons always follow in the same order; the pattern of the "fixed" stars (all the heavenly bodies except for the planets, sun, moon and comets) repeats itself at the same time every year; snowflakes all have six points; a dropped stone always falls. In fact, the very well-being of a family depended until recent times on knowing when to plant, or when to move camp for the next season's game. This obvious order begged for explanation, and the earliest people attributed it to a range of gods and goddesses who controlled the world. With the Greeks, for example, Gaea was the earth goddess, Zeus threw lightning bolts, and Apollo drove the fiery chariot of the sun once per day across the heavens. "Science" is the attempt to give a rational, rather than religious or magical, explanation for the order in nature.

People in different parts of the world began to develop science at different times, with different emphases. As one example, as early as 36 B.C. [Cole, p.46] the Mayan people of what is now Mexico and Central America used a calendar with an accuracy equivalent to knowing the length of the year to within six seconds, and plotted the movement of the sun, moon and planets. They also used a "place system" for numbers (like our decimal place system) at the time when the Romans were still using a new symbol for every new power of ten they encountered, and the Mayans employed the zero centuries before Europe. (The zero was used in India from about 850 A.D.) Although the Mayans had recorded much of their customs and learning on hundreds of books made of beaten-bark paper, very little remains today. Their Spanish conquerors systematically destroyed almost all of this "heathen" literature.

The first European attempts to provide a rational explanation for the workings of nature began with the Greeks, about 600 B.C. For example, Pythagoras (582-500 B.C.) and his followers belonged to a religious fraternity dedicated to the study of numbers. They believed that the world, like the whole number system, was divided into finite elements, an early precursor to the idea of atoms ("atom" means "indivisible"). Their discovery of irrational numbers such as √2, which could not be expressed as a ratio of whole numbers, was a serious threat to this system, and history tells us that they killed the Pythagorean who released this secret to the world. The Greeks Leucippus (~440 B.C.), Democritus (~420 B.C.) and Epicurus (342-270 B.C.) put forward the hypothesis that matter was composed of extremely small atoms, with different materials being composed of different combinations of these atoms. Aristarchus of Samos (310-230 B.C.) is the first person known to have proposed that the earth revolves around the sun once per year, rather than the intuitive explanation that the sun revolves around the earth. He also attempted to calculate relative sizes for the earth, moon and sun. However, it was not considered necessary by the Greeks to test such hypotheses experimentally; all that most of them were looking for was a self-consistent explanation of the world based on a small number of philosophical principles. Aristotle is generally credited with providing the most comprehensive of such explanations.
He believed that there were four earthly elements: earth, water, air and fire. Each had its natural place determined by its weight. Earth, being the heaviest, "wanted" to be at the centre of the universe. Water was above the earth, with air above water, and then fire. This order makes intuitive sense. Solid ("earthy") bodies sink in water; if you release air under water the air bubbles to the surface; and flames leap upward during burning. (Wood could float even though it was a solid body, because it contained both earth and fire; the fire was released on burning.) The farther a body was from the earth, the more perfect it became. Hence the moon was the least perfect of the heavenly bodies, as could be seen by its uneven appearance, while the fixed stars were the most perfect of all, and were composed of a fifth element (the "quintessence") which had no weight at all.

In Aristotle's physics, a moving body of any mass had to be in contact with a "mover", something which caused its motion, or it would stop. This mover could either be internal as for animals, or external as in the case of a bowstring pushing on an arrow. The arrow was kept in flight by air displaced from the front rushing to the back to fill the vacuum left by the arrow. Since Aristotle said that a vacuum was impossible ("nature abhors a vacuum"), this explanation of an arrow's motion was again internally consistent. However, because the stars were without mass, once they were put in motion by a "prime mover" they could continue to move by themselves.

The Greeks spent much effort trying to explain the motion of the sun, moon, planets and stars. Since this motion also played a major role in the development of modern science, it is worth discussing in some detail. The stars are so far from us that their relative motions cannot be observed except over timescales of a few centuries. Therefore, to someone standing on the earth the stars appear to be fixed in a vast sphere, concentric with the earth. This sphere rotates at constant speed about the earth at a rate of just more than once in twenty-four hours, returning to almost the same position at a given time of day once every year. Similarly, the sun and moon appear to lie on spheres, which rotate about the earth once per day and once every 27 days, respectively.

The motions of the planets appear much more complicated to an earthly observer. We now know that the planets are all on orbits with different average distances from the sun, and orbital periods that increase the farther the planet is from the sun. For example, Venus, Earth's nearest and brightest planetary neighbour, has a period of 225 days, compared to Earth's 365. This means that as Venus makes its annual pilgrimage through the night sky as viewed from Earth, it occasionally moves backwards relative to the fixed stars, in "retrograde motion", as its orbit carries it opposite to the direction the earth is moving. (Hence the name "planet", meaning "wanderer".) The Greeks usually described this motion using a device invented by Eudoxus of Cnidus (409-356 B.C.), who was apparently the first Greek to use quantitative observation to develop a mathematical description. Noting that the motion of the planets was periodic, he developed a system of spheres each of which carried a planet, with each sphere centred on the earth but with its axis of rotation fixed in a larger sphere. This explanation fitted with the Greek belief that the circle was the most perfect geometrical form. However, this system was approximate at best.
Apollonius of Perga (~220 B.C.) suggested, instead, that each planet was attached to a small sphere which, in turn, rolled on a large sphere centred on the earth, with the larger one rotating roughly once per day. The large sphere accounted for the daily motion of the planet, while the small one (the "epicycle") explained the retrograde motion. A later addition was the use of the "eccentric", which allowed the centre of rotation of the large sphere for each planet to lie away from the centre of Earth.

As the accuracy of the mathematical description increased, so did the need for reliable observations. This was recognized by Hipparchus of Nicea (190-120 B.C.), who had studied the observational records of the earlier Greeks and Babylonians, with the latter dating back to the seventh century B.C. In this process, Hipparchus discovered the "precession of the equinoxes": the sun returns to its position at the equinox each year about 20 minutes before it returns to its position among the fixed stars. To satisfy the need for accurate data, Hipparchus catalogued the position and brightness of 1080 stars. By the time of Ptolemy (85-165 A.D.), who observed at Alexandria in Egypt, the system of epicycles and eccentrics required eighty circles to describe the known periodicities of the heavens.

Of course, the Greeks did not restrict their science to physics. For example, the Hippocratic oath sworn by doctors today takes its name from Hippocrates of Cos (~460-377 B.C.). Aristotle's most lasting contribution to science was in biology, where he classified about 540 animal species, and carried out careful dissections of at least 50 different animals. Archimedes (287-212 B.C.), scientist-engineer, has been described as one of the three greatest geniuses of all time [Kramer]. He invented the Archimedean screw for raising water, discovered the principle of buoyancy of a body in a liquid, and calculated an accurate value for π, among other accomplishments.

In light of his future influence on the course of European science, it is of interest to look at Aristotle's attitude towards the role of women. In his "Generation of Animals" he says, "Wherever possible and so far as possible the male is separate from the female, since [he] is something better and more divine in that [he] is the principle of movement for generated things, while the female serves as their matter … We should look upon the female state as it were a deformity, though one which occurs in the ordinary course of nature." [French p.130]. This attitude was not shared by all Greeks. For example, Pythagoras admitted women to his school equally with men. [French, p.144.]

The Dark Ages, and the Translations

With the fall of the Roman empire about 400 A.D., most of the Greek learning was lost to Europe as it entered the Dark Ages. Even the knowledge that the Earth was round, known to the Greeks who had a good estimate for its diameter, was replaced by the conception of a flat Earth. (This does not mean that all learning stopped during the Dark Ages; important technological discoveries were made during this period, such as the invention of the plough and the water wheel.) The Greek knowledge itself, however, was not lost. It had migrated into the Middle East and Egypt under the Greek and Roman empires, and was translated into Arabic by the people who lived in these regions. The Arabs not only kept Greek science alive, they added to it considerably.
For example, the Arabs had important medical schools and first discovered the law of refraction, now known as Snell's law. They also translated major Indian scientific works into Arabic, and began to use the numerals and algebra developed in India. Al-Battani (~858-929 A.D.) measured a value for the precession of the equinoxes that was more accurate than Ptolemy's. The Arabs also transported the art of paper-making from China to the west. Their contribution remains enshrined in Arabic words which we still use today, including algebra and algorithm.

When Christians recaptured Spain in the eleventh century, the bridge was formed to carry this learning back into Europe. A major translation centre was set up in Toledo after it was captured in 1085, with a lesser centre in Sicily after it fell to the Christians in 1091. Translation was done primarily into Latin, the language of learning in Europe at this time. However, most of the translators focused on the Greek works, and some Arabic and Persian works remain untranslated today.

The Middle Ages

The scholarly work in Europe during the Dark Ages (roughly from the fall of Rome to the beginning of the Middle Ages, or Medieval period, about 1100) had been primarily concerned with the copying of church manuscripts. As a result, it was natural that as ancient learning began to reach Europe it should be studied first in the cathedral schools. These schools evolved into the first universities, with colleges in Cambridge and Oxford, for example, being founded in the 1200s. These were followed by universities set up by both city (e.g. Bologna, Padua) and state (e.g. Naples) governments.

The scholars in these early universities laid much of the groundwork for later scientific developments. One of the most important schools for the development of physics was in Oxford, where the impetus theorists, beginning with William of Ockham (~1295-1349), investigated the cause of motion. They believed that a body in motion did not need to be in contact with a "mover" to stay in motion as Aristotle had claimed, but did so out of its own "impetus". This was a precursor to our modern concept of momentum. Another major contribution has become known as "Ockham's Razor". This principle states that the best scientific theory, other things being equal, is the one which requires the fewest new starting assumptions. It is still accepted today. It was important historically because it provided an objective means for choosing between two theories and did not attempt to answer the question of which was "true".

The flood of ancient, "pagan" knowledge into Europe through the translations from Arabic produced a crisis for Christian theologians: how could one accept a world philosophy which was not rooted in the Christian faith? This problem was largely overcome, at least for the time being, by St. Thomas Aquinas (1225-74), who integrated Aristotelian philosophy and Greek logic with Catholic theology. For example, his first proof of the existence of God was that the fixed stars needed a source of motion, which he identified with Aristotle's "Prime Mover".

One must ask why, when so many of the early scientific discoveries were made in the east, the development of modern science was primarily in the west. Alfred North Whitehead, in Science and the Modern World, suggests that this was due to the integration of Greek rationality with Christian monotheism under Thomas Aquinas.
The all-seeing God of Christianity created the world in an ordered, logical fashion as related in the biblical book of Genesis. Therefore it was only natural to look for a rational explanation of the phenomena of nature.

The Renaissance (1300-1700)

The rebirth ("Renaissance") of knowledge and learning in Europe, which followed the rediscovery of Greek and Arab learning, affected all of society. Awakened to the fact that there was so much "new" knowledge to be explored, people became free to invent their own. The arts flourished, with Dürer inventing perspective drawing in Germany, Michelangelo studying anatomy to give life to his sculpture in Italy, and orchestral music being born. It saw the beginning of the Protestant Reformation in 1517, with Martin Luther nailing his 95 theses to the door of Wittenberg Cathedral. This was the period of the great European voyages of discovery, with Columbus arriving in America in 1492 and Magellan sailing around the tip of South America. Unfortunately, this period also saw the destruction of much of the learning of the peoples "discovered" by the Europeans, who still believed that non-Christian/European culture was valueless. This Eurocentrism is still active today, as witnessed by the almost complete omission of the great Central American civilizations from today's school curriculum in Canada.

However, during the Renaissance Aquinas' integration of Greek, and particularly Aristotelian, philosophy with Catholic theology eventually led to as many problems for the church as it had solved. Copernicus' suggestion (about 1530) that the Earth and the other planets moved around the sun, rather than the reverse, was seen as heresy by the Church. Not only did it contradict Aristotle's teaching and several Biblical assertions that the Earth was stationary, it also challenged the authority of the Church by questioning the hierarchical structure on which its entire existence was based. If the Earth was not stationary at the centre of the universe, perhaps Heaven was not outside the sphere of the stars, and where did this leave God, not to mention all of His ecclesiastical delegates? The idea of a moving Earth was so revolutionary that Copernicus did not agree to have it published until he was on his death bed (1543). It is no surprise that the two people most responsible for the publishing of Copernicus' book were followers of Martin Luther, who had dared to question the authority of the Catholic church on scriptural matters.

The Renaissance also saw the beginnings of modern science under Galileo Galilei (1564-1642). One of Galileo's greatest contributions was to recognize that the role of the scientist was not to explain "why" things happened as they do in nature, but only to describe them. In one of his "Dialogues" he asks a colleague why objects fall when released. When the colleague replies that everyone knows that gravity makes them fall, Galileo replies that he has not explained anything, just given it a name. This new role greatly simplified the work of the scientist, who no longer had to wonder why God would have caused a particular phenomenon to occur. It sufficed to recognize that it did occur, and allowed one to get on with the job of deciding how best to describe it. This leads us to Galileo's second major contribution, the description of natural phenomena using mathematics and the appeal to nature through experimentation to see if the description is correct.
This was a major deviation from the qualitative science of Aristotle in which, for the most part, all that was required of an explanation was that it agreed qualitatively with reality: solid objects fell because they were composed of earthy material whose natural place was at the centre of the universe. In Galileo's science, on the other hand, one had to describe mathematically how far an object fell in a given time, and then verify experimentally that this description was correct. Moreover, he recognized that the experimenter had to devise the experiment so as to isolate the phenomenon being studied; for example, to minimize the effect of friction in the study of falling bodies.

Galileo's most important application of these ideas was in the mechanics of falling bodies, building on the early ideas of the impetus theorists. He showed that all compact bodies fell at the same rate, such that the distance covered was proportional to the square of the elapsed time of fall. Because objects in free fall drop too fast for easy measurement, Galileo did his measurements by rolling balls down an inclined plane. Even so, there were no clocks at the time accurate enough to make the measurements Galileo had recorded. (Galileo is, in fact, credited with the suggestion of using a pendulum as a clock.) Stillman Drake, a Canadian who was one of the world's foremost scholars of Galileo, has noted that a person can keep time while singing with a precision of about 0.01 seconds. Drake shows that Galileo could have made his measurements by noting where the rolling ball was at each beat in a song [Drake, 1975].

Galileo is probably best known for his conflict with the Catholic church over his support for Copernicus' description of the solar system. When Galileo heard of the invention of the telescope, he designed and built one for himself. This, the first telescope usable for astronomical observations, quickly led Galileo to realize that Copernicus' theory was more than just an alternative to the Ptolemaic approach for calculating the positions of the planets. He saw that Jupiter had moons, and so was a miniature model of the solar system in itself; that Venus showed phases similar to those of the moon, as it must under the Copernican system; and that the moon had mountains and so was similar to the Earth. No wonder the church saw him as a threat! Galileo, aged sixty-eight, was tried by the Inquisition and sentenced to house arrest for the remainder of his life for daring to support Copernicus' theory, even though he recanted when faced with the death penalty. Ironically, he used this time to develop mechanics to the point at which it could explain why the planets would not fall into the sun if they were not held up by their "natural place".

Development of the Scientific Method

Francis Bacon (1561-1626) takes credit for providing much of the philosophical basis for our modern scientific method. His major works, published in 1605 and 1620, were very influential in directing the approach to science over the next two hundred years and remain relevant today. Bacon had a vision that science could greatly improve the lot of humanity, and set out how he thought this could best be accomplished. This belief in human "progress", that humanity is moving towards some ultimate state of happiness in which war, illness and poverty will be abolished, was unique to the west.
Part of this vision was his belief, founded in the Genesis story of creation, in the right of man to dominate nature, "to bind her to your service and make her your slave" [French, p.117]. This right of domination over the rest of nature has been a guiding principle of science and technology for most of the time since Bacon. It is only now beginning to be challenged by the developing ecological awareness that people, too, are part of nature, and that they ignore the inter-relationship at their peril. Marilyn French goes on to argue that, since nature has generally been seen as "female", Bacon's claim for the right of men to dominate nature has helped perpetuate the domination of women by men.

Bacon's approach was basically experimental, qualitative and inductive. He rejected a priori assumptions such as the idea of the perfection of spherical motion used by the Greeks. Rather, Bacon believed that if enough observations could be made which involved a particular phenomenon, an observer could use these to induce the fundamental principles involved. The first step of this process, then, was the gathering of as many unbiased facts as possible, drawing heavily on information already available in craft and industrial processes. The next was to correlate these so as to discern the fundamental truths within them.

René Descartes (1596-1650), from France, proposed a different approach to the development of science. Instead of starting with raw facts, as Bacon had suggested, Descartes believed that the basic principles ruling nature could be obtained by a combination of pure reason and mathematical logic (e.g., "I think, therefore I exist."). His approach was analytic. It involved breaking down a problem into its parts and arranging them logically, a technique which is still used constantly in science today. It is termed "reductionism", because its basic assumption is that we can reduce a phenomenon to a collection of independent components; if we can understand each of them taken independently, then we can understand the entire phenomenon, in a way similar to our understanding of the operation of a machine. This approach has dominated scientific investigation over the last three hundred years, and has proven very successful in areas in which the parts really are largely independent.

"Holism", the opposite of reductionism, assumes that some phenomena, at least, can only be understood as integrated wholes, and so cannot be broken down into independent parts. An excellent discussion of the need for more holistic thinking in modern science can be found in Fritjof Capra's The Turning Point. Capra argues that the need for a holistic approach has a theoretical basis in the quantum nature of matter, as discussed below.

Descartes' "mathematical-deductive" approach was diametrically opposed to Bacon's "qualitative-inductive" method, whereas modern science uses a combination of the two. Given Bacon's emphasis on experimentation, and Descartes' emphasis on deductive reasoning, it is not too surprising that in the next hundred years English scientists stressed experimentation while French scientists stressed mathematical theory. In developing his approach, Descartes made several important mathematical contributions of his own. Principal among these was the invention of Cartesian geometry, which describes geometrical figures in the form of algebraic equations. Descartes really believed that the world and most of what was in it were essentially machines.
God had created and wound up the system at the beginning, and it had been running ever since under the laws of nature without further intervention. The one exception to a machine was the soul (or mind) of a human, which was divine and separate from the mechanical body. Since animals did not possess a mind, they were pure machines which could not feel pain. For a period there were Cartesian followers who would vivisect animals to show how well a machine made by nature could mimic suffering. This concept of the world as a machine persisted for many years, and was strengthened by Newton's mechanics. In fact, in 1812 Laplace, a great mathematical physicist, made the following statement [Schneer, p.129]: "If an intelligence, for a given instant, recognizes all the forces which animate Nature, and the respective positions of all things which compose it, and if that intelligence is sufficiently vast to subject these data to analysis, it will comprehend in one formula the movements of the largest bodies of the universe as well as those of the minutest atom; nothing will be uncertain to it, and the future as well as the past will be present to its vision. The human mind offers in the perfection which it has been able to give to astronomy, a modest example of such an intelligence."

The Development of Classical Physics: Mechanics, Heat, Optics, Electromagnetism, Atoms

Mechanics

Sir Isaac Newton (1642-1727), born the year Galileo died, is the most important figure in the development of mechanics. His three "laws" form the base on which all of mechanics prior to 1900 was constructed. This model of building an edifice of theory on the foundation of a few fundamental definitions and laws is essentially that used by Euclid in his geometry. It became the ideal for all future physical theories, including thermodynamics with three basic laws (zeroth, first and second), optics (laws of reflection and refraction) and electromagnetism (Maxwell's laws). Much of the physics of the hundred years after the death of Newton was spent in applying his three laws to different phenomena.

Newton's crowning accomplishment was the application of his mechanics to show that the entire universe obeyed the same laws of nature, as published in his Mathematical Principles of Natural Philosophy (the Principia) in 1687. By assuming that two masses attracted each other with a force inversely proportional to the square of the distance between them, Newton proved that the mechanics which determined how bodies fall on Earth also explained the periodic motions of the planets. However, Newton did not restrict his work to mechanics; he also did extensive studies on light, and shares the credit for the invention of calculus with the German, Gottfried Wilhelm Leibniz (1646-1716), with whom he fought a long battle over who was first. Newton also wrote on theology, and was Master of the Royal Mint.

Thermal Physics

The invention of a practical steam engine by Thomas Newcomen (1663-1729) prompted great scientific interest in the study of heat, and was a major contribution to the industrial revolution which began in England in the mid 18th century. (It is ironic that the industrial revolution, which began to apply scientific principles to the production of goods as predicted by Bacon one hundred years earlier, also led to the virtual slave labour of children and the poor in mines and factories.)
Sadi Carnot (1796-1832), a French engineer, laid the basis for our understanding of heat engines (any engine which uses heat to produce power, such as the automobile engine, or a coal or nuclear electrical power station). He compared the operation of a heat engine with that of a waterwheel, with heat "falling" from a higher to a lower temperature. Joseph Black (1728-99), the professor of medicine at Glasgow University, began to quantify heat by the measurement of the specific heat capacities (the amount of heat required to raise the temperature of a given mass by one degree) of different substances, compared to that of water. Motivated by the heat generated in the boring of cannons, Count Rumford (1753-1814) first showed that heat could be produced in limitless quantities by friction, and so was not a material substance (caloric) as had been believed previously.

James Prescott Joule (1818-89), by rotating a "paddle wheel" under water and measuring the increase of temperature, established a numerical equivalence between work and heat. He also showed that the heat produced by an electrical current I in a wire of resistance R was given by I²R, a relationship now known as Joule's law. Joule's quantitative work on the interconversion of energy laid the basis for the first law of thermodynamics, which says that the change in the energy of a system is equal to the heat input to it plus the mechanical work done on it. This law was first stated explicitly by the German Rudolf Clausius and the Englishman William Thomson (later Lord Kelvin) in 1851. Clausius also realized that a heat engine could utilize only some of the available heat to do work, and from this developed the concept of entropy, the quantity of heat transferred divided by the temperature. Clausius showed that the entropy always increases in any spontaneous natural process, and so established the second law of thermodynamics. As with Newton's three laws, the laws of thermodynamics form the foundation for the understanding of thermal physics.

Light and Optics

The Greeks had applied the methods of geometry to the study of optics, and Ptolemy had a crude approximation to the law of refraction. This work was extended by the Arab Al-Hazen (965-1038), who showed that Ptolemy's law was just an approximation, valid at small angles. Al-Hazen also carried out experiments which brought him close to the thin lens formula for convex lenses. The telescope and compound microscope were invented in Holland near the beginning of the seventeenth century, with the telescope used to advantage by the early astronomers including Galileo. In 1621 Willebrord Snell rediscovered the correct formula for the refraction of light, which now bears his name.

From the time of Descartes there was considerable debate as to whether light consisted of small particles which were localized and travelled in straight lines, or of waves which spread out in space. Descartes adhered to the former explanation, whereas in the late 1600s Christian Huygens argued for a wave theory, with the waves travelling through an ether which permeated all space and all objects. Newton used a combination of the two approaches: while light itself consisted of "corpuscles", he believed that these particles could induce vibrations in the ether through which they travelled, which in turn could affect the transport of the particles. For example, he used this theory to explain "Newton's rings", alternating light and dark bands which appear when a slightly curved lens is placed in contact with a flat mirror.
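Returning to refraction for a moment (my addition, in modern notation): the law Snell rediscovered relates the angles on either side of a boundary between media of refractive indices n₁ and n₂,

\[ n_1 \sin\theta_1 = n_2 \sin\theta_2, \]

and Ptolemy's version amounts to the small-angle approximation n₁θ₁ ≈ n₂θ₂, which is why Al-Hazen could show that it fails at large angles.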
For a century after Newton, the majority of scientists adhered to the corpuscular theory. Thomas Young (1773-1829) revived the wave theory for light. It was generally accepted that sound was transported by waves carried through the air, and Young argued that light travelled in a similar way. He used the interference pattern produced in his famous "two-slit experiment", still studied in introductory physics courses today, as proof of this wave nature. (A similar pattern, in the form of a cross, can be seen with the naked eye by looking at a distant street light through a window screen, although using binoculars improves the image.) From these patterns he was able to measure the wavelength of light, which he proved to be very small. He went on to show that this led to light travelling in approximately straight lines for the vast majority of common cases, although it did bend slightly around objects to produce patterns in their shadows, patterns which could be explained by his wave theory.

Then, in 1817, the Frenchman Augustin Fresnel showed that all known optical phenomena could be explained by the wave theory provided that, following a suggestion of Young's, the vibrations were transverse (perpendicular to the direction of light propagation) rather than parallel to it as for sound waves. This firmly established the wave theory as dominant, although it did raise the question of how a fluid such as the ether could support a transverse vibration, since fluids usually have only longitudinal vibrations. This problem was a harbinger of an upcoming debate over the very existence of the ether.

Electromagnetism

The study of electromagnetism began in experimental studies of such effects as static electricity and magnetism. People had known from ancient times that rubbing certain materials on dry hair would make the two attract each other, and the naturally occurring, magnetic lodestone was used as a navigating compass by the Chinese from about 100 B.C. Systematic studies of electricity began in earnest once apparatus had been invented for generating and storing electrical charge. The first electrostatic generator, a machine which rubbed a cloth against a rotating ball of sulphur, was invented by Otto von Guericke (1602-86), while Pieter van Musschenbroek (1692-1761) made the first Leiden jar to store electrical charge. In contrast to the spark discharges of an electrostatic generator, the voltaic cell (battery), invented by Volta in Italy in 1799, could provide a continuous flow of current.

In a famous (and dangerous!) experiment in 1752, Benjamin Franklin used a kite to collect charge from a thunder cloud and store it in a Leiden jar. He then showed that this charge had identical properties to that produced by an electrostatic generator, proving that lightning was just one manifestation of electricity. However, Franklin's main contribution to the theory of electricity was his suggestion that charge came in two types, which he called positive and negative, with like charges repelling each other and unlike charges attracting. By these simple assumptions he could explain all known experimental facts about electricity, whereas previous theories had required about 20 different assumptions, including different shapes for particles of electricity in different media. This is one example of the use of Ockham's Razor in deciding between rival theories.
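Franklin's two-type model survives in the signs of modern electrostatics (my note, anticipating the quantitative law discussed next): with charges carrying + and − values, a single formula,

\[ F = k\,\frac{q_1 q_2}{r^2}, \]

covers both cases, the force being repulsive when q₁q₂ > 0 (like charges) and attractive when q₁q₂ < 0 (unlike charges).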
Franklin also showed that there was a connection between electricity and magnetism, because iron needles could be magnetized by placing them near a wire carrying an electrical current. In 1750 John Michell, at Cambridge, had discovered the inverse-square repulsion of magnetic poles, by using a "torsion balance" to measure the twisting of a thread supporting one magnet when another was brought close. In a period beginning in 1785, the Frenchman Charles Augustin Coulomb reinvented the torsion balance and showed that both magnetic and electric forces experienced an inverse-square dependence on distance, now called "Coulomb's law" in the case of electrostatics.

In Germany there developed a separate school of thought, that of the "nature philosophers". They believed that matter was not inert, as claimed by the mechanist school, but alive, with a universal world spirit that interconnected all forces. One member of this movement was the philosopher Immanuel Kant (1724-1804), who asserted that it was the interplay of innate repulsive and attractive forces that governed matter. If only repulsive forces existed, all matter would disperse; if only attractive forces were present, all matter would coalesce into a point. This balance between attractive and repulsive forces is today the starting point for the theoretical analysis of the structure of solids and liquids, although the forces are no longer believed to reflect a life force. The study of both electricity and magnetism was popular with German scientists, because the presence of opposite polarities in these phenomena fitted with their philosophy.

These ideas also led to the conviction that every effect in nature had its inverse effect, since the vital forces were all connected. This idea that every effect has its inverse is fundamental to modern physics. For example, if you connect two wires made of different materials, and heat the junction, a voltage develops between the free ends of the wires. This effect, discovered by Thomas Seebeck, another German nature philosopher, is the principle behind the use of a "thermocouple" for measuring temperatures. Conversely, a voltage applied with the correct polarity across the free ends of the two wires causes the junction to decrease in temperature. This is the principle behind the "thermoelectric cooler", often used to cool devices in electronic circuits.

The belief in the interconnectedness of all forces in nature led Hans Christian Oersted, in Copenhagen, to announce in 1807 that he was looking for a connection between magnetism and electricity. He found that a magnet would move in a circle around a wire carrying a current, and that a wire carrying a current would move around a magnet. This is the principle required for the construction of an electric motor. The magnetic forces near current-carrying wires were the first forces discovered which did not operate radially from the two interacting bodies.

The next major contributions in electricity and magnetism came from the theoretician André Marie Ampère in France, and the experimentalist Michael Faraday in England. Ampère (1775-1836) developed a theory for the calculation of magnetic forces caused by a given electrical current, and suggested that the magnetic effects of some solids were caused by small circulating currents in the particles making up these materials. Faraday (1791-1867), on the other hand, had very little mathematics but was a superb experimentalist.
His most important experimental observation in electromagnetism was that of induced currents, made in 1831: a wire loop would have an electric current developed in it if either the loop was moved near a magnet, or the magnet was moved. This is the principle behind the generation of electricity by mechanical means, as occurs in every hydro- or thermo-electric power generating station, or in every car alternator.

Even though mathematically unlearned, Faraday made a very important contribution to the development of the theory of electromagnetism by constructing a qualitative model of how electrical and magnetic forces acted. He supposed that each "particle" of electricity or magnetism produced a "line of force" which emanated from a positive pole of a particle and returned to a negative pole. These lines tended to contract along their length, and to expand perpendicular to their length. The lines could not cross. The number of such lines passing through a given area (i.e. the areal density) was a measure of the strength of the force provided by them. These assumptions explained the repulsion and attraction of magnetic and charged bodies: the tendency to contract lengthwise would pull bodies of opposite polarity together, whereas the tendency for them to expand laterally would push bodies of the same polarity apart. Since the area of a sphere increases with the square of the radius, the inverse-square decrease in intensity of the forces was a natural consequence of the decrease in the areal density of the lines of force with distance from the charge or magnetic poles. The visual appeal of these lines of force still plays an important role in our understanding of electromagnetic phenomena.

Moreover, Faraday believed that the lines of force would be present even if only a single charged or magnetic object existed; that is, even if there were no other body on which the first one could exert a force. Thus he invented the concept of the "field", as a physical presence which had the ability to produce a force (magnetic, electric or gravitational) if a second body happened to come into its vicinity. The concept of the field has served as one of the most powerful of all theoretical tools of modern physics.

James Clerk Maxwell (1831-79) set out to make Faraday's ideas quantitative. He described the lines of force using Newtonian mechanics, envisioning them as rotating tubes of fluid (the ether) which had the properties required by Faraday: the rotation would cause the tubes to expand laterally and contract longitudinally. The resulting set of only four equations ("Maxwell's equations") described all known electric and magnetic phenomena exactly. Maxwell, however, realized that the enormous machinery with which he had filled all space was not an essential part of his theory, and eventually just used his equations as though the machinery did not exist. This is how we use his equations today. The relationship between the original machinery and the final equations was not without its detractors, however. One French reader stated that when he started to read Maxwell's work he expected to find himself in the midst of the quiet groves of electromagnetic theory, and instead found himself inside a factory! [Williams, p.122]. One of the unexpected results of Maxwell's work was that it predicted that electromagnetic waves could be produced which would propagate at the speed of light. This showed that light was an electromagnetic phenomenon, and not a separate subject.
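For reference (my addition; Maxwell's own formulation ran to many more component equations before the compact vector notation came into use), the four equations in modern form are

\[ \nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \vec{B} = 0, \qquad \nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}, \qquad \nabla \times \vec{B} = \mu_0 \vec{J} + \mu_0 \varepsilon_0 \frac{\partial \vec{E}}{\partial t}. \]

In empty space (ρ = 0, J = 0) they combine into wave equations for E and B with speed 1/√(μ₀ε₀), which is the prediction referred to above.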
Discoveries in electromagnetism were applied quite rapidly to the development of useful devices. For example, the telegraph was invented in 1837 by Charles Wheatstone only one year after the development of the first reliable battery, and the first practical electrical generator was invented by Werner Siemens in Germany in 1866, 35 years after Faraday's discovery of induced currents.

Atoms

Until the twentieth century, the development of the atomic theory of matter was pursued by scientists who are often more closely identified with chemistry than with physics. In 1789 Antoine Lavoisier published his Elements of Chemistry. In this work, he emphasized the need for quantitative methods in chemistry. By carefully devised experiments, he was able to isolate 23 elements, fundamental substances that could not be broken down into simpler forms. In England in the late 1700s, the experimentalists Joseph Black, Henry Cavendish and Joseph Priestley isolated several different gases and showed how they could be produced. Schneer makes the interesting point that a large number of the most successful scientists of this era, including Priestley, Dalton, Faraday, James Watt (who greatly improved the steam engine), Thomas Young, and Franklin, were all Quakers, a non-conforming religious group who dared to challenge the established beliefs of the day.

Then in 1802 John Dalton, an English schoolmaster, revived the theory of atoms. It was known by this time that gases always combine in fixed ratios by mass. For example, one gram of hydrogen burns with eight grams of oxygen to produce nine grams of water. Dalton proposed that these ratios of whole numbers could be explained if the gases were formed of atoms whose masses were, themselves, in the ratio of simple integers. The formation of water discussed above could then be explained by the combination of two hydrogen atoms with one oxygen atom. At this time, Dalton was unaware that both hydrogen and oxygen gas consisted of "molecules" which were each composed of two atoms, but his theory was correct in essence.

In 1869 Dmitri Mendeleev of Russia, combining Dalton's atomic description with the fact that certain groups of elements had similar chemical properties, constructed the first periodic table. He pointed out that the gaps in this table should correspond to as-yet-undiscovered elements, and was able to predict their properties and atomic masses. Armed with this knowledge, scientists very quickly discovered most of the missing elements.

Darwin's Theory of Evolution

A brief mention must be made here of the theory of biological evolution, because of its philosophical relevance to the physical idea of an evolving universe. A basic tenet of the theory of evolution is that the world as we know it today has evolved from an earlier form of the world under the pressures of natural forces which were in existence at the time, such as erosion and sedimentation, and not by divine intervention in this process. This idea of "uniformitarianism" was first put forward by James Hutton of Edinburgh in 1785, as an explanation for the formation of the geological structures of the earth. He found part of his justification for this theory in the motion of the planets, which required only the forces of nature to keep them moving in their orbits forever. In analogy to the timeless motion of the planets, Hutton assumed that the formation of the earth had occurred over extremely long periods of time.
Hutton’s ideas were unpopular in his time because they were perceived to be in conflict with the teaching of the Bible. They were received little better by scientists when revived by Charles Lyell in The Principles of Geology, published in 1830-33, but were accepted much more readily by the populace. Mason suggests that one of the reasons for this change in reception was that the idea of the progress of humanity, championed by such writers as Francis Bacon and the economist Adam Smith, who published An Enquiry into the Nature and Causes of the Wealth of Nations in 1776, was now generally accepted by society.

Charles Darwin acknowledged that it was the concept of uniformitarianism that led him to his theory of evolution, the idea that biological species might evolve in the same way that the earth’s geology did, under the natural forces continually in existence. The part that needed to be added was the answer to what determined the direction of this evolution. Offspring are born with characteristics which are slightly different from those of the parents. Darwin claimed that when these new characteristics better prepared the organism to live to reproductive age, then it would be able to pass these characteristics on to its children: thus, nature selected those offspring for survival much as a cattle owner selected for breeding those animals born with desirable characteristics. His theory did not require a reason for the variation readily observed in offspring, although he speculated that it might be due to changes in food or climate. However, he believed that these changes were exceedingly slight, and could result in a new species (a class of life that is only fertile within that class) over very long periods of time. Knowing that his theory was in contradiction with a literal interpretation of the Bible, Darwin spent twenty years amassing data before the publication of On the Origin of Species in 1859. Although this book raised a furore when first published, the logic of its arguments and its philosophical consistency with other scientific theories gradually won the day.

Indeed, evolution turned out to be a useful, though fallacious, argument for justifying both colonialism and racism. Herbert Spencer coined the phrase “survival of the fittest” to replace Darwin’s “natural selection”, and applied it to the evolution of society. With the idea of human progress fully ensconced in society’s thinking, it was a short step to assume that the race or nationality in power deserved to be there, because it was the one most fit to rule. “Survival of the fittest” soon became “might is right”, a belief which is still at work in the world today.

Modern Physics: Relativity and Quantum Physics

Relativity

By the end of the nineteenth century, most physicists were feeling quite smug. They seemed to have theories in place that would explain all physical phenomena. There was clearly a lot of cleaning up to do, but it looked like a fairly mechanical job: turn the crank on the calculator until the results come out. Apart from a few niggling problems like those lines in the light emitted by gas discharges, and the apparent dependence of the mass of high-speed electrons on their velocity ... Twenty-five years later, this complacency had been completely destroyed by the invention of three entirely new theories: special relativity, general relativity, and quantum mechanics. The outstanding figure of this period was Albert Einstein.
His name became a household word for his development, virtually single-handedly, of the theory of relativity, and he made a major contribution to the development of quantum mechanics in his explanation of the photoelectric effect.

Einstein was a clerk in a Swiss patent office when he published his special theory of relativity in 1905. He claimed in later life that the need for this theory emerged out of Maxwell’s equations. Those equations changed their form when one rewrote them from the perspective of a person moving at constant velocity. On the other hand, our experience tells us that we cannot tell if we are moving as long as our velocity is constant: you can throw a ball back and forth in a rapidly moving train car just as you can when the train is still. It is only when the train accelerates — slows down or speeds up — that one experiences a change. Moreover, Maxwell’s equations indicated that the speed of light did not depend on the speed of the person measuring this speed, whereas if one throws a stone while running, the speed of the runner contributes to the speed of the stone.

To overcome these apparent difficulties with Maxwell’s theory, which Einstein believed to describe reality correctly, he considered the effect of two postulates. The first was that all physical phenomena must obey the same equations for people moving at different constant velocities (the principle of relativity), and the second was that the speed, c, measured for light does not depend on the speed of the “observer” (the person carrying out the measurement). These two postulates led directly to almost unbelievable results. They showed that the measurements of space and time depended on each other (that the time you measured for an occurrence depended on your position), and also depended on the speed of the observer. One immediate result is that “simultaneity” is relative to the observer. Two “events” that occur at the same time for one observer occur at different times as seen by an observer in motion relative to the first, provided that the events occur at different spatial locations; the concept of absolute time and space which had underpinned mechanics for two centuries lay in tatters. Einstein’s theory also showed that the measured mass of an object depended on its velocity, and that mass (m) could be converted to energy (E) according to E = mc², the principle behind the atomic bomb and nuclear power plants.

One of the beauties of Einstein’s theory was that, as you let a body’s speed become small compared to the speed of light, the equations would reduce to those of Newtonian mechanics. This requirement of physics, that a more general theory must reduce in some limit to more restrictive theories, is called the “correspondence principle”. Thus we see that the development of the special theory of relativity in no way diminishes the stature of Newton. Although his concepts of absolute space and time were incorrect, his genius remains: Newton’s mechanics is still correct except for bodies whose speeds approach that of light.

It is important to discuss the fact that the results of the special theory contradict “common sense”: we know that we do not have to correct our watches after we have been in a car, and that people who are running do not appear thinner than when at rest. The problem here is that our common sense is, by definition, the sense of how the common world works.
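A minimal numerical sketch makes the point. The Lorentz factor gamma = 1/sqrt(1 − v²/c²), which controls time dilation and the velocity dependence of measured mass, is indistinguishable from 1 at everyday speeds (the sample speeds below are illustrative):

```python
# Sketch: the Lorentz factor gamma = 1/sqrt(1 - v**2/c**2). As v/c -> 0,
# gamma -> 1 and the relativistic formulas reduce to Newton's (the
# correspondence principle); near c, gamma grows without bound.
import math

c = 299_792_458.0   # speed of light, m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for v in [30.0, 3.0e5, 0.9 * c]:   # highway speed, a fast spacecraft, 0.9c
    print(f"v = {v:.3g} m/s: gamma = {gamma(v):.12f}")
# At 30 m/s gamma differs from 1 by about 5 parts in 10**15, which is why
# no one notices relativistic effects in everyday life.
```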
However, the effects predicted by the special theory are significant only at a speed approaching that of light, and none of us has ever moved at such a speed relative to another object with which we can interact. Therefore, we must not assume that our low-speed common sense also applies at very high speeds. Similarly, we will see that the mechanics governing sub-microscopic bodies such as atoms is quite different from the mechanics describing 60-kg human beings.

In 1887 the Americans Albert Michelson and Edward Morley had attempted to measure the speed of the Earth through the ether by measuring the difference in the speed of light travelling in two perpendicular directions. A difference was expected, for the same reason that the speed of a water wave relative to you depends on whether you are travelling in the same direction as the wave or otherwise. They found no dependence on the direction of motion of the light, and interpreted this null result by claiming that the Earth dragged the ether with it. But if the ether interacted with matter in this way, why could it not be detected directly? Moreover, the observation by James Bradley in 1725 of stellar aberration rules out the hypothesis of ether drag. (Stellar aberration is the apparent movement of the stars in a small ellipse over the course of a year, because the Earth is moving and it takes some time for the light of the stars to reach Earth.) In 1892, Hendrik Lorentz and G.F. FitzGerald independently hypothesized that the size of Michelson and Morley’s measuring device must depend on its velocity so as to contract in the direction of motion exactly enough to give the null result.

Einstein’s second postulate presented yet another possibility: the measured speed of light was intrinsically independent of the speed of the observer. However, it went far beyond interpreting the Michelson-Morley result and explained, for example, the experimental observation that an electron’s mass depended on its velocity. In fact, Henri Poincaré, a renowned mathematician and physicist, had suggested a year before Einstein’s publication that a whole new mechanics might be required, in which mass depended on velocity. Einstein’s theory cleared up so many outstanding problems that it was quite quickly accepted by most physicists.

Before leaving special relativity it is important to discuss briefly Einstein’s role in the development of nuclear weapons. Nuclear fission had been discovered in Germany in 1938, just after the invasion of Austria by Hitler’s forces. In 1939, faced with the threat that Germany would develop a nuclear bomb, Einstein was convinced by physicist Leo Szilard to write to President Roosevelt, pointing out the possibility and encouraging American research in this direction. In spite of this, Einstein actively opposed further development of nuclear weapons following the Second World War. In fact, he and the British philosopher/mathematician Bertrand Russell helped found the Pugwash organization, named after the site of its first meeting in Pugwash, Nova Scotia, in 1957. This organization of leading scientists throughout the world, and its student wing, still meet regularly to discuss issues concerning the impact of science on society, and to prepare position papers for presentation to governments and the United Nations.

The General Theory of Relativity extended Einstein’s ideas to bodies which are accelerating, rather than moving at constant velocity.
Einstein showed that spacetime near masses could not be described by Euclidean geometry, but rather that a geometry invented by Riemann must be used. In this way, gravitation was shown to be a result of the curvature of spacetime in the vicinity of mass. The general theory allowed Einstein to predict the amount of the deflection of starlight passing the Sun, and the values measured during the eclipses of 1919 and 1922 agreed with his prediction. However, Einstein’s theory of general relativity was not the last word on the subject. General relativity is still an active area of research today, partly because it bears directly on the evolution of the universe, including such questions as, “Will the universe someday begin to collapse back upon itself under its gravitational attraction?”

Quantum Physics

Einstein’s theories of relativity were developed in a way close to Descartes’ mathematical-deductive method. The special theory came from an attempt to harmonize electromagnetic theory with the principle of relativity. The general theory evolved from trying to reconcile the fact that inertial mass, the “resistance” to the force in the equation F = ma, has the same value as gravitational mass, even though the two are totally unrelated in Newtonian mechanics. Quantum physics, on the other hand, emerged from attempts to explain experimental observations.

In the late 1800s a major area of research centred on the explanation of “blackbody” radiation: a black object such as a fireplace poker, when heated until it begins to glow, emits light whose intensity depends on wavelength in a way which depends largely on the temperature of the body and little on its material of construction. Because of the universal nature of this phenomenon, it was apparent that it must depend on fundamental physical principles. In 1900 Max Planck used a “lucky guess” [Jammer, p.19] to obtain a mathematical equation which fitted the experimental data accurately. Three months later he derived the expression theoretically. To do this he assumed that a blackbody contained many small oscillators which emitted the light, much the way the oscillations of electrons along a transmission antenna emit radio waves. However, he had to allow these oscillators to emit energy only in certain discrete amounts, rather than over the continuous range of values expected from classical electromagnetism. Planck had no physical basis for this assumption; it was just the only way that he could fit the data.

Einstein used Planck’s idea in his explanation of the photoelectric effect, in which electrons are ejected from a metal when it is exposed to light whose frequency exceeds a certain value. Einstein extended Planck’s ideas on the emission of light from a blackbody to the general statement that light, itself, came in packets of energy, or quanta (called “photons” from the Greek “photos”, meaning “light”). Each quantum has an energy E = hf, where f is the frequency of the light and h is “Planck’s constant”. This was a bold move, since the work of Young and Fresnel had seemed to establish beyond all doubt that light acted as a wave, and Maxwell’s theory did not include any mention of a particle nature to light. However, Einstein’s assumption explained the fact that even an intense light below a certain frequency could not cause the emission of electrons: if each incoming light quantum gave all its energy to an electron in the metal, the electron could not escape if this energy was less than the binding energy of the electron.
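The threshold condition is easy to state in code. A minimal sketch (the work function below is an illustrative value, roughly that of sodium; the frequencies are round numbers for red and violet light):

```python
# Sketch of Einstein's photoelectric condition: a photon of frequency f
# carries energy E = h*f, and an electron escapes only if E exceeds the
# metal's work function W (the electron's binding energy). Raising the
# light's intensity adds more photons, not more energy per photon.
h = 6.626e-34    # Planck's constant, J*s
eV = 1.602e-19   # joules per electron-volt
W = 2.3 * eV     # illustrative work function, roughly that of sodium

for f in [4.0e14, 7.0e14]:   # red light, violet light (Hz)
    E = h * f
    verdict = "electron ejected" if E > W else "no emission"
    print(f"f = {f:.1e} Hz: photon energy = {E / eV:.2f} eV -> {verdict}")
```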
Einstein’s explanation dismayed Planck, who never expected his suggestion to be applied so broadly.

In 1911 Ernest Rutherford fired very small particles, emitted in radioactive decay, at a thin film of gold. From the scattering pattern of the particles, he determined that the atom consisted of a small, heavy, positively charged nucleus surrounded by very light electrons. Niels Bohr used this model and the quantum ideas of Planck and Einstein in 1913 to explain why the light from gas discharges was emitted at only a few, discrete frequencies; this light formed emission “lines” of different colours when the light was passed through a slit and dispersed by a prism. Bohr suggested that the electrons in an atom were only allowed to occupy certain orbits of definite radius r around the nucleus, namely orbits whose angular momentum was given by mvr = nh/2π, where m and v are the mass and velocity of the electron, and n is an integer. When an electron gained energy and was “excited” to a higher orbit during the gas discharge, it could lose this energy only by falling back to one of the lower allowed orbits, with its energy loss ΔE being carried off by the emission of a quantum of light of frequency f = ΔE/h. The predicted frequencies for hydrogen matched the experimental values.

Beginning with the claim that mechanical models such as Bohr’s were inappropriate because they tried to use the mechanics which had been developed for macroscopic bodies in situations where it might not apply, Werner Heisenberg in 1925 derived a purely mathematical theory that incorporated directly the empirical data, such as the wavelengths of spectral lines. The same year, Louis de Broglie argued that if light could act both as a wave and as a particle (photon) with definite energy, then perhaps material particles such as electrons could as well. He suggested that such a particle should have a wavelength given by λ = h/mv, where m is the particle’s mass and v is its velocity. By the next year, de Broglie’s hypothesis had been used by Erwin Schrödinger to explain the quantization of Bohr’s orbits. Moreover, Schrödinger showed that his wave mechanics was equivalent to Heisenberg’s theory.

By 1927, C.J. Davisson and L.H. Germer had confirmed de Broglie’s hypothesis directly by producing a diffraction pattern by scattering electrons from the ordered atoms on the surface of a nickel sample, much like the two-slit interference pattern used by Thomas Young to prove that light behaved as a wave. This result is impossible if we consider the electron as a classical particle: it means that the electron must scatter off more than one nickel atom simultaneously or, in the two-slit analogy, go through both slits at the same time! Rather than placing the electrons in the atom in definite orbits as envisioned by Bohr, Schrödinger’s wave mechanics, as interpreted by Born, treated the square of the particle’s wave amplitude ψ as giving the probability that the electron was at a particular place in space, with the most probable positions corresponding to Bohr’s orbits.

From this discussion it is clear that we are treating the electron both as a particle and a wave. Consider Young’s two-slit experiment again, but using electrons instead of light as the incident radiation. Suppose we position a fluorescent screen behind the two holes, and decrease the intensity of the electron beam until only one electron hits the screen at a time.
Experimentally we see that each electron produces a tiny flash on the screen, as though it were struck by a particle rather than a wave. However, the number of particles arriving in a given region of the screen is greater where the diffraction pattern has its maxima. The electron acts like a particle when we demand a particle-like response, but like a wave when we demand a wave-like response. This is the conclusion reached by Bohr in establishing his “principle of complementarity”: the wave and particle descriptions of matter (or electromagnetic radiation) are complementary, in the sense that our experiments can test for one or the other, but never for both properties at the same time.

In 1927 Heisenberg proved that it was impossible to determine both a particle’s position and momentum with arbitrary precision; if one is known very accurately, then the uncertainty in the other becomes large. This “Uncertainty Principle” showed that there are theoretical limits on a person’s ability to describe the world. The limits are not a serious consideration for large bodies, but become very important for bodies the size of an atom or smaller. The uncertainty principle also makes it clear that the presence of the experimenter always affects the results of an experiment at some level. For example, if we try to determine the position of a small particle very accurately we must, in principle, change its momentum by the very act of observing it.

Quantum mechanics has now been extended to explain a wide range of phenomena at the sub-microscopic level, including the structure of the atomic nucleus. Experimentally, this structure has been determined in a manner similar in principle to Rutherford’s scattering experiment, using accelerators which produce incident particles of very high energy.

Philosophically, the developments of quantum mechanics were far-reaching. Like relativity, they again showed that humans could not assume that the physical laws which seem to govern a 60-kg person moving at speeds up to several hundred kilometres per hour also applied to bodies far from this regime. They also brought into question the assumption of the perfectly deterministic world proposed by Laplace. Clearly it was impossible to predict the position and velocity of every body for all future times if you could not even know these coordinates accurately at a single instant in time. This conclusion has even been used as the basis of the claim that humans have free will, that all is not predetermined as would seem to be the case in a purely mechanistic, deterministic world governed by the laws of physics. These ideas are still heavily debated today, as in a recent article by Roger Penrose in the book Quantum Implications.

Indeed, Einstein himself was never able to accept fully the uncertainty implied in quantum mechanics, declaring that he did not believe that God played dice (Clark, pp. 414-415). In an attempt to show that quantum theory was at variance with the real world, he helped develop the Einstein-Podolsky-Rosen (EPR) paradox, a “thought experiment” which shows that quantum mechanical theory must lead to what seems like an impossible situation: what you do to one particle can affect a second, even if they are sufficiently separated in space that a light signal could not pass from the first to the second fast enough to cause the observed effect.
That is, either the knowledge of the event can travel between the particles faster than the speed of light, or the two particles really are not separate but remain interconnected in some fundamental sense. It was the latter option which was under debate. An experiment designed to test this hypothesis was carried out by Alain Aspect and coworkers in 1981 [Physical Review Letters 47, 460 (1981) and 49, 91 (1982)] and confirmed the quantum prediction: the two particles really were connected over large distances by “non-local” forces acting instantaneously. That is, the EPR paradox, rather than showing a basic inconsistency in quantum theory, actually points to one more aspect of nature that contravenes common sense.

The Unification of Physical Phenomena

The work of Maxwell represents the first great theoretical unification of physical phenomena, in this case the integration of magnetic, electrical and optical theory into one all-encompassing framework. Again, this must be seen as desirable under Ockham’s Razor, which argues for economy of understanding. Such economy is the strength of modern analytical science, which emphasizes the logical description of a vast range of physical phenomena from a few basic principles, rather than the memorization of a large number of isolated facts or formulae. The former approach enables the user to predict effects not seen previously, to invent, whereas the latter restricts one to what already is known.

Other great unifications that have taken place in physics include the integration of classical mechanics, quantum physics and heat in the development of statistical mechanics. This subject assumes that the properties of large systems, such as gases or solids, can be calculated by working out the average of the properties of all their constituent particles. For example, the relationship between the temperature and pressure of a gas can be calculated by treating the gas as being made up of a very large number of independent molecules, and calculating the average force they produce as they collide with the container walls, using Newtonian mechanics for the particles. This approach was followed for gases by Maxwell and Ludwig Boltzmann (1844-1906). Boltzmann also showed that Clausius’ entropy could be interpreted as a measure of the disorder of a system. In particular, he proved that the value for entropy can be obtained from a knowledge of the total number of different states in which a system can be found. That, in turn, depends on the number of different potential configurations of all the particles which comprised the system. This statistical approach has led to the development of “quantum statistics”, the application of statistical mechanics to quantum phenomena.

Perhaps the greatest such unification that has taken place in this century is the integration of electromagnetism and quantum mechanics, in quantum electrodynamics (QED). This feat earned Richard Feynman, Julian Schwinger, and Sin-itiro Tomonaga the Nobel Prize for physics in 1965. QED is capable of predicting the spin g-factor of the electron with a numerical accuracy of 1 part in 10¹⁰! In 1979, Sheldon Glashow, Abdus Salam, and Steven Weinberg were given the Nobel Prize for their “electroweak theory” that unified the electromagnetic and weak nuclear forces. Attempts have also been made to form a quantum theory of the strong nuclear force. Because of its similarity to QED, it has been called quantum chromodynamics (QCD).
“Chromo” comes from the Greek word for colour, and refers to the fact that the quarks that make up neutrons and protons come in several varieties that have been given the names red, blue and green, plus their antiparticles. (These names have been chosen in analogy to light: the three colours can be combined to give white light, and the three quarks combine to give a “colourless” particle.) The combination of electroweak theory and QCD comprises what is called the “Standard Model”. Attempts are still under way to integrate QCD and electroweak theory into a single “Grand Unified Theory” (GUT). Much effort has also gone into trying to unify electromagnetism and gravitation; indeed, Einstein spent much of the latter part of his life searching, unsuccessfully, for such a unified field theory. As can be seen from these few examples, the nineteenth-century belief that the main theoretical work of physicists was over could not have been further from the truth!

Dissemination of the Results of Scientific Research

Written exchange of information among scientists in different countries was common from before the time of Galileo, and books on science were published from shortly after the development of the printing press in Europe by 1450. Starting in 1644 in England, John Wilkins, a Puritan clergyman, organized weekly meetings of several scientists in London, who called themselves the “Philosophical College”. They met to discuss scientific theory and carry out experiments, first at a pub and then at Gresham College. When the Puritans under Cromwell came to power, Wilkins was appointed the head of Wadham College in Oxford. There he established the Philosophical Society for the discussion of science. Under the Commonwealth, interest in science had increased substantially, and shortly after the restoration of Charles II to the throne in 1660 a group of forty-one persons founded a college for scientific learning which became the “Royal Society for the Improvement of Natural Knowledge” two years later, with about one hundred members; John Wilkins was one of its two secretaries. This organization eventually became the Royal Society of London, which persists today. Similar societies emerged on the continent. These organizations published regular journals of the findings of their members.

Today, there are hundreds of scientific societies world-wide, some discipline-based and national in focus, such as the Canadian Association of Physicists, and some research-area-based and very international in membership, such as the American Vacuum Society. Most hold meetings annually or more often. There are more than 100,000 articles published per year in physics alone. With this enormous amount of information, it has been necessary to develop bibliographic search tools just to enable researchers to find papers of interest. In physics these include three major indexes. Physics Abstracts, published monthly, catalogues by subject and author almost all the articles published in physics in the previous period. Current Contents, published weekly, lists by journal, author and subject all papers in the main journals. Science Citation Index, published monthly, lists articles covering all the sciences which have been published or cited (referred to) in the previous period. This last index enables researchers to use their knowledge of a seminal article in a given field to find the most current related work.
These search tools have become immensely more powerful recently, with the application of computer programs which provide rapid searching, cross-referencing and automatic print-outs. Searching can even be done on-line using remote data banks.

Applied Physics

Bacon’s vision of the application of science for human use has been realized this century, with tens of thousands of scientists and engineers working world-wide to develop usable products. However, the deal has been Faustian. We have our jumbo jets, cellular telephones, CAT scans, personal computers and CD players, all direct applications of physics which we enjoy. We have also developed the fission bomb, which killed 110,000 in Hiroshima and similar numbers in Nagasaki, with some 2500 people continuing to die per year for decades from radiation-related illness (the fusion bombs currently deployed are typically 50 times more powerful); modern conventional weapons and communications keep millions of the world’s people in economic slavery; the world’s ecosystem, of which we are a part, is endangered by the pollution resulting from our technological successes; and the technologically developed world consumes some ten times more per capita than the less developed world, limiting the economic viability of the rest of the world. As suggested by Capra in The Turning Point, it is time to take a lesson from the EPR paradox and consider the world more holistically. Physics still has a powerful role to play in the evolution of our society, and it is our individual and collective responsibility to choose its direction carefully.

Acknowledgements

The motivation for writing this paper arose from long discussions with my partner, Linda, on the need for physics students to question their role in the world. The material presented above has been chosen as that which the author has found most useful in doing this for himself. It has come from a wide variety of secondary sources, many of which are given in the attached bibliography. However, the dates and other details have been confirmed for this writing using primarily the excellent book A History of the Sciences by Stephen F. Mason, with some assistance from Schneer’s The Evolution of Physical Science. Many useful comments from Peter Dawson have been incorporated into the text.

A Partial Bibliography

Butterfield, H., The Origins of Modern Science, 1300-1800 (Clarke-Irwin, Toronto) 1977. A good discussion of the interplay between science and society.

Capra, F., The Turning Point (Simon and Schuster, New York) 1982. Reductionist vs. holistic science, from a physicist’s perspective.

Clark, R.W., Einstein, The Life and Times (Avon, New York) 1971.

Cline, B.L., Men Who Made a New Physics (previously entitled The Questioners) (Signet, New York) 1965. A very readable account of the origins of quantum physics and relativity.

Cole, M.D., The Maya, 3rd ed. (Thames and Hudson, London) 1984.

Dijksterhuis, E.J., The Mechanization of the World Picture (Oxford University) 1961.

Drake, S., Telescopes, Tides and Tactics: A Galilean Dialogue about the Starry Messenger and Systems of the World (University of Chicago Press, Chicago) 1983. This book includes a translation of Galileo’s description of his first astronomical observations, and MUST be read. It contains copies of Galileo’s original sketches of the appearance of the Moon and of the moons of Jupiter.

Drake, S., The Role of Music in Galileo’s Experiments, Scientific American, p. 98, June 1975.
Finocchiaro, M.A., The Galileo Affair, A Documentary History (University of California Press, Berkeley) 1989. Gives the context for Galileo’s trial, and a translation of a number of the original documents.

French, M., Beyond Power (Ballantine, New York) 1985. A feminist perspective on patriarchal society.

Hawking, S.W., A Brief History of Time (Bantam, New York) 1988. A discussion of modern cosmology for the layperson, from one of the world’s experts.

Horgan, J., Quantum Philosophy, Scientific American, p. 94, July 1992. A discussion of recent investigations of the EPR paradox.

Hiley, B.J. and Peat, F.D. (editors), Quantum Implications – Essays in Honour of David Bohm (Routledge, New York) 1987. An excellent but fairly mathematical consideration of the implications of quantum theory.

Kramer, E., Nature and Growth of Modern Mathematics (Princeton University Press, New York) 1982.

Jammer, M., The Conceptual Development of Quantum Mechanics (McGraw-Hill, New York) 1966. This book is quite mathematical.

Mason, S.F., A History of the Sciences (Collier, New York) 1962. An excellent general history, very complete.

Rossiter, M.W., Women Scientists in America: Struggles and Strategies to 1940 (Johns Hopkins University Press, Baltimore) 1982.

Schneer, C.J., The Evolution of Physical Science (Grove Press, New York) 1960. Greeks to modern physical science.

Tuana, N. (editor), Feminism and Science (Indiana University Press, Bloomington) 1989. Addresses gender bias in science.

Whitehead, A.N., Science and the Modern World (Cambridge University Press) 1933.

Williams, L.P., The Origins of Field Theory (Random House, Toronto) 1966. (Not in Trent Library.)


Special Relativity – Experimental Verification

March 3, 2009


Like any scientific theory, the theory of relativity must be confirmed by experiment. So far, relativity has passed all its experimental tests. The special theory predicts unusual behavior for objects traveling near the speed of light. So far no human has traveled near the speed of light. Physicists do, however, regularly accelerate subatomic particles with large particle accelerators like the recently canceled Superconducting Super Collider (SSC). Physicists also observe cosmic rays, which are particles from space traveling near the speed of light. When physicists try to predict the behavior of rapidly moving particles using classical Newtonian physics, the predictions are wrong. When they use the corrections for Lorentz contraction, time dilation, and mass increase required by special relativity, the predictions are correct. For example, muons are very short-lived subatomic particles with an average lifetime of about two millionths of a second. However, when muons are traveling near the speed of light, physicists observe much longer apparent lifetimes for them. Time dilation is occurring for the muons. As seen by an observer in the lab, time moves more slowly for the muons traveling near the speed of light.
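The muon example is easy to make concrete. A minimal sketch (the speed 0.995c is an illustrative value, not a figure from the text):

```python
# Sketch of time dilation for muons. A muon at rest lives about 2.2
# microseconds on average; in the lab frame a fast muon's lifetime is
# stretched by the Lorentz factor gamma.
import math

c = 299_792_458.0    # speed of light, m/s
tau_rest = 2.2e-6    # mean muon lifetime at rest, s

v = 0.995 * c        # illustrative cosmic-ray muon speed
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"gamma = {gamma:.1f}")                                   # ~10
print(f"lab-frame lifetime = {gamma * tau_rest * 1e6:.0f} us")  # ~22 us
```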

Time dilation and other relativistic effects are normally too small to measure at ordinary velocities. But what if we had sufficiently accurate clocks? In 1971 two physicists, J. C. Hafele and R. E. Keating, used atomic clocks accurate to about one billionth of a second (one nanosecond) to measure the small time dilation that occurs while flying in a jet plane. They flew atomic clocks in a jet for 45 hours and then compared the clock readings to those of a clock at rest in the laboratory. To within the accuracy of the clocks they used, time dilation occurred for the clocks in the jet as predicted by relativity. Relativistic effects occur at ordinary velocities, but they are too small to measure without very precise instruments.
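An order-of-magnitude sketch of the velocity part of this effect; the jet speed is an assumed typical cruise speed, and the real experiment also involved gravitational and Earth-rotation contributions:

```python
# Rough sketch of the velocity time dilation in a Hafele-Keating-style
# flight: a clock moving at speed v for time t falls behind a clock at
# rest by roughly (v**2 / (2*c**2)) * t when v << c.
c = 299_792_458.0
v = 250.0            # assumed jet cruise speed, m/s
t = 45 * 3600.0      # 45 hours of flying, in seconds

dt = (v**2 / (2 * c**2)) * t
print(f"velocity time-dilation offset ~ {dt * 1e9:.0f} ns")  # tens of ns
```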

The formula E = mc² predicts that matter can be converted directly to energy. Nuclear reactions that occur in the Sun, in nuclear reactors, and in nuclear weapons confirm this prediction experimentally.
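For a sense of scale, a one-line sketch of the energy locked in a single gram of matter:

```python
# Sketch: E = m*c**2 for one gram of matter converted entirely to energy.
c = 299_792_458.0   # speed of light, m/s
m = 0.001           # one gram, in kg

E = m * c**2
print(f"E = {E:.2e} J")  # ~9e13 J, roughly 20 kilotons of TNT equivalent
```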

Albert Einstein’s special theory of relativity fundamentally changed the way scientists characterize time and space. So far it has passed all experimental tests. This does not, however, mean that Newton’s laws of physics are wrong. Newton’s laws are an approximation of relativity. In the approximation of small velocities, special relativity reduces to Newton’s laws.
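That reduction can be seen numerically. A minimal sketch comparing relativistic kinetic energy, (gamma − 1)mc², with the Newtonian ½mv² (the sample speeds are illustrative):

```python
# Sketch of the correspondence principle: relativistic kinetic energy
# (gamma - 1)*m*c**2 approaches the Newtonian (1/2)*m*v**2 when v << c.
import math

c = 299_792_458.0
m = 1.0   # kg (arbitrary; the ratio below is mass-independent)

for v in [1e3, 1e6, 1e8]:
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    ratio = ((gamma - 1.0) * m * c**2) / (0.5 * m * v**2)
    print(f"v = {v:.0e} m/s: relativistic KE / Newtonian KE = {ratio:.7f}")
# ~1.0000000 at everyday speeds; the deviation only appears near light speed.
```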

Resources

Books

Cutnell, John D., and Kenneth W. Johnson. Physics. 3rd ed. New York: Wiley, 1995.

Einstein, Albert. Relativity. New York: Crown, 1961.

Mould, R.A. Basic Relativity. Springer Verlag, 2001.

Hawking, Stephen. Black Holes and Baby Universes and Other Essays. New York: Bantam, 1993.

Schrödinger, Erwin. Space-Time Structure. Reprint ed. Cambridge University Press, 2002.

Paul A. Heckert
K. Lee Lerner

KEY TERMS


General relativity
—The part of Einstein’s theory of relativity that deals with accelerating (noninertial) reference frames.

Lorentz contraction
—An effect that occurs in special relativity; to an outside observer, the length of an object traveling near the speed of light appears shorter along its direction of motion.

Reference frame
—A system, consisting of both a set of coordinate axes and a clock, for locating an object’s (or event’s) position in both space and time.

Space-time
—Space and time combined as one unified concept.

Special relativity
—The part of Einstein’s theory of relativity that deals only with nonaccelerating (inertial) reference frames.

Time dilation
—An effect that occurs in special relativity; to an outside observer time appears to slow down for an object traveling near the speed of light.


Real-World Relativity: The GPS Navigation System

March 2, 2009


People often ask me “What good is Relativity?” It is a commonplace to think of Relativity as an abstract and highly arcane mathematical theory that has no consequences for everyday life. This is in fact far from the truth.

Consider for a moment that when you are riding in a commercial airliner, the pilot and crew are navigating to your destination with the aid of the Global Positioning System (GPS). Further, many luxury cars now come with built-in navigation systems that include GPS receivers with digital maps, and you can purchase hand-held GPS navigation units that weigh only a few ounces, cost around $100, and will give you your position on the Earth (latitude, longitude, and altitude) to an accuracy of 5 to 10 meters.

GPS was developed by the United States Department of Defense to provide a satellite-based navigation system for the U.S. military. It was later put under joint DoD and Department of Transportation control to provide for both military and civilian navigation uses.

The current GPS configuration consists of a network of 24 satellites in high orbits around the Earth. Each satellite in the GPS constellation orbits at an altitude of about 20,000 km from the ground, and has an orbital speed of about 14,000 km/hour (the orbital period is roughly 12 hours – contrary to popular belief, GPS satellites are not in geosynchronous or geostationary orbits). The satellite orbits are distributed so that at least 4 satellites are always visible from any point on the Earth at any given instant (with up to 12 visible at one time). Each satellite carries with it an atomic clock that “ticks” with an accuracy of 1 nanosecond (1 billionth of a second). A GPS receiver in an airplane determines its current position and heading by comparing the time signals it receives from a number of the GPS satellites (usually 6 to 12) and triangulating on the known positions of each satellite. The precision is phenomenal: even a simple hand-held GPS receiver can determine your absolute position on the surface of the Earth to within 5 to 10 meters in only a few seconds (with differential techniques that compare two nearby receivers, precisions of order centimeters or millimeters in relative position are often obtained in under an hour or so). A GPS receiver in a car can give accurate readings of position, speed, and heading in real-time!
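The position fix just described can be illustrated with a toy calculation. The sketch below is a two-dimensional, noise-free analogue with made-up satellite positions: the receiver solves for its own position and clock bias from the measured signal travel times, which is the same kind of least-squares problem a real receiver solves in three dimensions:

```python
# Toy 2D illustration of a GPS-style position fix (made-up numbers, no noise).
# The receiver knows each satellite's position and measures a "pseudorange":
# true distance plus c times its own unknown clock bias. It then solves for
# (x, y, c*bias) by iterative least squares (Gauss-Newton).
import numpy as np

c = 299_792_458.0
sats = np.array([[0.0, 2.6e7], [1.5e7, 2.1e7], [-1.5e7, 2.1e7], [5.0e6, 2.4e7]])
true_pos = np.array([1.0e6, 6.4e6])   # receiver somewhere near the "ground"
true_bias = 1e-3                      # receiver clock error, s

rho = np.linalg.norm(sats - true_pos, axis=1) + c * true_bias  # pseudoranges

x = np.array([0.0, 6.0e6, 0.0])       # rough initial guess: (x, y, c*bias)
for _ in range(10):                   # Gauss-Newton iterations
    d = np.linalg.norm(x[:2] - sats, axis=1)
    residual = rho - (d + x[2])
    J = np.hstack([(x[:2] - sats) / d[:, None], np.ones((len(sats), 1))])
    x += np.linalg.lstsq(J, residual, rcond=None)[0]

print(x[:2])       # ~ [1.0e6, 6.4e6]: position recovered
print(x[2] / c)    # ~ 1e-3 s: clock bias recovered along with position
```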

To achieve this level of precision, the clock ticks from the GPS satellites must be known to an accuracy of 20-30 nanoseconds. However, because the satellites are constantly moving relative to observers on the Earth, effects predicted by the Special and General theories of Relativity must be taken into account to achieve the desired 20-30 nanosecond accuracy.

Because observers on the ground see the satellites in motion relative to them, Special Relativity predicts that we should see their clocks ticking more slowly (see the Special Relativity lecture). Special Relativity predicts that the on-board atomic clocks on the satellites should fall behind clocks on the ground by about 7 microseconds per day because of the slower ticking rate due to the time dilation effect of their relative motion.

Further, the satellites are in orbits high above the Earth, where the curvature of spacetime due to the Earth’s mass is less than it is at the Earth’s surface. A prediction of General Relativity is that clocks closer to a massive object will seem to tick more slowly than those located further away (see the Black Holes lecture). As such, when viewed from the surface of the Earth, the clocks on the satellites appear to be ticking faster than identical clocks on the ground. A calculation using General Relativity predicts that the clocks in each GPS satellite should get ahead of ground-based clocks by 45 microseconds per day.

The combination of these two relativistic effects means that the clocks on-board each satellite should tick faster than identical clocks on the ground by about 38 microseconds per day (45 − 7 = 38)! This sounds small, but the high precision required of the GPS system demands nanosecond accuracy, and 38 microseconds is 38,000 nanoseconds. If these effects were not properly taken into account, a navigational fix based on the GPS constellation would be false after only 2 minutes, and errors in global positions would continue to accumulate at a rate of about 10 kilometers each day! The whole system would be utterly worthless for navigation in a very short time. This kind of accumulated error is akin to measuring my location while standing on my front porch in Columbus, Ohio one day, and then making the same measurement a week later and having my GPS receiver tell me that my porch and I are currently about 5000 meters in the air somewhere over Detroit.
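Both numbers can be reproduced from the round figures quoted above. A minimal sketch, assuming the ~20,000 km altitude and ~14,000 km/h orbital speed given earlier, a weak-field approximation for the gravitational term, and standard values for G and the Earth’s mass and radius:

```python
# Sketch reproducing the GPS clock-rate numbers. The special-relativistic
# fractional rate is -v**2/(2*c**2); the general-relativistic rate uses the
# weak-field potential difference between the orbit and the ground.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # Earth's mass, kg
R = 6.371e6         # Earth's radius, m
c = 299_792_458.0
r = R + 2.0e7       # orbital radius for ~20,000 km altitude, m
v = 14000.0 / 3.6   # ~14,000 km/h converted to m/s

day = 86400.0
sr = -(v**2 / (2 * c**2)) * day                 # satellite clock runs slow
gr = (G * M / c**2) * (1.0/R - 1.0/r) * day     # satellite clock runs fast
print(f"special relativity: {sr * 1e6:+.0f} us/day")        # ~ -7
print(f"general relativity: {gr * 1e6:+.0f} us/day")        # ~ +46, near the quoted 45
print(f"net:                {(sr + gr) * 1e6:+.0f} us/day")  # ~ +38
```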

The engineers who designed the GPS system included these relativistic effects when they designed and deployed the system. For example, to counteract the General Relativistic effect once in orbit, they slowed down the ticking frequency of the atomic clocks before they were launched, so that once the satellites reached their proper orbital stations their clocks would appear to tick at the correct rate as compared to the reference atomic clocks at the GPS ground stations. Further, each GPS receiver has built into it a microcomputer that (among other things) performs the necessary relativistic calculations when determining the user’s location.

Relativity is not just some abstract mathematical theory: understanding it is absolutely essential for our global navigation system to work properly!


Einstein: making the bad difficult and the good easy

March 2, 2009

http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/sr.html
from Spacetime Physics, p. 5, by Edwin Taylor & John Archibald Wheeler, W.H. Freeman and Company, San Francisco, 1966 edition.

“The principles of special relativity are remarkably simple. They are very much simpler than the axioms of Euclid or the principles of operating an automobile. Yet both Euclid and the automobile have been mastered – perhaps with insufficient surprize – by generations of ordinary people. Some of the best minds of the twentieth century struggled with the concepts of relativity, not because nature is obscure, but simply because man finds it difficult to outgrow established ways of looking at nature. For us the battle has already been won. The concepts of relativity can now be expressed simply enough to make it easy to think correctly – thus “making the bad difficult and the good easy.”[*] The problem of understanding relativity is no longer one of learning but one of intuition – a practiced way of seeing. When seen with this intuition, a remarkable number of otherwise incomprehensible experimental results are revealed to be perfectly natural.”

[*] Quote is from Einstein, in a similar connection, in a letter to the architect Le Corbusier.


John A. Wheeler, Physicist Who Coined the Term ‘Black Hole,’ Is Dead at 96

April 15, 2008

John A. Wheeler, a visionary physicist and teacher who helped invent the theory of nuclear fission, gave black holes their name and argued about the nature of reality with Albert Einstein and Niels Bohr, died Sunday morning at his home in Hightstown, N.J. He was 96.

The cause was pneumonia, said his daughter Alison Wheeler Lahnston.

Dr. Wheeler was a young, impressionable professor in 1939 when Bohr, the Danish physicist and his mentor, arrived in the United States aboard a ship from Denmark and confided to him that German scientists had succeeded in splitting uranium atoms. Within a few weeks, he and Bohr had sketched out a theory of how nuclear fission worked. Bohr had intended to spend the time arguing with Einstein about quantum theory, but “he spent more time talking to me than to Einstein,” Dr. Wheeler later recalled.

As a professor at Princeton and then at the University of Texas in Austin, Dr. Wheeler set the agenda for generations of theoretical physicists, using metaphor as effectively as calculus to capture the imaginations of his students and colleagues and to pose questions that would send them, minds blazing, to the barricades to confront nature.

Max Tegmark, a cosmologist at the Massachusetts Institute of Technology, said of Dr. Wheeler, “For me, he was the last Titan, the only physics superhero still standing.”

Under his leadership, Princeton became the leading American center of research into Einsteinian gravity, known as the general theory of relativity — a field that had been moribund because of its remoteness from laboratory experiment.

“He rejuvenated general relativity; he made it an experimental subject and took it away from the mathematicians,” said Freeman Dyson, a theorist at the Institute for Advanced Study across town in Princeton.

Among Dr. Wheeler’s students was Richard Feynman of the California Institute of Technology, who parlayed a crazy-sounding suggestion by Dr. Wheeler into work that led to a Nobel Prize. Another was Hugh Everett, whose Ph.D. thesis under Dr. Wheeler on quantum mechanics envisioned parallel alternate universes endlessly branching and splitting apart — a notion that Dr. Wheeler called “Many Worlds” and which has become a favorite of many cosmologists as well as science fiction writers.

Recalling his student days, Dr. Feynman once said, “Some people think Wheeler’s gotten crazy in his later years, but he’s always been crazy.”

John Archibald Wheeler — he was Johnny Wheeler to friends and fellow scientists — was born on July 9, 1911, in Jacksonville, Fla. The oldest child in a family of librarians, he earned his Ph.D. in physics from Johns Hopkins University at 21. A year later, having become engaged to an old acquaintance, Janette Hegner, after only three dates, he sailed to Copenhagen to work with Bohr, the godfather of the quantum revolution, which had shaken modern science with paradoxical statements about the nature of reality.

“You can talk about people like Buddha, Jesus, Moses, Confucius, but the thing that convinced me that such people existed were the conversations with Bohr,” Dr. Wheeler said.

Their relationship was renewed when Bohr arrived in 1939 with the ominous news of nuclear fission. In the model he and Dr. Wheeler developed to explain it, the atomic nucleus, containing protons and neutrons, is like a drop of liquid. When a neutron emitted from another disintegrating nucleus hits it, this “liquid drop” starts vibrating and elongates into a peanut shape that eventually snaps in two.

Two years later, Dr. Wheeler was swept up in the Manhattan Project to build an atomic bomb. To his lasting regret, the bomb was not ready in time to change the course of the war in Europe and possibly save his brother Joe, who died in combat in Italy in 1944.

Dr. Wheeler continued to do government work after the war, interrupting his research to help develop the hydrogen bomb, promote the building of fallout shelters and support the Vietnam War and missile defense, even as his views ran counter to those of his more liberal colleagues.

Dr. Wheeler was once officially reprimanded by President Dwight D. Eisenhower for losing a classified document on a train, but he also received the Atomic Energy Commission’s Enrico Fermi Award from President Lyndon B. Johnson in 1968.

When Dr. Wheeler received permission in 1952 to teach a course on Einsteinian gravity, it was not considered an acceptable field to study. But in promoting general relativity, he helped transform the subject in the 1960s, at a time when Dennis Sciama, at Cambridge University in England, and Yakov Borisovich Zeldovich, at Moscow State University, founded groups that spawned a new generation of gravitational theorists and cosmologists.

One particular aspect of Einstein’s theory got Dr. Wheeler’s attention. In 1939, J. Robert Oppenheimer, who would later be a leader in the Manhattan Project, and a student, Hartland Snyder, suggested that Einstein’s equations had made an apocalyptic prediction. A dead star of sufficient mass could collapse into a heap so dense that light could not even escape from it. The star would collapse forever while spacetime wrapped around it like a dark cloak. At the center, space would be infinitely curved and matter infinitely dense, an apparent absurdity known as a singularity.

Dr. Wheeler at first resisted this conclusion, leading to a confrontation with Dr. Oppenheimer at a conference in Belgium in 1958, in which Dr. Wheeler said that the collapse theory “does not give an acceptable answer” to the fate of matter in such a star. “He was trying to fight against the idea that the laws of physics could lead to a singularity,” Dr. Charles Misner, a professor at the University of Maryland and a former student, said. In short, how could physics lead to a violation of itself — to no physics?

Dr. Wheeler and others were finally brought around when David Finkelstein, now an emeritus professor at Georgia Tech, developed mathematical techniques that could treat both the inside and the outside of the collapsing star.

At a conference in New York in 1967, Dr. Wheeler, seizing on a suggestion shouted from the audience, hit on the name “black hole” to dramatize this dire possibility for a star and for physics.

The black hole “teaches us that space can be crumpled like a piece of paper into an infinitesimal dot, that time can be extinguished like a blown-out flame, and that the laws of physics that we regard as ‘sacred,’ as immutable, are anything but,” he wrote in his 1999 autobiography, “Geons, Black Holes & Quantum Foam: A Life in Physics.” (Its co-author is Kenneth Ford, a former student and a retired director of the American Institute of Physics.)

In 1973, Dr. Wheeler and two former students, Dr. Misner and Kip Thorne, of the California Institute of Technology, published “Gravitation,” a 1,279-page book whose witty style and accessibility — it is chockablock with sidebars and personality sketches of physicists — belies its heft and weighty subject. It has never been out of print.

In the summers, Dr. Wheeler would retire with his extended family to a compound on High Island, Me., to indulge his taste for fireworks by shooting beer cans out of an old cannon.

He and Janette were married in 1935. She died in October 2007 at 99. Dr. Wheeler is survived by their three children, Ms. Lahnston and Letitia Wheeler Ufford, both of Princeton; James English Wheeler of Ardmore, Pa.; 8 grandchildren, 16 great-grandchildren, 6 step-grandchildren and 11 step-great-grandchildren.

In 1976, faced with mandatory retirement at Princeton, Dr. Wheeler moved to the University of Texas.

At the same time, he returned to the questions that had animated Einstein and Bohr, about the nature of reality as revealed by the strange laws of quantum mechanics. The cornerstone of that revolution was the uncertainty principle, propounded by Werner Heisenberg in 1927, which seemed to put fundamental limits on what could be known about nature, declaring, for example, that it was impossible, even in theory, to know both the velocity and the position of a subatomic particle. Knowing one destroyed the ability to measure the other. As a result, until observed, subatomic particles and events existed in a sort of cloud of possibility that Dr. Wheeler sometimes referred to as “a smoky dragon.”

This kind of thinking frustrated Einstein, who once asked Dr. Wheeler if the Moon was still there when nobody looked at it.

But Dr. Wheeler wondered if this quantum uncertainty somehow applied to the universe and its whole history, whether it was the key to understanding why anything exists at all.

“We are no longer satisfied with insights only into particles, or fields of force, or geometry, or even space and time,” Dr. Wheeler wrote in 1981. “Today we demand of physics some understanding of existence itself.”

At a 90th birthday celebration in 2003, Dr. Dyson said that Dr. Wheeler was part prosaic calculator, a “master craftsman,” who decoded nuclear fission, and part poet. “The poetic Wheeler is a prophet,” he said, “standing like Moses on the top of Mount Pisgah, looking out over the promised land that his people will one day inherit.”

Wojciech Zurek, a quantum theorist at Los Alamos National Laboratory, said that Dr. Wheeler’s most durable influence might be the students he had “brought up.” He wrote in an e-mail message, “I know I was transformed as a scientist by him — not just by listening to him in the classroom, or by his physics idea: I think even more important was his confidence in me.”

Dr. Wheeler described his own view of his role to an interviewer 25 years ago.

“If there’s one thing in physics I feel more responsible for than any other, it’s this perception of how everything fits together,” he said. “I like to think of myself as having a sense of judgment. I’m willing to go anywhere, talk to anybody, ask any question that will make headway.

“I confess to being an optimist about things, especially about someday being able to understand how things are put together. So many young people are forced to specialize in one line or another that a young person can’t afford to try and cover this waterfront — only an old fogy who can afford to make a fool of himself.

“If I don’t, who will?”