Thursday, July 30, 2015

WSJ: "The Unsettling, Anti-Science Certitude on Global Warming...doesn’t sound like science"

The Unsettling, Anti-Science Certitude on Global Warming

Climate-change ‘deniers’ are accused of heresy by true believers. That doesn’t sound like science to me


By JOHN STEELE GORDON

THE WALL STREET JOURNAL July 30, 2015 8:03 p.m. ET

Are there any phrases in today’s political lexicon more obnoxious than “the science is settled” and “climate-change deniers”?

The first is an oxymoron. By definition, science is never settled. It is always subject to change in the light of new evidence. The second phrase is nothing but an ad hominem attack, meant to evoke “Holocaust deniers,” those people who maintain that the Nazi Holocaust is a fiction, ignoring the overwhelming, incontestable evidence that it is a historical fact. Hillary Clinton’s speech about climate change on Monday in Des Moines, Iowa, included an attack on “deniers.”



The phrases are in no way applicable to the science of Earth’s climate. The climate is an enormously complex system, with a very large number of inputs and outputs, many of which we don’t fully understand—and some we may well not even know about yet. To note this, and to observe that there is much contradictory evidence for assertions of a coming global-warming catastrophe, isn’t to “deny” anything; it is to state a fact. In other words, the science is unsettled—to say that we have it all wrapped up is itself a form of denial. The essence of scientific inquiry is the assumption that there is always more to learn.

Science takes time, and climatology is only about 170 years old. Consider something as simple as the question of whether the sun revolves around the Earth or vice versa.

The Greek philosopher Aristarchus suggested a heliocentric model of the solar system as early as the third century B.C. But it was Ptolemy’s geocentric model from the second century A.D. that predominated. It took until the mid-19th century to solve the puzzle definitively.

Assuming that “the science is settled” can only impede science. For example, there has never been so settled a branch of science as Newtonian physics. But in the 1840s, as telescopes improved, it was noticed that Mercury’s orbit stubbornly failed to behave as Newtonian equations said that it should.

It seems not to have occurred to anyone to question Newton, so the only explanation was that Mercury must be perturbed by a planet still closer to the sun. The French mathematician Urbain Le Verrier had triumphed in 1846 when he predicted, within one degree, the location of a planet (later named Neptune) that was perturbing Uranus’s orbit.

He set out to calculate the orbit of the planet that he was sure was responsible for Mercury’s orbital eccentricity. He named it Vulcan, after the Roman god of fire. Once Le Verrier had done the math, hundreds of astronomers, both amateur and professional, searched for the elusive planet for the next few decades. But telescopic observation near the immensely bright sun is both difficult and dangerous. More than one astronomer injured his eyesight in the search.

Several possible sightings were reported, but whether they were illusions, comets, or asteroids is unknown, as none could be tracked over time. After Le Verrier’s death in 1877 the hunt for Vulcan slackened, though it never ceased entirely.

Only in 1915 was the reason no one could find Vulcan explained: It wasn’t there. Newton had written in the “Principia” that he assumed space to be everywhere and always the same. But that year a man named Albert Einstein, in his theory of general relativity, demonstrated that it wasn’t always the same, for space itself is distorted by hugely massive objects such as the sun.

When Mercury’s orbit was calculated using Einstein’s equations rather than Newton’s, the planet turned out to be exactly where Einstein said it would be, one of the early proofs of general relativity.

Climate science today is a veritable cornucopia of unanswered questions. Why did the warming trend between 1978 and 1998 cease, although computer climate models predict steady warming? How sensitive is the climate to increased carbon-dioxide levels? What feedback mechanisms are there that would increase or decrease that sensitivity? Why did episodes of high carbon-dioxide levels in the atmosphere earlier in Earth’s history have temperature levels both above and below the average?

With so many questions still unanswered, why are many climate scientists, politicians—and the left generally—so anxious to lock down the science of climatology and engage in protracted name-calling? Well, one powerful explanation for the politicians is obvious: self-interest.

If anthropogenic climate change is a reality, then that would be a huge problem only government could deal with. It would be a heaven-sent opportunity for the left to vastly increase government control over the economy and the personal lives of citizens.

Moreover, the release of thousands of emails from the University of East Anglia’s Climate Research Unit in 2009 showed climate scientists concerned with the lack of recent warming and how to “hide the decline.” The communications showed that whatever the emailers were engaged in, it was not the disinterested pursuit of science.

Another batch of 5,000 emails written by top climate scientists came out in 2011, discussing, among other public-relations matters, how to deal with skeptical editors and how to suppress unfavorable data. It is a measure of the intellectual corruption of the mainstream media that this wasn’t the scandal of the century. But then again I forget, “the science is settled.”

Mr. Gordon is the author of the forthcoming “Washington’s Monument and the Fascinating History of the Obelisk,” out early next year from Bloomsbury.

Wednesday, July 29, 2015

Feynman explains how gravitational potential energy and kinetic energy convert to create the gravito-thermal greenhouse effect, without greenhouse gases



If the temperature is the same at all heights, the problem is to discover by what law the atmosphere becomes tenuous as we go up. If N is the total number of molecules in a volume V of gas at pressure P, then we know PV = NkT, or P = nkT, where n = N/V is the number of molecules per unit volume. In other words, if we know the number of molecules per unit volume, we know the pressure, and vice versa: they are proportional to each other, since the temperature is constant in this problem. But the pressure is not constant, it must increase as the altitude is reduced, because it has to hold, so to speak, the weight of all the gas above it. That is the clue by which we may determine how the pressure changes with height. If we take a unit area at height h, then the vertical force from below, on this unit area, is the pressure P. The vertical force per unit area pushing down at a height h + dh would be the same, in the absence of gravity, but here it is not, because the force from below must exceed the force from above by the weight of gas in the section between h and h + dh. Now mg is the force of gravity on each molecule, where g is the acceleration due to gravity, and n dh is the total number of molecules in the unit section. So this gives us the differential equation P_(h+dh) − P_h = dP = −mgn dh. Since P = nkT, and T is constant, we can eliminate either P or n, say P, and get
dn/dh = −(mg/kT) n
for the differential equation, which tells us how the density goes down as we go up in energy.
We thus have an equation for the particle density n, which varies with height, but which has a derivative which is proportional to itself. Now a function which has a derivative proportional to itself is an exponential, and the solution of this differential equation is
n = n_0 e^(−mgh/kT).   (40.1)
Here the constant of integration, n_0, is obviously the density at h = 0 (which can be chosen anywhere), and the density goes down exponentially with height.
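As a quick numerical sanity check of this result (my own sketch, not part of Feynman's text), the short Python snippet below integrates the differential equation dn/dh = −(mg/kT) n with a simple Euler step and compares the answer with the exponential of Eq. (40.1). The choice of nitrogen at an isothermal 300 K is an illustrative assumption.

```python
import math

# Euler-integrate dn/dh = -(m*g/(k*T)) * n and compare with Eq. (40.1).
# Gas (N2) and temperature (300 K, isothermal) are illustrative assumptions.
k = 1.380649e-23          # Boltzmann constant, J/K
g = 9.81                  # acceleration due to gravity, m/s^2
T = 300.0                 # assumed constant temperature, K
m = 28 * 1.66054e-27      # mass of one N2 molecule, kg

H = 30_000.0              # integrate up to 30 km
steps = 30_000
dh = H / steps

n = 1.0                   # normalized density n/n0 at h = 0
for _ in range(steps):
    n += -(m * g / (k * T)) * n * dh     # dn = -(mg/kT) n dh

print(f"numerical  n/n0 at {H/1000:.0f} km: {n:.4f}")
print(f"Eq. (40.1) n/n0 at {H/1000:.0f} km: {math.exp(-m * g * H / (k * T)):.4f}")
print(f"scale height kT/mg = {k * T / (m * g) / 1000:.1f} km")
```

The two printed densities agree to within the Euler step error, and the printed scale height kT/mg is the altitude over which the density falls by a factor of e.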

Fig. 40–2. The normalized density as a function of height in the earth’s gravitational field for oxygen and for hydrogen, at constant temperature.
Note that if we have different kinds of molecules with different masses, they go down with different exponentials. The ones which were heavier would decrease with altitude faster than the light ones. Therefore we would expect that because oxygen is heavier than nitrogen, as we go higher and higher in an atmosphere with nitrogen and oxygen the proportion of nitrogen would increase. This does not really happen in our own atmosphere, at least at reasonable heights, because there is so much agitation which mixes the gases back together again. It is not an isothermal atmosphere. Nevertheless, there is a tendency for lighter materials, like hydrogen, to dominate at very great heights in the atmosphere, because the lowest masses continue to exist, while the other exponentials have all died out (Fig. 40–2).
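A short sketch along the lines of Fig. 40–2 (my illustration, not Feynman's): in an isothermal column each species falls off as exp(−mgh/kT) with its own mass, so oxygen thins out far faster with height than hydrogen. The 300 K isothermal column is an assumption made only for illustration.

```python
import math

# Each species in an isothermal column follows its own exponential,
# n/n0 = exp(-m*g*h/(k*T)), so the heavy gas dies out much faster with height.
k, g, T = 1.380649e-23, 9.81, 300.0        # assumed isothermal column at 300 K
amu = 1.66054e-27
masses = {"O2": 32 * amu, "H2": 2 * amu}   # molecular masses, kg

for h in (0.0, 10e3, 50e3, 100e3):         # heights in metres
    row = "   ".join(f"n/n0({gas}) = {math.exp(-m_ * g * h / (k * T)):.3e}"
                     for gas, m_ in masses.items())
    print(f"h = {h/1e3:5.0f} km   {row}")
```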

40–2 The Boltzmann law

Here we note the interesting fact that the numerator in the exponent of Eq. (40.1) is the [gravitational] potential energy of an atom. Therefore we can also state this particular law as: the density at any point is proportional to
e^(−[the potential energy of each atom]/kT).
That may be an accident, i.e., may be true only for this particular case of a uniform gravitational field. However, we can show that it is a more general proposition. Suppose that there were some kind of force other than gravity acting on the molecules in a gas. For example, the molecules may be charged electrically, and may be acted on by an electric field or another charge that attracts them. Or, because of the mutual attractions of the atoms for each other, or for the wall, or for a solid, or something, there is some force of attraction which varies with position and which acts on all the molecules. Now suppose, for simplicity, that the molecules are all the same, and that the force acts on each individual one, so that the total force on a piece of gas would be simply the number of molecules times the force on each one. To avoid unnecessary complication, let us choose a coordinate system with the x-axis in the direction of the force, F.
In the same manner as before, if we take two parallel planes in the gas, separated by a distance dx, then the force on each atom, times the n atoms per cm³ (the generalization of the previous nmg), times dx, must be balanced by the pressure change: Fn dx = dP = kT dn. Or, to put this law in a form which will be useful to us later,
F = kT (d/dx)(ln n).   (40.2)
For the present, observe that F dx is the work we would do in taking a molecule from x to x + dx, and if F comes from a potential, i.e., if the work done can be represented by a [gravitational] potential energy at all, then this would also be the difference in the [gravitational] potential energy (P.E.). The negative differential of [gravitational] potential energy is the work done, F dx, and we find that d(ln n) = −d(P.E.)/kT, or, after integrating,

n = (constant) e^(−P.E./kT).   (40.3)
Therefore what we noticed in a special case turns out to be true in general. (What if F does not come from a potential? Then (40.2) has no solution at all. Energy can be generated, or lost by the atoms running around in cyclic paths for which the work done is not zero, and no equilibrium can be maintained at all. Thermal equilibrium cannot exist if the external forces on the atoms are not conservative.) Equation (40.3), known as Boltzmann’s law, is another of the principles of statistical mechanics: that the probability of finding molecules in a given spatial arrangement varies exponentially with the negative of the potential energy of that arrangement, divided by kT.
This, then, could tell us the distribution of molecules: Suppose that we had a positive ion in a liquid, attracting negative ions around it, how many of them would be at different distances? If the potential energy is known as a function of distance, then the proportion of them at different distances is given by this law, and so on, through many applications...
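As a rough illustration of that last application (my sketch, not Feynman's worked example): take a positive ion in water, assume the potential energy of a nearby negative ion is the bare Coulomb energy reduced only by the solvent's dielectric constant (εr ≈ 80 is an assumption, and screening by the other ions is ignored), and apply Eq. (40.3) to get the relative density of negative ions at each distance.

```python
import math

# Relative density of negative ions around a positive ion, from Eq. (40.3):
# n(r)/n_inf = exp(-U(r)/kT), with U(r) = -e^2/(4*pi*eps0*eps_r*r) assumed.
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
eps_r = 80.0              # assumed relative permittivity of water
k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # assumed temperature, K

def relative_density(r):
    """n(r)/n_infinity for an attracted negative ion at separation r (metres)."""
    U = -e**2 / (4 * math.pi * eps0 * eps_r * r)   # assumed potential energy
    return math.exp(-U / (k * T))

for r_nm in (0.7, 1.0, 2.0, 5.0, 10.0):
    print(f"r = {r_nm:4.1f} nm   n/n_inf = {relative_density(r_nm * 1e-9):.2f}")
```

The enhancement is largest close to the central ion and tends to 1 far away, exactly as the exponential law requires.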

40–4 The distribution of molecular speeds

Now we go on to discuss the distribution of velocities, because sometimes it is interesting or useful to know how many of them are moving at different speeds. In order to do that, we may make use of the facts which we discovered with regard to the gas in the atmosphere. We take it to be a perfect gas, as we have already assumed in writing the potential energy, disregarding the energy of mutual attraction of the atoms. The only potential energy that we included in our first example was gravity. We would, of course, have something more complicated if there were forces between the atoms. Thus we assume that there are no forces between the atoms and, for a moment, disregard collisions also, returning later to the justification of this. Now we saw that there are fewer molecules at the height h than there are at the height 0; according to formula (40.1), they decrease exponentially with height. How can there be fewer at greater heights? After all, do not all the molecules which are moving up at height 0 arrive at h? No!, because some of those which are moving up at 0 are going too slowly, and cannot climb the potential hill to h. With that clue, we can calculate how many must be moving at various speeds, because from (40.1) we know how many are moving with less than enough speed to climb a given distance h. Those are just the ones that account for the fact that the density at h is lower than at 0...Since velocity and momentum are proportional, we may say that the distribution of momenta is also proportional to 
e^(−K.E./kT) per unit momentum range. It turns out that this theorem is true in relativity too, if it is in terms of momentum, while if it is in velocity it is not, so it is best to learn it in momentum instead of in velocity:
f(p) dp = C e^(−K.E./kT) dp.   (40.8)
So we find that the probabilities of different conditions of energy, kinetic and potential, are both given by e^(−energy/kT), a very easy thing to remember and a rather beautiful proposition.
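A small Monte-Carlo sketch of the argument above (my check; N2 at 300 K and a test height of 5 km are arbitrary assumptions): of the molecules crossing the plane h = 0 on their way up, count the flux fraction moving fast enough to climb the potential hill to h. It should equal exp(−mgh/kT), which is exactly the density ratio of Eq. (40.1).

```python
import numpy as np

# Of the molecules crossing h = 0 upward, only those with (1/2) m v_z^2 > m g h
# can reach height h; their share of the upward flux should be exp(-m g h / kT).
k, g, T = 1.380649e-23, 9.81, 300.0
m = 28 * 1.66054e-27                  # N2 molecular mass, kg (assumption)
h = 5_000.0                           # test height, m (assumption)

rng = np.random.default_rng(0)
vz = rng.normal(0.0, np.sqrt(k * T / m), size=2_000_000)   # thermal v_z samples

up = vz > 0.0                          # molecules crossing the plane upward
fast_enough = vz > np.sqrt(2 * g * h)  # able to climb the potential hill to h

flux_fraction = vz[fast_enough].sum() / vz[up].sum()
print(f"flux fraction reaching h: {flux_fraction:.4f}")
print(f"exp(-mgh/kT):             {np.exp(-m * g * h / (k * T)):.4f}")
```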
So far we have, of course, only the distribution of the velocities “vertically.” We might want to ask, what is the probability that a molecule is moving in another direction? Of course these distributions are connected, and one can obtain the complete distribution from the one we have, because the complete distribution depends only on the square of the magnitude of the velocity, not upon the z-component. It must be something that is independent of direction, and there is only one function involved, the probability of different magnitudes. We have the distribution of the z-component, and therefore we can get the distribution of the other components from it. The result is that the probability is still proportional to e^(−K.E./kT), but now the kinetic energy involves three parts, mv_x²/2, mv_y²/2, and mv_z²/2, summed in the exponent. Or we can write it as a product:
f(v_x, v_y, v_z) dv_x dv_y dv_z ∝ e^(−mv_x²/2kT) e^(−mv_y²/2kT) e^(−mv_z²/2kT) dv_x dv_y dv_z.   (40.9)
You can see that this formula must be right because, first, it is a function only of v², as required, and second, the probabilities of various values of v_z obtained by integrating over all v_x and v_y is just (40.7). But this one function (40.9) can do both those things!
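Here is a brief sketch (mine, under the same assumed gas and temperature as before) of what Eq. (40.9) implies: each velocity component is an independent Gaussian with variance kT/m, so the average kinetic energy is (1/2)kT per component and (3/2)kT per molecule.

```python
import numpy as np

# Sample the three independent Gaussian components of Eq. (40.9) and check
# <(1/2) m v_i^2> = (1/2) kT per component, i.e. (3/2) kT per molecule.
k, T = 1.380649e-23, 300.0
m = 28 * 1.66054e-27                         # N2 molecular mass, kg (assumption)

rng = np.random.default_rng(0)
v = rng.normal(0.0, np.sqrt(k * T / m), size=(1_000_000, 3))   # v_x, v_y, v_z

ke_per_component = 0.5 * m * (v**2).mean(axis=0)
print("KE per component / (kT/2):", ke_per_component / (0.5 * k * T))
print("mean KE / (3kT/2):        ", (0.5 * m * (v**2).sum(axis=1)).mean() / (1.5 * k * T))
```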

40–5 The specific heats of gases

Now we shall look at some ways to test the theory, and to see how successful is the classical theory of gases. We saw earlier that if U is the internal energy of N molecules, then PV = NkT = (γ − 1)U holds, sometimes, for some gases, maybe. If it is a monatomic gas, we know this is also equal to 2/3 of the kinetic energy of the center-of-mass motion of the atoms. If it is a monatomic gas, then the kinetic energy is equal to the internal energy, and therefore γ − 1 = 2/3. But suppose it is, say, a more complicated molecule, that can spin and vibrate, and let us suppose (it turns out to be true according to classical mechanics) that the energies of the internal motions are also proportional to kT. Then at a given temperature, in addition to kinetic energy (3/2)kT, it has internal vibrational and rotational energies. So the total U includes not just the kinetic energy, but also the rotational and vibrational energies, and we get a different value of γ. Technically, the best way to measure γ is by measuring the specific heat, which is the change in energy with temperature. We will return to that approach later. For our present purposes, we may suppose γ is found experimentally from the PV^γ curve for adiabatic compression...
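To make the arithmetic concrete (a worked sketch under classical equipartition assumptions, not Feynman's own code): with f classical "kT/2" energy terms per molecule the internal energy is U = (f/2)NkT, and PV = NkT = (γ − 1)U then gives γ = 1 + 2/f, which reproduces γ − 1 = 2/3 for the monatomic case and smaller values once rotation and vibration are counted. The function name gamma and the degree-of-freedom counts are mine.

```python
# gamma = 1 + 2/f for f classical degrees of freedom per molecule,
# from U = (f/2) N k T together with PV = NkT = (gamma - 1) U.
def gamma(f: int) -> float:
    return 1 + 2 / f

print("monatomic, 3 translational:             ", round(gamma(3), 3))  # 5/3 ~ 1.667
print("diatomic, + 2 rotational:               ", round(gamma(5), 3))  # 7/5 = 1.400
print("diatomic, + 2 vibrational (spring):     ", round(gamma(7), 3))  # 9/7 ~ 1.286
```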

40–6 The failure of classical physics

So, all in all, we might say that we have some difficulty. We might try some force law other than a spring, but it turns out that anything else will only make γ higher. If we include more forms of energy, γ approaches unity more closely, contradicting the facts. All the classical theoretical things that one can think of will only make it worse. The fact is that there are electrons in each atom, and we know from their spectra that there are internal motions; each of the electrons should have at least (1/2)kT of kinetic energy, and something for the potential energy, so when these are added in, γ gets still smaller. It is ridiculous. It is wrong.
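The trend described here follows directly from the same γ = 1 + 2/f relation sketched above: every additional classical energy term (more rotation, vibration, internal electron motion, and so on) increases f and pushes γ toward 1, further from the measured values of roughly 1.4 for common diatomic gases at room temperature. The f values below are purely illustrative, not a model of any particular molecule.

```python
# Each added classical energy term pushes gamma = 1 + 2/f toward unity.
for f in (3, 5, 7, 13, 25, 101):
    print(f"f = {f:3d}   gamma = {1 + 2 / f:.3f}")
```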
The first great paper on the dynamical theory of gases was by Maxwell in 1859. On the basis of ideas we have been discussing, he was able accurately to explain a great many known relations, such as Boyle’s law, the diffusion theory, the viscosity of gases, and things we shall talk about in the next chapter. He listed all these great successes in a final summary, and at the end he said, “Finally, by establishing a necessary relation between the motions of translation and rotation (he is talking about the (1/2)kT theorem) of all particles not spherical, we proved that a system of such particles could not possibly satisfy the known relation between the two specific heats.” He is referring to γ (which we shall see later is related to two ways of measuring specific heat), and he says we know we cannot get the right answer...

Also see Feynman's lecture 42 in which he states,

"It is like the atmosphere in equilibrium under gravity, where the gas at the bottom is denser than that at the top because of the work mghneeded to lift the gas molecules to the height h."
."
n=(constant)eP.E./kT