Wednesday, October 28, 2015

The Kinetic Theory of Gases explains why the Maxwell et al Gravito-Thermal Greenhouse Effect is Correct

An excellent review of the Kinetic Theory of Gases, similar to Feynman's lecture 40 on the Statistical Mechanics of the Atmosphere, explains the fundamental basis of the 33C Maxwell et al gravito-thermal greenhouse effect (and which also falsifies the Arrhenius radiative-greenhouse effect).

The review also explains the fundamental reasons why analogies between our atmosphere and inflated tires or static, closed gas cylinders are false.

Kinetic Theory of Gases: A Brief Review

By Michael Fowler, physicist, U of VA

Bernoulli's Picture


Daniel Bernoulli, in 1738, was the first to understand air pressure from a molecular point of view. He drew a picture of a vertical cylinder, closed at the bottom, with a piston at the top, the piston having a weight on it, both piston and weight being supported by the air pressure inside the cylinder. He described what went on inside the cylinder as follows: “let the cavity contain very minute corpuscles, which are driven hither and thither with a very rapid motion; so that these corpuscles, when they strike against the piston and sustain it by their repeated impacts, form an elastic fluid which will expand of itself if the weight is removed or diminished…”

(An applet is available here.)  Sad to report, his insight, although essentially correct, was not widely accepted [yet another example of a false scientific "consensus"]. Most scientists believed that the molecules in a gas stayed more or less in place, repelling each other from a distance, held somehow in the ether. Newton had shown that PV = constant followed if the repulsion were inverse-square. In fact, in the 1820’s an Englishman, John Herapath, derived the relationship between pressure and molecular speed given below, and tried to get it published by the Royal Society. It was rejected by the president, Humphry Davy, who pointed out that equating temperature with motion, as Herapath did, implied that there would be an absolute zero of temperature, an idea Davy was reluctant to accept.  And it should be added that no-one had the slightest idea how big atoms and molecules were, although Avogadro had conjectured that equal volumes of different gases at the same temperature and pressure contained equal numbers of molecules—his famous number—neither he nor anyone else knew what that number was, only that it was pretty big.

The Link between Molecular Energy and Pressure


It is not difficult to extend Bernoulli’s picture to a quantitative description, relating the gas pressure to the molecular velocities. As a warm up exercise, let us consider a single perfectly elastic particle, of mass m, bouncing rapidly back and forth at speed v inside a narrow cylinder of length L with a piston at one end, so all motion is along the same line. (For the movie, click here!) What is the force on the piston?

Obviously, the piston doesn’t feel a smooth continuous force, but a series of equally spaced impacts. However, if the piston is much heavier than the particle, this will have the same effect as a smooth force over times long compared with the interval between impacts. So what is the value of the equivalent smooth force?

Using Newton's law in the form force = rate of change of momentum, we see that the particle's momentum changes by 2mv each time it hits the piston. The time between hits is 2L/v, so the frequency of hits is v/2L per second. This means that if there were no balancing force, by conservation of momentum the particle would cause the momentum of the piston to change by 2mv × v/2L units in each second. This is the rate of change of momentum, and so must be equal to the balancing force, which is therefore F = mv²/L.
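As a quick numerical sanity check (not part of Fowler's original lecture), the following Python sketch counts the discrete impacts over a long observation time and confirms that the time-averaged force matches F = mv²/L; the particle mass, speed, and cylinder length are arbitrary illustrative values.

```python
# Sanity check of F = m*v^2/L for a single particle bouncing in a cylinder.
# The mass, speed, and length below are illustrative assumptions only.
m = 4.65e-26   # kg, roughly one N2 molecule
v = 500.0      # m/s, particle speed
L = 0.1        # m, cylinder length

t_total = 1.0                       # s, observation time
hits = int(t_total / (2 * L / v))   # one piston hit per round trip (2L/v)
impulse = hits * 2 * m * v          # each hit delivers momentum 2mv
F_avg = impulse / t_total           # equivalent smooth force

print(F_avg)           # time-averaged force from discrete impacts
print(m * v**2 / L)    # Fowler's formula; agrees to within one impact
```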

We now generalize to the case of many particles bouncing around inside a rectangular box, of length L in the x-direction (which is along an edge of the box). The total force on the side of area A perpendicular to the x-direction is just a sum of single particle terms, the relevant velocity being the component of the velocity in the x-direction. The pressure is just the force per unit area, P = F/A. Of course, we don't know what the velocities of the particles are in an actual gas, but it turns out that we don't need the details. If we sum contributions, one from each particle in the box, each contribution proportional to vx² for that particle, the sum just gives us N times the average value of vx². That is to say,

PV = Nm⟨vx²⟩

where there are N particles in a box of volume V. Next we note that the particles are equally likely to be moving in any direction, so the average value of vx² must be the same as that of vy² or vz², and since v² = vx² + vy² + vz², it follows that

PV = (1/3)Nm⟨v²⟩ = (2/3)N⟨½mv²⟩
This is a surprisingly simple result!  The macroscopic pressure of a gas relates directly to the average kinetic energy per molecule.  Of course, in the above we have not thought about possible complications caused by interactions between particles, but in fact for gases like air at room temperature these interactions are very small.  Furthermore, it is well established experimentally that most gases satisfy the Gas Law over a wide temperature range:

PV = nRT

for n moles of gas, that is, n = N/NA, with NA Avogadro’s number and R the gas constant.
Introducing Boltzmann's constant k = R/NA, it is easy to check from our result for the pressure and the ideal gas law that the average molecular kinetic energy is proportional to the absolute temperature,

⟨½mv²⟩ = (3/2)kT

Boltzmann's constant k = 1.38 × 10⁻²³ joules/K.
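To make the result concrete (my own illustration, not in the review), a short Python calculation gives the average kinetic energy (3/2)kT and the corresponding rms speed for a nitrogen molecule at an assumed room temperature:

```python
# Average molecular kinetic energy <(1/2)mv^2> = (3/2)kT for N2 at room T.
k = 1.38e-23          # J/K, Boltzmann's constant
T = 293.0             # K, assumed room temperature
m_N2 = 28 * 1.66e-27  # kg, approximate mass of one N2 molecule

KE_avg = 1.5 * k * T                  # ~6.1e-21 J per molecule
v_rms = (3 * k * T / m_N2) ** 0.5     # ~510 m/s, from (1/2)m*v_rms^2 = (3/2)kT

print(KE_avg, v_rms)
```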

Maxwell finds the Velocity Distribution 


By the 1850’s, various difficulties with the existing theories of heat, such as the caloric theory, caused some rethinking, and people took another look at the kinetic theory of Bernoulli, but little real progress was made until Maxwell attacked the problem in 1859.  Maxwell worked with Bernoulli’s picture, that the atoms or molecules in a gas were perfectly elastic particles, obeying Newton’s laws, bouncing off each other (and the sides of the container) with straight-line trajectories in between collisions. (Actually, there is some inelasticity in the collisions with the sides—the bouncing molecule can excite or deexcite vibrations in the wall, this is how the gas and container come to thermal equilibrium.)  Maxwell realized that it was completely hopeless to try to analyze this system using Newton’s laws, even though it could be done in principle, there were far too many variables to begin writing down equations.  On the other hand, a completely detailed description of how each molecule moved was not really needed anyway.  What was needed was some understanding of how this microscopic picture connected with the macroscopic properties, which represented averages over huge numbers of molecules.

The relevant microscopic information is not knowledge of the position and velocity of every molecule at every instant of time, but just the distribution function, that is to say, what percentage of the molecules are in a certain part of the container, and what percentage have velocities within a certain range, at each instant of time.  For a gas in thermal equilibrium [unlike the 100 km Earth atmosphere, which is not in vertical thermal equilibrium due to gravity], the distribution function is independent of time.  Ignoring tiny corrections for gravity [only true for this small container, not the 100km height Earth atmosphere in which gravity-corrections are very large], the gas will be distributed uniformly in the container, so the only unknown is the velocity distribution function.

Velocity Space


What does a velocity distribution function look like?  Suppose at some instant in time one particular molecule has velocity v = (vx, vy, vz).  We can record this information by constructing a three-dimensional velocity space, with axes vx, vy, vz, and putting in a point P1 representing the molecule's velocity (in Fowler's figure, a red arrow drawn from the origin to P1).
Now imagine that at that instant we could measure the velocities of all the molecules in a container, and put points P2, P3, P4, … PN in the velocity space.  Since N is of order 10²¹ for 100 ccs of gas, this is not very practical!  But we can imagine what the result would be: a cloud of points in velocity space, equally spread in all directions (there's no reason molecules would prefer to be moving in the x-direction, say, rather than the y-direction) and thinning out on going away from the origin towards higher and higher velocities.

Now, if we could keep monitoring the situation as time passes, individual points would move around, as molecules bounced off the walls, or each other, so you might think the cloud would shift around a bit.  But there's a vast number of molecules in any realistic macroscopic situation, and for any reasonably sized container it's safe to assume that the number of molecules in any small region of velocity space remains pretty much constant.  Obviously, this cannot be true for a region of velocity space so tiny that it only contains one or two molecules on average.  But it can be shown statistically that if there are N molecules in a particular small volume of velocity space, the fluctuation of the number with time is of order √N, so a region containing a million molecules will vary in numbers by about one part in a thousand, a trillion molecule region by one part in a million.  Since 100 ccs of air contains of order 10²¹ molecules, we can in practice divide the region of velocity space occupied by the gas into a billion cells, and still have variation in each cell of order one part in a million!
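The √N fluctuation rule is easy to demonstrate numerically. Here is a short sketch (my addition, with an arbitrary cell size) that counts how many of N molecules land in a cell holding, on average, a thousand of them:

```python
# Occupancy fluctuations of a small velocity-space cell: spread ~ sqrt(N*p).
import numpy as np

N = 1_000_000      # molecules in the gas (illustrative)
p = 1e-3           # assumed fraction of velocity space in one cell
trials = 10_000    # independent snapshots of the cell occupancy

counts = np.random.binomial(N, p, size=trials)
print(counts.mean())    # ~1000 molecules in the cell on average
print(counts.std())     # ~32, i.e. sqrt(1000): one part in ~30
print(np.sqrt(N * p))   # the sqrt(N) prediction
```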

The bottom line is that for a macroscopic amount of gas, fluctuations in density, both in ordinary space and in velocity space, are for all practical purposes negligible, and we can take the gas to be smoothly distributed in both spaces.

Maxwell’s Symmetry Argument


Maxwell found the velocity distribution function for gas molecules in thermal equilibrium by the following elegant argument based on symmetry.

For a gas of N particles, let the number of particles having velocity in the x-direction between vx and vx + dvx be Nf1(vx)dvx.  In other words, f1(vx)dvx is the fraction of all the particles having x-direction velocity lying in the interval between vx and vx + dvx.  (I've written f1 instead of f to help remember this function refers to only one component of the velocity vector.)
If we add the fractions for all possible values of vx, the result must of course be 1:

∫ f1(vx) dvx = 1

where the integral runs over all vx, from −∞ to +∞.
But there's nothing special about the x-direction—for gas molecules in a container, at least away from the walls, all directions look the same, so the same function f1 will give the probability distributions in the other directions too.  It follows immediately that the probability for the velocity to lie between vx and vx + dvx, vy and vy + dvy, and vz and vz + dvz must be:

N f1(vx) f1(vy) f1(vz) dvx dvy dvz
Note that this distribution function, when integrated over all possible values of the three components of velocity, gives the total number of particles to be N, as it should (since integrating over each f1(v)dv gives unity).

Next comes the clever part—since any direction is as good as any other direction, the distribution function must depend only on the total speed of the particle, not on the separate velocity components. Therefore, Maxwell argued, it must be that:

f1(vx) f1(vy) f1(vz) = F(vx² + vy² + vz²)
where F is another unknown function.  However, it is apparent that the product of the functions on the left is reflected in the sum of variables on the right.  It will only come out that way if the variables appear in an exponent in the functions on the left.  In fact, it is easy to check that this equation is solved by a function of the form:

f1(vx) = A e^(−Bvx²)

This curve is called a Gaussian:  it's centered at the origin, and falls off very rapidly as vx increases.  Taking A = B = 1 just to see the shape gives the bell-shaped curve shown in Fowler's original figure.
At this point, A and B are arbitrary constants—we shall eventually find their values for an actual sample of gas at a given temperature.  Notice that (following Maxwell) we have put a minus sign in the exponent because there must eventually be fewer and fewer particles on going to higher speeds, certainly not a diverging number. 

Multiplying together the probability distributions for the three directions gives the distribution in terms of particle speed v, where v² = vx² + vy² + vz².   Since all velocity directions are equally likely, it is clear that the natural distribution function is that giving the number of particles having speed between v and v + dv.

From the graph above, it is clear that the most likely value of vx is zero.  If the gas molecules were restricted to one dimension, just moving back and forth on a line, then the most likely value of their speed would also be zero.  However, for gas molecules free to move in two or three dimensions, the most likely value of the speed is not zero.  It’s easiest to see this in a two-dimensional example. Suppose we plot the points P representing the velocities of molecules in a region near the origin, so the density of points doesn’t vary much over the extent of our plot (we’re staying near the top of the peak in the one-dimensional curve shown above).  

Now divide the two-dimensional space into regions corresponding to equal increments in speed:


In the two-dimensional space, a given speed corresponds to a circle, so this division of the plane is into annular regions between circles whose successive radii are Δv apart.
Each of these annular areas corresponds to the same speed increment Δv.  In particular, the green annulus [in Fowler's figure], between a circle of radius v and one of radius v + Δv, corresponds to the same speed increment as the small red circle in the middle, which corresponds to speeds between 0 and Δv. Therefore, if the molecular speeds are pretty evenly distributed in this near-the-origin area of the (vx, vy) plane, there will be a lot more molecules with speeds between v and v + Δv than between 0 and Δv—so the most likely speed will not be zero.  To find out what it actually is, we have to put this area argument together with the Gaussian fall off in density on going far from the origin.  We'll discuss this shortly.
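This area argument can also be checked numerically. In the sketch below (my addition, using an arbitrary unit temperature), vx and vy are drawn from the Gaussian found above, and the histogram of two-dimensional speeds indeed peaks away from zero:

```python
# 2-D speeds from Gaussian velocity components peak away from zero speed.
import numpy as np

rng = np.random.default_rng(0)
vx = rng.normal(0.0, 1.0, 100_000)   # Gaussian f1(vx), unit width assumed
vy = rng.normal(0.0, 1.0, 100_000)
speed = np.hypot(vx, vy)             # 2-D speed sqrt(vx^2 + vy^2)

hist, edges = np.histogram(speed, bins=50, range=(0.0, 4.0))
i = np.argmax(hist)
print(0.5 * (edges[i] + edges[i + 1]))  # ~1.0, not 0: the annulus area wins
```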

The same argument works in three dimensions—it’s just a little more difficult to visualize. Instead of concentric circles, we have concentric spheres.  All points lying on a spherical surface centered at the origin correspond to the same speed. 

Let us now figure out the distribution of particles as a function of speed.  The distribution in the three-dimensional velocity space (vx, vy, vz) is, from Maxwell's analysis,

N A³ e^(−Bv²) dvx dvy dvz

To translate this to the number of particles having speed between v and v + dv, we need to figure out how many of those little boxes dvx dvy dvz there are corresponding to speeds between v and v + dv.  In other words, what is the volume of velocity space between the two neighboring spheres, both centered at the origin, the inner one with radius v, the outer one infinitesimally bigger, with radius v + dv?    Since dv is so tiny, this volume is just the area of the sphere multiplied by dv: that is, 4πv² dv.

Finally, then, the probability distribution as a function of speed is:

f(v) dv = 4πv² A³ e^(−Bv²) dv

Of course, our job isn't over—we still have these two unknown constants A and B.  However, just as for f1(vx), f(v)dv is the fraction of the molecules corresponding to speeds between v and v + dv, and all these fractions taken together must add up to 1.

That is,

∫₀^∞ 4πv² A³ e^(−Bv²) dv = 1

We need the standard result ∫₀^∞ v² e^(−Bv²) dv = (1/4)√(π/B³) (a derivation can be found in my 152 Notes on Exponential Integrals), and find:

A³ (π/B)^(3/2) = 1

This means that there is really only one arbitrary variable left: if we can find B, this equation gives us A: that is, A = (B/π)^(1/2), and A³ = (B/π)^(3/2) is what appears in the speed distribution f(v).

Looking at f(v), we notice that B is a measure of how far the distribution spreads from the origin: if B is small, the distribution drops off more slowly—the average particle is more energetic.   Recall now that the average kinetic energy of the particles is related to the temperature by ⟨½mv²⟩ = (3/2)kT.  This means that B is related to the inverse temperature.

In fact, since f(v)dv is the fraction of particles in the interval dv at v, and those particles have kinetic energy ½mv², we can use the probability distribution to find the average kinetic energy per particle:

⟨½mv²⟩ = ∫₀^∞ ½mv² · 4πv² (B/π)^(3/2) e^(−Bv²) dv

To do this integral we need another standard result: ∫₀^∞ v⁴ e^(−Bv²) dv = (3/8)√(π/B⁵).  We find:

⟨½mv²⟩ = 3m/4B

Substituting the value for the average kinetic energy in terms of the temperature of the gas,

⟨½mv²⟩ = (3/2)kT

gives B = m/2kT, so A = (m/2πkT)^(1/2).

This means the distribution function

f(v) dv = 4πv² (m/2πkT)^(3/2) e^(−mv²/2kT) dv = 4πv² (m/2πkT)^(3/2) e^(−E/kT) dv

where E = ½mv² is the kinetic energy of the molecule.

Note that this function increases parabolically from zero for low speeds, then curves round to reach a maximum and finally decreases exponentially.  As the temperature increases, the position of the maximum shifts to the right.  The total area under the curve is always one, by definition.  For air molecules (say, nitrogen) [i.e. pure N2 without any greenhouse gases] at room temperature the curve is the blue one in Fowler's figure; the red one is for an absolute temperature down by a factor of two.
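Here is a short Python check (mine, not Fowler's) of the finished speed distribution for nitrogen: it verifies numerically that the area under each curve is one, and that the peak sits at the most probable speed √(2kT/m), shifting left at the lower temperature:

```python
# Maxwell speed distribution f(v) = 4*pi*v^2 (m/2*pi*k*T)^(3/2) exp(-m*v^2/2kT).
import numpy as np

k = 1.38e-23           # J/K
m = 28 * 1.66e-27      # kg, N2 molecule

def f(v, T):
    a = (m / (2 * np.pi * k * T)) ** 1.5
    return 4 * np.pi * v**2 * a * np.exp(-m * v**2 / (2 * k * T))

v = np.linspace(0.0, 3000.0, 30_001)          # m/s grid
for T in (293.0, 146.5):                      # the "blue" and "red" curves
    area = np.sum(f(v, T)) * (v[1] - v[0])    # crude numerical integral
    v_peak = v[np.argmax(f(v, T))]
    print(T, round(area, 4), v_peak, np.sqrt(2 * k * T / m))
# area ~1 at both temperatures; peak ~417 m/s at 293 K, ~295 m/s at 146.5 K
```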


What about Potential Energy?

Maxwell's analysis solves the problem of finding the statistical velocity distribution of molecules of an ideal gas in a box at a definite temperature T: the relative probability of a molecule having velocity v is proportional to e^(−mv²/2kT).  The position distribution is taken to be uniform: the molecules are assumed to be equally likely to be anywhere in the box.

But how is this distribution affected if in fact there is some kind of potential [like gravity] pulling the molecules to one end of the box?  In fact, we've already solved this problem, in the discussion earlier on the isothermal atmosphere [in a small box for which gravity-corrections are insignificant].  Consider a really big box, kilometers high, so air will be significantly denser towards the bottom.  Assume the temperature is uniform throughout [a thought experiment premise only]. We found under these conditions that with Boyle's Law expressed in the form

P = Cρ

with C a constant at fixed temperature, the atmospheric density varied with height as

ρ(h) = ρ(0) e^(−gh/C)
Now we know that Boyle's Law is just the fixed temperature version of the Gas Law PV = nRT, and the density

ρ = Nm/V

with N the total number of molecules and m the molecular mass,

N = nNA

Rearranging,

n = ρV/(NAm)

for n moles of gas, each mole containing Avogadro's number NA molecules.

Putting this together with the Gas Law,

PV = nRT = ρVRT/(NAm)

so

P = ρkT/m

where Boltzmann's constant k = R/NA as discussed previously.  Comparing with P = Cρ above, the constant C is just kT/m.

The dependence of gas density on height can therefore be written

ρ(h) = ρ(0) e^(−mgh/kT)
The important point here is that mgh is the [gravitational] potential energy of the molecule, and the distribution we have found is exactly parallel to Maxwell’s velocity distribution, the [gravitational] potential energy now playing the role that kinetic energy played in that case.
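As a concrete illustration (my own numbers, assuming pure N2 at a uniform 288K), the density in this isothermal atmosphere falls by a factor of e over the scale height kT/mg of roughly 8.7 km:

```python
# Isothermal barometric law rho(h) = rho(0) * exp(-m*g*h/(k*T)).
import math

k = 1.38e-23         # J/K
T = 288.0            # K, assumed uniform temperature
m = 28 * 1.66e-27    # kg, N2 molecule
g = 9.8              # m/s^2

H = k * T / (m * g)                 # scale height
print(H / 1000)                     # ~8.7 km

for h in (0.0, 5000.0, 10000.0):    # heights in meters
    print(h, math.exp(-m * g * h / (k * T)))  # relative density vs height
```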

We’re now ready to put together Maxwell’s velocity distribution with this height distribution, to find out how the molecules are distributed in the atmosphere, both in velocity space and in ordinary space.  In other words, in a six-dimensional space!

Our result is:

f(v, h) ∝ e^(−mgh/kT) e^(−mv²/2kT) = e^(−(½mv² + mgh)/kT) = e^(−E/kT)

That is, the probability of a molecule having total energy E is proportional to e^(−E/kT).
This is the Boltzmann, or Maxwell-Boltzmann, distribution.  It turns out to be correct for any type of potential energy [including gravitational potential energy], including that arising from forces between the molecules themselves.

Degrees of Freedom and Equipartition of Energy

By a "degree of freedom" we mean a way in which a molecule is free to move, and thus have energy—in this case, just the x, y, and z directions.  Boltzmann reformulated Maxwell's analysis in terms of degrees of freedom, stating that there was an average energy ½kT in each degree of freedom, to give total average kinetic energy 3 × ½kT, so the specific heat per molecule is presumably 1.5k, and given that k = R/NA, the specific heat per mole comes out at 1.5R.  In fact, this is experimentally confirmed for monatomic gases.  However, it is found that diatomic gases can have specific heats of 2.5R and even 3.5R.  This is not difficult to understand—these molecules have more degrees of freedom.  A dumbbell molecule can rotate about two directions perpendicular to its axis.  A diatomic molecule could also vibrate.  Such a simple harmonic oscillator motion has both kinetic and potential energy, and it turns out to have total energy kT in thermal equilibrium.  Thus, reasonable explanations for the specific heats of various gases can be concocted by assuming a contribution ½k from each degree of freedom.  But there are problems.  Why shouldn't the dumbbell rotate about its axis?  Why do monatomic atoms not rotate at all?  Even more ominously, the specific heat of hydrogen, 2.5R at room temperature, drops to 1.5R at lower temperatures.  These problems were not resolved until the advent of quantum mechanics.
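The degree-of-freedom bookkeeping in that paragraph reduces to a one-line formula; here is a minimal sketch (my addition) tallying the molar specific heats quoted above:

```python
# Equipartition: (1/2)R per translation or rotation, a full R per vibration.
R = 8.314  # J/(mol*K), gas constant

def molar_cv(trans=3, rot=0, vib=0):
    return (trans + rot) * 0.5 * R + vib * R  # vibration: kinetic + potential

print(molar_cv() / R)                 # monatomic: 1.5R
print(molar_cv(rot=2) / R)            # rigid diatomic dumbbell: 2.5R
print(molar_cv(rot=2, vib=1) / R)     # vibrating diatomic: 3.5R
```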

Brownian Motion

One of the most convincing demonstrations that gases really are made up of fast moving molecules is Brownian motion, the observed constant jiggling around of tiny particles, such as fragments of ash in smoke.  This motion was first noticed by a Scottish botanist, Robert Brown, who initially assumed he was looking at living creatures, but then found the same motion in what he knew to be particles of inorganic material.  Einstein showed how to use Brownian motion to estimate the size of atoms.  For the movie, click here!

Related:


Maxwell established that gravity & atmospheric mass create so-called greenhouse effect

Debunking Myths & Strawmen about the Gravito-Thermal Greenhouse Effect & Radiative Greenhouse Effect


1] The Greenhouse Equation


2] How Gravity continuously does Thermodynamic Work on the atmosphere to control pressure & temperature


3] Why Greenhouse Gases Don't Affect the Greenhouse Equation or Lapse Rate (debunks claim that greenhouse gases are necessary for convection or a lapse rate to occur or that greenhouse gas radiative forcing can affect the lapse rate)


4] Quick and dirty explanation of the Greenhouse Equation and theory

5] The Greenhouse Equation predicts 1% change in cloud cover changes global temperature by 1°C


6] Why the atmosphere is in horizontal thermodynamic equilibrium but not vertical equilibrium (debunks claims that the gravito-thermal greenhouse effect assumes thermodynamic equilibrium in all three x, y, and z planes).

7] The Greenhouse Equation predicts temperatures within 0.02°C throughout entire troposphere without radiative forcing from greenhouse gases

8] Why increased water vapor decreases the lapse rate by half to cause surface cooling of up to 25.5C

9] Derivation of the effective radiating height & entire 33°C greenhouse effect without radiative forcing from greenhouse gases

10] Derivation of the entire 33°C greenhouse effect without radiative forcing from greenhouse gases


11] French scientist explains why the greenhouse effect is primarily due to atmospheric mass/gravity/pressure

12] Modeling of the Earth’s Planetary Heat Balance with an Electrical Circuit Analogy

13] Why can't radiation from a cold body make a hot body hotter?

WSJ: It’s Always Exxon’s Fault- Why climate warriors keep returning to the same whipping boy

It’s Always Exxon’s Fault

Why climate warriors keep returning to the same whipping boy.


In 2009, the New York Times was forced to issue a 328-word correction (a retraction in all but name) because a reporter, assailing an industry group, could not distinguish the proposition “the greenhouse effect exists” from the proposition “any and all environmentalist proposals for dealing with a possible human influence on the greenhouse effect must be met uncritically.”
Here we go again, in the form of an exposé of Exxon by the website InsideClimateNews.org, echoed by the Los Angeles Times and other media organs. See if you can follow the logic exhibited in the ICN series.
Because Exxon concerned itself with how a warming Arctic might affect the safety of its pipelines and drilling structures there, Exxon is a hypocrite on climate change.
Because Exxon refrained from developing an Indonesian gas field that would have meant releasing or capturing a large amount of associated carbon dioxide, Exxon is a hypocrite on climate change.
Exxon, in the early 1980s, adapted then-existing climate models to estimate that a doubling of atmospheric carbon would lead to a temperature increase of 1.5 to 4.5 degrees Celsius. Then as now the company also judged such models not reliable enough to serve as the basis for large and costly policy actions. So Exxon is a hypocrite.
Here’s the interesting part. These studies took place 35 years ago. In completely unrelated comments, nobody’s idea of a “denier,” Harvard’s Martin Weitzman, co-author of the book “Climate Shock,” recently complained about the lack of climate modeling progress in “35 years.” He cited the U.N. climate panel’s latest temperature forecast, which is identical (i.e., unimproved in precision) to Exxon’s three decades earlier.
Through six installments ICN kept promising the goods on how Exxon’s public advocacy conflicted with its private understanding of climate change. The series essentially delivered nothing.
An Exxon spokesman is quoted as saying, “The risk of climate change is real and warrants action.”
Exxon’s CEO in the 1990s, Lee Raymond, the villain of the series, is quoted as saying, “Many people believe that global warming is a rock-solid certainty. But it’s not.”
The company’s position on a carbon tax is that . . . it should be revenue neutral.
The real smoking gun isn’t the Exxon revelations but the climate community’s hysterical reaction to them. Veteran campaigner Bill McKibben and Democratic presidential candidate Bernie Sanders demand the Obama administration launch a criminal investigation.
A Washington Monthly writer, in a blog post for the psychiatric textbooks, delivers himself of this remarkable paragraph: “ExxonMobil’s deceit continues to this very day. The company still insists that it supports federal revenue-neutral carbon tax legislation. How can we possibly take their word for it, after the company spent years attacking the abundant scientific evidence pointing to the critical need for such legislation?”
Just spend a minute parsing that one.
ICN calls its Exxon series “the road not taken.” Were its reporters really the free thinkers they imagine themselves to be, however, they would put aside such crass exercises in orthodoxy enforcement. They would investigate exactly when and how the climate movement itself made its ill-advised turn toward frantic exaggeration, false certainty and vilification of anybody who raises scientific caveats.
They’re right: Exxon was once a respected participant in the debate. It wasn’t Exxon that equated its opponents to holocaust deniers. The $16 million that Exxon spent between 1998 and 2005 to support organizations that pointed out the inadequacies of climate models would have bought less than 1% of the media attention Al Gore was getting at the time.
Talk about a road not taken. A calmer discussion based on uncertainties, risks and benefits might long ago have allowed the introduction of a modest carbon tax in the only way it would be politically salable—by using the proceeds to reduce taxes on investment and work.
Washington might have set itself a clear if modest agenda to fund basic battery research, rather than squander taxpayer dollars on Tesla and Solyndra.
All this might have been below-the-fold, inside-the-paper news, rather than turning climate science into another polarizing proxy for irreconcilable left-right partisan differences on economics.
But the truth is, people like Mr. McKibben can’t afford practical, meaningful progress—because it would be unnoticeable and undramatic. Modest tweaks to incentives, then seeing how energy technology and the energy economy adapted over time, would not fulfill their need for a pressing global crisis that casts them as moral warriors (well-funded ones) whose victory over deniers and climate criminals is always just around the corner—and must remain so in order to keep the money, media attention and political fealty flowing.

Tuesday, October 27, 2015

Why Tyndall's experiment did not "prove" the theory of anthropogenic global warming

Many warmists cite Tyndall's 1861 experiment as "proof" of the catastrophic anthropogenic global warming theory, but in fact the experiment demonstrated only that CO2 and H2O are IR-active molecules capable of absorbing and emitting infrared radiation, nothing more. 

Of course, CO2 does indeed absorb and emit very low-energy ~15 micron infrared radiation, equivalent to a "partial blackbody" at a temperature of 193K (-80C) by Wien's Law. However, radiation from a true or "partial" blackbody cannot warm the much warmer atmosphere (with an "average" temperature of 255K (-18C), equivalent to the equilibrium temperature of Earth with the Sun), nor the even warmer Earth surface at 288K (15C).
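The Wien's Law arithmetic behind that 193K figure is a one-liner; here is a quick check (my own, using the standard displacement constant):

```python
# Wien's displacement law: lambda_max * T = 2898 micron*K.
wien_b = 2898.0      # micron*K, Wien displacement constant
wavelength = 15.0    # microns, the principal CO2 emission band

print(wien_b / wavelength)   # ~193 K, the "partial blackbody" temperature
```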

Yet the Arrhenius radiative greenhouse theory falsely assumes that "backradiation" from the 193K CO2 "partial blackbody" can warm the Earth surface temperature from the 255K equilibrium temperature with the Sun by 33K up to 288K. This would require a continuous and dominating heat transfer from cold to hot, thus requiring an impossible decrease of entropy, and therefore a gross violation of the Second Law of Thermodynamics (which requires entropy to increase from any transfer of heat).

In contrast, the alternative 33C gravito-thermal greenhouse theory of Poisson, Helmholtz, Maxwell, Boltzmann, Carnot, Clausius, Feynman, US Standard Atmosphere, International Standard Atmosphere, the HS greenhouse equation, et al instead fully explains the 33C 'greenhouse effect' on Earth, as well as on all 7 additional planets for which we have adequate data. 

As we can see from the diagram of Tyndall's apparatus below, it consists of a horizontal sealed tube containing the gas to be studied. Unlike the actual 100km Earth atmosphere, Tyndall's apparatus does not allow any vertical convective cooling as is found in the real Earth atmosphere. In fact, increased greenhouse gases accelerate convective cooling in the troposphere. Tyndall's apparatus artificially prevents this convective cooling, just like a sealed greenhouse does, but this suppression of convection does not happen in the real atmosphere.

Furthermore, as physicist William Happer points out, CO2 in the troposphere is roughly one billion times more likely to transfer absorbed quanta of energy via collisions than to emit a photon. This transfer of energy via collisions to the remaining 99.06% of the atmosphere accelerates convective cooling by increasing the adiabatic expansion, rising, and cooling of air parcels. Convection dominates radiative-convective equilibrium in the troposphere by a factor of ~8 times and thus cancels any possible warming effect of the low-energy CO2 backradiation upon the surface.

Further, the presence of IR-active gases in the atmosphere only delays the ultimate passage of IR photons from the surface to space by a few seconds, a delay easily reversed and erased during each 12-hour night, which explains why 'greenhouse gases' don't 'trap heat' in the atmosphere.

For these reasons, Tyndall's experiment does not in any way prove the Arrhenius radiative greenhouse theory.





Tyndall's Setup For Measuring Radiant Heat Absorption By Gases (source: Wikipedia)

This illustration dates from 1861 and it is taken from one of John Tyndall's presentations where he describes his setup for measuring the relative radiant-heat absorption of gases and vapors. The galvanometer quantifies the difference in temperature between the left and right sides of the thermopile. The reading on the galvanometer is settable to zero by moving the Heat Screen a bit closer or farther from the lefthand heat source. That is the only role for the heat source on the left. The heat source on the righthand side directs radiant heat into the long brass tube. The long brass tube is highly polished on the inside, which makes it a good reflector (and non-absorber) of the radiant heat inside the tube. Rock-salt (NaCl) is practically transparent to radiant heat, and so plugging the ends of the long brass tube with rock-salt plates allows radiant heat to move freely in and out at the tube endpoints, yet completely blocks the gas within from moving out. To begin the measurements, both heat sources are turned on, the long brass tube is evacuated as much as possible with an air suction pump, the galvanometer is set to zero, and then the gas under study is released into the long brass tube. The galvanometer is looked at again. The extent to which the galvanometer has changed from zero indicates the extent to which the gas has absorbed the radiant heat from the righthand heat source and blocked this heat from radiating to the thermopile through the tube. If a highly polished metal disc is placed in the space between the thermopile and the brass tube it will completely block the radiant heat coming out of the tube from reaching the thermopile, thereby deflecting the galvanometer by the maximum extent possible with respect to blockage in the tube. Thus the system has minimum and maximum readings available, and can express other readings in percentage terms. (The galvanometer's responsiveness was physically nonlinear, but well understood, and mathematically linearizable.)
In one of his public lectures to non-professional audiences Tyndall gave the following indication of instrument sensitivity: "My assistant stands several feet off. I turn the thermopile towards him. The heat from his face, even at this distance, produces a deflection of 90 degrees [on the galvanometer dial]. I turn the instrument towards a distant wall, judged to be a little below the average temperature of the room. The needle descends and passes to the other side of zero, declaring by this negative deflection that the pile feels the chill of the wall." (quote from Six Lectures On Light). To reduce interference from human bodies, the galvanometer was read through a telescope from across the room. The thermopile & galvanometer system was invented by Nobili and Melloni. Melloni measured radiant heat absorption in solids and liquids but didn't have the sensitivity for gases. Tyndall greatly improved the sensitivity of the overall setup (including putting an offsetting heat source on the other side of the thermopile, and putting the gas in a brass tube), and as a result of his superior apparatus he was able to confidently reach conclusions that were quite different from Melloni's concerning radiant heat in gases (book ref below, in chapter I). Air from which water vapor and carbon dioxide had been removed deflected the galvanometer dial by less than 1 degree, in other words a detectable but very small amount (same ref, chapter II). Many other gases and vapors deflected the galvanometer by a large amount -- thousands of times greater than air.
As a check on his system's reliability, Tyndall painted the inside walls of the brass tube with a strong absorber of radiant heat (namely lampblack). This greatly reduced the radiant heat that reached the thermopile when the tube was empty. Nevertheless the percentage absorptions by the different gases and vapors relative to the empty tube were largely and essentially unchanged by this change to the absorption property of the tube's walls. That's excluding a few gases and vapors such as chlorine that must be excluded because they tarnish brass, changing its heat reflectivity. As another test of the reliability of the system, the long brass tube was cut to about a quarter of its original length, and the exact same quantity of gas was released into the shorter tube. Thus the shorter tube will have about four times higher gas density. It was found that the percentage of radiant heat absorbed by or transmitted through the gas relative to the empty-tube state was entirely unchanged by this (even though the two tubes don't have equal empty-tube states). Varying the absolute quantity of the gas in the tube causes corresponding changes in the absorption percentages, but varying the density doesn't matter, nor does the absolute value of the empty-tube reference point.
The emission spectrum of the particular source of heat makes a difference -- sometimes a big difference -- in the amount of radiant heat a gas will absorb, and different gases can respond differently to a change in the source. Tyndall said in 1864, "a long series of experiments enables me to state that probably no two substances at a temperature of 100°C emit heat of the same quality [i.e. of the same spectral profile]. The heat emitted by isinglass, for example, is different from that emitted by lampblack, and the heat emitted by cloth, or paper, differs from both." Looking at an electrically-heated platinum wire, it is obvious to the human eye that the heat's spectral profile depends on whether the wire is heated to dull red, bright orange, or white hot. Some gases were relatively stronger absorbers of the dull-red platinum heat while other gases were relatively stronger absorbers of the white hot platinum heat, he found. For his original and primary benchmark in 1859, he used the heat from 100°C lampblack (akin to a theoretical "blackbody radiator"). Later he got some of his more interesting findings from using other heat sources. E.g., when the source of radiant heat was any one kind of gas, then this heat was strongly absorbed by another body of the same kind of gas, regardless of whether the gas was a weak absorber of broad-spectrum sources. In the illustration above, the radiant heat that is going into the brass tube comes from a pot of simmering water; the heat radiates from the exterior surface of the pot, not from the water, and not from the gas flame that keeps the water at a simmer. An alternative illustration with a modified setup taken from the same book (page 112) is shown below. The main difference is that the heat source is separated from the brass tube by open air, which eliminates the need for circulating cold water cooling at the interface between heat source and brass tube.

New paper "has profound implications for current mathematical climate models," convection, airflows, & effects of trees on climate

A new paper published in Hydrology and Earth System Sciences challenges the "consensus" view and finds the primary cause of atmospheric circulation and airflow is water vapor condensation and not buoyancy. According to the authors' laboratory experiments,
"The experimental results therefore provide evidence that condensation and not buoyancy is the major mechanism driving airflow, thus lending strong support to one of the main tenets of the [Biotic Pump Theory]."

The "Biotic Pump Theory"
"maintains that the primary motive force of atmospheric circulation derives from the intense condensation and sharp pressure reduction that is associated with regions where a high rate of evapotranspiration from natural closed-canopy forests provides the "fuel" for cloud formation. The net result of the "biotic pump" theory is that moist air flows from ocean to land, drawn in by the pressure changes associated with a high rate of condensation. "

The authors conclude,
"The general implications are that the great forests of the world play a fundamental role in air mass circulation through providing water vapour via evapotranspiration, and are therefore the “fuel” for a high rate of cloud condensation. Airflow circulation is the net result, bringing the rain to the deep interior of continents. The Biotic Pump theory suggests that the hydrological role of rainforests is by far their most important climatic contribution, and that large-scale deforestation may well be as detrimental in its consequences for the well-being of the planet as are greenhouse gas emissions. Indeed, it may be that in macro-climatological terms whether forests are net absorbers or emitters of greenhouse gases is relatively insignificant compared to their hydrological role."


Experimental evidence of condensation-driven airflow
P. Bunyard1, M. Hodnett2,a, G. Poveda3, J. D. Burgos Salcedo4, and C. Peña5
1IDEASA, Universidad Sergio Arboleda, Bogotá, Colombia
2Centre for Ecology & Hydrology, Wallingford, UK
3Department of Geosciences and Environment, Universidad Nacional de Colombia, Sede Medellín, Colombia
4Corporación para la Investigación y la Innovación – CIINAS, Bogotá, Colombia
5Facultad de Matemática, Universidad Sergio Arboleda, Bogotá, Colombia
a retired

Abstract. The dominant "convection" model of atmospheric circulation is based on the premise that hot air expands and rises, to be replaced by colder air, thereby creating horizontal surface winds. A recent theory put forward by Makarieva and Gorshkov (2007, 2013) maintains that the primary motive force of atmospheric circulation derives from the intense condensation and sharp pressure reduction that is associated with regions where a high rate of evapotranspiration from natural closed-canopy forests provides the "fuel" for cloud formation. The net result of the "biotic pump" theory is that moist air flows from ocean to land, drawn in by the pressure changes associated with a high rate of condensation. 

To test the physics underpinning the biotic pump theory, namely that condensation of water vapour, at a sufficiently high rate, results in a unidirectional airflow, a 5 m tall experimental apparatus was designed and built, in which a 20 m³ body of atmospheric air is enclosed inside an annular 14 m long space (a "square donut") around which it can circulate freely, allowing for rotary air flows. One vertical side of the apparatus contains some 17 m of copper refrigeration coils, which cause condensation. The apparatus contains a series of sensors measuring temperature, humidity and barometric pressure every five seconds, and air flow every second. 

The laws of Newtonian physics are used in calculating the rate of condensation inside the apparatus. The results of more than one hundred experiments show a highly significant correlation, with r² > 0.9, of airflow and the rate of condensation. The rotary air flows created appear to be consistent both in direction and velocity with the biotic pump hypothesis, the critical factor being the rate change in the partial pressure of water vapour in the enclosed body of atmospheric air. Air density changes, in terms of kinetic energy, are found to be orders of magnitude smaller than the kinetic energy of partial pressure change. 
The consistency of the laboratory experiments, in confirming the physics of the biotic pump, has profound implications for current mathematical climate models, not just in terms of predicting the consequences of widespread deforestation, but also for better understanding the atmospheric processes which lead to air mass convection.


Select excerpts:

Atmospheric convection, which leads to air mass circulation, is generally considered to result from the lower atmosphere acting as a heat engine, with the kinetic energy for convection deriving from differences in temperature, according to the general principle that hot air rises and cold air sinks. However, as Makarieva et al. (2013) point out, when hot air rises in the lower atmosphere it cools because of expansion and when the same, but now cooler, air sinks it heats up, such that the overall gain or loss in kinetic energy is zero. The same cooling and heating happens when air expands and forces air elsewhere to compress; there is no net energy gain to do work. In other words, a strict application of the first law of thermodynamics to the atmosphere would yield a rate of kinetic energy generation equal to zero. Instead, the same authors (2013) present the notion that the potential energy, derived from an outside source (the Sun), is stored in the evapotranspiration of water which, on condensing, converts into kinetic energy, and so drives the process of air mass convection. During daylight hours closed-canopy forests pump more than double the quantity of water vapour per square metre of surface compared to the ocean in the same latitude, the net result being that condensation in cloud-forming over the forest causes surface air to flow upwards, thereby generating low pressure at the surface which, in turn, establishes an ocean-to-land pressure gradient (Makarieva et al., 2013, 2014). By means of evapotranspiration, rainforests, whether in the equatorial tropics or in boreal regions during summer months, feed the lower atmosphere with water vapour, up to some 3 % of atmospheric pressure, and thereby provide the source material for cloud condensation. The partial pressure change, with the corresponding kinetic energy release, drives convection, according to the biotic pump theory (BPT). From that point of view, it is the hydrological cycle, including water evaporation and condensation, which drives convection and therefore the circulation of the air masses. That is in sharp contrast to the orthodox view of convection and air mass circulation, which explains the movement of the air mass through latitudinal differences, helped on by the release of latent heat. 

The proposition that a high rate of evapotranspiration from forested regions is a prime mover of major air mass convection has remained contentious. Meesters et al. (2009) rejected the BPT on the grounds that the ascending air motions induced by the evaporative/condensation force would rapidly restore hydrostatic equilibrium and thereby become extinguished. In reply Makarieva et al. (2009) pointed out that condensation removed water vapour molecules from the gas phase and reduced the weight of the air column. That removal must disturb hydrostatic equilibrium and make air circulate under the action of the evaporation/condensation force (Makarieva, 2009). The mass of an air column is equal to the number of air molecules in the column multiplied by their molecular masses. When the number of air molecules in the column is preserved, its weight remains unchanged and independent of density. Hence, heating of the air column does not change its weight. In contrast, condensation changes the number of gas molecules in the air column and instantaneously reduces the weight of the air column irrespective of the effects it might have on air density (Makarieva, 2009). In effect, the BPT states that the major physical cause of moisture fluxes is not the non-uniformity of atmospheric and surface heating, but that water vapour is invariably upward-directed as a result of the rarefaction of air from condensation (Makarieva, 2013). The BPT, therefore, maintains that the air pressure sustains its disequilibrium because of the reduction in total weight of the air column as condensation occurs, that being a continuous process as the ascending moist air cools. In fact, when the initial bulk air pressure in the lower atmospheric levels no longer equals the bulk weight of the air column, the initial hydrostatic equilibrium of air as a whole is disturbed and an accelerating upward motion is initiated in the air column. This upward motion of expanding and cooling moist air sustains the continuous process of condensation and does not allow the hydrostatic equilibrium of air as a whole to set in. The motion continues as long as there is water vapour in the rising air to sustain condensation (Makarieva, 2009). Within the concept of the biotic pump it is the physical mechanism of condensation which drives the upward airflow in the lower atmosphere by removing molecules from the air column, and thus generates the surface horizontal winds, such as the Trade Winds. 

Conclusion 

This paper describes a series of experiments, in a specially-designed structure, to test the physics of condensation and its potential to cause airflows. The results have provided data for a careful analysis of the physics involved, showing that condensation and the subsequent release of kinetic energy from the partial pressure change do indeed account for the observed airflow. The experimental results therefore provide evidence that condensation and not buoyancy is the major mechanism driving airflow, thus lending strong support to one of the main tenets of the BPT. The results are significant and unambiguous. At least at the laboratory scale, using only conventional physics, such as is employed in climatological studies, the primary force driving convection appears to be the kinetic energy released in the implosive events which take place during condensation, with a sudden reduction –  1200-fold – in the air volume of one gram-molecule of water vapour (18 g) as it transforms to liquid and ice. Air density changes are shown to be orders of magnitude less important in convection processes compared to partial pressure changes on condensation. The macro-physics of the experiment is not fundamentally different from that in the lower atmosphere at large. The same laws apply and are widely used by hydrologists, meteorologists and climatologists. Those opposed to the biotic pump theory should therefore re-consider their position and take into account that the physics underlying the theory may not only be correct, but that it operates in the atmosphere at large. 
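The paper's ~1200-fold figure is straightforward to reproduce to order of magnitude (my sketch, assuming near-surface conditions of 288K and 1 atm):

```python
# Volume collapse when one mole of water vapour (18 g) condenses to liquid.
R = 8.314        # J/(mol*K)
T = 288.0        # K, assumed near-surface temperature
P = 101325.0     # Pa, assumed surface pressure

V_vapour = R * T / P     # ideal-gas molar volume, ~0.024 m^3
V_liquid = 18e-6         # m^3, molar volume of liquid water (18 mL)

print(V_vapour / V_liquid)   # ~1300-fold, the same order as the quoted ~1200
```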

The general implications are that the great forests of the world play a fundamental role in air mass circulation through providing water vapour via evapotranspiration, and are therefore the “fuel” for a high rate of cloud condensation. Airflow circulation is the net result, bringing the rain to the deep interior of continents. The Biotic Pump theory suggests that the hydrological role of rainforests is by far their most important climatic contribution, and that large-scale deforestation may well be as detrimental in its consequences for the well-being of the planet as are greenhouse gas emissions. Indeed, it may be that in macro-climatological terms whether forests are net absorbers or emitters of greenhouse gases is relatively insignificant compared to their hydrological role. 

Thursday, October 22, 2015

New paper finds Gleissberg cycle of solar activity related to ocean oscillations, land temperatures, & extreme weather

A new paper published in Advances in Space Research, finds,

"The recent extended, deep minimum of solar variability and the extended minima in the 19th and 20th centuries (1810–1830 and 1900–1920) are consistent with minima of the Centennial Gleissberg Cycle (CGC), a 90–100 year variation of the amplitude of the 11-year sunspot cycle observed on the Sun and at the Earth. The Earth’s climate response to these prolonged low solar radiation inputs involves heat transfer to the deep ocean causing a time lag longer than a decade."

The authors find,
"The spatial pattern of the climate response [to the Gleissberg solar activity cycle]... is dominated by the Pacific North American pattern (PNA). The Gleissberg minima, sometimes coincidently in combination with volcanic forcing, are associated with severe weather extremes. Thus the 19th century Gleissberg minimum, which coexisted with volcanic eruptions, led to especially cold conditions in United States, Canada and Western Europe."
The paper shows clear evidence in the first graph below of a significant, sustained increase of Total Solar Irradiance (TSI) from 1700 to the late 20th century, coincident with the end of the Little Ice Age ~1850 and the global warming observed during the 20th century. 

The paper is coauthored by Joan Feynman (sister of the famous physicist Richard Feynman).


Total Solar Irradiance in top graph shows a significant increase of solar activity since 1700. Second wavelet graph shows periodicity (red areas) corresponding to the 90-100 year Gleissberg cycle of solar activity. Bottom graph shows smoothed Gleissberg cycles since 1700. 

Second graph solid line shows Total Solar Irradiance correlates with observed land temperatures (dashed line). 

The Earth’s climate at minima of Centennial Gleissberg Cycles


The recent extended, deep minimum of solar variability and the extended minima in the 19th and 20th centuries (1810–1830 and 1900–1920) are consistent with minima of the Centennial Gleissberg Cycle (CGC), a 90–100 year variation of the amplitude of the 11-year sunspot cycle observed on the Sun and at the Earth. The Earth’s climate response to these prolonged low solar radiation inputs involves heat transfer to the deep ocean causing a time lag longer than a decade. The spatial pattern of the climate response, which allows distinguishing the CGC forcing from other climate forcings, is dominated by the Pacific North American pattern (PNA). The CGC minima, sometimes coincidently in combination with volcanic forcing, are associated with severe weather extremes. Thus the 19th century CGC minimum, coexisted with volcanic eruptions, led to especially cold conditions in United States, Canada and Western Europe.


Related: 

New paper argues current lull in solar activity is consistent with a Gleissberg Cycle minimum

Tuesday, October 20, 2015

Jupiter emits 67% more radiation than it receives from the Sun -only explanation is the gravito-thermal greenhouse effect, not greenhouse gases

An article published at The Conversation asks "Is the Red Spot shrinking superstorm evidence of climate change on Jupiter?" and indeed finds that this and other observed changes are evidence of climate change (of unknown cause) on Jupiter. 

The article incidentally notes that,
"We do know that Jupiter emits 67% more radiation than it receives from the Sun. This is due to an internal heat source, which is thought to drive much of Jupiter\'s weather, including, presumably, the Great Red Spot. The heat likely is generated by the gradual contraction of matter under Jupiter's enormous gravity."
Warmists claim gravity cannot be the cause of any so-called "greenhouse effect" (or the "gravito-thermal greenhouse effect") on Earth, Jupiter, nor any other planet, yet overwhelming observational evidence for every planet in our solar system (with adequate observational data - 8 planets at this point) clearly demonstrates that surface and atmospheric temperatures are a sole function of gravity/mass/pressure and independent of greenhouse gas concentrations. 

In the case of Jupiter, a gas planet composed almost entirely of the non-IR-active, non-greenhouse gases hydrogen and helium, there is no solid planetary surface nor greenhouse gases to allegedly "trap" solar radiation, yet Jupiter has an "internal heat source" that causes a thermal enhancement ("gravito-thermal greenhouse effect") resulting in emission of 67% more radiation than it receives from the Sun. The only possible explanation of this is gravity, not radiative forcing from the Sun nor greenhouse gases, and hence the mass/pressure/gravity gravito-thermal greenhouse effect of Maxwell, Clausius, Carnot, Boltzmann, Helmholtz, Feynman, US Std Atmosphere, and the HS greenhouse equation, is corroborated on 9 planets.

Likewise, the ice planet Uranus has recently been observed to have storms at the top of the atmosphere radiating at blackbody temperatures hotter than required to melt steel. In addition, 
"the base of the troposphere on the planet Uranus is 320K, considerably hotter than on Earth [288K], despite being nearly 30 times further from the Sun. The base of the troposphere on Uranus is 320K at 100 bars pressure, despite the planet only receiving 3.71 W/m2 energy from the Sun. By the Stefan-Boltzmann Law, a 320K blackbody radiates 584.6 W/m2. This is 157.5 times the energy received from the Sun, due to the atmospheric temperature gradient produced within a planetary gravity field. The temperature at the base of the troposphere is determined by the ideal gas law PV=nRT, where pressure from gravity and atmospheric mass raise the temperature at the base of the troposphere from the equilibrium temperature with the Sun of Uranus of 89.94K to 320K, regardless of the atmospheric mixture of greenhouse gases."
Once again, the only possible explanation of both of these phenomena on Uranus is the Maxwell et al gravito-thermal greenhouse effect, thus bringing the number of planets for which very strong evidence exists to a total of ten. 

On Venus, we know from the NASA Fact Sheet:


Venus Atmosphere

Surface pressure: 92 bars = 92000 mbar 
Surface density: ~65 kg/m³ = 65000 g/m³
Scale height: 15.9 km
Total mass of atmosphere:  ~4.8 x 10²⁰ kg
Average temperature: 737 K (464 C)
Diurnal temperature range: ~0 
Wind speeds: 0.3 to 1.0 m/s (surface)
Mean molecular weight: 43.45 
Atmospheric composition (near surface, by volume): 
    Major:       96.5% Carbon Dioxide (CO2), 3.5% Nitrogen (N2) 
    Minor (ppm): Sulfur Dioxide (SO2) - 150; Argon (Ar) - 70; Water (H2O) - 20;
                 Carbon Monoxide (CO) - 17; Helium (He) - 12; Neon (Ne) - 7

We can easily calculate the gravito-thermal greenhouse effect surface temperature of Venus using the ideal gas law 

T = PV/nR = P/[(n/V)R] = 92000/[(65000/43.45) × 0.083144621] = 739K 

(P in mbar, molar density n/V = density/molecular weight in mol/m3, and R = 0.083144621 m3·mbar/(mol·K))

which is within 2K (or 2C) of the 737K observed by NASA as noted above, leaving essentially no room for any sort of Arrhenius radiative greenhouse effect on Venus. Note also, from the table below, that the blackbody temperature of Venus is 184.2K; therefore mass/gravity/pressure alone has thermally enhanced the surface temperature of Venus by a factor of

737K/184.2K = 4 times

Thus, the Arrhenius radiative greenhouse effect is falsified on the basis of observations and first physical principles, and the only possible alternative, the greenhouse theory of Maxwell et al, is confirmed. 
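
For readers who want to check the arithmetic, here is a minimal sketch (Python) of the Venus calculation above with the units made explicit; all inputs are the NASA fact sheet values listed above:

    # Venus surface temperature from the ideal gas law PV = nRT,
    # rearranged as T = P / ((n/V) * R). Inputs from the NASA fact sheet.
    P   = 92000.0      # surface pressure, mbar (= 92 bars)
    rho = 65000.0      # surface density, g/m^3 (= 65 kg/m^3)
    M   = 43.45        # mean molecular weight, g/mol
    R   = 0.083144621  # gas constant, m^3*mbar/(mol*K) (= L*bar/(mol*K))

    n_per_V = rho / M          # molar density, mol/m^3 (~1496)
    T = P / (n_per_V * R)      # ideal gas law
    print(round(T, 1))         # ~739.6 K vs. 737 K observed

    # Thermal enhancement over the 184.2 K blackbody temperature
    # (see the table below):
    print(round(T / 184.2, 2)) # ~4.0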


Bulk parameters Venus vs. Earth

                                   Venus          Earth     Ratio (Venus/Earth)
Mass (10^24 kg)                    4.8676         5.9726         0.815 
Volume (10^10 km3)                92.843        108.321          0.857
Equatorial radius (km)            6051.8         6378.1          0.949     
Polar radius (km)                  6051.8         6356.8          0.952
Volumetric mean radius (km)        6051.8         6371.0          0.950
Ellipticity (Flattening)            0.000          0.00335        0.0  
Mean density (kg/m3)               5243           5514            0.951 
Surface gravity (eq.) (m/s2)        8.87           9.80           0.905 
Surface acceleration (eq.) (m/s2)   8.87           9.78           0.907 
Escape velocity (km/s)             10.36          11.19           0.926
GM (x 10^6 km3/s2)                  0.3249         0.3986         0.815
Bond albedo                         0.90           0.306          2.94
Visual geometric albedo             0.67           0.367          1.83  
Visual magnitude V(1,0)            -4.40          -3.86             -
Solar irradiance (W/m2)            2613.9         1367.6          1.911
Black-body temperature (K)          184.2          254.3          0.724 
Topographic range (km)               15             20            0.750 
Moment of inertia (I/MR2)           0.33           0.3308         0.998
J2 (x 10^-6)                        4.458       1082.63           0.004  
Number of natural satellites          0              1
Planetary ring system                No             No
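
Incidentally, the black-body temperatures in the table are fully determined by two other rows, the solar irradiance S and the Bond albedo A, via the standard relation T = [(1-A)·S/(4σ)]^(1/4). A minimal check (Python):

    # Black-body (effective radiating) temperature from solar irradiance S
    # and Bond albedo A: T = ((1 - A) * S / (4 * sigma))**0.25
    sigma = 5.670374419e-8   # W/(m^2*K^4), Stefan-Boltzmann constant

    def t_bb(S, A):
        return ((1 - A) * S / (4 * sigma)) ** 0.25

    print(round(t_bb(2613.9, 0.90), 1))    # Venus: ~184.2 K (matches table)
    print(round(t_bb(1367.6, 0.306), 1))   # Earth: ~254.3 K (matches table)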


[Figure: Thermal enhancement or gravito-thermal greenhouse curve for 8 planets]

Is shrinking superstorm evidence of climate change on Jupiter?
Andrew Coates is Professor of Physics, Head of Planetary Science at the Mullard Space Science Laboratory, UCL. 
(CNN) It makes our most turbulent terrestrial storms look like mere pipsqueaks. But remarkable new Hubble footage shows that Jupiter's Great Red Spot -- an anticyclonic storm system twice the size of Earth -- is shrinking and turning orange. Is this evidence of Jovian climate change? And could the planet's violent storm finally be giving way to more clement conditions, at least by Jupiter's dramatic standards?
Jupiter, the largest planet in our solar system, is a gas giant dominated by hydrogen with some helium and smaller amounts of other gases, a mixture that resembles the composition of the early solar nebula and results in some staggeringly beautiful weather. The planet's cloud systems, which counter-rotate in zones and belts, with eastward and westward winds reaching 100 meters per second, are among the solar system's most spectacular sights and come in a blaze of different colors -- red due to ammonia, white due to ammonium hydrosulphide, and brown and blue due to additions to water ice.
A raging storm
But one of the most recognizable and persistent features of Jupiter's atmosphere is the Great Red Spot (GRS). Swirling around the planet's southern hemisphere, it covers a huge 10 degrees of latitude (2-3 times the size of Earth).
This vast anticyclonic (high pressure) storm system has been observed raging for perhaps 350 years -- the first likely observations were reported in 1664-1665 by Robert Hooke and Gian Domenico Cassini. It is cooler than its surroundings, rotates anticlockwise with a four to six day period, and is located between zonal winds moving at 100 meters per second.
The Great Red Spot's stability over such a long period of time is remarkable. A fluid instability would disappear in a few days to weeks, as in the case of the scars caused when several fragments of the comet Shoemaker-Levy 9 struck Jupiter in 1994 -- so an energy source must be powering it. Models have been suggested, but none fully explains the Great Red Spot: is it really a hurricane, a shear instability, an eddy or a solitary wave?
Inside the pressure cooker
We do know that Jupiter emits 67% more radiation than it receives from the Sun. This is due to an internal heat source, which is thought to drive much of Jupiter's weather, including, presumably, the Great Red Spot. The heat likely is generated by the gradual contraction of matter under Jupiter's enormous gravity. In the planet's deeper layers, for example, hydrogen enters a liquid metallic state and the pressure is 3 million atmospheres.
We also know that after years of relative stability, the Great Red Spot is now changing. Since 2012, Hubble observations as part of the Outer Planets Atmospheres Legacy (OPAL) program have shown that the spot has been shrinking -- and that the rate of shrinkage has increased in recent years. The latest measurements, published by Amy Simon and colleagues, show a further reduction of 240 km, although this rate of shrinkage is less than in preceding years, and there are not enough observations yet to know if this is a periodic feature as seen with Neptune's great dark spot.
It is not just a matter of size, however. The Hubble results also show that the spot\'s shape is continuing its evolution from oval to circular, and that a new wispy filament, spiralling inwards and driven by winds of at least 150 meters per second, has developed within the Great Red Spot. The core region has also been shrinking, consistent with the overall trend, and is also becoming less distinct. It is also now deep orange in color.
Jovian climate change
There are other changes in the Jovian atmosphere, too. The Hubble observations show a new wave structure about 16 degrees north of Jupiter's equator, in a region of cyclones and anticyclones. It is similar to the only previous observation of such a structure by Voyager 2 in 1979 and may herald the birth of a new cyclone there.
It's clear that Jupiter's atmosphere is changing, and the Great Red Spot is evolving. The question is: why? Is the Great Red Spot fizzling out, or oscillating over time?
The jury is still out, but continued observations by the annual OPAL campaign, combined with in-situ measurements of the atmospheric dynamics and interior structure, may yet reveal intriguing new clues. The JUNO polar orbiter will also reach Jupiter in July next year and doubtless offer answers of its own.
Jupiter's mysterious Great Red Spot may be shrinking, then, but the world will be talking about Jupiter's weather for a good while yet.

New paper explains the ~1,500 year climate cycle on the basis of astronomical variables, not CO2

A potentially important paper published today in Climate of the Past Discussions finds the well-known ~1500 year cycle of "abrupt climate change" can be explained on the basis of astronomical variables that create a "high-frequency extension of the Milankovitch precessional cycle." 

According to the authors,
"The existence of a ~ 1470 year cycle of abrupt climate change is well-established, manifesting in Bond ice-rafting debris (IRD) events, Dansgaard–Oeschger atmospheric temperature cycle, and cyclical climatic conditions precursory to increased El Niño/Southern Oscillation (ENSO) variability and intensity. This cycle is central to questions on Holocene climate stability and hence anthropogenic impacts on climate. To date no causal mechanism has been identified, although solar forcing has been previously suggested."

"Here we show that interacting combination of astronomical variables related to Earth's orbit may be causally related to this cycle and several associated key isotopic spectral signals. The ~ 1470 year climate cycle may thus be regarded as a high frequency extension of the Milankovitch precessional cycle, incorporating orbital, solar and lunar forcing through interaction with the tropical and anomalistic years and Earth's rotation."
Warmists claim that the current warm period is not explainable on the basis of solar activity or astronomical variables, but this paper and many others suggest otherwise: the current warm period is entirely explainable as a result of natural variability, in which anthropogenic CO2 plays little to no role.

An astronomical correspondence to the 1470 year cycle of abrupt climate change
A. M. Kelsey1, F. W. Menk2, and P. T. Moss1
1School of Geography, Planning and Environmental Management, The University of Queensland, St Lucia, QLD, 4072, Australia
2Centre for Space Physics, School of Mathematical and Physical Sciences, Faculty of Science and Information Technology, University of Newcastle, Callaghan, NSW, 2308, Australia

Abstract. The existence of a ~ 1470 year cycle of abrupt climate change is well-established, manifesting in Bond ice-rafting debris (IRD) events, Dansgaard–Oeschger atmospheric temperature cycle, and cyclical climatic conditions precursory to increased El Niño/Southern Oscillation (ENSO) variability and intensity. This cycle is central to questions on Holocene climate stability and hence anthropogenic impacts on climate (deMenocal et al., 2000). To date no causal mechanism has been identified, although solar forcing has been previously suggested. Here we show that interacting combination of astronomical variables related to Earth's orbit may be causally related to this cycle and several associated key isotopic spectral signals. The ~ 1470 year climate cycle may thus be regarded as a high frequency extension of the Milankovitch precessional cycle, incorporating orbital, solar and lunar forcing through interaction with the tropical and anomalistic years and Earth's rotation.

Related:


The Physical Evidence of Earth's Unstoppable 1,500-Year Climate Cycle

Friday, September 30, 2005
by S. Fred Singer & Dennis T. Avery