Monday, July 14, 2014

Analysis: Solar activity & ocean cycles are the 2 primary drivers of climate, not CO2

Dan Pangburn has updated his analysis identifying the two primary drivers of global temperature:

1) the integral of solar activity 
2) ocean oscillations [which are in-turn driven by solar activity and perhaps lunar-tidal forcing]. 

The combination of the integral of solar activity and ocean cycles explains 90.49% of the variance in global temperature (R2 = 0.9049), and adding CO2 improves this only very slightly, to 90.61%, demonstrating that CO2 change has no significant effect on climate. CO2 alone correlates poorly with global temperature, explaining only about 44%, not to mention that CO2 follows temperature on short, intermediate, and long-term timescales, and an effect cannot precede its cause. 

Comment by Dan Pangburn:

Two primary drivers of average global temperature have been identified. They very accurately explain the reported up and down measurements since before 1900 with coefficient of determination, R2>0.9 (correlation coefficient = 0.95) and provide credible estimates back to the low temperatures of the Little Ice Age (1610). 

R2 = 0.9049 considering only sunspots and ocean cycles.
R2 = 0.9061 considering sunspots, ocean cycles and CO2 change.

The tiny difference in R2, whether considering CO2 or not, demonstrates that CO2 change has no significant effect on climate.

The coefficient of determination is a measure of how accurately the calculated average global temperatures match the measured values. R2 > 0.9 is excellent.
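For readers unfamiliar with the statistic, the sketch below shows how a coefficient of determination is computed from a measured and a calculated series, and how the quoted correlation coefficient of 0.95 follows from R2 = 0.9049. The anomaly values are invented placeholders, not the paper's data:

```python
import math

def r_squared(measured, calculated):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(measured) / len(measured)
    ss_tot = sum((m - mean) ** 2 for m in measured)
    ss_res = sum((m - c) ** 2 for m, c in zip(measured, calculated))
    return 1.0 - ss_res / ss_tot

# Toy anomaly series in kelvin (hypothetical, for illustration only).
measured = [-0.30, -0.10, 0.00, 0.20, 0.45, 0.55]
calculated = [-0.28, -0.14, 0.05, 0.18, 0.40, 0.57]

r2 = r_squared(measured, calculated)

# The paper's R2 = 0.9049 corresponds to a correlation
# coefficient of sqrt(0.9049), i.e. about 0.95.
corr = math.sqrt(0.9049)
```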

The calculations use data since before 1900 which are official, accepted as valid and are publicly available. 

Neither solar cycle duration nor magnitude alone correlates with temperature, but their combination, expressed as the time-integral of solar cycle anomalies, gives an excellent correlation. A solar cycle anomaly is the difference between the sunspot number for a year and an average sunspot number taken over many years. 

Everything not explicitly considered (such as the 0.09 K s.d. random uncertainty in reported annual measured temperature anomalies, aerosols, CO2, other non-condensing ghg, volcanoes, ice change, etc.) must find room in the unexplained 9.51%.

The method, equation, data sources, history and predictions are provided at http://agwunveiled.blogspot.com and references.


Introduction
This monograph is a clarification and further refinement of Reference 10 (references are listed at the end of this paper) which also considers only average global temperature. It does not discuss weather, which is a complex study of energy moving about the planet. It does not even address local climate, which includes precipitation. It does, however, consider the issue of Global Warming and the mistaken perception that human activity has a significant influence on it.

The word ‘trend’ is used here for temperatures in two different contexts. To differentiate, α-trend applies to averaging-out the uncertainties in reported average global temperature measurements to produce the average global temperature oscillation resulting from the net ocean surface oscillation. The term β-trend applies to the slower average temperature change of the planet which is associated with change to the temperature of the bulk volume of the material (mostly water) involved.

The first paper to suggest the hypothesis that the sunspot number time-integral is a proxy for a substantial driver of average global temperature change was made public 6/1/2009. The discovery started with application of the first law of thermodynamics, conservation of energy, and the hypothesis that the energy acquired, above or below break-even (appropriately accounting for energy radiated from the planet), is proportional to the time-integral of sunspot numbers. The derived equation revealed a rapid and sustained global energy rise starting in about 1941. The true average global temperature anomaly change β-trend is proportional to global energy change.

Subsequent analysis revealed that the significant factor in calculating the β-trend is the sunspot number anomaly time-integral. The sunspot number anomaly is defined as the difference between the sunspot number in a specific year and an average sunspot number for several years.
Measured temperature anomaly α-trends oscillate above and below the temperature anomaly β-trend calculated using only the sunspot number time-integral. The existence of ocean oscillations, especially the Pacific Decadal Oscillation, led to the perception that there must be an effective net surface temperature oscillation for the planet with all named and unnamed ocean oscillations as participants. Plots of measured average global temperatures indicate that the net surface temperature oscillation has a period of 64 years with the most recent maximum in 2005.

Combining the two effects shows why the trend of reported average global temperatures declined slightly during 1941-1973: the downswing of the ocean surface temperature oscillation (α-trend) was slightly stronger than the rapid rise driven by sunspots (β-trend). The steep rise of 1973-2005 occurred because the two effects added. A high coefficient of determination, R2, demonstrates that the hypothesis is true.

Over the years, several refinements to this work (often resulting from others' comments, which may or may not have been corroborative) slightly improved the accuracy and led to the equations and figures in this paper.

Prior work
The law of conservation of energy is applied effectively the same as described in Reference 2 in the development of a very similar equation that calculates temperature anomalies. The difference is that the variation in energy ‘OUT’ has been found to be adequately accounted for by variation of the sunspot number anomalies. Thus the influence of the factor [T(i)/Tavg]4 is eliminated.

Change to the level of atmospheric carbon dioxide has no significant effect on average global temperature. This was demonstrated in 2008 at Reference 6 and is corroborated at Reference 2 and again here.

As determined in Reference 3, reported average global temperature anomaly measurements have a random uncertainty with equivalent standard deviation ≈ 0.09 K.

Global Warming ended more than a decade ago as shown here, and in Reference 4 and also Reference 2.

Average global temperature is very sensitive to cloud change as shown in Reference 5.

The parameter for average sunspot number was 43.97 (average 1850-1940) in Ref. 1, 42 (average 1895-1940) in Ref. 9, and 40 (average 1610-2012) in Ref. 10. It is set at 34 (average 1610-1940) in this paper. This progression of values for the average sunspot number produces a slight but steady improvement in R2 for the period of measured temperatures and progressively greater credibility of the average global temperature estimates for the period before direct measurements became available.

Initial work is presented in several papers at http://climaterealists.com/index.php?tid=145&linkbox=true

The sunspot number anomaly time-integral drives the temperature anomaly trend
It is axiomatic that change to the average temperature trend of the planet is due to change to the net energy retained by the planet.

Table 1 in Reference 2 shows the influence of atmospheric carbon dioxide (CO2) to be insignificant (a tiny change in R2 whether CO2 is considered or not), so it can be removed from the equation by setting coefficient ‘C’ to zero. With ‘C’ set to zero, Equation 1 in Reference 2 explains 89.82% of the variance in average global temperature anomalies (AGT) since 1895 (R2 = 0.898220).

The current analysis determined that 34, the approximate average of sunspot numbers from 1610-1940, provides a slightly better fit (in fact, the best fit) to the measured temperature data than did 43.97 and other values [9,10]. The influence on energy change of Stefan-Boltzmann radiation change due to AGT change is adequately accounted for by the sunspot number anomaly time-integral. With these refinements to Equation (1) in Reference 2 the coefficients become A = 0.3588, B = 0.003461 and D = -0.4485. R2 increases slightly to 0.904906 and the calculated anomaly in 2005 is 0.5045 K. These refinements also yield lower calculated early temperature anomalies and slightly higher projected future anomalies (0.3175 K vs. 0.269 K in 2020). The resulting equation for calculating the AGT anomaly for any year, 1895 or later, is then:

Anom(y) = (0.3588, y) + (0.003461/17) * Σ (i = 1895 to y) [s(i) – 34] – 0.4485

Where:
            Anom(y) = calculated temperature anomaly in year y, K
            (0.3588,y) = approximate contribution of ocean cycle effect to AGT in year y
            s(i) = average daily Brussels International sunspot number in year i

Measured temperature anomalies are from Figure 2 of Reference 3. The excellent match of the up and down trends since before 1900 of calculated and measured anomalies, shown here in Figure 1, demonstrates the usefulness and validity of the calculations.

Projections until 2020 use the expected sunspot number trend for the remainder of solar cycle 24 as provided by NASA [11]. After 2020 the limiting cases are either sunspots like those from 1925 to 1941, or no sunspots, which is similar to the Maunder Minimum.

Some noteworthy volcanoes and the year they occurred are also shown on Figure 1. No consistent AGT response is observed to be associated with these. Any global temperature perturbation that might have been caused by volcanoes of this size is lost in the temperature measurement uncertainty. Much larger volcanoes can cause significant temporary global cooling from the added reflectivity of aerosols and airborne particulates. The Tambora eruption, which started on April 10, 1815 and continued to erupt for at least 6 months, was approximately ten times the magnitude of the next largest in recorded history and led to 1816 which has been referred to as ‘the year without a summer’. The cooling effect of that volcano exacerbated the already cool temperatures associated with the Dalton Minimum.
 
 Figure 1: Measured average global temperature anomalies with calculated prior and future trends using 34 as the average daily sunspot number.

As discussed in Reference 2, ocean oscillations produce oscillations of the ocean surface temperature with no significant change to the average temperature of the bulk volume of water involved. The effect on AGT of the full range of surface temperature oscillation is given by the coefficient ‘A’.

The influence of ocean surface temperature oscillations can be removed from the equation by setting ‘A’ to zero. To use all regularly recorded sunspot numbers, the integration starts in 1610. The offset ‘D’ must be changed to -0.1993 to account for the different integration start point and for setting ‘A’ to zero. Setting ‘A’ to zero requires that the anomaly in 2005 be 0.5045 - 0.3588/2 = 0.3251 K. The result, Equation (1) here, then calculates the trend 1610-2012 resulting from just the sunspot number time-integral.

Trend3anom(y) = (0.003461/17) * Σ (i = 1610 to y) [s(i) – 34] – 0.1993                (1)

Where:
Trend3anom(y) = calculated temperature anomaly β-trend in year y, K degrees.
0.003461 = the proxy factor, B, W yr m-2.
17 = effective thermal capacitance of the planet, W yr m-2 K-1.
s(i) = average daily Brussels International sunspot number in year i
34 ≈ average sunspot number for 1610-1940.
-0.1993 is merely an offset that shifts the calculated trajectory vertically on the graph, without changing its shape, so that the calculated temperature anomaly in 2005 is 0.3251 K which is the calculated anomaly for 2005 if the ocean oscillation is not included.
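Equation (1) is simple to evaluate numerically. A minimal sketch follows; the dictionary of annual sunspot numbers is a flat placeholder series (so the integral term vanishes and only the offset remains), not the actual Brussels record, and the function name is mine:

```python
B = 0.003461       # proxy factor from the paper
TAU = 17.0         # effective thermal capacitance, W yr m-2 K-1
S_AVG = 34.0       # average sunspot number, 1610-1940
OFFSET = -0.1993   # vertical offset so the 2005 anomaly is 0.3251 K

def trend3_anom(year, ssn):
    """Equation (1): temperature anomaly beta-trend in `year` from the
    sunspot number anomaly time-integral starting in 1610."""
    integral = sum(ssn[i] - S_AVG for i in range(1610, year + 1))
    return B / TAU * integral + OFFSET

# Placeholder record held at the long-term average: the integral term
# is zero and the calculated anomaly is just the offset.
ssn = {year: 34.0 for year in range(1610, 2013)}
anomaly_2012 = trend3_anom(2012, ssn)
```

Feeding in years of above-average sunspot numbers raises the integral, and hence the calculated β-trend, which is the behavior the paper describes.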

Sunspot numbers back to 1610 are shown in Figure 2 of Reference 1.

Applying Equation (1) to the sunspot numbers of Figure 2 of Reference 1 produces the trace shown in Figure 2 below.

Figure 2: Anomaly trend from just the sunspot number time-integral using Equation (1).

Average global temperatures were not directly measured in 1610 (thermometers had not been invented yet). Recent estimates, using proxies, are few. The anomaly trend that Equation (1) calculates for that time is roughly consistent with other estimates. The decline in the trace 1610-1700 on Figure 2 results from the low sunspot numbers for that period as shown on Figure 2 of Reference 1. 

How this phenomenon could take place
Although the connection between AGT and the sunspot number time-integral is demonstrated, the mechanism by which this takes place remains somewhat theoretical.

Various papers have been written that indicate how the solar magnetic field associated with sunspots can influence climate on earth. These papers posit that decreased sunspots are associated with decreased solar magnetic field which decreases the deflection of and therefore increases the flow of galactic cosmic rays on earth.

Henrik Svensmark, a Danish physicist, found that decreased galactic cosmic rays caused decreased low-level (< 3 km) clouds and planet warming. An abstract of his 2000 paper is at Reference 13. Marsden and Lingenfelter also report this in the summary of their 2003 paper [14], where they state “…solar activity increases…providing more shielding…less low-level cloud cover… increase surface air temperature.”  These findings have been further corroborated by the cloud nucleation experiments at CERN [15].

These papers associated the increased low-level clouds with increased albedo leading to lower temperatures. Increased low clouds would also result in lower average cloud altitude and therefore higher average cloud temperature. Although clouds are commonly acknowledged to increase albedo, they also radiate energy to space so increasing their temperature increases radiation to space which would cause the planet to cool. Increased albedo reduces the energy received by the planet and increased radiation to space reduces the energy of the planet. Thus the two effects work together to change the AGT of the planet.

Simple analyses [5] indicate that either an increase of approximately 186 meters in average cloud altitude or a decrease of average albedo from 0.3 to the very slightly reduced value of 0.2928 would account for all of the 20th century increase in AGT of 0.74 °C. Because the cloud effects work together and part of the temperature change is due to ocean oscillation, substantially less cloud change is needed.
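As a rough plausibility check (this is a zero-dimensional radiative-balance sketch, not the actual analysis of Reference 5), an albedo change from 0.3 to 0.2928 shifts the planet's effective radiating temperature by several tenths of a kelvin, the same order as the quoted 0.74 °C; the solar-irradiance value is a standard approximation:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4
S0 = 1361.0      # total solar irradiance, W m-2 (approximate)

def effective_temp(albedo):
    """Effective radiating temperature of a planet in simple
    zero-dimensional radiative balance."""
    return ((1.0 - albedo) * S0 / (4.0 * SIGMA)) ** 0.25

# Warming implied by reducing average albedo from 0.3 to 0.2928.
dT = effective_temp(0.2928) - effective_temp(0.3)
```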


Combined Sunspot Effect and Ocean Oscillation Effect
As a working assumption, the period and amplitude of the oscillations attributed to ocean cycles, demonstrated to be valid after 1895, are assumed to hold back to 1610. Equation (1) is modified as shown in Equation (2) to include the effects of ocean oscillations. Since the expression for the oscillations calculates values ranging from zero to the full amplitude, but the oscillations must be centered on zero, half the oscillation range is subtracted.

Trend4anom(y) = (0.3588, y) – 0.1794 + (0.003461/17) * Σ (i = 1610 to y) [s(i) – 34] – 0.1993   (2)

The ocean oscillation factor, (0.3588,y) – 0.1794, is applied to the period prior to the start of temperature measurements as a possibility. The effective sea surface temperature anomaly, (A,y), is defined in Reference 2.

Applying Equation (2) to the sunspot numbers from Figure 2 of Reference 1 produces the trend shown in Figure 3 below. Available measured average global temperatures from Figure 2 in Reference 3 are superimposed on the calculated values.
 
 Figure 3: Calculated temperature anomalies from the sunspot number anomaly time-integral plus ocean oscillation using Equation (2) with superimposed available measured data from Reference 3 and range estimates determined by Loehle.

Figure 3 shows that temperature anomalies calculated using Equation (2) estimate possible trends since 1610 and actual trends of reported temperatures since they have been accurately measured worldwide. The match from 1895 on has R2 = 0.9049, which means that 90.49% of the variance in measured average global temperature anomalies is explained. All factors not explicitly considered must find room in the unexplained 9.51%. Note that a coefficient of determination of R2 = 0.9049 corresponds to a correlation coefficient of 0.95.

A survey [12] of non-tree-ring global temperature estimates was conducted by Loehle, including some for a period after 1610. A simplification of the 95% limits found by Loehle is also shown on Figure 3. The spread between the upper and lower 95% limits is fixed, but, since the anomaly reference temperatures might be different, the limits are adjusted vertically to approximately bracket the values calculated using the equations. The fit appears reasonable considering the uncertainty in all values.

Calculated temperature anomalies look reasonable back to 1700 but indicate higher temperatures prior to that than most proxy estimates. They are, however, consistent with the low sunspot numbers in that period. They qualitatively agree with Vostok, Antarctica ice core data but decidedly differ from Sargasso Sea estimates during that time (see the graph for the last 1000 years in Reference 6). Credible worldwide assessments of average global temperature that far back are sparse. Ocean oscillations might also have been different from those assumed.

Possible lower values for average sunspot number
Possible lower assumed values for average sunspot number, with coefficients adjusted to maximize R2, result in noticeably lower estimates of early (prior to direct measurement) temperatures with only a tiny decrease in R2. Calculated temperature anomalies resulting from using an average sunspot number value of 26 are shown in Figure 4. The projected temperature anomaly trend decline is slightly less steep (0.018 K warmer in 2020) than was shown in Figure 1.

Figure 4: Calculated temperature anomalies from the sunspot number anomaly time-integral plus ocean oscillation using 26 as the average sunspot number with superimposed available measured data from Reference 3 and range estimates determined by Loehle.

Carbon dioxide change has no significant influence
The influence that CO2 has on AGT can be calculated by including ‘C’ in Equation (1) of Reference 2 as a coefficient to be determined. The tiny increase in R2 demonstrates that consideration of change to the CO2 level has no significant influence on AGT. The coefficients and resulting R2 are given in Table 1.

Table 1: A, B, C, D refer to coefficients in Equation 1 in Reference 2. The last three columns give the percent cause of 1909-2005 AGT change.

| Average daily SSN | Ocean oscillation A | Sunspots B | CO2 C | Offset D | R2 | Sunspots % | Ocean oscillation % | CO2 change % |
|---|---|---|---|---|---|---|---|---|
| 26 | 0.3416 | 0.002787 | 0 | -0.4746 | 0.903488 | 63.8 | 36.2 | 0 |
| 32 | 0.3537 | 0.003265 | 0 | -0.4562 | 0.904779 | 62.7 | 37.3 | 0 |
| 34 | 0.3588 | 0.003461 | 0 | -0.4485 | 0.904906 | 62.2 | 37.8 | 0 |
| 36 | 0.3642 | 0.003680 | 0 | -0.4395 | 0.904765 | 61.7 | 38.3 | 0 |
| 34 | 0.3368 | 0.002898 | 0.214 | -0.4393 | 0.906070 | 52.3 | 35.6 | 12.1 |

Further discussion of ocean cycles
The temperature contribution to AGT of ocean cycles is approximated by a function that has a saw-tooth trajectory profile. It is represented in equation (2) by (A,y) where A is the total amplitude and y is the year. The uptrends and down trends are each determined to be 32 years long for a total period of 64 years. The total amplitude resulting from ocean oscillations was found here to be 0.3588 K (case highlighted in Table 1).

Thus, for an ocean cycle surface temperature uptrend, the contribution of ocean oscillations to AGT is approximated by adding (to the value calculated from the sunspot integral) 0.3588 multiplied by the fraction of the 32 year period that has elapsed since a low. For an ocean cycle surface temperature down trend, the contribution is calculated by adding 0.3588 minus 0.3588 multiplied by the fraction of the 32 year period that has elapsed since a high. The lows were found to be in 1909 and 1973 and the highs in 1941 and 2005. The resulting trajectory, offset by half the amplitude, is shown as ‘approximation’ in Figure 5.
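The saw-tooth described above is easy to state in code. The sketch below is a minimal implementation of the (A, y) profile with A = 0.3588 K, 32-year ramps, lows in 1909 and 1973, highs in 1941 and 2005, and an offset of half the amplitude so the trajectory is centered on zero (the function and variable names are mine, not the paper's):

```python
AMPLITUDE = 0.3588  # total ocean-oscillation amplitude A, K (Table 1)
HALF_PERIOD = 32    # years from a low to a high (full period 64 years)

def ocean_cycle(year):
    """Saw-tooth approximation (A, y) of the ocean-cycle contribution
    to AGT, centered on zero. Lows: 1909, 1973. Highs: 1941, 2005."""
    phase = (year - 1909) % (2 * HALF_PERIOD)
    if phase <= HALF_PERIOD:
        # Uptrend: rise from a low toward a high.
        value = AMPLITUDE * phase / HALF_PERIOD
    else:
        # Downtrend: fall from a high toward the next low.
        value = AMPLITUDE * (1.0 - (phase - HALF_PERIOD) / HALF_PERIOD)
    return value - AMPLITUDE / 2.0
```

By construction the function returns -0.1794 K at the 1909 and 1973 lows and +0.1794 K at the 1941 and 2005 highs.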

Temperature data are available for three named cycles: PDO, ENSO 3.4 and AMO. Successful accounting for oscillations is achieved for PDO and ENSO when considering these as forcings (with appropriate proxy factors) instead of direct measurements. As forcings, their influence accumulates with time. The proxy factors must be determined separately for each forcing. The measurements are available since 1900 for PDO [16] and ENSO 3.4 [17]. This PDO data set has the PDO temperature measurements reduced by the average SST measurements for the planet.

The contribution of PDO and ENSO3.4 to AGT is calculated by:
PDO_NINO(y) = Σ (i = 1900 to y) [0.017 * PDO(i) + 0.009 * ENSO34(i)]        (3)

Where:
            PDO(i) = PDO index [16] in year i
            ENSO34(i) = ENSO 3.4 index [17] in year i
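Equation (3) accumulates the two indices as forcings. A minimal sketch follows, using invented index values in place of the actual JISAO PDO and NINO 3.4 records; the function name is mine:

```python
def pdo_nino(year, pdo, enso34):
    """Equation (3): accumulated contribution of the PDO and ENSO 3.4
    indices, treated as forcings and summed from 1900 through `year`."""
    return sum(0.017 * pdo[i] + 0.009 * enso34[i]
               for i in range(1900, year + 1))

# Placeholder index series (invented values, not the real records).
years = range(1900, 1906)
pdo = {y: 0.5 for y in years}
enso34 = {y: -0.2 for y in years}
contribution_1905 = pdo_nino(1905, pdo, enso34)
```

Because the indices enter as forcings, a persistently positive index keeps adding to the calculated contribution year after year rather than acting as an instantaneous temperature offset.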


How this calculation compares to the idealized approximation used in Equation (2) is shown in Figure 5. The high coefficient of determination in Table 1 and the comparison in Figure 5 corroborate the assumption that the saw-tooth profile provides an adequate approximation of the influence of all named and unnamed ocean cycles in the calculated AGT anomalies.
Figure 5: Comparison of idealized approximation of ocean cycle effect and the calculated effect from PDO and ENSO.

Conclusions
Others who have looked at only amplitude or only duration factors for solar cycles got poor correlations with average global temperature. The good correlation comes from combining the two, which is what the time-integral of sunspot numbers does. As shown in Figure 2, the anomaly trend determined using the sunspot number time-integral has experienced substantial change over the recorded period. Prediction of future sunspot numbers more than a decade or so ahead has not yet been done with confidence, although assessments using planetary synodic periods appear to be relevant [7,8].

As displayed in Figure 2, the time-integral of sunspot numbers alone appears to show the estimated true average global temperature trend (the net average global energy trend) during the planet's warm-up from the depths of the Little Ice Age.

The net effect of ocean oscillations is to cause the surface temperature trend to oscillate above and below the trend calculated using only the sunspot number time-integral. Equation (2) accounts for both and also, because it matches measurements so well, shows that rational change to the level of atmospheric carbon dioxide can have no significant influence.

Long term prediction of average global temperatures depends primarily on long term prediction of sunspot numbers.


References:
11. Graphical sunspot number prediction for the remainder of solar cycle 24: http://solarscience.msfc.nasa.gov/predict.shtml
12. http://www.econ.ohio-state.edu/jhm/AGW/Loehle/Loehle_McC_E&E_2008.pdf
13. Svensmark, Phys. Rev. Lett. 85, 5004-5007 (2000): http://prl.aps.org/abstract/PRL/v85/i23/p5004_1
14. Marsden & Lingenfelter 2003, Journal of the Atmospheric Sciences 60: 626-636: http://www.co2science.org/articles/V6/N16/C1.php
15. CLOUD experiment at CERN: http://indico.cern.ch/event/197799/session/9/contribution/42/material/slides/0.pdf
16. PDO index: http://jisao.washington.edu/pdo/PDO.latest
(Linked from http://www.cgd.ucar.edu/cas/catalog/climind/TNI_N34/ )

21 comments:

  1. Just a cautionary point. The integral SSN method has a problem in that it is developing a multiplication factor to fit that significant variable to the temperature record. Effectively you are doing a type of multiple regression. That has its points, but is vulnerable when you have covarying variables like CO2.

    That is why I use the previous solar cycle length correlation to temperature because it is an independent variable which does not require a multiplication factor to be evolved. So when you use that, and use the empirical amplitude of ocean cycles on temperature you get a residual which fits the 5th entry in the table not the third.

    That suggests a statistical significance to the effect of pCO2, although because it is a residual there can be other contributors in both directions - volcanoes, aerosols, UHIE - which are all contained in the residual. However these seem to cancel out, since the size of the residual, after you deduct the solar component and the ocean cycles component, is right on Lindzen's TCR value, which suggests both TCR and ECS are about 0.7 C/doubling, i.e. not zero.

    This is pretty moot though since such a low ECS is completely harmless. However for credibility of Dan's analysis it is unwise to conclude CO2 has zero effect. Because it does have one when you avoid the statistical quirks of the MR approach.

  2. I had a similar thought back at the beginning of March, although no one voiced any positive response to the concept. I have made one specific forecast that turned out correct using my method {that the positive ENSO would retreat in June}. Now it is up in the air to see if the next leg of the forecast happens as said. Otherwise, I will have learned another lesson about predictions.

  3. Dan Pangburn said...

    ‘Other molecules’ outnumber CO2 molecules by approximately 2500 to 1.

    When a molecule of CO2 absorbs a photon of terrestrial EMR it immediately (less than 0.1 microsecond, hyperphysics calc is about 0.1 nanosecond) bumps into other molecules handing off the added energy in a process called thermalization (some spell it thermalisation). Once it has handed off the energy it cannot emit a photon.

    Thus the only influence that added CO2 can have is to cause the absorption/thermalization process to move slightly closer to the emitting surface.

    Why isn’t thermalization (or thermalisation) mentioned in IPCC reports?

    Two natural drivers have been identified that explain measured average global temperatures since before 1900 with R^2 > 0.9 (correlation coefficient ≈ 0.95) and credible values back to 1610. Global Warming ended before 2001 http://endofgw.blogspot.com/. The current trend (from a graph for longer than a century) is down.

    The method, equation, data sources, history (hind cast to 1610) and predictions (to 2037) are provided at http://agwunveiled.blogspot.com and references.

    Replies
    1. "‘Other molecules’ outnumber CO2 molecules by approximately 2500 to 1."

      That's correct. But let's do some maths on it...

      Firstly how thick is the glass in a real greenhouse (one used to grow plants)?

      My guess is 3-4mm. So a 3-4mm surround of a material demonstrating a greenhouse effect is enough to warm a space by a few degrees or more.

      Second, if the CO2 in the atmosphere were a solid, glassy layer (it would be dry ice strictly) roughly how thick would it be?

      Well, air pressure is equivalent to the pressure from about 32 feet (roughly 10 m) of water or ice, and CO2 is 1/2,500th of that: 10 m / 2,500 = 4 mm. So the CO2 layer would be approximately 4 mm thick - about the same as the thickness of glass in a greenhouse.
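The commenter's arithmetic can be reproduced directly as order-of-magnitude reasoning (note it treats the condensed layer as having the density of water; dry ice is roughly 1.5 times denser, which would thin the layer somewhat):

```python
ATMOS_WATER_EQUIV_M = 10.0   # air pressure ~ pressure of 10 m of water
CO2_FRACTION = 1.0 / 2500.0  # CO2 molecules per air molecule (approx.)

# Thickness of the atmosphere's CO2 if condensed into a single layer.
layer_m = ATMOS_WATER_EQUIV_M * CO2_FRACTION
layer_mm = layer_m * 1000.0  # the "4 mm" figure in the comment
```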

      Thirdly, the greenhouse effect caused by water, CO2 and other atmospheric gases is known to raise the earth's temperature by around 33 degrees C - without it, today's average surface temperature would be around -18 °C, well below freezing.

      Fourthly your statement is absolutely right - "Thus the only influence that added CO2 can have is to cause the absorption/thermalization process to move slightly closer to the emitting surface."

      However, when you follow the statement through, it means that with more CO2 the heat lost to space at infrared frequencies has to come from higher in the atmosphere than it would with less CO2. And the higher layer is colder than the previous lower layer, so less heat is lost. And if less heat is lost then more is retained. Hence more CO2 means a bigger greenhouse effect and higher surface temperatures (on average - before you talk about other effects).

      Regards,
      Peter

    2. 1. comparing CO2 to a sheet of glass or a greenhouse is a completely false analogy. Greenhouses heat by limiting convection. Greenhouse gases cannot limit convection, and I have presented evidence in several other posts that GHGs actually speed up convective cooling.

      http://hockeyschtick.blogspot.com/2014/08/paper-proves-bill-nyes-faked-greenhouse.html

  4. Just to prove I'm out of my league here, but when a photon strikes an atom, the only energy imparted is through an electron going to a higher state.

    Since a photon has no mass it cannot provide energy in a Newtonian way, correct? It can't be like two billiard balls colliding, correct?

    Replies
    1. Not Newtonian, but in a relativistic sense photons do carry energy and momentum, since

      E = mc^2 (energy is equivalent to mass).

      http://math.ucr.edu/home/baez/physics/ParticleAndNuclear/photon_mass.html

  5. The calculations have been done over a 400 year period, but the change in CO2 is only significant over the last 50 years. For 350 out of the 400 years it is extremely likely that factors other than CO2 would show the greatest correlation to temperatures because you cannot test for the effect of a static variable such as CO2 over that time.

    So the CO2 rise has only had an effect for 12.5% of the period.

    Further, during those 50 years "natural" climate variability (TSI, ocean oscillations) has been comparable to the effects of human-induced CO2. If you then halve the 12.5% which the last 50 years represents, you get 6.25% - which is well within the 10% or so of variability which is not accounted for by the other fitting parameters. The 50% figure (comparable to the 44% mentioned above) comes from the graph plotted using Lean and Rind's approach at http://www.aip.org/history/climate/images/Model-4_effects.jpg

    Thus it is highly dangerous to conclude that CO2 does not affect temperature as there is no proof of this from the 400 year R2 figure alone.

    The Lean and Rind graph for the last 30 years gives a pretty good fit using similar variables - ENSO, TSI, volcanoes but also including a linear effect due to CO2 changes, so a proper explanation of why CO2 change has no effect would have to include an explanation of why their approach is wrong and yours is right (which at the moment it isn't because of the huge period up to 1960 when CO2 levels were relatively static and within touching distance of the historical 280 ppm value).

    Strictly any attempt to include CO2 as an independent variable should integrate the change from the 280 ppm value over time, as it is the change which causes the increase in radiative forcing, and the integral of radiative forcing gives the total excess heat which would be expected to correlate to changes in surface temperatures, though from time to time you would also expect some of it to go into increasing (deep or shallow) sub-surface ocean temperatures instead.

    Regards,
    Peter

    Replies
    1. "The calculations have been done over a 400 year period, but the change in CO2 is only significant over the last 50 years."

      Your thesis appears to be essentially that solar/oceans controlled climate for 350 years, then man-made CO2 took over control. I reject this hypothesis since solar/ocean cycles explain the last 50 years of the 400-year temperature progression just as well as the prior 350. If CO2 were a major player there would have been a major warming divergence upward from the sun/ocean model, and there is not. Therefore, CO2 is a bit player. Now that the PDO & NAO are starting to turn negative, and with relatively low solar activity during this cycle's maximum, we see further divergence between the CO2 control-knob theory and the sun/ocean control-knob theory.

  6. MS,

    If CO2 had varied significantly over the full 400 year period and this did not correlate at all with surface temperatures then it might be possible to reach a statistical conclusion about CO2.

    However, statistical techniques do not allow you to say that you have accounted for most of the variation in temperatures, therefore no other variable can affect the temperature. Although a pair of variables may be independent, their values might by chance approximately mirror each other, as CO2 and integral of sunspot numbers do from 1960 onwards. In that case you cannot separate the effect, using just multiple regression.
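    The collinearity point can be demonstrated with synthetic data (every number below is invented for illustration): when two candidate drivers approximately mirror each other, adding the second one to a regression barely improves R², regardless of which driver is physically responsible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 55                                    # e.g. annual data, 1960-2014
t = np.arange(n)

# Two "independent" drivers that happen to mirror each other,
# as the sunspot integral and CO2 do from 1960 onwards:
sunspot_integral = 0.01 * t + 0.02 * rng.standard_normal(n)
co2_forcing      = 0.01 * t + 0.02 * rng.standard_normal(n)

temp = 0.01 * t + 0.05 * rng.standard_normal(n)   # observed anomaly

def r2(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_sun = r2(sunspot_integral, temp)
r2_co2 = r2(co2_forcing, temp)
r2_both = r2(np.column_stack([sunspot_integral, co2_forcing]), temp)
print(round(r2_sun, 3), round(r2_co2, 3), round(r2_both, 3))
```

    Either predictor alone fits well, and adding the second, collinear one barely changes R² - so a tiny R² increment cannot rule out an effect of either variable.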

    Further, correlation is not causation. There is good support in the literature for the thesis that CO2 increases cause an underlying temperature rise which is superimposed on the natural variation so carefully analysed in the post above. The above post gives a correlation coefficient of around 44% (or 50%), which is approximately what you would expect if Lean and Rind's technique is correct, because the sum of the ranges of natural variation (ocean cycles, TSI, volcanoes) is approximately the same as the underlying anthropogenic CO2 temperature increase.

    " CO2 follows temperature on short, intermediate, and long-term timescales, and the effect does not precede the cause."

    While it may be true that CO2 followed temperature on long-term timescales, this is not what has happened since 1960.

    The science points conclusively to human CO2 emissions being the main cause of CO2 increases since 1960, though not the only one. Some people get confused because the annual CO2 increase is a small fraction of the annual dynamic CO2 exchange between atmosphere, ocean and biosphere but that is no excuse.

    So the real question is, given the increase in CO2 since 1960 due to humans, what effect is that having on temperatures now? And 44% correlation looks like a reasonable answer to that question - human-produced CO2 makes a significant difference after 1960 but shares that with natural climate variability due to ENSO, TSI, volcanoes etc.

    Regards,
    Peter

  7. Peter-"If CO2 had varied significantly over the full 400 year period and this did not correlate at all with surface temperatures then it might be possible to reach a statistical conclusion about CO2. The science points conclusively to human CO2 emissions being the main cause of CO2 increases since 1960, though not the only one."

    2 big flies in your AGW claim:

    http://hockeyschtick.blogspot.com/2014/07/new-paper-finds-only-375-of-atmospheric.html?spref=tw

    Jul 19, 2014

    New Paper Finds Only ~3.75% of Atmospheric CO2 is Man-Made From Burning of Fossil Fuels

    Pierre Gosselin

    A paper published today in Atmospheric Chemistry and Physics finds that only about 3.75% [15 ppm] of the CO2 in the lower atmosphere is man-made from the burning of fossil fuels, and thus, the vast remainder of the 400 ppm atmospheric CO2 is from land-use changes and natural sources such as ocean outgassing and plant respiration.

    http://hockeyschtick.blogspot.com/2010/11/analysis-ipcc-insider-inserted-false.html

    Sunday, November 28, 2010

    IPCC Insider Inserted False Claim on CO2 Sources in Assessment Report

    11/29/10- John O'Sullivan:

    Mišo Alkalaj is one of 24 expert authors of this 2-volume publication; among them are qualified climatologists, prominent skeptic scientists and a world-leading math professor. It is Alkalaj's chapter in the second of the 2 books that exposes the fraud concerning the isotopes 13C/12C found in carbon dioxide (CO2).

    Alkalaj, who is head of the Center for Communication Infrastructure at the "J. Stefan" Institute, Slovenia, says that because of the nature of organic plant decay that emits CO2, such a mass spectrometry analysis is bogus. Therefore, it is argued, IPCC researchers are either grossly incompetent or corrupt, because it is impossible to detect whether carbon dioxide (CO2) in the atmosphere is of human or organic origin.

    http://www.ijs.si/ijsw/More%20about%20the%20Jo%C5%BEef%20Stefan%20Institute

    Satellites trump surface temps for coverage and accuracy, and they show no warming for the last 18+ years. So whatever koolaid science you have, it's false. And I seriously doubt the credibility of those who make claims based on hearsay.

    Replies
    1. Pierre Gosselin has withdrawn his claim in your first link because it was based on a misinterpretation of the paper.

      It's a common misunderstanding - recent fossil fuel CO2 emissions can be distinguished isotopically because the carbon has not recently been subject to bombardment by cosmic rays. However, there is a huge dynamic interchange of the atmospheric CO2 with the biosphere and upper ocean so that the particular molecules emitted from fossil fuels soon get substituted. This process doesn't change the additional CO2 concentration caused by burning fossil fuel - just the actual molecules which make it up. The atmospheric nuclear tests inadvertently provided an excellent experiment for this process.

      So yes, you are right - after a few years it is impossible to tell from spectrographic analysis of just CO2 in the atmosphere where all the additional CO2 over the years has come from. But you can tell by doing more extensive research - because if you include the upper ocean and biosphere concentrations then it would be clearer.

      It's not clear why you inserted the link http://www.ijs.si/ijsw/More%20about%20the%20Jo%C5%BEef%20Stefan%20Institute which just seems to point to an introductory page for the J Stefan institute. Presumably someone there didn't understand dynamic CO2 interchange either.

      As for the satellite dataset, there are two in common use - UAH and RSS, and various research papers which also use the satellite sensors to calculate trends. All trend calculations except that from RSS show significant warming over the last 18 years - in UAH the "pause" ended in 2009 and the trend since 1998 is around 0.7 degrees C per century. The satellite-based trends since 1998 in the research papers are larger than the UAH trend.

      As far as satellite accuracy goes, firstly the RSS and UAH trends over the last 18 years are 0.07 degrees C / decade apart, even ignoring the UAH and RSS supplied estimated errors which very rarely get mentioned in blogs like this. If it is so easy to get temperature trends from satellites and they are so accurate, how come there is this huge difference between the two data sets? Secondly, the satellite sensors were designed for weather, not climate use, and if you look at the details of which satellites and sensors are actually used you find every so often they stop using a sensor because it may be recording too high or too low, and the decision to do this seems pretty arbitrary to me.

      All in all, the surface temperature data sets are generally more accurate than the satellite data sets as far as actual surface temperatures go. The lower tropospheric temperatures measured by the satellites are from a few miles up, and actually tend to swing higher on the peaks and lower on the troughs than the surface temperature data sets. RSS appears to have given up doing this since 2005 which makes me think there is something wrong with what they are doing, but that's only from eyeballing graphs, so is hardly conclusive evidence.

      Finally, you have to look at any slowdown in warming scientifically. This means controlling for external influences which are going to be superimposed on the raw AGW trend, of which the paramount ones are ENSO state, volcanoes and TSI. Counting the middle of ENSO neutral as zero, regression analyses show the El Nino state is worth 0.1 to 0.2 degrees C on global temperature and you can subtract the same range for a La Nina state.

      Since the 18 year slowdown started with a huge El Nino state, whereas the record surface temperatures during 2014 are from a neutral ENSO state, then you should be mentally adding on 0.1 to 0.2 degrees to the 18 year trends before thinking AGW has magically stopped.

      Regards,
      Peter

  8. Peter: “Pierre Gosselin has withdrawn his claim in your first link because it was based on a misinterpretation of the paper.

    How come the link is still there?

    It's a common misunderstanding - recent fossil fuel CO2 emissions can be distinguished isotopically because the carbon has not recently been subject to bombardment by cosmic rays.”

    Do you have a link for that?

    Peter: “But you can tell by doing more extensive research - because if you include the upper ocean and biosphere concentrations then it would be clearer.”

    Based on what research?

    Peter: “It's not clear why you inserted the link http://www.ijs.si/ijsw/More%20about%20the%20Jo%C5%BEef%20Stefan%20Institute which just seems to point to an introductory page for the J Stefan institute. Presumably someone there didn't understand dynamic CO2 interchange either.”

    That is where Mišo Alkalaj's paper came from. The Jožef Stefan Institute is named after the distinguished 19th century physicist Jožef Stefan, most famous for his work on the Stefan-Boltzmann law of black-body radiation. What proof do you have of their lack of understanding - you!?

    Peter: “All trend calculations except that from RSS show significant warming over the last 18 years - in UAH the "pause" ended in 2009 and the trend since 1998 is around 0.7 degrees C per century. The satellite-based trends since 1998 in the research papers are larger than the UAH trend.”

    Significant warming in the last 18 years? You should be familiar with UAH, and the latest graph I got showed no significant warming since the super El Nino year of 1998. Seems like you have your ‘facts’ mixed up!

    Peter: “As far as satellite accuracy goes, firstly the RSS and UAH trends over the last 18 years are 0.07 degrees C / decade apart, even ignoring the UAH and RSS supplied estimated errors which very rarely get mentioned in blogs like this.”

    Since the thermistors used in the surface stations have a resolution of +/- 0.47C, 0.07C is a very minor difference. When you add the surface station siting problems to the coverage problems, I would take satellites over surface stations any day.

    Peter: “If it is so easy to get temperature trends from satellites and they are so accurate, how come there is this huge difference between the two data sets? “

    Huge differences? There are major inconsistencies with surface temps:

    http://www.surfacestations.org/

    http://wattsupwiththat.com/2013/05/28/hadcrut4-revision-or-revisionism/

    HadCRUt4: Revision or Revisionism?


    Peter: “All in all, the surface temperature data sets are generally more accurate than the satellite data sets as far as actual surface temperatures go.”

    That depends, surface temps are subject to many irregularities.

    http://icecap.us/images/uploads/The_data_games.pdf

    The Data Games - The Transition from Real Data to Model/Data Hybrids

    http://wattsupwiththat.com/2011/01/20/surface-temperature-uncertainty-quantified/

    Surface temperature uncertainty, quantified

    Again, where is your proof, any links to substantiate your claims?

    Peter: “Since the 18 year slowdown started with a huge El Nino state, whereas the record surface temperatures during 2014 are from a neutral ENSO state, then you should be mentally adding on 0.1 to 0.2 degrees to the 18 year trends before thinking AGW has magically stopped.”

    http://www.newsmax.com/Newsfront/Science-US-climate-oceans/2014/10/06/id/598864/#ixzz3FSztjETK

    NASA Scientists Puzzled by Global Cooling on Land and Sea

    Since the ice fields have been expanding for the last 18 years and winters have been getting colder, I cannot accept your claims.

  9. The link http://hockeyschtick.blogspot.co.uk/2014/07/new-paper-finds-only-375-of-atmospheric.html?spref=tw may still be there, but the version of the post now kicks off with :

    "Thanks to notification from and email conversations with the lead author Denica Bozhinova of the paper "Simulating the integrated summertime Δ14CO2 signature from anthropogenic emissions over Western Europe, the claim made in the original Hockey Schtick post that ~3.75% of background atmospheric CO2 is man-made from burning of fossil fuels is hereby retracted due to a misinterpretation of the paper. The author has clarified that her paper does not address the mole fraction or concentration of CO2 of fossil fuel origin present in background levels of CO2, it addresses the mole fraction or concentration of CO2 of fossil fuel origin of recent emissions only. "

    and ends with

    "I apologize for my misinterpretation of the paper, putting the post in draft mode during email conversations back and forth with the author, and all subsequent confusion which I caused. I'd like to thank lead author Denica Bozhinova for her kind and detailed emails [portions below], apologize for the time she has spent correcting my misinterpretation and that of several others in the blogosphere, and wish her all the best in her future career and research. "

    If this isn't what you see then refresh the page in your browser.

    The carbon isotope stuff is only to do with how you can determine the percentage of atmospheric CO2 which is from recent carbon emissions.

    The following link deals with the three different and independent methods of calculating how much atmospheric CO2 comes from burning fossil fuels - http://www.skepticalscience.com/print.php?r=384.

    Are you sure you are using UAH for the trend? RSS is the only one with the zero trend. From this link http://woodfortrees.org/plot/uah/from:1996.95/to:2014.95/trend you can see that the UAH trend is an increase of just over 0.14 degrees C over precisely 18 years, which is 0.077 degrees per decade or 0.77 degrees per century. By contrast RSS is http://woodfortrees.org/plot/rss/from:1996.95/to:2014.95/trend which gives a reduction of 0.018 degrees C over precisely 18 years which is -0.001 degrees per decade or -0.01 degrees per century.
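    For readers who want to check such figures themselves, a trend like the woodfortrees one is just an ordinary least-squares line fitted to the monthly anomalies. The sketch below uses a synthetic series with an assumed 0.77 C/century slope rather than the real UAH data:

```python
import numpy as np

def trend_per_century(dates, anomalies):
    """Least-squares linear trend of an anomaly series,
    expressed as degrees C per century."""
    slope, _intercept = np.polyfit(dates, anomalies, 1)   # deg C per year
    return slope * 100.0

# synthetic monthly series: 0.0077 C/yr trend plus noise,
# loosely echoing the UAH figure quoted above (~0.77 C/century)
rng = np.random.default_rng(1)
dates = 1996.95 + np.arange(216) / 12.0          # precisely 18 years, monthly
anoms = 0.0077 * (dates - dates[0]) + 0.1 * rng.standard_normal(216)

tr = trend_per_century(dates, anoms)
print(round(tr, 2))
```

    With realistic month-to-month noise the recovered slope scatters around the true value, which is one reason short-period trends need quoted error bars.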

    Given that the two satellite records can choose to use or not use various readings from the same set of satellite sensors, it is extraordinary that you have the nerve to claim satellite readings are accurate when they generate 18 year trends that actually differ by 0.78 degrees C per century, as shown above. It is a totally illogical conclusion.

    Peter

    And you are very confused about the surface temperature sensor readings too. 0.47 (or 0.5) degrees C is not the "resolution" of the sensors. It is the accuracy of the readings BEFORE the sensors have been calibrated across the range (and it varies by temperature) - e.g. see page 4 of https://s.campbellsci.com/documents/us/manuals/107.pdf. After calibration what then matters is the reproducibility (the extent to which they always give the same reading over time when measuring the same temperature). The CRN equipment uses triple-redundant sensors. Once you have calibrated (and can recalibrate later), the accuracy is very much higher than your quoted figure. Provided the reproducibility is very high (and sensors are generally aged before use to ensure it is), and because we are always interested in changes rather than absolute readings, surface temperature readings are highly accurate - which is more than can be said for the satellite data sets RSS and UAH, as proved above, since their trends over 18 years differ by 0.78 degrees C per century, which is immense.

    Now you seem to have expanded the discussion to bring in many other denier claims. There isn't time to address all of them, though there's plenty of stuff on Skeptical Science about most of them. Instead consider that 1) your original claim about atmospheric CO2 increases not being due to fossil fuel use was based on a misunderstanding, 2) you are wrong about UAH showing no warming, 3) you are wrong that the UAH and RSS satellite datasets are accurate (because if they were that accurate they would be bound to agree on 18 year trends), and 4) you are wrong about surface temperature sensor raw accuracy of 0.5 degrees causing similar inaccuracies in the measurements. If you can be so wrong about these topics then it is very clear that you can be wrong about a lot more too, don't you think?

    Peter

    Replies
      I would concede on the difference of the UAH vs the RSS trends (I’m not that familiar with Wood for Trees), but my main counter is that the accuracy of surface temps is not as you claim.

      Reference: surfacestations.org and http://stevengoddard.wordpress.com/2014/06/23/noaanasa-dramatically-altered-us-temperatures-after-the-year-2000/

      One of the sources that showed me the close tracking of RSS and UHA is:

      http://www.drroyspencer.com/2014/10/why-2014-wont-be-the-warmest-year-on-record/

      As Dr. Spencer observed:

      “The thermometer network is made up of a patchwork of non-research quality instruments that were never made to monitor long-term temperature changes to tenths or hundredths of a degree, and the huge data voids around the world are either ignored or in-filled with fictitious data. Furthermore, land-based thermometers are placed where people live,.... that cause an artificial warming (urban heat island, UHI) effect...... The data adjustment processes in place cannot reliably remove the UHI effect because it can’t be distinguished from real global warming.”

      “Satellite microwave radiometers, however, are equipped with laboratory-calibrated platinum resistance thermometers, which have demonstrated stability to thousandths of a degree over many years, and which are used to continuously calibrate the satellite instruments once every 8 seconds. The satellite measurements still have residual calibration effects that must be adjusted for, but these are usually on the order of hundredths of a degree, rather than tenths or whole degrees in the case of ground-based thermometers.”

      You can have all the redundancy you want, but if you have crappy siting and poor consistency in your historical observations then claims of accuracy are not valid.

      “As of this writing, 69% of the USHCN stations were reported to merit a site rating of poor, and a further 20% only fair.”

      http://wattsupwiththat.com/2011/01/20/surface-temperature-uncertainty-quantified/

      Fig 3. the global surface air temperature anomaly series through 2009, as updated on 18 Feb 2010, (http://data.giss.nasa.gov/gistemp/graphs/). The gray error bars show the annual anomaly lower-limit uncertainty of ±0.46 C.

      The rate and magnitude of 20th century warming are thus unknowable,......

      Plus, from HADCRUT to GISS bias adjustments where recorded:

      http://stevengoddard.wordpress.com/2014/06/23/noaanasa-dramatically-altered-us-temperatures-after-the-year-2000/

      “chart of U.S. temperatures published by NASA in 1999... shows the highest temps- occurred in the 1930's, followed by a cooling trend ramping downward to the year 2000:...”

      “Using the exact same data found in the chart shown above,- NASA managed to misleadingly distort the chart to depict the appearance of global warming:”

      http://wattsupwiththat.com/2013/05/28/hadcrut4-revision-or-revisionism/

      “terrestrial results are now being tuned to bring them into correspondence with the more accurate and complete satellite results,.... there now has to be a corresponding disenhancement of the 21st-century warming.”

      Despite the CRN claims about their latest temperature sensors, those appear not to be widely employed. The most widely deployed is the MMTS type, using run-of-the-mill thermistors.

      The changeover from liquid-in-glass (LiG) thermometers to the Max/Min Temperature System (mid and late 1980s) led to an average drop in max temps of about 0.4°C and an average rise in min temps of 0.3°C for sites with no coincident station relocation. Quayle et al. (1991), from https://ams.confex.com/ams/pdfpapers/141108.pdf

    2. On man-made emissions vs natural-

      http://www.elsevier.com/locate/gloplacha

      Jul 19, 2013

      The Phase Relation Between Atmospheric Carbon Dioxide and Global Temperature

      Changes in ocean temperatures appear to explain a substantial part of the observed changes in atmospheric CO2 since January 1980.

      CO2 released from anthropogenic sources apparently has little influence on the observed changes in atmospheric CO2, and changes in atmospheric CO2 are not tracking changes in human emissions.

      http://www.co2web.info/ESEF3VO2.htm

      Carbon Cycle Modeling and the Residence Time of Natural and Anthropogenic Atmospheric CO2:

      Tom V. Segalstad

      Both radioactive and stable carbon isotopes show that the real atmospheric CO2 residence time is only about 5 years, and that the amount of fossil-fuel CO2 in the atmosphere is maximum 4%. Any CO2 level rise beyond this can only come from a much larger, but natural, carbon reservoir with much higher 13-C/12-C isotope ratio than that of the fossil fuel pool, namely from the ocean, and/or the lithosphere, and/or the Earth's interior.

      I will have to check up on the difference of RSS vs UAH trends.

    3. Scottar,

      You should know that there is evidence that the NOAA adjustments to USHCN data lead to trends which are similar in both rural and urban areas, which means that UHI cannot be the reason for the increase in US temperatures - using four different definitions of "urban".
      See http://api.ning.com/files/-7oxPj1gJLmb2yc2szcNWgH0xLORLwqk5o4sTOz03zKwPjJDej3rOpIBfSeojk*2tTRiSxj0EhJV6ViMMTR7NlEzWvnMLt6o/NOAAAdjustingUrbanTemperaturesDown.jpg

      The real question is why is there such a focus on the USA? Sure, a lot of people who do not believe in AGW live there and draw on anecdotal experience of cool USA temperatures. But the USA comprises only around 2% of the earth's surface, so whatever anomaly occurs in the USA can be divided by fifty before you add it to the calculation of global temperatures. It makes absolutely no measurable difference at all on its own, so any appeal to it has to be purely emotional.

      On satellite vs surface temperatures, look at the following chart from Roy Spencer's post - http://www.drroyspencer.com/wp-content/uploads/Yearly-global-LT-UAH-RSS-thru-Sept-2014.png .
      Look at the graph section from 1992 to 1997 (RSS is around 0.05 degrees C above UAH), 1999 to 2001 (RSS 0.08 ABOVE UAH) and then 2011 to 2014 (RSS now 0.08 degrees BELOW UAH). This is a swing of 0.16 degrees of the RSS data compared to the UAH data over a period of no more than 15 years. That really is huge for something you personally would like to trust more than surface temperatures.

      There is so much Roy Spencer leaves out of his comments on satellite sensors that you ought to doubt his objectivity - like the fact that some sensors develop weird drifts, and if the UAH and RSS teams are lucky enough to identify this they stop using those sensors - so much for sufficient reliability to act as long-term data for climate purposes! Satellite temperature data sets are a black art - and not an easy one - if it were easy then the RSS and UAH efforts would be consistent with each other, not demonstrate a 0.16 degree drift from each other over 15 years. Neither the UAH nor the RSS calculation code is published, and the UAH team's published long-term trends have gone up and up every few years.

      Secondly, remember that the satellite data sets do NOT measure surface temperature but instead a few miles up. There is a possible satellite sensor feed which does surface temperatures, but it is too "noisy" and both UAH and RSS throw it away.

      The physics tells us that the satellite data set global averages (ie. from a few miles up) should swing higher on the peak (hot) years and lower on the trough (cold) years, and this can be well seen in my chart - http://api.ning.com/files/-7oxPj1gJLlkl05NIiZxYH41hUC9ty*-Jt-xtm8iUoK41txARikSrRKuqkWDcO7VEs3wtaVEUmoeazDEIiK9vg1t*FNGhmfa/ComparisonofRSSUAHGISTEMPCW.jpg which you could reproduce using just Excel. The temperatures have firstly been normalised by subtracting the individual 1979 to 2013 averages, then smoothing over 12 months. The relatively larger swings of UAH and RSS over the Cowtan & Way and GISTEMP surface data sets are apparent. But clearly something has gone rather wrong with the RSS feed over the last few years.
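      The normalisation described (subtract the individual 1979-2013 averages, then smooth over 12 months) can be sketched as below. The two series here are hypothetical stand-ins sharing a common signal but with different absolute offsets, not the actual UAH/RSS/GISTEMP data:

```python
import numpy as np

def normalise(monthly, baseline):
    """Subtract the baseline-period mean, then apply a 12-month
    running mean, as described for the comparison chart."""
    anomaly = monthly - monthly[baseline].mean()
    kernel = np.ones(12) / 12.0
    return np.convolve(anomaly, kernel, mode="valid")

# two made-up data sets with different absolute offsets but the same
# underlying variation; after normalising they should overlay
rng = np.random.default_rng(2)
signal = np.cumsum(0.01 * rng.standard_normal(420))      # 35 years monthly
set_a = 14.0 + signal + 0.05 * rng.standard_normal(420)  # e.g. absolute temps
set_b = -0.2 + signal + 0.05 * rng.standard_normal(420)  # e.g. anomalies

base = slice(0, 420)                 # whole-period (1979-2013) baseline
a = normalise(set_a, base)
b = normalise(set_b, base)
print(round(float(np.max(np.abs(a - b))), 3))   # small: offsets are gone
```

      After this treatment only the shape of the variation matters, which is why data sets with quite different absolute values can be compared on one chart.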

      From this chart there is no case for saying the C&W and GISTEMP data sets are way out of line with the UAH satellite readings - just that they swing less. So the evidence from Roy Spencer and from my chart is not supporting your opinion on the relative merits of surface temperature and satellite data sets.

      Regards,
      Peter

      As regards the differences between the satellite and surface temperature measurements, you have to understand that guys like Roy Spencer and Anthony Watts are biased - they have an agenda and are not really interested in an objective comparison. Therefore they include only the good points of the satellite measurements and the bad points of the surface measurements. Because of this they are not likely to take the trouble to understand the competing measurements fully. An objective view should include all the significant relative strengths and weaknesses of each approach.

      Let's take each paragraph in turn. Due to the 4096 character limit I am only going to present the opposing side here to save space.

      “The thermometer network is made up of a patchwork of non-research quality instruments that were never made to monitor long-term temperature changes to tenths or hundredths of a degree, and the huge data voids around the world are either ignored or in-filled with fictitious data. ”

      Since we are interested generally only in temperature anomalies (changes, not absolute temperatures), it is the actual measured REPRODUCIBILITY of measurements with surface sensors which matters. Since the surface sensors are not in space, the conditions are not as harsh, and calibration can be performed at the end of a sensor's life as well as at the beginning to check the reproducibility of different sensor types (so you would do this on a sample, not necessarily on all of them). Further, the temperature anomaly at particular times tends to be very similar in areas which are up to hundreds of miles apart - even where the actual temperatures differ. So where there are multiple surface sensors only tens of miles apart they can be used to cross-check one another. By such means a good estimate can be made of the actual errors in the surface temperature readings.

      "Furthermore, land-based thermometers are placed where people live,.... that cause an artificial warming (urban heat island, UHI) effect...... The data adjustment processes in place cannot reliably remove the UHI effect because it can’t be distinguished from real global warming."

      Go look at http://api.ning.com/files/-7oxPj1gJLmb2yc2szcNWgH0xLORLwqk5o4sTOz03zKwPjJDej3rOpIBfSeojk*2tTRiSxj0EhJV6ViMMTR7NlEzWvnMLt6o/NOAAAdjustingUrbanTemperaturesDown.jpg . This says that urban (on four different measures) temperature adjustments do a very good job of removing the rural / urban bias. The proof of the pudding is in the eating - not in the weasel words of Roy Spencer or Anthony Watts.


      Regards,
      Peter

    5. “Satellite microwave radiometers,..are equipped with laboratory-calibrated platinum resistance thermometers, which have demonstrated stability to thousandths of a degree over many years, and which are used to continuously calibrate the satellite instruments once every 8 seconds. The satellite measurements still have residual calibration effects that must be adjusted for, but these are usually on the order of hundredths of a degree, rather than tenths or whole degrees in the case of ground-based thermometers.”

      Here's the plot of satellite overlap used to generate the RSS TLS temperatures - http://images.remss.com/figures/missions/amsu/satellites_used.png. There are many times when only one satellite with one set of sensors is making the readings. Any drift in sensor readings or on-board calibration will throw the whole history out. Around 1985/6 there was hardly any overlap between the two satellites NOAA-7 and NOAA-9, so historical cross-calibration is suspect. All of the older satellites have decaying orbits which affect the readings and require temperature adjustments. One or two later satellites have active measures (extra propellant) so orbits do not decay over time, which makes them more accurate. TLT (the one most people are interested in) overlaps are worse than for TLS (see http://www.remss.com/missions/amsu). This is the sort of stuff you have to put up with in space, and there is no possibility of recalibrating a sensor against ground equipment once it is launched. The RSS and UAH discrepancies mean the sensor data processing must be somewhat subjective rather than clear cut. If residual recalibration is the only issue and is as low as hundredths of a degree, why do the UAH and RSS temperatures change relatively by 0.16 degrees in 15 years?

      Clearly Roy Spencer and Anthony Watts have not bothered to mention any of this stuff above.

      "You can have all the redundancy you want but if you have crappy siting and consistency on your historical observations then claims of accuracy are not valid."

      That is a very misleading statement for two reasons. For temperature anomalies we are interested in REPRODUCIBILITY, NOT ACCURACY. If the station siting and surroundings have not changed over time then the temperature anomaly over time will not change either, so the anomaly is valid. Secondly, if changes take place in less than a year (e.g. someone builds a tarmac strip ten yards away, or a new building 50 yards away changes the wind patterns) then the anomaly from one sensor will suddenly shift relative to the other local sensors, and the effect can be compensated for.
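      That compensation step can be sketched in code. This is a toy version of pairwise homogenisation with invented numbers, not NOAA's actual algorithm: a station's readings jump relative to a nearby station, the offset is estimated from the difference series, and the later readings are shifted back.

```python
import numpy as np

def compensate_step(target, neighbour, change_month):
    """Estimate a step change in `target` relative to a nearby station
    from the difference series, and remove it from later readings."""
    diff = target - neighbour
    offset = diff[change_month:].mean() - diff[:change_month].mean()
    adjusted = target.copy()
    adjusted[change_month:] -= offset
    return adjusted, offset

# hypothetical example: a 0.5 C artificial jump at month 120
# (e.g. new tarmac laid next to the sensor)
rng = np.random.default_rng(3)
common = np.cumsum(0.02 * rng.standard_normal(240))   # shared regional signal
neighbour = common + 0.05 * rng.standard_normal(240)
target = common + 0.05 * rng.standard_normal(240)
target[120:] += 0.5                                   # the siting change

adjusted, offset = compensate_step(target, neighbour, 120)
print(round(float(offset), 2))
```

      Because nearby stations share the regional anomaly, the artificial step stands out sharply in the difference series even though it is invisible in the raw record.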


      “As of this writing, 69% of the USHCN stations were reported to merit a site rating of poor, and a further 20% only fair.”

      So that leaves 11% of good quality sensors, even ignoring the fact that the adjustment process ought to compensate for siting problems with some of the others. And has Watts or Spencer told you whether the 11% of well-sited sensors disproves the compensated readings from the other 89%, or whether it is in line? I bet not. In fact the analyses show that poorly sited sensors on average record more cooling after adjustment than do well-placed sensors. 11% would be enough for a global temperature anomaly calculation.

      Regards,
      Peter

    6. “terrestrial results are now being tuned to bring them into correspondence with the more accurate and complete satellite results,.... there now has to be a corresponding disenhancement of the 21st-century warming.”

      This is all rubbish. The truth is that it is useful to have both satellite and surface temperature data sets, and where there is a significant, unexpected discrepancy then climatologists have looked very hard at both datasets and often have found an error - more usually something indicating an error in the satellite data set. Here is a chart of temperatures from the publication record of John Christy on the UAH team - http://d35brb9zkkbdsd.cloudfront.net/wp-content/uploads/2014/07/Christy-Spencer-638x379.jpg . The last four points are in a straight line - would you bet your house on the UAH team now having it right? Since the UAH code is not public and the UAH team are reluctant to take feedback on board, mostly they have reluctantly been forced to fix errors after much foot-dragging.

      "Despite the CRN claims of their latest temperature sensors that appear to be the latest one not widely employed. The most employed is the MMTS type of run of the mill thermistors. The changeover from from liquid in glass (LiG) thermometers to the Max/Min Temp System (mid and late 1980s) led to an average drop in max temps of about 0.4°C and to an average rise in min temps of 0.3°C for sites with no coincident station relocation. Quayle et al. (1991) from https://ams.confex.com/ams/pdfpapers/141108.pdf"

      So what? Remember the importance of reproducibility versus accuracy. Generally there is a year of overlap between installation of new technology in a station and the removal of the old stuff. That means you can identify effects such as the above and compensate for them in the historical record to make it consistent with modern readings. Thus the historical anomalies are still correct. That is very similar to using different sensors in different weather satellites to generate temperature anomalies over time.
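      The overlap-period adjustment can be illustrated like this. The 0.4 C bias and one-year overlap below are hypothetical, loosely echoing the Quayle et al. figures quoted above; the point is only that an overlap lets you estimate the instrument offset and keep historical anomalies consistent.

```python
import numpy as np

def splice_records(old, new, overlap):
    """Estimate the systematic offset between old and new instruments
    from the months both were running, then shift the historical (old)
    record onto the new instrument's scale."""
    offset = np.mean(new[:overlap] - old[-overlap:])
    return np.concatenate([old[:-overlap] + offset, new])

# hypothetical changeover: old sensor reads 0.4 C high, one-year overlap
rng = np.random.default_rng(4)
truth = 20 + rng.standard_normal(132)                 # 11 years, monthly
old = truth[:24] + 0.4                                # months 0-23, biased
new = truth[12:] + 0.02 * rng.standard_normal(120)    # months 12-131

spliced = splice_records(old, new, 12)
offset = np.mean(new[:12] - old[-12:])
print(round(float(offset), 2))                        # estimated instrument offset
```

      After splicing, the early part of the record sits on the new instrument's scale, so anomalies computed across the changeover are not distorted by the instrument swap.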


      Regards,
      Peter
