Dr. Lorenz notes that although other fields dealing with complex, non-linear systems have accepted the implications of chaos theory, some meteorologists and climatologists remain reluctant to accept its central implication: that long-term climate forecasting is impossible.
According to chaos theory, all the current “initial” conditions throughout the atmosphere must be known precisely to predict what the atmosphere will be doing in the distant future. In addition, one must know all the current conditions throughout the oceans as well, since the oceans control the atmosphere. “In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent,” Lorenz concluded. So even if the molecules in the air all interacted nonrandomly, in a totally cause-and-effect (deterministic) manner, you still couldn’t predict with certainty what they would do or what the weather would be.
Chaos theory also debunks the claim of some climatologists that although models cannot predict short-term climate variations such as the current 20-year “pause,” they can still be used for long-term projections. Chaos theory instead proves that the uncertainty of projections increases exponentially with time, and therefore long-term climate model projections, such as those throughout the IPCC AR5 report, cannot be relied upon.
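As a rough illustration of the exponential error growth invoked above, the sketch below (assuming nothing beyond the standard three-variable Lorenz-63 system; the step size, the 1e-8 perturbation, and the print interval are arbitrary choices, and this is a toy system, not a climate model) integrates two nearly identical initial states and prints how quickly they separate:

# Sketch: exponential growth of a tiny initial-condition error in the
# three-variable Lorenz-63 system (sigma=10, rho=28, beta=8/3). Illustrative only.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    # one classical Runge-Kutta step
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # perturb one variable by 0.00000001

for n in range(3001):
    if n % 500 == 0:
        print(f"t = {n * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
    a, b = rk4_step(a), rk4_step(b)

The separation grows by roughly an order of magnitude every few time units until it saturates at the size of the attractor itself, which is the behavior the argument above appeals to.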
Bulletin of the American Meteorological Society, 2013; e-View
doi: http://dx.doi.org/10.1175/BAMS-D-13-00096.1
Last Interview with Professor Edward Lorenz? - Revisiting the “limits of predictability” - Impact on the Operational and Modeling Communities?
Robert W. Reeves
NOAA/NWS/OCWWS, Silver Spring, Maryland UNITED STATES
More than 50 years ago, Massachusetts Institute of Technology (MIT) Professor Edward Lorenz conducted some numerical experiments with a simple 12-variable system representing convective processes. He had begun work on a statistical forecasting project, but disagreed with some of the thinking at the time, in particular that the primarily linear statistical methods could duplicate what the nonlinear methods achieved. He proposed to demonstrate this by performing numerical time integrations of his simple model with his newly-acquired desk-top computer. On one occasion he wanted to reexamine the results from an earlier simulation. Rather than re-run the simulation from the initial state, he decided to pick up the computations part-way into the original run by using the printout from the earlier run as the starting point. To his astonishment the new simulation diverged significantly from the original. Eventually he realized that the initial values he used for the second simulation were rounded off from the initial run so that the initial values of his second run were slightly different. The minor differences at initialization were magnified later in the run and led to very different end states. Lorenz (1963) concluded that if the real atmosphere evolved similarly to his numerical simulation, then very long-range prediction would not be possible. If Lorenz’ work was valid, then this could significantly alter the course of long-range prediction history. What would Lorenz have to say about that? I requested an interview for the primary purpose of eliciting his views.
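The 12-variable convection model itself is not given in the article; as a minimal sketch of the same restart-from-a-rounded-printout effect, one can use the familiar three-variable system from Lorenz (1963) and round a mid-run state to three decimal places before continuing (the rounding precision, step counts, and parameters here are illustrative assumptions, not the original setup):

# Sketch of the "restart from a rounded printout" experiment, using the
# three-variable Lorenz-63 system as a stand-in for the original 12-variable model.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(s, dt=0.01):
    k1 = lorenz(s); k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2); k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 1.0, 1.0])
for _ in range(1000):              # run partway into the original simulation
    s = step(s)

full = s.copy()                    # exact mid-run state
restart = np.round(s, 3)           # "printout" keeps only three decimal places

for n in range(1, 2001):           # continue both runs side by side
    full, restart = step(full), step(restart)
    if n % 400 == 0:
        print(f"step {n:4d}:  x(exact run) = {full[0]:8.3f}   x(restarted) = {restart[0]:8.3f}")

Within a couple of thousand further steps the two runs bear no resemblance to each other, even though they initially differ only in the fourth decimal place.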
The Interview
R.R. - It’s Tuesday, November 6, 2007. I am with Emeritus Professor Edward Lorenz from MIT. Professor Lorenz, I would like to ask you to discuss your activities and views related to the topic of extended and long-range prediction. Your numerical experiments in the early ‘60s, which you published in ’63 entitled “Deterministic Non-periodic Flow,” suggested that there are practical limits to weather prediction. But before we go there I’d like to ask you to go back to the time that you came to MIT and some of the topics you were working on at that time.
E.L. - I came here shortly after we got involved in World War II. I signed up for the meteorology program at MIT which was part of the Army Air Corps there. At that time MIT was a nice convenient place for me because I was at Harvard and I didn’t have to move until the Army decided to move us all under the same place. But for the first month or so I kept on living at Harvard and the course was the regular graduate course in Meteorology at MIT, except it was crowded into a shorter period. So, I studied Meteorology in the course, and after that I was one of five students out of a hundred that they kept here as instructors for the next course. I think there were nearly 400 at that time taking it and stayed on for the next session after that. I was in the 3rd so I taught in the 4th and 5th, and after that they stopped the teaching program. They found they had more meteorologists than they needed and a lot of them were being assigned to other duties at that time because they had trained too many.
R.R. - So you were teaching cadets then?
E.L. - I was teaching cadets – yes mostly cadets. I guess there were other people taking the course at the same time. A few civilians and some Navy and they were all together.
R.R. - Were you in the Ph.D. program at the time?
E.L. No. Although I was in a Master’s program. When we stayed on as instructors, they gave us an option to do a Master’s thesis at the same time. We got a Master’s degree. But it was after the War and I got into the Ph.D. program and after about a year and a half I finally got my Ph.D.
R.R. And then you were working on angular momentum problems?
E.L. Well no, not at that time. This was after Victor (Starr)1 came and after I got my degree. I needed a job, of course, and they offered me one working with Victor Starr as a post-Doc. He was very much interested in angular momentum. So that’s when I got involved with that.
R.R. How did you get started in the studies that ended up related to the limits of predictability?
E.L. What happened was that at the same time there was a program which I didn’t know much about in statistical weather forecasting here which Tom Malone2 was directing; and Tom left to form and head up the Travelers Weather Service in Hartford. So they offered me his position which I accepted. And along with his position I also inherited his project. I had to learn something about statistics, so I got involved with statistical weather forecasting. And a lot of the things they said about it were current knowledge of statistical weather which I didn’t quite agree with. One was that most of the statistical methods then were primarily linear methods and I didn’t agree with the idea that linear methods could almost duplicate what the nonlinear methods were able to do, so I proposed a test where we could get extensive solutions. Computers were just coming in then, and we wanted to get some small system - any nonlinear system would do – to generate some extended solutions, and treat them as if they were observational data. Then we could see if we were able to forecast them by linear methods, knowing that we could forecast by nonlinear methods just by repeating the computations that produced the solution in the first place. This led to a number of things: I soon found that it wasn’t
easy to get nontrivial solutions; we could numerically solve for periodic solutions, which was where the prediction was trivial anyway. I finally managed to produce one, which was what I wanted, that definitely appeared to be non-periodic. It was a 12-variable model and that was essentially the first one that I worked with, although I got the idea that sufficiently long-range prediction would be impossible if the atmosphere behaved in the way that the model did.
R.R. So you had just run one time integration then, is that right?
E.L. The difficulties were in finding a suitable system of equations to work with because if I had known exactly what equations to choose in the first place, and exactly what initial states to take in order to get this nonperiodic solution, I probably could have done the whole thing in a couple of months or so with hand computations which is about the same time it would take to write up the thing afterwards for publication. So it wouldn’t have taken much more time, but the problem of course is that I had to make many, many tries with many different systems. Even if I’d had the right system I wouldn’t know if I had initial conditions that didn’t work very well. It’s not just a matter of initial conditions,
but once we have the general form of the equations, you have the numerical value of the constants. Some constants will produce what we now call chaotic behavior and some won’t. So it meant trying out an enormous number of things, more than I ever could possibly have gone through without the computers. So this type of work had to wait until computers were available. So it’s the kind of thing that we couldn’t have imagined in the ‘50s, let’s say, or before then. Although computers existed by then they weren’t sufficiently common to be used for this particular purpose – they were usually earmarked for something else. But by 1960 – I guess it was about ’58 or ’59, I finally got my own computer for the office - a little LGP (Librascope General Purpose) computer about the size of a regular desk, and it was ideal for these purposes because it was still a thousand times as fast as hand computations and fast enough to handle these small systems that I worked with. Then with the 3-variable model I finally used in the write-up in ’63, I felt I could make things a little clearer and get the points across better by using a smaller model than the 12-variable model. I spent some time looking for a model with fewer variables, and I finally found this one that Saltzman3 had been working with. He had his 7-variable model but he showed me one case that first of all wouldn’t settle down to a periodic solution, which was what he was interested in; and second, four of the seven variables stayed close to zero, which suggested that the other three were keeping each other going; and if I reduced it to those three it would behave the same way, which it did.
R.R. - Was he a student of yours then, Barry Saltzman?
E.L. - He was actually a student of Victor Starr’s. But he took some of my courses and I knew him quite well. I saw him quite often afterwards up until the time he died.
R.R. - So now you ran that model and published the results in 1963. Is that right?
E.L. - Yes.
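Lorenz’s remark above, that the same general equations can behave chaotically for some numerical values of the constants and not for others, can be illustrated with the three-variable 1963 system itself. In the hedged sketch below, the standard value rho = 28 gives irregular, non-periodic motion, while rho = 14 (a value chosen purely for illustration) settles onto a steady state:

# Sketch: identical equations, different constants. For rho = 28 the solution
# wanders irregularly; for rho = 14 it settles down to a fixed point.
import numpy as np

def lorenz(s, rho, sigma=10.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(rho, steps=6000, dt=0.01):
    s = np.array([1.0, 1.0, 1.0])
    xs = np.empty(steps)
    for n in range(steps):
        k1 = lorenz(s, rho); k2 = lorenz(s + 0.5 * dt * k1)
        k3 = lorenz(s + 0.5 * dt * k2); k4 = lorenz(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[n] = s[0]
    return xs[steps // 2:]          # discard the initial transient

for rho in (14.0, 28.0):
    spread = run(rho).std()
    verdict = "settles to a steady state" if spread < 0.1 else "stays irregular (chaotic)"
    print(f"rho = {rho:4.1f}: late-time spread of x = {spread:7.4f}  -> {verdict}")

This is the kind of parameter hunting he describes: in one range of the constants the solutions are trivially predictable, and only in another range does the non-periodic behavior he was after appear.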
R.R. Did you realize the implications of your work at that point?
E.L. I never really expected them to spread to so many other fields. I think I realized the implications for meteorology and some meteorologists didn’t quite agree with what I had to say, but fortunately Charney4 did. And he was in a very influential position then. This was at the beginning of the Global Atmospheric Research Programme (GARP)5, and one of the original aims of GARP had been to make two-week forecasts, and this suggested that they might be proved impossible before we even got started. So we were able to change the aim to investigate the feasibility of two-week forecasts, not promising that they would be possible. Now it begins to look as if the upper limit may be somewhere around two weeks, and I get the feeling that another 20 years or so we may actually be making useful day-to-day forecasts up to the two-week range, though I don’t think we are doing it now. But we got up to one week which I didn’t really expect at the time.
R.R. So Charney was a believer right away?
E.L. Yes. He said he saw why it worked that way. And I think his ideas there are pretty well-expressed in this report he wrote which was subsequently published in the (AMS) Bulletin. It was called the Feasibility of Global Observational Analysis Experiment or similar title.6 I think it was published in the Bulletin in ’66. It may have been a published report in ’64 or ’65, or around that time. That pretty well represents his feelings on the subject. It was his whole committee that published it. I think there were five authors.
R.R. - Was Smagorinsky7 in on that?
E.L. - I’m not sure he was on that actual committee or not. Of course he was very much involved in this type of work.
R.R. - General Circulation Model (GCM) experiments.
E.L. - Yes.
R.R. - And becoming a believer himself in the limitations, do you think?
E.L. - I think so.
R.R. - There were others who were either skeptical or didn’t want to believe.
E.L. - Well, I guess they felt that this was a simple system of equations, and that the real atmosphere didn’t behave that way. In fact I had one person tell me, point blank, that the reason I was getting this irregular behavior was because of the numerical scheme, that the equations didn’t actually act that way – which of course we couldn’t really prove, not being able to solve the equations by standard analytic methods. It seems quite definite that it’s the equations and not the numerics.
R.R. - Who was that person, do you remember?
E.L. - Yes, . . . I probably shouldn’t mention him. I wouldn’t want to put him at a disadvantage, because he has since changed his ideas on that.
R.R. - People are free to do that. I noticed that in one of your papers you credited Arnold Glaser8 with suggesting that maybe the smaller scale would . . .
E.L. - Yes, he mentioned that to me back in the ‘50s. He was here at the time. I guess he was here as a student a long time before I got involved in meteorology. Then he came back afterwards and got his Doctorate here. Died rather prematurely.
R.R. - So then you published a number of papers after that and were conducting further experiments? Were you at that point trying to nail down the limits of predictability, so to speak? Or were you just doing other things?
E.L. - Well, I was hoping to get a better idea what the limits were because this simple model said there were limits but it didn’t tell you whether they were a week or year or what. I don’t know whether I expressed it just right or not.
R.R. - The question is where did the world of applied math go – when did they eventually pick up on some of the things that you were doing back in the ‘60s or did they not?
E.L. - I find this a little hard to answer. Sometimes I had the feeling the applied mathematicians were ahead of us. I know that there were applied mathematicians at MIT – such as Will Malkus9 and some of the others who were very much interested in fluid dynamical programs – that certainly were as well-versed in fluid dynamics as any meteorologist I guess, and I don’t know exactly when they got interested in this particular thing. They may always have been but I remember Willem Malkus told me at one time that he didn’t think the way I had done this paper was the way to go about things. But then this turned out to be because he was interested primarily in the phenomenon of convection rather than some of the other things. I finally persuaded him that I wasn’t concerned at all whether this equation really represented any physical phenomenon very well or not; it was simply the fact that equations could do it and not that the equations of some particular phenomenon could do it. And I guess he agreed pretty well after that.
R.R. - Can you say something about your own background in math and how that encouraged you?
E.L. - I majored in math in college and then I went to Harvard and I’d had almost 3 ½ years of grad study there and I was expecting to get a Ph.D. in another half year or so if everything went well when we got involved in the war. They didn’t see fit to let me finish out anyway and since I’d always enjoyed the weather, signing up for meteorology would be a good thing. I didn’t have any idea then that I would stay in meteorology afterwards. I assumed I’d eventually get back in mathematics. And in a way I wanted to. So once I got into this work in the late ‘50s, I felt that I was getting back into the mathematics by doing this.
R.R. - And coming at it from a different point of view than we meteorologists?
E.L. - Yes. Any of the mathematics that I did in my meteorology work wasn’t related to the same problems at all that I’d looked at as a mathematics student. I’d never thought about dynamical systems at that time. This is something that came up later.
R.R. - So was it more practical applications then - getting into meteorology?
E.L. - I finally decided after studying enough math that what really interested me was algebra and I was going to write a thesis in algebra. One can look at this predictability problem, if you want to call it that, from different points of view. One method is to solve for the analog method, look at the data, and if one could find a weather situation that was enough like a previous one then we could see how rapidly the development after the one would depart from the development after the other. That would give some idea of the limit of predictability. I published a paper on that in the Journal of Applied Meteorology I think10. What turned out of course was that it was impossible within the amount of data available to get any two situations which looked very much alike, at least globally and hemispherically, or we might get some that looked very much alike say over the eastern half of the US that you could always argue that if they behaved differently, that was because of different influence somewhere else rather than because of any instability there, so it wouldn’t tell you much. So what I really thought I needed were situations that were alike, if not over the globe, at least over the hemisphere, and I took five years of data and comparing each map with each other map. At least comparing those that occurred within a month of the same time of year, because you wouldn’t expect that fall maps and mid-winter maps would be much alike anyway, and hoping maybe I could find a few cases where the difference between them or some measure (say rms difference) between the two fields was only half of that between two randomly-chosen fields. But the best that I found of these few hundred thousand comparisons was one case I think where it was 62%. It didn’t seem like a very good analog somehow but it was enough to write about.
R.R. - So that was a frustrating experience trying to find analogs. Did you think ahead of time it was going to be tough to do?
E.L. - When I started I expected to find better analogs than actually appeared there. The upper air data record had not been in existence for very long then so it was difficult to find suitable analogs. If we repeated the study now we‘ve got a much longer record, perhaps five times as long, and would have a better chance since you’re comparing everything to everything else. That would be 25 times as many cases to look at, and I guess I estimate that to have a good chance of finding two analogs – two maps – where the difference is only half of the average difference between any randomly chosen maps, one would need 140 years of upper-level data, and we haven’t got that yet. But we’re getting close to half of it.
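A toy version of the analog search described above, using random synthetic fields as stand-ins for hemispheric maps (the field size, record lengths, and random data are illustrative assumptions, not weather data), shows both why analogs at half the average map-to-map difference are so hard to find and why a record five times longer gives roughly 25 times as many pairs to compare:

# Toy analog search: best rms "analog" distance found, as a fraction of the
# average rms distance between randomly chosen maps, for two record lengths.
# Random 500-point fields stand in for gridded hemispheric maps.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def best_analog(n_maps, n_gridpoints=500):
    maps = rng.standard_normal((n_maps, n_gridpoints))
    dists = np.array([np.sqrt(np.mean((maps[i] - maps[j]) ** 2))
                      for i, j in combinations(range(n_maps), 2)])
    return dists.min() / dists.mean(), len(dists)

for n_maps in (100, 500):          # a record, then one five times longer
    ratio, n_pairs = best_analog(n_maps)
    print(f"{n_maps:3d} maps -> {n_pairs:6d} pairs; best analog at "
          f"{100 * ratio:.0f}% of the average map-to-map difference")

Even with 25 times as many pairs, the best analog in these random fields improves only marginally, which gives the flavor of why Lorenz estimated that something like 140 years of upper-level data would be needed.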
The first thing I would say is that current numerical prediction outputs are much better than I ever thought they would be at this time. I wasn’t sure they would ever get as good as they are now, certainly not within my lifetime. So, this makes me think that they can become still better and makes me hopeful that we may actually get good forecasts a couple of weeks ahead some time. I still don’t hold much hope for day-to-day forecasting a month ahead. Two weeks ahead doesn’t seem unreasonable at all now even though we haven’t reached that point. And . . . I gather that a lot of the improvement has been from the improvement in initial conditions, and in turn, improvement in data assimilation methods. I’m still surprised, although it has been known for a great many years now, that so much of the total time in numerical forecasts is spent on the data assimilation rather than on the actual forecasting, even when you make an ensemble forecast of 50 members or so. Still an enormous amount of the time is actually the data assimilation time. And I’m convinced that there will some day be better methods of data assimilation which incorporate the nonlinearity better than we are able to now in the assimilation process. I don’t know just what they are – every time I look at the thing and try to see if I can learn something new I get discouraged pretty soon. I haven’t come up with anything.
R.R. - So you have at least been thinking about that problem?
E.L. - I think the meteorological community accepted the idea of limitations to the forecasts. Of course, the idea wasn’t new then. You can find it quite strongly expressed in some of the earlier papers. Particularly one of the papers by Eady11 around 1950 where he points out that any forecast given is just one member of a large ensemble of possible forecasts and we have no real reason for selecting among those.
R.R. - And he was saying that in 1950?
E.L. - He was saying that early. I think he was as advanced as any meteorologist at the time, and it was certainly a tragedy that he didn’t live longer. I remember thinking of him as a somewhat older meteorologist but actually I think he was about my age. So he must have died when he was in his 40’s. And other people have expressed similar views even earlier. But sometimes these are almost taken as jokes saying that someone sneezing in China will cause a snowstorm in New York. You can find that way back, at least to the early ‘40s, but maybe before that. You must know Jim Fleming12. He found one thing around 1915 in the Monthly Weather Review (MWR) where there was reference to possible effect of some insects on the weather. It was someone by the name of Franklin (1918) who was actually at MIT, although I don’t think he was a meteorologist. But he did write about this thing in the Monthly Weather Review. And he pointed out the possibility of this large amplification of small influences. I do think that the meteorology community accepted it pretty well, perhaps partly because of Charney’s influence. And proliferation to other fields of the ideas of chaos did not come until another ten years or so after that and was unrelated to any feelings the meteorology community might have had.
R.R. - OK. Prof Lorenz, thank you for sharing your work with us.
E.L. - Well, I’m glad I’ve had a chance to talk with you.
CONCLUSION. The interview has provided insights into how Lorenz stumbled onto his seminal work. He inherited a project that required he learn statistics, which led him to statistical weather forecasting, while his foundation in mathematics led him to question current thinking. He was able to prove his theory that linear statistical methods could not duplicate what a system generating nonlinear solutions could achieve. While his first numerical integrations were conducted using a 12-variable model, his landmark 1963 paper (Lorenz, 1963) only used a 3-variable model, and in this interview, Lorenz gives us his path to choosing the simpler model. Lorenz also believed that improved initial conditions and data assimilation methods have led to the skill we see today in NWP [numerical weather prediction], and he was hopeful that good forecasts out to two weeks are possible.
I think there are some misconceptions in the above presentation: It is true that pointwise precise predictions (in space and time) cannot be made in a system like the Lorenz equations (or weather) with strong pointwise sensitivity to perturbations in e.g. initial conditions. But that does not mean that mean values cannot be predicted: For example, the number of turns in each wing of the Lorenz butterfly traced by an accurately computed trajectory turns out to be about the same; it is the exact timing of the switches from one wing to the other which is difficult to predict, like predicting the exact timing of a low pressure zone approaching Scandinavia. Yet the average amount of rain or average temperature is pretty predictable. For example, global temperature has not changed much over the last century. A forecast saying that it will remain constant until 2100 is not sensitive to any data and may well be correct.
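The distinction can be shown numerically with a minimal sketch (again the standard Lorenz-63 system; the statistics reported, the mean of z and the fraction of time spent on the x > 0 wing, are just convenient examples of "mean values"): two trajectories started a millionth apart end in completely different states, yet their long-run averages nearly coincide.

# Sketch: pointwise prediction fails while simple long-run statistics agree.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def trajectory(s0, steps=60000, dt=0.01):
    s, out = np.array(s0, dtype=float), np.empty((steps, 3))
    for n in range(steps):
        k1 = lorenz(s); k2 = lorenz(s + 0.5 * dt * k1)
        k3 = lorenz(s + 0.5 * dt * k2); k4 = lorenz(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[n] = s
    return out

a = trajectory([1.0, 1.0, 1.0])
b = trajectory([1.0 + 1e-6, 1.0, 1.0])   # nearly identical start

print("difference between final states :", np.linalg.norm(a[-1] - b[-1]))
print("mean of z (run a, run b)        :", a[:, 2].mean(), b[:, 2].mean())
print("fraction of time with x > 0     :", (a[:, 0] > 0).mean(), (b[:, 0] > 0).mean())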
Hi Claes
I'm not taking the extreme position that it's not possible to know with reasonable certainty that the temperature will remain relatively stable over the next century, given knowledge of past climate.
Likewise, I doubt you endorse the IPCC's use of the mean of an ensemble of models, each of which is admittedly wrong and based upon erroneous assumptions, as having the ability to project AGW, right?
Do you think that a simple stochastic model based upon past climate changes might be superior to the IPCC models?
Perhaps you could suggest how you would rewrite any "misconceptions" above.
Thanks
Of course I don't endorse the IPCC idea that the mean value of short-time incorrect models can give long-time correct mean values. But it is possible that short-time correct models (weather) can give correct long-time mean values (climate). Maybe this was not a misconception, just something which was not stated.
http://judithcurry.com/2013/10/13/words-of-wisdom-from-ed-lorenz/
"no change" model outperforms IPCC climate models
ReplyDeletehttp://blog.heartland.org/2013/10/the-science-fiction-of-ipcc-climate-models/
A simple "no change" model outperforms IPCC GCMs by factor of 7 times
http://hockeyschtick.blogspot.com/2010/03/paper-no-change-climate-model-is-7_02.html
as does a simple harmonic model
http://hockeyschtick.blogspot.com/2013/08/simple-climate-model-outperforms-ipcc.html
just one example of many papers showing vast differences between climate models based upon initialization assumptions, published today:
http://link.springer.com/article/10.1007%2Fs00382-013-1969-4
ferdberple says:
December 11, 2013 at 6:45 am
TB says:
December 11, 2013 at 3:27 am
Weather is the chaos in the system – the noise on the general climate trend (up or down).
============
That is what the models believe, but it is an oversimplification. If that were true then climate would be predictable, in the sense that it would be subject to the Law of Large Numbers. Over time you would expect to see a statistically predictable trend, i.e., you could predict whether climate was statistically more likely to warm or cool.
However, that is not what you see. At all time scales climate is a fractal distribution. It does not converge about an average, because it has no constant mean. As a result most statistical analysis of climate is misleading at best.
Weather is not the noise in the climate system. Weather and climate are measurements of the same physical process at different time scales. As you expand the time scale, weather becomes climate and remains just as unpredictable.
ferdberple says:
December 11, 2013 at 6:59 am
So what is a fractal distribution and how does it differ? When one graphs any physical process, typically you get some sort of a wavy line. If you expand the time scale and the line becomes less wavy, then the process is becoming more predictable over time.
If however the line does not become less wavy, if it maintains the same irregularities at different scales, then you likely have a fractal distribution. This sort of process does not become statistically more predictable as you increase the time scale.
Now look at a graph of earth’s average temperature over the past million years as compared to the past 1000 years or the past 1 year or the past day? Does climate show any less variability at longer scales? No. If anything climate over the past 1 million years shows greater variability, which shows that climate is no more predictable than weather. The farther you look into the future, the less reliable the prediction.
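A hedged sketch of the distinction being drawn here, using synthetic series rather than temperature data: for independent noise the spread of window-averages shrinks as the window grows, as the Law of Large Numbers requires, while for a random walk (a simple stand-in for a process with no constant mean) it does not:

# Sketch: spread of window-averages vs. window length for (a) independent noise
# and (b) a random walk. Synthetic data only, for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 2 ** 16
noise = rng.standard_normal(N)               # (a) i.i.d. noise
walk = np.cumsum(rng.standard_normal(N))     # (b) random walk

def spread_of_window_means(x, window):
    usable = (len(x) // window) * window
    means = x[:usable].reshape(-1, window).mean(axis=1)
    return means.std()

for window in (16, 256, 4096):
    print(f"window = {window:5d}   noise: {spread_of_window_means(noise, window):6.3f}"
          f"   random walk: {spread_of_window_means(walk, window):8.1f}")

Whether real climate series behave like the first case or the second is exactly what the comment disputes; the sketch only shows what each behavior looks like.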
http://wattsupwiththat.com/2013/12/10/on-the-futility-of-long-range-numerical-climate-prediction/#comment-1497743
Scott Wilmot Bennett says:
December 11, 2013 at 6:37 am
TB says:
December 11, 2013 at 3:27 am
“Climate is NOT weather projected into the future. Climate creates weather – hang on – in the sense that the Earth acts as a heat engine. It receives energy from the Sun and it is reflected/absorbed/re-radiated back to space. Only two factors essentially govern it’s working – energy in vs energy out, this is constrained by albedo and radiative forcing (and for past epochs by orbital eccentricity). Weather is the chaos in the system – the noise on the general climate trend (up or down). To suppose that climate is weather projected forward is missing seeing the wood by only seeing the trees.”
This is wrong in so many ways, I don’t know what to point out first!
1. The Earth is not in equilibrium, it does not have ‘A Climate’ in the sense you use it:
“Moreover, it hardly needs stating that the Earth does not have just one temperature. It is not in global thermodynamic equilibrium – neither within itself nor with its surroundings. It is not even approximately so for the climatological questions asked of the temperature field. Even when viewed from space at such a distance that the Earth appears as a point source, the radiation from it deviates from a black body distribution and so has no one temperature [6]. There is also no unique “temperature at the top of the atmosphere”. The temperature field of the Earth as a whole is not thermodynamically representable by a single temperature.”
[6] Essex C., Kennedy D., Berry R. S., How hot is radiation?, Am. J. Phys., 71 (2003), 969–978.
2. To talk about the Earth’s ‘climate’ as if it were independent of its Geography, let alone its Geology, is absurd. I’ll list some of the ‘essential factors’ below:
a. Oblate spheroid, rotating on axis creating uneven “energy in”! We know these as the seasons! Precession is in flux.
b. Diameter, and hence faster speed of rotation at the equator, creating the Coriolis effect, which dominates the global circulation patterns (the trade winds), creating the major climatic zones (Desert/Jungle)
c. The shape and geographical distribution of land masses and bodies of water (All of which are in flux).
e. The Earth’s magnetic field, without which there would be no atmosphere (it would have blown away in the solar wind)
f. The Moon and its effect on rotation and tides
g. The Earth’s elemental composition. Carbon is the fourth most abundant element in the universe yet most of the Earth’s is locked in the core. “Energy in” and carbon dioxide are utilised by all life on earth and much is stored as biological mass via photosynthesis.
I probably didn’t point out the most important but you get the idea.