Oklahoma has experienced some dramatic weather events over the past several days - we had a major snow event preceded by thundersleet (and some hail!) last week that was forecast quite well by the numerical weather prediction (NWP) models and so was well anticipated. The confidence in the forecast was high enough that many schools and businesses announced they would be shut down the next day, well before the first precipitation even began! Given Norman's pathetic snow and ice removal capabilities, many schools and businesses stayed closed for the rest of the week thanks to an extended period of bitter cold (i.e., temperatures staying below freezing).
Another significant event was anticipated for this week, but it turned out to miss Norman, for the most part. We received considerably less than the forecast snow amounts. Of course, schools and businesses had again announced they would shut down before a flake had fallen. This time, it seems, such precaution was unnecessary, given the modest snowfall. In such situations, many folks want to blame the forecasters for "hyping" the event into "Snowmageddon" or a "Snopocalypse" and causing unnecessary alarm. I have a couple of things to say about this. I'm not the only one - see here and here for some thoughts by a colleague.
First of all, let me say that weather forecasting continues to be an uncertain business. If you want weather forecasters to make your decisions for you, there inevitably are going to be times when they're wrong. If snow falls when the stated probability of snow was only 10%, that should be expected about one time in ten that they forecast snow with a 10% probability! And if snow doesn't fall on a 90% probability, that too should be expected about one time in ten when they forecast snow with a 90% probability! There is no prospect that forecasting will ever be so good as to be absolutely correct 100% of the time.
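For readers who like to see that calibration idea spelled out, here is a minimal sketch: among all the occasions when a forecaster states the same probability, the event should occur at roughly that relative frequency. The data and the function name below are invented purely for illustration, not taken from any real verification system.

```python
# Toy reliability check: among all forecasts with the same stated probability,
# does the event verify at roughly that relative frequency? (Invented data.)
forecast_probs = [0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9]  # stated snow probabilities
snow_observed  = [0,   0,   0,   1,   1,   1,   1,   0  ]  # 1 = snow fell, 0 = it did not

def observed_frequency(stated_prob, probs, outcomes):
    """Fraction of occasions on which the event occurred, given the stated probability."""
    cases = [o for p, o in zip(probs, outcomes) if p == stated_prob]
    return sum(cases) / len(cases) if cases else None

print(observed_frequency(0.1, forecast_probs, snow_observed))  # ideally near 0.1
print(observed_frequency(0.9, forecast_probs, snow_observed))  # ideally near 0.9
```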
What is your cost associated with taking unnecessary precautions? How does that compare to the loss associated with failing to take precautions when you should have? It can be shown that the ratio of cost to loss determines the level of confidence in an event at which you should take precautions. If the cost of taking precautions exceeds the loss from not taking them, you should never take precautions! If your cost is very low compared to your loss, it makes sense to take precautions at a low confidence threshold. Cost/loss ratios vary among decision-makers, so different forecast users should make different decisions in situations where the forecast probabilities are the same. Decision-makers should understand this, but many don't. That's their problem, not that of the forecasters. Ignorance is not always blissful.
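For anyone who wants the rule made explicit, here is a minimal sketch of the standard cost/loss decision rule: take precautions whenever the forecast probability exceeds the cost/loss ratio. The function name and the numbers are purely illustrative.

```python
# Standard cost/loss decision rule (illustrative sketch, invented numbers):
# protect whenever the expected loss avoided (p * loss) exceeds the cost of protecting.
def should_take_precautions(prob_event, cost, loss):
    """Return True if precautions are worthwhile on average.

    cost: expense of protecting (paid whether or not the event occurs)
    loss: damage suffered if the event occurs and you did NOT protect
    """
    if cost >= loss:
        return False  # protection can never pay off on average
    return prob_event > cost / loss  # threshold is the cost/loss ratio

# A school whose closure is cheap relative to the potential loss (threshold 0.05):
print(should_take_precautions(prob_event=0.3, cost=1.0, loss=20.0))   # True
# A business whose shutdown costs nearly as much as the loss it avoids (threshold 0.75):
print(should_take_precautions(prob_event=0.3, cost=15.0, loss=20.0))  # False
```

The two calls make the point in the paragraph above: faced with the same 30% forecast, the low cost/loss user should act while the high cost/loss user should not.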
Having said that, it's also clear that the NWP models were a major factor in the degree of confidence the forecasters had in their forecasts. As explained by my colleague, we now run ensembles of forecasts in order to establish how likely an event might be. The assumption is that the variability among the ensemble members is wide enough that, for the most part, the actual observed weather will fall somewhere within the range of possibilities revealed by the ensemble. As shown by this week's event, that isn't always true, however. Quantitative precipitation forecasting (QPF) is one of the most challenging things that forecasters attempt, and it's becoming clear with time that forecasters are relying heavily on the information provided by NWP models (including ensembles of model runs) to guide their QPF products. If you live by the models, however, there will be times when they lead you astray, and you will "die" (metaphorically) by them.
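To make the ensemble idea concrete, here is a minimal sketch of how an ensemble is commonly turned into an event probability - the fraction of members reaching some threshold. The member values are invented, and this week's event illustrates the caveat: such a probability is only as trustworthy as the ensemble's spread.

```python
# Crude ensemble-derived probability (invented member values): the fraction of
# members forecasting snowfall at or above a chosen threshold.
member_snowfall_inches = [0.5, 2.0, 4.5, 0.0, 3.0, 6.0, 1.0, 2.5, 0.2, 5.0]

def ensemble_probability(members, threshold):
    """Fraction of ensemble members at or above the threshold."""
    return sum(m >= threshold for m in members) / len(members)

print(ensemble_probability(member_snowfall_inches, threshold=2.0))  # 0.6
# If the real atmosphere falls outside the ensemble's spread, this probability
# can be confidently wrong -- the "live by the models, die by the models" problem.
```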
I'm not a forecaster and I make no pretense of being one. But there are times when I wonder what their logic might be when they make their choices. I don't like to be a "Monday morning quarterback" when it comes to specific forecasts, of course. Nevertheless, I was nowhere near as confident about this week's event being a significant one here in Norman as were most of those folks paid to forecast the weather. Since I didn't put out my own forecast, I have no way to demonstrate that I'm not simply using 20-20 hindsight. You'll just have to take my word for that. Or not.
Way back in 1977, Len Snellman introduced the notion of "meteorological cancer," wherein forecasters would come to simply pass along the NWP model guidance without any further consideration. It seems that Len Snellman was much better at forecasting the process of weather forecasting than we are at forecasting the actual weather!
Nice post regarding what I refer to as "model-casting". It does seem that this is often the preferred method--many forecasters/mets are more than willing to take the models verbatim, as truth, with no consideration for weather analysis. Even the process of data assimilation and how the models develop a forecast is sometimes not considered. What does this mean? I try to show that a "weather analysis" or assessment can result in better forecasts than simply buying the models verbatim. Here is one example where the NCEP models were drastically different--yet it was possible to make a solid forecast even though the guidance spread was huge only 48 hours out. http://jasonahsenmacher.wordpress.com/2010/11/15/the-importance-of-model-timing-ohio-valley-and-se-rain-event/
Following this path makes the eventual demise of the public sector forecaster all the more certain. Even if forecasters care only about their own selfish interests (which I don't believe is generally the case), they should be working very hard to add value to objective guidance products. In my perception of things, only a small minority of them do! They're hastening their own irrelevance!!
Could you explain a little more?
ReplyDelete"Following this path makes the eventual demise of the public sector forecaster all the more certain."
Are you suggesting that "weather analysis" and breaking down relevant atmospheric "forcings" will lead to the demise of public sector forecasters? I have found that such analysis often leads to forecasts far better than the models themselves can deliver.
Evidently, I failed to make clear what "this path" refers to - anyone who knows me and has read my many writings on this subject would know that it refers to following automated guidance to the exclusion of real meteorological analysis and diagnosis. Sorry for that misunderstanding.