A recent blog post I read made an effort to explain probability in weather forecasting. From my perspective, it fell rather short of an explanation. Hence, this effort, which no doubt will also fall short.
In the early days of my science education, I was convinced that perfect weather forecasts were an inevitable outcome for the science of meteorology. With time, my confidence in that outcome was eroded, and the famous 1963 paper by meteorologist Edward N. Lorenz (that started the notion of "chaos") put the nails in the coffin of perfect deterministic forecasting. Simply put, perfect deterministic forecasts are just not possible. This is no indication of a shortfall in atmospheric science - it's a fact associated with the way the atmosphere works. To make perfect forecasts, we would need an infinite amount of perfectly accurate data and a perfect physical understanding of all the processes that affect the weather. Neither of those conditions will ever be realized. The farther into the future we attempt to forecast, the greater the uncertainty. And the uncertainty itself is uncertain - we know it varies from day to day, and in one location versus another. Forecasting has definitely improved over the time of my professional career, but it cannot ever become perfect.
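For readers who haven't seen Lorenz's point demonstrated, here is a minimal sketch (in Python, with NumPy) of the three-equation system from his 1963 paper, using his parameter values. Two runs start from initial states differing by one part in a hundred million - a stand-in for an imperfect observation - and the sketch simply tracks how far apart they drift:

```python
import numpy as np

# Parameters from Lorenz (1963): sigma = 10, rho = 28, beta = 8/3
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(state):
    """Right-hand side of the Lorenz system."""
    x, y, z = state
    return np.array([SIGMA * (y - x),
                     x * (RHO - z) - y,
                     x * y - BETA * z])

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta integration step."""
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # a tiny "observation error" in x

for step in range(1, 3001):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 500 == 0:
        print(f"t = {step * dt:5.1f}  separation = {np.linalg.norm(a - b):.2e}")
```

The separation grows roughly exponentially until the two "forecasts" are as different from each other as any two random states of the system - despite starting from nearly identical conditions and using the identical (perfect!) model.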
Since we don't observe the weather perfectly, we don't even know exactly what's happening in the present, and that's obviously a major challenge to our ability to know the future. Forecasts can only decrease in accuracy with time, and at some point, our forecasts become indistinguishable from random guessing. That point is the so-called "predictability limit" - it isn't a hard number (it varies!), but is somewhere around 7-10 days. Beyond that time, the best forecast (statistically) is climatology - the long-term average conditions for that place and time of year. When you see predicted high and low temperatures for a week in advance, those values are far less certain than the values for tomorrow, even if your source doesn't inform you of that declining confidence.
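A toy illustration of that convergence toward climatology (my own construction, not a real forecast model): stand in for a temperature anomaly with a simple persistent random process, issue the best possible forecast at each lead time, and compare its error against simply forecasting the long-term mean. The numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "weather": an AR(1) process standing in for a temperature anomaly.
# Hypothetical stand-in -- real atmospheric dynamics are far richer.
rho = 0.8                       # day-to-day persistence
n = 100_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal(scale=np.sqrt(1 - rho**2))

for lead in (1, 3, 7, 14):
    truth = x[lead:]
    forecast = rho**lead * x[:-lead]    # best possible forecast for an AR(1)
    climatology = np.zeros_like(truth)  # always forecast the long-term mean
    rmse_f = np.sqrt(np.mean((forecast - truth) ** 2))
    rmse_c = np.sqrt(np.mean((climatology - truth) ** 2))
    print(f"lead {lead:2d} days: forecast RMSE {rmse_f:.3f}, "
          f"climatology RMSE {rmse_c:.3f}")
```

Even the optimal forecast in this toy world loses its edge over climatology as the lead time grows - which is the predictability limit in miniature.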
Along the way to my enlightenment regarding deterministic forecasts, I had the good fortune to meet the late Allan H. Murphy, who explained to me that subjective probability estimation associated with forecasting is tied to the forecaster's confidence in a particular outcome - and that this is a perfectly acceptable form of probability. That different forecasters might arrive at different probability estimates for the same situation is bothersome, but when the forecasters are properly 'calibrated' in their confidence, their forecasts tend to converge to similar values. Forecasters can become quite good at estimating forecast uncertainty, although this skill varies from one forecaster to another. As I write this, considerable research is underway to seek strategies to help forecasters estimate forecast uncertainty.
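"Calibrated" has a concrete meaning here: over many cases, events assigned a 30% probability should happen about 30% of the time. A minimal sketch of such a reliability check, using made-up forecasts and outcomes purely for illustration:

```python
import numpy as np

def reliability_table(probs, outcomes, n_bins=10):
    """Bin forecast probabilities; compare each bin's mean forecast
    with the observed frequency of the event in that bin."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & ((probs < hi) | (hi == 1.0) & (probs <= hi))
        if mask.any():
            rows.append((probs[mask].mean(), outcomes[mask].mean(), mask.sum()))
    return rows

# Hypothetical verification data, perfectly calibrated by construction:
rng = np.random.default_rng(0)
forecast_probs = rng.choice([0.1, 0.3, 0.5, 0.7, 0.9], size=5000)
observed = (rng.random(5000) < forecast_probs).astype(int)

for mean_p, obs_freq, n in reliability_table(forecast_probs, observed):
    print(f"forecast ~{mean_p:.2f}: observed frequency {obs_freq:.2f} (n={n})")
```

A well-calibrated forecaster's table hugs the diagonal: the forecast probability and the observed frequency agree in every bin. Systematic departures from that diagonal are exactly what calibration training aims to remove.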
Allan spent a lot of his life trying to overcome stupid objections to the use of probability, and I've done some of this as well (in several earlier essays). I'm not going to repeat all that here. Our main challenge is to try to figure out a way to express the inevitable uncertainty in our forecasts in a way that's helpful to those trying to use our forecasts to make decisions.
People are griping all the time about probability in weather forecasts - they apparently want us to be absolutely certain. In pretending to be so certain, we meteorologists make the users' decisions for them, and then those users are upset when the forecast doesn't work out that way. Users surely understand that we're not perfect, so it must follow that demanding we continue to pretend to be perfect is not going to work! The unwritten contract between forecasters and users needs to be renegotiated!
When probability was introduced into public forecasts in the mid-1960s in the US, it was done without a public information campaign explaining what the probabilities meant. That gap in public education remains unbridged to this very day. Most of the confusion over probability is not about an absence of understanding of abstract probability theory (which many meteorologists don't understand, either!). The problem is that users don't know what event is being forecast! Is it a forecast for the eight-inch-diameter opening of the official rain gauge? Is it an average probability over some region? What is the time period to which the probability refers? We simply have never done what it takes to explain just what that probability number means to the public, so it's not surprising that the public struggles with this. We must address this shortfall in public understanding.
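To see why the event definition matters, consider just one of the interpretations raised above: a single quoted probability that is actually an average over an area of quite different point probabilities. The grid values in this sketch are entirely hypothetical:

```python
import numpy as np

# Hypothetical point probabilities of measurable rain during some
# fixed period, at nine locations across a forecast area.
point_pop = np.array([
    [0.9, 0.8, 0.4],
    [0.7, 0.5, 0.2],
    [0.3, 0.2, 0.1],
])

# Under an areal-average convention, one number summarizes the area --
# and hides the real spatial variation underneath it.
print(f"area-average PoP: {point_pop.mean():.0%}")                          # -> 46%
print(f"range across area: {point_pop.min():.0%} to {point_pop.max():.0%}") # -> 10% to 90%
```

A "46% chance of rain" means something very different to someone living under the 90% corner of that grid than to someone under the 10% corner - which is precisely why the definition of the event needs to be communicated along with the number.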
In the very non-homogeneous group known as "the public", there's some fraction of people who can use probability to make decisions quite easily, as well as those for whom probability is so mysterious as to be completely useless. Is there a "one size fits all" method for getting weather information to users? I doubt it, but I'd like to explore the issue of how to communicate uncertainty. Before we start changing the format and content of our forecasts, the first rule is: do no harm! Don't change things before we know with some confidence that the change won't be deleterious. We need a collaboration with the social sciences to come up with strategies for expressing the necessary uncertainty information in such a way that the users obtain the information they need. We do our users a disservice by "dumbing down" our forecast products and using the public as guinea pigs in ill-conceived experiments.
Finally, it seems that "the public" has some responsibility of its own. If people find our products confusing or unhelpful, they need to expend some effort to become more knowledgeable. I'm not saying the problems with communicating weather information are wholly the fault of the intended recipients, but rather that the recipients share some measure of the responsibility for that communication breakdown.