For reasons mostly related to the failure of the National Weather Service to develop a pre-release information campaign, the public has been puzzled by the meaning of probability forecasts ever since the Probability of Precipitation (PoP) was introduced in the mid-1960s. That oversight can't be rectified easily, but as we contemplate changing the content and wording of forecasts, that lesson looms large - or it should. The concept of uncertainty is best expressed as a probability, but other ways of expressing it (such as odds) might be more intuitive for most of the public.
A rain forecast expressed in terms of probability (e.g., a "40 percent chance of rain" - which is equivalent to a 60 percent chance of no rain) always refers to a specific space-time volume.
That is, the forecast is for a specific area during a particular time
span. It might be for a particular metropolitan area during the next 12
hours, for instance. If you don't know the forecast's space-time
volume, you don't yet know enough to grasp the intended meaning. (There's a
100 percent chance that it will rain somewhere on the Earth in the next
10 years! There's a zero percent chance of rain within the next 5
seconds when the skies currently are a cloudless blue.)
Another factor to consider is the "lead time" of the forecast: the time between when the forecast is issued and the beginning of the valid period for the forecast's space-time "window". Today's forecast for today is much more likely to be accurate than today's forecast for tomorrow. In general terms, the limit of predictability for weather forecasts is somewhere around 7-10 days, depending on the weather situation. Some forecasts are more difficult (and, hence, more uncertain) than others. At the predictability limit, the forecasts become so uncertain that they are no more accurate than forecasting the climatology - the average of all weather events for that date. Such forecasts are said to have zero "skill" (which is not the same as accuracy: skill is accuracy relative to some simple reference forecast, such as persistence, climatology, or some objective forecasting system).
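As an aside on how "skill" can be quantified: the text above doesn't name a particular score, but one common choice (an assumption here, not a claim about any agency's practice) is a skill score built from the Brier score - the mean squared error of the probability forecasts - with climatology as the reference. A minimal Python sketch, using made-up numbers:

```python
def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts (0 = perfect)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

pops     = [0.4, 0.1, 0.8, 0.2, 0.6]   # issued PoP forecasts (hypothetical)
outcomes = [1,   0,   1,   0,   0]     # 1 = measurable rain occurred, 0 = it did not
climo    = [0.2] * len(outcomes)       # reference forecast: climatological frequency

bs_fcst  = brier_score(pops, outcomes)
bs_climo = brier_score(climo, outcomes)

# Skill score: 1 = perfect, 0 = no better than the reference, negative = worse.
skill = 1.0 - bs_fcst / bs_climo
print(f"Brier score {bs_fcst:.3f} vs. reference {bs_climo:.3f} -> skill {skill:.2f}")
```

At the predictability limit, the forecast's score approaches the reference's and the skill drops toward zero - which is the sense in which such forecasts are "no better than climatology".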
You also need to know the event to which the word "rain" applies.
In most cases, this means enough rain to be "measurable" (typically,
0.01 inches). The event being forecast could be different from
that, but most PoP forecasts are for measurable rain. In any case, it's another essential piece of the puzzle. The less frequent an event might be, the less confidence forecasters can have in predicting it. The probability of measurable rain is considerably higher than that of a rain event producing 10 inches (roughly 254 millimeters) of rainfall.
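Pulling those pieces together: a PoP value has a definite meaning only once the area, the valid time window, and the event threshold are all specified. The sketch below simply illustrates that bundle of information in Python; the field names and values are hypothetical, not any official product format.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PoPForecast:
    """A probability-of-precipitation forecast and the space-time volume it covers."""
    area: str                # region the forecast covers (hypothetical label)
    valid_start: datetime    # beginning of the valid period
    valid_end: datetime      # end of the valid period
    threshold_inches: float  # event definition: how much rain counts as "rain"
    probability: float       # chance the event occurs somewhere/sometime in the volume

# "40 percent chance of measurable rain over the metro area during the next 12 hours"
issued = datetime(2015, 9, 6, 6, 0)
forecast = PoPForecast(
    area="example metro area",
    valid_start=issued,
    valid_end=issued + timedelta(hours=12),
    threshold_inches=0.01,   # "measurable" rain
    probability=0.40,
)
```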
So, once you know the space-time volume for which the forecast is valid and the nature of the forecast event, the probability value is a quantitative expression of the forecaster's confidence that such a rain event will occur somewhere, sometime within that space-time volume. The level of certainty (or uncertainty) can be estimated objectively, using any of a number of methods (the spread of ensemble members, Model Output Statistics, etc.), or subjectively. Subjective probability estimates can be calibrated with experience, such that all calibrated forecasters looking at the same data would arrive at similar probability estimates - subjective probabilities need not be understood as mere "guessing"! Provided they follow the laws of probability, subjective probability estimates are legitimate expressions of forecaster confidence. Although some forecasters might be more confident in their abilities than others, properly calibrated forecasters will mostly agree about their probability estimates. Real forecasters can become reasonably well calibrated in about a year, given proper feedback about their forecasting accuracy.
If the forecast is for a 40 percent probability (two chances out of five ... or four out of ten), then any one forecast can be neither wholly correct nor wholly incorrect. The only probability forecasts that are either right or wrong are forecasts of zero and 100 percent. We measure how good the forecast is
by its "reliability" - a perfectly reliable probability forecast of 40
percent means that on the average, it rains somewhere within the
space-time volume 40 percent of the time whenever that 40 percent probability
forecast is issued. When it rains, of course, we should expect higher probability values, and lower values when it doesn't rain. Perfect
forecasting would consist only of (a) 100 percent probabilities when it
rains, and (b) zero percent probabilities when it doesn't rain, but that level of
certainty is impossible (for many reasons, both theoretical and practical). Thus, it rains one time out
of ten when the probability forecast is for 10 percent (assuming
reliable forecasting). Rain on a 10 percent probability is not only not necessarily wrong; it is just what we expect to happen (10 percent of the time!) when we make such a forecast.
Note that if the forecaster knows nothing
(i.e., has no confidence), then the best forecast to make is for the
climatological probability in that space-time volume. This is usually a
much lower value than 50 percent (a value that many people might incorrectly
interpret as "pure guessing") - if the climatological value for the
given space-time volume on that day of the year is 20 percent, that's
the best possible "know nothing" forecast.
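A small worked example shows why, again using the Brier score as the (assumed) accuracy measure: if measurable rain actually occurs on 20 percent of such days, a constant 20 percent forecast has a better expected score than a constant 50 percent "coin flip". The numbers below are hypothetical.

```python
# Expected Brier score of a constant forecast p when rain occurs with frequency base_rate.
def expected_brier(p, base_rate):
    return base_rate * (p - 1.0) ** 2 + (1.0 - base_rate) * p ** 2

base_rate = 0.20  # hypothetical climatological frequency of measurable rain
print(expected_brier(0.20, base_rate))  # 0.16 - the climatological forecast
print(expected_brier(0.50, base_rate))  # 0.25 - the "50-50 guess" scores worse
```

In fact, no constant forecast has a better expected score than the climatological value itself, which is why 20 percent - not 50 percent - is the honest "know nothing" forecast in this example.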