A colleague of mine said something years ago that struck me as insightful: every model forecast ever issued was wrong! Wrong in some way or another, to a greater or lesser extent. Obviously, some forecasts are better than others, but none has ever been absolutely perfect. His point was that human forecasters need to avoid basing their forecasts purely on Numerical Weather Prediction (NWP) model output - a notion with which I agree fully. However, the same can be said of every forecast ever issued by human forecasters, as well! The reality is that we can never predict the weather with absolute certainty. I have neither the space nor the inclination to go into the details of why this isn't just my opinion (maybe later) - it's based solidly in our scientific understanding of the atmosphere. So, put in terms of information content, a weather forecast of any sort is not a detailed statement of what definitely and certainly is about to happen in the future.
Because weather has substantial impact on human society, it's obvious that people want to know what's going to happen weatherwise ahead of time. I'm fond of saying "Yes, of course, and people in Hell want a glass of ice water!" - which I heard many years ago from a co-worker. What people want isn't necessarily what they're going to get. The fact is that we have never been able to provide that sort of information, and there is every reason to believe we never will. That notwithstanding, our relationship with our users has largely perpetuated the myth that we can provide it with 100% confidence. Users want something, and we pretend we can give it to them. Surely our users know by now that such a capability doesn't exist! Their own empirical evidence is that we can't do it, and that evidence is at least a contributor to the widespread notion that weather forecasts are inevitably and totally wrong.
If plausible bounds are put on what constitutes a good forecast (as opposed to a perfect forecast), today's weather forecasts are correct within those bounds a high percentage of the time (e.g., 24-h forecasts of daily maximum and minimum temperature verify within 5 degrees of the observed value about 85% of the time or better). So our weather forecasts currently contain useful information (despite not being perfect), within some limits, out to about 7-10 days. What you experience is usually fairly close to what we forecast. Beyond that "predictability limit" of 7-10 days, our weather forecasts become no more accurate than simply forecasting what climatology (i.e., the long-term averages for a particular location, date, and time) says we should expect. At that point, we say our forecasts no longer have any skill relative to climatology. The greater the lead time, the less accurate the forecasts (and the lower their skill), on average, out to the predictability limit.
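The idea of "skill relative to climatology" can be sketched with a few lines of arithmetic. The numbers and the simple skill-score formula below are my own illustrative assumptions, not from the post - just one common way to express "fraction of the possible improvement over a climatology baseline":

```python
# Illustrative sketch: forecast skill relative to a climatology baseline.
# One simple skill score: S = (A_f - A_c) / (1 - A_c), where A_f is the
# forecast's accuracy (fraction of forecasts verifying within the chosen
# bounds) and A_c is the accuracy of always forecasting the climatological
# value. S = 1 means perfect, S = 0 means no better than climatology.

def skill_score(forecast_accuracy: float, climatology_accuracy: float) -> float:
    """Fraction of the possible improvement over climatology achieved."""
    return (forecast_accuracy - climatology_accuracy) / (1.0 - climatology_accuracy)

# Hypothetical numbers: a short-lead temperature forecast verifies within
# 5 degrees 85% of the time, while climatology alone would verify 60%.
print(skill_score(0.85, 0.60))  # 0.625 -> well above zero: skillful

# At the 7-10 day "predictability limit", forecast accuracy sinks to the
# climatological accuracy, and the skill drops to zero.
print(skill_score(0.60, 0.60))  # 0.0 -> no skill relative to climatology
```

The particular baseline accuracy (60%) is made up for illustration; the point is only that skill measures improvement over an unskilled reference, so it falls to zero even while raw accuracy stays well above zero.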
What I would like to have us do is re-negotiate the contract we have with the users of weather information. We need to provide them with whatever forecast information we have, including some sort of statement of the uncertainty associated with that information. Let's put aside the existing relationship, in favor of putting out information we actually have the capability to provide! Now the language of uncertainty is probability, and I'm constantly being told that people don't want probability (the glass of water in Hell problem) or that they don't understand probability. You don't need to be an expert in probability theory to put it to good use, and many people are very familiar with the notion of odds (probability in another form). What we are doing now, with the lone exception of precipitation probabilities, is pretending to provide absolute certainty. The historical background of how Probability of Precipitation (PoP) was introduced is interesting, but far more than I want to expound upon in this blog. Whatever the problems are with PoPs, they are a far more meaningful way to express our forecast information than all the non-probabilistic elements in a weather forecast. If we don't express our uncertainties, we are actually withholding information from forecast users! That can't be a good thing, and it comes back to bite us, time and time again.
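Since odds really are just probability in another form, the translation is purely mechanical. A minimal sketch (the function name and the specific PoP values are my own, for illustration):

```python
from fractions import Fraction

def probability_to_odds(p: float) -> tuple[int, int]:
    """Convert a probability p into 'odds in favor' as a reduced ratio.

    E.g., p = 0.30 -> (3, 7), read as 'odds of 3 to 7 in favor'.
    """
    f = Fraction(p).limit_denominator(100)  # tame float rounding
    return (f.numerator, f.denominator - f.numerator)

# A 30% PoP expressed as odds: 3 to 7 in favor of measurable precipitation.
print(probability_to_odds(0.30))  # (3, 7)

# A 75% PoP: odds of 3 to 1 in favor.
print(probability_to_odds(0.75))  # (3, 1)
```

The conversion is lossless in both directions, which is the point: offering odds alongside (or instead of) a percentage changes the presentation, not the information content.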
An analogy with sports is a fair comparison, at least to some extent. Our predictions for who will win the Super Bowl in the pre-season have much greater uncertainty than the night before the game is actually played. Even then, some uncertainty remains, and reasonable people can disagree about the outcome right up to the time the whistle blows and the winner is known with absolute certainty.
Therefore, to answer the question posed by the title of this blog, a weather forecast contains the forecaster's best estimate of what that forecaster (who might possibly be an NWP model) anticipates is going to happen with the weather. It's not a guess, but rather our assessment of the situation and what we believe to be the most probable weather, as of the time the forecast is issued, given the finite accuracy limits of the method used to create it. As new information comes in, that forecast can change, sometimes dramatically. Our diagnosis of what is about to happen virtually never coincides precisely with reality, but at times we can get it fairly close, especially at the shorter lead times.
A weather forecast always should include information about forecast uncertainty and that is necessarily going to be more complicated to explain than just reading a list of numbers. More information inevitably requires more effort. If the user is going to make the best use of the information we reasonably can provide, the user must accept some of the responsibility to pay attention to the forecast, to learn what the forecast actually is saying. If all you want is the numbers, then you've forfeited a good deal of the value the forecast is trying to provide. The choice can be left up to the user.
3 comments:
Your analogy with the Super Bowl is a great example. Consider what it would be like if a sportscaster simply said Denver was going to win, and then went on to some other topic. But they don't do that. Instead, they discuss how the outcome could change if any of a multitude of things happens (e.g., an injury occurs to Peyton Manning, the wind is strong and the temperature drops into the 20s, the running game doesn't develop, etc.). These are expressions of uncertainty. Thus, when we simply say the high temperature for tomorrow is going to be 20, and stop at that, we are ignoring the inevitable range of outcomes (e.g., the front slows down and the temperature reaches 30). It only makes sense that we give some bounds or limits on our forecasts -- not only at long ranges but also in the short term. I am surprised that the private sector ignores this (e.g., deterministic forecasts out to day 45 by one entity) when they could be providing a really great service to their customers by sharing the uncertainty information (e.g., a high of 20, but a possible range of 15 to 30).
Of all things to do in South Dakota (from where you get wonderful input, right, Matt?), I have followed the tropics for years. I assume you approve of their fairly recent use of probabilities in tropical weather outlooks, as to whether a system being watched will develop into a tropical storm (in 2 or 5 days).
As a forecaster, one of my personal preferences is to write my forecasts in text format. I know many users don't like to read, and many prefer to see pretty icons, but I feel that icons are too simplistic and do not capture the variable detail across the entire day (e.g., overcast becoming clear, increasing clouds, morning rain/afternoon dry, etc.). Furthermore, I am not a fan of forecasting exact temperatures. I prefer to use temperature ranges, which inherently convey uncertainty. In the old days, the NWS zone forecasts used phrases like “high in the mid 80s” or “lows around 30°“. I still prefer this format to “high of 83°” unless I am showing a graph that illustrates upcoming temperature trends.
I started thinking a lot about the societal implications of forecast wording after the recent “snow jam” fiasco in Atlanta. In the days leading up to that event, I was providing forecasts to some of my clients and I, like the National Weather Service, was forecasting 1 to 2 inches of snow in the Atlanta metro, with the greatest accumulations occurring south of Atlanta. This turned out to be quite accurate, as a swath of heavier snow fell from Columbus to Macon. Despite being as specific as possible in relaying the forecasts, many people in the Atlanta area interpreted the forecast as “the worst is going south of us.” They were overly focused on the path of the storm and where the “heaviest” snow would be rather than the amounts predicted for the metro. I kept fielding questions like, “are you still expecting the snow to stay south of Atlanta?” My response was consistently, “no, I expect snow to fall in Atlanta… don’t focus on the heavier amounts south of you… focus on the accumulations predicted for your area.” I would see people posting on social media things like, “it’s supposed to go south of us,” because that was how their brains were processing the forecasts.
I feel like we, as forecasters, need to encourage people to focus on what is being predicted for their town and how those amounts will impact THEIR respective communities. This approach would be very similar to how the NHC relays tropical cyclone forecasts, which reminds people to not focus on the exact path of the eye/center of circulation. Obviously, this approach would be helpful for all significant weather events, not just snow.
Finally (although a little off topic), I will also mention the recent uptick in social-media hypecasts from long-range model output. To some extent, it’s an unfortunate reality that model data are now available to everyone and in the age of social media, snapshots of enormous snow accumulations (and other hazardous events) that show up on a single model run valid 10 days into the future will spread like wildfire. Some amateur forecasters and weather enthusiasts have 20,000+ followers on Facebook or Twitter and share long-range model solutions routinely. They aren’t trained to accurately diagnose and predict the atmosphere, nor do they understand the intricacies of NWP models. They certainly won’t wait for model consensus and consistency.
In recent weeks, especially after the snow jam in Atlanta, the rumor mill has been churning, with epic snowfall predictions from various snow algorithms spreading across the Internet. Fortunately, local NWS offices have helped to mitigate the hype by creating graphics that explain why 10-day model output isn't always reliable. This was a very good move because it explained the inherent uncertainty in forecasting and some of the pitfalls of long-range models. Many amateur forecasters treat the models as gospel; after all, they want to be the first to “call it” and get a pat on the back when they happen to get it right (even a broken clock is right twice a day).
Anyway, I know this was a long reply, but "What information does a weather forecast contain?" is a great question, and one I have been thinking about for a while, so I just wanted to chime in.
Chris