Finding #8: After the significance of this event was apparent, Tornado Warnings and Severe Weather Statements lacked enhanced wording to accurately portray that immediate action was necessary to save lives with this tornado.
Recommendation #8: WFO warning forecasters should use wording that conveys a sense of urgency in warnings and statements when extremely dangerous and life threatening weather situations are in progress.
This recommendation is based on a very limited study of a sample of precisely one case (that of the Joplin, MO tornado). This is pretty minimal evidence on which to base changes to the warning system. I don't accept this recommendation as sufficient justification for tinkering with the wording in weather warnings. A primary rule when considering changes to a system that has saved tens of thousands of lives over its history: first, do no harm! There's no solid foundation of peer-reviewed science to support changes to the warning system. This is not to say that the existing system is perfect. Far from it, but the IBW experiment is not the way to identify and address the problems within the existing system.
Check out the IBW site to find the following statements regarding the "experiment" (using public warnings):
Project Goals
- Provide additional valuable infromation to media and Emergency Management officials
- Facilitate improved public response and decision making
- Better meet societal needs in the most life-threatening weather events
Do we have any evidence that the proposed changes will improve public responses? How will 'public responses' be measured in order to ensure that any improvement in responses has, in fact, been a result of these changes? And if we don't know precisely what proper responses are, will any changed responses necessarily be an improvement?
Just what are these 'societal needs' and how were they chosen? Again, how will the meeting of these unspecified needs be measured?
Intended Outcomes
- Optimize the convective warning system within the existing structure
- Motivate proper response to warnings by distinguishing situational urgency
- Realign the warning message in terms of societal impacts
- Communicate recommended actions & precautions more precisely
- Evaluate ability to distinguish between low impact and high impact events
Again, what is a proper response? And how will it be demonstrated that the proposed changes will succeed in motivating a proper response?
I thought the NWS was a weather forecasting agency, not an impact forecasting agency. In what way have forecasters been educated and trained to forecast impacts? Knowing impacts in advance is pretty difficult to achieve.
The topic of 'call to action statements' can be debated - even assuming this is a good thing to do, how will it be shown to be more "precise" communication? What is meant by "precision" of communication?
So there is no study that has actually shown the ability of weather forecasters to discriminate between high and low impact events? It seems pretty evident that until such a study has demonstrated that ability in a convincing way, this experiment is pretty ill-advised. Using the public as experimental subjects is not a good idea!
Enhance warning through
- Improve communication of critical information
- Make it easier to quickly identify the most valuable information
- Enable prioritization of key warnings in your area of interest
- Indicate different levels of risk within the same product
- Enable the National Weather Service to express a confidence level of potential impacts
Has it been shown in a broad-based, peer-reviewed study that the proposed changes will improve the identification of critical information? Again, how will the improvement be measured? How is the value of information to be determined?
Has it been shown in careful studies that it is possible to identify different levels of risk?
This is an ill-advised experiment at this time. Much more needs to be done before we start messing with the existing warning system that goes out to the public. It is also ill-conceived: this is not even remotely a reasonable design for a proper experiment. The IBW 'experiment' is little more than a thinly-disguised effort to do something just for the sake of being able to say, "See, we're doing something!"
15 comments:
Dr Doswell - I am by no means justifying the project (other than the "tags" at the end stating whether a tornado is observed or just indicated) but Mike Hudson did a webinar for the pseudo-WAS*IS community today which was similar to his AMS presentation. Some results from WxEM studies were presented. The AMS talk was recorded at https://ams.confex.com/ams/93Annual/webprogram/Paper222556.html
I see nothing in the presentation that offers me any convincing, solidly-based evidence that these are the right changes to make.
"A primary rule when considering changes to a system that has saved tens of thousands of lives over its history"--I have never heard this before, what is the source for this info?
Chuck,
Before I post my thoughts, up above you wrote:
"I thought the NWS was a weather foreasting agency..."
If you criticize the spelling of the NWS statement on their website, some won't take you seriously when you misspell "forecasting". Anyway...
I think a very good thing about this is that it reminds people of possible impacts in a very concise manner. Not the standard call-to-action that gets slapped on at the end of each warning that nobody reads anyway. That might actually be more useful than the space-waster that's there now, which gobbles up many seconds on NOAA Weather Radio unnecessarily. If you don't know what to do as a tornado is a short distance and time away, a call-to-action won't help you. You very well could be dead before it's all read to you.
Having said that...over the years, you have criticized TV meteorologists for saying "that's an (E)F-[1-5]" without even seeing the damage and material up close. Now, we are telling the NWS, essentially, to GUESS what EF-rating that tornado COULD be or is, or even WILL BE before you can even see anything from below the clouds! To top it all off, I get administrative email messages from NOAA's data feed reminding WFOs not to watch streaming video during an event so that the public can get the information (since they don't have decent Internet bandwidth at the offices or regional centers). Meanwhile, the WFOs are stuck with a lousy standard-definition Dish Network feed that only covers the local channels in their office's viewing area. In the CWA I live in, they can view the channels from their local broadcast market...but not from a second major market they cover, even though at least one TV station there streams its coverage live over the Internet.
And on top of all of that, from radar they are to determine the impacts of a tornado-warned storm that they can't see below 8,000' due to the distance of the storm from the radar?
I actually DO want impact warnings. But to do that, the WFOs are NOT...repeat, NOT even remotely equipped. They have no live skycams of their own, a TV feed from only one of their usually multiple TV markets (and only one or maybe two stations at a time), many of which do NOT do live severe weather coverage, and no streaming Internet feeds from spotters/chasers (officially, though that rule does go by the wayside sometimes)...that's just wrong.
So in summary, I'll just say this: the National Weather Service is being told to do a job this spring for which it does not have the proper tools (nearly ZERO tools, in fact), a job that would be very difficult even under the best of circumstances. Hint: a dual-pol radar only counts as one VERY SMALL piece of the atmospheric puzzle. It is NOT the end-all for saying how bad a storm is! And I want to meet the person who thinks he or she can do impact forecasting with hardly any data. Yep, I smell "Yay! We did something! Now we can proudly announce it!" The "cure" may wind up being worse than the "disease"!
Even if changing the warnings to impact-based is the right way to go...and in part, I think it is...to do so without proper visual data to the forecaster trained to analyze a scenario quickly is just nothing short of crazy.
My $.02.
Gilbert Sebenste
Gil,
If someone fails to take me seriously because of a typo, they are just looking for an excuse to blow me off ... btw, I didn't "criticize" the IBW site for its misspelling. I just noted it. Thanks for noting mine ... it's been corrected
I certainly agree that the IBW 'experiment' participants are operating at a distinct disadvantage, but I don't think improving the technology at their disposal is going to make much difference.
I guess we're going to have to agree to disagree about IBW, though. I know of no compelling evidence that this is the path of change the NWS should follow in its efforts to improve warning effectiveness.
Evan,
Your question is unclear ... do you want the source for the "First, do no harm" rule, or the "tens of thousands of lives saved"? If it's the latter, I admit freely that there is no peer-reviewed source for this estimate. I have some evidence that indicates that tornado fatalities have declined substantially following the introduction of tornado forecasts/warnings ... see: Brooks, H.E., and C.A. Doswell III, 2002: Deaths in the 3 May 1999 Oklahoma City tornado from a historical perspective. Wea. Forecasting, 17, 354-361.
It is, of course, impossible to determine precisely what would have happened had there been no warnings for severe weather, but there are still good reasons to make such an estimate as made in my blog post.
part 1/2...
A few caveats up front… I don’t know the details of this NWS experiment; i.e. how it was devised, how it will be evaluated, what (and how) end users will be involved, what training will be given to the forecasters. However, after being part of the Met Service in Canada for some 35 years before retiring last year, and closely connected to operational forecasting and to users (especially in aviation), I have spent a fair amount of my time in this area. I believe it’s a step in the right direction.
Here are some thoughts, but keep in mind that it’s hard to discuss such ideas in only a few paragraphs.
As I am fond of saying… “Forecasts (and warnings) only have value when they influence decisions” (I believe I first heard this quote from you?!). To me, it’s paramount that met services act to bridge the gap between the forecasters and users of their products (and info). It’s *not* forecasters directly forecasting the impact of weather (that’s not their role), but they need to be aware of how weather info is used so that they can work with people whose responsibilities are affected by weather so that they can properly integrate MET info into their decision-making processes, and also so that their needs can be factored into the decisions on the forecasting desk, including when warnings are issued.
This means having awareness of users’ critical thresholds, decision points, the importance that users place on timing, on knowing the forecasters’ level of confidence, on their need for other scenarios that might develop, etc. In Canada, the MSC has worked closely with NAV Canada managers, people in air carrier dispatch operations, people in airport operations and others… to better understand how weather affects their operations, and thus where its aviation forecasters need to focus their attention.
For example, not all winds are equally important. Forecasters need to focus on winds that will change the active runway (based on crosswind limits), as this affects airport acceptance rates. That might mean in some cases that a difference of 10-20° or 3-5 knots is critically important and needs close attention. The MSC has conducted some experiments along these lines, including doing some cost-loss analysis. Sure, this type of work needs to be done thoroughly and objectively, especially if met services are establishing standards on it (such as level of performance), users are establishing procedures, and regulatory bodies are establishing operating rules.
The MSC’s work has had an impact; e.g. air carriers, following discussions with NC managers and MSC operational staff about significant storms, are repositioning their fleet and even cancelling flights ahead of time… it’s better to be proactive than get caught and have to scramble. To me such work is important, and needs to involve forecasters and users. Also business analysts?
TBC..
part 2/2...
Now this work has mostly been in aviation, where the users are more easily identifiable. Does the same effort apply to the world of public forecasters, where there are many different users? I would say yes, even though it’s harder to do. The NWS is already doing this to some extent, when it issues warnings re flash floods; i.e. it’s not the first-level parameter (rainfall rates and amounts), it’s the combination of rain and terrain that’s important to consider. And ultimately the impact.
In the States, is this a role for the NWS or should the private sector get involved? I do not know. I do believe that it’s important to the future role of the operational meteorologist. As numerical weather models and AI-based systems continue to improve, meteorologists will be unable to demonstrate that they can add much value to the computer-produced guidance. In Canada, we have accumulated a fair bit of info suggesting that the forecaster cannot add much value beyond Day-1, and that their role (and intervention) in forecasting needs to be focussed. Otherwise, their current role will diminish.
So I like to see that the NWS is venturing into this area, even if it’s not being done the best way. Just my thoughts. Your mileage may vary.
Steve,
I suspect we may not be so far apart as you might think. It is not that the IBW experiment is inherently bad, but it is premature to be testing these ideas via public warnings, especially before the social scientists have done their work about how forecasts might be made more effective. The design of the experiment is flawed and the timing is wrong.
As for forecasting impacts by knowing user thresholds ... is this not asking a lot of forecasters who are already challenged by what they were educated and trained to do: weather forecasting? Now you want to add many more issues for them to consider, and they have neither the tools nor the training to do these additional tasks. And this enhanced understanding of user needs can only be developed for a fraction of the population of users, so many users' needs will go unanswered. This could become very contentious! It is indeed much more difficult in the public sector vs. the limited aviation sector.
Private-sector forecasters could do a lot of the impacts-based warnings for clearly-identified clients. They have more flexibility and can grow to fill niches that require support unavailable from the public sector products. In fact, I'd prefer they accepted this responsibility rather than adding to the workload of public sector forecasters.
It's good to hear that we're not that far apart in philosophy; it's the "how" we get there that's important to do properly.
I remain convinced that the basic public forecast "product" (where the product will be a database of info, not a text bulletin or graphic) will (and should) become almost fully automated in the not-too-distant future, and that the role of the forecaster will shift to helping design the system, overseeing its operation (and interceding when necessary), focussing on warnings, and working with users to understand how best to use the info.
Yeah, it's hard for forecasters to be aware of and incorporate the needs of all users, and for sure the private sector will be engaged, but I can see forecasters working more closely with the media, EMS folks, and some major users... both before (planning) and during events.
I'm thinking that could be an exciting role in the future...
..steve ricketts
I can't say I disagree. Unfortunately one thing that we have learned is that very few users read the body of our products, they just see the name (TOR, Blizzard, Winter Storm, etc) and go from there. Adding more words will likely not change much.
I think there are a handful of changes that need to be made that the NWS does not want to tackle for various reasons.
1) Decrease FAR. This can be done a few different ways, but the main route, I believe, is to not be afraid to miss an occasional smaller tornado when the end result is much fewer warnings, a much lower FAR, and increased user confidence. One stat that didn't come out in the assessment you spoke of was that the office covering this area had a FAR of around 0.95 over the last 3-5 years. Yes, that's 100 TOR warnings with only 5 verifying. The other idea I'm not 100% certain on is to regionalize warnings to ensure that the best possible radar operators are always working severe weather events. This would more than likely eliminate local warning philosophies (like "if a storm rotates for 2 volume scans, go TOR regardless of environment").
2) Solve the radar horizon issue. There is likely no physical way of accomplishing this via beam bending or anything similar, so I think you are looking at a network of CASA radars on the order of 5-10 per CWA. This would still be far from perfect but would at least get us to the point where all lat/lons are covered down to about 3,000 feet.
3) Expand the WEA and discontinue the expensive NWR network, which reaches only a very small market of individuals. One huge benefit of WEA through cell phones is obviously a much larger audience, but I believe we may also be able to train our users to understand that unless their phone alerts them, they are not in a warning. One issue our users deal with beyond the normal FAR is the "perceived" FAR, meaning that the broad coverage the Weather Channel and local channels devote to severe weather can many times give the perception that you are "under the gun" when in fact storms are not threatening your location.
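For readers unfamiliar with the verification statistics being tossed around in this thread, the standard definitions of FAR (false alarm ratio) and POD (probability of detection) can be sketched in a few lines. The counts below are hypothetical, chosen only to roughly match the "100 TOR warnings with only 5 verifying" figure quoted above; the missed-event count is purely illustrative.

```python
# Hypothetical verification counts for one office over some period.
# Loosely based on the figures quoted in the comment above (~100 TOR
# warnings, only 5 verifying); missed_events is an assumed number.
warnings_issued = 100    # tornado warnings issued (assumed)
warnings_verified = 5    # warnings with a confirmed tornado (assumed)
missed_events = 2        # tornadoes that occurred with no warning (assumed)

false_alarms = warnings_issued - warnings_verified

# False Alarm Ratio: fraction of issued warnings that did not verify.
far = false_alarms / warnings_issued

# Probability of Detection: fraction of actual events that were warned.
pod = warnings_verified / (warnings_verified + missed_events)

print(f"FAR = {far:.2f}")   # 95/100 -> 0.95
print(f"POD = {pod:.2f}")   # 5/7  -> 0.71
```

The tension Kyle describes is visible right in these two ratios: issuing fewer warnings drives FAR down but risks driving POD down with it, which is exactly the trade-off behind "don't be afraid to miss an occasional smaller tornado."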
Thanks for the post...Kyle Weisser
Kyle,
Your proposed method for reducing FARs is asking warning forecasters to be able to predict the EF-ratings of tornadoes that may not yet have formed. This seems pretty challenging to me. I believe this would be better treated if the forecasters were allowed to make probabilistic warnings. Without a careful analysis of that office's problem with false alarms, I suspect the TOR probabilities in most of those false alarms were not ZERO, but perhaps did not meet some threshold probability and thereby would not trigger a warning.
As always, the asymmetric penalty function issue is present, forcing categorical warnings toward overforecasting. With probabilities, you can have a well-defined minimum threshold probability for a warning and you can achieve unbiased warnings rather than biased toward overforecasting.
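The cost-loss idea behind probabilistic warnings can be sketched in a few lines. This is not a description of any NWS product, just the textbook decision rule: a user should take protective action when the forecast probability exceeds their own cost/loss ratio C/L (cost of protecting vs. loss if an unprotected event occurs). All numbers below are hypothetical.

```python
# Minimal cost-loss decision sketch (hypothetical numbers throughout).
# With probabilistic warnings, each user acts when the forecast
# probability exceeds their own cost/loss ratio C/L, rather than
# everyone reacting to a single categorical yes/no warning.

def should_act(p_event: float, cost: float, loss: float) -> bool:
    """Act when the expected loss from doing nothing (p * loss)
    exceeds the fixed cost of taking protective action."""
    return p_event * loss > cost

# A user for whom protection is cheap relative to the potential loss
# acts on a low probability; a high-cost user waits for more certainty.
print(should_act(0.10, cost=1.0, loss=100.0))   # threshold C/L = 0.01 -> act
print(should_act(0.10, cost=50.0, loss=100.0))  # threshold C/L = 0.50 -> wait
```

This is why the asymmetric penalty function pushes categorical warnings toward overforecasting: a single yes/no product has to pick one implicit threshold for everyone, and the painful cost of a miss pulls that threshold toward very low probabilities.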
Good luck on solving the radar horizon problem in today's economy.
Weather Radio generally sucks as a means for warning dissemination but ... with any proposed change ... first, do NO HARM!
Actually, Chuck, I don't disagree with you in the strictest sense. I think IBW would be a good idea if the participants had the proper tools to do it. My argument is that it would require a VERY large investment to get there...and thus, it's not practical at this point. And then, even once you reach that point, will the current way of doing it be successful? I don't think we have the answers, because we keep throwing the people who would be able to tell us under the bus. What do you think?
Gil,
What I think is that we don't have any answers because, for the most part, we haven't done what it takes to obtain credible answers. What we have is speculation based mostly on anecdotal evidence. I'd be the last person to take issue with the notion of graded threat levels, but not before a lot of work gets done. I continue to oppose having weather forecasters try to forecast anything other than the weather!
Joel,
NWS management, including CRH management, never has paid much attention to what its working staff thinks. I know of no reason to believe that has changed. In fact, your comments match what I've heard (privately) from forecasters who face having their careers torpedoed below the water line if they say what they think, even in private correspondence with NWS management! The NWS has always been a top-down world ... I don't see that changing any time soon.