Hard on the heels of some unnecessary storm chaser fatalities, social media are now calling into question the use of the "particularly dangerous situation" (PDS) wording in watches, and the so-called impact-based warnings (IBW) that use terms like "tornado emergency" in them. I expressed my concerns about the initial IBW experiment a few years ago, here and here. The juggernaut of IBW has rolled on, nevertheless. A recent event involved a tornado headed toward a big city that triggered a "tornado emergency" warning, but the tornado dissipated soon thereafter, doing only relatively minor damage. It's precisely this uncertainty about tornado tracks and intensities that makes warning forecasters agonize over their decisions. And people in the "general public" often get upset, and some inevitably start whining about "crying wolf". [I've never understood the mindset of people who become upset about not experiencing a major disaster!]
The big issue now, as it always has been, is how to convey uncertainty so that people understand the real difficulties we face in issuing storm forecasts, and so that their decision-making actually benefits from the added uncertainty information. A major obstacle is that we as yet have no large dataset derived from interviews with a representative sample of the public. In the absence of such information, we're reduced to guessing how to improve things. NWS management feels the pressure to respond to the growing number of people who advocate the involvement of social scientists, but instead of supporting extensive survey efforts to create that representative sample, it offers hollow talk and ill-considered management decisions, like the IBW experiment. There remain many questions to ask in surveys: How is the current system working? What do people consider to be a "false alarm"? If we include uncertainty information, what's the most effective way for that information to reach and be understood by the most people? What if we had one component in our watches and warnings that caters to reasonably sophisticated users (like emergency managers) and a different component to reach the broadest possible audience? A watch or warning doesn't have to be either X or Y, after all - it could include a multiplicity of options.
Personally, I don't believe that wordsmithing watches and warnings is likely to be very productive. Words have a nasty and virtually inevitable tendency to mean different things to different people, and no specific choice of wording is ever going to be universally accepted. Even within a limited region of the nation, the diversity of the "general public" represents a serious challenge. Further, changing technology keeps reshaping the "social landscape," so what worked in the past may not work so well today or in the future. This isn't a one-time challenge we can solve forever with one big push.
We're beginning to realize that the use of PDS watches (and "tornado emergencies") may well result in people seeing "ordinary" watches as something less important than those given the PDS label. The verification of PDS watches is somewhat better than that for regular watches - evidently there's some skill in the choice to use (or not use) the PDS designation. The "tornado emergency" form of the IBWs doesn't have a very good verification track record at all: there are just too many storm-scale uncertainties for this product to exhibit much skill. Its failures stir up controversy, and there's no hard evidence that the IBW system has been a successful solution to conveying uncertainty. What people like or don't like doesn't necessarily track with what actually works to bring about some desired outcome.
That brings up another challenge: What's the outcome we desire? Do we really want to be telling people what to do, and seeking a magic bullet to make people do what we want them to do? Personally, I believe telling people what to do, say, via "call to action" statements (CTAs), is not a good idea. What people need to do depends on their specific situations, about which we as forecasters know nothing! People should develop their own specific action plans to meet the situations they're likely to experience in hazardous weather (at home, at school, at work, on the road, engaging in recreation, etc.) well in advance of the weather actually developing. With severe convective storms, there isn't time to make such preparations when the storm is only minutes away.
It's no secret that probability is the proper language of uncertainty. It's the optimum mechanism for conveying confidence in various aspects of the forecast. As an example of probability-based forecasts, I encourage people to review the "severe weather outlook" products from the Storm Prediction Center, which show what I call "graded threat levels" - the probability is derived subjectively and is associated with the confidence the forecasters have in their threat forecasts. Big numbers imply high confidence; small numbers imply low confidence. They actually have different probabilities for each of the three severe local storm event types - tornadoes, hail, and strong winds - and they distinguish the cases with a threshold level of confidence for "significant" severe weather: EF2+ tornadoes, hail 2+ inches in diameter, and winds of 65+ knots. This is not just a black-and-white statement that some event will (or will not) happen - it's a complex picture that forecasters deduce from all the information they have. There's a rather similar but less complex system for the severe thunderstorm and tornado watches.
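The structure described above - a separate subjective probability for each hazard type, plus a flag for "significant" severe criteria - can be sketched in a few lines of code. This is purely an illustrative sketch, not SPC's actual product logic; the probability bins and labels below are invented for the example, and only the hazard types and "significant" thresholds come from the discussion above.

```python
# Hypothetical sketch of "graded threat levels". Each hazard carries its own
# subjective probability, plus a flag for "significant" severe weather
# (EF2+ tornadoes, 2+ inch hail, 65+ knot winds, per the text above).
# The probability bins in threat_level() are illustrative, NOT official thresholds.

from dataclasses import dataclass

@dataclass
class HazardOutlook:
    hazard: str          # "tornado", "hail", or "wind"
    probability: float   # subjective probability, 0.0 - 1.0
    significant: bool    # confidence in "significant" severe criteria

def threat_level(p: float) -> str:
    """Map a probability to an illustrative graded confidence label."""
    if p >= 0.30:
        return "high confidence"
    if p >= 0.15:
        return "moderate confidence"
    if p >= 0.05:
        return "low confidence"
    return "minimal"

# A made-up outlook: three hazards, each graded independently.
outlook = [
    HazardOutlook("tornado", 0.10, significant=False),
    HazardOutlook("hail", 0.30, significant=True),
    HazardOutlook("wind", 0.45, significant=True),
]

for h in outlook:
    tag = " (SIG)" if h.significant else ""
    print(f"{h.hazard}: {h.probability:.0%} -> {threat_level(h.probability)}{tag}")
```

The point of the structure is that each hazard is graded independently, rather than collapsing everything into a single yes/no statement.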
This sort of product is what we need to develop for the short-fuse threats associated with warnings, but the difficulty I foresee is the rapidity with which severe convective storm threat probabilities change - they can increase or decrease markedly in a matter of a few minutes! It would take a very close monitoring and updating procedure to capture that variability, and even if it's technically possible (say, using automated algorithms), with the threat changing so rapidly, would users find the updates helpful or simply confusing? I suspect the latter. The recent event referred to above exemplifies this challenge. With a tornado headed toward a city, the "tornado emergency" call seems pretty obvious, but the reality is that the threat vanished in short order and the resulting forecast sure looks like a false alarm. The threat was real, but it went unrealized because of storm-scale uncertainty.
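To make the rapid-update concern concrete, here is a toy sketch of what an automated, minute-by-minute product might look like as a storm-scale probability swings. Every number and threshold here is invented purely for illustration; the point is how often the issued label would flip as the probability rises and collapses.

```python
# Toy illustration: a hypothetical tornado probability that surges and then
# collapses over six minutes (invented numbers), and an automated label that
# flips along with it. Thresholds are arbitrary, chosen only for the demo.
minute_by_minute = [0.10, 0.35, 0.60, 0.55, 0.20, 0.05]

def label(p: float) -> str:
    """Map a probability to a notional product tier (illustrative thresholds)."""
    if p >= 0.50:
        return "EMERGENCY"
    if p >= 0.20:
        return "WARNING"
    return "WATCH"

prev = None
flips = 0
for t, p in enumerate(minute_by_minute):
    cur = label(p)
    if prev is not None and cur != prev:
        flips += 1
    print(f"t+{t} min: p={p:.2f} -> {cur}")
    prev = cur

print(f"label changed {flips} times in {len(minute_by_minute)} minutes")
```

In this invented sequence the headline product changes tier four times in six minutes - exactly the kind of flapping that could confuse users, even though each individual update faithfully tracked the evolving probability.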
We have lots of work to do, and are not well-served by hasty decisions made more or less in ignorance of the relevant facts, both in meteorology and in the social sciences. The existing system has worked well for decades, despite its imperfections. If we make changes, let us be confident we aren't making things worse, rather than better!