
Friday, September 8, 2017

We've never experienced anything like it!

I write this as Hurricane Irma bears down on South Florida, with the potential to rank among the worst hurricane disasters Florida has ever experienced.  I also hear some people saying they have ridden out other hurricanes and so are planning on riding out Irma.  This ridiculous notion deserves some consideration ...

In my tornado research, I spent decades becoming familiar with the climatological record of tornado occurrences in the USA.  In the process, one can't help but observe that really big, bad tornadoes are but a small minority of the 1000 or so tornadoes that hit the USA every year.  In 1998, a tornado rated F3 hit Gainesville, GA in the early morning, killing 12 people in an event that was unusual in that it was not warned for in advance.  In the wake of that event, my colleague Dr. Harold Brooks was talking via the phone to an emergency manager in the Gainesville area, and she told him (I'm paraphrasing) that she had no idea things could get that bad in Gainesville!  Clearly, she didn't know anything about the last single tornado in the USA to kill more than 200 people - the tornado that struck Gainesville, GA on 06 April 1936 (part of a two-day outbreak including a single tornado that killed 200+ in Tupelo, MS the day before).  If a "generation" is roughly 30 years, this means that the institutional memory of that awful day in 1936 had been mostly lost, even in an agency devoted to preparedness, within roughly two generations!  My experience says that's pretty typical.  After a big disaster, awareness is high and people are receptive to the call for preparation.  But as time passes, people move away, people die, new people move in, and the local memory of disaster fades all too quickly.  Resources for event preparation are re-allocated to other projects.  Complacency grows.  All too soon, the disaster is mostly forgotten.  But the weather database doesn't ever forget.

Studying the climatology of hazardous weather gives researchers a mental model of dangerous storms that isn't widely shared by the general public.  While I was visiting Australia in 1989, there was a flash flood event in Melbourne.  It wasn't a major event, being confined mostly to urban flooding.  I watched a TV interview the next day with a couple living in the area hit by the flash flood, and they said "We've lived here for 9 years and we've never seen anything like this!"  So they apparently believed that living in Melbourne for 9 years was going to be representative of all the possible weather in Melbourne for the rest of eternity!  And this relatively modest event was a big deal for them!

It's understandable that non-meteorologists would fail to have an accurate understanding of the occurrence of rare events.  I'm not sure how to go about fixing this shortfall in our communication of science, but here, today, with the landfall of Irma in South Florida likely in the next two days, the complacency associated with people's flawed understanding of what is "typical" for their area seems to be influential in the choices some people are making.   Ignorance of such things almost never implies a blissful outcome. 

Immediately after a major storm disaster, people are likely to want to think of what happened to them as a "freak" weather event:  something unprecedented and very unlikely.  Being hit by a major storm is a relatively rare occurrence, but calling it a "freak" event is misleading and counter-productive.  If you're familiar with the climatology of tornadoes, you know that someplace (and possibly someone) is going to be hit by a violent tornado virtually every year!  Violent tornadoes are rare in any one place, but they aren't "freak" events, somehow outside the range of our human experience.  Of course, they likely are outside of your own personal experience!  You could live in central OK and not be hit by a violent tornado in 1000 years; on the other hand, Moore, OK has been hit by violent tornadoes in 1999, 2003, 2010, and 2013!  The distribution of tornadoes has all the signs of being "random":  being truly random doesn't mean the events are spread more or less uniformly.  Instead, random spatial distributions have both clusters and voids.  If we had enough data (at least 1000 years worth), we might have a very clear picture of the climatology of violent tornadoes, at least in central OK.  But we don't have much reliable data on tornadoes before, say, 1953.  The more rare the event, the more data are required for a meaningful analysis of the danger.

It's also likely that our knowledge of Cat-4/5 tropical cyclones is similarly flawed.  A much longer period of record is needed for an accurate picture to emerge.  The climatology of major events determines such numbers as the "return period" for these events.  The more data one has, the greater the number of major events in the database, and the less the "return period" calculation rests on iffy extrapolation and the more it rests on reliable information.  The notion of "return period" is widely misunderstood by the public, but that (obviously) is off-topic for this blog.

Folks, it's just not helpful to have faith that your personal experience with storms includes everything that could possibly happen to you.  The more intense the storm, the less useful your experience becomes.  This affects the decisions you make in advance of being hit by a particular storm, and your decisions will determine such issues as whether or not you live through it.  Your choice will affect your family and perhaps even your friends.  Don't trust your knowledge of the past to be helpful - listen to what the forecasters are saying and take it seriously!  Lives hang in the balance!

Saturday, August 12, 2017

Construction practices going in reverse?

Recently, it seems that some politicians in the state of Florida are attempting to weaken the enhancements to building codes put in place following the massive disaster of Hurricane Andrew in 1992.  The hurricane revealed the vulnerability of homes built to low standards and the idea was to prepare for the inevitable return of a strong hurricane to Florida.  This current effort to weaken the codes is being led by the GOP, and it seems likely that the pressure to do so is coming from the homebuilders, who are essentially the only group that stands to gain from weakening the building codes.

Natural hazards like hurricanes and tornadoes have a tendency to fade from people's memories with time.  Immediately after a disaster, there's widespread support to do something to reduce the impact of the inevitable return of that hazard.  Sometimes, this is referred to as "closing the barn door after the horses have escaped."  Unfortunately, with the passage of time, the enthusiasm for preparing for the next hazard begins to fade.  Other ways to spend resources become a higher priority than hazard preparation.  In my experience (with tornadoes) the collective memory of disasters in communities virtually disappears within roughly 2 generations - about 60 years.  People live under the false assumption that what they've seen in their lifetimes in their location up to that point is pretty much how things will go for the rest of their lives.  Natural hazards are rare in any one place, but it's only a matter of time before they strike again.

For people who experience for themselves the horrors of a natural disaster, the memories often are still vivid decades later.  But survivors move elsewhere, older victims die, and people who move in afterwards generally haven't experienced what the survivors and victims experienced.  In our reanalysis of the Tri-State tornado, we found that the stories told by survivors are widely regarded locally as unreliable and exaggerated, whereas in our interviews with survivors, many of those stories could be corroborated by independent evidence!  I suppose it's something of an "inconvenient truth" to learn that natural hazards can be so devastating in the place where you live.  The unpleasant reality is that if an event has happened at least once in some area, there's no reason to believe it won't happen again.  Low probability does not equal zero probability!

Interestingly, over much of Europe, building construction standards are substantially higher than in most of the USA.  This can be seen directly in the degree of damage when tornadoes in Europe hit human structures; equivalently strong tornadoes in Europe do less damage than in the USA!

Think about the relationship between construction practices and the lethality of, say, a violent tornado or a powerful hurricane.  What's responsible for most of the fatalities in a tornado?  It's flying debris ... broken 2x4s, shingles, tree branches, sometimes even cars!  There's a kind of mythology that says there's no point in strengthening buildings that might be hit by a tornado, because no affordable construction can withstand a tornado, right?  No, that couldn't be more wrong!

The cost to enhance structural integrity beyond the existing code standard of 90 mph in most of the USA, when amortized over the life of a 30-year mortgage, is pretty small.  What the builders don't like is that it takes more time to build a better home, and that is what reduces their profit.  If they can build 10 shoddy homes in the time it takes to build 6 well-constructed homes, that's where they make their gains.
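
To put a rough number on that claim, here's a minimal sketch of the arithmetic using the standard fixed-rate mortgage payment formula.  The ~$1000 figure for clips, strapping, and proper anchoring is the one that comes up later in this blog; the 4 percent interest rate is simply an assumption for illustration.

```python
# Rough illustration (not a quote): monthly cost of ~$1000 in structural
# enhancements amortized over a 30-year fixed-rate mortgage.
# The $1000 figure and the 4% rate are assumptions for this sketch.

def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate mortgage payment formula."""
    r = annual_rate / 12.0          # monthly interest rate
    n = years * 12                  # number of monthly payments
    return principal * r / (1.0 - (1.0 + r) ** -n)

added_cost = 1000.0                 # assumed cost of clips, strapping, J-bolts, etc.
extra_per_month = monthly_payment(added_cost, 0.04, 30)
print(f"Added monthly cost: ${extra_per_month:.2f}")   # roughly $4.77 per month
```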

First of all, even in a violent tornado (i.e., one rated EF-4 or EF-5 on the enhanced Fujita scale), the most violent winds are experienced in only a small fraction of the total damage path of a tornado - typically less than 10%.  Those areas experiencing EF-3 winds or less would suffer considerably less damage if their structural integrity were enhanced over what is typical construction in the US.  Decreasing damage means less flying debris.  Shoddy construction increases the potential death toll, as well as increasing the destruction.  In most of the US, the building code requirements are such that the building should experience no structural damage at windspeeds of 90 mph or less.  The fact is that most wood frame homes built in the US are built below the code requirements, sometimes far below.  Code enforcement is pretty often woefully inadequate.  The cost of a home isn't a very good indicator of construction quality, unfortunately.  Local communities often give in to pressure from developers and homebuilders, passing laws to allow "exemptions" from code-prescribed building practices.

When subjected to powerful winds, structural failures begin with the weakest component in the structure - often the attachments of the roof to the walls and/or the attachment of the walls to the foundation.  A tornado with a peak wind of 90 mph falls near the bottom of the EF-1 category; thus, even a weak tornado can cause structural damage under this building standard.  Once structural failure begins, further failures are likely - a home can be "unzipped" starting from one initial weak point.  Further, a 90-mph wind can push a home off its foundation when the walls are poorly attached - we call such homes "sliders" because they can be slid off their foundation and then utterly collapse.  Such a home can be totally wrecked by a 90-mph wind!

The building code requirements in Miami under the enhancements after Hurricane Andrew are on the order of 120 mph before structural damage will occur.  That wind speed falls about in the middle of the EF-2 category, such that much of the area experiencing EF-2 winds will have only marginal structural damage.  The area of EF-2 or less wind speed includes the majority of the damage path in even a violent tornado.  Even EF-3 winds will produce less damage with the enhanced code.

For Florida to weaken its building codes is to return to a time of lowered resistance to damage, likely resulting in more casualties.  That some of the politicians in Florida are seeking legislation to lower the standards is an indicator that the homebuilders are using their political influence to lobby the state government for the benefit of their profits.  Who else benefits from lowering the construction standards in Florida?  Weakening construction standards is an idea that should be nipped in the bud!

Monday, July 17, 2017

It struck without warning!

A recent fatal flash flood incident has led me to think over the topic of media coverage of weather-related incidents.  We in the "tornado community" frequently hear interviews with the public to the effect that tornadoes have hit somewhere "without warning" when the facts are that the National Weather Service (NWS) had indeed issued a warning.  Clearly, what this statement by some victim reflects is that she/he didn't hear that warning (or ignored it!) and then was unfortunate enough to be in the tornado's path.  I suppose they think that it was someone's responsibility to notify them personally that they were going to be hit.  First of all, it's not the responsibility of the NWS to notify personally everyone in danger.  Second, although technology might eventually make personal warnings possible, at the moment it's pretty much impossible to notify everyone who will be affected (and no one else).  The NWS might have the means to contact individuals with warning information, but the state of the art of forecasting simply doesn't permit 100% accuracy regarding who will and who won't be in the damage path of a tornado.

Most fatality-producing tornadoes these days have warnings issued at least a few minutes before someone is struck, and sometimes the lead times can be as much as an hour!  Is an hour's lead time too long?  This is a debate within the tornado community that's not yet settled and clearly requires the involvement of social scientists.  But just for the sake of the argument, let's consider some things about how warnings can be effective in reducing casualties:  for an issued warning to be effective, it requires a chain of events.  The user must

1. receive the warning by some means
2. understand what information the warning provides
3. know what to do with the warning information
4. believe the warning is relevant to him/her
5. take effective action based on the warning

All of the links in that chain must hold, or the warning will not be effective.  In the case of the recent flash flood, the people in the path of the flood evidently did not receive the warning.  It hadn't even rained at the location where the fatalities occurred - the rainfall was miles away upstream.  This is not uncommon when hiking and camping in the wild, away from TV and cell phone coverage.  If people are to recognize the danger signs without the benefit of hearing the warning, they must have experienced one or more similar events (unlikely) or have been given training in heavy-rainfall situation awareness (also unlikely).  Flash floods have a special handicap relative to tornadoes:  most everyone has experienced heavy rainfall without a flash flood, whereas most people have never been hit by a tornado.  Rain seems "normal" and not very threatening, whereas a tornado is "exotic" and would automatically be seen as a threat.

A more extensive treatment of the chain of events needed for weather warnings to be effective can be found here.  There are many ways for this chain to be broken, often leading people to think that the event struck them without warning.  In the interest of their own safety, weather warning recipients should make it their personal business to learn situation awareness with respect to potential weather hazards.  The unfortunate part is that many users won't take the relatively simple steps necessary for their own safety, and seem to expect that it's solely someone else's responsibility to protect them from weather hazards.

And the fact that some particular hazard is relatively rare where the user lives and works and recreates doesn't mean the threat is non-existent.  Tornadoes are infrequent in New England, for example, but violent tornadoes can and do occur there, even though the danger is not high most of the time.  Thinking it could never happen to you is the first step toward a personal disaster.  The weather is not malevolent or evil;  it's just indifferent to what we puny humans do or don't do.  At times, we find ourselves in the path of a potentially fatal hazard.  Being prepared is a personal choice;  it's no one's responsibility but your own.  The NWS does its best, but there are still times when they fail to issue a warning, or issue the warning too late to be of much use to at least some people.  That's the state of the art, and it should not require much to understand that a warning may not be issued sometimes.  Then, public safety depends on good luck and proper situation awareness;  i.e., recognition of danger signs even in the absence of a warning.

Wednesday, July 13, 2016

On disagreement's role in science

Some discussion has arisen about whether or not meteorologists who are not climate scientists can contribute to the discussion regarding global climate change.  A while back, I wrote a series of short essays about religion that I called "Leading Horses to Water".  This is one of those essays that I believe represents something we meteorologists who are not doing climate research can contribute to any discussion of global climate.
__________________________________________________
Previously, I’ve talked about the apparent controversy surrounding the science of global climate.  The media have put out so much misinformation regarding this topic, it’s hard to imagine how the communication between the scientists and the general public can ever recover.

One of the most egregious pieces of misinformation being put forth in the media is that there is much controversy within the science regarding the main issue:  that the global average temperature is increasing, and that the human contribution (the so-called anthropogenic part) through the emission of greenhouse gases is a major causative factor in that temperature increase.  This alleged controversy is being used to support the notion that the consensus science somehow is bad science.  The level of scientific agreement about these basic ideas is nearly unanimous.  But of course, what most people don’t know, and what the media seem incapable of grasping and thereby conveying to the consumers of their rubbish, is that disagreement is an essential and never-ending component of any science!  Disagreement continues within the science, even among those who agree about the consensus findings regarding anthropogenic global warming (AGW).

Any rational argument must come out of a basis of agreement.  Without that, all one has is people talking past one another.  The basis of agreement in science can be referred to as the scientific consensus.  It establishes certain principles and bodies of evidence as having a special status.  Most scientists accept the consensus.

In a very real sense, every scientist is a salesman for his/her own ideas, competing in a “marketplace” of ideas, with the winners being given credit for improving our understanding of the natural world, and the losers being left to try to salvage what they can.  This is a perfect example of a rational free market, actually.  Ideas compete not on the public relations image, or a catchy advertising gimmick, or on pandering to the psychology of investors, but on the evidence that supports them.  If one idea provides a better fit to the evidence, then it wins a temporary victory.  I say temporary because new evidence can revive old, discarded ideas and push them to the forefront long after they were first proposed.

Science makes progress precisely because there is disagreement.  Without internal disagreement, a science is cold and dead.  Just because an individual’s idea is discarded in the marketplace of ideas (from which the so-called “consensus” emerges), this doesn’t mean that he/she slinks away utterly defeated.  A “loser” in the marketplace can redouble their efforts to uncover more compelling evidence, seek to devise an experiment that can provide a more stringent test of the ideas, or try to make a slight modification to their discarded idea to provide an improved fit to the evidence.  Ideas may be defeated now but can emerge later as new (but still provisional!) winners.  When no clear winner emerges, a host of competing ideas clash in the marketplace.  This is healthy, not some sort of scientific malaise.  Scientists improve their ideas by the criticism of their peers, and the science advances through that process.

Science establishes no idea on an absolute basis – science is not a religion, after all.  There are no sacred truths, no meaningful arguments by authority, no ultimate arbiter.  Its most respected ideas are promoted from their original status as hypothesis to theory to law, but even laws can be superseded.  Newton’s Law of Gravity was supplanted by Einstein’s Theory of General Relativity, for example.

The AGW deniers, a tiny minority within the global climate science community and mostly confined to non-participants in global climate science, have failed to gain much traction in the traditional venue for scientific controversy: peer-reviewed scientific journals.  Thus, they’ve resorted to using the public media, advancing themselves as the true scientists, victimized by a vast conspiracy within the global climate change scientific community.  There are political and economic ramifications to maintaining the illusion of a scientific controversy regarding AGW, so there are others seeking to promote the deniers as persecuted champions of truth, when the fact is the whole campaign is a tissue of lies and deceit.  There is no scientific controversy regarding AGW, per se!

The disagreement you read and hear from demagogues disguised as pundits in the media is not the wholesome, necessary conflict among those scientists who are pushing the frontiers of our understanding of the natural world forward.  The disagreement being promoted by the media springs from those who dislike the reality of AGW for their own reasons, often pecuniary or political or both.  The mere existence of disagreement in science is not news, nor does it indicate anything wrong with the science.  It’s the natural state of a healthy, active science.  But this public conflict, outside the traditional place for the marketplace of scientific ideas (in refereed journals and scientific conferences), is not about the normal scientific disagreement.  It’s about personal agendas, about politics, and corporate greed advancing its interests above the public good.  Remember the pseudo-scientific conflict about the health effects of smoking?  Perhaps you should ask yourself who gains from the promotion of claims about non-existent scientific controversy!  Is it the science?  Is it the public?  I think not.

The public has a right, nay, a need to know the truth, but people have to work and think to separate truth from falsehood, science from pseudo-science, real disagreements from manufactured false controversy.  They need to learn how to recognize the demagogues and reality-distorters from those who are attempting to help us all make important decisions for the future.

Monday, June 6, 2016

Thoughts on decision-making in the face of uncertainty

Many people struggle with the notion that weather forecasts are uncertain.  They have to make binary decisions in the course of their lives:  go on that picnic, bring that umbrella, pour that concrete - or not do those things.  Weather plays a role in many such decisions, and people seem to know that forecasts are not perfect and never have been, but they persist in being upset when a decision they based on the forecast doesn't pan out, at least as they understood that forecast.

Perhaps I'm oversimplifying this, but it seems to me that the real challenge with decision-making in the face of uncertainty is the absence of accurate uncertainty estimates.  If weather forecasts were always wrong, you could always do the exact opposite of what the forecasts say and have it work very effectively for you - a permanently, completely wrong forecasting system would be just as valuable as a permanently, completely right forecasting system!  A forecast need not be perfectly accurate to be of value to users!

Of course, no weather forecasting system is perfect and there never will be such a perfect system.  If you know the uncertainties in the forecast, there are techniques by which you can manage your decision-making so as to optimize your results.  That optimization incorporates knowledge of both the losses incurred when you take no action and the weather event actually occurs, and the cost of taking that action.  This is called the cost-loss problem.  If the cost of taking some action to prevent losses from some weather event exceeds the losses if the event occurs, it makes no sense ever to take such an action.  Different circumstances demand different decisions.  Optimizing the results of your decision-making requires you to have knowledge of your costs and losses, in addition to an accurate estimate of the uncertainties.
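
In its simplest form, the cost-loss problem reduces to a one-line rule:  take protective action whenever the probability of the event exceeds the ratio of the cost of acting to the loss avoided.  Here's a minimal sketch with invented numbers (the $200 cost and $5000 loss are assumptions, not anyone's real figures):

```python
# Simplest form of the cost-loss decision model: act whenever the forecast
# probability p of the adverse event exceeds the cost/loss ratio C/L.
# The numbers below are invented purely for illustration.

def should_act(p, cost, loss):
    """Expected expense of acting is always `cost`; of not acting, p * loss.
    Acting minimizes expected expense when p > cost / loss."""
    return p > cost / loss

cost = 200.0    # cost of protecting the concrete pour (assumed)
loss = 5000.0   # loss if rain ruins an unprotected pour (assumed)
for p in (0.02, 0.05, 0.40):
    choice = "take action" if should_act(p, cost, loss) else "no action"
    print(f"PoP = {p:.2f}: {choice}")
# The threshold here is C/L = 0.04, so even a 5% PoP justifies acting.
```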

Sadly, it's well known that people often have difficulty knowing the true risks associated with hazards.  For example, although tornadoes are very scary to many people, the reality is that the probability of being killed in a tornado is pretty low.  There are much greater risks associated with, say, food poisoning in fast-food restaurants, or driving motor vehicles.

Most people struggle with understanding the probabilities related to weather uncertainties, but they have a reasonable idea of some uncertainties.  For example, uncertainties tied to their jobs are usually more or less familiar to the workers.  If your work involves manufacturing something, you usually know about the likelihood of producing a defective product.  Similarly, uncertainties related to your home are often reasonably well-understood.  You generally know something about the chances that your water heater will fail.  When the uncertainties pertain to the weather, most people generally have no idea what those probabilities might mean and how to use them to make choices.  It's not that they need to know abstract probability theory to begin to grasp what weather probabilities (the language of uncertainty in weather science) mean.  Most forecasters never did very well in probability and statistics!  But people can use a concept effectively even when they don't actually follow the abstract mathematics.  Card counters in blackjack are making effective use of their knowledge of uncertainties - so well that casinos don't allow card counters to play!

Part of our problem is that traditionally, weather forecasts have not been expressed in probabilistic terms.  The use of probability of precipitation (PoP) was introduced in the mid-1960s but there never was any sort of public information campaign to help forecast users understand them - an awful oversight!  Curiously, even many forecasters don't know the proper definition of PoPs, although with experience and some feedback, they can become very adept at estimating their uncertainties in terms of PoP.

The end result of having little or no understanding of weather forecast uncertainty - and all forecasts are uncertain to a greater or lesser extent - is that forecast users will develop all sorts of heuristic methods for making choices.  Many of these are likely to be rather less than optimal use of the information the users have.  And apart from PoP and some severe weather forecasts, uncertainties are not mentioned in forecast texts and broadcasts.  Since that information is known, at least in the minds of the forecasters, this amounts to withholding needed knowledge from the users!  If we don't include the uncertainty information in some form, users must guess about that uncertainty, and their guesses often are wildly incorrect, such as thinking that forecasts are "wrong" the majority of the time.

Users can handle decision-making in the face of uncertainty only when they know the uncertainties reasonably well from being familiar with them.  Unfamiliar uncertainties (as are those in weather forecasts) are inevitably mysterious and are a source of anxiety in decision-making as well as the source of cynicism about the forecasts.  People demand, unreasonably, that forecasts express the weather in binary terms - this event either will or won't happen - even though they must already know forecasters can't do that very well all the time.  What they evidently want is for weather forecasts to make their complex decisions (including much information that forecasters can't know anything about) for them.  Forecasters simply can't and shouldn't do this.  We need to help our users understand more about our uncertainties or this situation will never improve, and users will continue to get less value from weather forecasts than what forecasters are capable of providing.

Thursday, March 3, 2016

School closings in tornado hazard situations, Part 2

So now, it should be clear that the NWS tornado forecast products ... from outlooks to warnings ... cannot be considered 100% accurate in all respects but will always involve uncertainty.  Furthermore, decision-makers must consider other, non-meteorological issues in making their choices for how to react to a given situation, so it makes no sense to have rigid rules for what choices to make.  Decision-makers must, therefore, invest considerable effort in "situation awareness" - they have to be deeply committed to staying informed about what is always an evolving situation.  The ultimate proper choice (i.e., the ex post facto "right decision") can change literally from one minute to the next as a tornado event unfolds.

With regard to school closings, what are some of the non-meteorological factors involved?  I make no claim to be able to list them all here.  A big factor concerns the time of day.  If the school is closed on the basis of the forecast/warning, should the children stay at school or go home?  If the school closes early and the children are sent home, will one or both of the parents of the children be at home?  What is the state of construction quality associated with the children's homes - do they live in a mobile home or a flimsy frame home or a multistory multiple-family home?  How much time before an approaching tornado hits the school?  Are the kids in class, or at recess outside, or waiting for buses to take them home, with parents waiting for some of them (or on the way to pick them up)?

What sort of protection does the school actually offer, and will it be adequate for a strong or violent tornado, should they be unlucky enough to be in the path of such a storm?  Does the school have a tornado plan?  Assuming they have one, has the school's tornado plan been vetted by structural engineers and/or meteorologists so that it's known to be the best they actually can do with the existing structure?  Is adequate shelter available anywhere in that school, and who decided it was indeed adequate?  If the school has sheltering inadequacies, can they afford the necessary modifications, up to and including purpose-built tornado shelters?  I've seen plans at schools that are quite flawed and could eventually lead to a disaster.  I've seen schools that, without structural changes, have no location capable of holding the entire population of the school that would provide adequate shelter - only the least bad among all their sheltering options.

Have regular tornado drills been done at least once per year?  Are there means by which a school's decision-maker can be situation aware during a volatile weather situation - a weather radio and/or some internet connection that is being dedicated to weather situation awareness?  Is the decision-maker trained well enough to make such difficult life-and-death decisions in the face of a complex, rapidly-changing hazard?  Does the decision maker understand all the options and know their weak and strong points?

Moreover, as discussed in my first post on this topic, the tornado threat changes continually.  But the vulnerability of some segments of a school's population varies.  Physically handicapped people require more time to reach and enter shelter locations than the able-bodied, so they might have to commence their tornado precautions earlier than the rest of the group.  Has all that been accounted for in the tornado plan?

Although this discussion is about school closings in particular, many similar statements are valid for churches, businesses, shopping malls, recreation areas, entertainment venues, and so on.  For none of them is it trivially obvious what choices a decision-maker might have.  Schools in session have been  hit infrequently over the years, fortunately, but when they are hit while in session, the results can be tragic.  And the same goes for all the other public and private locations where people might be concentrated in relatively high numbers.  How many of those places have a tornado plan that's familiar to the occupants and easily implemented on relatively short notice?  How many even have a person designated as the tornado decision-maker (i.e., an emergency manager) who is trained and equipped for the task?  What if their designated decision-maker isn't there for some reason - do they have a properly prepared backup?

If the goal is to make the nation "weather ready", it's going to require a lot more than a few catchy slogans.  The certification of weather readiness requires some stringent milestones, not just a few simple requirements.  Being truly weather ready is a complex task that has many facets to be considered.  A knee-jerk response based on some simple criterion (such as being in a tornado watch or not) is not really demonstrating practical weather readiness or adequate preparation.

Wednesday, December 30, 2015

US Building construction practices, revisited

In 1999, after the 03 May tornado outbreak of that year, I wrote a web essay on home construction practices and how that affects wind damage.  The recent December tornado events have re-awakened this topic, and it seems appropriate to offer some additional remarks after 16 years have passed with virtually no comprehensive change in construction practices.

The damage surveys I (and others) have done since then have continued to reveal not only the inadequacy of existing building codes, but also just how widespread violations of existing building codes are.  The existing codes remain, for the most part, pegged to a 90 mph standard for resistance to structural damage from winds.  The operational EF-scale puts 90 mph winds (a 3-second gust) at the low end of EF-1 tornadic winds (86-110 mph).  According to this standard, an EF-1 tornado (or anything stronger) is considered to be capable of initiating structural damage.  This isn't a very good standard for most of the United States east of the continental divide.  If a home is poorly anchored to its foundation (which is, unfortunately, all too often what is observed in American frame homes, despite such practices being below code standards), an 80 mph wind might well be able to slide it off the foundation, resulting in total loss of the home.
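
For readers who want the wind ranges in one place, here's a minimal lookup sketch.  The EF-1 and EF-2 ranges are the ones cited in these posts; the rest are the standard operational EF-scale values.  It simply confirms that a 90 mph design standard sits near the bottom of EF-1, and a 120 mph standard near the middle of EF-2:

```python
# Operational EF-scale wind ranges (3-second gust, mph).
EF_BINS = [(65, 85, "EF-0"), (86, 110, "EF-1"), (111, 135, "EF-2"),
           (136, 165, "EF-3"), (166, 200, "EF-4"), (201, float("inf"), "EF-5")]

def ef_category(gust_mph):
    """Return the EF category whose wind range contains the given gust."""
    for lo, hi, label in EF_BINS:
        if lo <= gust_mph <= hi:
            return label
    return "below EF-0"

print(ef_category(90))    # EF-1 -> the common 90 mph code standard
print(ef_category(120))   # EF-2 -> roughly the post-Andrew Miami standard
```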

The reason for the widespread occurrence of code violations (in all buildings, including schools, not just homes) is simple.  Although structural enhancements can be added to new construction for about $1000 or so, the real issue is the time it takes to add those clips and strapping (see Fig. 1) to the frame and roof.  The added cost to the homeowner (passed on by the builder), amortized over a 30-year mortgage, is trivial.  For homebuilders and contractors, the key to profit is speed of construction.  In far too many cases, this means "shortcuts" are taken by the builders.  For instance, the code standard for attaching the wall plate to the foundation is the use of "J-bolts" embedded in the concrete, with washers and nuts tightened onto the threaded end of the J-bolt (see Fig. 1).


Figure 1.  An example of strapping used to attach a wall stud to the wall plate and the use of a washer and nut to attach the wall's bottom plate to the concrete foundation via a J-bolt.  Toe-nailing the stud to the bottom plate is poor practice (but acceptable by most current building codes) because it offers little resistance to forces acting to lift the wall.  The strapping uses nails (or screws) that are at right angles to lift forces, thereby creating much more resistance to those forces. 
Image courtesy of Tim Marshall. 

Surveys after tornadoes reveal too many examples where builders have installed the J-bolts, but failed to attach the washer and nut to the end!  That makes the J-bolt completely useless but of course it saves time!  Another common practice is for builders to be granted an "exemption" (by local governments) from codes requiring the use of J-bolts, allowing builders to use powder-driven cut nails for attaching the wall bottom plate to the concrete foundation.  This results in an extremely weak attachment (Fig. 2) when lift forces are applied to that attachment, as they are in tornadoes.



Figure 2.  A powder-driven cut nail left behind in the foundation after the wall bottom plate was torn away.  Note the damage to the concrete caused by the process of driving the nail through the board into the concrete.  Sometimes this shatters enough of the concrete to utterly negate the attachment of the nail but it would not be visible to the builder.  Image courtesy of Tim Marshall.

In my surveys, I've seen that shoddy, sometimes appallingly weak construction practices are widespread.  Buying an expensive home is no guarantee of high quality construction.  We see about as many code violations in expensive homes after tornadoes hit as we see in low-cost tract homes.  We also have seen that in many (not all) cases, the homes rebuilt after tornadoes are no better constructed than the destroyed homes they replaced.  The lessons learned from previous tornado events evidently are not being used widely to change construction practices.

If the standard for wind resistance to structural failure were raised in the tornado-prone parts of the US (everything east of the continental divide) to 120 mph (in the middle of EF-2 tornado winds - 111-135 mph), this would result in a substantial reduction in tornado damage.  Structural damage would primarily be associated with EF-3+ tornado events.  Flying debris in tornadoes from structural failures initiates considerable additional damage and increases the casualty risk to anyone caught in the tornado (Fig. 3).  Enhanced construction codes would thereby reduce tornado damage considerably and also lower the casualty risks.



Figure 3.  Damage from the 03 May 1999 tornado in Oklahoma City/Moore, OK.  Note the prevalence of broken 2X4 timbers within the debris - these are structural frame elements that may have been carried considerable distance by the winds.  FEMA image taken by C. Doswell.

Note also that even in the strongest tornadoes (EF-5), only a tiny fraction of the damage path experiences the strongest winds (at most only a few percent of the total damage area).  And EF-3+ tornadoes represent only about 10% (or less) of all tornadoes.  Limiting damage to parts of a tornado track with EF-3+ winds would represent a significant reduction in the amount of structural damage, and reduce casualties, as well.

At the very least, there's a need for more rigorous enforcement of building codes.  It would be an important advance if all buildings actually were built according to the existing building codes, to say nothing of additional benefits from enhanced code requirements.  We as a nation need to be benefiting  from our experiences with tornadoes, not ignoring the lessons they've provided.

Monday, December 28, 2015

Just who is responsible for your safety in tornadoes?

Media coverage of the late fall/early winter tornadoes that have been occurring this year includes the entirely too common effort to seek out and give voice to those victims who say "the tornado struck without warning", despite the facts almost always showing precisely the opposite.  In most examples, timely forecasts and warnings were issued in advance of the event, sometimes literally days in advance!  I've seen this happen almost without fail every time we have major tornado impacts - over the course of my 40-year career, this has been an element of media coverage in virtually every example.

I consider this to be irresponsible reporting.  According to their clearly biased view of things, if people actually received a warning, that's not news.  It's only newsworthy when the perception is that the meteorologists dropped the ball and failed in their responsibilities.  The media should be ashamed for perpetuating the public misconceptions about the weather information meteorologists are providing for them.  So how does this false perception arise so frequently and consistently?  Why can such interview subjects be so readily found?

I've not done the surveys and research, but it seems pretty clear to me that when people interviewed by the media make the counterfactual "it struck without warning" statement, what they mean is that no one contacted them personally and told them that they needed to take shelter.   Sometimes they say that they didn't hear any tornado sirens, as if that's the only medium by which warning information can be conveyed.  Sirens are intended to reach people who are outdoors in the vicinity of a siren - they're not the best or most important mechanism for disseminating tornado warning information!

What this mostly indicates to me is that some people - often those whose story is featured by the media - are simply not accepting any responsibility for their own personal safety.  In many cases, there's information about tornado threats that's available many hours, and even days, in advance of the storms.  The main point of providing this information is simply to let people know that there's a heightened tornado threat and that they need to maintain situation awareness during the time before storms develop and approach them.  Although such forecasts are not inevitably followed by major tornado events, they're sufficiently accurate to serve their intended purpose of helping people be prepared should the need arise.  Even in a major tornado outbreak, most people are not affected.  It's only the unlucky few who find themselves in a tornado's path.  But everyone should be responsible enough to keep up with the developing weather situation in cases of an enhanced threat level.

Once storms develop, the warnings are issued - in most cases, at least 15 minutes or so before the impact of the approaching tornado.  The warnings are not perfect and many of them turn out to be false alarms.  As noted, even when tornadoes do occur, the vast majority of people are not struck.  The state of the science simply won't permit perfectly accurate, extremely precise warnings and the number of perceived false alarms can only be reduced slightly by applying the best knowledge science has to offer.  The cases with the types of storms that produce the powerful tornadoes responsible for most fatalities are already handled pretty well by the forecast/warning meteorologists.  The primary problem with reducing false alarms is that it increases the likelihood of failing to warn for a tornado.  So what do people want?  Relatively frequent false alarms, or relatively frequent failures to issue a warning for an actual tornado?  Those are the only options.

It's not now possible, nor is it ever likely to be possible, to issue tornado warnings only for people who eventually will be struck by a tornado.  That's an ideal very far from the reality of what we meteorologists can do.  Even given that, however, it's evident that tornado forecasts and warnings have been saving lives here in the US since they began in 1952.  That means many thousands of fatalities have been prevented by a system that isn't even close to perfect!  While the forecasts and warnings can be improved with improved science, the existing products are not "broken"!

There's an asymmetry in the penalty function for tornado warnings.  There are ZERO tornado fatalities in a false alarm!  However, failing to issue a tornado warning for a fatality-producing tornado has a much higher penalty.  The result is a tendency to over-warn.  Tornado warnings are biased toward overforecasting tornadoes because meteorologists have a binary decision to make:  a warning forecaster either warns or s/he doesn't warn.  It's possible to reduce the bias for overwarning by issuing warnings with graded threat levels - in effect, a probabilistic threat forecast - rather than a yes/no forecast.  But such a system has yet to be implemented operationally, in part because of public demand for a virtually nonexistent certainty regarding what will happen in the weather.
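
To see how that asymmetry plays out, here's a minimal sketch of the binary warn/no-warn decision with invented penalty values.  The warn threshold works out to the false-alarm penalty divided by the sum of the two penalties - a small probability whenever misses are penalized far more heavily than false alarms, hence the tendency to over-warn:

```python
# Sketch of the asymmetric penalty argument for over-warning.
# Penalty values are invented for illustration only.
false_alarm_penalty = 1.0     # cost of warning when no tornado occurs (assumed)
missed_event_penalty = 50.0   # cost of NOT warning when a tornado occurs (assumed)

def expected_penalty(warn, p_tornado):
    """Expected penalty of a warn/no-warn choice for a given tornado probability."""
    if warn:
        return false_alarm_penalty * (1.0 - p_tornado)   # only a false alarm costs anything
    return missed_event_penalty * p_tornado              # only a miss costs anything

# Warning becomes the lower-penalty choice once p exceeds FA / (FA + Miss):
threshold = false_alarm_penalty / (false_alarm_penalty + missed_event_penalty)
print(f"Warn whenever p > {threshold:.3f}")   # ~0.020 -> even low probabilities justify a warning
```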

In today's world, most people have many different options by which they can receive tornado forecasts and warnings.  Almost all of them require the user to make the decision to become situation aware.  People cannot simply assume zero responsibility for their own safety without running the risk of suddenly finding themselves in mortal danger.  Information about tornado hazards is readily available but people must make the effort to seek out that information without having to be told to do so.  They must plan for tornado hazards well in advance and take it on themselves to seek out information about what they can do to reduce the threat to their lives (and even their property).

The media need to become responsible for telling an accurate version of tornado events, rather than continuing to reprise the counterfactual scripted version of events that reinforces the myth of "it struck without warning".  The media have some responsibility here and to my mind, many of them are failing to carry out that responsibility.  The men and women who dedicate their lives to providing the public with the most accurate and timely weather information they can muster deserve our respect and admiration for their selfless efforts to inform.  They do not deserve to be portrayed as failing in their duties when the facts are clearly contrary to the media script.  Their failure to achieve perfection is far from being entirely their fault.

Wednesday, October 7, 2015

The recurrence interval - a PR disaster for meteorology

The terrible floods that occurred recently in South Carolina have triggered the usual brouhaha over the notion of the "recurrence interval", with the SC event having been said by some to be even more rare than a "1000-year" event.  The general public mostly takes this to mean at least 1000 years should pass between such events, so it seems weather disasters of this sort must be "freak" occurrences, demanding some sort of special explanation.  Unfortunately, a ready "explanation" for this event has been that it was caused by anthropogenic (i.e., human-caused) global warming (AGW).  Even some people who should know better have jumped onto the AGW bandwagon with regard to this event.  I'll return to that shortly, but first I want to try to dispel some of the misunderstandings associated with recurrence intervals.

Atmospheric events are not "periodic" - that is, they don't occur at regular intervals in time.  If that were so, weather forecasting would be a heckuva lot easier and considerably more accurate.  Hence, the perception that a recurrence interval is based on some periodic atmospheric behavior is simply a misunderstanding of the term's meaning.  Forecasting is difficult, in part, precisely because weather is most definitely not periodic!

Recurrence intervals generally are calculated by fitting some sort of statistical distribution (e.g., a Poisson distribution) to the existing record of events.  It doesn't take much knowledge to realize that we don't have a record of heavy rainfall events longer than about 200 years (in the USA), so how can we come up with a meaningful definition of a "1000 year" event?  The answer is simple - we can't. To do so is an exercise in extrapolation, and extrapolation is well-known to be a risky thing to do.  If we had 10 000 years of data for every location in the USA, we might well be able to have a plausible definition for a 1000-year event at all those places, but such data simply don't exist.  What we can say, from our knowledge of the occurrences we have observed, is that the SC floods were an event that is outside of prior experience (in SC).  This doesn't mean it's a "freak event" for which some exotic explanation must be offered.  Curiously, given that weather events happen when the ingredients for such events are brought together, the chances for a similar event to occur soon after one event has already happened are relatively high.  If the weather brought those ingredients together on one day, there's an increased probability that it might happen again soon after.
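
For the curious, here's a minimal sketch of how such a "return level" is typically estimated:  fit a distribution to annual maxima and read off the value exceeded with probability 1/T in any given year.  The data here are synthetic, and the Gumbel distribution is just one common choice; the point is that the 1000-year number rests entirely on an extrapolated tail, far beyond anything in a ~100-year record.

```python
# Sketch of a return-period estimate: fit a distribution to annual maxima
# and read off the value exceeded with probability 1/T in any given year.
# The data are synthetic and the Gumbel choice is only illustrative.
from scipy.stats import gumbel_r

# ~100 years of fake annual-maximum daily rainfall (inches)
annual_max_rain = gumbel_r.rvs(loc=4.0, scale=1.2, size=100, random_state=42)

loc, scale = gumbel_r.fit(annual_max_rain)
for T in (10, 100, 1000):
    level = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:4d}-year value: {level:.1f} inches")
# The 1000-year number depends entirely on the fitted tail, far beyond
# anything actually observed in the 100-year record.
```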

For flash flooding, in particular, it's not at all uncommon for heavy rain to occur on two (or more) days in succession.  The first event saturates the soil, creating the hydrologic conditions that make it possible for a flash flood on the second day.  This has happened many times in the history of flash floods around the world.  It seems silly to refer to an event as a 100-year (or 1000-year) event when it happens on consecutive days!  It's quite acceptable to use recurrence interval terminology in the context of communication among scientists, because (hopefully) they understand what the term means.

Fortunately, the notion of recurrence intervals is almost never mentioned in the context of major tornado outbreaks or high-impact tropical cyclone landfalls.  Most people (at least in the USA) seem to understand that these are not "freak" events, but rather occur at irregular intervals when their ingredients come together.  They occur somewhere in the USA every few decades or so ... frequently enough that the public is reminded of the possibility that such things can happen.  Major rainfall events also occur at irregular intervals, and often enough that people should get the right message:  really big events can happen somewhere in any given year.  But for some reason I can't explain, the reference to recurrence intervals is common with respect to heavy rains, and so this issue comes up over and over again.  It's a public relations nightmare that we should stop inflicting on ourselves.  In scientific papers, discussion of recurrence intervals is more acceptable, but making public statements about them is just confusing and makes us look silly.

Finally ... was this event "caused" by AGW?  In a word ... NO!  The event happened because the ingredients for a flash flood-producing rainfall event were brought together.  Nothing particularly exotic was required, and those ingredients were not, on their own, remarkable or unusual.  It's always rare for really heavy rainfalls to happen in any given location, and this was an example of a somewhat atypical weather pattern in which the flash flood-producing rainfall ingredients were brought together.  The threat of an additional event - the landfall of Hurricane Joaquin - was never realized, fortunately.  Nevertheless, the numerical weather prediction models were quite accurate in anticipating the heavy rainfall event in SC days in advance; the forecasts for heavy rains were quite good, as a result.

A somewhat more nuanced description of the role of AGW in this event is that AGW is thought by many climate scientists to make it likely that extreme flash flood events will become more frequent.  This event is consistent with that prediction but, on its own, doesn't provide "proof" that AGW was a contributing factor.  There are indeed scientific studies that suggest that heavy rainfall events are becoming more frequent.  Thus, it's appropriate to say that this SC flood case is one more piece of evidence to that effect.  But to say that this event was caused by AGW is simply not scientifically acceptable.

Sunday, September 6, 2015

For the record: Understanding what probability forecasts mean

For reasons mostly related to the failure of the National Weather Service to develop a pre-release information campaign, the public has been puzzled by the meaning of probability forecasts ever since the Probability of Precipitation (PoP) was introduced in the mid-1960s.  That oversight can't be rectified easily but as we contemplate changing the content and wording of forecasts, that lesson looms large - or it should.  The concept of uncertainty is best expressed as probability, but other ways (such as odds) might be more intuitive for most of the public.

Expressing a rain forecast in terms of probability (e.g., a "40 percent chance of rain" - which is equivalent to a 60 percent chance of no rain) always refers to a specific space-time volume.  That is, the forecast is for a specific area during a particular time span.  It might be for a particular metropolitan area during the next 12 hours, for instance.  If you don't know the forecast's space-time volume, you don't yet know enough to grasp the intended meaning.  (There's a 100 percent chance that it will rain somewhere on the Earth in the next 10 years!  There's a zero percent chance of rain within the next 5 seconds when the skies currently are a cloudless blue.)

Another factor to consider is the "lead time" of the forecast;  this is the time between when the forecast is issued and the beginning of the valid time for the forecast's space-time "window".  Today's forecast for today is much more likely to be accurate than today's forecast for tomorrow.  In general terms, the limit of predictability for weather forecasts is somewhere around 7-10 days, depending on the weather situation.  Some forecasts are more difficult (and, hence, more uncertain) than others.  At the predictability limit, the forecasts become so uncertain, they are no more accurate than forecasting the climatology - the average of all weather events for that date.  They are said to have zero "skill" (which is not the same as accuracy - skill is relative accuracy - compared to some simple forecast, such as persistence, climatology, or some objective forecasting system).
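
One common way to put a number on "skill" for probability forecasts is the Brier skill score:  the mean squared error of the forecast probabilities, compared with the error of always forecasting the climatological frequency.  A minimal sketch with invented forecasts and outcomes:

```python
# Brier skill score relative to climatology: BSS = 1 - BS_forecast / BS_climatology.
# Positive values mean the forecasts beat the climatological reference;
# zero means no skill. Numbers below are invented for illustration.
import numpy as np

forecast_pops = np.array([0.1, 0.7, 0.4, 0.9, 0.2, 0.6])   # issued PoPs (assumed)
observed      = np.array([0,   1,   0,   1,   0,   1  ])    # 1 = measurable rain occurred

climo = observed.mean()                                      # climatological frequency
bs_forecast = np.mean((forecast_pops - observed) ** 2)       # Brier score of the forecasts
bs_climo    = np.mean((climo - observed) ** 2)               # Brier score of "always climo"

bss = 1.0 - bs_forecast / bs_climo
print(f"Brier skill score vs. climatology: {bss:.2f}")
```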

You also need to know the event to which the word "rain" applies.  In most cases, this means enough rain to be "measurable" (typically, 0.01 inches).  The event being forecast could be different from that, but most PoP forecasts are for measurable rain.  In any case, it's another essential piece of the puzzle.  The less frequent an event might be, the less confidence forecasters can have in predicting it.  The probability of measurable rain is considerably higher than that of a rain event producing 10 inches (roughly 254 millimeters) of rainfall.

So, armed with knowledge of the space-time volume for which the forecast is valid and the nature of the forecast event, the probability value is a quantitative expression of the confidence that such a rain event will occur somewhere, sometime within that space-time volume.  The level of certainty (or uncertainty) can be estimated objectively using any of a number of methods (spread of ensemble members, Model Output Statistics, etc.) or subjectively.  Subjective probability estimates can be calibrated with experience, such that all calibrated forecasters looking at the same data would arrive at similar probability estimates - subjective probabilities need not be understood as mere "guessing"!  Assuming they follow the laws of probability, subjective probability estimates are legitimate expressions of forecaster confidence.  Although some forecasters might be more confident in their abilities than others, if the forecasters are calibrated properly, they will mostly agree about their probability estimates.  Real forecasters can become reasonably well-calibrated in about a year, given proper feedback about their forecasting accuracy.

If the forecast is for a 40 percent probability (two chances out of five ... or four out of ten), then any one forecast can be neither wholly correct nor wholly incorrect.  The only times when a probability forecast is either right or wrong is for forecasts of zero and 100 percent.  We measure how good the forecast is by its "reliability" - a perfectly reliable probability forecast of 40 percent means that, on average, it rains somewhere within the space-time volume 40 percent of the time whenever that 40 percent probability forecast is issued.  When it rains, of course, we should expect higher probability values, and lower values when it doesn't rain.  Perfect forecasting would consist only of (a) 100 percent probabilities when it rains, and (b) zero percent probabilities when it doesn't rain, but that level of certainty is impossible (for many reasons, both theoretical and practical).  Thus, it rains one time out of ten when the probability forecast is for 10 percent (assuming reliable forecasting).  Rain on a 10 percent probability forecast is not necessarily wrong;  it is just what we expect to happen (10 percent of the time!) when we make such a forecast.
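
A reliability check is easy to sketch:  group the issued probabilities into bins and compare each bin's forecast value with how often rain actually occurred.  The forecasts and outcomes below are invented purely for illustration:

```python
# Sketch of a reliability calculation: for each forecast-probability bin,
# compare the forecast value with the observed relative frequency of rain.
# Forecasts and outcomes below are invented for illustration.
import numpy as np

pops     = np.array([0.1, 0.1, 0.4, 0.4, 0.4, 0.4, 0.4, 0.7, 0.7, 0.7])
observed = np.array([0,   0,   1,   0,   0,   1,   0,   1,   1,   0  ])

for p in np.unique(pops):
    mask = pops == p
    obs_freq = observed[mask].mean()
    print(f"forecast {p:.0%}: rained {obs_freq:.0%} of the time over {mask.sum()} forecasts")
# Perfectly reliable forecasts would show the observed frequency matching
# the forecast probability in every bin (given enough cases).
```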

Note that if the forecaster knows nothing (i.e., has no confidence), then the best forecast to make is for the climatological probability in that space-time volume.  This is usually a much lower value than 50 percent (a value that many people might incorrectly interpret as "pure guessing") - if the climatological value for the given space-time volume on that day of the year is 20 percent, that's the best possible "know nothing" forecast.
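The point about climatology being the best "know nothing" forecast can be illustrated with a small numerical experiment (a hedged sketch; the 20 percent base rate is simply assumed for illustration): among constant probability forecasts scored with the Brier score, the one closest to the climatological frequency comes out best - not 50 percent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record in which the climatological probability of rain is 20%.
climo_prob = 0.20
observed = (rng.random(10000) < climo_prob).astype(float)

# Try issuing the same constant probability every day and score it.
for p in (0.0, 0.2, 0.5, 1.0):
    bs = np.mean((p - observed) ** 2)
    print(f"constant forecast {p:.0%}: Brier score {bs:.3f}")
# The constant forecast closest to the climatological frequency (20%)
# yields the lowest (best) Brier score -- not 50%, and not 0 or 100%.
```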

Wednesday, August 19, 2015

El Niño - "Godzilla" or just another actor?

It seems this year is another in a lengthening string of occasions when El Nino (more properly, the El Nino-Southern Oscillation that includes La Nina) becomes a big media story, full of anticipation about how it will affect the weather during the coming winter.  The developing El Nino this year may be at record or near-record intensity, which could magnify its impacts on the weather, so even a respected oceanographer felt compelled to describe it with the adjective "Godzilla" during a media interview.  Of course, the media grabbed onto this label with its typical overblown enthusiasm.  Shades of "Snowmageddon" and "Frankenstorm"!!

The "Godzilla El Nino" has become the focus for some controversy in the scientific community, however.  Many meteorologists dislike the use of such hyperbole, preferring that the public face of our science be more restrained, as scientists try to be when communicating with their colleagues.  Others feel that the use of such language helps get the message of science across to the lay public.  A well-written science story doesn't need bombastic language to get its message across - in fact, it can be argued that such excesses muddy the clarity of the message.

I've made no secret that I'm not among the supporters of wildly dramatic language.  First of all, an unintended consequence could be the creation of unnecessary fear in some folks regarding what could become an impending disaster.  Another unintended consequence is public pushback against the "hype" such terminology creates - some segments of the public are sick of all the "gloom and doom" the media convey about upcoming weather events.  There's no hard evidence that the use of such hyperbolic terminology does anything to attract more attention to the message that scientists are trying to convey, nor is there evidence to suggest that the purely factual information content of that scientific message is conveyed more effectively to the consumers as a result of the inflated descriptions.  If the claim is made that melodramatic terminology is actually an aid to effective communication, the burden of proof is on those who make such claims.  Let there be a carefully-done survey that demonstrates this is indeed the effect of sensational verbiage.  Absent that, count me among the skeptics!

Furthermore, and more importantly, it's pretty bad science to equate the strength of a given El Nino to specific weather events or seasonal weather trends at a specific location.  ENSO is just one among a host of global and regional climate "oscillations" that are all operating concurrently.  How this year's El Nino affects the global weather pattern is determined by the complex interaction among all the known oscillations, to say nothing of factors affecting global weather about which we scientists know little or nothing.  It's been shown, for instance, that snowfall in Washington DC can be at or near record levels during a strong El Nino, but can also be near zero during a strong El Nino.  By itself, El Nino is not a good predictor of local, seasonal weather patterns.  To create all this brouhaha about this year's El Nino is just bad meteorology, and it conveys a message that is not justified by the science.

A more rational approach would be to indicate that an intense El Nino, which is what this year's event is likely to be, could create serious impacts, for which some segments of our society would need to prepare in advance.  It would be important to indicate that this is not a statement of absolute certainty, or even close to that level of confidence.  Rather, it suggests one potentially important development among many possibilities, but the likelihood is high enough that it deserves to be mentioned as a possibility - it is not a forecast for a "Godzilla" creating widespread havoc and destruction, but something that might require some advance planning for that possibility.  Do we really need to "hype" an event to get people to understand our message so that they take appropriate actions?  If so, then we can blame the media, but we also might have to share the responsibility for failing to state our message in clear and understandable terms.  The consumers of media (Aren't we all?) have been desensitized, perhaps, by all the sensationalism.  But that's another topic ...

Friday, April 24, 2015

Dealing with drought

Lately, some alarming messages have been spread via the media about drought in California - some of them are a bit misleading (e.g., that CA has only one year's worth of water left - the situation is dire, but that's a hyperbolic statement of the reality).  There are those who will lay the blame on global climate change.  This is also not an appropriate position, since there always has been a danger of drought in the semiarid and arid regions of the US, and no guarantee can be made that what we've observed in the historical record is as bad a drought as natural variability can generate.  Climate change may be making the situation worse, but it's not the whole challenge.

A major part of the problem for CA and, indeed, for much of the western third of the USA (east of the coastal mountain ranges) is that drought always has been a frequent visitor - from the Great Plains westward to those coastal mountain ranges.  The notion that the expanding population centers from the continental divide westward are living on borrowed time is not a new one.  I recommend reading "Cadillac Desert" to learn some of the history of the "water wars" in the west.  Dividing up the scarce water resources among all the competitors has always been a challenge even during non-drought years;  increasing populations demand more of everything.  Making choices is not necessarily easy.  Drought magnifies the urgency and the seriousness of the consequences for any set of choices.  Large population centers, like Los Angeles, El Paso, Denver, Las Vegas, Phoenix, Tucson, etc., are reaching out over increasing distances to find new water to support additional growth. 

A lot of agriculture west of the Mississippi River uses irrigation to grow crops that in most years would not be possible depending only on natural rainfall.  On the plains, much of this water for irrigation is "fossil water" in underground aquifers that represent finite resources.  When (not if!) those aquifers dry up, that agriculture can't be sustained.  Decreasing fresh water sources in the west leave agricultural (and industrial) uses competing with human water needs.

What's worse is that water is being squandered stupidly ... for example, building grassy golf courses in the desert is an ecological nightmare.  Where I used to live in CO, the neighborhood association discouraged xeriscaping, and encouraged homeowners to maintain Kentucky bluegrass lawns that required heavy watering at least every other day in that semiarid climate.  Wasting fresh water in such stupid ways has potentially harmful consequences even in non-drought times, but when drought is ongoing, such waste can be criminal.

Some simple calculations show that the cost in terms of energy to pipe in water from water-rich areas, mostly east of the continental divide, is quite high.  Using that much energy to import water - generally uphill - creates problems in its own right, and will make that water very expensive.  The real problem here isn't the current drought; it's the growth of unsustainable populations in regions that inevitably are going to experience serious droughts.  Anthropogenic global climate change may be enhancing that concern, but it's always been there.  Whenever local sources of water become inadequate for the population centers, those centers have populations that have become unsustainable.  Even before anthropogenic climate change became a topic for discussion, there were ongoing battles for fresh water resources.  Everyone feels their personal concerns take priority, but politicians who make the laws governing water rights can be influenced by the rich to favor the claims of the rich to that fresh water.

Drought has been ongoing for several years in the western half of Oklahoma and the Texas Panhandle, to the extent that an old bogeyman is making an unwelcome appearance:  cloud seeding.  It's not widely understood that the conditions of a real drought - an absence of rainclouds - make seeding completely useless in mitigating that drought.  Even in cases when rainclouds are present, the actual contribution of cloud seeding to net rainfall has never been demonstrated in any carefully-done statistical trials.  I have a more comprehensive discussion of weather modification elsewhere, but the substance of the science is that weather modification to enhance rain has never passed rigorous statistical tests of its effectiveness.  The weather modification companies who sell their "services" for the purpose of drought mitigation haven't a scientific leg to stand on, and yet are profiting from the misery of those suffering from drought.  Those companies may well honestly believe in what they're doing, but the view of such activities within the science of meteorology is, to put it charitably, dubious.

In OK, we now have vast quantities of waste water from fracking being pumped underground, which not only consumes that water, but has the potential to contaminate the underground water (to say nothing of causing earthquakes in certain areas).  As I said, drought makes many "minor" concerns morph into serious concerns.

Dealing with drought is never easy ... as fresh water availability declines, there will be winners ... and losers.  Who decides who wins and who loses?  On what basis?  The simple fact is we can't survive without adequate fresh water, and as the resource declines, it's going to get ugly.  The sooner we face the unpleasant realities of drought and its consequences, the better.  Sticking your head in the sand won't solve anything.  Solutions won't come easily, and a diminishing resource simply can't satisfy ever-increasing demand.

Wednesday, February 25, 2015

VORTEX - SE: A political scientific boondoggle

It's come to my attention that a project to study tornadoes in the Southeastern US has been created, via political 'pork barrel' machinations.  This project is predicated on the following basis:

"The southeastern United States commonly experiences devastating tornadoes under conditions that differ considerable from those on the Great Plains region where tornado research has historically been focused.  NOAA/NSSL has a newly funded mandate to collaborate with the National Science Foundation in better understanding how environmental factors that are characteristic of the southeastern U.S. affect the formation, intensity, and storm path of tornadoes for this region."

Several institutions within the southeastern US have been pushing this sort of idea for years.  With the help of their Congressional delegations, they evidently have succeeded in forcing this absurd project on the rest of us.  They assert that tornadoes in the southeast are different, and that their regional storm problems therefore have been overlooked.  There's little doubt that tornado fatality counts in the southeastern US are higher than elsewhere, but it's never been demonstrated that this is the result of a difference in the meteorology of tornadic storms in the southeast.  There are many non-meteorological reasons for high death rates in the southeastern US - this blog isn't the venue for a complete discussion of those non-meteorological explanations. 

Nor has it ever been shown that tornadoes in the southeastern US are the result of some (as yet, unspecified) difference in the physics of severe storms and tornadoes.  To the contrary, there is every reason to believe that the meteorology of severe storms and tornadoes is the same the world over.  Absent a compelling demonstration of an important difference in the meteorology, this program is based on an unvalidated hypothesis. 

Yes, the climatology of tornadoes in the southeast differs from that of the Great Plains.  For instance, there's a well-defined tornado "season" on the Plains:  tornadoes occur with high frequency in the months of April, May, and June, and with relatively low frequency at other times of the year.  In the southeast, tornado frequencies generally are much lower than during the peak months of the Plains tornado season, but those relatively low frequencies decrease substantially only during the summer months.  Thus, although tornadoes are less frequent in the southeast, they can occur at almost any time of the year, including in the winter.  The reasons for this are clear to most severe storms meteorologists:  they have to do with the ingredients for severe storms and tornadoes, which come together often in the early to late spring on the Plains, and rather less frequently in the southeast but without a clearly defined "tornado season".  This is a clear indication that severe storms and tornadoes in the southeast are more or less identical to comparable storms on the Plains.  The only difference between the regions is the climatology of the ingredients, but the ingredients are everywhere the same!  It seems quite unlikely that any particularly useful meteorological insight is to be gained by this project.

The proposed program is patterned after the already completed VORTEX and VORTEX2 field observation campaigns in 1994-5, and 2009-10, respectively.  These observational campaigns included mobile radars, instrumented vehicles to intercept storms, and so on.  Doing a similar project in the southeast will be much more challenging, owing to the presence of extensive trees, substantial orography, a high frequency of low cloud bases, and a higher overall population density compared to the Plains.  Visibilities needed for successful storm intercepts are just not common in most of the southeastern US.  This renders even more questionable the basic concept of conducting such an exercise in the southeastern US, since it adds to the danger level for the participants, who will be much less able to see and avoid storm hazards in the course of their observational assignments.

This situation is simply an example of how some institutions can game the system to secure funding for themselves.  Unfortunately, government funding is basically a zero-sum game.  What existing programs and projects will have to be cancelled or delayed because of this boondoggle?  This is not the path to scientific cooperation and collaboration - rather, it's divisive and will damage the relations among scientists for decades to come.  This is not a good idea in any way, and it speaks loudly that this ill-advised reallocation of scarce scientific resources is the result of political posturing rather than a reflection of sound scientific justification.

Tuesday, January 27, 2015

Let the recriminations commence!

It's becoming apparent that the forecasts for a 'historic' snowstorm for New York City have proven to be incorrect.  At least one meteorologist has already apologized for not getting this correct.  So the recriminations will commence in the media, I'm sure.  Blogs will be written, Bill Belichick's nasty mischaracterization of weather forecasts will be resurrected, politicians will voice their displeasure, letters to the editor will be written, and so on.  Mea culpas and excuses will be offered.

All of this could have been avoided, of course, and I don't mean by the obvious decision to issue a different forecast.  Let me explain briefly where forecasts come from these days:  to an ever-increasing extent, forecasts come more or less directly from numerical weather prediction [NWP] computer-based models.  Human forecasters who have to make forecast decisions are presented with a number of different model forecasts, some produced by different computer models, and some produced by starting the same model with slightly different input starting conditions.  This collection of different model forecasts is referred to as an 'ensemble' of forecasts, and at times, there can be considerable variation among the different forecast model results, as was the case with this event.  To some extent, the variability within the ensemble is a measure of the uncertainty of the forecast, but even the uncertainty is, well, uncertain!  It was quite likely to snow heavily somewhere at some time - the problem was to pinpoint when and where.  A 'historic' snowfall in New York City would have devastating consequences!

In reality, what is inevitably true is that all the model forecasts will be wrong, to a greater or lesser degree.  No human- or computer-based weather forecast has ever been perfectly correct, and none ever will be.  Uncertainty is inevitable, so the best any forecaster can do is to say "[some weather event, like a blizzard or a tornado] is likely to happen, and my confidence in that occurrence is [some way to express the uncertainty of that forecast, such as a probability]."  Numerous studies have shown that forecasters are actually pretty good at estimating their uncertainty; this is seen by their forecast reliability.  Reliability of a probabilistic forecast means that, given a particular forecast probability, as that forecast probability increases, the frequency of occurrence of the forecast event also increases.  Forecasts are perfectly reliable when the probability forecast is X percent and the occurrence frequency is also X percent.  In general, it's not a good idea to present forecast event probabilities as either zero or 100 percent, although it's possible in certain situations - the probability of having a blizzard in the ensuing 24 hours when it's presently 100 deg F in July is pretty close to zero.

If the forecast probability is neither zero nor 100 percent, then a particular forecast is neither wholly right nor wholly wrong.  If the event occurs although the forecast probability was low (say, 1 percent), then this is the one case out of 100 you would expect to find.  If the event fails to occur despite a forecast probability of 99 percent, again, it's the expected one case out of 100.  In the case of the snowstorm in the northeast, the problem was to know just where the heavy snow would be.  Clearly, when the event occurs, you want the forecast probabilities to be high, and when it fails to occur, you want the forecast probability to be low.  Of course, there are times when it doesn't turn out that way, as I've described.

Getting back to the 'dilemma' of choosing from an ensemble of forecasts, in the absence of any other information the best single choice is usually the average of all the ensemble members.  But even if that average actually turns out to be the best forecast, it will not be completely accurate.  And there are times when the forecast that would have been the best is one of the individual ensemble members - the problem is to know with confidence which member would be the correct choice, and the challenge is that science simply can't predict that with accuracy.  And in some situations, none of the ensemble members is very close to the real evolution of the weather, unfortunately - i.e., none of the model solutions was accurate.  There is no scientific way to choose the "model of the day", and efforts to do so are simply a waste of time!
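As a simple illustration (the snowfall numbers below are invented, not taken from any real ensemble), here's a Python sketch of the ensemble mean and spread for a single point forecast:

```python
import numpy as np

# Invented snowfall forecasts (inches) for one city from 8 ensemble members.
members = np.array([22.0, 18.0, 30.0, 6.0, 12.0, 25.0, 9.0, 16.0])

ens_mean = members.mean()       # single "best guess" in a least-squares sense
ens_spread = members.std()      # rough measure of forecast uncertainty

print(f"ensemble mean:   {ens_mean:.1f} in")
print(f"ensemble spread: {ens_spread:.1f} in")
print(f"member range:    {members.min():.1f} - {members.max():.1f} in")
# A large spread relative to the mean is a warning that no single number,
# including the mean itself, deserves much confidence.
```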

Everyone wants the forecasts to be absolutely correct, all the time - but like the Rolling Stones say, "You can't always get what you want!"  We need to re-negotiate our contract with forecast users who expect us to provide a level of certainty we simply are [and always will be] incapable of providing.  Users deserve to know our uncertainty in the forecast products we send out.  Not doing so is scientifically dishonest - forecasters know their uncertainty, so not sharing it is to withhold important information!

Thursday, January 15, 2015

Fraud and Insanity - Welcome to Meteorology 2015, Part 2

I would add the following final thought to my previous blog:  No forecast that doesn't contain information about the uncertainty of the forecast can be honest.  Uncertainty is inevitable with any forecast and to issue a product without including that information is dishonest and unprofessional.  If people want to use a long-range forecast as if it were just as accurate as a 1-day forecast, even after being told about the uncertainties, that's their choice.  But if we want to be professionals, we need to be honest, regardless of whether or not people choose to listen to our caveats.

OK - moving on to my next topic, I want to spend a little time on an example of a conspiracy theory that doesn't involve climate science:  the very weird and bizarre notion of the so-called "chemtrails".  Up until a few years ago, I didn't know that this nonsense existed.  The basic notion is that aircraft are dispersing chemicals via their contrails, and those chemicals are supposed to be associated with a whole array of fantasized evils.

It seems that a person named Scott Stevens ... "an award winning television weatherman"... has developed quite a following with wild claims about "geoengineering" projects including (but not limited to) the chemtrail concept.  If you're involved much with social media, no doubt you're well aware of the fanaticism associated with conspiracy theories of all sorts.  There is no scientific basis for claims of a conspiracy involving aircraft contrails, and the other similar parts of "sinister" geoengineering.  Mr. Stevens is careful not to claim he's a meteorologist ... probably since he was forced to resign from an Albany, NY TV station in February of 1995:

Weatherman Scott Stevens has resigned from WRGB (Channel 6) after station management accused him of lying about his credentials.

In a statement read during Tuesday's 6 p.m. broadcast, David Lynch, vice president and general manager, said WRGB "hired Scott Stevens to be chief meteorologist based on faulty information provided by Scott" and his agency. WRGB subsequently learned that "Scott has never completed the necessary academic course of studies that would lead him to the official title of meteorologist,'' according to the statement read by anchorwoman JoAnne Purtan.


In effect, he has essentially zero qualifications to be making such claims.  It seems that these days anyone can make any wild claim they wish, including claims that are virtually complete fabrications, and they can put such garbage out via the Internet.  Gullible, ignorant people are taken in by such nonsense, and its followers can easily infect others via electronic media.  This applies to a lot of such imaginary conspiracies (like the putative conspiracy by climate scientists to defraud the world into believing in anthropogenic global warming).

I'm a huge fan of the freedom of speech on the Internet - but people need to take some time to consider what is and is not credible.  Wild "scientific" ideas are common in today's world, and nonprofessionals can easily be taken in by them.  People need to be able to distinguish reliable sources from the loonies out there trying to convince them of such absurdities as the government using aircraft to fill the air with chemicals that will harm you.

Finally, I've recently seen several examples where junk science on topics involving meteorology - work that may even have been rejected by meteorology journals - turns up in some other journal.  There's the utter nonsense about erecting walls to prevent tornadoes, and another example in which the authors make up some gibberish about:

... the theory of byuons, allegedly realized by means of a positive feedback between the tornado updraft and the cosmological vector representing the global anisotropy.

as an "additional energy source" for tornadoes, and so on.  My favorite subject - tornadoes and severe storms - long has been a magnet for crackpots and those who think they know something the subject but in reality are profoundly ignorant about severe storms meteorology.  When I dispute their ideas, I've even been subjected to insults.  As a recent example from someone responding to one of my blogs, he called me a "government paid phoney" and said about my science that "One thing we know is that it will be meaningless. It will be pseudoscience. It will be more Doswellian lunacy."

Next, I suppose, come death threats from the chemtrail fanatics ... 

Wednesday, January 14, 2015

Fraud and Insanity - Welcome to Meteorology 2015, Part 1

There are several things going on in the weather business that make me cringe these days.  This field has become politicized - and of late, even worse - in ways I never dreamed of back when I was nose-down in my research.  I can only touch on a few of them, but the list is long, especially in the climate business - a can of worms I won't review here.

Most everyone now is accustomed to seeing 7-10 day forecasts, where the forecast high and low temperatures, as well as clouds and precipitation, are presented, usually in some graphical format (as shown in the example).

These show up on TV weather broadcasts but are quite common on the Internet.  No doubt many people look at these on a regular basis, most without any suspicion that they're being sold a pig in a poke - such an extended-range forecast includes a fundamental dishonesty.  Here's the crux of the problem:  if anyone gives the topic even a few moments of thought, it should be obvious that the farther ahead the forecast product extends, the more the skill and accuracy of that forecast diminish.  It's a well-established principle in meteorology that every self-respecting meteorologist knows - it's an indisputable fact of the science.  A 3-day forecast is less reliable than a 1-day forecast; a 7-day forecast is less reliable than a 3-day forecast.  By roughly 10 days, the forecasts are essentially without any skill: they're no better than simply forecasting the long-term average conditions.  At that range, if you know the local climatology, you know as much as the forecast can offer.

There are scientific reasons for this growth of error with forecast time - basically, in about 10 days, any very small errors at the start of the forecast will grow to the point where they have contaminated the resulting forecasts.  And small errors at the start are inevitable - we don't know atmospheric conditions well enough to eliminate those small errors.
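A standard way to illustrate this growth of small initial errors is with the Lorenz (1963) toy system - not a weather model, but a classic stand-in for the atmosphere's sensitivity to initial conditions.  The sketch below (Python; the parameter values are the conventional ones) runs two copies of the system whose initial states differ by one part in a million and watches the difference grow:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz (1963) toy convection model -- a classic stand-in for the atmosphere."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta time step."""
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, nsteps = 0.01, 2500
a = np.array([1.0, 1.0, 1.0])          # "truth"
b = a + np.array([1e-6, 0.0, 0.0])     # "forecast" with a tiny initial error

for step in range(1, nsteps + 1):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 500 == 0:
        print(f"t = {step * dt:5.1f}  error = {np.linalg.norm(a - b):.6f}")
# The initially microscopic difference grows until the two runs bear
# no resemblance to one another -- the essence of the predictability limit.
```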

Furthermore, the detail contained in the forecast becomes less and less reliable with increasing lead times (that is, the time from the start of the forecast to the time when the forecast is valid).  A 1-day forecast has one day of lead time, for example.  The farther into the future the forecast goes, those details (such as the location of a front, or the region where precipitation might occur) become "fuzzy" - it's sort of like looking into a crystal ball, where the farther ahead you look, the cloudier the crystal ball becomes.  In the example above, the day-7 maximum (57) and minimum temperature (39) simply aren't known to a precision of 1 degree Fahrenheit, despite what's shown.  At 10 days or so, none of those details can be trusted beyond what you would expect from climatology for that location and time of the year.

Presenting this information with attractive graphics but without any indication of decreasing reliability is just not being honest.  It's a misrepresentation of the information.  Where does that "detail" come from?  For all practical purposes, these forecasts are derived from numerical weather prediction (NWP) models, in which the equations governing the atmosphere are solved numerically to determine values on a grid of points in space (and time).  These models not only provide the forecast data used in those pretty graphics; they also have been used to determine the limits of reliable forecasts I've referred to previously - roughly 10 days or so.
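For readers curious what "solving the equations on a grid of points" means in the simplest possible terms, here's a toy sketch in Python (one equation, one dimension, nothing remotely like a real model) that marches a "temperature" blob downstream on a grid:

```python
import numpy as np

# One-dimensional advection of a "temperature" field at constant wind speed,
# solved on a grid with a simple upwind scheme -- a toy version of the
# grid-point approach real NWP models use (real models are vastly more complex).
nx, dx = 100, 1.0          # 100 grid points, spaced 1 unit apart
u, dt = 1.0, 0.5           # wind speed and time step (Courant number = 0.5)
nsteps = 100

x = np.arange(nx) * dx
temp = np.exp(-((x - 20.0) / 5.0) ** 2)   # initial warm "blob" centered at x = 20

for _ in range(nsteps):
    # Upwind finite difference: each grid point is updated from its upstream neighbor.
    temp[1:] = temp[1:] - u * dt / dx * (temp[1:] - temp[:-1])

print(f"blob now centered near x = {x[np.argmax(temp)]:.0f}")   # roughly x = 70 after 100 steps
```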

Any source of forecast information that doesn't let the user know about these limitations on extended forecast skill and accuracy is misrepresenting the science of meteorology.  In my view of things, the meteorological community should rise up in protest over this fraudulent practice.  The only hope to reverse this situation lies with professional meteorologists, who need to stand up for what is right.  My blog can only reach so many and has little influence by itself.  If professionals can somehow muster the courage to speak out against this egregious practice - yes, it takes courage, as many feel their jobs are at risk when they speak out for the truth - we can change the way our forecasts are presented to the users, who cannot be expected to figure this out for themselves.  It's our professional duty!

... more to come ...  

Thursday, November 13, 2014

New Thoughts on the Human-Machine Mix in Weather Forecasting

With the development of digital computers in the 1940s, the stage was set for numerical weather prediction models based on the equations governing the atmosphere, as envisioned by such meteorological pioneers as Andrei S. Monin, Vilhelm Bjerknes, and Lewis Fry Richardson.  Numerical solution of those otherwise unsolvable equations was the catalyst for a revolution in the science of meteorology, and for a continuing debate about the role of humans in weather forecasting.  Sverre Petterssen and Werner Schwerdtfeger, among others, began to anticipate how computer forecasts could compete with humans in the task of weather forecasting.  With the introduction of post-processing methods for turning the gridded variables of a numerical model into actual weather forecasts, Leonard Snellman recognized what he saw as a very real possibility:  fully automated public weather forecasting.  Snellman coined the term "meteorological cancer" to describe the eventual demise of human intervention in the forecast process.

The notion of the human-machine "mix" has been around since at least the 1970s.  The model developers and those using models as input for objective weather forecasting schemes have steadfastly denied their goal is to replace humans in the forecast process.  As I see it, anyone working to develop objective "guidance" for forecasters is basically in the business of replacing humans with their product, whether they admit it or not - or whether or not they even realize that's what a very successful "guidance" product will do.  As model forecasts improve - which they have done continuously since they began - the need for humans diminishes.  For "ordinary" weather situations, it can be argued that humans already no longer add value to the forecast, even at relatively short range.

The use of numerical models has evolved considerably since those first tentative steps at numerical weather prediction.  The models moved rapidly from crude one-layer models with coarse resolution and very limited physical processes to today's fully three-dimensional models based on the so-called "primitive equations", with vastly increased time and space resolution, extensive physical parameterizations, sophisticated post-processing schemes to convert gridded variables into sensible weather, and even text generation for fully automated forecasting.  The role of humans during this process has been one of "gap-filling" - the limitations of numerical models represented gaps where a human forecaster could add value to the automated products.  With time, the gaps continue to be filled as the technology of numerical weather prediction evolves.  There are fewer and fewer niches where humans have much of a chance to add value.  The gaps are disappearing.

I've talked about this before, in many essays that can be found here.  Recently, it came to my attention that something interesting is being explored in the UK, whereby forecasters could work with models interactively.  Up to now, computer-based forecasts have been like the pronouncements of an oracle, and forecasters have been faced with either accepting what the models said or rejecting that solution and providing their own alternative forecast by whatever means they had at their disposal.  Forecasters have been similar to high priests in the business of interpreting oracular pronouncements.  This has not been a truly interactive human-machine relationship.

What I've envisioned for an interactive relationship is that the forecaster would use the model as a tool to test various possible scenarios in a dynamically consistent way.  What if the moisture available was actually greater than the initial conditions for the model showed?  What if the trough approaching was stronger or approaching more slowly?  How would the forecast change?  A forecaster educated and trained properly could use the model to test such possibilities intelligently and efficiently, and to see the ramifications of those "what if" scenarios.
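To be clear about the kind of interaction I have in mind, here is a deliberately crude sketch (Python; the "model" is a toy stand-in, not any real NWP system) in which a forecaster re-runs the same model with a moister initial state, or with stronger lift, and compares the rainfall outcomes:

```python
import numpy as np

def toy_model(moisture, lift, hours=12):
    """A deliberately crude stand-in for an NWP model: accumulate 'rain' each
    hour when moisture and lift together exceed a threshold, depleting moisture."""
    rain = 0.0
    for _ in range(hours):
        trigger = moisture * lift
        if trigger > 10.0:
            rainfall = 0.05 * (trigger - 10.0)
            rain += rainfall
            moisture -= 0.5 * rainfall   # rainfall removes some available moisture
    return rain

# Control run: the model's analyzed initial conditions.
control = toy_model(moisture=14.0, lift=1.0)

# "What if" runs: the forecaster suspects the analysis is too dry, or that
# the approaching trough (represented here by 'lift') is stronger than analyzed.
wetter = toy_model(moisture=16.0, lift=1.0)
stronger_lift = toy_model(moisture=14.0, lift=1.2)

print(f"control run rainfall:        {control:.2f} in")
print(f"wetter initial conditions:   {wetter:.2f} in")
print(f"stronger lift:               {stronger_lift:.2f} in")
```

In a real interactive system the re-runs would of course be full, dynamically consistent model integrations, but the workflow would be the same: perturb the pieces you doubt, re-run, and see how the forecast changes.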

As I now see things, if something of this sort is not explored and developed, virtually everything now done by forecasters eventually will be automated.  The only debate will be how soon full automation will take place.  Meteorological science is spending considerable effort and time trying to improve the model guidance, by whatever means necessary.  What are we doing to refine the role of humans and to improve their performance?  Damned little!!  Remember:  highly accurate guidance = no more need for forecasters!  Humans cost much more than computers.

An interactive relationship between model and forecaster would demand a considerably more comprehensive grasp of the science by the forecaster than is now the case.  And it would require a much more extensive training program for human forecasters.  Today's forecasters need to consider their future - young entry-level forecasters may find themselves out of a job before they're old enough to retire!  No one in public weather forecasting is safe from this.  NO ONE!!