Thursday, November 13, 2014

New Thoughts on the Human-Machine Mix in Weather Forecasting

With the development of digital computers in the 1940s, the stage was set for numerical weather prediction models based on the equations governing the atmosphere, as envisioned by such meteorological pioneers as Andrei S. Monin, Vilhelm Bjerknes, and Lewis Fry Richardson.  Numerical solution of those otherwise unsolvable equations was the catalyst for a revolution in the science of meteorology, and for a continuing debate about the role of humans in weather forecasting.  Sverre Petterssen and Werner Schwerdtfeger, among others, began to anticipate how computer forecasts could compete with humans in the task of weather forecasting.  With the introduction of post-processing methods for turning the gridded variables of a numerical model into actual weather forecasts, Leonard Snellman recognized what he saw as a very real possibility:  fully automated public weather forecasting.  Snellman coined the term "meteorological cancer" to describe the eventual demise of human intervention in the forecast process.

The notion of the human-machine "mix" has been around since at least the 1970s.  The model developers and those using models as input for objective weather forecasting schemes have steadfastly denied their goal is to replace humans in the forecast process.  As I see it, anyone working to develop objective "guidance" for forecasters is basically in the business of replacing humans with their product, whether they admit it or not - or whether or not they even realize that's what a very successful "guidance" product will do.  As model forecasts improve - which they have done continuously since they began - the need for humans diminishes.  For "ordinary" weather situations, it can be argued that humans already no longer add value to the forecast, even at relatively short range.

The use of numerical models has evolved considerably since those first tentative steps at numerical weather prediction.  The models moved rapidly away from crude one-layer models with coarse resolution and very limited physical processes, to today's models based on the so-called "primitive equations" using vastly increased time and space resolution, fully 3-dimensional, and with extensive physical parameterizations, coupled with sophisticated post-processing schemes to convert gridded variables to sensible weather, and even text generation for fully automated forecasting.  The role of humans during this process has been one of "gap-filling" - the limitations of numerical models represented gaps where a human forecaster could add value to the automated products.  With time, the gaps continue to be filled as the technology of numerical weather prediction evolves.  There are fewer and fewer niches where humans have much of a chance to add value.  The gaps are disappearing.
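The post-processing step mentioned above - converting gridded model variables to sensible weather - is often done with regression equations in the spirit of Model Output Statistics (MOS).  Here is a minimal sketch of the idea, using entirely synthetic data; the predictor names, coefficients, and numbers are illustrative assumptions, not anything from a real operational system:

```python
import numpy as np

# Toy MOS-style post-processing sketch.  A real MOS scheme regresses
# archived model output against co-located station observations; here
# both the "model grid" predictors and the "observations" are synthetic.
rng = np.random.default_rng(42)

n_cases = 200
# Hypothetical model predictors interpolated to a station:
X = np.column_stack([
    rng.normal(280.0, 5.0, n_cases),    # 850-hPa temperature (toy units)
    rng.uniform(20.0, 100.0, n_cases),  # column relative humidity (%)
    rng.normal(0.0, 1.0, n_cases),      # vertical-motion proxy
])

# Synthetic "observed" surface temperature: a known linear relation
# plus noise, standing in for years of station observations.
true_coefs = np.array([0.8, -0.05, 1.5])
y = X @ true_coefs + 5.0 + rng.normal(0.0, 1.0, n_cases)

# Fit the regression equation: add an intercept column, solve least squares.
A = np.column_stack([np.ones(n_cases), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply the fitted equation to a new model forecast for the station,
# turning gridded model variables into a sensible-weather forecast.
new_case = np.array([1.0, 285.0, 60.0, 0.5])
forecast = new_case @ coefs
print(f"Post-processed station forecast (toy units): {forecast:.1f}")
```

The point of the sketch is only that the "guidance" is a statistical translation layer on top of the model grids - exactly the kind of product that, once good enough, needs no human in the loop.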

I've talked about this before, in many essays that can be found here.  Recently, it came to my attention that something interesting is being explored in the UK, whereby forecasters could work with models interactively.  Up to now, computer-based forecasts have been like the pronouncements of an oracle, and forecasters have been faced with either accepting what the models say or rejecting that solution and providing their own alternative forecast by whatever means they have at their disposal.  Forecasters have been similar to high priests in the business of interpreting oracular pronouncements.  This has not been a truly interactive human-machine relationship.

What I've envisioned for an interactive relationship is that the forecaster would use the model as a tool to test various possible scenarios in a dynamically consistent way.  What if the moisture available was actually greater than the initial conditions for the model showed?  What if the trough approaching was stronger or approaching more slowly?  How would the forecast change?  A forecaster educated and trained properly could use the model to test such possibilities intelligently and efficiently, and to see the ramifications of those "what if" scenarios.
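The "what if" idea above can be illustrated with a toy experiment: run a model from the analyzed initial state, then rerun it from a slightly perturbed state and compare.  This sketch uses the classic Lorenz (1963) system purely as a stand-in for a forecast model - the equations, step size, and perturbation are my illustrative assumptions, not anything an operational center uses:

```python
import numpy as np

# Toy "what if" experiment: the forecaster's question -- what if the
# initial moisture were greater? -- becomes, in this stand-in model,
# what if one component of the initial state were perturbed?
def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run_forecast(initial_state, n_steps=1500):
    """Integrate the toy model forward; return the full trajectory."""
    traj = np.empty((n_steps + 1, 3))
    traj[0] = initial_state
    for i in range(n_steps):
        traj[i + 1] = lorenz63_step(traj[i])
    return traj

# Control forecast from the "analyzed" initial conditions.
control = run_forecast(np.array([1.0, 1.0, 1.05]))
# "What if" forecast: same model, slightly perturbed initial state.
what_if = run_forecast(np.array([1.0, 1.0, 1.05 + 1e-3]))

# How far apart do the two scenarios drift?
divergence = np.linalg.norm(control - what_if, axis=1)
print(f"Difference at step 100:  {divergence[100]:.4f}")
print(f"Difference at step 1500: {divergence[1500]:.4f}")
```

Early in the run the two scenarios are nearly identical; later they diverge markedly.  That divergence is exactly the dynamically consistent "ramification" a properly trained forecaster could interrogate, rather than guessing at alternatives by hand.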

As I now see things, if something of this sort is not explored and developed, virtually everything now done by forecasters eventually will be automated.  The only debate will be how soon full automation will take place.  Meteorological science spends considerable effort trying to improve the model guidance, by whatever means necessary.  What are we doing to refine the role of humans and to improve their performance?  Damned little!! Remember:  highly accurate guidance = no more need for forecasters!  Humans cost much more than computers.

An interactive relationship between model and forecaster would demand a considerably more comprehensive grasp of the science by the forecaster than is now the case.  And it would require a much more extensive training program for human forecasters.  Today's forecasters need to consider their future - young entry-level forecasters may find themselves out of a job before they're old enough to retire!  No one in public weather forecasting is safe from this.  NO ONE!!

The Persistence of Mythology in Science

My personal experiences as a scientist make it clear to me that myths are quite prevalent in science, even among scientists - not just the non-scientific public.  What do I mean by a "myth"?  Dictionary definitions include:
  1.  a traditional story, especially one concerning the early history of a people or explaining some natural or social phenomenon, and typically involving supernatural beings or events.  
  2.  a widely held but false belief or idea.
Mythology of the (1) sort can be thought of as a forerunner to science, in the sense that the myth is an attempt at offering an explanation for how things are.  Any myth that calls on the supernatural is, of course, well outside what we would call science.

No, I'm talking here of mythology of the (2) sort.  In a very real sense, a great deal of today's science incorporates mythology of type (2).  Science (as it is really done) never provides absolute truth.  Rather it offers provisional hypotheses that can always be reconsidered and revised, at least in principle.  The notion of a scientific consensus is that a majority of scientists accept some provisional hypotheses as being not inconsistent with the observations (data).  I've deliberately used the double negative "not inconsistent" rather than its logical equivalent "consistent" in order to shade the interpretation of that consensus science toward being as provisional as possible.  New data from new experiments may overturn an earlier well-accepted hypothesis - the history of science is replete with examples:  Einstein's relativity, Wegener's continental drift, and so on.  By this process, our scientific understanding is ever a work in progress, even when applications of that science are quite successful.  There are no sacred truths in science, no dogma beyond question, no concepts that can't be challenged.

This uncertainty is inherent in the scientific process, not some sort of problem that needs to be solved.  Any scientific explanation is open to challenge, but challenges that invoke the supernatural (e.g., creationism) are not legitimate challenges in this context.  Rather, challenges based on the supernatural are attempts to impose mythology of type (1), an "explanation" entirely outside of the scientific process.

Every new contribution to science, mostly in the form of a paper submitted for publication in a refereed scientific journal, is a challenge to existing scientific understanding, to a greater or lesser degree.  A challenge to existing understanding inevitably gores someone's sacred cow.  It's natural that this creates controversy between proponents of the existing understanding and those who advocate the new provisional hypothesis.  This is described by Thomas Kuhn in his controversial book The Structure of Scientific Revolutions as a paradigm shift.  Paradigm shifts may be minor (of interest only to those specialists in some narrow, specific topic of science) or major (e.g., nonlinear dynamical theory, or chaos theory), affecting many diverse disciplines, and anywhere in between.  Some newly-proposed paradigm shifts (not all) are then subjected to further testing and if those tests are not inconsistent with the data, they go on eventually to become a new consensus among scientists.  Others fall by the wayside, perhaps for lack of interest or because they fail some new test of their consistency with the data.

In my experience, there are many myths of type (2) in my chosen field.  I've written papers to challenge them and to replace those notions with a different understanding that I believe is a better fit to the facts than the older idea.  Not all science starts out to be directed at myth-bashing, but if new understanding is revealed, this sets the stage for a clash between old ideas and new ideas.  Most people, including scientists, who accept a myth are reluctant to abandon it - myths often have a sort of feel-good comfort about them that their adherents are reluctant to give up, so they do their best to attack the new ideas.  That reluctance to accept new evidence might be justified, if the new evidence is flawed in some way, or is being interpreted incorrectly.  The trick is to be as objective as possible, and most humans find this difficult to do, often on both sides of a controversy.  Such arguments frequently are plagued by confirmation bias:  "... the tendency to search for, interpret, or prioritize information in a way that confirms one's beliefs or hypotheses."

The main point to be made here is that controversy and challenge are inherent in science.  If you manage to accomplish anything at all, you will find those who oppose your ideas, sometimes even to the point of being mean-spirited in their critiques of your work.  This should not make you uncomfortable, but you should, in fact, embrace their opposition.  I tell my students that "Your most vigorous 'enemy' is your best friend!"  Such a crucible of intellectual heat is essential in helping you do your best science.  Their attacks can reveal weak points that need to be strengthened, and may even show that you're incorrect in at least part of what you're proposing.  Try to lose your fear of being wrong - being wrong is a learning opportunity!  Your opponent will have done you a favor by showing you're wrong!  When your opponent seems to misunderstand what you're saying, you should stop and consider how to express yourself so as not to generate that misunderstanding.

All in all, controversy is good for science and you should understand that without controversy, the science is dead.  When all scientists agree about everything, then that science has reached a dead end.  Fortunately, this has never happened and likely never will.  It's not a sign of some inherent problem with science.  Controversy is at the heart of a vibrant, living science!

Where scientists go astray is when they take or offer criticism personally.  The topic isn't supposed to be the scientist, it's the science!  No matter how mean-spirited an opponent may be, however, don't lower yourself to that level.  You're not being threatened.  It's your work!  And your work isn't beyond question, right?  Real humans find it all too easy to feel threatened by opposition, but you can't be a scientist without generating opposition!  Be prepared for it.  Keep your mind open to new ideas and be willing to admit when your notions need to be abandoned in the face of a superior understanding.

Monday, November 10, 2014

Some thoughts as we approach Veterans Day

Collectively, Americans have come far from the days of the war in Vietnam.  Now, it seems we have learned that we can honor the warriors even as we protest the war.  My Vietnam experience was not one of a combat soldier.  I didn't believe that war was in the best interests of the US, and I have mixed feelings to this day about my service there.  I didn't carry a gun in the boonies and shoot at the "enemy", but I did what was asked of me by my nation.  No one spat on me when I arrived back in "the world" (as we called the US, then) at 3:00 am in Ft. Lewis, WA.  But there was no "welcome home" either.  My life was changed by my experiences in the military and I'm still trying to decide the sign of the balance - negative or positive. At least now I do see more positives than when the experience was more recent to me.

There will be an outpouring of thanks tomorrow for all the veterans, as well as for those currently serving.  That's a sort of progress, I think.  But there are other perspectives on this day of recognition for veterans.  Most of the wars on foreign soil we've conducted since WWII haven't involved a real threat to American freedoms at home, so our military personnel have been killing and being killed for causes that are pretty far removed from protection of American freedom.  The war fighters in the wars since WWII, now including both men and women, may have been heroic in their battlefield actions on behalf of their brothers/sisters in arms, but that heroism is not based on defending America, per se.  These soldiers, sailors, marines, and airmen have been carrying out the orders passed on to them by their civilian leaders, irrespective of the rationale on which those orders are based.  They're doing their duty as best they can, doing what is asked of them by their nation, doing what they've sworn to do, doing what they're paid to do.  In real wars - not the sanitized wars of righteous Americans battling the evil servants of an evil nation often portrayed by political "leaders" - Americans engage in atrocities amounting to war crimes, just as their enemies do.  War is an evil, poisonous thing that attacks the morality of all its participants.  The victors may put the losers on trial for war crimes, but their hands are never lily-white clean.

I came home from Vietnam with no flashbacks, nightmares, or ingrained fears (all symptoms of PTSD) because I was "in the rear, with the gear".  But many did come back from wars on foreign soil with psychological problems, in part because of things they had seen and in which they had participated.  I offer no judgment of anyone who may have done immoral things in the military - who carried out unlawful orders.  Those participating in incidents like My Lai certainly are responsible for what they did (Nuremberg established that principle), but I'm not in any position to judge them.  I don't know what I would have done had I been there - my good fortune in my war was to escape such awful situations.  I'm grateful for that.  I'm certainly no hero, by any stretch of the imagination.  With time, I've mostly come to terms with my service and am not at all ashamed to be a military veteran who participated (in a very minor way) in a war on foreign soil - like my son - and my father before me.  In my family, we have answered the call of our nation.

The real crime, in my opinion, is that we ask our young people to engage in wars, not only to defend our liberty, but in many instances to carry out the political will of our government by the application of violent force on our "enemies".  We throw them into the cesspool that is a real war and we expect - no, demand - that they come back squeaky clean.  Let us all ponder that as we recognize our war fighters for their service on this national holiday!  May we eventually come to learn that war is supposed to be a last resort, engaged in to defend ourselves and our allies from those who would harm us - not to be a violent means of imposing our political will on others.