**Note**: This topic is very much focused on science, and may be challenging to nonscientist readers. I have tried to provide some background, but within the confines of a blog, this is necessarily heavily abbreviated.

Several years ago, I wrote an extended essay about the use of numerical models in weather forecasting and research. In this blog, I want to emphasize something I think is pretty important to consider when using numerical models in a research mode. Numerical models of the atmosphere are approximations to the mathematical models from which they are built - they are a model (i.e., an approximation) of a model. The mathematical model, in turn, is a model of reality - this is an interesting aspect of mathematics - that is, mathematics can be used to describe phenomena in the real world - but I digress ...
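This "model of a model" idea can be made concrete with a toy example (my own illustration, not drawn from any actual atmospheric model). Take a mathematical model - the simple differential equation dy/dt = -y, whose exact solution is y(t) = exp(-t) - and approximate it numerically with forward-Euler time stepping. The numerical model only approaches the mathematical model as the time step shrinks:

```python
import math

# Mathematical model: dy/dt = -y, with exact solution y(t) = exp(-t).
# Numerical model: forward-Euler discretization, a further approximation.
def euler(y0, dt, steps):
    y = y0
    for _ in range(steps):
        y = y + dt * (-y)   # discrete stand-in for the differential equation
    return y

exact = math.exp(-1.0)            # y(1) according to the mathematical model
coarse = euler(1.0, 0.1, 10)      # numerical model with dt = 0.1
fine = euler(1.0, 0.001, 1000)    # numerical model with dt = 0.001

# The coarse run is off by a few percent; the fine run is much closer.
# Neither is the mathematical model itself - both are approximations of it.
```

Even here, with one equation and no physics left out, the numerical answer differs from the mathematical one; a full atmospheric model compounds this truncation error across millions of grid points and many interacting equations.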

The mathematical formulae approximated in the numerical model are drawn from physical laws (e.g., conservation of mass, energy, and momentum) that are assumed to apply to any physical system. Atmospheric models typically do not include relativistic effects or quantum physics, for instance, which are *assumed* to be unimportant to the atmosphere. In a model of the atmosphere, the equations describing those physical laws are written in a form suitable to apply to the fluid that is the atmosphere. Depending on the model, certain physical effects are assumed to be important while others are ignored. No practical model of anything can incorporate *everything*!

It's implicitly assumed that the aforementioned physical laws govern the temporal behavior of air "parcels" - an air parcel isn't defined in quantitative terms in any absolute sense, but is, rather, of *indefinite size*. The properties of the air (temperature, humidity, pressure, wind velocity) within the parcel include the possibility of spatial variations within the parcel - parcel properties need not be constant within that indefinite volume. If the model is started at some instant with whatever knowledge we have of the distribution of atmospheric properties over a volume containing a large number of parcels (up to and including the entire, global atmosphere) at that given instant, then the model can be used to predict the evolution of that distribution over time. This is the basis for using numerical models to predict the weather days in advance. Numerical forecast models have been quite successful in advancing the art and science of weather forecasting!
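The forecasting procedure just described - start from an initial distribution of properties, then march it forward in time - can be sketched in miniature. This is a cartoon of my own devising, not any operational scheme: a single temperature-like "blob" carried along a one-dimensional periodic grid by a first-order upwind scheme:

```python
# Toy "forecast": advect a 1-D field at constant speed on a periodic grid.
# Given an initial state, the model predicts the state at later times - the
# essence of numerical weather prediction, stripped to a cartoon.
N = 100                       # number of grid cells (the "parcels" we resolve)
dx, dt, c = 1.0, 0.5, 1.0     # grid spacing, time step, advection speed

# Initial condition: a single warm "blob" between cells 40 and 49.
state = [1.0 if 40 <= i < 50 else 0.0 for i in range(N)]

def step(u):
    """Advance one time step with upwind differencing (valid for c > 0)."""
    cfl = c * dt / dx
    # u[i - 1] wraps around at i = 0, giving a periodic domain - which is
    # exactly how a toy problem dodges the boundary complications a real
    # limited-area model must confront.
    return [u[i] - cfl * (u[i] - u[i - 1]) for i in range(len(u))]

for _ in range(40):           # integrate 40 steps into the "future"
    state = step(state)

# The blob has moved about c*dt*40/dx = 20 cells downstream, though upwind
# differencing has also smeared it out - a numerical artifact, not physics.
```

Note that even this trivial model exhibits the theme of the whole essay: the smearing of the blob is a property of the numerical approximation, not of the "atmosphere" being modeled.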

Nevertheless, a huge problem in atmospheric science is that physical processes can affect the weather over an enormous range of time and space scales, from the microscopic at least to the size of the Earth (if not the whole solar system)! If the volume within which the model operates is something *less* than global in scale, there may be relevant physical processes that are too *large* to be included in the model. Moreover, anything less than global scale introduces spatial domain boundaries, which represent another complication.

On the other hand, the number of parcels we choose to include within the domain over which the model operates determines its **resolution**, where by "resolution" we mean its ability to represent physical processes properly. Any practical model cannot have *infinite* resolution, so in addition to processes that are too large to be included in the model, there are also processes operating on scales that are too *small* to be represented. They 'fall through the cracks', as it were. If it's deemed that processes not describable within the model by specifying parcel properties are nevertheless important, they must be represented within the model in some other way, such that the model can incorporate them in its mathematical framework - this is called *parameterization*. It's a way to include physical processes felt to be important but not resolvable within the model. There's considerable *art* associated with developing such schemes, but they inevitably *misrepresent* the processes they've been developed to represent.
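A classic example of parameterization - again a toy of my own construction, not any model's actual scheme - is representing unresolvable turbulent mixing as simple diffusion with a tunable "eddy" coefficient K. The coefficient is the knob that embodies the scheme's assumptions; nothing in the resolved physics determines it:

```python
# Toy parameterization: turbulent mixing occurs on scales far below the grid
# spacing, so it can't be resolved. Its bulk effect is represented instead as
# diffusion with an eddy coefficient K - a chosen number, not resolved physics.
N, dx, dt = 50, 1.0, 0.1
K = 0.5                               # eddy diffusivity (the parameterized part)

temp = [10.0] * N
temp[25] = 30.0                        # a sharp anomaly at unresolved scale

def mix(u):
    """One step of parameterized subgrid mixing (explicit diffusion)."""
    r = K * dt / dx**2                 # r < 0.5 keeps the scheme stable
    return [u[i] + r * (u[(i + 1) % len(u)] - 2 * u[i] + u[i - 1])
            for i in range(len(u))]

for _ in range(100):
    temp = mix(temp)

# The anomaly has been smoothed out, as real turbulence would do - but *how
# fast* it smooths is dictated by K, an assumption, not by resolved physics.
```

Change K and the model "atmosphere" mixes faster or slower - which is precisely the sense in which a parameterization inevitably misrepresents the process it stands in for.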

Anything contained within the model by whatever means (either by direct representation or by parameterization) defines the physical processes allowed by that model. Thus, any model is necessarily an approximation to the real atmosphere: it resolves certain processes, represents other processes that can't be resolved in some relatively crude way, and ignores many processes entirely.

The main point of this very abbreviated tutorial is to provide some basis for what I'm about to say. Research scientists use models to try to understand the physics of the atmosphere. They can develop and study modeling results to explore the quantitative implications of the physical processes represented within the model. What researchers may sometimes fail to keep in mind is that *once the model has been developed*, *it can only represent those physical processes allowed by the assumptions the model developers have made about what is and is not important within that model*. The model is utterly and totally blind to any process that can't be represented by its numerical formulae. The assumptions made in building the model inevitably restrict the possibilities for what the model can show, right from the outset. The atmosphere contained within the model is only a "toy atmosphere" - an idealization. If an error was made anywhere in the development of the model (from the assumptions to the mathematical formulations, to the numerical approximations, and finally to the computer code used to run the model), then the challenge in using that model is knowing how to recognize that error and track down its source.

New physical insight can be developed from numerical simulation models, but such insight has to be firmly vetted via empirical evidence. New understanding doesn't spring automatically from running and analyzing the results of numerical simulation models.

Numerical models can be seductive in their appearance of great precision and apparent quantitative insight. But the old computational law applies to them: garbage in, garbage out. The key to using such models is that they must remain grounded in the principle of observational confirmation. A pure modeling effort not validated against empirical evidence is nothing more than *speculation*!
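The validation step itself is simple to sketch, even if doing it well is not. All the numbers below are invented purely for illustration; real verification uses actual station or satellite observations, and far richer scores than this one:

```python
import math

# Sketch of model verification: compare model output against observations.
# Both lists are made-up illustrative values, not real data.
forecast = [12.1, 13.4, 15.0, 14.2, 11.8]   # hypothetical model output (deg C)
observed = [11.8, 13.9, 14.1, 14.6, 12.5]   # hypothetical measurements (deg C)

def rmse(model, obs):
    """Root-mean-square error: a standard, if blunt, verification score."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

score = rmse(forecast, observed)
# A small RMSE doesn't prove the physics is right, but a large one is a
# strong hint that something - assumptions, numerics, or code - is wrong.
```

A low score is necessary but not sufficient: a model can verify well for the wrong reasons, which is why empirical grounding has to go beyond a single summary statistic.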

## 2 comments:

I get a little worried when I wander through the posters at AGU or AMS and see how much importance is given to modeling, in contrast to observation and analysis. The mathematician Richard Hamming said "The purpose of computing is insight, not numbers." There should be some corollary that says "The purpose of modeling is understanding the physical world--not just to study the model."

