Numerical Methods, Analysis, and Scientific Computing

Since the advent of High Performance Computing (HPC) for climate and weather models, computer processor speeds have kept pace with the increase in the number of grid points. Practically, this has meant that it has always taken about two weeks of wall-clock time for an HPC DNS start-up run, or roughly two years of wall-clock time for the highest-resolution ocean model run, at the highest resolutions we can afford to store in computer memory. This is about to change.

In the coming years, processor speeds will no longer keep increasing. Instead, the degree of parallelism will increase to unprecedented levels. Why? Because computers now require as much power as a town. If computing systems continued to evolve as they have, they would need 200 MW of power to operate in the exaflop regime. Compare this with a typical US city of 80,000 people, which requires 45 MW. Computer companies are now attempting designs that use 20 MW of power rather than 200 MW, and this constraint is driving the drastic change in computing architectures.

The problem this development poses for climate and weather models is that the equations they solve numerically have oscillatory stiffness. Practically speaking, this means that every time modelers increase the total number of grid points, leading to more refined projections, the time step must decrease. This can lead to situations where a 10-year high-resolution model run takes 10 years of wall-clock time, which is far too long to wait to understand the science of climate and weather. Moreover, this problem arises not only in weather and climate models but in any model with oscillatory stiffness, such as magnetohydrodynamics (MHD).

In High Performance Computing, this is known as the strong scaling limit.
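To make the stiffness concrete, here is a minimal sketch (my own illustration, not taken from any climate code) using the oscillatory test equation y'(t) = i·ω·y, whose eigenvalue sits on the imaginary axis, much like the fast waves in these models. An explicit scheme such as classical RK4 is stable only while ω·dt stays inside its imaginary-axis stability interval (|ω·dt| ≤ 2√2), so raising the fastest resolved frequency ω forces the time step dt down with it:

```python
def rk4_step(f, y, t, dt):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(omega, dt, t_end=10.0):
    """Integrate the oscillatory test equation y' = i*omega*y, y(0) = 1."""
    f = lambda t, y: 1j * omega * y
    y, t = 1.0 + 0.0j, 0.0
    for _ in range(round(t_end / dt)):
        y = rk4_step(f, y, t, dt)
        t += dt
    return y

# omega*dt = 0.5 is well inside RK4's imaginary-axis stability
# interval (|omega*dt| <= 2*sqrt(2) ~ 2.83): |y| stays near 1.
print(abs(integrate(omega=10.0, dt=0.05)))  # ~1

# Raise omega without shrinking dt: omega*dt = 3.0 leaves the
# stability region and the numerical solution blows up.
print(abs(integrate(omega=60.0, dt=0.05)))  # grows without bound
```

The exact solution has |y(t)| = 1 for all time, so any growth is purely a numerical instability; this is why refining the grid, which raises ω, forces the time step, and hence the wall-clock time, to grow with resolution.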

In 2001, Lions, Maday, and Turinici published an influential paper on parallel-in-time methods. This was quickly followed by another explication of the method by Maday and Turinici in 2003.

In 2013, Terry Haut and I were experimenting with a mathematical method, developed by Schochet (1994), Klainerman and Majda (1981), and Embid and Majda (1996) for studying fast singular limits of hyperbolic equations, which we were trying to use as a time-stepping method. The idea came from work developed in response to the problem of understanding the mathematics and physics behind numerical weather prediction. A sampling of these papers: "On the existence of a slow manifold", Lorenz (1986); "On the nonexistence of a slow manifold", Lorenz and Krishnamurthy (1987); "The slow manifold: what is it?", Lorenz (1992).

These questions were first outlined during the earliest history of numerical weather prediction. The outcome was that Charney formulated the quasi-geostrophic equations, which led to some of the first successful numerical weather prediction models.

Today’s models, however, have abandoned the simpler equations in favor of equations that contain all of the fast scales Charney had removed. One of the key papers making this argument for the atmosphere is Davies, Staniforth, Wood, and Thuburn (2003).

The idea behind our work was that if most of the flow is actually slow, then turning the mathematical method into a numerical one would provide a very accurate time-stepping method. That turned out not to be the case: the method came close to the high-resolution time-stepping method, but was not accurate enough for numerical prediction. We realized, however, that we had discovered a good ‘slow’ coarse guess for the Parareal method for systems of equations with oscillatory stiffness. These ideas are intimately related to ideas we are studying in theoretical fluid dynamics.
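A bare-bones sketch of the Parareal iteration may help here. This is my own illustration: the forward-Euler propagators and the toy decay problem are stand-ins, not the ‘slow’ coarse propagator from our work. Parareal alternates cheap serial sweeps of a coarse propagator G with expensive fine solves F that can run in parallel across time slices:

```python
import numpy as np

def euler(f, y, ta, tb, steps):
    """Explicit Euler from ta to tb with a fixed number of steps."""
    dt = (tb - ta) / steps
    t = ta
    for _ in range(steps):
        y = y + dt * f(t, y)
        t += dt
    return y

def parareal(f, y0, t0, t1, n_slices, coarse, fine, n_iter):
    """Bare-bones Parareal iteration for y' = f(t, y).

    coarse(f, y, ta, tb): cheap propagator G (the 'slow' guess)
    fine(f, y, ta, tb):   accurate, expensive propagator F; in practice
                          the fine solves over the slices run in parallel.
    """
    ts = np.linspace(t0, t1, n_slices + 1)
    # Initial guess: one serial sweep of the coarse propagator.
    U = [y0]
    for n in range(n_slices):
        U.append(coarse(f, U[-1], ts[n], ts[n + 1]))
    for _ in range(n_iter):
        # Fine solves depend only on the previous iterate, so they are
        # independent across slices -- this is the parallel-in-time step.
        F = [fine(f, U[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        G_old = [coarse(f, U[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        # Serial correction sweep: U_{n+1} = G(U_n) + F_old - G_old.
        U_new = [y0]
        for n in range(n_slices):
            G_new = coarse(f, U_new[-1], ts[n], ts[n + 1])
            U_new.append(G_new + F[n] - G_old[n])
        U = U_new
    return U

# Toy decay problem y' = -y on [0, 1]; exact solution at t=1 is exp(-1).
coarse = lambda f, y, ta, tb: euler(f, y, ta, tb, 1)
fine = lambda f, y, ta, tb: euler(f, y, ta, tb, 100)
U = parareal(lambda t, y: -y, 1.0, 0.0, 1.0,
             n_slices=10, coarse=coarse, fine=fine, n_iter=3)
print(U[-1])  # close to exp(-1) ~ 0.3679
```

The quality of the coarse guess controls how few iterations are needed; a coarse propagator that already captures the slow dynamics well, as in our work, is what makes the iteration pay off.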
