Earth Cubed

Distributed Climate Science and Computing

Numeric Solutions to the Heat Equation

I have been reading a lot on Lucia’s blog about two-box models, which are essentially an approximation of the heat equation using basis functions that are constant over each box.

The heat equation is given by:

\frac{\partial u}{\partial t} -k\left(\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}+\frac{\partial^2u}{\partial z^2}\right)=0

or equivalently:

\frac{\partial u}{\partial t} = k \nabla^2 u

The fundamental solutions, or Green's functions, are of the form:

\Phi(\mathbf{x},t) = \Phi(x_1,t)\Phi(x_2,t)\dots\Phi(x_n,t)=\frac{1}{(4\pi k t)^{n/2}}e^{-\mathbf{x}\cdot\mathbf{x}/4kt}.

This suggests my choice of a negative-exponential basis in my post (Lagrangian Mechanics and the Heat Equation) was not too bad a choice, although Gaussian functions decay faster than negative exponentials.

Not all solutions are based on fundamental solutions. For instance, in the post (Approximations used in Crank-Nicolson method for solving PDEs numerically) I read that the Crank–Nicolson method is the standard method of solving the heat equation numerically.

For instance in 1-D

\frac{\partial u}{\partial t}=k \frac{\partial^2u}{\partial x^2}

the Crank–Nicolson method is given by:

\frac{u_{i}^{n + 1} - u_{i}^{n}}{\Delta t} = \frac{k}{2 (\Delta x)^2}\left((u_{i + 1}^{n + 1} - 2 u_{i}^{n + 1} + u_{i - 1}^{n + 1}) + (u_{i + 1}^{n} - 2 u_{i}^{n} + u_{i - 1}^{n})\right)
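As an illustration, here is a minimal Python/NumPy sketch of the scheme above for the 1-D problem, assuming zero (Dirichlet) boundary values; the grid, time step, and sine-mode test case are my own choices, not from any particular source:

```python
import numpy as np

def crank_nicolson_step(u, k, dx, dt):
    """Advance the 1-D heat equation one time step with Crank-Nicolson.

    Interior points satisfy
    (u^{n+1} - u^n)/dt = k/(2 dx^2) * (D2 u^{n+1} + D2 u^n),
    where D2 is the second central difference; u = 0 is held at both ends.
    """
    r = k * dt / (2.0 * dx**2)
    m = len(u) - 2                      # number of interior points
    # tridiagonal second-difference matrix for the interior points
    D2 = (np.diag(-2.0 * np.ones(m)) +
          np.diag(np.ones(m - 1), 1) +
          np.diag(np.ones(m - 1), -1))
    A = np.eye(m) - r * D2              # implicit (n+1) side
    B = np.eye(m) + r * D2              # explicit (n) side
    u_new = u.copy()
    u_new[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    return u_new

# Example: decay of a sine mode, for which the exact solution
# sin(pi x) exp(-k pi^2 t) is known.
n, L, k = 50, 1.0, 1.0
x = np.linspace(0.0, L, n + 1)
dx, dt = L / n, 1e-4
u = np.sin(np.pi * x)
for _ in range(100):                    # integrate to t = 0.01
    u = crank_nicolson_step(u, k, dx, dt)
```

For this smooth initial condition the numeric solution tracks the analytic decay closely, which is the point of the scheme's second-order accuracy in both dt and dx.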

It should be noted that this method produces a difference equation. The values at the next time step can be solved for analytically, e.g. using Cramer's rule (see also Invertible matrix, analytic solutions). The frequency-domain characteristics can be explored using the z-transform, with the frequency response given by evaluating the z-transform along the unit circle.

Also note that the work of computing future time steps can be reduced either by repeated squaring of the update matrix or by using a matrix decomposition.
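The repeated-squaring idea can be sketched as follows: form the one-step update matrix M (so that u^{n+1} = M u^n) and square it p times to advance 2^p steps at once. A Python sketch with hypothetical grid parameters:

```python
import numpy as np

# One-step Crank-Nicolson update matrix M for the interior points,
# so that u^{n+1} = M u^n; advancing 2**p steps then needs only p
# matrix squarings instead of 2**p linear solves.
m, k, dx, dt = 49, 1.0, 0.02, 1e-4
r = k * dt / (2.0 * dx**2)
D2 = (np.diag(-2.0 * np.ones(m)) +
      np.diag(np.ones(m - 1), 1) +
      np.diag(np.ones(m - 1), -1))
M = np.linalg.solve(np.eye(m) - r * D2, np.eye(m) + r * D2)

def power_by_squaring(A, p):
    """Return A**(2**p) using p matrix squarings."""
    for _ in range(p):
        A = A @ A
    return A

M128 = power_by_squaring(M, 7)          # 2**7 = 128 steps in 7 squarings
M_ref = np.linalg.matrix_power(M, 128)  # same matrix, for comparison
```

An eigendecomposition of M would serve the same purpose, since powers of M reduce to powers of its eigenvalues.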

For other numerical methods of solving partial differential equations see (Numerical partial differential equations), which I posted in the thread (Preparation for PDEs).

Further, with regard to Crank–Nicolson, there is no time dependency on the right-hand side of the equations, so other methods can be used to discretize the heat equation, such as Laplace transforms or the matrix exponential.

With regards to Lucia’s blog

My understanding, as posted in (Arthur’s Case 2 (I think)), is that the main focus of Lucia’s blog posts is to test the model chosen by Tamino:

lucia (Comment#19822) September 12th, 2009 at 9:23 pm

What I mean is– when testing the two box model, you don’t switch to the diffusive model even if it’s more inherently sensible and intelligent. That’s because to test “X” you must test “X”. You can’t test “Y” even if “Y” seems more likely to pass the test.

This is fine, but I think a wider discussion is warranted about how this model is just a simplified version of the heat equation, and about what principles of modeling and differential equations can be useful for obtaining better solutions.

September 15, 2009 Posted by | Math | 3 Comments

The Cross Product in Non-Orthogonal Coordinate Systems

The form of the cross product I’ve shown in my post Coriolis Forces is:

\boldsymbol{\Omega \times v} =  \begin{pmatrix} \Omega_y v_z - \Omega_z v_y \\ \Omega_z v_x - \Omega_x v_z  \\ \Omega_x v_y - \Omega_y v_x  \end{pmatrix}

The components of this cross product can be written as follows:

\left[ \boldsymbol{\Omega \times v} \right]_x=  \left[ \begin{matrix}\Omega_x&\Omega_y&\Omega_z \end{matrix} \right]\left[ \begin{matrix}0&0&0 \\ 0&0&1 \\ 0&-1&0\end{matrix} \right]\left[ \begin{matrix}v_x\\v_y\\v_z \end{matrix} \right]

\left[ \boldsymbol{\Omega \times v} \right]_y=  \left[ \begin{matrix}\Omega_x&\Omega_y&\Omega_z \end{matrix} \right]\left[ \begin{matrix}0&0&-1 \\ 0&0&0 \\ 1&0&0\end{matrix} \right]\left[ \begin{matrix}v_x\\v_y\\v_z \end{matrix} \right]

\left[ \boldsymbol{\Omega \times v} \right]_z=  \left[ \begin{matrix}\Omega_x&\Omega_y&\Omega_z \end{matrix} \right]\left[ \begin{matrix}0&1&0 \\ -1&0&0 \\ 0&0&0\end{matrix} \right]\left[ \begin{matrix}v_x\\v_y\\v_z \end{matrix} \right]

We will abbreviate these relationships as follows:

\left[ \boldsymbol{\Omega \times v} \right]_x=\Omega^T[\times]_xv

\left[ \boldsymbol{\Omega \times v} \right]_y=\Omega^T[\times]_yv

\left[ \boldsymbol{\Omega \times v} \right]_z=\Omega^T[\times]_zv
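These component formulas are easy to check numerically. Here is a minimal Python/NumPy sketch (the matrix names Cx, Cy, Cz are my own labels for the three bracketed matrices) that verifies each bilinear form against NumPy's built-in cross product:

```python
import numpy as np

# The three matrices [x]_x, [x]_y, [x]_z from the component formulas above.
Cx = np.array([[0., 0., 0.], [0., 0., 1.], [0., -1., 0.]])
Cy = np.array([[0., 0., -1.], [0., 0., 0.], [1., 0., 0.]])
Cz = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])

rng = np.random.default_rng(0)
omega, v = rng.standard_normal(3), rng.standard_normal(3)

# Each component of omega x v is the bilinear form omega^T [x]_k v.
cross = np.array([omega @ C @ v for C in (Cx, Cy, Cz)])
```

The array `cross` agrees with `np.cross(omega, v)` component by component.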

Now define the coordinate transform:

\boldsymbol{r'}=T \boldsymbol{r}
\boldsymbol{r}=\left[\begin{matrix}  x \\ y \\ z \end{matrix} \right]

Then the cross product components can be written as follows:

\left[ \boldsymbol{\Omega \times v} \right]_x=(\Omega^TT^T)((T^{-1})^T[\times]_xT^{-1})(Tv)

\left[ \boldsymbol{\Omega \times v} \right]_y=(\Omega^TT^T)((T^{-1})^T[\times]_yT^{-1})(Tv)

\left[ \boldsymbol{\Omega \times v} \right]_z=(\Omega^TT^T)((T^{-1})^T[\times]_zT^{-1})(Tv)

Now applying the transform T to the vector of cross-product components \left[ \boldsymbol{\Omega \times v} \right] gives:

\left[ \boldsymbol{\Omega \times v} \right]_{x'}=\sum_{k \in \{x,y,z\}}T_{1,k}(\Omega^TT^T)((T^{-1})^T[\times]_kT^{-1})(Tv)

\left[ \boldsymbol{\Omega \times v} \right]_{y'}=\sum_{k \in \{x,y,z\}}T_{2,k}(\Omega^TT^T)((T^{-1})^T[\times]_kT^{-1})(Tv)

\left[ \boldsymbol{\Omega \times v} \right]_{z'}=\sum_{k \in \{x,y,z\}}T_{3,k}(\Omega^TT^T)((T^{-1})^T[\times]_kT^{-1})(Tv)

This can be written in the form:
\left[ \boldsymbol{\Omega \times v} \right]_{x'}=\Omega'^T[\times]_{x'}v'

\left[ \boldsymbol{\Omega \times v} \right]_{y'}=\Omega'^T[\times]_{y'}{v'}

\left[ \boldsymbol{\Omega \times v} \right]_{z'}=\Omega'^T[\times]_{z'}{v'}

where:

\left[[\times]_{x'}\right]_{i,j}=\sum_{k \in \{x,y,z\}}T_{1,k}[((T^{-1})^T[\times]_kT^{-1})]_{i,j}

\left[[\times]_{y'}\right]_{i,j}=\sum_{k \in \{x,y,z\}}T_{2,k}[((T^{-1})^T[\times]_kT^{-1})]_{i,j}

\left[[\times]_{z'}\right]_{i,j}=\sum_{k \in \{x,y,z\}}T_{3,k}[((T^{-1})^T[\times]_kT^{-1})]_{i,j}
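The transformation law above can also be checked numerically. A sketch, using a random (almost surely invertible) matrix for T and my own variable names:

```python
import numpy as np

# Skew matrices [x]_x, [x]_y, [x]_z in the original coordinates.
Cx = np.array([[0., 0., 0.], [0., 0., 1.], [0., -1., 0.]])
Cy = np.array([[0., 0., -1.], [0., 0., 0.], [1., 0., 0.]])
Cz = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])
C = [Cx, Cy, Cz]

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))        # any invertible transform r' = T r
Ti = np.linalg.inv(T)
omega, v = rng.standard_normal(3), rng.standard_normal(3)
omega_p, v_p = T @ omega, T @ v        # transformed vectors

# [x]_{k'} = sum_k T[row, k] (T^{-1})^T [x]_k T^{-1}
Cp = [sum(T[row, k] * Ti.T @ C[k] @ Ti for k in range(3))
      for row in range(3)]

# Omega'^T [x]_{k'} v' should reproduce T (Omega x v) component-wise.
lhs = np.array([omega_p @ Cp[row] @ v_p for row in range(3)])
rhs = T @ np.cross(omega, v)
```

The two sides agree, confirming that the primed matrices give the cross product's components in the new (generally non-orthogonal) coordinates.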


September 7, 2009 Posted by | Math | 2 Comments

Laplace Transform of f(t) Related to smoothed f(t)?

When reading (Comment#18839) I started to wonder if there was a relationship between the Fourier transform of a smoothed signal and the Laplace transform. I assumed there was a relationship (Comment#18854). After further derivation, I recommend that, if the goal is to derive the Laplace transform from the Fourier transform of the filtered signal:

1) The signal be properly windowed.

2) The FFT of the windowed signal be compensated for the frequency effects that result from the low-pass filter.

Whether it is a good idea to compute the Laplace transform from a windowed FFT of a filtered signal is outside the scope of this thread (but feel free to comment below).

The Laplace transform is given by:

1)           (\mathcal{L}f)(s) = \int_{0^-}^\infty e^{-st}f(t)\,dt

The Fourier transform is given by:

2)           \hat{f}(\omega) := \int_{-\infty}^{\infty} f(t)\ e^{- i \omega t }\,dt,

The Two Sided Laplace Transform is given by:

3)           \mathcal{B} \left\{f(t)\right\} = F(s) =  \int_{-\infty}^{\infty}  e^{-st} f(t) dt.

Therefore the Fourier transform is the two-sided Laplace transform evaluated at s=i\omega.

Returning to the one-sided Laplace transform, now written with general endpoints a and b and with s=\alpha+i\omega:

4)            (\mathcal{L}f)(s) = \int_{a}^b e^{-st}f(t)\,dt= \int_{a}^b e^{-(\alpha+i\omega)t}f(t)\,dt

5)            (\mathcal{L}f)(s) = \int_{a}^b e^{-st}f(t)\,dt= \int_{a}^b e^{-i\omega t}f(t)e^{- \alpha t}\,dt

Integrating by Parts:


6)            u=e^{- i \omega t}
7)           dv=f(t)e^{- \alpha t}\,dt

8)            V=\int_{a}^tf(t')e^{-\alpha t'}\,dt'=e^{-\alpha t}\int_{t'=a}^tf(t')e^{\alpha (t-t')}\,dt'

where the low-pass-filtered version of f(t) is:

9)            g(t)=\int_{a}^tf(t')e^{\alpha (t-t')}\,dt'

so that V(t)=e^{-\alpha t}g(t). Here g(t) is the convolution of f(t) with the impulse response e^{\alpha t}u(t) of a filter (or at least approximately so) with bandwidth |\alpha| .

Plugging this result into integration by parts gives:

10)         (\mathcal{L}f)(s)=[e^{-i \omega t}e^{-\alpha t}g(t)]_a^b-\int_a^b-i \omega e^{-i \omega t}e^{-\alpha t}g(t)\,dt

or equivalently:

11)          (\mathcal{L}f)(s)=g(b)e^{-(\alpha+i\omega) b}-g(a)e^{-(\alpha+i\omega) a}+i\int_a^b \omega e^{-i \omega t}e^{-\alpha t}g(t)\,dt

The first two terms show how the chosen endpoints affect the transform. For a given frequency these two terms cancel when the distance between the endpoints is a multiple of the period (and the boundary values of e^{-\alpha t}g(t) agree). The last term is the Fourier transform of the smoothed function, with the frequencies weighted by \omega and using the windowing function e^{-\alpha t} (the factor of i appears because the Fourier variable i\omega is the Laplace variable rotated by 90 degrees).

The effect of the windowing function is to smooth the frequency response, because multiplication in the time domain is equivalent to convolution in the frequency domain. The following Fourier-transform pair is useful (relationship 205):

12)           \mathcal{F}(e^{- \alpha t} u(t)) \,=\frac{1}{\sqrt{2 \pi} (\alpha + i \omega)}=\frac{1}{\sqrt{2 \pi (\alpha^2 + \omega^2)}}e^{-i \tan^{-1}(\omega/\alpha)}

Note that if a non-causal filter were used for the smoothing, the relationship would be much simpler:

13)           \mathcal{F}({e}^{-a|t|}) \,=\sqrt{\frac{2}{\pi}} \cdot \frac{a}{a^2 + \omega^2}

In both cases, to properly deal with the endpoints, the time-shifting property of the Fourier transform is needed (relationship 102):

14)           \mathcal{F}(f(t - a))\, =e^{- i a \omega} \hat{f}(\omega)\,

Applying this property to the last two relationships gives:

15)           \mathcal{F}(e^{- \alpha (t-a)} (u(t-a)-u(t-b))) \,=\frac{e^{-ia \omega}-e^{-\alpha(b-a)-ib\omega}}{\sqrt{2 \pi} (\alpha + i \omega)}

16)           \mathcal{F}({e}^{-a|t-b|}) \,=\sqrt{\frac{2}{\pi}} \cdot \frac{a}{a^2 + \omega^2}\,e^{-i\omega b}

Dealing strictly with the case where a causal filter is used, and applying the rule for the Fourier transform of a convolution (rule 109), we obtain:

17)           (\mathcal{L}f)(s)=g(b)e^{-(\alpha+i\omega) b}-g(a)e^{-(\alpha+i\omega) a}+i\omega\frac{1}{\sqrt{2\pi}}\mathcal{F}(g(t))*\left( \frac{e^{-ia \omega}-e^{-\alpha(b-a)-ib\omega}}{\sqrt{2 \pi} (\alpha + i \omega)} \right)

or equivalently:

18)           (\mathcal{L}f)(s)=g(b)e^{-(\alpha+i\omega) b}-g(a)e^{-(\alpha+i\omega) a}+i\omega\frac{1}{\sqrt{2\pi}}\mathcal{F}(g(t))*\left( \frac{e^{-ia \omega}-e^{-\alpha(b-a)-ib\omega}}{\sqrt{2 \pi(\alpha^2 +\omega^2)}}e^{ -i \tan^{-1}(\omega/\alpha) } \right)

Some Comments:

1) If \alpha is negative the filter e^{\alpha t}u(t) is causal and stable, so the filtered version g(t) of the signal f(t) is well behaved.

2) Computing the smoothed signal does not save any computation when it comes to computing the Laplace transform.

3) The derivation seems to show that there is a relationship between the Laplace transform and a windowed Fourier transform of the filtered signal.

4) To compute the Laplace transform from the original signal, use equation (5). To compute it from the filtered signal, use equation (11).
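As a numeric sanity check of the derivation (taking g as the causal exponentially filtered signal, so that V(t)=e^{-\alpha t}g(t) and the boundary terms carry the full factor e^{-st}), the following Python sketch compares the direct finite-interval Laplace integral of equation (5) with the integration-by-parts form; the test signal and parameter values are arbitrary choices of mine:

```python
import numpy as np

def trapz(y, x):
    """Trapezoid rule, kept explicit so the sketch is self-contained."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# f(t) on [a, b], s = alpha + i*omega.  Compare the direct transform
#   L = int_a^b e^{-s t} f(t) dt
# with the integration-by-parts form built from the filtered signal
#   g(t) = int_a^t f(t') e^{alpha (t - t')} dt'
#   L = g(b) e^{-s b} - g(a) e^{-s a} + i omega int_a^b e^{-s t} g(t) dt
a, b = 0.0, 10.0
alpha, omega = -0.5, 2.0               # alpha < 0: stable, causal filter
s = alpha + 1j * omega
t = np.linspace(a, b, 20001)
f = np.exp(-t) * np.sin(3.0 * t)       # an arbitrary test signal

direct = trapz(np.exp(-s * t) * f, t)

# running integral V(t) = int_a^t f(t') e^{-alpha t'} dt' (cumulative trapezoid)
integrand = f * np.exp(-alpha * t)
V = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1])
                                     * np.diff(t))))
g = np.exp(alpha * t) * V              # the exponentially filtered signal
via_filter = (g[-1] * np.exp(-s * b) - g[0] * np.exp(-s * a)
              + 1j * omega * trapz(np.exp(-s * t) * g, t))
```

On this smooth example the two computations agree to discretization accuracy, which supports the algebra while also illustrating comment (2): the filtered route does the same amount of numerical work.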

August 30, 2009 Posted by | Math | 8 Comments

Hoskins and Simmons (1974) Coordinate System


Hoskins and Simmons used the following coordinate system:

\mu = sin( \theta ) where \theta is the latitude.

\sigma = pressure/P_* where P_* is the surface pressure and \sigma is the vertical coordinate.

\lambda is the longitude.

This coordinate system is also used in the following model, which was recommended to me as a simple one:

As the complexity of general circulation models has been and still is growing considerably, it is not surprising that, for both education and research, models simpler than those comprehensive GCMs at the cutting edge of the development, are becoming more and more attractive. These medium complexity models do not simply enhance the climate model hierarchy. They support understanding atmospheric or climate phenomena by simplifying the system gradually to reveal the key mechanisms. They also provide an ideal tool kit for students to be educated and to teach themselves, gaining practice in model building or modeling. Our aim is to provide such a model of intermediate complexity for the university environment: the PlanetSimulator. It can be used for training the next GCM developers, to support scientists to understand climate processes, and to do fundamental research.

In order to use the coordinates given by Hoskins and Simmons in the Navier–Stokes equations it is necessary to find the div, grad, and curl operations in this coordinate system. You can get a free PDF here:

which shows how to derive these operations for any coordinate system. These derivations express the above operations in terms of scaling quantities. A scaling quantity is the magnitude of the derivative of the position with respect to a change in one of the coordinates in the new coordinate system.

In our case, for the coordinate \mu the scaling quantity is:

h_{\mu}=\left| \frac{ \partial \vec r}{\partial \mu} \right| = \left| \frac{ \partial (r  \theta)}{\partial \mu} \right|

because when you change latitude, the distance moved is proportional to the radius multiplied by the change in latitude (measured in radians).

Now expressing the latitude in terms of \mu
h_{\mu}=\left| \frac{ \partial (r \ sin^{-1}( \mu ))}{\partial \mu} \right| = \frac{r}{\sqrt{1-\mu^2}}

Now for \lambda , the longitude coordinate: h_{\lambda} depends on how the angle \theta is measured. The distance moved for a given change in longitude is the change in longitude (measured in radians) multiplied by the distance to the earth's axis of rotation. If \theta is measured from the equator this distance is r cos(\theta) ; if \theta is measured from the north pole (as is typical in spherical coordinates) it is r sin(\theta) .

We will see that measuring \theta from the north pole is simpler.

h_{\lambda}=\left| \frac{ \partial \vec r}{\partial \lambda} \right| = \left| \frac{ \partial (r  \lambda sin(\theta))}{\partial \lambda} \right|=r \ sin(\theta)

But \theta=sin^{-1}(\mu) Therefore:

h_{\lambda}=r \mu
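These scale factors can be checked by finite differences, parameterizing a point on the sphere with \theta measured from the pole (so \mu = sin \theta on the hemisphere), as chosen above. A Python sketch with arbitrary test values:

```python
import numpy as np

# Position on a sphere of radius R, with theta measured from the pole
# (as chosen above), mu = sin(theta), lam = longitude:
#   p = R (sin(theta) cos(lam), sin(theta) sin(lam), cos(theta))
def position(mu, lam, R=1.0):
    theta = np.arcsin(mu)       # valid on the hemisphere theta in [0, pi/2]
    return R * np.array([np.sin(theta) * np.cos(lam),
                         np.sin(theta) * np.sin(lam),
                         np.cos(theta)])

mu, lam, eps = 0.6, 1.2, 1e-6

# scale factors as |dp/d(coordinate)|, by central differences
h_mu = np.linalg.norm((position(mu + eps, lam) - position(mu - eps, lam))
                      / (2 * eps))
h_lam = np.linalg.norm((position(mu, lam + eps) - position(mu, lam - eps))
                       / (2 * eps))
# compare with h_mu = R / sqrt(1 - mu^2) and h_lam = R mu
```

Both finite-difference values match the closed forms derived above.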

The vertical coordinate \sigma is the ratio of the hydrostatic pressure to the hydrostatic pressure at sea level.

In Hoskins & Simmons the hydrostatic pressure used in the coordinate ignores the lapse rate. In some models variations on this are used. For instance, in a model used by Judith Curry this coordinate is adjusted somewhat based on the surface topography of the earth (see my post: Hybrid Coordinate System).


In Curry's model this is done to try to keep the wind flow in the horizontal plane. Anyway, the equation for hydrostatic equilibrium is:

\frac{dp}{dh} = - \frac {mpg}{RT}


\frac{\partial \sigma}{\partial h}=\frac{\partial \sigma}{\partial p}\frac{\partial p}{\partial h}=\frac{1}{P_*}\frac{\partial p}{\partial h}=-\frac{1}{P_*} \frac {mpg}{RT} =-\sigma \frac{mg}{RT}
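For constant T this relation integrates to \sigma(h)=e^{-mgh/(RT)} , with scale height H=RT/(mg) . A small Python sketch checks this, using the molar mass and a universal gas constant with the standard-ish values quoted in this post (chosen purely for illustration):

```python
import numpy as np

# d(sigma)/dh = -sigma * m g / (R T) with T held constant integrates to
# sigma(h) = exp(-m g h / (R T)); the scale height is H = R T / (m g).
m, g, R, T = 28.964e-3, 9.80665, 8.314, 250.0  # kg/mol, m/s^2, J/(mol K), K
H = R * T / (m * g)                            # scale height in meters

# forward-Euler integration of the ODE as a cross-check
dh, h_max = 1.0, 10000.0
sigma = 1.0
for _ in range(int(h_max / dh)):
    sigma += dh * (-sigma * m * g / (R * T))
# compare with the analytic exp(-h_max / H)
```

At T = 250 K this gives a scale height of roughly 7.3 km, and the Euler solution tracks the exponential closely.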

Hmmmmmmm: from what I’ve done, the only way I can see to make the constants \frac{mg}{RT} disappear is to set T=\frac{mg}{R} .

Here is what Hoskins and Simmons have to say about temperature scaling and such.

all non-dimensionalized using as length scale the radius of the planet, a; as time scale the reciprocal of its angular velocity \Omega ; temperature scale a^2\Omega^2/R (R as the gas constant); and pressure scale P_o=1 Bar.

In it, T_o was taken to be 250 K. Some numbers of relevance:

Gas constant of air: 287.058 J kg−1 K−1

Standard temperature: 273.16 K

Standard pressure: 1 Atm = 1013.250*10^2 Pa

Molar mass of air at standard temperature and pressure: 28.964*10^-3 kg/mole

Standard gravity: 9.80665 m/s2

August 27, 2009 Posted by | GCM (General Circulation Models), Math | 3 Comments

Vector Identities and Derivations

Deriving the prognostic equations in a GCM usually requires the use of vector identities. Here are some links which give vector identities:

I’ve been searching for the derivations of these identities and have found an excellent source:

Vector algebra is a powerful and needful tool for Physics but unfortunately, due to lack of mathematical skills, it becomes misleading for first undergraduate courses of science and engineering studies. Standard vector identities are usually proved using Cartesian components or geometrical arguments, accordingly. Instead, this work presents a new teaching strategy in order to derive symbolically vector identities without analytical expansions in components, either explicitly or using indicial notation. This strategy is mainly based on the correspondence between three-dimensional vectors and skew-symmetric second-rank tensors. Hence, the derivations are performed from skew tensors and dyadic products, rather than cross products. Some examples of skew-symmetric tensors in Physics are illustrated.

This will require some understanding of concepts from tensor algebra. Here are some helpful links for the concepts that will be needed:

As a side note, the prognostic equations are typically derived from the vorticity equation. See:

August 23, 2009 Posted by | Math | 2 Comments

Small Region Approx. of Spherical Harmonics

The beauty of the method I originally proposed is that if you take a small enough region of the earth, the grid becomes rectangular. This means that over a small region, spherical harmonics will look more like sine and cosine functions.

If we use sine and cosine functions to approximate spherical harmonics over a small region, the mathematics is greatly simplified and we can use standard FFTs to compute the spectral components over this region. In fact, little will be gained in this case by using the FFT instead of the DFT, because we can use information about the spectral components computed over one small region to reduce the computation of the spectral components in an overlapping region. This recursive computation is only of linear complexity, while the FFT is log-linear.
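The recursive idea can be made concrete with the sliding DFT: when a length-N window advances by one sample, the new spectrum follows from the old one in O(N) operations rather than a fresh O(N log N) FFT. A Python sketch (the window length and data are arbitrary choices of mine):

```python
import numpy as np

# Sliding DFT: when a length-N window moves forward one sample, the DFT of
# the new window follows from the old one in O(N) operations:
#   X'_k = (X_k - x[m] + x[m + N]) * exp(2j pi k / N)
def slide(X, x_old, x_new):
    N = len(X)
    k = np.arange(N)
    return (X - x_old + x_new) * np.exp(2j * np.pi * k / N)

rng = np.random.default_rng(2)
x = rng.standard_normal(256)
N = 64

X = np.fft.fft(x[:N])                  # DFT of the first window
for m in range(32):                    # slide the window 32 samples
    X = slide(X, x[m], x[m + N])
# X now equals the DFT of x[32 : 32 + N]
```

Each shift costs O(N), so sweeping many overlapping windows this way is cheaper than recomputing an FFT per window.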

Of course, on a large scale we want to take full advantage of FFT-like algorithms to compute the spectral components, but at small scales we want interpolating functions which capture the local rather than the global behavior. Consequently we need a new DFT for each small overlapping region, and recursive algorithms are therefore more efficient than FFTs.

Some consideration needs to be given to whether a Fourier series is a good interpolating function for small regions of the Navier–Stokes equations. The approximation can be done with respect to another basis, but the Fourier series has a lot of advantages in terms of computational complexity. Even if another basis is used, it can be used in conjunction with the FFT by using one basis to interpolate the error left by a truncated expansion in the other basis.

August 18, 2009 Posted by | Math | Leave a comment

Recursive Numeric Integration, The FFT and Spherical Harmonics

I was briefly exploring various methods of numeric integration, because if any spectral techniques are used (e.g. spherical harmonics) then the computation of their coefficients involves an inner product between the basis functions and the properties of the earth whose spectral expansion is being computed. The computation of the inner product involves integration.

In order to reduce computations, some kind of recursive integration may be desirable. Examples include: Clenshaw–Curtis quadrature, Gaussian quadrature, and the Romberg method.

These are of course  one dimensional techniques. An interesting concept for multiple dimensional integrals is as follows:

Recursive Integration Methodologies with Statistical Applications
A. J. Hayter, Industrial & Systems Engineering
Georgia Tech, Atlanta, USA

This paper shows how recursive integration methodologies can be used to evaluate high dimensional integral expressions. This has applications to many areas of statistical inference where probability calculations and critical point evaluations often require such high dimensional integral evaluations. Recursive integration can allow an integral expression of a given dimension to be evaluated by a series of calculations of a smaller dimension. This significantly reduces the computation time. The application of the recursive integration methodology is illustrated with several examples.

As for the single-dimension methods of integration: Gaussian quadrature works well when the function being integrated is well approximated by a polynomial. Romberg is quite interesting; it will work for uneven step sizes, and is derived by applying Richardson extrapolation (Richardson 1910) repeatedly to the trapezium rule.

Clenshaw–Curtis quadrature is particularly interesting. In this method a change of variables is done by letting x=cos(\theta) . This reduces the problem of integration to the problem of finding a Fourier (cosine) series, which can be computed efficiently using the Fast Fourier Transform. The Fast Fourier Transform has the advantage that its computational complexity grows only log-linearly in the number of points.
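A minimal Python sketch of Clenshaw–Curtis quadrature via the FFT (my own illustrative implementation, not taken from any particular library):

```python
import numpy as np

def clenshaw_curtis(f, n):
    """Approximate int_{-1}^{1} f(x) dx using n + 1 Chebyshev points.

    Substituting x = cos(theta) turns the integrand into an even periodic
    function of theta, whose cosine coefficients come from an FFT of the
    even extension; integrating the Chebyshev series term by term then
    gives the quadrature (int of T_k over [-1,1] is 2/(1-k^2), even k).
    """
    j = np.arange(n + 1)
    x = np.cos(np.pi * j / n)                # Chebyshev points
    fx = f(x)
    ext = np.concatenate([fx, fx[-2:0:-1]])  # even extension, length 2n
    a = np.real(np.fft.fft(ext))[: n + 1] / n
    a[0] /= 2.0                              # endpoint coefficients halved
    a[n] /= 2.0
    k = np.arange(0, n + 1, 2)               # only even k contribute
    return np.sum(a[k] * 2.0 / (1.0 - k**2))

# Example: integral of e^x over [-1, 1] is e - 1/e
approx = clenshaw_curtis(np.exp, 16)
```

For smooth integrands the error decays spectrally, so even n = 16 points give the integral of e^x essentially to machine precision.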

In climatology, spherical harmonics are used to represent the spectrum instead of Fourier transforms. It is worth checking whether there are efficient ways to compute these transforms. Some relevant links include:

Spherical Harmonic Transforms (SHTs) which are essentially Fourier transforms on the sphere are critical in global geopotential and related applications. Among the best known strategies for discrete SHTs are Chebychev quadratures and least squares. The numerical evaluation of the Legendre functions are especially challenging for very high degrees and orders which are required for advanced geocomputations. The computational aspects of SHTs and their inverses using both quadrature and least-squares estimation methods are discussed with special emphasis on numerical preconditioning that guarantees reliable results for degrees and orders up to 3800 in REAL*8 or double precision arithmetic. These numerical results of spherical harmonic synthesis and analysis using simulated spectral coefficients are new and especially important for a number of geodetic, geophysical and related applications with ground resolutions approaching 5 km.


SpharmonicKit is a collection of routines, written in C, which implement discrete Legendre and spherical harmonic transforms by a number of different algorithms. For certain algorithms, code for the inverse transform is also provided. Included as well are routines for spherical convolutions. The Legendre transforms in the Kit require that a function of bandwidth B be sampled on the 2B-many Chebyshev points and not the B-many Gaussian points. Therefore, the number of samples needed is twice the function’s bandwidth. This implies that for the spherical transforms in the Kit, a function of bandwidth B is sampled on the equiangular 2Bx2B grid on the sphere.

Some of the algorithms implemented in this package are based on the work of Driscoll and Healy. A preprint that offers a detailed development and description of the algorithms implemented in SpharmonicKit is here.

Caveat emptor – this is research code only and has not been hardened. All the code works quite well on DEC Alpha workstations using OSF1 Versions 3.2 and 4.0. Some code has also been tested and successfully run on Pentium-based GNU/Linux workstations, SGI workstations using IRIX 5.3 and IRIX 6.4, Sun workstations using SunOS 4.1.2 1, an HP Exemplar X-Class running SPPUX 5.2.1, and even a NeXTstation Color Turbo running NeXTStep 3.3!

We would also like to mention that there exists other software for performing spherical transforms. Details on other possibilities may be found here.

August 17, 2009 Posted by | Math | 7 Comments

Spectral Solutions to the Prognostic Equations

In some GCMs, spectral solutions to the prognostic equations are sought (see [1] for instance). This is done either for long-term prediction or for stability testing. Another reason to use spectral solutions is to relate points in space and time, so that we can extrapolate both spatially and temporally from limited data.

Spectral solutions also seem to be useful for separating fast gravity waves from slower nonlinear dynamics, as is done in:

I don’t really have much to say about this at the moment. I wonder what the justification is for using spherical harmonics as the basis expansion. It is true that spherical harmonics form a linearly independent basis, but they also arise as solutions to Laplace’s equation. I’m wondering if there is any connection.

I was thinking a bit about what predictive properties these bases must have. If we are trying to use information at Y(P_2) to estimate Y(P_1) , then the amount the point P_2 contributes to the basis expansion is:

Y(P_1)=\sum_i f_i(P_1)f_i(P_2)Y(P_2)


Y(P_1) is the value of the property at P_1
Y(P_2) is the value of the property at P_2
f_i(P_1) is the ith basis function evaluated at P_1
f_i(P_2) is the ith basis function evaluated at P_2

Notice that the quantity:

\sum_i f_i(P_1)f_i(P_2)

acts like a correlation coefficient between Y(P_1) and Y(P_2) .

It is not clear in general whether this quantity will converge, but if, instead of choosing a single point P_2 , we integrate over some region around P_2 , then the higher-frequency basis functions become less significant to this correlation.
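This damping is easy to see with the ordinary Fourier basis on [0, 1]: averaging the mode-k kernel term over a window of half-width d around P_2 multiplies it by sin(2 \pi k d)/(2 \pi k d) , which shrinks as k grows. A Python sketch (the window width and test points are arbitrary choices of mine):

```python
import numpy as np

# For the Fourier basis f_k(x) = sqrt(2) cos(2 pi k x) on [0, 1], the mode-k
# term of the kernel sum_k f_k(P1) f_k(P2) is attenuated, when averaged over
# a window [P2 - d, P2 + d], by the factor sin(2 pi k d) / (2 pi k d).
d = 0.05                               # half-width of the averaging window
x2 = 0.3                               # an arbitrary evaluation point
xs = np.linspace(x2 - d, x2 + d, 2001)

def attenuation(k):
    """Averaged mode value over the window, divided by the value at x2."""
    y = np.cos(2 * np.pi * k * xs)
    mean = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xs)) / (2 * d)
    return mean / np.cos(2 * np.pi * k * x2)

ks = np.array([1, 5, 20])
numeric = np.array([attenuation(k) for k in ks])
analytic = np.sin(2 * np.pi * ks * d) / (2 * np.pi * ks * d)
# high-k modes contribute much less to the averaged kernel
```

The numerical averages match the sinc-type factor, and the k = 20 mode is almost completely suppressed for this window width.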

Therefore it might be useful to develop some expressions for the integral of this quantity over a surface area corresponding to the smallest grid resolution. It is worth considering basis functions that may be easier to integrate than spherical harmonics.

Another question, with regard to the choice of extrapolation function, is what will give good extrapolation along the z component. There are solid spherical harmonics, but their vertical components are not orthogonal.

[1] A multi-layer spectral model and the semi-implicit method, by B. J. Hoskins and A. J. Simmons

August 16, 2009 Posted by | Math | 1 Comment