Laplace Transform Via Limits
Quite a while back I had a brief interest in whether the Laplace transform made numeric sense. I was interested in this for its own sake:
http://www.physicsforums.com/showthread.php?t=334191
and because if we could numerically understand the Laplace transform it might tell us more about a dynamic system than a Fourier transform.
https://earthcubed.wordpress.com/2009/08/30/usingtheffttocalculatethelaplacetransform/
Memory of my failures in obtaining the desired results led me to believe that the Laplace transform was not Riemann integrable. However, when challenged on this in another forum:
http://forums.philosophyforums.com/threads/notitle5425012.html
I rethought a suggested solution and decided that I was perhaps partly incorrect. We can derive the integral from sums and limits, but it converges in a very strange way. The sum converges exactly where the geometric series diverges; all that is left prior to cancellation is a ratio of infinitesimally small quantities.
The summation suggested by andrewk for the Laplace transform was:
(1) $latex \displaystyle \int_0^\infty f(t)e^{-st}\,dt = \lim_{m\to\infty}\lim_{n\to\infty}\frac{m}{n}\sum_{k=0}^{n-1}f\left(\frac{km}{n}\right)e^{-skm/n} $
It looks simple enough that I may have, or should have, considered it, but it is not entirely obvious how the double sum should work. Moreover, Wikipedia discusses some problems with trying to use Riemann integration for improper integrals.
http://en.wikipedia.org/wiki/Riemann_integral#Generalizations
Needless to say, I am able to use a Riemann-like sum and get the right answer. Whether the approach is mathematically sound is an entirely different question. In the above equation, suppose we are trying to integrate an exponential function of the form
(2) $latex f(t) = e^{-\alpha t} $
(3) $latex \displaystyle \lim_{m\to\infty}\lim_{n\to\infty}\frac{m}{n}\sum_{k=0}^{n-1}e^{-\alpha km/n}e^{-skm/n} $
combining terms:
(4) $latex \displaystyle \lim_{m\to\infty}\lim_{n\to\infty}\frac{m}{n}\sum_{k=0}^{n-1}e^{-(s+\alpha)km/n} $
Expressing as a geometric series:
(5) $latex \displaystyle \lim_{m\to\infty}\lim_{n\to\infty}\frac{m}{n}\sum_{k=0}^{n-1}\left(e^{-(s+\alpha)m/n}\right)^k $
Using the geometric series:
(6) $latex \displaystyle \lim_{m\to\infty}\lim_{n\to\infty}\frac{m}{n}\cdot\frac{1-e^{-(s+\alpha)m}}{1-e^{-(s+\alpha)m/n}} $
Now here is where the voodoo comes in: we need to let m/n approach zero and m approach infinity. Rather than attempting to do this formally, let's just cut to the chase.
When m/n approaches zero we can approximate $latex e^{-(s+\alpha)m/n} $ as $latex 1-(s+\alpha)m/n $,
and as m approaches infinity, $latex e^{-(s+\alpha)m} $ approaches zero.
Making these substitutions into equation (6) we get:
(7) $latex \displaystyle \lim_{m\to\infty}\lim_{n\to\infty}\frac{m}{n}\cdot\frac{1-0}{1-\left(1-(s+\alpha)m/n\right)} $
which further simplifies to:
(8) $latex \displaystyle \lim_{m\to\infty}\lim_{n\to\infty}\frac{m/n}{(s+\alpha)\,m/n} $
cancelling m/n from the numerator and denominator gives:
(9) $latex \displaystyle \lim_{m\to\infty}\lim_{n\to\infty}\frac{1}{s+\alpha} $
and now the limits are irrelevant:
(10) $latex \displaystyle \frac{1}{s+\alpha} $
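As a sanity check on the derivation above, here is a short Python sketch (the function name is my own) that compares the double-limit sum against 1/(s+α) for a moderately large m and a small step m/n:

```python
import math

def laplace_sum(alpha, s, m, n):
    """Riemann-style sum (m/n) * sum_k exp(-(s+alpha)*k*m/n) over [0, m]."""
    h = m / n
    return h * sum(math.exp(-(s + alpha) * k * h) for k in range(n))

# With m large and m/n small, the sum should approach 1/(s + alpha)
approx = laplace_sum(alpha=1.0, s=2.0, m=50.0, n=500000)
exact = 1.0 / (2.0 + 1.0)
```

Tightening both limits (larger m, much larger n) drives the error down, in line with equations (6)-(10).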
log(CO2) and Scary Graphs
After reading Arthur's blog post “New Congressional Budget Office Report on Climate Change” I got curious as to what the temperature response to CO2 would look like if it is actually logarithmic.
I discovered (what I should have realized from the start) that the curve is concave up but will converge to a linear curve for large t.
It is commonly believed that the response of the earth to greenhouse gases is logarithmic. I have heard people suggest on other forums that this view was obtained empirically through climate models, and perhaps more specifically radiative transfer models.
For analytic justification one would derive an expression for how the spectral width of an absorption peak grows with CO2 concentration (as I’ve done here). However, I am not convinced that this is sufficient justification, as there will always be some radiative transfer at a given wavelength even if the majority of it is absorbed over a very short distance. This is because the temperature gradient produced by the lapse rate produces an outward radiative flux that would exceed radiative feedback. I discuss this in more detail in my post:
Tropospheric Feedback
The logarithmic response is important because it is a type of saturation; in other words, the more CO2 that is added to the atmosphere, the less effective the next unit of CO2 will be in contributing to the warming. What I learned from reading Arthur’s blog is that the current CO2 levels have not yet overwhelmed the natural levels of CO2. This can be seen in the following graph:
More specifically, from about 1000–1800 the CO2 concentration in the atmosphere stayed around 280 ppm. The following graph is more useful for measuring the current growth in CO2 concentration:
This graph is surprisingly very linear. If the growth in CO2 is truly exponential then it should be possible to estimate it from the slope on this graph, which is given as 1.4203 ppm per year. For an exponential function:
$latex C(t) = C_0 + Ae^{bt} $
The derivative is:
$latex C'(t) = Abe^{bt} $
And the second derivative is:
$latex C''(t) = Ab^2e^{bt} $
The second derivative was taken because two equations are needed so that both A and b can be found.
The site where I obtained the above figures also gives a quadratic fit, which can be used to estimate the first and second derivatives:
Therefore at year 2007 the first derivative is given by:
$latex C'(2007) = 1.4203 $ ppm/year
and the second derivative is 0.0119942 ppm/year².
Giving:
$latex Abe^{bt} = 1.4203 $
$latex Ab^2e^{bt} = 0.0119942 $
Dividing the second equation by the first:
$latex b = 0.0119942/1.4203 \approx 0.00844 $
From this the doubling time can be obtained as follows:
$latex t_{double} = \ln(2)/b \approx 82 $ years.
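The arithmetic above can be checked in a few lines of Python, taking only the two quoted derivative estimates as inputs:

```python
import math

# Derivative estimates at year 2007 quoted in the text
dC = 1.4203       # first derivative, ppm per year
d2C = 0.0119942   # second derivative, ppm per year^2

# For C(t) = C0 + A*exp(b*t), the ratio C''/C' equals b
b = d2C / dC
t_double = math.log(2) / b   # doubling time of the exponential part, in years
```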
$latex C_0 = 280 $ ppm is taken to be the base level of CO2. That is:
$latex C(t) = C_0 + Ae^{b(t-t_0)} $
A third equation can now be found from a measured CO2 concentration, giving the coefficient A.
If $latex t_0 $ is taken to be 1800 this gives a growth rate of about $latex b \approx 0.006 $, which suggests the CO2 growth rate has decreased over the last 200 years.
The CO2 concentration is estimated to follow this function:
$latex C(t) = 280 + 31.8931\,e^{0.006(t-1800)} $
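As a quick check, the fitted function can be evaluated at the two years quoted later in the post (384 ppm in 1997 and a projected 473 ppm in 2100). This is a sketch using the coefficients from the MATLAB code at the end of the post:

```python
import math

def co2(t):
    """Exponential CO2 fit: 280 ppm baseline plus exponential growth from 1800."""
    return 280.0 + 31.8931 * math.exp(0.006 * (t - 1800.0))
```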
The question now is how this growth rate in CO2 affects the temperature. There are several estimates for the sensitivity of the climate to changes in CO2. Lucia’s one box model “lumpy” suggests a sensitivity of:
1.7 degrees Celsius per CO2 doubling.
The IPCC estimates the lower bound for sensitivity to be:
1.5 degrees Celsius per CO2 doubling (see CO2 Climate Sensitivity)
Isaac M. Held suggests a climate sensitivity of about 2.8C/CO2 doubling. See:
http://www.gfdl.gov/isaacheldhomepage
Selected recent papers on climate sensitivity:
 Soden, Held, Colman, Shell, Kiehl, and Shields, 2008: Quantifying climate feedbacks using radiative kernels. Journal of Climate
 Zhang, Delworth, and Held, 2007: Can the Atlantic Ocean drive the observed multidecadal variability in Northern Hemisphere mean temperature? Geophysical Research Letters
 Soden and Held, 2006: An assessment of climate feedbacks in coupled ocean-atmosphere models. Journal of Climate
Here is what wikipedia has to say:
In Intergovernmental Panel on Climate Change (IPCC) reports, equilibrium climate sensitivity refers to the equilibrium change in global mean near-surface air temperature that would result from a sustained doubling of the atmospheric (equivalent) CO_{2} concentration. This value is estimated, by the IPCC Fourth Assessment Report (AR4) as likely to be in the range 2 to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values. This is a slight change from the IPCC Third Assessment Report (TAR), which said it was “likely to be in the range of 1.5 to 4.5°C” [1]. AR3 defined climate sensitivity alternatively in systematic units, equilibrium climate sensitivity refers to the equilibrium change in surface air temperature following a unit change in radiative forcing and is expressed in units of °C/(W/m^{2}) or equivalently K/(W/m^{2}). In practice, the evaluation of the equilibrium climate sensitivity from models requires very long simulations with coupled global climate models, or it may be deduced from observations. Therefore the 2007 AR4 renamed the alternative climate sensitivity to climate sensitivity parameter adding a new definition of effective climate sensitivity which is “a measure of the strengths of the climate feedbacks at a particular time and may vary with forcing history and climate state”.
The logarithmic law of CO2 forcing is given as:
$latex \Delta T = \dfrac{k}{\ln 2}\,\ln\!\left(\dfrac{C}{C_0}\right) $
where $latex k $ is the CO2 sensitivity for doubling CO2.
I plotted this function for several values of the doubling sensitivity k.
The labels on the right hand side of the plot are the climate sensitivities for each curve. This is actually a considerably smaller response than one would expect given the doubling time is around 100 years. However, while this is nearly sufficient time for the exponential part of the curve to double, the CO2 only increases by a factor of about 1.2, since at 1997 the CO2 concentration is 384 ppm and in 2100 the CO2 concentration is projected by this fit to be 473 ppm. This reduces the expected response by a factor of:
$latex \ln(473/384)/\ln(2) \approx 0.30 $
Notice that if the sensitivity ranges given by the IPCC are used, then with this fit to CO2 emission growth the warming is around 0.5–1.2 degrees, which is hardly the doomsday scenario shown in the following graph, which was posted on Arthur’s blog.
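The quoted range follows directly from the logarithmic law; here is a Python sketch (the two k values are my choice of IPCC-style bounds):

```python
import math

def delta_T(k, c, c0):
    """Warming from the logarithmic forcing law for sensitivity k per doubling."""
    return (k / math.log(2.0)) * math.log(c / c0)

# CO2 rises from 384 ppm (1997) to a projected 473 ppm (2100) under the fit,
# i.e. only about 0.3 of a doubling
low = delta_T(1.5, 473.0, 384.0)
high = delta_T(4.0, 473.0, 384.0)
```

This brackets roughly the half-degree to one-degree range discussed above.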
As a final note, the MATLAB code I used to produce the above graph is:
clear all
CO2 = @(t) 280 + 31.8931*exp(0.006*(t-1800));
CO2_0 = CO2(1997);
t = linspace(1997,2100);
K = [1.5 4 7 11 15 21 25 29 33];
CO2s = CO2(t);
DT = @(k) (k/log(2))*log(CO2s/CO2_0);
for i = 1:length(K)
    DTs = DT(K(i));
    plot(t,DTs)
    axis([1997 2100 0 10])
    gtext(num2str(K(i)))
    hold on;
end
xlabel('Year')
ylabel('Temperature Change in Degrees Celsius')
hold off
Numeric Solutions to The Heat Equation
I have been reading a lot on Lucia’s blog about two box models, which are essentially an approximation of the heat equation with basis functions that are constant over a box.
The heat equation is given by:
$latex \dfrac{\partial u}{\partial t} = \kappa\,\nabla^2 u $
or equivalently:
$latex \dfrac{\partial u}{\partial t} - \kappa\,\nabla^2 u = 0 $
The fundamental solutions, or Green’s functions (also see main body theory), are of the form:
$latex \Phi(\mathbf{x},t) = \dfrac{1}{(4\pi\kappa t)^{n/2}}\,e^{-|\mathbf{x}|^2/(4\kappa t)} $
This suggests my choice of a negative exponential basis in my post (Lagrangian Mechanics and the Heat Equation) was not too bad a choice, although Gaussian functions will decay faster than negative exponentials.
Not all solutions are based on fundamental solutions. For instance, in the post (Approximations used in Crank–Nicolson method for solving PDEs numerically) I read that the Crank–Nicolson method is the standard method of solving the heat equation numerically.
For instance, in 1D the Crank–Nicolson method is given by:
$latex \dfrac{u_i^{n+1}-u_i^n}{\Delta t} = \dfrac{\kappa}{2(\Delta x)^2}\left[\left(u_{i+1}^{n+1}-2u_i^{n+1}+u_{i-1}^{n+1}\right)+\left(u_{i+1}^{n}-2u_i^{n}+u_{i-1}^{n}\right)\right] $
It should be noted that this method produces a difference equation. The values at the next time step can be solved for analytically, using Cramer’s rule (see also Invertible matrix, analytic solutions). The frequency domain characteristics can be explored using the z-transform, where the frequency response is given by evaluating the z-transform along the unit circle.
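To make the difference-equation point concrete, here is a minimal Python sketch of one Crank–Nicolson step for u_t = κu_xx with zero (Dirichlet) boundaries; the dense solve stands in for the tridiagonal solve one would use in practice, and the function name is my own:

```python
import numpy as np

def crank_nicolson_step(u, kappa, dt, dx):
    """Advance interior values u one time step; the ends are held at zero."""
    n = len(u)
    r = kappa * dt / (2.0 * dx**2)
    # Second-difference (discrete Laplacian) stencil matrix
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - r * L   # implicit half step
    B = np.eye(n) + r * L   # explicit half step
    return np.linalg.solve(A, B @ u)
```

For the mode u0 = sin(πx) on [0, 1], one step reproduces the exact decay factor exp(−κπ²Δt) almost exactly.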
Also note that numeric error can be reduced when computing future time steps either by recursive squaring:
$latex A^{2n} = (A^n)^2 $
or by using matrix decomposition.
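Recursive squaring needs only O(log n) matrix multiplications to form the n-step update; a small Python sketch (function name is my own):

```python
import numpy as np

def matrix_power_by_squaring(A, n):
    """Compute A^n using binary (recursive) squaring: A^(2n) = (A^n)^2."""
    result = np.eye(A.shape[0])
    while n > 0:
        if n % 2 == 1:
            result = result @ A   # fold in the current power when the bit is set
        A = A @ A                 # square
        n //= 2
    return result
```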
For other numeric methods of solving partial differential equations see (Numerical partial differential equations), which I posted in the thread (Preparation for PDEs).
Further, with regards to Crank–Nicolson, there is no time dependency on the right hand side of the equations, so other methods can be used to discretize the heat equation, such as using Laplace transforms or the matrix exponential.
With regards to Lucia’s blog
My understanding, as posted in (Arthur’s Case 2 (I think)), is that the main focus of Lucia’s blog posts is to test the model chosen by Tamino:
lucia (Comment#19822) September 12th, 2009 at 9:23 pm
What I mean is– when testing the two box model, you don’t switch to the diffusive model even if it’s more inherently sensible and intelligent. That’s because to test “X” you must test “X”. You can’t test “Y” even if “Y” seems more likely to pass the test.
This is fine, but I think that a wider discussion is warranted about how this model is just a simplified version of the heat equation, and about what principles of modeling and differential equations can be useful to obtain better solutions.
Coriolis Forces in Hoskins and Simmons Vorticity Equation
In the thread Vector Operations in Hoskins and Simmons, I computed the components of the curl as:
In my post Coriolis Forces in Hoskins and Simmons I computed the Coriolis force as:
And the partial derivatives are given by:
I’ll derive the rest of this later, but this doesn’t seem to be the form of the prognostic equation used by Hoskins and Simmons.
Divergence Free Flow
In the post Vector Operations in Hoskins and Simmons I derived the divergence operator as follows:
If the divergence of the velocity equals zero then:
Which implies:
The Cross Product in Non Orthogonal Coordinate Systems
The form of the cross product I’ve shown in my post Coriolis Forces is:
The components of this cross product can be written as follows:
We will abbreviate these relationships as follows:
Now define the coordinate transform:
where
Then the cross product components can be written as follows:
Now right-multiplying the matrix by the transform gives:
Which can be written in this form:
Where:
Lagrangian Mechanics and The Heat Equation
I have been noticing a lot of discussion on Lucia’s blog about an energy balance climate model:
Two Box Model: Algebra for ver. 1 test.
Two Box Model: Now assuming surface temperature are ‘mixed’ values.
Two Box Model: Rough idea how to obtain parameters.
Two Box Models & The 2nd Law of Thermodynamics
These all started based on a post by Tamino called “Two Box”.
The biggest criticism I’ve seen of these models is that there is very little physics used to form them, and consequently they won’t capture much of the nonlinear dynamics.
The basic premise of the model is that the time lags are associated with the heat capacity of the ocean. Consequently, the ocean and some of the atmosphere are partitioned into boxes, and the model tries to determine how these boxes are coupled.
The end result, though, is that the time lags end up being associated with eigenvectors which project onto both boxes. Consequently the modes of the system do not coincide with the boxes. Moreover, it is desirable to minimize the number of model parameters which need to be fit, and therefore principles of physics should be used to derive a more realistic “two box model”, or should I say more generally “two mode model”, as the modes do not coincide with the boxes.
The simplest equation with regards to the transfer of energy is the heat equation:
1) $latex \dfrac{\partial T}{\partial t} = \kappa\,\nabla^2 T $
The heat equation is essentially a diffusion equation based on Brownian motion, but we can adjust the constant to try to account for other heat exchange processes.
In one dimension the heat equation can be written as:
2) $latex \dfrac{\partial T}{\partial t} = \kappa\,\dfrac{\partial^2 T}{\partial y^2} $
In this equation the time derivative only depends on the spatial derivatives. Keeping the dynamics in mind: the higher the frequency of the climate forcing, the more shallowly it will penetrate into the ocean, because the more ocean you include the greater the thermal inertia (heat capacity), and consequently the greater the damping. Thus a good fit for the temperature response at a given frequency, as a function of depth, may be an exponential function of the depth.
Thus, from this observation, and because the mathematics is fairly simple, consider a set of basis functions whose spatial component decreases exponentially from the surface. That is, let:
3) $latex T(y,t) = \sum_i a_i(t)\,e^{-\lambda_i y} $
Plugging this into the heat equation one gets:
4) $latex \sum_i \dot a_i(t)\,e^{-\lambda_i y} = \kappa \sum_i \lambda_i^2\,a_i(t)\,e^{-\lambda_i y} $
Where the constant $latex \kappa $ represents how quickly heat diffuses in the ocean at a given depth. Equating terms in the heat equation gives:
5) $latex \dot a_i(t) = \kappa\,\lambda_i^2\,a_i(t) $
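Equation (5) can be checked numerically: a single mode a(t)e^{−λy} with ȧ = κλ²a satisfies the 1D heat equation. A Python sketch using centered finite differences (the parameter values are arbitrary):

```python
import math

kappa, lam = 0.5, 2.0

def T(y, t):
    """One exponential mode: a(t) = exp(kappa*lam^2*t) solves a' = kappa*lam^2*a."""
    return math.exp(kappa * lam**2 * t) * math.exp(-lam * y)

h = 1e-5
def T_t(y, t):
    # centered difference in time
    return (T(y, t + h) - T(y, t - h)) / (2 * h)

def T_yy(y, t):
    # centered second difference in depth
    return (T(y + h, t) - 2 * T(y, t) + T(y - h, t)) / h**2
```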
Now let there be an energy function $latex E(t) $, and let the quantity being diffused in the heat equation be the temperature. Then the energy constraint is given by:
6) $latex E(t) \propto \int_0^\infty T(y,t)\,dy = \sum_i \dfrac{a_i(t)}{\lambda_i} $
This constraint makes one of the generalized coordinates redundant.
Differentiating the energy constraint (6) with respect to time gives:
7) $latex \dot E(t) \propto \sum_i \dfrac{\dot a_i(t)}{\lambda_i} $
$latex \dot E(t) $ is determined by the forcing. Now some notes with regards to the coordinate system:
1) I suggest taking as the redundant basis function the one that has the smallest value of $latex \lambda_i $.
2) Since this model is a linearization (or approximation), I suggest using as the temperature variable at a given depth the amount by which it exceeds the average temperature at that depth. That is, model the temperature anomaly instead of the actual temperature.
3) Given 2), we are then looking at the change in the heat transfer from the mean instead of the actual heat transfer.
4) These are only suggestions.
Now, the Lagrangian is essentially a cost function. In our cost function we want to minimize the error in the derivatives (the square of equation (5) integrated over y) and, as an additional constraint, I want to minimize the change in entropy with respect to y.
When a parcel of water rises because it is hotter than the surroundings, and therefore less dense, it will expand with constant entropy (adiabatic expansion) if it exchanges no heat with the surroundings. Therefore, the heat induced ocean currents will act to try to minimize the entropy change between adjacent regions. (Note the actual effect of this process is to increase entropy.)
From thermodynamics:
8) $latex dS = \dfrac{\delta Q}{T} $
Therefore:
9)
from which we can get:
10)
Substituting the expression for temperature (equation (3)) into equation (10) gives:
11)
The entropy cost function is:
12)
The Lagrangian is given by:
13)
Where the coefficient is a stiffness parameter for the entropy.
Equation (13) is subject to the following constraints (these are the energy constraints mentioned above):
14)
15)
Now the Euler–Lagrange equation is used to obtain the dynamics (differential equations) from the Lagrangian:
16) $latex \dfrac{d}{dt}\dfrac{\partial L}{\partial \dot a_i} - \dfrac{\partial L}{\partial a_i} = 0 $
This will give a second order differential equation, and it might be necessary to use some linear algebra to rearrange it. You can convert this into a first order differential equation by using the Hamiltonian form. If only three basis functions are used then one gets a “two mode model”. There are no restrictions on the number of basis functions used.
Laplace Transform of f(t) Related to smoothed f(t)?
When reading (Comment#18839) I started to wonder if there was a relationship between the Fourier transform of a smoothed signal and the Laplace transform. I assumed there was a relationship (Comment#18854). After further derivation, I recommend that if the goal is to derive the Laplace transform from the Fourier transform of the filtered signal:
1) The signal be properly windowed.
2) The FFT of the windowed signal be compensated for the frequency effects that resulted from the low pass filter.
Whether it is a good idea to compute the Laplace transform from a windowed FFT of a filtered signal is outside the scope of this thread (but feel free to comment below).
The Laplace transform is given by:
1) $latex F(s) = \displaystyle\int_0^\infty f(t)\,e^{-st}\,dt $
The Fourier transform is given by:
2) $latex \hat f(\omega) = \displaystyle\int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt $
The two sided Laplace transform is given by:
3) $latex B(s) = \displaystyle\int_{-\infty}^{\infty} f(t)\,e^{-st}\,dt $
Therefore the Fourier transform is the two sided Laplace transform evaluated at $latex s = i\omega $.
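This relationship is easy to verify numerically for a signal whose transforms are known in closed form, e.g. f(t) = e^{−t}u(t), whose Fourier transform is 1/(1+iω). A Python sketch (function name is my own; trapezoidal integration truncated at a T where f has decayed):

```python
import numpy as np

def laplace_numeric(f, s, T=60.0, n=200001):
    """One-sided Laplace transform by the trapezoidal rule (f assumed decayed by T)."""
    t = np.linspace(0.0, T, n)
    y = f(t) * np.exp(-s * t)
    dt = t[1] - t[0]
    return dt * (np.sum(y) - 0.5 * (y[0] + y[-1]))

f = lambda t: np.exp(-t)                  # f(t) = e^{-t} for t >= 0
w = 2.0
via_laplace = laplace_numeric(f, 1j * w)  # evaluate the transform at s = i*omega
exact_fourier = 1.0 / (1.0 + 1j * w)      # known Fourier transform of e^{-t}u(t)
```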
Returning to the one sided Laplace transform:
4)
5)
Let:
6)
7)
8)
where the low pass filtered version of f(t) is:
9)
and is the convolution of f(t) with the impulse response of a filter (or at least approximately so) with the given bandwidth.
Plugging this result into the integration by parts gives:
10)
or equivalently:
11)
The first two terms show how the endpoints chosen affect the transform. These two terms will cancel for a given frequency if the distance between the endpoints is some multiple of the period. The last term is the Fourier transform of the smoothed function, with the frequencies reweighted and a windowing function applied.
(Note the multiple is there because the Fourier transform variable is the Laplace transform variable rotated by 90 degrees.)
The effect of the windowing function is to smooth the frequency response. This is because multiplication in the time domain is equivalent to convolution in the frequency domain. The following Fourier transform relationship is useful (relationship 205):
12)
Note that if a non-causal filter was used for the smoothing, the relationship is much simpler:
13)
In both cases, to properly deal with the endpoints, the time shifting property of the Fourier transform is needed (relationship 102):
14)
Applying this property to the last two relationships gives:
15)
16)
Strictly dealing with the case where a causal filter is used, and applying the rule for the Fourier transform of a convolution (Rule 109), we obtain:
17)
or equivalently:
18)
Some comments:
1) If is negative, the system is causal, and the filtered version of the signal will be causal.
2) Computing the smoothed signal does not save any computations with regards to computing the Laplace transform.
3) The derivation seems to show that there is a relationship between the Laplace transform and a windowed Fourier transform of the filtered signal.
4) To compute the Laplace transform based on the original signal use equation (5). To compute it based on the filtered signal use equation (11).
Coriolis Forces
A derivation of the Coriolis force can be found on the Wikipedia page for fictitious forces.
In general, for an accelerating reference frame in rectangular coordinates, the fictitious forces are given by:
$latex \mathbf{F}_{fict} = -2m\,\boldsymbol{\Omega}\times\mathbf{v} - m\,\boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r}) - m\,\dfrac{d\boldsymbol{\Omega}}{dt}\times\mathbf{r} $
Where:
The first term is the Coriolis force, the second term is the centrifugal force, and the third term is the Euler force. When the rate of rotation doesn’t change, as is typically the case for a planet, the Euler force is zero.
Looking specifically at the Coriolis force:
$latex \mathbf{a}_C = -2\,\boldsymbol{\Omega}\times\mathbf{v} $
which gives, in (east-west, north-south, height) coordinates:
$latex \mathbf{a}_C = 2\Omega\left(v\sin\varphi - w\cos\varphi,\; -u\sin\varphi,\; u\cos\varphi\right) $
Where $latex \varphi $ is the latitudinal coordinate (equator = zero latitude).
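The component formula can be checked directly against a numeric cross product; a Python/NumPy sketch for a 10 m/s eastward wind at 45° latitude:

```python
import numpy as np

Omega_mag = 7.2921e-5                 # Earth's rotation rate, rad/s
phi = np.deg2rad(45.0)                # latitude

# Angular velocity expressed in local (east, north, up) unit vectors
Omega = Omega_mag * np.array([0.0, np.cos(phi), np.sin(phi)])
v = np.array([10.0, 0.0, 0.0])        # (u, v, w): 10 m/s eastward

a_coriolis = -2.0 * np.cross(Omega, v)

# Component formula: 2*Omega*(v*sin(phi) - w*cos(phi), -u*sin(phi), u*cos(phi))
a_formula = 2.0 * Omega_mag * np.array([0.0, -10.0 * np.sin(phi), 10.0 * np.cos(phi)])
```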
In general, the cross product for a coordinate system with orthonormal direction vectors is given by:
$latex \mathbf{a}\times\mathbf{b} = \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix} $
http://en.wikipedia.org/wiki/Coriolis_effect#Formula
Since the basis direction vectors are orthogonal in Hoskins and Simmons coordinates we can write:
(Note: with regards to whether the system is right handed, we can choose the direction of the longitudinal coordinate to make it right handed.)
Just to recall from the post (Hoskins and Simmons (1974) Coordinate System):
where $latex \theta $ is the latitude.
Where is the surface pressure and is the vertical coordinate.
is the longitude.
Additionally:
U is the longitudinal component of the velocity
V is the latitudinal component of the velocity
W is the vertical component of the velocity (not used in Hoskins and Simmons 1974)
Now the angular velocity of the earth in Hoskins and Simmons is given by:
Where the sign is positive for the northern hemisphere and negative for the southern hemisphere.
Therefore:
Some comments:
The result obtained is essentially the same result one would get if they took the (east-west, north-south, altitude) coordinate system and replaced with .
The only differences are the order and sign of the components. These are the only differences because both coordinate systems contain the same unit vectors. In my example of a Hoskins and Simmons like coordinate system I used a different order for the components than was used in my example for the (east-west, north-south, altitude) coordinate system. This will affect the sign in the cross product.
I wrote the z component of the angular velocity so as to emphasize that the positive direction for the z component in the Hoskins and Simmons coordinate system is downward. However, the actual angular rotation of the earth in this coordinate system still has a positive component, depending on which direction is defined as positive for the longitudinal coordinate.
The order in which we specify the coordinates determines the right handedness of the coordinate system. Therefore, right handedness is not inherently a geometric property, because it depends on the order of the coordinates. For instance, in standard Cartesian coordinates $latex \hat{x}\times\hat{y}=\hat{z} $.
In our case the first coordinate, , was specified in the downward direction, and our second coordinate, , points south; using the right hand rule then gives the positive direction for the third coordinate in the east direction.
It is for these reasons that differences can arise, and therefore it is very important when doing cross products to clearly express the positive direction of the coordinate unit vectors and the order of the coordinates.
Vector Operations in Hoskins and Simmons Coordinates
In my post Hoskins and Simmons (1974) Coordinate System, I derived the following scaling quantities, which will be used to derive the vector operations grad, div and curl in the Hoskins coordinate system.
The coordinates in the Hoskins coordinate system are dimensionless (see nondimensionalization of Navier–Stokes).
The gradient is defined as (see lectures on coordinate transforms):
The divergence is defined as:
The curl is defined by:
(note, the direction of the longitudinal coordinate is defined to obey the right hand rule)
This gives for the components:
Which simplifies to:
API/Object Viewers/Memory Mapping/
The more code a programmer can reuse, the more efficient they can be. In Windows this could mean reusing COM/OLE components and other APIs. Here are two useful programs for viewing APIs:
OLE/COM Object Explorer 1.1
http://www.softpedia.com/progDownload/OLECOMObjectExplorerDownload42531.html
Windows API Viewer
http://www.activevb.de/rubriken/apiviewer/indexapiviewer.html
I was inquiring about how to manage the transfer of large amounts of data between programs and I was pointed to two interesting concepts:
All modern operating systems include a facility called “memory mapping,” which maps a range of addresses in the program’s virtual address space to a file. If you read from those addresses, you’ll get data from the file. It is up to the operating system to determine whether to load the data into RAM all at once, or to read it from the disk in chunks as necessary.
…….
If you’re trying to share large amounts of memory between two programs running on the same computer, you should note that all modern operating systems provide mechanisms for shared memory. These shared memory segments can be mapped into the virtual address space of multiple programs simultaneously. Two or more programs can read or write to the shared memory exactly as if it were normal, private memory. (But you should include some thread-safety mechanisms, like mutexes, to make sure your programs won’t step on each other’s toes.) If you’re trying to share large amounts of memory between programs running on separate computers, use MPI or some other multiprocessing library.
http://www.physicsforums.com/showthread.php?t=333182
Here is what wikipedia has to say about memory maps:
The primary benefit of memory mapping a file is increased I/O performance, especially when used on small files. Accessing memory mapped files is faster than using direct read and write operations for two reasons. Firstly, a system call is orders of magnitude slower than a simple change of program’s local memory. Secondly, in most operating systems the memory region mapped actually is the kernel’s file cache, meaning that no copies need to be created in user space. Using system calls would inevitably involve the time consuming operation of memory copying.
Certain application level memory-mapped file operations also perform better than their physical file counterparts. Applications can access and update data in the file directly and in-place, as opposed to seeking from the start of the file or rewriting the entire edited contents to a temporary location. Since the memory-mapped file is handled internally in pages, linear file access (as seen, for example, in flat file data storage or configuration files) requires disk access only when a new page boundary is crossed, and can write larger sections of the file to disk in a single operation.
A possible benefit of memory-mapped files is “lazy loading”, thus using small amounts of RAM even for a very large file. Trying to load the entire contents of a file that is significantly larger than the amount of memory available can cause severe thrashing as the operating system reads from disk into memory and simultaneously pages from memory back to disk. Memory-mapping may not only bypass the page file completely, but the system only needs to load the smaller page-sized sections as data is being edited, similarly to the demand paging scheme used for programs.
http://en.wikipedia.org/wiki/Memorymapped_file#Benefits
The Windows function to do this is called CreateFileMapping:
http://msdn2.microsoft.com/enus/library/aa366537.aspx
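The same idea is available portably from Python through the standard library's mmap module; a small sketch that edits a file in place through a memory map (the file name is arbitrary):

```python
import mmap
import os
import tempfile

# Create a small data file to map
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as fh:
    fh.write(b"hello memory mapping")

with open(path, "r+b") as fh:
    mm = mmap.mmap(fh.fileno(), 0)  # map the whole file
    first = bytes(mm[:5])           # reads come straight from the page cache
    mm[:5] = b"HELLO"               # in-place update, no seek or rewrite
    mm.flush()
    mm.close()

with open(path, "rb") as fh:
    contents = fh.read()
```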
As for a multiprocessing library, the following was recommended:
http://scv.bu.edu/documentation/tutorials/MPI/MPI_text.html
I haven’t found much but the following wikipedia link seems relevant:
http://en.wikipedia.org/wiki/Cluster_(computing)
