Separation of variables

In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.

Ordinary differential equations (ODE)

Suppose a differential equation can be written in the form

d/dx f(x) = g(x) h(f(x)),

which we can write more simply by letting y = f(x):

dy/dx = g(x) h(y).

As long as h(y) ≠ 0, we can rearrange terms to obtain:

dy/h(y) = g(x) dx,

so that the two variables x and y have been separated. dx (and dy) can be viewed, at a simple level, as just a convenient notation, which provides a handy mnemonic aid for assisting with manipulations. A formal definition of dx as a differential (infinitesimal) is somewhat advanced.

Alternative notation

Some who dislike Leibniz's notation may prefer to write this as

(1/h(y)) dy/dx = g(x),

but that fails to make it quite as obvious why this is called "separation of variables". Integrating both sides of the equation with respect to x, we have

∫ (1/h(y)) (dy/dx) dx = ∫ g(x) dx,     (1)

or equivalently,

∫ dy/h(y) = ∫ g(x) dx,

because of the substitution rule for integrals.

If one can evaluate the two integrals, one can find a solution to the differential equation. Observe that this process effectively allows us to treat the derivative as a fraction which can be separated. This allows us to solve separable differential equations more conveniently, as demonstrated in the example below.

(Note that we do not need to use two constants of integration in equation (1), as in

∫ dy/h(y) + C1 = ∫ g(x) dx + C2,

because a single constant C = C2 − C1 is equivalent.)
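
As an illustration (a sketch added here, not part of the original article), the separable equation dy/dx = xy, chosen purely as an example, can be solved symbolically with SymPy, which applies the same separation procedure described above:

# Sketch: separation of variables on the illustrative ODE dy/dx = x*y using SymPy.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The separable ODE dy/dx = x*y; SymPy's 'separable' hint applies the method of this section.
ode = sp.Eq(y(x).diff(x), x * y(x))
print(sp.dsolve(ode, y(x), hint='separable'))   # Eq(y(x), C1*exp(x**2/2))

# The two integrals obtained after separating: ∫ dy/h(y) and ∫ g(x) dx, with h(y) = y, g(x) = x.
s = sp.symbols('s', positive=True)
print(sp.integrate(1 / s, s))   # log(s), i.e. ln|y| up to a constant
print(sp.integrate(x, x))       # x**2/2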

Example

Population growth is often modeled by the differential equation

dP/dt = kP(1 − P/K),

where P is the population with respect to time t, k is the rate of growth, and K is the carrying capacity of the environment.

Separation of variables may be used to solve this differential equation:

∫ dP / (P(1 − P/K)) = ∫ k dt.

To evaluate the integral on the left side, we simplify the fraction

1 / (P(1 − P/K)) = K / (P(K − P)),

and then, we decompose the fraction into partial fractions:

K / (P(K − P)) = 1/P + 1/(K − P).

Thus we have

∫ (1/P + 1/(K − P)) dP = ∫ k dt,

ln|P| − ln|K − P| = kt + C.

Therefore, the solution to the logistic equation is

P(t) = K / (1 + A e^(−kt)),

where A is a constant fixed by the initial condition. To find A, let t = 0 and P(0) = P0. Then we have

P0 = K / (1 + A e^0).

Noting that e^0 = 1, and solving for A, we get

A = (K − P0) / P0.
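
As a quick numerical check (a sketch added for illustration; the parameter values below are arbitrary and not from the article), the closed-form solution P(t) = K/(1 + A e^(−kt)) with A = (K − P0)/P0 can be compared against a direct numerical integration of the logistic equation:

import numpy as np
from scipy.integrate import solve_ivp

k, K, P0 = 0.5, 100.0, 10.0   # illustrative growth rate, carrying capacity, and initial population

# Right-hand side of the logistic equation dP/dt = k P (1 - P/K)
def logistic_rhs(t, P):
    return k * P * (1 - P / K)

t = np.linspace(0.0, 20.0, 200)
numeric = solve_ivp(logistic_rhs, (0.0, 20.0), [P0], t_eval=t, rtol=1e-8).y[0]

# Closed form obtained above by separation of variables
A = (K - P0) / P0
closed_form = K / (1 + A * np.exp(-k * t))

print(np.max(np.abs(numeric - closed_form)))   # small; limited only by the ODE solver tolerance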

Partial differential equations

The method of separation of variables is also used to solve a wide range of linear partial differential equations with boundary and initial conditions, such as the heat equation, the wave equation, the Laplace equation, and the Helmholtz equation.

Homogeneous case

Consider the one-dimensional heat equation for the unknown function u = u(x, t) on the interval 0 ≤ x ≤ L, where α is a positive constant. The equation is

∂u/∂t − α ∂²u/∂x² = 0.     (1)

The boundary condition is homogeneous, that is

u(0, t) = u(L, t) = 0.     (2)

Let us attempt to find a solution which is not identically zero satisfying the boundary conditions but with the following property: u is a product in which the dependence of u on x, t is separated, that is:

u(x, t) = X(x) T(t).     (3)

Substituting u back into equation (1) and using the product rule,

T′(t) / (α T(t)) = X″(x) / X(x).     (4)

Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus:

T′(t) = −λ α T(t),     (5)

and

X″(x) = −λ X(x).     (6)

− λ here is the eigenvalue for both differential operators, and T(t) and X(x) are corresponding eigenfunctions.

We will now show that solutions for X(x) for values of λ ≤ 0 cannot occur:

Suppose that λ < 0. Then there exist real numbers B, C such that

X(x) = B e^(√(−λ) x) + C e^(−√(−λ) x).
From (2) we get

X(0) = 0 = X(L),     (7)

and therefore B = 0 = C, which implies u is identically 0.

Suppose that λ = 0. Then there exist real numbers B, C such that

X(x) = Bx + C.

From (7) we conclude, in the same manner as in the case λ < 0, that u is identically 0.

Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that

T(t) = A e^(−λαt)

and

X(x) = B sin(√λ x) + C cos(√λ x).

From (7) we get C = 0 and that for some positive integer n,

√λ = nπ / L.

This solves the heat equation in the special case where the dependence of u has the form of (3).

In general, the sum of solutions to (1) which satisfy the boundary conditions (2) also satisfies (1) and (2). Hence a complete solution can be given as

u(x, t) = ∑_{n=1}^∞ Dn sin(nπx/L) e^(−n²π²αt/L²),

where Dn are coefficients determined by the initial condition.

Given the initial condition

u(x, 0) = f(x),

we can get

f(x) = ∑_{n=1}^∞ Dn sin(nπx/L).

This is the sine series expansion of f(x). Multiplying both sides with sin(nπx/L) and integrating over [0, L] results in

Dn = (2/L) ∫₀^L f(x) sin(nπx/L) dx.

This method requires that the eigenfunctions in x, here {sin(nπx/L)} for n = 1, 2, 3, …, are orthogonal and complete. In general this is guaranteed by Sturm-Liouville theory.
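
The construction above can be sketched numerically (an illustration added here, not part of the original article; the initial profile f(x) and the values of α and L are arbitrary choices): compute the coefficients Dn by quadrature and sum the truncated separated series.

import numpy as np

alpha, L, N = 1.0, 1.0, 50            # illustrative diffusivity, domain length, number of modes
x = np.linspace(0.0, L, 401)
dx = x[1] - x[0]

def f(x):
    # Arbitrary initial condition with f(0) = f(L) = 0
    return x * (L - x)

# Dn = (2/L) * integral over [0, L] of f(x) sin(n pi x / L) dx, approximated by a Riemann sum
D = [2.0 / L * np.sum(f(x) * np.sin(n * np.pi * x / L)) * dx for n in range(1, N + 1)]

def u(x, t):
    # Separated-series solution: sum over n of Dn sin(n pi x / L) exp(-(n pi / L)^2 alpha t)
    total = np.zeros_like(x)
    for n, Dn in enumerate(D, start=1):
        total += Dn * np.sin(n * np.pi * x / L) * np.exp(-((n * np.pi / L) ** 2) * alpha * t)
    return total

print(np.max(np.abs(u(x, 0.0) - f(x))))   # series reproduces f(x) at t = 0, up to truncation error
print(np.max(u(x, 0.1)))                  # the profile decays as t increases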

Nonhomogeneous case

Suppose the equation is nonhomogeneous,

∂u/∂t − α ∂²u/∂x² = h(x, t),     (8)

with the boundary condition the same as (2).

Expand h(x, t), u(x, t) and f(x) into

h(x, t) = ∑_{n=1}^∞ hn(t) sin(nπx/L),     (9)

u(x, t) = ∑_{n=1}^∞ un(t) sin(nπx/L),     (10)

f(x) = ∑_{n=1}^∞ bn sin(nπx/L),     (11)

where hn(t) and bn can be calculated by integration, while un(t) is to be determined.

Substituting (9) and (10) back into (8) and considering the orthogonality of the sine functions, we get

u′n(t) + α(nπ/L)² un(t) = hn(t),

which is a sequence of linear differential equations that can be readily solved with, for instance, the Laplace transform or an integrating factor. Finally, we can get

un(t) = e^(−α(nπ/L)² t) ( bn + ∫₀^t hn(s) e^(α(nπ/L)² s) ds ).
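
As a small check (a sketch added here; the forcing hn(t), the mode number, and all parameter values are illustrative choices, not from the article), each mode equation u′n(t) + α(nπ/L)² un(t) = hn(t) with un(0) = bn can be integrated numerically and compared with the integrating-factor formula above:

import numpy as np
from scipy.integrate import solve_ivp

alpha, L, n, bn = 1.0, 1.0, 1, 1.0    # illustrative parameters, mode number, and initial coefficient
k = alpha * (n * np.pi / L) ** 2      # decay rate of mode n

def hn(t):
    # Arbitrary illustrative forcing for mode n
    return np.sin(t)

# Direct numerical integration of u_n'(t) = h_n(t) - k u_n(t), u_n(0) = b_n
sol = solve_ivp(lambda t, u: hn(t) - k * u, (0.0, 5.0), [bn], rtol=1e-9, atol=1e-12)

# Integrating-factor formula: u_n(t) = e^(-k t) * ( b_n + integral_0^t h_n(s) e^(k s) ds ),
# with the integral evaluated by trapezoidal quadrature
s = np.linspace(0.0, 5.0, 4001)
vals = hn(s) * np.exp(k * s)
integral = np.sum((vals[1:] + vals[:-1]) * 0.5 * np.diff(s))
formula = np.exp(-k * 5.0) * (bn + integral)

print(abs(sol.y[0][-1] - formula))   # agreement up to solver and quadrature error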

If the boundary condition is nonhomogeneous, then the expansions (9) and (10) are no longer valid. One has to find a function v that satisfies the boundary condition only, and subtract it from u. The function u − v then satisfies the homogeneous boundary condition, and can be solved with the above method.

In orthogonal curvilinear coordinates, separation of variables can still be used, although some details differ from the Cartesian case. For instance, regularity or periodicity conditions may determine the eigenvalues in place of boundary conditions. See spherical harmonics for an example.

Matrices

The matrix form of the separation of variables is the Kronecker sum.

As an example we consider the 2D discrete Laplacian on a regular grid:

L = Dxx ⊕ Dyy = Dxx ⊗ I + I ⊗ Dyy,

where Dxx and Dyy are the 1D discrete Laplacians in the x- and y-directions, respectively, and I denotes identity matrices of the appropriate sizes. See the main article Kronecker sum of discrete Laplacians for details.
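
A brief NumPy sketch (added here for illustration; the grid sizes are arbitrary) of the Kronecker-sum construction. It also checks the separated structure of the spectrum: every eigenvalue of the 2D operator is a sum of one eigenvalue of Dxx and one of Dyy.

import numpy as np

def laplacian_1d(n):
    # Standard 1D discrete Laplacian with Dirichlet boundaries on n interior points
    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

nx, ny = 4, 3
Dxx, Dyy = laplacian_1d(nx), laplacian_1d(ny)

# Kronecker sum: L = Dxx (+) Dyy = Dxx (x) I + I (x) Dyy
L2d = np.kron(Dxx, np.eye(ny)) + np.kron(np.eye(nx), Dyy)

# Eigenvalues of L2d are exactly the pairwise sums of the 1D eigenvalues
ev_x = np.linalg.eigvalsh(Dxx)
ev_y = np.linalg.eigvalsh(Dyy)
print(np.allclose(np.sort(np.linalg.eigvalsh(L2d)),
                  np.sort(np.add.outer(ev_x, ev_y).ravel())))   # True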

