# Integration by parts

In calculus, and more generally in mathematical analysis, integration by parts is a theorem that relates the integral of a product of functions to the integral of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be derived in one line simply by integrating the product rule of differentiation.

If u = u(x), v = v(x), and the differentials du = u′(x) dx and dv = v′(x) dx, then integration by parts states that

$\int u(x) v'(x) \, dx = u(x) v(x) - \int u'(x) v(x) \, dx$

or more compactly:

$\int u \, dv=uv-\int v \, du.\!$

More general formulations of integration by parts exist for the Riemann–Stieltjes integral and Lebesgue–Stieltjes integral. The discrete analogue for sequences is called summation by parts.
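The identity is easy to check symbolically. The following sketch (assuming SymPy is available) verifies ∫u dv = uv − ∫v du for one sample pair, u = x² and v = sin x; the choice of pair is arbitrary:

```python
import sympy as sp

x = sp.symbols('x')
# Sample pair (an illustrative choice, not the only one):
u = x**2
v = sp.sin(x)

lhs = sp.integrate(u * sp.diff(v, x), x)          # ∫ u v' dx
rhs = u * v - sp.integrate(sp.diff(u, x) * v, x)  # u v − ∫ u' v dx

# The two antiderivatives may differ by a constant,
# so compare their derivatives instead.
assert sp.simplify(sp.diff(lhs - rhs, x)) == 0
```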

## Theorem

### Product of two functions

The theorem can be derived as follows. Suppose u(x) and v(x) are two continuously differentiable functions. The product rule states (in Leibniz’ notation):

$\frac{d}{dx}\left(u(x)v(x)\right) = v(x) \frac{d}{dx}\left(u(x)\right) + u(x) \frac{d}{dx}\left(v(x)\right).\!$

Integrating both sides with respect to x, over an interval a ≤ x ≤ b:

$\int_a^b \frac{d}{dx}\left(u(x)v(x)\right)\,dx = \int_a^b u'(x)v(x)\,dx + \int_a^b u(x)v'(x)\,dx$

then applying the fundamental theorem of calculus,

$\int_a^b \frac{d}{dx}\left(u(x)v(x)\right)\,dx = \left[u(x)v(x)\right]_a^b$

gives the formula for integration by parts:

$\left[u(x)v(x)\right]_a^b = \int_a^b u'(x)v(x)\,dx + \int_a^b u(x)v'(x)\,dx.$

Since du and dv are differentials of a function of one variable x,

$du=u'(x)dx, \quad dv=v'(x)dx.$

The original integral ∫uv′ dx contains v′ (derivative of v); in order to apply the theorem, v (antiderivative of v′) must be found, and then the resulting integral ∫vu′ dx must be evaluated.

### Product of many functions

Integrating the product rule for three multiplied functions, u(x), v(x), w(x), gives a similar result:

$\int_a^b u v \, dw = [u v w]^{b}_{a} - \int_a^b u w \, dv - \int_a^b v w \, du.$

In general for n factors

$\frac{d}{dx} \left(\prod_{i=1}^n u_i(x) \right)= \sum_{j=1}^n \prod_{i\neq j}^n u_i(x) \frac{du_j(x)}{dx},$

$\Bigl[ \prod_{i=1}^n u_i(x) \Bigr]_a^b = \sum_{j=1}^n \int_a^b \prod_{i\neq j}^n u_i(x) \, du_j(x),$

where the product is of all functions except for the one differentiated in the same term.
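The three-factor formula can likewise be checked. The sketch below (SymPy, with a hypothetical sample triple u = x, v = sin x, w = e^x on [0, 1]) compares both sides numerically:

```python
import sympy as sp

x = sp.symbols('x')
a, b = 0, 1
u, v, w = x, sp.sin(x), sp.exp(x)  # arbitrary sample triple

lhs = sp.integrate(u * v * sp.diff(w, x), (x, a, b))        # ∫ u v dw
boundary = (u * v * w).subs(x, b) - (u * v * w).subs(x, a)  # [u v w]_a^b
rhs = boundary \
    - sp.integrate(u * w * sp.diff(v, x), (x, a, b)) \
    - sp.integrate(v * w * sp.diff(u, x), (x, a, b))

assert abs(sp.N(lhs - rhs)) < 1e-12
```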

## Visualization

Graphical interpretation of the theorem. The pictured curve is parametrized by the variable t.

Define a parametric curve by (x, y) = (f(t), g(t)). Assuming that the curve is locally one-to-one, we can define

$x(y) = f(g^{-1}(y))$
$y(x) = g(f^{-1}(x))$

The area of the blue region is

$A_1=\int_{y_1}^{y_2}x(y)dy$

Similarly, the area of the red region is

$A_2=\int_{x_1}^{x_2}y(x)dx$

The total area A₁ + A₂ is equal to the area of the bigger rectangle, x₂y₂, minus the area of the smaller one, x₁y₁:

$\overbrace{\int_{y_1}^{y_2}x(y)dy}^{A_1}+\overbrace{\int_{x_1}^{x_2}y(x)dx}^{A_2}=\biggl.x_iy_i\biggl|_{i=1}^{i=2}$

Assuming the curve is smooth within a neighborhood, this generalizes to indefinite integrals:

$\int xdy + \int y dx = xy$

Rearranging:

$\int xdy = xy - \int y dx$

Thus integration by parts may be thought of as deriving the area of the blue region from the total area and that of the red region.

This visualisation also explains why integration by parts may help find the integral of an inverse function f⁻¹(x) when the integral of the function f(x) is known. Indeed, the functions x(y) and y(x) are inverses, and the integral ∫x dy may be calculated as above from knowing the integral ∫y dx.
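The rectangle identity above can be checked on a concrete curve. The sketch below (SymPy) uses the hypothetical parametrization (x, y) = (t², t³) for t in [1, 2], which is one-to-one on that interval:

```python
import sympy as sp

t = sp.symbols('t')
xf, yf = t**2, t**3   # sample curve, one-to-one for t in [1, 2]
t1, t2 = 1, 2

A1 = sp.integrate(xf * sp.diff(yf, t), (t, t1, t2))  # ∫ x dy
A2 = sp.integrate(yf * sp.diff(xf, t), (t, t1, t2))  # ∫ y dx

# x2*y2 - x1*y1: the big rectangle minus the small one
rect = xf.subs(t, t2) * yf.subs(t, t2) - xf.subs(t, t1) * yf.subs(t, t1)
assert A1 + A2 == rect  # both sides equal 31
```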

## Application to find antiderivatives

### Strategy

Integration by parts is a heuristic rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate it into a product of two functions u(x)v(x) such that the integral produced by the integration by parts formula is easier to evaluate than the original one. The following form is useful in illustrating the best strategy to take:

$\int uv \,dx = u \int v \,dx - \int \left ( u' \int v \,dx \right )\,dx.\!$

Note that on the right-hand side, u is differentiated and v is integrated; consequently it is useful to choose u as a function that simplifies when differentiated, and/or to choose v as a function that simplifies when integrated. As a simple example, consider:

$\int \frac{\ln x}{x^2}\,dx.\!$

Since the derivative of ln x is 1/x, we make (ln x) part of u; since the antiderivative of 1/x² is −1/x, we make (1/x²) part of v. The formula now yields:

$\int \frac{\ln x}{x^2}\,dx = -\frac{\ln x}{x} - \int \biggl( \frac{1}{x} \biggr) \biggl( -\frac{1}{x} \biggr) \, dx.\!$

The antiderivative of −1/x² can be found with the power rule and is 1/x.
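As a quick sanity check, SymPy reproduces the worked example; the computed antiderivative matches −(ln x)/x − 1/x:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
F = sp.integrate(sp.log(x) / x**2, x)
# Expected antiderivative from the worked example (constant omitted):
expected = -sp.log(x)/x - 1/x
assert sp.simplify(F - expected) == 0
```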

Alternatively, we may choose u and v such that the product u' (∫v dx) simplifies due to cancellation. For example, suppose we wish to integrate:

$\int\sec^2x\ln|\sin x|dx.$

If we choose u(x) = ln |sin x| and v(x) = sec²x, then u differentiates to 1/tan x using the chain rule and v integrates to tan x; so the formula gives:

$\int\sec^2x\ln|\sin x|dx=\tan x\ln|\sin x|-\int\tan x\frac{1}{\tan x}dx.$

The integrand simplifies to 1, so the antiderivative is x. Finding a simplifying combination frequently involves experimentation.

In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in numerical analysis, it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below.

### Polynomials and trigonometric functions

In order to calculate

$I=\int x\cos (x) \,dx\,,$

let:

$u = x \Rightarrow d u = dx$
$dv = \cos(x)\,dx \Rightarrow v = \int\cos(x)\,dx = \sin(x)$

then:

\begin{align} \int x\cos (x) \,dx & = \int u \, dv \\ & = uv - \int v \, du \\ & = x\sin (x) - \int \sin (x) \,dx \\ & = x\sin (x) + \cos (x) + C, \end{align} \!

where C is an arbitrary constant of integration.
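A direct way to confirm the result (a sanity check rather than a derivation) is to differentiate the claimed antiderivative:

```python
import sympy as sp

x = sp.symbols('x')
F = x * sp.sin(x) + sp.cos(x)  # claimed antiderivative, constant C omitted
assert sp.simplify(sp.diff(F, x) - x * sp.cos(x)) == 0
```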

For higher powers of x in the form

$\int x^n e^x \,dx,\,\int x^n\sin (x) \,dx,\,\int x^n\cos (x) \,dx\,,$

integrals of these forms can be evaluated by applying integration by parts repeatedly; each application of the theorem lowers the power of x by one.

### Exponentials and trigonometric functions

An example commonly used to examine the workings of integration by parts is

$I=\int e^{x} \cos (x) \,dx.$

Here, integration by parts is performed twice. First let

$u = \cos(x) \Rightarrow du = -\sin(x)\,dx$
$dv = e^x \, dx \Rightarrow v = \int e^x \,dx = e^x$

then:

$\int e^{x} \cos (x) \,dx = e^{x} \cos (x) + \int e^{x} \sin (x) \,dx.\!$

Now, to evaluate the remaining integral, we use integration by parts again, with:

$u = \sin(x) \Rightarrow du = \cos(x)\, dx$
$dv = e^x \,dx \Rightarrow v = \int e^x \,dx = e^x.$

Then:

$\int e^x \sin (x) \,dx = e^x \sin (x) - \int e^x \cos (x) \,dx.$

Putting these together,

$\int e^x \cos (x) \,dx = e^x \cos (x) + e^x \sin (x) - \int e^x \cos (x) \,dx.$

The same integral shows up on both sides of this equation. The integral can simply be added to both sides to get

$2 \int e^{x} \cos (x) \,dx = e^{x} ( \sin (x) + \cos (x) ) + C\!$

which rearranges to:

$\int e^x \cos (x) \,dx = {e^x ( \sin (x) + \cos (x) ) \over 2} + C'\!$

where again C (and C' = C/2) is an arbitrary constant of integration.
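As with the previous example, the result can be confirmed by differentiating the claimed antiderivative:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.exp(x) * (sp.sin(x) + sp.cos(x)) / 2  # claimed antiderivative
assert sp.simplify(sp.diff(F, x) - sp.exp(x) * sp.cos(x)) == 0
```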

A similar method is used to find the integral of secant cubed.

### Functions multiplied by unity

Two other well-known examples arise when integration by parts is applied to a function expressed as a product of 1 and itself. This works if the derivative of the function is known, and the integral of this derivative times x is also known.

The first example is ∫ ln(x) dx. We write this as:

$I=\int \ln (x) \cdot 1 \,dx.\!$

Let:

$u = \ln(x) \Rightarrow du = \frac{dx}{x}$
$dv = dx \Rightarrow v = x\,$

then:

\begin{align} \int \ln (x) \,dx & = x \ln (x) - \int \frac{x}{x} \,dx \\ & = x \ln (x) - \int 1 \,dx \\ & = x \ln (x) - x + C \end{align}

where C is the constant of integration.

The second example is the inverse tangent function arctan(x):

$I=\int \arctan (x) \,dx.$

Rewrite this as

$\int \arctan (x) \cdot 1 \,dx.$

Now let:

$u = \arctan(x) \Rightarrow du = \frac{dx}{1 + x^2}$
$dv =dx \Rightarrow v = x$

then

\begin{align} \int \arctan (x) \,dx & = x \arctan (x) - \int \frac{x}{1 + x^2} \,dx \\[8pt] & = x \arctan (x) - {1 \over 2} \ln \left( 1 + x^2 \right) + C \end{align}

where the remaining integral is evaluated by substitution, since x/(1 + x²) is half the derivative of ln(1 + x²).
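Differentiating the claimed antiderivative confirms the result:

```python
import sympy as sp

x = sp.symbols('x')
F = x * sp.atan(x) - sp.log(1 + x**2) / 2  # claimed antiderivative
assert sp.simplify(sp.diff(F, x) - sp.atan(x)) == 0
```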

### LIATE rule

A rule of thumb proposed by Herbert Kasube of Bradley University advises that whichever function comes first in the following list should be u:[1]

- **L** – Logarithmic functions: ln x, log_b x, etc.
- **I** – Inverse trigonometric functions: arctan x, arcsec x, etc.
- **A** – Algebraic functions: x², 3x⁵⁰, etc.
- **T** – Trigonometric functions: sin x, tan x, etc.
- **E** – Exponential functions: e^x, 19^x, etc.

The function which is to be dv is whichever comes last in the list: functions lower on the list have easier antiderivatives than the functions above them. The rule is sometimes written as "DETAIL" where D stands for dv.

To demonstrate the LIATE rule, consider the integral

$\int x\cos x \, dx.\!$

Following the LIATE rule, u = x and dv = cos x dx, hence du = dx and v = sin x, which makes the integral become

$x\sin x - \int 1\sin x \, dx\!$

which equals

$x\sin x + \cos x+C.\!$

In general, one tries to choose u and dv such that du is simpler than u and dv is easy to integrate. If instead cos x had been chosen as u and x dx as dv, we would have the integral

$\frac{x^2}2\cos x + \int \frac{x^2}2\sin x\, dx,$

which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere.

Although a useful rule of thumb, there are exceptions to the LIATE rule. A common alternative is to consider the rules in the "ILATE" order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. For example, to integrate

$\int x^3e^{x^2}\, dx,$

one would set

$u=x^2, \quad dv=xe^{x^2}\, dx,$

so that

$du = 2x\,dx, \quad v = \frac12 e^{x^2}.$

Then

$\int x^3e^{x^2}\, dx = \int \left(x^2\right) \left( xe^{x^2} \right) \, dx = \int u \, dv = uv - \int v\,du = \frac12 x^2 e^{x^2} - \int xe^{x^2}\,dx.$

Finally, this results in

$\int x^3e^{x^2}\, dx=\frac{1}{2}e^{x^2}(x^2-1)+C.$
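Again the result is easily confirmed by differentiation:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.exp(x**2) * (x**2 - 1) / 2  # claimed antiderivative
assert sp.simplify(sp.diff(F, x) - x**3 * sp.exp(x**2)) == 0
```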

## Applications in pure mathematics

Integration by parts is often used as a tool to prove theorems in mathematical analysis. This section gives a few examples.

### Use in special functions

The gamma function is an example of a special function, defined as an improper integral. Integration by parts shows it to be an extension of the factorial function:

\begin{align} \Gamma(z) & = \int_0^\infty d\lambda e^{-\lambda} \lambda^{z-1} \\ & = - \int_0^\infty d\left(e^{-\lambda}\right) \lambda^{z-1} \\ & = - \left[e^{-\lambda}\lambda^{z-1}\right]_0^\infty + \int_0^\infty d\left(\lambda^{z-1}\right) e^{-\lambda} \\ & = 0 + \int_0^\infty d\lambda\left(z-1\right) \lambda^{z-2} e^{-\lambda} \\ & = (z-1)\Gamma(z-1) \\ \end{align}

yielding the famous identity

$\Gamma(z) = (z-1)\Gamma(z-1)\,.$

For a positive integer z, applying this formula repeatedly gives the factorial (denoted by !):

$\Gamma(z+1) = z!$
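The recurrence and the factorial identity can be spot-checked numerically with the standard library's gamma function; the sample points below are arbitrary:

```python
import math

# Γ(z) = (z - 1)·Γ(z - 1) at a few arbitrary sample points
for z in (2.5, 4.0, 7.3):
    assert math.isclose(math.gamma(z), (z - 1) * math.gamma(z - 1))

# Γ(n + 1) = n! for small integers
for n in range(1, 8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))
```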

### Use in harmonic analysis

Integration by parts is often used in harmonic analysis, particularly Fourier analysis, to show that quickly oscillating integrals with sufficiently smooth integrands decay quickly. The most common example of this is its use in showing that the decay of a function's Fourier transform depends on the smoothness of that function, as described below.

#### Fourier transform of derivative

If f is a k-times continuously differentiable function and all derivatives up to the kth one decay to zero at infinity, then its Fourier transform satisfies

$(\mathcal{F}f^{(k)})(\xi) = (2\pi i\xi)^k \mathcal{F}f(\xi),$

where f(k) is the kth derivative of f. (The exact constant on the right depends on the convention of the Fourier transform used.) This is proved by noting that

$\frac{d}{dy} e^{-2\pi iy\xi} = -2\pi i\xi e^{-2\pi iy\xi},$

so using integration by parts on the Fourier transform of the derivative we get

\begin{align} (\mathcal{F}f')(\xi) &= \int_{-\infty}^\infty e^{-2\pi iy\xi} f'(y)\,dy \\ &=\left[e^{-2\pi iy\xi} f(y)\right]_{-\infty}^\infty - \int_{-\infty}^\infty (-2\pi i\xi e^{-2\pi iy\xi}) f(y)\,dy \\ &=2\pi i\xi \int_{-\infty}^\infty e^{-2\pi iy\xi} f(y)\,dy \\ &=2\pi i\xi \mathcal{F}f(\xi). \end{align}

Applying this inductively gives the result for general k. A similar method can be used to find the Laplace transform of a derivative of a function.
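The identity (Ff′)(ξ) = 2πiξ (Ff)(ξ) can also be checked numerically. The sketch below (NumPy) uses the rapidly decaying sample function f(y) = e^(−y²), for which the boundary term vanishes, and approximates both transforms by Riemann sums on a fine grid:

```python
import numpy as np

# f(y) = exp(-y^2), smooth and rapidly decaying; e^{-2πiyξ} convention.
y = np.linspace(-10.0, 10.0, 200001)
dy = y[1] - y[0]
f = np.exp(-y**2)
fprime = -2.0 * y * np.exp(-y**2)

for xi in (0.3, 1.0, 2.5):
    kernel = np.exp(-2j * np.pi * y * xi)
    ft_fprime = np.sum(kernel * fprime) * dy  # F[f'](ξ), Riemann sum
    ft_f = np.sum(kernel * f) * dy            # F[f](ξ)
    assert abs(ft_fprime - 2j * np.pi * xi * ft_f) < 1e-8
```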

#### Decay of Fourier transform

The above result tells us about the decay of the Fourier transform, since it follows that if f and f(k) are integrable then

$\vert\mathcal{F}f(\xi)\vert \leq \frac{I(f)}{1+\vert 2\pi\xi\vert^k}$, where $I(f)=\int_{-\infty}^\infty\Bigl(\vert f(y)\vert + \vert f^{(k)}(y)\vert\Bigr) dy$.

In other words, if f satisfies these conditions then its Fourier transform decays at infinity at least as quickly as 1/|ξ|^k. In particular, if k ≥ 2 then the Fourier transform is integrable.

The proof uses the fact, which is immediate from the definition of the Fourier transform, that

$\vert\mathcal{F}f(\xi)\vert \leq \int_{-\infty}^\infty \vert f(y) \vert \,dy.$

Using the same idea on the equality stated at the start of this subsection gives

$\vert(2\pi i\xi)^k \mathcal{F}f(\xi)\vert \leq \int_{-\infty}^\infty \vert f^{(k)}(y) \vert \,dy.$

Summing these two inequalities and then dividing by 1 + |2πξ|^k gives the stated inequality.

### Use in operator theory

One use of integration by parts in operator theory is that it shows that −∆ (where ∆ is the Laplace operator) is a positive operator on L² (see Lp space). If f is smooth and compactly supported then, using integration by parts, we have

\begin{align} \langle -\Delta f, f \rangle_{L^2} &= -\int_{-\infty}^\infty f''(x)\overline{f(x)}\,dx \\ &=-\left[f'(x)\overline{f(x)}\right]_{-\infty}^\infty + \int_{-\infty}^\infty f'(x)\overline{f'(x)}\,dx \\ &=\int_{-\infty}^\infty \vert f'(x)\vert^2\,dx \geq 0. \end{align}
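The computation can be reproduced symbolically. The sketch below (SymPy) uses the hypothetical test function f(x) = e^(−x²), which is rapidly decaying rather than compactly supported, so the boundary term still vanishes:

```python
import sympy as sp

x = sp.symbols('x', real=True)
# Sample function: smooth and rapidly decaying, standing in for
# compact support (the boundary term vanishes either way).
f = sp.exp(-x**2)

lhs = sp.integrate(-sp.diff(f, x, 2) * f, (x, -sp.oo, sp.oo))  # ⟨-Δf, f⟩
rhs = sp.integrate(sp.diff(f, x)**2, (x, -sp.oo, sp.oo))       # ∫ |f'|²
assert sp.simplify(lhs - rhs) == 0 and rhs > 0
```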

## Recursive integration by parts

Integration by parts can often be applied recursively on the ∫ v du term to provide the following formula

$\int uv = u v_1 - u' v_2 + u'' v_3 - \cdots + (-1)^{n-1}\ u^{(n-1)} \ v_{n} + (-1)^n \int{u^{(n)}v_{n}}.\!$

Here, u′ is the first derivative of u and u′′ is the second derivative. Further, u^(n) denotes the nth derivative of u with respect to the independent variable, while v_n denotes a repeated antiderivative of v:

$v_{n+1}(x)=\int\! \int\ \cdots \int v \ (dx)^{n+1}.\!$

There are n + 1 integrals.

Note that the integrand above (uv) differs from that of the previous equations: the dv factor has been written simply as v for convenience.

The form above is convenient because it can be evaluated by differentiating the first term and integrating the second (with a sign reversal each time), starting with uv₁. It is especially useful when u^(k+1) becomes zero for some k + 1, since the evaluation can then stop once the u^(k) term has been reached.

### Tabular integration by parts

While the aforementioned recursive definition is correct, it is often tedious to remember and implement. A much easier visual representation of this process is often taught to students and is dubbed either "the tabular method",[2] "the Stand and Deliver method",[3] "rapid repeated integration" or "the tic-tac-toe method". This method works best when one of the two functions in the product is a polynomial, that is, after differentiating it several times one obtains zero. It may also be extended to work for functions that will repeat themselves.

For example, consider the integral

$\int x^3 \cos x \, dx.\!$

Let u = x³. Begin with this function and list in a column all the subsequent derivatives until zero is reached. Secondly, begin with the function v (in this case cos x) and list each successive integral of v until the size of the column is the same as that of u. The result should appear as follows.

| Derivatives of u (Column A) | Integrals of v (Column B) |
|---|---|
| $x^3$ | $\cos x$ |
| $3x^2$ | $\sin x$ |
| $6x$ | $-\cos x$ |
| $6$ | $-\sin x$ |
| $0$ | $\cos x$ |

Now pair the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, and so on, with alternating signs (beginning with the positive sign). Continue until the derivative column reaches zero. The result is the following (notice the alternating signs in each term):

$(+)(x^3)(\sin x) - (3x^2)(-\cos x) + (6x)(-\sin x) - (6)(\cos x) + C \,.$

which, after simplification, leads to the result

$x^3\sin x + 3x^2\cos x - 6x\sin x - 6\cos x + C. \,$
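The tabular procedure is mechanical enough to automate. The sketch below (SymPy; `tabular_parts` is a hypothetical helper, not a library function) differentiates column A and integrates column B with alternating signs, reproducing the result above:

```python
import sympy as sp

x = sp.symbols('x')

def tabular_parts(u, v, x):
    """Tabular integration by parts for polynomial u times integrable v.

    Differentiates u until it reaches zero while repeatedly integrating v,
    then pairs entries of the two columns with alternating signs.
    """
    result = sp.Integer(0)
    sign = 1
    while u != 0:
        v = sp.integrate(v, x)   # next entry of column B
        result += sign * u * v   # pair A[k] with B[k+1]
        u = sp.diff(u, x)        # next entry of column A
        sign = -sign
    return result

F = tabular_parts(x**3, sp.cos(x), x)
# F should match x^3 sin x + 3x^2 cos x - 6x sin x - 6 cos x (up to C)
assert sp.simplify(sp.diff(F, x) - x**3 * sp.cos(x)) == 0
```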

With proper understanding of the tabular method, it can be extended. Consider

$\int e^x \cos x \,dx.$

| Derivatives of u (Column A) | Integrals of v (Column B) |
|---|---|
| $e^x$ | $\cos x$ |
| $e^x$ | $\sin x$ |
| $e^x$ | $-\cos x$ |

In this case the derivative column never reaches zero, so in the last step the product of the two bottom cells is integrated instead, obtaining:

$\int e^x \cos x \,dx = e^x\sin x + e^x\cos x - \int e^x \cos x \,dx,$

$2 \, \int e^x \cos x \,dx = e^x\sin x + e^x\cos x,$

and yields the result:

$\int e^x \cos x \,dx = {e^x ( \sin x + \cos x ) \over 2} + C.\!$

## Higher dimensions

The formula for integration by parts can be extended to functions of several variables. Instead of an interval one needs to integrate over an n-dimensional set. Also, one replaces the derivative with a partial derivative.

More specifically, suppose Ω is an open bounded subset of ℝn with a piecewise smooth boundary Γ. If u and v are two continuously differentiable functions on the closure of Ω, then the formula for integration by parts is

$\int_{\Omega} \frac{\partial u}{\partial x_i} v \,d\Omega = \int_{\Gamma} u v \, \nu_i \,d\Gamma - \int_{\Omega} u \frac{\partial v}{\partial x_i} \, d\Omega,$

where $\hat{\mathbf{\nu}}$ is the outward unit surface normal to Γ, ν_i is its i-th component, and i ranges from 1 to n.
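The multidimensional formula can be checked on a simple domain. The sketch below (SymPy) takes Ω to be the unit square with the hypothetical pair u = x²y, v = xy², and compares the volume and boundary terms for the x-component (for which ν_x = ±1 on the faces x = 1 and x = 0, and 0 elsewhere):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x**2 * y   # arbitrary sample pair on the unit square
v = x * y**2

# Volume integrals over Ω = [0,1] x [0,1]
lhs = sp.integrate(sp.diff(u, x) * v, (x, 0, 1), (y, 0, 1))  # ∫ (∂u/∂x) v
vol = sp.integrate(u * sp.diff(v, x), (x, 0, 1), (y, 0, 1))  # ∫ u (∂v/∂x)

# Boundary integral: ν_x = +1 on the face x = 1, ν_x = -1 on x = 0,
# and ν_x = 0 on the faces y = 0 and y = 1.
surf = sp.integrate((u * v).subs(x, 1), (y, 0, 1)) \
     - sp.integrate((u * v).subs(x, 0), (y, 0, 1))

assert sp.simplify(lhs - (surf - vol)) == 0
```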

Replacing v in the above formula with v_i and summing over i gives the vector formula

$\int_{\Omega} \nabla u \cdot \mathbf{v}\, d\Omega = \int_{\Gamma} u (\mathbf{v}\cdot \hat{\nu})\, d\Gamma - \int_\Omega u\, \nabla\cdot\mathbf{v}\, d\Omega,$

where v is a vector-valued function with components v_1, ..., v_n.

Setting u equal to the constant function 1 in the above formula gives the divergence theorem

$\int_{\Gamma} \mathbf{v} \cdot \hat{\nu}\, d\Gamma = \int_\Omega \nabla\cdot\mathbf{v}\, d\Omega.$

For $\mathbf{v}=\nabla v$ where $v\in C^2(\bar{\Omega})$, one gets

$\int_{\Omega} \nabla u \cdot \nabla v\, d\Omega = \int_{\Gamma} u\, \nabla v\cdot\hat{\nu}\, d\Gamma - \int_\Omega u\, \nabla^2 v\, d\Omega,$

which is the first Green's identity.

The regularity requirements of the theorem can be relaxed. For instance, the boundary Γ need only be Lipschitz continuous. In the first formula above, only u, v ∈ H¹(Ω) is necessary (where H¹ is a Sobolev space); the other formulas have similarly relaxed requirements.