# Limit of a function

| x | $\frac{\sin x}{x}$ |
| --- | --- |
| 1 | 0.841471... |
| 0.1 | 0.998334... |
| 0.01 | 0.999983... |

Although the function (sin x)/x is not defined at zero, as x becomes closer and closer to zero, (sin x)/x becomes arbitrarily close to 1. In other words, the limit of (sin x)/x as x approaches zero equals 1.
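This convergence is easy to check numerically. The following Python sketch simply tabulates sin(x)/x for the sample values in the table above (the extra point x = 0.001 is illustrative):

```python
import math

# Tabulate sin(x)/x as x approaches 0; the function is undefined at
# x = 0 itself, so only nonzero inputs are sampled.
for x in [1.0, 0.1, 0.01, 0.001]:
    print(f"x = {x:<6} sin(x)/x = {math.sin(x) / x:.6f}")
```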

In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input.

Formal definitions, first devised in the early 19th century, are given below. Informally, a function f assigns an output f(x) to every input x. We say the function has a limit L at an input p: this means f(x) gets closer and closer to L as x moves closer and closer to p. More specifically, when f is applied to any input sufficiently close to p, the output value is forced arbitrarily close to L. On the other hand, if some inputs very close to p are taken to outputs that stay a fixed distance apart, we say the limit does not exist.

The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the limit: roughly, a function is continuous if all of its limits agree with the values of the function. It also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function.

## History

Although implicit in the development of calculus of the 17th and 18th centuries, the modern idea of the limit of a function goes back to Bolzano who, in 1817, introduced the basics of the epsilon-delta technique to define continuous functions. However, his work was not known during his lifetime (Felscher 2000). Cauchy discussed limits in his Cours d'analyse (1821) and gave essentially the modern definition, but this is not often recognized because he only gave a verbal definition (Grabiner 1983). Weierstrass first introduced the epsilon-delta definition of limit in the form it is usually written today. He also introduced the notations lim and $\lim_{x \to x_0}$ (Burton 1997).

The modern notation of placing the arrow below the limit symbol is due to Hardy in his book A Course of Pure Mathematics in 1908 (Miller 2004).

## Motivation

Imagine a person walking over a landscape represented by the graph of y = f(x). His horizontal position is measured by the value of x, much like the position given by a map of the land or by a global positioning system. His altitude is given by the coordinate y. He is walking towards the horizontal position given by x = p. As he gets closer and closer to it, he notices that his altitude approaches L. Say there's a wall there so he can't stand on that point exactly, but can still get arbitrarily close to it. If asked about the altitude of x = p, he would then answer L.

What, then, does it mean to say that his altitude approaches L? It means that his altitude gets nearer and nearer to L except for a possible small error in accuracy. For example, suppose a particular accuracy goal is set for our traveler: he must get within ten meters of L in altitude. He reports back that indeed he can get within ten meters of L, since he notes that when he is anywhere within fifty horizontal meters of p, his altitude is always ten meters or less from L.

The accuracy goal is then changed: can he get within one vertical meter? Yes. If he is anywhere within seven horizontal meters of p, then his altitude always remains within one meter from the target L. In summary, to say that the traveler's altitude approaches L as his horizontal position approaches p means that for every target accuracy goal, however small it may be, there is some neighborhood of p whose altitude fulfills that accuracy goal.

The initial informal statement can now be explicated:

The limit of a function f(x) as x approaches p is a number L with the following property: given any target distance from L, there is a distance from p within which the values of f(x) remain within the target distance.

This explicit statement is quite close to the formal definition of the limit of a function with values in a topological space.

## Definitions

To say that

$\lim_{x \to p}f(x) = L, \,$

means that f(x) can be made as close as desired to L by making x close enough, but not equal, to p.

The following definitions (known as (ε, δ)-definitions) are the generally accepted ones for the limit of a function in various contexts.

### Functions on the real line

Suppose f : R → R is defined on the real line and p, L ∈ R. It is said that the limit of f, as x approaches p, is L and written

$\lim_{x \to p}f(x) = L,$

if the following property holds:

• For every real ε > 0, there exists a real δ > 0 such that for all real x, 0 < | x − p | < δ implies | f(x) − L | < ε.

The value of the limit does not depend on the value of f(p), nor even that p be in the domain of f.

A more general definition applies for functions defined on subsets of the real line. Let (a, b) be an open interval in R, and p a point of (a, b). Let f be a real-valued function defined on all of (a, b) except possibly at p itself. It is then said that the limit of f, as x approaches p, is L if, for every real ε > 0, there exists a real δ > 0 such that 0 < | x − p | < δ and x ∈ (a, b) implies | f(x) − L | < ε.

Here again the limit does not depend on f(p) being well-defined.

The letters ε and δ can be understood as "error" and "distance", and in fact Cauchy used ε as an abbreviation for "error" in some of his work (Grabiner 1983). In these terms, the error (ε) in the measurement of the value at the limit can be made as small as desired by reducing the distance (δ) to the limit point. As discussed below this definition also works for functions in a more general context. The idea that δ and ε represent distances helps suggest these generalizations.

#### One-sided limits

Main article: One-sided limit
The limit as x → x0+ differs from the limit as x → x0−. Therefore, the limit as x → x0 does not exist.

Alternatively x may approach p from above (right) or below (left), in which case the limits may be written as

$\lim_{x \to p^+}f(x) = L$

or

$\lim_{x \to p^-}f(x) = L$

respectively. If these limits are equal, then this common value can be referred to as the limit of f(x) at p. Conversely, if the two are not equal, the limit, as such, does not exist.

A formal definition is as follows. The limit of f(x) as x approaches p from above is L if, for every ε > 0, there exists a δ > 0 such that |f(x) − L| < ε whenever 0 < x − p < δ. The limit of f(x) as x approaches p from below is L if, for every ε > 0, there exists a δ > 0 such that |f(x) − L| < ε whenever 0 < p − x < δ.

If the limit does not exist then the oscillation of f at p is non-zero.
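A numerical sketch of one-sided limits, using the illustrative function f(x) = |x|/x: its right-hand limit at 0 is 1 and its left-hand limit is −1, so the two-sided limit does not exist.

```python
# One-sided limits of f(x) = |x| / x at p = 0: approaching from the
# right every value is +1, from the left every value is -1.
def f(x):
    return abs(x) / x

right = [f(10.0 ** -k) for k in range(1, 6)]    # x -> 0+
left = [f(-10.0 ** -k) for k in range(1, 6)]    # x -> 0-
print(right)  # [1.0, 1.0, 1.0, 1.0, 1.0]
print(left)   # [-1.0, -1.0, -1.0, -1.0, -1.0]
```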

#### Example of a function without a limit

The function without a limit, at an essential discontinuity

The function

$f(x)=\begin{cases}\sin\frac{5}{x-1} & \text{ for } x< 1 \\ 0 & \text{ for } x=1 \\ \frac{0.1}{x-1}& \text{ for } x>1\end{cases}$

has no limit at $x_0 = 1$.
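Sampling this piecewise function near x0 = 1 in Python shows the two failure modes: oscillation from the left and unbounded growth from the right (the sample points are illustrative):

```python
import math

# The piecewise function from the text: from the left the values
# oscillate between -1 and 1 without settling; from the right they
# grow without bound. Hence no limit exists at x0 = 1.
def f(x):
    if x < 1:
        return math.sin(5 / (x - 1))
    elif x == 1:
        return 0.0
    else:
        return 0.1 / (x - 1)

left = [f(1 - 10.0 ** -k) for k in range(1, 6)]   # stays in [-1, 1], oscillating
right = [f(1 + 10.0 ** -k) for k in range(1, 6)]  # grows roughly like 10**(k-1)
print(left)
print(right)
```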

### Functions on metric spaces

Suppose M and N are subsets of metric spaces A and B, respectively, and f : M → N is defined between M and N, with x ∈ M, p a limit point of M and L ∈ N. It is said that the limit of f as x approaches p is L and one writes

$\lim_{x \to p}f(x) = L$

if the following property holds:

• For every ε > 0, there exists a δ > 0 such that dB(f(x), L) < ε whenever 0 < dA(x, p) < δ.

Again, note that p need not be in the domain of f, nor does L need to be in the range of f, and even if f(p) is defined it need not be equal to L.

An alternative definition using the concept of neighbourhood is as follows:

$\lim_{x \to p}f(x) = L$

if, for every neighbourhood V of L in B, there exists a neighbourhood U of p in A such that f(U ∩ M − {p}) ⊆ V.

### Functions on topological spaces

Suppose X, Y are topological spaces with Y a Hausdorff space. Let p be a limit point of Ω ⊆ X, and L ∈ Y. For a function f : Ω → Y, it is said that the limit of f as x approaches p is L (i.e., f(x) → L as x → p) and one writes

$\lim_{x \to p}f(x) = L$

if the following property holds:

• For every open neighborhood V of L, there exists an open neighborhood U of p such that f(U ∩ Ω − {p}) ⊆ V.

This last part of the definition can also be phrased "there exists an open punctured neighbourhood U of p such that f(U∩Ω) ⊆ V ".

Note that the domain of f does not need to contain p. If it does, then the value of f at p is irrelevant to the definition of the limit. In particular, if the domain of f is X − {p} (or all of X), then the limit of f as xp exists and is equal to L if, for all subsets Ω of X with limit point p, the limit of the restriction of f to Ω exists and is equal to L. Sometimes this criterion is used to establish the non-existence of the two-sided limit of a function on R by showing that the one-sided limits either fail to exist or do not agree. Such a view is fundamental in the field of general topology, where limits and continuity at a point are defined in terms of special families of subsets, called filters, or generalized sequences known as nets.

Alternatively, the requirement that Y be a Hausdorff space can be relaxed to the assumption that Y be a general topological space, but then the limit of a function may not be unique. In particular, one can no longer talk about the limit of a function at a point, but rather a limit or the set of limits at a point.

A function f is continuous at a point p that is both in its domain and a limit point of its domain if and only if f(p) is the (or, in the general case, a) limit of f(x) as x tends to p.

### Limits involving infinity

The limit of this function at infinity exists.

For f(x) a real function, the limit of f as x approaches infinity is L, denoted

$\lim_{x \to \infty}f(x) = L,$

means that for all $\varepsilon > 0$, there exists c such that $|f(x) - L| < \varepsilon$ whenever x > c. Or, symbolically:

$\forall \varepsilon > 0 \; \exists c \; \forall x > c :\; |f(x) - L| < \varepsilon$

Similarly, the limit of f as x approaches negative infinity is L, denoted

$\lim_{x \to -\infty}f(x) = L,$

means that for all $\varepsilon > 0$ there exists c such that $|f(x) - L| < \varepsilon$ whenever x < c. Or, symbolically:

$\forall \varepsilon > 0 \; \exists c \; \forall x < c :\; |f(x) - L| < \varepsilon$

For example

$\lim_{x \to -\infty}e^x = 0. \,$
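This example can be checked against the definition: given ε, the choice c = ln ε is a valid witness, since e^x < ε for every x < c. A small Python sketch (the sampled x values are illustrative):

```python
import math

# For lim_{x -> -inf} e^x = 0: given epsilon, take c = ln(epsilon).
# Then every x < c satisfies |e^x - 0| < epsilon.
def c_for(epsilon):
    return math.log(epsilon)

for epsilon in [0.1, 1e-6]:
    c = c_for(epsilon)
    for x in [c - 1, c - 10, c - 100]:
        assert math.exp(x) < epsilon
print("witness c found for each epsilon")
```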

Limits can also have infinite values. When infinities are not considered legitimate values, which is standard (but see below), a formalist will insist upon various circumlocutions. For example, rather than say that a limit is infinity, the proper thing is to say that the function "diverges" or "grows without bound". In particular, the following informal example of how to pronounce the notation is arguably inappropriate in the classroom (or any other formal setting). In any case, for example the limit of f as x approaches a is infinity, denoted

$\lim_{x \to a} f(x) = \infty, \,$

means that for all $\varepsilon > 0$ there exists $\delta > 0$ such that $f(x) > \varepsilon$ whenever $|x - a| < \delta$.

These ideas can be combined in a natural way to produce definitions for different combinations, such as

$\lim_{x \to \infty} f(x) = \infty, \lim_{x \to a^+}f(x) = -\infty. \,$

For example

$\lim_{x \to 0^+} \ln x = -\infty. \,$

Limits involving infinity are connected with the concept of asymptotes.

These notions of a limit attempt to provide a metric space interpretation to limits at infinity. However, note that these notions of a limit are consistent with the topological space definition of limit if

• a neighborhood of −∞ is defined to contain an interval [−∞, c) for some c ∈ R
• a neighborhood of ∞ is defined to contain an interval (c, ∞] where c ∈ R
• a neighborhood of a ∈ R is defined in the usual way in the metric space R

In this case, R is a topological space and any function of the form f : X → Y with X, Y ⊆ R is subject to the topological definition of a limit. Note that with this topological definition, it is easy to define infinite limits at finite points, which have not been defined above in the metric sense.

#### Alternative notation

Many authors[1] allow for the real projective line to be used as a way to include infinite values, as well as the extended real line. With this notation, the extended real line is given as R ∪ {−∞, +∞} and the projective real line is R ∪ {∞} where a neighborhood of ∞ is a set of the form {x: |x|>c}. The advantage is that one only needs three definitions for limits (left, right, and central) to cover all the cases. As presented above, for a completely rigorous account, we would need to consider 15 separate cases for each combination of infinities (five directions: −∞, left, central, right, and +∞; three bounds: −∞, finite, or +∞). There are also noteworthy pitfalls. For example, when working with the extended real line, $x^{-1}$ does not possess a central limit (which is normal):

$\lim_{x \to 0^{+}}{1\over x} = +\infty, \lim_{x \to 0^{-}}{1\over x} = -\infty.$

In contrast, when working with the projective real line, infinities (much like 0) are unsigned, so, the central limit does exist in that context:

$\lim_{x \to 0^{+}}{1\over x} = \lim_{x \to 0^{-}}{1\over x} = \lim_{x \to 0}{1\over x} = \infty.$

In fact there are a plethora of conflicting formal systems in use. In certain applications of numerical differentiation and integration, it is, for example, convenient to have signed zeroes. A simple reason has to do with the converse of $\lim_{x \to 0^{-}}{x^{-1}} = -\infty$, namely, it is convenient for $\lim_{x \to -\infty}{x^{-1}} = -0$ to be considered true. Such zeroes can be seen as an approximation to infinitesimals.

#### Evaluating limits at infinity for rational functions

Horizontal asymptote about y = 4

There are three basic rules for evaluating limits at infinity for a rational function f(x) = p(x)/q(x), where p and q are polynomials:

• If the degree of p is greater than the degree of q, then the limit is positive or negative infinity depending on the signs of the leading coefficients;
• If the degree of p and q are equal, the limit is the leading coefficient of p divided by the leading coefficient of q;
• If the degree of p is less than the degree of q, the limit is 0.

If the limit at infinity exists, it represents a horizontal asymptote at y = L. Polynomials do not have horizontal asymptotes; such asymptotes may however occur with rational functions.
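The three degree rules can be sketched in Python, with polynomials represented as coefficient lists from highest to lowest degree (the helper name and example polynomials are illustrative):

```python
# The three degree rules for lim_{x -> +inf} p(x)/q(x), with polynomials
# given as coefficient lists from highest to lowest degree.
def limit_at_infinity(p, q):
    """Limit of p(x)/q(x) as x -> +inf; float('inf') or float('-inf')
    for the unbounded cases."""
    deg_p, deg_q = len(p) - 1, len(q) - 1
    if deg_p > deg_q:
        # sign determined by the leading coefficients
        return float('inf') if p[0] / q[0] > 0 else float('-inf')
    if deg_p == deg_q:
        return p[0] / q[0]
    return 0.0

print(limit_at_infinity([3, 0, 1], [1, 5]))      # deg 2 over deg 1 -> inf
print(limit_at_infinity([4, -1, 2], [1, 0, 7]))  # equal degrees -> 4.0
print(limit_at_infinity([1, 2], [1, 0, 0]))      # deg 1 over deg 2 -> 0.0
```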

### Limit of a function of more than one variable

By noting that |x − p| represents a distance, the definition of a limit can be extended to functions of more than one variable. In the case of a function f : R2R,

$\lim_{(x,y) \to (p, q)} f(x, y) = L$

if

for every ε > 0 there exists a δ > 0 such that for all (x, y) with 0 < ||(x, y) − (p, q)|| < δ, we have |f(x, y) − L| < ε,

where ||(x,y) − (p,q)|| represents the Euclidean distance. This can be extended to any number of variables.
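In more than one variable, the limit must agree along every path of approach. A standard example (not discussed above, introduced here for illustration) is f(x, y) = xy/(x² + y²), which has no limit at the origin; sampling two paths shows why:

```python
# f(x, y) = x*y / (x**2 + y**2) has no limit at (0, 0): along the
# x-axis the values are all 0, along the line y = x they are all 1/2,
# so no single L works for every path of approach.
def f(x, y):
    return x * y / (x ** 2 + y ** 2)

along_axis = [f(10.0 ** -k, 0.0) for k in range(1, 6)]         # all 0.0
along_diag = [f(10.0 ** -k, 10.0 ** -k) for k in range(1, 6)]  # all 0.5
print(along_axis)
print(along_diag)
```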

### Sequential limits

Let f : XY be a mapping from a topological space X into a Hausdorff space Y, pX and LY.

The sequential limit of f as xp is L if, for every sequence (xn) in X − {p} which converges to p, the sequence f(xn) converges to L.

If L is the limit (in the sense above) of f as x approaches p, then it is a sequential limit as well, however the converse need not hold in general. If in addition X is metrizable, then L is the sequential limit of f as x approaches p if and only if it is the limit (in the sense above) of f as x approaches p.

### Other characterizations

#### Limit of a function in terms of sequences

For functions on the real line, one way to define the limit of a function is in terms of the limit of sequences. In this setting:

$\lim_{x\to a}f(x)=L$

if and only if for all sequences $x_n$ (with $x_n$ not equal to $a$ for all $n$) converging to $a$, the sequence $f(x_n)$ converges to $L$. It was shown by Sierpiński in 1916 that proving the equivalence of this definition and the definition above requires, and is equivalent to, a weak form of the axiom of choice. Note that defining what it means for a sequence $x_n$ to converge to $a$ requires the epsilon-delta method.
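A numerical sketch of the sequential characterization, using the illustrative function f(x) = x² at a = 3: along several sequences x_n → 3 (with x_n ≠ 3), the values f(x_n) approach 9.

```python
# Sequential characterization sketch for f(x) = x**2 at a = 3:
# several sequences x_n -> 3 (never equal to 3), checked to give
# f(x_n) -> 9.
def f(x):
    return x ** 2

sequences = [
    [3 + 1 / n for n in range(1, 2000)],          # approach from above
    [3 - 1 / n for n in range(1, 2000)],          # approach from below
    [3 + (-1) ** n / n for n in range(1, 2000)],  # alternating sides
]
for seq in sequences:
    assert abs(f(seq[-1]) - 9.0) < 0.01
print("f(x_n) -> 9 along each sampled sequence")
```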

#### Limit of a function in non-standard calculus

In non-standard calculus the limit of a function is defined by:

$\lim_{x\to a}f(x)=L$

if and only if for all $x\in \mathbb{R}^*$, $f^*(x)-L$ is infinitesimal whenever $x-a$ is infinitesimal. Here $\mathbb{R}^*$ are the hyperreal numbers and $f^*$ is the natural extension of f to the non-standard real numbers. Keisler proved that such a hyperreal definition of limit reduces the quantifier complexity by two quantifiers.[2] On the other hand, Hrbacek writes that for the definitions to be valid for all hyperreal numbers they must implicitly be grounded in the ε-δ method, and claims that, from the pedagogical point of view, the hope that non-standard calculus could be done without ε-δ methods can not be realized in full.[3] Bŀaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".[4]

#### Limit of a function in terms of Nearness

At the 1908 International Congress of Mathematicians, F. Riesz introduced an alternative way of defining limits and continuity using a concept called "nearness". A point $x$ is defined to be near a set $A\subseteq \mathbb{R}$ if for every $r>0$ there is a point $a\in A$ so that $|x-a|<r$. In this setting the

$\lim_{x\to a} f(x)=L$

if and only if for all $A\subseteq \mathbb{R}$, $L$ is near $f(A)$ whenever $a$ is near $A$. Here $f(A)$ is the set $\{y\in\mathbb{R} \mid y=f(x) \text{ for some } x\in A\}$. This definition can also be extended to metric and topological spaces.

## Relationship to continuity

The notion of the limit of a function is very closely related to the concept of continuity. A function f is said to be continuous at c if it is both defined at c and its value at c equals the limit of f as x approaches c:

$\lim_{x\to c} f(x) = f(c).$

If the condition 0 < |x − c| is left out of the definition of limit, then the resulting definition would be equivalent to requiring f to be continuous at c.

## Properties

If a function f is real-valued, then the limit of f at p is L if and only if both the right-hand limit and left-hand limit of f at p exist and are equal to L.

The function f is continuous at p if and only if the limit of f(x) as x approaches p exists and is equal to f(p). If f : MN is a function between metric spaces M and N, then it is equivalent that f transforms every sequence in M which converges towards p into a sequence in N which converges towards f(p).

If N is a normed vector space, then the limit operation is linear in the following sense: if the limit of f(x) as x approaches p is L and the limit of g(x) as x approaches p is P, then the limit of f(x) + g(x) as x approaches p is L + P. If a is a scalar from the base field, then the limit of af(x) as x approaches p is aL.

If f is a real-valued (or complex-valued) function, then taking the limit is compatible with the algebraic operations, provided the limits on the right sides of the equations below exist (the last identity only holds if the denominator is non-zero). This fact is often called the algebraic limit theorem.

$\begin{matrix} \lim\limits_{x \to p} & (f(x) + g(x)) & = & \lim\limits_{x \to p} f(x) + \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} & (f(x) - g(x)) & = & \lim\limits_{x \to p} f(x) - \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} & (f(x)\cdot g(x)) & = & \lim\limits_{x \to p} f(x) \cdot \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} & (f(x)/g(x)) & = & {\lim\limits_{x \to p} f(x) / \lim\limits_{x \to p} g(x)} \end{matrix}$

In each case above, when the limits on the right do not exist, or, in the last case, when the limits in both the numerator and the denominator are zero, nonetheless the limit on the left, called an indeterminate form, may still exist—this depends on the functions f and g. These rules are also valid for one-sided limits, for the case p = ±∞, and also for infinite limits using the rules

• q + ∞ = ∞ for q ≠ −∞
• q × ∞ = ∞ if q > 0
• q × ∞ = −∞ if q < 0
• q / ∞ = 0 if q ≠ ± ∞

Note that there is no general rule for the case q / 0; it all depends on the way 0 is approached. Indeterminate forms—for instance, 0/0, 0×∞, ∞−∞, and ∞/∞—are also not covered by these rules, but the corresponding limits can often be determined with L'Hôpital's rule or the Squeeze theorem.

### Chain rule

In general, the statement

$\lim_{y \to b} f(y) = c$ and $\lim_{x \to a} g(x) = b \Rightarrow \lim_{x \to a} f(g(x)) = c$,

is not true. However, this "chain rule" does hold if one of the following additional conditions holds:

• f(b) = c (i. e. f is continuous at b) or
• g does not take the value b near a (i. e. there exists a $\delta >0$ such that if $0<|x-a|<\delta$ then $|g(x)-b|>0$).

For a counterexample, consider the following function which violates both additional restrictions:

$f(x)=g(x)=\begin{cases}0 & \text{if } x\neq 0 \\ 1 & \text{if } x=0 \end{cases}$

Since f has a removable discontinuity at 0,

$\lim_{x \to a} f(x) = 0$ for all $a$.

Thus, the naïve chain rule would suggest that the limit of f(f(x)) is 0. However, it is the case that

$f(f(x))=\begin{cases}1 & \text{if } x\neq 0 \\ 0 & \text{if } x=0 \end{cases}$
$\lim_{x \to a} f(f(x)) = 1$ for all $a$.
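The counterexample is easy to reproduce in code: sampling f(f(x)) near any point gives 1, even though every limit of f itself is 0.

```python
# The chain-rule counterexample: f sends every nonzero x to 0 and 0 to 1.
# The limit of f at any point is 0, yet f(f(x)) = 1 for all nonzero x,
# so the limit of the composite at any point is 1, not 0.
def f(x):
    return 0 if x != 0 else 1

samples = [f(f(x)) for x in [-0.1, -0.001, 0.001, 0.1]]
print(samples)  # [1, 1, 1, 1] -- the composite tends to 1
print(f(f(0)))  # 0 -- the value at 0 differs from the limit
```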

### Limits of special interest

• $\lim_{x \to 0} \frac{\sin x}{x} = 1$
• $\lim_{x \to 0} \frac{1 - \cos x}{x} = 0$
• $\lim_{x \to \infty} x \sin \left(\frac{c}{x}\right) = c$

The first limit can be proven with the squeeze theorem. For 0 < x < π/2:

$\sin x < x < \tan x.$

Dividing everything by sin(x) yields

$1 < \frac{x}{\sin x} < \frac{\tan x}{\sin x}$
$1 < \frac{x}{\sin x} < \frac{1}{\cos x}$
$\lim_{x \to 0} \frac{1}{\cos x} = \frac{1}{1} = 1$
$\lim_{x \to 0} \frac{x}{\sin x} = 1$
$\lim_{x \to 0} \frac{\sin x}{x} = 1$
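The squeeze bounds can be checked numerically for small positive x in (0, π/2) (the sample points are illustrative):

```python
import math

# Verify the squeeze bounds 1 < x/sin(x) < 1/cos(x) for small x > 0;
# both outer expressions tend to 1, forcing x/sin(x) -> 1 and hence
# sin(x)/x -> 1.
for x in [0.5, 0.1, 0.01]:
    lower, middle, upper = 1.0, x / math.sin(x), 1.0 / math.cos(x)
    assert lower < middle < upper
    print(f"x = {x}: 1 < {middle:.6f} < {upper:.6f}")
```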

The second limit can be proven with the first limit and the following identity:

$1 - \cos^2 x = \sin^2 x$

Starting with

$\frac{1 - \cos x}{x}$

Multiplying numerator and denominator by (1 + cos x) yields

$\frac{(1 - \cos x)(1 + \cos x)}{x (1 + \cos x)} = \frac{(1 - \cos^2 x)}{x (1 + \cos x)}= \frac{\sin^2 x}{x (1 + \cos x)} = \frac{\sin x}{x} \frac{\sin x}{1 + \cos x}$
$\lim_{x \to 0}\left ( \frac{\sin x}{x} \frac{\sin x}{1 + \cos x} \right ) = \left (\lim_{x \to 0} \frac{\sin x}{x} \right ) \left ( \lim_{x \to 0} \frac{\sin x}{1 + \cos x} \right ) = \left (1 \right )\left (\frac{0}{2} \right )= 0$
$\lim_{x \to 0} \frac{1 - \cos x}{x} = 0$

### L'Hôpital's rule

Main article: l'Hôpital's rule

This rule uses derivatives to find limits of indeterminate forms 0/0 or ±∞/∞, and only applies to such cases. Other indeterminate forms may be manipulated into this form. Given two functions f(x) and g(x) defined over an open interval I containing the desired limit point c, if:

1. $\lim_{x \to c}f(x)=\lim_{x \to c}g(x)=0, \text{ or } \lim_{x \to c}|f(x)|=\lim_{x \to c}|g(x)| = \infty, \text{ and}$
2. $f \text{ and } g \text{ are differentiable over } I \setminus \{c\}, \text{ and}$
3. $g'(x)\neq 0 \text{ for all } x \in I \setminus \{c\}, \text{ and}$
4. $\lim_{x\to c}\frac{f'(x)}{g'(x)} \text{ exists, then}$
• $\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}$

Normally, the first condition is the most important one.

For example: $\lim_{x \to 0} \frac{\sin (2x)}{\sin (3x)} = \lim_{x \to 0} \frac{2 \cos (2x)}{3 \cos (3x)} = \frac{2 \cdot 1}{3 \cdot 1} = \frac{2}{3}.$
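A quick numerical check of this example confirms the value 2/3:

```python
import math

# sin(2x)/sin(3x) is a 0/0 form at x = 0; the ratio tends to 2/3,
# matching the ratio of derivatives 2cos(2x)/(3cos(3x)) at x = 0.
for x in [0.1, 0.01, 0.001]:
    ratio = math.sin(2 * x) / math.sin(3 * x)
    print(f"x = {x}: {ratio:.6f}")  # tends to 0.666667
```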

### Summations and integrals

Specifying an infinite bound on a summation or integral is a common shorthand for specifying a limit.

A short way to write the limit $\lim_{n \to \infty} \sum_{i=s}^n f(i)$ is $\sum_{i=s}^\infty f(i)$.

A short way to write the limit $\lim_{x \to \infty} \int_a^x f(t) \; dt$ is $\int_a^\infty f(t) \; dt$.

A short way to write the limit $\lim_{x \to -\infty} \int_x^b f(t) \; dt$ is $\int_{-\infty}^b f(t) \; dt$.
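For instance, the shorthand for the geometric series $\sum_{i=0}^\infty (1/2)^i$ denotes the limit of its partial sums, which approach 2 (a small illustrative sketch):

```python
# The infinite-sum shorthand means the limit of partial sums: for the
# geometric series f(i) = (1/2)**i the partial sums approach 2.
def partial_sum(n):
    return sum(0.5 ** i for i in range(n + 1))

for n in [5, 10, 20, 50]:
    print(n, partial_sum(n))  # approaches 2.0
```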