Improper limits

Published

November 13, 2025

Modified

November 21, 2025

1 Introduction

2 Unbounded limits

The notion of limit considered here requires the actual limit to be a finite real number. However, there are situations in which the closer we get to a point in the domain, the more the function increases or decreases.

For instance, let \(D=\mathbb{R}-\{0\}\), and consider the function \(f\colon D\subset\mathbb{R}\rightarrow\mathbb{R}\) given by \(f(x)=\frac{1}{x}\). We can compute \(f(x)\) for values of \(x\) closer and closer to \(0\). For positive values of \(x\), we see that the closer \(x\) is to \(0\), the bigger \(f(x)\) is. For negative values of \(x\), however, the closer \(x\) is to \(0\), the bigger \(f(x)\) becomes in absolute value, but with negative sign.
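
As a purely numerical illustration of this behaviour (a small script added here for concreteness; it is of course not a proof), we can tabulate \(f(x)=\frac{1}{x}\) at points approaching \(0\) from both sides:

```python
# Illustrative numerical experiment (not a proof): evaluate f(x) = 1/x at points
# that approach 0 from the right (positive x) and from the left (negative x).
f = lambda x: 1 / x

for x in [0.1, 0.01, 0.001, 0.0001]:
    print(f"f({x}) = {f(x)}    f({-x}) = {f(-x)}")
# The values on the right grow without bound, while the values on the left
# become larger and larger in absolute value but with negative sign.
```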

Faced with this situation, we conjecture that the left and right limits at \(0\) do not exist in the sense of this definition. However, since we do have an intuitive understanding of how \(f\) behaves around \(0\) (it increases indefinitely when \(x\rightarrow 0^{+}\), and decreases indefinitely when \(x\rightarrow 0^{-}\)), we would like to be able to make this intuitive understanding rigorous. A suitable way to do so is by introducing the notion of unbounded limits.

Note: Unbounded limits

Definition 1 Consider the function \(f\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\), and let \(x_{0}\) be an accumulation point for \(D\). We say that the formal equality \[ \lim_{x\rightarrow x_{0}} f(x)=+\infty \] holds if for every \(K>0\) there is a \(\delta>0\) such that \[ |x-x_{0}|<\delta \Rightarrow f(x)>K \] for every \(x_{0}\neq x\in D\). In this case, we say that \(+\infty\) is the limit of \(f\) at \(x_{0}\).

Analogously, we say that the formal equality \[ \lim_{x\rightarrow x_{0}} f(x)=-\infty \] holds if for every \(K>0\) there is a \(\delta>0\) such that \[ |x-x_{0}|<\delta \Rightarrow f(x)< -K \] for every \(x_{0}\neq x\in D\). In this case, we say that \(-\infty\) is the limit of \(f\) at \(x_{0}\).

Left and right unbounded limits are defined by replacing \(|x- x_{0}|<\delta\) with \(0<x_{0}-x<\delta\) or \(0<x-x_{0}<\delta\), respectively.
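
To see the definition in action on a concrete case (an illustrative sketch, not part of the definition itself), take \(f(x)=\frac{1}{x^{2}}\) and \(x_{0}=0\): given \(K>0\), the choice \(\delta=\frac{1}{\sqrt{K}}\) works, since \(0<|x|<\delta\) implies \(\frac{1}{x^{2}}>\frac{1}{\delta^{2}}=K\). The snippet below merely spot-checks this choice of \(\delta\) on a finite grid of points (assuming a standard Python environment):

```python
import math

def check_unbounded_limit(f, x0, K, delta, samples=1000):
    """Spot-check the condition of Definition 1 on a finite grid:
    0 < |x - x0| < delta should imply f(x) > K.
    A finite check can only support the conjecture, never prove it."""
    for i in range(1, samples + 1):
        h = delta * i / (samples + 1)          # 0 < h < delta
        if f(x0 - h) <= K or f(x0 + h) <= K:
            return False
    return True

K = 10_000.0
delta = 1.0 / math.sqrt(K)                     # the delta suggested above
print(check_unbounded_limit(lambda x: 1.0 / x**2, 0.0, K, delta))   # True
```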

Before proceeding further, we need to make sure that the idea of unbounded limits formalized in Definition 1 makes actual sense.

First of all, just as we did for regular limits here, we must ensure that the notions of unbounded limits and unbounded left and right limits are well-posed.

Note: Well-posedness of unbounded limits

Proposition 1 Let \(x_{0}\) be an accumulation point for \(D\subseteq \mathbb{R}\), and consider the function \(f\colon D\subseteq\mathbb{R}\rightarrow\mathbb{R}\). If \[ \begin{split} \lim_{x\rightarrow x_{0}}f(x)&=L \\ \lim_{x\rightarrow x_{0}}f(x)&=M, \end{split} \] according to Definition 1, then \(L=M\).

Under the obvious modifications, a similar result holds for left and right unbounded limits.

Proof:

The proof is a slight modification of that given here, and you are strongly invited to try to fill the gaps on your own.

Then, similarly to what we did here, we have to check the relation between the existence of an unbounded limit at \(x_{0}\) and the existence of left and right unbounded limits at \(x_{0}\).

Caution: Existence and equality of left and right unbounded limits at \(x_{0}\) is equivalent to existence of the unbounded limit at \(x_{0}\)

Exercise 1 Let \(x_{0}\) be an accumulation point for \(D\subseteq\mathbb{R}\) both from the left and from the right, and consider a function \(f\colon D\rightarrow\mathbb{R}\). Prove that \[ \lim_{x\rightarrow x_{0}} f(x)=+\infty \] in the sense of Definition 1 if and only if \[ \lim_{x\rightarrow x_{0}^{-}} f(x)=\lim_{x\rightarrow x_{0}^{+}} f(x)=+\infty , \] and prove the analogous statement for \(-\infty\).

Finally, we have to verify that different functions that agree on an arbitrary subset of the intersection of their domains have the same unbounded limits (when they exist) at the points of the intersection.

Caution: Unbounded limits of the restriction

Exercise 2 Let \(D,D'\subseteq \mathbb{R}\) be such that \(D\cap D'\neq\emptyset\), and consider two functions \(f\colon D \rightarrow \mathbb{R}\) and \(g\colon D'\rightarrow \mathbb{R}\) such that \(g(x)=f(x)\) on \(S\subseteq D\cap D'\). If \(x_{0}\) is an accumulation point for \(S\), prove that \[ \lim_{x\rightarrow x_{0}} f(x)=\lim_{x\rightarrow x_{0}} g(x) \] whenever either one of the limits exists in accordance with Definition 1.

Prove analogous results for the case of left and right limits.

Solution:

You are strongly invited to try to solve it on your own following what you should have already done for this exercise.

Let us now apply Definition 1 to rigorously investigate what happens in the motivating example \(f(x)=\frac{1}{x}\) discussed above.

Tip: The case \(f(x)=\frac{1}{x}\)

Example 1 Let \(f\colon \mathbb{R}-\{0\}\rightarrow\mathbb{R}\) be given by \(f(x)=\frac{1}{x}\). The point \(0\) is an accumulation point for the domain of \(f\), so it makes sense to investigate the limit for \(x\rightarrow 0\). As mentioned before, when \(x\) approaches \(0\) from the right, the function \(f\) increases, while it decreases when \(x\) approaches \(0\) from the left. Therefore, we conjecture that \[ \lim_{x\rightarrow 0^{\pm}} \frac{1}{x}=\pm\infty . \] Let us consider \(x\rightarrow 0^{+}\), so that \(x>0\). Following Definition 1, given \(K>0\) we take \(\delta\leq\frac{1}{K}\), and we see that \[ 0<x-0<\delta \implies f(x)=\frac{1}{x}>\frac{1}{\delta}\geq K , \] and our conjecture follows. The case \(x\rightarrow 0^{-}\) follows analogously.
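
As a quick cross-check of Example 1 (illustrative only; it assumes the sympy library is available and, of course, does not replace the \(K\)–\(\delta\) argument), a computer algebra system returns the same one-sided limits:

```python
from sympy import Symbol, limit

x = Symbol('x')
print(limit(1/x, x, 0, dir='+'))   # oo, i.e. +infinity from the right
print(limit(1/x, x, 0, dir='-'))   # -oo, i.e. -infinity from the left
```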

The algebraic manipulations of limits here can be appropriately modified also to handle unbounded limits, provided the so-called indeterminate forms are treated with care.

Note: Algebraic manipulations of unbounded limits

Proposition 2 Let \(\alpha,\beta\in\mathbb{R}\), let \(x_{0}\) be an accumulation point for \(D\subseteq \mathbb{R}\), and consider two functions \(f,g\colon D\rightarrow \mathbb{R}\). If \[ \lim_{x\rightarrow x_{0}}f(x)=L_{1} \quad\mbox{ and }\quad \lim_{x\rightarrow x_{0}}g(x)=L_{2} \] in the sense of this definition or Definition 1, then \[ \begin{split} \lim_{x\rightarrow x_{0}} \left(\alpha \,f + \beta \,g \right)(x) &=\lim_{x\rightarrow x_{0}} \alpha \,f (x) + \lim_{x\rightarrow x_{0}} \beta \, g(x) = \alpha\,L_{1} + \beta \,L_{2} \\ & \\ \lim_{x\rightarrow x_{0}} \left((\alpha \,f)\cdot(\beta \,g)\right) (x) & =\left(\lim_{x\rightarrow x_{0}} \alpha \,f(x)\right)\left(\lim_{x\rightarrow x_{0}} \beta \,g(x)\right)= \alpha\,L_{1}\,\beta \,L_{2} \\ & \\ \lim_{x\rightarrow x_{0}}\,\left(\frac{\alpha \,f}{\beta \,g}\right)(x)&=\frac{\lim_{x\rightarrow x_{0} }\alpha \,f(x)}{\lim_{x\rightarrow x_{0} }\beta \,g(x)} =\frac{\alpha\,L_{1}}{\beta\, L_{2}}, \end{split} \] where \(\beta,L_{2}\neq 0\) is assumed in the last equality, and where the following “conventions” are enforced: \[ \begin{split} 1) & \quad L + (\pm\infty) =\pm \infty \;\mbox{ if }\;\; L\in\mathbb{R} \mbox{ or } L=\pm\infty\\ & \\ 2) & \quad L\cdot (\pm\infty) = \pm \infty \;\mbox{ if }\;\; L>0\\ & \\ 3) & \quad L\cdot (\pm\infty) = \mp \infty \;\mbox{ if }\;\; L<0\\ & \\ 4) & \quad \frac{L}{\pm\infty} = 0 \;\mbox{ if }\;\; L\neq \pm \infty\\ & \\ 5) & \quad \frac{\pm\infty}{L} = \pm \infty \;\mbox{ if }\;\; L>0\\ & \\ 6) & \quad \frac{\pm\infty}{L} = \mp \infty \;\mbox{ if }\;\; L<0\\ & \\ 7) & \quad \frac{L}{0} =+\infty \;\mbox{ if }\;\; \left\{\begin{matrix}L>0 \mbox{ and } g(x)>0 \mbox{ near } x_{0} \\ \mbox{ or } \\ L<0 \mbox{ and } g(x)<0 \mbox{ near } x_{0}\end{matrix}\right.\\ & \\ 8) & \quad \frac{L}{0} =-\infty \;\mbox{ if }\;\; \left\{\begin{matrix}L>0 \mbox{ and } g(x)<0 \mbox{ near } x_{0} \\ \mbox{ or } \\ L<0 \mbox{ and } g(x)>0 \mbox{ near } x_{0}\end{matrix}\right. \end{split} \] All other cases are called indeterminate forms, and require a case-by-case analysis to determine whether the limit on the left exists or not.

Under the obvious modifications, the same conclusions hold for left and right limits.

Proof:

The case where both limits are real numbers is already done here.

The rest of the proof is tedious, and it will likely take me some time to upload it.
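
In the meantime, here are a couple of concrete instances of the conventions above (a hedged sketch using the sympy library, assumed to be available; the specific examples are not taken from the proposition itself):

```python
from sympy import Symbol, limit

x = Symbol('x')
# Convention 1): finite limit + (+infinity) = +infinity, here with L = 0.
print(limit(x + 1/x**2, x, 0))       # oo
# Convention 7): numerator -> 1 > 0, denominator -> 0 through positive values.
print(limit((x + 1)/x**2, x, 0))     # oo
# An "infinity - infinity" indeterminate form: the conventions do not apply, and a
# case-by-case analysis (here, 1/x**2 - 1/x**4 = (x**2 - 1)/x**4) is needed.
print(limit(1/x**2 - 1/x**4, x, 0))  # -oo
```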

The squeeze theorem can also be adapted to unbounded limits.

Note: The squeeze (pinching) theorem for unbounded limits

Theorem 1 Let \(x_{0}\) be an accumulation point of \(D\subseteq \mathbb{R}\), and consider the functions \(f,g\colon D\rightarrow \mathbb{R}\). If \(g(x)\leq f(x)\) for all \(x\in D\) and \[ \lim_{x\rightarrow x_{0}}g(x)= +\infty, \] then it follows that \[ \lim_{x\rightarrow x_{0}}f(x)=+\infty. \] Analogously, if \(f(x)\leq g(x)\) for all \(x\in D\) and \[ \lim_{x\rightarrow x_{0}}g(x)= -\infty, \] then it follows that \[ \lim_{x\rightarrow x_{0}}f(x)=-\infty. \]

Under the obvious modifications, the same conclusions hold for left and right limits.

Proof: The proof is similar to that given here, but it will take me some time to upload it.
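
For instance, here is one possible application (an example added for concreteness): for every \(x\neq 0\) we have \(\sin\left(\frac{1}{x}\right)\geq -1\), hence \[ \frac{1}{x^{2}}-1 \leq \frac{1}{x^{2}}+\sin\left(\frac{1}{x}\right) \] for all \(x\neq 0\). Since \(\lim_{x\rightarrow 0}\left(\frac{1}{x^{2}}-1\right)=+\infty\) (the unbounded limit \(\lim_{x\rightarrow 0}\frac{1}{x^{2}}=+\infty\) can be checked directly from Definition 1, and subtracting the constant \(1\) is covered by convention 1 in Proposition 2), Theorem 1 gives \[ \lim_{x\rightarrow 0}\left(\frac{1}{x^{2}}+\sin\left(\frac{1}{x}\right)\right)=+\infty , \] even though the term \(\sin\left(\frac{1}{x}\right)\) has no limit at \(0\).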

We can also adapt the proposition on the limit of composite functions to the case of unbounded limits.

Note: Unbounded limit of composite functions

Proposition 3 Consider \(D,D'\subseteq\mathbb{R}\), let \(x_{0}\in\mathbb{R}\) and \(l\in\mathbb{R}\) be accumulation points for \(D'\) and \(D\), respectively, and take two functions \(f\colon D \rightarrow \mathbb{R}\) and \(g\colon D' \rightarrow \mathbb{R}\). Suppose that \[ \lim_{x\rightarrow x_{0}}g(x) = l\in\mathbb{R}, \tag{1}\] and that \[ \qquad \lim_{y\rightarrow l} f(y) =\pm\infty. \tag{2}\] Then, the equality \[ \lim_{x\rightarrow x_{0}} f\circ g(x)=\pm\infty \] holds if there is an open interval \(I\) centered at \(x_{0}\) on which \(g(x)\neq l\) for all \(x_{0}\neq x\in I\).

Under the obvious modifications, analogous results hold for left and right limits.

Proof:

The proof is essentially the same as that given here.

Let us consider only the case where the limit in Equation 2 is \(+\infty\), because the case \(-\infty\) can be proved by analogy. From Equation 2, given \(K>0\), it follows that there is \(\eta>0\) for which \[ 0<|y-l|<\eta \implies f(y)>K, \tag{3}\] with \(y\neq l\). On the other hand, Equation 1 implies that, given the same \(\eta>0\) as before, there is \(\delta>0\) such that \[ 0<|x-x_{0}|<\delta \implies |g(x) - l|<\eta. \tag{4}\] Setting \(y=g(x)\), the assumption on \(g\) implies we can shrink \(\delta>0\) so that \(0<|x-x_{0}|<\delta\) also implies \(x\in I\), and thus \(g(x)\neq l\). Consequently, it holds \[ 0<|x-x_{0}|<\delta \implies 0<|g(x) - l|<\eta \implies f(g(x)) > K, \] as desired.
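
As an illustration of Proposition 3 (a sketch added here; the symbolic check assumes the sympy library): take \(g(x)=\sin(x)\), which tends to \(l=0\) as \(x\rightarrow 0\) and is nonzero on \((-\pi,\pi)-\{0\}\), and \(f(y)=\frac{1}{y^{2}}\), which tends to \(+\infty\) as \(y\rightarrow 0\). The proposition then gives \(\lim_{x\rightarrow 0}\frac{1}{\sin^{2}(x)}=+\infty\).

```python
from sympy import Symbol, limit, sin

x = Symbol('x')
# g(x) = sin(x) -> 0 with sin(x) != 0 near 0 (excluding 0); f(y) = 1/y**2 -> +oo.
print(limit(1/sin(x)**2, x, 0))   # oo
```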

3 Limits at infinity

It should be quite easy to convince ourselves that the function \(f\colon \mathbb{R}\rightarrow\mathbb{R}\) given by \(f(x)=x + c\), with \(c\in\mathbb{R}\), is obliged to increase/decrease indefinitely as the input \(x\) increases/decreases indefinitely. It should be less easy, but still quite manageable without too much thought, to convince ourselves that the function \(g\colon \mathbb{R}\rightarrow\mathbb{R}\) given by \(g(x)=x^{2}+c\) is obliged to increase indefinitely irrespective of the input \(x\) increasing or decreasing indefinitely. We could dare to say that no sophisticated mathematical reasoning is necessary for us to confidently believe these ‘facts’.

However, a different situation presents itself if we consider the function \(h\colon \mathbb{R}\rightarrow\mathbb{R}\) given by \(h(x)=\frac{x^{2}}{x^{2}+1}\). We ‘know’ that both the numerator and the denominator of \(h\) grow indefinitely irrespective of the input \(x\) increasing or decreasing, but it’s difficult to get a clear mental picture of what really happens when \(x\) increases/decreases indefinitely just by looking at the definition of the function. After some thinking, we may arrive at the conclusion that, since the numerator and denominator only differ by \(1\), their ratio gets increasingly close to \(1\). Some quick numerical checks with \(x=100,1000,10000,100000\) may support this intuition, but we cannot be as sure as we were for the cases \(f(x)=x+c\) or \(g(x)=x^{2}+c\) mentioned before.
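
The ‘quick numerical checks’ just mentioned can be carried out with a few lines of code (illustrative only; a table of values is evidence, not a proof):

```python
h = lambda x: x**2 / (x**2 + 1)

for x in [100, 1000, 10000, 100000]:
    print(x, h(x))
# The printed values get closer and closer to 1 as x grows.
```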

Then, we may start to wonder about more complex examples like \(t\colon D\subseteq\mathbb{R}\rightarrow\mathbb{R}\) given by \(t(x)=\frac{x^{3}-3x^{2}}{x^{2}+x^{4}}\) or \(s(x)=\frac{\sqrt{x^{4}+3x^{2}}}{x^{2}+x^{4}+83}\), and we quickly realize we have no idea other than plugging in some numbers and hoping some pattern emerges to guide our intuition.
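
For what it is worth, a computer algebra system (here sympy, assumed to be available) reports the following answers for these two examples; the point of the rest of this section is to develop the tools that let us confirm such answers rigorously:

```python
from sympy import Symbol, limit, sqrt, oo

x = Symbol('x')
print(limit((x**3 - 3*x**2) / (x**2 + x**4), x, oo))             # 0
print(limit(sqrt(x**4 + 3*x**2) / (x**2 + x**4 + 83), x, oo))    # 0
```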

From a mathematician’s point of view, this is the appropriate moment to go looking for a new concept of limit that can help us understand what happens when the input variable either grows or decreases indefinitely, by giving us rigorous analytical tools to guide our intuition. Luckily for us, this new notion of limit was found quite some time ago, and we can ‘simply’ exploit its power.

Note: Limits at \(+\infty\)

Definition 2 Let \(D\subseteq \mathbb{R}\) be such that \(+\infty\) is an accumulation point for it (that is, \(D\) is not bounded from above), and consider a function \(f\colon D\rightarrow \mathbb{R}\).

The formal equality \[ \lim_{x\rightarrow +\infty} f(x)= L \] holds with \(L\in\mathbb{R}\) if for every \(\varepsilon>0\) there is \(\varrho>0\) such that \[ D\ni x> \varrho \Rightarrow |f(x) - L|<\varepsilon . \]

The formal equality \[ \lim_{x\rightarrow +\infty} f(x)=+\infty \] holds if for every \(K>0\) there is \(\varrho>0\) such that \[ D\ni x> \varrho \Rightarrow f(x)>K. \]

The formal equality \[ \lim_{x\rightarrow +\infty} f(x)=-\infty \] holds if for every \(K>0\) there is \(\varrho>0\) such that \[ D\ni x> \varrho \Rightarrow f(x)<-K. \]

Note: Limits at \(-\infty\)

Definition 3 Let \(D\subseteq \mathbb{R}\) be such that \(-\infty\) is an accumulation point for it (that is, \(D\) is not bounded from below), and consider a function \(f\colon D\rightarrow \mathbb{R}\).

The formal equality \[ \lim_{x\rightarrow -\infty} f(x)= L \] holds with \(L\in\mathbb{R}\) if for every \(\varepsilon>0\) there is \(\varrho>0\) such that \[ D\ni x<-\varrho\Rightarrow |f(x) - L|<\varepsilon . \]

The formal equality \[ \lim_{x\rightarrow -\infty} f(x)=+\infty \] holds if for every \(K>0\) there is \(\varrho>0\) such that \[ D\ni x<- \varrho \Rightarrow f(x)>K. \]

The formal equality \[ \lim_{x\rightarrow -\infty} f(x)=-\infty \] holds if for every \(K>0\) there is \(\varrho>0\) such that \[ D\ni x<- \varrho \Rightarrow f(x)<-K. \]
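
To get a feeling for Definition 2 on a concrete function (an illustrative sketch, not a proof): for \(f(x)=\frac{x^{2}}{x^{2}+1}\) and \(L=1\) we have \(|f(x)-1|=\frac{1}{x^{2}+1}<\frac{1}{x^{2}}\), so given \(\varepsilon>0\) the choice \(\varrho=\frac{1}{\sqrt{\varepsilon}}\) works. The snippet below merely spot-checks this choice on a finite grid of points:

```python
import math

f = lambda x: x**2 / (x**2 + 1)
eps = 1e-6
rho = 1 / math.sqrt(eps)   # the rho suggested by the estimate |f(x) - 1| < 1/x**2

# Spot-check: every sampled x > rho should satisfy |f(x) - 1| < eps.
print(all(abs(f(rho + k) - 1) < eps for k in range(1, 10001)))   # True
```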

Once again, we need to check that the notions of limits at infinity introduced in Definition 2 and Definition 3 are well-posed. We start with the uniqueness of the limit at infinity.

Note: Well-posedness of limits at infinity

Proposition 4 Let \(+\infty\) be an accumulation point of \(D\subseteq\mathbb{R}\), and consider the function \(f\colon D\subseteq\mathbb{R}\rightarrow\mathbb{R}\). If \[ \begin{split} \lim_{x\rightarrow +\infty}f(x)&=L \\ \lim_{x\rightarrow +\infty}f(x)&=M, \end{split} \] according to Definition 2, then \(L=M\).

Under the obvious modifications, a similar result holds for \(x\rightarrow-\infty\).

Proof:

The proof is a slight modification of that given here, and you are strongly invited to fill the gaps on your own.

Then, we check that the notion of limits at infinity does not lead to weird situations when different functions agreeing on an arbitrary subset of the intersection of their domains are considered.

Caution: Limits at infinity of the restriction

Exercise 3 Let \(D,D'\subseteq \mathbb{R}\) be such that \(D\cap D'\neq\emptyset\), and consider two functions \(f\colon D \rightarrow \mathbb{R}\) and \(g\colon D'\rightarrow \mathbb{R}\) such that \(g(x)=f(x)\) on \(S\subseteq D\cap D'\). If \(\pm\infty\) is an accumulation point for \(S\), prove that \[ \lim_{x\rightarrow \pm\infty} f(x)=\lim_{x\rightarrow \pm\infty} g(x) \] whenever either one of the limits exists in the sense of Definition 2 or Definition 3.

Solution: You are strongly invited to try to solve it on your own following what you should have already done for this exercise and Exercise 2.

Let us now use Definition 2 and Definition 3 to discuss two simple but important examples.

Tip: Basic examples of limits at infinity

Example 2 Consider the function \(f\colon \mathbb{R}\rightarrow \mathbb{R}\) given by \(f(x)=x\). We want to prove that \[ \lim_{x\rightarrow\pm\infty} x=\pm\infty. \] Focusing on the case \(x\rightarrow +\infty\), Definition 2 requires us to prove that, given \(K>0\), there is \(M>0\) such that \[ x>M\implies x>K. \] Taking \(M>K\) is enough.

For the case \(x\rightarrow -\infty\), Definition 3 requires us to prove that, given \(K>0\), there is \(M>0\) such that \[ x<-M\implies x<-K. \] Taking (again) \(M>K\) is enough.


Consider the function \(f\colon D\subset \mathbb{R}\rightarrow \mathbb{R}\) given by \(f(x)=\frac{1}{x}\), where \(D\) is any subset not containing \(0\) and such that \(\pm\infty\) is an accumulation point. We want to prove that \[ \lim_{x\rightarrow\pm\infty} \frac{1}{x}=0. \] Focusing on the case \(x\rightarrow +\infty\), Definition 2 requires us to prove that, given \(\varepsilon>0\), there is \(M>0\) such that \[ x>M\implies |f(x) - 0|= \frac{1}{x}<\varepsilon. \] Since \(x>M>0\) implies \(0<\frac{1}{x}<\frac{1}{M}\), taking \(M\) such that \(\frac{1}{M}<\varepsilon\) is enough.

For the case \(x\rightarrow -\infty\), Definition 3 requires us to prove that, given \(\varepsilon>0\), there is \(M>0\) such that \[ x<-M<0\implies |f(x) - 0|= -\frac{1}{x}<\varepsilon. \] Since \(x<-M<0\) implies \(0<-\frac{1}{x}<\frac{1}{M}\), taking (again) \(M\) such that \(\frac{1}{M}<\varepsilon\) is enough.

The algebraic manipulations of limits here can be appropriately modified also to handle limits at infinity, provided we handle the so-called indeterminate forms with care.

Note: Algebraic manipulations of improper limits

Proposition 5 Let \(\alpha,\beta\in\mathbb{R}\), let \(\pm\infty\) be an accumulation point for \(D\subseteq \mathbb{R}\), and consider two functions \(f,g\colon D\rightarrow \mathbb{R}\). If \[ \lim_{x\rightarrow \pm\infty}f(x)=L_{1} \quad\mbox{ and }\quad \lim_{x\rightarrow \pm\infty}g(x)=L_{2} \] in the sense of Definition 2 or Definition 3, then \[ \begin{split} \lim_{x\rightarrow \pm\infty} \left(\alpha \,f + \beta \,g \right)(x) &=\lim_{x\rightarrow \pm\infty} \alpha \,f (x) + \lim_{x\rightarrow \pm\infty} \beta \, g(x) = \alpha\,L_{1} + \beta \,L_{2} \\ & \\ \lim_{x\rightarrow \pm\infty} \left((\alpha \,f)\cdot(\beta \,g)\right) (x) & =\left(\lim_{x\rightarrow \pm\infty} \alpha \,f(x)\right)\left(\lim_{x\rightarrow \pm\infty} \beta \,g(x)\right)= \alpha\,L_{1}\,\beta \,L_{2} \\ & \\ \lim_{x\rightarrow \pm\infty}\,\left(\frac{\alpha \,f}{\beta \,g}\right)(x)&=\frac{\lim_{x\rightarrow \pm\infty }\alpha \,f(x)}{\lim_{x\rightarrow \pm\infty }\beta \,g(x)} =\frac{\alpha\,L_{1}}{\beta\, L_{2}}, \end{split} \] where \(\beta,L_{2}\neq 0\) is assumed in the last equality, and where the following “conventions” are enforced: \[ \begin{split} 1) & \quad L + (\pm\infty) =\pm \infty \;\mbox{ if }\;\; L\in\mathbb{R} \mbox{ or } L=\pm\infty\\ & \\ 2) & \quad L\cdot (\pm\infty) = \pm \infty \;\mbox{ if }\;\; L>0\\ & \\ 3) & \quad L\cdot (\pm\infty) = \mp \infty \;\mbox{ if }\;\; L<0\\ & \\ 4) & \quad \frac{L}{\pm\infty} = 0 \;\mbox{ if }\;\; L\neq \pm \infty\\ & \\ 5) & \quad \frac{\pm\infty}{L} = \pm \infty \;\mbox{ if }\;\; L>0\\ & \\ 6) & \quad \frac{\pm\infty}{L} = \mp \infty \;\mbox{ if }\;\; L<0\\ & \\ 7) & \quad \frac{L}{0} =+\infty \;\mbox{ if }\;\; \left\{\begin{matrix}L>0 \mbox{ and } g(x)>0 \mbox{ for } |x| \mbox{ large enough} \\ \mbox{ or } \\ L<0 \mbox{ and } g(x)<0 \mbox{ for } |x| \mbox{ large enough}\end{matrix}\right.\\ & \\ 8) & \quad \frac{L}{0} =-\infty \;\mbox{ if }\;\; \left\{\begin{matrix}L>0 \mbox{ and } g(x)<0 \mbox{ for } |x| \mbox{ large enough} \\ \mbox{ or } \\ L<0 \mbox{ and } g(x)>0 \mbox{ for } |x| \mbox{ large enough}\end{matrix}\right. \end{split} \] All other cases are called indeterminate forms, and require a case-by-case analysis to determine whether the limit on the left exists or not.

The conclusions hold separately for \(x\rightarrow+\infty\) and for \(x\rightarrow-\infty\).

Proof: The proof is similar to the one already done here, and to the proof of Proposition 2. However, it will likely take me some time to upload it.

Now, we can go back and understand the ‘complex examples’ mentioned in the beginning of this section.

Tip: Two simple examples of limits at infinity

Example 3 Consider the function \(f\colon \mathbb{R}\rightarrow\mathbb{R}\) given by \(f(x)=\frac{x^{2}}{x^{2}+1}\). Whenever \(x\neq 0\), we can write \[ f(x)=\frac{x^{2}}{x^{2}+1}=\frac{x^{2}}{x^{2}(1+\frac{1}{x^{2}})}=\frac{1}{1+\frac{1}{x^{2}}}. \] Consequently, we have the function \(g\colon (0,+\infty)\rightarrow\mathbb{R}\) given by \[ g(x)=f(x)=\frac{1}{1+\frac{1}{x^{2}}}, \] and Exercise 3 implies \[ \lim_{x\rightarrow+\infty}g(x)=\lim_{x\rightarrow+\infty}f(x). \tag{5}\] The denominator of \(g\colon (0,+\infty)\rightarrow\mathbb{R}\) is the sum of the constant function \(h\colon (0,+\infty)\rightarrow\mathbb{R}\) given by \(h(x)=1\) with the function \(t\colon (0,+\infty)\rightarrow\mathbb{R}\) given by \(t(x)=\frac{1}{x^{2}}\). Applying the product rule in Proposition 5 to the second example in Example 2 leads to \[ \lim_{x\rightarrow +\infty}t(x)=\lim_{x\rightarrow +\infty} \frac{1}{x^{2}}=0 . \] Then, Equation 5 together with the sum and quotient rules in Proposition 5 allows us to conclude that \[ \lim_{x\rightarrow +\infty}f(x)=\lim_{x\rightarrow +\infty}g(x)=\lim_{x\rightarrow +\infty} \frac{1}{1+\frac{1}{x^{2}}}=1 . \]

The squeeze theorem can also be adapted to limits at infinity.

Note: The squeeze (pinching) theorem for limits at infinity

Theorem 2 Let \(\pm\infty\) be an accumulation point of \(D\subseteq \mathbb{R}\), and consider the functions \(f,g,h\colon D\rightarrow \mathbb{R}\). If \(g(x)\leq f(x)\leq h(x)\) for all \(x\in D\) and \[ \lim_{x\rightarrow \pm\infty}g(x)= \lim_{x\rightarrow \pm\infty}h(x)= L\in\mathbb{R}, \] according to Definition 2 or Definition 3, then it follows that \[ \lim_{x\rightarrow \pm\infty}f(x)=L. \] Analogously, if \(g(x)\leq f(x)\) for all \(x\in D\) and \(\lim_{x\rightarrow \pm\infty}g(x)= +\infty\), then \(\lim_{x\rightarrow \pm\infty}f(x)=+\infty\), while if \(f(x)\leq g(x)\) for all \(x\in D\) and \(\lim_{x\rightarrow \pm\infty}g(x)= -\infty\), then \(\lim_{x\rightarrow \pm\infty}f(x)=-\infty\).

Proof: The proof is similar to that given here, but it will likely take me some time to upload it.
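
For instance, here is one standard application (an example added for concreteness): for every \(x>0\) we have \(-\frac{1}{x}\leq \frac{\sin(x)}{x}\leq \frac{1}{x}\), and both bounds tend to \(0\) as \(x\rightarrow+\infty\) by Example 2 and Proposition 5. Theorem 2 then gives \[ \lim_{x\rightarrow+\infty}\frac{\sin(x)}{x}=0 , \] even though \(\sin(x)\) itself has no limit at \(+\infty\).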

We can also adapt the proposition on the limit of composite functions to the case of limits at infinity.

Note: Limit at infinity of composite functions

Proposition 6 Let \(\pm\infty\) be an accumulation point of \(D'\subseteq \mathbb{R}\), and consider the function \(g\colon D' \rightarrow \mathbb{R}\) such that \[ \lim_{x\rightarrow \pm\infty}g(x) = l \] in the sense of Definition 2 or Definition 3. If \(l\) (which is possibly \(\pm\infty\)) is an accumulation point of \(D\subseteq \mathbb{R}\), and the function \(f\colon D \rightarrow \mathbb{R}\) is such that \[ \lim_{y\rightarrow l} f(y) =L , \] according to this definition, or Definition 1, Definition 2, or Definition 3, the equality \[ \lim_{x\rightarrow \pm\infty} f\circ g(x)=L \] holds if there exists \(M_0>0\) such that for all \(|x|>M_0\), we have \(g(x)\in D\), and:

  1. if \(l\in\mathbb{R}\) and \(L\in\mathbb{R}\), either \(l\in D\) and \(f(l)=L\), or there exists \(M_1>0\) such that for all \(|x|> M_{1}\) it holds \(g(x)\in D -\{l\}\);
  2. if \(l\in\mathbb R\) and \(L=\pm\infty\), there exists \(M_1>0\) such that for all \(|x|> M_{1}\) it holds \(g(x)\in D -\{l\}\);
  3. if \(l=\pm\infty\), there exist \(K,M_{2}>0\) such that for all \(|x|> M_{2}\) it holds \(g(x)\in D\cap\{y\colon |y|> K\}\).

When \(l\in\mathbb{R}\), a similar result holds for left and right limits provided the obvious modifications are implemented.

Proof:

It will take me some time to upload the proof.
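
As an illustration of case 1 above (a sketch added here; the symbolic check assumes the sympy library): take \(g(x)=\frac{1}{x}\), so that \(g(x)\rightarrow l=0\) as \(x\rightarrow+\infty\), and \(f(y)=e^{y}\), which is defined at \(0\) with \(f(0)=1=\lim_{y\rightarrow 0}f(y)\). The proposition then gives \(\lim_{x\rightarrow+\infty}e^{1/x}=1\).

```python
from sympy import Symbol, limit, exp, oo

x = Symbol('x')
print(limit(exp(1/x), x, oo))   # 1
```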

4 Discontinuities

Not all functions are continuous, and we now consider the different ways in which a function can fail to be continuous at a point. Consider the function \(f\colon D\subseteq\mathbb{R}\rightarrow\mathbb{R}\), and let \(x_{0}\) be an accumulation point for \(D\). Three situations are possible:

  1. removable discontinuity: the limit at \(x_{0}\) exists and is finite, but either \(x_{0}\) is not in \(D\), or the limit is different from \(f(x_{0})\);
  2. jump discontinuity: the right and left limits at \(x_{0}\in D\) exist, are finite, but are different;
  3. essential discontinuity: either the left or right limit at \(x_{0}\) does not exist (or, when improper limits are allowed, both limits at \(x_{0}\) exist but at least one of them is either \(+\infty\) or \(-\infty\)).

In the first case (removable discontinuity), we can define a new function \(\tilde{f}\colon D \cup\{x_{0}\}\subseteq\mathbb{R}\rightarrow\mathbb{R}\) by setting \[ \tilde{f}(x)= \left\{\begin{matrix} f(x) & \mbox{ if } x_{0}\neq x\in D \\ \lim_{x\rightarrow x_{0}}f(x) & \mbox{ if } x=x_{0} .\end{matrix}\right. \] This new function is called an extension by continuity of \(f\), and it is continuous at \(x_{0}\) by construction. Note that this procedure works whether \(x_{0}\) is in the domain \(D\) of the original function or not: if \(x_{0}\) is in \(D\), we are redefining \(f\) at that point; if \(x_{0}\) is not in \(D\), we are genuinely extending \(f\).
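
A small sketch of this construction in code (illustrative only; it uses the well-known fact, not proved here, that \(\frac{\sin(x)}{x}\rightarrow 1\) as \(x\rightarrow 0\)):

```python
import math

def f(x):
    # f has a removable discontinuity at 0: it is not defined there,
    # but its limit at 0 exists and equals 1.
    return math.sin(x) / x

def f_tilde(x):
    # The extension by continuity: agree with f away from 0,
    # and take the value of the limit at 0.
    return 1.0 if x == 0 else f(x)

print(f_tilde(0.0), f_tilde(1e-8))   # 1.0 and a value extremely close to 1
```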

In the second case (jump discontinuity), whether \(x_{0}\) lies in \(D\) or not, we cannot redefine \(f\) in such a way that the result is a continuous function at \(x_{0}\). The best we can do is obtain an extension of \(f\) which is either left or right continuous at \(x_{0}\).

In the third case (essential discontinuity), there is nothing we can do to ameliorate the discontinuity of the function \(f\).

Caution: Examples of discontinuous functions

Exercise 4 For each type of discontinuity, build a function presenting said discontinuity at least at one point.

5 Exercises

Caution: The square root

Exercise 5 Consider the function \(f\colon (0,+\infty)\rightarrow \mathbb{R}\) given by \(f(x)=\sqrt{x}\). Prove that \[ \lim_{x\rightarrow+\infty} \sqrt{x}=+\infty. \]

Solution: You are strongly invited to try to solve it on your own.

Caution: Exercises to keep up in good shape

Exercise 6 All the functions defined by the formulas given below are implicitly assumed to be defined on their maximal domains. Calculate the limits, or prove they do not exist, explaining which previous results you are using in each case: \[ \begin{split} 1) \quad & \lim_{x\rightarrow 1^{\pm}}\, \frac{x^{2}-1}{ x^{2} -2x + 1 }= \pm\infty \\ & \\ 2) \quad & \lim_{x\rightarrow 2^{+}}\, \frac{\sqrt{x^{2} -4}}{x-2} = +\infty \end{split} \]

Solution:
  1. When \(x=1\), both numerator and denominator vanish, and we get the indeterminate form “\(\frac{0}{0}\)”. The fact that both polynomials have a common root (\(x=1\)) suggests we factorize them. In particular, it holds \[ x^{2}-1=(x-1)(x+1), \quad \mbox{ and }\qquad x^{2}-2x +1=(x-1)(x-1), \] and thus \[ \frac{x^{2}-1}{x^{2}-2x+1}=\frac{x+1}{x-1}. \] Now, the numerator tends to \(2\) when \(x\rightarrow 1^{\pm}\), while the denominator tends to \(0\). Therefore, using Proposition 2, we obtain that the limit is \(+\infty\) when \(x\rightarrow 1^{+}\) (because the denominator tends to \(0\) through positive values), and \(-\infty\) when \(x\rightarrow 1^{-}\) (because the denominator tends to \(0\) through negative values).
  2. When \(x=2\), both numerator and denominator vanish, and we get the indeterminate form “\(\frac{0}{0}\)”. Since \(x>2\) when \(x\rightarrow 2^{+}\), the denominator is positive, and putting it inside the square root, together with the equality \[ (x^{2} -4)=(x-2)(x+2), \] we obtain \[ \frac{\sqrt{x^{2}-4}}{x-2}=\sqrt{\frac{x+2}{x-2}}. \] Then, the rest of the proof uses the limit of composite functions (with the outer function equal to the square root function) and a reasoning analogous to that of the previous point.
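
As a final cross-check of the two solutions above (illustrative only; it assumes the sympy library is available):

```python
from sympy import Symbol, limit, sqrt

x = Symbol('x')
print(limit((x**2 - 1) / (x**2 - 2*x + 1), x, 1, dir='+'))   # oo
print(limit((x**2 - 1) / (x**2 - 2*x + 1), x, 1, dir='-'))   # -oo
print(limit(sqrt(x**2 - 4) / (x - 2), x, 2, dir='+'))        # oo
```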