Limits and continuity of real-valued functions

Published

October 23, 2025

Modified

November 21, 2025

1 Introduction

2 The definition of limit

Consider the function \(f\colon D\subseteq X\rightarrow Y\), a point \(x_{0}\in X\), and a point \(l\in Y\). The notion of limit we discuss here formalizes the fact that, when everything works, we can make the output of \(f\) stay as close as we want to \(l\) by taking input points that are sufficiently close to \(x_{0}\). In this case, we write \[ \lim_{x\rightarrow x_{0}} f(x)=l, \] and we say that \(l\) is the limit of \(f\colon D\subseteq X\rightarrow Y\) as \(x\) goes to \(x_{0}\).

The previous intuitive description relies on a very strong hidden assumption: we must be able to say what it means for \(f(x)\) to be close to \(l\) and for \(x\) to be close to \(x_{0}\). From a purely abstract point of view, the notion of ‘being close to’ can be made precise for a very general class of sets, known as metric spaces, as long as a suitable distance function between points can be introduced.

In this course, however, we are mainly concerned with the case \(X=Y=\mathbb{R}\), and the notion of ‘being close to’ has then a very strong and visual intuitive meaning. Indeed, visualizing \(\mathbb{R}\) as the real line, and taking three points \(x_{0},x_{1},x_{2}\in \mathbb{R}\), it would be very hard to disagree with the idea that \(x_{1}\) is closer to \(x_{0}\) than \(x_{2}\) if the ‘distance’ \(\Delta_{1}=|x_{1}-x_{0}|\) between \(x_{1}\) and \(x_{0}\) is smaller than the ‘distance’ \(\Delta_{2}=|x_{2}-x_{0}|\) between \(x_{2}\) and \(x_{0}\), as shown in Figure 1.

Figure 1: Graphical depiction of distances between real numbers on the real line.

The ‘distances’ \(\Delta_{1}\) and \(\Delta_{2}\) are precisely the lengths of the segments joining \(x_{0}\) with \(x_{1}\) and \(x_{0}\) with \(x_{2}\), respectively, as measured with a standard ruler.

Note: Euclidean distance function

Definition 1 The two-point function \(d\colon \mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) given by \[ d(x,y)=\sqrt{(x-y)^{2}}=|x-y| \tag{1}\] is referred to as a Euclidean distance function on \(\mathbb{R}\).
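Definition 1 is easy to play with numerically. The following sketch (my own illustration, not part of the course material) implements \(d\) and checks the comparison between \(\Delta_{1}\) and \(\Delta_{2}\) discussed above:

```python
# Euclidean distance on R, following Definition 1: d(x, y) = sqrt((x - y)^2) = |x - y|.
import math

def d(x, y):
    """Euclidean distance between two real numbers."""
    return math.sqrt((x - y) ** 2)  # equals abs(x - y)

# As in Figure 1: x1 is closer to x0 than x2 exactly when d(x0, x1) < d(x0, x2).
x0, x1, x2 = 0.0, 1.0, 2.5
print(d(x0, x1))                   # 1.0
print(d(x0, x1) < d(x0, x2))       # True
print(d(x0, x1) == abs(x1 - x0))   # True
```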

Now that we know what ‘being close to’ means, we can visualize the \(\varepsilon\)-\(\delta\) definition of the limit of a function (see Definition 2 below) when everything works (source: Jason McCullough in Geogebra). The function used in the applet is \(f\colon\mathbb{R}\rightarrow\mathbb{R}\) with \(f(x) = \frac{x^3}{8}\), but it can be changed in the input bar. We fix the reference point \(x_{0}\in\mathbb{R}\) (called \(a\) in the applet) in the input space by simply dragging it where we want, and visualize the validity of the formal equality \[ \lim_{x\rightarrow x_{0}} f(x)=L \] as follows. The interval \((L-\varepsilon, L+\varepsilon)\) in the output space is depicted as a horizontal blue strip of height \(2\varepsilon\), while the interval \((x_{0}-\delta,x_{0}+\delta)\) in the input space is depicted as a vertical red strip of width \(2\delta\). We can use the sliders to change the values of \(\varepsilon>0\) and \(\delta>0\), and we can see that, whatever the value of \(\varepsilon\) is, we can find a suitable value for \(\delta\) such that \(x\in (x_{0}-\delta,x_{0}+\delta)\) (or, equivalently, \(|x-x_{0}|<\delta\)) implies \(f(x)\in (L-\varepsilon, L+\varepsilon)\) (or, equivalently, \(|f(x)- L|<\varepsilon\)).

Interactive demo: when a limit exists.

In general, limits do not need to exist, as visualized in the applet below (source: Jason McCullough in Geogebra). This example exploits the oscillating behavior of trigonometric functions, a type of real-valued function that we will be able to properly define only later in the course. The intuitive idea is that, when \(x\) approaches \(0\), the function keeps oscillating, taking all values between \(-1\) and \(1\). Consequently, by suitably varying \(\varepsilon\) and \(\delta\) with the sliders, we can see that, as soon as \(\varepsilon\) is smaller than \(1\), there are no \(L\) and no \(\delta>0\) such that \(0<|x-0|<\delta\) implies \(|f(x)-L|<\varepsilon\).

Interactive demo: when a limit does not exist.

Now that we have an intuitive and visual understanding of what the notion of limit is, we can proceed to introduce a rigorous definition. Since we can approach the input point \(x_{0}\) from the left, from the right, or from both sides simultaneously, we consider three types of limits.

Note: \(\varepsilon\)-\(\delta\) definition of (left/right) limits

Definition 2 Consider the function \(f\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\) and a left accumulation point \(x_{0}\) for \(D\). We say that the formal equality
\[ \lim_{x\rightarrow x_{0}^{+}} f(x)=l \] holds if for every \(\varepsilon>0\) there is \(\delta>0\) such that \[ 0<x-x_{0}<\delta \Longrightarrow |f(x)-l|<\varepsilon . \] In this case, we say that \(l\) is the right limit of \(f\) at \(x_{0}\).

Similarly, if \(x_{0}\) is a right accumulation point for \(D\), we say that the formal equality
\[ \lim_{x\rightarrow x_{0}^{-}} f(x)=l \] holds if for every \(\varepsilon>0\) there is \(\delta>0\) such that \[ 0<x_{0}-x<\delta \Longrightarrow |f(x)-l|<\varepsilon . \] In this case, we say that \(l\) is the left limit of \(f\) at \(x_{0}\).

Finally, if \(x_{0}\) is an accumulation point for \(D\), we say that the formal equality \[ \lim_{x\rightarrow x_{0}} f(x)=l \] holds if for every \(\varepsilon>0\) there is \(\delta>0\) such that \[ 0<|x-x_{0}|<\delta \Longrightarrow |f(x)-l|<\varepsilon . \] In this case, we say that \(l\) is the limit of \(f\) at \(x_{0}\).

Recall that the (left/right) accumulation point \(x_{0}\) of \(D\) in Definition 2 is not necessarily a point in \(D\), and thus it doesn’t necessarily make sense to compare the limit \(l\) with \(f(x_{0})\). When this comparison is possible, and \(f(x_{0})=l\), then we are dealing with a particularly regular function (see the notion of continuity below).
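Definition 2 can also be explored numerically. The sketch below (an illustration only: it samples finitely many points, so it can suggest but never prove that a limit holds) checks the implication \(0<|x-x_{0}|<\delta \Rightarrow |f(x)-l|<\varepsilon\) on sample points, using the applet function \(f(x)=\frac{x^{3}}{8}\); the helper name `check_limit` is my own:

```python
# Numerical exploration of the epsilon-delta condition in Definition 2.
# For finitely many samples x with 0 < |x - x0| < delta, check |f(x) - l| < eps.
# This is a sanity check on samples, not a proof.

def check_limit(f, x0, l, eps, delta, samples=1000):
    for k in range(1, samples + 1):
        h = delta * k / (samples + 1)      # 0 < h < delta
        for x in (x0 - h, x0 + h):         # approach x0 from both sides
            if abs(f(x) - l) >= eps:
                return False
    return True

f = lambda x: x**3 / 8                     # the function from the applet above
x0, l = 2.0, 1.0                           # lim_{x -> 2} x^3/8 = 1
print(check_limit(f, x0, l, eps=0.1, delta=0.05))  # True: this delta is small enough
print(check_limit(f, x0, l, eps=0.1, delta=0.5))   # False: this delta is too large
```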

Before proceeding further, we must be sure that Definition 2 is well-posed in the sense that a limit, if it exists, is unique.

Note: Well-posedness of limits

Proposition 1 Consider the function \(f\colon D\subseteq\mathbb{R}\rightarrow\mathbb{R}\) and the accumulation point \(x_{0}\) for \(D\). If \[ \begin{split} \lim_{x\rightarrow x_{0}}f(x)&=l \\ \lim_{x\rightarrow x_{0}}f(x)&=m, \end{split} \] then \(l=m\).

Under the obvious modifications, analogous results hold for left and right limits.

Proof:

Assume, for contradiction, that \(l\neq m\). From Definition 2, it follows that for every \(\varepsilon_{1},\varepsilon_{2}>0\) there are \(\delta_{1},\delta_{2}>0\) such that
\[ \begin{split} 0<|x-x_{0}|<\delta_{1}&\Rightarrow |f(x) - l|<\varepsilon_{1} \\ 0<|x-x_{0}|<\delta_{2}&\Rightarrow |f(x) - m|<\varepsilon_{2}. \\ \end{split} \] Taking \(\varepsilon=\mathrm{max}(\varepsilon_{1},\varepsilon_{2})\) and \(\delta=\mathrm{min}(\delta_{1},\delta_{2})\), it holds \[ 0<|x-x_{0}|<\delta\Rightarrow |f(x) - l|<\varepsilon \;\mbox{ and }\; |f(x) - m|<\varepsilon. \] In particular, since \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are arbitrary, we can take \(\varepsilon=\frac{|l-m|}{2}>0\). Since \(x_{0}\) is an accumulation point for \(D\), there is at least one \(x\in D\) with \(0<|x-x_{0}|<\delta\), and for such an \(x\) it holds \[ \begin{split} |l-m|&=|l-m+f(x) -f(x)|\leq |f(x) -m| + |f(x) -l|< \\ & < \frac{|l-m|}{2} + \frac{|l-m|}{2} =|l -m|, \end{split} \] which is a contradiction. Therefore \(l=m\).

The case of left and right limits can be proved analogously.

We now compute limits in three very instructive cases.

Tip: Three simple examples

Example 1 Let us start by considering the function \(f\colon \mathbb{R}\rightarrow\mathbb{R}\) given by \(f(x)=c\) and prove that \[ \lim_{x\rightarrow x_{0}} f(x)= c = f(x_{0}) \tag{2}\] for every \(x_{0}\in\mathbb{R}\). According to Definition 2, we need to prove that, for every \(\varepsilon>0\), there is \(\delta>0\) such that \(|x-x_{0}|<\delta\) implies \(|f(x)-c|<\varepsilon\).

Since \(f(x)=f(y)=c\) for all \(x,y\in\mathbb{R}\), it follows that \(|f(x) - c|=0<\varepsilon\) for every \(x\in\mathbb{R}\) and every \(\varepsilon>0\). In particular, fixing \(\varepsilon,\delta>0\), the condition \(|x -x_{0}|<\delta\) trivially implies \(|f(x)-c|<\varepsilon\), which is precisely what we had to prove. Note that, in this case, any \(\delta>0\) works, which means that \(\delta\) does not depend on \(\varepsilon\).


Now, let us consider the function \(f\colon \mathbb{R}\rightarrow\mathbb{R}\) given by \(f(x)=x + a\), and prove that \[ \lim_{x\rightarrow x_{0}} f(x)= x_{0} + a=f(x_{0}) \tag{3}\] for every \(x_{0}\in\mathbb{R}\). According to Definition 2, we need to prove that, for every \(\varepsilon>0\), there is \(\delta>0\) such that \(|x-x_{0}|<\delta\) implies \(|f(x)-(x_{0}+a)|<\varepsilon\).

By the very definition of \(f\), it holds \(|f(x) - (x_{0}+a)|=|x - x_{0}|\). Taking \(\delta\leq \varepsilon\), we get that \(|x - x_{0}|<\delta\) implies \(|f(x) - f(x_{0})|<\varepsilon\) as desired. Note that, unlike what happens in the case of the constant function above, \(\delta\) now depends on \(\varepsilon\) because it must satisfy \(\delta\leq\varepsilon\).


Finally, let us consider the function \(f\colon \mathbb{R}\rightarrow\mathbb{R}\) given by \(f(x)=x^{2}\), and prove that \[ \lim_{x\rightarrow x_{0}} f(x)= x_{0}^{2}= f(x_{0}) \tag{4}\] for every \(x_{0}\in\mathbb{R}\). According to Definition 2, we need to prove that, for every \(\varepsilon>0\), there is \(\delta>0\) such that \(|x-x_{0}|<\delta\) implies \(|f(x)-x_{0}^{2}|<\varepsilon\).

It is \(|f(x)-x_{0}^{2}|=|x^{2} - x_{0}^{2}|\), and, using the properties of the absolute value, we have \[ |x^{2} - x_{0}^{2}| =|(x+x_{0})(x-x_{0})|= |x + x_{0}|\,|x-x_{0}|. \] Moreover, it holds \[ | x+ x_{0}| =|x - x_{0} + 2x_{0}|\leq |x - x_{0}| + 2|x_{0}|, \] so that \(|x - x_{0}|<\delta\) implies \[ |x^{2} - x_{0}^{2}|\leq (|x - x_{0}| + 2|x_{0}|)\,|x-x_{0}| < ( \delta+2|x_{0}|)\,\delta. \] Therefore, to get \(|f(x)-x_{0}^{2}|=|x^{2} - x_{0}^{2}|<\varepsilon\), we must ensure that \[ \delta^{2} + 2|x_{0}| \delta - \varepsilon<0 . \] Solving the previous quadratic inequality in the variable \(\delta>0\) we obtain the condition \[ 0 < \delta < \sqrt{x^{2}_{0} + \varepsilon} -|x_{0}| = \sqrt{x^{2}_{0} + \varepsilon} - \sqrt{x^{2}_{0}}. \] Note that the way in which \(\delta\) depends on \(\varepsilon\) is more complex than in the case of the linear function above.
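The formula for \(\delta\) obtained above can be sanity-checked numerically. The sketch below (helper names and sample points are my own choices; it checks finitely many points, so it illustrates rather than proves) verifies that \(0<|x-x_{0}|<\delta\) implies \(|x^{2}-x_{0}^{2}|<\varepsilon\):

```python
# For f(x) = x^2, Example 1 gives delta = sqrt(x0^2 + eps) - |x0|.
# Check on sample points that 0 < |x - x0| < delta implies |x^2 - x0^2| < eps.
import math

def delta_for(x0, eps):
    return math.sqrt(x0**2 + eps) - abs(x0)

def works(x0, eps, samples=1000):
    d = delta_for(x0, eps)
    for k in range(1, samples + 1):
        h = d * k / (samples + 1)            # 0 < h < delta
        for x in (x0 - h, x0 + h):           # both sides of x0
            if abs(x**2 - x0**2) >= eps:
                return False
    return True

print(works(x0=3.0, eps=0.01))   # True
print(works(x0=-5.0, eps=0.5))   # True
```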

In Definition 2, three types of limits are considered because we can approach the input point \(x_{0}\) from the left, from the right, or from both sides simultaneously. In the last case, it is reasonable to ask if the three types of limits agree. In the exercise below, you are asked to prove that this is indeed the case.

Caution: Existence of the limit at \(x_{0}\) is equivalent to existence and equality of left and right limits at \(x_{0}\)

Exercise 1 Consider the function \(f\colon D\subseteq\mathbb{R}\rightarrow\mathbb{R}\), and let \(x_{0}\) be both a left and a right accumulation point for \(D\). Prove that \[ \lim_{x\rightarrow x_{0}} f(x)=l \] holds in the sense of Definition 2 if and only if \[ \lim_{x\rightarrow x_{0}^{+}} f(x)=l=\lim_{x\rightarrow x_{0}^{-}} f(x) . \]

Solution: You are strongly invited to try to solve it on your own.

As we stressed before, a rigorous definition of a function requires its domain to be specified. Consequently, the function \(f\colon (0,3)\subset\mathbb{R}\rightarrow\mathbb{R}\) given by \(f(x)=x+3\) and the function \(g\colon (-1,2)\subset\mathbb{R}\rightarrow\mathbb{R}\) given by \(g(x)=x+3\) are different functions. However, it is clear that \(f(x)=g(x)\) whenever \(x\in(0,2)\), and it would be quite inconvenient if the notion of limit in Definition 2 allowed something like \[ \lim_{x\rightarrow 1} f(x)=L\neq L'= \lim_{x\rightarrow 1} g(x). \] The following exercise asks you to prove that this weird behavior is not compatible with Definition 2.

Caution: Limits of the restriction

Exercise 2 Consider \(D,D'\subseteq \mathbb{R}\) such that \(D\cap D'\neq \emptyset\), and consider two functions \(f\colon D\rightarrow \mathbb{R}\) and \(g\colon D' \rightarrow \mathbb{R}\) such that \(g(x)=f(x)\) on \(S\subseteq D\cap D'\). If \(x_{0}\) is an accumulation point for \(S\), prove that \[ \lim_{x\rightarrow x_{0}} f(x)=\lim_{x\rightarrow x_{0}} g(x) \] whenever either one of the limits exists in the sense of Definition 2.

Prove analogous results for the case of left and right limits.

Solution: You are strongly invited to try to solve it on your own.

The next theorem makes precise the intuition according to which, if a function \(f\) lies between two other functions, say \(g\) and \(h\), the limits of \(f\) are determined by the limits of \(g\) and \(h\).

Note: The squeeze (pinching) theorem

Theorem 1 Consider the functions \(f,g,h\colon D\subseteq\mathbb{R}\rightarrow \mathbb{R}\), and let \(x_{0}\) be an accumulation point for \(D\). If \(h(x)\leq f(x)\leq g(x)\) for all \(x\in D\) and \[ \lim_{x\rightarrow x_{0}}h(x)=\lim_{x\rightarrow x_{0}}g(x)=L\in\mathbb{R}, \] then it follows that \[ \lim_{x\rightarrow x_{0}}f(x)=L. \]

Under the obvious modifications, the same conclusions hold for left and right limits.

Proof: The proof will likely take me some time to upload.

3 Continuity

As already noted right after Definition 2, the point \(x_0\) appearing when considering the limit \[ \lim_{x\rightarrow x_{0}}f(x)=L \] is an accumulation point of the domain \(D\subseteq\mathbb{R}\) of the function \(f\colon D\rightarrow\mathbb{R}\), but it is not necessarily true that \(x_0 \in D\) for the notion of limit to make sense. However, if \(x_{0}\) does belong to \(D\) and the limit exists, it makes sense to see if the limit \(L\) differs from the value of the function at \(x_0\).

Note: Continuity

Definition 3 Let \(x_{0}\in D\subseteq \mathbb{R}\) be an accumulation point for \(D\). The function \(f\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\) is said to be continuous at \(x_{0}\in D\) if \[ \lim_{x\rightarrow x_{0}}f(x)=f(x_{0}). \] Left and right continuity are defined in the obvious way.

The function \(f\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\) is said to be continuous if it is continuous at all accumulation points \(x_{0}\in D\), and left/right continuous at all right/left accumulation points \(x_{0}\in D\).

Caution: Continuity is equivalent to simultaneous left and right continuity

Exercise 3 The function \(f\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\) is continuous at \(x_{0}\in D\) if and only if it is both left and right continuous at \(x_0\).

Solution: It follows directly from this previous exercise on limits.

Just as for limits, we must ensure that the notion of continuity in Definition 3 behaves well when we consider different functions that agree on the intersections of their domains, in order to avoid unpleasant pathological behaviours.

Caution: Continuity of the restriction

Exercise 4 Consider two functions \(f\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\) and \(g\colon D'\subseteq \mathbb{R}\rightarrow \mathbb{R}\) such that \(g(x)=f(x)\) on \(D\cap D'\neq \emptyset\). Prove that \(h\colon D\cap D'\subseteq \mathbb{R}\rightarrow\mathbb{R}\) given by \(h(x)=g(x)=f(x)\) is (left/right) continuous at \(x_{0}\in D\cap D'\) if \(g\) is (left/right) continuous at \(x_{0}\in D\cap D'\).

In particular, prove that \(h\) is (left/right) continuous on \(D\cap D'\) if \(g\) is continuous on \(D'\).

Solution: It follows directly from Exercise 2.

It is informally said that a function \(f\colon D\subseteq\mathbb{R}\rightarrow\mathbb{R}\) is a continuous function if its graph can be drawn on paper without lifting the pen/pencil. We now try to make this informal statement precise by working our way to the intermediate value theorem below. First, we prove the so-called sign-preserving property of continuous functions.

Note: Sign-preserving property of continuous functions

Proposition 2 Let \(f\colon D\subseteq\mathbb{R}\rightarrow\mathbb{R}\) be continuous at \(c\in D\), and suppose that \(f(c) \neq 0\). Then there is an interval \((c - \delta, c + \delta)\) in which \(f\) has the same sign as \(f(c)\).

If continuity is replaced by left or right continuity at \(c\), then there is a corresponding one-sided interval \([c, c + \delta)\) or \((c - \delta, c]\) in which \(f\) has the same sign as \(f(c)\).

Proof: I’ll do it soon (hopefully).

From Proposition 2, we can derive the so-called extreme value theorem for continuous functions, also known as Weierstrass’s theorem. This theorem shows that continuous functions on closed and bounded intervals must always attain a (possibly non-unique) maximum and a (possibly non-unique) minimum.

Theorem 2 Let \(a,b\in\mathbb{R}\). If \(f\colon [a,b]\rightarrow\mathbb{R}\) is continuous it attains a maximum and minimum value.

Proof: I’ll do it soon (hopefully).

Finally, from Theorem 2, we can derive the so-called intermediate value theorem for continuous functions.

Theorem 3 Let \(a,b\in\mathbb{R}\), let \(f\colon [a,b]\rightarrow\mathbb{R}\) be continuous, and let \(K\) be any number strictly between \(f(a)\) and \(f(b)\). Then there is at least one point \(c\in (a,b)\) such that \(f(c)=K\).

Proof: I’ll do it soon (hopefully).

4 Algebraic manipulations of limits and continuous functions

From the quadratic case in Example 1, it should be clear that computing limits following Definition 2 is not going to be an easy task for arbitrarily complex functions. Luckily for us, some mathematicians were able to prove general propositions that allow us to exploit simple cases to solve more complex ones.

To understand how one may envision such general propositions, let us keep in mind the results obtained in Example 1. Setting \(f(x)=x +c\), \(g(x)=x\), and \(h(x)=c\), we see that Equation 2 and Equation 3 (once with \(a=c\) and once with \(a=0\)) imply \[ \lim_{x\rightarrow x_{0}} f(x)=\left(\lim_{x\rightarrow x_{0}} g(x) \right) + \left(\lim_{x\rightarrow x_{0}} h(x)\right), \] which means that the limit of the sum is the sum of the limits. Moreover, setting \(f(x)=x^{2}\) and \(g(x)=x=h(x)\), we also see that Equation 3 and Equation 4 imply \[ \lim_{x\rightarrow x_{0}} f(x)=\left(\lim_{x\rightarrow x_{0}} g(x)\right)\;\left(\lim_{x\rightarrow x_{0}} h(x)\right), \] which means that the limit of the product is the product of the limits.

A question then immediately arises: how general are these facts? It turns out that they are quite general.

Note: Algebraic manipulations of limits

Proposition 3 Let \(\alpha,\beta\in\mathbb{R}\), consider the functions \(f,g\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\), and let \(x_{0}\) be an accumulation point for \(D\). If \[ \lim_{x\rightarrow x_{0}}f(x)=L_{1} \quad\mbox{ and }\quad \lim_{x\rightarrow x_{0}}g(x)=L_{2} \] in the sense of Definition 2, then \[ \begin{split} \lim_{x\rightarrow x_{0}} \left(\alpha \,f + \beta \,g \right)(x) &=\lim_{x\rightarrow x_{0}} \alpha \,f (x) + \lim_{x\rightarrow x_{0}} \beta \, g(x) = \alpha\,L_{1} + \beta \,L_{2} \\ & \\ \lim_{x\rightarrow x_{0}} \left((\alpha \,f)\cdot(\beta \,g)\right) (x) & =\left(\lim_{x\rightarrow x_{0}} \alpha \,f(x)\right)\left(\lim_{x\rightarrow x_{0}} \beta \,g(x)\right)= \alpha\,L_{1}\,\beta \,L_{2} \\ & \\ \lim_{x\rightarrow x_{0}}\,\left(\frac{\alpha \,f}{\beta \,g}\right)(x)&=\frac{\lim_{x\rightarrow x_{0} }\alpha \,f(x)}{\lim_{x\rightarrow x_{0} }\beta \,g(x)} =\frac{\alpha\,L_{1}}{\beta\, L_{2}}, \end{split} \tag{5}\] where \(\beta,L_{2}\neq 0\) is assumed in the last equality.

Under the obvious modifications, the same conclusions hold for left and right limits.

Proof:

Let us start with the sum of two functions. From Definition 2, it follows that for every \(\varepsilon_{1},\varepsilon_{2}>0\) there are \(\delta_{1},\delta_{2}>0\) such that \[ \begin{split} 0<|x-x_{0}|<\delta_{1}&\Rightarrow |f(x) -L_{1}|<\varepsilon_{1} \\ 0<|x-x_{0}|<\delta_{2}&\Rightarrow |g(x) -L_{2}|<\varepsilon_{2}. \end{split} \tag{6}\] For every \(\varepsilon>0\), we can take \(\varepsilon_{1}\) and \(\varepsilon_{2}\) such that \(\varepsilon=\varepsilon_{1}+\varepsilon_{2}\) (for instance \(\varepsilon_{1}=\varepsilon_{2}=\frac{\varepsilon}{2}\)). Consequently, if \(0<|x-x_{0}|<\mathrm{min}(\delta_{1},\delta_{2})\), it holds \[ |f(x) + g(x) - L_{1} - L_{2}|\leq |f(x) -L_{1}| + |g(x) -L_{2}| <\varepsilon_{1} +\varepsilon_{2}=\varepsilon , \] which means that \(L_{1} + L_{2}\) is the limit at \(x_{0}\) of \((f+g)(x)=f(x) + g(x)\) according to Definition 2.

Passing to the product of two functions, for \(0<|x-x_{0}|<\mathrm{min}(\delta_{1},\delta_{2})\), Equation 6 implies that \[ \begin{split} |f(x)g(x) - L_{1}L_{2}|&=| (f(x) -L_{1} + L_{1})(g(x) -L_{2} +L_{2}) -L_{1}L_{2}| = \\ &=| (f(x) -L_{1})(g(x)-L_{2}) + L_{1}(g(x)-L_{2}) + (f(x)-L_{1})L_{2}| \leq \\ &\leq | (f(x) -L_{1})(g(x)-L_{2})| + | L_{1}(g(x)-L_{2})| + |(f(x)-L_{1})L_{2}| < \\ &<\varepsilon_{1}\varepsilon_{2} + |L_{1}|\,\varepsilon_{2} + |L_{2}|\,\varepsilon_{1} . \end{split} \]

Given \(\varepsilon>0\), we can choose \(\varepsilon_{1}\) and \(\varepsilon_{2}\) such that \(\varepsilon_{1}\varepsilon_{2} + |L_{1}|\,\varepsilon_{2} + |L_{2}|\,\varepsilon_{1}<\varepsilon\), since \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are arbitrary (for instance, \(\varepsilon_{1}=\varepsilon_{2}=\mathrm{min}\left(1,\frac{\varepsilon}{2(1+|L_{1}|+|L_{2}|)}\right)\) works), and thus we conclude that \(L_{1}L_{2}\) is the limit at \(x_{0}\) of \((fg)(x)=f(x)g(x)\). The quotient case can be handled with similar estimates.

The case of left and right limits should be easy to obtain by following and suitably adapting the previous steps.
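Proposition 3 can also be illustrated numerically by evaluating both sides along points approaching \(x_{0}\). In the sketch below (my own choice of functions and tolerances, an illustration rather than a proof), \(f(x)=x\), \(g(x)=x^{2}\), and \(x_{0}=2\):

```python
# Illustration of the limit laws in Proposition 3 with f(x) = x, g(x) = x^2, x0 = 2:
# near x0, the values of f + g, f * g, f / g stay close to L1 + L2, L1 * L2, L1 / L2.

f = lambda x: x
g = lambda x: x**2
x0, L1, L2 = 2.0, 2.0, 4.0

for h in (1e-3, 1e-6):               # sample points approaching x0 from both sides
    for x in (x0 - h, x0 + h):
        assert abs((f(x) + g(x)) - (L1 + L2)) < 10 * h   # sum law
        assert abs((f(x) * g(x)) - (L1 * L2)) < 20 * h   # product law
        assert abs((f(x) / g(x)) - (L1 / L2)) < 10 * h   # quotient law (L2 != 0)
print("limit laws consistent on samples")
```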

Recalling the summation and product notation \[ \begin{split} \sum\limits_{j=1}^{n} A_{j}&\equiv A_{1} + A_{2} + \cdots + A_{n} \\ \prod_{j=1}^{n}A_{j}&\equiv A_{1}\,A_{2}\,\cdots\,A_{n} , \end{split} \] where \((A_{1},\cdots,A_{n})\) can be a finite sequence of numbers or functions, we can use the previous results together with the associativity property for function sums and products to solve the following exercise.

Caution: Algebraic manipulations of more than 2 functions

Exercise 5 Let \((\alpha_{1},\cdots,\alpha_{n})\in\mathbb{R}^{n}\), let \(x_{0}\in\mathbb{R}\) be an accumulation point for \(D\), and, for each \(j=1,\cdots,n\), consider the function \(f_{j}\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\). Prove that, if \[ \lim_{x\rightarrow x_{0}}f_{j}(x)=L_{j}\in \mathbb{R} \] for every \(j=1,...,n\), then \[ \begin{split} \lim_{x\rightarrow x_{0}} \left(\sum_{j=1}^{n}\,\alpha_{j}\,f_{j}\right)(x) &=\sum_{j=1}^{n}\,\lim_{x\rightarrow x_{0}} \left(\alpha_{j}\,f_{j}(x)\right) = \sum_{j=1}^{n}\,\alpha_{j}\,L_{j} \\ & \\ \lim_{x\rightarrow x_{0}}\left(\prod_{j=1}^{n} \,\alpha_{j}\,f_{j}\right)(x) & =\prod_{j=1}^{n}\,\lim_{x\rightarrow x_{0}} \left(\alpha_{j}\,f_{j}(x)\right) = \prod_{j=1}^{n}\,\alpha_{j}\,L_{j} . \end{split} \tag{7}\] Moreover, if \(n=2\) and \(\alpha_{2},L_{2}\neq 0\), it holds \[ \lim_{x\rightarrow x_{0}}\,\left(\frac{\alpha_{1}f_{1}}{\alpha_{2}f_{2}}\right)(x)=\lim_{x\rightarrow x_{0}}\,\frac{\alpha_{1}f_{1}(x)}{\alpha_{2}f_{2}(x)}=\frac{\alpha_{1}L_{1}}{\alpha_{2} L_{2}}. \]

Under the obvious modifications, the same conclusions hold for left and right limits.

Solution: Use Proposition 3 and the associativity property for function sums and products.

From Proposition 3 and Exercise 5 it naturally follows that continuous functions behave well with respect to sums and products.

Note: Algebraic manipulations of continuous functions

Proposition 4 Let \((\alpha_{1},\cdots,\alpha_{n})\in\mathbb{R}^{n}\), and let \(x_{0}\in D\) be an accumulation point for \(D\subseteq \mathbb{R}\). For each \(j=1,\cdots,n\), consider the function \(f_{j}\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\), assumed to be continuous at \(x_{0}\).

It follows that \[ F(x)=\left(\sum\limits_{j=1}^{n}\alpha_{j}f_{j}\right)(x),\qquad G(x)=\left(\prod_{j=1}^{n}\alpha_{j}f_{j}\right)(x) \] are continuous at \(x_{0}\).

Moreover, when \(n=2\) and \(\alpha_{2}f_{2}(x_{0})\neq 0\), it follows that \[ H(x)=\left(\frac{\alpha_{1}f_{1} }{\alpha_{2}f_{2}}\right)(x) \] is continuous at \(x_{0}\).

Analogous results hold for left/right continuity with the obvious modifications.

Proof: Follow Definition 3 and apply the results of Proposition 3 and Exercise 5.

Once Proposition 4 is in our toolbox, we can immediately prove that polynomials and rational functions are continuous.

Caution: Polynomials and rational functions are continuous

Exercise 6 Consider the function \(\mathrm{P}_{n}\colon\mathbb{R}\rightarrow\mathbb{R}\) where \(\mathrm{P}_{n}(x)\) is a polynomial of degree \(n\). Prove that \(\mathrm{P}_{n}\) is continuous.

Consider another polynomial function \(\mathrm{Q}_{m}\colon\mathbb{R}\rightarrow\mathbb{R}\). Prove that the rational function \(\mathrm{R}\colon D\subseteq \mathbb{R}\rightarrow \mathbb{R}\), which is defined on \(D=\{x\in\mathbb{R}\,|\,\mathrm{Q}_{m}(x)\neq 0\}\) and is given by \(\mathrm{R}(x)=\frac{\mathrm{P}_{n}(x)}{\mathrm{Q}_{m}(x)}\), is continuous.

Solution:

The polynomial \(\mathrm{P}_{n}(x)\) can be written as \[ \mathrm{P}_{n}(x)= a_{n}x^{n} + a_{n-1} x^{n-1} + \cdots + a_{1}x + a_{0}, \] where \(a_{j}\in\mathbb{R}\) for every \(j=0,\cdots , n\). It is then clear that \(\mathrm{P}_{n}\) is a suitable combination of sums and products of constant functions (which are continuous) with the identity function \(f(x)=x\) (which is continuous). Consequently, Proposition 4 ensures the desired result.

The continuity of \(\mathrm{R}\) follows again from Proposition 4 since both \(\mathrm{P}_{n}\) and \(\mathrm{Q}_{m}\) are continuous on \(D\) and \(\mathrm{Q}_{m}(x)\neq 0\) on \(D\).
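The decomposition used in the solution can be mirrored in code: a polynomial can be evaluated using only sums and products of constants and the identity function, exactly the continuous building blocks of Proposition 4. The sketch below (my own illustration; the evaluation scheme is Horner's, a choice of mine, not part of the exercise):

```python
# Evaluate P_n(x) = a_n x^n + ... + a_1 x + a_0 using only sums and products
# (Horner's scheme), mirroring the fact that P_n is built from the continuous
# building blocks f(x) = c and f(x) = x via Proposition 4.

def poly(coeffs, x):
    """coeffs = [a_n, ..., a_1, a_0]; returns P_n(x)."""
    value = 0.0
    for a in coeffs:
        value = value * x + a   # one product and one sum per coefficient
    return value

# P(x) = 2x^2 - 3x + 1 and the rational function R(x) = P(x) / Q(x) with Q(x) = x + 5
P = lambda x: poly([2.0, -3.0, 1.0], x)
Q = lambda x: poly([1.0, 5.0], x)
print(P(2.0))            # 3.0
print(P(2.0) / Q(2.0))   # R(2), well defined since Q(2) = 7 != 0
```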

5 Continuity of inverse and composite functions

Let \(D\subseteq \mathbb{R}\) be the maximal domain for which the function \(f\colon D\rightarrow\mathbb{R}\) given by \(f(x)=x^{n}\) admits an inverse. Explicitly, \(D=\mathbb{R}\) if \(n\) is odd, and \(D=\mathbb{R}_{+}\) if \(n\) is even. Exercise 6 tells us that \(f\colon D\rightarrow\mathbb{R}\) is continuous, and we would also like to say something about the continuity (or lack thereof) of its inverse function \(g\colon D\rightarrow\mathbb{R}\) given by \(g(x)=x^{\frac{1}{n}}\).

Note: Continuity of inverse function

Proposition 5 Let \(I\) be an interval, and let \(f\colon I\rightarrow \mathbb{R}\) be continuous and invertible on \(I\) (in particular, \(f\) is monotonic). Then \(J:=f(I)\) is an interval, and the inverse function \(f^{-1}\colon J\rightarrow\mathbb{R}\) is continuous.

Proof: The proof is tedious, and it will take me some time to upload it.

From Proposition 5, it immediately follows that \(g\colon D\rightarrow\mathbb{R}\), where \(D\) is as above and \(g(x)=x^{\frac{1}{n}}\), is a continuous function for every natural number \(n>0\).

Caution: Rational power functions are continuous

Exercise 7 Given \(D\) as above and \(m,n>0\) natural numbers, prove that \(f\colon D\rightarrow\mathbb{R}\) given by \(f(x)=x^{\frac{m}{n}}\) is a continuous function.

Solution: You are strongly invited to try to solve it on your own.

We are slowly building quite a useful toolbox for computing limits and understanding continuity of real-valued functions. However, we are still unable to prove that the function \(f\colon\mathbb{R}\rightarrow \mathbb{R}\) given by \(f(x)=\sqrt{x^{2} +3}\) is continuous without verifying that Definition 3 is fulfilled, and this would require us to compute a lot of limits using Definition 2.

Motivated by a powerful urge to avoid computing limits using Definition 2 unless physically forced to, we look for alternatives. We thus note that the function \(f\) introduced above is nothing but the composite function \(f=g\circ h\), with \(g\colon\mathbb{R}_{+}\rightarrow \mathbb{R}\) given by \[ g(y)=\sqrt{y}, \] and \(h\colon\mathbb{R}\rightarrow\mathbb{R}\) given by \[ h(x)=x^{2}+3. \] Note that \(h(x)\geq 3\) for every \(x\in\mathbb{R}\), so the composition is well defined. Moreover, Exercise 6 and Exercise 7 guarantee that both \(g\) and \(h\) are continuous everywhere in their domains.
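The decomposition \(f=g\circ h\) can be written out directly; a minimal sketch (function names follow the text above):

```python
# f(x) = sqrt(x^2 + 3) as the composition f = g ∘ h,
# with g(y) = sqrt(y) and h(x) = x^2 + 3 (h(x) >= 3, so g is always defined here).
import math

def h(x):
    return x**2 + 3

def g(y):
    return math.sqrt(y)

def f(x):
    return g(h(x))          # the composite function

print(f(1.0))                            # sqrt(4) = 2.0
print(f(1.0) == math.sqrt(1.0**2 + 3))   # True
```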

Wouldn’t it be nice if the continuity of \(g\) and \(h\) were enough to guarantee the continuity of \(f\)? The following two propositions make us happy because they answer the previous question with a solid ‘yes’.

Proposition 6 Consider the functions \(f\colon D\subseteq \mathbb{R} \rightarrow \mathbb{R}\) and \(g\colon D'\subseteq \mathbb{R} \rightarrow \mathbb{R}\), and let \(x_{0}\) be an accumulation point for \(D'\). Suppose that \[ \lim_{x\rightarrow x_{0}}g(x) = l \tag{8}\] where \(l\) is an accumulation point for \(D\), and \[ \lim_{y\rightarrow l} f(y) =L . \tag{9}\] The equality \[ \lim_{x\rightarrow x_{0}} f\circ g(x)=L \] holds if either one of the following additional assumptions is met:

  1. \(f\) is continuous at \(y=l\);
  2. there is an open interval \(I\) centered at \(x_{0}\) such that \(g(x)\neq l\) for all \(x\in I\) with \(x\neq x_{0}\).

Analogous results hold for right/left limits.

Proof:

From Equation 9, given \(\varepsilon>0\), it follows that there is \(\eta>0\) for which \[ 0<|y-l|<\eta \implies |f(y)-L|<\varepsilon . \tag{10}\] On the other hand, Equation 8 implies that, given the same \(\eta>0\) as before, there is \(\delta>0\) such that \[ 0<|x-x_{0}|<\delta \implies |g(x) - l|<\eta. \tag{11}\] Setting \(y=g(x)\), Equation 11 gives \(|g(x)-l|<\eta\), while Equation 10 requires the stronger condition \(0<|y-l|=|g(x)-l|<\eta\) in order to conclude \(|f(y)-L|=|f(g(x))-L|<\varepsilon\). The two conditions are not equivalent because \(y=g(x)\) may be equal to \(l\). See Exercise 8 for more details. We thus need to assume something more to obtain the desired result.

  1. In this case, \(L=f(l)\) because \(f\) is continuous at \(l\), and thus \(|f(y) -L|=|f(y) -f(l)|<\varepsilon\) even when \(y=g(x)=l\), and the result follows.
  2. In this case, we can take \(\delta\) sufficiently small so that \(0<|x-x_{0}|<\delta\) implies \(x\in I\) and thus \(g(x)\neq l\) by assumption. Consequently, it holds \[ 0<|x-x_{0}|<\delta \implies 0<|g(x) - l|<\eta \implies |f(g(x)) -L|<\varepsilon, \] as desired.
Caution: Composition can be tricky

Exercise 8 Let \(f,g\colon\mathbb{R}\rightarrow\mathbb{R}\) be given by \[ f(x)=g(x)=\left\{\begin{matrix} 0 & \mbox{ if } x\neq 0 \\ 1 & \mbox{ if } x= 0 .\end{matrix}\right. \] Prove that the composite function \(h\colon\mathbb{R}\rightarrow\mathbb{R}\) given by \(h=f\circ g\) violates Proposition 6 and explain why.

Solution: Since \(f(x)=g(x)=0\) for \(x\neq 0\), it holds \[ \lim_{y\rightarrow 0} f(y)=0 =\lim_{x\rightarrow 0} g(x) . \] On the other hand, the composite function \(h=f\circ g\) is \[ h(x)=\left\{\begin{matrix} 1 & \mbox{ if } x\neq 0 \\ 0 & \mbox{ if } x= 0,\end{matrix}\right. \] so that \[ \lim_{x\rightarrow 0} h(x)=\lim_{x\rightarrow 0} (f\circ g)(x)= 1 \neq 0 = \lim_{y\rightarrow 0} f(y) , \] and the conclusion of Proposition 6 fails. The reason behind this failure is that neither of the additional hypotheses of Proposition 6 is met: the function \(f\) is not continuous at \(y=0\), and the function \(g\) takes the value \(l=0\) at every point \(x\neq 0\) of every open interval containing \(0\).
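The pathology of Exercise 8 is easy to reproduce numerically; a sketch (my own illustration):

```python
# The functions of Exercise 8: f(x) = g(x) = 0 for x != 0 and 1 at x = 0.
# Along any points x -> 0 with x != 0, g(x) = 0, so h(x) = f(g(x)) = f(0) = 1,
# while lim_{y -> 0} f(y) = 0: the conclusion of Proposition 6 fails here.

def f(x):
    return 0.0 if x != 0 else 1.0

g = f                    # g is the same function as f
h = lambda x: f(g(x))    # the composition h = f ∘ g

for x in (0.1, 0.001, -1e-9):   # points approaching 0 with x != 0
    print(f(x), h(x))           # f(x) = 0.0 but h(x) = 1.0
print(h(0))                     # 0.0, since g(0) = 1 and f(1) = 0
```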

Proposition 6 immediately implies that continuous functions behave well when composed among themselves.

Note: Composition of continuous functions

Proposition 7 Consider the functions \(f\colon D\subseteq \mathbb{R} \rightarrow \mathbb{R}\) and \(g\colon D'\subseteq \mathbb{R} \rightarrow \mathbb{R}\). If \(g\) is continuous at \(x_{0}\) and \(f\) is continuous at \(g(x_{0})\) then the composite function \(f\circ g\) is continuous at \(x_{0}\).

Proof: Follow Definition 3 and apply Proposition 6.

6 Exercises

Now, we are ready to solve quite a number of interesting problems.

Caution: Some exercises to keep us in shape

Exercise 9 All the functions defined by the formulas given below are implicitly assumed to be defined on their maximal domains. Calculate the limits, or prove they do not exist, explaining which previous results you are using in each case:

\[ 1)\;\lim_{x\rightarrow 0^{\pm}} \frac{|x|}{x} \qquad\qquad 2)\;\lim_{x\rightarrow 0^{\pm}} \frac{\sqrt{x^{2}}}{x} \]

Solution: You are strongly invited to try to solve them on your own.