Section 3.0: Continuity: The Places You Must Visit
© 1996, 2001 by Karl Hahn
Likewise, if your 20,000 gallon swimming pool starts out empty, and you begin filling it, there will be a moment when it contains exactly 5,673 gallons. There will be another moment when it contains exactly 12,891 gallons. No matter what number of gallons you name between zero and 20,000, there will be a moment when your pool contains exactly that many gallons. It doesn't matter how hard you turn on the hose, or how many times you adjust the valve. You just don't get to skip any numbers, no matter how unlucky your numerologist says they are.
Just one more example -- the rubber-burning monster of a sports car you just bought can go from 0 to 60 mph in 6.9 seconds. But it can't get you from 0 to 60 without at some time taking you through all the speeds in between.
All of the above are examples of nature's tendency to be continuous. Nature doesn't like instantaneous changes or perfectly sharp edges. Nature wants things to happen smoothly. Even something as abrupt as the shock wave of an explosion does not happen instantaneously, but builds up over a period of time, though that time be measured in microseconds.
We can talk about real-valued functions of a real variable in the same way. We have a sense about functions like f(x) = x², that they progress smoothly as x changes, with no gaps or "instantaneous" changes. Another way we can put it is that we can graph the function without ever picking the pencil up off the paper.
These notions are useful for getting an idea of what continuity is all about, but they lack the rigor we need to do mathematics with them. There simply is no mathematical definition of smooth or sharp or of a pencil. Fortunately, the concept of limits, as we have developed it so far, gives us the perfect tool to nail down the notion of continuity.
First notice that continuity is a local property. If Interstate 95 has a bridge that is out, then we would like to say that it is continuous everywhere except at the missing bridge. The same is true when we talk about functions.
Suppose we define the function, u(x) = 0 whenever x < 0 and u(x) = 1 whenever x > 0 (u(x) is called the unit step function, and is graphed in figure 3-1). You can define it to be whatever you like at x = 0, or not define it there at all. However you choose it, we still have the notion that u(x) is smooth and continuous everywhere except at x = 0. But at x = 0, we see that u(x) takes an instantaneous jump -- that is that it goes from 0 to 1 without passing through all the points in between. So we have a notion that u(x) is discontinuous at x = 0.
So what is it that's happening to u(x) everywhere except at x = 0 that we can pin down using limits? It is simply this. Pick any point, a, on the x axis. Except at a = 0, it is always the case that:
   lim   u(x) = u(a)                                eq. 3.0-1
  x → a

At every x = a where 3.0-1 holds, we say that u(x) is continuous. Wherever it doesn't hold, we say it is not continuous. Whenever a function is continuous in most places but not continuous at some isolated point or points, we say that such points are discontinuities.
The function, f(x) = x², for example, is continuous everywhere. If you put f(x) into 3.0-1 in place of u(x), it holds no matter what you choose for a. And we shall explore the continuity of this very function in much more depth presently.
But first, let me answer a few questions that curiosity may be stuffing into your head at this very moment. In the example of u(x) we saw a function that was continuous everywhere except at one point. Is there a function that is continuous everywhere except at several points? Yes. Is there one that is continuous everywhere except at infinitely many isolated points? Yes. Is there one that is continuous nowhere at all? Yes. Is there one that is continuous only at a single point? Yes. Is there a function that is continuous everywhere yet cannot be graphed? Yes. And later on I shall give you examples of all of these. Click here if you want to skip ahead and see this optional material right now. Or you can continue reading and you will get to it eventually.
But now let's get back to our analysis of f(x) = x². Let's use our "contract" scheme to describe what we mean by this function being continuous everywhere. First, here is 3.0-1 rewritten using f(x) rather than u(x):
   lim   x² = a²                                    eq. 3.0-2
  x → a

and we say that this holds for any a you might choose. We can state the contract version of this just by applying the contract version of our definition of what a limit is. So the contract version is: whatever real number, a, you might name and whatever ε > 0, no matter how small, you might name, I can give you a δ > 0 such that:
   |x² - a²| ≤ ε                                    eq. 3.0-3

whenever |x - a| ≤ δ. What does this mean? It means that you tell me how close x² has to be to a², and I can tell you how close x has to be to a to make it so. When you translate the math symbols into plain English, it means just that.
But can we prove that 3.0-2 holds for all a using the contract? Let error = x - a. So what we're saying is that error is how far x is from a, and we will express our contract using that instead of using x. So |error| ≤ δ and x = a + error. Substitute this last expression for x into 3.0-3, and the contract looks like:
   |(a + error)² - a²| ≤ ε                          eq. 3.0-4

whenever |error| ≤ δ. When you do the algebra on 3.0-4, you get:
   |2a error + error²| ≤ ε                          eq. 3.0-5

whenever |error| ≤ δ.
Can we choose error so that 3.0-5 always holds? Certainly. Just choose error so that it is no greater than the lesser of ε/(4|a|) and √ε/2. Why the lesser? Because each term in the sum gets closer to zero as error gets closer to zero, and you want a bound that keeps both of them small at once. If you do that, you guarantee that each term of the sum in 3.0-5 is, in absolute value, less than or equal to ε/2, and hence the whole left-hand side is less than or equal to ε. And clearly, if you choose error to be even closer to zero, you are still guaranteed that 3.0-5 holds. But the point is, if you choose error no worse than we have chosen it, success is guaranteed.
That works for all a, except zero. But when a = 0, notice that the left-hand summand in 3.0-5 is, in fact, zero. So, when a is zero, simply choose error to be √ε/2 (that is, pick the candidate that doesn't give you a problem).
So now you have a recipe for finding error given ε and a. Just make δ the same as the error we chose in the previous two paragraphs, and you have a recipe for finding δ given ε and a. And that is what the contract said we should be able to do. Not only that, but it works for any a.
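By the way, if you have a computer handy, you can spot-check the recipe numerically. Here is a little Python sketch of my own (the names delta_for and contract_holds are invented for this illustration, and of course sampling a few thousand points is a sanity check, not a proof):

```python
import math

def delta_for(eps, a):
    """The recipe from the text: the lesser of eps/(4|a|) and sqrt(eps)/2.
    When a = 0, the first candidate is not needed (or defined), so we use
    only the second."""
    if a == 0:
        return math.sqrt(eps) / 2
    return min(eps / (4 * abs(a)), math.sqrt(eps) / 2)

def contract_holds(eps, a, samples=1000):
    """Check that |x² - a²| ≤ eps for x sampled across [a - δ, a + δ]."""
    d = delta_for(eps, a)
    return all(
        abs((a + e) ** 2 - a ** 2) <= eps
        for e in (d * (2 * k / samples - 1) for k in range(samples + 1))
    )

for a in (0.0, 1.0, -3.5, 100.0):
    for eps in (1.0, 0.01, 1e-6):
        assert contract_holds(eps, a)
```

Watching the inequality hold across thousands of sample points for several choices of a and ε is a nice way to convince yourself that the algebra above came out right before you trust the proof.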
You are likely to be called upon on an exam to prove continuity of some function using the method just detailed, or one similar to it. Read over the discussion of why f(x) = x² is continuous everywhere, then see if you can reproduce a similar argument for, say, f(x) = 3x and for f(x) = (x + 1)². If you do well on those and feel really ambitious, try tackling f(x) = x³, which is given as a coached exercise if you click here. If you get stuck, go back and study the line of thought shown here again. Take a short break if you need to. Then come back to it.
So what about the first function we looked at, our friend, u(x)? If it really is continuous everywhere except at zero, then our limit test should pass everywhere except at zero, and there it should fail.
Our contract for u(x) is: for any ε > 0 you name, no matter how small, I can name a δ such that
   |u(x) - u(a)| ≤ ε                                eq. 3.0-6

whenever |x - a| ≤ δ.
Whenever a is not zero, we can always find a δ small enough so that the interval that extends out a distance δ on each side of a does not include zero. You can see this graphically on a number line.
[fig. 3-2: a number line showing the interval from a - δ to a + δ, lying entirely on one side of 0]

No matter how close a is to zero, you can clearly pick δ small enough so that the interval shown in the diagram does not include zero. So let's pick such a δ. And when we do that, we get another effect. The entire interval is on the same side of zero -- that is, you don't have one part of the interval on the positive side of zero and the other on the negative side.
Well, whenever x is inside the interval shown, it is certainly true that |x - a| ≤ δ. In fact, saying that x is in the interval is exactly the same as saying |x - a| ≤ δ. Or to state that using logic symbols, we have:
   |x - a| ≤ δ   <=>   x is in the interval         eq. 3.0-7

And since the entire interval is on the same side of zero, u(x) is certainly constant throughout the interval. That is, if the interval is entirely on the positive side (as shown in the diagram), then u(x) = 1 whenever x is in the interval. Likewise, if the interval is entirely on the negative side (and perhaps you ought to make a similar diagram that shows that case), u(x) = 0 whenever x is in the interval.
That means that whenever |x - a| ≤ δ, then u(x) = u(a). But that means that |u(x) - u(a)| = 0. But that is just the same expression we had in 3.0-6, and it's equal to zero, which is always less than or equal to ε. So when a is not zero, our recipe for finding a δ is simply, choose δ small enough so that the interval it creates around a does not include zero. If you do that, the contract holds.
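If you would like to see this recipe in action, here is a small Python sketch (again an illustration of my own, not part of the argument -- half the distance from a to zero is just one convenient choice of δ that keeps the interval away from zero):

```python
def u(x):
    # The unit step function: 0 for x < 0 and 1 for x > 0.  (Whatever this
    # returns at exactly 0 is never used below, because every test
    # interval stays clear of zero.)
    return 0 if x < 0 else 1

def delta_for(a):
    # The recipe from the text: any delta small enough that the interval
    # [a - delta, a + delta] misses zero will do; half the distance from
    # a to zero certainly works.
    return abs(a) / 2

for a in (-5.0, -0.001, 0.25, 3.0):
    d = delta_for(a)
    xs = [a - d + 2 * d * k / 100 for k in range(101)]
    # u is constant on the whole interval, so |u(x) - u(a)| = 0, which is
    # less than or equal to any eps > 0 you could possibly name
    assert all(u(x) == u(a) for x in xs)
```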
But what about when a = 0? If u(x) is truly not continuous at this point, then the contract should always fail here. First, recollect that I said you could choose u(0) any way you like, including leaving it undefined. Well, if you leave it undefined, then u(x) can't possibly be continuous at zero because our limit definition of continuity requires a function to be defined wherever it is continuous. Refer back to 3.0-1 to see this. How can we compare u(x) with u(0) if u(0) doesn't even exist?
Some candidates we might pick for u(0) are 0, 1, or anything in between. It turns out, it doesn't matter. You can pick any real number you like and say that that is what u(0) equals, and the contract will still fail at a = 0. But don't take my word for it. Let's prove it.
At a = 0, you have as your contract condition: whenever |x| ≤ δ. Whatever δ > 0 you choose, you can substitute zero for x into |x| ≤ δ, and it will be true. Here is a graphic that shows the interval specified:
[fig. 3-3: a number line showing the interval from -δ to δ, centered on 0]

Again, x being in the interval is the same as |x| ≤ δ, or using logic symbols,

   |x| ≤ δ   <=>   x is in the interval             eq. 3.0-8

There simply is no way to exclude zero from this interval. More important than that, the interval must exist on both sides of zero -- that is, some of it must be on the positive side and some of it must be on the negative side. This is true no matter how small you pick δ.
So if you told me that you wanted u(0) to be 0, then there are values of x in the interval shown for which u(x) = 1 (simply choose any 0 < x ≤ δ), so

   |u(x) - u(0)| = 1                                eq. 3.0-9

That means that if you choose any ε < 1, then I cannot possibly find any δ that makes the contract hold.
Similarly, if you say that u(0) = 1, then there are values of x in the interval for which u(x) = 0 (simply choose any -δ ≤ x < 0), so again 3.0-9 will be true, and again, if you choose ε < 1, I cannot possibly find any δ that makes the contract hold.
And suppose you pick u(0) to be something that isn't 0 or 1? Well, let's do that. Let's call it y. We still have a problem. y is different from any u(x) when x is on the positive side of the interval (where u(x) = 1), and y is different from any u(x) when x is on the negative side of the interval (where u(x) = 0). So whichever of these differences, |1 - y| or |0 - y| is the greater, all you have to do is give me an ε that is closer to zero than that, and you guarantee that I will never be able to find a δ close enough to zero to make the contract hold. No matter how small I make δ, the interval will include an x for which u(x) is farther from y than ε is from zero.
I just can't win, no matter what I choose for u(0). The contract always fails. And that's why u(x) cannot be continuous at x = 0.
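You can even watch the contract fail on a computer. In this Python sketch (my own illustration), y plays the role of whatever value you picked for u(0); since max(|1 - y|, |0 - y|) is always at least 1/2, any ε < 1/2 -- here ε = 0.4 -- breaks the contract no matter how small δ gets:

```python
def u(x):
    return 0 if x < 0 else 1

def contract_fails_at_zero(y, eps, deltas):
    """For each candidate delta, exhibit an x with |x - 0| <= delta but
    |u(x) - y| > eps, where y plays the role of u(0)."""
    for d in deltas:
        # the interval [-d, d] always contains points on both sides of zero
        witnesses = (-d / 2, d / 2)
        if not any(abs(u(x) - y) > eps for x in witnesses):
            return False
    return True

deltas = [10.0 ** (-k) for k in range(1, 12)]
for y in (0.0, 1.0, 0.5, -3.0, 42.0):
    assert contract_fails_at_zero(y, 0.4, deltas)
```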
There are several common ways in which a function, f(x), might be discontinuous at a point, x = a. Recall that for f(x) to be continuous at x = a, the limit
   lim   f(x) = f(a)
  x → a

must hold, with x approaching a from both above and below. That requires: 1) that f(a) be defined, 2) that the limit of f(x) as x approaches a exist -- which means the limits from above and from below must both exist and agree -- and 3) that the limit be equal to f(a).
We began this discussion with the metaphor of a highway, and that to get from point A to point B on a highway, you have to visit all the points in between. We went on to use a definition involving limits to define what continuity of a function is. Specifically, we stated that a function f(x) is continuous at x = a if and only if f(a) exists and
   f(a) =   lim   f(x)                              eq. 3.0-10
           x → a

But we never made the connection from the definition to the points-in-between concept. The intermediate value theorem makes that connection. It says that if you start at a and travel to b, visiting all the points in between, then a continuous function, f(x), must travel from f(a) to f(b) and must visit all the points in between f(a) and f(b).
The usual statement of the intermediate value theorem is: if f(x) is continuous on the closed interval, a ≤ x ≤ b (that is f(x) is continuous at every point on the interval and the interval includes the endpoints), and f(a) is not equal to f(b), then for every value, y, that falls in between f(a) and f(b), there exists at least one point, c, in between a and b such that f(c) = y.
Read that statement of the intermediate value theorem over several times and make sure you understand what it means. You are likely to be tested on it. Try to see why the theorem implies, among other things, that if

   f(x) = x²

is continuous everywhere, and f(1) = 1 and f(2) = 4, then there must be some real number between 1 and 2 that is the square root of 2.
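The theorem only promises that such a number exists, but the interval-halving idea behind its usual proof also gives you a practical way to hunt it down. Here is a Python sketch of that hunt (my own illustration; ivt_solve is an invented name):

```python
def ivt_solve(f, a, b, y, tol=1e-12):
    """Find c between a and b with f(c) close to y, assuming f is
    continuous on the closed interval and y lies between f(a) and f(b).
    Repeatedly halve the interval, keeping the half whose endpoint
    values still bracket y."""
    lo, hi = (a, b) if f(a) <= f(b) else (b, a)
    # invariant: f(lo) <= y <= f(hi); the IVT promises a c in between
    while abs(hi - lo) > tol:
        mid = (lo + hi) / 2
        if f(mid) <= y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# the square root of 2 promised by the discussion above
c = ivt_solve(lambda x: x * x, 1.0, 2.0, 2.0)
assert abs(c * c - 2.0) < 1e-9
```

Notice that this search leans on continuity: aim it at the step function u(x) with y = 1/2 and it will happily converge on x = 0, a point where nothing actually takes the value 1/2.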
Think of it this way. If you take a pencil to paper and begin drawing a mark always moving from left to right, but while you do that you also move the pencil-point up and down, and you never pick the pencil-point off the paper, what can you say about the mark you made? If you made it to the top of the page anywhere, and you made it to the bottom of the page anywhere, then you made it to all the heights in between. It's just common sense.
In fact, the intermediate value theorem seems intuitively obvious until somebody comes along and says, "If it's so obvious, then prove it." It is then that you realize that its everyday applications to our experiences in life are obvious, but a rigorous proof is another matter. The proof is not easy, and the likelihood that you will be asked to produce it in a first year calculus course is almost nil. So it is given here as optional material for the curious and the brave. This proof is not so hard that a first year student can't understand it. So if you feel up to a challenge, click here and follow along slowly and carefully. But do it only if you can afford the time and effort.
The intermediate value theorem has a partner that also seems obvious, called the extreme value theorem. It is stated without proof in almost every first year calculus text. This theorem says that if f(x) is continuous on the closed interval, a ≤ x ≤ b (that is, f(x) is continuous at every point on the interval and the interval includes the endpoints), then f(x) is bounded for all x on the interval. So there is some value that f(x) never exceeds and another that f(x) never gets under. But the extreme value theorem is stronger than just that. It also says that over all the x's on the closed interval there is a value of x that gives you a maximum f(x) and another that gives you a minimum f(x).
All this is saying is that if you fasten one end of a clothes line to one pole and the other to another pole, it doesn't make any difference how much slack you leave or how many branches you drape it through, there will be a highest point on the clothes line and a lowest point on the clothes line. This is true because 1) the clothes line is continuous and 2) because the clothes line includes its endpoints.
In other words, a function that is continuous on a closed interval cannot run off to infinity or to minus infinity on that interval. It must remain always between some minimum value and some maximum value. Not only that, there must be a point, c, on the interval such that f(c) is the maximum value, and another point, d, on the interval such that f(d) is the minimum.
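The theorem guarantees that the maximum and minimum are actually attained, though it doesn't tell you where. On a computer you can at least approximate them by dense sampling, as in this Python sketch (my own illustration; sampling approaches the extremes of a continuous function but never proves them):

```python
import math

def extreme_values(f, a, b, samples=100000):
    # Sample f densely across the closed interval [a, b], endpoints
    # included, and report the largest and smallest values seen.  For a
    # continuous f these approach the true maximum and minimum that the
    # extreme value theorem guarantees.
    xs = [a + (b - a) * k / samples for k in range(samples + 1)]
    ys = [f(x) for x in xs]
    return max(ys), min(ys)

hi, lo = extreme_values(math.sin, 0.0, 2 * math.pi)
assert abs(hi - 1.0) < 1e-6 and abs(lo + 1.0) < 1e-6
```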
It is important that the interval be closed -- that is, that it include the endpoints. If the interval does not include the endpoints, it is easy to come up with a counterexample. The function

             x
   f(x) = --------                                 eq. 3.0-11
           x² - 1

is continuous on the open interval (that is, excluding the endpoints) that runs -1 < x < 1. Yet it is not bounded. It runs off to infinity and to minus infinity as it approaches those endpoints, and hence we say it is unbounded on the open interval. Its graph is shown here in figure 3-2. (Can you show that this function is not continuous at x = -1 and at x = 1? Which of the tests does it fail there?)
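You can watch the counterexample f(x) = x/(x² - 1) blow up numerically. In this Python sketch (my own illustration), x marches toward the excluded endpoint at 1 and |f(x)| grows past any bound you care to name:

```python
def f(x):
    # the counterexample: continuous on the open interval -1 < x < 1,
    # but unbounded as x approaches either excluded endpoint
    return x / (x * x - 1)

for k in range(1, 12):
    x = 1 - 10.0 ** (-k)          # ever closer to the endpoint at 1
    # near x = 1, |f(x)| is roughly 1/(2(1 - x)), so it grows about
    # tenfold at each step
    assert abs(f(x)) > 10.0 ** (k - 1) / 2
```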
The proof of this theorem is even harder than that of the intermediate value theorem, and you won't be asked to produce it in a first year course. But it will be assumed that you know what the theorem means. It plays a big role in the theory of maximums and minimums. For the really brave, and because I hate to give you something as important as this without proof, you can see the proof as optional material by clicking here. Again go to the proof only if you can afford the time and effort.
The problems that follow are proofs. You should know that proofs concerning continuity may be on the exam.
Now here is one that you should be able to prove, and that you might encounter on an exam. Suppose f(x) is continuous on the closed interval a ≤ x ≤ b, and suppose that there is a point, c, on that interval (and not an endpoint) for which f(c) > 0. Prove that there exists a δ such that f(x) > 0 for all x in the interval, c - δ < x < c + δ. Think about how we defined continuity in equation 3.0-10 and the delta-epsilon contract that implies. Hint: Make ε less than f(c) but still positive.
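Before writing the proof out, it may help to see the hint in action numerically. In this Python sketch (my own; positive_neighborhood and delta_for_eps are invented names, and the δ-recipe plugged in at the end is the one worked out for x² earlier in this section):

```python
import math

def positive_neighborhood(f, c, delta_for_eps, samples=1000):
    """Following the hint: take eps = f(c)/2, which is positive.
    Continuity hands us a delta with |f(x) - f(c)| <= eps throughout
    (c - delta, c + delta), and there f(x) >= f(c) - eps = f(c)/2 > 0.
    delta_for_eps stands in for the function's side of the epsilon-delta
    contract at c (a hypothetical helper supplied by whoever proved
    continuity)."""
    eps = f(c) / 2
    d = delta_for_eps(eps)
    xs = [c - d + 2 * d * k / samples for k in range(samples + 1)]
    return all(f(x) > 0 for x in xs), d

# Example: f(x) = x² at c = 2, where f(c) = 4 > 0.  The recipe from the
# x² discussion supplies delta = min(eps/(4|c|), sqrt(eps)/2).
ok, d = positive_neighborhood(
    lambda x: x * x, 2.0, lambda eps: min(eps / 8, math.sqrt(eps) / 2)
)
assert ok and d > 0
```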
As always, make a sincere effort to work this problem on your own. If you give up or if you want to compare your proof with mine, click here.
Here's another theorem (which I call The Stepping Stone Theorem) that you ought to be able to work through at least part of. Prove that a function, f(x), is continuous at a point, x = c, if and only if for every sequence of real numbers, x1, x2, x3, ... , whose limit is c, the limit of the sequence, f(x1), f(x2), f(x3), ... , is exactly f(c). In other words, if the stepping stones lead to the riverbank, then the moss on the stepping stones leads to the moss on the riverbank.
Notice that this is an "if and only if" proof. That means it is really two theorems in one. One theorem says that if for every set of stepping stones that leads to the riverbank, the moss on the stepping stones leads to the moss on the riverbank, then the moss function is continuous at the riverbank. The other theorem is that if the moss is continuous at the riverbank, then for every set of stepping stones leading to the riverbank, the moss on them leads to the moss on the riverbank.
The usual approach to an "if and only if" theorem is to divide it into its two constituent theorems and prove each one separately. And that is what I recommend you do here. The second part of this one (that is the "only if" part) is the easier one to prove, so perhaps you should start with that. This part asserts that if the moss function is continuous at the riverbank, then each set of stepping stones that leads to the riverbank has moss that leads to the moss on the riverbank.
Approach it by demonstrating that f(xn) can always be brought to within ε of f(c) (no matter how close to zero you choose ε) by making n big enough. Remember that a sequence that has a limit will confine itself to smaller and smaller intervals around its limit (in this case the limit is c) as the subscripts get larger and larger. You can bridge your logic to the delta-epsilon statement of continuity by observing that the real numbers from c - δ to c + δ do indeed form an interval around c, so you can always find an n big enough to guarantee that xk is in that interval whenever k ≥ n.
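Here is what the "only if" direction looks like numerically, with f(x) = x² as the moss function and one concrete set of stepping stones converging to c = 3 (a Python sketch of my own -- a picture of the claim, not a proof, since a proof must handle every sequence at once):

```python
def f(x):
    return x * x   # continuous everywhere, so the theorem applies

c = 3.0
# stepping stones hopping from side to side of the riverbank c,
# with x_n = c + (-1)^n / (n + 1), which converges to c
stones = [c + (-1) ** n / (n + 1) for n in range(1, 2000)]
moss = [f(x) for x in stones]

# the moss values close in on the moss at the riverbank, f(c) = 9
assert abs(moss[-1] - f(c)) < 0.01
assert all(abs(m - f(c)) < 0.1 for m in moss[100:])
```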
Go ahead and prove the "only if" part, and I'll give you a pass on doing the more difficult "if" part. When you are done, you can look at both proofs by clicking here.