Section 8: More Tricks with Derivatives

© 1999, 2004 by Karl Hahn

8.4 Little Red Riding Hood Goes to Town (approximation & Intro to Taylor & Maclaurin Series)

Little Red Riding Hood's grandma has moved out of her cabin in the woods and has taken an apartment in the city. Little Red Riding Hood with her basket of goodies took the bus downtown to visit her. Before she left, Little Red Riding Hood's mother gave her specific instructions.



"Get off the bus at the intersection of Church and Chapel streets. Then continue to walk past the intersection in the same direction as the bus was going. Grandma's apartment is in the fourth building past the intersection. And be sure you don't talk to strangers, dear."

Little Red Riding Hood got off the bus at the proper place and crossed the street. But to her dismay, the buildings were all built right up against each other. She couldn't tell where one ended and the next one began. Again and again she tried to count the buildings, but in the end her efforts produced only tears of frustration.

The wolf, who was standing on the street-corner, saw everything. "Dear little girl," he said, "what could possibly be so troubling that you should cry like that?"

"Go away," she replied. "I'm not supposed to talk to strangers."

The wolf was undaunted. "So you would prefer to stand there crying when you could ask me for assistance? All I desire is to put an end to your tears. Now tell me please, sweet child, what is the matter?"

Little Red Riding Hood could see that asking the wolf for help was much more likely to get her to Grandma's than crying. So she told him the whole story.

"The solution is oh so simple, tender lass," he said. "You see, the width of each building is exactly seventeen of your steps. You know how to multiply, don't you? You need only multiply those seventeen steps per building times the four buildings you want to count, and you will be able to find your Grandma's house. Just start at the corner, walk down the street counting your steps, and when you have counted to sixty-eight, you will be there."

The wolf stayed with her for her first few steps, then disappeared into an alley-way. Little Red Riding Hood carefully counted off the sixty-eight steps and was pleased to discover that it brought her quite close to a doorway. In she went, climbed the stairs, and knocked on the door. "Enter deary," came a raspy voice from within. And she did.

"Grandma!" she exclaimed. "What awful plastic surgery you've had..."

I'll let you provide your own ending to the story. Besides the lesson about not talking to strangers, what other lesson is there for us in this tale?

Suppose that the intersection of Church and Chapel streets is the point,  y = 1. And suppose  y = 1.0004^17  is Grandma's building. If each of Little Red Riding Hood's steps is 0.0001, how does the wolf know that it will take her 68 steps to get there?

Well it seems the wolf was also a calculus instructor who lost his job for having his freshman students for dinner. So here is the wolf's line of thinking:

If  y(x) = x^17, then everybody knows that  y(1) = 1. The number,  x = 1.0004,  is very close to  x = 1, which means that Grandma's building is very close to Church and Chapel. But how close? I know that  y'(x) = 17x^16. And it's easy to see from that that  y'(1) = 17. So whenever I'm nearby Church and Chapel, for every 0.0001 that x increases, I should see y(x) increase by approximately 17 times that.

In this way the wolf was able to approximate the location of Grandma's building as  y = 1.0068. How close was he? My calculator says that Grandma's building is really at  y = 1.006821804. So when Little Red Riding Hood had counted off 68 steps, she was only about one fifth of a step from Grandma's doorway.

The wolf's calculation is just another application of something that we have been using frequently in previous sections. That is, if f(x) is a continuous function and its derivative, f'(x), is also continuous, you can approximate f(x+h) using

   f(x + h)  ≈  f(x) + hf'(x)                                     eq. 8.4-1
where the symbol, ≈, means "approximately equal to." And we know from our discussion of the Mean Value Theorem that this approximation improves as h gets smaller and smaller. You can employ this same method to make cocktail-napkin estimates of functions that you thought you needed a calculator for. For example, suppose you wanted to know the square root of 9.5.

Step 1: 9.5 is close to 9, which is a perfect square. And indeed you know that the square root of 9 is 3.

Step 2: Find the derivative of the square root function. We've done that one before.

   f(x)  =  √x

   f'(x)  =  1/(2√x)

Step 3: Find the value of the derivative at the x where you know the square root. That would be at  x = 9.

   f'(9)  =  1/(2√9)  =  1/6

Step 4: What's the difference between the number you know the square root of and the number you are interested in taking the square root of? That value is h.

   h  =  9.5 - 9  =  0.5
When you are not looking at this page you might have to ask yourself, "Which way do I take the difference?" Always set it up so that if you add h to the number where you know the value of the function (in this case that number is 9), you will get the number that you want to know the function of. Remember that you know  f(x)  and you are looking for  f(x + h). Or perhaps it is easier for you to remember that you always take the number you want the function of minus the number you know the function of.

Step 5: Apply the approximation formula.

   √9.5  ≈  √9 + hf'(9)  =  3 + 0.5 × (1/6)  =  3 + 1/12  ≈  3.08333

Using a calculator, I find that squaring the approximation yields 9.5069444, which is within 0.1% of 9.5. Using the calculator again, I find that the square root of 9.5 is, to 10 figures, 3.082207001, which differs from the approximation by less than 0.1%. Not bad.
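If you want to try this on a computer, here is a minimal Python sketch of eq. 8.4-1 applied to the square root of 9.5 (this code is not from the original text; the function names are my own):

```python
import math

def linear_approx(f, fprime, x, h):
    # First-order estimate from eq. 8.4-1: f(x + h) is about f(x) + h*f'(x)
    return f(x) + h * fprime(x)

# Approximate sqrt(9.5) by expanding around the perfect square x = 9, h = 0.5
approx = linear_approx(math.sqrt, lambda x: 1 / (2 * math.sqrt(x)), 9, 0.5)
print(approx)            # 3 + 1/12 = 3.0833...
print(math.sqrt(9.5))    # 3.0822...
```

The same `linear_approx` helper works for any function whose derivative you can write down.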

In some textbooks they will use the symbol, Δx, instead of h. For the problem we just did, it would look like

   Δx  =  9.5 - 9  =  0.5

   f(x + Δx)  ≈  f(x) + Δxf'(x)
Don't let that throw you. It's the same thing, just under a different name.

In the movie, Infinity, Richard Feynman was depicted challenging a Chinese merchant to an arithmetic contest in which the merchant was permitted to use his abacus. The merchant beat him on simple addition and multiplication problems. The problem where Feynman bested the merchant was taking the cube root of 1729. The merchant came up with 12, but Feynman insisted that they carry it out to some decimal places, and he used the method described above to do just that.

Step 1: 1729 is close to 1728, which is 12^3 (you know this because you recall from junior high that there are 1728 cubic inches in a cubic foot).

Step 2: Find the derivative of the cube root function:

   f(x)  =  x^(1/3)

   f'(x)  =  (1/3)x^(-2/3)

Step 3: Find the value of the derivative at the x where you know the cube root.


   f'(1728)  =  (1/3) × 1728^(-2/3)  =  (1/3) × (1/144)  =  1/432

Step 4: What's the difference between the number you know the cube root of and the number you are interested in taking the cube root of? That value is h.

   h  =  1729 - 1728  =  1

Step 5: Apply the approximation formula.


   1729^(1/3)  ≈  12 + hf'(1728)  =  12 + 1/432

How Feynman was able to divide out the fraction into the decimal, 12.0023, as quickly as he did, I don't know. Doing that division is the hardest part of this problem. The value Feynman gives is accurate to 4 places beyond the decimal. If you cube this number, you get 1728.99379.
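The Feynman calculation is quick to replay in Python (again, a sketch of my own, not from the text):

```python
import math

# Feynman's cube root: expand x**(1/3) around 1728 = 12**3 with h = 1
fprime = lambda x: (1 / 3) * x ** (-2 / 3)   # derivative of the cube root

approx = 12 + 1 * fprime(1728)               # 12 + 1/432
print(approx)                                 # 12.002314...
print(1729 ** (1 / 3))                        # actual value for comparison
```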


Exercises

1) Use the method we have discussed here to approximate 258^(1/4). Click here to view solution.

2) Use the method we have discussed here to approximate

    __
   √80
Hint: h is negative in this case.
Click here to view solution.

Approximating Transcendental Functions

Transcendental functions are ones like sine, cosine, exponential, log, and all the others that cannot be described in terms of simple powers and roots of x. Even though these functions are qualitatively different from the roots we have approximated so far, you can still apply the same approximation method to them. For example, let's apply the method to finding  e^1.1

Step 1: For what x that is near 1.1 do we know the value of e^x? Well we know that  e^1 = e ≈ 2.72

Step 2: What is the derivative of ex? That's an easy one.

   f(x)  =  e^x

   f'(x)  =  e^x

Step 3: What is the value of the derivative at the x where we know the function? Again this is an easy one.

   f'(1)  =  e^1  =  e  ≈  2.72

Step 4: What is the difference between the x where you know the function's value and the x where you'd like to approximate the function's value?

   h  =  1.1 - 1  =  0.1

Step 5: Apply the approximation formula.

   e^1.1  =  e^(1 + h)  ≈  e + hf'(1)  ≈  2.72 + 0.272  =  2.992
The actual value of  e^1.1  is  3.004166.
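In Python (a sketch of my own) the same estimate comes out slightly differently because the machine keeps the full value of e rather than the rounded 2.72:

```python
import math

# e**1.1 from eq. 8.4-1, expanding around x = 1 where f(1) = f'(1) = e
h = 0.1
approx = math.e + h * math.e
print(approx)            # 2.9886...
print(math.exp(1.1))     # 3.0041...
```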

Approximate  cos(0.75).

Step 1: What x that is near 0.75 do I know the cosine of? A little investigation yields that 0.75 is very close to

   π/4  =  0.7853981...

and that

   cos(π/4)  =  √2/2  =  0.7071068...

(if you don't remember this, see the trig id table)

Step 2: What is the derivative of  cos(x)? You already know that if

   f(x)  =  cos(x)
then
   f'(x)  =  -sin(x)

Step 3: What is the value of the derivative of  cos(x)  at the x where we know the value of the function?

   -sin(π/4)  =  -√2/2  =  -0.7071068...

Step 4: What is the difference between the x that you know the cosine of and the one you want to know the cosine of? That will be your h.

   h  =  0.75 - 0.7853981  =  -0.0353981

Step 5: Apply the approximation formula.

   cos(0.75)  =  cos(π/4 + h)  ≈  √2/2 + hf'(π/4)  =  0.732137
The actual cosine of 0.75 is 0.7316888.
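Here is the same calculation as a Python sketch (mine, not the text's), using the full-precision value of π/4:

```python
import math

# cos(0.75) from eq. 8.4-1, expanding around pi/4 where cos and sin are known
c = math.pi / 4
h = 0.75 - c                               # negative, since 0.75 < pi/4
approx = math.cos(c) - h * math.sin(c)     # f(c) + h*f'(c) with f'(x) = -sin(x)
print(approx)            # 0.73213...
print(math.cos(0.75))    # 0.73168...
```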

How Good is the Approximation?

All the approximations we have done so far have come out close. But the method raises some questions, like, "How close will my approximation be?" and "How close does my x have to be to the number I want to take this function of for the method to work reasonably?" These two questions are not silly details. They probe deeply into what derivatives and continuous functions are all about.

For comparison, let's try the method on a very easy function.

   f(x)  =  3x + 2
which is the equation of a straight line with slope 3 and y-intercept 2. We know that  f(0) = 2.  Suppose we wanted to know f(1).

Since you know the method by now and since this is an easy function, we'll take all the steps at once.

   f(0)  =  2

   f'(x)  =  3

   h  =  1 - 0  =  1

   f(1)  =  f(0 + h)  ≈  2 + hf'(0)  =  2 + (1 × 3)  =  5
But indeed,  f(1) = 5  exactly. The approximation that the method turned out in this case is perfect. And with a little experimentation and thought you will see that the method is always perfect whenever the function is a straight line, no matter how big an h you choose.

So what is special about straight-line functions that makes them so susceptible to our method? You could put together some words about how the function is linear and so is the method, and such an explanation would be correct. But another much more concise way to put it is simply, "The method works perfectly on straight-line functions because a straight-line function always has a second derivative of zero."

The second derivative is the key to figuring out how good an approximation the method yields each time. If the derivative of f(x) were constant (which is another way of saying the second derivative is zero), then the method would be infallible. But for most of the functions we want to apply the method to, the derivative is not constant.

Let's use the symbol ξ (that's the Greek letter, xi, a traditional math symbol for an error term) to name the amount by which the approximation formula is off. Suppose we know what f(x) is at  x = c,  and we are interested in the value of f(c+h). Then if our approximation is off by some error amount, ξ, you can represent that with the equation:

   f(c + h)  =  f(c) + hf'(c) + ξ                                 eq. 8.4-2
You can bound the error term, ξ, if you know what the second derivative of f(x) is:
   |ξ|  ≤  (1/2)|h²f"(a)|                                         eq. 8.4-3
where a is some real number that lies somewhere between c and c+h. Although you probably won't have to know the derivation of equation 8.4-3, you might have to know the inequality itself. What it is saying is, pick the point, a, between c and c+h where f"(a) has its greatest magnitude. The error of the approximation will be no worse than half that magnitude times the square of the length of the interval.

How we get equation 8.4-3 is optional material. You can read it by clicking here.

Worked example: Develop a worst-case error term for when you approximate the square root of 9.5 using the method introduced in this section (click here to see how we developed an approximation for this square root in the first place).

Step 1: Find the second derivative. We know that

   f(x)  =  √x  =  x^(1/2)

   f'(x)  =  1/(2√x)  =  (1/2)x^(-1/2)
So applying the power rule to f'(x) you get
   f"(x)  =  -(1/4)x^(-3/2)

Step 2: Where does the second derivative have its maximum magnitude? This step is sometimes difficult, but not in this case. Observe that as x increases, the magnitude (that is the absolute value) of f"(x) always decreases. So the magnitude of f"(x) will be greatest at the lower end of the interval. In this case the interval runs from  x = 9  to  x = 9.5.  So we find f"(x) at  x = 9.


   f"(9)  =  -(1/4) × 9^(-3/2)  =  -(1/4) × (1/27)  =  -1/108

Step 3: Take the absolute value of half that times the square of the length of the interval. This is the application of the inequality in eq. 8.4-3. The interval, in this case, goes from  x = 9  to  x = 9.5,  so the length, h, of the interval is  h = 0.5 = 1/2. The final inequality, then, will be

   |ξ|  ≤  (1/2)|h² f"(9)|  =  (1/2) × (1/4) × (1/108)  =  1/864

And indeed using a calculator I find that the actual error is


   |ξ|  =  0.001126331  =  0.973/864  ≤  1/864
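You can confirm that the actual error really does stay under the eq. 8.4-3 bound with a few lines of Python (a sketch of my own):

```python
import math

# Bound from eq. 8.4-3 for the sqrt(9.5) estimate: |xi| <= (1/2)|h^2 f''(9)|
h = 0.5
f2_at_9 = -(1 / 4) * 9 ** (-1.5)           # f''(x) = -(1/4) x^(-3/2)
bound = 0.5 * abs(h ** 2 * f2_at_9)        # works out to 1/864

approx = 3 + h / 6                         # the linear estimate of sqrt(9.5)
actual_error = abs(approx - math.sqrt(9.5))
print(bound, actual_error)                 # the actual error stays under the bound
```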



Maclaurin and Taylor Series



Little Red Riding Hood Escapes (intro to Taylor & Maclaurin Series)

No tale about Little Red Riding Hood would be complete without the wolf jumping up out of Grandma's bed and shouting, "All the better to eat you with!" So we pick up the story from there.

Of course Little Red Riding Hood was frightened beyond imagining. But the quick-minded little girl grabbed the phone and ran into the bathroom. Once she had the door locked with the wolf bellowing on the other side, she dialed 911.

"Please help me," she cried. "I'm locked in the bathroom and the wolf is going to eat me."

"Can you see any way to escape?" asked the 911 operator.

"I can see through the window," she replied, still panting with fear. "There's a fire escape out there."

"Now stay calm, little girl," said the operator. "Go out the window and down the fire escape. Then go to the Three Little Pigs' apartment. You'll be safe there. We've got an animal control officer on the way."

"But where do the Three Little Pigs live?" she whimpered.

"They're in the fifteenth building from the corner," was the answer.

Little Red Riding Hood regained her composure. "I know what to do," she said. "It's seventeen steps for each building. That's 255 steps from the corner, right?"

"Well, not exactly," the operator replied. "The buildings get smaller as you go down the street. After four buildings you start counting only sixteen steps per building."

"I get it," said Little Red Riding Hood. "And after eight buildings I start counting only fifteen steps per building, then at the twelfth building count only fourteen steps per building, right?"

"Well not quite," the operator explained. "You see the rate at which they get smaller increases as you go down the block as well. So in the next four buildings the number of steps per building decreases by two instead of one. That means at the eighth building start counting fifteen steps per building, but after the tenth building start counting only fourteen steps per building. By the twelfth building you are down to only thirteen steps per building."

"Now I get it," said Little Red Riding Hood. "With each four buildings the rate at which the steps per building decreases -- that rate increases by one step, right?"

"Well, almost," droned the operator. "You see the rate of that rate changes as well. And the rate of the rate of the rate isn't constant either..."

Poor Little Red Riding Hood. This was making her head spin. And the wolf was starting to huff and puff just outside the door. She dropped the phone and ran down the fire escape.


With that story in mind, let's play the Name That Function game again. I'm thinking of a cubic polynomial. At  x = 0,  its value is 1. Also at  x = 0  the value of its first derivative is -2, the value of its second derivative is 3, and the value of its third derivative is -5.

   f(0)      =   1

   f'(0)     =  -2

   f"(0)     =   3

   f^(3)(0)  =  -5

     table 8.4-1
This description of f(x) is the same kind of description that the 911 operator gave Little Red Riding Hood for the function that finds the Three Little Pigs' apartment building. So is there a way to determine f(x) from this kind of information?

I've already given away that the function I'm thinking of is a cubic polynomial. So it must be in the form of

   f(x)  =  Bx^3 + Cx^2 + Dx + E                                  eq. 8.4-8
You only have to find B, C, D, and E. But you know that when  x = 0,  then  f(x) = 1.  Putting  x = 0  into equation 8.4-8 gives:
   f(0)  =  1  =  (0)B + (0)C + (0)D + E  =  E                    eq. 8.4-9a
So clearly you have  E = 1.  Now take the derivative of equation 8.4-8 and put in  x = 0:
   f'(x)  =  3Bx^2 + 2Cx + D                                      eq. 8.4-9b

   f'(0)  =  -2  =  (0)B + (0)C + D  =  D                         eq. 8.4-9c
It's just as clear from this that   D = -2.  Now find the second derivative of f(x) by taking the derivative of equation 8.4-9b and then put in  x = 0:
   f"(x)  =  6Bx + 2C                                             eq. 8.4-9d

   f"(0)  =  3  =  (0)B + 2C  =  2C                               eq. 8.4-9e
From this you can quickly see that  C = 3/2.  Now find the third derivative of f(x) by taking the derivative of equation 8.4-9d and again put in  x = 0:
   f^(3)(x)  =  6B                                                eq. 8.4-9f

   f^(3)(0)  =  -5  =  6B                                         eq. 8.4-9g
And from that you should readily see that  B = -5/6.  So the cubic polynomial I was thinking of (that is the polynomial that meets the conditions of table 8.4-1) is
   f(x)  =  -(5/6)x^3 + (3/2)x^2 - 2x + 1                         eq. 8.4-10
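The step-by-step solution above amounts to dividing each derivative value from table 8.4-1 by a factorial, as a short Python sketch shows (the code and variable names are mine, not the text's):

```python
from math import factorial

# table 8.4-1 again: f(0), f'(0), f''(0), f'''(0)
derivs_at_zero = [1, -2, 3, -5]

# Each nth derivative value fixes one coefficient: divide it by n!
coeffs = [d / factorial(n) for n, d in enumerate(derivs_at_zero)]
print(coeffs)   # [1.0, -2.0, 1.5, -0.8333...] i.e. 1 - 2x + (3/2)x^2 - (5/6)x^3
```

These are exactly the E, D, C, and B found in eqs. 8.4-9a through 8.4-9g.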
But suppose I had said that it was a 4th degree polynomial with the same values at  x = 0  for the function itself, and its first, second, and third derivatives as in the first example, but for the fourth derivative had to be equal to 7 at  x = 0.
   f(0)      =   1

   f'(0)     =  -2

   f"(0)     =   3

   f^(3)(0)  =  -5

   f^(4)(0)  =   7

     table 8.4-2
Before, you were looking for the coefficients of  Bx^3 + Cx^2 + Dx + E.  Now you have to look for an additional coefficient. So the polynomial will be
   f(x)  =  Ax^4 + Bx^3 + Cx^2 + Dx + E                           eq. 8.4-11
If you go through the same procedure as we did for the cubic, you will find that the coefficients, B, C, D, and E are the same for this fourth degree polynomial as they were for the cubic (try it and see why). As you take the higher and higher derivatives you will have found that
   f'(x)    =  4Ax^3 + 3Bx^2 + 2Cx + D                            eq. 8.4-12a

   f"(x)    =  12Ax^2 + 6Bx + 2C                                  eq. 8.4-12b

   f^(3)(x) =  24Ax + 6B                                          eq. 8.4-12c

   f^(4)(x) =  24A                                                eq. 8.4-12d
Since  f^(4)(0) = 7,  it is clear that  A = 7/24.  So the fourth degree polynomial that meets the conditions given in table 8.4-2 is:
   f(x)  =  (7/24)x^4 - (5/6)x^3 + (3/2)x^2 - 2x + 1              eq. 8.4-13
The point here is that adding the condition that the fourth derivative of the polynomial had to be equal to a particular value at  x = 0  did not affect the coefficients of the cubed term, the squared term, the linear term, or the constant term of the polynomial one bit. All it did was to add a fourth power term.

At this moment you might be trying to generalize this last point. And the generalization is true. For any polynomial function, imposing the condition that the nth derivative is equal to a particular value at  x = 0  affects only the nth degree term of the polynomial and no other term. And is there an easy way to arrive at the nth term of the polynomial given such a condition? Yes. It will be self-evident as soon as you can see that

The nth derivative of x^n is equal to the constant, n!.
Recall that n! (that's pronounced "n factorial") means the product of the counting numbers from 1 to n. In equations, the above statement is: If
   f(x)  =  x^n
then
   f^(n)(x)  =  n!
Can you see why? The power rule tells us that  f'(x) = nx^(n-1).  The second derivative will be  f"(x) = n(n-1)x^(n-2).  The third derivative will be  f^(3)(x) = n(n-1)(n-2)x^(n-3).  And so on until you get to the nth derivative being
   f^(n)(x)  =  n(n-1)(n-2)(n-3) × ... × 3 × 2 × 1 × x^0
Clearly the product of all those terms ahead of the x is just n!. And you already know that  x^0 = 1.  Convinced?
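You can check this claim numerically by storing a polynomial as a list of coefficients and differentiating it repeatedly. This Python sketch (the helper name is mine) does x^5:

```python
from math import factorial

def diff(coeffs):
    # derivative of a polynomial stored as [c0, c1, c2, ...] (c_k multiplies x**k)
    return [k * c for k, c in enumerate(coeffs)][1:]

n = 5
p = [0] * n + [1]            # the coefficients of x**5
for _ in range(n):
    p = diff(p)              # differentiate n times
print(p)                     # [120], the constant 5! = 120
```

One more call to `diff` would give the empty polynomial, which is the n+1st derivative being zero.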

If the nth derivative of x^n is the constant, n!, then the n+1st derivative of x^n must be zero. Why? Because the derivative of a constant is always zero.

Now suppose I told you, for example, that the sixth derivative of a polynomial was equal to 13 at  x = 0.  Look at what happens to the fifth, sixth and seventh degree terms of the polynomial. Let's say they are  A_5x^5,  A_6x^6,  and  A_7x^7  respectively. Well the sixth derivative of  A_5x^5  is zero because its fifth derivative is a constant. The sixth derivative of  A_6x^6  is  6!A_6.  And the sixth derivative of  A_7x^7  is something times A_7x, but we don't care what that something is, because  x = 0.  The only term of the polynomial whose sixth derivative is not zero at  x = 0  is the sixth degree term. And from it we get

   13  =  6!A_6

   A_6  =  13/6!
So the condition that the sixth derivative of a polynomial is equal to 13 at  x = 0  determines that the coefficient of the sixth degree term of the original polynomial must be equal to 13/6!.  And in general
If you impose the condition that the nth derivative of a polynomial is equal to C at  x = 0,  then the coefficient of the nth degree term of the polynomial must be equal to C/n!.

But what if the function you are trying to match is not a polynomial? Can we use what we learned about derivatives at  x = 0  to approximate the function using a polynomial? Yes. The easiest function to see this with is  f(x) = e^x.

Let's begin using the approximation method we started with to find e^0.2. We know that  f(0) = e^0 = 1.  We also know that  f'(0) = e^0 = 1.  And we have  h = 0.2.  Applying the approximation method we have

   e^0.2  ≈  e^0 + he^0  =  1.2                                   eq. 8.4-14
The real e^0.2 is more like 1.221402758. But you could have chosen any small x rather than 0.2. If you did so then h would be equal to x. You would get for the approximation:
   e^x  ≈  e^0 + xe^0  =  1 + x                                   eq. 8.4-15
Look carefully. This is a first degree polynomial approximation of  e^x.  And although it works pretty well for small x, you can experiment to find that you don't have to make x very large before this approximation fails pretty miserably.

If  f(x) = e^x  and  P_1(x) = 1 + x,  then

   f(0)   =  P_1(0)

   f'(0)  =  P_1'(0)
In other words, at  x = 0  the function, e^x, and its first derivative match the polynomial and its first derivative respectively. But their second derivatives do not match:
   f"(0)  ≠  P_1"(0)
But e^x is the same as its first derivative, its second derivative, its third derivative, and so on. That means that we know that  f"(0) = 1,  because for the nth derivative,  f^(n)(0) = 1,  no matter how big n is. So how should we construct another polynomial, P_2(x), so that at  x = 0  it will match e^x, and its first two derivatives will match also? From the discussion so far, to form P_2(x) you just have to add a 2nd degree term to the polynomial, P_1(x). And since  P_2"(0) = 1,  we know that the coefficient of that term must be 1/2!.
   e^x  ≈  P_2(x)  =  1 + x + (1/2!)x^2                           eq. 8.4-16a
You should be able to see that P_2(x) matches e^x at  x = 0,  and that they match in their first two derivatives as well. But they will differ in their third derivative. To make a P_3(x) that matches e^x at  x = 0  out to the third derivative, you would have to add a cubed term, and its coefficient would have to be 1/3!. And for a polynomial, P_n(x), to match e^x at  x = 0  out to the nth derivative, you would have to have
   e^x  ≈  P_n(x)  =  1 + x + (1/2!)x^2 + (1/3!)x^3 + ... + (1/n!)x^n     eq. 8.4-16b
Using sigma notation, the above is the same as
                       n
   e^x  ≈  P_n(x)  =   Σ   (1/k!)x^k                              eq. 8.4-16c
                      k=0
remembering that  0! = 1  and  1! = 1.  This approximation works pretty well for  |x| < n/2.  For example,  P_5(2) = 7.26666667  and  e^2 = 7.389056099
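Eq. 8.4-16c is easy to play with in Python (a sketch of my own; the helper name is not from the text):

```python
import math

def P(n, x):
    # Partial sum of eq. 8.4-16c: x**k / k! for k = 0 .. n
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

print(P(5, 2))        # 7.2666... as in the text
print(math.exp(2))    # 7.3890...
print(P(20, 2))       # more terms close the gap
```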

So with all that in mind, would it surprise you to learn that

   e^x  =  lim   P_n(x)  =  lim   [1 + x + (1/2!)x^2 + (1/3!)x^3 + ... + (1/n!)x^n]
          n → ∞            n → ∞
In other words, as you add more and more terms to this polynomial, it gets closer and closer to e^x. No matter how big an x you choose, you can add enough terms to the polynomial to make it as accurate as you like. In sigma notation the polynomial is


                              n
   e^x  =  lim   P_n(x)  =  lim    Σ   (1/k!)x^k                  eq. 8.4-17a
          n → ∞            n → ∞  k=0
But the more traditional way to write the limit on the right is to drop the limit sign and write

            ∞
   e^x  =   Σ   (1/k!)x^k                                         eq. 8.4-17b
           k=0
When you see the ∞ atop the Σ, it means you are taking the limit as the n that would have been on top goes to infinity.

Equation 8.4-17b is an example of something called a Maclaurin series. Click here to see a brief biography of Maclaurin. If, at  x = 0,  you know the function, and the rate it changes, and the rate that the rate changes, and the rate that that rate changes, and so on (which is the information the 911 operator was trying to give Little Red Riding Hood), a Maclaurin series can be made to converge to your function (over some interval around  x = 0).  You can take a Maclaurin series of any function provided it and all its derivatives (that is first derivative, second derivative, third derivative, and so on indefinitely) are continuous. But not every x works for every Maclaurin series. We will cover that aspect in a later section.

To create a Maclaurin series of a function, you must know the value of all its derivatives at  x = 0.  The function, e^x, is especially well suited for this since it is its own derivative. So it is easy to know what any numbered derivative of e^x is at  x = 0.  They are all equal to 1.

The Maclaurin Formula: In general, if you have a continuous function, f(x), all of whose derivatives are also continuous, and you know or can somehow figure out that

   f(0)      =  A_0

   f'(0)     =  A_1

   f"(0)     =  A_2

   f^(3)(0)  =  A_3
   .
   .
   .
for every derivative of f(x), then the Maclaurin series is

   f(x)  =  lim   [A_0 + A_1x + (1/2!)A_2x^2 + (1/3!)A_3x^3 + ... + (1/n!)A_nx^n]    eq. 8.4-18a
           n → ∞


or in sigma notation

            ∞
   f(x)  =  Σ   (1/k!)A_kx^k                                      eq. 8.4-18b
           k=0

In some cases this formula will not work for all x, but it will usually work for some range of x near zero.
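The Maclaurin formula of eq. 8.4-18b translates directly into a few lines of Python. Here is a sketch of my own (the helper name is not from the text) that evaluates a truncated series from a list of derivative values A_k:

```python
import math

def maclaurin(A, x):
    # Truncation of eq. 8.4-18b: A[k] * x**k / k! over the known A_k values
    return sum(a * x ** k / math.factorial(k) for k, a in enumerate(A))

# For e**x every derivative at 0 equals 1, so every A_k is 1:
print(maclaurin([1] * 15, 1.0))   # approaches e = 2.71828...
```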


Worked Example of a Maclaurin Series

Problem: Find the Maclaurin series for  f(x) = sin(x)

Step 1: Make a table of the first few derivatives of f(x). This is often the hardest part of problems like this, but in this case sin(x) is a very orderly function with respect to its derivatives.

   f(x)      =   sin(x)

   f'(x)     =   cos(x)

   f"(x)     =  -sin(x)

   f^(3)(x)  =  -cos(x)

   f^(4)(x)  =   sin(x)

Step 2: Try to discern a pattern as the derivatives go higher. Since the fourth derivative brings sin(x) back to itself, we can expect that the pattern of the first four derivatives will repeat indefinitely for still higher derivatives.

Step 3: Put  x = 0  into the table and evaluate.

   f(0)      =   sin(0)  =   0  =  A_0

   f'(0)     =   cos(0)  =   1  =  A_1

   f"(0)     =  -sin(0)  =   0  =  A_2

   f^(3)(0)  =  -cos(0)  =  -1  =  A_3

   f^(4)(0)  =   sin(0)  =   0  =  A_4
Here is where you really have to identify the pattern that the A_j's are making. Clearly whenever the subscript, j, is even, the corresponding A_j is zero. When j is odd, then A_j is either 1 or -1, depending upon whether j is 1 more than a multiple of 4 or 3 more than a multiple of 4 respectively. That is the pattern of A_j's that will go on forever.

Step 4: Put it all into the Maclaurin formula. Since the even-power terms will be all zero, we put up only the odd-power terms.


   sin(x)  =  lim   [x - (1/3!)x^3 + (1/5!)x^5 - (1/7!)x^7 + ... ± (1/(2n+1)!)x^(2n+1)]
             n → ∞
                                                                  eq. 8.4-19a
It might seem at first as if this one would be hard to put into sigma form, but with a little practice you will learn some tricks that enable you to tackle more difficult sigma expressions. The trick you use here is that (-1)^n is -1 if n is odd and +1 if n is even. Also, to get a guaranteed unique odd number from any n you take  2n + 1.  So

               ∞
   sin(x)  =   Σ   ((-1)^k/(2k + 1)!) x^(2k+1)                    eq. 8.4-19b
              k=0
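A partial sum of eq. 8.4-19b is enough to reproduce sin(x) to many decimal places for moderate x. A Python sketch of my own (the function name is not from the text):

```python
import math

def sin_series(x, terms=8):
    # Partial sum of eq. 8.4-19b: (-1)**k * x**(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

print(sin_series(0.75))    # 0.68163...
print(math.sin(0.75))      # matches to many decimal places
```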
Remember in the trig identity section we mentioned that sin(x) is an odd function (that is,  sin(-x) = -sin(x) ). Note that for any odd function, its nth derivative (if it exists) will be nonzero at  x = 0  only when n is an odd number. Also for any odd function, when you take the Maclaurin series, only the terms with odd-numbered exponents will be nonzero.

Likewise with even functions. For any even function (that is where  f(-x) = f(x) ), its nth derivative (if it exists) will be nonzero at  x = 0  only when n is an even number. Also for any even function, when you take the Maclaurin series, only the terms with even-numbered exponents will be nonzero.

Maclaurin Approximations to sin(x)

The graph shows the first six Maclaurin approximations to sin(x). In green is the first-degree polynomial approximation, in brown the third-degree approximation (which has two nonzero terms), and so on out to the black trace, which has six nonzero terms. The actual trace of sin(x) is shown in blue, but over most of the graph the Maclaurin polynomials follow it so closely that they obscure the blue trace. You can see how the higher-degree approximations stay close to sin(x) over a larger range of x. If you added more and more higher-degree terms, you could make the Maclaurin approximation stay near sin(x) over as long an interval as you like.
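You can see the same effect numerically. The sketch below (my own helper, not from the text) evaluates the partial sums at x = 3, where the low-degree polynomials have already drifted well away from sin(x), and watches the error shrink as terms are added:

```python
import math

def sin_maclaurin(x, n_terms):
    """Partial sum of the sine Maclaurin series with n_terms nonzero terms."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

# Each extra odd-power term pulls the polynomial back toward sin(3):
for n in range(1, 7):
    print(n, abs(sin_maclaurin(3.0, n) - math.sin(3.0)))
```

The printed errors fall steadily, from nearly 3 with one term down to a few ten-thousandths with six.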

Since cos(x) is the derivative of sin(x), what would you expect the Maclaurin series for cos(x) to be? Once you know the Maclaurin series for sin(x), you don't even have to apply the Maclaurin formula to find it for cos(x). All you need to do is take the derivative, term by term, of the Maclaurin series for sin(x). That is, take the nth power term, reduce that exponent by 1, and multiply the coefficient by n. In equation 8.4-19b, that exponent is  n = 2k+1:

               ∞   (-1)^k (2k+1)                 ∞   (-1)^k
   cos(x)  =   Σ   -------------  x^(2k+1-1)  =  Σ   -------  x^(2k)      eq. 8.4-19c
              k=0     (2k+1)!                   k=0   (2k)!


   cos(x)  =   lim    1  -  (1/2!)x^2  +  (1/4!)x^4  -  (1/6!)x^6  +  ...  ±  (1/(2n)!)x^(2n)
              n → ∞

And remember that cos(x) is an even function.
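Here is the same kind of numeric check for the cosine series of eq. 8.4-19c (again, the function name is mine, for illustration only):

```python
import math

def cos_maclaurin(x, n_terms):
    """Partial sum of eq. 8.4-19c: sum of (-1)^k x^(2k) / (2k)!."""
    return sum((-1)**k * x**(2*k) / math.factorial(2*k)
               for k in range(n_terms))

print(cos_maclaurin(1.0, 6))
print(math.cos(1.0))
```

Note that only even powers of x appear, exactly as the even-function rule predicts.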


Coached Exercise

Find the Maclaurin series (or at least the first four nonzero terms of it) for

              1
   f(x)  =       
            x + 1

Step 1: Find the derivatives of the function. Go ahead and find the first three derivatives, f'(x), f"(x), and f(3)(x). Use the chain rule to aid you. If you have any doubt whether you got them right, click here.

Step 2: Look for the pattern in the derivatives. Can you see what's happening as you continue to take higher derivatives of this function? With each successive derivative you multiply by the next negative integer and take the next higher power in the denominator. Based upon that try to come up with a general expression for f(n)(x). When you think you have it, click here.

Step 3: Evaluate the function and derivatives at x = 0. This is the easy step. Just put zero in for x in the original function and in each of the derivative expressions and see what you get. Those numbers will become your coefficients for the Maclaurin series: A0 (from the original function), A1 (from the first derivative), A2 (from the second derivative), and so on. When you are done, click here to make sure you did it right.

Step 4: Put it all into the Maclaurin formula. Simply gather up the An's that you came up with in the preceding step and plug them in. Then make any simplifications that seem evident. When you are done, click here.


Think carefully about the formula:

      1
    -----  =   lim    1  -  x  +  x^2  -  x^3  +  x^4  -  ...  ±  x^n
    x + 1     n → ∞
If  |x| > 1,  then each additional term has greater magnitude than the last. As you add more and more terms, there is no way the sum can grow closer and closer to some value -- that is, the limit cannot exist, and we say that the series does not converge (or we say it diverges). Here is a case where not every x will work in the Maclaurin series. The only x's that work are those where  |x| < 1.  Only under that condition does the magnitude of the terms grow smaller as you go out in the series.

Notice that the 1 that |x| must be less than is exactly the same as the distance between the  x = 0  where we evaluated all those derivatives and the  x = -1  where f(x) is discontinuous. This is not a coincidence. Remember we said at the outset that f(x) and all its derivatives must be continuous for Maclaurin to work. Here they are continuous, but only until you get to  x = -1.  The Maclaurin series cannot possibly work at  x = -1  because the function is undefined there. And it can't work beyond  x = -1  because then the region in which you want it to work would include a discontinuity. Going the other way, it can't work beyond  x = 1  because the extent of the region within which a Maclaurin series converges must be symmetric about  x = 0.  The proof of that fact is too advanced for this level of discussion, but take my word for it.

The distance from  x = 0  to where the Maclaurin series stops working is called the radius of convergence. We will talk about it much more in a later section. The radius of convergence for this Maclaurin series is 1.

But let's try this Maclaurin series on a value where it does work, say,  x = 1/2.  The series should give us the reciprocal of  1 + 1/2 = 3/2,  which means it should give 2/3. When I add up the first ten terms of this Maclaurin series with  x = 0.5  using a calculator, I get

      1
    -----  ≈  0.666015625
   1 + 0.5
Pretty close, eh? Try it yourself with, say,  x = -0.5.  You should see that in that case the series approaches 2 (do you see why?).
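You can reproduce that calculator experiment, and watch the divergence for |x| > 1, with a few lines of Python (the function name is my own shorthand for the partial sums of the series):

```python
def reciprocal_series(x, n_terms):
    """Partial sum of 1 - x + x^2 - x^3 + ..., which converges to 1/(1+x) for |x| < 1."""
    return sum((-1)**k * x**k for k in range(n_terms))

print(reciprocal_series(0.5, 10))   # 0.666015625, close to 2/3
print(reciprocal_series(-0.5, 10))  # approaches 2, since 1/(1 - 0.5) = 2

# For |x| > 1 the partial sums swing ever more wildly -- the series diverges:
print(reciprocal_series(2.0, 5), reciprocal_series(2.0, 10))
```

The first two lines settle toward their limits; the last line shows the partial sums at x = 2 exploding instead of converging.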

Note that not having any discontinuities in any derivative of f(x) over some symmetric interval around  x = 0  is a necessary but not a sufficient condition for the Maclaurin series to converge inside that interval. For example,  f(x) = arctan(x)  has no discontinuities in any of its derivatives at any real x. Yet the interval of convergence for its Maclaurin series is only between  x = -1  and  x = 1.  There is much more going on than meets the eye with Maclaurin series, and you will learn a lot more about it when you learn about how the domains of ordinary functions can be expanded to include complex numbers.


Tayloring a Maclaurin Series

Suppose you wanted a series for
            1
   f(x)  =                                                        eq. 8.4-20
            x
Maclaurin is not going to get it for you. Why? Because Maclaurin requires that you find the function's value at  x = 0,  as well as the value of all its derivatives. Since this f(x) and its derivatives are undefined at  x = 0,  there is no way to generate the required coefficients.

In the previous paragraphs, though, we did discover that we can find a perfectly serviceable Maclaurin series for

              1
   f(x)  =                                                        eq. 8.4-21
            x + 1
Suppose you substituted  u = x - 1  into equation 8.4-20. When you do the algebra you find
                           1
   f(x)  =  f(u + 1)  =                                           eq. 8.4-22
                         u + 1
But the right-hand side of equation 8.4-22 is just the right-hand side of equation 8.4-20 by a different name. And we know how to do a Maclaurin series for that:
      1       ∞
    -----  =  Σ  (-1)^k u^k                                      eq. 8.4-23a
    u + 1    k=0
So what happens when you substitute back  x - 1 for u?
    1     ∞
    -  =  Σ  (-1)^k (x - 1)^k                                    eq. 8.4-23b
    x    k=0
This is called a Taylor series taken around  x = 1.  Click here to see a brief biography of Brook Taylor. We know that it's only going to work for  -1 < u < 1,  or equivalently for  0 < x < 2.  That is, it still has a radius of convergence of 1, but this time the region of convergence is symmetric around  x = 1  instead of  x = 0.

Finding a Taylor series is, in reality, the same thing as finding a Maclaurin series, only you've shifted the x-axis by some amount (in this case we shifted it by 1). In general, the Taylor formula for the series taken around the point  x = a  that approximates f(x) is


   f(x)  =   lim   f(a)  +  f'(a)(x-a)  +  (1/2!)f"(a)(x-a)^2  +  ...  +  (1/n!)f^(n)(a)(x-a)^n
            n → ∞

                                                                  eq. 8.4-24a

or in sigma notation

               ∞    1
    f(x)  =    Σ   ---  f^(k)(a)(x-a)^k                          eq. 8.4-24b
              k=0   k!

But once you know the Maclaurin formula, the recipe for the Taylor formula is easy. To find the Taylor series around the point,  x = a,  take the following steps:

  1. Let  u = x - a.
  2. Apply the Maclaurin formula to  f(u + a)  to get a series in the variable, u.
  3. Substitute back  x - a for u into the resulting Maclaurin series.
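The three steps above are easy to check numerically for the 1/x example. This sketch (names are mine, not from the text) sums eq. 8.4-23b and compares it with 1/x at a point inside the region of convergence:

```python
def reciprocal_taylor(x, n_terms):
    """Partial sum of eq. 8.4-23b: sum of (-1)^k (x-1)^k, converging to 1/x for 0 < x < 2."""
    u = x - 1  # step 1 of the recipe: shift the axis
    return sum((-1)**k * u**k for k in range(n_terms))

# Inside the region of convergence (0 < x < 2) the sum approaches 1/x:
print(reciprocal_taylor(1.5, 20))  # approximately 2/3
print(1 / 1.5)
```

At x = 1.5 we have u = 0.5, so the terms shrink geometrically and twenty of them already agree with 1/1.5 to about six decimal places.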

If learning to find a Maclaurin series is like learning to tie your shoes at home, then finding a Taylor series is no different from tying your shoes at school. The two are really the same thing. Indeed a Maclaurin series is nothing but a Taylor series taken around  x = 0.


Exercises

Find the first four nonzero terms of the Taylor series for the following functions (and observe which point you are asked to take each series around).

3)   f(x)  =  √x

around  x = 1.  View solution.
             e^x + e^(-x)
4)  f(x)  =  ------------
                  2
around  x = 0  (that is do a Maclaurin series of this one). Note that this function is known as the hyperbolic cosine. We will be studying it in more detail later on. If you plot this curve, the trace will be the same shape as a slack chain or cable hung between two points. For that reason the curve that its trace makes is also called a catenary, which comes from the Latin word for chain.
View solution.
5)  f(x)  =  tan(x)
around  x = 0  (that is do a Maclaurin series of this one). This one has complicated derivatives, so just do the first three nonzero terms. That will keep you busy enough taking derivatives, and it's good that you get the practice taking complicated derivatives.
View solution.
6)  f(x)  =  arctan(x)
around  x = 0  (that is, do a Maclaurin series of this one). Unless you know some advanced tricks, the derivatives of this one get very complicated very fast. So just take this one out to the first two nonzero terms. Remember that the derivative of arctan(x) is
                 1
    f'(x)  =  -------
              1 + x^2
View solution.

Making Connections (something to think about)

Suppose c is any real number and n is any positive integer. Let

   f(x)  =  (x + c)^n
  1. Work out the Maclaurin series for this function.
  2. Show that the Maclaurin series for this function terminates by itself after n+1 terms (that is all the terms after the first n+1 of them are zero).
  3. Show that the Maclaurin series that you end up with is identical to what you would end up with if you expanded  (x + c)n  using the binomial formula.
Indeed it would have been surprising if the Maclaurin expansion of this did not match the binomial expansion. How could two different polynomials be equal to the same thing? But work it out for yourself to see just how they end up coming out the same.
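If you want to check your work on this one, here is a sketch that compares the Maclaurin coefficients of (x + c)^n against the binomial coefficients for a particular n and c (the helper name and the choice n = 5, c = 2 are mine, purely for illustration):

```python
import math

def maclaurin_coeff(n, c, j):
    """A_j = f^(j)(0)/j! for f(x) = (x + c)^n, using f^(j)(0) = n!/(n-j)! * c^(n-j)."""
    if j > n:
        return 0.0  # the series terminates by itself after n+1 terms
    return math.factorial(n) // math.factorial(n - j) * c**(n - j) / math.factorial(j)

n, c = 5, 2
for j in range(n + 2):
    binomial = math.comb(n, j) * c**(n - j) if j <= n else 0
    print(j, maclaurin_coeff(n, c, j), binomial)
```

Every row prints the same number twice: the Maclaurin coefficient of x^j is exactly the binomial coefficient C(n, j) times c^(n-j), and everything past j = n is zero.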

Return to Table of Contents

Move on to Power Tools for Taking Derivatives of Products
