The theory of Taylor polynomials and Taylor series rests upon one crucial insight: in order to approximate a function, it is often enough to approximate its value and its derivatives (first, second, third, and so on) at one point. We will see why this is true in the next section; for now, we content ourselves with figuring out how to accomplish it.

Let us first examine polynomials themselves. Setting

 p(x) = a_n x^n + ... + a_1 x + a_0

we see that

 p'(x) = n a_n x^(n-1) + ... + 2a_2 x + a_1
 p''(x) = n(n - 1) a_n x^(n-2) + ... + (3)(2) a_3 x + 2a_2
 p^(3)(x) = n(n - 1)(n - 2) a_n x^(n-3) + ... + (4)(3)(2) a_4 x + (3)(2) a_3

Substituting 0 for x in all of these functions yields

 p(0) = a_0
 p'(0) = a_1
 p''(0) = 2a_2
 p^(3)(0) = 6a_3

Indeed, we see a pattern emerging. If we set p^(0) = p (taking the zeroth derivative of a function to be the function itself), then we may write

 p^(i)(0) = i! a_i

for i = 0, 1,…, n. For i > n, it is easy to see that p^(i)(0) = 0.
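The pattern above is easy to verify computationally. The following sketch (the coefficient list and function names are ours, purely for illustration) represents a polynomial by its coefficient list [a_0, a_1, …, a_n], differentiates it repeatedly, and checks that the i-th derivative at 0 is i! a_i, and 0 for i > n.

```python
from math import factorial

def derivative(coeffs):
    """Differentiate a polynomial given by its coefficients [a0, a1, ..., an]."""
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

def deriv_at_zero(coeffs, i):
    """Evaluate the i-th derivative of the polynomial at x = 0."""
    for _ in range(i):
        coeffs = derivative(coeffs)
    return coeffs[0]

# Hypothetical example: p(x) = 7 + 5x + 3x^2 + 2x^3
coeffs = [7, 5, 3, 2]
for i, a_i in enumerate(coeffs):
    assert deriv_at_zero(coeffs, i) == factorial(i) * a_i  # p^(i)(0) = i! a_i
assert deriv_at_zero(coeffs, 5) == 0  # derivatives beyond the degree vanish
```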

Given an arbitrary function f(x), we want to find a polynomial p(x) such that p^(i)(0) = f^(i)(0) for, say, i = 0, 1,…, n. Since, for a polynomial

 p_n(x) = a_n x^n + ... + a_1 x + a_0

we have p_n^(i)(0) = i! a_i for i = 0, 1,…, n, the coefficients must satisfy

 i! a_i = p_n^(i)(0) = f^(i)(0)

Solving for ai yields

 a_i = f^(i)(0) / i!

Allowing each a_i to take on the value imposed by this equation gives the desired polynomial

 p_n(x) = f(0) + f'(0) x + (f''(0)/2!) x^2 + (f'''(0)/3!) x^3 + ... + (f^(n)(0)/n!) x^n

called the Taylor polynomial of degree n for the function f(x).
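For a concrete case, take f(x) = e^x: every derivative of e^x is e^x again, so f^(i)(0) = 1 and the coefficients are simply 1/i!. The sketch below (the function name is ours) evaluates this Taylor polynomial and shows the approximation improving as the degree grows.

```python
from math import exp, factorial

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of e^x at 0: since f^(i)(0) = 1 for
    every i, the coefficients are 1/i!."""
    return sum(x ** i / factorial(i) for i in range(n + 1))

print(taylor_exp(1.0, 2))                    # 2.5
print(abs(taylor_exp(1.0, 10) - exp(1.0)))   # tiny error at degree 10
```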

Now we see the relevance of series. We might guess that as we approximate a function f(x) with higher-degree polynomials, having more derivatives in common with f(x) at x = 0, the resulting polynomials will be better approximations to the actual function f. The natural thing to do is therefore to write down the series that is in some sense the "Taylor polynomial of infinite degree," having the Taylor polynomial of degree n as its n-th partial sum. If we differentiate such a series term by term, we see that it will have all of its derivatives at 0 equal to those of f. We let

 p_∞(x) = f(0) + f'(0) x + (f''(0)/2!) x^2 + (f'''(0)/3!) x^3 + ... = Σ_(n=0)^∞ (f^(n)(0)/n!) x^n

This series, called the Taylor series of f at 0, is a special kind of power series, an object that was explored in the last chapter. As with any power series, a Taylor series may or may not converge for a particular real number x. However, it will converge for every x whose absolute value is less than the radius of convergence. In many cases, the Taylor series will define a function that is equal to the original function f(x) inside this radius of convergence.
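To see the radius of convergence at work, consider the standard Taylor series of ln(1 + x) at 0, namely Σ (-1)^(n+1) x^n / n, whose radius of convergence is 1 (a fact we take as given here; the function and variable names below are ours). Inside the radius the partial sums home in on ln(1 + x); outside it they blow up.

```python
from math import log

def ln1p_partial(x, n):
    """n-th partial sum of the Taylor series of ln(1 + x) at 0:
    the sum of (-1)^(k+1) x^k / k for k = 1, ..., n."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n + 1))

print(abs(ln1p_partial(0.5, 30) - log(1.5)))  # x inside the radius: tiny error
print(abs(ln1p_partial(2.0, 30) - log(3.0)))  # x outside the radius: huge error
```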

Sometimes it is convenient to approximate a function using its derivatives at a point other than 0. For this, consider the function g(x) = f(x + a). By repeated applications of the chain rule, g^(n)(x) = f^(n)(x + a), so the Taylor series for f(x + a) is

 Σ_(n=0)^∞ (g^(n)(0)/n!) x^n = Σ_(n=0)^∞ (f^(n)(a)/n!) x^n

Letting y = x + a, we have the following Taylor series for f(y):

 Σ_(n=0)^∞ (f^(n)(a)/n!) (y - a)^n

This new series is called the Taylor series of f at a. Letting a = 0, we see that we get back the original Taylor series at 0.
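As an illustration of a center other than 0, the sketch below (the names are ours) builds the Taylor polynomial of sin at a = π/2, where the derivatives of sin cycle through the values 1, 0, -1, 0, and compares it against the true value away from the center.

```python
from math import sin, pi, factorial

def sin_taylor_at_half_pi(y, n):
    """Degree-n Taylor polynomial of sin(y) at a = pi/2.  The derivatives
    of sin cycle sin, cos, -sin, -cos, so at pi/2 they cycle 1, 0, -1, 0."""
    derivs = [1, 0, -1, 0]  # f^(k)(pi/2), indexed by k mod 4
    return sum(derivs[k % 4] * (y - pi / 2) ** k / factorial(k)
               for k in range(n + 1))

print(abs(sin_taylor_at_half_pi(2.0, 12) - sin(2.0)))  # very small error
```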