Computing Integrals

Introduction and Summary

We have already seen that, in order to compute definite integrals, it is enough to be able to compute indefinite integrals (or antiderivatives). While for some functions an antiderivative can be guessed fairly easily (for example, ∫ 2 cos(2x) dx = sin(2x) + C), for other functions this task may be exceedingly difficult. We would like to be able to break these complicated antiderivative computations down into simpler ones.
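A guessed antiderivative can always be verified by differentiating it. As an illustrative sketch (not part of the text itself), the third-party SymPy library carries out both directions of this check symbolically:

```python
import sympy as sp

x = sp.symbols('x')

# Compute the antiderivative of 2*cos(2x) symbolically.
# Note: SymPy omits the arbitrary constant of integration C.
F = sp.integrate(2 * sp.cos(2 * x), x)
assert sp.simplify(F - sp.sin(2 * x)) == 0

# Verify the guess the other way: d/dx sin(2x) = 2 cos(2x).
assert sp.diff(sp.sin(2 * x), x) == 2 * sp.cos(2 * x)
```

The second assertion is exactly how such guesses are checked by hand: differentiate the candidate and compare with the integrand.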

Just as with differentiation, there are several methods that allow us to perform this simplification. Some of them, in fact, come directly from the corresponding methods for differentiation, once translated via the Fundamental Theorem of Calculus.

The rules for differentiating constant multiples and sums of functions have obvious analogues for antiderivatives obtained in this way. The product rule yields a method known as integration by parts, while the chain rule yields a method called change of variables.
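The two correspondences can be written out explicitly; translating the product rule and the chain rule through the Fundamental Theorem of Calculus gives:

```latex
% Antidifferentiating the product rule (fg)' = f'g + fg'
% yields integration by parts:
\int f(x)\, g'(x)\, dx = f(x)\, g(x) - \int f'(x)\, g(x)\, dx

% Antidifferentiating the chain rule (F \circ g)'(x) = F'(g(x))\, g'(x)
% yields change of variables (substitution): if F' = f, then
\int f(g(x))\, g'(x)\, dx = F(g(x)) + C
```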

We will also explore another integration technique, called partial fraction decomposition. With these methods at our disposal, we will be able to compute the antiderivatives of many functions.
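As a preview of partial fraction decomposition, a rational function with a factorable denominator splits into simpler fractions that each integrate to a logarithm. A sketch using the third-party SymPy library (for illustration only):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (x**2 - 1)

# Split 1/(x^2 - 1) = 1/((x-1)(x+1)) into fractions with linear denominators.
decomposed = sp.apart(f, x)
# The decomposition is algebraically equal to the original function.
assert sp.simplify(decomposed - f) == 0

# Each simple fraction integrates to a logarithm, so the whole
# antiderivative follows; check it by differentiating back.
F = sp.integrate(f, x)
assert sp.simplify(sp.diff(F, x) - f) == 0
```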

It is important to note, however, a crucial difference between differentiation and antidifferentiation (that is, indefinite integration). Given a function f(x) that is built up from elementary functions by addition, multiplication, division, and composition, it is always possible to find its derivative in terms of elementary functions.

On the other hand, it is often impossible to find an antiderivative of such a function in terms of elementary functions. For example, even so simple a function as f(x) = e^(-x²) has no antiderivative that can be written down in terms of elementary functions.
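This asymmetry is visible in a computer algebra system. As a sketch using the third-party SymPy library: the derivative of e^(-x²) is elementary, but SymPy can only express its antiderivative through the error function erf, a special function that is not elementary.

```python
import sympy as sp

x = sp.symbols('x')

# Differentiation stays within elementary functions:
# d/dx e^(-x^2) = -2x e^(-x^2).
assert sp.diff(sp.exp(-x**2), x) == -2 * x * sp.exp(-x**2)

# Antidifferentiation does not: the result involves erf,
# which cannot be written in terms of elementary functions.
F = sp.integrate(sp.exp(-x**2), x)
print(F)  # sqrt(pi)*erf(x)/2
assert F.has(sp.erf)
```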