A Neat Little Integration Trick

Abusing the +c

Here’s a pretty standard integral. How would you approach it?

$$\int x \tan^{-1}{x} \, dx$$

My go-to approach for these is unironically substituting $\tan^{-1}{x} = t$, but the fastest way is actually integration by parts.

$$\int f(x) \, g'(x) \, dx = f(x)g(x) - \int f'(x)g(x) \, dx$$

We can’t integrate $\tan^{-1}{x}$ easily, so that’ll be what we differentiate. We’ll then integrate $x$ to give $\frac{1}{2} x^2$.
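In the notation of the parts formula above, that choice reads

$$f(x) = \tan^{-1}{x}, \qquad g'(x) = x, \qquad f'(x) = \frac{1}{x^2 + 1}, \qquad g(x) = \frac{1}{2} x^2,$$

and substituting into the formula, we have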

\begin{align*}
&= \frac{1}{2} x^2 \tan^{-1}{x} - \int \frac{1}{2} x^2 \cdot \frac{1}{x^2 + 1} \, dx \\
&= \frac{1}{2} x^2 \tan^{-1}{x} - \frac{1}{2} \int \frac{x^2}{x^2 + 1} \, dx
\end{align*}

Now here’s something we always leave out. Think about what we’re doing when we use integration by parts. We differentiate one function, cool. But we integrate the other – and we all know that the one thing you can never forget when integrating is the $+c$.

Wait, so how can we leave it out with integration by parts? We’ll look at exactly why that works in just a moment, but for now let’s just throw it back in like the law-abiding mathematicians we are:

$$= \left( \frac{1}{2} x^2 + c \right) \tan^{-1}{x} - \frac{1}{2} \int \frac{x^2 + 2c}{x^2 + 1} \, dx$$

Hmm… what now?

Well, here’s the thing: $c$ is an arbitrary constant. We can let it be anything, anything we want. So looking at the terms we’ve got here, what value of $c$ would be most helpful to us? What would give a really nice solution path?

If we put $c = \frac{1}{2}$, notice what happens:

\begin{align*}
&= \left( \frac{1}{2} x^2 + \frac{1}{2} \right) \tan^{-1}{x} - \frac{1}{2} \int \frac{x^2 + 1}{x^2 + 1} \, dx \\
&= \frac{1}{2} \left( x^2 + 1 \right) \tan^{-1}{x} - \frac{1}{2} \int dx
\end{align*}

Yep. It just cancels out.

No $+1 - 1$ trick required, just deal with it by setting $c$. Isn’t that mindblowing?
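For completeness, finishing off that last easy integral (and adding the overall constant we owe at the end) gives

$$\int x \tan^{-1}{x} \, dx = \frac{1}{2} \left( x^2 + 1 \right) \tan^{-1}{x} - \frac{1}{2} x + C.$$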

It feels weird that this works – it’s like we’re abusing maths. It’s unsettling enough that you find yourself questioning whether it always holds.

Well, can’t argue with maths, so let’s go ahead and prove this. In fact, we’ll do it for the general case!

We’ll use $f$ and $g$ to denote two arbitrary functions $f(x)$ and $g(x)$, dropping the $(x)$ for brevity. Start by applying parts, with $g'$ integrating to $g + c$.

\begin{align*}
& \int f g' \, dx \\
=&\ f \left( g + c \right) - \int f' \left( g + c \right) \, dx
\end{align*}

We’ll expand out and split the integral… (remembering not to make an S-I+G-N error!)

\begin{align*}
&= fg + fc - \int \left( f'g + f'c \right) dx \\
&= fg + fc - \int f'g \, dx - c \int f' \, dx
\end{align*}

And the $c$ separates out into an integral of $f'$. Of course, by the fundamental theorem of calculus the integral of a derivative is just the original function, so this integrates to $cf$.

\begin{align*}
&= fg + fc - \int f'g \, dx - cf \\
&= fg - \int f'g \, dx + cf - cf \\
&= fg - \int f'g \, dx
\end{align*}

And would you look at that, the extra terms introduced by adding the $+c$ cancelled out.

Since this happens regardless of what we choose for $f(x)$ and $g(x)$, we can continue omitting the intermediate $+c$ in integration by parts. Of course, don’t forget to add the general $+c$ at the end to account for the accumulated constant from all the integrating!
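If you’d rather not take the algebra on faith, here’s a quick sanity check in SymPy. This is just a sketch: the particular $f$ and $g$ are the ones from the worked example, and any differentiable pair should do.

```python
import sympy as sp

x, c = sp.symbols('x c', real=True)

# The worked example: differentiating the claimed antiderivative
# (1/2)(x^2 + 1) * atan(x) - x/2 should give back x * atan(x).
F = sp.Rational(1, 2) * (x**2 + 1) * sp.atan(x) - x / 2
print(sp.simplify(sp.diff(F, x) - x * sp.atan(x)))  # prints 0

# The general claim: carrying a +c through the parts formula changes nothing.
# Compare f*g - integral(f'*g) with f*(g+c) - integral(f'*(g+c)).
f = sp.atan(x)            # the function we differentiate
g = sp.integrate(x, x)    # one antiderivative of the function we integrate (x^2/2)
without_c = f * g - sp.integrate(sp.diff(f, x) * g, x)
with_c = f * (g + c) - sp.integrate(sp.diff(f, x) * (g + c), x)
print(sp.simplify(without_c - with_c))  # prints 0
```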

I have yet to find more integrals where we can leverage the $+c$ with this trick, though. Not many parts integrals feature derivatives and antiderivatives where all it takes is a constant to cancel them out. If you find any, let me know!