Today, as I was trying to learn about the Leibniz method of differentiating under the integral sign, I came across the bounded convergence theorem, which happens to be very useful for proving the one-dimensional case of the Leibniz method. It is simpler to prove than the Lebesgue dominated convergence theorem, but it's still a very powerful theorem.

First, let's introduce the notion of convergence in measure, which is a useful generalization of the notion of convergence in probability to arbitrary measure spaces.

Convergence in measure:

A sequence of measurable functions $f_n$ converges to $f$ in measure if, for every $\varepsilon > 0$,

$$\lim_{n \to \infty} \mu\left(\{x : |f_n(x) - f(x)| \geq \varepsilon\}\right) = 0$$
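As a concrete illustration (my own example, not from the original post): take $f_n = \mathbf{1}_{[0,1/n]}$ on $[0,1]$ with Lebesgue measure. For any $\varepsilon \in (0,1]$, the set where $|f_n - 0| \geq \varepsilon$ is $[0,1/n]$, whose measure $1/n$ tends to zero, so $f_n \to 0$ in measure. A short numerical sketch, approximating Lebesgue measure by the fraction of points on a fine grid:

```python
import numpy as np

# Hypothetical example: f_n = indicator of [0, 1/n] on [0, 1] with Lebesgue measure.
# For eps in (0, 1], mu({x : |f_n(x) - 0| >= eps}) = 1/n -> 0, so f_n -> 0 in measure.

def measure_of_bad_set(n, eps, grid_size=100_000):
    x = np.linspace(0, 1, grid_size, endpoint=False)
    f_n = (x <= 1.0 / n).astype(float)   # indicator of [0, 1/n]
    bad = np.abs(f_n - 0.0) >= eps       # set where f_n is far from the limit f = 0
    return bad.mean()                    # fraction of grid points ~ Lebesgue measure

for n in [1, 10, 100, 1000]:
    print(n, measure_of_bad_set(n, eps=0.5))
```

The printed measures shrink roughly like $1/n$, as the definition requires.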

Bounded convergence theorem:

If $f_n$ are measurable functions on a set $E$ of finite measure that are uniformly bounded (say $|f_n| \leq M$ for all $n$) and $f_n \to f$ in measure as $n \to \infty$, then

$$\lim_{n \to \infty} \int_E f_n \, d\mu = \int_E f \, d\mu$$
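Before the proof, a hedged numerical sanity check on an example of my own choosing: $f_n(x) = x^n$ on $E = [0,1]$ is uniformly bounded by $M = 1$ and converges to $f = 0$ in measure (the set where $x^n \geq \varepsilon$ is $[\varepsilon^{1/n}, 1]$, whose length shrinks to zero). The theorem then predicts $\int_0^1 f_n \to 0$, which matches the exact values $\int_0^1 x^n \, dx = 1/(n+1)$:

```python
import numpy as np

# f_n(x) = x^n on [0, 1]: uniformly bounded by 1, converges to 0 in measure.
# The bounded convergence theorem predicts integral of f_n -> integral of 0 = 0.
x = np.linspace(0, 1, 1_000_001)
for n in [1, 5, 25, 125]:
    approx = (x ** n).mean()   # mean over a uniform grid ~ integral over [0, 1]
    exact = 1.0 / (n + 1)      # exact antiderivative value, for comparison
    print(n, approx, exact)
```

The approximated integrals decrease toward zero in step with $1/(n+1)$.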

Proof:

Since

$$\left| \int_E f_n \, d\mu - \int_E f \, d\mu \right| \leq \int_E |f_n - f| \, d\mu,$$

we only need to prove that if $g_n \to 0$ in measure and $|g_n| \leq 2M$ (take $g_n = f_n - f$, noting that $|f| \leq M$ almost everywhere), then $\int_E |g_n| \, d\mu \to 0$. Fixing $\varepsilon > 0$ and splitting the domain, we have:

$$\int_E |g_n| \, d\mu = \int_{\{|g_n| \geq \varepsilon\}} |g_n| \, d\mu + \int_{\{|g_n| < \varepsilon\}} |g_n| \, d\mu$$

From which we deduce:

$$\int_E |g_n| \, d\mu \leq 2M \cdot \mu\left(\{x : |g_n(x)| \geq \varepsilon\}\right) + \varepsilon \, \mu(E)$$

However, $g_n \to 0$ in measure, so the first term on the RHS of the above inequality tends to zero as $n \to \infty$. Hence $\limsup_{n \to \infty} \int_E |g_n| \, d\mu \leq \varepsilon \, \mu(E)$, and since $\varepsilon > 0$ was arbitrary, the theorem follows.
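The key inequality in the proof can be checked numerically on an assumed instance (my own, not from the post): $g_n(x) = x^n$ on $E = [0,1]$, where the uniform bound on $|g_n|$ is $1$ and $\mu(E) = 1$, so the bound reads $\int_E |g_n| \leq 1 \cdot \mu(\{|g_n| \geq \varepsilon\}) + \varepsilon$:

```python
import numpy as np

# Verify the splitting inequality from the proof on g_n(x) = x^n over E = [0, 1]:
#     integral |g_n| <= bound * mu({|g_n| >= eps}) + eps * mu(E)
# with bound = 1 (uniform bound on |g_n|) and mu(E) = 1.
x = np.linspace(0, 1, 1_000_001)
bound, mu_E, eps = 1.0, 1.0, 0.1

for n in [1, 10, 100]:
    g_n = x ** n
    lhs = g_n.mean() * mu_E                   # ~ integral of |g_n| over [0, 1]
    bad_measure = (g_n >= eps).mean() * mu_E  # ~ mu({x : |g_n(x)| >= eps})
    rhs = bound * bad_measure + eps * mu_E
    assert lhs <= rhs + 1e-9                  # the proof's inequality holds
    print(n, lhs, bad_measure, rhs)
```

As $n$ grows, the bad-set measure shrinks and the bound collapses to $\varepsilon \, \mu(E)$, exactly the mechanism the proof exploits.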

