This inequality is from statistics. It provides an upper bound on the probability that a random variable deviates too far from its expected value. For a random variable $X$ with mean $\mu = E[X]$, finite variance $\sigma^2$, and any $k > 0$, Chebyshev's inequality states:

$$P(|X - \mu| \geq k) \leq \frac{\sigma^2}{k^2}$$

While the bound is quite weak, it is a very useful tool for proving theoretical results.
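As a quick sanity check, here is a minimal simulation sketch comparing the empirical deviation probability $P(|X - \mu| \geq k)$ against the bound $\sigma^2/k^2$. The choice of a standard normal for $X$ (and of $k = 2$) is arbitrary, purely for illustration:

```python
import random
import statistics

random.seed(0)
# Arbitrary example distribution: standard normal, so mu ~ 0 and sigma^2 ~ 1.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mu = statistics.fmean(samples)
var = statistics.pvariance(samples)

k = 2.0
empirical = sum(abs(x - mu) >= k for x in samples) / len(samples)
bound = var / k ** 2

print(f"empirical P(|X - mu| >= {k}): {empirical:.4f}")
print(f"Chebyshev bound sigma^2/k^2:  {bound:.4f}")
```

For a standard normal the true deviation probability at $k = 2$ is about $0.046$, while the bound is about $0.25$ — comfortably valid, but far from tight, which is exactly the "quite weak" behavior described above.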
The proof itself is a nice example of decomposing a problem into smaller parts. (Yes, it's like calling a helper function!) To that end, we first prove a simpler inequality known as Markov's inequality. Markov's inequality says that for a non-negative random variable $X$ and any $a > 0$:

$$P(X \geq a) \leq \frac{E[X]}{a}$$
Note: some write it with a strict inequality, $P(X > a)$, instead of $P(X \geq a)$.
To prove this result, we let $I$ be the indicator variable that is $1$ if $X \geq a$ and $0$ otherwise. Since $X \geq 0$, we know that

$$aI \leq X$$

(when $I = 1$ we have $X \geq a = aI$, and when $I = 0$ the left side is $0 \leq X$). Taking the expectation of both sides gives $aE[I] \leq E[X]$, and since $E[I] = P(X \geq a)$, dividing by $a$ yields the desired result.
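Markov's inequality can also be checked numerically. A small sketch, assuming an exponential distribution (which is non-negative) purely for illustration:

```python
import random

random.seed(1)
# A non-negative random variable: exponential with mean 1.
samples = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)

a = 3.0
empirical = sum(x >= a for x in samples) / len(samples)
bound = mean / a

print(f"empirical P(X >= {a}): {empirical:.4f}")  # true value is e^(-3), about 0.0498
print(f"Markov bound E[X]/a:   {bound:.4f}")      # roughly 1/3
```

Again the bound holds but is loose — the true tail probability is about $0.05$ against a bound of about $0.33$.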
Now, we can use this to prove Chebyshev's inequality. We apply Markov's inequality to the non-negative random variable $(X - \mu)^2$, with $a = k^2$. This yields:

$$P((X - \mu)^2 \geq k^2) \leq \frac{E[(X - \mu)^2]}{k^2}$$
The numerator of the right-hand side is $\sigma^2$ by definition of variance. This almost looks like what we want, only the left-hand side doesn't quite look correct. All we need to note is that $(X - \mu)^2 \geq k^2$ if and only if $|X - \mu| \geq k$. This implies that

$$P(|X - \mu| \geq k) = P((X - \mu)^2 \geq k^2) \leq \frac{\sigma^2}{k^2}$$

which completes the proof.
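The key step — that $(X - \mu)^2 \geq k^2$ and $|X - \mu| \geq k$ are the same event — can be seen sample by sample in a short sketch (the normal distribution and $k = 1.5$ here are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(2)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mu = statistics.fmean(samples)

k = 1.5
# The two conditions agree for every sample, so the probabilities are equal.
hits_squared = [(x - mu) ** 2 >= k ** 2 for x in samples]
hits_abs = [abs(x - mu) >= k for x in samples]
assert hits_squared == hits_abs

p = sum(hits_abs) / len(samples)
print(f"P(|X - mu| >= {k}) = {p:.4f}")
```

Because the events coincide pointwise, replacing one with the other on the left-hand side of the inequality changes nothing, which is exactly the substitution the proof uses.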
Sure, this was not as short as the last one, but it is still rather elegant. Without the intermediate inequality, the proof would be much harder; with Markov's inequality in hand, it goes through easily with a substitution and a couple of definitions. In addition, Markov's inequality itself can be proven without much machinery.