I’m starting a new series of blog articles following a beginner-friendly approach to understanding some of the challenging concepts in machine learning. To kick things off, we will look at KL divergence.

**Code**: Here

First of all, let us build some ground rules. We will define a few things we need to know like the back of our hands to understand KL divergence.

By distribution we refer to different things, such as data distributions or probability distributions. Here we are interested in probability distributions. Imagine you draw two axes (that is, $X$ and $Y$) on a paper. I like to imagine a distribution as a thread dropped between the two axes, $X$ and $Y$. $X$ represents the different values you are interested in obtaining probabilities for, and $Y$ represents the probability of observing some value on the $X$ axis (that is, $y = p(x)$). I visualize this below.

This is a continuous probability distribution. For example, think of the $X$ axis as the height of a human and the $Y$ axis as the probability of finding a person with that height.

If you want to make this probability distribution discrete, you cut this thread into fixed-length pieces and turn the pieces so that they lie horizontally. Then you create rectangles connecting the edges of each piece of thread to the x-axis. That is a discrete probability distribution.

For a discrete probability distribution, an event is observing $X$ taking some value (e.g. $X = 1$). Let us call the probability of that event $p(X = 1)$. In continuous space you can think of this as a range of values (e.g. $1 \le X \le 1.5$). Note that the definition of an event is not restricted to the values $X$ takes on the X-axis. However, we can move forward considering only that.

To continue from this point onwards, I will be humbly using the example found in this blog post [1]. It is a great post explaining the KL divergence, but I felt some of the intricacies in the explanation could be covered in more detail. All right, let’s get into it.

So the gist of the problem being solved in [1] is this: we’re a group of scientists visiting vast outer space, and we have discovered some space worms. These space worms have varying numbers of teeth. Now we need to send this information back to Earth. But sending information from space to Earth is expensive, so we need to represent this information with a minimum amount of data. A great way to do this is, instead of recording individual numbers, to draw a plot where the $X$ axis is the different numbers of teeth that have been observed (0, 1, 2, …, etc.) and the $Y$ axis is the probability of seeing a worm with that many teeth (that is, the number of worms with $x$ teeth / total number of worms). We have converted our observations into a distribution.

This distribution is more efficient than sending information about individual worms. But we can do better. We can represent this distribution with a known distribution (e.g. uniform, binomial, normal, etc.). For example, if we represent the true distribution with a uniform distribution, we only need to send two pieces of information to recover the true data: the uniform probability and the number of worms. But how do we know which distribution explains the true distribution better? Well, that’s where the KL divergence comes in.

Intuition: KL divergence is a way of measuring how well two distributions (e.g. threads) match each other.

To be able to check numerical correctness, let us change the probability values to more human-friendly values (compared to the values used in [1]). We will assume the following: say we have 100 worms, and we have the following types of worms in the following amounts.

- 0 teeth: **2 (Probability: 0.02)**
- 1 tooth: **3 (Probability: 0.03)**
- 2 teeth: **5 (Probability: 0.05)**
- 3 teeth: **14 (Probability: 0.14)**
- 4 teeth: **16 (Probability: 0.16)**
- 5 teeth: **15 (Probability: 0.15)**
- 6 teeth: **12 (Probability: 0.12)**
- 7 teeth: **8 (Probability: 0.08)**
- 8 teeth: **10 (Probability: 0.10)**
- 9 teeth: **8 (Probability: 0.08)**
- 10 teeth: **7 (Probability: 0.07)**

Quick sanity check! Let’s ensure that the counts add up to 100 and the probabilities add up to 1.0.
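If you want to verify this yourself, here is a minimal sketch in Python (the array and variable names are my own, not from the original notebook):

```python
import numpy as np

# Number of worms observed for each tooth count (0 through 10 teeth).
counts = np.array([2, 3, 5, 14, 16, 15, 12, 8, 10, 8, 7])

# Convert the counts into the empirical (true) probability distribution.
true_dist = counts / counts.sum()

print(counts.sum())     # 100 worms in total
print(true_dist.sum())  # probabilities sum to 1.0
print(true_dist)        # [0.02 0.03 0.05 0.14 0.16 0.15 0.12 0.08 0.1 0.08 0.07]
```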

Here’s what it looks like visually.

Now that that’s out of the way, let us first try to model this distribution with a uniform distribution. A uniform distribution has only a single parameter: the uniform probability, that is, the probability of a given event happening.
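As a rough sketch, assuming the uniform distribution is defined over the 11 observed tooth counts (0 to 10), that single parameter works out as follows:

```python
import numpy as np

n_bins = 11                           # tooth counts 0, 1, ..., 10
uniform_prob = 1.0 / n_bins           # ~0.0909, the only parameter we need to send
uniform_dist = np.full(n_bins, uniform_prob)

print(uniform_prob)
print(uniform_dist.sum())             # sanity check: sums to 1.0
```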

This is what the uniform distribution and the true distribution look like side by side.

Let us keep this result aside, and we will model the true distribution with another type of distribution: the binomial distribution.

You are probably familiar with the binomial probability from calculating the probability of a coin landing on its head. We can extend the same concept to our problem. For a coin you have two possible outcomes, and assuming the probability of the coin landing on its head is $p$ and you run this experiment for $n$ trials, the probability of getting $k$ successes is given by,

$$P(X = k) = \binom{n}{k}\, p^{k} (1-p)^{n-k}$$

Let’s take a side trip and understand each term in the binomial distribution and see if they make sense. The first term is $p^k$. We want to get $k$ successes, where the probability of a single success is $p$. Then the probability of getting $k$ successes is $p^k$. Remember that we’re running the experiment for $n$ trials. Therefore, there are going to be $n-k$ failed trials, with a failure probability of $(1-p)$. So the probability of getting $k$ successes is the joint probability $p^k (1-p)^{n-k}$. Our work doesn’t end here. There are different orderings in which the $k$ successful trials can take place within the $n$ trials. The number of different ways to arrange $k$ elements within $n$ spaces is given by $\binom{n}{k} = \frac{n!}{k!(n-k)!}$. Multiplying all these together gives us the binomial probability of $k$ successes.
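To convince yourself that these pieces really do multiply into the binomial probability, here is a small sketch that builds the formula term by term and checks it against `scipy.stats.binom` (the function and variable names are mine):

```python
from math import comb
from scipy.stats import binom

def binomial_pmf(k, n, p):
    """Probability of k successes in n trials, built piece by piece."""
    successes = p ** k               # k successes, each with probability p
    failures = (1 - p) ** (n - k)    # n - k failures, each with probability 1 - p
    arrangements = comb(n, k)        # ways to place k successes among n trials
    return arrangements * successes * failures

# Quick check against SciPy's implementation.
print(binomial_pmf(3, 10, 0.5))   # 0.1171875
print(binom.pmf(3, 10, 0.5))      # 0.1171875
```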

We can also define a mean and a variance for a binomial distribution. These are given by,

$$\text{mean} = np, \qquad \text{variance} = np(1-p)$$

What does the mean reflect? The mean is the expected (average) number of successes you get if you run $n$ trials. If each trial has a success probability of $p$, it makes sense to say you will get $np$ successes if you run $n$ trials. Next, what does the variance represent? It represents how much the true number of successful trials deviates from the mean value. To understand the variance, let us assume $n = 1$. Then the equation becomes $\text{variance} = p(1-p)$. You have the highest variance when $p = 0.5$ (when it is equally likely to get heads or tails) and the lowest when $p = 0$ or $p = 1$ (when you are sure of getting a tail/head).
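A quick numerical check of that claim (a small sketch assuming $n = 1$, so the variance is simply $p(1-p)$):

```python
import numpy as np

p = np.linspace(0, 1, 11)
variance = p * (1 - p)   # binomial variance with n = 1

for pi, vi in zip(p, variance):
    print(f"p = {pi:.1f}  variance = {vi:.2f}")
# The variance peaks at p = 0.5 and drops to 0 at p = 0 and p = 1.
```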

Now, with a solid understanding of the binomial distribution, let us spiral back to the problem at hand. Let us first calculate the expected number of teeth for the worms. It would be,

$$\text{mean} = \sum_{k=0}^{10} k \cdot p_{\text{true}}(k) = 0\times0.02 + 1\times0.03 + 2\times0.05 + 3\times0.14 + 4\times0.16 + 5\times0.15 + 6\times0.12 + 7\times0.08 + 8\times0.10 + 9\times0.08 + 10\times0.07 = 5.44$$

With the mean known, we can calculate the success probability $p$, where,

$$p = \frac{\text{mean}}{n} = \frac{5.44}{10} = 0.544$$

Note that $n$ is the maximum number of teeth observed in the population of worms (that is, $n = 10$). You might ask why we did not choose $n$ to be the total number of worms (that is, 100) or the total number of events (that is, 11). We will soon see the reason. With that, we can define the probability of any number of teeth as follows.

**Given that teeth counts can take values up to 10, what is the probability of seeing $k$ teeth (where seeing a tooth is a successful trial)?**

From the perspective of the coin flip, this is like asking,

**Given that I have 10 flips, what is the probability of observing $k$ heads?**

Formally, we calculate the probability $P(X = k)$ for all the different values of $k$. Here, $k$ becomes the number of teeth we would like to observe, and $P(X = k)$ is the binomial probability for the bin of $k$ teeth (that is, 0 teeth, 1 tooth, etc.). With $n = 10$ and $p = 0.544$, we calculate them as follows,
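Here is a sketch of those calculations (it rebuilds the true distribution from the counts above; the names are mine, and `scipy.stats.binom` is used for the binomial probabilities):

```python
import numpy as np
from scipy.stats import binom

counts = np.array([2, 3, 5, 14, 16, 15, 12, 8, 10, 8, 7])
true_dist = counts / counts.sum()        # empirical distribution over 0..10 teeth

teeth = np.arange(11)                    # possible tooth counts: 0..10
mean = np.sum(teeth * true_dist)         # expected number of teeth = 5.44
n = 10                                   # maximum number of teeth observed
p = mean / n                             # binomial success probability = 0.544

binomial_dist = binom.pmf(teeth, n, p)   # P(X = k) for each k
print(mean, p)
print(binomial_dist.round(4))
```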

This is what a comparison between the true distribution and the binomial distribution looks like.

Okay, let’s turn back and reflect on what we have done so far. First we understood the problem we want to solve. The problem is to send the teeth statistics of a certain type of space worm across space with minimal effort. For that, we thought of representing the true statistics of the worms with some known distribution, so we can just send the parameters of that distribution instead of the true statistics. We looked at two types of distributions and came up with the following statistics.

- Uniform distribution – with a probability of $1/11 \approx 0.0909$
- Binomial distribution – with $n = 10$, $p = 0.544$, and $k$ taking different values between 0 and 10

Now let’s visualize everything in one place.

Now, with all these fancy calculations, we need a way to measure the matching between each approximated distribution and the true distribution. This is important so that, when we send the information across, we can have peace of mind without worrying about the question “did I choose correctly?” for the rest of our lives.

This is where the KL divergence comes in. KL divergence is formally defined as follows.

$$D_{KL}(p\,\|\,q) = \sum_{i} p(x_i)\,\log\frac{p(x_i)}{q(x_i)}$$

Here $q$ is the approximation and $p$ is the true distribution we’re interested in matching $q$ to. Intuitively, this measures how far a given arbitrary distribution is from the true distribution. If the two distributions match perfectly, $D_{KL}(p\,\|\,q) = 0$; otherwise it can take values between $0$ and $\infty$. The lower the KL divergence value, the better we have matched the true distribution with our approximation.

Let’s look at the KL divergence piece by piece. First take the $\log\frac{p(x_i)}{q(x_i)}$ component. What happens if $q(x_i)$ is higher than $p(x_i)$? Then this component will produce a negative value (because the log of a value less than 1 is negative). On the other hand, if $q(x_i)$ is always smaller than $p(x_i)$, this component will produce positive values. It will be zero only if $p(x_i) = q(x_i)$. Then, to make this an expected value, you weight the log component with $p(x_i)$. This means that matching areas where $p$ has higher probability is more important than matching areas with low probability.

Intuitively, it makes sense to give priority to correctly matching the truly highly probable events in the approximation. Mathematically, this allows you to automatically ignore the areas of the approximation that fall outside of the support (the support is the full length on the x-axis used by a distribution) of the true distribution. Additionally, this avoids the $\log 0$ (that is, $-\infty$) terms that would come up if you tried to compute the log component for any area that falls outside of the support of the true distribution.
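Here is a minimal sketch of the definition in code; note how skipping the bins where $p(x_i) = 0$ mirrors the “ignore areas outside the support” point above (the helper name is mine):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i), skipping bins where p_i = 0."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0   # areas outside the support of p contribute nothing
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))
```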

Let us now compute the KL divergence for each of the approximate distributions we came up with. First let’s take the uniform distribution.

Now for the binomial distribution we get,
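As a self-contained sketch of both computations (using natural logarithms, so the results are in nats; the names are mine):

```python
import numpy as np
from scipy.stats import binom

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

counts = np.array([2, 3, 5, 14, 16, 15, 12, 8, 10, 8, 7])
true_dist = counts / counts.sum()
teeth = np.arange(11)

uniform_dist = np.full(11, 1.0 / 11)             # uniform approximation
p_success = np.sum(teeth * true_dist) / 10       # 0.544
binomial_dist = binom.pmf(teeth, 10, p_success)  # binomial approximation

print("KL(true || uniform)  =", kl_divergence(true_dist, uniform_dist))
print("KL(true || binomial) =", kl_divergence(true_dist, binomial_dist))
# The smaller of the two values indicates the better-matching approximation.
```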

Let’s just play around with the KL divergence now. First we will see how the KL divergence changes when the success probability of the binomial distribution changes. Unfortunately, we cannot do the same with the uniform distribution, because its probability is fixed at $1/11$.
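A sketch of that experiment (sweeping the binomial success probability over a grid and recomputing the KL divergence at each value):

```python
import numpy as np
from scipy.stats import binom

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

counts = np.array([2, 3, 5, 14, 16, 15, 12, 8, 10, 8, 7])
true_dist = counts / counts.sum()
teeth = np.arange(11)

p_grid = np.linspace(0.01, 0.99, 99)   # candidate success probabilities
kl_values = [kl_divergence(true_dist, binom.pmf(teeth, 10, s)) for s in p_grid]

print("success probability with the lowest KL divergence:",
      p_grid[np.argmin(kl_values)])
# Up to the grid resolution, the minimum lands at our choice of p = 0.544.
```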

You can see that as we move away from our choice (red dot), the KL divergence rapidly increases. In fact, if you print some of the KL divergence values a small amount away from our choice, you will see that our choice of the success probability gives the minimum KL divergence.

Now let us see how $D_{KL}(p\,\|\,q)$ and $D_{KL}(q\,\|\,p)$ behave. This behavior is shown in the following figure.

It seems there is an area where $D_{KL}(p\,\|\,q)$ and $D_{KL}(q\,\|\,p)$ have a minimum distance between them. Let us plot the difference between the two lines and also zoom into the area where our choice of the probability lies.

It seems that our choice of probability also lies very close to the area where $D_{KL}(p\,\|\,q)$ and $D_{KL}(q\,\|\,p)$ have the least difference (though not exactly). Still, it’s an interesting finding. I’m not sure of the reason why it is that way, but if someone knows, they can shed some light.

Now we have some solid results. Though the uniform distribution appears simple and very uninformative while the binomial distribution carries more subtlety, the uniform distribution actually matches the true distribution better than the binomial distribution. To be honest, this result took me by surprise, because I expected the binomial to model the true distribution better. Therefore, this teaches us the important lesson of why we should not trust our instincts alone!

**Code**: Here

[1] https://www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained
