The paper can be found here.

This paper’s main contribution is an alteration to the standard neural network architectures used in model-free reinforcement learning (RL), such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs); for example, the Deep Q-Network uses a standard CNN. The authors introduce an architecture that is better suited for model-free RL, while remaining compatible with existing and future RL algorithms.

In a nutshell, this “alteration” is to share the convolutional features, split the fully connected layers into two separate modules, and then combine the two modules again to form a single output layer. “What in the world?” you might say. However, this particular design choice is justified. The figure below depicts how it is done and gives more intuition.

Now why does this “alteration” make sense? The primary objective of this deep model is the same as that of any other network: it takes a high-dimensional input, e.g. an image from an Atari game (a state $s$), and spits out Q-values, i.e. the state-action values $Q(s, a)$ for all actions $a \in \mathcal{A}$, where $\mathcal{A}$ is the action space. However, this state-action value is made up of two main components.

- A state-dependent, action-independent *Value* function $V(s)$: the goodness of state $s$
- A state- and action-dependent *Advantage* function $A(s, a)$: the goodness of taking action $a$ in state $s$

Looking more closely at these two entities, it is possible to see that they behave differently: $V$ changes more slowly than $A$. Broadly speaking, $V$ (being action-independent) reflects the combined effect of many actions, whereas $A$ changes with the effect of a single action. $Q$ is the sum of these two entities ($\theta$, $\alpha$ and $\beta$ below are just parameters of the neural network).

$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + A(s, a; \theta, \alpha) \tag{1}$$

Now let’s put the ideas of the last two paragraphs together. If the learning model outputs $Q$, and $Q$ is made of two entities that behave differently, it makes sense to have separate sets of parameters ($\alpha$ and $\beta$) to learn these two entities and then combine them at the output layer. Thus arises the **“Dueling Network Architecture”**.
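To make the split concrete, here is a minimal numpy sketch of the two-stream head with toy linear streams and made-up weights of my own (the actual network uses shared convolutional layers followed by fully connected streams), combined with the naive sum $V + A$:

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_head(features, W_v, W_a):
    """Combine a scalar value stream and a per-action advantage
    stream into Q-values via the naive sum Q(s, a) = V(s) + A(s, a).
    (Toy linear streams standing in for fully connected layers.)"""
    v = features @ W_v   # shape (1,): state value V(s)
    a = features @ W_a   # shape (num_actions,): advantages A(s, .)
    return v + a         # broadcasts to shape (num_actions,)

features = rng.standard_normal(8)     # stand-in for shared conv features
W_v = rng.standard_normal((8, 1))     # value-stream parameters (beta)
W_a = rng.standard_normal((8, 4))     # advantage-stream parameters (alpha), 4 actions
q_values = dueling_head(features, W_v, W_a)
print(q_values.shape)  # (4,)
```

Note that the value stream produces a single scalar per state, while the advantage stream produces one entry per action; broadcasting does the final combination.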

The loss function for the Dueling Architecture is the same as for the Deep Q-Network.

$$L_i(\theta_i) = \mathbb{E}_{s, a, r, s'}\!\left[\left(y_i^{DQN} - Q(s, a; \theta_i)\right)^2\right] \tag{2}$$

where $y_i^{DQN} = r + \gamma \max_{a'} Q(s', a'; \theta^-)$. I’m not going to explain the notation here, as it is pretty standard in deep reinforcement learning. $\theta^-$ comes from a “target network”, as in the Deep Q-Network paper.
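A sketch of how that one-step target is computed (the helper and its names are mine, not the paper’s code):

```python
import numpy as np

def dqn_target(reward, q_next, gamma=0.99, done=False):
    """One-step TD target y = r + gamma * max_a' Q(s', a'; theta^-),
    where q_next holds the target network's Q-values for state s'."""
    return reward if done else reward + gamma * float(np.max(q_next))

y = dqn_target(reward=1.0, q_next=np.array([0.2, 0.5, -0.1]))
# squared error against the online network's estimate Q(s, a; theta_i)
loss = (y - 0.3) ** 2
print(round(y, 3))  # 1.495
```

Because the target is built from the frozen parameters $\theta^-$, the regression target does not move with every gradient step, which stabilizes learning.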

We now delve into an issue underlying this approach, known as the **“issue of unidentifiability”**. To see it, add a constant to the value and subtract the same constant from the advantage: the effects cancel out, leaving $Q$ unchanged. This is not a good property to have, as these two entities serve two very different purposes. The fix is to subtract a baseline from the advantage stream:

$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left(A(s, a; \theta, \alpha) - \frac{1}{|\mathcal{A}|}\sum_{a'} A(s, a'; \theta, \alpha)\right) \tag{3}$$

where the subtracted term is the baseline. In the paper they use the mean advantage over all actions, $\frac{1}{|\mathcal{A}|}\sum_{a'} A(s, a'; \theta, \alpha)$. I’m not exactly sure about the purpose of the baseline, but I assume it is motivated by this paper. The baseline helps to solve the problem of unidentifiability, to reduce variance and also to converge faster (resource for theoretical understanding).
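The mean-baseline aggregation is easy to check numerically. This tiny numpy sketch (toy numbers of my own) shows that shifting all advantages by a constant leaves the resulting Q-values unchanged, which is exactly the degree of freedom that made the naive sum unidentifiable:

```python
import numpy as np

def combine(v, advantages):
    """Subtract the mean advantage before adding the value stream,
    so V and A are identified up to the forced zero-mean baseline."""
    return v + (advantages - advantages.mean())

adv = np.array([1.0, 2.0, 3.0])
q1 = combine(0.5, adv)
q2 = combine(0.5, adv + 10.0)   # shift all advantages by a constant...
print(np.allclose(q1, q2))      # ...and Q is unchanged: True
```

With the naive sum of equation (1), `q2` would have differed from `q1` by 10 in every entry even though the state and actions are the same.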

The first experiment is to navigate a simple corridor composed of two vertical sections (10 units each) and one horizontal section (50 units). The agent starts at one end and must navigate itself to the other end. Actions include going left, right, up, down and no-op. Three experiments were performed, augmenting the action space to sizes 5, 10 and 20 by adding redundant no-op actions. This simple experiment is designed to show that the Dueling Network converges effectively even with a large action space, where the standard single-stream network converges more slowly (Figure 3 in the paper).

To evaluate the performance of the Dueling Network in the arcade game environment, the following performance measure is used.

$$\text{Score} = \frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Baseline}}}{\max\{\text{Score}_{\text{Human}}, \text{Score}_{\text{Baseline}}\} - \text{Score}_{\text{Random}}} \times 100\% \tag{4}$$

where $\text{Score}_{\text{Human}}$ is the best human performance (median after playing two hours of the game), $\text{Score}_{\text{Baseline}}$ is the performance of a baseline agent that performs well, and $\text{Score}_{\text{Random}}$ is the score obtained by sampling actions uniformly at random.
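As a quick sketch of the measure (the function name and the example scores are mine, for illustration only):

```python
def normalized_score(agent, baseline, human, random):
    """Percentage improvement of the agent over the baseline,
    normalized by the gap between the better of human/baseline
    performance and random play."""
    return 100.0 * (agent - baseline) / (max(human, baseline) - random)

# e.g. an agent scoring 800 where baseline = 600, human = 1000, random = 100
improvement = normalized_score(800, 600, 1000, 100)  # roughly 22% improvement
```

Taking the max of human and baseline scores in the denominator keeps the measure meaningful on games where the baseline agent already exceeds human performance.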

Experiments include comparisons against two baselines: Double Deep Q-Network (DDQN) and Prioritized DDQN. The experiments indicate that the dueling network outperforms both baselines in a majority of games (Figures 4 and 5 in the paper).

I find the idea behind the paper quite interesting: exploiting the formulation of the state-action value and incorporating it into the neural network architecture makes the learning more effective. However, I was thrown off by the following paragraph in the paper (page 4, towards the end).

**“ However, we need to keep in mind that Q is only a parameterized estimate of the true Q-function. Moreover, it would be wrong to conclude that V is a good estimator of the state-value function, or likewise that A provides a reasonable estimate of the advantage function.“**

Well, this paragraph undermines all the arguments they built up to justify their method (in my opinion). They’re basically saying that splitting the network to learn V and A separately helps it outperform the standard architectures, but then that it would be wrong to consider these as reasonable estimators of the value and advantage functions. (¯\_(ツ)_/¯)

It would also be interesting to see this model applied to some real-world control problems instead of just Atari games.
