The paper can be found here.

One of the key advantages of deep models is that they make hand-crafted feature engineering largely obsolete. With this came a paradigm shift: from engineering robust features to engineering deep architectures, i.e. hyperparameters, for machine learning tasks.

This paper uses reinforcement learning (RL) to find the best deep architecture for a given task. This is achieved with a “controller”, a Long Short-Term Memory (LSTM) network (a variant of the Recurrent Neural Network), that outputs architectures, e.g. a child network which is a Convolutional Neural Network (CNN). The child network is then trained on a dataset (e.g. CIFAR-10), and its accuracy on a validation set is used as the reward signal in the RL environment. Finally, the controller updates its weights in the direction that maximizes the expected reward. This is done with an algorithm known as “REINFORCE”.
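To make the loop concrete, here is a minimal sketch of one search iteration. The names and the search space are my own illustrative assumptions, and random sampling plus a dummy score stand in for the LSTM controller and the expensive child-network training:

```python
import random

random.seed(0)  # for reproducibility

# Hypothetical search space: each layer is (num_filters, filter_h, filter_w)
FILTER_COUNTS = [24, 36, 48, 64]
FILTER_SIZES = [1, 3, 5, 7]

def sample_architecture(num_layers=3):
    """Stand-in for the controller: pick each hyperparameter at random.
    In the paper, an LSTM policy makes these choices."""
    return [(random.choice(FILTER_COUNTS),
             random.choice(FILTER_SIZES),
             random.choice(FILTER_SIZES))
            for _ in range(num_layers)]

def train_and_evaluate(arch):
    """Stand-in for training the child CNN; returns a dummy
    'validation accuracy' instead of a real one."""
    return 0.5 + 0.01 * len(arch)

# One iteration of the search loop
arch = sample_architecture()
reward = train_and_evaluate(arch)  # reward = validation accuracy
# ...REINFORCE would now update the controller toward higher reward
```

The real algorithm repeats this loop thousands of times, which is exactly why the training cost discussed later becomes the bottleneck.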

This problem of “hyperparameter optimization” is not a new concept. It is quite old and has previously been attempted with techniques such as genetic algorithms, Bayesian optimization, etc. This work also shares a close relationship with “learning to learn” and “meta-learning”.

Let us now sail the high seas, diving deep into the algorithm!

The controller is an LSTM. An LSTM is a flexible learning model that performs well on time-series data (a shallow explanation, admittedly). The paper doesn’t define what the initial input is, so I assume it’s a constant such as “ROOT” that is always the same. The controller then predicts layers: for each layer, the LSTM predicts the number of filters, filter height, filter width, stride height and stride width as a sequence, where each of these is a single prediction. Each prediction is made by sending the input through the network (a single column in the figure) and reading the output probabilities from a softmax layer (colored cells in the figure). The prediction at time step t becomes the input at time step t+1. The controller is illustrated in the figure below. The number in the upper-right corner indicates the number of output units in that layer.
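The autoregressive sampling described above can be sketched in numpy. This is a toy stand-in, not the paper’s model: random per-decision softmax heads replace the trained LSTM, the option lists are assumptions, and I embed each sampled token as a one-hot to feed it back as the next input:

```python
import numpy as np

rng = np.random.default_rng(0)

# The five decisions the controller makes per layer (toy option lists)
CHOICES = {
    "num_filters":   [24, 36, 48, 64],
    "filter_height": [1, 3, 5, 7],
    "filter_width":  [1, 3, 5, 7],
    "stride_height": [1, 2, 3],
    "stride_width":  [1, 2, 3],
}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_layer(hidden_dim=8):
    """Sample one layer's hyperparameters autoregressively: each
    softmax prediction is fed back in as the next step's input.
    Weights are random stand-ins for a trained LSTM controller."""
    x = np.zeros(hidden_dim)              # constant "ROOT"-like start input
    layer = {}
    for name, options in CHOICES.items():
        W = rng.normal(size=(len(options), hidden_dim)) * 0.1
        probs = softmax(W @ x)            # softmax head for this decision
        idx = rng.choice(len(options), p=probs)
        layer[name] = options[idx]
        x = np.zeros(hidden_dim)          # embed the sampled token...
        x[idx % hidden_dim] = 1.0         # ...as the next input (assumption)
    return layer

layer = sample_layer()
```

Running this yields one dictionary of layer hyperparameters; the controller repeats it once per layer of the child network.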

The concatenated output of the controller is a sequence of tokens that specifies the hyperparameters of each layer. A network with the given specification is then built and trained on the task, e.g. classification on CIFAR-10. Next, the performance (accuracy) of the generated network is evaluated on a validation set. This accuracy is used as the reward signal R to update the parameters θ of the controller.

However, there’s a pitfall here: R is non-differentiable with respect to θ, as R is not an output of the controller but of the child network. To circumvent this, a special policy gradient technique known as REINFORCE is used. The goal of REINFORCE (Ref) is to provide a mechanism, i.e. an equation, that allows us to update θ in a direction that maximizes a non-differentiable reward. By using REINFORCE and an empirical approximation, we arrive at the update rule for θ below.

$$\nabla_{\theta} J(\theta) \approx \frac{1}{m}\sum_{k=1}^{m}\sum_{t=1}^{T} \nabla_{\theta} \log P(a_t \mid a_{(t-1):1}; \theta)\, R_k \qquad (1)$$

This basically says that the gradient for the controller is approximated by *“the sum of the gradients of the log-probabilities of the predictions, weighted by the reward of each network sampled by the controller, averaged over all the networks sampled in a single batch”*. I know it’s a mouthful, so process it slowly! That’s it for the basic algorithm.
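To see the REINFORCE update at work, here is a self-contained toy in numpy. Instead of an LSTM over many decisions, the policy is a single softmax over four candidate “architectures” with assumed fixed rewards; the gradient of the log-probability of a softmax sample is the familiar one-hot-minus-probabilities term:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy policy: one softmax over 4 candidate "architectures"
theta = np.zeros(4)
# Assumed fixed "validation accuracies" for each candidate
rewards = np.array([0.2, 0.9, 0.4, 0.1])

lr, m = 0.5, 32                           # learning rate, batch size
for _ in range(200):
    probs = softmax(theta)
    grad = np.zeros_like(theta)
    for _ in range(m):
        a = rng.choice(4, p=probs)        # sample an architecture
        R = rewards[a]                    # its reward (accuracy)
        # grad of log P(a; theta) for a softmax is onehot(a) - probs
        grad += (np.eye(4)[a] - probs) * R
    theta += lr * grad / m                # average over the batch

final_probs = softmax(theta)              # should favor the best candidate
```

After training, the policy concentrates its probability mass on the highest-reward choice, which is exactly the behavior the controller needs at the scale of real architectures.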

It’s no secret that they have to evaluate thousands of models, each with millions of parameters, which is computationally very costly. To solve this they use a “parameter-server” scheme (Ref).

Here they also introduce ways to explore more advanced hyperparameters such as “skip connections” and “branching layers”. Skip connections play a vital role in very deep networks as they help the gradient flow (Ref). By branching layers I assume something similar to the Inception module from Google.

To implement this they introduce “anchor points” between the hyperparameter sets of consecutive layers. The anchor point at layer i has i−1 incoming connections, which say which previous layers will be inputs to layer i. Each of these connections is predicted by a sigmoidal output that takes weights and the hidden states of the two layers as input (see the original paper for details).
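My reading of the anchor-point mechanism, sketched in numpy: the probability that layer j feeds layer i is a content-based sigmoid over the two hidden states. The weight matrices here are random stand-ins for learned parameters, and the hidden states are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8  # controller hidden size (assumed)

# Random stand-ins for learned parameters
W_prev = rng.normal(size=(d, d)) * 0.1
W_curr = rng.normal(size=(d, d)) * 0.1
v = rng.normal(size=d) * 0.1

def skip_probability(h_j, h_i):
    """P(layer j is an input to layer i): a content-based sigmoid
    over the hidden states at the two anchor points."""
    return 1.0 / (1.0 + np.exp(-(v @ np.tanh(W_prev @ h_j + W_curr @ h_i))))

# Anchor point of layer 3: one Bernoulli decision per previous layer
hidden_states = [rng.normal(size=d) for _ in range(4)]  # h_0 .. h_3
h_i = hidden_states[3]
connections = [rng.random() < skip_probability(h_j, h_i)
               for h_j in hidden_states[:3]]             # 3 = i - 1 decisions
```

So layer i ends up with a binary mask over all earlier layers, which is how skip connections and multi-input (branching) layers enter the sampled architecture.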

Things are about to get hairier, so bear with me. I must warn you that in this part of the paper the notation is all over the place and difficult to follow (probably because it was under review). So I’m going to build my own interpretation, following my LSTM post.

It’s very hard to give a comprehensive explanation here, due both to the poorly formulated explanation in the paper and to the complexity of LSTMs themselves. But I will explain it to the extent I understood. Designing an LSTM is not as straightforward as a CNN, as it has numerous time-dependent and time-independent calculations happening. But an LSTM cell can be decoded as a tree structure that takes three inputs, h_{t−1} (previous state), x_t (input) and c_{t−1} (previous memory), and does various transformations to output h_t and c_t, as illustrated in the upper-left corner of the second figure.
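For reference, here is the standard LSTM cell the controller’s tree generalizes, as a minimal numpy sketch (this is the textbook cell, not one the search discovers; gate layout and names are the usual convention):

```python
import numpy as np

def lstm_cell(x_t, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM: (x_t, h_{t-1}, c_{t-1}) -> (h_t, c_t).
    W: (4d, d_in), U: (4d, d), b: (4d,) hold the four gates stacked."""
    d = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    i, f, o, g = z[:d], z[d:2*d], z[2*d:3*d], z[3*d:]
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # new memory
    h_t = sigmoid(o) * np.tanh(c_t)                      # new hidden state
    return h_t, c_t

# One step with random weights (d_in = 3 input features, d = 4 hidden units)
rng = np.random.default_rng(3)
d_in, d = 3, 4
h, c = lstm_cell(rng.normal(size=d_in), np.zeros(d), np.zeros(d),
                 rng.normal(size=(4 * d, d_in)), rng.normal(size=(4 * d, d)),
                 np.zeros(4 * d))
```

The controller’s job is essentially to replace the fixed combination operators and activations in this cell (the adds, multiplies, sigmoids and tanhs) with sampled ones at each node of the tree.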

Now let us see how the controller works for designing LSTMs. First the network produces two outputs per tree index: a way to combine x_t and h_{t−1}, and an activation function. This is quite sensible, but the rest is unclear. Next there are two outputs called “cell inject”, which are used by the last output to calculate what the authors call the new memory c_t, which in turn is used to calculate h_t. And I don’t have a good explanation for how or why this part works (dashed red line).

In the experiments they claim that their model outperforms the previous similar state-of-the-art models on CIFAR-10 and is 1.05x faster. But I wouldn’t say these are ground-breaking results. Finally, the LSTM cell their model designed is said to outperform the previous best on the Penn Treebank dataset.

I find their approach to solving the problem of hyperparameter optimization interesting. **But some parts of the explanation of the algorithm are poorly structured and lack certain details** (e.g. what is the input to the controller? The latter part of the controller in Figure 5 is very confusing, and Figure 5 (left) is missing some of the symbols used in the text). **Also, the experiments could improve.** It would be more convincing to see experiments on more datasets (e.g. CIFAR-100, ImageNet). And the very fact that they market in the abstract “*Our CIFAR-10 model … which is 0.09 percent better and 1.05x faster*” is counterargued in the experiments by saying “*the DenseNet model … uses 1×1 convolutions to reduce its parameters, which we did not do, so it is not an exact comparison*”. So I have my doubts about this method.

PS: Special thanks to Pepe for directing me to this paper.
