In this tutorial, we will discuss how tf.get_variable() initializes a tensor when its initializer argument is None.
In order to improve the performance of our model, we can use the Xavier method to initialize weights. In this tutorial, we will introduce how to initialize TensorFlow weights using Xavier initialization.
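As a minimal sketch of the idea (written in NumPy rather than TensorFlow, and assuming the uniform Xavier/Glorot variant), the initializer draws weights from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)):

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, seed=None):
    """Sketch of Xavier/Glorot uniform initialization:
    draw from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)).
    The function name is illustrative, not a TensorFlow API."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = xavier_uniform(784, 256, seed=0)
print(W.shape)  # (784, 256)
```

Scaling the range by both fan_in and fan_out keeps the variance of activations and gradients roughly constant across layers, which is the motivation behind the method.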
In this tutorial, we introduce why we should add a forget bias to the LSTM forget gate, and how to add one to our custom LSTM network.
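A small sketch of the intuition (plain NumPy, with an illustrative pre-activation value): adding a forget bias of 1.0 to the forget-gate pre-activation pushes the gate toward 1 at the start of training, so the cell state is mostly remembered and gradients can flow:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# At initialization the forget-gate pre-activation is near 0, so without
# a bias the gate starts around 0.5 and forgets half the cell state on
# every step; adding forget_bias = 1.0 starts it closer to 1 ("remember").
pre_activation = 0.0
forget_bias = 1.0

gate_without_bias = sigmoid(pre_activation)
gate_with_bias = sigmoid(pre_activation + forget_bias)
print(gate_without_bias, gate_with_bias)  # 0.5 vs ~0.73
```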
In this tutorial, we will use our custom GRU network to classify MNIST handwritten digits, in order to evaluate the effectiveness of our custom GRU.
In this tutorial, we will introduce how to build our custom GRU network using TensorFlow, which is very similar to creating a custom LSTM network.
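To show the computation a custom GRU has to implement, here is a single GRU step sketched in NumPy using the standard equations (update gate z, reset gate r, candidate state); the weight names and dimensions are illustrative, and some formulations swap the roles of z and 1 - z:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU step:
    z  = sigmoid(x Wz + h_prev Uz + bz)        # update gate
    r  = sigmoid(x Wr + h_prev Ur + br)        # reset gate
    h~ = tanh(x Wh + (r * h_prev) Uh + bh)     # candidate state
    h  = (1 - z) * h_prev + z * h~
    """
    z = sigmoid(x @ Wz + h_prev @ Uz + bz)
    r = sigmoid(x @ Wr + h_prev @ Ur + br)
    h_cand = np.tanh(x @ Wh + (r * h_prev) @ Uh + bh)
    return (1.0 - z) * h_prev + z * h_cand

rng = np.random.default_rng(0)
input_dim, hidden_dim, batch = 4, 3, 2
# Nine parameters: (Wz, Uz, bz), (Wr, Ur, br), (Wh, Uh, bh)
shapes = [(input_dim, hidden_dim), (hidden_dim, hidden_dim), (hidden_dim,)] * 3
params = [rng.standard_normal(s) * 0.1 for s in shapes]

x = rng.standard_normal((batch, input_dim))
h0 = np.zeros((batch, hidden_dim))
h1 = gru_step(x, h0, *params)
print(h1.shape)  # (2, 3)
```

A custom TensorFlow GRU cell wraps exactly this step, with the weights stored as trainable variables.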
Many models improve on the LSTM; the GRU (Gated Recurrent Unit) is one of them. In this tutorial, we will introduce the GRU and compare it with the LSTM.
The GRU contains a reset gate. Can we remove it, and will the GRU's performance degrade if we do? The answer is that the reset gate can be removed.
In this tutorial, we compare tf.reverse() and tf.reverse_sequence(), then use an example to show TensorFlow beginners how to use tf.reverse_sequence().
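To illustrate the behavior without requiring TensorFlow, here is a NumPy emulation of what tf.reverse_sequence() does with batch_axis=0 and seq_axis=1: for each batch row i, only the first seq_lengths[i] elements are reversed, and any padding beyond that length is left in place (which is exactly how it differs from tf.reverse()):

```python
import numpy as np

def reverse_sequence(batch, seq_lengths):
    """NumPy emulation of tf.reverse_sequence(batch_axis=0, seq_axis=1):
    reverse only the first seq_lengths[i] elements of each row."""
    out = batch.copy()
    for i, n in enumerate(seq_lengths):
        out[i, :n] = batch[i, :n][::-1]
    return out

x = np.array([[1, 2, 3, 0],   # real length 3, one padding element
              [4, 5, 6, 7]])  # real length 4
result = reverse_sequence(x, [3, 4])
print(result)
# [[3 2 1 0]
#  [7 6 5 4]]
```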
Bias is widely used in neural networks, but why do we need it? In this tutorial, we will introduce the effect of bias and explain why we should use it in a neural network.
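A tiny sketch of the core effect (illustrative numbers): a linear unit without bias always maps x = 0 to 0, so its decision boundary must pass through the origin; adding a bias b lets the boundary shift:

```python
# Without bias, y = w * x is 0 at x = 0 for any weight w,
# so the unit can never learn a threshold away from the origin.
w = 2.0
print(w * 0.0)  # always 0.0

# With a bias b, y = w * x + b can model e.g. "fire when x > 1.5".
b = -3.0
for x in [1.0, 2.0]:
    y = w * x + b
    print(x, y > 0)  # 1.0 -> False, 2.0 -> True
```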
The matrix norm is an important concept in deep learning. In this tutorial, we will introduce some basic properties of matrix norms and show how to calculate them.
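As a quick sketch (using NumPy's np.linalg.norm rather than TensorFlow), the common matrix norms on a small example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

fro = np.linalg.norm(A, 'fro')    # Frobenius: sqrt(sum of squared entries) = sqrt(30)
spec = np.linalg.norm(A, 2)       # spectral norm: largest singular value
one = np.linalg.norm(A, 1)        # max absolute column sum = 6
inf = np.linalg.norm(A, np.inf)   # max absolute row sum = 7
print(fro, spec, one, inf)
```

Note the general relation that the spectral norm is never larger than the Frobenius norm, which the example above also satisfies.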