In this tutorial, we explain why we should add a forget bias to the LSTM forget gate, and how to add one to our custom LSTM network.
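As a quick illustration of the idea (a NumPy sketch, not the tutorial's TensorFlow code): at initialization the forget-gate pre-activation is near zero, so without a bias the sigmoid gate starts at 0.5 and forgets half the cell state. Adding a positive forget bias such as 1.0 pushes the gate toward 1, so the cell keeps its memory early in training:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forget gate: f = sigmoid(W.[h, x] + b).
# With zero-initialized weights, W.[h, x] is 0 at the start of training.
pre_activation = 0.0

print(sigmoid(pre_activation + 0.0))  # no forget bias -> 0.5
print(sigmoid(pre_activation + 1.0))  # forget bias 1.0 -> ~0.731
```

Starting the gate near 0.73 instead of 0.5 lets gradients flow through the cell state for longer, which is why a forget bias of 1.0 is a common default.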
In this tutorial, we will use our custom GRU network to classify MNIST handwritten digits, in order to evaluate the effectiveness of our custom GRU.
In this tutorial, we will show how to build a custom GRU network using TensorFlow, which is very similar to building a custom LSTM network.
Many models improve on LSTM; GRU (Gated Recurrent Unit) is one of them. In this tutorial, we will introduce GRU and compare it with LSTM.
A GRU contains a reset gate. Can we remove this reset gate? If we do, will the performance of the GRU decrease? The answer is that the reset gate can be removed.
In this tutorial, we compare tf.reverse() and tf.reverse_sequence(), then use an example to show TensorFlow beginners how to use tf.reverse_sequence().
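A minimal sketch of the difference (written in NumPy so it runs without TensorFlow): tf.reverse() flips an entire axis, while tf.reverse_sequence() reverses only the first seq_lengths[i] elements of each row, leaving any padding in place:

```python
import numpy as np

x = np.array([[1, 2, 3, 0],   # a padded sequence of length 3
              [4, 5, 6, 7]])  # a full sequence of length 4

# Like tf.reverse(x, axis=[1]): every row is flipped completely.
full_reverse = x[:, ::-1]

# Like tf.reverse_sequence(x, seq_lengths=[3, 4], seq_axis=1, batch_axis=0):
# only the valid prefix of each row is reversed; padding stays at the end.
seq_lengths = [3, 4]
partial = x.copy()
for i, n in enumerate(seq_lengths):
    partial[i, :n] = partial[i, :n][::-1]

print(full_reverse)  # [[0 3 2 1] [7 6 5 4]]
print(partial)       # [[3 2 1 0] [7 6 5 4]]
```

Note how the padding zero in the first row ends up at the front with tf.reverse(), but stays at the end with tf.reverse_sequence(), which is why the latter is preferred for batches of variable-length sequences.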
Bias is widely used in neural networks, but why do we need it? In this tutorial, we will introduce the effect of bias and explain why we should use it in a neural network.
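The core effect can be shown with a single linear neuron (a toy NumPy sketch): without a bias, the neuron's output must pass through the origin, so it can never fit data with a constant offset:

```python
import numpy as np

# A single linear neuron: y = w * x + b.
x = np.array([0.0, 1.0, 2.0])
target = 2 * x + 3  # data with a constant offset of 3

w = 2.0
no_bias = w * x           # forced through the origin, misses by 3 everywhere
with_bias = w * x + 3.0   # the bias shifts the line to fit exactly

print(no_bias)    # [0. 2. 4.]
print(with_bias)  # [3. 5. 7.]
```

The bias term shifts the activation threshold, giving each neuron a degree of freedom that weights alone cannot provide.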
The matrix norm is an important concept in deep learning. In this tutorial, we will introduce some basic properties of matrix norms and show you how to calculate them.
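The common matrix norms can be computed with NumPy's np.linalg.norm (a quick sketch, not the tutorial's own code):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

fro = np.linalg.norm(A, 'fro')     # Frobenius: sqrt(1 + 4 + 9 + 16) = sqrt(30)
one = np.linalg.norm(A, 1)         # 1-norm: max column abs-sum = max(4, 6) = 6
inf = np.linalg.norm(A, np.inf)    # inf-norm: max row abs-sum = max(3, 7) = 7

print(fro, one, inf)
```

TensorFlow offers the analogous tf.norm() for tensors, with the same `ord` choices for matrices.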
The TensorFlow tf.reverse() function allows us to reverse a tensor along a given axis. In this tutorial, we will use some examples to show you how to use this function correctly.
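A sketch of what tf.reverse() computes, written with NumPy's np.flip so it runs without TensorFlow (in TensorFlow the calls would be tf.reverse(x, axis=[0]) and tf.reverse(x, axis=[1])):

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])

# Like tf.reverse(x, axis=[0]): flip along the first axis (swap the rows).
rows_flipped = np.flip(x, axis=0)

# Like tf.reverse(x, axis=[1]): flip along the second axis (reverse each row).
cols_flipped = np.flip(x, axis=1)

print(rows_flipped)  # [[4 5 6] [1 2 3]]
print(cols_flipped)  # [[3 2 1] [6 5 4]]
```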
When you use the TensorFlow tf.reverse() function, you may encounter this error: ValueError: Shape must be rank 1 but is rank 0. In this tutorial, we will tell you how to fix this error.
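This error is typically raised when the axis argument is passed as a scalar rather than a rank-1 list. A hedged sketch of the fix (the tf lines are shown as comments so the example runs without TensorFlow; the NumPy line mirrors the corrected call):

```python
import numpy as np

x = np.array([1, 2, 3, 4])

# In TensorFlow, tf.reverse() expects axis to be rank 1 (a list of axes):
#   tf.reverse(x, axis=0)    # ValueError: Shape must be rank 1 but is rank 0
#   tf.reverse(x, axis=[0])  # fix: wrap the scalar axis in a list
# NumPy mirror of the corrected call:
reversed_x = np.flip(x, axis=0)
print(reversed_x)  # [4 3 2 1]
```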