Long Short-Term Memory Network Tutorials and Examples for Beginners

The Long Short-Term Memory (LSTM) network was first introduced
by Hochreiter and Schmidhuber in 1997. It is a variant of the RNN and contains three gates: a forget gate, an input gate and an output gate.
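To make the three gates concrete, here is a minimal NumPy sketch of a single LSTM step. The function name, gate ordering and toy sizes are our own assumptions for illustration, not code from any particular tutorial.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate ordering assumed here: input, forget, output, candidate."""
    z = W @ x + U @ h_prev + b
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])        # input gate: how much new information to write
    f = sigmoid(z[H:2*H])      # forget gate: how much old cell state to keep
    o = sigmoid(z[2*H:3*H])    # output gate: how much of the cell to expose
    g = np.tanh(z[3*H:4*H])    # candidate cell state
    c = f * c_prev + i * g     # new cell state
    h = o * np.tanh(c)         # new hidden state
    return h, c

# toy sizes (hypothetical)
D, H = 3, 4
rng = np.random.default_rng(0)
h, c = lstm_step(rng.normal(size=D),
                 np.zeros(H), np.zeros(H),
                 rng.normal(size=(4 * H, D)),
                 rng.normal(size=(4 * H, H)),
                 np.zeros(4 * H))
print(h.shape, c.shape)  # (4,) (4,)
```

Because the hidden state is an output-gated tanh of the cell, every component of `h` stays in (-1, 1).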

On this page, we collect tutorials and examples on Long Short-Term Memory networks; you can learn how to use this network by following them.

We often use RNN/GRU/LSTM/BiLSTM models to encode a sequence. To get a fixed-size output from these models, we can average their outputs over time or combine them with attention. In this tutorial, we will introduce how to average their outputs.
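As a sketch of the averaging approach, the snippet below mean-pools a batch of RNN outputs over the time axis while ignoring padded steps. The function name `masked_mean` and the shapes are hypothetical, not from the tutorial itself.

```python
import numpy as np

def masked_mean(outputs, lengths):
    """Average RNN outputs over time, ignoring padded time steps.
    outputs: (batch, time, hidden); lengths: (batch,) true sequence lengths."""
    batch, time, hidden = outputs.shape
    lengths = np.asarray(lengths)
    # mask[b, t] is True only for real (non-padded) time steps
    mask = np.arange(time)[None, :] < lengths[:, None]
    summed = (outputs * mask[:, :, None]).sum(axis=1)
    return summed / lengths[:, None]

outs = np.ones((2, 5, 3))
outs[0, 3:] = 0.0                  # padded steps of the first sequence
print(masked_mean(outs, [3, 5]))   # averages only over real steps: all ones
```

Dividing by the true lengths (rather than the padded time dimension) keeps the average correct for short sequences.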

Advanced LSTM is a variation of LSTM proposed in the paper <>. In this tutorial, we will compare it with a conventional LSTM, which will help us understand it.

LSTM is a good method for processing sequences in NLP. However, how long a sequence can it handle effectively? In this tutorial, we will discuss this topic.

Highway Networks were proposed in the paper Highway Networks. Their gating mechanism is inspired by the LSTM. In this tutorial, we will introduce them for machine learning beginners.
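A highway layer computes y = T(x) * H(x) + (1 - T(x)) * x, where T is an LSTM-style sigmoid gate. Here is a minimal NumPy sketch under our own assumptions (tanh transform, identity weights for the toy example):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """y = T(x) * H(x) + (1 - T(x)) * x
    T is the transform gate; (1 - T) acts as the carry gate."""
    h = np.tanh(W_h @ x + b_h)      # transformed input H(x)
    t = sigmoid(W_t @ x + b_t)      # transform gate T(x)
    return t * h + (1.0 - t) * x    # blend transform and carry paths

x = np.array([0.5, -1.0, 2.0])
# a strongly negative gate bias pushes T toward 0, so y ~ x (pure carry)
y = highway_layer(x, np.eye(3), np.zeros(3), np.eye(3), -10 * np.ones(3))
print(np.allclose(y, x, atol=1e-2))  # True
```

The negative transform-gate bias at initialization is what lets very deep highway networks start out close to the identity function.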

LSTMP (LSTM with a Recurrent Projection Layer) is an improvement of the LSTM with peephole connections. In this tutorial, we will introduce this model for LSTM beginners.
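The key idea of LSTMP is that the cell output is projected down to a smaller recurrent state before it is fed back into the next step, shrinking the recurrent weight matrices. The sketch below shows only the projection idea in NumPy (peephole terms omitted); names and sizes are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstmp_step(x, r_prev, c_prev, W, U, b, W_proj):
    """LSTMP step: the cell output h (size H) is projected to a smaller
    recurrent state r (size R < H) that feeds the next step.
    W: (4H, D), U: (4H, R), b: (4H,), W_proj: (R, H)."""
    z = W @ x + U @ r_prev + b
    H = c_prev.shape[0]
    i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
    g = np.tanh(z[3 * H:4 * H])
    c = f * c_prev + i * g
    h = o * np.tanh(c)       # full cell output (size H)
    r = W_proj @ h           # projected recurrent state (size R)
    return r, c

D, H, R = 3, 8, 4            # toy sizes (hypothetical)
rng = np.random.default_rng(1)
r, c = lstmp_step(rng.normal(size=D), np.zeros(R), np.zeros(H),
                  rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, R)),
                  np.zeros(4 * H), rng.normal(size=(R, H)))
print(r.shape, c.shape)  # (4,) (8,)
```

Note that the recurrent matrix U is (4H, R) instead of (4H, H), which is where the parameter saving comes from.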

The Tree-LSTM model is widely used in deep learning; it is designed to process tree-structured data. In this tutorial, we will introduce it for deep learning beginners.

To improve the performance of an LSTM model, we can use a stacked LSTM. In this tutorial, we will introduce the stacked LSTM for deep learning beginners.
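In a stacked LSTM, the hidden-state sequence produced by one layer becomes the input sequence of the next. Here is a minimal NumPy sketch, assuming randomly initialized weights and a helper `run_lstm_layer` of our own design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_lstm_layer(seq, H, rng):
    """Run one LSTM layer over seq (time, D); return hidden states (time, H)."""
    D = seq.shape[1]
    W = rng.normal(scale=0.1, size=(4 * H, D))
    U = rng.normal(scale=0.1, size=(4 * H, H))
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    outs = []
    for x in seq:
        z = W @ x + U @ h + b
        i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
        g = np.tanh(z[3 * H:4 * H])
        c = f * c + i * g
        h = o * np.tanh(c)
        outs.append(h)
    return np.stack(outs)

rng = np.random.default_rng(0)
seq = rng.normal(size=(6, 3))          # (time, features)
# stacking: layer 2 consumes the hidden states of layer 1
layer1 = run_lstm_layer(seq, H=8, rng=rng)
layer2 = run_lstm_layer(layer1, H=8, rng=rng)
print(layer1.shape, layer2.shape)  # (6, 8) (6, 8)
```

In TensorFlow or Keras the same effect is obtained by returning the full output sequence of each layer (rather than only the last state) and feeding it to the next LSTM layer.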

To improve LSTM and BiLSTM models, you may implement them with your own TensorFlow code. In this tutorial, we will discuss why the performance of your custom LSTM or BiLSTM model can be worse than tf.nn.dynamic_rnn() and tf.nn.bidirectional_dynamic_rnn().